SQL Server 2014 Standard Edition Large Table performance

I have a question with regard to performance: I currently have a table that runs into query-performance trouble once it holds millions of records.
This is the table:
CREATE TABLE [dbo].[HistorySampleValues]
(
[HistoryParameterID] [int] NOT NULL,
[SourceTimeStamp] [datetime2](7) NOT NULL,
[ArchiveTimestamp] [datetime2](7) NOT NULL CONSTRAINT [DF__HistorySa__Archi__2A164134] DEFAULT (getutcdate()),
[ValueStatus] [int] NOT NULL,
[ArchiveStatus] [int] NOT NULL,
[IntegerValue] [bigint] SPARSE NULL,
[DoubleValue] [float] SPARSE NULL,
[StringValue] [varchar](100) SPARSE NULL,
[EnumNamedSetName] [varchar](100) SPARSE NULL,
[EnumNumericValue] [int] SPARSE NULL,
[EnumTextualValue] [varchar](256) SPARSE NULL
) ON [PRIMARY]
CREATE CLUSTERED INDEX [Source_HistParameterID_Index] ON [dbo].[HistorySampleValues]
(
[HistoryParameterID] ASC,
[SourceTimeStamp] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
GO
It's fairly flat with a clustered index on HistoryParameterID and SourceTimeStamp.
This is the stored procedure that I'm using:
SET NOCOUNT ON;
DECLARE @SqlCommand NVARCHAR(MAX)
SET @SqlCommand = 'SELECT HistoryParameterID,
SourceTimestamp, ArchiveTimestamp, ValueStatus, ArchiveStatus,
IntegerValue, DoubleValue, StringValue, EnumNumericValue,
EnumTextualValue, EnumNamedSetName
FROM [HistorySampleValues] WITH(NOLOCK)
WHERE ([HistoryParameterID] = ' + @ParamIds + '
AND
[SourceTimeStamp] >= ''' + CONVERT(VARCHAR(30), @StartTime, 25) + '''
AND
[SourceTimeStamp] <= ''' + CONVERT(VARCHAR(30), @EndTime, 25) + ''')
AND ValueStatus = ' + @ValueStatus
EXECUTE( @SqlCommand )
As you can see, HistoryParameterID and SourceTimestamp are used as the parameters of the first query. Retrieving 8 hours' worth of records, roughly 28k rows, returns with erratic performance: anywhere from 700 ms to 1.8 seconds.
Will the design scale once it reaches 77 billion records, or is there a strategy I should be using? The version of SQL Server is Standard Edition, so there is no partitioning or columnstore to be used. Or have I reached the maximum performance of SQL Server Standard Edition?
This is the updated stored procedure:
@ParamIds int,
@StartTime datetime,
@EndTime datetime,
@ValueStatus int
AS
BEGIN
SET NOCOUNT ON;
SELECT HistoryParameterID,
SourceTimestamp, ArchiveTimestamp, ValueStatus, ArchiveStatus,
IntegerValue, DoubleValue, StringValue, EnumNumericValue,
EnumTextualValue, EnumNamedSetName
FROM [HistorySampleValues] WITH(NOLOCK)
WHERE
HistoryParameterID = @ParamIds
AND (SourceTimeStamp >= @StartTime AND SourceTimeStamp <= @EndTime)
AND (@ValueStatus = -1 OR ValueStatus = @ValueStatus)
I got a client processing time of 1.396 seconds when retrieving 41,213 rows from a table of roughly 849,600,000 rows.
Is there a way to improve this?

Every time you execute a new SQL command, it has to be compiled by SQL Server. If you re-use the command, you save on compilation time. You need to execute the command directly in the stored procedure, something like this, which allows the plan to be cached and should give you more consistent results.
SELECT ...
WHERE ([HistoryParameterID] = @ParamIds
AND [SourceTimeStamp] >= @StartTime
AND [SourceTimeStamp] <= @EndTime
AND ValueStatus = @ValueStatus)
This also gives you an opportunity to monitor the performance of the command.
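For instance, the whole query can live in a plain parameterized procedure (a minimal sketch; the procedure name GetHistorySampleValues is an assumption, and NOLOCK is kept only because the original query uses it):
CREATE PROCEDURE dbo.GetHistorySampleValues
@ParamIds int,
@StartTime datetime2(7),
@EndTime datetime2(7),
@ValueStatus int
AS
BEGIN
SET NOCOUNT ON;
-- One cached plan serves every call, instead of compiling new dynamic SQL each time
SELECT HistoryParameterID, SourceTimestamp, ArchiveTimestamp, ValueStatus, ArchiveStatus,
IntegerValue, DoubleValue, StringValue, EnumNumericValue, EnumTextualValue, EnumNamedSetName
FROM [HistorySampleValues] WITH(NOLOCK)
WHERE [HistoryParameterID] = @ParamIds
AND [SourceTimeStamp] >= @StartTime
AND [SourceTimeStamp] <= @EndTime
AND ValueStatus = @ValueStatus
END
GO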

Related

How to fix error converting data type varchar to numeric

I'm customizing a legacy ASP.NET MVC application that uses both raw SQL and models. I have some data to be committed that has two decimal places, for example 4,615.38 and 11.51. When I attempt to commit the data I'm getting an error:
Error converting data type varchar to numeric
I need help on how to properly define my table and stored procedure. Should I use any casting in the table definition, or a LEFT function?
In TaxTableController.cs I have:
Models.TaxTable.Zimra zimra = new Models.TaxTable.Zimra();
zimra.TableName = Helpers.SanitiseInput(Convert.ToString(formcollection["TableName"]));
zimra.TierName = Helpers.SanitiseInput(Convert.ToString(formcollection["TierName"]));
zimra.MinSalary = Convert.ToDouble(formcollection["MinSalary"]);
zimra.MaxSalary = Convert.ToDouble(formcollection["MaxSalary"]);
zimra.CreatedOn = DateTime.Now;
zimra.CreatedBy = Convert.ToString(Session["UserId"]);
if (ModelState.IsValid)
{
using (SqlConnection conn = new SqlConnection(Helpers.DatabaseConnect))
{
SqlCommand cmd = new SqlCommand("SaveZimraTable", conn);
cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.AddWithValue("#TableName", zimra.TableName);
cmd.Parameters.AddWithValue("#TierName", zimra.TierName);
cmd.Parameters.AddWithValue("#MaxSalary", zimra.MaxSalary);
cmd.Parameters.AddWithValue("#MinSalary", zimra.MinSalary);
cmd.Parameters.AddWithValue("#CreatedBy", zimra.CreatedBy);
cmd.Parameters.AddWithValue("#CreatedOn", zimra.CreatedOn);
My table definition is as below (using Script Table As > Create):
CREATE TABLE [dbo].[Zimra](
[Id] [int] IDENTITY(1,1) NOT NULL,
[MinSalary] [decimal](18, 2) NOT NULL,
[MaxSalary] [decimal](18, 2) NOT NULL,
[CreatedBy] [varchar](50) NULL,
[TableName] [varchar](50) NULL,
[TierName] [varchar](50) NOT NULL,
[CreatedOn] [datetime] NULL,
CONSTRAINT [PK__Zimra__3214EC07397C51AA] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
And the stored procedure is as below:
CREATE PROCEDURE [dbo].[SaveZimraTable]
@TableName varchar(50),
@TierName varchar(50),
@MinSalary decimal(18,2),
@MaxSalary decimal(18,2),
@CreatedBy varchar(50),
@CreatedOn datetime
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
INSERT INTO Zimra VALUES (@TableName, @TierName, @MinSalary, @MaxSalary, @CreatedBy, @CreatedOn)
END
GO
Always specify the column list in an INSERT statement. Otherwise you must supply values for all the columns (except identity columns), and you must supply them in the table's column order - which is clearly not the case in your statement: your table's columns run Id, MinSalary, MaxSalary, CreatedBy, TableName, TierName, CreatedOn, so the varchar @TableName value lands in the decimal MinSalary column, producing exactly the "error converting data type varchar to numeric". You also run the risk of breaking the statement if you change the table structure - either by adding a column or by reordering columns.
INSERT INTO Zimra (TableName, TierName, MinSalary, MaxSalary, CreatedBy, CreatedOn)
VALUES (@TableName, @TierName, @MinSalary, @MaxSalary, @CreatedBy, @CreatedOn)
Also, as noted in the comments, do not use AddWithValue to add parameters to your command object - instead, use Add:
cmd.Parameters.Add("#TableName", SqlDbType.VarChar).Value = zimra.TableName;
// do the same for all parameters.

Why is a simple SQL script so much slower in Azure SQL

I run a simple insert/update/delete script over 1M rows to crudely check the health of our SQL Server installations. It's 10 times slower in Azure SQL (S6) than on our in-house test server. Has anyone experienced similar problems? Is there a fundamental difference in the way Azure SQL behaves that invalidates the test?
Test Results
Our Internal Server
32GB RAM, Intel Xeon 3.3 GHz
(1000000 rows affected)
Insert Duration = 124 Seconds
Update Duration = 3 Seconds
Delete Duration = 3 Seconds
Azure SQL database 400 DTUs (S6)
(1000000 rows affected)
Insert Duration = 1267 Seconds
Update Duration = 36 Seconds
Delete Duration = 71 Seconds
SQL Server Script
IF (SELECT COUNT(name) FROM sysobjects WHERE name = 'PerfTest') >0
BEGIN
DROP TABLE PerfTest
CREATE TABLE [dbo].[PerfTest](
[PerfID] [int] NOT NULL,
[PerfTX] [varchar](20) COLLATE Latin1_General_CI_AI NULL,
[PerfDT] [datetime] NULL,
[PerfNm] [int] NULL,
CONSTRAINT [PK_PerfTest] PRIMARY KEY CLUSTERED
([PerfID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
END
ELSE
BEGIN
CREATE TABLE [dbo].[PerfTest](
[PerfID] [int] NOT NULL,
[PerfTX] [varchar](20) COLLATE Latin1_General_CI_AI NULL,
[PerfDT] [datetime] NULL,
[PerfNm] [int] NULL,
CONSTRAINT [PK_PerfTest] PRIMARY KEY CLUSTERED
([PerfID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
END
DECLARE
@InsertStart DATETIME,
@InsertEnd DATETIME,
@DeleteStart DATETIME,
@DeleteEnd DATETIME,
@UpdateStart DATETIME,
@UpdateEnd DATETIME,
@PID AS INT,
@PTX AS VARCHAR(20),
@PDT AS DATETIME,
@PNM AS INT
BEGIN
PRINT 'Timings will be at the bottom of this result set'
SET @PID = 0
SET @PNM = 0
SET @PTX = 'ABCDEFGHIJABCDEFGHIJ'
SET @InsertStart = GETDATE()
--Insert Test
WHILE (@PID < 1000000)
BEGIN
SET @PID = @PID + 1
SET @PNM = @PNM + 1
SET @PDT = GETDATE()
INSERT INTO PerfTest VALUES(@PID, @PTX, @PDT, @PNM)
END
SET @InsertEnd = GETDATE()
--Begin Update Test
SET @UpdateStart = GETDATE()
UPDATE PerfTest SET PerfNm = PerfNm + 1
SET @UpdateEnd = GETDATE()
--Begin Delete Test
SET @DeleteStart = GETDATE()
DELETE FROM PerfTest
SET @DeleteEnd = GETDATE()
PRINT 'Insert Duration = ' + CAST(DATEDIFF(SS, @InsertStart, @InsertEnd) AS CHAR(5)) + ' Seconds'
PRINT 'Update Duration = ' + CAST(DATEDIFF(SS, @UpdateStart, @UpdateEnd) AS CHAR(5)) + ' Seconds'
PRINT 'Delete Duration = ' + CAST(DATEDIFF(SS, @DeleteStart, @DeleteEnd) AS CHAR(5)) + ' Seconds'
DROP TABLE PerfTest
END
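For comparison, the same million rows can also be inserted in one set-based statement, which removes the per-row overhead from the measurement (a sketch; the numbers CTE over sys.all_objects is just one convenient way to generate 1,000,000 sequential IDs):
-- Set-based variant of the insert test (a sketch)
;WITH Numbers AS (
SELECT TOP (1000000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
FROM sys.all_objects a CROSS JOIN sys.all_objects b
)
INSERT INTO PerfTest (PerfID, PerfTX, PerfDT, PerfNm)
SELECT n, 'ABCDEFGHIJABCDEFGHIJ', GETDATE(), n
FROM Numbers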
Thanks, people; I'd really appreciate you sharing any experience you have in this area.

Create Order Number using a Stored procedure select and update within a transaction

I need to create a method of generating a unique order number. Each order number must always be greater than the last; however, they should not always be consecutive. The solution must work within a web-farm environment.
I currently have a stored procedure that is responsible for getting a new order number, which has to be seeded so that the order numbers are not consecutive. The application is now moving from a single server to a web farm, and therefore controlling access to the stored procedure via a lock in C# is no longer viable. I have updated the stored procedure as below; however, I am concerned that I am going to introduce blocks/locks/deadlocks when concurrent calls occur.
The table and index structures are as follows
MyAppSetting Table
CREATE TABLE [dbo].[MyAppSetting](
[SettingName] [nvarchar](255) NOT NULL,
[SettingValue] [nvarchar](max) NOT NULL,
CONSTRAINT [PK_MyAppSetting] PRIMARY KEY CLUSTERED
(
[SettingName] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
My Order table
CREATE TABLE [dbo].[MyOrder](
[id] [int] IDENTITY(1,1) NOT NULL,
[OrderNumber] [nvarchar](50) NOT NULL CONSTRAINT [DF_MyOrder_OrderNumber] DEFAULT (N''),
... rest of the table
CONSTRAINT [PK_MyOrder] PRIMARY KEY NONCLUSTERED
(
[id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
The Sql Transaction
Set Transaction Isolation Level Serializable;
Begin Transaction
--Gen random number
SELECT @Random = ROUND(((@HighSeed - @LowSeed - 1) * RAND() + @LowSeed), 0)
--Get Seed
select @Seed = [SettingValue] FROM [MyAppSetting] where [SettingName] = 'OrderNumberSeed'
--Removed concurrency check; not required, as the order number should not exceed the seed number
--select @MaxOrderNumber = Max(OrderNumber) FROM MyOrder
--if @MaxOrderNumber >= @Seed Begin
-- Set @Seed = @MaxOrderNumber
--end
-- New Seed
Set @OrderNumber = @Seed + @Random
Update [MyAppSetting] Set [SettingValue] = @OrderNumber where [SettingName] = 'OrderNumberSeed'
select @OrderNumber
Commit
With the revised SQL you provided, you only select from and update one table. You can do this in a single statement, which avoids the risk of deadlocks and removes the need for an explicit transaction.
Setup:
CREATE TABLE OrderNumber ( NextOrderNumber int)
INSERT OrderNumber(NextOrderNumber) values (123)
Get Next Order Number
DECLARE @MinIncrement int = 5
DECLARE @MaxIncrement int = 50
DECLARE @Random int = ROUND(((@MaxIncrement - @MinIncrement - 1) * RAND() + @MinIncrement), 0)
DECLARE @OrderNumber int
UPDATE OrderNumber
SET @OrderNumber = NextOrderNumber, NextOrderNumber = NextOrderNumber + @Random
SELECT @OrderNumber
I changed LowSeed and HighSeed to MinIncrement and MaxIncrement as I found the term Seed here to be confusing. I would use a table dedicated to tracking the order number to avoid locking anything else on the MyAppSetting table.
I would also challenge the requirement of having an order number that always increases but is not sequential; without it, a GUID would be easier.
An alternative to consider would be to derive the order number from the time somehow, with a last digit to identify the different servers.
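For illustration, a time-derived number might look like this (a sketch only; the trailing server digit and the use of FORMAT/SYSUTCDATETIME are assumptions, and two orders generated by the same server within the same hundredth of a second would still need handling):
-- Sketch: order number derived from UTC time, with a final digit per server
DECLARE @ServerId int = 3 -- assumption: each server in the farm gets a distinct digit 0-9
DECLARE @OrderNumber bigint =
CAST(FORMAT(SYSUTCDATETIME(), 'yyMMddHHmmssff') AS bigint) * 10 + @ServerId
SELECT @OrderNumber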

Weird table's data in SQL server

I have a table in my database called notifications, data will be inserted into this table whenever notifications arrive from other users in my application. The table's schema looks like this:
CREATE TABLE [dbo].[notifications](
[id] [int] IDENTITY(1,1) NOT NULL,
[sender] [char](17) NULL,
[reciever] [char](17) NULL,
[letter_code] [char](15) NULL,
[txt] [nvarchar](max) NULL,
[dateandtime] [datetime2](7) NULL,
[letter_kind] [nvarchar](50) NULL,
[seen] [bit] NULL,
CONSTRAINT [PK_notifications] PRIMARY KEY CLUSTERED
(
[id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
An inserted row should look something like this:
id || sender || reciever || letter_code || txt || dateandtime || letter_kind || seen
============================================================================================================
1 || 2 || 2 || 1734 || message || 2015-10-12 09:59:01 || PS || false
Today I was checking my database's tables, and I noticed something strange had happened. Some strange rows have been inserted into the notifications table:
As you can see the txt column contains a very strange value:
1<div style="display:none">looking to cheat go how many men have affairs</div>
And the other columns contain 1!
Any idea?
PS: I'm sure there is only one place where data is written to this table:
context.InsTotNotification(WebContext.Current.User.UserCode, CheckedUsersForSend.ElementAt(j), LetCods.ElementAt(i),
string.Format("letter kind {0} letter code {1} datetime.",
LetterKind, Convert.ToString(Convert.ToDouble(LetCods.ElementAt(i).Substring(4, 11)), CultureInfo.InvariantCulture)), DateTime.Now, LetterKind);
Update: There's no form that allows users to input this data; the rows are written by the backend, not by users.
Update 2: I'm using Entity Framework Database First, and InsTotNotification is a method on my context that calls a stored procedure:
[Invoke]
public string InsTotNotification(string sender, string reciever,string letter_code,string Txt,DateTime dateandtime,string Letter_kind)
{
var MQuery = ObjectContext.InsTo_Notification(sender, reciever, letter_code, Txt, dateandtime, Letter_kind).FirstOrDefault();
return "Ok";
}
And here's the sp:
SET ANSI_NULLS OFF
GO
SET QUOTED_IDENTIFIER OFF
GO
ALTER PROCEDURE [dbo].[InsTo_Notification]
@Sender char(17),
@Reciever char(17),
@Letter_code char(15),
@Txt nvarchar(MAX),
@DateandTime datetime2,
@Letter_kind nvarchar(50)
AS
BEGIN TRANSACTION InsertNotifications
Declare @T1 int
INSERT notifications (sender, reciever, letter_code, txt, dateandtime, letter_kind)
values (@Sender, @Reciever, @Letter_code, @Txt, @DateandTime, @Letter_kind)
SELECT @T1 = @@ERROR
--------------------------
if (@T1 = 0)
BEGIN
COMMIT TRANSACTION InsertNotifications
SELECT @Letter_code as LetterNo
END
ELSE
BEGIN
ROLLBACK TRANSACTION InsertNotifications
SELECT 'NO' as 'It has Problem'
END
Update 3: There are also these kinds of rows in the table:
Notice that the text نامه PS به شماره 11968 به شما ارجاع داده شد ("letter PS number 11968 has been referred to you") in the selected row is the actual value of the txt field.
Definitely SQL injection...
This is coming from a web application, or something that can be reached from the web, right?
Your stored procedure takes any character input without proper validation.
Try checking whether the parameters contain "<", ">", reserved SQL Server terms/statements, or other unwanted characters.
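A minimal sketch of such a check at the top of the procedure (the pattern list is purely illustrative; note the INSERT itself is parameterized, so this filters stored markup rather than preventing classic injection):
-- Sketch: reject obviously suspicious input before inserting (illustrative patterns only)
IF @Txt LIKE '%<%' OR @Txt LIKE '%>%' OR @Txt LIKE '%href%'
BEGIN
RAISERROR('Suspicious content in @Txt', 16, 1)
RETURN
END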

Updates and Inserts can cause index fragmentation even if you rollback the transaction

I am aware that some causes of index fragmentation are:
Non Sequential inserts – when doing a non-sequential insert, SQL Server moves ~50% of data from the old page to the newly allocated page. This would result in a page split, with each page having ~50% of data from the old page.
Updates to an existing row value with a larger value, which doesn’t fit on the same page
I have heard that even if you roll back the transaction the fragmentation remains, but I could not find documentation for that.
Does anybody have documentation for that, or a script to prove it?
Today I did some tests, and the results were not exactly what I would expect.
The environment:
Microsoft SQL Server 2008 (SP2) - 10.0.4000.0 (X64) Sep 16 2010 19:43:16 Copyright (c) 1988-2008 Microsoft Corporation Enterprise Edition (64-bit) on Windows NT 6.0 (Build 6002: Service Pack 2)
First of all I looked for a table that already has fragmentation.
Surprisingly, inside my DBA database I found a table called tableSizeBenchmark.
USE [DBA]
GO
CREATE TABLE [dbo].[tableSizeBenchmark](
[lngID] [bigint] IDENTITY(1,1) NOT FOR REPLICATION NOT NULL,
[dbName] [varchar](100) NOT NULL,
[tableName] [varchar](100) NOT NULL,
[creationDate] [smalldatetime] NOT NULL,
[numberOfRows] [bigint] NULL,
[spaceUsedMb] [numeric](18, 0) NULL,
CONSTRAINT [PK_tableSizeBenchmark] PRIMARY KEY CLUSTERED
(
[lngID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
ON [PRIMARY]
) ON [PRIMARY]
GO
USE [DBA]
GO
CREATE UNIQUE NONCLUSTERED INDEX [UIXtableSizeBenchmark] ON [dbo].[tableSizeBenchmark]
(
[dbName] ASC,
[tableName] ASC,
[creationDate] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON,
ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
This is the level of fragmentation BEFORE doing any test:
You need to create these two procedures in order to carry out the same test.
Basically, I used a random string generator and a random number generator because I wanted to insert 10,000 records, see how they make the fragmentation worse, and later ROLLBACK the transaction and see the truth: whether the fragmentation remains or goes away.
--DROP PROCEDURE GetRandomString
--GO
--DROP PROCEDURE GetRandomNumber
--GO
create procedure GetRandomString (@STR VARCHAR(100) OUTPUT)
as
begin
-- generates a random string
-- marcelo miorelli
-- 01-oct-2014
-- one of the other features that makes this more flexible:
-- By repeating blocks of characters in @CharPool,
-- you can increase the weighting on certain characters so that they are more likely to be chosen.
DECLARE @z INT
, @i INT
, @MIN_LENGTH INT
, @MAX_LENGTH INT
DECLARE @CharPool VARCHAR(255)
DECLARE @RandomString VARCHAR(255)
DECLARE @PoolLength INT
SELECT @MIN_LENGTH = 20
SELECT @MAX_LENGTH = 100
--SET @z = RAND() * (@MAX_LENGTH - @MIN_LENGTH + 1) + @MIN_LENGTH
SET @z = 50
-- define allowable characters explicitly - easy to read this way and easy to
-- omit easily confused chars like l (ell) and 1 (one) or 0 (zero) and O (oh)
SET @CharPool =
'abcdefghijkmnopqrstuvwxyzABCDEFGHIJKLMNPQRSTUVWXYZ23456789.,-_!$@#%^&*'
SET @CharPool =
'ABCDEFGHIJKLMNPQRSTUVWXYZ'
SET @PoolLength = Len(@CharPool)
SET @i = 0
SET @RandomString = ''
WHILE (@i < @z) BEGIN
-- the +1 keeps the index in 1..@PoolLength; SUBSTRING at position 0 returns ''
SELECT @RandomString = @RandomString +
SUBSTRING(@CharPool, CONVERT(int, RAND() * @PoolLength) + 1, 1)
SELECT @i = @i + 1
END
SELECT @STR = @RandomString
end
GO
create procedure GetRandomNumber (@number int OUTPUT)
as
begin
-- generate random numbers
-- marcelo miorelli
-- 01-oct-2014
DECLARE @maxval INT, @minval INT
select @maxval = 10000, @minval = 500
SELECT @number = CAST(((@maxval + 1) - @minval) *
RAND(CHECKSUM(NEWID())) + @minval AS INT)
end
go
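For example, the two procedures can be exercised like this (a quick sketch):
DECLARE @s VARCHAR(100), @n INT
EXEC GetRandomString @STR = @s OUTPUT
EXEC GetRandomNumber @number = @n OUTPUT
SELECT @s AS RandomString, @n AS RandomNumber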
After you have created the procedures above, see below the code that I have used to run this test:
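In outline (a sketch reconstructed from the description above): open a transaction, insert 10,000 rows of random data, and leave the transaction open so the fragmentation can be inspected before deciding whether to ROLLBACK.
BEGIN TRANSACTION
DECLARE @j INT = 0, @dbName VARCHAR(100), @tableName VARCHAR(100), @rows INT
WHILE (@j < 10000)
BEGIN
EXEC GetRandomString @STR = @dbName OUTPUT
EXEC GetRandomString @STR = @tableName OUTPUT
EXEC GetRandomNumber @number = @rows OUTPUT
-- random keys make the inserts non-sequential in the unique index
INSERT INTO dbo.tableSizeBenchmark (dbName, tableName, creationDate, numberOfRows, spaceUsedMb)
VALUES (@dbName, @tableName, GETDATE(), @rows, @rows)
SET @j = @j + 1
END
-- transaction intentionally left open here; the ROLLBACK TRANSACTION comes later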
So, the result after running the above script, but BEFORE ROLLBACK OR COMMIT, while the transaction is still open:
Please note that the fragmentation has INCREASED.
Will this fragmentation REMAIN or DISAPPEAR after we roll back this transaction?
Let me also post here the script that I use to see the fragmentation level:
SELECT object_id AS ObjectID,
object_NAME (Object_id) as Table_NAME,
index_id AS IndexID,
avg_fragmentation_in_percent AS PercentFragment,
fragment_count AS TotalFrags,
avg_fragment_size_in_pages AS PagesPerFrag,
page_count AS NumPages
FROM sys.dm_db_index_physical_stats(DB_ID('dba'),
NULL, NULL, NULL , 'DETAILED')
WHERE OBJECT_ID = OBJECT_ID('tableSizeBenchmark')
and avg_fragmentation_in_percent > 0
and the results of this experiment:
As you can see, the fragmentation levels went back to the original situation, before the transaction.
Hope this helps
Marcelo
