I have a table of call data that has grown to 1.3 billion rows and 173 gigabytes of data. There are two columns that we no longer use: one is char(15) and the other is varchar(24). Both have been getting inserted with NULL for some time. I've been putting off removing the columns because I am unsure of the implications. We have limited space on both the drive with the database and the drive with the transaction log.
In addition, I found this post saying the space would not be available until a DBCC DBREINDEX was done. I see this as both good and bad. It's good because dropping the columns should be very fast and not involve a lot of logging, but bad because the space will not be reclaimed. Will newly inserted records take up less space, though? That would be fine in my case, as we prune the old data after 18 months, so the space will gradually decrease.
If we did a DBCC DBREINDEX (or ALTER INDEX REBUILD), would that actually help, since the columns are not part of any index? Would that take up log space or lock the table so it could not be used?
I found your question interesting, so I decided to model it on a development database.
SQL Server 2008; database size 400 MB, log 2.4 GB.
I assume, from the link provided, that you created a table with a clustered index:
CREATE TABLE [dbo].[big_table](
    [recordID] [int] IDENTITY(1,1) NOT NULL,
    [col1] [varchar](50) NOT NULL,
    [col2] [char](15) NULL,
    [col3] [varchar](24) NULL,
    CONSTRAINT [PK_big_table] PRIMARY KEY CLUSTERED
    (
        [recordID] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
This table consists of 12 million records.
sp_spaceused big_table, true
name       rows       reserved    data        index_size  unused
---------  ---------  ----------  ----------  ----------  -------
big_table  12031303   399240 KB   397760 KB   1336 KB     144 KB
Drop the columns.
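Presumably something like this, using the model table's unused columns:
ALTER TABLE dbo.big_table
DROP COLUMN col2, col3;
-- a metadata-only change: existing rows are not rewritten at this point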
sp_spaceused big_table, true
Table size stays the same. Database and log size remained the same.
Add 3 million rows to the table.
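Any bulk insert along these lines would do; the filler literal and row source here are purely illustrative:
INSERT INTO dbo.big_table (col1)
SELECT TOP (3000000) 'filler row'
-- col2 and col3 no longer exist at this point, so only col1 is supplied
FROM sys.all_objects a
CROSS JOIN sys.all_objects b;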
name       rows       reserved    data        index_size  unused
---------  ---------  ----------  ----------  ----------  -------
big_table  15031303   511816 KB   509904 KB   1752 KB     160 KB
Database size is now 500 MB, log 3.27 GB.
After
DBCC DBREINDEX( big_table )
the log is the same size, but the database size increased to 866 MB:
name       rows       reserved    data        index_size  unused
---------  ---------  ----------  ----------  ----------  -------
big_table  12031303   338376 KB   337704 KB   568 KB      104 KB
Again add 3 million rows to see if they go into the available space within the database.
Database size is the same, log 3.96 GB, which clearly shows they do.
Hope it makes sense.
No, newly inserted records would not take up less space. I was looking at this exact issue earlier today as it happens.
Test table
CREATE TABLE T
(
id int identity primary key,
FixedWidthColToBeDropped char(10),
VariableWidthColToBeDropped varchar(10),
FixedWidthColToBeWidened char(7),
FixedWidthColToBeShortened char(20),
VariableWidthColToBeWidened varchar(7),
VariableWidthColToBeShortened varchar(20),
VariableWidthColWontBeAltered varchar(20)
)
Offsets Query
WITH T
AS (SELECT ISNULL(LEFT(MAX(name), 30), 'Dropped') AS column_name,
MAX(column_id) AS column_id,
ISNULL(MAX(case
when column_id IS NOT NULL THEN max_inrow_length
END), MAX(max_inrow_length)) AS max_inrow_length,
leaf_offset,
CASE
WHEN leaf_offset < 0 THEN SUM(CASE
WHEN column_id IS NULL THEN 2 ELSE 0
END)
ELSE MAX(max_inrow_length) - MAX(CASE
WHEN column_id IS NULL THEN 0
ELSE max_inrow_length
END)
END AS wasted_space
FROM sys.system_internals_partition_columns pc
JOIN sys.partitions p
ON p.partition_id = pc.partition_id
LEFT JOIN sys.columns c
ON column_id = partition_column_id
AND c.object_id = p.object_id
WHERE p.object_id = object_id('T')
GROUP BY leaf_offset)
SELECT CASE
WHEN GROUPING(column_name) = 0 THEN column_name
ELSE 'Total'
END AS column_name,
column_id,
max_inrow_length,
leaf_offset,
SUM(wasted_space) AS wasted_space
FROM T
GROUP BY ROLLUP ((column_name,
column_id,
max_inrow_length,
leaf_offset))
ORDER BY GROUPING(column_name),
CASE
WHEN leaf_offset > 0 THEN leaf_offset
ELSE 10000 - leaf_offset
END
Initial State of the Table
column_name column_id max_inrow_length leaf_offset wasted_space
------------------------------ ----------- ---------------- ----------- ------------
id 1 4 4 0
FixedWidthColToBeDropped 2 10 8 0
FixedWidthColToBeWidened 4 7 18 0
FixedWidthColToBeShortened 5 20 25 0
VariableWidthColToBeDropped 3 10 -1 0
VariableWidthColToBeWidened 6 7 -2 0
VariableWidthColToBeShortened 7 20 -3 0
VariableWidthColWontBeAltered 8 20 -4 0
Total NULL NULL NULL 0
Now make some changes
ALTER TABLE T
ALTER COLUMN FixedWidthColToBeWidened char(12)
ALTER TABLE T
ALTER COLUMN FixedWidthColToBeShortened char(10)
ALTER TABLE T
ALTER COLUMN VariableWidthColToBeWidened varchar(12)
ALTER TABLE T
ALTER COLUMN VariableWidthColToBeShortened varchar(10)
ALTER TABLE T
DROP COLUMN FixedWidthColToBeDropped, VariableWidthColToBeDropped
Look at the table again
column_name column_id max_inrow_length leaf_offset wasted_space
------------------------------ ----------- ---------------- ----------- ------------
id 1 4 4 0
Dropped NULL 10 8 10
Dropped NULL 7 18 7
FixedWidthColToBeShortened 5 10 25 10
FixedWidthColToBeWidened 4 12 45 0
Dropped NULL 10 -1 2
VariableWidthColToBeWidened 6 12 -2 0
Dropped NULL 20 -3 2
VariableWidthColWontBeAltered 8 20 -4 0
VariableWidthColToBeShortened 7 10 -5 0
Total NULL NULL NULL 31
Insert a row and look at the page
INSERT INTO T
([FixedWidthColToBeWidened]
,[FixedWidthColToBeShortened]
,[VariableWidthColToBeWidened]
,[VariableWidthColToBeShortened])
VALUES
('1','2','3','4')
DECLARE @DBCCPAGE nvarchar(100)
SELECT TOP 1 @DBCCPAGE = 'DBCC PAGE(''tempdb'',' + CAST(file_id AS VARCHAR) + ',' + CAST(page_id AS VARCHAR) + ',3)'
FROM T
CROSS APPLY sys.fn_PhysLocCracker(%%physloc%%)
DBCC TRACEON(3604)
EXEC (@DBCCPAGE)
Returns
Record Type = PRIMARY_RECORD Record Attributes = NULL_BITMAP VARIABLE_COLUMNS
Record Size = 75
Memory Dump #0x000000000D5CA060
0000000000000000: 30003900 01000000 26a44500 00000000 †0.9.....&¤E.....
0000000000000010: ffffffff ffffff7f 00322020 20202020 †ÿÿÿÿÿÿÿ..2
0000000000000020: 20202003 00000000 98935c0d 00312020 † ......\..1
0000000000000030: 20202020 20202020 200a0080 00050049 † ......I
0000000000000040: 004a004a 004a004b 003334†††††††††††††.J.J.J.K.34
Slot 0 Column 1 Offset 0x4 Length 4 Length (physical) 4
id = 1
Slot 0 Column 67108868 Offset 0x8 Length 0 Length (physical) 10
DROPPED = NULL
Slot 0 Column 67108869 Offset 0x0 Length 0 Length (physical) 0
DROPPED = NULL
Slot 0 Column 67108865 Offset 0x12 Length 0 Length (physical) 7
DROPPED = NULL
Slot 0 Column 67108866 Offset 0x19 Length 0 Length (physical) 20
DROPPED = NULL
Slot 0 Column 6 Offset 0x49 Length 1 Length (physical) 1
VariableWidthColToBeWidened = 3
Slot 0 Column 67108867 Offset 0x0 Length 0 Length (physical) 0
DROPPED = NULL
Slot 0 Column 8 Offset 0x0 Length 0 Length (physical) 0
VariableWidthColWontBeAltered = [NULL]
Slot 0 Column 4 Offset 0x2d Length 12 Length (physical) 12
FixedWidthColToBeWidened = 1
Slot 0 Column 5 Offset 0x19 Length 10 Length (physical) 10
FixedWidthColToBeShortened = 2
Slot 0 Column 7 Offset 0x4a Length 1 Length (physical) 1
VariableWidthColToBeShortened = 4
Slot 0 Offset 0x0 Length 0 Length (physical) 0
KeyHashValue = (010086470766)
You can see the dropped (and altered) columns are still consuming space even though the table was actually empty when the schema was changed.
The impact of the dropped columns in your case will be 15 bytes wasted for the char one and 2 bytes for the varchar one, unless the varchar is the last column in the variable section, in which case it will take up no space.
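If you do decide to reclaim that space immediately rather than waiting for the 18-month pruning, the cleanup happens when the clustered index is rebuilt, as the first answer demonstrated with DBCC DBREINDEX. A sketch using that answer's model table (ALTER TABLE dbo.big_table REBUILD is the SQL Server 2008+ equivalent):
ALTER INDEX PK_big_table ON dbo.big_table REBUILD;
-- fully rewrites the clustered index, so expect significant transaction log usage
-- and (as the first answer showed) temporary growth of the data file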
Related
I am trying to get the total number of trips a meter has undergone based on a set of records.
-- MeterRecord Table
Id IVoltage ICurrent
--------------------
1 340 0 <<<-- (Trip is zero at this point)
2 288 1
3 312 2
4 236 1
5 343 0 <<<-- (Trip is one at this point)
6 342 0
7 264 1
8 269 0 <<<-- (Trip is two at this point)
Trip is incremented by one only when the 'ICurrent' value returns to zero from a previous non-zero state.
What I have tried, using the COUNT function:
SELECT SUM(IVoltage) AS Sum_Voltage,
COUNT(CASE WHEN ICurrent = 0 THEN 1 ELSE 0 END) AS Trips
FROM MeterRecord
This returns
Sum_Voltage Trips
---------------------
45766 8
What I am trying to achieve, based on the table above:
--MeterRecord View
Sum_Voltage Trips
---------------------
45766 2
Use LAG to determine if you have a trip:
DROP TABLE IF EXISTS #meterRecord
CREATE TABLE #meterRecord
(
Id INT,
IVoltage INT,
ICurrent INT
);
INSERT INTO #meterRecord
VALUES
(1,340,0),
(2,288,1),
(3,312,2),
(4,236,1),
(5,343,0),
(6,342,0),
(7,264,1),
(8,269,0);
WITH cte AS
(
SELECT IVoltage,
CASE WHEN ICurrent = 0 AND LAG(ICurrent,1) OVER(ORDER BY Id) != 0 THEN 1 ELSE 0 END isTrip
FROM #meterRecord
)
SELECT SUM(cte.IVoltage) AS Sum_Voltage,
SUM(isTrip) AS Trips
FROM cte
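The question's annotations ('Trip is one at this point', and so on) suggest a running trip number per row may also be useful. A windowed SUM over the same isTrip flag gives that; this assumes SQL Server 2012+ for the framed window:
WITH cte AS
(
SELECT Id, IVoltage, ICurrent,
CASE WHEN ICurrent = 0 AND LAG(ICurrent,1) OVER(ORDER BY Id) != 0 THEN 1 ELSE 0 END isTrip
FROM #meterRecord
)
SELECT Id, IVoltage, ICurrent,
-- running total of trips up to and including this row
SUM(isTrip) OVER(ORDER BY Id ROWS UNBOUNDED PRECEDING) AS TripNo
FROM cte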
I would like to migrate DDL from Oracle to SQL Server.
I was able to migrate it to a certain extent; however, some items cannot be migrated.
Oracle DDL:
CREATE TABLE ExampleTbl
(
code CHAR(3) NOT NULL,
code2 CHAR(3) NOT NULL,
username VARCHAR2(255) NOT NULL,
d DATE,
CONSTRAINT PK_Example PRIMARY KEY (code, code2) USING INDEX
PCTFREE 10
INITRANS 2 -- <-?
MAXTRANS 255 -- <-?
TABLESPACE TBSP01
STORAGE(INITIAL 64K NEXT 1M MINEXTENTS 1 MAXEXTENTS 2147483645 BUFFER_POOL DEFAULT) -- <-?
LOGGING -- <-?
ENABLE -- <-?
)
PCTFREE 10
MAXTRANS 255
TABLESPACE TBSP01
STORAGE(INITIAL 64K NEXT 1M MINEXTENTS 1 MAXEXTENTS 2147483645 BUFFER_POOL DEFAULT) -- <-?
NOCACHE -- <-?
LOGGING
/
COMMENT ON TABLE ExampleTbl IS 'Table comment!'
/
SQL Server DDL:
CREATE TABLE [dbo].[ExampleTbl](
[code] [char](10) NOT NULL,
[code2] [char](10) NOT NULL,
[username] [varchar](255) NOT NULL,
[d] [datetime] NULL,
CONSTRAINT [PK_ExampleTbl] PRIMARY KEY CLUSTERED
(
[code] ASC,
[code2] ASC
)
WITH
(
PAD_INDEX = OFF,
STATISTICS_NORECOMPUTE = OFF,
IGNORE_DUP_KEY = OFF,
ALLOW_ROW_LOCKS = ON,
ALLOW_PAGE_LOCKS = ON,
FILLFACTOR = 90 -- FillFactor = 100 - Oracle.PCTFREE(10)
) ON [TBSP01] -- Oracle.TableSpace
) ON [TBSP01] -- Oracle.TableSpace
GO
EXEC sys.sp_addextendedproperty
@name=N'MS_Description',
@value=N'Table comment!' , -- Oracle.Comment
@level0type=N'SCHEMA',
@level0name=N'dbo',
@level1type=N'TABLE',
@level1name=N'ExampleTbl'
GO
Don't worry about column names.
How do I migrate these?
INITRANS, MAXTRANS, STORAGE, LOGGING, ENABLE, NOCACHE.
And, are there any other problems?
CREATE TABLE Statement
Converting CREATE TABLE statement keywords and clauses:
Oracle                             SQL Server
---------------------------------  ----------
 1  ENABLE constraint attribute    Removed
Storage and physical attributes:
Oracle                                             SQL Server
-------------------------------------------------  ----------
 1  PCTFREE num                                    Removed
 2  PCTUSED num                                    Removed
 3  INITRANS num                                   Removed
 4  MAXTRANS num                                   Removed
 5  COMPRESS [BASIC] | COMPRESS num | NOCOMPRESS   Removed
 6  LOGGING | NOLOGGING                            Removed
 7  SEGMENT CREATION IMMEDIATE | DEFERRED          Removed
 8  TABLESPACE name                                ON name
 9  LOB (column) STORE AS BASICFILE (params)       Removed
10  PARALLEL num | NOPARALLEL                      Removed
11  NOCACHE                                        Removed
12  NOMONITORING                                   Removed
STORAGE clause:
Oracle                                      SQL Server
------------------------------------------  ----------
 1  INITIAL num                             Removed
 2  NEXT num                                Removed
 3  MINEXTENTS num                          Removed
 4  MAXEXTENTS num | UNLIMITED              Removed
 5  PCTINCREASE num                         Removed
 6  FREELISTS num                           Removed
 7  FREELIST GROUPS num                     Removed
 8  BUFFER_POOL DEFAULT | KEEP | RECYCLE    Removed
 9  FLASH_CACHE DEFAULT | KEEP | NONE       Removed
10  CELL_FLASH_CACHE DEFAULT | KEEP | NONE  Removed
LOB storage clause:
Oracle                               SQL Server
-----------------------------------  ----------
 1  TABLESPACE name                  Removed
 2  DISABLE | ENABLE STORAGE IN ROW  Removed
 3  CHUNK num                        Removed
 4  NOCACHE                          Removed
 5  LOGGING                          Removed
More details: http://www.sqlines.com/oracle-to-sql-server
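One thing worth noting about the TABLESPACE name -> ON name mapping: the target filegroup must already exist in the SQL Server database before your converted CREATE TABLE will run. A sketch of creating it, with the database name and file path purely illustrative:
ALTER DATABASE MyDb ADD FILEGROUP [TBSP01];
ALTER DATABASE MyDb ADD FILE
(NAME = N'TBSP01_1', FILENAME = N'D:\Data\TBSP01_1.ndf')
TO FILEGROUP [TBSP01];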
For the table definition
CREATE TABLE Accounts
(
AccountID INT ,
Filler CHAR(1000)
)
Containing 21 rows (7 for each of the AccountId values 4,6,7).
It has 1 root page and 4 leaf pages
index_depth page_count index_level
----------- -------------------- -----------
2 4 0
2 1 1
The root page looks like
FileId PageId ROW LEVEL ChildFieldId ChildPageId AccountId (KEY) UNIQUIFIER (KEY) KeyHashValue
----------- ----------- ----------- ----------- ------------ ----------- --------------- ---------------- ------------------------------
1 121 0 1 1 119 NULL NULL NULL
1 121 1 1 1 151 6 0 NULL
1 121 2 1 1 175 6 3 NULL
1 121 3 1 1 215 7 1 NULL
The actual distribution of AccountId records over these pages is
AccountID page_id Num
----------- ----------- -----------
4 119 7
6 151 3
6 175 4
7 175 1
7 215 6
The Query
SELECT AccountID
FROM Accounts
WHERE AccountID IN (4,6,7)
Gives the following IO stats
Table 'Accounts'. Scan count 3, logical reads 13
Why?
I thought for each seek it would seek into the first page that might potentially contain that value and then (if necessary) continue along the linked list until it found the first row not equal to the seeked value.
However, that only adds up to 10 page accesses:
4) Root Page -> Page 119 -> Page 151 (Page 151 Contains a 6 so should stop)
6) Root Page -> Page 119 -> Page 151 -> Page 175 (Page 175 Contains a 7 so should stop)
7) Root Page -> Page 175 -> Page 215 (No more pages)
So what accounts for the additional 3?
Full script to reproduce
USE tempdb
SET NOCOUNT ON;
CREATE TABLE Accounts
(
AccountID INT ,
Filler CHAR(1000)
)
CREATE CLUSTERED INDEX ix ON Accounts(AccountID)
INSERT INTO Accounts(AccountID)
SELECT C
FROM (SELECT 4 UNION ALL SELECT 6 UNION ALL SELECT 7) Vals(C)
CROSS JOIN (SELECT TOP (7) 1 FROM master..spt_values) T(X)
DECLARE @AccountID INT
SET STATISTICS IO ON
SELECT @AccountID=AccountID FROM Accounts WHERE AccountID IN (4,6,7)
SET STATISTICS IO OFF
SELECT index_depth,page_count,index_level
FROM
sys.dm_db_index_physical_stats (2,OBJECT_ID('Accounts'), DEFAULT,DEFAULT, 'DETAILED')
SELECT AccountID, P.page_id, COUNT(*) AS Num
FROM Accounts
CROSS APPLY sys.fn_PhysLocCracker(%%physloc%%) P
GROUP BY AccountID, P.page_id
ORDER BY AccountID, P.page_id
DECLARE @index_info TABLE
(PageFID VARCHAR(10),
PagePID VARCHAR(10),
IAMFID TINYINT,
IAMPID INT,
ObjectID INT,
IndexID TINYINT,
PartitionNumber TINYINT,
PartitionID BIGINT,
iam_chain_type VARCHAR(30),
PageType TINYINT,
IndexLevel TINYINT,
NextPageFID TINYINT,
NextPagePID INT,
PrevPageFID TINYINT,
PrevPagePID INT,
PRIMARY KEY (PageFID, PagePID));
INSERT INTO @index_info
EXEC ('DBCC IND ( tempdb, Accounts, -1)' );
DECLARE @DynSQL NVARCHAR(MAX) = 'DBCC TRACEON (3604);'
SELECT @DynSQL = @DynSQL + '
DBCC PAGE(tempdb, ' + PageFID + ', ' + PagePID + ', 3); '
FROM @index_info
WHERE IndexLevel = 1
SET @DynSQL = @DynSQL + '
DBCC TRACEOFF(3604); '
CREATE TABLE #index_l1_info
(FileId INT,
PageId INT,
ROW INT,
LEVEL INT,
ChildFieldId INT,
ChildPageId INT,
[AccountId (KEY)] INT,
[UNIQUIFIER (KEY)] INT,
KeyHashValue VARCHAR(30));
INSERT INTO #index_l1_info
EXEC(@DynSQL)
SELECT *
FROM #index_l1_info
DROP TABLE #index_l1_info
DROP TABLE Accounts
Just to supply the answer in answer form rather than as discussion in the comments...
The additional reads arise due to the read ahead mechanism. This scans the parent pages of the leaf level in case it needs to issue an asynchronous IO to bring the leaf level pages into the buffer cache so they are ready when the range seek reaches them.
It is possible to use trace flag 652 to disable the mechanism (server wide) and verify that the number of reads is now exactly 10 as expected.
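A sketch of that verification (trace flag 652 is undocumented, so treat this as something to try only on a test instance; -1 applies it server-wide):
DBCC TRACEON(652, -1); -- disable read-ahead
SET STATISTICS IO ON;
SELECT AccountID FROM Accounts WHERE AccountID IN (4,6,7);
SET STATISTICS IO OFF;
DBCC TRACEOFF(652, -1); -- re-enable read-ahead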
From what I see in the output of DBCC IND, there is 1 root page (type = 10), 1 key page (type = 2) and four leaf pages (type = 1), a total of 6 pages.
So each scan goes root -> key -> leaf -> … -> final leaf, which gives 4 reads each for values 4 and 7 and 5 reads for value 6: 4 + 4 + 5 = 13.
I tried to figure out how SQL Server stores a tinyint column (which is supposed to be 1 byte long).
-- Create table
CREATE TABLE MyTest.dbo.TempTable
(
Col1 Tinyint NOT NULL
);
-- Fill it up
INSERT INTO dbo.TempTable VALUES (3);
-- Get page info
dbcc ind
(
'MyTest' /*Database Name*/
,'dbo.TempTable' /*Table Name*/
,-1 /*Display information for all pages of all indexes*/
);
-- Get page data
dbcc traceon(3604)
dbcc page
(
'MyTest' /*Database Name*/
,1 /*File ID*/
,182 /*Page ID*/
,3 /*Output mode: 3 - display page header and row details */
)
Here is the result:
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
PAGE: (1:182)
...
...
...
Slot 0 Offset 0x60 Length 9
Record Type = PRIMARY_RECORD Record Attributes = NULL_BITMAP Record Size = 9
Memory Dump #0x000000000545A060
0000000000000000: 10000600 03000100 00†††††††††††††††††.........
Slot 0 Column 1 Offset 0x4 Length 2 Length (physical) 2
Col1 = 3
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
Interpretation:
The actual data row is 10 00 0600 0300 0100 00, interpreted as:
10: Status bits A
00: Status bits B
0600: Position where the number of columns is stored
0300: Tinyint data
0100: Number of columns
00: NULL bitmap
Total bytes: 1 + 1 + 2 + 2 + 2 + 1 = 9 bytes
Comparing with 'Smallint':
Altering 'Col1' type to 'Smallint' (which is 2 bytes long) produced exactly the same result.
Question
Why does SQL Server dedicate 2 bytes to the 'Tinyint' column? Why doesn't it distinguish between 'Tinyint' and 'Smallint' in storage size?
Try looking at the output of DBCC PAGE WITH TABLERESULTS.
When I put in two rows, one with all 0 and one with all 1, I can clearly see the tinyint field is using only one byte:
CREATE TABLE dbo.SpaceTest
(
biggest BIGINT ,
medium INT ,
small SMALLINT ,
tiny TINYINT
)
INSERT INTO dbo.SpaceTest
( biggest, medium, small, tiny )
VALUES ( 0, 0, 0, 0 ),
( 1, 1, 1, 1 )
--Get a list of pages used by the table
DBCC IND('Sandbox', 'SpaceTest',0)
DBCC TRACEON (3604);
DBCC PAGE (Sandbox,1,42823,3) WITH tableresults;
GO
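To filter that output down to the per-column length details, you can capture it into a temp table first. A sketch, relying on the four-column (ParentObject, Object, Field, VALUE) shape that DBCC PAGE ... WITH TABLERESULTS returns:
CREATE TABLE #page_info
(ParentObject VARCHAR(255),
Object VARCHAR(255),
Field VARCHAR(255),
VALUE VARCHAR(8000));
INSERT INTO #page_info
EXEC ('DBCC PAGE (Sandbox,1,42823,3) WITH TABLERESULTS;');
-- the per-column length text appears in the Object column
SELECT Object, Field, VALUE
FROM #page_info
WHERE Object LIKE '%Length (physical)%';
DROP TABLE #page_info;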