SQL Server: Why does 'DBCC PAGE' report that 'Tinyint' is 2 bytes?

I tried to figure out how SQL Server stores a Tinyint column (which is supposed to be 1 byte long).
-- Create table
CREATE TABLE MyTest.dbo.TempTable
(
Col1 Tinyint NOT NULL
);
-- Fill it up
INSERT INTO dbo.TempTable VALUES (3);
-- Get page info
dbcc ind
(
'MyTest' /*Database Name*/
,'dbo.TempTable' /*Table Name*/
,-1 /*Display information for all pages of all indexes*/
);
-- Get page data
dbcc traceon(3604)
dbcc page
(
'MyTest' /*Database Name*/
,1 /*File ID*/
,182 /*Page ID*/
,3 /*Output mode: 3 - display page header and row details */
)
Here is the result:
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
PAGE: (1:182)
...
...
...
Slot 0 Offset 0x60 Length 9
Record Type = PRIMARY_RECORD Record Attributes = NULL_BITMAP Record Size = 9
Memory Dump #0x000000000545A060
0000000000000000: 10000600 03000100 00†††††††††††††††††.........
Slot 0 Column 1 Offset 0x4 Length 2 Length (physical) 2
Col1 = 3
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
Interpretation:
The actual data row is 10 00 0600 0300 0100 00, interpreted as:
10: Status bits A
00: Status bits B
0600: Offset at which the number of columns is stored
0300: Tinyint data
0100: Number of columns
00: NULL bitmap
Total bytes: 1 + 1 + 2 + 2 + 2 + 1 = 9 bytes
Comparing with 'Smallint':
Altering the type of 'Col1' to 'Smallint' (which is 2 bytes long) produced exactly the same result.
Question
Why does SQL Server dedicate 2 bytes to the 'Tinyint' column? Why doesn't it distinguish between 'Tinyint' and 'Smallint' in storage size?

Try looking at the output of DBCC PAGE WITH TABLERESULTS.
When I put in two rows, one with all 0s and one with all 1s, I can clearly see the tinyint field is using only one byte:
CREATE TABLE dbo.SpaceTest
(
biggest BIGINT ,
medium INT ,
small SMALLINT ,
tiny TINYINT
)
INSERT INTO dbo.SpaceTest
( biggest, medium, small, tiny )
VALUES ( 0, 0, 0, 0 ),
( 1, 1, 1, 1 )
--Get a list of pages used by the table
DBCC IND('Sandbox', 'SpaceTest',0)
DBCC TRACEON (3604);
DBCC PAGE (Sandbox,1,42823,3) WITH tableresults;
GO
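If you would rather not hard-code the page number, something like the following sketch finds the first data page automatically (it assumes SQL Server 2012+ for sys.dm_db_database_page_allocations, which is undocumented, so treat it as a test-instance convenience):
-- Find the first data page of dbo.SpaceTest (index_id 0 = heap)
DECLARE @file_id int, @page_id int;
SELECT TOP (1)
@file_id = allocated_page_file_id
, @page_id = allocated_page_page_id
FROM sys.dm_db_database_page_allocations(DB_ID(), OBJECT_ID(N'dbo.SpaceTest'), 0, 1, 'DETAILED')
WHERE page_type_desc = N'DATA_PAGE';
DBCC TRACEON(3604);
DBCC PAGE(0, @file_id, @page_id, 3) WITH TABLERESULTS; -- 0 = current database
GO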

Related

SQL Server: Record size larger than expected

My table consists of 3 columns
| Column Name | Data Type | Size
| Value | real | 4
| LogId | int | 4
| SigId | smallint | 2
One primary key is set for columns LogId, SigId.
The sum of all sizes is 4+4+2=10; however, using sys.dm_db_index_physical_stats I get that the average (and min/max) record size in bytes is 25. Can someone explain? Am I comparing apples and oranges?
The physical record length includes row overhead in addition to the space needed for the actual column values. On my SQL Server instance, I get an average record length of 17 reported with the following table:
CREATE TABLE dbo.Example1(
Value real NOT NULL
, LogId int NOT NULL
, SigId smallint NOT NULL
, CONSTRAINT PK_Example1 PRIMARY KEY CLUSTERED(LogId, SigId)
);
GO
INSERT INTO dbo.Example1 (Value, LogId, SigId) VALUES(1, 2, 3);
GO
SELECT avg_record_size_in_bytes
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.Example1'),1,0,'DETAILED')
WHERE index_level = 0;
GO
The 17 byte record length reported by sys.dm_db_index_physical_stats includes 10 bytes for data, 4 bytes for the record header, 2 bytes for the column count, and 1 byte for the NULL bitmap. See Paul Randal's Anatomy of a record article for details of the record structure.
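To make that arithmetic explicit, here is a minimal sketch of the calculation (assuming, as in the example, three fixed-length NOT NULL columns and no variable-length columns):
-- 4-byte record header + fixed data (4 + 4 + 2) + 2-byte column count
-- + NULL bitmap (1 byte per 8 columns, rounded up) = 17
SELECT 4 + (4 + 4 + 2) + 2 + CEILING(3 / 8.0) AS expected_record_size_in_bytes;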
Below is a script to dump the first clustered index data page using DBCC PAGE, as determined by the undocumented (don't use it in production) sys.dm_db_database_page_allocations table-valued function:
DECLARE
@database_id int = DB_ID()
, @object_id int = OBJECT_ID(N'dbo.Example1')
, @allocated_page_file_id int
, @allocated_page_page_id int;
--get first clustered index data page
SELECT
@allocated_page_file_id = allocated_page_file_id
, @allocated_page_page_id = allocated_page_page_id
FROM sys.dm_db_database_page_allocations(@database_id, @object_id, 1, 1, 'DETAILED')
WHERE
page_type_desc = N'DATA_PAGE'
AND previous_page_page_id IS NULL; --first page of clustered index
--dump record
DBCC TRACEON(3604);
DBCC PAGE(@database_id,@allocated_page_file_id,@allocated_page_page_id,1);
DBCC TRACEOFF(3604);
GO
Here is an excerpt from the results on my instance with the physical record structure fields called out:
DATA:
Slot 0, Offset 0x60, Length 17, DumpStyle BYTE
Record Type = PRIMARY_RECORD Record Attributes = NULL_BITMAP Record Size = 17
Memory Dump #0x0000002262C7A060
0000000000000000: 10000e00 02000000 03000000 803f0300 00 .............?...
| | | | | |null bitmap (1 byte)
| | | | |column count (2 bytes)
| | | |Value column data (4-byte real)
| | |SigId column data (2-byte smallint)
| |LogId column data (4-byte int)
|Record header (2-byte record type and 2 byte offset to null bitmap)
As to why your actual record length is 25 instead of 17 as in this example, the likely cause is that schema changes were made after the table was initially created, as Martin suggested in his comment. If the database has a row-versioning isolation level enabled, there will be additional overhead, as mentioned in Paul's blog post, but I doubt that is the reason here since that overhead would be more than 8 bytes.
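If you want to rule row versioning in or out, a quick check against sys.databases (a sketch; both columns exist in SQL Server 2005 and later) is:
SELECT name, is_read_committed_snapshot_on, snapshot_isolation_state_desc
FROM sys.databases
WHERE database_id = DB_ID();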

is T-SQL processing on 0 faster than 1 when data type is bit?

Some T-SQL developers use 0 by default to indicate that (for example) a user is active, and 1 when the user is passive. The following code (in my example) shows currently active users:
SELECT * FROM USERS WHERE ISACTIVE = 0
My question is: will this query be processed faster than the following one?
SELECT * FROM USERS WHERE ISACTIVE = 1
In the simplest scenario there will be no difference. As with any performance question, the key is to test, so I set up the following table with 1,000,000 randomly distributed rows (500,000 each for 1 and 0).
CREATE TABLE #T (ID INT IDENTITY PRIMARY KEY, Filler CHAR(1000), Active BIT NOT NULL);
INSERT #T (Active)
SELECT Active
FROM ( SELECT TOP 500000 Active = 1
FROM sys.all_objects AS a
CROSS JOIN sys.all_objects AS b
UNION ALL
SELECT TOP 500000 Active = 0
FROM sys.all_objects AS a
CROSS JOIN sys.all_objects AS b
) AS t
ORDER BY NEWID();
The next step is a simple test of how long a clustered index scan takes on each:
SET STATISTICS TIME ON;
SET STATISTICS IO ON;
SELECT COUNT(Filler) FROM #T WHERE Active = 1;
SELECT COUNT(Filler) FROM #T WHERE Active = 0;
The execution plan is exactly the same for both queries, as is the IO:
Scan count 5, logical reads 143089, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Looking at elapsed time over 10 runs (not really enough, but the picture is fairly clear), the results were (in ms):
Active = 1 Active = 0
---------------------------
125 132
86 86
89 61
83 89
88 89
63 64
85 93
126 125
100 117
66 68
--------------------------
91.1 92.4 (Mean)
So a mean difference of approx 1 ms, which is not significant enough to be considered material. So in your case, no, there is no difference.
I then thought perhaps it would make a difference with an index on the column, so added one:
CREATE INDEX IX_T__Active ON #T (Active) INCLUDE (Filler);
And again the results showed no (relevant) difference:
Active = 1 Active = 0
--------------------------
57 55
42 48
56 57
58 55
44 42
46 41
41 42
42 52
43 43
52 59
--------------------------
48.1 49.4
In summary, it does not make a material difference, and I am pretty sure this is the exact kind of premature optimisation that Donald Knuth was referring to.
You can quickly test this in tempdb.
create table #tmp
(
id int, flag bit default(0)
);
DECLARE @max AS INT, @rc AS INT;
SET @max = 200000;
SET @rc = 1;
INSERT INTO #tmp VALUES(1, 0);
WHILE @rc * 2 <= @max
BEGIN
INSERT INTO #tmp SELECT id + @rc, 0 as flag FROM #tmp;
SET @rc = @rc * 2;
END
INSERT INTO #tmp
SELECT id + @rc, 1 as flag FROM #tmp WHERE id + @rc <= @max;
GO
set statistics time on
Go
select * from #tmp where flag = 1
set statistics time off
Go
set statistics time on
Go
select * from #tmp where flag = 0
set statistics time off
Go
Try creating indexes as well; you will see more differences in how a bit column behaves when indexed with different value distributions, as sketched below:
- The result will be the same when the counts of both flag values are equal, with or without an index on the column.
- The result will be the same when the counts of both flag values are equal or close to equal and the column has an index.
- The result will vary when the counts of both flag values are unequal and an index is present on the flag column.
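For instance, a sketch of the index test against the same #tmp table (the index name is illustrative):
-- Add an index on the flag column, then compare timings again
CREATE INDEX ix_tmp_flag ON #tmp (flag);
SET STATISTICS TIME ON;
SELECT * FROM #tmp WHERE flag = 1;
SELECT * FROM #tmp WHERE flag = 0;
SET STATISTICS TIME OFF;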
I don't think this design choice is about performance, as both of your queries will generate the same execution plan. The more important aspect is whether or not you have an index on this column.
There are multiple other reasons to mark inactive users with the value 1.
Some reasons on why I would do so:
1) 0 is the default value of int and bool
In some ORMs (for example, EF6) you don't have to specify any value for the status column and it will be set to 0, so the user will be active by default.
In most systems most of the users will be active. An inactive user is the special case that needs to be covered, not the other way around.
2) Future value considerations
This column might contain different values in future to indicate that user has been suspended, deleted, etc.
It would not make much sense to have
0-inactive, 1-active, 2-suspended, etc.
instead of
0-active, 1-inactive, 2-suspended, etc.
That would allow querying for problematic users with the simple expression status > 0.
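A minimal sketch of that convention (the table and values are illustrative, not from the question):
-- Hypothetical status codes: 0 = active, anything > 0 is a problem state
CREATE TABLE dbo.Users
(
UserId int IDENTITY(1,1) PRIMARY KEY,
Status tinyint NOT NULL DEFAULT (0) -- 0 = active, 1 = inactive, 2 = suspended, ...
);
-- All problematic users with one simple predicate:
SELECT UserId FROM dbo.Users WHERE Status > 0;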

Logical reads for seeks on a non unique clustered index

For the table definition
CREATE TABLE Accounts
(
AccountID INT ,
Filler CHAR(1000)
)
Containing 21 rows (7 for each of the AccountId values 4,6,7).
It has 1 root page and 4 leaf pages
index_depth page_count index_level
----------- -------------------- -----------
2 4 0
2 1 1
The root page looks like
FileId PageId ROW LEVEL ChildFieldId ChildPageId AccountId (KEY) UNIQUIFIER (KEY) KeyHashValue
----------- ----------- ----------- ----------- ------------ ----------- --------------- ---------------- ------------------------------
1 121 0 1 1 119 NULL NULL NULL
1 121 1 1 1 151 6 0 NULL
1 121 2 1 1 175 6 3 NULL
1 121 3 1 1 215 7 1 NULL
The actual distribution of AccountId records over these pages is
AccountID page_id Num
----------- ----------- -----------
4 119 7
6 151 3
6 175 4
7 175 1
7 215 6
The Query
SELECT AccountID
FROM Accounts
WHERE AccountID IN (4,6,7)
Gives the following IO stats
Table 'Accounts'. Scan count 3, logical reads 13
Why?
I thought for each seek it would descend to the first page that might potentially contain that value and then (if necessary) continue along the linked list until it found the first row not equal to the value being sought.
However that only adds up to 10 page accesses
4) Root Page -> Page 119 -> Page 151 (Page 151 Contains a 6 so should stop)
6) Root Page -> Page 119 -> Page 151 -> Page 175 (Page 175 Contains a 7 so should stop)
7) Root Page -> Page 175 -> Page 215 (No more pages)
So what accounts for the additional 3?
Full script to reproduce
USE tempdb
SET NOCOUNT ON;
CREATE TABLE Accounts
(
AccountID INT ,
Filler CHAR(1000)
)
CREATE CLUSTERED INDEX ix ON Accounts(AccountID)
INSERT INTO Accounts(AccountID)
SELECT C
FROM (SELECT 4 UNION ALL SELECT 6 UNION ALL SELECT 7) Vals(C)
CROSS JOIN (SELECT TOP (7) 1 FROM master..spt_values) T(X)
DECLARE @AccountID INT
SET STATISTICS IO ON
SELECT @AccountID=AccountID FROM Accounts WHERE AccountID IN (4,6,7)
SET STATISTICS IO OFF
SELECT index_depth,page_count,index_level
FROM
sys.dm_db_index_physical_stats (2,OBJECT_ID('Accounts'), DEFAULT,DEFAULT, 'DETAILED')
SELECT AccountID, P.page_id, COUNT(*) AS Num
FROM Accounts
CROSS APPLY sys.fn_PhysLocCracker(%%physloc%%) P
GROUP BY AccountID, P.page_id
ORDER BY AccountID, P.page_id
DECLARE @index_info TABLE
(PageFID VARCHAR(10),
PagePID VARCHAR(10),
IAMFID TINYINT,
IAMPID INT,
ObjectID INT,
IndexID TINYINT,
PartitionNumber TINYINT,
PartitionID BIGINT,
iam_chain_type VARCHAR(30),
PageType TINYINT,
IndexLevel TINYINT,
NextPageFID TINYINT,
NextPagePID INT,
PrevPageFID TINYINT,
PrevPagePID INT,
PRIMARY KEY (PageFID, PagePID));
INSERT INTO @index_info
EXEC ('DBCC IND ( tempdb, Accounts, -1)' );
DECLARE @DynSQL NVARCHAR(MAX) = 'DBCC TRACEON (3604);'
SELECT @DynSQL = @DynSQL + '
DBCC PAGE(tempdb, ' + PageFID + ', ' + PagePID + ', 3); '
FROM @index_info
WHERE IndexLevel = 1
SET @DynSQL = @DynSQL + '
DBCC TRACEOFF(3604); '
CREATE TABLE #index_l1_info
(FileId INT,
PageId INT,
ROW INT,
LEVEL INT,
ChildFieldId INT,
ChildPageId INT,
[AccountId (KEY)] INT,
[UNIQUIFIER (KEY)] INT,
KeyHashValue VARCHAR(30));
INSERT INTO #index_l1_info
EXEC(@DynSQL)
SELECT *
FROM #index_l1_info
DROP TABLE #index_l1_info
DROP TABLE Accounts
Just to supply the answer in answer form rather than as discussion in the comments...
The additional reads arise from the read-ahead mechanism. This scans the parent pages of the leaf level in case it needs to issue asynchronous IO to bring the leaf-level pages into the buffer cache so they are ready when the range seek reaches them.
It is possible to use trace flag 652 to disable the mechanism (server-wide) and verify that the number of reads is then exactly 10, as expected.
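A sketch of that verification (trace flag 652 is server-wide, so only try it on a test instance):
-- Disable read-ahead, re-run the query, then re-enable
DBCC TRACEON(652, -1);
SET STATISTICS IO ON;
SELECT AccountID FROM Accounts WHERE AccountID IN (4,6,7); -- expect 10 logical reads
SET STATISTICS IO OFF;
DBCC TRACEOFF(652, -1);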
From what I can see in the output of DBCC IND, there is 1 IAM page (type = 10), 1 index (root) page (type = 2) and four leaf pages (type = 1), a total of 6 pages.
So each scan would go IAM -> root -> leaf -> ... -> final leaf, which gives 4 reads each for the values 4 and 7 and 5 reads for 6, a total of 4 + 4 + 5 = 13.

Considerations when dropping columns in large tables

I have a table of call data that has grown to 1.3 billion rows and 173 gigabytes of data. There are two columns that we no longer use: one is char(15) and the other is varchar(24). They have both been getting inserted with NULL for some time. I've been putting off removing the columns because I am unsure of the implications. We have limited space on both the drive with the database and the drive with the transaction log.
In addition, I found this post saying the space would not be available until a DBCC DBREINDEX was done. I see this as both good and bad. It's good because dropping the columns should be very fast and not involve a lot of logging, but bad because the space will not be reclaimed. Will newly inserted records take up less space, though? That would be fine in my case, as we prune the old data after 18 months, so the space would gradually decrease.
If we did a DBCC DBREINDEX (or ALTER INDEX REBUILD), would that actually help, since the columns are not part of any index? Would that take up log space or lock the table so it could not be used?
I found your question interesting, so decided to model it on a development database.
SQL Server 2008, database size 400 Mb, log 2.4 Gb.
I assume, from the link provided, that you created a table with a clustered index:
CREATE TABLE [dbo].[big_table](
[recordID] [int] IDENTITY(1,1) NOT NULL,
[col1] [varchar](50) NOT NULL,
[col2] [char](15) NULL,
[col3] [varchar](24) NULL,
CONSTRAINT [PK_big_table] PRIMARY KEY CLUSTERED
(
[recordID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
This table consists of 12 million records.
sp_spaceused big_table, true
name-big_table, rows-12031303, reserved-399240 KB, data-397760 KB, index_size-1336 KB, unused-144 KB.
Drop the columns:
sp_spaceused big_table, true
The table size stays the same; the database and log sizes also remained unchanged.
Add 3 million rows to the table:
name-big_table, rows-15031303, reserved-511816 KB, data-509904 KB, index_size-1752 KB, unused-160 KB.
database size 500 Mb, log 3.27 Gb.
After
DBCC DBREINDEX( big_table )
The log is the same size, but the database size increased to 866 Mb:
name-big_table, rows-15031303, reserved-338376 KB, data-337704 KB, index_size-568 KB, unused-104 KB.
Again add 3 million rows to see whether they go into the available space within the database.
The database size is the same, log 3.96 Gb, which clearly shows they do.
Hope it makes sense.
No, newly inserted records would not take up less space. As it happens, I was looking at this exact issue earlier today.
Test table
CREATE TABLE T
(
id int identity primary key,
FixedWidthColToBeDropped char(10),
VariableWidthColToBeDropped varchar(10),
FixedWidthColToBeWidened char(7),
FixedWidthColToBeShortened char(20),
VariableWidthColToBeWidened varchar(7),
VariableWidthColToBeShortened varchar(20),
VariableWidthColWontBeAltered varchar(20)
)
Offsets Query
WITH T
AS (SELECT ISNULL(LEFT(MAX(name), 30), 'Dropped') AS column_name,
MAX(column_id) AS column_id,
ISNULL(MAX(case
when column_id IS NOT NULL THEN max_inrow_length
END), MAX(max_inrow_length)) AS max_inrow_length,
leaf_offset,
CASE
WHEN leaf_offset < 0 THEN SUM(CASE
WHEN column_id IS NULL THEN 2 ELSE 0
END)
ELSE MAX(max_inrow_length) - MAX(CASE
WHEN column_id IS NULL THEN 0
ELSE max_inrow_length
END)
END AS wasted_space
FROM sys.system_internals_partition_columns pc
JOIN sys.partitions p
ON p.partition_id = pc.partition_id
LEFT JOIN sys.columns c
ON column_id = partition_column_id
AND c.object_id = p.object_id
WHERE p.object_id = object_id('T')
GROUP BY leaf_offset)
SELECT CASE
WHEN GROUPING(column_name) = 0 THEN column_name
ELSE 'Total'
END AS column_name,
column_id,
max_inrow_length,
leaf_offset,
SUM(wasted_space) AS wasted_space
FROM T
GROUP BY ROLLUP ((column_name,
column_id,
max_inrow_length,
leaf_offset))
ORDER BY GROUPING(column_name),
CASE
WHEN leaf_offset > 0 THEN leaf_offset
ELSE 10000 - leaf_offset
END
Initial State of the Table
column_name column_id max_inrow_length leaf_offset wasted_space
------------------------------ ----------- ---------------- ----------- ------------
id 1 4 4 0
FixedWidthColToBeDropped 2 10 8 0
FixedWidthColToBeWidened 4 7 18 0
FixedWidthColToBeShortened 5 20 25 0
VariableWidthColToBeDropped 3 10 -1 0
VariableWidthColToBeWidened 6 7 -2 0
VariableWidthColToBeShortened 7 20 -3 0
VariableWidthColWontBeAltered 8 20 -4 0
Total NULL NULL NULL 0
Now make some changes
ALTER TABLE T
ALTER COLUMN FixedWidthColToBeWidened char(12)
ALTER TABLE T
ALTER COLUMN FixedWidthColToBeShortened char(10)
ALTER TABLE T
ALTER COLUMN VariableWidthColToBeWidened varchar(12)
ALTER TABLE T
ALTER COLUMN VariableWidthColToBeShortened varchar(10)
ALTER TABLE T
DROP COLUMN FixedWidthColToBeDropped, VariableWidthColToBeDropped
Look at the table again
column_name column_id max_inrow_length leaf_offset wasted_space
------------------------------ ----------- ---------------- ----------- ------------
id 1 4 4 0
Dropped NULL 10 8 10
Dropped NULL 7 18 7
FixedWidthColToBeShortened 5 10 25 10
FixedWidthColToBeWidened 4 12 45 0
Dropped NULL 10 -1 2
VariableWidthColToBeWidened 6 12 -2 0
Dropped NULL 20 -3 2
VariableWidthColWontBeAltered 8 20 -4 0
VariableWidthColToBeShortened 7 10 -5 0
Total NULL NULL NULL 31
Insert a row and look at the page
INSERT INTO T
([FixedWidthColToBeWidened]
,[FixedWidthColToBeShortened]
,[VariableWidthColToBeWidened]
,[VariableWidthColToBeShortened])
VALUES
('1','2','3','4')
DECLARE @DBCCPAGE nvarchar(100)
SELECT TOP 1 @DBCCPAGE = 'DBCC PAGE(''tempdb'',' + CAST(file_id AS VARCHAR) + ',' + CAST(page_id AS VARCHAR) + ',3)'
FROM T
CROSS APPLY sys.fn_PhysLocCracker(%%physloc%%)
DBCC TRACEON(3604)
EXEC (@DBCCPAGE)
Returns
Record Type = PRIMARY_RECORD Record Attributes = NULL_BITMAP VARIABLE_COLUMNS
Record Size = 75
Memory Dump #0x000000000D5CA060
0000000000000000: 30003900 01000000 26a44500 00000000 †0.9.....&¤E.....
0000000000000010: ffffffff ffffff7f 00322020 20202020 †ÿÿÿÿÿÿÿ..2
0000000000000020: 20202003 00000000 98935c0d 00312020 † ......\..1
0000000000000030: 20202020 20202020 200a0080 00050049 † ......I
0000000000000040: 004a004a 004a004b 003334†††††††††††††.J.J.J.K.34
Slot 0 Column 1 Offset 0x4 Length 4 Length (physical) 4
id = 1
Slot 0 Column 67108868 Offset 0x8 Length 0 Length (physical) 10
DROPPED = NULL
Slot 0 Column 67108869 Offset 0x0 Length 0 Length (physical) 0
DROPPED = NULL
Slot 0 Column 67108865 Offset 0x12 Length 0 Length (physical) 7
DROPPED = NULL
Slot 0 Column 67108866 Offset 0x19 Length 0 Length (physical) 20
DROPPED = NULL
Slot 0 Column 6 Offset 0x49 Length 1 Length (physical) 1
VariableWidthColToBeWidened = 3
Slot 0 Column 67108867 Offset 0x0 Length 0 Length (physical) 0
DROPPED = NULL
Slot 0 Column 8 Offset 0x0 Length 0 Length (physical) 0
VariableWidthColWontBeAltered = [NULL]
Slot 0 Column 4 Offset 0x2d Length 12 Length (physical) 12
FixedWidthColToBeWidened = 1
Slot 0 Column 5 Offset 0x19 Length 10 Length (physical) 10
FixedWidthColToBeShortened = 2
Slot 0 Column 7 Offset 0x4a Length 1 Length (physical) 1
VariableWidthColToBeShortened = 4
Slot 0 Offset 0x0 Length 0 Length (physical) 0
KeyHashValue = (010086470766)
You can see the dropped (and altered) columns are still consuming space even though the table was actually empty when the schema was changed.
The impact of the dropped columns in your case will be 15 bytes wasted for the char one and 2 bytes for the varchar one, unless it is the last column in the variable-length section, in which case it will take up no space.
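As a follow-up on reclaiming the space, a sketch of the two options discussed above (the table name is from the earlier example; run this in a maintenance window):
-- Rebuilding rewrites every row and removes the dropped-column remnants;
-- it is fully logged under the FULL recovery model, so watch the transaction log
ALTER INDEX ALL ON dbo.big_table REBUILD;
-- DBCC CLEANTABLE reclaims space from dropped variable-length columns only
DBCC CLEANTABLE (0, 'dbo.big_table'); -- 0 = current database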
