SQL Server bit column constraint, 1 row = 1, all others 0 - sql-server

I have a bit IsDefault column. Only one row of data within the table may have this bit column set to 1, all the others must be 0.
How can I enforce this?

All versions:
Trigger
Indexed view
Stored proc (eg test on write)
SQL Server 2008: a filtered index
CREATE UNIQUE INDEX IX_foo ON bar (MyBitCol) WHERE MyBitCol = 1

Assuming your PK is a single, numeric column, you could add a computed column to your table:
ALTER TABLE YourTable
ADD IsDefaultCheck AS CASE IsDefault
WHEN 1 THEN -1
WHEN 0 THEN YourPK
END
Then create a unique index on the computed column.
CREATE UNIQUE INDEX IX_DefaultCheck ON YourTable(IsDefaultCheck)

I think the trigger is the best idea if you want to change the old default record to 0 when you insert/update a new one and if you want to make sure one record always has that value (i.e. if you delete the record with the value you would assign it to a different record). You would have to decide on the rules for doing so. These triggers can be tricky because you have to account for multiple records in the inserted and deleted tables. So if 3 records in a batch try to update to become the default record, which one wins?
If you want to make sure the one default record never changes when someone else tries to change it, the filtered index is a good idea.

Different approaches can be taken here, but I think only two are correct. Let's go through them step by step.
We have a Hierarchy table with a Root column. This column tells us which row is currently the starting point. As asked in the question, we want to have only one starting point.
We think that we can do it with:
Constraint
Indexed View
Trigger
Different table and relation
Constraint
In this approach first we need to create function which will do the job.
CREATE FUNCTION [gt].[fnOnlyOneRoot]()
RETURNS BIT
BEGIN
DECLARE @rootAmount TINYINT
DECLARE @result BIT
SELECT @rootAmount=COUNT(1) FROM [gt].[Hierarchy] WHERE [Root]=1
IF @rootAmount=1
set @result=1
ELSE
set @result=0
RETURN @result
END
GO
And then the constraint:
ALTER TABLE [gt].[Hierarchy] WITH CHECK ADD CONSTRAINT [ckOnlyOneRoot] CHECK (([gt].[fnOnlyOneRoot]()=(1)))
Unfortunately this approach is wrong, as the constraint won't let us change any values in the table at all. It requires exactly one root to be marked at all times (an insert with Root=1 throws an exception, and so does an update setting Root=0).
We could change fnOnlyOneRoot to allow zero selected roots, but that is not what we wanted.
Index
A filtered index excludes the rows filtered out by the WHERE clause and enforces a unique constraint on the rest of the data. We have different options here:
- Root can be nullable and we add WHERE Root != 0 AND Root IS NOT NULL
- Root must have a value and we add only WHERE Root != 0
- and different combinations
CREATE UNIQUE INDEX ix_OnyOneRoot ON [gt].[Hierarchy](Root) WHERE Root !=0 and Root is not null
This approach also is not perfect. It enforces a maximum of one Root, but not a minimum. To change the root, we first need to set the previous row to 0 or NULL.
Trigger
We can write two kinds of triggers, and they behave differently:
- Prevent trigger - which won't allow us to put in wrong data
- DoTheJob trigger - which will update the data for us in the background
Prevent trigger
This is basically the same as the constraint: if we force exactly one root, then we cannot insert or update.
CREATE TRIGGER tOnlyOneRoot
ON [gt].[Hierarchy]
AFTER INSERT, UPDATE
AS
DECLARE @rootAmount TINYINT
DECLARE @result BIT
SELECT @rootAmount=COUNT(1) FROM [gt].[Hierarchy] WHERE [Root]=1
IF @rootAmount=1
set @result=1
ELSE
set @result=0
IF @result=0
BEGIN
RAISERROR ('Only one root',0,0);
ROLLBACK TRANSACTION
RETURN
END
GO
DoTheJob trigger
This trigger checks all inserted/updated rows: if more than one Root is passed in, it throws an exception. Otherwise, if exactly one new Root is inserted or updated, the trigger allows the operation and afterwards sets the Root value of all other rows to 0.
CREATE TRIGGER tOnlyOneRootDoTheJob
ON [gt].[Hierarchy]
AFTER INSERT, UPDATE
AS
DECLARE @insertedCount TINYINT
SELECT @insertedCount = COUNT(1) FROM inserted WHERE [Root]=1
if (@insertedCount > 1)
BEGIN
RAISERROR ('Only one root',0,0);
ROLLBACK TRANSACTION
RETURN
END
DECLARE @newRootId INT
SELECT @newRootId = [HierarchyId] FROM inserted WHERE [Root]=1
UPDATE [gt].[Hierarchy] SET [Root]=0 WHERE [HierarchyId] <> @newRootId
GO
This is the solution we wanted to achieve: the only-one-root rule is always met. (An additional trigger for DELETE should also be written.)
Different table and relation
This is, let's say, the more normalized way. We create a new table that is allowed to hold only one row (one way to enforce that is sketched after the table definition below) and join to it.
CREATE TABLE [gt].[HierarchyDefault](
[HierarchyId] INT PRIMARY KEY NOT NULL,
CONSTRAINT FK_HierarchyDefault_Hierarchy FOREIGN KEY (HierarchyId) REFERENCES [gt].[Hierarchy](HierarchyId)
)
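One way to actually cap [gt].[HierarchyDefault] at a single row is a constant column with a check and a unique constraint; a minimal sketch (the [Lock] column name and constraint names are just illustrations, and it assumes the table currently holds at most one row):
ALTER TABLE [gt].[HierarchyDefault]
ADD [Lock] CHAR(1) NOT NULL
CONSTRAINT DF_HierarchyDefault_Lock DEFAULT ('X')  -- every row gets the same value
CONSTRAINT CK_HierarchyDefault_Lock CHECK ([Lock] = 'X')  -- no other value is allowed
CONSTRAINT UQ_HierarchyDefault_Lock UNIQUE  -- so a second row cannot exist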
Will it hurt performance?
With the Root column only:
SET STATISTICS TIME ON;
SELECT [HierarchyId],[ParentHierarchyId],[Root]
FROM [gt].[Hierarchy] WHERE [root]=1
SET STATISTICS TIME OFF;
Result
CPU time = 0 ms, elapsed time = 0 ms.
With join:
SET STATISTICS TIME ON;
SELECT h.[HierarchyId],[ParentHierarchyId],[Root]
FROM [gt].[Hierarchy] h
INNER JOIN [gt].[HierarchyDefault] hd on h.[HierarchyId]=hd.[HierarchyId]
WHERE [root]=1
SET STATISTICS TIME OFF;
Result
CPU time = 0 ms, elapsed time = 0 ms.
Summary
I would use the trigger. There is a bit of magic in the table, but it does all the work under the hood.
Easy table creation:
CREATE TABLE [gt].[Hierarchy](
[HierarchyId] INT PRIMARY KEY IDENTITY(1,1),
[ParentHierarchyId] INT NULL,
[Root] BIT,
CONSTRAINT FK_Hierarchy_Hierarchy FOREIGN KEY (ParentHierarchyId)
REFERENCES [gt].[Hierarchy](HierarchyId)
)

You could apply an Instead of Insert trigger and check the value as it's coming in.
Create Trigger TRG_MyTrigger
on MyTable
Instead of Insert
as
Begin
--If any incoming row is marked as default, clear the flag on the existing default first
If Exists(Select * from inserted where IsDefault = 1)
Begin
Update MyTable Set IsDefault = 0 where IsDefault = 1;
End
--Then perform the actual insert (Columns stands for the real column list)
insert into MyTable(Columns)
select Columns from inserted
End
Alternatively you could apply a filtered unique index on the column (WHERE IsDefault = 1), as shown above.

The accepted answer to the below question is both interesting and relevant:
Constraint for only one record marked as default
"But the serious relational folks will tell you this information
should just be in another table."
Have a separate 1 row table that tells you which record is 'default'. Anon touched on this in his comment.
I think this is the best approach - simple, clean & doesn't require a 'clever' esoteric solution prone to errors or later misunderstanding. You can even drop the IsDefault column.

Related

Missing rows after updating SQL Server index key column

"T-SQL Querying" book (http://www.amazon.com/Inside-Microsoft-Querying-Developer-Reference/dp/0735626030) has an interesting example, where, querying a table under default transaction isolation level during clustered index key column update, you may miss a row or read a row twice. It looks to be acceptable, since updating table/entity key is not a good idea anyway. However, I've updated this example so that the same happens, when you update non-clustered index key column value.
Following is the table structure:
SET NOCOUNT ON;
USE master;
IF DB_ID('TestIndexColUpdate') IS NULL CREATE DATABASE TestIndexColUpdate;
GO
USE TestIndexColUpdate;
GO
IF OBJECT_ID('dbo.Employees', 'U') IS NOT NULL DROP TABLE dbo.Employees;
CREATE TABLE dbo.Employees
(
empid CHAR(900) NOT NULL, -- this column should be big enough, so that 9 rows fit on 2 index pages
salary MONEY NOT NULL,
filler CHAR(1) NOT NULL DEFAULT('a')
);
CREATE INDEX idx_salary ON dbo.Employees(salary) include (empid); -- include empid into index, so that test query reads from it
ALTER TABLE dbo.Employees ADD CONSTRAINT PK_Employees PRIMARY KEY NONCLUSTERED(empid);
INSERT INTO dbo.Employees(empid, salary) VALUES
('A', 1500.00),('B', 2000.00),('C', 3000.00),('D', 4000.00),
('E', 5000.00),('F', 6000.00),('G', 7000.00),('H', 8000.00),
('I', 9000.00);
This is what needs to be done in the first connection (on each update, the row will jump between 2 index pages):
SET NOCOUNT ON;
USE TestIndexColUpdate;
WHILE 1=1
BEGIN
UPDATE dbo.Employees SET salary = 10800.00 - salary WHERE empid = 'I'; -- on each update, "I" employee jumps between 2 pages
END
This is what needs to be done in the second connection:
SET NOCOUNT ON;
USE TestIndexColUpdate;
DECLARE @c INT
WHILE 1 = 1
BEGIN
SELECT salary, empid FROM dbo.Employees
if @@ROWCOUNT <> 9 BREAK;
END
Normally, this query should return the 9 records we inserted in the first code sample. However, very soon I see 8 records being returned. This query reads all of its data from the "idx_salary" index, which is being updated by the previous sample code.
This seems to be quite a lax attitude towards data consistency on SQL Server's part. I would expect some locking coordination when data is being read from an index while its key column is being updated.
Do I interpret this behavior correctly? Does this mean that even non-clustered index keys should not be updated?
UPDATE:
To solve this problem, you only need to enable "snapshots" on the db (READ_COMMITTED_SNAPSHOT ON). No more deadlocking or missing rows. I've tried to summarize all of this here: http://blog.konstantins.net/2015/01/missing-rows-after-updating-sql-server.html
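For reference, a minimal sketch of that database setting, using the test database from above (WITH ROLLBACK IMMEDIATE will kick other sessions out of the database):
ALTER DATABASE TestIndexColUpdate SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;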
UPDATE 2:
This seems to be the very same problem as in this good old article: http://blog.codinghorror.com/deadlocked/
Do I interpret this behavior correctly?
Yes.
Does this mean that even non-clustered index keys should not be updated?
No. You should use a proper isolation level or make the application tolerate the inconsistencies that READ COMMITTED allows.
This issue of missing rows is not limited to clustered indexes. It is caused by moving a row in a b-tree. Clustered and nonclustered indexes are implemented as b-trees with only tiny physical differences between them.
So you are seeing the exact same physical phenomenon. It applies every time your query reads a range of rows from a b-tree. The contents of that range can move around.
Use an isolation level that provides you the guarantees that you need. For read-only transactions the snapshot isolation level is usually a very elegant and total solution to concurrency. It seems to apply to your case.
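As a sketch, assuming the test database from above, snapshot isolation for the reading session could look like this:
ALTER DATABASE TestIndexColUpdate SET ALLOW_SNAPSHOT_ISOLATION ON;
-- then, in the reading connection:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;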
This seems to be quite a lax attitude towards data consistency on SQL Server's part. I would expect some locking coordination when data is being read from an index while its key column is being updated.
This is an understandable request. On the other hand, you specifically requested a low level of isolation. You can dial all the way up to SERIALIZABLE if you want. SERIALIZABLE gives you as-if-serial execution.
Missing rows are just one special case of the many effects that READ COMMITTED allows. It makes no sense to specifically prevent them while allowing all kinds of other inconsistencies.
SET NOCOUNT ON;
USE TestIndexColUpdate;
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
DECLARE @c INT
WHILE 1 = 1
BEGIN
DECLARE @count INT
SELECT @count = COUNT(*) FROM dbo.Employees WITH (INDEX (idx_salary))
WHERE empid > '' AND CONVERT(NVARCHAR(MAX), empid) > '__'
AND salary > 0
if @count <> 9 BREAK;
END

SQL Server fastest way to change data types on large tables

We need to change the data type of about 10 primary keys in our db from numeric(19,0) to bigint. On the smaller tables a simple update of the datatype works just fine but on the larger tables (60-70 million rows) it takes a considerable amount of time.
What is the fastest way to achieve this, preferably without locking the database?
I've written a script that generates the following (which I believe I got from a different SO post)
--Add a new temporary column to store the changed value.
ALTER TABLE query_log ADD id_bigint bigint NULL;
GO
CREATE NONCLUSTERED INDEX IX_query_log_id_bigint ON query_log (id_bigint)
INCLUDE (id); -- the include only works on SQL 2008 and up
-- This index may help or hurt performance, I'm not sure... :)
GO
declare @count int
declare @iteration int
declare @progress int
set @iteration = 0
set @progress = 0
select @count = COUNT(*) from query_log
RAISERROR ('Processing %d records', 0, 1, @count) WITH NOWAIT
-- Update the table in batches of 10000 at a time
WHILE 1 = 1 BEGIN
UPDATE X -- Updating a derived table only works on SQL 2005 and up
SET X.id_bigint = id
FROM (
SELECT TOP 10000 * FROM query_log WHERE id_bigint IS NULL
) X;
IF @@ROWCOUNT = 0 BREAK;
set @iteration = @iteration + 1
set @progress = @iteration * 10000
RAISERROR ('processed %d of %d records', 0, 1, @progress, @count) WITH NOWAIT
END;
GO
--kill the pkey on the old column
ALTER TABLE query_log
DROP CONSTRAINT PK__query_log__53833672
GO
BEGIN TRAN; -- now do as *little* work as possible in this blocking transaction
UPDATE T -- catch any updates that happened after we touched the row
SET T.id_bigint = T.id
FROM query_log T WITH (TABLOCKX, HOLDLOCK)
WHERE T.id_bigint <> T.id;
-- The lock hints ensure everyone is blocked until we do the switcheroo
EXEC sp_rename 'query_log.id', 'id_numeric';
EXEC sp_rename 'query_log.id_bigint', 'id';
COMMIT TRAN;
GO
DROP INDEX IX_query_log_id_bigint ON query_log;
GO
ALTER TABLE query_log ALTER COLUMN id bigint NOT NULL;
GO
/*
ALTER TABLE query_log DROP COLUMN id_numeric;
GO
*/
ALTER TABLE query_log
ADD CONSTRAINT PK_query_log PRIMARY KEY (id)
GO
This works very well for the smaller tables but is extremely slow going for the very large tables.
Note this is in preparation for a migration to Postgres, and the EnterpriseDB Migration Toolkit doesn't seem to understand the numeric(19,0) datatype.
It is not possible to change a primary key without locking. The fastest way with the least impact is to create a new table with the new columns and primary keys, without foreign keys and indexes. Then batch-insert blocks of data in sequential order relative to their primary key(s). When that is finished, add your indexes, then the foreign keys, back. Finally, drop or rename the old table and rename your new table to the name the system expects.
In practice your approach will have to vary based on how many records are inserted, updated, and/or deleted. If you're only inserting, then you can perform the initial load and top off the table just before your swap.
This approach should provide the fastest migration, minimal logs, and very little fragmentation on your table and indexes.
You have to remember that every time you modify a record, the data is modified, indexes are modified, and foreign keys are checked, all within one implicit or explicit transaction. The table and/or row(s) will be locked while the changes are made. Even if your database is set to simple logging, the server will still write all changes to the log files. Updates are actually a delete paired with an insert, so it is not possible to prevent fragmentation during any other process.
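A rough sketch of that new-table-and-swap approach for the question's query_log table; the column list (logged_at), the batch size, and the query_log_new/query_log_old names are assumptions, not the asker's actual schema:
CREATE TABLE dbo.query_log_new (
id bigint NOT NULL,          -- converted key
logged_at datetime NOT NULL  -- plus the remaining columns from the old definition
);
DECLARE @from bigint = 0, @batch bigint = 100000, @max bigint;
SELECT @max = MAX(CAST(id AS bigint)) FROM dbo.query_log;
WHILE @from < @max
BEGIN
-- copy one key range at a time to keep transactions and log growth small
INSERT INTO dbo.query_log_new (id, logged_at)
SELECT CAST(id AS bigint), logged_at
FROM dbo.query_log
WHERE id > @from AND id <= @from + @batch;
SET @from = @from + @batch;
END
-- then add the primary key, indexes and foreign keys on query_log_new, and swap the names:
-- EXEC sp_rename 'dbo.query_log', 'query_log_old';
-- EXEC sp_rename 'dbo.query_log_new', 'query_log';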

Stuck at creating a trigger

I am performing some MSSQL exercises and trying to create a trigger. However, the solution I have strikes me as theoretically correct, yet it is not working.
The aim is to create a trigger for a table that has only two columns. One column is the primary key and is Identity and does not allow null values. The other column is one that ALLOWS NULL values. However, it permits NULL values ONLY FOR A SINGLE ROW in the entire table. Basically a trigger should fire for an insert/update operation on this table which attempts to insert/update the column to a NULL value when there is already an existing NULL value for the column in the table.
This condition I capture in my trigger code as follows:
After Insert, Update
AS
set ANSI_WARNINGS OFF
If ( (select count(NoDupName) from TestUniqueNulls where NoDupName is null) > 1 )
BEGIN
Print 'There already is a row that contains a NULL value, transaction aborted';
ROLLBACK TRAN
END
However, the transaction goes through nonetheless. I am not sure why this is happening and why the trigger is not firing.
Can anybody enlighten me here?
I also have used set ANSI_WARNINGS OFF at the start of the trigger.
count(col) only counts non null values so count(NoDupName) ... where NoDupName is null will always be zero. You would need to check count(*) instead.
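For illustration, a hedged sketch of the corrected check (the trigger name is made up; the table and column come from the question):
CREATE TRIGGER trg_TestUniqueNulls_OneNull
ON dbo.TestUniqueNulls
AFTER INSERT, UPDATE
AS
-- COUNT(*) counts rows regardless of NULLs, so the check now works
IF (SELECT COUNT(*) FROM dbo.TestUniqueNulls WHERE NoDupName IS NULL) > 1
BEGIN
RAISERROR ('There already is a row that contains a NULL value, transaction aborted', 16, 1);
ROLLBACK TRAN;
END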
I realise this is just a practice exercise but an indexed view might be a better mechanism for this.
CREATE VIEW dbo.NoMoreThanOneNull
WITH SCHEMABINDING
AS
SELECT NoDupName
FROM dbo.TestUniqueNulls
WHERE NoDupName IS NULL
GO
CREATE UNIQUE CLUSTERED INDEX ix ON dbo.NoMoreThanOneNull(NoDupName)
Yeah, that's a gotcha. The expression inside the parentheses of the COUNT has to evaluate to non-null, otherwise the row is not counted. So it is safer to use *, 1, or any non-nullable column in the expression. The most commonly encountered expression is *, although you may come across 1 as well. There is no difference between these expressions in terms of performance. However, if you use an expression that can evaluate to null (like a nullable column), your counts and other aggregations can be completely off.
create table nulltest(a int null)
go
insert nulltest(a) values (1), (null), (2)
go
select * from nulltest
select COUNT(*) from nulltest
select COUNT(1) from nulltest
select COUNT(a) from nulltest
go
drop table nulltest

Using a trigger to simulate a second identity column in SQL Server 2005

I have various reasons for needing to implement, in addition to the identity column PK, a second, concurrency safe, auto-incrementing column in a SQL Server 2005 database. Being able to have more than one identity column would be ideal, but I'm looking at using a trigger to simulate this as close as possible to the metal.
I believe I have to use a serializable isolation level transaction in the trigger. Do I go about this like I would use such a transaction in a normal SQL query?
It is a non-negotiable requirement that the business meaning of the second incrementing column remain separated from the behind the scenes meaning of the first, PK, incrementing column.
To put things as simply as I can, if I create JobCards '0001', '0002', and '0003', then delete JobCards '0002' and '0003', the next Jobcard I create must have ID '0002', not '0004'.
Just an idea: if you have 2 "identity" columns, then surely they would be 'in sync' - if not exactly the same value, then they would differ by a constant value. If so, then why not add the "second identity" column as a COMPUTED column, which offsets the primary identity? Or is my logic flawed here?
Edit : As per Martin's comment, note that your calc might need to be N * id + C, where N is the Increment and C the offset / delta - excuse my rusty maths.
For example:
ALTER TABLE MyTable ADD OtherIdentity AS Id * 2 + 1;
Edit
Note that for SQL Server 2012 and later, you can now use an independent sequence to create two or more independently incrementing columns in the same table.
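A minimal sketch of that, with made-up object names (requires SQL Server 2012 or later):
CREATE SEQUENCE dbo.JobCardSeq AS int START WITH 1 INCREMENT BY 1;
CREATE TABLE dbo.JobCard (
JobCardId int IDENTITY(1,1) PRIMARY KEY,                       -- surrogate key
JobCardNo int NOT NULL DEFAULT (NEXT VALUE FOR dbo.JobCardSeq) -- second incrementing column
);
Note that, like IDENTITY, a sequence does not reclaim gaps left by deletes, so on its own it does not satisfy the edited requirement mentioned below.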
Note: OP has edited the original requirement to include reclaiming sequences (noting that identity columns in SQL do not reclaim used ID's once deleted).
I would disallow all the deletes from this table altogether. Instead of deleting, I would mark rows as available or inactive. Instead of inserting, I would first search if there are inactive rows, and reuse the one with the smallest ID if they exist. I would insert only if there are no available rows already in the table.
Of course, I would serialize all inserts and deletes with sp_getapplock.
You can use a trigger to disallow all deletes, it is simpler than filling gaps.
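A minimal sketch of such a delete-blocking trigger (the table and trigger names are illustrative):
CREATE TRIGGER trg_JobCards_NoDelete
ON dbo.JobCards
INSTEAD OF DELETE
AS
-- the delete simply never happens; the caller gets an error instead
RAISERROR ('Deletes are not allowed; mark the row inactive instead.', 16, 1);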
A solution to this issue from "Inside Microsoft SQL Server 2008: T-SQL Querying" is to create another table with a single row that holds the current max value.
CREATE TABLE dbo.Sequence(
val int
)
Then to allocate a range of sufficient size for your insert
CREATE PROC dbo.GetSequence
@val AS int OUTPUT,
@n as int =1
AS
UPDATE dbo.Sequence
SET @val = val = val + @n;
SET @val = @val - @n + 1;
This will block other concurrent attempts to increment the sequence until the first transaction commits.
For a non blocking solution that doesn't handle multi row inserts see my answer here.
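Example usage of the proc above, reserving a block of 5 values (the variable name is arbitrary):
DECLARE @first int;
EXEC dbo.GetSequence @val = @first OUTPUT, @n = 5;
SELECT @first AS FirstReservedValue; -- @first through @first + 4 are now yours to use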
This is probably a terrible idea, but it works in at least a limited use scenario
Just use a regular identity and reseed on deletes.
create table reseedtest (
a int identity(1,1) not null,
name varchar(100)
)
insert reseedtest values('erik'),('john'),('selina')
select * from reseedtest
go
CREATE TRIGGER TR_reseedtest_D ON reseedtest FOR DELETE
AS
BEGIN TRAN
DECLARE @a int
SET @a = (SELECT TOP 1 a FROM reseedtest WITH (TABLOCKX, HOLDLOCK))
--anyone know another way to lock a table besides doing something to it?
DBCC CHECKIDENT(reseedtest, reseed, 0)
DBCC CHECKIDENT(reseedtest, reseed)
COMMIT TRAN
GO
delete reseedtest where a >= 2
insert reseedtest values('katarina'),('david')
select * from reseedtest
drop table reseedtest
This won't work if you are deleting from the "middle of the stack" as it were, but it works fine for deletes from the incrementing end.
Reseeding once to 0 then again is just a trick to avoid having to calculate the correct reseed value.
If you never delete from the table, you could create a view with a column computed with ROW_NUMBER().
Also, a SQL Server identity can get out of sync with a user-generated one, depending on the use of rollbacks.
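A hedged sketch of such a view, assuming a JobCards table with an Id identity column:
CREATE VIEW dbo.JobCardsNumbered
AS
SELECT Id, ROW_NUMBER() OVER (ORDER BY Id) AS SecondId -- dense numbering derived at query time
FROM dbo.JobCards;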

How do you add a NOT NULL Column to a large table in SQL Server?

To add a NOT NULL Column to a table with many records, a DEFAULT constraint needs to be applied. This constraint causes the entire ALTER TABLE command to take a long time to run if the table is very large. This is because:
Assumptions:
The DEFAULT constraint modifies existing records. This means that the db needs to increase the size of each record, which causes it to shift records on full data-pages to other data-pages and that takes time.
The DEFAULT update executes as an atomic transaction. This means that the transaction log will need to be grown so that a roll-back can be executed if necessary.
The transaction log keeps track of the entire record. Therefore, even though only a single field is modified, the space needed by the log will be based on the size of the entire record multiplied by the # of existing records. This means that adding a column to a table with small records will be faster than adding a column to a table with large records even if the total # of records are the same for both tables.
Possible solutions:
Suck it up and wait for the process to complete. Just make sure to set the timeout period to be very long. The problem with this is that it may take hours or days to do depending on the # of records.
Add the column but allow NULL. Afterward, run an UPDATE query to set the DEFAULT value for existing rows. Do not do UPDATE *. Update batches of records at a time or you'll end up with the same problem as solution #1. The problem with this approach is that you end up with a column that allows NULL when you know that this is an unnecessary option. I believe that there are some best practice documents out there that says that you should not have columns that allow NULL unless it's necessary.
Create a new table with the same schema. Add the column to that schema. Transfer the data over from the original table. Drop the original table and rename the new table. I'm not certain how this is any better than #1.
Questions:
Are my assumptions correct?
Are these my only solutions? If so, which one is the best? If not, what else could I do?
I ran into this problem for my work also, and my solution is along the lines of #2.
Here are my steps (I am using SQL Server 2005):
1) Add the column to the table with a default value:
ALTER TABLE MyTable ADD MyColumn varchar(40) DEFAULT('')
2) Add a NOT NULL CHECK constraint with the NOCHECK option. NOCHECK means the constraint is not validated against existing values:
ALTER TABLE MyTable WITH NOCHECK
ADD CONSTRAINT MyColumn_NOTNULL CHECK (MyColumn IS NOT NULL)
3) Update the values incrementally in table:
GO
UPDATE TOP(3000) MyTable SET MyColumn = '' WHERE MyColumn IS NULL
GO 1000
The update statement will only update a maximum of 3000 records at a time, which lets the work be committed in chunks. I have to use "MyColumn IS NULL" because my table does not have a sequential primary key.
GO 1000 will execute the previous batch 1000 times, which covers 3 million records; if you need more, just increase this number. Once no NULL values remain, the remaining batches simply update 0 records.
Here's what I would try:
Do a full backup of the database.
Add the new column, allowing nulls - don't set a default.
Set SIMPLE recovery, so the tran log space can be reused as each batch is committed and checkpointed.
The SQL is: ALTER DATABASE XXX SET RECOVERY SIMPLE
Run the update in batches as you discussed above, committing after each one.
Reset the new column to no longer allow nulls.
Go back to the normal FULL recovery.
The SQL is: ALTER DATABASE XXX SET RECOVERY FULL
Backup the database again.
The use of the SIMPLE recovery model doesn't stop logging, but it significantly reduces its impact. This is because the server can discard the recovery information once each batch is committed and checkpointed.
You could:
Start a transaction.
Grab a write lock on your original table so no one writes to it.
Create a shadow table with the new schema.
Transfer all the data from the original table.
execute sp_rename to rename the old table out.
execute sp_rename to rename the new table in.
Finally, you commit the transaction.
The advantage of this approach is that your readers will be able to access the table during the long process and that you can perform any kind of schema change in the background.
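A rough sketch of that sequence, assuming a shadow table dbo.BigTable_Shadow has already been created with the new schema (all names and the column list here are assumptions):
BEGIN TRAN;
-- TABLOCK + HOLDLOCK holds a shared table lock until COMMIT:
-- writers are blocked, but other readers can continue
INSERT INTO dbo.BigTable_Shadow (Id, ExistingCol, NewColumn)
SELECT Id, ExistingCol, 'default value'
FROM dbo.BigTable WITH (TABLOCK, HOLDLOCK);
EXEC sp_rename 'dbo.BigTable', 'BigTable_Old';
EXEC sp_rename 'dbo.BigTable_Shadow', 'BigTable';
COMMIT TRAN;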
Just to update this with the latest information.
In SQL Server 2012 this can now be carried out as an online operation in the following circumstances
Enterprise Edition only
The default must be a runtime constant
For the second requirement, examples might be a literal constant or a function such as GETDATE() that evaluates to the same value for all rows. A default of NEWID() would not qualify and would still end up updating all rows there and then.
For defaults that qualify SQL Server evaluates them and stores the result as the default value in the column metadata so this is independent of the default constraint which is created (which can even be dropped if no longer required). This is viewable in sys.system_internals_partition_columns. The value doesn't get written out to the rows until next time they happen to get updated.
More details about this here: online non-null with values column add in sql server 2012
Admittedly this is an old question. My colleague recently told me that he was able to do it in one single ALTER TABLE statement on a table with 13.6M rows. It finished within a second in SQL Server 2012. I was able to confirm the same on a table with 8M rows. Has something changed in later versions of SQL Server?
Alter table mytable add mycolumn char(1) not null default('N');
I think this depends on the SQL flavor you are using, but what if you took option 2, and at the very end altered the column to NOT NULL with the default value?
Would it be fast, since it sees all the values are not null?
If you want the column in the same table, you'll just have to do it. Now, option 3 is potentially the best for this because you can still have the database "live" while this operation is going on. If you use option 1, the table is locked while the operation happens and then you're really stuck.
If you don't really care if the column is in the table, then I suppose a segmented approach is the next best. Though, I really try to avoid that (to the point that I don't do it) because then like Charles Bretana says, you'll have to make sure and find all the places that update/insert that table and modify those. Ugh!
I had a similar problem, and went for your option #2.
It takes 20 minutes this way, as opposed to 32 hours the other way!!! Huge difference, thanks for the tip.
I wrote a full blog entry about it, but here's the important sql:
Alter table MyTable
Add MyNewColumn char(10) null default '?';
go
update MyTable set MyNewColumn='?' where MyPrimaryKey between 0 and 1000000
go
update MyTable set MyNewColumn='?' where MyPrimaryKey between 1000000 and 2000000
go
update MyTable set MyNewColumn='?' where MyPrimaryKey between 2000000 and 3000000
go
..etc..
Alter table MyTable
Alter column MyNewColumn char(10) not null;
And the blog entry if you're interested:
http://splinter.com.au/adding-a-column-to-a-massive-sql-server-table
I had a similar problem and I went with modified #3 approach. In my case the database was in SIMPLE recovery mode and the table to which column was supposed to be added was not referenced by any FK constraints.
Instead of creating a new table with the same schema and copying contents of original table, I used SELECT…INTO syntax.
According to Microsoft (http://technet.microsoft.com/en-us/library/ms188029(v=sql.105).aspx)
The amount of logging for SELECT...INTO depends on the recovery model
in effect for the database. Under the simple recovery model or
bulk-logged recovery model, bulk operations are minimally logged. With
minimal logging, using the SELECT… INTO statement can be more
efficient than creating a table and then populating the table with an
INSERT statement. For more information, see Operations That Can Be
Minimally Logged.
The sequence of steps :
1. Move data from the old table to the new one while adding the new column with its default:
SELECT table.*, CAST('default' AS nvarchar(256)) AS new_column
INTO table_copy
FROM table
2. Drop the old table:
DROP TABLE table
3. Rename the newly created table:
EXEC sp_rename 'table_copy', 'table'
4. Create the necessary constraints and indexes on the new table.
In my case the table had more than 100 million rows and this approach completed faster than approach #2 and log space growth was minimal.
1) Add the column to the table with a default value:
ALTER TABLE MyTable ADD MyColumn int default 0
2) Update the values incrementally in the table (same effect as accepted answer). Adjust the number of records being updated to your environment, to avoid blocking other users/processes.
declare @rowcount int = 1
while (@rowcount > 0)
begin
UPDATE TOP(10000) MyTable SET MyColumn = 0 WHERE MyColumn IS NULL
set @rowcount = @@ROWCOUNT
end
3) Alter the column definition to require not null. Run the following at a moment when the table is not in use (or schedule a few minutes of downtime). I have successfully used this for tables with millions of records.
ALTER TABLE MyTable ALTER COLUMN MyColumn int NOT NULL
I would use a CURSOR instead of a single UPDATE. The cursor updates the matching records in a batch, record by record -- it takes time, but it does not lock the table.
If you want to avoid locks, use WAITFOR DELAY.
Also, I am not sure that a DEFAULT constraint on its own changes existing rows.
It is probably the NOT NULL constraint used together with DEFAULT that causes the case described by the author.
If it does change them, add it at the end.
So the pseudocode will look like this:
-- without the NOT NULL constraint -- we will add it at the end
ALTER TABLE table ADD new_column INT DEFAULT 0
DECLARE fillNullColumn CURSOR LOCAL FAST_FORWARD FOR
SELECT
key
FROM
table WITH (NOLOCK)
WHERE
new_column IS NULL
OPEN fillNullColumn
DECLARE
@key INT
FETCH NEXT FROM fillNullColumn INTO @key
WHILE @@FETCH_STATUS = 0 BEGIN
UPDATE
table WITH (ROWLOCK)
SET
new_column = 0 -- default value
WHERE
key = @key
WAITFOR DELAY '00:00:05' -- wait 5 seconds; keep in mind this means updating only 12 rows per minute
FETCH NEXT FROM fillNullColumn INTO @key
END
CLOSE fillNullColumn
DEALLOCATE fillNullColumn
ALTER TABLE table ALTER COLUMN new_column INT NOT NULL
There may still be some syntax errors (the table and column names are placeholders), but I hope this helps to solve your problem.
Good luck!
Vertically segment the table. This means you will have two tables, with the same primary key, and exactly the same number of records... One will be the one you already have, the other will have just the key, and the new Non-Null column (with default value) .
Modify all Insert, Update, and delete code so they keep the two tables in synch... If you want you can create a view that "joins" the two tables together to create a single logical combination of the two that appears like a single table for client Select statements...
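A hedged sketch of that combining view (all table and column names are assumptions):
CREATE VIEW dbo.BigTable_Combined
AS
SELECT b.Id, b.ExistingCol, e.NewColumn -- present the two segments as one logical table
FROM dbo.BigTable AS b
JOIN dbo.BigTable_Extension AS e ON e.Id = b.Id;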
