Is there an easy way to remove an identity from a table in SQL Server 2005?
When I use Management Studio, it generates a script that creates a mirror table without the identity, copies the data, drops the original table, then renames the mirror table, and so on. This script is 5,231 lines long because this table/column has many FK relations.
I'd feel much more comfortable running a simple alter/drop. Any ideas?
EDIT
I think I'm just going to go with the 5,231-line script from Enterprise Manager. However, I'm going to break it up into smaller parts that I can run and control better. This table "behaves" strangely: if you try to delete one row (even one you just inserted, which is not referenced by any FK table), you get this error:
delete MyTable where MyPrimaryKey=1234
Msg 8621, Level 17, State 2, Line 1
The query processor ran out of stack space during query optimization. Please simplify the query.
No doubt, all the FKs. We will halt all access to our application and run in single user mode when we make these schema and related application changes. However, we need this to run fast, and I need an idea of how long it will take. I guess that I'll just have to test, test, test.
If you are on SQL Server 2005 or later, you can do this as a simple metadata change (NB: doesn't require an edition supporting partitioning as I originally stated).
Example code pilfered shamelessly from the workaround by Paul White on this Microsoft Connect Item.
USE tempdb;
GO
-- A table with an identity column
CREATE TABLE dbo.Source
    (row_id INTEGER IDENTITY PRIMARY KEY NOT NULL, data SQL_VARIANT NULL);
GO
-- Some sample data
INSERT dbo.Source (data)
VALUES (CONVERT(SQL_VARIANT, 4)),
       (CONVERT(SQL_VARIANT, 'X')),
       (CONVERT(SQL_VARIANT, {d '2009-11-07'})),
       (CONVERT(SQL_VARIANT, N'áéíóú'));
GO
-- Remove the identity property
BEGIN TRY;
    -- All or nothing
    BEGIN TRANSACTION;
    -- A table with the same structure as the one with the identity column,
    -- but without the identity property
    CREATE TABLE dbo.Destination
        (row_id INTEGER PRIMARY KEY NOT NULL, data SQL_VARIANT NULL);
    -- Metadata switch
    ALTER TABLE dbo.Source SWITCH TO dbo.Destination;
    -- Drop the old object, which now contains no data
    DROP TABLE dbo.Source;
    -- Rename the new object to make it look like the old one
    EXECUTE sp_rename N'dbo.Destination', N'Source', 'OBJECT';
    -- Success
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- Bugger!
    IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
    PRINT ERROR_MESSAGE();
END CATCH;
GO
-- Test that the identity property has indeed gone
INSERT dbo.Source (row_id, data)
VALUES (5, CONVERT(SQL_VARIANT, N'This works!'));
SELECT row_id,
       data
FROM dbo.Source;
GO
-- Tidy up
DROP TABLE dbo.Source;
I don't believe you can directly drop the IDENTITY part of the column. Your best bet is probably to:
add another non-identity column to the table
copy the identity values to that column
drop the original identity column
rename the new column to replace the original column
If the identity column is part of a key or other constraint, you will need to drop those constraints and re-create them after the above operations are complete.
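For illustration, here is a minimal sketch of those steps, assuming a hypothetical table MyTable whose int identity column ID has no constraints on it:
ALTER TABLE MyTable ADD NewID INT NULL;            -- add a non-identity column
GO
UPDATE MyTable SET NewID = ID;                     -- copy the identity values
ALTER TABLE MyTable ALTER COLUMN NewID INT NOT NULL;
ALTER TABLE MyTable DROP COLUMN ID;                -- drop the original identity column
EXEC sp_rename 'MyTable.NewID', 'ID', 'COLUMN';    -- rename the new column to replace the original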
You could add a column to the table that is not an identity column, copy the data, drop the original column, and rename the new column to the old column and recreate the indexes.
Here is a link that shows an example. Still not a simple ALTER, but it is certainly better than 5,231 lines.
Related
I support a data replication product. I have a client who is very frustrated that SQL Server can't have a table with an Identity column that BOTH increments automatically when a row is added without providing a value for that column, and at the same time will accept/use a value when it is provided - and I might add, with both of those things happening continuously at a high rate and across hundreds of tables. They point to other databases that apparently can do this.
Everything I see online and my own experimentation seems to indicate that this simply can't be done in SQL Server, but I wanted to put it out there in case I'm just wrong and missing something. My only advice to them so far has been to switch to a Sequence (instead of Identity) and use it as a default value for the column. I've tested that and it works perfectly like they would want, but they are groaning at the idea of doing that for hundreds of tables. Thanks.
The point of an IDENTITY is that SQL Server is in control of it; you let SQL Server manage the value completely. What you really want is a SEQUENCE as a DEFAULT value.
CREATE TABLE dbo.SomeTable (ID int NOT NULL,
SomeColumn varchar(10));
GO
CREATE SEQUENCE dbo.SomeTableID START WITH 1 INCREMENT BY 1;
GO
ALTER TABLE dbo.SomeTable ADD CONSTRAINT PK_SomeTable PRIMARY KEY CLUSTERED (ID);
ALTER TABLE dbo.SomeTable ADD CONSTRAINT DF_SomeTableID DEFAULT (NEXT VALUE FOR dbo.SomeTableID) FOR ID;
GO
INSERT INTO dbo.SomeTable (SomeColumn)
VALUES ('abc'),('def');
GO
INSERT INTO dbo.SomeTable(ID,SomeColumn)
VALUES(3,'xyz');
GO
--Errors: the sequence generates 3, which is already in use; this is intended.
INSERT INTO dbo.SomeTable (SomeColumn)
VALUES ('abc');
GO
INSERT INTO dbo.SomeTable (SomeColumn)
VALUES ('def'); --4
GO
--Cleanup
DROP TABLE dbo.SomeTable;
DROP SEQUENCE dbo.SomeTableID;
db<>fiddle
I have to change the auto-increment ID to an explicitly defined ID. For this I go to
database -> tables -> mytable -> design. There I set Is Identity (under Identity Specification) to No. But when I click save it throws an error saying:
Saving changes is not permitted. The changes you have made require the following tables to
be dropped and re-created....
Is there any way to do it without dropping the table? I searched this error and found the suggestion to run the following query:
SET IDENTITY_INSERT mytable ON
GO
But when I try to insert from code, it throws an error that
Cannot insert explicit value for identity column in table 'mytable' when IDENTITY_INSERT is set to OFF
Is there any way to get out of this problem?
Once identity, always identity. You cannot change the identity property on a column. Technically, you could use IDENTITY_INSERT to get around it, but this requires setting the option on every single insert you do (this setting doesn't persist over sessions). This is probably not what you want.
Your only alternative, if recreating the table isn't an option, is to create a new column that isn't an identity column, then dropping the old one:
ALTER TABLE MyTable ADD NotAnID INT NULL;
GO
BEGIN TRANSACTION
UPDATE MyTable SET NotAnID = ID;
ALTER TABLE MyTable ALTER COLUMN NotAnID INT NOT NULL;
ALTER TABLE MyTable DROP COLUMN ID;
EXECUTE sp_rename 'MyTable.NotAnID', 'ID', 'COLUMN';
COMMIT;
This assumes your identity column is NOT NULL (as it usually is), that ID is not the primary key, that it isn't participating in foreign key constraints, and that you want the new column to take place of the old one.
If ID is the primary key, this exercise gets more involved because you need to drop the primary key constraint and recreate it -- which has its own challenges. Doubly so if it's also the clustered index. In this case, you are probably better off recreating the table anyway, because recreating the clustered index means the whole table is rewritten -- this will almost certainly interrupt production work, so you may as well let SSMS do the tough work for you. To allow that, go to Tools -> Options -> Designers and uncheck "Prevent saving changes that require table re-creation".
Is it possible to add a column to a table at a specific ordinal position in Microsoft SQL Server?
For instance, our tables always have CreatedOn, CreatedBy, LastModifiedOn, LastModifiedBy columns at the "end" of each table definition? I'd like the new column to show up in SSMS above these columns.
If I am scripting all my database changes, is there a way to preserve this order at the end of the table?
FYI, I'm not trying to institute a flame war on if this should even be done. If you want to read about a thread that degenerates quickly into that, here's a good one:
http://www.developersdex.com/sql/message.asp?p=581&r=5014513
You have to create a temp table that mirrors the original table's schema but with the column order that you want, then copy the contents of the original into it. Drop the original and rename the temp table.
This is what SQL Management Studio does behind the scenes.
With a schema sync tool, you can generate these scripts automatically.
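As a rough sketch of that mirror-table approach, assuming a hypothetical table MyTable with int columns a, c, b that should read a, b, c (a real script must also re-create constraints, indexes, and permissions):
BEGIN TRANSACTION;
CREATE TABLE dbo.Tmp_MyTable (a INT, b INT, c INT);   -- desired column order
INSERT INTO dbo.Tmp_MyTable (a, b, c)
    SELECT a, b, c FROM dbo.MyTable;                  -- copy the data across
DROP TABLE dbo.MyTable;                               -- delete the original
EXECUTE sp_rename N'dbo.Tmp_MyTable', N'MyTable', 'OBJECT'; -- rename the temp table
COMMIT TRANSACTION;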
Go into SQL Server Management Studio and "design" an existing table. Insert a column in the middle, right-click in an empty area, and select Generate Change Script...
Now look at the script it creates. It will basically create a temp table with the proper column order, insert the data from the original table, drop the original table, and rename the temp table. This is probably what you'll need to do.
You may also need to uncheck the "Prevent saving changes that require table re-creation" option to allow the designer to generate change scripts.
The answer is yes, it is technically possible, but you will have a headache doing so and it will take a long time to execute and set up.
One: Create/Copy/Drop/Rename
This is actually what SQL Server is doing in the graphical interface: here's an example of the script it is generating and executing when you click the 'save' button after adding a new column to the beginning of a table.
/* To prevent any potential data loss issues, you should review this script in detail before running it outside the context of the database designer.*/
BEGIN TRANSACTION
SET QUOTED_IDENTIFIER ON
SET ARITHABORT ON
SET NUMERIC_ROUNDABORT OFF
SET CONCAT_NULL_YIELDS_NULL ON
SET ANSI_NULLS ON
SET ANSI_PADDING ON
SET ANSI_WARNINGS ON
COMMIT
BEGIN TRANSACTION
GO
CREATE TABLE dbo.Tmp_SomeTable
(
MyNewColumn int NOT NULL,
OriginalIntColumn int NULL,
OriginalVarcharColumn varchar(100) NULL
) ON [PRIMARY]
TEXTIMAGE_ON [PRIMARY]
GO
ALTER TABLE dbo.Tmp_SomeTable SET (LOCK_ESCALATION = TABLE)
GO
SET IDENTITY_INSERT dbo.Tmp_SomeTable ON
GO
IF EXISTS(SELECT * FROM dbo.SomeTable)
    EXEC('INSERT INTO dbo.Tmp_SomeTable (OriginalIntColumn, OriginalVarcharColumn)
        SELECT OriginalIntColumn, OriginalVarcharColumn FROM dbo.SomeTable WITH (HOLDLOCK TABLOCKX)')
GO
SET IDENTITY_INSERT dbo.Tmp_SomeTable OFF
GO
DROP TABLE dbo.SomeTable
GO
EXECUTE sp_rename N'dbo.Tmp_SomeTable', N'SomeTable', 'OBJECT'
GO
COMMIT
Two: ADD COLUMN / UPDATE / DROP COLUMN / RENAME
This method basically involves creating a copy of any existing columns that you want to add to the 'right' of your new column, transferring the data to the new column, then dropping the originals and renaming the new ones. This will play havoc with any indexes or constraints you have, since you have to repoint them. It's technically possible, but again time-consuming both in terms of development and execution.
CREATE TABLE MyTest (a int, b int, d int, e int)
INSERT INTO MyTest (a,b,d,e) VALUES(1,2,4,5)
SELECT * FROM MyTest -- your current table
ALTER TABLE MyTest ADD c int -- add a new column
ALTER TABLE MyTest ADD d_new int -- create copies of the existing columns you want to move
ALTER TABLE MyTest ADD e_new int
UPDATE MyTest SET d_new = d, e_new = e -- transfer data to the new columns
ALTER TABLE MyTest DROP COLUMN d -- remove the originals
ALTER TABLE MyTest DROP COLUMN e
EXEC sp_rename 'MyTest.d_new', 'd', 'COLUMN'; -- rename the new columns
EXEC sp_rename 'MyTest.e_new', 'e', 'COLUMN';
SELECT * FROM MyTest
DROP TABLE MyTest -- clean up the sample
Three: Live with it
This mightily offends my sense of order ... but sometimes, it just isn't worth reshuffling.
To my knowledge there is no supported way to change the column order. Behind the scenes, SQL Server Management Studio does what Jose Basilio said. And if you have a big table, it is impractical to change the column order this way.
You can use a view. With SQL views you can present the columns in any order you like, unaffected by changes to the table's column order.
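For example, a minimal sketch using the audit columns from the question (hypothetical table and view names):
-- Present the columns in the preferred order, regardless of their
-- physical order in the underlying table.
CREATE VIEW dbo.MyTable_Ordered
AS
SELECT MyNewColumn, CreatedOn, CreatedBy, LastModifiedOn, LastModifiedBy
FROM dbo.MyTable;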
I am using SSMS 18. I did it in a simple way:
Opened the design view of the table.
Repositioned the required column by dragging it.
And, as per the answer from KM (second in the thread), unchecked the option that prevents saving changes requiring table re-creation.
Saved the changes.
Done. Check your table now.
TFS 2013 will do this for you automatically.
Add the new column(s) to your table any way you like, and then commit your changes to TFS. From there you can open the table's SQL file in Visual Studio and manually move the order of the columns in the T-SQL CREATE script. Then you can update your target database by using VS's schema compare tool found under Tools > SQL Server > New Schema Comparison. Choose your database project with your change as the source, and the database you want to update as the target. Compare, select the table's script, and Update. VS will drop and add automatically. All your data will be safe, and indexes too.
What I think is simple is to add the column with ALTER TABLE table1 ADD .., and then create a temp table like tmp_table1 from a select like
SELECT col1,col2,col5,col3,col4 into tmp_table1 from table1;
and then drop table1 and rename tmp_table1 to table1; that is it. I hope it will help someone.
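A sketch of those last two steps (note that SELECT ... INTO does not carry over constraints, indexes, or the identity property, so those would need re-creating on the new table):
DROP TABLE table1;
EXEC sp_rename 'tmp_table1', 'table1';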
Select all the columns into a temp table, and create a new table with the new column you want. Then drop the old table, select all the columns from the temp table, and insert them into the new table with the reordered column. No data is lost.
SELECT * FROM TEMP
SELECT * FROM originaltbl
SELECT * FROM #Stagintbl
DECLARE @ColumnName nvarchar(max);
SET @ColumnName = (SELECT
    DISTINCT STUFF((
        SELECT ',' + a.COLUMN_NAME
        FROM (
            SELECT Column_name
            FROM INFORMATION_SCHEMA.COLUMNS
            WHERE TABLE_NAME = 'originaltbl') a
        FOR XML PATH('')
    ), 1, 1, '') AS ColumnName)
DECLARE @Sqlquery nvarchar(max)
SET @Sqlquery = 'SELECT ' + @ColumnName + ' FROM #Stagintbl';
INSERT INTO originaltbl
EXECUTE(@Sqlquery)
Dirty and simple.
Export table to csv.
Insert new data at desired position.
Drop table.
Create new table with desired column specifications.
Load columns from csv to new table.
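If you go the CSV route, bcp is one way to script the export and reload from a command prompt (a sketch with hypothetical names; -c exports character data, -t sets the field terminator, -T uses Windows authentication):
bcp MyDb.dbo.MyTable out mytable.csv -c -t, -S myserver -T
rem ...drop and re-create the table with the desired column specifications, then:
bcp MyDb.dbo.MyTable in mytable.csv -c -t, -S myserver -T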
I am not sure if the thread is still active. I was having the same problem with a MySQL database. Right-clicking the table and selecting 'ALTER' auto-generated the code below. The sample is from the sakila db and it worked. Just find the column after which you want to place your new column and use the 'AFTER' keyword.
ALTER TABLE `sakila`.`actor`
CHANGE COLUMN `middle_name` `middle_name` VARCHAR(50) NULL DEFAULT NULL AFTER `first_name`;
In SQL Server (in my case, 2005) how can I add the identity property to an existing table column using T-SQL?
Something like:
alter table tblFoo
alter column bar identity(1,1)
I don't believe you can do that. Your best bet is to create a new identity column and copy the data over using an identity insert command (if you indeed want to keep the old values).
Here is a decent article describing the process in detail:
http://www.mssqltips.com/tip.asp?tip=1397
The solution posted by Vikash doesn't work; it produces an "Incorrect syntax" error in SQL Management Studio (2005, as the OP specified). The fact that the "Compact Edition" of SQL Server supports this kind of operation is just a shortcut, because the real process is more like what Robert & JohnFX said--creating a duplicate table, populating the data, renaming the original & new tables appropriately.
If you want to keep the values that already exist in the field that needs to be an identity, you could do something like this:
CREATE TABLE tname2 (etc.)
INSERT INTO tname2 SELECT * FROM tname1
DROP TABLE tname1
CREATE TABLE tname1 (with IDENTITY specified)
SET IDENTITY_INSERT tname1 ON
INSERT INTO tname1 (..column list..) SELECT ..column list.. FROM tname2
SET IDENTITY_INSERT tname1 OFF
DROP TABLE tname2
Of course, dropping and re-creating a table (tname1) that is used by live code is NOT recommended! :)
Is the table populated? If not, drop and recreate the table.
If it is populated, what values already exist in the column? If they are values you don't want to keep:
Create a new table as you desire it, load the records from your old table into your new table, and let the database populate the identity column as normal. Rename your original table and rename the new one to the correct name :).
Finally, if the column you wish to make identity currently contains primary key values and is already referenced by other tables, you will need to totally rethink whether this is really what you want to do :)
There is no direct way of doing this except:
A) through SQL i.e.:
-- make sure you have the correct CREATE TABLE script ready with IDENTITY
SELECT * INTO abcTable_copy FROM abcTable
DROP TABLE abcTable
CREATE TABLE abcTable -- this time with the IDENTITY column
SET IDENTITY_INSERT abcTable ON
INSERT INTO abcTable (..specify all columns!) SELECT ..specify all columns! FROM abcTable_copy
SET IDENTITY_INSERT abcTable OFF
DROP TABLE abcTable_copy
-- I would suggest to verify the contents of both tables
-- before dropping the copy table
B) Through SSMS, which will do exactly the same in the background but with less fat-fingering.
In the SSMS Object Explorer, right-click the table you need to modify
Select "Design", then select the column you'd like to add IDENTITY to
Change the identity setting from No -> Yes (and possibly set the seed)
Ctrl+S to save the table
This will drop and recreate the table with all original data in it.
If you get a warning:
Go to SSMS Tools -> Options -> Designers -> Table and Database Designers
and uncheck the option "Prevent saving changes that require table re-creation"
Things to be careful about:
your DB has enough disk space before you do this
the DB is not in use (especially the table you are changing)
make sure to backup your DB before doing it
if the table has a lot of data (over 1 GB), try it somewhere else first before using it in the real DB
Create a New Table
SELECT * INTO Table_New FROM Table_Current WHERE 1 = 0;
Drop Column from New Table
Alter table Table_New drop column id;
Add column with identity
Alter table Table_New add id int primary key identity;
Get All Data in New Table
SET IDENTITY_INSERT Table_New ON;
INSERT INTO Table_New (id, Name,CreatedDate,Modified)
SELECT id, Name,CreatedDate,Modified FROM Table_Current;
SET IDENTITY_INSERT Table_New OFF;
Drop old Table
drop table Table_Current;
Rename New Table as old One
EXEC sp_rename 'Table_New', 'Table_Current';
alter table tablename
alter column columnname
add Identity(100,1)
To add a NOT NULL Column to a table with many records, a DEFAULT constraint needs to be applied. This constraint causes the entire ALTER TABLE command to take a long time to run if the table is very large. This is because:
Assumptions:
The DEFAULT constraint modifies existing records. This means that the db needs to increase the size of each record, which causes it to shift records on full data-pages to other data-pages and that takes time.
The DEFAULT update executes as an atomic transaction. This means that the transaction log will need to be grown so that a roll-back can be executed if necessary.
The transaction log keeps track of the entire record. Therefore, even though only a single field is modified, the space needed by the log will be based on the size of the entire record multiplied by the # of existing records. This means that adding a column to a table with small records will be faster than adding a column to a table with large records even if the total # of records are the same for both tables.
Possible solutions:
Suck it up and wait for the process to complete. Just make sure to set the timeout period to be very long. The problem with this is that it may take hours or days to do depending on the # of records.
Add the column but allow NULL. Afterward, run an UPDATE query to set the DEFAULT value for existing rows. Do not update everything in one statement; update batches of records at a time or you'll end up with the same problem as solution #1. The problem with this approach is that you end up with a column that allows NULL when you know that this is an unnecessary option. I believe there are best-practice documents out there saying that you should not have columns that allow NULL unless it's necessary.
Create a new table with the same schema. Add the column to that schema. Transfer the data over from the original table. Drop the original table and rename the new table. I'm not certain how this is any better than #1.
Questions:
Are my assumptions correct?
Are these my only solutions? If so, which one is the best? If not, what else could I do?
I ran into this problem at work as well, and my solution is along the lines of #2.
Here are my steps (I am using SQL Server 2005):
1) Add the column to the table with a default value:
ALTER TABLE MyTable ADD MyColumn varchar(40) DEFAULT('')
2) Add a NOT NULL constraint with the NOCHECK option. The NOCHECK does not enforce on existing values:
ALTER TABLE MyTable WITH NOCHECK
ADD CONSTRAINT MyColumn_NOTNULL CHECK (MyColumn IS NOT NULL)
3) Update the values incrementally in table:
GO
UPDATE TOP(3000) MyTable SET MyColumn = '' WHERE MyColumn IS NULL
GO 1000
The UPDATE statement will update a maximum of 3000 records at a time, which lets the work commit in chunks. I have to use "MyColumn IS NULL" because my table does not have a sequential primary key.
GO 1000 will execute the previous statement 1000 times, enough to cover 3 million records; if you need more, just increase this number. Batches that run after every row has been filled in simply update nothing.
Here's what I would try:
Do a full backup of the database.
Add the new column, allowing nulls - don't set a default.
Set SIMPLE recovery, which truncates the tran log as soon as each batch is committed.
The SQL is: ALTER DATABASE XXX SET RECOVERY SIMPLE
Run the update in batches as you discussed above, committing after each one.
Reset the new column to no longer allow nulls.
Go back to the normal FULL recovery.
The SQL is: ALTER DATABASE XXX SET RECOVERY FULL
Backup the database again.
The use of the SIMPLE recovery model doesn't stop logging, but it significantly reduces its impact. This is because the server discards the recovery information after every commit.
You could:
Start a transaction.
Grab a write lock on your original table so no one writes to it.
Create a shadow table with the new schema.
Transfer all the data from the original table.
Execute sp_rename to rename the old table out of the way.
Execute sp_rename to rename the new table into place.
Finally, you commit the transaction.
The advantage of this approach is that your readers will be able to access the table during the long process and that you can perform any kind of schema change in the background.
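A sketch of that sequence, with hypothetical names (the held shared table lock blocks writers but still lets readers through; constraints and indexes must also be re-created on the shadow table):
BEGIN TRANSACTION;
DECLARE @n int;
-- Take and hold a table lock so no one writes to the original
SELECT @n = COUNT(*) FROM dbo.MyTable WITH (TABLOCK, HOLDLOCK);
-- Shadow table with the new schema (the new NOT NULL column with its default)
CREATE TABLE dbo.MyTable_Shadow
    (ID int NOT NULL, SomeColumn varchar(10) NULL,
     NewColumn int NOT NULL DEFAULT 0);
-- Transfer all the data
INSERT INTO dbo.MyTable_Shadow (ID, SomeColumn, NewColumn)
    SELECT ID, SomeColumn, 0 FROM dbo.MyTable;
-- Swap the names
EXEC sp_rename 'dbo.MyTable', 'MyTable_Old';
EXEC sp_rename 'dbo.MyTable_Shadow', 'MyTable';
COMMIT TRANSACTION;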
Just to update this with the latest information.
In SQL Server 2012 this can now be carried out as an online operation in the following circumstances
Enterprise Edition only
The default must be a runtime constant
For the second requirement examples might be a literal constant or a function such as GETDATE() that evaluates to the same value for all rows. A default of NEWID() would not qualify and would still end up updating all rows there and then.
For defaults that qualify SQL Server evaluates them and stores the result as the default value in the column metadata so this is independent of the default constraint which is created (which can even be dropped if no longer required). This is viewable in sys.system_internals_partition_columns. The value doesn't get written out to the rows until next time they happen to get updated.
More details about this here: online non-null with values column add in sql server 2012
Admittedly, this is an old question. My colleague recently told me that he was able to do it in one single ALTER TABLE statement on a table with 13.6M rows. It finished within a second on SQL Server 2012. I was able to confirm the same on a table with 8M rows. Did something change in later versions of SQL Server?
Alter table mytable add mycolumn char(1) not null default('N');
I think this depends on the SQL flavor you are using, but what if you took option 2 and, at the very end, altered the column to NOT NULL with the default value?
Would it be fast, since it sees that all the values are not null?
If you want the column in the same table, you'll just have to do it. Now, option 3 is potentially the best for this because you can still have the database "live" while this operation is going on. If you use option 1, the table is locked while the operation happens and then you're really stuck.
If you don't really care if the column is in the table, then I suppose a segmented approach is the next best. Though I really try to avoid that (to the point that I don't do it), because then, as Charles Bretana says, you'll have to find all the places that update/insert into that table and modify them. Ugh!
I had a similar problem, and went for your option #2.
It takes 20 minutes this way, as opposed to 32 hours the other way!!! Huge difference, thanks for the tip.
I wrote a full blog entry about it, but here's the important sql:
Alter table MyTable
Add MyNewColumn char(10) null default '?';
go
update MyTable set MyNewColumn='?' where MyPrimaryKey between 0 and 1000000
go
update MyTable set MyNewColumn='?' where MyPrimaryKey between 1000000 and 2000000
go
update MyTable set MyNewColumn='?' where MyPrimaryKey between 2000000 and 3000000
go
..etc..
Alter table MyTable
Alter column MyNewColumn char(10) not null;
And the blog entry if you're interested:
http://splinter.com.au/adding-a-column-to-a-massive-sql-server-table
I had a similar problem and went with a modified #3 approach. In my case the database was in SIMPLE recovery mode and the table to which the column was to be added was not referenced by any FK constraints.
Instead of creating a new table with the same schema and copying the contents of the original table, I used the SELECT...INTO syntax.
According to Microsoft (http://technet.microsoft.com/en-us/library/ms188029(v=sql.105).aspx)
The amount of logging for SELECT...INTO depends on the recovery model
in effect for the database. Under the simple recovery model or
bulk-logged recovery model, bulk operations are minimally logged. With
minimal logging, using the SELECT… INTO statement can be more
efficient than creating a table and then populating the table with an
INSERT statement. For more information, see Operations That Can Be
Minimally Logged.
The sequence of steps:
1. Move data from the old table to the new one while adding the new column with a default:
SELECT table.*, CAST('default' AS nvarchar(256)) AS new_column
INTO table_copy
FROM table
2. Drop the old table:
DROP TABLE table
3. Rename the newly created table:
EXEC sp_rename 'table_copy', 'table'
4. Create the necessary constraints and indexes on the new table.
In my case the table had more than 100 million rows and this approach completed faster than approach #2 and log space growth was minimal.
1) Add the column to the table with a default value:
ALTER TABLE MyTable ADD MyColumn int default 0
2) Update the values incrementally in the table (same effect as accepted answer). Adjust the number of records being updated to your environment, to avoid blocking other users/processes.
DECLARE @rowcount int = 1
WHILE (@rowcount > 0)
BEGIN
    UPDATE TOP (10000) MyTable SET MyColumn = 0 WHERE MyColumn IS NULL
    SET @rowcount = @@ROWCOUNT
END
3) Alter the column definition to require not null. Run the following at a moment when the table is not in use (or schedule a few minutes of downtime). I have successfully used this for tables with millions of records.
ALTER TABLE MyTable ALTER COLUMN MyColumn int NOT NULL
I would use a CURSOR instead of an UPDATE. The cursor will update all matching records in a batch, record by record; it takes time but does not lock the table.
If you want to avoid locks, use WAITFOR.
Also, I am not sure that a DEFAULT constraint changes existing rows.
Probably a NOT NULL constraint used together with DEFAULT causes the case described by the author.
If it does change them, add it at the end.
So the pseudocode will look like:
-- without the NOT NULL constraint -- we will add it at the end
ALTER TABLE table ADD new_column INT DEFAULT 0

DECLARE fillNullColumn CURSOR LOCAL FAST_FORWARD FOR
    SELECT
        key
    FROM
        table WITH (NOLOCK)
    WHERE
        new_column IS NULL

OPEN fillNullColumn

DECLARE @key INT

FETCH NEXT FROM fillNullColumn INTO @key

WHILE @@FETCH_STATUS = 0 BEGIN
    UPDATE
        table WITH (ROWLOCK)
    SET
        new_column = 0 -- default value
    WHERE
        key = @key

    WAITFOR DELAY '00:00:05' -- wait 5 seconds; keep in mind this updates only 12 rows per minute

    FETCH NEXT FROM fillNullColumn INTO @key
END

CLOSE fillNullColumn
DEALLOCATE fillNullColumn

ALTER TABLE table ALTER COLUMN new_column INT NOT NULL
There may still be some syntax errors in this, but I hope it helps to solve your problem.
Good luck!
Vertically segment the table. This means you will have two tables with the same primary key and exactly the same number of records: the one you already have, and a second one with just the key and the new NOT NULL column (with its default value).
Modify all insert, update, and delete code so it keeps the two tables in sync. If you want, you can create a view that joins the two tables together, presenting a single logical combination that appears as one table to client SELECT statements.
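A sketch of that layout, with hypothetical names (BigTable stands in for the existing table):
-- Hypothetical existing table
CREATE TABLE dbo.BigTable (ID int NOT NULL PRIMARY KEY, SomeColumn varchar(10) NULL);
GO
-- Side table holding just the key and the new NOT NULL column with its default
CREATE TABLE dbo.BigTable_Extra
    (ID int NOT NULL PRIMARY KEY REFERENCES dbo.BigTable (ID),
     NewColumn int NOT NULL DEFAULT 0);
GO
-- View that joins the two segments into one logical table for SELECTs
CREATE VIEW dbo.BigTable_Combined
AS
SELECT t.ID, t.SomeColumn, x.NewColumn
FROM dbo.BigTable AS t
JOIN dbo.BigTable_Extra AS x ON x.ID = t.ID;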