I have a table with a primary key already set, and now I want that column to be auto-incrementing. The table has many records. Is this possible, and what is the fastest way to do it?
This takes some effort, because you cannot add the IDENTITY property to an existing column. As a workaround, first add a new column that has the identity property:
ALTER TABLE dbo.Table_name
ADD ID INT IDENTITY
and then make ID the primary key, like this:
ALTER TABLE dbo.Table_name
ADD CONSTRAINT PK_YourTable
PRIMARY KEY(ID)
And yes, you have to remove the old dependencies before performing the steps above, like this:
ALTER TABLE Table_name
DROP CONSTRAINT PK_Table1_Col1
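Putting those pieces in order - the table and constraint names below are just the placeholders from the snippets above - a minimal sketch looks like this:
-- 1. Drop the old primary key (and any other dependencies on the old key column)
ALTER TABLE dbo.Table_name
DROP CONSTRAINT PK_Table1_Col1
-- 2. Add the new identity column
ALTER TABLE dbo.Table_name
ADD ID INT IDENTITY(1,1) NOT NULL
-- 3. Make the new column the primary key
ALTER TABLE dbo.Table_name
ADD CONSTRAINT PK_YourTable
PRIMARY KEY CLUSTERED (ID)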
EDIT:-
From the source:
We can use ALTER TABLE...SWITCH to work around this by only modifying metadata. See Books Online for restrictions on using the SWITCH method presented below. The process is practically instant even for the largest tables.
USE tempdb;
GO
-- A table with an identity column
CREATE TABLE dbo.Source (row_id INTEGER IDENTITY PRIMARY KEY NOT NULL, data SQL_VARIANT NULL);
GO
-- Some sample data
INSERT dbo.Source (data)
VALUES (CONVERT(SQL_VARIANT, 4)),
(CONVERT(SQL_VARIANT, 'X')),
(CONVERT(SQL_VARIANT, {d '2009-11-07'})),
(CONVERT(SQL_VARIANT, N'áéíóú'));
GO
-- Remove the identity property
BEGIN TRY;
-- All or nothing
BEGIN TRANSACTION;
-- A table with the same structure as the one with the identity column,
-- but without the identity property
CREATE TABLE dbo.Destination (row_id INTEGER PRIMARY KEY NOT NULL, data SQL_VARIANT NULL);
-- Metadata switch
ALTER TABLE dbo.Source SWITCH TO dbo.Destination;
-- Drop the old object, which now contains no data
DROP TABLE dbo.Source;
-- Rename the new object to make it look like the old one
EXECUTE sp_rename N'dbo.Destination', N'Source', 'OBJECT';
-- Success
COMMIT TRANSACTION;
END TRY
BEGIN CATCH
-- Bugger!
IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
PRINT ERROR_MESSAGE();
END CATCH;
GO
-- Test that the identity property has indeed gone
INSERT dbo.Source (row_id, data)
VALUES (5, CONVERT(SQL_VARIANT, N'This works!'))
SELECT row_id,
data
FROM dbo.Source;
GO
-- Tidy up
DROP TABLE dbo.Source;
I have two tables, UserGold and AspNetUserRoles(UserId, RoleId).
The primary key type of UserGold is nvarchar(450),
and the same for AspNetUserRoles.
My problem is that I want to get the last inserted primary key in UserGold and insert it into the AspNetUserRoles table using a trigger.
SCOPE_IDENTITY didn't work because my primary key type is nvarchar.
I don't know what to do.
I saw solutions like OUTPUT INSERTED, but they didn't work for me.
create trigger addrole
on UserGold
after insert
as
begin
    declare @userid nvarchar(450)
    set @userid = CAST(SCOPE_IDENTITY() AS nvarchar(450))
    insert into AspNetUserRoles (UserId, RoleId)
    values (@userid, '2c258e8d-c648-4b38-9b01-989d4dd525fe')
end
You should not write triggers - or anything else in a database - that expect only a single row to change. Databases work with sets of rows, not single "records". Imagine what happens if someone creates 5 rows in UserGold in a single insert statement: you can't put 5 UserId values into a single @userid variable.
What you want is something like
-- assuming your tables are in the dbo schema! Make sure you include the schema name
create trigger AddDefaultRole on dbo.UserGold after insert as begin
set nocount on;
insert dbo.AspNetUserRoles (UserId, RoleId)
select UserId, '2c258e8d-c648-4b38-9b01-989d4dd525fe'
from inserted;
end
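For illustration, here is the kind of multi-row insert that the set-based trigger handles correctly and a single scalar variable cannot (the UserName column is an assumption about UserGold's schema):
-- One statement, three rows: the trigger fires once and the
-- inserted pseudo-table contains all three UserId values.
INSERT INTO dbo.UserGold (UserId, UserName)
VALUES (CONVERT(nvarchar(450), NEWID()), N'alice'),
       (CONVERT(nvarchar(450), NEWID()), N'bob'),
       (CONVERT(nvarchar(450), NEWID()), N'carol');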
For more information, see Inserted and Deleted tables for triggers
I have one table called [FridgeTemperture]; whenever a record is inserted into it, one value should be added to the new table MpSensors. But records are not being inserted into the new table when a record is inserted.
Error
Explicit value must be specified for identity column in table
'MpSensors' either when IDENTITY_INSERT is set to ON or when a replication
user is inserting into a NOT FOR REPLICATION identity column.
CREATE TRIGGER [dbo].[FridgeTemperature_INSERT]
ON [dbo].[FridgeTemperture]
AFTER INSERT
AS
BEGIN
SET IDENTITY_INSERT MpSensors ON;
SET NOCOUNT ON;
DECLARE #fridge_temp varchar(10)
INSERT INTO MpSensors(fridge_temp)
VALUES(#fridge_temp)
SET IDENTITY_INSERT MpSensors OFF;
END
GO
table schema
CREATE TABLE [dbo].[MpSensors](
[id] [int] IDENTITY(1,1) NOT NULL,
[fridge_temp] [varchar](10) NULL
) ON [PRIMARY]
CREATE TABLE [dbo].[FridgeTemperture](
[Id] [int] IDENTITY(1,1) NOT NULL,
[ShopId] [nvarchar](4) NULL,
[Fridgetemp] [decimal](4, 2) NOT NULL,
[UpdatedDate] [datetime2](7) NOT NULL
) ON [PRIMARY]
GO
You don't need SET IDENTITY_INSERT ON if you are not attempting to insert values into the identity column. Also, your current insert statement, if you lose the SET IDENTITY_INSERT, will simply insert a single NULL row for any insert statement completed successfully on the FridgeTemperture table.
When using triggers, you have access to the rows affected by the statement that fired the trigger via the auto-generated pseudo-tables called inserted and deleted.
I think you are after something like this:
CREATE TRIGGER [dbo].[FridgeTemperature_INSERT]
ON [dbo].[FridgeTemperture]
AFTER INSERT
AS
BEGIN
INSERT INTO MpSensors(fridge_temp)
SELECT CAST(Fridgetemp as varchar(10))
FROM inserted
END
Though I can't really see any benefit of storing the same value in two different places, and in two different data types.
Update
Following our conversation in the comments, you can simply use an update statement in the trigger instead of an insert statement:
UPDATE MpSensors
SET fridge_temp = (
SELECT TOP 1 CAST(Fridgetemp as varchar(10))
FROM inserted
ORDER BY Id DESC
)
This should give you the latest record in case you have an insert statement that inserts more than a single record into the FridgeTemperture table in a single statement.
create TRIGGER [dbo].[FridgeTemperature_INSERT]
ON [dbo].[FridgeTemperture]
AFTER INSERT
AS
BEGIN
UPDATE MpSensors
SET fridge_temp = CAST(Fridgetemp as varchar(10))
FROM inserted
END
You need to use a SELECT statement with CAST in the trigger, since [fridge_temp] is varchar in the MpSensors table. Try it like this:
CREATE trigger <table_name>
ON <table_name>
AFTER Insert
AS
BEGIN
INSERT INTO <table_name>(column_name)
Select CAST(column_name as varchar(10))
FROM inserted
END
The inserted table stores copies of the affected rows during INSERT and UPDATE statements. During an insert or update transaction, new rows are added to both the inserted table and the trigger table. The rows in the inserted table are copies of the new rows in the trigger table.
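As a hedged illustration of the two pseudo-tables (the table and column names here are made up), an AFTER UPDATE trigger can compare old and new values by joining deleted to inserted:
CREATE TRIGGER trg_SomeTable_Audit
ON dbo.SomeTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- deleted holds the rows as they were before the update,
    -- inserted holds them as they are after it.
    INSERT INTO dbo.SomeTableAudit (Id, OldValue, NewValue)
    SELECT d.Id, d.SomeColumn, i.SomeColumn
    FROM deleted AS d
    JOIN inserted AS i ON i.Id = d.Id;
END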
I'm doing some DB schema re-structuring.
I have a script that looks broadly like this:
BEGIN TRAN LabelledTransaction
--Remove FKs
ALTER TABLE myOtherTable1 DROP CONSTRAINT <constraintStuff>
ALTER TABLE myOtherTable2 DROP CONSTRAINT <constraintStuff>
--Remove PK
ALTER TABLE myTable DROP CONSTRAINT PK_for_myTable
--Add replacement id column with new type and IDENTITY
ALTER TABLE myTable ADD id_new int Identity(1, 1) NOT NULL
GO
ALTER TABLE myTable ADD CONSTRAINT PK_for_myTable PRIMARY KEY CLUSTERED (id_new)
GO
SELECT * FROM myTable
--Change referencing table types
ALTER TABLE myOtherTable1 ALTER COLUMN col_id int NULL
ALTER TABLE myOtherTable2 ALTER COLUMN col_id int NOT NULL
--Change referencing table values
UPDATE myOtherTable1 SET consignment_id = Target.id_new FROM myOtherTable1 AS Source JOIN <on key table>
UPDATE myOtherTable2 SET consignment_id = Target.id_new FROM myOtherTable2 AS Source JOIN <on key table>
--Replace old column with new column
ALTER TABLE myTable DROP COLUMN col_id
GO
EXEC sp_rename 'myTable.id_new', 'col_id', 'Column'
GO
--Reinstate any OTHER PKs disabled
ALTER TABLE myTable ADD CONSTRAINT <PK defn>
--Reinstate FKs
ALTER TABLE myOtherTable1 WITH CHECK ADD CONSTRAINT <constraintStuff>
ALTER TABLE myOtherTable2 WITH CHECK ADD CONSTRAINT <constraintStuff>
SELECT * FROM myTable
-- Reload out-of-date views
EXEC sp_refreshview 'someView'
-- Remove obsolete sequence
DROP SEQUENCE mySeq
ROLLBACK TRAN LabelledTransaction
Obviously that's all somewhat redacted, but the fine detail isn't the important thing here.
Naturally, it's quite hard to locate all the things that need to be turned off/edited before the core change (even with some meta-queries to help me), so I don't always get the script correct first time.
But I put in the ROLLBACK in order to ensure that the failed attempts left the DB unchanged.
But what I actually see is that the ROLLBACK doesn't occur if there were errors in the TRAN. I think I get errors about "no matching TRAN for the rollback"?
My first instinct was that it was about the GO statements, but https://stackoverflow.com/a/11121382/1662268 suggests that labeling the TRAN should have fixed that?
What's happening? Why don't the changes get rolled back properly if there are errors?
How can I write and test these scripts in such a way that I don't have to manually revert any partial changes if the script isn't perfect first time?
EDIT:
Additional comments based on the first answer.
If the linked answer is not applicable to this query, could you expand on why that is, and why it's different from the example that they had given in their answer?
I can't (or rather, I believe that I can't) remove the GOs, because the script above requires the GOs in order to compile. If I remove the GOs, then later statements that depend on the newly added/renamed columns don't compile and the query can't run.
Is there any way to work around this, to remove the GOs?
If you have any error which automatically causes the transaction to be rolled back then the transaction will roll back as part of the current batch.
Then, control will return back to the client tool which will then send the next batch to the server and this next batch (and subsequent ones) will not be wrapped in any transaction.
Finally, when the final batch is executed that tries to run the rollback then you'll get the error message you received.
So, you need to protect each batch from running when it's not protected by a transaction.
One way to do it would be to insert our old friend GOTO:
GO
IF @@TRANCOUNT=0 GOTO NBATCH
...Rest of Code
NBATCH:
GO
or SET FMTONLY:
GO
IF @@TRANCOUNT=0 BEGIN
SET FMTONLY ON
END
...Rest of Code
GO
Of course, this won't address all issues - some statements need to be the first or only statement in a batch. To resolve these, we have to combine one of the above techniques with an EXEC of some form:
GO
IF @@TRANCOUNT=0 BEGIN
SET FMTONLY ON
END
EXEC sp_executesql N'/*Code that needs to be in its own batch*/'
GO
(You'll also have to employ this technique if a batch of code relies on work a previous batch has performed which introduces new database objects (tables, columns, etc), since if that previous batch never executed, the new object will not exist)
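For example, CREATE VIEW must be the first statement in its batch, so under this scheme it would have to be wrapped like this (the view and table names are just the placeholders from the script above):
GO
IF @@TRANCOUNT=0 BEGIN
SET FMTONLY ON
END
-- CREATE VIEW cannot share a batch with other statements, so it is
-- deferred into its own dynamic batch via sp_executesql.
EXEC sp_executesql N'CREATE VIEW dbo.someView AS SELECT col_id FROM dbo.myTable';
GO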
I've also just discovered the existence of the -b option for the sqlcmd tool. The following script generates two errors when run through SSMS:
begin transaction
go
set xact_abort on
go
create table T(ID int not null,constraint CK_ID check (ID=4))
go
insert into T(ID) values (3)
go
rollback
Errors:
Msg 547, Level 16, State 0, Line 7
The INSERT statement conflicted with the CHECK constraint "CK_ID". The conflict occurred in database "TestDB", table "dbo.T", column 'ID'.
Msg 3903, Level 16, State 1, Line 9
The ROLLBACK TRANSACTION request has no corresponding BEGIN TRANSACTION.
However, the same script saved as Abortable.sql and run with the following commandline:
sqlcmd -b -E -i Abortable.sql -S .\SQL2014 -d TestDB
Generates the single error:
Msg 547, Level 16, State 1, Server .\SQL2014, Line 1
The INSERT statement conflicted with the CHECK constraint "CK_ID". The conflict
occurred in database "TestDB", table "dbo.T", column 'ID'.
So, it looks like running your scripts from the commandline and using the -b option may be another approach to take. I've just scoured the SSMS options/properties to see if I can find something equivalent to -b but I've not found it.
Remove the 'GO's - they split the script into separate batches and break the transaction handling.
Only ROLLBACK if the script completes - just use TRY/CATCH:
BEGIN TRANSACTION;
BEGIN TRY
--Remove FKs
ALTER TABLE myOtherTable1 DROP CONSTRAINT <constraintStuff>
ALTER TABLE myOtherTable2 DROP CONSTRAINT <constraintStuff>
--Remove PK
ALTER TABLE myTable DROP CONSTRAINT PK_for_myTable
--Add replacement id column with new type and IDENTITY
ALTER TABLE myTable ADD id_new int Identity(1, 1) NOT NULL
ALTER TABLE myTable ADD CONSTRAINT PK_for_myTable PRIMARY KEY CLUSTERED (id_new)
SELECT * FROM myTable
--Change referencing table types
ALTER TABLE myOtherTable1 ALTER COLUMN col_id int NULL
ALTER TABLE myOtherTable2 ALTER COLUMN col_id int NOT NULL
--Change referencing table values
UPDATE myOtherTable1 SET consignment_id = Target.id_new FROM myOtherTable1 AS Source JOIN <on key table>
UPDATE myOtherTable2 SET consignment_id = Target.id_new FROM myOtherTable2 AS Source JOIN <on key table>
--Replace old column with new column
ALTER TABLE myTable DROP COLUMN col_id
EXEC sp_rename 'myTable.id_new', 'col_id', 'Column'
--Reinstate any OTHER PKs disabled
ALTER TABLE myTable ADD CONSTRAINT <PK defn>
--Reinstate FKs
ALTER TABLE myOtherTable1 WITH CHECK ADD CONSTRAINT <constraintStuff>
ALTER TABLE myOtherTable2 WITH CHECK ADD CONSTRAINT <constraintStuff>
SELECT * FROM myTable
-- Reload out-of-date views
EXEC sp_refreshview 'someView'
-- Remove obsolete sequence
DROP SEQUENCE mySeq
ROLLBACK TRANSACTION
END TRY
BEGIN CATCH
print 'Error caught'
select ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage;
END CATCH;
It's about an Oracle (PL/SQL) script. I am not very familiar with databases, to be honest.
I want to alter the length of a string column from 30 to 60. It is a NOT NULL column.
If the table is empty and I run following script then it works:
alter table [TABLE_NAME] add ( NEW_COLUMN NVARCHAR2(60) DEFAULT 'null' NOT NULL );
/
alter table [TABLE_NAME] DROP CONSTRAINT PK_[TABLE_NAME];
/
begin
for rec in ( select * from [TABLE_NAME] )
loop
update [TABLE_NAME] set NEW_COLUMN =rec.OLD_COLUMN where Name_ID=rec.Name_ID;
end loop;
end;
/
alter table [TABLE_NAME] drop column OLD_COLUMN;
/
alter table [TABLE_NAME] rename column NEW_COLUMN to OLD_COLUMN;
/
alter table [TABLE_NAME] add CONSTRAINT PK_[TABLE_NAME] PRIMARY KEY(Name_ID);
/
But if the table has values, then this script does not work.
It gives the error: Cannot drop constraint - nonexistent constraint.
However, if I remove the lines about the constraints (the second and the second-to-last statements), then it works.
Now, I don't know whether the table will be empty or will have data, so I need a script that works in both situations. Can anyone help, please?
The script for creating the table is:
CREATE TABLE TABLE_NAME
(
Name_ID NVARCHAR2(7) NOT NULL,
OLD_COLUMN NVARCHAR2(30) NOT NULL,
CONSTRAINT PK_TABLE_NAME PRIMARY KEY(Name_ID, OLD_COLUMN)
)
/
So while creating the table it puts the primary key constraint in place, but while updating the table it somehow drops this constraint. I am simplifying the situation here. The tables are updated through Java code. What I need is a script that works in both situations - with data, or just after creating the table - and modifies the column.
The following script works for me, regardless of whether the insert statement is present or not (i.e. whether or not the table has data):
CREATE TABLE TABLE_NAME
(
Name_ID NVARCHAR2(7) NOT NULL,
OLD_COLUMN NVARCHAR2(30) NOT NULL,
CONSTRAINT PK_TABLE_NAME PRIMARY KEY(Name_ID, OLD_COLUMN)
);
insert into table_name (name_id, old_column)
values ('test', 'test_old_col');
commit;
alter table table_name add (new_column nvarchar2(60) default 'null' not null);
update table_name set new_column = old_column;
commit;
alter table table_name drop constraint PK_TABLE_NAME;
alter table table_name drop column old_column;
alter table table_name rename column new_column to old_column;
alter table TABLE_NAME add CONSTRAINT PK_TABLE_NAME PRIMARY KEY(Name_ID, old_column);
drop table table_name;
I have assumed that you meant to recreate the primary key with the old_column in it, otherwise you would be unable to recreate it if there are any duplicate values present in the name_id column.
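If you cannot be sure whether the primary key still exists at the time the script runs (which is what the "nonexistent constraint" error suggests), one hedged sketch is to drop it conditionally by checking user_constraints first - the names TABLE_NAME and PK_TABLE_NAME are assumed to match your schema:
declare
  v_count integer;
begin
  -- Only attempt the drop if the constraint actually exists.
  select count(*)
    into v_count
    from user_constraints
   where table_name = 'TABLE_NAME'
     and constraint_name = 'PK_TABLE_NAME';
  if v_count > 0 then
    execute immediate 'alter table TABLE_NAME drop constraint PK_TABLE_NAME';
  end if;
end;
/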
As an alternative, you can save the old data, create a new table with the new parameters, and then insert the old values back.
In SQL Server Management Studio:
"your database" => Tasks => Generate Scripts => Select specific database objects => "your table" => Advanced => Types of data to script: Schema and data => Generate
I have a table table1 in SQL server 2008 and it has records in it.
I want the primary key column table1_Sno to be an auto-incrementing column. Can this be done without any data transfer or cloning of the table?
I know that I can use ALTER TABLE to add an auto-increment column, but can I simply add the AUTO_INCREMENT option to an existing column that is the primary key?
Changing the IDENTITY property is really a metadata only change. But to update the metadata directly requires starting the instance in single user mode and messing around with some columns in sys.syscolpars and is undocumented/unsupported and not something I would recommend or will give any additional details about.
For people coming across this answer on SQL Server 2012+ by far the easiest way of achieving this result of an auto incrementing column would be to create a SEQUENCE object and set the next value for seq as the column default.
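A minimal sketch of that approach, assuming the table and column from the question (dbo.table1 with an int key column table1_Sno) and a sequence name of my own choosing:
-- Start the sequence past the current maximum key value;
-- look that up first with SELECT MAX(table1_Sno) + 1 FROM dbo.table1.
CREATE SEQUENCE dbo.Seq_table1
    AS int
    START WITH 1
    INCREMENT BY 1;
-- Use the sequence as the column default, so inserts that omit
-- table1_Sno get the next value automatically.
ALTER TABLE dbo.table1
    ADD CONSTRAINT DF_table1_Sno
    DEFAULT (NEXT VALUE FOR dbo.Seq_table1) FOR table1_Sno;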
Alternatively, or for previous versions (from 2005 onwards), the workaround posted on this connect item shows a completely supported way of doing this without any need for size of data operations using ALTER TABLE...SWITCH. Also blogged about on MSDN here. Though the code to achieve this is not very simple and there are restrictions - such as the table being changed can't be the target of a foreign key constraint.
Example code.
Set up test table with no identity column.
CREATE TABLE dbo.tblFoo
(
bar INT PRIMARY KEY,
filler CHAR(8000),
filler2 CHAR(49)
)
INSERT INTO dbo.tblFoo (bar)
SELECT TOP (10000) ROW_NUMBER() OVER (ORDER BY (SELECT 0))
FROM master..spt_values v1, master..spt_values v2
Alter it to have an identity column (more or less instant).
BEGIN TRY;
BEGIN TRANSACTION;
/*Using DBCC CHECKIDENT('dbo.tblFoo') is slow so use dynamic SQL to
set the correct seed in the table definition instead*/
DECLARE @TableScript nvarchar(max)
SELECT @TableScript =
'
CREATE TABLE dbo.Destination(
bar INT IDENTITY(' +
CAST(ISNULL(MAX(bar),0)+1 AS VARCHAR) + ',1) PRIMARY KEY,
filler CHAR(8000),
filler2 CHAR(49)
)
ALTER TABLE dbo.tblFoo SWITCH TO dbo.Destination;
'
FROM dbo.tblFoo
WITH (TABLOCKX,HOLDLOCK)
EXEC(@TableScript)
DROP TABLE dbo.tblFoo;
EXECUTE sp_rename N'dbo.Destination', N'tblFoo', 'OBJECT';
COMMIT TRANSACTION;
END TRY
BEGIN CATCH
IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
PRINT ERROR_MESSAGE();
END CATCH;
Test the result.
INSERT INTO dbo.tblFoo (filler,filler2)
OUTPUT inserted.*
VALUES ('foo','bar')
Gives
bar filler filler2
----------- --------- ---------
10001 foo bar
Clean up
DROP TABLE dbo.tblFoo
SQL Server: How to set auto-increment on a table with rows in it:
This strategy physically copies the rows around twice, which can take much longer if the table you are copying is very large.
You could save out your data, drop and rebuild the table with the auto-increment and primary key, then load the data back in.
I'll walk you through with an example:
Step 1, create table foobar (without primary key or auto-increment):
CREATE TABLE foobar(
id int NOT NULL,
name nchar(100) NOT NULL,
)
Step 2, insert some rows
insert into foobar values(1, 'one');
insert into foobar values(2, 'two');
insert into foobar values(3, 'three');
Step 3, copy out foobar data into a temp table:
select * into temp_foobar from foobar
Step 4, drop table foobar:
drop table foobar;
Step 5, recreate your table with the primary key and auto-increment properties:
CREATE TABLE foobar(
id int primary key IDENTITY(1, 1) NOT NULL,
name nchar(100) NOT NULL,
)
Step 6, insert your data from temp table back into foobar
SET IDENTITY_INSERT foobar ON
INSERT into foobar (id, name) select id, name from temp_foobar;
SET IDENTITY_INSERT foobar OFF
Step 7, drop your temp table, and check to see if it worked:
drop table temp_foobar;
select * from foobar;
You should get this, and when you inspect the foobar table, the id column auto-increments by 1 and id is the primary key:
1 one
2 two
3 three
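If you also want to double-check the identity seed after the reload, an optional check (NORESEED only reports, it changes nothing):
-- Reports the current identity value and the current maximum column value for foobar.
DBCC CHECKIDENT ('foobar', NORESEED);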
If you want to do this via the designer you can do it by following the instructions here "Save changes is not permitted" when changing an existing column to be nullable
Yes, you can. Go to Tools > Options > Designers > Table and Database Designers and uncheck "Prevent saving changes that require table re-creation".
No, you cannot add an auto-increment option to an existing column with data; I think the option you mentioned is the best.
Have a look here.
If you don't want to add a new column, and you can guarantee that your current int column is unique, you could select all of the data out into a temporary table, drop the table and recreate it with the IDENTITY column specified. Then, using SET IDENTITY_INSERT ON, you can insert all of the data from the temporary table into the new table.
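A condensed sketch of that approach (the table and column names are placeholders, not your actual schema):
-- 1. Copy the data out
SELECT * INTO dbo.MyTable_backup FROM dbo.MyTable;
-- 2. Drop and recreate the table with the identity column
DROP TABLE dbo.MyTable;
CREATE TABLE dbo.MyTable
(
    id   int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    name nvarchar(100) NOT NULL
);
-- 3. Put the data back, keeping the original id values
SET IDENTITY_INSERT dbo.MyTable ON;
INSERT INTO dbo.MyTable (id, name)
SELECT id, name FROM dbo.MyTable_backup;
SET IDENTITY_INSERT dbo.MyTable OFF;
DROP TABLE dbo.MyTable_backup;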
The script below can be a good solution. It worked on large data as well.
ALTER DATABASE WMlive SET RECOVERY SIMPLE WITH NO_WAIT
ALTER TABLE WMBOMTABLE DROP CONSTRAINT PK_WMBomTable
ALTER TABLE WMBOMTABLE drop column BOMID
ALTER TABLE WMBOMTABLE ADD BomID int IDENTITY(1, 1) NOT NULL;
ALTER TABLE WMBOMTABLE ADD CONSTRAINT PK_WMBomTable PRIMARY KEY CLUSTERED (BomID);
ALTER DATABASE WMlive SET RECOVERY FULL WITH NO_WAIT