Alter Column to Not Null where System Versioned column was nullable - sql-server

I'm using SQL Server with system-versioned (temporal) tables. In my main table, I have an INT column that currently allows NULLs. I want to change it to disallow NULLs, but the system/history copy of the table still allows them.
I run this statement:
ALTER TABLE dbo.MyTable
ALTER COLUMN MyInt INT NOT NULL;
And I get this error:
Cannot insert the value NULL into column 'MyInt', table 'mydb.dbo.MyTable_History'; column does not allow nulls. UPDATE fails.
I had created the system versioned table using this script:
ALTER TABLE dbo.MyTable
ADD
ValidFrom DATETIME2 (2) GENERATED ALWAYS AS ROW START HIDDEN CONSTRAINT DFMyTable_ValidFrom DEFAULT DATEADD(SECOND, -1, SYSUTCDATETIME()),
ValidTo DATETIME2 (2) GENERATED ALWAYS AS ROW END HIDDEN CONSTRAINT DFMyTable_ValidTo DEFAULT '9999.12.31 23:59:59.99',
PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo);
ALTER TABLE dbo.MyTable
SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.MyTable_History));
GO
Is there some other way I can make my main table's column non-nullable in this scenario? I suppose I could (maybe) manually update the existing system-versioned null values with an arbitrary garbage value, but it seems like this scenario should be supported with temporal tables.

I also looked at this, and it seems you have to update the NULL values in the history table to some value first.
ALTER TABLE dbo.MyTable
SET (SYSTEM_VERSIONING = OFF)
GO
UPDATE dbo.MyTable_History
SET MyInt = 0 WHERE MyInt IS NULL --Update to default value
UPDATE dbo.MyTable
SET MyInt = 0 WHERE MyInt IS NULL --Update to default value
ALTER TABLE dbo.MyTable
ALTER COLUMN MyInt INT NOT NULL
ALTER TABLE dbo.MyTable_History
ALTER COLUMN MyInt INT NOT NULL
GO
ALTER TABLE dbo.MyTable
SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.MyTable_History));
GO

I got this issue when I was trying to add a new non-null column. I was originally trying to create the column as nullable, update all the values, and then set it to non-nullable:
ALTER TABLE dbo.MyTable
ADD MyInt INT NULL;
GO
UPDATE dbo.MyTable
SET MyInt = 0;
GO
ALTER TABLE dbo.MyTable
ALTER COLUMN MyInt INT NOT NULL;
But I managed to get around it by using a temporary default constraint instead:
ALTER TABLE dbo.MyTable
ADD MyInt INT NOT NULL CONSTRAINT DF_MyTable_MyInt DEFAULT 0;
ALTER TABLE dbo.MyTable
DROP CONSTRAINT DF_MyTable_MyInt;

Whilst you can change the schema of temporal tables, there are certain actions that you cannot do with a direct ALTER whilst a table is system-versioned. One of those is changing a nullable column to NOT NULL.
See Important Remarks - Changing the schema of a system-versioned temporal table
In this scenario the only thing you can do is to turn off system versioning using the following:
ALTER TABLE schema.TableName SET (SYSTEM_VERSIONING = OFF);
This leaves you with two separate tables: the table itself and its history table, both as independent objects. You can now make your schema updates to BOTH tables (they have to stay schema-aligned) and then turn system versioning back on, naming the existing history table so it is re-attached rather than a new one being created:
ALTER TABLE schema.TableName SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = schema.TableNameHistory));
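For the question's example, the full sequence would look something like this (a sketch; any existing NULLs in either table must still be updated to a real value first, as shown earlier, and DATA_CONSISTENCY_CHECK optionally verifies the pair when re-enabling):
ALTER TABLE dbo.MyTable SET (SYSTEM_VERSIONING = OFF);
GO
ALTER TABLE dbo.MyTable ALTER COLUMN MyInt INT NOT NULL;
ALTER TABLE dbo.MyTable_History ALTER COLUMN MyInt INT NOT NULL;
GO
ALTER TABLE dbo.MyTable SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.MyTable_History, DATA_CONSISTENCY_CHECK = ON));
GO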

Related

ALTER table to increase column size does not reflect in History (versioned) table

I am currently using SQL Server 2016 and I have a table with SYSTEM_VERSIONING turned on. My ALTER command to increase the column size is not reflected in the history (versioned) table. Can you please advise?
I created the table using the below command
CREATE TABLE ICEBERG.Report (
ReportId INTEGER NOT NULL,
ReportName VARCHAR(300) NULL,
CONSTRAINT PK_RPT PRIMARY KEY (ReportId)
)
ALTER TABLE ICEBERG.Report
ADD ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL DEFAULT SYSUTCDATETIME(),
ValidTo DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL DEFAULT CAST('9999-12-31 23:59:59.9999999' AS DATETIME2),
PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo);
ALTER TABLE ICEBERG.Report
SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = ICEBERG.ReportHistory));
I am now trying to modify the size of the ReportName column in this table, but the change in column size is not reflected in the ReportHistory table.
ALTER TABLE ICEBERG.Report SET (SYSTEM_VERSIONING = OFF); -- note comment below
ALTER TABLE ICEBERG.Report ALTER COLUMN ReportName VARCHAR(500) NULL;
ALTER TABLE ICEBERG.Report SET (SYSTEM_VERSIONING = ON); -- note comment below
Please note: I have also tried executing only the ALTER TABLE command, without turning SYSTEM_VERSIONING OFF/ON as above, and even that does not help.

SQL Server 8111 - but column is NOT NULLABLE

I want to add a primary key constraint to an existing column on an existing table that contains data. The column is not nullable.
However, when I call
alter table mytable add primary key (mycolumn)
I get an 8111:
Msg 8111, Level 16, State 1, Line 2
Cannot define PRIMARY KEY constraint on nullable column in table 'mytable'
Even if I call both instructions in a row:
alter table mytable alter column mycolumn INT NOT NULL;
alter table mytable add primary key (mycolumn)
I still get an 8111
- and the column description in SQL Server Management Studio confirms that mycolumn is set to NOT NULL.
What can I do?
You need to separate your batches: the whole batch is compiled before the ALTER COLUMN executes, so the ADD PRIMARY KEY is still validated against the old, nullable column definition. It would be best to include the schema name as well.
alter table dbo.mytable alter column mycolumn INT NOT NULL;
go
alter table dbo.mytable add primary key (mycolumn);
rextester demo: http://rextester.com/TZLEWP56616

postgreSQL concurrently change column type from int to bigint

I have a pretty big table (around 1 billion rows), and I need to change the id type from SERIAL to BIGSERIAL; guess why? :)
Basically this could be done with this command:
execute "ALTER TABLE my_table ALTER COLUMN id SET DATA TYPE bigint"
Nevertheless that would lock my table forever and put my web service down.
Is there a quite simple way of doing this operation concurrently (whatever the time it will take)?
If you don't have foreign keys pointing at your id, you could add a new column, fill it, drop the old one, and rename the new one to the old name:
alter table my_table add column new_id bigint;
begin; update my_table set new_id = id where id between 0 and 100000; commit;
begin; update my_table set new_id = id where id between 100001 and 200000; commit;
begin; update my_table set new_id = id where id between 200001 and 300000; commit;
begin; update my_table set new_id = id where id between 300001 and 400000; commit;
...
create unique index my_table_pk_idx on my_table(new_id);
begin;
alter table my_table drop constraint my_table_pk;
alter table my_table alter column new_id set default nextval('my_table_id_seq'::regclass);
update my_table set new_id = id where new_id is null;
alter table my_table add constraint my_table_pk primary key using index my_table_pk_idx;
alter table my_table drop column id;
alter table my_table rename column new_id to id;
commit;
Radek's solution looks great. I would add a comment if I had the reputation for it, but I just want to mention that if you are doing this you'll likely want to widen the sequence for the primary key as well.
ALTER SEQUENCE my_table_id_seq AS bigint;
If you just widen the column type, you'll still end up with problems when you hit 2 billion records if the sequence is still integer sized.
I think the issue that James points out about adding the primary key requiring a table scan can be solved with the NOT VALID/VALIDATE dance. Instead of doing alter table my_table add constraint my_table_pk primary key using index my_table_pk_idx;, you can do
ALTER TABLE my_table ADD UNIQUE USING INDEX my_table_pk_idx;
ALTER TABLE my_table ADD CONSTRAINT my_table_id_not_null CHECK (id IS NOT NULL) NOT VALID;
ALTER TABLE my_table VALIDATE CONSTRAINT my_table_id_not_null;
I think it's also worth mentioning that
create unique index my_table_pk_idx on my_table(new_id);
will do a full table scan with an exclusive lock on my_table. It is better to do
CREATE UNIQUE INDEX CONCURRENTLY ON my_table(new_id);
Merging both #radek-postołowicz's and #ethan-pailes' answers into a fully concurrent solution, with some tweaks, we get:
alter table my_table add column new_id bigint;
-- new records filling
CREATE FUNCTION public.my_table_fill_newid() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
new.new_id = new.id;
return new;
END;
$$;
CREATE TRIGGER my_table_fill_newid BEFORE INSERT ON my_table
FOR EACH ROW EXECUTE FUNCTION public.my_table_fill_newid();
-- old records filling
update my_table set new_id = id where id between 0 and 100000;
update my_table set new_id = id where id between 100001 and 200000;
update my_table set new_id = id where id between 200001 and 300000;
...
-- slow but concurrent part
create unique index concurrently my_table_pk_idx on my_table(new_id);
ALTER TABLE my_table ADD CONSTRAINT my_table_new_id_not_null
CHECK (new_id IS NOT NULL) NOT VALID; -- delay validate for concurrency
ALTER TABLE my_table VALIDATE CONSTRAINT my_table_new_id_not_null;
-- locking
begin;
ALTER TABLE my_table alter column new_id set not null; -- needed for pkey
ALTER TABLE my_table drop constraint my_table_new_id_not_null;
ALTER SEQUENCE my_table_id_seq AS bigint;
alter table my_table drop constraint my_table_pk;
alter table my_table add constraint my_table_pk primary key using index my_table_pk_idx;
alter table my_table drop column id;
alter table my_table rename column new_id to id;
drop trigger my_table_fill_newid on my_table;
commit;
I tried #radek-postołowicz's solution, but it failed for me, as I needed to set the new_id column to NOT NULL, and that locks the table for a long time.
My solution:
1. Select the records from the old table and insert them into a new table my_table_new, with id as bigint. Run this as a standalone transaction.
2. In another transaction: repeat step 1 for the records that could have been created in the meantime, then drop my_table and rename my_table_new to my_table (see the sketch below).
The downside of this solution is that it auto-scaled the storage of my AWS RDS, and it could not be scaled back.
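A minimal sketch of that copy-table approach, assuming a serial id backed by my_table_id_seq and no foreign keys (names follow the earlier answers):
-- Transaction 1: bulk-copy existing rows into a new table with a bigint id.
BEGIN;
CREATE TABLE my_table_new (LIKE my_table INCLUDING ALL);
ALTER TABLE my_table_new ALTER COLUMN id SET DATA TYPE bigint;
INSERT INTO my_table_new SELECT * FROM my_table;
COMMIT;
-- Transaction 2: catch up on rows created in the meantime, then swap.
BEGIN;
LOCK TABLE my_table IN ACCESS EXCLUSIVE MODE;
INSERT INTO my_table_new
SELECT * FROM my_table t
WHERE NOT EXISTS (SELECT 1 FROM my_table_new n WHERE n.id = t.id);
ALTER SEQUENCE my_table_id_seq AS bigint;            -- widen the sequence too
ALTER SEQUENCE my_table_id_seq OWNED BY NONE;        -- detach so DROP TABLE keeps it
DROP TABLE my_table;
ALTER TABLE my_table_new RENAME TO my_table;
ALTER SEQUENCE my_table_id_seq OWNED BY my_table.id; -- reattach to the new id column
COMMIT;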

SQL alter column datatype from nvarchar to int

Can the datatype of a field be changed from nvarchar to int?
alter table employee alter column designation int
Is this valid? If not, can it be done in some other way?
P.S.: I am using MS SQL Server
You can try doing a direct ALTER TABLE. If it fails, do this:
Create a new column that's an integer:
ALTER TABLE tableName ADD newCol int;
Select the data from the old column into the new one:
UPDATE tableName SET newCol = CAST(oldCol AS int);
Drop the old column
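Something like, using the same placeholder names:
ALTER TABLE tableName DROP COLUMN oldCol;
You can then rename newCol to the old column's name with sp_rename if needed.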
This only works if the column is empty (NULL or blank). If the column contains nvarchar values that cannot be converted to int, the statement will fail with an error.
ALTER TABLE [table_name] ALTER COLUMN [column_name] [data_type]
Add new numeric column.
Copy from old char column to new column with trim and conversion.
Drop old char column.
Rename numeric column to old column name.
This worked for me (with decimals but I suppose it will work with ints):
alter table MyTable add MyColNum decimal(15,2) null
go
update MyTable set MyColNum=CONVERT(decimal(15,2), REPLACE(LTRIM(RTRIM(MyOldCol)), ',', '.')) where ISNUMERIC(MyOldCol)=1
go
alter table MyTable drop column MyOldCol
go
EXEC sp_rename 'MyTable.MyColNum', 'MyOldCol', 'COLUMN'
go
This can be done even more simply, in just two steps:
Update the column and set all non-numeric values to NULL so the ALTER won't fail.
Alter the table and set the type to INT.
UPDATE employee
SET designation = (CASE WHEN ISNUMERIC(designation)=1 THEN CAST(CAST(designation AS FLOAT) AS INT)END )
ALTER TABLE employee
ALTER COLUMN designation INT
This assumes the column allows NULLs. If not, that needs to be handled as well: for example, alter the column to allow NULL; then, after it has been converted to int, set all NULL values to 0 and alter the column back to NOT NULL.
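A sketch of that NULL-handling variant, reusing the example above (the NVARCHAR(100) length is an assumption - adjust it to the actual column definition):
ALTER TABLE employee ALTER COLUMN designation NVARCHAR(100) NULL; -- temporarily allow NULLs
GO
UPDATE employee
SET designation = (CASE WHEN ISNUMERIC(designation) = 1
THEN CAST(CAST(designation AS FLOAT) AS INT) END);
GO
ALTER TABLE employee ALTER COLUMN designation INT NULL;
GO
UPDATE employee SET designation = 0 WHERE designation IS NULL; -- replace remaining NULLs
GO
ALTER TABLE employee ALTER COLUMN designation INT NOT NULL;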
Create a temporary column:
ALTER TABLE MYTABLE ADD MYNEWCOLUMN DECIMAL(20,0) NULL;
Copy and cast the data from the old column to the new one:
UPDATE MYTABLE SET MYNEWCOLUMN = CAST(MYOLDCOLUMN AS DECIMAL(20,0));
Delete the old column:
ALTER TABLE MYTABLE DROP COLUMN MYOLDCOLUMN;
Rename the new column to match the old one's name:
EXEC sp_rename 'MYTABLE.MYNEWCOLUMN', 'MYOLDCOLUMN', 'COLUMN';
Can you try this?
alter table MyTable add MyColNum int null;

SQL Server, How to set auto increment after creating a table without data loss?

I have a table table1 in SQL Server 2008, and it has records in it.
I want the primary key table1_Sno column to be an auto-incrementing column. Can this be done without any data transfer or cloning of the table?
I know that I can use ALTER TABLE to add an auto-increment column, but can I simply add the AUTO_INCREMENT option to an existing column that is the primary key?
Changing the IDENTITY property is really a metadata-only change. But updating the metadata directly requires starting the instance in single-user mode and messing around with some columns in sys.syscolpars, and it is undocumented/unsupported and not something I would recommend or will give any additional details about.
For people coming across this answer on SQL Server 2012+, by far the easiest way of achieving an auto-incrementing column is to create a SEQUENCE object and set NEXT VALUE FOR that sequence as the column default.
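A minimal sketch of that approach (the sequence name and the START WITH value are illustrative - seed it above the current MAX(table1_Sno)):
CREATE SEQUENCE dbo.Seq_table1_Sno AS INT START WITH 1001 INCREMENT BY 1;
ALTER TABLE dbo.table1
ADD CONSTRAINT DF_table1_Sno DEFAULT (NEXT VALUE FOR dbo.Seq_table1_Sno) FOR table1_Sno;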
Alternatively, or for previous versions (from 2005 onwards), the workaround posted on this connect item shows a completely supported way of doing this without any size-of-data operations, using ALTER TABLE...SWITCH. Also blogged about on MSDN here. The code to achieve this is not very simple, though, and there are restrictions, such as that the table being changed can't be the target of a foreign key constraint.
Example code.
Set up test table with no identity column.
CREATE TABLE dbo.tblFoo
(
bar INT PRIMARY KEY,
filler CHAR(8000),
filler2 CHAR(49)
)
INSERT INTO dbo.tblFoo (bar)
SELECT TOP (10000) ROW_NUMBER() OVER (ORDER BY (SELECT 0))
FROM master..spt_values v1, master..spt_values v2
Alter it to have an identity column (more or less instant).
BEGIN TRY;
BEGIN TRANSACTION;
/*Using DBCC CHECKIDENT('dbo.tblFoo') is slow so use dynamic SQL to
set the correct seed in the table definition instead*/
DECLARE @TableScript nvarchar(max)
SELECT @TableScript =
'
CREATE TABLE dbo.Destination(
bar INT IDENTITY(' +
CAST(ISNULL(MAX(bar),0)+1 AS VARCHAR) + ',1) PRIMARY KEY,
filler CHAR(8000),
filler2 CHAR(49)
)
ALTER TABLE dbo.tblFoo SWITCH TO dbo.Destination;
'
FROM dbo.tblFoo
WITH (TABLOCKX,HOLDLOCK)
EXEC(@TableScript)
DROP TABLE dbo.tblFoo;
EXECUTE sp_rename N'dbo.Destination', N'tblFoo', 'OBJECT';
COMMIT TRANSACTION;
END TRY
BEGIN CATCH
IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
PRINT ERROR_MESSAGE();
END CATCH;
Test the result.
INSERT INTO dbo.tblFoo (filler,filler2)
OUTPUT inserted.*
VALUES ('foo','bar')
Gives
bar filler filler2
----------- --------- ---------
10001 foo bar
Clean up
DROP TABLE dbo.tblFoo
SQL Server: How to set auto-increment on a table with rows in it:
This strategy physically copies the rows around twice which can take a much longer time if the table you are copying is very large.
You could save out your data, drop and rebuild the table with the auto-increment and primary key, then load the data back in.
I'll walk you through with an example:
Step 1, create table foobar (without primary key or auto-increment):
CREATE TABLE foobar(
id int NOT NULL,
name nchar(100) NOT NULL,
)
Step 2, insert some rows
insert into foobar values(1, 'one');
insert into foobar values(2, 'two');
insert into foobar values(3, 'three');
Step 3, copy out foobar data into a temp table:
select * into temp_foobar from foobar
Step 4, drop table foobar:
drop table foobar;
Step 5, recreate your table with the primary key and auto-increment properties:
CREATE TABLE foobar(
id int primary key IDENTITY(1, 1) NOT NULL,
name nchar(100) NOT NULL,
)
Step 6, insert your data from temp table back into foobar
SET IDENTITY_INSERT foobar ON
INSERT INTO foobar (id, name) SELECT id, name FROM temp_foobar;
SET IDENTITY_INSERT foobar OFF
Step 7, drop your temp table, and check to see if it worked:
drop table temp_foobar;
select * from foobar;
You should get this, and when you inspect the foobar table, the id column auto-increments by 1 and is the primary key:
1 one
2 two
3 three
If you want to do this via the designer, you can do it by following the instructions here: "Save changes is not permitted" when changing an existing column to be nullable
Yes, you can. Go to Tools > Options > Designers > Table and Database Designers and uncheck "Prevent saving changes that require table re-creation".
No, you cannot add an auto-increment option to an existing column with data; I think the option you mentioned is the best.
Have a look here.
If you don't want to add a new column, and you can guarantee that your current int column is unique, you could select all of the data out into a temporary table, drop the table, and recreate it with the IDENTITY column specified. Then, using SET IDENTITY_INSERT ON, you can insert all of the data from the temporary table into the new table.
The script below can be a good solution; it worked on large data as well. Note that it drops the existing BOMID values and regenerates them.
ALTER DATABASE WMlive SET RECOVERY SIMPLE WITH NO_WAIT
ALTER TABLE WMBOMTABLE DROP CONSTRAINT PK_WMBomTable
ALTER TABLE WMBOMTABLE drop column BOMID
ALTER TABLE WMBOMTABLE ADD BomID int IDENTITY(1, 1) NOT NULL;
ALTER TABLE WMBOMTABLE ADD CONSTRAINT PK_WMBomTable PRIMARY KEY CLUSTERED (BomID);
ALTER DATABASE WMlive SET RECOVERY FULL WITH NO_WAIT
