SQL Server: Rename column after dropping another one without GO

Is this the right way to make sure that a column of my table is only renamed after another column is dropped?
ALTER TABLE mytbl DROP COLUMN tmpcol
EXEC sp_rename 'mytbl.tmpcol2', 'tmpcol', 'COLUMN'
I'm not allowed to use the "GO"-separator.
I've tested the above two lines for a bunch of different table sizes.
It worked as I expected, i.e. the second line is only executed after the first one.
But how can I make sure that this will be the execution order for any table?
Is this guaranteed by EXEC?

SQL Server executes the statements in a batch sequentially, from top to bottom; there is no chance that the EXEC can run before the ALTER TABLE has completed.
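If you also want the drop and the rename to succeed or fail together, you can wrap both statements in a transaction within the same batch. A minimal sketch, using the names from the question:
BEGIN TRANSACTION;
-- Statement 1 runs to completion first...
ALTER TABLE mytbl DROP COLUMN tmpcol;
-- ...and only then is statement 2 executed.
EXEC sp_rename 'mytbl.tmpcol2', 'tmpcol', 'COLUMN';
COMMIT TRANSACTION;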

Related

Procedure for Altering Table and updating keeps failing with invalid column

I'm trying to add new columns to a table then update the table and set the new column with a date format change of the old column.
I have my procedure set out as follows:
begin
alter table [dbo].[mytable]
add New_Field1 varchar(24)
end
......
update [dbo].[SMR06_TARGET]
set New_Field1 = convert(varchar(24),Old_Field1,103)
.....
I have multiple ALTER TABLE statements at the top of the procedure and UPDATE statements at the bottom for each new column. I think this is a rule with SQL, keeping DDL at the top and DML at the bottom.
OK, so every time I execute this to create the procedure, it fails with "incorrect column name New_Field1". I really can't pin down what is causing this. I've tried different variations of BEGIN...END and tried commenting out the apparently offending statement; then it runs, then it fails again with the next statement.
I'm reckoning it's something to do with the way the statement(s) are terminated. I'm not sure, as I haven't done this type of procedure before with mixed DDL/DML.
Any hints would be most welcome.
Thanks
Andrew
You need to batch the statement that adds the column separately from the statement that updates it.
BEGIN TRANSACTION
GO
ALTER TABLE [dbo].[mytable]
ADD New_Field1 varchar(24) NULL
GO
UPDATE [dbo].[mytable]
SET New_Field1 = convert(varchar(24),Old_Field1,103)
GO
COMMIT
The entire batch is reviewed by the parser before it starts executing the first line. Adding New_Field1 is in the same batch as the reference to New_Field1. At the time the parser considers the statement containing New_Field1, the statement to add New_Field1 has not been executed, so that column does not yet exist.
If you're running in SSMS, include GO between each statement to force multiple batches. If you're running this in another tool that can't use GO, you'll need to submit each statement individually to ensure that they are fully executed before the next step is parsed.
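If you're stuck in a single batch (for example, inside a stored procedure, where GO is not allowed), a common workaround is to defer parsing of the UPDATE by wrapping it in dynamic SQL. A minimal sketch using the table and columns from the question:
ALTER TABLE [dbo].[mytable]
ADD New_Field1 varchar(24) NULL;
-- The string is not parsed until EXEC runs, i.e. after the
-- ALTER TABLE above has already added the column.
EXEC sp_executesql
    N'UPDATE [dbo].[mytable]
      SET New_Field1 = CONVERT(varchar(24), Old_Field1, 103);';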

How do I make ALTER COLUMN idempotent?

I have a migration script with the following statement:
ALTER TABLE [Tasks] ALTER COLUMN [SortOrder] int NOT NULL
What will happen if I run that twice? Will it change anything the second time? MS SQL Management Studio just reports "Command(s) completed successfully", but with no details on whether they actually did anything.
If it's not already idempotent, how do I make it so?
I would say that the second time, SQL Server checks the metadata and does nothing, because nothing has changed.
But if you don't like the possibility of multiple executions, you can add a simple condition to your script:
CREATE TABLE Tasks(SortOrder VARCHAR(100));
IF NOT EXISTS (SELECT 1
FROM INFORMATION_SCHEMA.COLUMNS
WHERE [TABLE_NAME] = 'Tasks'
AND [COLUMN_NAME] = 'SortOrder'
AND IS_NULLABLE = 'NO'
AND DATA_TYPE = 'INT')
BEGIN
ALTER TABLE [Tasks] ALTER COLUMN [SortOrder] INT NOT NULL
END
When you execute it the second time, the statement still runs, but since the column already has the requested definition, it has no effect on the table. Nothing changes when the script executes twice.
Here is a good MSDN read about it: Inside ALTER TABLE
Let's look at what SQL Server does internally when performing an ALTER TABLE command. SQL Server can carry out an ALTER TABLE command in any of three ways:
SQL Server might need to change only metadata.
SQL Server might need to examine all the existing data to make sure it's compatible with the change, but then change only metadata.
SQL Server might need to physically change every row.
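For instance, here is a small illustration of the second case (the temp table is made up for the demo): altering a nullable int column to NOT NULL forces SQL Server to scan the existing rows to verify none are NULL, but the rows themselves are not rewritten, and repeating the identical ALTER is a no-op, which is why the statement in the question is effectively idempotent.
CREATE TABLE #demo (SortOrder int NULL);
INSERT INTO #demo VALUES (1), (2);
-- SQL Server checks every existing row for NULLs, then changes only metadata.
ALTER TABLE #demo ALTER COLUMN SortOrder int NOT NULL;
-- Running the identical ALTER again: the metadata already matches, nothing to do.
ALTER TABLE #demo ALTER COLUMN SortOrder int NOT NULL;
DROP TABLE #demo;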

Multiple Go in stored procedure

I'd like to write a stored procedure and store it in a SQL Server database. The procedure is supposed to remove all tables regardless of dependency constraints.
CREATE PROCEDURE sp_clear_db AS
BEGIN
EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL';
EXEC sp_MSForEachTable 'DROP TABLE ?';
END
However, when I call sp_helptext @objname = 'dbo.sp_clear_db', only the first EXEC statement is shown. I assume that in order to execute the first statement, a GO has to be called. But a GO as part of the stored procedure definition won't work either. Does anyone know a way to fix this? Maybe there is another, better option to achieve the same...
Cheers,
Max
You can't have "GO" in a stored procedure. (http://msdn.microsoft.com/en-us/library/ms188037.aspx) GO is used by client tools such as Query Analyzer and SSMS to separate statements into "batches", which are then sent to SQL Server. So you'd need to make two separate calls, one for the ALTER statements and one for the DROPs.
Ideally you would just call "DROP DATABASE", unless you are trying to keep your stored procs and then re-create the tables.
Another solution would be to use a cursor to loop through each row in sys.tables where type = 'U' and generate some dynamic SQL to remove the constraints and drop the tables; see the sketch below.
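A minimal sketch of that dynamic-SQL idea (written with set-based string concatenation rather than an explicit cursor; it assumes you have permission to drop every foreign key and user table in the current database):
DECLARE @sql nvarchar(max) = N'';
-- Drop every foreign key first, so the tables can be dropped in any order.
SELECT @sql += N'ALTER TABLE '
    + QUOTENAME(OBJECT_SCHEMA_NAME(parent_object_id)) + N'.'
    + QUOTENAME(OBJECT_NAME(parent_object_id))
    + N' DROP CONSTRAINT ' + QUOTENAME(name) + N';'
FROM sys.foreign_keys;
EXEC sp_executesql @sql;
-- Then drop all user tables.
SET @sql = N'';
SELECT @sql += N'DROP TABLE ' + QUOTENAME(SCHEMA_NAME(schema_id))
    + N'.' + QUOTENAME(name) + N';'
FROM sys.tables
WHERE type = 'U';
EXEC sp_executesql @sql;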
Don't use sp_helptext. Use OBJECT_DEFINITION or sys.sql_modules
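For example, to see the full procedure body regardless of its length:
SELECT OBJECT_DEFINITION(OBJECT_ID(N'dbo.sp_clear_db'));
-- or, equivalently:
SELECT definition
FROM sys.sql_modules
WHERE object_id = OBJECT_ID(N'dbo.sp_clear_db');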

SQL Server equivalent of MySQL Dump to produce insert statements for all data in a table

I have an application that uses a SQL Server database with several instances of the database...test, prod, etc... I am making some application changes and one of the changes involves changing a column from a nvarchar(max) to a nvarchar(200) so that I can add a unique constraint on it. SQL Server tells me that this requires dropping the table and recreating it.
I want to put together a script that will do the table drop, recreate it with the new schema, and then reinsert the data that was there previously all in one go, if possible, just to keep things simple for use when I migrate this change to production.
There is probably a good SQL Server way to do this, but I'm just not aware of it. If I were using MySQL, I would mysqldump the table and its contents, and use that as my script for applying the change to production. I can't find any export functionality in SQL Server that will give me a text file consisting of INSERT statements for all the data in a table.
Use SQL Server's Generate Scripts command:
1. Right-click on the database; Tasks -> Generate Scripts.
2. Select your tables, click Next.
3. Click the Advanced button.
4. Find "Types of data to script" and choose "Schema and data".
5. You can then choose to save to a file, or put the script in a new query window.
This results in INSERT statements for all the table data selected in step 2.
No need to script; here are two ways.
1. Use ALTER TABLE ... ALTER COLUMN.
Example (you have to do one column at a time):
create table Test(SomeColumn nvarchar(max))
go
alter table Test alter column SomeColumn nvarchar(200)
go
2. Dump into a new table while converting the column:
select <columns except for the columns you want to change>,
convert(nvarchar(200),YourColumn) as YourColumn
into SomeNewTable
from OldTable
Then drop the old table and rename the new table to the old table's name:
EXEC sp_rename 'SomeNewTable', 'OldTable';
Now add your index
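For instance, the unique constraint the question was originally after could now be added like this (a sketch reusing the placeholder names from above; the constraint name is made up):
ALTER TABLE OldTable
ADD CONSTRAINT UQ_OldTable_YourColumn UNIQUE (YourColumn);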

Bulk copy of data from one column to another in SQL Server

I want to copy the value of one column to another column in SQL Server. This operation needs to be carried out across the whole DB which has 200M rows. My query syntax is:
UPDATE [Net].[dbo].[LINK]
SET [LINK_ERRORS] = [OLD_LINK_ERRORS]
However, very soon I exhaust the transaction log and the query aborts. What's the best way to initiate this in batches?
Thanks,
Updating 200M rows in a single transaction is not a good idea.
You could either select all of the data into a new table, copying OLD_LINK_ERRORS into LINK_ERRORS in the SELECT (leave the old LINK_ERRORS column out of the list, or SELECT ... INTO will fail with a duplicate column name),
select <every column except LINK_ERRORS>, OLD_LINK_ERRORS as LINK_ERRORS into LINK_tmp from LINK
GO
exec sp_rename 'LINK', 'LINK_bkp'
GO
exec sp_rename 'LINK_tmp', 'LINK'
GO
drop table LINK_bkp
or if the next thing you're going to do is null out the original OLD_LINK_ERRORS column, you could do something like this:
sp_rename 'LINK.OLD_LINK_ERRORS', 'LINK_ERRORS', 'COLUMN'
GO
ALTER TABLE LINK ADD OLD_LINK_ERRORS <data type>
GO
Multiple batched updates might work:
update dbo.LINK
set LINK_ERRORS=OLD_LINK_ERRORS
where ID between 1 and 1000000
update dbo.LINK
set LINK_ERRORS=OLD_LINK_ERRORS
where ID between 1000001 and 2000000
etc...
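A sketch of the same idea as a loop, so each range doesn't have to be written out by hand (it assumes, like the batches above, a numeric ID column; the batch size is arbitrary):
DECLARE @lo bigint = 1,
        @batch bigint = 1000000,
        @max bigint;
SELECT @max = MAX(ID) FROM dbo.LINK;
WHILE @lo <= @max
BEGIN
    -- Each iteration commits on its own, so the transaction log can be
    -- truncated (SIMPLE recovery) or backed up (FULL recovery) between
    -- batches instead of growing for all 200M rows at once.
    UPDATE dbo.LINK
    SET LINK_ERRORS = OLD_LINK_ERRORS
    WHERE ID BETWEEN @lo AND @lo + @batch - 1;
    SET @lo = @lo + @batch;
END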
I would consider doing this in SSIS where you can easily control the batch (transaction) size and take advantage of bulk operations SSIS provides. Of course, this may not work if you need a programmatic solution. This would be a very trivial SSIS operation.
