How do I make ALTER COLUMN idempotent? - sql-server

I have a migration script with the following statement:
ALTER TABLE [Tasks] ALTER COLUMN [SortOrder] int NOT NULL
What will happen if I run that twice? Will it change anything the second time? MS SQL Management Studio just reports "Command(s) completed successfully", but with no details on whether they actually did anything.
If it's not already idempotent, how do I make it so?

I would say that the second time around, SQL Server checks the metadata, sees that nothing has changed, and does nothing.
But if you don't like the possibility of multiple executions, you can add a simple condition to your script:
-- demo setup
CREATE TABLE Tasks (SortOrder VARCHAR(100));

IF NOT EXISTS (SELECT 1
               FROM INFORMATION_SCHEMA.COLUMNS
               WHERE [TABLE_NAME] = 'Tasks'
                 AND [COLUMN_NAME] = 'SortOrder'
                 AND IS_NULLABLE = 'NO'
                 AND DATA_TYPE = 'INT')
BEGIN
    ALTER TABLE [Tasks] ALTER COLUMN [SortOrder] INT NOT NULL
END
SqlFiddleDemo
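The same guard can be written against the catalog views instead of INFORMATION_SCHEMA. A minimal sketch, assuming the table lives in the dbo schema:

-- Run the ALTER only if the column is not already a NOT NULL int.
IF EXISTS (SELECT 1
           FROM sys.columns c
           JOIN sys.types t ON t.user_type_id = c.user_type_id
           WHERE c.object_id = OBJECT_ID(N'dbo.Tasks')
             AND c.name = N'SortOrder'
             AND (t.name <> N'int' OR c.is_nullable = 1))
BEGIN
    ALTER TABLE [Tasks] ALTER COLUMN [SortOrder] INT NOT NULL;
END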

When you execute it the second time, the statement runs again, but since the column already has the requested type and nullability, it has no effect on the table. Running the script twice changes nothing.
Here is a good MSDN read on the subject: Inside ALTER TABLE
Let's look at what SQL Server does internally when performing an ALTER TABLE command. SQL Server can carry out an ALTER TABLE command in any of three ways:
SQL Server might need to change only metadata.
SQL Server might need to examine all the existing data to make sure it's compatible with the change but then change only metadata.
SQL Server might need to physically change every row.

Related

How to find/see tempdb in SQL Server

Should I expect to be able to see tempdb tables in SSMS?
e.g. If I run this code, should I expect to be able to see the table in SSMS?
-- Drop the table if it already exists
IF OBJECT_ID('tempdb..#TempPasswords') IS NOT NULL
    DROP TABLE #TempPasswords
CREATE TABLE #TempPasswords (
    MemberNo INT,
    Password nvarchar(120),
    NewPassword varbinary(MAX)
)
Only if you run your code snippet in SSMS: within that session you will be able to execute
T-SQL that includes your temp table.
To make a temp table visible to all sessions (globally), use two hashes: ##.
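For instance, a minimal sketch (the table name is illustrative):

-- A global temp table (two hashes) is visible to every session until the
-- creating session ends and no other session still references it.
CREATE TABLE ##GlobalPasswords (MemberNo INT, Password NVARCHAR(120));
-- Any other connection can now read it:
SELECT MemberNo, Password FROM ##GlobalPasswords;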
You can also see the temp table in SSMS, but only its name: in Object Explorer, expand tempdb, then after running the query right-click on Temporary Tables and choose Refresh.
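If you prefer a query to clicking through Object Explorer, here is a sketch that lists the current temp tables by name:

-- Temp tables live in tempdb under padded internal names that still
-- start with the user-visible # prefix.
SELECT name, create_date
FROM tempdb.sys.tables
WHERE name LIKE '#%';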

Errors: "INSERT EXEC statement cannot be nested." and "Cannot use the ROLLBACK statement within an INSERT-EXEC statement." How to solve this?

I have three stored procedures Sp1, Sp2 and Sp3.
The first one (Sp1) will execute the second one (Sp2) and save returned data into #tempTB1 and the second one will execute the third one (Sp3) and save data into #tempTB2.
If I execute the Sp2 it will work and it will return me all my data from the Sp3, but the problem is in the Sp1, when I execute it it will display this error:
INSERT EXEC statement cannot be nested
I tried to change the place of execute Sp2 and it display me another error:
Cannot use the ROLLBACK statement
within an INSERT-EXEC statement.
This is a common issue when attempting to 'bubble up' data through a chain of stored procedures. A restriction in SQL Server is that you can only have one INSERT-EXEC active at a time. I recommend looking at How to Share Data Between Stored Procedures, which is a very thorough article on patterns to work around this type of problem.
For example, a workaround could be to turn Sp3 into a table-valued function, as sketched below.
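A rough sketch of that approach; the column list and source query are assumptions, since the question doesn't show Sp3's body:

-- An inline table-valued function standing in for Sp3:
CREATE FUNCTION dbo.fn_Sp3Data ()
RETURNS TABLE
AS
RETURN
(
    SELECT ID, Data FROM dbo.SourceTable
);
GO
-- Sp2 can then use a plain INSERT ... SELECT, so no INSERT-EXEC is nested:
INSERT INTO #tempTB2 (ID, Data)
SELECT ID, Data FROM dbo.fn_Sp3Data();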
This is the only "simple" way to do this in SQL Server without some giant convoluted user-defined function or executed SQL string call, both of which are terrible solutions:
create a temp table
OPENROWSET your stored procedure data into it
EXAMPLE:
INSERT INTO #YOUR_TEMP_TABLE
SELECT * FROM OPENROWSET ('SQLOLEDB','Server=(local);TRUSTED_CONNECTION=YES;','set fmtonly off EXEC [ServerName].dbo.[StoredProcedureName] 1,2,3')
Note: you MUST use 'set fmtonly off', and you CANNOT use dynamic SQL inside the OPENROWSET call, either for the string containing your stored procedure parameters or for the table name. That's why you have to use a temp table rather than a table variable, which would have been better, as table variables outperform temp tables in many cases.
OK, encouraged by jimhark, here is an example of the old single hash table approach:
CREATE PROCEDURE SP3 as
BEGIN
    SELECT 1, 'Data1'
    UNION ALL
    SELECT 2, 'Data2'
END
go
CREATE PROCEDURE SP2 as
BEGIN
    -- If the caller created #tmp1, it is in scope here: fill it.
    -- Otherwise just return the result set.
    if exists (select * from tempdb.dbo.sysobjects o where o.xtype in ('U') and o.id = object_id(N'tempdb..#tmp1'))
        INSERT INTO #tmp1
        EXEC SP3
    else
        EXEC SP3
END
go
CREATE PROCEDURE SP1 as
BEGIN
    EXEC SP2
END
GO
/*
--I want some data back from SP3
-- Just run the SP1
EXEC SP1
*/
/*
--I want some data back from SP3 into a table to do something useful
--Try run this - get an error - can't nest Execs
if exists (select * from tempdb.dbo.sysobjects o where o.xtype in ('U') and o.id = object_id(N'tempdb..#tmp1'))
DROP TABLE #tmp1
CREATE TABLE #tmp1 (ID INT, Data VARCHAR(20))
INSERT INTO #tmp1
EXEC SP1
*/
/*
--I want some data back from SP3 into a table to do something useful
--However, if we run this single hash temp table it is in scope anyway so
--no need for the exec insert
if exists (select * from tempdb.dbo.sysobjects o where o.xtype in ('U') and o.id = object_id(N'tempdb..#tmp1'))
DROP TABLE #tmp1
CREATE TABLE #tmp1 (ID INT, Data VARCHAR(20))
EXEC SP1
SELECT * FROM #tmp1
*/
My workaround for this problem has always been to use the principle that single-hash temp tables are in scope to any called procs. So I have an option switch in the proc parameters (default set to off). If this is switched on, the called proc will insert its results into the temp table created in the calling proc. I think in the past I took it a step further and put some code in the called proc to check if the single-hash table exists in scope; if it does, insert the results, otherwise return the result set. Seems to work well; it's the best way of passing large data sets between procs. A sketch of the pattern follows.
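A rough sketch of that pattern; procedure, table, and parameter names are all illustrative:

-- The called proc fills the caller's single-hash temp table when asked to
-- (and it exists); otherwise it returns an ordinary result set.
CREATE PROCEDURE dbo.ChildProc @FillCallerTable bit = 0
AS
BEGIN
    IF @FillCallerTable = 1
       AND OBJECT_ID('tempdb..#Results') IS NOT NULL
        INSERT INTO #Results (ID, Data)
        SELECT ID, Data FROM dbo.SourceTable;
    ELSE
        SELECT ID, Data FROM dbo.SourceTable;
END
GO
-- Caller:
CREATE TABLE #Results (ID INT, Data VARCHAR(20));
EXEC dbo.ChildProc @FillCallerTable = 1;
SELECT ID, Data FROM #Results;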
This trick works for me.
You don't have this problem on a remote server, because on a remote server the last INSERT command waits for the result of the previous command to execute. It's not the case on the same server.
Take advantage of that situation for a workaround.
If you have the right permission to create a linked server, do it.
Create the same server as a linked server:
in SSMS, log into your server
go to "Server Objects"
right-click on "Linked Servers", then "New Linked Server"
in the dialog, give your linked server any name, e.g. THISSERVER
server type is "Other data source"
Provider: Microsoft OLE DB Provider for SQL Server
Data source: your IP; it can also be just a dot (.), since it's localhost
go to the "Security" tab and choose the third option, "Be made using the login's current security context"
you can edit the server options (third tab) if you want
press OK; your linked server is created
Now your SQL command in SP1 is:
insert into #myTempTable
exec THISSERVER.MY_DATABASE_NAME.MY_SCHEMA.SP2
Believe me, it works even if you have dynamic INSERTs in SP2.
I found that a workaround is to convert one of the procs into a table-valued function. I realize that is not always possible, and it introduces its own limitations. However, I have always been able to find at least one of the procedures that is a good candidate for this. I like this solution because it doesn't introduce any "hacks" into the solution.
I encountered this issue when trying to import the results of a Stored Proc into a temp table, and that Stored Proc inserted into a temp table as part of its own operation. The issue being that SQL Server does not allow the same process to write to two different temp tables at the same time.
The accepted OPENROWSET answer works fine, but I needed to avoid using any Dynamic SQL or an external OLE provider in my process, so I went a different route.
One easy workaround I found was to change the temporary table in my stored procedure to a table variable. It works exactly the same as it did with a temp table, but no longer conflicts with my other temp table insert.
Just to head off the comment I know that a few of you are about to write, warning me off Table Variables as performance killers... All I can say to you is that in 2020 it pays dividends not to be afraid of Table Variables. If this was 2008 and my Database was hosted on a server with 16GB RAM and running off 5400RPM HDDs, I might agree with you. But it's 2020 and I have an SSD array as my primary storage and hundreds of gigs of RAM. I could load my entire company's database to a table variable and still have plenty of RAM to spare.
Table Variables are back on the menu!
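A minimal sketch of the swap described above, with illustrative names; inside the called procedure, the temp table becomes a table variable:

-- Before: CREATE TABLE #Work (ID INT, Data VARCHAR(20));
-- After: a table variable, which no longer conflicts with the caller's
-- own INSERT ... EXEC temp-table load.
DECLARE @Work TABLE (ID INT, Data VARCHAR(20));
INSERT INTO @Work (ID, Data)
SELECT ID, Data FROM dbo.SourceTable;
SELECT ID, Data FROM @Work;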
I recommend reading this entire article. Below is the most relevant section of that article that addresses your question:
Rollback and Error Handling is Difficult
In my articles on Error and Transaction Handling in SQL Server, I suggest that you should always have an error handler like
BEGIN CATCH
    IF @@trancount > 0 ROLLBACK TRANSACTION
    EXEC error_handler_sp
    RETURN 55555
END CATCH
The idea is that even if you do not start a transaction in the procedure, you should always include a ROLLBACK, because if you were not able to fulfil your contract, the transaction is not valid.
Unfortunately, this does not work well with INSERT-EXEC. If the called procedure executes a ROLLBACK statement, this happens:
Msg 3915, Level 16, State 0, Procedure SalesByStore, Line 9 Cannot use the ROLLBACK statement within an INSERT-EXEC statement.
The execution of the stored procedure is aborted. If there is no CATCH handler anywhere, the entire batch is aborted, and the transaction is rolled back. If the INSERT-EXEC is inside TRY-CATCH, that CATCH handler will fire, but the transaction is doomed, that is, you must roll it back. The net effect is that the rollback is achieved as requested, but the original error message that triggered the rollback is lost. That may seem like a small thing, but it makes troubleshooting much more difficult, because when you see this error, all you know is that something went wrong, but you don't know what.
I had the same issue and the same concern over duplicate code in two or more sprocs. I ended up adding an additional parameter for "mode". This allowed common code to exist inside one sproc, with the mode directing the flow and result set of the sproc.
What about just storing the output in a static table? Like:
-- SubProcedure: subProcedureName
---------------------------------
-- Save the value
DELETE lastValue_subProcedureName
INSERT INTO lastValue_subProcedureName (Value)
SELECT @Value

-- Return the value
SELECT @Value

-- Procedure
--------------------------------------------
-- get last value of subProcedureName
SELECT Value FROM lastValue_subProcedureName
It's not ideal, but it's so simple and you don't need to rewrite everything.
UPDATE:
The previous solution does not work well with parallel queries (async and multi-user access), therefore I am now using temp tables:
-- A local temporary table created in a stored procedure is dropped automatically when the stored procedure is finished.
-- The table can be referenced by any nested stored procedures executed by the stored procedure that created the table.
-- The table cannot be referenced by the process that called the stored procedure that created the table.
IF OBJECT_ID('tempdb..#lastValue_spGetData') IS NULL
    CREATE TABLE #lastValue_spGetData (Value INT)

-- trigger stored procedure with special silent parameter
EXEC dbo.spGetData 1 --silent mode parameter
Nested spGetData stored procedure content:
-- Save the output if the temporary table exists.
IF OBJECT_ID('tempdb..#lastValue_spGetData') IS NOT NULL
BEGIN
    DELETE #lastValue_spGetData
    INSERT INTO #lastValue_spGetData (Value)
    SELECT Col1 FROM dbo.Table1
END

-- stored procedure return
IF @silentMode = 0
    SELECT Col1 FROM dbo.Table1
Declare an output cursor variable in the inner sp:
@c CURSOR VARYING OUTPUT
Then declare a cursor c for the SELECT you want to return.
Then open the cursor.
Then set the reference:
DECLARE c CURSOR LOCAL FAST_FORWARD READ_ONLY FOR
SELECT ...
OPEN c
SET @c = c
DO NOT close or deallocate it.
Now call the inner sp from the outer one, supplying a cursor parameter like:
exec sp_abc a, b, c, @cOut OUTPUT
Once the inner sp executes, your @cOut is ready to fetch. Loop, then close and deallocate.
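Putting the pieces together, a self-contained sketch (procedure names and the SELECT are illustrative):

CREATE PROCEDURE dbo.InnerProc @c CURSOR VARYING OUTPUT
AS
BEGIN
    -- Declare and open the cursor, then hand the reference to the caller.
    DECLARE c CURSOR LOCAL FAST_FORWARD READ_ONLY FOR
        SELECT name FROM sys.objects;
    OPEN c;
    SET @c = c;   -- do NOT close or deallocate here
END
GO
CREATE PROCEDURE dbo.OuterProc
AS
BEGIN
    DECLARE @cOut CURSOR, @name sysname;
    EXEC dbo.InnerProc @c = @cOut OUTPUT;
    FETCH NEXT FROM @cOut INTO @name;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        PRINT @name;
        FETCH NEXT FROM @cOut INTO @name;
    END
    CLOSE @cOut;
    DEALLOCATE @cOut;
END
GO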
If you are able to use other associated technologies such as C#, I suggest using the built-in SqlCommand with a transaction parameter.
var sqlCommand = new SqlCommand(commandText, null, transaction);
I've created a simple Console App that demonstrates this ability which can be found here:
https://github.com/hecked12/SQL-Transaction-Using-C-Sharp
In short, C# allows you to overcome this limitation: you can inspect the output of each stored procedure and use that output however you like, for example feeding it to another stored procedure. If the output is OK, you can commit the transaction; otherwise, you can revert the changes using a rollback.
On SQL Server 2008 R2, I had a mismatch in table columns that caused the Rollback error. It went away when I fixed my sqlcmd table variable, populated by the INSERT-EXEC statement, to match the columns returned by the stored proc. It was missing org_code. In a Windows cmd file, this loads the result of the stored procedure and selects it:
set SQLTXT= declare @resets as table (org_id nvarchar(9), org_code char(4), ^
tin char(9), old_strt_dt char(10), strt_dt char(10)); ^
insert @resets exec rsp_reset; ^
select * from @resets;
sqlcmd -U user -P pass -d database -S server -Q "%SQLTXT%" -o "OrgReport.txt"

Moving all non-clustered indexes to another filegroup in SQL Server

In SQL Server 2008, I want to move ALL non-clustered indexes in a DB to a secondary filegroup. What's the easiest way to do this?
Run this updated script to create a stored procedure called MoveIndexToFileGroup. This procedure moves all the non-clustered indexes on a table to a specified file group. It even supports the INCLUDE columns that some other scripts do not. In addition, it will not rebuild or move an index that is already on the desired file group. Once you've created the procedure, call it like this:
EXEC MoveIndexToFileGroup @DBName = '<your database name>',
    @SchemaName = '<schema name that defaults to dbo>',
    @ObjectNameList = '<a table or list of tables>',
    @IndexName = '<an index or NULL for all of them>',
    @FileGroupName = '<the target file group>';
To create a script that will run this for each table in your database, switch your query output to text, and run this:
SELECT 'EXEC MoveIndexToFileGroup '''
+TABLE_CATALOG+''','''
+TABLE_SCHEMA+''','''
+TABLE_NAME+''',NULL,''the target file group'';'
+char(13)+char(10)
+'GO'+char(13)+char(10)
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
ORDER BY TABLE_SCHEMA, TABLE_NAME;
Please refer to the original blog for more details. I did not write this procedure, but updated it according to the blog's responses and confirmed it works on both SQL Server 2005 and 2008.
Updates
@psteffek modified the script to work on SQL Server 2012. I merged his changes.
The procedure fails when your table has the IGNORE_DUP_KEY option on. No fix for this yet.
@srutzky pointed out the procedure does not guarantee to preserve the order of an index and made suggestions on how to fix it. I updated the procedure accordingly.
ojiNY noted the procedure left out index filters (for compatibility with SQL 2005). Per his suggestion, I added them back in.
Script them, change the ON clause, drop them, re-run the new script. There is no alternative really.
Luckily, there are scripts on the Interwebs such as this one that will deal with scripting for you.
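To make the "change the ON clause" step concrete, here is a sketch with illustrative index, table, and filegroup names:

-- Scripted DROP and CREATE, with the CREATE retargeted to the new filegroup:
DROP INDEX IX_Orders_CustomerId ON dbo.Orders;
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId)
    ON [SECONDARY];   -- was ON [PRIMARY]

The same move can also be done in one statement with CREATE INDEX ... WITH (DROP_EXISTING = ON) targeting the new filegroup.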
Update: doing step 2 manually will take a long time if you are using SSMS 2008 R2 or earlier. I used SQL Server Management Studio 2014, and it works well, because the way it exports the DROP and CREATE of each index is easy to modify.
I tried to run the script in SQL Server 2014 and hit some issues. I was too lazy to track down the problems, so I came up with another solution that does not depend on the version of SQL Server you are running:
1. Export your indexes (with DROP and CREATE).
2. Update the script: remove everything related to dropping and creating tables, keep only what belongs to the indexes, and replace the original filegroup with the new one (in my case, I replaced ON [PRIMARY] with ON [SECONDARY]).
3. Run the script and wait until it is done.
(You may want to save the script to run in some other environments.)

Procedures using IDENTITY column fail with primary key violation after restoring sql 2000 backup onto sql 2008

I've just moved a database from a SQL 2000 instance to a SQL 2008 instance and have encountered an odd problem which appears to be related to IDENTITY columns and stored procedures.
I have a number of stored procedures in the database along the lines of this
create procedure usp_add_something @somethingId int, @somethingName nvarchar(100)
with encryption
as
-- If there's an ID then update the record
if @somethingId <> -1 begin
    UPDATE something SET somethingName = @somethingName
    WHERE somethingId = @somethingId
end else begin
    -- Add a new record
    INSERT INTO something ( somethingName ) VALUES ( @somethingName )
end
go
These are all created as ENCRYPTED stored procedures. The id column (e.g. somethingId in this example) is an IDENTITY(1,1) with a PRIMARY KEY on it, and there are lots of rows in these tables.
Upon restoring onto the SQL 2008 instance a lot of my database seems to be working fine, but calls like
exec usp_add_something @somethingId = -1, @somethingName = 'A Name'
result in an error like this:
Violation of PRIMARY KEY constraint 'Something_PK'. Cannot insert duplicate key in object 'dbo.something'.
It seems that something is messed up that either causes SQL Server to not allocate the next IDENTITY correctly...or something like that. This is very odd!
I'm able to INSERT into the table directly without specifying the id column and it allocates an id just fine for the identity column.
There are no records with somethingId = -1 ... not that that should make any difference.
If I drop and recreate the procedure the problem goes away. But I have lots of these procedures so don't really want to do that in case I miss some or there is a customized procedure in the database that I overwrite.
Does anyone know of any known issues to do with this? (and a solution ideally!)
Is there a different way I should be moving my sql 2000 database to the sql 2008 instance? e.g. is it likely that Detach and Attach would behave differently?
I've tried recompiling the procedure using sp_recompile 'usp_add_something' but that didn't solve the problem, so I can't simply call that on all procedures.
thanks for any help
R
(cross-posted here)
If the problem is an improperly set identity seed, you can reset a table this way:
DBCC CHECKIDENT (TableName, RESEED, 0);
DBCC CHECKIDENT (TableName, RESEED);
This will automatically find the highest value in the table and set the seed appropriately so you don't have to do a SELECT Max() query. Now fixing the table can be done in automation, without dynamic SQL or manual script writing.
But you said you can insert to the table directly without a problem, so it's probably not the issue. But I wanted to post to set the record straight about the easy way to reset the identity seed.
Note: if your table's increment is negative, or if you have in the past reseeded to use up the negative numbers (starting at the lowest) after consuming all the positive ones, all bets are off. Especially in the latter case (a positive increment, but identity values lower than others already in the table), you never want to run DBCC CHECKIDENT without specifying NORESEED, because plain DBCC CHECKIDENT (TableName); will screw up your identity value. You must use DBCC CHECKIDENT (TableName, NORESEED). Fun times will ensue if you forget this. :)
First, check the maximum ID from your table:
select max(id_column) from YourTable
Then, check the current identity seed:
select ident_seed('YourTable')
If the current seed is lower than the maximum, reseed the table with dbcc checkident:
DBCC CHECKIDENT (YourTable, RESEED, 42)
Where 42 is the current maximum.
Demonstration code for how this can go wrong:
create table YourTable (id int identity primary key, name varchar(25))
DBCC CHECKIDENT (YourTable, RESEED, 42)
insert into YourTable (name) values ('Zaphod Beeblebrox')
DBCC CHECKIDENT (YourTable, RESEED, 41)
insert into YourTable (name) values ('Ford Prefect') --> Violation of PRIMARY KEY
I tried and was unable to replicate this on another server.
However, on my Live servers I dropped the problem database from sql 2008 and recreated it using a detach and reattach and this worked fine, without these PRIMARY KEY VIOLATION errors.
Since I wanted to keep the original database live, in fact my exact steps were:
back up sourceDb and restore as sourceDbCopy on the same instance
take sourceDbCopy offline
move the sourceDbCopy files to the new server
attach the database
rename the database to the original name
If recreating the procedures helps, here's an easy way to generate a recreation script:
Right click database -> Tasks -> Generate scripts
On page 2 ("Choose Objects") select the stored procedures
On page 3 ("Set Scripting Options"), choose Advanced and set "Script DROP and CREATE" to "Script DROP and CREATE".
Save the script somewhere and run it

SQL Server parallels to Oracle Alter Table Set Column Unused?

Is there a parallel in Microsoft SQL Server (2005, preferably) for the Oracle functionality of setting a column to be unused? For example:
ALTER TABLE Person SET UNUSED (Nickname);
Nothing turns up in searches, so my thought is this feature must be Oracle specific.
Don't think there's anything like that in SQL Server.
You could create a 1:1 relation to a new table containing the hidden columns:
insert into NewTable (keycol, Nickname)
select keycol, Nickname from ExistingTable

alter table ExistingTable drop column Nickname
That way you still have the data, but the column is in a table nobody knows about.
Alternatively, you could use column level permissions:
DENY SELECT (Nickname) ON ExistingTable TO domain\user
DENY SELECT (Nickname) ON ExistingTable TO public
...
This will return an error when someone tries to read the column. The big disadvantage of this method is that select * will also fail.
There is no equivalent statement, but depending on your need you could probably write a trigger that rolls back any changes made; a rough sketch follows.
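A sketch of such a trigger, using the table and column from the question (the trigger name and error text are illustrative):

-- Reject any UPDATE that touches the "unused" column.
CREATE TRIGGER trg_Person_NicknameUnused ON Person
AFTER UPDATE
AS
BEGIN
    IF UPDATE(Nickname)
    BEGIN
        RAISERROR('Column Nickname is unused.', 16, 1);
        ROLLBACK TRANSACTION;
    END
END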
