In SSMS, are commands separated with GO guaranteed to be synchronous? - sql-server

If I execute the following script:
EXECUTE LongRunningSP1
GO
EXECUTE LongRunningSP2
GO
Assuming both procedures take several minutes, will the GO batching cause any concurrency to happen or is LongRunningSP1 guaranteed to finish before LongRunningSP2 starts?

The GO just splits your code into batches, but it won't cause any concurrency: all batches are executed one at a time, in the order they appear in the code.

LongRunningSP1 is guaranteed to finish before LongRunningSP2 with or without the GO in between; GO is a batch separator for the command processor.
It's easier to see what it does when using the command line utility SQLCMD.
SQLCMD
1> exec LongRunningSP1
-- nothing happens
2> exec LongRunningSP2
-- nothing happens
3> GO
-- both procs are run, first SP1, then SP2

Yes! GO will actually split the script into batches that are executed sequentially.
So it is LongRunningSP1 that gets completed first, ALWAYS!

GO is not a Transact-SQL statement; it is a command recognized by the sqlcmd and osql utilities and the SQL Server Management Studio code editor. It is a batch terminator; it will not change the order of your queries. You can, however, change it to whatever you want under Options.
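One way to convince yourself that GO is client-side only is to hand it to the engine as T-SQL - a minimal sketch you can run in any database:
-- GO is recognized by sqlcmd/osql/SSMS, not by the database engine.
-- Sent to the server inside a dynamic SQL string, it is just a syntax error:
EXEC ('GO')
-- fails with: Incorrect syntax near 'GO'.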
Here are a set of very simple, easy steps to customize the batch separator in SSMS:
Launch SSMS
Go to Tools –> Options
Click on the “Query Execution” node
Notice that we have an option to change the Batch Separator
Change the batch separator
Click “OK”
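For example, if you changed the batch separator to the (hypothetical) word SEPARATOR, the original script would be written as:
EXECUTE LongRunningSP1
SEPARATOR
EXECUTE LongRunningSP2
SEPARATOR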

Related

Will assigning a long string to an int stop SSMS processing and prevent a disastrous "naked" F5 from running wild

Today in SSMS I misplaced my pointer and clicked the Execute button instead of the Database drop-down (they're adjacent on the screen). Fortunately no damage done, but it scared me that I might have executed everything in the current query window since nothing was highlighted. I'd like to put a simple command at the top of any new query window which will stop an F5 type of execution in its tracks. This seemed to work:
UPDATE atable SET intfield = 'freakout prevention' WHERE tablekey = 123
UPDATE atable SET intfield = 55 WHERE tablekey = 123
where intfield is a column defined as an int. Running both lines resulted in
Conversion failed when converting the varchar value 'freakout prevention' to data type int.
Furthermore, the value of intfield was NOT set to 55.
So is this a fairly reliable (I don't need 100.00% here -- just enough to catch the rare time when I accidentally execute a "bare" F5) method of "prefixing" a query window to prevent wild execution when nothing is highlighted and the execute command is given?
If you don't have batches (e.g., GO), then you can put a RETURN as the first line.
If you need to prevent all batches from running, you can put a delay at the beginning of the script. Or you can add a bit of messaging....
I edited this answer to add some extra code to check for an open transaction. This has nothing to do with the original question, but I have found this to be a bigger issue with some developers.
RAISERROR('You ran me by accident. I will wait for an hour for you to kill me.', 10, 1) WITH NOWAIT
WHILE (1=1) BEGIN
WAITFOR DELAY '1:00:00'
RAISERROR('I''m still waiting. Please kill me. Here goes another hour...', 10, 1) WITH NOWAIT
END
GO
RAISERROR('OMG! Get the backups ready for a restore in production! Also, update the resume.', 16, 1) WITH NOWAIT
GO
BEGIN TRAN
GO
-- [Updated] Extra check - open transaction
WHILE ( @@TRANCOUNT > 0 ) BEGIN
RAISERROR('Execution is almost complete; however, a transaction is open. Please COMMIT or ROLLBACK after you kill me. Waiting 1 minute...', 10, 1) WITH NOWAIT
WAITFOR DELAY '0:01:00'
END
RAISERROR('Execution is complete.', 10, 1) WITH NOWAIT
Per the comment, one option is to add set noexec on to the top of the query window. This setting persists across batches. It is evaluated at execution time and can therefore be run conditionally (unlike many other set statements).
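For example, a minimal sketch of a conditional guard (the server name is a placeholder):
-- noexec is evaluated at execution time, so it can be applied conditionally.
-- Disable execution everywhere except a known development server:
IF @@SERVERNAME <> 'MY-DEV-BOX' -- placeholder name
    SET NOEXEC ON
PRINT 'Only reached when noexec stayed off.'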
As noted by Randy in Marin, this is still not completely safe because the script could contain a set noexec off. If you set noexec on, SQL Server will still execute the set noexec off (obviously, otherwise there wouldn't be a way to turn it off!), and then any subsequent statements would be executed.
Another option - and possibly an even better one - would be set parseonly on.
One difference between the two is that with set parseonly on the engine will literally do only that - parse the syntax. It won't actually do any "work". With set noexec on, statements will still produce plans, but the plans won't be executed. So set parseonly on is "cheaper" than set noexec on.
The other difference is that set parseonly on cannot be executed conditionally. That is to say, a line of code like if (1 = 0) set parseonly on will still result in parseonly being set to "on": the if is evaluated at execution time, but parseonly is applied at parse time, because evaluating it at execution time would defeat the point!
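A quick sketch of that parse-time behaviour:
-- parseonly is applied when the batch is parsed, not when it runs,
-- so the surrounding IF has no effect:
IF (1 = 0) SET PARSEONLY ON
SELECT 'never returned' -- the batch is parsed, but nothing executes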
And another difference: while parseonly will persist across batches, only one parseonly within each individual batch counts, and it's the last one. For example:
set parseonly on;
select 'hello';
set parseonly off;
go
This will return the result set "hello", because there is a parseonly off in the batch, and it is the last parseonly setting in the batch.
And of course, even with parseonly a similar danger applies: If the script has a set parseonly off, then some statements can still get executed. Not just the statements following, but even other statements that precede it in the same batch, if it is the last setting for parseonly in that batch.
Is there anything else you can do? Yes. Enter sqlcmd. The :on error exit sqlcmd directive will tell sqlcmd to stop executing anything if any kind of error occurs - batch terminating or otherwise.
What we do in our deployment scripts is this:
:on error exit
set xact_abort on;
begin tran;
-- migration script content here
commit;
You can do something similar here to avoid execution of anything in the script in a way that has no danger of being turned off: put the query window into sqlcmd mode, and put this at the top:
:on error exit
throw;
-- rest of script here
Now, the throw here won't actually throw an error, since we're not in a catch block. In fact, it is an error to have this throw here. But... that's all we need. The error will cause the :on error exit to terminate all further execution of the file. You could also just have a raiserror (...) instead, but that means more typing :P
Is this now a guaranteed solution? No, because what if you forget to put your window into sqlcmd mode? You can set windows to open in sqlcmd mode by default... but what if you turn it off? The first batch will fail (syntax error since the sqlcmd syntax won't be valid), but subsequent batches will execute.
You can of course combine both methods...
:on error exit
throw;
go
set noexec on;
-- ... rest of script with many batches
But as already described, it's still possible for statements to be executed if you are not in sqlcmd mode and there are set noexec off statements anywhere in the script.

xp_cmdshell command not executing last command when run as job

First off, before everybody shouts at me - I'm bug fixing in legacy code and a re-write is off the cards for now - I have to try to find a fix using the xp_cmdshell command.
I have a proc which is executed via a scheduled job. The proc is full of T-SQL like the example below to dump data to a log file.
SELECT *
INTO Temp
FROM MyView
SET @cmd1 = 'bcp "SELECT * FROM [myDatabase].dbo.Temp" queryout "C:\temp.txt" -T -c -t" "'
SET @cmd2 = 'type "C:\temp.txt" >> "C:\output.txt"'
EXEC master..xp_cmdshell @cmd1
EXEC master..xp_cmdshell @cmd2
DROP TABLE Temp
The problem is that the last of these commands in the proc doesn't appear to run. I can see the result in the temp.txt file, but not the output.txt file. All of the preceding commands work fine, though, and the whole thing works fine when I run it on its own.
Can anyone suggest why this might happen or suggest an alternative way to achieve this?
Thanks
I think that bcp, as an external process, runs asynchronously. So it could be that your file is not yet fully written at the moment you are trying to copy its content.
Suggestion 1: Include an appropriate wait time
Suggestion 2: Call your first command a second time with changed target file name
Suggestion 3: Use copy rather than type
You might create a file C:\temp.txt with just hello world in it. Try to type it into one file before the bcp and type it into another file after the bcp.
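A minimal sketch of Suggestion 1 applied to the proc in the question (the five-second delay is an arbitrary guess; tune it to your file sizes):
EXEC master..xp_cmdshell @cmd1 -- bcp export
WAITFOR DELAY '00:00:05'       -- give the external process time to finish writing the file
EXEC master..xp_cmdshell @cmd2 -- append the exported file to the combined log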

Execute stored procedure by passing the Script File(.sql) as Parameter

I have a stored procedure to which I'm passing a script file (a .sql file) as a parameter.
I want to know how the .sql file gets executed through a command (not the command prompt).
exec my_sp VersionNumber, SqlDeltaScript.sql (it is a file)
I want my stored procedure to execute SqlDeltaScript.sql
Can anyone please help regarding this ...
Thanks in advance ...
This does not sound like an ideal situation, but if you have to do this then you could use xp_cmdshell to run the sqlcmd utility to run your script file.
The xp_cmdshell SP must be enabled in order to use it - see Enable 'xp_cmdshell' SQL Server.
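A minimal sketch of that approach (server name, database name, and file path are placeholders; assumes Windows authentication):
-- Enable xp_cmdshell (requires the appropriate server permissions):
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'xp_cmdshell', 1
RECONFIGURE

-- Inside the proc, shell out to sqlcmd to run the script file:
DECLARE @cmd varchar(500)
SET @cmd = 'sqlcmd -S MyServer -d MyDatabase -E -i "C:\Scripts\SqlDeltaScript.sql"'
EXEC master..xp_cmdshell @cmd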

Suppress reorg rebuild sysmessages in sybase stored proc

I have a stored procedure in Sybase that uses a reorg rebuild statement in a loop for all the tables in my database. What I want to do is suppress the reorg rebuild sysmessages for the tables where the procedure succeeded, and only print the tables that were locked etc. - thus the problematic ones. The thing is that I did not manage to find anything to use in the manual or in any workshops... do you have any idea?
Thanks in advance!
If you are running the SQL with isql at a command prompt, you can always capture the output in a text file and filter it out with other tools.
Create a script to run the SQL in isql, and then use a script that calls a text-processing tool (awk, sed, ...) to find only the lines of interest.
Here is an example from a Windows batch file with a regex that removes lines that start with a space (i.e. rows-affected messages):
isql -SDBDEV1 -DMyDbName -U%DBLOG% -P%DBPWD% -iLoadBatchStats.sql -o%TEMP%\LoadBatchStats.log
type %TEMP%\LoadBatchStats.log | gawk "!/^[ ]/{print $0}" >>%TEMP%\LoadBatchSummary.log

sql server - setting variables at runtime

In Oracle you can use &&VAR_NAME in a script and then the script will ask you for that value when you run it.
In SQL Server you can use $(VAR_NAME) and reference a property file using:
:r c:/TEMP/sqlserver.properties
And in the property file you have something like:
:setvar VAR_NAME some_value
Can you do the equivalent of &&VAR_NAME, so the script asks you for the value when you run it instead of having the value predefined in a script?
If I've understood correctly, you're talking about variable substitution with the SQLCMD utility.
I don't see that SQLCMD supports the behaviour you describe.
An alternative would be to exploit the fact that SQLCMD will substitute the values of system or user environment variables (see the link above), and create a wrapper CMD script which prompts the user for the variable value(s) using SET with the /P flag.
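A minimal sketch of such a wrapper (script and variable names are placeholders; SQLCMD resolves $(VAR_NAME) references from environment variables):
@echo off
rem Prompt the user and expose the answer as an environment variable:
set /P VAR_NAME=Enter value for VAR_NAME: 
rem sqlcmd substitutes $(VAR_NAME) in the script from the environment:
sqlcmd -S MyServer -E -i myscript.sql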
There is nothing like this in SQL Server; you have to predefine all parameter values before using them, like this:
DECLARE @i SMALLINT
SET @i = 1
The problem with having a form pop up and ask you for the parameter is that you normally want rather more control over the nature of the form, even for an admin script. I'd use the variable substitution in SQLCMD, from within a Powershell or Python script so you can provide the guy running the script a better and more helpful form. That would make a very powerful combination.
You can do quite a lot with template variable substitution in SSMS, but that would only go so far as formulating the correct SQL to execute. You'd then have to bang the button. It would be a bit clunky outside the development environment!
