Terminate IF block in Transact-SQL - sql-server

This is probably quite a rookie question, but I can't find examples on MSDN of how to terminate an IF/ELSE block in Transact-SQL. For example, I want to create a stored procedure that does this:
USE [MyDb];
GO
SET ANSI_NULLS ON;
GO
SET QUOTED_IDENTIFIER ON;
GO
CREATE PROCEDURE [dbo].[usp_MyStoredProc]
@param1 DATE, @param2 INT, @param3 BIT, @param4 [dbo].[CustomDataType] READONLY
WITH EXECUTE AS CALLER
AS
-- Only run this part if @param3 is true
IF (@param3 = 1)
BEGIN
    --do stuff...
END
-- I don't really need an else here, as I'm simply doing an
-- extra step at the beginning if my param tells me to
-- I always want this next part to run...
-- do more stuff...
GO
I have a solution that I think works (posted as answer below), but I feel like I'm working around my lack of knowledge instead of doing the best pattern.
Edit- I deleted my first solution as it was a horrible idea and didn't want others to try implementing that. Turns out the question was actually the solution in this case; thanks to all.

The syntax for the IF statement is
IF Boolean_expression
{ sql_statement | statement_block }
[ ELSE
{ sql_statement | statement_block } ]
The ELSE part is optional.
A "sql_statement" is any single statement. A "statement_block" is a group of statements wrapped in BEGIN and END.
More details: http://msdn.microsoft.com/en-us/library/ms182717.aspx
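For instance (a minimal sketch; the @qty variable is a hypothetical example, not from the question):

```sql
-- Single statement: no BEGIN/END needed
IF @qty > 0
    PRINT 'in stock';

-- Statement block: several statements wrapped in BEGIN/END, with an optional ELSE
IF @qty > 0
BEGIN
    PRINT 'in stock';
    PRINT 'ready to ship';
END
ELSE
    PRINT 'out of stock';
```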

There doesn't have to be an ELSE statement. Whatever's between the BEGIN/END runs if the IF condition is met, and then your SQL continues normally after that. It's the same thing as:
If (x == 1) {
// condition met, do stuff
}
// do more stuff
You were really close. Just needed some tweaking.
IF (@param3 = 1)
BEGIN
    -- do stuff if a param tells me to (has to be done before the main block of code runs below)
END
-- do stuff that should always run.
GO
Do yourself and your team a favor: never use a GOTO unless it's 100% necessary. It is usually unnecessary in well-structured code.

Related

BEGIN..END not working in Nested IF..ELSE

I am confused by the BEGIN..END in the nested IF..ELSE condition.
For example, when I try to execute the query below, it returns an error: "Incorrect syntax near 'ELSE'".
IF ( ISNULL(@tin,'')=''AND ISNULL(@prpr_ntwrk,'')<>'' )
BEGIN
    IF (ISNULL(@prpr_ntwrk,'')='P')
    BEGIN
        -- CODE
    END
    ELSE
    BEGIN
        -- CODE
    END
END
Please suggest.
The code below executes on ASE 15.7.0
DECLARE @tin varchar, @prpr_ntwrk varchar
IF ( ISNULL(@tin,'')='' AND ISNULL(@prpr_ntwrk,'')<>'' )
BEGIN
    IF (ISNULL(@prpr_ntwrk,'')='P')
    BEGIN
        SELECT "P"
    END
    ELSE
    BEGIN
        SELECT "Not P"
    END
END
Note that ASE gets confused with the empty stubs "-- Code" in the original example.
Your first IF statement may not do what you want, but I am sticking to the question asked.
Here are my recommendations:
1. Add more whitespace in this part of your code, so it is legible and avoids problems:
IF ( ISNULL(@tin,'')=''AND ISNULL(@prpr_ntwrk,'')<>'' )
2. Insert a new line after the last END.
3. Run everything in a terminal with isql, because interactive SQL can be buggy sometimes.
4. Please post the code between the IF and the END.
Hope it helps.

Constraints check: TRY/CATCH vs Exists()

I have a table with unique constraint on it:
create table dbo.MyTab
(
MyTabID int primary key identity,
SomeValue nvarchar(50)
);
Create Unique Index IX_UQ_SomeValue
On dbo.MyTab(SomeValue);
Go
Which code is better to check for duplicates (success = 0 if duplicate found)?
Option 1
Declare @someValue nvarchar(50) = 'aaa'
Declare @success bit = 1;
Begin Try
    Insert Into MyTab(SomeValue) Values (@someValue);
End Try
Begin Catch
    -- let's assume that only constraint errors can happen
    Set @success = 0;
End Catch
Select @success
Option 2
Declare @someValue nvarchar(50) = 'aaa'
Declare @success bit = 1;
IF EXISTS (Select 1 From MyTab Where SomeValue = @someValue)
    Set @success = 0;
Else
    Insert Into MyTab(SomeValue) Values (@someValue);
Select @success
From my point of view, I believe that TRY/CATCH is for errors that were NOT expected (like deadlocks, or even constraint violations when duplicates are not expected). In this case it is possible that a user will sometimes submit a duplicate, so the error is expected.
I have found an article by Aaron Bertrand stating that checking for duplicates is not much slower, even when most inserts succeed.
There is also plenty of advice on the net to use TRY/CATCH (to issue 1 statement instead of 2). In my environment perhaps only 1% of cases would fail, so that reasoning makes some sense too.
What is your opinion? What other reasons are there to use option 1 or option 2?
UPDATE: I'm not sure whether it matters here, but the table has an INSTEAD OF UPDATE trigger (for audit purposes; row deletion also happens through an UPDATE statement).
I've seen that article, but note that for low failure rates I'd prefer the "JFDI" pattern. I've used this on high-volume systems before (40k rows/second).
In Aaron's code, you can still get a duplicate when testing first under high load and lots of writes (explained here on dba.se). This is important: your duplicates still happen, just less often. You still need exception handling and to know when to ignore the duplicate error (2627).
Edit: explained succinctly by Remus in another answer
However, I would have a separate TRY/CATCH to test only for the duplicate error
BEGIN TRY
    -- stuff
    BEGIN TRY
        INSERT etc
    END TRY
    BEGIN CATCH
        IF ERROR_NUMBER() <> 2627
            RAISERROR etc
    END CATCH
    -- more stuff
END TRY
BEGIN CATCH
    RAISERROR etc
END CATCH
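As a concrete illustration of the inner TRY/CATCH, here is a sketch reusing dbo.MyTab and the variables from the question (THROW requires SQL Server 2012+; on older versions, re-raise with RAISERROR instead):

```sql
BEGIN TRY
    INSERT INTO dbo.MyTab (SomeValue) VALUES (@someValue);
END TRY
BEGIN CATCH
    -- 2627 = violation of a PRIMARY KEY or UNIQUE constraint
    IF ERROR_NUMBER() <> 2627
        THROW;           -- not a duplicate: re-raise the error
    SET @success = 0;    -- duplicate: swallow the error and report failure
END CATCH
```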
To start with, the EXISTS(SELECT ...) is incorrect, as it fails under concurrency: multiple transactions could run the check concurrently and all conclude that they have to INSERT; one will be the lucky winner that inserts first, and all the rest will hit a constraint violation. In other words, you have a race condition between the check and the insert. So you will have to TRY/CATCH anyway, so better to just TRY/CATCH.
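(For completeness: the check can be made race-free, but only by holding a key-range lock across both the check and the insert. The locking hints in this sketch are my addition, not part of this answer:)

```sql
BEGIN TRAN;
-- UPDLOCK + HOLDLOCK holds a range lock on the checked key until commit,
-- so two concurrent callers cannot both pass the NOT EXISTS test
IF NOT EXISTS (SELECT 1 FROM dbo.MyTab WITH (UPDLOCK, HOLDLOCK)
               WHERE SomeValue = @someValue)
    INSERT INTO dbo.MyTab (SomeValue) VALUES (@someValue);
COMMIT TRAN;
```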
Error logging
Don't hold me to this, but there are likely logging implications when an exception is thrown. If you check before inserting, no such thing happens.
Knowing why and when it can break
A try/catch block should be used for parts that can break for non-deterministic reasons. I would say it's wiser in your case to check for existing records, because you know it can break and exactly why. So checking it yourself is, from a developer's point of view, the better way.
But your code may still break on insert, because between the check and the insert some other user may have inserted the value already... That, however, is (as said previously) a non-deterministic error. That's why you:
should check with EXISTS
should insert within try/catch
Self explanatory code
Another positive is that it is plain to see from the code why it can break, while a try/catch block can hide that; someone may remove it, thinking "why is this here, it's just inserting records..."
Option 3
Begin Try
    SET XACT_ABORT ON
    Begin Tran
        IF NOT EXISTS (Select 1 From MyTab Where SomeValue = @someValue)
        Begin
            Insert Into MyTab(SomeValue) Values (@someValue);
        End
    Commit Tran
End Try
Begin Catch
    Rollback Tran
End Catch
Why not implement an INSTEAD OF INSERT trigger on the table? You can check if the row exists, do nothing if it does, and insert the row if it doesn't.
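A sketch of such a trigger, assuming the dbo.MyTab table from the question (the trigger name is mine):

```sql
CREATE TRIGGER trg_MyTab_InsteadOfInsert ON dbo.MyTab
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Insert only the rows whose SomeValue is not already in the table;
    -- duplicates within the pending batch itself would still need handling
    INSERT INTO dbo.MyTab (SomeValue)
    SELECT i.SomeValue
    FROM inserted AS i
    WHERE NOT EXISTS (SELECT 1 FROM dbo.MyTab AS t
                      WHERE t.SomeValue = i.SomeValue);
END
```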

Can you have just a comment in a block of your SQL if-statement?

I'd like to put just a comment in the block of my if-statement, but I get an error when I try. I want to be more like Steve McConnell.
declare @ConstraintName varchar(255)
set @ConstraintName = 'PK_Whatever'
IF LEFT(@ConstraintName, 2) = 'PK'
BEGIN
    --can't drop primary keys
END
The error I get is:
Incorrect syntax near 'END'.
If I add something after the comment, e.g. PRINT @ConstraintName, it works fine.
No, you cannot have an empty if block (or one that contains only comments).
You don't say why you want this. If you are just trying to comment out the contents of the IF for debugging, you should comment out the entire IF.
SELECT NULL will generate a result set and could affect client apps. It seems better to do something that has no effect, like:
IF LEFT(@ConstraintName, 2) = 'PK'
BEGIN
    DECLARE @Dummy bit -- Do nothing here, but this is required to compile
END
I can't say for sure in SQL Server, but in Oracle PL/SQL you would put a NULL statement in a block that you want to do nothing:
BEGIN
-- This is a comment
NULL;
END
Good tips here: How to do nothing in SQL Server
BEGIN
DONOTHING:
END
So you're just defining a label.
No, I don't think you can. If you want to temporarily comment that out, you'll probably need to just put a /* ... */ around the entire statement.
It's not the comment. It's that you have an empty if block. You have to have at least one statement in there. Putting in a print statement might be your best bet.
Since you can't have an "empty" block (thanks Charles Graham), I'll place a comment above the if-statement stating the intention of the conditional (thanks BlackWasp), and then have a comment within the begin..end block that describes a dummy declare (thanks GiLM).
Do you think this is how I should comment the code?
declare @ConstraintName varchar(255)
set @ConstraintName = 'PK_Whatever'
--can't drop primary keys
IF LEFT(@ConstraintName, 2) = 'PK'
BEGIN
    --do nothing here
    DECLARE @Dummy bit --required to compile
END
Would it not be better to design your SQL statement around the items you do wish to drop constraints for? So if you wish to do that, then:
IF LEFT(@ConstraintName, 2) <> 'PK'
BEGIN
    -- Drop your constraint here
    ALTER TABLE dbo.mytable DROP constraint ... -- etc
END
I know it doesn't answer your original question as to whether you can place just a comment inside a block, but why not invert your conditional, so the block only executes if <> 'PK'?
-- drop only if not primary
IF LEFT(@ConstraintName, 2) <> 'PK'
BEGIN
    --do something here
END

Why is my stored procedure receiving a null parameter?

Ok, this is a curly one. I'm working on some Delphi code that I didn't write, and I'm encountering a very strange problem. One of my stored procedure's parameters is coming through as NULL, even though it's definitely being sent 1.
The Delphi code uses a TADOQuery to execute the stored procedure (anonymized):
ADOQuery1.SQL.Text := 'exec MyStoredProcedure :Foo,:Bar,:Baz,:Qux,:Smang,:Jimmy';
ADOQuery1.Parameters.ParamByName('Foo').Value := Integer(someFunction());
// other parameters all set similarly
ADOQuery1.ExecSQL;
Integer(SomeFunction()) currently always returns 1 - I checked with the debugger.
However, in my stored proc ( altered for debug purposes ):
create procedure MyStoredProcedure (
    @Foo int, @Bar int, @Baz int,
    @Qux int, @Smang int, @Jimmy varchar(20)
) as begin
    -- temp debug
    if ( @Foo is null ) begin
        insert into TempLog values ( 'oh crap' )
    end
    -- do the rest of the stuff here..
end
TempLog does indeed end up with "oh crap" in it (side question: there must be a better way of debugging stored procs: what is it?).
Here's an example trace from profiler:
exec [MYDB]..sp_procedure_params_rowset N'MyStoredProcedure',1,NULL,NULL
declare @p3 int
set @p3=NULL
exec sp_executesql
    N'exec MyStoredProcedure @P1,@P2,@P3,@P4,@P5,@P6',
    N'@P1 int OUTPUT,@P2 int,@P3 int,@P4 int,@P5 int,@P6 int',
    @p3 output,1,1,1,0,200
select @p3
This looks a little strange to me. Notice that it's using @p3 and @P3 - could this be causing my issue?
The other strange thing is that it seems to depend on which TADOConnection I use.
The project is a dll which is passed a TADOConnection from another application. It calls all the stored procedures using this connection.
If instead of using this connection, I first do this:
ConnectionNew := TADOConnection.Create(ConnectionOld.Owner);
ConnectionNew.ConnectionString := ConnectionOld.ConnectionString;
ADOQuery1.Connection := ConnectionNew;
Then the issue does not occur! The trace from this situation is this:
exec [MYDB]..sp_procedure_params_rowset N'MyStoredProcedure',1,NULL,NULL
declare @p1 int
set @p1=64
exec sp_prepare @p1 output,
    N'@P1 int,@P2 int,@P3 int,@P4 int,@P5 int,@P6 varchar(20)',
    N'exec MyStoredProcedure @P1,@P2,@P3,@P4,@P5,@P6',
    1
select @p1
SET FMTONLY ON exec sp_execute 64,0,0,0,0,0,' ' SET FMTONLY OFF
exec sp_unprepare 64
SET NO_BROWSETABLE OFF
exec sp_executesql
    N'exec MyStoredProcedure @P1,@P2,@P3,@P4,@P5,@P6',
    N'@P1 int,@P2 int,@P3 int,@P4 int,@P5 int,@P6 varchar(20)',
    1,1,1,3,0,'400.00'
Which is a bit much for lil ol' me to follow, unfortunately. What sort of TADOConnection options could be influencing this?
Does anyone have any ideas?
Edit: Update below (didn't want to make this question any longer :P)
In my programs, I have lots of code very similar to your first snippet, and I haven't encountered this problem.
Is that actually your code, or is that how you've represented the problem for us to understand? Is the text for the SQL stored in your DFM or populated dynamically?
I was wondering if perhaps somehow the Params property of the query had already got a list of parameters defined/cached, in the IDE, and that might explain why P1 was being seen as output (which is almost certainly causing your NULL problem).
Just before you set the ParamByName Value, try
ADOQuery1.Parameters.ParamByName('Foo').ParamType := ptInput;
I'm not sure why changing the connection string would also fix this, unless it's resetting the internal sense of the parameters for that query.
Under TSQLQuery, the Params property of a query gets reset/recreated whenever the SQL.Text value is changed (I'm not sure if that's true for a TADOQuery, mind you), so that first snippet of yours ought to have caused any existing Params information to be dropped.
If the 'ParamByName.ParamType' suggestion above does fix it for you, then surely something is happening to the query elsewhere (at create time? on the form?) that causes it to think Foo is an output parameter...
does that help at all? :-)
Caveat: I don't know Delphi, but this issue rings a faint bell, so I'm interested in it.
Do you get the same result if you use a TADOStoredProc instead of a TADOQuery? See Delphi 5 Developer's Guide.
Also, it looks like the first trace does no prepare call and thinks @P1 is an output parameter in the execute, while the second trace does a prepare call with @P1 as an output but does not show @P1 as an output in the execute step - is this significant? It does seem odd, and so may be a clue.
You might also try replacing the function call with a constant 1.
Good luck, and please let us know what you find out!
I suspect you have a parameter mismatch left over from the previous use of your ADOQuery.
Have you tried resetting your parameters after changing the SQL.Text:
ADOQuery1.Parameters.Refresh;
Also you could try to clear the parameters and explicitly recreate them:
ADOQuery1.Parameters.Clear;
ADOQuery1.Parameters.CreateParameter('Foo', ftInteger, pdInput, 0, 1);
[...]
I think changing the connection actually forces an InternalRefresh of the parameters.
ADOQuery1.Parameters.ParamByName("Foo").Value = Integer(someFunction());
Don't they use := for assignment in Object Pascal?
@Constantin
It must be a typo from the author of the question.
@Blorgbeard
Hmmm... When you change the SQL of a TADOQuery, it is good practice to clear the parameters and recreate them using CreateParameter.
I would not rely on ParamCheck at runtime, since it leaves the parameters' properties mostly undefined.
I've had this type of problem when relying on ParamCheck to autofill the parameters - it's rare, but it occurs.
Ah, if you go the CreateParameter route, create the @RETURN_VALUE parameter first, since it'll catch the return value of the MSSQL SP.
The only time I've had a problem like this was when the DB Provider couldn't distinguish between Output (always sets it to null) and InputOutput (uses what you provide) parameters.
Ok, progress is made.. sort of.
@Robsoft was correct: setting the parameter direction to pdInput fixed the issue.
I traced into the VCL code, and it came down to TParameters.InternalRefresh.RefreshFromOleDB. This function is being called when I set the SQL.Text. Here's the (abridged) code:
function TParameters.InternalRefresh: Boolean;
  procedure RefreshFromOleDB;
  // ..
    if OLEDBParameters.GetParameterInfo(ParamCount, PDBPARAMINFO(ParamInfo), @NamesBuffer) = S_OK then
      for I := 0 to ParamCount - 1 do
        with ParamInfo[I] do
        begin
          // ..
          Direction := dwFlags and $F; // here's where the wrong value comes from
          // ..
        end;
  // ..
  end;
// ..
end;
So, OLEDBParameters.GetParameterInfo is returning the wrong flags for some reason.
I've verified that with the original connection, (dwFlags and $F) is 2 (DBPARAMFLAGS_ISOUTPUT), and with the new connection, it's 1 (DBPARAMFLAGS_ISINPUT).
I'm not really sure I want to dig any deeper than that, for now at least.
Until I have more time and inclination, I'll just make sure all parameters are set to pdInput before I open the query. Unless anyone has any more bright ideas now..?
Anyway, thanks everyone for your suggestions so far.
I was having a very similar issue using TADOQuery to retrieve some LDAP info; there is a bug in the TParameters.InternalRefresh function that causes an access violation even if your query has no parameters.
To solve this, simply set TADOQuery.ParamCheck to False.

How do I conditionally create a stored procedure in SQL Server?

As part of my integration strategy, I have a few SQL scripts that run in order to update the database. The first thing all of these scripts do is check to see if they need to run, e.g.:
if @version <> @expects
begin
    declare @error varchar(100);
    set @error = 'Invalid version. Your version is ' + convert(varchar, @version) + '. This script expects version ' + convert(varchar, @expects) + '.';
    raiserror(@error, 10, 1);
end
else
begin
    ...sql statements here...
end
Works great! Except if I need to add a stored procedure. The CREATE PROC command must be the only command in a batch of SQL commands. Putting a CREATE PROC in my IF statement causes this error:
'CREATE/ALTER PROCEDURE' must be the first statement in a query batch.
Ouch! How do I put the CREATE PROC command in my script, and have it only execute if it needs to?
Here's what I came up with:
Wrap it in an EXEC(), like so:
if @version <> @expects
begin
    ...snip...
end
else
begin
    exec('CREATE PROC MyProc AS SELECT ''Victory!''');
end
Works like a charm!
SET NOEXEC ON is a good way to switch off part of a script:
IF NOT EXISTS (SELECT * FROM sys.assemblies WHERE name = 'SQL_CLR_Functions')
    SET NOEXEC ON
GO
CREATE FUNCTION dbo.CLR_CharList_Split(@list nvarchar(MAX), @delim nchar(1) = N',')
RETURNS TABLE (str nvarchar(4000)) AS EXTERNAL NAME SQL_CLR_Functions.[Granite.SQL.CLR.Functions].CLR_CharList_Split
GO
SET NOEXEC OFF
Found here:
https://codereview.stackexchange.com/questions/10490/conditional-create-must-be-the-only-statement-in-the-batch
P.S. Another way is SET PARSEONLY { ON | OFF }.
But watch out for single quotes within your Stored Procedure - they need to be "escaped" by adding a second one. The first answer has done this, but just in case you missed it. A trap for young players.
Versioning your database is the way to go, but... why conditionally create stored procedures? For views, stored procedures, and functions, just conditionally drop them and re-create them every time. If you conditionally create, you will never clean up databases that have a problem, or a hack that got put in two years ago by another developer (you or I would never do this) who was sure he would remember to remove that one-time emergency update.
The problem with dropping and creating is that you lose any security grants that had previously been applied to the object being dropped.
This is an old thread, but Jobo is incorrect: CREATE PROCEDURE must be the first statement in a batch. Therefore, you can't use EXISTS to test for existence and then use either CREATE or ALTER. Pity.
It is much better to alter an existing stored proc, because of the potential for properties and permissions that have been added AND which will be lost if the stored proc is dropped.
So, test whether it NOT EXISTS; if it does not exist, create a dummy proc. Then, after that, use an ALTER statement.
IF NOT EXISTS(SELECT * FROM sysobjects WHERE Name = 'YOUR_STORED_PROC_NAME' AND xtype='P')
EXECUTE('CREATE PROC [dbo].[YOUR_STORED_PROC_NAME] as BEGIN select 0 END')
GO
ALTER PROC [dbo].[YOUR_STORED_PROC_NAME]
....
I must admit, I would normally agree with @Peter - I conditionally drop and then unconditionally recreate every time. I've been caught out too many times in the past when trying to second-guess the schema differences between databases, with or without any form of version control.
Having said that, your own suggestion, @Josh, is pretty cool. Certainly interesting. :-)
My solution is to check if the proc exists; if so, drop it, and then create the proc (same answer as @robsoft, but with an example...)
IF EXISTS(SELECT * FROM sysobjects WHERE Name = 'PROC_NAME' AND xtype='P')
BEGIN
    DROP PROCEDURE PROC_NAME
END
GO
CREATE PROCEDURE PROC_NAME
    @value int
AS
BEGIN
    UPDATE SomeTable
    SET SomeColumn = 1
    WHERE Value = @value
END
GO
Use the EXISTS command in T-SQL to see if the stored proc exists. If it does, use ALTER; otherwise, use CREATE.
IF NOT EXISTS(SELECT * FROM sys.procedures WHERE name = 'pr_MyStoredProc')
BEGIN
    -- CREATE PROCEDURE must be alone in its batch, so create the stub via EXEC
    EXEC('CREATE PROCEDURE pr_MyStoredProc AS SET NOCOUNT ON');
END
GO
ALTER PROC pr_MyStoredProc
AS
SELECT * FROM tb_MyTable
