T-SQL CASE Statement without overlapping criteria test - sql-server

Since the following code prints 'First' and 'Second' in that order, can I conclude that the first satisfied condition is ALWAYS the one executed?
DECLARE @Constant1 int = 1
DECLARE @Constant2 int = 2
select
    case
        when @Constant1 = 1
            then 'First'
        when @Constant1 = 1 and @Constant2 = 2
            then 'Second'
    end as Result
select
    case
        when @Constant1 = 1 and @Constant2 = 2
            then 'Second'
        when @Constant1 = 1
            then 'First'
    end as Result
I know that sometimes parallel processing affects the outcome, and I was trying to understand whether this type of situation, which I see in Production, would always return the same result.
This question is intended to understand whether there is a potential issue in production code. If I were writing the code anew, I think I would try to make the conditions explicitly mutually exclusive:
select
    case
        when @Constant1 = 1 and @Constant2 != 2
            then 'First'
        when @Constant1 = 1 and @Constant2 = 2
            then 'Second'
    end as Result

The documentation for CASE states:
Searched CASE expression:
Evaluates, in the order specified, Boolean_expression for each WHEN clause.
Returns result_expression of the first Boolean_expression that evaluates to TRUE.
If no Boolean_expression evaluates to TRUE, the Database Engine returns the else_result_expression if an ELSE clause is specified, or
a NULL value if no ELSE clause is specified.
So it will return the first true branch.
For a simple query such as the one in the question, I would expect it not to evaluate the other branches either.
A few cases where this short-circuiting behaviour does not work as expected/advertised are discussed in this DBA site question:
Does SQL Server read all of a COALESCE function even if the first argument is not NULL?
But, to be clear, these issues do not affect the left-to-right precedence order of the result (except when evaluating a later branch causes an error, so that no result is returned at all).
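One of the better-known examples from that discussion involves COALESCE over a scalar subquery. As a minimal sketch (the table and column names are assumed purely for illustration), COALESCE is defined as shorthand for a CASE expression, so the subquery can be evaluated more than once even when its result is not NULL:
-- COALESCE((subquery), 0) is treated as roughly:
--   CASE WHEN (subquery) IS NOT NULL THEN (subquery) ELSE 0 END
-- so the subquery may run twice, even though the first argument is not NULL.
SELECT COALESCE((SELECT MAX(Amount) FROM dbo.SomeTable), 0) AS MaxAmount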

Related

Invalid length parameter passed to the substring function case

I have an issue with the SUBSTRING function when using it inside a CASE statement.
select
case when -1<0 then 'ok' else SUBSTRING('abcd',1,-1) end
gives me an issue:
Invalid length parameter passed to the substring function.
Why is the case "looking" at the else condition since the first condition is met?
On the other hand query:
declare @a int;
set @a = -1
select
@a,
case when @a < 0 then 'ok' else SUBSTRING('abcd',1,@a) end
presents the right answer 'ok' without any errors.
The problem is that the literal value of -1 passed as the length parameter is parsed by the compiler before run time. The compiler knows that -1 is invalid, as the length cannot be negative, so the error is flagged before the SQL is even run.
In the latter statement, the length passed is a variable. At compile time the variable has an "unknown" value, as it is not SET until run time, so the syntax check passes.
Simply put, the compiler knows that a length of -1 for SUBSTRING is invalid, regardless of whether that SQL will actually run, and so it errors.
Unlike other functions, such as STUFF and REPLICATE, which state "If length is negative, a null string is returned.", SUBSTRING, LEFT, and RIGHT all state: "If integer_expression is negative, an error is returned." For a literal value, it appears that the compiler is checking these values, even if they will never be used, and then flagging the error.
This isn't limited to logic within a CASE either. For example, using logical flow operators such as IF generates the same behaviour:
IF 1 = 0
SELECT LEFT('abc',-1)
As does the ISNULL function:
SELECT ISNULL('ok',RIGHT('abc',-1));
It only, however, occurs with literal values. If, for example, you were to use the values from a column, the behaviour is not seen:
IF 1 = 0
SELECT SUBSTRING('abc',1,n) FROM (VALUES(1),(-1)) V(n);
This does not return an error, even though everything in VALUES is a literal. That is because the value of n has not been evaluated.
You can try this instead; the length parameter cannot be negative, although it can be 0:
select
case when -1 < 0 then 'ok' else SUBSTRING('abcd',1, 1) end
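A related defensive pattern, as a small sketch: clamp the length expression so it can never be negative. A length of 0 is legal and simply returns an empty string, and because the length is now an expression rather than a bare literal, it is not rejected by the compile-time check described above.
DECLARE @len int = -1
SELECT SUBSTRING('abcd', 1, CASE WHEN @len < 0 THEN 0 ELSE @len END) -- returns ''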

CASE Statement SQL: Priority in cases?

I have a general question about using a CASE statement in SQL (Server 2008) when more than one of your WHEN conditions is true but each would produce a different flag.
This is a hypothetical example, but it may be transferable when applying checks across multiple columns to classify data in rows. The output of the code below depends on how the cases are ordered, as both conditions are true.
DECLARE @TESTSTRING varchar(5)
SET @TESTSTRING = 'hello'
SELECT CASE
WHEN @TESTSTRING = 'hello' THEN '0'
WHEN @TESTSTRING <> 'hi' THEN '1'
ELSE 'N/A'
END AS [Output]
In general, would it be considered bad practice to create flags in this way? Would a WHERE clause with OR conditions be better?
Case statements are guaranteed to be evaluated in the order they are written. The first matching value is used. So, for your example, the value 0 would be returned.
This is clearly described in the documentation:
Searched CASE expression:
Evaluates, in the order specified, Boolean_expression for each WHEN clause.
Returns result_expression of the first Boolean_expression that evaluates to TRUE.
If no Boolean_expression evaluates to TRUE, the Database Engine returns the else_result_expression if an ELSE clause is specified, or
a NULL value if no ELSE clause is specified.
As for whether this is good or bad practice, I would lean on the side of neutrality. This is ANSI behavior so you can depend on it, and in some cases it is quite useful:
select (case when val < 10 then 'Less than 10'
             when val < 100 then 'Between 10 and 100'
             when val < 1000 then 'Between 100 and 1000'
             else 'More than 1000' -- or NULL
        end) as MyGroup
To conclude further: SQL will stop reading the rest of the CASE/WHEN expression once one of the WHEN clauses is TRUE. Example:
SELECT
CASE
WHEN 3 = 3 THEN 3
WHEN 4 = 4 THEN 4
ELSE NULL
END AS test
This statement returns 3, since that is the first WHEN clause to evaluate to TRUE, even though the following condition is also TRUE.
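To tie this back to the question's example, here is a small sketch: with overlapping conditions, swapping the WHEN order changes the result, which is exactly why the ordering matters.
DECLARE @TESTSTRING varchar(5)
SET @TESTSTRING = 'hello'
SELECT CASE
WHEN @TESTSTRING <> 'hi' THEN '1' -- now matches first
WHEN @TESTSTRING = 'hello' THEN '0'
ELSE 'N/A'
END AS [Output] -- returns '1'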

SQL Server CASE 1 WHEN 1, WHERE 1=1 AND 1=1

I inherited the following query from a previous application. I'm having a hard time understanding the CASE in the SELECT list, and the WHERE clause as well.
SELECT J1.AC_CODE, J1.PERIOD, J1.JRNAL_NO, J1.DESCRIPTN, - J1.AMOUNT ,
J1.ANAL_T3,
CASE 1
WHEN 1 THEN 'A'
ELSE J1.ACCNT_CODE
END ,
J1.JRNAL_LINE
FROM dbo.JSource J1
WHERE 1=1
AND 1=1
AND NOT ('A' LIKE '%Z%'
AND J1.JRNAL_SRCE IN ('B/F',
'CLRDN')
AND J1.JRNAL_NO = 0)
AND CASE 1
WHEN 1 THEN 'A'
ELSE J1.AC_CODE
END ='A'
AND J1.AC_CODE='156320'
AND J1.PERIOD BETWEEN 2014001 AND 2014012
AND J1.ANAL_T3='ANAL001'
ORDER BY 1,2,3,4,5,6,7,8
I'm not sure if I understand the following clauses correctly:
1st Clause:
CASE 1
WHEN 1 THEN 'A'
ELSE J1.AC_CODE
END
I understood it as: if column 1 is true, then choose the literal 'A', otherwise choose J1.AC_CODE.
2nd clause:
WHERE 1=1
AND 1=1
AND NOT ('A' LIKE '%Z%'
AND J1.JRNAL_SRCE IN ('B/F',
'CLRDN')
AND J1.JRNAL_NO = 0)
AND CASE 1
WHEN 1 THEN 'A'
ELSE J1.AC_CODE
END ='A'
AND J1.AC_CODE='156320'
AND J1.PERIOD BETWEEN 2014001 AND 2014012
AND J1.ANAL_T3='ANAL001'
I'm totally lost with this "Where" clause.
Can you help explain this query and write a better version for this whole query?
I'm running this query on SQL Server 2008 (R2)
I understood it as: if column 1 is true, then choose the literal 'A', otherwise
choose J1.AC_CODE.
No, it is comparing the value 1 with the value 1; since that is always true, the CASE will always return 'A'.
Most of your WHERE clause does not filter anything at all.
1=1
AND 1=1
will always be true, the CASE comparison will always be true, and 'A' LIKE '%Z%' will always be false, which makes the entire AND NOT ('A' LIKE '%Z%' ....) expression always true.
A simpler version of your query would look like this.
SELECT J1.AC_CODE,
J1.PERIOD,
J1.JRNAL_NO,
J1.DESCRIPTN,
- J1.AMOUNT,
J1.ANAL_T3,
'A',
J1.JRNAL_LINE
FROM dbo.JSource J1
WHERE J1.AC_CODE='156320' AND
J1.PERIOD BETWEEN 2014001 AND 2014012 AND
J1.ANAL_T3='ANAL001'
ORDER BY 1,2,3,4,5,6,7,8
Without knowing the history of this query, I am guessing that it was written with testing/debugging in mind and some of that code has been left in place. The CASE in the SELECT list could (and I repeat could, as this is my guess from looking at the query) have had other WHEN clauses during creation of the query, used for testing, and these would have been switched between by changing the value after the CASE (for example: SELECT ..... CASE 1 WHEN 1 THEN 'A' WHEN 2 THEN 'some value' WHEN 3 THEN 'some other value' ELSE J1.ACCNT_CODE END).
As for the WHERE 1 = 1, I have seen this used during query creation/testing, mainly because it means each of the real conditions can easily be commented/uncommented or cut & pasted, since the first WHERE condition is always true. I've not seen AND 1 = 1 before; I'm not sure what that line was intended for, but I'd still think it came about from testing/debugging and was not taken out of the query.
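As a small sketch of that debugging pattern (reusing the table and filters from the query above), WHERE 1 = 1 lets each real predicate be commented out without breaking the AND chain:
SELECT J1.AC_CODE, J1.PERIOD, J1.JRNAL_NO
FROM dbo.JSource J1
WHERE 1 = 1
AND J1.AC_CODE = '156320'
-- AND J1.PERIOD BETWEEN 2014001 AND 2014012
AND J1.ANAL_T3 = 'ANAL001'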

Is SQL Server's double checking needed here?

IF @insertedValue IS NOT NULL AND @insertedValue > 0
This logic is in a trigger.
The value comes from a deleted or inserted row (doesn't matter).
Two questions:
Do I need to check both conditions? (I want all values > 0; the value in the db can be NULL.)
Does SQL Server check the expression in the order I wrote it?
1) Actually, no: if @insertedValue is NULL, the expression @insertedValue > 0 will evaluate to false. (Actually, as Martin Smith points out in his comment, it evaluates to the special value "unknown", which when forced to a Boolean result on its own collapses to false; for example, unknown AND true = unknown, which is forced to false, while unknown OR true = true.) But you're relying on comparison behaviour with NULL values. A single-step equivalent, BTW, would be:
IF ISNULL(@insertedValue, 0) > 0
IMHO, you're better off sticking with the explicit NULL check, for clarity if nothing else.
2) Since the query will be optimised before execution, there is absolutely no guarantee of the order of evaluation or of short-circuiting of the AND operator.
Combining the two - if the double check is truly unnecessary, then it will probably be optimised out before execution anyway, but your SQL code will be more maintainable in my view if you make this explicit.
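As a sketch of why relying on AND short-circuiting is risky (the table and column names here are assumed purely for illustration), the classic example is a filter that looks safe but can still fail, because the optimizer is free to evaluate the conversion before the guarding predicate:
-- May raise a conversion error on non-numeric rows, depending on the chosen plan,
-- even though ISNUMERIC(...) = 1 appears to "protect" the CAST.
SELECT *
FROM dbo.SomeTable
WHERE ISNUMERIC(SomeVarcharCol) = 1
AND CAST(SomeVarcharCol AS int) > 0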
You can use COALESCE, which returns the first non-NULL expression among its arguments.
This also makes the check more flexible: you can extend it with additional arguments and still apply the greater-than-zero condition, and, importantly, you have the option of checking values across multiple columns.
declare @val int
set @val = COALESCE(null, 1, 10)
if (@val > 0)
select 'fine'
else
select 'not fine'
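As the answer hints, the same idea extends across columns; a minimal sketch with hypothetical table and column names:
-- Picks the first non-NULL value across several columns, then applies the > 0 check.
SELECT CASE WHEN COALESCE(ValueA, ValueB, 0) > 0 THEN 'fine' ELSE 'not fine' END
FROM dbo.SomeTable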

Is there an Alternate for Where is Null using Where = Null?

If not, is there an alternate way to switch through SELECT statements using a CASE or IF/THEN identifier WITHOUT putting the statement in a scalar variable first?
Is there a way to format this without using IS and using an = sign for it to work?
SELECT ID FROM TABLE WHERE ID = Null
No. NULL isn't a value. Think of NULL as a condition, with IS NULL or IS NOT NULL testing for that condition.
In this example you can test for the actual value, or for the lack of a value represented by a condition:
WHERE
(X IS NULL OR X = @X)
OR
WHERE
(@X IS NULL OR X = @X)
Or test for your definite conditions first:
WHERE
CASE X
WHEN 1 THEN 'a' -- illustrative result values
WHEN 2 THEN 'b'
ELSE 'other' -- includes NULL
END = ...
Your question is abstract, so it is hard to give a more precise answer.
For example, are you having problems with NOT IN and NULL? If so, use NOT EXISTS.
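To illustrate that last point, a minimal sketch with hypothetical tables A and B: if B.ID contains a NULL, the NOT IN predicate can never evaluate to TRUE, so the first query returns no rows, while NOT EXISTS behaves as expected.
-- Returns no rows if any B.ID is NULL.
SELECT A.ID FROM dbo.A
WHERE A.ID NOT IN (SELECT B.ID FROM dbo.B)
-- Not affected by NULLs in B.ID.
SELECT A.ID FROM dbo.A
WHERE NOT EXISTS (SELECT 1 FROM dbo.B WHERE B.ID = A.ID)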
