I have a general question about using a CASE statement in SQL Server 2008 when more than one of the WHEN conditions is true but each would produce a different flag.
This is a hypothetical example, but it may be transferable to applying checks across multiple columns to classify data in rows. The output of the code below depends on how the WHEN clauses are ordered, as both conditions are true.
DECLARE @TESTSTRING varchar(5)
SET @TESTSTRING = 'hello'
SELECT CASE
    WHEN @TESTSTRING = 'hello' THEN '0'
    WHEN @TESTSTRING <> 'hi' THEN '1'
    ELSE 'N/A'
END AS [Output]
In general, would it be considered bad practice to create flags in this way? Would a WHERE clause with OR conditions be better?
Case statements are guaranteed to be evaluated in the order they are written. The first matching value is used. So, for your example, the value 0 would be returned.
This is clearly described in the documentation:
Searched CASE expression:
Evaluates, in the order specified, Boolean_expression for each WHEN clause.
Returns result_expression of the first Boolean_expression that evaluates to TRUE.
If no Boolean_expression evaluates to TRUE, the Database Engine returns the else_result_expression if an ELSE clause is specified, or
a NULL value if no ELSE clause is specified.
As for whether this is good or bad practice, I would lean on the side of neutrality. This is ANSI behavior so you can depend on it, and in some cases it is quite useful:
select (case when val < 10 then 'Less than 10'
when val < 100 then 'Between 10 and 100'
when val < 1000 then 'Between 100 and 1000'
else 'More than 1000' -- or NULL
end) as MyGroup
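That example only works because order matters: the ranges overlap, and the earlier branch wins. As a minimal sketch of how reordering changes the result (using a hypothetical local variable @val in place of the column), putting the broader test first swallows the smaller values:
DECLARE @val int = 5
SELECT (CASE WHEN @val < 100 THEN 'Between 10 and 100'
             WHEN @val < 10 THEN 'Less than 10'
             ELSE 'More than 1000'
        END) AS MyGroup
-- Returns 'Between 10 and 100' rather than 'Less than 10', because
-- @val < 100 is also true for 5 and is evaluated first.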
To conclude further: SQL will stop reading the rest of the CASE expression once one of the WHEN clauses is TRUE. Example:
SELECT
CASE
WHEN 3 = 3 THEN 3
WHEN 4 = 4 THEN 4
ELSE NULL
END AS test
This statement returns 3 since that is the first WHEN clause to evaluate to TRUE, even though the following WHEN clause is also TRUE.
Related
I am using SQL to compare two columns and return TRUE/FALSE if they are equal.
In some cases, the two columns contain exactly the same string (no spaces or anything) but I am still getting false.
What might the reason for this be?
I am using this code:
CASE WHEN column1 = column2 THEN 0 ELSE 1 END AS [check]
The values are different, even though they display the same.
Using T-SQL, run a query like this to see the exact difference in the underlying raw values:
SELECT
column1
, CAST(column1 AS varbinary(MAX)) AS column1Binary
, column2
, CAST(column2 AS varbinary(MAX)) AS column2Binary
FROM dbo.YourTable;
This will reveal underlying differences like tabs or subtle character differences.
In fact, a likely explanation for what you are seeing is that one or both of the strings has leading and/or trailing whitespace. On SQL Server you may try:
CASE WHEN LTRIM(RTRIM(column1)) = LTRIM(RTRIM(column2)) THEN 0 ELSE 1 END AS [check]
If the above does not detect the problematic records, then try checking the length:
CASE WHEN LEN(column1) = LEN(column2) THEN 0 ELSE 1 END AS [check2]
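If trimming does not expose the difference, comparing LEN with DATALENGTH can surface hidden characters or padding, since LEN ignores trailing spaces while DATALENGTH counts every byte. A sketch against the same placeholder table as above:
SELECT
column1
, LEN(column1) AS column1Len
, DATALENGTH(column1) AS column1Bytes
, column2
, LEN(column2) AS column2Len
, DATALENGTH(column2) AS column2Bytes
FROM dbo.YourTable;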
I need to compare a variable in a CASE condition in a stored procedure.
CASE WHEN @column <> 0 THEN ...
ELSE 0 END
But whenever @column is NULL, the script above returns 0.
Does SQL consider 0 and NULL to be the same?
I am using SQL Server 2014.
Thanks All
No. SQL considers NULL as "I have no idea". Comparing anything with "I have no idea" results in an answer of "I totally have no idea". Look:
- How tall is John?
- I have no idea.
- What is two centimeters taller than John?
- I have no idea.
Even a comparison between two NULL values is not true: if I have no idea how tall John is, and I also have no idea how tall Jack is, I can't conclude that John is as tall as Jack (and I can't conclude that John is not as tall as Jack). The only sensible answer is... "I have no idea".
The way to test for NULL is with the IS operator, which exists specifically for this scenario (e.g. @column IS NULL, or @column IS NOT NULL).
So NULL is not equal to 0, nor is it NOT equal to 0. The result of NULL <> 0 is unknown. However, unknown is treated as not true where conditionals are concerned, so CASE sends you to the ELSE branch any time @column is NULL.
If you want the THEN branch of the CASE to execute when the value is NULL, modify your condition to check for NULL as well:
CASE WHEN (@column <> 0 OR @column IS NULL) THEN ...
ELSE 0 END
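A minimal sketch of the behaviour described above, declaring the question's @column as a local int purely for illustration:
DECLARE @column int = NULL
SELECT
CASE WHEN @column <> 0 THEN 1 ELSE 0 END AS original_check  -- 0: NULL <> 0 is unknown, so ELSE runs
, CASE WHEN @column <> 0 OR @column IS NULL THEN 1 ELSE 0 END AS with_null_check  -- 1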
I inherited the following query from a previous application. I'm having a hard time understanding the CASE in the SELECT list and also in the WHERE clause.
SELECT J1.AC_CODE, J1.PERIOD, J1.JRNAL_NO, J1.DESCRIPTN, - J1.AMOUNT ,
J1.ANAL_T3,
CASE 1
WHEN 1 THEN 'A'
ELSE J1.ACCNT_CODE
END ,
J1.JRNAL_LINE
FROM dbo.JSource J1
WHERE 1=1
AND 1=1
AND NOT ('A' LIKE '%Z%'
AND J1.JRNAL_SRCE IN ('B/F',
'CLRDN')
AND J1.JRNAL_NO = 0)
AND CASE 1
WHEN 1 THEN 'A'
ELSE J1.AC_CODE
END ='A'
AND J1.AC_CODE='156320'
AND J1.PERIOD BETWEEN 2014001 AND 2014012
AND J1.ANAL_T3='ANAL001'
ORDER BY 1,2,3,4,5,6,7,8
I'm not sure If I understand the following clauses correctly:
1st Clause:
CASE 1
WHEN 1 THEN 'A'
ELSE J1.AC_CODE
END
I understood this as: if column 1 is true, then choose the literal 'A', otherwise choose J1.AC_CODE.
2nd clause:
WHERE 1=1
AND 1=1
AND NOT ('A' LIKE '%Z%'
AND J1.JRNAL_SRCE IN ('B/F',
'CLRDN')
AND J1.JRNAL_NO = 0)
AND CASE 1
WHEN 1 THEN 'A'
ELSE J1.AC_CODE
END ='A'
AND J1.AC_CODE='156320'
AND J1.PERIOD BETWEEN 2014001 AND 2014012
AND J1.ANAL_T3='ANAL001'
I'm totally lost with this "Where" clause.
Can you help explain this query and write a better version for this whole query?
I'm running this query on SQL Server 2008 (R2)
I understood this as: if column 1 is true, then choose the literal 'A', otherwise choose J1.AC_CODE.
No. It is comparing the value 1 with the value 1; since that is of course always true, the CASE will always return 'A'.
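For reference, CASE 1 WHEN 1 ... is the "simple" CASE form: the expression after CASE is compared with each WHEN value, so it behaves like the searched form with an explicit 1 = 1. A standalone sketch of the equivalence, with a hypothetical 'other' standing in for J1.ACCNT_CODE:
SELECT
CASE 1 WHEN 1 THEN 'A' ELSE 'other' END AS simple_form
, CASE WHEN 1 = 1 THEN 'A' ELSE 'other' END AS searched_form
-- Both columns always return 'A' because 1 = 1 is always true.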
Most of your WHERE clause does nothing at all.
1=1
AND 1=1
will always be true, the CASE will always be true, and 'A' LIKE '%Z%' will always be false, which makes the entire AND NOT ('A' LIKE '%Z%' ....) expression always true.
A simpler version of your query would look like this:
SELECT J1.AC_CODE,
J1.PERIOD,
J1.JRNAL_NO,
J1.DESCRIPTN,
- J1.AMOUNT,
J1.ANAL_T3,
'A',
J1.JRNAL_LINE
FROM dbo.JSource J1
WHERE J1.AC_CODE='156320' AND
J1.PERIOD BETWEEN 2014001 AND 2014012 AND
J1.ANAL_T3='ANAL001'
ORDER BY 1,2,3,4,5,6,7,8
Without knowing the history of this query, I am guessing that it was written with testing/debugging in mind and some of that code has been left in place. The CASE in the SELECT list could (and I repeat could, as this is my guess from looking at the query) have had other WHEN clauses during creation of the query that were used for testing, and these would have been switched between by changing the value after the CASE (for example, SELECT ..... CASE 1 WHEN 1 THEN 'A' WHEN 2 THEN 'some value' WHEN 3 THEN 'some other value' ELSE J1.ACCNT_CODE END).
As for the WHERE 1 = 1, I have seen this used during query creation/testing, mainly because it means each of the real conditions can easily be commented/uncommented or cut and pasted, as the first WHERE condition is always true. I've not seen AND 1 = 1 before. I'm not sure what that line was intended for, but I'd still think it came about from testing/debugging and was not taken out of the query.
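Purely as an illustration of that debugging style (using the table and columns from the query above), the pattern makes every real filter start with AND, so any of them can be commented out or pasted in without touching the others:
SELECT J1.AC_CODE, J1.PERIOD, J1.ANAL_T3
FROM dbo.JSource J1
WHERE 1 = 1
  AND J1.AC_CODE = '156320'
  -- AND J1.PERIOD BETWEEN 2014001 AND 2014012  -- easy to toggle while testing
  AND J1.ANAL_T3 = 'ANAL001'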
Since the following code prints 'First' and 'Second' in that order, can I conclude that the first condition satisfied is ALWAYS the one executed?
DECLARE @Constant1 int = 1
DECLARE @Constant2 int = 2
select
    case
        when @Constant1 = 1
            then 'First'
        when @Constant1 = 1 and @Constant2 = 2
            then 'Second'
    end as Result
select
    case
        when @Constant1 = 1 and @Constant2 = 2
            then 'Second'
        when @Constant1 = 1
            then 'First'
    end as Result
I know that sometimes parallel processing affects the outcome, and I was trying to understand if this type of situation, which I see in Production, would always return the same result.
This question is intended to establish whether there is a potential issue in production code. If I were going to write the code anew, I think I would try to make the conditions explicitly mutually exclusive:
select
    case
        when @Constant1 = 1 and @Constant2 != 2
            then 'First'
        when @Constant1 = 1 and @Constant2 = 2
            then 'Second'
    end as Result
The documentation for CASE states:
Searched CASE expression:
Evaluates, in the order specified, Boolean_expression for each WHEN clause.
Returns result_expression of the first Boolean_expression that evaluates to TRUE.
If no Boolean_expression evaluates to TRUE, the Database Engine returns the else_result_expression if an ELSE clause is specified, or
a NULL value if no ELSE clause is specified.
So it will return the first true branch.
For a simple query such as the one in the question, I would expect it not to evaluate the other branches either.
A few cases where this short circuiting behaviour does not work as expected/advertised are discussed in this DBA site question.
Does SQL Server read all of a COALESCE function even if the first argument is not NULL?
But just to be clear, these issues do not affect the left-to-right precedence order of the result (except for the case when evaluating a later branch causes an error such that no result is returned at all).
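As one common illustration of relying on that ordering (a sketch only; the linked question covers the edge cases), the first TRUE branch guards the expression in the later branch:
DECLARE @divisor int = 0
SELECT CASE WHEN @divisor = 0 THEN NULL
            ELSE 100 / @divisor
       END AS SafeDivision
-- Returns NULL rather than raising a divide-by-zero error, because the
-- first WHEN clause is TRUE and wins.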
IF @insertedValue IS NOT NULL AND @insertedValue > 0
This logic is in a trigger.
The value comes from a deleted or inserted row (doesn't matter).
2 questions:
Do I need to check both conditions? (I want all values > 0; the value in the db can be NULL.)
Does SQL Server check the expressions in the order I wrote them?
1) Actually, no, since if @insertedValue is NULL, the expression @insertedValue > 0 will evaluate to false. (Actually, as Martin Smith points out in his comment, it will evaluate to the special value "unknown", which, when forced to a Boolean result on its own, collapses to false; examples: unknown AND true = unknown, which is forced to false, while unknown OR true = true.) But you're relying on comparison behaviour with NULL values. A single-step equivalent method, BTW, would be:
IF ISNULL(@insertedValue, 0) > 0
IMHO, you're better off sticking with the explicit NULL check, for clarity if nothing else.
2) Since the query will be optimised before execution, there is absolutely no guarantee of order of execution or short circuiting of the AND operator.
Combining the two: if the double check is truly unnecessary, then it will probably be optimised out before execution anyway, but your SQL code will be more maintainable, in my view, if you make this explicit.
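A small sketch of the two styles side by side, declaring the question's @insertedValue as a local int just for illustration:
DECLARE @insertedValue int = NULL
IF @insertedValue IS NOT NULL AND @insertedValue > 0
    PRINT 'explicit check: positive'
IF ISNULL(@insertedValue, 0) > 0
    PRINT 'ISNULL check: positive'
-- Neither PRINT fires when @insertedValue is NULL; both fire when it is, say, 5.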
You can use COALESCE, which returns the first non-null expression among its arguments.
This also makes the query more flexible: you can add more expressions to the COALESCE list and still apply the greater-than-zero check once. The important point to note here is that you have the option to check values from multiple columns.
DECLARE @val int
SET @val = COALESCE(NULL, 1, 10)
IF (@val > 0)
    SELECT 'fine'
ELSE
    SELECT 'not fine'