T-SQL view is "optimized" but I don't understand why

I'm struggling with some sort of auto-optimization when creating a view in T-SQL.
The optimization happens both when using the Designer and when using CREATE VIEW.
The output is logically equivalent, but I don't understand why it's done.
Can anyone please explain to me why this is optimized / why the lower one is better:
[...]
WHERE
today.statusId = 7
AND yesterday.cardId IS NULL
AND NOT (
today.name LIKE 'TEST_%'
AND today.department LIKE 'T_%'
)
gets optimized into the following
[...]
WHERE (
today.statusId = 7
AND yesterday.cardId IS NULL
AND NOT (today.name LIKE 'TEST_%')
)
OR (
today.statusId = 7
AND yesterday.cardId IS NULL
AND NOT (today.department LIKE 'T_%')
)
Isn't the second WHERE clause forcing the view to check statusId and cardId twice, regardless of their values? The first one can abort as soon as statusId is, say, 6.
Also, in the first one the parenthesized part can abort as soon as one value is FALSE.
This behavior also doesn't change when the inner parentheses contain, say, 20 predicates. The optimizer will create 20 blocks, checking the values of statusId and cardId over and over again...
Thanks in advance.

The visual designers do not try to optimize your code ever.
Don't use them unless you are prepared to put up with them mangling your formatting and rewriting your queries in this manner.
SQL Server does not guarantee short-circuit evaluation in any case, but this type of rewrite can certainly act as a pessimization. Especially if one of the predicates involves a subquery, the rewritten form could end up significantly more expensive.
Presumably the reason for the rewrite is just so the designer can present the predicates easily in a grid and allow editing at individual grid intersections - or so you can select a whole column and delete it.
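For what it's worth, the two forms are logically equivalent; the rewrite looks like De Morgan's law followed by distributing the AND over the resulting OR (a sketch of the logic, not anything the designer documents):
-- Original:     P AND Q AND NOT (A AND B)
-- De Morgan:    P AND Q AND (NOT A OR NOT B)
-- Distributed:  (P AND Q AND NOT A) OR (P AND Q AND NOT B)
-- where P = today.statusId = 7, Q = yesterday.cardId IS NULL,
--       A = today.name LIKE 'TEST_%', B = today.department LIKE 'T_%'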

Related

DECIMAL Index on CHAR column

I am dealing with some legacy code that looks like this:
Declare @PolicyId int
;
Select top 1 @PolicyId = policyid from policytab
;
Select col002
From SOMETAB
Where (cast(Col001 as int) = @PolicyId)
;
The code is actually in a loop, but the problem is the same. col001 is a CHAR(10).
Is there a way to specify an index on SOMETAB.Col001 that would speed up this code?
Are there other ways to speed up this code without modifying it?
The context of that question is that I am guessing that a simple index on col001 will not speed up the code, because the select statement is doing a cast on the column.
I am looking for a solution that does not involve changing this code because this technique was used on several tables and in several scripts for each table.
Once I determine that it is hopeless to speed up this code without changing it, I have several options. I mention that so this post can stay on the topic of speeding up the code without changing it.
Shortcut to hopeless: if you cannot change the code, (cast(Col001 as int) = @PolicyId) is not SARGable.
Sargable
SARGable functions in SQL Server - Rob Farley
SARGable expressions and performance - Daniel Hutmacher
After shortcut, avoid loops when possible and keep your search arguments SARGable. Indexed persisted computed columns are an option if you must maintain the char column and must compare to an integer.
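A minimal sketch of that computed-column approach (the column and index names here are illustrative, and it assumes every Col001 value actually converts to int):
ALTER TABLE SOMETAB
    ADD Col001AsInt AS CAST(Col001 AS int) PERSISTED;
CREATE INDEX IX_SOMETAB_Col001AsInt
    ON SOMETAB (Col001AsInt)
    INCLUDE (col002);
-- The optimizer can match the existing predicate cast(Col001 as int) = @PolicyId
-- against the indexed computed column, so the legacy query itself does not have to
-- change (subject to the usual SET option requirements for indexed computed columns).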
If you cannot change the table structure, cast your parameter to the data type of the column you are searching on in your select statement:
Cast(@PolicyId as char(10)). This is a code change, and a good place to start looking if you decide to change code based on sqlZim's answer.
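A sketch of what that rewrite would look like in the query above (it assumes the digits in Col001 are stored left-aligned, which is what casting an int to char(10) produces):
Select col002
From SOMETAB
Where Col001 = Cast(@PolicyId as char(10))
;
-- Comparing char to char keeps the predicate SARGable, so a plain index on Col001 can be used.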
Zim's advice is excellent, and searching on int will always be faster than char. But, you may find this method an acceptable alternative to any schema changes.
Is policy stored as an int in PolicyTab and char in SomeTab for certain?

SQL Server: function precedence and short-circuiting in where clause

Consider this setup:
create table #test (val varchar(10))
insert into #test values ('20100101'), ('1')
Now if I run this query
select *
from #test
where ISDATE(val) = 1
and CAST(val as datetimeoffset) > '2005-03-01 00:00:00 +00:00'
it will fail with
Conversion failed when converting date and/or time from character string
which tells me that the where conditions are not short-circuited and both functions are evaluated. OK.
However if I run
select *
from #test
where LEN(val) > 2
and CAST(val as datetimeoffset) > '2005-03-01 00:00:00 +00:00'
it doesn't fail, which tells me that where clause is short-circuited in this case.
This
select *
from #test
where ISDATE(val) = 1
and CAST(val as datetimeoffset) > '2005-03-01 00:00:00 +00:00'
and LEN(val) > 2
fails again, but if I move the length check before the cast, it works. So it looks like the functions are evaluated in the order they appear in the query.
Can anyone explain why first query fails?
It fails because SQL is declarative, so the order of your conditions is not taken into account when the plan is generated (nor is the engine required to take it into account).
The usual way to get around this is to use CASE which has strict rules about sequence and when to stop.
In your case you will probably need nested CASEs, something like this:
WHERE
(
    CASE WHEN ISDATE(val) = 1 THEN
        CASE WHEN CAST(val as datetimeoffset) > '2005-03-01 00:00:00 +00:00'
                  AND LEN(val) > 2
             THEN 1 ELSE 0 END
        ELSE 0
    END
) = 1
(note this is unlikely to be actually correct SQL as I just typed it in).
By the way, even if you get it "working" by rearranging the conditions, I'd advise you don't. Accept that SQL simply doesn't work that way. As the data and statistics change, SQL Server is upgraded, the workload varies, or indexes are added, the query plan could change. Any attempt to "get it working" is going to be short-lived at best, so go with the CASE, which will continue to work once you've got it right (provided you nest CASE expressions where required and don't fall into the same precedence trap inside the CASE conditions!).
The mystery is answered if you examine the Execution Plan. Both the CAST() and the LEN() are applied as part of the Table Scan step, while the test for IsDate() is a separate Filter test after the Table Scan.
It appears that the SQL engine's internal optimizations apply certain filtering functions as part of the retrieval of the data, and others as separate filters, almost certainly as a form of query optimization to minimize the load from disk into main memory. However, more complex functions, such as IsDate(), which in some cases depend on settings such as the system date format (is '01/02/2017' Jan 2nd or Feb 1st?), need to have the data retrieved before the filter is applied.
Although I have no hard information on this, I strongly suspect that any filter more resource intensive than a certain level is delegated to the Filter steps in the query plan, and anything simple/fast enough to be checked as the data is being read in is applied during the Scan/Seek steps. Also, if a filter could be applied on the data in the index, I am certain that it will be tested before any non-index data is tested, solely to minimize disk reads, which are bad performance juju (this may not apply on the Clustered index of the table). In these cases, the short-circuiting might not be straightforward, with an IsDate() test specified on a non-index field being executed after a similar test on an indexed field, no matter where they are in the list of conditions.
That said, it appears to be true that conditions short-circuit when they are executed in the same step of the query plan. If you insert a string like '201612123' into the temp table and then add a check on Len(val) < 9 after the date comparison, it still generates an error, rather than evaluating both LEN() conditions together as a small optimization.
which tells me that where conditions are not short-circuited and both functions are evaluated.
To expand on LoztInSpace's answer, your terminology suggests you are not interpreting SQL correctly, on its own terms.
The various parts of a SELECT statement are not "functions". The entire statement is atomic. You supply the query as unit, and the DBMS responds. There is no "before" and no "after". There is just the query.
Those are the rules. Your job in formulating the query is to supply one that is valid. It's a logical progression: valid question, valid answer, etc. The moment you step out of that frame, you might as well be asking, "why is the sky seven?".
One small clarification to @LoztInSpace's answer. When he refers to the order of your statements, he's presumably talking about the phrasing of your query, which for purposes of evaluation is inconsequential. Sequential SQL statements are executed sequentially, as presented; that is guaranteed by the SQL standard.

Why CASE statement in Where clause for MS Sql server

Doing some performance tuning on a 3rd party vendor system in MS SQL Server 2012.
They do a lot of where clauses like this:
WHERE
(
    CASE
        WHEN @searchID = '' THEN 1
        WHEN ((C.SAPCustomerNumber = @searchID) OR (C.SAPCustomerNumber = @SapID)) THEN 1
        ELSE 0
    END = 1
)
Other than confusing the query plan, what advantage if any would there be to Case 1 or 0 in the where clause?
Thanks
The most common, by far, cause for this sort of code to appear is someone who is new to SQL, CASE expressions or both. They somehow become fixated on CASE and decide that it should be used for all conditional evaluations.
This is usually a mistake and can be replaced with simpler boolean logic, as @Bogdan suggested in the comments:
((C.SAPCustomerNumber = @searchID) OR (C.SAPCustomerNumber = @SapID) OR (@searchID = ''))
The second reason this can be done is if someone is attempting to enforce the order of evaluation of predicates [1]. CASE is documented to evaluate its WHEN conditions in order (provided that they are scalar expressions, not aggregates). I'd seriously not recommend that anyone actually write code like this, though - it's easily mistaken for the first form. And even if a particular evaluation order is best today, with today's data, who's to say whether it will still be correct tomorrow, or in a month's or a year's time?
[1] SQL does not guarantee any evaluation order for WHERE clause predicates in general, nor any form of short-circuiting evaluation. The optimizer is generally free to re-order predicates - both within the WHERE clause and in JOIN/ON clauses - to try to achieve the overall result as cheaply as possible. You shouldn't, generally, try to prevent it from doing so.
If it's not picking the most efficient plan, this is far better fixed by updating/creating indexes and statistics or actually forcing a specific plan, rather than using CASE.
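For illustration only (this is not from the original answer, and the column names are hypothetical), the "enforce evaluation order" pattern usually looks like a guard in the first WHEN:
WHERE CASE
          WHEN C.Quantity = 0 THEN 0                 -- guard is checked first, per documented CASE order
          WHEN C.Total / C.Quantity > 10 THEN 1      -- safe from divide-by-zero once this WHEN is reached
          ELSE 0
      END = 1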

How to force table select to go over blocks

How can I make Sybase's database engine return an unsorted list of records in non-numeric order?
~~~
I have an issue where I need to reproduce an error in the application: I select from a table where the ID is generated in sequence, but the ID returned is not the last one in the selection.
Let me explain.
ID STATUS
_____________
1234 C
1235 C
1236 O
Above are 3 IDs. I had code where these would be the results of a
select @p_id = ID from table where (conditions).
However, there wasn't a clause to check for status = 'O' (open). Remember Sybase saves the last returned record into a variable.
~~~~~
I'm being asked to give the testing team something that will make the results not work. If Sybase selects the above as an unordered list, it could appear in ascending order, or, if the database engine needs to read blocks of stored data in a different order or some similar technical magic happens, the order could be messed up. The original error was when the procedure would return, say, 1234 instead of 1236.
Is there a way that I can have a 100% guarantee that Sybase will search over a block of data and have to double back, effectively 'breaking' the ascending search, and returning not the last record, but any other one? (all records except the maximum will end up erroring, because they are all 'Closed')
I want some sort of magical SQL code that will make sure things don't search the table in exactly numeric order. Ideally I'd like to not have to change the procedure, as the test team want to see the exact same procedure breaking (as easy as plonking a order by id desc would fudge the results).
If you don't specify an order, there is no way to guarantee the return order of the results. It will be whatever order the access path produces - which can depend on the order of insertion, the type of index, and the content of the index keys.
It's generally a bad idea to do those sorts of singleton SELECTs. You should always specify a specific record with the WHERE clause, or use a cursor, or TOP n, or similar. The problem comes when someone tries to understand your code, because when they see multiple hits, some databases take the first value, some take the last value, some take a random value (they call that "implementation-defined"), and some throw an error.
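For reference, a hedged sketch of what a deterministic version of that assignment could look like if the procedure were allowed to change (the table name is a placeholder, as in the question):
-- pick the highest open ID explicitly instead of relying on scan order
select @p_id = max(ID)
from the_table
where status = 'O'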
Is this by any chance related to 1156837? :)

T-SQL Where Clause Case Statement Optimization (optional parameters to StoredProc)

I've been battling this one for a while now. I have a stored proc that takes in 3 parameters that are used to filter. If a specific value is passed in, I want to filter on that. If -1 is passed in, give me all.
I've tried it the following two ways:
First way:
SELECT field1, field2...etc
FROM my_view
WHERE
parm1 = CASE WHEN @PARM1 = -1 THEN parm1 ELSE @PARM1 END
AND parm2 = CASE WHEN @PARM2 = -1 THEN parm2 ELSE @PARM2 END
AND parm3 = CASE WHEN @PARM3 = -1 THEN parm3 ELSE @PARM3 END
Second Way:
SELECT field1, field2...etc
FROM my_view
WHERE
(@PARM1 = -1 OR parm1 = @PARM1)
AND (@PARM2 = -1 OR parm2 = @PARM2)
AND (@PARM3 = -1 OR parm3 = @PARM3)
I read somewhere that the second way will short-circuit and never evaluate the second part if the first is true. My DBA said it forces a table scan. I have not verified this, but it seems to run slower in some cases.
The main table that this view selects from has somewhere around 1.5 million records, and the view proceeds to join on about 15 other tables to gather a bunch of other information.
Both of these methods are slow...taking me from instant to anywhere from 2-40 seconds, which in my situation is completely unacceptable.
Is there a better way that doesn't involve breaking it down into each separate case of specific vs -1 ?
Any help is appreciated. Thanks.
I read somewhere that the second way will short circuit and never eval the second part if true. My DBA said it forces a table scan.
You read wrong; it will not short circuit. Your DBA is right; it will not play well with the query optimizer and likely force a table scan.
The first option is about as good as it gets. Your options to improve things are dynamic sql or a long stored procedure with every possible combination of filter columns so you get independent query plans. You might also try using the "WITH RECOMPILE" option, but I don't think it will help you.
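A minimal sketch of the dynamic SQL option (parameter and column names follow the question; the int data type for the parameters is an assumption; only the predicates that are actually needed are built, while the values stay parameterized):
DECLARE @sql nvarchar(max);
SET @sql = N'SELECT field1, field2 FROM my_view WHERE 1 = 1';
IF @PARM1 <> -1 SET @sql = @sql + N' AND parm1 = @PARM1';
IF @PARM2 <> -1 SET @sql = @sql + N' AND parm2 = @PARM2';
IF @PARM3 <> -1 SET @sql = @sql + N' AND parm3 = @PARM3';
EXEC sp_executesql @sql,
    N'@PARM1 int, @PARM2 int, @PARM3 int',
    @PARM1, @PARM2, @PARM3;
-- Each distinct combination of filters gets its own plan in the cache.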
If you are running SQL Server 2005 or above, you can use IFs to build multiple versions of the query, each with the proper WHERE clause, so an index can be used. Each query plan will be placed in the plan cache.
Also, here is a very comprehensive article on this topic:
Dynamic Search Conditions in T-SQL by Erland Sommarskog
It covers all the issues and methods of trying to write queries with multiple optional search conditions.
Here is the table of contents:
Introduction
The Case Study: Searching Orders
The Northgale Database
Dynamic SQL
Introduction
Using sp_executesql
Using the CLR
Using EXEC()
When Caching Is Not Really What You Want
Static SQL
Introduction
x = @x OR @x IS NULL
Using IF statements
Umachandar's Bag of Tricks
Using Temp Tables
x = @x AND @x IS NOT NULL
Handling Complex Conditions
Hybrid Solutions – Using both Static and Dynamic SQL
Using Views
Using Inline Table Functions
Conclusion
Feedback and Acknowledgements
Revision History
If you pass in a null value when you want everything, then you can write your where clause as
Where colName = IsNull(@Parameter, ColName)
This is basically the same as your first method... it will work as long as the column itself is not nullable... NULL values in the column will mess it up slightly (a row where the column is NULL is never returned, because ColName = ColName is not true for NULL).
The only approach to speed it up is to add an index on the column being filtered on in the Where clause. Is there one already? If not, that will result in a dramatic improvement.
No other way I can think of than doing:
WHERE
(MyCase IS NULL OR MyCase = @MyCaseParameter)
AND ....
The second one is simpler and more readable to other developers, if you ask me.
SQL 2008 and later make some improvements to optimization for things like (MyCase IS NULL OR MyCase = @MyCaseParameter) AND ....
If you can upgrade, and if you add OPTION (RECOMPILE) to get decent performance for all possible parameter combinations (this is a situation where no single plan is good for every combination), you may find that this performs well.
http://blogs.msdn.com/b/bartd/archive/2009/05/03/sometimes-the-simplest-solution-isn-t-the-best-solution-the-all-in-one-search-query.aspx
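A hedged sketch of that combination on SQL Server 2008 or later (it reuses the question's second form; column and parameter names are from the question):
SELECT field1, field2
FROM my_view
WHERE (@PARM1 = -1 OR parm1 = @PARM1)
  AND (@PARM2 = -1 OR parm2 = @PARM2)
  AND (@PARM3 = -1 OR parm3 = @PARM3)
OPTION (RECOMPILE);
-- With RECOMPILE the optimizer sees the actual parameter values on every execution
-- and can simplify away the @PARMn = -1 branches, at the cost of compiling the
-- statement each time it runs.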
