ORDER BY is taking too much time when executed against a VIEW - sql-server

I have a relatively complicated setup. I'm using SQL Server 2012 with 3 linked servers, which are IBM DB2 servers. I have several queries which join tables from all three linked servers to fetch data. Due to some specifics of the version I'm using, I can't use some OLAP functions directly, and since an upgrade is not an option, the workaround was to create views and execute those functions on the views. One problem I'm facing right now is that using ORDER BY on the view almost triples the time needed for the view to be executed.
When I execute the view with a plain SELECT it takes 24 seconds (yes, I know we're talking about ridiculous times here, but still, I want to fix the problem with the ORDER BY, since I'm not allowed to change the queries to the DB2 servers but the ORDER BY is on my side). When I use ORDER BY it goes from 68 to 80 seconds, depending on which column I'm ordering on. I can't create a schema-bound view because that's not allowed with OPENQUERY. I've also read that ORDER BY isn't allowed when creating a view anyway; I haven't tried that, but since I need the ordering to be available on multiple columns it's not an option unless I create as many views as I have columns, which sounds kind of ridiculous, but... dunno.
Since I have only basic knowledge of SQL in general, I'm not sure what the best choice is here. Even if the execution times were fine, I wouldn't want my ORDER BY clause to be so time-consuming compared to the time needed for the whole query. I'd like to make it as fast as when I execute it directly in the query: when I don't use the view and I add the ORDER BY to the initial query, the original time is 24 seconds and it goes up to 36, which in percentage terms is still much better than the performance when the same ORDER BY is executed against the view.
So my questions are: what causes the ORDER BY to be executed so slowly from the view, and how can I make it as fast as if it were part of the original query? And if that's just not possible, how can I reduce the huge amount of time it takes?

Views use different execution plans than the queries that make them up. This is, in my opinion, a bit of a shortcoming of views. ORDER BY is a particularly expensive operation, so it makes the difference in execution plans very noticeable.
The alternative I've found for this issue was to go the table-valued function route, as a TVF does appear to use the same execution plan as just running the query.
Here's a decent write-up of Table Valued Functions:
http://technet.microsoft.com/en-us/library/ms191165(v=sql.105).aspx
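For illustration, here is a minimal sketch of that route (not your actual objects; the linked server name DB2SRV, the remote table, and the columns are all made up), wrapping the OPENQUERY in an inline table-valued function so that ORDER BY can be applied at the call site:

-- Hypothetical inline TVF wrapping the linked-server query:
CREATE FUNCTION dbo.fn_RemoteData ()
RETURNS TABLE
AS
RETURN
(
    SELECT CUSTOMERID, AMOUNT, CREATEDON
    FROM OPENQUERY(DB2SRV,
         'SELECT CUSTOMERID, AMOUNT, CREATEDON FROM REMOTE_SCHEMA.REMOTE_TABLE')
);
GO

-- ORDER BY is applied at the call site, so the sort can be inlined into the same plan
-- and ordered on whatever column is needed:
SELECT CUSTOMERID, AMOUNT, CREATEDON
FROM dbo.fn_RemoteData()
ORDER BY CREATEDON;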

Related

Which is better: multiple CTEs in a single query, or multiple views joined?

I am currently in the process of a database migration from MS Access to SQL Server. To improve the performance of a specific query, I am translating it from Access to T-SQL and executing it server-side. The query in question is essentially made up of almost 15 subqueries branching off in different directions with varying levels of complexity. The top-level query is a culmination (final SELECT) of all of these queries.
Without actually going into the specifics of the fields and relationships in my queries, I want to ask a question on a generic example.
Take the following:
Top Level Query
|
|-- Query 1   <----------> Query 2   (Views?)
|   |-- Query 1.1
|   |   |-- Query 1.1.1  ...
|   |   +-- Query 1.1.2  ...
|   +-- Query 1.2
|
+-- Query 2
    |-- Query 2.1
    |   |-- Query 2.1.1  ...
    |   +-- Query 2.1.2  ...
    +-- Query 2.2
I am attempting to convert the above MS Access query structure to T-SQL, whilst maximising performance. So far I have converted all of Query 1 into a single query, starting from the bottom and working my way up. I achieved this by using CTEs to represent every single subquery and then finally selecting from this entire CTE tree to produce Query 1. Due to the original design of the query, there is a high level of dependency between the subqueries.
Now my question is quite simple actually. With regards to Query 2, should I continue to use this same method within the same query window, or should I make both Query 1 and Query 2 separate entities (views) and then do a select from each? Or should I just continue adding more CTEs and then get the final Top Level Query result from this one super query?
This is an extremely bastardised version of the actual query I am working with, which has a large number of calculated fields and more subquery levels.
What do you think is the best approach here?
There is no way to say for sure from a diagram like this, but I suspect that you want to use Views for a number of reasons.
1) If the sub-query/view is used in more than one place, there is a good chance that caching will allow the results to be shared in more than one place. The effect is not as strong as with a CTE, but it can be helped along by materializing the view.
2) It is easy to turn a view into a materialized (indexed) view; see the sketch after this list. Then you get a huge bonus if it is used multiple times, or is used many times before it needs to be refreshed.
3) If you find a slow part, it will be isolated to one view -- then you can optimize and change that small section more easily.
I would recommend using views for EVERY sub-query if you can, unless you can demonstrate (via execution plan or testing) that the CTE runs faster.
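As a rough illustration of point 2, turning a view into an indexed ("materialized") view looks roughly like this; all object names are made up, the Amount column is assumed NOT NULL, and indexed views carry restrictions (SCHEMABINDING, COUNT_BIG with GROUP BY, no outer joins, specific SET options, and so on):

CREATE VIEW dbo.vSalesByCustomer
WITH SCHEMABINDING
AS
SELECT CustomerId,
       SUM(Amount)  AS TotalAmount,   -- Amount assumed NOT NULL
       COUNT_BIG(*) AS RowCnt         -- required when the view uses GROUP BY
FROM dbo.Sales
GROUP BY CustomerId;
GO

-- The unique clustered index is what actually materializes the view:
CREATE UNIQUE CLUSTERED INDEX IX_vSalesByCustomer
    ON dbo.vSalesByCustomer (CustomerId);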
A final note, as someone who has migrated Access to SQL Server in the past: Access encourages more sub-queries than are needed with modern SQL and windowing functions. It is very likely that with some analysis these Access queries can be made much simpler. Try to find cases where you can roll them up into the parent query.
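For example, a per-row sub-query that Access tends to encourage can often be rolled into the parent query with a window function; a hedged sketch with made-up table and column names:

-- Access-style: a correlated sub-query per row to get each customer's total
SELECT o.OrderId, o.CustomerId, o.Amount,
       (SELECT SUM(o2.Amount)
        FROM dbo.Orders AS o2
        WHERE o2.CustomerId = o.CustomerId) AS CustomerTotal
FROM dbo.Orders AS o;

-- T-SQL windowed equivalent, no sub-query needed:
SELECT o.OrderId, o.CustomerId, o.Amount,
       SUM(o.Amount) OVER (PARTITION BY o.CustomerId) AS CustomerTotal
FROM dbo.Orders AS o;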
A query you submit, or a "view", is all the same thing.
Your BASIC simple question stands!
Should you use a query on query, or a use a CTE?
Well, first, let's clear up some confusion you have here.
A CTE is great for eliminating the need to build a query, save the query (say as a view), AND THEN query against it.
However, in your question we are mixing up TWO VERY different issues.
Are you doing a query against a query, or, in YOUR case, using a sub-query? While these two things SEEM similar, they really are not!
In the case of a sub-query, using a CTE will in fact save you the pain of having to build a separate query/view and saving that query. In this case you are, I suppose, doing a query on query, but it REALLY is a sub-query. From a performance point of view, I don't believe you'll find any difference, so do what works best for you. I do in some ways like adopting CTEs, since THEN you have the "whole thing" in one spot, and updates to the whole mess occur in one place. This can especially be an advantage if you have several sites: to update things, you ONLY have to update this one huge big saved "thing". I do find this a significant advantage.
The advantage of breaking things out into separate views (as opposed to using CTEs) often comes down to the simple question: how do you eat an elephant?
Answer: One bite at a time.
However, I in fact consider the concept and approach of a sub-query a DIFFERENT issue than building a query on query. One of the really big reasons to use CTEs in SQL Server is that SQL Server has one REALLY big limitation compared to Access SQL. That limitation, of course, is not being able to re-use derived columns.
e.g. this:
SELECT ID, Company, State, TaxRate, Purchased, Payments,
       (Purchased - Payments) AS Balance,
       (Balance * TaxRate) AS BalanceWithTax
FROM Customers
Of course in T-SQL you can't re-use an expression like you can in Access SQL, so the above is a GREAT use of CTEs. Balance in T-SQL cannot be re-used, so you are constantly having to repeat expressions in T-SQL (my big pet peeve with T-SQL). Using a CTE means we CAN get back the ability to re-use an expression as above.
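A sketch of the same query in T-SQL using a CTE so that Balance can be re-used (same made-up Customers columns as above; the CTE name is arbitrary):

WITH CustomerBalances AS
(
    SELECT ID, Company, State, TaxRate, Purchased, Payments,
           (Purchased - Payments) AS Balance
    FROM Customers
)
SELECT ID, Company, State, TaxRate, Purchased, Payments,
       Balance,                              -- the derived column can now be re-used
       (Balance * TaxRate) AS BalanceWithTax
FROM CustomerBalances;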
So I tend to think of the CTE as solving two issues, and you should keep these concepts separate:
I want to eliminate the need for a query on query, and that includes sub-queries.
So, sure, using CTE for above is a good use.
The SECOND use case is the ability to re-use expression columns. This is VERY painful in T-SQL, and CTEs go a long way to reducing this pain point (Access SQL is still far better), but CTEs are at least very helpful.
So, from a performance point of view, using CTEs to eliminate sub-queries should not affect performance, and as noted you save having to create 2-5 separate queries for this to work.
Then there is a query on query (especially in the above use case of being able to re-use columns in expressions). In this case I believe some performance advantages exist, but again, likely not enough to justify one approach over the other. So once again, adopting CTEs should come down to which road is LESS work for you! (But for a very nasty sub-query that sums() and does some real work, and THEN you need to re-use those columns, that is really where CTEs shine.)
So as a general coding approach, I use CTEs to avoid query on query (but NOT for a sub-query as you are doing), and I use CTEs to gain re-use of a complex expression column.
Using CTEs to eliminate sub-queries is not really all that great a benefit. (I mean, just shove the sub-query into the main query - MOST of the time a CTE will not help you.)
So, using CTEs just for the concept of a sub-query is not all that great an advantage. You can, but I don't see great gains from a developer point of view. However, in the case of query on query (to gain re-use of column expressions), the CTE eliminates the need for a query against the same table/query.
So, for just sub-queries, I can't say CTEs are a huge advantage. But for re-use of column expressions, you MUST otherwise do a query on a query (say a saved view), and THEN you gain re-use of the columns as expressions to be used in additional expressions.
While this is somewhat a matter of opinion:
The CTE's ability to allow re-use of columns is its main use case, because this eliminates the need to create a separate view. It is not so much that you eliminated the need for a separate view (query on query), but that you gained the ability to re-use a column expression - that is the main benefit here.
So, you certainly can use CTEs to eliminate having to create views (yes, a good idea), but in your case you likely could have just used sub-queries anyway, and the CTEs are not really required. For column re-use you have NO CHOICE in the matter: since you MUST otherwise use a query on query for column expression re-use, the CTE eliminates that need. In your case (at least so far) you did not really need to use a CTE and you were not being forced into one by your solution. For column re-use you have zero choice - you ARE being forced to query on query - so a CTE eliminates this need.
As far as I can tell, so far you don't really need to use a CTE unless the issue is being able to re-use some columns in other expressions like we could/can in Access SQL.
If column re-use is the goal? Then yes, CTEs are a great solution. So it is more of a column re-use issue than one of choosing to use query on query. If you did not have the additional views in Access, then no question that adopting CTEs to keep a similar approach and design makes a lot of sense. The ability to re-use columns is what we lost by going to SQL Server, and CTEs do a lot to regain that ability.

improve database querying in ms sql

What's a fast way to query large amounts of data (between 10,000 and 100,000 rows; it will get bigger in the future, maybe 1,000,000+) spread across multiple tables (20+) that involves left joins and aggregate functions (SUM, MAX, COUNT, etc.)?
My solution would be to make one table that contains all the data I need and have triggers that update this table whenever one of the other tables gets updated. I know that triggers aren't really recommended, but this way I take the load off the querying. Or do one big update every night.
I've also tried with views, but once it starts involving left joins and calculations it's way too slow and times out.
Since your question is too general, here's a general answer...
The path you're taking right now is optimizing a single query/single issue. Sure, it might solve the issue you have right now, but it's usually not very good in the long run (not to mention the cumulative cost of maintenance of such a thing).
The common path to take is to create an 'analytics' database - a real-time copy of your production database that you query for all your reports. This analytics database can eventually even become a full-blown DWH, but you're probably going to start with simple real-time replication (or replicate nightly, or whatever) and work from there...
As I said, the question/problem is too broad to be answered in a couple of paragraphs; these are only some of the guidelines...
I'd need a bit more detail, but I can already suggest this:
Use WITH (NOLOCK); this will slightly improve the speed.
Reference: Effect of NOLOCK hint in SELECT statements
Use indexing on your table fields to fetch data fast (see the sketch below).
Reference: sql query to select millions record very fast
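A minimal sketch of both suggestions together; the table, column, and index names are hypothetical:

-- Index the columns you join and filter on:
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId)
    INCLUDE (OrderDate, Amount);

-- NOLOCK avoids shared locks, at the cost of possible dirty reads:
SELECT c.CustomerId, SUM(o.Amount) AS Total
FROM dbo.Customers AS c WITH (NOLOCK)
LEFT JOIN dbo.Orders AS o WITH (NOLOCK)
    ON o.CustomerId = c.CustomerId
GROUP BY c.CustomerId;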

Better way to join table valued function

I am trying to generate a table that holds a client age analysis at a certain time. My source data is from Pastel Evolution accounting system.
They have a table valued function [_efnAgedPostARBalancesSum] that takes 2 Parameters (date and client link) and returns Age1, Age2, etc for entered client link. I need to get the ageing for all the clients in the client table.
I managed to get it working by using CROSS APPLY as per below, but it takes a long time to execute. If I run the age analysis from within Pastel it takes about 20 seconds; in SQL it takes about 6 minutes.
The function is encrypted so I cannot see what it does. I am using SQL Server 2008 R2.
Is there a more efficient alternative to cross apply?
SELECT
    f.AccountLink,
    f.AccountBalance,
    f.Age1,
    f.Age2,
    f.Age3,
    f.Age4,
    f.Age5,
    f.Age6,
    f.Age7
FROM Client
CROSS APPLY [_efnAgedPostARBalancesSum] ('2014-09-30', Client.DCLink) AS f
It looks like an AR aging bucket function from the outside - and probably has custom bucket sizes (given the generic Age1, Age2, etc.). They're notoriously compute-intensive. It's the kind of query that often spawns the need for a separate BI database, as an OLTP system is not ideal for analytical queries. It's not only slow to run, it's also likely to be impacting other work in your OLTP system while this function is banging on it.
You can bet it's looking at the due dates from the various documents that contain balances due (very likely several sources). They might not all be indexed on the due date columns. Look for that first. If you run the query in SSMS with the actual execution plan turned on (right-click in the query window and select "Include Actual Execution Plan"), it may suggest one or more indexes to speed up execution. From this you can at least discover the tables that are being touched and the predicates involved in gathering the data... and you might get lucky with indexing.
There's no telling how efficiently the function computes the buckets. If it's not using some kind of window functions, it can be kind of horrible. You might find it advantageous to write your own UDF to get only what you want. Since it's generic, it may have a lot more work to do to cover all the possible bases - something your organization may not need.
If it is an inline function, you might get some relief by asking only for the columns you really need to look at. It returns (at least) 7 buckets, and a lot of AR reporting and analysis needs only 3 (the 30, 60, and 90 day buckets, for example). It might also be worth doing a little pre-analysis to find out which clients you need to apply the function to, to keep from having to run it against your whole client domain.
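For instance, a hedged variation of the query above that narrows both the client set and the returned columns first (the ClientOnHold filter is purely hypothetical - substitute whatever pre-analysis condition fits your data):

SELECT
    f.AccountLink,
    f.AccountBalance,
    f.Age1,
    f.Age2,
    f.Age3
FROM (SELECT DCLink
      FROM Client
      WHERE ClientOnHold = 0) AS c              -- hypothetical pre-filter on clients
CROSS APPLY [_efnAgedPostARBalancesSum] ('2014-09-30', c.DCLink) AS f;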
Just looking at the function name makes me think it's not a documented API per se. The encryption reinforces this hunch. I'm not sure how badly you really want to rely on such a function - there's no telling how it might get refactored (or removed) going forward.

Pros and cons of using a cursor (in SQL server)

I asked a question here Using cursor in OLTP databases (SQL server)
where people responded saying cursors should never be used.
I feel cursors are very powerful tools that are meant to be used (I don't think Microsoft supports cursors just for bad developers). Suppose you have a table where the value of a column in a row is dependent on the value of the same column in the previous row. If it is a one-time back-end process, don't you think using a cursor would be an acceptable choice?
Off the top of my head I can think of a couple of scenarios where I feel there should be no shame in using cursors. Please let me know if you guys feel otherwise.
A one time back end process to clean bad data which completes execution within a few minutes.
Batch processes that run once in a long period of time (something like once a year).
If in the above scenarios, there is no visible strain on the other processes, wouldn't it be unreasonable to spend extra hours writing code to avoid cursors? In other words in certain cases the developer's time is more important than the performance of a process that has almost no impact on anything else.
In my opinion these would be some scenarios where you should seriously try to avoid using a cursor.
A stored proc called from a website that can get called very often.
A SQL job that would run multiple times a day and consume a lot of resources.
I think it's very superficial to make a general statement like "cursors should never be used" without analyzing the task at hand and actually weighing it against the alternatives.
Please let me know of your thoughts.
There are several scenarios where cursors actually perform better than set-based equivalents. Running totals is the one that always comes to mind - look for Itzik's words on that (and ignore any that involve SQL Server 2012, which adds new windowing functions that give cursors a run for their money in this situation).
One of the big problems people have with cursors is that they perform slowly, they use temporary storage, etc. This is partially because the default syntax is a global cursor with all kinds of inefficient default options. The next time you're doing something with a cursor that doesn't need to do things like UPDATE...WHERE CURRENT OF (which I've been able to avoid my entire career), give it a fair shake by comparing these two syntax options:
DECLARE c CURSOR
FOR <SELECT QUERY>;

DECLARE c CURSOR
LOCAL STATIC READ_ONLY FORWARD_ONLY
FOR <SELECT QUERY>;
In fact the first version represents a bug in the undocumented stored procedure sp_MSforeachdb which makes it skip databases if the status of any database changes during execution. I subsequently wrote my own version of the stored procedure (see here) which both fixed the bug (simply by using the latter version of the syntax above) and added several parameters to control which databases would be chosen.
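For completeness, here is a hedged sketch of a full loop using the lighter-weight options, iterating over online databases (the PRINT stands in for whatever per-database work you need to do):

DECLARE @name sysname;

DECLARE c CURSOR
    LOCAL STATIC READ_ONLY FORWARD_ONLY
FOR SELECT name FROM sys.databases WHERE state = 0;   -- 0 = ONLINE

OPEN c;
FETCH NEXT FROM c INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @name;   -- placeholder for the per-database work
    FETCH NEXT FROM c INTO @name;
END
CLOSE c;
DEALLOCATE c;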
A lot of people think that a methodology is not a cursor because it doesn't say DECLARE CURSOR. I've seen people argue that a while loop is faster than a cursor (which I hope I've dispelled here) or that using FOR XML PATH to perform group concatenation is not performing a hidden cursor operation. Looking at the plan in a lot of cases will show the truth.
In a lot of cases cursors are used where set-based is more appropriate. But there are plenty of valid use cases where a set-based equivalent is much more complicated to write, for the optimizer to generate a plan for, both, or not possible (e.g. maintenance tasks where you're looping through tables to update statistics, calling a stored procedure for each value in a result, etc.). The same is true for a lot of big multi-table queries where the plan gets too monstrous for the optimizer to handle. In these cases it can be better to dump some of the intermediate results into a temporary structure first. The same goes for some set-based equivalents to cursors (like running totals). I've also written about the other way, where people almost always think instinctively to use a while loop / cursor and there are clever set-based alternatives that are much better.
UPDATE 2013-07-25
Just wanted to add some additional blog posts I've written about cursors, which options you should be using if you do have to use them, and using set-based queries instead of loops to generate sets:
Best Approaches for Running Totals - Updated for SQL Server 2012
What impact can different cursor options have?
Generate a Set or Sequence Without Loops: [Part 1] [Part 2] [Part 3]
The issue with cursors in SQL Server is that the engine is set-based internally, unlike other DBMS's like Oracle which are cursor-based internally. This means that when you create a cursor in SQL Server, temporary storage needs to be created and the set-based resultset needs to be copied over to the temporary cursor storage. You can see why this would be expensive right off the bat, not to mention any row-by-row processing that you might be doing on top of the cursor itself. The bottom line is that set-based processing is more efficient, and often times your cursor-based operation can be done better using a CTE or temp table.
That being said, there are cases where a cursor is probably acceptable, as you said for one-off operations. The most common use I can think of is in a maintenance plan where you may be iterating through all the databases on a server executing various maintenance tasks. As long as you limit your usage and don't design whole applications around RBAR (row-by-agonizing-row) processing, you should be fine.
In general cursors are a bad thing. However in some cases it is more practical to use a cursor and in some it is even faster to use one. A good example is a cursor through a contact table sending emails based on some criteria. (Not to open up the question if sending an email from your DBMS is a good idea - let's just assume it is for the problem at hand.) There is no way to write that set-based. You could use some trickery to come up with a set-based solution to generate dynamic SQL, but a real set-based solution does not exist.
However, a calculation involving the previous row can be done using a self join. That is usually still faster than a cursor.
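A hedged sketch of that previous-row calculation, with made-up table and column names (the self-join assumes a gapless Id sequence; the LAG variant needs SQL Server 2012+):

-- Self-join on the previous row:
SELECT t.Id, t.Amount, t.Amount - prev.Amount AS Delta
FROM dbo.Readings AS t
LEFT JOIN dbo.Readings AS prev
    ON prev.Id = t.Id - 1;        -- assumes Id has no gaps

-- Window-function equivalent (SQL Server 2012+):
SELECT Id, Amount,
       Amount - LAG(Amount) OVER (ORDER BY Id) AS Delta
FROM dbo.Readings;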
In all cases you need to balance the effort involved in developing a faster solution. If nobody cares whether your process runs in 1 minute or in one hour, use whatever gets the job done quickest. If you are looping through a dataset that grows over time, like an [orders] table, try to stay away from a cursor if possible. If you are not sure, do a performance test comparing a cursor-based with a set-based solution on several significantly different data sizes.
I had always disliked cursors because of their slow performance. However, I found I didn't fully understand the different types of cursors and that in certain instances, cursors are a viable solution.
When you have a business problem that can only be solved by processing one row at a time, then a cursor is appropriate.
So to improve performance with the cursor, change the type of cursor you are using. Something I didn't know was, if you don't specify which type of cursor you are declaring, you get the Dynamic Optimistic type by default, which is the one that is the slowest for performance because it's doing lots of work under the hood. However, by declaring your cursor as a different type, say a static cursor, it has very good performance.
See these articles for a fuller explanation:
The Truth About Cursors: Part I
The Truth About Cursors: Part II
The Truth About Cursors: Part III
I think the biggest con against cursors is performance; however, not laying out a task in a set-based approach would probably rank second. Third would be readability and layout of the tasks, as they usually don't have a lot of helpful comments.
SQL Server is optimized to run the set based approach. You write the query to return a result set of data, like a join on tables for example, but the SQL Server execution engine determines which join to use: Merge Join, Nested Loop Join, or Hash Join. SQL Server determines the best possible joining algorithm based upon the participating columns, data volume, indexing structure, and the set of values in the participating columns. So using a set based approach is generally the best approach in performance over the procedural cursor approach.
They are necessary for things like dynamic SQL pivoting, but you should try and avoid using them whenever possible.
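As a hedged illustration of that pivoting case (dbo.Sales, Category, and Amount are made-up names), a cursor can build the dynamic column list before the pivot is executed:

DECLARE @cols nvarchar(max) = N'', @cat nvarchar(128);

DECLARE c CURSOR LOCAL STATIC READ_ONLY FORWARD_ONLY FOR
    SELECT DISTINCT Category FROM dbo.Sales;

OPEN c;
FETCH NEXT FROM c INTO @cat;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- accumulate a comma-separated, quoted column list
    SET @cols += CASE WHEN @cols = N'' THEN N'' ELSE N', ' END + QUOTENAME(@cat);
    FETCH NEXT FROM c INTO @cat;
END
CLOSE c;
DEALLOCATE c;

DECLARE @sql nvarchar(max) = N'
SELECT *
FROM (SELECT Category, Amount FROM dbo.Sales) AS s
PIVOT (SUM(Amount) FOR Category IN (' + @cols + N')) AS p;';

EXEC sys.sp_executesql @sql;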

SQL Server performance and fully qualified table names

It seems to be fairly accepted that including the schema owner in the query increases db performance, e.g.:
SELECT x FROM [dbo].Foo vs SELECT x FROM Foo.
This is supposed to save a lookup, because SQL Server will otherwise look for a Foo table belonging to the user in the connection context.
Today I was told that always including the database name improves the performance the same way, even if you are querying the database you selected in your connection string:
SELECT x FROM MyDatabase.[dbo].Foo
Is there any truth to this? Does this make sense as a coding standard? Does any of this (even the first example) translate to measurable benefits?
Are we talking about a few cycles for an extra dictionary lookup on the database server vs more bloated SQL and extra concatenation on the web server (or other client)?
One thing to keep in mind is that this is a compile-time binding, not an execution-time one. So if you execute the same query 1 million times, only the first execution will 'hit' the lookup time; the rest will reuse the same plan, and plans are pre-bound (names are already resolved to object IDs).
In this case I would personally prefer readability over the tiny performance increase that this could possibly cause, if any.
SELECT * FROM Foo
Seems a lot easier to scan than:
SELECT * FROM MyDatabase.[dbo].Foo
Try it out? Just loop through a million queries of both and see which one finishes first.
I'm guessing it's a load of bunk though. The developers of MS SQL spend millions of hours researching the efficiency of search algorithms and storage methods, only to be thwarted by users not specifying fully qualified table names? Laughable.
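A crude sketch of such a test (Foo and column x are from the question; @dummy just swallows the result, declared int here purely for the sketch, and OPTION (RECOMPILE) forces recompilation so the name lookup actually happens each time - otherwise, as noted above, the cached plan skips it):

DECLARE @dummy int, @i int = 0, @start datetime2 = SYSDATETIME();
WHILE @i < 100000
BEGIN
    SELECT @dummy = x FROM Foo OPTION (RECOMPILE);                    -- unqualified
    -- SELECT @dummy = x FROM MyDatabase.dbo.Foo OPTION (RECOMPILE);  -- fully qualified
    SET @i += 1;
END
PRINT DATEDIFF(ms, @start, SYSDATETIME());   -- elapsed milliseconds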
SQL Server will not make an extra lookup if the default schema is the same. The schema should be included if it's not the default and the query is used a lot.
The database name will not benefit query performance. I think this can be seen with the Estimated Execution Plan in Management Studio.
As Spencer said - try it, but of course make sure you clear the cache each time, as it will interfere with your results.
http://www.devx.com/tips/Tip/14401
I also would be surprised if it made any appreciable difference.
