Is there a limit on the number of result sets a stored procedure can return in SQL Server? Or is there any other component between the server and a .NET client using sqlncli11 that limits it? I'm thinking of really large numbers, like 100,000 result sets.
I couldn't find a specific answer to this in the Microsoft docs or here on SO.
My use case:
A stored procedure that iterates over a cursor and produces around 100 rows per iteration. I could collect all the rows in a temp table first, but since this is a long-running operation I want the client to start processing results sooner. Also, the temp table can get quite large, and the execution plan shows 98% of the cost on the INSERT INTO part.
I'm thinking of really large numbers, like 100,000 result sets.
Ah, I hope you have a LOT of time.
100k result sets means 100k SELECT statements.
Just switching from one result set to the next will take, in total, a long time. Even at 1 ms per switch, that is 100 seconds.
Is there a limit on the number of result sets a stored procedure can return in SQL Server?
Not to my knowledge. Remember, result sets are not part of any real metadata - there is a stream of data, an end marker, then the next stream. The number of result sets a procedure returns is not defined anywhere (it can vary from call to call).
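To illustrate the pattern the question describes, here is a minimal sketch of a procedure that emits one SELECT per cursor iteration, each of which becomes its own result set in the stream. The table and column names (dbo.SourceTable, BatchKey) are made up.

```sql
-- Hypothetical sketch: one result set per cursor iteration.
CREATE PROCEDURE dbo.StreamBatches
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @Key int;

    DECLARE batch_cursor CURSOR LOCAL FAST_FORWARD FOR
        SELECT DISTINCT BatchKey FROM dbo.SourceTable;

    OPEN batch_cursor;
    FETCH NEXT FROM batch_cursor INTO @Key;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- Every SELECT here is streamed to the client as its own result set.
        SELECT s.*
        FROM dbo.SourceTable AS s
        WHERE s.BatchKey = @Key;

        FETCH NEXT FROM batch_cursor INTO @Key;
    END

    CLOSE batch_cursor;
    DEALLOCATE batch_cursor;
END
```

On the client side, the reader's NextResult() call (ADO.NET) or IMultipleResults (OLE DB / sqlncli11) simply walks the stream until the final end marker.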
Also the temp table can get quite large
I have seen temp tables with hundreds of GB.
and the execution plan shows 98% of the cost on the INSERT INTO part.
That basically indicates that not much else is happening. Note that unless you are doing comparative optimization, the relative cost is not what matters - the absolute cost is.
Have you considered a middle ground? Collect the data and return it in grouped result sets, e.g. one result set per 100 iterations.
But yes, staging into a temp table has a lot of overhead. It also means you cannot start returning data before all processing is finished. That can be a bummer. Your approach allows processing to start while the SP is still working on more data.
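A rough sketch of that middle ground, reusing the hypothetical dbo.SourceTable (and made-up columns Col1, Col2) from the sketch above: buffer the rows per iteration and flush one combined result set every 100 iterations, which cuts the number of result sets - and the switching overhead - by a factor of 100.

```sql
-- Hypothetical sketch: one result set per 100 cursor iterations.
DECLARE @Buffer TABLE (BatchKey int, Col1 int, Col2 nvarchar(100));
DECLARE @Key int, @i int = 0;

DECLARE batch_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT DISTINCT BatchKey FROM dbo.SourceTable;

OPEN batch_cursor;
FETCH NEXT FROM batch_cursor INTO @Key;

WHILE @@FETCH_STATUS = 0
BEGIN
    INSERT INTO @Buffer (BatchKey, Col1, Col2)
    SELECT BatchKey, Col1, Col2
    FROM dbo.SourceTable
    WHERE BatchKey = @Key;

    SET @i += 1;

    IF @i % 100 = 0
    BEGIN
        SELECT * FROM @Buffer;     -- one result set for the last 100 iterations
        DELETE FROM @Buffer;
    END

    FETCH NEXT FROM batch_cursor INTO @Key;
END

IF EXISTS (SELECT 1 FROM @Buffer)
    SELECT * FROM @Buffer;         -- flush the remainder

CLOSE batch_cursor;
DEALLOCATE batch_cursor;
```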
When you use the TOP clause in a Snowflake query, does the engine stop searching for rows once it has enough to satisfy the TOP X that needs to be returned?
I think it depends on the rest of your query. For example, if you use TOP 10 but don't supply an ORDER BY, then yes, it will stop as soon as 10 records have been found, but your results are non-deterministic.
If you do use an ORDER BY, then the entire query has to be executed before the top 10 results can be returned, but your results will be deterministic.
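For illustration, a sketch against a hypothetical orders table with an order_date column (the same idea applies to SQL Server's TOP and Snowflake's TOP/LIMIT):

```sql
-- Without ORDER BY, any 10 rows satisfy the query, so scanning can
-- stop as soon as 10 rows have been produced.
SELECT TOP 10 * FROM orders;

-- With ORDER BY, the ordering must be established (a sort, or an
-- ordered scan) before the top 10 rows are known.
SELECT TOP 10 * FROM orders ORDER BY order_date DESC;
```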
Here is a real example. If I run a SELECT on the SAMPLE_DATA.TPCH_SF10000.CUSTOMER table with LIMIT 10, it returns in 1.8 seconds (no caching). This table has 1,500,000,000 rows in it. If I then check the query plan, it has only scanned a tiny portion of the table: 1 out of 6,971 partitions.
You can see that it will return when 10 records have been streamed back from the initial table scan since there is nothing more it has to do.
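For reference, the test described above would look something like this in Snowflake (keeping the table name given in the answer; timings and partition counts will of course vary):

```sql
-- With no ORDER BY, the query can return as soon as 10 rows have been
-- streamed from the scan, so only a tiny fraction of the table's
-- micro-partitions are ever touched.
SELECT *
FROM SAMPLE_DATA.TPCH_SF10000.CUSTOMER
LIMIT 10;
```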
From my testing and understanding, it does not stop. Looking at the execution plan, the "limit" step is typically the last step, applied after full processing. Also, if you take a query that runs for, say, 20 seconds without a LIMIT (or similar) and add the LIMIT, you will typically not see any difference in the execution time (but be aware of fetch time). I typically run query performance testing in the UI to avoid issues with client-side tools that can mislead you due to limits on fetching and/or their use of cursors.
This is a newbie question.
I have two tables in a SQL database. Both have just a dozen columns of string, int or date types, no indexes, no stored procedures. With a SELECT * FROM statement I get ~30,000 rows per second, but with INSERT INTO ... I get fewer than 1,000 inserts per second.
Is this difference (a factor of roughly 30) what I should expect? (I actually expected comparable speed on the insert side.)
Insert speed varies wildly with the method of inserting; there is both a disk and a CPU component. From the low speed you report, I guess that you are inserting rows one by one, each in its own transaction. That pretty much maximizes CPU and disk overhead: every insert becomes its own write to disk.
Make yourself familiar with efficient ways of inserting. There are plenty, with varying degrees of performance and of development time required to program them.
To get you started with something simple: Enclose many (100+) inserts in one transaction. Insert in batches.
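For instance, a minimal sketch of the "many inserts per transaction" approach against a hypothetical dbo.TargetTable; table-valued parameters, BULK INSERT or SqlBulkCopy will be faster again, at the cost of more development effort:

```sql
-- Commit once per batch instead of once per row.
BEGIN TRANSACTION;

INSERT INTO dbo.TargetTable (IntCol, TextCol, DateCol)
VALUES (1, 'a', '2024-01-01'),
       (2, 'b', '2024-01-02'),
       (3, 'c', '2024-01-03');   -- a single statement accepts up to 1000 value rows

-- ...further INSERT statements for the rest of the batch go here...

COMMIT TRANSACTION;
```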
Are there any tools to specifically monitor/detect for parameter sniffing problems as opposed to those which report queries that take a long time?
I have just got hit with a parameter sniffing problem. (It wasn't too serious as it caused a report to take about 2 minutes to run instead of a few seconds if properly cached and maybe 30 seconds if recompiled. And since the report is usually only run a few times per month, it is not really a problem).
However, since I wrote the report and I knew what it did, I was curious and went investigating and using SQL Profiler, I could see a section in the query plan where the number of estimated rows was 1, but the actual number of rows was several hundred thousand.
So it struck me that if SQL Server has these figures (or at least can get them), perhaps there is some way of getting it to track and report which plans were significantly off.
You've got a couple of questions in there:
Are there any tools to specifically monitor/detect for parameter sniffing problems as opposed to those which report queries that take a long time?
To catch this, you need to monitor the procedure cache to find out when a query's execution plan changes from good to bad. SQL Server 2008 made this a lot easier by adding query_hash and query_plan_hash fields to sys.dm_exec_query_stats. You can compare the current query plan to past ones for the same query_hash, and when it changes, compare the number of logical reads or amount of worker time from the old query to the new one. If it skyrockets, you might have a parameter sniffing problem.
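A rough sketch of that kind of check against the current cache, grouping by query_hash to find queries with more than one cached plan and a wide spread in average logical reads (the grouping is the easy part; the thresholds and what counts as "skyrocketing" are up to you):

```sql
-- Sketch: queries with multiple cached plans and widely varying cost.
SELECT qs.query_hash,
       COUNT(DISTINCT qs.query_plan_hash)                AS plan_count,
       MIN(qs.total_logical_reads / qs.execution_count)  AS min_avg_reads,
       MAX(qs.total_logical_reads / qs.execution_count)  AS max_avg_reads,
       SUM(qs.execution_count)                           AS total_executions
FROM sys.dm_exec_query_stats AS qs
GROUP BY qs.query_hash
HAVING COUNT(DISTINCT qs.query_plan_hash) > 1
ORDER BY MAX(qs.total_logical_reads / qs.execution_count)
       - MIN(qs.total_logical_reads / qs.execution_count) DESC;
```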
Then again, someone might have just dropped an index, changed the code in a UDF that's being called, changed MAXDOP, or altered any one of a million settings that influence query plan behavior.
What you want is a single dashboard that shows the most resource-consuming queries in aggregate (because you might have this problem on a query that's called extremely frequently, but consumes tiny amounts of resources each time) and then shows you changes in its execution plan over time, plus lays over system and database level changes. Quest Foglight Performance Analysis does this. (I used to work for Quest, so I know the product, but I'm not shilling here.) Note that Quest sells a separate product, Foglight, that has nothing to do with Performance Analysis. I'm not aware of any other product that goes into this level of detail.
I could see a section in the query plan where the number of estimated rows was 1, but the actual number of rows was several hundred thousand.
That's not necessarily parameter sniffing - it could be bad statistics or table-variable usage, for example. To catch this kind of issue, I like the free SQL Sentry Plan Explorer tool. In its Top Operations tab, it highlights variances between estimated and actual rows.
Now, that's only for one plan at a time, and you have to know the plan first. You want to do this 24/7, right? Sure you do - but it's computationally intensive. The procedure cache can be huge (I've got clients with >100GB of procedure cache), and it's all unindexed XML. To compare estimated vs actual rows, you have to shred all that XML - and keep in mind that the procedure cache can be constantly changing under load.
What you really want is a product that could very rapidly dump the entire procedure cache into a database, throw XML indexes on it, and then compare estimates versus actual rows. I can imagine a script doing that, but I haven't seen one yet.
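For what it's worth, the first steps of such a script might look like the sketch below (table and index names are made up, and note that cached plans only carry estimated row counts - the actual counts would have to come from captured actual plans):

```sql
-- Sketch: snapshot the plan cache into a table and index the XML.
CREATE TABLE dbo.PlanCacheSnapshot
(
    plan_id     int IDENTITY(1,1) PRIMARY KEY,
    plan_handle varbinary(64) NOT NULL,
    usecounts   int           NULL,
    query_plan  xml           NOT NULL
);

INSERT INTO dbo.PlanCacheSnapshot (plan_handle, usecounts, query_plan)
SELECT cp.plan_handle, cp.usecounts, qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE qp.query_plan IS NOT NULL;

CREATE PRIMARY XML INDEX IXX_PlanCacheSnapshot_Plan
    ON dbo.PlanCacheSnapshot (query_plan);

-- Shred out the estimated rows per plan operator.
;WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT s.plan_handle,
       op.value('@PhysicalOp',   'nvarchar(128)') AS physical_op,
       op.value('@EstimateRows', 'float')         AS estimated_rows
FROM dbo.PlanCacheSnapshot AS s
CROSS APPLY s.query_plan.nodes('//RelOp') AS x(op);
```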
You said
"estimated rows was 1, but the actual number of rows was several hundred thousand."
This can be caused by table variables which don't have statistics.
Detecting parameter sniffing is difficult, but you can verify it is happening by running sp_updatestats. If the problem disappears, it's most likely parameter sniffing. If it doesn't, then you have other problems, such as table variables that are too large.
We use parameter masking consistently now (the system was developed on SQL Server 2000). We don't need it 99.9+% of the time, but the remaining <0.1% justifies it because of the loss of user confidence and the support overhead a sniffing incident entails.
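For anyone who hasn't seen the term: "parameter masking" here means copying the procedure's parameters into local variables so the optimizer cannot sniff the caller's values. A minimal sketch with hypothetical procedure, table and column names (on SQL Server 2008+ the OPTIMIZE FOR UNKNOWN hint achieves much the same effect):

```sql
-- Hypothetical sketch of parameter masking (the classic SQL 2000-era workaround).
CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerId int
AS
BEGIN
    SET NOCOUNT ON;

    -- The optimizer cannot sniff the value of a local variable, so the
    -- plan is built for an "average" row count instead of whatever value
    -- the first caller happened to pass in.
    DECLARE @MaskedCustomerId int;
    SET @MaskedCustomerId = @CustomerId;

    SELECT o.OrderId, o.OrderDate, o.Total
    FROM dbo.Orders AS o
    WHERE o.CustomerId = @MaskedCustomerId;
END
```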
You can set up a trace to record the query text of all batches / stored procedures that run with a duration greater than N seconds.
You obviously need to tailor N for your system (and probably add rules to exclude batch jobs that take a long time even during normal execution), but this should identify which queries offer the poorest performance and will also record any queries (along with their parameters) which have abnormally long execution times - potentially the result of a parameter sniffing problem.
See How to create a SQL trace using T-SQL for how to create a server-side trace. This performs better than SQL Profiler because it captures only the events you set trace events for (SQL Profiler reportedly captures all events and then filters them in the application).
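A bare-bones server-side trace along these lines might look roughly like the sketch below. The event and column IDs are the documented SQL Trace ones (10 = RPC:Completed, 12 = SQL:BatchCompleted, 1 = TextData, 13 = Duration); Duration is reported in microseconds on SQL Server 2005 and later, and the output path is just a placeholder.

```sql
-- Sketch: trace batches / RPC calls that run longer than 5 seconds.
DECLARE @TraceID int, @maxfilesize bigint, @on bit, @duration bigint;
SET @maxfilesize = 50;          -- MB per trace file
SET @on = 1;
SET @duration = 5000000;        -- 5 seconds, in microseconds

EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\slow_queries', @maxfilesize;

-- RPC:Completed (10) and SQL:BatchCompleted (12): capture TextData (1) and Duration (13).
EXEC sp_trace_setevent @TraceID, 10, 1,  @on;
EXEC sp_trace_setevent @TraceID, 10, 13, @on;
EXEC sp_trace_setevent @TraceID, 12, 1,  @on;
EXEC sp_trace_setevent @TraceID, 12, 13, @on;

-- Only keep rows with Duration >= @duration (0 = AND, 4 = greater-than-or-equal).
EXEC sp_trace_setfilter @TraceID, 13, 0, 4, @duration;

-- Start the trace.
EXEC sp_trace_setstatus @TraceID, 1;
```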
I've got a SQL 2005 DB running under a virtual environment.
To simplify things, let's say I have two SQL SELECT Queries. They both do the exact same thing. But I'm trying to analyze them for performance purposes.
Generally, I'd fire up a local DB, load up some data and using timing to compare one variant to other variants.
But in this case, since the DB is large and it's a testbox, the client has placed it on a host that's serving other VM's as well.
The DB is too large to pull down locally, so that's out (at least for now).
But my main issue is that when I run queries against the server, the timing is all over the place. I can run the exact same query four times and get timings of 7 seconds, 8 minutes, 3 minutes 45 seconds, and 15 minutes.
My first thought was use SET STATISTICS IO ON.
But that yields basically read and write stats on the tables being queried, which, depending on the variations in the queries (temp tables vs. views vs. joins, etc.), can't really be accurately compared, except in aggregate.
I then thought of SET STATISTICS TIME ON and just using the CPU time, but that seems to discount all the IO, which also doesn't make for a good baseline.
My question: is there any other statistic or performance-analysis technique that could be useful in a situation like this?
The STATISTICS IO information will still be useful. You may see significantly different numbers of reads, writes and scans that will make it obvious which query is better.
You can also view execution plan information for each query. Select Query -> Display Estimated Execution Plan to see a graphical presentation of the plan SQL Server estimates it will use, or Query -> Include Actual Execution Plan to show the plan actually used.
You can also use SET SHOWPLAN_TEXT, SET SHOWPLAN_ALL or SET SHOWPLAN_XML to view a textual display of the plan.
When viewing the results of the execution plan, you can look at the estimated cost value and compare the values for each query. The estimated cost is a relative value that can be used to compare the cost of each option.
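For completeness, the basic pattern for collecting those numbers (the table and predicate below are placeholders). On a noisy shared host, the logical-read counts tend to stay stable between runs even when elapsed times swing wildly, which makes them a better yardstick for comparing the two variants:

```sql
SET STATISTICS IO ON;     -- logical/physical reads per table
SET STATISTICS TIME ON;   -- CPU time and elapsed time

SELECT COUNT(*)
FROM dbo.SomeLargeTable
WHERE SomeColumn = 42;    -- the query variant under test

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```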
I'm profiling (SQL Server 2008) some of our views and queries to determine their efficiency with regards to CPU usage and Reads. I understand Reads are the number of logical disk reads in 8KB pages. But I'm having a hard time determining what I should be satisfied with.
For example, when I query one of our views, which in turn joins with another view and has three OUTER APPLYs with table-valued UDFs, I get a Reads value of 321 with a CPU value of 0. My first thought is that I should be happy with this. But how do I evaluate the value of 321? It tells me that 321 × 8,192 = 2,629,632 bytes of data were logically read to satisfy the query (which returned a single row of 30 columns).
How would some of you go about determining if this is good enough, or requires more fine tuning? What criteria would you use?
Also, I'm curious what is included in those 2,629,632 bytes of logical reads. Does this include all the data contained in the 30 columns of the single row returned?
The ~2.5 MB covers all the data in those 321 pages, including the other rows that share pages with the ones your query retrieved, as well as the index pages read to find your data. Note that these are logical reads, not physical reads; a read from a cached page is much 'cheaper'. Take CPU and the Profiler cost indicators into account as well when optimising.
W.r.t. how to determine an optimum 'target' for reads:
FWIW, I compare the actual reads with an optimum value, which I think of as the minimum number of pages needed to return the data for your query in a 'perfect' world.
e.g. if you calculate roughly 5 rows per page from table x, and your query returns 20 rows, the 'perfect' number of reads would be 4, plus some overhead of navigating indexes (assuming of course that the rows are clustered 'perfectly' for your query) - so utopia would be around say 5-10 pages.
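If it helps, the rough rows-per-page figure doesn't have to be calculated by hand; a sketch pulling it from the DMVs for a hypothetical dbo.TableX:

```sql
-- Approximate row density of the clustered index or heap (index_id 0 or 1).
SELECT OBJECT_NAME(ps.object_id)                  AS table_name,
       ps.row_count,
       ps.in_row_data_page_count,
       ps.row_count * 1.0
         / NULLIF(ps.in_row_data_page_count, 0)   AS rows_per_page
FROM sys.dm_db_partition_stats AS ps
WHERE ps.object_id = OBJECT_ID('dbo.TableX')
  AND ps.index_id IN (0, 1);
```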
For a performance critical query, you can use the actual reads vs 'utopian' reads to micro-optimise, e.g.:
Whether you can fit more rows per page in the cluster (the table itself), e.g. by replacing non-searched strings with varchar() rather than char(), using varchar() rather than nvarchar(), using smaller integer types, etc.
Whether the clustered index could be changed such that fewer pages would need to be fetched (e.g. if the 20 rows for the above query were scattered across different pages, then reads would be > 4)
Failing that (since you can have only one clustered index), whether covering indexes could remove the need to go to the table data (the cluster) at all, since a covering index that fits your query will have a higher 'row' density (see the sketch after this list)
And for the indexes themselves, density improvements such as tuned fill factors or narrower index keys can mean fewer index reads
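By way of illustration only (table, column and index names are hypothetical), the covering-index and fill-factor points might translate into something like:

```sql
-- The key columns support the search and sort; INCLUDE carries the
-- remaining output columns so the query never has to visit the cluster.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_Covering
    ON dbo.Orders (CustomerId, OrderDate)
    INCLUDE (Total, OrderStatus)
    WITH (FILLFACTOR = 100);   -- fully packed leaf pages: fewer pages to read,
                               -- at the cost of more page splits if the data churns
```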
You might find this article useful
HTH!
321 reads with a CPU value of 0 sounds pretty good, but it all depends.
How often is this query run? Why are table-returning UDFs used instead of just doing joins? What is the context of database use (how many users, number of transactions per second, database size, is it OLTP or data warehousing)?
The extra data reads come from:
All the other data in the pages needed to satisfy the reads done in the execution plan. Note this includes clustered and nonclustered indexes. Examining the execution plan will give you a better idea of what exactly is being read. You'll see references to all sorts of indexes and tables, and whether a seek or scan was required. Note that a scan means every page in the whole index or table was read. That is why seeks are desirable over scans.
All the related data in tables INNER JOINed to in the views, regardless of whether those JOINs are needed to give correct results for the query you're performing, since the optimizer doesn't know whether those INNER JOINs will exclude or include rows until it actually joins them.
If you provide the queries and execution plans, as requested, I would probably be able to give you better advice. Since you're using table-valued UDFs, I would also need to see the UDFs themselves, or at least the execution plans of the UDFs (which is only possible by tearing out their contents and running them outside a function context, or by converting them to stored procedures).