The actual execution plan of a query shows a total of 2.040 s (summing the time taken at every step), but the query takes 52 s to complete (the time shown at the bottom of SQL Server Management Studio).
Why is there such a large difference between the two times? And how can I reduce the 52 seconds?
The elapsed time in SSMS includes network round-trip time, client render time, etc.
The execution plan indicates how long it took the server to process the query, not how long it took to stream the results to you.
If you're outputting data to a messages pane or, worse, a grid, that isn't free. As SSMS draws the data in the grid, the server is sending you rows over the network, but the query engine isn't doing anything anymore. Its job is done.
The execution plan itself only knows about the time the query took on the server. It has no idea about network latency or slow client processing. SSMS will tell you how much time it spent doing that, and the execution plan doesn't have any visibility into it at all because it's generated before SSMS has done its thing.
The execution plan is produced on the server. It doesn't even know what SSMS is, never mind what it's doing with your 236,833 rows. Let's think about it another way:
You buy some groceries, and the cash register receipt says it took you 4 minutes to check out. Then you take the long way home, stop for coffee, drop the groceries on the way into the house, and spend 20 minutes remembering where everything goes. Finally, you sit down on the couch. The cash register receipt doesn't update afterwards to add your travel time and organization time, which is equivalent to what SSMS is doing when it struggles to show you 236,833 rows.
And this is why we don't try to time the performance of a query by adding in unrealistic things that won't happen in the real world: no real-world user can process 200,000 rows of anything. So really, don't draw any conclusions about real-world performance from testing in a client GUI. Is your application going to do pagination, aggregation, or something else so an end user doesn't have to wait for 200,000 rows to render? If so, test that. If not, reconsider.
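If pagination is the plan, here is a minimal sketch of server-side paging (OFFSET/FETCH, SQL Server 2012 and later; the table and column names are placeholders), so the client only ever renders one page at a time:

    -- Hypothetical table/columns; page through results instead of
    -- streaming every row to the client.
    DECLARE @PageNumber int = 1,
            @PageSize   int = 50;

    SELECT OrderId, OrderDate, OrderTotal
    FROM dbo.Orders
    ORDER BY OrderDate DESC, OrderId          -- a deterministic ORDER BY is required
    OFFSET (@PageNumber - 1) * @PageSize ROWS
    FETCH NEXT @PageSize ROWS ONLY;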
To make this faster in the meantime, try the "Discard results after execution" option in SSMS.
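Another way to get the same isolation without the GUI option is to keep the rows on the server entirely, for example by assigning them to variables. A rough sketch, with hypothetical table and column names:

    -- Measure server-side cost only: SET STATISTICS TIME/IO report what the
    -- engine did, and assigning columns to variables means nothing is
    -- streamed back for SSMS to render.
    SET STATISTICS TIME ON;
    SET STATISTICS IO ON;

    DECLARE @id   int,
            @name nvarchar(100);

    SELECT @id   = t.SomeId,       -- placeholder for the real query's columns
           @name = t.SomeName
    FROM dbo.SomeLargeTable AS t;

    SET STATISTICS TIME OFF;
    SET STATISTICS IO OFF;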
Related
I'm using a Snowflake trial account to do a performance test.
I run 9 heavy queries (each taking about 20 minutes on an XS warehouse) at the same time and watch the warehouse or history pane. However, the page takes far too long to display; about 30 seconds.
I think the cloud services layer (like a Hadoop head node?) doesn't have adequate resources for this.
Is it because I'm using the trial version? Will it still happen if I use the Enterprise or Business Critical editions?
The "cloud services", is actually unrelated to your warehouses, and 9 queries is not enough overload that. But at the same time the trial accounts might be on slightly underpowered SC layer, but I am not sure why that would be the case. The allocated credit spend is just that.
I am puzzled what you are trying to "test" about running many slow for the server sized queries at the same time?
When you say the page takes 30 seconds to load, do you mean if you do nothing the query execution status/time is only updated every ~30 seconds, or if you do a full page reload it is blank for 30 seconds?
I have spent a lot of time optimizing a query for our DB admin. When I run it on our test server and our live server I get similar results. However, once it is actually running in production, with the query being utilized a lot, it runs very poorly.
I could post code, but the query is over 1,400 lines and it would take a lot of time to obfuscate. All data types match and I am using indexes in my queries. It is broken down into 58 temp tables. When I test it using SQL Sentry I get 707 CPU, 90,007 reads, and a time of 1.2 seconds to run a particular exec statement. The same parameters in production last week used 10,831 CPU, 2.9 million reads, and took 33.9 seconds to run.
My question is: what could be making the query use more CPU and reads in production than in my one-off tests? Like I mentioned, I could post code if needed, but I am looking for answers that point me in a direction to troubleshoot such a discrepancy. This particular procedure is run a lot during our billing cycle, so it is hitting the server hundreds of times a day, and it will become thousands as we near the 15th of the month.
Thanks
ADDENDUM:
I know how to optimize queries, as this is a regular part of my job duties. As I stated in a comment, my test runs don't usually differ this much from actual production. I don't know the SQL Server administration side of things and wondered if there was something I needed to be aware of that might affect my query when the server is under a heavier load. This may be outside the scope of this forum, but I thought I would reach out for ideas from the community.
UPDATE:
I am still troubleshooting this, just replying to some of the comments.
The execution plans are the same between my one-off tests and the production-level executions. I am testing in the same environment, on the same server as production. This is a procedure for report data. The data returned, the records, and the tables hit are all the same. I am testing with the same parameters that took astronomical amounts of time to process during production; the difference between what I am doing and what happened during the production run is the load on the server. Not all of the production executions take a long time. The vast majority are within the thresholds of acceptable CPU and reads, but when the outliers have such a large discrepancy, it is 500 times the CPU and 150 times the reads of the average execution (even with the same parameters).
I have zero control over the server side of things. I only can control how to code the proc. I realize that my proc is large and without it, it is probably impossible to get a good answer on this forum. I also realize that even with the code posted here I would not likely get a good answer due to the size of the procedure.
I was/am looking only for insights, directions of things to look at, using anecdotal evidence of issues other developers have overcome when dealing with similar problems. Comments that state the size of my code is the reason why performance is in the toilet, and that code that size is rarely needed, are not helpful and quite frankly ignorant. I am working with legacy c# code and a database that deals with millions of transactions for billing. There are thousands of tables in dozens of interconnected databases with an ERD that would blow your mind, I am optimizing a nightmare. That said, I am very good at it, and when myself and the database administrators are stumped as to why we see such stark numbers I thought I would widen my net and see if this larger community had any ideas.
Below is an image showing a report of the top 32 executions of this procedure in a 15-minute window. Even among the top 32, the numbers are not consistent. The image below that shows all of the temp tables and the main query that I just ran on the same server, using the parameters of the #1 resource hog from the first image. The rows are the different temp tables, with a sum at the bottom. The sum shows 1.5 (1.492) seconds to run, with 534 CPU and 92,045 reads. Contrast that with the 33.9 seconds, 10,831 CPU, and 2.9 million reads from yesterday.
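For reference, this is roughly the kind of query that can be used to compare the outliers against the average straight from the plan cache. It is only a sketch, and the procedure name is a placeholder:

    -- Per-statement stats for one procedure: look for statements whose max
    -- cost dwarfs their average (a common sign of parameter sniffing or of
    -- concurrency-related slowdowns).
    SELECT TOP (20)
        SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
            ((CASE qs.statement_end_offset
                  WHEN -1 THEN DATALENGTH(st.text)
                  ELSE qs.statement_end_offset
              END - qs.statement_start_offset) / 2) + 1) AS statement_text,
        qs.execution_count,
        qs.total_logical_reads / qs.execution_count AS avg_reads,
        qs.max_logical_reads,
        qs.total_worker_time / qs.execution_count   AS avg_cpu_us,
        qs.max_worker_time                          AS max_cpu_us
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    WHERE st.objectid = OBJECT_ID('dbo.MyBillingProc')   -- hypothetical name
    ORDER BY qs.max_logical_reads DESC;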
We are experiencing seemingly random timeouts on a two app (one ASP.Net and one WinForms) SQL Server application. I had SQL Profiler run during an hour block to see what might be causing the problem. I then isolated the times when the timeouts were occurring.
There are a large number of Reads but there is no large difference in the reads when the timeout errors occur and when they don't. There are virtually no writes during this period (primarily because everyone is getting time outs and can't write).
Example:
A timeout occurs at 11:37. There are an average of 1,500 transactions a minute leading up to the timeout, with about 5,709,219 reads.
That seems high, EXCEPT that during a period in between timeouts (over a ten-minute span), there are just as many transactions per minute and the reads are just as high. The reads do spike a little before the timeout (jumping to over 6,005,708), but during the non-timeout period they go as high as 8,251,468. The timeouts are occurring in both applications.
The bigger problem here is that this only started occurring in the past week and the application has been up and running for several years. So yes, the Profiler has given us a lot of data to work with but the current issue is the timeouts.
Is there something else that I should be possibly looking for in the Profiler or should I move to Performance Monitor (or another tool) over on the server?
One possible culprit is database size. The database is fairly large (>200 GB), but the AutoGrow setting is 1 MB. Could it be that SQL Server is growing the files and that activity doesn't show up in Profiler?
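For what it's worth, here is a quick sketch for checking the current growth settings and switching to a saner fixed increment (the database and logical file names are placeholders):

    -- Show size and growth increment for each file of the database
    SELECT name,
           size * 8 / 1024 AS size_mb,
           CASE WHEN is_percent_growth = 1
                THEN CAST(growth AS varchar(10)) + ' %'
                ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
           END AS growth_setting
    FROM sys.master_files
    WHERE database_id = DB_ID('MyDatabase');

    -- Grow in sensible fixed chunks instead of 1 MB at a time
    ALTER DATABASE MyDatabase
    MODIFY FILE (NAME = MyDataFile, FILEGROWTH = 512MB);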
Many thanks
Thanks to the assistance here, I was able to identify a few bottlenecks but I wanted to outline my process to possibly help anyone going through this.
The #1 problem was found to be a high number of LCK_M_S (shared lock) waits, found via SQLDiag and other tools.
I ran Profiler traces over two different periods of time. Comparing durations for similar statements led me to find that certain UPDATE calls were consistently taking the same amount of time, over 10 seconds.
Further investigation found that these UPDATE stored procs were updating a table with a trigger that was taking too much time. Since a trigger may hold locks on the table while it completes, it was affecting every other query. (See the comment section: I originally stated, incorrectly, that the trigger would always lock the table; in our case, the trigger was preventing the lock from being released.)
Watch the use of Triggers for doing major updates.
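Not the exact queries used here, but a rough sketch of the kind of checks that surface this pattern (the table name is a placeholder):

    -- Who is blocked, what are they waiting on, and what is the blocker running?
    SELECT r.session_id,
           r.blocking_session_id,
           r.wait_type,          -- LCK_M_S = waiting for a shared lock
           r.wait_time,
           t.text AS running_sql
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE r.blocking_session_id <> 0
      AND r.sql_handle IS NOT NULL;

    -- List triggers on the table the slow UPDATEs were hitting
    SELECT name, is_disabled
    FROM sys.triggers
    WHERE parent_id = OBJECT_ID('dbo.MyUpdatedTable');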
I have created a vb.net application that uses a SQL Server database at a remote location over the internet.
There are 10 vb.net clients working at the same time.
The problem is the delay that happens when inserting a new row or retrieving rows from the database: the form appears to freeze for a while whenever it deals with the database. I don't want to use a BackgroundWorker to overcome the freeze problem.
I want to eliminate that delay, or at least decrease it as much as possible.
Any tips, advice, or information are welcome. Thanks in advance.
Well, 2 problems:
The form appears to freeze for a while when it deals with the database; I don't want to use a BackgroundWorker to overcome the freeze problem.
Vanity, arrogance, and reality rarely mix. ANY operation that takes more than a SHORT time (0.1-0.5 seconds) SHOULD run async; that is the only way to keep the UI responsive. Regardless of what the issue is, if an operation CAN take longer or is going over the internet, decouple it from the UI.
But:
The problem is the delay that happens when inserting new records or retrieving records from the database,
So, what IS the problem? Seriously. Is it a latency problem (too many round trips: work on more efficient SQL, batch things up, and don't send 20 queries each waiting for a result before the next), or is the server overloaded? It is not clear from the question whether this really is a latency issue.
At the end:
I want to eliminate that delay time
Pray to whatever god you believe in to change the rules of physics (mostly the speed of light), or to your local physicist to finally get quantum teleportation working at a low cost. Packets take time to travel; there is no way to change that.
Check whether you use too many round trips. NEVER (!) talk to SQL Server remotely with raw SQL - put a web service in between and make it fit the application, possibly even down to a 1:1 match with your screens, so you can ask for data and send updates in ONE round trip, not a dozen. When we did something similar 12 years ago with our custom ORM in .NET, we used a data access layer that accepted multiple queries in one run and returned multiple result sets for them - so a form with 10 drop-downs could ask for all 10 data sets in ONE round trip. If a request takes 0.1 seconds of internet time, then this saves 0.9 seconds. We had a form with about 100 (!) round trips (creating a tree) and got that down to fewer than 5 - going from "takes time" to "wow". Plus it WAS async, sorry.
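To make the "many result sets in one round trip" idea concrete, a minimal sketch (procedure and table names are made up); the client reads each result set in turn from a single call:

    -- One round trip fills several drop-downs instead of ten separate queries
    CREATE PROCEDURE dbo.GetOrderFormLookups
    AS
    BEGIN
        SET NOCOUNT ON;

        SELECT CustomerId, CustomerName FROM dbo.Customers;  -- drop-down 1
        SELECT ProductId,  ProductName  FROM dbo.Products;   -- drop-down 2
        SELECT CarrierId,  CarrierName  FROM dbo.Carriers;   -- drop-down 3
    END;

On the client side, SqlDataReader.NextResult moves through the result sets returned by that one call.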
Then realize moving a lot of data is SLOW unless you have instant high bandwidth connections.
This is exactly what async is for - if you have transfer-time or latency issues that cannot be optimized away and you refuse to use async, you will go on delivering a crappy experience.
You can execute the SQL call asynchronously and let Microsoft deal with the background process.
http://msdn.microsoft.com/en-us/library/7szdt0kc.aspx
Please note, this does not decrease the response time from the SQL server, for that you'll have to try to improve your network speed or increase the performance of your SQL statements.
There are a few things you could potentially do to speed things up; however, it is difficult to say without seeing the code.
If you are using generic inserts - start using stored procedures
If you are closing the connection after every command then... well, don't. Establishing a connection is typically one of the more 'expensive' operations.
Increase the pipe between the two.
Add an index (a sketch follows this list)
Investigate your SQL Server; perhaps it is not set up in a preferred manner.
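For the "add an index" point, a hedged sketch; the table, columns, and index name are placeholders, and the real choice should come from the actual execution plan:

    -- Index the columns the retrieval queries filter and sort on;
    -- INCLUDE covers the selected column so no key lookups are needed.
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_OrderDate
    ON dbo.Orders (CustomerId, OrderDate DESC)
    INCLUDE (OrderTotal);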
Does anyone know how to specifically identify the portion of the overall compilation time that queries spent waiting on statistics (after the stats are deemed stale) to be updated in SQL Server 2005/2008? (In case that point of conversation comes up: I do not want to turn on asynchronous statistics updates.) Thanks!
Quantum,
I doubt that level of detail and granularity is exposed in SQL Server. What is the real question here? Are you trying to compare how long queries take to recompile when the stats are deemed stale against their normal compilation time? Is this a one-off request, or are you planning to put something into production and measure the difference over a period of time?
If it is the former, you can get that info by figuring out the times individually (SET STATISTICS TIME ON) and combining them. If it is the latter, I am NOT sure there is anything currently available in SQL Server.
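A rough sketch of that "measure it individually" idea (the table name is a placeholder): SET STATISTICS TIME prints a "SQL Server parse and compile time" line, so you can compare a plan-cache hit against a forced recompile after a statistics refresh.

    SET STATISTICS TIME ON;

    -- Cached plan: parse and compile time should be close to 0 ms
    SELECT COUNT(*) FROM dbo.SomeTable WHERE SomeColumn = 42;

    -- Refresh the statistics, then force a recompile to approximate the
    -- cost paid when the optimizer notices the stats have changed
    UPDATE STATISTICS dbo.SomeTable;
    SELECT COUNT(*) FROM dbo.SomeTable WHERE SomeColumn = 42 OPTION (RECOMPILE);

    SET STATISTICS TIME OFF;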
PS: I haven't checked Extended Events (in DENALI) in detail for this activity but there could be something there for you. You may want to check that out if you are really interested.