MSSQL slow stored proc on first run, uncached indexes? - sql-server

I have a 20GB database in SQL Server 2014 behind an IIS web application. The DB is queried 24/7, so it's never inactive and auto-close is off, but there's a manually triggered "daily work queue" stored procedure which performs inconsistently on its first execution.
When it's run for the first time in the morning it is slow - if you wait and execute it again, it responds immediately. There is minimal other load on the server at the time, page life expectancy is healthy, and the necessary indexes to support this query should be in place - or at least no additional indexes are being recommended.
Been trying to approach this as a query optimisation problem and getting nowhere, so began exploring other ideas.
I restored the DB from backup onto a local dev server - the first execution is slow, the second is fast, and I can see the large (500MB+) indexes loaded via sys.dm_os_buffer_descriptors. If I run DBCC DROPCLEANBUFFERS to simulate everything being unloaded from the buffer pool, the next execution is slow again and I can watch the indexes being cached, at which point it executes quickly.
This seems to fit the pattern we're experiencing, and it doesn't seem unreasonable to assume SQL Server might evict data that hasn't been used for 10+ hours.
Have I missed something more obvious? Assuming I'm on the right path, I can't be the first person to run across this issue, so there must be an elegant solution out there...
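For reference, a buffer-pool check along these lines (a sketch; the joins shown cover in-row and row-overflow data only) shows how much of each index is currently cached:
-- Sketch: how much of each index sits in the buffer pool for the current database
SELECT o.name AS table_name,
       i.name AS index_name,
       COUNT(*) * 8 / 1024 AS buffered_mb   -- pages are 8 KB each
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au ON au.allocation_unit_id = bd.allocation_unit_id
JOIN sys.partitions AS p ON p.hobt_id = au.container_id
JOIN sys.indexes AS i ON i.object_id = p.object_id AND i.index_id = p.index_id
JOIN sys.objects AS o ON o.object_id = p.object_id
WHERE bd.database_id = DB_ID()
  AND au.type IN (1, 3)   -- in-row and row-overflow allocation units
GROUP BY o.name, i.name
ORDER BY buffered_mb DESC;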

If your "daily work queue" change the data. Then index need to be rebuild and reload in the cache. Is the same as start an old car you need to wait a little bit.
What I do is plan those "work" at night and also program some basic query so index are load in the cache.
Also check your hardware (disk, memory) the first time you run the query the db has to bring the data from disk (slow) and store it in memory (fast but small size).
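If you go the pre-warming route, something along these lines (table, index and procedure names are placeholders) scheduled as an early-morning SQL Agent job would pull the relevant pages back into the buffer pool before the first real execution:
-- Sketch: warm the buffer pool before the first real run (placeholder names)
SELECT COUNT_BIG(*) FROM dbo.WorkQueue WITH (INDEX(IX_WorkQueue_Status));
SELECT COUNT_BIG(*) FROM dbo.WorkQueueDetail WITH (INDEX(IX_WorkQueueDetail_QueueId));
-- Or simply execute the procedure itself once and discard the result:
-- EXEC dbo.usp_DailyWorkQueue;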

Related

SQL server - high buffer IO and network IO

I have a performance tuning question on SQL Server.
I have a program that needs to run every month and it takes more than 24hrs to finish. I need to tune this program in the hope that I can decrease the running time to 12 hrs or less.
As this program wasn't developed by us, I can't check its content or modify it. All I can do is open SQL Server Profiler and Activity Monitor to trace and analyze the SQL content. I have disabled unused triggers and done some housekeeping, but the running time only decreased by 1 hr.
I found that the network I/O and buffer I/O are high, but I don't know the cause or meaning of this.
Can anyone tell me the cause of these two issues (network I/O and buffer I/O)? Are there any suggestions for optimizing this program?
Thank you!
According to your description, I think your I/O is normal and your question really comes down to one thing: one procedure is too slow. The solution:
1. Open SSMS.
2. Find the procedure.
3. Click the button named "Display Estimated Execution Plan".
4. Fix the procedure based on what the plan shows.
To me it seems like your application reads a lot of data into the application, which would explain the figures. Still, I would check out the following:
Is there blocking? That can easily be a huge waste of time if the process is just waiting for something else to complete. It doesn't look like that based on your statistics, but it's still important to check.
Are the tables indexed properly? Good indexes to match search criteria / joins. If there's huge key lookups, covering indexes might make a big difference. Too many indexes / unnecessary indexes can slow down updates.
You should look into the plan cache to see the statements responsible for the most I/O or CPU usage (a sketch of such a query is at the end of this answer).
Are the query plans correct for the most costly operations? You might have statistics that are outdated or other optimization issues.
If the application transfers a lot of data to and/or from the database, is the network latency & bandwidth good enough or could it be causing slowness? Is the server where the application is running a bottleneck?
If these don't help, you should probably post a new question with detailed information: The SQL statements that are causing the issues, table & indexing structure of the involved tables with row counts and query plans.
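As promised above, a sketch of a plan-cache query that surfaces the heaviest statements by logical reads (numbers are cumulative since each plan was cached):
-- Sketch: top cached statements by logical reads
SELECT TOP (10)
       qs.total_logical_reads,
       qs.total_worker_time,
       qs.execution_count,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.text)
                   ELSE qs.statement_end_offset END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;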

Drastic difference in the time cost for the same stored procedure

I am using SQL Server 2008 R2.
The process is actually like this:
First, about 2 million records are pulled from a remote server,
then a join is done locally,
the final result is thousands of records.
The time cost varies from less than 1 min to 30 mins.
And after I experienced the 30-minute delay, the subsequent runs all seem to take only around 3 mins.
It is the same data, same SP.
What could cause this drastic difference?
Update
I deleted the SP, restarted the SQL Server service, and re-created the SP. The execution took only 50 seconds!
What's wrong?
The behaviour you describe seems extreme - but (if you exclude the client), there are a few logical places to look.
The first is the query execution on the database server. It's worth using the Query Analyzer tool to see if it's using any indices - by far the most common reason for variable performance of database queries is that the query is not using (the right) indices, and that therefore the impact of the query cache plays a big part. SQL Server will cache a lot of data, and the first run of your proc populates that cache; the second run is faster because it hits the cache. After a while, the cache goes stale, and running the proc slows down again.
The second possibility is that the database server is wobbly - it may just not be powerful enough to do all the work it's supposed to do. In that case, one moment you get lucky, have all the server resources to yourself; the next, someone else is running a query and yours slows down. That would make all queries slow, not just this one - so it doesn't sound likely.
Third possibility is networking weirdness - as Phil says, "thousands of records" is nothing too scary, but if they're big, and your network is saturated with pictures of kittens, it might have an impact. Again, that would manifest in general network slowness, and is unlikely to explain a delay of 30 minutes...
Fourth, is anything going on at the same time?
Fifth, does your SP use dynamically generated SQL statements? This would cause the SP not to be precompiled. If possible, separate such statements into child SPs.
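On that last point, the usual fix for dynamic SQL is to parameterise it with sp_executesql so the plan can be cached and reused rather than rebuilt for every distinct literal - a sketch (table and parameter names are illustrative):
-- Illustrative: parameterised dynamic SQL gets a reusable cached plan
DECLARE @CustomerId int = 42;  -- placeholder value
DECLARE @sql nvarchar(max) = N'SELECT * FROM dbo.Orders WHERE CustomerId = @CustomerId';
EXEC sys.sp_executesql @sql, N'@CustomerId int', @CustomerId = @CustomerId;
-- Compare with EXEC of a string built by concatenating literal values, which tends to compile a new plan per value.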

Prevent Caching in SQL Server

Having looked around the net using Uncle Google, I cannot find an answer to this question:
What is the best way to monitor the performance and responsiveness of production servers running IIS and MS SQL Server 2005?
I'm currently using Pingdom and would like it to point to a URL which basically mimics a 'real world query' but for obvious reasons do not want the query to run from cache. The URL will be called every 5 minutes.
I cannot clear out the cache, buffers, etc., since this would impact negatively on the production server. I have tried using a randomly generated number within the SELECT statement in order to generate unique queries, but the cached query is still used.
Is there any way to simulate the NO_CACHE in MySQL?
Regards
To clear the SQL buffer and plan cache:
DBCC DROPCLEANBUFFERS
GO
DBCC FREEPROCCACHE
GO
A little info about these commands from MSDN:
Use DROPCLEANBUFFERS to test queries with a cold buffer cache without shutting down and restarting the server. (source)
Use DBCC FREEPROCCACHE to clear the plan cache carefully. Freeing the plan cache causes, for example, a stored procedure to be recompiled instead of reused from the cache. (source)
SQL Server does not have a results cache like MySQL or Oracle, so I am a bit confused about your question. If you want the server to recompile the plan for a stored procedure instead of reusing the cached one, you can execute it WITH RECOMPILE. You can drop your buffer cache, but that would affect all queries, as you know.
At my company, we test availability and performance separately. I would suggest you use this query just to make sure your system is working together from front-end to database, then write other tests that check the individual components to judge performance. SQL Server comes with an amazing number of ways to check whether you are experiencing bottlenecks and where they are. I use PerfMon and DMVs extensively. Using PerfMon, I check CPU and page life expectancy, as well as how long my disk queue is. Using DMVs, I can find out if my queries are taking too long (sys.dm_exec_query_stats) or if wait times are long (sys.dm_os_wait_stats).
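For example, a couple of the DMV checks I mean look roughly like this (a sketch; the wait-type exclusion list is deliberately minimal):
-- Sketch: top waits accumulated since the last service restart
SELECT TOP (10) wait_type, wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('LAZYWRITER_SLEEP', 'SLEEP_TASK', 'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;
-- Sketch: cached statements with the highest average elapsed time
SELECT TOP (10) total_elapsed_time / execution_count AS avg_elapsed_time,
       execution_count, total_logical_reads
FROM sys.dm_exec_query_stats
ORDER BY avg_elapsed_time DESC;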
The two biggest bottlenecks with IIS tend to be CPU and memory, and IIS comes with its own suite of PerfMon objects to query, but I am not as familiar with those.

SQL Server 2008 R2, don't use up all resources for a specific task

We have a server that hosts an application running on SQL Server 2008 R2. Throughout the week we have a stored procedure that is called around 400,000 times per day by 25 users. It runs very efficiently and memory usage remains low.
On Saturday, when nobody is on the system, a large update (several million inserts and updates) runs, and after it completes all the memory on the server is consumed. The server only hosts this application and I do not have any problems with SQL Server taking up all the RAM it can get, but part of me thinks it's wasteful that all the memory gets consumed by this one job. I was wondering if there was a way to tell SQL Server not to consume so much memory when a specific job runs. Since this job runs on the weekend when nobody is on the system, it is not so time-critical.
In the meantime I was simply planning on restarting the SQL service. I was wondering if there is a better way of handling this?
Thanks
The memory is getting used because SQL Server caches the data pages as it reads them from disk, which is a good thing. The next time a data page needs to be accessed, it will be read from memory, which is much faster than reading it from disk.
SQL will flush out the cached pages as they expire, or if other pages are loaded.
I would not worry at all about this, especially if SQL Server is the only big process running on that server.
If you really want to flush the cache yourself, you can issue the following statement:
DBCC DropCleanBuffers
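If the aim were instead to cap how much memory SQL Server takes in the first place, rather than flushing it, the standard knob is the max server memory setting - a sketch (the 8192 MB value is purely illustrative):
-- Sketch: cap the buffer pool rather than flushing it
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'max server memory (MB)', 8192;
RECONFIGURE;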
I was wondering if there was a way to tell SQL Server not to consume so much memory when a specific job runs. Since this job runs on the weekend when nobody is on the system, it is not so time-critical.
Hahaha. What do you care? SQL Server will repurpose its caches the moment other stuff starts requesting other data, you know.
I do not have any problems with SQL Server taking up all the RAM it can get
It is VERY good that you agree that the behaviour you configured (!) (or allowed by the default configuration) is acceptable.
Standard setup: SQL Server uses as much memory as possible as an LRU (least recently used) page cache. A request answered from memory is faster than one that hits the disc.
A huge update with no other activity will naturally push other items out of memory - how should SQL Server magically decide not to use the resources? WHY should it?
The next day, the newly requested data will go back into the cache pretty much immediately anyway.
In the meantime I was simply planning on restarting the SQL service.
So, when you don't like the furniture in your house, you burn it down?
You could restart Windows entirely - then you also clear out the OS-level caches.
You don't really get any result out of it. Basically, what you did is a cure worse than stale data in the caches. Unused data gets flushed anyway, so you restart the server as a way to tell SQL Server it can forget old data - which it would do anyway - just because you don't like the memory being used after you allowed it to be?
;) Sounds like a read through the documentation is in order.
Seriously, there is no problem; you are making this up. SQL Server behaves sensibly by any logical general standard, and you don't really achieve ANYTHING except taking the server down and faking some numbers by restarting it. Read up on how SQL Server uses memory. It is not as if a page stored in memory stays there forever.

"Priming" a whole database in SQL Server for first-hit speed

For a particular app I have a set of queries that I run each time the database has been restarted for any reason (usually a server reboot). These "prime" SQL Server's page cache with the common core working set of the data so that the app is not unusually slow the first time a user logs in afterwards.
One instance of the app is running on an over-specced arrangement where the SQL box has more RAM than the size of the database (4GB in the machine, the DB is under 1.5GB currently and unlikely to grow too much relative to that in the near future). Is there a neat/easy way of telling SQL Server to go away and load everything into RAM?
It could be done the hard way by having a script scan sysobjects & sysindexes and run SELECT * FROM <table> WITH(INDEX(<index_name>)) ORDER BY <index_fields> for every key and index found, which should cause every used page to be read at least once and therefore end up in RAM - but is there a cleaner or more efficient way? All planned stops of the database server happen outside normal working hours (all the users are at most one timezone away and, unlike me, none of them work at silly hours), so a priming process that slows users down until it completes - more than an unprimed working set would - is not an issue.
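For what it's worth, the "hard way" sketched above can be scripted fairly compactly against the newer catalog views rather than sysobjects/sysindexes - a rough sketch that touches the leaf level of every regular index via a counted scan:
-- Sketch: generate a counted scan of every clustered/nonclustered index so its pages get read into cache
DECLARE @sql nvarchar(max) = N'';
SELECT @sql = @sql + N'SELECT COUNT_BIG(*) FROM ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
            + N' WITH (INDEX(' + QUOTENAME(i.name) + N'));' + CHAR(13)
FROM sys.indexes AS i
JOIN sys.tables AS t ON t.object_id = i.object_id
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE i.type IN (1, 2)        -- clustered and nonclustered only; skips heaps, XML, spatial
  AND i.is_disabled = 0;
EXEC sys.sp_executesql @sql;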
I'd use a startup stored proc that invoked sp_updatestats
It will benefit queries anyway
It already loops through everything anyway (you have indexes, right?)
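A minimal sketch of wiring that up (procedure and database names are placeholders; startup procedures have to live in master):
-- Sketch: register a startup proc that refreshes statistics in the app database
USE master;
GO
CREATE PROCEDURE dbo.usp_WarmOnStartup
AS
BEGIN
    -- The three-part name runs the system proc in the context of the target database (placeholder name)
    EXEC YourAppDb.sys.sp_updatestats;
END
GO
EXEC sys.sp_procoption @ProcName = N'dbo.usp_WarmOnStartup',
                       @OptionName = 'startup',
                       @OptionValue = 'on';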
