We recently migrated from SQL Server 2005 to 2012 and are seeing huge slowness in stored procedure execution: a procedure takes a long time on its first attempt but runs in the expected time on the second. I was under the impression this was due to plan caching, so I ran "dbcc freeproccache" to test that theory. But after clearing the cache, the procedures run in the expected time from the first execution through the nth; I cannot reproduce the first-attempt slowness that way. To improve the performance
I have already tried the following, which has not worked for me:
Rebuilt the indexes.
Updated the historic statistics using 100% sampling (see the sketch below).
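For reference, a minimal sketch of that statistics refresh (the table name is a placeholder, not from the question):

-- Full-scan (100% sample) statistics update on one table:
UPDATE STATISTICS dbo.SomeTable WITH FULLSCAN;

-- Or database-wide; note that sp_updatestats uses the default sampling rate
-- rather than a full scan:
EXEC sp_updatestats;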
I have a stored procedure which loads a huge amount of data into a table. Suddenly its run time has increased dramatically: it used to take ~2 minutes, now it takes ~20 minutes. What could be the reason, and how can I debug the issue? I am using SQL Server 2014.
I have a performance issue with a method that calls org.hibernate.Query#list. The duration of the method call varies over time: it usually lasts about one second, but some days, for maybe half a day, it takes about 20 seconds.
How can this issue be resolved? How can the cause for this issue be determined?
More elements in the analysis of this issue:
Performance issues have been observed in the production environment, but the issue described here is in a test environment.
The issue has been observed for at least several weeks but the date of its origin is unknown.
The underlying query is a view (select) in MS SQL Server (2008 R2).
Database reads/writes in this test environment come from only a few users at a time: the database server should not be under heavy load, and the data changes only slowly over time.
Executing the exact query directly from a MS SQL Server client always takes less than a second.
Duplicating the database (using the MS SQL Server client to back up the database and restore the backup as a new database) does not reproduce the problem: the method call is fast on the duplicate.
The application uses Hibernate (4.2.X) and Java 6.
Upgrading from Hibernate 3.5 to 4.2 has not changed anything about the problem.
The method call is always with the same arguments: there is a test method that does the operation.
Profiling the method call (using hprof) shows that when it is long, most of the time is spent on "Object.wait" and "ref.ReferenceQueue.remove".
Using log4jdbc to log the underlying query duration during the method call shows the following results:
query < 1s => method ~ 1s
query ~ 3s => method ~ 20s
The query generates POJOs as described in the most up-voted answer to this issue.
I have not tried using a constructor with all attributes, as described in the most up-voted answer to this other similar issue, because I do not understand what effect that would have.
A possible cause of apparently random slowness with a Hibernate query is the flushing of the session. If some statements (inserts, updates, deletes) in the same transaction are unflushed, the list method of Query might trigger an autoflush (depending on the current flush mode). If that's the case, the performance issue might not even be caused by the query on which list() is called.
It seems the issue is with MS SQL Server and the updating of the procedure's plan: after running DBCC FREEPROCCACHE and DBCC DROPCLEANBUFFERS, the query and method times become consistent.
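For reference, that cache-clearing step looks like this (best confined to a test server, since it forces fresh compilations and cold reads for everything that follows):

-- Evict all cached execution plans, then drop clean pages from the buffer pool:
DBCC FREEPROCCACHE;
DBCC DROPCLEANBUFFERS;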
A solution to the issue may be to upgrade MS SQL Server: upgrading to MS SQL Server 2008 R2 SP2 resulted in the issue not appearing anymore.
It seems the difference between the duration of the query and that of the method grows exponentially with the number of objects being returned: most of the time is spent in a socket read of the result set.
We have a site in development that when we deployed it to the client's production server, we started getting query timeouts after a couple of hours.
This was with a single user testing it, and on our server (which is identical in terms of SQL Server version - 2005 SP3) we have never had the same problem.
One of our senior developers had come across similar behaviour in a previous job, and he ran a query to manually update the statistics and the problem magically went away - the query returned in a few milliseconds.
A couple of hours later, the same problem occurred. So we again manually updated the statistics and, again, the problem went away. We've checked the database properties and, sure enough, auto update statistics is TRUE.
As a temporary measure, we've set a task to update stats periodically, but clearly, this isn't a good solution.
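The periodic task amounts to something like this (a sketch; in practice it runs as a scheduled SQL Server Agent job):

-- Refresh statistics across the database; sp_updatestats only touches
-- statistics that have had row modifications since the last update.
EXEC sp_updatestats;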
The developer who experienced this problem before is certain it's an environment problem - when it occurred for him previously, it went away of its own accord after a few days.
We have examined the SQL Server installation on their DB server and it's not what I would regard as normal. Although they have SQL 2005 installed (and not 2008), there's an empty "100" folder in the installation directory. There are also MSSQL.1, MSSQL.2, MSSQL.3 and MSSQL.4 folders (the last is where the executables and data are actually stored).
If anybody has any ideas we'd be very grateful - I'm of the opinion that rather than the statistics failing to update, they are somehow becoming corrupt.
Many thanks
Tony
Disagreeing with Remus...
Parameter sniffing allows SQL Server to guess the optimal plan for a wide range of input values. Sometimes it's wrong, and the plan is bad because of an atypical value or a poorly chosen default.
I used to be able to demonstrate this on demand by changing a default between 0 and NULL: plan and performance changed dramatically.
A statistics update will invalidate the plan. The query will thus be compiled and cached when next used.
The workarounds are one of the following (sketched just after this list):
parameter masking
the OPTIMIZE FOR UNKNOWN hint (SQL Server 2008+)
duplicate "default"
See these SO questions
Why does the SqlServer optimizer get so confused with parameters?
At some point in your career with SQL Server does parameter sniffing just jump out and attack?
SQL poor stored procedure execution plan performance - parameter sniffing
Known issue?: SQL Server 2005 stored procedure fails to complete with a parameter
...and Google search on SO
Now, Remus works for the SQL Server development team. However, this phenomenon is well documented by Microsoft on their own website, so blaming developers is unfair:
How Data Access Code Affects Database Performance (MSDN mag)
Suboptimal index usage within stored procedure (MS Connect)
Batch Compilation, Recompilation, and Plan Caching Issues in SQL Server 2005 (an excellent white paper)
It's not that the statistics are outdated. When you update statistics, all plans get invalidated and any bad cached plan gets evicted. Things run smoothly until a bad plan gets cached again and causes slow execution.
The real question is why you get bad plans in the first place. We can get into lengthy technical and philosophical arguments about whether a query processor should create a bad plan at all, but the fact is that when applications are written in a certain way, bad plans can happen. The typical example is a where clause like (@somevariable is null) or (somefield = @somevariable), as sketched below. Ultimately, 99% of bad plans can be traced to developers writing queries with C-style procedural expectations instead of sound, set-based, relational processing.
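A sketch of that catch-all pattern (table, column, and parameter names are hypothetical):

-- One cached plan must serve both branches of the OR. If it is compiled for
-- @CustomerId IS NULL (return everything), the same scan is reused when
-- @CustomerId targets a single customer, and vice versa.
CREATE PROCEDURE dbo.SearchOrders
    @CustomerId int = NULL
AS
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE (@CustomerId IS NULL) OR (CustomerId = @CustomerId);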
What you need to do now is identify the bad queries. It's really easy: just check sys.dm_exec_query_stats; the bad queries will stand out in terms of total_elapsed_time and total_logical_reads. Once you have identified a bad plan, you can take corrective measures, which vary from query to query.
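A starting point for that check (a sketch; the TOP count is arbitrary, and text comes back as the whole batch containing each statement):

-- Worst offenders in the plan cache by cumulative elapsed time; order by
-- total_logical_reads instead to rank by I/O.
SELECT TOP (20)
    qs.total_elapsed_time,
    qs.total_logical_reads,
    qs.execution_count,
    st.text AS batch_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_elapsed_time DESC;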
I work with legacy systems that have tens of thousands of lines of stored procedure code, where many of the stored procedures are obsolete and no longer used. There doesn't seem to be a way to check execution history, so my question is whether it might be a good idea to start each stored procedure by inserting a row into a table that keeps a record of executions.
It could be very simple, like:
insert into executionHistory (
    name,
    date
)
select
    'spName',
    getdate()
-- then rest of procedure
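One small refinement, if you go this route: @@PROCID returns the object ID of the currently executing module, so the name doesn't have to be hardcoded in every procedure (a sketch, assuming the same executionHistory table):

-- Same logging row, with the procedure name resolved automatically:
insert into executionHistory (name, date)
select object_name(@@procid), getdate()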
I imagine this could be very useful for cleaning up old, unused code, and it might also be handy when deciding where to optimize. I mean, it's better to shave 10 seconds off the execution time of a procedure that runs 50 times a day than to save 10 minutes on a procedure that runs once a year.
There is a tracing option (SQL Profiler) in SQL Server. You could take a trace of a day's SQL activity and see which sprocs are executed there.
This will give you a good idea of where to focus your optimisations.
Because you're using SQL Server 2008, I wouldn't do what rwmnau suggests, since that would mean modifying all your stored procedures.
SQL Server 2008 introduces a feature called Extended Events, and SQL Server Auditing is built on top of them. Extended Events are a high-performance tracing system.
By using SQL Server Auditing you can trace your system without the overhead of SQL Trace.
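For illustration only, a sketch of an Extended Events session that records module (stored procedure) completions; the session name and file paths are made up, and the availability of the module_end event in your particular build is an assumption to verify:

-- Create and start a server-wide session writing to a file target.
-- (package0.asynchronous_file_target is the SQL Server 2008 name; it was
-- renamed event_file in SQL Server 2012.)
CREATE EVENT SESSION ProcUsage ON SERVER
ADD EVENT sqlserver.module_end
ADD TARGET package0.asynchronous_file_target (
    SET filename     = N'C:\Traces\ProcUsage.xel',   -- hypothetical paths
        metadatafile = N'C:\Traces\ProcUsage.xem');
GO
ALTER EVENT SESSION ProcUsage ON SERVER STATE = START;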
I think your idea is simple enough and would accomplish your goal. Though it would involve modifying every SP, it's the route I would choose. Then you can ensure that you're getting an accurate recording of all activity on the database.
Another poster suggested you do a trace - while this works for short periods, it's only going to catch the times you're watching. You'd have to make sure you trace across any important, high-traffic periods, like month-end financial closing, and even then you're missing other times you don't think are a big deal, so you're being subjective.
I have an application that runs a huge stored procedure on SQL Server 2000. Usually it takes about 1 minute to complete, but occasionally it will take MUCH longer.
Just now I ran it three times in a row in my test system. It took 1:12, 1:23, and 55:25.
What would cause that behavior? There are other things going on in the database, so I wonder if it has something to do with locks. How can I catch this in the act?
Create a trace and examine it in Profiler. That should at least point towards where the problem lies - in your procedure or elsewhere.
It's probably parameter sniffing: based on the input, SQL Server chose a different query plan.
Another possibility is that a separate query was running at the same time and locked everything up.
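If locking is the suspect, one way to catch it in the act on SQL Server 2000 is to query sysprocesses while the procedure is running slowly (a sketch; on 2005 and later you would use sys.dm_exec_requests instead):

-- Sessions that are currently blocked, and what they are waiting on:
SELECT spid, blocked, waittime, lastwaittype, cmd, loginame
FROM master..sysprocesses
WHERE blocked <> 0;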