I work with legacy systems that have tens of thousands of lines of stored procedure code, where many of the stored procedures are obsolete and no longer used. There doesn't seem to be a way to check execution history, so my question is: might it be a good idea to start each stored procedure by inserting a row into a table that keeps a record of the execution?
It could be very simple, like:
insert into
executionHistory (
name,
date
)
select
'spName',
getdate()
-- then rest of procedure
I imagine this could be very useful for cleaning up old, unused code, and it might also be handy when trying to decide where to optimize. I mean, it's better to shave 10 seconds off the execution time of a procedure that is executed 50 times a day than to save 10 minutes of execution time on a procedure that is only used once a year.
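For reference, a minimal sketch of what that could look like end to end; the table and column names are the ones from the snippet above, and sys.procedures assumes SQL Server 2005 or later:
-- one-off: the logging table
create table executionHistory (
    name sysname not null,
    date datetime not null
)
-- first statement of every stored procedure (avoids hard-coding the name)
insert into executionHistory (name, date)
select object_name(@@procid), getdate()
-- later: procedures that never show up in the log are cleanup candidates
select p.name
from sys.procedures p
left join executionHistory h on h.name = p.name
where h.name is null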
There is a tracing option (SQL Profiler) in SQL Server. You could take a trace of a day's SQL activity and see which sprocs are executed there.
This will give you a good idea of where to focus your optimisations.
Because you're using SQL Server 2008, I wouldn't do what rwmnau suggests, because that would mean you have to modify all your stored procedures.
SQL Server 2008 introduces a feature called Extended Events, and SQL Server Auditing is built on top of them. Extended Events is a high-performance tracing system.
By using SQL Server Auditing you can trace your system without the overhead of SQL Trace.
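As a rough illustration only (the session and target names are made up, and in production you would normally add filters and use a file target), an Extended Events session that records stored procedure completions on SQL Server 2008 could look something like this:
-- capture completed module (stored procedure / trigger / function) executions
CREATE EVENT SESSION proc_usage ON SERVER
ADD EVENT sqlserver.module_end
ADD TARGET package0.ring_buffer;

ALTER EVENT SESSION proc_usage ON SERVER STATE = START;

-- the collected events can then be read back (as XML) via
-- sys.dm_xe_sessions joined to sys.dm_xe_session_targets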
I think your idea is simple enough and would accomplish your goal. Though it would involve modifying every SP, it's the route I would choose. Then you can ensure that you're getting an accurate recording of all activity on the database.
Another poster suggested you do a trace - while this works for short periods, it's only going to catch the times you're watching. You'd have to make sure your traces cover any important, high-traffic periods, like month-end financial closing, and even then you're missing other times that you don't think are a big deal, so you're being subjective.
Related
We currently have performance issues, as I'm sure most data-driven systems do.
Currently, they basically fall into 2 categories that I think a single solution can solve:
Stored procedures sometimes get automatically recompiled with a bad plan, which causes them to run really slowly. The reason for this is that the set of parameters they first get recompiled with is not representative/normal/optimal. The stored procedure then runs really slowly until it is recompiled again and picks up a better plan.
Due to the dynamic nature of how SQL Server works, as a table grows and as different parts of the system query it differently, the indexes need to change, or a code change is required to remove sub-optimal coding, like ORs, functions in WHERE conditions, etc.
Are there any system tables that track the cost of stored procedures?
We need to create a script that runs every hour for a whole week (7 days) and stores data about each stored procedure (execution time, cost, and so on). From this we can generate a list of the stored procedures that perform worst and run the longest, and then tune those stored procedures to improve performance.
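For what it's worth, a bare-bones version of such an hourly snapshot might look like the following; sys.dm_exec_procedure_stats requires SQL Server 2008 or later, and procStatsHistory is a table you would have to create yourself:
-- run from an hourly SQL Agent job; the DMV numbers are cumulative since the
-- plan was cached, so store the capture time and diff consecutive snapshots
insert into procStatsHistory (capture_time, proc_name, execution_count,
                              total_elapsed_time, total_logical_reads)
select getdate(),
       object_name(ps.object_id, ps.database_id),
       ps.execution_count,
       ps.total_elapsed_time,
       ps.total_logical_reads
from sys.dm_exec_procedure_stats ps
where ps.database_id = db_id();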
Start here: sp_BlitzFirst from Brent Ozar Unlimited, or BrentOzarULTD/SQL-Server-First-Responder-Kit on GitHub.
Quoting Kendra Little on her page for performance monitoring:
"It’s only worth it to write your own tools when nobody offers a solution that fits you."
I have a question about speeding up SQL Server 2000.
I want to use a caching mechanism, but I don't know how to use it.
I found some articles about it, but can you give an example of how to use it?
For example:
there is a stored procedure - sp_stackOverFlow - that executes every time a user enters the program/web site, and it clearly slows things down.
Is there a way of caching the results of sp_stackOverFlow every 2 minutes or so?
Your question isn't clear, not least because it isn't obvious what the stored procedure does. If the results are different for every execution and/or user then they cannot easily be cached anyway.
But more fundamentally, "I have a slow stored procedure" does not automatically mean "I need caching"; the database engine itself already caches data when it can. You need to understand why the stored procedure is running slowly: underpowered hardware, poor TSQL code, poor data model design and poor indexing are all very common issues that have major effects on performance.
You can find a lot of information on this site and by Googling about how to troubleshoot slow execution times for procedures, but you can start by reviewing the execution plan for the procedure in Query Analyzer and tracing the execution using Profiler. That will immediately tell you which statements are taking the most time, if there are table scans happening etc.
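For example, on SQL Server 2000 you can get per-statement I/O and timing figures directly in Query Analyzer; sp_stackOverFlow here is just the procedure named in the question:
-- show logical reads and CPU/elapsed time for each statement in the procedure
SET STATISTICS IO ON
SET STATISTICS TIME ON

EXEC sp_stackOverFlow   -- pass its usual parameters here

SET STATISTICS IO OFF
SET STATISTICS TIME OFF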
Because performance troubleshooting is potentially complex, if you need more assistance please post short, specific questions about individual issues. If the code for your stored procedure is very short (< 30 lines formatted) people may be willing to comment on it directly, otherwise it would be better to post only the individual SQL statements that are causing a problem.
Finally, mainstream support for MSSQL 2000 stopped 3 years ago, so you should definitely look into upgrading to a newer version. The performance tools in newer versions will make resolving your issue much easier.
We have a site in development that when we deployed it to the client's production server, we started getting query timeouts after a couple of hours.
This was with a single user testing it and on our server (which is identical in terms of Sql Server version number - 2005 SP3) we have never had the same problem.
One of our senior developers had come across similar behaviour in a previous job, and he ran a query to manually update the statistics; the problem magically went away - the query returned in a few milliseconds.
A couple of hours later, the same problem occurred. So we again manually updated the statistics and, again, the problem went away. We've checked the database properties and, sure enough, auto update statistics is TRUE.
As a temporary measure, we've set a task to update stats periodically, but clearly, this isn't a good solution.
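For context, such a scheduled task usually boils down to something as simple as this (dbo.SomeLargeTable is a placeholder for whichever tables the slow queries hit):
-- refresh statistics for the whole database...
EXEC sp_updatestats

-- ...or, more thoroughly, for specific tables
UPDATE STATISTICS dbo.SomeLargeTable WITH FULLSCAN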
The developer who experienced this problem before is certain it's an environment problem - when it occurred for him previously, it went away of its own accord after a few days.
We have examined the SQL Server installation on their db server and it's not what I would regard as normal. Although they have SQL 2005 installed (and not 2008), there's an empty "100" folder in the installation directory. There are also MSSQL.1, MSSQL.2, MSSQL.3 and MSSQL.4 folders (the last of which is where the executables and data are actually stored).
If anybody has any ideas we'd be very grateful - I'm of the opinion that rather than the statistics failing to update, they are somehow becoming corrupt.
Many thanks
Tony
Disagreeing with Remus...
Parameter sniffing allows SQL Server to guess the optimal plan for a wide range of input values. Sometimes it's wrong, and the plan is bad because of an atypical value or a poorly chosen default.
I used to be able to demonstrate this on demand by changing a default between 0 and NULL: plan and performance changed dramatically.
A statistics update will invalidate the plan, so the query will be compiled and cached again the next time it is used.
The workarounds are one of the following (a sketch of the first two follows the list):
parameter masking
use the OPTIMIZE FOR UNKNOWN hint
duplicate "default"
See these SO questions
Why does the SqlServer optimizer get so confused with parameters?
At some point in your career with SQL Server does parameter sniffing just jump out and attack?
SQL poor stored procedure execution plan performance - parameter sniffing
Known issue?: SQL Server 2005 stored procedure fails to complete with a parameter
...and Google search on SO
Now, Remus works for the SQL Server development team. However, this phenomenon is well documented by Microsoft on their own website, so blaming developers is unfair.
How Data Access Code Affects Database Performance (MSDN mag)
Suboptimal index usage within stored procedure (MS Connect)
Batch Compilation, Recompilation, and Plan Caching Issues in SQL Server 2005 (an excellent white paper)
It's not that the statistics are outdated. What happens is that when you update statistics, all plans get invalidated and some bad cached plan gets evicted. Things run smoothly until a bad plan gets cached again and causes slow execution.
The real question is why you get bad plans to start with. We can get into lengthy technical and philosophical arguments about whether a query processor should create a bad plan in the first place, but the thing is that, when applications are written in a certain way, bad plans can happen. The typical example is having a WHERE clause like (@somevariable is null) or (somefield = @somevariable). Ultimately, 99% of bad plans can be traced to developers writing queries with C-style procedural expectations instead of sound, set-based, relational processing.
What you need to do now is identify the bad queries. It's really easy: just check sys.dm_exec_query_stats; the bad queries will stand out in terms of total_elapsed_time and total_logical_reads. Once you've identified the bad plan, you can take corrective measures, which vary from query to query.
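A starting point for that check might be something like the following (adjust TOP and the ORDER BY to taste):
-- worst statements currently in the plan cache, by total elapsed time
SELECT TOP 20
       qs.total_elapsed_time,
       qs.total_logical_reads,
       qs.execution_count,
       SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
                 (CASE WHEN qs.statement_end_offset = -1
                       THEN DATALENGTH(st.text)
                       ELSE qs.statement_end_offset
                  END - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_elapsed_time DESC;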
I have a problem with this one stored procedure that works 99% of the time throughout our application, but will time out when called from a particular part of the application.
The table only has 3 columns and contains about 300 records. The stored proc will only bring back one record and looks like this:
"Select * from Table Where Column = @parameter"
When the sp is executed in management studio it takes :00 seconds.
The stored procedure is used a lot in our application, but only seems to time out in one particular part of our program. I can't think of any reason why such a simple sp would time out. Any ideas?
This is a vb.net desktop application and using sql server 2005.
You've got some code that's already holding a lock on the table so it can't be read.
try
SELECT * FROM Table WITH (NOLOCK) WHERE Column = @parameter
We had a very similar problem, we had several stored procedures that would keep timing out in the application (~30 sec), but run fine in SSMS.
The short-term solution that we used was to re-run the stored procedures, which fixed the problem temporarily. If this also fixes the problem temporarily for you, then you should investigate parameter sniffing problems.
For further information see http://dannykendrick.blogspot.co.nz/2012/08/sql-parameter-sniffing.html
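If it does turn out to be parameter sniffing, an easy way to reproduce the temporary fix without touching the application is to mark the procedure for recompilation (the procedure name here is a placeholder):
-- the procedure gets a brand new plan the next time it runs
EXEC sp_recompile 'dbo.usp_TimingOutProc'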
You need to get performance metrics. Use SQL Profiler to confirm whether it is the SP that's slow at that time or something else. If it is the SQL that's slow at that time, consider things like locks that may be forcing your query to wait. Let us know and we might be able to give more specific information at that point.
If it's not the SP but, say, the VB code, a decent profiler like Red Gate's ANTS or JetBrains' dotTrace may help.
I have an application that runs a huge stored procedure on SQL Server 2000. Usually it takes about 1 minute to complete, but occasionally it will take MUCH longer.
Just now I ran it three times in a row in my test system. It took 1:12, 1:23, and 55:25.
What would cause that behavior? There are other things going on in the database, so I wonder if it has something to do with locks. How can I catch this in the act?
Create a trace and examine it in Profiler. That should at least point towards where the problem lies - in your procedure or elsewhere.
It's probably parameter sniffing: based on the input, SQL Server chose a different query plan.
Another possibility is that a separate query was running at the same time and locked everything up.
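To check the locking theory while a slow run is actually in progress, this works on SQL Server 2000 (which lacks the newer DMVs):
-- sessions currently blocked, and the spid that is blocking them
SELECT spid, blocked, waittype, waittime, lastwaittype, cmd
FROM master..sysprocesses
WHERE blocked <> 0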