I'm trying to evaluate the relative performance of using a WHERE... IN clause in my SP vs UNIONs.
I've tried looking at the execution time and using SET STATISTICS TIME ON but everything just comes back as taking 0ms all the time.
So I'm trying to use SQL Server Profiler. I selected the TSQL_SPs template but even before I run the SP the trace is filling up with garbage. How do I tell it to only capture relevant data for a specific SP?
In SQL Profiler, when you are creating a new trace, you can change the trace properties. Click the Events Selection tab in the trace properties and go to Column Filters.
Then, under TextData, expand Like and add a word that is unique to the SP you are interested in, then run the trace. That way the trace will only capture data for your SP.
You can play around with the column filters to suit your needs.
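If you save the trace to a file, you can also do the same filtering afterwards in T-SQL with fn_trace_gettable. A minimal sketch, assuming a hypothetical trace file path and an SP named dbo.MyProc:

SELECT StartTime, Duration, CPU, Reads, TextData
FROM sys.fn_trace_gettable('C:\traces\sp_perf.trc', DEFAULT)  -- path to the saved trace file (placeholder)
WHERE TextData LIKE '%MyProc%'                                -- same idea as the TextData Like column filter
ORDER BY StartTime;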
I'm using SQL Server and SSRS 2012. Intermittently when running reports on live environments, changing a single
parameter can cause the entire report to lock up, show the loading icon, and not allow other parameter changes for minutes at a time.
I found a similar ticket on Microsoft Connect that said it was fixed in a cumulative update for 2008 R2, but I'm experiencing it in SSRS 2012. I'm not sure what to do. Because it's intermittent it's difficult to replicate, and I haven't been able to find any solutions for this online.
EDIT: This happens only when changing a parameter; the loading occurs before I get the chance to hit 'View Report'. It can occur with several of the parameters, and most of them have dependencies. It can be on the parent or the child parameter.
I have also checked the execution log - the time taken to retrieve and process the parameters from shared datasets is much less than the time the 'loading' box stays on the screen. Max data retrieval time is 20 seconds total, while the loading box stays up for minutes at a time.
Do you mean when you re-run the report after changing a parameter, or just when changing the parameter without hitting View Report? If you are just changing the parameter, is that parameter used to refresh other related parameters? Basically we need to determine whether the issue is with a query that's executing.
If it is, then it could be a parameter sniffing issue, where the query optimizer has used previous parameter values to build a query plan that is not suitable. You can test this quickly by adding OPTION (RECOMPILE) to the end of the affected dataset query (assuming it's just a SQL script).
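For example, a minimal sketch of what that change looks like, assuming a hypothetical dataset query over an Orders table with a @Region report parameter:

SELECT OrderID, OrderDate, TotalDue
FROM dbo.Orders
WHERE Region = @Region
OPTION (RECOMPILE);  -- compile a fresh plan each run instead of reusing one sniffed for earlier parameter values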
I recently used a free SQL profiler product from Anjlab that was great and allowed me to sort the trace results even while the trace was running. The next time I tried to do this in the SQL Profiler that actually comes with SQL Server, I didn't see a way to sort the trace results. Am I missing something, or does the profiler that comes with SQL Server really not let you do that?
When the trace is stopped, you can go to File -> Properties -> Events Selection -> Organize Columns, set up grouping by the desired sort column(s), and then select "Grouped View" rather than "Aggregated View" from the shortcut menu to get the results displayed sorted.
It doesn't look as though the grouping columns can be altered in a running trace, however, as the buttons are greyed out.
I'm not aware of a way to sort SQL Profiler output while the trace is running.
You can set up "groups" before you start a trace that include some sorting, but they're a bit clunky.
What I usually do is to have SQL Profiler save the results in a table, and do my analysis from there, using T-SQL.
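For example, a minimal sketch assuming the trace was saved to a hypothetical table dbo.MyTrace:

SELECT TextData, LoginName, CPU, Reads, Duration, StartTime
FROM dbo.MyTrace
WHERE EventClass IN (10, 12)   -- RPC:Completed and SQL:BatchCompleted
ORDER BY Duration DESC;        -- sort however you like once the results are in a table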
I've been trying to diagnose a performance issue in my database and have googled a lot on maxdop. I have seen in many places that ActualNumberOfRows, ActualRebinds etc. are shown in the Properties view, but the first thing I see is DefinedValues.
After running the query with the actual execution plan, I right-click an Index Scan, for example, and expect to see these fields so I can determine how rows are distributed amongst threads.
I am using SQL Server 2005 Enterprise.
Include the Actual Execution Plan and, in the resulting plan, click on an operator or the arrow between operators; the tooltip and the Properties window show Actual Number of Rows, and in a parallel plan the Properties window breaks it down per thread.
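If you prefer to capture the actual plan from T-SQL rather than via the SSMS toolbar button, a minimal sketch (the query is just a placeholder):

SET STATISTICS XML ON;    -- returns the actual execution plan as XML, including per-thread ActualRows
SELECT *
FROM dbo.SomeLargeTable;  -- hypothetical query you want to inspect
SET STATISTICS XML OFF;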
Profiler shows my server is overloaded by lots of calls to sp_cursorfetch, but I want to know which queries are causing all this traffic.
Profiler won't work in this case.
I ran one to test it out, and queried the table I created from it with this:
SELECT CPU, TextData FROM cpu WHERE LoginName = 'db_name_here' ORDER BY CPU DESC
-- Be sure to replace db_name_here
The result showed rows like this:
CPU    TextData
0      exec sp_cursorfetch 180150000, 16, 7415, 1
========
The only answers I found on this are:
Select statements are the only cause of these cursor fetches, and examining the indexes on your most commonly used tables is a good first step toward resolving the problem.
You may be able to filter a trace on the SPID of the cursor fetch call to see what it's doing before and after the sp_cursorfetch is run.
Only fetch a subset of the total recordset you currently retrieve. Say you grab 100 rows now; only grab 10, because 10 is the most the user can see at any given time.
In response to the comment:
Thanks for your suggestions, all of which are helpful. Unfortunately the queries in question come from a third-party application, so I don't have direct access to view or modify them. If I find a query that is a particular problem, I can submit a support request to have it reviewed. I just need to know what the query is, first. – Shandy Apr 21 at 8:17
You don't need access to the application to try out most of my aforementioned recommendations. Let's go over them:
Select statements are the only cause of these cursor fetches, and examining the indexes on your most commonly used tables is a good first step toward resolving the problem.
This is done on the database server. You just need to capture a tuning workload trace against the database and run the Database Engine Tuning Advisor against the captured workload. This will assist with improving indexes.
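As a quick cross-check on indexing, you can also query SQL Server's missing-index DMVs from the database server. A minimal sketch; treat the output as hints, not a definitive index list:

SELECT TOP (20)
    mid.statement AS table_name,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns,
    migs.user_seeks,
    migs.avg_user_impact                    -- estimated % improvement if the index existed
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
    ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
    ON migs.group_handle = mig.index_group_handle
ORDER BY migs.user_seeks * migs.avg_user_impact DESC;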
You may be able to filter a trace on the SPID of the cursor fetch call to see what it's doing before and after the sp_cursorfetch is run.
This is something you do using SQL Profiler as well.
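For example, once the trace is saved to a table (like the cpu table from the earlier query), a minimal sketch for looking at what a single session ran around the cursor fetches; the SPID value is a placeholder:

SELECT SPID, StartTime, EventClass, TextData
FROM cpu                 -- the table the trace was saved to
WHERE SPID = 58          -- hypothetical session issuing the sp_cursorfetch calls
ORDER BY StartTime;      -- shows the statements before and after each sp_cursorfetch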
Only fetch a subset of the total recordset you currently retrieve. Say you grab 100 rows now; only grab 10, because 10 is the most the user can see at any given time.
This is done at the application level.
What SQL Server version are you running? In my case the resolution ended up being an upgrade to SQL Server 2008. I would try this out to see where it goes.
Since you don't have access to the application, getting around cursor use is most likely going to be a problem. If you take a look at http://sqlpractices.wordpress.com/2008/01/11/performance-tuning-sql-server-cursors/ you can see that most alternatives involve editing the queries the application runs.
What is the real problem? Why are you profiling the database?
You might need to use Profiler for this.
I'm not sure what you are trying to achieve; if you are running a batch process, an execution plan might be helpful.
Hope it helps :)
When I execute a T-SQL query it completes in 15 seconds on SQL Server 2005.
SSRS was working fine until yesterday; I had to kill the report after 30 minutes.
I made no changes to anything in SSRS.
Any ideas? Where do I start looking?
Start your query from SSRS, then look at the Activity Monitor in Management Studio. See if the query is currently blocked, and if so, what it is blocked on.
Alternatively you can use sys.dm_exec_requests and check the same thing, without the user interface getting in the way. Look at the session executing the query from SSRS and check its blocking_session_id, wait_type, wait_time and wait_resource columns. If you find that the query is blocked, then SSRS is probably not at fault and something in your environment is blocking the query execution. If, on the other hand, the query is making progress (the wait_resource changes), then it is simply executing slowly and it's time to check its execution plan.
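A minimal sketch of that DMV check; the session_id filter is a placeholder for the SPID of the report's query (found via Activity Monitor or sys.dm_exec_sessions):

SELECT r.session_id,
       r.status,
       r.blocking_session_id,   -- non-zero means another session is blocking this one
       r.wait_type,
       r.wait_time,
       r.wait_resource,
       t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id = 58;        -- hypothetical SPID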
Have you tried making the query a stored procedure to see if that helps? This way execution plans are cached.
Updated: You could also make the query a view to achieve the same effect.
Also, SQL Profiler can help you determine what is being executed. This will let you see whether the SQL is the cause of the issue, or whether Reporting Services is spending the time rendering the report (i.e. not fetching the data).
There are a number of connection-specific things that can vastly change performance - for example the SET options that are active.
In particular, some of these can play havoc if you have a computed+persisted (and possibly indexed) column. If the settings are a match for how the column was created, it can use the stored value; otherwise, it has to recalculate it per row. This is especially expensive if the column is a promoted column from xml.
Does any of that apply?
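One way to check whether the report's connection and your own SSMS connection are running with different SET options is to compare them in sys.dm_exec_sessions. A minimal sketch; the program_name patterns are just illustrative:

SELECT session_id,
       program_name,
       ansi_nulls,
       ansi_padding,
       ansi_warnings,
       arithabort,                  -- a common culprit for application-vs-SSMS differences
       concat_null_yields_null,
       quoted_identifier
FROM sys.dm_exec_sessions
WHERE program_name LIKE '%Report%'              -- hypothetical pattern for the SSRS connection
   OR program_name LIKE '%Management Studio%';  -- your own SSMS session, for comparison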
Are you sure the problem is your query? There could be SQL Server problems. Don't forget about the ReportServer and ReportServerTempDB databases. Maybe they need some maintenance.
The first port of call for any performance problem like this is to get an execution plan. You can either get this by running a SQL Profiler trace with the Showplan XML event, or, if that isn't possible (you probably shouldn't do this on heavily loaded production servers), you can extract the cached execution plan that's being used from the DMVs.
Getting the plan from a trace is preferable, however, as that plan will include statistics about how long the different nodes took to execute. (The trace won't cripple your server or anything, but it will have some performance impact.)
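A minimal sketch of pulling cached plans from the DMVs, here just listing the most expensive cached statements by average elapsed time:

SELECT TOP (20)
       qs.execution_count,
       qs.total_elapsed_time / qs.execution_count AS avg_elapsed_time,
       st.text       AS query_text,
       qp.query_plan AS cached_plan               -- XML plan, clickable in SSMS
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)    AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY avg_elapsed_time DESC;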