Force a Plan Guide in SQL Server 2008 - sql-server

I have a query embedded in an application, which I cannot change without contacting the original developers and getting them to change it.
The query I am trying to alter is very slow to run and yields incomplete data. I have an improved version of this query, and I am looking for a way in SQL Server 2008 to essentially substitute the improved query for the original whenever the application runs it.
I have tried to create and force a plan guide based on the original query, following this article - https://learn.microsoft.com/en-us/previous-versions/sql/sql-server-2008-r2/ms190772(v=sql.105) - as well as others.
So far every attempt at plan forcing has failed and the original query still gets executed. Does anyone know if I'm taking the right approach here? Or is there a better solution to the problem I've described?

As others have said, this is not possible. If you can get the developers to change the query in the app, ask them to call a stored procedure instead. That way you can update the proc whenever you need to; it gives you much more flexibility in how the query operates and what it does.
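For reference, a plan guide can only attach hints (or pin a cached plan) to a statement whose text matches exactly; it cannot substitute different query text. A minimal sketch, with hypothetical object and guide names:

```sql
-- Sketch: attach hints to the application's query via a plan guide.
-- @stmt must match the text the application submits character for character
-- (whitespace included); a mismatch is the usual reason a guide never fires.
EXEC sp_create_plan_guide
    @name            = N'PG_SlowAppQuery',   -- hypothetical guide name
    @stmt            = N'SELECT InvoiceId, Total FROM dbo.Invoices ORDER BY Total DESC',
    @type            = N'SQL',
    @module_or_batch = NULL,
    @params          = NULL,
    @hints           = N'OPTION (MAXDOP 1, RECOMPILE)';
```

Even when the guide matches, all it can do is influence the plan chosen for the original statement; actually rewriting the query requires a change in the application, or in a view or procedure the application already references.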

Related

Sending a "Select functionname001" to SQL to easily identify long-running queries and where they are called from

We had a performance issue with one of our queries in our application, which was taking 20 seconds to run. Using Azure Data Studio we identified the long-running SQL and eventually traced it back to the Entity Framework query that produced it.
I had the idea of adding a logging function to our code that is called before any data access is done (insert, select, delete, update, etc.) in the Entity Framework code.
What the function would do is simply execute a "Select user_functionname_now" SQL statement.
Then in the Azure Data Studio profiler we would see:
The image tells me that the user ran the load-invoice function and it took 2717 milliseconds.
Granted, if you have 100 users doing things in the app the logs might get interleaved a bit, but it would go a long way toward pinpointing where in the code a long-running query is executed from.
I was also thinking that we could add a fixed column to each query run, so that you could see something like this:
But the issue with adding a column is that you return extra data every time a query runs, which means more data back and forth between the SQL Server and the application, and that is certainly not a good thing.
So my question is: is adding a "Select XYZ" before every CRUD call a bad idea? If we add this logging call to some or all of the code that executes our queries, will it cause a performance issue or slowdown that I haven't thought about?
I don't think any "select ..." approach is reasonable in your case.
SET CONTEXT_INFO or sp_set_session_context would probably be better.
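A sketch of the session-tagging idea (the key and value names here are made up). sp_set_session_context requires SQL Server 2016+ or Azure SQL Database; SET CONTEXT_INFO works on older versions and is visible to other sessions through the DMVs:

```sql
-- Tag the session before the data access runs (one small batch, no result set):
EXEC sp_set_session_context @key = N'app_function', @value = N'load_invoice';

-- Read the tag back anywhere in the same session:
SELECT SESSION_CONTEXT(N'app_function') AS app_function;

-- Older alternative: CONTEXT_INFO is exposed per session in sys.dm_exec_sessions,
-- so a monitoring session can see which app function each SPID last reported.
SET CONTEXT_INFO 0x4C6F6164496E766F696365;  -- the bytes of 'LoadInvoice'
SELECT session_id, context_info
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;
```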
This is the scenario that EF Query Tags are for.

Azure SQL Query Editor vs Management Studio

I'm pretty new to Azure and cloud computing in general and would like your help in figuring out an issue.
The issue was first seen when a web page timed out because the SQL timeout is set to 30 seconds.
The first thing I did was connect to the production database using SQL Server Management Studio 2014 (connected to the Azure production DB).
I ran the stored procedure used by the slow page, but it returned in under a second, which left me confused about what could be causing the issue.
By accident I also ran the same query in the Azure SQL Query Editor and was shocked that it took 29 seconds.
My main question is: why is there a difference between running the query in the Azure SQL Query Editor versus Management Studio? It is the exact same database.
DTU usage is at 98% and I think there is a performance issue with the stored procedure, but first I want to know why the Query Editor runs it slower than Management Studio.
The current Azure DB has 50 DTUs.
Two guesses (posting query plans will help get you an answer for situations like this):
SQL Server has various session-level settings. For example, there is one that determines whether you get ANSI_NULLS behavior (versus the prior behavior from very old versions of SQL Server). There are others for how identifiers are quoted, and similar. For legacy reasons, some drivers use different default settings. These different settings can affect which query plan gets chosen. They won't always impact performance, but there is a chance you get a scan instead of a seek on some query of interest to you.
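One quick way to check this: run the same query from both tools and compare the SET options each session actually has (ARITHABORT is the classic setting that differs between SSMS and other clients):

```sql
-- Run this from each connection (SSMS and the Query Editor) and compare values.
SELECT session_id, ansi_nulls, quoted_identifier, arithabort
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;
```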
The other main possible path for explaining this kind of issue is that you have a parameter sniffing difference. SQL's optimizer will peek into the parameter values used to pick a better plan (hoping that the value will represent the average use case for future parameter values). Oracle calls this bind peeking - SQL calls it parameter sniffing. Here's the post I did on this some time ago that goes through some examples:
https://blogs.msdn.microsoft.com/queryoptteam/2006/03/31/i-smell-a-parameter/
I recommend you do your experiments and then look at the query store to see if there are different queries or different plans being picked. You can learn about the query store and the SSMS UI here:
https://learn.microsoft.com/en-us/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store?view=sql-server-2017
For this specific case, please note that the query store exposes those different session-level settings using "context settings". Each unique combination of context settings will show up as a different context settings id, and this will inform how query texts are interpreted. In query store parlance, the same query text can be interpreted different ways under different context settings, so two different context settings for the same query text would imply two semantically different queries.
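A sketch of that lookup in the Query Store catalog views (the LIKE filter is a placeholder for your procedure or query text):

```sql
-- One row per (query text, context settings) combination, with its plans.
SELECT q.query_id,
       q.context_settings_id,
       p.plan_id,
       qt.query_sql_text
FROM sys.query_store_query      AS q
JOIN sys.query_store_query_text AS qt ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan       AS p  ON p.query_id       = q.query_id
WHERE qt.query_sql_text LIKE N'%YourProcName%'   -- placeholder filter
ORDER BY q.query_id, p.plan_id;
```

The same query text appearing under two different context_settings_id values is exactly the "two semantically different queries" case described above.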
Hope that helps - best of luck with your perf problem.

sp_prepare and sp_execute

I've been consulting google for some time now, but it hasn't been able to give me a satisfactory answer...
In a SQL Server 2005 trace, I've got lots of "exec sp_execute" statements. I know they are connected to a corresponding "exec sp_prepare" statement which specifies the actual SQL.
But...
One: Is it possible to find out the SQL behind the sp_execute without finding the sp_prepare?
Two: What type of construct would typically hide behind the sp_execute? That is, is it a stored procedure? Is it just a string in code? Or what?
Three: Should I fear bad performance seeing these in the trace?
Any input is appreciated
Use
select * from sys.dm_exec_query_plan(PlanHandle)
to generate an XML showplan document that shows the SQL that sp_execute is running.
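Without a specific plan handle, you can also list every prepared statement currently in the plan cache together with its SQL text, for example:

```sql
-- Prepared statements in the plan cache, most heavily reused first.
SELECT cp.usecounts,
       cp.size_in_bytes,
       st.text AS prepared_sql
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE cp.objtype = N'Prepared'
ORDER BY cp.usecounts DESC;
```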
Those are API server cursors, most probably used by an old (or not so old, but badly developed) application.
99% of the time, cursors hurt performance on your server; disk and network I/O are the usual victims.
Read this; it helped me understand how server-side cursors work.
Late answer, but I recently worked on an application with bad performance that was executing sp_prepare and sp_execute.
One: Answered before.
Two: It could be anything: stored procedures, basically any valid SQL query.
Three: I had problems with SQL Server failing to generate good execution plans when the application was using sp_prepare. SQL Server normally analyzes the incoming parameter values to generate a good execution plan, but with sp_prepare no parameter values are supplied; they only arrive later, with sp_execute. So in the meantime SQL Server applies generic costs to the different operators and may well generate a suboptimal plan.
If you look at the reads/CPU usage in your traces, you should be able to determine whether your queries are behaving badly or as expected.
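Instead of a trace, the same reads/CPU picture is available from the plan-cache statistics DMV, for example:

```sql
-- Top statements by total CPU; total_logical_reads gives the I/O side.
SELECT TOP (20)
       qs.execution_count,
       qs.total_worker_time   AS total_cpu_microseconds,
       qs.total_logical_reads,
       st.text                AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```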
Also see http://blogs.infosupport.com/speeding-up-ssis-literally

How do I determine the query that sp_cursorfetch is using

Profiler shows my server is overloaded by lots of calls to sp_cursorfetch, but I want to know which queries are causing all this traffic.
Profiler won't work in this case.
I ran one to test it out, and queried the table I created from it with this:
select CPU, TextData from cpu where LoginName = 'db_name_here' order by CPU desc
-- Be sure to replace db_name_here
The results showed rows like this:
CPU    TextData
0      exec sp_cursorfetch 180150000, 16, 7415, 1
========
The only answers I found on this are:
Select statements are the only cause of these cursor fetches, and examining the indexes of your most commonly used tables is a good first step toward resolving the problem.
You may be able to filter a trace on the SPID of the sp_cursorfetch call to see what it's doing before and after sp_cursorfetch runs.
Only fetch a subset of the total recordset you currently retrieve. Say you grab 100 rows now; only grab 10, because 10 is the most the user can see at any given time.
In response to the comment:
Thanks for your suggestions, all of which are helpful. Unfortunately the queries in question are from a third-party application, and I do not have direct access to view or modify them. If I find a query that is a particular problem, I can submit a support request to have it reviewed. I just need to know what the query is, first. – Shandy Apr 21 at 8:17
You don't need access to the application to try out most of my recommendations. Let's go over them:
Select statements are the only cause of these cursor fetches, and examining the indexes of your most commonly used tables is a good first step toward resolving the problem.
This is done on the database server. You just need to capture a tuning trace on the database and run the Database Engine Tuning Advisor against it. This will help improve your indexes.
You may be able to filter a trace on the SPID of the sp_cursorfetch call to see what it's doing before and after sp_cursorfetch runs.
This is also something you do with SQL Profiler.
Only fetch a subset of the total recordset you currently retrieve. Say you grab 100 rows now; only grab 10, because 10 is the most the user can see at any given time.
This is done at the application level.
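On SQL Server 2005 and later you can also ask the server directly which statement each open API cursor was declared for, which gets at the question without tracing; a sketch:

```sql
-- One row per open cursor across all sessions (argument 0 = every session),
-- joined to the statement text the cursor was built from.
SELECT c.session_id,
       c.cursor_id,
       c.properties,
       st.text AS cursor_definition
FROM sys.dm_exec_cursors(0) AS c
CROSS APPLY sys.dm_exec_sql_text(c.sql_handle) AS st;
```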
What SQL Server version are you running? In my case, the resolution ended up being an upgrade to SQL Server 2008. I would try that to see where it goes.
Since you don't have access to the application, working around cursor use is most likely going to be a problem. If you take a look at http://sqlpractices.wordpress.com/2008/01/11/performance-tuning-sql-server-cursors/ you can see that most alternatives involve editing the queries the application runs.
What is the real problem? Why are you profiling the database?
You might need to use Profiler for this.
I'm not sure what you are trying to achieve; if you are running a batch process, the execution plan might be helpful.
Hope it helps :)

SPROC hangs in SQL Server 2005

I have a problem with SQL Server 2005 where a stored procedure seems to randomly hang/lock and never return any result.
What the stored procedure does is call a function, which in turn takes the union of two other functions that return the same type of data but based on different criteria. Nothing advanced. I don't think it's the functions hanging, because other SPROCs call the same functions without a problem, even when the first one has locked up.
After the SPROC hangs, any further attempts to call it result in a timeout: not in the call itself, but the response time is so long that, with no result returned, the code throws an exception.
It has happened at least three times in two months on a relatively low-load system. Restarting SQL Server resolves the situation, but I don't regard that as a "solution" to the problem.
I've looked for information and found something about the plan cache going corrupt. However, that was in regard to dynamic SQL strings, which my problem is not. I guess it could still be the plan cache.
Has anyone had the same problem, and if so, what did you do about it (don't say "restart SQL Server every morning" ;) )? Is there any way of debugging the issue to try to find exactly what goes wrong, and where? I can't recreate the problem, but when it appears again it would be good to know where to take a closer look.
I don't think it makes any difference, but just for the record: the SPROC is called from .NET 3.5 code using Entity Framework. I say it doesn't make a difference because when I tested executing the SPROC directly from SQL Server Management Studio, no result was returned either.
It's most likely parameter sniffing.
Restarting SQL Server clears the plan cache. The problem will also go away if you rebuild statistics or indexes ("ALTER INDEX" and "sp_updatestats").
I suggest using "parameter masking" (not WITH RECOMPILE!) to work around it.
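A sketch of the parameter-masking pattern (hypothetical procedure and table names): the parameter is copied into a local variable, so the optimizer cannot sniff the caller's value and falls back to average-density estimates instead of a plan skewed toward one atypical value.

```sql
CREATE PROCEDURE dbo.GetOrdersForCustomer  -- hypothetical names throughout
    @CustomerId int
AS
BEGIN
    -- Mask the parameter: the WHERE clause references the local copy,
    -- so the plan is built from statistics, not from the sniffed value.
    DECLARE @MaskedCustomerId int;
    SET @MaskedCustomerId = @CustomerId;

    SELECT OrderId, OrderDate, Total
    FROM dbo.Orders
    WHERE CustomerId = @MaskedCustomerId;
END;
```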
SO answer already by me:
One
Two
Are your STATISTICS up to date? One of the common causes of a bad cached query plan is out-of-date statistics.
Do you have a regularly scheduled index rebuild job?
Did you check the SQL Server log? The cause of the problem has most likely been logged there, so at least you may get some hint from it. Please check that.
This excellent MSDN article, SQL Server technical bulletin - How to resolve a deadlock, explains the steps needed to identify and resolve deadlock issues in great detail.

Resources