SQL query notification service issue - sql-server

We are getting timeout issues on our databases, so I turned on SQL Server Profiler and see SQLQueryNotificationService running every second with a long duration. I checked Service Broker and there are a bunch of SQLQueryNotificationService queues. I don't think we created any of these queues, and there are also a bunch of stored procedures with names like SqlQueryNotificationStoredProcedure-15c5b84b-42b0-4cfb-9707-9b1697f44127. Could you please let me know how to drop them? If I drop them, is there any impact on the database? I appreciate any suggestions.

Do you have an ASP.NET web site running, or another application that creates SQL Server cache dependencies? These are SQL Server Service Broker queues; each one executes a WAITFOR ... statement that waits for about one minute (60000 ms) and then executes again a second later. This shouldn't normally cause problems, and it shouldn't block or delay your "normal" queries or stored procedures.
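To see which of these auto-created queues exist (and which internal activation procedures they point at), you can query the Service Broker catalog views; a minimal read-only sketch:

```sql
-- List the query-notification queues Service Broker created automatically,
-- together with the internal activation procedure attached to each one.
SELECT name, activation_procedure, is_activation_enabled
FROM sys.service_queues
WHERE name LIKE 'SqlQueryNotificationService-%';
```

Dropping them by hand is usually pointless: as long as the application keeps registering SqlDependency/SqlCacheDependency subscriptions, SQL Server will typically just recreate them.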
However, I saw it cause issues for me once: one of the stored procedures, when executed from the same web application that established the cache dependency, timed out (or rather came back in 120 secs, which was not acceptable). Exactly the same stored procedure, executed under the same account with the same parameters but from Management Studio, ran fine without any issues. This was SQL Server 2005 SP4.
SQL Profiler showed that in the middle of my stored procedure's execution (and always after the same INSERT INTO ... statement), execution was interrupted, and in place of its next statement there was that WAITFOR ... from the query notification, completing in one minute, then another WAITFOR ... starting and completing in 59 secs - and only after that did Profiler show my stored procedure completing, with a duration of 119000 ms, which is almost exactly two minutes.
It was as if the query notifications were joining the transaction inside my stored procedure.
What helped: recompiling the offending stored procedure. I simply changed its script and issued an ALTER PROCEDURE statement with some minor syntax changes. No problems after that.
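If you hit something similar, a lighter alternative to editing the procedure's script is to mark it for recompilation; a sketch, where dbo.MyProc is a placeholder name:

```sql
-- Flag the procedure so its cached plan is discarded;
-- a fresh plan is compiled on the next execution.
EXEC sp_recompile N'dbo.MyProc';
```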

Related

SQL Server stored procedure long running intermittently

I have 5 stored procedures, called by an API, that normally run in under 100 ms, but intermittently all of them suddenly take > 10 seconds in production. We did a stress test before going live and everything was fine during the stress test.
For each stored procedure, I capture the start time using SYSDATETIME() and log it to a table at the end of the procedure, along with the end time. So I ran Profiler on production and noticed strange things.
For example, today the Profiler RPC:Completed event's start time is 09:18:04.680, whereas my execution log table says the execution started at 09:19:54.288 - a 110-second mismatch between the Profiler start time and the procedure's internal start time. This happens to all the stored procedures for a window of 2-3 minutes, and then everything clears by itself.
I've run perfmon and nothing looks out of the ordinary to me. The SQL Server has high capacity and the application currently has only a few users, so it is not a high-traffic application.
I also have Redgate SQL Monitor and it doesn't show any abnormal wait times.
I'm not even sure where to look. I don't think it's parameter sniffing, because one of the affected stored procedures doesn't accept any parameters. After the 2-3 minutes, all the stored procedures run as expected.
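One way to narrow this down: during one of the slow windows, snapshot what every active request is waiting on. A minimal sketch using the standard DMVs:

```sql
-- What is each active request doing right now, and is anything blocking it?
SELECT r.session_id, r.status, r.wait_type, r.wait_time,
       r.blocking_session_id, t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID;   -- exclude this monitoring session
```

If the 110 seconds are spent before the procedure's first statement runs (as the log table suggests), look especially at wait types such as RESOURCE_SEMAPHORE (memory grant queueing) or signs of a blocked compile.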

SQL Server Query Plan creation in SSMS vs Application Server

I have the following scenario:
After a database deployment we have a .Net application server that is attempting to execute a stored procedure.
The timeout on the application is 30 seconds. When the application first attempts to execute the stored procedure, a new query plan has to be compiled; this takes longer than 30 seconds and the application hits many timeouts. My experience is that if the stored procedure is run manually (with representative data inputs) from SSMS, the first run takes about 1-2 minutes, a plan gets generated, and the application then runs smoothly.
I work with a third-party company, and a DBA there is claiming the following:
"Manually invoking this stored procedure will create a plan that is specific to the connection properties used (SSMS), the plan generated would not be used when the procedure is invoked by an application server."
Is this correct? It seems like poor design if the query plan used is tied to connection properties. Is there a difference between the query plan created when you run the stored procedure manually in SSMS and the one created when it is executed by an application?
If so, what is the optimal way to resolve this issue? Is increasing the timeout the only option?
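The claim is at least partially right: cached plans are keyed by certain SET options of the connection (for example ARITHABORT, which SSMS enables by default but many client libraries do not), so a plan compiled from SSMS may indeed not be reused by the application. You can check whether the two sessions got separate plans by comparing the set_options plan attribute; a sketch, where MyProc is a placeholder name:

```sql
-- One row per cached plan for the procedure; different set_options values
-- mean SSMS and the application are compiling and using separate plans.
SELECT st.text, pa.attribute, pa.value
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) AS pa
WHERE pa.attribute = 'set_options'
  AND st.text LIKE '%MyProc%';
```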

Sql server performance changes after recreating the procedure

We have a main stored procedure that returns around 1000 records; the result varies with the user's permissions.
Lately the procedure's performance became very bad - but only from the web service: more than a minute!
Running the same SP with the same parameters from SSMS took only 3 seconds!
When I tried to investigate the problem, I added writes to a log table, and this change immediately brought performance back to 3 seconds from the web service as well.
This is a mystery for me:
1. The difference between running from web-service and ssms
2. The change after adding the logging
Your issue is called parameter sniffing. There were two execution plans for this procedure: one created the first time you launched it from the web server, and another created when you launched it from SSMS, and the parameters these two plans were compiled for were different. The next time you executed the procedure, one of these plans was used: from SSMS the second plan, from the web service the first. The parameters passed to the procedure were atypical when it was executed from the web service, and typical when executed from SSMS.
When you altered your procedure, those two plans were invalidated because the procedure had changed; new execution plans were then built for SSMS and for the web service, and this time both plans were compiled for the same or similar parameters.
If you could extract the old plans from the plan cache, you'd see they were different and the sniffed parameters were different too, while now the plans are the same and the sniffed parameters are the same or similar.
Here you can read more on parameter sniffing: Slow in the Application, Fast in SSMS? Understanding Performance Mysteries
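If altering the procedure only fixes things until the next bad plan is sniffed, the usual longer-term mitigations are query hints; a sketch (the procedure, table, and column names are placeholders):

```sql
ALTER PROCEDURE dbo.GetUserRecords
    @UserId int
AS
BEGIN
    SELECT r.RecordId, r.Payload
    FROM dbo.Records AS r
    WHERE r.OwnerId = @UserId
    OPTION (RECOMPILE);   -- compile a fresh plan for each call's actual parameter
    -- Alternative: OPTION (OPTIMIZE FOR (@UserId UNKNOWN)) builds one plan
    -- from average density statistics instead of the first sniffed value.
END
```

OPTION (RECOMPILE) trades compile CPU for plan quality on every call; OPTIMIZE FOR UNKNOWN keeps a single plan but makes it parameter-agnostic.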
Please do not use functions in TOP, on the recordset, or in WHERE/JOIN clauses.
When you call an SP from SSMS, the server optimizes it, but when it is called from the front end this can be a huge problem. So eliminate functions if possible.
If you want to see what I'm talking about, start Profiler and log the RPC:Starting/Completed and SQL statement events. The call count matches the size of the recordset - so if you use a function in a statement returning 1000 rows, assume you are effectively calling it 1000 times.
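To illustrate the point about functions: a scalar UDF in a predicate is evaluated once per row, and wrapping the column also hides it from the optimizer so no index seek is possible. A sketch, where dbo.fn_OrderYear, dbo.Orders, and the columns are placeholder names:

```sql
-- Slow: the UDF runs once for every row in dbo.Orders.
SELECT OrderId
FROM dbo.Orders
WHERE dbo.fn_OrderYear(OrderDate) = 2023;

-- Faster: an equivalent sargable predicate that can use an index on OrderDate.
SELECT OrderId
FROM dbo.Orders
WHERE OrderDate >= '20230101' AND OrderDate < '20240101';
```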

how to force a stored procedure be pre-cached and stay in memory?

Stored procedures are compiled on first use.
There are options to clear cache:
DBCC FREEPROCCACHE
DBCC DROPCLEANBUFFERS
--To Verify whether the cache is emptied
--DBCC PROCCACHE
There are also options to force recompilation, or to reduce recompilations.
But is it possible to force frequently used stored procedures' execution plans be pre-cached and stay in memory?
I know how to do it from ADO.NET, i.e. from outside SQL Server, but this question is about how to do it inside SQL Server - launched with the start of SQL Server itself.
(*) For example, in the SSMS Activity Monitor I see a running process (Task State: RUNNING, Command: SELECT) that is continuously executing T-SQL (according to Profiler) in the context of the tempdb database, even though SQL Server Agent is disabled and SQL Server is not loaded by anything; see "Details of session 54" in "Where are all those SQL Server sessions from?".
How would I create a similar resident process (or rather, one auto-started by the SQL Server service or session) that periodically re-executes a stored procedure?
Related question:
Stored procedure executes slowly on first run
Update:
Maybe I should have forked this question in two, but my main curiosity is how to have periodic/looping activity with SQL Server Agent disabled.
How was it done for the RUNNING SELECT session mentioned above (*)?
Update2:
Frequently I observe considerable delays while executing stored procedures that query a very small amount of data - delays that cannot be explained by any need to read huge amounts of data.
Can we consider this - considerable delays on insignificantly small data - as part of the context of this question?
Just execute it from a script; you could do this after any SQL Server restart. If the procedures are frequently used, it shouldn't be much of a problem after that.
Seems like this question eventually got answered in:
Can I get SQL Server to call a stored proc every n seconds?
Update: These tips will do the trick:
Keeping data available in the SQL Server data cache with PINTABLE
Automatically Running Stored Procedures at SQL Server Startup
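The second tip boils down to sp_procoption. A minimal sketch, assuming dbo.WarmCache is a warm-up procedure you create in master that simply EXECs the frequently used procedures:

```sql
USE master;
GO
-- Startup procedures must live in master; this marks the warm-up proc
-- to run automatically every time the SQL Server service starts.
EXEC sp_procoption @ProcName = N'dbo.WarmCache',
                   @OptionName = 'startup',
                   @OptionValue = 'on';
```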

Implementation of periodic writing timestamps in ms sql database

Task:
Write timestamps into MS SQL database table every second.
Solutions:
An external application that writes timestamps on a schedule (SQL Agent, for example).
A stored procedure that writes timestamps in an infinite loop.
Questions.
Which of the solutions is best?
Are there any drawbacks to running an infinite loop in a stored procedure?
How to start stored procedure after server reboot?
Any other solutions?
Either would work, but I'd tend toward the stored procedure route with WAITFOR DELAY.
Not really
sp_procoption and start up stored procedures
Without involving an external client or system, I can't think of any.
1. Both have advantages and disadvantages; evaluate and choose based on your environment.
Procedure Advantages:
- You don't have the overhead of SQL Agent processing once per second. (In fact, I don't think you can get SQL Agent to consistently launch the same job once per second.)
- I don't think a procedure sitting in WAITFOR is using resources - but you'd want to check.
Procedure Disadvantages:
- If the procedure fails (gets stopped somehow), it won't start up again. (Unless you have a SQL Agent job running to check whether the procedure has stopped, in which case you might as well go with the job approach.)
- It might be easier to stop or break than you'd think (concurrency/deadlocks, detached DBs, manually stopping it during maintenance and then forgetting to restart it).
Job Advantages:
- If a job run fails (perhaps the DB is unavailable), the next run will still get launched.
Job Disadvantages:
- It seems very kludgy to have SQL Agent run every second; measure the required server overhead if you do this.
- If SQL Agent fails or is stopped, the job won't run.
A suggestion: must it be once per second? Could it be once per 5, 10, 15, or 30 seconds?
2. There shouldn't be any, barring what's mentioned above. Make darn sure you can't hit locking, blocking, or deadlock situations!
3. As #gbn says, sp_procoption.
4. Nothing that doesn't involve cumbersome tricks based on pessimistic locking techniques, or byzantine logic based on the timestamp (not datetime) datatype. The best fix would seem to be the long-term one of combining those two databases into one, but that's not a short-term option.
Out of sheer paranoia, I'd combine the two like so:
Job set to run every 2, 3, or 5 minutes
The job calls a procedure that updates your timestamp and then WAITFORs a few seconds.
The procedure does not stop, so the job keeps running; while the job is running, it will not be started again (because it's still executing).
If the procedure somehow dies, the job will launch it again the next time it's scheduled to run.
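The looping procedure itself can be a few lines; a minimal sketch, assuming a table dbo.Heartbeat (both names are placeholders):

```sql
CREATE PROCEDURE dbo.WriteHeartbeat
AS
BEGIN
    SET NOCOUNT ON;
    WHILE 1 = 1
    BEGIN
        INSERT INTO dbo.Heartbeat (LoggedAt) VALUES (SYSDATETIME());
        WAITFOR DELAY '00:00:01';   -- pause one second between writes
    END
END
```

The Agent job's only step is EXEC dbo.WriteHeartbeat; since Agent will not start a second instance of a job that is still running, scheduling it every few minutes gives you the restart-on-death safety net described above.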
Try using SQL Service Broker to do this asynchronously; its queue system means you won't miss any data even if the SQL Server service has to be restarted. I've used this once, some time back, for this kind of polling scenario.
http://msdn.microsoft.com/en-us/library/ms345108(SQL.90).aspx#sqlsvcbr_topic2
This might help,
http://msdn.microsoft.com/en-us/library/bb839488.aspx
