I have a .NET application which communicates with a SQL Server 2012 database.
Generally it gets a request from the client, runs a set of SQL queries to process the request, and returns the result. To optimize performance, I prepare all the queries at startup and then execute the prepared statements each time. This works well most of the time, but occasionally I see some strange performance glitches, so I tried to investigate what's going on with the help of the profiler.
I caught Showplan XML Statistics Profile and RPC:Completed events.
When the application behaves normally, I can see in profiler that on startup it executes something like:
declare @p1 int
set @p1 = 5
exec sp_prepexec @p1 output, N'@my_param_list', N'select my_field from my_table'
and then on each client request:
exec sp_execute 5, @myparam1=value1, @myparam2=value2 ...
Before this RPC:Completed line I can see a Showplan XML Statistics Profile event with a pretty normal execution plan. The duration of the query is ~50 ms, which is good.
Now, when the problem occurs, I can see in the profiler about 2500(!) lines of Showplan XML Statistics Profile events, each one with some meaningless index scan on one of the tables participating in the query (the only difference between them is the index used in the scan), followed by a "Stream Aggregate". And after all these lines I finally see an RPC:Completed event with the following text:
declare @p1 int
set @p1 = NULL
exec sp_prepexec @p1 output, N'@my_param_list', N'select my_field from my_table'
and a duration of more than 30 seconds. I can't understand why it takes so long, what the meaning of all these scans is, and why the handle is NULL in sp_prepexec. It looks to me as if SQL Server tries very hard to execute the prepared statement, then abandons it or perhaps tries to re-prepare it.
So I'd be happy to get an explanation of what's wrong here and what is going on. Thanks in advance, and sorry for the long explanation.
Related
We recently moved a database to its own SQL Server, which broke a call to a stored proc wrapped in an sp_cursoropen statement.
The original call, which worked, was to a database on the same server as the calling database:
use database1
exec sp_cursoropen @p1 output, N'exec database2.dbo.sp_mystoredproc 1234'
We had to move the second database to its own server for compliance reasons and created a linked server between the two. We changed the call to:
use database1
exec sp_cursoropen @p1 output, N'exec LinkedDBServer.database2.dbo.sp_mystoredproc 1234'
The call no longer works and gives us two errors:
A server cursor cannot be opened on the given statement or statements. Use a default result set or client cursor.
The cursor was not declared
The funny thing is that running the stored proc command on its own works and returns rows of data:
use database1
exec LinkedDBServer1.database2.dbo.sp_mystoredproc 1234
Also funny: taking the direct SQL from the stored proc itself and pasting it in place of sp_mystoredproc also works, even with the cursor:
use database1
exec sp_cursoropen @p1 output, N'Select * from LinkedDBServer1.Dbname.dbo.sometable where sometable.id = 1234'
This leads me to believe that the return type of a stored procedure run via a linked server is somehow different from that of the same stored procedure running on the same SQL Server as the calling database. Unfortunately, I cannot find any documentation to support my hypothesis.
Does anyone know why a stored procedure that returns rows of data wouldn't play well with sp_cursoropen when a linked server is involved?
Note: I'm not interested in workarounds. I know there are many ways to fix this, such as writing to a temp table first, using a different service to grab the data, or some other method of completing the task. I'm only interested in what low-level difference in SQL Server is causing this error even though the separate pieces called through the linked server work independently.
I've got a stored procedure with an int output parameter. If I run SQL Server Profiler, execute the stored procedure via some .Net code, and capture the RPC:Completed event, the Text Data looks like this:
declare @p1 int
set @p1=13
exec spStoredProcedure @OutParam=@p1 output
select @p1
Why does it look like it's getting the value of the output parameter before executing the stored procedure?
I found an answer saying that this is the RPC:Completed event class, so the result is already known at that point. But I can't understand why the exec statement is there, given that the event represents the completion of the RPC.
The RPC Completed TextData you see in the Profiler (or Extended Events) trace is just a rendering of the RPC request, not the actual statement that was executed by SQL Server. This facilitates copy/paste of the text into an SSMS window for ad-hoc execution as a SQL batch.
Since the actual output value is known after the RPC has completed, the trace text uses the actual value to initialize the parameter variable. It would probably be clearer if it were initialized to NULL, its value prior to execution.
The stored procedure is the RecordSource of an Access form.
When the form is opened, the SP is executed, and after the query times out the connection closes with no results.
If the SP is executed from SSMS, it completes in about 2 seconds and returns a set of records.
As far as I can see in the SQL Server Profiler, the calls are identical, but the Reads count is over 28 million when executing from Access, and about 70 thousand from SSMS.
Help me, I'm confused.
Screenshot from the profiler:
http://take.ms/u7tTy
@tobypls,
thank you very much; your link was helpful.
The simple solution is to rewrite, for example,
from:
ALTER PROCEDURE [dbo].[sproc]
    @param1 int
AS
SELECT * FROM Table WHERE ID = @param1
to:
ALTER PROCEDURE [dbo].[sproc]
    @param1 int
AS
DECLARE @param1a int
SET @param1a = @param1
SELECT * FROM Table WHERE ID = @param1a
I got it from this post.
But if you need a full understanding of the trouble, then you should read the really great article
Slow in the Application, Fast in SSMS?
Understanding Performance Mysteries
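The rewrite works because it defeats parameter sniffing: hiding the parameter behind a local variable stops the optimizer from building the plan around the first sniffed value. A hedged sketch of an alternative using a standard query hint (procedure, table, and column names copied from the example above):

```sql
-- Alternative to the local-variable trick: ask for a plan
-- optimized for an "unknown" (average) parameter value
-- instead of the sniffed one.
ALTER PROCEDURE [dbo].[sproc]
    @param1 int
AS
SELECT * FROM Table WHERE ID = @param1
OPTION (OPTIMIZE FOR UNKNOWN);
```

OPTION (RECOMPILE) is the other common choice; it compiles a fresh plan for the actual value on every call, trading compilation CPU for plan quality.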
Maybe someone has already asked this, but I can't find an appropriate answer to this question.
If I have, let's say, following query:
SELECT
column1,
column2,
column3
FROM Table1 AS t1
WAITFOR DELAY '10:00:00'
where this query returns around 100,000 rows.
Does the WAITFOR statement wait 10 hours before telling SQL Server to execute the query and produce the result, or does SQL Server execute the query immediately, keep the result in RAM for 10 hours, and then send it over the network (or just display it)?
Am I missing something here?
I would appreciate it if someone gave me a real example that proves the first or the second behavior.
I executed the following batch:
SELECT GETDATE()
SELECT GETDATE()
WAITFOR DELAY '00:00:05'
The result was two identical dates. On this basis I would conclude that SQL Server executes the query immediately and keeps the result for a certain time before showing it, but that makes little sense to me.
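Note that in the batch above the WAITFOR comes after both SELECT statements, so it delays nothing that is being measured. A small variation (a sketch) makes the blocking behavior visible:

```sql
SELECT GETDATE();          -- first timestamp
WAITFOR DELAY '00:00:05';  -- blocks this batch for 5 seconds
SELECT GETDATE();          -- second timestamp, about 5 seconds later
```

With the delay placed between the two SELECTs, the second timestamp lags the first by roughly the delay, showing that WAITFOR suspends execution of the statements that follow it.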
According to the docs, the WAITFOR command is used to block a statement or transaction for the specified amount of time. In that case, you'd use it to delay subsequent statements. In other words, you'd want to execute something after the WAITFOR command, not before. Here are a few examples:
The following example executes the stored procedure after a two-hour delay.
BEGIN
WAITFOR DELAY '02:00';
EXECUTE sp_helpdb;
END;
GO
The following example executes the stored procedure sp_update_job at 10:20 P.M. (22:20).
USE msdb;
EXECUTE sp_add_job @job_name = 'TestJob';
BEGIN
WAITFOR TIME '22:20';
EXECUTE sp_update_job @job_name = 'TestJob',
    @new_name = 'UpdatedJob';
END;
GO
When I write a PL/SQL procedure in Oracle and suspect that it may run for a long time, I usually use DBMS_APPLICATION_INFO.SET_MODULE('Some calculation', i || ' records of ' || total_count || ' were processed') in order to be able to monitor the calculation's progress.
Is there something similar in SQL Server for monitoring a calculation's progress through the system views?
To view progress in a long running SQL job I normally just intersperse PRINT or RAISERROR messages.
RAISERROR ('Some calculation %i records of %i were processed',0,1,50,100) WITH NOWAIT;
These info messages can be retrieved and displayed by the executing application (printed in the messages tab of SSMS for example).
Sounds like the Oracle feature is a bit different. You can stuff arbitrary 128-byte messages into CONTEXT_INFO:
DECLARE @Msg BINARY(128) = CAST('Some calculation 50 records of 100 were processed' AS BINARY(128))
SET CONTEXT_INFO @Msg
And then retrieve it as
SELECT CAST(context_info AS CHAR(128))
FROM sys.dm_exec_sessions
WHERE session_id = 55 /*Change as needed*/
Another possibility would be to fire a custom profiler event with EXEC sp_trace_generateevent that you could then capture.
But it's probably easier to just add a logging table that your steps get inserted into (you may need to query it with NOLOCK if your steps run inside a transaction).
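A minimal sketch of that logging-table approach (all object names here are made up for illustration):

```sql
-- Hypothetical progress-log table; poll it from a second session.
CREATE TABLE dbo.ProgressLog (
    LoggedAt datetime2     NOT NULL DEFAULT SYSDATETIME(),
    Msg      nvarchar(200) NOT NULL
);
GO

-- Inside the long-running procedure, after each step:
INSERT INTO dbo.ProgressLog (Msg)
VALUES (N'Some calculation: 50 records of 100 were processed');

-- From a monitoring session (NOLOCK so an open transaction
-- in the worker session does not block the read):
SELECT TOP (10) LoggedAt, Msg
FROM dbo.ProgressLog WITH (NOLOCK)
ORDER BY LoggedAt DESC;
```

The design choice here is the same trade-off the text mentions: NOLOCK lets you see rows written inside an uncommitted transaction, at the cost of reading data that may later be rolled back.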