Analog of DBMS_APPLICATION_INFO.SET_MODULE in SQL Server

When I write a PL/SQL procedure in Oracle and suspect it may run for a long time, I usually call DBMS_APPLICATION_INFO.SET_MODULE ('Some calculation', i||' records of '||total_count||' were processed') so that I can monitor the calculation's progress.
Is there something similar in SQL Server that lets me monitor a calculation's progress through system views?

To view progress in a long-running SQL job I normally just intersperse PRINT or RAISERROR messages:
RAISERROR ('Some calculation %i records of %i were processed',0,1,50,100) WITH NOWAIT;
These info messages can be retrieved and displayed by the executing application (printed in the messages tab of SSMS for example).
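For example, a minimal sketch of a batch loop that reports progress this way (the counts and batch size are made up for illustration):
DECLARE @i INT = 0, @total INT = 100000;

WHILE @i < @total
BEGIN
    -- ... process the next batch of rows here ...
    SET @i = @i + 1000;

    -- Severity 0 plus WITH NOWAIT flushes the message to the client immediately
    RAISERROR ('Some calculation: %i records of %i were processed', 0, 1, @i, @total) WITH NOWAIT;
END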
Sounds like the Oracle feature is a bit different. You can stuff arbitrary 128-byte messages into CONTEXT_INFO:
DECLARE @Msg BINARY(128) = CAST('Some calculation 50 records of 100 were processed' AS BINARY(128))
SET CONTEXT_INFO @Msg
And then retrieve it as
SELECT CAST(context_info AS CHAR(128))
FROM sys.dm_exec_sessions
WHERE session_id = 55 /*Change as needed*/
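To mimic the Oracle pattern more closely, the worker session could refresh CONTEXT_INFO on every iteration of its loop. A sketch with made-up counts (SET CONTEXT_INFO only accepts a constant or a variable, hence the intermediate @Msg):
DECLARE @i INT = 0, @total INT = 100000, @Msg BINARY(128);

WHILE @i < @total
BEGIN
    -- ... do one batch of work here ...
    SET @i = @i + 1000;

    SET @Msg = CAST('Some calculation: ' + CAST(@i AS VARCHAR(10))
                    + ' records of ' + CAST(@total AS VARCHAR(10))
                    + ' were processed' AS BINARY(128));
    SET CONTEXT_INFO @Msg;   -- visible to other sessions via sys.dm_exec_sessions
END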
Another possibility would be to fire a custom profiler event with EXEC sp_trace_generateevent that you could then capture.
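A sketch of what firing such an event could look like (event IDs 82-91 are reserved for user-defined events; the message text is invented):
-- Fire user-defined trace event 82 with a free-form progress message
EXEC sp_trace_generateevent
     @eventid  = 82,
     @userinfo = N'Some calculation: 50 records of 100 were processed';
In Profiler these surface as the UserConfigurable:0 through UserConfigurable:9 event classes.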
But probably easier to just add a logging table that your steps get inserted into (may need to query this with NOLOCK if your steps are running inside a transaction).
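A minimal sketch of that logging-table approach (all object names here are invented):
CREATE TABLE dbo.ProcessLog
(
    LogId    INT IDENTITY(1,1) PRIMARY KEY,
    LoggedAt DATETIME2 NOT NULL DEFAULT SYSDATETIME(),
    StepInfo NVARCHAR(400) NOT NULL
);

-- Inside the long-running procedure, after each batch:
INSERT INTO dbo.ProcessLog (StepInfo)
VALUES (N'Some calculation: 50 records of 100 were processed');

-- From another session, peek at progress even while the work is still inside a transaction:
SELECT TOP (20) LoggedAt, StepInfo
FROM dbo.ProcessLog WITH (NOLOCK)
ORDER BY LogId DESC;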

Related

SELECT statement is not blocked by an existing exclusive table lock

For testing, I am trying to simulate a condition in which a query from our web application to our SQL Server backend would time out. The web application is configured so this happens if the query runs longer than 30 seconds. I felt the easiest way to do this would be to take and hold an exclusive lock on the table that the web application wants to query. As I understand it, an exclusive lock should prevent any additional locks (even the shared locks taken by a SELECT statement).
I used the following methodology:
CREATE A LONG-HELD LOCK
Open a first query window in SSMS and run
BEGIN TRAN;
SELECT * FROM MyTable WITH (TABLOCKX);
WAITFOR DELAY '00:02:00';
ROLLBACK;
(see https://stackoverflow.com/a/25274225/2824445 )
CONFIRM THE LOCK
I can EXEC sp_lock and see results with ObjId matching MyTable, Type of TAB, Mode of X
TRY TO GET BLOCKED BY THE LOCK
Open a second query window in SSMS and run SELECT * FROM MyTable
I would expect this to sit and wait, not returning any results until after the lock is released by the first query. Instead, the second query returns with full results immediately.
STUFF I TRIED
In the second query window, if I SET TRANSACTION ISOLATION LEVEL SERIALIZABLE, then the second query waits until the first completes as expected. However, the point is to simulate a timeout in our web application, and I do not have any easy way to alter the transaction isolation level of the web application's connections away from the default of READ COMMITTED.
In the first window, I tried modifying the table's values inside the transaction. In this case, when the second query returns immediately, the values it shows are the unmodified values.
Figured it out. We had READ_COMMITTED_SNAPSHOT turned on, which is how the second query was able to return the previous, unmodified values in part 2 of "Stuff I tried". I was able to determine this with SELECT is_read_committed_snapshot_on FROM sys.databases WHERE name = 'MyDatabase'. Once it was turned off with ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT OFF, I began to see the expected behavior in which the second query would wait for the first to complete.
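For reference, the check and the switch described above as runnable statements (the database name is a placeholder; WITH ROLLBACK IMMEDIATE is an optional addition here so the ALTER does not wait behind open transactions):
-- Is the database returning row versions instead of blocking readers?
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'MyDatabase';

-- Turn it off so plain READ COMMITTED readers block behind the exclusive lock again
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT OFF WITH ROLLBACK IMMEDIATE;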

cursor execute command prematurely exits without error for long running stored procedure?

I have a pyodbc script that is supposed to execute a stored procedure on an MSSQL server, but it appears to be cutting out, without error, in the middle of running.
The script connects to a MSSQL (2012) server and runs a stored procedure (which itself runs several other subordinate stored procedures) that normally takes about 45 min. When run via pyodbc, however, it appears to exit after a certain amount of time (never exactly consistent in how long), quitting without warning or error, usually after about 25-26 min. This does not occur when running the procedure manually on the MSSQL server itself, which is why I think having "SET ANSI_WARNINGS OFF" is fine. Adding debugging print statements to the subordinate stored procedures seems to confirm this: execution simply ends in the middle of running. The code looks like...
cnxn = pyodbc.connect(f"DSN={CONFS['odbc_dsn']};"
                      f"DATABASE={'mydb'};"
                      f"UID={CONFS['username']};PWD={CONFS['password']};"
                      f"MultipleActiveResultSets=True;",
                      autocommit=True)
cursor = cnxn.cursor()
print("\n\n\nRunning web reporting processes...")
stored_procs = [
    "mydb..some_initialization_stuff",
    "mydb..long_running_stored_proc"
]
cursor.commit()
for sp in stored_procs:
    print(f"\n\t[{datetime.datetime.now()}] Running stored procedure {sp}")
    cursor.execute(f"SET ANSI_WARNINGS OFF; exec {sp}")
    # cursor.commit()
    print(f"\t[{datetime.datetime.now()}] stored procedure {sp} completed")
    # print(cursor.fetchall())
cursor.close()
cnxn.close()
Anyone with more experience with pyodbc know what could be causing this? Any other information / specific debugging steps to improve this question?
After asking the question on the pyodbc GitHub issues page, it turns out the reason is basically...
When processing the results of a batch, SQL Server fills the output buffer of the connection with the result sets that are created by the batch. These result sets must be processed by the client application. If you are executing a large batch with multiple result sets, SQL Server fills that output buffer until it hits an internal limit and cannot continue to process more result sets. At that point, control returns to the client. When the client starts to consume the result sets, SQL Server starts to execute the batch again because there is now available memory in the output buffer.
You can either...
Method 1: Flush all the output result sets...
or
Method 2: Add the statement SET NOCOUNT ON to the beginning of your batch...
which is why using
cursor.execute(f"SET NOCOUNT ON; exec {sp}")
worked for me.
(Not sure method 1 is an option in my case, since we are only running a single large stored procedure (that calls other stored procedures) and I am not sure there is a way to have the buffer flushed in the middle of that, but if there is a way please do let me know.)
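For completeness, a hedged sketch of what method 1 could look like with plain pyodbc calls, reusing the cursor and stored_procs from the snippet above: after each execute(), drain every result set (and row-count message) the batch produces so SQL Server never stalls waiting for the client to empty the output buffer.
for sp in stored_procs:
    cursor.execute(f"SET ANSI_WARNINGS OFF; exec {sp}")

    # Consume every result set the batch produces, discarding the rows,
    # so the connection's output buffer never fills up.
    while True:
        if cursor.description is not None:   # this result set returns rows
            cursor.fetchall()
        if not cursor.nextset():              # advance to the next result set, if any
            break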
Strip your stored procedure of any PRINT statements and add SET NOCOUNT ON, or the stored procedure might silently fail. This is not necessary if you are using the Microsoft non-ODBC driver.
SET NOCOUNT ON - Stops the message that shows the count of the number of rows affected by a Transact-SQL statement or stored procedure from being returned as part of the result set.
PRINT - Returns a user-defined message to the client.
If you are not able to change the stored procedure, then add SET NOCOUNT ON to the command passed to cursor.execute(): cursor.execute(f"SET NOCOUNT ON; exec {sp}")
Reason: as quoted above, SQL Server pauses execution of a batch once the connection's output buffer fills with unconsumed result sets, and only resumes once the client starts consuming them.

Why Is My Azure SQL Database Table Permanently Locked?

I have an isolated Azure SQL test database that has no active connections except my development machine through SSMS and a development web application instance. I am the only one using this database.
I am running some tests on a table of ~1M records where we need to do a large UPDATE to data in nearly all of the ~1M records.
DECLARE @BatchSize INT = 1000

WHILE @BatchSize > 0
BEGIN
    UPDATE TOP (@BatchSize)
        [MyTable]
    SET
        [Data] = [Data] + ' a change'
    WHERE
        [Data] IS NOT NULL

    SET @BatchSize = @@ROWCOUNT

    RAISERROR('Updated %d records', 0, 1, @BatchSize) WITH NOWAIT
END
This query works fine, and I can see my data being updated 1000 records at a time every few seconds.
Performing additional INSERT/UPDATE/DELETE commands on MyTable seems to be somewhat affected by this batch query running, but these operations do execute within a few seconds when run. I assume this is because locks are being taken on MyTable and my other commands execute in between the batch query's locks/looping iterations.
This behavior is all expected.
However, every so often while the batch query is running I notice that additional INSERT/UPDATE/DELETE commands on MyTable will no longer execute. They always time out/never finish. I assume some type of lock has occurred on MyTable, but it seems that the lock is never being released. Further, even if I cancel the long-running update batch query I can still no longer run any INSERT/UPDATE/DELETE commands on MyTable. Even after 10-15 minutes of the database sitting stale with nothing happening on it anymore I cannot execute write commands on MyTable. The only way I have found to "free up" the database from whatever is occurring is to scale it up and down to a new pricing tier. I assume that this pricing tier change is recycling/rebooting the instance or something.
I have reproduced this behavior multiple times during my testing today.
What is going on here?
Scaling the tier up or down rolls back all open transactions and disconnects server logins.
As for what you are seeing, it appears to be lock escalation. Try serializing access to the database using sp_getapplock. You can also try lock hints.
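A rough sketch of serializing the writers with sp_getapplock (the resource name and timeout are arbitrary placeholders); each writer takes the same application lock before touching MyTable:
BEGIN TRAN;

DECLARE @rc INT;
EXEC @rc = sp_getapplock
        @Resource    = 'MyTable_batch_update',   -- any agreed-upon name
        @LockMode    = 'Exclusive',
        @LockOwner   = 'Transaction',            -- released automatically at COMMIT/ROLLBACK
        @LockTimeout = 60000;                    -- wait up to 60 seconds

IF @rc >= 0
BEGIN
    -- ... run one batch of the UPDATE (or the competing INSERT/UPDATE/DELETE) here ...
    COMMIT;
END
ELSE
    ROLLBACK;   -- could not get the lock in time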

SQL Server prepared statement execution - strange behavior

I have some .NET application which communicates with a SQL Server 2012 database.
Generally it gets a request from the client, runs a set of SQL queries to process the request, and returns the result. To optimize performance, I prepare all the queries at startup and then execute the prepared statements each and every time. This works well most of the time, but occasionally I get some strange performance glitches, so I tried to investigate what's going on with the help of Profiler.
I captured Showplan XML Statistics Profile and RPC:Completed events.
When the application behaves normally, I can see in profiler that on startup it executes something like:
declare @p1 int
set @p1 = 5
exec sp_prepexec @p1 output, N'@my_param_list', N'select my_field from my_table'
and then on each client request:
exec sp_execute 5, @myparam1=value1, @myparam2=value2 ...
Before this RPC:Completed line I can see Showplan XML Statistics Profile event with pretty normal execution plan. The duration of the query is ~50ms, which is good.
Now, when the problem occurs, I can see in Profiler about 2500(!) Showplan XML Statistics Profile events, each one showing a seemingly meaningless index scan on one of the tables participating in the query (the only difference between them is the index being scanned), followed by a "Stream Aggregate". And after all these lines I finally see the RPC:Completed event with the following text:
declare @p1 int
set @p1 = NULL
exec sp_prepexec @p1 output, N'@my_param_list', N'select my_field from my_table'
and a duration of more than 30 seconds. I can't understand why it takes so long, what all these scans mean, and why the handle is NULL in sp_prepexec. It looks to me like SQL Server tries very hard to execute the prepared statement, then abandons it or perhaps tries to re-prepare it.
So I'd be happy to get an explanation of what's wrong here and what is going on. Thanks in advance, and sorry for the long explanation.
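One thing that may help the investigation is checking whether the prepared plan is still in the plan cache and being reused between requests. A diagnostic sketch (the LIKE filter is a placeholder for your statement text):
SELECT cp.objtype, cp.usecounts, cp.size_in_bytes, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE cp.objtype = 'Prepared'
  AND st.text LIKE '%my_table%';   -- placeholder: match the prepared statement's text
If usecounts keeps climbing, the prepared handle is being reused; if the row disappears, the plan was evicted and the statement had to be prepared again.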

How do you access the Context_Info() variable in SQL2005 Profiler?

I am using the Context_Info() variable to keep track of the user that is executing a stored procedure or free-form SQL. When troubleshooting issues on this server, every session comes through, so I would like to be able to bring in the value of the Context_Info() variable and filter based on it.
You can use the User Configurable events along with sp_trace_generateevent (event IDs 82-91) when setting CONTEXT_INFO() to output the values to the trace. Your options are either to do that, or to trace the statements that set CONTEXT_INFO(). You won't be able to get the value any other way, unless you write a process that dumps the output of sys.dm_exec_sessions in a loop while the trace is running:
select session_id, cast(context_info as varchar(128)) as context_info
from sys.dm_exec_sessions
where session_id > 50 -- user sessions
For SQL Server 2000 you can use sysprocesses:
select spid, cast(context_info as varchar(128)) as context_info
from sysprocesses
where spid > 50 -- user sessions
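The producer side could look something like the following sketch, run before the work you want to attribute (the user name and event ID are placeholders; IDs 82-91 map to the UserConfigurable:0-9 Profiler events):
-- Tag the session with the application user, then surface the same value in the trace
DECLARE @ctx VARBINARY(128) = CAST('AppUser=jsmith' AS VARBINARY(128));
SET CONTEXT_INFO @ctx;

EXEC sp_trace_generateevent
     @eventid  = 82,                          -- UserConfigurable:0
     @userinfo = N'AppUser=jsmith';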
