I would like to retrieve the statistics information returned by SQL Server when using SET STATISTICS IO ON through a JDBC interface.
Getting the execution plan is pretty simple: after running SET SHOWPLAN_XML ON, the result of a call to Statement.execute() will be the execution plan. When using SET STATISTICS XML ON, a second ResultSet containing the plan is returned by the Statement instance.
However, when running SET STATISTICS IO ON before calling Statement.execute(), only the query results are returned: no further ResultSets, no Warnings, nothing.
Does anyone have a clue how I can get that information? Where it might be hidden?
I tried using jTDS as well as Microsoft's JDBC driver (3.0 and 4.0) against SQL Server 2005, SQL Server 2008 R2 and SQL Server 2012.
I checked all ResultSets returned by the query (iterating with Statement.getMoreResults()), as well as the warning objects returned by Connection.getWarnings() and Statement.getWarnings().
Unlike SET SHOWPLAN_XML ON, which replaces the result set of any subsequent statement with the plan instead of the query's results, SET STATISTICS IO ON doesn't change the result set at all.
SET STATISTICS IO ON lets the query execute normally and returns the stats as an informational message.
See http://msdn.microsoft.com/en-us/library/ms131350.aspx
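For reference, the stats arrive as plain informational text in the messages stream, not as a result set; the output looks roughly like this (table name and counts below are illustrative):
SET STATISTICS IO ON;
SELECT * FROM MyTable;
-- Messages output (not a ResultSet), something like:
-- Table 'MyTable'. Scan count 1, logical reads 42, physical reads 0, read-ahead reads 0,
-- lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.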
In ADO.NET, there is an event on the SqlConnection object called InfoMessage to which you can attach a handler and receive any message the server sends, such as PRINT output or IO stats.
I took a quick look at the MS JDBC driver for SQL Server and didn't find anything comparable; the best I found was this: Is there a way to display PRINT results with SQL server JDBC driver?.
My Java knowledge is thin and there could be something similar in another driver, although the above link on SQL Server message results only mentions the "SQL Server Native Client OLE DB provider".
After looking a bit more, I found out that you can get the message by running a trace (with Profiler) and that it appears as a trace message under the "User Error Message" EventClass, which contains the TransactionID you can use to relate it to your batch.
There is a default trace running in the background of SQL Server.
See http://www.simple-talk.com/sql/performance/the-default-trace-in-sql-server---the-power-of-performance-and-security-auditing/ and http://www.sqlservercentral.com/articles/sql+server+2005/64547/ - you might add the event you need to a trace, or run another one and then read from it.
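If you go the trace route, the trace file can be read back from T-SQL; this is only a sketch of reading the default trace (whether the message you need actually lands there depends on how the trace is configured):
-- Locate the default trace file and read it back
DECLARE @path nvarchar(260);
SELECT @path = path FROM sys.traces WHERE is_default = 1;

SELECT te.name AS event_class, t.TextData, t.SPID, t.StartTime
FROM fn_trace_gettable(@path, DEFAULT) AS t
JOIN sys.trace_events AS te ON te.trace_event_id = t.EventClass
ORDER BY t.StartTime DESC;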
Though to be honest, I hope you find a better solution.
EDIT:
I have no clue why you would be able to get PRINT messages but not IO stats through JDBC, but I can propose something else, in the direction I started with the trace.
You can do it with Extended Events: hook the right event, then read the trace afterwards with something like this.
First, execute this on the database:
CREATE EVENT SESSION QueryStats ON SERVER
ADD EVENT sqlserver.error_reported
(
    ACTION (sqlserver.sql_text)
    WHERE (severity = 10        -- informational messages only
       AND state = 0
       AND user_defined = 0
       AND error = 3615)        -- 3615 = the message number the IO stats arrive under
)
ADD TARGET package0.ring_buffer
WITH (max_dispatch_latency = 1 seconds)
GO
Then, around your statement:
ALTER EVENT SESSION QueryStats ON SERVER STATE = START
SET STATISTICS IO ON
SELECT * FROM MyTable -- your statement(s)
SET STATISTICS IO OFF
WAITFOR DELAY '00:00:01'; -- required: max_dispatch_latency cannot be set below 1 second
WITH QueryStats
AS (
    SELECT CAST(target_data AS xml) AS SessionData
    FROM sys.dm_xe_session_targets st
    INNER JOIN sys.dm_xe_sessions s ON s.address = st.event_session_address
    WHERE s.name = 'QueryStats'
)
SELECT
     error.value('(@timestamp)[1]', 'datetime') AS event_timestamp
    ,error.value('(action/value)[1]', 'nvarchar(max)') AS sql_text
    ,error.value('(data/value)[5]', 'varchar(max)') AS [message]
FROM QueryStats d
CROSS APPLY d.SessionData.nodes ('//RingBufferTarget/event') AS t(error)
ALTER EVENT SESSION QueryStats ON SERVER STATE = STOP
GO
And there you get a second result set with your IO stats.
Still, the solution is far from final: the wait time would need to be removed and the trace scoped better, which might be possible.
You might also leave the trace running and collect all the statements / IO stats later, depending on what you intend to do with them.
Related
I'm currently working on a project to migrate a code base from Advantage Database Server to SQL Server.
I'm using FireDAC in Delphi XE8 connected to a Microsoft SQL Server 2014 Express.
I have a small test project. There's a TDBGrid showing the content of a table (the query's lock mode is pessimistic, lock point immediate).
I have another TQuery with a SQL command like this:
update myTable
set firstName = 'John'
where id = 1
What I do:
I put the first row in Edit mode (by writing something in the cell)
When I press a button, it runs executeSQL on the Update query
Nothing happens -- the update query does not go through
That's fine... but I was expecting an error message telling me the UPDATE didn't go through.
How can I get the same behavior, but with an error message raised?
Essential connection settings to work with row locks:
TFDConnection.UpdateOptions.LockMode := lmPessimistic;
TFDConnection.UpdateOptions.LockPoint := lpImmediate;
TFDConnection.UpdateOptions.LockWait := False;
The behaviour described is SQL Server waiting for the lock to be released before finishing the UPDATE. By setting your FireDAC connection to 'no wait', it will raise an exception as soon as you attempt to do something to the row you've locked by putting the dataset in Edit. You can then catch that exception and do what you want.
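For illustration, the same "fail instead of wait" behaviour can be reproduced at the T-SQL level with SET LOCK_TIMEOUT; this is only a sketch of what the no-wait setting amounts to on the server side (the exact statements FireDAC issues are not shown here and are an assumption):
-- Session 1: hold the row lock (roughly what putting the dataset in Edit does with pessimistic locking)
BEGIN TRANSACTION;
SELECT firstName FROM myTable WITH (UPDLOCK, ROWLOCK) WHERE id = 1;

-- Session 2: fail immediately instead of waiting for the lock
SET LOCK_TIMEOUT 0;   -- 0 = do not wait; the default (-1) waits indefinitely
UPDATE myTable SET firstName = 'John' WHERE id = 1;
-- raises error 1222: "Lock request time out period exceeded"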
I'm trying to copy a table in SQL Server, but a simple statement seems to be locking my database when using pyodbc. Here's the code I'm trying:
import pyodbc as db  # assumed import alias, given the db.connect() call below

dbCxn = db.connect(cxnString)
dbCursor = dbCxn.cursor()
query = """\
SELECT TOP(10) *
INTO production_data_adjusted
FROM production_data
"""
dbCursor.execute(query)
The last statement returns immediately, but both LINQPad and SQL Server Management Studio are locked out of the database afterwards (e.g. when I try to refresh their table lists). Running sp_who2 shows that LINQPad/SSMS are stuck waiting on my pyodbc process. Other databases on the server seem fine, but all access to this database gets held up. The only way I can get these other applications to resolve their stalls is by closing the pyodbc database connection:
dbCxn.close()
This exact same SELECT ... INTO statement works fine and takes only a second from LINQPad and SSMS. The above code also works fine and doesn't lock the database if I remove the INTO line. It even returns results if I add fetchone() or fetchall().
Can anyone tell me what I'm doing wrong here?
Call the commit function of either the cursor or connection after SELECT ... INTO is executed, for example:
...
dbCursor.execute(query)
dbCursor.commit()
Alternatively, automatic commit of transactions can be specified when the connection is created using autocommit. Note that autocommit is an argument to the connect function, not a connection string attribute, for example:
...
dbCxn = db.connect(cxnString, autocommit=True)
...
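The underlying cause is visible from the server side: pyodbc starts with autocommit off, so the SELECT ... INTO stays inside an open transaction that holds locks on the newly created table's metadata until it is committed, which is what blocks other tools refreshing their table lists. A quick way to confirm this (a sketch, run in the affected database):
-- Show the oldest active transaction in the current database (the pyodbc session)
DBCC OPENTRAN;

-- Or list the sessions that still hold an open transaction
SELECT st.session_id, s.program_name, at.transaction_begin_time
FROM sys.dm_tran_session_transactions AS st
JOIN sys.dm_tran_active_transactions AS at ON at.transaction_id = st.transaction_id
JOIN sys.dm_exec_sessions AS s ON s.session_id = st.session_id;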
I have this query:
UPDATE Table SET Field = @value WHERE id = @id
id is the primary key.
When I execute this query against an arbitrary record, it works fine and returns almost immediately. But when I execute it against id 178413 specifically, it runs forever, until a timeout is triggered.
No queries should be locking this record for more than a few milliseconds.
The server runs SQL Server 2012.
What can be happening?
I found the problem.
Apparently one of the clients had crashed and kept its database connection open, probably in the middle of a transaction.
As soon as I restarted the faulty program, the record became updatable again.
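If it happens again, the blocking session can be identified and, as a last resort, killed from another connection instead of restarting clients; a sketch (the session id 64 below is only an example):
-- Who is blocking the stuck UPDATE?
SELECT r.session_id, r.blocking_session_id, r.wait_type, r.wait_time, t.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;

-- Inspect the blocker's last statement, then kill it if appropriate
DBCC INPUTBUFFER(64);   -- 64 = the blocking_session_id found above (example value)
-- KILL 64;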
So I have a stored procedure in SQL Server. I've simplified its code (for this question) to just this:
CREATE PROCEDURE dbo.DimensionLookup AS
BEGIN
    SELECT DimensionID, DimensionField
    FROM DimensionTable
    INNER JOIN Reference ON Reference.ID = DimensionTable.ReferenceID
END
In SSIS on SQL Server 2012, I have a Lookup component with the following source command:
EXECUTE dbo.DimensionLookup WITH RESULT SETS (
(DimensionID int, DimensionField nvarchar(700) )
)
When I run this procedure in Preview mode in BIDS, it returns the two columns correctly. When I run the package in BIDS, it runs correctly.
But when I deploy it out to the SSIS catalog (the same server the database is on), point it to the same data sources, etc. - it fails with the message:
EXECUTE statement failed because its WITH RESULT SETS clause specified 2 column(s) for result set number 1, but the statement sent
3 column(s) at run time.
Steps Tried So Far:
Adding a third column to the result set - I get a different error, VS_NEEDSNEWMETADATA - which makes sense, and is sort of proof there's no third column.
SQL Profiler - I see this:
exec sp_prepare @p1 output,NULL,N'EXECUTE dbo.DimensionLookup WITH RESULT SETS ((
DimensionID int, DimensionField nvarchar(700)))',1
SET FMTONLY ON exec sp_execute 1 SET FMTONLY OFF
So it's trying to use FMTONLY to get the result set metadata... needless to say, running SET FMTONLY ON and then running the command in SSMS myself yields... just the two columns.
SET NOCOUNT ON - Nothing changed.
So, two other interesting things:
I deployed it out to my local SQL 2012 install and it worked fine, same connections, etc. So it may be a server / database configuration; I'm not sure what, if anything, it is - I didn't install the dev server, and my own install was pretty much click-through vanilla.
Perhaps the most interesting thing: if I remove the join from the procedure's statement so it just becomes
select DimensionID, DimensionField from DimensionTable
It goes back to just sending 2 columns in the result set! So adding a join, without adding any additional output columns, ups the result set to 3 columns. Even if I add 6 more joins, it's still just 3 columns. So one guess is it's some sort of metadata column that only gets activated when there's a join.
Anyway, as you can imagine, it's driving me kind of mad. I have a workaround to load the data into a temp table and just return that, but why won't this work? What extra column is being sent back? Why only when I add a join?
Gah!
So all credit to billinkc: the reason is a patch.
In Version 11.0.2100.60, SSIS Lookup SQL command metadata is gathered using the old SET FMTONLY method. Unfortunately, this doesn't work in 2012, as the Books Online entry on SET FMTONLY helpfully notes:
Do not use this feature. This feature has been replaced by sp_describe_first_result_set.
Too bad they didn't follow their own advice!
This has been patched as of version 11.0.2218.0. Metadata is correctly gathered using the sp_describe_first_result_set system stored procedure.
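If you want to see what the patched driver relies on, you can ask the server for the same metadata yourself; a quick sketch against the procedure above (the parameter names are part of the documented signature):
-- One row per column of the first result set, as SSIS (post-patch) sees it
EXEC sp_describe_first_result_set
    @tsql = N'EXECUTE dbo.DimensionLookup',
    @params = NULL,
    @browse_information_mode = 0;  -- 1 would add extra browse metadata columns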
This can happen if the WITH RESULT SETS clause specified in SSIS declares more columns than the stored proc being called actually returns. Check your stored proc and ensure that its output columns match the WITH RESULT SETS clause.
I'm 100% sure that this question is a duplicate but I searched for a few hours and I didn't find anything.
My environment: Windows Server 2003, SQL Server 2005, .NET 2.0 (C#)
My problem:
When I run 5 requests at the same time, one of my stored procs raises a timeout.
If, while the 5 requests are waiting, I call this stored proc with the same arguments in Management Studio, I get my results in 0 seconds :)
I tried to see if I had too many connections open, but I can't see anything in Activity Monitor (I can see 14 items with "awaiting command").
So what is my problem? I think a configuration setting is missing; if it is, can you explain how I should choose the value for it?
Thanks
You can also try altering the isolation level of the select statement in the SP using a table hint.
For instance:
SELECT col1, col2, col3 FROM Table1 WITH (READUNCOMMITTED)
There are several other isolation levels, but READ UNCOMMITTED is the lowest and will read from a table that is exclusively locked. The downside is that you can get dirty reads.
If the issue is with locking, this might help.
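If you'd rather not add a hint to every table, the same effect can be set once for the whole procedure; a minimal sketch (the procedure name is hypothetical):
CREATE PROCEDURE dbo.MyProc AS
BEGIN
    -- Applies to every statement in the procedure instead of per-table hints
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

    SELECT col1, col2, col3 FROM Table1;
END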