Statement Timeout: On Web Sessions Only (PostgreSQL)

I have a website on which certain reports are run using SQL via the psqlODBC data source. Any report whose SQL takes longer than 30 seconds to run times out!
I have tried amending the statement_timeout setting within PostgreSQL, and this hasn't helped. I have also attempted to set the statement timeout within the ODBC Data Source Administrator, and that did not help either.
Can someone please advise why the statement timeout affects only the web sessions and nothing else?
The error shown in the web logs and PostgreSQL database logs is below:
16:33:16.06 System.Data.Odbc.OdbcException (0x80131937): ERROR [57014] ERROR: canceling statement due to statement timeout;
I don't think the issue lies with the statement_timeout setting in the PostgreSQL config file itself. I say this because I don't get the 30-second timeout when running queries in pgAdmin; I only get it when running reports on my website. The website connects to the PostgreSQL database via the psqlODBC driver.
Nonetheless, I did try setting statement_timeout in the PostgreSQL config file to 90 seconds, and this change had no impact on the reports: they still timed out after 30 seconds. (The change was applied, as show statement_timeout; did show the updated value.) Secondly, I tried editing the Connect Settings via the ODBC Data Source Administrator's Data Source Options, setting the statement timeout there with the following command: SET STATEMENT_TIMEOUT TO 90000
The change was applied but again had no impact; the statements still kept timing out.
I have also tried altering the user statement timeout via running a query similar to the below:
Alter User "MM" set statement_timeout = 90000;
This again had no impact. The web session logs show the following:
21:36:32.46 Report SQL Failed: ERROR [57014] ERROR: canceling statement due to statement timeout;
Error while executing the query
21:36:32.46 System.Data.Odbc.OdbcException (0x80131937): ERROR [57014] ERROR: canceling statement due to statement timeout;
Error while executing the query
at System.Data.Odbc.OdbcConnection.HandleError(OdbcHandle hrHandle, RetCode retcode)
at System.Data.Odbc.OdbcCommand.ExecuteReaderObject(CommandBehavior behavior, String method, Boolean needReader, Object[] methodArguments, SQL_API odbcApiMethod)
at System.Data.Odbc.OdbcCommand.ExecuteReaderObject(CommandBehavior behavior, String method, Boolean needReader)
at System.Data.Odbc.OdbcCommand.ExecuteReader(CommandBehavior behavior)
at System.Data.Odbc.OdbcCommand.ExecuteReader()
at Path.PathWebPage.SQLtoJSON(String strSQL, Boolean fColumns, Boolean fFields, String strODBC, Boolean fHasX, Boolean fHasY, Boolean fIsChart, String strUser, String strPass)

I have tried amending the statement_timeout settings within Postgres and this hasn't helped
Please show what you tried, and what error message (if any) you got when you tried it. Also, after changing the setting, did show statement_timeout; reflect the change you tried to make? What does select source from pg_settings where name = 'statement_timeout'; show?
In "stock" versions of PostgreSQL, a user is always able to change their session's statement_timeout setting. Some custom builds, such as those run by hosting providers, might block such changes, though (hopefully with a suitable error message).
There are several ways this could be getting set to a value that differs from the one specified in postgresql.conf. It could be set on a per-user or per-database basis using something like alter user "webuser" set statement_timeout = 30000 (which takes effect the next time that user logs in, or someone connects to that database). It could be set in the connection string (although I don't know how that works in ODBC). Or, if your app uses a centralized subroutine to establish the connection, that code could execute a set statement_timeout = 30000 SQL command on the connection before handing it back to the caller.
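The diagnostics mentioned above can be run in one place. A sketch (the pg_db_role_setting query lists any ALTER USER / ALTER DATABASE ... SET overrides; run it from the same kind of session that times out, since per-user settings apply at login):

```sql
-- Where did the session's effective value come from?
SHOW statement_timeout;
SELECT source, setting
FROM pg_settings
WHERE name = 'statement_timeout';

-- Any per-role or per-database overrides (ALTER USER / ALTER DATABASE ... SET)?
SELECT r.rolname, d.datname, s.setconfig
FROM pg_db_role_setting s
LEFT JOIN pg_roles r ON r.oid = s.setrole
LEFT JOIN pg_database d ON d.oid = s.setdatabase;
```

If pg_settings reports source = 'session', something on the connection path (driver option, connection string, or app code) is issuing a SET after connecting.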

Related

ROLLBACK TRANSACTION request has no corresponding BEGIN TRANSACTION following SQL Server Error 8114

I am writing in C++ against a SQL Server database. I have an object called SQLTransaction that, when created at the start of a code block, sends 'begin transaction' to SQL Server.
I then send one or more SQL statements to the server. If all goes well, I set a flag in the SQLTransaction object to indicate that the set of commands succeeded. When the SQLTransaction object goes out of scope, it sends either 'commit transaction' or 'rollback transaction' to the server, depending on the state of the flag.
It looks something like this:
{
    TSQLTransaction SQLTran;   // constructor sends 'begin transaction'
    try
    {
        Send( SomeSQLCommand );
    }
    catch(EMSError &e)
    {
        InformOperator();
        return;
    }
    SQLTran.commit();          // sets the flag; the destructor commits
}
I had a SQL statement in one of these blocks that sent a bad command, and that command threw SQL error 8114:
Error converting data type varchar to numeric
I have since fixed that particular issue. What I don't understand is the fact that I was also receiving a second SQL error with the message
The ROLLBACK TRANSACTION request has no corresponding BEGIN TRANSACTION.
I can't find anything that tells me this transaction could or should not be rolled back after failure.
This exact same SQLTransaction object is used in many places in my application and always seemed to work correctly until now. This SQL error seems to be treated differently for some reason. Are there some errors that SQL Server automatically rolls back? I'd really like to understand what's happening here.
Thanks
There is a connection option, SET XACT_ABORT, that determines the fate of the current transaction(s) when a SQL statement throws an error. Basically, when it is set to OFF, the transaction (usually) survives and execution continues; when it is ON, all open transactions in the current connection are rolled back and the batch is terminated.
The option could be set:
At the connection level;
By the database access driver (different drivers may have different defaults for connection options);
As a default value at the SQL Server instance level.
Check whether any of these was changed recently. Also, if you capture a trace in SQL Profiler, the ExistingConnection event lists the current connection settings. You can check the option's state there, and rule it out if it is turned off. In that case, I would look closer at the trace; there might be additional commands sent to the server that aren't apparent from your client code.
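A minimal way to see the ON behaviour described above (a sketch; run it in a scratch database):

```sql
SET XACT_ABORT ON;
BEGIN TRANSACTION;
-- With XACT_ABORT ON, a run-time error rolls back the open transaction
-- and terminates the batch, so a ROLLBACK sent later by the client
-- finds no corresponding BEGIN TRANSACTION.
SELECT CONVERT(numeric(10, 2), 'abc');  -- conversion error, as in Msg 8114
COMMIT;  -- never reached while XACT_ABORT is ON
```

With SET XACT_ABORT OFF, the same conversion error terminates only the statement; the transaction stays open and the client's ROLLBACK succeeds.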

PerformPostRestoreFixup() method gives Exception: Microsoft.Synchronization.Data.DbNotProvisionedException

Below is my code:
SqlConnection.ClearPool(serverConn);
serverConn = new SqlConnection(Utility.ConnStr_SqlSync_Server);
SqlSyncStoreRestore databaseRestore = new SqlSyncStoreRestore(serverConn);
databaseRestore.CommandTimeout = 1000; // database is about 300 GB
databaseRestore.PerformPostRestoreFixup(); // line #31
Error message:
Unhandled Exception: Microsoft.Synchronization.Data.DbNotProvisionedException: The current operation could not be completed because the database is not provisioned for sync or you do not have permissions to the sync configuration tables.
at Microsoft.Synchronization.Data.SqlServer.SqlManagementUtils.VerifyRuntimeAndSchemaVersionsMatch(SqlConnection connection, SqlTransaction trans, String objectPrefix, String objectSchema, Boolean throwWhenNotProvisioned)
at Microsoft.Synchronization.Data.SqlServer.SqlSyncStoreRestore.PerformPostRestoreFixup()
at FixSyncEnabledDbAfterBackup.Program.Main(String[] args) ..\Visual Studio 2010\Projects\FixSyncEnabledDbAfterBackup\FixSyncEnabledDbAfterBackup\Program.cs:line 31
And my question:
How do I determine whether it is a permissions problem or a provisioning problem?
I ran this code on two analogous systems with the same result. One system syncs perfectly; the other logs the following warning in the trace log and then stops:
System.ArgumentException: SQL Server Change Tracking is not enabled for table 'Users'
Change tracking is enabled; I have checked. With the PerformPostRestoreFixup() method, I hope to recover the second system's database after it was switched to the simple recovery model and back to full (the probable cause of the sync problem, I think).
It was not a matter of enabling change tracking for the table, as the error message suggests. The View Change Tracking permission had to be granted to the account that syncs the data. This can be checked in the table's properties in SSMS.
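In T-SQL, the check and the fix look roughly like this (a sketch; dbo.Users and SyncLogin are placeholder names):

```sql
-- Is change tracking actually enabled for the table?
SELECT OBJECT_NAME(object_id) AS table_name
FROM sys.change_tracking_tables;

-- Grant the sync account permission to read change tracking data.
GRANT VIEW CHANGE TRACKING ON OBJECT::dbo.Users TO [SyncLogin];
```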

Stored proc failing with SQL error 3621, runs fine in ssms and if dropped and recreated

I have a stored proc that is called by a .net application and passes an xml parameter - this is then shredded and forms the WHERE section of the query.
So in my query, I look for records with a documentType matching the one contained in the XML. The table contains many more records with a documentType of C than P.
The query will run fine for a number of weeks, regardless of whether the XML contains P or C for documentType. Then it stops working for documentType C.
I can run both queries from SSMS with no errors (using Profiler to capture the exact call that was made). Profiler shows that, when run from the application, the documentType C query starts a statement and then finishes before the statement ends, without completing the outstanding steps of the query.
I ran another profiler session to capture all errors and warnings. All I can see is error 3621 - The statement has been terminated.
There are no other errors relating to this spid, the only other things to be picked up were warnings changing database context.
I've checked the SQL logs and extended events and can find nothing. I don't think the query relates to the data content as it runs in SSMS without problems - I've also checked the range values for other fields in the WHERE clause and nothing unusual or untoward there. I also know that if I drop and recreate the procedure (i.e. exact same code) the problem will be fixed.
Does anyone know how I can trace the error that is causing the 3621 failure? Profiling does not pick it up.
In some situations, SQL Server raises two error messages: one is the actual error message saying exactly what is happening, and the other is 3621, which says The statement has been terminated.
Sometimes the first message gets lost, especially when you are calling a SQL query or object from a script.
I suggest going through each of your SQL statements and running them individually.
Another guess is that you have a timeout error on the client side. If you capture the Attention event in your SQL Server trace, you can follow the timeout error messages.

Occasional failure of sub-query on SQL Server using ADO - properties not supported

I have a Delphi program (D2010) that accesses a local SQL Server 2005 Express database using the ADO components (TADOConnection and TADOQuery). At program startup I use a correlated sub-query to identify the maximum of a specific field for a range of values. This works well in all our testing.
However, on some customer systems we have seen that if our program is shutdown and restarted immediately, the program fails when running this subquery with an EOleException 'The requested properties cannot be supported'. Subsequent restarts of the program repeat this error, until the PC is rebooted. In this state, all other database access in the program seems OK; this is the only use of a correlated sub-query in the program.
The correlated sub-query is:
SELECT p1.*
FROM Packs p1
WHERE p1.MachID = :MachID
  AND p1.BuildID <= :MaxPosID
  AND p1.PackID =
  (
    SELECT MAX(p2.PackID)
    FROM Packs p2
    WHERE p2.BuildID = p1.BuildID
      AND p2.MachID = p1.MachID
  )
ORDER BY BuildID
The MachID and MaxPosID fields do not change on an individual system, so the query is the same in any run of the program. The only difference with the customer systems is that they may be running with larger databases (typ. 1GB).
I have added some code to iterate over the database connection properties, and seen that on our working systems the 'Subquery Support' property has a value of 31H, which according to http://msdn.microsoft.com/en-us/library/office/aa140022(v=office.10).aspx means that correlated subqueries are supported.
I assume that when the problem occurs on customer sites the property does not have the same value set for some reason.
One workaround was to open a command prompt and use sqlcmd to just 'USE (our database name)'. If this command prompt is left open, our program starts normally. I have no idea how this affects the running of our program, or the value of the properties returned by the connection object.
Any ideas about why the supported properties change, or why program shutdown/startup should see this change?
I could rewrite the code to replace the correlated subquery with a slower search through the table until I find all the required values, which would probably not be affected by the problem, but I would like to understand what is happening.
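For reference, a correlated subquery like the one above can usually be rewritten as a join against a grouped derived table, which is uncorrelated and may sidestep the 'Subquery Support' property entirely. A sketch using the same columns and parameters:

```sql
SELECT p1.*
FROM Packs p1
JOIN (
    SELECT MachID, BuildID, MAX(PackID) AS MaxPackID
    FROM Packs
    GROUP BY MachID, BuildID
) m ON m.MachID = p1.MachID
   AND m.BuildID = p1.BuildID
   AND m.MaxPackID = p1.PackID
WHERE p1.MachID = :MachID
  AND p1.BuildID <= :MaxPosID
ORDER BY p1.BuildID
```

This returns the same rows (the latest PackID per MachID/BuildID pair) without requiring the provider to support correlated subqueries.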
Edit: the connection string is:
Provider=SQLNCLI.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=TQSquality
The connection string is modified at runtime to add 'OLE DB Services = -2', to switch off connection pooling.
The query is executed by:
SetCnx(LastPackIDQry, CnxQuality);
LastPackIDQry.Parameters[0].Value := GetMachNo;
LastPackIDQry.Parameters[1].Value := TQS.PosRange.Last;
QryOpen(LastPackIDQry);
try
  while not Eof do
  begin
    ...process the data
QryOpen is a utility that just calls .Open on the input query but provides some logging on errors. As I mentioned, the two parameters are fixed for a specific machine, so I cannot believe the problem is with the query; it has to be something to do with the connection or the database.
TIA Ian

SQL error: String or binary data would be truncated

I'm doing an integration on a community platform called Telligent. I'm using a 3rd-party add-on called BlogML to import blog posts from an XML file (in BlogML format) into my local Telligent site. The Telligent platform comes with many classes in their SDK so that I can programmatically add content, such as blog posts. E.g.
myWeblogService.AddPost(myNewPostObject);
The BlogML app I'm using essentially parses the XML, creates blog post objects, and then adds them to the site using code like the sample line above. After about 40 post imports, I get a SQL error:
Exception Details: System.Data.SqlClient.SqlException:
String or binary data would be truncated.
The statement has been terminated.
I believe this error means that I'm trying to insert too much data into a db field that has a maximum size limit. Unfortunately, I cannot tell which field is the issue. I ran SQL Server Profiler during the import, but I cannot see which stored procedure the error occurs in. Is there another way to use Profiler, or another tool, to see exactly which stored procedure and even which field causes the error? Are there any other tips for narrowing down where to look?
Oh the joys of 3rd-party tools...
You are correct that the exception is due to trying to stuff too much data into a character/binary-based field. Running a trace should definitely allow you to see which procedure/statement is throwing the exception if you are capturing the correct events; those you'd want to capture include:
1. SQL:BatchStarting
2. SQL:BatchCompleted
3. SQL:StmtStarting
4. SQL:StmtCompleted
5. RPC:Starting
6. RPC:Completed
7. SP:Starting
8. SP:Completed
9. SP:StmtStarting
10. SP:StmtCompleted
11. Exception
If you know for certain that a stored procedure contains the faulty code, you can skip capturing events 1-4. Be sure to capture all associated columns in the trace as well (this should be the default when running a trace with the Profiler tool). The Exception class will include the actual error in your trace, which should allow you to see the immediately preceding statement within the same SPID that threw the exception. You must include the starting events in addition to the completed events, because an exception prevents the associated completed events from firing in the trace.
If you can filter your trace to a particular database, application, host name, etc., that will certainly make it easier to debug on a busy server; on an idle server you may not need to bother with filtering.
Assuming you are using SQL Server 2005+, the trace will include a column called EventSequence, which is basically an incrementing value ordered by the sequence in which events fire. Once you run the trace and capture the output, find the Exception event that fired (in Profiler, the row will be shown in red), then you should be able to simply find the most recent SP:StmtStarting or SQL:StmtStarting event for the same SPID that occurred before the exception.
Here is a screen shot of a profile I captured reproducing an event similar to yours:
You can see the exception line in red; the highlighted line is the immediately preceding SP:StmtStarting event that fired prior to the exception for the same SPID. To find which stored procedure this statement is part of, look at the values in the ObjectName and/or ObjectId columns.
This error can also be the result of a simple mistake. For example, if you are trying to insert a string like
String reqName = "Food Non veg /n";
then the trailing /n is the culprit: remove it from the string to get rid of the error.
I hope this helps someone.
