Is there a way to override the server configuration 'statement_timeout' on Azure DB for PostgreSQL? - npgsql

In the Azure Portal, under the PostgreSQL server configuration, the statement_timeout parameter is described as follows:
"Sets the maximum allowed duration (in milliseconds) of any statement.
0 turns this off."
There doesn't seem to be a way to override that setting once it is set. We use Npgsql's .NET provider and have tried setting timeout=0 and command timeout=0 on the connection string to see whether that would override it, but it doesn't seem to have any effect on the timeout. We don't want to disable it globally or set it to a large interval, but we have some stored procedures that run a long time and would like to set the timeout on a per-statement basis.
Thank you.

You can set statement_timeout at a transaction level. In the example below, I start a transaction, set the timeout, call postgres' sleep function, then mark the end of the transaction.
BEGIN; SET LOCAL statement_timeout = 2000; SELECT pg_sleep(3); COMMIT;
Note that because the sleep function times out (which is marked as an ERROR), the transaction is rolled back.
Alternatively, you can set statement_timeout at a session level using just SET.
Info about SET and SET LOCAL: https://www.postgresql.org/docs/10/sql-set.html
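For example, a minimal sketch of the session-level approach, sent on the same connection (e.g. as an ordinary command from Npgsql) just before the long-running call; long_running_report() is a hypothetical function standing in for your slow stored procedure:
SET statement_timeout = 0;      -- applies to this session/connection only
SELECT long_running_report();   -- hypothetical long-running call, shown for illustration
RESET statement_timeout;        -- restore the server default afterwards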

Related

SQL Error during Lazy Loop awaiting Azure DB resize

I want to automate some DB scaling in my Azure SQL database.
This can be easily initiated using this:
ALTER DATABASE [myDatabase]
MODIFY (EDITION ='Standard', SERVICE_OBJECTIVE = 'S3', MAXSIZE = 250 GB);
But that command returns instantly, whilst the resize takes a few tens of seconds to complete.
We can check the actual current size using the following, which doesn't update until the change is complete:
SELECT DATABASEPROPERTYEX('myDatabase', 'ServiceObjective')
So naturally I wanted to combine this with a WHILE loop and a WAITFOR DELAY, in order to create a stored procedure that will change the DB size, and not return until the change has completed.
But when I wrote that stored procedure (script below) and ran it, I get the following error every time (at about the same time that the size change completes):
A severe error occurred on the current command. The results, if any, should be discarded.
The resize succeeds, but I get errors instead of a cleanly finishing stored procedure call.
Various things I've already tested:
If I separate the "Initiate" and the "WaitLoop" sections, and start the WaitLoop in a separate connection, after initiation but before completion, then that also gives the same error.
Adding a TRY...CATCH block doesn't help either.
Removing the stored procedure aspect and just running the code directly doesn't fix it either.
My interpretation is that the Resize isn't quite as transparent as one might hope, and that connections created before the resize completes get corrupted in some sense.
Whatever the exact cause, it seems to me that this stored procedure just isn't achievable at all; I'll have to do the polling from my external process - opening new connections each time. It's not an awful solution, but it is less pleasant than being able to encapsulate the whole thing in a single stored procedure. Ah well, such is life.
Question:
Before I give up on this entirely ... does anyone have an alternative explanation or solution for this error, which would thus allow a single stored procedure call to change the size and then not return until that size change actually completed?
Initial stored procedure code (simplified to remove parameterisation complexity):
CREATE PROCEDURE [trusted].[sp_ResizeAzureDbToS3AndWaitForCompletion]
AS
ALTER DATABASE [myDatabase]
MODIFY (EDITION ='Standard', SERVICE_OBJECTIVE = 'S3', MAXSIZE = 250 GB);
WHILE ((SELECT DATABASEPROPERTYEX('myDatabase', 'ServiceObjective')) != 'S3')
BEGIN
WAITFOR DELAY '00:00:05'
END
RETURN 0
Whatever the exact cause, it seems to me that this stored procedure just isn't achievable at all; I'll have to do the polling from my external process - opening new connections each time.
Yes, this is correct. As described here, when you change the service objective of a database:
A new compute instance is created with the requested service tier and compute size... the database remains online during this step, and connections continue to be directed to the database in the original compute instance ... [then] existing connections to the database in the original compute instance are dropped. Any new connections are established to the database in the new compute instance.
The part about existing connections being dropped is what kills your stored procedure execution. You need to do this check externally.
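As a rough sketch, the external poll can just run something like this on a fresh connection each time until it reports that the resize is done (the database name and target objective are the ones from the question):
-- Run on a new connection for each poll; returns 1 once the resize has completed
SELECT CASE
         WHEN CAST(DATABASEPROPERTYEX('myDatabase', 'ServiceObjective') AS nvarchar(64)) = N'S3'
         THEN 1 ELSE 0
       END AS ResizeComplete;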

Statement Timeout: On Web Sessions ONLY

I have a website and on that website, certain reports are run using SQL via the PSQL ODBC Source. Any report where the SQL takes longer than 30 seconds to run seems to time out!
I have tried amending the statement_timeout settings within PostgreSQL and this hasn't helped. I have also attempted to set the statement timeout within the ODBC Data Source Administrator and that did not help either.
Can someone please advise as to why the statement timeouts would only affect the web sessions and nothing else?
The error shown in the web logs and PostgreSQL database logs is below:
16:33:16.06 System.Data.Odbc.OdbcException (0x80131937): ERROR [57014] ERROR: canceling statement due to statement timeout;
I don't think the issue is related to the statement timeout setting in the PostgreSQL config file itself. The reason I say this is that I don't get the 30-second statement timeout when running queries in pgAdmin; I only get the timeout when running reports on my website. The website connects to the PostgreSQL database via a psqlODBC driver.
Nonetheless, I did try setting the statement timeout in the PostgreSQL config file to 90 seconds, and this change had no impact on the reports; they were still timing out after 30 seconds (the change was applied, since show statement_timeout; did show the updated value). Secondly, I tried editing the Connect Settings via the ODBC Data Source Administrator's Data Source Options, and set the statement timeout there using the following command: SET STATEMENT_TIMEOUT TO 90000
The change was applied but had no impact again and the statement still kept timing out.
I have also tried altering the user statement timeout via running a query similar to the below:
Alter User "MM" set statement_timeout = 90000;
This again had no impact. The web session logs show the following:
21:36:32.46 Report SQL Failed: ERROR [57014] ERROR: canceling statement due to statement timeout;
Error while executing the query
21:36:32.46 System.Data.Odbc.OdbcException (0x80131937): ERROR [57014] ERROR: canceling statement due to statement timeout;
Error while executing the query
at System.Data.Odbc.OdbcConnection.HandleError(OdbcHandle hrHandle, RetCode retcode)
at System.Data.Odbc.OdbcCommand.ExecuteReaderObject(CommandBehavior behavior, String method, Boolean needReader, Object[] methodArguments, SQL_API odbcApiMethod)
at System.Data.Odbc.OdbcCommand.ExecuteReaderObject(CommandBehavior behavior, String method, Boolean needReader)
at System.Data.Odbc.OdbcCommand.ExecuteReader(CommandBehavior behavior)
at System.Data.Odbc.OdbcCommand.ExecuteReader()
at Path.PathWebPage.SQLtoJSON(String strSQL, Boolean fColumns, Boolean fFields, String strODBC, Boolean fHasX, Boolean fHasY, Boolean fIsChart, String strUser, String strPass)
I have tried amending the statement_timeout settings within Postgres and this hasn't helped
Please show what you tried, and what error message (if any) you got when you tried it. Also, after changing the setting, did show statement_timeout; reflect the change you tried to make? What does select source from pg_settings where name ='statement_timeout' show?
In the "stock" versions of PostgreSQL, a user is always able to change their session's setting for statement_timeout. Some custom compilations like those run by hosted providers might block such changes though (hopefully with a suitable error message).
There are several ways this could be getting set in a way that differs from the value specified in postgresql.conf. It could be set on a per-user basis or per-database basis using something like alter user "webuser" set statement_timeout=30000 (which will take effect next time that user logs on, or someone logs on to that database). It could be set in the connection string (although I don't know how that works in ODBC). Or if your app uses a centralized subroutine to establish the connection, that code could just execute a set statement_timeout=30000 SQL command on the connection before it hands it back to the caller.
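To rule the per-user and per-database routes in or out, a quick check along these lines should show any such overrides (just a sketch; run it as a role that can read the catalogs):
-- Any per-role or per-database overrides (statement_timeout would appear in setconfig)
SELECT r.rolname, d.datname, s.setconfig
FROM pg_db_role_setting s
LEFT JOIN pg_roles r ON r.oid = s.setrole
LEFT JOIN pg_database d ON d.oid = s.setdatabase;

-- The effective value and where it came from in the current session
SELECT name, setting, source FROM pg_settings WHERE name = 'statement_timeout';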

Set query timeout in T-SQL

I'm executing queries against SQL Server from an application. Sometimes one of those queries runs very long. Too long, actually; it usually indicates that the query will eventually fail. I would like to specify a maximum duration, after which the query should just fail.
Is there a way to specify a command timeout in T-SQL?
I know a (connection and) command timeout can be set in the connection string. But in this application I cannot control the connection string. And even if I could it should be longer for the other queries.
As far as I know, you cannot limit query time unless it is specified in the connection string (which you can't change) or the query is executed over a linked server.
Your DBA can set a timeout on a linked server, as well as on queries, but a direct query does not let you set one yourself. The bigger question I would have is why does the query fail? Is there a preset timeout already on the server (hopefully), are you running out of memory and paging, or is it any of a million other reasons? Do you have a DBA? Because if one of my servers was being hammered by such bad code, I would be contacting the person who was executing it. If they haven't contacted you already, you should reach out and ask for help determining the failure reason.
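If the linked server route is available to you, the per-server query timeout can be set roughly like this (the linked server name is hypothetical; the value is in seconds):
EXEC sp_serveroption @server = N'MyLinkedServer',   -- hypothetical linked server name
                     @optname = 'query timeout',
                     @optvalue = '600';             -- 0 means fall back to the instance-wide "remote query timeout"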
If the unexpectedly long duration happens to be due to a (local) lock, there is a way to break out of it. Set the lock timeout before running the query:
SET LOCK_TIMEOUT 600000 -- Wait 10 minutes max to get the lock.
Do not forget to set it back afterwards to prevent subsequent queries on the connection from timing out:
SET LOCK_TIMEOUT -1 -- Wait indefinitely again.
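Put together, a minimal sketch of the pattern looks like this (dbo.SomeLargeTable is just a placeholder for the real query):
SET LOCK_TIMEOUT 600000;            -- fail with error 1222 if a lock is not granted within 10 minutes
SELECT * FROM dbo.SomeLargeTable;   -- the potentially blocked query (placeholder)
SET LOCK_TIMEOUT -1;                -- restore the default: wait indefinitely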

How to include a Set statement in tMSSQLOutput block

I have integrated with a SQL Server database using Talend. The jobs insert, update and delete records, along with many reads.
Recently, the ERP using this database was upgraded, and now we have to run a SET statement before executing any queries in order to perform the transactions.
The Set statement looks like this:
Set Contact_Info [binary value]
I cannot figure out a way to do this in Talend. Doesn't seem to be possible in a tMSSQLOutput block. Any suggestions?
The components tMSSqlOutput and tMSSqlInput are not designed to set properties or execute commands. You have to use the tMSSqlRow component to execute such a command.
I followed Balazs' approach and it worked perfectly. I added a connection block and the row block. I set the row block and my update/insert block to both use the connection block. The connection is also set to autocommit. Worked like a charm.

sql server set implicit_transactions off and other options

I am still learning SQL Server somewhat, and recently came across a SELECT query in a stored procedure which was causing a very slow fill of a dataset in C#. At first I thought this had to do with .NET, but then found a suggestion to put the following in the stored procedure:
set implicit_transactions off
This seems to cure it, but I would like to know why. I have also seen other options such as:
set nocount off
set arithabort on
set concat_null_yields_null on
set ansi_nulls on
set cursor_close_on_commit off
set ansi_null_dflt_on on
set ansi_padding on
set ansi_warnings on
set quoted_identifier on
Does anyone know where to find good info on what each of these does, and which are safe to use when I have stored procedures set up just to query data for viewing?
I should note, just to stop the usual use/don't-use stored procedures debate: these queries are complex SELECT statements used by multiple programs in multiple languages, so this is the best place for them.
Edit: Got my answer. I didn't end up fully reviewing all the options, but did find that
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
sped up the complex queries dramatically. I am not worried about dirty reads in this instance.
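For reference, roughly how that looks at the top of one of the read-only reporting procedures (the procedure and table names here are made up):
CREATE PROCEDURE dbo.usp_ExampleReport
AS
BEGIN
    -- Dirty reads are acceptable for this report, so don't wait on writers' locks
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    SELECT COUNT(*) AS RowsSoFar FROM dbo.SomeLargeTable;   -- placeholder query
END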
This is the page out of SQL Server Books Online (BOL) that you want. It explains all the SET statements that can be used in a session.
http://msdn.microsoft.com/en-us/library/ms190356.aspx
Ouch, someone, somewhere is playing with fire big-time.
I have never had a production scenario where I had to enable implicit transactions. I always open transactions when I need them and commit them when I am done. The problem with implicit transactions is that it's really easy to "leak" an open transaction, which can lead to horrible issues. What this setting means is "please open a transaction for me the first time I run a statement if there is no transaction open; don't worry about committing it".
For example have a look at the following examples:
set implicit_transactions on
go
select top 10 * from sysobjects
And
set implicit_transactions off
go
begin tran
select top 10 * from sysobjects
They both do the exact same thing; however, in the second example it's pretty clear that someone forgot to commit the transaction. This can get very complicated to track down if you have this set in an obscure place.
The best place to get documentation for all the SET statements is the trusty old SQL Server Books Online. It, together with a bit of experimentation in Query Analyzer, is usually all that is required to get a grasp of most settings.
I would strongly recommend you find out who is setting up implicit transactions, find out why they are doing it, and remove the setting if it's not really required. Also, you must confirm that whoever uses this setting commits their implicitly opened transactions.
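A quick way to check whether a connection has a leaked (possibly implicit) transaction still open is something along these lines:
SELECT @@TRANCOUNT AS OpenTransactionCount;  -- non-zero means a transaction is still open on this connection
DBCC OPENTRAN;                               -- reports the oldest active transaction in the current database, if any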
What was probably going on is that you had an open transaction that was blocking a bit of your stored proc, and somewhere you have a timeout that is occurring, raising an error and being handled in code; when that timeout happens, your stored proc continues running. My guess is that the delay is usually exactly 30 seconds.
I think you need to look deeper into your stored procedure. I don't think SET IMPLICIT_TRANSACTIONS is really what sped up your procedure; I think it's probably a coincidence.
One thing that may be worth looking at is what is passed from the client to the server, by using the Profiler.
We had an odd situation where the default SET options for the ADO connection were causing an SP to take ages to run from the client. We resolved it by looking at exactly what the server was receiving from the client, complete with the default SET options, and comparing that to what was sent when executing from SSMS. We then made the client pass the same SET statements as those sent by SSMS.
This may be way off track but it is a useful method to use when the SP executes in a timely fashion on the server but not from the client.
