When I set lock_timeout to 10 seconds locally in psql, as shown below:
SET LOCAL lock_timeout = 10000;
I got the warning below in psql:
WARNING: SET LOCAL can only be used in transaction blocks
Then SET LOCAL lock_timeout = 10000; doesn't apply to the following transaction at all, so LOCK TABLE person; waits for the lock forever instead of timing out after 10 seconds, as shown below:
postgres=# SET LOCAL lock_timeout = 10000;
WARNING: SET LOCAL can only be used in transaction blocks
SET
postgres=# BEGIN;
BEGIN
postgres=*# LOCK TABLE person; -- waits for the lock forever
So, how can I get rid of the warning and apply SET LOCAL lock_timeout = 10000; to the following transaction?
As the warning below says:
WARNING: SET LOCAL can only be used in transaction blocks
you need to run SET LOCAL inside a transaction, i.e. after BEGIN; then it works properly, as shown below:
postgres=# BEGIN;
BEGIN
postgres=*# SET LOCAL lock_timeout = 10000;
SET
postgres=*# LOCK TABLE person; -- waits for the lock for 10 seconds
ERROR: canceling statement due to lock timeout -- cancelled after 10 seconds
postgres=!#
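Alternatively, if you don't want an explicit transaction block, a plain SET (without LOCAL) applies lock_timeout to the rest of the session, so no BEGIN is required first; a minimal sketch:
SET lock_timeout = 10000; -- milliseconds; applies to all subsequent statements in this session
BEGIN;
LOCK TABLE person; -- now times out after 10 seconds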
Related
Can LOCK_TIMEOUT be set to a third of STATEMENT_TIMEOUT_IN_SECONDS in Snowflake, so that a queued query waits to get a lock on the resource and is aborted if it cannot?
I have a case where a concurrent DELETE query spends a long time in the "waiting for locks" phase, since the query/job has to wait for the table resource while it is locked by another transaction.
LOCK_TIMEOUT can be set at the account, session, or user level. It can be set to the same value as STATEMENT_TIMEOUT_IN_SECONDS, or to any other value.
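For example, to set it for the current session (a sketch; the value is in seconds):
ALTER SESSION SET LOCK_TIMEOUT = 60; -- wait up to 60 seconds for the lock, then abort the statement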
I have been trying to update a column in a table and I am getting the below error:
The transaction log for database 'STAGING' is full due to 'ACTIVE_TRANSACTION'.
I am trying to run the below statement:
UPDATE [STAGING].[dbo].[Stg_Encounter_Alias]
SET
[valid_flag] = 1
FROM [Stg_Encounter_Alias] Stg_ea
where [ACTIVE_IND] = 1
and [END_EFFECTIVE_DT_TM] > convert(date,GETDATE())
My table has approx. 18 million rows, and the above update will modify all of them. The table size is 2.5 GB, and the DB is in simple recovery mode.
This is something that I'll be doing very frequently on different tables. How can I manage this?
I have tried changing the log size to unlimited in the database properties, but it goes back to the default.
Can anyone tell me an efficient way to handle this scenario?
If I run it in batches:
begin
    SET NOCOUNT ON;

    DECLARE @Rows INT,
            @BatchSize INT; -- keep below 5000 to be safe (avoids lock escalation to a table lock)
    SET @BatchSize = 2000;
    SET @Rows = @BatchSize; -- initialize just to enter the loop

    WHILE (@Rows = @BatchSize)
    BEGIN
        UPDATE TOP (@BatchSize) [STAGING].[dbo].[Stg_Encounter_Alias]
        SET [valid_flag] = 1
        WHERE [ACTIVE_IND] = 1
          AND [END_EFFECTIVE_DT_TM] > convert(date, GETDATE())
          AND ([valid_flag] <> 1 OR [valid_flag] IS NULL) -- skip rows already updated so the loop terminates
        SET @Rows = @@ROWCOUNT;
    END;
end
You are performing your update in a single transaction, and this causes the transaction log to grow very large.
Instead, perform your updates in batches, say 50K - 100K at a time.
Do you have an index on END_EFFECTIVE_DT_TM that includes ACTIVE_IND and valid_flag? That would help performance.
CREATE INDEX NC_Stg_Encounter_Alias_END_EFFECTIVE_DT_TM_I_
ON [dbo].[Stg_Encounter_Alias](END_EFFECTIVE_DT_TM)
INCLUDE (valid_flag)
WHERE ([ACTIVE_IND] = 1);
Another thing that can drastically help performance, if you are running Enterprise Edition or SQL Server 2016 SP1 or later (any edition), is turning on data_compression = page for the table and its indexes.
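For example (a sketch reusing the table name from the question):
ALTER TABLE [dbo].[Stg_Encounter_Alias] REBUILD WITH (DATA_COMPRESSION = PAGE);
ALTER INDEX ALL ON [dbo].[Stg_Encounter_Alias] REBUILD WITH (DATA_COMPRESSION = PAGE);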
I am trying to execute the query below in Management Studio. When the execution starts to take long, I hit the red-squared Stop Execution button at the top of the Management Studio window; the query stops being processed and its results are cancelled. Then I issue a select @@trancount statement and it shows there is an open transaction. Since I hit Stop Execution, the transaction should have been rolled back, right? Why am I receiving the message saying there is an open transaction, and why does sp_lock show me a bunch of MyTable's RIDs under X lock? All actions are performed on SQL Server 2008 (RTM).
Declare @i int = 1;
Begin transaction
While @i <= 100000
Begin
Insert into MyTable
Values(default);
Set @i += 1;
End
Commit transaction
Cancelling a query will not roll back the transaction by default. When you press the cancel button in SSMS, or a timeout occurs during execution, the application or client API just sends an attention request to instruct SQL Server to stop executing the current batch. The transaction will remain active by default.
You can specify SET XACT_ABORT ON so that the attention event will also roll back the transaction. This is configurable in SSMS (Query --> Query Options --> Advanced). An explicit SET XACT_ABORT ON should also be included in all stored procs with BEGIN TRAN, to avoid problems after a query timeout.
SET XACT_ABORT ON;
DECLARE @i int = 1;
BEGIN TRANSACTION;
WHILE @i <= 100000
BEGIN
INSERT INTO dbo.MyTable
VALUES(default);
SET @i += 1;
END
COMMIT;
I am doing some work on a remote SQL Server database which takes some time, and I need to block any other connections to it so no data gets lost.
I believe I should use single-user mode to do this.
I need to get it back to multi-user mode after I finish my work, but my connection to the remote server is not reliable and will often get disconnected before I finish; in that case the work should just roll back automatically so I can do it later.
The problem is that when I try to perform it within a transaction I get this error:
ALTER DATABASE statement not allowed within multi-statement transaction
How can I perform
ALTER DATABASE dbName
SET SINGLE_USER WITH ROLLBACK IMMEDIATE
in a transaction, and make sure the database goes back to multi-user mode if I get disconnected?
So, we're trying to arrange for a database to be returned to multi_user mode if our connection drops. Here's one way that works, but is as ugly as sin.
First, we set things up appropriately:
create database RevertTest
go
use master
go
create table RevertLock (L int not null)
go
declare @rc int
declare @job_id uniqueidentifier
exec @rc = msdb..sp_add_job @job_name='RevertSingleUser',
    @description='Revert the RevertTest database to multi_user mode',
    @delete_level=3,
    @job_id = @job_id OUTPUT
if @rc != 0 goto Failed
exec @rc = msdb..sp_add_jobstep @job_id = @job_id,
    @step_name = 'Wait to revert',
    @command = '
WHILE EXISTS (SELECT * FROM RevertLock)
    WAITFOR DELAY ''00:00:01''
ALTER DATABASE RevertTest set multi_user
DROP TABLE RevertLock'
if @rc != 0 goto Failed
declare @nowish datetime
declare @StartDate int
declare @StartTime int
set @nowish = DATEADD(minute,30,GETDATE())
select @StartDate = DATEPART(year,@nowish) * 10000 + DATEPART(month,@nowish) * 100 + DATEPART(day,@nowish),
       @StartTime = DATEPART(hour,@nowish) * 10000 + DATEPART(minute,@nowish) * 100 + DATEPART(second,@nowish)
exec @rc = msdb..sp_add_jobschedule @job_id = @job_id,
    @name='Failsafe',
    @freq_type=1,
    @active_start_date = @StartDate,
    @active_start_time = @StartTime
if @rc != 0 goto Failed
exec @rc = msdb..sp_add_jobserver @job_id = @job_id
if @rc != 0 goto Failed
print 'Good to go!'
goto Fin
Failed:
print 'No good - couldn''t establish rollback plan'
Fin:
Basically, we create a job that tidies up after us. We schedule the job to start running in half an hour's time, but that's just to protect us from a small race.
We now run our actual script to do the work that we want it to:
use RevertTest
go
alter database RevertTest set single_user with rollback immediate
go
begin transaction
go
insert into master..RevertLock(L) values (1)
go
exec msdb..sp_start_job @job_name='RevertSingleUser'
go
WAITFOR DELAY '01:00:00'
If you run this script, you'll be able to observe that the database has entered single-user mode - the WAITFOR DELAY at the end is just to simulate us "doing work" - whatever it is that you want to do within the database whilst it's in single-user mode. If you stop this query running and disconnect this query window, within a second you should see that the database has returned to multi_user mode.
To finish your script successfully, just make the last step (before COMMIT) a delete from the RevertLock table. Just as with a disconnection, the revert job¹ will take care of switching the DB back into multi_user mode and then cleaning up after itself.
¹The job is actually slightly deceptive. It won't really sit looping and checking the table in master, since your transaction holds an exclusive lock on it due to the INSERT. Instead it sits and patiently waits to acquire a suitable lock, which only happens when your transaction commits or rolls back.
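In other words, the tail of the working script (in place of the WAITFOR DELAY above) would end something like this:
delete from master..RevertLock
commit transaction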
You cannot include the ALTER statement within your transaction. But you could top and tail your transaction, like so:
ALTER DATABASE TEST SET SINGLE_USER
GO
BEGIN TRANSACTION
-- Generate an error.
SELECT 1/0
ROLLBACK TRANSACTION
GO
ALTER DATABASE TEST SET MULTI_USER
This script sets the DB to single-user mode, hits an error inside the transaction, rolls back, and then returns the DB to multi-user mode.
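A slightly more defensive variant of the same idea (a sketch, not part of the original answer) wraps the work in TRY ... CATCH so the transaction is always closed before the database is switched back:
ALTER DATABASE TEST SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
BEGIN TRY
    BEGIN TRANSACTION
    -- ... do the actual work here ...
    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION
END CATCH
GO
ALTER DATABASE TEST SET MULTI_USER
Note that, like the original, this does not protect against the connection dropping mid-script; the job-based answer above handles that case.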
I wish to have a stored proc that is called every n seconds. Is there a way to do this in SQL Server without depending on a separate process?
Use a Service Broker conversation timer and activation. No external process; it continues to work after a clustering or mirroring failover, continues to work even after a restore on a different machine, and it works on Express too.
-- create a table to store the results of some dummy procedure
create table Activity (
InvokeTime datetime not null default getdate()
, data float not null);
go
-- create a dummy procedure
create procedure createSomeActivity
as
begin
insert into Activity (data) values (rand());
end
go
-- set up the queue for activation
create queue Timers;
create service Timers on queue Timers ([DEFAULT]);
go
-- the activated procedure
create procedure ActivatedTimers
as
begin
declare @mt sysname, @h uniqueidentifier;
begin transaction;
receive top (1)
@mt = message_type_name
, @h = conversation_handle
from Timers;
if @@rowcount = 0
begin
commit transaction;
return;
end
if @mt in (N'http://schemas.microsoft.com/SQL/ServiceBroker/Error'
, N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog')
begin
end conversation @h;
end
else if @mt = N'http://schemas.microsoft.com/SQL/ServiceBroker/DialogTimer'
begin
exec createSomeActivity;
-- set a new timer after 2s
begin conversation timer (@h) timeout = 2;
end
commit
end
go
-- attach the activated procedure to the queue
alter queue Timers with activation (
status = on
, max_queue_readers = 1
, execute as owner
, procedure_name = ActivatedTimers);
go
-- seed a conversation to start activating every 2s
declare @h uniqueidentifier;
begin dialog conversation @h
from service [Timers]
to service N'Timers', N'current database'
with encryption = off;
begin conversation timer (@h) timeout = 1;
-- wait 15 seconds
waitfor delay '00:00:15';
-- end the conversation, will stop activating
end conversation @h;
go
-- check that the procedure executed
select * from Activity;
You can set up a SQL Agent job - that's probably the only way to go.
SQL Server Agent is a component of SQL Server (not available in the Express editions, however) which allows you to automate certain tasks, like database maintenance, and you can also use it to call stored procs every n seconds.
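As a sketch (the job, schedule, database, and procedure names here are made up), a job that calls a proc every 10 seconds could be set up like this:
USE msdb;
GO
EXEC dbo.sp_add_job @job_name = N'RunMyProcEvery10s';
EXEC dbo.sp_add_jobstep @job_name = N'RunMyProcEvery10s',
    @step_name = N'call proc',
    @subsystem = N'TSQL',
    @command = N'EXEC dbo.MyProc;',
    @database_name = N'MyDatabase';
EXEC dbo.sp_add_jobschedule @job_name = N'RunMyProcEvery10s',
    @name = N'every 10 seconds',
    @freq_type = 4,             -- daily
    @freq_interval = 1,
    @freq_subday_type = 2,      -- sub-day units are seconds
    @freq_subday_interval = 10; -- every 10 seconds (the minimum for seconds-based schedules)
EXEC dbo.sp_add_jobserver @job_name = N'RunMyProcEvery10s';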
I once set up a stored procedure that ran continuously, using a loop with a WAITFOR at the end of it.
The WHILE condition depended upon the value read from a simple configuration table. If the value got set to 0, the loop would be exited and the procedure finished.
I put a WAITFOR DELAY at the end, so that however long it took to process a given iteration, it would wait XX seconds until it ran it again. (XX was also set in and read from the configuration table.)
If it must run at precise intervals (say 0, 15, 30, and 45 seconds in the minute), you could calculate the appropriate WAITFOR TIME value at the end of the loop.
Lastly, I had the procedure called by a SQL Agent job once a minute. The job would always show as "running", indicating that the procedure was running. If the procedure was killed or crashed, the job would start it up again within a minute. If the procedure was "turned off", the procedure still gets run, but the WHILE loop containing the processing is never entered, making the overhead nil.
I didn't much like having it in my database, but it fulfilled the business requirements.
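A minimal sketch of such a procedure (the config table dbo.LoopConfig and the worker proc dbo.DoWork are made-up names):
create procedure dbo.ProcessingLoop
as
begin
    set nocount on;
    declare @run bit = 1, @delay varchar(8);
    while @run = 1
    begin
        exec dbo.DoWork; -- the actual per-iteration processing
        -- re-read the on/off switch and the delay every iteration
        select @run = RunFlag,
               @delay = DelaySeconds -- e.g. '00:00:15'
        from dbo.LoopConfig;
        if @run = 1
            waitfor delay @delay; -- WAITFOR DELAY accepts a local variable
    end
end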
WAITFOR
{
DELAY 'time_to_pass'
| TIME 'time_to_execute'
| [ ( receive_statement ) | ( get_conversation_group_statement ) ]
[ , TIMEOUT timeout ]
}
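For example:
waitfor delay '00:00:05'; -- pause this batch for 5 seconds
waitfor time '23:30:00'; -- pause this batch until 11:30 PM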
If you want to keep an SSMS query window open:
While 1=1
Begin
exec [Procedure name here];
waitfor delay '00:00:15';
End