I am doing some work on a remote SQL Server database that takes some time, and I need to block any other connection to it so no data gets lost.
I believe I should use single-user mode to do this.
I need to get it back to multi-user mode after I finish my work, but my connection to the remote server is not reliable and often gets disconnected before I finish; usually I just roll back and try again later.
The problem is that when I try to perform it within a transaction, I get this error:
ALTER DATABASE statement not allowed within multi-statement transaction
How can I perform
ALTER DATABASE dbName
SET SINGLE_USER WITH ROLLBACK IMMEDIATE
in a transaction and make sure the database reverts to multi-user mode if I get disconnected?
So, we're trying to arrange for a database to be returned to multi_user mode if our connection drops. Here's one way that works, but is as ugly as sin.
First, we set things up appropriately:
create database RevertTest
go
use master
go
create table RevertLock (L int not null)
go
declare @rc int
declare @job_id uniqueidentifier
exec @rc = msdb..sp_add_job @job_name='RevertSingleUser',
    @description='Revert the RevertTest database to multi_user mode',
    @delete_level=3,
    @job_id = @job_id OUTPUT
if @rc != 0 goto Failed
exec @rc = msdb..sp_add_jobstep @job_id = @job_id,
    @step_name = 'Wait to revert',
    @command = '
WHILE EXISTS (SELECT * FROM RevertLock)
    WAITFOR DELAY ''00:00:01''
ALTER DATABASE RevertTest set multi_user
DROP TABLE RevertLock'
if @rc != 0 goto Failed
declare @nowish datetime
declare @StartDate int
declare @StartTime int
set @nowish = DATEADD(minute,30,GETDATE())
select @StartDate = DATEPART(year,@nowish) * 10000 + DATEPART(month,@nowish) * 100 + DATEPART(day,@nowish),
       @StartTime = DATEPART(hour,@nowish) * 10000 + DATEPART(minute,@nowish) * 100 + DATEPART(second,@nowish)
exec @rc = msdb..sp_add_jobschedule @job_id = @job_id,
    @name='Failsafe',
    @freq_type=1,
    @active_start_date = @StartDate,
    @active_start_time = @StartTime
if @rc != 0 goto Failed
exec @rc = msdb..sp_add_jobserver @job_id = @job_id
if @rc != 0 goto Failed
print 'Good to go!'
goto Fin
Failed:
print 'No good - couldn''t establish rollback plan'
Fin:
Basically, we create a job that tidies up after us. We schedule the job to start running in half an hour's time, but that's just to protect us from a small race.
We now run our actual script to do the work that we want it to:
use RevertTest
go
alter database RevertTest set single_user with rollback immediate
go
begin transaction
go
insert into master..RevertLock(L) values (1)
go
exec msdb..sp_start_job @job_name='RevertSingleUser'
go
WAITFOR DELAY '01:00:00'
If you run this script, you'll be able to observe that the database has entered single-user mode - the WAITFOR DELAY at the end is just to simulate us "doing work" - whatever it is that you want to do within the database whilst it's in single-user mode. If you stop this query running and disconnect this query window, within a second you should see that the database has returned to multi_user mode.
To finish your script successfully, just make the last action before your COMMIT a delete from the RevertLock table. Just as with the disconnection, the revert job¹ will take care of switching the DB back into multi_user mode and then cleaning up after itself.
¹The job is actually slightly deceptive. It won't actually sit looping and checking the table in master, since your transaction has an exclusive lock on it due to the INSERT. It instead sits and patiently waits to acquire a suitable lock, which only happens when your transaction commits or rolls back.
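As a sketch, the successful-completion path of the worker script would look something like this; the DELETE releases the lock on commit, letting the job's WHILE loop exit and revert the database:

```sql
-- ... do the single-user work inside the open transaction here ...

-- Last actions: release the lock row, then commit. Once the commit
-- releases the exclusive lock, the RevertSingleUser job sees an empty
-- RevertLock table, runs ALTER DATABASE ... SET multi_user, and drops
-- the table.
delete from master..RevertLock
commit transaction
```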
You cannot include the ALTER statement within your transaction. But you could top and tail your transaction, like so:
ALTER DATABASE TEST SET SINGLE_USER
GO
BEGIN TRANSACTION
-- Generate an error.
SELECT 1/0
ROLLBACK TRANSACTION
GO
ALTER DATABASE TEST SET MULTI_USER
This script sets the DB to single-user mode, then encounters an error before returning it to multi-user mode.
Every night, I back up my production server and push the backup to my dev server. My dev server then has a job that runs which first checks if the backup file exists; if so, it checks if the database exists in dev, and if so drops the database, then restores from the file. This all works fine, unless the file is not yet complete due to slow transfer, etc. If the file is not completely downloaded when the job runs, then the first step sees it exists and drops the database. The next step tries to restore and of course fails.
The next day when the job runs, I would expect that when it checks if the database exists, it would see that it does not, and so shouldn't attempt to drop it and would just restore. However, what's happening is the job is unable to drop the database and just fails at that point. This requires manual intervention to get the database restored, which is my problem. I'm less concerned with the fact of having no database existing on the server for a day (in theory), as I can tweak the schedule further to restore sooner. What I am concerned with is: why is the IF statement not working to check if the database exists, and why does the job attempt to drop a database regardless? Here's the T-SQL code that I am using:
DECLARE @output INT
DECLARE @SqlPath varchar(500) = 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Backup\PROD-01_prod_backup.bak'
EXECUTE master.dbo.xp_fileexist @SqlPath, @output OUT
IF @output = 1
BEGIN
    IF (EXISTS (SELECT name FROM master.dbo.sysdatabases WHERE ('[' + name + ']' = '[PROD-01]')))
    BEGIN
        ALTER DATABASE [PROD-01] SET SINGLE_USER WITH ROLLBACK IMMEDIATE
        DROP DATABASE [PROD-01]
    END
    RESTORE DATABASE [PROD-01] FROM DISK = @SqlPath
END
I'm not sure how this is happening, as I am unable to reproduce, but a TRY / CATCH block is an ideal solution for this case:
SET XACT_ABORT ON
GO
DECLARE @output INT
DECLARE @SqlPath varchar(500) = 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Backup\PROD-01_prod_backup.bak'
EXECUTE master.dbo.xp_fileexist @SqlPath, @output OUT
IF @output = 1
BEGIN
    IF (EXISTS (SELECT name FROM master.dbo.sysdatabases WHERE (name = 'PROD-01')))
    BEGIN TRY
        ALTER DATABASE [PROD-01] SET SINGLE_USER WITH ROLLBACK IMMEDIATE
        DROP DATABASE [PROD-01]
    END TRY
    BEGIN CATCH
        SELECT ERROR_MESSAGE();
    END CATCH
    RESTORE DATABASE [PROD-01] FROM DISK = @SqlPath
END
I am trying to execute the following query (see below) in Management Studio. The query execution starts to take long, so I hit the red-squared StopExecution button at the top of the Management Studio window; the query stops being processed and its results get canceled. Then I issue a SELECT @@TRANCOUNT statement and it shows there is an open transaction. Since I hit StopExecution, the transaction should have been rolled back, right? Why am I receiving the message saying there is an open transaction, and why does sp_lock show me a bunch of MyTable's RIDs under X lock? All actions are performed on SQL Server 2008 (RTM).
Declare @i Integer = 1;
Begin transaction
While @i <= 100000
Begin
    Insert into MyTable
    Values(default);
    Set @i += 1;
End
Commit transaction
Cancelling a query will not rollback the transaction by default. When you press the cancel button in SSMS or a timeout occurs during execution, the application or client API just sends an attention request to instruct SQL Server to stop executing the current batch. The transaction will remain active by default.
You can specify SET XACT_ABORT ON so that the attention event will also roll back the transaction. This is configurable in SSMS (Query --> Query Options --> Advanced). An explicit SET XACT_ABORT ON should also be included in all stored procs that use BEGIN TRAN, to avoid problems after a query timeout.
SET XACT_ABORT ON;
DECLARE @i Integer = 1;
BEGIN TRANSACTION;
WHILE @i <= 100000
BEGIN
    INSERT INTO dbo.MyTable
    VALUES(default);
    SET @i += 1;
END
COMMIT;
I have the following SQL script:
USE MyDatabase
ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
BEGIN TRANSACTION
-- Write to tables here
COMMIT TRANSACTION
ALTER DATABASE MyDatabase SET MULTI_USER
GO
I want to make sure that my database is not accessed while the script is running, which is why I am setting the database to SINGLE_USER at the top.
Now, if anything inside the transaction fails (e.g. syntax error) I will never reset the database back to MULTI_USER which means it will be stuck in single user mode permanently (not what I want).
Is there a way to ensure we always go back to MULTI_USER, even on failure?
I guess I'm looking for the equivalent of a finally block in SQL, but from what I have read this doesn't exist. Is there another way?
It's worth noting that I cannot move the ALTER DATABASE commands into the transaction, as this is not permitted (since they are committed immediately).
You should be able to use a TRY CATCH to make sure you always go back to MULTI_USER.
You will just need to move the command to switch back to MULTI_USER right after the TRY CATCH block, since FINALLY is not supported in SQL Server.
I ran a quick test with the following SQL and it worked as expected in SQL Server 2005. Just make sure you ROLLBACK the transaction in the CATCH block. I used SELECT 1/0 to force the code into the CATCH block. For debugging purposes, I added the SELECT user_access_desc ... statements to show that the database was indeed switching from single user back to multi user mode.
USE MyDatabase
ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
SELECT user_access_desc from sys.databases WHERE Name = 'MyDatabase'
DECLARE @errNum AS INT
DECLARE @errMsg AS VARCHAR(MAX)
SET @errNum = 0
BEGIN TRY
    BEGIN TRANSACTION
    SELECT 1/0
    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    SELECT @errNum = ERROR_NUMBER(), @errMsg = ERROR_MESSAGE()
    ROLLBACK TRANSACTION
END CATCH
IF @errNum <> 0
    SELECT 'An error occurred: ' + CAST(@errNum AS VARCHAR) + ' - ' + @errMsg
ALTER DATABASE MyDatabase SET MULTI_USER
GO
SELECT user_access_desc from sys.databases WHERE Name = 'MyDatabase'
EDIT
In my original answer, I had the ALTER ... MULTI_USER statement inside both the TRY and the CATCH block, but that was unnecessary so I moved the statement to right after the TRY CATCH block. I also added some error reporting. Some things to watch out for with this approach are:
If you do any error handling or reporting, you'll need to make sure that SQL doesn't error. I would probably write the @errNum and @errMsg values to a table (wrapped in a TRY CATCH), switch back to MULTI_USER mode, and then perform whatever other error handling measures are required, as the local variables will go out of scope after the GO statement.
Some errors are unaffected by TRY CATCH. The documentation I linked to above does list out what those conditions are.
I have an INSERT trigger on a table that simply executes a job.
Example:
CREATE TABLE test
(
RunDate smalldatetime
)
CREATE TRIGGER StartJob ON test
AFTER INSERT
AS
EXEC msdb.dbo.sp_start_job 'TestJob'
When I insert a record to this table, the job is fired of without any issue. There are a few people, however, that have lower permissions than I do (db_datareader/db_datawriter on the database only); they are able to insert a record to the table, but the trigger does not fire.
I am a SQL Server novice and I was under the impression that users did not need elevated permissions to fire off a trigger (I thought that was one of the big benefits!). Is this a permission issue at the trigger level, or at the job level? What can I do to get around this limitation?
The trigger will execute in the context of the caller, which may or may not have the permissions to access msdb. That seems to be your problem. There are a few ways to extend these permissions using Execute As; they are greatly detailed in this link
Use impersonation within trigger:
CREATE TRIGGER StartJob ON test
with execute as owner
AFTER INSERT
AS
EXEC msdb.dbo.sp_start_job 'TestJob'
And set database to trustworthy (or read about signing in above link):
alter database TestDB set trustworthy on
Another way to go (depending on what operations the Agent job performs) would be to leverage a Service Broker queue to handle stored procedure activation. Your users' context would simply SEND on the queue while, in an asynchronous process, Service Broker would activate a stored procedure that executes in the context of a higher-privileged user. I would opt for this solution rather than relying on a trigger calling an Agent job.
I wanted to test the call to Service Broker, so I wrote this simple test example. Instead of calling an SSIS package I simply send an email, but it is very similar to your situation. Notice I use SET TRUSTWORTHY ON at the top of the script. Please read about the implications of this setting.
To run this sample you will need to substitute your email profile info below, <your_email_address_here>, etc.
use Master;
go
if exists(select * from sys.databases where name = 'TestDB')
drop database TestDB;
create database TestDB;
go
alter database TestDB set ENABLE_BROKER;
go
alter database TestDB set TRUSTWORTHY ON;
use TestDB;
go
------------------------------------------------------------------------------------
-- create procedure that will be called by svc broker
------------------------------------------------------------------------------------
create procedure dbo.usp_SSISCaller
as
set nocount on;
declare @dlgid uniqueidentifier;
begin try
    -- * figure out how to start SSIS package from here
    -- for now, just send an email to illustrate the async callback
    ;receive top(1)
        @dlgid = conversation_handle
    from SSISCallerQueue;
    if @@rowcount = 0
    begin
        return;
    end
    end conversation @dlgid;
    exec msdb.dbo.sp_send_dbmail
        @profile_name = '<your_profile_here>',
        @importance = 'NORMAL',
        @sensitivity = 'NORMAL',
        @recipients = '<your_email_address_here>',
        @copy_recipients = '',
        @blind_copy_recipients = '',
        @subject = 'test from ssis caller',
        @body = 'testing',
        @body_format = 'TEXT';
    return 0;
end try
begin catch
    declare @msg varchar(max);
    select @msg = error_message();
    raiserror(@msg, 16, 1);
    return -1;
end catch;
go
------------------------------------------------------------------------------------
-- setup svcbroker objects
------------------------------------------------------------------------------------
create contract [//SSISCallerContract]
([http://schemas.microsoft.com/SQL/ServiceBroker/DialogTimer] sent by initiator)
create queue SSISCallerQueue
with status = on,
activation (
procedure_name = usp_SSISCaller,
max_queue_readers = 1,
execute as 'dbo' );
create service [//SSISCallerService]
authorization dbo
on queue SSISCallerQueue ([//SSISCallerContract]);
go
return;
-- usage
/*
-- put a row into the queue to trigger the call to usp_SSISCaller
begin transaction;
declare @dlgId uniqueidentifier;
begin dialog conversation @dlgId
    from service [//SSISCallerService]
    to service '//SSISCallerService',
    'CURRENT DATABASE'
    on contract [//SSISCallerContract]
    with encryption = off;
begin conversation timer (@dlgId)
    TIMEOUT = 5; -- seconds
commit transaction;
*/
It would be permissions at the job level. You can possibly assign those users the SQLAgentReaderRole in MSDB to be able to start a job, considering that they would be added to a group that owned the job. If they are not in a group which owns the job, it gets more difficult.
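As a sketch, granting that role membership in msdb might look like this; the principal name is a placeholder, and note that SQLAgentReaderRole members can only start jobs they own, while SQLAgentOperatorRole members can start any local job:

```sql
USE msdb;
-- 'DOMAIN\AppUsers' is a hypothetical login/group; substitute your own.
EXEC sp_addrolemember
    @rolename = N'SQLAgentReaderRole',
    @membername = N'DOMAIN\AppUsers';
```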
I wish to have a stored proc that is called every n seconds. Is there a way to do this in SQL Server without depending on a separate process?
Use a timer and activation. No external process, continues to work after a clustering or mirroring failover, continues to work even after a restore on a different machine, and it works on Express too.
-- create a table to store the results of some dummy procedure
create table Activity (
InvokeTime datetime not null default getdate()
, data float not null);
go
-- create a dummy procedure
create procedure createSomeActivity
as
begin
insert into Activity (data) values (rand());
end
go
-- set up the queue for activation
create queue Timers;
create service Timers on queue Timers ([DEFAULT]);
go
-- the activated procedure
create procedure ActivatedTimers
as
begin
    declare @mt sysname, @h uniqueidentifier;
    begin transaction;
    receive top (1)
        @mt = message_type_name
        , @h = conversation_handle
        from Timers;
    if @@rowcount = 0
    begin
        commit transaction;
        return;
    end
    if @mt in (N'http://schemas.microsoft.com/SQL/ServiceBroker/Error'
        , N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog')
    begin
        end conversation @h;
    end
    else if @mt = N'http://schemas.microsoft.com/SQL/ServiceBroker/DialogTimer'
    begin
        exec createSomeActivity;
        -- set a new timer after 2s
        begin conversation timer (@h) timeout = 2;
    end
    commit
end
go
-- attach the activated procedure to the queue
alter queue Timers with activation (
status = on
, max_queue_readers = 1
, execute as owner
, procedure_name = ActivatedTimers);
go
-- seed a conversation to start activating every 2s
declare @h uniqueidentifier;
begin dialog conversation @h
    from service [Timers]
    to service N'Timers', N'current database'
    with encryption = off;
begin conversation timer (@h) timeout = 1;
-- wait 15 seconds
waitfor delay '00:00:15';
-- end the conversation, will stop activating
end conversation @h;
go
-- check that the procedure executed
select * from Activity;
You can set up a SQL Agent job - that's probably the only way to go.
SQL Server Agent is a component of SQL Server - not available in the Express editions, however - which allows you to automate certain tasks, like database maintenance etc. but you can also use it to call stored procs every n seconds.
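For example, a job schedule can fire as often as every 10 seconds (Agent's minimum sub-day interval). A sketch, assuming a job named `CallMyProc` already exists with a step that executes the proc:

```sql
EXEC msdb.dbo.sp_add_jobschedule
    @job_name = N'CallMyProc',        -- hypothetical existing job
    @name = N'EveryTenSeconds',
    @freq_type = 4,                   -- daily
    @freq_interval = 1,
    @freq_subday_type = 2,            -- sub-day unit: seconds
    @freq_subday_interval = 10;       -- every 10 seconds (the minimum)
```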
I once set up a stored procedure that ran continuously, using a loop with a WAITFOR at the end of it.
The WHILE condition depended upon the value read from a simple configuration table. If the value got set to 0, the loop would be exited and the procedure finished.
I put a WAITFOR DELAY at the end, so that however long it took to process a given iteration, it would wait XX seconds until it ran it again. (XX was also set in and read from the configuration table.)
If it must run at precise intervals (say, 0, 15, 30, and 45 seconds in the minute), you could calculate the appropriate WAITFOR TIME value at the end of the loop.
Lastly, I had the procedure called by a SQL Agent job once a minute. The job would always be "running", showing that the procedure was running. If the procedure was killed or crashed, the job would start it up again in no more than 1 minute. If the procedure was "turned off", the procedure still gets run, but the WHILE loop containing the processing does not get entered, making the overhead nil.
I didn't much like having it in my database, but it fulfilled the business requirements.
WAITFOR
{
DELAY 'time_to_pass'
| TIME 'time_to_execute'
| [ ( receive_statement ) | ( get_conversation_group_statement ) ]
[ , TIMEOUT timeout ]
}
If you want to keep a SSMS query window open:
While 1=1
Begin
exec "Procedure name here" ;
waitfor delay '00:00:15';
End