How do I lock a SQLite database? - c

I have a trigger on a table that I don't want to fire in specific contexts.
In order to do this, I plan on:
locking the database
dropping the trigger
performing my operations
adding the trigger
unlocking the database
Locking the database is necessary so that any operations other threads attempt to perform will block until the trigger is back in place. How do I do this from C code?

Perform your work in a transaction by using the BEGIN TRANSACTION and COMMIT TRANSACTION SQL:
BEGIN TRANSACTION;
DROP TRIGGER dbname.triggername;
(do other stuff)
CREATE TRIGGER ...;
COMMIT TRANSACTION;
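From C, the same sequence can be run with sqlite3_exec(). Here is a minimal sketch, assuming an already open sqlite3 *db handle; the table name, trigger name, and trigger body are hypothetical placeholders for your own schema. A plain BEGIN in SQLite is deferred, so the sketch uses BEGIN IMMEDIATE to take the write lock up front; other connections that try to write will then wait (or receive SQLITE_BUSY) until the COMMIT.
#include <stdio.h>
#include <sqlite3.h>

/* Sketch only: "my_table", "my_trigger", and the trigger body are hypothetical. */
static int do_work_without_trigger(sqlite3 *db)
{
    char *errmsg = NULL;

    /* Wait up to 5 s for other connections' locks instead of failing
       immediately with SQLITE_BUSY. */
    sqlite3_busy_timeout(db, 5000);

    /* BEGIN IMMEDIATE takes the write lock up front, so other connections
       cannot write until the COMMIT. */
    const char *sql =
        "BEGIN IMMEDIATE;"
        "DROP TRIGGER my_trigger;"
        /* ... run the statements that must not fire the trigger ... */
        "CREATE TRIGGER my_trigger AFTER UPDATE ON my_table "
        "BEGIN SELECT 1; END;"          /* hypothetical trigger body */
        "COMMIT;";

    if (sqlite3_exec(db, sql, NULL, NULL, &errmsg) != SQLITE_OK) {
        fprintf(stderr, "SQL error: %s\n", errmsg);
        sqlite3_free(errmsg);
        sqlite3_exec(db, "ROLLBACK;", NULL, NULL, NULL);   /* best effort */
        return 1;
    }
    return 0;
}
Because SQLite DDL is transactional, the dropped and re-created trigger is not visible to other connections until the COMMIT.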

Use sqlite3_db_config() with the SQLITE_DBCONFIG_ENABLE_TRIGGER option to temporarily disable triggers; note that this only affects the database connection it is called on.
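A minimal C sketch of that approach, again assuming an open sqlite3 *db handle. Because SQLITE_DBCONFIG_ENABLE_TRIGGER disables all triggers, and only for the calling connection, it does not block other threads using their own connections; the drop-inside-a-transaction approach above is what actually makes them wait.
#include <sqlite3.h>

/* Sketch only: temporarily disable trigger execution on this connection. */
static void run_without_triggers(sqlite3 *db)
{
    int prev = 0;

    /* Passing -1 leaves the setting unchanged and reports the current value. */
    sqlite3_db_config(db, SQLITE_DBCONFIG_ENABLE_TRIGGER, -1, &prev);

    /* Turn triggers off for this connection only. */
    sqlite3_db_config(db, SQLITE_DBCONFIG_ENABLE_TRIGGER, 0, NULL);

    /* ... run the statements that must not fire triggers ... */

    /* Restore the previous setting. */
    sqlite3_db_config(db, SQLITE_DBCONFIG_ENABLE_TRIGGER, prev, NULL);
}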

Related

What happens to a table when a trigger is dropped?

I need to delete a disabled DML trigger on one of our tables.
Problem is the table is critical to production so I don't want to cause any delays.
Hence my question: when you drop a trigger, does it run an ALTER on the table internally, or otherwise affect the table?
I know that disabling a trigger requires running an ALTER, but I'm unsure what exactly DROP TRIGGER does step by step, and whether that could lock the table and prevent any other queries from running.
Edit: I've looked at the official MS website and it says "The table and the data upon which it is based are not affected." But it leaves it a bit ambiguous as to whether the table is locked for the duration of the DROP. It also states that ALTER permissions are required, which makes me concerned that ALTER is actually run under the hood.
If you test this, you'll see that DROP TRIGGER does require an exclusive lock on the table. It's a metadata-only operation, so it shouldn't lock the table for long. But if the table is busy, the DROP TRIGGER session's pending X or Sch-M lock will block other sessions' new requests for locks on the table. If all the tasks the DROP TRIGGER session is waiting behind are short-running, it will quickly get the locks it needs and complete its work.
However, if the table is used by long-running queries, the blocking caused by the waiting DROP TRIGGER can be significant. So either wait for a maintenance window, or simulate waiting at low priority (which isn't available for DROP TRIGGER) by setting a LOCK_TIMEOUT and using a retry loop, something like:
SET LOCK_TIMEOUT 500;
WHILE 1 = 1
BEGIN
    BEGIN TRY
        DROP TRIGGER SomeTrigger;
        BREAK;
    END TRY
    BEGIN CATCH
        IF ERROR_NUMBER() <> 1222   -- 1222 = lock request time-out exceeded
            THROW;
        RAISERROR ('retrying to drop trigger after lock timeout', 0, 1) WITH NOWAIT;
    END CATCH
END
The longer the lock_timeout the better the chances of success, but the more impact you might have on other sessions.

How to prevent deadlock of table in SQL Server

I have a table where values can be altered by different users, with records of 100k rows.
I made a stored procedure that begins a transaction and, at the end, either commits or rolls back the changes depending on the situation.
The problem we're encountering now is locking of that table. For example, while the first user is executing the stored procedure through the system, the other users can't select from the table or execute the stored procedure themselves, because the table is locked.
So is there any way I can avoid the lock other than using dirty reads? Or a way to roll back the changes without using BEGIN TRAN, since that is the main reason the table gets locked up?
Yes, as a quick and dirty fix you can at least enable the SNAPSHOT isolation level for transactions. That stops readers from being blocked by writers inside transactions, because reads no longer take shared locks (writers still lock the rows they modify).
ALTER DATABASE MyDatabase
SET ALLOW_SNAPSHOT_ISOLATION ON
ALTER DATABASE MyDatabase
SET READ_COMMITTED_SNAPSHOT ON
See the SQL Server documentation on snapshot isolation for details.

Does a rollback inside an INSERT AFTER or UPDATE AFTER trigger roll back the entire transaction?

Does a rollback inside an INSERT AFTER or an UPDATE AFTER trigger roll back the entire transaction, or just the current row that caused the trigger to fire? And is it the same with COMMIT?
I tried to check it through my current project's code, which uses MSDTC for transactions, and it appears as though the complete transaction is aborted.
If a rollback in the trigger does roll back the entire transaction, is there a workaround to restrict it to just the current rows?
I found a link on this for Sybase, but nothing for SQL Server.
Yes, it will roll back the entire transaction.
It's all in the docs (see Remarks). Note the comment I've emphasised - that's pretty important I would say!!
If a ROLLBACK TRANSACTION is issued in a trigger:
All data modifications made to that point in the current transaction are rolled back, including any made by the trigger.
The trigger continues executing any remaining statements after the ROLLBACK statement. If any of these statements modify data, the modifications are not rolled back. No nested triggers are fired by the execution of these remaining statements.
The statements in the batch after the statement that fired the trigger are not executed.
As you've already been told, the ROLLBACK command can't be modified or tuned so that it only rolls back the statements issued by the trigger.
If you do need a way to "roll back" only the actions performed by the trigger, you could, as a workaround, consider modifying your trigger so that, before performing its actions, it makes sure those actions will not produce exceptional situations that would cause the entire transaction to roll back.
For instance, if your trigger inserts rows, add a check to make sure the new rows do not violate e.g. unique constraints (or foreign key constraints), something like this:
IF NOT EXISTS (
    SELECT *
    FROM TableA
    WHERE … /* a condition to test if a row or rows you are about
               to insert aren't going to violate any constraint */
)
BEGIN
    INSERT INTO TableA …
END;
Or, if your trigger deletes rows, check if it doesn't attempt to delete rows referenced by other tables (in which case you typically need to know beforehand which tables might reference the rows):
IF NOT EXISTS (
    SELECT * FROM TableB WHERE …
)
AND NOT EXISTS (
    SELECT * FROM TableC WHERE …
)
AND …
BEGIN
    DELETE FROM TableA WHERE …
END
Similarly, you'd need to make checks for update statements, if any.
Any ROLLBACK command will roll back everything until @@TRANCOUNT is 0 unless you specify savepoints, and it doesn't matter where you put the ROLLBACK TRAN command.
The best approach is to look at the code once again, confirm the business requirement, and ask why you need a rollback in the trigger at all.

Ignoring errors in Trigger

I have a stored procedure which is called inside a trigger on Insert/Update/Delete.
The problem is that there is a certain code block inside this SP which is not critical.
Hence I want to ignore any errors arising from this code block.
I inserted this code block inside a TRY CATCH block. But to my surprise I got the following error:
The current transaction cannot be committed and cannot support operations that write to the log file. Roll back the transaction.
Then I tried using SAVE & ROLLBACK TRANSACTION along with TRY CATCH, but that too failed with the following error:
The current transaction cannot be committed and cannot be rolled back to a savepoint. Roll back the entire transaction.
My server version is: Microsoft SQL Server 2008 (SP2) - 10.0.4279.0 (X64)
Sample DDL:
IF OBJECT_ID('TestTrigger') IS NOT NULL
DROP TRIGGER TestTrigger
GO
IF OBJECT_ID('TestProcedure') IS NOT NULL
DROP PROCEDURE TestProcedure
GO
IF OBJECT_ID('TestTable') IS NOT NULL
DROP TABLE TestTable
GO
CREATE TABLE TestTable (Data VARCHAR(20))
GO
CREATE PROC TestProcedure
AS
BEGIN
    SAVE TRANSACTION Fallback
    BEGIN TRY
        DECLARE @a INT = 1/0
    END TRY
    BEGIN CATCH
        ROLLBACK TRANSACTION Fallback
    END CATCH
END
GO
CREATE TRIGGER TestTrigger
ON TestTable
FOR INSERT, UPDATE, DELETE
AS
BEGIN
EXEC TestProcedure
END
GO
Code to replicate the error:
BEGIN TRANSACTION
INSERT INTO TestTable VALUES('data')
IF @@ERROR > 0
    ROLLBACK TRANSACTION
ELSE
    COMMIT TRANSACTION
GO
I was going through the same torment, and I just solved it!!!
Just add this single line at the very first step of your TRIGGER and you're going to be fine:
SET XACT_ABORT OFF;
In my case, I'm handling the error by logging the batch that caused it, together with SQL's error variables, to a specific table.
The default value for XACT_ABORT is ON, so the entire transaction won't be committed even if you're handling the error inside a TRY CATCH block (just as I'm doing). Setting it to OFF will allow the transaction to be committed even when an error occurs.
However, I didn't test it when the error is not handled...
For more info:
SET XACT_ABORT (Transact-SQL) | Microsoft Docs
I'd suggest re-architecting this so that you don't poison the original transaction - maybe have the transaction send a service broker message (or just insert relevant data into some form of queue table), so that the "non-critical" part can take place in a completely independent transaction.
E.g. your trigger becomes:
CREATE TRIGGER TestTrigger
ON TestTable
FOR INSERT, UPDATE, DELETE
AS
BEGIN
INSERT INTO QueueTable (Col1, Col2)
SELECT Col1, Col2 FROM inserted               -- new values for INSERT and UPDATE
UNION ALL
SELECT Col1, Col2 FROM deleted                -- old values for DELETE
WHERE NOT EXISTS (SELECT 1 FROM inserted)
END
GO
You shouldn't do anything inside a trigger that might fail, unless you do want to force the transaction that initiated the trigger action to also fail.
This is a very similar question to Why try catch does not suppress exception in trigger
Also see the answer here T-SQL try catch transaction in trigger
I don't think you can use savepoints inside a trigger. I mean, you can, but I googled it and saw a few people saying that they don't work. If you replace your SAVE TRANSACTION with a BEGIN TRANSACTION, it compiles. Of course it is not necessary, because you have the outer transaction control and the inner rollback would roll back everything.

In SQL Server, how can I lock a single row in a way similar to Oracle's "SELECT FOR UPDATE WAIT"?

I have a program that connects to an Oracle database and performs operations on it. I now want to adapt that program to also support an SQL Server database.
In the Oracle version, I use "SELECT FOR UPDATE WAIT" to lock specific rows I need. I use it in situations where the update is based on the result of the SELECT and other sessions can absolutely not modify it simultaneously, so they must manually lock it first. The system is highly subject to sessions trying to access the same data at the same time.
For example:
Two users try to fetch the row in the database with the highest priority, mark it as busy, perform operations on it, and mark it as available again for later use.
In Oracle, the logic would go basically like this:
BEGIN TRANSACTION;
SELECT ITEM_ID FROM TABLE_ITEM WHERE ITEM_PRIORITY > 10 AND ITEM_CATEGORY = 'CT1'
AND ITEM_STATUS = 'available' AND ROWNUM = 1 FOR UPDATE WAIT 5;
UPDATE [locked item_id] SET ITEM_STATUS = 'unavailable';
COMMIT TRANSACTION;
Note that the queries are built dynamically in my code. Also note that when the previously most favorable row is marked as unavailable, the second user will automatically go for the next one and so on. Furthermore, different users working on different categories will not have to wait for each other's locks to be released. Worst comes to worst, after 5 seconds, an error would be returned and the operation would be cancelled.
So finally, the question is: how do I achieve the same results in SQL Server? I have been looking at locking hints which, in theory, seem like they should work. However, the only hints that prevent other locks are "UPDLOCK" and "XLOCK", which both only work at a table level.
Those locking hints that do work at a row level are all shared locks, which also do not satisfy my needs (both users could lock the same row at the same time, both mark it as unavailable and perform redundant operations on the corresponding item).
Some people seem to add a "time modified" column so sessions can verify that they are the ones who modified it, but this sounds like there would be a lot of redundant and unnecessary accesses.
You're probably looking for WITH (UPDLOCK, HOLDLOCK). This will make the SELECT take update locks, which block other sessions from grabbing the same rows for update, instead of shared locks. The HOLDLOCK hint tells SQL Server to keep the lock until the transaction ends.
FROM TABLE_ITEM with (updlock, holdlock)
As the documentation says:
XLOCK
Specifies that exclusive locks are to be taken and held until the transaction completes. If specified with ROWLOCK, PAGLOCK, or TABLOCK, the exclusive locks apply to the appropriate level of granularity.
So the solution is to use WITH (XLOCK, ROWLOCK):
BEGIN TRANSACTION;
SELECT TOP (1) ITEM_ID
FROM TABLE_ITEM
WITH (XLOCK, ROWLOCK)
WHERE ITEM_PRIORITY > 10 AND ITEM_CATEGORY = 'CT1' AND ITEM_STATUS = 'available';
UPDATE [locked item_id] SET ITEM_STATUS = 'unavailable';
COMMIT TRANSACTION;
In SQL Server there are locking hints, but they apply to individual statements rather than spanning the transaction the way the Oracle example you provided does. The way to do it in SQL Server is to set an isolation level on the transaction that contains the statements you want to execute. See this MSDN page, but the general structure would look something like:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
select * from ...
update ...
COMMIT TRANSACTION;
SERIALIZABLE is the highest isolation level. See the link for other options. From MSDN:
SERIALIZABLE Specifies the following:
Statements cannot read data that has been modified but not yet committed by other transactions.
No other transactions can modify data that has been read by the current transaction until the current transaction completes.
Other transactions cannot insert new rows with key values that would fall in the range of keys read by any statements in the current transaction until the current transaction completes.
Have you tried WITH (ROWLOCK)?
BEGIN TRAN
UPDATE your_table WITH (ROWLOCK)
SET your_field = a_value
WHERE <a predicate>
COMMIT TRAN
