I'm new to SQL. I have a large number of stored procedures in my production database. I planned to create an audit table that these stored procedures would write to, so that changes are tracked. The problem is that when a transaction rolls back, the rows inserted into the audit table are rolled back as well. Is there any way to create a table that is not affected by transaction rollbacks? Any other idea that satisfies this requirement is welcome!
You can't: once a session starts a transaction, all activity on that session is contained inside the transaction.
What you can do is open a different session, for instance from a CLR procedure that connects as an ordinary client (not using the context connection), and audit over that connection.
But auditing actions that roll back is a bit unusual, since you are auditing things that, from the database's perspective, never occurred, so the audit records and the actual database state will conflict.
OK, if you want to know what was rolled back, here is what you do:
Let your existing audit process handle successful inserts.
Put the values for the insert into a table variable in your SP. It is important that it is a table variable and not a temp table. Now, in the CATCH block for the transaction, perform the rollback; this will not clear the table variable. Then insert the values from the table variable into your audit table. (Add a field to the audit table so you can mark the records as rolled back, and possibly one for the error message.)
We don't do this specifically for auditing but we have done this to record the errors.
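A minimal sketch of the pattern (the audit table name, its RolledBack and ErrorMessage columns, and the captured values are illustrative, not from the question):

DECLARE @audit TABLE (AuditText varchar(500));

BEGIN TRY
    BEGIN TRANSACTION;

    -- Capture the audit values in the table variable as the work happens
    INSERT INTO @audit (AuditText) VALUES ('about to update order 12345');
    -- ... the actual work of the procedure ...

    COMMIT TRANSACTION;

    -- Success path: write the audit rows as usual
    INSERT INTO dbo.AuditTable (AuditText, RolledBack)
    SELECT AuditText, 0 FROM @audit;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;   -- does NOT clear the table variable

    -- Rolled-back path: the audit rows survive, flagged with the error
    INSERT INTO dbo.AuditTable (AuditText, RolledBack, ErrorMessage)
    SELECT AuditText, 1, ERROR_MESSAGE() FROM @audit;
END CATCH;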
When Oracle executes a DELETE statement with no WHERE clause, it locks the whole table. So while the data is being deleted by one user session, will Oracle allow other user sessions to read data from the same table, given that the table is locked?
delete from tran_records;
Will there be any difference in behaviour for the above scenario in optimistic and pessimistic locking?
Readers are never (b)locked, so yes, the SELECT will work; that user will see all the data in the table UNTIL the delete operation is committed. After that, nobody will see anything, as there will be no data left in the table (not talking about Flashback here).
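For example, a minimal two-session illustration (using the table from the question):

-- Session 1 (delete issued, not yet committed):
delete from tran_records;

-- Session 2, at the same time: the reader is not blocked; read consistency
-- shows the data as it was before the uncommitted delete
select count(*) from tran_records;   -- still returns the original row count

-- Session 1:
commit;

-- Session 2, after the commit:
select count(*) from tran_records;   -- now returns 0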
We only have SQL Server Standard Edition, so I can't use the snapshot functionality. Before spending the time, I just want to know if the following is possible (or if there is a better way), please:
At the end of every month I need to take a snapshot of the month's data and store it in table B. The following month, take another snapshot and append that snapshot's data to table B. And so on.
Is it possible to create a stored procedure that runs at the end of every month and stores the snapshot data into a temp table A? Then, using another stored procedure, take the data from temp table A and append it to table B? The second procedure can drop table A.
Cheers.
Yes, it is possible.
If I understand you, more or less, this is what you want:
Lock the table
Select everything into a staging table
Move everything from that staging table into your destination
You can lock the entire table (this will prevent changes, but can lead to deadlocks).
INSERT INTO stagingTable (
... -- field list
)
SELECT
... -- field list
FROM
myTable WITH (TABLOCK)
;
TABLOCK places a shared lock on the table, which is released when the statement completes (under the default READ COMMITTED isolation level) or when the transaction is committed or rolled back (under SERIALIZABLE).
If you want to keep the lock for the whole transaction, add the HOLDLOCK hint as well; it applies SERIALIZABLE semantics to that object, so the lock is only released after COMMIT (or ROLLBACK). Don't forget to start a transaction and commit or roll it back.
You can also use TABLOCKX, which takes an exclusive lock, preventing other processes from acquiring any lock on the table or on anything at lower levels (pages, rows, etc.) within it. This will prevent concurrent reads too!
You can also let SQL Server decide which lock to use (i.e. omit the hint); in this case it may choose more granular locks (such as page or row locks) instead of locking the whole table.
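Putting this together, a rough sketch of the month-end flow under these assumptions (the column list, the SnapshotDate column and the staging/destination table names are illustrative):

BEGIN TRANSACTION;

    -- Copy the current data, holding a shared table lock for the whole transaction
    INSERT INTO stagingTableA (Col1, Col2)
    SELECT Col1, Col2
    FROM myTable WITH (TABLOCK, HOLDLOCK);

    -- Append this month's snapshot to the permanent table
    INSERT INTO tableB (Col1, Col2, SnapshotDate)
    SELECT Col1, Col2, GETDATE()
    FROM stagingTableA;

    -- Clear the staging table rather than dropping and recreating it each month
    TRUNCATE TABLE stagingTableA;

COMMIT TRANSACTION;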
Oracle 10g -- due to a compatibility issue with a 9i database, I'm pulling data through a 10g database (to be used by an 11g database) using INSERT INTO...SELECT statements via a scheduled job that runs every 15 minutes. I notice that TRUNCATE statements are much faster than DELETE statements and have read that a 'downside' to DELETE statements is that they never decrease the table high-water mark. My use for this data is purely read-only -- UPDATEs and INSERTs are never issued against the tables in question.
Given the above, I want to avoid the possible situation where my 'working' database (Oracle 11g) attempts to read from a table on my staging database (10g) that is empty for a period of time because the TRUNCATE happened straight away and the INSERT INTO...SELECT from the 9i database is taking a couple of minutes to complete.
So, I'm wondering whether that is how Oracle handles TRUNCATEs within a transaction, or whether the whole operation is performed and committed as one, despite the fact that TRUNCATEs can't be rolled back. Or, put another way: from an external SELECT's point of view, if I wrap a TRUNCATE and an INSERT INTO...SELECT on a table in a transaction, will the table ever appear empty to an external SELECT reading from it?
TRUNCATE is a DDL statement, so it implicitly commits: the current transaction is committed before the truncate runs, and the truncate itself is committed as soon as it completes. You therefore cannot wrap the TRUNCATE and the INSERT INTO...SELECT in a single transaction, which means that if you use TRUNCATE, you have a window when the table is truncated (empty) but the INSERT operation has not completed. This is not what you wanted, but it is what Oracle provides.
You can do a partition exchange. Have two partitions in the staging table: p_OLD and p_NEW.
Before the insert, exchange "new" -> "old" and truncate the "new" partition. (At this point, if you select from the table, you see the old data.)
Insert the data into the "new" partition, then truncate the "old" partition. (At this point you see the new data.)
With this approach your table is never empty to the onlooker.
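A simplified variant of the same idea, in case it helps: load the fresh data into a plain table with the same structure, then swap that table with the partition the readers query. The exchange is a quick data-dictionary operation, so the table never appears empty. All names here, including the database link, are illustrative:

-- Load the new data while readers still see the current partition
insert into tran_records_load
select ... from source_table@link_9i;
commit;

-- Swap the loaded table with the queried partition in one step;
-- afterwards the load table holds the old data and can be truncated
alter table tran_records
  exchange partition p_current with table tran_records_load
  including indexes without validation;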
Why do you need 3 Oracle environments?
I want to have a stored procedure that inserts a record into tableA and updates record(s) in tableB.
The stored procedure will be called from within a trigger.
I want the inserted records in tableA to exist even if the outermost transaction of the trigger is rolled back.
The records in tableA are linearly linked and I must be able to rebuild the linear connection.
Write access to tableA is only ever through the triggers.
How do I go about this?
What you're looking for are autonomous transactions, and these do not exist in SQL Server today. Please vote / comment on the following items:
http://connect.microsoft.com/SQLServer/feedback/details/296870/add-support-for-autonomous-transactions
http://connect.microsoft.com/SQLServer/feedback/details/324569/add-support-for-true-nested-transactions
What you can consider doing is using xp_cmdshell or CLR to go outside the SQL engine to come back in (these actions can't be rolled back by SQL Server)... but these methods aren't without their own issues.
Another idea is to use INSTEAD OF triggers - you can log/update other tables and then just decide not to proceed with the actual action.
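For example, a rough sketch of that idea (the table, column, and trigger names are illustrative, not taken from the question):

CREATE TRIGGER trg_TableB_LogUpdates
ON dbo.TableB
INSTEAD OF UPDATE
AS
BEGIN
    -- This insert happens whether or not we go ahead with the update
    INSERT INTO dbo.TableA (AuditText, AuditDate)
    SELECT 'update attempted on id ' + CAST(i.Id AS varchar(20)), GETDATE()
    FROM inserted AS i;

    -- Apply the real update only if it passes validation; otherwise just skip it,
    -- so no rollback is needed and the audit rows above are kept
    IF NOT EXISTS (SELECT 1 FROM inserted WHERE SomeColumn IS NULL)
    BEGIN
        UPDATE b
           SET b.SomeColumn = i.SomeColumn
          FROM dbo.TableB AS b
          JOIN inserted AS i ON i.Id = b.Id;
    END
END;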
EDIT
And along the lines of @VoodooChild's suggestion, you can use a table variable to temporarily hold data that you can reference after the rollback - this data will survive a rollback, unlike an insert into a #temp table.
See the post Logging messages during a transaction for a (somewhat convoluted but effective) way of achieving what you want: the insert into the logging table is persisted even if the transaction is rolled back. The method Simon proposes has several advantages: it requires no changes to the caller, it is fast and scalable, and it can be used safely from within a trigger. Simon's example is for logging, but the insert can be for anything.
One way is to create a linked server that points to the local server. Stored procedures executed over a linked server won't be rolled back:
EXEC LinkedServer.DbName.dbo.sp_LogInfo 'this won''t be rolled back'
You can call a remote stored procedure from a trigger.
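A rough sketch of setting up such a loopback linked server (the server name is illustrative; the last option exists on SQL Server 2008 and later and stops the remote call from being promoted to a distributed transaction, which is what keeps it outside the caller's transaction):

-- Create a linked server that points back at the local instance
DECLARE @localServer sysname = @@SERVERNAME;
EXEC sp_addlinkedserver
    @server = N'LinkedServer',
    @srvproduct = N'',
    @provider = N'SQLNCLI',
    @datasrc = @localServer;

-- Prevent calls over this linked server from enlisting in the caller's transaction
EXEC sp_serveroption 'LinkedServer', 'remote proc transaction promotion', 'false';

-- Then, from inside the trigger:
EXEC LinkedServer.DbName.dbo.sp_LogInfo 'this won''t be rolled back';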
I have stored procedures in SQL Server T-SQL that are called from .NET within a transaction scope.
Within my stored procedure, I am doing some logging to some auditing tables. I insert a row into the auditing table, and then later on in the transaction fill it up with more information by means of an update.
What I am finding, is that if a few people try the same thing simultaneously, 1 or 2 of them will become transaction deadlock victims. At the moment I am assuming that some kind of locking is occurring when I am inserting into the auditing tables.
I would like to execute the inserts and updates to the auditing tables outside of the transaction I am executing, so that the auditing will occur anyway, even if the transaction rolls back. I was hoping that this might stop any locks occurring, allowing more than one person to execute the procedure at once.
Can anyone help me do this in T-SQL?
Thanks,
Rich
Update: I have since found that the auditing was unrelated to the transaction deadlock, thanks to Josh's suggestion of using SQL Profiler to track down the source of the deadlock.
TransactionScope supports Suppress:
using (TransactionScope scope = new TransactionScope())
{
    // Transactional code...

    // Call a SQL stored procedure (but suppress the transaction)
    using (TransactionScope suppress = new TransactionScope(TransactionScopeOption.Suppress))
    {
        using (SqlConnection conn = new SqlConnection(...))
        {
            conn.Open();
            SqlCommand sqlCommand = conn.CreateCommand();
            sqlCommand.CommandType = CommandType.StoredProcedure;
            sqlCommand.CommandText = "MyStoredProcedure";
            int rows = (int)sqlCommand.ExecuteScalar();
        }
    }

    scope.Complete();
}
But I would have to question why logging/auditing should run outside of the transaction: if the transaction is rolled back, you will still have committed auditing/logging records, and that's probably not what you want.
You haven't provided much information as to how you are logging. Does your audit table have Foreign keys pointing back to your main active tables? If so, remove the foreign keys (assuming the audit records only come from 'known' applications).
You could save your audits to a table variable (which is not affected by transaction rollbacks) and then, at the end of your SP (outside the scope of the transaction), insert the rows into the audit table.
However, it sounds like you are trying to fix the symptoms rather than the problem. You may want to track down the deadlocks and fix them.
I had a similar requirement where I needed to log errors into an error log table, but found that rollbacks were wiping them out.
I solved this by copying the previously inserted error records into a table variable, calling ROLLBACK, and then re-inserting the records into the table.
It works like a charm, but the code is messy because it has to be inline: you can't put the ROLLBACK into a stored procedure, or you'll get a "Transaction count after EXECUTE..." error.
Why are you updating the auditing table? If you were only doing inserts, you might help prevent lock escalations. Also, have you examined the deadlock trace to determine what exactly you were deadlocking on?
You can do this by enabling trace flag 1204 or by running SQL Profiler. This will give you detailed information that will let you know what kind of deadlock it is (locks, threads, parallel, etc.).
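For example, a minimal way to switch the trace flag on globally (requires appropriate server-level permissions):

-- Write detailed deadlock information to the SQL Server error log
DBCC TRACEON (1204, -1);

-- Turn it off again once the deadlock has been captured
DBCC TRACEOFF (1204, -1);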
Check out this article on Detecting and Ending Deadlocks.
One other way to do auditing is to decouple it from the business transaction completely by sending all logging events to a queue at the application tier. This minimizes the impact logging has on your business transaction, but is probably a very large change for an existing application.