Is there a mechanic similar to Oracle PL/SQL's SAVE EXCEPTIONS in Microsoft T-SQL?
Currently I am doing the update using a cursor and it is extremely slow.
The description of SAVE EXCEPTIONS from Oracle's site:
SAVE EXCEPTIONS allows an UPDATE, INSERT, or DELETE statement to continue executing after it issues an exception. When the statement finishes, an error is issued to signal that at least one exception occurred. Exceptions are collected into an array that you can examine using %BULK_EXCEPTIONS after the statement has executed.
Link to the SAVE EXCEPTIONS definition:
http://download.oracle.com/docs/cd/E11882_01/timesten.112/e13076/sqlexamples.htm#TTPLS364
If you are importing a large number of records, use an SSIS package and send the failed rows to an exception table. If you can't use SSIS for some reason, consider cleaning your data before trying to insert it, so that you have no failed rows. For instance, delete any records that have a null where you are required to have a value, null out bad dates, etc.
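For example, a minimal pre-cleaning sketch against a hypothetical staging table (the table and column names below are placeholders, not from the question, and the date column is assumed to be stored as a string):

/* Drop rows that are missing a required value. */
DELETE FROM dbo.StagingTable
WHERE RequiredColumn IS NULL;

/* Null out dates that will not convert cleanly. */
UPDATE dbo.StagingTable
SET BadDateColumn = NULL
WHERE BadDateColumn IS NOT NULL
  AND TRY_CONVERT(date, BadDateColumn) IS NULL;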
If you are coming from Oracle, you need to stop using cursors and use set-based logic instead. SQL Server does not perform well with cursors.
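To illustrate the set-based approach, here is a rough sketch of what a row-by-row cursor update typically collapses into (again, table and column names are placeholders):

UPDATE t
SET    t.TargetColumn = s.SourceColumn
FROM   dbo.TargetTable AS t
JOIN   dbo.SourceTable AS s
    ON s.KeyColumn = t.KeyColumn
WHERE  t.TargetColumn <> s.SourceColumn; -- only touch rows that actually change (extend for NULLs if needed)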
I think the closest you could come to simulating this behavior would be to disable/enable (with check) the constraints. The downside with this approach is that the bad data is now in your table and you can't enable the constraints until it's cleaned up. You'd need to decide if this is an acceptable risk in your particular case.
ALTER TABLE YourTable NOCHECK CONSTRAINT ALL
/* Perform your DML operations */
ALTER TABLE YourTable WITH CHECK CHECK CONSTRAINT ALL
/* Deal with any errors that are thrown:
'The ALTER TABLE statement conflicted with the CHECK constraint ...'
clean up the bad data then enable constraints again */
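If it helps, here is one hedged way to structure that error handling; DBCC CHECKCONSTRAINTS is a standard command that reports the rows violating each constraint (YourTable is still a placeholder):

BEGIN TRY
    ALTER TABLE YourTable WITH CHECK CHECK CONSTRAINT ALL;
END TRY
BEGIN CATCH
    PRINT ERROR_MESSAGE();
    /* List the rows that violate each constraint, including ones still disabled. */
    DBCC CHECKCONSTRAINTS ('YourTable') WITH ALL_CONSTRAINTS;
    /* Clean up the reported rows, then run the ALTER TABLE ... WITH CHECK again. */
END CATCH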
Not sure exactly what kind of exceptions you are expecting. Some more detail along this line might be helpful.
I don't believe there is anything equivalent in MS SQL to what you are describing. A few ideas to do something somewhat similar:
You can use a TRY ... CATCH in SQL, but that's going to fail the whole batch if something goes wrong, not just the problematic rows.
An SSIS bulk insert task can be configured to have a separate path for "failed" rows, which you can then treat however you want.
If you are talking about unique index duplicates (insert all these rows, and if any are dups then just ignore them, but don't fail the whole batch), then you can declare the unique index with the IGNORE_DUP_KEY option (see this SO question)
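For instance, a minimal IGNORE_DUP_KEY sketch (the table, column, and index names are illustrative only):

CREATE UNIQUE INDEX IX_YourTable_KeyColumn
    ON dbo.YourTable (KeyColumn)
    WITH (IGNORE_DUP_KEY = ON);

/* Duplicate keys are now skipped with a warning ("Duplicate key was ignored.")
   instead of failing the whole INSERT. */
INSERT INTO dbo.YourTable (KeyColumn)
SELECT KeyColumn FROM dbo.StagingTable;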
Anything further, you'd probably need to be more explicit about what kinds of errors you imagine encountering.
The code below results in an error when a table [mytable] exists and has a columnstore index, even though the table will have been dropped and recreated without the columnstore index before page compression is applied to it.
drop table if exists [mytable]
select top 0
*
into [mytable]
from [myexternaltable]
alter table [mytable] rebuild partition = all with (data_compression = page)
The error thrown:
This is not a valid data compression setting for a columnstore index. Please choose COLUMNSTORE or COLUMNSTORE_ARCHIVE compression.
At this point [mytable] has not been dropped, so SQL Server has apparently not even started executing the code.
The code runs just fine when I run the drop table statement first and the rest of the code afterwards. SQL Server seemingly stops with an error prematurely when, at the start of a batch, it detects an inconsistency with an existing table, even though that inconsistency will not necessarily persist. Yet it is perfectly happy with table [mytable] not existing at all, whereas a table not existing can hardly be seen as consistent with applying compression to it. SQL Server's consistency checking does not look particularly consistent itself.
I recall having had similar issues with references to columns that did not exist yet but were to be created by the code, if only SQL Server would allow the code to run instead of terminating on a wrongly predicted error.
What would be the most straightforward solution to this issue? I would not mind suppressing the error altogether - if possible - since it is obviously wrong.
I am trying to avoid work-arounds such as running the code as 2 separate batches, putting part of the code in an EXEC statement, or trying and catching the error. The code is being used in hundreds of stored procedures, so the simpler the solution the better.
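For reference only, this is what the dynamic-SQL work-around mentioned above would look like; wrapping the ALTER TABLE in EXEC defers its compilation until the table has actually been recreated, so the columnstore check no longer fires at batch compile time (whether touching hundreds of procedures this way is acceptable is exactly the concern here):

drop table if exists [mytable]
select top 0
*
into [mytable]
from [myexternaltable]
exec ('alter table [mytable] rebuild partition = all with (data_compression = page)')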
I am extracting data from a business system supplied by a third party to use in reporting. I am using a single SELECT statement issued from an SSIS data flow task source component that joins across multiple tables in the source system to create the dataset I want. We are using the default read-committed isolation level.
To my surprise I regularly find this extraction query is deadlocking and being selected as the victim. I didn't think a SELECT in a read-committed transaction could do this, but according to this SO answer it is possible: Can a readcommitted isolation level ever result in a deadlock (Sql Server)?
Through the use of the trace flags 1204 and 1222 I've identified the conflicting statement, and the object and index in question. Essentially, the contention is over a data page in the primary key of one of the tables. I need to extract from this table using a join on its key (so I'm taking out an S lock), while the conflicting statement is performing an INSERT and requesting an IX lock on the index data page.
(Side note: the above SO talks about this issue occurring with non-clustered indexes, but this appears to be occurring in the clustered PK. At least, that is what I believe based on my interpretation of the deadlock information in the event log and the "associatedObjectId" property.)
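For anyone reproducing this, the deadlock trace flags mentioned above can be enabled globally like so (a standard command, shown here only as a sketch):

DBCC TRACEON (1204, 1222, -1); -- write deadlock details to the SQL Server error log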
Here are my constraints:
The conflicting statement is in an encrypted stored procedure supplied by a third party as part of off-the-shelf software. There is no possibility of getting the plaintext code or having it changed.
I don't want to use dirty-reads as I need my extracted data to maintain its integrity.
It's not clear to me how or if restructuring my extract query could prevent this. The lock is on the PK of the table I'm most interested in, and I can't see any alternatives to using the PK.
I don't mind my extract query being the victim as I prefer this over interrupting the operational use of the source system. However, this does cause the SSIS execution to fail, so if it must be this way I'd like a cleaner, more graceful way to handle this situation.
Can anyone suggest ways to, preferably, prevent the deadlock, or if not, then to handle the error better?
My assumption here is that you are attempting to INSERT into the same table that you are SELECTing from. If not, then a screenshot of the data flow tab would be helpful in determining the problem. If so, then you're in luck - I have had this problem before.
Add a sort to the data flow, as this is a fully blocking transformation (see the reference below regarding blocking transformations). What this means is that the SELECT will be required to finish loading all of its data into the pipeline buffer before any data is allowed to pass down to the destination. Otherwise, SSIS is attempting to INSERT data while there is still a lock on the table/index. You might be able to get creative with your indexing strategies here (I have not tried this), but a fully blocking transformation will do the trick and eliminates the need for any additional indexes on the table (and the overhead that entails).
Note: never use NOLOCK query hints when selecting data from a table as an attempt to get around this. I have never tried this nor do I intend to. You (the royal you) run the risk of ingesting uncommitted data into your ETL.
Reference:
https://jorgklein.com/2008/02/28/ssis-non-blocking-semi-blocking-and-fully-blocking-components/
I am adding versioning to my database a bit later than I should have, and as such I have some tables in inconsistent states. A column was added to one table from the Java side, but not every copy of that table is guaranteed to have the column at this point.
What I had been doing is, on the first run of the program, checking whether the column existed and adding it if it did not.
The library (flyway.org) I am using to deal with versioning takes in a bunch of .sql files in order to set up the database. For many tables, this is simple, I just have an sql file that has "CREATE TABLE IF NOT EXISTS XXX," which means it is easily handled, those can still be run.
I am wondering whether there is some way I haven't thought of to handle these ALTER TABLE statements without SQLite generating an error, or whether I simply haven't found out how to do it.
I've looked for a command to add a column only if it doesn't exist, but there doesn't seem to be one. I've also tried to find a way to handle errors in SQLite, for example running the ALTER TABLE anyway and just ignoring the error, but there doesn't seem to be a way of doing that (as far as I can tell). Does anyone have any suggestions? I want a solution 100% in a .sql script if possible.
There is no "IF NOT EXIST" clause for Alter Tables in SQLite, it doesn't exist.
There is a way to interrogate the database on what columns a table contains with PRAGMA table_info(table_name);. But there is no 100% SQL way to take that information and apply it to an Alter Table statement.
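For illustration, this is roughly what that looks like; the check for the column still has to happen outside pure SQL, and the table/column names below are placeholders:

-- Returns one row per column: cid, name, type, notnull, dflt_value, pk
PRAGMA table_info(mytable);

-- If the column is absent, the application or migration tool must then issue:
ALTER TABLE mytable ADD COLUMN new_col TEXT;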
Maybe one day, but not today.
I have an INSERT trigger on one of my tables that issues a THROW when it finds a duplicate. The problem is that my transaction seems to be implicitly rolled back at that point - I want to control when transactions are rolled back.
The issue can be re-created with this script:
CREATE TABLE xTable (
id int identity not null
)
go
create trigger xTrigger on xTable after insert as
print 'inserting...';
throw 1600000, 'blah', 1
go
begin tran
insert into xTable default values
rollback tran
go
drop table xTable
If you then run the rollback tran, it will tell you there is no corresponding begin tran.
If I swap the THROW for a 'normal' exception (like SELECT 1/0), the transaction is not rolled back.
I have checked the XACT_ABORT setting, and it is OFF.
Using SQL Server 2012 and testing through SSMS
Any help appreciated, thanks.
EDIT
After reading the articles posted by @Dan Guzman, I came to the following conclusions/summary...
SQL Server automatically sets XACT_ABORT ON in triggers.
My example (above) does not illustrate my situation - In reality I'm creating an extended constraint using a trigger.
My use case was contrived: I was trying to test multiple situations in the SAME unit test (not a real-world situation, and NOT good unit-test practice).
My handling of the extended constraint check and throwing an error in the trigger is correct; however, there is no real situation in which I would not want to roll back the transaction.
It can be useful to SET XACT_ABORT OFF inside a trigger for a particular case; but your transaction will still be undermined by general batch-aborting errors (like deadlocks).
Historical reasons aside, I don't agree with SQL Server's handling of this; just because there is currently no situation in which you'd want to continue the transaction does not mean such a situation can never arise.
I'd like to be able to set up SQL Server to maintain the integrity of transactions when your chosen architecture is to have transactions strictly managed at their origin, i.e. "he alone who starts the transaction must finish it" - the usual fail-safes aside, e.g. if your code is never reached due to system failure etc.
THROW will terminate the batch when outside the scope of TRY/CATCH (https://msdn.microsoft.com/en-us/library/ee677615.aspx). The implication here is that no further processing of the batch takes place, including the statements following the insert. You'll need to either surround your INSERT with a TRY/CATCH or use RAISERROR instead of THROW.
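A minimal sketch of the first option, using the xTable/xTrigger from the question; note that inside a trigger XACT_ABORT is ON, so by the time the CATCH block runs the transaction is doomed and can only be rolled back:

begin tran
begin try
    insert into xTable default values
end try
begin catch
    print error_message();
    -- XACT_STATE() = -1 means the transaction is uncommittable (doomed).
    if xact_state() = -1
        rollback tran;
end catch
if @@trancount > 0
    rollback tran;   -- mirrors the rollback in the original script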
T-SQL error handling is a rather large and complex topic. I suggest you peruse the series of error-handling articles by Erland Sommarskog: http://www.sommarskog.se/error_handling/Part1.html. Most relevant here is the topic Can I Prevent the Trigger from Rolling Back the Transaction? http://www.sommarskog.se/error_handling/Part3.html#Triggers. The takeaway, from a best-practices point of view, is that a trigger is not the right solution if you want to enforce business rules without rolling back the transaction.
I am debugging my database and I am finding that the replication is failing on a delete statement.
I look at the source and destination tables and they are the same. So someone deleted a row from the source and then put it back in. (The delete fails because of a FK reference to some manual data that I don't want to cascade delete.)
Is there a way to find out the PK of the row it is trying to delete?
(All Replication monitor will tell me is the name of the FK that is causing the delete statement to fail.)
There are a couple of ways. I'll tell you the easy one (because I'm lazy). Put a trace on your subscriber for non-successful stored procedure executions. You should get a hit for one called something like sp_MSdel_table (where table is the name of your table). The argument(s) to that procedure will be the primary key of the record that it's trying to delete.
Easy way number two is to modify the sproc identified in the previous method not to be angry at a missing row (after all, it's just going to delete it, so the fact that it's now missing isn't that big a deal). You might have other non-convergence issues, but at least you can get your commands flowing again. (EDIT: Just noticed the reason for your issue. I'd advise not having FK constraints at the subscriber, since any referential integrity should be taken care of at the publisher. It'll also make your replication faster when SQL doesn't have to check the constraint each time it applies an insert, update, or delete.)
Hard way number one involves looking at the error in Replication Monitor and noting that there's a transaction id and sequence number specified. You then use a sproc in the distribution database to get the text of the command being executed.
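If memory serves, the procedure usually used for this is sp_browsereplcmds in the distribution database; a rough sketch, where the sequence numbers are placeholders you'd copy from the Replication Monitor error details:

USE distribution;
EXEC sp_browsereplcmds
    @xact_seqno_start = '0x0000002A000000B7000400000000',
    @xact_seqno_end   = '0x0000002A000000B7000400000000';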
Hard way number two involves diffing the tables, either with tablediff.exe, something like RedGate's SQLCompare, or a roll-your-own join over linked servers to show the differences. Keep this in your back pocket just in case one of the other one-row-at-a-time methods mentioned above doesn't do it for you. My threshold for such things is about three. YMMV.