I have a BULK INSERT inside a TRY...CATCH block:
BEGIN TRY
    BULK INSERT dbo.EQUIP_STATUS_CODE
    FROM 'filepath\filename.csv'
    WITH (MAXERRORS = 1, FIELDTERMINATOR = ',');
END TRY
BEGIN CATCH
    EXECUTE dbo.ERROR_LOG_CSV;
END CATCH
I would like to be able to capture the following error when it occurs:
Bulk load data conversion error (truncation)
But it seems that I can't, even though the error's severity level is 16, which falls within the range that TRY...CATCH can handle. I was wondering if there is a way to capture this error when it occurs.
Before I set MAXERRORS to 1, I got this error:
Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
Since the former error is much more descriptive of the problem, that is the one I'd like to record.
Though my expertise is more Oracle than SQL Server, I'll try to help with this issue. I discovered that your situation is already in the SQL Server bug tracker (bug id: 592960) with status "Won't Fix" since 2010. You can see the corresponding discussion on connect.microsoft.com yourself (at the moment the host is unreachable, so I used the Google cache).
Alexander has given you the answer, but you have to read the bug log very carefully and consider what might be going on (SQL Server bug id: 592960).
Are you trying to bulk insert directly from a data file into a data table?
From the article, there is a data type mismatch or truncation, and the SQL engine has a bug that keeps it from reporting this as a catchable error.
Quote from first person reporting the bug - "Inspite of the severity level being 16 I don't see the error being caught by TRY / CATCH construct. The code doesn't break and proceeds smoothly as if no error has occurred."
Have you investigated which fields may contain bad data?
Here are some suggestions.
1 - COMMA DELIMITED FILES ARE PROBLEMATIC - I always hate the comma-delimited format since commas can appear in the data stream. Try using a character such as a tilde (~) as the delimiter, which occurs far less often. Could the problem be that a text field has a comma in it, thereby adding an extra field to the row?
2 - USE A STAGING TABLE - It is sometimes better to import the data from the file into a staging table whose columns are all defined as varchar(x). This lets the data get into a table; see the sketch after this list.
Then write a stored procedure to validate the data in the columns before transferring it to the production table. Mark any bad rows as suspect.
Insert the data from the staging table into production, leaving behind any bad rows.
Send an email for someone to look at the bad data. If this is a recurring data file transfer, you will want to fix it at the source.
3 - REWRITE THE PROCESS WITH AN ETL TOOL - Skip writing this stuff in the engine. SQL Server Integration Services (SSIS) is a great Extract, Transform, Load (ETL) tool.
There are options on the connection where you can state that text is quoted (""), which eliminates the extra-comma issue above. You can send rows that fail to import into the production table to a hospital table for review.
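For suggestion 2, here is a minimal sketch of the staging-table idea, assuming a tilde-delimited export; the column names, lengths, and file path are placeholders, so adjust them to the real layout of EQUIP_STATUS_CODE.
CREATE TABLE dbo.STAGE_EQUIP_STATUS_CODE
(
    -- every column is varchar so the load itself cannot fail on conversion or truncation
    status_code varchar(255) NULL,
    status_desc varchar(255) NULL
);

BULK INSERT dbo.STAGE_EQUIP_STATUS_CODE
FROM 'filepath\filename.txt'                      -- placeholder path, tilde-delimited file
WITH (FIELDTERMINATOR = '~', ROWTERMINATOR = '\n');

-- move only the rows that fit the (assumed) production column sizes ...
INSERT INTO dbo.EQUIP_STATUS_CODE (status_code, status_desc)
SELECT status_code, status_desc
FROM dbo.STAGE_EQUIP_STATUS_CODE
WHERE LEN(status_code) <= 10 AND LEN(status_desc) <= 100;

-- ... and report the suspect rows for someone to review
SELECT status_code, status_desc
FROM dbo.STAGE_EQUIP_STATUS_CODE
WHERE LEN(status_code) > 10 OR LEN(status_desc) > 100;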
In summary, there is a bug in the engine.
However, I would definitely consider changing to a tilde-delimited file and/or using a staging table. Better yet, if you have the time, rewrite the process with an SSIS package!
Sincerely
J
PS: I am giving Alexander points since he did find the bug on SQL Connect. However, I think the format of the file is the root cause.
This will probably catch this error because it catches error Msg 4860:
Q:
TRY doesn't CATCH error in BULK INSERT
BEGIN TRY
    DECLARE @cmd varchar(1000)
    SET @cmd = 'BULK INSERT [dbo].[tblABC]
                FROM ''C:\temp.txt''
                WITH (DATAFILETYPE = ''widechar'', FIELDTERMINATOR = '';'', ROWTERMINATOR = ''\n'')'
    EXECUTE (@cmd)
END TRY
BEGIN CATCH
    SELECT ERROR_MESSAGE()
END CATCH
When I try to create a #temp table, I get this:
A severe error occurred on the current command. The results, if any, should be discarded.
Any ideas how to solve this error? Thank you!
In one of my update queries, I had the same problem. I realized that the problem was a memory leak.
Restarting the MSSQL service flushed the tempdb resources and freed a large amount of memory, which solved the problem.
SQL SERVER – Msg 0, Level 11 – A Severe Error Occurred on the Current Command. The results, if Any, Should be Discarded
CHECKLIST for this error
First and foremost, check database consistency. This can be done by running the command below in SQL Server:
DBCC CHECKDB('database_name');
If you have nailed it down to a table, then check table consistency. We can execute the command below:
DBCC CHECKTABLE('table_name');
Check the LOG folder, which contains ERRORLOG, and look for any file named 'SQLDump*' from the same time the error was reported. If you find any, you can either contact Microsoft or use the self-service route by getting the dump analyzed using the diagnostic preview.
If you are getting this while using an extended stored procedure, then you need to debug by running the exact pieces of code one by one. Here is one such error which had none of the three causes above.
In case you want to see the error yourself, feel free to use the code below:
create table #ErrorLog (column1 char(8000))
go
insert into #ErrorLog
exec xp_readerrorlog 'SQL Error'
drop table #ErrorLog
Reference: Pinal Dave (https://blog.sqlauthority.com)
When I run the following query, I get 1 row:
SELECT * FROM servers WHERE Node='abc_deeh32q6610007'
However, when I run the following query, 0 rows are selected:
SELECT * FROM servers WHERE Node LIKE '%_deeh32q6610007'
I thought it might be because of the _, but I see the same pattern when I use the following queries:
SELECT * FROM alerts WHERE TicketNumber like '%979415' --> returns 0 rows
SELECT * FROM alerts WHERE TicketNumber='IN979415' --> returns 1 row
I am using Sybase DB.
This kind of error should not appear in a healthy database.
First check whether the characters are correct and you are using the correct % character code. Write a script in plain text and check it with isql using the -i option, run directly from the command line.
If that doesn't help and your problem persists, then you probably have some problems with the physical structures of the database:
Check that you have properly configured the sort order in the database: you can reload the character set order using the charset tool.
Check that you have no errors in the database structure: run dbcc checkdb and dbcc checkalloc to verify there are no physical errors in the data.
Check whether there are any errors in the database error log. All physical errors observed by the database should be logged there.
If that doesn't help, try to reproduce the same problem in another table with a copy of the data, then on another server with the same configuration. Try to narrow down the problem.
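For reference, a rough sketch of the consistency checks mentioned above, run from isql; the database name mydb is a placeholder and you need the appropriate roles.
dbcc checkdb(mydb)      -- runs checktable against every table in the database
go
dbcc checkalloc(mydb)   -- checks allocation structures and page linkage
go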
About 5 times a year one of our most critical tables has a specific column where all the values are replaced with NULL. We have run log explorers against this and we cannot see any login/hostname populated with the update, we can just see that the records were changed. We have searched all of our sprocs, functions, etc. for any update statement that touches this table on all databases on our server. The table does have a foreign key constraint on this column. It is an integer value that is established during an update, but the update is identity key specific. There is also an index on this field. Any suggestions on what could be causing this outside of a t-sql update statement?
I would start by denying any client-side dynamic SQL if at all possible. It is much easier to audit stored procedures to make sure they execute the correct SQL, including a proper WHERE clause. Unless your SQL Server is terribly broken, the only way data gets updated is because of the SQL you are running against it.
All stored procs, scripts, etc. should be audited before being allowed to run.
If you don't have the mojo to enforce no dynamic client SQL, add application logging that captures each client SQL statement before it is executed. Personally, I would have the logging routine throw an exception (after logging it) when a WHERE clause is missing, but at a minimum you should be able to figure out where the data gets blown out next time by reviewing the log. Make sure your log captures enough information that you can trace it back to the exact source: assign a unique "name" to each possible dynamic SQL statement executed, e.g., a 3-character code for each program plus a call number 1..nn within that program, so you can tell which call ("abc123") blew up your data as well as the exact SQL that was defective. A sketch of this idea follows below.
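To make the logging idea concrete, here is a hedged sketch; the table, procedure, and column names are all placeholders, and your application would call the procedure before executing any dynamic SQL.
CREATE TABLE dbo.ClientSqlLog
(
    LogId    int IDENTITY(1, 1) PRIMARY KEY,
    CallName char(6)        NOT NULL,              -- program code + call number, e.g. 'abc123'
    SqlText  nvarchar(4000) NOT NULL,
    LoggedAt datetime       NOT NULL DEFAULT GETDATE()
);
GO
CREATE PROCEDURE dbo.LogClientSql
    @CallName char(6),
    @SqlText  nvarchar(4000)
AS
BEGIN
    -- log first, then reject anything without a WHERE clause (a crude check, per the suggestion above)
    INSERT INTO dbo.ClientSqlLog (CallName, SqlText) VALUES (@CallName, @SqlText);
    IF @SqlText NOT LIKE '%WHERE%'
        RAISERROR('Dynamic SQL without a WHERE clause is not allowed.', 16, 1);
END
GO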
ADDED COMMENT
Thought of this later. You might be able to add or modify the update trigger on the table to look at the number of rows updated and prevent the update if the count exceeds a threshold that makes sense for you. I did a little searching and found that someone has already written an article on this, as in this snippet:
CREATE TRIGGER [Purchasing].[uPreventWholeUpdate]
ON [Purchasing].[VendorContact]
FOR UPDATE AS
BEGIN
    DECLARE @Count int
    SET @Count = @@ROWCOUNT;

    IF @Count >= (SELECT SUM(row_count)
                  FROM sys.dm_db_partition_stats
                  WHERE OBJECT_ID = OBJECT_ID('Purchasing.VendorContact')
                    AND index_id = 1)
    BEGIN
        RAISERROR('Cannot update all rows', 16, 1)
        ROLLBACK TRANSACTION
        RETURN;
    END
END
Though this is not really the right fix, if you log this appropriately, I bet you can figure out what tried to screw up your data and fix it.
Best of luck
A transaction log explorer should be able to show who executed the command, when, and exactly what the command looked like.
Which log explorer do you use? If you are using ApexSQL Log, you need to enable the connection monitoring feature in order to capture additional login details.
This might be like using a sledgehammer to drive in a thumb tack, but have you considered using SQL Server Auditing (provided you are using SQL Server Enterprise 2008 or greater)?
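If you go that route, a minimal sketch might look like the following; the audit names, file path, database, and table are placeholders, and this requires Enterprise Edition on SQL Server 2008/2008 R2.
USE master;
GO
CREATE SERVER AUDIT Audit_CriticalTable
    TO FILE (FILEPATH = N'C:\AuditLogs\');                     -- placeholder path
GO
ALTER SERVER AUDIT Audit_CriticalTable WITH (STATE = ON);
GO
USE YourDatabase;                                              -- placeholder database
GO
CREATE DATABASE AUDIT SPECIFICATION Audit_CriticalTable_Updates
    FOR SERVER AUDIT Audit_CriticalTable
    ADD (UPDATE ON OBJECT::dbo.YourCriticalTable BY public)    -- placeholder table
    WITH (STATE = ON);
GO
The audited events can then be read back with sys.fn_get_audit_file against the files in that folder.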
I'm using SQL Server 2008 R2, trying to reverse-engineer an opaque application and duplicate some of its operations, so that I can automate some massive data loads.
I figured it should be easy to do -- just go into SQL Server Profiler, start a trace, do the GUI operation, and look at the results of the trace. My problem is that the filters aren't working as I'd expect. In particular, the "Writes" column often shows "0", even on statements that are clearly making changes to the database, such as INSERT queries. This makes it impossible to set a Writes >= 1 filter, as I'd like to do.
I have verified that this is exactly what's happening by setting up an all-inclusive trace, and running the app. I have checked the table beforehand, run the operation, and checked the table afterward, and it's definitely making a change to the table. I've looked through the trace, and there's not a single line that shows any non-zero number in the "Writes" column, including the line showing the INSERT query. The query is nothing special... Just something like
exec sp_executesql
    N'INSERT INTO my_table([a], [b], [c])
      values(@newA, @newB, @newC)',
    N'@newA int, @newB int, @newC int',
    @newA = 1, @newB = 2, @newC = 3
(if there's an error in the above, it's my typo here -- the statement is definitely inserting a record in the table)
I'm sure the key to this behavior is in the description of the "Writes" column: "Number of physical disk writes performed by the server on behalf of the event." Perhaps the server is caching the write, and it happens outside of the Profiler's purview. I don't know, and perhaps it's not important.
Is there a way to reliably find and log all statements that change the database?
Have you tried a server-side trace? It also works to document reads and writes, which - if I'm reading you correctly - is exactly what you want to do for the writes. A rough sketch follows below.
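Here is a rough sketch of such a trace; the file path is a placeholder, and the event and column IDs (12 = SQL:BatchCompleted, 10 = RPC:Completed, 1 = TextData, 11 = LoginName, 14 = StartTime) are the standard ones but worth double-checking against the sp_trace_setevent documentation.
DECLARE @traceid int, @maxsize bigint, @on bit;
SET @maxsize = 50;        -- maximum trace file size in MB
SET @on = 1;

EXEC sp_trace_create @traceid OUTPUT, 0, N'C:\Traces\capture_writes', @maxsize, NULL;

EXEC sp_trace_setevent @traceid, 12, 1,  @on;   -- SQL:BatchCompleted, TextData
EXEC sp_trace_setevent @traceid, 12, 11, @on;   -- SQL:BatchCompleted, LoginName
EXEC sp_trace_setevent @traceid, 12, 14, @on;   -- SQL:BatchCompleted, StartTime
EXEC sp_trace_setevent @traceid, 10, 1,  @on;   -- RPC:Completed, TextData
EXEC sp_trace_setevent @traceid, 10, 11, @on;   -- RPC:Completed, LoginName
EXEC sp_trace_setevent @traceid, 10, 14, @on;   -- RPC:Completed, StartTime

EXEC sp_trace_setstatus @traceid, 1;            -- start the trace

-- Run the GUI operation, then stop, close, and read the trace:
-- EXEC sp_trace_setstatus @traceid, 0;
-- EXEC sp_trace_setstatus @traceid, 2;
-- SELECT TextData, LoginName, StartTime
-- FROM fn_trace_gettable(N'C:\Traces\capture_writes.trc', DEFAULT);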
This is a sql 2000 database that I am working with.
I have what I call a staging table that is a raw dump of the data, so everything is ntext or nvarchar(255).
I need to cast/convert all of this data into the appropriate data types (ie int, decimal, nvarchar, etc.)
The way I was going to do this was to iterate through all records using a WHILE loop and attempt a CAST on each column of a single record during each iteration; after I visit a particular record, I flag it as processed (bit field).
But how can I log the error when/if it occurs, yet allow the while loop to continue?
At first I implemented this using a TRY CATCH in a local SQL 2005 instance (to get the project going) and all was working well, but I learned today that the dev and production databases that the international DBAs have set up are SQL 2000 instances, so I have to conform.
EDIT: I am using a SSIS package to populate the staging table. I see that now I must revisit that package and implement a script component to handle the conversions. Thanks guys
EDIT: I am doing this on a record-by-record basis, not a batch insert, so the transaction idea seems like it would be feasible, but I'm not sure how to trap @@ERROR and allow the stored procedure to continue.
EDIT: I really like Guy's approach, I am going to implement it this way.
Generally I don't like "loop through the records" solutions, as they tend to be slow and you end up writing a lot of custom code.
So...
Depending on how many records are in your staging table, you could post process the data with a series of SQL statements that test the columns for correctness and mark any records that fail the test.
i.e.
UPDATE staging_table
SET status_code = 'FAIL_TEST_1'
WHERE status_code IS NULL
AND ISDATE(ntext_column1) = 0;
UPDATE staging_table
SET status_code = 'FAIL_TEST_2'
WHERE status_code IS NULL
AND ISNUMERIC(ntext_column2) = 0;
etc...
Finally
INSERT INTO results_table ( mydate, myprice )
SELECT ntext_column1 AS mydate, ntext_column2 AS myprice
FROM staging_table
WHERE status_code IS NULL;
DELETE FROM staging_table
WHERE status_code IS NULL;
And the staging table retains all the error rows, which you can export and report on.
What are you using to import the file? DTS has scripting abilities that can be used for data validation. If you're not using DTS, are you using a custom tool? If so, do your validation there.
But I think this is what you're looking for:
http://www.sqlteam.com/article/using-dts-to-automate-a-data-import-process
IF @@ERROR <> 0
    GOTO LABEL
In SSIS the "red line" from a data import task can redirect bad rows to a separate destination or transform. I haven't played with it in a while but hope it helps.
Run each cast in a transaction; after each cast, check @@ERROR, and if it's clear, commit and move on. For example:
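A rough sketch of that pattern on SQL 2000, with placeholder table and column names (@current_id comes from the surrounding WHILE loop); note the caveat below that a hard conversion error can still abort the whole batch.
BEGIN TRAN
UPDATE dbo.staging_table
SET    converted_value = CAST(raw_column AS int)
WHERE  record_id = @current_id

IF @@ERROR <> 0
BEGIN
    -- the cast failed: undo and mark the row as suspect
    ROLLBACK TRAN
    UPDATE dbo.staging_table SET status_code = 'CAST_FAILED' WHERE record_id = @current_id
END
ELSE
BEGIN
    -- the cast succeeded: flag the row as processed and keep the change
    UPDATE dbo.staging_table SET processed = 1 WHERE record_id = @current_id
    COMMIT TRAN
END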
It looks like you are doomed. See this document.
TL;DR: A data conversion error always causes the whole batch to be aborted - your SQL script will not continue to execute no matter what you do. Transactions won't help. You can't check @@ERROR because execution will already have been aborted.
I would first reexamine why you need a staging database full of varchar(255) columns - can whatever fills that database do the conversion?
If not, I guess you'll need to write a program/script to select from the varchar columns, convert, and insert into the prod db.
You could try checking the data type before casting and actually avoid throwing errors, as in the sketch below.
You could use functions like:
ISNUMERIC - to check if the data is of a numeric type
ISDATE - to check if it can be cast to DATETIME
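Here is a hedged sketch using the placeholder column names from the earlier answer; it assumes the staging columns are nvarchar(255) (an ntext column would need a CAST to nvarchar first), and note that ISNUMERIC still lets a few oddities such as '$' or '1e5' through, so treat it as a starting point rather than a bulletproof check.
INSERT INTO results_table (mydate, myprice)
SELECT
    -- rows that fail the checks come through as NULL instead of raising a conversion error
    CASE WHEN ISDATE(ntext_column1)    = 1 THEN CAST(ntext_column1 AS datetime)       ELSE NULL END,
    CASE WHEN ISNUMERIC(ntext_column2) = 1 THEN CAST(ntext_column2 AS decimal(18, 2)) ELSE NULL END
FROM staging_table;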