VBA SQL bulk insert exception handling - sql-server

I have to insert CSV records into SQL Server batch by batch. In the macro, I am framing each SQL query based on the record's foreign key, so I cannot use BULK INSERT here.
I have written code that iterates over 100 rows and builds a string like the one below:
Insert into table (name, address, region) values ('Test','xxx-test-address',1);
Insert into table (name, address, region) values ('Test','xxx-test-address',1);
Insert into table (name, address, region) values ('Test','xxx-test-address',1);
Insert into table (name, address, region) values ('Test','xxx-test-address',1);
Now I am executing these 100 INSERT statements against SQL Server as one batch. Let's say an error is thrown at the 50th row; then the remaining 50 rows are not executed and the error is raised back in VBA.
My question is: how do I find which row is throwing the error?
If this cannot be achieved with this approach, please let me know an approach that can. Since the CSV will contain 10,000 or more records, I cannot execute the inserts one at a time in a loop; that would hit the database too many times.
Thanks in advance!
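One possible direction (a rough sketch, not a tested solution): have the macro wrap each generated INSERT in its own TRY/CATCH and record the CSV row number on failure, so one bad row does not stop the rest of the batch and VBA can read the failures back afterwards. The table name, column names and the #failed_rows temp table below are placeholders taken from the question or invented for illustration.

-- Sketch of the batch text the macro would build; [table] and the columns are placeholders.
IF OBJECT_ID('tempdb..#failed_rows') IS NOT NULL DROP TABLE #failed_rows;
CREATE TABLE #failed_rows (row_no INT, error_message NVARCHAR(4000));

BEGIN TRY
    INSERT INTO [table] (name, address, region) VALUES ('Test', 'xxx-test-address', 1);
END TRY
BEGIN CATCH
    INSERT INTO #failed_rows VALUES (1, ERROR_MESSAGE());  -- row number from the CSV
END CATCH;
-- ...the macro repeats the TRY/CATCH wrapper for rows 2..100...

SELECT row_no, error_message FROM #failed_rows;  -- read this result set back in VBA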

Related

Inserting data into SQL table from SAS dataset isn't working as expected

I am attempting to insert a SAS dataset into an existing table on a SQL Server. This is via a simple proc sql statement.
proc sql;
insert into Repo.Test_Table
select * from Work.MetaTable;
quit;
One of the fields, [Method], is not being inserted into the SQL table as expected. The [Method] field in the SAS table contains several brackets and other punctuation, so I think this is causing the problem. For example, Work.MetaTable looks like this:
Field_ID  Method
1         ([Field_1]<=[Field_8])
2         ([Field_4]=[Field_5])
When I run the proc sql to insert this into SQL, it only inserts the first open bracket "(" and this is the case for every row. For example, those two rows look like this in the SQL table:
Field_ID  Method
1         (
2         (
The [Method] field in SQL is nvarchar(max).
Does anyone know what might be causing the issue here and how I can get around it?
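One hedged first step (assuming Repo.Test_Table maps to a physical SQL Server table of the same name) is to check what was actually stored in the target column, to confirm whether the value is truncated in the table itself or only in the way it is being displayed:

-- Check the stored values and their lengths on the SQL Server side.
-- LEN() > 1 would mean the data arrived intact and only the display is clipped.
SELECT Field_ID, Method, LEN(Method) AS method_len
FROM Repo.Test_Table;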

continue insert statement even though exception is generated

I am trying to execute the query below, and during execution a constraint violation exception is generated which terminates the INSERT statement.
What I want is this: if, say, 9 out of 10 records are clean, then those 9 should be inserted. Right now the statement is terminated and no rows are inserted at all.
I am using SQL Server 2012. I do not want to roll back the transaction, there is no INSERT IGNORE command in SQL Server, and I do not want to insert the records that contain errors; I just want to insert the clean data.
Query:
INSERT INTO rcmschargepostingmastertable
            (clinicid,
             clinicsiteid,
             appointmentid,
             patientid)
SELECT clinicid,
       clinicsiteid,
       appointmentid,
       patientid
FROM   #tempautopostbulkchargepostingmastertable
It is not possible to do what you stated in your comment:
i want to ignore any sql error and want to continue insertion for
clean records
SQL Server doesn't have any pure SQL mechanism for doing this. Your only choice is to use one of the proposed work-arounds (SSIS, WHERE clause).
One work-around that hasn't been mentioned because it's the worst performance-wise, but at least it's one that you haven't shot down, is to replace your set-based insert with a cursor that does the inserts one row at a time.
Then you could put the single-row insert in a TRY block, and if it errors, the cursor will skip it and move on to the next one.
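A rough sketch of that cursor approach, using the column list from the question (the data types are guesses, so treat this as an illustration rather than a drop-in script):

DECLARE @clinicid INT, @clinicsiteid INT, @appointmentid INT, @patientid INT;  -- assumed types

DECLARE row_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT clinicid, clinicsiteid, appointmentid, patientid
    FROM   #tempautopostbulkchargepostingmastertable;

OPEN row_cur;
FETCH NEXT FROM row_cur INTO @clinicid, @clinicsiteid, @appointmentid, @patientid;

WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        INSERT INTO rcmschargepostingmastertable
               (clinicid, clinicsiteid, appointmentid, patientid)
        VALUES (@clinicid, @clinicsiteid, @appointmentid, @patientid);
    END TRY
    BEGIN CATCH
        PRINT ERROR_MESSAGE();  -- the violating row is skipped; log it here if needed
    END CATCH;

    FETCH NEXT FROM row_cur INTO @clinicid, @clinicsiteid, @appointmentid, @patientid;
END

CLOSE row_cur;
DEALLOCATE row_cur;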
I do not want to insert data which contains errors. I just want to insert clean data.
Then you need to identify and filter out the bad data (the constraint-violating records) before inserting into the target table, which will make your life easier.
........
       modifiedbyid
FROM   #tempautopostbulkchargepostingmastertable
WHERE  some_column <> 'bad data'
Since you are using SQL Server 2012, you can use TRY_CONVERT to identify and filter out the bad data.
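For example, a sketch of that idea, assuming (purely for illustration) that appointmentid arrives as text and must convert cleanly to an integer:

-- TRY_CONVERT returns NULL instead of raising an error, so bad rows can be filtered out.
INSERT INTO rcmschargepostingmastertable
            (clinicid, clinicsiteid, appointmentid, patientid)
SELECT clinicid,
       clinicsiteid,
       TRY_CONVERT(INT, appointmentid),
       patientid
FROM   #tempautopostbulkchargepostingmastertable
WHERE  TRY_CONVERT(INT, appointmentid) IS NOT NULL;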

SSIS list rows that do not update

I am using an OLE DB Command to update records in a table. I want to separate the rows that update successfully from the rows that do not update (which is different from an error). Some rows will not update because the key I am updating does not exist. This is different from an error, because the command still ran, so I cannot use the red error line. The only idea I have is the equivalent of running the update in SQL Server and getting "(0 row(s) affected)", which I could use for a comparison.
Since this does not count as an error in SSIS, I can't use the red error line. Does anyone know how to catch records that do not update?
Catch them in a table:
SELECT *
INTO   Some_table
FROM   Table_you_are_updating_From AS a
WHERE  NOT EXISTS (SELECT *
                   FROM Table_you_are_updating AS b
                   WHERE a.key = b.key)
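Applied with hypothetical table and key names, the same pattern would look like this (run it against the source before the OLE DB Command so you know which rows will be skipped):

-- Rows in the source whose key has no match in the target; these are the rows
-- the update will silently skip. All object and column names are hypothetical.
SELECT s.*
INTO   dbo.RowsThatWillNotUpdate
FROM   dbo.StagingCustomers AS s
WHERE  NOT EXISTS (SELECT 1
                   FROM dbo.Customers AS c
                   WHERE c.CustomerKey = s.CustomerKey);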

SQL Server Insert failure due to XML Schema validation error

I have an XML column in a table and it is defined by a schema. I am trying to insert values into this table by using Insert into tbl1 Select * from tbl for xml, but this is failing due to a schema validation failure for one of the records. I would like to at least insert the records that pass validation, and I can capture the others later. Can someone help me with this?
SQL Server validates the whole dataset, not a single row. If you want to validate row by row using SQL Server tools, the options are:
SQLCLR (fastest)
SSIS (easy to create) - using a FOREACH loop you try to insert each row into the table; all failed rows are redirected to another table.
T-SQL TRY/CATCH block - assign the XML from each single row to a schema-validated variable. The slowest option; a rough sketch follows below.
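A minimal sketch of that TRY/CATCH idea, assuming a schema collection named dbo.MySchemaCollection and placeholder table/column names (tbl, tbl1, id, xml_col), none of which come from the original post:

DECLARE @id INT, @raw_xml NVARCHAR(MAX);
DECLARE @typed_xml XML(dbo.MySchemaCollection);  -- hypothetical schema collection

DECLARE xml_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT id, xml_col FROM tbl;   -- placeholder source table and columns

OPEN xml_cur;
FETCH NEXT FROM xml_cur INTO @id, @raw_xml;

WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        SET @typed_xml = @raw_xml;   -- assignment to the typed variable triggers validation
        INSERT INTO tbl1 (id, xml_col) VALUES (@id, @typed_xml);
    END TRY
    BEGIN CATCH
        PRINT 'Row ' + CAST(@id AS VARCHAR(20)) + ' failed validation: ' + ERROR_MESSAGE();
    END CATCH;

    FETCH NEXT FROM xml_cur INTO @id, @raw_xml;
END

CLOSE xml_cur;
DEALLOCATE xml_cur;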

Cannot insert duplicate record in a table

I am trying to execute a stored proc through SSIS and it gives me the following error:
[Execute SQL Task] Error: Executing the query "Exec sp1 ?" failed with the following error: "Procedure: sp1 Line: 66 Message: Cannot insert duplicate key row in object 'table.sq1' with unique index 'UIX_sp1_Key'.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
Actually, the stored proc sp1 truncates and reloads the data into a table.
I am not able to figure out where exactly it is trying to insert a duplicate record.
Thanks in advance.
Your data must have duplicate values across the key column(s). You could remove the primary key temporarily (or create another table with the same structure but without the definition of the primary key, but that would mean changing the stored procedure), then load the data into the table. Then use this SQL statement:
select keycol1 {,keycol2 {,keycol3} ...}, count(*)
from tablename
group by keycol1 {,keycol2 {,keycol3} ...}
having count(*) > 1
That will show you the data values that are duplicates across the key column(s).
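For instance, if the unique index UIX_sp1_Key covered a single column (the name sp1_key below is hypothetical), the check would be:

-- Substitute the actual key column(s) behind UIX_sp1_Key for sp1_key.
select sp1_key, count(*) as dup_count
from tablename
group by sp1_key
having count(*) > 1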
If you are truncating the table before load, then you must have duplicate data in the source.
Examine this to see what is there; use Excel or Access or some such to assist if needed. Or drop the unique constraint, then query the staging table with an aggregate query.
