SSIS: "Failure inserting into the read-only column <ColumnName>" - sql-server

I have an Excel source going into an OLE DB destination. I'm inserting data into a view that has an INSTEAD OF trigger that handles all inserts. When I try to execute the package I receive this error:
"Failure inserting into the read-only column ColumnName"
What can I do to let SSIS know that this view is safe to insert into because there is an INSTEAD OF trigger that will handle the insert?
EDIT (Additional info):
I have a flat file that is being inserted into a normalized database. My initial problem was how to take a flat file and insert its data into multiple tables while keeping track of all the primary/foreign key relationships. My solution was to create a VIEW that mimics the structure of the flat file and then create an INSTEAD OF trigger on that view. In the INSTEAD OF trigger I handle the logic of maintaining all the relationships between the tables.
My view looks something like this.
CREATE VIEW ImportView
AS
SELECT
CONVERT(varchar(100), NULL) AS CustomerName,
CONVERT(varchar(100), NULL) AS Address1,
CONVERT(varchar(100), NULL) AS Address2,
CONVERT(varchar(100), NULL) AS City,
CONVERT(char(2), NULL) AS State,
CONVERT(varchar(250), NULL) AS ItemOrdered,
CONVERT(int, NULL) AS QuantityOrdered
...
I will never need to select from this view; I only use it to insert the data from the flat file I receive. I need some way to tell SQL Server that the fields aren't really read-only because there is an INSTEAD OF trigger on this view.

Additionally, if your column is an IDENTITY, you could just select the Keep Identity checkbox in the OLE DB Destination Editor.

It's not an ideal solution, but I found a workaround to my problem. Since SSIS was complaining about inserting into my view, I created a table with the exact same structure as the view. Then, in an INSTEAD OF trigger on that table, I simply insert the data destined for the table into the view instead. This adds one more step to the import process, but it's not a big deal.
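A minimal sketch of that workaround, reusing the column list from the view above (the table and trigger names are illustrative, not the original ones):
CREATE TABLE ImportStaging
(
    CustomerName varchar(100) NULL,
    Address1 varchar(100) NULL,
    Address2 varchar(100) NULL,
    City varchar(100) NULL,
    State char(2) NULL,
    ItemOrdered varchar(250) NULL,
    QuantityOrdered int NULL
);
GO
CREATE TRIGGER trg_ImportStaging_Insert
ON ImportStaging
INSTEAD OF INSERT
AS
BEGIN
    -- Forward the rows to the view; the view's own INSTEAD OF trigger
    -- then distributes them across the normalized tables.
    INSERT INTO ImportView
        (CustomerName, Address1, Address2, City, State, ItemOrdered, QuantityOrdered)
    SELECT CustomerName, Address1, Address2, City, State, ItemOrdered, QuantityOrdered
    FROM inserted;
END;
The OLE DB Destination in SSIS then points at ImportStaging instead of the view.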

Why is the column "read only"? Could you post schema for the view and the underlying table(s)? Is the column IDENTITY? Is there a WITH CHECK OPTION on the view? Is it a derived (calculated) column?
UPDATE:
I see now; that is a somewhat unusual application of a view. Maybe a stored procedure would have been a more appropriate choice -- a stored procedure in the DB and an OLE DB Command in SSIS.
Your final solution with a table as a destination is actually faster, provided that you do not use a trigger but instead bulk-insert from the staging table into the "final" tables.
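A rough sketch of that trigger-free pattern, reusing the columns from the view above; the normalized target tables (Customers, Orders) and their keys are assumptions made for illustration:
-- 1. SSIS fast-loads the flat file into a plain staging table (ImportStaging here, with no trigger).
-- 2. Set-based statements then move the data into the normalized tables.
INSERT INTO Customers (CustomerName, Address1, Address2, City, State)
SELECT DISTINCT s.CustomerName, s.Address1, s.Address2, s.City, s.State
FROM ImportStaging s
WHERE NOT EXISTS (SELECT 1 FROM Customers c WHERE c.CustomerName = s.CustomerName);

INSERT INTO Orders (CustomerID, ItemOrdered, QuantityOrdered)
SELECT c.CustomerID, s.ItemOrdered, s.QuantityOrdered
FROM ImportStaging s
JOIN Customers c ON c.CustomerName = s.CustomerName;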


SSIS OLEDB Command transformation (Insert if not exists)

OK, so according to the Microsoft docs, the OLE DB Command transformation in SSIS does this:
The OLE DB Command transformation runs an SQL statement for each row in a data flow. For example, you can run an SQL statement that inserts, updates, or deletes rows in a database table.
So I want to write some SQL that inserts rows into one of my tables only IF the record doesn't already exist.
I tried this, but the component keeps complaining about bad syntax:
IF NOT EXISTS
(SELECT * FROM M_Employee_Login WHERE
Column1=?
AND Column2=?
AND Column3=?)
INSERT INTO [M_Employee_Login]
([Column1]
,[Column2]
,[Column3])
VALUES
(?,?,?)
However, if I remove the IF NOT EXISTS section (leaving only the INSERT), the component says my code is OK. What am I doing wrong?
Is there an easier solution?
Update: BTW My source is a Flat File (csv file)
Update since answer: Just to let people know, I ended up using the OLE DB Command transformation as I had planned, because it is better suited than the OLE DB Destination for this operation. The difference is that I used the Lookup component to filter out all the already existing records (as the answer suggested), and then used the OLE DB Command transformation with the INSERT SQL from the question. It worked as expected. Hope it helps.
The OLE DB Command transformation is not the same as the OLE DB Destination.
Rather than doing it as you describe, use a Lookup component. Your data flow becomes Flat File Source -> Lookup Component -> OLE DB Destination.
In your lookup, write the query SELECT Column1, Column2, Column3 FROM M_Employee_Login and configure it to redirect rows with no match to the no-match output instead of failing (whether this is the default depends on your version, 2005 vs. later).
After the lookup, the output of No Match will contain the values that didn't find a corresponding match in the target table.
Finally, configure your OLEDB Destination to perform the fast load option.
You can make use of the Lookup component in SSIS to avoid the duplicates, which is the best possible approach, but if you are looking for a plain query to avoid them, you can simply insert all the data into a temp/staging table in your database and run the following query:
INSERT INTO M_Employee_Login(Column1, Column2, Column3)
SELECT Val1, Val2, Val3 FROM Staging_Table
EXCEPT
SELECT Column1, Column2, Column3 FROM M_Employee_Login

Create a copy of a table within the same database with SSIS

I want to create a copy of a table, say TestTable, with a new name, say TestTableNew, in the same database with the use of an SSIS package. I've created a "Transfer SQL Server Objects Task" for this with the source database specified as both the SourceDatabase and the DestinationDatabase. When I run this task, the original table TestTable is overwritten with a new -empty- TestTable.
This might well be something really obvious that I've overlooked, but can I somehow specify another name for the destination table somewhere in this transfer task? Or should I solve this in another way?
You can't use the "Transfer SQL Server Objects Task" to copy a table to the same database because there isn't an option to specify the new table name. You would be copying table "TestTable" to table "TestTable", which will fail because they both have the same name.
You can set the "DropObjectsFirst" property to true, but that will make you lose your original table and its data, which I think you did on your test, otherwise you would have received a failure message.
The best option here is to use an "Execute SQL Task" to create the structure of your TestTableNew based on your TestTable and then do a simple OleDBSource -> OleDBDestination transformation to load all the data from one table to another.
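A minimal sketch of what that Execute SQL Task could run, assuming only the column structure is needed (SELECT ... INTO does not copy keys, indexes, or constraints):
-- Creates TestTableNew with the same columns as TestTable, but no rows
SELECT TOP 0 * INTO TestTableNew FROM TestTable;
The OleDBSource -> OleDBDestination data flow then copies the rows across.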
My knowledge of SSIS is very limited, but I assume you can run SQL commands, passing in parameters and therefore generating something like the following dynamically:
SELECT *
INTO TestTableNew
FROM TestTable

SQL Server Insert failure due to XML Schema validation error

I have an XML column in a table, and it is defined by a schema. I am trying to insert values into this table by using Insert into tbl1 Select * from tbl for xml, but this is failing due to a schema validation failure for one of the records. I want to at least insert the records that pass the validation, and I can capture the others later. Can someone help me with this?
SQL Server validates the whole dataset, not a single row. If you want to validate row by row using SQL Server tools, the methods are:
SQLCLR (fastest)
SSIS (easy to create) - using a FOREACH loop, you try to insert each row into the table; all failed rows are redirected to another table.
T-SQL TRY/CATCH block - insert the XML from a single row into a schema-validated variable. The slowest option (a sketch follows).
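A rough sketch of the TRY/CATCH approach; the schema collection (MySchemaCollection), the XML column name (XmlCol), and the error table (tbl_errors) are assumptions made for illustration:
DECLARE @rawXml nvarchar(max);
DECLARE @validated xml(MySchemaCollection);

DECLARE src CURSOR LOCAL FAST_FORWARD FOR
    SELECT CONVERT(nvarchar(max), XmlCol) FROM tbl;

OPEN src;
FETCH NEXT FROM src INTO @rawXml;
WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        -- Converting to the schema-typed variable validates this row only
        SET @validated = @rawXml;
        INSERT INTO tbl1 (XmlCol) VALUES (@validated);
    END TRY
    BEGIN CATCH
        -- Capture the failing row and the validation error for later review
        INSERT INTO tbl_errors (RawXml, ErrorMessage)
        VALUES (@rawXml, ERROR_MESSAGE());
    END CATCH;
    FETCH NEXT FROM src INTO @rawXml;
END
CLOSE src;
DEALLOCATE src;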

How to merge table from access to SQL Express?

I have one table named "Staff" in Access and also have this table (same name) in SQL Server 2008.
Both tables have thousands of records. I want to merge the records from the Access table into the SQL Server table without affecting the existing records in SQL Server. Normally I just export using the ODBC driver, and that works fine if the table doesn't exist in SQL Server yet. Please advise. Thanks.
A simple append query from the local Access table to the linked SQL Server table should work just fine in this case.
So, just drop the first (from) table into the query builder. Then change the query type to append, and you are prompted for the append table name.
From that point on, just drop in the columns you want (do not drop in the PK column, as it need not be used nor transferred in this case).
You can also type in the sql directly in the query builder. Either way, you will wind up with something like:
INSERT INTO dbo_custsql
( ADMINID, Amount, Notes, Status )
SELECT ADMINID, Amount, Notes, Status
FROM custsql1;
This may help: http://www.red-gate.com/products/sql-development/sql-compare/
Or you could write a simple program to read from each data set and do the comparison, adding, updating, and deleting, etc.

check existing record before inserting

I want to insert multiple records (~1000) using C# and SQL Server 2000 as the database, but before inserting, how can I check whether the record I'm inserting already exists, and if it does, skip it and move on to the next record? The records come from a structured Excel file; I load them into a generic collection, then iterate through each item and perform an insert like this:
// Insert records into database
private void insertRecords() {
    try {
        // iterate through all records
        // and perform an insert on each iteration
        for (int i = 0; i < Names.Count; i++) {
            // clear the parameters added in the previous iteration
            sCommand.Parameters.Clear();
            sCommand.Parameters.AddWithValue("@name", Names[i]);
            sCommand.Parameters.AddWithValue("@person", ContactPeople[i]);
            sCommand.Parameters.AddWithValue("@number", Phones[i]);
            sCommand.Parameters.AddWithValue("@address", Addresses[i]);
            // Open the connection
            sConnection.Open();
            sCommand.ExecuteNonQuery();
            sConnection.Close();
        }
    } catch (SqlException) {
        // rethrow, preserving the original stack trace
        throw;
    }
}
This code uses a stored procedure to insert the records, but how can I check for the record before inserting?
Inside your stored procedure, you can have a check something like this (guessing table and column names, since you didn't specify):
IF EXISTS(SELECT * FROM dbo.YourTable WHERE Name = #Name)
RETURN
-- here, after the check, do the INSERT
You might also want to create a UNIQUE INDEX on your Name column to make sure no two rows with the same value exist:
CREATE UNIQUE NONCLUSTERED INDEX UIX_Name
ON dbo.YourTable(Name)
The easiest way would probably be to have an inner try block inside your loop. Catch any DB errors and re-throw them if they are not a duplicate record error. If it is a duplicate record error, then don't do anything (eat the exception).
Within the stored procedure, before the row is added to the database, first check whether the row is already present in the table. If it is present, UPDATE it; otherwise, INSERT it. SQL Server 2008 also has the MERGE command, which essentially combines UPDATE and INSERT into a single statement.
Performance-wise, RBAR (row-by-agonizing-row) is pretty inefficient. If speed is an issue, you'd want to look into the various "insert a lot of rows all at once" processes: BULK INSERT, the bcp utility, and SSIS packages. You still have the either/or issue, but at least it'd perform better.
Edit:
Bulk inserting data into an empty table is easy. Bulk inserting new data in a non-empty table is easy. Bulk inserting data into a table where some of the data (as, presumably, defined by the primary key) is already present is tricky. Alas, the specific steps get detailed quickly and are very dependent upon your system, code, data structures, etc. etc.
The general steps to follow are (a rough sketch follows the list):
- Create a temporary table
- Load the data into the temporary table
- Compare the contents of the temporary table with those of the target table
- Where they match (old data), UPDATE
- Where they don't match (new data), INSERT
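A rough T-SQL sketch of those steps, assuming a target table dbo.Contacts keyed on Name and the four fields used in the C# code above; the table, column names, and data types are all assumptions:
-- 1. Create and load a staging table (BULK INSERT, bcp, SqlBulkCopy, or an SSIS package)
CREATE TABLE #Staging
(
    Name    varchar(100),
    Person  varchar(100),
    Number  varchar(50),
    Address varchar(200)
);

-- 2. Rows that already exist in the target (matched on the key): UPDATE
UPDATE t
SET    t.Person  = s.Person,
       t.Number  = s.Number,
       t.Address = s.Address
FROM   dbo.Contacts AS t
JOIN   #Staging     AS s ON s.Name = t.Name;

-- 3. Rows that do not exist yet: INSERT
INSERT INTO dbo.Contacts (Name, Person, Number, Address)
SELECT s.Name, s.Person, s.Number, s.Address
FROM   #Staging AS s
WHERE  NOT EXISTS (SELECT 1 FROM dbo.Contacts AS t WHERE t.Name = s.Name);
MERGE is not available on SQL Server 2000, so the UPDATE/INSERT pair above is the usual substitute there.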
I did a quick search on SO for other posts that covered this and stumbled across something I'd never thought of. Try this; not only would it work, it's elegant.
Does your table have a primary key? If so you should be able to check that the key value to be inserted is not already in the table.
