I have a DB2 stored procedure and a trigger that perform a set of insertions. Some of the rows being inserted might already be present in the table. I am trying to avoid checking for the row before every insert, as I believe that would add processing overhead.
I am trying to find the DB2 equivalent of the 'ignore_dup_row' index attribute provided by Sybase. If there is no DB2 equivalent, what are the viable options for avoiding transaction rollbacks when a duplicate insert is attempted?
Use a merge statement:
MERGE INTO t AS x
USING (
    VALUES (...) -- new row
) AS y (c1, c2, ..., cn)
ON x.key_col = y.key_col -- "key_col" stands in for your unique key column
WHEN NOT MATCHED THEN
    INSERT (c1, c2, ..., cn) VALUES (y.c1, y.c2, ..., y.cn);
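For instance, with a hypothetical two-column table t (id, val) where id is the unique key, the filled-in statement would look like this:

MERGE INTO t AS x
USING (
    VALUES (1, 'a') -- new row
) AS y (id, val)
ON x.id = y.id
WHEN NOT MATCHED THEN
    INSERT (id, val) VALUES (y.id, y.val);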
If you are inserting rows one by one, you can also declare a CONTINUE HANDLER for SQLSTATE '23505' (unique constraint violation) in your stored procedure.
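A minimal sketch of such a handler in DB2 SQL PL, assuming a hypothetical table t (id, val):

CREATE PROCEDURE insert_ignore_dup (IN p_id INT, IN p_val VARCHAR(50))
LANGUAGE SQL
BEGIN
    DECLARE v_ignored INT DEFAULT 0;
    -- SQLSTATE '23505' is raised on a duplicate key; note it and carry on
    DECLARE CONTINUE HANDLER FOR SQLSTATE '23505'
        SET v_ignored = 1;
    INSERT INTO t (id, val) VALUES (p_id, p_val);
END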
I created a trigger in SQL Server designed to act whenever data is entered into a certain table, in this case a table called FORECAST_TEST_DATA. The trigger then takes certain values from the inserted row and inserts them into a table called PRODUCT_TEST_DATA. The other columns in that table are then filled with values that already exist within it, using products that share a common PROD_NUM value.
The trigger in SQL Server looks as follows:
CREATE OR ALTER TRIGGER FORECAST_TRIGGER ON FORECAST_TEST_DATA
FOR INSERT
AS
INSERT INTO PRODUCT_TEST_DATA
(PRODUCT_TEST_DATA.PROD_NUM, PRODUCT_TEST_DATA.MONTH, PRODUCT_TEST_DATA.STORE_TYPE,
PRODUCT_TEST_DATA.PRODUCT_KEY, PRODUCT_TEST_DATA.CATEGORY,
PRODUCT_TEST_DATA.BRAND_NAME,PRODUCT_TEST_DATA.COLOUR)
SELECT
inserted.PROD_NUM, inserted.MONTH, inserted.STORE_TYPE, inserted.PRODUCT_KEY,
PRODUCT_TEST_DATA.CATEGORY, PRODUCT_TEST_DATA.BRAND_NAME,PRODUCT_TEST_DATA.COLOUR
FROM inserted, PRODUCT_TEST_DATA
WHERE inserted.PROD_NUM = PRODUCT_TEST_DATA.PROD_NUM
GO
The trigger already has the desired functionality; it just needs to be rewritten in Oracle SQL.
Thanks for taking the time to read through this problem, any help is appreciated.
Here is the Oracle syntax:
CREATE OR REPLACE TRIGGER FORECAST_TRIGGER
AFTER INSERT ON FORECAST_TEST_DATA
FOR EACH ROW
BEGIN
    INSERT INTO PRODUCT_TEST_DATA
        (PROD_NUM, MONTH, STORE_TYPE, PRODUCT_KEY, CATEGORY, BRAND_NAME, COLOUR)
    SELECT
        :new.PROD_NUM, :new.MONTH, :new.STORE_TYPE, :new.PRODUCT_KEY,
        p.CATEGORY, p.BRAND_NAME, p.COLOUR
    FROM PRODUCT_TEST_DATA p
    WHERE p.PROD_NUM = :new.PROD_NUM;
END;
I'm interested in figuring out how to copy a row's value from an old column to a new column within the same table. This would be done individually inside a trigger procedure, not with something like UPDATE table SET columnB = columnA.
To try to clarify, table1.column1.row3 -> table1.column2.row3 if an INSERT or UPDATE statement is executed upon table1.column1.row3.
Have your trigger assign
NEW.column2 := NEW.column1;
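In Oracle, for example, that assignment would live in a BEFORE row-level trigger. A minimal sketch (the trigger name is invented; table1, column1, and column2 are from the question):

CREATE OR REPLACE TRIGGER copy_column_trigger
BEFORE INSERT OR UPDATE ON table1
FOR EACH ROW
BEGIN
    -- copy the incoming column1 value into column2 on the same row
    :NEW.column2 := :NEW.column1;
END;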
I have the following statement:
UPDATE Table SET Column=Value WHERE TableID IN ({0})
I have a comma-delimited list of TableIDs (for replacing {0}) that can be pretty lengthy. I've found that this is faster than using a SqlDataAdapter; however, I've also noticed that if the command text is too long, the SqlCommand might perform poorly.
Any ideas?
This is inside a CLR trigger. Each SqlCommand execution incurs some overhead. I've determined that the above command is better than SqlDataAdapter.Update() because Update() updates records individually, causing several SQL statements to be executed.
...I ended up doing the following (trigger time went from 0.7 to 0.25 seconds):
UPDATE T SET Column=Value FROM Table T INNER JOIN INSERTED AS I ON (I.TableID=T.TableID)
When there is a long list, the execution plan is probably using an index scan instead of an index seek. In this case, you are probably better off limiting the list to several items, but call the update command repeatedly until all items in the list are accommodated.
Split your list of IDs into batches, maybe. I assume you have the list of ID numbers in a collection and you're building up the {0} string, so maybe update 20 or 100 at a time.
Wrap it in a transaction and perform all the updates before calling Commit()
If this is a stored procedure, I would use a table-valued parameter. If this is an ad hoc batch, then consider populating a temporary table and joining to it in your batch. Your IN clause is expanded into a series of ORs, which can quite easily negate the use of an index. With a JOIN you may get a better plan from the optimizer.
DECLARE @Value VARCHAR(100) = 'Some value';
CREATE TABLE #Table (TableID INT PRIMARY KEY);
INSERT INTO #Table VALUES (1),(2),(3),(n)...;
MERGE INTO Schema.Table AS target
USING #Table AS source
ON target.TableID = source.TableID
WHEN MATCHED THEN UPDATE SET Column = @Value;
If you can use a stored procedure, you could use a MERGE statement instead.
MERGE INTO Table AS target
USING @TableIDList AS source
ON target.TableID = source.ID
WHEN MATCHED THEN UPDATE SET Column = source.Value;
where @TableIDList is a table type sent from code as a table-valued parameter with the IDs (and possibly values) you need.
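For reference, a rough sketch of the table type and procedure this assumes (every name here is made up):

CREATE TYPE dbo.TableIDList AS TABLE (ID INT PRIMARY KEY, Value VARCHAR(100));
GO
CREATE PROCEDURE dbo.UpdateFromIDList
    @TableIDList dbo.TableIDList READONLY
AS
BEGIN
    MERGE INTO dbo.[Table] AS target
    USING @TableIDList AS source
    ON target.TableID = source.ID
    WHEN MATCHED THEN UPDATE SET [Column] = source.Value;
END
GO

From the C# side, the parameter is passed with SqlDbType.Structured.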
I want to insert multiple records (~1000) using C# and SQL Server 2000 as the database. Before inserting, how can I check whether the record I'm inserting already exists, and if it does, skip it and move on to the next record? The records come from a structured Excel file; I load them into a generic collection, iterate through each item, and perform an insert like this:
// Insert records into the database
private void insertRecords() {
    try {
        // iterate through all records
        // and perform an insert on each iteration
        for (int i = 0; i < Names.Count; i++) {
            // clear parameters from the previous iteration
            sCommand.Parameters.Clear();
            sCommand.Parameters.AddWithValue("@name", Names[i]);
            sCommand.Parameters.AddWithValue("@person", ContactPeople[i]);
            sCommand.Parameters.AddWithValue("@number", Phones[i]);
            sCommand.Parameters.AddWithValue("@address", Addresses[i]);
            // Open the connection
            sConnection.Open();
            sCommand.ExecuteNonQuery();
            sConnection.Close();
        }
    } catch (SqlException) {
        throw; // rethrow without resetting the stack trace
    }
}
This code uses a stored procedure to insert the records, but how can I check for the record before inserting?
Inside your stored procedure, you can add a check like this (guessing table and column names, since you didn't specify):
IF EXISTS (SELECT * FROM dbo.YourTable WHERE Name = @Name)
    RETURN
-- here, after the check, do the INSERT
You might also want to create a UNIQUE INDEX on your Name column to make sure no two rows with the same value exist:
CREATE UNIQUE NONCLUSTERED INDEX UIX_Name
ON dbo.YourTable(Name)
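Incidentally, the closest SQL Server analogue to Sybase's ignore_dup_row is the IGNORE_DUP_KEY index option, which discards duplicate inserts with a warning instead of raising an error. A sketch against the same assumed table, using the SQL Server 2000 spelling of the option (later versions write WITH (IGNORE_DUP_KEY = ON)):

CREATE UNIQUE NONCLUSTERED INDEX UIX_Name
    ON dbo.YourTable(Name)
    WITH IGNORE_DUP_KEY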
The easiest way would probably be to have an inner try block inside your loop. Catch any DB errors and re-throw them if they are not a duplicate record error. If it is a duplicate record error, then don't do anything (eat the exception).
Within the stored procedure, for each row to be added to the database, first check whether the row is present in the table. If it is present, UPDATE it; otherwise, INSERT it. SQL 2008 also has the MERGE command, which essentially combines update and insert.
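A minimal sketch of that check-then-act pattern inside the stored procedure, reusing the parameter names from the C# code above (the table and column names are guesses):

IF EXISTS (SELECT * FROM dbo.YourTable WHERE Name = @name)
    UPDATE dbo.YourTable
    SET ContactPerson = @person, Phone = @number, Address = @address
    WHERE Name = @name
ELSE
    INSERT INTO dbo.YourTable (Name, ContactPerson, Phone, Address)
    VALUES (@name, @person, @number, @address)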
Performance-wise, RBAR (row-by-agonizing-row) is pretty inefficient. If speed is an issue, you'd want to look into the various "insert a lot of rows all at once" processes: BULK INSERT, the bcp utility, and SSIS packages. You still have the either/or issue, but at least it'd perform better.
Edit:
Bulk inserting data into an empty table is easy. Bulk inserting new data in a non-empty table is easy. Bulk inserting data into a table where some of the data (as, presumably, defined by the primary key) is already present is tricky. Alas, the specific steps get detailed quickly and are very dependent upon your system, code, data structures, etc. etc.
The general steps to follow are:
- Create a temporary table
- Load the data into the temporary table
- Compare the contents of the temporary table with those of the target table
- Where they match (old data), UPDATE
- Where they don't match (new data), INSERT
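A rough T-SQL sketch of those steps (all object names and the file path are placeholders):

-- 1. create a temporary staging table
CREATE TABLE #Staging (KeyCol INT PRIMARY KEY, DataCol VARCHAR(100))

-- 2. load the incoming data into it
BULK INSERT #Staging FROM 'C:\data\rows.csv' WITH (FIELDTERMINATOR = ',')

-- 3. where the key already exists in the target, UPDATE
UPDATE t
SET t.DataCol = s.DataCol
FROM dbo.TargetTable AS t
INNER JOIN #Staging AS s ON s.KeyCol = t.KeyCol

-- 4. where it doesn't, INSERT
INSERT INTO dbo.TargetTable (KeyCol, DataCol)
SELECT s.KeyCol, s.DataCol
FROM #Staging AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.TargetTable WHERE KeyCol = s.KeyCol)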
I did a quick search on SO for other posts that covered this, and stumbled across something I'd never thought of. Try this; not only would it work, it's elegant.
Does your table have a primary key? If so, you should be able to check that the key value to be inserted is not already in the table.
I'm still fairly new to T-SQL and SQL 2005. I need to import a column of integers from a table in database1 to an identical table (only missing the column I need) in database2. Both are SQL 2005 databases. I've tried the built-in import command in Server Management Studio, but it forces me to copy the entire table, which causes errors due to constraints and 'read-only' columns (whatever 'read-only' means in SQL 2005). I just want to grab a single column and copy it to a table.
There must be a simple way of doing this. Something like:
INSERT INTO database1.myTable columnINeed
SELECT columnINeed from database2.myTable
Inserting won't do it, since it'll attempt to insert new rows at the end of the table. What it sounds like you're trying to do is add a column to the end of existing rows.
I'm not sure the syntax is exactly right, but if I understood you, then this will do what you're after.
Create the column allowing nulls in database2.
Perform an update:
UPDATE t2
SET t2.colname = t1.colname
FROM database2.dbo.tablename AS t2
INNER JOIN database1.dbo.tablename AS t1
    ON t2.keycol = t1.keycol
There is a simple way very much like this as long as both databases are on the same server. The fully qualified name is dbname.owner.table - normally the owner is dbo and there is a shortcut for ".dbo." which is "..", so...
INSERT INTO Database1..MyTable
(ColumnList)
SELECT FieldsIWant
FROM Database2..MyTable
First, create the column if it doesn't exist:
ALTER TABLE database2..targetTable
ADD targetColumn int null -- or whatever column definition is needed
and then, if you were on SQL Server 2008, you could use the new MERGE statement (note that MERGE is not available in SQL Server 2005).
The MERGE statement has the advantage of handling all situations in one statement: rows missing from the destination (it can insert), rows missing from the source (it can delete), and matching rows (it can update), with everything done atomically in a single statement. Example:
MERGE database2..targetTable AS t
USING (SELECT PrimaryKeyCol, sourceColumn FROM database1..sourceTable) AS s
ON t.PrimaryKeyCol = s.PrimaryKeyCol -- or whatever the match should be based on
WHEN MATCHED THEN
    UPDATE SET t.targetColumn = s.sourceColumn
WHEN NOT MATCHED THEN
    INSERT (targetColumn, [other columns ...]) VALUES (s.sourceColumn, [other values ...]);
The MERGE statement was introduced to solve cases like yours, and I recommend using it; it's much more powerful than solutions built from multiple SQL batch statements that accomplish the same thing, and it avoids the added complexity.
You could also use a cursor. Assuming you want to iterate all the records in the first table and populate the second table with new rows, something like this would be the way to go:
DECLARE @FirstField nvarchar(100)
DECLARE ACursor CURSOR FOR
SELECT FirstField FROM FirstTable
OPEN ACursor
FETCH NEXT FROM ACursor INTO @FirstField
WHILE @@FETCH_STATUS = 0
BEGIN
    INSERT INTO SecondTable ( SecondField ) VALUES ( @FirstField )
    FETCH NEXT FROM ACursor INTO @FirstField
END
CLOSE ACursor
DEALLOCATE ACursor
MERGE is only available in SQL 2008, not SQL 2005.
insert into Test2.dbo.MyTable (MyValue) select MyValue from Test1.dbo.MyTable
This assumes a great deal: first, that the destination table is empty; second, that the other columns are nullable. You may need an UPDATE instead, and to do that you will need a common key.