Insert Multiple Rows in SQL Server using a Trigger

I need to insert multiple rows into another table using a trigger, but it only inserts the last record.
I have checked some other posts on Stack Overflow and didn't find an answer.
This is my trigger:
IF (@TNAEventID IN (1, 2, 3))
BEGIN
INSERT INTO [Biostar].Cen.WentOutLog (AutoID, nUserID, nOutDateTime, nOutTNAEvent, nReaderID)
VALUES (@AutoID, @UseID, @DateTime, @TNAEventID, @ReaderID)
END
ELSE IF (@TNAEventID = 0)
BEGIN
UPDATE Cen.WentOutLog
SET nINDateTime = @DateTime, nInTNAEvent = @TNAEventID
WHERE AutoID = (SELECT TOP (1) AutoID FROM Cen.WentOutLog
WHERE nINDateTime IS NULL AND nOutDateTime < @DateTime AND nUserID = @UseID
ORDER BY nOutDateTime DESC)
END
ELSE
BEGIN
....
END
Thanks in Advance.

You can try the code below; the INSERT is readily usable.
You might need to change the UPDATE statement, as I do not know what your data looks like:
INSERT INTO [Biostar].[Cen].[WentOutLog]
([AutoID], [nUserID], [nOutDateTime], [nOutTNAEvent], [nReaderID])
SELECT [AutoID], [nUserID], [nOutDateTime], [nOutTNAEvent], [nReaderID]
FROM INSERTED
WHERE TNAEventID IN (1, 2, 3)
UPDATE W
SET W.[nINDateTime] = I.[DateTime],
W.[nInTNAEvent] = I.[TNAEventID]
FROM [Cen].[WentOutLog] AS W
INNER JOIN INSERTED I ON W.[AutoID] = I.[AutoID]
WHERE I.[TNAEventID] = 0
AND W.[nINDateTime] IS NULL
AND W.[nOutDateTime] < I.[DateTime]
AND W.[nUserID] = I.[UserID]
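For completeness, here is a rough sketch of the trigger definition those two statements would sit inside. The trigger name and the source table name are assumptions on my part, since the question does not show which table the trigger fires on:
-- Sketch only: trigger and source-table names below are assumptions, not from the question.
CREATE TRIGGER [Cen].[trg_SourceTable_AfterInsert]
ON [Cen].[SourceTable] -- hypothetical: the table whose inserts should feed WentOutLog
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Put the set-based INSERT ... SELECT ... FROM INSERTED statement here.
    -- Put the set-based UPDATE ... FROM ... INNER JOIN INSERTED statement here.
    -- Both statements read the whole INSERTED pseudo-table, so a multi-row insert
    -- is handled in one pass with no per-row variables.
END
The key point is that INSERTED can hold any number of rows, which is why the original variable-based version only ever processed one of them.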

Related

trying to do multiple inserts in one trigger using field dependent on first insert

I am trying to manipulate a bunch of table triggers that start with an insert into one event table (TB A). This insert fires a trigger (T1) that does an insert into a secondary table (TB B). The secondary table has an insert trigger (T2) that does an update on the first table (TB A).
Pardon the confusion, but basically, in the first trigger I want to do a second insert into the same table using the values of the first insert.
BEGIN
SET NOCOUNT ON
declare @Time int
declare @DeleteLinger int
declare @SFlags int -- flag value read back from StorageQueue below
select @Time = convert(integer, value) from Systemproperty
where [name] = 'KeepStoreLingerTimeInMinutes'
select @DeleteLinger = convert(integer, value) from Systemproperty
where [name] = 'KeepDeleteLingerTimeInMinutes'
IF (@DeleteLinger >= @Time) SET @Time = @DeleteLinger + 1
insert StorageQueue
(TimeToExecute, Operation, Parameter, RuleID, GFlags)
select DateAdd(mi, @Time, getutcdate()), 1, I.ID, r.ID, r.GFlags
from inserted I, StorageRule r
where r.Active = 1 and I.Active = 0 and (I.OnlineCount > 0 OR
I.OnlineScreenCount > 0)
-- try and get the value that was just inserted into StorageQueue
select @SFlags = S.GFlags FROM StorageQueue S, StorageRule r, inserted I
WHERE r.ID = S.RuleID and I.ID = S.parameter
-- if a certain value, do another insert into StorageQueue
If (@SFlags = 10)
INSERT INTO StorageQueue
(TimeToExecute, Operation, Parameter, RuleID, StoreFlags)
VALUES (DateAdd(mi, @Time, getutcdate()), 1, (SELECT parameter
FROM StorageQueue), 2, @SFlags)
END
The problem is that either the record is not yet inserted (because the variable @SFlags is null) or some other trigger accesses the values and makes changes. My question is whether this is a good way to do it. Is it possible to retrieve a value into a variable from within a trigger? It seems that whichever way I try it, it doesn't work.
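One way to sidestep the scalar-variable problem is to capture the rows that were just inserted with an OUTPUT ... INTO clause and drive the second insert from that captured set: a single variable like @SFlags can only ever hold one row's GFlags, which is why it comes back NULL or wrong when the trigger fires for several rows. A rough sketch of the middle of the trigger, with column names and types assumed from the snippet above:
-- Sketch only: the @queued column types are assumptions; adjust to the real StorageQueue schema.
DECLARE @queued TABLE (Parameter int, RuleID int, GFlags int);

-- Capture what actually went into StorageQueue, one row per qualifying inserted row.
INSERT INTO StorageQueue (TimeToExecute, Operation, Parameter, RuleID, GFlags)
OUTPUT inserted.Parameter, inserted.RuleID, inserted.GFlags INTO @queued (Parameter, RuleID, GFlags)
SELECT DATEADD(mi, @Time, GETUTCDATE()), 1, I.ID, r.ID, r.GFlags
FROM inserted AS I
CROSS JOIN StorageRule AS r -- same pairing as the original query
WHERE r.Active = 1
  AND I.Active = 0
  AND (I.OnlineCount > 0 OR I.OnlineScreenCount > 0);

-- Drive the follow-up insert from the captured set instead of a single scalar test,
-- so it also behaves correctly when many rows are inserted at once.
INSERT INTO StorageQueue (TimeToExecute, Operation, Parameter, RuleID, StoreFlags)
SELECT DATEADD(mi, @Time, GETUTCDATE()), 1, q.Parameter, 2, q.GFlags
FROM @queued AS q
WHERE q.GFlags = 10;
Note that OUTPUT ... INTO a table variable works even when the target table has triggers of its own, whereas a bare OUTPUT clause does not.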

EF4 RowCount issue on INSTEAD OF INSERT trigger while updating another table

I have some trouble with Entity Framework 4. Here is the thing:
We have a SQL Server database. Every table has three INSTEAD OF triggers, for insert, update and delete.
We know Entity Framework has some issues dealing with these triggers, which is why we added the following code at the end of the triggers to force the row count:
For insert:
DECLARE @Identifier BIGINT;
SET @Identifier = scope_identity()
SELECT @Identifier AS Identifier
For update/delete:
CREATE TABLE #TempTable (temp INT PRIMARY KEY);
INSERT INTO #TempTable VALUES (1);
DROP TABLE #TempTable
It worked fine until now:
From an INSTEAD OF INSERT trigger (let's say on table A), I try to update a field of another table (table B).
I know my update code works perfectly, since a manual insert does the job. The issue shows up only when I'm using Entity Framework.
I have the solution now; let's make a textbook case of this with a full example. :)
In this example, our application is an address book. We want to update the business activity (the IsActive column in Business)
every time we add, update or delete a contact of that business. The business is considered active if at least one of the contacts
of the business is active. We record every state change of the business in a table to have the full history.
So, we have 3 tables:
table Business (Identifier (PK Identity), Name, IsActive),
table Contact (Identifier (PK Identity), Name, IsActive, IdentifierBusiness)
table BusinessHistory (Identifier (PK Identity), IsActive, Date, IdentifierBusiness)
Here are the triggers we are interested in:
table Contact (trigger IoInsert):
-- inserting the new rows
INSERT INTO Contact
(
Name
,IsActive
,IdentifierBusiness
)
SELECT
t0.Name
,t0.IsActive
,t0.IdentifierBusiness
FROM
inserted AS t0
-- Updating the business
UPDATE
Business
SET
IsActive = CASE WHEN
(
(t0.IsActive = 1 AND Business.IsActive = 1)
OR
(t0.IsActive = 1 AND Business.IsActive = 0)
) THEN 1 ELSE 0 END
FROM
inserted AS t0
WHERE
Business.Identifier = t0.IdentifierBusiness
AND
t0.IsActive = 1
AND
Business.IsActive = 0
-- Forcing rowCount for EntityFramework
DECLARE @Identifier BIGINT;
SET @Identifier = scope_identity()
SELECT @Identifier AS Identifier
Table Business (trigger IoUpdate):
UPDATE
Business
SET
IsActive = 1
FROM
Contact AS t0
WHERE
Business.Identifier = t0.IdentifierBusiness
AND
t0.IsActive = 1
AND
Business.IsActive = 0
---- Updating BusinessHistory
INSERT INTO BusinessHistory
(
Date
,IsActive
,IdentifierBusiness
)
SELECT
GETDATE()
,t0.IsActive
,t0.Identifier
FROM
inserted AS t0
INNER JOIN
deleted AS t1 ON t0.Identifier = t1.Identifier
WHERE
(t0.Identifier <> t1.Identifier)
-- Forcing rowCount for EntityFramework
CREATE TABLE #TempTable (temp INT PRIMARY KEY);
INSERT INTO #TempTable VALUES (1);
DROP TABLE #TempTable
Table BusinessHistory (trigger IoInsert):
-- Updating the business
UPDATE
Business
SET
IsActive = CASE WHEN
(
(t0.IsActive = 1 AND Business.IsActive = 1)
OR
(t0.IsActive = 1 AND Business.IsActive = 0)
) THEN 1 ELSE 0 END
FROM
inserted AS t0
WHERE
Business.Identifier = t0.IdentifierBusiness
AND
t0.IsActive = 1
AND
Business.IsActive = 0
-- inserting the new rows
INSERT INTO BusinessHistory
(
Date
,IsActive
,IdentifierBusiness
)
SELECT
GETDATE()
,t0.IsActive
,t0.Identifier
FROM
inserted AS t0
-- Forcing rowCount for EntityFramework
DECLARE @Identifier BIGINT;
SET @Identifier = scope_identity()
SELECT @Identifier AS Identifier
So, in a nutshell, what happened?
We have two tables, Business and Contact. Contact updates table Business on insert and update.
When Business is updated, it inserts into BusinessHistory, which stores the history of updates of table Business
whenever the field IsActive is updated.
The thing is, even if I don't actually insert a new row into BusinessHistory, I still issue an INSERT statement, and so I go inside the INSTEAD OF INSERT trigger of the BusinessHistory table. Of course, at the end of that one there is a scope_identity(). You can only consume it once, and it gives back the last identity inserted.
So, since I had not inserted any BusinessHistory row, it was consuming the scope_identity of my newly inserted contact: the scope_identity of the INSTEAD OF
INSERT trigger of the Contact table came back empty!
How to isolate the issue?
Using the profiler, you can see that there are INSERT statements on BusinessHistory when there should not be any.
Using the debugger, you will eventually end up in an insert trigger you are not supposed to be in.
How to fix it?
There are several alternatives here. What I did was to surround the insert into BusinessHistory, in the Business trigger, with an IF condition:
I want the insert to happen only if the status "IsActive" has changed:
IF EXISTS
(
SELECT
1
FROM
inserted AS t0
INNER JOIN
deleted AS t1 ON t0.Identifier = t1.Identifier
WHERE
(t0.IsActive <> t1.IsActive)
)
BEGIN
INSERT INTO BusinessHistory
(
Date
,IsActive
,IdentifierBusiness
)
SELECT
GETDATE()
,t0.IsActive
,t0.Identifier
FROM
inserted AS t0
INNER JOIN
deleted AS t1 ON t0.Identifier = t1.Identifier
WHERE
(t0.IsActive <> t1.IsActive)
END
Another possibility is, in the INSTEAD OF INSERT trigger of the BusinessHistory table, to surround the whole trigger body with an IF EXISTS condition:
IF EXISTS (SELECT 1 FROM inserted)
BEGIN
----Trigger's code here !
END
How to avoid it?
Well, use one of these fixes!
Avoid scope_identity(); @@IDENTITY is more than enough in most cases! In my company, we only use scope_identity because of EF 4!
I know my English is not perfect; I can edit if it's not good enough, or if someone wants to add something on this subject!

SQL-Server Trigger on Insert into a table, 1 column into 6 in the same table

I have spent a lot of time investigating whether this can be done outside of the database, but to be honest I don't think so, at least not very easily. We access the data in the tables via Access 2010 using VBA, so I thought I could do it
via an action in the front-end software. That would be easy to complete, however there are too many permutations I can't control.
I have a table [TableData] with multiple columns. We have some externally supplied software that populates the table about 20-30 rows at a time. One of the fields [Fluctuation] currently allows us to transfer data up to 60 chars in length, and our intention is to send data in the format 1.1,1.2,1.3,1.4,1.5,1.6, where we have six numbers of up to two decimal places separated by commas, no spaces. The target column names are Fluc1, Fluc2, Fluc3, etc.
What I would like to do is create a trigger within the SQL database that runs once the row is inserted and splits the above into six new columns, but only if 6 values separated by five commas exist.
I then need to do maths on the 6 values, but at least I will have them in separate columns to work with.
I have no knowledge of triggers so any help given would be very much appreciated.
Sample data examples are:
101.23,100.45,101.56,102.89,101.74,100.25
1.05,1.09,1.05,0.99,0.99,0.98
etc
I have VBA code to split the data and was going to do this via a SELECT query after the fact, but as I can't control the data being entered by the external software, I thought a trigger would be more useful.
VBA code:
'This function splits the comma-separated string data into an array
Public Function FluctuationSeperation(strFluctuationData As String) As Variant
Dim strTest As String
Dim strArray() As String
Dim intCount As Integer
strArray = Split(strFluctuationData, ",")
Dim arr(5) As Variant
For intCount = LBound(strArray) To UBound(strArray)
arr(intCount) = Trim(strArray(intCount))
Next
FluctuationSeperation = arr
End Function
When writing a trigger you need to take care that it can fire for multiple inserted rows. The built-in inserted pseudo-table is available for that purpose. You need to iterate through all the inserted records and update them individually, using your primary key (I have assumed a column id) to match inserted records with the records to update.
CREATE TRIGGER TableData_ForInsert
ON [TableData]
AFTER INSERT
AS
BEGIN
DECLARE @id int
DECLARE @Fluctuation varchar(max)
DECLARE i CURSOR LOCAL FOR
SELECT id, Fluctuation FROM inserted
OPEN i
FETCH NEXT FROM i INTO @id, @Fluctuation
WHILE @@FETCH_STATUS = 0
BEGIN
DECLARE @pos1 int = charindex(',', @Fluctuation)
DECLARE @pos2 int = charindex(',', @Fluctuation, @pos1+1)
DECLARE @pos3 int = charindex(',', @Fluctuation, @pos2+1)
DECLARE @pos4 int = charindex(',', @Fluctuation, @pos3+1)
UPDATE [TableData]
SET fluc1 = ltrim(substring(@Fluctuation, 1, @pos1-1)),
fluc2 = ltrim(substring(@Fluctuation, @pos1+1, @pos2-@pos1-1)),
fluc3 = ltrim(substring(@Fluctuation, @pos2+1, @pos3-@pos2-1)),
fluc4 = ltrim(substring(@Fluctuation, @pos3+1, @pos4-@pos3-1)),
fluc5 = ltrim(substring(@Fluctuation, @pos4+1, 999))
WHERE id = @id -- need to find the TableData record to update by the inserted id
FETCH NEXT FROM i INTO @id, @Fluctuation
END
CLOSE i
DEALLOCATE i -- release the cursor so the trigger can fire again safely
END
But because cursors are in many cases considered a bad practice, it is better to write the same thing as a set-based command. It can be achieved with the APPLY clause like this:
CREATE TRIGGER TableData_ForInsert
ON [TableData]
AFTER INSERT
AS
BEGIN
UPDATE t SET
fluc1 = SUBSTRING(t.fluctuation, 0, i1.i),
fluc2 = SUBSTRING(t.fluctuation, i1.i+1, i2.i - i1.i -1),
fluc3 = SUBSTRING(t.fluctuation, i2.i+1, i3.i - i2.i -1),
fluc4 = SUBSTRING(t.fluctuation, i3.i+1, i4.i - i3.i -1),
fluc5 = SUBSTRING(t.fluctuation, i4.i+1, 999)
FROM [TableData] t
OUTER APPLY (select charindex(',', t.fluctuation) as i) i1
OUTER APPLY (select charindex(',', t.fluctuation, i1.i+1) as i) i2
OUTER APPLY (select charindex(',', t.fluctuation, i2.i+1) as i) i3
OUTER APPLY (select charindex(',', t.fluctuation, i3.i+1) as i) i4
JOIN INSERTED new ON new.ID = t.ID -- need to find TableData record to update by inserted id
END
This code example does not handle malformed strings; it always expects 5 numbers delimited by 4 commas.
For more tips on how to split strings in SQL Server, check this link.
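Since the question actually expects six values separated by five commas, one simple guard (my assumption, not something from the answer above) is to count the commas before splitting:
-- Hypothetical guard: only strings with exactly five commas (six values) are considered safe to split.
DECLARE @s varchar(max) = '1.05,1.09,1.05,0.99,0.99,0.98';

IF LEN(@s) - LEN(REPLACE(@s, ',', '')) = 5
    PRINT 'well-formed: safe to split';
ELSE
    PRINT 'malformed: leave the row untouched';
The same LEN/REPLACE expression can simply be appended to the WHERE clause of the set-based UPDATE so that malformed rows are skipped instead of producing garbage substrings.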
Test case:
DECLARE @test TABLE
(
id int,
Fluctuation varchar(max),
fluc1 numeric(9,3) NULL,
fluc2 numeric(9,3) NULL,
fluc3 numeric(9,3) NULL,
fluc4 numeric(9,3) NULL,
fluc5 numeric(9,3) NULL
)
INSERT INTO @test (id, Fluctuation) VALUES(1, '1.2,5,8.52,6,7.521')
INSERT INTO @test (id, Fluctuation) VALUES(2, '2.2,6,9.52,7,8.521')
INSERT INTO @test (id, Fluctuation) VALUES(3, '2.5,3,4.52,9,7.522')
INSERT INTO @test (id, Fluctuation) VALUES(4, '2.53,4.52,97.522') -- this fails
UPDATE t SET
fluc1 = CASE WHEN i1.i<0 THEN NULL ELSE SUBSTRING(t.fluctuation, 0, i1.i) END,
fluc2 = CASE WHEN i2.i<0 THEN NULL ELSE SUBSTRING(t.fluctuation, i1.i+1, i2.i - i1.i -1) END,
fluc3 = CASE WHEN i3.i<0 THEN NULL ELSE SUBSTRING(t.fluctuation, i2.i+1, i3.i - i2.i -1) END,
fluc4 = CASE WHEN i4.i<0 THEN NULL ELSE SUBSTRING(t.fluctuation, i3.i+1, i4.i - i3.i -1) END,
fluc5 = CASE WHEN i4.i<0 THEN NULL ELSE SUBSTRING(t.fluctuation, i4.i+1, 999) END
FROM @test t
OUTER APPLY (select charindex(',', t.fluctuation) as i) i1
OUTER APPLY (select charindex(',', t.fluctuation, i1.i+1) as i) i2
OUTER APPLY (select charindex(',', t.fluctuation, i2.i+1) as i) i3
OUTER APPLY (select charindex(',', t.fluctuation, i3.i+1) as i) i4
SELECT * FROM @test

oracle after insert trigger process another table & update same row with status [duplicate]

Sorry for my English.
I have 2 tables:
Table1
id
table2_id
num
modification_date
and
Table2
id
table2num
I want to make a trigger which, after an insert or delete on Table1, updates Table2.table2num with the most recent num value.
My trigger:
CREATE OR REPLACE TRIGGER TABLE1_NUM_TRG
AFTER INSERT OR DELETE ON table1
FOR EACH ROW
BEGIN
IF INSERTING then
UPDATE table2
SET table2num = :new.num
WHERE table2.id = :new.table2_id;
ELSE
UPDATE table2
SET table2num = (SELECT num FROM (SELECT num FROM table1 WHERE table2_id = :old.table2_id ORDER BY modification_date DESC) WHERE ROWNUM <= 1)
WHERE table2.id = :old.table2_id;
END IF;
END TABLE1_NUM_TRG;
But after a delete on Table1 I get this error:
ORA-04091: table BD.TABLE1 is mutating, trigger/function may not see it
ORA-06512: at "BD.TABLE1_NUM_TRG", line 11
ORA-04088: error during execution of trigger 'BD.TABLE1_NUM_TRG'
What am I doing wrong?
What you've run into is the classic "mutating table" exception. In a ROW trigger Oracle does not allow you to run a query against the table which the trigger is defined on - so it's the SELECT against TABLE1 in the DELETING part of the trigger that's causing this issue.
There are a couple of ways to work around this. Perhaps the best in this situation is to use a compound trigger, which would look something like:
CREATE OR REPLACE TRIGGER TABLE1_NUM_TRG
FOR INSERT OR DELETE ON TABLE1
COMPOUND TRIGGER
TYPE NUMBER_TABLE IS TABLE OF NUMBER;
tblTABLE2_IDS NUMBER_TABLE;
BEFORE STATEMENT IS
BEGIN
tblTABLE2_IDS := NUMBER_TABLE();
END BEFORE STATEMENT;
AFTER EACH ROW IS
BEGIN
IF INSERTING THEN
UPDATE TABLE2 t2
SET t2.TABLE2NUM = :new.NUM
WHERE t2.ID = :new.TABLE2_ID;
ELSIF DELETING THEN
tblTABLE2_IDS.EXTEND;
tblTABLE2_IDS(tblTABLE2_IDS.LAST) := :old.TABLE2_ID;
END IF;
END AFTER EACH ROW;
AFTER STATEMENT IS
BEGIN
IF tblTABLE2_IDS.COUNT > 0 THEN
FOR i IN tblTABLE2_IDS.FIRST..tblTABLE2_IDS.LAST LOOP
UPDATE TABLE2 t2
SET t2.TABLE2NUM = (SELECT NUM
FROM (SELECT t1.NUM
FROM TABLE1 t1
WHERE t1.TABLE2_ID = tblTABLE2_IDS(i)
ORDER BY modification_date DESC)
WHERE ROWNUM = 1)
WHERE t2.ID = tblTABLE2_IDS(i);
END LOOP;
END IF;
END AFTER STATEMENT;
END TABLE1_NUM_TRG;
A compound trigger allows each timing point (BEFORE STATEMENT, BEFORE ROW, AFTER ROW, and AFTER STATEMENT) to be handled. Note that the timing points are always invoked in the order given. When an appropriate SQL statement (i.e. INSERT INTO TABLE1 or DELETE FROM TABLE1) is executed and this trigger is fired the first timing point to be invoked will be BEFORE STATEMENT, and the code in the BEFORE STATEMENT handler will allocate a PL/SQL table to hold a bunch of numbers. In this case the numbers to be stored in the PL/SQL table will be the TABLE2_ID values from TABLE1. (A PL/SQL table is used instead of, for example, an array because a table can hold a varying number of values, while if we used an array we'd have to know in advance how many numbers we would need to store. We can't know in advance how many rows will be affected by a particular statement, so we use a PL/SQL table).
When the AFTER EACH ROW timing point is reached and we find that the statement being processed is an INSERT, the trigger just goes ahead and performs the necessary UPDATE to TABLE2 as this won't cause a problem. However, if a DELETE is being performed the trigger saves the TABLE1.TABLE2_ID into the PL/SQL table allocated earlier. When the AFTER STATEMENT timing point is finally reached, the PL/SQL table allocated earlier is iterated through, and for each TABLE2_ID found the appropriate update is performed.
Documentation here.
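A quick, hypothetical smoke test (it assumes the TABLE1/TABLE2 layout from the question and that ids 1 and 10 are free to use) makes it easy to see which timing point does what:
-- Exercises both branches of the compound trigger above.
INSERT INTO table1 (id, table2_id, num, modification_date)
VALUES (1, 10, 42, SYSDATE);
-- INSERTING path: the AFTER EACH ROW section updates table2.table2num for id 10 to 42 directly.

DELETE FROM table1 WHERE id = 1;
-- DELETING path: AFTER EACH ROW only records table2_id = 10 in the collection; the re-query of
-- TABLE1 and the UPDATE of TABLE2 run in AFTER STATEMENT, which is why ORA-04091 no longer fires.

COMMIT;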
You have to define a BEFORE trigger for the delete. Try using two triggers:
CREATE OR REPLACE TRIGGER INS_TABLE1_NUM_TRG
AFTER INSERT ON table1
FOR EACH ROW
BEGIN
UPDATE table2
SET table2num = :new.num
WHERE table2.id = :new.table2_id;
END INS_TABLE1_NUM_TRG;
CREATE OR REPLACE TRIGGER DEL_TABLE1_NUM_TRG
BEFORE DELETE ON table1
FOR EACH ROW
BEGIN
UPDATE table2
SET table2num = (SELECT num FROM
(SELECT num FROM table1 WHERE table2_id = :old.table2_id
ORDER BY modification_date DESC)
WHERE ROWNUM <= 1)
WHERE table2.id = :old.table2_id;
END DEL_TABLE1_NUM_TRG;
@psaraj12's answer is the best IMHO, but in the DELETE trigger I would use the :OLD notation, as the inner query is unnecessary and will slow the trigger down significantly:
...
BEFORE DELETE ON table1
FOR EACH ROW
UPDATE table2
SET table2num = :OLD.num
WHERE table2.id = :OLD.table2_id;
...

tsql bulk update

MyTableA has several million records. On regular occasions every row in MyTableA needs to be updated with values from TheirTableA.
Unfortunately I have no control over TheirTableA and there is no field to indicate if anything in TheirTableA has changed so I either just update everything or I update based on comparing every field which could be different (not really feasible as this is a long and wide table).
Unfortunately the transaction log balloons when doing a straight update, so I wanted to chunk it using UPDATE TOP. However, as I understand it, I need some field to determine whether the records in MyTableA have been updated yet, otherwise I'll end up in an infinite loop:
declare @again as bit;
set @again = 1;
while @again = 1
begin
update top (10000) my
set my.A1 = their.A1, my.A2 = their.A2, my.A3 = their.A3
from MyTableA my
join TheirTableA their on my.Id = their.Id
if @@ROWCOUNT > 0
set @again = 1
else
set @again = 0
end
Is the only way to make this work to add in a
where my.A1 <> their.A1 and my.A2 <> their.A2 and my.A3 <> their.A3
clause? This seems like it will be horribly inefficient with many columns to compare.
I'm sure I'm missing an obvious alternative?
Assuming both tables have the same structure, you can get a result set of the rows that are different using
SELECT * INTO #different_rows FROM MyTable EXCEPT SELECT * FROM TheirTable
and then update from that using whatever key fields are available.
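Spelled out a little further, using the table names from the question and assuming both tables share an Id key column (the temp-table name is mine), that could look like:
-- Rows in MyTableA whose column values no longer match TheirTableA (identical structure assumed).
SELECT *
INTO #different_rows
FROM MyTableA
EXCEPT
SELECT *
FROM TheirTableA;

-- Update only those keys, pulling the fresh values from TheirTableA.
UPDATE my
SET my.A1 = their.A1,
    my.A2 = their.A2,
    my.A3 = their.A3
FROM MyTableA AS my
JOIN #different_rows AS d ON d.Id = my.Id
JOIN TheirTableA AS their ON their.Id = my.Id;

DROP TABLE #different_rows;
EXCEPT compares every column, so it effectively does the "compare every field" work without writing the predicates by hand.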
Well, the first and simplest solution would obviously be to change the schema to include a last-update timestamp, and then only update the rows with a timestamp newer than your last change.
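As a minimal sketch of that idea, assuming a LastUpdated column could be added to TheirTableA (the column and the sync bookkeeping are hypothetical):
-- Hypothetical: assumes TheirTableA gained a LastUpdated datetime column and that the
-- high-water mark of the previous run is persisted somewhere (shown here as a variable).
DECLARE @lastSync datetime;
SET @lastSync = '20100101'; -- placeholder: load the previous run's high-water mark here

UPDATE my
SET my.A1 = their.A1,
    my.A2 = their.A2,
    my.A3 = their.A3
FROM MyTableA AS my
JOIN TheirTableA AS their ON their.Id = my.Id
WHERE their.LastUpdated > @lastSync;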
But if that is not possible, another way to go could be to use the HashBytes function, perhaps by concatenating the fields into an XML value that you then compare. The caveat here is an 8 KB limit (https://connect.microsoft.com/SQLServer/feedback/details/273429/hashbytes-function-should-support-large-data-types). EDIT: Once again, I have stolen code, this time from:
http://sqlblogcasts.com/blogs/tonyrogerson/archive/2009/10/21/detecting-changed-rows-in-a-trigger-using-hashbytes-and-without-eventdata-and-or-s.aspx
His example is:
select batch_id
from (
select distinct batch_id, hash_combined = hashbytes( 'sha1', combined )
from ( select batch_id,
combined =( select batch_id, batch_name, some_parm, some_parm2
from deleted c -- need old values
where c.batch_id = d.batch_id
for xml path( '' ) )
from deleted d
union all
select batch_id,
combined =( select batch_id, batch_name, some_parm, some_parm2
from some_base_table c -- need current values (could use inserted here)
where c.batch_id = d.batch_id
for xml path( '' ) )
from deleted d
) as r
) as c
group by batch_id
having count(*) > 1
A last resort (and my original suggestion) is to try BINARY_CHECKSUM. As noted in the comments, this does open up the risk of a rather high collision rate.
http://msdn.microsoft.com/en-us/library/ms173784.aspx
I have stolen the following example from lessthandot.com - link to the full SQL (and other cool functions) is below.
--Data Mismatch
SELECT 'Data Mismatch', t1.au_id
FROM( SELECT BINARY_CHECKSUM(*) AS CheckSum1 ,au_id FROM pubs..authors) t1
JOIN(SELECT BINARY_CHECKSUM(*) AS CheckSum2,au_id FROM tempdb..authors2) t2 ON t1.au_id =t2.au_id
WHERE CheckSum1 <> CheckSum2
Example taken from http://wiki.lessthandot.com/index.php/Ten_SQL_Server_Functions_That_You_Have_Ignored_Until_Now
I don't know if this is better than adding where my.A1 <> their.A1 and my.A2 <> their.A2 and my.A3 <> their.A3, but I would definitely give it a try (assuming SQL Server 2005+):
declare @again as bit;
set @again = 1;
declare @idlist table (Id int);
while @again = 1
begin
update top (10000) my
set my.A1 = their.A1, my.A2 = their.A2, my.A3 = their.A3
output inserted.Id into @idlist (Id)
from MyTableA my
join TheirTableA their on my.Id = their.Id
left join @idlist i on my.Id = i.Id
where i.Id is null
/* alternatively (instead of left join + where):
where not exists (select * from @idlist where Id = my.Id) */
if @@ROWCOUNT > 0
set @again = 1
else
set @again = 0
end
That is, declare a table variable for collecting the IDs of the rows being updated and use that table for looking up (and omitting) IDs that have already been updated.
A slight variation on the method would be to use a local temporary table instead of a table variable. That way you would be able to create an index on the ID lookup table, which might result in better performance.
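A minimal sketch of that variation, with the temp-table name and index being my own additions:
-- Hypothetical variant: a local temp table with a primary key instead of a table variable,
-- so the anti-join on already-updated Ids can use an index.
CREATE TABLE #updated_ids (Id int PRIMARY KEY);

DECLARE @again bit;
SET @again = 1;

WHILE @again = 1
BEGIN
    UPDATE TOP (10000) my
    SET my.A1 = their.A1, my.A2 = their.A2, my.A3 = their.A3
    OUTPUT inserted.Id INTO #updated_ids (Id)
    FROM MyTableA AS my
    JOIN TheirTableA AS their ON my.Id = their.Id
    WHERE NOT EXISTS (SELECT 1 FROM #updated_ids u WHERE u.Id = my.Id);

    IF @@ROWCOUNT > 0
        SET @again = 1;
    ELSE
        SET @again = 0;
END

DROP TABLE #updated_ids;
The primary key on #updated_ids gives the anti-join an index to seek on, which is the potential benefit over the table variable.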
If a schema change is not possible, how about using a trigger to save off the Ids that have changed, and then only import/export those rows?
Or use the trigger to export them immediately.
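A rough sketch of that trigger idea, assuming you are actually allowed to create objects alongside TheirTableA (which the question suggests may not be the case); all names here are hypothetical:
-- Hypothetical change-log approach: records only the keys of changed rows.
CREATE TABLE dbo.ChangedIds (Id int NOT NULL PRIMARY KEY);
GO
CREATE TRIGGER trg_TheirTableA_TrackChanges
ON TheirTableA
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Remember only the keys; the periodic sync job reads and then clears this table.
    INSERT INTO dbo.ChangedIds (Id)
    SELECT i.Id
    FROM inserted AS i
    WHERE NOT EXISTS (SELECT 1 FROM dbo.ChangedIds c WHERE c.Id = i.Id);
END
The chunked UPDATE would then join only to the Ids recorded in ChangedIds and delete them once the sync succeeds.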
