I am trying to use an update query, but so far it keeps failing and I don't understand what I am doing wrong. I am getting this error: 'Update canceled: attempt to update a target row with values from multiple join rows'. I know the table called OTHER_TABLE has duplicate records. Here is my current query:
UPDATE MAINTABLE
SET BLDG_NBR = DM.BLDG_NBR
FROM OTHER_TABLE DM
WHERE MAINTABLE.BLDG_NM = DM.BLDG_NM
You need to join the two tables
UPDATE MAINTABLE
SET BLDG_NBR = DM.BLDG_NBR
FROM MAINTABLE INNER JOIN OTHER_TABLE DM
ON MAINTABLE.BLDG_NM = DM.BLDG_NM
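Note that a join alone will not clear the "multiple join rows" error if OTHER_TABLE really does contain duplicate BLDG_NM values. A minimal sketch, assuming any one of the duplicate BLDG_NBR values is acceptable, is to collapse the duplicates first:

```sql
-- Collapse duplicates in OTHER_TABLE before joining, so each BLDG_NM
-- contributes exactly one BLDG_NBR. MAX() is an arbitrary choice here;
-- pick whichever rule fits your data.
UPDATE MAINTABLE
SET BLDG_NBR = DM.BLDG_NBR
FROM MAINTABLE
INNER JOIN (
    SELECT BLDG_NM, MAX(BLDG_NBR) AS BLDG_NBR
    FROM OTHER_TABLE
    GROUP BY BLDG_NM
) DM ON MAINTABLE.BLDG_NM = DM.BLDG_NM;
```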
According to your comment, you have no indexes on the tables, and because of that the update performs a full table scan. Try adding an index on both tables before executing the update statement:
CREATE NONCLUSTERED INDEX MAINTABLE_BLDGNM_idx ON MAINTABLE(BLDG_NM);
CREATE NONCLUSTERED INDEX OTHERTABLE_BLDGNM_idx ON OTHER_TABLE(BLDG_NM);
I have two tables: Order and Product.
I want a specific column (OnShelfQuantity) in the Product table to be updated whenever a new row is added to the Order table. I used the query below to implement a trigger that should do that. But when I insert a row into the Order table and later check the Product table, I notice that the Product table has been updated 3 times. For example: if the inserted order quantity is 10, then only 10 should be subtracted from Product_TAB.OnShelfQuantity, but 30 gets subtracted. Please help!
create trigger dbo.Trigge
ON dbo.Ordertable
AFTER INSERT
AS
BEGIN
    update Product_TAB
    set OnShelfQuantity = Product_TAB.OnShelfQuantity - Ordertable.Quantity
    FROM dbo.Product_TAB
    INNER JOIN Ordertable
        ON Ordertable.ProductID = Product_TAB.ProductID;
END;
I think you can use the inserted table to resolve this issue.
The inserted table is a special table available inside triggers; it holds exactly the rows that were just inserted by the statement that fired the trigger.
So you can join to it in your update statement to avoid updating more rows than you intend:
update Product_TAB
set OnShelfQuantity = Product_TAB.OnShelfQuantity - Ordertable.Quantity
FROM dbo.Product_TAB
INNER JOIN Ordertable ON Ordertable.ProductID = Product_TAB.ProductID
INNER JOIN inserted INS ON INS.Order_ID = Ordertable.Order_ID
The inserted table can contain multiple rows, and those rows could reference the same product. A given row in the target table is only updated once by an UPDATE statement, so you want to aggregate the inserted data before the update:
create trigger dbo.Trigge
ON dbo.Ordertable
AFTER INSERT
AS
BEGIN
update p
set OnShelfQuantity= p.OnShelfQuantity - i.total_quantity
from dbo.Product_TAB p JOIN
(SELECT i.ProductId, SUM(i.Quantity) as total_quantity
FROM inserted i
GROUP BY i.ProductId
) i
on i.ProductID = p.ProductID;
END;
Note that this only uses inserted and not the original table.
So the issue was that I was inserting new rows with the same Order ID. This is why it was doing the additional subtraction which I didn't require. So now I just have to insert new rows with a unique OrderID. Thanks to everyone who replied above!
I need to update two tables in a single query.
Please find the query below:
UPDATE m_student_moreinfo
INNER JOIN m_student
    ON m_student_moreinfo.studentID = m_student.id
SET m_student_moreinfo.MIAStartdate = GETDATE(),
    m_student.status = 'Clinical MIA'
WHERE studentID IN
(
    SELECT smi.studentID
    FROM dbo.m_student st
    INNER JOIN dbo.m_student_course sc
        ON sc.studentID = st.id
    INNER JOIN dbo.m_student_classClinical scl
        ON scl.studentcourseID = sc.id
    INNER JOIN dbo.m_student_moreinfo smi
        ON smi.studentID = st.id
    WHERE scl.startDate <= GETDATE()
      AND scl.endDate >= GETDATE()
      AND MIAStartdate IS NULL
)
I am getting "Incorrect syntax near INNER".
You can't update two tables in a single statement, but you can capture the affected keys from the first update using OUTPUT INTO, and then use that output as a join for the second update.
please see this and that for more info
So basically you can wrap this in a transaction and commit after all update steps have finished.
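A minimal sketch of the OUTPUT INTO approach, using the tables from the question. The WHERE clause is deliberately simplified to the MIAStartdate check; in practice you would substitute the full subquery filter from the question:

```sql
BEGIN TRANSACTION;

-- Capture the keys touched by the first update...
DECLARE @updated TABLE (studentID INT);

UPDATE dbo.m_student_moreinfo
SET MIAStartdate = GETDATE()
OUTPUT inserted.studentID INTO @updated
WHERE MIAStartdate IS NULL;  -- simplified filter; use the full condition here

-- ...then drive the second update from that captured output.
UPDATE st
SET st.status = 'Clinical MIA'
FROM dbo.m_student st
INNER JOIN @updated u ON u.studentID = st.id;

COMMIT;
```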
Simple answer: you cannot.
What you can do is two update queries in a transaction:
BEGIN TRANSACTION;
update query 1
update query 2
COMMIT;
That will do the job for you.
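Applied to the question's tables, a hedged sketch of that pattern collects the target IDs once so both updates act on the same set (the SELECT filter is simplified here; use the full subquery from the question):

```sql
BEGIN TRANSACTION;

-- Collect the target students once, so both updates use the same set.
DECLARE @ids TABLE (studentID INT);
INSERT INTO @ids (studentID)
SELECT smi.studentID
FROM dbo.m_student_moreinfo smi
WHERE smi.MIAStartdate IS NULL;   -- simplified filter

-- Update query 1
UPDATE smi
SET smi.MIAStartdate = GETDATE()
FROM dbo.m_student_moreinfo smi
INNER JOIN @ids i ON i.studentID = smi.studentID;

-- Update query 2
UPDATE st
SET st.status = 'Clinical MIA'
FROM dbo.m_student st
INNER JOIN @ids i ON i.studentID = st.id;

COMMIT;
```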
I have the following query script:
declare @tblSectionsList table
(
    SectionID int,
    SectionCode varchar(255)
)

-- assume @tblSectionsList has 50 section rows

DELETE td
FROM [dbo].[InventoryDocumentDetails] td
INNER JOIN [dbo].InventoryDocuments th
    ON th.Id = td.InventoryDocumentDetail_InventoryDocument
INNER JOIN @tblSectionsList ts
    ON ts.SectionID = th.InventoryDocument_Section

This script involves three tables, where @tblSectionsList is a table variable that may contain around 50 records. I use it in the join condition with the InventoryDocuments table, which is then joined to the InventoryDocumentDetails table. All joins are on INT foreign keys.
Over the weekend I started this query on the server, and it is still running even after 2 days and 4 hours. Can anybody tell me if I am doing something wrong, or is there any way to improve its performance? I don't even know how much more time it will take to return a result.
Before this I also tried to create an index on the InventoryDocumentDetails table with following script:
CREATE NONCLUSTERED INDEX IX_InventoryDocumentDetails_InventoryDocument
ON dbo.InventoryDocumentDetails (InventoryDocumentDetail_InventoryDocument);
But this script also ran for more than a day without finishing, so I cancelled it.
Additional info:
I am using MS SQL 2008 R2.
The InventoryDocuments table contains 2,108,137 rows and has primary key 'Id'.
The InventoryDocumentDetails table contains 25,055,158 rows and has primary key 'Id'.
Both tables have primary keys defined.
CPU: Intel Xeon, with 32 GB RAM.
No other indexes are defined, because whenever I now try to create a new index, that query also gets suspended.
Query Execution Plan (1): (execution-plan screenshot not reproduced)
2nd Part:
The following query returns one row for this session, showing status = 'suspended' and wait_type = 'LCK_M_IX':
SELECT r.session_id AS spid, r.[status], r.command, t.[text],
       OBJECT_NAME(t.objectid, t.[dbid]) AS [object], r.logical_reads,
       r.blocking_session_id AS blocked, r.wait_type,
       s.host_name, s.host_process_id, s.program_name, r.start_time
FROM sys.dm_exec_requests AS r
LEFT OUTER JOIN sys.dm_exec_sessions s ON s.session_id = r.session_id
OUTER APPLY sys.dm_exec_sql_text(r.[sql_handle]) AS t
WHERE r.session_id <> @@SPID AND r.session_id > 50
What happens when you change the INNER JOIN to EXISTS?
DELETE td
FROM [dbo].[InventoryDocumentDetails] td
WHERE EXISTS (SELECT 1
FROM [dbo].InventoryDocuments th
WHERE EXISTS (SELECT 1
FROM @tblSectionsList ts
WHERE ts.SectionID = th.InventoryDocument_Section)
AND th.Id = td.InventoryDocumentDetail_InventoryDocument)
It can sometimes be more efficient, time-wise, to truncate a table and re-import the records you want to keep: a delete on a large table is incredibly slow compared to an insert. Of course, this is only an option if you can take your table offline. Also, only do this if your recovery model is set to simple.
Drop triggers on table A.
Bulk copy table A to B.
Truncate table A.
Enable identity insert.
Insert into A from B where A.ID is not in the set of IDs to delete.
Disable identity insert.
Rebuild indexes.
Enable triggers.
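The steps above could be sketched roughly as follows. This is a sketch only: the side-table name is an assumption, the column list must be spelled out in full for your real table, and the NOT EXISTS filter mirrors the delete condition from the question:

```sql
-- 1. Copy the rows you want to KEEP into a side table.
SELECT td.*
INTO dbo.InventoryDocumentDetails_Keep
FROM dbo.InventoryDocumentDetails td
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.InventoryDocuments th
                  INNER JOIN @tblSectionsList ts
                          ON ts.SectionID = th.InventoryDocument_Section
                  WHERE th.Id = td.InventoryDocumentDetail_InventoryDocument);

-- 2. Truncate the big table (minimally logged, but needs exclusive access).
TRUNCATE TABLE dbo.InventoryDocumentDetails;

-- 3. Copy the kept rows back, preserving identity values.
SET IDENTITY_INSERT dbo.InventoryDocumentDetails ON;
INSERT INTO dbo.InventoryDocumentDetails (Id, InventoryDocumentDetail_InventoryDocument) -- list every column explicitly
SELECT Id, InventoryDocumentDetail_InventoryDocument
FROM dbo.InventoryDocumentDetails_Keep;
SET IDENTITY_INSERT dbo.InventoryDocumentDetails OFF;

DROP TABLE dbo.InventoryDocumentDetails_Keep;
```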
Try something like the query below. It might at least give you an idea:
DELETE FROM [DBO].[INVENTORYDOCUMENTDETAILS]
WHERE INVENTORYDOCUMENTDETAILS_PK IN
(SELECT TD.INVENTORYDOCUMENTDETAILS_PK
 FROM [DBO].[INVENTORYDOCUMENTDETAILS] TD
 INNER JOIN [DBO].INVENTORYDOCUMENTS TH ON TH.ID = TD.INVENTORYDOCUMENTDETAIL_INVENTORYDOCUMENT
 INNER JOIN @TBLSECTIONSLIST TS ON TS.SECTIONID = TH.INVENTORYDOCUMENT_SECTION
)
I have a table with two columns where I need one (columnB) to be a copy of the other one (columnA). So, if a row is inserted or updated, I want the value from columnA to be copied to columnB.
Here's what I have now:
CREATE TRIGGER tUpdateColB
ON products
FOR INSERT, UPDATE AS
BEGIN
UPDATE table
SET columnB = columnA
END
The problem now is that the query affects all rows, not just the one that was updated or inserted. How would I go about fixing that?
Assuming you have a primary key column, id, (and you should have a primary key), join to the inserted table (making the trigger capable of handling multiple rows):
CREATE TRIGGER tUpdateColB
ON products
FOR INSERT, UPDATE AS
BEGIN
    UPDATE t
    SET t.columnB = i.columnA
    FROM table t INNER JOIN inserted i ON t.id = i.id
END
But if ColumnB is always a copy of ColumnA, why not create a Computed column instead?
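A minimal sketch of the computed-column alternative, assuming columnB can first be dropped as a regular column (and that the trigger is removed, since it would no longer be needed):

```sql
-- Drop the redundant physical column and re-add it as a computed column
-- that always mirrors columnA. PERSISTED stores the value so it can be indexed.
ALTER TABLE products DROP COLUMN columnB;
ALTER TABLE products ADD columnB AS columnA PERSISTED;
```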
Using the inserted and deleted Tables
There is a special inserted table available in triggers that will contain the "after" version of rows impacted by an INSERT or UPDATE operation. Similarly, there is a deleted table that will contain the "before" version of rows impacted by an UPDATE or DELETE operation.
So, for your specific case:
UPDATE t
SET t.columnB = t.columnA
FROM inserted i
INNER JOIN table t
ON i.PrimaryKeyColumn = t.PrimaryKeyColumn
I'm looking for a pattern for keeping a last-updated timestamp on all records in a set of tables. One idea was to implement this with triggers, so that no matter how a record gets updated, the DateLastUpdated column stays current.
Given a test table called Data_Trigger, this is my current update trigger, which ensures the date is updated. Looking at the query execution plan when I run this, the trigger takes 56% of the time. Is there a more efficient trigger/SQL to use?
ALTER TRIGGER [dbo].[Data_Update]
ON [dbo].[Data_Trigger]
FOR UPDATE
AS
BEGIN
SET NOCOUNT ON;
UPDATE
[Data_Trigger]
SET
DateLastUpdated = GetDate()
FROM
[Data_Trigger] data
JOIN
inserted ON data.DataID = inserted.DataID
END
Very likely, the join between [Data_trigger] and inserted uses a TABLE SCAN/CLUSTERED INDEX SCAN on [Data_trigger] table.
What can you do?
Check the cached plan for this trigger:
1) First, run this query to find the plan_handle for your object (the trigger):
SELECT t.name AS TriggerName
,ts.*
FROM sys.dm_exec_trigger_stats ts
INNER JOIN sys.triggers t ON ts.object_id = t.object_id
WHERE ts.database_id = DB_ID()
AND t.name LIKE '%Audit%';
2) Second, find the cached plan (XML). For example, if the plan handle for this trigger is 0x050009009A0A677BB8E09C7A000000000000000000000000, you can use this query to find cached plan:
DECLARE @plan_handle VARBINARY(64) = 0x050009009A0A677BB8E09C7A000000000000000000000000;

SELECT *
FROM sys.dm_exec_query_plan(@plan_handle) qp;
If the join between [Data_Trigger] and inserted uses a CLUSTERED INDEX SCAN on [Data_Trigger], then you have (at least) three options in SQL Server 2008:
1) UPDATE STATISTICS on the [Data_Trigger] table: updating statistics causes queries to recompile. Afterwards, test the trigger and check the cached plan again to see if it uses a SEEK.
2) Or, you can rewrite the JOIN in the UPDATE as an IN subquery:

FROM
    [Data_Trigger] data
WHERE data.DataID IN (SELECT DataID FROM inserted)
After this operation, test trigger and check again cached plan to see if it uses SEEK.
3) If SEEK operator is not used then you can use FORCESEEK table hint (new in SQL Server 2008; also see Best Practice Considerations section):
FROM
[Data_Trigger] data WITH(FORCESEEK)
JOIN
inserted ON data.DataID = inserted.DataID
For the TABLE SCAN case, try creating a unique (clustered or nonclustered) index on the DataID column.
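For example (the index name is an assumption, and UNIQUE is only valid if DataID really is unique per row):

```sql
CREATE UNIQUE NONCLUSTERED INDEX IX_Data_Trigger_DataID
    ON dbo.Data_Trigger (DataID);
```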