I have created the following T-SQL trigger, which appears to run forever whenever the underlying table gets updated.
CREATE TRIGGER Trigger_MDSS_ComputeAggregates
ON dbo.MonthlyDetectionScoresSums
AFTER UPDATE, INSERT
AS
BEGIN
update dbo.MonthlyDetectionScoresSums
SET
YPElec = CAST(COALESCE (i.YPLocChain_TotElec, i.YPGlobChain_TotElec, i.YPSIC_TotElec) AS real),
YPGas = CAST(COALESCE (i.YPLocChain_TotGas, i.YPSIC_TotGas) AS real)
from MonthlyDetectionScoresSums mdss
inner join INSERTED i on i.ACI_OI = mdss.ACI_OI
END
GO
Do you know why it might be running for a really really long time?
May I suggest that you use computed columns and drop the trigger?
ALTER TABLE dbo.MonthlyDetectionScoresSums ADD
YPElec AS CAST(COALESCE(YPLocChain_TotElec, YPGlobChain_TotElec, YPSIC_TotElec) AS real),
YPGas AS CAST(COALESCE(YPLocChain_TotGas, YPSIC_TotGas) AS real)
From what I see, you are updating rows you've just updated/inserted. With computed columns the DB engine will do that for you, and no trigger is needed.
Do you have recursive triggers turned on?
Although an infinite loop should be terminated, if your update is very large it can take a long time to reach the nesting limit of 32:
http://msdn.microsoft.com/en-us/library/ms190739.aspx
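You can check the setting with something like this (a quick sketch; ALTER DATABASE CURRENT requires SQL Server 2012 or later, and turning the option off assumes nothing else relies on recursion):
-- Is RECURSIVE_TRIGGERS enabled for the current database?
SELECT name, is_recursive_triggers_on
FROM sys.databases
WHERE name = DB_NAME();

-- If the trigger must stay, stop it from re-firing itself
ALTER DATABASE CURRENT SET RECURSIVE_TRIGGERS OFF;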
I'm not a database expert, but I need some help making sure that a trigger we are using to track an update on a table is the best way to handle our situation and is performing as it should. After deploying the trigger we noticed some slow performance on the actual business system (user side).
Background: we are trying to capture the date/time of a transaction that happens so it can be referenced on a customer portal for our website.
The theory: the trigger watches for an UPDATE that sets a column to 'PI', and if that happens, it writes data to a table, giving some basic information from two other tables that are related to the update.
Table 1 columns
RH.kbranch, RH.kordnum, RH.kcustnum, RH.custsnum, RH.[program]
Table 2 columns
RD.kbranch, RD.kordnum, RD.kpart
Table 3 columns (where trigger is attached)
EQ.kequipnum, EQ.eqpstatus
Trigger
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER TRIGGER [dbo].[PICKUPTrigger]
ON [TEST].[dbo].[equip]
FOR UPDATE
AS
IF (SELECT eqpstatus FROM inserted) = 'PI'
BEGIN
SET NOCOUNT ON
INSERT INTO [Workfiles].[dbo].[PickupAudit] ([HHBranch],[HHOrder],[HHCustomer], [HHShipTo], [EquipID], [EQStatus], [PickupNo], [StatusDate])
SELECT
RH.kbranch, RH.kordnum, RH.kcustnum, RH.custsnum,
RD.kpart, EQ.eqpstatus, RH.[program], GETDATE()
FROM
TEST.dbo.renthead RH
JOIN
TEST.dbo.rentdetl RD ON RH.kbranch = RD.kbranch
AND RH.kordnum = RD.kordnum
AND RH.program NOT LIKE 'OPSS%'
JOIN
TEST.dbo.equip EQ ON EQ.kequipnum = RD.kpart
WHERE
RD.kpart = (SELECT kequipnum FROM inserted);
END
The trigger works, but it appears to be causing problems and slowing down the actual user experience. Any help in tweaking what we have done is appreciated and if you have any questions, feel free to ask. Thanks.
You should use explicit joins, including a join to inserted instead of the scalar subquery (which also breaks when an update affects more than one row):
INSERT INTO [Workfiles].[dbo].[PickupAudit]
([HHBranch],[HHOrder],[HHCustomer],[HHShipTo],[EquipID],[EQStatus],[PickupNo],[StatusDate])
SELECT RH.kbranch, RH.kordnum, RH.kcustnum, RH.custsnum, RD.kpart, EQ.eqpstatus, RH.[program], GETDATE()
FROM TEST.dbo.renthead RH JOIN
TEST.dbo.rentdetl RD
ON RH.kbranch = RD.kbranch AND
RH.kordnum = RD.kordnum AND
RH.program NOT LIKE 'OPSS%' JOIN
TEST.dbo.equip EQ
ON EQ.kequipnum = RD.kpart JOIN
inserted i
ON RD.kpart = i.kequipnum;
For performance, you want indexes on the columns used in the JOINs, in this order:
TEST.dbo.rentdetl(kpart, kbranch, kordnum)
TEST.dbo.equip(kequipnum)
TEST.dbo.renthead(kbranch, kordnum, program)
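For instance (a sketch; the index names are invented, and you may prefer to fold these columns into existing indexes instead):
-- Cover the join columns used by the trigger's query
CREATE INDEX IX_rentdetl_kpart ON TEST.dbo.rentdetl (kpart, kbranch, kordnum);
CREATE INDEX IX_equip_kequipnum ON TEST.dbo.equip (kequipnum);
CREATE INDEX IX_renthead_kbranch ON TEST.dbo.renthead (kbranch, kordnum, program);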
The slowness is caused by the JOIN statements: I think you are joining tables that are heavy with data, or under load, at the time the trigger runs.
The solution for better performance is to create an indexed view, not just a plain view,
and use it in the trigger; you should see a drastic effect.
The scenario is the following -
OrderTable with Columns "OrderId" and "OrderType"
OrderRelationTable with Columns "OrderId" and "CustId"
OrderProcessTable with Columns "OrderId", "OrderType", "CustId", and "ProcessFlag"
The flow goes like this:
Application1 creates the record in OrderTable, then passes it to Application2 over MQ. Application2 then inserts the record it was passed into OrderRelationTable. Finally, a trigger in the Oracle DB creates the record in OrderProcessTable.
Problem
Sometimes the record is not inserted into the third table, OrderProcessTable. I am not sure whether this is caused by timing or whether there is something that is not correct with the trigger.
Application1 Code
boolean updated = false;
/** JDBC prepare statement execution insert into OrderTable in Java**/
int rowCount = ps.executeUpdate();
if(rowCount>0){
updated=true;
}
log.log("updated flag:"+updated);
/** I am able to see the log shows the flag is true, and the record is inserted into OrderTable **/
Application2 Code
This doesn't really matter much; assume it is some Java JDBC code that does the insert into OrderRelationTable and that it is successful.
The Trigger
Assuming the syntax is correct.
CREATE OR REPLACE TRIGGER INSERTINTOOrderProcessTable
AFTER INSERT ON OrderRelationTable
FOR EACH ROW
DECLARE
v_order_type OrderTable.OrderType%TYPE := NULL;
BEGIN
SELECT OrderType INTO v_order_type FROM OrderTable
WHERE OrderId = :new.OrderId
AND OrderType IS NOT NULL
AND rownum=1;
IF v_order_type IS NOT NULL THEN
INSERT INTO OrderProcessTable VALUES (:new.OrderId, v_order_type, :new.CustId, 'N');
END IF;
END;
Questions -
After the Application1 code is executed, is it guaranteed that the DB will have the OrderTable record available for a SELECT statement? (Assume that the updated flag is true.)
Is there a timing issue between the app code and the trigger, for example when the trigger runs the SELECT statement against OrderTable? (Of course the order id matches in both OrderRelationTable and OrderTable.)
Basically, my problem right now is that sometimes (rarely) the record is not inserted into OrderProcessTable via the trigger even though it should be (OrderType is not null). Any idea?
There's no timing issue, as far as I can tell.
As for the trigger code: what is the purpose of the AND rownum = 1 condition? I'm not saying that it is wrong, I'm just asking. Do you expect several rows to be returned by that query? If so, is that a legal situation? Wouldn't you rather handle it with a WHEN TOO_MANY_ROWS exception handler (i.e. instead of using the ROWNUM condition)?
What happens if the SELECT returns nothing? It then raises the NO_DATA_FOUND exception, the trigger fails, and it certainly doesn't insert anything. Is that error propagated so that someone (a human being) or something (an error-logging procedure) sees or catches it, so that you'd know something went wrong?
And, of course, there is the fact that V_ORDER_TYPE remaining NULL causes the INSERT to be skipped (as P. Salmon already suggested).
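A minimal sketch of how the trigger could surface that case instead of dying silently (assuming some error-logging table exists; error_log and its columns are made up here):
CREATE OR REPLACE TRIGGER INSERTINTOOrderProcessTable
AFTER INSERT ON OrderRelationTable
FOR EACH ROW
DECLARE
v_order_type OrderTable.OrderType%TYPE := NULL;
BEGIN
SELECT OrderType INTO v_order_type FROM OrderTable
WHERE OrderId = :new.OrderId
AND OrderType IS NOT NULL
AND rownum = 1;
INSERT INTO OrderProcessTable VALUES (:new.OrderId, v_order_type, :new.CustId, 'N');
EXCEPTION
WHEN NO_DATA_FOUND THEN
-- log the miss so someone can see it; note this log row rolls back
-- with the triggering transaction unless you use an autonomous transaction
INSERT INTO error_log (log_time, message)
VALUES (SYSDATE, 'No OrderType found for OrderId ' || :new.OrderId);
END;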
I've got a table with data named energydata
it has just three columns
(webmeterID, DateTime, kWh)
I have a new set of updated data in a table temp_energydata.
The DateTime and the webmeterID stay the same. But the kWh values need updating from temp_energydata table.
What is the correct way to write the T-SQL for this?
Assuming you want an actual SQL Server MERGE statement:
MERGE INTO dbo.energydata WITH (HOLDLOCK) AS target
USING dbo.temp_energydata AS source
ON target.webmeterID = source.webmeterID
AND target.DateTime = source.DateTime
WHEN MATCHED THEN
UPDATE SET target.kWh = source.kWh
WHEN NOT MATCHED BY TARGET THEN
INSERT (webmeterID, DateTime, kWh)
VALUES (source.webmeterID, source.DateTime, source.kWh);
If you also want to delete records in the target that aren't in the source:
MERGE INTO dbo.energydata WITH (HOLDLOCK) AS target
USING dbo.temp_energydata AS source
ON target.webmeterID = source.webmeterID
AND target.DateTime = source.DateTime
WHEN MATCHED THEN
UPDATE SET target.kWh = source.kWh
WHEN NOT MATCHED BY TARGET THEN
INSERT (webmeterID, DateTime, kWh)
VALUES (source.webmeterID, source.DateTime, source.kWh)
WHEN NOT MATCHED BY SOURCE THEN
DELETE;
Because this has become a bit more popular, I feel like I should expand this answer a bit with some caveats to be aware of.
First, there are several blogs which report concurrency issues with the MERGE statement in older versions of SQL Server. I do not know if this issue has ever been addressed in later editions. Either way, this can largely be worked around by specifying the HOLDLOCK or SERIALIZABLE lock hint:
MERGE INTO dbo.energydata WITH (HOLDLOCK) AS target
[...]
You can also accomplish the same thing with more restrictive transaction isolation levels.
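For instance, a minimal sketch (assuming you wrap the MERGE in an explicit transaction; SERIALIZABLE plays the same role here as the HOLDLOCK hint):
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

BEGIN TRANSACTION;

MERGE INTO dbo.energydata AS target
USING dbo.temp_energydata AS source
ON target.webmeterID = source.webmeterID
AND target.DateTime = source.DateTime
WHEN MATCHED THEN
UPDATE SET target.kWh = source.kWh;

COMMIT TRANSACTION;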
There are several other known issues with MERGE. (Note that since Microsoft nuked Connect and didn't link issues in the old system to issues in the new system, these older issues are hard to track down. Thanks, Microsoft!) From what I can tell, most of them are not common problems or can be worked around with the same locking hints as above, but I haven't tested them.
As it is, even though I've never had any problems with the MERGE statement myself, I always use the WITH (HOLDLOCK) hint now, and I prefer to use the statement only in the most straightforward of cases.
I have often used Bacon Bits' great answer, as I just cannot memorize the syntax.
But I usually add a CTE to make the DELETE part more useful, because very often you will want to apply the merge to only part of the target table.
WITH target as (
SELECT * FROM dbo.energydata WHERE DateTime > GETDATE()
)
MERGE INTO target WITH (HOLDLOCK)
USING dbo.temp_energydata AS source
ON target.webmeterID = source.webmeterID
AND target.DateTime = source.DateTime
WHEN MATCHED THEN
UPDATE SET target.kWh = source.kWh
WHEN NOT MATCHED BY TARGET THEN
INSERT (webmeterID, DateTime, kWh)
VALUES (source.webmeterID, source.DateTime, source.kWh)
WHEN NOT MATCHED BY SOURCE THEN
DELETE;
If you just need to update your records in energydata based on data in temp_energydata, assuming that temp_energydata doesn't contain any new records, then try this:
UPDATE e SET e.kWh = t.kWh
FROM energydata e INNER JOIN
temp_energydata t ON e.webmeterID = t.webmeterID AND
e.DateTime = t.DateTime
Here is a working sqlfiddle.
But if temp_energydata contains new records and you need to insert them into energydata, preferably with one statement, then you should definitely go with the answer that Bacon Bits gave.
UPDATE ed
SET ed.kWh = ted.kWh
FROM energydata ed
INNER JOIN temp_energydata ted ON ted.webmeterID = ed.webmeterID
AND ted.DateTime = ed.DateTime
UPDATE energydata SET energydata.kWh = temp.kWh
FROM temp_energydata AS temp
WHERE energydata.webmeterID = temp.webmeterID
AND energydata.DateTime = temp.DateTime
The correct way in T-SQL is:
UPDATE test1
SET test1.data = test2.data
FROM test1
INNER JOIN test2 ON test1.id = test2.id
I have two tables in SQL Server, one of which is the source for a MERGE operation into the other.
The source table has 30 million records.
The target table has 180 million records. Both tables have 227 columns.
I do have SSIS, but I'm told that in this case a MERGE statement is the better option. Below is a shortened version of it:
;WITH MySource as (
SELECT * FROM [STAGE].[dbo].[STAGE_TABLE]
)
MERGE [EDW].[dbo].[TARGET_TABLE] AS MyTarget
USING MySource
ON MySource.[ID_FIELD] = MyTarget.[ID_FIELD]
AND MySource.[LoadDate] >= MyTarget.[LoadDate]
WHEN MATCHED THEN UPDATE SET
<<Target Column>> = MySource.<<Source Columns>> --227 columns
WHEN NOT MATCHED THEN INSERT
(
[ID_FIELD],
[LoadDate],
<<225 Other Columns>>
)
VALUES (
MySource.[ID_FIELD],
MySource.[LoadDate],
MySource.<<225 other columns>>
);
The only change I made to the script above is truncating the list of columns to keep the code block here short.
My problem is that execution hangs. The profiler screen shows a CXPACKET suspension with the message: cwaitpipenewrow, node=2.
How do I troubleshoot this? Thank you.
It seems that CXPACKET with a suspended state means that some parallel threads have completed and are waiting on other threads that have not completed yet.
Also note that the query needs to update up to a billion values in the table, hence slow-running queries are to be expected. Please check these:
https://dba.stackexchange.com/questions/96346/cxpacket-suspended-and-null-wait-type
https://www.sqlshack.com/troubleshooting-the-cxpacket-wait-type-in-sql-server/
Hope these articles help you debug.
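If you want to see what your session is actually waiting on while the MERGE runs, here is a quick sketch (session_id 52 is just a placeholder for your SPID):
-- Inspect the wait state of the running request
SELECT session_id, status, wait_type, wait_time, last_wait_type
FROM sys.dm_exec_requests
WHERE session_id = 52;
As an experiment, appending OPTION (MAXDOP 1) to the MERGE statement rules parallelism in or out, at the cost of a serial plan.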
I have the following select statement that finishes almost instantly.
declare @weekending varchar(6)
set @weekending = '100103'
select InvoicesCharges.orderaccnumber, Accountnumbersorders.accountnumber
from Accountnumbersorders, storeinformation, routeselecttable,InvoicesCharges, invoice
where InvoicesCharges.pubid = Accountnumbersorders.publication
and Accountnumbersorders.actype = 0
and Accountnumbersorders.valuezone = 'none'
and storeinformation.storeroutename = routeselecttable.istoreroutenumber
and storeinformation.storenumber = invoice.store_number
and InvoicesCharges.invoice_number = invoice.invoice_number
and convert(varchar(6),Invoice.bill_to,12) = @weekending
However, the equivalent update statement takes 1m40s
declare @weekending varchar(6)
set @weekending = '100103'
update InvoicesCharges
set InvoicesCharges.orderaccnumber = Accountnumbersorders.accountnumber
from Accountnumbersorders, storeinformation, routeselecttable,InvoicesCharges, invoice
where InvoicesCharges.pubid = Accountnumbersorders.publication
and Accountnumbersorders.actype = 0
and dbo.Accountnumbersorders.valuezone = 'none'
and storeinformation.storeroutename = routeselecttable.istoreroutenumber
and storeinformation.storenumber = invoice.store_number
and InvoicesCharges.invoice_number = invoice.invoice_number
and convert(varchar(6),Invoice.bill_to,12) = @weekending
Even if I add:
and InvoicesCharges.orderaccnumber <> Accountnumbersorders.accountnumber
at the end of the update statement, reducing the number of writes to zero, it takes the same amount of time.
Am I doing something wrong here? Why is there such a huge difference?
An update can incur all of the following overheads, which a plain select avoids:
transaction log file writes
index updates
foreign key lookups
foreign key cascades
indexed views
computed columns
check constraints
locks
latches
lock escalation
snapshot isolation
DB mirroring
file growth
other processes reading/writing
page splits / unsuitable clustered index
forward pointer/row overflow events
poor indexes
statistics out of date
poor disk layout (eg one big RAID for everything)
Check constraints with UDFs that have table access
...
Although, the usual suspect is a trigger...
Also, your extra condition has no meaning: how is SQL Server supposed to know to ignore it? An update is still generated, with most of the baggage... even the trigger will still fire. Locks must be held while rows are searched for the other conditions, for example.
Edited Sep 2011 and Feb 2012 with more options
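To rule the usual suspect in or out, you can list the triggers on the table (a sketch; the dbo schema is assumed):
-- Any triggers defined on the table being updated?
SELECT name, is_disabled
FROM sys.triggers
WHERE parent_id = OBJECT_ID('dbo.InvoicesCharges');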
The update has to lock and modify the data in the table, and also log the changes to the transaction log. The select does not have to do any of those things.
Because reading does not affect indices, triggers, and what have you?
On slow servers or large databases I usually use UPDATE DELAYED, which waits for a "break" to update the database itself.