Update trigger to track last update date (SQL 2008 R2) - sql-server

I'm looking for a pattern for keeping a last-updated timestamp on all records in a set of tables. One idea was to implement this with triggers, so that no matter how a record gets updated, the DateLastUpdated column stays current.
Given a test table called Data_Trigger, this is my current update trigger, which keeps that column updated. Looking at the query execution plan when I run an update, the trigger takes 56% of the time. Is there a more efficient trigger/SQL to use?
ALTER TRIGGER [dbo].[Data_Update]
ON [dbo].[Data_Trigger]
FOR UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    UPDATE data
    SET DateLastUpdated = GETDATE()
    FROM [Data_Trigger] data
    JOIN inserted ON data.DataID = inserted.DataID;
END

Very likely, the join between [Data_Trigger] and inserted uses a TABLE SCAN / CLUSTERED INDEX SCAN on the [Data_Trigger] table.
What can you do?
Check the cached plan for this trigger:
1) First, run this query to find the plan_handle for your trigger:
SELECT t.name AS TriggerName,
       ts.*
FROM sys.dm_exec_trigger_stats ts
INNER JOIN sys.triggers t ON ts.object_id = t.object_id
WHERE ts.database_id = DB_ID()
  AND t.name LIKE '%Data_Update%';
2) Second, find the cached plan (XML). For example, if the plan_handle for this trigger is 0x050009009A0A677BB8E09C7A000000000000000000000000, you can use this query to fetch the cached plan:
DECLARE @plan_handle VARBINARY(64) = 0x050009009A0A677BB8E09C7A000000000000000000000000;
SELECT *
FROM sys.dm_exec_query_plan(@plan_handle) qp;
If the join between [Data_Trigger] and inserted uses a CLUSTERED INDEX SCAN on [Data_Trigger], then you have (at least) three options in SQL Server 2008:
1) Run UPDATE STATISTICS on the [Data_Trigger] table: updating statistics causes queries to recompile. After this operation, test the trigger and check the cached plan again to see whether it uses a SEEK.
2) Or, you can rewrite the JOIN in the UPDATE as an IN subquery:
FROM
    [Data_Trigger] data
WHERE data.DataID IN (SELECT DataID FROM inserted)
After this operation, test the trigger and check the cached plan again to see whether it uses a SEEK.
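Put together, a sketch of the rewritten trigger using the IN subquery (assuming the same table and column names as in the question) might look like:

```sql
ALTER TRIGGER [dbo].[Data_Update]
ON [dbo].[Data_Trigger]
FOR UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Touch only the rows modified by this statement; the IN subquery
    -- gives the optimizer a chance to SEEK on DataID.
    UPDATE data
    SET DateLastUpdated = GETDATE()
    FROM [Data_Trigger] data
    WHERE data.DataID IN (SELECT DataID FROM inserted);
END
```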
3) If the SEEK operator is still not used, you can try the FORCESEEK table hint (new in SQL Server 2008; also see the Best Practice Considerations section of its documentation):
FROM
    [Data_Trigger] data WITH (FORCESEEK)
JOIN
    inserted ON data.DataID = inserted.DataID
For a TABLE SCAN, try creating a unique (clustered or nonclustered) index on the DataID column.
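For example, a sketch (the index name is an assumption, and it presumes the table does not already have a clustered index):

```sql
-- A unique index on DataID lets the lookup seek instead of scan.
CREATE UNIQUE CLUSTERED INDEX IX_Data_Trigger_DataID
    ON [dbo].[Data_Trigger] (DataID);
```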

Related

Adding clustered index on temp table to improve performance

I have run an execution plan and noticed that the query is spending its time inserting into temp tables. We have multiple queries that insert into temp tables; I have shared two of them below. How do I add a clustered index to the temp table from within the stored procedure? It needs to create the index on the fly and destroy it afterwards.
if object_id('tempdb..#MarketTbl') is not null drop table #MarketTbl else
select
mc.companyId,
mc.pricingDate,
mc.tev,
mc.sharesOutstanding,
mc.marketCap
into #MarketTbl
from ciqMarketCap mc
where mc.pricingDate > @date
and mc.companyId in (select val from #companyId)
-- pricing table: holds pricing data for the stock price
if object_id('tempdb..#PricingTbl') is not null drop table #PricingTbl else
select
s.companyId,
peq.pricingDate,
ti.currencyId,
peq.priceMid
into #PricingTbl
from ciqsecurity s
join ciqtradingitem ti on s.securityid = ti.securityid
join ciqpriceequity peq on peq.tradingitemid = ti.tradingitemid
where s.primaryFlag = 1
and s.companyId in (select val from #companyId)
and peq.pricingDate > @date
and ti.primaryflag = 1
Execution plan
What you are doing is pure nonsense. You have to speed up your SELECT, not the insert.
And to speed it up you (maybe) need indexes on the tables you select from.
What you are doing now is trying to add a clustered index to a table that does not exist (the error tells you as much!), and the table does not exist because, if it does exist, you drop it.
1. First, if your data is no more than 5 to 10 thousand rows, do not use a temp table; use a table variable instead.
2. You can create the index after inserting the data, using ALTER TABLE syntax.
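A sketch of option 2, based on the first temp table from the question (the constraint name is an assumption, and it presumes (companyId, pricingDate) is unique; otherwise use CREATE CLUSTERED INDEX instead):

```sql
SELECT mc.companyId, mc.pricingDate, mc.tev, mc.sharesOutstanding, mc.marketCap
INTO #MarketTbl
FROM ciqMarketCap mc
WHERE mc.pricingDate > @date
  AND mc.companyId IN (SELECT val FROM #companyId);

-- Add the clustered index only after the data is loaded; a PRIMARY KEY
-- constraint added via ALTER TABLE creates a clustered index implicitly.
ALTER TABLE #MarketTbl
    ADD CONSTRAINT PK_MarketTbl PRIMARY KEY CLUSTERED (companyId, pricingDate);
```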

MS SQL - Delete query taking too much time

I have the following query script:
declare @tblSectionsList table
(
    SectionID int,
    SectionCode varchar(255)
)
-- assume @tblSectionsList has 50 section rows
DELETE td
FROM [dbo].[InventoryDocumentDetails] td
INNER JOIN [dbo].InventoryDocuments th
    ON th.Id = td.InventoryDocumentDetail_InventoryDocument
INNER JOIN @tblSectionsList ts
    ON ts.SectionID = th.InventoryDocument_Section
This script involves three tables, where @tblSectionsList is a table variable that may contain 50 records. I use this table in a join with the InventoryDocuments table, which is further joined to the InventoryDocumentDetails table. All joins are on INT foreign keys.
Over the weekend I put this query on the server, and it is still running even after 2 days and 4 hours. Can anybody tell me if I am doing something wrong, or suggest any way to improve its performance? I don't even know how much more time it will take to return a result.
Before this I also tried to create an index on the InventoryDocumentDetails table with following script:
CREATE NONCLUSTERED INDEX IX_InventoryDocumentDetails_InventoryDocument
ON dbo.InventoryDocumentDetails (InventoryDocumentDetail_InventoryDocument);
But this script also took more than one day and did not finish, so I cancelled it.
Additional info:
I am using MS SQL 2008 R2.
The InventoryDocuments table contains 2,108,137 rows and has primary key Id.
The InventoryDocumentDetails table contains 25,055,158 rows and has primary key Id.
Both tables have their primary keys defined.
CPU: Intel Xeon, with 32 GB RAM.
No other indexes are defined, because whenever I now try to create a new index, that query also gets suspended.
Query Execution Plan (1):
2nd Part:
The following query returns one row for this session, showing status = 'suspended' and wait_type = 'LCK_M_IX':
SELECT r.session_id as spid, r.[status], r.command, t.[text], OBJECT_NAME(t.objectid, t.[dbid]) as object, r.logical_reads, r.blocking_session_id as blocked, r.wait_type, s.host_name, s.host_process_id, s.program_name, r.start_time
FROM sys.dm_exec_requests AS r LEFT OUTER JOIN sys.dm_exec_sessions s ON s.session_id = r.session_id OUTER APPLY sys.dm_exec_sql_text(r.[sql_handle]) AS t
WHERE r.session_id <> @@SPID AND r.session_id > 50
What happens when you change the INNER JOIN to EXISTS?
DELETE td
FROM [dbo].[InventoryDocumentDetails] td
WHERE EXISTS (SELECT 1
FROM [dbo].InventoryDocuments th
WHERE EXISTS (SELECT 1
FROM #tblSectionsList ts
WHERE ts.SectionID = th.InventoryDocument_Section)
AND th.Id = td.InventoryDocumentDetail_InventoryDocument)
It can sometimes be more time-efficient to truncate a table and re-import the records you want to keep. A delete operation on a large table is incredibly slow compared to an insert. Of course, this is only an option if you can take your table offline. Also, only do this if your recovery model is set to SIMPLE.
Drop the triggers on table A.
Bulk copy table A to B.
Truncate table A.
Enable identity insert.
Insert into A from B where A.ID is not in the set of IDs to delete.
Disable identity insert.
Rebuild indexes.
Re-enable the triggers.
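A sketch of those steps in T-SQL (table, column, and ID-list names are assumptions; it presumes dbo.A has an identity column Id):

```sql
-- Stage a copy of the data in a side table.
SELECT Id, Payload INTO dbo.B FROM dbo.A;

-- TRUNCATE is minimally logged, unlike a row-by-row DELETE.
TRUNCATE TABLE dbo.A;

-- Re-insert everything except the rows to delete,
-- preserving the original identity values.
SET IDENTITY_INSERT dbo.A ON;
INSERT INTO dbo.A (Id, Payload)
SELECT b.Id, b.Payload
FROM dbo.B b
WHERE b.Id NOT IN (SELECT Id FROM dbo.IdsToDelete);
SET IDENTITY_INSERT dbo.A OFF;
```

Rebuild the indexes and re-enable the triggers afterwards.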
Try something like the query below. It might at least give you some ideas.
DELETE FROM [DBO].[INVENTORYDOCUMENTDETAILS]
WHERE INVENTORYDOCUMENTDETAILS_PK IN
(
    SELECT TD.INVENTORYDOCUMENTDETAILS_PK
    FROM [DBO].[INVENTORYDOCUMENTDETAILS] TD
    INNER JOIN [DBO].INVENTORYDOCUMENTS TH ON TH.ID = TD.INVENTORYDOCUMENTDETAIL_INVENTORYDOCUMENT
    INNER JOIN @TBLSECTIONSLIST TS ON TS.SECTIONID = TH.INVENTORYDOCUMENT_SECTION
)

Adding a where clause to a SQL Server trigger

I'm looking to add a WHERE clause to the trigger shown below, and would appreciate a bit of advice if possible. Currently the trigger fires for any item added to the order, not just specific ones (ideally identified by a prefix).
CREATE TRIGGER ItalianEmail ON SOPOrderReturn
FOR INSERT, UPDATE
AS
declare @SOPOrderReturnID int;
UPDATE SOPOrderReturn
SET AnalysisCode19 = 'mario@aol.com'
FROM SOPOrderReturn
INNER JOIN INSERTED i ON SOPOrderReturn.SOPOrderReturnID = i.SOPOrderReturnID
GO
The layout of the tables in SQL Server is the following:
SOPOrderReturn [Header table] -- holds order information (has primary key SOPOrderReturnID)
SOPOrderReturnLine [Order line table] -- stores the item data for the order
(has primary key SOPOrderReturnLineID and a foreign key SOPOrderReturnID)
I need the WHERE clause to pick up the StockItem on the SOPOrderReturnLine table if it's LIKE 'XXX_%'.
I hope I have explained enough of the structure of the tables for you to get an idea of what I would like to achieve.
Any help offered is gratefully appreciated, and I thank you for your time.
Try the following. Notice the alias on the UPDATE target, and the additional join to SOPOrderReturnLine that the WHERE clause filters on.
CREATE TRIGGER ItalianEmail ON SOPOrderReturn
FOR INSERT, UPDATE
AS
UPDATE oRet
SET AnalysisCode19 = 'mario@aol.com'
FROM SOPOrderReturn oRet
INNER JOIN INSERTED i ON (oRet.SOPOrderReturnID = i.SOPOrderReturnID)
INNER JOIN SOPOrderReturnLine oRetLine ON (oRetLine.SOPOrderReturnID = i.SOPOrderReturnID)
WHERE oRetLine.StockItem LIKE 'XXX%'
GO

How can I update statistics in SQL Server 2012 without using sp_updatestats?

I have read that if I use the command:
EXEC sp_updatestats
it creates statistics based on an estimated 20,000 rows per table. I am not sure what this means, as I have many tables with fewer than 20 rows.
Can someone advise whether there is a more accurate way to update statistics that does not involve entering a command for every table?
Why do you want to collect statistics any other way than the one Microsoft recommends?
more accurate way to update statistics that will not involve my
entering a command for every table
This command updates statistics for all tables in the current database, so you don't need to enter a command for every table:
EXEC sp_updatestats;
You can also use the UPDATE STATISTICS command. I don't know of any advantage of UPDATE STATISTICS over sp_updatestats, and I think you can use either of them.
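For a single table it looks like this (the table name is a placeholder; WITH FULLSCAN reads every row for maximum accuracy at a higher cost):

```sql
-- Refresh all statistics on one (hypothetical) table with a full scan.
UPDATE STATISTICS dbo.YourTable WITH FULLSCAN;
```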
This is a good way to collect up-to-date statistics for your data, but be aware that it can be a heavy operation and require a lot of server resources. If possible, I recommend collecting statistics when most users are not working with the data.
You can find other maintenance solutions (like rebuilding and reorganizing indexes) in this post.
The following query shows the list of tables whose statistics need updating. You can open a cursor over its result and update the statistics of each table it returns.
SELECT SchemaName, ObjectName, StatisticName, [RowCount], UpdatedCount
FROM (
    SELECT SCHEMA_NAME(o.schema_id) AS SchemaName,
           OBJECT_NAME(o.object_id) AS ObjectName,
           s.name AS StatisticName,
           b.rows AS [RowCount],
           b.modification_counter AS UpdatedCount,
           b.modification_counter * 100.0 / b.rows AS UpdatePercent,
           b.rows * -0.00001846153 + 19.538461 AS threshold
    FROM sys.stats s
    CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) b
    INNER JOIN sys.objects o ON o.object_id = s.object_id
    WHERE OBJECTPROPERTY(o.object_id, 'IsUserTable') = 1
      AND b.rows > 0
) z
WHERE z.UpdatePercent > 20
   OR (z.[RowCount] >= 25000 AND z.[RowCount] <= 1000000 AND z.UpdatePercent > 2 AND z.UpdatePercent > z.threshold)
   OR (z.[RowCount] > 1000000 AND z.[RowCount] <= 10000000 AND z.UpdatePercent > 1)
   OR (z.[RowCount] > 10000000 AND z.[RowCount] <= 20000000 AND z.UpdatePercent > 0.5)
   OR (z.[RowCount] > 20000000 AND z.[RowCount] <= 30000000 AND z.UpdatePercent > 0.25)
ORDER BY z.UpdatePercent
When you enable auto-update of statistics (ALTER DATABASE YourDatabase SET AUTO_UPDATE_STATISTICS ON), SQL Server automatically updates a table's statistics once roughly 20% of its rows have been modified. That seems fine at first glance, but 20% of a small table is very different from 20% of a large one. In other words, if your table has 100 rows, its statistics are refreshed after 20 modified rows; if your table has 100,000,000 rows, SQL Server waits until 20,000,000 rows have been modified, and that many modifications take a very long time to accumulate. A small table can afford to wait until 20% of its rows have changed, but a large table needs its statistics updated when perhaps 1% of its rows have changed. My query lists the tables that need a statistics update according to their row count and modified-row count.
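A sketch of such a cursor (it assumes the result of the query above has been captured into a temp table #StaleStats with SchemaName and ObjectName columns):

```sql
DECLARE @schemaName sysname, @objectName sysname, @sql nvarchar(max);

DECLARE stats_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT DISTINCT SchemaName, ObjectName FROM #StaleStats;

OPEN stats_cur;
FETCH NEXT FROM stats_cur INTO @schemaName, @objectName;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- QUOTENAME guards against unusual identifiers.
    SET @sql = N'UPDATE STATISTICS '
             + QUOTENAME(@schemaName) + N'.' + QUOTENAME(@objectName) + N';';
    EXEC sys.sp_executesql @sql;
    FETCH NEXT FROM stats_cur INTO @schemaName, @objectName;
END

CLOSE stats_cur;
DEALLOCATE stats_cur;
```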

Update Query issue in SQL for multiple rows

I am trying to use an update query, but so far it keeps failing on me and I don't understand what I am doing wrong. I am getting the error 'Update canceled: attempt to update a target row with values from multiple join rows'. I know the table called OTHER_TABLE has duplicate records. Here is my current query:
UPDATE MAINTABLE
SET BLDG_NBR = DM.BLDG_NBR
FROM OTHER_TABLE DM
WHERE MAINTABLE.BLDG_NM = DM.BLDG_NM
You need to join the two tables:
UPDATE MAINTABLE
SET BLDG_NBR = DM.BLDG_NBR
FROM MAINTABLE INNER JOIN OTHER_TABLE DM
ON MAINTABLE.BLDG_NM = DM.BLDG_NM
According to your comment, you have no indexes on these tables, and because of that the update performs a full table scan. Try adding an index on both tables before executing the update statement:
CREATE NONCLUSTERED INDEX MAINTABLE_BLDGNM_idx ON MAINTABLE(BLDG_NM);
CREATE NONCLUSTERED INDEX OTHERTABLE_BLDGNM_idx ON OTHER_TABLE(BLDG_NM);
