We have a lot of operations that time out in our site log.
After installing Redgate SQL Monitor on the server, we found that we have many blocked processes and sometimes deadlocks.
With Redgate we traced the problem to a stored procedure. It's a simple stored procedure that just increases the view count of a product (a single UPDATE):
ALTER PROCEDURE [dbo].[SP_IncreaseProductView]
    @PID int
AS
BEGIN
    UPDATE dbo.stProduct
    SET ViewCount = ViewCount + 1
    WHERE ID = @PID
END
When that stored procedure is disabled everything is fine, but sometimes the blocked-process errors come back anyway.
The product table has no triggers, but it is the heart of the system and has 12,000 records.
It has 3 indexes (1 clustered and 2 non-clustered) and many statistics.
We don't use any explicit transactions. The blocking mostly happens on the update query.
How can I figure out where the problem is?
Sorry for my bad English.
Thanks
Edit:
I think the problem isn't the SP itself but the updates on the product table (that's my opinion). It's a large table. I still get blocked processes when the SP is off, just fewer of them.
We have a lot of selects and updates on this table.
I also rewrote the view-count increment with LINQ to SQL, but I still get blocked processes just like when the SP is on.
Edit 2:
I set up Profiler and captured all queries on the product table.
There are about 530 selects (most joining two other tables) and 25 updates per minute on the product table alone.
For now, [SP_IncreaseProductView] is disabled, because when it is on we get blocked processes and operation timeouts roughly every 10 seconds and the web site stops responding.
With the SP disabled, blocked processes still occur, but only around 50 per day.
I would go with Ingaz' second solution, but further optimizations or simplifications can be performed:
1) Store the view count in a separate 1:1 product table. This is particularly useful when some queries do not need the view count.
2) View count redundancy, sketched below:
- keep the view count in the product table and read it from there
- also define the view count in another table (just ProductId and ViewCount columns)
- the application updates only the second table, asynchronously
- a job updates the product table based on data from the second table
This ensures that locking affects the product table much less than independent updates would.
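A minimal sketch of option 2, assuming a hypothetical side table named stProductViewCount and that stProduct has an int ID column:

-- side table holding only the counters (all names are placeholders)
CREATE TABLE dbo.stProductViewCount
(
    ProductID int NOT NULL PRIMARY KEY,
    ViewCount int NOT NULL DEFAULT (0)
);

-- the application only touches the small side table
UPDATE dbo.stProductViewCount
SET ViewCount = ViewCount + 1
WHERE ProductID = @PID;

-- a scheduled job periodically copies the counters back into the product table
UPDATE p
SET p.ViewCount = v.ViewCount
FROM dbo.stProduct AS p
JOIN dbo.stProductViewCount AS v ON v.ProductID = p.ID;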
It is expected: I suppose you have a lot of updates hitting a single small (12K rows) table.
You can:
Work around your problem
Put a ROWLOCK hint in your UPDATE
Change the database option to READ_COMMITTED_SNAPSHOT
Be warned though: these can create other problems. Both are sketched below.
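A sketch of both workarounds, assuming the UPDATE runs inside the stored procedure and using a placeholder database name (enabling READ_COMMITTED_SNAPSHOT needs exclusive access to the database, so do it in a maintenance window):

-- row-level lock hint on the hot update
UPDATE dbo.stProduct WITH (ROWLOCK)
SET ViewCount = ViewCount + 1
WHERE ID = @PID;

-- make readers use row versioning instead of shared locks
ALTER DATABASE YourDatabase SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;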
More complex recipe to eliminate blocking completely
Not for the faint of heart.
Create a table dbo.stProduct_Increment.
Modify your [SP_IncreaseProductView] to INSERT into the increment table.
Create a periodic task that UPDATEs your dbo.stProduct and clears the increment table.
Create a view that combines stProduct and stProduct_Increment.
Point all SELECT statements for stProduct to the created view. A rough sketch follows.
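A rough sketch of that recipe, with assumed column names (ID and ViewCount on stProduct) and a hypothetical view name vw_stProduct:

-- increment table: the SP only appends, so it never blocks readers of stProduct
CREATE TABLE dbo.stProduct_Increment
(
    ProductID int NOT NULL,
    Hits      int NOT NULL DEFAULT (1)
);
GO

ALTER PROCEDURE dbo.SP_IncreaseProductView
    @PID int
AS
BEGIN
    INSERT dbo.stProduct_Increment (ProductID) VALUES (@PID);
END
GO

-- readers see the base count plus the not-yet-applied increments
-- (only the relevant columns shown; add the other product columns as needed)
CREATE VIEW dbo.vw_stProduct
AS
SELECT p.ID,
       p.ViewCount + ISNULL(i.Hits, 0) AS ViewCount
FROM dbo.stProduct AS p
LEFT JOIN (SELECT ProductID, SUM(Hits) AS Hits
           FROM dbo.stProduct_Increment
           GROUP BY ProductID) AS i ON i.ProductID = p.ID;
GO

-- periodic job: move pending increments out atomically, then fold them into stProduct
CREATE TABLE #pending (ProductID int NOT NULL, Hits int NOT NULL);

DELETE dbo.stProduct_Increment
OUTPUT deleted.ProductID, deleted.Hits INTO #pending (ProductID, Hits);

UPDATE p
SET p.ViewCount = p.ViewCount + i.Hits
FROM dbo.stProduct AS p
JOIN (SELECT ProductID, SUM(Hits) AS Hits
      FROM #pending
      GROUP BY ProductID) AS i ON i.ProductID = p.ID;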
I need information on what the impact might be for a production DB of creating triggers on ~30 production tables that capture any Update, Delete and Insert statement and write the following information to a separate table: "PK", "Table Name", "Time of modification".
I have limited ability to test it, as I have read-only permissions on both the Prod and Test environments (and I can get one work day with 10 end users to test it).
I have estimated that the number of records inserted by those triggers will be around 150-200k daily.
Background:
I have a project to deploy a Data Warehouse for a database that is heavily customized, plus there are jobs running every day that manipulate the data. The "Updated on" date column is not maintained (a customization) and there are hard deletes occurring on the tables. We decided to ask the DEV team to add triggers like:
CREATE TRIGGER [dbo].[triggerName] ON [dbo].[ProductionTable]
FOR INSERT, UPDATE, DELETE
AS
    -- rows added or changed (INSERT / UPDATE)
    INSERT INTO For_ETL_Warehouse (Table_Name, Regular_PK, Insert_Date)
    SELECT 'ProductionTable', PK_ID, GETDATE() FROM inserted
    -- rows changed or removed (UPDATE / DELETE)
    INSERT INTO For_ETL_Warehouse (Table_Name, Regular_PK, Insert_Date)
    SELECT 'ProductionTable', PK_ID, GETDATE() FROM deleted
on ~30 core production tables.
Based on this table we will pull the delta from the last 24 hours and push it to the Data Warehouse staging tables.
If anyone has had a similar issue and can help me estimate how it may impact performance on the production database, I would really appreciate it. (If it works, I am saved; if not, I need to propose another solution. Currently mirroring or replication might be hard to get, as the local DEVs have no idea how to set them up...)
Other ideas on how to handle this situation or perform tests are welcome (my deadline is Friday 26-01).
First of all I would suggest you encode the table name as a small numeric code rather than a character value (30 tables => tinyint).
Second of all you need to understand how big the payload you are going to write is, and how it is written:
if you choose a correct clustered index (the date column), then the server just has to put the data row by row in sequence. That is an easy job even if you write all 200k rows at once.
if you encode the table name as a tinyint, then basically it has to write:
1 byte (table name) + the PK size (hopefully numeric, so <= 8 bytes) + 8 bytes for the datetime, so approx 17 bytes per row on the data page, plus indexes if any, plus the log file. This is very lightweight and again will put no "real" pressure on SQL Server.
The trigger itself will add a small overhead, but with the number of rows you are talking about, it is negligible.
I have seen systems that do similar things at a much larger scale with close to zero effect on the workload, so I would say it's a safe bet. The only problem with this approach is that it will not work in some cases (e.g. OUTPUT to temp tables from DML statements). But if you do not have that kind of blocker then go for it.
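A sketch of the audit table along those lines (all names are placeholders; the clustered index on the date column keeps the writes append-only):

CREATE TABLE dbo.For_ETL_Warehouse
(
    Table_Code  tinyint  NOT NULL,                    -- lookup code instead of the table name
    Regular_PK  bigint   NOT NULL,
    Insert_Date datetime NOT NULL DEFAULT (GETDATE())
);

-- new rows always land at the end of the clustered index
CREATE CLUSTERED INDEX CIX_For_ETL_Warehouse ON dbo.For_ETL_Warehouse (Insert_Date);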
I hope it helps.
I have a database table with more than 1 million records, uniquely identified by a GUID column. I want to find out which of these rows were selected or retrieved in the last 5 years. The select query can happen from multiple places. Sometimes a row is returned on its own; sometimes it is part of a set of rows. There is a select query that does the fetching over a JDBC connection from Java code, and a SQL procedure also fetches data from the table.
My intention is to clean up the database table. I want to delete all rows which were never used (retrieved via a select query) in the last 5 years.
Does Oracle DB have any built-in metadata which can give me this information?
My alternative solution was to add a LAST_ACCESSED column and update it whenever I select a row from this table. But that is a costly operation for me in terms of the time taken for the whole process. At least 1,000-10,000 records will be selected from the table in a single operation. Is there any more efficient way to do this than updating the table after reading it? Mine is a multi-threaded application, so updating such a large data set may result in deadlocks or long waits for the next read query.
Any elegant solution to this problem?
Oracle Database 12c introduced a new feature called Automatic Data Optimization that brings you Heat Maps to track table access (modifications as well as read operations). Careful: the feature currently has to be licensed under the Advanced Compression Option or the In-Memory Option.
Heat Maps track whenever a database block has been modified or whenever a segment, i.e. a table or table partition, has been accessed. They do not track select operations per individual row, nor per individual block, because the overhead would be too heavy (data is generally read often and concurrently; having to keep a counter for each row would quickly become a very costly operation). However, if you have your data partitioned by date, e.g. you create a new partition for every day, you can over time easily determine which days are still read and which ones can be archived or purged. Note that Partitioning is also an option that needs to be licensed.
Once you have reached that conclusion you can then either use In-Database Archiving to mark rows as archived or just go ahead and purge the rows. If you happen to have the data partitioned you can do easy DROP PARTITION operations to purge one or many partitions rather than having to issue conventional DELETE statements.
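As a rough illustration (the table, column and partition names are made up; ROW ARCHIVAL, heat maps and DROP PARTITION all depend on edition and licensing):

-- enable in-database archiving on the table
ALTER TABLE my_table ROW ARCHIVAL;

-- "soft delete": archived rows disappear from normal queries
UPDATE my_table
SET ORA_ARCHIVE_STATE = '1'
WHERE created_date < ADD_MONTHS(SYSDATE, -60);

-- or, with date partitioning, purge a whole stale partition at once
ALTER TABLE my_table DROP PARTITION p_old;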
I couldn't use any built-in solutions. I tried the solutions below:
1) the DB audit feature for select statements
2) adding a trigger to update a date column whenever a select query is executed on the table
Both were discarded. Auditing uses up a lot of space and has a performance hit. Similarly, the trigger also had a performance hit.
Finally I resolved the issue by maintaining a separate table into which entries older than 5 years that are still used or selected by a query are inserted. When deleting, I cross-check this table and avoid deleting entries present in it.
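A minimal sketch of that approach, with made-up table and column names:

-- the read paths record any old row they touch
CREATE TABLE still_used_rows
(
    row_guid  RAW(16) NOT NULL,
    last_seen DATE    DEFAULT SYSDATE NOT NULL,
    CONSTRAINT pk_still_used_rows PRIMARY KEY (row_guid)
);

-- cleanup: delete old rows that were never recorded as used
DELETE FROM big_table b
WHERE b.created_date < ADD_MONTHS(SYSDATE, -60)
  AND NOT EXISTS (SELECT 1 FROM still_used_rows u WHERE u.row_guid = b.row_guid);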
We have a stored procedure which loads the details of an order. We always want the latest information about an order, so the order details are regenerated every time the stored procedure is called. We are using SQL Server 2016.
Pseudo code:
DELETE by clustered index based on order identifier
INSERT into the table, based on a huge query containing information about order
When multiple end-users execute the stored procedure concurrently, blocking occurs on the orderdetails table. Once the first caller is done, the second caller is processed, followed by the third caller, so the time taken to generate the order details keeps growing. This happens especially with big orders containing more than 100k (or 1-2 million) detail rows, because table-level locking kicks in.
The approach we took
We partitioned the table based on the last digit of the order identifier to allow concurrent orderdetails loading (a sketch of the scheme follows). This improves performance when the order details are loaded for the first time, since there are no deletes. But from the second time onwards, the INSERT in the first session blocks the DELETEs of the other sessions; the other sessions are blocked until the first session is done with its INSERT.
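For context, a sketch of that partitioning scheme (all names are invented; it assumes a persisted computed column holding the last digit of the order id):

CREATE PARTITION FUNCTION pf_LastDigit (tinyint)
AS RANGE LEFT FOR VALUES (0, 1, 2, 3, 4, 5, 6, 7, 8);   -- 10 partitions, one per digit

CREATE PARTITION SCHEME ps_LastDigit
AS PARTITION pf_LastDigit ALL TO ([PRIMARY]);

CREATE TABLE dbo.orderdetails
(
    OrderID   bigint NOT NULL,
    LineNum   int    NOT NULL,
    Qty       int    NOT NULL,                           -- placeholder detail column
    LastDigit AS CAST(OrderID % 10 AS tinyint) PERSISTED NOT NULL
) ON ps_LastDigit (LastDigit);

-- aligned clustered index so each order lives in exactly one partition
CREATE CLUSTERED INDEX CIX_orderdetails
    ON dbo.orderdetails (LastDigit, OrderID, LineNum)
    ON ps_LastDigit (LastDigit);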
We are considering creating a separate orderdetails table for every order to avoid these concurrency issues.
Question
Can you please suggest an approach that will support this concurrent DELETE & INSERT scenario?
We solved the contention issue by using a temporary table for the order details. We found that the huge queries had long SELECT times, and this long time was what kept the table-level locks on the orderdetails table.
So we first loaded the data into a temporary table #orderdetail and only then performed the DELETE and INSERT on the orderdetail table.
As the orderdetail table is already partitioned, the DELETEs were faster and the INSERTs ran in parallel. The INSERT was also very fast here, as it is a simple table scan of the #orderdetail table.
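A sketch of that pattern inside the stored procedure (the source query, column and parameter names are assumptions):

-- run the expensive query first, without touching orderdetail
SELECT od.OrderID, od.LineNum, od.Qty                   -- the huge query with joins goes here
INTO #orderdetail
FROM dbo.OrderDetailSource AS od
WHERE od.OrderID = @OrderID;

-- the write phase is now short, so locks are held only briefly
BEGIN TRAN;
    DELETE FROM dbo.orderdetail WHERE OrderID = @OrderID;

    INSERT INTO dbo.orderdetail (OrderID, LineNum, Qty)
    SELECT OrderID, LineNum, Qty FROM #orderdetail;
COMMIT;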
You can take a look at the Hekaton engine (In-Memory OLTP). It is available even in SQL Server Standard Edition if you are on SP1.
If this is too complicated to implement due to hardware or software limitations, you can try playing with the isolation levels of the database. Sometimes queries that read huge amounts of data are blocked by, or even become deadlock victims of, queries which are modifying parts of that data. Ask yourself whether you need to guarantee that the data read by the user is valid, or whether you can afford, for example, some dirty reads.
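Two possibilities, sketched (the database name is a placeholder):

-- per session: let a read-only report tolerate dirty reads
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

-- or database-wide: let readers work from row versions instead of taking shared locks
ALTER DATABASE YourDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;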
I'm working on updating a legacy stored procedure (which calls several other child stored procedures). Within a transaction, it manipulates data in about a dozen tables and performs lots of calculations in the process, sometimes triggering lock escalation up to a table lock. This process can take 20 minutes or more to complete in some cases. Obviously, locking tables for that long is a big no-no. So I'm working on a 2-stage plan: reduce the blocking caused by this sproc in phase 1, and completely rewrite it in phase 2 to be more efficient and not take an inordinate amount of time.
In order to reduce the blocking, wherever there is manipulation on the database tables, I plan to move that manipulation into a temporary table. By doing all of the work in temporary table and then updating the real tables with the final results at the very end of the process, I should be able to reduce the time spent blocking other users, significantly. (That's the "quick fix" for phase 1.)
Here's my issue: some of these temp tables might have 100,000 rows or more in them while I use them for various calculations. Because of this I would like to create indexes on the temp tables to keep performance up. And since these are temp tables created within a stored procedure, they need to have unique names to avoid errors if multiple users execute the sproc at the same time. I know that I can manually declare the temp tables using CREATE TABLE statements, and if I do that I can specify an index without a name and let SQL Server create the name for me. What I'm hoping to do is use SELECT * INTO to generate the temp table and find another way to get SQL Server to auto-generate index names. I'm sure you're asking "Why?" My company has several changes in store for the system I'm working with. If I can use the SELECT INTO method, then if a column gets added or resized or whatever, the developers won't need to remember to go back into these stored procedures and change their temp table definitions to match. Using SELECT INTO automatically keeps the temp tables matching the layout of the "real" tables.
So, does anyone know of a way to get SQL Server to auto-generate the name for an index on a temp table (aside from doing it as part of the CREATE TABLE syntax)?
Thank you!
And since these are temp tables that are created within a stored procedure, they need to have unique names to avoid errors if multiple users execute the sproc at the same time.
No, they don't. Each session gets its own copy of the temp tables, and they are automatically cleaned up.
And index names are not globally scoped, so each temp table can have the same index names, e.g.
create procedure TempTest
as
begin
    -- #t picks up the current layout of sys.objects automatically
    select * into #t from sys.objects

    -- the index name only has to be unique per table, not per database
    create index foo on #t(name)

    waitfor delay '00:00:10'

    select * from #t
end
And you can run
exec temptest
go 10
from multiple sessions.
I want to place DB2 triggers for INSERT, UPDATE and DELETE on DB2 tables heavily used by parallel online transactions. The tables are shared by several members of a Sysplex, DB2 version 10.
In each of the DB2 triggers I want to insert a row into a central table, and have one background process call a stored procedure that reads this table every second to process the newly inserted rows, ordered by the sequence of the inserts (sequence number or timestamp).
I'm very concerned about DB2 index locking contention and want to make sure that I do not introduce deadlocks/timeouts into the applications with these triggers.
Obviously I would take advantage of DB2 features to reduce locking, like row-level locking, but I still see no really good approach to avoid index contention.
I see three different options for selecting the newly inserted rows.
Put a sequence number in the table and store the last processed sequence number in the background process. I would use the following SELECT statement:
SELECT COLUMN_1, .... Column_n
FROM CENTRAL_TABLE
WHERE SEQ_NO > 'last-seq-number'
ORDER BY SEQ_NO;
The locking level must be CS to avoid selecting uncommitted rows, which may later be rolled back.
I think I need an index on the table with SEQ_NO ASC.
Pro: the background process only reads rows and makes no updates/deletes (only shared locks).
Con: index contention because of the ascending key.
I can clean up processed records later (e.g. by rotating partitions).
Put a status field in the table (processed and unprocessed) and change the SELECT as follows:
SELECT COLUMN_1, .... Column_n
FROM CENTRAL_TABLE
WHERE STATUS = 'unprocessed'
ORDER BY TIMESTAMP;
Later I would update the STATUS of the selected rows to "processed".
I think I need an index on STATUS.
Pro: no ascending sequence number in the index and no direct deletes.
Con: concurrent updates by the online transactions and the background process.
Clean-up would happen during off-hours.
DELETE the processed records instead of updating a status field:
SELECT COLUMN_1, .... Column_n
FROM CENTRAL_TABLE
ORDER BY TIMESTAMP;
Since the table contains very few records, no index is required that could create a hot spot.
Also, I think I could SELECT with isolation level UR, because I would detect potentially uncommitted data on the later delete of that row.
For the primary key index I could use GENERATE_UNIQUE, which is random and not ascending.
Pro: no index hot spot, and the inserts can be spread across the tablespace by the random unique ID.
Con: a tablespace scan and sort on every call of the stored procedure, and deleting records in parallel with the online inserts.
Looking forward to what the community thinks about this problem. This must be a pretty common problem; e.g. SAP should have a similar issue with its batch input tables.
I tend to favour option 3, because it avoids index contention.
Maybe there is still another solution out there in your minds.
I think you are going to have numerous performance problems with your various solutions.
(I know premature optimization is a sin, but experience tells us that some things are just not going to work in a busy system.)
You should be able to use DB2's identity column (auto-increment) feature to get your sequence number, with little or no performance impact.
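For instance, a sketch of the central table with an identity-generated sequence number (the column names are made up; the CACHE clause is there to reduce contention on the number generator):

CREATE TABLE CENTRAL_TABLE
(
    SEQ_NO     BIGINT      NOT NULL
               GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1, CACHE 100),
    TABLE_NAME VARCHAR(32) NOT NULL,
    ROW_KEY    VARCHAR(64) NOT NULL,
    OPERATION  CHAR(1)     NOT NULL,                  -- 'I', 'U' or 'D'
    CHANGE_TS  TIMESTAMP   NOT NULL WITH DEFAULT CURRENT TIMESTAMP
);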
For the rest, perhaps you should look at a queue-based solution.
Have your trigger drop the operation (INSERT/UPDATE/DELETE) and the keys of the row onto an MQ queue.
Then have a long-running background task (in CICS?) do your post-processing. Since it processes one update at a time, you should not trip over yourself. A single, permanently loaded and active task with the ability to batch up units of work should give you throughput on the order of three to five hundred updates a second.