My source table, called Event, sits in a different database and has millions of rows. Each event can have an action of DELETE, UPDATE, or NEW.
We have a Java process that goes through these events in the order they were created, applies all sorts of rules, and then inserts the results into multiple tables for lookup, analysis, etc.
I am using JdbcTemplate with batchUpdate to delete and upsert into a Postgres DB sequentially right now, but I'd like to be able to run it in parallel too. Each batch is 1,000 entities to be inserted/upserted or deleted.
However, even running sequentially, Postgres is somehow blocking queries, and I don't know much about how or why.
Here is some of the code:
entityService.deleteBatch(deletedEntities);
indexingService.deleteBatch(deletedEntities);
...
entityService.updateBatch(allActiveEntities);
indexingService.updateBatch(....);
Each of these services inserts into/deletes from different tables. They all run in one transaction, though.
The following query
SELECT
activity.pid,
activity.usename,
activity.query,
blocking.pid AS blocking_id,
blocking.query AS blocking_query
FROM pg_stat_activity AS activity
JOIN pg_stat_activity AS blocking ON blocking.pid = ANY(pg_blocking_pids(activity.pid));
returns
Query being blocked: "insert INTO ENTITY (reference, seq, data) VALUES($1, $2, $3) ON CONFLICT ON CONSTRAINT ENTITY_c DO UPDATE SET data = $4"
Blocking query: "delete from ENTITY_INDEX where reference = $1"
There are no foreign key constraints between these tables. And we do have indexes so that we can run the queries needed for our processing.
Why would operations on one table block queries against a completely different table? And how can we go about resolving this?
Your query is misleading.
What it shows as “blocking query” is really the last statement that ran in the blocking transaction.
It was probably a previous statement in the same transaction that caused entity (or rather a row in it) to be locked.
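One hedged way to dig further is to inspect pg_locks for the blocking backend. It won't show individual row locks, but it does show which relations the transaction has touched (e.g. RowExclusiveLock entries from earlier statements) and its transaction ID; the PID below is a placeholder for the blocking_id returned by your query:

SELECT l.pid,
       l.locktype,
       l.relation::regclass AS locked_relation,
       l.mode,
       l.granted,
       a.xact_start,
       a.state,
       a.query AS last_statement_only   -- still just the latest statement, as noted above
FROM pg_locks AS l
JOIN pg_stat_activity AS a ON a.pid = l.pid
WHERE l.pid = 12345                      -- placeholder: the blocking PID
ORDER BY l.relation;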
Related
I experienced a very strange occurrence relating to a multi-query transaction. After SQL Server was updated from 2008 to 2016 (with no warning from our host), we started dropping data after it was posted to the API. The weird thing is, some of the data arrived, and some didn’t.
To protect integrity, the queries are all combined into one transaction. The records can be created and then updated at a later time. They look similar to this:
DELETE FROM table_1 WHERE parentID = 123 AND col2 = 321;
DELETE FROM table_2 WHERE parentID = 123 AND col2 = 321;
-- etc
INSERT INTO table_1 (parentID, col2, etc) VALUES (123, 321, 123456);
INSERT INTO table_2 (parentID, col2, etc) VALUES (123, 321, 654321);
-- etc
There could be hundreds of lines being executed. Due to design, the records in question do not have unique IDs, so the most performant way to execute the queries was to first delete the matching records, then re-insert them. Looping through the records and checking for existence is the only other option (as far as I know), and that is expensive with that many records.
Anyway, I was struggling to find a reason for this data loss, which seemed random. I had logs of the SQL queries, so I know they were formatted correctly and had all the data intact. Finally, the only thing left I could think of was to separate the DELETE queries into their own transaction and execute them first*. That seems to have fixed the problem.
Q. Does anyone know if these queries could be executed out of the order in which they were presented? Do you see a better way I could be writing these transactions?
* I don't necessarily like this solution, because the delete queries were the main reason I wanted a transaction in the first place. If an error occurs during the second transaction, then all the older matching records have been deleted, but the newer versions are never saved. Living on the edge...
P.S. One other problem I had (probably due to my ignorance of the platform): when I tried to bracket these queries with BEGIN TRAN; and COMMIT TRAN;, then immediately after the script finished, any following queries in the same thread got hung up for about 20-30 seconds. What am I doing wrong? Do I actually need these statements if all the queries are being executed at once?
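For completeness, the bracketed version looked roughly like this (same placeholder values as the example above):

BEGIN TRAN;

DELETE FROM table_1 WHERE parentID = 123 AND col2 = 321;
DELETE FROM table_2 WHERE parentID = 123 AND col2 = 321;
-- etc

INSERT INTO table_1 (parentID, col2, etc) VALUES (123, 321, 123456);
INSERT INTO table_2 (parentID, col2, etc) VALUES (123, 321, 654321);
-- etc

COMMIT TRAN;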
We could use a bit more information, such as whether there is a unique constraint on your table that ignores duplicate inserts.
If the data is missing, it could be that an insert failed, which would register an entry in the Profiler event "User Error Message" under the "Errors and Warnings" event class. Create a trace filtered to this login only and check each statement for any user errors raised in the trace.
If you have other processes running (other applications or threads), it is possible that after you inserted the records, another process deleted those rows without your knowledge. In this case, you might want to set up a trigger to log all update and delete actions on the table and see which user is performing these actions. In short, if you think you have lost data, then either the command was not executed, it executed with an error, or the rows were deleted by another process after execution.
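As a rough illustration of that logging idea (the audit table, trigger name, and columns here are placeholders, not your schema):

CREATE TABLE dbo.table_1_audit (
    auditID    int IDENTITY(1,1) PRIMARY KEY,
    actionType char(1)   NOT NULL,                     -- 'U' = update, 'D' = delete
    parentID   int       NULL,
    col2       int       NULL,
    changedBy  sysname   NOT NULL DEFAULT SUSER_SNAME(),
    changedAt  datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
);
GO
CREATE TRIGGER dbo.trg_table_1_audit
ON dbo.table_1
AFTER UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- "deleted" holds the pre-change images of updated or deleted rows.
    INSERT INTO dbo.table_1_audit (actionType, parentID, col2)
    SELECT CASE WHEN EXISTS (SELECT 1 FROM inserted) THEN 'U' ELSE 'D' END,
           d.parentID, d.col2
    FROM deleted AS d;
END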
Environment: Oracle 12C
I have a table with about 10 columns, including a few CLOB and date columns. This is a very busy table for an ETL process, as described below.
Flat files are loaded into the table first, then the rows are updated and processed. The inserts and updates happen in batches. Millions of records are inserted and updated.
There is also a delete process that removes old data from the table based on a date field. It runs as a PL/SQL procedure and deletes from the table in a loop, fetching only the first n records at a time based on the date field.
I do not want the delete process to interfere with the regular inserts/updates. What is the best practice for coding the delete so that it has minimal impact on the regular insert/update process?
I could also partition the table and delete in parallel, since each partition uses its own rollback segment, but I am looking for a simpler way to tune the delete process.
Any suggestions on using a special rollback segment or other tuning tips?
The first thing you should look at is decoupling the various ETL processes so that you don't have to run all of them together or in a particular sequence, thereby removing the dependency between the INSERTs/UPDATEs and the DELETEs. While the insert/update can be handled in a single MERGE block in your ETL, you can defer the delete by simply marking the rows to be deleted later, i.e. a soft delete, using a flag column in your table, and filtering on that flag in your application and queries.
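As a minimal sketch of that soft-delete flag (the table name, flag column, and 90-day cutoff are assumptions for illustration):

ALTER TABLE etl_stage ADD (delete_flag CHAR(1) DEFAULT 'N' NOT NULL);

-- Mark old rows instead of deleting them in the critical path:
UPDATE etl_stage
SET    delete_flag = 'Y'
WHERE  load_date < SYSDATE - 90;

-- Application queries simply exclude the marked rows:
SELECT * FROM etl_stage WHERE delete_flag = 'N';

-- A separate, off-peak job physically removes them later:
DELETE FROM etl_stage WHERE delete_flag = 'Y';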
By doing the delete later, the critical path of your ETL should shrink. Partitioning the data by date range should definitely help you maintain the data and also make the transactions efficient if they are date-driven. Also, look for any row-by-row ("slow-by-slow") processing and convert it to bulk operations. Avoid context switching between SQL and PL/SQL as much as possible.
If you partition the table by date range, then you could look into DROP/TRUNCATE PARTITION, which discards the rows stored in that partition as a DDL statement. This cannot be rolled back. It executes quickly and uses few system resources (undo and redo). You can read more about it in the documentation.
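For example, assuming date-range partitions with made-up names:

-- Discard an entire old partition as a DDL operation (cannot be rolled back):
ALTER TABLE etl_stage DROP PARTITION p_2015_01 UPDATE GLOBAL INDEXES;

-- Or keep the partition definition and just empty it:
ALTER TABLE etl_stage TRUNCATE PARTITION p_2015_01 UPDATE INDEXES;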
I have a SQL Server 2012 table that will contain 2.5 million rows at any one time. Items are always being written into the table, but the oldest rows in the table get truncated at the end of each day during a maintenance window.
I have .NET-based reporting dashboards that usually report against summary tables, though on the odd occasion they do need to fetch a few rows from this table, making use of the indexes that have been set up.
When a dashboard does report against this table, it can prevent new rows from being written to the table for up to a minute, which is very bad for the product.
As it is a reporting platform and the rows in this table never get updated (only inserted - think Twitter streaming but for a different kind of data), it isn't always necessary to wait for a gap in the transactions that insert rows into this table.
When it comes to selecting data for reporting, would it be wise to use the SNAPSHOT isolation level within a transaction to select the data, or NOLOCK/READ UNCOMMITTED? Would creating a SqlTransaction around the select statement still cause the inserts to block? At the moment I am not wrapping my SqlCommand instance in a transaction, though I realise this will still cause locking regardless.
Ideally I'd like an outcome where the writes are never blocked, and the dashboards are as responsive as possible. What is my best play?
Post your query
In theory a select should not be blocking inserts.
By default a select only takes a shared lock.
Shared locks are acquired during read operations automatically and prevent the user from modifying data.
This should not block inserts into otherTable or joinTable:
select otherTable.*, joinTable.*
from otherTable
join joinTable
on otherTable.joinID = joinTable.ID
But it does have the overhead of acquiring a read lock (it does not know you don't update).
But if it is only fetching a few rows from joinTable then it should only be taking a few shared locks.
Post your query, query plan, and table definitions.
I suspect you have some weird stuff going on where it is taking a lot more locks than it needs.
It may be taking a lock on each row, or it may be escalating to a page or table lock.
And look at the inserts. Are they taking crazy locks they don't need?
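One hedged way to see what locks the report query and the inserts are actually taking while both run is to query the locking DMVs (nothing here assumes your schema beyond the current database):

SELECT tl.request_session_id,
       tl.resource_type,            -- OBJECT, PAGE, KEY, RID, ...
       tl.request_mode,             -- S, IS, X, IX, ...
       tl.request_status,           -- GRANT or WAIT
       OBJECT_NAME(p.object_id) AS object_name
FROM sys.dm_tran_locks AS tl
LEFT JOIN sys.partitions AS p
       ON p.hobt_id = tl.resource_associated_entity_id
WHERE tl.resource_database_id = DB_ID();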
I have a table in SQL Server that is CRUD-ed concurrently by a stored procedure running simultaneously in different sessions:
|----------------|---------|
| <some columns> | JobGUID |
|----------------|---------|
The procedure works as follows:
1. Generate a GUID.
2. Insert some records into the shared table described above, marking them with the GUID from step 1.
3. Perform a few updates on all records from step 2.
4. Select the records from step 3 as the SP output. (The whole procedure is sketched in outline below.)
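In outline, the procedure has roughly this shape (simplified; the table and column names are placeholders, not the real schema):

CREATE PROCEDURE dbo.ProcessJob
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @jobGUID uniqueidentifier = NEWID();         -- step 1

    INSERT INTO dbo.SharedTable (SomeColumn, JobGUID)    -- step 2
    SELECT s.SomeColumn, @jobGUID
    FROM dbo.SourceTable AS s;

    UPDATE dbo.SharedTable                               -- step 3
    SET SomeColumn = SomeColumn + 1
    WHERE JobGUID = @jobGUID;

    SELECT SomeColumn, JobGUID                           -- step 4
    FROM dbo.SharedTable
    WHERE JobGUID = @jobGUID;
END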
Every select / insert / update / delete statement in the stored procedure has a WHERE JobGUID = @jobGUID clause, so the procedure works only with the records it has inserted in step 2. However, sometimes when the same stored procedure runs in parallel in different connections, deadlocks occur on the shared table. Here is the deadlock graph from SQL Server Profiler:
Lock escalations do not occur. I tried adding (UPDLOCK, ROWLOCK) locking hints to all DML statements and/or wrapping the body of the procedure in a transaction and using different isolation levels, but it did not help. Still the same RID lock on the shared table.
After that I discovered that the shared table did not have a primary key/identity column. Once I added one, the deadlocks seem to have disappeared:
alter table <SharedTable> add ID int not null identity(1, 1) primary key clustered
When I remove the primary key column, the deadlocks are back. When I add it back, I cannot reproduce the deadlock anymore.
So, the question is, is a primary key identity column really able to resolve deadlocks or is it just a coincidence?
Update: as @Catcall suggests, I tried creating a natural clustered primary key on the existing columns (without adding an identity column), but I still caught the same deadlock (of course, this time it was a key lock instead of a RID lock).
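(Roughly, that natural-key attempt looked like the following; the column list is a placeholder, since the real schema isn't shown here.)

alter table <SharedTable> add constraint PK_SharedTable
    primary key clustered (JobGUID, <SomeColumn>)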
The best resource (still) for deadlock resolution is here: http://blogs.msdn.com/b/bartd/archive/2006/09/09/deadlock-troubleshooting_2c00_-part-1.aspx.
Pt #4 says:
Run the queries involved in the deadlock through Database Tuning
Advisor. Plop the query in a Management Studio query window, change
db context to the correct database, right-click the query text and
select “Analyze Query in DTA”. Don’t skip this step; more than half
of the deadlock issues we see are resolved simply by adding an
appropriate index so that one of the queries runs more quickly and
with a smaller lock footprint. If DTA recommends indexes (it'll say
“Estimated Improvement: %”), create them and monitor to
see if the deadlock persists. You can select “Apply Recommendations”
from the Action drop-down menu to create the index immediately, or
save the CREATE INDEX commands as a script to create them during a
maintenance window. Be sure to tune each of the queries separately.
I know this doesn't necessarily "answer" the question of why, but it does show that adding indexes can change execution in ways that make either the lock footprint smaller or the execution time faster, which can significantly reduce the chances of a deadlock.
I recently came across this post; based on the information above, I hope it will help you:
http://databaseusergroup.blogspot.com/2013/10/deadlocked-on-sql-server.html
I am using SQL Server 2008. I have two tables with the same schema, and I created a view that unions the contents of the two tables to provide a single "table" view for external access.
One of the tables is read-only, and the other receives bulk insert/delete operations (on that table I will bulk insert several thousand rows at some interval, and run another SQL Job to remove several million rows daily).
My question is: if the other table is under a bulk insert/delete operation, will the physical table be locked so that access from external users to the union view of the two tables is also blocked? (I am wondering whether lock escalation applies in this scenario: row locks eventually lock the table, which in turn blocks access through the view?)
if the other table is under a bulk insert/delete operation, will the physical table be locked so that access from external users to the union view of the two tables is also blocked?
Yes, with the caveat that if the optimiser can find a way to execute the query that does not involve accessing the bulk-insert table, then access will not be blocked.
If you are looking to optimise bulk loading times, make sure you have a read of this blog post.
EDIT
What is the actual problem you are experiencing? Do you really need to be using this view everywhere (for example, are there places that only need data from one table but are querying it via the view)?
If you want your view to be "online" all the time, consider either snapshot isolation or, if you are loading full sets into the bulk table (e.g. the full content is replaced daily), loading the data into a separate table and using sp_rename to swap the table in (in a transaction).
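A rough sketch of that rename swap (table names are placeholders, and the staging table is assumed to already hold the full new data set):

BEGIN TRAN;
    EXEC sp_rename 'dbo.BulkTable', 'BulkTable_old';       -- move the current table aside
    EXEC sp_rename 'dbo.BulkTable_staging', 'BulkTable';   -- swap in the freshly loaded table
COMMIT TRAN;

DROP TABLE dbo.BulkTable_old;   -- clean up afterwards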
Most likely yes. It depends on lock escalation.
To work around it (not all options; rough examples follow the list):
Use the WITH (NOLOCK) table hint to ignore locks held by other sessions and avoid taking shared locks (dirty reads). If used on the view, it also applies to both underlying tables.
Use WITH (READPAST) if you don't mind skipping rows that are currently locked in the BCP table.
Change the lock granularity for the BCP table: use sp_tableoption and set "table lock on bulk load" to false.
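For illustration, those options might be applied like this (the view and table names are placeholders):

SELECT * FROM dbo.CombinedView WITH (NOLOCK);   -- dirty reads: ignores exclusive locks, takes no shared locks
SELECT * FROM dbo.BulkTable WITH (READPAST);    -- skips rows that are currently locked
EXEC sp_tableoption 'dbo.BulkTable', 'table lock on bulk load', 'false';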
Edit: Now I've had coffee...
If you need to query the bulk table during load/delete operations, get accurate results, and not suffer performance hits, I suggest you consider SNAPSHOT isolation.
Edit 2: SNAPSHOT isolation
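A minimal sketch of enabling and using it (the database, view, and table names are placeholders):

-- One-time database setting:
ALTER DATABASE MyReportingDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- In the reporting session: reads a transactionally consistent version of the rows
-- without taking shared locks, so the bulk writer is not blocked.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
    SELECT * FROM dbo.CombinedView;
COMMIT TRAN;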