How to find lock-wait queries on AgensGraph?

I'm suffering from slow transactions on AgensGraph.
CPU usage is extremely low, so I'm guessing it's a lock-wait situation.
How can I find lock-wait queries?

You can try the lock-wait log of AgensGraph.
First, change these parameters in postgresql.conf:
log_lock_waits = on
deadlock_timeout = 1s
Second, restart AgensGraph.
$ ag_ctl stop
waiting for server to shut down.... done
server stopped
$ ag_ctl start
server starting
Finally, run your queries and check the log file.
[Session 1: blocking transaction]
agens=# begin;
BEGIN
agens=# create (:n{id:1});
GRAPH WRITE (INSERT VERTEX 1, INSERT EDGE 0)
agens=# rollback;
ROLLBACK
agens=#
[Session 2: lock-wait transaction]
agens=# create (:n{id:1});
GRAPH WRITE (INSERT VERTEX 1, INSERT EDGE 0)
Check the log file:
LOG: process 3908 still waiting for ShareLock on transaction 1586 after 1001.058 ms
DETAIL: Process holding the lock: 3906. Wait queue: 3908.
CONTEXT: while inserting index tuple (0,7) in relation "n_id_idx"
STATEMENT: create (:n{id:1});
LOG: process 3908 acquired ShareLock on transaction 1586 after 4639.630 ms
CONTEXT: while inserting index tuple (0,7) in relation "n_id_idx"
STATEMENT: create (:n{id:1});
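If you would rather see lock waits live instead of in the log, AgensGraph is built on PostgreSQL, so you can also query its pg_locks and pg_stat_activity catalogs. A simplified sketch of the usual lock-monitoring join (it matches only on locktype, transactionid, and relation, which is enough for transaction-level waits like the one above):
SELECT blocked.pid        AS blocked_pid,
       blocked_act.query  AS blocked_query,
       blocking.pid       AS blocking_pid,
       blocking_act.query AS blocking_query
FROM pg_locks blocked
JOIN pg_stat_activity blocked_act ON blocked_act.pid = blocked.pid
JOIN pg_locks blocking
  ON  blocking.locktype = blocked.locktype
  AND blocking.transactionid IS NOT DISTINCT FROM blocked.transactionid
  AND blocking.relation      IS NOT DISTINCT FROM blocked.relation
  AND blocking.pid <> blocked.pid
JOIN pg_stat_activity blocking_act ON blocking_act.pid = blocking.pid
WHERE NOT blocked.granted  -- lock requests still waiting
  AND blocking.granted;    -- locks actually held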

Related

MariaDB replication is not working when no database is selected

I'm using MariaDB 10.6.8 and have one master DB and two slave DBs, set up for replication.
When I execute an INSERT or UPDATE query without selecting a database, replication doesn't seem to work. In other words, the master DB's data is changed but the slave DBs' data remains intact.
/* no database is selected */
MariaDB [(none)]> show master status \G
*************************** 1. row ***************************
File: maria-bin.000007
Position: 52259873
Binlog_Do_DB:
Binlog_Ignore_DB:
1 row in set (0.000 sec)
MariaDB [(none)]> UPDATE some_database.some_tables SET some_datetime_column = now() WHERE primary_key_column = 1;
Query OK, 1 row affected (0.002 sec)
Rows matched: 1 Changed: 1 Warnings: 0
MariaDB [(none)]> show master status \G
*************************** 1. row ***************************
File: maria-bin.000007
Position: 52260068
Binlog_Do_DB:
Binlog_Ignore_DB:
1 row in set (0.000 sec)
/* only the master database's record changed, even though the replication position advanced */
However, after selecting the database, replication works fine.
/* but, after selecting the database */
MariaDB [(none)]> USE some_database;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
MariaDB [some_database]> UPDATE some_tables SET some_datetime_column = now() WHERE primary_key_column = 1;
Query OK, 1 row affected (0.002 sec)
Rows matched: 1 Changed: 1 Warnings: 0
/* now both the master's and the slaves' records change */
Can anyone tell me what could be the cause of this situation?
Regardless of the binary log format (MIXED, STATEMENT, ROW), all DML commands are written to the binary log file as soon as the transaction is committed.
When using ROW format, a TABLE_MAP event is logged first, which contains a unique ID, the database name, and the table name. The ROW_EVENT (Delete/Insert/Update) refers to one or more table IDs to identify the tables used.
The STATEMENT format logs a query event, which contains the default database name, a timestamp, and the SQL statement. If there is no default database, the statement itself contains the database name.
Binlog dump example for STATEMENT format (I removed the irrelevant parts, such as timestamps and user variables, from the output).
Without a default database:
#230210 4:42:41 server id 1 end_log_pos 474 CRC32 0x1fa4fa55 Query thread_id=5 exec_time=0 error_code=0 xid=0
insert into test.t1 values (1),(2)
/*!*/;
# at 474
#230210 4:42:41 server id 1 end_log_pos 505 CRC32 0xfecc5d48 Xid = 28
COMMIT/*!*/;
# at 505
With a default database:
#230210 4:44:35 server id 1 end_log_pos 639 CRC32 0xfc862172 Query thread_id=5 exec_time=0 error_code=0 xid=0
use `test`/*!*/;
insert into t1 values (1),(2)
/*!*/;
# at 639
#230210 4:44:35 server id 1 end_log_pos 670 CRC32 0xca70b57f Xid = 56
COMMIT/*!*/;
If a session doesn't use a default database on the source server, the statement may not be replicated when a binary log filter (e.g. replicate_do_db) is specified on the replica, since the replica doesn't parse the statement but only checks whether the default database name matches the filter.
To avoid inconsistent data on your replicas, I recommend using ROW format instead.
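A minimal sketch of the switch to ROW format; the runtime change below requires admin privileges, and the config section name can vary by installation:
-- change the format server-wide at runtime
-- (existing sessions keep their old format until they reconnect)
SET GLOBAL binlog_format = 'ROW';

-- to make it permanent, set it in the server configuration, e.g. in my.cnf:
-- [mariadb]
-- binlog_format = ROW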

Error (sqlca.sqlcode == ECPG_DUPLICATE_KEY) Handling in ECPG PostgreSQL

I need to handle an error while inserting records into a table (ECPG PostgreSQL), but the job should not abort/commit/rollback on a duplicate record (primary key). The job should skip it and continue with the next record.
Note: SQL_CODE = sqlca.sqlcode
if ( SQL_CODE == -403 )   /* equivalently: sqlca.sqlcode == ECPG_DUPLICATE_KEY */
{
    Log_error_tab();
}
else if ( SQL_CODE != SQL_SUCCESS )
{
    Job_fail();
}
If I handle it as above, the error is handled by calling Log_error_tab(), but the next DML operation fails with the error "sqlerrm.sqlerrmc: current transaction is aborted, commands ignored until end of transaction block on line (sqlstate: 25P02)".
That's the way PostgreSQL works: if a statement inside a transaction fails, the transaction is aborted, and all subsequent statements will fail with that message.
So you should EXEC SQL ROLLBACK before you attempt your next SQL statement.
If you don't want to rollback the whole transaction, you can set a savepoint prior to executing the “dangerous” SQL statement:
SAVEPOINT sname
Then, when the critical part is over, you can release the savepoint:
RELEASE SAVEPOINT sname
If you hit an error, you can roll back everything since the savepoint was set, including the error, with
ROLLBACK TO SAVEPOINT sname
Note that you should use savepoints sparingly if you want decent performance.
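A minimal sketch of the savepoint pattern in plain SQL (the table and savepoint names are made up); in ECPG each statement would be prefixed with EXEC SQL, checking sqlca.sqlcode after the risky INSERT:
CREATE TABLE t (id integer PRIMARY KEY);

BEGIN;
INSERT INTO t VALUES (1);            -- succeeds
SAVEPOINT before_risky;
INSERT INTO t VALUES (1);            -- fails with a duplicate-key error
ROLLBACK TO SAVEPOINT before_risky;  -- clears the error; transaction usable again
INSERT INTO t VALUES (2);            -- succeeds
COMMIT;                              -- rows 1 and 2 are kept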

PostgreSQL deadlock

Sometimes PostgreSQL raises deadlock errors.
A trigger on the table does a SELECT ... FOR UPDATE.
The comment table definition:
http://pastebin.com/L1a8dbn4
Log (the INSERT statements are cut):
2012-01-26 17:21:06 MSK ERROR: deadlock detected
2012-01-26 17:21:06 MSK DETAIL: Process 2754 waits for ExclusiveLock on tuple (40224,15) of relation 735493 of database 734745; blocked by process 2053.
Process 2053 waits for ShareLock on transaction 25162240; blocked by process 2754.
Process 2754: INSERT INTO comment (user_id, content_id, reply_id, text) VALUES (1756235868, 935967, 11378142, 'text1') RETURNING comment.id;
Process 2053: INSERT INTO comment (user_id, content_id, reply_id, text) VALUES (4071267066, 935967, 11372945, 'text2') RETURNING comment.id;
2012-01-26 17:21:06 MSK HINT: See server log for query details.
2012-01-26 17:21:06 MSK CONTEXT: SQL statement "SELECT comments_count FROM content WHERE content.id = NEW.content_id FOR UPDATE"
PL/pgSQL function "increase_comment_counter" line 5 at SQL statement
2012-01-26 17:21:06 MSK STATEMENT: INSERT INTO comment (user_id, content_id, reply_id, text) VALUES (1756235868, 935967, 11378142, 'text1') RETURNING comment.id;
And trigger on table comment:
CREATE OR REPLACE FUNCTION increase_comment_counter() RETURNS TRIGGER AS $$
DECLARE
    comments_count_var INTEGER;
BEGIN
    SELECT INTO comments_count_var comments_count FROM content WHERE content.id = NEW.content_id FOR UPDATE;
    UPDATE content SET comments_count = comments_count_var + 1, last_comment_dt = now() WHERE content.id = NEW.content_id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER increase_comment_counter_trigger AFTER INSERT ON comment FOR EACH ROW EXECUTE PROCEDURE increase_comment_counter();
Why can this happen?
Thanks!
These are two comments being inserted with the same content_id. Merely inserting the comment will take out a SHARE lock on the content row, in order to stop another transaction deleting that row until the first transaction has completed.
However, the trigger then goes on to upgrade the lock to EXCLUSIVE, and this can be blocked by a concurrent transaction performing the same process. Consider the following sequence of events:
1. Txn 2754: Insert Comment
2. Txn 2053: Insert Comment
3. Txn 2754: Lock Content#935967 SHARE (performed by fkey)
4. Txn 2053: Lock Content#935967 SHARE (performed by fkey)
5. Txn 2754: Trigger: Lock Content#935967 EXCLUSIVE (blocks on 2053's share lock)
6. Txn 2053: Trigger: Lock Content#935967 EXCLUSIVE (blocks on 2754's share lock)
So: deadlock.
One solution is to immediately take an exclusive lock on the content row before inserting the comment, i.e.:
SELECT 1 FROM content WHERE content.id = 935967 FOR UPDATE
INSERT INTO comment(.....)
Another solution is simply to avoid this "cached counts" pattern completely, except where you can prove it is necessary for performance. If so, consider keeping the cached count somewhere other than the content table, e.g. a dedicated counter table. That will also cut down on the update traffic to the content table every time a comment gets added. Or maybe just re-select the count, and use memcached in the application. There's no getting round the fact that wherever you store this cached count will be a choke point: it has to be updated safely.
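A minimal sketch of the dedicated counter table; the table and column names here are hypothetical:
CREATE TABLE content_comment_count (
    content_id     bigint  PRIMARY KEY REFERENCES content (id),
    comments_count integer NOT NULL DEFAULT 0
);

-- inside the trigger, touch only the narrow counter row instead of content:
UPDATE content_comment_count
   SET comments_count = comments_count + 1
 WHERE content_id = NEW.content_id;
The counter row is still a serialization point, but updating it no longer locks the content row itself.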

How to explicitly lock a table in Microsoft SQL Server (looking for a hack - uncooperative client)

This was my original question:
I am trying to figure out how to enforce EXCLUSIVE table locks in SQL Server. I need to work around uncooperative readers (beyond my control, closed source stuff) which explicitly set their ISOLATION LEVEL to READ UNCOMMITTED. The effect is that no matter how many locks and what kind of isolation I specify while doing an insert/update, a client just needs to set the right isolation and is back to reading my garbage-in-progress.
The answer turned out to be quite simple -
while there is no way to trigger an explicit lock, any DDL change triggers the lock I was looking for.
While this situation is not ideal (the client blocks instead of witnessing repeatable reads), it is much better than letting the client override the isolation and read dirty data. Here is the full example code with the dummy-trigger lock mechanism:
WINNING!
#!/usr/bin/env perl

use Test::More;
use warnings;
use strict;
use DBI;

my ($dsn, $user, $pass) = @ENV{ map { "DBICTEST_MSSQL_ODBC_$_" } qw/DSN USER PASS/ };

my @coninf = ($dsn, $user, $pass, {
    AutoCommit  => 1,
    LongReadLen => 1048576,
    PrintError  => 0,
    RaiseError  => 1,
});

if (! fork) {
    my $reader = DBI->connect(@coninf);
    $reader->do('SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED');

    warn "READER $$: waiting for table creation";
    sleep 1;

    for (1..5) {
        is_deeply (
            $reader->selectall_arrayref ('SELECT COUNT(*) FROM artist'),
            [ [ 0 ] ],
            "READER $$: does not see anything in db, sleeping for a sec " . time,
        );
        sleep 1;
    }

    exit;
}

my $writer = DBI->connect(@coninf);

eval { $writer->do('DROP TABLE artist') };
$writer->do('CREATE TABLE artist ( name VARCHAR(20) NOT NULL PRIMARY KEY )');

# The original line here was garbled; a no-op trigger is assumed, since the
# lock mechanism below only needs a trigger to ENABLE/DISABLE -- toggling it
# is the DDL that takes the schema lock.
$writer->do('CREATE TRIGGER _lock_artist ON artist FOR INSERT AS SET NOCOUNT ON');
$writer->do('DISABLE TRIGGER _lock_artist ON artist');

sleep 1;

is_deeply (
    $writer->selectall_arrayref ('SELECT COUNT(*) FROM artist'),
    [ [ 0 ] ],
    'No rows to start with',
);

$writer->begin_work;
$writer->prepare("INSERT INTO artist VALUES ('bupkus') ")->execute;

# this is how we lock
$writer->do('ENABLE TRIGGER _lock_artist ON artist');
$writer->do('DISABLE TRIGGER _lock_artist ON artist');

is_deeply (
    $writer->selectall_arrayref ('SELECT COUNT(*) FROM artist'),
    [ [ 1 ] ],
    'Writer sees inserted row',
);

# delay reader
sleep 2;

$writer->rollback;

# should not affect reader
sleep 2;

is_deeply (
    $writer->selectall_arrayref ('SELECT COUNT(*) FROM artist'),
    [ [ 0 ] ],
    'Nothing committed (writer)',
);

wait;
done_testing;
RESULT:
READER 27311: waiting for table creation at mssql_isolation.t line 27.
ok 1 - READER 27311: does not see anything in db, sleeping for a sec 1310555569
ok 1 - No rows to start with
ok 2 - Writer sees inserted row
ok 2 - READER 27311: does not see anything in db, sleeping for a sec 1310555571
ok 3 - READER 27311: does not see anything in db, sleeping for a sec 1310555572
ok 3 - Nothing committed (writer)
ok 4 - READER 27311: does not see anything in db, sleeping for a sec 1310555573
ok 5 - READER 27311: does not see anything in db, sleeping for a sec 1310555574
One very hacky way to do this is to force an operation on the table that takes a Sch-M (schema modification) lock on it, which will prevent reads against the table even at the READ UNCOMMITTED isolation level. E.g., performing an operation like ALTER TABLE ... REBUILD (perhaps on a specific empty partition, to reduce the performance impact) as part of your operation will prevent all concurrent access to the table until you commit.
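A minimal sketch of that hack in T-SQL, reusing the artist table from the example above:
BEGIN TRANSACTION;

ALTER TABLE artist REBUILD;            -- takes a Sch-M lock on the table

-- ... the writes that must stay invisible go here ...
INSERT INTO artist VALUES ('bupkus');

COMMIT;                                -- lock released; readers unblock here
Because DDL locks taken inside an explicit transaction are held until it ends, even READ UNCOMMITTED readers block on the Sch-M lock until the COMMIT.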
Add a locking hint to your SELECT:
SELECT COUNT(*) FROM artist WITH (TABLOCKX)
and put your INSERT into a transaction.
If your initial statement is in an explicit transaction, the SELECT will wait for a lock before it processes.
There's no direct way to force locking when a connection is in the READ UNCOMMITTED isolation level.
A solution would be to create views over the tables being read that supply the READCOMMITTED table hint. If you control the table names used by the reader, this could be pretty straightforward. Otherwise, you'll have quite a chore as you'll have to either modify writers to write to new tables or create INSTEAD OF INSERT/UPDATE triggers on the views.
Edit:
Michael Fredrickson is correct in pointing out that a view simply defined as a select from a base table with a table hint wouldn't require any trigger definitions to be updatable. If you were to rename the existing problematic tables and replace them with views, the third-party client ought to be none the wiser.
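A minimal sketch of that rename-and-view trick, with hypothetical object names:
EXEC sp_rename 'artist', 'artist_base';
GO

-- The view keeps the old name, so the uncooperative client queries it
-- unchanged, and the hint overrides its READ UNCOMMITTED setting.
CREATE VIEW artist AS
SELECT name FROM dbo.artist_base WITH (READCOMMITTED);
GO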

XML column update and locking in SQL Server

I have a few Windows services. They get an XML column from SQL Server, manipulate it, and update it.
Service A- Gets XML
Service B- Gets XML
Service A- Updates XML (it will be lost)
Service B- Updates XML
I must lock the row, so I use the following code:
SqlCommand cmdUpdate = new SqlCommand();
cmdUpdate.CommandText = "select MyXML from MyTable with (holdlock, rowlock) where id=@id";
cmdUpdate.Parameters.AddWithValue("@id", id);
using (SqlConnection conn = Helper.GetConnection())
{
    cmdUpdate.Connection = conn;
    SqlTransaction ts = conn.BeginTransaction();
    cmdUpdate.Transaction = ts;
    XElement elem = XElement.Parse(cmdUpdate.ExecuteScalar().ToString());
    UpdateXElement(elem);
    cmdUpdate.Parameters.Clear();
    cmdUpdate.CommandText = "update MyTable set MyXML=@xml where id=@id";
    cmdUpdate.Parameters.AddWithValue("@id", id);
    cmdUpdate.Parameters.AddWithValue("@xml", elem.ToString());
    cmdUpdate.ExecuteNonQuery();
    ts.Commit();
}
Then deadlocks occur.
Does anyone have a better idea for solving this problem?
Thanks
The scenario you are describing is not a deadlock. It's lock contention; in other words, exactly what the locks are for:
Service A gets the XML - Service A locks the XML
Service B gets the XML - Service B places a lock request, which waits for Service A to release the lock
Service A updates the XML - Service A should commit or roll back the transaction to release the lock
Service B updates the XML - Service B acquires the lock on the XML and updates it
Service B will be frozen between steps 2 and 3.
This means you should perform these steps as fast as possible.
Update:
You use HOLDLOCK to lock the row in a transaction.
HOLDLOCK places a shared lock, which is compatible with another shared lock but not with the update lock placed by UPDATE.
Here's what happens:
Service A places a shared lock on row 1
Service B places a shared lock on row 1
Service A tries to place an update lock on row 1 which is not compatible with the shared lock placed by Service B on step 2. Service A enters wait state (while still holding a shared lock placed on step 1).
Service B tries to place an update lock on row 1 which is not compatible with the shared lock placed by Service A on step 1. Service B enters wait state. DEADLOCK.
There is no point in placing a shared lock in the SELECT here. You should place an UPDLOCK in the SELECT instead. This makes the transactions' locks incompatible from the start, so each transaction has to wait for the other to complete before acquiring its locks.
In this scenario, deadlocks are impossible.
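A minimal sketch of that fix, mirroring the C# query above:
select MyXML from MyTable with (updlock, rowlock) where id = @id
With UPDLOCK, the second service's SELECT blocks until the first transaction commits, instead of both services acquiring shared locks and deadlocking on the upgrade.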
