Can a transaction span machines with JMS? - distributed-transactions

Case:
Start: create transaction
Insert data into DB (Machine A)
Send a synchronous JMS message to a queue (Machine A)
Receive the JMS message from the queue (Machine B)
Insert data into DB and return (Machine B)
Further processing on Machine A
End: commit transaction
Can the processes on Machines A and B work within one transaction, so that process A rolls back if process B rolls back and vice versa?
Is there any example? Are any extra servers/components needed?

You will need some sort of transaction manager; I'd suggest using JOTM.
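What a JTA transaction manager like JOTM does underneath is coordinate a two-phase commit across the resources (database connections, JMS sessions) enlisted in the transaction. A minimal sketch of that protocol, with hypothetical participant objects standing in for real XA resources:

```python
# Illustration of the two-phase commit a transaction manager coordinates.
# The Participant class is hypothetical; real XA resources expose
# prepare/commit/rollback through their drivers.

class Participant:
    def __init__(self, name, will_prepare=True):
        self.name = name
        self.will_prepare = will_prepare
        self.state = "active"

    def prepare(self):
        # Phase 1: each resource votes on whether it can commit.
        self.state = "prepared" if self.will_prepare else "aborted"
        return self.will_prepare

    def commit(self):
        self.state = "committed"

    def rollback(self):
        self.state = "rolled back"

def two_phase_commit(participants):
    # Phase 1: ask every participant to prepare (vote).
    if all(p.prepare() for p in participants):
        for p in participants:
            p.commit()          # Phase 2: all voted yes -> commit everywhere
        return "committed"
    for p in participants:
        p.rollback()            # any "no" vote -> roll back everywhere
    return "rolled back"

db_a = Participant("DB on machine A")
db_b = Participant("DB on machine B", will_prepare=False)  # B fails to prepare
print(two_phase_commit([db_a, db_b]))  # -> rolled back; A rolls back too
```

This is exactly the guarantee asked for above: if the work on Machine B cannot commit, the coordinator rolls back Machine A's work as well.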

Related

Transaction deadlocked while long select

I have a nightly job which executes a stored procedure that goes over a table and fetches records to be inserted into another table.
The procedure takes about 4-5 minutes, during which it executes 6 selects over a table with ~3M records.
While this procedure is running, exceptions are thrown from another stored procedure which tries to update the same table:
Transaction (Process ID 166) was deadlocked on lock resources with
another process and has been chosen as the deadlock victim. Rerun the
transaction.
Execution Timeout Expired. The timeout period elapsed prior to
completion of the operation or the server is not responding. --->
System.ComponentModel.Win32Exception (0x80004005): The wait operation
timed out
I have read the Why use a READ UNCOMMITTED isolation level?
question, but didn't come to a conclusion about what best fits my scenario, as one of the comments stated:
"The author seems to imply that read uncommitted / no lock will return
whatever data was last committed. My understanding is read uncommitted
will return whatever value was last set even from uncommitted
transactions. If so, the result would not be retrieving data "a few
seconds out of date". It would (or at least could if the transaction
that wrote the data you read gets rolled back) be retrieving data that
doesn't exist or was never committed"
Taking into consideration that I only care about the state of the rows at the moment the nightly job started (updates made in the meantime will be picked up by the next run), what would be the most appropriate approach?
Transaction (Process ID 166) was deadlocked on lock resources with
another process and has been chosen as the deadlock victim. Rerun the
transaction.
This normally happens when you read data with the intention of updating it later while taking only a shared lock. The subsequent UPDATE statement then can't acquire the necessary update locks, because they are blocked by the shared locks acquired in the other session, causing the deadlock.
To resolve this you can select the records using the UPDLOCK hint, like the following:
SELECT * FROM [Your_Table] WITH (UPDLOCK) WHERE A=B
This takes the necessary update lock on the records in advance, stops other sessions from acquiring conflicting (update/exclusive) locks on them, and prevents this kind of deadlock.
Another common reason for a deadlock (a cycle deadlock) is the order of the statements you put in your queries, such that in the end each transaction waits on a lock held by the other. For this type of scenario you have to revisit your queries and fix the ordering issue, i.e. acquire locks in a consistent order across transactions.
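The "consistent order" fix for cycle deadlocks is language-agnostic. A small sketch with plain Python locks standing in for row/table locks: both transactions touch the same two tables in opposite orders, but because each one acquires the locks in a single global (here alphabetical) order, no cycle can form.

```python
# Demonstrates deadlock avoidance via a global lock-acquisition order.
# The table names are placeholders for whatever resources the
# transactions lock.
import threading

locks = {"customers": threading.Lock(), "orders": threading.Lock()}
done = []

def transaction(name, tables):
    # Always acquire locks in one global order (alphabetical here),
    # no matter which order the transaction touches the tables in.
    ordered = sorted(tables)
    for t in ordered:
        locks[t].acquire()
    done.append(name)              # critical section: do the updates
    for t in reversed(ordered):
        locks[t].release()

t1 = threading.Thread(target=transaction, args=("txn1", ["customers", "orders"]))
t2 = threading.Thread(target=transaction, args=("txn2", ["orders", "customers"]))
t1.start(); t2.start(); t1.join(); t2.join()
print(sorted(done))  # both transactions finish; no deadlock
```

If the two threads instead acquired the locks in the order the tables appear in their argument lists, each could grab one lock and wait forever on the other, which is the cycle deadlock described above.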
Execution Timeout Expired. The timeout period elapsed prior to
completion of the operation or the server is not responding. --->
System.ComponentModel.Win32Exception (0x80004005): The wait operation
timed out
This one is clear: you need to work on the query's performance, and hold record locks for as short a time as possible.

T-SQL stored procedure transaction concurrency

I have a situation where I need to wrap an update T-SQL statement in a stored procedure (sp_update_queue) inside a transaction. But I'm wondering what would happen if you have two threads using the same connection but executing different queries, and one rolls back a transaction it started.
For example, ThreadA calls sp_update_queue to update table QUEUED_TASKS, but before sp_update_queue commits or rolls back, ThreadB executes some other update or insert SQL on a different table, say CUSTOMERS. Then, after ThreadB has finished, sp_update_queue happens to encounter an error and calls rollback.
Because they are both using the same connection, would the rollback also roll back the changes made by ThreadB, regardless of whether ThreadB made its changes within a transaction or not?
Whichever thread acquires a resource first will lock that resource (given a suitable isolation level), so the second thread will wait for the required resource.
Note: each thread will have its own session ID.
UPDATED
In your scenario, however, both of the threads are using the same connection but do not use any common resources (ThreadA is dealing with table X and ThreadB with table Y), so the commit or rollback of one thread does not impact the other.
Read more about Isolation Levels
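The session point above is the key one: a transaction belongs to a session, not to a thread. A small illustration, with SQLite standing in for SQL Server (each connection is its own session), of why a rollback on one session leaves another session's committed work intact:

```python
# Two connections = two sessions, each with its own transaction.
# SQLite is used here only as a stand-in; the session/transaction
# relationship is the point, not the engine.
import sqlite3

uri = "file:demo?mode=memory&cache=shared"
conn_a = sqlite3.connect(uri, uri=True)   # session A (e.g. ThreadA)
conn_b = sqlite3.connect(uri, uri=True)   # session B (e.g. ThreadB)

conn_a.execute("CREATE TABLE queued_tasks (id INTEGER)")
conn_a.execute("CREATE TABLE customers (id INTEGER)")
conn_a.commit()

conn_b.execute("INSERT INTO customers VALUES (1)")
conn_b.commit()                            # B commits on its own session

conn_a.execute("INSERT INTO queued_tasks VALUES (1)")
conn_a.rollback()                          # A hits an error and rolls back

print(conn_b.execute("SELECT COUNT(*) FROM customers").fetchone()[0])     # 1
print(conn_b.execute("SELECT COUNT(*) FROM queued_tasks").fetchone()[0])  # 0
```

If both threads really did share one connection (one session), they would also share that session's single open transaction, which is why sharing connections across threads is generally avoided.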

Synchronous Replication Setup on 2 Postgresql 9.2.1.4 machines

I am running synchronous replication on 2 Postgresql 9.2.1.4 machines (master and slave)
Here is the configuration:
Master Parameters
synchronous_commit=on
synchronous_standby_names = '*'
no synchronous_replication_timeout parameter, so 10 sec by default
no synchronous_replication parameter, so async by default
wal_level = hot_standby
max_wal_senders = 5
wal_keep_segments = 32
hot_standby = on
Slave Parameters
no synchronous_commit, so by default on
no synchronous_replication_service parameter, so by default async
max_wal_senders = 5
wal_keep_segments = 32
hot_standby = on
The application inserts records on the Master and reads them from the Master or Slave through pgpool. Sometimes, just after inserting records, the application does not see them (probably because it reads from a different DB host than the one it inserted into),
but when we check afterwards the records are there in the database.
On
http://wiki.postgresql.org/wiki/Synchronous_replication#SYNCHRONOUS_REPLICATION_OVERVIEW
I found:
"If no reply is received within the timeout we raise a NOTICE and then return successful commit (no other action is possible)."
My Questions
a) Does it really mean that if the synchronous_replication_timeout
(10 seconds by default) on the Master is exceeded, then in any of the three cases where
the data did not reach the Slave, or
the transaction was not committed on the Slave, or
the transaction was rolled back on the Slave,
the Master commits the transaction but the Slave does not?
If so, then the replication does not seem to be really synchronous...
b) What if I set synchronous_replication_timeout=0 on the Master? Will
the Master wait indefinitely for the Slave to commit or roll back, so that
if the Slave commits the Master commits too, and if the Slave rolls back
the Master rolls back too?
What values should I set for
synchronous_replication (on the Master)
= async (def) | recv | fsync | apply
and
synchronous_replication_service (on the Slave)
= async (def) | recv | fsync | apply
in order to ensure I have a proper synchronous replication setup
(so I am sure that data is committed on both servers or rolled back on both)?
Should they both be set to apply?
Is there any option to ensure that, using synchronous replication
on PostgreSQL 9.1.4, the data is committed on both Master and Slave
at the same time?
The wiki page you referenced currently describes a patch implementing synchronous replication which wasn't committed, see here if you're interested:
http://archives.postgresql.org/pgsql-hackers/2010-12/msg02484.php
So the questions you have about GUCs "synchronous_replication_timeout" or "synchronous_replication_service" aren't relevant to released versions of PostgreSQL, since the version of synchronous replication which was eventually committed differed substantially from the one described in that wiki page. Sorry about that, and I'll see about getting that wiki page cleaned up. The information you want is at:
http://www.postgresql.org/docs/current/static/warm-standby.html#SYNCHRONOUS-REPLICATION
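For reference, in released PostgreSQL (9.1 and later) the synchronous replication settings actually look like this; a minimal sketch, where 'standby1' and master_host are placeholder names:

```
# postgresql.conf on the master
synchronous_standby_names = 'standby1'   # must match the standby's application_name
synchronous_commit = on                  # commit waits for the standby's WAL flush

# recovery.conf on the standby
standby_mode = 'on'
primary_conninfo = 'host=master_host application_name=standby1'
```

Note that even this guarantees only that the commit's WAL has been flushed on the standby before the commit returns to the client; it does not make the row visible on master and standby at exactly the same instant.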

Service Broker : keep messages that could not be handled by an external program in the queue

I have an external program which calls a stored procedure to wait for a message on a queue and then process it. The problem is that sometimes the message read from the queue can't be handled properly; when that happens I would like the message to stay in the queue until it can be processed.
It looks like, unless the queue is created with RETENTION, a message will always be removed from the queue upon a successful WAITFOR, unless the transaction is rolled back. But as you can see, I won't be able to know whether the message is valid until the stored procedure returns the message to the caller, which is the Java program. I am wondering whether it is possible to split the "begin transaction" and the "commit"/"rollback" into two stored procedures: first call a stored procedure which begins the transaction and waits for a message; when it returns with the message, try to process it in the Java code; if the message is processed successfully, call a second stored procedure to commit the transaction, or call another stored procedure to roll it back and put the message back in the queue.
My concern is: how do I identify the right transaction to commit or roll back, since they are not called in the same stored procedure?
Is there any other good practice for handling this situation? An alternative idea is to create an exception queue and let the Java code put failed messages on it.
Any comments will be appreciated!
You don't have to call a stored procedure to wait for messages; instead, just query the queue:
WAITFOR (RECEIVE conversation_handle, message_type_name, service_contract_name, convert(xml, message_body) FROM [dbo].[MyQueue])
When a new message arrives, keep the transaction open while processing the message; if the message hits an unexpected error, roll back the transaction. If it is an expected error, end the conversation with an error.
Actually, I think a good idea is to use the External Activator (on MSDN, download name: "Microsoft SQL Server 2008 R2 Service Broker External Activator").
You can also build something similar to what already exists in .NET (ServiceBrokerInterface).
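The exception-queue idea from the question combines well with this receive-and-process pattern. A language-agnostic sketch, with Python queues standing in for Service Broker queues: a successful handle "commits" (the message is gone), while an expected failure parks the message on an exception queue instead of bouncing it back forever.

```python
# Poison-message handling sketch: main queue + exception queue.
# queue.Queue stands in for a Service Broker queue; process() is a
# hypothetical message handler.
import queue

main_q, exception_q = queue.Queue(), queue.Queue()

def process(msg):
    if msg == "bad":
        raise ValueError("cannot handle message")
    return msg.upper()

def receive_and_handle():
    msg = main_q.get()            # like WAITFOR (RECEIVE ...) in a transaction
    try:
        return process(msg)       # handled OK -> "commit": message is consumed
    except ValueError:
        exception_q.put(msg)      # expected failure -> park on exception queue
        return None

main_q.put("hello")
main_q.put("bad")
print(receive_and_handle())   # HELLO
print(receive_and_handle())   # None; "bad" moved to the exception queue
print(exception_q.qsize())    # 1
```

Parking failures on a separate queue avoids the classic Service Broker pitfall where repeatedly rolling back the same message disables the queue after five consecutive rollbacks.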

Questions about CreateCluster tool in H2 database

I have a couple of questions about the behavior of H2's CreateCluster tool.
If I create a cluster specifying source A and target B, is H2 going to keep B in sync with A? In other words, is a master-slave relationship maintained between the two?
Let's imagine that databases A, B and C belong to the same cluster. What happens if two different transactions are executed on A and B simultaneously? Does H2 elect a leader in the cluster to make sure there is a unique execution order for all databases in the cluster?
If H2 elects a leader, what if this leader disappears? Is there an automatic failover mechanism? Is a new leader automatically elected? Can I still
If I create a cluster with source A -> target B, then source B -> target C, then source C -> target D, will D get statements to execute from C, C from B, and B from A? Or will B, C and D all get statements to execute from A (or the elected leader)? In other words, do we have a chain or a star organization?
See the cluster documentation on the H2 web site.
There is no master/slave, no leader, and no connection between the cluster nodes. Instead, each client connects to all cluster nodes and executes the statements on each of them.
Each client executes all statements on all cluster nodes, in the same order. Each client has a list of cluster nodes, and each cluster node keeps the list as well. The clients verify the list is the same.
There is no leader. The failover mechanism is: if a client loses the connection to one of the cluster nodes, it removes that node from its list, and tells each remaining cluster node to remove it from their lists as well.
This will just expand the list, so you get A, B, C, D. Each client will then execute all update statements on each cluster node.
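The client-side behaviour described above can be sketched in a few lines; SQLite in-memory databases stand in for the H2 nodes, and the node names and helper function are illustrative, not H2's API:

```python
# Sketch of an H2-style cluster client: execute every update on every
# node in the same order, and drop a node from the list if it fails.
import sqlite3

nodes = {"A": sqlite3.connect(":memory:"), "B": sqlite3.connect(":memory:")}
for conn in nodes.values():
    conn.execute("CREATE TABLE t (v INTEGER)")

def execute_on_cluster(sql, params=()):
    dead = []
    for name, conn in nodes.items():
        try:
            conn.execute(sql, params)   # same statement, same order, every node
            conn.commit()
        except sqlite3.Error:
            dead.append(name)           # failover: drop the unreachable node
    for name in dead:
        del nodes[name]

execute_on_cluster("INSERT INTO t VALUES (?)", (1,))
execute_on_cluster("INSERT INTO t VALUES (?)", (2,))
for name, conn in nodes.items():
    print(name, conn.execute("SELECT SUM(v) FROM t").fetchone()[0])
```

Because every client applies the same updates in the same order to every node, the nodes stay identical without any node-to-node replication link.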
