How to read from ActiveMQ in Talend without deleting the contents

I am currently using tMomInput to read from ActiveMQ.
Is there any way in Talend to read from a queue in ActiveMQ without deleting the messages? Is it possible to delete them only once they have been successfully copied to a temporary table? If there is a failure, such as the server shutting down, and the job fails before the messages are copied to the DB table, there is no way of recovering the data.

The general way to do this in JMS is to use transacted sessions. The message is only removed from the queue when the transaction is committed in the success case. In the error case the transaction is rolled back, and the message is given back to the JMS broker for redelivery.
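The commit/rollback semantics can be sketched with a toy in-memory queue (this is not the JMS API; the class and method names below are invented for illustration):

```python
from collections import deque

class TransactedQueue:
    """Toy model of a JMS transacted session: a received message is only
    removed from the queue on commit; rollback makes it available again."""
    def __init__(self, messages):
        self._queue = deque(messages)
        self._in_flight = []          # received but not yet committed

    def receive(self):
        msg = self._queue.popleft()   # hidden from other consumers...
        self._in_flight.append(msg)   # ...but not yet deleted
        return msg

    def commit(self):
        self._in_flight.clear()       # now the messages are really gone

    def rollback(self):
        # hand the messages back to the broker for redelivery
        while self._in_flight:
            self._queue.appendleft(self._in_flight.pop())

q = TransactedQueue(["m1", "m2"])
msg = q.receive()
q.rollback()                          # e.g. the DB insert failed
assert q.receive() == "m1"            # the same message is redelivered
q.commit()                            # DB insert succeeded: delete for real
```

In a real Talend/ActiveMQ setup the equivalent is a transacted JMS session: commit the session only after the row has reached the temporary table, and roll it back on any error.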

Related

What happens in the case of a Postgres Transaction Rollback command not reaching the database?

Section 3.4 of the Postgres documentation covers transactions.
I thought a transaction worked according to the following rules:
The client sends a BEGIN statement to the Database server on a connection. Call this connection “connection_one”.
The client sends whatever queries they want to the Database server. All of these queries are sent via “connection_one”.
If at any time the connection (in this example "connection_one") is lost before a COMMIT statement reaches the Database server, the Database server rolls back to the state before the BEGIN statement.
If a COMMIT statement is issued and received by the Database server, then the changes are saved and the transaction block has completed.
It looks like the above is not the case, though. My confusion is that it looks like I have to actually issue a ROLLBACK command, and have it reach the Database server, in order for partial changes not to be saved. Is this really the case, or am I missing something? If it is the case, is there any way I can get the behavior described above, or is there some reason I would not want it? My concern is: what if the connection is lost before I am able to issue a ROLLBACK?
Thanks.

Oracle database rollback after update

After upgrading from 11g to 12c, we noticed a weird behaviour: when we update a table, the data is rolled back even though we issued a COMMIT and there was no error.
Anyone with similar experience?
This is never supposed to happen: once the database receives a COMMIT request, it must either (1) fulfill the COMMIT request or (2) return an error AND roll back the transaction. Oracle 12c has an API called "Transaction Guard" that is supposed to tell you whether a commit succeeded. Here is the URL for that API:
https://docs.oracle.com/database/121/CNCPT/transact.htm#CNCPT89217
Even if you are not using this API, if the transaction reaches the Oracle database, it should either succeed or you should receive a listener or ORA- error.
Are you sure that:
your client does not roll back if part of the data for a transaction (e.g. one field in a data grid) is not filled out?
your Oracle client is compatible with 12c?

Schedule Service Broker to receive messages automatically

I am new to SQL Server Service Broker and am experimenting with it.
I was able to send messages from one DB and receive those messages in another DB (of the same SQL server) and I am inserting those messages into a table in the receiving DB.
Everything is working so far, but every time I send a message from the source DB, I have to go to the destination DB and run the RECEIVE query manually to fetch the message from the receiving queue and insert it into the table.
I would like to automatically receive the messages from the receive queue as soon as they arrive (or in a schedule, say every 10 minutes) and insert them into my destination table, without me manually doing it.
One option is to create a stored procedure and schedule it to run every 10 minutes. I am not sure if that is the recommended way, or if there is a better way to listen to the receiving queue and automatically retrieve messages as soon as they arrive.
Any help would be appreciated.
What you're looking for is called broker activation (specifically, internal activation). In essence, you can "attach" a stored procedure to a Service Broker queue that will be called when a message shows up on the queue. Read all about it in Books Online (BOL).
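Internal activation can be pictured as a push model: the broker invokes the attached procedure whenever a message lands on the queue, so no manual RECEIVE is needed. A toy sketch of that idea (the class and function names here are invented, not the Service Broker API):

```python
class ActivatedQueue:
    """Toy model of Service Broker internal activation: an attached
    procedure is invoked automatically when a message is enqueued."""
    def __init__(self, activation_proc):
        self._proc = activation_proc

    def send(self, message):
        # stands in for SEND ON CONVERSATION; the broker then starts
        # the activation procedure, which RECEIVEs the message
        self._proc(message)

destination_table = []

def activation_proc(message):
    # stands in for: RECEIVE ...; INSERT INTO destination_table ...
    destination_table.append(message)

queue = ActivatedQueue(activation_proc)
queue.send("order-42")
assert destination_table == ["order-42"]   # no manual RECEIVE needed
```

The real T-SQL equivalent is `ALTER QUEUE ... WITH ACTIVATION (STATUS = ON, PROCEDURE_NAME = ..., MAX_QUEUE_READERS = ..., EXECUTE AS ...)` on the receiving queue.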

Can JDBC connections be recovered?

Can JDBC connections which are closed due to database unavailability be recovered?
To give background, I get the following errors in sequence. It doesn't look to have been a manual restart. The reason for my question is that I am told the app behaved correctly without the restart. So if the connection was lost, can it be recovered after a DB restart?
java.sql.SQLException: ORA-12537: TNS:connection closed
java.sql.SQLRecoverableException: ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
IBM AIX RISC System/6000 Error: 2: No such file or directory
java.sql.SQLRecoverableException: ORA-01033: ORACLE initialization or shutdown in progress
No. The connection is "dead". Create a new connection.
A good approach is to use a connection pool, which will test if the connection is still OK before giving it to you, and automatically create a new connection if needed.
There are several open source connection pools to use. I've used Apache's DBCP, and it worked for me.
Edited:
Given that you want to wait until the database comes back up if it's down (interesting idea), you could implement a custom version of getConnection() that "waits a while and tries again" if the database doesn't respond.
p.s. I like this idea!
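The "wait a while and try again" idea can be sketched like this (a toy sketch: the `connect` callable, retry counts, and delays are placeholders, not a real JDBC driver API):

```python
import time

def get_connection(connect, retries=5, delay_seconds=2.0):
    """Try to open a connection; if the database is down, wait and retry.
    `connect` is any zero-argument callable that returns a connection or
    raises on failure (a stand-in for DriverManager.getConnection)."""
    last_error = None
    for _ in range(retries):
        try:
            return connect()
        except Exception as exc:        # e.g. ORA-01034 / ORA-12537
            last_error = exc
            time.sleep(delay_seconds)   # back off before retrying
    raise ConnectionError("database still unavailable") from last_error

# simulate a database that comes back up on the third attempt
attempts = {"n": 0}

def flaky_connect():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("ORA-01034: ORACLE not available")
    return "connection"

assert get_connection(flaky_connect, delay_seconds=0) == "connection"
```

A connection pool's validation query plays the same role in production: the pool tests the connection before handing it out and opens a fresh one if the test fails.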
The connection cannot be recovered. What can be done is to fail over the connection to another database instance. RAC and Data Guard installations support this configuration.
This is no problem for read-only transactions. However, for transactions that execute DML this can be a problem, especially if the last call to the DB was a commit. In the case of a commit, the client cannot tell whether the commit call completed or not. When did the DB fail: before executing the commit, or after executing the commit (but before sending the acknowledgment back to the client)? Only the application has this knowledge and can do the right thing. If the application does not verify the state of the last transaction after failing over, duplicate transactions are possible. This is a known problem, and most of us have experienced it when buying tickets or in similar web transactions.
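The ambiguity can be made concrete: if the failure happens after the server commits but before the acknowledgment reaches the client, the client sees an error either way. A toy sketch (all names invented) of an application that verifies the last transaction's outcome before retrying, which is the kind of check Oracle's Transaction Guard exposes:

```python
committed = []   # stands in for commit state queryable on the database side

def commit_on_server(txn_id, fail_before_ack):
    committed.append(txn_id)           # the server-side commit succeeds...
    if fail_before_ack:
        # ...but the acknowledgment never reaches the client
        raise ConnectionError("connection lost before acknowledgment")

def retry_with_check(txn_id, fail_before_ack):
    try:
        commit_on_server(txn_id, fail_before_ack)
    except ConnectionError:
        # after failing over: verify the outcome instead of blindly
        # re-executing, which would produce a duplicate transaction
        if txn_id not in committed:
            commit_on_server(txn_id, False)

retry_with_check("txn-1", fail_before_ack=True)
assert committed == ["txn-1"]          # applied exactly once, no duplicate
```

A naive client that simply re-submitted on error would append "txn-1" twice here, which is exactly the duplicate-ticket problem described above.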

Sending and Receiving SQL Server Service Broker Messages within Nested Transactions

I'd like to use SQL Server 2008 Service Broker to log the progress of a long-running (up to about 30 minutes) transaction that is dynamically created by a stored procedure. I have two goals:
1) To get real-time logging of the dynamically-created statements that make up the transaction so that the progress of the transaction can be monitored remotely,
2) To be able to review the steps that made up the transaction up to a point where a failure may have occurred requiring a rollback.
I cannot simply PRINT (or RAISERROR(msg,0,0)) to the console, because I want to log the progress messages to a table (and have that log remain even if the stored procedure rolls back).
But my understanding is that messages cannot be received from the queue until the sending thread commits (the outer transaction). Is this true? If so, what options do I have?
It is true that you cannot read messages from the queue until the sending transaction is committed.
You could try some other methods:
use a SQL CLR procedure to send a .NET Remoting message to a .NET app that receives the messages and then logs them;
use a SQL CLR procedure to write a text or other log file to disk;
some other method.
Regards
AJ
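The underlying requirement in the question, progress rows that survive a rollback of the main transaction, amounts to writing the log through a separate, independently committed channel (which is what the CLR workarounds above achieve). A toy sketch of that pattern, with invented names:

```python
class Transaction:
    """Toy transaction: writes are buffered, made durable on commit,
    and discarded on rollback."""
    def __init__(self, table):
        self._table = table
        self._pending = []

    def insert(self, row):
        self._pending.append(row)

    def commit(self):
        self._table.extend(self._pending)
        self._pending = []

    def rollback(self):
        self._pending = []

work_table, log_table = [], []

main_txn = Transaction(work_table)
main_txn.insert("step 1 result")

# the progress log goes through its own transaction and commits
# immediately, like a CLR procedure opening a separate connection
log_txn = Transaction(log_table)
log_txn.insert("step 1 done")
log_txn.commit()

main_txn.rollback()                   # the long-running work fails
assert work_table == []               # work is rolled back...
assert log_table == ["step 1 done"]   # ...but the progress log survives
```

This is why logging inside the same transaction (including via Service Broker messages sent from it) cannot meet goal 2: anything enlisted in the outer transaction disappears with the rollback.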
