Postgres - Force commit to idle transaction

I know it is possible to kill an idle Postgres transaction, but is it possible to force it to commit? The commit would need to come from some kind of agent that monitors idle transactions, not from the module that started the transaction.
Thanks
Avi

There is no way to do this, and that's a good thing too. How could you be certain that the transaction's work is done and the database is in a consistent state afterwards?
The correct solution is to fix the application so that it does not leave transactions “hanging”.
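That said, the most a monitoring agent can do is detect idle transactions and terminate them, which rolls them back rather than committing. A minimal sketch against pg_stat_activity (the 5-minute threshold and the pid 12345 are placeholders):

    -- Find sessions that have sat idle inside a transaction for over 5 minutes.
    SELECT pid, usename, xact_start
    FROM pg_stat_activity
    WHERE state = 'idle in transaction'
      AND xact_start < now() - interval '5 minutes';

    -- Terminate an offending backend; its open transaction is rolled back.
    SELECT pg_terminate_backend(12345);

From PostgreSQL 9.6 on, the server can do this automatically: set idle_in_transaction_session_timeout and any session idle in a transaction longer than that is aborted (rolled back).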

Related

Keep a transaction open on SQL Server with connection closed

On SQL Server, is it possible to begin a transaction but intentionally orphan it from an open connection yet keep it from rolling back?
The use-case is for a REST service.
I'd like to be able to link a series of HTTP requests so they work under one transaction. That can be done if the service is stateful, i.e. a single REST API server holds the connection open (mapping an HTTP header value to a named connection), but it's a flawed idea in a non-sticky farm of web servers.
If the DB supported the notion of something like named/leased transactions, kinda like a named mutex, this could be done.
I appreciate there are other RESTful designs for atomic data mutations.
Thanks.
No. A transaction lives and dies with the session it's created in, and a session lives and dies with its connection. You can keep a transaction open for as long as you like -- but only by also keeping the session, and thereby the connection open. If the session is closed before the transaction commits, it automatically rolls back. Which is a good thing, in general, because transactions tend to use pessimistic locking. You don't want to keep those locks around for longer than necessary.
While there is such a thing as a distributed transaction that you can enlist in even if the current connection did not begin the transaction, this will still not do what you want for the scenario of multiple distributed nodes performing actions in succession to complete a transaction on one database. Specifically, you'd still need to have one "master" node to keep the transaction alive and decide it should finally commit now, and you need a way to make nodes aware of the transaction so they can enlist. I don't recommend you actually go this way, as it's much more complicated than tailoring a solution to your specific scenario (typically, accumulating modifications in their own table and committing them as a batch when they're complete, which can be done in one transaction).
You could use a queue-oriented design, where the application simply adds work to the queue while a SQL Server Agent job pops the queue and executes it.
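A rough sketch of that idea in T-SQL, with hypothetical table and column names: each stateless web node appends its changes under a shared batch id, and a single agent job applies the finished batch in one short transaction.

    -- Hypothetical queue table; names are illustrative.
    CREATE TABLE dbo.PendingChanges (
        ChangeId bigint IDENTITY(1,1) PRIMARY KEY,
        BatchId  uniqueidentifier NOT NULL,  -- ties the series of HTTP requests together
        Payload  nvarchar(max)    NOT NULL   -- description of the change to apply
    );

    -- Each HTTP request just inserts; no long-lived transaction is held.
    DECLARE @BatchId uniqueidentifier = NEWID();
    INSERT INTO dbo.PendingChanges (BatchId, Payload)
    VALUES (@BatchId, N'first change of the batch');

    -- Once the batch is complete, the agent applies it atomically.
    BEGIN TRANSACTION;
        -- ... apply every Payload in the batch to the real tables here ...
        DELETE FROM dbo.PendingChanges WHERE BatchId = @BatchId;
    COMMIT TRANSACTION;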

Restarting due to SPID stuck on RUNNING, KILLED/ROLLBACK status

I'm a new "accidental" DBA and I'm currently trying to resolve a lockup caused by a trigger I created on a production database supporting a front end application.
I created a trigger, and then I decided I'd be best off creating a job to do the work instead, so tried to delete the trigger in object explorer. The delete failed with the message:
An exception occurred while executing a Transact-SQL statement or batch.
Lock request time out period exceeded.
I then tried to drop it manually and it failed at 0%, 0s left to go. I checked for the longest-running transaction and then tried to kill the process in Activity Monitor. Since then the process has been stuck on "Task State: RUNNING, Command: KILLED/ROLLBACK". After some googling, it sounds like I have two options.
Option 1: Restart DTC on the SQL server.... didn't work, still stuck.
Option 2: Restart the SQL service. Uh-oh.
This is the first time I've ever had to do anything like this and I'm pretty nervous being the only SQL guy in the office. Please can anyone let me know what the potential implications of restarting the service are, in terms of data loss and impact to front end users? Am I better off waiting to restart after business hours?
Thanks, and apologies if I've asked this question badly, first time for everything.
Cheers
Wait. It's rolling back and has to finish the rollback. Don't restart SQL Server; that will just result in the rollback continuing after the restart, possibly with the database offline.
If this is a production system and you do bounce the database, all users of your user interface will get weird and wonderful errors. Unless your application can handle it, your users will have a bad experience and then you will start getting phone calls from the boss....
As a side note, check for locking/blocking processes. The message in the question, "Lock request time out period exceeded.", suggests there is blocking happening.
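To watch the rollback's progress and look for blockers without restarting anything, something like the following works (58 is a placeholder SPID):

    -- Reports estimated rollback progress for a killed session; it does not kill again.
    KILL 58 WITH STATUSONLY;

    -- Who is blocking whom right now?
    SELECT session_id, blocking_session_id, wait_type, command
    FROM sys.dm_exec_requests
    WHERE blocking_session_id <> 0;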

What are atomikos transaction logs used for?

I've inherited an application that uses Atomikos for transaction handling in Spring on top of an Oracle database. In production deployments transaction logging has always been enabled by setting com.atomikos.icatch.enable_logging=true but the truth is I can't find any info on what exactly these logs are used for.
The Atomikos site states that "this should never be disabled on production or data integrity cannot be guaranteed", and I found a comment in a jta.properties file on that site saying there is a "risk of losing data after restart or crash" if it is disabled.
We don't enable this in our development environments and are able to use the application normally. I thought the logs might be used if the application crashed, but if so I'm not sure how: automatically during the next startup, or manually in some way? In terms of data integrity, I know Oracle has its own recovery mechanisms, but maybe these transaction logs hold data that Oracle hasn't seen yet, e.g. if Spring were to crash.
http://fogbugz.atomikos.com/default.asp?community.6.1950.6 seems to indicate that the transaction logs are used for recovery only and can be disabled if you don't need them for recovery.
These logs maintain transaction state in its latest revision, which may not yet be known to your database. Without this setting, recovery after a crash/restart will probably be incorrect.
HTH
Guy
Before I answer your question, you need to read the beginning of this post, How would you tune Distributed (XA) transaction for performance?, to get the terminology.
Atomikos acts as the transaction coordinator: it coordinates across the participants, which are the different databases, orchestrating the progress of transactions across them. It is essentially the same job a policeman does in the middle of a crossroad.
Atomikos writes its log file in order to know where exactly in the distributed-transaction process it is. In case of failure it can trace the progress of its uncommitted transactions and resume from the place where it was previously interrupted. As such, the transaction log is very important for the transaction recovery process.
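Concretely, the flag under discussion lives in jta.properties; per the Atomikos advice quoted above, it should stay enabled in production:

    # jta.properties -- keep the coordinator's transaction log on in production,
    # otherwise recovery after a restart or crash cannot be guaranteed
    com.atomikos.icatch.enable_logging=true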

db transaction (commit and rollback) in datastage

Is there a way to implement transactions in a DataStage job, i.e. a way to roll back all upserts when my job aborts?
If not, is there a standard practice or workaround to simulate a commit/rollback system?
Thanks in advance
That is not good practice. In a DW ETL process you need to design your jobs to be re-entrant (safely restartable) rather than transactional. Transactions are expensive; you just fix the issue and restart the job. In most cases you can't get issues fixed without human intervention anyway.
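One common way to make a job re-entrant, sketched with illustrative table and column names: load into a staging table and apply it with an idempotent MERGE, so rerunning the job after an abort reproduces the same end state instead of double-applying changes.

    -- Running this twice leaves target_tbl in the same state, so the job
    -- can simply be fixed and restarted after an abort.
    MERGE INTO target_tbl t
    USING staging_tbl s
       ON (t.business_key = s.business_key)
    WHEN MATCHED THEN
        UPDATE SET t.amount = s.amount
    WHEN NOT MATCHED THEN
        INSERT (business_key, amount)
        VALUES (s.business_key, s.amount);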

Can my Oracle database still be used by end users while a crontab job updates it?

I have a crontab job that runs at 00:00 every night to update my website's database.
To prevent the job from breaking end users' operations on the website, should I stop my website while the job is running? Is there a better alternative? Thanks in advance!
Oracle, like other DBMSs, allows concurrent access to the data, even in case of concurrent readings and writings.
So yes, users will still be able to access the database while the update job runs. Depending on what the job does and how long it takes, there might be interference, but I don't know the details.
Normally you should try to design the update job so that it does not interfere with user activity, if possible, instead of shutting down the site while updating.
Try it, and if you do find interference and the job is very long-running, check whether the design allows you to COMMIT more often. Otherwise, let us know details such as what the job is doing, how many rows you are likely to insert or update, and which version of Oracle you are on.
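If the job does turn out to hold locks for too long, committing in batches is the usual fix. A PL/SQL sketch with an illustrative table name and a 10,000-row batch size:

    -- Update in chunks, committing after each one, so locks are held briefly
    -- and user sessions are not blocked for the whole run.
    BEGIN
        LOOP
            UPDATE website_data
               SET refreshed = 'Y'
             WHERE refreshed = 'N'
               AND ROWNUM <= 10000;
            EXIT WHEN SQL%ROWCOUNT = 0;
            COMMIT;
        END LOOP;
        COMMIT;
    END;
    /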
