Keep a transaction open on SQL Server with connection closed - sql-server

On SQL Server, is it possible to begin a transaction but intentionally orphan it from an open connection yet keep it from rolling back?
The use case is for a REST service.
I'd like to be able to link a series of HTTP requests so that they all work under one transaction. That can be done if the service is stateful, i.e. a single REST API server holds a connection open and maps an HTTP header value to a named connection, but it's a flawed idea in a non-sticky farm of web servers.
If the DB supported the notion of something like named/leased transactions, kinda like a named mutex, this could be done.
I appreciate there are other RESTful designs for atomic data mutations.
Thanks.

No. A transaction lives and dies with the session it's created in, and a session lives and dies with its connection. You can keep a transaction open for as long as you like -- but only by also keeping the session, and thereby the connection open. If the session is closed before the transaction commits, it automatically rolls back. Which is a good thing, in general, because transactions tend to use pessimistic locking. You don't want to keep those locks around for longer than necessary.
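A quick way to see this for yourself, as a throwaway demo against a scratch table (all names invented):

```sql
-- Session 1: open a transaction and leave it uncommitted.
CREATE TABLE dbo.Demo (Id int PRIMARY KEY, Val varchar(20));

BEGIN TRANSACTION;
INSERT INTO dbo.Demo (Id, Val) VALUES (1, 'pending');
SELECT @@SPID AS session_id, @@TRANCOUNT AS open_transactions;

-- Session 2: kill session 1 (substitute the session_id reported above).
-- KILL 55;

-- Any new session: the insert is gone, because ending the session
-- rolled back its open transaction.
SELECT * FROM dbo.Demo;   -- returns no rows
```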
While there is such a thing as a distributed transaction that you can enlist in even if the current connection did not begin the transaction, this will still not do what you want for the scenario of multiple distributed nodes performing actions in succession to complete a transaction on one database. Specifically, you'd still need to have one "master" node to keep the transaction alive and decide it should finally commit now, and you need a way to make nodes aware of the transaction so they can enlist. I don't recommend you actually go this way, as it's much more complicated than tailoring a solution to your specific scenario (typically, accumulating modifications in their own table and committing them as a batch when they're complete, which can be done in one transaction).
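A rough sketch of that last suggestion, with invented table names: each HTTP request only appends rows to a pending-changes table keyed by a client-supplied batch id, and a final "commit" request applies the whole batch in one short transaction:

```sql
-- Staging table; every REST call just inserts rows here, with no transaction held open.
CREATE TABLE dbo.PendingOrderLines (
    BatchId   uniqueidentifier NOT NULL,
    ProductId int              NOT NULL,
    Quantity  int              NOT NULL
);

-- Final "commit" request: apply the accumulated batch atomically.
-- Assumes a real dbo.OrderLines table as the target.
CREATE PROCEDURE dbo.CommitOrderBatch @BatchId uniqueidentifier
AS
BEGIN
    SET XACT_ABORT ON;   -- any error rolls back the whole batch
    BEGIN TRANSACTION;

    INSERT INTO dbo.OrderLines (ProductId, Quantity)
    SELECT ProductId, Quantity
    FROM dbo.PendingOrderLines
    WHERE BatchId = @BatchId;

    DELETE FROM dbo.PendingOrderLines
    WHERE BatchId = @BatchId;

    COMMIT TRANSACTION;
END;
```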

You could use a queue-oriented design, where the application simply adds to the queue, while a SQL Server Agent job pops the queue and executes the work.
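A minimal sketch of that idea, with made-up names: the web application only ever inserts into a queue table, and a SQL Server Agent job (or any background worker) periodically pops an item and does the real work:

```sql
-- Queue table the application inserts into.
CREATE TABLE dbo.WorkQueue (
    QueueId    bigint IDENTITY PRIMARY KEY,
    Payload    nvarchar(max) NOT NULL,
    EnqueuedAt datetime2     NOT NULL DEFAULT SYSUTCDATETIME()
);

-- Called from a SQL Server Agent job on a schedule.
CREATE PROCEDURE dbo.ProcessNextQueueItem
AS
BEGIN
    SET XACT_ABORT ON;
    BEGIN TRANSACTION;

    DECLARE @Popped TABLE (Payload nvarchar(max));

    -- READPAST skips rows locked by other workers, so concurrent poppers don't block.
    -- TOP (1) without ORDER BY is arbitrary; wrap it in a CTE with ORDER BY if you need FIFO.
    DELETE TOP (1)
    FROM dbo.WorkQueue WITH (ROWLOCK, READPAST)
    OUTPUT deleted.Payload INTO @Popped;

    -- ... do the actual work with the popped payload here ...

    COMMIT TRANSACTION;
END;
```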

Related

Is it possible to create a lock free SNAPSHOT transaction in SQL Server?

TL;DR: Is it possible to basically create a fast, temporary "fork" of a database (like a snapshot transaction) without any locks, given that I know for a fact that the changes will never be committed and will always be rolled back?
Details:
I'm currently working with SQL Server and am trying to implement a feature where the user can try all sorts of stuff (in the application) that is never persisted in the database.
My first instinct was to (mis)use snapshot transactions for that to basically "fork" the database into a short lived (under 15min) user-specific context. The rest of the application wouldn't even have to know that all the actions the user performs will later be thrown away (I currently persist the connection across requests - it's a web application).
Problem is that there are situations where the snapshot transaction locks and waits for other transactions to complete. My guess is that this happens because SQL Server has to make sure it can merge the data if one of the open transactions commits, but in my case I know for a fact that I will never commit the changes from this transaction and will always throw the data away (note that not everything happens in this transaction; there are other things a user can do that happen on a different connection and are persisted).
Are there other ideas that don't involve cloning the database (too large/slow) or updating/changing the schema of all tables? (I'd like to avoid "poisoning" the schema with the implementation detail of the "try out" feature.)
No. SQL Server has copy-on-write Database Snapshots, but the snapshots are read-only. So where a SNAPSHOT transaction acquires regular exclusive locks when it modifies the database, a Database Snapshot would just give you an error.
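For reference, this is what creating one looks like (database, file path, and table names are placeholders); reads work, writes fail:

```sql
-- Copy-on-write snapshot of an existing database.
CREATE DATABASE MyDb_Snapshot
ON (NAME = MyDb_Data, FILENAME = 'C:\Snapshots\MyDb_Snapshot.ss')
AS SNAPSHOT OF MyDb;

-- Reads against the snapshot are fine:
SELECT TOP (10) * FROM MyDb_Snapshot.dbo.SomeTable;

-- Writes fail, because the snapshot is read-only:
UPDATE MyDb_Snapshot.dbo.SomeTable SET SomeColumn = 0;
```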
There are storage technologies that can create a writable copy-on-write storage snapshot, like NetApp. You would run a command to create a new LUN that is a snapshot of an existing LUN, present it to your server as a disk, mount its volume in a folder or drive letter, and attach the files you find there as a database. This is often done for cloning across environments, to refresh dev/test with prod data without having to copy all the data. But it seems like way too much infrastructure work for your use case.

When using a SQL Transaction, how long can I keep it open without causing a problem

I have a Web App hosted on Azure. It has a complicated signup form allowing users to rent lockers, add spouse memberships, etc. I don't want to add records to the database until EVERYTHING on that page is completed and checks out. I am using a SQL Transaction so that I can add records to various tables and then roll them back if the user does not complete the entries properly, or simply exits the page. I don't want a bunch of orphaned records in my DB. All of the records that will eventually be added reference each other by the identity field on each table. So, if I don't add records to a table, I don't get an identity returned to reference in other tables.
At the start of the page, I open a SQL connection and associate it with a Transaction and I hold the transaction open until the end of the process. If all is well, I commit the transaction, send out emails etc.
I know best practice is to open and close a SQL connection as quickly as possible. I don't know of any other way to operate this page without opening a SQL connection and transaction and holding it open until the end of the process.
If I should not be doing it this way, how do others do it?
I see two questions here: one about how I would do it, and the other about the limits of the DB. Starting with the second: how long a transaction can stay open depends on your connection and its timeout settings, so as long as the connection is still alive, you can complete the commit or do the rollback.
As for how to do it: I'd not do it that way. Tying a lock-holding database process to user interaction is a really bad approach. You put performance in your users' hands, and you're also assuming well-intentioned clients, but you'll have bad guys too.
I'd store the information locally in the web browser and, once the process is complete, send it to the DB to commit it. So the final "POST" would create all the items, which is also going to take some time.
Another option, if you want to keep it server side, is a Redis server to cache the information and then "move it" into the DB when the process is finished.
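For the first option, a sketch (table and column names invented) of what the final POST could run server-side: everything goes in one short transaction, and the parent row's identity value is captured for the child rows, so nothing is ever left orphaned:

```sql
CREATE PROCEDURE dbo.CompleteSignup
    @MemberName nvarchar(100),
    @SpouseName nvarchar(100) = NULL,
    @LockerCode nvarchar(20)  = NULL
AS
BEGIN
    SET XACT_ABORT ON;   -- any error rolls everything back
    BEGIN TRANSACTION;

    INSERT INTO dbo.Members (Name) VALUES (@MemberName);
    DECLARE @MemberId int = SCOPE_IDENTITY();

    IF @SpouseName IS NOT NULL
        INSERT INTO dbo.SpouseMemberships (MemberId, SpouseName)
        VALUES (@MemberId, @SpouseName);

    IF @LockerCode IS NOT NULL
        INSERT INTO dbo.LockerRentals (MemberId, LockerCode)
        VALUES (@MemberId, @LockerCode);

    COMMIT TRANSACTION;
END;
```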

What does sql server do when a JPA transaction fails across a network?

I'm using JPA to connect to an SQL server across a WAN. I've been unable to find information on what happens when I begin a JPA transaction that involves writes to the remote DB, but the WAN connection goes down before or during commit.
In each transaction, I'm transmitting a header and several hundred detail lines.
Does the far-end database know enough to discard all the changes?
Obviously, requesting a rollback on the local application isn't going to have any effect since the WAN link is down.
I presume:
By "the connection goes down", I mean, that the dbms-client-driver signals to your application code, that the connection has been lost.
you have just one DBMS, which means no two-phase commit.
Then:
It does not matter whether you are using SQL Server via WAN or LAN. Either the transaction is done completely, or not at all.
That is the nature of transactions.
So if the connection goes down before the commit, the server will roll back everything. There is no way to reconnect at the application level to complete the transaction.
If the connection goes down during the commit, then depending on the implementation and on the exact point in time, the transaction might be persisted completely or rolled back completely.
You can be absolutely sure that everything is persisted as intended as soon as commit returns to your code.
Beware, that "connection goes down" might happen after a timeout that might be quite long (several minutes). In that time, the transactions keep all the locks and might slow down the complete system. These timeouts might be set to longer intervals if you are communicating via slower network.

Is it possible to use transaction between two instances without enable DTC?

I have two database instances located on two servers. I want to create an application that inserts data into the first one and then updates data on the second instance; if either of these processes fails, I want to roll back all operations.
The database servers do not have DTC/MSDTC enabled. I tried to use TransactionScope but no luck. Do you guys have any idea how I can do this?
if either of these processes fails, I want to roll back all operations
You are describing a distributed transaction. To use distributed transactions, you need a transaction coordinator. You can't have your cake and eat it too.
There are alternatives if you consider asynchronous application of the changes, i.e. replication. This removes the distributed transaction requirement, but the changes are applied asynchronously to the second server, after they are committed on the first server.
One option would be to put compensation code into your application. For example, if your application were C#-based, you could have a try...catch block. In the catch block, you could add compensation code to "undo" the changes you made to the data on the first server.
The best method, however, is of course to make a case to the DBAs to enable DTC.

SQL Server Service Broker -- Suggestion for Handling Two-Phase Commit Between SQL Server Instances

We're exploring different approaches for communicating between two different SQL Server instances. One of the desired workflows is to send a message of some sort to the "remote" side requesting, let's say, deletion of a record. When that system completes the deletion, it holds its transaction open and sends a response back to the initiator, who then deletes its corresponding record, commits its transaction, and then sends a message back to the "remote" side, telling it, finally, to commit the deletion on its side as well.
It's a poor man's approximation of two-phase commit. There's a religious debate going on in the department as to whether SQL Server Service Broker can or can't handle this type of scenario reasonably cleanly. Can anyone shed light on whether it can? Any experience in similar types of workflows? Is there a better mechanism I should be looking at to accomplish this, considering that the SQL Server instances are on separate, non-domain machines?
Edit: To clarify, we can't use distributed transactions due to the fact that network security is both tight and somewhat arbitrary. We're not allowed the configuration that would make that possible.
Unless I'm misunderstanding the requirements, I'd say it's a perfect job for Service Broker. Service Broker frees you from the need of using distributed transactions and 2PC. What you do with Service Broker is reduce the problem to local transactions and transactional messaging between the servers.
In your particular case, one of the servers would delete its record and then (as part of the same transaction) send a message to the other server requesting deletion of the corresponding record. After enqueuing the message, the first server can commit the transaction and forget the whole thing without waiting for synchronization with the second server. Service Broker guarantees that once enqueuing of a message is committed, the message will be transactionally delivered to the destination, which can then delete its record as part of the same transaction in which it received the message, thus making sure the message processing and data changes are atomic.
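A rough sketch of the initiating side, assuming the Service Broker plumbing (message type, contract, queues, services, and routes between the two instances) is already set up; all names here are invented:

```sql
DECLARE @CustomerId int = 42;
DECLARE @Handle uniqueidentifier;

-- Delete locally and enqueue the request to the remote side in the SAME transaction.
BEGIN TRANSACTION;

DELETE FROM dbo.Customers WHERE CustomerId = @CustomerId;

BEGIN DIALOG CONVERSATION @Handle
    FROM SERVICE [//local/DeleteInitiatorService]
    TO SERVICE   '//remote/DeleteTargetService'
    ON CONTRACT  [//shared/DeleteContract]
    WITH ENCRYPTION = OFF;

SEND ON CONVERSATION @Handle
    MESSAGE TYPE [//shared/DeleteCustomerRequest]
    (N'<DeleteCustomer Id="' + CAST(@CustomerId AS nvarchar(20)) + N'"/>');

COMMIT TRANSACTION;

-- Once this commit succeeds, Service Broker guarantees delivery; the remote side
-- deletes its own row in the same transaction in which it RECEIVEs the message.
```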
Have you tried using a distributed transaction?
It will do everything you need, but each server will need to be configured on the other as a linked server.