Is it possible to use a transaction between two instances without enabling DTC? - sql-server

I have two database instances located on two different servers. I want to create an application that inserts data into the first instance and then updates data on the second; if either of these operations fails, I want to roll back all of them.
The database servers do not have DTC/MSDTC enabled. I tried to use TransactionScope but had no luck. Does anyone have an idea how I can do this?

if either of these operations fails, I want to roll back all of them
You are describing a distributed transaction. To use distributed transactions, you need a transaction coordinator. You can't have your cake and eat it too.
There are alternatives if you can accept asynchronous application of the changes, i.e. replication. This removes the distributed transaction requirement, but the changes are applied to the second server asynchronously, after they have been committed on the first server.

One option would be to put compensation code into your application. For example, if your application were C#-based, you could have a try...catch block. In the catch block, you could add compensation code to "undo" the changes you made to the data on the first server.
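A minimal sketch of that compensation pattern, written here in T-SQL (the table, the linked server name [ServerB], and the parameters are all hypothetical; the same shape works from a C# try...catch using two connections):

    -- Hypothetical objects: dbo.Orders lives on the first server,
    -- [ServerB] is a linked server pointing at the second instance.
    DECLARE @OrderId int = 42, @CustomerId int = 7, @Amount money = 100;

    BEGIN TRY
        -- Step 1: write to the first server (commits locally on its own).
        INSERT INTO dbo.Orders (OrderId, CustomerId, Amount)
        VALUES (@OrderId, @CustomerId, @Amount);

        -- Step 2: update the second server through the linked server.
        -- Deliberately not wrapped in an explicit local transaction,
        -- since that would try to promote to a DTC transaction.
        UPDATE [ServerB].SalesDb.dbo.CustomerTotals
        SET Total = Total + @Amount
        WHERE CustomerId = @CustomerId;
    END TRY
    BEGIN CATCH
        -- Compensation: undo the local insert if the remote update failed.
        DELETE FROM dbo.Orders WHERE OrderId = @OrderId;
        THROW;
    END CATCH;

Note that this is only best effort: if the process dies between the two steps, or the compensating delete itself fails, the servers are left inconsistent, which is exactly the gap a transaction coordinator closes.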
The best method, however, is of course to make a case to the DBAs to enable DTC.

Related

Is it possible to create a lock-free SNAPSHOT transaction in SQL Server?

TL;DR: Is it possible to create a fast, temporary "fork" of a database (like a snapshot transaction) without any locks, given that I know for a fact that the changes will never be committed and will always be rolled back?
Details:
I'm currently working with SQL Server and am trying to implement a feature where the user can try all sorts of stuff (in the application) that is never persisted in the database.
My first instinct was to (mis)use snapshot transactions for this, to basically "fork" the database into a short-lived (under 15 minutes) user-specific context. The rest of the application wouldn't even have to know that all the actions the user performs will later be thrown away (I currently persist the connection across requests - it's a web application).
The problem is that there are situations where the snapshot transaction locks and waits for other transactions to complete. My guess is that this happens because SQL Server has to make sure it can merge the data if one of the open transactions commits, but in my case I know for a fact that I will never commit the changes from this transaction and will always throw the data away (note that not everything happens in this transaction; there are other things a user can do that happen on a different connection and are persisted).
Are there other ideas that don't involve cloning the database (too large/slow) or updating/changing the schema of all tables? (I'd like to avoid "poisoning" the schema with the implementation details of the "try out" feature.)
No. SQL Server has copy-on-write Database Snapshots, but the snapshots are read-only. So where a SNAPSHOT transaction acquires regular exclusive locks when it modifies the database, a Database Snapshot would just give you an error.
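For reference, creating a Database Snapshot looks roughly like this (database and file names are made up; the logical file name must match the source database's):

    -- A Database Snapshot is a point-in-time, copy-on-write, read-only view.
    CREATE DATABASE SalesDb_TrySnapshot
    ON ( NAME = SalesDb_Data, FILENAME = 'D:\Snapshots\SalesDb_TrySnapshot.ss' )
    AS SNAPSHOT OF SalesDb;

    -- Reads see the database as it was when the snapshot was created...
    SELECT COUNT(*) FROM SalesDb_TrySnapshot.dbo.Orders;

    -- ...but any attempt to modify it fails, because the snapshot is read-only.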
There are storage technologies that can create a writable copy-on-write storage snapshot, like NetApp. You would run a command to create a new LUN that is a snapshot of an existing LUN, present it to your server as a disk, mount its volume in a folder or drive letter, and attach the files you find there as a database. This is often done for cloning across environments, to refresh dev/test with prod data without having to copy all the data. But it seems like way too much infrastructure work for your use case.

Keep a transaction open on SQL Server with connection closed

On SQL Server, is it possible to begin a transaction but intentionally orphan it from an open connection yet keep it from rolling back?
The use case is for a REST service.
I'd like to be able to link a series of HTTP requests to work under one transaction. This can be done if the service is stateful, i.e. a single REST API server holds a connection open (mapping an HTTP header value to a named connection), but that is a flawed idea in a non-sticky farm of web servers.
If the DB supported the notion of something like named/leased transactions, kinda like a named mutex, this could be done.
I appreciate there are other RESTful designs for atomic data mutations.
Thanks.
No. A transaction lives and dies with the session it's created in, and a session lives and dies with its connection. You can keep a transaction open for as long as you like -- but only by also keeping the session, and thereby the connection open. If the session is closed before the transaction commits, it automatically rolls back. Which is a good thing, in general, because transactions tend to use pessimistic locking. You don't want to keep those locks around for longer than necessary.
While there is such a thing as a distributed transaction that you can enlist in even if the current connection did not begin the transaction, this will still not do what you want for the scenario of multiple distributed nodes performing actions in succession to complete a transaction on one database. Specifically, you'd still need to have one "master" node to keep the transaction alive and decide it should finally commit now, and you need a way to make nodes aware of the transaction so they can enlist. I don't recommend you actually go this way, as it's much more complicated than tailoring a solution to your specific scenario (typically, accumulating modifications in their own table and committing them as a batch when they're complete, which can be done in one transaction).
You could use a queue-oriented design, where the application simply adds items to the queue while a SQL Server Agent job pops the queue and executes them.
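A minimal sketch of that accumulate-then-commit pattern (all object names are hypothetical): each HTTP request appends its changes to a pending table keyed by a batch id, and a final "commit" request (or an Agent job) applies them in one short, real transaction:

    -- Hypothetical staging table: each REST call appends rows for its batch.
    CREATE TABLE dbo.PendingOrderChanges
    (
        BatchId   uniqueidentifier NOT NULL,
        OrderId   int              NOT NULL,
        NewStatus varchar(20)      NOT NULL,
        QueuedAt  datetime2        NOT NULL DEFAULT SYSUTCDATETIME()
    );
    GO
    -- Called once the client decides its logical "transaction" is complete.
    CREATE PROCEDURE dbo.CommitOrderChanges @BatchId uniqueidentifier
    AS
    BEGIN
        SET NOCOUNT ON;
        BEGIN TRAN;  -- one short database transaction on a single connection
            UPDATE o
            SET o.Status = p.NewStatus
            FROM dbo.Orders AS o
            JOIN dbo.PendingOrderChanges AS p ON p.OrderId = o.OrderId
            WHERE p.BatchId = @BatchId;

            DELETE FROM dbo.PendingOrderChanges WHERE BatchId = @BatchId;
        COMMIT;
    END;

Abandoned batches can simply be purged later, since nothing outside the pending table was ever touched.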

Is it possible to commit/rollback another procedure's transaction in the current procedure?

I have two questions:
1. I have two stored procedures. Is it possible to commit/rollback another procedure's transaction in my current procedure?
2. I have two web services connected to the same database (or to a linked-server database). The first web service's transactions succeed, but while the second web service is running an error occurs. If an error occurs, I have to roll back the previous web service's transactions. Is that possible, and how? If possible, please explain in terms of banking transactions (like an ATM), with a little easy-to-understand code.
No, a commit must be issued from the same connection on which the BEGIN TRANSACTION statement was issued.
In this case you would first need to add a "transaction" column (or something similar) to the data tables, one that uniquely identifies each logical transaction. If the second web service needed to issue a rollback that touched the work of the first web service, it would have to call a custom process that would then issue deletes looking for the transaction identifier you have built into the tables. There is no built-in functionality in the database engine to accommodate this out of the box.
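A rough sketch of that idea (all names are hypothetical): every row a web service writes carries a batch/transaction identifier, and a compensating procedure deletes by that identifier when a later step fails:

    -- Hypothetical table: each row records which logical "transaction" wrote it.
    CREATE TABLE dbo.AccountMovements
    (
        MovementId     int IDENTITY PRIMARY KEY,
        AccountId      int              NOT NULL,
        Amount         money            NOT NULL,
        TransactionRef uniqueidentifier NOT NULL   -- assigned by the web service
    );
    GO
    -- Called by the second web service when it fails and must undo step 1.
    CREATE PROCEDURE dbo.UndoMovements @TransactionRef uniqueidentifier
    AS
    BEGIN
        SET NOCOUNT ON;
        DELETE FROM dbo.AccountMovements
        WHERE TransactionRef = @TransactionRef;
    END;

In the ATM analogy, the debit recorded by the first service is simply deleted (or reversed with an offsetting row) when the second service reports a failure.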

Exception handling in a real-time, SQL Server-driven system

I have developed a report viewer in .NET Winforms (it just runs queries and displays results).
This works against a reporting database. However, the above is a small subset of a much larger application, which gets data from another database. It looks like this:
Monitored system has a change in state (e.g. latency increases) => Event is recorded into SQL Server database (call this database A) as a transaction => This fires a trigger to write the same event into the reporting database.
I am not sure about the differences between the two databases; they may be tuned for different goals, or there may be some financial or even political reason for having two databases.
Anyway, it was mentioned that the reporting database is "transactionally dependent" on the main database. What exactly does this mean? That the reporting database depends entirely on the transactions of database A? This made me think of some questions:
1) How could I handle the situation where the reporting database has no disk space, but database A is still firing triggers at the reporting database? Would it be good to queue them?
2) Linked to the above, would it work if I queued the triggers and their data when they cannot fire into the reporting DB (not sure how, but conceptually...)? Even then, this makes the system not real-time.
Are there any other dangers/issues with exception handling in a setup like this?
Thanks
Such dependencies are actually very bad in production. For one, triggers that update (remote) databases are a sure way to kill performance. But more important is the issue of availability. The applications that depend on database A are now tied to the availability of database B, because if database B is unavailable the trigger cannot do its work; it will fail and the application will hit errors. So right now the administrator(s) of database B are on the hook for the operations of the applications using database A.
There are many approaches to this issue; the simplest one is to deploy transactional replication from a publication in database A to a subscription in database B. This isolates the two databases from a transactional point of view, allowing applications that depend on database A to go ahead unhindered when database B is unavailable, or just slow.
If the system has to be real time, then triggers are the only way. Note that triggers are fully synchronous: the operation on the reporting database has to complete successfully, or the trigger fails. Since that happens inside a trigger, the statement on the original table fails too; that error may or may not be caught, but either way the change to that table in the transaction database will not occur.
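To make that "transactionally dependent" coupling concrete, the trigger described above looks roughly like this (object names are hypothetical, with ReportingDb standing in for the reporting database):

    CREATE TRIGGER trg_Events_CopyToReporting
    ON dbo.Events
    AFTER INSERT
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Runs inside the original insert's transaction: if this insert fails
        -- (e.g. ReportingDb is out of disk space or unreachable), the event
        -- insert into database A fails and rolls back as well.
        INSERT INTO ReportingDb.dbo.Events (EventId, OccurredAt, Latency)
        SELECT EventId, OccurredAt, Latency
        FROM inserted;
    END;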
There are valid reasons for this scenario, but it really creates a dependency of the transaction database on the reporting database, since if the reporting database is down, the transaction database effectively becomes read-only or worse.
That's not really what you want.
You can look at replication if your databases have the same structure. Typically, though, when I think of a reporting database, I think of something with a different structure that is optimized for reporting, not just another copy of the data isolated for performance reasons (which is fine, but that is basically just throwing hardware at the problem to stop reporting users from hurting transaction users).

Trigger without a transaction?

Is it possible to create a trigger that will not be in a transaction?
I want to update data on a linked server with a trigger, but due to firewall issues we can't create a distributed transaction between the two servers.
What you probably want is a combination of a queue that contains the updates for the linked server and a process that reads data from the queue and updates the remote server. The trigger then inserts a message into the queue as part of the normal transaction. This data is read by the separate process and used to update the remote server. Logic will be needed in that process to handle errors (and possibly retries).
The queue can be implemented with one or more tables.
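A minimal sketch of that approach (all object names, including [LinkedServer], are hypothetical): the trigger only touches local tables, so the original transaction never needs DTC, and a separate process drains the queue and applies the changes remotely:

    -- Local queue table; inserting into it happens inside the original
    -- transaction, so queuing is atomic with the triggering change.
    CREATE TABLE dbo.RemoteUpdateQueue
    (
        QueueId    int IDENTITY PRIMARY KEY,
        CustomerId int           NOT NULL,
        NewEmail   nvarchar(256) NOT NULL
    );
    GO
    CREATE TRIGGER trg_Customers_QueueRemoteUpdate
    ON dbo.Customers
    AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Only local tables are touched here, so no distributed transaction.
        INSERT INTO dbo.RemoteUpdateQueue (CustomerId, NewEmail)
        SELECT CustomerId, Email FROM inserted;
    END;
    GO
    -- Drain procedure, run periodically (e.g. from a SQL Server Agent job).
    -- Each remote update runs as its own autocommit statement; if it throws,
    -- the row stays in the queue and is retried on the next run.
    CREATE PROCEDURE dbo.ProcessRemoteUpdateQueue
    AS
    BEGIN
        SET NOCOUNT ON;
        DECLARE @QueueId int, @CustomerId int, @NewEmail nvarchar(256);

        SELECT TOP (1) @QueueId = QueueId, @CustomerId = CustomerId, @NewEmail = NewEmail
        FROM dbo.RemoteUpdateQueue ORDER BY QueueId;

        WHILE @QueueId IS NOT NULL
        BEGIN
            UPDATE [LinkedServer].RemoteDb.dbo.Customers
            SET Email = @NewEmail
            WHERE CustomerId = @CustomerId;

            DELETE FROM dbo.RemoteUpdateQueue WHERE QueueId = @QueueId;

            SET @QueueId = NULL;
            SELECT TOP (1) @QueueId = QueueId, @CustomerId = CustomerId, @NewEmail = NewEmail
            FROM dbo.RemoteUpdateQueue ORDER BY QueueId;
        END
    END;

Keeping the remote update out of the trigger also means a slow or unreachable linked server no longer blocks the original write.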
I know it's not helpful, so I'll probably get downvoted for this, but really, the solution is to fix the firewall problem.
I think if you use remote (not linked) servers (which are not the preferred option these days) then you can use SET REMOTE_PROC_TRANSACTIONS OFF to prevent the use of DTC for remote transactions, which might do the right thing here. But that probably doesn't help you with a linked server anyway.
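For completeness, that is a session-level SET option; the remote server and procedure names below are hypothetical:

    -- With this OFF, calling a stored procedure on a remote server (the legacy
    -- sp_addserver mechanism, not a linked server) from inside a local
    -- transaction is not promoted to an MSDTC distributed transaction.
    SET REMOTE_PROC_TRANSACTIONS OFF;

    BEGIN TRAN;
        EXEC RemoteServer.TargetDb.dbo.usp_ApplyUpdate @Id = 42;  -- hypothetical
    COMMIT;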
