Is it possible to create a trigger that will not be in a transaction?
I want to update data on a linked server with a trigger but due to firewall issues we can't create a distributed transaction between the two servers.
What you probably want is a combination of a queue that contains updates for the linked server and a process that reads data from the queue and updates the remote server. The trigger then inserts a message into the queue as part of the normal transaction. That data is read by the separate process and used to update the remote server. Logic will be needed in the process to handle errors (and possibly retries).
The queue can be implemented with one or more tables.
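As a rough sketch with invented table and column names: the trigger only records what needs to change, and nothing inside the trigger touches the linked server, so no distributed transaction is started.

```sql
-- Hypothetical queue table holding updates destined for the linked server.
CREATE TABLE dbo.RemoteUpdateQueue
(
    QueueId    int IDENTITY(1,1) PRIMARY KEY,
    CustomerId int           NOT NULL,
    NewName    nvarchar(200) NOT NULL,
    QueuedAt   datetime2     NOT NULL DEFAULT SYSUTCDATETIME(),
    SentAt     datetime2     NULL
);
GO

-- The trigger enqueues as part of the normal local transaction; no DTC is involved.
CREATE TRIGGER trg_Customer_QueueRemoteUpdate
ON dbo.Customer
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO dbo.RemoteUpdateQueue (CustomerId, NewName)
    SELECT i.CustomerId, i.Name
    FROM inserted AS i;
END;
```

The separate process then reads rows where SentAt is NULL, applies them to the remote server, and marks them as sent (or leaves them for a retry).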
I know it's not helpful, so I'll probably get downvoted for this, but really, the solution is to fix the firewall problem.
I think if you use remote (not linked) servers (which are not the preferred option these days) then you can use SET REMOTE_PROC_TRANSACTIONS OFF to prevent the use of DTC for remote transactions, which might do the right thing here. But that probably doesn't help you with a linked server anyway.
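For reference, the setting mentioned is a session-level option (deprecated in later versions, and it only affects calls to remote servers registered with sp_addserver, not linked-server queries); the server and procedure names below are made up for illustration.

```sql
-- Applies to remote-server stored procedure calls, not linked-server queries.
SET REMOTE_PROC_TRANSACTIONS OFF;

BEGIN TRANSACTION;
-- With the option OFF, this remote call is not promoted to an MSDTC transaction.
EXEC RemoteServer.SomeDb.dbo.usp_UpdateSomething @Id = 1;  -- hypothetical remote procedure
COMMIT;
```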
On SQL Server, is it possible to begin a transaction but intentionally orphan it from an open connection yet keep it from rolling back?
The use case is for a REST service.
I'd like to be able to link a series of HTTP requests so they work under one transaction. That can be done if the service is stateful, i.e. a single REST API server holds the connection open and maps an HTTP header value to a named connection, but it's a flawed idea in a non-sticky farm of web servers.
If the DB supported the notion of something like named/leased transactions, kinda like a named mutex, this could be done.
I appreciate there are other RESTful designs for atomic data mutations.
Thanks.
No. A transaction lives and dies with the session it's created in, and a session lives and dies with its connection. You can keep a transaction open for as long as you like -- but only by also keeping the session, and thereby the connection open. If the session is closed before the transaction commits, it automatically rolls back. Which is a good thing, in general, because transactions tend to use pessimistic locking. You don't want to keep those locks around for longer than necessary.
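To illustrate the point, a minimal sketch (the table name is invented):

```sql
-- Session A
BEGIN TRANSACTION;
UPDATE dbo.Orders SET Status = 'Held' WHERE OrderId = 42;
-- No COMMIT yet: the row's locks are held as long as this session stays open.

-- If session A's connection is closed or killed at this point, SQL Server rolls the
-- transaction back automatically; there is no way to hand the open transaction
-- to a different connection later.
```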
While there is such a thing as a distributed transaction that you can enlist in even if the current connection did not begin the transaction, this will still not do what you want for the scenario of multiple distributed nodes performing actions in succession to complete a transaction on one database. Specifically, you'd still need to have one "master" node to keep the transaction alive and decide it should finally commit now, and you need a way to make nodes aware of the transaction so they can enlist. I don't recommend you actually go this way, as it's much more complicated than tailoring a solution to your specific scenario (typically, accumulating modifications in their own table and committing them as a batch when they're complete, which can be done in one transaction).
You could use a queue-oriented design, where the application simply adds to the queue, while a SQL Server Agent job pops the queue and executes the work.
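A minimal sketch of that pattern, with invented names: the application only ever inserts into a pending-work table inside its own short transaction, and a procedure scheduled by a SQL Server Agent job works through the backlog.

```sql
-- Hypothetical queue table the application inserts into.
CREATE TABLE dbo.PendingWork
(
    PendingWorkId int IDENTITY(1,1) PRIMARY KEY,
    Payload       nvarchar(max) NOT NULL,      -- e.g. a serialized change description
    CreatedAt     datetime2     NOT NULL DEFAULT SYSUTCDATETIME(),
    ProcessedAt   datetime2     NULL
);
GO

-- Procedure a SQL Server Agent job could run on a schedule to pop and apply the queue.
CREATE PROCEDURE dbo.usp_ProcessPendingWork
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @Id int, @Payload nvarchar(max);

    WHILE 1 = 1
    BEGIN
        SET @Id = NULL;

        BEGIN TRANSACTION;

        -- Claim the oldest unprocessed row; READPAST lets parallel workers skip locked rows.
        SELECT TOP (1) @Id = PendingWorkId, @Payload = Payload
        FROM dbo.PendingWork WITH (UPDLOCK, READPAST)
        WHERE ProcessedAt IS NULL
        ORDER BY PendingWorkId;

        IF @Id IS NULL
        BEGIN
            COMMIT;
            BREAK;
        END

        -- Apply the change described by @Payload here ...

        UPDATE dbo.PendingWork
        SET ProcessedAt = SYSUTCDATETIME()
        WHERE PendingWorkId = @Id;

        COMMIT;
    END
END;
```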
I have two applications.
One inserts data into the database continuously, as if in an infinite loop.
What will happen when the second application inserts data into the same database and table?
Will it wait for the other application to finish inserting, and if so, what handles that?
Or will it say it is busy?
Or will the code throw an exception?
SQL Server supports many simultaneous connections (clients typically manage them through a connection pool), which means that more than one connection to the database can be open at any particular time, and that's where the easy bit ends.
If, for example, you were to connect to the database from two applications at the same time and insert data into different tables from each application, then the two inserts could happily happen at the same time without issue.
If, however, those applications wanted to do something like edit the same row, then "locking" becomes an issue ...
Essentially, any operation on a SQL database requires acquiring a lock on a table, a page, or a row. Depending on the configuration of the server, it's hard to say exactly what might happen in your case.
So the simple answer is:
Yes, SQL Server can make stuff happen (like inserts) at the same time, but with some caveats.
And the long answer ...
requires in-depth knowledge of locking and of your database and server configuration.
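To make the locking point concrete, here's a hedged illustration with an invented table; note that the second session waits (blocks) rather than failing.

```sql
-- Session 1
BEGIN TRANSACTION;
INSERT INTO dbo.Readings (SensorId, Value) VALUES (1, 42);
-- Transaction left open, so the locks protecting the new row are still held.

-- Session 2 (a separate connection)
INSERT INTO dbo.Readings (SensorId, Value) VALUES (2, 17);
-- Plain inserts of different rows usually don't conflict and complete immediately.
-- If session 2 did need a lock that session 1 holds (e.g. after lock escalation to
-- the table), it would wait rather than error, until a timeout or a deadlock.

-- Session 1
COMMIT;  -- releases the locks; any blocked sessions carry on
```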
I am looking for the best practice for the following scenario.
We have a CRM in our company. When an employee updates the record of a company, there is a trigger that fires a stored procedure, which runs a CRUD statement against the linked server hosting the SQL DB of our website.
Question:
What happens if the connection is lost in the middle of the CRUD statement and the website's SQL DB does not get updated? What would be the best way to have the SQL statement processed again once the connection is back?
I read about Service Broker or Transactional Replication. Is one of these more appropriate for that situation?
The configuration:
Local: SQL server 2008 R2
Website: SQL server 2008
Here's one way of handling it, assuming that the CRUD statement isn't modal, in the sense that you don't have to give the user a response from the linked server before anything else can happen:
The trigger stores, in a local table, all the meta-information you need to run the CRUD statement on the linked server.
A job runs every n minutes that reads the table, attempts to do the CRUD statements stored in the table, and flags them as done if the linked server returns any kind of success message. The ones that aren't successful stay as they are in the table until the next time the job runs.
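A hedged sketch of step 2, with invented names (the linked server is called WebServer, and the queue table is assumed to have QueueId, CustomerId, NewName, and SentAt columns); rows that fail stay unflagged and are retried on the next run.

```sql
CREATE PROCEDURE dbo.usp_PushQueuedChangesToWebsite
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @QueueId int, @CustomerId int, @NewName nvarchar(200);

    DECLARE change_cursor CURSOR LOCAL FAST_FORWARD FOR
        SELECT QueueId, CustomerId, NewName
        FROM dbo.WebsiteSyncQueue
        WHERE SentAt IS NULL
        ORDER BY QueueId;

    OPEN change_cursor;
    FETCH NEXT FROM change_cursor INTO @QueueId, @CustomerId, @NewName;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        BEGIN TRY
            -- Apply the change on the linked server (the four-part name is an assumption).
            UPDATE WebServer.WebDb.dbo.Customer
            SET Name = @NewName
            WHERE CustomerId = @CustomerId;

            -- Flag as done only if the remote statement succeeded.
            UPDATE dbo.WebsiteSyncQueue
            SET SentAt = SYSUTCDATETIME()
            WHERE QueueId = @QueueId;
        END TRY
        BEGIN CATCH
            -- Leave the row unflagged; the next job run will retry it.
            PRINT ERROR_MESSAGE();
        END CATCH;

        FETCH NEXT FROM change_cursor INTO @QueueId, @CustomerId, @NewName;
    END

    CLOSE change_cursor;
    DEALLOCATE change_cursor;
END;
```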
If the connection failed in the middle of the trigger, the statement would still be inside the transaction and the data would not be written to either the CRM database or the web database. There is also a potential performance problem: the SQL Server data modification query wouldn't return control to the client until both the local and remote changes had completed. The latter wouldn't be a problem if the query were executed asynchronously, but fire-and-forget isn't a good pattern for writing data.
Service Broker would allow you to write the modification into a message and take care of ensuring that it is delivered in order and properly processed at the remote end. Performance would not suffer much, because the insert into the queue is designed to complete quickly, returning control to the trigger and allowing the original CRM query to complete.
However, it is quite a bit to set up. Even using Service Broker for simple tasks on a local server takes a fair amount of setup, partly because it is designed to handle secure, reliable, ordered conversations, so it needs a few layers to work. Once it is all there, it is very reliable and does a lot of the work you would otherwise have to do yourself to set up this sort of distributed conversation.
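To give a feel for the moving parts, here's a condensed, single-instance sketch with made-up names; a real cross-server setup additionally needs endpoints, routes, and security configuration, which are omitted here.

```sql
-- Message type, contract, queues, and services (all names are hypothetical).
CREATE MESSAGE TYPE [//crm/CustomerChanged] VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT [//crm/CustomerSync] ([//crm/CustomerChanged] SENT BY INITIATOR);

CREATE QUEUE dbo.CrmOutQueue;
CREATE QUEUE dbo.WebInQueue;

CREATE SERVICE [//crm/SenderService]   ON QUEUE dbo.CrmOutQueue ([//crm/CustomerSync]);
CREATE SERVICE [//crm/ReceiverService] ON QUEUE dbo.WebInQueue  ([//crm/CustomerSync]);
GO

-- Inside the trigger (or a procedure it calls), sending is just part of the local transaction.
DECLARE @h uniqueidentifier;

BEGIN DIALOG CONVERSATION @h
    FROM SERVICE [//crm/SenderService]
    TO SERVICE '//crm/ReceiverService'
    ON CONTRACT [//crm/CustomerSync]
    WITH ENCRYPTION = OFF;

SEND ON CONVERSATION @h
    MESSAGE TYPE [//crm/CustomerChanged]
    (N'<customer id="42"><name>New name</name></customer>');
```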
I have used it in the past to create a callback system from a website: a customer enters their number on the site and requests a callback, this is sent via Service Broker over a VPN from the web server to the back-office server, and a client application waits for a call on the Service Broker queue. It worked very efficiently once it was set up.
I have two database instances located on two servers. I want to create an application that inserts data into the first one and then updates data on the second instance; if either of these processes fails, I want to roll back all operations.
The database servers do not have DTC/MSDTC enabled. I tried to use TransactionScope but no luck. Do you guys have any idea how I can do this?
if either of these processes fails, I want to roll back all operations
You are describing a distributed transaction. To use distributed transactions, you need a transaction coordinator. You can't have your cake and eat it too.
There are alternatives if you consider asynchronous application of the changes, i.e. replication. This removes the distributed transaction requirement, but the changes are applied asynchronously to the second server, after they are committed on the first server.
One option would be to put compensation code into your application. For example, if your application were C#-based, you could have a try...catch block. In the catch block, you could add compensation code to "undo" the changes you made to the data on the first server.
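As a rough illustration of the compensation idea (shown here as T-SQL for brevity; in the application each step would run over its own connection to its own server, with a C# catch block doing the same thing, and all names are invented):

```sql
BEGIN TRY
    -- Step 1: insert on the first server (its own local transaction).
    INSERT INTO dbo.Orders (OrderId, Amount) VALUES (42, 100.00);

    -- Step 2: update on the second server (a separate local transaction,
    -- over a separate connection in the real application).
    UPDATE dbo.OrderTotals SET Total = Total + 100.00 WHERE CustomerId = 7;
END TRY
BEGIN CATCH
    -- Compensation: if step 2 failed, undo step 1 by hand.
    -- Note the compensation itself can also fail, which is the weakness of this pattern.
    DELETE FROM dbo.Orders WHERE OrderId = 42;

    RAISERROR('Second-server update failed; first-server insert was compensated.', 16, 1);
END CATCH;
```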
The best method, however, is of course to make a case to the DBAs to enable DTC.
We're exploring different approaches for communicating between two different SQL Server instances. One of the desired workflows is to send a message of some sort to the "remote" side requesting, let's say, deletion of a record. When that system completes the deletion, it holds its transaction open and sends a response back to the initiator, who then deletes its corresponding record, commits its transaction, and then sends a message back to the "remote" side, telling it, finally, to commit the deletion on its side as well.
It's a poor man's approximation of two-phase commit. There's a religious debate going on in the department as to whether SQL Server Service Broker can or can't handle this type of scenario reasonably cleanly. Can anyone shed light on whether it can? Any experience in similar types of workflows? Is there a better mechanism I should be looking at to accomplish this, considering that the SQL Server instances are on separate, non-domain machines?
Edit: To clarify, we can't use distributed transactions due to the fact that network security is both tight and somewhat arbitrary. We're not allowed the configuration that would make that possible.
Unless I'm misunderstanding the requirements, I'd say it's a perfect job for Service Broker. Service Broker frees you from the need of using distributed transactions and 2PC. What you do with Service Broker is reduce the problem to local transactions and transactional messaging between the servers.
In your particular case, one of the servers would delete its record and then (as part of the same transaction) send a message to the other server requesting deletion of the corresponding record. After enqueuing the message, the first server can commit the transaction and forget the whole thing without waiting for synchronization with the second server. Service Broker guarantees that once enqueuing of a message is committed, the message will be transactionally delivered to the destination, which can then delete its record as part of the same transaction in which it received the message, thus making sure the message processing and data changes are atomic.
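A hedged sketch of that shape (queue, service, and table names are invented, and the cross-instance endpoints, routes, and security setup are omitted):

```sql
-- Initiator side: delete locally and enqueue the request in one local transaction.
BEGIN TRANSACTION;

DELETE FROM dbo.Widgets WHERE WidgetId = 42;

DECLARE @h uniqueidentifier;
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE [//app/InitiatorService]
    TO SERVICE '//app/TargetService'
    ON CONTRACT [//app/DeleteWidget]
    WITH ENCRYPTION = OFF;

SEND ON CONVERSATION @h
    MESSAGE TYPE [//app/DeleteWidgetRequest]
    (CAST(N'<widget id="42"/>' AS xml));

COMMIT;  -- once this commits, delivery of the message is guaranteed

-- Target side (activation procedure or a job): receive and delete atomically.
BEGIN TRANSACTION;

DECLARE @h2 uniqueidentifier, @body varbinary(max);

RECEIVE TOP (1)
    @h2   = conversation_handle,
    @body = message_body
FROM dbo.TargetQueue;

IF @body IS NOT NULL
BEGIN
    DECLARE @msg xml = CAST(@body AS xml);
    DELETE FROM dbo.Widgets WHERE WidgetId = @msg.value('(/widget/@id)[1]', 'int');
END

COMMIT;  -- the message pickup and the delete commit (or roll back) together
```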
Have you tried using a distributed transaction?
It will do everything you need, but the servers will need to be connected to each other as linked servers.
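For completeness, a hedged sketch of what that looks like when MSDTC and a linked server (called Remote here, an assumption) are available, though the question's edit rules this out:

```sql
SET XACT_ABORT ON;   -- generally required for distributed transaction scenarios
BEGIN DISTRIBUTED TRANSACTION;

DELETE FROM dbo.Widgets WHERE WidgetId = 42;                  -- local
DELETE FROM Remote.RemoteDb.dbo.Widgets WHERE WidgetId = 42;  -- via the linked server

COMMIT TRANSACTION;  -- MSDTC coordinates the two-phase commit across both servers
```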