Control a trigger in SQL Server

I would like to know how we can control a trigger in SQL Server while its process is still in progress.
Example: when any UPDATE, DELETE, or INSERT operation is performed on a table (the Employee table), I execute a batch file located on a Windows drive. The batch file takes around 5 minutes to complete the whole process. My question: suppose I have multiple applications connected to the same Employee table. If an operation is performed on the table from one application, the trigger starts the batch process. If, in the meantime, another operation is performed from a second application, it triggers the batch file again. Because of this, performance degrades or the process crashes.
So I would like to know whether there is any way to control the trigger, such that until the batch file completes its process, the second trigger invocation is kept on hold, and after completion the process is started again.

You could have a helper table where your stored procedure checks whether running = 0; if so, it sets running = 1 and the batch starts. When the batch ends, set running = 0.
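A minimal sketch of that idea, assuming a hypothetical one-row dbo.BatchState table and procedure names; the single UPDATE is atomic, so two sessions can never both see running = 0 and launch the batch twice:

    -- Hypothetical one-row state table acting as a semaphore for the batch.
    CREATE TABLE dbo.BatchState (Id INT PRIMARY KEY, Running BIT NOT NULL);
    INSERT INTO dbo.BatchState (Id, Running) VALUES (1, 0);
    GO

    CREATE PROCEDURE dbo.TryStartBatch @Acquired BIT OUTPUT
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Atomic test-and-set: flips Running to 1 only if it is currently 0,
        -- so exactly one concurrent caller can acquire the flag.
        UPDATE dbo.BatchState SET Running = 1 WHERE Id = 1 AND Running = 0;
        SET @Acquired = CASE WHEN @@ROWCOUNT = 1 THEN 1 ELSE 0 END;
    END;
    GO

    CREATE PROCEDURE dbo.EndBatch
    AS
        UPDATE dbo.BatchState SET Running = 0 WHERE Id = 1;
    GO

The trigger would call dbo.TryStartBatch and only launch the batch file when @Acquired = 1. Note that this skips the second request rather than queuing it, so a re-run after completion would still need to be scheduled separately.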

Ugh. Having a database trigger execute a batch script on the server sounds like a horrible idea, and it takes 5 minutes! You've pretty much abandoned the idea of concurrency, and it does not sound like a very stable system.
But assuming you really want to move forward with this design: I suggest that any process performing DML on the table establish an exclusive lock on the table before the DML is executed. On SQL Server, your trigger(s) always execute after the DML is complete, which I suspect is too late.
I recommend that you force all DML on that table to be through stored procedures that always establish the table lock before proceeding. Your actions are thus guaranteed to be serialized, your batch script never executes more than once at any given time, and concurrent requests will be queued.
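A minimal sketch of that pattern, assuming a hypothetical dbo.UpdateEmployeeSalary procedure; the TABLOCKX hint takes an exclusive lock on the whole table, and since the lock is held until COMMIT (which happens after the AFTER trigger has finished), the trigger and its batch script can never run twice concurrently:

    CREATE PROCEDURE dbo.UpdateEmployeeSalary
        @EmployeeId INT,
        @Salary     MONEY
    AS
    BEGIN
        SET NOCOUNT ON;
        BEGIN TRANSACTION;

        -- Exclusive table lock for the duration of the transaction:
        -- other writers queue until the update, the trigger, and the
        -- batch script it launches have all completed.
        UPDATE dbo.Employee WITH (TABLOCKX)
        SET    Salary = @Salary
        WHERE  EmployeeId = @EmployeeId;

        COMMIT TRANSACTION;
    END;

Be aware that this holds the lock for the full five minutes the batch file runs, which is exactly the serialization (and the queuing of concurrent requests) described above.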
You could restrict write access to the table by giving write access to a single account that owns the stored procedures, and then grant execute privilege on the procedures to appropriate users/roles.
Actually, I recommend that you abandon your design completely. But I can't suggest an alternative because I have no idea what led you to your current design.

Related

Need to design a SQL Server pair where one is read-only while the other is updated by a batch process

This would seem to me a common scenario, but I don't know the current terminology or the standard solution.
We have database tables that are updated nightly by a batch process.
At the same time, we require 24/7 read access to the data.
My thought is to have one set of tables exposed to the batch process, and when the batch finishes, that set is switched over for read access.
Likewise, the database that had been available for reads is swapped back and becomes the target of the next batch run. (Note: these databases are in Azure.)
Make sense?
Is this a likely way to deal with this scenario?
What do we call this technique?

Stored procedure and trigger execution blocking

I have SQL Server 2017 Express database that is accessed by up to 6 tablets that connect via an Angular 7 app using REST web services.
I have a stored procedure that allows a new user to be inserted into a specific database table. The insert will always only insert 1 record at a time, but with 6 clients, the stored procedure could be called by each client almost simultaneously.
The last step of the process is to print an agreement form to a specific printer. Initially this was going to be handled on the client side, but the tablets do not have the ability to print to a network printer, so that functionality now needs to reside on the server side.
With this new requirement, the agreement form is an RTF document that is read, has its placeholder values replaced with values from the insert statement, is written to a temporary file, and is then printed to the network printer via the application associated with the RTF file format (WordPad, most likely).
There is also an MS Access front-end app that uses linked servers to connect to the database. It doesn't have the ability to create new users, but it will need to be able to initiate the "print agreement" operation in case an agreement was not printed due to a printer issue, network issue, etc.
I have written the C# code that performs the read/modify/write/print action for the form; it uses the UseShellExecute StartInfo property with the Process.Start method.
Since the file read/modify/write/print process takes a few seconds, I am concerned about having the stored procedure for adding the registration blocking for that length of time.
I am pretty sure that I am going to need a CLR stored procedure so that the MS Access front-end can initiate the print operation. So what I have come up with is that either the Add_Registration (Transact-SQL) stored procedure will call the CLR stored procedure to do the read/modify/write/print operation, or an insert trigger (either CLR or Transact-SQL) on the table will call the CLR stored procedure to do it.
I could avoid the call from the trigger to the stored procedure by duplicating the code in both the CLR trigger and the CLR stored procedure if there is a compelling reason to do so, but was trying to avoid having duplicate code if possible.
The solutions that I am currently considering are as follows, but I am unsure how SQL Server handles the various scenarios:
1. A CLR or Transact-SQL INSERT trigger on the registration table that calls a CLR stored procedure that does the read/modify/write/print process (sketched below).
2. A CLR stored procedure that does the read/modify/write/print process, called from the current add_registration Transact-SQL stored procedure.
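For concreteness, a minimal sketch of option 1 with hypothetical object names (dbo.Registration, and dbo.PrintAgreement standing in for the SQLCLR procedure):

    CREATE TRIGGER dbo.trg_Registration_Print
    ON dbo.Registration
    AFTER INSERT
    AS
    BEGIN
        SET NOCOUNT ON;
        DECLARE @RegistrationId INT;

        -- The app only ever inserts one row at a time (per the above),
        -- so a scalar read from the inserted pseudo-table is acceptable.
        SELECT @RegistrationId = RegistrationId FROM inserted;

        -- The trigger is blocked here until the procedure returns, and
        -- the insert's transaction and locks stay open for that duration.
        EXEC dbo.PrintAgreement @RegistrationId = @RegistrationId;
    END;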
The questions I keep coming back to are:
1. How are CLR INSERT triggers executed if multiple inserts are done at the same or nearly the same time (only 1 row per operation)? Are they queued up and then processed synchronously, or are they executed immediately?
2. Same question as #1, but with a Transact-SQL trigger.
3. How are CLR stored procedures handled if they are called by multiple clients at the same or nearly the same time? Are they queued up and then processed synchronously, or is each call to the stored procedure executed immediately?
4. Same question as #3, but with a Transact-SQL stored procedure.
5. If a CLR stored procedure is called from a Transact-SQL trigger, is the trigger blocked until the stored procedure returns, or is the call to the stored procedure spawned out to its own process (or a similar mechanism), with the trigger returning immediately?
6. Same question as #5, but with a CLR trigger calling the CLR stored procedure.
I am looking for any other suggestions and/or clarifications on how SQL Server handles these scenarios.
There is no queuing unless you implement it yourself, and there are a few ways to do that. So for multiple concurrent sessions, they are all acting independently. Of course, when it comes to writing to the DB (INSERT / UPDATE / DELETE / etc) then they clearly operate in the order in which they submit their requests.
I'm not sure why you are including the trigger in any of this, but as it relates to concurrency, triggers execute within a system-generated transaction initiated by the DML statement that fired the trigger. This doesn't mean that you will have single-threaded INSERTs, but if the trigger is calling a SQLCLR stored procedure that takes a second or two to complete (and the trigger cannot continue until the stored procedure completes / exits) then there are locks being held on the table for the duration of that operation. Those locks might not prevent other sessions from inserting at the same time, but you might have other operations attempting to modify that table that require a conflicting lock that will need to wait until the insert + trigger + SQLCLR proc operation completes. Of course, that might only be a problem if you have frequent inserts, but that depends on how often you expect new users, and that might not be frequent enough to worry about.
I'm also not sure why you need to print anything in that moment. It might be much simpler on a few levels if you simply have a flag / BIT column indicating whether or not the agreement has been printed, defaulted to 0. Then, you can have an external console app, scheduled via SQL Server Agent or Windows Scheduled Tasks, executed once every few minutes, that reads from the Users table WHERE HasPrintedAgreement = 0. Each row has the necessary fields for the replacement values, it prints each one, and upon printing, it updates that UserID setting HasPrintedAgreement = 1. You can even schedule this console app to execute once every minute if you always want the agreements immediately.
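A minimal sketch of the queries that console app would run, assuming the dbo.Users table and HasPrintedAgreement column described above (the other column names are placeholders):

    -- Run by the scheduled console app: fetch agreements not yet printed,
    -- along with whatever fields the RTF template needs.
    SELECT UserID, FullName
    FROM   dbo.Users
    WHERE  HasPrintedAgreement = 0;

    -- After a row's agreement prints successfully, flag it as done.
    DECLARE @UserID INT = 42;   -- the row the app just printed (example)
    UPDATE dbo.Users
    SET    HasPrintedAgreement = 1
    WHERE  UserID = @UserID;

Because the flag is only set after a successful print, the MS Access front-end can re-queue a failed agreement by simply resetting HasPrintedAgreement to 0 for that user.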

SQL Server Pulling Records Out of a Table Being Continually Updated

I have a table in SQL Server 2012 that's being used by a service that's continually updating records within the table. It's sort of a queue where the service processes the records, and then periodically I run another stored procedure to pull out the ones that are processed into another table. Records in this table start out in one status and as they're processed they get put into another status.
When I try to run the stored procedure to pull the completed records out, I'm running into a deadlocking issue if it happens to occur when the running process is updating the table, which happens about every 2 minutes. I thought about just using a NOLOCK hint to eliminate that, but after reading a bit on this SO thread, I'm thinking I should avoid NOLOCK whenever possible.
GOAL:
Allow the service to continue running as usual, but also allow another stored procedure to periodically go in and remove records that are completed. In the event that there's a lock on a given row, I'd like to just leave that row alone and pick it up on the next time I run the stored procedure. During the processing, there's no requirement that I get all the rows with the stored procedure. That only matters once all the records have been processed, at which point I need to ensure that I get all the records, all while having the service still running on other unrelated records, and not causing any deadlocking issues. Hopefully this makes sense.
This article seems to suggest REPEATABLE READ.
Am I on the right track, or is there a better method?
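For what it's worth, one way to express the stated goal ("leave locked rows alone and pick them up next time") in T-SQL is the READPAST hint, which skips rows another transaction has locked instead of blocking on them; the table and column names below are hypothetical:

    -- Move completed rows to the destination table, skipping any row the
    -- updating service currently has locked; skipped rows are picked up
    -- on the next scheduled run.
    DELETE q
    OUTPUT deleted.RecordId, deleted.Payload
    INTO   dbo.ProcessedRecords (RecordId, Payload)
    FROM   dbo.QueueTable AS q WITH (READPAST, ROWLOCK)
    WHERE  q.Status = 'Processed';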

Data integration using SQL statements, as opposed to cursors/triggers

I am integrating my .NET/SQL application with two other products. Having read about and experienced the performance and other issues with cursors and triggers, I decided to use a batch method consisting of a series of SQL INSERT and UPDATE statements. In some places I need to look up mapping IDs from the incoming feeds and map them to IDs in my system. I also need to do a fair amount of error handling in my SQL batch code; for instance, if a related ID is missing or NULL, I write that to an error log and do not process that record at all. I think the system will process a large initial batch of hundreds or thousands of records, and once in production we will read incoming feeds on an hourly basis. So far so good.
The problem is that I can't possibly pre-determine every single error that could occur in the incoming feed. After days of testing I am still seeing one thing or another fail, and the batch doesn't process. When an error happens in a batch or set-based integration (rather than a cursor/trigger), I have no way of pinpointing which record the statement failed at. I can figure out which SQL statement in my batch failed, but not at which exact row.
Whereas if I used a cursor, I would know exactly which record bombed as the cursor processed it, and could tuck it away into an error log. Isn't this one reason why cursors are helpful in some cases?
Also, is there any way, using my current set-based batching method, to pinpoint which row's insert or update failed and have the process move on with the rest?
Thanks.
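For illustration, a minimal sketch of the validate-then-process pattern the question describes, with hypothetical staging and error-log names: rows that would fail are routed to the log by a set-based query first, so the main statement only touches rows that passed.

    -- Step 1: route rows that would fail (e.g. an unmapped ParentId) to
    -- the error log, recording which source row and why.
    INSERT INTO dbo.ImportErrorLog (SourceRowId, Reason)
    SELECT s.SourceRowId, 'ParentId missing or not mapped'
    FROM   dbo.StagingFeed AS s
    LEFT JOIN dbo.ParentMap AS m ON m.ExternalId = s.ParentId
    WHERE  s.ParentId IS NULL OR m.InternalId IS NULL;

    -- Step 2: process only the rows that passed validation, still fully
    -- set-based (no cursor).
    INSERT INTO dbo.TargetTable (ParentId, Payload)
    SELECT m.InternalId, s.Payload
    FROM   dbo.StagingFeed AS s
    JOIN   dbo.ParentMap  AS m ON m.ExternalId = s.ParentId;

Each class of error needs its own guard query, which is the maintenance burden described above, but it keeps the batch set-based while still pinpointing the offending rows.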

How to bulk update a SQL Server database with a lot of active readers

I am designing a solution for a SQL Server 2012 database for the following scenario:
The database contains about 1M records with some simple parent-child relationships between 4 or 5 tables.
There is a 24x7 high load of reads on the database.
Once a day we receive a batch of about 1000 inserts, updates, and deletes that should be merged into the database; occasionally this number could be higher.
Apart from the daily batch there are no other writers to the database.
There are a few 'special' requirements:
Readers should not experience any significant latency due to these updates
The entire batch should be processed atomically from the readers' perspective; the reader should not see a partially processed batch.
If the update fails halfway, we need to roll back all changes of the batch.
Processing of the batch itself is not time-critical; with a simple implementation it now takes up to a few minutes, which is just fine.
The options I am thinking of are:
Wrap a single database transaction around the entire update batch (this could be a large transaction), and use snapshot isolation to allow readers to read the original data while the update is running (see the sketch after this list).
Use partition switching. This feature seems to have been designed with this kind of use case in mind. The downside seems to be that before we can start processing the batch, we need to create a copy of all the original data.
Switch the entire database. We could create a copy of the entire database, process the batch in that copy, and then redirect all clients to the new database (e.g., by changing their connection string). This would even allow us to make the database read-only, and possibly create multiple copies of the database for scalability.
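A minimal sketch of the first option's setup, with a hypothetical database name and batch procedure:

    -- Option 1 setup: let readers see the last committed version of the
    -- data instead of blocking on the batch's locks.
    ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;
    -- Or, applied transparently to all READ COMMITTED readers:
    ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;

    -- The batch itself: one transaction around all changes, so readers
    -- see either the old data set or the complete new one, never a
    -- partially processed batch.
    BEGIN TRANSACTION;
        EXEC dbo.ApplyDailyBatch;   -- hypothetical proc with the merge logic
    COMMIT TRANSACTION;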
Which of these options, or another, would best fit this scenario and why?
The transaction strategy will block and cause latency.
Partition switching is not really going to solve your problem; you should consider it the same as doing the work against the database as you have it today, so the rollback/insert would still block, though the impact could be isolated to just part of your data rather than all of it.
Your best bet is to use 2 databases and switch connection strings.
Or use 1 database and have 2 sets of tables, with views or sprocs that are swapped to look at the "active" tables. You could still have disk contention issues, but from a locking perspective you would be fine.
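A minimal sketch of the two-sets-of-tables variant, with hypothetical table and view names; readers only ever query the view, and the nightly batch loads the inactive copy before repointing the view in one short metadata-only change:

    -- Two physical copies of the data; readers only query dbo.Orders.
    CREATE TABLE dbo.Orders_A (OrderId INT PRIMARY KEY, Amount MONEY);
    CREATE TABLE dbo.Orders_B (OrderId INT PRIMARY KEY, Amount MONEY);
    GO
    CREATE VIEW dbo.Orders AS SELECT OrderId, Amount FROM dbo.Orders_A;
    GO

    -- Nightly batch: load the inactive copy, then repoint the view so
    -- readers flip from the old data set to the new one atomically.
    TRUNCATE TABLE dbo.Orders_B;
    INSERT INTO dbo.Orders_B (OrderId, Amount)
    SELECT OrderId, Amount FROM dbo.StagingOrders;   -- hypothetical source
    GO
    ALTER VIEW dbo.Orders AS SELECT OrderId, Amount FROM dbo.Orders_B;
    GO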
