I have a SQL Server 2017 Express database that is accessed by up to 6 tablets that connect via an Angular 7 app using REST web services.
I have a stored procedure that allows a new user to be inserted into a specific database table. The insert will only ever insert one record at a time, but with 6 clients, the stored procedure could be called by each client almost simultaneously.
The last step of the process is to print an agreement form to a specific printer. Initially this was going to be handled on the client side, but the tablets do not have the ability to print to a network printer, so that functionality now needs to reside on the server side.
With this new requirement, the agreement form is an RTF document that is read, has its placeholder values replaced with values from the insert statement, is written to a temporary file, and is then printed to the network printer via the application (most likely WordPad) associated with the RTF file format.
There is also an MS Access front-end app that uses linked servers to connect to the database. It doesn't have the ability to create new users, but it will need to be able to initiate the "print agreement" operation in case an agreement fails to print due to a printer issue, network issue, etc.
I have the C# code written to perform the action of reading/modifying/writing/printing of the form that uses the UseShellExecute StartInfo property with the Process.Start method.
Since the file read/modify/write/print process takes a few seconds, I am concerned about the stored procedure that adds the registration blocking for that length of time.
I am pretty sure I am going to need a CLR stored procedure so that the MS Access front-end can initiate the print operation. What I have come up with is that either the "Add_Registration" (Transact-SQL) stored procedure will call the CLR stored procedure to do the read/modify/write/print operation, or an insert trigger (either CLR or Transact-SQL) on the table will call the CLR stored procedure to do it.
I could avoid the call from the trigger to the stored procedure by duplicating the code in both the CLR trigger and the CLR stored procedure if there is a compelling reason to do so, but I was trying to avoid duplicate code if possible.
The solutions I am currently considering are as follows, but I am unsure how SQL Server handles the various scenarios:
A CLR or Transact-SQL Insert trigger on the registration table that calls a CLR stored procedure that does the reading/modifying/writing/printing process.
A CLR stored procedure that does the reading/modifying/writing/printing process, called from the current Add_Registration Transact-SQL stored procedure (sketched below).
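For reference, a minimal sketch of that second option, with all object and parameter names invented for illustration:

    -- Hypothetical sketch: Add_Registration inserts the row, then calls a
    -- SQLCLR proc to do the read/modify/write/print work synchronously.
    CREATE PROCEDURE dbo.Add_Registration
        @Name nvarchar(100)
    AS
    BEGIN
        SET NOCOUNT ON;

        INSERT INTO dbo.Registration (Name) VALUES (@Name);
        DECLARE @RegistrationID int = SCOPE_IDENTITY();

        -- This call blocks until the CLR proc finishes printing, which is
        -- exactly the concern raised above.
        EXEC dbo.PrintAgreement @RegistrationID = @RegistrationID;
    END;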
The questions I keep coming back to are:
1. How are CLR insert triggers executed if multiple inserts are done at the same or nearly the same time (only one row per operation)? Are they queued up and then processed synchronously, or are they executed immediately?
2. Same question as #1, except with a Transact-SQL trigger.
3. How are CLR stored procedures handled if they are called by multiple clients at the same or nearly the same time? Are they queued up and then processed synchronously, or is each call to the stored procedure executed immediately?
4. Same question as #3, except with a Transact-SQL stored procedure.
5. If a CLR stored procedure is called from a Transact-SQL trigger, is the trigger blocked until the stored procedure returns, or is the call to the stored procedure spawned off into its own process (or by a similar method) with the trigger returning immediately?
6. Same question as #5, except with a CLR trigger calling the CLR stored procedure.
I am looking for any other suggestions and/or clarifications on how SQL Server handles these scenarios.
There is no queuing unless you implement it yourself, and there are a few ways to do that. So for multiple concurrent sessions, they are all acting independently. Of course, when it comes to writing to the DB (INSERT / UPDATE / DELETE / etc.), they clearly operate in the order in which they submit their requests.
I'm not sure why you are including the trigger in any of this, but as it relates to concurrency, triggers execute within a system-generated transaction initiated by the DML statement that fired the trigger. This doesn't mean that you will have single-threaded INSERTs, but if the trigger is calling a SQLCLR stored procedure that takes a second or two to complete (and the trigger cannot continue until the stored procedure completes / exits) then there are locks being held on the table for the duration of that operation. Those locks might not prevent other sessions from inserting at the same time, but you might have other operations attempting to modify that table that require a conflicting lock that will need to wait until the insert + trigger + SQLCLR proc operation completes. Of course, that might only be a problem if you have frequent inserts, but that depends on how often you expect new users, and that might not be frequent enough to worry about.
I'm also not sure why you need to print anything in that moment. It might be much simpler on a few levels if you simply have a flag / BIT column indicating whether or not the agreement has been printed, defaulted to 0. Then, you can have an external console app, scheduled via SQL Server Agent or Windows Scheduled Tasks, executed once every few minutes, that reads from the Users table WHERE HasPrintedAgreement = 0. Each row has the necessary fields for the replacement values, it prints each one, and upon printing, it updates that UserID setting HasPrintedAgreement = 1. You can even schedule this console app to execute once every minute if you always want the agreements immediately.
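A rough sketch of that design, with table and column names as placeholders:

    -- One-time schema change: track whether the agreement has been printed.
    ALTER TABLE dbo.Users
        ADD HasPrintedAgreement bit NOT NULL
            CONSTRAINT DF_Users_HasPrintedAgreement DEFAULT (0);

    -- Each cycle, the scheduled console app reads the unprinted rows...
    SELECT UserID, FirstName, LastName  -- plus whatever the RTF placeholders need
    FROM dbo.Users
    WHERE HasPrintedAgreement = 0;

    -- ...prints each agreement, then marks the row as done:
    UPDATE dbo.Users
    SET HasPrintedAgreement = 1
    WHERE UserID = @UserID;  -- parameter supplied by the app per printed row

This would also give the MS Access front-end a trivial re-print path: reset the flag to 0 for a user, and the next polling cycle picks that row up again.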
I have a long-running stored procedure that is executed from IIS. On average this stored procedure takes between two and five minutes to complete because it is searching through a large dataset (although it has taken around 20 minutes in some cases).
Most of the time the stored procedure works fine, but every now and then the SPIDs go into a sleeping state and never recover. The only solution I have found is to restart SQL Server and re-run the stored procedure.
There are no table inserts in the proc (only table-variable inserts), and the other statements are SELECTs on a large table.
I'm stuck for where to start debugging this issue. Any hints on what it might be, or suggestions on tools that would help me find the issue, would be most helpful.
EDIT: More info added:
The actual issue is that the proc doesn't return the result set. My first thought was to look at the SPIDs; they were sleeping, but the CPU time was still increasing.
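For anyone wanting to reproduce that check, a diagnostic sketch using standard DMVs (nothing app-specific) that surfaces sleeping sessions still holding open transactions, and any current blocking:

    -- Sessions that are sleeping but still have a transaction open
    SELECT s.session_id, s.status, s.cpu_time, s.open_transaction_count
    FROM sys.dm_exec_sessions AS s
    WHERE s.status = 'sleeping'
      AND s.open_transaction_count > 0;

    -- Requests that are currently blocked, and by whom
    SELECT r.session_id, r.blocking_session_id, r.wait_type, r.wait_time
    FROM sys.dm_exec_requests AS r
    WHERE r.blocking_session_id <> 0;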
It's a .NET app: .NET Core 3.1 with ASP.NET Core and a Blazor UI. The library used for the DB connection is System.Data.SqlClient; I believe System.Data.SqlClient uses its own custom driver. Calling code below:
The stored procedure doesn't return multiple result sets; however, different instances of the proc obviously run at the same time.
There are no limits on connection pooling in IIS.
@RichardWatts, when you say "re-run the stored procedure", do you mean that the same stored proc with the same parameters and data works once you restart SQL Server?
If so, look at the locks (sp_lock) on your tables; probably another process is locking some data and not releasing it properly, especially if you have transactions accessing the same tables.
What is the isolation level on your connection? If you can, try changing it to READ UNCOMMITTED to see if that solves your problem.
As an alternative, you can add a WITH (NOLOCK) or WITH (READUNCOMMITTED) hint to your SQL command.
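A quick sketch of both variants (dbo.BigTable and its columns are just example names):

    -- Session-wide isolation change:
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    SELECT Col1, Col2 FROM dbo.BigTable;

    -- Or the equivalent per-table hint:
    SELECT Col1, Col2 FROM dbo.BigTable WITH (NOLOCK);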
Be aware that a query running with READ UNCOMMITTED or NOLOCK will still be held up if there are modifications to the structure of your table or an index rebuild in progress, for example; those operations will in turn block its execution.
Nevertheless, be cautious: whether this solution is suitable depends on your environment, especially if your tables get lots of updates, deletes, inserts, etc. This kind of isolation can lead to dirty reads, and it doesn't address the root cause of your problem, which I would bet is an uncommitted transaction (there is a good article explaining this).
Also run a DBCC CHECKTABLE, just to be sure on that side.
I am trying to simulate an autonomous transaction in SQL Server, but the problem is that using a CLR DLL procedure (which uses a different session) slows down performance about 5 times.
To be clear:
Let's assume that in one transaction I am calling a procedure for every one of the 100k rows in a table, which gives 100k proc calls in one transaction. If any of these procedure calls fails, I want to roll back the entire transaction (AFTER all the procedure calls), but I need to keep the logs from the procedures that failed (in case of failure, insert into an ErrorLog table).
The problem is that, in such a case, I open 100k connections, and that is costly in terms of performance.
Using a table variable is not a solution, because I am not able to control every transaction (some are controlled by the frontend). A loopback connection (to the same server) is not recommended in production (from what I have read), so the solution was to use CLR to get a different session.
Is there any way to create a separate session once and reuse it for all of those inserts, instead of creating a new connection every time? Or is my understanding of CLR wrong, and it must open a new connection every time? (From what I have read, the context connection uses the same session it was called from, so in case of a rollback it would delete my logs from the ErrorLog table.)
You can use "SAVE TRAN XXXX" in the procedure and a "ROLLBACK TRAN XXXX" where an exception occurs. The Insert into the ERRORLOG table should be after the "ROLLBACK TRAN XXXX in the Procedure not to be rolled back.
Hope this helps...
I have a stored procedure in SQL Server whose logic is derived by joining various tables in the database; the end result is inserted into a table.
This stored procedure is called from the front end, through a .NET application API, by various users.
If only one user is performing the operation, it takes some 20 seconds to complete.
If multiple users are performing the operation, it hangs and takes more than 20 minutes to complete.
I have tried to understand the locks being taken on the backend and tried setting hints such as WITH (NOLOCK) and some other options, but I have not been able to resolve it.
How to handle the locks effectively when concurrent execution is happening?
I'm trying to create some standard logging for stored procedures along with our standard error handling. I want to call a stored procedure at the beginning of the sproc to log to a table that it has started, and call a stored procedure at the end to log that it has successfully completed. I also want to call a stored procedure from the error handler to log the failure and errors message/number.
The stored procedure may have its own transaction, or its calling stack may have started a transaction, or there may be no transaction open. I don't want any of my calls to the logging procedures to be rolled back at all, no matter what happens in the main transaction. i.e. the processing of the logging stored procedures needs to be completely separate from that of the main processing.
Is it possible to execute some SQL as though it is in a separate session during a transaction? What's the easiest way to achieve this?
Incidentally, I can't use xp_cmdshell, otherwise I'd shell a call to sqlcmd.
Thanks,
Mark
What you are describing is nested transactions. However, nested transactions are not real: https://www.sqlskills.com/blogs/paul/a-sql-server-dba-myth-a-day-2630-nested-transactions-are-real/. You would have to do your logging in another context. Since you are in a stored procedure, you would have to use dynamic SQL in your logging code for this to work.
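The answer is terse, so as one concrete illustration of logging that survives a rollback within the same session (a different mechanism than the answer's dynamic SQL, offered only as a sketch): table variables are not affected by transaction rollback, which makes them a common vehicle for this. Table names below are made up:

    DECLARE @Log TABLE
    (
        LoggedAt datetime2 NOT NULL DEFAULT SYSUTCDATETIME(),
        Msg      nvarchar(4000) NOT NULL
    );

    BEGIN TRANSACTION;
        INSERT INTO @Log (Msg) VALUES (N'main processing started');
        -- ...main work that may fail...
    ROLLBACK TRANSACTION;  -- undoes the main work

    -- @Log kept its rows through the rollback; persist them now,
    -- outside the rolled-back transaction:
    INSERT INTO dbo.SprocLog (LoggedAt, Msg)
    SELECT LoggedAt, Msg FROM @Log;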
I would like to know how we can control a trigger in SQL Server while its process is in progress.
Example: if any UPDATE, DELETE, or INSERT operation is performed on a table (the Employee table), I execute a batch file located on a Windows drive. The batch file takes around 5 minutes to complete the whole process. My question: suppose I have multiple applications connected to the same Employee table. An operation is performed on the table from one application and, due to the trigger, the batch process starts. Meanwhile, one more operation is performed from another application, which triggers the batch file again. Because of this, performance degrades or the process crashes.
So I would like to know whether there is any way to control the trigger, such that until the batch file completes its process, the second trigger is kept on hold, and after completion the process is started again.
You could have a helper table where your stored procedure checks whether running = 0; if so, it writes 1 and the batch starts. When the batch ends, set running = 0 again.
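A sketch of that idea; note the check-and-set must be atomic (two sessions could otherwise both read 0), so it is done in a single UPDATE. Table and column names are invented:

    -- dbo.BatchFlag holds one row per controlled job, e.g. ('EmployeeBatch', 0)
    UPDATE dbo.BatchFlag
    SET Running = 1
    WHERE JobName = 'EmployeeBatch'
      AND Running = 0;

    IF @@ROWCOUNT = 1
    BEGIN
        -- This session won the flag: safe to launch the batch file.
        -- ...run the 5-minute batch...
        UPDATE dbo.BatchFlag SET Running = 0 WHERE JobName = 'EmployeeBatch';
    END
    -- else: a batch is already in progress; skip, or retry later.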
Ugh - Having a database trigger execute a batch script on the server sounds like a horrible idea. And it takes 5 minutes!!!???? You've pretty much abandoned the idea of concurrency, and it does not sound like a very stable system.
But assuming you really want to move forward with this design. I suggest that any process that performs DML on the table should establish an exclusive lock on the table before the DML is executed. Your trigger(s) are always executed after the DML is complete on SQL Server, which I suspect is too late.
I recommend that you force all DML on that table to be through stored procedures that always establish the table lock before proceeding. Your actions are thus guaranteed to be serialized, your batch script never executes more than once at any given time, and concurrent requests will be queued.
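A sketch of that wrapper-procedure idea, with object names as placeholders:

    CREATE PROCEDURE dbo.Employee_Update
        @EmployeeID int,
        @Name       nvarchar(100)
    AS
    BEGIN
        SET NOCOUNT ON;
        BEGIN TRANSACTION;

        -- Take an exclusive table lock up front so concurrent callers queue.
        SELECT TOP (0) 1 FROM dbo.Employee WITH (TABLOCKX, HOLDLOCK);

        UPDATE dbo.Employee SET Name = @Name WHERE EmployeeID = @EmployeeID;
        -- The AFTER trigger (and its batch script) runs here, inside the
        -- transaction, while the exclusive lock is still held.

        COMMIT TRANSACTION;
    END;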
You could restrict write access to the table by giving write access to a single account that owns the stored procedures, and then grant execute privilege on the procedures to appropriate users/roles.
Actually, I recommend that you abandon your design completely. But I can't suggest an alternative, because I have no idea what led you to your current design.