How to show progress of a SQL Server SP on a WinForms client?

We have a WinForms client application that calls .NET web services, which in turn call SQL Server stored procedures. On one particular occasion we call an SP that does heavy batch processing on the server. Is there any way we can report progress back to the WinForms client while the SQL Server SP is running?
I thought of calling my main webmethod, which does all the batch processing, asynchronously, and then calling another webmethod on a separate thread that periodically queries the server for progress. Is this possible? Or is there another approach to achieve this?
Please advise.
Thanks in advance.

I have done this before using a method similar to what you suggest. The stored procedure updates a status table as it runs, and the other thread queries that status table with the NOLOCK hint. The interesting thing I found in doing it this way is that, if the work is happening in a transaction, and the transaction gets rolled back, the status bar will even run backwards as it rolls back! However, large amounts of work should probably not be done in a single transaction for performance reasons.
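In T-SQL the pattern looks roughly like this (the table, its columns, and @JobId are illustrative assumptions, not from the original answer):

    -- Hypothetical status table the batch proc writes to as it works.
    CREATE TABLE dbo.BatchStatus (
        JobId      INT PRIMARY KEY,
        StepsDone  INT NOT NULL,
        StepsTotal INT NOT NULL,
        UpdatedAt  DATETIME NOT NULL DEFAULT GETDATE()
    );
    GO
    DECLARE @JobId INT = 1;

    -- Inside the batch stored procedure, after each unit of work:
    UPDATE dbo.BatchStatus
    SET StepsDone = StepsDone + 1, UpdatedAt = GETDATE()
    WHERE JobId = @JobId;

    -- The polling webmethod runs this; NOLOCK lets it read the in-flight
    -- value even while the batch transaction still holds locks.
    SELECT StepsDone, StepsTotal
    FROM dbo.BatchStatus WITH (NOLOCK)
    WHERE JobId = @JobId;

The WinForms client then just calls the polling webmethod on a timer and maps StepsDone/StepsTotal onto its progress bar.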

Related

User session stuck in killed\rollback state

One of the sessions, executing a stored proc from an application, is stuck in the KILLED\ROLLBACK phase. Arguably it shouldn't take the sproc this long to roll back, and it has been stuck there for an eternity. The sproc is basically a bunch of SELECTs with UNIONs, and I am curious why it is held up for so long. As far as the waits are concerned, below is a snippet of what I see it waiting on. I would like to understand how to get rid of this w/o restarting SQL services and, most importantly, what can be done to avoid this situation, either from the application side or from the SQL side. Let me know if anything else is needed. Also, the stored procedure uses [SalesForce] as a linked server via DBAmp to fetch the data...could this be a cause, and how do I overcome it?
Depending on how long an eternity is here, it's possible it's hung forever.
I previously worked in an environment where we routinely pulled data into SQL Server from a mainframe application. Periodically, the mainframe would unexpectedly terminate a connection, but would not communicate anything back to SQL Server, which would happily sit in an 'Executing' state waiting for the query results. The next day, when the same job would run, the not-executing-executing-query would block the new instance and throw an error.
KILLing the undead connection would allow the new instance to run, but the old instance would stick in KILLED\ROLLBACK until we restarted SQL Services.
Since the zombies weren't interfering with anything, we'd usually let them sit until the monthly maintenance window.
Before implementing this work-around, on several occasions we had our mainframe server engineers verify for us that as far as the mainframe was concerned there really was no active connection. You should check the SalesForce side and see if there's any activity there.
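To check whether such a session is actually making rollback progress, you can ask SQL Server from another connection (session id 123 is just a placeholder):

    -- Reports estimated rollback completion for a session already in
    -- KILLED\ROLLBACK; if the percentage never moves, it is likely hung
    -- waiting on the remote end.
    KILL 123 WITH STATUSONLY;

    -- What the session is waiting on right now.
    SELECT session_id, status, command, wait_type, percent_complete
    FROM sys.dm_exec_requests
    WHERE session_id = 123;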

Create SQL Server trigger ON INSERT to send inserted data to a remote URL

I'm new to SQL Server, and I have a situation where I want inserted data to be sent instantly to my web service URL.
So I want to create a trigger in SQL Server that sends all inserted data to my web service URL.
Can you please help me with this?
Calling a service from a trigger is a really bad idea. In fact, you should avoid calling any service from within SQL Server at all; it should be done from outside SQL Server. You can implement your requirement in the following way.
On every insert, use a trigger to store the newly inserted row in a table (a queue) for processing by some background job, as sketched below. This makes sure your insert performance is not impacted and keeps the chance of failure to a minimum.
You can choose one of the options below.
Processing service: create a Windows service that polls the queue table, picks up the pending items, and calls the required service. After successful completion it should update the queue status to processed.
SQL job: write a SQL Server Agent job and schedule it to run at a fixed interval, such as 1 minute. This job picks the pending items from your queue table and calls the service, for example via the OLE Automation stored procedure sp_OACreate.
This will not be real time, but it is a more reliable option with less chance of failure. You can make it near real time by increasing the frequency of the job/service.
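A minimal sketch of the queue-table approach (dbo.Orders, its OrderId key, and the queue column names are assumptions for illustration):

    -- Queue table polled by the background job or service.
    CREATE TABLE dbo.WebServiceQueue (
        QueueId   INT IDENTITY(1,1) PRIMARY KEY,
        OrderId   INT NOT NULL,               -- key of the inserted row
        Status    TINYINT NOT NULL DEFAULT 0, -- 0 = pending, 1 = processed
        CreatedAt DATETIME NOT NULL DEFAULT GETDATE()
    );
    GO
    CREATE TRIGGER trg_Orders_Insert ON dbo.Orders
    AFTER INSERT
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Record the new rows only; never call the web service from here.
        INSERT INTO dbo.WebServiceQueue (OrderId)
        SELECT OrderId FROM inserted;
    END;

The job or Windows service then reads the pending rows, makes the HTTP call, and sets Status = 1 on success.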

Best practice to recover a CRUD statement to a linked server if the connection is lost

I am looking for the best practice for the following scenario.
We have a CRM in our company. When an employee updates a company's record, a trigger fires a stored procedure that runs a CRUD statement against the linked server hosting the SQL database of our website.
Question:
What happens when the connection is lost in the middle of the CRUD statement and the SQL database of the website does not get updated? What would be the best way to have the SQL statement processed again once the connection is back?
I read about Service Broker or Transactional Replication. Is one of these more appropriate for that situation?
The configuration:
Local: SQL Server 2008 R2
Website: SQL Server 2008
Here's one way of handling it, assuming that the CRUD statement isn't modal, i.e. you don't have to return a response from the linked server to the user before anything else can happen:
The trigger stores, in a local table, all the meta-information you need to run the CRUD statement on the linked server.
A job runs every n minutes that reads the table, attempts the CRUD statements stored in the table, and flags them as done if the linked server returns any kind of success. The ones that aren't successful stay in the table until the next time the job runs; a sketch follows below.
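A sketch of the job side of that pattern (the queue table, linked server, and database names are all assumptions; note that some linked-server connection errors are batch-aborting and may escape TRY/CATCH):

    -- Rows that fail stay at Status = 0 and are retried on the next run.
    DECLARE @Id INT, @CompanyId INT;

    DECLARE pending CURSOR LOCAL FAST_FORWARD FOR
        SELECT Id, CompanyId FROM dbo.PendingWebUpdates WHERE Status = 0;
    OPEN pending;
    FETCH NEXT FROM pending INTO @Id, @CompanyId;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        BEGIN TRY
            -- The actual statement against the linked server
            -- (WebServer = linked server, WebDb = remote database).
            UPDATE WebServer.WebDb.dbo.Companies
            SET Name = (SELECT Name FROM dbo.Companies
                        WHERE CompanyId = @CompanyId)
            WHERE CompanyId = @CompanyId;

            UPDATE dbo.PendingWebUpdates SET Status = 1 WHERE Id = @Id;
        END TRY
        BEGIN CATCH
            PRINT ERROR_MESSAGE();  -- leave the row pending for retry
        END CATCH;
        FETCH NEXT FROM pending INTO @Id, @CompanyId;
    END
    CLOSE pending;
    DEALLOCATE pending;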
If the connection failed in the middle of the trigger, the change would still be inside the transaction, and the data would not be written to either the CRM database or the web database. There is also a potential performance problem: the data modification query wouldn't return control to the client until both the local and the remote change had completed. The latter wouldn't be an issue if the query were executed async, but fire-and-forget isn't a good pattern for writing data.
Service Broker would allow you to write the modification into a message and would take care of ensuring that it is delivered in order and properly processed at the remote end. Performance would not be so bad, as the insert into the queue is designed to complete quickly, returning control to the trigger and allowing the original CRM query to complete.
However, it is quite a bit to set up. Even using Service Broker for simple tasks on a local server takes a fair amount of setup, partly because it is designed to handle secure, multicast, reliable, ordered conversations, so it needs a few layers to work. Once it is all there, it is very reliable and does a lot of the work you would otherwise have to do yourself to set up this sort of distributed conversation.
I have used it in the past to create a callback system from a website: a customer enters their number on the site and requests a callback; this is sent via Service Broker over a VPN from the web server to the back-office server, where a client application waits on the Service Broker queue. It worked very efficiently once it was set up.
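For a sense of the setup involved, the moving parts on the sending side look roughly like this (all names are hypothetical; a real deployment also needs the database's broker enabled, routes between the two servers, and a reader or activation procedure at the receiving end):

    -- One-time setup.
    CREATE MESSAGE TYPE CrmChangeMessage VALIDATION = WELL_FORMED_XML;
    CREATE CONTRACT CrmChangeContract (CrmChangeMessage SENT BY INITIATOR);
    CREATE QUEUE dbo.CrmChangeQueue;
    CREATE SERVICE CrmChangeService
        ON QUEUE dbo.CrmChangeQueue (CrmChangeContract);
    GO
    -- Inside the trigger: queue the change and return immediately.
    DECLARE @dialog UNIQUEIDENTIFIER,
            @payload XML = (SELECT CompanyId, Name
                            FROM inserted
                            FOR XML PATH('Change'));

    BEGIN DIALOG CONVERSATION @dialog
        FROM SERVICE CrmChangeService
        TO SERVICE 'CrmChangeService'
        ON CONTRACT CrmChangeContract
        WITH ENCRYPTION = OFF;

    SEND ON CONVERSATION @dialog
        MESSAGE TYPE CrmChangeMessage (@payload);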

SQL Server Deadlock within WCF

I am trying to resolve a deadlock issue in a SQL transaction (inside a stored procedure called via LINQ-to-SQL). Using SQL Server Profiler, I can see that the SP is deadlocking on itself.
My client calls a WCF method in rapid succession, which in turn calls the SP, and this causes a deadlock.
I have tried setting all the various SQL Server isolation levels, as well as setting WCF ConcurrencyMode = Single.
I continue to get many deadlock 'victims', and I am losing insert data as a consequence.
Has anybody solved this kind of problem?
Kind regards,
NickV
To get details of the deadlock, switch trace flag 1204 on. Any time a deadlock occurs, the deadlock graph, including all the details on the contended resource and what the two processes were doing, is written to the SQL error log. This really helps in finding the causes of the deadlock.
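For example (both are documented trace flags; 1222 is the more detailed, XML-style report available on SQL Server 2005 and later):

    -- Enable deadlock reporting server-wide until the next restart.
    DBCC TRACEON (1204, -1);  -- classic resource-oriented deadlock report
    DBCC TRACEON (1222, -1);  -- more detailed XML-style deadlock graph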
Keep transactions as small as possible. Do no unnecessary work in a transaction. Never begin a transaction and then wait for user input. Ensure that the queries within the transaction are as efficient as possible.
Access objects in the same order. If one proc updates tbl1 first and then tbl2, all other procs that use those two tables should use them in the same order, as in the sketch below. Done properly, this can prevent deadlocks completely.
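A minimal illustration of that ordering rule (the tables and columns are hypothetical):

    -- Both procs touch tbl1 before tbl2, so neither can hold a lock on
    -- tbl2 while waiting for a lock on tbl1 held by the other.
    CREATE PROCEDURE dbo.ProcA AS
    BEGIN
        BEGIN TRANSACTION;
            UPDATE dbo.tbl1 SET colA = 1 WHERE id = 1;
            UPDATE dbo.tbl2 SET colA = 1 WHERE id = 1;
        COMMIT TRANSACTION;
    END;
    GO
    -- Reversing the order here (tbl2 then tbl1) is the classic way two
    -- procs deadlock each other under concurrent load.
    CREATE PROCEDURE dbo.ProcB AS
    BEGIN
        BEGIN TRANSACTION;
            UPDATE dbo.tbl1 SET colB = 2 WHERE id = 2;
            UPDATE dbo.tbl2 SET colB = 2 WHERE id = 2;
        COMMIT TRANSACTION;
    END;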
Hope that helps.

Stored Procedure and Timeout

I'm running a long-running stored procedure.
I'm wondering what happens in case of a timeout, or any kind of disconnection from the database, after the call to the stored procedure has been initiated. Is it still working and applying the changes on the server?
If you have a stored procedure making changes to the database and there is a possibility that the connection might disconnect in the middle, be sure to enclose all the changes within a transaction, as in the sketch below.
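A minimal sketch of that advice (the procedure and table names are hypothetical):

    -- With XACT_ABORT ON, a client timeout or dropped connection aborts
    -- the batch and rolls back the open transaction, so the changes are
    -- all-or-nothing rather than half applied.
    CREATE PROCEDURE dbo.LongBatchProcess
    AS
    BEGIN
        SET XACT_ABORT ON;
        BEGIN TRANSACTION;
            -- ... the long-running data changes go here ...
            UPDATE dbo.SomeTable SET Processed = 1 WHERE Processed = 0;
        COMMIT TRANSACTION;
    END;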
It depends on the server I guess.
I know Firebird will detect disconnected clients and stop working.
Anyway if the client is not there to commit at the end of the job the changes should be rolled back by the server.
I would suggest running Profiler against the database and watching the activity, and also creating a basic test case so that you know for sure what happens. The outcome depends on your database and on what you are using to connect to it.
