One of the sessions executing a stored proc from an application is stuck in the KILLED\ROLLBACK phase. Arguably, it shouldn't take this long for the sproc to roll back, yet it has been stuck there for an eternity. The sproc is basically a bunch of SELECTs with UNIONs, and I am curious why the rollback is holding things up this long. As far as the waits are concerned, below is a snippet of what it is waiting on. I would like to understand how to get rid of this without restarting SQL services and, most importantly, what can be done to avoid this situation, either from the application side or from the SQL side. Let me know if anything else is needed. Also, this stored procedure uses [SalesForce] as a linked server via DBAmp to fetch the data... would this be a cause, and how do I overcome it?
Depending on how long an eternity is here, it's possible it's hung forever.
I previously worked in an environment where we routinely pulled data into SQL Server from a mainframe application. Periodically, the mainframe would unexpectedly terminate a connection, but would not communicate anything back to SQL Server, which would happily sit in an 'Executing' state waiting for the query results. The next day, when the same job would run, the not-executing-executing-query would block the new instance and throw an error.
KILLing the undead connection would allow the new instance to run, but the old instance would stick in KILLED\ROLLBACK until we restarted SQL Services.
Since the zombies weren't interfering with anything, we'd usually let them sit until the monthly maintenance window.
Before implementing this work-around, on several occasions we had our mainframe server engineers verify for us that as far as the mainframe was concerned there really was no active connection. You should check the SalesForce side and see if there's any activity there.
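In the meantime, it's worth checking whether the rollback is making any progress at all. A minimal sketch, assuming the stuck session is spid 53 (substitute your own):

    -- Report rollback progress for an already-killed session; this doesn't
    -- kill anything new, it only asks for status.
    KILL 53 WITH STATUSONLY;
    -- Typical output: "Estimated rollback completion: 80%. Estimated time remaining: 10 seconds."
    -- A session that sits at 0% forever is usually waiting on something
    -- external, such as the DBAmp/SalesForce linked server.

    -- See what the session is currently waiting on.
    SELECT session_id, status, command, wait_type, wait_time, percent_complete
    FROM sys.dm_exec_requests
    WHERE session_id = 53;

If the rollback never advances and the remote side confirms nothing is active, you're in the same zombie territory described above.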
I've hunted high and low over the years about WSUS, and all anyone says is that it is slow when doing things like a reset, and that there's no way to gauge progress, stop it (without the update to singleton data), and so on.
My own WSUS has always had major SQL blocking issues whenever I reset it.
Digging deeper, the SQL Server appears to be seriously struggling with process blocking (see pic).
When you look at what is causing the blocking, it is a variety of CREATE PROC statements, with the ChildEulas one being a major culprit.
So is the reason resets are soooooo slow that the reset has to drop/re-create the stored procedures for each and every update in the DB, and the blocking thrown on top makes it absolutely ridiculous?
Can we re-write the reset routine to stop it dropping/creating the stored procs and make it lightning fast?
I tried killing the blocking process whenever it was a CREATE PROC statement (as the proc actually was still there, presumably in a transaction), but that kept halting the WSUS service altogether.
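If you want to see the blocking chain for yourself while a reset runs, here is a quick sketch using the standard DMVs (run it in a separate query window):

    -- Every currently blocked request, who is blocking it, and its SQL text.
    SELECT r.session_id, r.blocking_session_id, r.wait_type, r.wait_time,
           t.text AS running_sql
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE r.blocking_session_id <> 0;

The blocking_session_id at the head of the chain is the one worth investigating rather than killing.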
I'm having an issue with processes that lock my SQL Server even though they appear to be finished.
The blocking processes are four simple SELECT GETDATE() commands that just don't finish, for some reason unknown to me. SQL Server Profiler doesn't really show any activity except for the SELECT GETDATE() repeating every four minutes. It is possible that the same connection sent an UPDLOCK request before that.
I mostly want to find an explanation for that behaviour. I can't really influence those blocking requests; as you can see, they are sent by an external company.
The suspended process is also called from within the Business Central Server, but it circumvents the standard interpretation layer to optimize performance. To do this, it calls the SQL .NET class to execute the SQL query directly.
If I kill the process, the server throws an error and the whole execution falls apart.
P.S. For the people who work with BC in here and think this is not a good idea:
This code won't run in daily business; it's just for migrating data during upgrades from NAV to BC. We built a tool that allows us to map fields between C/AL and AL solutions and generate AL extensions based on those mappings. These extensions grab the data from a copy of the original DB and write it directly into its destination files. We need the SQL commands because some of our customers have accumulated so much data over the years that an upgrade would otherwise take more than a week if the data were processed in AL.
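For anyone hitting the same picture: a session that looks finished but still holds locks is usually just sleeping with an open transaction. A sketch to spot them, using only the standard DMVs:

    -- Sleeping sessions that still have an open transaction and therefore
    -- still hold their locks (e.g. from an earlier UPDLOCK request).
    SELECT s.session_id, s.status, s.host_name, s.program_name,
           s.last_request_end_time, t.transaction_id
    FROM sys.dm_exec_sessions AS s
    JOIN sys.dm_tran_session_transactions AS t
      ON t.session_id = s.session_id
    WHERE s.status = 'sleeping';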
Sometimes queries that normally take almost no time at all suddenly start to take as much as 2 seconds to run. (The query is select count(*) from calendars, which returns the number 10.) This only happens when running queries through our application, and not when running the query directly against the database server. When we restart our application server software (Tomcat), performance is suddenly back to normal. Normally I would blame the network, but it doesn't make any sense to me that restarting the application server would make it suddenly behave much faster.
My suspicion falls on the connection pool, but I've tried all sorts of different settings and multiple different connection pools and I still have the same result. I'm currently using HikariCP.
Does anyone know what could be causing something like this, or how I might go about diagnosing the problem?
Do you use stored procedures or ad-hoc queries? One reason to get different executions when running a query in, say, Management Studio vs. a stored procedure in your application can be an inefficient cached execution plan, which could have been generated that way due to parameter sniffing. There is plenty written about parameter sniffing, and there are a number of solutions you could try, like substituting parameters with local variables, as sketched below. If you restart the whole computer (and SQL Server is also running on it), then this could explain why you get fast queries right after a restart: the execution plans are cleared on reboot.
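A minimal sketch of the local-variable workaround (the proc and table names here are hypothetical):

    -- Copying the parameter into a local variable stops the optimizer from
    -- "sniffing" the caller's value; the plan is then built for an average
    -- distribution instead of whatever value the proc was first compiled with.
    CREATE PROCEDURE dbo.GetOrdersByStatus
        @Status int
    AS
    BEGIN
        DECLARE @LocalStatus int;
        SET @LocalStatus = @Status;

        SELECT OrderID, CustomerID, OrderDate
        FROM dbo.Orders
        WHERE Status = @LocalStatus;
    END;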
It turned out we had a rogue process that was grabbing 64 connections to the database at once and using all of them for intense and inefficient work. We were able to diagnose this using jstack. We ran jstack when we noticed the system had slowed down a ton, and it showed us what the application was working on. We saw 64 stack traces all inside the same rogue process, and we had our answer!
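For what it's worth, the same condition shows up from the SQL Server side as well; a sketch that counts sessions per host and application:

    -- Spot a component hogging the connection pool.
    SELECT host_name, program_name, COUNT(*) AS connection_count
    FROM sys.dm_exec_sessions
    WHERE is_user_process = 1
    GROUP BY host_name, program_name
    ORDER BY connection_count DESC;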
In the admin area of our company's production site, we have a little query-dumping tool, and in trying to get data from a database different from the main one, I unknowingly used the USE database command.
And here's the kicker: it then made every ColdFusion page with its query instantly fail, since it somehow caches that USE database command.
Has anyone else heard of this weird bug?
How can we stop this behavior?
If I use a "use database" command, I want it to apply only to the current query I am running; after I am done, it should go back to the normal database usage.
This is weird and a potentially damaging problem.
Any thoughts?
I imagine that this has something to do with connection pooling. When you call close, it doesn't close the connection, it just puts it back into the pool. When you call open, it doesn't have to open a new connection, it just grabs an existing one from the pool. If you change the database that the connection is pointing to, ColdFusion may be unaware of this. This is why some platforms (MySQL on .NET, for instance) reset the connection each time you retrieve it from the pool, to ensure that you are querying the correct database and that you don't have any temporary tables and other session info hanging around. The downside of this kind of behaviour is that it has to make a round trip to the database even when using pooled connections, which really may not be necessary.
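To make that concrete, a small sketch (the database and table names are hypothetical). The current database is session state, and session state belongs to the connection, not to the query:

    USE ReportDB;                     -- one-off query against another database
    SELECT COUNT(*) FROM SomeTable;

    -- ...the page finishes, close() returns the connection to the pool,
    -- another page grabs it, and the context is still ReportDB:
    SELECT DB_NAME() AS current_database;

    USE MainDB;                       -- switching back by hand avoids the problem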
Kibbee is on the right track, but to extend that a little further with three possible workarounds:
Create a different DSN for use by that one query so the "USE DATABASE" statement would only persist for any queries using that DSN.
Uncheck "Maintain connections across client requests" in the CF admin
Always remember to reset the database to the one you intend to use at the end of the request. It kinda goes without saying that this is a very dangerous utility to have on your production server!
It's not a bug nor is it really unexpected behavior - if the query is cached, then everything inside the cfquery block is going along for the ride. Which database platform are you using?
I have a very strange and complicated situation. I have data being erased from one of my SQL Server tables, and I am not sure by what application. I would like to be able to track this.
As I am sure you are wondering how I could find myself in this situation, here is some background. We have 2 servers, Web and Database, running IIS6 and SQL Server 2005 respectively. They were set up by the previous developer, who left the company without giving me any sort of introduction to the system, so I am left "hunting" for everything. I have been able to figure out most of the system on my own except for this, which remains a mystery. All I know for sure is this:
Data is being erased at a set time every day (I have set up a TRIGGER to capture this; a sketch of a fuller audit trigger follows this list)
It is not a SQL Server Agent Job
It is not a Windows Scheduled Task
It is not a Windows Service
All database logins are done with the sa user, so login history cannot help me... (again, I didn't set this up)
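For reference, since a trigger already sees the delete, it can also record where the statement came from; HOST_NAME() and APP_NAME() distinguish callers even when everything logs in as sa. A sketch along those lines (the table and audit names are hypothetical):

    -- Audit who deletes from dbo.Orders and from where.
    CREATE TABLE dbo.DeleteAudit (
        DeletedAt datetime NOT NULL DEFAULT GETDATE(),
        LoginName sysname NOT NULL,
        HostName  nvarchar(128) NULL,
        AppName   nvarchar(128) NULL,
        RowsGone  int NOT NULL
    );
    GO

    CREATE TRIGGER trg_Orders_DeleteAudit ON dbo.Orders
    AFTER DELETE
    AS
        INSERT INTO dbo.DeleteAudit (LoginName, HostName, AppName, RowsGone)
        SELECT SUSER_SNAME(), HOST_NAME(), APP_NAME(), COUNT(*)
        FROM deleted;
    GO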
How the heck do I debug something like this? If anything, I want to know if this is coming from something running on the database server, or from a request from an outside source. Please help :-)
Since you know the time it happens, you should set up a SQL Profiler trace at that time to catch the statements being sent.
This will show you the SQL being sent, the spid of the connection, user name, application name sent by the connection and other useful info to track down the culprit.
In case the time it happens is not convenient for you to do this interactively, you can script SQL traces (which is more lightweight than running the full GUI anyway).
Edit: Be careful when using it not to record so much information that you bog down the server. You can filter for activity on the database of interest, for example; see the sketch below.
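A minimal scripted-trace sketch for SQL Server 2005 (the output path and database name are hypothetical); it records completed batches with the columns needed to identify the caller, filtered to one database:

    -- Server-side trace: much lighter than the Profiler GUI and easy to schedule.
    DECLARE @TraceID int;
    DECLARE @maxfilesize bigint;
    DECLARE @on bit;
    SET @maxfilesize = 50;   -- MB per trace file
    SET @on = 1;

    EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\delete_hunt', @maxfilesize, NULL;

    -- Event 12 = SQL:BatchCompleted; switch on the identifying columns.
    EXEC sp_trace_setevent @TraceID, 12, 1,  @on;   -- TextData
    EXEC sp_trace_setevent @TraceID, 12, 8,  @on;   -- HostName
    EXEC sp_trace_setevent @TraceID, 12, 10, @on;   -- ApplicationName
    EXEC sp_trace_setevent @TraceID, 12, 11, @on;   -- LoginName
    EXEC sp_trace_setevent @TraceID, 12, 12, @on;   -- SPID
    EXEC sp_trace_setevent @TraceID, 12, 14, @on;   -- StartTime

    -- Column 35 = DatabaseName: only record activity in the database of interest.
    EXEC sp_trace_setfilter @TraceID, 35, 0, 6, N'MyDatabase';

    EXEC sp_trace_setstatus @TraceID, 1;   -- start the trace
    SELECT @TraceID AS TraceID;            -- keep this id: status 0 stops it, 2 deletes the definition

Once the delete has fired, open the .trc file with fn_trace_gettable or Profiler to see the host and application that sent it.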