I work with Delphi 2010 and SQL Server 2012. The database components are SDAC.
I have a service that works with the database and has several threads, each executing different queries against the database. Each of these threads has its own connection timeout, command timeout and, of course, its own separate TMSConnection. At one point the SQL Server instance itself was rebooted by the admin.
Operations in the main thread coped with this situation without a problem. One of the other threads logged an error on its connection and handled it, but hung just afterwards, I think at the beginning of the next iteration. The other threads, which had only connected initially, handled the situation without any problems.
Please tell me: does this mean that every time a secondary thread connects to the database, I run the risk of it hanging if the server happens to be rebooted at that moment, and that there is nothing I can do to protect against it?
Related
I have several publications in SQL Server 2016 in my test environment. When my distribution cleanup job runs, it runs forever without deleting anything.
I did some digging around and discovered that the job is actually blocked by one of the server's repl-logreader sessions. However, when I checked the replication monitor for all the publications on the server, all of them show a good status without any latency, and their undistributed commands are at 0.
What else should I look at to solve this issue?
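For reference, this is roughly the query I used to see what the cleanup job is blocked on (a minimal sketch against the standard DMVs; it assumes the VIEW SERVER STATE permission):

    -- List blocked requests, the session blocking them, and the statement each is running.
    SELECT
        r.session_id,
        r.blocking_session_id,
        r.wait_type,
        r.wait_time,
        s.program_name,
        t.text AS running_sql
    FROM sys.dm_exec_requests AS r
    JOIN sys.dm_exec_sessions AS s
        ON s.session_id = r.session_id
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE r.blocking_session_id <> 0;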
We have a client deployment of our software that is showing intermittent SQL Server connection failures, and we are struggling to understand them.
Our system consists of a SQL Server DB (2012) and 14 identical engines, each installed on a Windows 2012 VM. Each of these was created from the same template, so they should be identical. Each engine is a Windows service that connects to the DB on startup by reading a single row from a table. If the connection fails, it waits a few seconds and tries again until it gets a connection.
In this particular case, the VMs were all rebooted due to a Windows Update. (The SQL Server machine had its update and reboot about 12 hours earlier.) They came online within a few minutes of each other. 12 of the engines started up without any problem. Two of them, however, failed to connect to the DB with:
"The underlying provider failed on Open."
Those two engines then started to poll, and continued to get this error for many hours. The rest of the engines had started up and were fine. We have a broker service too that was accessing the DB throughout and showed no connection issues.
When the client noticed this issue, they restarted the engine services on the two problem VMs, and the two engines connected to the DB just fine.
We are trying to understand what could have happened here. I guess my main questions are:
What could explain why 12 connections succeeded and two failed? As far as we know, there is absolutely no difference between the engines, and the query itself is very simple.
Why did the connection continue to fail for those two engines until the service was restarted? This suggests to me that there is some process-level failed state that is only cleared when restarting the services. I've looked at the code to see if it was reusing the connections. It uses Entity Framework to read the single table row, and we create a fresh DbContext each time. I don't understand how this could go wrong.
We noted that a CHECKDB operation was running on the DB around the time the services were coming up, and we wondered if this could be related to the issue. However, the client says it runs every night and hasn't caused problems in the past, and it wouldn't explain why the engines didn't come back up again.
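One further thing we plan to check on the server side is whether the SQL Server error log recorded anything (failed logins, databases still recovering) while the two engines were polling. A sketch of that check, assuming rights to read the error log; the search string is just an example:

    -- Search the current SQL Server error log (first argument 0 = current log,
    -- second argument 1 = SQL Server log rather than Agent log) for login failures.
    EXEC sp_readerrorlog 0, 1, N'Login failed';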
Thanks in advance for any help.
Background:
I have a SQL Server 2005 setup with master, slave1, and slave2, with replication configured as pull replication from the slaves. The distribution database resides on the slave1 machine, and both slaves pull.
A problem began today where the replication on slave1 simply stops running. It claims that it completed successfully, but it does not restart, and manually starting the process finishes in roughly one minute, again without an error message.
Replication is running fine on slave2, but I can't seem to figure out what's wrong on slave1. I've tried the obvious Windows debugging 101: "restart the machine" technique, but to no avail.
Has anyone encountered this before? Does anyone have an idea of what I could check or change to get it working again? I'm especially at a loss because SQL Server claims that the job is simply finishing successfully.
Though I'm unsure of why this began occurring, it appears to be due to the use of a custom SQL Server Replication Agent profile. Switching back to the default profile got it working again.
I'm maintaining a legacy server app that generates DMO files from SQL Server views.
Sometimes the server crashes because SQL Server consumes all CPU resources.
Using the SQL Server monitor, I can see that the problem is SQLDMO connections that consume all the CPU time and block the server.
I don't understand the reason for this, because the DMO connection uses TRANSACTION ISOLATION LEVEL READ UNCOMMITTED, yet these SQL statements never finish, even after weeks. The only solution is to shut down the server.
I would suggest looking into the code to find out why these connections are not closed. I'm guessing there is no proper close at the end, or something along those lines.
If that is not an option, you could consider running a scheduled job that kills off these specific sessions every so often if they have been running for longer than, say, 24 hours, as in the sketch below.
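A minimal sketch of such a job, assuming SQL Server 2005 or later for the DMVs, a login with the KILL permission, and that the SQLDMO connections can be recognised by their program_name (the LIKE filter below is only a placeholder; adjust it to whatever those connections actually report):

    -- Build and run KILL statements for sessions from the offending application
    -- that have been connected for more than 24 hours.
    DECLARE @kill nvarchar(max);
    SET @kill = N'';

    SELECT @kill = @kill + N'KILL ' + CAST(s.session_id AS nvarchar(10)) + N'; '
    FROM sys.dm_exec_sessions AS s
    WHERE s.is_user_process = 1
      AND s.session_id <> @@SPID
      AND s.program_name LIKE N'%DMO%'                  -- placeholder filter, adjust as needed
      AND s.login_time < DATEADD(HOUR, -24, GETDATE()); -- connected for more than 24 hours

    EXEC sys.sp_executesql @kill;

Using login_time as the cutoff treats "running for 24 hours" as "connected for 24 hours", which matches the never-finishing connections described here; if you need the actual statement runtime instead, join sys.dm_exec_requests and filter on its start_time.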
I have come across an issue that seems to be somehow connected to the web server configuration and results in queries randomly taking a long time to execute. The application is written in plain old Classic ASP and uses an ADODB Connection.
The scenario goes as follows:
there is a single connection opened in a script at the beginning of processing each HTTP request
this connection is used to execute queries against a SQL Server that resides on a separate box; conn.Execute is used, and the connection is NOT closed afterwards
there are usually a few to a few dozen conn.Execute calls in a single ASP page
Everything had been working well until recently, when some of the conn.Execute calls started to take much longer to execute, completely at random.
the difference is e.g. 15ms normal execution time vs. 2000ms long execution time
on the SQL Server side, Profiler does not show longer query execution times, so there must be something blocking the conn.Execute request
When the proper practice of closing the connection after each conn.Execute is implemented, the issue goes away. However, as I stated before, everything had been working flawlessly until recently. This web app is a fairly large one, and rewriting it to close and reopen connections properly will take some time, so I need a short-term solution.
My guess is that it could have something to do with the connection pool size; however, this is not ADO.NET, so I am not sure whether a connection pool issue should be considered at all. On the SQL Server side, there is no limit on the number of concurrent connections to the server.
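To at least rule pooling in or out, one thing I can check from the database side is how many connections each web server is actually holding (a sketch using the DMVs; it assumes the VIEW SERVER STATE permission):

    -- Count current user connections per client host and application name.
    SELECT
        s.host_name,
        s.program_name,
        COUNT(*) AS connection_count
    FROM sys.dm_exec_sessions AS s
    WHERE s.is_user_process = 1
    GROUP BY s.host_name, s.program_name
    ORDER BY connection_count DESC;

If the count for the web server keeps growing rather than settling around a steady value, that would suggest connections are not being released back to the pool.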
I need some hints; any brainstorming of possible ideas is welcome.
This could be related to delays resolving the hostname in the connection string via DNS. Have you tried putting an IP address in the connection string instead of the hostname?