I had a developer run a query that had no timeout, and it left a transaction open. Around the 8-hour mark, tempdb was almost full.
Is there a server-side setting in SQL Server to enforce timeouts and prevent this from happening in the future? Ideally, this setting would override the client timeout.
The answer is to ask your developer not to do that on a production server. Only allow trustworthy people and/or processes access to your production database. If a process is going to churn through data over a long period of time, make sure there is a basic understanding of SQL Server logging and, in your case, #temp table usage.
Here is an article on how to kill long-running processes on SQL Server; however, this level of precaution should never have to be used, IMO.
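If you do go down that road, a minimal sketch of the idea (assuming SQL Server 2005 or later, since it uses DMVs; the 8-hour threshold is just an example) is to find sessions holding long-open transactions and terminate them with KILL:

-- Find sessions whose transactions have been open for 8+ hours
SELECT s.session_id,
       s.login_name,
       s.host_name,
       t.transaction_begin_time,
       DATEDIFF(HOUR, t.transaction_begin_time, GETDATE()) AS open_hours
FROM sys.dm_tran_active_transactions AS t
JOIN sys.dm_tran_session_transactions AS st
    ON st.transaction_id = t.transaction_id
JOIN sys.dm_exec_sessions AS s
    ON s.session_id = st.session_id
WHERE DATEDIFF(HOUR, t.transaction_begin_time, GETDATE()) >= 8;

-- KILL 53;  -- terminates session 53 and rolls back its open transaction

Keep in mind that killing a session triggers a rollback, which can itself take a long time for a large transaction.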
I have a SQL Server database and have a linked server connection to an Oracle DB.
I had the following query running on my SQL Server:
INSERT INTO dbo.my_table_on_sql_server
SELECT *
FROM OPENQUERY (linkedservername, 'SELECT * FROM target_table')
The target_table has 50 million rows, and I'm aware the query takes time to execute, but it has successfully completed before.
This time, though, my PC restarted automatically in the middle of the query. SSMS 2017 reopened as soon as the PC fired back up, but I could no longer see the query running. my_table_on_sql_server has no data.
I'd like to understand what happens in SQL Server in the event of such a situation. Am I correct in assuming that the query was killed / rolled back? Is there any query running in the background? I've seen some related answers on this forum but wanted to specifically understand this for linked servers, as I use them a lot to retrieve data from other DBs for my job.
I'm more concerned about the Oracle DB as I don't want my query to impact any performance upstream. I only have a read-only access permission to the Oracle DB.
Thank you!
On shutdown the query will be aborted, and the INSERT rolled back. The rollback may happen during shutdown, or after restart, and may take some time to complete.
There's no automatic retry, and nothing will access the Oracle linked server after the shutdown.
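If you want to confirm nothing is still running after the restart, a quick sketch (the command filter values below are the usual ones for crash recovery and rollbacks, but treat them as an assumption to adapt):

-- Any request still recovering or rolling back will show up here;
-- percent_complete is populated for rollbacks, so you can watch progress
SELECT session_id, command, status, percent_complete,
       estimated_completion_time / 60000 AS est_minutes_remaining
FROM sys.dm_exec_requests
WHERE command LIKE '%ROLLBACK%' OR command = 'DB STARTUP';

An empty result means recovery has finished and nothing related to the query is still active, on either side of the linked server.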
I have an issue that started a few weeks ago after a Windows update, and I cannot find any info about the problem on the interwebs. I have a SQL Server 2016 Express instance installed on an up-to-date Windows 10 machine, with a database that has a FILESTREAM filegroup and a full-text search catalog. The database is attached and functions properly as far as I can tell, and there is nothing off in the Windows event log. However, since that update, SQL Server churns on the database constantly, using CPU and disk.
I had the database stored on a mechanical hard drive, and the CPU usage was constantly around 30% until I shut down the SQL instance. Restarting it only helps temporarily as the churning soon starts again. Keep in mind this is on an off-network machine (apart from an internet connection). At first I thought I got a virus or something, so I shut down the server, and nuked it from orbit. I got a new SSD, installed Windows 10, installed SQL Server 2016, updated everything, took the MDF and LDF (and filestream folder), moved them over to the new machine, attached the database. No issue at first. Then it starts again, albeit now the CPU usage is much lower, probably because the storage is so much faster.
This is what it looks like in the Resource Monitor:
This seems to be related to Windows Defender somehow, as I can start a scan and watch the number of sqlservr.exe handles to the same database blow up live.
The SQL Server logs look like endless pages of this:
And all the while, the SSMS Activity Monitor shows no processes or anything database-wise that could explain the activity. Keep in mind this is an isolated database on a freshly installed machine with no client connected apart from me.
I have looked at the updates that could cause this, but I see nothing apparent, and now I am at a loss as to what to do. The only solution I see is a downgrade to SQL Server 2008 SP3, which I know for a fact worked fine before. I would greatly appreciate any help on this.
The frequent "Starting up database 'Abacus'" message in the SQL Server error log indicate the database is set to AUTO_CLOSE and the database is frequently accessed. This constant opening and closing of the database results in significant overhead and is the likely cause of the high resource utilization you see.
The simple cure is to turn off auto close:
ALTER DATABASE Abacus
SET AUTO_CLOSE OFF;
It is generally best to keep the AUTO_CLOSE database setting off to avoid unnecessary overhead. The exception is a SQL instance hosting hundreds or thousands of databases where most are not actively used.
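To see which databases on an instance currently have the option enabled, a quick check along these lines works on SQL Server 2005 and later:

-- List databases with AUTO_CLOSE enabled
SELECT name
FROM sys.databases
WHERE is_auto_close_on = 1;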
I've got SQL2012 running on 2 different servers with public, static IP addresses. I want to implement replication in a way that will keep both servers in sync at all times, regardless of which server is actually receiving the data. I've been reading about the subscriber/publisher model but I'm not exactly sure which should be which. A few facts about our setup:
I'm trying to achieve failover. If server A goes down, I need server B to be operational and have all latest data, or as close as possible. And vice versa. When the server comes back online, I need the replication to get caught up quickly and start working again. I need failures to be graceful, in other words I can't have server A get weird just because server B went offline.
I don't need real-time replication, but close would be nice. If server A was 10 seconds behind server B with data updates, nobody would care. But if it were an hour behind, that would be bad. Fast DB performance is more important than real-time replication, but again, close would be nice.
My database is just shy of 900 MB, and grows by 3 MB per day.
I am looking for advice on the best way to set this up given my setup and needs. Much appreciated.
Since one server will be the Primary and the other the Failover, use Log Shipping. It will keep the two databases the same for all transactions completed on the Primary server up to the moment of failure. Transactions that had not completed at the moment of failure will not appear on the Failover server, so they should be resubmitted by the application to hit the Failover server.
There should also be a recovery procedure, to ensure the Primary server is brought back up to date.
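Under the hood, log shipping is just scheduled transaction log backups restored on the secondary with NORECOVERY. A minimal manual sketch of the mechanism (database name and paths are placeholders; the real setup is done through the wizard or the log shipping stored procedures):

-- On the primary: back up the transaction log on a schedule
BACKUP LOG MyDatabase
TO DISK = N'\\backupshare\logs\MyDatabase_1200.trn';

-- On the failover server: restore each log backup in sequence,
-- leaving the database in a restoring state so further logs can apply
RESTORE LOG MyDatabase
FROM DISK = N'\\backupshare\logs\MyDatabase_1200.trn'
WITH NORECOVERY;

-- At failover time: bring the secondary online
RESTORE DATABASE MyDatabase WITH RECOVERY;

With backups every few minutes you won't hit your 10-second target, but you can usually keep the gap to a few minutes, which fits the "close, not real-time" requirement.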
Useful articles:
Database Mirroring and Log Shipping.
Configure Log Shipping
I'm maintaining a legacy server app that generates DMO files from SQL Server views.
Sometimes the server crashes because SQL Server consumes all CPU resources.
Using the SQL Server monitor, I can see that the problem is the SQLDMO connections, which are consuming all the CPU time and blocking the server.
I don't understand why, because the DMO connection runs with TRANSACTION ISOLATION LEVEL READ UNCOMMITTED, and these queries never finish, running for weeks. The only solution is to shut down the server.
I would suggest looking into the code to see why these connections are not closed. I'm guessing there's no proper cleanup at the end, or something along those lines.
If that is not an option, you could consider running a scheduled job that kills off these specific sessions every so often if they have been running for longer than, say, 24 hours.
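A sketch of what such a job could run (the program_name filter is an assumption -- check what your app actually reports; sysprocesses is used here because it also exists on the older versions that SQLDMO implies):

-- Kill sessions from the DMO app that logged in more than 24 hours ago
DECLARE @spid int, @cmd varchar(20);

DECLARE stale CURSOR FOR
    SELECT spid
    FROM master..sysprocesses
    WHERE program_name LIKE 'SQLDMO%'   -- hypothetical filter, adjust to your app
      AND login_time < DATEADD(HOUR, -24, GETDATE());

OPEN stale;
FETCH NEXT FROM stale INTO @spid;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @cmd = 'KILL ' + CAST(@spid AS varchar(10));
    EXEC (@cmd);   -- KILL requires dynamic SQL when the spid is a variable
    FETCH NEXT FROM stale INTO @spid;
END
CLOSE stale;
DEALLOCATE stale;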
We have an application with a SQL Server 2000 database attached to it. Every couple of days the application hangs, and we have to restart the SQL Server service before it works fine again. The SQL Server logs show nothing about the problem. Can anyone tell me how to identify this issue? Is it an application problem or a SQL Server problem?
Thanks.
Is it an application problem or a SQL Server problem?
Is it possible to connect to MS SQL Server using Query Analyzer or another instance of your application?
General tips:
Use Activity Monitor to find information about concurrent processes, locks and resource utilization.
Use SQL Server Profiler to trace server and database activity, and to capture and save data to a table or file to analyze later.
You can use Dynamic Management Views (in Management Studio, under the Database > Views > System Views folder) to get more detailed information about SQL Server internals; see the sketch below.
If you have performance problems (not your case), you can use Performance Monitor and Data Collector Sets to gather performance information.
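As an example of the DMV approach, a quick blocking check (this requires SQL Server 2005 or later, so on your SQL Server 2000 instance you would query master..sysprocesses instead):

-- Show requests that are currently blocked and who is blocking them
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;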
It's hard to pinpoint the issue, but I suggest you check your application first. Check which operations you are performing against the database, and whether you are taking care of connection pooling; unused open connections can create issues.
Check whether you can get any logs from your application. Without any log information, we can hardly suggest anything.
Read this
The application may be hanging due to a deadlock. Check which stored procedures run at that time using Profiler, and check the table manipulation (consider NOLOCK hints where dirty reads are acceptable). Also check the buffer size, and consider segregating the DB into two or three modules.
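To confirm the deadlock theory, you can have SQL Server write deadlock details to the error log with a trace flag (1204 is the SQL Server 2000-era flag; on 2005 and later, 1222 gives richer output):

-- Log deadlock details to the SQL Server error log, server-wide (-1)
DBCC TRACEON (1204, -1);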