SQL Server 2016 + FILESTREAM + Windows Defender = constant CPU and disk usage - sql-server

I have an issue that started a few weeks ago after a Windows update, and I cannot find any information about the problem online. I have a SQL Server 2016 Express instance installed on an up-to-date Windows 10 machine, with a database that has a FILESTREAM filegroup and a full-text search catalog. The database is attached and functions properly as far as I can tell, and there is nothing off in the Windows event log. However, since that update, SQL Server constantly churns on the database, using CPU and disk the whole time.
I had the database stored on a mechanical hard drive, and the CPU usage was constantly around 30% until I shut down the SQL instance. Restarting it only helps temporarily, as the churning soon starts again. Keep in mind this is on an off-network machine (apart from an internet connection). At first I thought I had a virus or something, so I shut down the server and nuked it from orbit: I got a new SSD, installed Windows 10, installed SQL Server 2016, updated everything, took the MDF and LDF (and the FILESTREAM folder), moved them over to the new machine, and attached the database. No issue at first. Then it started again, albeit with much lower CPU usage now, probably because the storage is so much faster.
This is what it looks like in the Resource Monitor:
This seems to be related to Windows Defender somehow, as I can start a scan and watch the number of sqlservr.exe handles to the same database blow up live.
The SQL Server logs look like endless pages of this:
And all the while the SSMS Activity Monitor shows no processes or anything database-wise that could explain the activity. Keep in mind this is an isolated database on a freshly installed machine with no client connected apart from me.
I have looked at the updates that could cause this, but I see nothing apparent, and now I am at a loss as to what to do. The only solution I see is a downgrade to SQL Server 2008 SP3, which I know for a fact worked fine before. I would greatly appreciate any help on this.

The frequent "Starting up database 'Abacus'" message in the SQL Server error log indicate the database is set to AUTO_CLOSE and the database is frequently accessed. This constant opening and closing of the database results in significant overhead and is the likely cause of the high resource utilization you see.
The simple cure is to turn off auto close:
ALTER DATABASE Abacus
SET AUTO_CLOSE OFF;
It is generally best to keep the AUTO_CLOSE database setting off to avoid unnecessary overhead. The exception is a SQL instance hosting hundreds or thousands of databases where most are not actively used.
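If you want to verify which databases on the instance still have the setting enabled, a quick check against the sys.databases catalog view (a minimal sketch; nothing here is specific to the question apart from the idea) looks like this:
-- List databases on this instance that still have AUTO_CLOSE enabled
SELECT name, is_auto_close_on
FROM sys.databases
WHERE is_auto_close_on = 1;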

Related

Azure VM with SQL Server database - backup and file recovery

I have an Azure VM - Windows (Windows Server 2008 R2 Datacenter). It has Microsoft SQL Server 2008 R2 running on it (version 10.50.6549).
The Azure VM has backups running according to a policy - and I can see in the backups blade for the VM that they are running nightly.
If I have an issue with the SQL Server, and need to roll back to a prior version of the database, will the File Recovery option from the VM backup be adequate?
Or should I also be running SQL Server backups via a maintenance plan on the server on the VM?
If I have an issue with the SQL Server, and need to roll back to a prior version of the database, will the File Recovery option from the VM backup be adequate?
Maybe. VM backups don't always give you consistent SQL backups. They usually work, but not always. If you have everything set up just right and get consistent VM backups, it might be OK-- but you are running a fairly old OS on that VM, so I'd be nervous. Very nervous. If the data is really important to you, then you should back up the data, not just the VM. Sometimes you want to restore just the data to another VM to investigate, not the entire server. I also hope you have more than just "last night's VM backup" at any given time. Sometimes bad things happen on Friday and you don't notice until Monday.
Or should I also be running SQL backups via a maintenance plan on the server on the VM?
Yes, you should be running SQL backups if your data is important. If your data is really important (you don't want to lose half a day of it), you should be doing full backups periodically (e.g. nightly), transaction log backups many times per hour, and keeping a few weeks' worth of backups in rotation. If your data is super-important (you don't want to lose more than a few seconds), you should be mirroring it over to another database server in near real time (asynchronously). If it is critical (you don't want to lose any data), then you want to mirror to another server in real time (synchronously).
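As a rough sketch of what that full-plus-log routine could look like when scheduled via SQL Server Agent or a maintenance plan (the database name and backup paths below are placeholders, not from the question):
-- Nightly full backup (database name and paths are placeholders)
BACKUP DATABASE [MyDatabase]
TO DISK = N'D:\Backups\MyDatabase_full.bak'
WITH INIT, CHECKSUM;

-- Transaction log backup, scheduled every few minutes (requires the FULL recovery model)
BACKUP LOG [MyDatabase]
TO DISK = N'D:\Backups\MyDatabase_log.trn'
WITH CHECKSUM;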
Of course, if you are already running in Azure and don't have a DBA, managing a database is a lot easier, safer, more available, and generally cheaper if you use Azure SQL rather than trying to manage your own SQL Server instance in a VM-- oh yeah, and backups are handled for you, with point-in-time recovery for up to 45 days-- and they handle the mirroring for you too. If you want to mirror to another region across the country, you do have to pay extra for that, though.

SQL Server 2008 DBNETLIB error

Our ASP.NET application is getting the error below:
"[DBNETLIB][ConnectionOpen (Connect()).] SQL Server does not exist or access denied."
I can connect with Enterprise Manager, Management Studio, and Query Analyzer without any issue.
It had been running these applications without any issue for a long time. For the last week we have been getting this error. If we restart the server, it works, and then the error comes back after 3 to 4 hours.
We are running on Windows Server 2003. I have been searching and haven't found a solution yet. If anybody knows anything about this error, please post the details.
Thank you in advance,
Joseph
Sounds like you are running out of space on the drive that contains tempdb. tempdb is purged on a restart but slowly grows as queries are executed and connections are made. Check the drive when you start having the problem. If you're within a few megabytes of zero, then that's the problem. Clear up some hard drive space, move tempdb to another drive, or create multiple tempdb files on multiple drives.
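For example, something along these lines checks where tempdb lives and how big it is, and shows what moving it or adding a file might look like (assumes SQL Server 2005 or later for the catalog view; the drive letters are placeholders and the logical names tempdev/templog are the installation defaults):
-- Check current tempdb file sizes and locations
SELECT name, physical_name, size * 8 / 1024 AS size_mb
FROM tempdb.sys.database_files;

-- Move tempdb to a drive with more space (takes effect after the service restarts;
-- drive letters are placeholders, logical names are the defaults)
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'E:\SQLData\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'E:\SQLData\templog.ldf');

-- Or spread the load by adding a data file on another drive
ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'F:\SQLData\tempdb2.ndf', SIZE = 512MB);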
Could also be a problem with RAM, but it's more likely to be an issue with the tempdb.

Will uninstalling a named instance of SQL Server require a reboot or cause issues with existing instances?

I have a server with a default instance and 2 named instances of SQL Server 2005 standard installed. This is a mission critical production server that cannot be restarted during normal business hours.
Will uninstalling the two named instances of SQL Server 2005 require a reboot or put the server in a state that may cause issues with the default instance of SQL Server 2005 until it's rebooted?
This would probably get a better answer at serverfault.com.
I'm not sure how much it would help performance; if a SQL instance isn't getting hit, it doesn't do much. You could probably get away with uninstalling, but then again, when you're in surgery, bad things happen. I've never killed SQL Server by uninstalling an instance, but I have killed the client tools. I would take one of the following approaches:
a) First, back up and drop all the databases to reclaim the disk space (see the sketch after this list). Then stop and disable the services for the named instances. The binaries will still be there, but they aren't too large and will be sitting idle.
b) A better long-term plan, if you can source the hardware, is to set up a new box from scratch, stand up a clean SQL instance over there, and then port the live server over. Really not too painful. Then repurpose the old box as you see fit.
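For option a), a minimal sketch of the backup-and-drop step against one database on a named instance (the database name and backup path are placeholders):
-- Take a final backup of the database on the named instance, then drop it to reclaim space
-- (database name and backup path are placeholders)
BACKUP DATABASE [OldAppDb]
TO DISK = N'D:\Backups\OldAppDb_final.bak'
WITH INIT;

DROP DATABASE [OldAppDb];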
Is there a critical reason why you need to uninstall the named instances? Can you just ignore them?
EDIT: the answer is yes, you can uninstall via Add/Remove Programs. No reboot occurs.
An article which might apply to your situation:
http://support.microsoft.com/?kbid=915854

Performance problems with SQL Server Management Studio

I'm running SQL Server Management Studio 2008 on a decent machine. Even if it is the only thing open, with no other connections to the database, anything that has to do with the database diagram or simple schema changes in a designer takes up to 10 minutes to complete, and SQL Server Management Studio is unresponsive during that time. The same SQL code takes less than a second. This entirely defeats the purpose of the designers and diagrammers.
------------------
System Information
------------------
Operating System: Windows Vista™ Ultimate (6.0, Build 6001) Service Pack 1 (6001.vistasp1_gdr.080917-1612)
Processor: Intel(R) Core(TM)2 Quad CPU Q6700 @ 2.66GHz (4 CPUs), ~2.7GHz
Memory: 6142MB RAM
Please tell me this isn't a WOW64 problem; if it is, I love MS, but step up your 64-bit support in development tools.
Is there anything I can do to get the performance anywhere near acceptable?
Edit:
I've got version 10.0.1600.22 of SQL Server Management Studio installed. Is this not the latest release? I'm sure I installed it from an MSDN CD and I pretty much rely on Windows Update these days. Is there any place I can quickly see what the latest release version number is for tools like this?
Edit:
Every time I go to open a database diagram I get the message "This database does not have one or more of the support objects required to use database diagramming. Do you wish to create them?" I say yes every time. Is this part of the problem? Also, if I press the copy icon, I get the message "Current thread must be set to single thread apartment (STA) mode before OLE calls can be made." Database corruption?
I'm running in a similar environment and not having that problem.
As with any performance problem, you'll have to analyze it a bit - just saying "it takes 10 minutes" gives no information about why it takes so long, and therefore nothing you can use to solve the problem.
Here are some tools to play around with. "Play around" is about all I've learned to do with them, so I'd recommend learning a little more about them than I have. http://technet.microsoft.com is a good source on performance issues.
Start with Task Manager, believe it or not. It's been enhanced in Vista and Server 2008, and now has a better Performance tab, and a Services tab. Be sure to click "Show processes from all users", or you'll miss nasty things done by services.
The bottom of the Performance tab has a "Resource Monitor" button. Click it, watch it, learn what it can do for you.
The Resource Monitor is actually part of a larger "Reliability and Performance Monitor" tool in Administrative Tools. Try it. It even includes the new version of perfmon, which will be more useful when you have a better idea what counters to look at.
I will also suggest the Process Explorer and Process Monitor tools from Sysinternals. See http://technet.microsoft.com/en-us/sysinternals/default.aspx.
Do your simple schema changes possibly mean that you're reordering the columns of a table?
In that case, what SQL Management Studio does behind the scenes is create a new table, move all the data from the old table to the newly created table, and then drop the old table.
Thus, if you reorder columns on a table with lots of data, lots of indices or both, you CAN incur a massive amount of "reorganization" work without really realizing it.
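To see why that can be so slow, the designer's rebuild is roughly equivalent to this hand-written pattern (the table and column names here are made up purely for illustration):
-- Roughly what the designer scripts when you reorder columns on a table
-- (table and column names are made up for illustration)
BEGIN TRANSACTION;

CREATE TABLE dbo.Tmp_Orders
(
    OrderId   INT           NOT NULL,
    OrderDate DATETIME      NOT NULL,
    Amount    DECIMAL(18,2) NOT NULL
);

-- Copy every row from the old table; this is the expensive part on large tables
INSERT INTO dbo.Tmp_Orders (OrderId, OrderDate, Amount)
SELECT OrderId, OrderDate, Amount FROM dbo.Orders;

DROP TABLE dbo.Orders;
EXEC sp_rename 'dbo.Tmp_Orders', 'Orders';

COMMIT;
The real designer script also recreates indexes, constraints, and permissions on the new table, which adds to the work.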
Marc
Can you try connecting your SQL Management Studio to a different instance of SQL Server or, better, an instance on a remote machine (and try to make similar changes)?
Are there any entries in the System or Application Event Logs (or SQL logs for that matter)? Have you tried uninstalling and reinstalling SQL Server on your machine? What version of SQL Server (database) are you running?
Lastly, can you open the Activity Monitor successfully? Right click on the server (machine name) - top of the three in the object explorer window - and click on 'Activity Monitor'.
Do you have problems with other software on your machine or only with SQL Server & Management Studio?
When you open SSMS it attempts to validate itself with Microsoft. You can speed this process by performing the second of the recommendations at the following link.
http://www.sql-server-performance.com/faq/sql_server_management_studio_load_time_p1.aspx
Also, are you using the registered servers feature? If so, SSMS will attempt to validate all of these as well.
It seems as though it was a network configuration problem. Never trust a developer (myself) to set up a haphazard domain at his office.
My computer's DNS server was pointed at my ISP's (the default, because the ISP-provided wireless router we're using doesn't let me override the DNS server with my own) instead of at my DNS server here, so I have to remember to configure it manually on each computer, which I forgot to do for this particular machine.
I only discovered it when I tried to connect for the first time to a remote SQL Server instance from this PC. It was trying to resolve an actual sub-domain of mycompany.com instead of my DNS server's authority of COMPUTERNAME.corp.mycompany.com.
I can't say why this was an issue for the designers in SQL Server but not anything else, but my only hypothesis is that when I established a connection to my own computer locally using the computer name instead of "." or "localhost", SQL queries executed immediately, knowing it was local, but the designers still waited for a timeout from the external IP address before trying the local one.
Whatever the explanation is, changing my DNS server for my network card on the local machine to my DNS server's IP made it all work very quickly.
I had a similar issue with mine. Turned out to be some interference with the biometrics login service running on my laptop. Disabled the service and now it works.

How would a SQL Server 2005 database lose a few days data?

I really need some help here.
I'm the owner of a SQL Server database application that lost three days' worth of data! I can't understand how or why.
So here is the set-up.
SQL Server 2005 32-bit Standard Edition database on Windows 2000 Server. (Database B)
The database is in simple recovery mode.
The database is connected as a subscriber to another database (SQL Server 2005 64-bit Enterprise Edition on Windows Server 2003 Enterprise) using SQL Server continuous merge replication. (Database A)
Database B's server was rebooted on night X as part of a scheduled reboot. When the database came back up, it was used as normal for a couple of days and data was created in it perfectly fine.
But then yesterday, day X + 4, it lost a lot of data.
Database B is on a server with another instance of SQL Server, and they both started to run out of memory (conflicting with each other).
Here is the sequence of events from the event log when I think this happened.
AppDomain 2 (DatabaseB.dbo[runtime].1) is marked for unload due to memory pressure.
AppDomain 2 (DatabaseB.dbo[runtime].1) unloaded.
BACKUP LOG WITH TRUNCATE_ONLY or WITH NO_LOG is deprecated.
The simple recovery model should be used to automatically truncate the transaction log. (on DatabaseB)
AppDomain 3 (DatabaseB.dbo[runtime].2) created.
I know the data is missing because of my audit logs and that a user had taken a screen shot of some of the data before it was deleted.
So here is my dilemma... how could this have happened?
How can several days' worth of data go missing from Database B? (It is subsequently missing from the publication database as well!)
Did the log truncation, with the AppDomain down, cause the data to be flushed from the log?
Any and all theories considered. If anyone needs more data I can add it.
Help!
This isn't the answer you want to hear, but in a nutshell, SQL Server doesn't "lose" data. Someone deleted it. If you had the database in full recovery mode, you could use a product like Quest LiteSpeed to read the logs and identify exactly how it was deleted, but in simple mode...sorry, sir, but you're out of luck.
Merge replication is implemented with triggers, so it doesn't need full recovery. Is it possible that someone disabled all the triggers in the database? It's easy to do with DISABLE TRIGGER ALL on a table. This would at least account for the subscriber losing data.
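As a quick sketch of how you could check for that on the subscriber, using only the standard catalog views (the table name in the last statement is a placeholder):
-- List DML triggers in the current database and whether any have been disabled
SELECT OBJECT_NAME(parent_id) AS table_name,
       name AS trigger_name,
       is_disabled
FROM sys.triggers
WHERE parent_class_desc = 'OBJECT_OR_COLUMN'
ORDER BY table_name, trigger_name;

-- What disabling the triggers on one published table would look like
-- (table name is a placeholder)
DISABLE TRIGGER ALL ON dbo.SomeReplicatedTable;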
Those AppDomain lines in the log don't mean that much; it's SQLCLR telling you it's unloading assemblies to free up some memory, and then reloading them later on.
Truncating the log removes inactive portions that have already been committed to disk; having the recovery model set to simple means there's no point in truncating the log manually, as the message suggests.
None of this explains why data went missing on both the servers though. There has to be something else that caused this.
How did you verify that, for the four days when everything was 'created perfectly fine', it actually was? Do you have backups from those days? Can you see records with timestamps from those days?
Is it possible there's a ghost in the machine that did a restore without telling you?
