get warning when reaching certain database size - sql-server

We are using MS SQL Express, where the database size is limited to 10 GB. We recently ran into problems when we reached the 10 GB limit and only found out because users told us the apps were no longer working.
In MS SQL Server Management Studio we have the option to open
Reports - Standard Reports - Disk Usage
and there we can see what percentage of the space is currently unused.
Is it somehow possible to get a notification when the database reaches 90% of that limit (9 GB)?
Thanks for the help
Andreas
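SQL Express has no SQL Server Agent, but one workable approach is a small T-SQL check run on a schedule (for example via Windows Task Scheduler and sqlcmd). A minimal sketch, assuming the 10 GB cap applies to the data files of the current database:

    -- Sketch: warn once the data files of the current database pass 90% of 10 GB.
    DECLARE @limit_mb decimal(10,2) = 10 * 1024;   -- 10 GB Express cap
    DECLARE @used_mb  decimal(10,2);

    SELECT @used_mb = SUM(CAST(size AS bigint)) * 8 / 1024.0   -- size is in 8 KB pages
    FROM sys.database_files
    WHERE type_desc = 'ROWS';                                  -- data files only; the log does not count

    DECLARE @pct int = CAST(100.0 * @used_mb / @limit_mb AS int);
    IF @pct >= 90
        RAISERROR('Database is at %d%% of the 10 GB Express limit.', 10, 1, @pct) WITH LOG;

WITH LOG writes the message to the SQL Server error log and Windows event log (it needs ALTER TRACE or sysadmin permission), which is easy to monitor or alert on; severity 10 keeps it informational.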

Related

How to definitively fix this error: .NET Framework execution was aborted by escalation policy because of out of memory?

First of all, here is our setup:
SQL Server 2014 SP2 Standard Edition 64-bit
128GB of RAM (maximum allowed by Standard Edition), of which 120GB is allocated to SQL Server
The server currently hosts ~5000 databases which are all similar (same tables, stored procedures, etc.) for a total of 690GB of data (mdf files only)
Now what happens:
Every now and then, after the server has been up for some time, we receive this error when executing queries on some databases:
.NET Framework execution was aborted by escalation policy because of out of memory
This error happens more often when we perform an update of all client databases (when we launch a feature) using Red Gate SQL Multi Script. Of the ~5000 DBs, we get the error on 70 of them. Running the update script again, the error happens on a subset of those, and so on until all databases are updated correctly. This is just annoying.
We have had this error for a long time. Our server had 64GB of RAM, so we added more memory to max out SQL Server Standard Edition, but the error still came back a few days later. We think the error might be a symptom of something else.
A few things that might help to get an answer:
Our version of SQL Server is 64-bit, so we think we don't have to deal with Virtual Address Space Reservation
The error also happens when running from a client app written in PHP on Linux, so we're not talking about the .NET Framework in the client code
In our databases, the only use we make of the .NET Framework is GROUP_CONCAT, a .NET CLR assembly with user-defined functions that helps us simulate MySQL's GROUP_CONCAT aggregate. We have a copy of the assembly in each of our 5000 client databases.
We already tried to lower the max server memory setting (to 96GB), but we were still getting those errors
If more info is needed I will update my question.
It's been 4 months since I tried a fix, and I have not experienced the bug again. I still don't have the exact explanation for the bug, but here is what I tried, and it seems to work:
My guess was that having the same .NET CLR assembly in each of our 5000+ databases might be the problem, increasing memory usage for .NET in some way.
I created a new database named something like DotNetClrUtils
I copied the .NET CLR Assembly for GROUP_CONCAT in this database
I changed all usage of GROUP_CONCAT in all our client code and stored procedures to reference the single instance in the DotNetClrUtils database (calling it like this: DotNetClrUtils.dbo.GROUP_CONCAT_D(col, ','))
That's all, and now this problem is gone!
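For illustration, a call to the shared aggregate then looks something like this in a query (the Orders table and its columns are made-up names; only the three-part DotNetClrUtils.dbo.GROUP_CONCAT_D reference reflects the setup described above):

    -- Illustration only: Orders/column names are hypothetical.
    -- The pattern is the three-part reference to the single shared CLR aggregate.
    SELECT o.CustomerId,
           DotNetClrUtils.dbo.GROUP_CONCAT_D(o.ProductName, ',') AS ProductList
    FROM dbo.Orders AS o
    GROUP BY o.CustomerId;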

Virtualized IIS7 page runs slow when querying against SQL Server

OK… I’ve been tasked with figuring out why an intranet site is running slow for a small to medium sized company (fewer than 200 people). After three days of looking on the web, I’ve decided to post what I’m looking at. Here is what I know:
Server: HP DL380 Gen9 (new)
OS: MS Server 2012 – running Hyper-V
RAM: 32GB
Server 2012 was built to run at most 2 to 3 VMs (only running one VM at the moment)
16GB of RAM dedicated to the VM (not dynamic memory)
Volume was created to house the VHD
The volume has a fixed 400GB VHD inside it.
Inside that VHD is Server 2008 R2 running SQL Server 2008 R2 and hosting an IIS7 intranet.
Here is what’s happening:
A page in the intranet is set to run a couple of stored procedures that do some checking against data in other tables as well as insert data (some sort of attendance db) after employee data is entered. The code looks like it creates and drops approximately 5 tables in the process of crunching the data. The page takes about 1 min 50 sec to run on the newer server. I was able to get hold of the old server and run a speed test: 14 seconds.
I’m at a loss… a lot of sites say to alter the code. However, it was running quickly before.
The old server is a 32-bit Server 2003 machine running SQL Server 2000… the new one is obviously 64-bit.
Any ideas?
You should find out where the slowness is coming from.
The bottleneck could be in SQL Server, in IIS, in the code, or on the network.
Find the SQL statements that are executed and run them directly in SQL Server (a query sketch follows this list)
Run the code outside of IIS web pages
Run the code from a different server
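One way to find those statements without setting up a full trace is to pull the most expensive recent statements straight from the plan cache; a rough sketch (requires VIEW SERVER STATE):

    -- Sketch: most expensive recent statements from the plan cache, by average elapsed time.
    SELECT TOP (20)
           qs.total_elapsed_time / qs.execution_count AS avg_elapsed_us,
           qs.execution_count,
           SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
                     (CASE qs.statement_end_offset
                           WHEN -1 THEN DATALENGTH(st.text)
                           ELSE qs.statement_end_offset
                      END - qs.statement_start_offset) / 2 + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY avg_elapsed_us DESC;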
Solved my own issue... just took a while for me to get back to this. Hopefully this will help others.
Turned on SQL Activity Monitor under Tools > Options > At Startup > Open Object Explorer and Activity Monitor.
Opened Recent Expensive Queries, right-clicked the top queries, and selected Show Execution Plan. This showed a missing index for the db, and I added the index by clicking the plan info at the top.
Hope this helps!
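The same missing-index information that the graphical plan surfaces can also be read directly from the missing-index DMVs; a sketch, ordered by a rough estimate of benefit (the suggestions are hints, not gospel):

    -- Sketch: missing-index suggestions from the DMVs, highest estimated benefit first.
    SELECT TOP (10)
           d.statement AS table_name,
           d.equality_columns,
           d.inequality_columns,
           d.included_columns,
           s.user_seeks,
           s.avg_total_user_cost * s.avg_user_impact * (s.user_seeks + s.user_scans) AS est_benefit
    FROM sys.dm_db_missing_index_details AS d
    JOIN sys.dm_db_missing_index_groups AS g       ON g.index_handle = d.index_handle
    JOIN sys.dm_db_missing_index_group_stats AS s  ON s.group_handle = g.index_group_handle
    ORDER BY est_benefit DESC;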

Why should I upgrade from SQL express to standard SQL server?

One of our customers has been on SQL Express 2012 R2 for the last couple of years and their database has grown to 1 GB. At any given time they have around 15 or more workstations connected to this database. They have a dedicated 2008 server for this database. Sometimes I can see some issues with slow response, but most of the time it is just ok. I cannot tell whether suggesting SQL Server Standard would improve the performance, or whether it would be a waste of money. Can anybody suggest the parameters to check before I make this decision?
In Task Manager there are 2 sqlservr.exe processes, both using 0% CPU, but one of them is using 2.2 GB of memory and the other 68 MB.
Am I already pushing the envelope too far?
Please advise.
This cannot be answered without knowing how the system is developed. The vast majority of slowness issues I have run across in many years of database work have to do with inefficient code or missing indexes. Getting a higher edition of the database won't fix either of those two issues.
Your problem could also be caused by physical equipment that is reaching its limit or by network issues.
You are not close to the data storage capacity of SQL Server Express, so I would investigate other things first, as SQL Server Standard Edition is quite a bit more expensive.
Your best bet would be to get a good book on performance tuning and read it. There are hundreds of things that can cause slowness.
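As a starting point for the "parameters to check", a couple of quick queries (a sketch, run on the instance in question) show how close each database is to the Express size cap and how much memory the instance is actually using:

    -- Sketch: allocated data-file size per database versus the Express cap.
    SELECT DB_NAME(database_id) AS database_name,
           SUM(CAST(size AS bigint)) * 8 / 1024 AS allocated_data_mb
    FROM sys.master_files
    WHERE type_desc = 'ROWS'
    GROUP BY database_id;

    -- Sketch: physical memory currently used by this SQL Server instance.
    SELECT physical_memory_in_use_kb / 1024 AS sql_memory_in_use_mb
    FROM sys.dm_os_process_memory;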

SQL Server 2014 standard edition slows the machine when Database size grows

I have a scenario where an application server saves 15k rows per second into a SQL Server database. For the first few hours the machine is still usable, but once the database size grows to ~20 GB the machine seems to become unusable.
I saw some topics/forums/answers/blogs suggesting limiting SQL Server's max memory usage. Any thoughts on this?
By the way, we are using SqlBulkCopy to insert rows into the database.
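For reference, capping SQL Server's memory is a single server-level setting; a sketch (the 4096 MB value is an arbitrary example, not a recommendation):

    -- Sketch: cap the instance's memory so the OS keeps some headroom.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 4096;
    RECONFIGURE;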
I have two suggestions for you:
1 - Database settings:
When you create the database, try to use a large initial size, and consider a bigger autogrowth increment (percentage or fixed size).
You will want to minimize the number of times your database files need to grow.
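For example, something along these lines at creation time (database name, logical file names, paths, and sizes are placeholders to illustrate the idea):

    -- Sketch: pre-size the database and use a large fixed growth increment.
    CREATE DATABASE MyAppDb
    ON PRIMARY
        (NAME = MyAppDb_Data, FILENAME = 'D:\Data\MyAppDb.mdf', SIZE = 50GB, FILEGROWTH = 5GB)
    LOG ON
        (NAME = MyAppDb_Log,  FILENAME = 'E:\Logs\MyAppDb.ldf', SIZE = 10GB, FILEGROWTH = 1GB);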
2 - Server settings:
In your SQL Server settings I would recommend that you remove one logical processor from SQL Server. The OS will use this processor when SQL Server is busy with heavy loads on the other processors. In my experience, this usually gives a nice boost to the OS.
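One way to do that is with the process affinity setting; a sketch for an 8-core box that leaves the last logical processor to the OS (CPU numbers depend on the machine):

    -- Sketch: bind SQL Server to CPUs 0-6 and leave CPU 7 to the OS.
    ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = 0 TO 6;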

A scenario to show the performance benefit of having more RAM for SQL Server 2008

I've asked for more RAM for our SQL Server (currently we have a server with 4 GB of RAM), but our administrator told me he would only accept that if I can show him the better performance from having more memory available, because he has checked the server logs and SQL Server is using only 2.5 GB.
Can someone tell me how I can demonstrate to him the effect of more available memory (for example, on a query with a performance issue)?
Leaving aside the fact that you don't appear to have memory issues...
Some basic checks to run:
Check the Page Life Expectancy counter: this is how long a page is expected to stay in the buffer pool
Target Server Memory is how much RAM SQL Server wants to use
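Both counters can be read straight from sys.dm_os_performance_counters; a quick sketch:

    -- Sketch: current Page Life Expectancy plus Target/Total Server Memory (values are point-in-time).
    SELECT RTRIM(counter_name) AS counter_name, cntr_value
    FROM sys.dm_os_performance_counters
    WHERE (object_name LIKE '%Buffer Manager%'
           AND counter_name = 'Page life expectancy')
       OR (object_name LIKE '%Memory Manager%'
           AND counter_name IN ('Target Server Memory (KB)', 'Total Server Memory (KB)'));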
Note on PLE:
"300 seconds" is often quoted as a threshold, but our busy server has a PLE of 80k+ seconds (close to a full day), even though our databases are 15 x RAM, with peaks of 3k new rows per second and a lot of read aggregations.
Edit, Oct 2011
I found this article on PLE by Jonathan Kehayias: http://www.sqlskills.com/blogs/jonathan/post/Finding-what-queries-in-the-plan-cache-use-a-specific-index.aspx
The comments feature many of the usual SQL Server suspects.
