I have a development VM which runs SQL Server as well as some other apps for my stack, and I found that the other apps were performing awfully. After doing some digging, I discovered that SQL Server was hogging the memory. A quick web search revealed that by default it will consume as much memory as it can in order to cache data, and give it back to the system as other apps request it, but this release often doesn't happen fast enough; apparently my situation is a common problem.
There is, however, a way to limit the memory SQL Server is allowed to have. My question is: how should I set this limit? Obviously I'm going to need to do some guess and check, but is there an absolute minimum threshold? Any recommendations are appreciated.
Edit:
I'll note that our developer machines have 2 GB of memory, so I'd like to be able to run the VM on 768 MB or less if possible. This VM will only be used for local dev and testing, so the load will be very minimal. After code has been tested locally it goes to another environment where the SQL Server box is dedicated. What I'm really looking for here is recommendations on minimums.
Extracted from the SQL Server documentation:
Maximum server memory (in MB)
Specifies the maximum amount of memory SQL Server can allocate when it starts and while it runs. This configuration option can be set to a specific value if you know there are multiple applications running at the same time as SQL Server and you want to guarantee that these applications have sufficient memory to run. If these other applications, such as Web or e-mail servers, request memory only as needed, then do not set the option, because SQL Server will release memory to them as needed. However, applications often use whatever memory is available when they start and do not request more if needed. If an application that behaves in this manner runs on the same computer at the same time as SQL Server, set the option to a value that guarantees that the memory required by the application is not allocated by SQL Server.
The recommendation on a minimum: there is no such thing. The more memory the better. SQL Server needs as much memory as it can get or it will thrash your I/O.
Stop SQL Server. Run your other applications and take note of the amount of memory they need. Subtract that from your total available RAM, and use that number for the max server memory setting in SQL Server.
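For example, here is a minimal sketch of applying that cap with sp_configure (the 512 MB figure is purely illustrative; substitute the number you calculated):

    -- Sketch: cap SQL Server's memory. 512 MB is a placeholder,
    -- not a recommendation; use total RAM minus what your other apps need.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 512;
    RECONFIGURE;

The setting is dynamic, so it takes effect without a service restart.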
Since this is a development environment, I agree with Greg, just use trial and error. It's not that crucial to get it perfectly right.
But if you do a lot of work in the VM, why not give it at least half of the 2GB?
so I'd like to be able to run the VM on 768 MB or less if possible.
That will depend on your data and the size of your database, but I usually like to give SQL Server at least a GB.
It really depends on what else is going on on the machine. Get things running under a typical load and have a look at Task Manager to see what you need for everything else. Try that number to start with.
For production machines, of course, it is best to give control of the machine to SQL Server (Processors -> Boost SQL Server Priority) and let it have all the RAM it wants.
Since you are using VMs, maybe you could create a dedicated one just for Sql Server and run everything else on a different VM.
I just started at a new office as a Data Analyst. The job entails upgrading client systems from our dbase platform to our new RDBMS. The actual conversion is handled by some in-house software that is a black box to me, but at the end of the conversion my system memory usage is maxed out at ~15.3 of 16 GB. I was told to just restart my computer, but it seems like there must be a better way (hopefully one that doesn't involve fixing the software, since that is out of scope for me).
I found the question at the link below, but running DBCC DROPCLEANBUFFERS doesn't seem to work. Restarting the SQL instance works, but that interrupts all the databases on the instance. Is there another way to release the memory?
SQL Server clear memory
We use both SSMS 2008 R2 and SSMS 2012.
Thanks
SQL Server will grab and keep all memory that you let it. If you don't want it to use 15.3 GB of memory, you need to change the setting so it only grabs X GB.
You can do this by right-clicking the instance in Object Explorer, clicking Server Properties, and changing the maximum server memory under the Memory tab. It is generally a good idea to leave at least 1-2 GB for the operating system, and more if you have anything else running on the server (you should avoid running other stuff on the server, if possible).
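If you'd rather check the current values in T-SQL than through the GUI, a quick sketch using the standard sys.configurations catalog view:

    -- Show the configured and in-use min/max server memory values.
    SELECT name, value, value_in_use
    FROM sys.configurations
    WHERE name IN ('min server memory (MB)', 'max server memory (MB)');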
SQL Server by design wants to use as much memory as it can get. If you wish to limit that, you can do so from the server properties and limit how many GB it's allowed to consume.
DBCC DROPCLEANBUFFERS clears all the clean buffers from the buffer pool. That won't help you.
You need to understand that SSMS isn't eating your memory; the SQL Server service is. SQL Server will expand its memory footprint as much as you allow it. If you have 32 GB and you say it can use 32 GB, then it will allocate as much as it can.
Here's a great article on DBA stack exchange regarding max server memory: https://dba.stackexchange.com/questions/47431/why-is-sql-server-consuming-more-server-memory
When I start SQL Server its memory usage won't stop growing and gets to over a gigabyte within a couple of minutes. I have restarted the service several times and each time it does the same thing. After each restart I do not start any applications and have no applications open that would use SQL Server.
I turned the SQL Agent off just to make sure it wasn't running anything.
I tried running Profiler to see if anything was being executed, but didn't see anything.
I tried running a query to see what the longest running queries were but nothing seems out of order there either.
What other options do I have to try to figure out what's causing SQL Server's memory to grow incessantly?
You can configure max memory usage: http://technet.microsoft.com/en-us/library/ms178067.aspx
If you really want to see what SQL Server is caching in RAM then have a look at the sys.dm_os_buffer_descriptors DMV:
sys.dm_os_buffer_descriptors
This DMV returns a row for every single page in the buffer pool; it will tell you what type of page it is and which database it belongs to.
The other thing in SQL Server that can eat RAM is the plan cache, which you can see by referencing the associated DMV (sys.dm_exec_cached_plans) and the associated functions around this.
There are loads of resources available on how to use these DMVs to analyse the contents of the plan cache and buffer pool, so you will be able to tell exactly what SQL Server is using all this memory for.
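As a rough sketch of how you might query those two DMVs (each buffer pool page is 8 KB):

    -- Buffer pool usage per database, largest first.
    SELECT
        CASE database_id
            WHEN 32767 THEN 'ResourceDb'
            ELSE DB_NAME(database_id)
        END AS database_name,
        COUNT(*) * 8 / 1024 AS buffer_pool_mb
    FROM sys.dm_os_buffer_descriptors
    GROUP BY database_id
    ORDER BY buffer_pool_mb DESC;

    -- Plan cache size by object type (Adhoc, Proc, Prepared, ...).
    SELECT objtype,
           COUNT(*) AS plan_count,
           SUM(CAST(size_in_bytes AS bigint)) / 1048576 AS cache_mb
    FROM sys.dm_exec_cached_plans
    GROUP BY objtype
    ORDER BY cache_mb DESC;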
The previous answer is correct, though: you really do need to set the max server memory property to prevent SQL Server from running away with all the RAM, especially in this case where it appears to be a concern.
This is going to be my first attempt at fine tuning our SQL Server 2008R2, and I'd like a starting point based on the following.
When I view the resource monitor, I see (in KB):
Commit: 843,948
Working Set: 718,648
Shareable: 26,276
Private: 692,372
Out of the 2 GB available on our virtual server, 1.6 is getting used up, and I suspect it is due to SQL Server; the memory gets chewed up when I initiate a service that does a bunch of TVP inserts and checks. I already added some GC.Collect() calls in my C# service, but I'm not really seeing much of a change, which leads me back to SQL Server.
Where would be a good starting point for me to learn more about optimizing based on this information, and some quick pointers?
Thanks.
Here is a quick pointer: buy more memory. 2GB is nothing today.
For the long answer: you need to understand how SQL Server allocates and uses memory. 1.6 GB on a 2 GB box is perfectly normal. See Dynamic Memory Management:
When SQL Server starts, it computes the size of virtual address space for the buffer pool based on a number of parameters such as amount of physical memory on the system, number of server threads and various startup parameters. SQL Server reserves the computed amount of its process virtual address space for the buffer pool, but it acquires (commits) only the required amount of physical memory for the current load.
The instance then continues to acquire memory as needed to support the workload. As more users connect and run queries, SQL Server acquires the additional physical memory on demand. A SQL Server instance continues to acquire physical memory until it either reaches its max server memory allocation target or Windows indicates there is no longer an excess of free memory; it frees memory when it has more than the min server memory setting, and Windows indicates that there is a shortage of free memory.
In other words, SQL Server will not release the 1.6 GB unless there is a memory pressure notification from Windows.
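If you want to watch this yourself, a small sketch against sys.dm_os_process_memory (available from SQL Server 2008 onward):

    -- Observe how much physical memory the SQL Server process holds right now.
    SELECT physical_memory_in_use_kb / 1024 AS physical_memory_in_use_mb,
           memory_utilization_percentage,
           process_physical_memory_low
    FROM sys.dm_os_process_memory;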
And finally, about your question on where to look for info on optimizations: Waits and Queues is an excellent resource. It is a methodology that allows you to identify the bottlenecks and recommends the appropriate action for all common bottleneck cases.
SQL Server is designed to pre-allocate and "eat up" all the memory you let it use. There really is no way to see any improvement except to reduce the SQL footprint in the configuration.
With the default configuration, SQL Server will analyse usage and then grab as much memory as it can to optimise itself. If other apps then start asking for memory, it gives some back.
There are a couple of values you can play with in terms of memory: a minimum, which is the amount SQL Server will keep to itself, and a maximum, which it will never grab more than. You can also play around with the number of threads it will run. You'll need some good stats for this. It depends on your usage patterns, what else needs memory, and how well it plays with others. Mess about and you can starve SQL Server, which is never a brilliant idea. I've always been a big fan of dedicated machines for a DBMS for any non-trivial use.
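If you want to experiment with those knobs, a hedged sketch (all three values are placeholders, not recommendations):

    -- Placeholder values only; measure under real load before settling on numbers.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'min server memory (MB)', 256;   -- floor SQL Server keeps for itself
    EXEC sp_configure 'max server memory (MB)', 1024;  -- ceiling it will not grab beyond
    EXEC sp_configure 'max worker threads', 0;         -- 0 = let SQL Server decide
    RECONFIGURE;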
This is as much art as science: unless you find something horrible in there, slowing down your SQL Server will just give your applications lots of memory to do nothing with, because they'll be waiting for the DB...
More stuff to have a look at.
MSDN - SQL Server Performance and Memory
You need to get performance monitoring going and have a good idea what sort of things are going on during the run. And you want an 'average' run: peak hits, out-of-office-hours processing, holidays, etc. are no use at all.
PS: don't forget that performance monitoring is itself a significant hit on the machine.
We have an 8-core, 16 GB RAM server that has SQL Server 2008 running on it. When we perform large queries on millions of rows, RAM usage goes up to 15.7 GB and then even file browsing, opening Excel, etc. gets really slow.
So does SQL Server really release memory when another process needs it or am I having another issue? We don't have any other major programs running on this server.
We've set a max memory usage of 14GB for SQL Server.
Thanks all for any enlightenment or trouble shooting ideas.
Yes it does. See SQLOS's memory manager: responding to memory pressure for details on how this works. But what exactly counts as 'memory pressure' varies from machine to machine and from OS version to OS version; see Q & A: Does SQL Server always respond to memory pressure?. If you want to reserve more memory for applications (I won't even bother to ask why you browse files and use Excel on a machine dedicated to SQL Server...) then you should lower the max server memory until it leaves enough for your entertainment.
SQL Server does NOT release memory. It takes all the memory it can get, up to the max server memory setting, and it stays there.
I've read that it's unwise to install SQL Server and IIS on the same machine, but I haven't seen any evidence for that. Has anybody tried this, and if so, what were the results? At what point is it necessary to separate them? Is any tuning necessary? I'm concerned specifically with IIS7 and SQL Server 2008.
If somebody can provide numbers showing when it makes more sense to go to two machines, that would be most helpful.
It is unwise to run SQL Server with any other product, including another instance of SQL Server. The reason for this recommendation is the nature of how SQL Server uses OS resources. SQL Server runs on a user-mode memory management and processor scheduling infrastructure called SQLOS. SQL Server is designed to run at peak performance and assumes that it is the only server on the OS. As such, SQLOS reserves all the RAM on the machine for the SQL process, creates a scheduler for each CPU core, and allocates tasks across all schedulers, utilizing all the CPU it can get when it needs it. Because SQL Server reserves all the memory, other processes that need memory will cause SQL Server to see memory pressure, and the response to memory pressure is to evict pages from the buffer pool and compiled plans from the plan cache. And since SQL Server is the only server that actually leverages the memory notification API (there are rumors that the next Exchange will too), SQL Server is the only process that actually shrinks to give room to other processes (like leaky, buggy ASP pools). This behavior is also explained in BOL: Dynamic Memory Management.
A similar pattern happens with CPU scheduling, where other processes steal CPU time from the SQL schedulers. On high-end systems and on Opteron machines things get worse, because SQL Server uses NUMA locality to full advantage, but other processes are usually not NUMA-aware and, as much as the OS tries to preserve locality of allocations, they end up allocating all over the physical RAM and reduce the overall throughput of the system as the CPUs idle waiting for cross-NUMA-boundary page access. There are other things to consider too, like the increase in TLB and L2 misses due to other processes taking up CPU cycles.
So to sum up: you can run other servers with SQL Server, but it is not recommended. If you must, then make sure you isolate the two servers as best you can. Use CPU affinity masks for both SQL Server and IIS/ASP to isolate the two on separate cores, configure SQL Server to reserve less RAM so that it leaves free memory for IIS/ASP, and configure your app pools to recycle aggressively to prevent application pool growth.
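A hedged sketch of that isolation on the SQL Server side (the mask and memory values are purely illustrative, and 'affinity mask' was later deprecated in favor of ALTER SERVER CONFIGURATION):

    -- Illustrative only: pin SQL Server to the first four cores (bitmask 0x0F)
    -- and cap its memory, leaving the remaining cores and RAM for IIS/ASP.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'affinity mask', 15;             -- cores 0-3
    EXEC sp_configure 'max server memory (MB)', 4096;  -- placeholder value
    RECONFIGURE;

The IIS side would then be pinned to the remaining cores through its worker process settings.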
Yes, it is possible and many do it.
It tends to be a question of security and/or performance.
Security is questioned as your attack surface is increased on a box that has both. Perhaps not an issue for you.
Performance is questioned as now your server is serving web and DB requests. Again, perhaps not an issue in your case.
Test vs. Production....
Many may feel fine in test environments but not production....
Again, your team's call. I like my test and production environments being as similar as possible if possible but that's my preference.
It's possible, yes.
A good idea for a production environment, no.
The problem that you're going to run into is that a SQL Server database under substantial load is, more than likely, going to be doing heavy disk I/O and have a large memory footprint. That combination is going to tie up the machine, and you're going to see a performance hit in IIS as it tries to serve up the pages.
It's unwise in certain contexts... totally wise in others.
If your machine is underutilized and won't experience heavy loads, then there is an advantage to installing the database on the same machine, because you simply won't have to transfer anything across the network.
On the other hand, if one or both of IIS or the database will be under heavy load, they will likely start to interfere, and the performance gain of dedicated hardware for each will probably outstrip the loss of having to go over the network.
Don't forget the maintenance issue... you can't reboot/patch one without nuking the other. If they are on two boxes, you can give your users a better experience than no response at all from the web server while you are maintaining the SQL box.
Not highest on the list, but should be noted.
You certainly can. You will run into performance issues if, for example, you have a large user base or there are a lot of heavy queries being run against the DB. I have worked on several sites, usually hosted at 1and1, that run IIS and SQL Server (Express!) on the same box with thousands of users (hundreds concurrent) and millions of records in poorly designed tables, accessed via poorly written stored procedures, and the user experience was certainly tolerable. It all comes down to how hard you plan on hitting the server.