I have a single instance of SQL Server on my server machine, with 80% of the memory reserved to it. This is configured in the SQL Server max memory settings.
If I create an additional instance, will it use the reserved memory, or will it attempt to use only the remaining unallocated memory?
Simply put, is the reserved memory used by SQL Server instance specific?
Max and Min Server Memory are NOT reservations. They do not work the same way as, say, reservations in VMware.
On startup, the instance only uses a minimum amount of memory - it does not grab the Min Server Memory amount immediately.
In normal use, your instance will continue to use more memory as required until it reaches the Max Server Memory limit - but only if there is no external memory pressure signalled by Windows. If Windows signals memory pressure, SQL Server will release memory, but it will not release so much that it drops below Min Server Memory.
Therefore, if you are going to install more than one instance, be careful with Min Server Memory: do not set it so high that the sum of all Min Server Memory amounts, plus the memory needed by the OS and other programs, exceeds the total physical memory available. Also think about what is realistic for Max Server Memory - as one instance increases its memory use, Windows may signal other instances to release memory, possibly impacting their performance.
You'll need to monitor what your instance actually needs - counters such as Page Life Expectancy can give you an idea of how long pages stay in memory after being loaded (i.e. whether there is memory pressure), which will help you identify the impact of lowering Max Server Memory for that instance.
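If it helps, here is one way to read that counter straight from the DMVs (a minimal sketch; the counter is reported per buffer node on NUMA machines, and the value is in seconds):
-- Page Life Expectancy as SQL Server itself reports it
SELECT [object_name], counter_name, cntr_value AS seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy';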
The performance characteristics of each database will affect what you configure.
Having said that, in these days of virtualisation I would tend to avoid multiple instances per server and simply spin up another VM for my new instance. That way I can have more control over the actual physical reservations for memory - but I still need to be aware of operations like ballooning if I over-allocate.
Related
We have a SQL Server 2012 Enterprise Edition, 128GB of RAM, Windows 2008R2. The SQL Server job runs every day at 3 AM and takes 5 hrs to load data into the database. During this process, SQL Server utilizes 123GB (max memory allocated).
After the job completes, SQL Server is not releasing the RAM.
I queried memory utilization and the buffer pool shows 97GB. Users don't access the database during this time. I restarted the SQL Server services to bring RAM usage down. I haven't found a good answer for this issue. Why is it not releasing the RAM? How can we bring RAM utilization down?
SQL Server Job -> SSIS package -> Import data from MySQL to SQL Server database
Thanks
This is by design: once SQL Server uses memory, it keeps hold of it and does not release it back to the OS.
Your Task Manager may show all/nearly all memory used by SQL Server, but if you want to see how much memory SQL Server is actually using, you can use the following query.
SELECT (physical_memory_in_use_kb/1024) AS Memory_usedby_Sqlserver_MB
FROM sys.dm_os_process_memory;
By design, SQL Server holds on to the RAM that it has allocated. Much of the RAM is used for the buffer pool. The buffer pool is a cache that holds database pages in memory for fast retrieval.
If SQL Server were to release some memory, and someone were to run a query that requests it right afterwards, the query would have to wait for expensive physical I/O to produce the data. Therefore, SQL Server tries to hold as much memory as possible (and as configured) for as long as possible.
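To see where that buffer pool memory is actually going, a rough per-database breakdown can be pulled from the buffer descriptors DMV (a sketch only; on busy instances this view can be large and slow to scan):
-- Approximate buffer pool usage per database; data pages are 8 KB each
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) * 8 / 1024 AS buffer_pool_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY buffer_pool_mb DESC;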
The relevant settings here are min server memory and max server memory. Setting max server memory carefully leaves room for other processes to run. The article quotes a complicated formula for determining how much room to leave:
From the total OS memory, reserve 1GB-4GB to the OS itself. Then subtract the equivalent of potential SQL Server memory allocations outside the max server memory control, which is comprised of stack size * calculated max worker threads + -g startup parameter (or 256MB by default if -g is not set). What remains should be the max_server_memory setting for a single instance setup.
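As a rough translation of that formula into T-SQL (a sketch only - it assumes a 64-bit instance with a 2MB thread stack, the default 256MB -g value, and a 4GB OS reservation, and it uses the physical_memory_kb column that exists on SQL Server 2012 and later; adjust the numbers for your environment):
DECLARE @os_reserve_mb int = 4096;   -- 1GB-4GB kept back for the OS (assumption)
DECLARE @stack_mb int = 2;           -- thread stack size on x64 (assumption)
DECLARE @g_mb int = 256;             -- default -g startup parameter

SELECT physical_memory_kb / 1024 AS total_mb,
       max_workers_count AS worker_threads,
       (physical_memory_kb / 1024) - @os_reserve_mb
           - (@stack_mb * max_workers_count) - @g_mb AS suggested_max_server_memory_mb
FROM sys.dm_os_sys_info;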
In our servers, we usually just wing it and set the max memory option to several GB below the total physical memory. This leaves plenty of room for the OS and other applications.
If SQL Server memory is over the min server memory, and the OS is under memory pressure, SQL Server can release memory until it is at the min server memory setting.
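You can check what the instance is currently configured for with sys.configurations:
-- Current min/max server memory settings (values are in MB)
SELECT name, value_in_use
FROM sys.configurations
WHERE name IN ('min server memory (MB)', 'max server memory (MB)');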
Reference: Memory Management Architecture Guide.
One of the primary design goals of all database software is to
minimize disk I/O because disk reads and writes are among the most
resource-intensive operations. SQL Server builds a buffer pool in
memory to hold pages read from the database. Much of the code in SQL
Server is dedicated to minimizing the number of physical reads and
writes between the disk and the buffer pool. SQL Server tries to reach
a balance between two goals:
Keep the buffer pool from becoming so big that the entire system is low on memory.
Minimize physical I/O to the database files by maximizing the size of the buffer pool.
When SQL Server is using memory dynamically, it queries the system
periodically to determine the amount of free memory. Maintaining this
free memory prevents the operating system (OS) from paging. If less
memory is free, SQL Server releases memory to the OS. If more memory
is free, SQL Server may allocate more memory. SQL Server adds memory
only when its workload requires more memory; a server at rest does not
increase the size of its virtual address space.
...
As more users connect and run queries, SQL Server acquires the
additional physical memory on demand. A SQL Server instance continues
to acquire physical memory until it either reaches its max server
memory allocation target or Windows indicates there is no longer an
excess of free memory; it frees memory when it has more than the min
server memory setting, and Windows indicates that there is a shortage
of free memory.
As other applications are started on a computer running an instance of
SQL Server, they consume memory and the amount of free physical memory
drops below the SQL Server target. The instance of SQL Server adjusts
its memory consumption. If another application is stopped and more
memory becomes available, the instance of SQL Server increases the
size of its memory allocation. SQL Server can free and acquire several
megabytes of memory each second, allowing it to quickly adjust to
memory allocation changes.
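To watch this give-and-take in practice, the instance's view of OS memory (including the high/low memory signal the quote refers to) is exposed in sys.dm_os_sys_memory on SQL Server 2008 and later - for example:
-- What the instance currently sees of OS memory
SELECT total_physical_memory_kb / 1024 AS total_mb,
       available_physical_memory_kb / 1024 AS available_mb,
       system_memory_state_desc   -- e.g. 'Available physical memory is high'
FROM sys.dm_os_sys_memory;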
If, for some reason:
You absolutely MUST have that memory back
You know you do not need it for a while
You are willing to pay a penalty for virtual memory allocation and physical I/O to retrieve data from disk the next time you need that memory
Then you can temporarily reconfigure the instance's max server memory setting to a lower value. This can be done through the SSMS user interface, or you can use sp_configure 'max server memory' followed by RECONFIGURE to make the change programmatically.
Full disclosure: I did not try it myself.
You should not try it on your production environment before testing it somewhere else.
This is from a DBA answer:
-- Enable advanced options so 'max server memory' can be changed
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
-- Lower the instance's memory ceiling (value is in MB)
sp_configure 'max server memory', 4096;
GO
RECONFIGURE;
GO
Replace 4096 with the value that you find acceptable as the minimum.
This should be followed later by a similar command to raise the limit back to your original maximum.
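For example, to restore the original ceiling afterwards (the 16384 here is only a placeholder - use whatever your original max server memory value was):
-- Raise the ceiling back to the original maximum (value is in MB)
sp_configure 'max server memory', 16384;
GO
RECONFIGURE;
GO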
I understand from a search here that SQL Server 2012 will continue to use memory until it meets the limit set for it, but the usage I see is hard to believe. The database is about 26MB at the moment, but the memory usage is over 30GB. Is this to be expected or is there some other problem lurking somewhere?
There is nothing wrong. The memory allocated to SQL Server can be changed to almost any value you'd like. I'd say 30GB is certainly overkill for a 26MB DB if it is the only DB on the instance. The memory allocated to SQL Server is used for numerous functions, e.g. sorting queries, plan caches, etc. The 30GB you're seeing means that 30GB of your system memory is reserved for SQL Server.
For a better understanding, you'll want to look into your target memory too. Target memory is how much memory is needed for SQL Server work, based on your configuration. In your case, I bet target memory equals max memory and SQL Server is trying to consume all the memory. Here is how you can check that:
SELECT *
FROM sys.dm_os_performance_counters
WHERE counter_name in ('Total Server Memory (KB)', 'Target Server Memory (KB)')
More information from Brent Ozar:
So how much memory is SQL using? I’ll make this easy for you. SQL
Server is using all of the memory. Period.
No matter how much memory you put in a system, SQL Server will use all
it can get until it’s caching entire databases in memory and then
some. This isn’t an accident, and there’s a good reason for it. SQL
Server is a database: programmers store data in SQL Server, and then
SQL Server manages writing that data to files on the hard drive.
Programmers issue SELECT statements (yes, usually SELECT *) and SQL
Server fetches the data back from the drives. The organization of
files and drives is abstracted away from the programmers.
To improve performance, SQL Server caches data in memory. SQL Server
doesn’t have a shared-disk model: only one server’s SQLserver.exe can
touch the data files at any given time. SQL Server knows that once it
reads a piece of data from the drives, that data isn’t changing unless
SQL Server itself needs to update it. Data can be read into memory
once and safely kept around forever. And I do mean forever – as long
as SQL Server’s up, it can keep that same data in memory. If you have
a server with enough memory to cache the entire database, SQL Server
will do just that.
Why Doesn't SQL Server Release Memory?
Memory makes up for a lot of database sins like:
Slow, cheap storage (like SATA hard drives and 1Gb iSCSI)
Programs that needlessly retrieve too much data
Databases that don’t have good indexes
CPUs that can’t build query plans fast enough
Throw enough memory at these problems and they go away, so SQL Server
wants to use all the memory it can get. It also assumes that more
queries could come in at any moment, so it never lets go or releases
memory unless the server comes under memory pressure (like if other
apps need memory and Windows sends out a memory pressure
notification).
By default, SQL Server assumes that its server exists for the sole
purpose of hosting databases, so the default setting for memory is an
unlimited maximum. (There are some version/edition restrictions, but
let’s keep things simple for now.) This is a good thing; it means the
default setting is covering up for sins. To find out if the server’s
memory is effectively covering up sins, we have to do some
investigation.
Docs on SQL Server memory configuration: https://learn.microsoft.com/en-us/sql/database-engine/configure-windows/server-memory-server-configuration-options
Here is how you can set a fixed amount for Min/Max memory on SQL Server: https://technet.microsoft.com/en-us/library/ms191144(v=sql.105).aspx
This is going to be my first attempt at fine tuning our SQL Server 2008R2, and I'd like a starting point based on the following.
When I view the resource monitor, I see (in KB):
Commit: 843,948
Working Set: 718,648
Shareable: 26,276
Private: 692,372
Out of 2 gigs available on our virtual server, 1.6 is getting used up, and I suspect it is due to SQL Server; the memory gets chewed up when I initiate a service that does a bunch of TVP inserts and checks. I already added some GC.Collect() calls in my C# service, but I'm not really seeing much of a change, which leads me back to SQL Server.
Where would be a good starting point for me to learn more about optimizing based on this information, and some quick pointers?
Thanks.
Here is a quick pointer: buy more memory. 2GB is nothing today.
For the long answer: you need to understand how SQL Server allocates and uses memory. 1.6GB on a 2GB box is perfectly normal. See Dynamic Memory Management:
When SQL Server starts, it computes the size of virtual address space
for the buffer pool based on a number of parameters such as amount of
physical memory on the system, number of server threads and various
startup parameters. SQL Server reserves the computed amount of its
process virtual address space for the buffer pool, but it acquires
(commits) only the required amount of physical memory for the current
load.
The instance then continues to acquire memory as needed to support the
workload. As more users connect and run queries, SQL Server acquires
the additional physical memory on demand. A SQL Server instance
continues to acquire physical memory until it either reaches its max
server memory allocation target or Windows indicates there is no
longer an excess of free memory; it frees memory when it has more than
the min server memory setting, and Windows indicates that there is a
shortage of free memory.
In other words, SQL Server will not release the 1.6GB unless there is a memory pressure notification from Windows.
And finally, about your question on where to look for info on optimizations: Waits and Queues is an excellent resource. It is a methodology that allows you to identify the bottlenecks and recommends the appropriate action for all common bottleneck cases.
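If you want a quick starting point for the waits-and-queues approach, the cumulative wait statistics since the last restart are in sys.dm_os_wait_stats (a sketch; the excluded wait types below are just a few common idle waits, not an exhaustive list):
-- Top waits since the last restart
SELECT TOP (10) wait_type,
       waiting_tasks_count,
       wait_time_ms / 1000.0 AS wait_s,
       signal_wait_time_ms / 1000.0 AS signal_wait_s
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'BROKER_TASK_STOP', 'XE_TIMER_EVENT')
ORDER BY wait_time_ms DESC;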
SQL Server is designed to pre-allocate and "eat up" all the memory you let it use. There really is no way to see any improvement except to reduce the SQL footprint in the configuration.
If it's the default configuration, SQL Server will analyse usage and then grab as much memory as it can to optimise itself. If other apps then start asking for memory, it gives memory back.
There are a couple of values you can play with in terms of memory: a minimum, which is the amount SQL Server will keep for itself, and a maximum, which it will never grab more than. You can also play around with the number of threads it will run. You'll need some good stats for this. It depends on your usage patterns, what else needs memory, and how well it plays with others. Mess about and you can starve SQL Server, which is never a brilliant idea. I've always been a big fan of dedicated machines for a DBMS for any non-trivial use.
This is as much art as science. Unless you find something horrible in there, slowing down your SQL Server will just give your applications lots of memory to do nothing with, because they will be waiting for the DB...
More stuff to have a look at.
MSDN - Sql Server Performance and Memory
You need to get performance monitoring going and have a good idea of what sort of things are going on during the run. And you want an 'average' run - peak hits, out-of-hours processing, holidays etc. are no use at all.
PS: don't forget that performance monitoring is itself a significant hit on the machine.
I need some help very badly.
I'm working on a project where bulk data is entered all the time. It's reporting software.
On average, 10 million records are stored per day, and this could keep increasing as users increase.
As of now, SQL Server consumes 5GB of RAM according to Task Manager. I have 8GB of RAM on my server now.
How do other enterprises manage such situations?
SQL Server uses memory efficiently and takes as much as it can. It's also usually clever enough to release memory when needed.
Using 5GB means:
SQL Server is configured to 5GB or SQL Server has simply reserved this memory during normal usage
It's left 3GB because it doesn't need to use it
Nothing is wrong... and I'd probably configure the SQL Server max mem to 6.5GB...
Late addition: Jonathan Kehayias blog entry
SQL Server typically uses as much memory as it can get its hands on, because it then stores the more frequently accessed data in memory to be more efficient, as disk access is slower than memory access.
So nothing is wrong with it using 5GB of memory.
To be honest, it's leaving 3GB of memory for other applications and the operating system, so there might not be anything wrong with this (if this is all that server is designed to do).
To configure the memory limit, do the following:
In SQL Server Enterprise Manager, right-click on the server name and go to Properties.
Click on the Memory option
Reduce the maximum server memory to what you think is appropriate.
Click ok.
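If you prefer to script it rather than click through the UI, the same change can be made with sp_configure (the 6656 below is just an example matching the ~6.5GB suggested above; pick the value that suits your server):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory', 6656;   -- value is in MB
RECONFIGURE;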
I highly doubt that this is in fact a memory leak. The increase of SQL Server's memory usage is by design, simply because it caches a lot of stuff (queries, procedures).
What you will most likely see is that if the memory that is still available runs low, SQL Server will 'flush' its memory, and you will see that memory is in fact freed in the end.
I have a development VM which is running SQL Server as well as some other apps for my stack, and I found that the other apps are performing awfully. After doing some digging, SQL Server was hogging the memory. After a quick web search I discovered that by default it will consume as much memory as it can in order to cache data and give it back to the system as other apps request it, but this process often doesn't happen fast enough; apparently my situation is a common problem.
There is, however, a way to limit the memory SQL Server is allowed to have. My question is, how should I set this limit? Obviously I'm going to need to do some guess and check, but is there an absolute minimum threshold? Any recommendations are appreciated.
Edit:
I'll note that our developer machines have 2 gigs of memory, so I'd like to be able to run the VM on 768 MB or less if possible. This VM will only be used for local dev and testing, so the load will be very minimal. After code has been tested locally, it goes to another environment where the SQL Server box is dedicated. What I'm really looking for here is recommendations on minimums.
Extracted from the SQL Server documentation:
Maximum server memory (in MB)
Specifies the maximum amount of memory
SQL Server can allocate when it starts
and while it runs. This configuration
option can be set to a specific value
if you know there are multiple
applications running at the same time
as SQL Server and you want to
guarantee that these applications have
sufficient memory to run. If these
other applications, such as Web or
e-mail servers, request memory only as
needed, then do not set the option,
because SQL Server will release memory
to them as needed. However,
applications often use whatever memory
is available when they start and do
not request more if needed. If an
application that behaves in this
manner runs on the same computer at
the same time as SQL Server, set the
option to a value that guarantees that
the memory required by the application
is not allocated by SQL Server.
The recommendation on minimum is: no such thing. The more memory the better. SQL Server needs as much memory as it can get or it will thrash your I/O.
Stop SQL Server. Run your other applications and take note of the amount of memory they need. Subtract that from your total available RAM, and use that number for the max server memory setting in SQL Server.
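As a rough worked example with the numbers from this question (purely illustrative): on a 2GB VM where the other apps and the OS together need about 1.25GB, that leaves roughly 768MB for SQL Server, which you could then apply like this:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory', 768;   -- value is in MB; 768 is just this example's number
RECONFIGURE;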
Since this is a development environment, I agree with Greg, just use trial and error. It's not that crucial to get it perfectly right.
But if you do a lot of work in the VM, why not give it at least half of the 2GB?
so I'd like to be able to run the VM on 768 MB or less if possible.
That will depend on your data and the size of your database, but I usually like to give SQL Server at least a GB.
It really depends on what else is going on on the machine. Get things running under a typical load and have a look at Task Manager to see what you need for everything else. Try that number to start with.
For production machines, of course, it is best to give control of the machine to SQL Server (Processors -> Boost SQL Server Priority) and let it have all the RAM it wants.
Since you are using VMs, maybe you could create a dedicated one just for SQL Server and run everything else on a different VM.