This is going to be my first attempt at fine-tuning our SQL Server 2008 R2 instance, and I'd like a starting point based on the following.
When I view the resource monitor, I see (in KB):
Commit: 843,948
Working Set: 718,648
Shareable: 26,276
Private: 692,372
Out of the 2 GB available on our virtual server, 1.6 GB is getting used up, and I suspect it is due to SQL Server; the memory gets chewed up when I initiate a service that does a bunch of TVP inserts and checks. I already added some GC.Collect() calls in my C# service, but I'm not really seeing much of a change, which leads me back to SQL Server.
Where would be a good starting point for me to learn more about optimizing based on this information, and some quick pointers?
Thanks.
Here is a quick pointer: buy more memory. 2 GB is nothing today.
For the long answer: you need to understand how SQL Server allocates and uses memory. 1.6 GB in use on a 2 GB box is perfectly normal. See Dynamic Memory Management:
When SQL Server starts, it computes the size of virtual address space
for the buffer pool based on a number of parameters such as amount of
physical memory on the system, number of server threads and various
startup parameters. SQL Server reserves the computed amount of its
process virtual address space for the buffer pool, but it acquires
(commits) only the required amount of physical memory for the current
load.
The instance then continues to acquire memory as needed to support the
workload. As more users connect and run queries, SQL Server acquires
the additional physical memory on demand. A SQL Server instance
continues to acquire physical memory until it either reaches its max
server memory allocation target or Windows indicates there is no
longer an excess of free memory; it frees memory when it has more than
the min server memory setting, and Windows indicates that there is a
shortage of free memory.
In other words, SQL Server will not release the 1.6 GB unless there is a memory pressure notification from Windows.
And finally, about your question on where to look for info on optimizations: Waits and Queues is an excellent resource. It is a methodology that allows you to identify the bottlenecks and recommends the appropriate action for all common bottleneck cases.
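A reasonable first pass at that methodology, sketched here as my own assumption rather than part of the original answer, is to ask the server which wait types it has spent the most time on; the list of benign waits excluded below is illustrative, not exhaustive:
-- Top waits since the last restart (or since wait stats were cleared).
-- The excluded wait types are a short, illustrative list of idle/benign waits.
SELECT TOP (10)
    wait_type,
    waiting_tasks_count,
    wait_time_ms,
    signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (
    'LAZYWRITER_SLEEP', 'SLEEP_TASK', 'BROKER_TASK_STOP',
    'SQLTRACE_BUFFER_FLUSH', 'WAITFOR', 'CHECKPOINT_QUEUE',
    'REQUEST_FOR_DEADLOCK_SEARCH', 'XE_TIMER_EVENT', 'XE_DISPATCHER_WAIT')
ORDER BY wait_time_ms DESC;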
SQL Server is designed to pre-allocate and "eat up" all the memory you let it use. There really is no way to see any improvement except to reduce the SQL footprint in the configuration.
With the default configuration, SQL Server will analyse usage and then grab as much memory as it can to optimise itself. If other apps then start asking for memory, it gives it back.
There are a couple of values you can play with in terms of memory: a minimum, which is the amount SQL Server will keep for itself, and a maximum, which it will never grab more than. You can also play around with the number of threads it will run. You'll need some good stats for this. It depends on your usage patterns, what else needs memory, and how well it plays with others. Get it wrong and you can starve SQL Server, which is never a good idea. I've always been a big fan of dedicated machines for a DBMS for any non-trivial use.
This is as much art as science: unless you find something horrible in there, slowing down your SQL Server will just give your applications lots of memory to do nothing with, because they'll be waiting for the DB...
More stuff to have a look at.
MSDN - SQL Server Performance and Memory
You need to get performance monitoring going and have a good idea of what sort of things are going on during the run. And you want an 'average' run: peak hits, out-of-office-hours processing, holidays, etc. are no use at all.
PS: don't forget that performance monitoring is itself a significant hit on the machine.
I understand from a search here that SQL Server 2012 will continue to use memory until it meets the limit set for it, but the usage I see is hard to believe. The database is about 26MB at the moment, but the memory usage is over 30GB. Is this to be expected or is there some other problem lurking somewhere?
There is nothing wrong. The memory allocated to SQL Server can be changed to almost any value you'd like. I'd say 30GB is certainly overkill for a 26MB DB if it is the only DB on the instance. The memory allocated to SQL Server is used for numerous functions, e.g. sorting queries, plan caches, etc. The 30GB you're seeing means that 30GB of your system memory is reserved for SQL Server.
For a better understanding, you'll want to look into your target memory too. Target memory is how much memory is needed for SQL Server work, based on your configuration. In your case, I bet target memory equals max memory and SQL Server is trying to consume all the memory. Here is how you can check that:
SELECT *
FROM sys.dm_os_performance_counters
WHERE counter_name in ('Total Server Memory (KB)', 'Target Server Memory (KB)')
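If you also want to see how much physical memory the SQL Server process itself is holding, the following is a small companion check I'd add (my own assumption, not part of the original answer; sys.dm_os_process_memory is available from SQL Server 2008 onwards):
-- Physical memory currently held by the SQL Server process
SELECT physical_memory_in_use_kb,
       memory_utilization_percentage,
       process_physical_memory_low
FROM sys.dm_os_process_memory;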
More information from Brent Ozar:
So how much memory is SQL using? I’ll make this easy for you. SQL
Server is using all of the memory. Period.
No matter how much memory you put in a system, SQL Server will use all
it can get until it’s caching entire databases in memory and then
some. This isn’t an accident, and there’s a good reason for it. SQL
Server is a database: programmers store data in SQL Server, and then
SQL Server manages writing that data to files on the hard drive.
Programmers issue SELECT statements (yes, usually SELECT *) and SQL
Server fetches the data back from the drives. The organization of
files and drives is abstracted away from the programmers.
To improve performance, SQL Server caches data in memory. SQL Server
doesn’t have a shared-disk model: only one server’s SQLserver.exe can
touch the data files at any given time. SQL Server knows that once it
reads a piece of data from the drives, that data isn’t changing unless
SQL Server itself needs to update it. Data can be read into memory
once and safely kept around forever. And I do mean forever – as long
as SQL Server’s up, it can keep that same data in memory. If you have
a server with enough memory to cache the entire database, SQL Server
will do just that.
Why Doesn't SQL Server Release Memory?
Memory makes up for a lot of database sins like:
Slow, cheap storage (like SATA hard drives and 1Gb iSCSI)
Programs that needlessly retrieve too much data
Databases that don’t have good indexes
CPUs that can’t build query plans fast enough
Throw enough memory at these problems and they go away, so SQL Server
wants to use all the memory it can get. It also assumes that more
queries could come in at any moment, so it never lets go or releases
memory unless the server comes under memory pressure (like if other
apps need memory and Windows sends out a memory pressure
notification).
By default, SQL Server assumes that its server exists for the sole
purpose of hosting databases, so the default setting for memory is an
unlimited maximum. (There are some version/edition restrictions, but
let’s keep things simple for now.) This is a good thing; it means the
default setting is covering up for sins. To find out if the server’s
memory is effectively covering up sins, we have to do some
investigation.
Docs on SQL Server memory configuration: https://learn.microsoft.com/en-us/sql/database-engine/configure-windows/server-memory-server-configuration-options
Here is how you can set a fixed amount for Min/Max memory on SQL Server: https://technet.microsoft.com/en-us/library/ms191144(v=sql.105).aspx
I just started at a new office as a Data Analyst. The job entails upgrading client systems from our dBase platform to our new RDBMS. The actual conversion is handled by some in-house software that is a black box to me, but at the end of the conversion my system memory usage is maxed out at ~15.3 of 16 GB. I was told to just restart my computer, but it seems like there must be a better way (hopefully one that doesn't involve fixing the software, since that is out of scope for me).
I found the question at the link below but running DBCC DROPCLEANBUFFERS doesn't seem to work. Restarting the SQL instance works but that interrupts all the databases on the instance. Is there another way to release the memory?
SQL Server clear memory
We use both SSMS 2008 R2 and SSMS 2012.
Thanks
SQL Server will grab and keep all memory that you let it. If you don't want it to use 15.3 GB of memory, you need to change the setting so it only grabs X GB.
You can do this by right-clicking the instance in Object Explorer, clicking Server Properties, and changing the Maximum server memory under the Memory tab. It is generally a good idea to leave at least 1-2 GB for the operating system, and more if you have anything else running on the server (you should avoid running other stuff on the server, if possible).
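If you'd rather script it than use the GUI, the same setting can be changed with sp_configure; the 12288 MB (12 GB) value below is only an illustrative assumption for a 16 GB box, not a recommendation from this answer:
-- Example only: cap SQL Server at 12 GB on a 16 GB box (adjust to your workload)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 12288;
RECONFIGURE;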
SQL by design wants to use as much memory as it can get. If you wish to limit that, you can do so from the server properties and limit it how many GBs its allowed to consume.
DBCC DROPCLEANBUFFERS clears all the clean buffers from the buffer pool. That won't help you.
You need to understand that SSMS isn't eating your memory; the SQL Server service is. SQL Server will expand its memory footprint as much as you allow it to. If you have 32 GB and you say it can use 32 GB, then it will allocate as much as it can.
Here's a great article on DBA stack exchange regarding max server memory: https://dba.stackexchange.com/questions/47431/why-is-sql-server-consuming-more-server-memory
I've read that it's unwise to install SQL Server and IIS on the same machine, but I haven't seen any evidence for that. Has anybody tried this, and if so, what were the results? At what point is it necessary to separate them? Is any tuning necessary? I'm concerned specifically with IIS7 and SQL Server 2008.
If somebody can provide numbers showing when it makes more sense to go to two machines, that would be most helpful.
It is unwise to run SQL Server alongside any other product, including another instance of SQL Server. The reason for this recommendation is the nature of how SQL Server uses OS resources. SQL Server runs on a user-mode memory management and processor scheduling infrastructure called SQLOS. SQL Server is designed to run at peak performance and assumes it is the only server on the OS. As such, SQLOS reserves all the RAM on the machine for the SQL Server process, creates a scheduler for each CPU core, and allocates tasks to all schedulers to run, using all the CPU it can get when it needs it. Because SQL Server reserves all the memory, other processes that need memory will cause SQL Server to see memory pressure, and the response to memory pressure is to evict pages from the buffer pool and compiled plans from the plan cache. And since SQL Server is the only server that actually leverages the memory notification API (there are rumors that the next Exchange will too), SQL Server is the only process that actually shrinks to give room to other processes (like leaky, buggy ASP pools). This behavior is also explained in BOL: Dynamic Memory Management.
A similar pattern happens with CPU scheduling, where other processes steal CPU time from the SQL Server schedulers. On high-end systems and on Opteron machines things get worse, because SQL Server uses NUMA locality to full advantage, but other processes are usually not aware of NUMA and, as much as the OS may try to preserve locality of allocations, they end up allocating all over the physical RAM and reduce the overall throughput of the system as the CPUs idle waiting for cross-NUMA-boundary page access. There are other things to consider too, like increases in TLB and L2 misses due to other processes taking up CPU cycles.
So to sum up, you can run other servers with SQL Server, but it is not recommended. If you must, then make sure you isolate the two servers as best you can. Use CPU affinity masks for both SQL Server and IIS/ASP to isolate the two on separate cores, configure SQL Server to reserve less RAM so that it leaves free memory for IIS/ASP, and configure your app pools to recycle aggressively to prevent application pool growth.
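As a hedged sketch of the affinity part of that advice (the core count and mask value are made up for illustration), on SQL Server 2008 the CPU affinity can be set with sp_configure, and the IIS application pools can then be pinned to the remaining cores through processor affinity in the IIS configuration:
-- Illustration only: pin SQL Server to cores 0-3 of an 8-core box (bitmask 15 = 0x0F)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'affinity mask', 15;
RECONFIGURE;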
Yes, it is possible and many do it.
It tends to be a question of security and/or performance.
Security is questioned as your attack surface is increased on a box that has both. Perhaps not an issue for you.
Performance is questioned as now your server is serving web and DB requests. Again, perhaps not an issue in your case.
Test vs. Production....
Many may feel fine in test environments but not production....
Again, your team's call. I like my test and production environments to be as similar as possible, but that's my preference.
It's possible, yes.
A good idea for a production environment, no.
The problem that you're going to run into is that a SQL Server database under substantial load is, more than likely, going to be doing heavy disk I/O and have a large memory footprint. That combination is going to tie up the machine, and you're going to see a performance hit in IIS as it tries to serve up pages.
It's unwise in certain contexts... totally wise in others.
If your machine is underutilized and won't experience heavy loads, then there is an advantage to installing the database on the same machine, because you simply won't have to transfer anything across the network.
On the other hand, if one or both of IIS or the database will be under heavy load, they will likely start to interfere, and the performance gain of dedicated hardware for each will probably outstrip the loss of having to go over the network.
Don't forget the maintenance issue... you can't reboot/patch one without taking down the other. If they are on two boxes, you can give your users a better experience than no response at all from the web server while you are maintaining the SQL box.
Not highest on the list, but should be noted.
You certainly can. You will run into performance issues if, for example, you have a large user base or there are a lot of heavy queries being run against the DB. I have worked on several sites, usually hosted at 1and1, that run IIS and SQL Server (Express!) on the same box with thousands of users (hundreds concurrent) and millions of records in poorly designed tables, accessed via poorly written stored procedures, and the user experience was certainly tolerable. It all comes down to how hard you plan on hitting the server.
I need some help very badly.
I'm working on a project where a large volume of data is entered all the time. It's reporting software.
On average, 10 million records are stored per day, and that could keep increasing as users increase.
As of now, SQL Server consumes 5 GB of RAM according to Task Manager. I have 8 GB of RAM on my server.
How do other enterprises manage such situations?
SQL Server uses memory efficiently and takes as much as it can. It's also usually clever enough to release memory when needed.
Using 5 GB means:
SQL Server is configured to use 5 GB, or SQL Server has simply reserved this memory during normal usage
It has left 3 GB free because it doesn't need to use it
Nothing is wrong... and I'd probably configure the SQL Server max memory to 6.5 GB...
Late addition: Jonathan Kehayias blog entry
SQL Server typically uses as much memory as it can get its hands on, because it stores the more frequently accessed data in memory to be more efficient, since disk access is slower than memory access.
So nothing is wrong with it using 5 GB of memory.
To be honest, it's leaving 3 GB of memory for other applications and the operating system, so there might not be anything wrong with this (if this is all that the server is designed to do).
To configure the memory limit, do the following:
In SQL Server Enterprise Manager, right-click on the server name and go to Properties.
Click on the Memory option
Reduce the maximum server memory to what you think is appropriate.
Click ok.
I highly doubt that this is in fact a memory leak. The increase of SQL Server's memory usage is by design, simply because it caches a lot of stuff (queries, procedures).
What you will most likely see is that if the remaining available memory runs low, SQL Server will 'flush' its memory, and you will in fact see memory being freed in the end.
I'm managing a co-located Windows Server 2008 box running both IIS and SQL Server 2008 Express. I just happened to glance at Task Manager's performance tab and found the memory usage history graph close to topped out, reading 1.8 GB (I have 2 GB of physical RAM). The Processes tab shows sqlservr.exe running at 940,000 K - by far the largest consumer.
It's a low-volume site - CPU utilization barely registers. I haven't had any stability issues with the server at all. Is this just how SQL Server treats available memory, or should I be digging deeper?
thx
SQL Server manages its own memory pool. It will release memory back to the OS under memory pressure.
So, yes, this is normal behaviour and nothing to be concerned about.
Note: I should mention an exception to this: if T-SQL scripts are using sp_OACreate to create COM objects and not releasing the objects with a corresponding sp_OADestroy (say, for instance, an error occurs and the script terminates prematurely), then memory can leak. Use of these stored procedures is not that common (many DBAs won't allow this feature to be turned on, for good reason). I believe this is also the case for cursors that are not deallocated.
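As a hedged sketch of the leak-safe pattern (the ProgID and error handling here are illustrative, not from the answer above), the key point is that sp_OADestroy runs on every code path, including the error path:
-- Requires the 'Ole Automation Procedures' option to be enabled.
DECLARE @obj INT, @hr INT;
EXEC @hr = sp_OACreate 'Scripting.FileSystemObject', @obj OUT;
IF @hr = 0
BEGIN
    BEGIN TRY
        -- ... use the object via sp_OAMethod / sp_OAGetProperty ...
        PRINT 'object created';
    END TRY
    BEGIN CATCH
        PRINT 'work failed, but the object is still destroyed below';
    END CATCH;
    EXEC sp_OADestroy @obj;  -- release the COM object in all cases
END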
Unless you have configured it otherwise, this is normal behavior. Read this article for a clear understanding of memory configuration and recommendations.
In your case, I assume you have the default settings and what you see is normal and there is no cause for concern.
Raj
Keep in mind that SQL Server Express probably manages memory differently from the standard editions (that is, any non-Express edition).
I can't provide specific links, but from personal experience things change a lot when you move to a full edition of SQL Server (speed, memory management, responsiveness under extreme conditions, etc.).
If anybody knows more, please add to my answer.
SQL Server will use something in the area of 90% of a machine's memory if it isn't capped. This is completely normal as SQL Server is managing memory for itself and will release memory as necessary.
If you are worried about it, then you can cap the amount of memory SQL Server can use by going to the properties of the SQL Express instance, selecting the Memory page, and then reducing the maximum server memory.