We have an Oracle DB (19c) installed on an Oracle Linux 8 machine. When we connect our servers to the 19c DB, the memory of the DB machine starts growing. As long as the server handles load and performs DB operations, memory keeps growing, and after a few hours all the memory is used up and there is no more free memory available. We have short PL/SQL statements and stored procedures which get executed as different CRUD, commit, and rollback operations are performed. We did some research and found this command to free up memory, but it doesn't work:
sync; echo 3 > /proc/sys/vm/drop_caches
Note: We found the same behaviour on an 11g database, so it's not a database-version-specific problem; we also changed the DB machine, but the issue persisted.
A screenshot of our machine with fully used memory is attached. Any help will be highly appreciated.
Make sure you are closing all of your database sessions, as well as cursors. All of those processes you showed appear to be dedicated server processes, opened for each login to the database. That's why your memory goes up as soon as you start using it: every login requires its own dedicated memory, in addition to the memory used by the background server processes.
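If you want to check whether sessions are piling up, here is a minimal sketch of a query against v$session (assuming you have privileges on the v$ views; the grouping columns are just one reasonable choice):

    -- Count dedicated user sessions per client machine and program,
    -- to spot connections that are opened but never closed
    SELECT machine, program, COUNT(*) AS session_count
    FROM   v$session
    WHERE  type = 'USER'
    GROUP  BY machine, program
    ORDER  BY session_count DESC;

A steadily climbing count for one program is a strong hint that connections are being opened without ever being closed.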
See here for information on tuning Oracle memory usage.
Also note that your memory is not exactly 100% used: 29.9% is recently used or reserved in buffers and ready to be made available if there is a need. If it were actually used, then the used value would be 100% and your swap usage would likely be much higher.
It's hard to say whether you really have a memory leak or not. Most likely not. Oracle spawns a dedicated process for each connection, and that process attaches to shared memory regions. That's why you see such huge memory allocations. Try a tool like ps_mem.py, which handles shared memory segments better.
Oracle DB has two memory-related parameters, SGA and PGA; normally the DB should not use more RAM than the sum of these two.
Just execute vmstat and check whether your server is actually swapping memory or not.
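For reference, a quick sketch of how you might check those two limits from SQL*Plus (assuming you can query v$parameter; pga_aggregate_limit exists on 12c and later):

    -- Show the configured SGA and PGA limits
    SELECT name, display_value
    FROM   v$parameter
    WHERE  name IN ('sga_max_size', 'sga_target',
                    'pga_aggregate_target', 'pga_aggregate_limit');

If resident memory use stays within the sum of these values and vmstat shows no swapping, what you are seeing is almost certainly normal caching, not a leak.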
I just started at a new office as a Data Analyst. The job entails upgrading client systems from our dbase platform to our new RDBMS. The actual conversion is handled by some in-house software that is a black box to me but at the end of the conversion my system memory usage is maxed out ~15.3 of 16GB. I was told to just restart my computer but it seems like there must be a better way (hopefully that doesn't involve fixing the software since that is out of scope for me).
I found the question at the link below but running DBCC DROPCLEANBUFFERS doesn't seem to work. Restarting the SQL instance works but that interrupts all the databases on the instance. Is there another way to release the memory?
SQL Server clear memory
We use both SSMS 2008 R2 and SSMS 2012.
Thanks
SQL Server will grab and keep all memory that you let it. If you don't want it to use 15.3 GB of memory, you need to change the setting so it only grabs X GB.
You can do this by right-clicking the instance in Object Explorer, clicking Server Properties, and changing the Maximum server memory on the Memory page. It is generally a good idea to leave at least 1-2 GB for the operating system, and more if you have anything else running on the server (you should avoid running other stuff on the server, if possible).
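If you prefer to script it rather than click through the GUI, a minimal sketch using sp_configure (4096 MB is just an example value; size it to your own server):

    -- 'max server memory' is an advanced option, so expose it first
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Cap the instance at 4096 MB (example value only)
    EXEC sp_configure 'max server memory (MB)', 4096;
    RECONFIGURE;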
SQL Server by design wants to use as much memory as it can get. If you wish to limit that, you can do so from the server properties and limit how many GB it's allowed to consume.
DBCC DROPCLEANBUFFERS clears all the clean buffers from the buffer pool. That won't help you.
You need to understand that SSMS isn't eating your memory; the SQL Server service is. SQL Server will expand its memory footprint as much as you allow it. If you have 32 GB and you say it can use 32 GB, then it will allocate as much as it can.
Here's a great article on DBA stack exchange regarding max server memory: https://dba.stackexchange.com/questions/47431/why-is-sql-server-consuming-more-server-memory
This is going to be my first attempt at fine-tuning our SQL Server 2008 R2, and I'd like a starting point based on the following.
When I view the resource monitor, I see (in KB):
Commit: 843,948
Working Set: 718,648
Shareable: 26,276
Private: 692,372
Out of the 2 GB available on our virtual server, 1.6 GB is getting used up, and I suspect it is due to SQL Server; the memory gets chewed up when I initiate a service that does a bunch of TVP inserts and checks. I already added some GC.Collect() calls in my C# service, but I'm not really seeing much of a change, which leads me back to SQL Server.
Where would be a good starting point for me to learn more about optimizing based on this information, and some quick pointers?
Thanks.
Here is a quick pointer: buy more memory. 2 GB is nothing today.
For the long answer: you need to understand how SQL Server allocates and uses memory. 1.6 GB on a 2 GB box is perfectly normal. See Dynamic Memory Management:
When SQL Server starts, it computes the size of virtual address space for the buffer pool based on a number of parameters such as amount of physical memory on the system, number of server threads and various startup parameters. SQL Server reserves the computed amount of its process virtual address space for the buffer pool, but it acquires (commits) only the required amount of physical memory for the current load.
The instance then continues to acquire memory as needed to support the workload. As more users connect and run queries, SQL Server acquires the additional physical memory on demand. A SQL Server instance continues to acquire physical memory until it either reaches its max server memory allocation target or Windows indicates there is no longer an excess of free memory; it frees memory when it has more than the min server memory setting, and Windows indicates that there is a shortage of free memory.
In other words, SQL Server will not release the 1.6 GB unless there is a memory pressure notification from Windows.
And finally, about your question on where to look for info on optimizations: Waits and Queues is an excellent resource. It is a methodology that allows you to identify the bottlenecks and recommends the appropriate action for all common bottleneck cases.
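As one possible starting point for the Waits and Queues approach, a sketch that lists the biggest waits on the instance (assumes VIEW SERVER STATE permission; the excluded idle waits are just a small sample, not a complete filter):

    -- Top 10 wait types by accumulated wait time since the last restart
    SELECT TOP 10
           wait_type,
           wait_time_ms,
           waiting_tasks_count
    FROM   sys.dm_os_wait_stats
    WHERE  wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP',
                             'SQLTRACE_BUFFER_FLUSH', 'BROKER_TASK_STOP')
    ORDER  BY wait_time_ms DESC;

Whatever dominates that list (PAGEIOLATCH, WRITELOG, locking waits, and so on) points you at the actual bottleneck to tune.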
SQL Server is designed to pre-allocate and "eat up" all the memory you let it use. There really is no way to see any improvement except to reduce the SQL footprint in the configuration.
With the default configuration, SQL Server will analyse usage and then grab as much memory as it can to optimise itself. If other apps then start asking for memory, it gives it back.
There are a couple of values you can play with in terms of memory: a minimum, which is the amount SQL Server will keep to itself, and a maximum, which it will never grab more than (see the sketch below). You can also play around with the number of threads it will run. You'll need some good stats for this; it depends on your usage patterns, what else needs memory, and how well it plays with others. Mess about and you can starve SQL Server, which is never a brilliant idea. I've always been a big fan of dedicated machines for a DBMS for any non-trivial use.
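Here is a minimal sketch of setting both bounds with sp_configure (the values are examples only, and 'show advanced options' must already be enabled as shown earlier):

    -- Floor the buffer pool at 1 GB and cap it at 12 GB (example values)
    EXEC sp_configure 'min server memory (MB)', 1024;
    EXEC sp_configure 'max server memory (MB)', 12288;
    RECONFIGURE;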
This is as much art as science. Unless you find something horrible in there, slowing down your SQL Server will just give your applications lots of memory to do nothing with, because they are waiting for the DB....
More stuff to have a look at.
MSDN - Sql Server Performance and Memory
You need to get performance monitoring going and have a good idea of what sort of things are going on during the run. And you want an 'average' run: peak hits, out-of-office-hours processing, holidays, etc. are no use at all.
PS: don't forget that performance monitoring is itself a significant hit on the machine.
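To get a rough baseline without firing up Performance Monitor, one sketch that reads SQL Server's own counters (the object_name filter assumes a default instance):

    -- Sample two memory health counters from inside SQL Server
    SELECT counter_name, cntr_value
    FROM   sys.dm_os_performance_counters
    WHERE  object_name LIKE '%Buffer Manager%'
      AND  counter_name IN ('Page life expectancy',
                            'Buffer cache hit ratio');

A low, steadily falling page life expectancy during your 'average' run is the classic sign of memory pressure on the buffer pool.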
We have an 8-core, 16 GB RAM server running SQL Server 2008. When we perform large queries on millions of rows, RAM usage goes up to 15.7 GB, and then even file browsing, opening Excel, etc. gets really slow.
So does SQL Server really release memory when another process needs it or am I having another issue? We don't have any other major programs running on this server.
We've set a max memory usage of 14GB for SQL Server.
Thanks all for any enlightenment or troubleshooting ideas.
Yes, it does. See SQLOS's memory manager: responding to memory pressure for details on how this works. But what exactly counts as 'memory pressure' varies from machine to machine and from OS version to OS version; see Q & A: Does SQL Server always respond to memory pressure?. If you want to reserve more memory for applications (I won't even bother asking why you browse files and use Excel on a machine dedicated to SQL Server...), then you should lower the max server memory until it leaves enough for your entertainment.
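If you want to see where that memory actually sits before lowering max server memory, a hedged sketch over the memory clerks (these are the SQL Server 2008 column names; on 2012 and later use pages_kb instead of the two page columns):

    -- Top memory consumers by clerk type (SQL Server 2008 column names)
    SELECT TOP 10
           type,
           SUM(single_pages_kb + multi_pages_kb) AS total_pages_kb
    FROM   sys.dm_os_memory_clerks
    GROUP  BY type
    ORDER  BY total_pages_kb DESC;

On a healthy instance the buffer pool clerk dominates, which is exactly the cache that shrinks when Windows signals pressure.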
SQL Server does NOT release memory. It takes all the memory it can get, up to the max server memory setting, and it stays there.
I need some help very badly.
I'm working on a project where a bulk of data is entered all the time. It's reporting software.
On average, 10 million records are stored per day, and that could keep increasing as users increase.
As of now, SQL Server consumes 5 GB of RAM according to Task Manager. I have 8 GB of RAM on my server now.
How do other enterprises manage such situations?
SQL Server uses memory efficiently and takes as much as it can. It's also usually clever enough to release memory when needed.
Using 5 GB means:
SQL Server is configured to 5 GB, or SQL Server has simply reserved this memory during normal usage
It's left 3 GB alone because it doesn't need to use it
Nothing is wrong... and I'd probably configure the SQL Server max memory to 6.5 GB...
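To confirm what SQL Server itself reports, rather than trusting Task Manager, a small sketch (sys.dm_os_process_memory needs SQL Server 2008 or later and VIEW SERVER STATE):

    -- Physical memory the SQL Server process is actually holding
    SELECT physical_memory_in_use_kb / 1024 AS physical_memory_mb,
           memory_utilization_percentage
    FROM   sys.dm_os_process_memory;

Task Manager can under-report when locked pages are in use, so this view is the more trustworthy number.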
Late addition: Jonathan Kehayias blog entry
SQL Server typically uses as much memory as it can get its hands on, because it stores the more frequently accessed data in memory to be more efficient, as disk access is slower than memory access.
So nothing is wrong with it using 5 GB of memory.
To be honest, it's leaving 3 GB of memory for other applications and the operating system, so there might not be anything wrong with this (if this is all that server is designed to do).
To configure the memory limit, do the following:
In SQL Server Enterprise Manager, right-click on the server name and go to Properties.
Click on the Memory option.
Reduce the maximum server memory to what you think is appropriate.
Click OK.
I highly doubt that this is in fact a memory leak. The increase of SQL Server's memory usage is by design, simply because it caches a lot of stuff (queries, procedures).
What you will most likely see is that if the remaining available memory runs low, SQL Server will 'flush' its memory, and you will in fact see that memory freed in the end.
I'm managing a co-located Windows Server 2008 box running both IIS and SQL Server 2008 Express. I just happened to glance at the Task Manager's Performance tab and found the memory usage history graph close to topped out, reading 1.8 GB (I have 2 GB of physical RAM). Processes shows sqlserver running at 940,000 K, by far the largest consumer.
I'm a low-volume site; CPU utilization barely registers. I haven't had any stability issues with the server at all. Is this just how SQL Server treats available memory, or should I be digging deeper?
thx
SQL Server manages its own memory pool. It will release memory back to the OS under memory pressure.
So, yes, this is normal behaviour and nothing to be concerned about.
Note: I should mention an exception to this: if T-SQL scripts are using sp_OACreate to create COM objects and not releasing them with a corresponding sp_OADestroy (say, for instance, because an error occurs and the script terminates prematurely), then memory can leak. Use of these stored procedures is not that common (many DBAs won't allow this feature to be turned on, for good reason). I believe this is also the case for cursors that are not deallocated.
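For illustration, a minimal sketch of the pattern described above, making sure the object is destroyed even if the work in between fails ('Ole Automation Procedures' must be enabled, and the ProgID is just an example):

    DECLARE @obj INT, @hr INT;

    -- Create a COM object (example ProgID)
    EXEC @hr = sp_OACreate 'Scripting.FileSystemObject', @obj OUT;

    BEGIN TRY
        -- ... work with @obj here ...
        PRINT 'doing work with the COM object';
    END TRY
    BEGIN CATCH
        PRINT ERROR_MESSAGE();  -- report, but fall through to cleanup
    END CATCH;

    -- Always destroy the object so its memory is released
    IF @obj IS NOT NULL
        EXEC sp_OADestroy @obj;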
Unless you have configured it otherwise, this is normal behavior. Read this article for a clear understanding of memory configuration and recommendations.
In your case, I assume you have the default settings and what you see is normal and there is no cause for concern.
Raj
Keep in mind that SQL Server Express probably manages memory in a different way compared to any standard edition (that is, any non-Express edition).
I can't provide specific links, but from personal experience things change a lot when you move to a proper edition of SQL Server (speed, memory management, responsiveness under extreme conditions, etc.).
If anybody knows more, please add to my answer.
SQL Server will use something in the area of 90% of a machine's memory if it isn't capped. This is completely normal as SQL Server is managing memory for itself and will release memory as necessary.
If you are worried about it, then you can cap the amount of memory SQL Server can use by going to the properties of the SQL Express instance, selecting the Memory page, and reducing the maximum server memory.