I am running a TDengine database on Windows. Only one client is writing data into it, and the amount of data is very small, but the process occupies 1 GB of memory.
Why is it so big?
How can I solve this problem?
There is a problem with SharePoint database "WSS_Content_"
I've got a simple document library on my SharePoint site. When I add a file of a specific size (e.g. 1 MB), SharePoint stores the file in the .mdf file at ten times the original size (1 GB). I found this by checking the file size in the AllDocs table. As a result, the database has grown from 78 GB to 240 GB.
Shrinking the database hasn't helped either.
Any idea how to fix my SharePoint database would be greatly appreciated.
You should expect the database to be larger than the sum of your content. The physical database size includes not just content (which itself includes deleted documents/items that haven't been flushed from site recycle bins), but also transaction logs, permissions, table metadata, the database schema, indexes, and any pre-allocated space for future growth.
A database of 240 GB for only 78 GB of content does seem quite large (162 GB, roughly 68% of the total, is overhead, which sounds excessive), so you might want to look into defragmentation and shrink operations. You should also verify how your SQL Server is configured in terms of pre-allocation of space; this can cause sudden large spikes in storage consumption when SQL Server decides it needs more room for future growth (even though it's not filling it with data just yet).
All that being said, your screenshot suggests that your math is off by a factor of ten; 1457664 bytes is only 1.457664 MB, not very close to 1 GB.
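To see where the space is actually going before you shrink anything, you can ask SQL Server directly. A minimal T-SQL sketch, assuming the content database is named WSS_Content (substitute your real database name; AllDocs is the table mentioned above):

    -- Report total size and unallocated (reclaimable) space for the database
    USE WSS_Content;
    EXEC sp_spaceused;

    -- Report reserved/used space for the table that holds document rows
    EXEC sp_spaceused 'AllDocs';

    -- Reclaim unused allocated space, leaving 10% free; run this in a
    -- maintenance window, since shrinking fragments indexes
    DBCC SHRINKDATABASE (WSS_Content, 10);

If sp_spaceused shows little unallocated space, the 240 GB really is data (old versions, recycle-bin items, and so on), and shrinking won't help.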
By the way, you can save much more space in the database if you turn off versioning, because SharePoint stores every version of a document separately.
I am trying to run a large query that writes to a custom table in my database. The databases I am querying have about 6-7 million rows per table. The query includes join statements and sub-queries.
Previously, when I ran this query, my raw data databases were significantly smaller, in the range of 1-2 million rows per table. Everything was working until I imported more data. I had run into this error a few times before this addition of data, and clearing the working memory caches worked well.
The host computer currently has 8GB of RAM. That computer is used solely for uploading data to the server and hosting the server. The laptop I am calling the commands from only has 4 GB of RAM.
1) Do I have enough memory on my server computer and my laptop? When I run a large query, does it use the memory on my laptop or on my host computer? When answering, can you please explain how the computer's working memory interacts with SQL Server?
2) If I do not need to add memory, how do I configure my server to prevent this error from occurring?
Additional Information from Server Properties:
Index creation memory = 0 KB (dynamic memory)
Minimum memory per query = 1024 KB
Maximum server memory = 2147483647 MB (maxed out at the default, i.e. effectively unlimited)
Minimum server memory = 10 MB
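For what it's worth, a query's working memory is granted on the server, not on the laptop running the client; the client only receives the result rows. One way to watch those grants while the large query runs is the memory-grants DMV. A minimal sketch (SQL Server 2005 and later; run it on the server while the query executes):

    -- Sessions waiting for a workspace memory grant have grant_time = NULL;
    -- long waits here mean the server itself is short on query memory
    SELECT session_id,
           requested_memory_kb,
           granted_memory_kb,
           wait_time_ms,
           query_cost
    FROM   sys.dm_exec_query_memory_grants
    ORDER  BY requested_memory_kb DESC;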
I just want to know how I can allocate more memory to SQL Server Management Studio, so that long queries take less time to run. I have to import and manipulate large amounts of data from Access files and other sources, and I have 32 GB of RAM for this purpose. Please tell me the solutions.
Thanks.
SQL Server Management Studio (SSMS) is just a client tool.
If queries are running too long, the issue will be at the actual SQL Server. You can assign extra memory there by using SSMS to connect to the server, right-click it, and then select Properties.
On the Memory page, you can configure the maximum amount of RAM you want to assign to SQL Server. In your situation, I'd assign no more than 30 GB (leave some for the OS; if SQL Server is not the only application running on the machine, assign even less).
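If you prefer a script over the GUI, the same setting can be changed with sp_configure. A minimal sketch (30720 MB is just the 30 GB suggestion above expressed in MB):

    -- 'max server memory (MB)' is an advanced option, so expose it first
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Cap SQL Server's memory at 30 GB, leaving the rest for the OS
    EXEC sp_configure 'max server memory (MB)', 30720;
    RECONFIGURE;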
If you're dealing with large amounts of data that need to be imported, RAM might not be the issue, though. Most likely the bottleneck will be the disk system. Use Performance Monitor to try and get a clue as to where the real bottleneck is.
Some ways to enhance disk performance are to ensure your drives are configured properly (a general rule of thumb is to place transaction logs on RAID 1 partitions and data on RAID 10 (or 5)). If you can afford it, place indexes on separate RAID partitions. Also make sure the database and drives are defragmented regularly.
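Besides Performance Monitor, SQL Server itself keeps per-file IO statistics that can confirm a disk bottleneck. A minimal sketch using the virtual file stats DMV (SQL Server 2005 and later):

    -- Average stall per read/write for every database file; consistently
    -- high values point at the disk system rather than at RAM
    SELECT DB_NAME(vfs.database_id) AS database_name,
           mf.physical_name,
           vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_stall_ms,
           vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
    FROM   sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN   sys.master_files AS mf
      ON   mf.database_id = vfs.database_id
     AND   mf.file_id     = vfs.file_id
    ORDER  BY vfs.io_stall DESC;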
We have an 8-core, 16 GB RAM server running SQL Server 2008. When we perform large queries on millions of rows, the RAM usage goes up to 15.7 GB, and then even file browsing, opening Excel, etc. gets really slow.
So does SQL Server really release memory when another process needs it or am I having another issue? We don't have any other major programs running on this server.
We've set a max memory usage of 14GB for SQL Server.
Thanks all for any enlightenment or trouble shooting ideas.
Yes, it does. See SQLOS's memory manager: responding to memory pressure for details on how this works. But what exactly 'memory pressure' means varies from machine to machine and from OS version to OS version; see Q & A: Does SQL Server always respond to memory pressure?. If you want to reserve more memory for applications (I won't even bother to ask why you browse files and use Excel on a machine dedicated to SQL Server...), then you should lower the max server memory setting until it leaves enough for your entertainment.
SQL Server does NOT release memory. It takes all the memory it can get, up to the MaxMemory setting, and it stays there.
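Rather than taking either answer on faith, you can watch the behaviour on your own server. A minimal sketch using two memory DMVs available in SQL Server 2008:

    -- How much memory the SQL Server process currently holds, and whether
    -- the process itself has been signalled that physical memory is low
    SELECT physical_memory_in_use_kb / 1024 AS sql_memory_mb,
           memory_utilization_percentage,
           process_physical_memory_low
    FROM   sys.dm_os_process_memory;

    -- What the OS reports: if this says 'Available physical memory is low'
    -- while SQL Server keeps holding its 14 GB, it is not responding to pressure
    SELECT available_physical_memory_kb / 1024 AS os_available_mb,
           system_memory_state_desc
    FROM   sys.dm_os_sys_memory;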
I need some help very badly.
I'm working on a project where bulk data is entered all the time. It's a reporting application.
On average, 10 million records are stored per day, and that could keep increasing as the number of users grows.
As of now, SQL Server consumes 5 GB of RAM according to Task Manager. My server has 8 GB of RAM.
How do other enterprises manage such situations?
SQL Server uses memory efficiently and takes as much as it can. It's also usually clever enough to release memory when needed.
Using 5 GB means that either SQL Server's max memory is configured at 5 GB, or SQL Server has simply reserved this much during normal usage. It has left 3 GB free because it doesn't need to use it.
Nothing is wrong, and I'd probably configure SQL Server's max memory to 6.5 GB...
Late addition: Jonathan Kehayias blog entry
SQL Server typically uses as much memory as it can get its hands on, because it stores the more frequently accessed data in memory to be more efficient; disk access is slower than memory access.
So nothing is wrong with it using 5 GB of memory.
To be honest, it's leaving 3 GB of memory for other applications and the operating system, so there might not be anything wrong with this (if this is all that server is designed to do).
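If you want to see what that 5 GB is actually caching, you can break the buffer pool down per database. A minimal sketch using a DMV available since SQL Server 2005 (each cached page is 8 KB):

    -- Megabytes of data pages cached in the buffer pool, per database
    SELECT DB_NAME(database_id) AS database_name,
           COUNT(*) * 8 / 1024  AS cached_mb
    FROM   sys.dm_os_buffer_descriptors
    GROUP  BY database_id
    ORDER  BY cached_mb DESC;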
To configure the memory limit, do the following:
In SQL Server Enterprise Manager, right-click the server name and go to Properties.
Click on the Memory option.
Reduce the maximum server memory to what you think is appropriate.
Click OK.
I highly doubt that this is in fact a memory leak. The increase in SQL Server's memory usage is by design: it simply caches a lot of things (queries, procedures).
What you will most likely see is that if the memory still available to the rest of the system runs low, SQL Server will 'flush' its memory, and that memory will be freed in the end.
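To put a number on how much of that memory is cached plans (the queries and procedures mentioned above), you can query the plan cache. A minimal sketch:

    -- Size of the plan cache by object type; this is part of what gets
    -- 'flushed' when SQL Server releases memory under pressure
    SELECT cacheobjtype,
           COUNT(*) AS cached_plans,
           SUM(CAST(size_in_bytes AS bigint)) / 1048576 AS size_mb
    FROM   sys.dm_exec_cached_plans
    GROUP  BY cacheobjtype
    ORDER  BY size_mb DESC;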