How to allocate more memory to MSSQL - sql-server

I just want to know how I can allocate more memory to SQL Server Management Studio so that long queries take less time to run. I have to import and manipulate large amounts of data from Access files and other sources, and I have 32 GB of RAM available for this purpose. Please tell me the solutions.
Thanks.

SQL Server Management Studio (SSMS) is just a client tool.
If queries are running too long, the issue will be at the actual SQL Server instance. You can assign extra memory there by using SSMS to connect to the server, right-clicking it, and then selecting Properties.
On the Memory page, you can configure the maximum amount of RAM you want to assign to SQL Server. In your situation, I wouldn't assign more than 30 GB (leave some for the OS; if SQL Server is not the only application running on the machine, assign even less).
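Equivalently, the same limit can be set with T-SQL through sp_configure. A minimal sketch (the 30000 MB value just mirrors the 30 GB suggestion above; adjust it for your machine):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Cap SQL Server's memory at roughly 30 GB (the value is in MB)
EXEC sp_configure 'max server memory (MB)', 30000;
RECONFIGURE;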
If you're dealing with large amounts of data that need to be imported, RAM might not be the issue, though. Most likely the bottleneck will be the disk system. Use Performance Monitor to try and get a clue as to where the real bottleneck is.
Some ways to enhance disk performance are to ensure your drives are configured properly (a general rule of thumb is to place transaction logs on RAID 1 partitions and data on RAID 10 (or 5) partitions) and, if you can afford it, to place indexes on separate RAID partitions. Also make sure the database and drives are regularly defragmented.
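As an illustration of that file-placement advice, separating data and log files at database creation time might look like the sketch below (drive letters, paths, and the database name are purely hypothetical):
-- Hypothetical layout: data on a RAID 10 volume (D:), transaction log on a RAID 1 volume (L:)
CREATE DATABASE ImportStaging
ON PRIMARY (NAME = ImportStaging_data, FILENAME = 'D:\Data\ImportStaging.mdf')
LOG ON (NAME = ImportStaging_log, FILENAME = 'L:\Logs\ImportStaging.ldf');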

Related

SQL Server 2012 memory use is 1000 x size of database - is this right?

I understand from a search here that SQL Server 2012 will continue to use memory until it meets the limit set for it, but the usage I see is hard to believe. The database is about 26MB at the moment, but the memory usage is over 30GB. Is this to be expected or is there some other problem lurking somewhere?
There is nothing wrong. The memory allocated to SQL Server can be changed to almost any value you'd like. I'd say 30GB is certainly overkill for a 26MB DB if it is the only DB on the instance. The memory allocated to SQL Server is used for numerous functions, e.g. sorting queries, plan caches, etc. The 30GB you're seeing means that 30GB of your system memory is reserved for SQL Server.
For a better understanding, you'll want to look into your target memory too. Target memory is how much memory is needed for SQL Server work, based on your configuration. In your case, I bet target memory equals max memory and SQL Server is trying to consume all the memory. Here is how you can check that:
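-- Total Server Memory is what SQL Server currently holds; Target Server Memory is what it wants to hold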
SELECT *
FROM sys.dm_os_performance_counters
WHERE counter_name in ('Total Server Memory (KB)', 'Target Server Memory (KB)')
More information from Brent Ozar:
So how much memory is SQL using? I’ll make this easy for you. SQL Server is using all of the memory. Period.
No matter how much memory you put in a system, SQL Server will use all it can get until it’s caching entire databases in memory and then some. This isn’t an accident, and there’s a good reason for it. SQL Server is a database: programmers store data in SQL Server, and then SQL Server manages writing that data to files on the hard drive. Programmers issue SELECT statements (yes, usually SELECT *) and SQL Server fetches the data back from the drives. The organization of files and drives is abstracted away from the programmers.
To improve performance, SQL Server caches data in memory. SQL Server doesn’t have a shared-disk model: only one server’s SQLserver.exe can touch the data files at any given time. SQL Server knows that once it reads a piece of data from the drives, that data isn’t changing unless SQL Server itself needs to update it. Data can be read into memory once and safely kept around forever. And I do mean forever – as long as SQL Server’s up, it can keep that same data in memory. If you have a server with enough memory to cache the entire database, SQL Server will do just that.
Why Doesn’t SQL Server Release Memory?
Memory makes up for a lot of database sins like:
Slow, cheap storage (like SATA hard drives and 1Gb iSCSI)
Programs that needlessly retrieve too much data
Databases that don’t have good indexes
CPUs that can’t build query plans fast enough
Throw enough memory at these problems and they go away, so SQL Server wants to use all the memory it can get. It also assumes that more queries could come in at any moment, so it never lets go or releases memory unless the server comes under memory pressure (like if other apps need memory and Windows sends out a memory pressure notification).
By default, SQL Server assumes that its server exists for the sole purpose of hosting databases, so the default setting for memory is an unlimited maximum. (There are some version/edition restrictions, but let’s keep things simple for now.) This is a good thing; it means the default setting is covering up for sins. To find out if the server’s memory is effectively covering up sins, we have to do some investigation.
Docs on SQL Server memory configuration: https://learn.microsoft.com/en-us/sql/database-engine/configure-windows/server-memory-server-configuration-options
Here is how you can set a fixed amount for Min/Max memory on SQL Server: https://technet.microsoft.com/en-us/library/ms191144(v=sql.105).aspx

Purposefully Maxing out Memory on SQL Server

I'm looking into using Windows Performance Monitor to analyse server performance. I'm testing on the AdventureWorks 2014 database on SQL Server 2014.
I want to try to max out the CPU, memory, and disk usage (I/O), and possibly generate a high amount of user activity. Then I can train myself in using Windows Performance Monitor to capture performance logs in these areas.
I know for CPU I can just run a heavy query within a while loop (maybe infinite) and that will go towards maxing it out.
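For illustration, a loop along those lines might look like the sketch below (it assumes a local copy of AdventureWorks2014 with the Sales.SalesOrderDetail table; the cross join is deliberately wasteful and runs until the batch is cancelled):
USE AdventureWorks2014;
-- Deliberately expensive: the cross join of SalesOrderDetail with itself
-- forces a huge amount of work per iteration and keeps the CPU busy.
WHILE 1 = 1
BEGIN
    SELECT COUNT_BIG(*)
    FROM Sales.SalesOrderDetail AS a
    CROSS JOIN Sales.SalesOrderDetail AS b;
END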
I'm less sure about the other ones. I've tried queries which select from large tables (30,000 records) inside a while loop to try to use up some of the memory, but it doesn't seem to drop the Available MBytes counter in Performance Monitor. Is this because the tables aren't big enough?
As for disk usage, I imagine I may have to do some updates or inserts so that the disk is being written to, but I can't seem to get it to affect disk usage.
As for network, I can only think of opening up multiple queries and running them concurrently, but that seems a bit messy.
As a side note, I want to script it all myself rather than use any extra tools or pre-canned apps that do it for you.

Can a 200GB or bigger SQL Server database load into a pagefile when not enough memory is available on the server?

Here's what we're trying to do.
We will have a 200GB+ SQL Server database that needs to load into memory. Microsoft best practice is to have enough physical memory available on the server and then load the entire database into it. That means we would need 256GB of memory on each of our SQL Servers. This would result in fast access to the database loaded into memory, but at the high cost of that memory. BTW, we're running SQL Server 2008 on Windows Server 2008.
Currently, our server is set up with only 12GB of memory. Just under 3GB is allocated to the OS, and the remaining 9GB is used for SQL Server. Is it possible to increase the pagefile to 256GB and set it up on an SSD drive? What we want to do then is load the database into the pagefile located on the SSD. We're hoping the performance will be similar to loading the entire database into memory, since it'll be on an SSD.
Will this work? Is there another alternative we're overlooking? We want to keep the costs down as much as we can, without sacrificing the performance of our environment. Any advice would be appreciated.
Thanks.
If you want the database to be stored in memory, you need to buy more memory. In spite of what the other answer suggests, memory is the absolute best and cheapest way to make a database perform better - SQL Server is designed to use memory well.
While SQL Server will take advantage of the page file when it has to, and while having the page file on an SSD will be slightly faster than on an old-fashioned mechanical disk, it's still I/O and swapping and there is a lot of overhead around that, regardless of the disk type underneath. This may turn out to be a little bit better, in general, than having the same page file on a spinny disk (or no page file at all), but I don't think that it's going to be anywhere near the impact of having real memory, or that it's going to come anywhere close to your expectations of "fast access."
If you can't buy more memory then you can start with this page file on an SSD, but I'm confident you will need to additionally focus on other tuning opportunities - largely making sure you have indexes that support the type of queries you run, avoiding full table scans as much as possible. For full table aggregates you can consider indexed views (see here and here); for subsets you can consider filtered indexes.
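To make the filtered-index suggestion concrete, here is a minimal sketch (the table and column names are purely hypothetical):
-- Hypothetical: index only the rows a hot query actually touches,
-- so the subset can be seeked without reading the whole table.
CREATE NONCLUSTERED INDEX IX_Orders_Open
ON dbo.Orders (OrderDate, CustomerID)
WHERE Status = 'Open';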
And just to be sure: you are storing the actual data on an SSD drive, right? If not, then I would argue that you should use the SSD for the data and/or log, not for the page file. The page file isn't going to offer you much benefit if it is constantly swapping data in and out by exchanging with a spinny disk.
Need more clarity on the question.
Are you in control of the database or is this a COTS solution that limits your ability to optimize?
Are you clustering? Is that why adding 200+ GB of RAM is an issue (now more than 400 GB, 200 per node)?
Are you on bare metal or virtualized? Is this why RAM may be an issue?
So far it would seem the "experts" have made some assumptions that may not be fair to your circumstance.
Please update your question... :)

SQL SERVER 2008 - Memory Leak while storing Millions of records

I need some help very badly.
I'm working on a project where bulk data is entered all the time. It's reporting software.
On average, 10 million records are stored per day, and this could keep increasing as users increase.
As of now, SQL Server consumes 5 GB of RAM according to Task Manager. I have 8 GB of RAM on my server.
How do other enterprises manage such situations?
SQL Server uses memory efficiently and takes as much as it can. It's also usually clever enough to release memory when needed.
Using 5GB means:
SQL Server is configured to 5GB or SQL Server has simply reserved this memory during normal usage
It's left 3GB because it doesn't need to use it
Nothing is wrong... and I'd probably configure the SQL Server max mem to 6.5GB...
Late addition: Jonathan Kehayias blog entry
SQL Server typically uses as much memory as it can get its hands on, since it stores the most frequently accessed data in memory to be more efficient, as disk access is slower than memory access.
So nothing is wrong with it using 5 GB of memory.
To be honest, it's leaving 3 GB of memory for other applications and the operating system, so there might not be anything wrong with this (if this is all that server is designed to do).
To configure the memory limit, do the following:
In SQL Server Enterprise Manager, right-click on the server name and go to Properties.
Click on the Memory option
Reduce the maximum server memory to what you think is appropriate.
Click ok.
I highly doubt that this is in fact a memory leak. The increase of SQL Server's memory usage is by design, simply because it caches a lot of stuff (queries, procedures).
What you will most likely see is that if the remaining available memory runs low, SQL Server will 'flush' its memory, and you will see that memory is in fact freed in the end.
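As a rough way to see how much of that cached memory is the plan cache being described, here is a sketch (exact numbers will vary with your workload):
-- Roughly how much memory the plan cache is currently holding
SELECT COUNT(*) AS cached_plans,
       SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024 AS plan_cache_mb
FROM sys.dm_exec_cached_plans;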

SQL Server 2k5 memory consumption?

I have a development VM which is running SQL Server as well as some other apps for my stack, and I found that the other apps are performing awfully. After doing some digging, I found that SQL Server was hogging the memory. After a quick web search I discovered that, by default, it will consume as much memory as it can in order to cache data and give it back to the system as other apps request it, but this process often doesn't happen fast enough; apparently my situation is a common problem.
There is, however, a way to limit the memory SQL Server is allowed to have. My question is, how should I set this limit? Obviously I'm going to need to do some guess and check, but is there an absolute minimum threshold? Any recommendations are appreciated.
Edit:
I'll note that our developer machines have 2 GB of memory, so I'd like to be able to run the VM on 768 MB or less if possible. This VM will only be used for local dev and testing, so the load will be very minimal. After code has been tested locally it goes to another environment where the SQL Server box is dedicated. What I'm really looking for here is recommendations on minimums.
Extracted from the SQL Server documentation:
Maximum server memory (in MB)
Specifies the maximum amount of memory SQL Server can allocate when it starts and while it runs. This configuration option can be set to a specific value if you know there are multiple applications running at the same time as SQL Server and you want to guarantee that these applications have sufficient memory to run. If these other applications, such as Web or e-mail servers, request memory only as needed, then do not set the option, because SQL Server will release memory to them as needed. However, applications often use whatever memory is available when they start and do not request more if needed. If an application that behaves in this manner runs on the same computer at the same time as SQL Server, set the option to a value that guarantees that the memory required by the application is not allocated by SQL Server.
The recommendation on minimum is: no such thing. The more memory the better. SQL Server needs as much memory as it can get or it will trash your I/O.
Stop SQL Server, run your other applications, and take note of the amount of memory they need. Subtract that from your total available RAM, and use that number for the max memory setting in SQL Server.
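As a worked sketch of that arithmetic (all numbers hypothetical: say the VM has 768 MB in total and the OS plus other apps were observed using about 400 MB, leaving roughly 350 MB for SQL Server):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- 768 MB (VM total) minus ~400 MB (OS + other apps) leaves roughly 350 MB for SQL Server
EXEC sp_configure 'max server memory (MB)', 350;
RECONFIGURE;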
Since this is a development environment, I agree with Greg, just use trial and error. It's not that crucial to get it perfectly right.
But if you do a lot of work in the VM, why not give it at least half of the 2GB?
so I'd like to be able to run the VM on 768 MB or less if possible
That will depend on your data and the size of your database, but I usually like to give SQL Server at least a GB.
It really depends on what else is going on on the machine. Get things running under a typical load and have a look at Task Manager to see what you need for everything else. Try that number to start with.
For production machines, of course, it is best to give control of the machine to SQL Server (Processors -> Boost SQL Server priority) and let it have all the RAM it wants.
Since you are using VMs, maybe you could create a dedicated one just for SQL Server and run everything else on a different VM.
