I have a machine running Windows Server 2012 R2 with SQL Server 2014. For an unknown reason, the max memory of the SQL Server instance decreased automatically from 18024 MB to 1024 MB, which is causing slowness in the system, and we need to update this value back to 18024 MB manually.
I'm not sure why the max memory of SQL Server decreased automatically from 18024 MB to 1024 MB.
But if you want to correct it, you can do it instantly, without a restart:
1. Increase max server memory
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'max server memory', 18024;
GO
RECONFIGURE;
GO
The output will be similar to:
Configuration option 'max server memory (MB)' changed from 1024
to 18024. Run the RECONFIGURE statement to install.
2. Determine current memory allocation
SELECT
physical_memory_in_use_kb/1024 AS sql_physical_memory_in_use_MB,
large_page_allocations_kb/1024 AS sql_large_page_allocations_MB,
locked_page_allocations_kb/1024 AS sql_locked_page_allocations_MB,
virtual_address_space_reserved_kb/1024 AS sql_VAS_reserved_MB,
virtual_address_space_committed_kb/1024 AS sql_VAS_committed_MB,
virtual_address_space_available_kb/1024 AS sql_VAS_available_MB,
page_fault_count AS sql_page_fault_count,
memory_utilization_percentage AS sql_memory_utilization_percentage,
process_physical_memory_low AS sql_process_physical_memory_low,
process_virtual_memory_low AS sql_process_virtual_memory_low
FROM sys.dm_os_process_memory;
3. Determine the current value of 'max server memory (MB)'
SELECT c.value, c.value_in_use
FROM sys.configurations c WHERE c.[name] = 'max server memory (MB)'
Increasing the memory from a low value does not require a server restart or stop. Just make sure your OS has enough memory left for itself and for other processes, so that everything keeps running smoothly afterwards.
For details, refer to the Microsoft SQL Server configuration options documentation:
https://learn.microsoft.com/en-us/sql/database-engine/configure-windows/server-memory-server-configuration-options?view=sql-server-ver15
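If you want to verify that the OS side is comfortable before raising the value, a quick sketch against sys.dm_os_sys_memory shows the physical memory totals the instance sees (available on SQL Server 2008 and later):
-- OS-level memory picture as reported to SQL Server
SELECT total_physical_memory_kb / 1024     AS total_physical_memory_MB,
       available_physical_memory_kb / 1024 AS available_physical_memory_MB,
       system_memory_state_desc
FROM sys.dm_os_sys_memory;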
Since you said in your comments that this problem happens on weekends, there may be some management scripts on your server that perform cleanup and configuration tasks. Please check your SQL Server Agent jobs and maintenance plans on the server.
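To help spot such a job, a minimal sketch against the standard msdb history tables lists recent job outcomes with their run dates (adjust the filter to your own schedules):
-- Recent Agent job outcomes; run_date is an integer in yyyymmdd format
SELECT j.name AS job_name,
       h.run_date,
       h.run_time,
       h.run_status
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobhistory AS h
  ON h.job_id = j.job_id
WHERE h.step_id = 0   -- step 0 = overall job outcome
ORDER BY h.run_date DESC, h.run_time DESC;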
Related
I run the same Java application (Spring/Hibernate) on two different systems; both use the same SQL Server version.
I'm using SQL Server Profiler to trace a query which I run (exactly the same) on both systems.
This is my SQL Server version on both systems:
Trace System 1: slow-system2.trc, the query randomly takes between 100 and 300 ms
Trace System 2: fast.trc, the query randomly takes between 10 and 20 ms
It seems here, in the slow screenshot, that a "use database" query takes 331 ms compared to 0 ms in fast.trc:
What can cause this difference just by running a "use database" query?
I tried on a third system running SQL Express, which is also slow; here is the trace:
It seems that on "SQL Express" it is due to the fact that I have two additional Audit Logout event classes that take time:
Maybe I missed some option in SQL Server?
The long duration of the USE statement indicates the database may be set to AUTO_CLOSE ON. Overhead is incurred during database startup when it must be opened.
The setting can be changed with:
ALTER DATABASE [YourDatabase] SET AUTO_CLOSE OFF;
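To check which databases currently have the option enabled, sys.databases exposes an is_auto_close_on flag; something like this should work:
-- Databases with AUTO_CLOSE currently enabled
SELECT name, is_auto_close_on
FROM sys.databases
WHERE is_auto_close_on = 1;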
I am not very experienced so please bear with me.
I have been attempting to import a 2500 MB and a 3800 MB CSV file into SQL Server 2016. Unfortunately, I keep getting a System.OutOfMemoryException error. My computer has 8.00 GB of RAM, so I figured I would just increase the max server memory from the default up to 4000 MB. For some reason, though, each time I try to change the max server memory to a higher value it changes back to the default. How do I fix this problem?
You can change it with:
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'max server memory', 4096;
GO
RECONFIGURE;
GO
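Since the value keeps reverting to the default, it is worth confirming whether the change took effect at all. A quick check against sys.configurations compares the configured value with the value in use:
-- 'value' is the configured setting, 'value_in_use' is what is active now
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = 'max server memory (MB)';
If value changes but value_in_use does not, RECONFIGURE was not run; if both revert later, something else on the server (a job, script, or policy) is probably resetting the option.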
What's a good way of checking how much memory is actually being used vs. how much SQL Server has allocated to itself?
I've been resorting to memory_utilization_percentage, but that doesn't seem to change after running the following to release memory.
SELECT [Memory_usedby_Sqlserver_MB] = ( physical_memory_in_use_kb / 1024 ) ,
[Memory_utilization_percentage] = memory_utilization_percentage
FROM sys.dm_os_process_memory;
DBCC FREESYSTEMCACHE ('ALL')
DBCC FREESESSIONCACHE
DBCC FREEPROCCACHE
SELECT [Memory_usedby_Sqlserver_MB] = ( physical_memory_in_use_kb / 1024 ) ,
[Memory_utilization_percentage] = memory_utilization_percentage
FROM sys.dm_os_process_memory;
A solution is to drop max server memory for the SQL Server instance and then increase it again, to force SQL Server to release unused but allocated memory. However, an issue with this approach is that we cannot be sure how far to reduce max server memory, and hence we run the risk of killing SQL Server. This is why it's important to understand how much memory SQL Server is actually using before reducing the value of max server memory.
The modified script below worked for me. I needed to temporarily release a bunch of RAM held by SQL Server so that we could run some other one-off processes on the same server. It temporarily releases SQL Server's reserved memory while still allowing it to gobble the memory back up as needed.
I added a built-in wait to let SQL Server actually release the memory before bumping the setting back to the original level. Obviously, adjust the values to suit your needs.
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
/*** Drop the max down to 64GB temporarily ***/
sp_configure 'max server memory', 65536; --64GB
GO
RECONFIGURE;
GO
/*** Wait a couple of minutes to let SQL Server naturally release the RAM ***/
WAITFOR DELAY '00:02:00';
GO
/*** Now bump it back up to "lots of RAM"! ***/
sp_configure 'max server memory', 215040; --210 GB
GO
RECONFIGURE;
GO
SQL Server always assumes it is the primary application running on the machine. It is not designed to share resources: it will take all the available memory and will only release it back to the operating system under memory pressure, unless you throttle it with 'max server memory'.
By design, SQL Server does not play well with others.
This sqlskills article recommends a baseline for throttling followed by monitoring and raising the throttle as needed:
https://www.sqlskills.com/blogs/jonathan/how-much-memory-does-my-sql-server-actually-need/
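As a rough way to see how much of that allowance SQL Server has actually taken, you can compare what the memory manager has committed against what it is aiming for; a sketch using sys.dm_os_sys_info (SQL Server 2012 and later):
-- committed_kb        = memory currently committed by the memory manager
-- committed_target_kb = amount it is trying to consume, driven largely by 'max server memory'
SELECT committed_kb / 1024        AS committed_MB,
       committed_target_kb / 1024 AS committed_target_MB
FROM sys.dm_os_sys_info;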
I don't have a solution for how to release the allocated memory. However, for our purposes we were able to figure out how to allow active-active clusters to run safely. We decided to set minimum server memory to ~2 GB. This is helpful because no matter how much max memory an instance decides to use, it will never run the other instances out of memory. So again, this solves our purpose, but it still doesn't answer the question of how much memory is actually being used, how low we can drop max server memory, etc.
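For reference, the ~2 GB floor mentioned above can be set the same way as the other options in this thread; a sketch, with the value adjusted to your environment:
EXEC sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
EXEC sp_configure 'min server memory', 2048; -- ~2 GB floor for this instance
GO
RECONFIGURE;
GO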
You have to set 'max server memory' to some value between 1 and 2 GB. This range is safe in most cases. It may take some time to release the memory after executing the following:
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'max server memory', 1024;
GO
RECONFIGURE;
GO
Lowering that setting clears the buffer pool, compile memory, all the caches, CLR memory, etc.
The minimum value for 'max server memory' is 128 MB, but that is not recommended, as SQL Server may fail to start in certain configurations. If that happens, use the -f startup switch to force SQL Server to start with a minimal configuration, then change the value back to the original one.
This question is answered at the following link; please check it out:
SQL Server not releasing memory after query executes
I don't think SQL Server releases memory unless the operating system actively requests it. If other processes request more memory and none is available, SQL Server will release unused memory on its own. Rather than trying to flush the unused memory, I'd probably go with limiting SQL Server's maximum allowed memory.
sp_configure 'show advanced options', 1
GO
RECONFIGURE
GO
sp_configure 'max server memory', 512; --or some other value
GO
RECONFIGURE
GO
For further info, you could check this MSDN article: https://msdn.microsoft.com/en-us/library/ms178067.aspx
Just in case you are in an emergency situation and you can afford a small downtime, just restart your SQL Server service. It takes only a few seconds and does the job very well. Right-click on your server name and click Restart.
I am in the midst of evaluating default SQL Server 2008 R2 configuration settings.
I have been asked to run the below script on the production server:
sp_configure 'remote query timeout', 0
sp_configure 'max server memory (MB)', 28000
sp_configure 'remote login timeout', 300
go
reconfigure with override
go
Before proceeding on this, I have been trying to gauge the advantages and disadvantages of each line of SQL code.
Edited on 17-May-2016 14:19 IST:
A few Microsoft links that I have referred to are below:
https://msdn.microsoft.com/en-us/library/ms178067.aspx
https://msdn.microsoft.com/en-IN/library/ms175136.aspx
Edited on 23-May-2016 11:15 IST:
I have set the 'MAX SERVER MEMORY' based on feedback here and further investigation from my end. I have provided my inferences to the customer.
I have also provided my inferences on the other 2 queries based on views and answers provided here.
Thanks to all for your help. I will update this question after inputs from the customer.
The following query will set the remote query timeout to 0, i.e. no timeout:
sp_configure 'remote query timeout', 0
This value has no effect on queries received by the Database Engine.
To disable the time-out, set the value to 0. A query will wait until
it is canceled.
sp_configure 'max server memory (MB)', 28000
This is the amount of memory (in megabytes) that is managed by the SQL Server
Memory Manager for a SQL Server process used by an instance of SQL
Server.
sp_configure 'remote login timeout', 300
If you have applications that connect remotely to the server, you can set the login timeout using the above query.
Note:
You can also set the server properties via SSMS (Management Studio), where you can set the maximum and minimum values, rather than using the statements shown in your post.
You can certainly try these queries, but the settings you should opt for depend on the hardware and the type of application you are working with.
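Before changing anything on production, it may also be worth capturing the current values of these three options so you have something to compare against and roll back to; a sketch against sys.configurations (verify the option names on your version):
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name IN ('remote query timeout (s)',
               'max server memory (MB)',
               'remote login timeout (s)');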
I would generally say that these statements are quite idiotic. Yes, seriously.
Line by line:
sp_configure 'remote query timeout', 0
Makes queries run for an unlimited time before aborting. While I accept there are long-running queries, those should be rare (the default timeout of IIRC 30 seconds handles 99.99% of queries), and the application programmer can set an appropriate timeout in the rare cases where a particular query needs it.
sp_configure 'max server memory (MB)', 28000
Sets max server memory to 28 GB. Well, that is nonsense: the DBA should have set that to a sensible value upon install, so it is not needed unless the DBA is incompetent. Whether roughly 28 GB makes sense I cannot comment on.
sp_configure 'remote login timeout', 300
Timeout for remote logins of 300 seconds. The default of 30 seconds is already plenty. I have serious problems imagining a scenario where the server is healthy and does not process logins within a handful of seconds.
The only scenario I have seen where this whole batch would make sense is a server dying from overload, which is most often caused by some brutal incompetence somewhere: either the admin (a 64 GB RAM machine configured to use only 2 GB for SQL Server, for example) or, more often, the programmers (no indexes, ridiculously bad SQL making the server die from overload). Been there, seen that way too often.
Otherwise the timeouts really make little sense.
"Remote query timeout" sets how much time before a remote query times out.
"Remote login timeout" set how much time before a login attempt time out.
The values set here could make sense in certain conditions (slow, high-latency network, for example).
"Max server memory" is different. It's a very useful setting, and it should be set almost always to avoid possible performance problems. What value, it depends how much memory is on the server as whole and which other applications/service are running on it. If it's a dedicated server with 32 GB of memory, this value sounds about right.
None of these could be really tested on the test environment, I'm afraid, unless you have an 1:1 replica of the prod environment.
It might be a very basic question for you, friends, but how do I allow multiple users on SQL Server installed on a remote Windows Server 2012 machine?
Right now only two users can work at the same time; if a third one comes, one of the two active users has to give way and log out.
We are building a new server which will allow multiple users to work at the same time.
My question is: once we install SQL Server on the Windows Server machine, what configuration needs to be done to achieve our goal (multiple users working at the same time) on the server machine, and what configuration needs to be done on the computers of the people who will be logging into it?
Do we need as many instances as there will be people working on it? If yes, does that mean that many copies of the same database on the server, and more space occupied, right?
Thanks.
EXEC sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
EXEC sp_configure 'user connections', 777;
GO
RECONFIGURE;
GO
Replace 777 with your limit of connections.
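To compare the configured limit with what is actually in use, a small sketch (a value of 0 for 'user connections' means the default maximum of 32,767 concurrent connections):
-- Configured limit
SELECT value_in_use AS configured_user_connections
FROM sys.configurations
WHERE name = 'user connections';
-- Sessions currently open by user processes
SELECT COUNT(*) AS current_user_sessions
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;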