SQL Configuration : Pros & Cons - sql-server

I am in the midst of evaluating default SQL Server 2008 R2 configuration settings.
I have been asked to run the below script on the production server:
sp_configure 'remote query timeout', 0
sp_configure 'max server memory (MB)', 28000
sp_configure 'remote login timeout', 300
go
reconfigure with override
go
Before proceeding, I am trying to gauge the advantages and disadvantages of each line of this script.
Edited on 17-May-2016 14:19 IST:
A few Microsoft links I have referred to:
https://msdn.microsoft.com/en-us/library/ms178067.aspx
https://msdn.microsoft.com/en-IN/library/ms175136.aspx
Edited on 23-May-2016 11:15 IST:
I have set the 'MAX SERVER MEMORY' based on feedback here and further investigation from my end. I have provided my inferences to the customer.
I have also provided my inferences on the other 2 queries based on views and answers provided here.
Thanks to all for your help. I will update this question after inputs from the customer.

The following query sets the remote query timeout to 0, i.e. no timeout:
sp_configure 'remote query timeout', 0
This value has no effect on queries received by the Database Engine.
To disable the time-out, set the value to 0. A query will wait until
it is canceled.
sp_configure 'max server memory (MB)', 28000
This sets the amount of memory (in megabytes) that is managed by the SQL Server Memory Manager for an instance of SQL Server.
sp_configure 'remote login timeout', 300
If you have applications that connect to the server remotely, you can set the login timeout using the above query.
Note :
You can also set these server properties via SSMS (Management Studio), where you can set the maximum and minimum values, rather than using the scripts shown in your post.
You can certainly try these queries, but which settings you should opt for depends on the hardware and the type of application you are working with.
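Before changing anything, it is worth capturing the current values so they can be restored later; a quick sketch using sys.configurations (the option names below are as they appear in that view):

```sql
-- Configured vs. currently running values for the three options in question
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name IN ('remote query timeout (s)',
               'max server memory (MB)',
               'remote login timeout (s)');
```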

I would generally say that these statements are quite idiotic. Yes, seriously.
Line by line:
sp_configure 'remote query timeout', 0
Makes remote queries run for an unlimited time before aborting. While I accept that there are long-running queries, those should be rare (the default timeout of 600 seconds handles 99.99% of queries), and the application programmer can set an appropriate timeout in the rare cases where a particular query needs more.
sp_configure 'max server memory (MB)', 28000
Sets max server memory to 28 GB. Well, that is nonsense - the DBA should have set that to a sensible value at install time, so it is not needed unless the DBA is incompetent. Whether roughly 28 GB makes sense I cannot comment on.
sp_configure 'remote login timeout', 300
Timeout for remote logins of 300 seconds. The default of 30 seconds is already plenty. I have serious trouble imagining a scenario where the server is healthy yet does not process logins within a handful of seconds.
The only scenario I have seen where this whole batch would make sense is a server dying from overload - which is MOST OFTEN rooted in some brutal incompetence somewhere. Either the admin's (a 64 GB RAM machine configured to use only 2 GB for SQL Server, for example) or, more often, the programmers' (no indexes, ridiculously bad SQL making the server die from overload). Been there, seen that way too often.
Otherwise the timeouts really make little sense.

"Remote query timeout" sets how much time a remote query may take before it times out.
"Remote login timeout" sets how much time a login attempt may take before it times out.
The values set here could make sense in certain conditions (slow, high-latency network, for example).
"Max server memory" is different. It's a very useful setting, and it should almost always be set to avoid possible performance problems. What value to use depends on how much memory the server has as a whole and which other applications/services are running on it. If it's a dedicated server with 32 GB of memory, this value sounds about right.
None of these can really be tested in the test environment, I'm afraid, unless you have a 1:1 replica of the prod environment.

Related

SQL Server Profiler trace different result on different system for same query

I run the same Java application (Spring/Hibernate) on two different systems, both using the same SQL Server version.
I'm using SQL Server Profiler to trace a query which I run (exactly the same) on both systems.
Both systems run the same SQL Server version.
Trace on system 1 (slow-system2.trc): the query takes randomly between 100-300 ms.
Trace on system 2 (fast.trc): the query takes randomly between 10-20 ms.
In the slow trace, a "use database" statement takes 331 ms, compared to 0 ms in fast.trc.
What can cause this difference just in running a "use database" query?
I also tried a third system running SQL Express, which is slow too. In that trace, the extra time seems to come from two additional Audit Logout event classes.
Maybe I missed some option in SQL Server?
The long duration of the USE statement indicates the database may be set to AUTO_CLOSE ON. Overhead is incurred during database startup when it must be opened.
The setting can be changed with:
ALTER DATABASE [YourDatabase] SET AUTO_CLOSE OFF;
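To confirm the option is the culprit first, the current setting can be read from sys.databases (a sketch; 'YourDatabase' is a placeholder):

```sql
-- is_auto_close_on = 1 means AUTO_CLOSE is ON and the database
-- incurs startup overhead on each first use after being closed
SELECT name, is_auto_close_on
FROM sys.databases
WHERE name = N'YourDatabase';
```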

How do you force SQL Server to release memory?

What's a good way of checking how much memory is actually being used vs. how much SQL Server has allocated to itself?
I've been resorting to memory_utilization_percentage, but that doesn't seem to change after running the following to release memory.
SELECT [Memory_usedby_Sqlserver_MB] = ( physical_memory_in_use_kb / 1024 ) ,
[Memory_utilization_percentage] = memory_utilization_percentage
FROM sys.dm_os_process_memory;
DBCC FREESYSTEMCACHE ('ALL')
DBCC FREESESSIONCACHE
DBCC FREEPROCCACHE
SELECT [Memory_usedby_Sqlserver_MB] = ( physical_memory_in_use_kb / 1024 ) ,
[Memory_utilization_percentage] = memory_utilization_percentage
FROM sys.dm_os_process_memory;
A solution is to drop max server memory for the SQL Server instance and then increase it again, forcing SQL Server to release unused but allocated memory. However, an issue with this approach is that we cannot be sure how far to reduce max server memory, and hence run the risk of starving SQL Server. This is why it's important to understand how much memory SQL Server is actually using before reducing max server memory.
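One way to gauge that (a sketch; these DMV columns exist on SQL Server 2012 and later) is to compare how much memory the instance has committed against how much it currently wants:

```sql
-- committed_kb close to committed_target_kb means SQL Server is holding
-- roughly as much memory as it currently wants; a large gap suggests
-- there may be room to lower 'max server memory'.
SELECT committed_kb / 1024        AS committed_mb,
       committed_target_kb / 1024 AS committed_target_mb
FROM sys.dm_os_sys_info;
```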
The modified script below worked for me. I needed to temporarily release a bunch of RAM held by SQL Server so that we could run some other one-off processes on the same server. It temporarily releases SQL Server's reserved memory while still allowing it to grab the memory back as needed.
I added a built-in wait to let SQL Server actually release the memory before bumping the setting back up to the original level. Adjust the values as needed to suit your situation.
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
/*** Drop the max down to 64GB temporarily ***/
sp_configure 'max server memory', 65536; --64GB
GO
RECONFIGURE;
GO
/**** Wait a couple of minutes to let SQL Server naturally release the RAM ****/
WAITFOR DELAY '00:02:00';
GO
/** now bump it back up to "lots of RAM"! ****/
sp_configure 'max server memory', 215040; --210 GB
GO
RECONFIGURE;
GO
SQL Server always assumes it is the primary application running on the machine. It is not designed to share resources: it will take all the available memory and release it only under operating-system memory pressure, unless you throttle it with 'max server memory'.
By design, SQL Server does not play well with others.
This sqlskills article recommends a baseline for throttling followed by monitoring and raising the throttle as needed:
https://www.sqlskills.com/blogs/jonathan/how-much-memory-does-my-sql-server-actually-need/
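The rule of thumb in that article (reserve 1 GB for the OS, plus 1 GB per 4 GB of RAM between 4 and 16 GB, plus 1 GB per 8 GB above 16 GB) can be sketched as a quick calculation. This is only an illustration of the heuristic as a starting point, not official guidance:

```python
def suggested_max_server_memory_gb(total_ram_gb: float) -> float:
    """Rough starting point for 'max server memory' per the sqlskills
    rule of thumb: reserve 1 GB for the OS, plus 1 GB per 4 GB of RAM
    in the 4-16 GB range, plus 1 GB per 8 GB of RAM above 16 GB."""
    reserved = 1.0
    reserved += min(max(total_ram_gb - 4, 0), 12) / 4   # 4-16 GB band
    reserved += max(total_ram_gb - 16, 0) / 8           # above 16 GB
    return total_ram_gb - reserved

print(suggested_max_server_memory_gb(32))  # -> 26.0
print(suggested_max_server_memory_gb(64))  # -> 54.0
```

For the 32 GB server discussed above, this suggests roughly 26 GB (26624 MB) as a baseline, to be raised or lowered after monitoring.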
I don't have a solution for releasing the allocated memory. However, for our purposes we figured out how to let active-active clusters run safely: we set minimum server memory to ~2 GB on each instance. This helps because no matter how much memory one instance decides to use, it can never run the other instances completely out of memory. So this solved our problem, but it still doesn't answer the question of how much memory is actually being used, how low we can drop max server memory, etc.
You can set 'max server memory' to some value between 1 and 2 GB; this range is safe in most cases. It may take some time for the memory to be released after executing the following:
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'max server memory', 1024;
GO
RECONFIGURE;
GO
Lowering the setting clears the buffer pool, compile memory, all the caches, CLR memory, etc.
The minimum value for 'max server memory' is 128 MB, but going that low is not recommended, as SQL Server may not start in certain configurations. If that happens, use the "-f" switch to force SQL Server to start with a minimal configuration, then change the value back to the original one.
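If the instance refuses to start after the change, one way to bring it up with minimal configuration is shown below (a sketch assuming a default instance on Windows, service name MSSQLSERVER; the /m"SQLCMD" option restricts connections to sqlcmd while you fix the setting):

```shell
REM Start the default instance with minimal configuration (-f),
REM allowing only sqlcmd connections until the setting is corrected
NET START MSSQLSERVER /f /m"SQLCMD"
```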
This is addressed in the following post:
SQL Server not releasing memory after query executes
I don't think SQL Server releases memory unless the operating system actively requests it. If other processes request more memory and none is available, SQL Server will release unused memory on its own. Rather than trying to flush the unused memory, I'd probably go with limiting SQL Server's maximum allowed memory:
sp_configure 'show advanced options', 1
GO
RECONFIGURE
GO
sp_configure 'max server memory', 512; --or some other value
GO
RECONFIGURE
GO
For further info, you could check this MSDN article: https://msdn.microsoft.com/en-us/library/ms178067.aspx
Just in case you are in an emergency situation and can afford a small downtime, simply restart your SQL Server service. It takes just a few seconds and does the job very well. Right-click on your server name and click Restart.

multiple user on sql server

It might be a very basic question for you, friends, but how do I allow multiple users on SQL Server installed on a remote Windows Server 2012 machine?
Right now only two users can work at the same time; if a third one comes, one of the two active users has to log off.
We are building a new server which should allow multiple users to work at the same time.
My question is: once we install SQL Server on the Windows Server machine, what configuration needs to be done to achieve our goal (multiple users working at the same time), both on the server machine and on the computers of the people who will be logging into it?
Do we need as many instances as the number of people who will be working on it? If yes, that would mean just as many copies of the same database on the server, and more space occupied, right?
Thanks.
EXEC sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
EXEC sp_configure 'user connections', 777;
GO
RECONFIGURE;
GO
Replace 777 with your limit of connections.
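Note that the default value of 0 means unlimited connections. Before setting an explicit limit, it may help to see how many user sessions are actually active; a sketch using a standard DMV:

```sql
-- Count of user sessions currently connected to the instance
SELECT COUNT(*) AS user_sessions
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;
```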

Possible to set SQL Server Remote Query Timeout per Query for Linked Server calls?

For linked servers, I see how it is possible to change the "remote query timeout" configuration to hint that a call to a linked server should complete or roll back within the specified timeout value. This appears to apply across the whole SQL Server engine. Is it possible to change the hint within a stored procedure, so that a specific stored procedure could run longer if it needs to, while all other non-hinted stored procedures would time out sooner if they run long?
Linked Query Timeout is discussed here:
http://support.microsoft.com/kb/314530
Example code to set it to timeout in 3 seconds is here:
sp_configure 'remote query timeout', 3
go
reconfigure with override
go
Not really advisable to change it within a stored procedure. remote query timeout is a global server setting when altered with sp_configure, so changing it in a stored procedure affects all remote queries for all linked servers on the server.
Additionally, executing sp_configure requires the ALTER SETTINGS server permission, which typically only sysadmin and serveradmin have. Granting these permissions to a data access account would be a security concern since they could potentially take your server down with sp_configure commands.
What I would suggest is creating a second linked server with a different name that you would use with just this one stored procedure. You can, in SSMS, configure a query timeout for each individual linked server. Adding a second linked server would enable you to query the same server with different linked server client settings. You might need to create a DNS CNAME to accomplish this if you're using plain SQL Server Linked Servers.
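As a sketch (the server names below are placeholders), the second linked server and its per-server timeout could be set up like this; sp_serveroption's 'query timeout' value is in seconds, and 0 falls back to the instance-wide remote query timeout:

```sql
-- Hypothetical second linked server alias pointing at the same remote host
EXEC sp_addlinkedserver
     @server     = N'REMOTE_LONGRUNNING',  -- alias used only by the long-running proc
     @srvproduct = N'',
     @provider   = N'SQLNCLI',
     @datasrc    = N'RemoteHost';          -- the actual remote server name

-- Give only this alias a longer query timeout (in seconds)
EXEC sp_serveroption N'REMOTE_LONGRUNNING', 'query timeout', '3600';
```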

How do I set a SQL Server script's timeout from within the script?

I have a large script file (nearly 300MB, and feasibly bigger in the future) that I am trying to run. It has been suggested in the comments of Gulzar's answer to my question about it that I should change the script timeout to 0 (no timeout).
What is the best way to set this timeout from within the script? At the moment I have all of this at the top of the script file in the hopes that one of them does something:
sp_configure 'remote login timeout', 600
go
sp_configure 'remote query timeout', 0
go
sp_configure 'query wait', 0
go
reconfigure with override
go
However, I'm still getting the same result and I can't tell if I'm succeeding in setting the timeout because the response from sqlcmd.exe is the world's least helpful error message:
Sqlcmd: Error: Scripting error.
One solution is to add a GO every 100 or 150 lines:
http://www.red-gate.com/MessageBoard/viewtopic.php?t=8109
sqlcmd -t {n}
Where {n} must be a number between 0 and 65535.
Note that your question is a bit misleading since the server has no concept of a timeout and therefore you cannot set the timeout within your script.
In your context the timeout is enforced by sqlcmd
I think there is no concept of timeout within a SQL script on SQL Server. You have to set the timeout in the calling layer / client.
According to this MSDN article you could try to increase the timeout this way:
exec sp_configure 'remote query timeout', 0
go
reconfigure with override
go
"Use the remote query timeout option to specify how long, in seconds, a remote operation can take before Microsoft SQL Server times out. The default is 600, which allows a 10-minute wait. This value applies to an outgoing connection initiated by the Database Engine as a remote query. This value has no effect on queries received by the Database Engine."
P.S.: By 300 MB you mean the resulting data is 300 MB, right? I hope the script file itself isn't 300 MB. That would be a world record. ;-)
