Execution plan and SQL go to a hung state - sql-server

I am using SQL Server 2008, with databases mirrored in synchronized mode.
I am trying to run some update stored procedures with some nested joins, and they run fine (obviously with reduced performance compared to a server that is not mirrored).
The problem I am facing is that if I select the "Include Actual Execution Plan" option, the query starts running, virtually goes into a hung state, and doesn't recover. I finally have to end-task SSMS.
I only have the public role on the databases, so I can't access any stats.
Can you tell me what exactly (or in general) I should ask the DBA to look at?
The details of the SQL Server are listed below.
Product - SQL Server Enterprise Edition - 64-bit
OS - Windows NT 6.0
Memory - 6143 MB
Processors - 2
Maximum server memory - 3072 MB
Minimum server memory - 16 MB
Any help guiding me in the right direction will be appreciated.
Regards,
Dasso

Because
1) you have activated the [Include Actual Execution Plan] option, and because
2) there is a WHILE statement,
SQL Server will send the client - SQL Server Management Studio - the actual execution plan of every SQL statement executed by every iteration of the WHILE statement. So, if the WHILE loop contains a simple UPDATE and it executes 100 iterations, then SQL Server will send the execution plan of that UPDATE 100 times!
You should decrease the number of iterations of the WHILE loop, or you could use estimated plans instead; see the sketch below.
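For illustration, a minimal sketch of the kind of loop that triggers this (the table, column, and batch size are hypothetical):

-- Hypothetical batched update; with "Include Actual Execution Plan" on,
-- SSMS receives a separate actual plan for EVERY iteration of this loop.
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    UPDATE TOP (1000) dbo.MyTable      -- dbo.MyTable is a placeholder
    SET Processed = 1
    WHERE Processed = 0;

    SET @rows = @@ROWCOUNT;
END
-- To inspect the plan without flooding SSMS, request the estimated plan
-- instead (Ctrl+L), which compiles each statement only once.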

Related

Facing slowness in database server after migrating from SSMS 2008 to SSMS 2016

We have an RDP server which was running the 2008 versions of SSMS and the OS. Recently we migrated this server to 2016: both the OS (2016) and SSMS (2016).
The newly configured machine (with SSMS 2016) is identical to the old one (with SSMS 2008) in terms of system configuration. It has a 64-bit OS with an x64-based processor, 64.0 GB of RAM, and a 2.39 GHz CPU (32 processors).
We are facing a severe performance issue while running stored procedures on the SSMS 2016 server, even though the same code base was migrated from SQL Server 2008. We load data to these servers using SSIS ETL tools.
For example, a stored procedure that takes 1 hour to complete on the old server (with SSMS 2008) takes 10 hours, sometimes even more, on the new server (with SSMS 2016).
To identify the root cause we have tried the approaches below, but so far nothing has worked.
• After migration, we changed the compatibility level of the database from 2008 to 2016.
• Restored the database once again from the old server (SSMS 2008) to the new server (SSMS 2016) without changing the compatibility level.
• Recompiled the stored procedures on the new server (SSMS 2016).
• Updated the statistics on the new server (SSMS 2016).
• Reconfigured/thick-provisioned the drives on the new Windows server.
• While running the time-consuming stored procedures on the new server (SSMS 2016), ran SQL Server Profiler in parallel to identify the issue, but couldn't find anything.
• Ran the same query on the SSMS 2008 and SSMS 2016 servers at the same time; on the old server (SSMS 2008) execution completed much faster than on the new server (SSMS 2016).
Is there any solution we can try to get the same execution time on both servers?
Thanks in Advance
Bala Muraleedharan
I'm going to assume the SQL Server version got updated too, as the SSMS version alone would not make any difference.
It's impossible to tell for sure, of course, but query execution times can be drastically affected by the new cardinality estimator used in SQL Server 2014 and above. 99% of the time things run faster, but once in a while the new CE gets crushed. Add this line to the relevant query in the stored procedure to run it with the old 2008 CE and see if it makes any difference:
OPTION(QUERYTRACEON 9481);
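For clarity, the hint goes at the end of the specific query inside the procedure; in this sketch the table and column names are hypothetical:

-- Hypothetical query from the slow procedure; OPTION (QUERYTRACEON 9481)
-- makes just this statement compile with the legacy (2008) cardinality estimator.
SELECT o.OrderID, o.CustomerID
FROM dbo.Orders AS o                                  -- placeholder tables
JOIN dbo.OrderLines AS ol ON ol.OrderID = o.OrderID
WHERE o.OrderDate >= '20160101'
OPTION (QUERYTRACEON 9481);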
This problem may have two causes:
1 - Check the settings of your SQL Server. Specifically, limit maximum server memory to about 60% of your RAM, and increase the number of tempdb (system database) files to match the number of CPU cores.
2 - Check your SP syntax. If you are using table variables (@Table), change them to temp tables (#Table); see the sketch below.
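A minimal sketch of that change, with a hypothetical column list; unlike table variables, temp tables get statistics, which often helps the optimizer on larger row counts:

-- Before: table variable (no statistics; row estimates are often wrong)
DECLARE @Staging TABLE (ID INT PRIMARY KEY, Amount DECIMAL(18, 2));

-- After: temp table (statistics are created, so the optimizer sees real row counts)
CREATE TABLE #Staging (ID INT PRIMARY KEY, Amount DECIMAL(18, 2));

INSERT INTO #Staging (ID, Amount)
SELECT ID, Amount
FROM dbo.SourceTable;        -- dbo.SourceTable is a placeholder

-- ... use #Staging in the heavy joins, then clean up ...
DROP TABLE #Staging;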

SQL Server 2008 to 2012 - Query Plan Compilation Time Issue

OK, this is a very strange one that is outside of my skill level, unfortunately. We've got a fairly large database (35 GB) with medium usage, which was on oldish hardware and SQL Server 2008. We got a new server with a lot more RAM and faster processors - great! The HDD setup is the same (i.e. RAID / configuration / file location). We backed up the database and restored it on the new server (running in 2012 mode). Everything seemed fine - but all was not well. I'm getting very strange performance issues. Most queries are running slightly faster, which is great, but some queries are running a lot slower on their first run.
Example - we have a query that takes 7 seconds to complete on its initial run. If I run it again, it takes 250 ms. If I change a parameter value, it takes 7 seconds to run again. If I clear the query plan cache, it takes 7 seconds to run again. If I run the same query on the old database, it takes 500 ms on the first run and 400 ms on the second.
So something is definitely up with how long it takes to compile the query. When I return the actual execution plan, it's the same, but the estimated rows / subtree costs are a lot higher on the new server. When I check the plan properties for the compile time, it's 7000 on the new server vs 350 on the old one (assuming that's in ms).
If I amend the query to use OPTION (RECOMPILE), it takes about 3 seconds pretty much every time. So the initial run is faster, but repeat runs are still too slow.
As part of the migration, I rebuilt all indexes and updated the statistics.
So, long story short: the new server is quicker, but only after the query plan has been created. Ideas?
I'm not including any code, as the query isn't the problem - it runs fine on SQL Server 2008 and runs fine on 2012 after the initial run.
[Edit - Example of what I mean by a variable. Between edits I'm just changing 'something' to 'something1']
DECLARE @myReference VARCHAR(50) = 'something';

SELECT Column1, Column2
FROM dbo.Table
WHERE Column3 = @myReference;
[/Edit]
I would expect the general performance to be a slight improvement over 2008 due to the better hardware.
Image of profiler outputs: https://imgur.com/hq5t73t
Confirmation of version: Microsoft SQL Server 2012 (SP4) (KB4018073) -
11.0.7001.0 (X64) Aug 15 2017 10:23:29 Copyright (c) Microsoft Corporation Standard
Edition (64-bit) on Windows NT 6.2 <X64> (Build 9200: )
Obviously the problem is the high compilation time.
Perhaps this issue has already been addressed and fixed by MS. Is the server updated to the latest service pack and cumulative update?
Is trace flag 4199 enabled? SQL to enable it:
DBCC TRACEON (4199, -1)
A long compilation time can also be caused by a large number of statistics, especially auto-created stats. Does removing some of them reduce the compilation time? The sketch below shows how to find them.
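A minimal sketch for listing the auto-created statistics in the current database (read-only; review carefully before dropping anything):

-- List auto-created statistics, which can inflate plan compilation time.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.name                   AS stats_name
FROM sys.stats AS s
WHERE s.auto_created = 1
ORDER BY table_name, stats_name;

-- A candidate can then be dropped like this (the name is hypothetical):
-- DROP STATISTICS dbo.MyTable._WA_Sys_00000003_0EA330E9;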
Recompilation can also be prevented by forced parameterization:
ALTER DATABASE [your db] SET PARAMETERIZATION FORCED;
However, while this may improve the current problematic query, other queries may start to suffer from parameter sniffing.

Queries slow when run by specific Windows account

We are running SQL Server 2014 Express on our domain. We use Windows Authentication to log on. All queries are performed in stored procedures.
Now, the system runs fine for all our users - except one. When he logs on (using our software), all queries take around 10 times longer (e.g. 30 ms instead of 2 ms). The queries are identical, the database is the same, the network speed is the same, the operating system is the same, the SQL Server drivers are the same, connection pooling is the same, DNS is the same. Changing computers does not help. The problem seems to be linked to the account being used.
What on Earth may be the cause for this huge performance hit?
Please advise!
I would try rebuilding the SP (by running an ALTER statement that duplicates its existing structure) to force SQL Server to recompile it. I don't know every way SQL Server caches things, but it can definitely create distinct execution plans for different types of connections, so I wouldn't be surprised if your slow user is running a version with an inefficient execution plan. See the sketch after the link below.
http://www.sommarskog.se/query-plan-mysteries.html
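A minimal sketch of forcing a recompile without editing the procedure, and of checking whether several plans are cached under different connection SET options (the procedure name is hypothetical):

-- Mark the procedure for recompilation; a fresh plan is built on its next execution.
EXEC sp_recompile N'dbo.GetCustomerOrders';    -- hypothetical procedure name

-- Inspect the plan cache: one row per cached plan, with the SET options
-- the plan was compiled under (different options = different plans).
SELECT cp.plan_handle, pa.attribute, pa.value
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) AS pa
WHERE cp.objtype = 'Proc'
  AND pa.attribute = 'set_options';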

SQL Server 2014 Standard Edition slows the machine when the database size grows

I have a scenario where an application server saves 15k rows per second into a SQL Server database. For the first few hours the machine is still usable, but once the database size grows to around 20 GB, the machine seems to become unusable.
I've seen some topics/forums/answers/blogs suggesting limiting the max memory usage of SQL Server. Any thoughts on this?
Btw, I'm using SqlBulkCopy to insert the rows into the database.
I have two suggestions for you:
1 - Database settings:
When you create the database, use a large initial size, and consider a bigger autogrowth size/percentage.
You want to minimize the number of times your filegroups need to grow; see the sketch below.
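A minimal sketch of pre-sizing the files (the database name, logical file names, and sizes are hypothetical; tune them to your expected volume):

-- Pre-size the data and log files and use a fixed autogrowth step,
-- so growth events are rare and predictable.
ALTER DATABASE MyAppDb                         -- hypothetical database
MODIFY FILE (NAME = MyAppDb_Data, SIZE = 50GB, FILEGROWTH = 5GB);

ALTER DATABASE MyAppDb
MODIFY FILE (NAME = MyAppDb_Log, SIZE = 10GB, FILEGROWTH = 1GB);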
2 - Server settings:
In your SQL Server settings, I would recommend removing one logical processor from SQL Server. The OS will use this processor when SQL Server is busy with heavy loads on the other processors. In my experience, this usually gives a nice boost to the OS; a sketch follows.
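A minimal sketch of leaving one logical processor to the OS (this assumes an 8-CPU box; adjust the range to your hardware):

-- Bind SQL Server to CPUs 0-6, leaving CPU 7 free for the operating system.
ALTER SERVER CONFIGURATION
SET PROCESS AFFINITY CPU = 0 TO 6;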

How to limit SQL queries' CPU utilization?

After a large SQL query built through my ASPX pages is run, I see the following two items listed in SQL Profiler:

Event Class         TextData                                        ApplicationName          CPU   Reads  Writes
SQL:BatchCompleted  Select N'Testing Connection...'                 SQLAgent - Alert Engine  1609  0      0
SQL:BatchCompleted  EXECUTE msdb.dbo.sp_sqlagent_get_perf_counters  SQLAgent - Alert Engine  1609  96     0

Their CPU is the same as my query's, so does the query actually take 1609 * 3 = 4827?
The same thing happens with Audit Logout events.
Can I limit this? I am using SQL Server 2005.
First of all, some of what you see in SQL Profiler is cumulative, so you can't always just add the numbers up. For example, an SP:Completed event will show the total time of all the SP:StmtCompleted events that make it up. Not sure if that's your issue here.
The only way to improve the CPU usage is to actually improve your query. Make sure it's using indexes, minimize the number of rows read, etc. Work with an experienced DBA on some of these techniques, or read a book.
The only other mitigation I can think of is to limit the number of CPUs the query runs on (this is called Degree of Parallelism, or DOP). You can set this at the server level or specify it at the query level; see the sketch below. If you have a multi-processor server, this can ensure that a single long-running query doesn't take over all the processors on the box - it will leave one or more processors free for other queries to run.
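A minimal sketch of both options (the value 4 and the table name are hypothetical):

-- Server-wide cap: let any single query use at most 4 CPUs.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;

-- Per-query cap: this statement alone is limited to a single CPU.
SELECT COUNT(*)
FROM dbo.BigTable          -- placeholder table
OPTION (MAXDOP 1);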
No, it takes 1609 milliseconds of CPU in total. What is the duration?
I bet it is the same or slightly more, because I doubt the SQL Agent queries use parallelism.
Are you trying to reduce the CPU used by background processes? If so, you will also reduce functionality by disabling SQL Agent (no scheduled backups then, for example) and restarting SQL Server with the -x switch.
You also cannot stop "Audit Logout" events... this is what happens when you disconnect or close a connection.
However, are you maxing out the processors? If so, you'll need to differentiate between "user" CPU time spent on queries and "system" time spent on paging or (God forbid) generating parity for your RAID 5 disks.
High CPU can often be solved by more RAM and a better disk configuration.
SQL Server 2008 has a new "Resource Governor" feature that may help. I don't know if you're using SQL Server 2008 or not, but you may want to take a look at it; a sketch follows.
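If you do move to SQL Server 2008 or later, a minimal Resource Governor sketch that caps a workload at 30% CPU (the pool, group, function, and application names are hypothetical):

-- Run in master. Cap the pool at ~30% CPU during contention (SQL Server 2008+).
CREATE RESOURCE POOL LowCpuPool WITH (MAX_CPU_PERCENT = 30);
CREATE WORKLOAD GROUP LowCpuGroup USING LowCpuPool;
GO

-- The classifier routes the ASPX application's sessions into the limited group.
CREATE FUNCTION dbo.rgClassifier() RETURNS SYSNAME
WITH SCHEMABINDING
AS
BEGIN
    IF APP_NAME() LIKE '%MyAspxApp%'       -- hypothetical application name
        RETURN N'LowCpuGroup';
    RETURN N'default';
END;
GO

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rgClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;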
This could be an issue with the connection string. If Audit Logout takes too much of your CPU, try playing with a different connection string.
