OK, this is a very strange one that is unfortunately outside of my skill level. We've got a fairly large database (35GB) with medium usage. It was on oldish hardware and SQL Server 2008. We got a new server with lots more RAM and faster processors - great! The HDD setup is the same (i.e. RAID / configuration / file location). We backed up the database and restored it on the new server (running in 2012 mode). Everything seemed fine - but all was not well. I'm getting very strange performance issues. Most queries are running slightly faster, which is great, but some queries are running a lot slower on their first execution.
Example - we have a query that on initial run takes 7 seconds to complete. If I run it again it takes 250ms. If I change a parameter value it takes 7 seconds to run again. If I clear the query plan cache it takes 7 seconds to run again. If I run the same query on the old database, it takes 500ms on first run, 400ms on second run.
So something is definitely up with how long it takes to compile the query. When I look at the actual execution plan, it's the same, but the estimated rows / subtree costs are a lot higher on the new server. When I check the plan properties, the compile time is 7000 on the new server vs 350 on the old server (assuming that's ms).
If I amend the query to add OPTION (RECOMPILE), it takes about 3 seconds to run pretty much every time. So faster on the initial run, but still too slow on subsequent runs.
As part of the migration, I rebuilt all indexes and updated statistics.
So long story short, new server is quicker but only after the query plan has been created. Ideas?
I'm not including any code, as the query isn't the problem - it runs fine on SQL Server 2008 and runs fine on 2012 after the initial run.
[Edit - an example of what I mean by changing a parameter value. Between runs I'm just changing 'something' to 'something1'.]
DECLARE @myReference VARCHAR(50) = 'something';
SELECT Column1, Column2
FROM dbo.[Table]
WHERE Column3 = @myReference;
[/Edit]
I would expect the general performance to be a slight improvement over 2008 due to the better hardware.
Image of Profiler output: https://imgur.com/hq5t73t
Confirmation of version: Microsoft SQL Server 2012 (SP4) (KB4018073) -
11.0.7001.0 (X64) Aug 15 2017 10:23:29 Copyright (c) Microsoft Corporation Standard
Edition (64-bit) on Windows NT 6.2 <X64> (Build 9200: )
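(For reference, the compile time can also be seen directly with SET STATISTICS TIME, which reports parse/compile time separately from execution time:)

SET STATISTICS TIME ON;

DECLARE @myReference VARCHAR(50) = 'something';

SELECT Column1, Column2
FROM dbo.[Table]
WHERE Column3 = @myReference;

-- The Messages tab then shows something like:
--   SQL Server parse and compile time: CPU time = ... ms, elapsed time = ... ms.
--   SQL Server Execution Times: CPU time = ... ms, elapsed time = ... ms.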
Obviously the problem is the high compilation time.
Perhaps this issue has already been addressed and fixed by Microsoft.
Is the server updated to the latest Service Pack and cumulative update?
Is trace flag 4199 enabled? SQL to enable it:
DBCC TRACEON (4199, -1)
A long compilation time can be caused by a large number of statistics objects, especially auto-created stats. Does removing some of them reduce the compilation time?
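For example, the auto-created statistics in the database can be listed with something like this (whether dropping any of them helps is the experiment being suggested):

-- List auto-created column statistics; each one is extra metadata the
-- optimizer may load and consider while compiling a plan.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.name AS stats_name
FROM sys.stats AS s
WHERE s.auto_created = 1
ORDER BY table_name, stats_name;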
Recompilation can also be prevented by forced parameterization:
ALTER DATABASE [your db] SET PARAMETERIZATION FORCED
However, while it can improve the current problematic query, some others may start to suffer from parameter sniffing.
Related
We have an RDP server which was running the 2008 versions of SSMS and the OS. Recently we migrated this server to the 2016 versions of both the OS (2016) and SSMS (2016).
The new machine (with SSMS 2016) is the same as the old one (with SSMS 2008) in terms of system configuration. It has a 64-bit OS with an x64-based processor, 64.0 GB of RAM and a 2.39 GHz CPU (32 processors).
We are facing severe performance issues while running stored procedures on the 2016 server; the same code base was migrated from SQL Server 2008. We are loading data to these servers using SSIS ETL packages.
For example, a stored procedure that takes 1 hour to complete on the old server (SSMS 2008) takes 10 hours, sometimes more, on the new server (SSMS 2016).
To identify the root cause we have tried the approaches below, but so far nothing has worked.
• After migration we changed the database compatibility level from 2008 to 2016 (commands sketched after this list).
• Restored the database once again from the old server (SSMS 2008) to the new server (SSMS 2016) without changing the compatibility level.
• Recompiled the stored procedures on the new server (SSMS 2016).
• Updated the statistics on the new server (SSMS 2016).
• Also done disk reconfiguration / thick provisioning of the new Windows server's drives.
• While running time-consuming stored procedures on the new server (SSMS 2016), ran SQL Server Profiler in parallel to identify the issue, but couldn't find anything.
• Ran the same query on the SSMS 2008 and SSMS 2016 servers at the same time; on the old server (SSMS 2008) execution completed much faster than on the new server (SSMS 2016).
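The compatibility level change, recompiles and statistics updates mentioned above were commands along these lines (database and procedure names are placeholders):

-- Move the restored database to SQL Server 2016 behaviour (130); 100 is the 2008 level.
ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 130;

-- Mark a stored procedure for recompilation on its next execution.
EXEC sp_recompile 'dbo.YourStoredProcedure';

-- Refresh statistics across the whole database.
EXEC sp_updatestats;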
Is there any solution we can try to get the same execution time on both servers?
Thanks in Advance
Bala Muraleedharan
I'm going to assume the SQL Server version got updated too, as the SSMS version alone would not make any difference.
It's impossible to tell for sure, of course, but query execution times can be drastically affected by the new cardinality estimator in SQL Server 2014 and above. 99% of the time things run faster, but once in a while the new CE gets crushed. Add this line to the statement in the stored procedure to run it with the old 2008 CE and see if it makes any difference.
OPTION(QUERYTRACEON 9481);
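For example (the procedure, table and column names here are made up), the hint goes on the individual statement inside the procedure:

-- Hypothetical procedure: OPTION (QUERYTRACEON 9481) makes this statement
-- compile with the legacy (pre-2014) cardinality estimator.
ALTER PROCEDURE dbo.usp_GetOrders
    @CustomerId INT
AS
BEGIN
    SELECT o.OrderId, o.OrderDate
    FROM dbo.Orders AS o
    WHERE o.CustomerId = @CustomerId
    OPTION (QUERYTRACEON 9481);
END;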
This problem may have two causes:
1- Check the settings of your SQL Server. Specifically, limit maximum server memory to around 60% of your RAM and increase the number of tempdb (system database) data files to match your CPU core count (a sketch follows below).
2- Check your SP syntax. If you are using table variables (@Table), change them to temp tables (#Table).
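A minimal sketch of both suggestions, assuming (as an example) a 64 GB box and a hypothetical @Orders table variable:

-- 1) Cap max server memory; 60% of 64 GB is roughly 39000 MB.
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'max server memory (MB)', 39000;
RECONFIGURE;
-- (Extra tempdb data files are added with ALTER DATABASE tempdb ADD FILE, one per core.)

-- 2) Table variable rewritten as a temp table, which gets statistics and so
--    usually produces better cardinality estimates for large row counts.
-- Before: DECLARE @Orders TABLE (OrderId INT, Amount MONEY);
CREATE TABLE #Orders (OrderId INT, Amount MONEY);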
I've got SQL Server Express 2017 (RTM) 14.0.1000.169 installed on a low-powered W2019 server on AWS. It has 2GB of RAM and a 2.40GHz processor (t2.small).
I've had the same setup on other AWS machines with less power than this and they don't seem to have this problem. But those machines were running SQL Server 2014 and Windows 2008 R2.
The problem is that when I run specific queries, they take a huge amount of time. In my applications, I have a routine that sniffs the parameters of the stored procedure being called so that the .NET code can provide parameters as needed without me having to code every single procedure separately.
To do this, I have been running
EXEC [sys].[sp_procedure_params_100_managed] @procedure_name = @PRC
but on this new machine this routine is taking up to 30 seconds to run.
So I've manually created something simpler to see if I can view any issues with the execution plan, but it still runs extremely slowly:
select o.name, prm.*
from sys.parameters prm
inner join sys.objects o on prm.object_id=o.object_id
where o.type='P'
and o.name='prc_THEPROCNAME'
To be clear, there are only 30 stored procedures in the whole database. But the query above takes 2860ms to run in SSMS and is showing 1285 reads.
When I run that on my little win10 machine with SQL Server Express 2014, it takes 12ms with 568 reads on a similar (but larger) database.
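For reference, duration and read counts like those above can be captured with the session statistics options before running the query:

-- Show logical reads and elapsed/CPU time for each statement that follows.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
-- ... then run the query above and check the Messages tab.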
So my question is this: what is the issue with this environment? Is a t2.small too underpowered for Win2019? Does SQL Server 2017 have that much more overhead than SQL Server 2014? Or is it possible that I have a bad configuration somewhere in my SQL Server setup?
I am using SQL Server on a very similar AWS platform (t2.small) and have no performance issues with your query (< 1s). You haven't mentioned whether any other queries are suffering performance problems, but that would be helpful to know. My answer then would be that the t2.small is not underpowered, and it is highly unlikely that SQL '17 would have greater difficulty than SQL '14 when executing this query.
I am using SQL Server 2008, which has databases mirrored in synchronized mode.
I am trying to run some update stored procedures with some nested joins, and they run fine (obviously with reduced performance compared to a server which is not mirrored).
The problem I am facing is that if I select the "show detailed plan" (Include Actual Execution Plan) option, the query starts running, virtually goes into a hung state and doesn't recover. I finally have to end-task SSMS.
I only have the public role on the databases and I can't access any stats.
Can you tell me what exactly (or in general) I should ask the DBA to look at?
The details of the SQL Server are below.
Product - SQL Server Enterprise Edition - 64-bit
OS - Windows NT 6.0
Memory - 6143 MB
Processors - 2
Maximum server memory - 3072 MB
Minimum server memory - 16 MB
Any help guiding me in the right direction will be appreciated.
Regards,
Dasso
Because
1) you have activated the [Include Actual Execution Plan] option, and because
2) there is a WHILE statement,
SQL Server will send the client (SQL Server Management Studio) the actual execution plan of every SQL statement executed by every iteration of the WHILE statement. So if the WHILE loop includes a simple UPDATE and executes 100 iterations, SQL Server will send the execution plan of that UPDATE 100 times!
You should decrease the number of iterations of the WHILE loop, or use the estimated plan instead.
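A minimal sketch of the pattern being described (table name and iteration count are made up): with Include Actual Execution Plan switched on, SSMS has to receive and render one plan per iteration, which is what makes the session look hung.

DECLARE @i INT = 1;

WHILE @i <= 100
BEGIN
    -- With the actual-plan option on, a plan for this UPDATE is streamed
    -- back to SSMS on every single iteration.
    UPDATE dbo.SomeTable
    SET SomeColumn = SomeColumn + 1
    WHERE Id = @i;

    SET @i += 1;
END;

Viewing the estimated plan instead (Ctrl+L) compiles the batch once without executing it, so it avoids the per-iteration overhead.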
I'm having a problem with an ad-hoc query that processes a fairly large amount of data. Upon executing the query, its status immediately goes into the suspended state. It stays suspended for around 25 minutes and then completes execution.
I have a mirror environment with SQL 2000, where the same query executes in around 2 minutes and never goes into a suspended state.
@@version =
Microsoft SQL Server 2005 - 9.00.3068.00 (Intel IA-64) Feb 26 2008 21:28:22 Copyright (c) 1988-2005 Microsoft Corporation Enterprise Edition (64-bit) on Windows NT 5.2 (Build 3790: Service Pack 2)
Perhaps the statistics are out of date and need updating.
Update them, but better still, rebuild the indexes at the same time.
Or maybe you don't have any. Are stats set to create and update automatically?
I've seen cases where they're switched off because someone does not understand what they are for or how updates happen.
Note: the sampling rate of a stats update is based on the previous update, so if you last sampled at 100%, the update may take some time.
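A quick way to check whether the automatic statistics options are switched on for the database (the database name is a placeholder):

-- Both columns should normally be 1.
SELECT name,
       is_auto_create_stats_on,
       is_auto_update_stats_on
FROM sys.databases
WHERE name = 'YourDatabase';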
What happens when you run the query twice? Is it quicker the second time?
It's hard to tell from the limited information, but I'd be curious to know what's happening from a performance perspective on the server while the query is running. You can capture performance metrics with Perfmon, and I've got a tutorial about it here:
http://www.brentozar.com/perfmon
While the query's running, what do the statistics for each of those counters look like? If you capture the statistics as described in that article, you can email 'em to me at brento@brentozar.com and I'll take a look at 'em to see what's going on.
Another thing that'd help is the execution plan of the query. Go into SQL Server Management Studio, put the query in, and click Query, Display Estimated Execution Plan. Right-click anywhere on the plan and save it as a file, and then other people can see what the query looks like.
Then ideally, click Query, Include Actual Execution Plan, run the query, and then go to the Execution Plan tab. Save that one too. If you post the two plans (or email 'em to me) you'll get better answers about what's going on.
Are there any common reasons why upgrading a database from SQL Server 2000 to SQL Server 2005 would result in slower queries? This is coming from an ASP.NET 1.1 application with hundreds of tables; everything is indexed and seems to run well on the older version.
After the upgrade, the first thing you need to do is update the statistics with a full scan and rebuild the indexes, or you will get suboptimal plans.
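A minimal sketch of both steps for one table (the table name is a placeholder); in practice you would repeat this for every table in the database:

-- Rebuilding the indexes also refreshes their statistics with a full scan.
ALTER INDEX ALL ON dbo.SomeTable REBUILD;

-- Update the remaining column statistics with a full scan.
UPDATE STATISTICS dbo.SomeTable WITH FULLSCAN;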
Are you certain that all of your indexes survived the upgrade? Are there any differences in hardware? Have you used the SQL Profiler to determine which queries are running slower to try to track down the problem?
There could be a lot of things. Without specific query examples and other information I don't think anyone will be able to help much.
You may want to re-evaluate your indexes by looking at the execution plans of your most-troublesome queries. The SQL 2005 query optimizer may be coming up with completely different execution plans.
You should also make sure you update statistics on your entire database.
A few things...
What Service Pack are you on?
Have you applied any additional Hotfixes or CUs?
Did you change the db compatibility level from 80 to 90 during the upgrade?
If you are using server side cursors, be aware that there are some performance problems that can start to surface after upgrading from SQL Server 2000 to SQL Server 2005. If this is your situation, there are a couple of hotfixes that might help. Just search for SQL Server 2005 hotfixes and server side cursors.
Aside from that, always be sure to check db integrity after the upgrade, rebuild indexes and update stats.
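Checking the compatibility level and the database integrity on a 2005 instance looks roughly like this (the database name is a placeholder):

-- 80 = SQL Server 2000 behaviour, 90 = SQL Server 2005 behaviour.
SELECT name, compatibility_level
FROM sys.databases
WHERE name = 'YourDatabase';

-- Switch to the 2005 level (sp_dbcmptlevel is the 2005-era syntax).
EXEC sp_dbcmptlevel 'YourDatabase', 90;

-- Verify database integrity after the upgrade.
DBCC CHECKDB ('YourDatabase') WITH NO_INFOMSGS;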
We just experienced this issue after upgrading from 2000 Enterprise SP4 to 2005 Standard 64-bit SP2, on a much more powerful server too (2 x 4-core CPUs, 32GB RAM).
A SELECT query took 2-3 seconds on 2000 and 20+ minutes (and still not finished) on 2005.
Rebuilt ALL indexes, ran sp_updatestats, same results. Very strange; no index hints were used except NOLOCK.
The databases remained in 8.0 (SQL 2000) compatibility mode on the 2005 box, though.
Restoring to another 2005 box as we speak to test
Make sure that the queries and stored procedures you're running are not utilizing any index hints. Like everyone else has mentioned, the optimizer has changed between 2000 and 2005, so these hints may no longer be useful.
Also, if all else fails, there is a bug in the 2005 optimizer addressed in SP2 cumulative update 6 (which requires applying 2 trace flags).
You didn't say which edition you are running.
But if you just moved from a 2000 Standard or Enterprise edition to a 2005 Express edition, note that Express edition only uses one processor. I just had this happen to me last week; one of my queries went from an already slow 1.5 seconds to 55 seconds! I compared the query plans, and the only difference was the parallel operations. Couldn't believe the speed difference.
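The installed edition is easy to confirm if there is any doubt:

-- Returns e.g. 'Express Edition' or 'Standard Edition (64-bit)'.
SELECT SERVERPROPERTY('Edition') AS edition,
       SERVERPROPERTY('ProductVersion') AS product_version;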