We recently did an SSRS upgrade and migration, going from 2008 R2 to 2016 Standard SP1 on a new server. The migration was done with a ReportServer DB restore, so all the 2008 RDLs were carried over.
One of the reports is taking about 20 seconds longer to render (based on the average TimeRendering value from ExecutionLog) on the new server compared to the old one. The report has a footer, so all the pages render at runtime. There are about 1,800 pages' worth of data, and there are many tables with groupings / logic. There are probably ways to optimize the report, but shouldn't the same report run at least as fast on my new server?
Below is a list of things I looked at / noticed, but at this point I don't know where else to look to see why there could be a performance difference.
Old Server:
SSRS 2008 R2
Report data source on the same server
96 GB RAM
4-core CPU
64-bit Windows Server 2008 R2
New Server:
SSRS 2016 SP1
Server on the same SAN / physical location as the report data source
128 GB RAM
4-core CPU
64-bit Windows Server 2016
Things I tried (none of which made a difference):
Opening the RDL in VS 2015 / upgrading the RDL to the new version
Running the report in Chrome vs IE 11
Running the report over RDP
Adding the new report site to the compatibility list in IE
Running a version of the report without the footer: the render time drops to 1 second, but TimeProcessing increases, so the overall runtime is still the same. This was very confusing...
Things I noticed:
The old server will use more CPU than the new one. There are other processes running on it, so that could be the reason, but the new server (SSRS only) never goes over 30% CPU usage. Could this be a config setting somewhere?
What are the data retrieval and processing times from the execution logs? Those might point you in the right direction.
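If it helps, here is a sketch of pulling those timings straight from the ReportServer catalog (assuming the default database name and the standard ExecutionLog3 view; the report path is a placeholder):

-- Compare average timings for one report; all Time* columns are in milliseconds
SELECT ItemPath,
       AVG(TimeDataRetrieval) AS AvgDataRetrievalMs,
       AVG(TimeProcessing)    AS AvgProcessingMs,
       AVG(TimeRendering)     AS AvgRenderingMs,
       COUNT(*)               AS Executions
FROM ReportServer.dbo.ExecutionLog3
WHERE ItemPath = '/Finance/BigReport'   -- placeholder path
GROUP BY ItemPath;

Run it against both servers; whichever column shows the biggest gap tells you which phase regressed.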
We have an RDP server which was running the 2008 versions of SSMS and the OS. Recently we migrated this server to the 2016 versions of both the OS (2016) and SSMS (2016).
The new machine (with SSMS 2016) has the same system configuration as the old one (with SSMS 2008): a 64-bit OS with an x64-based processor, 64.0 GB of RAM, and 32 processors at 2.39 GHz.
We are facing severe performance issues while running stored procedures on the SSMS 2016 server, as the same code base was migrated from SQL Server 2008. We load data to these servers using SSIS ETL tools.
For example, a stored procedure that takes 1 hour to complete on the old server (with SSMS 2008) takes 10 hours, sometimes even more, on the new server (with SSMS 2016).
To identify the root cause we tried the approaches below, but so far nothing has worked (a sketch of the first, third, and fourth items follows the list).
• After the migration, we changed the compatibility level of SQL Server from 2008 to 2016.
• Restored the database once again from the old server (SSMS 2008) to the new server (SSMS 2016) without changing the compatibility level.
• Recompiled the stored procedures on the new server (SSMS 2016).
• Updated the statistics on the new server (SSMS 2016).
• Reconfigured / thick-provisioned the drives on the new Windows server.
• While running time-consuming stored procedures on the new server (SSMS 2016), ran SQL Server Profiler in parallel to identify the issue, but couldn't find anything.
• Ran the same query on the SSMS 2008 and SSMS 2016 servers at the same time; on the old server (SSMS 2008) execution completed much faster than on the new server (SSMS 2016).
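For reference, a sketch of those three steps in T-SQL (the database and procedure names are placeholders):

-- Placeholder names throughout; adjust to your environment
ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 130;  -- 130 = SQL Server 2016, 100 = 2008
EXEC sp_recompile N'dbo.YourSlowProc';  -- marks the procedure for recompilation on its next run
EXEC sp_updatestats;                    -- refreshes out-of-date statistics database-wide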
Note: is there any solution we can try to get the same execution time on both servers?
Thanks in Advance
Bala Muraleedharan
I'm going to assume the SQL Server version got updated too, as the SSMS version would not make any difference.
It's impossible to tell, of course, but query execution times can be drastically affected by the use of the new cardinality estimator in SQL Server 2014 and above. 99% of the time things run faster, but once in a while the new CE gets crushed. Add this line to the stored procedure to run with the old 2008 CE and see if it makes any difference:
OPTION(QUERYTRACEON 9481);
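The hint attaches to an individual statement rather than the procedure as a whole; a sketch with made-up table and parameter names:

-- Hypothetical statement inside the slow procedure; the hint applies per statement
SELECT o.OrderId, o.OrderDate
FROM dbo.Orders AS o
WHERE o.CustomerId = @CustomerId
OPTION (QUERYTRACEON 9481);  -- run this statement under the legacy (pre-2014) cardinality estimator

Note that QUERYTRACEON generally requires sysadmin rights; on SQL Server 2016 SP1 and later, OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION')) achieves the same without them.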
This problem may have two causes:
1- Check your SQL Server settings. Specifically, limit max server memory to 60% of your RAM and increase the number of tempdb (system database) data files to match your CPU core count (a sketch follows below).
2- Check your SP syntax. If you are using table variables (@Table), change them to temp tables (#Table).
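A sketch of the first point, assuming the 64 GB machine described above (the memory value, file name, and path are placeholders):

-- 60% of 64 GB is roughly 38 GB; max server memory is set in MB
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'max server memory (MB)', 39322;
RECONFIGURE;

-- One additional tempdb data file per CPU core (repeat as needed)
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb\tempdev2.ndf', SIZE = 1024MB, FILEGROWTH = 256MB);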
OK, very strange one this, and unfortunately outside of my skill level. We've got a fairly large database (35 GB) with medium usage, which was on oldish hardware and SQL Server 2008. We got a new server with lots more RAM and faster processors - great! The HDD setup is the same (i.e. RAID / configuration / file location). Backed up the database and restored it on the new server (running in 2012 mode). Everything seemed fine - but all was not well. I'm getting very strange performance issues. Most queries run slightly faster, which is great, but some queries are a lot slower on their first run.
Example - we have a query that takes 7 seconds to complete on its initial run. If I run it again it takes 250 ms. If I change a parameter value it takes 7 seconds to run again. If I clear the query plan cache it takes 7 seconds to run again. If I run the same query on the old database, it takes 500 ms on the first run and 400 ms on the second.
So something is definitely up with how long it takes to compile the query. When I return the actual execution plan, it's the same, but the estimated rows / subtree costs are a lot higher on the new server. When I check the plan properties, the compile time is 7000 on the new server vs 350 on the old (assuming that's ms).
If I amend the query to use OPTION (RECOMPILE), it takes about 3 seconds pretty much each and every run. So faster initially, but still too slow on recalls.
As part of the migration, I rebuilt all indexes and updated statistics.
So, long story short: the new server is quicker, but only after the query plan has been created. Ideas?
Not including any code, as the query isn't the problem - it runs fine on SQL Server 2008 and runs fine on 2012 after the initial run.
[Edit - Example of what I mean by variable. Between edits I'm just changing 'something' to 'something1']
DECLARE @myReference VARCHAR(50) = 'something';

SELECT Column1, Column2
FROM dbo.[Table]
WHERE Column3 = @myReference;
[/Edit]
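For clarity, the OPTION (RECOMPILE) variant mentioned above is the same query with the hint appended; a sketch:

DECLARE @myReference VARCHAR(50) = 'something';

SELECT Column1, Column2
FROM dbo.[Table]
WHERE Column3 = @myReference
OPTION (RECOMPILE);  -- compile a fresh plan on every execution, paying the compile cost each run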
I would expect the general performance to be a slight improvement over 2008 due to the better hardware.
Image of profiler outputs: https://imgur.com/hq5t73t
Confirmation of version: Microsoft SQL Server 2012 (SP4) (KB4018073) - 11.0.7001.0 (X64) Aug 15 2017 10:23:29 Copyright (c) Microsoft Corporation Standard Edition (64-bit) on Windows NT 6.2 <X64> (Build 9200:)
Obviously the problem is the high compilation time.
Perhaps this issue has already been addressed and fixed by MS.
Is the server updated to the latest Service Pack and cumulative update?
Is trace flag 4199 enabled? SQL to enable it:
DBCC TRACEON (4199, -1)
A long compilation time can be caused by a large number of statistics, especially auto-created stats. Does removing some of them reduce the compilation time?
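A sketch for finding candidates, run in the affected database (drop with care - auto-created stats simply get recreated on demand):

-- List auto-created column statistics, which in large numbers can inflate compile time
SELECT OBJECT_NAME(s.[object_id]) AS table_name, s.name AS stats_name
FROM sys.stats AS s
WHERE s.auto_created = 1
ORDER BY table_name, stats_name;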
Recompilation can also be prevented by forced parameterization:
ALTER DATABASE [your db] SET PARAMETERIZATION FORCED;
However, while it may fix the current problematic query, some other queries may start to suffer from parameter sniffing.
We are using MS SQL Express, where the database size is limited to 10 GB. We ran into problems lately when we hit the 10 GB, and only found out from users telling us that the apps were not working anymore.
In MS SQL Server Management Studio we have the option to view
Reports - Standard Reports - Disk Usage
and there we see what percentage of the table space is currently unused.
Is it possible somehow to get a notification when reaching 90% of the DB size limit (= 9 GB)?
Thanks for the help
Andreas
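One hedged approach, since SQL Express has no SQL Server Agent to schedule alerts: run a check like the sketch below periodically (e.g. via Windows Task Scheduler and sqlcmd) against the database in question. The threshold and message wording are placeholders.

-- Warn when the data files approach the 10 GB Express limit
DECLARE @sizeMB DECIMAL(10, 2);

SELECT @sizeMB = SUM(size) * 8.0 / 1024      -- size is counted in 8 KB pages
FROM sys.database_files
WHERE type_desc = 'ROWS';                    -- the 10 GB cap applies to data files, not the log

IF @sizeMB >= 9 * 1024                       -- 90% of 10 GB
    RAISERROR('Database has reached 90%% of the 10 GB Express limit.', 16, 1);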
OK… I've been tasked with figuring out why an intranet site is running slow for a small-to-medium-sized company (fewer than 200 people). After three days of looking on the web, I've decided to post what I'm looking at. Here is what I know:
Server: HP DL380 Gen9 (new)
OS: MS Server 2012 – running hyper-v
RAM: 32GB
Server 2012 was built to run 2 to 3 VMs at most (only running one VM at the moment)
16GB of RAM dedicated for the VHD (not dynamic memory)
Volume was created to house the VHD
The volume has a fixed 400GB VHD inside it.
Inside that VHD is Server 2008 R2 running SQL Server 2008 R2 and hosting an IIS 7 intranet.
Here is what’s happening:
A page in the intranet is set to run a couple of stored procedures that do some checking against data in other tables, as well as insert data (some sort of attendance DB), after employee data is entered. The code looks like it creates and drops approximately 5 tables in the process of crunching the data. The page takes about 1 min 50 secs to run on the newer server. I was able to get hold of the old server and run a speed test: 14 seconds.
I'm at a loss… a lot of sites say to alter the code. However, it was running quickly before.
The old server is a 32-bit 2003 server running SQL Server 2000… the new one is obviously 64-bit.
Any ideas?
You should find out where the slowness is coming from.
The bottleneck could be in SQL Server, in IIS, in the code, or on the network.
Find the SQL statements that are executed and run them directly in SQL Server (see the sketch after this list).
Run the code outside of IIS web pages.
Run the code from a different server.
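For that first step, a minimal sketch (the procedure name and parameter are made up):

-- Run the suspect statement directly with timing and I/O statistics turned on
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

EXEC dbo.usp_ProcessAttendance @EmployeeId = 123;  -- hypothetical procedure name

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;

If the timings here match the slow page, the problem is in SQL Server rather than IIS or the network.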
Solved my own issue... it just took a while for me to get back to this. Hopefully this will help others.
Turned on SQL Activity Monitor under Tools \ Options => At Startup => Open Object Explorer and Activity Monitor.
Opened Recent Expensive Queries, right-clicked the top queries, and selected Show Execution Plan. This showed a missing index for the DB. Added the index by clicking the plan info at the top.
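The missing-index hint generates a CREATE INDEX script; a purely hypothetical example of the shape it takes (the index, table, and column names are placeholders):

-- The hint supplies the real table, key columns, and included columns
CREATE NONCLUSTERED INDEX IX_Attendance_EmployeeId
ON dbo.Attendance (EmployeeId, AttendanceDate)
INCLUDE (Status);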
Hope this helps!
I've asked for more RAM for our SQL Server (currently we have a server with 4 GB of RAM), but our administrator told me that he would accept that only if I can show him the better performance from having more memory available, because he has checked the server logs and SQL Server is using only 2.5 GB.
Can someone tell me how I can prove to him the effect of more available memory (for example, on a query performance issue)?
Leaving aside the fact that you don't appear to have memory issues...
Some basic checks to run:
Check the Page Life Expectancy counter: this is how long a page will stay in memory.
Target Server Memory is how much RAM SQL Server wants to use.
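Both can be read directly from a DMV; a minimal sketch:

-- Page life expectancy plus target vs. total memory, from the performance-counter DMV
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Page life expectancy',
                       'Target Server Memory (KB)',
                       'Total Server Memory (KB)');

If Total consistently sits at Target and PLE is low, the instance is under memory pressure and would likely benefit from more RAM.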
Note on PLE:
"300 seconds" is the figure usually quoted, but our busy server has a PLE of 80k+ seconds - nearly a full day. That is with databases at 15x RAM, peaks of 3k new rows per second, and lots of read aggregations.
Edit, Oct 2011
I found this article on PLE by Jonathan Kehayias: http://www.sqlskills.com/blogs/jonathan/post/Finding-what-queries-in-the-plan-cache-use-a-specific-index.aspx
The comments have many of the usual SQL Server suspects commenting.