SSRS accessing SQL ReportServer DB every 10 seconds - sql-server

In SQL Profiler (for SPs), it seems my (single) SSRS Report Server instance is repeatedly accessing the ReportServer SQL DB every 10 seconds. The SSRS Stored Procedures being run are:
GetCurrentProductInfo
GetAllConfigurationInfo
GetAnnouncedKey
AnnounceOrGetKey
Is there any way to reduce the frequency of these SSRS requests?
I've changed various settings in RSReportServer.config, including PollingInterval (from 10s to 28800s), but none of this has had any effect on the above, even after restarting the service and rebooting the machine.
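For reference, a minimal sketch of the relevant fragment of RSReportServer.config, assuming a default install (value in seconds):

```xml
<Service>
  <!-- How often the server polls the ReportServer DB for subscription/event work -->
  <PollingInterval>10</PollingInterval>
</Service>
```

Note that PollingInterval governs the event/subscription polling loop; the key-related procedures above (AnnounceOrGetKey, etc.) may run on a separate internal timer, which would explain why changing this setting has no visible effect on them.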
Thanks for info/help.

Related

PBI Service dataflow - SQL Server data query timeout error

Have you had experience with SQL queries in Power BI dataflows that run for longer than 10 minutes for one SQL query?
I have migrated an Excel Power Query script to PBI dataflows. One of the migrated queries fails, and it fails consistently at exactly 10:00 minutes. But no "timeout" error is displayed, just a message that says, "Query has cancelled". My query does have the [Command Timeout] property set to 60 minutes, but the property appears to be ignored in the PBI service dataflow.
I had written the Excel Power Query script with multiple SQL connections; most connections are stored procedures that return a dataset. One SQL proc takes about 30 minutes to complete, and the entire set of queries takes about 50 minutes. We're trying to stage datasets in dataflows and improve reporting performance for our end users, who currently pull datasets from SSRS reports. This dataflow query failure is a major roadblock.
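For comparison, this is how the timeout is typically set in the M source call itself; a sketch only, where the server, database, and procedure names are placeholders (and the PBI service may still enforce its own upper limit regardless):

```m
// Set a 60-minute command timeout on the SQL source
// ("MyServer", "MyDb", dbo.MyLongProc are hypothetical names)
let
    Source = Sql.Database("MyServer", "MyDb",
                 [CommandTimeout = #duration(0, 1, 0, 0)]),  // 1 hour
    Result = Value.NativeQuery(Source, "EXEC dbo.MyLongProc")
in
    Result
```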
I'm curious if anyone has had a similar experience. To me it seems ridiculous that a dataflow SQL connection can't run past 10 minutes.
Thanks for your input!

Facing slowness in database server after migrating from SSMS 2008 to SSMS 2016

We have an RDP server that was running the 2008 versions of SSMS and the OS. Recently we migrated this server to the 2016 versions of both the OS and SSMS.
The newly configured machine (with SSMS 2016) matches the old one (with SSMS 2008) in system configuration: a 64-bit OS on an x64-based processor, 64.0 GB of RAM, and a 2.39 GHz CPU (32 processors).
We are facing severe performance issues when running stored procedures on the SSMS 2016 server, even though the same code base was migrated from SQL Server 2008. We load data to these servers using SSIS ETL packages.
For example, a stored procedure that takes 1 hour to complete on the old server (SSMS 2008) takes 10 hours, sometimes even more, on the new server (SSMS 2016).
To identify the root cause we have tried the approaches below, but so far nothing has worked.
• After migration, we changed the database compatibility level from 2008 to 2016.
• Restored the database again from the old server (SSMS 2008) to the new server (SSMS 2016) without changing the compatibility level.
• Recompiled the stored procedures on the new server (SSMS 2016).
• Updated statistics on the new server (SSMS 2016).
• Reconfigured/thick-provisioned the new Windows server's disk drives.
• While running the time-consuming stored procedures on the new server (SSMS 2016), ran SQL Server Profiler in parallel to identify the issue, but couldn't find anything.
• Ran the same query on the SSMS 2008 and SSMS 2016 servers at the same time; on the old server (SSMS 2008), execution completed much faster than on the new server (SSMS 2016).
Is there any solution we can try to get the same execution time on both servers?
Thanks in advance,
Bala Muraleedharan
I'm going to assume the SQL Server version got updated too, as the SSMS version alone would not make any difference.
It's impossible to tell for certain, but query execution times can be drastically affected by the new cardinality estimator used in SQL Server 2014 and above. 99% of the time things run faster, but once in a while the new CE performs far worse. Add this line to the stored procedure to run with the old 2008 CE and see if it makes any difference.
OPTION(QUERYTRACEON 9481);
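To make the placement concrete, a sketch (table and column names hypothetical) — the hint goes at the end of the affected statement, and note that QUERYTRACEON normally requires sysadmin permissions:

```sql
-- Per-query: force the legacy (2008-era) cardinality estimator
SELECT o.CustomerID, SUM(o.Total)
FROM dbo.Orders AS o
GROUP BY o.CustomerID
OPTION (QUERYTRACEON 9481);

-- Database-wide alternative on SQL Server 2016:
ALTER DATABASE SCOPED CONFIGURATION
SET LEGACY_CARDINALITY_ESTIMATION = ON;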
This problem may have two causes:
1- Check the configuration of your SQL Server. Specifically, limit max server memory to about 60% of your RAM, and increase the number of tempdb (system database) data files to match your CPU core count.
2- Check your SP syntax. If you are using table variables (@Table), change them to temp tables (#Table).
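A sketch of the second suggestion, with hypothetical table and column names — temp tables get statistics and can be indexed, whereas the optimizer tends to assume very few rows in a table variable:

```sql
-- Before: table variable (no statistics)
DECLARE @Orders TABLE (OrderID int PRIMARY KEY, Total money);

-- After: temp table (gets statistics, supports indexes)
CREATE TABLE #Orders (OrderID int PRIMARY KEY, Total money);

INSERT INTO #Orders (OrderID, Total)
SELECT OrderID, Total
FROM dbo.Orders;

-- ... use #Orders in place of @Orders ...

DROP TABLE #Orders;
```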

SSRS 2008R2 Upgrade to 2016 Performance Issues

We recently did an SSRS upgrade and migration going from 2008R2 to 2016 Standard SP1 on a new server. The migration was done with a ReportServer DB restore, so all the 2008 RDLs were copied over.
One of the reports takes about 20 seconds longer to render (based on the average TimeRendering value from ExecutionLog) on the new server compared to the old one. The report has a footer, so all the pages render at runtime. There are about 1,800 pages' worth of data, and there are many tables with groupings/logic. There are probably ways to optimize the report, but shouldn't the same report run at least as fast on my new server?
Below is a list of things I looked at / noticed, but at this point I don't know where else to look to see why there could be a performance difference.
Old Server:
2008R2
Report data source on same server
96GB RAM
4 core CPU
64 bit 2008R2 Windows Server
New Server:
2016 SP1
Server on same SAN / physical location as report data source
128 GB RAM
4 core CPU
64 bit 2016 Windows Server
Things I tried (none of which made a difference):
Opening the RDL in VS 2015 / upgrading the RDL to new version
Running the report in Chrome vs IE 11
Running the report on RDP
Adding the new report site to the compatibility list in IE
Running a version of the report without the footer: the render time drops to 1 second, but TimeProcessing increases, so the overall runtime stays the same. This was very confusing...
Things I noticed:
The old server will use more CPU than the new server. There are other processes running on the old one, so that could account for it, but the new server (SSRS only) never goes over 30% CPU usage. Could this be a config setting somewhere?
What are the data retrieval and processing times from the execution logs? Those might point you in the right direction.
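To pull those timings, something along these lines against the ReportServer catalog should work (times are in milliseconds):

```sql
-- Per-execution timing breakdown from the SSRS execution log
SELECT TOP (20)
       ItemPath,
       TimeStart,
       TimeDataRetrieval,   -- running the dataset queries
       TimeProcessing,      -- grouping, sorting, aggregation
       TimeRendering        -- pagination and output format
FROM   dbo.ExecutionLog3
ORDER BY TimeStart DESC;
```

Comparing TimeDataRetrieval vs TimeProcessing vs TimeRendering between the two servers should show which phase regressed.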

SSAS mdx query penalty when run as a regular user vs ran as server administrator

I'm running a MDX query on a MS SSAS Tabular instance using two different users. The first is a Server Administrator and the second one is a member of the Reading security Role. The query runs in ~0.8 seconds under the Server Admin while the unprivileged user runs it in ~6 seconds.
Any ideas on what's causing this performance penalty?
Here are a few more details:
- the exact same MDX is run by both users. Checked with the profiler;
- the Reading role has no DAX row filters defined;
- I ran the query multiple times on the two accounts and the execution times remained consistent;
- running MS SQL Server 2012 (Enterprise).

Deadlock on communication buffer: SQL Server 2008 R2 running stored procedures for data warehouse

Currently running SQL Server 2008 R2 SP1 on 64-bit Windows Server 2008 R2 Enterprise, on an Intel dual 8-core server with 128 GB RAM and a 1 TB internal SCSI drive.
Server has been running our Data Warehouse and Analysis Services packages since 2011. This server and SQL instance is not used for OLTP.
Suddenly and without warning, all of the jobs that call SSIS packages that build the data warehouse tables (using Stored Procedures) are failing with "Deadlock on communication buffer" errors. The SP that generates the error within the package is different each time the process is run.
However, the jobs will run fine if SQL Server Profiler is running to trace at the time that the jobs are initiated.
This initially occurred on our Development server (same configuration) in June. Contact with Microsoft identified disk I/O issues, and they suggested setting MAXDOP = 8, which has mitigated the deadlock issue but introduced a problem where the processes can take up to 3 times longer at random intervals.
This just occurred today on our Production server. MaxDOP is currently set to zero. There have been no changes to OS, SQL Server or the SSIS packages in the past month. The jobs ran fine overnight on September 5th, but failed with the errors overnight last night (September 6th) and continue to fail on any retry.
The length of time that any one job runs before failing is not consistent, nor is there consistency between jobs. Jobs that previously ran to completion in 2 minutes will fail in seconds, while jobs that normally take 2 hours may run anywhere from 30 to 90 minutes before failing.
Have you considered changing the isolation level of the database? This can help when parallel reads and writes are happening on the database.
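One common form of this suggestion is read committed snapshot isolation; a sketch, with the database name as a placeholder:

```sql
-- Readers see a versioned snapshot instead of taking shared locks,
-- which can reduce reader/writer blocking and some deadlocks.
-- Switching it on needs exclusive access to the database.
ALTER DATABASE YourDWDatabase
SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
```

Whether this applies depends on whether the deadlocks involve reader/writer conflicts; it does not help with writer/writer or parallelism-related deadlocks.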
