SQL Server Reporting Services - Fast TimeDataRetrieval - Long TimeProcessing

An application that I support has recently begun taking an extended amount of time to execute reports in SQL Server Reporting Services. The reports being executed are not terribly complex. There are multiple stored procedures (between 5 and 8) which return anywhere from a handful to 8,000 records total. Reports generally run from 2 to 100 pages. One can argue (and I have) the merit of a 100-page report, but the client is footing the bill.
At any rate, the problem is that even a report returning 500 records (11 pages) takes 5 minutes to come back to the browser. In the execution log, the TimeDataRetrieval is 60 seconds, but the TimeProcessing is 235 seconds. It seems bizarre to me that my queries run so quickly, yet it takes Reporting Services so long to process the data.
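For context, these numbers come from the report server's execution log, which can also be queried directly from the ReportServer catalog database. A minimal sketch, assuming the default ReportServer database name and the ExecutionLog3 view (newer SSRS versions; older ones expose ExecutionLog/ExecutionLog2 instead):

using System;
using System.Data.SqlClient;

class ExecutionLogCheck
{
    static void Main()
    {
        // Server name is a placeholder; point this at the report server's catalog DB.
        using (var conn = new SqlConnection("Server=myReportServer;Database=ReportServer;Integrated Security=true"))
        {
            conn.Open();
            var cmd = new SqlCommand(
                @"SELECT TOP 20 ItemPath, TimeDataRetrieval, TimeProcessing, TimeRendering
                  FROM ExecutionLog3 ORDER BY TimeStart DESC", conn);
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // All three durations are reported in milliseconds.
                    Console.WriteLine("{0}: retrieval={1} ms, processing={2} ms, rendering={3} ms",
                        reader["ItemPath"], reader["TimeDataRetrieval"],
                        reader["TimeProcessing"], reader["TimeRendering"]);
                }
            }
        }
    }
}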
Any suggestions are greatly appreciated.
Kind Regards,
Bernie

Forgot to post an update to this. I found the problem. It was associated with an image with an external source on the report. Recently, the report server was disallowed internet access. So when Reporting Services processed the report, it tried to do an HTTP GET to retrieve the image. Since the server was disallowed outbound internet access, the request would eventually time out with a 301 error. Unfortunately, this timeout period was very long, and I suspect it happened for each page of the report, because the longer the report, the longer the processing time. At any rate, I was not able to get outbound internet access reopened on the server, so I took a different path. Since the web server where the image was hosted and the reporting server were on the same local network, I was able to modify the HOSTS file on the reporting server with the image host's domain and local IP address. For example:
www.someplacewheremyimageis.com/images/myimage.gif
The reporting server would try to resolve this via its local DNS and no doubt get the external IP address X.X.X.X,
so I modified the HOSTS file on the report server by adding the following line:
192.168.X.X www.someplacewheremyimageis.com
So now, when Reporting Services tries to generate the report, the name resolves to the internal IP address above and the image is included in the report.
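If you need to confirm that the override is actually being picked up, resolution can be checked from the report server itself through the normal name-resolution stack (which consults the HOSTS file, unlike nslookup, which queries DNS directly and bypasses it). A quick sketch, reusing the placeholder hostname from above:

using System;
using System.Net;

class HostsCheck
{
    static void Main()
    {
        // Should print the internal address (e.g. 192.168.X.X) once the HOSTS entry takes effect.
        foreach (IPAddress addr in Dns.GetHostAddresses("www.someplacewheremyimageis.com"))
        {
            Console.WriteLine(addr);
        }
    }
}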
The reports are now running snappier than ever.
It's these kinds of problems, the ones you figure out in a flash of inspiration at 4:30 am after hours of beating your head against your keyboard, that make it wonderful and terrible to be a software developer.
Hope this helps someone.
Thanks,
Bernie

Related

Azure SQL Server size resets to 1 GB

I am working on a web application based on EF with over 1 GB of seeded data. The application is hosted in Azure under a BizSpark subscription account.
Some time back, I created an App Service Plan with the web application associated with an App Service. I started uploading data to SQL Server, but this failed. I realized that the default size was 1 GB, so I upgraded the plan to a Standard plan with 10 DTUs and 10 GB yesterday, and uploaded the data around 5 days back.
After that, due to certain issues, I wiped out the App Service Plan and created a new one. The SQL Server size and setup were not modified.
I created a new plan, uploaded the application, and observed the following:
Database tables got wiped out
Database pricing tier was reset to Basic
I upgraded the database plan once again to 10 GB and 10 DTUs yesterday night. I see that the change has not taken effect yet.
How long does it take for the size change to take effect?
Will the tables have to be recreated?
Update (9/11):
I just tried uploading data via the bcp tool, but I got the following errors:
1000 rows sent to SQL Server. Total sent: 51000
Communication link failure
Text column data incomplete
Communication link failure
TCP Provider: An existing connection was forcibly closed by the remote host.
Communication link failure
This is new; yesterday (9/10), before I changed the DB size, I got the following error:
1000 rows sent to SQL Server. Total sent: 1454000
The database 'db' has reached its size quota. Partition or delete data, drop indexes, or consult the documentation for possible resolutions.
BCP copy in failed
I don't understand the inconsistency in the failure messages, or why the upload failed for the same data file.
Regards,
Lalit
Scaling a database up from Basic to Standard should not take more than a few minutes, and the schemas and tables inside the database are left unchanged.
You may want to look into the Activity log of your Azure server to understand who initiated the scale-down from Standard to Basic. Furthermore, you may want to turn on the Auditing feature to see all the operations performed on your database.
On connectivity issues, you can start by looking at this documentation page. It also looks like you have inserted rows into your database several times through the BCP command, which causes a space issue on the Basic tier.
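As a sanity check after scaling, the current service objective and the space actually used can be read from the database itself. A sketch, with placeholder server/database/credential values:

using System;
using System.Data.SqlClient;

class AzureDbCheck
{
    static void Main()
    {
        var cs = "Server=tcp:myserver.database.windows.net;Database=db;User ID=user;Password=pass;Encrypt=true";
        using (var conn = new SqlConnection(cs))
        {
            conn.Open();

            // Current tier/objective, e.g. 'Basic' or 'S0'.
            var tierCmd = new SqlCommand("SELECT DATABASEPROPERTYEX(DB_NAME(), 'ServiceObjective')", conn);
            Console.WriteLine("Service objective: " + tierCmd.ExecuteScalar());

            // Space reserved by the database, in MB.
            var sizeCmd = new SqlCommand("SELECT SUM(reserved_page_count) * 8.0 / 1024 FROM sys.dm_db_partition_stats", conn);
            Console.WriteLine("Used (MB): " + sizeCmd.ExecuteScalar());
        }
    }
}

If the service objective still reports Basic long after the scale request, the scale operation itself most likely failed or was reverted.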

SQL Server 2016 Mobile Report Datasets Expire after about 30 Days

We are in the process of creating a suite of SQL Server 2016 Reporting Services Mobile Reports for our company's Cloud offering to customers; however, we keep running into a situation where all datasets expire after a certain time.
We have found that all the datasets on the server seem to stop working 30 days after they have been created, and an error message (“The data set could not be processed. There was a problem getting data from the Report Server Web Service.”) is displayed.
To resolve this, all the datasets need to be opened manually and re-saved onto the server. As you can imagine, this isn't really a suitable solution, as we have a large number of reports and datasets for each customer.
After a bit of investigation, we have managed to pinpoint a “Snapshotdata” table in the report server database which has an “ExpirationDate” column, which seems to be linked to the issue.
Has anyone else come across this before, and could you please advise on a possible solution to the datasets expiring? Why would the datasets have an expiration date on them anyway?
A dataset should not expire once it has been created.
In your scenario, did you create a cache for those datasets? Did anything change in the datasets?
You said the mobile report prompted a "dataset could not be processed" error. Please go to the dataset property pane and check whether it returns data successfully by clicking Load Data. If not, change to another account and try again.
Also, please check whether the account used to connect to the data source expired after 30 days, which might have caused the failure of data retrieval.
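For what it's worth, the expiration the question pinpointed can be inspected directly. A read-only sketch against the report server catalog database, using the table and column named in the question (this is an internal, undocumented table, so treat it as look-but-don't-touch):

using System;
using System.Data.SqlClient;

class SnapshotCheck
{
    static void Main()
    {
        // Server name is a placeholder.
        using (var conn = new SqlConnection("Server=myReportServer;Database=ReportServer;Integrated Security=true"))
        {
            conn.Open();
            // Lists snapshot data entries that have already passed their expiration date.
            var cmd = new SqlCommand(
                "SELECT SnapshotDataID, ExpirationDate FROM SnapshotData WHERE ExpirationDate < GETDATE()", conn);
            using (var reader = cmd.ExecuteReader())
                while (reader.Read())
                    Console.WriteLine("{0} expired {1}", reader[0], reader[1]);
        }
    }
}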

SQL Server temporary connection failure from particular clients

We have a client deployment of our software that is showing intermittent SQL Server connection failures, and we are struggling to understand them.
Our system consists of a SQL Server DB (2012) and 14 identical engines, each installed on a Windows 2012 VM. Each of these was created from the same template so they should be identical. The engines consist of a Windows service that connects to the DB on startup by reading a single row from a table. If the connection fails they will wait a few seconds and try again, until they get a connection.
In this particular case, the VMs were all rebooted due to a Windows Update. (The SQL server had the update/reboot about 12 hours before). They came online within a few minutes of each other. 12 of the engines started up without any problem. Two of them, however, failed to connect to the DB with:
"The underlying provider failed on Open."
Those two engines then started to poll, and continued to get this error for many hours. The rest of the engines had started up and were fine. We have a broker service too that was accessing the DB throughout and showed no connection issues.
When the client noticed this issue, they restarted the engine services on the two problem VMs, and the two engines connected to the DB just fine.
We are trying to understand what could have happened here. I guess my main questions are:
What could be an explanation of why 12 connections succeed and two fail? There's absolutely no difference as far as we know between the engines. The query itself is very simple.
Why did the connection continue to fail for those two engines until the service was restarted? This suggests to me that there is some process-level failed state that is only cleared when restarting the services. I've looked at the code to see if it was reusing the connections. It uses Entity Framework to read the single table row, and we create a fresh DbContext each time. I don't understand how this could go wrong.
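For reference, the startup read is roughly this pattern (a minimal sketch with hypothetical entity and context names, not our actual code):

using System;
using System.Data.Entity;
using System.Linq;
using System.Threading;

// Hypothetical stand-ins for the real entity and context.
public class Config { public int Id { get; set; } public string Value { get; set; } }
public class EngineDbContext : DbContext { public DbSet<Config> Configs { get; set; } }

class EngineStartup
{
    static Config ReadConfigWithRetry()
    {
        while (true)
        {
            try
            {
                // A fresh context per attempt, so no connection object is reused across failures.
                using (var db = new EngineDbContext())
                {
                    return db.Configs.Single(c => c.Id == 1);
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine("DB read failed, retrying in 5s: " + ex.Message);
                Thread.Sleep(5000);
            }
        }
    }
}

(Worth noting: ADO.NET does temporarily block a connection pool after an open failure, the pool "blocking period", but that is documented to last seconds up to about a minute, so it would not by itself explain hours of failures.)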
We noted that there was a CheckDB operation running on the DB around the time the services were coming up, and we wondered if this could be related to the issue. However, the client says that this runs every night and hasn't caused problems in the past. And it wouldn't explain why the engines never recovered on their own.
Thanks in advance for any help.

Virtualized IIS7 page runs slow when querying against SQL Server

OK… I've been tasked with figuring out why an intranet site is running slow for a small-to-medium-sized company (fewer than 200 people). After three days of looking on the web, I've decided to post what I'm looking at. Here is what I know:
Server: HP DL380 Gen9 (new)
OS: MS Server 2012 – running Hyper-V
RAM: 32GB
Server 2012 was built to run at most 2 to 3 VMs (only running one VM at the moment)
16 GB of RAM dedicated to the VM (not dynamic memory)
Volume was created to house the VHD
The volume has a fixed 400GB VHD inside it.
Inside that VHD is Server 2008 R2 running SQL Server 2008 R2 and hosting an IIS7 intranet site.
Here is what’s happening:
A page in the intranet is set to run a couple of stored procedures that do some checking against data in other tables and insert data (some sort of attendance DB) after employee data is entered. The code looks like it creates and drops approximately 5 tables in the process of crunching the data. The page takes about 1 min 50 secs to run on the newer server. I was able to get hold of the old server and run a speed test: 14 seconds.
I'm at a loss… a lot of sites say to alter the code. However, it was running quickly before.
The old server is a 32-bit Server 2003 box running SQL Server 2000… the new one is obviously 64-bit.
Any ideas?
You should find out where the slowness is coming from.
Is the bottleneck in SQL Server, in IIS, in the code, or on the network? A few ways to narrow it down (a timing sketch follows this list):
Find the SQL statements that are executed and run them directly in SQL Server
Run the code outside of IIS web pages
Run the code from a different server
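To put numbers on this, the stored procedures can be timed completely outside of IIS. A rough sketch; server, database, and procedure names are placeholders:

using System;
using System.Data;
using System.Data.SqlClient;
using System.Diagnostics;

class ProcTimer
{
    static void Main()
    {
        using (var conn = new SqlConnection("Server=myIntranetSql;Database=Attendance;Integrated Security=true"))
        {
            conn.Open();
            var sw = Stopwatch.StartNew();

            var cmd = new SqlCommand("dbo.usp_ProcessAttendance", conn);
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.CommandTimeout = 300; // generous, since the page currently takes ~2 minutes
            cmd.ExecuteNonQuery();

            sw.Stop();
            Console.WriteLine("Proc ran in {0} ms", sw.ElapsedMilliseconds);
        }
    }
}

If this alone accounts for the 1 min 50 secs, the problem is on the SQL Server side (indexes, statistics, tempdb); if it is fast, look at IIS and the page code.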
Solved my own issue... just took a while for me to get back to this. Hopefully this will help others.
Turned on SQL Activity Monitor under Tools > Options > At Startup > Open Object Explorer and Activity Monitor.
Opened Recent Expensive Queries, right-clicked the top queries, and selected Show Execution Plan. This showed a missing index for the DB, which I added by clicking the plan info at the top.
Hope this helps!
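For anyone who prefers a scriptable route over the Activity Monitor UI, the same kind of suggestion is exposed through SQL Server's missing-index DMVs. A rough sketch; treat the output as hints to evaluate, not guaranteed wins:

using System;
using System.Data.SqlClient;

class MissingIndexes
{
    static void Main()
    {
        using (var conn = new SqlConnection("Server=localhost;Database=master;Integrated Security=true"))
        {
            conn.Open();
            var cmd = new SqlCommand(@"
                SELECT TOP 10 mid.statement AS table_name,
                       mid.equality_columns, mid.inequality_columns, mid.included_columns,
                       migs.user_seeks, migs.avg_user_impact
                FROM sys.dm_db_missing_index_details mid
                JOIN sys.dm_db_missing_index_groups mig ON mid.index_handle = mig.index_handle
                JOIN sys.dm_db_missing_index_group_stats migs ON mig.index_group_handle = migs.group_handle
                ORDER BY migs.user_seeks * migs.avg_user_impact DESC", conn);
            using (var reader = cmd.ExecuteReader())
                while (reader.Read())
                    Console.WriteLine("{0}: eq=({1}), incl=({2}), seeks={3}",
                        reader["table_name"], reader["equality_columns"],
                        reader["included_columns"], reader["user_seeks"]);
        }
    }
}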

Establishing SQL Connection Taking 10 - 15 Seconds

We are having some strange performance issues and I was hoping somebody may be able to point us in the right direction. Our scenario is an ASP.NET MVC C# website using EF4 POCO in IIS 7 (highly specced servers, dedicated just for this application).
Obviously it's slow on application startup, which is to be expected, but once that has loaded you can navigate the site and everything is nice and snappy: 0.5 ms page loads (we are using Mini-Profiler). Now, if you stop using the site for, say, 5-10 minutes (we have the app pool recycle set to 2 hours, and we are logging, so we know that it hasn't been recycled), then the first page load is ridiculously slow, 10-15 seconds, but then you can navigate around again without issue (0.5 ms).
It is not the SQL queries that are slow, as all queries seem to work fine after the first page hit, even ones that haven't been run yet, so it isn't caching anywhere either.
We have done a huge amount of testing, and I can't figure this out. The main thing I have tried so far is to pre-generate EF views, but this has not helped.
Looking at SQL Server Profiler, after 5 minutes (give or take 30 seconds) with no Profiler activity and no site interaction, a couple of "Audit Logout" entries appear for the application, and as soon as that happens it seems to take 10-15 seconds to refresh the application. Is there an idle timeout on SQL Server?
This should work if you use the following in your connection string:
server=MyServer;database=MyDatabase;Min Pool Size=1;Max Pool Size=100
It will force your connection pool to always maintain at least one connection. I must say I don't recommend this (a persistent connection), but it will solve your problem.
Are you using an LMHOSTS file? We had the same issue. The LMHOSTS file cache expires after 10 minutes by default. After the system had been sitting idle for 10 minutes, the host would fall back to a broadcast message before reloading the LMHOSTS file, causing the delay.
"Looking at SQL Server Profiler, after 5 minutes (give or take 30 seconds) with no Profiler activity and no site interaction, a couple of 'Audit Logout' entries appear for the application, and as soon as that happens it seems to take 10-15 seconds to refresh the application. Is there an idle timeout on SQL Server?"
This is telling me that the issue most likely lies with your SQL Server and/or the connection to it, rather than with your application. SQL Server uses connection pooling, and SQL will scavenge these pools every so often and clean them up. The delay you appear to be experiencing occurs when your connection pools have been cleaned up (the Audit Logouts) and you have to establish a new connection. I would talk/work with your SQL database people.
For testing, do you have access to a local/dev copy of the database not running on the same SQL Server as your production app? If not, try to get one set up and see if you suffer from the same issues.
I would run "SQL Server Profiler" against SQL Server and capture a new trace while reproducing the problem by accessing the site after being idle 5-10 mins. After that look at the trace. Specifically, look for entries in which ApplicationName starts with "EntityFramework...". This will tell you what is EF doing at that moment. There could be some issue with custom caching on top of EF, or some session state that is expiring (check sessionState timeout in web.config)
Since it is the first run (per app?) that is slow, you may be experiencing the compilation of the EDMX or of the LINQ into SQL.
Possible solutions:
Use precompiled views and precompiled queries (may require a lot of refactoring).
http://msdn.microsoft.com/en-us/library/bb896240.aspx
http://blogs.msdn.com/b/dmcat/archive/2010/04/21/isolating-performance-with-precompiled-pre-generated-views-in-the-entity-framework-4.aspx
http://msdn.microsoft.com/en-us/library/bb896297.aspx
http://msdn.microsoft.com/en-us/magazine/ee336024.aspx
Dry-run all your queries on app start (before the first request is received), as sketched below.
You can run queries with fake input (e.g. non-existent zero keys) on a default connection string (it can point to an empty database). Just make sure you don't throw exceptions (use SingleOrDefault() instead of Single(), and handle null results and 0-length list results on .ToList()).
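A minimal sketch of that warm-up idea in Global.asax (MyAppContext, Customers, and Orders are hypothetical names standing in for your own context and entities):

// In Global.asax.cs
using System.Linq;
using System.Web;

public class MvcApplication : HttpApplication
{
    protected void Application_Start()
    {
        // ... usual route/bundle registration ...

        // Warm up EF before the first real request: forces metadata loading,
        // view generation, and query compilation.
        using (var db = new MyAppContext())
        {
            var one = db.Customers.SingleOrDefault(c => c.Id == 0); // no such key, returns null
            var many = db.Orders.Where(o => o.Id == 0).ToList();    // 0-length list, no exception
        }
    }
}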
Create a simple webpage that accesses SQL Server with a trivial query like "SELECT GETDATE()" or some other cheap query. Then use an external service like Pingdom or another monitor to hit that page every 30 seconds or so. That should keep the connections warm.
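That keep-alive page can be as small as a generic handler. A sketch; the "Default" connection string name is a placeholder:

// KeepAlive.ashx.cs
using System.Configuration;
using System.Data.SqlClient;
using System.Web;

public class KeepAlive : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        using (var conn = new SqlConnection(
            ConfigurationManager.ConnectionStrings["Default"].ConnectionString))
        {
            conn.Open();
            // Cheap round trip; keeps a pooled connection alive and warm.
            new SqlCommand("SELECT GETDATE()", conn).ExecuteScalar();
        }
        context.Response.Write("ok");
    }

    public bool IsReusable { get { return true; } }
}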
Try overriding your timeout in web.config, as in this example:
Data Source=mydatabase;Initial Catalog=Match;Persist Security Info=True;User ID=User;Password=password;Connection Timeout=120
If it works, note that this is not a solution, just a workaround.
In our case, the application was hosted in an Azure App Service Plan and was having a similar problem. It turned out to be a problem of not configuring the virtual network. See the question/answer here: EF Core 3.1.14 Recurring Cold Start
