SQL Server 2016 Mobile Report Datasets Expire after about 30 Days

We are in the process of creating a suite of SQL Server 2016 Reporting Services Mobile Reports for our company’s Cloud offering to customers; however, we keep running into a situation where all of the datasets expire after a certain time.
We have found that all the datasets on the server seem to stop working 30 days after they have been created, and an error message (“The data set could not be processed. There was a problem getting data from the Report Server Web Service.”) is displayed.
To resolve this, all the datasets need to be opened manually and re-saved onto the server. As you can imagine, this isn’t really a suitable solution for us, as we have a large number of reports and datasets for each customer.
After a bit of investigation, we have managed to pinpoint a “SnapshotData” table in the report server database with an “ExpirationDate” column that seems to be linked to the issue.
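If it helps anyone investigating the same thing, a query along these lines shows which snapshots are due to expire (a sketch only; ExpirationDate is the column we found, but the SnapshotDataID column is an assumption, so check your own ReportServer schema first):
-- Sketch: snapshot rows in the ReportServer catalog database that have
-- already expired or will expire within the next 7 days.
SELECT SnapshotDataID, ExpirationDate   -- SnapshotDataID is assumed
FROM dbo.SnapshotData
WHERE ExpirationDate < DATEADD(DAY, 7, GETDATE())
ORDER BY ExpirationDate;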
Has anyone else come across this before, and could you please advise a possible solution to the datasets expiring? Why would the datasets have an expiration date on them anyway?

A dataset does not expire once it has been created.
In your scenario, did you create a cache for those datasets? Was anything changed in the datasets?
You said the mobile report prompted the “data set could not be processed” error; please go to the dataset properties pane and check whether it returns data successfully by clicking Load Data. If not, switch to another account and try again.
Besides, please check whether the account used to connect to the data source expired after 30 days, which might have caused the data retrieval to fail.

Related

Best Way to Pull in Live Data From 'Root' Database On Demand

Let me start by apologizing as I'm afraid this might be more of a "discussion" than an "answerable" question...but I'm running out of options.
I work for the Research Dept. for my city's public schools and am in charge of a reporting web site. We use a third-party vendor (Infinite Campus/IC) solution to track information on our students -- attendance, behavior, grades, etc. The IC database sits in a cloud and they replicate the data to a local database controlled by our IT Dept.
I have created a series of SSIS packages that pull in data each night from our local database, so the reporting data is through the prior school day. This has worked well, but recently users have requested that some of the data be viewed in real-time. My database sits on a different server than the local IC database.
My first solution was to create a linked server from my server to the local IC server, and this was slow but worked. Unfortunately, this put a strain on the local IC database, my IT Dept. freaked out and told me I could no longer do that.
My next & current solution was to create an SSIS package that would be called by a stored procedure. The SSIS package would query the local IC database and bring in the needed data to my database. This has been working well and is actually much quicker than using the linked server. It takes about 30 seconds to pull in the data, process it and spit it out on the screen as opposed to the 2-3 minutes the linked server took. It's been in place for about a month or so.
Yesterday, this live report turned into a parking lot -- the report says "loading" and just sits like that for hours. It eventually will bring back the data. I discovered the department head that I created this report for sent out an e-mail to all schools (approximately 160) encouraging them to check out the report. As far as I can tell, about 90 people tried to run the report at the same time, and I guess this is what caused the traffic jam.
So my question is...is there a better way to pull in this data from the local IC database? I'm kind of limited with what I can do, because I'm not in our IT Dept. I think if I presented a solution to them, they may work with me, but it would have to be minimal impact on their end. I'm good with SQL queries but I'm far from a db admin so I don't really know what options are available to me.
UPDATE
I talked to my IT Dept about doing transactional replication on the handful of tables that I needed, and as suspected it was quickly shot down. What I decided to do was set up an SSIS package that is called via Job Scheduler and runs every 5 minutes. The package only takes about 25-30 seconds to execute. On the report, I've put a big "Last Updated 3/29/2018 5:50 PM" at the top of the report along with a message explaining the report gets updated every 5 minutes. So far this morning, the report is running fantastically and the users I've checked in with seem to be satisfied. I still wish my IT team was more open to replicating, but I guess that is a worry for another day.
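(For anyone wanting to replicate this, a SQL Server Agent schedule like the following sketch would produce the every-5-minutes cadence; the job name is a placeholder for your own.)
EXEC msdb.dbo.sp_add_jobschedule
    @job_name             = N'IC Live Data Refresh',  -- placeholder job name
    @name                 = N'Every 5 minutes',
    @freq_type            = 4,   -- daily
    @freq_interval        = 1,   -- every day
    @freq_subday_type     = 4,   -- subday unit: minutes
    @freq_subday_interval = 5;   -- every 5 minutes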
Thanks to everybody who offered solutions and ideas!!
One option which I've done in the past is an "ETL on the Fly" method.
You set up an SSIS package as a data flow, but it writes to a DataReader Destination. This then becomes the source for your SSRS report. In effect this means that when the SSRS report is run, it automatically runs the SSIS package and fetches the data; it can pass parameters into the SSIS package as well.
There's a bit of extra config involved but this is straightforward.
This article goes through it -
https://www.mssqltips.com/sqlservertip/1997/enable-ssis-as-data-source-type-on-sql-server-reporting-services/

Azure SQL Server size resets to 1 GB

I am working on a web application based on EF with over 1 GB of seeded data. The application is hosted in Azure under a BizSpark subscription account.
I created an App Service Plan with the web application associated with an App Service some time back. I started uploading data to SQL Server, but this failed. I realized that the default size was 1 GB, so I upgraded the plan to a Standard plan with 10 DTUs and 10 GB yesterday, and uploaded the data around 5 days back.
After that, due to certain issues, I wiped out the App Service Plan and created a new one. The SQL Server size and setup were not modified.
I created a new plan, uploaded the application, and observed the following:
Database tables got wiped out
Database pricing structure was reset to Basic
I upgraded the database plan once again to 10 GB and 10 DTUs yesterday night. I see that the change has not taken effect yet.
How long does it take to get the size fixed?
Will the tables have to be recreated?
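(For reference, the same scale-up can also be issued directly in T-SQL; a sketch, where [db] stands in for the actual database name and S0 is the 10 DTU Standard service objective.)
-- Run while connected to the master database of the logical server.
ALTER DATABASE [db]
MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S0', MAXSIZE = 10 GB);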
9/11
I just tried uploading data via the bcp tool, but I got the following error:
1000 rows sent to SQL Server. Total sent: 51000
Communication link failure
Text column data incomplete
Communication link failure
TCP Provider: An existing connection was forcibly closed by the remote host.
Communication link failure
This is new; yesterday, before I changed the DB size, I got the following error:
9/10
1000 rows sent to SQL Server. Total sent: 1454000
The database 'db' has reached its size quota. Partition or delete data, drop indexes, or consult the documentation for possible resolutions.
BCP copy in failed
I don't understand the inconsistency in the failure messages, or why the upload failed for the same data file.
Regards,
Lalit
Scaling a database from Basic up to the Standard service tier should not take more than a few minutes. The schemas and tables inside the database are left unchanged.
You may want to look into the Activity log of your Azure server to understand who initiated the scale down from Standard to Basic. Furthermore, you may want to turn on the Auditing feature to understand all the operations that are performed on your database.
On connectivity issues, you can start by looking at this documentation page. It also looks like you have inserted rows into your database several times through the BCP command, and this causes a space issue on the Basic tier.
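A quick way to check how much space the database is actually using is a query like this sketch (run it inside the target database):
-- Used space in MB; compare against the tier's size quota.
SELECT SUM(reserved_page_count) * 8.0 / 1024 AS used_space_mb
FROM sys.dm_db_partition_stats;
If used_space_mb is at the 1 GB Basic cap, that explains the quota error above.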

Logs on Azure SQL Database

We had an issue yesterday that we are trying to figure out. Out of nowhere, everything in the database changed.
We know it was an UPDATE without a WHERE clause, but we are just a few developers, so if any of us had done it, we would know.
It happened at a strange time of day, very late at night, and only a few IP addresses are allowed into the server.
Is there any way to get a full log, with IPs, of all the transactions on Azure?
Has anyone had a similar problem? Could it be a break-in?
Are there any software protections or scripts that we can add to limit this?
Is there any way to get a full log, with IPs, of all the transactions on Azure?
There are a few options I can think of. Even on-premises this is not possible if you don't have the right measures in place to detect it. Otherwise, contact support with a request to read the transaction log of the database (Azure support won't read the log unless you have a business justification, as this involves many teams for safety reasons).
1.) You could use the Activity log to get more details.
2.) There is a sys.event_log (Azure SQL Database) DMV, which shows connections, successful or not; you can correlate these to users based on your office setup, though it won't tell you what was actually run. A sample query is sketched below.
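The sketch (run it against the master database of your logical server):
-- Connectivity events from the last day, newest first.
SELECT start_time, event_type, event_subtype_desc, database_name, event_count
FROM sys.event_log
WHERE start_time > DATEADD(DAY, -1, GETUTCDATE())
ORDER BY start_time DESC;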
To avoid this happening again, audit your data. Azure offers many features to help you understand what is happening, such as:
1.) Get started with SQL Database auditing
2.) Enable rules to get alerted when something happens
Enable Auditing and Threat Detection on the server if you haven't already.
For more information, please read this page.

Sending emails automatically using SQL Server job

I'm developing a .NET desktop application with SQL Server as the database backend. One of the requirements of the application is that if a record's status, for example, remains inactive for 30 days, a reminder email is sent to the user associated with that record.
This could be done pretty easily within the application, as long as it is started and running. However, if for a certain period of time nobody starts up the application, the reminder email won't be sent, because nothing / nobody triggers the action.
How about creating a job in SQL Server which monitors the records and sends emails as needed? Has anyone ever done that?
Thanks a lot!
Given the requirements of your task, I suggest that you create a console program (w/ C# or VB.NET) that checks for inactive (30 days) row condition and then generates the appropriate email notification message. Then run this program every hour or so (depending on the urgency involved in detecting an inactive row condition) using a SQL Server Agent Job.
[Image: SQL Server Agent Jobs as displayed in Object Explorer for SQL Server 2008 R2]
This SO entry covers some aspects of creating a console program that runs at certain times. SQL Server Agent has several scheduling options that should accommodate your needs.
You might be reluctant to create a console program for this, but you are apt to find that doing so gives you options that are simply not easily implemented with a pure SQL Server based approach. Plus, you may have future needs that require similar processing that this approach provides.
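That said, if you do want to stay purely inside SQL Server, a minimal sketch of the job step could use Database Mail; the table, column, and profile names here are placeholders for whatever your schema actually uses:
-- Sketch: if any record has been inactive for 30+ days, send a reminder
-- through Database Mail (requires Database Mail to be configured).
IF EXISTS (SELECT 1
           FROM dbo.Records                      -- placeholder table
           WHERE Status = 'Inactive'             -- placeholder column
             AND LastModified <= DATEADD(DAY, -30, GETDATE()))
BEGIN
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = N'AppMailProfile',       -- placeholder mail profile
        @recipients   = N'user@example.com',     -- placeholder recipient
        @subject      = N'Inactive record reminder',
        @body         = N'One or more records have been inactive for 30 days.';
END;
A real implementation would loop over the matching records to email each associated user, which is exactly where the console-program route gives you more flexibility.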

SQL Server Reporting Services - Fast TimeDataRetrieval - Long TimeProcessing

An application that I support has recently begun experiencing extended periods of time required to execute a report in SQL Server Reporting Services. The reports that are being executed are not terribly complex. There are multiple stored procedures (between 5 and 8) which return anywhere from a handful to 8000 records total. Reports are generally from 2 to 100 pages. One can argue (and I have) the benefit of a 100 page report, but the client is footing the bill.
At any rate, the problem is that even a report with 500 records (11 pages) takes 5 minutes to return to the browser. In the execution log, the TimeDataRetrieval is 60 seconds, but the TimeProcessing is 235 seconds. It seems bizarre to me that my query runs so quickly, but it takes Reporting Services so long to process the data.
Any suggestions are greatly appreciated.
Kind Regards,
Bernie
Forgot to post an update to this. I found the problem. The problem was associated with an image with an external source on the report. Recently the report server was disallowed internet access, so when Reporting Services was processing the report, it was trying to do an HTTP GET to retrieve the image. Since the server was disallowed outbound internet access, the request would eventually time out with a 301 error. Unfortunately this timeout period was very long, and I suspect it happened for each page of the report, because the longer the report, the longer the processing time.
At any rate, I was not able to get outbound internet access reopened on the server, so I took a different path. Since the web server where the image was hosted and the reporting server were on the same local network, I was able to modify the HOSTS file on the reporting server with the image host's domain and local IP address. For example:
www.someplacewheremyimageis.com/images/myimage.gif
The reporting server would try to resolve this via its local DNS and no doubt get the external IP address X.X.X.X,
so I modified the HOSTS file on the report server by adding the following line:
192.168.X.X www.someplacewheremyimageis.com
So now when reporting services tries to generate the report it resolves to the above internal IP address and includes the image in the report.
The reports are now running snappier than ever.
It's these kinds of problems, the ones you figure out with a flash of inspiration at 4:30 AM after hours of beating your head against your keyboard, that make it wonderful and terrible to be a software developer.
Hope this helps someone.
Thanks,
Bernie
