Deploying reports from multiple parameters automatically? - sql-server

I'm quite new to SSRS. I have a set of reports which obtain their parameters from a query (defined as a dataset in BIDS). These parameters include a machine identity, a starting date and an ending date. Although each date range is specific to a machine, SSRS puts all the parameter values into separate drop-down lists, from which the user must select the correct values and then run the report.
The problem, as you may imagine, is that there is a wide margin for error on the target user's part: the user can select incorrect starting and ending dates from these lists for any machine. These reports should ideally run automatically on a schedule every Monday morning, but I'm having difficulty seeing how that is accomplished, since SSRS essentially needs to iterate through the machine ID list and use the appropriate starting and ending dates for each report. All other reports in this instance depend on the parameters obtained from this query.
Does anyone know how to automatically run multiple SSRS reports based on a list of parameters? It sounds like some sort of coding logic is necessary here, but I don't know how to apply it in this case (I would prefer to have no user interaction, if possible).

Have you looked at Data-Driven Subscriptions?
http://msdn.microsoft.com/en-us/library/ms169972.aspx
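A data-driven subscription runs a query when the schedule fires and executes the report once per row, mapping the query's columns to the report parameters and delivery settings, which is exactly the "iterate through the machine list" behaviour described above. A minimal sketch of such a query, assuming hypothetical Machines and ReportingPeriods tables and report parameters named MachineID, StartDate and EndDate:

    -- Hypothetical query for a data-driven subscription (scheduled every Monday morning).
    -- Each row produces one execution of the report; the columns are mapped to the
    -- report parameters (MachineID, StartDate, EndDate) and, if required, to
    -- delivery settings such as the recipient address.
    SELECT
        m.MachineID,
        p.StartDate,
        p.EndDate,
        m.OwnerEmail AS RecipientAddress          -- optional delivery field
    FROM dbo.Machines AS m
    JOIN dbo.ReportingPeriods AS p
        ON p.MachineID = m.MachineID
    WHERE p.EndDate >= DATEADD(DAY, -7, CAST(GETDATE() AS date));  -- e.g. only last week's periods

Note that data-driven subscriptions are not available in every edition of SQL Server, so check that your edition supports them.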

Related

SSRS 2016 - subscription subject from variable

I need to pass the current date dynamically into an SSRS subscription subject. Is this possible when using a standard subscription?
I know it may be obtained using a data-driven subscription, but then the report is sent as many times as there are rows in the subscription query, and I only need the report to be sent once.
I just need to have something like "Report XXX, #GETDATE()" in the subject.
Is it also possible to set a dynamic report name for the Excel file sent by the SSRS subscription?
You can include the execution time of the report in a standard subscription's subject using the @ExecutionTime placeholder (for example, a subject of "Report XXX, @ExecutionTime"); see this link. However, you want to be sure whether the execution time represents when the report was rendered or when the subscription processed the report; I am fairly sure it is the former, which could be problematic for snapshot/cached reports. For non-cached reports this should be the current date, apart from the edge case of crossing midnight server time.

users and expiration date

I have a question that I hope someone can help me with: I would like to be able to find out how many paying members my website has on a specific date.
They belong to their own role "members" and there is an expiration date for each member. If they do not have an expiration date, they should also be listed.
Should I be able to use the 2sxc module for this, and could anyone tell me how to do it?
You would need an intermediate understanding of SQL and of the tables/fields related to the results you are trying to achieve. Though it is possible to get this done in 2sxc, I would recommend starting the effort with the DNN Reports module. That lets you focus on getting the results you need in the SQL query first (since the display part is handled automatically by default). Then, once you have the right query, you could move it over to 2sxc (or any module that allows data to be queried and returned as a result set) and do something more useful with it.
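As a rough starting point for that query, here is a minimal sketch assuming the standard DNN Users, Roles and UserRoles tables (adjust the object qualifier/prefix and the role name to match your installation); the date and role name are placeholders:

    -- Count paying members on a given date; users with no expiry date are included.
    DECLARE @AsOfDate date = '2024-01-01';   -- the date you want to check

    SELECT COUNT(DISTINCT u.UserID) AS PayingMembers
    FROM dbo.Users AS u
    JOIN dbo.UserRoles AS ur ON ur.UserID = u.UserID
    JOIN dbo.Roles AS r ON r.RoleID = ur.RoleID
    WHERE r.RoleName = 'members'
      AND (ur.ExpiryDate IS NULL OR ur.ExpiryDate >= @AsOfDate);

Swap the COUNT for a plain SELECT of the user columns if you want the list of members rather than just the number.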

SSRS auto refresh (AutoRefresh) only partially works on particular clients

We've been successfully using SSRS auto refresh (SQL Server 2008 R2) on a variety of clients over the years and never had any problems. Various combinations of Chrome and IE on various OSs (Windows XP, 7 and 10) have all been fine. We've just deployed a new report to run in full screen mode on TV screens, and it seems to be PARTIALLY refreshing. The Globals!ExecutionTime displays accurately, but new rows (INSERTs in the source data) in the report's tablix do not show up until the report is manually refreshed. Even more oddly, UPDATEs to the source data seem to make it through the auto refresh process. The problem only seems to occur on these particular clients.
We've set up a report history to help monitor this problem, and it works as expected. In fact it highlights the inconsistency, where newer information is captured in snapshots that were run earlier than the screen auto refresh.
The report execution logs are recording exactly the executions we'd expect to see. The data is just not making it onto the screen.
The report's processing options are:
Always run this report with the most recent data,
Do not cache temporary copies of this report
Any suggestions greatly appreciated :-)
Well, we eventually resolved it. The problem turned out not to be exactly as described above. It seems to have been due to an unusual combination of several cascading parameters and a rapidly changing underlying dataset. What would happen is that:
The SQL behind the cascading parameters would be executed on autorefresh in order to populate each parameter's default values.
During or shortly after this, the source data would change.
Next the final dataset SQL would be executed using the now out-of-date parameter values, and bring back the wrong results.
The solution was to remove four of the five cascading parameters from the report (and the underlying stored procedure). The stored procedure had initially been intended to be widely used, and that was the reason for needing all the parameters. It turned out, though, that it was only used by this report, so as luck would have it we were able to simplify the process.

Development standards for SQL Server supporting services?

I am trying to find some development best practices for SQL Server Reporting Services, Analysis Services and Integration Services.
Does anyone have some useful links or guidance they can offer on this subject?
I can only speak specifically to SSIS, although some of this will be applicable to the others as well.
Save your packages as files and put them in Source Control.
Where possible use variables for things that will change from server to server or run to run.
Use configuration files to save the configuration for different environments.
When processing data that comes from an outside source, assume it will change format without warning (i.e. check that the data you expect in each column is the data you actually got!). There's nothing like putting the emails in the last-name field (or, as happened to us once in DTS, the social security number in the field that said how much to pay the person; we were sure glad we caught that before someone got paid that amount).
Things I have seen happen include: adding new columns; removing columns that are critical to your process; rearranging the order of the columns (especially bad when the file itself does not have the column names); leaving the column titles the same but changing the data they contain (yes, once I got a file where the last-name data was in the column labelled First_name and vice versa); data with new values that don't have a match to values in your system (I'm thinking of look-up type things here, like medical specialties); flat-out strange data such as notes in an email field; and names in this format: lastname - 'Willams, Jo', first_name - 'hn' (combine the two fields to get the whole name; apparently their data entry people just typed the name until they ran out of space and continued in the next field no matter where they were in the name!).
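To make that column check concrete, here is a minimal validation sketch against a hypothetical staging table (table and column names are illustrative, not from the original answer); rows it returns would be routed to a cleaning step or cause the load to stop:

    -- Flag staging rows whose columns do not contain the kind of data we expect.
    SELECT StagingID, LastName, Email, PayAmount
    FROM dbo.Staging_Employees
    WHERE Email NOT LIKE '%_@_%._%'                         -- email column should look like an email
       OR LastName LIKE '%@%'                               -- last name should not contain an email
       OR TRY_CONVERT(decimal(18, 2), PayAmount) IS NULL    -- pay amount should be numeric
       OR TRY_CONVERT(decimal(18, 2), PayAmount) > 1000000; -- sanity ceiling; adjust to your data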
Don't put uncleaned data into your database.
Always retain a copy of any files that you process or send out. Amazing how often you will need to research back.
Log errors and log records that needed cleaning, especially if the problem in the field was such that it caused the process to fail. It is a whole lot easier to see the errors in a table than to know your 20 million record file failed because one record had an extra | in it and try to figure out which one it was.
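As one possible shape for that logging (the table and column names here are illustrative, not the original author's schema), a simple log table makes failed and cleaned rows easy to query:

    -- Illustrative log table for rows that failed or needed cleaning during an import.
    CREATE TABLE dbo.ImportRowLog (
        ImportRowLogID  int IDENTITY(1, 1) PRIMARY KEY,
        ImportName      varchar(100)  NOT NULL,            -- which package/file the row came from
        SourceRowNumber int           NULL,                -- position in the source file, if known
        RawRow          nvarchar(max) NULL,                -- the offending row, verbatim
        Problem         varchar(500)  NOT NULL,            -- what was wrong or what was cleaned
        LoggedAt        datetime2     NOT NULL DEFAULT SYSDATETIME()
    );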
If you do a lot of similar imports in SSIS, create a template project that has all the standard logging and data cleaning in it. It is a whole lot faster to start from a template, adjust the mappings based on the new file you are working with, and make minor adjustments to things specific to that file than to rewrite every SSIS package from scratch.
Store meta data. Sooner or later you will be asked how often the import failed, how soon after the file was received the import happened, or even when the last import was. All our packages start and end with a task to store start and stop times in our meta data table. All failure paths include a task to mark the import as failed in our meta data. Eventually you can build a system that knows how many records to expect and fails the import if the new file is significantly off. Meta data can also be used to store things like the number of records, which can help identify when a partial file was sent instead of the whole file you were expecting and prevent you from blowing away 300,000 sales targets they actually still want.
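A minimal sketch of such a meta data table (column names are assumptions, not the original author's design); the package inserts a row in its first task and updates it in its final task or failure path:

    -- Illustrative meta data table: one row per package run.
    CREATE TABLE dbo.ImportRunLog (
        ImportRunID   int IDENTITY(1, 1) PRIMARY KEY,
        PackageName   varchar(200) NOT NULL,
        FileName      varchar(500) NULL,
        StartedAt     datetime2    NOT NULL DEFAULT SYSDATETIME(),
        FinishedAt    datetime2    NULL,
        RowsReceived  int          NULL,   -- used later to spot partial files
        Succeeded     bit          NULL    -- NULL = still running, 0 = failed, 1 = succeeded
    );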

Cleaning Up Temporary SSRS Reports

Our application uses SQL Server Reporting Services and allows users to add custom filters to reports. We do this by modifying the RDL and then uploading the modified RDL to the server to create a new report. The problem is that after the report has run once, it's no longer needed; it's really just a temporary report. Obviously, this would eventually result in a lot of temporary reports laying around. We need a way to clean these up.
We've already thought about external methods like creating a service or job to periodically delete the reports, and that's probably what we'll end up doing if we can't come up with something better. What we're wondering is, does SSRS itself provide a better way to do this? We thought about trying to somehow use a cached instance which would be set to expire, but that seems to only work on an executed instance of a report, not the report itself. As far as I can tell there's no way to set a report to expire. Is there some other way to get SSRS to clean up for us?
Immediately deleting the report isn't an option because our execution is asynchronous.
Built-in, there's nothing. But writing something yourself is easy enough.
Try having a process which queries your catalog of reports for ones that are older than half an hour (or so). You could even join to ReportServerTempDB to see if they still have an active session (in which case, you ignore them a bit longer).
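As a rough sketch of that lookup, assuming the generated reports are uploaded to a dedicated folder (here the hypothetical /TempReports) and that you are comfortable reading the ReportServer catalog tables directly:

    -- Find temporary reports older than ~30 minutes with no active session.
    -- In the Catalog table, Type = 2 means a report.
    SELECT c.Path, c.Name, c.CreationDate
    FROM ReportServer.dbo.Catalog AS c
    WHERE c.Type = 2
      AND c.Path LIKE '/TempReports/%'                       -- hypothetical folder for generated reports
      AND c.CreationDate < DATEADD(MINUTE, -30, GETDATE())
      AND NOT EXISTS (SELECT 1
                      FROM ReportServerTempDB.dbo.SessionData AS s
                      WHERE s.ReportPath = c.Path);          -- still has a session; skip it for now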
Once you've found them, it's easy to grab them using the Web Service interface and delete them from the catalog.
But... I'd actually look at a better way of providing the custom filter, using code. Surely you could provide the filter as a parameter, and use VB code within the report to convert what the user provides into something which can be evaluated for each row.
Rob
