Can I change the datasource after an SSRS report is created? - sql-server

I need to change the DataSource for my SSRS reports. Some field names and DIM-FACT table names have changed on the SQL Server 2008 database used to create the SSRS reports. How can I change the DataSource without losing all of the work I have done? Some field names are not the same or have been removed.
The reports were already uploaded/deployed from Visual Studio and copied to SharePoint 2010. Is there a way to modify the original datasource without having to rewrite the whole drill-down report?
I am new to SSRS and I hope what I am asking makes sense. :)
Solution Explorer and Properties in Visual Studio were modified, but the Report Data section (on the left) is still the same. Can someone please help me?

In your example, you have your report splendidly broken out into 3 parts: an RDL, which is your actual report definition; an RSD, which is your dataset, houses a reference to a sproc (or just your entire query), and maintains information about the field names, data types, etc.; and an RDS, which is your datasource and merely contains a connection string.
As long as the metadata between them remains the same, you can alter any of these files independently of the others. You can completely gut and rewrite your RSD, and as long as the field names, data types, and parameters stay the same, the RDL will continue to work with no modifications needed. Similarly, you can change your datasource's (RDS) connection string, and as long as the new connection has access to the same objects, your RSD, and thus your RDL, will work fine.
So, if you merely need to change the data source, simply modify that file, and you're done.
It sounds, however, like you need to change your dataset. This can be as simple or as complicated as you'd like it to be. You could simply update your query, and alias all of the new field names back to what they were before your change. This would require no modifications to your RDL, though could be argued as being a bad practice.
Lastly, if this really is a simple matter of replacing one value with another, know that all 3 files - RDS, RSD, RDL - are simply XML. Open them up in the Notepad clone of your choice and do a find/replace for everything (you can also use "Code View" in Visual Studio).
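If you have many report files to touch, that same find/replace can be scripted. A minimal Python sketch of the idea (the folder layout and the `old`/`new` values are assumptions for illustration; back the files up first, and remember the XML is case-sensitive, so only exact matches are caught):

```python
from pathlib import Path

def replace_in_report_files(root, old, new):
    """Plain-text find/replace across all RDL/RSD/RDS files under root.

    Returns the names of the files that were actually changed.
    """
    changed = []
    for path in Path(root).rglob("*"):
        if path.suffix.lower() not in (".rdl", ".rsd", ".rds"):
            continue
        text = path.read_text(encoding="utf-8")
        if old in text:
            path.write_text(text.replace(old, new), encoding="utf-8")
            changed.append(path.name)
    return changed
```

This is just the Notepad find/replace in bulk - it knows nothing about RDL structure, so review the diff before redeploying.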

Related

ODBC and data binding by [Table].[column]

I'm rewriting an older-than-dirt MFC application, ripping out the old MFC-based DB code and reworking queries to make it run faster. This code works with an MS Access 2003 .mdb file.
The old code used the convenience functions like RFX_Bool, RFX_Long, RFX_Int to read from the records. These are nice, and I am reading about binding in ODBC using SQLBindCol to set the bindings ahead of time to avoid that extra processing time for each row. This is great, but I see SQLBindCol only takes the column number, not the name. What if I want to bind using the column name like with the RFX_* functions? SQLDescribeCol gives the column names, but it doesn't have the "full" name, i.e. [Table/Alias].[Column]. Some of my queries involve JOIN'ing the same table multiple times with aliases, so I can't bind the column by the column name alone. If I plug my query into Access, the Datasheet view shows the alias in the column name. I'm currently using my connection string with Driver={Microsoft Access Driver (*.mdb, *.accdb)}, if it matters.
tl;dr How do I do MFC's RFX_*(fieldExchange, L"[Table].[Column]", &variable) in the modern ODBC API?
OK, I think I understand what the RFX functions are doing now, and I think I know what I need to do.
The MFC ODBC classes construct your query programmatically: after starting with SELECT, UPDATE, etc., every call to RFX_* simply appends the field name to the query, then ties a reference to your variable to the current column index, which it increments after every call. So I just need to append my fields to my queries with a helper function the same way MFC does, in order to bind my pointers by index the same way MFC does.
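That bookkeeping pattern can be sketched outside of MFC. A rough Python illustration (the class and method names are made up for this sketch, not part of any ODBC API): the helper appends each `[Table].[Column]` to the SELECT list and records the 1-based column index that a call like SQLBindCol would then use.

```python
class FieldExchange:
    """Mimics MFC's RFX_* pattern: each bind() appends the fully qualified
    column name to the SELECT list and records the 1-based ODBC column
    index alongside the variable that should receive the value."""

    def __init__(self):
        self.columns = []   # e.g. "[Orders].[ID]", in SELECT order
        self.bindings = {}  # full name -> (1-based index, target holder)

    def bind(self, full_name, holder):
        self.columns.append(full_name)
        index = len(self.columns)  # ODBC columns are 1-based
        self.bindings[full_name] = (index, holder)
        return index

    def select_sql(self, from_clause):
        return "SELECT " + ", ".join(self.columns) + " " + from_clause
```

Because the index is assigned at the moment the name is appended, aliased joins of the same table stay unambiguous - exactly the property the RFX functions rely on.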
Hopefully this is helpful to somebody.

SSRS Calculated Field used in multiple Reports

I am working in Visual Studio 2013 and SQL Server 2008 R2.
A really long expression [60-70 IIFs] is in a calculated field that is used in about 35 reports. The calculated field expression matches a value from the data row [coming in from T-SQL] and designates a 'Group Name' for the row.
Example Data:
ID  Prod_Num  Amount
1   123       15
2   234       20
3   345       25
Example Expression (pseudo code):
=IIF(Fields!Prod_Num.Value = "123", "Shirts",
 IIF(Fields!Prod_Num.Value = "234", "Pants",
 IIF(Fields!Prod_Num.Value = "345", "Socks", "Other")))
The problem is that when the Prod_Num list is added to or modified, the changes have to be made in all of the reports.
What would be a good way to keep all this in one place, so that when there are changes, they only need to be made in that one place?
I don't have Create Table rights on the DB and I don't know if that is even an option - though if I DID have the rights, I would put all the Prod_Nums and Categories (Shirts . . . Pants . . .) into a Table and then just do the work in the SQL for the report.
I thought of a T-SQL function, but some of the reports use a linked server to pull data from a Progress DB, and I don't know how that would work with a SQL Server function.
I'd appreciate any help/suggestions.
Thanks!
You can put your expression into a function in a Custom Code Assembly in SSRS.
Then you add that assembly to all the reports that need it, and all you have to do on each report is call that function in an expression.
By the way, you should be using Visual Studio 2008 to build reports for SSRS 2008 R2. Reports built in VS 2013 are not guaranteed to work on SSRS 2008 R2.
You will have the same issue wherever you place this logic if you are not enforcing the relationship in your data, i.e. creating a type table for your products.
The only leverage you can gain is to move the hardcoded values from many locations to just one, so that when you do update a table (or tables), you can clearly document the one other location that must also be updated. Here are a few examples:
As Tab Allerman pointed out, create a class function inside an assembly and embed that assembly into each report. You then just update the server with a new assembly when your choices change.
Create a custom SP in a database that every report will have access to, even if it is not your report's main database. (You can create multiple data sources in a report.)
Use a web service as a data source for your reports and put the types in one location this way.
Use an xml document as a data source for your reports and put the types in one location this way.
Ask the person(s) maintaining the database why the heck the products are not typified.

Epicor asking for password after making a table change

Epicor - what a beastly creature!
Epicor is asking for a password after making a table change - any idea why?
We removed the relationship from the part table and set up a criteria instead. Now it is asking for a password, which should not be happening.
The login prompt appears when I try to run the report, and I am trying to figure out what I did to aggravate Epicor. The table was already there. The only reason I did not add a table to the report data definition, as I originally wanted to, is that the parts table could only be added once - which is why I removed the relationship and added a criteria instead.
From your description, it sounds like the problem is related to the XML generated by Epicor for a non-BAQ-based report data definition. Crystal and SSRS reports ask for login information when either more than one datasource is referenced in the report, or the relationships are improperly defined.
Note:
If you are not a report developer and you have modified this in an attempt to change the end data, I recommend you contact the report developer responsible for maintaining these before proceeding. Otherwise, read on.
Based on my experience, if you are confident in the new relationship structure in your report data definition, the solution to this problem is likely within the report itself. Generate an XML file by running a test report, then open the .rpt (or .rdl) associated with the report and point its datasource at the new XML file. This refreshes the XML schema used as the datasource. Even if none of the fields changed in the data definition, the datasource schema stored in these files defines exactly the data formatting the report expects to receive when it is opened by Epicor.
If that doesn't solve the problem and you are using Crystal, the XML relationships may be defined in a way that affects how the data is displayed; this can be adjusted via Database Expert > Links tab in Crystal. You should reconnect all of the links to match the report data definition within Epicor.
If none of that works, open up and view the xml file.
It is not unheard of for report data definitions in Epicor to break behind the scenes when relationships are altered, and the XML file generated by the test report may not be well-formed. I have seen many XML files with unclosed elements and similar defects that cause various problems when attempting to run the report. In that case, my recommendation is to create a completely new report data definition (do not copy), re-enter all of the parameters that existed in the former definition, and repeat the refreshing of the report datasource as described above; that should fix the problem.
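A quick way to check the generated file without eyeballing it: any XML parser will reject an unclosed element. A small Python sketch (it takes the XML as text; read the test report's output file in however you like):

```python
import xml.etree.ElementTree as ET

def is_well_formed(xml_text):
    """Return (True, None) if the XML parses, else (False, parser error)."""
    try:
        ET.fromstring(xml_text)
        return True, None
    except ET.ParseError as exc:
        return False, str(exc)
```

If this reports a parse error, there is no point adjusting links in Crystal - the data definition itself is emitting broken XML and needs to be rebuilt.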

Is it possible to list fields actually used from result sets in SSRS?

I have dozens of SSRS reports and corresponding stored procedures that I am slowly cleaning up and optimizing. I am finding a number of datasets that have extra fields that are not used in the actual report; they are usually the result of a SELECT * that is slowing down the SP end of things significantly in a few places.
I am wondering if there is a quicker way to see which fields from the datasets are used/unused to more efficiently clean up the stored procedures. It seems like clicking through to each <<expr>> and checking them off is a ridiculous way to go about this.
I'll tell you, I wish I knew of a tool that simplifies this for you, and I don't off the top of my head. But I know for sure you can search the text of the RDL and find these details; I do this often when troubleshooting problems with existing reports (or SSIS packages). The .rdl files are human-readable XML: you can open any one in a text editor and search the text - even Visual Studio, if you use "Open File" rather than the Report project.
Given that, of course you can write a routine in your preferred programming language that:
finds the query or proc in the data source of the report
runs it (as metadata only) to get all the columns
searches for each one in the text of the RDL
You can be more specific if you use XML queries to limit the search to more realistic targets, like display-box data sources.
Sorry I don't have a more convenient answer, like an existing tool. If I remember, I may look for one, because this is a big problem for "corporate coders" like us. If I can't find one, maybe I'll write the script in .NET and come back and post it :)
Yes you can! Use the following steps:
Right-click the .rdl file and
click View Code. This will show the XML.
Ctrl + F to get a search text box.
Enter the name of any field in the text box.
Use the forward-arrow icon to see the number of occurrences of the searched field name.
If the field appears both in the dataset and in a tablix cell value, it is being used in the report.
If the field appears only in the dataset and not in any tablix cell value, it is not being used in the report.

Export tables from SQL Server to be imported to Oracle 10g

I'm trying to export some tables from SQL Server 2005 and then create those tables and populate them in Oracle.
I have about 10 tables, varying from 4 columns up to 25. I'm not using any constraints/keys, so this should be reasonably straightforward.
Firstly I generated scripts to get the table structure, then modified them to conform to Oracle syntax (e.g. changed nvarchar to varchar2).
Next I exported the data using SQL Server's export wizard, which created a CSV flat file. However, my main issue is that I can't find a way to force SQL Server to double-quote column names. One of my columns contains commas, so unless I can find a method for SQL Server to quote column names, I will have trouble when it comes to importing this.
Also, am I going the difficult route, or is there an easier way to do this?
Thanks
EDIT: By quoting, I'm referring to quoting the column values in the CSV. For example, I have a column which contains addresses like
101 High Street, Sometown, Some
county, PO5TC053
Without changing it to the following, it would cause issues when loading the CSV
"101 High Street, Sometown, Some
county, PO5TC053"
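If the export wizard won't produce that quoting, one workaround is to re-export through a small script - any CSV library worth its salt quotes values containing the delimiter automatically. A Python sketch (the sample row is made up from the question's address example):

```python
import csv
import io

# QUOTE_MINIMAL (the default) quotes any value that contains the
# delimiter, a quote character, or a newline - exactly what a value
# like "101 High Street, Sometown, ..." needs to survive the round trip.
buf = io.StringIO()
writer = csv.writer(buf, quoting=csv.QUOTE_MINIMAL)
writer.writerow([1, "101 High Street, Sometown, Some county, PO5TC053"])
print(buf.getvalue())
```

The same idea works with any DB driver as the row source; switching to `csv.QUOTE_ALL` quotes every field, which some loaders find easier to configure.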
After looking at some options with SQL Developer, and at manually trying to export/import, I found a utility in SQL Server Management Studio that gets the desired results and is easy to use. Do the following:
Go to the source schema on SQL Server
Right click > Export data
Select source as current schema
Select destination as "Oracle OLE provider"
Select properties, then add the service name into the first box, then username and password, be sure to click "remember password"
Enter query to get desired results to be migrated
Enter table name, then click the "Edit" button
Alter mappings, change nvarchars to varchar2, and INTEGER to NUMBER
Run
Repeat process for remaining tables, save as jobs if you need to do this again in the future
Use the SQLDeveloper migration tools
I think quoting column names in Oracle is something you should avoid. It causes all sorts of problems.
As Robert has said, I'd strongly advise against quoting column names. The result is that you'd have to quote them not only when importing the data, but also whenever you want to reference that column in a SQL statement - and yes, that probably means in your program code as well. Building SQL statements becomes a total hassle!
From what you're writing, I'm not sure if you are referring to the column names or to the data in those columns. (Can SQL Server really have a comma in a column name? I'd be really surprised if there was a good reason for that!) Quoting the column content should be done for any string-like columns (although I have found that other characters often work better, as the need to "escape" quotes becomes another issue). If you're exporting to CSV that should be an option... but then, I'm not familiar with the export wizard.
Another idea for moving the data (depending on the scale of your project) would be to use an ETL/EAI tool. I've been playing around a bit with the Pentaho suite and their Kettle component. It offered a good range of options to move data from one place to another. It may be a bit oversized for a simple transfer, but if it's a big "migration" with the corresponding volume, it may be a good option.