SSRS Calculated Field used in multiple Reports - sql-server

I am working in Visual Studio 2013 and SQL Server 2008 R2.
A really long expression (60-70 IIFs) is in a calculated field that is used in about 35 reports. The calculated field expression matches a value from the data row (coming in from T-SQL) and assigns a 'Group Name' to the row.
Example Data:
ID  Prod_Num  Amount
1   123       15
2   234       20
3   345       25
Example Expression:
=IIF(Fields!Prod_Num.Value = "123", "Shirts",
 IIF(Fields!Prod_Num.Value = "234", "Pants",
 IIF(Fields!Prod_Num.Value = "345", "Socks", "Other")))
The problem is that when the Prod_Num list is added to or modified, the changes have to be made in all 35 reports.
What would be a good way to keep all of this in one place, so that when there are changes, they only need to be made in that one place?
I don't have CREATE TABLE rights on the database, and I don't know if that is even an option - though if I did have the rights, I would put all the Prod_Nums and categories (Shirts, Pants, ...) into a table and then just do the work in the SQL for the report.
I thought of a T-SQL function, but some of the reports use a linked server to pull data from a Progress DB, and I don't know how that would work with a SQL Server function.
I'd appreciate any help/suggestions.
Thanks!

You can put your expression into a function in a Custom Code Assembly in SSRS.
Then you add that assembly to all the reports that need it, and all you have to do on each report is call that function in an expression.
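For reference, SSRS custom code is written in VB.NET. A minimal sketch of such a function, using the example product numbers and group names from the question (the function name is just an illustration):

```vb
' In a custom code assembly (or, as a lighter-weight but per-report
' alternative, under Report > Report Properties > Code):
Public Function ProductGroup(ByVal prodNum As String) As String
    ' The single place the Prod_Num -> group mapping is maintained
    Select Case prodNum
        Case "123" : Return "Shirts"
        Case "234" : Return "Pants"
        Case "345" : Return "Socks"
        Case Else  : Return "Other"
    End Select
End Function
```

Each report's expression then reduces to a one-liner such as `=Code.ProductGroup(Fields!Prod_Num.Value)` (embedded code) or the fully qualified class call for an assembly function, so a new product number means editing one function instead of 35 expressions.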
By the way, you should be using Visual Studio 2008 to build reports for SSRS 2008 R2. Reports built in VS2013 are not guaranteed to work on SSRS 2008 R2.

You will have the same issue wherever you place this logic if you are not enforcing the relationship in your data, i.e. by creating a type table for your products.
The only leverage you can gain is to move the hardcoded values from many locations to just one, so that when you do update the table(s), you can clearly document the one other location that must also be updated. Here are a few examples:
As Tab Allerman pointed out, create a class function inside an assembly and embed that assembly into each report. You then just update the server with a new assembly when your choices change.
Create a custom SP in a database that every report will have access to, even if it is not your report's main database. (You can create multiple data sources in a report.)
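A hedged sketch of that stored procedure, assuming you (or a DBA) can create it in some database every report can reach; since the asker lacks CREATE TABLE rights, the lookup can live in a VALUES list inside the procedure itself. All names here are placeholders:

```sql
CREATE PROCEDURE dbo.usp_ProductGroups
AS
BEGIN
    -- The one place the Prod_Num -> group mapping is maintained.
    -- A table value constructor stands in for a real lookup table.
    SELECT v.Prod_Num, v.GroupName
    FROM (VALUES ('123', 'Shirts'),
                 ('234', 'Pants'),
                 ('345', 'Socks')) AS v(Prod_Num, GroupName);
END;
```

Each report then adds a small dataset that executes the procedure and joins (or does a Lookup) against the main dataset, and a change to the mapping means editing the procedure once.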
Use a web service as a data source for your reports and put the types in one location this way.
Use an XML document as a data source for your reports and put the types in one location this way.
Ask the person(s) maintaining the database why the heck the products are not typified.

Related

Pivot shows single record field value that doesn't agree with tabular data model value (source data) - misaligned?

I have found something I'm a little concerned about. I was trying to get a measure working and had it as a calc column previously, so I was comparing the two different outputs and checking for line-by-line differences. I picked a good one and investigated: the calc column value was zero, the measure value was £42. The calc column is correct. So I drilled into my measure to see what I could find. Alarmingly, I found that for a certain field called DocumentStatus the pivot showed it as "LIVE". But if I go to the table in Visual Studio and find that order, the status is "COMPLETED". I have checked and rechecked. There is only one order on this table with the right DocumentNo. The pivot seems to think this order is LIVE, but the source data definitely shows it as completed. How can this happen?
So strictly speaking the measure is actually calculating correctly: because it sees the order as live, picking the £42 value is correct for that formula. The calc column is also correct: because it sees the order as completed, picking zero as the final value is correct.
It is the fact that the record is being seen as both live and completed that is throwing me. I'm concerned, to say the least. This feels like a bug. I have checked and have no other filters in play. I have checked other ways too - like filtering on all orders with a £42 value in a particular field - none of them have a LIVE status. It's almost like that field is misaligned in the background.
Has anyone ever seen this?
TIA
SSAS Tabular; SQL Server 2016, Visual Studio 2017
Edit 5 Jul:
Thanks for your comments. Unfortunately I cannot provide sample data due to strict confidentiality. I have provided two screenshots below, both showing the same record: (1) is the view from the Excel pivot table that is connected to the SSAS tabular data model; (2) is the view of the table in Visual Studio (note how the value of the Accrued Income measure in this view is not the same as the AccruedIncome total in the Excel pivot table).
I am wondering if this is to do with the way I have deployed recent edits to the data model. Every time I make a change I run the deploy & build commands so that I can refresh the Excel reports and see whether they are working as intended. What I don't know is: when I do this, am I deploying the metadata only, or the metadata and the actual data (several hundred thousand rows across a dozen or so tables)? Is the issue that the pivot is looking at an older set of data than the one Visual Studio is looking at? After I deploy & build, do I then need to process the SSAS tabular object to update the data?
Also note how the DocumentStatus is different in the 2 views.
Excel pivot
Visual Studio

crystal reports missing columns

OK, first question here so go easy!
Let's start with some quick background - I have been working with Crystal almost daily for the past 15 years or so, so I like to think I am not too much of a dummy.
Today, I have stumbled across a problem I have never seen.
My source data is from SQL Server 2012 Standard.
I have created a view, nothing too complicated. It grabs PartNo and Description from an Inventory Main table. Then a few other columns from other related tables so I can see the data neatly in a single view, so stuff like carton qty (how many units go into 1 carton), height, length, depth.
Now, bear in mind I have done these kinds of views so many times in the past I couldn't even count.
Now, in Crystal Reports 2013, I connect to SQL Server via OLE DB using sa credentials and find my view. I pop over to the "Links" tab in the Database Expert, and the columns created from the related tables are not listed! Only PartNo and Description are visible.
I have been scratching my head over this for the past few hours, and the only thing I can put it down to is some kind of weird Microsoft update.
My SQL view results:
SQL view
What I see in Crystal
Database Expert, Links
The other way to test this is, instead of using a view or table, to add a Command and use SELECT * FROM yourview.
The only downside I can think of is that you can't use SQL Expressions with a Command connection.
Another option is to define the data types explicitly in the view - the same goes for dates. Crystal does not like implicit conversions and will otherwise treat the column as a string.
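A minimal sketch of that last suggestion - casting each column to an explicit type inside the view so Crystal sees unambiguous data types. The table and column names here are hypothetical stand-ins for the ones described in the question:

```sql
CREATE VIEW dbo.vw_PartDetails
AS
SELECT  i.PartNo,
        i.Description,
        CAST(c.CartonQty AS int)           AS CartonQty,   -- units per carton
        CAST(d.Height    AS decimal(9, 2)) AS Height,
        CAST(d.Length    AS decimal(9, 2)) AS Length,
        CAST(d.Depth     AS decimal(9, 2)) AS Depth
FROM dbo.InventoryMain AS i
JOIN dbo.CartonInfo    AS c ON c.PartNo = i.PartNo
JOIN dbo.Dimensions    AS d ON d.PartNo = i.PartNo;
```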

I need to make sure 2 DBs are the same

I'm doing it programmatically (I'm a newbie to SQL). I'm getting the data per table within the first DB using a query like the one below, where the table name comes from a list of table names that I need to make sure are there.
I then check whether the same table in DB X has the corresponding values, and list all the fields that do not have the same values - recording the table, field name, and row.
"SELECT * FROM [Dev.Chris21].[dbo].[" & PayrollTablemaskedarray(xxxxxx-2) & "]"
I can copy the whole thing into Excel, but I'm wondering: is there a way to do this using SQL?
Thanks
Since you mention that you're doing it programmatically, I assume you're using Visual Studio. If so, you can take advantage of SQL Server Data Tools (SSDT) to compare two database schemas or two database data sets. You get this out of the box with VS2012 or VS2013 (and earlier versions too). Might be worth a look...
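If a pure-SQL check is enough, a lightweight alternative (table names below are placeholders) is to compare each table in both directions with EXCEPT - any rows returned exist in one database but have no exact match in the other:

```sql
-- Rows in Dev.Chris21 with no exact match in the other database
SELECT * FROM [Dev.Chris21].[dbo].[SomeTable]
EXCEPT
SELECT * FROM [OtherDB].[dbo].[SomeTable];

-- And the reverse direction, to catch rows missing from Dev.Chris21
SELECT * FROM [OtherDB].[dbo].[SomeTable]
EXCEPT
SELECT * FROM [Dev.Chris21].[dbo].[SomeTable];
```

Both result sets empty means the two tables hold identical data; note that EXCEPT requires the two tables to have compatible column lists.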

Can I change the datasource after a ssrs report is created?

I need to change the data source for my SSRS reports. Some field names and DIM/FACT table names have changed on the SQL Server 2008 database used to create the reports. How can I change the data source without losing all of the work I have done? Some field names are not the same or have been removed.
The reports were already uploaded/deployed from Visual Studio and copied to SharePoint 2010. Is there a way to modify the original data source without having to rewrite the whole drill-down report?
I am new to SSRS and I hope what I am asking makes sense. :)
The Solution Explorer and Properties in Visual Studio were modified, but the Report Data section (on the left) is still the same. Can someone please help me?
In your example, you have your report splendidly broken out into 3 parts - an RDL, which is your actual Report definition; an RSD, which is your dataset, which houses a reference to a sproc or just your entire query, and maintains information about the field names, data types, etc; and an RDS, which is your datasource, and merely contains a connection string.
As long as the metadata between them remain the same, you can alter any of these files independently of the others - you can completely gut & rewrite your RSD, and as long as the field names, datatypes, and parameters are the same, the RDL will continue to work with no modifications needed. Similarly, you can change your datasource's (RDS) connection string, and as long as the new connection has access to the same objects, your RSD, and thus RDL will work fine.
So, if you merely need to change the data source, simply modify that file, and you're done.
It sounds, however, like you need to change your dataset. This can be as simple or as complicated as you'd like it to be. You could simply update your query, and alias all of the new field names back to what they were before your change. This would require no modifications to your RDL, though could be argued as being a bad practice.
Lastly, if this really is a simple change of replacing one value with another, know that all 3 files - RDS, RSD, RDL - are simply XML. Open them up using the Notepad clone of your choice, and do a find/replace for everything (you can also use "Code View" in Visual Studio).
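For illustration, the RDS is the smallest of the three - just a few XML elements. A sketch of what such a file typically looks like, with placeholder server and database names:

```xml
<?xml version="1.0" encoding="utf-8"?>
<RptDataSource Name="MyDataSource">
  <ConnectionProperties>
    <Extension>SQL</Extension>
    <!-- Point this at the new server/database; nothing else changes -->
    <ConnectString>Data Source=NewServer;Initial Catalog=NewDatabase</ConnectString>
    <IntegratedSecurity>true</IntegratedSecurity>
  </ConnectionProperties>
</RptDataSource>
```

Editing the ConnectString is the whole job when only the data source moves; the RSD and RDL stay untouched.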

Export tables from SQL Server to be imported to Oracle 10g

I'm trying to export some tables from SQL Server 2005 and then create those tables and populate them in Oracle.
I have about 10 tables, varying from 4 columns up to 25. I'm not using any constraints/keys, so this should be reasonably straightforward.
Firstly I generated scripts to get the table structure, then modified them to conform to Oracle syntax (i.e. changed nvarchar to varchar2).
Next I exported the data using SQL Server's export wizard, which created a CSV flat file. However, my main issue is that I can't find a way to force SQL Server to double-quote column names. One of my columns contains commas, so unless I can find a method for SQL Server to quote column names, I will have trouble when it comes to importing this.
Also, am I going the difficult route, or is there an easier way to do this?
Thanks
EDIT: By quoting I'm referring to quoting the column values in the CSV. For example, I have a column which contains addresses like
101 High Street, Sometown, Some
county, PO5TC053
Without changing it to the following, it would cause issues when loading the CSV
"101 High Street, Sometown, Some
county, PO5TC053"
After looking at some options with SQL Developer, and trying to export/import manually, I found a utility in SQL Server Management Studio that gets the desired results and is easy to use. Do the following:
1. Go to the source schema on SQL Server
2. Right click > Export data
3. Select the source as the current schema
4. Select the destination as "Oracle OLE provider"
5. Select properties, then add the service name into the first box, then username and password; be sure to click "remember password"
6. Enter a query to get the desired results to be migrated
7. Enter the table name, then click the "Edit" button
8. Alter the mappings: change nvarchar to varchar2, and INTEGER to NUMBER
9. Run
10. Repeat the process for the remaining tables; save them as jobs if you need to do this again in the future
Use the SQLDeveloper migration tools
I think quoting column names in Oracle is something you should not use. It causes all sorts of problems.
As Robert has said, I'd strongly advise against quoting column names. The result is that you'd have to quote them not only when importing the data, but also whenever you want to reference that column in a SQL statement - and yes, that probably means in your program code as well. Building SQL statements becomes a total hassle!
From what you're writing, I'm not sure if you are referring to the column names or the data in those columns. (Can SQL Server really have a comma in a column name? I'd be surprised if there was a good reason for that!) Quoting the column content should be done for any string-like columns (although I found that other characters usually work better, as the need to "escape" quotes becomes another issue). If you're exporting to CSV, that should be an option... but then I'm not familiar with the export wizard.
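One way to get quoted values without relying on the wizard is to build the quoting into the export query itself, doubling any embedded quote characters per the usual CSV convention. The table and column names below are hypothetical:

```sql
-- Wrap string values in double quotes and escape embedded quotes
-- so commas inside the data don't break the CSV fields.
SELECT  '"' + REPLACE(CustomerName, '"', '""') + '"' AS CustomerName,
        '"' + REPLACE(Address,      '"', '""') + '"' AS Address
FROM dbo.Customers;
```

Exporting the result of this query (instead of the raw table) yields fields like "101 High Street, Sometown, ..." that load cleanly despite the embedded commas.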
Another idea for moving the data (depending on the scale of your project) would be to use an ETL/EAI tool. I've been playing around a bit with the Pentaho suite and their Kettle component. It offered a good range of options to move data from one place to another. It may be a bit oversized for a simple transfer, but if it's a big "migration" with the corresponding volume, it may be a good option.
