SSAS Tabular calculated measures missing from Power View fields list - sql-server

I have defined a calculated measure named "Gross Margin" for my "FactInvoiceLineItem" table. I can see this measure in the Measures dimension (along with several others).
These measures work fine in a PivotTable, but they are absent from the field list in Power View.
I've seen similar issues with PowerPivot and, for example, date columns, but that issue shouldn't apply here since the result of the calculation is numeric. We've tried wrapping the calculation in CALCULATE() anyway, but it didn't help.
There are examples of using measures with Power View from a PowerPivot model. Am I missing some setting in my model, or is this a quirk with Power View and SSAS Tabular?

As mmarie suggests, it's not a limitation of Power View with Tabular, except that Power View only supports numeric measures.
Our calculated measure included some custom formatting using FORMAT() to apply parentheses to negative numbers, which turned our nice numeric calculation into text. Thus, Power View wouldn't display our measures because they were no longer numeric!
Stripping the FORMAT() out returned the calculation to a numeric type and made it available in Power View.
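For illustration, a minimal DAX sketch of the before and after; the Revenue and Cost column names are assumptions, not taken from the original model:

-- Text-typed: FORMAT() returns a string, so Power View hides the measure
Gross Margin :=
FORMAT (
    SUM ( FactInvoiceLineItem[Revenue] ) - SUM ( FactInvoiceLineItem[Cost] ),
    "#,0.00;(#,0.00)"  -- parentheses for negative numbers
)

-- Numeric: keep the measure numeric and set the format string on the measure in the model instead
Gross Margin :=
SUM ( FactInvoiceLineItem[Revenue] ) - SUM ( FactInvoiceLineItem[Cost] )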

Related

Does Apache Superset support Weighted Averages?

I'm trying to use Apache Superset to create a dashboard that will display the average rate of X/Y at different entities such that the time grain can be changed on the fly. However, all I have available as raw data is daily totals of X and Y for the entities in question.
It would be simple to do if I could just get a line chart that displayed sum(X)/sum(Y) as its own metric, where the sum range would change with the time grain, but that doesn't seem to be supported.
Creating a function in SQLAlchemy that calculates the daily rates and then uses that as the raw data is also an insufficient solution, since taking the average of that over different time ranges would not be properly weighted.
Is there a workaround I'm not seeing?
Is there a way to use Druid or some other tool to make displaying a quotient over a variable range possible?
My current best solution is to just set up different charts for each time grain size (day, month, quarter, year), but that's extremely inelegant and I'm hoping to do better.
There are multiple ways to do this. One is using the Metric editor, as shown below; in this case the metric definition is stored as part of the chart.
Another way is to define a metric in the "datasource editor", where the metric is stored with the datasource definition and becomes reusable for any chart using that datasource.
Side note: depending on the database you use, you may have to CAST from, say, an integer to a numeric type as I did in the example, or multiply by 100, in order to get a useful result.
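For reference, a sketch of the kind of expression that goes into such a metric definition; the column names x and y are assumptions, and NULLIF guards against division by zero:

-- Weighted average as a single metric: the sums re-aggregate correctly at any time grain
SUM(CAST(x AS NUMERIC)) / NULLIF(SUM(CAST(y AS NUMERIC)), 0)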

How to calculate a percentage using columns of data in an SSRS tablix that I have grouped

I have an SQL query that gives me a data set with 3 columns:
Contract Code
Volume
MonthRegistered
I want to present this data grouped on rows by Contract_Code and columns by MonthRegistered.
I then want to calculate a percentage difference between the months.
I will only ever in this case have 2 months worth of data - Each 1 year apart.
I am trying to express the percentage variation from one year to the next for each row of data.
I did this expression:
=(Fields!Volume.Value)/(Fields!Volume.Value)
but clearly it was not right: it does not address the two month columns independently.
I did format the TABLIX text box as a percentage so at least I figured that one out.
In the TechNet article Calculating Totals and Other Aggregates (Reporting Services) it states: "You can also write your own expressions to calculate aggregate values for one scope relative to another scope." I couldn't find a reference for how to address the separate scopes.
I would appreciate any pointers on this one please!
Sorry for posting my examples as JPGs rather than actual text, but I needed to hide some of the data...
This only works because you will only ever have two months' worth of data to compare. You have to make sure that your SQL has already ordered by MonthRegistered; if you do not order in your query, then SSRS's own sorting will be applied to determine which value is first and last.
=First(Fields!Volume.Value) / Last(Fields!Volume.Value)
Because you have performed the aggregation in SSRS, you may have to wrap each statement in a Sum expression.
It would be advisable to perform the aggregation in SQL where possible, if you only plan on showing it in this way.
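Building on that, a sketch of the year-over-year variation expressed as a percentage, assuming the query is ordered so that First returns the earlier year:

=(Last(Fields!Volume.Value) - First(Fields!Volume.Value)) / First(Fields!Volume.Value)

With the text box formatted as a percentage, this yields the relative change from the first month to the second.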

SSRS not sorting details as expected

I have a report with a tablix that is grouped on a supplier group. I have a details group that contains a Sequence, Block and Product number. I need the report to sort based on the Sequence column (first column in the details group). My problem is that even with the sort applied at the dataset, and details group level I am getting a sort that goes 1,4,5,6,7,8,9,10,11,2,3 etc. I have 32 sequences. I would understand if the sort went 1,10,11 etc but it is jumping over 2 and 3. The field is defined as an integer so I can't figure it out. When I look at this in the query in the dataset the sorting is correct. I am at a loss. I have tried applying the sort at every level within the report and also not applying the sort at all. Anyone have any ideas?
Can you share which version of SSRS and also include an image of the report output, tablix, and row groups?
I would recommend you remove the ORDER BY from your SQL in the data set. In my experience, SSRS can sort this more efficiently in your tablix than SQL Server.
Also, I prefer to focus any sorting at the closest level of user visibility (I only think of sorting as useful for users). Therefore, I recommend you apply the sorting to the row group. Also, if the sequence values you are using, which appear to be integers, are ever converted to text, make sure you convert the field back to a number in your sort expression. I suggest you convert it to an integer in the sort expression even if you are certain that it is already an integer, at least for testing.
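A sketch of what that sort expression could look like on the details group; the field name Sequence is assumed from the question:

=CInt(Fields!Sequence.Value)

If the underlying value ever arrives as text, CInt forces a numeric comparison instead of a character-by-character one.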

Mutually exclusive facts. Should I create a new dimension in this case?

There is a star schema that contains 3 dimensions (Distributor, Brand, SaleDate) and a fact table with two fact columns: SalesAmountB, measured in boxes and stored as an integer, and SalesAmountH, measured in hectolitres and stored as a numeric. The end user wants to select which fact to show in a report. The report is going to be presented via SharePoint 2010 PPS.
So please help me determine which variant suits me best:
1) Add a new dimension like "Units" with two values, Boxes and Hectolitres, and use the built-in filter for this dimension (the fact data types are incompatible, though).
2) Make two separate tables for the two facts and build two cubes, then select either as the data source.
3) Leave the model as it is and use the PPS API in SharePoint to select the fact to show.
So any ideas?
I think the best way to implement this is to use separate fields for SalesAmountB and SalesAmountH in the fact table, then create two separate measures in BIDS and control the visibility through MDX. By doing this, you avoid the complexity of duplicating the whole data set or creating separate cubes.
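As a rough illustration of the two-measure approach, an MDX query sketch; the cube and measure names ([Sales], [Sales Amount B], [Sales Amount H]) are hypothetical, and the report's filter would swap one measure for the other:

-- Pick one of the two mutually exclusive measures per query
SELECT
  { [Measures].[Sales Amount B] } ON COLUMNS,  -- or [Measures].[Sales Amount H]
  NON EMPTY [Distributor].[Distributor].Members ON ROWS
FROM [Sales]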

To use or not to use computed columns for performance and maintainability

I have a table where I am storing a startingDate in a DateTime column.
Once I have the startingDate value, I am supposed to calculate the
number_of_days,
number_of_weeks,
number_of_months and
number_of_years
all from the startingDate to the current date.
If you were going to use these values in two or more places in the application and you cared about the application's response time, would you rather make the calculations in a view or create computed columns for each so you can query the table directly?
Computed columns are easy to maintain and provide an ideal solution to your problem; I have used such a solution recently. However, be aware the values are calculated when requested (when they are SELECTed), not when the row is INSERTed into the table, so performance might still be an issue. This might be acceptable if you can offload work from the application server to the database server. Views also don't exist until they are requested (unless they are materialised), so, again, there will be an overhead at runtime, but, again, it's on the database server, not the application server.
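A minimal T-SQL sketch of the computed-column approach; the table name dbo.Memberships is a placeholder. Note that because GETDATE() is non-deterministic, these columns cannot be marked PERSISTED or indexed:

-- Non-persisted computed columns: evaluated on SELECT, not stored on INSERT
ALTER TABLE dbo.Memberships ADD
    number_of_days   AS DATEDIFF(DAY,   startingDate, GETDATE()),
    number_of_weeks  AS DATEDIFF(WEEK,  startingDate, GETDATE()),
    number_of_months AS DATEDIFF(MONTH, startingDate, GETDATE()),
    number_of_years  AS DATEDIFF(YEAR,  startingDate, GETDATE());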
Like nearly everything: It depends.
As #RedX suggests, there is probably not much of a performance difference either way, so it becomes a question of how you will use them. To me this is more of a feel thing.
Using them more than once wouldn't necessarily drive me immediately to either a view or computed columns. If I only use them in a few places or in low-volume code paths, I might calculate them inline in those places or use a CTE. But if they are in widespread or heavy use, I would agree with a view or computed column.
You would also want them in a view or computed column if you want them available via ORM tools.
Am I using those "computed columns" individually in places, or am I using them in sets? If using them in sets, I probably want a view of the table that includes them all.
When I need them, do I usually want them associated with data from a particular other table? If so, that would suggest a view.
Am I basing updates to the original table on those computed values? If so, then I want computed columns, to avoid joining the view in those cases.
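For comparison, a sketch of the view alternative under the same assumed names; SQL Server allows GETDATE() in a (non-indexed) view:

-- One reusable definition; ORM tools and ad hoc queries can read it like a table
CREATE VIEW dbo.vMembershipDurations AS
SELECT m.startingDate,
       DATEDIFF(DAY,   m.startingDate, GETDATE()) AS number_of_days,
       DATEDIFF(WEEK,  m.startingDate, GETDATE()) AS number_of_weeks,
       DATEDIFF(MONTH, m.startingDate, GETDATE()) AS number_of_months,
       DATEDIFF(YEAR,  m.startingDate, GETDATE()) AS number_of_years
FROM dbo.Memberships AS m;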
Calculated columns may seem an easy solution at first, but I have seen companies have trouble with them: when they try to do ETL with CDC (real-time Change Data Capture) using tools like Attunity, the tool will not recognize the calculated columns, since the values are not there permanently. So there are some issues. Also, if the columns will be retrieved many, many times by users, you will save time in the long run by putting that logic in the ETL tool or a procedure and writing it once to the database, instead of calculating it many times for each request.
