Replicating Power Query formatting to a Pivot Table?

The default designs (colors etc.) for a Power Query table seem to use a default template that isn't available among the default designs for a PivotTable (also on its Design tab). Is there a way to copy the formatting from one and apply it as a custom design on the other?

Related

Is it OK to have two dimensions that are the same but one is less deep?

I have a fact table with an account number and some numbers associated with it.
My DimAccount has a very long hierarchy: level1, sub-level2… up to sub-level20.
When reporting in Power BI this makes it very hard to navigate…
My requirement is to have a sort of different/new DimAccount which is less deep (it will be similar to DimAccount but with a different grouping).
So, I want to create a different mapping. Where should this be done?
In the backend,
having some sort of DimAccount2 with fewer hierarchy levels, or
creating a new table? Perhaps a mapping table, where I just map sub-levels to a shallower hierarchy?
Or should this be corrected in the cube/Power BI, creating DAX measures that do the mapping manually?
I am not sure where or how to do it. My goal is to have a DimHighLevelAccount, but it is not that I can just remove sub-levels; the mapping will also be different, perhaps grouping some categories from levels 5, 6 and 7 together...
Power BI always has its own data model (called a "dataset" in Power BI docs), derived in this case from the data model in your data warehouse. And the Power BI data model has some modeling capabilities that your DW does not have.
So the Power BI data model should load/expose only the tables and columns from your data warehouse that are useful for the use case (you may have a handful of different Power BI datasets for the same DW tables). And then add additional modeling, like adding measures, hiding columns, and declaring hierarchies.
So in this case, have a single Account dimension table, but when you bring it into Power BI, leave out the hierarchy levels that you don't want, add the remaining ones to a hierarchy, and hide the individual levels from the report view, so the report developer sees a single hierarchical property.
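If some of the regrouping (for example, collapsing levels 5-7 into coarser categories) is easier to express in the warehouse, the mapping-table option raised in the question could look roughly like the sketch below. Every table and column name here is hypothetical; DimAccount remains the single source of truth and the view only exposes the shallower shape.

    -- Hypothetical mapping table: collapses deep sub-levels into coarser buckets.
    CREATE TABLE dbo.AccountGroupMap
    (
        SubLevelKey    int          NOT NULL PRIMARY KEY, -- key of the deep member (e.g. a level5/6/7 item)
        HighLevelGroup varchar(100) NOT NULL              -- the coarser reporting bucket it rolls up to
    );

    -- Hypothetical shallow dimension exposed as a view over the real DimAccount.
    CREATE VIEW dbo.DimHighLevelAccount AS
    SELECT
        a.AccountKey,
        a.Level1,
        a.Level2,
        m.HighLevelGroup
    FROM dbo.DimAccount AS a
    JOIN dbo.AccountGroupMap AS m
        ON m.SubLevelKey = a.Level5Key;

Either way, the Power BI model can then declare a hierarchy over the remaining levels and hide the individual columns, as described above.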

Tableau Performance - Custom SQL Queries

I am essentially building one report that ingests two types of data. One is the receptionist data, which is each receptionist's stats by day. But then the data gets a little more granular and is each call for each receptionist.
Essentially the report does two things: it gives receptionist performance, and then a person can click and prompt the same dashboard sheet to update with the specific call log, etc.
So basically this data set is huge and held as an extract so it will be faster, and I limit the data to this month and last month (the minimum requirement). I have also eliminated any unnecessary columns.
I am curious whether I should create two separate custom queries in Tableau and then create a referential field, or bring both custom queries into one workbook and join them together. At first I had the two connections separate, but now I have brought them together and am noticing some performance issues. What are some of my options?
It would be better to have two separate queries, since the first view doesn't need all the additional details you want to show in the drill-down.
Use an action filter and link the two sheets (which use different data sources) by selecting the specific fields when configuring the action filter.
Performance-wise, this is a good approach.
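As a rough illustration of the two-query split (a sketch only: the custom SQL dialect depends on the underlying database, this one assumes SQL Server, and every table and column name is hypothetical):

    -- Hypothetical summary query: one row per receptionist per day.
    SELECT receptionist_id,
           CAST(call_start AS date) AS call_date,
           COUNT(*)                 AS calls_handled,
           AVG(duration_sec)        AS avg_duration_sec
    FROM dbo.CallLog
    WHERE call_start >= DATEADD(MONTH, DATEDIFF(MONTH, 0, GETDATE()) - 1, 0) -- first day of last month
    GROUP BY receptionist_id, CAST(call_start AS date);

    -- Hypothetical detail query: one row per call, same date window, only the
    -- columns the drill-down sheet actually needs.
    SELECT call_id, receptionist_id, call_start, duration_sec, outcome
    FROM dbo.CallLog
    WHERE call_start >= DATEADD(MONTH, DATEDIFF(MONTH, 0, GETDATE()) - 1, 0);

Keeping the date filter and the column trimming in the queries themselves means the extract stays small for both sheets.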

SSAS Tabular calculated measures missing from Power View fields list

I have defined a calculated measure named "Gross Margin" for my "FactInvoiceLineItem" table. I can see this measure in the Measures dimension (along with several others).
These measures work fine in a PivotTable, but they are absent from the field list in Power View.
I've seen similar issues with PowerPivot and, e.g., date columns, but that issue shouldn't apply here since the result of the calculation is numeric. We've tried wrapping the calculation in CALCULATE() anyway, but it didn't help.
There are examples of using measures with Power View from a PowerPivot model. Am I missing some setting in my model, or is this a quirk with Power View and SSAS Tabular?
As mmarie suggests, it's not a limitation of Power View and Tabular, except that Power View only supports numeric measures.
We had included in our calculated measure some custom formatting using FORMAT() to apply parentheses to negative numbers, which rendered our nice numeric calculation into text. Thus, Power View wouldn't display our measures because they were no longer numeric!
Stripping the FORMAT() out returned the calculation to a numeric type and made it available in Power View.
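The pitfall is easy to reproduce outside the model, too. This is a T-SQL illustration (not the original DAX measure): FORMAT() in T-SQL, like FORMAT() in DAX, returns a string rather than a number.

    -- FORMAT() returns nvarchar, so the "formatted number" is actually text:
    SELECT
        SQL_VARIANT_PROPERTY(FORMAT(-1234.5, '#,0.00;(#,0.00)'), 'BaseType') AS formatted_type, -- nvarchar
        SQL_VARIANT_PROPERTY(-1234.5, 'BaseType')                            AS raw_type;       -- numeric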

Are there any downsides to using NewSequentialID?

As the question states, what are the downsides of using NewSequentialID as a column's default value vs NewID()? The obvious advantage is that it won't fragment our index as much.
Is there any concern for ever maxing out the sequence?
I don't see how a default value on a field could really be a disadvantage.
If you want to control the ids of some records before you insert them, it can be handy to use NEWID() instead of the default sequential id (so you can generate the records and their associations before you interact with the database, and you won't have to query it afterwards to get the ids back). Although the two are not mutually exclusive...
As granadaCoder said, the sequential ID could be inferred, but IMO the benefit is so great in terms of performance and maintenance that it would be a mistake not to use it.
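For reference, a minimal sketch of how the two can coexist; the table and column names are hypothetical:

    -- NEWSEQUENTIALID() is only allowed in a DEFAULT constraint; its roughly
    -- ordered values reduce page splits on the clustered index.
    CREATE TABLE dbo.Orders
    (
        OrderId uniqueidentifier NOT NULL
            CONSTRAINT DF_Orders_OrderId DEFAULT NEWSEQUENTIALID()
            CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED,
        OrderDate datetime2 NOT NULL
    );

    -- The default and NEWID() are not mutually exclusive: the application can
    -- still generate a key up front and supply it explicitly.
    INSERT INTO dbo.Orders (OrderId, OrderDate)
    VALUES (NEWID(), SYSUTCDATETIME());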
NEWSEQUENTIALID() is not supported by Azure SQL Database.

To use or not to use computed columns for performance and maintainability

I have a table where I am storing a startingDate in a DateTime column.
Once I have the startingDate value, I am supposed to calculate the
number_of_days,
number_of_weeks,
number_of_months and
number_of_years,
all from the startingDate to the current date.
If you are going to use these values in two or more places in the application and you care about the application's response time, would you rather make the calculations in a view or create computed columns for each, so you can query the table directly?
Computed columns are easy to maintain and provide an ideal solution to your problem – I have used such a solution recently. However, be aware the values are calculated when requested (when they are SELECTed), not when the row is INSERTed into the table – so performance might still be an issue. This might be acceptable if you can off-load work from the application server to the database server. Views also don’t exist until they are requested (unless they are materialised) so, again, there will be an overhead at runtime, but, again it’s on the database server, not the application server.
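As a sketch of the computed-column option under the assumptions above (table and column names are hypothetical): because the expressions call GETDATE(), which is non-deterministic, the columns cannot be marked PERSISTED and are evaluated at SELECT time, exactly as described.

    CREATE TABLE dbo.Enrolment
    (
        EnrolmentId  int IDENTITY(1,1) PRIMARY KEY,
        startingDate datetime NOT NULL,
        -- Evaluated when SELECTed; note DATEDIFF counts boundary crossings,
        -- not full elapsed periods.
        number_of_days   AS DATEDIFF(DAY,   startingDate, GETDATE()),
        number_of_weeks  AS DATEDIFF(WEEK,  startingDate, GETDATE()),
        number_of_months AS DATEDIFF(MONTH, startingDate, GETDATE()),
        number_of_years  AS DATEDIFF(YEAR,  startingDate, GETDATE())
    );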
Like nearly everything: It depends.
As #RedX suggests, there is probably not much of a performance difference either way, so it becomes a question of how you will use them. To me this is more of a feel thing.
Using them more than once wouldn't necessarily drive me immediately to either a view or computed columns. If I only use them in a few places or in low-volume code paths, I might calculate them inline in those places or use a CTE. But if they are in widespread or heavy use, I would agree with a view or computed column.
You would also want them in a view or computed column if you want them available via ORM tools.
Am I using those "computed columns" individually, or am I using them in sets? If using them in sets, I probably want a view of the table that includes them all.
When I need them, do I usually want them associated with data from a particular other table? If so, that would suggest a view.
Am I basing updates to the original table on those computed values? If so, then I want computed columns, to avoid joining the view in those cases.
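If the view route fits better, a hypothetical equivalent (building on the dbo.Enrolment sketch above) might be:

    -- The same derived values exposed as a view instead of computed columns.
    CREATE VIEW dbo.EnrolmentDurations AS
    SELECT
        e.EnrolmentId,
        e.startingDate,
        DATEDIFF(DAY,   e.startingDate, GETDATE()) AS number_of_days,
        DATEDIFF(WEEK,  e.startingDate, GETDATE()) AS number_of_weeks,
        DATEDIFF(MONTH, e.startingDate, GETDATE()) AS number_of_months,
        DATEDIFF(YEAR,  e.startingDate, GETDATE()) AS number_of_years
    FROM dbo.Enrolment AS e;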
Calculated columns may seem an easy solution at first, but I have seen companies have trouble with them: when they try to do ETL with real-time Change Data Capture (CDC) tools like Attunity, the tool will not recognize the calculated columns, since the values are not stored permanently. So there are some issues. Also, if the columns will be retrieved many, many times by users, you will save time in the long run by putting that logic in the ETL tool or a procedure and writing it once to the database instead of calculating it many times for each request.
