Removing Blank Days from Power BI Chart

I have created a weekly request measure like so:
RequestsWeekly =
VAR result =
    CALCULATE (
        DISTINCTCOUNTNOBLANK ( SessionRequests[RequestDateTime] ),
        FILTER (
            'Date',
            'Date'[WeekDate] = SELECTEDVALUE ( 'DateSelector'[WeekDate], MAX ( 'DateSelector'[WeekDate] ) )
        )
    ) + 0
RETURN
    IF ( NOT ISBLANK ( result ), result )
DateSelector is a standalone table (not connected to any other table in the data model) that I created to hold all the dates for a dropdown select on a Power BI dashboard. Unfortunately, as there are fewer dates in the DateSelector table than in the Date table, I get ...
The Date table is a standard date table full of dates from 1970 to 2038. Date connects to SessionRequests via a many-to-one relationship with a single-direction filter. SessionRequests is the main fact table.
I need to get rid of the blank row in my result set via DAX so it does not appear on my chart's X axis. I have tried lots of different DAX combinations, such as BLANK() and NOT ISBLANK. Do I need to create a table for the result set and then try to filter out the blank day there?

You should not check whether the result is blank, but whether VALUE ( Table[DayNameShort] ) exists for your current row context:
RequestsWeekly =
VAR result =
    CALCULATE (
        DISTINCTCOUNTNOBLANK ( SessionRequests[RequestDateTime] ),
        FILTER (
            'Date',
            'Date'[WeekDate]
                = SELECTEDVALUE (
                    'DateSelector'[WeekDate],
                    MAX ( 'DateSelector'[WeekDate] )
                )
        )
    ) + 0
RETURN
    IF (
        NOT ISBLANK (
            VALUE ( Table[DayNameShort] ) -- put the correct table name here
        ),
        result
    )
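As a side note, and not part of the answer above: the `+ 0` at the end of the measure is what converts blanks into zeros and forces otherwise-empty categories onto the axis. If zeros are not actually needed, a minimal alternative sketch is simply to drop the `+ 0` and the IF wrapper, so the measure returns BLANK() for days with no requests, which most Power BI visuals exclude from the axis by default:

```dax
RequestsWeekly =
-- No "+ 0" and no IF wrapper: blank results stay blank,
-- so days with no data are dropped from the chart axis
CALCULATE (
    DISTINCTCOUNTNOBLANK ( SessionRequests[RequestDateTime] ),
    FILTER (
        'Date',
        'Date'[WeekDate] = SELECTEDVALUE ( 'DateSelector'[WeekDate], MAX ( 'DateSelector'[WeekDate] ) )
    )
)
```

Whether this fits depends on whether the chart should show explicit zeros for quiet days within the selected week.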

Related

POWER BI - DAX - Measure filter

I have a DAX measure. This measure returns a single piece of data: the string "GOOGLE";"YOUTUBE";"AMAZON".
I want to use this one-line string result in FILTER.
CALCULATE(SUM(_TABLE);_TABLE.COMPANIESNAME; FILTER(_TABLE.COMPANIESNAME IN { mymeasure } ))
Can anyone help me solve this problem?
Thank you for the help.
There are probably way better ways to do what you want. You are treating Power BI like a relational database when you should be using it like a Star Schema. But without more info, I'm just going to answer the question.
Here's my sample table:
// Table
let
    Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45WcsxNrMrPU9JRMlSK1YlWcs/PT89JBXKNwNzI/NKQ0iQQ3xjMd0tMTk3Kz88GCpgoxcYCAA==", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [Company = _t, Count = _t]),
    #"Changed Type" = Table.TransformColumnTypes(Source,{{"Count", Int64.Type}})
in
    #"Changed Type"
I don't have your DAX measure or its name, so I'm using this:
CompanyList = """Google"";""YouTube"";""Amazon"""
Just to prove it's the same as your measure, here it is in the report:
From this post I created a DAX formula that will parse your DAX value into a table with one row for each company name. Add this as a DAX table from Modeling > New Table. I named mine "Filtered table".
Filtered table =
VAR CommaSeparatedList = [CompanyList]
VAR BarSeparatedList =
    SUBSTITUTE ( CommaSeparatedList, ";", "|" )
VAR Length =
    PATHLENGTH ( BarSeparatedList )
VAR Result =
    SELECTCOLUMNS (
        GENERATESERIES ( 1, Length ),
        "Company", SUBSTITUTE ( PATHITEM ( BarSeparatedList, [Value] ), """", "" )
    )
RETURN
    Result
Here's what the table looks like:
Add a relationship between the two tables like this (Modeling > Manage relationships > New...):
Then add a DAX column to the filtered table by selecting the table and then Modeling > New Column
Count = CALCULATE(SUM('Table'[Count]))
You can total it up with this DAX measure:
Filtered total = SUM('Filtered table'[Count])
Change the CompanyList measure, and the result will update:
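As an aside, and not part of the original answer: if you would rather avoid the physical relationship, the same parsed list can be applied as a filter inside a measure with TREATAS. A sketch under the same assumptions (the CompanyList measure and the 'Table'[Company] / 'Table'[Count] columns from above):

```dax
Filtered total (TREATAS) =
VAR BarSeparatedList =
    SUBSTITUTE ( [CompanyList], ";", "|" )
VAR Companies =
    SELECTCOLUMNS (
        GENERATESERIES ( 1, PATHLENGTH ( BarSeparatedList ) ),
        "Company", SUBSTITUTE ( PATHITEM ( BarSeparatedList, [Value] ), """", "" )
    )
RETURN
    -- TREATAS maps the one-column table onto 'Table'[Company] as a filter
    CALCULATE ( SUM ( 'Table'[Count] ), TREATAS ( Companies, 'Table'[Company] ) )
```

This skips the calculated table and the relationship entirely, at the cost of re-parsing the list every time the measure is evaluated.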

DAX average not in date period

What would the equivalent DAX be for this SQL?
SELECT AVG(CountRows)
FROM pbi.FactVend AS FV
JOIN pbi.DimAsset AS DA
    ON DA.KEY_Asset = FV.KEY_Asset
WHERE CAST(FV.KEY_VendDate AS Date) NOT BETWEEN DA.ExcludedFromDate AND DA.ExcludedToDate
I have two tables, Vend and Asset. I want to exclude the rows from Vend where the VendDate is in an excluded period, something like below, but I can't get the DAX right. If I filter Vend, it cannot see the Asset columns; it also doesn't seem to like being supplied a converted date. KEY_VendDate is an int in YYYYMMDD format...
Average Cup Vends =
CALCULATE (
    AVERAGE ( Vend[CountRows] ),
    FILTER (
        Vend,
        NOT ( DATESBETWEEN ( CONVERT ( Vend[KEY_VendDate], DATETIME ), Asset[Excluded From Date], Asset[Excluded To Date] ) )
    )
)
Things I don't understand but I assume:
Assets[ProductKey] has unique values.
Vends (which from now on I'll call Transactions) contains one column related to Assets[ProductKey].
Transactions[Date] is a Date/Time column.
Transactions is related to a Dates table, which is not related to Assets.
I don't know how you're trying to use the measure, but I hope the following example can help you find the right path.
I excluded Assets[ProductKey] only in January, as you can see in the assets table image, i.e.:
Assets[ProductKey]=21 was excluded from 1/21/2021 00:00 to 1/22/2021 00:00,
Assets[ProductKey]=22 was excluded from 1/22/2021 00:00 to 1/23/2021 00:00, and so on.
You can access the columns in the expanded Transactions table through RELATED.
FILTER (
    Transactions,
    NOT (
        AND (
            Transactions[TransactionDate] >= RELATED ( Assets[ExcludedFromDate] ),
            Transactions[TransactionDate] < RELATED ( Assets[ExcludedToDate] )
        )
    )
)
In my example, I used this:
AVGNonExcludedTransactions :=
VAR SMZDateContext =
    SUMMARIZE ( CalendarDateTime, CalendarDateTime[Year], CalendarDateTime[MonthName] )
VAR NonExcludedTransactions =
    FILTER (
        Transactions,
        NOT (
            AND (
                Transactions[TransactionDate] >= RELATED ( Assets[ExcludedFromDate] ),
                Transactions[TransactionDate] < RELATED ( Assets[ExcludedToDate] )
            )
        )
    )
VAR Result =
    ADDCOLUMNS (
        SMZDateContext,
        "Count", CALCULATE ( COUNTROWS ( INTERSECT ( Transactions, NonExcludedTransactions ) ) )
    )
RETURN
    AVERAGEX ( Result, [Count] )
... removing the highlighted rows: one day of exclusion for each [ProductKey] in the Assets table, and more than one day of exclusion for each product in the Transactions table. This can be analyzed by changing INTERSECT() to EXCEPT() and increasing the granularity to the day level.
EDIT:
In this second part the objective is to avoid using FILTER on the Transactions table. However, I think the following approach can be improved by changing dates to numbers. And I still don't know whether it's more efficient than using FILTER on a 10M-row table. Probably not, because it would only pay off with fewer than 100 products and more than 2M transactions.
This is what the model looks like:
This time TCountR is a simpler measure:
TCountR = COUNTROWS(Transactions)
And the filter is calculated in another way. With a single DateTime column containing the exclusion period for each product within the granularity of CalendarDateTime:
AVGTCountRNonExcluded :=
VAR TotalRow =
    SUMMARIZE ( CalendarDateTime, CalendarDateTime[Year], CalendarDateTime[MonthName] )
VAR AllCJ =
    CROSSJOIN (
        SUMMARIZE ( Products, Products[ProductKey] ),
        SUMMARIZE ( CalendarDateTime, CalendarDateTime[DateTime] )
    )
VAR Excluded =
    SELECTCOLUMNS (
        GENERATE (
            Assets,
            ADDCOLUMNS (
                CROSSJOIN (
                    // Dates in Transactions should be rounded down at the hour level.
                    // -1 means that the day 1/2/2021 is not included:
                    //   from 1/1/2021 00:00 to 1/1/2021 23:00.
                    // Without adding or subtracting a value:
                    //   from 1/1/2021 00:00 to 1/2/2021 23:00.
                    CALENDAR ( Assets[ExcludedFromDate], Assets[ExcludedToDate] - 1 ),
                    SELECTCOLUMNS ( GENERATESERIES ( 0, 23, 1 ), "Time", TIME ( [Value], 0, 0 ) )
                ),
                "DateTime", [Date] + [Time]
            )
        ),
        "ProductKey", Assets[ProductKey],
        "DateTime", [DateTime]
    )
VAR FilteredOut =
    EXCEPT ( AllCJ, Excluded )
VAR Result =
    ADDCOLUMNS ( TotalRow, "Count", CALCULATE ( [TCountR], KEEPFILTERS ( FilteredOut ) ) )
RETURN
    AVERAGEX ( Result, [Count] )
The result is the same.
EDIT 2
Why not?
If you already understood the 2nd approach, you may wonder: what if I add a column to my Transactions table, change the [TransactionDate] format from DateTime to Date, and use a Dates table only at the Date level?
Example:
1/1/2021 23:00 To 1/1/2021 00:00
1/2/2021 00:00 To 1/2/2021 00:00
The code gets simpler:
AVGCountRowsDateLevel :=
VAR TotalRow =
    SUMMARIZE ( Dates, Dates[Year], Dates[MonthName] )
VAR AllCJ =
    CROSSJOIN ( SUMMARIZE ( Products, Products[ProductKey] ), SUMMARIZE ( Dates, Dates[Date] ) )
VAR Excluded =
    SELECTCOLUMNS (
        GENERATE (
            Assets,
            DATESBETWEEN ( Dates[Date], Assets[ExcludedFromDate], Assets[ExcludedToDate] - 1 )
        ),
        "ProductKey", Assets[ProductKey],
        "Date", [Date]
    )
VAR FilteredOut =
    EXCEPT ( AllCJ, Excluded )
VAR Result =
    ADDCOLUMNS ( TotalRow, "Count", CALCULATE ( [TCountR], KEEPFILTERS ( FilteredOut ) ) )
RETURN
    AVERAGEX ( Result, [Count] )
And the result is the same.
As I said at the beginning, this is an example, which I hope can help you find the solution.
Assuming you have a relationship from Assets[KEY_Asset] to Vend[KEY_Asset] and Vend[VendDate] is formatted as a date, then you can write
Average Cup Vends =
CALCULATE (
AVERAGE ( Vend[CountRows] ),
FILTER (
Vend,
NOT AND (
Vend[VendDate] > RELATED ( Asset[Excluded From Date] ),
Vend[VendDate] < RELATED ( Asset[Excluded To Date] )
)
)
)
This requires first defining a calculated column Vend[VendDate] to convert Vend[KEY_VendDate] from YYYYMMDD to a date format. You can define such a column as follows:
VendDate =
DATE (
LEFT ( Vend[KEY_VendDate], 4 ),
MID ( Vend[KEY_VendDate], 5, 2 ),
RIGHT ( Vend[KEY_VendDate], 2 )
)
Another option is to convert the Asset date columns into integer format instead.
Average Cup Vends =
CALCULATE (
AVERAGE ( Vend[Countrows] ),
FILTER (
Vend,
NOT AND (
Vend[KEY_VendDate]
> VALUE ( FORMAT ( RELATED ( Asset[Excluded From Date] ), "yyyymmdd" ) ),
Vend[KEY_VendDate]
< VALUE ( FORMAT ( RELATED ( Asset[Excluded To Date] ), "yyyymmdd" ) )
)
)
)

Measure does not work for Month Threshold

I built these DAX measures:
_Access_Daily =
CALCULATE (
    DISTINCTCOUNTNOBLANK ( ApplicationAccessLog[ApplicationUserID] ),
    FILTER ( 'Date', 'Date'[DateId] = SELECTEDVALUE ( 'DateSelector'[DateId], MAX ( 'DateSelector'[DateId] ) ) )
) + 0
_Access__PreviousDay =
CALCULATE (
    DISTINCTCOUNTNOBLANK ( ApplicationAccessLog[ApplicationUserID] ),
    FILTER ( 'Date', 'Date'[DateId] = SELECTEDVALUE ( 'DateSelector'[DateId], MAX ( 'DateSelector'[DateId] ) ) - 1 )
) + 0
The DateSelector table is a disconnected table containing dates from the 20th of Jan to now. DateId is a whole number like 20200131.
The Date table is a standard date table with all the dates between 1970 and 2038. DateId is a whole number like 20200131.
However, it does not seem to work at the month threshold between Jan and Feb. If the selected date is 01/02/2020, it does not return correctly for 31/01/2020.
As mentioned in the comments, the root problem here is that the whole numbers you use are not dates. As a result, when you subtract 1 and cross month (or year) boundaries, there is no calendar intelligence that can adjust the numbers properly.
Your solution (using 'Date'[DayDateNext]) might work, and if this design is a must for other reasons, go with it. However, I'd suggest revisiting the overall approach and using real dates instead of "DateId". You will then be able to use built-in DAX time intelligence, and your code will be more elegant and faster.
For example, if your "Date" and "DateSelector" tables have regular date fields, your code can be re-written as follows:
_Access_Daily =
VAR Selected_Date = SELECTEDVALUE ( 'DateSelector'[Date], MAX ( 'DateSelector'[Date] ) )
VAR Result =
CALCULATE (
DISTINCTCOUNTNOBLANK ( ApplicationAccessLog[ApplicationUserID] ),
'Date'[Date] = Selected_Date
)
RETURN
Result + 0
and:
_Access_PreviousDay =
CALCULATE ( [_Access_Daily], PREVIOUSDAY ( 'Date'[Date] ) )
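If the model currently has only the DateId whole numbers, a real date column can be derived from the YYYYMMDD integer first. A minimal sketch of such a calculated column (the column name 'Date'[Date] is an assumption; the same pattern would apply to DateSelector):

```dax
Date =
-- Split the YYYYMMDD whole number (e.g. 20200131) into year, month, day;
-- DAX coerces the LEFT/MID/RIGHT text results to numbers for DATE()
DATE (
    LEFT ( 'Date'[DateId], 4 ),
    MID ( 'Date'[DateId], 5, 2 ),
    RIGHT ( 'Date'[DateId], 2 )
)
```

With a column like this in both tables, the rewritten measures above work unchanged, and PREVIOUSDAY handles month and year boundaries correctly.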

Issue with dataset Creation

I am new to SSRS and having difficulty defining parameters. I am trying to develop code that will determine the parameters based on a report type value of 1 or 2.
I tried defining the parameters at run time using variables, but no luck.
SELECT A.PRV_TIN_NO,
       B.CLM_PD_DT,
       A.CLM_FROM_DOS_DT,
       A.CLM_THRU_DOS_DT,
       A.GRP_NO
FROM CBDW.CBDW_CLM_MEDCL_HDR A
INNER JOIN CBDW.CBDW_CLM_MEDCL_HDRLN B
    ON A.GRP_NO = B.GRP_NO
WHERE (
        (@ReportType = 1
         AND A.GRP_NO = @GrpNo
         AND B.CLM_PD_DT >= @PaidFromDate
         AND B.CLM_PD_DT <= @PaidToDate)
    OR
        (@ReportType = 2
         AND A.GRP_NO = @GrpNo
         AND A.CLM_FROM_DOS_DT >= @FromDate
         AND A.CLM_THRU_DOS_DT <= @ToDate
         AND B.CLM_PD_DT <= @PaidToDate)
)
I am expecting SSRS to prompt for the report type. When I select 1, the parameters should be group number and claim paid start and end dates, while with option 2 I should also have service date parameters.

Power Pivot: DAX for Same Value Sequential Count

Please reference the following example table.
The actual table would be contained in PowerPivot.
There are multiple runs identified by sequential numbering.
Based on what we want to observe through filtering, each run has an associated value.
Here's a simplified version of the current data:
Current Table
Common Columns for all Data:
Part: Uniquely defines the group. In this case, it's part or device.
Run: Identifies a same test count.
Value: The outcome generated from the test.
What I've been trying to add is an additional three columns:
Desired1: Same_Value_Count: This counts consecutive same values.
Desired2: Same_Max: Gives the maximum same value count.
Desired3: Same_Min: Gives the minimum same value count.
This would result in the following PivotTable:
Resulting Table
I am having trouble formulating the proper DAX syntax to accomplish these extra columns.
Keep in mind, I'd like to show the whole table as is.
I have a calculated column here called count_seq_dup:
=CALCULATE(COUNTROWS(table), ALLEXCEPT(table, table[3_Value]), EARLIER(Table[2_Run]) >= CSVsource[2_Run])
It worked perfectly for a single part, but does not work with multiple parts or when other filtering or slicers are applied.
I'm close, but it's not exactly what I'm looking for, and I can't figure out the syntax in DAX to get it right.
Can anyone help me?
For the Same_Value_Count, try something like this:
Same_Value_Count =
VAR part = 'table'[1_Part]
VAR val = 'table'[3_Value]
VAR run = 'table'[2_Run]
VAR tblpart = FILTER ( 'table', 'table'[1_Part] = part && 'table'[2_Run] <= run )
RETURN
run - CALCULATE ( MAX ( 'table'[2_Run] ), FILTER ( tblpart, [3_Value] <> val ) )
This will return the maximum same value count for a part / value combination.
Max Count =
VAR part = 'table'[1_Part]
VAR val = 'table'[3_Value]
RETURN
CALCULATE (
MAX ( 'table'[Same_Value_Count] ),
FILTER ( 'table', [3_Value] = val && 'table'[1_Part] = part )
)
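The question also asks for Same_Min. Note that taking MIN of Same_Value_Count directly would always return 1 (the first row of every run), so the minimum has to be taken over the per-run maxima. A sketch, assuming Same_Min means the length of the shortest run for a part / value combination; the @RunStart and @Len names are illustrative:

```dax
Min Count =
VAR part = 'table'[1_Part]
VAR val = 'table'[3_Value]
VAR tblpart = FILTER ( 'table', 'table'[1_Part] = part && 'table'[3_Value] = val )
-- Rows in the same run share the same run start: 2_Run - Same_Value_Count + 1
VAR runs =
    ADDCOLUMNS ( tblpart, "@RunStart", 'table'[2_Run] - 'table'[Same_Value_Count] + 1 )
-- The length of each run is the maximum Same_Value_Count within it
VAR runLengths =
    GROUPBY ( runs, [@RunStart], "@Len", MAXX ( CURRENTGROUP (), 'table'[Same_Value_Count] ) )
RETURN
    MINX ( runLengths, [@Len] )
```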
