How can I sort time dimensions in SSAS cubes? Poor sort is causing issues with ParallelPeriod() - sql-server

I have an old cube built in SSAS. We break revenue down into a time dimension like so:
fiscal week | Date       | Revenue | Prior Year Revenue
53          | 2020-12-31 | $5000   | ????
49          | 2020-12-03 | $1000   | $200 (should be $100)
50          | 2020-12-10 | $2000   | $300
51          | 2020-12-17 | $3000   | $400
52          | 2020-12-24 | $4000   | $500
As you can see, the 53rd fiscal week is not displaying in the proper sort order.
This is becoming a problem for a calculated measure we use, "Prior Year, Week". It attempts to grab the revenue from the same fiscal week in the previous year. It accomplishes this with the tuple (ParallelPeriod([FiscalWeek], 52), [Measures].[Revenue]).
What I believe is happening is that because week 53 is sorted to the beginning, ParallelPeriod is counting it as one of the intervals it skips. This results in the Prior Year column showing fiscal week 50 when it should be 51, 51 when it should be 52, and so on.
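The suspected behaviour can be sketched outside the cube (a plain-Python mock-up with made-up member names, not actual cube data): with week 53 mis-sorted to the front of its year, a fixed 52-member step lands on the wrong prior-year week.

```python
# A normal 52-week prior year followed by a 53-week current year whose
# week 53 is mis-sorted to the front of the year, as seen in the browser.
fy19 = [f"FY19 W{w}" for w in range(1, 53)]
fy20 = ["FY20 W53"] + [f"FY20 W{w}" for w in range(1, 53)]
members = fy19 + fy20

def parallel_period(members, member, lag=52):
    """Fixed-offset lag over the flattened member list, the way
    ParallelPeriod steps back a set number of siblings."""
    i = members.index(member)
    return members[i - lag] if i >= lag else None

# The mis-sorted week 53 consumes one of the 52 steps, so the lag
# lands on the wrong prior-year week:
assert parallel_period(members, "FY20 W49") == "FY19 W50"  # wanted FY19 W49
```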
Is there a way to fix the sort on those dates? I've tried the obvious KeyColumns settings and examined the source views, and nothing seems like it should be causing this bad sort.
Thanks in advance -- I don't know much about this stuff and am sort of being tossed into the deep end here.
EDIT:
I've made some progress here but am still having issues. It appears that every year with 53 fiscal weeks has this problem: week 53 is sorted improperly.
The time dimension breaks down like this:
Year
Quarter
Month
Week Number
Bill Date
The bill dates are sorted in order, but for some reason, the weeks in the Browser look like this:
53
1
2
3
...
It appears that the Week Number attribute is the one that's off. Its OrderBy property is set to AttributeKey and its OrderByAttribute property is set to the Bill Date attribute. However, viewing just the bill dates reveals that they are properly sorted.
EDIT:
After setting the OrderBy to Key as suggested by the response below, it appears that some of the month hierarchies are out of order: certain months contain weeks that don't belong. For example:
* Sep
  * 35
    * 2020-08-27
  * 36
    * 2020-09-03
  * 37
    * 2020-09-10
  * 38
    * 2020-09-17
  * 39
    * 2020-09-24
Here's the XML for the month attribute:
<Attribute>
  <ID>Month attribute</ID>
  <Name>Month attribute</Name>
  <Type>Months</Type>
  <EstimatedCount>48</EstimatedCount>
  <KeyColumns>
    <KeyColumn>
      <DataType>WChar</DataType>
      <Source xsi:type="ColumnBinding">
        <TableID>dbo_dim_time</TableID>
        <ColumnID>fiscalmonth</ColumnID>
      </Source>
    </KeyColumn>
  </KeyColumns>
  <NameColumn>
    <DataType>WChar</DataType>
    <Source xsi:type="ColumnBinding">
      <TableID>dbo_dim_time</TableID>
      <ColumnID>fiscalmonth</ColumnID>
    </Source>
  </NameColumn>
  <AttributeRelationships>
    <AttributeRelationship>
      <AttributeID>Pubdate attribute</AttributeID>
      <Name>Pubdate attribute</Name>
    </AttributeRelationship>
  </AttributeRelationships>
  <OrderBy>AttributeKey</OrderBy>
  <OrderByAttributeID>Pubdate attribute</OrderByAttributeID>
  <MembersWithData>NonLeafDataHidden</MembersWithData>
  <MembersWithDataCaption>(* data)</MembersWithDataCaption>
  <AttributeHierarchyVisible>false</AttributeHierarchyVisible>
</Attribute>

The OrderBy design you described, referencing the Bill Date attribute, seems overly complex for your requirements.
For the Week Number attribute, I would just set the OrderBy property to Key.
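To illustrate the difference (a hypothetical Python sketch with made-up bill dates, not your actual data): ordering week members by an associated date attribute can push week 53 to the front, while ordering by the week-number key itself cannot.

```python
from datetime import date

# Hypothetical members: (week number, bill date used for ordering).
# If week 53's bill date falls at the calendar start of the next fiscal
# year, ordering by the date attribute pushes it to the front.
weeks = [(53, date(2020, 1, 2)), (1, date(2020, 1, 9)), (2, date(2020, 1, 16))]

by_date = [w for w, _ in sorted(weeks, key=lambda m: m[1])]  # [53, 1, 2]
by_key  = [w for w, _ in sorted(weeks, key=lambda m: m[0])]  # [1, 2, 53]
```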

Related

Query for Time and average of a conc based on previous 24 hour average

I want to generate a query with two columns: one is time and the second is a particular conc. Based on the last 24 hours of data, I want to calculate the 24-hour average of conc. In the table below, if I have data for the past 24 hours, it is calculated as (conc + conc + ... + nth conc) / count. The dates move forward: 8/11, 9/11, 10/11, and so on. This query will be kept in Grafana for conc visualisation over time. For information: if I use the avg function in SQL, the value shown in avgvalue is the same as conc. Can anybody help me write this query?
Time                | conc | avg (my output) | Output wanted (conc)
2021-11-07 18:47:00 | 1    | 1               | 24 hour average
2021-11-07 18:48:00 | 1    | 2               | 24 hour average
....                |      |                 |
2021-11-08 18:47:00 | 5    | 5               | 24 hour average
2021-11-08 18:48:00 (get the 24 hour average)
It seems you need a window function (see the PostgreSQL manual: 3.5. Window Functions, 4.2.8. Window Function Calls, 9.22. Window Functions). Then you can try this:
SELECT Time,
       conc,
       avg(conc) OVER (ORDER BY Time
                       RANGE BETWEEN INTERVAL '1 day' PRECEDING
                             AND CURRENT ROW) AS avg_24h
FROM your_table;
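If you want to sanity-check the window logic outside the database, here is a plain-Python sketch of the same trailing-24-hour average (hypothetical sample rows):

```python
from datetime import datetime, timedelta

def rolling_24h_avg(rows):
    """rows: list of (timestamp, conc) sorted by timestamp.
    Returns the trailing-24-hour average for each row, mirroring
    RANGE BETWEEN INTERVAL '1 day' PRECEDING AND CURRENT ROW."""
    out = []
    for t, _ in rows:
        window = [c for (u, c) in rows if t - timedelta(days=1) <= u <= t]
        out.append(sum(window) / len(window))
    return out

rows = [
    (datetime(2021, 11, 7, 18, 47), 1),
    (datetime(2021, 11, 7, 18, 48), 1),
    (datetime(2021, 11, 8, 18, 47), 5),
]
# The last row averages all three values, since they fall within 24 hours:
print(rolling_24h_avg(rows))  # [1.0, 1.0, 2.333...]
```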

Measure for a month and its previous using a named set SSAS

I have a named set that returns the last 10 weeks from the current week. In the cube browser I get the value of a measure for each week.
I want to create another measure that contains the value of the previous week returned by the named set. Something like this :
Weeks   Measure1   Measure2
Week 1  50         40
Week 2  35         50
Week 3  77         35
How to do this using MDX ?
Measure2 will be a tuple of whatever measure you want to show - let's call it [Measures].[Revenue] - and the current member of the hierarchy used in the Weeks column, lagged by 1.
I don't know the structure of your cube so you'll need to adjust the following:
WITH MEMBER [Measures].[Measure2] AS
(
  [Measures].[Revenue],
  [Date].[Calendar Week of Year].CURRENTMEMBER.LAG(1)
)

Cumulative Sum - Choosing Portions of Hierarchy

I have a bit of an interesting problem.
I require the cumulative sum on a set created from pieces of a Time dimension. The time dimension is based on hours and minutes; it begins at hour 0, minute 0 and ends at hour 23, minute 59.
What I need to do is slice out portions, say 09:30 AM - 04:00 PM or 04:30 PM - 09:30 AM, and I need these values in order to perform my cumulative sums. I'm hoping someone can suggest a means of doing this with standard MDX. If not, is my only alternative to write my own stored procedure that forms my periods-to-date set using the logic described above?
Thanks in advance!
You can create a secondary hierarchy in your time dimension with only the hour level and filter the query with it.
[Time].[Calendar] -> the hierarchy with year, months, day and hours level
[Time].[Hour] -> the 'new' hierarchy with only the hour level (e.g. 09:30 AM).
Then you can make an MDX query adding your criteria as a filter:
SELECT
my axis...
WHERE ( SELECT { [Time].[Hour].[09:30 AM]:[Time].[Hour].[04:00 PM] } on 0 FROM [MyCube] )
You can also create a new dimension instead of a hierarchy; the difference is in the autoexists behaviour and in performance.
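The slice-then-accumulate idea can be sketched outside MDX (plain Python, hypothetical members and values; a wraparound range like 04:30 PM - 09:30 AM would simply need two slices):

```python
from itertools import accumulate

# Hypothetical minute-level members "HH:MM" with a measure value each.
values = {"09:30": 4, "09:31": 2, "09:32": 7, "09:33": 1}

# Slice the 09:30-09:32 member range, then take the running sum over it.
window = [m for m in sorted(values) if "09:30" <= m <= "09:32"]
cumsum = list(accumulate(values[m] for m in window))  # [4, 6, 13]
```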

SSAS -> AdventureWorks Example -> Using the browser to slice a measure by week shows results that have two of the same week records?

I have been working on a cube and noticed that when I browse measures in my cube by week, I get an unexpected result, but first let me describe my current scenario. I am looking at counts of a fact load by week. When I do so, I get results like these:
Weeks | Fact Internet Sales Count
2001-07-01 00:00:00.000 | 28
2001-07-08 00:00:00.000 | 29
....and so on as you would expect.
Further down I noticed this. :
2001-09-30 00:00:00.000 | 10
2001-09-30 00:00:00.000 | 24
As you can see, it shows the week twice with different counts; added together they give the correct count for that week (34).
I am just confused why it shows two rows. When I look at the data in SQL, the difference between the two is strictly the month in which the dates fell (10 in the earlier month and 24 in the later month).
I initially saw this in a cube I created on my own; I then pulled up the trusty AdventureWorks practice cube and found it was present there too.
This is because, within this date hierarchy, the lowest attribute is date, not week, so weeks are always split by date (and therefore by month). It can be alleviated by building a date hierarchy with week as the lowest level.
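The split can be sketched in plain Python (hypothetical rows): grouping by the full month/week path yields two rows for a straddling week, while grouping by week alone yields the single row you expected.

```python
from collections import defaultdict

# Hypothetical fact rows: (month, week_start, count). The week starting
# 2001-09-30 straddles a month boundary, so its rows carry two months.
rows = [("Sep", "2001-09-30", 10), ("Oct", "2001-09-30", 24)]

# Grouping by the full (month, week) path splits the week, as the browser
# does when date (with its month ancestry) sits below week in the hierarchy:
by_path = defaultdict(int)
for month, week, n in rows:
    by_path[(month, week)] += n   # two rows: 10 and 24

# Grouping by week alone gives the single combined count:
by_week = defaultdict(int)
for _, week, n in rows:
    by_week[week] += n            # {"2001-09-30": 34}
```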

DATE lookup table (1990/01/01:2041/12/31)

I use a DATE master table for looking up dates and other values in order to control several events, intervals and calculations within my app. It has rows for every single day beginning 01/01/1990 and ending 12/31/2041.
One example of how I use this lookup table is:
A customer pawned an item on: JAN-31-2010
Customer returns on MAY-03-2010 to make an interest pymt to avoid forfeiting the item.
If he pays 1 months interest, the employee enters a "1" and the app looks-up the pawn
date (JAN-31-2010) in date master table and puts FEB-28-2010 in the applicable interest
pymt date. FEB-28 is returned because FEB-31's dont exist! If 2010 were a leap-year, it
would've returned FEB-29.
If customer pays 2 months, MAR-31-2010 is returned. 3 months, APR-30... If customer
pays more than 3 months or another period not covered by the date lookup table,
employee manually enters the applicable date.
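The FEB-28/FEB-29 clamping rule those lookup columns encode can be sketched in Python (a plain re-implementation for illustration, not the app's actual code):

```python
import calendar
from datetime import date

def add_months_clamped(d, months):
    """Add months, clamping to the last valid day of the target month,
    e.g. JAN-31 + 1 month -> FEB-28 (FEB-29 in a leap year)."""
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    last_day = calendar.monthrange(year, month)[1]
    return date(year, month, min(d.day, last_day))

print(add_months_clamped(date(2010, 1, 31), 1))  # 2010-02-28
print(add_months_clamped(date(2010, 1, 31), 2))  # 2010-03-31
print(add_months_clamped(date(2010, 1, 31), 3))  # 2010-04-30
```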
Here's what the date lookup table looks like:
{ Copyright 1990:2010, Frank Computer, Inc. }
{ DBDATE=YMD4- (correctly sorted for faster lookup) }
CREATE TABLE datemast
(
dm_lookup DATE, {lookup col used for obtaining values below}
dm_workday CHAR(2), {NULL=Normal Working Date,}
{NW=National Holiday(Working Date),}
{NN=National Holiday(Non-Working Date),}
{NH=National Holiday(Half-Day Working Date),}
{CN=Company Proclamated(Non-Working Date),}
{CH=Company Proclamated(Half-Day Working Date)}
{several other columns omitted}
dm_description CHAR(30), {NULL, holiday description or any comments}
dm_day_num SMALLINT, {number of elapsed days since beginning of year}
dm_days_left SMALLINT, {number of remaining days until end of year}
dm_plus1_mth DATE, {plus 1 month from lookup date}
dm_plus2_mth DATE, {plus 2 months from lookup date}
dm_plus3_mth DATE, {plus 3 months from lookup date}
dm_fy_begins DATE, {fiscal year begins on for lookup date}
dm_fy_ends DATE, {fiscal year ends on for lookup date}
dm_qtr_begins DATE, {quarter begins on for lookup date}
dm_qtr_ends DATE, {quarter ends on for lookup date}
dm_mth_begins DATE, {month begins on for lookup date}
dm_mth_ends DATE, {month ends on for lookup date}
dm_wk_begins DATE, {week begins on for lookup date}
dm_wk_ends DATE, {week ends on for lookup date}
{several other columns omitted}
)
IN "S:\PAWNSHOP.DBS\DATEMAST";
Is there a better way of doing this or is it a cool method?
This is a reasonable way of doing things. If you look into data warehousing, you'll find that those systems often use a similar table for the time dimension. Since there are fewer than 20K rows in the fifty-year span you're using, it isn't a huge amount of data.
There's an assumption that storage gives better performance than doing the computations; that is by no means clear-cut, since the computations are not that hard (though neither are they trivial) and any disk access is very slow in computational terms. However, the convenience of having the information in one table may be enough to warrant keeping track of an appropriate method for each of the computed values stored in the table.
It depends on which database you are using. SQL Server has horrible support for temporal data and I almost always end up using a date fact table there. But databases like Oracle, Postgres and DB2 have really good support and it is typically more efficient to calculate dates on the fly for OLTP applications.
For instance, Oracle has a last_day() function to get the last day of a month and an add_months() function to, well, add months. Typically in Oracle I'll use a pipelined function that takes start and end dates and returns a nested table of dates.
The cool way of generating a rowset of dates in Oracle is to use the hierarchical query functionality, connect by. I have posted an example of this usage in another thread.
It gives a lot of flexibility without the PL/SQL overhead of a pipelined function.
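For comparison, here is a plain-Python sketch of the same rowset generation (the equivalent of the connect-by trick or of Postgres's generate_series()):

```python
from datetime import date, timedelta

def date_series(start, end):
    """Yield every date from start to end inclusive - the same rowset
    a hierarchical connect-by query or generate_series() produces."""
    d = start
    while d <= end:
        yield d
        d += timedelta(days=1)

days = list(date_series(date(1990, 1, 1), date(1990, 1, 5)))
# 5 rows: 1990-01-01 .. 1990-01-05
```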
OK, so I tested my app using 31 days/month to calculate interest rates and the pawnshops are happy with it! Local law reads as follows: from the pawn or last interest pymt date to 5 elapsed days, 5% interest on principal; 6 to 10 days = 10%; 11 to 15 days = 15%; and 16 days to 1 "month" = 20%.
So the interest table is now defined as follows:
NUMBER OF ELAPSED DAYS SINCE
PAWN DATE OR LAST INTEREST PYMT
FROM TO ACUMULATED
DAY DAY INTEREST
----- ---- ----------
0 5 5.00%
6 10 10.00%
11 15 15.00%
16 31 20.00%
32 36 25.00%
37 41 30.00%
42 46 35.00%
47 62 40.00%
[... until day 90 (forfeiture allowed)]
from day 91 to 999, daily prorate based on 20%/month.
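That bracket lookup can be sketched in Python (my reading of the table above: 20% per full 31-day "month" plus the within-month bracket; the day-91+ daily proration is omitted):

```python
def accumulated_interest(days):
    """Accumulated interest (percent) per the bracket table: 20% per full
    31-day 'month', plus 5/10/15/20% for a remainder of 1-5 / 6-10 /
    11-15 / 16-31 days. Valid through day 90 (forfeiture afterwards)."""
    months, rem = divmod(days, 31)
    if rem == 0 and months > 0:   # day 31, 62, ... closes its bracket
        months, rem = months - 1, 31
    bracket = 5 if rem <= 5 else 10 if rem <= 10 else 15 if rem <= 15 else 20
    return months * 20 + bracket

print(accumulated_interest(5))   # 5
print(accumulated_interest(16))  # 20
print(accumulated_interest(47))  # 40
```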
Did something bad happen in the UK on MAR-13 or SEP-1752?
