Tableau – Using Nested Aggregations to Establish a Weekday/Hour Baseline

Background Information: We have an incident time tracker that tracks how long each user spends with a representative before the issue can be closed. We want to determine the average volume of incidents being handled in each hour. To put it another way: we want an hourly baseline for each day of the week that shows the average total call length within that specific time period. For example: we want to average the total length of every call on Monday from 9AM-10AM across all the weeks in the database, and the same for every other hourly interval.
The simplest way to think of this is that I want AVG(SUM) for the specific time periods, but Tableau does not allow me to do this.
Tableau Output:
This is the target visualization that I am looking for from Tableau.
SQL Query:
I have written a SQL query that returns the answer.
We are looking at two columns: start_time (timestamp) and interval_seconds (float).
In the inner query I use the hour_start function, which truncates the date/time value to the start of the hour, so that I can group by the hour and day of the week in the outer query.
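A minimal sketch of what such a nested-aggregation query could look like, assuming hour_start is a scalar UDF as described; the table name (incidents) and the output column names are illustrative, not from the original:
SELECT
    DATENAME(weekday, hourly.hour_start) AS day_of_week,
    DATEPART(hour, hourly.hour_start)    AS hour_of_day,
    AVG(hourly.total_seconds)            AS avg_total_seconds
FROM (
    -- Inner query: total call length for each clock hour.
    SELECT dbo.hour_start(start_time) AS hour_start,
           SUM(interval_seconds)      AS total_seconds
    FROM incidents
    GROUP BY dbo.hour_start(start_time)
) AS hourly
GROUP BY DATENAME(weekday, hourly.hour_start), DATEPART(hour, hourly.hour_start);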
SQL Results:
Question:
Is there a way to solve this problem ENTIRELY in Tableau that would get me the result that I am looking for without having to write any SQL code?
Files Stored on Drive
CSV File:
https://drive.google.com/open?id=0B4nMLxIVTDc7NEtqWlpHdVozRXc
Tableau Worksheet:
https://drive.google.com/open?id=0B4nMLxIVTDc7M3A4Q0JxbGdlTE0

You can use Level of Detail expressions to compute the SUM(interval_seconds) at the hour level and then use AVG to calculate the number you are looking for.
I created a couple of calculations:
hour, defined as: DATETRUNC('hour', [start_time])
This should be equivalent to your hour_start(start_time).
interval_hours, defined as: { FIXED [hour] : SUM([interval_seconds]) / 3600 }
This calculates the aggregate for each start_time truncated to the hour.
After this, you simply calculate AVG(interval_hours) and use it in your view.
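Putting the pieces together, the calculations look like this (Tableau calculation syntax; each snippet is its own calculated field, with the names defined above):
// hour: start_time truncated to the hour it falls in
DATETRUNC('hour', [start_time])

// interval_hours: total interval time, in hours, for each clock hour
{ FIXED [hour] : SUM([interval_seconds]) / 3600 }

// the measure placed on the view
AVG([interval_hours])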
I put a workbook in dropbox: https://www.dropbox.com/s/3hfvz8w529g9f46/Interval%20Time%20Baseline.twbx?dl=0
Although the chart looks similar to yours, the numbers I came up with are somewhat different from the "SQL Results" you show. Was the data you provided slightly different?

Related

Best way to handle time-consuming queries in InfluxDB

We have an API that queries an Influx database, and a report feature was implemented so the user can query data using a start and end date.
The problem is that when a longer period is chosen (usually more than 8 weeks), we get a timeout from Influx; the query takes around 13 seconds to run. When a query returns a dataset successfully, we store that in a cache.
The most time-consuming part of the query is probably the comparisons and averages we do, something like this:
SELECT mean("value") AS "mean", min("value") AS "min", max("value") AS "max"
FROM $MEASUREMENT
WHERE time >= $startDate AND time < $endDate
AND ("field" = 'myFieldValue' )
GROUP BY "tagname"
What would be the best approach to fix this? I can of course limit the number of weeks the user can choose, but I guess that's not the ideal fix.
How would you approach this? Increase timeout? Batch query? Any database optimization to be able to run this faster?
In cases like this, where you allow the user to select a range of days, I would suggest having another table that stores the result (min, max, and avg) of each day as a document. This table can be populated by a job that runs after the end of each day.
You can also consider changing the granularity from per day to per week or per month, based on how you plot the values, and you can add more fields, such as tagname in your case.
The reason this is superior to using a cache: with a cache you only store the results of queries that have already run, so every new combination still has to be computed in real time. Here, the cumulative results are already available, with a much smaller dataset left to compute over.
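As a sketch of that idea in SQL (Postgres-flavored; the table and column names here are assumptions, not part of the original setup):
-- Hypothetical daily rollup table, populated by an end-of-day job.
CREATE TABLE daily_rollup (
    day      DATE,
    tagname  VARCHAR(64),
    mean_val DOUBLE PRECISION,
    min_val  DOUBLE PRECISION,
    max_val  DOUBLE PRECISION,
    PRIMARY KEY (day, tagname)
);

-- End-of-day job: aggregate yesterday's raw points into one row per tag.
INSERT INTO daily_rollup (day, tagname, mean_val, min_val, max_val)
SELECT CAST(time AS DATE), tagname, AVG(value), MIN(value), MAX(value)
FROM raw_measurements
WHERE time >= CURRENT_DATE - INTERVAL '1 day'
  AND time <  CURRENT_DATE
GROUP BY CAST(time AS DATE), tagname;

The report endpoint then reads from daily_rollup, which holds one row per day and tag instead of one row per raw point.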
Based on your query, I assume you are using InfluxDB 1.x. You could try Continuous Queries, which are InfluxQL queries that run automatically and periodically on real-time data and store their results in a specified measurement.
In your case, for each report, you could create a CQ and let your users query it.
e.g.:
Step 1: create a CQ
CREATE CONTINUOUS QUERY "cq_basic_rp" ON "db"
BEGIN
  SELECT mean("value") AS "mean", min("value") AS "min", max("value") AS "max"
  INTO "mean_min_max"
  FROM $MEASUREMENT
  WHERE "field" = 'myFieldValue'  // note that the time filter is not here
  GROUP BY time(1h), "tagname"    // here you can define the job interval
END
Step 2: Query against that CQ
SELECT * FROM "mean_min_max"
WHERE time >= $startDate AND time < $endDate // here you can pass the user's time filter
Since you already ask InfluxDB to run these aggregates continuously based on the specified interval, you should be able to trade space for time.

Comparison Periods in Google Data Studio

I'm creating a report in Google Data Studio that has a table behind it with data by day.
I need it to be comparable with the month before, but there's a catch that I'm currently stuck on!
The period should be something like:
DAY(date)/MONTH(date)-1/YEAR(date)
This allows comparison between periods with a different number of days, for example:
Date of analysis: 28/06/2021 - 27/07/2021
Date of comparison: 28/05/2021 - 27/06/2021
When trying to create something like this in Data Studio (with date range controls), none of the options does this, and from what I've explored, there isn't an option to apply a formula like the one above.
The closest I get is "Previous Period", but that makes the Date of Comparison 29/05/2021 - 27/06/2021, missing 28/05/2021.
I'm really stuck and running out of ideas; I've even considered changing the SQL query behind it to convert the days somehow.
I've done this by mocking the dates for the comparator.
The other way is to create two series and not use the comparator.
Series #1 will have label 28/06/2021 and data from 28/06/2021 (current period).
Series #2 will have label 28/06/2021 (the same) and data from 28/05/2021 (previous period).
They plot nicely:
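If you would rather mock the dates in the underlying SQL, a sketch of the idea (Postgres-flavored; daily_data and its columns are hypothetical) shifts the previous month forward so both series share the same labels:
-- The current period keeps its own dates; the previous period is re-labeled
-- one month later so it lines up with the current period on the axis.
SELECT date, value, 'current' AS series
FROM daily_data
UNION ALL
SELECT date + INTERVAL '1 month' AS date, value, 'previous' AS series
FROM daily_data;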

How can I calculate between two dates per User Pseudo ID for specific events?

I linked Firebase to BigQuery and started using Google Data Studio to create a table listing users by "User Pseudo ID".
My goal is to calculate the difference between two dates, the date of first_open and the date of app_remove to come up with an average retention time.
How can I write the right query in Data Studio?
It can be achieved using the three-step process below:
1) Calculated Field (HH:MM:SS)
The Calculated Field below uses the DATETIME_DIFF function to find the difference between app_remove and first_open, displayed in SECOND (for future reference, set the third input of DATETIME_DIFF as required; for example, to view the difference in days, set the input to DAY):
DATETIME_DIFF(app_remove, first_open, SECOND)
2) Type (of the HH:MM:SS field)
Number > Duration (Sec.)
3) Aggregation (of the HH:MM:SS field)
AVG
Google Data Studio Report and a GIF to elaborate:
DATE_DIFF may be what you are looking for.
That is, if first_open and app_remove are date fields or date expressions.
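If you would rather compute the retention upstream in BigQuery, a sketch against the standard Firebase export schema might look like this (the dataset path is a placeholder):
-- Average seconds between each user's first_open and app_remove events.
SELECT
  AVG(TIMESTAMP_DIFF(app_remove_time, first_open_time, SECOND)) AS avg_retention_seconds
FROM (
  SELECT
    user_pseudo_id,
    MIN(IF(event_name = 'first_open',
           TIMESTAMP_MICROS(event_timestamp), NULL)) AS first_open_time,
    MAX(IF(event_name = 'app_remove',
           TIMESTAMP_MICROS(event_timestamp), NULL)) AS app_remove_time
  FROM `my_project.analytics_000000.events_*`
  GROUP BY user_pseudo_id
);

Users who never triggered app_remove produce a NULL difference and are skipped by AVG.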

Google Data Studio date aggregation - average number of daily users over time

This should be simple, so I think I am missing something. I have a simple line chart that shows Users per day over 28 days (the X axis is date, the Y axis is number of users). I am using a hard-coded 28 days here just to get it to work.
I want to add a scorecard for average daily users over the 28-day time frame. I tried to use a calculated field AVG(Users), but this shows an error about re-aggregating an aggregated value. Then I tried Users/28, but the result, oddly, is the value of Users for today; the division seems to be completely ignored.
What is the best way to show the average number of daily users over a time frame? Average daily users over 10 days, 20 days, etc.
Try creating a new metric that counts the dates, e.g.
Count of Date = COUNT(Date)
or, in case you have duplicate dates,
Count of Date = COUNT_DISTINCT(Date)
Then create another metric for average users:
Users AVG = Users / Count of Date
The average depends on the timeframe you have selected: if you select the last 28 days, the average is for those 28 days (dates); if you filter to 20 days, the average is for those 20 days, and so on.
Hope that helps.
I have been able to do this in an extremely crude and ugly manner, using Google Sheets to do the calculation and serve as a data source for Data Studio.
This may be useful for other people trying to do the same thing. This assumes you know how to work with GA data in Sheets and are starting with a Report Configuration. There must be a better way.
Example for Average Number of Daily Users over the last 7 days:
Edit the Report Configuration fields:
Report Name: create one report per day, in this case 7 reports. Name them (for example) Users-1 through Users-7. These are your Row 2 values. You'll have 7 columns, with the first report name in column B.
Start Date and End Date: use TODAY()-X where X is the number of days previous to define the start and end dates for each report. Each report will contain the user count for one day. Report Users-1 will use TODAY()-1 for start and end, etc.
Metrics: enter the metrics, e.g. ga:users and ga:newUsers
Create the reports
Use 'Run reports' to have the result sheets created and populated.
Create a sheet for an interim data set you will use as the basis for the average calculation. The first column is date, the remaining columns are for the metrics, in this case Users and New Users.
Populate the interim data set with the dates and values. You will reference the Report Configuration to get the dates, and you will pull the metrics from each of the individual reports. At this stage you have a sheet with date in first columns and values in subsequent columns with a row for each day's values. Be sure to use a header.
Finally, create a sheet that averages the values in the interim data set. This sheet will have a column for each metric, with one value per column. The one value is calculated from the series in the interim data set, for example =AVERAGE(interim_sheet_reference:range), or any other calculation you'd like to do.
At last, you can use Data Studio to connect to this data source and use the values. For counts of users such as this example, you would use Sum as the aggregation field type when you are creating the data source.
It's super ugly but it works.

Keep PivotTable report filter after data refresh

I have a PivotTable (actually it is five PivotTables, each on its own separate sheet) that is created from a query of an outside database. Each of the PivotTables represents a day (i.e. Today, Tomorrow, Today+2, Today+3, and Today+4). For the report filter for the first two, we use a date range filter of today and tomorrow which automatically filters the data and allows it to roll over. We created custom date ranges for the other three days, but upon every external data refresh we have to go into each sheet and reselect the report filter from all to the specified time frame. This data rolls over every day so we can see the lineup for the next 96 hours out.
Is there a way to either keep the PivotTable report filter criteria across refreshes (VBA and macros are both acceptable, although we are fairly new to both)?
Or is there some super secret way to extend the report filter from just today and tomorrow to a time range (48 hours, 96 hours) instead of next month?
I need the days to be separated, so next week will not work because all the days will populate on one page.
Without seeing a real example it's hard to tell, but how about changing the query to use a relative date index, i.e. something like
SELECT DATEDIFF(day, GETDATE(), report_dt) AS days_from_today FROM reporting_table
And then set your report filters on this relative date index (days_from_today = 1 for tomorrow, etc)? You can always create another Excel column in the report =TODAY() + days_from_today to get your absolute date back. (Assuming you are just dealing with one time zone for reporting purposes.)
I.e., instead of rolling filters, keep the filters on constant indices, and let the indices cover a rolling date range. I'm not sure Excel is smart enough to do the rolling filters thing.
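For example, the query behind the "Today+2" sheet could keep a constant filter like this (reporting_table and report_dt as in the snippet above):
-- The filter value never changes; the dates it matches roll forward daily.
SELECT *
FROM reporting_table
WHERE DATEDIFF(day, GETDATE(), report_dt) = 2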
