Recreating a formula field from Salesforce in Tableau - salesforce

I am trying to recreate, in Tableau, a formula field that I created in Salesforce. It assigns a value to a specific picklist selection and then gets a total. There are three selections that need different values assigned. I read in my research that the formula stops once it finds a match. This is the first formula I tried, but unless I make it a Dimension it doesn't apply to all accounts with this criteria.
IF [Priority] = "Priority 1" THEN .5
ELSEIF [Priority] = "Priority 2" THEN .25
ELSEIF [Priority] = "Priority 3" THEN .1
ELSE 0 END
I also tried some LOD formulas but that wasn't working either.
{FIXED [Account ID]: MAX(
    IF [Priority] = "Priority 1" THEN .5
    ELSEIF [Priority] = "Priority 2" THEN .25
    ELSEIF [Priority] = "Priority 3" THEN .1
    ELSE 0
    END)}
I have watched a few videos and tried multiple things, and nothing seems to work correctly. This is only my third Tableau report, so I'm not up to speed on everything yet and can't seem to find what I'm looking for in my Google searches. Any guidance on how to apply values to a certain field selection for each account and get a total would be great.
Thanks so much for your help
Full Name | Account Name | Priority   | Strat calc | Measure
NAME      | Account Name | Priority 1 | 0.5        |
          | Account Name | Priority 2 | 0.25       |
          | Account Name | Priority 3 | 0.1        |
          | Account Name | Priority 1 | 0.5        |
          | Account Name | Priority 2 | 0.25       | 0.25
          | Account Name | Priority 3 | 0.1        |

Related

Use result of previous row as start value for next line

I'm working on an MRP simulation in which I have to subtract demand or add supply quantities to the available stock, and I hope you can help. Below is the result I want to achieve.
I have 1 value for stock = 22 and a lot of values for future demand/supply on specific dates.
Part    | Stock                       | Demand/Supply qty | Demand/Supply Date | Result
1000680 | 22                          | -1                | 2023-01-01         | 21
1000680 | 21 (what I want to achieve) | -15               | 2023-01-02         | 6 (expected outcome)
1000680 | 6 (what I want to achieve)  | +10               | 2023-01-03         | 16 (expected outcome)
I'm still on the SQL learning curve. I started by adding row numbers to the lines to make sure the sequence is correct:
select
    part,
    rownum = ROW_NUMBER() OVER (ORDER BY part, mrp_due_date),
    current_stock_qty,
    demand_supply_qty,

    current_stock_qty - demand_supply_qty as new_stock_qty, -- if demand
    current_stock_qty + demand_supply_qty as new_stock_qty, -- if supply
    mrp_due_date
from #base
Then I tried the LAG function to derive the previous row's new_stock_qty at each date, but this only worked for the first line.
So I probably need a loop to first calculate stock minus demand and use the result as the new stock.
I have looked through similar questions asked on this site, but I find it difficult to derive my solution from that information.
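For what it's worth, a windowed running sum can replace both the LAG and the loop here. A minimal sketch, assuming #base has the columns used above, that current_stock_qty carries the single opening value (22 in the sample), and that demand is stored as a negative quantity and supply as a positive one, as in the sample data:

select
    part,
    demand_supply_qty,
    mrp_due_date,
    -- opening stock for the part, taken from the earliest row
    first_value(current_stock_qty) over (partition by part order by mrp_due_date)
        -- plus the cumulative signed demand/supply up to and including this row
        + sum(demand_supply_qty) over (
              partition by part
              order by mrp_due_date
              rows between unbounded preceding and current row
          ) as new_stock_qty
from #base;

Against the sample rows this yields 21, 6 and 16, matching the expected outcome column.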

Measure that indicates sales volume per product per day (Google Data Studio)

I need to implement a measure that indicates sales volume per product per day. For the example table below (each line is a record of a sale):
id,create_date,report_date,quantity
329,2019-01-02 08:19:17,2019-01-02 14:34:12,6
243,2019-01-02 09:11:42,2019-01-03 15:30:14,6
238,2019-02-02 08:19:17,2019-03-02 14:36:17,2
170,2019-04-02 02:15:17,2019-04-02 14:37:12,2
238,2019-04-02 08:43:11,2019-04-02 14:41:01,8
238,2019-04-02 08:52:52,2019-04-02 14:39:12,1
238,2019-08-02 08:10:09,2019-08-02 15:02:12,1
238,2019-10-02 08:10:17,2019-10-02 18:34:11,1
170,2020-01-02 08:24:14,2020-01-02 19:31:31,2
170,2020-01-02 08:32:16,2020-01-02 21:52:32,3
The operations to reach the result:
1. Identify total sales and total products for each day.
For 2019-01-02, two sales were carried out, totaling 12 products (6 products for each sale on the day)
2. Divide total products by total sales, resulting in the product/sale ratio for the day (if the result is 2, it indicates that each sale on average corresponds to two products).
In the example table there are 6 different dates (YYYYMMDD); for each corresponding date: total products / number of sales on the day (12/2, 2/1, 11/3, 1/1, 1/1, 5/1).
3. Average every day's 'story', resulting in a single value.
(3 + 2 + 3.6 + 1 + 1 + 3)/6 = 2.26, indicating that on average two products are sold per sale per day.
As it involves many operations, I couldn't come up with a solution for this problem. I'd appreciate any help.
Note: I'm open to alternative suggestions for a measure that indicates the volume of sales per product per day.
Please check the numbers given in your steps 2 and 3:
12/2=6 not 3
5/1 must be 5/2
I still think that you want to calculate a 'day story' in step 2; see the formula below.
Here are the steps for generating such a value:
create a table
add your time as dimension and make it to date not date&time
order by date ascending (optional)
create a field day story with the formula sum(quantity)/count(id)
add this field three times to your table
click on the AUT to the left of the field name and set the Running calculation to 'running average'
You have to convince your users to only look at the last line of the table.
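For comparison, if the same records lived in a SQL table (hypothetical name sales, with the columns shown in the CSV above), the per-day ratio and its overall average could be sketched like this:

with per_day as (
    -- steps 1 and 2: total products divided by number of sales, per day
    select
        cast(create_date as date)       as sale_day,
        sum(quantity) * 1.0 / count(id) as products_per_sale
    from sales
    group by cast(create_date as date)
)
-- step 3: average the daily ratios into a single value
select avg(products_per_sale) as avg_products_per_sale_per_day
from per_day;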

Current Record Calculated Field on Previous Record Calculated Value

In this example, there are 5 periods of actual balances and the implied depreciation rates. Starting in Period 6, the Balance needs to be calculated from the previous period's balance ($8,177,481) and the current period's depreciation rate (-1.50%), and so on. I've heard of recursive CTEs but I am not familiar with them.
Period DeprRate Balance Comment
1 0% $10,000,000 Actual Values
2 -1.62% $9,838,000 Actual Values
3 -7.41% $9,109,004 Actual Values
4 -8.00% $8,380,284 Actual Values
5 -2.42% $8,177,481 Actual Values
6 -1.50% null should be $8,177,481*(1-.015)
7 -1.50% null should be Pd 6 Calc Balance *(1-.015)
8 -5.73% null should be Pd 7 Calc Balance *(1-.0573)
9 -4.13% null should be Pd 8 Calc Balance *(1-.0413)
10 -1.50% null should be Pd 9 Calc Balance *(1-.015)
CREATE TABLE Table1
([Period] int, [DeprRate] float, [Balance] float)
;
INSERT INTO Table1
([Period], [DeprRate], [Balance])
VALUES
(1,0,10000000),
(2,-0.0162,9838000),
(3,-0.0741,9109004.2),
(4,-0.08,8380283.864),
(5,-0.0242,8177480.9944912),
(6,-0.015,null),
(7,-0.015,null),
(8,-0.0573,null),
(9,-0.0413,null),
(10,-0.015,null);
"This seems relatively easy, but can't get it."
Yes, it is. Did you follow these steps?
"I have 10 periods of actual balances and the implied depreciation rates."
Step 1: Create a table (Table_1) and populate it with these values.
" Starting in Period 11, need the Balance to be calculated on previous period balance * the current period depreciation rate."
Step 2: Create a query that calculates the new balances from the values in the previous table, execute it, and populate a new table (Table_2) with the results.
" Period 11 isn't difficult if that's all that was needed by using lag. Problem is Period 12-20 need to be calculating current period balance on previous period calculated balance multiplied by the current period depreciation rate."
Step 3: Two options here: one is a recursive query, as 'Vinit' commented; another (easier) option is to repeat Step 2 and append the results to Table_2.
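For reference, a minimal sketch of the recursive-CTE option against the Table1 script above (column names as in the INSERT; it treats the latest non-null Balance as the anchor):

with balances as (
    -- anchor: the last period that has an actual balance
    select Period, DeprRate, cast(Balance as float) as CalcBalance
    from Table1
    where Period = (select max(Period) from Table1 where Balance is not null)

    union all

    -- recursive step: previous calculated balance * (1 + current period's rate)
    select t.Period, t.DeprRate, b.CalcBalance * (1 + t.DeprRate)
    from Table1 t
    join balances b on t.Period = b.Period + 1
)
select Period, DeprRate, CalcBalance
from balances
order by Period;

The actual rows for the periods before the anchor can be selected separately or appended with a plain UNION ALL.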
=======
Knowledge sharing / value addition to your question: Depreciation is an accounting concept that is usually taken into account either at year end (closing of the books) or at the end of an asset's life. This concept is tricky, as at least two different calculations usually have to be performed to satisfy both tax compliance and management accounting requirements. Additional calculations may also have to be carried out for each type of asset, just to decide on the best possible option.
Though you did not include a date column in your sample data, you should write the script to calculate and populate the depreciated values as of a particular date. You can also arrange to execute this script via a trigger or through a job agent (scheduling).
Hope this helps.

How to merge rows of SQL data on column-based logic?

I'm writing a margin report on our General Ledger and I've got the basics working, but I need to merge the rows based on specific logic and I don't know how...
My data looks like this:
value1 value2 location date category debitamount creditamount
2029 390 ACT 2012-07-29 COSTS - Widgets and Gadgets 0.000 3.385
3029 390 ACT 2012-07-24 SALES - Widgets and Gadgets 1.170 0.000
And my report needs to display the two columns together like so:
plant date category debitamount creditamount
ACT 2012-07-29 Widgets and Gadgets 1.170 3.385
The logic to join them is contained in the value1 and value2 columns. Where the last 3 digits of value1 and all three digits of value2 are the same, the rows should be combined. Also, the 1st digit of value1 will always be 2 for sales and 3 for costs (not sure if that matters).
I.e. 2029-390 is money coming in for Widgets and Gadgets sold to customers, while 3029-390 is money being spent to buy the Widgets and Gadgets from suppliers.
How can I do this programmatically in my stored procedure? (SQL Server 2008 R2)
Edit: Would I load the 3000's into one variable table and the 2000's into another, then join the two on value2 and right(value1, 3)? Or something like that?
Try this:
SELECT RIGHT(LTRIM(RTRIM(value1)), 3), value2, MAX(location),
       MAX(date), MAX(category), SUM(debitamount), SUM(creditamount)
FROM table1
GROUP BY RIGHT(LTRIM(RTRIM(value1)), 3), value2
It will sum the credit amounts and debit amounts. It will choose the maximum string value in the other columns; assuming those are always the same when value2 and the last 3 digits of value1 are the same, it shouldn't matter.
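Alternatively, the self-join described in the question's edit could look roughly like the sketch below. The prefix-stripping of the category and the treatment of value1/value2 as character data are assumptions about the schema, not something confirmed in the question:

SELECT
    s.location AS plant,
    s.date,
    -- strip the 'SALES - ' / 'COSTS - ' prefix; assumes the category text
    -- always follows that 'PREFIX - name' pattern
    STUFF(s.category, 1, CHARINDEX(' - ', s.category) + 2, '') AS category,
    c.debitamount,
    s.creditamount
FROM table1 s                      -- rows whose value1 starts with 2 (sales, per the question)
JOIN table1 c                      -- rows whose value1 starts with 3 (costs)
    ON  RIGHT(RTRIM(s.value1), 3) = RIGHT(RTRIM(c.value1), 3)
    AND s.value2 = c.value2
WHERE LEFT(LTRIM(s.value1), 1) = '2'
  AND LEFT(LTRIM(c.value1), 1) = '3';

Against the two sample rows this produces ACT, 2012-07-29, Widgets and Gadgets, 1.170, 3.385, matching the desired report line.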

Explaining row and column dependencies

This is a simple and common scenario at work, and I'd appreciate some input.
Say I am generating a report for the owners of a pet show, and they want to know which of their customers have bought how many of each pet. In this scenario my only tools are SQL and something that outputs my query to a spreadsheet.
As the shop owner, I might expect reports in the form:
Customer Dog Cat Rabbit
1 2 3 0
2 0 1 1
3 1 2 0
4 0 0 1
And if one day I decided to stock Goldfish then the report should now come out as.
Customer Dog Cat Rabbit Goldfish
1 2 3 0 0
2 0 1 1 0
3 1 2 0 0
4 0 0 1 0
5 0 0 0 1
But as you probably know, to have a query which works this way would involve some form of dynamic code generation and would be harder to do.
The simplest query would work along the lines of:
Cross join Customers and Pets, Outer join Sales, Group, etc.
and generate:
Customer Pet Quantity
1 Dog 2
1 Cat 3
1 Rabbit 0
1 Goldfish 0
2 Dog 0
2 Cat 1
2 Rabbit 1
...etc
a) How would I explain to the shop owners that the report they want is 'harder' to generate? I'm not trying to say it's harder to read, but it is harder to write.
b) What is the name of the concept I am trying to explain to the customer (to aid with my Googling)?
The name of the concept is 'cross-tab' and can be accomplished in several ways.
MS Access has proprietary extensions to SQL to make this happen. SQL Server pre-2005 has a CASE trick, and 2005 and later have PIVOT, but I think you still need to know what the columns will be.
Some databases indeed support some way of creating cross tables, but I think most need to know the columns in advance, so you'd have to modify the SQL (and get a database that supports such an extension).
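For illustration, the CASE trick for the pet example might look like the sketch below, assuming the "easy" resultset is available as a table or view named sales(customer, pet, quantity) (a name made up here). Adding Goldfish means editing the query:

SELECT
    customer,
    SUM(CASE WHEN pet = 'Dog'      THEN quantity ELSE 0 END) AS Dog,
    SUM(CASE WHEN pet = 'Cat'      THEN quantity ELSE 0 END) AS Cat,
    SUM(CASE WHEN pet = 'Rabbit'   THEN quantity ELSE 0 END) AS Rabbit,
    SUM(CASE WHEN pet = 'Goldfish' THEN quantity ELSE 0 END) AS Goldfish
FROM sales
GROUP BY customer
ORDER BY customer;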
Another alternative is to create a program that will postprocess the second "easy" table to get your clients the cross table as output. This is probably easier and more generic than having to modify SQL or dynamically generate it.
And about a way to explain the problem... you could show them in Excel how many steps are needed to get the desired result:
Source data (your second listing).
Select values from the pets column
Place each pet type found on a new column
Count values per each type per client
Fill the values
and then say that SQL gives you only the source data, so it's of course more work.
This concept is called pivoting
SQL assumes that your data is represented in terms of relations with fixed structure.
Like, equality is a binary relation, "customer has this many pets of this type" is a ternary relation and so on.
When you see this resultset:
Customer Pet Quantity
1 Dog 2
1 Cat 3
1 Rabbit 0
1 Goldfish 0
2 Dog 0
2 Cat 1
2 Rabbit 1
, it's actually a relation defined by all possible combinations of domain values being in this relation.
Like, customer 1 (from the domain of customer ids) has exactly 2 (from the domain of positive numbers) pets of genus dog (from the domain of pets).
We don't see rows like these in the resultset:
Customer Pet Quantity
1 Dog 3
Pete Wife 0.67
, because the first row is false (customer 1 doesn't have 3 dogs, but 2), and the second row's values are outside their domains.
The SQL paradigm implies that your relations are defined when you issue a query, and the rows returned define the relation completely.
SQL Server 2005+ can map rows into columns (that is what you want), but you need to know the number of columns when designing the query, not when running it.
As a rule, the reports you are trying to build are built with reporting software which knows how to translate relational SQL resultsets into nice looking human readable reports.
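To make that last point concrete, the SQL Server 2005+ PIVOT form mentioned above could be sketched as follows, again assuming a hypothetical sales(customer, pet, quantity) source. The pet columns still have to be listed at design time, and missing combinations come back as NULL rather than 0:

SELECT customer, [Dog], [Cat], [Rabbit], [Goldfish]
FROM (SELECT customer, pet, quantity FROM sales) AS src
PIVOT (SUM(quantity) FOR pet IN ([Dog], [Cat], [Rabbit], [Goldfish])) AS p
ORDER BY customer;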
I have always called this pivoting, but that may not be the formal name.
Whatever it's called you can do almost all of this in plain SQL.
SELECT customer, count(*),
       sum(CASE WHEN pet = 'dog' THEN 1 ELSE 0 END) as dog,
       sum(CASE WHEN pet = 'cat' THEN 1 ELSE 0 END) as cat
FROM customers
JOIN pets ON pets.customer_id = customers.customer_id  -- join condition depends on your schema
GROUP BY customer
Obviously what's missing is the dynamic columns. I don't know if this is possible in straight SQL, but it's certainly possible in a stored procedure to generate the query dynamically after first querying for a list of pets. The query is built into a string, and that string is then used to create a prepared statement.
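A hedged sketch of that dynamic approach in T-SQL, again using the hypothetical sales(customer, pet, quantity) table (STRING_AGG needs SQL Server 2017+; older versions would build the string with FOR XML PATH instead):

-- build one SUM(CASE ...) column per distinct pet, then run the assembled query
DECLARE @cols nvarchar(max), @sql nvarchar(max);

SELECT @cols = STRING_AGG(
    'SUM(CASE WHEN pet = ''' + pet + ''' THEN quantity ELSE 0 END) AS ' + QUOTENAME(pet),
    ', ')
FROM (SELECT DISTINCT pet FROM sales) AS p;

SET @sql = 'SELECT customer, ' + @cols
         + ' FROM sales GROUP BY customer ORDER BY customer;';

EXEC sp_executesql @sql;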
