I am creating an application that needs to store dynamic data in a graph structure (using nodes and edges).
Each table contains the information for one product.
Details   Fixed or Dynamic   Description
Name      Fixed              Required for all products
Parts     Dynamic            The required parts depend on the product
Measure   Dynamic            Mainly used to find profit and for other calculations, e.g. Product 2 has a Discount and Product 3 has a Discount + Tax
Requirements:
Need to store all information in an MS SQL graph database.
Need to add extra properties against each Measure, such as User Name and Date Created.
Note: more details, such as Date Updated, may be added to these extra properties in the future.
Current Approach:
Currently, we are storing all information as JSON in SQL Server for saving and retrieval.
{
"Product Name":"Bike With 1 Part",
"Part 1":"Steel",
"Sell":"15",
"Cost":"9",
"Profit":"4"
},
{
"Product Name":"Bike With 2 Part",
"Part 1":"Plastic",
"Part 2":"Steel",
"Sell":"12",
"Cost":"9",
"Discount":"2",
"Profit":"1"
},
{
"Product Name":"Bike With 3 Part",
"Part 1":"Plastic",
"Part 2":"Steel",
"Part 3":"Copper",
"Sell":"15",
"Cost":"9",
"Discount":"2",
"Tax":"1",
"Profit":"3"
},
Problem statement:
The JSON format is not fixed, as the number of parts and measures can go up or down.
Discount, Tax, and the part count are dynamic (they may or may not be required for a given product).
Querying information such as "Product -> Part 1 -> Profit" is not possible with the JSON format.
Final Requirement:
Create graph node and edge tables.
Instead of the JSON format, all information needs to be stored in an MS SQL graph database with node and edge tables, so I need help with the node and edge table design.
Note: data can be fetched by Product Name, Part Number, Profit, etc. using a graph MATCH query.
Product Information in Relational DB Format
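A minimal sketch of one possible node/edge design, assuming SQL Server 2017 or later with graph support; the table, column, and edge names (ProductNode, PartNode, MeasureNode, ContainsPart, HasMeasure) are illustrative choices, not requirements from the question:

CREATE TABLE ProductNode (
    ProductId INT PRIMARY KEY,
    ProductName NVARCHAR(100) NOT NULL
) AS NODE;

CREATE TABLE PartNode (
    PartId INT PRIMARY KEY,
    PartName NVARCHAR(100) NOT NULL
) AS NODE;

CREATE TABLE MeasureNode (
    MeasureId INT PRIMARY KEY,
    MeasureName NVARCHAR(50) NOT NULL,    -- e.g. Sell, Cost, Discount, Tax, Profit
    MeasureValue DECIMAL(18, 2) NOT NULL,
    UserName NVARCHAR(100) NULL,          -- extra properties such as User Name,
    DateCreated DATETIME2 NULL            -- Date Created (and later Date Updated)
) AS NODE;

CREATE TABLE ContainsPart AS EDGE;        -- Product -> Part
CREATE TABLE HasMeasure AS EDGE;          -- Product or Part -> Measure

-- Example fetch along "Product -> Part -> Profit" using a graph MATCH query
SELECT p.ProductName, pt.PartName, m.MeasureValue
FROM ProductNode AS p, ContainsPart AS cp, PartNode AS pt,
     HasMeasure AS hm, MeasureNode AS m
WHERE MATCH(p-(cp)->pt-(hm)->m)
  AND m.MeasureName = 'Profit';

Because measures such as Discount and Tax become rows in MeasureNode rather than fixed columns, a product can carry any combination of them, and future properties (e.g. Date Updated) can be added as ordinary columns on MeasureNode.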
I'm trying to subtract a value in one field from a value in another table, depending on the ID number, using FileMaker Pro 19.x. I thought I'd done this before without any problems, but now it isn't working and I don't know why.
First, I want to do this in a calculation field, not in a script. I don't understand scripting at all and use it only when there is no alternative. In this case, I think a calculation field should work.
I have two tables; I'll call them "Data" and "Categories".
Both tables have the field "CID" (Category ID).
The CID fields in both tables are linked in the Relationship Editor.
The Data table has a field "Product ID".
The Categories table has several fields related to products. Two of those are "MIN PID" and "MAX PID": the minimum and maximum product ID numbers.
Product IDs are assigned in ranges: if an ID falls within a certain range, it has to belong to a certain category. These categories have CID numbers.
To assign the CID number to the products listed in the Data table, I ran a script that essentially recreated all the data within the Categories table. It was inefficient (in my eyes) because the data was sitting right there in the table; I couldn't figure out how to reference it, so I gave up and ran the script instead. The other problem is that if the CID for a product ever changes, I have to rerun the script (or someone else does, who might not know about the script).
That said, I now have the correct CID assigned for all 62 product categories. What I want to do now is to use the CID MIN and CID MAX values (among others) in calculation fields in the Data table.
For instance, if the product ID is "45,001", it belongs to Category "16", which has a MIN value of "30,000" and a MAX value of "50,000". I want to subtract the "30,000" from the "45,001" (PID - CID MIN) to return the result "15,001". This allows me to locate a product within the category: instead of being "Product 45,001", it will be "Product 16.15001".
The calculation I tried is:
If (CID = CID::CID ; PID - CID::CID MIN)
The result is a huge group of records where this field is empty.
I also tried:
PID - CID::CID MIN
Again, no results.
I have tried this with a test file and it works perfectly. It does not work in the database I am working within.
What am I doing wrong?
I have imported my GA4 data into Google Data Studio and am trying to see how many giftcards have been sold, by their value.
The item revenue metric in GA4 is equal to the giftcard value (i.e. revenue = $200 means a $200 giftcard was sold).
I want to break down sales by giftcard value like so:
Giftcard (revenue)   Count
$200                 4
$250                 3
$300                 6
To do this, I need to set a copy of item revenue as a dimension rather than a metric.
In Google Data Studio, I can create a calculated field with the following formula that should convert the item revenue into text:
CAST(Item Revenue AS TEXT)
The problem I'm having is that while the formula sets the field type as text, it is still regarded by GDS as a metric and can't be used as a dimension.
Even when I try to add text, GDS still recognises the field as a number:
CONCAT(CAST(Item Revenue AS TEXT), " giftcard")
To use a metric as a dimension, you can use a data blend. When defining the chart element (a table, for example) and its data source, create a blend, but do not blend the data with any other source: define the blend using only the initial data source itself. You end up with the same data structure, just accessed through a blended source.
When data is blended, Data Studio treats all calculated fields (metrics) as dimensions, so the conversion becomes possible.
I have a dataset including 3 columns:
ID transac (The unique ID of the transaction - Dimension)
Source (The source of the transaction - Dimension)
Amount € (The amount of the transaction - Stat)
screenshot of my dataset
To count the number of transactions (for one or more sources), I use the COUNT_DISTINCT function.
I want to sum the transaction amounts (for one or more sources), but I don't want to add up the amounts of transactions that share the same ID!
Is there a way to do this calculation with a Data Studio function?
Thanks for your answers. :-)
EDIT: I saw that we could do this type of calculation via SQL here, and I would like to do it in Data Studio (so that I don't have to pre-calculate the amounts per source).
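For reference, a rough sketch of the SQL the edit alludes to, which deduplicates by transaction ID before summing; the table and column names (transactions, transac_id, source, amount) are assumptions, not taken from the question:

SELECT source,
       SUM(amount) AS total_amount
FROM (
    -- keep a single row per transaction ID so duplicate rows are not double-counted
    SELECT DISTINCT transac_id, source, amount
    FROM transactions
) AS t
GROUP BY source;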
IMO, your dataset contains wrong data. Each value should relate only to its own line, but this is not the case: if the total is 20, each line should describe that line's contribution to the total. With 4 sources, each line should be 5, or something else that sums to 20.
To solve this in Data Studio, you would need something like the CALCULATE function in Power BI, but currently Data Studio doesn't support this feature.
But there are some options to consider to repair your data:
If you're sure there are always 4 sources, just create a new calculated field with the expression Amount/4 and SUM it. It is not an elegant solution, but it works.
If your data source is Google Sheets, you can easily repair the data using formulas, like in this example:
Link to spreadsheet
For this spreadsheet, I used this formula in adjusted_amount column: =C2/COUNTIF(A:A,A2). With this column in DataStudio, just use the usual SUM aggregation function to summarize it correctly.
After using Power BI for a few months now, we (the user group) have encountered an issue that is not really clear to us.
We use Power BI with a remote SQL Server data source, which we access through DirectQuery.
Let's pretend we have two tables as below:
Table name: Issue
Columns:
ResolutionTime (Date/Time)
IssueID (unique numbers)
Table name: WorkItem
Columns:
start (Date/Time)
end (Date/Time)
IssueID (unique numbers, foreign key to the "Issue" table)
Table WorkItem also contains a calculated column "WorkTime", which uses the DAX expression below:
WorkTime = WorkItem[end] - WorkItem[start]
The two tables are configured through Power-Bi having a two-way 1:n relationship that can be queried to collect all "WorkItem"(s) assigned to an "Issue" entry, using the "IssueID" as correlation column.
To be able to compute the aggregated "work-time" for each "WorkItem", we use a new/calculated table with the following DAX expression to aggregate the total amount of time invested for a single "Issue":
SumWork =
SUMMARIZE(
WorkItem, WorkItem[IssueID], "All work per item", SUM(WorkItem[WorkTime])
)
The above table computes the total invested work-time for a particular issue, grouping/summarizing results based on the "IssueID" foreign key. This new calculated table is also configured to have a relationship with the "Issue" table, this time a "1:1" relationship, using the IssueID as correlation column.
Now the time that the issue was worked on plus the time for resolution should be summarized in a calculated column inside "Issue", but this does not work:
ResolutionAndWorkTime = Issue[ResolutionTime] + SumWork["All work per item"]
But the above DAX expression fails to compile, as it always reports that it returns "more than one result" and is thus not a single value. That is surprising, as the two tables ("Issue" and "SumWork") are related to each other with a 1:1 relationship.
Tables:

Issues
IssueID   ResolutionTime   ResolutionAndWorkTime
1         03:20:20         ???
2         01:20:20         ???
3         00:20:20         ???

WorkItem
IssueID   start              end                WorkTime
1         1-2-2020 3:20:20   1-2-2020 3:25:20   00:05:00
1         2-2-2020 6:20:20   2-2-2020 7:20:20   01:00:00
3         1-3-2020 3:20:20   1-3-2020 3:29:20   00:09:00
Any ideas what to look for? Data types? Table definitions? Table relationships? We checked other Stack Overflow questions and answers, but have found no good ideas so far.
NOTE that a lot of the join/merge features of Power BI are not available when DirectQuery is used, and thus joining the tables is not really an option (we think).
You need the following code for your new calculated column.
Visit HERE to learn more about RELATED.
ResolutionAndWorkTime = Issues[ResolutionTime] + RELATED(SumWork[All work per item])
Based on the input provided by "mkRabbani" (see the other answer), we investigated why RELATED does not function as expected. The problem originates in the access to the database: as suspected earlier, the function delivers the expected results once the database access is switched to "import" instead of DirectQuery.
As a workaround, we now join the data inside SQL Server using traditional database views. Of course, this only works for scenarios where the database is under the control of the data analytics team.
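A rough sketch of the kind of view this workaround describes, assuming the Issue and WorkItem tables from the question; the view name and the seconds-based duration are illustrative choices, not the poster's actual code:

CREATE VIEW dbo.IssueWorkTime AS
SELECT i.IssueID,
       i.ResolutionTime,
       -- total work time per issue, precomputed inside the database
       SUM(DATEDIFF(SECOND, w.[start], w.[end])) AS TotalWorkSeconds
FROM dbo.Issue AS i
LEFT JOIN dbo.WorkItem AS w
       ON w.IssueID = i.IssueID
GROUP BY i.IssueID, i.ResolutionTime;

Power BI can then read this view through DirectQuery instead of relating the two tables itself.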
I have a SQL Server database with a column called Categories containing comma-separated values stored as nvarchar(50). The length of the value varies from record to record.
I've created a new, empty column called CategoriesJSONArray. For all existing records, I need to convert the values in Categories to JSON arrays and write the converted values to CategoriesJSONArray.
How would I accomplish this using T-SQL, or from within the SSMS UI?
EDIT: example, as requested
Current:
Record Categories
1 Sales, Support, Growth
2 Sales, Growth
3 Sales
4 Support, Growth, Sustain
Desired:
Record Categories CategoriesJSONArray
1 Sales, Support, Growth { "Sales" : "Support" : "Growth"}
2 Sales, Growth { "Sales" : "Growth"}
3 Sales { "Sales" }
4 Support, Growth, Sustain { "Support" : "Growth" : "Sustain"}
There are some ways to split a delimited string into records and then aggregate those back into JSON, but I think some concatenation and replacing would be easier here:
SELECT record,
Categories,
'{"' + REPLACE(Categories, ', ', '":"') + '"}' as CategoriesJSONArray
FROM table
That may need some tweaking to deal with spaces and whatnot and it obviously won't work if your JSON becomes more complex, but it's quick and dirty.
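For completeness, a rough sketch of the split-and-aggregate approach mentioned above, which produces a proper JSON array and writes it into the new column; it assumes SQL Server 2017 or later (for STRING_SPLIT, STRING_AGG and TRIM) and a hypothetical table name dbo.MyTable:

UPDATE t
SET CategoriesJSONArray = j.JsonArray
FROM dbo.MyTable AS t
CROSS APPLY (
    -- split on commas, trim spaces, and rebuild the pieces as a JSON array of strings
    SELECT '[' + STRING_AGG('"' + TRIM(s.value) + '"', ',') + ']' AS JsonArray
    FROM STRING_SPLIT(t.Categories, ',') AS s
) AS j;

Note that STRING_SPLIT does not guarantee the order of the fragments, and the category values are assumed not to contain double quotes; STRING_ESCAPE(s.value, 'json') can be used if they might.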