Matching and labeling

Good morning all. I'm trying to write some of my first scripts and I'm having a difficult time doing so. I'm trying to match data from one file to another and add a label to the matching row in the original.
I'm using two different data sources to accomplish this, and there are tens of thousands of rows to match. I'm trying to take one column of zip codes in data source one, match it to the same zip codes in data source two, and add a new column to data source one labeling the location. See the example below.
Data Source One:

| A     | B |
|-------|---|
| 13329 | X |
| 22193 | X |
| 13211 | X |

Data Source Two:

| A     | B          |
|-------|------------|
| 13211 | Syracuse   |
| 22193 | D.C. Metro |
| 13329 | Utica Rome |

New Data Source One:

| A     | B | C          |
|-------|---|------------|
| 13329 | X | Utica-Rome |
| 22193 | X | D.C. Metro |
| 13211 | X | Syracuse   |
New Data Source One is the desired end state. Some rows will have no match and can be labeled N/A or NA (either is fine). I hope I have explained the problem and the desired result well enough. Please help.

More commonly than matching and labeling, this operation is called joining.
join -a 1 -e 'N/A' -o '0,1.2,2.2' <(sort DS1) <(sort DS2)
Here -a 1 keeps zip codes from DS1 that have no match, -e 'N/A' fills their missing location with N/A, and -o picks the output columns (join field, DS1 label, DS2 location).
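If the data ends up in a database or any SQL-capable tool instead, the same operation is a LEFT JOIN. A minimal sketch, assuming hypothetical tables ds1(zip, flag) and ds2(zip, location) standing in for the two files:

```sql
-- Minimal sketch of the same join in SQL; ds1(zip, flag) and
-- ds2(zip, location) are hypothetical stand-ins for the two files.
select ds1.zip,
       ds1.flag,
       coalesce(ds2.location, 'N/A') as location  -- unmatched zips become N/A
from ds1
left join ds2
  on ds2.zip = ds1.zip;
```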


Combining fields in Google Data Studio

I have a CSV file of the form (unimportant columns hidden)
player,game1,game2,game3,game4,game5,game6,game7,game8
Example data:
Alice,0,-10,-30,-60,-30,-50,-10,30
Bob,10,20,30,40,50,60,70,80
Charlie,20,0,20,0,20,0,20,0
Derek,1,2,3,4,5,6,7,8
Emily,-40,-30,-20,-10,10,20,30,40
Francine,1,4,9,16,25,36,49,64
Gina,0,0,0,0,0,0,0,0
Hank,-50,50,-50,50,-50,50,-50,50
Irene,-20,-20,-20,50,50,-20,-20,-20
I am looking for a way to make a Data Studio view where I can see a chart of all the results of a certain player. How would I make a custom field that combines the data from game1 to game8 so I can make a chart of it?
| Name | Scores |
|----------|---------------------------------|
| Alice | [0,-10,-30,-60,-30,-50,-10,30] |
| Bob | [10,20,30,40,50,60,70,80] |
| Charlie | [20,0,20,0,20,0,20,0] |
| Derek | [1,2,3,4,5,6,7,8] |
| Emily | [-40,-30,-20,-10,10,20,30,40] |
| Francine | [1,4,9,16,25,36,49,64] |
| Gina | [0,0,0,0,0,0,0,0] |
| Hank | [-50,50,-50,50,-50,50,-50,50] |
| Irene | [-20,-20,-20,50,50,-20,-20,-20] |
The goal of the resulting chart would be a line of a player's scores, where game1 is the first point, game2 the second, and so on.
If this is not possible, how would I best represent the data so what I am looking for can work in Data Studio? I currently have it implemented in a Google Sheet, but the issue is there's no way to make views, so when someone selects a row it changes for everyone viewing it.
If you have the two game files as data sources, I guess that you want to combine them by the player name, right?
You can do it with the data blending option; I think it's under Resource > Manage blends.
Then you can create a blended data source, merging them by the name.
You can also add both score fields, with different labels.
Here is some documentation about it: https://support.google.com/datastudio/answer/9061420?hl=en

Ad-hoc slowly-changing dimension materialization from an external table of timestamped CSVs in a data lake

Question
main question
How can I ephemerally materialize a slowly changing dimension (type 2) from a folder of daily extracts, where each csv is one full extract of a table from a source system?
rationale
We're designing ephemeral data warehouses as data marts for end users that can be spun up and burned down without consequence. This requires we have all data in a lake/blob/bucket.
We're ripping daily full extracts because:
we couldn't reliably extract just the changeset (for reasons out of our control), and
we'd like to maintain a data lake with the "rawest" possible data.
challenge question
Is there a solution that could give me the state as of a specific date and not just the "newest" state?
existential question
Am I thinking about this completely backwards, and is there a much easier way to do this?
Possible Approaches
custom dbt materialization
There's an insert_by_period dbt materialization in the dbt-utils package that I think might be exactly what I'm looking for. But I'm confused, as what I want is essentially dbt snapshot, but:
run dbt snapshot for each file incrementally, all at once; and,
built directly off of an external table?
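For reference, a plain dbt snapshot over the extracts might look like the sketch below; the source name lake.crm_extract is hypothetical:

```sql
-- Hypothetical dbt snapshot over the external table of extracts.
{% snapshot crm_opportunities_snapshot %}

{{
    config(
      target_schema='snapshots',
      unique_key='OppId',
      strategy='timestamp',
      updated_at='LastModified'
    )
}}

select * from {{ source('lake', 'crm_extract') }}

{% endsnapshot %}
```

The catch is that dbt snapshot captures changes as of each run, so rebuilding history from the folder would mean replaying the snapshot once per daily file in chronological order - exactly the gap described above.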
Delta Lake
I don't know much about Databricks's Delta Lake, but it seems like it should be possible with Delta Tables?
Fix the extraction job
Is our problem solved if we can make our extracts contain only what has changed since the previous extract?
Example
Suppose the following three files are in a folder of a data lake. (Gist with the 3 csvs and desired table outcome as csv).
I added the Extracted column in case parsing the timestamp from the filename is too tricky.
2020-09-14_CRM_extract.csv
| OppId | CustId | Stage | Won | LastModified | Extracted |
|-------|--------|-------------|-----|--------------|-----------|
| 1 | A | 2 - Qualify | | 9/1 | 9/14 |
| 2 | B | 3 - Propose | | 9/12 | 9/14 |
2020-09-15_CRM_extract.csv
| OppId | CustId | Stage | Won | LastModified | Extracted |
|-------|--------|-------------|-----|--------------|-----------|
| 1 | A | 2 - Qualify | | 9/1 | 9/15 |
| 2 | B | 4 - Closed | Y | 9/14 | 9/15 |
| 3 | C | 1 - Lead | | 9/14 | 9/15 |
2020-09-16_CRM_extract.csv
| OppId | CustId | Stage | Won | LastModified | Extracted |
|-------|--------|-------------|-----|--------------|-----------|
| 1 | A | 2 - Qualify | | 9/1 | 9/16 |
| 2 | B | 4 - Closed | Y | 9/14 | 9/16 |
| 3 | C | 2 - Qualify | | 9/15 | 9/16 |
End Result
Below is SCD-II for the three files as of 9/16. SCD-II as of 9/15 would be the same, except OppId=3 would have only one row, with valid_from=9/15 and valid_to=null.
| OppId | CustId | Stage | Won | LastModified | valid_from | valid_to |
|-------|--------|-------------|-----|--------------|------------|----------|
| 1 | A | 2 - Qualify | | 9/1 | 9/14 | null |
| 2 | B | 3 - Propose | | 9/12 | 9/14 | 9/15 |
| 2 | B | 4 - Closed | Y | 9/14 | 9/15 | null |
| 3 | C | 1 - Lead | | 9/14 | 9/15 | 9/16 |
| 3 | C | 2 - Qualify | | 9/15 | 9/16 | null |
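For what it's worth, one way to materialize this ephemerally is a single query with window functions over all the extracts stacked together. A sketch, assuming the csvs are queryable as one hypothetical external table crm_extracts:

```sql
-- Sketch: derive SCD-II from stacked full extracts with window functions.
-- crm_extracts is a hypothetical external table over every csv, with the
-- Extracted column populated as in the example above.
with changes as (
    select *,
           lag(Stage) over (partition by OppId order by Extracted) as prev_stage,
           lag(Won)   over (partition by OppId order by Extracted) as prev_won
    from crm_extracts
),
versions as (
    -- keep each key's first appearance plus every row whose tracked
    -- columns differ from the previous extract
    select *
    from changes
    where prev_stage is distinct from Stage
       or prev_won   is distinct from Won
)
select OppId, CustId, Stage, Won, LastModified,
       Extracted as valid_from,
       lead(Extracted) over (partition by OppId order by Extracted) as valid_to
from versions;
```

Filtering crm_extracts on Extracted <= some date before the window functions run gives the state as of that date, which would also cover the challenge question; hard deletes (keys that vanish from an extract) would need extra handling.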
Interesting concept, and of course it would be a longer conversation than is possible in this forum to fully understand your business, stakeholders, data, etc. I can see that it might work if you had a relatively small volume of data, your source systems rarely changed, your reporting requirements (and hence, datamarts) also rarely changed, and you only needed to spin up these datamarts very infrequently.
My concerns would be:
If your source or target requirements change how are you going to handle this? You will need to spin up your datamart, do full regression testing on it, apply your changes and then test them. If you do this as/when the changes are known then it's a lot of effort for a Datamart that's not being used - especially if you need to do this multiple times between uses; if you do this when the datamart is needed then you're not meeting your objective of having the datamart available for "instant" use.
I'm not sure your statement "we have a DW as code that can be deleted, updated, and recreated without the complexity that goes along with traditional DW change management" is true. How are you going to test updates to your code without spinning up the datamart(s) and going through a standard test cycle with data - and then how is this different from traditional DW change management?
What happens if there is corrupt/unexpected data in your source systems? In a "normal" DW where you are loading data daily this would normally be noticed and fixed on the day. In your solution the dodgy data might have occurred days/weeks ago and, assuming it loaded into your datamart rather than erroring on load, you would need processes in place to spot it and then potentially have to unravel days of SCD records to fix the problem.
(Only relevant if you have a significant volume of data) Given the low cost of storage, I'm not sure I see the benefit of spinning up a datamart when needed as opposed to just holding the data so it's ready for use. Loading large volumes of data every time you spin up a datamart is going to be time-consuming and expensive. A possible hybrid approach might be to only run incremental loads when the datamart is needed, rather than running them every day - so you have the data from when the datamart was last used ready to go at all times, and you just add the records created/updated since the last load.
I don't know whether this is the best or not, but I've seen it done. When you build your initial SCD-II table, add a column that is a stored HASH() value of all of the values of the record (you can exclude the primary key). Then, you can create an External Table over your incoming full data set each day, which includes the same HASH() function. Now, you can execute a MERGE or INSERT/UPDATE against your SCD-II based on primary key and whether the HASH value has changed.
Your main advantage doing things this way is you avoid loading all of the data into Snowflake each day to do the comparison, but it will be slower to execute this way. You could also load to a temp table with the HASH() function included in your COPY INTO statement and then update your SCD-II and then drop the temp table, which could actually be faster.
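A rough sketch of that approach in Snowflake-flavored SQL, with table and column names borrowed from the example above (all illustrative):

```sql
-- Sketch: close out changed rows by comparing stored vs. incoming hashes.
-- scd2_opportunities and ext_crm_extract are hypothetical names.
merge into scd2_opportunities t
using (
    select OppId, CustId, Stage, Won, LastModified, Extracted,
           hash(CustId, Stage, Won, LastModified) as row_hash
    from ext_crm_extract            -- external table over today's csv
) s
on t.OppId = s.OppId and t.valid_to is null
when matched and t.row_hash <> s.row_hash then
    update set t.valid_to = s.Extracted
when not matched then
    insert (OppId, CustId, Stage, Won, LastModified, row_hash, valid_from, valid_to)
    values (s.OppId, s.CustId, s.Stage, s.Won, s.LastModified, s.row_hash, s.Extracted, null);
```

A single MERGE can expire changed rows and insert brand-new keys, but it cannot also insert the new versions of the expired rows in the same statement, so in practice this becomes the INSERT/UPDATE pair mentioned above: one statement to close old versions, one to insert the new ones.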

Pulling ill-formatted data in Libre Calc: What Function will work with this?

I am working on a project where I am pulling tables from a Fandom Wikia page and feeding them into a spreadsheet named 'WikiPullSheet'. The data in the wiki tables is irregular in format, sometimes using multiple rows for the same entry.
Here is an example of some rows as described above from the sheet:
| Name            | Power | Stamina | Agility |
|-----------------|-------|---------|---------|
| Townsman Shield | 2     | 1       | 2       |
| Starter         |       |         |         |
| Broken Shield   | 4(+1) | 2(+1)   | 2(+1)   |
| Z1              |       |         |         |
| Heater          | 2(+1) | 4(+1)   | 2(+1)   |
| Z1              |       |         |         |
| Wood Elf Shield | 2(+1) | 2(+1)   | 4(+1)   |
| Z1              |       |         |         |
| Shiv            | 4     | 4       | 3       |
| Z1 Shop         |       |         |         |
| Deimos*         | 26    | 16      | 26      |
|                 | 34    | 22      | 34      |
I want the sheet to auto-update from the wikia page but this format will not allow me to reference items as the sheet expands. For instance, if on another sheet I want to have a drop down list of all the names for items in this list, I would be referencing the blank and starter cells even though they are not actually unique items in the table. I have done research on VLOOKUP, COUNTIF, REGEX options, MATCH, and more, but none of these seem to work for the issue I am having.
How would I take this input and either create a formula to reformat it or pull from the sheet as is and use the columns appropriately for a drop-down box containing only the item names from the NAME column?
Desired Output:
I need the data to end up formatted with each row representing a different unique item. Since the data comes in with rows whose Name column actually holds the item's location (Z1, for instance), this is proving difficult. I could simply remove the rows that cause problems, such as 'Z1' & 'Z1 Shop' in the above example; however, this does not help when an item has multiple upgrade paths, as in the case of the 'Deimos' row entry.
If you insert a pivot table (there is an icon to do so; select ColumnA first) based on ColumnA (assuming that is where Name is to be found) you should get something like:
It is far from a complete solution (you don't show what the desired output should be) but I thought a sorted list, with each entry unique and the blanks at least out of the way, might have been a start.

uml sequence diagram: create objects in a loop

In a sequence diagram I am trying to model a loop that creates a bunch of objects. I have found little information online regarding the creation of multiple objects in a sequence diagram, so I turn to you.
The classes are Deck and Card.
Cards are created by fillDeck(), which is called by the constructor of Deck (FYI, the objects are stored in an ArrayList in Deck).
There are many types of cards with varying properties. Suppose I want 8 cards of type A to be made, 12 of type B, and 3 of type C.
How would I go about modelling such a thing? This is the idea I have in mind so far, but it is obviously incomplete.
Hope someone can help! Thanks!
   +------+
   | Deck |
   +------+
      |
  +---+----------------------------------+
  | loop 8x /                            |
  |   |     «create»      +---------+    |
  |   |------------------>| Card(A) |    |
  |   |                   +---------+    |
  +---+----------------------------------+
      |
  +---+----------------------------------+
  | loop 12x /                           |
  |   |     «create»      +---------+    |
  |   |------------------>| Card(B) |    |
  |   |                   +---------+    |
  +---+----------------------------------+
      |
  +---+----------------------------------+
  | loop 3x /                            |
  |   |     «create»      +---------+    |
  |   |------------------>| Card(C) |    |
  |   |                   +---------+    |
  +---+----------------------------------+
      |
"A sequence diagram describes an Interaction by focusing on the sequence of Messages that are exchanged, along with their corresponding OccurrenceSpecifications on the Lifelines." (UML standard) A lifeline are defined by one object. But that doesn't mean you must keep all objects in lifelines. You should show only these lifelines, that are exchanging messages you are thinking about.
And you needn't show all messages sequences logic on one diagram. In one SD normally you are showing one Interaction. Or maybe a few of them, if they are simple.
So, if your SD is showing one logical concept, it is correct. If there will be another interaction between some objects, you will draw another SD for this interaction, and there will be only objects participating in this second interaction.
UML standard 2.5. Figure 17.25 - Overview of Metamodel elements of a Sequence Diagram

Spare parts Database (structure)

There is a database of spare parts for cars, with an online search by the name of the spare part. The user can type in the search, for example, "safety cushion" or "airbag" - and the search result should be the same.
Therefore, I need to somehow implement aliases for the names of spare parts, and the question is how to store them in the database? Until now I have only one option that comes to mind - to create an additional table
| id | name of part   | alias_id |
|----|----------------|----------|
| 1  | airbag         | 10       |
| 2  | safety cushion | 10       |
And add an additional field "alias_id" to the table containing all the spare parts, and search by this field...
Are there other better options?
If I have understood correctly, it's best to have 3 tables in a many-to-many situation (if multiple parts can have multiple aliases):
Table - Parts

| id | name of part   |
|----|----------------|
| 1  | airbag         |
| 2  | safety cushion |

Table - Aliases

| id | name of alias |
|----|---------------|
| 10 | AliasName     |

Table - PartToAliases

| id | PartId | AliasId |
|----|--------|---------|
| 1  | 1      | 10      |
| 2  | 2      | 10      |
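As a sketch in SQL (all names illustrative), the three tables and an equivalence search could look like this:

```sql
-- Hypothetical sketch of the three-table, many-to-many design above.
create table parts (
    id   integer primary key,
    name varchar(100) not null
);

create table aliases (
    id   integer primary key,
    name varchar(100) not null
);

create table part_to_aliases (
    id       integer primary key,
    part_id  integer not null references parts(id),
    alias_id integer not null references aliases(id)
);

-- Searching for 'safety cushion' returns every part in the same alias
-- group, so 'airbag' comes back for the same query.
select distinct p2.*
from parts p1
join part_to_aliases pa1 on pa1.part_id  = p1.id
join part_to_aliases pa2 on pa2.alias_id = pa1.alias_id
join parts p2            on p2.id        = pa2.part_id
where p1.name = 'safety cushion';
```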
Your solution looks fine for the exact problem you described.
BUT what if someone writes safetycushion? Or safety cuschion? With these kinds of variations your alias lookup table will soon become huge, and manually maintaining it will not be feasible.
At that point you'll need a completely different approach (think full-text search engine).
So if you are still sure you only need a couple of aliases, your approach seems fine.
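For the misspelling case, if the database happens to be PostgreSQL, the pg_trgm extension is a lightweight middle ground before a full search engine; a sketch (the 0.4 threshold is illustrative and would need tuning):

```sql
-- Sketch: fuzzy matching with the PostgreSQL pg_trgm extension, which
-- tolerates variations like 'safety cuschion'.
create extension if not exists pg_trgm;

select p.*, similarity(p.name, 'safety cuschion') as score
from parts p
where similarity(p.name, 'safety cuschion') > 0.4   -- illustrative threshold
order by score desc;
```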
