Single table column refers to multiple primary keys - database

I need to store multiple values in a single column.
For example, I am creating a table which holds user preferences, e.g.:
| user_id | cities | countries |
|---------|------------|------------|
| 1 | 10, 11, 23 | 21, 34 |
Because I can't store them as an array (or would prefer not to even where arrays are available, for maintenance and performance reasons, and for better RDBMS design), I have to create a mapping table like this:
| user_id | type | reference_id |
|---------|---------|--------------|
| 1 | CITY | 10 |
| 1 | CITY | 11 |
| 1 | CITY | 23 |
| 1 | COUNTRY | 21 |
| 1 | COUNTRY | 34 |
The reference_id column refers to master tables such as city, country, etc.
The problems I see here are:
I can't have an FK reference to the city or country table, because a single reference_id value may refer to either city or country depending on the type.
Since I can't have an FK, there is no guarantee against dirty data.
Is there a better approach?
Note:
I have given city/country as a sample, but I need around 20 columns that can hold multiple values like city or country.
In the future I may introduce a boolean preference like "whether you like to travel", storing TYPE as "TRAVEL" and reference_id as 0 for yes and 1 for no, which definitely will not reference any master table.

You could create a Location table {LocationId, LocationType (city/country)}.
Then, every time you add a new record to the city or country table, add it to the Location table first, and then add it to the city (or country) table as appropriate, using the same CityId (or CountryId) that was used as the LocationId in the Location table.
Then create an FK between the preferences table and the Location table, and add a [zero or one]-to-one (0/1 - 1) FK relationship from the City and Country tables to the Location table. (Every record in the City and Country tables must be in the Location table, but not the other way around.)
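A minimal sketch of that supertype pattern, assuming SQL Server-style syntax and invented names:
-- Supertype table: one row per city or country.
CREATE TABLE Location (
    LocationId   INT PRIMARY KEY,
    LocationType VARCHAR(10) NOT NULL CHECK (LocationType IN ('CITY', 'COUNTRY')),
    UNIQUE (LocationId, LocationType)  -- lets subtype tables pin their type
);
-- Subtype tables reuse the LocationId as their own key.
CREATE TABLE City (
    CityId       INT PRIMARY KEY,
    LocationType VARCHAR(10) NOT NULL DEFAULT 'CITY' CHECK (LocationType = 'CITY'),
    Name         VARCHAR(100) NOT NULL,
    FOREIGN KEY (CityId, LocationType) REFERENCES Location (LocationId, LocationType)
);
CREATE TABLE Country (
    CountryId    INT PRIMARY KEY,
    LocationType VARCHAR(10) NOT NULL DEFAULT 'COUNTRY' CHECK (LocationType = 'COUNTRY'),
    Name         VARCHAR(100) NOT NULL,
    FOREIGN KEY (CountryId, LocationType) REFERENCES Location (LocationId, LocationType)
);
-- The preferences mapping can now carry a real FK regardless of type.
CREATE TABLE UserPreference (
    UserId     INT NOT NULL,
    LocationId INT NOT NULL REFERENCES Location (LocationId),
    PRIMARY KEY (UserId, LocationId)
);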

You're saying you want one generic table instead of 20 lookup tables enforcing RI? On a large system the data would be stored in multiple tables, rather than packed into a delimited string and exploded out in another table, which reintroduces the problem of enforcing RI. If the values you're storing really are generic, like code/description pairs, you just need a codeSetID field to identify which codes belong to which code sets.
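A sketch of that generic code-set shape, with assumed names and types:
CREATE TABLE CodeSet (
    CodeSetId INT PRIMARY KEY,
    Name      VARCHAR(50) NOT NULL  -- e.g. 'CITY', 'COUNTRY', 'TRAVEL'
);
CREATE TABLE Code (
    CodeSetId   INT NOT NULL REFERENCES CodeSet (CodeSetId),
    CodeId      INT NOT NULL,
    Description VARCHAR(100) NOT NULL,
    PRIMARY KEY (CodeSetId, CodeId)
);
-- A preference row references a (code set, code) pair, so the FK
-- guarantees the value exists in the right code set.
CREATE TABLE UserPreferenceCode (
    UserId    INT NOT NULL,
    CodeSetId INT NOT NULL,
    CodeId    INT NOT NULL,
    PRIMARY KEY (UserId, CodeSetId, CodeId),
    FOREIGN KEY (CodeSetId, CodeId) REFERENCES Code (CodeSetId, CodeId)
);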

Does taking advantage of dynamic columns in Cassandra require duplicated data in each row?

I've been trying to understand how one would model time series data in Cassandra, as shown in the image below from a popular System Design Interview video, where counts of views are stored hourly.
While I would think the schema for this time series data would be something like the below, I don't believe this would lead to data actually being stored in the way the screenshot shows.
CREATE TABLE views_data (
    video_id uuid,
    channel_name varchar,
    video_name varchar,
    viewed_at timestamp,
    count int,
    PRIMARY KEY (video_id, viewed_at)
);
Instead, I'm assuming it would lead to something like this (inspired by datastax), where technically there is a single row for each video_id, but the other columns, such as channel_name and video_name, seem like they would all be duplicated within the row for each unique viewed_at.
[cassandra-cli]
list views_data;
RowKey: A
=> (channel_name='System Design Interview', video_name='Distributed Cache', count=2, viewed_at=1370463146717000)
=> (channel_name='System Design Interview', video_name='Distributed Cache', count=3, viewed_at=1370463282090000)
=> (channel_name='System Design Interview', video_name='Distributed Cache', count=8, viewed_at=1370463282093000)
-------------------
RowKey: B
=> (channel_name='Some other channel', video_name='Some video', count=4, viewed_at=1370463282093000)
I assume this is still considered a dynamic wide row, as we're able to expand the row for each unique (video_id, viewed_at) combination. But it seems less than ideal that we need to duplicate the extra information such as channel_name and video_name.
Is the screenshot of modeling time series data misleading or is it actually possible to have dynamic columns where certain columns in the row do not need to be duplicated?
If I was upserting time series data to this row, I wouldn't want to have to provide the channel_name and video_name for every single upsert, I would just want to provide the count.
No, it is not necessary to duplicate the values of columns within the rows of a partition. It is possible to model your table to accommodate your use case.
In Cassandra, there is a concept of "static columns" -- columns which have the same value for all rows within a partition.
Here's the schema of an example table that contains two static columns, colour and item:
CREATE TABLE statictbl (
    pk int,
    ck text,
    c int,
    colour text static,
    item text static,
    PRIMARY KEY (pk, ck)
);
In this table, all rows within a partition share the same colour and item. For example, partition pk=1 has colour='red' and item='apple' for all of its rows:
pk | ck | colour | item | c
----+----+--------+--------+----
1 | a | red | apple | 12
1 | b | red | apple | 23
1 | c | red | apple | 34
If I insert a new partition pk=2:
INSERT INTO statictbl (pk, ck, colour, item, c) VALUES (2, 'd', 'yellow', 'banana', 45);
we get:
pk | ck | colour | item | c
----+----+--------+--------+----
2 | d | yellow | banana | 45
If I then insert another row withOUT specifying a colour and item:
INSERT INTO statictbl (pk, ck, c) VALUES (2, 'e', 56);
the new row with ck='e' still has the colour and item populated even though I didn't insert a value for them:
pk | ck | colour | item | c
----+----+--------+--------+----
2 | d | yellow | banana | 45
2 | e | yellow | banana | 56
In your case, if you declare the channel and video names as static, they will share the same value for all rows in a given partition, and you only ever need to insert them once. Note that when you update the value of a static column, ALL the rows in that partition will reflect the updated value.
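Applied to the table in the question, a sketch might look like this (the duplicated columns become static; the UUID and timestamp values are just placeholders):
CREATE TABLE views_data (
    video_id     uuid,
    viewed_at    timestamp,
    count        int,
    channel_name text static,
    video_name   text static,
    PRIMARY KEY (video_id, viewed_at)
);
-- Insert the static values once per partition...
INSERT INTO views_data (video_id, channel_name, video_name)
VALUES (62c36092-82a1-3a00-93d1-46196ee77204, 'System Design Interview', 'Distributed Cache');
-- ...then each hourly upsert only needs to supply the count.
INSERT INTO views_data (video_id, viewed_at, count)
VALUES (62c36092-82a1-3a00-93d1-46196ee77204, '2023-05-01 13:00:00+0000', 8);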
For details, see Sharing a static column in Cassandra. Cheers!

Constraint on two foreign keys

I have created a database to store data points in an ACID approach. One table holds the data points themselves and has a foreign key to the source of the data. There are multiple sources per geographic location, so I have another table for those locations. From the datasources I have a foreign key on those locations and everything is well.
Now my boss asked me to create a new column in the sensor data to also link to the location for easier debugging and performance reasons. But I still want the database to check that it won't be possible to insert a datapoint for a specific data source and a location where the source and the location don't match up.
It would be even nicer if the database would automatically generate the location ID based on the provided sourceid.
Data:
ID | SrcID | LocID | MoreColumns
---+-------+-------+------------
1 | 1 | 1 | X - OK
2 | 1 | 2 | X - Bad, DataSource is not at location 2
3 | 4 | 4 | X - Bad, Location 4 doesn't exist, but is caught by FK-Constraints
Src:
ID | LocID | MoreColumns
---+-------+------------
1 | 1 | X
2 | 1 | X
3 | 2 | X
Loc:
ID | MoreColumns
---+------------
1 | X
2 | X
How can I achieve data consistency here?
With FKs it checks that Loc and Src exist, but how can I make sure that the location a data point refers to is the same location its source refers to?
I solved the problem by making the primary key in Src a composite primary key (ID, LocID), and then having a foreign key in Data from (SrcID, LocID) to that composite key in Src.
To make sure only unique IDs are given out for Src, I created an additional unique index on ID in Src.
The foreign key from Src to Loc stays as it is.
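A sketch of that arrangement, with types assumed from the example:
CREATE TABLE Loc (
    ID INT PRIMARY KEY
    -- MoreColumns
);
CREATE TABLE Src (
    ID    INT NOT NULL UNIQUE,  -- plain FKs to Src.ID remain possible
    LocID INT NOT NULL REFERENCES Loc (ID),
    PRIMARY KEY (ID, LocID)
    -- MoreColumns
);
CREATE TABLE Data (
    ID    INT PRIMARY KEY,
    SrcID INT NOT NULL,
    LocID INT NOT NULL,
    -- MoreColumns
    FOREIGN KEY (SrcID, LocID) REFERENCES Src (ID, LocID)
);
A row in Data can now only combine a SrcID with the LocID actually recorded for that source.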

Ensuring that two column values are related in SQL Server

I'm using Microsoft SQL Server 2017 and am curious about how to constrain a specific relationship. I'm having a bit of trouble articulating it, so I'd prefer to share an example.
Consider the following hypothetical database.
Customers
+---------------+
| Id | Name |
+---------------+
| 1 | Sam |
| 2 | Jane |
+---------------+
Addresses
+----------------------------------------+
| Id | CustomerId | Address |
+----------------------------------------+
| 1 | 1 | 105 Easy St |
| 2 | 1 | 9 Gale Blvd |
| 3 | 2 | 717 Fourth Ave |
+----------------------------------------+
Orders
+-----------------------------------+
| Id | CustomerId | AddressId |
+-----------------------------------+
| 1 | 1 | 1 |
| 2 | 2 | 3 |
| 3 | 1 | 3 | <--- Invalid Customer/Address Pair
+-----------------------------------+
Notice that the final Order links a customer to an address that isn't theirs. I'm looking for a way to prevent this.
(You may ask why I need the CustomerId in the Orders table at all. To be clear, I recognize that the Address already offers me the same information without the possibility of invalid pairs. However, I'd prefer to have an Order flattened such that I don't have to channel through an address to retrieve a customer.)
From the related reading I was able to find, it seems that one method may be to enable a CHECK constraint targeting a User-Defined Function. This User-Defined Function would be something like the following:
WHERE EXISTS (SELECT 1 FROM Addresses WHERE Id = Order.AddressId AND CustomerId = Order.CustomerId)
While I imagine this would work, given the somewhat "generality" of the articles I was able to find, I don't feel entirely confident that this is my best option.
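For concreteness, here is a sketch of how that might be wired up in T-SQL; the function and constraint names are invented:
CREATE FUNCTION dbo.AddressBelongsToCustomer (@AddressId INT, @CustomerId INT)
RETURNS BIT
AS
BEGIN
    -- 1 if the address exists and belongs to the customer, else 0
    IF EXISTS (SELECT 1 FROM dbo.Addresses
               WHERE Id = @AddressId AND CustomerId = @CustomerId)
        RETURN 1;
    RETURN 0;
END;
GO
ALTER TABLE dbo.Orders WITH CHECK
    ADD CONSTRAINT CK_Orders_AddressMatchesCustomer
    CHECK (dbo.AddressBelongsToCustomer(AddressId, CustomerId) = 1);
One caveat: the constraint is only evaluated when rows in Orders are inserted or updated, not when the matching Addresses row later changes.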
An alternative might be to remove the CustomerId column from the Addresses table entirely, and instead add another table with Id, CustomerId, AddressId. The Order would then reference this Id instead. Again, I don't love the idea of having to channel through an auxiliary table to get a Customer or Address.
Is there a cleaner way to do this? Or am I simply going about this all wrong?
Good question; however, at the root it seems you are struggling with creating a foreign key constraint to something that is not a key:
Orders.CustomerId -> Addresses.CustomerId
There is no simple built-in way to do this because it is normally not done. In ideal RDBMS practice you should strive to keep data of each specific type in its own table only; in other words, avoid redundant data.
In the example above, address ownership is redundant in both the Addresses table and the Orders table, and because of this it requires additional checks to keep the two synchronized. This can easily get out of hand with bigger datasets.
You mentioned:
However, I'd prefer to have an Order flattened such that I don't have to channel through an address to retrieve a customer.
But that is why a relational database is relational: distinct data is kept distinct and referenced by IDs.
I think the best solution would be to simply drop this requirement.
In other words, just go with:
Customers
+---------------+
| Id | Name |
+---------------+
| 1 | Sam |
| 2 | Jane |
+---------------+
Addresses
+----------------------------------------+
| Id | CustomerId | Address |
+----------------------------------------+
| 1 | 1 | 105 Easy St |
| 2 | 1 | 9 Gale Blvd |
| 3 | 2 | 717 Fourth Ave |
+----------------------------------------+
Orders
+--------------------+
| Id | AddressId |
+--------------------+
| 1 | 1 |
| 2 | 3 |
| 3 | 3 | <--- Valid Order/Address Pair
+--------------------+
With that said, to accomplish your purpose exactly, you do have views available for this kind of thing:
create view CustomerOrders
as
select o.Id OrderId,
       a.CustomerId,
       o.AddressId
from Orders o
join Addresses a on a.Id = o.AddressId
I know this is a pretty trivial use case for a view, but I wanted to put in a plug for them because they are often neglected and come in handy for organizing big data sets. Using WITH SCHEMABINDING, they can also be indexed for performance.
You may ask why I need the CustomerId in the Orders table at all. To be clear, I recognize that the Address already offers me the same information without the possibility of invalid pairs. However, I'd prefer to have an Order flattened such that I don't have to channel through an address to retrieve a customer.
If you face performance problems, the first thing to do is create or amend proper indexes, and DBMSs are usually good at join operations when the indexes are right. Yes, denormalization can sometimes help in performance tuning, but it should be a last resort, and if that route is taken one should really know what one is doing and be very careful not to lose more at the end of the day than one has gained. I doubt that you're out of options here and really need to go down that path; you're likely barking up the wrong tree. I therefore recommend you take the "normal", sane way: just drop customerid from orders and create proper indexes.
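For example, indexes along these lines (names invented) would typically support the customer-to-address-to-order joins:
CREATE INDEX IX_Addresses_CustomerId ON Addresses (CustomerId);
CREATE INDEX IX_Orders_AddressId ON Orders (AddressId);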
But if you really insist, you can make (id, customerid) a key in addresses (with a unique constraint) and then create a foreign key based on that:
ALTER TABLE addresses
    ADD UNIQUE (id, customerid);
ALTER TABLE orders
    ADD FOREIGN KEY (addressid, customerid)
    REFERENCES addresses (id, customerid);

References in a table

I have a table like this, which contains items that are added to the database.
Catalog table example
id | element | catalog
0 | mazda | car
1 | penguin | animal
2 | zebra | animal
etc....
And then I have a table where the user selects items from that table, and I keep a reference to what has been selected, like this:
User table example
id | name | age | itemsSelected
0 | john | 18 | 2;3;7;9
So what I am trying to say is that I keep a reference to what the user has selected as a string of IDs, but this seems a tad troublesome.
When I query for information about a user, all I get is the string 2;3;7;9, when what I really want is an array of the items corresponding to those IDs.
Right now I get the IDs, have to split the string, and then run another query to find the elements those IDs correspond to.
Is there an easier way to do this, if my question is understandable?
Yes, there is a way to do this. You create a third table which contains a map of A to B; this is called a many-to-many relationship.
You keep your Catalogue table (int, varchar(MAX), varchar(MAX)) or similar.
You keep your User table or similar, but remove the last column (itemsSelected) and then create another table:
You create a UserCatalogue table (int UserId, int CatalogueId) with a primary key on both columns. The UserId column then gets a foreign key to User.Id, and the CatalogueId column gets a foreign key to Catalogue.Id. This preserves the relationship and eases queries. It also means that if Catalogue.Id number 22 does not exist, you cannot accidentally insert it as a relation between the two. This is called referential integrity: once you declare "this column must reference this other table", SQL Server will enforce that relationship.
After you create this, you add an entry for each itemsSelected value, i.e.:
UserId | CatalogueId
0 | 2
0 | 3
0 | 7
0 | 9
This also allows you to use JOINs on the tables for faster queries.
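A sketch of that table and a typical query, with assumed types (User is bracketed because it is a reserved word in SQL Server):
CREATE TABLE UserCatalogue (
    UserId      INT NOT NULL REFERENCES [User] (Id),
    CatalogueId INT NOT NULL REFERENCES Catalogue (Id),
    PRIMARY KEY (UserId, CatalogueId)
);
-- All items a given user selected: one join instead of
-- string-splitting followed by a second query.
SELECT c.element, c.catalog
FROM UserCatalogue uc
JOIN Catalogue c ON c.Id = uc.CatalogueId
WHERE uc.UserId = 0;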
Additionally, and unrelated to the question, you can also optimize your Catalogue table a bit: create another table, CatalogueGroup, to hold your last column (catalog: car, animal), and reference it via a foreign key in the current Catalogue table definition. This will also save storage space and speed up SQL Server's work, as it no longer has to read a string column if you only want the element value.

DB Data migration

I have a database table called A, and now I have created a new table called B that takes over some columns of A.
E.g., suppose the tables have the following columns:
Table A // the one that already exists
Id, Country, Age, Firstname, Middlename, Lastname
Table B // the new table I create
Id, Firstname, Middlename, Lastname
Now table A will look like this:
Table A // new table A after the modification
Id, Country, Age, Name
Its Name column will map to table B.
So my problem is that I now need to maintain the reports that were generated before the table modification, and my friend told me I need a data migration. May I know what data migration is and how it works, please?
Thank you.
Update
I forgot to address the reporting issue raised by the OP (thanks, Mark Bannister). Here is a stab at how to deal with reporting.
In the beginning (before data migration) a report to generate the name, country and age of users would use the following SQL (more or less):
-- This query orders users by their Lastname
SELECT Lastname, Firstname, Age, Country FROM tableA ORDER BY Lastname;
The name-related fields are no longer present in tableA after the data migration. We have to perform a join with tableB to get that information. The query now changes to:
SELECT b.Lastname, b.Firstname, a.Country, a.Age FROM tableA a, tableB b
WHERE a.name = b.id ORDER BY b.Lastname;
I don't know how exactly you generate your report but this is the essence of the changes you will have to make to get your reports working again.
Original Answer
Consider the situation when you had only one table (table A). A couple of rows in the table would look like this:
# Picture 1
# Table A
------------------------------------------------------
Id | Country | Age | Firstname | Middlename | Lastname
1 | US | 45 | John | Fuller | Doe
2 | UK | 32 | Jane | Margaret | Smith
After you add the second table (table B), the name-related fields move from table A to table B. Each row of table A gets a foreign key pointing to its corresponding row in table B.
# Picture 2
# Table A
------------------------------------------------------
Id | Country | Age | Name
1 | US | 45 | 10
2 | UK | 32 | 11
# Table B
------------------------------------------------------
Id | Firstname | Middlename | Lastname
10 | John | Fuller | Doe
11 | Jane | Margaret | Smith
This is the final picture. The catch is that the data will not move from table A to table B on its own. Alas human intervention is required to accomplish this. If I were the said human I would follow the steps given below:
Create table B with columns Id, Firstname, Middlename and Lastname. You now have two tables, A and B. A has all the existing data; B is empty.
Add a foreign key to table A. This FK will be called name and will reference the id field of table B.
For each row in table A create a new row in table B using the Firstname, Middlename and Lastname fields taken from table A.
After copying each row, update the name field of table A with the id of the newly created row in table B.
The database now looks like this:
# Table A
-------------------------------------------------------------
Id | Country | Age | Firstname | Middlename | Lastname | Name
1 | US | 45 | John | Fuller | Doe | 10
2 | UK | 32 | Jane | Margaret | Smith | 11
# Table B
------------------------------------------------------
Id | Firstname | Middlename | Lastname
10 | John | Fuller | Doe
11 | Jane | Margaret | Smith
Now you no longer need the Firstname, Middlename and Lastname columns in table A so you can drop them.
Voilà, you have performed a data migration!
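As a concrete sketch, the steps above could look like this in SQL (generic syntax; adjust for your DBMS). One simple way to keep rows correlated during the copy is to reuse A's Id values as B's Ids; the example above used fresh ids 10 and 11 instead, but the principle is the same:
-- 1. Create the new table.
CREATE TABLE tableB (
    Id         INT PRIMARY KEY,
    Firstname  VARCHAR(100),
    Middlename VARCHAR(100),
    Lastname   VARCHAR(100)
);
-- 2. Add the foreign-key column to table A.
ALTER TABLE tableA ADD Name INT REFERENCES tableB (Id);
-- 3. Copy the name fields into B, one row per row of A.
INSERT INTO tableB (Id, Firstname, Middlename, Lastname)
SELECT Id, Firstname, Middlename, Lastname FROM tableA;
-- 4. Point each A row at its B row.
UPDATE tableA SET Name = Id;
-- 5. Drop the moved columns from A.
ALTER TABLE tableA DROP COLUMN Firstname;
ALTER TABLE tableA DROP COLUMN Middlename;
ALTER TABLE tableA DROP COLUMN Lastname;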
The process I just described above is but a specific example of a data migration. You can accomplish it in a number of ways using a number of languages/tools. The choice of mechanism will vary from case to case.
Maintenance of the existing reports will depend on the tools used to write / generate those reports. In general:
Identify the existing reports that used table A. (Possibly by searching for files that have the name of table A inside them - however, if table A has a name [eg. Username] which is commonly used elsewhere in the system, this could return a lot of false positives.)
Identify which of those reports used the columns that have been removed from table A.
Amend the existing reports to return the moved columns from table B instead of table A.
A quick way to achieve this is to create a database view that mimics the old structure of table A, and amend the affected reports to use the database view instead of table A. However, this adds an extra layer of complexity into maintaining the reports (since developers may need to maintain the database view as well as the reports) and may be deprecated or even blocked by the DBAs - consequently, I would only recommend using this approach if a lot of existing reports are affected.
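For the view-based approach, a sketch (view name invented) that mimics the old shape of table A:
CREATE VIEW tableA_legacy AS
SELECT a.Id, a.Country, a.Age,
       b.Firstname, b.Middlename, b.Lastname
FROM tableA a
JOIN tableB b ON b.Id = a.Name;
Affected reports can then be pointed at tableA_legacy with no other changes.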
