I have to implement a database using B-trees for a school project. The database stores audio files (songs), and a number of different queries can be made, like asking for all the songs of a given artist or for a specific album.
The intuitive idea is to use one B-tree for each field (songs, albums, artists, ...). The problem is that one can be asked to delete any member of any field, and in case you delete an artist, you have to delete all of his albums and songs from the other B-trees, keeping in mind that, for example, all the songs of a given artist don't have to be near each other in the songs B-tree.
My question is: is there a way to do this (delete the songs after an artist has been deleted) without having to iterate over all elements of the other B-trees? I'm not looking for code, just ideas, because all the ones I've come up with are brute-force ones.
This is my understanding and may not be entirely right.
Typically in a database implementation, B-trees are used for indexes, so unless you want to force your user to index every column, defaulting to creating a B-tree for each field is unnecessary. Although this many indexes will lead to a fast read in virtually every case (with an index on everything, you won't have to do a full table scan), it will also cause extremely slow inserts/updates/deletes, as the corresponding data has to be updated in each tree. As I'm sure you know, modern databases force you to have at least one index (the primary key), so you will have at least one B-tree keyed on the primary key, with a pointer to the appropriate record.
Every node in a B Tree index should have a pointer/reference to the full object it represents.
Future indexes you create would include the attributes you specify in the index, such as song name, artist, etc., but will still contain the pointer/reference to the corresponding record. Thus when you modify, let's say, the song title, you will want to modify the referenced record, which all the indexes point to. If any index has the modified attribute as one of its keys, you will have to modify the values in that index itself.
Unfortunately, I believe you are correct that you will have to brute-force your way through the other B-trees when deleting/updating; this is one of the downsides of using a lot of indexes (slower update/delete times). If you just delete the referenced records, you will likely end up with pointers to deleted objects, which will (depending on your language) give you some form of NullPointerException. To prevent this, the references have to be removed from all the trees.
Keep in mind though that doing a full scan of your indexes will still be much better than doing full table scans.
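To make the bookkeeping concrete, here is a toy Python sketch (plain dicts standing in for B-trees; all names are illustrative, not from any real database) of the delete cascade described above. Note that when a secondary index is keyed on an attribute stored in the record itself (album here), the entry to remove can be found by direct lookup; an index keyed on something not stored in the record would need a scan.

```python
# Toy sketch (not a real B-tree): a primary store plus two secondary
# indexes, to illustrate why a delete must touch every index.

songs = {}       # primary "index": song_id -> record
by_artist = {}   # secondary index: artist -> set of song_ids
by_album = {}    # secondary index: album -> set of song_ids

def insert(song_id, title, artist, album):
    songs[song_id] = {"title": title, "artist": artist, "album": album}
    by_artist.setdefault(artist, set()).add(song_id)
    by_album.setdefault(album, set()).add(song_id)

def delete_artist(artist):
    # The artist index makes finding the victims cheap; each record's
    # stored attributes tell us where its other index entries live.
    for song_id in by_artist.pop(artist, set()):
        record = songs.pop(song_id)
        by_album[record["album"]].discard(song_id)

insert(1, "Song A", "Artist X", "Album 1")
insert(2, "Song B", "Artist X", "Album 1")
insert(3, "Song C", "Artist Y", "Album 2")
delete_artist("Artist X")
print(sorted(songs))   # only Artist Y's song remains
```

A real implementation would replace the dicts with B-trees holding (key, record-pointer) pairs, but the delete path is the same.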
I am building a database for a christmas tree growing operation. I have put together, what I believe to be, a workable schema. I am hoping to get some feedback from someone, and I have no one. You are my only hope.
So, there are 3 growing plots; we will call them Orchards. Each Orchard has rows and columns, and each row/column intersection can have zero or one trees planted in it. The rows/columns are numbers and letters, so row 3, column f, etc. Each row/column intersection has a status (empty, in use). A tree can be a different species (denoted by a manually created GID {Genetic ID}), be modified (have a different species grafted on), or be moved to a different location. So a plant can have one or many locations, and a location can contain, through its history, one or many trees, but only one at a time.
Here is a diagram I put together:
So I was thinking, for historical purposes, I would use the treelocation table. Do you think it is unnecessary?
No, but in that case you should keep the information pertaining to the tree's location in the treelocation table. For instance, "MovedYear". If a tree moves multiple times, don't you want to keep the year of each move, instead of just one MovedYear for each tree?
It's fine to have a history table the way you do, but right now, if TreeId 1 has been in 3 different locations, how could you query your database to see which location it's in NOW? All you'll see is:
TreeId  LocationId
1       1
1       2
1       3
You won't know in what order the moves took place. (Unless you have some business rule that says trees can only move from 1 to 2 and from 2 to 3, and never follow any other order).
The usual way to solve this is to have a StartDate and EndDate in the history table.
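A minimal sqlite3 sketch of that idea, with illustrative column names (a NULL EndDate meaning "still here"), so the current location falls out of a simple query:

```python
# Sketch of a history table with StartDate/EndDate; names are
# illustrative, not taken from the asker's actual schema.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE treelocation (
        TreeId     INTEGER,
        LocationId INTEGER,
        StartDate  TEXT,
        EndDate    TEXT          -- NULL means "still here"
    )""")
rows = [(1, 1, "2019-04-01", "2020-04-01"),
        (1, 2, "2020-04-01", "2022-04-01"),
        (1, 3, "2022-04-01", None)]
con.executemany("INSERT INTO treelocation VALUES (?,?,?,?)", rows)

# Current location: the one open-ended row.
cur = con.execute(
    "SELECT LocationId FROM treelocation "
    "WHERE TreeId = ? AND EndDate IS NULL", (1,))
print(cur.fetchone()[0])   # 3
```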
It seems
A plant can have one or many locations
No, a plant has one location, but it can move.
To achieve this we need to:
Have a location foreign key (FK) inside the Tree table, showing the tree's current location.
This FK needs to be mandatory (expressing the "has-a" relationship).
To prevent multiple trees having the same location, we need a unique key constraint on this FK column.
A plant can move, so to trace a plant's location history
We will need a plant-location-history table
Each row/column intersection has a status (empty, in use)
So the intersection's status can have predefined, limited values.
Do we need a LocationStatus table? I don't think so. Status can be a plain column in the location table with a check constraint (1 = empty, 2 = in-use, etc.)
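Sketched in sqlite3, using the numeric codes suggested above (check-constraint syntax varies slightly between databases):

```python
# Sketch: a status column constrained by CHECK instead of a lookup
# table. Codes: 1 = empty, 2 = in-use (illustrative).
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE location (
        LocationId INTEGER PRIMARY KEY,
        Status     INTEGER NOT NULL CHECK (Status IN (1, 2))
    )""")
con.execute("INSERT INTO location VALUES (1, 1)")      # empty: fine
try:
    con.execute("INSERT INTO location VALUES (2, 9)")  # unknown code
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```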
I am writing an application that uses CC-CEDICT, a CC licensed Chinese-English dictionary.
The dictionary is available only as a zipped text file (4MB) with entries in the following format:
Traditional Simplified [pin1 yin1] /English equivalent 1/equivalent 2/
This is sample data:
是 是 [shi4] /is/are/am/yes/to be/
昰 是 [shi4] /variant of 是[shi4]/used in given names/
時 时 [Shi2] /surname Shi/
時 时 [shi2] /o'clock/time/when/hour/season/period/
I chose those lines deliberately to illustrate my problem. The data has no discernible key by which an individual word can be identified.
The English definitions can and do change, as the dictionary is constantly updated. Suppose that in one update the two definitions of 時 时 change, so the next download contains the lines:
時 时 [Shi2] /last name Shi/
時 时 [shi2] /o'clock/time period/when/hour/season/
How am I to tell which records have been updated? This is really noticeable when the translation is a single word that changes completely.
I am after a strategy for how I can key this dictionary. So far my best idea is to take (Traditional, Simplified) as the key, and treat the duplicates as a special case - in their own table, perhaps?
The issue is one of perspective.
You say that your records have no key, but in fact the whole record is the key - assuming you have no identical duplicate records.
Therefore there are no updates, only inserts and deletes.
You can track which records are deleted and which are inserted in order to highlight changes in your dictionary.
If you really want to treat definition replacements as updates, then you're going to have to come up with a scheme that (a) creates a unique key for records and (b) allows you to recognize when a new definition list should be considered a modification of an existing definition list.
Part (a) is easy, add your own surrogate key. This could be unique across all definitions, or just across combinations of (Simplified, Traditional).
Part (b) is harder. At what point do you say "surname Shi" is related to "last name Shi"? I suggest coming up with some kind of text comparison function that yields a numeric score. Pick a threshold for this score at which you call it an update instead of a delete and insert. This will be arbitrary, but you might find that two people could disagree over what is an update and what isn't from one case to another anyway.
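As one concrete (and assumed, not prescribed) choice of scoring function, Python's difflib yields a ratio between 0.0 and 1.0; the 0.6 cutoff below is arbitrary, as any threshold here will be:

```python
# Sketch of the "numeric similarity score with a threshold" idea,
# using difflib.SequenceMatcher as an example scoring function.
from difflib import SequenceMatcher

def similarity(a, b):
    return SequenceMatcher(None, a, b).ratio()   # 0.0 .. 1.0

old, new = "surname Shi", "last name Shi"
score = similarity(old, new)
if score >= 0.6:                 # arbitrary cutoff
    print("treat as update")
else:
    print("treat as delete + insert")
```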
This is not a solution, but may give some ideas for you (or others).
How about modeling this as a hierarchy: Word -> Meaning -> Translation?
Compute a hash of each translation, sum the hash values of all translations and store that in the corresponding "meaning" record, then sum the hash values of all meanings and store that in the Word record. (Yes, this is denormalized.)
You would have to recompute all the hash values for all records in the file every time. Then you can simply compare the currently stored hash value for a word with the hash value you just computed. If they are different, something has changed: there was a new meaning, or a new translation, or a translation was removed, etcetera. You could then remove the word entirely (cascaded) and re-insert the new subtree. If you want to complicate things, you could also descend into the hierarchy and try to detect exactly what changed.
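A rough Python sketch of the rolled-up hashes; hashlib is used so the values are stable across runs (Python's built-in hash() is salted per process), and the data layout is illustrative:

```python
# Sketch of the hierarchical hash rollup: translation hashes sum into
# a meaning hash, meaning hashes sum into a word hash.
import hashlib

MASK = 0xFFFFFFFFFFFFFFFF   # keep sums in 64 bits

def h(text):
    return int.from_bytes(hashlib.sha256(text.encode()).digest()[:8], "big")

def meaning_hash(translations):
    return sum(h(t) for t in translations) & MASK

def word_hash(meanings):
    return sum(meaning_hash(m) for m in meanings) & MASK

old = word_hash([["surname Shi"], ["o'clock", "time", "when"]])
new = word_hash([["last name Shi"], ["o'clock", "time", "when"]])
print(old != new)   # a changed translation changes the word's hash
```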
This might be deleted, since it involves idea sharing, which is not quite allowed on Stack Overflow, but before that happens, if I can get any ideas from solid programmers, it will be a win for me.
Assume that you have a class Student, stored in the database, and this class has a list property called favoriteTeachers. This list constantly gets updated by the system and involves the id of teachers.
You also have a class Teacher, also stored in the database, which likewise has a list property favouriteStudents. It is again updated constantly and contains the ids of students.
In our system, when a student calls a function (let's say notMyFavoriteTeacher), our system has to apply the changes below:
Delete the given teacher's id from the student's favoriteTeachers list
Delete the student's id from the given teacher's favouriteStudents list
Since I was worried that the number of row updates could exhaust the database, instead of mapping students to their favorite teachers in a separate table as (user_id, teacher_id), I created a column storing a string which contains the teachers' ids separated by commas (e.g. "1,2,14,4,25"). The same applies to the teacher as well.
However, when we call this function, we face another problem. For this operation to be done, you need to convert the string to a list, find the element by linear search, delete it, then convert the list back to a string and push it to the db. And you have to do the same for the teacher class as well. If we did not use the string method, deletion would be easier, but since we would be handling deletion and addition operations around 2k times a day, I did not think separate tables would be feasible.
I wanted to ask: in order to decrease the number of operations, could a data structure be chosen that would increase efficiency?
Storing a relation as an array in a single column is a violation of first normal form and should not be done without good reason. Although various forms of denormalization may result in increased efficiency in some cases, I don't see this case being one of those. What's worse, you'll get no help from the database in enforcing referential integrity. And some operations will result in guaranteed row scans: when deleting a teacher, you will have to examine every student row to remove the teacher from each student's favorites list. The same goes for deleting a student.
Relational databases are designed and built to link rows to other rows. You need a very good reason to keep them from doing what they're designed to do. You should go ahead and design a proper relational schema, and only if actual measurement shows that it is too slow should you worry about its performance.
First of all, I don't understand your choice of storing the ids of favorite teachers/students as comma-separated strings, because whether you use comma-separated values or a (studentId, teacherId) table, you do exactly two row updates/deletes (first for the favorite-teachers side, second for the favorite-students side).
But one way of optimizing performance, given your current data structure, would be to keep the comma-separated strings sorted. I mean, from the very formation of the rows, keep your comma-separated ids like "1,5,7,15". This way, after converting the string to a list, you can perform a binary search, which takes O(log n) time instead of O(n).
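A small sketch of that, using Python's bisect; note that only the search improves to O(log n) - the string split/join and the list deletion are still O(n):

```python
# Sketch: delete an id from a sorted comma-separated id string
# using binary search to locate it.
from bisect import bisect_left

def remove_id(csv_ids, target):
    ids = [int(x) for x in csv_ids.split(",")]   # already sorted
    i = bisect_left(ids, target)                 # O(log n) search
    if i < len(ids) and ids[i] == target:
        del ids[i]                               # the delete itself is O(n)
    return ",".join(str(x) for x in ids)

print(remove_id("1,5,7,15", 7))
```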
You are losing all the benefits provided by any RDBMS by storing it as a comma-separated string. Create a separate table with student_id and favorite teacher_id. Apply filtering conditions (either for student or for teacher) before joining it to the main tables.
Specifically a Multigraph.
Some colleague suggested this and I'm completely baffled.
Any insights on this?
It's pretty straightforward to store a graph in a database: you have a table for nodes, and a table for edges, which acts as a many-to-many relationship table between the nodes table and itself. Like this:
create table node (
    id integer primary key
);

create table edge (
    start_id integer references node,
    end_id integer references node,
    primary key (start_id, end_id)
);
However, there are a couple of sticky points about storing a graph this way.
Firstly, the edges in this scheme are naturally directed - the start and end are distinct. If your edges are undirected, then you will either have to be careful in writing queries, or store two entries in the table for each edge, one in each direction (and then be careful writing queries!). If you store a single edge, I would suggest normalising the stored form - perhaps always consider the node with the lower ID to be the start (and add a check constraint to the table to enforce this). You could have a genuinely unordered representation by not having the edges refer to the nodes, but rather having a join table between them, but that doesn't seem like a great idea to me.
Secondly, the schema above has no way to represent a multigraph. You can extend it easily enough to do so; if edges between a given pair of nodes are indistinguishable, the simplest thing would be to add a count to each edge row, saying how many edges there are between the referred-to nodes. If they are distinguishable, then you will need to add something to the edge table to allow them to be distinguished - an autogenerated edge ID might be the simplest thing.
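Both fixes can be sketched together in sqlite3 (sqlite dialect assumed): a check constraint enforcing the canonical lowest-id-first form for undirected edges, and an autogenerated edge id so parallel edges stay distinguishable:

```python
# Sketch: canonical undirected edges plus a surrogate edge id for
# multigraph support.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE node (id INTEGER PRIMARY KEY);
    CREATE TABLE edge (
        edge_id  INTEGER PRIMARY KEY,   -- distinguishes parallel edges
        start_id INTEGER REFERENCES node,
        end_id   INTEGER REFERENCES node,
        CHECK (start_id <= end_id)      -- canonical undirected form
    );
""")
con.executemany("INSERT INTO node VALUES (?)", [(1,), (2,)])
con.execute("INSERT INTO edge (start_id, end_id) VALUES (1, 2)")
con.execute("INSERT INTO edge (start_id, end_id) VALUES (1, 2)")  # parallel edge
try:
    con.execute("INSERT INTO edge (start_id, end_id) VALUES (2, 1)")
except sqlite3.IntegrityError:
    print("non-canonical edge rejected")
print(con.execute("SELECT COUNT(*) FROM edge").fetchone()[0])
```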
However, even having sorted out the storage, you have the problem of working with the graph. If you want to do all of your processing on objects in memory, and the database is purely for storage, then no problem. But if you want to do queries on the graph in the database, then you'll have to figure out how to do them in SQL, which doesn't have any inbuilt support for graphs, and whose basic operations aren't easily adapted to work with graphs. It can be done, especially if you have a database with recursive SQL support (PostgreSQL, Firebird, some of the proprietary databases), but it takes some thought. If you want to do this, my suggestion would be to post further questions about the specific queries.
It's an acceptable approach. You need to consider how that information will be manipulated. More than likely you'll need a language separate from your database to do the kinds of graph-related computations this type of data implies. Skiena's Algorithm Design Manual has an extensive section on graph data structures and their manipulation.
Without considering what types of queries you might execute, start with two tables: vertices and edges. Vertices are simple: an identifier and a name. Edges are complex, given the multigraph. Edges should be uniquely identified by a combination of two vertices (i.e. foreign keys) and some additional information. The additional information depends on the problem you're solving. For instance, for flight information: the departure and arrival times and the airline. Furthermore, you'll need to decide if the edge is directed (i.e. one-way) or not, and keep track of that information as well.
Depending on the computation you may end up with a problem that's better solved with some sort of artificial intelligence / machine learning algorithm. For instance, optimal flights. The book Programming Collective Intelligence has some useful algorithms for this purpose. But where the data is kept doesn't change the algorithm itself.
Well, the information has to be stored somewhere, a relational database isn't a bad idea.
It would just be a many-to-many relationship: a table of nodes, and a table of edges/connections.
Consider how Facebook might implement the social graph in their database. They might have a table for people and another table for friendships. The friendships table has at least two columns, each being foreign keys to the table of people.
Since friendship is symmetric (on Facebook) they might ensure that the ID for the first foreign key is always less than the ID for the second foreign key. Twitter has a directed graph for its social network, so it wouldn't use a canonical representation like that.
How to design data storage for huge tagging system (like digg or delicious)?
There is already discussion about this, but it is about a centralized database. Since the data is expected to grow, we'll need to partition the data into multiple shards sooner or later. So the question becomes: how to design data storage for a partitioned tagging system?
The tagging system basically has 3 tables:
Item (item_id, item_content)
Tag (tag_id, tag_title)
TagMapping(map_id, tag_id, item_id)
That works fine for finding all items for a given tag and finding all tags for a given item, if the table is stored in one database instance. If we need to partition the data across multiple database instances, it is not that easy.
For the Item table, we can partition its content by its key item_id. For the Tag table, we can partition by its key tag_id. For example, say we want to partition the Tag table across K databases: we can simply choose database number (tag_id % K) to store a given tag.
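The mod-K routing described above is one line of code per table; a sketch with an arbitrary K = 4:

```python
# Sketch of mod-K shard routing for the Tag and Item tables.
K = 4   # number of database shards (arbitrary for this example)

def tag_shard(tag_id):
    # Database number that stores (and is queried for) this tag.
    return tag_id % K

def item_shard(item_id):
    return item_id % K

print(tag_shard(10), tag_shard(11), item_shard(14))
```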
But, how to partition table TagMapping?
The TagMapping table represents the many-to-many relationship. I can only imagine having duplication: the same content of TagMapping has two copies, one partitioned by tag_id and the other partitioned by item_id. To find tags for a given item, we use the copy partitioned by item_id. To find items for a given tag, we use the copy partitioned by tag_id.
As a result, there is data redundancy, and the application level has to keep all the copies consistent. That looks hard.
Is there any better solution to solve this many-to-many partition problem?
I doubt there is a single approach that optimizes all possible usage scenarios. As you said, there are two main scenarios that the TagMapping table supports: finding tags for a given item, and finding items with a given tag. I think there are some differences in how you will use the TagMapping table for each scenario that may be of interest. I can only make reasonable assumptions based on typical tagging applications, so forgive me if this is way off base!
Finding Tags for a Given Item
A1. You're going to display all of the tags for a given item at once
A2. You're going to ensure that all of an item's tags are unique
Finding Items for a Given Tag
B1. You're going to need some of the items for a given tag at a time (to fill a page of search results)
B2. You might allow users to specify multiple tags, so you'd need to find some of the items matching multiple tags
B3. You're going to sort the items for a given tag (or tags) by some measure of popularity
Given the above, I think a good approach would be to partition TagMapping by item. This way, all of the tags for a given item are on one partition. Partitioning can be more granular, since there are likely far more items than tags and each item has only a handful of tags. This makes retrieval easy (A1) and uniqueness can be enforced within a single partition (A2). Additionally, that single partition can tell you if an item matches multiple tags (B2).
Since you only need some of the items for a given tag (or tags) at a time (B1), you can query partitions one at a time in some order until you have as many records as needed to fill a page of results. How many partitions you will have to query will depend on how many partitions you have, how many results you want to display and how frequently the tag is used. Each partition would have its own index on tag_id to answer this query efficiently.
The order you pick partitions in will be important as it will affect how search results are grouped. If ordering isn't important (i.e. B3 doesn't matter), pick partitions randomly so that none of your partitions get too hot. If ordering is important, you could construct the item id so that it encodes information relevant to the order in which results are to be sorted. An appropriate partitioning scheme would then be mindful of this encoding. For example, if results are URLs that are sorted by popularity, then you could combine a sequential item id with the Google Page Rank score for that URL (or anything similar). The partitioning scheme must ensure that all of the items within a given partition have the same score. Queries would pick partitions in score order to ensure more popular items are returned first (B3). Obviously, this only allows for one kind of sorting and the properties involved should be constant since they are now part of a key and determine the record's partition. This isn't really a new limitation though, as it isn't easy to support a variety of sorts, or sorts on volatile properties, with partitioned data anyways.
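The partition-at-a-time paging can be sketched like this (dicts standing in for the per-partition tag_id indexes; the ids and tags are made up):

```python
# Sketch: fill a page of results by visiting partitions in order
# and stopping as soon as the page is full.
partitions = [
    {"python": [101, 102]},           # partition 0
    {"python": [201], "sql": [202]},  # partition 1
    {"python": [301, 302, 303]},      # partition 2
]

def page_for_tag(tag, page_size):
    results = []
    for part in partitions:           # pick an order; could be random
        results.extend(part.get(tag, []))
        if len(results) >= page_size: # stop once the page is full
            break
    return results[:page_size]

print(page_for_tag("python", 4))
```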
The rule is that you partition by the field you are going to query by; otherwise you'll have to look through all partitions. Are you sure you'll need to query the Tag table by tag_id only? I believe not; you'll also need to query by tag title. It's not so obvious for the Item table, but you'd probably also like to query by something like URL, to find the item_id when another user assigns tags to the same item.
But note that the Tag and Item tables have immutable title and URL. That means you can use the following technique:
Choose the partition from the title (for Tag) or the URL (for Item).
Choose a sequence within this partition to generate the id.
You either use the (partition, local id) pair as a global identifier, or use non-overlapping number ranges. Either way, you can now compute the partition from both the id and the title/URL fields. Don't know the number of partitions in advance, or worried it might change in the future? Create more of them and join them into groups, so that you can regroup them later.
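A sketch of computing the partition from the immutable title and treating the (partition, local id) pair as the global identifier; md5 is used only because it is stable across runs, and K = 8 groups is arbitrary:

```python
# Sketch: derive the partition from an immutable field, so it can be
# recomputed from either the title or the global id later.
import hashlib

K = 8   # number of partition groups (arbitrary)

def partition_of(title):
    digest = hashlib.md5(title.encode()).digest()
    return digest[0] % K

def global_id(title, local_id):
    # local_id would come from the partition's own sequence.
    return (partition_of(title), local_id)

# The same title always routes to the same partition.
print(global_id("music", 42))
```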
Sure, you can't do the same for the TagMapping table, so you have to duplicate it. You need to query it by map_id, by tag_id, and by item_id, right? Even without partitioning, you'd duplicate the data by creating three indexes. The difference is just that you use a different partitioning (by a different field) for each copy. I see no reason to worry about it.
Most likely your queries are going to be related to a user or a topic, meaning that you should keep all the info related to those in one place.
You're talking about distribution of the DB; usually this is mostly an issue of synchronization. Reading, which is usually about 90% of the work, can be done on a replicated database. The issue is how to update one DB and remain consistent with all the others, without killing performance. This depends on the details of your scenario.
The other possibility is to partition, like you asked, all the data without overlapping. You would probably partition by user ID or topic ID. If you partition by topic ID, one database could reference all topics and just tell you which dedicated DB holds the data; you can then query the correct one. Since you partition by ID, all info related to that topic could live on that specialized database. You could also partition by language or country for an international website.
Last but not least, you'll probably end up mixing the two: some non-overlapping data, and some overlapping (replicated) data. First find the usual operations, then work out how to perform them on one DB in as few queries as possible.
PS: Don't forget about caching; it'll save you more than a distributed DB.