In a SQL database, we generally have related information stored in different tables. From what I read in the RocksDB documentation, there's really no clear or 'right' way to represent this kind of structure. So I'm wondering: what is the common practice for categorizing information?
Say I have three types of information: Customer, Product, and Employee, and I want to implement these in RocksDB. Should I use a prefix of the key, different column families, or different databases?
Thanks for any suggestion.
You can do it by coming up with a key prefix scheme that encodes the table, the column, and the id. For simplicity you could store everything in one column family, and definitely in one database, since that gives you atomic operations, snapshots, and so on. The better question is why you would want to store relational data in a NoSQL db at all, unless you are building something higher-level.
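For illustration, here is a minimal sketch of what such a prefix scheme could look like. A plain Python dict stands in for a single RocksDB column family (the point is the key layout, not the bindings), and the "table:id:column" encoding and the entity names are just assumptions for the example:

# Stand-in for one RocksDB column family; any key-value store behaves the same way here.
# Assumed key layout: "<table>:<id>:<column>" -> value
store = {}

def put_row(table, row_id, columns):
    # Each column becomes its own key under the table/id prefix.
    for col, val in columns.items():
        store[f"{table}:{row_id}:{col}"] = str(val)

def get_row(table, row_id):
    # Reassemble a "row" by collecting every key that shares the prefix.
    prefix = f"{table}:{row_id}:"
    return {k[len(prefix):]: v for k, v in store.items() if k.startswith(prefix)}

put_row("customer", 42, {"name": "Alice", "email": "alice@example.com"})
put_row("product", 7, {"title": "Rock", "price": 3})
print(get_row("customer", 42))  # {'name': 'Alice', 'email': 'alice@example.com'}

With the real RocksDB bindings the same puts and gets go against one column family; if you need the group of keys written atomically, RocksDB's WriteBatch covers that.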
By the way, check out linqdb, which is an example of a higher-level database where you can store entities and perform LINQ-style operations; it uses RocksDB underneath.
The way data is organized in a key-value store is up to the implementation. There is no one right way to go; it depends on the features of the underlying key-value store (in particular, whether it is key-ordered).
The same normalization/denormalization techniques apply.
I think the piece you are missing about key-value store application design is the concept of key composition. Briefly, it is the practice of building keys in such a way that they are amenable to querying. If the database is ordered, it will also allow prefix/range/scan queries and next/previous navigation. This will lead you to build key prefixes in such a way that querying is fast, i.e. doesn't require a full table scan.
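As a small illustration of key composition for querying, here is a sketch that treats a sorted Python dict as a stand-in for an ordered store such as RocksDB or LevelDB; the "order:<customerId>:<orderId>" layout and the zero-padded ids are invented for the example:

from bisect import bisect_left

# Composed keys: "order:<customer_id>:<order_id>" -> order payload.
# Ids are zero-padded so lexicographic order matches numeric order.
store = {
    "order:0001:0005": "3 rocks",
    "order:0001:0009": "1 book",
    "order:0002:0002": "1 chip",
}

def prefix_scan(prefix):
    # Emulates seek(prefix) + iterate-while-the-prefix-matches on an ordered store.
    keys = sorted(store)
    i = bisect_left(keys, prefix)
    while i < len(keys) and keys[i].startswith(prefix):
        yield keys[i], store[keys[i]]
        i += 1

# All orders for customer 0001, without scanning the whole keyspace on a real ordered store.
print(list(prefix_scan("order:0001:")))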
Expand your research to other key value stores like bsddb or leveldb here on Stack Overflow.
I'm currently writing some code for one of my classes involving distributed and parallel database processing. I'm doing horizontal fragmentation on some data and am required to keep track of different pieces of data.
The professor recommends storing "metadata" to keep track of some basic computations. Is this as simple as creating another table and storing some basic information, or is there a much more efficient way of doing this?
Example:
I need to track ranges for min/max values of every table in my database. Should I store that information in an entirely new table or is there a better way of achieving this?
Yes, you should store min/max in a different table. Depending on your application, you might need more than one of those kinds of tables.
Each insert, update, or delete statement can change either or both of those values. Think about how you want to handle that. (Triggers, probably.)
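As a rough sketch of that idea (assuming SQLite and made-up table/column names; UPDATE and DELETE would need their own triggers), an insert trigger can keep the summary table current:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE measurements (id INTEGER PRIMARY KEY, value REAL);

-- One summary/aggregate row per tracked table.
CREATE TABLE table_stats (table_name TEXT PRIMARY KEY, min_value REAL, max_value REAL);
INSERT INTO table_stats VALUES ('measurements', NULL, NULL);

-- Keep min/max current on every insert.
CREATE TRIGGER measurements_ins AFTER INSERT ON measurements
BEGIN
  UPDATE table_stats
     SET min_value = MIN(COALESCE(min_value, NEW.value), NEW.value),
         max_value = MAX(COALESCE(max_value, NEW.value), NEW.value)
   WHERE table_name = 'measurements';
END;
""")

conn.executemany("INSERT INTO measurements (value) VALUES (?)", [(4.0,), (1.5,), (9.2,)])
print(conn.execute("SELECT * FROM table_stats").fetchone())  # ('measurements', 1.5, 9.2)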
Terminology
Metadata just means "data about other data", and min/max values for one or more columns in each table is arguably data about other data. But I've never seen such data called metadata. It's always either summary or aggregate data.
I think you'll find that when most DBAs and database developers use metadata, they're talking about system tables or the information_schema views that are built on top of system tables.
TL;DR: should I use a SQL JOIN table or Redis sets to store large amounts of many-to-many relationships?
I have an in-memory object graph structure where I have a "many-to-many" index represented as a bidirectional mapping between ordered sets:
group_by_user | user_by_group
--------------+---------------
louis: [1,2] | 1: [louis]
john: [2,3] | 2: [john, louis]
| 3: [john]
The basic operations that I need to be able to perform are atomic "insert at" and "delete" operations on the individual sets. I also need to be able to do efficient key lookup (e.g. lookup all groups a user is a member of, or lookup all the users who are members of one group). I am looking at a 70/30 read/write use case.
My question is: what is my best bet for persisting this kind of data structure? Should I be looking at building my own optimized on-disk storage system? Otherwise, is there a particular database that would excel at storing this kind of structure?
Before you read any further: stop being afraid of JOINs. This is a classic case for using a genuine relational database such as Postgres.
There are a few reasons for this:
This is what a real RDBMS is optimized for
The database can take care of your integrity constraints as a matter of course
This is what a real RDBMS is optimized for
You will have to push "join" logic into your own code
This is what a real RDBMS is optimized for
You will have to deal with integrity concerns in your own code
This is what a real RDBMS is optimized for
You will wind up reinventing database features in your own code
This is what a real RDBMS is optimized for
Yes, I am being a little silly, but only because I'm trying to drive home a point.
I am beating on that drum so hard because this is a classic case that has a readily available, extremely optimized and profoundly stable tool custom designed for it.
When I say that you will wind up reinventing database features I mean that you will start having to make basic data management decisions in your own code. For example, you will have to choose when to actually write the data to disk, when to pull it, how to keep track of the most frequently used data and cache it in memory (and how to manage that cache), etc. Baking performance assumptions into your code early can give your whole codebase cancer without you noticing it -- and if those assumptions prove false later, changing them can require a major rewrite.
If you store the data on either end of the many-to-many relationship in one store and the many-to-many map in another store you will have to:
Locate the initial data on one side of the mapping
Extract the key(s)
Query for the key(s) in the many-to-many handler
Receive the response set(s)
Query whatever is relevant from your other storage based on the result
Build your answer for use within the system
If you structure your data within an RDBMS to begin with your code will look more like:
Run a pre-built query indexed over whatever your search criteria is
Build an answer from the response
JOINs are a lot less scary than doing it all yourself -- especially in a concurrent system where other things may be changing in the course of your ad hoc locate-extract-query-receive-query-build procedure (which can be managed, of course, but why manage it when an RDBMS is already designed to manage it?).
JOIN isn't even a slow operation in decent databases. I have some business applications that constantly join 20 tables over fairly large tables (several million rows) and it zips right through them. It is highly optimized for this sort of thing, which is why I use it. Oracle does well at this (but I can't afford it), DB2 is awesome (can't afford that, either), and SQL Server has come a long way (can't afford the good version of that one either!). MySQL, on the other hand, was really designed with the key-value store use case in mind and matured in the "performance above all else" world of web applications -- and so it has some problems with integrity constraints and JOINs (but has handled replication very well for a very long time). So not all RDBMSes are created equal, but without knowing anything else about your problem they are the kind of datastore that will serve you best.
Even slightly non-trivial data can make your code explode in complexity -- hence the popularity of database systems. They aren't (supposed to be) religions, they are tools to let you separate a generic data-handling task from your own program's logic so you don't have to reinvent the wheel every project (but we tend to anyway).
But
Q: When would you not want to do this?
A: When you are really building a graph and not a set of many-to-many relations.
There is another type of database, the graph database, designed specifically to handle that case. You need to keep in mind, though, what your actual requirements are. Is this data ephemeral? Does it have to be correct? Do you care if you lose it? Does it need to be replicated? etc. Most of the time the requirements are relatively trivial and the answer is "no" to these sorts of higher-flying questions -- but if you have some special operational needs then you may need to take them into account when making your architectural decision.
If you are storing things that are actually documents (instead of structured records) on the one hand, and need to track a graph of relationships among them on the other, then a combination of back-ends may be a good idea. A document database + a graph database glued together by some custom code could be the right thing.
Think carefully about which kind of situation you are actually facing instead of assuming you have case X because it is what you are already familiar with.
In relational databases (e.g. SQL Server, MySQL, Oracle...), the typical way of representing such data structures is with a "link table". For example:
users table:
userId (primary key)
userName
...
groups table:
groupId (primary key)
...
userGroups table: (this is the link table)
userId (foreign key to users table)
groupId (foreign key to groups table)
compound primary key of (userId, groupId)
Thus, to find all groups with users named "fred", you might write the following query:
SELECT g.*
FROM users u
JOIN userGroups ug ON ug.userId = u.userId
JOIN groups g ON g.groupId = ug.groupId
WHERE u.userName = 'fred'
To achieve atomic inserts, updates, and deletes of this structure, you'll have to execute the queries that modify the various tables inside transactions. ORMs such as Entity Framework (for .NET) will typically handle this for you.
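A minimal sketch of that with SQLite and the schema above (a groupName column is added just for the example): wrapping the statements in a transaction makes each link-table change atomic.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users  (userId INTEGER PRIMARY KEY, userName TEXT);
CREATE TABLE groups (groupId INTEGER PRIMARY KEY, groupName TEXT);
CREATE TABLE userGroups (
  userId  INTEGER REFERENCES users(userId),
  groupId INTEGER REFERENCES groups(groupId),
  PRIMARY KEY (userId, groupId)
);
""")
conn.execute("INSERT INTO users VALUES (1, 'fred')")
conn.execute("INSERT INTO groups VALUES (10, 'admins')")

# Everything inside the `with` block commits together or not at all.
with conn:
    conn.execute("INSERT INTO userGroups VALUES (?, ?)", (1, 10))
with conn:
    conn.execute("DELETE FROM userGroups WHERE userId = ? AND groupId = ?", (1, 10))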
Here is something I've wondered for quite some time, and have not seen a real (good) solution for yet. It's a problem I imagine many games having, and that I can't easily think of how to solve (well). Ideas are welcome, but since this is not a concrete problem, don't bother asking for more details - just make them up! (and explain what you made up).
Ok, so, many games have the concept of (inventory) items, and often there are hundreds of different kinds of items, with very different data structures behind them - some items are very simple ("a rock"), others can have insane complexity or data behind them ("a book", "a programmed computer chip", "a container with more items"), etc.
Now, programming something like that is easy - just have everything implement an interface, or maybe extend an abstract root item. Since objects in the programming world don't have to look the same on the inside as on the outside, there is really no issue with how much and what kind of private fields any type of item has.
But when it comes to database serialization (binary serialization is of course no problem), you are facing a dilemma: how would you represent that in, say, a typical SQL database?
Some attempts at a solution that I have seen, none of which I find satisfying:
Binary serialization of the items, the database just holds an ID and a blob.
Pros: takes like 10 seconds to implement.
Cons: basically sacrifices every database feature, is hard to maintain, and is near impossible to refactor.
A table per item type.
Pros: clean, flexible.
Cons: with a wide variety of items come hundreds of tables, and every search for an item has to query them all, since SQL doesn't have the concept of a table/type 'reference'.
One table with a lot of fields that aren't used by every item.
Pros: takes like 10 seconds to implement, still searchable.
Cons: waste of space and performance, and it's hard to tell from the database which fields are in use.
A few tables with a few 'base profiles' for storage where similar items get thrown together and use the same fields for different data.
Pros: I've got nothing.
Cons: waste of space and performance, and it's hard to tell from the database which fields are in use.
What ideas do you have? Have you seen another design that works better or worse?
It depends on whether you need to sort, filter, count, or analyze those attributes.
If you use EAV, then you will screw yourself nicely. Try doing reports on an EAV schema.
The best option is to use Table Inheritance:
PRODUCT
id pk
type
att1
PRODUCT_X
id pk fk PRODUCT
att2
att3
PRODUCT_Y
id pk fk PRODUCT
att4
att5
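A rough SQLite sketch of that table-inheritance layout, following the outline above (the attribute names are placeholders); fetching a specialized product is just a join between the base table and its subtype table:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE PRODUCT (
  id   INTEGER PRIMARY KEY,
  type TEXT,   -- tells you which subtype table holds the rest
  att1 TEXT
);
CREATE TABLE PRODUCT_X (
  id   INTEGER PRIMARY KEY REFERENCES PRODUCT(id),
  att2 TEXT,
  att3 TEXT
);
""")
conn.execute("INSERT INTO PRODUCT VALUES (1, 'X', 'common value')")
conn.execute("INSERT INTO PRODUCT_X VALUES (1, 'x only', 'more x only')")

# Base row + subtype row = the full object.
print(conn.execute("""
  SELECT p.id, p.att1, x.att2, x.att3
    FROM PRODUCT p JOIN PRODUCT_X x ON x.id = p.id
   WHERE p.id = 1
""").fetchone())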
For attributes that you don't need to search/sort/analyze, use a blob or XML.
I have two alternatives for you:
One table for the base type and supplemental tables for each “class” of specialized types.
In this schema, properties common to all “objects” are stored in one table, so you have a unique record for every object in the game. For special types like books, containers, usable items, etc, you have another table for each unique set of properties or relationships those items need. Every special type will therefore be represented by two records: the base object record and the supplemental record in a particular special type table.
PROS: You can use column-based features of your database like custom domains, checks, and xml processing; you can have simpler triggers on certain types; your queries differ exactly at the point of diverging concerns.
CONS: You need two inserts for many objects.
Use a “kind” enum field and a JSONB-like field for the special type data.
This is kind of like your #1 or #3, except with some database help. Postgres added JSONB, giving you an improvement over the old EAV pattern. Other databases have a similar complex field type. In this strategy you roll your own mini schema that you stash in the JSONB field. The kind field declares what you expect to find in that JSONB field.
PROS: You can extract special type data in your queries; can add check constraints and have a simple schema to deal with; you can benefit from indexing even though your data is heterogeneous; your queries and inserts are simple.
CONS: Your data types within JSONB-like fields are pretty limited and you have to roll your own validation.
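A small sketch of the kind-plus-document-field idea; SQLite's JSON1 functions (assuming they are compiled in) stand in for Postgres JSONB here, and the item kinds and fields are invented:

import sqlite3, json

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE items (
  id   INTEGER PRIMARY KEY,
  kind TEXT NOT NULL,   -- declares what to expect inside `data`
  data TEXT NOT NULL    -- JSON document (JSONB in Postgres)
)""")
conn.execute("INSERT INTO items VALUES (?, ?, ?)",
             (1, 'book', json.dumps({"title": "Necronomicon", "pages": 666})))
conn.execute("INSERT INTO items VALUES (?, ?, ?)",
             (2, 'rock', json.dumps({"weight_kg": 1.2})))

# Query inside the document; in Postgres this would be data->>'pages'.
print(conn.execute("""
  SELECT id, json_extract(data, '$.title')
    FROM items
   WHERE kind = 'book' AND json_extract(data, '$.pages') > 100
""").fetchall())  # [(1, 'Necronomicon')]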
Yes, it is a pain to design database formats like this. I'm designing a notification system and reached the same problem. My notification system is however less complex than yours - the data it holds is at most ids and usernames. My current solution is a mix of 1 and 3 - I serialize data that is different from every notification, and use a column for the 2 usernames (some may have 2 or 1). I shy away from method 2 because I hate that design, but it's probably just me.
However, if you can afford it, I would suggest thinking outside the realm of the RDBMS - it sounds like a non-RDBMS (especially a key/value store) may be a better fit for this data, especially if item 1 and item 2 differ from each other a lot.
I'm sure this has been asked here a million times before, but in addition to the options which you have discussed in your question, you can look at EAV schema which is very flexible, but which has its own sets of cons.
Another alternative is database systems which are not relational. There are object databases as well as various key/value stores and document databases.
Typically all these things break down to some extent when you need to query against the flexible attributes. This is kind of an intrinsic problem, however. Conceptually, what does it really mean to query things accurately which are unstructured?
First of all, do you actually need the concurrency, scalability and ACID transactions of a real database? Unless you are building an MMO, your game structures will likely fit in memory anyway, so you can search and otherwise manipulate them there directly. In a scenario like this, the "database" is just a store for serialized objects, and you can replace it with the file system.
If you conclude that you do need a database, then the key is figuring out what "atomicity" means from the data management perspective.
For example, if a game item has a bunch of attributes, but none of these attributes are manipulated individually at the database level (even though they could well be at the application level), then it can be considered as "atomic" from the data management perspective. OTOH, if the item needs to be searched on some of these attributes, then you'll need a good way to index them in the database, which typically means they'll have to be separate fields.
Once you have identified attributes that should be "visible" versus the attributes that should be "invisible" from the database perspective, serialize the latter to BLOBs (or whatever), then forget about them and concentrate on structuring the former.
That's where the fun starts and you'll probably need to use "all of the above" strategy for reasonable results.
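A minimal sketch of that visible/invisible split, assuming SQLite and made-up item fields: the searchable attributes become real columns, everything else goes into an opaque blob.

import sqlite3, json

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE game_items (
  id    INTEGER PRIMARY KEY,
  owner TEXT,   -- "visible": searched on, so it is a real, indexable column
  kind  TEXT,   -- "visible"
  extra BLOB    -- "invisible": serialized rest of the object, opaque to the database
)""")

def save_item(item_id, owner, kind, **invisible):
    blob = json.dumps(invisible).encode()   # could be any serialization format
    conn.execute("INSERT INTO game_items VALUES (?, ?, ?, ?)",
                 (item_id, owner, kind, blob))

save_item(1, "fred", "book", pages=320, cursed=True)

# Searches only touch the visible columns; the blob just comes along for the ride.
item_id, extra = conn.execute(
    "SELECT id, extra FROM game_items WHERE owner = ?", ("fred",)).fetchone()
print(item_id, json.loads(extra))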
BTW, some databases support "deep" indexes that can go into heterogeneous data structures. For example, take a look at Oracle's XMLIndex, though I doubt you'll use Oracle for a game.
You seem to be trying to solve this for a gaming context, so maybe you could consider a component-based approach.
I have to say that I personally haven't tried this yet, but I've been looking into it for a while and it seems to me something similar could be applied.
The idea would be that all the entities in your game would basically be a bag of components. These components can be Position, Energy or for your inventory case, Collectable, for example. Then, for this Collectable component you can add custom fields such as category, numItems, etc.
When you're going to render the inventory, you can simply query your entity system for items that have the Collectable component.
How can you save this into a DB? You can define the components independently in their own table, and then for the entities (each in their own table as well) you would add a "Components" column which would hold an array of IDs referencing these components. These IDs would effectively be like foreign keys, though I'm aware that this is not exactly how you model things in relational databases - but you get the idea.
Then, when you load the entities and their components at runtime, based on the component being loaded you can set the corresponding flag in their bag of components so that you know which components this entity has, and they'll then become queryable.
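One way that idea could be realized relationally (sketched with SQLite; a link table replaces the "array of IDs" column, since plain SQL has no array-of-foreign-keys type, and all names here are illustrative):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE entities   (entity_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE components (component_id INTEGER PRIMARY KEY, kind TEXT, data TEXT);

-- The "Components column holding an array of IDs", modeled as a link table.
CREATE TABLE entity_components (
  entity_id    INTEGER REFERENCES entities(entity_id),
  component_id INTEGER REFERENCES components(component_id),
  PRIMARY KEY (entity_id, component_id)
);
""")
conn.execute("INSERT INTO entities VALUES (1, 'old book')")
conn.execute("INSERT INTO components VALUES (10, 'Collectable', '{\"category\": \"books\"}')")
conn.execute("INSERT INTO entity_components VALUES (1, 10)")

# "Query the entity system for items that have the Collectable component."
print(conn.execute("""
  SELECT e.name
    FROM entities e
    JOIN entity_components ec ON ec.entity_id = e.entity_id
    JOIN components c ON c.component_id = ec.component_id
   WHERE c.kind = 'Collectable'
""").fetchall())  # [('old book',)]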
Here's an interesting read about component-based entity systems.
What would be considered best practice when you need additional data about facet results?
I.e. I need a friendly name / image / meta keywords / description / and more for product categories (when faceting on categories).
Include it in the document? (Can lead to lots of duplication.)
Introduce category as a new index in Solr (or fake it with a doctype=category field in Solr).
Use an RDBMS to look up the additional data with a SELECT ... WHERE IN (..category facet result ids..).
Thanks,
Remco
Use a fast NoSQL db that fits your data.
BTW, Lucene, which is Solr's underlying layer, is in fact also a NoSQL-type storage facility.
If I were you, I'd use MongoDB. That's the first db that came to mind, since you need binary data, and they practically invented BSON, which is now a widespread means of transferring binary data in a JSON-like fashion.
If your data structure is more graph-shaped (like social network) check out Neo4j, which has blindingly fast graph traversal algorithms.
A relational DB can reliably enforce the "category is a first-class entity" thing. You would need referential integrity: a product may not belong to a category that doesn't exist, and a deleted category must not have its child categories lying around. A normalized RDB can enforce referential integrity through the schema. A NoSQL DB needs client-side code (which you must write) to enforce referential integrity.
Let's see how "a product's category must exist" and "subcategories' parents must exist" are done:
RDB: The table that assigns categories to products (an m:n relation) has foreign keys to the product and category with ON DELETE CASCADE. If a category is deleted, a product simply cannot still have that category. A category that links up to another category as a child: the relevant field has an ON DELETE CASCADE, so if a parent is deleted, its children cannot exist. This entire method is declarative ("it is declared thus"); all the complexity lives in the data, and we don't need no stinking code to do it for us. You can model a DB as naturally as you understand the real-world implications.
Document-store-type NoSQL: You need to write code to do everything. "A category is deleted" is a use case, and you need to find the products that have that category and update each one. You have to write code for each use case. The same goes for managing subcategories. The data model may be dead simple, but its real-world implications must be modeled in the code, and it's tougher to reason in code and control flow than in data structures.
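As a compact illustration of the declarative approach (SQLite here, which needs foreign-key enforcement switched on per connection; the schema is just an example):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite enforces FKs only when asked
conn.executescript("""
CREATE TABLE categories (
  category_id INTEGER PRIMARY KEY,
  parent_id   INTEGER REFERENCES categories(category_id) ON DELETE CASCADE,
  name        TEXT
);
CREATE TABLE product_categories (   -- the m:n relation between products and categories
  product_id  INTEGER,
  category_id INTEGER REFERENCES categories(category_id) ON DELETE CASCADE,
  PRIMARY KEY (product_id, category_id)
);
""")
conn.execute("INSERT INTO categories VALUES (1, NULL, 'Electronics')")
conn.execute("INSERT INTO categories VALUES (2, 1, 'Phones')")      # child of 1
conn.execute("INSERT INTO product_categories VALUES (100, 2)")

conn.execute("DELETE FROM categories WHERE category_id = 1")        # delete the parent
# The child category and the product link are gone too; no application code was involved.
print(conn.execute("SELECT COUNT(*) FROM categories").fetchone()[0],
      conn.execute("SELECT COUNT(*) FROM product_categories").fetchone()[0])  # 0 0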
Do you really have performance needs that require NoSQL databases?
So use RDBMSs to manage your data. Then use the DataImportHandler or client-side code to insert/update denormalized entities for searching. If most requests to your site can be expressed as Solr queries, great!
As for expressing hierarchical faceting in Solr, see 'Ways to do hierarchical faceting in Solr?'.
I would think about two alternatives:
1.) Storing the information for every document without indexing it (to keep the index as small as possible). The point is that I would not store the image inside Lucene/Solr - only a file pointer.
2.) Storing the additional data in an RDBMS or a NoSQL store (like MongoDB) for lookup, as you wrote.
My favorite is the second one, because a database is the traditional and most optimized way of storing data.
But ultimately it depends on your system: keep in mind that you need time for connecting to a database, searching through the data, and sending the additional information back to the application.
So it could be faster to store everything in Lucene.
A small performance test would probably be useful.
Maybe I am wrong, but if you are on Solr trunk you could benefit from Solr join support; this would allow you to index several entities with relations among them while enforcing conditions on both.
Object-relational mappers (ORMs) were created to help applications (which think in terms of objects) deal with stored data in a more application-friendly way, like any other class/object.
However, I have never seen an OKM (Object-Key/Value Mapper) for NoSQL "key/value" storage systems. That seems odd, because the need should be far greater: more value relations have to be hard-coded into the app than with a regular, single SQL table row object.
four requests:
user:id
user:id:name
user:id:email
user:id:created
vs one request:
user = [id => ..., name => ..., email => ...]
Plus you must keep track of "lists" (post has_many comments) since you don't have has_many through tables or foreign keys.
INSERT INTO user_groups (user_id, group_id) VALUES (23, 54)
vs
usergroups:user_id = {54,108,32,..}
groupsuser:group_id = {23,12,645,..}
And there are lots more examples of the added logic an application would need in order to replicate basic features that normal relational databases provide. All of these reasons make the idea of an OKM sound like a shoo-in.
Are there any? Are there any reasons there are not any?
Ruby's DataMapper project is an ORM and will happily talk to a key-value store through the use of an adapter.
Redis and MongoDB have adapters that already exist. CouchDB has an adapter — it's not maintained, but at one point it worked pretty well. I don't think anyone's done anything with Cassandra yet, but there's no reason it couldn't be done. The Dubious framework for Google App Engine takes a very similar approach to Data Mapper to make the Data Store available to applications.
So it's very possible to do ORM with key-value stores. The ORM just really needs to avoid the assumption that SQL is its primary vocabulary.
One of the design goals of SQL is that any data can be stored/queried in any relational database. There are some differences between platforms, but in general the correct way to handle a particular data structure is well known and easily automated, though it requires fairly verbose code. That is not the case with NoSQL: generally you will be directly storing the data as used in your application rather than trying to map it to a relational structure, and without joins or other object/relational differences the mapping code is trivial.
Beyond generating the boilerplate data access code, one of the main purposes of an ORM is abstraction of differences between platforms. In my experience the ability to switch platforms has always been purely theoretical, and this lowest common denominator approach simply won't work for NoSQL as the platform is usually chosen specifically for capabilities not present on other platforms. Your example is only for the most trivial key value store - depending on your platform you most likely have some useful additional commands, so your first example could be
MGET user:id:name user:id:email ... (multiget - get any number of keys in a single call)
GET user:id:* (key wildcards)
HGETALL user:id (redis hash - gets all subkeys of user)
You might also have your user object stored in a serialized form - unlike in a relational database this will not break all your queries.
Working with lists isn't great if your platform doesn't have support built in - native list/set support is one of the reasons I like to use redis - but aside from potentially needing locks it's no worse than getting the list out of sql.
It's also worth noting that you may not need all the relationships you would define in sql - for example if you have a group containing a million users, the ability to get a list of all users in a group is completely useless, so you would never create the groupsuser list at all, and rather than a separate usergroups list you would have user:id:groups as a multivalue property. If you just need to check for membership you could set up keys as usergroups:userid:groupid and get constant-time lookup.
I find it helps to think in terms of indexes rather than relationships - when setting up your data access code decide which fields will need to be queried and adding appropriate index records when those fields are written.
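A brief sketch of that index-oriented approach with redis-py (assuming a Redis server on localhost; the key names follow the examples above):

import redis

r = redis.Redis()   # assumes a local Redis instance on the default port

def add_membership(user_id, group_id):
    # One "index" per query you actually need: here, "which groups is this user in?"
    r.sadd(f"user:{user_id}:groups", group_id)

add_membership(23, 54)
add_membership(23, 108)

print(r.smembers("user:23:groups"))        # all groups for user 23
print(r.sismember("user:23:groups", 54))   # constant-time membership check
# No groupsuser:<group_id> set is kept, because "all users in a group" is never queried here.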
ORMs don't map terribly well to the schema-less nature of key-value stores. That being said, if you're using Riak and Ruby, you could take a look at Ripple. There are a number of other drivers for Riak which might fit with your language.
If you're looking into MongoDB (more of a document store than a k/v store), there are a number of drivers available.
The UniVerse db, which is a descendant of Pick, lets you store a list of key-value pairs for a given key. However, this is very old technology and the world moved away from these databases a long time ago.
You can implement this in an SQL database with a three column table
CREATE TABLE ATTRS ( KEYVAL VARCHAR(32),
ATTRNAME VARCHAR(32),
ATTRVAR VARCHAR(1024)
)
Although most DBAs will hit you over the head with the very thick Codd and Date hardback edition if you propose this, it is in fact a very common pattern in packaged applications to allow you to add site-specific attributes to a system.
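A short usage sketch for that table (SQLite here; the entity key and attribute names are made up):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE ATTRS (KEYVAL   VARCHAR(32),
                                    ATTRNAME VARCHAR(32),
                                    ATTRVAR  VARCHAR(1024))""")

# Site-specific attributes hung off an existing record's key.
conn.executemany("INSERT INTO ATTRS VALUES (?, ?, ?)", [
    ("user:1001", "name",  "Alice"),
    ("user:1001", "email", "alice@example.com"),
])

# Reassembling the "object" means pivoting rows back into fields in application code.
attrs = dict(conn.execute(
    "SELECT ATTRNAME, ATTRVAR FROM ATTRS WHERE KEYVAL = ?", ("user:1001",)))
print(attrs)  # {'name': 'Alice', 'email': 'alice@example.com'}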
To paraphrase Richard Stallman's comments on Lisp:
"Any reasonably functional data-storage system will eventually end up implementing its own version of an RDBMS."