Object-Relational Mappers were created to help applications (which think in terms of objects) deal with stored data the same way they deal with every other class/object.
However, I have never seen an OKM (Object-Key/Value-Mapper) for NoSQL "key/value" storage systems. That seems odd, because the need should be far greater: more value relations have to be hard-coded into the app than for a regular, single SQL table-row object.
four requests:
user:id
user:id:name
user:id:email
user:id:created
vs one request:
user = [id => ..., name => ..., email => ...]
Plus you must keep track of "lists" (post has_many comments) since you don't have has_many through tables or foreign keys.
INSERT INTO user_groups (user_id, group_id) VALUES (23, 54)
vs
usergroups:user_id = {54,108,32,..}
groupsuser:group_id = {23,12,645,..}
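To make the extra bookkeeping concrete, here is a minimal sketch of what the application (or a hypothetical OKM) has to do by hand to keep both directions of that index in sync. It assumes the redis-py client and the key names from the example above; any key/value store with set operations would look similar:
import redis

r = redis.Redis()

def add_user_to_group(user_id, group_id):
    # keep both directions of the many-to-many index in sync by hand --
    # the work a join table and foreign keys would normally do for us
    pipe = r.pipeline(transaction=True)            # group both writes atomically
    pipe.sadd(f"usergroups:{user_id}", group_id)   # groups this user belongs to
    pipe.sadd(f"groupsuser:{group_id}", user_id)   # users in this group
    pipe.execute()

def remove_user_from_group(user_id, group_id):
    pipe = r.pipeline(transaction=True)
    pipe.srem(f"usergroups:{user_id}", group_id)
    pipe.srem(f"groupsuser:{group_id}", user_id)
    pipe.execute()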
And there are many more examples of the added logic an application needs just to replicate basic features that normal relational databases provide. All of this makes the idea of an OKM sound like a shoo-in.
Are there any? Are there any reasons there are not any?
Ruby's DataMapper project is an ORM and will happily talk to a key-value store through the use of an adapter.
Redis and MongoDB have adapters that already exist. CouchDB has an adapter — it's not maintained, but at one point it worked pretty well. I don't think anyone's done anything with Cassandra yet, but there's no reason it couldn't be done. The Dubious framework for Google App Engine takes a very similar approach to Data Mapper to make the Data Store available to applications.
So it's very possible to do ORM with key-value stores. The ORM just really needs to avoid the assumption that SQL is its primary vocabulary.
One of the design goals of SQL is that any data can be stored and queried in any relational database. There are some differences between platforms, but in general the correct way to handle a particular data structure is well known and easy to automate, even if it requires fairly verbose code. That is not the case with NoSQL: generally you store the data directly in the form your application uses rather than mapping it to a relational structure, and without joins or other object/relational differences the mapping code is trivial.
Beyond generating the boilerplate data access code, one of the main purposes of an ORM is abstracting away the differences between platforms. In my experience the ability to switch platforms has always been purely theoretical, and this lowest-common-denominator approach simply won't work for NoSQL, as the platform is usually chosen specifically for capabilities not present on other platforms. Your example is only for the most trivial key-value store; depending on your platform you most likely have some useful additional commands, so your first example could be
MGET user:id:name user:id:email ... (multiget - get any number of keys in a single call)
GET user:id:* (key wildcards)
HGETALL user:id (redis hash - gets all subkeys of user)
You might also have your user object stored in a serialized form - unlike in a relational database this will not break all your queries.
Working with lists isn't great if your platform doesn't have support built in - native list/set support is one of the reasons I like to use redis - but aside from potentially needing locks it's no worse than getting the list out of sql.
It's also worth noting that you may not need all the relationships you would define in SQL. For example, if a group contains a million users, the ability to get a list of all users in a group is completely useless, so you would never create the groupsuser list at all; rather than a separate usergroups list, you could have user:id:groups as a multi-value property. If you just need to check for membership you could set up keys as usergroups:userid:groupid and get constant-time lookup.
I find it helps to think in terms of indexes rather than relationships: when setting up your data access code, decide which fields will need to be queried and add appropriate index records when those fields are written.
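As a rough illustration of that "indexes, not relationships" approach (again assuming redis-py; the key layout is invented for the example):
import redis

r = redis.Redis()

def save_user(user):
    # write the record itself plus every index we know we will query on
    uid = user["id"]
    pipe = r.pipeline(transaction=True)
    pipe.hset(f"user:{uid}", mapping={"name": user["name"], "email": user["email"]})
    pipe.set(f"user:email:{user['email']}", uid)     # lookup by email
    for gid in user["groups"]:
        pipe.sadd(f"usergroups:{uid}", gid)          # list of groups per user
        pipe.set(f"usergroups:{uid}:{gid}", 1)       # constant-time membership check
    pipe.execute()

def find_user_by_email(email):
    uid = r.get(f"user:email:{email}")
    return r.hgetall(f"user:{uid.decode()}") if uid else None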
ORMs don't map terribly well to the schema-less nature of key-value stores. That being said, if you're using Riak and Ruby, you could take a look at Ripple. There are a number of other drivers for Riak which might fit with your language.
If you're looking into MongoDB (more of a document store than a k/v store), there are a number of drivers available.
The UniVerse database, which is a descendant of Pick, lets you store a list of key/value pairs for a given key. However, this is very old technology, and the world ran away from these databases a long time ago.
You can implement this in an SQL database with a three column table
CREATE TABLE ATTRS (
    KEYVAL   VARCHAR(32),
    ATTRNAME VARCHAR(32),
    ATTRVAR  VARCHAR(1024)
)
Although most DBAs will hit you over the head with the very thick Codd and Date hardback edition if you propose this, it is in fact a very common pattern in packaged applications to allow you to add site specific attributes to a system.
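As a rough sketch of what working with such a table looks like from application code (using Python's built-in sqlite3 module purely for illustration; column names are taken from the table above):
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE ATTRS (
    KEYVAL   VARCHAR(32),
    ATTRNAME VARCHAR(32),
    ATTRVAR  VARCHAR(1024)
)""")

def load_attrs(conn, key):
    # fold the attribute rows for one key back into a dict --
    # the reassembly the application has to do itself with this pattern
    rows = conn.execute(
        "SELECT ATTRNAME, ATTRVAR FROM ATTRS WHERE KEYVAL = ?", (key,))
    return dict(rows.fetchall())

conn.executemany("INSERT INTO ATTRS VALUES (?, ?, ?)",
                 [("user:23", "name", "Fred"), ("user:23", "email", "fred@example.com")])
print(load_attrs(conn, "user:23"))  # {'name': 'Fred', 'email': 'fred@example.com'}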
To paraphrase Richard Stallman's comments on LISP:
"Any reasonably functional data storage system will eventually end up implementing its own version of an RDBMS."
Related
I have a legacy in-house human resources web app that I'd like to rebuild using more modern technologies. Doctrine 2 is looking good. But I've not been able to find articles or documentation on how best to organise the Entities for a large-ish database (120 tables). Can you help?
My main problem is the Person table (of course! it's an HR system!). It currently has 70 columns. I want to refactor that to extract several subsets into one-to-one sub tables, which will leave me with about 30 columns. There are about 50 other supporting one-to-many tables called person_address, person_medical, person_status, person_travel, person_education, person_profession etc. More will be added later.
If I put all the doctrine associations (http://docs.doctrine-project.org/projects/doctrine-orm/en/latest/reference/working-with-associations.html) in the Person entity class along with the set/get/add/remove methods for each, along with the original 30 columns and their methods, and some supporting utility functions then the Person entity is going to be 1000+ lines long and a nightmare to test.
FWIW, I plan to create a PersonRepository to handle the common bulk queries, a PersonProfessionRepository for the bulk queries/reports on that sub-table, etc., and Person*Services which will contain some of the more complex business logic where needed. So organising the rest of the app logic is fine: this is a question about how to correctly organise lots of sub-table Entities with Doctrine that all have relationships/associations back to one primary table. How do I avoid bloating the Person entity class?
Identifying types of objects
It sounds like you have a nicely normalized database and I suggest you keep it that way. Removing columns from the people table to create separate tables for one-to-one relations isn't going to help in performance nor maintainability.
The fact that you recognize several groups of properties in the Person entity might indicate you have found cases for a Value Object. Even some of the one-to-many tables (like person_address) sound more like Value Objects than Entities.
Starting with Doctrine 2.5 (which is not yet stable at the time of this writing) it will support embedding single Value Objects. Unfortunately we will have to wait for a future version for support of collections of Value objects.
Putting that aside, you can mimic embedding Value Objects; Ross Tuck has blogged about this.
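The idea is language-independent; purely as an illustration (not Doctrine's API), a minimal Python sketch of a Value Object embedded in an entity looks like this:
from dataclasses import dataclass

@dataclass(frozen=True)      # immutable, no identity of its own, compared by value
class Address:
    street: str
    city: str
    postcode: str

@dataclass
class Person:                # the entity: has an identity and owns the value object
    id: int
    name: str
    home_address: Address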
Lasagna Code
Your plan of implementing an entity, repository, service (and maybe controller?) for Person, PersonProfession, etc sounds like a road to Lasagna Code.
Without extensive knowledge about your domain, I'd say you want to have an aggregate Person, of which the Person entity is the aggregate root. That aggregate needs a single repository. (But maybe I'm off here and being simplistic, as I said, I don't know your domain.)
Creating a service for Person (and other entities / value objects) indicates data-minded thinking. For services it's better to think of behavior. Think of what kinds of tasks you want to perform, and group coherent sets of tasks into services. I suspect that for an HR system you'll end up with many services that revolve around your Person aggregate.
Is Doctrine 2 suitable?
I would say: yes. Doctrine itself has no problems with large amounts of tables and large amounts of columns. But performance highly depends on how you use it.
OLTP vs OLAP
For OLTP systems an ORM can be very helpful. OLTP involves many short transactions, each writing a single aggregate (or a short list of aggregates) to the database.
For OLAP systems an ORM is not suited. OLAP involves many complex analytical queries, usually resulting in large object-graphs. For these kind of operations, native SQL is much more convenient.
Even in case of OLAP systems Doctrine 2 can be of help:
You can use DQL queries (instead of native SQL) to use the power of your mapping metadata. Then use scalar or array hydration to fetch the data.
Doctrine also supports arbitrary joins, which means you can join entities that are not associated with each other according to the mapping metadata.
And you can make use of the NativeQuery object with which you can map the results to whatever you want.
I think an HR system is a perfect example of a system that has both OLTP and OLAP: OLTP when it comes to adding a new Person to the system, for example, and OLAP when it comes to various reports and analytics.
So there's nothing wrong with using an ORM for transactional operations, while using plain SQL for analytical operations.
Choose wisely
I think the key is to carefully choose when to use what, on a case by case basis.
Hydrating entities is great for transactional operations. Make use of lazy loading associations which can prevent fetching data you're not going to use. But also choose to eager load certain associations (using DQL) where it makes sense.
Use scalar or array hydration when working with large data sets. Data sets usually grow where you're doing analytical operations, where you don't really need full blown entities anyway.
#Quicker makes a valid point by saying you can create specialized View objects. You can fetch only the data you need in specific cases and manually mold that data into objects. This goes together with his point not to bloat the user interface with options that a user with a certain role doesn't need.
A technique you might want to look into is Command Query Responsibility Segregation (CQRS).
My understanding is that you have a fully normalized persons table and you are now asking how best to denormalize it.
As long as you do not hit any technical constraints (such as a 64 KB maximum row size), I find 70 columns definitely not overloaded for a persons table in an HR system. Do yourself a favour and do not segment that information, for the following reasons:
selects potentially become more complex
each extracted table needs an extra index or indices, which increases your overall memory utilization -> this sounds like a minor issue as disk is cheap, but keep in mind that, because of caching, the RAM-to-disk space utilization ratio determines your performance to a huge extent
changes become more complex, as the extra relations demand extra care
since any edit/update/read view can be restricted to deal with only slices of the physical data in the tables, no "cosmetic" pressure arises from the end-user (or even admin) perspective
In summary, subsetting the table causes lots of issues and effort but adds little if any value.
Btw. databases are optimized for data storage. Millions of rows and some dozens of columns are no problem at that end.
TL;DR: should I use an SQL JOIN table or Redis sets to store large amounts of many-to-many relationships
I have in-memory object graph structure where I have a "many-to-many" index represented as a bidirectional mapping between ordered sets:
group_by_user | user_by_group
--------------+---------------
louis: [1,2] | 1: [louis]
john: [2,3] | 2: [john, louis]
| 3: [john]
The basic operations that I need to be able to perform are atomic "insert at" and "delete" operations on the individual sets. I also need to be able to do efficient key lookup (e.g. lookup all groups a user is a member of, or lookup all the users who are members of one group). I am looking at a 70/30 read/write use case.
My question is: what is my best bet for persisting this kind of data structure? Should I be looking at building my own optimized on-disk storage system? Otherwise, is there a particular database that would excel at storing this kind of structure?
Before you read any further: stop being afraid of JOINs. This is a classic case for using a genuine relational database such as Postgres.
There are a few reasons for this:
This is what a real RDBMS is optimized for
The database can take care of your integrity constraints as a matter of course
This is what a real RDBMS is optimized for
You will have to push "join" logic into your own code
This is what a real RDBMS is optimized for
You will have to deal with integrity concerns in your own code
This is what a real RDBMS is optimized for
You will wind up reinventing database features in your own code
This is what a real RDBMS is optimized for
Yes, I am being a little silly, but only because I'm trying to drive home a point.
I am beating on that drum so hard because this is a classic case that has a readily available, extremely optimized and profoundly stable tool custom designed for it.
When I say that you will wind up reinventing database features I mean that you will start having to make basic data management decisions in your own code. For example, you will have to choose when to actually write the data to disk, when to pull it, how to keep track of the highest-frequency data and cache it in memory (and how to manage that cache), etc. Baking performance assumptions into your code early can give your whole codebase cancer without you noticing it -- and if those assumptions prove false later, changing them can require a major rewrite.
If you store the data on either end of the many-to-many relationship in one store and the many-to-many map in another store you will have to:
Locate the initial data on one side of the mapping
Extract the key(s)
Query for the key(s) in the many-to-many handler
Receive the response set(s)
Query whatever is relevant from your other storage based on the result
Build your answer for use within the system
If you structure your data within an RDBMS to begin with your code will look more like:
Run a pre-built query indexed over whatever your search criteria is
Build an answer from the response
JOINs are a lot less scary than doing it all yourself -- especially in a concurrent system where other things may be changing in the course of your ad hoc locate-extract-query-receive-query-build procedure (which can be managed, of course, but why manage it when an RDBMS is already designed to manage it?).
JOIN isn't even a slow operation in decent databases. I have some business applications that join 20 tables constantly over fairly large tables (several millions of rows) and it zips right through them. It is highly optimized for this sort of thing which is why I use it. Oracle does well at this (but I can't afford it), DB2 is awesome (can't afford that, either), and SQL Server has come a long way (can't afford the good version of that one either!). MySQL, on the other hand, was really designed with the key-value store use-case in mind and matured in the "performance above all else" world of web applications -- and so it has some problems with integrity constraints and JOINs (but has handled replication very well for a very long time). So not all RDBMSes are created equal, but without knowing anything else about your problem they are the kind of datastore that will serve you best.
Even slightly non-trivial data can make your code explode in complexity -- hence the popularity of database systems. They aren't (supposed to be) religions, they are tools to let you separate a generic data-handling task from your own program's logic so you don't have to reinvent the wheel every project (but we tend to anyway).
But
Q: When would you not want to do this?
A: When you are really building a graph and not a set of many-to-many relations.
There are other types of databases designed specifically to handle that case. You need to keep in mind, though, what your actual requirements are. Is this data ephemeral? Does it have to be correct? Do you care if you lose it? Does it need to be replicated? etc. Most of the time the requirements are relatively trivial and the answer is "no" to these sorts of higher-flying questions -- but if you have some special operational needs then you may need to take them into account when making your architectural decision.
If you are storing things that are actually documents (instead of structured records) on the one hand, and need to track a graph of relationships among them on the other, then a combination of back-ends may be a good idea. A document database + a graph database glued together by some custom code could be the right thing.
Think carefully about which kind of situation you are actually facing instead of assuming you have case X because it is what you are already familiar with.
In relational databases (e.g. SQL Server, MySQL, Oracle...), the typical way of representing such data structures is with a "link table". For example:
users table:
userId (primary key)
userName
...
groups table:
groupId (primary key)
...
userGroups table: (this is the link table)
userId (foreign key to users table)
groupId (foreign key to groups table)
compound primary key of (userId, groupId)
Thus, to find all groups with users named "fred", you might write the following query:
SELECT g.*
FROM users u
JOIN userGroups ug ON ug.userId = u.userId
JOIN groups g ON g.groupId = ug.groupId
WHERE u.userName = 'fred'
To achieve atomic inserts, updates, and deletes of this structure, you'll have to execute the queries that modify the various tables inside transactions. ORMs such as Entity Framework (for .NET) will typically handle this for you.
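For example, a minimal sketch of the atomic insert/delete operations against the link table (shown here with Python's sqlite3 just for brevity; any driver's transaction API works the same way, and the compound primary key rejects duplicate memberships):
import sqlite3

def add_user_to_group(conn, user_id, group_id):
    with conn:  # sqlite3 commits on success and rolls back on exception
        conn.execute(
            "INSERT INTO userGroups (userId, groupId) VALUES (?, ?)",
            (user_id, group_id))

def remove_user_from_group(conn, user_id, group_id):
    with conn:
        conn.execute(
            "DELETE FROM userGroups WHERE userId = ? AND groupId = ?",
            (user_id, group_id))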
For our project, we need a database that supports JOINs and has the ability to easily add and modify attributes of the entity (schema-less/free). Key points:
The system is designed to work with customers (CRM)
Basic entities: User, Customer, Case, Case Interaction, Order
Currently in the database there are ~200k customers and ~250k orders
Customer entity contains 15-20 optional attributes that are most often not filled
About 100 new cases a day
The data is synchronized with several other sources in the background
Requirements (high to low priority):
Ability to implement search/sort by related entities, e.g. Case by linked Customer name (support JOINs)
Flexibility to change the schema of the data, without storing NULL for a large number of attributes
Performance
ORM for Python with support for monitoring changes and the possibility of storing only the changes to the database
What we've tried:
MongoDB does not satisfy requirement 1.
PostgreSQL with all the attributes in one table does not satisfy requirement 2.
PostgreSQL with a separate table for each attribute, or EAV, does not satisfy requirement 3 (a lot of slow joins), but seems a better solution than the others.
Can you suggest any database or design of the system that will meet our needs?
Datomic might be worth checking out (http://www.datomic.com/). It satisfies requirements 1-3, and although there's no python ORM, there is a REST API.
Datomic is based on an Entity Attribute Value schema (it's not quite schema free - you need to specify a name and type for each attribute - but any entity can have any attribute). It is transactional and has support for joins, unlike some of the other flexible "NoSQL" solutions. Interestingly, it also has first-class support for time (e.g. what is the history of this entity/what did the database look like at time t,etc), which might be useful if you're tracking cases and interactions.
Queries are based on datalog, which queries by unification. Query by unification looks a bit odd at first but is brilliant once you get used to it.
For example, a query to find cases by linked customer name would be something like this:
[:find ?x
 :in $
 :where [?x :case/linked-customers ?c]
        [?c :customer/name "Barry"]]
The query engine looks in the database, and tries to satisfy the where clause by unifying all occurrences of a given variable. In this case, only ?c appears twice (the case has a linked customer c whose name is Barry), but queries can obviously get a lot more complex. The $ here represents the database.
You may want to consider storing the "flexible" part as XML. Some databases, e.g. DB2, allow XML indexing so lookup performance should be as good as with the relational data store. DB2 Express-C is free and does not have an artificial limit on the database size.
Update: since 2015, DB2 Express-C limits the database user data volume to 15 TB, which should still be plenty.
Here is something I've wondered for quite some time, and have not seen a real (good) solution for yet. It's a problem I imagine many games having, and that I can't easily think of how to solve (well). Ideas are welcome, but since this is not a concrete problem, don't bother asking for more details - just make them up! (and explain what you made up).
Ok, so, many games have the concept of (inventory) items, and often, there are hundreds of different kinds of items, all with often very varying data structures - some items are very simple ("a rock"), others can have insane complexity or data behind them ("a book", "a programmed computer chip", "a container with more items"), etc.
Now, programming something like that is easy - just have everything implement an interface, or maybe extend an abstract root item. Since objects in the programming world don't have to look the same on the inside as on the outside, there is really no issue with how much and what kind of private fields any type of item has.
But when it comes to database serialization (binary serialization is of course no problem), you are facing a dilemma: how would you represent that in, say, a typical SQL database?
Some attempts at a solution that I have seen, none of which I find satisfying:
Binary serialization of the items; the database just holds an ID and a blob.
Pros: takes like 10 seconds to implement.
Cons: basically sacrifices every database feature, is hard to maintain, and is near impossible to refactor.
A table per item type.
Pros: clean, flexible.
Cons: with a wide variety of items you end up with hundreds of tables, and every search for an item has to query them all, since SQL doesn't have the concept of a table/type 'reference'.
One table with a lot of fields that aren't used by every item.
Pros: takes like 10 seconds to implement, still searchable.
Cons: wastes space, hurts performance, and it's hard to tell from the database which fields are in use.
A few tables with a few 'base profiles' for storage, where similar items get thrown together and use the same fields for different data.
Pros: I've got nothing.
Cons: wastes space, hurts performance, and it's hard to tell from the database which fields are in use.
What ideas do you have? Have you seen another design that works better or worse?
It depends on whether you need to sort, filter, count, or analyze those attributes.
If you use EAV, then you will screw yourself nicely. Try doing reports on an EAV schema.
The best option is to use Table Inheritance:
PRODUCT
    id pk
    type
    att1

PRODUCT_X
    id pk fk PRODUCT
    att2
    att3

PRODUCT_Y
    id pk fk PRODUCT
    att4
    att5
For attributes that you don't need to search/sort/analyze, use a blob or XML.
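A rough sketch of how a specialized item comes back out of that layout (table and column names taken from the schema above; sqlite3 and the database file name are used only to keep the example concrete):
import sqlite3

def load_product_x(conn, product_id):
    # one join from the base table to the supplemental table for this subtype
    return conn.execute("""
        SELECT p.id, p.type, p.att1, x.att2, x.att3
        FROM PRODUCT p
        JOIN PRODUCT_X x ON x.id = p.id
        WHERE p.id = ?""", (product_id,)).fetchone()

conn = sqlite3.connect("game.db")
row = load_product_x(conn, 42)   # (id, type, att1, att2, att3)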
I have two alternatives for you:
One table for the base type and supplemental tables for each “class” of specialized types.
In this schema, properties common to all “objects” are stored in one table, so you have a unique record for every object in the game. For special types like books, containers, usable items, etc, you have another table for each unique set of properties or relationships those items need. Every special type will therefore be represented by two records: the base object record and the supplemental record in a particular special type table.
PROS: You can use column-based features of your database like custom domains, checks, and xml processing; you can have simpler triggers on certain types; your queries differ exactly at the point of diverging concerns.
CONS: You need two inserts for many objects.
Use a “kind” enum field and a JSONB-like field for the special type data.
This is kind of like your #1 or #3, except with some database help. Postgres added JSONB, giving you an improvement over the old EAV pattern. Other databases have a similar complex field type. In this strategy you roll your own mini schema that you stash in the JSONB field. The kind field declares what you expect to find in that JSONB field.
PROS: You can extract special type data in your queries; you can add check constraints and have a simple schema to deal with; you can benefit from indexing even though your data is heterogeneous; your queries and inserts are simple.
CONS: Your data types within JSONB-like fields are pretty limited and you have to roll your own validation.
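A minimal sketch of the kind-plus-JSONB strategy on Postgres, assuming psycopg2 and an invented items(id, kind, data jsonb) table:
import psycopg2
from psycopg2.extras import Json

conn = psycopg2.connect("dbname=game")

def save_item(conn, kind, attrs):
    # one row per item: a 'kind' discriminator plus kind-specific attributes in JSONB
    with conn, conn.cursor() as cur:
        cur.execute(
            "INSERT INTO items (kind, data) VALUES (%s, %s) RETURNING id",
            (kind, Json(attrs)))
        return cur.fetchone()[0]

def find_books_by_author(conn, author):
    # the JSONB field stays queryable (and indexable, e.g. with a GIN index)
    with conn, conn.cursor() as cur:
        cur.execute(
            "SELECT id, data FROM items WHERE kind = 'book' AND data->>'author' = %s",
            (author,))
        return cur.fetchall()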
Yes, it is a pain to design database formats like this. I'm designing a notification system and reached the same problem. My notification system is however less complex than yours: the data it holds is at most ids and usernames. My current solution is a mix of 1 and 3: I serialize the data that differs between notifications, and use a column for the two usernames (some notifications have two, some only one). I shy away from method 2 because I hate that design, but it's probably just me.
However, if you can afford it, I would suggest thinking outside the realm of RDBMS: it sounds like a non-RDBMS (especially a key/value store) may be a better fit for storing this data, especially if the items differ from each other a lot.
I'm sure this has been asked here a million times before, but in addition to the options you have discussed in your question, you can look at an EAV schema, which is very flexible but has its own set of cons.
Another alternative is database systems which are not relational. There are object databases as well as various key/value stores and document databases.
Typically all these things break down to some extent when you need to query against the flexible attributes. This is kind of an intrinsic problem, however. Conceptually, what does it really mean to query things accurately which are unstructured?
First of all, do you actually need the concurrency, scalability and ACID transactions of a real database? Unless you are building an MMO, your game structures will likely fit in memory anyway, so you can search and otherwise manipulate them there directly. In a scenario like this, the "database" is just a store for serialized objects, and you can replace it with the file system.
If you conclude that you do (need a database), then the key is in figuring out what "atomicity" means from the perspective of the data management.
For example, if a game item has a bunch of attributes, but none of these attributes are manipulated individually at the database level (even though they could well be at the application level), then it can be considered as "atomic" from the data management perspective. OTOH, if the item needs to be searched on some of these attributes, then you'll need a good way to index them in the database, which typically means they'll have to be separate fields.
Once you have identified attributes that should be "visible" versus the attributes that should be "invisible" from the database perspective, serialize the latter to BLOBs (or whatever), then forget about them and concentrate on structuring the former.
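A sketch of that visible/invisible split, with made-up columns (sqlite3 and JSON are used only to keep the example self-contained):
import json
import sqlite3

conn = sqlite3.connect("game.db")
conn.execute("""CREATE TABLE IF NOT EXISTS items (
    id     INTEGER PRIMARY KEY,
    name   TEXT,   -- "visible": searched and indexed by the database
    weight REAL,   -- "visible"
    extra  TEXT    -- "invisible": everything else, serialized and opaque
)""")

def save_item(conn, name, weight, **extra):
    with conn:
        conn.execute("INSERT INTO items (name, weight, extra) VALUES (?, ?, ?)",
                     (name, weight, json.dumps(extra)))

save_item(conn, "programmed chip", 0.1, opcodes=["MOV", "JMP"], author="player42")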
That's where the fun starts and you'll probably need to use "all of the above" strategy for reasonable results.
BTW, some databases support "deep" indexes that can go into heterogeneous data structures. For example, take a look at Oracle's XMLIndex, though I doubt you'll use Oracle for a game.
You seem to be trying to solve this for a gaming context, so maybe you could consider a component-based approach.
I have to say that I personally haven't tried this yet, but I've been looking into it for a while and it seems to me something similar could be applied.
The idea would be that all the entities in your game would basically be a bag of components. These components can be Position, Energy or for your inventory case, Collectable, for example. Then, for this Collectable component you can add custom fields such as category, numItems, etc.
When you're going to render the inventory, you can simply query your entity system for items that have the Collectable component.
How can you save this into a DB? You can define the components independently in their own table, and then for the entities (each in their own table as well) you would add a "Components" column which would hold an array of IDs referencing these components. These IDs would effectively be like foreign keys, though I'm aware this is not exactly how you model things in relational databases; you get the idea.
Then, when you load the entities and their components at runtime, based on the component being loaded you can set the corresponding flag in their bag of components so that you know which components this entity has, and they'll then become queryable.
Here's an interesting read about component-based entity systems.
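One way such a component store could be laid out relationally (the table names and the use of JSON for component fields are my own choices for the sketch, not something prescribed by the approach):
import json
import sqlite3

conn = sqlite3.connect("game.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS entities (id INTEGER PRIMARY KEY);
CREATE TABLE IF NOT EXISTS components (
    entity_id INTEGER REFERENCES entities(id),
    type      TEXT,   -- e.g. 'Position', 'Collectable'
    data      TEXT    -- component-specific fields, serialized
);
CREATE INDEX IF NOT EXISTS idx_components_type ON components(type);
""")

def add_component(conn, entity_id, ctype, **fields):
    with conn:
        conn.execute("INSERT INTO components (entity_id, type, data) VALUES (?, ?, ?)",
                     (entity_id, ctype, json.dumps(fields)))

def entities_with(conn, ctype):
    # "query your entity system for items that have the Collectable component"
    return [row[0] for row in conn.execute(
        "SELECT entity_id FROM components WHERE type = ?", (ctype,))]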
Object-relational mapping has been well discussed, including on here. I have experience with a few approaches and the pitfalls and compromises. True resolution seems like it requires changes to the OO or relational models themselves.
If using a functional language, does the same problem present itself? It seems to me that these two paradigms should fit together better than OO and RDBMS. The idea of thinking in sets in an RDBMS seems to mesh with the automatic parallelism that functional approaches seem to promise.
Does anyone have any interesting opinions or insights? What's the state of play in the industry?
What's the purpose of an ORM?
The main purpose of using an ORM is to bridge between the networked model (object orientation, graphs, etc.) and the relational model. And the main difference between the two models is surprisingly simple. It's whether parents point to children (networked model) or children point to parents (relational model).
With this simplicity in mind, I believe there is no such thing as an "impedance mismatch" between the two models. The problems people usually run into are purely implementation specific, and would be solvable if there were better data transfer protocols between clients and servers.
How can SQL address the problems we have with ORMs?
In particular, The Third Manifesto tries to address the shortcomings of the SQL language and relational algebra by allowing for nested collections, which have been implemented in a variety of databases, including:
Oracle (probably the most sophisticated implementation)
PostgreSQL (to some extent)
Informix
SQL Server, MySQL, etc. (through "emulation" via XML or JSON)
In my opinion, if all databases implemented the SQL standard MULTISET() operator (e.g. Oracle does), people would no longer use ORMs for mapping (perhaps still for object graph persistence), because they could materialise nested collections directly from within the databases, e.g. this query:
SELECT actor_id, first_name, last_name,
MULTISET (
SELECT film_id, title
FROM film AS f
JOIN film_actor AS fa USING (film_id)
WHERE fa.actor_id = a.actor_id
) AS films
FROM actor AS a
This would yield all the actors and their films as a nested collection, rather than a denormalised join result (where actors are repeated for each film).
Functional paradigm at the client side
The question whether a functional programming language at the client side is better suited for database interactions is really orthogonal. ORMs help with object graph persistence, so if your client side model is a graph, and you want it to be a graph, you will need an ORM, regardless if you're manipulating that graph using a functional programming language.
However, because object orientation is less idiomatic in functional programming languages, you are less likely to shoehorn every data item into an object. For someone writing SQL, projecting arbitrary tuples is very natural. SQL embraces structural typing. Each SQL query defines its own row type without the need to previously assign a name to it. That resonates very well with functional programmers, especially when type inference is sophisticated, in case of which you won't ever think of mapping your SQL result to some previously defined object / class.
An example in Java using jOOQ from this blog post could be:
// Higher order, SQL query producing function:
public static ResultQuery<Record2<String, String>> actors(Function<Actor, Condition> p) {
return ctx.select(ACTOR.FIRST_NAME, ACTOR.LAST_NAME)
.from(ACTOR)
.where(p.apply(ACTOR));
}
This approach leads to a much better compositionality of SQL statements than if the SQL language were abstracted by some ORM, or if SQL's natural "string based" nature were used. The above function can now be used e.g. like this:
// Get only actors whose first name starts with "A"
for (Record rec : actors(a -> a.FIRST_NAME.like("A%")))
System.out.println(rec);
FRM abstraction over SQL
Some FRMs try to abstract over the SQL language, usually for these reasons:
They claim SQL is not composable enough (jOOQ disproves this, it's just very hard to get right).
They claim that API users are more used to "native" collection APIs, so e.g. JOIN is translated to flatMap() and WHERE is translated to filter(), etc.
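To make the second point concrete, here is an in-memory Python analogy (not any particular FRM's API) of what that translation reads like, with JOIN expressed as a flat map over two collections and WHERE as a filter:
actors = [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]
films  = [{"actor_id": 1, "title": "Film A"}, {"actor_id": 1, "title": "Film B"}]

# JOIN ~ flat-map over both collections, with the ON clause as a condition
pairs = ((a, f) for a in actors for f in films if f["actor_id"] == a["id"])

# WHERE ~ filter on the joined rows
result = [(a["name"], f["title"]) for a, f in pairs if a["name"].startswith("A")]
print(result)   # [('Alice', 'Film A'), ('Alice', 'Film B')]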
To answer your question
FRM is not "easier" than ORM, it solves a different problem. In fact, FRM doesn't really solve any problem at all, because SQL, being a declarative programming language itself (which is not so different from functional programming), is a very good match for other functional client programming languages. So, if anything at all, an FRM simply bridges the gap between SQL, the external DSL, and your client language.
(I work for the company behind jOOQ, so this answer is biased)
The hard problems of extending the relational database (extended transactions, data-type mismatches, automated query translation, and things like the N+1 select problem) are fundamental problems of leaving the relational system and, in my opinion, do not change by changing the receiving programming paradigm.
That depends on your needs
If you want to focus on the data structures, use an ORM like JPA/Hibernate
If you want to focus on the processing and queries, take a look at FRM libraries: QueryDSL or jOOQ
If you need to tune your SQL queries for specific databases, use JDBC and native SQL queries
The strength of the various "relational mapping" technologies is portability: you ensure your application will run on most ACID databases.
Otherwise, you will have to cope with the differences between the various SQL dialects when you write the SQL queries manually.
Of course, you can restrict yourself to the SQL92 standard (and then do some functional programming), or you can reuse some concepts of functional programming with ORM frameworks.
The ORM's strengths are built around a session object, which can act as a bottleneck:
it manages the lifecycle of the objects as long as the underlying database transaction is running.
it maintains a one-to-one mapping between your Java objects and your database rows (and uses an internal cache to avoid duplicate objects).
it automatically detects association updates and the orphan objects to delete.
it handles concurrency issues with optimistic or pessimistic locking.
Nevertheless, its strengths are also its weaknesses:
The session must be able to compare objects, so you need to implement equals/hashCode methods.
But object equality must be rooted in "business keys" and not the database id (new transient objects have no database ID!).
However, some reified concepts have no business equality (an operation, for instance).
A common workaround relies on GUIDs, which tend to upset database administrators.
The session must spy on relationship changes, but its mapping rules push the use of collections that are unsuitable for the business algorithms.
Sometimes you would like to use a HashMap, but the ORM will require the key to be another "Rich Domain Object" instead of a light one...
Then you have to implement object equality on the rich domain object acting as a key...
But you can't, because this object has no counterpart in the business world.
So you fall back to a simple list that you have to iterate over (and performance issues follow).
The ORM APIs are sometimes unsuitable for real-world use.
For instance, real-world web applications try to enforce session isolation by adding some "WHERE" clauses when you fetch data...
Then Session.get(id) doesn't suffice and you need to turn to a more complex DSL (HQL, the Criteria API) or go back to native SQL.
The database objects conflict with other objects dedicated to other frameworks (like OXM frameworks, i.e. Object/XML Mapping).
For instance, your REST services may use the Jackson library to serialize a business object,
but that Jackson-mapped class maps exactly to a Hibernate one.
Then either you merge both, and a strong coupling between your API and your database appears,
or you must implement a translation, and all the code you saved thanks to the ORM is lost there...
On the other side, FRM is a trade-off between Object-Relational Mapping (ORM) and native SQL queries (with JDBC).
The best way to explain the differences between FRM and ORM is to adopt a DDD (Domain-Driven Design) approach.
Object-Relational Mapping empowers the use of "Rich Domain Objects", which are Java classes whose state is mutable during the database transaction.
Functional-Relational Mapping relies on "Poor Domain Objects", which are immutable (so much so that you have to clone a new one each time you want to alter its content).
It releases the constraints put on the ORM session and relies most of the time on a DSL over SQL (so portability doesn't matter).
But on the other hand, you have to look into the transaction details and the concurrency issues yourself. For example, a QueryDSL query looks like this:
List<Person> persons = queryFactory.selectFrom(person)
.where(
person.firstName.eq("John"),
person.lastName.eq("Doe"))
.fetch();
I'd guess functional to relational mapping should be easier to create and use than OO to RDBMS. As long as you only query the database, that is. I don't really see (yet) how you could do database updates without side effects in a nice way.
The main problem I see is performance. Today's RDBMSs are not designed to be used with functional queries, and will probably behave poorly in quite a few cases.
I haven't done functional-relational mapping, per se, but I have used functional programming techniques to speed up access to an RDBMS.
It's quite common to start with a dataset, do some complex computation on it, and store the results, where the results are a subset of the original with additional values, for example. The imperative approach dictates that you store your initial dataset with extra NULL columns, do your computation, then update the records with the computed values.
Seems reasonable. But the problem with that is it can get very slow. If your computation requires another SQL statement besides the update query itself, or even needs to be done in application code, you literally have to (re-)search for the records that you are changing after the computation to store your results in the right rows.
You can get around this by simply creating a new table for results. This way, you can just always insert instead of update. You end up having another table, duplicating the keys, but you no longer need to waste space on columns storing NULL – you only store what you have. You then join your results in your final select.
I (ab)used an RDBMS this way and ended up writing SQL statements that looked mostly like this...
create table temp_foo_1 as select ...;
create table temp_foo_2 as select ...;
...
create table foo_results as
select * from temp_foo_n inner join temp_foo_1 ... inner join temp_foo_2 ...;
What this is essentially doing is creating a bunch of immutable bindings. The nice thing, though, is you can work on entire sets at once. Kind of reminds you of languages that let you work with matrices, like Matlab.
I imagine this would also allow for parallelism much easier.
An extra perk is that types of columns for tables created this way don't have to be specified because they are inferred from the columns they're selected from.
I'd think that, as Sam mentioned, if the DB needs to be updated, the same concurrency issues have to be faced as in the OO world. The functional nature of the program could maybe be even a little more problematic than the object nature, because of the state of the data, transactions, etc. in the RDBMS.
But for reading, the functional language could be more natural with some problem domains (as it seems to be regardless of the DB)
The functional<->RDBMS mapping should have no big differences from OO<->RDBMS mappings. But I think that depends a lot on what kinds of data types you want to use, whether you want to develop a program with a brand-new DB schema or work against a legacy DB schema, etc.
The lazy fetches for associations, for example, could probably be implemented quite nicely with some lazy-evaluation-related concepts (even though they can be done quite nicely with OO too).
Edit: with some googling I found HaskellDB (an SQL library for Haskell); that could be worth trying.
Databases and Functional Programming can be fused.
for example:
Clojure is a functional programming language based on relational database theory.
Clojure -> DBMS, Super Foxpro
STM -> Transaction,MVCC
Persistent Collections -> db, table, col
hash-map -> indexed data
Watch -> trigger, log
Spec -> constraint
Core API -> SQL, Built-in function
function -> Stored Procedure
Meta Data -> System Table
Note: in the latest spec2, spec is more like an RDBMS.
see: spec-alpha2 wiki: Schema-and-select
I advocate: building a relational data model on top of hash-maps to achieve a combination of NoSQL and RDBMS advantages. This is actually a reverse implementation of PostgreSQL.
Duck typing: if it looks like a duck and quacks like a duck, it must be a duck.
If Clojure's data model is like an RDBMS, Clojure's facilities are like an RDBMS, and Clojure's data manipulation is like an RDBMS, then Clojure must be an RDBMS.
Clojure is a functional programming language based on relational database theory
Everything is an RDBMS
Implement a relational data model and programming on top of hash-maps (NoSQL)
Being functional and being OO are two orthogonal concepts. The issue of mapping flat tables to trees of objects is orthogonal to Functional vs Imperative.
However, functional vs imperative does solve one particular mismatch, namely the mismatch between imperative updates and MVCC. In imperative programming, locking the tables you are working with while you update them is the most intuitive approach, and anything non-sequential is extremely counterintuitive.
In FP, MVCC is much more natural than locks. The natural way to write is to compute the result set, compute the diff with read data, write (i.e. pick the updated dataset as the new one, sharing the data they have in common using persistent data structures), and do a rollback & retry if there is a write-write conflict. This matches exactly what MVCC does.
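A minimal, store-agnostic sketch of that read/compute/compare-and-retry loop (the read and write callables here stand in for whatever snapshot and conditional-write primitives the database provides):
def update_with_retry(read, write, transform, max_attempts=10):
    # optimistic, MVCC-style update: read a snapshot, compute the new value
    # purely, and retry from a fresh snapshot on a write-write conflict
    for _ in range(max_attempts):
        snapshot, version = read()            # immutable view of the current state
        new_value = transform(snapshot)       # pure computation, no side effects
        if write(new_value, expected_version=version):
            return new_value                  # committed
    raise RuntimeError("too many write-write conflicts")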