How can you migrate your data from one graph database (Neo4j, TigerGraph, etc.) to another?
Background:
I have to decide between the W3C standards (RDF(S), OWL) and
property-graph databases (Neo4j, TigerGraph, etc.).
I know that all "triple stores" that support the W3C standard also make it possible to simply "pull out" the data
and import it into another triple store.
For relational databases there is also the standard SQL (and dialects),
so that with a little effort you can get the data from one relational database to another.
But I can't think of such a solution for graph databases.
As someone already mentioned, there is no defined standard for property graphs as of now. There are efforts under way to build such a standard, called GQL: https://www.gqlstandards.org/
However, for importing data from RDF into property graphs, TigerGraph and Neo4j provide options to load your RDF data into the respective platforms. This might not provide complete switch-over capability from RDF to property graph, but it can help in certain scenarios.
For interchanging data between property graphs you might have to re-create the schema when you switch platforms. For data loading, most property-graph databases provide an option to load from CSVs, as in the sketch below.
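To make the CSV route concrete, here is a minimal sketch using the Neo4j Java driver to dump one node label to a CSV file that another property graph's bulk loader could then ingest. The connection details, the Person label, and the name property are all assumptions for the example, not anything mandated by a standard.

    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.PrintWriter;
    import org.neo4j.driver.AuthTokens;
    import org.neo4j.driver.Driver;
    import org.neo4j.driver.GraphDatabase;
    import org.neo4j.driver.Record;
    import org.neo4j.driver.Result;
    import org.neo4j.driver.Session;

    public class ExportNodesToCsv {
        public static void main(String[] args) throws IOException {
            // Hypothetical connection details; adjust to your own instance.
            try (Driver driver = GraphDatabase.driver("bolt://localhost:7687",
                     AuthTokens.basic("neo4j", "password"));
                 Session session = driver.session();
                 PrintWriter out = new PrintWriter(new FileWriter("persons.csv"))) {

                out.println("id,name"); // header row for the target loader
                // Export one (assumed) label; a full migration would repeat
                // this per label and also export relationships.
                Result result = session.run(
                        "MATCH (p:Person) RETURN id(p) AS id, p.name AS name");
                while (result.hasNext()) {
                    Record rec = result.next();
                    // Naive CSV: a real export needs quoting/escaping of values.
                    out.println(rec.get("id").asLong() + ","
                            + rec.get("name").asString());
                }
            }
        }
    }

The schema itself still has to be re-created by hand on the target side, as noted above.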
Related
it's my first time writing here but I'm really stuck with a problem:
is it possible to use the Jena reasoner on a NoSQL database, like Neo4j, already filled with data?
I have a Neo4j graph representing a bunch of triples and I would like to use the Jena API and the Jena reasoner on them. I thought about using the SDB/TDB component of Jena, but I don't get how to actually load the data into my model, since the SDB component seems to work with just SQL databases, and going through the whole TDB javadoc seems to be a bit too much.
Should I define some kind of configuration file for the TDB model too?
Thanks very much for the help.
You should have a look at this link which describes the connection between neo4j and triplestores. Or possible connections at least.
The neo4j model is very different than the RDF model, which Jena uses. RDF is composed of triples, meaning subjects, predicates, and objects. Here is an example of a graph composed of triples. Note the use of URIs for identifying resources, and note that the nodes are typically atomic data values. They're a URI, a simple number, a string, and so on.
In Neo4j, nodes are "Property Containers". Meaning that they're not just URIs, but they're actually bundles of information. Relationships connect nodes. So RDF "predicates" are sort of like Neo4j relationships, but neo4j nodes are not like RDF resources and literals.
Your main task if you want to use reasoners over a neo4j database is going to be to suck data out of neo4j and format it as a set of RDF triples. You can then put those RDF triples into a Jena Model. When you have that Jena model in memory, you can use the existing Jena APIs to run reasoners against it.
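As a rough sketch of that extraction step, the following pulls every relationship out of Neo4j with the Java driver and re-expresses it as a Jena triple. The URI scheme and connection details are invented for the example; a real mapping would also have to decide how node properties become RDF literals.

    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.apache.jena.rdf.model.Property;
    import org.apache.jena.rdf.model.Resource;
    import org.neo4j.driver.AuthTokens;
    import org.neo4j.driver.Driver;
    import org.neo4j.driver.GraphDatabase;
    import org.neo4j.driver.Record;
    import org.neo4j.driver.Result;
    import org.neo4j.driver.Session;

    public class Neo4jToJena {
        public static void main(String[] args) {
            String base = "http://example.org/"; // hypothetical URI scheme
            Model model = ModelFactory.createDefaultModel();

            try (Driver driver = GraphDatabase.driver("bolt://localhost:7687",
                     AuthTokens.basic("neo4j", "password"));
                 Session session = driver.session()) {

                // One RDF triple per Neo4j relationship:
                // (start node) --relationship type--> (end node)
                Result result = session.run(
                        "MATCH (a)-[r]->(b) RETURN id(a) AS s, type(r) AS p, id(b) AS o");
                while (result.hasNext()) {
                    Record rec = result.next();
                    Resource subject = model.createResource(base + "node/" + rec.get("s").asLong());
                    Property predicate = model.createProperty(base + "rel/", rec.get("p").asString());
                    Resource object = model.createResource(base + "node/" + rec.get("o").asLong());
                    model.add(subject, predicate, object);
                }
            }

            // The in-memory model can now be handed to a reasoner, e.g.:
            // InfModel inf = ModelFactory.createRDFSModel(model);
            model.write(System.out, "TURTLE");
        }
    }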
I am in the process of creating a Neo4j implementation of the Jena API. For this I am subclassing ObjectProperty, Individual and OntClass and implementing queries against the Neo4j endpoint.
The main problem is that for reasoning, the whole database must be loaded into memory in order to use Jena's in-memory reasoning. My solution at the moment is to use a "reasoning" server to process this and write new results back to the main persistence layer. This, of course, is only suitable for long-term recommendation systems and not for UI interactions.
Have a look here for the current state of the project:
https://github.com/uzuzjmd/Wissensmodellierung
Path:
competence-database\src\main\scala\uzuzjmd\competence\persistence\neo4j
Anyone interested in participating in this open-source project, feel free to contact me.
I'm a bit late to the party, but you can use https://github.com/neo4j-labs/neosemantics to output the Neo4j data as triples and read those into a Jena Model.
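Once neosemantics has produced an RDF serialization (say, a Turtle file; the filename here is made up), loading it into Jena is essentially a one-liner:

    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.riot.RDFDataMgr;

    public class LoadNeosemanticsExport {
        public static void main(String[] args) {
            // "neo4j-export.ttl" is a hypothetical file exported via neosemantics.
            Model model = RDFDataMgr.loadModel("neo4j-export.ttl");
            model.write(System.out, "TURTLE");
        }
    }

From there the usual Jena reasoner APIs apply, as described in the answer above.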
I have a standard IIS web application which stores its data in a standard SQL Server or Oracle DB.
I'm now interested in storing the data in RDF format, to go full Semantic Web.
Is it recommended to store data directly in RDF format? CRUDQ operations will be done on the data. Performance-wise, is this a good move?
If not (as I'm assuming), I guess I'd maintain standard SQL DBs and export/import data to/from RDF? How can I do this? Are there good converters out there?
If you really want to stick with a relational database, which I don't recommend, you can use something like D2RQ or you can look for something that supports, say, R2RML. Or you could try SDB.
However, if you want to use semantic technologies, you are much better off using an actual RDF database. You'll get better performance and you'll have a better development experience. There are a lot of options out there, Mulgara, Jena, Sesame, Stardog, OWLIM, AllegroGraph, BigData, Virtuoso, and Oracle provides some RDF support if you have an Oracle license, but in my experience, it's not as performant as dedicated RDF databases.
A lot of the SemWeb toolchain is in Java, but since you mentioned IIS, perhaps you're in the MS world, in which case, dotNetRDF is a good option.
But in summary, use dedicated RDF/SemWeb tools if you're going to be using the technology. Don't try and shoehorn RDF into non-semantic stuff, and don't write your own. No need to re-invent the wheel, there's lots of good SemWeb software out there to get your project going.
I think the point is what interface you will use/implement on your (?) data.
There are several ways to go; my own would be to use SWI-Prolog to model the RDF storage, possibly with SPARQL, while accessing the raw data via ODBC.
Then I have a chance to avoid a static mapping (converting can be a risk, I think), at least until some 'stable' semantics emerges from the data.
SWI-Prolog also has strengths as a reporting tool and for generative Web presentation.
Writing the W3C rules in Prolog to map relations to triples 'automatically' seems an interesting task; I'll try it...
Is there such a thing as a schema in a graph database? For example, can you specify which types of node can have relationships with which other types of node?
What does such a schema look like?
Graph databases differ a lot in this area, just like das_weezul says. In the general case, I think graph databases that are closer to object databases (OODBs) also have built-in schema support. One nice thing about graph databases is that they're very well suited for mixing data and metadata, so a common approach for dealing with both schema support and security is to store this kind of metadata in a (sometimes hidden) part of the very same graph.
When it comes to Neo4j - where I'm on the team - there are currently at least two approaches in use for defining schemas:
Defining the schema in annotations, for example using Spring Data Graph (a sketch follows this list).
Using a meta-model layer on top of the database.
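As a sketch of the annotation approach: with a recent Spring Data Neo4j, the schema-by-annotation style looks roughly like this. Note that the annotation names changed over time (the original Spring Data Graph used @NodeEntity and @RelatedTo), and the Person/KNOWS model here is invented for the example.

    import java.util.List;
    import org.springframework.data.neo4j.core.schema.GeneratedValue;
    import org.springframework.data.neo4j.core.schema.Id;
    import org.springframework.data.neo4j.core.schema.Node;
    import org.springframework.data.neo4j.core.schema.Relationship;

    // Maps this type to nodes labeled "Person"; the @Relationship annotation
    // pins down which relationship type (and direction) connects Person nodes.
    @Node("Person")
    public class Person {
        @Id
        @GeneratedValue
        private Long id;

        private String name;

        @Relationship(type = "KNOWS", direction = Relationship.Direction.OUTGOING)
        private List<Person> friends;
    }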
You'll find some more reading on this topic over at myNoSQL.
Yes. Schemas are useful in selecting vertex labels, which are part of both Neo4j 2 and TinkerPop 3. I think writing down the schema helps clarify how the graph should be used, although most databases don't support validation against a schema.
I have a longer post on how to draw the schema as a graph: http://lambdazen.blogspot.com/2014/01/do-property-graphs-have-schemas.html
A graph database will always have a rudimentary schema consisting of (at least) Vertex and Edge objects, where an Edge can contain data about a particular relationship. The degree to which you can add to this schema varies widely across implementations. You may be able to customize the schema by inheriting from Edge and/or Vertex objects, for instance.
If the graph database uses an underlying RDBMS or ODBMS then you may have access to more powerful schema creation and manipulation capabilities.
I have a number of objects, each with an arbitrary number of shared and distinct property-value pairs (more specifically: files and their related properties, such as width and height for images, album/artist/length for music files, etc.). I'd like to be able to search for objects having specific property/values (such as: by album), group by property, etc.
What kind of database would you suggest for this scenario? Due to modularity (the ability to add more properties on the fly), as well as the fact that common properties make up <20% of all properties, standard SQL with normalized tables wouldn't really cut it. I have already tried to approach the problem using a "skinny data model"; however, I faced serious scalability issues.
Are there any specialized databases tuned for this scenario (BSD-licensed solutions preferred)? Or any alternative way to tweak standard RDBMs for this?
Searching for an object having some properties makes me think of an RDF datastore. Have a look at an RDF API (see Jena, Sesame, Virtuoso).
Or BerkeleyDB?
What you're talking about is called the EAV model, or a triple store. The latter can be queried with SPARQL.
Pierre is right; a triplestore is what you want, and RDF is the standard for that. SPARQL is the standard language for querying it (a lot like SQL for RDBMS's).
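To make that concrete, here is a small Jena sketch that stores a file's album as a triple and finds it again with SPARQL. The media vocabulary (the m:album property) is invented for the example.

    import org.apache.jena.query.QueryExecution;
    import org.apache.jena.query.QueryExecutionFactory;
    import org.apache.jena.query.ResultSet;
    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.apache.jena.rdf.model.Property;
    import org.apache.jena.rdf.model.Resource;

    public class FindByAlbum {
        public static void main(String[] args) {
            Model model = ModelFactory.createDefaultModel();
            String ns = "http://example.org/media#"; // hypothetical vocabulary

            // One sample triple: track1 --album--> "Abbey Road"
            Resource track = model.createResource(ns + "track1");
            Property album = model.createProperty(ns, "album");
            track.addProperty(album, "Abbey Road");

            // SPARQL: find every file whose 'album' property matches.
            String q = "PREFIX m: <" + ns + "> "
                     + "SELECT ?file WHERE { ?file m:album \"Abbey Road\" }";
            try (QueryExecution qe = QueryExecutionFactory.create(q, model)) {
                ResultSet rs = qe.execSelect();
                while (rs.hasNext()) {
                    System.out.println(rs.next().get("file"));
                }
            }
        }
    }

Adding a new property on the fly is just another addProperty call; no table migration is needed, which is exactly the modularity asked for.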
Have a look at the databases offered by various Cloud services:
Google AppEngine Datastore (based on Bigtable)
Amazon SimpleDB
Microsoft SDS
Apache CouchDB
If Cloud databases aren't an option, BerkeleyDB could be a good choice.
What other types of database systems are out there? I recently came across CouchDB, which handles data in a non-relational way. It got me thinking about what other models people are using.
So, I want to know what other types of data model are out there. (I'm not looking for any specifics, I just want to see how other people are handling data storage; my interest is purely academic.)
The ones I already know are:
RDBMS (mysql,postgres etc..)
Document based approach (couchDB, lotus notes)
Key/value pair (BerkeleyDB)
db4o
Quote from the "about" page:
db4o is the open source object database that enables Java and .NET developers to store and retrieve any application object with only one line of code, eliminating the need to predefine or maintain a separate, rigid data model.
Older non-relational databases:
Network Database
Hierarchical Database
Both mostly went out of style when relational became feasible.
Column-oriented databases are also a bit of a different animal. Many of them do support standard relational database SQL though. These are generally used for data warehouse type applications.
Semantic Web is also a non-relational data storage paradigm. There are no relations, all metadata is stored in the same way as data, and every entity has potentially its own unique set of attributes. Open-source projects that implement RDF, a Semantic Web standard, include Jena and Sesame.
Isn't Amazon's SimpleDB non-relational?
db4o, as mentioned by Eric, is an Object-Oriented database management system (OODBMS).
There are object-based databases (GemStone, for example). I'm not sure how you would categorize Google's Bigtable and Amazon's Simple Storage Service, but both are map-reduce based.
A non-relational, document-oriented database we have been looking at is Apache CouchDB.
Apache CouchDB is a distributed, fault-tolerant and schema-free document-oriented database accessible via a RESTful HTTP/JSON API. Among other features, it provides robust, incremental replication with bi-directional conflict detection and resolution, and is queryable and indexable using a table-oriented view engine with JavaScript acting as the default view definition language.
Our interest was in providing a distributed user-preferences store that would be immune to shape changes, to which we could serialize preference objects from Java and access them just as easily with JavaScript from a XULRunner-based client application.
I'd like to expand on Bill Karwin's answer about the semantic web and triplestores, since it's what I am working on at the moment, and I have something to say about it.
The idea behind a triplestore is to store a graph-based database whose data model is rooted in RDF. With RDF, you describe nodes and associations among nodes (in other words, edges). Data is organized in triples:
start node ----relation----> end node
(in RDF parlance: subject --predicate--> object). With this very simple data model, any data network can be represented by adding more and more triples, provided you give a meaning to nodes and relations.
RDF is very general, and it's a graph-based data model well suited to search criteria looking for all triples with a particular combination of subject, predicate, or object, in any combination. Eventually, through a query language called SPARQL, you can also perform more complex queries, an operation that boils down to a graph isomorphism search over the graph, both in terms of topology and in terms of node/edge meaning (we'll see this in a moment). SPARQL allows you only SELECT (and similar) queries: no DELETE, no INSERT, no UPDATE (SPARQL 1.1 later added an Update language for those). The information you query (e.g. the specific nodes you are interested in) is mapped into a table, which is what you get as the result of your query.
Now, topology in itself does not mean a lot. For this, schema languages have been invented. Actually, more than one, and calling them schema languages is, in some cases, very limiting. The most famous and widely used today are RDF Schema and OWL (Lite and Full), which derive from the now-obsolete DAML+OIL. The point of these languages is, boiled down, to give a meaning to nodes (by granting them a type, also described as a triple) and to relationships (edges). You can also define the "range" and "domain" of these relationships, or put differently, what type the start node and the end node have: you can say, for example, that the property "numberOfWheels" can only connect a node of type Vehicle to a non-zero integer value.
ns:MyFiat --rdf:type--> ns:Vehicle
ns:MyFiat --ns:numberOfWheels--> 4
Now, you can use these ontologies in two directions: validation and inference. Validation is not that fancy today, but I've seen instances of its use. Inference is what is cool today, because it allows reasoning. Inference basically takes an RDF graph containing a set of triples, takes an ontology, mixes them into a triplestore database which contains an "inference engine", and like magic the inference engine invents triples according to your ontological description. Example: suppose you just store this information in the database
ns:MyFiat --ns:numberOfWheels--> 4
and nothing else. No type is specified for this node, but the inference engine will automatically add a triple saying that
ns:MyFiat --rdf:type--> ns:Vehicle
because you said in your ontology that only objects of type Vehicle can be described by a property numberOfWheels.
Conversely, you can use the inference engine to validate your data against the ontology so as to refuse non-compliant data (sort of like XML Schema for XML). In this case, you will need both triples for your data to be accepted by the triplestore.
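Here is a minimal Jena sketch of exactly this inference: an RDFS reasoner plus an rdfs:domain statement is enough to reproduce the MyFiat example above. The namespace is invented, and note that the richer constraint (a non-zero integer value) would need OWL rather than plain RDFS.

    import org.apache.jena.rdf.model.InfModel;
    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.apache.jena.rdf.model.Property;
    import org.apache.jena.rdf.model.Resource;
    import org.apache.jena.vocabulary.RDF;
    import org.apache.jena.vocabulary.RDFS;

    public class DomainInference {
        public static void main(String[] args) {
            String ns = "http://example.org/ns#"; // hypothetical namespace

            // Ontology: numberOfWheels has domain Vehicle.
            Model schema = ModelFactory.createDefaultModel();
            Property numberOfWheels = schema.createProperty(ns, "numberOfWheels");
            Resource vehicle = schema.createResource(ns + "Vehicle");
            schema.add(numberOfWheels, RDFS.domain, vehicle);

            // Data: only the wheel count, no explicit rdf:type.
            Model data = ModelFactory.createDefaultModel();
            Resource myFiat = data.createResource(ns + "MyFiat");
            data.addLiteral(myFiat, numberOfWheels, 4);

            // The RDFS reasoner infers: ns:MyFiat rdf:type ns:Vehicle.
            InfModel inf = ModelFactory.createRDFSModel(schema, data);
            System.out.println(inf.contains(myFiat, RDF.type, vehicle)); // true
        }
    }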
Additional characteristics of triplestores are formulas and context-aware storage. Formulas are statements (as usual, subject-predicate-object triples) that describe something hypothetical. I never used formulas, so I won't go into details of something I don't know. Context awareness basically means subgraphs: the problem with storing triples is that you have no way to say where those triples came from. Suppose you have two dealers that describe the price of the same component. One says that the price is 5.99 and the other 4.99. If you just store both triples in a database, you no longer know anything about who stated each piece of information. There are two ways to solve this problem.
One is reification. Reification means that you store additional triples to describe another triple. It's wasteful, and makes life hell because you have to reify each and every triple you store. The alternative is context awareness. Context-aware storage is like being able to box a bunch of triples into a container with a label on it (the context identifier). You can then use this identifier as the subject of additional statements, hence describing a bunch of triples in a single action.
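In Jena, this context-aware style maps naturally onto named graphs in a Dataset. A sketch of the two-dealers example above, with made-up URIs:

    import org.apache.jena.query.Dataset;
    import org.apache.jena.query.DatasetFactory;
    import org.apache.jena.query.QueryExecution;
    import org.apache.jena.query.QueryExecutionFactory;
    import org.apache.jena.query.ResultSetFormatter;
    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;

    public class ContextsAsNamedGraphs {
        public static void main(String[] args) {
            Dataset ds = DatasetFactory.create();
            String ns = "http://example.org/"; // hypothetical namespace

            // Each dealer's statements go into their own named graph ("context").
            Model dealerA = ModelFactory.createDefaultModel();
            dealerA.createResource(ns + "component")
                   .addProperty(dealerA.createProperty(ns, "price"), "5.99");
            ds.addNamedModel(ns + "graph/dealerA", dealerA);

            Model dealerB = ModelFactory.createDefaultModel();
            dealerB.createResource(ns + "component")
                   .addProperty(dealerB.createProperty(ns, "price"), "4.99");
            ds.addNamedModel(ns + "graph/dealerB", dealerB);

            // SPARQL can ask which graph (i.e. which dealer) stated each price.
            String q = "SELECT ?g ?price WHERE { GRAPH ?g { ?s <" + ns + "price> ?price } }";
            try (QueryExecution qe = QueryExecutionFactory.create(q, ds)) {
                ResultSetFormatter.out(qe.execSelect());
            }
        }
    }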
4. Navigational. Includes Tree/Hierarchy and Graph/Network.
File systems, the semantic web, XML, Object databases, CODASYL, and many others all fit into this category.
Those 4 are pretty much it.
There is also what is referred to as an "inverted index" or "inverted list" database. Software AG's Adabas product would be an example. As with hierarchical databases, these continue to be used in large corporate or university environments because of legacy considerations or due to a performance advantage in certain situations (typically high-end transactional applications).
There are BASE systems (Basically Available, Soft State, Eventually consistent) and they work well with simple data models holding vast volumes of data. Google's BigTable, Dojo's Persevere, Amazon's Dynamo, Facebook's Cassandra are some examples.
The Illuminate Correlation Database is a new, revolutionary non-relational database. The Correlation Database Management System (CDBMS) is data-model independent and designed to efficiently handle unplanned, ad hoc queries in an analytical system environment. Unlike relational database management systems or column-oriented databases, a correlation database uses a value-based storage (VBS) architecture in which each unique data value is stored only once and an auto-generated indexing system maintains the context for all values (data is 100% indexed). Queries are performed using natural language instead of SQL (NoSQL).
Learn more at: www.datainnovationsgroup.com