How can you migrate your data from one graph database (Neo4j, TigerGraph, etc.) to another?
Background:
I have to decide between the W3C standards (RDF(S), OWL) and
property graph databases (Neo4j, TigerGraph, etc.).
I know that all "triple stores" that support the W3C standard also make it possible to simply "pull out" the data
and import it into another triple store.
For relational databases there is also the SQL standard (and its dialects),
so that with a little effort you can get the data from one relational database to another.
But I can't think of such a solution for graph databases.
As someone already mentioned, there is no defined standard for property graphs as of now. There are ongoing efforts to build such a standard, called GQL: https://www.gqlstandards.org/
However, for importing data from RDF into property graphs, TigerGraph and Neo4j provide options to load your RDF data into their respective platforms. This might not provide complete switch-over capability from RDF to property graphs, but it can help in certain scenarios.
For interchanging data between property graphs, you might have to re-create the schema when you switch platforms. For data loading, most property graph databases provide an option to load from CSV files.
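To make that concrete, here is a minimal sketch of driving such a CSV load through the Neo4j Java driver. The connection details, file name, column names, and label are invented placeholders, and TigerGraph would use its own loading-job syntax instead:

import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.Driver;
import org.neo4j.driver.GraphDatabase;
import org.neo4j.driver.Session;

public class CsvLoad {
    public static void main(String[] args) {
        // Placeholder connection details; adjust to your deployment.
        try (Driver driver = GraphDatabase.driver("bolt://localhost:7687",
                AuthTokens.basic("neo4j", "password"));
             Session session = driver.session()) {
            // LOAD CSV reads a file from Neo4j's import directory and
            // creates/updates one node per row; the header column names
            // and the Person label are made up for this example.
            session.run(
                "LOAD CSV WITH HEADERS FROM 'file:///nodes.csv' AS row " +
                "MERGE (p:Person {id: row.id}) SET p.name = row.name");
        }
    }
}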
I am trying to compare data ingested into both Accumulo and Solr from the same source XML. The Accumulo ingest is legacy code, while the Solr ingest is new code. I can easily extract data from Solr using SolrCloud, choosing CSV or JSON output, which is easily readable. But I'm at a loss for how to easily view the data in Accumulo. I used scan to view the data, but it is not easily readable. Is there a way to export the data in Accumulo to CSV or something similar, so it will be easy to read and compare with other datasets?
As I understand it, Apache Solr is a document store which uses Lucene indexes to make search fast via a web-based REST interface. On the other hand, Apache Accumulo is a massively scalable sorted key-value store, which stores arbitrary key-value pairs with cell-level security labels, in accordance with the user's application, queryable with a Java API. It makes no sense to compare the two. They are entirely different applications. Accumulo is a low-level infrastructure application, upon which you can build complex systems, such as a search engine comparable to Solr, but it is not directly comparable to Solr because Accumulo is not a search engine.
To answer your question about how to view data in Accumulo, the answer is to use its Java API. I recommend starting with the Tour on its web page, for some examples of how to query it. As for how the data is presented, and in what form, that depends on the application which ingested it in the first place. It can be arbitrary binary data in byte arrays and may not be directly viewable; that depends on the application. Accumulo is agnostic to the nature of the data stored in its key-value pairs.
When you said "I used scan to view the data", you were probably referring to the scan command in Accumulo's shell. Be aware that the shell is not the primary interface for querying; it is intended for system administration and triage of data ingest. The Java API is the primary means of querying.
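For what it's worth, a minimal sketch of that kind of export with the Java client API might look like the following. The instance name, ZooKeeper host, credentials, and table name are placeholders, and it naively assumes the values are printable text (which, as noted above, may not hold for your application):

import java.util.Map;
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.Scanner;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Value;
import org.apache.accumulo.core.security.Authorizations;

public class ExportToCsv {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; adjust to your cluster.
        Connector conn = new ZooKeeperInstance("myInstance", "zk1:2181")
                .getConnector("user", new PasswordToken("secret"));
        Scanner scanner = conn.createScanner("myTable", new Authorizations());
        // One CSV line per key-value pair: row, family, qualifier, value.
        // No CSV escaping is done here; real data may need it.
        for (Map.Entry<Key, Value> entry : scanner) {
            Key k = entry.getKey();
            System.out.printf("%s,%s,%s,%s%n",
                    k.getRow(), k.getColumnFamily(),
                    k.getColumnQualifier(), entry.getValue());
        }
    }
}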
The Accumulo open source community is pretty responsive to questions. If you're having trouble figuring out how best to use it for your needs, I would advise asking on their community mailing lists, which can be found on their website. StackOverflow is more suitable for very specific questions than for generalized "getting started" tutorials.
It's my first time writing here, but I'm really stuck on a problem:
is it possible to use the Jena reasoner on a NoSQL database, like Neo4j, already filled with data?
I have a Neo4j graph representing a bunch of triples, and I would like to use the Jena API and the Jena reasoner on them. I thought about using the SDB/TDB component of Jena, but I don't get how to actually load the data into my model, since the SDB component seems to work with just SQL databases, and going through the whole TDB javadoc seems to be a bit too much.
Should I define some kind of configuration file for the TDB model too?
Thanks very much for the help.
You should have a look at this link, which describes the connection between Neo4j and triplestores (or possible connections, at least).
The Neo4j model is very different from the RDF model, which Jena uses. RDF is composed of triples, meaning subjects, predicates, and objects. Here is an example of a graph composed of triples. Note the use of URIs for identifying resources, and note that the nodes are typically atomic data values: a URI, a simple number, a string, and so on.
In Neo4j, nodes are "property containers": they're not just URIs, but bundles of information. Relationships connect nodes. So RDF predicates are sort of like Neo4j relationships, but Neo4j nodes are not like RDF resources and literals.
Your main task, if you want to use reasoners over a Neo4j database, is going to be to suck the data out of Neo4j and format it as a set of RDF triples. You can then put those RDF triples into a Jena Model. Once you have that Jena Model in memory, you can use the existing Jena APIs to run reasoners over it.
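As a rough sketch with the Jena API (the namespace and the extracted rows are invented for illustration, and the actual Neo4j extraction step is elided):

import org.apache.jena.rdf.model.InfModel;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.reasoner.ReasonerRegistry;

public class Neo4jToJena {
    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        String ns = "http://example.org/"; // made-up namespace

        // Pretend these rows came out of a query over your Neo4j graph;
        // in reality you would iterate over the driver's result records.
        String[][] rows = { { "MyFiat", "numberOfWheels", "4" } };
        for (String[] row : rows) {
            Resource s = model.createResource(ns + row[0]);
            Property p = model.createProperty(ns + row[1]);
            model.addLiteral(s, p, Integer.parseInt(row[2]));
        }

        // Wrap the extracted triples in an inference model and use it
        // like any other Jena model.
        InfModel inf = ModelFactory.createInfModel(
                ReasonerRegistry.getRDFSReasoner(), model);
        inf.write(System.out, "TURTLE");
    }
}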
I am in the process of creating a Neo4j implementation of the Jena API. For this I am subclassing ObjectProperty, Individual, and OntClass, and implementing queries to the Neo4j endpoint.
The main problem is that the whole database must be loaded into memory in order to use Jena's in-memory reasoning. My solution at the moment is to use a "reasoning" server to process this and write the new results back to the main persistence layer. This, of course, is only suitable for long-term recommendation systems and not for UI interactions.
Have a look here for the current state of the project:
https://github.com/uzuzjmd/Wissensmodellierung
Path:
competence-database\src\main\scala\uzuzjmd\competence\persistence\neo4j
Anyone interested in participating in this open source project, feel free to contact me.
I'm a bit late to the party, but you can use https://github.com/neo4j-labs/neosemantics to output the Neo4j data as triples and read that into a Jena Model.
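A minimal sketch, assuming you have already exported the Neo4j graph to a Turtle file with neosemantics (the file name is a placeholder):

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

public class NeosemanticsImport {
    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        // export.ttl stands in for whatever Turtle output you got from
        // neosemantics; Jena parses it straight into an in-memory model.
        model.read("file:export.ttl", "TURTLE");
        System.out.println("Triples loaded: " + model.size());
        // From here you can attach a reasoner, query with SPARQL, etc.
    }
}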
I have a number of objects, each one having an arbitrary number of shared and distinct property-value pairs (more specifically: files and their related properties, such as width and height for images, or album/artist/length for music files). I'd like to be able to search for objects having specific property values (such as by album), group by property, etc.
What kind of database would you suggest for this scenario? Due to modularity (the ability to add more properties on the fly), as well as the fact that common properties are <20% of all properties, standard SQL with normalized tables wouldn't really cut it. I have already tried to approach the problem using a "skinny data model"; however, I ran into serious scalability issues.
Are there any specialized databases tuned for this scenario (BSD-licensed solutions preferred)? Or is there any alternative way to tweak standard RDBMSs for this?
Searching for an object having some properties makes me think of an RDF datastore. Have a look at an RDF API (see Jena, Sesame, Virtuoso).
Or BerkeleyDB?
What you're talking about is called the EAV model, or a triple store. The latter can be queried with SPARQL.
Pierre is right; a triplestore is what you want, and RDF is the standard for that. SPARQL is the standard language for querying it (a lot like SQL for RDBMSs).
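To sketch the idea with Jena (all names invented): each file becomes a resource, each property-value pair becomes one triple, so different files can carry completely different sets of properties, and "search by album" is just a lookup over triples.

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.ResIterator;

public class FileCatalog {
    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        String ns = "http://example.org/media#"; // made-up namespace
        Property album = model.createProperty(ns, "album");
        Property width = model.createProperty(ns, "width");

        // Two files with entirely different property sets.
        model.createResource(ns + "track01.mp3")
             .addProperty(album, "Abbey Road");
        model.createResource(ns + "cover.png")
             .addLiteral(width, 1024);

        // Find every resource whose ns:album is "Abbey Road".
        ResIterator hits = model.listResourcesWithProperty(
                album, model.createLiteral("Abbey Road"));
        while (hits.hasNext()) {
            System.out.println(hits.next().getURI());
        }
    }
}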
Have a look at the databases offered by various Cloud services:
Google AppEngine Datastore
(based on Bigtable)
Amazon SimpleDB
Microsoft SDS
Apache CouchDB
If Cloud databases aren't an option, BerkeleyDB could be a good choice.
What are the other types of database systems out there? I've recently come across CouchDB, which handles data in a non-relational way. It got me thinking about what other models other people are using.
So, I want to know what other types of data models are out there. (I'm not looking for any specifics; I just want to see how other people are handling data storage. My interest is purely academic.)
The ones I already know are:
RDBMS (MySQL, Postgres, etc.)
Document-based approach (CouchDB, Lotus Notes)
Key/value pair (BerkeleyDB)
db4o
Quote from the "about" page:
db4o is the open source object database that enables Java and .NET developers to store and retrieve any application object with only one line of code, eliminating the need to predefine or maintain a separate, rigid data model.
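The "one line of code" claim is easy to see in a short sketch (the Pilot class and file name are just illustrative):

import com.db4o.Db4oEmbedded;
import com.db4o.ObjectContainer;
import com.db4o.ObjectSet;

public class Db4oDemo {
    // A plain application class; no mapping or schema is declared.
    static class Pilot {
        String name;
        Pilot(String name) { this.name = name; }
    }

    public static void main(String[] args) {
        ObjectContainer db = Db4oEmbedded.openFile("pilots.db4o");
        try {
            db.store(new Pilot("Kimi")); // the one-line store
            // Query by example: a template object with null fields
            // matches any value for those fields.
            ObjectSet results = db.queryByExample(new Pilot(null));
            while (results.hasNext()) {
                Pilot p = (Pilot) results.next();
                System.out.println(p.name);
            }
        } finally {
            db.close();
        }
    }
}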
Older non-relational databases:
Network Database
Hierarchical Database
Both mostly went out of style when relational became feasible.
Column-oriented databases are also a bit of a different animal. Many of them do support standard relational database SQL though. These are generally used for data warehouse type applications.
Semantic Web is also a non-relational data storage paradigm. There are no relations, all metadata is stored in the same way as data, and every entity has potentially its own unique set of attributes. Open-source projects that implement RDF, a Semantic Web standard, include Jena and Sesame.
Isn't Amazon's SimpleDB non-relational?
db4o, as mentioned by Eric, is an Object-Oriented database management system (OODBMS).
There are object-based databases (GemStone, for example). I'm not sure how you would categorize Google's Bigtable and Amazon's Simple Storage Service, but both are map-reduce based.
A non-relational document oriented database we have been looking at is Apache CouchDB.
Apache CouchDB is a distributed, fault-tolerant and schema-free document-oriented database accessible via a RESTful HTTP/JSON API. Among other features, it provides robust, incremental replication with bi-directional conflict detection and resolution, and is queryable and indexable using a table-oriented view engine with JavaScript acting as the default view definition language.
Our interest was in providing a distributed user-preferences store that would be immune to shape changes, to which we could serialize preference objects from Java and access them just as easily with JavaScript from a XULRunner-based client application.
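To make the "RESTful HTTP/JSON" point concrete, storing a document is a single PUT. A sketch with plain java.net (the host, database name, document id, and fields are invented):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class CouchPut {
    public static void main(String[] args) throws Exception {
        // PUT /prefs/user42 creates (or updates) one JSON document.
        URL url = new URL("http://localhost:5984/prefs/user42");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json");
        byte[] body = "{\"theme\":\"dark\",\"fontSize\":12}"
                .getBytes(StandardCharsets.UTF_8);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body);
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}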
I'd like to expand on Bill Karwin's answer about the semantic web and triplestores, since it's what I am working on at the moment, and I have something to say about it.
The idea behind a triplestore is to store a graph-based database whose data model is rooted in RDF. With RDF, you describe nodes and the associations among nodes (in other words, edges). Data is organized in triples:
start node ----relation----> end node
(in RDF speak: subject --predicate--> object). With this very simple data model, any data network can be represented by adding more and more triples, provided you give a meaning to nodes and relations.
RDF is very general, and its graph-based data model is well suited to search criteria looking for all triples with a particular combination of subject, predicate, or object, in any combination. Eventually, through a query language called SPARQL, you can also perform more complex queries, an operation that boils down to a graph isomorphism search over the graph, both in terms of topology and in terms of node-edge meaning (we'll see this in a moment). SPARQL allows you only SELECT (and similar) queries: no DELETE, no INSERT, no UPDATE (these were added later by SPARQL 1.1 Update). The information you query (e.g. the specific nodes you are interested in) is mapped into a table, which is what you get as a result of your query.
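With Jena, for instance, a SELECT over an in-memory model is only a few lines (the data file and the ns: prefix are placeholders):

import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.ResultSet;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

public class SparqlSelect {
    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        model.read("file:data.ttl", "TURTLE"); // placeholder data file

        // Find every subject that has ns:numberOfWheels 4.
        String q =
            "PREFIX ns: <http://example.org/ns#> " +
            "SELECT ?thing WHERE { ?thing ns:numberOfWheels 4 }";
        try (QueryExecution qe = QueryExecutionFactory.create(q, model)) {
            ResultSet results = qe.execSelect();
            while (results.hasNext()) {
                // Each row of the result table binds ?thing to a node.
                System.out.println(results.next().get("thing"));
            }
        }
    }
}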
Now, topology in itself does not mean a lot. For this, schema languages have been invented. Actually, more than one, and calling them schema languages is, in some cases, very limiting. The most famous and most used today are RDF-Schema and OWL (Lite and Full), which derive from the now-obsolete DAML+OIL. The point of these languages is, boiled down, to give a meaning to nodes (by granting them a type, also described as a triple) and to relationships (edges). Also, you can define the "range" and "domain" of these relationships, or, said differently, what type the start node and the end node must have: you can say, for example, that the property "numberOfWheels" can only connect a node of type Vehicle to a non-zero integer value.
ns:MyFiat --rdf:type--> ns:Vehicle
ns:MyFiat --ns:numberOfWheels--> 4
Now, you can use these ontologies in two directions: validation and inference. Validation is not that fancy today, but I've seen instances of its use. Inference is what is cool today, because it allows reasoning. Inference basically takes an RDF graph containing a set of triples, takes an ontology, and mixes them in a triplestore database which contains an "inference engine"; like magic, the inference engine invents triples according to your ontological description. Example: suppose you just store this information in the database
ns:MyFiat --ns:numberOfWheels--> 4
and nothing else. No type is specified for this node, but the inference engine will automatically add a triple saying that
ns:MyFiat --rdf:type--> ns:Vehicle
because you said in your ontology that only objects of type Vehicle can be described by the property numberOfWheels.
Conversely, you can use the inference engine to validate your data against the ontology, so as to reject non-compliant data (sort of like XML Schema for XML). In this case, you will need both triples for your data to be accepted by the triplestore.
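Here is the whole round trip as a minimal Jena sketch (the ns: namespace is invented; the RDFS reasoner is used because rdfs:domain is enough for this particular inference):

import org.apache.jena.rdf.model.InfModel;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.vocabulary.RDF;
import org.apache.jena.vocabulary.RDFS;

public class DomainInference {
    public static void main(String[] args) {
        String ns = "http://example.org/ns#"; // made-up namespace

        // The ontology: numberOfWheels has domain Vehicle.
        Model schema = ModelFactory.createDefaultModel();
        Resource vehicle = schema.createResource(ns + "Vehicle");
        Property numberOfWheels = schema.createProperty(ns + "numberOfWheels");
        schema.add(numberOfWheels, RDFS.domain, vehicle);

        // The data: only the wheel count, no explicit rdf:type.
        Model data = ModelFactory.createDefaultModel();
        Resource myFiat = data.createResource(ns + "MyFiat");
        data.addLiteral(myFiat, numberOfWheels, 4);

        // The reasoner infers ns:MyFiat rdf:type ns:Vehicle.
        InfModel inf = ModelFactory.createRDFSModel(schema, data);
        System.out.println(inf.contains(myFiat, RDF.type, vehicle)); // true
    }
}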
Additional characteristics of triplestores are formulas and context-aware storage. Formulas are statements (as usual, subject-predicate-object triples) that describe something hypothetical. I have never used formulas, so I won't go into details of something I don't know. Contexts are basically subgraphs: the problem with storing triples is that you don't have anywhere to say where these triples came from. Suppose you have two dealers that describe the price of the same component. One says that the price is 5.99 and the other 4.99. If you just store both triples in a database, you don't know anything about who stated each piece of information. There are two ways to solve this problem.
One is reification. Reification means that you store additional triples to describe another triple. It's wasteful, and it makes life hell because you have to reify each and every triple you store. The alternative is context awareness. Having context-aware storage is like being able to box a bunch of triples into a container with a label on it (the context identifier). You can now use this identifier as the subject of additional statements, hence describing a bunch of triples in a single action.
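Modern triplestores usually expose contexts as "named graphs"; in Jena that is the Dataset API. A small sketch of the two-dealer example (all names invented):

import org.apache.jena.query.Dataset;
import org.apache.jena.query.DatasetFactory;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

public class DealerContexts {
    public static void main(String[] args) {
        String ns = "http://example.org/ns#";
        Dataset dataset = DatasetFactory.create();

        // One named graph (context) per dealer, each holding that
        // dealer's statement about the same component's price.
        Model dealer1 = ModelFactory.createDefaultModel();
        dealer1.createResource(ns + "component")
               .addLiteral(dealer1.createProperty(ns + "price"), 5.99);
        dataset.addNamedModel(ns + "dealer1", dealer1);

        Model dealer2 = ModelFactory.createDefaultModel();
        dealer2.createResource(ns + "component")
               .addLiteral(dealer2.createProperty(ns + "price"), 4.99);
        dataset.addNamedModel(ns + "dealer2", dealer2);

        // The graph name tells you who stated which price.
        dataset.getNamedModel(ns + "dealer1").write(System.out, "TURTLE");
    }
}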
4. Navigational. Includes Tree/Hierarchy and Graph/Network.
File systems, the semantic web, XML, Object databases, CODASYL, and many others all fit into this category.
Those 4 are pretty much it.
There is also what is referred to as an "inverted index" or "inverted list" database. Software AG's Adabas product would be an example. As with hierarchical databases, these continue to be used in large corporate or university environments because of legacy considerations, or due to a performance advantage in certain situations (typically high-end transactional applications).
There are BASE systems (Basically Available, Soft State, Eventually consistent) and they work well with simple data models holding vast volumes of data. Google's BigTable, Dojo's Persevere, Amazon's Dynamo, Facebook's Cassandra are some examples.
The illuminate Correlation Database is a new, revolutionary non-relational database. The Correlation Database Management System (CDBMS) is data-model independent and designed to efficiently handle unplanned, ad hoc queries in an analytical system environment. Unlike relational database management systems or column-oriented databases, a correlation database uses a value-based storage (VBS) architecture, in which each unique data value is stored only once and an auto-generated indexing system maintains the context for all values (data is 100% indexed). Queries are performed using natural language instead of SQL (NoSQL).
Learn more at: www.datainnovationsgroup.com