Over the last couple of months I've been building up a Neo4j database. I'm finding Neo4j & Cypher really easy to use and definitely appropriate for the kind of data that I'm working with.
I'm hoping there's someone out there who can offer a few pointers on how to get started with the REST API. I don't have any experience coding in Java, and I'm finding the Neo4j documentation a little tricky to follow. From what I understand, it should be possible to send a REST request via a straightforward HTTP URL (like http://localhost:7474/db/data/relationship/types), which would retrieve some data as JSON.
My end goal is some form of very high-level dashboard summarising the current status of my database, showing the results of a few Cypher queries like this one:
MATCH (n) RETURN DISTINCT n.team, count(n)
Any advice you can offer would be greatly appreciated.
You would be better off using the HTTP transactional endpoint, where you can send Cypher statements like the one in your question.
The default endpoint is http://yourserverurl:7474/db/data/transaction/commit
The Neo4j documentation for using it from Java:
http://neo4j.com/docs/stable/server-java-rest-client-example.html#_sending_cypher
Using the transactional endpoint has the benefit of letting you send multiple statements in one transaction, which will be committed or rolled back as a unit.
The REST API is like any other HTTP API; the only guidelines to follow concern the request body contents and the Cypher query parameters, which are well explained in the Neo4j documentation: http://neo4j.com/docs/stable/rest-api.html
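For example, a minimal sketch in Java using the JDK 11 java.net.http client (assuming authentication is disabled on the server; otherwise add an Authorization header), which runs the query from your question through the transactional endpoint:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class Neo4jTransactionExample {
    public static void main(String[] args) throws Exception {
        // One transaction with one statement; the "statements" array may hold several.
        String body = "{\"statements\": ["
                + "{\"statement\": \"MATCH (n) RETURN DISTINCT n.team, count(n)\"}"
                + "]}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:7474/db/data/transaction/commit"))
                .header("Content-Type", "application/json")
                .header("Accept", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The reply is JSON containing "results" and "errors" arrays.
        System.out.println(response.body());
    }
}
```

Each entry in the "statements" array can also carry a "parameters" object, so you can send parameterized Cypher instead of concatenating strings.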
My organization has multiple databases that we need to provide search results for. Right now you have to search each database individually. I'm trying to create a web interface that will query all the databases at once and sort the results based upon relevance.
Some of the databases I have direct access to. Others I can only access via a REST API.
My challenge isn't knowing how to query each individual database. I understand how to make API calls. It's how to sort the results by relevance.
On the surface it looks like Elasticsearch would be a good option. Its inverted-index system seems like a good solution for figuring out which results are going to be the most relevant to our users. It's also super fast.
The problem is that I don't see a way (so far) to include results from an external API into Elasticsearch so it can do its magic.
Is there a better option that I'm not aware of? Or is it possible to have Elasticsearch evaluate the relevance of results from an external API while also including data from its own internal indices?
I did find an answer, although nobody replied. :\
The answer is to use the http_poller input plugin with Logstash. This will poll an API on a schedule and ingest the results into Elasticsearch.
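As a rough sketch of such a pipeline (the URL, schedule, and index name below are placeholders, not values from any real setup):

```
# Sketch of a Logstash pipeline: poll an external API and index the results.
input {
  http_poller {
    urls => {
      external_api => "http://example.com/api/search/results"   # placeholder URL
    }
    schedule => { every => "60s" }   # poll once a minute
    codec => "json"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "external-api-results"   # placeholder index name
  }
}
```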
Another option could be some form of microservice orchestration that makes the various API calls and then merges them into a final result set.
I am looking for an overview of what is required to connect to Vespa and retrieve indexed data at scale.
I've run stress tests on the Vespa document RESTful API and, as the documentation suggests, it has an upper bound.
http://docs.vespa.ai/documentation/document-api-guide.html indicates the way forward but assumes a head start on the subject matter.
I can figure out com.yahoo.documentapi.messagebus.MessageBusDocumentAccess and the related bus creation, etc. MessageBusDocumentApiTestCase adds some more to my understanding.
The jrt package (https://github.com/vespa-engine/vespa/tree/master/jrt) and some more resources come to my aid, but the trail, I humbly accept, is tough to put together :)
The trouble is I can't find any guide, if one is documented, that clearly explains how to invoke Vespa from an external system, or, if that's not possible, how to run an embedded client and how it talks to the Vespa cluster.
Please point me to such an overview if it exists.
Edit: vespaclient-java/src/main/java/com/yahoo/vespaget/DocumentRetriever.java is another example. Thoughts?
This seems like a duplicate of a question which has already been answered in a GitHub issue: https://github.com/vespa-engine/vespa/issues/3628
For feeding Vespa clusters from external systems that are not part of your Vespa cluster, we recommend http://docs.vespa.ai/documentation/vespa-http-client.html.
For reading single documents from Vespa with GET operations, the HTTP RESTful API described in http://docs.vespa.ai/documentation/document-api.html is the best option. The RESTful API for GET is built on top of the Document API (http://docs.vespa.ai/documentation/document-api-guide.html), which is a low-level API to use on nodes that are already part of a Vespa cluster and have access to configuration such as the schema, content clusters, and the number of nodes.
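For illustration, a minimal Java sketch of such a single GET against the /document/v1 API; the host, namespace (mynamespace), document type (music), and document id below are placeholders for your own deployment:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class VespaGetExample {
    public static void main(String[] args) throws Exception {
        // Pattern: /document/v1/<namespace>/<document-type>/docid/<user-specified-id>
        String url = "http://vespa-container:8080/document/v1/mynamespace/music/docid/some-id";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // A found document comes back as JSON with "id" and "fields".
        System.out.println(response.statusCode() + "\n" + response.body());
    }
}
```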
I am looking for a database with an HTTP REST API out of the box. I want to skip the middle tier between the client and the database.
One option I found is the HTTP plugin for MySQL, which works with a JSON format:
http://blog.ulf-wendel.de/2014/mysql-5-7-http-plugin-mysql/
Can someone suggest other similar solutions? I want to save development time and effort for some queries.
You really should have a middle layer to sanitize input and prevent unwanted calls from deleting or changing your data, IMO.
Since you claim to just be testing, though, the technologies I know off the top of my head that provide REST out of the box are mostly NoSQL. You mention MySQL with that JSON thing, but I imagine that just goes through a JDBC/ODBC layer.
So what I know is:
Solr/Elasticsearch - while not strictly databases, useful for quickly searchable semi-structured data (see the sketch after this list)
Couchbase - a distributed document and key-value store for JSON documents
Neo4j - a graph database
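To illustrate what "REST out of the box" means in practice, here is a minimal Java sketch that queries Elasticsearch directly over HTTP with no middle tier; the index name and query string are made up:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ElasticsearchRestExample {
    public static void main(String[] args) throws Exception {
        // Elasticsearch exposes search directly over HTTP; "products" is a placeholder index.
        String url = "http://localhost:9200/products/_search?q=name:widget";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // Hits come back as JSON; no application server sits in between.
        System.out.println(response.body());
    }
}
```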
I need to fetch data from a normalized MSSQL database and feed it into a Solr index.
I was just wondering whether Apatar can be used to perform the job. I've gone through its documentation, but I can't find the information I'm looking for. It states that it can fetch data from SQL Server and post it over HTTP, but I'm still not sure whether it can post the fetched data as XML over HTTP or not.
Any advice will be highly valuable. Thank you.
I am not familiar with Apatar, but seeing as it is a Java application, it may be a bit challenging to implement in a Windows environment. However, for various scenarios where I need to fetch data from an MSSQL database and feed it to Solr, I have written custom C# code leveraging the SolrNet client. This tends to be pretty straightforward and simple code, and in the cases where we need to load data at specified intervals, we use scheduled tasks calling a console application. I would recommend checking out the Create/Update section of the SolrNet site for some examples of loading/updating data with the .NET client.
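Purely as an illustration of the same fetch-and-feed flow in Java rather than the C#/SolrNet code described above, here is a sketch using JDBC and the SolrJ client; the connection string, query, and field names are all placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class MssqlToSolr {
    public static void main(String[] args) throws Exception {
        // Placeholders: adjust the JDBC URL, credentials, query and Solr core.
        String jdbcUrl = "jdbc:sqlserver://localhost:1433;databaseName=mydb;user=me;password=secret";
        try (Connection conn = DriverManager.getConnection(jdbcUrl);
             Statement stmt = conn.createStatement();
             SolrClient solr = new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build()) {

            // Fetch rows from the relational source...
            ResultSet rs = stmt.executeQuery("SELECT id, name FROM products");
            while (rs.next()) {
                // ...and map each row to a Solr document.
                SolrInputDocument doc = new SolrInputDocument();
                doc.addField("id", rs.getString("id"));
                doc.addField("name", rs.getString("name"));
                solr.add(doc);
            }
            solr.commit(); // make the new documents searchable
        }
    }
}
```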
Architecture:
A database on a central server, which contains a complex hierarchical structure.
Clients should be able to insert data into tables through the API; the data would be inserted into multiple tables at the same time, not only into one table.
Clients should be able to retrieve data by using a complex search query.
Clients can upload/download files to the server, which could be multiple GBs in size.
Would SOAP be better for this job than REST? Can you please explain why?
Almost all the things you mention are equally achievable using either SOAP or REST, though perhaps a little easier with SOAP. Certainly it's easier to create client APIs for SOAP interfaces; client tooling support is significantly more advanced in the majority of languages.
However, you say that you're wanting to deal with multi-gigabyte upload and download. That's a crucial point as REST is able to handle that sort of thing far more easily. SOAP is almost always tooled in terms of DOM processing, and that means building full messages in memory; you don't ever want to do that with a multi-GB payload.
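To make that concrete, here is a Java sketch of streaming a large upload over plain HTTP with the JDK 11 client; the request body is read from disk as it is sent, so the full payload never sits in memory (the URL and file path are placeholders):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

public class StreamingUpload {
    public static void main(String[] args) throws Exception {
        // BodyPublishers.ofFile streams the file chunk by chunk instead of
        // building the whole message in memory, unlike typical SOAP/DOM tooling.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://example.com/api/files/big-dataset.bin"))
                .header("Content-Type", "application/octet-stream")
                .PUT(HttpRequest.BodyPublishers.ofFile(Path.of("/data/big-dataset.bin")))
                .build();

        HttpResponse<Void> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.discarding());

        System.out.println("Upload status: " + response.statusCode());
    }
}
```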
So go with REST. That's definitely your best option for achieving all your listed objectives.