I am looking for a database with an HTTP REST API out of the box. I want to skip the middle tier between client and database.
One option I found is an HTTP plugin for MySQL which works with the JSON format:
http://blog.ulf-wendel.de/2014/mysql-5-7-http-plugin-mysql/
Can someone suggest other similar solutions? I want to save development time and effort for some queries.
You really should have a middle layer to sanitize input and prevent unwanted calls deleting or changing your data, IMO.
Since you claim to just be testing, though, the technologies I know off the top of my head that provide REST out of the box are mostly NoSQL. You mention MySQL with that JSON thing, but I imagine that just goes through a JDBC/ODBC layer.
So what I know is:
Solr/Elasticsearch - while not strictly databases, they are useful for quickly searching semi-structured data (see the sketch after this list)
Couchbase - a distributed document and key-value store for JSON documents
Neo4j - a graph database
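To give a feel for what "REST out of the box" looks like, here is a minimal sketch against Elasticsearch using Python's requests library (the index name and document are made up, and the URL shape follows recent Elasticsearch versions):

import requests

base = "http://localhost:9200"

# Index a document - no driver or middle tier, just HTTP and JSON
requests.put(base + "/teams/_doc/1", json={"name": "backend", "size": 5})

# Read it back the same way
doc = requests.get(base + "/teams/_doc/1").json()
print(doc["_source"])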
Over the last couple of months I've been building up a Neo4j database. I'm finding Neo4j & Cypher really easy to use and definitely appropriate for the kind of data that I'm working with.
I'm hoping there's someone out there who can offer a few pointers on how to get started with the REST API. I don't have any experience coding in Java, and I'm finding the Neo4j documentation a little tricky to follow. From what I understand, it should be possible to send a REST request via a straightforward HTTP URL (like http://localhost:7474/db/data/relationship/types), which would retrieve some data as JSON.
My end goal is some form of very high-level dashboard to summarise the current status of my database, showing the results of a few high-level Cypher queries like this one:
MATCH (n) RETURN DISTINCT n.team, count(n)
Any advice you can offer would be greatly appreciated.
You would be better off using the HTTP transactional endpoint, where you can send Cypher statements like the one in your question.
The default endpoint is http://yourserverurl:7474/db/data/transaction/commit
The Neo4j documentation for using it from Java:
http://neo4j.com/docs/stable/server-java-rest-client-example.html#_sending_cypher
Using the transactional endpoint has the benefit of letting you send multiple statements in one transaction, which will be committed or rolled back as a unit.
The REST API is like any other HTTP API; the only conventions to follow concern the request body and the Cypher query parameters, which are well explained in the Neo4j documentation: http://neo4j.com/docs/stable/rest-api.html
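For the dashboard use case, all you need is an HTTP client. A minimal sketch in Python with the requests library, assuming a default local install with authentication disabled and using the query from the question:

import requests

# Neo4j transactional Cypher endpoint (default local install assumed)
url = "http://localhost:7474/db/data/transaction/commit"

payload = {
    "statements": [
        {"statement": "MATCH (n) RETURN DISTINCT n.team, count(n)"}
    ]
}

response = requests.post(url, json=payload)
response.raise_for_status()

# Each statement yields a result with column names and data rows
for result in response.json()["results"]:
    print(result["columns"])
    for row in result["data"]:
        print(row["row"])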
I need to fetch data from a normalized MSSQL db and feed it into a Solr index.
I was just wondering whether Apatar can be used to perform the job. I've gone through its documentation, but couldn't find the information I'm looking for. It states that it can fetch data from SQL Server and post it over HTTP, but I'm still not sure whether it can post the fetched data as XML over HTTP.
Any advice will be highly valuable. Thank you.
I am not familiar with Apatar, but seeing as it is a Java application, it may be a bit challenging to implement in a Windows environment. However, for various scenarios where I need to fetch data from an MSSQL database and feed it to Solr, I have written custom C# code leveraging the SolrNet client. This tends to be pretty straightforward and simple code, and in the cases where we need to load data at specified intervals we use scheduled tasks calling a console application. I would recommend checking out the Create/Update section of the SolrNet site for some examples of loading/updating data with the .NET client.
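The C# itself isn't shown above, but the fetch-and-feed pattern is simple enough to sketch. Here is a rough Python equivalent, assuming the pyodbc driver and Solr's JSON update endpoint; the connection string, core name, and columns are hypothetical:

import pyodbc
import requests

# Hypothetical connection string and query - adjust to your schema
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=mydb;Trusted_Connection=yes;"
)
cursor = conn.cursor()
cursor.execute("SELECT id, title, body FROM articles")

# Build Solr documents from the result set
docs = [
    {"id": str(row.id), "title": row.title, "body": row.body}
    for row in cursor.fetchall()
]

# Post to Solr's JSON update handler and commit in one request
resp = requests.post(
    "http://localhost:8983/solr/mycore/update?commit=true",
    json=docs,
)
resp.raise_for_status()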
For my new project I'm planning to use JSON data stored in a text file rather than fetching data from a database. My concept is to save a JSON file on the server whenever the admin creates a new entry in the database.
As there is no issue of security, will this approach make user access to the data faster, or shall I go with the usual database queries?
JSON is typically used as a way to format the data for the purpose of transporting it somewhere. Databases are typically used for storing data.
What you've described may be perfectly sensible, but you really need to say a little bit more about your project before the community can comment on your approach.
What's the pattern of access? Is it always read-only for the user, editable only by site administrator for example?
You shouldn't worry about performance early on. Worry more about ease of development, maintenance and reliability, you can always optimise afterwards.
You may want to look at http://www.mongodb.org/. MongoDB is a document-centric store that uses a binary form of JSON (BSON) as its storage format.
JSON in combination with jQuery is a great option for fast, smooth web page updates, but ultimately it still comes down to the same database query.
Just make sure your query is efficient. Use a stored proc.
JSON is just the way the data is sent from the server (a web controller in MVC, or code-behind in standard C#) to the client (jQuery or JavaScript).
Ultimately the database will be queried the same way.
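To make that concrete, here is a minimal sketch (in Python/Flask rather than the .NET stack mentioned above; the route, database, and table are hypothetical) where JSON is only the response format wrapped around an ordinary database query:

import sqlite3
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/entries")
def entries():
    # The same query a server-rendered page would run;
    # JSON is only the transport format for the response.
    conn = sqlite3.connect("site.db")
    rows = conn.execute("SELECT id, title FROM entries").fetchall()
    conn.close()
    return jsonify([{"id": r[0], "title": r[1]} for r in rows])

if __name__ == "__main__":
    app.run()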
You should stick with the classic method (database), because you'll face many problems with concurrency and with having too many files to handle.
I think you should go with the usual database query.
If you use JSON files you'll have to keep them in sync with the DB (which means extra work) and you'll face I/O problems (if your site is super busy).
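To illustrate the extra synchronization work, here is a minimal sketch (sqlite3 stands in for your database; file and table names are hypothetical) of regenerating the JSON snapshot on every admin insert:

import json
import sqlite3

def add_entry(title, body):
    # Write to the database first...
    conn = sqlite3.connect("site.db")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS entries "
        "(id INTEGER PRIMARY KEY, title TEXT, body TEXT)"
    )
    conn.execute("INSERT INTO entries (title, body) VALUES (?, ?)", (title, body))
    conn.commit()

    # ...then regenerate the JSON snapshot the frontend reads.
    # This sync step must run on every write, and it is not safe
    # against concurrent writers without extra locking.
    rows = conn.execute("SELECT id, title, body FROM entries").fetchall()
    conn.close()
    with open("entries.json", "w") as f:
        json.dump(
            [{"id": r[0], "title": r[1], "body": r[2]} for r in rows], f
        )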
Architecture:
a database on a central server which contains a complex hierarchical structure.
The clients should be able to insert data into tables through the API; the data would be inserted into multiple tables in the database at the same time, not only into one table.
The clients should be able to retrieve data by using a complex search query.
The clients can upload/download files to the server, which could be multiple GBs in size.
Would SOAP be better for this job than REST? Can you please explain why?
Almost all the things you mention are equally achievable using either SOAP or REST, though perhaps a little easier with SOAP. Certainly it's easier to create client APIs for SOAP interfaces; client tooling support is significantly more advanced in the majority of languages.
However, you say that you're wanting to deal with multi-gigabyte upload and download. That's a crucial point as REST is able to handle that sort of thing far more easily. SOAP is almost always tooled in terms of DOM processing, and that means building full messages in memory; you don't ever want to do that with a multi-GB payload.
So go with REST. That's definitely your best option for achieving all your listed objectives.
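To illustrate the payload point: with REST over plain HTTP you can stream a large body straight from disk without ever holding it in memory. A minimal sketch with Python's requests library (the URL and filename are hypothetical):

import requests

# Passing a file object streams the request body in chunks instead of
# building the whole multi-GB payload in memory, which is what
# DOM-based SOAP tooling would typically force you to do.
with open("backup.bin", "rb") as f:
    resp = requests.put("http://example.com/api/files/backup.bin", data=f)
resp.raise_for_status()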
I'm starting to work on a financial information website (somewhat like google finance or bloomberg).
My website needs to display live currency, commodity, and stock values. I know how to do this frontend-wise, but I have a backend data-storage question (I already have the data feed APIs):
How would you guys go about this - would you set up your own database, save all the data in it with some kind of backend worker, and then plug your frontend into your db, or would you plug your frontend directly into the API and not mine the data?
Mining the data could be good for later reference (statistics and other things the API won't allow), but can such a big quantity of ever-growing information be stored in a database? Is this feasible? What other things should I be considering?
Thank you - any comment would be much appreciated!
First, I'd cleanly separate the front end from the code that reads the source APIs. Having done that, I could have the code that reads the source APIs feed the front end directly, feed a database, or both.
I'm a database guy. I'd lean toward feeding data from the APIs into a database, and connecting the front end to the database. But it really depends on the application's requirements.
Feeding a database makes it simple and cheap to change your mind. If you (or whoever) decides later to keep no historical data, just delete old data after storing new data. If you (or whoever) decides later to keep all historical data, just don't delete old data.
Feeding a database also gives you fine-grained control over who gets to see the data, relatively independent of their network operating system permissions. Depending on the application, this may or may not be a good thing.
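For the database route, the moving part is a small worker that polls the source APIs and writes into the store. A minimal sketch (the feed URL, response shape, and schema are all hypothetical; sqlite3 stands in for whatever database you choose):

import sqlite3
import time

import requests

conn = sqlite3.connect("quotes.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS quotes "
    "(symbol TEXT, price REAL, fetched_at INTEGER)"
)

while True:
    # Hypothetical feed returning e.g. {"AAPL": 187.3, "EURUSD": 1.09}
    quotes = requests.get("https://example.com/api/quotes").json()
    now = int(time.time())
    conn.executemany(
        "INSERT INTO quotes (symbol, price, fetched_at) VALUES (?, ?, ?)",
        [(sym, price, now) for sym, price in quotes.items()],
    )
    conn.commit()
    # Poll once a minute; keeping the history makes later statistics cheap
    time.sleep(60)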