I am developing a website with AngularJS, and my server gets its data from a Neo4j graph database. At first I used the default Neo4j dataset (movies and such), but when I load my own CSV file, Neo4j adds only half of the rows. I have 117,000 rows; I tried USING PERIODIC COMMIT and again it added only 58,000. What is the Cypher command for adding all the data? Is it OK to split it into another CSV file?
EDIT: I used this command:
USING PERIODIC COMMIT
LOAD CSV FROM 'http://docs.neo4j.org/chunked/2.1.2/csv/artists.csv' AS line
CREATE (:Artist { name: line[1], year: toInt(line[2])})
Another question: I need to display the result of a query using AngularJS, and I couldn't find a clear explanation, algorithm, or example. Is there a way to display the result? (The result is in JSON.)
EDIT: I need to show the results both as a table and as nodes (like in the Neo4j admin interface).
Yes, please split it up into 3 questions.
For loading data, see http://jexp.de/blog/2014/06/load-csv-into-neo4j-quickly-and-successfully/
And check out MERGE: http://docs.neo4j.org/chunked/milestone/query-merge.html
For creating an Angular application with a Neo4j backend, see https://github.com/kbastani/neo4j-movies-template
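For the table part of the display, here is a minimal sketch (assuming an AngularJS 1.x controller and Neo4j 2.x's transactional HTTP endpoint at /db/data/transaction/commit; the module name, URL, and query are placeholders for your own):

// Hedged sketch only: endpoint, query, and module name are assumptions.
declare const angular: any;

interface Neo4jResponse {
  results: { columns: string[]; data: { row: any[] }[] }[];
  errors: any[];
}

angular.module('app', []).controller('ResultsCtrl', ['$scope', '$http',
  function ($scope: any, $http: any) {
    $scope.columns = [];
    $scope.rows = [];
    $http.post('http://localhost:7474/db/data/transaction/commit', {
      statements: [{ statement: 'MATCH (a:Artist) RETURN a.name, a.year LIMIT 25' }]
    }).then(function (res: { data: Neo4jResponse }) {
      const result = res.data.results[0];
      $scope.columns = result.columns;           // header cells for the table
      $scope.rows = result.data.map(d => d.row); // one array per table row
    });
  }]);

In the template, something like <tr ng-repeat="row in rows"><td ng-repeat="cell in row track by $index">{{cell}}</td></tr> renders the table; for the node view (like the Neo4j admin interface), you would feed the same data to a graph library such as d3.js or vis.js.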
I have a python script that fetches data twice a day from a server of mine. The script returns around 40 JSON files containing various data. The files aren't particularly big and the combined size of all the files is around 250KB.
Alongside my script I am developing a dashboard in React that renders the data from each file into a table, allowing me a visual representation of the data.
I have been looking at what would be the best way to store these files, something that allows me to upload and fetch them twice a day.
Someone mentioned using MongoDB to store the files, but after some research I feel Mongo is better at storing the contents of a file than the file itself. I tried to develop a solution, but I couldn't figure out how it could be done when each object is stored as a document with no clear way (to me) to tell which document came from which file.
Other options I have considered are:
Storing the files on the server that is hosting my React project and rendering them locally as I am doing now during development
Storing the files using a provider such as AWS/Firebase
Storing them in a different database (I see SQL databases now support storing JSON)
Are there any other solutions that you think would work best for this scenario? If so, why?
Hello,
Consider using an FTP server.
We have clients that send us data every 10 minutes via FTP as XML files, and I have a Node.js back end that reads those files.
You could use the same approach for your scenario with JSON files.
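As a rough sketch (directory, port, and filenames are assumptions), a small Node.js/TypeScript endpoint could read whatever JSON files your script uploads and serve them keyed by filename, so the dashboard always knows which data came from which file:

import { promises as fs } from "fs";
import * as path from "path";
import * as http from "http";

const DATA_DIR = "/srv/ftp/dashboard-data";   // assumed upload directory

// Read every *.json file in the directory and key the parsed content by filename.
async function loadAll(): Promise<Record<string, unknown>> {
  const out: Record<string, unknown> = {};
  for (const name of await fs.readdir(DATA_DIR)) {
    if (!name.endsWith(".json")) continue;
    out[name] = JSON.parse(await fs.readFile(path.join(DATA_DIR, name), "utf8"));
  }
  return out;
}

// ~250KB in total, so re-reading the files on every request is fine for a start.
http.createServer(async (_req, res) => {
  res.setHeader("Content-Type", "application/json");
  res.end(JSON.stringify(await loadAll()));
}).listen(3000);

The React dashboard then fetches a single URL and gets back an object whose keys are the original filenames.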
Here is my problem: I have recently started using CortexDB, NoSQL software for database analysis. I have read the (sparse) documentation at https://docs.cortex-ag.com/en/CortexDB/CortexDB/ and obtained a free license to evaluate the program. As the documentation is unclear, I have a few questions:
1) How do I create a database?
2) How can I import data contained in an Excel file (.csv)?
3) How do I create charts or analyses of the imported data?
Thanks
Because the question is quite old, I hope I can still help you.
First of all: you should download the latest release of the free version (simple registration and download).
1) If you downloaded the free version, you got the server and two databases. A server process handles one database; for a second database you have to start a second server (on a different port, of course). If you start the free version, you should have an empty database (or the filled and configured demo DB). If you want to create a completely new one without any predefined configuration, you have to start the server process from the command line with the parameter -n (ctxserver64 -n). After that you have to configure everything by hand with the 'remote admin' tool.
2) The question is not clear to me. Do you mean how to import a CSV file into CortexDB, or how to export the database content to an Excel file?
If you want to import the CSV file into CortexDB, the easiest way is to use the CortexImplex tool. It is fully explained in the online docs (https://docs.cortex-ag.com/en/CortexImplex/CortexImplex-Basics/).
If you want to export datasets as a CSV file, all you have to do is configure a list in CortexUniplex as a view of your datasets and export it as CSV (you will find the export function in the list menu).
3) I would do the charting with d3.js. For this you can use the so-called 'DataService' of CortexUniplex. It is a kind of API for posting requests and getting JSON objects back. If you have a fully configured UniPlex, you can use all of your configuration as JSON objects in other apps (for example, charts or a custom application).
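As a very rough sketch (the DataService URL and response shape below are placeholders, since the real endpoint depends on your UniPlex configuration), the general fetch-JSON-then-chart flow with d3 looks like this:

import * as d3 from "d3";

interface Row { label: string; value: number; }   // assumed shape of the returned JSON

async function drawChart(): Promise<void> {
  const resp = await fetch("http://your-cortex-host/dataservice/your-list"); // placeholder URL
  const rows: Row[] = await resp.json();

  // Minimal horizontal bar chart: one div per row, width proportional to value.
  const max = d3.max(rows, r => r.value) ?? 1;
  d3.select("#chart")
    .selectAll("div")
    .data(rows)
    .enter()
    .append("div")
    .style("width", r => `${(r.value / max) * 100}%`)
    .style("background", "steelblue")
    .text(r => `${r.label}: ${r.value}`);
}

drawChart();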
The full version has a simple dashboard inside CortexUniplex. Maybe the vendor offers it in the free version as well.
By the way, it is always worth writing an email to their info address. Because this database is not widely known, the team is very helpful. Or contact them via Twitter or other channels (see the bottom of the cortex-ag.com webpage).
I work for a small publishing company with an internal website that displays a static HTML table of our published products.
We need to be able to dynamically list and sort published products (about 1-2 items are published per day), fed from an Excel spreadsheet. The Excel spreadsheet is what we currently use to maintain the data, and it lives on a shared network drive available to the company.
I am familiar with AngularJS, ReactJS, and VueJS2 for front-end development and was wondering whether I could use one of those tools to consume an Excel file, parse it to JSON, and then display it dynamically on the client side.
Is something like this possible?
When a user finishes editing the Excel sheet and saves it to the shared network drive, is there a script that would automatically save the data as JSON? I assume we would then simply have our JavaScript framework reference and consume the saved JSON to populate its published products list.
Note: We are unable to use a relational database at this time (e.g. MySQL).
Part 1 - generating JSON from Excel...
Front-end technologies are not the way to go here. You need to run a service that watches the folder for changes (Node.js or Python, for example). Saving as CSV instead of XLS might make things easier, as you may not need extra libraries to make sense of the XLS file.
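A minimal sketch of such a watcher in Node.js/TypeScript (the paths are assumptions, and the naive split() parse should be replaced by a real CSV library if fields can contain commas or quotes):

import * as fs from "fs";

const CSV_PATH = "//shared-drive/products.csv";   // assumed location of the exported sheet
const JSON_PATH = "./public/products.json";       // where the front end will fetch it from

// Convert the CSV into an array of objects keyed by the header row.
function convert(): void {
  const [header, ...lines] = fs.readFileSync(CSV_PATH, "utf8").trim().split(/\r?\n/);
  const keys = header.split(",");
  const rows = lines.map(line => {
    const cells = line.split(",");
    return Object.fromEntries(keys.map((k, i) => [k, cells[i]]));
  });
  fs.writeFileSync(JSON_PATH, JSON.stringify(rows, null, 2));
}

convert();                                        // initial conversion on start-up
fs.watch(CSV_PATH, () => convert());              // re-convert whenever the file changes

Note that fs.watch does not always fire on network shares; polling with fs.watchFile is a reasonable fallback there.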
Part 2 - displaying JSON data...
Your browser, by default, cannot load a local JSON file, so you will need to run a server (again, Node.js and Python make this relatively easy) to host the JSON file.
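A minimal sketch of such a server (port and file paths are assumptions), just enough to let the browser fetch the generated JSON over HTTP:

import * as fs from "fs";
import * as http from "http";

http.createServer((req, res) => {
  if (req.url === "/products.json") {
    res.setHeader("Content-Type", "application/json");
    res.end(fs.readFileSync("./public/products.json"));
  } else {
    res.setHeader("Content-Type", "text/html");
    res.end(fs.readFileSync("./public/index.html"));  // the page that renders the table
  }
}).listen(8080);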
There are many ways of presenting data these days, but without knowing more of your particulars, and based on the information you did share, it looks like you have a steep learning curve ahead to get something like this going.
There is a requirement in my project to design a system that collects data through a Web API and then compares and copies the received data into an existing SQL Server database. I want to know whether anyone has already worked on such a requirement and, if so, what the best way to design it is. I am currently considering the two options below. Please let me know which one is better, and whether there is any other option.
My algorithm would be: fetch the data through the Web API -> compare the data -> save mismatched data to a particular table -> copy new data to the existing tables.
The two options I am currently considering are:
1) A Windows service that runs once a day and executes the above algorithm.
2) An SSIS package that runs once a day and executes the above algorithm.
If anyone has used either of these solutions, please point me to a helpful article or blog post.
I had a similar project requirement before, and I achieved it with SSIS.
Brief steps:
Use a C# script task to get the returned data (http://json2csharp.com/ is an easy way to generate C# classes based on your JSON).
Using a third-party DLL, install Newtonsoft.Json to deserialize the JSON.
Assign the results in the C# script to each predefined variable (be careful with the data types).
Compare the results with the existing table in a data flow task.
Let me know if you have any questions
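The script task itself would be C#, but as a rough, language-neutral illustration of the fetch -> deserialize -> compare flow (the endpoint, field names, and where the existing rows come from are all assumptions), it boils down to something like this:

interface Product { id: number; name: string; price: number; }   // assumed record shape

async function syncOnce(existing: Map<number, Product>): Promise<void> {
  const resp = await fetch("https://example.com/api/products");   // assumed Web API endpoint
  const incoming: Product[] = await resp.json();                  // deserialize the JSON

  const mismatched: Product[] = [];
  const fresh: Product[] = [];

  for (const item of incoming) {
    const current = existing.get(item.id);
    if (current === undefined) {
      fresh.push(item);                                           // -> copy to the existing tables
    } else if (current.name !== item.name || current.price !== item.price) {
      mismatched.push(item);                                      // -> save to the mismatch table
    }
  }
  // In the real system, `existing` comes from SQL Server and both arrays are written back there.
  console.log(`new: ${fresh.length}, mismatched: ${mismatched.length}`);
}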
I'd like to know your approach/experiences when it's time to initially populate the Grails DB that will hold your app data. Assuming you have CSVs with data, is it "safer" to create a script (with whatever tool fits you) that:
1) Generates the BootStrap commands with the domain classes, runs them in a test or dev environment, and then uses the native DB commands to export the data to prod?
2) Creates the DB insert script, assuming GORM's version = 0 and manually incrementing the soon-to-be auto-generated IDs?
My fear is that the second approach may lead to inconsistencies, since Hibernate is responsible for ID generation, and there may be something else I'm missing.
Thanks in advance.
Take a look at this link. It allows you to run Groovy scripts in the normal Grails context, giving you access to all Grails features, including GORM. I'm currently importing data from a legacy database and have found that writing a Groovy script that uses the Groovy SQL interface to pull out the data and then puts it into domain objects is the easiest approach. Once you have the data imported, you just use the commands specific to your database system to move it to the production database.
Update:
Apparently the updated entry referenced from the blog post I link to no longer exists. I was able to get this working using the code at the following link, which is also referenced in the comments:
http://pastie.org/180868
Finally, it seems the simplest solution is to take into account that GORM, as of the current release (1.2), uses a single sequence for all auto-generated IDs. Considering this when creating whatever scripts you need (in the language of your preference) should suffice. I understand it is planned for the 1.3 release that every table will have its own sequence.