I've gone a little too far with RedisGraph, and now it's about to ship in production.
I therefore need to export and import data between servers and also create backups.
I'm using the open source community version (not Redis Enterprise).
How would you recommend handling backups and imports/exports?
Thanks for your feedback!
RedisGraph stores each graph in a single Redis key, so traditional Redis persistence methods can be used to persist and migrate data.
Backups are usually managed using RDB files or a combination of the RDB and AOF strategies; these are described here.
If your entire Redis keyspace should be duplicated, or if it consists only of graph keys, you can copy the RDB file between servers; otherwise, you can export and import individual graph keys with the DUMP and RESTORE commands.
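If it helps, here is a minimal sketch of the DUMP/RESTORE approach using the redis-py client; the host names and the graph key name are placeholders for your own setup:

```python
import redis

# Placeholder connection details for the source and target servers.
src = redis.Redis(host="source-host", port=6379)
dst = redis.Redis(host="target-host", port=6379)

key = "mygraph"             # the RedisGraph key to migrate

payload = src.dump(key)     # serialized value in Redis' internal format
if payload is None:
    raise SystemExit(f"key {key!r} not found on source")

# ttl=0 means "no expiration"; replace=True overwrites an existing key.
dst.restore(key, 0, payload, replace=True)
print(f"restored {key!r} on target")
```

The same two commands can be scripted over all graph keys if you have more than one.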
I have tables in three databases whose data I want to copy to another database in an automated way, and the data is quite large. My servers run on AWS. What is the simplest and most reliable way to do this?
Edit
I want them to stay in sync (as a DevOps engineer, I am automating the process).
The databases are all MySQL, and everything is moving between AWS EC2 instances. The data ranges between 100 GiB and 200 GiB.
Currently, Maxwell takes the data from the tables and pushes it to Kafka, and a script written in Java feeds the other database.
I believe you can use AWS Database Migration Service (DMS) to replicate tables from each source into a single target. You would have a single target endpoint and three source endpoints. You would have three replication tasks that would take data from each source and put it into your target. DMS can keep data in sync via ongoing replication. Be sure to read up on the documentation before proceeding as it isn't the most intuitive service to use, but it should be able to do what you are asking.
https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html
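As a rough illustration of that layout (one target endpoint, three source endpoints, three replication tasks), here is a hedged boto3 sketch; all ARNs, identifiers, and the table-mapping rule are placeholders you would replace with your own:

```python
import json
import boto3

dms = boto3.client("dms", region_name="us-east-1")  # placeholder region

# Copy every table in every schema; tighten these rules for real use.
table_mappings = json.dumps({
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-all",
        "object-locator": {"schema-name": "%", "table-name": "%"},
        "rule-action": "include",
    }]
})

# One task per source database, all pointing at the same target endpoint.
source_endpoint_arns = [
    "arn:aws:dms:...:endpoint:source-db-1",   # placeholder ARNs
    "arn:aws:dms:...:endpoint:source-db-2",
    "arn:aws:dms:...:endpoint:source-db-3",
]

for i, source_arn in enumerate(source_endpoint_arns, start=1):
    dms.create_replication_task(
        ReplicationTaskIdentifier=f"copy-db-{i}",
        SourceEndpointArn=source_arn,
        TargetEndpointArn="arn:aws:dms:...:endpoint:target-db",   # placeholder
        ReplicationInstanceArn="arn:aws:dms:...:rep:instance-1",  # placeholder
        MigrationType="full-load-and-cdc",  # full load plus ongoing replication
        TableMappings=table_mappings,
    )
```

"full-load-and-cdc" is what keeps the copies in sync after the initial load; the endpoints and replication instance still have to be created (in the console or via the same client) before the tasks will start.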
I'm trying to figure out a way to do DB-to-DB migration without using local files (CSV). Is there a direct way to go from a SQL/Oracle DB to a Cassandra DB? Mostly I just need to import a table from Oracle to Cassandra. The methods I'm aware of all use a temporary file to extract the data and then load it into Cassandra.
Can someone suggest any alternative methods, possibly on Windows with a free ETL tool, or any other way to do it?
Thanks
You used to be able to use Sqoop to import data, but it is deprecated now; depending on which version of Cassandra you use, you might still be able to employ it.
I would recommend using the DataStax Bulk Loader, but you will have to export from Oracle to JSON or CSV first. You'd probably also have to do some transformation on your data anyway, since it's unlikely you'll have the same data model in Cassandra as you did in Oracle.
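If avoiding the intermediate file is the main requirement, another option (not the Bulk Loader route above) is a small script that streams rows straight from Oracle into Cassandra. A minimal sketch assuming the cx_Oracle and cassandra-driver packages, with made-up connection details, table, and keyspace names:

```python
import cx_Oracle
from cassandra.cluster import Cluster

# Placeholder connection details.
ora = cx_Oracle.connect("scott", "tiger", "oracle-host/ORCLPDB1")
cassandra = Cluster(["cassandra-host"]).connect("my_keyspace")

# Prepared statement for the (made-up) target table.
insert = cassandra.prepare(
    "INSERT INTO customers (id, name, email) VALUES (?, ?, ?)"
)

cur = ora.cursor()
cur.arraysize = 1000  # fetch rows in batches instead of one by one
cur.execute("SELECT id, name, email FROM customers")

for row_id, name, email in cur:
    # Any per-row transformation for the Cassandra data model goes here.
    cassandra.execute(insert, (row_id, name, email))

cur.close()
ora.close()
```

For large tables you would want to issue the inserts asynchronously (execute_async) rather than one at a time, but the overall pattern is the same: no staging file, just a cursor on one side and a prepared statement on the other.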
How can I manually copy a database (a collection of nodes and relationships) from one Neo4j server to another Neo4j server?
In general, we can use neo4j-shell to export and import data (migration) between Neo4j servers, but this takes a lot of time. In MySQL we can just copy the data folder to achieve this; is there a way to do the same here?
My requirement: I have a huge collection of nodes and relationships in my local Neo4j server, and I want to add it to docker-neo4j containers wherever I run them.
Copying the data folder does not work because the minor version numbers are different.
Try to see if there is something about data migration to 3.2 on the Neo4j website.
Otherwise, export to CSV and then import?
Using the same versions, I was able to physically copy the database folders/files from one DB instance to another and see the data reflected.
I am currently working on a C project that contains an SQLite3 database with WAL enabled. We have an HTTP web interface over which you should be able to get an online backup of the database. Currently, the database file itself is reachable over HTTP, which is bad in many ways. My task now is to implement a new backup mechanism.
There is the SQLite Online Backup API, which seems pretty nice: you open two database connections and copy one database to the other. However, in my setup I can't be sure there is enough space to copy the entire database, since it may contain a lot of statistics and multimedia files. For me, the best solution would be to open an SQLite connection that writes directly to stdout, so that I could back up the database through CGI.
However, I didn't find a way in the SQLite3 API to open a database connection on special files like stdout. What would be best practice for backing up the database? How do you perform online backups of your SQLite3 databases?
Thanks in advance!
If you need a special target interface for the backup, you can implement a custom VFS that does what you need. See the parameters of sqlite3_open_v2(), where you can pass in the name of a VFS.
(See https://www.sqlite.org/c3ref/vfs.html for details about VFS and the OS interface used by SQLite.)
Basically, every sqlite3_backup_step() call copies a number of pages, and you would need to transfer those to your target database in some way.
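To make the stepping pattern concrete: although your project is in C, Python's stdlib sqlite3 module wraps the same online backup API, so a short sketch can show how the copy proceeds in chunks of pages (file names are placeholders; streaming to stdout or a custom VFS would still need the C-level route described above):

```python
import sqlite3

def show_progress(status, remaining, total):
    # Called after each chunk of pages has been copied.
    print(f"copied {total - remaining} of {total} pages")

src = sqlite3.connect("live.db")     # the WAL-mode database being backed up
dst = sqlite3.connect("backup.db")   # placeholder target

# Copy 64 pages per step; readers and writers are only blocked briefly per step.
src.backup(dst, pages=64, progress=show_progress)

src.close()
dst.close()
```

The C equivalent is the sqlite3_backup_init() / sqlite3_backup_step() / sqlite3_backup_finish() loop; the per-step page count is what lets you interleave the transfer to your real target.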
I'm searching for free (as in freedom) GUI tools that allow me to export data from one relational database into files (CSV, XML, ...) and to bulk import this data into another database. The two databases might be from different vendors.
I'm already aware of tools that migrate schemas, like Liquibase, and am not searching for that.
Extra plus points if such a tool
is written in Java and uses JDBC drivers
is an eclipse plugin (because our other tools are also eclipse based)
allows all kinds of filtering and modification of the data during import or export
can handle large (as in giga- or terabytes) data sets
can be scheduled
can continue an interrupted import/export
Similar questions:
Export large amounts of binary data from one SQL database and import it into another database of the same schema
It seems that the WBExport and WBImport commands of SQLWorkbench are very good candidates. I also need to look into whether ETL tools like Pentaho ETL do this kind of thing.
CloverETL meets nearly all your requirements. With the free version you can work with the following databases: MySQL, PostgreSQL, SQLite, MS SQL, Sybase, Oracle, and Derby.