move neo4j database from one server to another

I have a server that runs neo4j 3.5.x with Docker. Now I want to move that database to another server.
This time I see that neo4j has released 4.0. I just copied the data folder, which contains only graph.db.
I ran the script I used last time:
sudo docker run --name vis --restart unless-stopped --log-opt max-size=50m --log-opt max-file=10 -p3001:7474 -p3002:7473 -p3003:3003 -d -v /data:/data -v /conf:/conf -v /logs:/logs --env NEO4J_AUTH=none neo4j
When I run this I see that I can reach it on 7474, which is fine. BUT it asks for a password to see the data. I didn't set a password, so WHY does it ask?
I tried everything possible like neo4j, 123, 1234, test, or leaving it empty. None worked.
It gives this error:
neo4j-driver.chunkhash.bundle.js:1 WebSocket connection to 'ws://0.0.0.0:7687/' failed: Error in connection establishment: net::ERR_ADDRESS_INVALID
Is there a proper/robust way to import data between neo4j database servers? Can I use this: https://neo4j.com/developer/kb/export-sub-graph-to-cypher-and-import/

If you go to Neo4j Desktop and select the graph, open up the Manage options, then choose Open Terminal.
Once there you can use the database backup command. Here is an example:
bin\neo4j-admin backup --backup-dir=c:/backup --name=graph.db-20200107
This will back up the database to the specified backup directory.
Then you can zip that backup directory, copy it to the new server, and unzip it there.
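A minimal sketch of that transfer step, assuming SSH access between the servers and a Unix-like shell (the hostname and paths are placeholders):
# on the old server: compress the backup and copy it across
tar czf graph.db-20200107.tar.gz -C /backup graph.db-20200107
scp graph.db-20200107.tar.gz user@newserver:/backup/
# on the new server: unpack it before restoring
tar xzf /backup/graph.db-20200107.tar.gz -C /backup/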
Then restore it on the new server. Here is an example:
bin\neo4j-admin restore --from=c:/backup/graph.db-20200107 --database=graph.db --force=true
Note: The 'graph.db-20200107' is an example of the name that you give the database backup. You can name it whatever you want.
-yyyguy

Related

Import/Export PostgreSQL db "without" pg_dump or sql file / backup, etc...?

I need to import an old db into a new postgres server.
Is there a way to migrate an old database to a new server without using pg_dump?
I don't have the sql file or the old server's backup file, nor the user and password; just the physical files in the "\data" folder. Is there any way to do this?
The target server is the same version as the old server.
Thanks.
Well, as a test you could try:
pg_ctl start -D $DATA
Where pg_ctl comes from the target version and $DATA is the /data directory. You have not said how you came to have just a /data directory; if it came from an unclean shutdown or a corrupted drive, the possibility exists that the server will not start.
UPDATE
To get around the auth failure, find pg_hba.conf and create or modify the local connection entry to use the trust method. For more info see the docs on pg_hba.conf and trust authentication. Then you should be able to connect like:
psql -d some_db -U postgres
Once in, you can use ALTER ROLE to change the password:
ALTER ROLE <role_name> WITH PASSWORD 'new_password';
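For reference, a minimal sketch of what the trust entry in pg_hba.conf might look like, assuming connections over the local Unix socket (switch it back to md5 or scram-sha-256 once the password is reset):
# TYPE  DATABASE  USER  METHOD
local   all       all   trust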

Restore corrupt mongo db from WiredTiger files

So here is my scenario:
Today my server was restarted by our hoster (ACPI shutdown).
My mongo database is a simple docker container (mongo:3.2.18).
For an unknown reason the container wasn't restarted on reboot (restart: always was set in docker-compose).
I started it and noticed the volume mappings were gone.
I restored them to the old paths, restarted the mongo container, and it started without errors.
I connected to the database and it was completely empty.
> show dbs
local 0.000GB
> use wekan
switched to db wekan
> show collections
> db.users.find();
>
I also already tried db.repairDatabase(); it had no effect.
Now my _data directory contains a lot of *.wt files and more. (File list)
I found collection-0-2713973085537274806.wt, which has a file size of about 390MiB.
Judging by its size, this could be the data I need to restore.
Any way of restoring this data?
I already tried my luck using wt salvage according to this article, but I can't get it running; still trying.
I know: backups, backups, backups! Sadly this database wasn't backed up.
Related GitHub issue, which contains details on the software.
Update:
I was able to create a .dump file with the WiredTiger data engine tool. However, I can't get it imported into MongoDB.
Try running a repair on the mongo db container. It should repair your database, and the data should be completely restored.
Start the mongo container in bash mode:
sudo docker-compose -f docker-compose.yml run mongo bash
or
docker run -it mongo bash
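Note that a bare docker run -it mongo bash starts a fresh container without your data volume mounted; a sketch that mounts the host data directory (the host path here is a placeholder) would be:
docker run -it -v /path/to/your/_data:/data/db mongo:3.2.18 bash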
Once you are inside the docker container, run the mongod repair:
mongod --dbpath /data/db --repair
The DB should be repaired successfully and all your data should be restored.

Error while trying to connect to DB2 SAMPLE database for the first time

I want to install DB2 UDW on my machine for learning purposes, but I am having a hard time configuring the local instance. Any help would be highly appreciated.
I installed DB2 Express-C and selected all the default choices. I am trying to connect using IBM Data Studio 4.1. In the "DB2 First Steps" GUI I chose to create the SAMPLE database. I am getting the error below:
Creating database "SAMPLE" on path "C:"...
Existing "SAMPLE" database found...
The "-force" option was not specified...
Attempt to create the database "SAMPLE" failed
'db2sampl' processing complete.
I tried connecting from Data Studio using the following options:
Database- SAMPLE
Port- 50000
host - localhost
The error I am getting:
Explanation:
An attempt was made to access a database that was not found, has not been started, or does not support transactions.
User response:
Ensure that the specified database name exists in the system database directory. If the database name does not exist in the system database directory, either the database does not exist or the database name has not been cataloged. If needed, issue a db2start command and then resubmit the current command.
SQL4499N A fatal error occurred that resulted in a disconnect from the data source.
SQLSTATE: 08004
The problem is I have zero knowledge of DB2. If I need to run the db2start command, where should I run it from? Please help.
Probably the instance is not started.
Once you have installed DB2, you need a started instance in order to use any database. The instance could have been created at the same time as the installation. You can verify which instances exist on your computer by issuing:
/opt/IBM/db2/V10.1/instance/db2ilist
The output should give you a set of users for which an instance has been configured.
You can change to that user and start the instance. For example, if the user is db2inst1:
su - db2inst1
db2start
Once the instance is started, you can now create a database and then connect to it.
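As a sketch of that next step, still logged in as the instance owner, you could then create and connect to the SAMPLE database (db2sampl is the same utility the "DB2 First Steps" GUI calls; since your output shows an existing SAMPLE database, the connect alone may be enough):
db2sampl
db2 connect to sample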

How to restore a MySQL .dump file in remote host

I created a backup of a localhost MySQL database (Drupal) using phpMyAdmin (file format *.sql). The size of this backup is 20MB. I created a new database in phpMyAdmin on my live (online) server. Now, when I import the backup .sql file I see this error:
#2006 - MySQL server has gone away
I know this error can be fixed like this:
edit ../sql/bin/my.ini
set max_allowed_packet to e.g. 16M
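For reference, a sketch of what that setting might look like in my.ini (the section name is standard, but check where your installation keeps the file):
[mysqld]
max_allowed_packet=16M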
But my server support said the better way is to restore MySQL using:
mysql -u username -p dbname < file.sql
Now, I don't know how to use this command line with a remote server.
You need SSH access to your server to execute that command in a terminal. If support told you to use that command, I would think you have SSH access. The SQL file has to be on your server, so you'd need to transfer it there first (using, for example, scp).
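A minimal sketch of the whole sequence, with placeholder hostname, username, and paths:
# copy the dump to the server, then log in and import it
scp file.sql user@yourserver:/home/user/
ssh user@yourserver
mysql -u username -p dbname < /home/user/file.sql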
But if you're not used to the command line, I would recommend first spending some time learning the basics before jumping right into it ;)

I have a 18MB MySQL table backup. How can I restore such a large SQL file?

I use a WordPress plugin called 'Shopp'. It stores product images in the database rather than the filesystem as standard; I didn't think anything of this until now.
I have to move servers, and so I made a backup, but restoring the backup is proving a horrible task. I need to restore one table called wp_shopp_assets, which is 18MB.
Any advice is hugely appreciated.
Thanks,
Henry.
For large operations like this it is better to go to the command line. phpMyAdmin gets tricky when lots of data is involved because there are all sorts of PHP timeouts that can trip it up.
If you can SSH into both servers, then you can do a sequence like the following:
1. Log in to server1 (your current server) and dump the table to a file using mysqldump:
mysqldump --add-drop-table -uSQLUSER -pPASSWORD -hSQLSERVERDOMAIN DBNAME TABLENAME > BACKUPFILE
2. Do a secure copy of that file from server1 to server2 using scp:
scp BACKUPFILE USER@SERVER2DOMAIN:FOLDERNAME
3. Log out of server1.
4. Log into server2 (your new server) and import that file into the new DB using mysql:
mysql -uSQLUSER -pPASSWORD DBNAME < BACKUPFILE
You will need to replace the UPPERCASE text with your own info. Just ask in the comments if you don't know where to find any of these.
It is worthwhile getting to know some of these command line tricks if you will be doing this sort of admin from time to time.
Try HeidiSQL: http://www.heidisql.com/
Connect to your server and choose the database.
Go to the menu "Import > Load SQL file", or simply paste the SQL into the SQL tab.
Execute the SQL (F9).
HeidiSQL is an easy-to-use interface and a "working-horse" for web-developers using the popular MySQL-Database. It allows you to manage and browse your databases and tables from an intuitive Windows® interface.
EDIT: Just to clarify, this is a desktop application; you will connect to your database server remotely. You won't be limited by PHP's max script runtime or upload size limit.
Use BigDump.
Create a folder on your server which is not easy to guess, like "BigDump_D09ssS" or whatever.
Download the importer file from http://www.ozerov.de/bigdump.php and add it to that directory after reading the instructions and filling out your config information.
FTP the .sql file to that folder alongside the bigdump script, then navigate to that folder in your browser.
Selecting the file you uploaded will start importing the SQL in split chunks, which is a much faster method!
Or if that is an issue, I recommend the other answer about SSH and the mysql -u -p -n -f method!
Even though this is an old post, I would like to add that it is recommended not to use database storage for images when you have more than about 10 product images.
Instead of exporting and importing such a huge file, it would be better to switch the Shopp installation to file storage for images before transferring.
You can use this free plug-in to help you. Always back up your files and database before performing this action.
What I do is open the file in a code editor, then copy and paste it into a SQL window within phpMyAdmin. Sounds silly, but I swear by it for large files.
