So here is my scenario:
Today my server was restarted by our hosting provider (ACPI shutdown).
My mongo database is a simple docker container (mongo:3.2.18).
For an unknown reason, the container wasn't restarted on reboot (restart: always was set in docker-compose).
I started it manually and noticed the volume mappings were gone.
I restored them to the old paths, restarted the mongo container and it started without errors.
I connected to the database and it was completely empty.
> show dbs
local 0.000GB
> use wekan
switched to db wekan
> show collections
> db.users.find();
>
I also already tried db.repairDatabase(); with no effect.
Now my _data directory contains a lot of *.wt files and more. (File list)
I found collection-0-2713973085537274806.wt which has a file size about 390MiB.
Judging by its size, this could be the data I need to restore.
Any way of restoring this data?
I already tried my luck using wt salvage according to this article, but I can't get it running - still trying.
I know: backups, backups, backups! Sadly, this database wasn't backed up.
Related GitHub issue, which contains details about the software.
Update:
I was able to create a .dump file with the WiredTiger data engine tool. However, I can't get it imported into MongoDB.
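For completeness, the re-import route I'm attempting, reconstructed from that article (treat this as a hedged sketch; the snappy extension path and the collection idents are placeholders from my machine and will differ on yours): create a throwaway collection in a fresh mongod, stop it, overwrite that collection's table with the salvaged dump, then let mongod rebuild its metadata.
# find the throwaway collection's ident first, e.g. via db.getCollection('restore').stats().wiredTiger.uri
wt -v -h /new/dbpath -C "extensions=[/usr/local/lib/libwiredtiger_snappy.so]" -R load -f collection.dump -r table:collection-2-1111111111111111111
# then let mongod rebuild metadata and indexes
mongod --dbpath /new/dbpath --repair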
Try running a repair on the MongoDB container. It should repair your database, and the data should be completely restored.
Start the mongo container with a bash shell:
sudo docker-compose -f docker-compose.yml run mongo bash
or
docker run -it mongo bash
Once you are inside the docker container, run the MongoDB repair:
mongod --dbpath /data/db --repair
The DB should be repaired successfully, and all your data should be restored.
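If you keep the data directory mounted from the host (as in the question), a one-off repair can also be run without entering a shell, roughly like this (a sketch; the host path and image tag are placeholders for your setup):
# throwaway container that repairs the existing data directory and exits
docker run --rm -v /path/on/host/mongo-data:/data/db mongo:3.2.18 mongod --dbpath /data/db --repair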
Being a newbie to Docker, and thinking about storage for a SQL Server database of several hundred gigabytes or more, it doesn't seem feasible to me to store that much inside a container. It takes time to load a large file, and the sensible location for a file in the terabyte range would be a mount separate from the container.
After several days attempting to google this information, it seemed more logical to ask the community. Here's hoping a picture is worth 1000 words.
How can a SQL Server container mount external SQL Server files (.mdf, .ldf, .ndf), given that these files are on Fortress (see screenshot) and the Docker container is elsewhere, say somewhere in one of the clouds? Similarly, Fortress could also be a cloud location.
Example:
SQL CONTAINER 192.169.20.101
SQL Database Files 192.168.10.101
Currently, as is, the .mdf and .ldf files are located in the container. They should instead live in another location that is NOT in the container. It would also be great to know how to move the backup file out of "/var/opt/mssql/data/xxxx.bak" to a location on my Windows machine.
the sensible location for a file in the terabyte range would be to mount it separately from the container
Yes. Also, when you update SQL Server, you replace the container:
This updates the SQL Server image for any new containers you create, but it does not update SQL Server in any running containers. To do this, you must create a new container with the latest SQL Server container image and migrate your data to that new container.
Upgrade SQL Server in containers
So read about Docker Volumes, and how to use them with SQL Server.
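For instance, a minimal sketch of the general idea (not the exact setup described below; the image tag and sa password are placeholders) is to run SQL Server with a named Docker volume so the data files live outside the container's writable layer:
docker run -d --name sql1 -p 1433:1433 -e "ACCEPT_EULA=Y" -e 'SA_PASSWORD=<YourStrong!Passw0rd>' -v mssql-data:/var/opt/mssql mcr.microsoft.com/mssql/server:2019-latest
Replacing the container later (for example, for an upgrade) then leaves the mssql-data volume, and the databases in it, intact.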
Open a copy of Visual Studio Code and open the terminal.
How to access /var/lib/docker in windows 10 docker desktop? - that link explains how to get to a Linux shell from within VSCode.
docker run -it --privileged --pid=host debian nsenter -t 1 -m -u -i sh
cd .. until you reach the root in VSCode, then run a find command:
find / -name "*.mdf"
This lists the storage location; in my case: /var/lib/docker/overlay2/merged/var/opt/mssql/data.
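(If you only need to locate the files, a plain docker exec into the running SQL Server container may be enough; a sketch, assuming a container named sql1:)
docker exec sql1 find / -name "*.mdf"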
Add a storage location on your Windows machine using the docker-compose.yml file:
version: "3.7"
services:
  docs:
    build:
      context: .
      dockerfile: Dockerfile
      target: dev
    ports:
      - 8000:8000
    volumes:
      - ./:/app
      - shared-drive-1:/your_directory_within_container
volumes:
  shared-drive-1:
    driver_opts:
      type: cifs
      o: "username=574677,password=P#sw0rd"
      device: "//192.168.3.126/..."
Copy the source files to the volume on the shared drive (found at /var/lib/docker/overlay2/volumes/); I needed to go to VSCode again for root access.
Open SSMS against the SQL instance in Docker and change the file locations (you'll detach the databases and then reattach them from the volume where the files were moved) https://mssqlfun.com/2015/05/18/how-to-move-msdb-model-sql-server-system-databases/
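For reference, the detach/attach itself can also be scripted with sqlcmd from inside the container rather than SSMS (a sketch only; the container name, database name, file paths, sa password, and even the sqlcmd path are placeholders that depend on your image version):
# detach the database, then reattach it from the path backed by the mounted volume
docker exec -it sql1 /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P '<sa_password>' -Q "EXEC sp_detach_db 'MyDb'; CREATE DATABASE MyDb ON (FILENAME = '/your_directory_within_container/MyDb.mdf'), (FILENAME = '/your_directory_within_container/MyDb_log.ldf') FOR ATTACH;"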
Using VSCode again, go to the root and give the mssql login permission to the data folder under /var/opt/docker/volumes/Fortress (not sure how to do this yet, but I'm working on it and will update here later if it can be done; otherwise I will remove my answer).
Using SSMS again, with the new permissions, reattach the mdf/ldf files to the SQL Server in the Docker container.
Finally, there is a great link here explaining how to pass files back and forth between a container and the Windows machine hosting the container.
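One straightforward way to pass a .bak back and forth is docker cp (a sketch; the container name and the Windows target folder are placeholders):
# copy the backup file from the container to the Windows host
docker cp sql1:/var/opt/mssql/data/xxxx.bak C:\backups\xxxx.bak
# and back into the container, if needed
docker cp C:\backups\xxxx.bak sql1:/var/opt/mssql/data/xxxx.bak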
I'm trying to backup and restore a CouchDB following the official documentation:
https://docs.couchdb.org/en/latest/maintenance/backups.html
"However, you can also copy the actual .couch files from the CouchDB data directory (by default, data/) at any time, without problem. CouchDB’s append-only storage format for both databases and secondary indexes ensures that this will work without issue."
Since the docs don't seem to show clearly the steps to restore from files, I copied the entire data folder, built a local CouchDB Docker container, and tried to paste the files into the container's /opt/couchdb/data folder.
But what I get when I start/restart the container and access localhost:5984 to see the databases is: "This database failed to load."
What should I do after copying the files? Should pasting them directly work? When is the right time to paste them? Should I create the DBs beforehand?
Thank you all
I was able to resolve it this way:
https://github.com/apache/couchdb/discussions/3436
I think you may need to update the ownership of the backup files on your docker container.
This fixed the issue for me:
# recursively change ownership of data dir to couchdb:couchdb
docker exec <container_id> bash -c 'chown -R couchdb:couchdb /opt/couchdb/data'
Just replace <container_id> with your docker container id and the destination with the location of the CouchDB data directory in your container.
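Putting the copy and the ownership fix together, the full restore is roughly this (a sketch; the local backup path is a placeholder):
# copy the backed-up .couch files into the container's data directory
docker cp ./data/. <container_id>:/opt/couchdb/data/
# fix ownership so CouchDB can read the restored files
docker exec <container_id> bash -c 'chown -R couchdb:couchdb /opt/couchdb/data'
# restart so CouchDB picks up the restored databases
docker restart <container_id>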
I have a server that uses neo4j 3.5.x with docker. Now I want to move that database to another server.
This time I see that Neo4j has released 4.0. I just copied the data folder, which contains only graph.db.
I ran the script I used last time:
sudo docker run --name vis --restart unless-stopped --log-opt max-size=50m --log-opt max-file=10 -p3001:7474 -p3002:7473 -p3003:3003 -d -v /data:/data -v /conf:/conf -v /logs:/logs --env NEO4J_AUTH=none neo4j
When I run this I see that I can reach it on 7474, which is fine. BUT it asks for a password to see the data, even though I didn't set one. WHY does it ask?
I tried everything possible, like neo4j, 123, 1234, test, or leaving it empty. None worked.
It gives this error:
neo4j-driver.chunkhash.bundle.js:1 WebSocket connection to 'ws://0.0.0.0:7687/' failed: Error in connection establishment: net::ERR_ADDRESS_INVALID
Is there a proper/robust way to import data between neo4j database servers? Can I use this https://neo4j.com/developer/kb/export-sub-graph-to-cypher-and-import/
If you go to the Neo4j Desktop, select the graph, open the Manage options, and then choose Open Terminal.
Once there, you can use the database backup command (here is an example):
bin\neo4j-admin backup --backup-dir=c:/backup --name=graph.db-20200107
This will backup the database to the specified backup directory.
Then you can zip that backup directory, copy it to the new server, unzip it there, and restore it (here is an example):
bin\neo4j-admin restore --from=c:/backup/graph.db-20200107 --database=graph.db --force=true
Note: The 'graph.db-20200107' is an example of the name that you give the database backup. You can name it whatever you want.
-yyyguy
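Note that neo4j-admin backup/restore is, as far as I know, an Enterprise Edition feature. For a Community Edition instance running in Docker (as in the question), an offline dump/load sketch could look like this, assuming the /data host volume from the question and a 3.5.x image (moving to 4.0 needs the additional store migration steps from the Neo4j migration guide):
# with the old container stopped, dump the store from the mounted /data volume
docker run --rm -v /data:/data neo4j:3.5 neo4j-admin dump --database=graph.db --to=/data/graph.db.dump
# copy graph.db.dump to the new server, then load it there before starting the new container
docker run --rm -v /data:/data neo4j:3.5 neo4j-admin load --from=/data/graph.db.dump --database=graph.db --force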
First off, I want to say that I am not a DB expert and I have no experience with the Heroku service.
I want to deploy a Play Framework application to Heroku, and I need a database to do so. So I created a PostgreSQL database with this command, since it's supported by Heroku:
Users-MacBook-Air:~ user$ heroku addons:create heroku-postgresql -a name_of_app
And I got this as response
Creating heroku-postgresql on ⬢ benchmarkingsoccerclubs... free
Database has been created and is available
! This database is empty. If upgrading, you can transfer
! data from another database with pg:copy
So the DB now exists but is empty, of course. For development I worked with a local H2 database.
Now I want to populate the DB on Heroku using an SQL file, since it's quite a lot of data, but I couldn't find out how to do that. Is there a command for the Heroku CLI where I can hand over the SQL file as an argument and it populates the database? The file basically consists of a few tables which get created and around 10,000 INSERT commands.
EDIT: I also have CSV files of all the tables, so if there is a way to populate the Postgres DB with those, that would also be great.
First, run the following to get your database's name:
heroku pg:info --app <name_of_app>
In the output, note the value of "Add-on", which should look something like this:
Add-on: postgresql-angular-12345
Then, issue the following command:
heroku pg:psql <Add-on> --app <name_of_app> < my_sql_file.sql
For example (assuming your sql commands are in file test.sql):
heroku pg:psql postgresql-angular-12345 --app my_cool_app < test.sql
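For the CSV files mentioned in the edit, you can pipe a psql \copy command through heroku pg:psql the same way (a sketch; the table and file names are placeholders, and the target table must already exist):
echo "\copy my_table FROM 'my_table.csv' WITH CSV HEADER" | heroku pg:psql postgresql-angular-12345 --app my_cool_app
\copy reads the CSV from your local machine and streams it to the remote database, so nothing needs to be uploaded to Heroku first.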
I need to implement an automatic transfer of daily backups from one DB to another DB. Both DBs and apps are hosted on Heroku.
I know this is possible if done manually from a local machine with the command:
heroku pgbackups:restore DATABASE `heroku pgbackups:url --app production-app` --app staging-app
But this process should be automated and not run from a local machine.
My idea is to write a rake task which will execute this command, and to run this task daily with the help of the Heroku Scheduler add-on.
Any ideas how it is better to do this? Or maybe there is a better way for this task?
Thanks in advance.
I managed to solve the issue myself; it turned out to be not so complex. Here is the solution, maybe it'll be useful to somebody else:
1. I wrote a script which copies the latest dump from a certain server to the DB of the current server:
namespace :backup do
  desc "copy the latest dump from a certain server to the DB of the current server"
  task restore_last_write_dump: :environment do
    last_dump_url = %x(heroku pgbackups:url --app [source_app_name])
    system("heroku pgbackups:restore [DB_to_target_app] '#{last_dump_url}' -a [target_app_name] --confirm [target_app_name]")
    puts "Restored dump: #{last_dump_url}"
  end
end
2. To avoid authentication prompts upon each request to the servers, create a .netrc file in the app root (see details here: https://devcenter.heroku.com/articles/authentication#usage-examples); a sample layout is sketched after this list.
3. Set up the Heroku Scheduler add-on and add our rake task along with the frequency at which it should run.
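For reference, the .netrc the Heroku CLI expects looks roughly like this (the email and API token are placeholders; see the authentication article linked above):
machine api.heroku.com
  login <heroku_email>
  password <heroku_api_token>
machine git.heroku.com
  login <heroku_email>
  password <heroku_api_token>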
That is all.