MongoDB running on Docker is wiping the collection after restart

I have to build a small application that reads data from MongoDB running on Docker and uses it for further processing.
The problem is that after I shut down Docker, the local instance of the database is also deleted. How can I prevent this?
The MONGODB_URI is mongodb://localhost:27017. What attributes should I add to the docker command to avoid this, and should I avoid using localhost? docker-compose seems confusing to me, so I use a Dockerfile.
So, what exactly should the docker run command be to avoid this? Is it one of these?
Commands:
docker run -d --name mongo-on-docker -p 27017:27017 mongo
docker run -d --name sample --link mongo-on-docker web app
Also, to permanently save the data, which data directory should I use?

Docker containers are ephemeral: anything written inside a container is lost when the container is removed. To store data you should mount a named volume, a host folder, or a file into the container.
In MongoDB's case, try:
docker run --rm -ti -v mongo_data:/data/db mongo bash
Here mongo_data is a named volume, a storage object managed by Docker that can be mounted as a folder into the container, even into several different containers at the same time.
This is not a new problem:
How to set docker mongo data volume
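For the exact setup in the question, a minimal sketch might look like this (mongo_data is an arbitrary volume name; the container name and port mapping are taken from the question):
docker volume create mongo_data
docker run -d --name mongo-on-docker -p 27017:27017 -v mongo_data:/data/db mongo
# /data/db is where the official mongo image stores its databases, so the
# named volume survives docker stop/rm and the collections persist.
Since -p 27017:27017 publishes the port to the host, an app running on the host can keep using mongodb://localhost:27017 as its MONGODB_URI.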

Related

Why did Docker with SQL Server disappear?

I run SQL Server in a container:
docker run --network=bridge --name sql29 -h sql29 -it --rm -v h:/sql219data:/var/opt/mssql -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=sQL_19[pwd]" -p 12433:1433 -d mcr.microsoft.com/mssql/server:2019-latest
as described here:
https://learn.microsoft.com/en-us/sql/linux/tutorial-restore-backup-in-sql-server-container?view=sql-server-ver15
I can see the active container:
docker ps
But if I try to create a new folder:
docker exec -it sql29 mkdir /var/opt/mssql/bkp22
then the container disappears:
docker ps
.........
How can I understand why the container disappeared? Maybe the volume was mapped incorrectly?
As @Zeitounator commented, the tutorial you linked has a note saying that bind mounts don't work on Windows with the /var/opt/mssql directory:
Host volume mapping for Docker on Windows does not currently support mapping the complete /var/opt/mssql directory. However, you can map a subdirectory, such as /var/opt/mssql/data to your host machine.
You commented that your goal was to keep or restore databases between docker runs. You don't need a bind mount to do that; persistence is the primary purpose of a volume.
You can create a volume using docker volume create:
docker volume create sql219data
Then run your container using this volume:
docker run -v sql219data:/var/opt/mssql # ...
For debugging purposes you can remove the --rm option from your docker run command so the container won't be removed when stopped. You will then be able to read the logs of the container (even after it has stopped):
docker logs sql29
# Then remove it to run the same `docker run` command again
docker rm sql29
docker run # ...
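Putting the pieces together, a sketch of a full command might look like this (the volume name, ports, and image tag are taken from the question; the password is a placeholder to be replaced):
docker volume create sql219data
docker run --network=bridge --name sql29 -h sql29 -v sql219data:/var/opt/mssql -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=<YourStrong!Passw0rd>" -p 12433:1433 -d mcr.microsoft.com/mssql/server:2019-latest
# No --rm, so the container survives a stop and `docker logs sql29` still works;
# the named volume keeps /var/opt/mssql across container re-creation.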

Docker - SQL service won't run when cloned

I'm trying to add a volume to a Docker container, but when I commit it and run it with the new volume, none of the SQL services run on this copy. Why would that be?
https://learn.microsoft.com/en-us/sql/linux/quickstart-install-connect-docker?view=sql-server-ver15&pivots=cs1-powershell
I created the initial container as described above and it works. All fine, services running. I can connect to it and run SQL, but I need to share a drive.
It seems I can't add a volume directly to an existing instance?
docker commit 5a8f89adeead newimagename
docker run -ti -v "C:/dir1":/dir1 newimagename /bin/bash
I do the above to clone it and add a volume. It works, but the SQL services just aren't running on this new one. I'll accept either approach; I just want SQL running and a share in there.
Can anyone help?
Managed it:
docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Pa55word1" `
-v C:/db:/dir1 `
-p 1433:1433 --name sql3 `
-d mcr.microsoft.com/mssql/server:2019-CU3-ubuntu-18.04
I previously had an issue with either no drive or no services, but this has done it.
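For what it's worth, the likely reason the cloned container had no services is that passing /bin/bash to docker run overrides the image's default startup command, so sqlservr never starts. A quick way to check that SQL Server is up in the working container might be (container name and password come from the command above; the sqlcmd path is its usual location in the 2019 images):
docker exec -it sql3 /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "Pa55word1" -Q "SELECT @@VERSION"
# Prints the server version string if SQL Server is running.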

Persist database of docker container

I am using the postgres Docker image in my project. For initialization I am using the following command to create and initialize my database (tables, views, data, ...):
COPY sql_dump.sql /docker-entrypoint-initdb.d/
Is it possible to persist this data after the container is stopped and removed? For instance, when I run the postgres image again, it should bring up the database with this data without loading the script on every container start; it should just load the data created on the first run.
I did some research and found the VOLUME command, but I don't know how to use it for my purpose; I am new to Docker. Thanks for any help. I am using Docker for Windows v18.
You can use Docker named volumes; more information can be found here.
This will create a named volume called postgres-data:
docker volume create postgres-data
And say this is your command to create the container:
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
Change that to this:
docker run --name some-postgres -v postgres-data:/var/lib/postgresql -e POSTGRES_PASSWORD=mysecretpassword -d postgres
This mounts the postgres-data volume at /var/lib/postgresql. Your init scripts can then populate the database, and when you stop and start the container it will contain the persisted data.
-HTH
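A minimal sketch tying this to the COPY line from the question (image and volume names are illustrative; note that the official image runs scripts in /docker-entrypoint-initdb.d/ only when the data directory is empty, i.e. on the first start with a fresh volume, and that its default data directory is /var/lib/postgresql/data):
# Dockerfile
FROM postgres
COPY sql_dump.sql /docker-entrypoint-initdb.d/

docker build -t my-postgres .
docker volume create postgres-data
docker run --name some-postgres -v postgres-data:/var/lib/postgresql/data -e POSTGRES_PASSWORD=mysecretpassword -d my-postgres
# First run: sql_dump.sql is executed and the data lands in the volume.
# Later runs with the same volume skip the init script and reuse the data.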

Can Docker Containers maintain state between restarts?

Should containers be able to maintain state?
I am using a SQLServer Image like so.
docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=yourStrong(!)Password' -p 1433:1433 -d microsoft/mssql-server-linux:2017-latest
Then I create a database in it using dotnet ef.
dotnet ef database update -v
The database works fine until I restart the container, at which point my database is gone and the container is reset to its initial state.
What am I missing? Do containers not persist state?
If so, what's the point of using them for databases?
Yes, they can, as long as you do not delete the container. So you can use
docker stop xxx
or simply restart your machine and then use
docker start xxx
or
docker restart xxx
If you use docker run you create a new container, so there is no previous state to speak of. For SQL Server specifically there is an option to create a volume and store the data there. If you do that, you can delete the container and recreate it without losing data, as the data is no longer stored inside the container.
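A sketch for the image in the question (the volume name mssql-data is arbitrary; /var/opt/mssql is where the SQL Server image keeps its databases):
docker volume create mssql-data
docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=yourStrong(!)Password' -p 1433:1433 -v mssql-data:/var/opt/mssql -d microsoft/mssql-server-linux:2017-latest
# The database created by `dotnet ef database update` now lives in the volume,
# so removing and re-running the container with the same volume keeps it.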

Database migrations in docker swarm mode

I have an application that consists of a simple Node app and a Mongo DB. I wonder how I could run database migrations in Docker swarm mode.
Without swarm mode I run migrations by first stopping the old version of the application, then running a one-off migration command with the new version, and finally starting the new version of the app:
# Setup is roughly the following
$ docker network create appnet
$ docker run -d --name db --net appnet db:1
$ docker run -d --name app --net appnet -p 80:80 app:1
# Update process
$ docker stop app && docker rm app
$ docker run --rm --net appnet app:2 npm run migrate
$ docker run -d --name app --net appnet -p 80:80 app:2
Now I'm testing the setup in Docker swarm mode so that I can easily scale the app. The problem is that in swarm mode one can't start containers in the swarm network, and thus I can't reach the db to run migrations:
$ docker network ls
NETWORK ID      NAME      DRIVER    SCOPE
6jtmtihmrcjl    appnet    overlay   swarm
# Trying to replicate the manual migration process in swarm mode
$ docker service scale app=0
$ docker run --rm --net appnet app:2 npm run migrate
docker: Error response from daemon: swarm-scoped network (appnet) is not compatible with `docker create` or `docker run`. This network can only be used by a docker service.
I don't want to run the migration command during app startup either, as there might be several instances launching at once, which could corrupt the database. Automatic migrations are scary, so I want to avoid them at all costs.
Do you have any idea how to implement a manual migration step in Docker swarm mode?
Edit
I found a dirty hack that allows me to replicate the original workflow. The idea is to create a new service with a custom command and remove it once one of its tasks has finished. This is far from pleasant, so better alternatives are more than welcome!
$ docker service scale app=0
$ docker service create --name app-migrator --network appnet app:2 npm run migrate
# Check when the first app-migrator task is finished and check its output
$ docker service ps app-migrator
$ docker logs <container id from app-migrator>
$ docker service rm app-migrator
# Ready to update the app
$ docker service update --image app:2 --replicas 2 app
I believe you can fix this problem by making your overlay network, appnet, attachable. This can be accomplished with the following command:
docker network create --driver overlay --attachable appnet
This should fix the swarm-scoped network error and allow you to run migrations.
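With an attachable network, the one-off workflow from the question should then work as-is (commands repeated from the question; app:2 is the new image version):
docker service scale app=0
docker run --rm --net appnet app:2 npm run migrate
docker service update --image app:2 --replicas 2 app
# One-off containers can now join the overlay network, so the migration
# container can reach the db service before the new app version starts.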
This is indeed a tricky situation, though I think running the migration during startup might be the final piece of the puzzle.
The way I do it right now (not very elegant, but it works) is with a message queue (I'm using Redis). On app startup, the app sends a message to the queue, indicating that the migration task needs to be run. At the other end of the queue, I have a listener app that processes the queue and runs the migration task. The migration task only runs once, since there's only a single instance of the listener and it processes messages sequentially. So essentially I'm just using the queue and the listener app to make sure that the migration task runs only once.
