Database migrations in docker swarm mode

I have an application that consists of a simple Node app and a Mongo DB. I wonder how I could run database migrations in docker swarm mode.
Without swarm mode, I run migrations by first stopping the old version of the application, then running a one-off migration command with the new version, and finally starting the new version of the app:
# Setup is roughly the following
$ docker network create appnet
$ docker run -d --name db --net appnet db:1
$ docker run -d --name app --net appnet -p 80:80 app:1
# Update process
$ docker stop app && docker rm app
$ docker run --rm --net appnet app:2 npm run migrate
$ docker run -d --name app --net appnet -p 80:80 app:2
Now I'm testing the setup in docker swarm mode so that I can easily scale the app. The problem is that in swarm mode one can't start plain containers on a swarm-scoped network, and thus I can't reach the db to run migrations:
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
6jtmtihmrcjl appnet overlay swarm
# Trying to replicate the manual migration process in swarm mode
$ docker service scale app=0
$ docker run --rm --net appnet app:2 npm run migrate
docker: Error response from daemon: swarm-scoped network (appnet) is not compatible with `docker create` or `docker run`. This network can only be used by a docker service.
I don't want to run the migration command during app startup either, as several instances might be launching at once, and that could potentially screw up the database. Automatic migrations are scary, so I want to avoid them at all costs.
Do you have any idea how to implement manual migration step in docker swarm mode?
Edit
I found a dirty hack that lets me replicate the original workflow. The idea is to create a new service with a custom command and remove it once one of its tasks has finished. This is far from pleasant to use; better alternatives are more than welcome!
$ docker service scale app=0
$ docker service create --name app-migrator --network appnet app:2 npm run migrate
# Check when the first app-migrator task is finished and check its output
$ docker service ps app-migrator
$ docker logs <container id from app-migrator>
$ docker service rm app-migrator
# Ready to update the app
$ docker service update --image app:2 --replicas 2 app
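A small refinement, assuming your Docker version supports it: passing --restart-condition none should keep the swarm from restarting the one-off migration task after it exits:
$ docker service create --name app-migrator --network appnet --restart-condition none app:2 npm run migrate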

I believe you can fix this problem by making your overlay network, appnet, attachable. This can be accomplished with the following command:
docker network create --driver overlay --attachable appnet
This should fix the swarm-scoped network error and allow you to run migrations.
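With an attachable network your original one-off workflow should work again. A rough sketch (the existing appnet would have to be removed and recreated, and services reattached):
docker network create --driver overlay --attachable appnet
docker service scale app=0
docker run --rm --net appnet app:2 npm run migrate
docker service update --image app:2 --replicas 2 app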

This is indeed a tricky situation, though I think running the migration during startup might be the final piece of the puzzle.
The way I do it right now (not very elegant, but it works) is with a message queue (I'm using Redis). On app startup, the app sends a message to the queue announcing that the migration task needs to be run. At the other end of the queue, a listener app processes the queue and runs the migration task. The migration task only runs once, since there is only a single instance of the listener, running it sequentially. So essentially I'm just using the queue and the listener app to make sure that the migration task runs only once.
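As a rough sketch of the idea (hypothetical names; a real listener would also de-duplicate requests queued by several replicas):
# Each app replica enqueues a migration request on startup
redis-cli -h redis LPUSH migration-queue "app:2"
# The single listener instance processes requests one at a time
while true; do
  version=$(redis-cli -h redis BRPOP migration-queue 0 | tail -n 1)
  echo "running migration for $version"
  npm run migrate
done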

Related

Mongodb running on Docker is wiping the collection after restart

I have to build a small application that reads data from MongoDB running on Docker and uses it for further processing.
The problem is that after I close Docker, the local instance of the database is also deleted. How can I stop that?
The MONGODB_URI is mongodb://localhost:27017. What attributes should I add to the docker command to avoid this? Should I avoid using localhost? docker-compose seems confusing to me, so I use a Dockerfile.
So, what exactly should the docker run command be to avoid it? Is it one of these?
Commands:
docker run -d --name mongo-on-docker -p 27017:27017 mongo
docker run -d --name sample --link mongo-on-docker web app
Also, to save the data permanently, what data directory should I use?
Docker containers are ephemeral: their writable layer is discarded when the container is removed. To store data you should mount a named volume, a folder, or a file into the container.
In MongoDB's case, try:
docker run --rm -ti -v mongo_data:/data/db mongo bash
Here mongo_data is a named volume, a special Docker entity that can be mounted as a folder into a container, even into different containers at the same time.
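Applied to the run command from your question, that would look roughly like this (the named volume is created automatically on first use):
docker run -d --name mongo-on-docker -p 27017:27017 -v mongo_data:/data/db mongo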
This is not a new problem:
How to set docker mongo data volume

Docker container can't connect to Redis

I have a Docker container running a C application using hiredis, which should write data to a Redis server running locally on the same Linux device, exposed at the default address and port, 127.0.0.1:6379.
The Redis server runs in a different Docker container. I start that container, exposing port 6379, as follows: sudo docker run --name redis_container -d -p 6379:6379 40c68ed3a4d2
redis-cli can connect to it via 127.0.0.1:6379 without issues.
However, no matter what I try, the container that should write to Redis always gets a connection refused error in the C code. This was my last attempt at running the container: sudo docker run --expose=6379 -i 7340dfee8ea5
What exactly am I missing here? Thanks
The C client is running inside a container, which means 127.0.0.1 points to the container itself, not to your host. You should configure the Redis client to connect to redis_container:6379, as that is the name you used when you ran the Redis container. More about this here
Besides, both containers need to be on the same docker network. Use the following command to create a simple network:
docker network create my-net
and add --network my-net to both docker run commands (the Redis client and the Redis server).
You can read more about docker network here
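Putting it together with the image IDs from the question, a sketch would be:
docker network create my-net
sudo docker run --name redis_container -d --network my-net -p 6379:6379 40c68ed3a4d2
sudo docker run --network my-net -i 7340dfee8ea5
# the C code then connects to redis_container:6379 instead of 127.0.0.1:6379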

Docker - SQL service won't run when cloned

I'm trying to add a volume to a docker container, but when I commit the container and run the copy with the new volume, none of the SQL services run on it. Why would that be?
https://learn.microsoft.com/en-us/sql/linux/quickstart-install-connect-docker?view=sql-server-ver15&pivots=cs1-powershell
I created the initial container as above and it works. All fine, services running; I can connect to it and run SQL, but I need to share a drive.
It seems I can't add one directly to an existing instance?
docker commit 5a8f89adeead newimagename
docker run -ti -v "C:/dir1":/dir1 newimagename /bin/bash
I do the above to clone it and add a volume. It works, but the SQL services just aren't running on this new one. I'll accept it either way; I just want SQL running and a share in there.
Can anyone help?
Managed it:
docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Pa55word1" `
-v C:/db:/dir1 `
-p 1433:1433 --name sql3 `
-d mcr.microsoft.com/mssql/server:2019-CU3-ubuntu-18.04
I had an issue with having either no drive or no services, but this has done it.
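To double-check that SQL Server is actually up in the new container, the connection test from the linked quickstart can be reused (a sketch; /opt/mssql-tools/bin is where the mssql server images ship sqlcmd):
docker exec -it sql3 /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "Pa55word1" -Q "SELECT @@VERSION"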

Gitlab with SQL Server as database source configuration handling

I have a self-hosted GitLab on an Ubuntu machine. I configured a Linux container for it to run a runner. Now I am trying to write a configuration for my dotnet project to run unit tests on this setup.
I got the configuration to run the dotnet application without a database; the only part where I'm stuck is that I cannot get the database to load or connect through my test environment.
I got the SQL Server Linux container to run as a service (I am guessing it is running), but I am not sure how I can load my database into it. I know I can do that using docker run, but I cannot figure out how to do it here.
When I try to run "mssql-tools" as a service, I cannot run its commands, as they are not installed by default in the dotnet image.
Here is my file:
image: microsoft/dotnet:latest

variables:
  ACCEPT_EULA: Y
  SA_PASSWORD: my_secure_password
  MSSQL_PID: Developer

stages:
  - test

before_script:
  - "cd Source"
  - "dotnet restore"

test:
  stage: test
  services:
    - mcr.microsoft.com/mssql/server:2017-latest
    - mcr.microsoft.com/mssql-tools
  script:
    - "cd ../Database"
    - "docker run -it mcr.microsoft.com/mssql-tools"
    - "sqlcmd -S . -U SA -P my_secure_password -i testdata_structure.sql"
    - "exit"
    - "cd ../Source"
    - "dotnet build"
    - "dotnet test"
"sqlcmd -S . -U SA -P my_secure_password -i testdata_structure.sql this command won't work in this setup as sqlcmd is not installed, but is one of service. I don't want to make a new image that has all pre-install. But use available stuff to work.
Not, sure if I am able to explain my issue and knowledge here. I am new, but I am reading and changing configuration from 2 days. I can get Linux based SQL Server to run with my app from local docker commands and stuff, but on Gitlab to run Unit Test I cannot get database to restore/get running and connect to application.
GitLab services do not install commands or apps inside your job's container; instead, a service is another container, usually run in parallel, that offers infrastructure such as databases, caches, queues, etc.
If you want to have sqlcmd inside your container, you must install it.
This is an extract from my pipeline. In this case my container is based on Alpine, but you can find more installation options here: https://learn.microsoft.com/en-us/sql/connect/odbc/linux-mac/installing-the-microsoft-odbc-driver-for-sql-server?view=sql-server-ver15
before_script:
  - apk add curl
  - apk add --no-cache gnupg
  - curl -O https://download.microsoft.com/download/e/4/e/e4e67866-dffd-428c-aac7-8d28ddafb39b/msodbcsql17_17.7.2.1-1_amd64.sig
  - curl -O https://download.microsoft.com/download/e/4/e/e4e67866-dffd-428c-aac7-8d28ddafb39b/mssql-tools_17.7.1.1-1_amd64.sig
  - curl -O https://download.microsoft.com/download/e/4/e/e4e67866-dffd-428c-aac7-8d28ddafb39b/msodbcsql17_17.7.2.1-1_amd64.apk
  - curl -O https://download.microsoft.com/download/e/4/e/e4e67866-dffd-428c-aac7-8d28ddafb39b/mssql-tools_17.7.1.1-1_amd64.apk
  - curl https://packages.microsoft.com/keys/microsoft.asc | gpg --import -
  - gpg --verify msodbcsql17_17.7.2.1-1_amd64.sig msodbcsql17_17.7.2.1-1_amd64.apk
  - gpg --verify mssql-tools_17.7.1.1-1_amd64.sig mssql-tools_17.7.1.1-1_amd64.apk
  - apk add --allow-untrusted msodbcsql17_17.7.2.1-1_amd64.apk
  - apk add --allow-untrusted mssql-tools_17.7.1.1-1_amd64.apk

script:
  - /opt/mssql-tools/bin/sqlcmd -S $DBC_SERVER -U $DBC_USER -P $DBC_PASSWORD -q "USE myTestDb; CREATE TABLE testGitlab (id int); SELECT * FROM testGitLab"
I ended up using my own custom Docker image that has .NET Core and sqlcmd installed, so I can use SQL Server as a service in the GitLab configuration (I had to define SQL Server's hostname as an IP in the same range as my server).
Not an ideal answer, but a workaround for me.
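For reference, a minimal sketch of such a custom image (hypothetical base tag; the install steps follow Microsoft's Debian instructions from the link above):
FROM mcr.microsoft.com/dotnet/core/sdk:3.1
# Register the Microsoft package feed and install sqlcmd
# (install curl/gnupg first if the base image lacks them)
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - \
 && curl https://packages.microsoft.com/config/debian/10/prod.list > /etc/apt/sources.list.d/mssql-release.list \
 && apt-get update \
 && ACCEPT_EULA=Y apt-get install -y mssql-tools unixodbc-dev
ENV PATH="${PATH}:/opt/mssql-tools/bin"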

Changing my project files doesn't change files inside the Docker machine

I'm trying to use Docker to improve my workflow. I installed "Docker Toolbox for Windows" on my Windows 10 Home edition (since Docker supposedly only works on Professional). I'm using mgexhev's angular-seed, which claims to provide full Docker support. There is a docker-compose.yml file which links to ./.docker/angular-seed.development.dockerfile.
After git cloning the seed project I can start it by running the commands given on the seed project's github page. So I can see the app after running:
$ docker-compose build
$ docker-compose up -d
But when I change code with Visual Studio Code and save, livereload doesn't work. The only way I can see my changes is by re-running the build and up commands (which re-run npm install; 5 minutes).
In Docker's documentation they say to "Mount a host directory as a data volume" in order to be able to "change the source code and see its effect on the application in real time"
docker run -v //c/<path>:/<container path>
But I'm not sure this is right when I'm using docker-compose? I have also tried running:
docker run -d -P --name web -v //c/Users/k/dev/:/home/app/ angular-seed
docker run -p 5555:5555 -v //c/Users/k/dev/:/home/app/ -w "/home/app/" angular-seed
docker run -p 5555:5555 -v $(pwd):/home/app/ -w "/home/app/" angular-seed
and lots of similar commands but nothing seems to work.
I tried moving my project from C:/dev/project to home because I read somewhere that there might be some access-rights issues when not using the "home" directory, but this made no difference.
I'm also a bit confused that the instructions say to visit localhost:5555. I have to go to dockerIP:5555 to see the app (in case this helps anyone understand why my code doesn't update inside my docker container).
Surely my changes should move into the docker environment automatically, or docker is not very useful for development :)
Looking at the docker-compose.yml you've linked to, I don't see any volume entry. Without that, there's no connection possible between the files on your host and the files inside the container. You'll need a docker-compose.yml that includes a volume entry, like:
version: '2'

services:
  angular-seed:
    build:
      context: .
      dockerfile: ./.docker/angular-seed.development.dockerfile
    command: npm start
    container_name: angular-seed-start
    image: angular-seed
    networks:
      - dev-network
    ports:
      - '5555:5555'
    volumes:
      - .:/home/app/angular-seed

networks:
  dev-network:
    driver: bridge
Docker-machine runs docker inside a VirtualBox VM. By default, I believe C:\Users is shared into the VM, but you'll need to check the VirtualBox settings to confirm this. Any host directories you try to map into the container are mapped from the VM, so if your folder is not shared into that VM, your files won't be included.
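To inspect the shared folders from the command line (assuming the machine is named default):
VBoxManage showvminfo default | grep -A 3 -i "shared folders"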
As for the IP: localhost works on Linux hosts and newer versions of Docker for Windows/Mac. Older docker-machine based installs need to use the IP of the VirtualBox VM.
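On those older installs, the VM's address can be found with docker-machine (the machine name is usually default):
docker-machine ip default
# then browse to http://<printed IP>:5555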
