Database lost on Docker restart

I'm running InfluxDB and Grafana in Docker on Windows 10.
Every time I shut down Docker, I lose my database.
Here's what I know:
- I have tried adjusting the retention policies, with no effect on the outcome.
- I can shut down and restart the containers (docker-compose down) and the database is still there. Only when I shut down Docker for Windows do I lose the database.
- I don't see any new folders in the mapped directory (/data/influxdb/data/) when I create a new database. Only the '_internal' folder persists, which I assume corresponds to the persisting database called '_internal'.
Here's my yml file. Any help greatly appreciated.
version: '3'
services:
  # Define an InfluxDB service
  influxdb:
    image: influxdb
    volumes:
      - ./data/influxdb:/var/lib/influxdb
    ports:
      - "8086:8086"
      - "80:80"
      - "8083:8083"
  grafana:
    image: grafana/grafana
    volumes:
      - ./data/grafana:/var/lib/grafana
    container_name: grafana
    ports:
      - "3000:3000"
    env_file:
      - 'env.grafana'
    links:
      - influxdb
  # Define a service for using the influx CLI tool.
  # docker-compose run influxdb-cli
  influxdb-cli:
    image: influxdb
    entrypoint:
      - influx
      - -host
      - influxdb
    links:
      - influxdb

If you are using docker-compose down/up, keep in mind that this is not a "restart" because:
docker-compose up creates new containers and
docker-compose down removes them:
docker-compose up
Builds, (re)creates, starts, and attaches to containers for a service.
docker-compose down
Stops containers and removes containers, networks, volumes, and images created by up.
So, removing the containers + not using a mechanism to persist data (such as volumes) means that you lose your data ☹️
On the other hand, if you keep using:
docker-compose start
docker-compose stop
docker-compose restart
you deal with the same containers, the ones created when you ran docker-compose up.
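To make the difference concrete, here is the command sequence (a sketch; the -d flag just keeps the terminal free):

docker-compose up -d      # creates the containers
docker-compose stop       # stops them; their filesystems are kept
docker-compose start      # the same containers come back, data intact
docker-compose down       # removes the containers...
docker-compose up -d      # ...and this creates brand-new ones, so anything not on a volume is gone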

docker-compose down
The above command does not remove named volumes unless you explicitly pass the -v / --volumes flag.
https://docs.docker.com/compose/reference/down/
I tried the following docker-compose.yaml file, which persists the data even across docker-compose down or docker rm commands.
version: '3'
services:
  influxdb:
    image: influxdb:2.0
    ports:
      - 8086:8086
    volumes:
      - influxdb-data:/var/lib/influxdb2
    restart: always

volumes:
  influxdb-data:
    external: true
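Note that external: true means Compose will not create the volume itself; it must already exist or docker-compose up will fail. A one-time setup (volume name matching the compose file above):

docker volume create influxdb-data
docker-compose up -d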

I think the problem is related to the mounted volume, not Docker or InfluxDB. You should first find where InfluxDB stores its data (by default it is in your home folder, ~user/.influxdb, on Windows), then generate an influxdb.conf file, and finally mount the volumes.
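For reference, a minimal sketch of that approach with the 1.x image (the influxd config command and the mount paths come from the image's documentation; pinning the 1.8 tag is my assumption):

# generate a default config file from the image
docker run --rm influxdb:1.8 influxd config > influxdb.conf
# run with the config and the data directory both mounted
docker run -d -p 8086:8086 \
  -v $PWD/influxdb.conf:/etc/influxdb/influxdb.conf:ro \
  -v $PWD/data/influxdb:/var/lib/influxdb \
  influxdb:1.8 -config /etc/influxdb/influxdb.conf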

This seemed to work for me, but just in case someone else is reading this with the same problem as mine: the connection with my Docker WordPress compose site was lost.
It seems as though it needed restarting.
I used the advice from @tgogos, and in a shell in the Docker root folder I typed the command:
docker-compose restart
However, before doing this I edited the yml file, docker-compose.yml, to also include:
restart: always
following the advice from the linode.com site.

Related

SQL scripts in a Dockerfile are skipped if I attach the deployment to a persistent volume

I created a new image based on the "mcr.microsoft.com/mssql/server" image.
Then, within the Dockerfile, I have a script that creates a new database with some tables and seeded data.
FROM mcr.microsoft.com/mssql/server
USER root
# CreateDb
COPY ./CreateDatabaseSchema.sql ./opt/scripts/
ENV ACCEPT_EULA=true
ENV MSSQL_SA_PASSWORD=myP#ssword#1
# Create database
RUN /opt/mssql/bin/sqlservr & sleep 60; /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P ${MSSQL_SA_PASSWORD} -d master -i /opt/scripts/CreateDatabaseSchema.sql
I can see the database created by my script if I don't attach the container to a persistent volume, and DO NOT see the new database if I attach it to a persistent volume. I checked the log and don't see any errors. It looks like the system skips processing that file. What might cause the environment to skip processing the SQL script defined in the Dockerfile?
Thanks,
Austin
The problem with using a persistent volume is that all the data the base image put in that directory is hidden and replaced by the volume's contents when it is mounted. I need to learn how to create the database after the volume mounts:
volumeMounts:
  - mountPath: /var/opt/mssql
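One common workaround (a sketch, not something the official image provides) is to move the schema creation from build time to container start, so it runs after the volume has been mounted. The entrypoint.sh name and the 60-second wait are assumptions:

Dockerfile:
FROM mcr.microsoft.com/mssql/server
USER root
COPY ./CreateDatabaseSchema.sql ./entrypoint.sh /opt/scripts/
ENV ACCEPT_EULA=true
CMD ["/bin/bash", "/opt/scripts/entrypoint.sh"]

entrypoint.sh:
#!/bin/bash
# start SQL Server in the background, give it time to come up,
# then create the schema on top of whatever the volume contains
/opt/mssql/bin/sqlservr &
sleep 60
/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "$MSSQL_SA_PASSWORD" \
  -d master -i /opt/scripts/CreateDatabaseSchema.sql
# keep the container alive on the sqlservr process
wait

A guard (e.g. checking whether the database already exists) would be needed to make the script idempotent across restarts.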
You can use docker-compose.yml and Dockerfile. Both can work together.
version: '3.9'
services:
  mysqlserver:
    build:
      context: ..
      dockerfile: Dockerfile
    restart: always
    volumes:
      - ./make/my/db/persistent:/var/opt/mssql
Then you can run it with:
docker-compose -f docker-compose.yml up
Have fun

Transferring data from a MongoDB on my local machine to a Docker container

I'm trying to deploy a site and I am stuck trying to get my MongoDB data into my Docker container. My API works just fine without the Docker container, but when it is run using the Docker container it throws an error. The errors are due to the database being empty. I'm looking for a way to transfer previously stored data from my local MongoDB to the MongoDB in my container. Any solutions for this?
Below is my docker-compose.yml file:
version: "2"
services:
web:
build: .
ports:
- "3030:3030"
depends_on:
- mongo
mongo:
image: mongo
ports:
- "27018:27017"
I was told using mongodump and mongorestore could be helpful but haven't had much luck with mongorestore.
Currently, I have a dump folder with the db that I'm trying to transfer on my local machine. What steps should I take next to get it into docker?
Found the issue, for anyone attempting to populate their mongo database in Docker.
Here are the steps I took:
1. Use mongodump to copy the database into a dump folder:
mongodump --db <db_name>
2. Use docker cp to copy that dump folder into the Docker container:
docker cp ~/dump/ <container_id>:/usr/
3. Use mongorestore inside the Docker container:
Open the mongo container's shell in Docker Desktop, or run docker exec -it <container_id> bash
cd into the usr directory
mongorestore --db=<db_name> --collection=<collection_name> ./dump/<db_name>/<collection_name>.bson
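For what it's worth, mongorestore can also take the whole dump directory in one call, which avoids restoring collection by collection (the container name my_mongo here is hypothetical):

mongodump --db <db_name>                      # writes to ./dump/<db_name>
docker cp ~/dump/ my_mongo:/tmp/dump          # copy the dump into the container
docker exec my_mongo mongorestore /tmp/dump   # restore every database found in the dump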

Unable to run SQL Server 2019 in Docker with volumes; get ERROR: Setup FAILED copying system data file

When I run the latest SQL Server image from the official documentation on a Linux host:
docker run -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=asdasdasdsad' -p 1433:1433 -v ./data:/var/opt/mssql/data -d mcr.microsoft.com/mssql/server:2019-latest
I get the error:
ERROR: Setup FAILED copying system data file 'C:\templatedata\model_replicatedmaster.mdf' to '/var/opt/mssql/data/model_replicatedmaster.mdf': 5(Access is denied.)
This message occurs only on a Linux host and with bind-mounted volumes.
It happens because of a lack of permissions. With the 2019 release, the mssql Docker images moved from running as root to running as a non-root user. As a result, SQL Server containers with bind-mounted volumes on a Linux host hit a permission issue (the mssql user has no permission to write into the bind-mounted directory).
There are a few solutions for this problem:
1. Run docker as root
e.g. compose:
version: '3.6'
services:
  mssql:
    image: mcr.microsoft.com/mssql/server:2019-latest
    user: root
    ports:
      - 1433:1433
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=BLAH
    volumes:
      - ./data:/var/opt/mssql/data
Source: https://github.com/microsoft/mssql-docker/issues/13#issuecomment-641904197
2. Set up the proper directory owner (mssql)
Check the id of the mssql user in the Docker image:
sudo docker run -it mcr.microsoft.com/mssql/server id mssql
gives: uid=10001(mssql) gid=0(root) groups=0(root)
Change the folder's owner:
sudo chown 10001 VOLUME_DIRECTORY
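To confirm the change took effect, list the directory with numeric IDs; the owner column should now show 10001:

ls -ln VOLUME_DIRECTORY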
Source in Spanish: https://www.eiximenis.dev/posts/2020-06-26-sql-server-docker-no-se-ejecuta-en-root/
3. Give full access (not recommended)
Give full access to the db files on the host:
sudo chmod -R 777 VOLUME_DIRECTORY
Unfortunately, the only way I found to fix this issue involves a few manual steps.
I used the following docker-compose file for this to work
version: '3.9'
services:
  mssql:
    image: mcr.microsoft.com/mssql/server:2019-latest
    platform: linux
    ports:
      - 1433:1433
    environment:
      - ACCEPT_EULA=Y
      - MSSQL_SA_PASSWORD=<testPASSWORDthatISlongENOUGH_1234>
    volumes:
      - ./mssql/data:/var/opt/mssql/data
      - ./backups:/var/backups
(the data directory has to be mounted directly due to another issue with SQL server containers hosted on Windows machines)
Then you need to perform the following manual steps:
1. Connect to the database using SSMS.
2. Find and select your .bak database backup file.
3. Open a terminal in the container.
4. In the directory where the .mdf and .ldf files are going to be created, touch files with the database name you are going to use:
touch /var/opt/mssql/data/DATABASE_NAME.mdf
touch /var/opt/mssql/data/DATABASE_NAME_log.ldf
5. Toggle the option to replace any existing database with the restore.
6. Restore your database.
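If you prefer to script the restore rather than use the SSMS dialog, the same thing can be done with sqlcmd from a terminal in the container (the database and logical file names below are placeholders; check yours with RESTORE FILELISTONLY):

/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P '<password>' -Q "
  RESTORE DATABASE [DATABASE_NAME]
  FROM DISK = '/var/backups/DATABASE_NAME.bak'
  WITH MOVE 'DATABASE_NAME' TO '/var/opt/mssql/data/DATABASE_NAME.mdf',
       MOVE 'DATABASE_NAME_log' TO '/var/opt/mssql/data/DATABASE_NAME_log.ldf',
       REPLACE"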
I tried to follow the instructions in this article, https://www.sqlservercentral.com/blogs/using-volumes-in-sql-server-2019-non-root-containers, but I could not get it to work.
This problem was also discussed in this GitHub issue (which the bot unhelpfully closed without a proper solution).
I encountered the same problem as you, trying to run a SQL Server-based container on DigitalOcean. user: root also solved the issue.

Permanently install VS Code's server in container

Every time I start up a container to develop in it with VS Code's Remote - Containers extension the container has to re-download the vs-code-server. Is there any way to easily install the server within a Dockerfile so it doesn't have to reinstall every time?
If using docker-compose, you can create a volume for the .vscode-server folder, so that it is persisted across runs.
Something like (in .devcontainer/docker-compose.yml):
version: "3"
services:
app:
build:
context: .
dockerfile: Dockerfile
command:
- /bin/sh
- -c
- "while sleep 1000; do :; done"
volumes:
- ..:/workspace
- vscode-server:/home/code/.vscode-server
volumes:
vscode-server:
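After adding the volume, rebuild and restart the dev container once; the named volume, and the server cached inside it, survives subsequent rebuilds:

docker-compose -f .devcontainer/docker-compose.yml up -d --build

or use the Remote-Containers: Rebuild Container command from VS Code.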

How can I drop into a bash shell with docker-compose?

Intention:
I am trying to figure out how to use the command docker-compose up --build to build a machine with an Apache/PHP 7.1 service acting as a web server and an MSSQL service as the database for the app served from the web server.
I need to be able to type commands into the Docker container while I build this development environment (so that I can look at the PHP logs, etc.). But since the web server is running, my terminal is occupied with the output from the web server, and when I press Ctrl+Z it actually puts the Docker process in the background.
The question:
Is there any way that I can run this docker-compose.yml file and have it drop me into a shell on the guest machine?
Service 1:
webserver:
  image: php:7.1-apache
  stdin_open: true
  tty: true
  volumes:
    - ./www:/var/www/html
  ports:
    - 5001:80
Service 2:
mssql:
  image: microsoft/mssql-server-linux
  stdin_open: true
  tty: true
  environment:
    - ACCEPT_EULA=Y
  volumes:
    - ./www:/var/www/html
  ports:
    - 1433:1433
  depends_on:
    - webserver
You can exec a new process in a running container.
docker-compose exec [options] SERVICE COMMAND [ARGS...]
In your case
docker-compose exec webserver bash
The -i and -t options from docker exec are assumed in docker-compose exec.
First of all, same as docker run, docker-compose has an option for running the services/containers in the background.
docker-compose up --detach --build
After that, list the running containers with:
docker-compose ps
And connect to the container as @Matt already answered.
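Putting it together for this setup (the log command is an example; the php:7.1-apache image sends Apache/PHP output to the container logs):

docker-compose up --detach --build   # web server runs in the background
docker-compose ps                    # confirm both services are up
docker-compose logs -f webserver     # follow the web server output
docker-compose exec webserver bash   # drop into a shell in the container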
