Docker-compose v3 not persisting postgres database
I'm having difficulty persisting postgres data after a docker-compose v3 container is brought down and restarted. This seems to be a common problem, but after a lot of searching I have not been able to find a solution that works.
My question is similar to this one: How to persist data in a dockerized postgres database using volumes, but the solution there does not work for me, so please don't close this as a duplicate. I'm going to walk through all the steps below to replicate the problem.
Here is my docker-compose file:
version: "3"
services:
  db:
    image: postgres:latest
    environment:
      POSTGRES_DB: zennify
      POSTGRES_USER: patientplatypus
      POSTGRES_PASSWORD: SUPERSECRETPASSWORD
    volumes:
      - pgdata:/var/lib/postgresql/data:rw
    ports:
      - 5432:5432
  app:
    build: .
    command: ["go", "run", "main.go"]
    ports:
      - 8081:8081
    depends_on:
      - db
    links:
      - db
volumes:
  pgdata:
Here is the terminal output after I bring it up and write to my database:
patientplatypus:~/Documents/zennify.me/backend:08:54:03$docker-compose up
Starting backend_db_1 ... done
Starting backend_app_1 ... done
Attaching to backend_db_1, backend_app_1
db_1 | 2018-08-19 13:54:53.661 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2018-08-19 13:54:53.661 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2018-08-19 13:54:53.664 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2018-08-19 13:54:53.692 UTC [24] LOG: database system was shut down at 2018-08-19 13:54:03 UTC
db_1 | 2018-08-19 13:54:53.712 UTC [1] LOG: database system is ready to accept connections
app_1 | db init successful
app_1 | create_userinfo_table started
app_1 | create_userinfo_table finished
app_1 | inside RegisterUser in Golang
app_1 | here is the users email:
app_1 | %s pwoiioieind@gmail.com
app_1 | here is the users password:
app_1 | %s ANOTHERSECRETPASSWORD
app_1 | value of randSeq, 7NLHzuVRuTSxYZyNP6MxPqdvS0qy1L6k
app_1 | search_userinfo_table started
app_1 | value of OKtoAdd, %t true
app_1 | last inserted id = 1 //I inserted in database!
app_1 | value of initUserRet, added
I can also connect to postgres in another terminal tab and verify that the database was written to correctly using psql -h 0.0.0.0 -p 5432 -U patientplatypus zennify. Here is the output of the userinfo table:
zennify=# TABLE userinfo
;
email | password | regstring | regbool | uid
-----------------------+--------------------------------------------------------------+----------------------------------+---------+-----
pwoiioieind@gmail.com | $2a$14$u.mNBrITUJaVjly15BOV9.Q9XmELYRjYQbhEUi8i4vLWtOr9QnXJ6 | r33ik3Jtf0m9U3zBRelFoWyYzpQp7KzR | f | 1
(1 row)
So writing to the database once works!
HOWEVER
Let's now do the following:
$docker-compose stop
backend_app_1 exited with code 2
db_1 | 2018-08-19 13:55:51.585 UTC [1] LOG: received smart shutdown request
db_1 | 2018-08-19 13:55:51.589 UTC [1] LOG: worker process: logical replication launcher (PID 30) exited with exit code 1
db_1 | 2018-08-19 13:55:51.589 UTC [25] LOG: shutting down
db_1 | 2018-08-19 13:55:51.609 UTC [1] LOG: database system is shut down
backend_db_1 exited with code 0
From reading other threads on this topic, using docker-compose stop (as opposed to docker-compose down) should persist the local database. However, if I run docker-compose up again and then, without writing a new value to the database, simply query the table in postgres, it is empty:
zennify=# TABLE userinfo;
email | password | regstring | regbool | uid
-------+----------+-----------+---------+-----
(0 rows)
I had thought that I might be overwriting the table in my code during the initialization step, but I only have this (golang snippet):
_, err2 := db.Exec("CREATE TABLE IF NOT EXISTS userinfo(email varchar(40) NOT NULL, password varchar(240) NOT NULL, regString varchar(32) NOT NULL, regBool bool NOT NULL, uid serial NOT NULL);")
Which should, of course, only create the table if it has not been previously created.
Does anyone have any suggestions on what is going wrong? As far as I can tell this has to be a problem with docker-compose and not my code. Thanks for the help!
EDIT:
I've looked into using version 2 of the compose file format, following the format shown in this post (Docker compose not persisting data), with the following compose file:
version: "2"
services:
  app:
    build: .
    command: ["go", "run", "main.go"]
    ports:
      - "8081:8081"
    depends_on:
      - db
    links:
      - db
  db:
    image: postgres:latest
    environment:
      POSTGRES_DB: zennify
      POSTGRES_USER: patientplatypus
      POSTGRES_PASSWORD: SUPERSECRETPASSWORD
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data/
volumes:
  pgdata: {}
Unfortunately again I get the same problem - the data can be written but does not persist between shutdowns.
EDIT EDIT:
Just a quick note that using docker-compose up to instantiate the service and then relying on docker-compose stop and docker-compose start has no material effect on the persistence of the data. It still does not persist across restarts.
EDIT EDIT EDIT:
Couple more things I've been finding out. If you want to properly exec into the docker container to see the value of the database you can do the following:
docker exec -it backend_db_1 psql -U patientplatypus -W zennify
where backend_db_1 is the name of the database container, patientplatypus is my username, and zennify is the name of the database in the database container.
I've also tried, with no luck, to add a network bridge to the docker-compose file as follows:
version: "3"
services:
  db:
    build: ./db
    image: postgres:latest
    environment:
      POSTGRES_USER: patientplatypus
      POSTGRES_PASSWORD: SUPERSECRET
      POSTGRES_DB: zennify
    ports:
      - 5432:5432
    volumes:
      - ./db/pgdata:/var/lib/postgresql/data
    networks:
      - mynet
  app:
    build: ./
    command: bash -c 'while !</dev/tcp/db/5432; do sleep 5; done; go run main.go'
    ports:
      - 8081:8081
    depends_on:
      - db
    links:
      - db
    networks:
      - mynet
networks:
  mynet:
    driver: "bridge"
My current working theory is that, for whatever reason, my golang container is writing the postgres values it has to local storage rather than the shared volume, and I don't know why. Here is my latest idea of what the golang open command should look like:

data.InitDB("postgres://patientplatypus:SUPERSECRET@db:5432/zennify?sslmode=disable")
...
func InitDB(dataSourceName string) {
    db, err := sql.Open("postgres", dataSourceName)
    if err != nil {
        log.Fatal(err)
    }
    ...
}
Again this works, but it does not persist the data.
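As an aside, a connection string like the one above can be assembled with net/url instead of by hand, which avoids escaping mistakes with passwords. This is only a sketch; the credentials are the ones from the compose file, and "db" is the compose service name that Docker's embedded DNS resolves inside the network.

```go
package main

import (
	"fmt"
	"net/url"
)

// buildDSN assembles a Postgres connection URL from its parts.
func buildDSN(user, pass, host string, port int, dbname string) string {
	u := url.URL{
		Scheme: "postgres",
		User:   url.UserPassword(user, pass),
		Host:   fmt.Sprintf("%s:%d", host, port),
		Path:   "/" + dbname,
	}
	q := url.Values{}
	q.Set("sslmode", "disable") // no TLS between containers on the compose network
	u.RawQuery = q.Encode()
	return u.String()
}

func main() {
	fmt.Println(buildDSN("patientplatypus", "SUPERSECRET", "db", 5432, "zennify"))
	// → postgres://patientplatypus:SUPERSECRET@db:5432/zennify?sslmode=disable
}
```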
I had exactly the same problem with a postgres db and a Django app running with docker-compose.
It turns out that the Dockerfile of my app used an entrypoint that executed python manage.py flush, which clears all data in the database. As this runs every time the app container starts, it cleared all the data. It had nothing to do with docker-compose.
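For anyone checking their own setup against this: the culprit was an entrypoint script shaped roughly like the sketch below. The script name and final command are illustrative, not taken from the question; the key detail is the flush running on every start.

```sh
#!/bin/sh
# entrypoint.sh (hypothetical) - executed on EVERY container start
python manage.py flush --no-input   # deletes all rows from all tables!
python manage.py migrate
exec "$@"
```

Removing the flush line (or moving it to a one-off init job) keeps the data; the volume was never the problem.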
Docker named volumes are persisted with the original docker-compose file you are using.
version: "3"
services:
  db:
    image: postgres:latest
    environment:
      POSTGRES_DB: zennify
      POSTGRES_USER: patientplatypus
      POSTGRES_PASSWORD: SUPERSECRETPASSWORD
    volumes:
      - pgdata:/var/lib/postgresql/data:rw
    ports:
      - 5432:5432
volumes:
  pgdata:
How to prove it?
1) Run docker-compose up -d to create container and volume.
docker-compose up -d
Creating network "docker_default" with the default driver
Creating volume "docker_pgdata" with default driver
Pulling db (postgres:latest)...
latest: Pulling from library/postgres
be8881be8156: Already exists
bcc05f43b4de: Pull complete
....
Digest: sha256:0bbfbfdf7921a06b592e7dc24b3816e25edfe6dbdad334ed227cfd64d19bdb31
Status: Downloaded newer image for postgres:latest
Creating docker_db_1 ... done
2) Write a file on the volume location
docker-compose exec db /bin/bash -c 'echo "File is persisted" > /var/lib/postgresql/data/file-persisted.txt'
3) run docker-compose down
Notice that running down removes no volumes, only containers and the network, as per the documentation. You would need to run it with -v to remove volumes.
Stopping docker_db_1 ... done
Removing docker_db_1 ... done
Removing network docker_default
Also notice your volume still exists
docker volume ls | grep pgdata
local docker_pgdata
4) Run docker-compose up -d again to start containers and remount volumes.
5) See file is still in the volume
docker-compose exec db /bin/bash -c 'ls -la /var/lib/postgresql/data | grep persisted '
-rw-r--r-- 1 postgres root 18 Aug 20 04:40 file-persisted.txt
Named volumes are not host volumes. Read the documentation or look up some articles to understand the difference. Also note that Docker manages where it stores the files of named volumes, and you can use different drivers, but for the moment it is best to just learn the basic difference.
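To make the difference concrete, here is a sketch of the two mount styles in a compose file; only the volumes entries differ (the bind-mount path is illustrative):

```yaml
services:
  db:
    image: postgres:latest
    volumes:
      # named volume: Docker manages where the files live ("docker volume ls")
      - pgdata:/var/lib/postgresql/data
      # host (bind) mount alternative: files live at this exact host path
      # - ./db/pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```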
Under Environment Variables:
PGDATA
This optional environment variable can be used to define another location - like a subdirectory - for the database files. The default is /var/lib/postgresql/data, but if the data volume you're using is a fs mountpoint (like with GCE persistent disks), Postgres initdb recommends a subdirectory (for example /var/lib/postgresql/data/pgdata ) be created to contain the data.
more info here --> https://hub.docker.com/_/postgres/
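If your data volume is a fs mountpoint, the subdirectory approach the docs recommend would look roughly like this in compose terms (an untested sketch reusing the names from the question):

```yaml
services:
  db:
    image: postgres:latest
    environment:
      POSTGRES_DB: zennify
      POSTGRES_USER: patientplatypus
      POSTGRES_PASSWORD: SUPERSECRETPASSWORD
      # point initdb at a subdirectory of the mounted volume
      PGDATA: /var/lib/postgresql/data/pgdata
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```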
Related
Docker compose fails to spin up new SQL Server but docker run is ok
I have the following docker compose:

version: "3.6"
services:
  db:
    image: "mcr.microsoft.com/mssql/server:2019-CU9-ubuntu-18.04"
    user: root
    environment:
      SA_PASSWORD: "mysupersecretpassword"
      ACCEPT_EULA: "Y"
    ports:
      - "1433:1433"
    volumes:
      - db:/var/opt/mssql
volumes:
  db:
    driver: local

When I do a "docker compose up -d --build", the container starts but then dies with the message unknown package id. When I run it manually, it starts up:

PS C:\Users\me\widgets\server> docker run -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=mysupersecretpassword' -e 'MSSQL_PID=Developer' --cap-add SYS_PTRACE -p 1433:1433 -d mcr.microsoft.com/mssql/server:2019-CU9-ubuntu-18.04

Can you tell me what I'm missing in the compose? Host is Windows 10.

EDIT 1

Here's what happens when I do docker compose up:

PS C:\Users\me\widgets\server> docker compose up -d
[+] Running 2/2
- Network server_default  Created  0.0s
- Container server-db-1   Started  0.4s

This is what I see in the logs for the container in docker desktop:

Attaching to server-db-1
server-db-1 | SQL Server 2019 will run as non-root by default.
server-db-1 | This container is running as user root.
server-db-1 | Your master database file is owned by root.
server-db-1 | To learn more visit https://go.microsoft.com/fwlink/?linkid=2099216.
server-db-1 | ERROR: Unknown package id
server-db-1 |
server-db-1 exited with code 1
Error connecting to Postgres database running on Docker: "dial tcp: [...] no such host"
Problem

$ go run cmd/syndicate/main.go
2021/01/25 16:37:25 error connecting to database: dial tcp: lookup db: no such host

Unable to connect to database when attempting to run:

$ go run cmd/syndicate/main.go
2021/01/25 16:37:25 error connecting to database: dial tcp: lookup db: no such host

and

$ migrate -source file://migrations -database postgres://postgres:secret@db:5432/syndicate?sslmode=disable up
error: dial tcp: lookup db on [2001:558:feed::1]:53: no such host

What do these two commands have in common?... The database URL. I am nearly certain my database URL is incorrect.

I have verified my postgres container is running:

$ docker ps
CONTAINER ID   IMAGE      COMMAND                  CREATED      STATUS      PORTS                    NAMES
4e578bf646c7   adminer    "entrypoint.sh docke…"   3 days ago   Up 3 days   0.0.0.0:8080->8080/tcp   syndicate_adminer_1
729fc179aa6f   postgres   "docker-entrypoint.s…"   3 days ago   Up 3 days   5432/tcp                 syndicate_db_1

Here's where I might be overlooking something...

$ docker-compose ps
Name                  Command                          State   Ports
-------------------------------------------------------------------------------------
syndicate_adminer_1   entrypoint.sh docker-php-e ...   Up      0.0.0.0:8080->8080/tcp
syndicate_db_1        docker-entrypoint.sh postgres    Up      5432/tcp

5432/tcp??? I see that my adminer container is clearly mapped to my local port (0.0.0.0:8080->8080/tcp), however my postgres container is only showing 5432/tcp (and not 0.0.0.0:5432->5432/tcp). I am new to docker. Can anyone explain why my postgres port isn't associated with my local port? Am I on the right track?
Here's my docker-compose.yml:

version: "3.8"
services:
  db:
    image: postgres
    environment:
      POSTGRES_DB: $POSTGRES_DB
      POSTGRES_USER: $POSTGRES_USER
      POSTGRES_PASSWORD: $POSTGRES_PASSWORD
  migrate:
    image: migrate/migrate
    volumes:
      - ./migrations:/migrations
    depends_on:
      - db
    command: -source=file://migrations -database postgres://$POSTGRES_USER:$POSTGRES_PASSWORD@db:5432/$POSTGRES_DB?sslmode=disable up
  adminer:
    image: adminer
    restart: always
    ports:
      - "8080:8080"
    environment:
      ADMINER_DEFAULT_SERVER: db
    depends_on:
      - db

PS. I tried adding a ports: "5432:5432" entry for the db service. Browse my repository at this time in history. Thank you! Connor
Add to the db service:

ports:
  - "5432:5432"
migrate -path D:/works/go/go-fiber-api-server/backend/platform/migrations -database "postgres://postgres:password@cgapp-postgres:5432/postgres?sslmode=disable" up
error: dial tcp: lookup cgapp-postgres: no such host

Then I fixed this issue by changing the db host name (cgapp-postgres) to an IP address or host.docker.internal:

migrate -path D:/works/go/go-fiber-api-server/backend/platform/migrations -database "postgres://postgres:password@100.100.100.100:5432/postgres?sslmode=disable" up
1/u create_init_tables (20.0987ms)

migrate -path D:/works/go/go-fiber-api-server/backend/platform/migrations -database "postgres://postgres:password@host.docker.internal:5432/postgres?sslmode=disable" up
1/u create_init_tables (20.0987ms)

"host.docker.internal" works.
What configuration should I provide in docker-compose.yml to allow a spring boot docker container to connect to a remote database?
I try to start 2 containers with the following docker compose file:

version: '2'
services:
  client-app:
    image: client-app:latest
    build: ./client-app/Dockerfile
    volumes:
      - ./client-app:/usr/src/app
    ports:
      - 3000:8000
  spring-boot-server:
    build: ./spring-boot-server/Dockerfile
    volumes:
      - ./spring-boot-server:/usr/src/app
    ports:
      - 7000:7000

The spring boot server tries to connect to a remote database server which is on another host and network. Docker successfully starts the client-app container but fails to start the spring-boot-server. This log shows that the server crashed because it failed to connect to the remote database:

2021-01-25 21:02:28.393 INFO 1 --- [main] com.zaxxer.hikari.HikariDataSource: HikariPool-1 - Starting...
2021-01-25 21:02:29.553 ERROR 1 --- [main] com.zaxxer.hikari.pool.HikariPool: HikariPool-1 - Exception during pool initialization.
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.

The Dockerfiles of both containers create valid images from which I can run the containers manually. It looks like there are some default network restrictions on containers started by a compose file.

Docker compose version running on Ubuntu: docker-compose version 1.8.0, build unknown

FURTHER INVESTIGATIONS: I had created a Dockerfile

FROM ubuntu
RUN apt-get update
RUN apt-get install -y mysql-client
CMD mysql -A -P 3306 -h 8.8.8.8 --user=root --password=mypassword -e "SELECT VERSION()" mydatabase

along with a docker-compose.yml

version: '2'
services:
  test-remote-db-compose:
    build: .
    ports:
      - 8000:8000

to test the connectivity with the remote database on its own. The test passed with success.
The problem was mysteriously solved after doing this: a host machine reboot and docker-compose up --build.
How to run golang-migrate with docker-compose?
In golang-migrate's documentation, it is stated that you can run this command to run all the migrations in one folder:

docker run -v {{ migration dir }}:/migrations --network host migrate/migrate -path=/migrations/ -database postgres://localhost:5432/database up 2

How would you do this to fit the syntax of the new docker-compose, which discourages the use of --network? And more importantly: how would you connect to a database in another container instead of one running on your localhost?
Adding this to your docker-compose.yml will do the trick:

db:
  image: postgres
  networks:
    new:
      aliases:
        - database
  environment:
    POSTGRES_DB: mydbname
    POSTGRES_USER: mydbuser
    POSTGRES_PASSWORD: mydbpwd
  ports:
    - "5432"
migrate:
  image: migrate/migrate
  networks:
    - new
  volumes:
    - .:/migrations
  command: ["-path", "/migrations", "-database", "postgres://mydbuser:mydbpwd@database:5432/mydbname?sslmode=disable", "up", "3"]
  links:
    - db
networks:
  new:

Instead of using the --network host option of docker run, you set up a network called new. All the services inside that network gain access to each other through a defined alias (in the above example, you can access the db service through the database alias). Then you can use that alias just like you would use localhost, that is, in place of an IP address. That explains this connection string:

"postgres://mydbuser:mydbpwd@database:5432/mydbname?sslmode=disable"
The answer provided by @Federico worked for me at first; nevertheless, I realised that I got a connect: connection refused the first time the docker-compose was run in a brand new environment, but not the second time. This means the migrate container runs before the database is ready to process operations. Since migrate/migrate from docker-hub runs the "migration" command as soon as it starts, it's not possible to add a wait_for_it.sh script to wait for the db to be ready. So we have to add depends_on and healthcheck tags to manage the execution order. This is my docker-compose file:

version: '3.3'
services:
  db:
    image: postgres
    networks:
      new:
        aliases:
          - database
    environment:
      POSTGRES_DB: mydbname
      POSTGRES_USER: mydbuser
      POSTGRES_PASSWORD: mydbpwd
    ports:
      - "5432"
    healthcheck:
      test: pg_isready -U mydbuser -d mydbname
      interval: 10s
      timeout: 3s
      retries: 5
  migrate:
    image: migrate/migrate
    networks:
      - new
    volumes:
      - .:/migrations
    command: ["-path", "/migrations", "-database", "postgres://mydbuser:mydbpwd@database:5432/mydbname?sslmode=disable", "up", "3"]
    links:
      - db
    depends_on:
      - db
networks:
  new:
As of Compose file format version 2 you do not have to set up a network. As stated in the docker networking documentation:

By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.

So in your case you could do something like:

version: '3.8'
services:
  # note: this service name is what we will use instead of localhost when
  # using migrate, as compose assigns the service name as the host.
  # for example, if another container in the same compose file wanted to
  # access this service's port 2000, it would use databaseservicename:2000
  databaseservicename:
    image: postgres:13.3-alpine
    restart: always
    ports:
      - "5432"
    environment:
      POSTGRES_PASSWORD: password
      POSTGRES_USER: username
      POSTGRES_DB: database
    volumes:
      - pgdata:/var/lib/postgresql/data
  # likewise, another container in this compose file could reach the
  # migrate container at, say, port 1000 as migrate:1000
  migrate:
    image: migrate/migrate
    depends_on:
      - databaseservicename
    volumes:
      - path/to/you/migration/folder/in/local/computer:/database
    # here, instead of localhost as the host, we use databaseservicename,
    # the name we gave to the postgres service
    command: [ "-path", "/database", "-database", "postgres://databaseusername:databasepassword@databaseservicename:5432/database?sslmode=disable", "up" ]
volumes:
  pgdata:
Docker-Compose SQL Server database persist data after host restart
My docker-compose.yml:

version: "3"
services:
  db:
    image: microsoft/mssql-server-linux:2017-CU8
    ports:
      - 1433:1433
    deploy:
      mode: replicated
      replicas: 1
    environment:
      - ACCEPT_EULA=Y
      - MSSQL_SA_PASSWORD=SuperStrongSqlAdminPassword)(*£)($£)
    volumes:
      - /home/mssql/:/var/opt/mssql/
      - /var/opt/mssql/data

As you can see I have a volume mapped to a directory on the host machine: /home/mssql/:/var/opt/mssql/. If I do docker stack deploy -c docker-compose.yml [stack name], the server starts and I can see data is written to the host directory /home/mssql/*. I then connect to the server and create a database, tables, and some data. If I then kill the stack using docker stack rm [stack name], or restart the host for maintenance reasons etc., when SQL Server starts up again, although /home/mssql/* still contains the files created by the server initially, if I connect to the server the database/tables/data are gone. Do I have to re-attach the database somehow when the server restarts, or is there something else I'm missing? Thanks