Redash: Unable to send invitation/password reset mails - redash

After setting the email env variables as per https://redash.io/help/open-source/setup (for AWS SES),
sudo docker-compose run --rm server manage send_test_mail
works, and I receive the email as well.
But invitation emails do not get sent.
On trying this command to send the invite directly:
sudo docker-compose run --rm server manage users invite x@x.com X admin@x.com
I get the following error:
raise RuntimeError('Application was not able to create a URL '
RuntimeError: Application was not able to create a URL adapter for request independent URL generation. You might be able to fix this by setting the SERVER_NAME config variable.
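The CLI runs outside an HTTP request, so Flask can only build absolute URLs from its SERVER_NAME setting. One thing worth checking (a hedged guess, not a confirmed fix): Flask's SERVER_NAME expects a bare host[:port] with no scheme, so if it is set via the env file it should look like:

```shell
# env file - hedged guess: SERVER_NAME must be host[:port], without https://
REDASH_SERVER_NAME=example.com
```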

From https://github.com/getredash/redash/issues/5266#issuecomment-847756246. Thanks to @kijimaD.
I was in the same situation. I found a way to send an invitation email.
After running docker-compose up, check the log when the invitation email is sent from the browser (excerpt below).
$ docker-compose up
...
nginx_1 | 172.31.42.153 - - [24/May/2021:10:59:25 +0000] "POST /api/users/124/reset_password HTTP/1.1" 200 122 "https://example.com/users/124" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.212 Safari/537.36" "110.66.19.160"
scheduler_1 | [2021-05-24 10:59:25,960][PID:16][ERROR][ForkPoolWorker-3] task_name=redash.tasks.send_mail task_id=39f69b3c-a109-43d5-bd31-c7dd99955427 Failed sending message: Reset your password
scheduler_1 | Traceback (most recent call last):
scheduler_1 | File "/app/redash/tasks/general.py", line 58, in send_mail
scheduler_1 | mail.send(message)
scheduler_1 | File "/usr/local/lib/python2.7/site-packages/flask_mail.py", line 491, in send
scheduler_1 | with self.connect() as connection:
scheduler_1 | File "/usr/local/lib/python2.7/site-packages/flask_mail.py", line 144, in __enter__
scheduler_1 | self.host = self.configure_host()
scheduler_1 | File "/usr/local/lib/python2.7/site-packages/flask_mail.py", line 158, in configure_host
scheduler_1 | host = smtplib.SMTP(self.mail.server, self.mail.port)
scheduler_1 | File "/usr/local/lib/python2.7/smtplib.py", line 256, in __init__
scheduler_1 | (code, msg) = self.connect(host, port)
scheduler_1 | File "/usr/local/lib/python2.7/smtplib.py", line 317, in connect
scheduler_1 | self.sock = self._get_socket(host, port, self.timeout)
scheduler_1 | File "/usr/local/lib/python2.7/smtplib.py", line 292, in _get_socket
scheduler_1 | return socket.create_connection((host, port), timeout)
scheduler_1 | File "/usr/local/lib/python2.7/socket.py", line 575, in create_connection
scheduler_1 | raise err
scheduler_1 | error: [Errno 99] Cannot assign requested address
scheduler_1 | [2021-05-24 10:59:25,961][PID:16][INFO][ForkPoolWorker-3] Task redash.tasks.send_mail[39f69b3c-a109-43d5-bd31-c7dd99955427] succeeded in 0.00195795716718s: None
server_1 | [2021-05-24 10:59:28,257][PID:12][INFO][metrics] method=GET path=/health_check endpoint=redash_index status=302 content_type=text/html; charset=utf-8 content_length=311 duration=1.80 query_count=0 query_duration=0.00
Obviously, the error is different from the one the test command produces.
This looks like an error where the scheduler_1 instance cannot read the mail host and port.
In other words, the instance does not read the environment variables.
Based on this error, I added the env_file entry to the worker services in docker-compose.yml.
After running docker-compose down && docker-compose up -d, I was able to successfully send the invitation email from my browser.
However, the test command docker-compose run --rm server manage users invite user@example.com test-user admin@example.com still fails with the same error message.
I have a question about the reliability of the test command in specific situations.
Clearly, the behavior of the test command differs from that of the actual browser flow.
This seems to have confused many people...
My advice to anyone facing the same problem: perform the action in the browser and read the actual error in the log, instead of checking with a test command.
I hope this helps others who are struggling with the same situation.
The following is my docker-compose and env configuration.
docker-compose
$ cat docker-compose.yml
version: "2"
x-redash-service: &redash-service
  image: redash/redash:8.0.0.b32245
  depends_on:
    - postgres
    - redis
  env_file: /opt/redash/env
  restart: always
services:
  server:
    <<: *redash-service
    command: server
    ports:
      - "5000:5000"
    environment:
      REDASH_WEB_WORKERS: 4
    env_file: /opt/redash/env
  scheduler:
    <<: *redash-service
    command: scheduler
    environment:
      QUEUES: "celery"
      WORKERS_COUNT: 1
      REDASH_WEB_WORKERS: 4
    env_file: /opt/redash/env # <------------- Add
  scheduled_worker:
    <<: *redash-service
    command: worker
    environment:
      QUEUES: "scheduled_queries,schemas"
      WORKERS_COUNT: 1
      REDASH_WEB_WORKERS: 4
    env_file: /opt/redash/env # <------------- Add
  adhoc_worker:
    <<: *redash-service
    command: worker
    environment:
      QUEUES: "queries"
      WORKERS_COUNT: 2
      REDASH_WEB_WORKERS: 4
    env_file: /opt/redash/env # <------------- Add
  redis:
    image: redis:5.0-alpine
    restart: always
  postgres:
    image: postgres:9.6-alpine
    env_file: /opt/redash/env
    volumes:
      - /opt/redash/postgres-data:/var/lib/postgresql/data
    restart: always
  nginx:
    image: redash/nginx:latest
    ports:
      - "80:80"
    depends_on:
      - server
    links:
      - server:redash
    restart: always
env
$ cat env
PYTHONUNBUFFERED=0
REDASH_LOG_LEVEL=INFO
REDASH_REDIS_URL=redis://redis:6379/0
POSTGRES_PASSWORD=...
REDASH_COOKIE_SECRET=...
REDASH_SECRET_KEY=...
REDASH_DATABASE_URL=...
# Mail
REDASH_MAIL_SERVER=...
REDASH_MAIL_PORT=...
REDASH_MAIL_USE_TLS=...
REDASH_MAIL_USERNAME=...
REDASH_MAIL_PASSWORD=...
REDASH_MAIL_DEFAULT_SENDER=info@example.com
REDASH_HOST=https://example.com
REDASH_SERVER_NAME=https://example.com
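Since a service silently falling back to localhost:25 is exactly what a missing env variable looks like, a small sanity check over the env file can catch gaps before the containers start. This is a hypothetical helper of my own (the REQUIRED list is my assumption, not an official Redash list):

```python
# check_env.py - hypothetical helper, not part of Redash
REQUIRED = [
    "REDASH_MAIL_SERVER",
    "REDASH_MAIL_PORT",
    "REDASH_MAIL_USERNAME",
    "REDASH_MAIL_PASSWORD",
    "REDASH_MAIL_DEFAULT_SENDER",
]

def missing_mail_keys(env_text):
    """Return the required mail keys that have no value in an env file's text."""
    present = set()
    for line in env_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        if value.strip():
            present.add(key.strip())
    return [k for k in REQUIRED if k not in present]

sample = "REDASH_MAIL_SERVER=email-smtp.us-east-1.amazonaws.com\nREDASH_MAIL_PORT=587\n"
print(missing_mail_keys(sample))
# ['REDASH_MAIL_USERNAME', 'REDASH_MAIL_PASSWORD', 'REDASH_MAIL_DEFAULT_SENDER']
```

Running it against /opt/redash/env before docker-compose up would have flagged the scheduler's missing mail settings immediately.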

Related

Docker compose fails to spin up new SQL Server but docker run is ok

I have the following docker compose:
version: "3.6"
services:
  db:
    image: "mcr.microsoft.com/mssql/server:2019-CU9-ubuntu-18.04"
    user: root
    environment:
      SA_PASSWORD: "mysupersecretpassword"
      ACCEPT_EULA: "Y"
    ports:
      - "1433:1433"
    volumes:
      - db:/var/opt/mssql
volumes:
  db:
    driver: local
when I do a "docker compose up -d --build", the container starts but then dies with the message unknown package id.
When I run it manually, it starts up:
PS C:\Users\me\widgets\server> docker run -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=mysupersecretpassword' -e 'MSSQL_PID=Developer' --cap-add SYS_PTRACE -p 1433:1433 -d mcr.microsoft.com/mssql/server:2019-CU9-ubuntu-18.04
Can you tell me what I'm missing in the compose?
Host is Windows 10.
EDIT 1
Here's what happens when I do docker compose up:
PS C:\Users\me\widgets\server> docker compose up -d
[+] Running 2/2
- Network server_default Created 0.0s
- Container server-db-1 Started 0.4s
This is what I see in the logs for the container in docker desktop:
Attaching to server-db-1
server-db-1 | SQL Server 2019 will run as non-root by default.
server-db-1 | This container is running as user root.
server-db-1 | Your master database file is owned by root.
server-db-1 | To learn more visit https://go.microsoft.com/fwlink/?linkid=2099216.
server-db-1 | ERROR: Unknown package id
server-db-1 |
server-db-1 exited with code 1
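A hedged guess based on the two commands above: the docker run that works passes MSSQL_PID=Developer and MSSQL_SA_PASSWORD explicitly, while the compose file does not, so mirroring those variables in the compose service may be worth trying:

```yaml
# sketch only - mirrors the env vars from the docker run that works
services:
  db:
    image: "mcr.microsoft.com/mssql/server:2019-CU9-ubuntu-18.04"
    environment:
      ACCEPT_EULA: "Y"
      MSSQL_SA_PASSWORD: "mysupersecretpassword"
      MSSQL_PID: "Developer"   # not set in the failing compose file
```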

What configuration should I provide in docker-compose.yml to allow a spring boot docker container to connect to a remote database?

I try to start 2 containers with the following docker compose file:
version: '2'
services:
  client-app:
    image: client-app:latest
    build: ./client-app/Dockerfile
    volumes:
      - ./client-app:/usr/src/app
    ports:
      - 3000:8000
  spring-boot-server:
    build: ./spring-boot-server/Dockerfile
    volumes:
      - ./spring-boot-server:/usr/src/app
    ports:
      - 7000:7000
The spring boot server tries to connect to a remote database server which is on another host and network. Docker successfully starts the client-app container but fails to start the spring-boot-server. The log shows that the server crashed because it failed to connect to the remote database:
2021-01-25 21:02:28.393 INFO 1 --- [main] com.zaxxer.hikari.HikariDataSource: HikariPool-1 - Starting...
2021-01-25 21:02:29.553 ERROR 1 --- [main] com.zaxxer.hikari.pool.HikariPool: HikariPool-1 - Exception during pool initialization.
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
The Dockerfiles of both containers build valid images, from which I can run the containers manually. It looks like there are some default network restrictions on containers started from a compose file.
Docker compose version running on Ubuntu:
docker-compose version 1.8.0, build unknown
=============================================
FURTHER INVESTIGATIONS:
I created a Dockerfile
FROM ubuntu
RUN apt-get update
RUN apt-get install -y mysql-client
CMD mysql -A -P 3306 -h 8.8.8.8 --user=root --password=mypassword -e "SELECT VERSION()" mydatabase
along with a docker-compose.yml
version: '2'
services:
  test-remote-db-compose:
    build: .
    ports:
      - 8000:8000
to test the connectivity with the remote database on its own. The test passed.
The problem was mysteriously solved after doing this, a host machine reboot, and docker-compose up --build.

User and database are not initialised though environment variables while deploying postgres in kubernetes

When I try to deploy postgres over kubernetes, I populate all the required environment variables, such as POSTGRES_USER, POSTGRES_PASSWORD, POSTGRES_DATABASE, etc., through configmap and secret files. The container is deployed successfully and I can open a terminal in it through the 'kubectl exec' command. When I run the 'env' command, all the environment variables supplied during deployment are set accordingly. But when I try to run:
psql -U testuser -d testdb
It gives two types of errors:
psql: could not connect to server: FATAL: role 'testuser' does not exist
psql: could not connect to server: FATAL: database 'testdb' does not exist
After doing a lot of research, I found that even after setting the environment variables, the user and database do not get initialised. Therefore, I created an init.sql file and added it to docker-entrypoint-initdb.d to build a custom image, push it to docker hub and deploy postgres through that image in kubernetes.
init.sql file content:
CREATE USER testuser WITH SUPERUSER PASSWORD 'test';
CREATE DATABASE testdb;
GRANT ALL PRIVILEGES ON DATABASE testdb TO testuser;
Dockerfile content:
FROM postgres:latest
ADD init.sql /docker-entrypoint-initdb.d/
EXPOSE 5432
Still, I get the same error; the user and database are not getting initialised... please help me out with this.
UPDATE:
Logs:
PostgreSQL Database directory appears to contain a database; Skipping initialization
2020-04-15 09:50:06.855 UTC [1] LOG: starting PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
2020-04-15 09:50:06.855 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2020-04-15 09:50:06.855 UTC [1] LOG: listening on IPv6 address "::", port 5432
2020-04-15 09:50:07.000 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2020-04-15 09:50:07.204 UTC [26] LOG: database system was interrupted; last known up at 2020-04-15 09:50:02 UTC
2020-04-15 09:50:07.781 UTC [26] LOG: database system was not properly shut down; automatic recovery in progress
2020-04-15 09:50:07.815 UTC [26] LOG: invalid record length at 0/16453B0: wanted 24, got 0
2020-04-15 09:50:07.815 UTC [26] LOG: redo is not required
2020-04-15 09:50:08.034 UTC [1] LOG: database system is ready to accept connections
Here, it is already mentioned: 'Skipping initialization'.
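That log line is the key: the official postgres entrypoint only runs the scripts in /docker-entrypoint-initdb.d when the data directory is empty, so a volume that already holds a cluster skips init.sql entirely. A toy sketch of that check (my own illustration, not the actual entrypoint code):

```python
import os
import tempfile

def should_run_init_scripts(pgdata):
    """Mimics the postgres entrypoint: init scripts run only if PGDATA is empty."""
    return not os.listdir(pgdata)

fresh = tempfile.mkdtemp()
print(should_run_init_scripts(fresh))   # True: empty volume, init.sql would run

used = tempfile.mkdtemp()
open(os.path.join(used, "PG_VERSION"), "w").close()
print(should_run_init_scripts(used))    # False: "Skipping initialization"
```

If this is the cause, deleting the underlying volume (or PVC) so the data directory starts out empty lets the init scripts run again.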
Can you try with this shell script? File name: init-user-db.sh
#!/bin/bash
set -e
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
CREATE USER docker;
CREATE DATABASE docker;
GRANT ALL PRIVILEGES ON DATABASE docker TO docker;
EOSQL
The Dockerfile looks something like this:
FROM postgres:latest
ADD init-user-db.sh /docker-entrypoint-initdb.d/init-user-db.sh
EXPOSE 5432
EDIT : 1
apiVersion: apps/v1
kind: StatefulSet
metadata:
spec:
  podManagementPolicy: OrderedReady
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: postgres
  serviceName: postgres
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: postgres
    spec:
      containers:
      - env:
        - name: POSTGRES_USER
          value: root
        - name: POSTGRES_PASSWORD
          value: <Password>
        - name: POSTGRES_DB
          value: <DB name>
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
        image: postgres:9.5
        imagePullPolicy: IfNotPresent
        name: postgres
        ports:
        - containerPort: 5432
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: postgres-data
          subPath: pgdata
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 60
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate
  volumeClaimTemplates:
  - metadata:
      creationTimestamp: null
      name: postgres-data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 3Gi
      volumeMode: Filesystem
    status:
      phase: Pending

Can't ping port 80 outside of docker container - recv failure - connection reset by peer

I'm creating a react app in a docker container. The project contains much more than that, but I'm stuck on exposing the react/nginx port 80 outside the container. I don't have this problem when I use another port like 3000 or 8080.
git clone https://chrisconnors@bitbucket.org/chrisconnors/mndspn.git
Then I just build the frontend with
docker-compose up --build -d frontend
After it's running, I can hit 0.0.0.0:80 in the container itself.
/ # wget 0.0.0.0:80
Connecting to 0.0.0.0:80 (0.0.0.0:80)
index.html 100% |******************************************| 548 0:00:00 ETA
However, when i hit that in the browser or curl from my terminal (outside the container), I get this error:
:~/src/mndspn$ curl --trace-ascii dump.txt 0.0.0.0:80
curl: (56) Recv failure: Connection reset by peer
:~/src/mndspn$ cat dump.txt
== Info: Rebuilt URL to: 0.0.0.0:80/
== Info: Trying 0.0.0.0...
== Info: TCP_NODELAY set
== Info: Connected to 0.0.0.0 (127.0.0.1) port 80 (#0)
=> Send header, 71 bytes (0x47)
0000: GET / HTTP/1.1
0010: Host: 0.0.0.0
001f: User-Agent: curl/7.58.0
0038: Accept: */*
0045:
== Info: Recv failure: Connection reset by peer
== Info: stopped the pause stream!
== Info: Closing connection 0
Just checked the docker compose file that you might be using; port 80 is not exposed in the frontend service.
https://bitbucket.org/chrisconnors/mndspn/src/4724d5c4a3d67fad9e2e7d84f2ec3916e75360f7/docker-compose.yml?at=master&fileviewer=file-view-default#docker-compose.yml-39
Uncomment the ports lines (lines 39-40):
ports:
  - "80:80"
Start the container again using docker-compose and you should be able to access the application.
Connection Reset to a Docker container usually indicates that you've defined a port mapping for the container that does not point to an application.
So, if you've defined a mapping of 80:80, check that your process inside the docker instance is in fact running on port 80 (netstat -an|grep LISTEN).
Ensure you have the option -p 80:80 in your docker run command
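To check the mapping-points-at-nothing case without guessing, a quick TCP probe from the host works too. A minimal sketch (the self-bound listener at the end is only there to demonstrate the function):

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# bind a throwaway listener just to show the check succeeding
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
_, port = srv.getsockname()
print(port_open("127.0.0.1", port))  # True: a listener is accepting here
srv.close()
```

Running `port_open("127.0.0.1", 80)` against the published port tells you whether the container's mapping actually reaches a listening process.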

Docker-compose v3 not persisting postgres database

I'm having difficulty persisting postgres data after a docker-compose v3 container is brought down and restarted. This seems to be a common problem, but after a lot of searching I have not been able to find a solution that works.
My question is similar to here: How to persist data in a dockerized postgres database using volumes, but the solution does not work - so please don't close. I'm going to walk through all the steps below to replicate the problem.
Here is my docker-compose file:
version: "3"
services:
  db:
    image: postgres:latest
    environment:
      POSTGRES_DB: zennify
      POSTGRES_USER: patientplatypus
      POSTGRES_PASSWORD: SUPERSECRETPASSWORD
    volumes:
      - pgdata:/var/lib/postgresql/data:rw
    ports:
      - 5432:5432
  app:
    build: .
    command: ["go", "run", "main.go"]
    ports:
      - 8081:8081
    depends_on:
      - db
    links:
      - db
volumes:
  pgdata:
Here is the terminal output after I bring it up and write to my database:
patientplatypus:~/Documents/zennify.me/backend:08:54:03$docker-compose up
Starting backend_db_1 ... done
Starting backend_app_1 ... done
Attaching to backend_db_1, backend_app_1
db_1 | 2018-08-19 13:54:53.661 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2018-08-19 13:54:53.661 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2018-08-19 13:54:53.664 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2018-08-19 13:54:53.692 UTC [24] LOG: database system was shut down at 2018-08-19 13:54:03 UTC
db_1 | 2018-08-19 13:54:53.712 UTC [1] LOG: database system is ready to accept connections
app_1 | db init successful
app_1 | create_userinfo_table started
app_1 | create_userinfo_table finished
app_1 | inside RegisterUser in Golang
app_1 | here is the users email:
app_1 | %s pwoiioieind@gmail.com
app_1 | here is the users password:
app_1 | %s ANOTHERSECRETPASSWORD
app_1 | value of randSeq, 7NLHzuVRuTSxYZyNP6MxPqdvS0qy1L6k
app_1 | search_userinfo_table started
app_1 | value of OKtoAdd, %t true
app_1 | last inserted id = 1 //I inserted in database!
app_1 | value of initUserRet, added
I can also connect to postgres in another terminal tab and verify that the database was written to correctly using psql -h 0.0.0.0 -p 5432 -U patientplatypus zennify. Here is the output of the userinfo table:
zennify=# TABLE userinfo
;
email | password | regstring | regbool | uid
-----------------------+--------------------------------------------------------------+----------------------------------+---------+-----
pwoiioieind@gmail.com | $2a$14$u.mNBrITUJaVjly15BOV9.Q9XmELYRjYQbhEUi8i4vLWtOr9QnXJ6 | r33ik3Jtf0m9U3zBRelFoWyYzpQp7KzR | f | 1
(1 row)
So writing to the database once works!
HOWEVER
Let's now do the following:
$docker-compose stop
backend_app_1 exited with code 2
db_1 | 2018-08-19 13:55:51.585 UTC [1] LOG: received smart shutdown request
db_1 | 2018-08-19 13:55:51.589 UTC [1] LOG: worker process: logical replication launcher (PID 30) exited with exit code 1
db_1 | 2018-08-19 13:55:51.589 UTC [25] LOG: shutting down
db_1 | 2018-08-19 13:55:51.609 UTC [1] LOG: database system is shut down
backend_db_1 exited with code 0
From reading the other threads on this topic, using docker-compose stop as opposed to docker-compose down should persist the local database. However, if I again run docker-compose up and then, without writing a new value to the database, simply query the table in postgres, it is empty:
zennify=# TABLE userinfo;
email | password | regstring | regbool | uid
-------+----------+-----------+---------+-----
(0 rows)
I had thought that I may have been overwriting the table in my code on the initialization step, but I only have (golang snippet):
_, err2 := db.Exec("CREATE TABLE IF NOT EXISTS userinfo(email varchar(40) NOT NULL, password varchar(240) NOT NULL, regString varchar(32) NOT NULL, regBool bool NOT NULL, uid serial NOT NULL);")
Which should, of course, only create the table if it has not been previously created.
Does anyone have any suggestions on what is going wrong? As far as I can tell this has to be a problem with docker-compose and not my code. Thanks for the help!
EDIT:
I've looked into using version 2 of docker and following the format shown in this post (Docker compose not persisting data) by using the following compose:
version: "2"
services:
  app:
    build: .
    command: ["go", "run", "main.go"]
    ports:
      - "8081:8081"
    depends_on:
      - db
    links:
      - db
  db:
    image: postgres:latest
    environment:
      POSTGRES_DB: zennify
      POSTGRES_USER: patientplatypus
      POSTGRES_PASSWORD: SUPERSECRETPASSWORD
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data/
volumes:
  pgdata: {}
Unfortunately again I get the same problem - the data can be written but does not persist between shutdowns.
EDIT EDIT:
Just a quick note that using docker-compose up to instantiate the service and then relying on docker-compose stop and docker-compose start has no material effect on persistence of the data. It still does not persist across restarts.
EDIT EDIT EDIT:
A couple more things I've found out. If you want to exec into the docker container to inspect the database, you can do the following:
docker exec -it backend_db_1 psql -U patientplatypus -W zennify
where backend_db_1 is the name of the database container, patientplatypus is my username, and zennify is the name of the database in the database container.
I've also tried, with no luck, to add a network bridge to the docker-compose file as follows:
version: "3"
services:
  db:
    build: ./db
    image: postgres:latest
    environment:
      POSTGRES_USER: patientplatypus
      POSTGRES_PASSWORD: SUPERSECRET
      POSTGRES_DB: zennify
    ports:
      - 5432:5432
    volumes:
      - ./db/pgdata:/var/lib/postgresql/data
    networks:
      - mynet
  app:
    build: ./
    command: bash -c 'while !</dev/tcp/db/5432; do sleep 5; done; go run main.go'
    ports:
      - 8081:8081
    depends_on:
      - db
    links:
      - db
    networks:
      - mynet
networks:
  mynet:
    driver: "bridge"
My current working theory is that, for whatever reason, my golang container is writing the postgres values it has to local storage rather than the shared volume and I don't know why. Here is my latest idea of what the golang open command should look like:
data.InitDB("postgres://patientplatypus:SUPERSECRET@db:5432/zennify?sslmode=disable")
...
func InitDB(dataSourceName string) {
    db, _ := sql.Open("postgres", dataSourceName) // sql.Open takes the driver name as well as the DSN
    ...
}
Again this works, but it does not persist the data.
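One small thing worth double-checking in postgres URLs generally: a trailing slash before the query string becomes part of the database name. Python's urllib (used here purely for illustration) shows how a driver would parse the two forms:

```python
from urllib.parse import urlparse

# the path component is what a driver treats as the database name
bad = urlparse("postgres://user:pass@db:5432/zennify/?sslmode=disable")
good = urlparse("postgres://user:pass@db:5432/zennify?sslmode=disable")
print(bad.path)   # /zennify/  -> database name "zennify/" (not what you want)
print(good.path)  # /zennify   -> database name "zennify"
```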
I had exactly the same problem with a postgres db and a Django app running with docker-compose.
It turns out that the Dockerfile of my app used an entrypoint which executed the command python manage.py flush, which clears all data in the database. As this runs every time the app container starts, it wiped the data. It had nothing to do with docker-compose.
Docker named volumes are persisted with the original docker-compose you are using.
version: "3"
services:
  db:
    image: postgres:latest
    environment:
      POSTGRES_DB: zennify
      POSTGRES_USER: patientplatypus
      POSTGRES_PASSWORD: SUPERSECRETPASSWORD
    volumes:
      - pgdata:/var/lib/postgresql/data:rw
    ports:
      - 5432:5432
volumes:
  pgdata:
How to prove it?
1) Run docker-compose up -d to create container and volume.
docker-compose up -d
Creating network "docker_default" with the default driver
Creating volume "docker_pgdata" with default driver
Pulling db (postgres:latest)...
latest: Pulling from library/postgres
be8881be8156: Already exists
bcc05f43b4de: Pull complete
....
Digest: sha256:0bbfbfdf7921a06b592e7dc24b3816e25edfe6dbdad334ed227cfd64d19bdb31
Status: Downloaded newer image for postgres:latest
Creating docker_db_1 ... done
2) Write a file on the volume location
docker-compose exec db /bin/bash -c 'echo "File is persisted" > /var/lib/postgresql/data/file-persisted.txt'
3) run docker-compose down
Notice that when you run down, it removes no volumes, just containers and the network, as per the documentation. You would need to run it with -v to remove volumes.
Stopping docker_db_1 ... done
Removing docker_db_1 ... done
Removing network docker_default
Also notice your volume still exists
docker volume ls | grep pgdata
local docker_pgdata
4) Run docker-compose up -d again to start containers and remount volumes.
5) See file is still in the volume
docker-compose exec db /bin/bash -c 'ls -la /var/lib/postgresql/data | grep persisted '
-rw-r--r-- 1 postgres root 18 Aug 20 04:40 file-persisted.txt
Named volumes are not host volumes. Read the documentation or look up some articles that explain the difference. Also, Docker manages where it stores named-volume files, and you can use different drivers, but for the moment it is best to just learn the basic difference.
Under Environment Variables:
PGDATA
This optional environment variable can be used to define another location - like a subdirectory - for the database files. The default is /var/lib/postgresql/data, but if the data volume you're using is a fs mountpoint (like with GCE persistent disks), Postgres initdb recommends a subdirectory (for example /var/lib/postgresql/data/pgdata ) be created to contain the data.
more info here --> https://hub.docker.com/_/postgres/
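Based on that note, a hedged sketch of how PGDATA combines with a named volume (the subdirectory keeps initdb happy when the mountpoint itself is a filesystem root):

```yaml
# sketch: point PGDATA at a subdirectory of the mounted volume
services:
  db:
    image: postgres:latest
    environment:
      PGDATA: /var/lib/postgresql/data/pgdata
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```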
