How to make Postgres containers share data with Amazon EFS?

I have a cluster with 2 instances. Each instance runs a Postgres container whose volume is linked to an Elastic File System access point.
I map the volume on both instances to /var/lib/postgresql/data, but the containers don't share their data.
Here is my configuration:
docker-compose.yml
version: "3.0"
services:
front:
image: 540744822643.dkr.ecr.ap-southeast-1.amazonaws.com/front:latest
links:
- app:app.plasgate.com
networks:
- app
container_name: front
environment:
- NODE_OPTIONS=--max-old-space-size=2048
ports:
- "8080:80"
logging:
driver: awslogs
options:
awslogs-group: sms-gateway
awslogs-region: ap-southeast-1
awslogs-stream-prefix: "front"
app:
image: 540744822643.dkr.ecr.ap-southeast-1.amazonaws.com/plasgate:latest
links:
- jasmin:jasmin
- db:db
networks:
- app
container_name: app
environment:
- PYTHONUNBUFFERED=1
- PYTHONIOENCODING=UTF-8
restart: on-failure:10
ports:
- "5000:5000"
logging:
driver: awslogs
options:
awslogs-group: sms-gateway
awslogs-region: ap-southeast-1
awslogs-stream-prefix: "app"
nginx:
image: 540744822643.dkr.ecr.ap-southeast-1.amazonaws.com/nginx:latest
links:
- app:app
- front:front
container_name: nginx
networks:
- app
environment:
API_HOST: "service.wpdevelop.xyz"
API_PORT: 5000
FRONT_HOST: "customer.wpdevelop.xyz"
FRONT_PORT: 8080
ports:
- "80:80"
- "443:443"
logging:
driver: awslogs
options:
awslogs-group: sms-gateway
awslogs-region: ap-southeast-1
awslogs-stream-prefix: "nginx"
db:
image: 540744822643.dkr.ecr.ap-southeast-1.amazonaws.com/postgres:latest
volumes:
- postgres:/var/lib/postgresql/data:rw
restart: on-failure:10
networks:
- app
environment:
POSTGRES_PASSWORD: "xxx#2020"
POSTGRES_USER: webadmin
POSTGRES_DB: smsgwdev
ports:
- "5432:5432"
logging:
driver: awslogs
options:
awslogs-group: sms-gateway
awslogs-region: ap-southeast-1
awslogs-stream-prefix: "db"
redis:
image: 540744822643.dkr.ecr.ap-southeast-1.amazonaws.com/radis:latest
container_name: redis
restart: on-failure:10
networks:
- app
ports:
- "6379:6379"
logging:
driver: awslogs
options:
awslogs-group: sms-gateway
awslogs-region: ap-southeast-1
awslogs-stream-prefix: "redis"
volumes:
postgres:
networks:
app:
driver: bridge
ecs-params.yml
version: 1
task_definition:
  family: sms-gateway
  ecs_network_mode: bridge
  services:
    front:
      essential: true
      cpu_shares: 100
      mem_limit: 2147483648
      healthcheck:
        test: ["CMD-SHELL", "curl -f http://localhost || exit 1"]
        interval: 5s
        timeout: 10s
        retries: 3
        start_period: 30s
    app:
      essential: false
      cpu_shares: 100
      mem_limit: 2147483648
      healthcheck:
        test: ["CMD-SHELL", "curl -f http://localhost:5000 || exit 1"]
        interval: 5s
        timeout: 10s
        retries: 3
        start_period: 30s
      depends_on:
        - container_name: db
          condition: HEALTHY
    nginx:
      essential: false
      cpu_shares: 100
      mem_limit: 2147483648
      healthcheck:
        test: ["CMD-SHELL", "curl -f http://localhost || exit 1"]
        interval: 5s
        timeout: 10s
        retries: 3
        start_period: 30s
    db:
      essential: false
      cpu_shares: 100
      mem_limit: 2147483648
      healthcheck:
        test: pg_isready -U webadmin -d smsgwdev
        interval: 5s
        timeout: 10s
        retries: 2
        start_period: 30s
    redis:
      essential: false
      cpu_shares: 100
      mem_limit: 2147483648
      healthcheck:
        test: ["CMD-SHELL", "redis-cli", "ping"]
        interval: 5s
        timeout: 10s
        retries: 2
        start_period: 30s
  efs_volumes:
    - name: postgres
      filesystem_id: fs-a4aa73e4
      transit_encryption: ENABLED
      access_point: fsap-007405b3e9bc7bc2f
How can I make the two Postgres containers use the same pgdata?

First and foremost, running Postgres on EFS is not a great idea. I think it's fine if you need something quick and for very low loads in test environments, but EFS is not the right backend for a database engine. On top of that, sharing an EFS share between 2 containers is an even worse idea. This setup is a no-no because each database gets simultaneous, non-arbitrated access to the same data files, and that is not how Postgres is supposed to work.
Second, you don't call it out explicitly, but are you using the ecs-cli to get this deployed? If so, my suggestion would be to look at an alternative mechanism we (AWS) have introduced together with Docker, which relies on the new Docker Compose capabilities to deploy to the cloud (e.g. ECS). The new version of the ecs-cli is called Copilot, and it has moved away from Docker support. Note the new Docker Compose integration does not need a separate ecs-params file for now (albeit there are discussions to introduce one) and relies on x-aws- extensions in the docker compose file itself.
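To give you an idea, with that integration an existing EFS file system can be declared as an external volume straight in the compose file. This is only a sketch based on the integration's documented syntax, so double-check it against the blog/docs; the file system ID is the one from your ecs-params.yml:
# Sketch of the Docker Compose ECS integration: no ecs-params.yml needed,
# the EFS file system is referenced as an external volume by its ID.
version: "3.8"
services:
  db:
    image: postgres:latest
    volumes:
      - postgres:/var/lib/postgresql/data
volumes:
  postgres:
    external: true
    name: fs-a4aa73e4   # existing EFS file system ID from the question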
Third, regardless of whether this is a good idea or not (it's not!), on the heels of and inspired by this example, the following simple compose file allows you to deploy 2 x Postgres containers that share the same data directory:
version: '3.4'
services:
  db1:
    container_name: db1
    image: postgres:latest
    environment:
      - POSTGRES_USER=me
      - POSTGRES_PASSWORD=mypassword
    volumes:
      - my-vol:/var/lib/postgresql/data
  db2:
    depends_on:
      - db1
    container_name: db2
    image: postgres:latest
    environment:
      - POSTGRES_USER=me
      - POSTGRES_PASSWORD=mypassword
    volumes:
      - my-vol:/var/lib/postgresql/data
  app:
    container_name: app
    image: nginx
volumes:
  my-vol:
If you docker compose up in an ECS context (see the blog for more details) you will get 3 ECS services (1 x app/nginx and 2 x DB services), with the 2 DB services pointing to the same EFS access point. Again, this is just an academic example to prove a working docker compose file. I DO NOT SUGGEST using this in any meaningful deployment.
[UPDATE]: I have just noticed you only have 1 Postgres service in the compose above. So I assume you have two separate compose files, each with one Postgres DB pointing at the same file system. All I said above still applies, but note there is a limitation that will prevent you from even technically deploying this scenario.
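For completeness: if the actual goal is for both instances to see the same data, the supported pattern is a single Postgres service that every application container reaches over the network, rather than two postmasters opening the same PGDATA. A minimal sketch (service names and credentials here are made up for illustration):
# One writer, many clients: only the db service touches the data directory.
version: '3.4'
services:
  db:
    image: postgres:latest
    environment:
      - POSTGRES_USER=me
      - POSTGRES_PASSWORD=mypassword
    volumes:
      - my-vol:/var/lib/postgresql/data
  app1:
    image: nginx   # placeholder; talks to db:5432 over the network
    depends_on:
      - db
  app2:
    image: nginx   # placeholder
    depends_on:
      - db
volumes:
  my-vol: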

Related

TDengine 3.0.2.2: CREATE MNODE execution stuck

I use docker-compose to deploy a TDengine Swarm cluster. There is no response after executing CREATE MNODE ON DNODE dnode_id; the command neither completes nor reports an error.
Environment:
OS: docker image tdengine/tdengine:3.0.2.2
8 GB memory, i5-1135G7, 512 GB SSD
TDengine version: 3.0.2.2
I use docker-compose to build a Docker Swarm TDengine cluster.
show mnodes and show dnodes behave normally.
There is no response after executing CREATE MNODE ON DNODE dnode_id, and there are no exceptions in the logs.
I expect to be able to create the mnode normally, to achieve high availability of the cluster.
Docker compose file:
version: "3.9"
services:
td-1:
build:
dockerfile: ./docker/tdengine.Dockerfile
args:
TAOSD_VER: 3.0.2.2
TZ: Asia/Shanghai
image: localhost:5000/kun/tdengine
networks:
- inter
environment:
TAOS_FQDN: "td-1"
TAOS_FIRST_EP: "td-1"
TAOS_SECOND_EP: "td-2"
volumes:
- taosdata-td1:/var/lib/taos/
- taoslog-td1:/var/log/taos/
deploy:
placement:
constraints:
- node.hostname==manager
td-2:
image: localhost:5000/kun/tdengine
networks:
- inter
environment:
TAOS_FQDN: "td-2"
TAOS_FIRST_EP: "td-1"
TAOS_SECOND_EP: "td-3"
volumes:
- taosdata-td2:/var/lib/taos/
- taoslog-td2:/var/log/taos/
deploy:
placement:
constraints:
- node.hostname==server-01
td-3:
image: localhost:5000/kun/tdengine
networks:
- inter
environment:
TAOS_FQDN: "td-3"
TAOS_FIRST_EP: "td-1"
TAOS_SECOND_EP: "td-2"
volumes:
- taosdata-td3:/var/lib/taos/
- taoslog-td3:/var/log/taos/
deploy:
placement:
constraints:
- node.hostname==server-02
adapter:
image: localhost:5000/kun/tdengine
entrypoint: "taosadapter"
networks:
- inter
environment:
TAOS_FIRST_EP: "td-1"
TAOS_SECOND_EP: "td-2"
deploy:
labels:
caddy_0: localhost:6041
caddy_0.reverse_proxy: adapter:6041
caddy_1: localhost:6044
caddy_1.reverse_proxy: adapter:6044
mode: global
placement:
constraints:
- node.role == manager
caddy-docker-proxy:
build:
dockerfile: ./docker/caddy-docker-proxy.Dockerfile
image: localhost:5000/kun/caddy-docker-proxy
networks:
- inter
ports:
- 6041:6041
- 6044:6044/udp
- 80:80
- 5188:5188
environment:
- CADDY_INGRESS_NETWORKS=inter
- CADDY_DOCKER_CADDYFILE_PATH=/etc/Caddyfile
volumes:
- caddy_data:/data
- /var/run/docker.sock:/var/run/docker.sock
deploy:
mode: global
placement:
constraints:
- node.role == manager
networks:
inter:
host:
external: true
volumes:
taosdata-td1:
taoslog-td1:
taosdata-td2:
taoslog-td2:
taosdata-td3:
taoslog-td3:
caddy_data:

failed to initialize database, got error failed to connect to `host=db user= database=`: dial error (dial tcp xxxx: connect: connection refused)

I am getting this failed to initialize database error whenever I start up my Docker Compose services.
version: '3'
services:
  app:
    container_name: api
    build:
      context: .
      dockerfile: local.Dockerfile
    ports:
      - "9090:9090"
      - "40000:40000"
    security_opt:
      - "seccomp:unconfined"
    cap_add:
      - SYS_PTRACE
    restart: on-failure
    environment:
      PORT: 9090
      DB_CONN: "postgres://admin:pass@db:5432/test?sslmode=disable"
    volumes:
      - .:/app
    depends_on:
      - db
    links:
      - db
  db:
    image: postgres
    container_name: db
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: "admin"
      POSTGRES_PASSWORD: "pass"
      POSTGRES_DB: "test"
      TZ: "UTC"
      PGTZ: "UTC"
    volumes:
      - ./tmp:/var/lib/postgresql/data
I am using air for live reload; here is the air.toml file:
root="."
tmp_dir="tmp"
[build]
cmd="go build -gcflags=\"all=-N -l\" -o ./bin/main ."
bin="/app/bin"
full_bin="/app/bin/main"
log="air_errors.log"
include_ext=["go", "yaml"]
exclude_dir=["tmp"]
delay=1000
[log]
time=true
[misc]
clean_on_exit=true
package main

import (
	"log"
	"os"

	"gorm.io/driver/postgres"
	"gorm.io/gorm"
	"gorm.io/gorm/logger"
)

var Instance *gorm.DB

func main() {
	conn := os.Getenv("DB_CONN") // connection string set in docker-compose
	var err error
	Instance, err = gorm.Open(postgres.Open(conn), &gorm.Config{
		Logger: logger.New(
			log.New(os.Stdout, "", log.LstdFlags), logger.Config{
				LogLevel: logger.Info,
				Colorful: true,
			}),
	})
	if err != nil {
		panic("Cannot connect to DB: " + err.Error())
	}
}
The connection gets established if you save the code again and air live-reloads the application.
You need to wait until the Postgres database has been initialized.
Have a look at https://docs.docker.com/compose/compose-file/compose-file-v3/#healthcheck
Add a healthcheck for the db service:
healthcheck:
  test: ["CMD-SHELL", "pg_isready"]
  interval: 10s
  timeout: 5s
  retries: 5
And change depends_on as below:
depends_on:
  db:
    condition: service_healthy
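Putting the two pieces together, a sketch of the relevant parts of the compose file from the question (the pg_isready flags are an assumption based on the POSTGRES_USER/POSTGRES_DB values above, and the long depends_on form needs a docker-compose version that supports it):
# Sketch: db reports healthy via pg_isready, app only starts once it does.
services:
  db:
    image: postgres
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U admin -d test"]
      interval: 10s
      timeout: 5s
      retries: 5
  app:
    depends_on:
      db:
        condition: service_healthy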

My dockerized project's Express.js server randomly does not send a response to the client

I would be glad if somebody could help me with this issue.
My Express server is randomly unable to send a response to my React client app.
Here are the morgan logs:
GET /api/comments?withUsers=true - - ms - -
GET /api/categories - - ms - -
POST /api/posts - - ms - -
GET /api/posts - - ms - -
My server-side and client-side apps run in different Docker containers.
Here is my docker-compose file:
version: '3'
services:
  blog:
    container_name: blog
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - postgres
    environment:
      NODE_ENV: development
      PORT: 4000
    ports:
      - '4000:4000'
    volumes:
      - .:/usr/src/app
  client:
    stdin_open: true
    environment:
      - CHOKIDAR_USEPOLLING=true
    build:
      dockerfile: Dockerfile
      context: ./views
    ports:
      - '3000:3000'
    volumes:
      - ./views:/usr/src/app/views
  postgres:
    container_name: postgresql
    image: postgres:latest
    ports:
      - '5432:5432'
    volumes:
      - db-data:/var/lib/postgresql/data
      - ./sql_tables/tables.sql:/docker-entrypoint-initdb.d/dbinit.sql
    restart: always
    environment:
      POSTGRES_USER: db_user_is_her
      POSTGRES_PASSWORD: db_password_is_her
      POSTGRES_DB: blog
  pgadmin:
    container_name: pgadmin
    image: dpage/pgadmin4:latest
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: pgadmin_user_email_is_her
      PGADMIN_DEFAULT_PASSWORD: pgadmin_password_is_her
      PGADMIN_LISTEN_PORT: 80
    ports:
      - '8080:80'
    volumes:
      - pgadmin-data:/var/lib/pgadmin
    depends_on:
      - postgres
volumes:
  db-data:
  pgadmin-data:
  app:
Thank you for your help

Deploying with docker-compose. Frontend is not reaching backend

So I'm running a web app which consists of 3 services with docker-compose:
a MongoDB database container,
a Node.js backend,
an nginx container with a static build folder which serves a React app.
Locally it runs fine and I'm very happy; when trying to deploy to a VPS, I'm facing an issue.
I've set the VPS's nginx to reverse-proxy to port 8000, which serves the React app. It runs as expected, but I cannot send requests to the backend: when I'm logged in to the VPS I can curl it and it responds, but when the web app sends requests, they hang.
My docker-compose:
version: '3.7'
services:
  server:
    build:
      context: ./server
      dockerfile: Dockerfile
    image: server
    container_name: node-server
    command: /usr/src/app/node_modules/.bin/nodemon server.js
    depends_on:
      - mongo
    env_file: ./server/.env
    ports:
      - '8080:4000'
    environment:
      - NODE_ENV=production
    networks:
      - app-network
  mongo:
    image: mongo:4.2.7-bionic
    container_name: database
    hostname: mongo
    environment:
      - MONGO_INITDB_ROOT_USERNAME=...
      - MONGO_INITDB_ROOT_PASSWORD=...
      - MONGO_INITDB_DATABASE=admin
    restart: always
    ports:
      - 27017:27017
    volumes:
      - ./mongo/init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro
    networks:
      - app-network
  client:
    build:
      context: ./client
      dockerfile: prod.Dockerfile
    image: client-build
    container_name: react-client-build
    env_file: ./client/.env
    depends_on:
      - server
    ports:
      - '8000:80'
    networks:
      - app-network
networks:
  app-network:
    driver: bridge
volumes:
  data-volume:
  node_modules:
  web-root:
    driver: local

How to alter the GitHub docker-compose.yml to use Informix DB for Portus?

I am attempting to make a secured repo for our internal Docker registry. GitHub has a ready-to-go docker-compose; however, it uses MariaDB and Postgres, as highlighted below.
What would be the best practice to utilize the same Informix container to run 2 databases for the frontend and backend support of Portus and the Docker Registry?
I feel I have to post the entire docker-compose YAML for context. I am also not clear on whether I really need Clair for anything.
I am running this on an openSUSE Leap 15 system. Thank you!
I have been messing around with this, and as written the registry and Portus will not connect for some reason, but the underlying databases seem to work fine, and those are a bigger concern at this moment.
version: '2'
services:
  portus:
    build: .
    image: opensuse/portus:development
    command: bundle exec rails runner /srv/Portus/examples/development/compose/init.rb
    environment:
      - PORTUS_MACHINE_FQDN_VALUE=${MACHINE_FQDN}
      - PORTUS_PUMA_HOST=0.0.0.0:3000
      - PORTUS_CHECK_SSL_USAGE_ENABLED=false
      - PORTUS_SECURITY_CLAIR_SERVER=http://clair:6060
      - CCONFIG_PREFIX=PORTUS
      - PORTUS_DB_HOST=db
      - PORTUS_DB_PASSWORD=portus
      - PORTUS_DB_POOL=5
      - RAILS_SERVE_STATIC_FILES=true
    ports:
      - 3000:3000
    depends_on:
      - db
    links:
      - db
    volumes:
      - .:/srv/Portus
  background:
    image: opensuse/portus:development
    entrypoint: bundle exec rails runner /srv/Portus/bin/background.rb
    depends_on:
      - portus
      - db
    environment:
      - PORTUS_MACHINE_FQDN_VALUE=${MACHINE_FQDN}
      - PORTUS_SECURITY_CLAIR_SERVER=http://clair:6060
      # Theoretically not needed, but cconfig's been buggy on this...
      - CCONFIG_PREFIX=PORTUS
      - PORTUS_DB_HOST=db
      - PORTUS_DB_PASSWORD=portus
      - PORTUS_DB_POOL=5
    volumes:
      - .:/srv/Portus
    links:
      - db
  webpack:
    image: kkarczmarczyk/node-yarn:latest
    command: bash /srv/Portus/examples/development/compose/bootstrap-webpack
    working_dir: /srv/Portus
    volumes:
      - .:/srv/Portus
  clair:
    image: quay.io/coreos/clair:v2.0.2
    restart: unless-stopped
    depends_on:
      - postgres
    links:
      - postgres
    ports:
      - "6060-6061:6060-6061"
    volumes:
      - /tmp:/tmp
      - ./examples/compose/clair/clair.yml:/clair.yml
    command: [-config, /clair.yml]
  db:   # <-- highlighted: MariaDB backing Portus
    image: library/mariadb:10.0.23
    command: mysqld --character-set-server=utf8 --collation-server=utf8_unicode_ci --init-connect='SET NAMES UTF8;' --innodb-flush-log-at-trx-commit=0
    environment:
      MYSQL_ROOT_PASSWORD: portus
  postgres:   # <-- highlighted: Postgres backing Clair
    image: library/postgres:10-alpine
    environment:
      POSTGRES_PASSWORD: portus
  registry:
    image: library/registry:2.6
    environment:
      REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /registry_data
      REGISTRY_STORAGE_DELETE_ENABLED: "true"
      REGISTRY_HTTP_ADDR: 0.0.0.0:5000
      REGISTRY_HTTP_DEBUG_ADDR: 0.0.0.0:5001
      REGISTRY_AUTH_TOKEN_ROOTCERTBUNDLE: /etc/docker/registry/portus.crt
      REGISTRY_AUTH_TOKEN_REALM: http://${MACHINE_FQDN}:3000/v2/token
      REGISTRY_AUTH_TOKEN_SERVICE: ${MACHINE_FQDN}:${REGISTRY_PORT}
      REGISTRY_AUTH_TOKEN_ISSUER: ${MACHINE_FQDN}
      REGISTRY_NOTIFICATIONS_ENDPOINTS: >
        - name: portus
          url: http://${MACHINE_FQDN}:3000/v2/webhooks/events
          timeout: 2000ms
          threshold: 5
          backoff: 1s
    volumes:
      - /registry_data
      - ./examples/development/compose/portus.crt:/etc/docker/registry/portus.crt:ro
    ports:
      - ${REGISTRY_PORT}:5000
      - 5001:5001
    links:
      - portus
The databases seem to run fine, but I am still what I would consider a novice with docker-compose and Informix on the setup side.
Any pointers or documentation recommendations would be most helpful as well.
Unfortunately, Portus does not support Informix DB; see this link.
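Since Portus only ships database adapters for MariaDB/MySQL and PostgreSQL, the closest you can get to consolidating is to drop the MariaDB service and point Portus at the Postgres instance that Clair already requires. A hedged sketch of the changed environment entries (the PORTUS_DB_ADAPTER and PORTUS_DB_USERNAME variables are assumed from the Portus docs; verify them against your Portus version):
# Sketch: reuse the existing postgres service for Portus instead of MariaDB.
portus:
  environment:
    - PORTUS_DB_ADAPTER=postgresql   # assumed from Portus docs
    - PORTUS_DB_HOST=postgres
    - PORTUS_DB_USERNAME=postgres    # assumed from Portus docs
    - PORTUS_DB_PASSWORD=portus
    - PORTUS_DB_POOL=5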
