Using docker-compose to deploy a TDengine cluster on Docker Swarm, CREATE MNODE ON DNODE dnode_id gets no response: the command neither takes effect nor reports an error.
Environment:
OS: docker image tdengine/tdengine:3.0.2.2
8G Memory, i5-1135G7, 512G SSD
TDengine Version: 3.0.2.2
The cluster is a Docker Swarm TDengine cluster built with docker-compose.
SHOW MNODES and SHOW DNODES both behave normally.
Executing CREATE MNODE ON DNODE dnode_id produces no response, and there is no exception in the logs.
I expect to be able to create the mnode normally, so that the cluster becomes highly available.
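For context, the exact sequence in the taos shell looks like this (the dnode id 2 is only an example taken from the SHOW DNODES output):

taos> SHOW DNODES;
taos> SHOW MNODES;
taos> CREATE MNODE ON DNODE 2;

The last statement simply hangs: no result set, no error.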
Docker compose file:
version: "3.9"
services:
  td-1:
    build:
      dockerfile: ./docker/tdengine.Dockerfile
      args:
        TAOSD_VER: 3.0.2.2
        TZ: Asia/Shanghai
    image: localhost:5000/kun/tdengine
    networks:
      - inter
    environment:
      TAOS_FQDN: "td-1"
      TAOS_FIRST_EP: "td-1"
      TAOS_SECOND_EP: "td-2"
    volumes:
      - taosdata-td1:/var/lib/taos/
      - taoslog-td1:/var/log/taos/
    deploy:
      placement:
        constraints:
          - node.hostname==manager
  td-2:
    image: localhost:5000/kun/tdengine
    networks:
      - inter
    environment:
      TAOS_FQDN: "td-2"
      TAOS_FIRST_EP: "td-1"
      TAOS_SECOND_EP: "td-3"
    volumes:
      - taosdata-td2:/var/lib/taos/
      - taoslog-td2:/var/log/taos/
    deploy:
      placement:
        constraints:
          - node.hostname==server-01
  td-3:
    image: localhost:5000/kun/tdengine
    networks:
      - inter
    environment:
      TAOS_FQDN: "td-3"
      TAOS_FIRST_EP: "td-1"
      TAOS_SECOND_EP: "td-2"
    volumes:
      - taosdata-td3:/var/lib/taos/
      - taoslog-td3:/var/log/taos/
    deploy:
      placement:
        constraints:
          - node.hostname==server-02
  adapter:
    image: localhost:5000/kun/tdengine
    entrypoint: "taosadapter"
    networks:
      - inter
    environment:
      TAOS_FIRST_EP: "td-1"
      TAOS_SECOND_EP: "td-2"
    deploy:
      labels:
        caddy_0: localhost:6041
        caddy_0.reverse_proxy: adapter:6041
        caddy_1: localhost:6044
        caddy_1.reverse_proxy: adapter:6044
      mode: global
      placement:
        constraints:
          - node.role == manager
  caddy-docker-proxy:
    build:
      dockerfile: ./docker/caddy-docker-proxy.Dockerfile
    image: localhost:5000/kun/caddy-docker-proxy
    networks:
      - inter
    ports:
      - 6041:6041
      - 6044:6044/udp
      - 80:80
      - 5188:5188
    environment:
      - CADDY_INGRESS_NETWORKS=inter
      - CADDY_DOCKER_CADDYFILE_PATH=/etc/Caddyfile
    volumes:
      - caddy_data:/data
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      mode: global
      placement:
        constraints:
          - node.role == manager
networks:
  inter:
  host:
    external: true
volumes:
  taosdata-td1:
  taoslog-td1:
  taosdata-td2:
  taoslog-td2:
  taosdata-td3:
  taoslog-td3:
  caddy_data:
Related
I have a project that runs in 3 containers:
API-Container
Project-Container
MSSQL-Container
Since the container starts up with a fresh MSSQL database, I need to attach my existing DB to it.
So the question is: how can I attach it in my .yml file?
version: '3.4'
services:
  levsundt.project:
    ports:
      - "32333:80"
    image: ${DOCKER_REGISTRY-}levsundtproject
    build:
      context: .
      dockerfile: ./LevSundt.Project/Dockerfile
    environment:
      "ASPNETCORE_ENVIRONMENT": "Development"
      "ConnectionStrings:WebAppUserDbConnection": "Server=db;Database=LevSundtUsers;user id=web;password=webPassw0rd!; MultipleActiveResultSets=true;"
      "LevSundtBaseUrl": "http://levsundt.api"
    depends_on:
      - db
      - levsundt.api
  levsundt.api:
    ports:
      - "32330:80"
    image: ${DOCKER_REGISTRY-}levsundtapi
    build:
      context: .
      dockerfile: ./LevSundt.Api2/Dockerfile
    environment:
      "ASPNETCORE_ENVIRONMENT": "Development"
      "ConnectionStrings:LevSundtDbConnection": "Server=db;Database=LevSundtDomain; user id=api;password=apiPassw0rd!; MultipleActiveResultSets=true"
    depends_on:
      - db
  db:
    image: "mcr.microsoft.com/mssql/server:2019-latest"
    user: root
    ports:
      - "14330:1433"
    environment:
      MSSQL_SA_PASSWORD: "SqlPassw0rd!"
      ACCEPT_EULA: "Y"
    volumes:
      - C:\Temp\SqlVolume\data:/var/opt/mssql/data
      - C:\Temp\SqlVolume\log:/var/opt/mssql/log
      - C:\Temp\SqlVolume\secrets:/var/opt/mssql/secrets
    container_name: sql2019
    hostname: sql1
I would be glad if somebody could help me with this issue.
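One possible direction, sketched with placeholder file names: since the compose file above already mounts C:\Temp\SqlVolume\data into /var/opt/mssql/data, the existing .mdf/.ldf files could be copied there on the host and attached once the container is up, e.g.:

docker exec -it sql2019 /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P "SqlPassw0rd!" -Q "CREATE DATABASE LevSundtUsers ON (FILENAME = '/var/opt/mssql/data/LevSundtUsers.mdf'), (FILENAME = '/var/opt/mssql/data/LevSundtUsers_log.ldf') FOR ATTACH;"

The database name comes from the connection strings above; the physical file names are assumptions.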
My Express server is randomly unable to send responses to my React client app.
Here are the morgan logs:
GET /api/comments?withUsers=true - - ms - -
GET /api/categories - - ms - -
POST /api/posts - - ms - -
GET /api/posts - - ms - -
My server-side and client-side apps run in separate Docker containers.
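To rule out container-to-container networking, I can exec into the client container and curl the backend by its service name (a sketch; it assumes curl exists in the client image):

docker-compose exec client curl http://blog:4000/api/posts

The service name blog and port 4000 come from the compose file below.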
Here is my docker-compose file:
version: '3'
services:
  blog:
    container_name: blog
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - postgres
    environment:
      NODE_ENV: development
      PORT: 4000
    ports:
      - '4000:4000'
    volumes:
      - .:/usr/src/app
  client:
    stdin_open: true
    environment:
      - CHOKIDAR_USEPOLLING=true
    build:
      dockerfile: Dockerfile
      context: ./views
    ports:
      - '3000:3000'
    volumes:
      - ./views:/usr/src/app/views
  postgres:
    container_name: postgresql
    image: postgres:latest
    ports:
      - '5432:5432'
    volumes:
      - db-data:/var/lib/postgresql/data
      - ./sql_tables/tables.sql:/docker-entrypoint-initdb.d/dbinit.sql
    restart: always
    environment:
      POSTGRES_USER: db_user_is_her
      POSTGRES_PASSWORD: db_password_is_her
      POSTGRES_DB: blog
  pgadmin:
    container_name: pgadmin
    image: dpage/pgadmin4:latest
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: pgadmin_user_email_is_her
      PGADMIN_DEFAULT_PASSWORD: pgadmin_password_is_her
      PGADMIN_LISTEN_PORT: 80
    ports:
      - '8080:80'
    volumes:
      - pgadmin-data:/var/lib/pgadmin
    depends_on:
      - postgres
volumes:
  db-data:
  pgadmin-data:
  app:
Thank you for your help
I have a cluster with 2 instances. Both instances run a Postgres container, and each container's volume is linked to an Elastic File System access point.
I map the volume on both instances to /var/lib/postgresql/data, but the containers don't share their data.
Here is my configuration
docker-compose.yml
version: "3.0"
services:
  front:
    image: 540744822643.dkr.ecr.ap-southeast-1.amazonaws.com/front:latest
    links:
      - app:app.plasgate.com
    networks:
      - app
    container_name: front
    environment:
      - NODE_OPTIONS=--max-old-space-size=2048
    ports:
      - "8080:80"
    logging:
      driver: awslogs
      options:
        awslogs-group: sms-gateway
        awslogs-region: ap-southeast-1
        awslogs-stream-prefix: "front"
  app:
    image: 540744822643.dkr.ecr.ap-southeast-1.amazonaws.com/plasgate:latest
    links:
      - jasmin:jasmin
      - db:db
    networks:
      - app
    container_name: app
    environment:
      - PYTHONUNBUFFERED=1
      - PYTHONIOENCODING=UTF-8
    restart: on-failure:10
    ports:
      - "5000:5000"
    logging:
      driver: awslogs
      options:
        awslogs-group: sms-gateway
        awslogs-region: ap-southeast-1
        awslogs-stream-prefix: "app"
  nginx:
    image: 540744822643.dkr.ecr.ap-southeast-1.amazonaws.com/nginx:latest
    links:
      - app:app
      - front:front
    container_name: nginx
    networks:
      - app
    environment:
      API_HOST: "service.wpdevelop.xyz"
      API_PORT: 5000
      FRONT_HOST: "customer.wpdevelop.xyz"
      FRONT_PORT: 8080
    ports:
      - "80:80"
      - "443:443"
    logging:
      driver: awslogs
      options:
        awslogs-group: sms-gateway
        awslogs-region: ap-southeast-1
        awslogs-stream-prefix: "nginx"
  db:
    image: 540744822643.dkr.ecr.ap-southeast-1.amazonaws.com/postgres:latest
    volumes:
      - postgres:/var/lib/postgresql/data:rw
    restart: on-failure:10
    networks:
      - app
    environment:
      POSTGRES_PASSWORD: "xxx#2020"
      POSTGRES_USER: webadmin
      POSTGRES_DB: smsgwdev
    ports:
      - "5432:5432"
    logging:
      driver: awslogs
      options:
        awslogs-group: sms-gateway
        awslogs-region: ap-southeast-1
        awslogs-stream-prefix: "db"
  redis:
    image: 540744822643.dkr.ecr.ap-southeast-1.amazonaws.com/radis:latest
    container_name: redis
    restart: on-failure:10
    networks:
      - app
    ports:
      - "6379:6379"
    logging:
      driver: awslogs
      options:
        awslogs-group: sms-gateway
        awslogs-region: ap-southeast-1
        awslogs-stream-prefix: "redis"
volumes:
  postgres:
networks:
  app:
    driver: bridge
ecs-params.yml
version: 1
task_definition:
  family: sms-gateway
  ecs_network_mode: bridge
  services:
    front:
      essential: true
      cpu_shares: 100
      mem_limit: 2147483648
      healthcheck:
        test: ["CMD-SHELL", "curl -f http://localhost || exit 1"]
        interval: 5s
        timeout: 10s
        retries: 3
        start_period: 30s
    app:
      essential: false
      cpu_shares: 100
      mem_limit: 2147483648
      healthcheck:
        test: ["CMD-SHELL", "curl -f http://localhost:5000 || exit 1"]
        interval: 5s
        timeout: 10s
        retries: 3
        start_period: 30s
      depends_on:
        - container_name: db
          condition: HEALTHY
    nginx:
      essential: false
      cpu_shares: 100
      mem_limit: 2147483648
      healthcheck:
        test: ["CMD-SHELL", "curl -f http://localhost || exit 1"]
        interval: 5s
        timeout: 10s
        retries: 3
        start_period: 30s
    db:
      essential: false
      cpu_shares: 100
      mem_limit: 2147483648
      healthcheck:
        test: pg_isready -U webadmin -d smsgwdev
        interval: 5s
        timeout: 10s
        retries: 2
        start_period: 30s
    redis:
      essential: false
      cpu_shares: 100
      mem_limit: 2147483648
      healthcheck:
        test: ["CMD-SHELL", "redis-cli", "ping"]
        interval: 5s
        timeout: 10s
        retries: 2
        start_period: 30s
  efs_volumes:
    - name: postgres
      filesystem_id: fs-a4aa73e4
      transit_encryption: ENABLED
      access_point: fsap-007405b3e9bc7bc2f
How can I make the two Postgres containers use the same pgdata?
First and foremost, running Postgres on EFS is not a great idea. I think it's fine if you need something quick and for very low loads in test environments, but EFS is not the right backend for a database engine. Second, sharing an EFS share between 2 containers is an even worse idea. This setup is a no-no because each database would get simultaneous, non-arbitrated access to the same data files, and that is not how Postgres is supposed to work.
Second, you don't call it out explicitly, but are you using the ecs-cli to get this deployed? If so, my suggestion would be to look at an alternative mechanism we (AWS) have introduced together with Docker, which relies on the new Docker Compose capabilities to deploy to the cloud (e.g. ECS). The successor of the ecs-cli is called Copilot, and it has moved away from Docker Compose support. Note that the new Docker Compose integration does not need a separate ecs-params file for now (albeit there are discussions about introducing one) and relies on x-aws- extensions in the docker-compose file itself.
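For illustration, a minimal sketch of what those top-level extensions look like (the VPC and cluster names here are placeholders):

x-aws-vpc: "vpc-0123456789abcdef0"
x-aws-cluster: "my-ecs-cluster"
services:
  db:
    image: postgres:latest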
Third, regardless of whether this is a good idea or not (it's not!), on the heels of and inspired by this example, the following simple compose file allows you to deploy 2 x Postgres containers that share the same data directory:
version: '3.4'
services:
  db1:
    container_name: db1
    image: postgres:latest
    environment:
      - POSTGRES_USER=me
      - POSTGRES_PASSWORD=mypassword
    volumes:
      - my-vol:/var/lib/postgresql/data
  db2:
    depends_on:
      - db1
    container_name: db2
    image: postgres:latest
    environment:
      - POSTGRES_USER=me
      - POSTGRES_PASSWORD=mypassword
    volumes:
      - my-vol:/var/lib/postgresql/data
  app:
    container_name: app
    image: nginx
volumes:
  my-vol:
If you docker compose up in an ECS context (see the blog for more details), you will get 3 ECS services (1 x app/nginx and 2 x DB services), with the 2 DB services pointing at the same EFS Access Point. Again, this is just an academic example to prove a working docker-compose file. I DO NOT SUGGEST using this in any meaningful deployment.
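As a side note, if the volume needs to map onto the existing file system from the ecs-params above rather than a newly provisioned one, the integration accepts volume driver options for that; a sketch, assuming the filesystem_id volume option from the integration's docs:

volumes:
  my-vol:
    driver_opts:
      filesystem_id: fs-a4aa73e4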
[UPDATE]: I have just noticed you only have 1 postgres in the compose above. So I assume you have two separate compose files, each with one postgres DB pointing at the same file system. All I said above still applies, but note there is a limitation that will prevent you from even technically deploying this scenario.
So I'm running a web app which consists of 3 services with docker-compose:
A mongodb database container.
A nodejs backend.
A nginx container with static build folder which serves a react app.
Locally it runs fine and I'm very happy; when trying to deploy to a VPS, I'm facing an issue.
I've set the VPS's nginx to reverse proxy to port 8000, which serves the React app. That works as expected, but I cannot send requests to the backend: when I'm logged in to the VPS I can curl the backend and it responds, but when the web app sends requests, they hang.
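The host nginx site config is essentially this (a simplified sketch; the server_name is a placeholder):

server {
    listen 80;
    server_name example.com;

    # everything goes to the react app published on port 8000
    location / {
        proxy_pass http://127.0.0.1:8000;
    }
}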
My docker-compose:
version: '3.7'
services:
  server:
    build:
      context: ./server
      dockerfile: Dockerfile
    image: server
    container_name: node-server
    command: /usr/src/app/node_modules/.bin/nodemon server.js
    depends_on:
      - mongo
    env_file: ./server/.env
    ports:
      - '8080:4000'
    environment:
      - NODE_ENV=production
    networks:
      - app-network
  mongo:
    image: mongo:4.2.7-bionic
    container_name: database
    hostname: mongo
    environment:
      - MONGO_INITDB_ROOT_USERNAME=...
      - MONGO_INITDB_ROOT_PASSWORD=...
      - MONGO_INITDB_DATABASE=admin
    restart: always
    ports:
      - 27017:27017
    volumes:
      - ./mongo/init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro
    networks:
      - app-network
  client:
    build:
      context: ./client
      dockerfile: prod.Dockerfile
    image: client-build
    container_name: react-client-build
    env_file: ./client/.env
    depends_on:
      - server
    ports:
      - '8000:80'
    networks:
      - app-network
networks:
  app-network:
    driver: bridge
volumes:
  data-volume:
  node_modules:
  web-root:
    driver: local
My application has three containers: db, frontend-web (React), and backend-api.
How can I get my backend-api address in frontend-web?
Here is my compose file
version: '2'
services:
  db:
    image: postgres
    ports:
      - "5432:5432"
  web:
    build: .
    stdin_open: true
    volumes:
      - .:/usr/src/app
    ports:
      - "3000:3000"
    environment:
      - API_URL=http://api:8080/
    links:
      - api
    depends_on:
      - api
  api:
    build: ./api
    stdin_open: true
    volumes:
      - ./api:/usr/src/app
    ports:
      - "8080:3000"
    links:
      - db
    depends_on:
      - db
I can't reach the API either way: neither via the hostname api nor via process.env.API_URL.
Add the container name to the service description as follows:
api:
  build: ./api
  container_name: api
  stdin_open: true
  volumes:
    - ./api:/usr/src/app
  ports:
    - "8080:3000"
  links:
    - db
  depends_on:
    - db
You can then use the container name as a host name to connect to. See https://docs.docker.com/compose/compose-file/#/containername
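One detail worth noting, given the "8080:3000" mapping in the compose file above: container-to-container requests go to the container's internal port, not the host-published one, so from inside the web container the URL would look like this (a sketch):

environment:
  - API_URL=http://api:3000/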
I am assuming that the server in the web container just serves static HTML and does not act as a proxy for the api container's server.
So, since you mapped the ports to the host machine, you can use the host machine's name/IP to reach the api server.
If your host machine's name is app.myserver.dev, you can use the config below for your API_URL env var and Docker will do the work for you:
web:
  build: .
  stdin_open: true
  volumes:
    - .:/usr/src/app
  ports:
    - "3000:3000"
  environment:
    - API_URL=http://app.myserver.dev:8080/
  links:
    - api
  depends_on:
    - api