Problem orchestrating Apache Flink and Kafka with Docker Compose - flink-streaming

I am new to Flink and trying to create a docker-compose configuration that contains Kafka along with Flink TaskManager and JobManager containers.
After starting the containers with docker-compose, I try to consume Kafka messages from the Flink consumer, but the consumer receives nothing.
I have tried multiple variations of the compose configuration, but nothing has worked.
version: "2.1"
networks: app-tier:
driver: bridge
services: jobmanager:
image: ${FLINK_DOCKER_IMAGE_NAME:-flink}
expose:
- "6123"
ports:
- "8081:8081"
command: jobmanager
links:
- "kafka:kafka"
environment:
- JOB_MANAGER_RPC_ADDRESS=jobmanager
networks:
- app-tier taskmanager:
image: ${FLINK_DOCKER_IMAGE_NAME:-flink}
expose:
- "6121"
- "6122"
depends_on:
- jobmanager
command: taskmanager
links:
- "jobmanager:jobmanager"
- "kafka:kafka"
environment:
- JOB_MANAGER_RPC_ADDRESS=jobmanager
networks:
- app-tier
kafka: image: wurstmeister/kafka:0.10.2.0 hostname: kafka ports:
- "9092:9092" environment:
KAFKA_ADVERTISED_HOST_NAME: localhost
KAFKA_ADVERTISED_PORT: 9092
KAFKA_CREATE_TOPICS: "EVENT_STREAM_INPUT:1:1,EVENT_STREAM_OUTPUT:1:1,"
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181 depends_on:
- zookeeper networks:
- app-tier
zookeeper: image: zookeeper restart: always hostname: zoo1 ports:
- "2181:2181" environment:
ZOO_MY_ID: 1 networks:
- app-tier
The containers start and can communicate: I can ping the kafka container from inside the taskmanager and jobmanager containers. Nevertheless, the Flink consumer application is not able to consume Kafka messages.
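One detail worth double-checking (an observation about the config above, not something the question confirms): KAFKA_ADVERTISED_HOST_NAME is set to localhost, so the broker tells every client to reconnect to localhost:9092, and inside the jobmanager/taskmanager containers that address points back at the Flink container itself. A minimal sketch of the change, assuming the same wurstmeister image:

kafka:
  image: wurstmeister/kafka:0.10.2.0
  hostname: kafka
  ports:
    - "9092:9092"
  environment:
    # advertise the service name, which other containers on app-tier
    # can resolve, instead of localhost
    KAFKA_ADVERTISED_HOST_NAME: kafka
    KAFKA_ADVERTISED_PORT: 9092

The Flink job would then use kafka:9092 as its bootstrap server; a producer running on the host would need the name kafka in /etc/hosts (or a separate listener) to keep working.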

Related

TDengine 3.0.2.2 create mnode execution stuck

I use docker-compose to deploy a TDengine cluster on Docker Swarm. There is no response after executing CREATE MNODE ON DNODE dnode_id; the command neither completes nor reports an error.
Environment:
OS: Docker image tdengine/tdengine:3.0.2.2
8 GB memory, i5-1135G7, 512 GB SSD
TDengine version: 3.0.2.2
The cluster is built with docker-compose as a Docker Swarm TDengine cluster.
show mnodes and show dnodes behave normally.
There is no response after executing CREATE MNODE ON DNODE dnode_id, and there is no exception in the logs.
I expect to be able to create the mnode normally to achieve high availability of the cluster.
Docker compose file:
version: "3.9"
services:
td-1:
build:
dockerfile: ./docker/tdengine.Dockerfile
args:
TAOSD_VER: 3.0.2.2
TZ: Asia/Shanghai
image: localhost:5000/kun/tdengine
networks:
- inter
environment:
TAOS_FQDN: "td-1"
TAOS_FIRST_EP: "td-1"
TAOS_SECOND_EP: "td-2"
volumes:
- taosdata-td1:/var/lib/taos/
- taoslog-td1:/var/log/taos/
deploy:
placement:
constraints:
- node.hostname==manager
td-2:
image: localhost:5000/kun/tdengine
networks:
- inter
environment:
TAOS_FQDN: "td-2"
TAOS_FIRST_EP: "td-1"
TAOS_SECOND_EP: "td-3"
volumes:
- taosdata-td2:/var/lib/taos/
- taoslog-td2:/var/log/taos/
deploy:
placement:
constraints:
- node.hostname==server-01
td-3:
image: localhost:5000/kun/tdengine
networks:
- inter
environment:
TAOS_FQDN: "td-3"
TAOS_FIRST_EP: "td-1"
TAOS_SECOND_EP: "td-2"
volumes:
- taosdata-td3:/var/lib/taos/
- taoslog-td3:/var/log/taos/
deploy:
placement:
constraints:
- node.hostname==server-02
adapter:
image: localhost:5000/kun/tdengine
entrypoint: "taosadapter"
networks:
- inter
environment:
TAOS_FIRST_EP: "td-1"
TAOS_SECOND_EP: "td-2"
deploy:
labels:
caddy_0: localhost:6041
caddy_0.reverse_proxy: adapter:6041
caddy_1: localhost:6044
caddy_1.reverse_proxy: adapter:6044
mode: global
placement:
constraints:
- node.role == manager
caddy-docker-proxy:
build:
dockerfile: ./docker/caddy-docker-proxy.Dockerfile
image: localhost:5000/kun/caddy-docker-proxy
networks:
- inter
ports:
- 6041:6041
- 6044:6044/udp
- 80:80
- 5188:5188
environment:
- CADDY_INGRESS_NETWORKS=inter
- CADDY_DOCKER_CADDYFILE_PATH=/etc/Caddyfile
volumes:
- caddy_data:/data
- /var/run/docker.sock:/var/run/docker.sock
deploy:
mode: global
placement:
constraints:
- node.role == manager
networks:
inter:
host:
external: true
volumes:
taosdata-td1:
taoslog-td1:
taosdata-td2:
taoslog-td2:
taosdata-td3:
taoslog-td3:
caddy_data:
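One assumption worth verifying (not confirmed by the question): CREATE MNODE requires the target dnode to be fully reachable, so every dnode must be able to resolve every other dnode's TAOS_FQDN on the shared overlay network. In Swarm the service name resolves, but pinning the container hostname to match TAOS_FQDN makes the mapping explicit; a hedged sketch for one node:

td-2:
  image: localhost:5000/kun/tdengine
  hostname: td-2   # assumption: align the container hostname with TAOS_FQDN
  networks:
    - inter
  environment:
    TAOS_FQDN: "td-2"
    TAOS_FIRST_EP: "td-1"

Checking the taosd logs on the target dnode while the CREATE MNODE statement hangs should show whether that node ever receives the request.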

Why can I not POST from frontend to backend after containerizing my MERN (React+Node/Express+MongoDB) application?

I am new to Docker and containers in general, and I am trying to containerize a simple MERN-based todo list application. Locally on my PC, I can successfully send HTTP POST requests from my React frontend to my Node.js/Express backend and create a new todo item. I use the 'proxy' field in my client folder's package.json file, as shown below:
React starts up on port 3000, my API server starts up on 3001, and with the proxy field defined, all is good locally.
My issue arises when I containerize the three services (i.e. React, API server, and MongoDB). When I try to make the same fetch post request, I receive the following console error:
Here is the code for my docker-compose file; perhaps it is useful in finding a solution:
version: '3.7'
services:
  client:
    depends_on:
      - server
    build:
      context: ./client
      dockerfile: Dockerfile
    image: jlcomp03/rajant-client
    container_name: container_client
    command: npm start
    volumes:
      - ./client/src/:/usr/app/src
      - ./client/public:/usr/app/public
      # - /usr/app/node_modules
    ports:
      - "3000:3000"
    networks:
      - frontend
    stdin_open: true
    tty: true
  server:
    depends_on:
      - mongo
    build:
      context: ./server
      dockerfile: Dockerfile
    image: jlcomp03/rajant-server
    container_name: container_server
    # command: /usr/src/app/node_modules/.bin/nodemon server.js
    volumes:
      - ./server/src:/usr/app/src
      # - /usr/src/app/node_modules
    ports:
      - "3001:3001"
    links:
      - mongo
    environment:
      - NODE_ENV=development
      - MONGODB_CONNSTRING='mongodb://container_mongodb:27017/todo_db'
    networks:
      - frontend
      - backend
  mongo:
    image: mongo
    restart: always
    container_name: container_mongodb
    volumes:
      - mongo-data:/data/db
    ports:
      - "27017:27017"
    networks:
      - backend
volumes:
  mongo-data:
    driver: local
  node_modules:
  web-root:
    driver: local
networks:
  backend:
    driver: bridge
  frontend:
My intuition tells me the issue lies in some configuration parameter I am not addressing in my docker-compose.yml file. Please help!
Your proxy config won't work with containers because of its use of localhost.
The Docker bridge network docs provide some insight why:
Containers on the default bridge network can only access each other by IP addresses, unless you use the --link option, which is considered legacy. On a user-defined bridge network, containers can resolve each other by name or alias.
I'd suggest creating your own bridge network and communicating via container name or alias.
{
  "proxy": "http://container_server:3001"
}
Another option is to use http://host.docker.internal:3001.
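For completeness, a minimal sketch of the user-defined bridge setup suggested above, reusing the names from the question (the frontend network in the compose file already qualifies):

services:
  client:
    networks:
      - frontend
  server:
    container_name: container_server
    networks:
      - frontend
networks:
  frontend:
    driver: bridge   # user-defined bridge: containers resolve each other by name

With both services on the same user-defined network, the proxy target http://container_server:3001 resolves from inside the client container.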

Deploying with docker-compose. Frontend is not reaching backend

So I'm running a web app which consists of 3 services with docker-compose:
A MongoDB database container.
A Node.js backend.
An nginx container with a static build folder which serves a React app.
Locally it runs fine and I'm very happy, but when trying to deploy to a VPS I'm facing an issue.
I've set the VPS's nginx to reverse proxy to port 8000, which serves the React app. It runs as expected, but I cannot send requests to the backend: when I'm logged in to the VPS I can curl the backend and it responds, yet when the web app sends requests, they hang.
My docker-compose:
version: '3.7'
services:
  server:
    build:
      context: ./server
      dockerfile: Dockerfile
    image: server
    container_name: node-server
    command: /usr/src/app/node_modules/.bin/nodemon server.js
    depends_on:
      - mongo
    env_file: ./server/.env
    ports:
      - '8080:4000'
    environment:
      - NODE_ENV=production
    networks:
      - app-network
  mongo:
    image: mongo:4.2.7-bionic
    container_name: database
    hostname: mongo
    environment:
      - MONGO_INITDB_ROOT_USERNAME=...
      - MONGO_INITDB_ROOT_PASSWORD=...
      - MONGO_INITDB_DATABASE=admin
    restart: always
    ports:
      - 27017:27017
    volumes:
      - ./mongo/init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro
    networks:
      - app-network
  client:
    build:
      context: ./client
      dockerfile: prod.Dockerfile
    image: client-build
    container_name: react-client-build
    env_file: ./client/.env
    depends_on:
      - server
    ports:
      - '8000:80'
    networks:
      - app-network
networks:
  app-network:
    driver: bridge
volumes:
  data-volume:
  node_modules:
  web-root:
    driver: local
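One reading of the symptom (an assumption, not something the question confirms): the hanging requests come from the user's browser, which sits outside the compose network, so the backend must be reachable at the VPS's public address or proxied through the same nginx that serves the build; the name server/node-server only resolves between containers. The relevant mapping from the file above, annotated:

server:
  ports:
    - '8080:4000'   # container-to-container: server:4000
                    # from the browser: http://<vps-host>:8080, which the
                    # VPS firewall and nginx config must actually allow

If the React build bakes in an API URL of localhost:8080, it points at the visitor's own machine rather than the VPS.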

How to alter the GitHub docker-compose.yml to use Informix DB for Portus?

I am attempting to set up a secured repository for our internal Docker registry. GitHub has a ready-to-go docker-compose file; however, it uses MariaDB and Postgres, as highlighted below.
What would be the best practice for using a single Informix container to run the two databases backing Portus and the Docker registry?
I feel I have to post the entire docker-compose YAML for context. I am also not clear on whether I really need Clair for anything.
I am running this on an openSUSE Leap 15 system. Thank you!
I have been messing around with this, and as it's written the registry and Portus will not connect for some reason, but the underlying databases seem to work fine, and those are the bigger concern at the moment.
version: '2'
services:
  portus:
    build: .
    image: opensuse/portus:development
    command: bundle exec rails runner /srv/Portus/examples/development/compose/init.rb
    environment:
      - PORTUS_MACHINE_FQDN_VALUE=${MACHINE_FQDN}
      - PORTUS_PUMA_HOST=0.0.0.0:3000
      - PORTUS_CHECK_SSL_USAGE_ENABLED=false
      - PORTUS_SECURITY_CLAIR_SERVER=http://clair:6060
      - CCONFIG_PREFIX=PORTUS
      - PORTUS_DB_HOST=db
      - PORTUS_DB_PASSWORD=portus
      - PORTUS_DB_POOL=5
      - RAILS_SERVE_STATIC_FILES=true
    ports:
      - 3000:3000
    depends_on:
      - db
    links:
      - db
    volumes:
      - .:/srv/Portus
  background:
    image: opensuse/portus:development
    entrypoint: bundle exec rails runner /srv/Portus/bin/background.rb
    depends_on:
      - portus
      - db
    environment:
      - PORTUS_MACHINE_FQDN_VALUE=${MACHINE_FQDN}
      - PORTUS_SECURITY_CLAIR_SERVER=http://clair:6060
      # Theoretically not needed, but cconfig's been buggy on this...
      - CCONFIG_PREFIX=PORTUS
      - PORTUS_DB_HOST=db
      - PORTUS_DB_PASSWORD=portus
      - PORTUS_DB_POOL=5
    volumes:
      - .:/srv/Portus
    links:
      - db
  webpack:
    image: kkarczmarczyk/node-yarn:latest
    command: bash /srv/Portus/examples/development/compose/bootstrap-webpack
    working_dir: /srv/Portus
    volumes:
      - .:/srv/Portus
  clair:
    image: quay.io/coreos/clair:v2.0.2
    restart: unless-stopped
    depends_on:
      - postgres
    links:
      - postgres
    ports:
      - "6060-6061:6060-6061"
    volumes:
      - /tmp:/tmp
      - ./examples/compose/clair/clair.yml:/clair.yml
    command: [-config, /clair.yml]
  db:   # <-- highlighted in the question (MariaDB)
    image: library/mariadb:10.0.23
    command: mysqld --character-set-server=utf8 --collation-server=utf8_unicode_ci --init-connect='SET NAMES UTF8;' --innodb-flush-log-at-trx-commit=0
    environment:
      MYSQL_ROOT_PASSWORD: portus
  postgres:   # <-- highlighted in the question (Postgres)
    image: library/postgres:10-alpine
    environment:
      POSTGRES_PASSWORD: portus
  registry:
    image: library/registry:2.6
    environment:
      REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /registry_data
      REGISTRY_STORAGE_DELETE_ENABLED: "true"
      REGISTRY_HTTP_ADDR: 0.0.0.0:5000
      REGISTRY_HTTP_DEBUG_ADDR: 0.0.0.0:5001
      REGISTRY_AUTH_TOKEN_ROOTCERTBUNDLE: /etc/docker/registry/portus.crt
      REGISTRY_AUTH_TOKEN_REALM: http://${MACHINE_FQDN}:3000/v2/token
      REGISTRY_AUTH_TOKEN_SERVICE: ${MACHINE_FQDN}:${REGISTRY_PORT}
      REGISTRY_AUTH_TOKEN_ISSUER: ${MACHINE_FQDN}
      REGISTRY_NOTIFICATIONS_ENDPOINTS: >
        - name: portus
          url: http://${MACHINE_FQDN}:3000/v2/webhooks/events
          timeout: 2000ms
          threshold: 5
          backoff: 1s
    volumes:
      - /registry_data
      - ./examples/development/compose/portus.crt:/etc/docker/registry/portus.crt:ro
    ports:
      - ${REGISTRY_PORT}:5000
      - 5001:5001
    links:
      - portus
The databases seem to run fine, but on the setup side I am still what I would consider a novice with docker-compose and Informix.
Any pointers or documentation recommendations would be most helpful as well.
Unfortunately, Portus does not support Informix DB. See this link.
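For reference, Portus only ships database support for MySQL/MariaDB and PostgreSQL, so the practical route is to keep one of the two services already in the file. A hedged sketch, assuming a PORTUS_DB_ADAPTER variable that follows the same CCONFIG_PREFIX=PORTUS convention as the other settings above:

portus:
  environment:
    - PORTUS_DB_HOST=db
    - PORTUS_DB_PASSWORD=portus
    # assumed variable: selects the bundled ActiveRecord adapter
    # ("mysql2" for the MariaDB service, "postgresql" for postgres)
    - PORTUS_DB_ADAPTER=mysql2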

Multiple Docker container access to host database in docker compose

I have been researching how to connect multiple docker containers in the same compose file to a database (MySQL/MariaDB) on the local host. Currently, the database is containerized for development but production requires a separate database. Eventually, the database will be deployed to AWS or Azure.
There are lots of similar questions on SO, but none that seem to address this particular situation.
Given the existing docker-compose.yml
version: '3.1'
services:
  db:
    build:
    image: mariadb:10.3
    volumes:
      - "~/data/lib/mysql:/var/lib/mysql:Z"
  api:
    image: t-api:latest
    depends_on:
      - db
  web:
    image: t-web:latest
  scan:
    image: t-scan:latest
  proxy:
    build:
      context: .
      dockerfile: nginx.Dockerfile
    image: t-proxy
    depends_on:
      - web
    ports:
      - 80:80
All these services are reverse proxied behind nginx, with both the api and scan services requiring access to the database. There are other services requiring database access, not shown for simplicity.
The production compose file would be:
version: '3.1'
services:
  api:
    image: t-api:latest
    depends_on:
      - db
  web:
    image: t-web:latest
  scan:
    image: t-scan:latest
  proxy:
    build:
      context: .
      dockerfile: nginx.Dockerfile
    image: t-proxy
    depends_on:
      - web
    ports:
      - 80:80
If there were a single container requiring database access, I could just publish port 3306:3306, but that won't work for multiple containers.
Splitting up the containers breaks the reverse proxy and adds complexity to deployment and management. I've tried extra_hosts:
extra_hosts:
  - "myhost:xx.xx.xx.xx"
but this generates EAI_AGAIN DNS errors, which is strange because I can ping the host from inside the containers. I realize this may not be possible.
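On Docker 20.10+, one well-known approach is the host-gateway alias, which maps a name to the host's gateway IP without hard-coding an address; a minimal sketch for the two database clients (service names taken from the compose file above):

services:
  api:
    image: t-api:latest
    extra_hosts:
      - "host.docker.internal:host-gateway"
  scan:
    image: t-scan:latest
    extra_hosts:
      - "host.docker.internal:host-gateway"

Both services would then reach the host's MySQL/MariaDB at host.docker.internal:3306, provided the database listens on an interface the Docker bridge can reach rather than on 127.0.0.1 only.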
