Basically, I've set up a web app with this stack: db: MySQL, frontend: React.js, backend: FastAPI (Python).
It's served over SSL, since the domain is proxied through Cloudflare.
NGINX is used for the service endpoints: api.domain.com is the API, domain.com is the frontend, plus some SSL key configuration.
Even though this is quote-unquote "production", I'm still running the React development server for prototyping.
**PROBLEM:**
I'm running this project on a VPS. When I update the frontend on the VPS, I delete all Docker containers and images via these commands:
```sh
docker rm -vf $(docker ps -aq)     # delete all containers and their volumes
docker rmi -f $(docker images -aq) # delete all images
```
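For reference, an equivalent scorched-earth cleanup can be done with Docker's built-in prune command; the flags below also remove volumes, so use with care:

```sh
# Remove all containers, images, networks, build cache, and volumes
# in one shot (destructive!)
docker system prune -af --volumes
```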
The changes remain stale. Caching in my browser is disabled as well, and I've tried multiple methods of clearing the cache; it's not that.
The webpack bundle.js still contains the old code when served by React. Specifically, I edited some endpoint constants for production, and those constants are stale in bundle.js! Changes to the backend (Python files) work just fine and show up after `docker-compose up -d` or `docker-compose up`, but React is acting up.
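One way to pin down where the staleness lives is to fetch the bundle directly from the dev server on the VPS, bypassing Cloudflare and the browser entirely (the port, path, and search string below are illustrative):

```sh
# If the old constant still shows up here, the image/build itself is
# stale, not any cache sitting in front of it
curl -s http://localhost:3000/static/js/bundle.js | grep -c "old.api.endpoint"
```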
I've tried:
```sh
docker-compose pull
docker-compose build --no-cache
docker-compose
```
I made sure all my frontend files were in fact saved.
No luck...
The API is running fine on HTTPS, and React.js is also running fine, but the changes are just stale. It's very weird.
docker-compose.yml:
```yaml
version: "3.9"
services:
  db:
    image: mysql:${MYSQL_VERSION}
    restart: always
    environment:
      - MYSQL_DATABASE=${MYSQL_DB}
      - MYSQL_USER=${MYSQL_USERNAME}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
    ports:
      - "${MYSQL_PORT}:${MYSQL_PORT}"
    expose:
      - "${MYSQL_PORT}"
    volumes:
      - db:/var/lib/mysql
    networks:
      - mysql_network
  backend:
    container_name: fastapi-backend
    build: ./backend/app
    volumes:
      - ./backend:/code
    ports:
      - "${FASTAPI_PORT}:${FASTAPI_PORT}"
    env_file:
      - .env
    depends_on:
      - db
    networks:
      - mysql_network
      - backend
    restart: always
  frontend:
    container_name: react-frontend
    build: ./frontend/client
    ports:
      - "${REACT_PORT}:${REACT_PORT}"
    depends_on:
      - backend
    networks:
      - backend
    restart: always
volumes:
  db:
    driver: local
networks:
  backend:
    driver: bridge
  mysql_network:
    driver: bridge
```
This was working for my last release; for some reason it just stopped updating. Docker is creating the images fine, just with stale code...
This is being run on Ubuntu 22.04 (a Debian-based Linux distribution).
**Related**
I'm new to Docker and containers in general. I'm trying to containerize a simple MERN-based todo list application. Locally on my PC, I can successfully send HTTP POST requests from my React frontend to my Node.js/Express backend and create a new todo item. I use the `proxy` field in my client folder's package.json file, as shown below:
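(The proxy field itself isn't reproduced here, but judging by the ports described below and the answer's mention of localhost, it was presumably along these lines:)

```json
{
  "proxy": "http://localhost:3001"
}
```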
React starts up on port 3000, my API server starts up on 3001, and with the proxy field defined, all is good locally.
My issue arises when I containerize the three services (i.e. React, the API server, and MongoDB). When I try to make the same fetch POST request, I receive a console error.
I will provide the code for my docker-compose file; perhaps it is useful for finding a solution?
```yaml
version: '3.7'
services:
  client:
    depends_on:
      - server
    build:
      context: ./client
      dockerfile: Dockerfile
    image: jlcomp03/rajant-client
    container_name: container_client
    command: npm start
    volumes:
      - ./client/src/:/usr/app/src
      - ./client/public:/usr/app/public
      # - /usr/app/node_modules
    ports:
      - "3000:3000"
    networks:
      - frontend
    stdin_open: true
    tty: true
  server:
    depends_on:
      - mongo
    build:
      context: ./server
      dockerfile: Dockerfile
    image: jlcomp03/rajant-server
    container_name: container_server
    # command: /usr/src/app/node_modules/.bin/nodemon server.js
    volumes:
      - ./server/src:/usr/app/src
      # - /usr/src/app/node_modules
    ports:
      - "3001:3001"
    links:
      - mongo
    environment:
      - NODE_ENV=development
      - MONGODB_CONNSTRING='mongodb://container_mongodb:27017/todo_db'
    networks:
      - frontend
      - backend
  mongo:
    image: mongo
    restart: always
    container_name: container_mongodb
    volumes:
      - mongo-data:/data/db
    ports:
      - "27017:27017"
    networks:
      - backend
volumes:
  mongo-data:
    driver: local
  node_modules:
  web-root:
    driver: local
networks:
  backend:
    driver: bridge
  frontend:
```
My intuition tells me the issue lies in some configuration parameter I am not addressing in my docker-compose.yml file. Please help!
Your proxy config won't work with containers because of its use of localhost.
The Docker bridge network docs provide some insight into why:
> Containers on the default bridge network can only access each other by IP addresses, unless you use the --link option, which is considered legacy. On a user-defined bridge network, containers can resolve each other by name or alias.
I'd suggest creating your own bridge network and communicating via container name or alias.
```json
{
  "proxy": "http://container_server:3001"
}
```
Another option is to use http://host.docker.internal:3001 (note that this hostname is available out of the box on Docker Desktop; on Linux it needs an extra host-gateway mapping).
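A minimal sketch of the user-defined network approach, reusing the service and container names from your compose file (the network name app_net is illustrative):

```yaml
services:
  client:
    build: ./client
    networks:
      - app_net          # same user-defined network as the server
  server:
    build: ./server
    container_name: container_server  # resolvable as a hostname on app_net
    networks:
      - app_net
networks:
  app_net:
    driver: bridge       # user-defined bridge: name/alias resolution works
```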
I've read all the posts around. I tried lower "react-scripts" versions (mine is 5.0.1), used CHOKIDAR_USEPOLLING: 'true', basically everything on the first two pages of Google.
Hot reloading still doesn't work. My docker-compose.yaml:
```yaml
version: '3.3'
services:
  database:
    container_name: mysql
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_ROOT_PASSWORD: toma123
      MYSQL_DATABASE: api
      MYSQL_USER: toma
      MYSQL_PASSWORD: toma123
    ports:
      - '4306:3306'
    volumes:
      - mysql-data:/var/lib/mysql
  php:
    container_name: php
    build:
      context: ./php
    ports:
      - '9000:9000'
    volumes:
      - ./../api:/var/www/api
    depends_on:
      - database
  nginx:
    container_name: nginx
    image: nginx:stable-alpine
    ports:
      - '8080:80'
    volumes:
      - ./../api:/var/www/api
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - php
      - database
  react:
    container_name: react
    build:
      context: ./../frontend
    ports:
      - '3001:3000'
    volumes:
      - node_modules:/home/app/node_modules
volumes:
  mysql-data:
    driver: local
  node_modules:
    driver: local
```
And my React Dockerfile:
```dockerfile
FROM node
RUN mkdir -p /home/app
WORKDIR /home/app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["npm", "start"]
```
Even if a lot of your application is in Docker, it doesn't mean you need to use Docker exclusively. Since one of Docker's primary goals is to prevent containers from accessing host files, it can be tricky to convince it to emulate a normal host live-development environment.
Install Node on your host. It's likely you have this anyway, or you can trivially apt-get install or brew install it.
Start everything except the front-end application you're developing using Docker; then start your application, on the host, in the normal way:
```sh
docker-compose up -d nginx
npm run dev
```
You may need to make a couple of configuration changes for this to work. For example, in this development environment the database address will be localhost:4306, but when deployed it will be database:3306, and you'll need to do things like configure Webpack to proxy backend requests to http://localhost:8080. You might set environment variables in your docker-compose.yml for this, and in your code have them default to the values used in the non-Docker development environment.
```js
const dbHost = process.env.DB_HOST || 'localhost';
const dbPort = process.env.DB_PORT || 4306;
```
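The Webpack side of that proxying might look like this (a sketch, assuming webpack-dev-server and the nginx port published in the compose file):

```js
// webpack.config.js (sketch): forward API calls from the host dev
// server to the nginx container published on localhost:8080
module.exports = {
  // ...the rest of the existing config...
  devServer: {
    proxy: {
      '/api': 'http://localhost:8080',
    },
  },
};
```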
In your Compose setup, do not mount volumes: over your application code or libraries. Once you get up to the point of doing final integration testing on this code, build and run the actual image you're going to deploy. So that section of the docker-compose.yml might look like:
```yaml
version: '3.8'
services:
  react:
    build: ../frontend
    ports:
      - '3001:3000'
    # no volumes:
    # container_name: is also unnecessary
```
https://github.com/hodanov/react-django-postgres-sample-app
I want to deploy the DB, API, and FRONT containers from the above repository to AWS ECS so that they can run there.
To operate the containers separately, I split the docker-compose.yaml file up by container.
I pushed the images to ECR and ran them with ECS, but they stop no matter what I try.
What should I review?
I spent some time on this. Where do I start :)
First and foremost, the compose file has bind mounts that can't work when you deploy to ECS/Fargate (because the local files being mounted do not exist on the remote end). So these are the 3 things that need to be done (a sketch of the Dockerfile change follows the list):
- you need to remove the bind mounts for the rest_api and the web_front services.
- you need to append `ADD ./code/django_rest_api/ /code` to DockerfilePython and `ADD ./code/django_web_front/ /code` to DockerfileNode. This basically simulates the bind mount by baking the folder contents right into the image(s).
- you need to add the image field to the three services. That will allow you to `docker compose build` and `docker compose push` so that the images are available to be deployed in the cloud.
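For example, the DockerfileNode change might end up looking like this (the base image and surrounding lines are assumptions; only the ADD line comes from the steps above):

```dockerfile
FROM node:16
WORKDIR /code

# Bake the front-end source into the image, replacing the bind mount
ADD ./code/django_web_front/ /code

CMD ["./npm-install-and-start.sh"]
```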
This is my resulting compose file (you need to change the image repos to your own):
version: "3"
services:
db:
container_name: django_db
image: mreferre/django_db:latest
build:
context: .
dockerfile: DockerfilePostgres
volumes:
- django_data_volume:/var/lib/postgresql/data
rest_api:
container_name: django_rest_api
image: mreferre/django_rest_api:latest
build:
context: .
dockerfile: DockerfilePython
command: ./run-migrate-and-server.sh
# volumes:
# - ./code/django_rest_api:/code
tty: true
ports:
- 8000:8000
depends_on:
- db
web_front:
container_name: django_web_front
image: mreferre/django_web_front:latest
build:
context: .
dockerfile: DockerfileNode
command: ./npm-install-and-start.sh
# volumes:
# - ./code/django_web_front:/code
tty: true
ports:
- 3000:3000
depends_on:
- rest_api
volumes:
django_data_volume:
This will deploy just fine with docker compose up in the ECS context, and the API server will work. HOWEVER, the React front end will not work, because the App.js code points to localhost:
```js
const constants = {
  "url": "http://localhost:8000/api/profile/?format=json"
}
```
I don't know what the React equivalent is, but in Angular you'd point to something along the lines of window.location.href to tell the app to use the same endpoint your browser connected to.
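In plain JavaScript (so it applies to React as well), a sketch of that idea might be:

```js
// Build the API URL from wherever the browser actually loaded the app,
// instead of hardcoding localhost (the port here is an assumption)
const apiBase = `${window.location.protocol}//${window.location.hostname}:8000`;
const constants = {
  url: `${apiBase}/api/profile/?format=json`,
};
```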
If you fix this problem then it should just work(tm).
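For completeness, the build/push/deploy flow with the Docker ECS integration looks roughly like this (the context name is illustrative):

```sh
# Build and push the images named in the compose file
docker compose build
docker compose push

# Create an ECS context and deploy the stack through it
docker context create ecs myecscontext
docker --context myecscontext compose up
```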
So, I'm stuck on an issue with files stored on a server that I'm not able to display in the frontend.
My project is:
React + Redux using Docker
The React app is complete, i.e., there's an API folder for the backend (react/redux), a CLIENT folder for the frontend (react/libraries), and MongoDB as the DB.
Docker Compose builds these 3 parts, API, CLIENT, and MONGO, as services in one stack.
In the frontend, the user can select an image as an avatar; this image is sent through the layers and saved in a specific folder (NOT build/public etc.) inside the API Docker image. It's possible to remove/delete and re-select it. Everything's working fine!
The issue is displaying this image in the frontend. The avatar component uses an image src to display it, but I can't find a valid URL that lets the frontend SEE that image file saved on the API/server side.
Since it's inside a container, I've tried all the possibilities I could find in the Docker documentation... I think the solution lies in the NETWORK docker-compose option, but even so, I couldn't make it work.
Docker Compose File:
```yaml
version: '3.8'
services:
  client:
    build: ./client
    stdin_open: true
    image: my-client
    restart: always
    ports:
      - "3000:3000"
    volumes:
      - ./client:/client
      - /client/node_modules
    depends_on:
      - api
    networks:
      mynetwork:
        ipv4_address: 172.19.0.9
  api:
    build: ./api
    image: my-api
    restart: always
    ports:
      - "3003:3003"
    volumes:
      - ./api:/api
      - logs:/api/logs
      - /api/node_modules
    depends_on:
      - mongo
    networks:
      mynetwork:
        ipv4_address: 172.19.0.10
  mongo:
    image: mongo
    restart: always
    ports:
      - "27017:27017"
    volumes:
      - mongo_data:/data/db
    networks:
      - mynetwork
volumes:
  mongo_data:
  logs:
networks:
  mynetwork:
    driver: bridge
    ipam:
      config:
        - subnet: "172.19.0.0/24"
```
To summarize: there's a folder on the API side with images/files, and I want to reference them like
```html
<img src="mynetwork:3003/imagefolder/imagefile.png">
```
or something like that...
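(For context: the browser runs on the host, outside the Docker network, so a Compose network or container name can never resolve there; the image would have to be addressed through the API's published port, something like the following, where the hostname is a placeholder:)

```html
<!-- hostname is a placeholder for wherever the API's port 3003 is published -->
<img src="http://your-server-hostname:3003/imagefolder/imagefile.png">
```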
I can't believe I have to use this other solution... Another Stackoverflow Reply
I am facing the following error while building my images with Docker:
```
Service 'webpack' failed to build: failed to register layer: open /var/lib/docker/aufs/layers/7e80462cf605c738f8d502a5d2707a4e4a7fb03daad65d0113240d9f1428df0f: no such file or directory
```
version: "2"
services:
webpack:
image : express-react-image
build: .
command: ./bin/webpack-dev
volumes:
- .:/src/app
environment:
- VIRTUAL_HOST=localhost
ports:
- "8080:8080"
networks:
- front_tier
server:
build: .
command: ./bin/start-web
volumes:
- .:/src/app
environment:
- VIRTUAL_HOST=localhost
- APP_HOST=http://localhost:3000
ports:
- "3000:3000"
networks:
- front_tier
volumes:
data:
driver: local
networks:
front_tier:
driver: bridge
The error message you're seeing, `failed to register layer`, means that Docker is failing while building an image because it can't find a cached layer it expects to find. The easiest way to resolve this is probably to remove all your cached layers and start the build from scratch. `docker-compose rm` might do the trick. If not, I'd start removing containers with `docker rm` and images with `docker rmi` until you've got a clean enough slate that it works.
If manual cleanup is slow or hard, you might have better luck with `docker system prune`.
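For example:

```sh
# Remove stopped containers, dangling images, unused networks, and build cache
docker system prune

# More aggressive: also remove all unused images and volumes
docker system prune -a --volumes
```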
However you get your cached layers cleaned up, I'd expect the build to work with a fresh docker-compose up.