Dockerized React - how to auto-enable hot reloading

I'm dockerizing a MERN app at the moment.
The app is divided into two parts: a React app on the frontend and an Express app on the backend.
The goal is to enable hot reloading for the React app.
Here's my docker-compose.yml file:
version: '3'
services:
  mango:
    container_name: mango
    image: nginx:alpine
    ports:
      - 3000:80
    volumes:
      - ./docker/app/nginx.conf:/etc/nginx/conf.d/default.conf
      - ./build:/usr/share/nginx/html
    depends_on:
      - mango-builder
    command: "/bin/sh -c 'nginx -g \"daemon off;\"'"
    logging:
      options:
        max-size: "10m"
        max-file: "3"
    networks:
      - local_network
  mango-builder:
    container_name: mango-builder
    image: node:12.13.0
    working_dir: /app
    volumes:
      - .:/app
    command: bash -c "rm -rf /app/package-lock.json && rm -rf /app/yarn.lock && yarn && if [ `$NODE_ENV` = `development` ]; then yarn run start; else yarn run build; fi"
    logging:
      options:
        max-size: "10m"
        max-file: "3"
    networks:
      - local_network
networks:
  local_network:
Note that:
When NODE_ENV is set to development, it runs the yarn run start command.
yarn should recompile whenever a React component changes.
mango-builder starts before mango (via depends_on).
The build folder is generated by mango-builder and then mounted into mango (see the volumes section).
The React app runs fine in the browser.
But when I change a React component, hot reloading doesn't kick in.
I have a feeling I need to keep yarn run start running in the background, but I don't know how to do it.
Any thoughts?
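In case it helps to see what I mean, here's a rough sketch of what I think "keeping yarn run start running" might look like; the 3001 port mapping and the CHOKIDAR_USEPOLLING variable are guesses on my part, not part of the file above. The idea is to run the CRA dev server in mango-builder and publish its port, so the browser talks to the dev server instead of nginx:
mango-builder:
  image: node:12.13.0
  working_dir: /app
  volumes:
    - .:/app
  environment:
    - NODE_ENV=development
    - CHOKIDAR_USEPOLLING=true   # guess: poll for file changes inside the container
  ports:
    - "3001:3000"                # guess: CRA dev server listens on 3000; host port 3000 is taken by mango
  command: bash -c "yarn && yarn run start"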

Related

Docker react image doesn't hot reload

Read all the posts around. Tried with a lower "react-scripts" version (mine is 5.0.1), used CHOKIDAR_USEPOLLING: 'true', basically everything on the first two pages of Google.
Hot reloading still doesn't work. My docker-compose.yaml:
version: '3.3'
services:
  database:
    container_name: mysql
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_ROOT_PASSWORD: toma123
      MYSQL_DATABASE: api
      MYSQL_USER: toma
      MYSQL_PASSWORD: toma123
    ports:
      - '4306:3306'
    volumes:
      - mysql-data:/var/lib/mysql
  php:
    container_name: php
    build:
      context: ./php
    ports:
      - '9000:9000'
    volumes:
      - ./../api:/var/www/api
    depends_on:
      - database
  nginx:
    container_name: nginx
    image: nginx:stable-alpine
    ports:
      - '8080:80'
    volumes:
      - ./../api:/var/www/api
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - php
      - database
  react:
    container_name: react
    build:
      context: ./../frontend
    ports:
      - '3001:3000'
    volumes:
      - node_modules:/home/app/node_modules
volumes:
  mysql-data:
    driver: local
  node_modules:
    driver: local
And my react Dockerfile
FROM node
RUN mkdir -p /home/app
WORKDIR /home/app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["npm", "start"]
Even if a lot of your application is in Docker, that doesn't mean you need to use Docker exclusively. Since one of Docker's primary goals is to prevent containers from accessing host files, it can be tricky to convince it to emulate a normal host live-development environment.
Install Node on your host. It's likely you have it already, or you can trivially apt-get install or brew install it.
Use Docker to start everything except the front-end application you're developing; then start that application on the host in the normal way.
docker-compose up -d nginx
npm run dev
You may need to make a couple of configuration changes for this to work. For example, in this development environment the database address will be localhost:4306, but when deployed it will be database:3306, and you'll need to do things like configure Webpack to proxy backend requests to http://localhost:8080. You might set environment variables in your docker-compose.yml for this, and have your code default to the values used in the non-Docker development environment.
const dbHost = process.env.DB_HOST || 'localhost';
const dbPort = process.env.DB_PORT || 4306;
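For completeness, the Compose side of that might look roughly like the sketch below; backend is just a placeholder for whichever service actually talks to the database, and the values come from the compose file above (database is the service name, 3306 the in-container port):
services:
  backend:
    environment:
      - DB_HOST=database   # the Compose service name, not localhost
      - DB_PORT=3306       # the container port, not the published 4306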
In your Compose setup, do not mount volumes: over your application code or libraries. Once you get to the point of doing final integration testing on this code, build and run the actual image you're going to deploy. That section of the docker-compose.yml might look like:
version: '3.8'
services:
  react:
    build: ../frontend
    ports:
      - '3001:3000'
    # no volumes:
    # container_name: is also unnecessary

How can I make my docker container update on changes to my source files?

I'm building a Django/React app using docker-compose, and I'd like it to reload my apps when a change is made. So far I've tried adding CHOKIDAR_USEPOLLING
and adding npm-watch to my package.json, but it doesn't seem to detect changes in the host files.
Ideally I don't want to have to run docker-compose up --build every time I make a change, since that makes development tedious.
edit: I should mention that both apps reload as expected when running outside of Docker (npm start (CRA default) and python manage.py runserver).
Changes are detected inside the container, but the React app will not rebuild.
I'm also using Windows 10.
Is there something wrong with my files, or is there something else I should be doing here?
docker-compose.yml
version: "3.9"
services:
db:
container_name: db
image: postgres
volumes:
- ./data/db:/var/lib/postgresql/data
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
backend:
container_name: backend
build: ./backend
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/core
ports:
- "8000:8000"
depends_on:
- db
frontend:
container_name: frontend
build: ./frontend
command: npm start
volumes:
- './frontend:/app/'
- '/frontend/node_modules'
ports:
- "3000:3000"
environment:
- CHOKIDAR_USEPOLLING=true
depends_on:
- backend
# Enable interactive terminal (crucial for react container to work)
stdin_open: true
tty: true
backend Dockerfile
FROM python:3
ENV PYTHONUNBUFFERED=1
WORKDIR /code/
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
frontend Dockerfile
FROM node:16
WORKDIR /app/
COPY package*.json /app/
RUN npm install
COPY . /app/
EXPOSE 3000
CMD ["npm", "start"]
Instead of copying, you should mount a volume directly onto the folder where the code runs in your Docker image. That way your code changes will be reflected in your app.
Example in docker-compose.yml:
volumes:
  - "local_source_destination:/server_source_destination"
In your frontend service in docker-compose.yml you have:
volumes:
  - '.:/frontend/app'
but in your Dockerfile you have
COPY . /app/
So it seems like you are mixing up where to mount your volume. Make sure '.' is the root of your code folder, or change it accordingly.
Try something like:
volumes:
  - '.:/app'
As that seems to be the location your server wants your code to be.
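Putting that together with the node_modules trick and the polling flag you already have, the frontend service might look roughly like this (a sketch based on your Dockerfile's WORKDIR /app; note the anonymous volume targets /app/node_modules rather than /frontend/node_modules):
frontend:
  container_name: frontend
  build: ./frontend
  command: npm start
  volumes:
    - ./frontend:/app       # mount the host source where the Dockerfile runs it
    - /app/node_modules     # keep the node_modules installed in the image
  ports:
    - "3000:3000"
  environment:
    - CHOKIDAR_USEPOLLING=true
  stdin_open: true
  tty: true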
If your code is mounted to the right destination, it might be that you are not running your watch script inside the Docker container. Try running:
docker exec -itw source_destination_in_container your_container_name command_to_run_watch

Docker Compose with React and Nginx

I'm trying to use docker-compose for deployment of my React app, which uses an Express backend and a Postgres database. My idea is to have shared volumes in my docker-compose, then build from my Dockerfile into the volume so that Nginx can serve the files. The problem is that it works when I build the project the first time, but if I change something in my React client and run "docker-compose up --build", it looks like everything builds as it should, yet the files served are still the same. Is the COPY command in my Dockerfile not overwriting the old files?
Dockerfile in my React Client Project
FROM node:13.12.0-alpine as build
WORKDIR /app
COPY package.json ./
COPY package-lock.json ./
RUN npm install
COPY . ./
RUN npm run build
FROM node:13.12.0-alpine
COPY --from=build /app/build /var/lib/frontend
docker-compose
version: "3.7"
services:
callstat_backend:
build: ./callstat-backend
restart: always
ports:
- "3000:3000"
env_file:
- keys.env
depends_on:
- postgres
callstat_frontend:
build: ./callstat-client
volumes:
- frontend/:/var/lib/frontend
postgres:
image: postgres:11.2-alpine
ports:
- "5432:5432"
volumes:
- pgdata:/var/lib/postgresql/data
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
POSTGRES_DB: callstat
nginx:
image: nginx
volumes:
- frontend:/usr/share/nginx/html
- ./nginx.conf:/etc/nginx/conf.d/default.conf
ports:
- "80:80"
depends_on:
- callstat_frontend
volumes:
pgdata:
frontend:
Maybe I'm taking a totally wrong approach here?
You can run the commands in the following order:
# stop down the services
docker-compose stop
# remove the previously created docker resources
docker-compose rm
# bring up the services again
docker-compose up --build
This way your previously created volume will be removed and a new one will be created with the updated changes.
NOTE: This is okay from a development perspective, but Docker volumes are really expected to persist between deployments. For artifacts like code changes, images should ideally be published as part of the build process. To get a little more insight into this topic, you can refer to https://github.com/docker/compose/issues/2127
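As a rough illustration of that note (the registry and tag below are hypothetical), you could give the built frontend image an explicit name so the artifact itself is versioned and can be pushed with docker-compose push, instead of relying on a shared volume:
callstat_frontend:
  build: ./callstat-client
  image: registry.example.com/callstat-frontend:1.0.0   # hypothetical registry/tag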

Angular in docker compose does not reload changes

I am new to Docker and I am trying to build an application using django-rest and Angular. My current docker-compose file looks like this:
version: '3'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=pirate
      - POSTGRES_USER=django
      - POSTGRES_PASSWORD=secreat
    volumes:
      - db-data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
  backend:
    entrypoint: /entrypoint.sh
    build: ./backend
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
    healthcheck:
      test: ["CMD", "curl", "--fail", 'http://localhost:8000']
      interval: 10s
      timeout: 5s
      retries: 3
  frontend:
    build: ./frontend
    volumes:
      - .:/app
    healthcheck:
      test: ["CMD", "curl", "--fail", 'http://localhost:4200']
      interval: 10s
      timeout: 5s
      retries: 3
    ports:
      - "4200:4200"
  nginx:
    build: ./nginx
    healthcheck:
      test: ["CMD", "curl", "--fail", 'http://localhost']
      interval: 10s
      timeout: 5s
      retries: 3
    ports:
      - "80:80"
    links:
      - frontend
volumes:
  db-data:
And this is my angular Dockerfile:
FROM node:8.6
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app
RUN npm install
# Here starts angular cli workaround
USER node
RUN mkdir /home/node/.npm-global
ENV PATH=/home/node/.npm-global/bin:$PATH
ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
RUN npm install -g @angular/cli
# Here ends
COPY . /usr/src/app
CMD ["npm", "start"]
And now this is the problem: whenever I change something in my Angular code, the Angular container does not reload the changes. I don't know what I am doing wrong.
The problem is related to how the filesystem works in Docker. To fix this, I suggest you enable hot reload: you have to add EXPOSE 49153 in the Dockerfile and '49153:49153' under ports in docker-compose.yml.
There are other solutions like inotify or nodemon, but they require that you use the --poll option when you start your application. The problem is that they keep polling the filesystem for changes, and if the application is big your machine will be a lot slower than you'd like.
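As a sketch of the compose side of that change (the service name and the 4200 mapping are taken from the compose file above):
frontend:
  build: ./frontend
  ports:
    - "4200:4200"
    - "49153:49153"   # the live-reload port mentioned above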
I think I found the issue. You copy . into /usr/src/app, but you're mounting .:/app as a volume. This means that if you get into your Docker instance you'll find your application in two places: /app and /usr/src/app.
To fix this you should use this mapping: .:/usr/src/app
Btw, you're going to use the node_modules from your host, and this might create some issues. To avoid this you can add an empty volume mapping: /usr/src/app/node_modules
If you get inside your running container, you'll find that the app folder exists twice. You can check it by executing:
docker exec -it $instanceName /bin/sh
ls /app
ls /usr/src/app
The problem is that only the content of /app changes while you code, whereas your application is actually executing the content of /usr/src/app, which always stays the same.
Your frontend in the docker-compose should look like this:
frontend:
  build: ./frontend
  volumes:
    - .:/usr/src/app
    - /usr/src/app/node_modules
I came across the same issue in Docker Desktop for Windows. I know it has been a while, but for anybody who comes here looking for an answer like me, these are the steps.
Modify the start command to "start": "ng serve --host 0.0.0.0 --poll 500" in the scripts section of package.json. (The number 500 means the client checks every 500 milliseconds whether a change has been made; you can adjust this number.)
Make sure port 49153 is exposed in the Dockerfile (use the correct Node version):
FROM node:10.16.3-alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
EXPOSE 4200 49153
CMD npm run start
Map ports and volumes in docker-compose.yml
version: "3.7"
services:
webapp:
build: .
ports:
- "4200:4200"
- "49153:49153"
volumes:
- "/app/node_modules"
- ".:/app"
After that, running docker-compose up will build the image and spin up a new container that automatically reloads on changes.

Docker compose with subdirectory and live reload

I created an app using create-react-app and set up docker compose to set up the container and start the app. When the app is in the root directory, the app starts and the live reload works. But when I move the app to a subdirectory, I can get the app to start, but the live reload does not work.
Here's the working setup:
Dockerfile
FROM node:7.7.2
ADD . /code
WORKDIR /code
RUN npm install
EXPOSE 3000
CMD npm start
docker-compose.yml
version: "2"
services:
client:
build: .
ports:
- "3000:3000"
volumes:
- .:/code
Directory structure
app
- node_modules
- docker-compose
- Dockerfile
- package.json
- src
- public
Here's the structure that I would like:
app
- server
- client
/ node_modules
/ Dockerfile
/ package.json
/ src
/ public
- docker-compose.yml
I've tried every variation that I can think of, but the live reload will not work.
The first thing I had to do was change the build location:
version: "2"
services:
client:
build: ./client
ports:
- "3000:3000"
volumes:
- .:/code
Then I got an error when trying to run docker-compose up:
npm ERR! enoent ENOENT: no such file or directory, open '/code/package.json'
So I changed the volume to - .:/client/code, rebuilt, ran the command, and the app started, but there was no live reload.
Any way to do this when the app is in a subdirectory?
Moving your local directory makes no difference to the paths inside the container, so you only need to change the local references.
The volume mount should come from ./client:
version: "2"
services:
client:
build: ./client
ports:
- "3000:3000"
volumes:
- ./client:/code
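One caveat, since ./client also contains node_modules on the host: if the host's modules ever get out of sync with the ones installed in the image, the anonymous-volume trick used elsewhere in this thread keeps the image's copy (a sketch, not required for live reload itself):
version: "2"
services:
  client:
    build: ./client
    ports:
      - "3000:3000"
    volumes:
      - ./client:/code
      - /code/node_modules   # optional: prefer the node_modules baked into the image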
