Angular in docker compose does not reload changes - angularjs

I am new to Docker and I am trying to make an application using django-rest and Angular. My current docker-compose file looks like this:
version: '3'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=pirate
      - POSTGRES_USER=django
      - POSTGRES_PASSWORD=secreat
    volumes:
      - db-data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
  backend:
    entrypoint: /entrypoint.sh
    build: ./backend
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:8000"]
      interval: 10s
      timeout: 5s
      retries: 3
  frontend:
    build: ./frontend
    volumes:
      - .:/app
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:4200"]
      interval: 10s
      timeout: 5s
      retries: 3
    ports:
      - "4200:4200"
  nginx:
    build: ./nginx
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost"]
      interval: 10s
      timeout: 5s
      retries: 3
    ports:
      - "80:80"
    links:
      - frontend
volumes:
  db-data:
And this is my angular Dockerfile:
FROM node:8.6
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app
RUN npm install
# Here starts angular cli workaround
USER node
RUN mkdir /home/node/.npm-global
ENV PATH=/home/node/.npm-global/bin:$PATH
ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
RUN npm install -g @angular/cli
# Here ends
COPY . /usr/src/app
CMD ["npm", "start"]
And now this is the problem: whenever I change something in my Angular code, the Docker image with Angular does not reload the changes. I don't know what I am doing wrong.

The problem is related to how the filesystem works in Docker. To fix it I suggest you perform hot reloads (you have to add EXPOSE 49153 to the Dockerfile and - '49153:49153' under ports in docker-compose.yml).
There are other solutions like inotify or nodemon, but they require that you start your application with the --poll option. The problem is that they keep polling the filesystem for changes, and if the application is big your machine will be a lot slower than you'd like.
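As a minimal sketch (assuming the package.json start script runs ng serve; the 2000 ms interval is an arbitrary choice), polling can be enabled straight from the compose file by overriding the container command:

```yaml
frontend:
  build: ./frontend
  # Poll the filesystem every 2 seconds instead of relying on
  # inotify events, which often don't propagate across volume mounts
  command: npm start -- --host 0.0.0.0 --poll 2000
  ports:
    - "4200:4200"
```

A longer interval keeps CPU usage down on large projects, at the cost of slower reloads.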
I think I found the issue. You copy . into /usr/src/app, but you're mounting .:/app as a volume. This means that if you get into your Docker container you'll find your application in two places: /app and /usr/src/app.
To fix this you should use the mapping .:/usr/src/app instead.
Btw, you'd then be using the node_modules from your host, and this might create some issues. To avoid this you can add an anonymous volume mapping: /usr/src/app/node_modules.
If you get inside your running container, you'll find that the app folder exists twice. You can verify it by executing:
docker exec -it $instanceName /bin/sh
ls /app
ls /usr/src/app
The problem is that only the content of /app changes while you code, whereas your application is actually executing the content of /usr/src/app, which never changes.
Your frontend in the docker-compose should look like this:
frontend:
  build: ./frontend
  volumes:
    - .:/usr/src/app
    - /usr/src/app/node_modules

I came across the same issue in Docker Desktop for Windows. I know it has been a while, but for anybody who came here looking for an answer like me, follow these steps.
Modify the start command in the scripts section of package.json to "start": "ng serve --host 0.0.0.0 --poll 500". (Here the number 500 means the client will check every 500 milliseconds whether a change has been made; you can reduce this number.)
Make sure port 49153 is exposed in the Dockerfile (and use the correct node version for your project):
FROM node:10.16.3-alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
EXPOSE 4200 49153
CMD npm run start
Map ports and volumes in docker-compose.yml
version: "3.7"
services:
  webapp:
    build: .
    ports:
      - "4200:4200"
      - "49153:49153"
    volumes:
      - "/app/node_modules"
      - ".:/app"
After that, running docker-compose up will build an image and spin up a new container, which will automatically reload on new changes.

Related

How can I change the port of a React App running inside docker

I was dockerising an app of mine, but I wanted to access it on port 80 on my machine. Every time I change the port in docker-compose.yml it returns the error:
ERROR: for site Cannot create container for service site: mount denied:
the source path "dcfffb89fd376c0d955b0903e3aae045df32a073a6743c7e44b3214325700576:D:\\projetos\\portfolio\\site\\node_modules:rw"
too many colons
ERROR: Encountered errors while bringing up the project.
I'm running on Windows.
docker-compose.yml
version: '3.7'
services:
  site:
    container_name: site
    build: ./site
    volumes:
      - 'D:\projetos\portfolio\site'
      - 'D:\projetos\portfolio\site\node_modules'
    ports:
      - 3000:3000
    stdin_open: true
    environment:
      - CHOKIDAR_USEPOLLING=true
      - COMPOSE_CONVERT_WINDOWS_PATHS=true
    command: npm start
Dockerfile
FROM node:16.13.1-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "npm", "start" ]
I was using the wrong path pattern: on Windows you have to use /c/path/to/volume, since ":" is used inside Docker's own syntax (don't know what exactly). I also removed the COMPOSE_CONVERT_WINDOWS_PATHS=true environment variable, and it worked just fine.
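As a sketch (the container mount point /usr/src/app is taken from the Dockerfile's WORKDIR; the host paths are illustrative), the corrected volumes section would look like:

```yaml
services:
  site:
    build: ./site
    volumes:
      # POSIX-style drive path on the host side; the part after the
      # colon is the mount point inside the container
      - /d/projetos/portfolio/site:/usr/src/app
      # anonymous volume so the container keeps its own node_modules
      - /usr/src/app/node_modules
    ports:
      - 3000:3000
```

This avoids the "too many colons" error because the only colon left in each mapping is the host/container separator.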

How can I make my docker container update on changes to my source files?

I'm building a Django/React app using docker-compose, and I'd like it to reload my apps when a change is made. So far I've tried adding CHOKIDAR_USEPOLLING and adding npm-watch to my package.json, but it doesn't seem to be able to detect changes in the host files.
Ideally I don't want to have to run docker-compose up --build every time I make a change since it's making development tedious.
edit: I should mention that the apps both reload running outside of docker (npm start (cra default) and python manage.py runserver) as expected.
Changes are detected inside the container, but the react app will not rebuild.
I'm using Windows 10 also.
Is there something wrong with my files or something else I should be doing here?
docker-compose.yml
version: "3.9"
services:
  db:
    container_name: db
    image: postgres
    volumes:
      - ./data/db:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  backend:
    container_name: backend
    build: ./backend
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/core
    ports:
      - "8000:8000"
    depends_on:
      - db
  frontend:
    container_name: frontend
    build: ./frontend
    command: npm start
    volumes:
      - './frontend:/app/'
      - '/frontend/node_modules'
    ports:
      - "3000:3000"
    environment:
      - CHOKIDAR_USEPOLLING=true
    depends_on:
      - backend
    # Enable interactive terminal (crucial for react container to work)
    stdin_open: true
    tty: true
backend Dockerfile
FROM python:3
ENV PYTHONUNBUFFERED=1
WORKDIR /code/
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
frontend Dockerfile
FROM node:16
WORKDIR /app/
COPY package*.json /app/
RUN npm install
COPY . /app/
EXPOSE 3000
CMD ["npm", "start"]
Instead of copying, you should mount volumes directly to the folder where you run the code in your Docker image. That way your code changes will be reflected in your app.
Example in docker-compose.yml:
volumes:
  - "local_source_destination:/server_source_destination"
In your frontend service in docker-compose.yml you have:
volumes:
  - '.:/frontend/app'
but in your Dockerfile you have
COPY . /app/
So it seems like you are mixing up where to mount your volume. Make sure '.' is the root of your code folder, or change it accordingly.
Try something like:
volumes:
  - '.:/app'
As that seems to be the location your server wants your code to be.
If your code is correctly mounted to the right destination it might be that you are not running your watch script from inside the docker container. Try running:
docker exec -itw source_destination_in_container your_container_name command_to_run_watch
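Alternatively (a sketch; `watch` is a hypothetical script name in your package.json), you can make the watcher the container's main command so it always runs inside the container without a separate exec:

```yaml
frontend:
  build: ./frontend
  command: npm run watch   # hypothetical watch script in package.json
  volumes:
    - ./frontend:/app
    # anonymous volume keeps the container's node_modules
    - /app/node_modules
```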

Can't connect to frontend in Dockerized React web app on MacOS

I was recently hired on a website development project and I am having trouble deploying it via docker on MacOS. I just can't connect to the frontend via localhost:8000.
I have temporarily solved this problem by running docker in a virtual machine (Ubuntu), but some things are not working correctly due to this connection.
What are the ways to solve this problem?
Here is the config in dockerfiles:
Dockerfile (frontend)
# pull official base image
FROM node:12-alpine as build
# create and set working directory
WORKDIR /app
# install app dependencies
RUN mkdir frontend
COPY package.json ./frontend/
COPY package-lock.json ./frontend/
COPY frontend_healthchecker.sh ./frontend/
RUN chmod +x /app/frontend/frontend_healthchecker.sh
RUN ls
RUN cd frontend && ls
RUN cd frontend && npm install --only=prod
# RUN cd frontend && npm install --scripts-prepend-node-path=auto
# add app code
COPY . ./frontend/
RUN cd frontend && ls
# start app
RUN cd frontend && npm install && npm run build
FROM nginx:1.16.0-alpine
COPY --from=build /app/frontend/build /usr/share/nginx/html
COPY --from=build /app/frontend/nginx/default.conf /etc/nginx/conf.d/default.conf
COPY --from=build /app/frontend/frontend_healthchecker.sh /home
RUN chmod +x /home/frontend_healthchecker.sh
RUN apk add curl
RUN apk search curl
HEALTHCHECK CMD ["/bin/sh", "/home/frontend_healthchecker.sh"]
EXPOSE 8000
CMD ["nginx", "-g", "daemon off;"]
Dockerfile (backend)
FROM node:12
ENV TZ=Europe/Moscow
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
#install ffmpeg
RUN apt-get -y update
RUN apt-get -y upgrade
RUN apt-get install -y ffmpeg
RUN apt-get install -y mediainfo
RUN apt-get -y update
RUN apt-get -y upgrade
RUN apt-get install -y python-pip
RUN pip --version
RUN pip install numpy
RUN pip install opencv-contrib-python
RUN pip install urllib3
WORKDIR /app
COPY . /app
COPY backend_healthchecker.sh /app
RUN chmod +x backend_healthchecker.sh
RUN ls
RUN npm install
EXPOSE 8080
WORKDIR /app/bin
HEALTHCHECK CMD ../backend_healthchecker.sh
ENTRYPOINT ["node"]
CMD ["www"]
docker-compose.yaml
version: '3.3'
services:
  kurento:
    network_mode: "host"
    build:
      context: ./kurento
    volumes:
      - "/etc/timezone:/etc/timezone:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - static-content:/tmp
    container_name: p_kurento
    environment:
      - GST_DEBUG=2,Kurento*:5
  mongo:
    network_mode: "host"
    image: mongo:latest
    restart: always
    container_name: p_mongo
    volumes:
      - "/etc/timezone:/etc/timezone:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - db-content:/data/db
    healthcheck:
      test: mongo localhost:27017/test | mongo show dbs
      interval: 1m
      timeout: 15m
      retries: 5
  backend:
    network_mode: "host"
    env_file:
      - ./backend/.env
    build:
      context: ./backend
    container_name: p_backend
    volumes:
      - "/etc/timezone:/etc/timezone:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - static-content:/files
  frontend:
    network_mode: "host"
    env_file:
      - ./frontend/.env
    build:
      context: ./frontend
    container_name: p_frontend
    volumes:
      - "/etc/timezone:/etc/timezone:ro"
      - "/etc/localtime:/etc/localtime:ro"
  coturn:
    network_mode: "host"
    build:
      context: ./stun
    container_name: p_turn
  portainer:
    #network_mode: "host"
    restart: always
    image: portainer/portainer
    command: --admin-password=somepassword -H unix:///var/run/docker.sock
    ports:
      - "9000:9000"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    container_name: portainer
volumes:
  static-content:
  db-content:
network_mode: host doesn't work on MacOS or Windows systems. The Docker Use host networking documentation notes:
The host networking driver only works on Linux hosts, and is not supported on Docker Desktop for Mac, Docker Desktop for Windows, or Docker EE for Windows Server.
It also essentially entirely disables Docker's networking stack, and is almost never necessary.
You need to do three things here:
Remove all of the network_mode: host lines from the Compose file. (The container_name: lines are also unnecessary.)
For any of the services you need to access from outside Docker (could be all of them and that's fine) add ports: to publish their container ports.
When any of these services internally call other services configure their URLs (for example, in the .env files) to use their Compose service names. (See Networking in Compose in the Docker documentation.) (Also note that your frontend application probably actually runs in a browser, even if the code is served from a container, and can't use these host names at all; this specific point needs to still use localhost or the host name where the system will eventually be deployed.)
So, for example, the setup for the frontend and backend containers could look like:
version: '3.8'
services:
  mongo: { ... }
  backend:
    # no network_mode: host or container_name:
    # if this needs to be accessed from the browser application
    ports:
      - '8080:8080'
    # probably actually put this in .env
    environment:
      - MONGO_HOST=mongo
    # unchanged from the original
    env_file:
      - ./backend/.env
    build:
      context: ./backend
    volumes:
      - "/etc/timezone:/etc/timezone:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - static-content:/files
  frontend:
    ports:
      - '8000:8000'
    environment:
      - BACKEND_URL=http://localhost:8080 # as would be seen from the browser
    env_file:
      - ./frontend/.env
    build:
      context: ./frontend
    volumes:
      - "/etc/timezone:/etc/timezone:ro"
      - "/etc/localtime:/etc/localtime:ro"
You can try to create a loopback alias and use that instead of the default localhost.
Steps:
Create a file named loopback-alias.plist in your current directory with the content below.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist>
<plist version="1.0">
  <dict>
    <key>Label</key>
    <string>loopback-alias</string>
    <key>ProgramArguments</key>
    <array>
      <string>/sbin/ifconfig</string>
      <string>lo0</string>
      <string>alias</string>
      <string>10.200.10.1</string>
      <string>netmask</string>
      <string>255.255.255.0</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
  </dict>
</plist>
You can change the loop-back address.
Copy the file to /Library/LaunchDaemons/
Restart your network
Sample code to use this in Docker-compose
redis:
  image: redis
  expose:
    - '6379'
  ports:
    - '10.200.10.1:6379:6379'
  extra_hosts:
    - 'localhost:10.200.10.1'
You can check the following link for more details.
https://blog.sbaranidharan.online/index.php/2021/05/05/docker-macos-expose-container-ports-to-host-machine/

elasticbeanstalk on the Docker platform: 502 Bad Gateway for the react app

I have a dummy react app deployed from Dockerfile.dev:
FROM node:alpine AS builder
WORKDIR '/app'
COPY package.json .
RUN npm install
COPY . .
RUN npm run build
FROM nginx
EXPOSE 80
COPY --from=builder /app/build /usr/share/nginx/html
which is deployed to elasticbeanstalk right after it is pushed to GitHub using TravisCI:
sudo: required
services:
- docker
before_install:
- docker build -t name/docker-react -f Dockerfile.dev .
script:
- docker run -e CI=true name/docker-react npm run test
deploy:
provider: elasticbeanstalk
region: 'us-east-1'
app: 'docker'
env: 'Docker-env'
bucket_name: 'elasticbeanstalk-us-east-1-709516719664'
bucket_path: 'docker'
on:
branch: main
access_key_id: $AWS_ACCESS_KEY
secret_access_key: $AWS_SECRET_KEY
The app is successfully deploying to EB but displays 502 Bad Gateway as soon as I access it (by clicking the app link in AWS EB). Enhanced health overview reports:
Process default has been unhealthy for 18 hours (Target.FailedHealthChecks).
Docker-env EC2 instance is running and after allowing all incoming connections to it I can connect just fine:
I can build my app using Dockerfile.dev locally with no problems:
docker build -t name/docker-react -f Dockerfile.dev .
=> => naming to docker.io/name/docker-react
docker run -p 3000:3000 name/docker-react
AWS has a hard time with the '.' folder designation and prefers the long form ./
Try to edit the COPY instruction to COPY package*.json ./
And try also to remove the named builder. By default, the stages are not named, and you refer to them by their integer number, starting with 0 for the first FROM instruction.
Your Dockerfile should looks like:
FROM node:alpine
WORKDIR '/app'
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
FROM nginx
EXPOSE 80
COPY --from=0 /app/build /usr/share/nginx/html
You should have a docker-compose.yml, just ensure that you have the right port mapping inside:
Example:
services:
  web-service:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "80:3000" # outside:inside container
finally your TravisCI configuration must be edited. secret_acces_key has ONE 'S'
...
access_key_id: $AWS_ACCESS_KEY
secret_acces_key: $AWS_SECRET_KEY
Nginx default port is 80, and AWS only checks docker-compose.yml to manage resources efficiently, just do the right port mapping inside that file
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "80:3000"
    volumes:
      - /app/node_modules
      - .:/app

Docker Compose with React and Nginx

I'm trying to use docker-compose for deployment of my React app, which uses an Express backend and a Postgres database. My idea is to have shared volumes in my docker-compose, then build from my Dockerfile into the volume so that Nginx will be able to serve the files. The problem is that it works when I build the project the first time, but if I change something in my React client and run "docker-compose up --build", everything looks like it's building as it should, yet the files served are still the same. Is the COPY command in my Dockerfile not overwriting the old files?
Dockerfile in my React Client Project
FROM node:13.12.0-alpine as build
WORKDIR /app
COPY package.json ./
COPY package-lock.json ./
RUN npm install
COPY . ./
RUN npm run build
FROM node:13.12.0-alpine
COPY --from=build /app/build /var/lib/frontend
docker-compose
version: "3.7"
services:
  callstat_backend:
    build: ./callstat-backend
    restart: always
    ports:
      - "3000:3000"
    env_file:
      - keys.env
    depends_on:
      - postgres
  callstat_frontend:
    build: ./callstat-client
    volumes:
      - frontend/:/var/lib/frontend
  postgres:
    image: postgres:11.2-alpine
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: callstat
  nginx:
    image: nginx
    volumes:
      - frontend:/usr/share/nginx/html
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "80:80"
    depends_on:
      - callstat_frontend
volumes:
  pgdata:
  frontend:
Maybe I'm taking a totally wrong approach here?
You can run the commands in the following order:
# stop down the services
docker-compose stop
# remove the previously created docker resources
docker-compose rm
# bring up the services again
docker-compose up --build
This way, your previously created volume is removed and a new one is created with the updated changes.
NOTE: This is okay from the development perspective, but Docker volumes are really expected to persist between deployments. For artifacts like code changes, images should ideally be published as part of the build process. To get a little more insight into this topic, you can refer to https://github.com/docker/compose/issues/2127
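Following that advice, a production-oriented sketch (assuming the client Dockerfile is changed so its final stage is an nginx image that copies /app/build into /usr/share/nginx/html) drops the shared volume entirely and lets the frontend image serve its own build output:

```yaml
services:
  callstat_frontend:
    # the image itself carries the freshly built static files,
    # so every `docker-compose up --build` serves the new code
    build: ./callstat-client
    ports:
      - "80:80"
```

Because nothing is cached in a named volume, stale files can no longer mask a rebuild.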