Why do we need a volume option in docker-compose when we use react/next? - reactjs

I have a question: why do we need to use the VOLUME option in our docker-compose file when working with React/Next.js?
If I understood correctly, we use VOLUME to "save the data", for example when we use a database.
But with React/Next.js we just use it to pass the node_modules and app paths, which does not make sense to me...
If I put this:
version: '3'
services:
  nextjs-ui:
    build:
      context: ./
    ports:
      - "3000:3000"
    container_name: nextjs-ui
    volumes:
      - ./:/usr/src/app/
      - /usr/src/app/node_modules
It works.
If I put this:
version: '3'
services:
  nextjs-ui:
    build:
      context: ./
    ports:
      - "3000:3000"
    container_name: nextjs-ui
It works the same way.
Why do we need to mount node_modules and the app path?
My Dockerfile:
FROM node:12-alpine
WORKDIR /app
COPY package*.json ./
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001
COPY . .
EXPOSE 9000
RUN npm run build
CMD ["npm", "start"]

As per your Dockerfile, it already copies your source code and does an npm run build. Finally, it runs npm start to start the development server (this is not recommended for production).
By mounting the /usr/src/app and /usr/src/app/node_modules directories, you get the ability to reload your app as you make changes to the source on your host machine.
In summary, if you do not mount the source code, you have to rebuild the Docker image and run it again for your changes to become visible in the app. If you mount the source and node_modules, you can leverage the live-reloading capability of npm start and develop on the host machine.

With the volumes included, all your local changes are reflected inside your dockerized Next.js application, which lets you use features such as hot reloading without rebuilding the Docker image just to see the changes.
In production, you do not need to include these volumes.
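As a sketch of how the two setups are commonly separated (the file split is an illustrative convention, not something from the question; the service name and paths follow the question): the bind mounts live only in a development override file, while the base compose file stays mount-free for production.

```yaml
# docker-compose.override.yml -- development only (illustrative sketch)
version: '3'
services:
  nextjs-ui:
    volumes:
      - ./:/usr/src/app/           # host source shadows the image's copy -> live reload
      - /usr/src/app/node_modules  # anonymous volume keeps the image's node_modules
```

Plain docker-compose up merges the override automatically during development; on a production host you would run only the base file, e.g. docker-compose -f docker-compose.yml up.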

Related

React not reading environment variable value in production from docker-compose.yml but reading it on local machine

I tried to pass a variable from docker-compose.yml to my Docker container, but the container doesn't see the value of this variable. I have tried many approaches, all to no avail. Here are my attempts.
First try:
FROM node:alpine3.17 as build
LABEL type="production"
WORKDIR /react-app
COPY package.json ./
COPY package-lock.json ./
RUN npm install
COPY . ./
ARG REACT_APP_BACKEND_URL
ENV REACT_APP_BACKEND_URL=$REACT_APP_BACKEND_URL
RUN npm run build
# production environment
FROM nginx:stable-alpine
COPY --from=build /react-app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Second try:
FROM node:alpine3.17 as build
LABEL type="dev"
WORKDIR /react-app
COPY package.json ./
COPY package-lock.json ./
# The RUN command is only executed while the build image
RUN npm install
COPY . ./
ARG REACT_APP_BACKEND_URL
ENV REACT_APP_BACKEND_URL=$REACT_APP_BACKEND_URL
RUN npm run build
RUN npm install -g serve
EXPOSE 3000
# The CMD command is only executed while the image is running
CMD serve -s build
I built the images from these Dockerfiles, pushed them to Docker Hub with various versions, and after that I ran docker-compose.yml from the remote server.
My docker-compose.yml
version: '3'
services:
  stolovaya51-react-static-server:
    container_name: stolovaya51-react-production:0.0.1 (for example)
    build:
      args:
        - REACT_APP_BACKEND_URL=REACT_APP_BACKEND_URL
    ports:
      - "80:80"
      - "3000:3000"
By the way, when I run this on my local machine I see the value of the environment variable, but when I run it on the server I only see the variable name and the value is "".
I don't know the reason. What's the matter?
I have found the answer to my question!
First, I combined the two repositories for the frontend and backend into one project.
Then I redesigned my project structure and gathered the two parts of my application together. Now I have this structure:
root_project_folder:
  ./frontend
    ...some src
    ./frontend/docker/Dockerfile
  ./backend
    ...some src
    ./backend/docker/Dockerfile
  docker-compose.yml
And now my frontend picks up all the args from the docker-compose.yml in the root folder.
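For reference, here is a minimal sketch of wiring a build arg from the shell through compose into the image (the service name is illustrative). Note that the question's line `- REACT_APP_BACKEND_URL=REACT_APP_BACKEND_URL` assigns the literal string "REACT_APP_BACKEND_URL", whereas `${...}` interpolates the variable from the environment in which docker-compose runs:

```yaml
# docker-compose.yml at the project root (sketch)
version: '3'
services:
  frontend:
    build:
      context: ./frontend
      dockerfile: docker/Dockerfile
      args:
        - REACT_APP_BACKEND_URL=${REACT_APP_BACKEND_URL}
```

The Dockerfile then needs the matching ARG/ENV pair before npm run build, exactly as in the question, because create-react-app reads REACT_APP_* variables at build time.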

Connecting to react HOST from outside docker

I am running a React application in development inside Docker, and this is the docker-compose file:
version: '3.8'
services:
  my_frontend:
    build:
      context: ../
      dockerfile: ./deployment/Dockerfile.dev
    ports:
      - 80:80
    extra_hosts:
      - "app.my-host.com:127.0.0.1"
    env_file:
      - ../src/environment/.env.local.dev
    volumes:
      - ../src:/app/src
      - /app/node_modules
    stdin_open: true
    tty: true
The React application runs on the host app.my-host.com, as the host is provided in the environment file. The image builds and works properly, and I can access the application from inside the container using docker exec:
curl http://app.my-host.com
gives the correct result, but I can't access it from outside the container. I have tried different approaches using extra_hosts, but with no success:
http://app.my-host.com gives "page not found" on my Windows laptop.
[EDIT: Adding dockerfile]
FROM node:18-alpine
RUN apk update && apk upgrade && \
apk add --no-cache bash git openssh curl
WORKDIR /app
COPY package.json /app/
RUN npm install
COPY . /app/
EXPOSE 80
CMD [ "npm", "start" ]
Requirement: access http://app.my-host.com from windows laptop
Any leads will be greatly helpful. Thanks in advance.
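One thing worth noting (an assumption about the setup, not something the question confirms): extra_hosts only adds entries to the container's /etc/hosts, which is why curl works inside the container but not outside it. For the Windows laptop to resolve the same name, the mapping has to be added to the laptop's own hosts file:

```
# C:\Windows\System32\drivers\etc\hosts  (edit as Administrator)
127.0.0.1  app.my-host.com
```

With the port published as 80:80, http://app.my-host.com then resolves to localhost on the laptop and reaches the container.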

Deploy ReactJS application on CentOS 7 server using Docker

I have deployed a ReactJS (with neo4j database) application on CentOS 7 server. I want to deploy another instance of the same application on the same server using docker. I have installed docker (version 20.10.12) on the server (CentOS 7).
On the server, I have cloned my ReactJS project and created the following Dockerfile:
FROM node:16 as builder
WORKDIR /app
COPY package.json .
COPY package-lock.json .
RUN npm install
COPY . .
RUN npm run build
FROM httpd:alpine
WORKDIR /var/www/html
COPY --from=builder /app/build/ .
and the following docker-compose.yaml file:
version: '3.1'
services:
  app-test:
    hostname: my-app-test.com
    build: .
    ports:
      - '80:80'
    command: nohup node server.js &
    networks:
      - app-test-network
  app-test-neo4j:
    image: neo4j:4.4.2
    ports:
      - '7474:7474'
      - '7473:7473'
      - '7687:7687'
    volumes:
      - ./volumes/neo4j/data:/data
      - ./volumes/neo4j/conf:/conf
      - ./volumes/neo4j/plugins:/plugins
      - ./volumes/neo4j/logs:/logs
      - ./volumes/neo4j/import:/import
      - ./volumes/neo4j/init:/init
    networks:
      - app-test-network
    environment:
      - NEO4J_AUTH=neo/password
      - NEO4J_ACCEPT_LICENSE_AGREEMENT=yes
      - NEO4J_dbms_security_procedures_unrestricted=apoc.*
      - NEO4J_dbms_security_procedures_whitelist=apoc.*
      - NEO4J_dbms_default__listen__address=0.0.0.0
      - NEO4J_dbms_connector_bolt_listen__address=:7687
      - NEO4J_dbms_connector_http_listen__address=:7474
      - NEO4J_dbms_connector_bolt_advertised__address=:7687
      - NEO4J_dbms_connector_http_advertised__address=:7474
      - NEO4J_dbms_default__database=neo4j
      - NEO4JLABS_PLUGINS=["apoc"]
      - NEO4J_apoc_import_file_enabled=true
      - NEO4J_apoc_export_file_enabled=true
      - NEO4J_apoc_import_file_use__neo4j__config=true
      - NEO4J_dbms_shell_enabled=true
networks:
  app-test-network:
    driver: bridge
But after running docker-compose up, I get the following error:
Creating app-repo_app-test-neo4j ... done
ERROR: for app-repo_app-test Cannot start service app-test: driver failed programming external connectivity on endpoint app-repo_app-test (2cffe4fa4299d6e53a784f7f564dfa49d1a2cb82e4b599391b2a3206563d0e47): Error starting userland proxy: listen tcp4 0.0.0.0:80: bind: address already in use
ERROR: for app-test Cannot start service app-test: driver failed programming external connectivity on endpoint app-repo_app-test (2cffe4fa4299d6e53a784f7f564dfa49d1a2cb82e4b599391b2a3206563d0e47): Error starting userland proxy: listen tcp4 0.0.0.0:80: bind: address already in use
ERROR: Encountered errors while bringing up the project.
Can anyone give me a clue about what went wrong here? And is this the correct approach to deploying a ReactJS application on a CentOS 7 server using Docker?
As said in my comment above: you are not using the first container of the multi-stage build.
See this article: https://docs.docker.com/develop/develop-images/multistage-build/#name-your-build-stages
And particularly the example code:
# syntax=docker/dockerfile:1
FROM golang:1.16 AS builder
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go ./
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /go/src/github.com/alexellis/href-counter/app ./
CMD ["./app"]
The COPY --from=builder line will do the trick for you.

docker bind mount not working in react app

I am using Docker Toolbox on Windows Home and am having trouble figuring out how to get a bind mount working in my frontend app. I want changes to be reflected when content in the src directory changes.
App structure:
Dockerfile:
FROM node
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "npm", "start" ]
Docker commands:
(within the frontend dir) docker build -t frontend .
docker run -p 3000:3000 -d -it --rm --name frontend-app -v ${cwd}:/app/src frontend
Any help is highly appreciated.
EDIT
cwd -> E:\docker\multi\frontend
Mounting cwd/src is also not working. However, I found that with /e/docker/multi/frontend/src the changes are reflected after re-running the same image.
I have run into the same issue. It feels like we should use nodemon to watch for file changes and restart the app, because that is what the Docker reference projects and tutorials do.
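A related workaround worth trying (this is an assumption based on how create-react-app's file watcher behaves, not something confirmed in the question): with Docker Toolbox the source is shared through VirtualBox, whose shared folders often drop file-change events, so the watcher has to be switched to polling via create-react-app's CHOKIDAR_USEPOLLING variable. Reusing the question's image name and the VM-style path that was reported to work:

```shell
# CHOKIDAR_USEPOLLING makes CRA's watcher poll for changes instead of
# relying on filesystem events, which VirtualBox shared folders drop
docker run -p 3000:3000 -d -it --rm --name frontend-app \
  -e CHOKIDAR_USEPOLLING=true \
  -v /e/docker/multi/frontend/src:/app/src \
  frontend
```

Polling costs some CPU but is the usual fix for bind mounts that cross a VM boundary.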

How to pass environment variables in docker-compose.yml with create-react-app

I have nginx and client images, loaded by a docker-compose.yml file.
For some reason, the environment variable (REACT_APP_MAXIMUM_CAMERAS_COUNT) is not visible when the application is running (I get undefined), and I can't figure out why.
Here is my create-react-app Dockerfile:
FROM node:alpine as builder
WORKDIR /app
COPY ./package.json ./
RUN npm i
COPY . .
RUN npm run build
FROM nginx
EXPOSE 3000
COPY ./nginx/default.conf /etc/nginx/conf.d/default.conf
COPY --from=builder /app/build /usr/share/nginx/html
And here is my docker-compose.yml file:
version: '3'
services:
  nginx:
    image: <ip_address>:5000/orassayag/osr_streamer_nginx:v1.0
    restart: always
    ports:
      - '3050:80'
  client:
    image: <ip_address>:5000/orassayag/osr_streamer_client:v1.0
    environment:
      - REACT_APP_MAXIMUM_CAMERAS_COUNT=10
Note that since docker-compose pulls the images from a private registry (without any build), it can't use the "build" blocks with "args" (I already tried with args, and that works). Any workaround to solve this?
The docker-compose file you have seems to do the right thing. Try getting a shell inside the running container and typing export:
docker exec -it code_client_1 sh
/usr/app # export
export HOME='/root'
export HOSTNAME='f4b3fc891ce3'
export NODE_VERSION='10.15.0'
export PATH='/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
export PORT='80'
export PWD='/usr/app'
export REACT_APP_MAXIMUM_CAMERAS_COUNT='10'
export SHLVL='1'
export TERM='xterm'
export YARN_VERSION='1.12.3'
/usr/app #
There you can see that the environment variable works. Your problem is likely that your website is built without the environment being set, so it will not actually read your environment variables at runtime.
There is a lengthy discussion about this on GitHub, and even though it might not be optimal, I myself rebuild and then run each time I start the service. It is not optimal, but it works for what I need.
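A common workaround for that (sketched here with assumed names and paths; nothing in the question uses this yet) is to stop baking REACT_APP_* values into the static build and instead have the container's entrypoint write them into a small env.js that index.html loads, so the same image works with any runtime environment:

```shell
# Entrypoint sketch: dump the runtime value into a JS file next to the build.
# OUT_DIR would be /usr/share/nginx/html in the real image; it defaults to
# /tmp here so the sketch runs anywhere. The fallback of 10 mirrors the
# compose file's value.
OUT_DIR="${OUT_DIR:-/tmp}"
cat > "$OUT_DIR/env.js" <<EOF
window._env_ = {
  REACT_APP_MAXIMUM_CAMERAS_COUNT: "${REACT_APP_MAXIMUM_CAMERAS_COUNT:-10}"
};
EOF
```

The app then reads window._env_.REACT_APP_MAXIMUM_CAMERAS_COUNT instead of process.env, and the real entrypoint would end with exec nginx -g 'daemon off;'.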
