Connecting to react HOST from outside docker - reactjs

I am running a React application in development inside a Docker container; this is the docker-compose file:
version: '3.8'
services:
  my_frontend:
    build:
      context: ../
      dockerfile: ./deployment/Dockerfile.dev
    ports:
      - 80:80
    extra_hosts:
      - "app.my-host.com:127.0.0.1"
    env_file:
      - ../src/environment/.env.local.dev
    volumes:
      - ../src:/app/src
      - /app/node_modules
    stdin_open: true
    tty: true
The React application runs on the host app.my-host.com, since the HOST is provided in the environment file. The image builds and works properly, and from a shell inside the container (via docker exec)
curl http://app.my-host.com
gives the correct result, but I can't access the application from outside the container. I have tried different methods using extra_hosts, but with no success:
http://app.my-host.com gives "page not found" on my Windows laptop.
[EDIT: Adding dockerfile]
FROM node:18-alpine
RUN apk update && apk upgrade && \
apk add --no-cache bash git openssh curl
WORKDIR /app
COPY package.json /app/
RUN npm install
COPY . /app/
EXPOSE 80
CMD [ "npm", "start" ]
Requirement: access http://app.my-host.com from my Windows laptop.
Any leads will be greatly helpful. Thanks in advance.
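For what it's worth: extra_hosts only adds an entry to the container's /etc/hosts, so the Windows laptop never learns the name, and if the dev server binds to app.my-host.com (which resolves to 127.0.0.1 inside the container) it listens only on the container's loopback, which is why curl works inside but not outside. A minimal sketch of one common fix, assuming a CRA-style dev server that reads HOST/PORT from the env file shown in the compose file:

# ../src/environment/.env.local.dev (sketch) - bind to all interfaces
# so the published port 80 actually reaches the server
HOST=0.0.0.0
PORT=80

# C:\Windows\System32\drivers\etc\hosts on the laptop (edit as admin),
# so the browser resolves the name to the published port on localhost
127.0.0.1 app.my-host.com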

Related

React not reading environment variable value in production from docker-compose.yml but reading it on local machine

I tried to pass a variable from docker-compose.yml to my Docker container, but the container doesn't see the value of this variable. I have tried many approaches, all to no avail. Here are my attempts.
First try:
FROM node:alpine3.17 as build
LABEL type="production"
WORKDIR /react-app
COPY package.json ./
COPY package-lock.json ./
RUN npm install
COPY . ./
ARG REACT_APP_BACKEND_URL
ENV REACT_APP_BACKEND_URL=$REACT_APP_BACKEND_URL
RUN npm run build
# production environment
FROM nginx:stable-alpine
COPY --from=build /react-app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Second try:
FROM node:alpine3.17 as build
LABEL type="dev"
WORKDIR /react-app
COPY package.json ./
COPY package-lock.json ./
# RUN commands are executed only while the image is being built
RUN npm install
COPY . ./
ARG REACT_APP_BACKEND_URL
ENV REACT_APP_BACKEND_URL=$REACT_APP_BACKEND_URL
RUN npm run build
RUN npm install -g serve
EXPOSE 3000
# The CMD command is executed only when the container runs
CMD serve -s build
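Worth noting for both attempts: an ARG only receives a value if one is supplied at build time, so the image has to be built with the variable passed explicitly. A sketch (the URL is a placeholder):

# The value is baked into the bundle by `npm run build`; changing it
# later requires rebuilding the image, not just restarting the container
docker build --build-arg REACT_APP_BACKEND_URL=https://api.example.com \
  -t stolovaya51-react-static-server .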
I built the images from these Dockerfiles, pushed them to Docker Hub with various versions, and then ran docker-compose.yml on the remote server.
My docker-compose.yml
version: '3'
services:
  stolovaya51-react-static-server:
    container_name: stolovaya51-react-production:0.0.1 (for example)
    build:
      args:
        - REACT_APP_BACKEND_URL=REACT_APP_BACKEND_URL
    ports:
      - "80:80"
      - "3000:3000"
By the way, when I run this on my local machine I see the value of the environment variable, but when I run it on the server I only see the variable name, and the value = "".
I don't know the reason. What's the matter?
I have found the answer to my question!
First, I combined the two repositories, frontend and backend, into one project.
Then I redesigned my project structure and gathered the two parts of my application together. Now I have this structure:
root_project_folder:
  ./frontend
    ...some src
    ./frontend/docker/Dockerfile
  ./backend
    ...some src
    ./backend/docker/Dockerfile
  docker-compose.yml
And now my frontend applies all args from the docker-compose.yml in the root folder.
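A sketch of how the root docker-compose.yml might pass the arg with this layout (the URL is a placeholder and the service names are assumptions, not taken from the answer):

version: '3'
services:
  frontend:
    build:
      context: ./frontend
      dockerfile: docker/Dockerfile
      args:
        # the actual value, not just the variable name
        - REACT_APP_BACKEND_URL=https://api.example.com
    ports:
      - "80:80"
  backend:
    build:
      context: ./backend
      dockerfile: docker/Dockerfile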

Deploy ReactJS application on CentOS 7 server using Docker

I have deployed a ReactJS application (with a Neo4j database) on a CentOS 7 server. I want to deploy another instance of the same application on the same server using Docker. I have installed Docker (version 20.10.12) on the server (CentOS 7).
On the server, I have cloned my ReactJS project and created the following Dockerfile:
FROM node:16 as builder
WORKDIR /app
COPY package.json .
COPY package-lock.json .
RUN npm install
COPY . .
RUN npm run build
FROM httpd:alpine
WORKDIR /var/www/html
COPY --from=builder /app/build/ .
and the following docker-compose.yaml file:
version: '3.1'
services:
  app-test:
    hostname: my-app-test.com
    build: .
    ports:
      - '80:80'
    command: nohup node server.js &
    networks:
      - app-test-network
  app-test-neo4j:
    image: neo4j:4.4.2
    ports:
      - '7474:7474'
      - '7473:7473'
      - '7687:7687'
    volumes:
      - ./volumes/neo4j/data:/data
      - ./volumes/neo4j/conf:/conf
      - ./volumes/neo4j/plugins:/plugins
      - ./volumes/neo4j/logs:/logs
      - ./volumes/neo4j/import:/import
      - ./volumes/neo4j/init:/init
    networks:
      - app-test-network
    environment:
      - NEO4J_AUTH=neo/password
      - NEO4J_ACCEPT_LICENSE_AGREEMENT=yes
      - NEO4J_dbms_security_procedures_unrestricted=apoc.*
      - NEO4J_dbms_security_procedures_whitelist=apoc.*
      - NEO4J_dbms_default__listen__address=0.0.0.0
      - NEO4J_dbms_connector_bolt_listen__address=:7687
      - NEO4J_dbms_connector_http_listen__address=:7474
      - NEO4J_dbms_connector_bolt_advertised__address=:7687
      - NEO4J_dbms_connector_http_advertised__address=:7474
      - NEO4J_dbms_default__database=neo4j
      - NEO4JLABS_PLUGINS=["apoc"]
      - NEO4J_apoc_import_file_enabled=true
      - NEO4J_apoc_export_file_enabled=true
      - NEO4J_apoc_import_file_use__neo4j__config=true
      - NEO4J_dbms_shell_enabled=true
networks:
  app-test-network:
    driver: bridge
But after running docker-compose up, I get the following error:
Creating app-repo_app-test-neo4j ... done
ERROR: for app-repo_app-test  Cannot start service app-test: driver failed programming external connectivity on endpoint app-repo_app-test (2cffe4fa4299d6e53a784f7f564dfa49d1a2cb82e4b599391b2a3206563d0e47): Error starting userland proxy: listen tcp4 0.0.0.0:80: bind: address already in use
ERROR: for app-test  Cannot start service app-test: driver failed programming external connectivity on endpoint app-repo_app-test (2cffe4fa4299d6e53a784f7f564dfa49d1a2cb82e4b599391b2a3206563d0e47): Error starting userland proxy: listen tcp4 0.0.0.0:80: bind: address already in use
ERROR: Encountered errors while bringing up the project.
Can anyone give me a clue as to what went wrong here? And is this the correct approach to deploying a ReactJS application on a CentOS 7 server using Docker?
As said in my comment above: you are not using the first container of the multi-stage build.
See this article: https://docs.docker.com/develop/develop-images/multistage-build/#name-your-build-stages
And particularly the example code:
# syntax=docker/dockerfile:1
FROM golang:1.16 AS builder
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go ./
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /go/src/github.com/alexellis/href-counter/app ./
CMD ["./app"]
The COPY --from=builder will do the trick for you.
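Separately from the multi-stage point: the quoted "bind: address already in use" means host port 80 is already taken, presumably by the first instance of the application. A sketch of publishing the second instance on another host port (8080 is an arbitrary free port, not from the question):

  app-test:
    hostname: my-app-test.com
    build: .
    ports:
      # host 8080 -> container 80; the existing instance keeps port 80
      - '8080:80'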

ASP.NET Core 5 + SQL Server Docker container: invalid compose project

I am trying to set up a simple container on a Raspberry Pi 4 following a guide.
For some reason I keep bumping into the following error:
service "my-api" has neither an image nor a build context specified: invalid compose project
As this is my first "real" Docker container setup, I have no real idea what to do now. I have looked up every single issue I could find via a Google search (I even tried Bing, yeah, that desperate), but I can't really find a decent guide/answer.
I'll attach my docker files:
docker compose:
version: "3.9"
services:
web:
build: .
ports:
- "8000:80"
depends_on:
- db
db:
image: "mcr.microsoft.com/mssql/server"
environment:
SA_PASSWORD: "Your_password123"
ACCEPT_EULA: "Y"
Dockerfile (API project):
#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.
FROM mcr.microsoft.com/dotnet/aspnet:5.0.302-buster-slim-amd64 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/sdk:5.0.302-buster-slim-amd64 AS build
WORKDIR /src
COPY ["my-api/my-api.csproj", "my-api/"]
RUN dotnet restore "my-api/my-api.csproj"
COPY . .
WORKDIR "/src/my-api"
RUN dotnet build "my-api.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "my-api.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "my-api.dll"]
RUN chmod +x ./entrypoint.sh
CMD /bin/bash ./entrypoint.sh
entrypoint.sh
#!/bin/bash
set -e
run_cmd="dotnet run --server.urls http://*:80"
until dotnet ef database update; do
>&2 echo "SQL Server is starting up"
sleep 1
done
>&2 echo "SQL Server is up - executing command"
exec $run_cmd
If someone could nudge me in the right direction, that would be awesome.
Indentation in your yaml file is important. Your services need to be indented under the services: line.
version: "3.9"
services:
web:
build: .
ports:
- "8000:80"
depends_on:
- db
db:
image: "mcr.microsoft.com/mssql/server"
environment:
SA_PASSWORD: "Your_password123"
ACCEPT_EULA: "Y"
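A quick way to catch this class of YAML mistake before bringing anything up, assuming a reasonably recent Compose:

# Parses the file and prints the fully resolved configuration,
# or a parse/validation error pointing at the problem; nothing is started
docker compose config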
Are you using Visual Studio (not Code) for building this?
Visual Studio has an option "Place solution and project files in the same directory". Make sure you check it before you become a pro with docker/docker-compose.
Visual Studio tends to mess up the relative path of the .csproj file when that option is unchecked and you add Docker support (or container orchestration support).
The COPY ["my-api/my-api.csproj", "my-api/"] is supposed to have the relative path to the .csproj file.
Just a hunch.
Also, Bing didn't help me either.

Why do we need a volume option in docker-compose when we use react/next?

I have a question: why do we need to use the volumes option in docker-compose when we use it for React/Next.js?
If I understood correctly, we use volumes to persist data, for example when we use a database.
But with React/Next.js we just use them to mount the node_modules and app paths, which to me does not make any sense.
If I put this:
version: '3'
services:
  nextjs-ui:
    build:
      context: ./
    ports:
      - "3000:3000"
    container_name: nextjs-ui
    volumes:
      - ./:/usr/src/app/
      - /usr/src/app/node_modules
It works.
If I put this:
version: '3'
services:
  nextjs-ui:
    build:
      context: ./
    ports:
      - "3000:3000"
    container_name: nextjs-ui
It works in the same way.
Why do we need to mount node_modules and the app path?
My Dockerfile:
FROM node:12-alpine
WORKDIR /app
COPY package*.json ./
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001
COPY . .
EXPOSE 9000
RUN npm run build
CMD ["npm", "start"]
As per your Dockerfile, it already copies your source code and runs npm run build. Finally it runs npm start to start the development server (this is not recommended for production).
By mounting the app directory and the node_modules volume, you get the ability to reload your app while you make changes to the source on your host machine.
In summary, if you do not mount the source code, you have to rebuild the Docker image and rerun it for your changes to be visible in the app. If you mount the source and node_modules, you can leverage the live-reloading capability of npm start and develop on the host machine. The bare /usr/src/app/node_modules entry is an anonymous volume: it stops the bind mount of ./ from hiding the node_modules installed inside the image behind whatever (possibly empty) node_modules exists on the host.
With the volumes included, all your local changes are reflected inside your dockerized Next application, which lets you use features such as hot reloading without re-building the Docker container just to see the changes.
In production, you do not have to include these volumes.
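One common way to keep the dev-only mounts out of production is the standard override-file convention, sketched below (the service layout mirrors the question; Compose loads docker-compose.override.yml automatically when it is present, and skips it when you pass -f docker-compose.yml explicitly):

# docker-compose.yml - shared, production-safe definition
version: '3'
services:
  nextjs-ui:
    build:
      context: ./
    ports:
      - "3000:3000"

# docker-compose.override.yml - kept only on dev machines;
# adds the live-reload mounts on top of the base file
version: '3'
services:
  nextjs-ui:
    volumes:
      - ./:/usr/src/app/
      - /usr/src/app/node_modules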

docker bind mount not working in react app

I am using Docker Toolbox on Windows Home and am having trouble figuring out how to get a bind mount working in my frontend app. I want changes to be reflected when content in the src directory changes.
App structure:
Dockerfile:
FROM node
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "npm", "start" ]
Docker commands:
(within the frontend dir) docker build -t frontend .
docker run -p 3000:3000 -d -it --rm --name frontend-app -v ${cwd}:/app/src frontend
Any help is highly appreciated.
EDIT
cwd -> E:\docker\multi\frontend
cwd/src is also not working. However, I find that with /e/docker/multi/frontend/src the changes are reflected when re-running the same image.
I have run into the same issue. It feels like we should use nodemon to watch for file changes and restart the app, because that is what the Docker reference and tutorial projects do.
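A hedged addition to that: with Docker Toolbox the source sits on a VirtualBox shared folder, where filesystem events usually don't propagate, so the dev server's watcher never fires; CRA's watcher (chokidar) can be switched to polling instead. A sketch reusing the /e/... path the edit above found to work:

# CHOKIDAR_USEPOLLING makes react-scripts poll for file changes
# instead of relying on filesystem events that VirtualBox drops
docker run -p 3000:3000 -d -it --rm --name frontend-app \
  -e CHOKIDAR_USEPOLLING=true \
  -v /e/docker/multi/frontend/src:/app/src frontend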
