Deploy ReactJS application on CentOS 7 server using Docker

I have deployed a ReactJS application (with a neo4j database) on a CentOS 7 server. I want to deploy another instance of the same application on the same server using Docker. I have installed Docker (version 20.10.12) on the server (CentOS 7).
On the server, I have cloned my ReactJS project and created the following Dockerfile:
FROM node:16 as builder
WORKDIR /app
COPY package.json .
COPY package-lock.json .
RUN npm install
COPY . .
RUN npm run build
FROM httpd:alpine
WORKDIR /var/www/html
COPY --from=builder /app/build/ .
and the following docker-compose.yaml file:
version: '3.1'
services:
  app-test:
    hostname: my-app-test.com
    build: .
    ports:
      - '80:80'
    command: nohup node server.js &
    networks:
      - app-test-network
  app-test-neo4j:
    image: neo4j:4.4.2
    ports:
      - '7474:7474'
      - '7473:7473'
      - '7687:7687'
    volumes:
      - ./volumes/neo4j/data:/data
      - ./volumes/neo4j/conf:/conf
      - ./volumes/neo4j/plugins:/plugins
      - ./volumes/neo4j/logs:/logs
      - ./volumes/neo4j/import:/import
      - ./volumes/neo4j/init:/init
    networks:
      - app-test-network
    environment:
      - NEO4J_AUTH=neo/password
      - NEO4J_ACCEPT_LICENSE_AGREEMENT=yes
      - NEO4J_dbms_security_procedures_unrestricted=apoc.*
      - NEO4J_dbms_security_procedures_whitelist=apoc.*
      - NEO4J_dbms_default__listen__address=0.0.0.0
      - NEO4J_dbms_connector_bolt_listen__address=:7687
      - NEO4J_dbms_connector_http_listen__address=:7474
      - NEO4J_dbms_connector_bolt_advertised__address=:7687
      - NEO4J_dbms_connector_http_advertised__address=:7474
      - NEO4J_dbms_default__database=neo4j
      - NEO4JLABS_PLUGINS=["apoc"]
      - NEO4J_apoc_import_file_enabled=true
      - NEO4J_apoc_export_file_enabled=true
      - NEO4J_apoc_import_file_use__neo4j__config=true
      - NEO4J_dbms_shell_enabled=true
networks:
  app-test-network:
    driver: bridge
But after running docker-compose up, I get the following error:
Creating app-repo_app-test-neo4j ... done
ERROR: for app-repo_app-test  Cannot start service app-test: driver failed programming external connectivity on endpoint app-repo_app-test (2cffe4fa4299d6e53a784f7f564dfa49d1a2cb82e4b599391b2a3206563d0e47): Error starting userland proxy: listen tcp4 0.0.0.0:80: bind: address already in use
ERROR: for app-test  Cannot start service app-test: driver failed programming external connectivity on endpoint app-repo_app-test (2cffe4fa4299d6e53a784f7f564dfa49d1a2cb82e4b599391b2a3206563d0e47): Error starting userland proxy: listen tcp4 0.0.0.0:80: bind: address already in use
ERROR: Encountered errors while bringing up the project.
Can anyone give me a clue about what went wrong here? And is this the correct approach to deploying a ReactJS application on a CentOS 7 server using Docker?

As said in my comment above: you are not using the first container of the multi-stage build.
See this article: https://docs.docker.com/develop/develop-images/multistage-build/#name-your-build-stages
And particularly the example code:
# syntax=docker/dockerfile:1
FROM golang:1.16 AS builder
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go ./
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /go/src/github.com/alexellis/href-counter/app ./
CMD ["./app"]
The COPY --from=builder will do the trick for you.
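A couple of hedged observations beyond the multi-stage point (inferred from the error text and the stock images, so treat them as assumptions rather than a definitive diagnosis): the bind: address already in use error means something on the host, most likely the first, non-Docker deployment, is already listening on port 80; the httpd:alpine image serves files from /usr/local/apache2/htdocs rather than /var/www/html; and the command: nohup node server.js & line overrides httpd's own startup command even though the final image contains no Node.js. A sketch of the adjusted pieces:
# Dockerfile, final stage: copy the build into httpd's default document root
FROM httpd:alpine
COPY --from=builder /app/build/ /usr/local/apache2/htdocs/
# docker-compose.yaml, app-test service: publish a free host port instead of 80
# (8080 is an arbitrary choice) and drop the command: override so httpd starts
ports:
  - '8080:80'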

Related

Connecting to react HOST from outside docker

I am running a React application in development inside Docker, and this is the docker-compose file:
version: '3.8'
services:
  my_frontend:
    build:
      context: ../
      dockerfile: ./deployment/Dockerfile.dev
    ports:
      - 80:80
    extra_hosts:
      - "app.my-host.com:127.0.0.1"
    env_file:
      - ../src/environment/.env.local.dev
    volumes:
      - ../src:/app/src
      - /app/node_modules
    stdin_open: true
    tty: true
The React application runs on the host app.my-host.com, as the host is provided in the environment file. The image builds and works properly, and I can access the application from inside the container using docker exec:
curl http://app.my-host.com
gives the correct result, but I can't access the application from outside the Docker container. I have tried different methods using extra_hosts, but with no success:
http://app.my-host.com gives "page not found" on my Windows laptop.
[EDIT: Adding Dockerfile]
FROM node:18-alpine
RUN apk update && apk upgrade && \
apk add --no-cache bash git openssh curl
WORKDIR /app
COPY package.json /app/
RUN npm install
COPY . /app/
EXPOSE 80
CMD [ "npm", "start" ]
Requirement: access http://app.my-host.com from the Windows laptop.
Any leads will be greatly helpful. Thanks in advance.
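One hedged lead, assuming the dev server itself is fine: extra_hosts only writes an entry into the container's /etc/hosts, so app.my-host.com resolves inside the container but not on the laptop. Adding an equivalent entry on the Windows side would let the browser resolve the name to the published port:
# C:\Windows\System32\drivers\etc\hosts on the laptop (edit as Administrator)
127.0.0.1  app.my-host.com
With that in place, http://app.my-host.com from the laptop should reach the container through the 80:80 mapping, provided the dev server listens on 0.0.0.0 and not only on localhost.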

ASP.NET Core 5 + SQL Server Docker container: invalid compose project

I am trying to set up a simple container on a Raspberry Pi 4 using the following guide.
For some reason I'm always bumping into the following error:
service "my-api" has neither an image nor a build context specified: invalid compose project
As this is my first "real" Docker container setup, I have no real idea of what to do now. I really looked up every single issue that I could find via Google search (even tried it with Bing, yeah, that desperate). But I can't really find any decent guide/answer.
I'll attach my docker files:
docker compose:
version: "3.9"
services:
web:
build: .
ports:
- "8000:80"
depends_on:
- db
db:
image: "mcr.microsoft.com/mssql/server"
environment:
SA_PASSWORD: "Your_password123"
ACCEPT_EULA: "Y"
Dockerfile (API project):
#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.
FROM mcr.microsoft.com/dotnet/aspnet:5.0.302-buster-slim-amd64 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/sdk:5.0.302-buster-slim-amd64 AS build
WORKDIR /src
COPY ["my-api/my-api.csproj", "my-api/"]
RUN dotnet restore "my-api/my-api.csproj"
COPY . .
WORKDIR "/src/my-api"
RUN dotnet build "my-api.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "my-api.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "my-api.dll"]
RUN chmod +x ./entrypoint.sh
CMD /bin/bash ./entrypoint.sh
entrypoint.sh
#!/bin/bash
set -e
run_cmd="dotnet run --server.urls http://*:80"
until dotnet ef database update; do
>&2 echo "SQL Server is starting up"
sleep 1
done
>&2 echo "SQL Server is up - executing command"
exec $run_cmd
If someone could nudge me in the right direction, that would be awesome.
Indentation in your YAML file is important. Your services need to be indented under the services: line.
version: "3.9"
services:
web:
build: .
ports:
- "8000:80"
depends_on:
- db
db:
image: "mcr.microsoft.com/mssql/server"
environment:
SA_PASSWORD: "Your_password123"
ACCEPT_EULA: "Y"
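As a quick sanity check (a generic tip, not part of the original answer), docker-compose config parses the file and prints the fully resolved project, so structural mistakes such as missing indentation surface immediately:
# prints the resolved compose file, or an error if the YAML structure is invalid
docker-compose config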
Are you using Visual Studio (not code) for building this?
Visual Studio has an option "Place solution and project files in the same directory". Make sure that you check it before you become a pro with Docker/docker-compose.
Visual Studio tends to mess up the relative path of the .csproj file when that option is unchecked and you add Docker support (or container orchestration support).
The COPY ["my-api/my-api.csproj", "my-api/"] line is supposed to have the relative path to the .csproj file (see the sketch below).
Just a hunch.
Also, Bing didn't help me either.
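For illustration only, since the exact layout depends on how the solution was created: with that option checked, the .csproj sits next to the solution file, and the generated COPY line typically drops the extra folder level, e.g.:
# project file referenced relative to the build context, without the my-api/ prefix
COPY ["my-api.csproj", "./"]
RUN dotnet restore "my-api.csproj"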

Why do we need a volume option in docker-compose when we use react/next?

I have a question: why do we need to use the volumes option in our docker-compose file when we use React/Next.js?
If I understood correctly, we use volumes to "save the data", for example when we use a database.
But with React/Next.js we just use it to pass the node_modules and app paths, which does not make any sense to me...
If I put this:
version: '3'
services:
  nextjs-ui:
    build:
      context: ./
    ports:
      - "3000:3000"
    container_name: nextjs-ui
    volumes:
      - ./:/usr/src/app/
      - /usr/src/app/node_modules
It works.
If I put this:
version: '3'
services:
  nextjs-ui:
    build:
      context: ./
    ports:
      - "3000:3000"
    container_name: nextjs-ui
It works in the same way.
Why do we need to save the node_modules and app paths?
My Dockerfile:
FROM node:12-alpine
WORKDIR /app
COPY package*.json ./
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001
COPY . .
EXPOSE 9000
RUN npm run build
CMD ["npm", "start"]
As per your Dockerfile, it already copies your source code and does an npm run build. Finally, it runs npm start to start the development server (this is not recommended for production).
By mounting the app and node_modules directories, you get the ability to reload your app while you make changes to the source on your host machine.
In summary, if you do not mount the source code, you have to rebuild the Docker image and run it for your changes to be visible in the app. If you mount the source and node_modules, you can leverage the live-reloading capability of npm start and develop on the host machine. The sketch below spells out what each volume entry does.
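To make the mechanics explicit (standard Docker volume behavior, not something stated in the question): the bind mount of ./ replaces the container's app directory with the host directory, which would hide the node_modules that npm install created inside the image; the second, anonymous volume is layered on top so the image's installed dependencies stay visible:
volumes:
  - ./:/usr/src/app/           # bind mount: host source shadows the image's files
  - /usr/src/app/node_modules  # anonymous volume: keeps the deps installed at build time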
With the volumes included, all your local changes are reflected inside your dockerized Next application, which allows you to use features such as hot reloading without having to rebuild the Docker image just to see the changes.
In production, you do not have to include these volumes.

How to access SQL Server from a Docker ASP.NET Core API image

I have generated a Dockerfile for an ASP.NET Core API with a single-page application, thanks to Visual Studio. After some research on the web, I corrected different troubles with the SPA in this Dockerfile.
Finally, my trouble is the connection to our database server.
When I try to connect, I get a
Microsoft.Data.SqlClient.SqlException : A network-related or instance-specific error occurred while establishing a connection to SQL Server.
It seems to happen because my container cannot access the server; after hours of Google searching, I only found solutions with SQL Server hosted in a Docker image.
How can my web app's Docker image access the entire company network to reach different servers? I use computer names and not IPs to match company requirements.
Thanks for all.
Versions:
.NET Core API: 3.1
I'm using Docker for Windows
Docker uses Linux containers
Here is my Dockerfile:
#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
RUN apt-get update -yq \
&& apt-get install curl gnupg -yq \
&& curl -sL https://deb.nodesource.com/setup_10.x | bash \
&& apt-get install nodejs -yq
WORKDIR /src
COPY ["Company.Dtm.WebApi.AppWebApi/Company.Dtm.WebApi.AppWebApi.csproj", "Company.Dtm.WebApi.AppWebApi/"]
COPY ["CompanyFramework/Company.Framework.WebApi/Company.Framework.WebApi.csproj", "CompanyFramework/Company.Framework.WebApi/"]
COPY ["CompanyFramework/Company.Framework.Model/Company.Framework.Model.csproj", "CompanyFramework/Company.Framework.Model/"]
COPY ["CompanyFramework/Company.Framework.Tools/Company.Framework.Tools.csproj", "CompanyFramework/Company.Framework.Tools/"]
COPY ["AppLib/Company.Dtm.Lib.AppLib/Company.Dtm.Lib.AppLib.csproj", "AppLib/Company.Dtm.Lib.AppLib/"]
RUN dotnet restore "Company.Dtm.WebApi.AppWebApi/Company.Dtm.WebApi.AppWebApi.csproj"
COPY . .
WORKDIR "/src/Company.Dtm.WebApi.AppWebApi"
RUN dotnet build "Company.Dtm.WebApi.AppWebApi.csproj" -c Debug -o /app/build
FROM build AS publish
RUN dotnet publish "Company.Dtm.WebApi.AppWebApi.csproj" -c Debug -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Company.Dtm.WebApi.AppWebApi.dll"]
Here is my docker-compose file:
version: '3'
services:
  webapp:
    build: .
    network_mode: "bridge"
    ports:
      - "8880:80"
I also had this issue, just trying to connect to my localhost development SQL Server.
What ended up working was to add the normal SQL Server ports to my Dockerfile:
EXPOSE 1433
# ...or whatever other ports you may be using:
EXPOSE 5000
Then set up a firewall Inbound Rule to allow those ports.
You cannot use 'localhost', obviously, since 'localhost' is the container the app is running in, but I did find, with Windows at least, that I can simply use my dev machine's name as the server, so it seems that DNS works across the NAT. I would think you should be able to access any network resource at that point, but I would say your firewall[s] might be a place to start. Your Docker container acts like an external network and is therefore generally untrusted.
I also found that I did not have a 'bridge' network. Maybe you get that with the Linux containers.
My docker network ls command revealed a "Default Switch" network, but no "bridge". Because this is Docker for Windows, there is no 'host' option.
That was all there was to it for me. I see a lot of other posts talking about a lot of other things, but honestly, just opening up the firewall is what did the trick. Good luck!
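To illustrate that point, a connection string using the dev machine's name instead of localhost might look like the following (MY-DEV-MACHINE, the database name, and the credentials are placeholders, not values from the original post):
// appsettings.json (placeholder values throughout)
{
  "ConnectionStrings": {
    "Default": "Server=MY-DEV-MACHINE,1433;Database=MyDb;User Id=sa;Password=Your_password123;"
  }
}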
You need to add another service for your db in your compose file.
Something like this:
version: "3"
services:
web:
build: .
ports:
- "8000:80"
depends_on:
- db
db:
image: "mcr.microsoft.com/mssql/server"
environment:
SA_PASSWORD: "Your_password123"
ACCEPT_EULA: "Y"
Make sure to replace the password in the SA_PASSWORD environment variable under db.
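One detail this leaves implicit (standard compose DNS, with names taken from the snippet above): on the default compose network, the service name db doubles as its hostname, so the web app's connection string should point at db rather than localhost:
Server=db,1433;Database=MyDb;User Id=sa;Password=Your_password123;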

Permanently install VS Code's server in container

Every time I start up a container to develop in it with VS Code's Remote - Containers extension, the container has to re-download the vs-code-server. Is there any way to easily install the server within a Dockerfile so it doesn't have to reinstall every time?
If using docker-compose, you can create a volume for the .vscode-server folder, so that it is persisted across runs.
Something like (in .devcontainer/docker-compose.yml):
version: "3"
services:
app:
build:
context: .
dockerfile: Dockerfile
command:
- /bin/sh
- -c
- "while sleep 1000; do :; done"
volumes:
- ..:/workspace
- vscode-server:/home/code/.vscode-server
volumes:
vscode-server:
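For completeness, a minimal .devcontainer/devcontainer.json tying this compose file to the extension might look like the sketch below; the service name, workspace folder, and code user mirror the snippet above and are assumptions about your setup:
// .devcontainer/devcontainer.json (a sketch; adjust the names to your setup)
{
  "dockerComposeFile": "docker-compose.yml",
  "service": "app",
  "workspaceFolder": "/workspace",
  "remoteUser": "code"
}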
