How can I exclude node_modules from shared Volumes in docker container? - reactjs

I am working on a MERN stack project and trying to use Docker for both the development and production versions. I have created a docker-compose file for each mode (dev, prod) with three services (frontend, backend, database). Everything connects and works fine, but to publish changes in development mode I mount the project directory as a volume. Since I am a Windows user, the node_modules in my project folder and the node_modules in the container (which are Linux builds of the same packages) conflict and generate an error. I am providing my docker-compose file as well.
docker-compose.yml
services:
  devengers:
    container_name: devengers-root
    build:
      context: .
      dockerfile: Dockerfile.development
    image: devengers
  backend:
    container_name: devengers-backend
    image: devengers
    ports:
      - 3000:3000
    environment:
      - MONGODB_URL=mongodb://database:27017
    networks:
      - local_net
    depends_on:
      - devengers
      - database
    command: npm run start:dev
    volumes:
      - ".:/Devengers"
  frontend:
    container_name: devengers-frontend
    image: devengers
    ports:
      - 8080:8080
    environment:
      - API=http://backend:3000
    networks:
      - local_net
    depends_on:
      - backend
      - database
    command: npm run dev
    volumes:
      - ".:/Devengers"
  database:
    container_name: devengers-database
    image: mongo:4.0-xenial
    ports:
      - 27017:27017
    networks:
      - local_net
    volumes:
      - mongodb_data:/data/db
networks:
  local_net:
volumes:
  mongodb_data:
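One common fix, sketched against the compose file above, is to shadow node_modules with an anonymous volume so the bind mount never overwrites the container's Linux-built packages. This assumes the image installs dependencies at /Devengers/node_modules:

```yaml
  backend:
    # ... other settings unchanged ...
    volumes:
      - ".:/Devengers"
      # anonymous volume: shadows the host's node_modules inside the container
      - "/Devengers/node_modules"
```

The same extra volume line would go under the frontend service. After editing, docker-compose up --build recreates the containers; stale anonymous volumes can be cleared with docker-compose down -v.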

Related

Dockerizing a MERN stack app, how to define ports

I want to dockerize my MERN app. Here is the Dockerfile for my frontend:
FROM node:18.8-alpine
COPY . ./app
WORKDIR /app
RUN npm install
EXPOSE 3000
CMD ["npm", "start"]
and here is the dockerfile for my backend:
FROM node:18.8-alpine
COPY . ./app
WORKDIR /app
RUN npm install
EXPOSE 3000
CMD ["node", "server.js"]
I also want to use docker-compose to run both frontend and backend together and this is the config file:
version: '3.8'
services:
  backend:
    build: ./backend
    container_name: backend_C
    ports:
      - 80:80
  frontend:
    build: ./frontend
    container_name: frontend_C
    ports:
      - 3000:3000
    stdin_open: true
    tty: true
The problem is that since both my backend and frontend run on port 3000, there is a conflict when I run my images. I don't know how to specify and change the ports for them.
You can simply specify different host ports mapping to the same container ports:
version: '3.8'
services:
  backend:
    build: ./backend
    container_name: backend_C
    ports:
      - 3000:3000
  frontend:
    build: ./frontend
    container_name: frontend_C
    ports:
      - 8080:3000
    stdin_open: true
    tty: true
Now your backend is available on port 3000 and your frontend on port 8080.

Uncaught Error: Cannot find module '@blueprintjs/table'

I need help. I have a project created with Docker, and I run it with docker-compose up. But I get Uncaught Error: Cannot find module '@blueprintjs/table'.
The project consists of three containers, and the problem is only in the skynet.refinedev container.
Please tell me how to solve this. I have tried installing @blueprintjs/table via npm and with docker-compose exec skynet.refinedev npm install @blueprintjs/core --save
docker-compose.yml:
services:
  skynet.nextjs:
    container_name: skynet.nextjs
    stdin_open: true
    build:
      context: ./nextjs-test
      dockerfile: Dockerfile
    volumes:
      - ./nextjs-test:/nextjs-test
      - /nextjs-test/node_modules/
    ports:
      - '${FORWARD_DASHBOARD_PORT:-3004}:3002'
    command: npm run dev
    networks:
      - skynet.frontend
  skynet.refinedev:
    container_name: skynet.refinedev
    stdin_open: true
    build:
      context: ./refinedev-test
      dockerfile: Dockerfile
    volumes:
      - ./refinedev-test:/refinedev-test
      - /refinedev-test/node_modules/
    ports:
      - '${FORWARD_FRONTEND_PORT:-3005}:3001'
      # - '${FORWARD_FRONTEND_PORT:-3007}:3007'
      # - '${FORWARD_FRONTEND_PORT:-3008}:3008'
    command: npm run start
    networks:
      - skynet.frontend
  skynet.velzon:
    container_name: skynet.velzon
    stdin_open: true
    build:
      context: ./velzon-reactstrap
      dockerfile: Dockerfile
    volumes:
      - ./velzon-reactstrap:/velzon-reactstrap
      - /velzon-reactstrap/node_modules/
    ports:
      - '${FORWARD_TEMPLATE_PORT:-3006}:3000'
    command: npm run start
    networks:
      - skynet.frontend
networks:
  skynet.frontend:
    driver: bridge
  ch_ntw:
    driver: bridge
    ipam:
      config:
        - subnet: 10.222.1.0/24
volumes:
  sailmysql:
    driver: local
  sailredis:
    driver: local
  sailmeilisearch:
    driver: local
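Since the compose file above already shadows node_modules with anonymous volumes, one plausible cause is that the anonymous volume still holds node_modules from an older image build that predates the dependency. A possible sequence to refresh it (service name taken from the question; this assumes @blueprintjs/table is listed in refinedev-test/package.json):

```shell
# remove containers plus their anonymous volumes so node_modules is recreated
docker-compose down -v
# rebuild so `npm install` in the Dockerfile picks up @blueprintjs/table
docker-compose build skynet.refinedev
docker-compose up -d
```

If the package is missing from package.json on the host, it would first need to be added there (e.g. npm install @blueprintjs/table --save run locally), since the image's npm install reads package.json at build time.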

Why is my db data lost after a docker restart?

My docker-compose.yml
version: '3.1'
services:
  sqldata:
    image: mcr.microsoft.com/mssql/server:2019-latest
    environment:
      - SA_PASSWORD=YourStrong#Passw0rd
      - ACCEPT_EULA=Y
    ports:
      - "1433:1433"
    volumes:
      - ./docker/mssql:/var/opt/mssql
Each time I run docker-compose down && docker-compose up -d, all the databases I created are deleted.
How can I prevent this data loss on each restart?
Use docker stop and docker start instead of docker-compose down.
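If you do want docker-compose down to be safe, a named volume is a possible alternative to the relative bind mount; on some Docker Desktop setups SQL Server has trouble persisting to a bind-mounted host folder. A sketch, keeping the same image and settings as above:

```yaml
services:
  sqldata:
    image: mcr.microsoft.com/mssql/server:2019-latest
    environment:
      - SA_PASSWORD=YourStrong#Passw0rd
      - ACCEPT_EULA=Y
    ports:
      - "1433:1433"
    volumes:
      # named volume: survives `docker-compose down` (but not `down -v`)
      - mssql_data:/var/opt/mssql
volumes:
  mssql_data:
```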

TemplateDoesNotExist at / index.html in Docker | react with django

Why doesn't my React build folder get served after the python manage.py collectstatic command runs in the Dockerfile? I have tried for a long time to dockerize my whole project, but I keep failing to collect the static files. Please have a look and tell me where I went wrong.
This is my backend/Dockerfile inside the Django project:
FROM python:3.9.5-slim
ENV PYTHONUNBUFFERED 1
RUN pip install --upgrade pip
RUN apt-get update && apt-get install build-essential python-dev -y
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt && pip3 install uwsgi
WORKDIR /django
COPY . .
CMD ['RUN', 'python', 'manage.py', 'collectstatic', '--noinput']
EXPOSE 8000
CMD ["uwsgi", "--http", ":9090", "--wsgi-file", "IMS/wsgi.py", "--master", "--processes", "4", "--threads", "2"]
And this is my frontend/Dockerfile inside my React folder:
FROM node:13.12.0-alpine
WORKDIR /react
COPY . .
RUN npm install
RUN npm run build
EXPOSE 3000
And finally, this is my docker-compose.yml file where I set up my services:
version: "3.3"
services:
  backend:
    build:
      context: ./backend/
    command: gunicorn IMS.wsgi --bind 0.0.0.0:8000
    container_name: ims-backend
    restart: always
    ports:
      - "8000:8000"
    networks:
      - ims-network
  ims-postgres:
    image: "postgres:14beta2-alpine3.14"
    restart: always
    container_name: ims-postgres
    hostname: emrdsaws.czlz2b677ytu.ap-south-1.rds.amazonaws.com
    environment:
      - POSTGRES_PASSWORD=zkf!%uW_?&UG%4
      - POSTGRES_DB=imsrds
      - POSTGRES_USER=dbrdsubuntume12
    ports:
      - 5432
  frontend:
    build:
      context: ./frontend/
    volumes:
      - react_build:/react/build
    expose:
      - 3000
    stdin_open: true
  nginx:
    image: nginx:latest
    ports:
      - "8080:8080"
    volumes:
      - ./nginx/nginx-setup.conf:/etc/nginx/conf.d/default.conf:ro
      - react_build:/var/www/react
    depends_on:
      - backend
      - frontend
      - ims-postgres
volumes:
  react_build:
networks:
  ims-network:
    driver: bridge
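One likely culprit in the backend Dockerfile above: CMD ['RUN', 'python', 'manage.py', 'collectstatic', '--noinput'] is invalid exec form (single quotes are not valid JSON, and 'RUN' is not a command), and the later CMD overrides it anyway, so collectstatic never runs. A sketch of the fix, assuming STATIC_ROOT is configured in the Django settings, is to make it a build-time RUN step:

```dockerfile
WORKDIR /django
COPY . .
# RUN executes at image-build time; the old CMD line never ran at all
RUN python manage.py collectstatic --noinput
EXPOSE 8000
CMD ["uwsgi", "--http", ":9090", "--wsgi-file", "IMS/wsgi.py", "--master", "--processes", "4", "--threads", "2"]
```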

Connect with database - docker-compose up

I'm trying to make one Docker Compose file to bring up a web app (React), an API (.NET Core 2.1), and a SQL Server instance.
When I init the database and run .NET with the dotnet CLI, it works (using a connection string with Server=localhost). However, from what I've been googling, localhost does not work between containers, and when using containers I can't get my .NET Core app to connect to SQL Server.
Can anyone shed some light on what I am doing wrong?
I have this repo:
https://github.com/lucasgozzi/sagetest
And I'm currently using a branch named 'docker'. Here are my Dockerfiles and compose file in case you don't want to clone the repo.
Backend dockerfile:
FROM mcr.microsoft.com/dotnet/core/aspnet:2.1
WORKDIR /app
FROM mcr.microsoft.com/dotnet/core/sdk:2.1
WORKDIR /src
COPY . .
RUN dotnet restore "./Api/Api.csproj"
RUN dotnet build "Api/Api.csproj" -c Release -o /app/build
RUN dotnet publish "Api/Api.csproj" -c Release -o /app/publish
EXPOSE 5000
WORKDIR /app/publish
ENTRYPOINT ["dotnet", "Api.dll"]
Frontend dockerfile:
# base image
FROM node:12.2.0-alpine
# set working directory
WORKDIR /app
EXPOSE 3000
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY . .
RUN npm install --silent
RUN npm install react-scripts -g --silent
# start app
CMD ["npm", "start"]
Docker compose:
version: '3.1'
services:
api:
container_name: "teste-sage-api"
image: 'teste-sage-api'
build:
context: ./backend
dockerfile: Dockerfile
volumes:
- ./backend:/var/www/backend
ports:
- "5000:5000"
depends_on:
- "database"
networks:
- sagetest-network
web:
container_name: "teste-sage-web"
image: 'teste-sage-web'
build:
context: ./frontend_react
dockerfile: Dockerfile
ports:
- "3000:3000"
depends_on:
- "api"
networks:
- sagetest-network
database:
container_name: "sql-server"
image: "mcr.microsoft.com/mssql/server"
environment:
SA_PASSWORD: "Teste#123"
ACCEPT_EULA: "Y"
ports:
- "1433:1433"
networks:
- sagetest-network
networks:
sagetest-network:
driver: bridge
You can access the database from your containers via the service name, which in your case is database. But you need to make sure the database container is up and running before trying to connect to it; depends_on is not enough here, so you may need to implement a wait-for step in your dotnet container. See https://docs.docker.com/compose/startup-order/ for more info.
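As a sketch, the connection string in the API would then point at the service name instead of localhost (the database name here is hypothetical; the sa password is taken from the compose file above):

```
Server=database,1433;Database=SageTestDb;User Id=sa;Password=Teste#123;
```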