Consider the following docker-compose.yml:
version: '3'
services:
  frontend:
    build:
      context: ./frontend
    container_name: frontend
    command: npm start
    stdin_open: true
    tty: true
    volumes:
      - ./frontend:/usr/app
    ports:
      - "3000:3000"
  backend:
    build:
      context: ./backend
    container_name: backend
    command: npm start
    environment:
      - PORT=3001
      - MONGO_URL=mongodb://api_mongo:27017
    volumes:
      - ./backend/src:/usr/app/src
    ports:
      - "3001:3001"
  api_mongo:
    image: mongo:latest
    container_name: api_mongo
    volumes:
      - mongodb_api:/data/db
    ports:
      - "27017:27017"
volumes:
  mongodb_api:
And the React Dockerfile:
FROM node:14.10.1-alpine3.12
WORKDIR /usr/app
COPY package.json .
RUN npm i
COPY . .
Folder structure:
- frontend
- backend
- docker-compose.yml
(Listings of the frontend folder and its src directory are omitted here.)
When I change files inside src, the changes are not reflected inside the Docker container.
How can we fix this?
Here is the answer:
If you are running on Windows, please read this: Create React App has some issues detecting when files get changed on Windows-based machines. To fix this, please do the following:
In the root project directory, create a file called .env
Add the following text to the file and save it: CHOKIDAR_USEPOLLING=true
That's all!
Don't use the same directory name for different services. Instead of /usr/app for both, change it to something like /client/app for the client and /server/app for the backend, and then it all works. Also set CHOKIDAR_USEPOLLING=true under environment:, use FROM node:16.5.0-alpine, and you can keep stdin_open: true. For example, see the sketch below.
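A minimal sketch of how those suggestions could be applied to the Compose file above (the /client/app and /server/app paths are assumptions and must match the WORKDIR set in each service's Dockerfile):
services:
  frontend:
    build:
      context: ./frontend
    command: npm start
    environment:
      - CHOKIDAR_USEPOLLING=true     # polling-based file watching for Create React App
    volumes:
      - ./frontend:/client/app       # must match WORKDIR /client/app in frontend/Dockerfile
    stdin_open: true
    tty: true
  backend:
    build:
      context: ./backend
    command: npm start
    volumes:
      - ./backend/src:/server/app/src   # must match WORKDIR /server/app in backend/Dockerfile
With distinct mount targets per service the bind mounts no longer overlap, and polling lets the dev server pick up changes made on the host.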
A related question:
Read all the posts around. I tried with a lower "react-scripts" version (mine is 5.0.1), used CHOKIDAR_USEPOLLING: 'true', basically everything on the first two pages of Google.
Hot reloading still doesn't work. My docker-compose.yaml:
version: '3.3'
services:
  database:
    container_name: mysql
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_ROOT_PASSWORD: toma123
      MYSQL_DATABASE: api
      MYSQL_USER: toma
      MYSQL_PASSWORD: toma123
    ports:
      - '4306:3306'
    volumes:
      - mysql-data:/var/lib/mysql
  php:
    container_name: php
    build:
      context: ./php
    ports:
      - '9000:9000'
    volumes:
      - ./../api:/var/www/api
    depends_on:
      - database
  nginx:
    container_name: nginx
    image: nginx:stable-alpine
    ports:
      - '8080:80'
    volumes:
      - ./../api:/var/www/api
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - php
      - database
  react:
    container_name: react
    build:
      context: ./../frontend
    ports:
      - '3001:3000'
    volumes:
      - node_modules:/home/app/node_modules
volumes:
  mysql-data:
    driver: local
  node_modules:
    driver: local
And my React Dockerfile:
FROM node
RUN mkdir -p /home/app
WORKDIR /home/app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["npm", "start"]
Even if a lot of your application is in Docker, it doesn't mean you need to use Docker exclusively. Since one of Docker's primary goals is to prevent containers from accessing host files, it can be tricky to convince it to emulate a normal host live-development environment.
Install Node on your host. It's likely you have this anyways, or you can trivially apt-get install or brew install it.
Start everything except the front-end application you're developing using Docker; then start your application, on the host, in the normal way.
docker-compose up -d nginx
npm run dev
You may need to make a couple of configuration changes for this to work. For example, in this development environment, the database address will be localhost:4306, but when deployed it will be database:3306, and you'll need to do things like configure Webpack to proxy backend requests to http://localhost:8080. You might set environment variables in your docker-compose.yml for this, and in your code have them default to the things they are in the non-Docker development environment.
const dbHost = process.env.DB_HOST || 'localhost';
const dbPort = process.env.DB_PORT || 4306;
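Those defaults cover the host-based development run; for the in-Docker deployment, here is a hedged sketch of how the same variables might be supplied via docker-compose.yml (DB_HOST and DB_PORT are the hypothetical names from the snippet above; database and 3306 are the service name and in-network port from the Compose file in the question):
services:
  php:
    environment:
      DB_HOST: database   # MySQL service name on the Compose network
      DB_PORT: "3306"     # the container port, not the published 4306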
In your Compose setup, do not mount volumes: over your application code or libraries. Once you get to the point of doing final integration testing on this code, build and run the actual image you're going to deploy. So that section of the docker-compose.yml might look like:
version: '3.8'
services:
  react:
    build: ../frontend
    ports:
      - '3001:3000'
    # no volumes:
    # container_name: is also unnecessary
I am trying to dockerize a React app with a Postgres database.
I am new to Docker, so I followed tutorials online to come up with the Dockerfile and docker-compose.yml shown below.
Dockerfile
# pull the official base image
FROM node:13.12.0-alpine
# set the working directory
WORKDIR /app
EXPOSE 1338
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install application dependencies
COPY package.json ./
COPY package-lock.json ./
RUN npm i
# add app
COPY . ./
# start app
CMD ["npm", "start"]
docker-compose.yml
version: '3.7'
services:
  sample:
    container_name: sample
    build:
      context: .
      dockerfile: ./Dockerfile
    volumes:
      - '.:/app'
      - '/app/node_modules'
    ports:
      - 1338:1338
    environment:
      - CHOKIDAR_USEPOLLING=true
      - ASPNETCORE_URLS=https://+:1338
      - ASPNETCORE_HTTPS_PORT=1338
    depends_on:
      - db
  db:
    container_name: db
    image: postgres:14-alpine
    restart: always
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: ###
      POSTGRES_USER: ###
      POSTGRES_PASSWORD: ###
      # These values are hidden for privacy, but I am 100% sure I entered them correctly.
    volumes:
      - ./db-data/:/var/lib/postgresql/data/
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
volumes:
  pgdata1:
So what happened is: when I ran docker-compose up, the db part seemed to have no issue, since it logged "database system is ready to accept connections". However, the "sample" part ended up with an error:
Server wasn't able to start properly.
error Error: connect ECONNREFUSED 127.0.0.1:5432
at TCPConnectWrap.afterConnect [as oncomplete]
This does not make much sense to me, since the database is already up, so there should not be any connection issue at all.
Feel free to share your view; any idea would be appreciated. Thank you.
I'm building a Django/React app using docker-compose, and I'd like it to reload my apps when a change is made. So far I've tried adding CHOKIDAR_USEPOLLING and adding npm-watch to my package.json, but it doesn't seem to detect changes to the host files.
Ideally I don't want to have to run docker-compose up --build every time I make a change, since it makes development tedious.
Edit: I should mention that both apps reload as expected when run outside of Docker (npm start (CRA default) and python manage.py runserver).
Changes are detected inside the container, but the React app will not rebuild.
I'm also using Windows 10.
Is there something wrong with my files or something else I should be doing here?
docker-compose.yml
version: "3.9"
services:
db:
container_name: db
image: postgres
volumes:
- ./data/db:/var/lib/postgresql/data
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
backend:
container_name: backend
build: ./backend
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/core
ports:
- "8000:8000"
depends_on:
- db
frontend:
container_name: frontend
build: ./frontend
command: npm start
volumes:
- './frontend:/app/'
- '/frontend/node_modules'
ports:
- "3000:3000"
environment:
- CHOKIDAR_USEPOLLING=true
depends_on:
- backend
# Enable interactive terminal (crucial for react container to work)
stdin_open: true
tty: true
backend Dockerfile
FROM python:3
ENV PYTHONUNBUFFERED=1
WORKDIR /code/
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
frontend Dockerfile
FROM node:16
WORKDIR /app/
COPY package*.json /app/
RUN npm install
COPY . /app/
EXPOSE 3000
CMD ["npm", "start"]
Instead of copying, you should mount a volume directly onto the folder where the code runs in your Docker image. That way your code changes will be reflected in your app.
Example in docker-compose.yml:
volumes:
  - "local_source_destination:/server_source_destination"
In your frontend service in docker-compose.yml you have:
volumes:
  - '.:/frontend/app'
but in your Dockerfile you have:
COPY . /app/
So it seems like you are mixing up where to mount your volume. Make sure '.' is the root of your code folder, or change it accordingly.
Try something like:
volumes:
  - '.:/app'
As that seems to be the location your server wants your code to be.
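Applied to the Compose file from this question (where docker-compose.yml sits in the project root and the React code lives in ./frontend), a hedged sketch of the frontend service could be:
services:
  frontend:
    build: ./frontend
    command: npm start
    volumes:
      - ./frontend:/app      # bind-mount the source onto the WORKDIR from the Dockerfile
      - /app/node_modules    # keep the container's installed node_modules
    environment:
      - CHOKIDAR_USEPOLLING=true
    stdin_open: true
    tty: true
The second, anonymous volume shadows node_modules inside the bind mount, so the host's (possibly missing) node_modules does not override what npm install produced in the image.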
If your code is correctly mounted to the right destination it might be that you are not running your watch script from inside the docker container. Try running:
docker exec -itw source_destination_in_container your_container_name command_to_run_watch
I have a problem starting MongoDB with Docker. I have some code which I want to reuse for a different purpose. After I made a copy of that code everything worked just fine, but after renaming the service and the database and building everything again with
docker-compose -f docker-compose.dev.yml build
and running with
docker-compose -f docker-compose.dev.yml up
MongoDB won't start and I get the ECONNREFUSED error. I tried to remove all the services and containers with
docker-compose -f docker-compose.dev.yml rm
docker rm $(docker ps -a -q)
but nothing seems to help. I also tried to discard all the changes I made (to the point where it worked), but it still doesn't work. I am quite new to programming itself and have no idea what is happening. What am I missing?
Also including my config.js, .env and docker-compose.dev.yml files.
config.js
const config = {
  http: {
    port: parseInt(process.env.PORT) || 9000,
  },
  mongo: {
    host: process.env.MONGO_HOST || 'mongodb://localhost:27017',
    dbName: process.env.MONGO_DB_NAME || 'myresume',
  },
};
module.exports = config;
.env
NODE_ENV=development
MONGO_HOST=mongodb://db:27017
MONGO_DB_NAME=myresume
PORT=9001
docker-compose.dev.yml
version: "3"
services:
myresume-service:
build: .
container_name: myresume-service
command: npm run dev
ports:
- 9001:9001
links:
- mongo-db
depends_on:
- mongo-db
env_file:
- .env
volumes:
- ./src:/usr/myresume-service/src
mongo-db:
container_name: mongo-db
image: mongo
ports:
- 27017:27017
volumes:
- myresume-service-mongodata:/data/db
environment:
MONGO_INITDB_DATABASE: "myresume"
volumes:
myresume-service-mongodata:
I am not completely sure but I think that your service needs the env var
MONGO_HOST=mongodb://mongo-db:27017 instead of the one that you have. The two services are only visible to each other that way. I believe you also need a network to connect the two of them.
something like this:
version: "3"
networks:
my-network:
external: true
services:
myresume-service:
build: .
container_name: myresume-service
command: npm run dev
ports:
- 9001:9001
links:
- mongo-db
depends_on:
- mongo-db
env_file:
- .env
volumes:
- ./src:/usr/myresume-service/src
networks:
- my-network
mongo-db:
container_name: mongo-db
image: mongo
ports:
- 27017:27017
volumes:
- myresume-service-mongodata:/data/db
environment:
MONGO_INITDB_DATABASE: "myresume"
networks:
- my-network
volumes:
myresume-service-mongodata:
You probably need to create the network first, using the command:
docker network create my-network
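Alternatively (this is an assumption, not part of the original answer), you could drop external: true and let Compose create the network for you on docker-compose up, so no manual docker network create step is needed:
networks:
  my-network: {}   # Compose creates and manages this network itself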
Via a docker-compose.yml I compose an MSSQL server.
version: "3"
services:
db:
image: mcr.microsoft.com/mssql/server:2017-latest
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=SecretPassword
- MSSQL_PID=Express
- MSSQL_LCID=1031
- MSSQL_COLLATION=Latin1_General_CI_AS
- MSSQL_MEMORY_LIMIT_MB=8192
- MSSQL_AGENT_ENABLED=true
- TZ=Europe/Berlin
ports:
- 1433:1433
- 49200:1433
volumes:
- ./data:/var/opt/mssql/data
- ./backup:/var/opt/mssql/backup
restart: always
This works fine. But how can I expand this image with mssql-server-fts?
On GitHub I found this, but how can I combine a docker-compose.yml with a Dockerfile?
https://github.com/Microsoft/mssql-docker/blob/master/linux/preview/examples/mssql-agent-fts-ha-tools/Dockerfile
Here is the documentation on the docker-compose.yml file format.
To use a Dockerfile from docker-compose.yml, you need to add a build section. If the Dockerfile and docker-compose.yml are in the same directory, the build section of the docker-compose.yml would look like the following:
version: '3'
services:
  webapp:
    build:
      context: .
      dockerfile: Dockerfile
context is set to the root directory; this is relative to the location of the docker-compose.yml file.
dockerfile is set to the name of the Dockerfile, in this case Dockerfile.
I hope that this helps.
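Applied to the MSSQL service from the question, a hedged sketch could look like this; it assumes the linked mssql-agent-fts-ha-tools Dockerfile has been saved as Dockerfile next to the docker-compose.yml:
version: "3"
services:
  db:
    build:
      context: .
      dockerfile: Dockerfile   # the full-text-search Dockerfile from the linked repository
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=SecretPassword
      # ... the remaining environment variables as before
    ports:
      - 1433:1433
      - 49200:1433
    volumes:
      - ./data:/var/opt/mssql/data
      - ./backup:/var/opt/mssql/backup
    restart: always
Running docker-compose up --build then builds the image from that Dockerfile instead of pulling mcr.microsoft.com/mssql/server:2017-latest directly.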
Add the path to the Dockerfile you want to use to your docker-compose.yml.
For example:
version: "3"
services:
dockerFileExample:
build: ./Dockerfile // Or custom file name i.e. ./docker-file-frontend
Here is a link to the documentation: https://docs.docker.com/compose/reference/build/