Docker - run a Dockerfile before or after docker-compose, and how? - sql-server

Via a docker-compose.yml I compose an MSSQL server:
version: "3"
services:
  db:
    image: mcr.microsoft.com/mssql/server:2017-latest
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=SecretPassword
      - MSSQL_PID=Express
      - MSSQL_LCID=1031
      - MSSQL_COLLATION=Latin1_General_CI_AS
      - MSSQL_MEMORY_LIMIT_MB=8192
      - MSSQL_AGENT_ENABLED=true
      - TZ=Europe/Berlin
    ports:
      - 1433:1433
      - 49200:1433
    volumes:
      - ./data:/var/opt/mssql/data
      - ./backup:/var/opt/mssql/backup
    restart: always
This works fine.
But how can I extend this image with mssql-server-fts (full-text search)?
On GitHub I found this Dockerfile - but how can I combine a docker-compose.yml with a Dockerfile?
https://github.com/Microsoft/mssql-docker/blob/master/linux/preview/examples/mssql-agent-fts-ha-tools/Dockerfile

Here is the documentation on the docker-compose.yml file: docker-compose file
To use a Dockerfile with docker-compose.yml, you need to add a build section. If the Dockerfile and docker-compose.yml are in the same directory, the relevant section of the docker-compose.yml would look like the following:
version: '3'
services:
  webapp:
    build:
      context: .
      dockerfile: Dockerfile
context is set to the root directory, relative to the location of the docker-compose.yml file.
dockerfile is set to the name of the Dockerfile, in this case Dockerfile.
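Applied to the MSSQL question above - a minimal sketch, assuming the linked FTS Dockerfile is saved as ./Dockerfile next to the docker-compose.yml - the db service would swap its image: line for a build: section and keep the rest unchanged:

```yaml
version: "3"
services:
  db:
    # build the FTS-enabled image from the local Dockerfile
    # instead of pulling mcr.microsoft.com/mssql/server directly
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=SecretPassword
    ports:
      - 1433:1433
```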
I hope that this helps.

Add the path to the directory containing the Dockerfile you want to include in your docker-compose.
For example:
version: "3"
services:
  dockerFileExample:
    build: ./frontend  # directory that contains the Dockerfile
Here is a link to the documentation: https://docs.docker.com/compose/reference/build/
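Note that the short build: form takes a directory (the build context), not a file. If the file has a custom name (say docker-file-frontend, a hypothetical name for this sketch), you need the long form:

```yaml
services:
  dockerFileExample:
    build:
      context: .                        # directory sent to the build
      dockerfile: docker-file-frontend  # custom Dockerfile name
```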

Related

Docker react image doesn't hot reload

Read all the posts around. Tried with a lower "react-scripts" version (mine is 5.0.1), used CHOKIDAR_USEPOLLING: 'true', basically everything on the first two pages of Google.
Hot reloading still doesn't work. My docker-compose.yaml:
version: '3.3'
services:
  database:
    container_name: mysql
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_ROOT_PASSWORD: toma123
      MYSQL_DATABASE: api
      MYSQL_USER: toma
      MYSQL_PASSWORD: toma123
    ports:
      - '4306:3306'
    volumes:
      - mysql-data:/var/lib/mysql
  php:
    container_name: php
    build:
      context: ./php
    ports:
      - '9000:9000'
    volumes:
      - ./../api:/var/www/api
    depends_on:
      - database
  nginx:
    container_name: nginx
    image: nginx:stable-alpine
    ports:
      - '8080:80'
    volumes:
      - ./../api:/var/www/api
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - php
      - database
  react:
    container_name: react
    build:
      context: ./../frontend
    ports:
      - '3001:3000'
    volumes:
      - node_modules:/home/app/node_modules
volumes:
  mysql-data:
    driver: local
  node_modules:
    driver: local
And my react Dockerfile
FROM node
RUN mkdir -p /home/app
WORKDIR /home/app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["npm", "start"]
Even if a lot of your application runs in Docker, that doesn't mean you need to use Docker exclusively. Since one of Docker's primary goals is to prevent containers from accessing host files, it can be tricky to convince it to emulate a normal host live-development environment.
Install Node on your host. It's likely you have this anyway, or you can trivially apt-get install or brew install it.
Start everything except the front-end application you're developing using Docker; then start your application on the host in the normal way.
docker-compose up -d nginx
npm run dev
You may need to make a couple of configuration changes for this to work. For example, in this development environment the database address will be localhost:4306, but when deployed it will be database:3306, and you'll need to do things like configure Webpack to proxy backend requests to http://localhost:8080. You might set environment variables in your docker-compose.yml for this, and in your code have them default to the values used in the non-Docker development environment.
const dbHost = process.env.DB_HOST || 'localhost';
const dbPort = process.env.DB_PORT || 4306;
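The Compose-side counterpart might look like this (the DB_HOST/DB_PORT variable names are this example's assumption, matching the snippet above; shown for the php service, but the same applies to any service that talks to the database):

```yaml
services:
  php:
    environment:
      # inside the Compose network the database is reachable
      # by its service name on the container-side port
      DB_HOST: database
      DB_PORT: "3306"
```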
In your Compose setup, do not mount volumes: over your application code or libraries. Once you get to the point of doing final integration testing on this code, build and run the actual image you're going to deploy. So that section of the docker-compose.yml might look like:
version: '3.8'
services:
  react:
    build: ../frontend
    ports:
      - '3001:3000'
    # no volumes:
    # container_name: is also unnecessary

cannot dockerize react app: unable to connect to database

I am trying to dockerize a react app with postgres database.
I am new to docker, so I followed tutorials online to come up with Dockerfile and docker-compose as shown below.
Dockerfile
# pull the official base image
FROM node:13.12.0-alpine
# set working direction
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
EXPOSE 1338
ENV PATH /app/node_modules/.bin:$PATH
# install application dependencies
COPY package.json ./
COPY package-lock.json ./
RUN npm i
# add app
COPY . ./
# start app
CMD ["npm", "start"]
docker-compose.yml
version: '3.7'
services:
  sample:
    container_name: sample
    build:
      context: .
      dockerfile: ./Dockerfile
    volumes:
      - '.:/app'
      - '/app/node_modules'
    ports:
      - 1338:1338
    environment:
      - CHOKIDAR_USEPOLLING=true
      - ASPNETCORE_URLS=https://+:1338
      - ASPNETCORE_HTTPS_PORT=1338
    depends_on:
      - db
  db:
    container_name: db
    image: postgres:14-alpine
    restart: always
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: ###
      POSTGRES_USER: ###
      POSTGRES_PASSWORD: ###
      # I hid these values for privacy, but I am 100% sure I input them correctly.
    volumes:
      - ./db-data/:/var/lib/postgresql/data/
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
volumes:
  pgdata1:
So what happened is: when I tried to run docker-compose up, the db part seemed to have no issue, since it logged database system is ready to accept connections. However, the "sample" part ended up with an error:
Server wasn't able to start properly.
error Error: connect ECONNREFUSED 127.0.0.1:5432
at TCPConnectWrap.afterConnect [as oncomplete]
which does not make much sense to me, since the database is already up, so there should not be any issue with the connection at all.
Feel free to share your view; any idea would be appreciated. Thank you.
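(For reference: inside a Compose network each container has its own loopback interface, so 127.0.0.1 in the sample container is the sample container itself, not the db container; containers reach each other by service name. A minimal sketch; the dbConfig helper and the DB_HOST/DB_PORT variable names are this example's assumptions, not from the question:)

```javascript
// 127.0.0.1 inside "sample" is the sample container's own loopback,
// so Postgres must be addressed by its Compose service name, "db".
// dbConfig and the DB_HOST/DB_PORT names are hypothetical.
function dbConfig(env) {
  return {
    host: env.DB_HOST || 'db', // Compose service name, not 127.0.0.1
    port: Number(env.DB_PORT || 5432),
  };
}

console.log(dbConfig(process.env));
```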

How can I make my docker container update on changes to my source files?

I'm building a Django/React app using docker-compose, and I'd like it to reload my apps when a change is made. So far I've tried adding CHOKIDAR_USEPOLLING
and adding npm-watch to my package.json, but it doesn't seem to be able to detect changes in the host files.
Ideally I don't want to have to run docker-compose up --build every time I make a change, since that makes development tedious.
Edit: I should mention that both apps reload as expected when running outside of Docker (npm start (CRA default) and python manage.py runserver).
Changes are detected inside the container, but the React app will not rebuild.
I'm also using Windows 10.
Is there something wrong with my files or something else I should be doing here?
docker-compose.yml
version: "3.9"
services:
  db:
    container_name: db
    image: postgres
    volumes:
      - ./data/db:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  backend:
    container_name: backend
    build: ./backend
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/core
    ports:
      - "8000:8000"
    depends_on:
      - db
  frontend:
    container_name: frontend
    build: ./frontend
    command: npm start
    volumes:
      - './frontend:/app/'
      - '/frontend/node_modules'
    ports:
      - "3000:3000"
    environment:
      - CHOKIDAR_USEPOLLING=true
    depends_on:
      - backend
    # Enable interactive terminal (crucial for react container to work)
    stdin_open: true
    tty: true
backend Dockerfile
FROM python:3
ENV PYTHONUNBUFFERED=1
WORKDIR /code/
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
frontend Dockerfile
FROM node:16
WORKDIR /app/
COPY package*.json /app/
RUN npm install
COPY . /app/
EXPOSE 3000
CMD ["npm", "start"]
Instead of copying, you should mount a volume directly onto the folder where the code runs in your Docker image. That way your code changes will be reflected in your app.
Example in docker-compose.yml:
volumes:
  - "local_source_destination:/server_source_destination"
In the frontend section of your docker-compose.yml you have:
volumes:
  - '.:/frontend/app'
but in your Dockerfile you have
COPY . /app/
So it seems like you are mixing up where to mount your volume. Make sure '.' is the root of your code folder, or change it accordingly.
Try something like:
volumes:
  - '.:/app'
as that seems to be the location where your server expects your code.
If your code is correctly mounted to the right destination, it might be that you are not running your watch script from inside the Docker container. Try running:
docker exec -itw source_destination_in_container your_container_name command_to_run_watch
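Assuming the frontend Dockerfile's WORKDIR is /app (as in the Dockerfile shown in the question), the corrected service could be sketched as:

```yaml
services:
  frontend:
    build: ./frontend
    command: npm start
    volumes:
      - ./frontend:/app    # bind-mount host source over the image's code
      - /app/node_modules  # keep the container's installed node_modules
    environment:
      - CHOKIDAR_USEPOLLING=true
```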

React App doesn't refresh on changes using Docker-Compose

Consider the Docker compose
version: '3'
services:
  frontend:
    build:
      context: ./frontend
    container_name: frontend
    command: npm start
    stdin_open: true
    tty: true
    volumes:
      - ./frontend:/usr/app
    ports:
      - "3000:3000"
  backend:
    build:
      context: ./backend
    container_name: backend
    command: npm start
    environment:
      - PORT=3001
      - MONGO_URL=mongodb://api_mongo:27017
    volumes:
      - ./backend/src:/usr/app/src
    ports:
      - "3001:3001"
  api_mongo:
    image: mongo:latest
    container_name: api_mongo
    volumes:
      - mongodb_api:/data/db
    ports:
      - "27017:27017"
volumes:
  mongodb_api:
And the React Dockerfile :
FROM node:14.10.1-alpine3.12
WORKDIR /usr/app
COPY package.json .
RUN npm i
COPY . .
Folder Structure :
-frontend
-backend
-docker-compose.yml
(screenshots of the frontend and src folder contents omitted)
When I change files inside src it doesn't reflect on the Docker side.
How can we fix this?
Here is the answer:
If you are running on Windows, please read this: Create React App has some issues detecting when files get changed on Windows-based machines. To fix this, please do the following:
In the root project directory, create a file called .env
Add the following text to the file and save it: CHOKIDAR_USEPOLLING=true
That's all!
Don't use the same directory name for different services. Instead of /usr/app for both, use e.g. /client/app for the client and /server/app for the backend; then it all works. Also set environment: - CHOKIDAR_USEPOLLING=true, use FROM node:16.5.0-alpine, and you can set stdin_open: true.

Docker Compose with React and Nginx

I'm trying to use docker-compose for deployment of my React app, which uses an Express backend and a Postgres database. My idea is to have shared volumes from my docker-compose, then build from my Dockerfile into the volume so that Nginx will be able to serve the files. The problem is that it works when I build the project the first time, but if I change something in my React client and run "docker-compose up --build", it looks like everything is building as it should, yet the files served are still the same. Is the COPY command in my Dockerfile not overwriting the old files?
Dockerfile in my React Client Project
FROM node:13.12.0-alpine as build
WORKDIR /app
COPY package.json ./
COPY package-lock.json ./
RUN npm install
COPY . ./
RUN npm run build
FROM node:13.12.0-alpine
COPY --from=build /app/build /var/lib/frontend
docker-compose
version: "3.7"
services:
  callstat_backend:
    build: ./callstat-backend
    restart: always
    ports:
      - "3000:3000"
    env_file:
      - keys.env
    depends_on:
      - postgres
  callstat_frontend:
    build: ./callstat-client
    volumes:
      - frontend/:/var/lib/frontend
  postgres:
    image: postgres:11.2-alpine
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: callstat
  nginx:
    image: nginx
    volumes:
      - frontend:/usr/share/nginx/html
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "80:80"
    depends_on:
      - callstat_frontend
volumes:
  pgdata:
  frontend:
Maybe I'm taking a totally wrong approach here?
You can run the commands in the following order:
# stop the services
docker-compose stop
# remove the previously created docker resources
docker-compose rm
# bring up the services again
docker-compose up --build
This way your previous volume is removed and a new one is created with the updated changes.
NOTE: This is okay from a development perspective, but Docker volumes are really expected to persist between deployments. For artifacts like code changes, images should ideally be published as part of the build process. To get a little more insight into this topic you can refer to https://github.com/docker/compose/issues/2127