Docker is not saving Django media files into the project 'media' directory in production

App Description
I have an app with a Django/Gunicorn back end and a React/Nginx front end, all containerized and hosted on an AWS EC2 instance.
Problem
In the development environment, media files are saved permanently in the 'media' directory. In production, though, those files are only saved inside the currently running Docker container. As a result, the files are removed whenever I rebuild or stop the container for a new code push.
Expectation
I want to store the files in the 'media' folder permanently.
Important code
settings.py
ENV_PATH = Path(__file__).resolve().parent.parent
STATIC_ROOT = BASE_DIR / 'django_static'
STATIC_URL = '/django_static/'
MEDIA_ROOT = BASE_DIR / 'media/'
MEDIA_URL = '/media/'
docker-compose-production.yml
version: "3.3"
services:
  db:
    image: postgres
    restart: always # Prevent postgres from stopping the container
    volumes:
      - ./data/db:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    ports:
      - 5432:5432
  nginx:
    restart: unless-stopped
    build:
      context: .
      dockerfile: ./docker/nginx/Dockerfile
    ports:
      - 80:80
      - 443:443
    volumes:
      - static_volume:/code/backend/server/django_static
      - ./docker/nginx/production:/etc/nginx/conf.d
      - ./docker/nginx/certbot/conf:/etc/letsencrypt
      - ./docker/nginx/certbot/www:/var/www/certbot
    depends_on:
      - backend
  # Volume for certificate renewal
  certbot:
    image: certbot/certbot
    restart: unless-stopped
    volumes:
      - ./docker/nginx/certbot/conf:/etc/letsencrypt
      - ./docker/nginx/certbot/www:/var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
  backend:
    restart: unless-stopped
    build:
      context: .
      dockerfile: ./docker/backend/Dockerfile
    entrypoint: /code/docker/backend/wsgi-entrypoint.sh
    volumes:
      - .:/code
      - static_volume:/code/backend/server/django_static
    expose:
      - 8000
    depends_on:
      - db
volumes:
  static_volume: { }
  pgdata: { }

I finally figured out the issue: I forgot to add .:/code to my nginx volumes config in my docker-compose file. Thanks to this answer.
Updated nginx volumes config:
volumes:
  - .:/code
  - static_volume:/code/backend/server/django_static
  - ./docker/nginx/production:/etc/nginx/conf.d
  - ./docker/nginx/certbot/conf:/etc/letsencrypt
  - ./docker/nginx/certbot/www:/var/www/certbot
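An alternative to mounting the whole project directory is a dedicated named volume just for media, shared between the backend and nginx services. This is not part of the original answer, just a common pattern; the container paths below mirror the question's static_volume layout and are assumptions:

```yaml
services:
  backend:
    volumes:
      - media_volume:/code/backend/server/media   # Django writes uploads here

  nginx:
    volumes:
      - media_volume:/code/backend/server/media   # nginx serves /media/ from here

volumes:
  media_volume: { }   # named volume: survives container rebuilds and stops
```

A named volume lives outside any one container's filesystem, so uploaded files persist across rebuilds just as the .:/code bind mount does.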

Related

Connection between two docker containers in nextjs in getStaticProps function

I have two separate docker-compose files: one runs Next.js behind an nginx web server, and another runs Laravel behind another nginx:
services:
  frontendx:
    container_name: next_appx
    build:
      context: ./frontend
      dockerfile: Dockerfile
    restart: unless-stopped
    volumes:
      - ./frontend:/var/www/html/frontend
    networks:
      - app
  nginxy:
    container_name: nginxy
    image: nginx:1.19-alpine
    restart: unless-stopped
    ports:
      - '8080:80'
    volumes:
      - ./frontend:/var/www/html/frontend
      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - frontendx
    networks:
      - app
and:
services:
  backendx:
    container_name: laravelx
    build:
      context: .
      dockerfile: Dockerfile
    restart: unless-stopped
    ports:
      - '8000:8000'
    volumes:
      - ./:/var/www
      - enlive-vendor:/var/www/vendor
      - .//docker-xdebug.ini:/usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini
    depends_on:
      - dbx
    networks:
      - appx
  webserverx:
    image: nginx:alpine
    container_name: webserverx
    restart: unless-stopped
    tty: true
    ports:
      - "8090:80"
      - "443:443"
    volumes:
      - ./:/var/www
      - ./nginx/conf.d/:/etc/nginx/conf.d/
    networks:
      - appx
I can connect to the backend container through axios with an address like http://localhost:8090/api/my/api/address,
but when I try to get data through getStaticProps I get an ECONNREFUSED connection error:
const res = await fetch(`http://localhost:8090/api/address`)
I tried replacing localhost with the container IP address, e.g. 172.20.0.2,
but then I get a 504 Gateway Timeout error.
That's expected.
With axios, you're calling from the browser, which makes the request from your host machine's network.
But getStaticProps is a server-side function: it runs inside your Next.js container, so it must be able to reach your backend over your app's Docker network.
In your current setup, the frontend and backend apps are in different, isolated networks, so you can't connect them like this. But if you put all your services in the same network, say your app network (instead of appx), you can simply use Docker's DNS:
const res = await fetch(`http://webserverx/api/address`)
Docker knows how to resolve webserverx to your webserverx container.
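One way to put services from two separate compose files on the same network is to declare the same pre-created network in both files; a sketch, reusing the question's service names (the external network name `app` is an assumption):

```yaml
# Frontend stack's compose file
services:
  frontendx:
    build:
      context: ./frontend
    networks:
      - app

# The backend stack's compose file declares the same network for
# webserverx, so `http://webserverx/...` resolves from the Next.js
# container via Docker DNS.

networks:
  app:
    external: true   # created once beforehand: docker network create app
```

With `external: true`, both compose projects attach to the one shared network instead of each creating its own isolated one.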

My dockerized project's expressjs server does not (randomly) send response to the client

I would be glad if somebody could help me with this issue.
My Express server is randomly unable to send responses to my React client app.
Here are the morgan logs:
GET /api/comments?withUsers=true - - ms - -
GET /api/categories - - ms - -
POST /api/posts - - ms - -
GET /api/posts - - ms - -
Both my server and client apps run in separate Docker containers.
Here is my docker-compose file:
version: '3'
services:
  blog:
    container_name: blog
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - postgres
    environment:
      NODE_ENV: development
      PORT: 4000
    ports:
      - '4000:4000'
    volumes:
      - .:/usr/src/app
  client:
    stdin_open: true
    environment:
      - CHOKIDAR_USEPOLLING=true
    build:
      dockerfile: Dockerfile
      context: ./views
    ports:
      - '3000:3000'
    volumes:
      - ./views:/usr/src/app/views
  postgres:
    container_name: postgresql
    image: postgres:latest
    ports:
      - '5432:5432'
    volumes:
      - db-data:/var/lib/postgresql/data
      - ./sql_tables/tables.sql:/docker-entrypoint-initdb.d/dbinit.sql
    restart: always
    environment:
      POSTGRES_USER: db_user_is_her
      POSTGRES_PASSWORD: db_password_is_her
      POSTGRES_DB: blog
  pgadmin:
    container_name: pgadmin
    image: dpage/pgadmin4:latest
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: pgadmin_user_email_is_her
      PGADMIN_DEFAULT_PASSWORD: pgadmin_password_is_her
      PGADMIN_LISTEN_PORT: 80
    ports:
      - '8080:80'
    volumes:
      - pgadmin-data:/var/lib/pgadmin
    depends_on:
      - postgres
volumes:
  db-data:
  pgadmin-data:
  app:
Thank you for your help

400 Bad Request The plain HTTP request was sent to HTTPS port, while Deploying Django to AWS with Docker and Let's Encrypt

I am following the "Django on Docker" series by testdriven.io (https://testdriven.io/blog/django-docker-https-aws/) to deploy a project to a new domain. I am using Django as the backend, but when I try to access the backend URL via the port shown by the docker ps command, i.e.
http://0.0.0.0:443/
I get the error from the title ("400 Bad Request: The plain HTTP request was sent to HTTPS port").
This is the docker-compose file I am using
version: '3.7'
services:
  web:
    build:
      context: ./myprojectname
      dockerfile: Dockerfile.staging
    image: 789497322711.dkr.ecr.us-east-3.amazonaws.com/myprojectname-staging:web
    command: gunicorn myprojectname.wsgi:application --bind 0.0.0.0:8005
    volumes:
      - static_volume:/home/myprojectname_staging/web/static
      - media_volume:/home/myprojectname_staging/web/media
    expose:
      - 8000
    env_file:
      - ./.env.staging
  frontendimage:
    container_name: frontendimage
    image: 789497322711.dkr.ecr.us-east-3.amazonaws.com/myprojectname-staging:frontendimage
    stdin_open: true
    build:
      context: .
      dockerfile: frontend/Dockerfile.staging
    # volumes:
    #   - type: bind
    #     source: ./frontend
    #     target: /usr/src/frontend
    #   - '.:/usr/src/frontend'
    #   - '/usr/src/frontend/node_modules'
    ports:
      - '1337:30'
    environment:
      - CHOKIDAR_USEPOLLING=true
    depends_on:
      - web
  nginx-proxy:
    container_name: nginx-proxy
    build: ./myprojectname/nginx
    image: 789497322711.dkr.ecr.us-east-3.amazonaws.com/myprojectname-staging:nginx-proxy
    restart: always
    ports:
      - 443:443
      - 80:80
    volumes:
      - static_volume:/home/myprojectname_staging/web/staticfiles
      - media_volume:/home/myprojectname_staging/web/mediafiles
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - vhost:/etc/nginx/vhost.d
      - /var/run/docker.sock:/tmp/docker.sock:ro
    depends_on:
      - web
  nginx-proxy-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    env_file:
      - .env.staging.proxy-companion
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - vhost:/etc/nginx/vhost.d
    depends_on:
      - nginx-proxy
volumes:
  static_volume:
  media_volume:
  certs:
  html:
  vhost:
My Nginx directory
└── nginx
├── Dockerfile
├── custom.conf
└── vhost.d
└── default
Dockerfile
FROM jwilder/nginx-proxy
COPY vhost.d/default /etc/nginx/vhost.d/default
COPY custom.conf /etc/nginx/conf.d/custom.conf
custom.conf
client_max_body_size 10M;
default
server {
    listen 80;
    listen 443 default ssl;
}
location /static/ {
    alias /home/myprojectname_staging/web/static/;
    add_header Access-Control-Allow-Origin *;
}
location /media/ {
    alias /home/myprojectname_staging/web/media/;
    add_header Access-Control-Allow-Origin *;
}
What should be my next course of action?
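For context on how this stack routes traffic: jwilder/nginx-proxy chooses a backend based on per-container environment variables, and the letsencrypt companion issues certificates from the same variables. In the testdriven.io tutorial these live in .env.staging; a sketch with placeholder domain values (the hostnames below are assumptions, not from the question):

```yaml
services:
  web:
    environment:
      # nginx-proxy routes requests whose Host header matches VIRTUAL_HOST
      - VIRTUAL_HOST=staging.example.com
      # port the proxied container exposes internally
      - VIRTUAL_PORT=8000
      # the letsencrypt companion issues/renews certs for this host
      - LETSENCRYPT_HOST=staging.example.com
```

Note that hitting https://0.0.0.0:443/ with a plain http:// URL is exactly what produces "the plain HTTP request was sent to HTTPS port"; the proxy expects TLS on 443 and the configured domain in the Host header.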

Deploying with docker-compose. Frontend is not reaching backend

So I'm running a web app which consists of 3 services with docker-compose:
A MongoDB database container.
A Node.js backend.
An nginx container serving a React app from a static build folder.
Locally it runs fine and I'm very happy, but when trying to deploy to a VPS I'm facing an issue.
I've set the VPS's nginx to reverse proxy to port 8000, which serves the React app. That works as expected, but I cannot send requests to the backend: when I'm logged into the VPS I can curl it and it responds, but when the web app sends requests, they hang.
My docker-compose:
version: '3.7'
services:
  server:
    build:
      context: ./server
      dockerfile: Dockerfile
    image: server
    container_name: node-server
    command: /usr/src/app/node_modules/.bin/nodemon server.js
    depends_on:
      - mongo
    env_file: ./server/.env
    ports:
      - '8080:4000'
    environment:
      - NODE_ENV=production
    networks:
      - app-network
  mongo:
    image: mongo:4.2.7-bionic
    container_name: database
    hostname: mongo
    environment:
      - MONGO_INITDB_ROOT_USERNAME=...
      - MONGO_INITDB_ROOT_PASSWORD=...
      - MONGO_INITDB_DATABASE=admin
    restart: always
    ports:
      - 27017:27017
    volumes:
      - ./mongo/init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro
    networks:
      - app-network
  client:
    build:
      context: ./client
      dockerfile: prod.Dockerfile
    image: client-build
    container_name: react-client-build
    env_file: ./client/.env
    depends_on:
      - server
    ports:
      - '8000:80'
    networks:
      - app-network
networks:
  app-network:
    driver: bridge
volumes:
  data-volume:
  node_modules:
  web-root:
    driver: local

MongoNetwork ECONNREFUSED when renaming service and database

I have a problem starting MongoDB with Docker. I had some code which I wanted to reuse for a different purpose. After I made a copy of that code, everything worked just fine, but after renaming the service and database and building everything again with
docker-compose -f docker-compose.dev.yml build
and running it with
docker-compose -f docker-compose.dev.yml up
mongodb won't start and I get the ECONNREFUSED error. I tried to remove all the services and containers with
docker-compose -f docker-compose.dev.yml rm
docker rm $(docker ps -a -q)
but nothing seems to help. I also tried discarding all the changes I made (back to the point where it worked), but it still doesn't work. I am quite new to programming and have no idea what is happening. What am I missing?
I'm also including my config.js, .env and docker-compose.dev.yml files.
Config.js
const config = {
  http: {
    port: parseInt(process.env.PORT) || 9000,
  },
  mongo: {
    host: process.env.MONGO_HOST || 'mongodb://localhost:27017',
    dbName: process.env.MONGO_DB_NAME || 'myresume',
  },
};
module.exports = config;
.env
NODE_ENV=development
MONGO_HOST=mongodb://db:27017
MONGO_DB_NAME=myresume
PORT=9001
docker-compose.dev.yml
version: "3"
services:
  myresume-service:
    build: .
    container_name: myresume-service
    command: npm run dev
    ports:
      - 9001:9001
    links:
      - mongo-db
    depends_on:
      - mongo-db
    env_file:
      - .env
    volumes:
      - ./src:/usr/myresume-service/src
  mongo-db:
    container_name: mongo-db
    image: mongo
    ports:
      - 27017:27017
    volumes:
      - myresume-service-mongodata:/data/db
    environment:
      MONGO_INITDB_DATABASE: "myresume"
volumes:
  myresume-service-mongodata:
I am not completely sure, but I think your service needs the env var
MONGO_HOST=mongodb://mongo-db:27017 instead of the one you have: the two services are only visible to each other by their service names. I believe you also need a network to connect the two of them.
Something like this:
version: "3"
networks:
  my-network:
    external: true
services:
  myresume-service:
    build: .
    container_name: myresume-service
    command: npm run dev
    ports:
      - 9001:9001
    links:
      - mongo-db
    depends_on:
      - mongo-db
    env_file:
      - .env
    volumes:
      - ./src:/usr/myresume-service/src
    networks:
      - my-network
  mongo-db:
    container_name: mongo-db
    image: mongo
    ports:
      - 27017:27017
    volumes:
      - myresume-service-mongodata:/data/db
    environment:
      MONGO_INITDB_DATABASE: "myresume"
    networks:
      - my-network
volumes:
  myresume-service-mongodata:
You probably need to create the network first with:
docker network create my-network