I want to deploy a worker (FastAPI) container and a web (React) container to Heroku using docker-compose. Running locally, everything works, but on Heroku the frontend cannot reach the backend.
Dockerfile.worker
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7
WORKDIR /app
COPY app .
COPY requirements.txt ./requirements.txt
ENV IS_IN_DOCKER=true
RUN pip3 install -r requirements.txt
RUN pip3 install -r ./custom_model/custom_requirements.txt
CMD uvicorn main:app --host 0.0.0.0 --port 8800
Dockerfile.web
# pull official base image
FROM node:13.12.0-alpine
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package.json ./
COPY package-lock.json ./
RUN npm install --silent
RUN npm install react-scripts@3.4.1 -g --silent
# add app
COPY . ./
# start app
CMD ["npm", "start"]
docker-compose.yml (using the PORT env variable from Heroku for the frontend; locally I have a .env file with PORT=80)
version: '3.7'
services:
  ml-starter-frontend:
    depends_on:
      - ml-starter-backend
    container_name: ml-starter-frontend
    build:
      context: ./frontend
      dockerfile: Dockerfile.web
    ports:
      - '${PORT}:3000'
    restart: always
  ml-starter-backend:
    container_name: ml-starter-backend
    build:
      context: ./backend
      dockerfile: Dockerfile.worker
    restart: always
setupProxy.js in React src
const { createProxyMiddleware } = require('http-proxy-middleware');

module.exports = function (app) {
  app.use(
    '/api/*',
    createProxyMiddleware({
      target: 'http://ml-starter-backend:8800',
      changeOrigin: true,
    })
  );
};
Call from the React frontend to the backend:
export const getConfiguration = () => async (dispatch) => {
  const { data } = await axios.get('/api/configs');
  dispatch({ type: GET_CONFIGURATION, payload: data });
};
Running locally, the call works.
Deployment to Heroku:
heroku container:push --recursive --app {appName}
heroku container:release worker --app {appName}
heroku container:release web --app {appName}
On Heroku, the frontend cannot reach the backend.
Heroku Worker (FastAPI) log
2021-04-09T10:10:55.322684+00:00 app[worker.1]: INFO: Application startup complete.
2021-04-09T10:10:55.325926+00:00 app[worker.1]: INFO: Uvicorn running on http://0.0.0.0:8800 (Press CTRL+C to quit)
Heroku Web (React) Log
2021-04-09T10:11:21.572639+00:00 app[web.1]: [HPM] Proxy created: / -> http://ml-starter-backend:8800
.....
2021-04-09T10:25:37.404622+00:00 app[web.1]: [HPM] Error occurred while trying to proxy request /api/configs from {appName}.herokuapp.com to http://ml-starter-backend:8800 (ENOTFOUND) (https://nodejs.org/api/errors.html#errors_common_system_errors)
If possible I would like to avoid nginx, as I already spent two hours trying to make it work with proxies in the prod build. Does anybody have experience with Heroku and docker-compose and could give me a hint on how to fix this?
For everybody with a similar problem: I actually ended up with a workaround. I made a single Dockerfile containing the backend and frontend and deployed it to Heroku as a web app.
During my search I discovered that Heroku worker dynos are not really suitable for an HTTP scenario; if I understood it right, they are meant for background jobs such as reading messages from a queue: https://devcenter.heroku.com/articles/background-jobs-queueing. Worker dynos receive no HTTP routing, and each container runs in its own isolated dyno, so the compose service name ml-starter-backend never resolves on Heroku, which is where the ENOTFOUND comes from.
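A minimal sketch of what such a combined image could look like, assuming the frontend/ and backend/ layout from the question (the stage name and paths are illustrative, not the exact Dockerfile I used):

# stage 1: build the React frontend
FROM node:13.12.0-alpine AS frontend-build
WORKDIR /frontend
COPY frontend/package*.json ./
RUN npm install
COPY frontend/ ./
RUN npm run build

# stage 2: serve the API and the static build from one web dyno
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7
WORKDIR /app
COPY backend/requirements.txt ./requirements.txt
RUN pip3 install -r requirements.txt
COPY backend/app .
COPY --from=frontend-build /frontend/build ./static
# Heroku assigns the port at runtime, so bind to $PORT instead of a fixed 8800
CMD uvicorn main:app --host 0.0.0.0 --port $PORT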
Yup, I did something like what Loki34 did too. My approach was to run npm run build, then have FastAPI serve the build files. And like Loki34 says, this is a workaround, so it's probably not the best approach for deploying a frontend and backend in Docker containers. Anyway, those who are curious about how I did it can check out this app: https://github.com/yxlee245/app-iris-ml-react-fastapi
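For anyone who wants the gist without reading the repo: serving a production build from FastAPI boils down to mounting the build output with StaticFiles. A rough sketch (the route, directory name, and response body are assumptions for illustration, not taken from the repo):

from fastapi import FastAPI
from fastapi.staticfiles import StaticFiles

app = FastAPI()

# register API routes first so they take precedence over the static mount
@app.get("/api/configs")
async def get_configs():
    return {"configs": []}

# serve the files produced by `npm run build`; html=True makes / return index.html
app.mount("/", StaticFiles(directory="static", html=True), name="static")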
Related
I have been trying to deploy a React app on Cloud Run using Docker and cloudbuild.yaml, and even though the deployment succeeds, the Cloud Run URL shows a strange error.
I tried redeploying through Cloud Build again, and most of the time the build fails with a timeout error.
The Dockerfile looks like this:
# build environment
FROM node:14-alpine as react-build
WORKDIR /app
COPY . ./
RUN yarn
RUN yarn build
# server environment
FROM nginx:alpine
COPY nginx.conf /etc/nginx/conf.d/configfile.template
COPY --from=react-build /app/build /usr/share/nginx/html
ENV PORT 8080
ENV HOST 0.0.0.0
EXPOSE 8080
CMD sh -c "envsubst '\$PORT' < /etc/nginx/conf.d/configfile.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
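For reference, the configfile.template that envsubst fills in has to listen on $PORT; a minimal sketch of such a template (assumed here, since the question doesn't include the file):

server {
    listen $PORT;
    root /usr/share/nginx/html;
    index index.html;
    location / {
        # fall back to index.html so client-side routes work
        try_files $uri $uri/ /index.html;
    }
}

Quoting '\$PORT' in the envsubst call ensures that only $PORT is substituted, leaving nginx's own variables such as $uri intact.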
The cloudbuild.yaml file looks like this:
# cloudbuild.yaml
steps:
  # Build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/premi0103982-sustaina/cee-portal:latest', '.']
    # dir: 'app' # Working directory for build context
  # Push the container image to Container Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/premi0103982-sustaina/cee-portal:latest']
  # Deploy container image to Cloud Run
  - name: 'gcr.io/cloud-builders/gcloud'
    entrypoint: gcloud
    args: ['beta', 'run', 'deploy', 'cee-portal', '--image', 'gcr.io/$PROJECT_ID/cee-portal:latest',
           '--region', 'europe-west3', '--platform', 'managed', '--allow-unauthenticated']
Can you please tell me what I'm doing wrong?
I have a dummy React app deployed from Dockerfile.dev:
FROM node:alpine AS builder
WORKDIR '/app'
COPY package.json .
RUN npm install
COPY . .
RUN npm run build
FROM nginx
EXPOSE 80
COPY --from=builder /app/build /usr/share/nginx/html
which is deployed to Elastic Beanstalk right after it is pushed to GitHub, using Travis CI:
sudo: required
services:
  - docker
before_install:
  - docker build -t name/docker-react -f Dockerfile.dev .
script:
  - docker run -e CI=true name/docker-react npm run test
deploy:
  provider: elasticbeanstalk
  region: 'us-east-1'
  app: 'docker'
  env: 'Docker-env'
  bucket_name: 'elasticbeanstalk-us-east-1-709516719664'
  bucket_path: 'docker'
  on:
    branch: main
  access_key_id: $AWS_ACCESS_KEY
  secret_access_key: $AWS_SECRET_KEY
The app is successfully deploying to EB but displays 502 Bad Gateway as soon as I access it (by clicking the app link in AWS EB). The enhanced health overview reports:
Process default has been unhealthy for 18 hours (Target.FailedHealthChecks).
The Docker-env EC2 instance is running, and after allowing all incoming connections to it I can connect just fine.
I can build my app using Dockerfile.dev locally with no problems:
docker build -t name/docker-react -f Dockerfile.dev .
=> => naming to docker.io/name/docker-react
docker run -p 3000:3000 name/docker-react
AWS has a hard time with the . folder designation and prefers the long form ./
Try editing the COPY instruction to COPY package*.json ./
Also try removing the named builder. By default, stages are not named, and you refer to them by their integer number, starting with 0 for the first FROM instruction.
Your Dockerfile should look like:
FROM node:alpine
WORKDIR '/app'
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
FROM nginx
EXPOSE 80
COPY --from=0 /app/build /usr/share/nginx/html
You should have a docker-compose.yml; just ensure that you have the right port mapping inside it:
Example:
services:
  web-service:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "80:3000" # outside:inside container
Finally, your Travis CI configuration must be edited: secret_acces_key has ONE 'S'.
...
access_key_id: $AWS_ACCESS_KEY
secret_acces_key: $AWS_SECRET_KEY
Nginx's default port is 80, and on the Docker platform AWS reads docker-compose.yml to manage resources, so just do the right port mapping inside that file:
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "80:3000"
    volumes:
      - /app/node_modules
      - .:/app
Environment health has transitioned from Ok to Severe. ELB processes are not healthy on all instances. ELB health is failing or not available for all instances.
I am deploying a React app on AWS using the Docker platform. I am getting HEALTH-Severe issues when I deploy my app. I have also added custom TCP inbound rules on the EC2 instance (source: anywhere).
I am using the AWS free tier. The following is my Dockerfile:
FROM node:alpine as builder
WORKDIR '/app'
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
FROM nginx
EXPOSE 80
COPY --from=builder /app/build /usr/share/nginx/html
My .travis.yml file:
language: generic
sudo: required
services:
  - docker
before_install:
  - docker build -t username/docker-react -f Dockerfile.dev .
script:
  - docker run -e CI=true username/docker-react npm run test
deploy:
  provider: elasticbeanstalk
  region: us-east-2
  app: "docker-react"
  env: "DockerReact-env"
  bucket_name: "my bucket-name"
  bucket_path: "docker-react"
  on:
    branch: master
  access_key_id: $AWS_ACCESS_KEY
  secret_access_key: $AWS_SECRET_KEY
When I open my app I get a 502 Bad Gateway error.
I had the same problem. After reading some of the documentation I figured that docker-compose.yml is actually picked up before anything else. Deleting my docker-compose.yml (which I was only using locally) solved the issue for me.
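This fits the symptoms above: the production image runs nginx, which listens on port 80 inside the container, so a leftover compose file mapping the host to container port 3000 leaves the load balancer's health checks with nothing to hit. If you do keep a compose file for EB, the mapping would need to target port 80; a sketch assuming the multi-stage Dockerfile above:

services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "80:80" # nginx serves the build on 80 inside the container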
I have 2 Docker containers: the front end is a React.js app running on ports 3000:3000, and the back end is a Flask API running on 5000:5000.
ISSUE:
I am having an issue with these containers wrapped together in docker-compose: the front is accessible via localhost:3000, as it would be when running outside a container, but it is unable to communicate with the back container. I receive net::ERR_EMPTY_RESPONSE in the browser when attempting to use any API component. How might I resolve this?
SETUP:
My directory for this docker-compose setup is as follows:
/project root
  - docker-compose.yml
  /react front
    - Dockerfile
    /app
  /flask back
    - Dockerfile
    /api
My docker-compose.yml is as follows:
version: "3.8"
services:
flask back:
build: ./flask back
command: python main.py run -h 0.0.0.0
volumes:
- ./flask back/:/usr/src/app/
ports:
- 5000:5000
env_file:
- ./flask back/.env
react front:
build: ./react front
volumes:
- ./react front/app:/usr/src/app
- usr/src/app/node_modules
ports:
- 3000:3000
links:
- flask back
The front Dockerfile:
# pull official base image
FROM node:13.12.0-alpine
# set working directory
WORKDIR /usr/src/app
# add `/react front/node_modules/.bin` to $PATH
ENV PATH /react front/node_modules/.bin:$PATH
# install app dependencies
ADD package.json /usr/src/app/package.json
RUN npm install
RUN npm install react-scripts@3.4.1 -g --silent
# start app
CMD ["npm", "start"]
The back Dockerfile:
FROM python:alpine3.7
WORKDIR /usr/src/app
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN pip install --upgrade pip
COPY ./requirements.txt /usr/src/app/requirements.txt
RUN pip install -r requirements.txt
COPY . /usr/src/app/
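For context, the command python main.py run -h 0.0.0.0 in the compose file suggests main.py exposes the Flask CLI, and the detail that matters inside a container is binding to 0.0.0.0 rather than 127.0.0.1 so the published port is reachable. A minimal sketch of such an entry point (assumed; the question doesn't show main.py):

from flask import Flask
from flask.cli import FlaskGroup

def create_app():
    app = Flask(__name__)

    @app.route("/api/ping")
    def ping():
        return {"ping": "pong"}

    return app

# exposes `run`, `shell`, etc., so `python main.py run -h 0.0.0.0` works
cli = FlaskGroup(create_app=create_app)

if __name__ == "__main__":
    cli()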
TROUBLESHOOTING SO FAR:
So far I have consulted the following threads on SO:
Flask and React Docker containers not communicating via Docker-Compose - where the package.json needed a proxy addition (sketched after this list).
ERR_EMPTY_RESPONSE from docker container - where IP addresses needed to be rewritten to 0.0.0.0 (this appears to be an issue unique to Go, as I never used this form of port and IP configuration in my project).
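The proxy addition from the first thread is a one-line entry in the React app's package.json, pointing CRA's dev-server proxy at the backend's compose hostname. Note that Compose service names may not contain spaces, so assuming the backend service is actually named flask-back, it would look something like:

{
  "proxy": "http://flask-back:5000"
}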
Neither of these very similar issues has resolved mine. I am also able to ping the back-end container from the front-end and vice versa. Running the React container while running the Flask API outside of its container also works as expected/intended. If there is any other information anyone would like, I would be happy to provide it.
Thank you for the time and patience.
The issue:
When running Skaffold and updating watched files, I see the file sync update occur and nodemon restart the server, but refreshing the page doesn't show the change. It's not until I stop Skaffold entirely and restart it that I see the change.
Syncing 1 files for test/dev-client:e9c0a112af09abedcb441j4asdfasfd1cf80f2a9bc80342fd4123f01f32e234cfc18
Watching for changes every 1s...
[client-deployment-656asdf881-m643v client] [nodemon] restarting due to changes...
[client-deployment-656asdf881-m643v client] [nodemon] starting `node bin/server.js`
The setup:
I have a simple microservices application. It has a server side (Flask/Python) and a client side (React), with Express handling the dev server. I have nodemon on with the legacy watch flag set to true (for Chokidar polling). In development I'm using Kubernetes via Docker for Mac.
Code:
I'm happy to post my code to assist. Just let me know which files are most needed.
Here's some starters:
skaffold.yaml:
apiVersion: skaffold/v1beta7
kind: Config
build:
  local:
    push: false
  artifacts:
    - image: test/dev-client
      docker:
        dockerfile: Dockerfile.dev
      context: ./client
      sync:
        '**/*.css': .
        '**/*.scss': .
        '**/*.js': .
    - image: test/dev-server
      docker:
        dockerfile: Dockerfile.dev
      context: ./server
      sync:
        '**/*.py': .
deploy:
  kubectl:
    manifests:
      - k8s-test/client-ip-service.yaml
      - k8s-test/client-deployment.yaml
      - k8s-test/ingress-service.yaml
      - k8s-test/server-cluster-ip-service.yaml
      - k8s-test/server-deployment.yaml
The relevant part from package.json:
"start": "nodemon -L bin/server.js",
Dockerfile.dev (Client side):
# base image
FROM node:10.8.0-alpine
# setting the working directory
# may have to run this depending on environment
# RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# add '/usr/src/app/node_modules/.bin' to $PATH
ENV PATH /usr/src/app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /usr/src/app/package.json
RUN npm install
# copy over everything else
COPY . .
# start the app.
CMD ["npm", "run", "start"]
It turns out I was using the wrong pattern for my file syncs. **/*.js doesn't sync the directory properly.
After changing
sync:
  '**/*.css': .
  '**/*.scss': .
  '**/*.js': .
to
sync:
  '***/*.css': .
  '***/*.scss': .
  '***/*.js': .
It immediately began working.
Update:
On the latest versions of Skaffold this pattern no longer works, as Skaffold abandoned flattening by default. You can now use **/* patterns and get the same results.
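On those newer releases the sync rules are written as explicit manual src/dest pairs; a rough equivalent of the client artifact's sync section (the exact apiVersion depends on your Skaffold release):

sync:
  manual:
    - src: '**/*.css'
      dest: .
    - src: '**/*.scss'
      dest: .
    - src: '**/*.js'
      dest: .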