I have the following docker-compose file:
version: "3"
services:
scraper-api:
build: ./ATPScraper
volumes:
- ./ATPScraper:/usr/src/app
ports:
- "5000:80"
test-app:
build: ./test-app
volumes:
- "./test-app:/app"
- "/app/node_modules"
ports:
- "3001:3000"
environment:
- NODE_ENV=development
depends_on:
- scraper-api
which builds the following Dockerfiles:
scraper-api (a Python Flask application):
FROM python:3.7.3-alpine
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "./app.py"]
test-app (a test React application for the API):
# base image
FROM node:12.2.0-alpine
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:/app/src/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /app/package.json
RUN npm install --silent
RUN npm install react-scripts@3.0.1 -g --silent
RUN npm install axios -g
# start app
CMD ["npm", "start"]
Admittedly, I'm a newbie when it comes to Docker networking, but I am trying to get the React app to communicate with the scraper-api. For example, the scraper-api has the following endpoint: /api/top_10. I have tried various permutations of the following URL:
http://scraper-api:80/api/test_api. None of them have worked for me.
I've been scavenging the internet and I can't really find a solution.
The React application runs in the end user's browser, which has no idea this "Docker" thing exists and doesn't know about any of the Docker Compose networking setup. A browser app that happens to be served from Docker needs to be configured to use the host's DNS name or IP address, and the published port of the back-end service.
A common setup (Docker or otherwise) is to put both the browser app and the back-end application behind a reverse proxy. In that case you can use relative URLs without host names, like /api/..., and they will be interpreted as "the same host and port", which bypasses this problem entirely.
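As a minimal sketch of that reverse-proxy idea: the service names test-app and scraper-api and their internal ports come from the Compose file above (assuming the Flask app really listens on 80 inside the container, as the 5000:80 mapping suggests), and the config itself is an assumption, not a drop-in file; it would run as a third nginx service on the same Compose network:

server {
    listen 80;

    # API requests go to the Flask container, addressed by service name
    location /api/ {
        proxy_pass http://scraper-api:80;
    }

    # everything else goes to the container serving the React app
    location / {
        proxy_pass http://test-app:3000;
    }
}

With this in front of both services, the React code can simply call /api/top_10 and the proxy decides which container answers.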
As a side note: when no network is specified inside docker-compose.yml, a default network will be created for you, named [directory containing docker-compose.yml]_default. For example, if docker-compose.yml is in the app folder, the network will be named app_default.
Now, inside this network, containers are reachable by their service names, so the scraper-api host name should resolve to the right container.
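You can see that network and which containers are attached to it with standard commands (app_default is just the example name from above):

docker network ls
docker network inspect app_default

The inspect output lists every service container on the network along with its internal IP; those service names are what Compose's embedded DNS resolves to the IPs.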
It could be that you are using the wrong endpoint URL. In the question, you mentioned /api/top_10 as an endpoint, but the URL you tested was http://scraper-api:80/api/test_api, which is inconsistent.
Also, it could be that you confused the order of the ports in docker-compose.yml for the scraper-api service:
ports:
  - "5000:80"
5000 is the port exposed to the host where Docker is running; 80 is the internal app port. Normally, Flask apps listen on 5000, so I thought you might have meant to say:
ports:
  - "80:5000"
In which case, between containers, you have to use :5000 as the destination port in URLs, e.g. http://scraper-api:5000 (+ endpoint suffix, of course).
To check connectivity, you might want to shell into the client container and see if things are connecting (the image is Alpine-based, so use sh; bash isn't installed):
docker-compose exec test-app sh
wget http://scraper-api
wget http://scraper-api:5000
etc.
If you get a response, then you have connectivity, just need to figure out correct endpoint URL.
Related
I'm trying to dockerize my app. It has an API architecture, without using nginx. I'm using this Dockerfile for the Flask app:
FROM python:3.9.0
WORKDIR /ProyectoTitulo
ENV FLASK_APP=app.py
ENV FLASK_ENV=development
COPY ./requirements.txt .
RUN pip install -r requirements.txt
COPY . .
RUN python -m nltk.downloader all
CMD ["python", "app.py"]
This one is my React app Dockerfile:
FROM node:16-alpine
WORKDIR /app
COPY ./package.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
Finally, this is my docker-compose.yml file:
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
    image: python-docker
  client:
    build:
      context: .
      dockerfile: Dockerfile
    image: react-front
    ports:
      - "3000:3000"
I run docker-compose build and docker-compose up, but when I try to send an HTTP request to an endpoint I get ERR_CONNECTION. Do I need to add something to these files? Something to the compose file?
One thing, as @bluepuma77 mentioned: you need to publish your BE port. When that is done and you can locally connect to it, you are ready to check the second step.
As I already answered in an SO question similar to yours, I will quote my answer here, since it will probably be useful to you as well.
I am no expert on MERN (we mainly run Angular & .Net), but I have to warn you of one thing. We had an issue when setting this up in the beginning as well: it worked locally in containers but not on our deployment servers, because we forgot the basic thing about web applications.
Applications run in your browser, whereas if you deploy an application stack somewhere else, the rest of the services (APIs, DB and such) do not. So referencing your IP/DNS/localhost inside your application won't work, because there is nothing there. A container that hosts a web application only serves your browser the (client) files; the JS and the logic are then executed inside your browser, not the container.
I suspect this might be affecting your ability to connect to the backend.
To solve this you have two options.
Create an HTTP proxy as an additional service and have your FE call that proxy (set up a domain and routing), for instance Nginx, Traefik, etc. That proxy can then reference your backend by its service name, since it lives in the same environment as the API.
Expose the HTTP port directly from the container; then your FE can call remoteServerIP:exposedPort and connect directly to the container's interface. (NOTE: I do not recommend this for real use, only for testing direct connectivity without any proxy.)
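A small sketch of that second option on the front-end side (REACT_APP_API_URL and the /api/example endpoint are hypothetical, not from the question):

// src/api.js - base URL comes from a Create React App build-time env var
import axios from "axios";

const API_BASE = process.env.REACT_APP_API_URL || "http://localhost:5000";

export function getExample() {
  // the browser performs this request, not the container,
  // so API_BASE must be an address the *browser* can reach
  return axios.get(`${API_BASE}/api/example`);
}

Building the image with REACT_APP_API_URL=http://remoteServerIP:5000 then points the browser at the published port.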
Well, I think you need to expose the API port, too.
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
    image: python-docker
    ports:
      - "5000:5000" # EXPOSE API
  client:
    build:
      context: .
      dockerfile: Dockerfile
    image: react-front
    ports:
      - "3000:3000"
Environment health has transitioned from Ok to Severe. ELB processes are not healthy on all instances. ELB health is failing or not available for all instances.
I am deploying a react app in AWS using the docker platform. I am getting HEALTH-Severe issues when I deploy my app. I have also added custom TCP inbound rules in the EC2 instance (source-anywhere).
I am using free tier in AWS. The following is my Dockerfile.
FROM node:alpine as builder
WORKDIR '/app'
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
FROM nginx
EXPOSE 80
COPY --from=builder /app/build /usr/share/nginx/html
My .travis.yml file:
language: generic
sudo: required
services:
  - docker
before_install:
  - docker build -t username/docker-react -f Dockerfile.dev .
script:
  - docker run -e CI=true username/docker-react npm run test
deploy:
  provider: elasticbeanstalk
  region: us-east-2
  app: "docker-react"
  env: "DockerReact-env"
  bucket_name: "my bucket-name"
  bucket_path: "docker-react"
  on:
    branch: master
  access_key_id: $AWS_ACCESS_KEY
  secret_access_key: $AWS_SECRET_KEY
When I open my app I am getting 502 Bad Gateway error.
I had the same problem. After reading some of the documentation, I figured that maybe docker-compose.yml is actually picked up first, before anything else. Deleting my docker-compose.yml (which I was only using locally) solved the issue for me.
I have 2 docker containers, the front being a React.js app running on ports 3000:3000 and the back being a Flask API running on 5000:5000.
ISSUE:
I am having an issue with these containers wrapped together in docker-compose, where the front is accessible via localhost:3000 as it would be when running outside a container; however, it is unable to communicate with the back container. I receive a net::ERR_EMPTY_RESPONSE in the browser when attempting to use any API component. How might I be able to resolve this?
SETUP:
My directory for this docker-compose setup is as follows:
/project root
  - docker-compose.yml
  /react front
    - Dockerfile
    /app
  /flask back
    - Dockerfile
    /api
My docker-compose.yml is as follows:
version: "3.8"
services:
flask back:
build: ./flask back
command: python main.py run -h 0.0.0.0
volumes:
- ./flask back/:/usr/src/app/
ports:
- 5000:5000
env_file:
- ./flask back/.env
react front:
build: ./react front
volumes:
- ./react front/app:/usr/src/app
- usr/src/app/node_modules
ports:
- 3000:3000
links:
- flask back
The front Dockerfile:
# pull official base image
FROM node:13.12.0-alpine
# set working directory
WORKDIR /usr/src/app
# add `/react front/node_modules/.bin` to $PATH
ENV PATH /react front/node_modules/.bin:$PATH
# install app dependencies
ADD package.json /usr/src/app/package.json
RUN npm install
RUN npm install react-scripts#3.4.1 -g --silent
# start app
CMD ["npm", "start"]
The back Dockerfile:
FROM python:alpine3.7
WORKDIR /usr/src/app
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN pip install --upgrade pip
COPY ./requirements.txt /usr/src/app/requirements.txt
RUN pip install -r requirements.txt
COPY . /usr/src/app/
TROUBLESHOOTING SO FAR:
So far I have consulted the following threads on SO:
Flask and React Docker containers not communicating via Docker-Compose - where the package.json needed a proxy addition.
ERR_EMPTY_RESPONSE from docker container - where IP addresses needed to be rewritten to 0.0.0.0 (this appears to be an issue unique to Go, as I never used this form of port and IP configuration in my project).
Neither of these very similar issues has resolved mine. I am also able to ping the back-end container from the front-end and vice versa. Running the React container while running the Flask API outside of its container also works as expected/intended. If there is any other information anyone would like, I would be happy to provide it.
Thank you for the time and patience.
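For reference, the proxy addition mentioned in the first linked thread is a single field in the React app's package.json. A sketch, assuming the Flask service is renamed to flask-back (Compose service names may not contain spaces, so flask back as written would need renaming before it can be used in a URL):

{
  "proxy": "http://flask-back:5000"
}

The Create React App dev server then forwards requests for unknown paths (e.g. /api/...) to that address from inside the container, where the service name does resolve.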
I am trying to use Docker for a MERN project. I have created a Dockerfile in both client and server, and a docker-compose.yml in the root folder.
I executed docker-compose build, which completed without any error. Then I ran docker-compose up; node and mongodb run successfully, but the React container exits with code 0.
Dockerfile for client
# build environment
FROM node:12.18.2-alpine
RUN mkdir -p /opt/app/ps-client
WORKDIR /opt/app/ps-client
# install app dependencies
COPY package.json .
# COPY package-lock.json .
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "npm", "run", "start"]
docker-compose.yml
version: "3.8" # specify docker-compose version
# Define the services/ containers to be run
services:
back-end: # name of the first service
build: server # specify the directory of the Dockerfile
ports:
- "5000:5000" #specify ports mapping
links:
- database # link this service to the database service
database: # name of the third service
image: mongo # specify image to build container from
ports:
- "27017:27017" # specify port forwarding
front-end: # name of the second service
build: client # specify the directory of the Dockerfile
ports:
- "3000:3000" # specify port mapping
I tried docker-compose --verbose up for only the React service, and I also ran docker container inspect b9e429 on the exited container, but neither output pointed me to the cause.
Try adding the following options to the front-end service configuration in the docker compose file.
stdin_open: true
tty: true
This should be equivalent to running the container with the -it options.
You can read more about the issue here: https://github.com/facebook/create-react-app/issues/8688
Adding tty: true to your front-end service should suffice. The full context of the issue is described at https://github.com/facebook/create-react-app/issues/8688, as mentioned by Teddy Sterne.
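In context, the service definition would look something like this (only the last two lines are new; the rest is from the compose file above):

front-end:
  build: client
  ports:
    - "3000:3000"
  stdin_open: true
  tty: true

This keeps stdin open for the container, so the react-scripts dev server no longer exits immediately when it detects a non-interactive environment.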
So, I'm developing a basic Express backend for a React app.
The request is being made like this:
axios.get(`${serverLocation}/api/graph/32`).then(res => {
  this.setState({datag: res.data});
  for (var key in this.state) {
    data.push(this.state[key]);
  }
});
serverLocation looks like http://IP:PORT.
The API is correct as far as I can see, and on my development machine everything works: React makes successful requests to the server at the specified location, etc. The thing is, when I put this into 2 separate Docker containers via docker-compose.yml, it won't work.
This is my docker-compose.yml:
version: '2.0'
services:
  server:
    restart: always
    container_name: varno_domov_server
    build: .
    ports:
      - "8088:5000"
    links:
      - react
    networks:
      - varnodomovnetwork
  react:
    restart: always
    container_name: varno_domov_client
    build: client/
    ports:
      - "8080:3000"
    networks:
      - varnodomovnetwork
networks:
  varnodomovnetwork:
    driver: bridge
I also have custom Dockerfiles, the server looking like:
FROM node:10
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 5000
CMD [ "npm", "start" ]
And the client looking like:
FROM node:10
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "npm", "start" ]
If you've made it this far reading, thank you for taking the time. I am open to any suggestions regarding Docker here; the React part is not written by me. If any additional information is required, tell me in the comments. Isolation is making me very available :)
So, the thing was that React actually was submitting requests to the server. I am inexperienced with React, so I was looking for logs in the terminal, when they were actually available to look at in the browser console.
The problem was that my server was on a public IP and communicating via HTTP. This meant the browser blocked the content (Mixed Content: The page at was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint), making my graphs display no data. An easy fix is to just make the browser go through with unsafe content, although I am not about that life. So I did the following:
The key problem was that my server and client are 2 separate containers, therefore on separate ports. What I've done is edit my nginx configuration to proxy any request to my domain that looks like "https://www.example.com/api" to the port of the server container on the server machine.
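Sketched as an nginx location block (8088 is the server container's published port from the compose file above; the surrounding HTTPS server block with server_name and certificates is assumed to already exist):

# inside the existing server block for www.example.com
location /api/ {
    # relay API calls to the Express container over loopback
    proxy_pass http://localhost:8088;
}

Since the browser now only ever talks to https://www.example.com, there is no mixed content; nginx forwards /api/... requests to the container, keeping the original URI path.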
Hope this is of any help to someone :)