I'm developing a basic Express backend for a React app.
The request is made like this:
axios.get(`${serverLocation}/api/graph/32`).then(res => {
  // setState is asynchronous, so build the array from res.data directly
  // rather than reading this.state back immediately after calling setState
  const data = Object.values(res.data);
  this.setState({ datag: res.data, data });
});
serverLocation looks like http://IP:PORT.
The API path is correct as far as I can see, and on my development machine it works: React makes successful requests to the server at the specified location. The problem is that when I split this into two separate Docker containers via docker-compose.yml, it no longer works.
This is my docker-compose.yml:
version: '2.0'
services:
  server:
    restart: always
    container_name: varno_domov_server
    build: .
    ports:
      - "8088:5000"
    links:
      - react
    networks:
      - varnodomovnetwork
  react:
    restart: always
    container_name: varno_domov_client
    build: client/
    ports:
      - "8080:3000"
    networks:
      - varnodomovnetwork
networks:
  varnodomovnetwork:
    driver: bridge
I also have custom Dockerfiles, the server looking like:
FROM node:10
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 5000
CMD [ "npm", "start" ]
And the client looking like:
FROM node:10
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "npm", "start" ]
If you've made it this far, thank you for taking the time. I am open to any suggestions regarding Docker here; the React part was not written by me. If any additional information is required, let me know in the comments. Isolation is making me very available :)
So, the thing was that React actually was submitting requests to the server. Being inexperienced with React, I was looking for logs in the terminal/bash, when they were actually available in the browser console.
The problem was that my server was on a public IP and communicating over HTTP. The browser therefore blocked the content (Mixed Content: the page was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint), so my graphs displayed no data. An easy fix is to make the browser allow unsafe content, although I am not about that life. So I did the following:
The key problem was that my server and client are two separate containers, and therefore on separate ports. What I did was edit my nginx configuration to proxy any request to my domain that looks like "https://www.example.com/api" to the server container's published port on the host machine.
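For reference, a minimal sketch of what such an nginx proxy rule could look like. The domain, TLS details, and the 8088 published port are assumptions based on the compose file above, not the exact config used:

```nginx
server {
    listen 443 ssl;
    server_name www.example.com;

    # ... TLS certificates and the React static files are served here ...

    # forward /api requests to the server container's published port on this host
    location /api {
        proxy_pass http://127.0.0.1:8088;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Because the browser now talks only to https://www.example.com, the Mixed Content error disappears, and nginx handles the plain-HTTP hop to the container internally.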
Hope this is of any help to someone :)
I created a React+Flask application. When I ran it from the code with commands such as npm start (for React) and flask run (for Flask), it worked without any issues; the real problem occurred when I tried to convert it to Docker images.
With Docker, I initially tried running both Dockerfiles together with the compose file below.
docker-compose.yml
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile.api
    image: api-docker
    ports:
      - "5000:5000"
  client:
    build:
      context: .
      dockerfile: Dockerfile.client
    image: react-docker
    stdin_open: true
    ports:
      - "3000:3000"
This created both containers as expected, and run this way everything works perfectly.
But when I tried running the two images separately, the React image fails with a connection error (screenshot omitted).
Initially I had used http://localhost:5000, then because of the issue I tried http://127.0.0.1:5000, and also tried removing trailing slashes based on other posts, but no luck.
I suspect there must be something (possibly very silly) that I am missing; I have spent a day on this.
Any help or idea would be appreciated.
I'm trying to dockerize my app. It has an API architecture without using nginx. I'm using this Dockerfile for the Flask app:
FROM python:3.9.0
WORKDIR /ProyectoTitulo
ENV FLASK_APP=app.py
ENV FLASK_ENV=development
COPY ./requirements.txt .
RUN pip install -r requirements.txt
COPY . .
RUN python -m nltk.downloader all
CMD ["python", "app.py"]
This one is my react app dockerfile.
FROM node:16-alpine
WORKDIR /app
COPY ./package.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
Finally, this is my docker-compose.yml file:
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
    image: python-docker
  client:
    build:
      context: .
      dockerfile: Dockerfile
    image: react-front
    ports:
      - "3000:3000"
I use docker build and docker-compose up, but when I try to send an HTTP request to an endpoint it fails with an ERR_CONNECTION error. Do I need to add something to these files? Something to the compose file?
One thing, as @bluepuma77 mentioned: you need to publish your BE port. When that is done and you can connect to it locally, you are ready for the second step.
As I already answered in an SO question similar to yours, I will quote my answer here, since it will probably be useful to you as well.
I am no expert on MERN (we mainly run Angular & .Net), but I have to warn you of one thing. We had an issue when setting this up in the beginning as well: it worked locally in containers but not on our deployment servers, because we forgot a basic thing about web applications.
Applications run in your browser, whereas if you deploy an application stack somewhere else, the rest of the services (APIs, DB and such) do not. So referencing your IP/DNS/localhost inside your application won't work, because there is nothing there. A container that contains a WEB application only serves your browser the (client) files; the JS and the logic are then executed inside your browser, not the container.
I suspect this might be affecting your ability to connect to the backend.
To solve this you have two options.
Create an HTTP proxy as an additional service that your FE calls (set up a domain and routing), for instance nginx, Traefik, etc.; that proxy can then reference your backend by service name, since it lives in the same environment as the API.
Expose the HTTP port directly from the container; then your FE can call remoteServerIP:exposedPort and connect directly to the container's interface. (NOTE: I do not recommend this for real use, only for testing direct connectivity without any proxy.)
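As a rough sketch of option 2 on the front-end side: read the server address from a build-time environment variable instead of hard-coding localhost, so the deployed bundle can target remoteServerIP:exposedPort. The REACT_APP_API_URL name below is hypothetical (the REACT_APP_ prefix is create-react-app's convention for variables inlined into the bundle):

```javascript
// Hypothetical sketch: resolve the API base URL at build time instead of
// hard-coding localhost. create-react-app inlines REACT_APP_* variables
// into the bundle, so the deployed site can point at the published port.
const apiBase = process.env.REACT_APP_API_URL || "http://localhost:5000";

// Join base and path without doubling slashes.
function endpoint(path) {
  return `${apiBase.replace(/\/$/, "")}/${path.replace(/^\//, "")}`;
}

console.log(endpoint("/projects")); // e.g. "http://localhost:5000/projects"
```

In a compose setup you would pass the variable (for example http://your-host:5000) via the client service's environment or build args, so the same code works in and out of Docker.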
Well, I think you need to expose the API port, too.
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
    image: python-docker
    ports:
      - "5000:5000" # expose the API
  client:
    build:
      context: .
      dockerfile: Dockerfile
    image: react-front
    ports:
      - "3000:3000"
Currently I am working on a full-stack application with a React frontend, a MySQL DB, and an Apache PHP instance. Something seems to be off with my changes going from my Docker container to localhost: I can write from my local machine -> Docker, but it seems like localhost is not reading React from my Docker container.
I know that my mount from local machine -> Docker file system is working correctly, because whenever I make changes in my IDE and save, then cat App.js inside my Docker container, the changes are there.
Any insight would be helpful. I think what is happening is that Docker takes a copy of the files when creating the container, because whenever I recreate the container, my changes do go through to localhost.
p.s. I'm newish to docker, so let me know if you need more information. Thanks!
docker-compose
version: "3.7"
services:
  frontend:
    container_name: frontend
    build:
      context: "./hartley_react"
      dockerfile: Dockerfile
    volumes:
      - "./hartley_react:/app"
      - "/app/node_modules"
    ports:
      - 3000:3000
    stdin_open: true
    environment:
      - CHOKIDAR_USEPOLLING=true
    command: npm start
  php:
    container_name: php
    build:
      context: "./dockerfiles/php-img/"
    ports:
      - "80:80"
    volumes:
      - ./src:/var/www/html/
  db:
    container_name: db
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: userdb
      MYSQL_USER: my_user
      MYSQL_PASSWORD: my_password
    volumes:
      - ./mysqldata:/var/lib/mysql
  adminer:
    container_name: adminer
    depends_on:
      - db
    image: adminer
    restart: always
    ports:
      - 8080:8080
volumes:
  my-mysqldata:
  frontend:
React Dockerfile
FROM node:17.4.0-alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . ./
EXPOSE 3000
CMD [ "npm", "start" ]
I guess your problem is that npm start does not auto-reload when you edit your files. For that you could use nodemon or supervisor, which can reload the project each time a file is updated. Otherwise you have to restart manually (probably by restarting the Docker container).
There are a few things you can try:
check your package.json file, specifically the scripts section, to see whether start runs with a hot-reload option or not.
to do so, you may run the full test locally (without Docker) and check whether the changes you make in the HTML (frontend) are indeed reflected in your application without rebuilding.
secondly, create another script inside the package.json file (a custom script) such as npm run dev with hot reload (available in React, but not sure for your case), or use nodemon for that.
Once you have that, use it in your docker-compose file in place of CMD [ "npm", "start" ].
To me, your Dockerfile and docker-compose file, along with the named volume definition, look OK.
Only one thing: I am not sure why you mention command: npm start inside the docker-compose file when you have already covered that part with CMD in your Dockerfile when building the image.
I have a question. I have created simple React and Spring Boot applications and Dockerfiles for both. Spring Boot exposes an API and React makes requests to it. Both of them run on ports (React on 3000 and Spring Boot on 8080). My fetch looks like this:
fetch("http://localhost:8080/projects")
How am I supposed to change this in order to make it work with docker-compose? Because when I expose the ports in the docker-compose file, this fetch makes requests inside the container, not outside of it.
docker-compose:
version: '3'
services:
  frontend:
    image: "IMAGE/FRONTEND:latest"
    ports:
      - "3000:3000"
    depends_on:
      - backend
  backend:
    image: "IMAGE/BACKEND:latest"
    ports:
      - "8080:8080"
Here's an example docker compose that will help illustrate how you can do what you are trying to do:
version: '3'
services:
  frontend:
    image: "alpine:latest" # using alpine as it has wget
    # A sample loop that fetches a page from the backend service every 4s.
    # Note the use of the "backend" hostname. Output goes to /dev/null and
    # only headers are shown, for brevity. This is just to prove that the
    # front end can reach the backend.
    command: sh -c "while true; do wget -q -S -O /dev/null http://backend:80 && sleep 4; done"
    depends_on:
      - backend
  backend:
    image: "nginxdemos/hello" # a dummy image that serves an HTML page on port 80
    # Notice you don't need to expose ports. See Docker Compose networking for
    # a better understanding of how these two containers share a network.
Basically, like your backend, I have used an nginx demo container that serves pages over port 80.
For the front end I have used a shell script that just queries the backend and displays only the headers.
So in your case the problem is that your front end tries to reach the backend on localhost, whereas localhost just points back to the front-end container. You really want it to point to the hostname backend, which in turn routes you to containers of the backend service.
To understand how compose networking works please do take a look at https://docs.docker.com/compose/networking/.
Relevant snippet which comes into play in the above example.
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
I have the following docker-compose file:
version: "3"
services:
  scraper-api:
    build: ./ATPScraper
    volumes:
      - ./ATPScraper:/usr/src/app
    ports:
      - "5000:80"
  test-app:
    build: ./test-app
    volumes:
      - "./test-app:/app"
      - "/app/node_modules"
    ports:
      - "3001:3000"
    environment:
      - NODE_ENV=development
    depends_on:
      - scraper-api
Which builds the following Dockerfiles:
scraper-api (a Python Flask application):
FROM python:3.7.3-alpine
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "./app.py"]
test-app (a test React application for the API):
# base image
FROM node:12.2.0-alpine
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:/app/src/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /app/package.json
RUN npm install --silent
RUN npm install react-scripts@3.0.1 -g --silent
RUN npm install axios -g
# start app
CMD ["npm", "start"]
Admittedly, I'm a newbie when it comes to Docker networking, but I am trying to get the React app to communicate with the scraper-api. For example, the scraper-api has the endpoint /api/top_10. I have tried various permutations of the URL http://scraper-api:80/api/test_api, but none of them have worked for me.
I've been scouring the internet and can't really find a solution.
The React application runs in the end user's browser, which has no idea this "Docker" thing exists at all and doesn't know about any of the Docker Compose networking setup. For browser apps that happen to be hosted out of Docker, they need to be configured to use the host's DNS name or IP address, and the published port of the back-end service.
A common setup (Docker or otherwise) is to put both the browser apps and the back-end application behind a reverse proxy. In that case you can use relative URLs without host names like /api/..., and they will be interpreted as "the same host and port", which bypasses this problem entirely.
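One concrete (development-only) sketch of that relative-URL setup: if the front end is a create-react-app project, its dev server can forward unknown requests to the API via the proxy field in package.json, so browser code can call /api/top_10 without any hostname. The service name and port below are taken from the compose file in the question; the rest of the file is illustrative:

```json
{
  "name": "test-app",
  "proxy": "http://scraper-api:80",
  "scripts": {
    "start": "react-scripts start"
  }
}
```

Since the dev server itself runs inside the test-app container, it can resolve the scraper-api service name; the browser only ever talks to the dev server. For production builds, a reverse proxy such as nginx plays the same role.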
As a side note: when no network is specified inside docker-compose.yml, a default network is created for you named [dir containing docker-compose.yml]_default. For example, if docker-compose.yml is in the app folder, the network will be named app_default.
Now, inside this network, containers are reachable by their service names, so the scraper-api host should resolve to the right container.
It could be that you are using the wrong endpoint URL: in the question you mentioned /api/top_10 as an endpoint, but the URL you tested was http://scraper-api:80/api/test_api, which is inconsistent.
Also, it could be that you confused the order of the ports in docker-compose.yml for scraper-api service:
ports:
- "5000:80"
5000 is the port published on the host where Docker is running; 80 is the internal app port. Normally, Flask apps listen on 5000, so you may have meant:
ports:
- "80:5000"
In which case, between containers you have to use :5000 as the destination port in URLs, e.g. http://scraper-api:5000 (+ the endpoint suffix, of course).
To check connectivity, you might want to shell into the client container and see if things connect:
docker-compose exec test-app bash
wget http://scraper-api
wget http://scraper-api:5000
etc.
If you get a response, then you have connectivity, just need to figure out correct endpoint URL.