I've had a docker-compose/nginx setup going for years & it's been beautiful: hot reload worked for React, Vue, Golang, Flask, Django, PHP, etc., all the services could talk to each other, & I used it for projects at several companies ... but recently I decided to update to modern versions of the Alpine, Node, Python, etc. Docker images. It's been a mostly smooth transition except for the Node services, where I've hit a wall. Both a vanilla CRA (create-react-app) install and a vanilla Vite React SWC install come back with the same kind of error after I start the docker-compose/nginx network:
CRA:
WebSocket connection to 'wss://react-apps.raphe.localhost:3000/ws' failed:
Vite:
WebSocket connection to 'wss://vite-react-apps.raphe.localhost:8082/' failed:
With CRA, the page loads, but there is no hot-reload. With Vite/SWC I get a blank white page. For reference, the URL for one of the services (CRA) is https://react-apps.raphe.localhost:8082/. If I start either service in dev mode without docker/nginx, they run fine with hot-reload, e.g., these both work:
http://localhost:3000/
http://localhost:5173/
I found various suggested solutions for these issues on the web. One was to put WDS_SOCKET_PORT=0 in the .env or the docker-compose environment. Another, for Vite, was to mess around with vite.config.js, & I tried everything under the sun in that file. Another was to use WATCHPACK_POLLING=true with CRA and CHOKIDAR_USEPOLLING=true with Vite. Nothing worked.
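For the record, the env-var attempts looked like this in the compose service (all since reverted):
environment:
  # CRA: 0 is supposed to make the dev-server websocket use the page's own port
  - WDS_SOCKET_PORT=0
  # polling fallbacks for file-watching inside containers
  - WATCHPACK_POLLING=true
  - CHOKIDAR_USEPOLLING=true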
So, after a couple days of pain, I tried rolling all my code back a year, and got the same error! So I'm really dead in the water. I started to think it might have something to do with my Docker or Chrome versions, since those are the only pieces of the puzzle I haven't rolled back, & both were updated recently. Could that really be the issue?
Has anyone else dealt with this? It's crazy annoying. Here's an example of one of my development docker-compose services, for reference:
react-apps:
  container_name: ${COMPOSE_PROJECT_NAME}-react-apps
  restart: always
  build:
    context: ../rfe_react_apps/
    dockerfile: Dockerfile.dev
  command: yarn start
  environment:
    - REACT_APP_SENTRY=${SENTRY}
    - REACT_APP_ENV=development
    - BROWSER=none
  expose:
    - 3000
  volumes:
    - ../rfe_react_apps/:/app/
... & the relevant bit in the nginx config:
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    include /etc/nginx/conf.d/certs.txt;
    ssl_protocols TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    server_name react-apps.*;
    location / {
        proxy_pass http://react-apps:3000;
    }
}
... & the full Vite Chrome console error, in case there's a clue in there (I don't totally understand what it's trying to tell me):
GET https://vite-react-apps.raphe.localhost:8082/src/main.jsx 500
client.ts:78 WebSocket connection to 'wss://vite-react-apps.raphe.localhost:8082/' failed:
setupWebSocket @ client.ts:78
(anonymous) @ client.ts:68
client.ts:78 WebSocket connection to 'wss://localhost:5173/' failed:
setupWebSocket @ client.ts:78
fallback @ client.ts:43
(anonymous) @ client.ts:99
client.ts:48 [vite] failed to connect to websocket.
your current setup:
(browser) vite-react-apps.raphe.localhost:8082/ <--[HTTP]--> localhost:5173/ (server)
(browser) vite-react-apps.raphe.localhost:8082/ <--[WebSocket (failing)]--> localhost:5173/ (server)
Check out your Vite / network configuration and https://vitejs.dev/config/server-options.html#server-hmr .
At this point I've given up, & I rely on this stack for income. I will be so happy if anyone can unstick me here!
I banged my way to a solution (no errors now, hot-reload works), although I don't love it. Here's what I had to change to get the CRA service going: I added these vars to the CRA service's environment in my docker-compose file:
- WDS_SOCKET_HOST=localhost
- WDS_SOCKET_PORT=3000
... and swapped out the expose section for this:
ports:
- 3000:3000
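Putting it together, the whole dev service now looks like this (same service as above, just with the new vars and ports in place of expose):
react-apps:
  container_name: ${COMPOSE_PROJECT_NAME}-react-apps
  restart: always
  build:
    context: ../rfe_react_apps/
    dockerfile: Dockerfile.dev
  command: yarn start
  environment:
    - REACT_APP_SENTRY=${SENTRY}
    - REACT_APP_ENV=development
    - BROWSER=none
    # new: point CRA's dev-server websocket at the host-published port
    - WDS_SOCKET_HOST=localhost
    - WDS_SOCKET_PORT=3000
  ports: # was expose
    - 3000:3000
  volumes:
    - ../rfe_react_apps/:/app/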
That has the unintended side-effect of making the service available at http://localhost:3000, which is fine (since this is just a development environment) but messy. For Vite, SWC was erroring out with something about missing bindings, so I switched to vanilla Vite/React and finally got lucky with this combination. The docker-compose service:
vite-react-apps:
  container_name: ${COMPOSE_PROJECT_NAME}-vite-react-apps
  restart: always
  build:
    context: ../rfe_vite_react_apps/
    dockerfile: Dockerfile.dev
  command: yarn dev --port 5173 --host
  ports:
    - 5173:5173
  volumes:
    - ../rfe_vite_react_apps/:/app/
Again, annoyingly, this means the service is also available at http://localhost:5173/. So the important bits there ^ are the --port 5173 (for some reason, it says 5173 is already in use otherwise, even when it isn't), and, again, swapping out the expose section for ports. I also had to add this to my vite.config.js:
server: {
  hmr: {
    port: 5173,
    host: "localhost",
    protocol: "ws",
  },
},
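For context, that server.hmr fragment sits at the top level of the exported config object. Here's a minimal sketch of the full file, assuming the standard @vitejs/plugin-react plugin (that import is an assumption; I'm on vanilla Vite/React now):
// vite.config.js (sketch)
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";

export default defineConfig({
  plugins: [react()],
  server: {
    hmr: {
      port: 5173,        // match the host-published port
      host: "localhost", // the browser connects to ws://localhost:5173
      protocol: "ws",
    },
  },
});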
Again, I don't love this! ... but it's only for my development docker-compose file. Everything works fine in production, where I can use expose instead of ports, etc. I'm not sure why I have to do this stuff now; it worked so beautifully before. If anyone knows of a cleaner solution, or can explain why things changed, I'd love to know.
Related
I'm trying to dockerize my app. It has an API architecture, without using nginx. I'm using this Dockerfile for the Flask app:
FROM python:3.9.0
WORKDIR /ProyectoTitulo
ENV FLASK_APP=app.py
ENV FLASK_ENV=development
COPY ./requirements.txt .
RUN pip install -r requirements.txt
COPY . .
RUN python -m nltk.downloader all
CMD ["python", "app.py"]
This is my React app Dockerfile:
FROM node:16-alpine
WORKDIR /app
COPY ./package.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
Finally, this is my docker-compose.yml file:
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
    image: python-docker
  client:
    build:
      context: .
      dockerfile: Dockerfile
    image: react-front
    ports:
      - "3000:3000"
I run build and compose up, but when I try to send an HTTP request to an endpoint it fails with ERR_CONNECTION. Do I need to add something to these files? Something to the compose file?
One thing, as #bluepuma77 mentioned, is that you need to publish your BE port; once that is done and you can connect to it locally, you are ready to check the second step.
As I already answered in an SO question similar to yours, I will quote my answer here, since it will probably be useful to you as well.
I am no expert on MERN (we mainly run Angular & .Net), but I have to warn you of one thing. We had an issue when setting this up in the beginning as well: it worked locally in containers but not on our deployment servers, because we forgot a basic thing about web applications.
Applications run in your browser, whereas if you deploy an application stack somewhere else, the rest of the services (APIs, DB and such) do not. So referencing your IP/DNS/localhost inside your application won't work, because there is nothing there. A container that contains a web application is only there to serve your browser (client) files; the JS and the logic are then executed inside your browser, not the container.
I suspect this might be affecting your ability to connect to the backend.
To solve this you have two options.
Create an HTTP proxy (for instance Nginx, Traefik, ...) as an additional service and have your FE call that proxy (set up a domain and routing); the proxy can then reference your backend by its service name, since it lives in the same environment as the API (see the sketch below).
Expose the HTTP port directly from the container and then your FE can call remoteServerIP:exposedPort and you will connect directly to the container's interface. (NOTE: I do not recommend this way for real use, only for testing direct connectivity without any proxy)
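To illustrate option 1, an Nginx proxy service in front of both containers could look roughly like this (a sketch only; the api and client service names are from the compose file above, and port 5000 is an assumption since Flask defaults to it):
server {
    listen 80;
    # serve the FE container to the browser
    location / {
        proxy_pass http://client:3000;
    }
    # reference the BE by its compose service name;
    # the /api/... path is forwarded as-is
    location /api/ {
        proxy_pass http://api:5000;
    }
}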
Well, I think you need to expose the API port, too.
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
    image: python-docker
    ports:
      - "5000:5000" # EXPOSE API
  client:
    build:
      context: .
      dockerfile: Dockerfile
    image: react-front
    ports:
      - "3000:3000"
So, I'm developing a basic Express backend for a React app.
The request is being made like this:
const data = [];
axios.get(`${serverLocation}/api/graph/32`).then(res => {
  this.setState({ datag: res.data });
  // note: setState is async, so this may still read the previous state
  for (const key in this.state) {
    data.push(this.state[key]);
  }
});
serverLocation looks like http://IP:PORT.
The API path is correct and, as far as I can see, everything works on my development machine: React makes successful requests to the server at the specified location, etc. The thing is, when I put this into 2 separate Docker containers via docker-compose.yml, it won't work.
This is my docker-compose.yml:
version: '2.0'
services:
  server:
    restart: always
    container_name: varno_domov_server
    build: .
    ports:
      - "8088:5000"
    links:
      - react
    networks:
      - varnodomovnetwork
  react:
    restart: always
    container_name: varno_domov_client
    build: client/
    ports:
      - "8080:3000"
    networks:
      - varnodomovnetwork
networks:
  varnodomovnetwork:
    driver: bridge
I also have custom Dockerfiles, the server looking like:
FROM node:10
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 5000
CMD [ "npm", "start" ]
And the client looking like:
FROM node:10
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "npm", "start" ]
If you've made it this far reading, thank you for taking the time. I am open to any suggestions regarding docker here, the React part is not written by me. If any additional information is required, tell me in the comments. Isolation is making me very available :)
So, the thing was that React was submitting requests to the server all along. I am inexperienced with React, so I was looking for logs in the terminal/bash when they were actually available to look at in the browser.
The problem was, that my server was on a public IP and communicating via HTTP. This meant the browser blocked the content (Mixed Content: The page at was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint), making my graphs display no data. An easy fix is to just make the browser go through with unsafe content, although I am not about that life. So I did the following:
The key problem was that my server and client are 2 separate containers, and therefore on separate ports. What I've done is edit my nginx configuration so that any request to my domain that looks like "https://www.example.com/api" is forwarded to the server container's port on the server machine.
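Concretely, the added nginx bit looked roughly like this (a sketch; 8088 is the server container's published port from the compose file above):
location /api {
    # forward https://www.example.com/api/... to the Express container
    proxy_pass http://localhost:8088;
}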
Hope this is of any help to someone :)
I have the following docker-compose file:
version: "3"
services:
scraper-api:
build: ./ATPScraper
volumes:
- ./ATPScraper:/usr/src/app
ports:
- "5000:80"
test-app:
build: ./test-app
volumes:
- "./test-app:/app"
- "/app/node_modules"
ports:
- "3001:3000"
environment:
- NODE_ENV=development
depends_on:
- scraper-api
These are built from the following Dockerfiles.
scraper-api (a Python Flask application):
FROM python:3.7.3-alpine
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "./app.py"]
test-app (a test React application for the API):
# base image
FROM node:12.2.0-alpine
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:/app/src/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /app/package.json
RUN npm install --silent
RUN npm install react-scripts@3.0.1 -g --silent
RUN npm install axios -g
# start app
CMD ["npm", "start"]
Admittedly, I'm a newbie when it comes to Docker networking, but I am trying to get the React app to communicate with the scraper-api. For example, the scraper-api has the following endpoint: /api/top_10. I have tried various permutations of the following URL:
http://scraper-api:80/api/test_api. None of them have worked for me.
I've been scavenging the internet and I can't really find a solution.
The React application runs in the end user's browser, which has no idea this "Docker" thing exists at all and doesn't know about any of the Docker Compose networking setup. For browser apps that happen to be hosted out of Docker, they need to be configured to use the host's DNS name or IP address, and the published port of the back-end service.
A common setup (Docker or otherwise) is to put both the browser apps and the back-end application behind a reverse proxy. In that case you can use relative URLs without host names like /api/..., and they will be interpreted as "the same host and port", which bypasses this problem entirely.
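For instance, an Nginx reverse proxy in front of both services could look roughly like this (a sketch; the service names and internal ports come from the compose file above, and the /api/ split is an assumption):
server {
    listen 80;
    # API requests go to the Flask container by its service name
    location /api/ {
        proxy_pass http://scraper-api:80;
    }
    # everything else goes to the React dev server
    location / {
        proxy_pass http://test-app:3000;
    }
}
The browser then only ever talks to the proxy, so the front end can use relative URLs like /api/top_10.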
As a side note: when no network is specified inside docker-compose.yml, a default network will be created for you with the name [dir location of docker-compose.yml]_default. For example, if docker-compose.yml is in the app folder, the network will be named app_default.
Now, inside this network, containers are reachable by their service names, so the scraper-api host should resolve to the right container.
It could be that you are using the wrong endpoint URL. In the question, you mentioned /api/top_10 as an endpoint, but the URL you tested was http://scraper-api:80/api/test_api, which is inconsistent.
Also, it could be that you confused the order of the ports in docker-compose.yml for the scraper-api service:
ports:
- "5000:80"
5000 is being exposed to the host where Docker is running; 80 is the internal app port. Normally, Flask apps listen on 5000, so I suspect you meant:
ports:
- "80:5000"
In which case, between containers you have to use :5000 as the destination port in URLs: http://scraper-api:5000, for example (+ the endpoint suffix, of course).
To check connectivity, you might want to get a shell in the client container and see if things connect:
docker-compose exec test-app sh   # node alpine images ship sh, not bash
wget http://scraper-api
wget http://scraper-api:5000
etc.
If you get a response, then you have connectivity, just need to figure out correct endpoint URL.
The Error
When deploying to Azure Web Apps with Multi-container support, I receive an "Invalid Host Header" message from https://mysite.azurewebsites.net
Local Setup
This runs fine.
I have two Docker containers: client, a React app, and server, an Express app hosting my API. I am using a proxy to host my API on server.
In client's package.json I have defined:
"proxy": "http://localhost:3001"
I use the following docker compose file to build locally.
version: '2.1'
services:
  server:
    build: ./server
    expose:
      - ${APP_SERVER_PORT}
    environment:
      API_HOST: ${API_HOST}
      APP_SERVER_PORT: ${APP_SERVER_PORT}
    ports:
      - ${APP_SERVER_PORT}:${APP_SERVER_PORT}
    volumes:
      - ./server/src:/app/project-server/src
    command: npm start
  client:
    build: ./client
    environment:
      - REACT_APP_PORT=${REACT_APP_PORT}
    expose:
      - ${REACT_APP_PORT}
    ports:
      - ${REACT_APP_PORT}:${REACT_APP_PORT}
    volumes:
      - ./client/src:/app/project-client/src
      - ./client/public:/app/project-client/public
    links:
      - server
    command: npm start
Everything runs fine.
On Azure
When deploying to Azure I have the following. client and server images have been stored in Azure Container Registry. They appear to load just fine from the logs.
In my App Service > Container Settings I am loading the images from Azure Container Registry (ACR) and I'm using the following configuration (Docker compose) file.
version: '2.1'
services:
  client:
    image: <clientimage>.azurecr.io/clientimage:v1
    build: ./client
    expose:
      - 3000
    ports:
      - 3000:3000
    command: npm start
  server:
    image: <serverimage>.azurecr.io/<serverimage>:v1
    build: ./server
    expose:
      - 3001
    ports:
      - 3001:3001
    command: npm start
I have also defined WEBSITES_PORT as 3000 in Application Settings.
This results in the error on my site "Invalid Host Header"
Things I've tried
• Serving the app from the static folder in server. This works in that it serves the app, but it messes up my authentication. I need to be able to serve the static portion from client's App.js and have that talk to my Express API for database calls and authentication.
• In my docker-compose file binding the front end to:
ports:
- 3000:80
• A few other port combinations but no luck.
Also, I think this has something to do with the proxy in client's package.json based on this repo
Any help would be greatly appreciated!
Update
It is the proxy setting.
This somewhat solves it. By removing "proxy": "http://localhost:3001" I am able to load the website, but the answer suggested there does not work for me, i.e., I am now unable to access my API.
I've never used Azure before, and I also don't use a proxy (due to its random connection issues), but if your application is basically running Express, you can utilize CORS. (As a side note, it's more common to run your Express server on 5000 than 3001.)
I first set up an env/config.js folder and file like so:
module.exports = {
  development: {
    database: 'mongodb://localhost/boilerplate-dev-db',
    port: 5000,
    portal: 'http://localhost:3000',
  },
  production: {
    database: 'mongodb://localhost/boilerplate-prod-db',
    port: 5000,
    portal: 'http://example.com',
  },
  staging: {
    database: 'mongodb://localhost/boilerplate-staging-db',
    port: 5000,
    portal: 'http://localhost:3000',
  },
};
Then, depending on the environment, I can implement CORS where I'm defining the Express middleware:
const express = require('express');
const cors = require('cors');
const config = require('./path/to/env/config.js');

const app = express();
const env = process.env.NODE_ENV;

// allow cross-origin requests from the configured portal only
app.use(
  cors({
    credentials: true,
    origin: config[env].portal,
  }),
);
Please note the portal and the AJAX requests MUST have matching host names. For example, if my application is hosted on http://example.com, my front-end API requests must be made to http://example.com/api/ (not http://localhost:3000/api/), and the portal env must match the host name http://example.com. This setup is flexible and necessary when running multiple environments.
Or, if you're using create-react-app, simply eject your app and implement a proxy inside the webpack production configuration.
Or migrate your application to my fullstack boilerplate, which implements the cors example above.
So, I ended up having to move off of containers and serve the React app in a more typical MERN architecture, with the Express server hosting the React app from the static build folder. I set up some routes with PassportJS to handle my authentication.
Not my preferred solution, I would have preferred to use containers, but this works. Hope this points someone out there in the right direction!
I'm working on a Node project on my Vagrant laravel/homestead box.
Everything works fine; I can access the project when I go to the host defined in my /etc/hosts:
192.168.10.10 project
But I'm trying to build and watch my project with webpack, so I installed webpack-dev-server, and I can run it:
http://localhost:8080/
webpack result is served from /
content is served from /home/vagrant/Workspace/Kanban
404s will fallback to /index.html
[...]
webpack: bundle is now VALID.
My problem is, when I try to access project:8080 with my browser, I get a loading error.
A netstat -an | grep 8080 inside the Vagrant box shows me that it is listening.
I tried to forward the port using Homestead.yaml:
ports:
  - send: 8080
    to: 8080
    protocol: tcp
But with or without port forwarding, all I get is an error page.
What can I do to make my webpack watcher work?
Okay, I finally found the answer.
The problem was not about ports but about the dev-server: by default it is configured to listen only on localhost. The solution was to add a rule to the configuration:
devServer: {
  // [...]
  host: '0.0.0.0',
}
Setting the host to '0.0.0.0' allows the dev-server to be reached from anywhere, and therefore from my "real" host.
I found the explanation in a GitHub issue. Too bad the options list wasn't in the official documentation.
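For what it's worth, the same option can also be passed on the command line instead of in the config (assuming the webpack-dev-server CLI of that era):
webpack-dev-server --host 0.0.0.0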