I have created two very simple containers to understand/test HTTP requests between containers, but for some reason I just can't get them communicating. I keep getting: GET http://backend:5000/ net::ERR_NAME_NOT_RESOLVED
My first container is a simple React app with no functionality; it just reads the backend container's name from process.env.REACT_APP_URL and makes a GET request with fetch(`http://${url}:5000/`).
import { useState } from "react";

const MyComponent = () => {
  const [message, setmessage] = useState("Hello");

  async function buttonClick() {
    let url = process.env.REACT_APP_URL;
    try {
      let response = await fetch(`http://${url}:5000/`);
      console.log("This is response", response);
      setmessage(response.data);
    } catch (error) {
      console.log("error occured:", error);
    }
  }

  return (
    <>
      <p>{message}</p>
      <button onClick={buttonClick}>Click Me!</button>
    </>
  );
};

export default MyComponent;
My second container is an equally simple Flask app, with Hello World served on the homepage route and nothing else.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/")
def hello_world():
    return jsonify("Hello World"), 200

if __name__ == '__main__':
    app.run(host="0.0.0.0")
And their corresponding Dockerfiles, first the React app's, then the Flask app's:
FROM node:17.4.0-alpine
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package.json ./
COPY package-lock.json ./
RUN npm install --silent
RUN npm install react-scripts@3.4.1 -g --silent
# add app
COPY . ./
# start app
CMD ["npm", "start"]
FROM python:3.6.5-alpine
RUN apk update && apk upgrade && apk add gcc musl-dev libc-dev libc6-compat linux-headers build-base git libffi-dev openssl-dev
COPY . .
RUN pip install -r requirements.txt
CMD ["python", "./myfile.py"]
Finally, I am using docker-compose to orchestrate these containers:
version: "3"
services:
backend:
build:
context: ./server
container_name: backend
expose:
- 5000
ports:
- 5000:5000
frontend:
build:
context: ./web
container_name: frontend
expose:
- 3000
ports:
- 3000:3000
environment:
- REACT_APP_URL=backend
depends_on:
- "backend"
links:
- "backend:backend"
My file system is as follows:
/sample-app
|_ server
|_ web
|_ docker-compose.yml
I have been trying to understand what I am doing wrong and I just can't find it. I appreciate any help. 🙏 🙏
Your frontend communicates with the backend through the port exposed on the host machine, because the fetch runs in the browser, not inside the container (container-to-container communication would only come into play if, say, the backend container wanted to connect to a DB container). Therefore, the hostname should be localhost, not backend.
Try it with the following change:
frontend:
  ...
  environment:
    - REACT_APP_URL=localhost
  ...
You also need to parse the response body as JSON. Since the Flask route returns jsonify("Hello World"), the parsed value is the message itself (a fetch response has no .data field):
let response = await fetch(`http://${url}:5000/`);
response = await response.json();
console.log("This is response", response);
setmessage(response);
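Since the browser sends the request from the host machine, a quick sanity check (assuming the compose file above, which publishes the backend on host port 5000) is to call the backend from the host directly:
curl http://localhost:5000/
If that prints "Hello World", the backend is reachable from the host and the UI fetch should work with REACT_APP_URL=localhost.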
Related
I am unable to connect to Socket.IO in production.
This is my Traefik file. My backend gets all calls which go to /api...
backend:
  build:
    context: ./backend
    dockerfile: Dockerfile
  command: ["npm", "run", "start"]
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.backend.rule=Host(`mydomain.com`) && PathPrefix(`/api`)"
  networks:
    - web
My express server:
export const io = new Server(server, {
  cors: {
    origin: [
      "http://localhost:5173",
      "https://mysecretdomain.com",
    ],
  },
});
My react frontend:
let socket: any;

if (import.meta.env.MODE === 'development') {
  socket = io(API_CONFIGS.SOCKET_IO_URL);
} else {
  // the app runs with traefik and is available under the prefix /api
  socket = io(API_CONFIGS.SOCKET_IO_URL, { path: '/api/socket.io' });
}
In development everything works fine, but in prod I'm getting a 404 error.
An example call from my domain:
https://mysecretdomain/api/socket.io/?EIO=4&transport=polling&t=OKSmUX8
Status: 404
I am completely out of ideas. Can someone help me out?
I have a dockerized React + Uvicorn ASGI app running behind an nginx reverse proxy.
When I run 'docker compose up --build' everything is connected, and reconnecting on page reload is successful. The problem is that React can't emit events, or Uvicorn is not receiving them.
The app was tested successfully without nginx locally, and everything was fine until I added nginx and deployed on DigitalOcean.
I'm having some sleepless nights trying to figure it out and still don't know what the problem is. Can someone please help me...
root directory hierarchy:
App
|__client(react)
| |__conf
| |__conf.d
| |__default.conf
| |__gzip.conf
| |__Dockerfile
| |__public
| |__src
| |__withSocket.tsx
| |__App.tsx
|__server(uvicorn)
| |__server.py
| |__Dockerfile
|__another-service(python socket.io)
| |__main.py
|__docker-compose.yml
docker-compose.yml
version: '3.9'
services:
  api:
    build: server/
    restart: unless-stopped
    volumes:
      - ./server:/app
    ports:
      - 8080:8080
  client:
    build: ./newclient
    restart: unless-stopped
    ports:
      - 80:80
  another-service:
    build: ./another-service
    restart: unless-stopped
    volumes:
      - ./another-service:/app
    ports:
      - 5004:5004
    depends_on:
      - api
Uvicorn ASGI server.py
import socketio
import asyncio
import json
from colorama import Fore  # needed for the colored print below

sio = socketio.AsyncServer(
    async_mode='asgi',
    async_handlers=True,
    cors_allowed_origins="*",
    logger=True,
    engineio_logger=True,
    always_connect=True,
    ping_timeout=60
)
app = socketio.ASGIApp(sio, socketio_path='/socket.io')

@sio.event
async def connect(sid, environ, auth=None):
    print(Fore.GREEN + 'connected ', sid)

@sio.event
def disconnect(sid):
    print('disconnect ', sid)

@sio.on('example-event')
async def auth(sid, data):
    data = json.loads(data)
    print('received data: ' + data['data'])
    if await client.is_user_authorized():  # `client` is defined elsewhere in the author's code
        response = {
            'auth_log': 'Authenticated',
            'logged_in': True
        }
        await sio.emit('auth', response)

if __name__ == '__main__':
    import uvicorn
    uvicorn.run("server:app", host='0.0.0.0', port=8080, log_level="info")
uvicorn server Dockerfile
FROM python:3.9
ENV PYTHONUNBUFFERED 1
WORKDIR /app
COPY requirements.txt /app/requirements.txt
RUN pip install -r requirements.txt
RUN pip freeze > requirements.txt
COPY . /app
EXPOSE 8080
CMD python server.py
client React socket.io component-wrapper withSocket.tsx
import React from "react";
import { io, Socket } from "socket.io-client";
interface ServerToClientEvents {
auth: (data: any) => void
}
interface ClientToServerEvents {
connect: (connected: string) => void
example-event: (data: any) => void
}
// component wrapper that allows to use socket globally
function withSocket (WrappedComponent: any) {
const socket: Socket<ServerToClientEvents, ClientToServerEvents> = io(process.env.REACT_APP_SOCKET_URL!, {
transports: ["websocket"],
path: "/socket.io"
});
const WithSocket = (props: any) => {
// function to subscribe to events
const socketListen = async (queue: any, callback: any) => {
await socket.on(queue, (data?: any) => {
callback(data)
})
await socket.on('disconnect', () => socket.off());
}
const socketSend = async (queue?: any, data?: any) => {
await socket.emit(queue!, JSON.stringify(data))
}
return (
<WrappedComponent
{...props}
socketSend={socketSend}
socketListen={socketListen}
/>
)
}
return WithSocket
}
export default withSocket
client socket.io component App.tsx
import React, { useState, useEffect } from 'react'
import withSocket from "../withSocket";
import "./AuthForm.css"

function AuthForm({ socketListen, socketSend }: { socketListen: any; socketSend: any }) {
  const [data, setData] = useState('');
  const [message, setMessage] = useState('');

  useEffect(() => {
    socketListen('auth', (data: any) => {
      setMessage(JSON.stringify(data.auth_log))
    })
  }, [socketListen])

  let handleSubmitData = async (e: any) => {
    e.preventDefault();
    try {
      socketSend('example-event', { 'data': data })
    } catch (err) {
      console.log(err);
    }
  }

  return (
    <>
      <div className="form-wrapper">
        <form onSubmit={handleSubmitData}>
          <label>
            <input type="text" name="data" placeholder="Data" onChange={(e) => setData(e.target.value)} />
          </label>
          <button type="submit">Send</button>
        </form>
        <div className="message">
          {message ? <p>{message}</p> : null}
        </div>
      </div>
    </>
  )
}

export default withSocket(AuthForm)
client .env file
REACT_APP_SOCKET_URL=ws://example-site.com
client react & nginx Dockerfile
# build environment
FROM node:alpine as builder
WORKDIR /app
COPY package.json .
COPY yarn.lock .
RUN yarn
COPY . .
RUN yarn build
# production environment
FROM nginx:1.15.2-alpine
RUN rm -rf /etc/nginx/conf.d
COPY conf /etc/nginx
COPY --from=builder /app/build /usr/share/nginx/html
EXPOSE 80
# Copy .env file and shell script to container
WORKDIR /usr/share/nginx/html
COPY ./env.sh .
COPY .env .
# Add bash
RUN apk add --no-cache bash
# Make our shell script executable
RUN chmod +x env.sh
# Start Nginx server
CMD ["/bin/bash", "-c", "/usr/share/nginx/html/env.sh && nginx -g \"daemon off;\""]
nginx configuration default.conf
# Use site-specific access and error logs and don't log 2xx or 3xx status codes
map $status $loggable {
    ~^[23] 0;
    default 1;
}

access_log /var/log/nginx/access.log combined buffer=512k flush=1m if=$loggable;
error_log /var/log/nginx/error.log;

upstream socket-server-upstream {
    server api:8080;
    keepalive 16;
}

server {
    listen 80;
    listen [::]:80;
    server_name example-site.com www.example-site.com;
    client_max_body_size 15M;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
        # enable WebSockets
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        expires -1;
    }

    location /socket.io {
        proxy_pass http://socket-server-upstream/socket.io;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        add_header Access-Control-Allow-Origin *;
        add_header Access-Control-Allow-Headers *;
        proxy_redirect off;
        proxy_buffering off;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
nginx MIME-Type configuration gzip.conf
gzip on;
gzip_http_version 1.0;
gzip_comp_level 5; # 1-9
gzip_min_length 256;
gzip_proxied any;
gzip_vary on;
# MIME-types
gzip_types
    application/atom+xml
    application/javascript
    application/json
    application/rss+xml
    application/vnd.ms-fontobject
    application/x-font-ttf
    application/x-web-app-manifest+json
    application/xhtml+xml
    application/xml
    font/opentype
    image/svg+xml
    image/x-icon
    text/css
    text/plain
    text/x-component;
As I already wrote at the beginning, the code works locally without nginx: the client emits events and the Uvicorn server receives and emits regularly. With nginx, deployed on an Ubuntu server, the services are also connected, but the client is not emitting. Why?
When you have two Docker containers (one "frontend" container and one "backend" container) and they need to exchange information via HTTP or WebSocket, make sure that your domain points to your "backend" container, e.g. api.your-domain.com (and call api.your-domain.com from your "frontend" code, because you're reaching your API service from the browser).
If you have two "backend" containers, like api and another-service in your docker-compose.yml file, and they need to exchange information, make sure both services are on the same Docker network. Take a look here: https://docs.docker.com/compose/networking/#specify-custom-networks
If they are on the same network, they can call each other by their service name, e.g. another-service makes an API call over http://api:8080
(see here: https://stackoverflow.com/a/66588432/5734066)
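For the second case, a minimal sketch of what attaching both backend services to a shared network could look like in the docker-compose.yml above (the network name backend-net is illustrative, not from the original file):
services:
  api:
    build: server/
    networks:
      - backend-net
  another-service:
    build: ./another-service
    networks:
      - backend-net
    depends_on:
      - api
networks:
  backend-net:
    driver: bridge
With both services on backend-net, another-service can reach the API at http://api:8080 by service name; the same name resolution also works on the default network Compose creates when no networks are declared.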
The app is pretty simple: its job is to display three types of variables:
A variable defined locally in the code,
An environment variable defined in Dockerfile and overwritten on pod creation,
An environment variable defined only on pod creation.
The code is as follows:
import './App.css';

function App() {
  let something = "3Wzc3mgdlIFwc4"
  return (
    <div className="App">
      <header className="App-header">
        <p>
          <code>ENV1: </code> <em>{process.env.REACT_APP_ENV_VARIABLE}</em>
        </p>
        <p>
          <code>ENV2: </code> <em>{process.env.REACT_APP_ENV_VARIABLE_TWO}</em>
        </p>
        <p>
          Hash: <code>{something || "not set"}</code>
        </p>
      </header>
    </div>
  );
}

export default App;
The Dockerfile used when building the image contains the following:
FROM node:17.1.0-buster-slim as build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json ./
RUN npm config set unsafe-perm true
RUN npm install -g npm@8.1.3
RUN npm install --force
RUN npm install react-scripts@4.0.3 -g
COPY . ./
RUN chown -R node /app/node_modules
ENV REACT_APP_ENV_VARIABLE "It works!"
RUN npm run build
FROM nginx:1.21.4-alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
As I want to deploy the app on Kubernetes, the following pod and service are defined:
apiVersion: v1
kind: Pod
metadata:
  name: playground-pod
  labels:
    name: playground-pod
    app: playground-app
spec:
  containers:
    - name: playground
      image: localhost:5000/playground:1.0
      ports:
        - containerPort: 80
      env:
        - name: REACT_APP_ENV_VARIABLE
          value: "Variable from Kube!"
        - name: REACT_APP_ENV_VARIABLE_TWO
          value: "192.168.0.120"
---
apiVersion: v1
kind: Service
metadata:
  name: playground-service
  labels:
    name: playground-service
    app: playground-app
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30003
      protocol: TCP
  selector:
    name: playground-pod
    app: playground-app
After successful creation of the pod and the service, the variables in the pod container look this way:
/ # printenv | grep REACT_APP
REACT_APP_ENV_VARIABLE=Variable from Kube!
REACT_APP_ENV_VARIABLE_TWO=192.168.0.120
Yet, when I display the page in the browser, I can see something like this:
ENV1: It works!
ENV2:
Hash: 3Wzc3mgdlIFwc4
So, as you can see, the local variable is displayed as expected, the variable that was supposed to be overwritten keeps its value from the Dockerfile, and the variable that was only present in the pod definition is not displayed at all.
Why is that? What should I do to make it work as expected?
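A note on why this happens: Create React App inlines REACT_APP_* values as string literals when npm run build runs (via webpack's DefinePlugin), so the nginx stage serves a static bundle in which the substitution has already happened, and pod-level environment variables never reach it. Conceptually, the build turns the JSX into something like this (an illustrative sketch, not actual webpack output):
<p>
  <code>ENV1: </code> <em>{"It works!"}</em> {/* inlined from the Dockerfile ENV at build time */}
</p>
<p>
  <code>ENV2: </code> <em>{undefined}</em> {/* REACT_APP_ENV_VARIABLE_TWO was unset during the build */}
</p>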
I'm trying to have my React front end application interact with a Flask API, both Dockerized and built together with docker-compose. Here is the docker-compose.yml file:
version: "3.9"
services:
server:
build: ./server
ports:
- "80:5000"
volumes:
- ./server:/app
environment:
FLASK_ENV: development
env_file:
- ./.env
web:
build: ./app
ports:
- "3000:3000"
volumes:
- ./app:/user/src/app
depends_on:
- server
The package.json looks like this:
{
  "name": "housing",
  "version": "0.1.0",
  "private": true,
  ...
  "proxy": "http://server:80"
}
And then, in the App.js file, I try to call the API with:
callAPI(some_arg) {
  var h = new Headers();
  h.append("Content-Type", "application/json");
  h.append("Access-Control-Allow-Origin", "*");

  var raw = JSON.stringify({ "some_arg": some_arg });

  var requestOptions = {
    method: 'POST',
    headers: h,
    body: raw,
    redirect: 'follow'
  };

  const url = '/api/some_service';
  fetch(url, requestOptions).then(res => res.json()).then(data => {
    this.setState({ some_component_data: data });
  });
}
Unfortunately doing this results in an error:
Proxy error: Could not proxy request /api/some_service from localhost:3000 to http://server:80.
It works fine if I replace server with 0.0.0.0 but I'd quite like to use the actual container name in package.json. How can I do this?
My use case is a little different (Django + Redis), but I would try some combination of these two things:
Remove the http:// and just use server:80
Specify container_name in your docker-compose file. I don't know if this is actually necessary or if it uses the service name to connect, but it's worth a shot if the first thing doesn't work alone.
For my use case, the connection string is just redis://redis and the docker-compose section for that service looks like this:
redis:
  image: redis
  container_name: redis
  restart: always
  command: redis-server --requirepass <password>
  volumes:
    - redis_data:/data
  ports:
    - "6379:6379"
The Dockerfile for my React client:
FROM node:10
WORKDIR /app/client
COPY ["package.json", "package-lock.json", "./"]
RUN npm install --production
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
The Dockerfile for my Express backend:
FROM node:10
WORKDIR /app/server
COPY ["package.json", "package-lock.json", "./"]
RUN ls
RUN npm install --production
COPY . .
EXPOSE 5000
CMD ["node", "server.js"]
My docker-compose.yml file in my project's root:
version: '3'
services:
  backend:
    build:
      context: ./backend
      dockerfile: ./Dockerfile
    image: "isaacasante/mcm-backend"
    ports:
      - "5000:5000"
  frontend:
    build:
      context: ./client
      dockerfile: ./Dockerfile
    image: "isaacasante/mcm-client"
    ports:
      - "3000:3000"
    links:
      - "backend"
My server.js file under my backend folder:
var express = require("express");
var cors = require("cors");
var app = express();
var path = require("path");

// Enable CORS and handle JSON requests
app.use(cors());
app.use(express.json());

app.post("/", function (req, res, next) {
  // console.log(req.body);
  res.json({ msg: "This is CORS-enabled for all origins!" });
});

// Set router for email notifications
const mailRouter = require("./routers/mail");
const readerRouter = require("./routers/reader");
const notificationsRouter = require("./routers/booking-notifications");

app.use("/email", mailRouter);
app.use("/reader", readerRouter);
app.use("/notifications", notificationsRouter);

if (process.env.NODE_ENV === "production") {
  app.use(express.static("mcm-app/build"));
  app.get("*", (req, res) => {
    res.sendFile(path.join(__dirname, "mcm-app", "build", "index.html"));
  });
}

app.listen(5000, function () {
  console.log("server starting...");
});
When I run:
docker-compose up
I get the following output in my terminal:
$ docker-compose up
Starting mcm_fyp_backend_1 ... done
Starting mcm_fyp_frontend_1 ... done
Attaching to mcm_fyp_backend_1, mcm_fyp_frontend_1
backend_1 | server starting...
frontend_1 |
frontend_1 | > mcm-app@0.1.0 start /app/client
frontend_1 | > react-scripts start
frontend_1 |
frontend_1 | ℹ 「wds」: Project is running at http://172.18.0.3/
frontend_1 | ℹ 「wds」: webpack output is served from
frontend_1 | ℹ 「wds」: Content not from webpack is served from /app/client/public
frontend_1 | ℹ 「wds」: 404s will fallback to /
frontend_1 | Starting the development server...
frontend_1 |
mcm_fyp_frontend_1 exited with code 0
My frontend exits with code 0, and I can't load my app. My backend is running, though.
What am I doing wrong, and how can I get my React-Express-Node app running with Docker Compose?
I found the solution to prevent my frontend service from exiting with code 0: I had to add tty: true for it in my docker-compose.yml file. Also, to make the frontend-backend interaction work as expected in the app, I had to change the proxy entry in my client's package.json to the following:
"proxy": "http://backend:5000"
And I changed the links entry in my docker-compose.yml to this:
links:
  - "backend:be"
After rebuilding, everything is working as intended.
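For reference, a sketch of what the adjusted frontend service could then look like (only the tty: true line and the links alias differ from the compose file shown above):
frontend:
  build:
    context: ./client
    dockerfile: ./Dockerfile
  image: "isaacasante/mcm-client"
  tty: true  # keeps the react-scripts dev server attached to a terminal so the container doesn't exit with code 0
  ports:
    - "3000:3000"
  links:
    - "backend:be"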