traefik v2 forward domain requests to subroute - reactjs

I have a React web app built with Docker, running behind a Traefik proxy. The Docker container runs Nginx on port 80 as the HTTP server for the React application. The current config routes http://example.com and https://example.com to the React app. I want to forward requests for another domain to a subroute of the application, for example, forward https://test.example.com requests to https://example.com/test-sub-route. How can I do this with Traefik v2?
Note: The address bar should show https://test.example.com.
This is my current configuration:
version: "3.7"
services:
reactweb:
image: react-web-app:latest
networks:
- 'internal'
- 'traefik'
labels:
- "traefik.enable=true"
- "traefik.docker.network=traefik"
- "traefik.http.routers.reactweb-web.entrypoints=web"
- "traefik.http.routers.reactweb-web.rule=Host(`example.com`)"
- "traefik.http.middlewares.reactweb-redirect.redirectscheme.scheme=https"
- "traefik.http.middlewares.reactweb-redirect.redirectscheme.permanent=true"
- "traefik.http.routers.reactweb-web.middlewares=reactweb-redirect#docker"
- "traefik.http.routers.reactweb-websecure.entrypoints=websecure"
- "traefik.http.routers.reactweb-websecure.rule=Host(`example.com`)"
- "traefik.http.routers.reactweb-websecure.tls=true"
- "traefik.http.services.reactweb-service.loadbalancer.server.port=80"
networks:
traefik:
external: true
internal:
external: false

You can add a prefix to the request path in Traefik with the addPrefix middleware:
labels:
  - "traefik.http.middlewares.add-foo.addprefix.prefix=/foo"
Reference: https://docs.traefik.io/middlewares/addprefix/
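Applied to this question, a minimal sketch (the router and middleware names are placeholders, and it assumes test.example.com points at the same Traefik instance) is a second router that matches the extra host and attaches the middleware. Because addPrefix rewrites the path inside the proxy rather than issuing a redirect, the address bar keeps showing https://test.example.com:

labels:
  # hypothetical router for the extra domain
  - "traefik.http.routers.reactweb-test.entrypoints=websecure"
  - "traefik.http.routers.reactweb-test.rule=Host(`test.example.com`)"
  - "traefik.http.routers.reactweb-test.tls=true"
  # rewrite / to /test-sub-route before proxying to the container
  - "traefik.http.middlewares.test-prefix.addprefix.prefix=/test-sub-route"
  - "traefik.http.routers.reactweb-test.middlewares=test-prefix"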

Related

Docker + React app: how to use, on the frontend side, files saved in the API (server) side folder?

So, I'm stuck on an issue with files stored on the server that I am not able to display in the frontend.
My project is:
React + Redux using Docker
The React app is full stack, i.e., there's an API folder for the backend, a CLIENT folder for the React/Redux frontend, and MongoDB as the DB.
Docker Compose builds these 3 parts, API, CLIENT and MONGO, as services in a single stack.
In the frontend, the user is able to select an image as an avatar; this image is sent through the layers and saved in a specific folder (NOT BUILD/PUBLIC etc.) inside the API Docker image. It's possible to remove/delete and re-select it. Everything's working fine!
The issue is displaying this image in the frontend. The avatar component uses an IMAGE SRC to display it, but I can't find a valid URL that lets the frontend SEE that image file saved on the API/server side.
Since it's inside a container, I tried every possibility I could find in the Docker documentation... I think the solution lies in the NETWORK docker-compose option, but even so I couldn't make it work.
Docker Compose File:
version: '3.8'
services:
  client:
    build: ./client
    stdin_open: true
    image: my-client
    restart: always
    ports:
      - "3000:3000"
    volumes:
      - ./client:/client
      - /client/node_modules
    depends_on:
      - api
    networks:
      mynetwork:
        ipv4_address: 172.19.0.9
  api:
    build: ./api
    image: my-api
    restart: always
    ports:
      - "3003:3003"
    volumes:
      - ./api:/api
      - logs:/api/logs
      - /api/node_modules
    depends_on:
      - mongo
    networks:
      mynetwork:
        ipv4_address: 172.19.0.10
  mongo:
    image: mongo
    restart: always
    ports:
      - "27017:27017"
    volumes:
      - mongo_data:/data/db
    networks:
      - mynetwork
volumes:
  mongo_data:
  logs:
networks:
  mynetwork:
    driver: bridge
    ipam:
      config:
        - subnet: "172.19.0.0/24"
To summarize, there's a folder on the API side with images/files and I want to reference them as in
<img src="mynetwork:3003/imagefolder/imagefile.png"> or something like that...
I can't believe I have to use this other solution... Another Stackoverflow Reply
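One thing worth noting: the Compose network name (mynetwork) only resolves between containers, never in the browser, so an img src can't use it. What the browser can reach is the port the api service publishes on the host (3003:3003). A minimal sketch of the idea, assuming the API is an Express app and the avatars live in ./imagefolder inside its container (folder and file names here are placeholders):

// api/server.js
const express = require('express');
const path = require('path');

const app = express();

// expose the upload folder over HTTP so the browser can fetch the files
app.use('/imagefolder', express.static(path.join(__dirname, 'imagefolder')));

app.listen(3003);

The frontend then references the file through the published port, e.g. <img src="http://localhost:3003/imagefolder/imagefile.png"> in development, or the server's public hostname in production.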

Unable to connect to api server of react js app using kubernetes?

I am basically trying to run a React JS app which is mainly composed of 3 services, namely a Postgres DB, an API server and a UI frontend (served using Nginx). Currently the app works as expected in development mode using docker-compose, but when I tried to run this in production using Kubernetes, I was not able to access the API server of the app (CONNECTION REFUSED).
Since I want to run this in production using Kubernetes, I created YAML files for each of the services and then tried running them using kubectl apply. I have also tried this with and without using a persistent volume for the API server. But none of this helped.
Docker-compose file (this works and I am able to connect to the API server at port 8000):
version: "3"
services:
pg_db:
image: postgres
networks:
- wootzinternal
ports:
- 5432
environment:
- POSTGRES_PASSWORD=password
- POSTGRES_USER=postgres
- POSTGRES_DB=wootz
volumes:
- wootz-db:/var/lib/postgresql/data
apiserver:
image: wootz-backend
volumes:
- ./api:/usr/src/app
- /usr/src/app/node_modules
build:
context: ./api
dockerfile: Dockerfile
networks:
- wootzinternal
depends_on:
- pg_db
ports:
- '8000:8000'
ui:
image: wootz-frontend
volumes:
- ./client:/usr/src/app
- /usr/src/app/build
- /usr/src/app/node_modules
build:
context: ./client
dockerfile: Dockerfile
networks:
- wootzinternal
ports:
- '80:3000'
volumes:
wootz-db:
networks:
wootzinternal:
driver: bridge
My API server YAML for running in Kubernetes (this doesn't work and I can't connect to the API server at port 8000):
apiVersion: v1
kind: Service
metadata:
  name: apiserver
  labels:
    app: apiserver
spec:
  ports:
    - name: apiport
      port: 8000
      targetPort: 8000
  selector:
    app: apiserver
    tier: backend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apiserver
  labels:
    app: apiserver
spec:
  selector:
    matchLabels:
      app: apiserver
      tier: backend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: apiserver
        tier: backend
    spec:
      containers:
        - image: suji165475/devops-sample:corspackedapi
          name: apiserver
          env:
            - name: POSTGRES_DB_USER
              value: postgres
            - name: POSTGRES_DB_PASSWORD
              value: password
            - name: POSTGRES_DB_HOST
              value: postgres
            - name: POSTGRES_DB_PORT
              value: "5432"
          ports:
            - containerPort: 8000
              name: myport
What changes should I make to my API server YAML (Kubernetes) so that I can access it on port 8000? Currently I am getting a connection refused error.
The default Service type on Kubernetes is ClusterIP, which makes a Service reachable inside the cluster but does not expose it to the outside.
This is your Service using the LoadBalancer type:
apiVersion: v1
kind: Service
metadata:
  name: apiserver
  labels:
    app: apiserver
spec:
  type: LoadBalancer
  ports:
    - name: apiport
      port: 8000
      targetPort: 8000
  selector:
    app: apiserver
    tier: backend
With that, you can see how the service expects to get an external IP address by running kubectl describe service apiserver.
In case you want more control over how requests are routed to that service, you can add an Ingress in front of that same service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: apiserver
  name: apiserver
spec:
  rules:
    - host: apiserver.example.com
      http:
        paths:
          - backend:
              serviceName: apiserver
              servicePort: 8000
            path: /*
Your service is only exposed over the internal Kubernetes network.
This is because if you do not specify a spec.type, the default is ClusterIP.
To expose your application you can follow at least 3 ways:
LoadBalancer: you can specify spec.type: LoadBalancer. A LoadBalancer service exposes your application on the (public) network. This works great if your cluster is a cloud service (GKE, DigitalOcean, AKS, Azure, ...): the cloud takes care of providing the public IP and routing the network traffic to all your nodes. Usually this is not the best method, because a cloud load balancer has a cost (depending on the cloud), and if you need to expose a lot of services the situation can become difficult to maintain.
NodePort: you can specify spec.type: NodePort. This exposes the Service on each node's IP at a static port (the NodePort).
You'll be able to contact the service, from outside the cluster, by requesting <NodeIP>:<NodePort> (a minimal manifest sketch follows this list).
Ingress: Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. This is the most common scenario for a simple HTTP/HTTPS application. It allows you to easily manage SSL termination and routing.
You need to deploy an ingress controller to make this work, like a simple nginx. All the main clouds can do this for you with a simple setting when you create the cluster.
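For the NodePort option, a minimal sketch of what the question's Service could look like (the nodePort value is an assumption; omit it and Kubernetes picks one from the default 30000-32767 range):

apiVersion: v1
kind: Service
metadata:
  name: apiserver
spec:
  type: NodePort
  ports:
    - port: 8000
      targetPort: 8000
      nodePort: 30080  # assumed; must fall in the cluster's NodePort range
  selector:
    app: apiserver
    tier: backend

The API would then be reachable at <any-node-ip>:30080 from outside the cluster.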
Read here for more information about services
Read here for more information about ingress

React + Express on Azure: Invalid Host Header

The Error
When deploying to Azure Web Apps with Multi-container support, I receive an "Invalid Host Header" message from https://mysite.azurewebsites.net
Local Setup
This runs fine.
I have two Docker containers: client, a React app, and server, an Express app hosting my API. I am using a proxy so that client can reach the API on server.
In client's package.json I have defined:
"proxy": "http://localhost:3001"
I use the following docker compose file to build locally.
version: '2.1'
services:
  server:
    build: ./server
    expose:
      - ${APP_SERVER_PORT}
    environment:
      API_HOST: ${API_HOST}
      APP_SERVER_PORT: ${APP_SERVER_PORT}
    ports:
      - ${APP_SERVER_PORT}:${APP_SERVER_PORT}
    volumes:
      - ./server/src:/app/project-server/src
    command: npm start
  client:
    build: ./client
    environment:
      - REACT_APP_PORT=${REACT_APP_PORT}
    expose:
      - ${REACT_APP_PORT}
    ports:
      - ${REACT_APP_PORT}:${REACT_APP_PORT}
    volumes:
      - ./client/src:/app/project-client/src
      - ./client/public:/app/project-client/public
    links:
      - server
    command: npm start
Everything runs fine.
On Azure
When deploying to Azure I have the following. The client and server images have been stored in Azure Container Registry, and they appear to load just fine from the logs.
In my App Service > Container Settings I am loading the images from Azure Container Registry (ACR) and I'm using the following configuration (Docker compose) file:
version: '2.1'
services:
  client:
    image: <clientimage>.azurecr.io/clientimage:v1
    build: ./client
    expose:
      - 3000
    ports:
      - 3000:3000
    command: npm start
  server:
    image: <serverimage>.azurecr.io/<serverimage>:v1
    build: ./server
    expose:
      - 3001
    ports:
      - 3001:3001
    command: npm start
I have also defined WEBSITES_PORT to be 3000 in Application Settings.
This results in the error "Invalid Host Header" on my site.
Things I've tried
• Serving the app from the static folder in server. This works in that it serves the app, but it messes up my authentication. I need to be able to serve the static portion from client's App.js and have that talk to my Express API for database calls and authentication.
• In my docker-compose file, binding the front end to:
ports:
  - 3000:80
• A few other port combinations, but no luck.
Also, I think this has something to do with the proxy in client's package.json based on this repo
Any help would be greatly appreciated!
Update
It is the proxy setting.
This somewhat solves it. By removing "proxy": "http://localhost:3001" I am able to load the website, but the suggested answer in the problem does not work for me, i.e. I am now unable to access my API.
I've never used Azure before and I also don't use a proxy (due to its random connection issues), but if your application is basically running Express, you can utilize CORS. (As a side note, it's more common to run your Express server on 5000 than 3001.)
I first set up an env/config.js folder and file like so:
module.exports = {
  development: {
    database: 'mongodb://localhost/boilerplate-dev-db',
    port: 5000,
    portal: 'http://localhost:3000',
  },
  production: {
    database: 'mongodb://localhost/boilerplate-prod-db',
    port: 5000,
    portal: 'http://example.com',
  },
  staging: {
    database: 'mongodb://localhost/boilerplate-staging-db',
    port: 5000,
    portal: 'http://localhost:3000',
  },
};
Then, depending on the environment, I can implement cors where I'm defining express middleware:
const cors = require('cors');
const config = require('./path/to/env/config.js');
const env = process.env.NODE_ENV;

app.use(
  cors({
    credentials: true,
    origin: config[env].portal,
  }),
);
Please note the portal and the AJAX requests MUST have matching host names. For example, if my application is hosted on http://example.com, my front-end API requests must be making requests to http://example.com/api/ (not http://localhost:3000/api/ -- click here to see how I implement it for my website), and the portal env must match the host name http://example.com. This setup is flexible and necessary when running multiple environments.
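In practice, the simplest way to keep the host names matching is for the client to use relative request paths, so the request always targets whatever host served the page. A sketch of that idea (the endpoint name is an assumption, not from the question):

// runs in the browser; the relative path resolves against the page's own origin
fetch('/api/users', { credentials: 'include' })  // 'include' pairs with cors({ credentials: true })
  .then(res => res.json())
  .then(users => console.log(users));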
Or if you're using create-react-app, then simply eject your app and implement a proxy inside the webpack production configuration.
Or migrate your application to my fullstack boilerplate, which implements the cors example above.
So, I ended up having to move off of containers and serve the React app in a more typical MERN architecture, with the Express server hosting the React app from the static build folder. I set up some routes with PassportJS to handle my authentication.
Not my preferred solution; I would have preferred to use containers, but this works. Hope this points someone out there in the right direction!
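For reference, the usual shape of that setup is a few lines in the Express app. A minimal sketch, assuming the React build output ends up in client/build relative to the server entry point:

const express = require('express');
const path = require('path');

const app = express();

// serve the compiled React bundle
app.use(express.static(path.join(__dirname, 'client/build')));

// API and auth routes (e.g. the PassportJS ones) go here, before the catch-all
// app.use('/api', apiRouter);

// send index.html for any other route so client-side routing still works
app.get('*', (req, res) => {
  res.sendFile(path.join(__dirname, 'client/build', 'index.html'));
});

app.listen(process.env.PORT || 3001);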

Docker-Compose - Communication with "Internal" API

I've developed an Angular app that communicates with a uWSGI Flask API through Nginx. Currently I have 3 containers (Angular [web_admin], API [api_admin], Nginx [nginx]).
When I run it on my development machine, the communication works fine. The Angular requests go through the URL http://localhost:5000 and the API responds well; everything works.
But when I deployed it to my production server, I noticed that the application does not work, because port 5000 is not open in my firewall.
My question is kind of simple: how do I make the Angular container call the API container through the internal network, instead of calling it externally?
version: '2'
services:
  data:
    build: data
  neo4j:
    image: neo4j:3.0
    networks:
      - back
    volumes_from:
      - data
    ports:
      - "7474:7474"
      - "7473:7473"
      - "7687:7687"
    volumes:
      - /var/diariooficial/neo4j/data:/data
  web_admin:
    build: frontend/web
    networks:
      - front
      - back
    ports:
      - "8001:8001"
    depends_on:
      - api_admin
    links:
      - "api_admin:api_admin"
    volumes:
      - /var/diariooficial/upload/diario_oficial/:/var/diariooficial/upload/diario_oficial/
  api_admin:
    build: backend/api
    volumes_from:
      - data
    networks:
      - back
    ports:
      - "5000:5000"
    depends_on:
      - neo4j
      - neo4jtest
    volumes:
      - /var/diariooficial/upload/diario_oficial/:/var/diariooficial/upload/diario_oficial/
  nginx:
    build: nginx
    volumes_from:
      - data
    networks:
      - back
      - front
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/diariooficial/log/nginx:/var/log/nginx
    depends_on:
      - api_admin
      - web_admin
networks:
  front:
  back:
Links create DNS names on the network for the services. You should have the web_admin service talk to api_admin:5000 instead of localhost:5000. The api_admin DNS name will resolve to the IP address of one of the api_admin service's containers.
See https://docs.docker.com/compose/networking/ for an explanation, specifically:
Each container can now look up the hostname web or db and get back the appropriate container’s IP address. For example, web’s application code could connect to the URL postgres://db:5432 and start using the Postgres database.
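Concretely, that means replacing the hard-coded localhost in whatever configuration the web_admin code uses to reach the API. A sketch (the variable name is an assumption); note that this hostname only resolves for code running inside a container on the same network, such as an Nginx proxy_pass or server-side code, not for JavaScript running in a visitor's browser:

// before: only works when the API port is published on the same host
// var apiBaseUrl = 'http://localhost:5000';

// after: resolves via Compose's internal DNS, container-to-container
var apiBaseUrl = 'http://api_admin:5000';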

Docker Compose Link (Alias)

I have an AngularJS frontend and a Spring Boot REST API backend.
I have created two Dockerfiles:
Dockerfile Front:
FROM httpd:2.4
COPY ./public-html/ /usr/local/apache2/htdocs/
Dockerfile Back:
FROM tomcat:8.0
EXPOSE 8080
COPY rest-api.war /usr/local/tomcat/webapps/rest-api.war
I have a Docker-Compose file where I have defined aliases.
Docker-Compose:
rest:
  image: restapi
  container_name: restapi
  ports:
    - "8080:8080"
frontend:
  image: frontend
  container_name: frontend
  ports:
    - "80:80"
I redefine the baseUrl in my AngularJS controller:
app.controller('MainCtrl', function ($scope, $location, $http, MainService) {
  var that = this;
  var baseUrl = 'http://rest:8080';
  // ...
});
When I launch my app, I have this error in the console:
GET http://rest:8080/category net::ERR_NAME_NOT_RESOLVED
The hosts files in the other containers are not updated automatically.
What is wrong?
****** UPDATE ******
I have created a network:
$ docker network create my-network
I have redefined my docker-compose file.
Containers connected to that network should reach the other containers.
Yet I still have the same error.
When I look in Kitematic, my backend has an IP, and when I look in the hosts file the IP is not the same.
When I modify my controller with the IP from Kitematic everything works, but when I use the alias it does not.
So you are trying to use the linked alias inside your browser (Angular) application? Docker only exposes these aliases to the containers. Your local development system, being outside the Docker network, will not have these additions to the hosts file and therefore will not be able to resolve hosts to IPs.
Any application running inside the containers, like a Node.js backend, will be able to use these aliases. Browsers can't.
If you create a new network, any container connected to that network can reach other containers by their name or service name.
Create the network:
$ docker network create your-network
docker-compose.yml
rest:
  image: restapi
  container_name: restapi
  ports:
    - "8080:8080"
  net: your-network
frontend:
  image: frontend
  container_name: frontend
  ports:
    - "80:80"
  net: your-network
Note: if you use docker-compose file version 2.0, Compose will create the network for you.
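Under file format version 2, the same setup can be sketched without any explicit network, since Compose creates a default network for the project and containers reach each other by service name (a sketch of that idea, not the original poster's file):

version: '2'
services:
  rest:
    image: restapi
    ports:
      - "8080:8080"
  frontend:
    image: frontend
    ports:
      - "80:80"
# no networks section needed: both services join the project's
# default network and resolve each other as "rest" and "frontend"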
You can try and link the two containers in the configuration file.
rest:
  image: restapi
  container_name: restapi
  ports:
    - "8080:8080"
frontend:
  image: frontend
  container_name: frontend
  ports:
    - "80:80"
  links:
    - rest
