I have an AngularJS frontend and a Spring Boot REST API backend. I have created two Dockerfiles.
Dockerfile front:
FROM httpd:2.4
COPY ./public-html/ /usr/local/apache2/htdocs/
Dockerfile back:
FROM tomcat:8.0
EXPOSE 8080
COPY rest-api.war /usr/local/tomcat/webapps/rest-api.war
I have a docker-compose file in which I have defined an alias.
docker-compose.yml:
rest:
  image: restapi
  container_name: restapi
  ports:
    - "8080:8080"
frontend:
  image: frontend
  container_name: frontend
  ports:
    - "80:80"
I redefined the baseUrl in my AngularJS controller:
app.controller('MainCtrl', function ($scope, $location, $http, MainService) {
    var that = this;
    var baseUrl = 'http://rest:8080';
    // ...
});
When I launch my app, I get this in the console:
Error:
GET http://rest:8080/category net::ERR_NAME_NOT_RESOLVED
The hosts file in the other container is not updated automatically.
What is wrong?
****** UPDATE ******
I have create a network
$docker network create my-network
I have redefined my docker-compose file. Containers connected to that network should be able to reach each other, yet I still get the same error.
When I look in Kitematic, my backend has an IP address, and when I look in the hosts file the IP is not the same.
When I put the Kitematic IP in my controller everything works, but when I use the alias it does not.
So you are trying to use the linked alias inside your browser (Angular) application? Docker only exposes these aliases to the containers themselves. Your local development machine, being outside of the Docker network, does not get these additions to its hosts file and therefore cannot resolve those hostnames to IPs.
Any application running inside a container, like a Node.js backend, can use these aliases. Browsers can't.
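For the browser, one workaround is to call the port published on the Docker host instead. A minimal sketch, assuming the rest container keeps publishing port 8080 as in your compose file:

// Build the base URL from wherever the browser loaded the page,
// so the request hits the Docker host's published port (8080 here)
// instead of a container alias the browser cannot resolve.
app.controller('MainCtrl', function ($scope, $http) {
    var baseUrl = window.location.protocol + '//' + window.location.hostname + ':8080';
    $http.get(baseUrl + '/category').then(function (response) {
        $scope.categories = response.data;
    });
});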
If you create a new network, any container connected to that network can reach other containers by their name or service name.
Create the network:
$ docker network create your-network
docker-compose.yml
rest:
  image: restapi
  container_name: restapi
  ports:
    - "8080:8080"
  net: your-network
frontend:
  image: frontend
  container_name: frontend
  ports:
    - "80:80"
  net: your-network
Note: if you use docker-compose file format version 2, Compose will create the network for you.
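For reference, a minimal sketch of the same two services in the version 2 format; Compose creates a default network for the project and the services can reach each other by service name:

version: '2'
services:
  rest:
    image: restapi
    container_name: restapi
    ports:
      - "8080:8080"
  frontend:
    image: frontend
    container_name: frontend
    ports:
      - "80:80"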
You can try linking the two containers in the configuration file:
rest:
  image: restapi
  container_name: restapi
  ports:
    - "8080:8080"
frontend:
  image: frontend
  container_name: frontend
  ports:
    - "80:80"
  links:
    - rest
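With the link in place, the alias resolves inside the frontend container (but, as noted above, not in your browser). A quick way to verify, assuming curl is available in the image:

$ docker exec -it frontend curl http://rest:8080/category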
Related
I'm new to Docker and containers in general. I'm trying to containerize a simple MERN-based todo list application. Locally on my PC, I can successfully send HTTP POST requests from my React frontend to my Node.js/Express backend and create a new todo item. I use the 'proxy' field in my client folder's package.json file, as shown below:
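{
  "proxy": "http://localhost:3001"
}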
React starts up on port 3000, my API server starts up on 3001, and with the proxy field defined, all is good locally.
My issue arises when I containerize the three services (i.e. React, API server, and MongoDB). When I try to make the same fetch POST request, I receive the following console error:
I will provide the code for my docker-compose file below; perhaps it is useful in finding a solution.
version: '3.7'
services:
  client:
    depends_on:
      - server
    build:
      context: ./client
      dockerfile: Dockerfile
    image: jlcomp03/rajant-client
    container_name: container_client
    command: npm start
    volumes:
      - ./client/src/:/usr/app/src
      - ./client/public:/usr/app/public
      # - /usr/app/node_modules
    ports:
      - "3000:3000"
    networks:
      - frontend
    stdin_open: true
    tty: true
  server:
    depends_on:
      - mongo
    build:
      context: ./server
      dockerfile: Dockerfile
    image: jlcomp03/rajant-server
    container_name: container_server
    # command: /usr/src/app/node_modules/.bin/nodemon server.js
    volumes:
      - ./server/src:/usr/app/src
      # - /usr/src/app/node_modules
    ports:
      - "3001:3001"
    links:
      - mongo
    environment:
      - NODE_ENV=development
      - MONGODB_CONNSTRING=mongodb://container_mongodb:27017/todo_db
    networks:
      - frontend
      - backend
  mongo:
    image: mongo
    restart: always
    container_name: container_mongodb
    volumes:
      - mongo-data:/data/db
    ports:
      - "27017:27017"
    networks:
      - backend
volumes:
  mongo-data:
    driver: local
  node_modules:
  web-root:
    driver: local
networks:
  backend:
    driver: bridge
  frontend:
My intuition tells me the issue lies in some configuration parameter I am not addressing in my docker-compose.yml file. Please help!
Your proxy config won't work with containers because of its use of localhost.
The Docker bridge network docs provide some insight why:
Containers on the default bridge network can only access each other by IP addresses, unless you use the --link option, which is considered legacy. On a user-defined bridge network, containers can resolve each other by name or alias.
I'd suggest creating your own bridge network and communicating via container name or alias.
{
  "proxy": "http://container_server:3001"
}
Another option is to use http://host.docker.internal:3001.
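Either way, with the proxy pointing at a name the dev server can resolve, the React code can keep using relative paths and let the dev server forward the request. A sketch, with an illustrative endpoint path:

// The CRA dev server receives this request and, via the proxy setting,
// forwards it to the API server. '/api/todos' is an illustrative path.
fetch('/api/todos', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ title: 'new todo' })
});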
https://github.com/hodanov/react-django-postgres-sample-app
I want to deploy the DB, API, and FRONT containers from the above repository to AWS ECS so that they can run there.
To operate the containers separately, I split the docker-compose.yaml file per container.
I pushed the images to ECR and ran them with ECS, but they stop no matter what I do.
What should I review?
I spent some time on this. Where do I start :)
First and foremost, the compose file has bind mounts that can't work when you deploy to ECS/Fargate (because the locally mounted files do not exist on the remote end). So these are the three things that need to be done:
you need to remove the bind mounts for the rest_api and the web_front services.
you need to append ADD ./code/django_rest_api/ /code to DockerfilePython and ADD ./code/django_web_front/ /code to DockerfileNode (see the sketch after this list). This basically simulates the bind mount and bakes the content of the folder right into the image(s).
you need to add the image field to the three services. That will allow you to docker compose build and docker compose push so that the images are available to be deployed in the cloud.
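A sketch of the appended lines (paths taken from the repo layout above):

# appended to DockerfilePython
ADD ./code/django_rest_api/ /code

# appended to DockerfileNode
ADD ./code/django_web_front/ /code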
This is my resulting compose file (you need to change the image repos to your own):
version: "3"
services:
  db:
    container_name: django_db
    image: mreferre/django_db:latest
    build:
      context: .
      dockerfile: DockerfilePostgres
    volumes:
      - django_data_volume:/var/lib/postgresql/data
  rest_api:
    container_name: django_rest_api
    image: mreferre/django_rest_api:latest
    build:
      context: .
      dockerfile: DockerfilePython
    command: ./run-migrate-and-server.sh
    # volumes:
    #   - ./code/django_rest_api:/code
    tty: true
    ports:
      - 8000:8000
    depends_on:
      - db
  web_front:
    container_name: django_web_front
    image: mreferre/django_web_front:latest
    build:
      context: .
      dockerfile: DockerfileNode
    command: ./npm-install-and-start.sh
    # volumes:
    #   - ./code/django_web_front:/code
    tty: true
    ports:
      - 3000:3000
    depends_on:
      - rest_api
volumes:
  django_data_volume:
This will deploy just fine with docker compose up in the ECS context, and the API server will work. HOWEVER, the React front end will not work, because the App.js code points to localhost:
const constants = {
  "url": "http://localhost:8000/api/profile/?format=json"
}
I don't know what the React equivalent is, but in Angular you would point to something along the lines of window.location.href to tell the app to call the same endpoint your browser connected to.
If you fix this problem then it should just work(tm).
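For what it's worth, a sketch of that idea in React, reusing the constants object from the snippet above:

// Point at the same host the browser loaded the page from,
// instead of hard-coding localhost. Port 8000 matches the
// published port of the rest_api service above.
const constants = {
  "url": window.location.protocol + "//" + window.location.hostname +
         ":8000/api/profile/?format=json"
}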
So, I'm stuck on an issue where files are stored on the server but I'm not able to display them in the frontend.
My project is:
React + Redux using Docker
The React app is full, i.e., there's an API folder for the backend (react/redux), a CLIENT folder for the frontend (react/libraries), and MongoDB as the DB.
Docker Compose creates these 3 parts, API, CLIENT and MONGO, in just 1 container.
So, in the frontend, the user is able to select an image as an avatar, and this image is sent through the layers and saved in a specific folder (NOT build/public etc.) inside the API Docker image. It's possible to remove/delete and re-select it. Everything's working fine!
The issue is displaying this image in the frontend. The avatar component uses an image src to display it, but I can't find a valid URL that would let the frontend see that image file saved in the API/server.
Since it's inside a container, I tried all the possibilities I could find in the Docker documentation... I think the solution lies in the networks option of docker-compose, but even so I couldn't make it work.
Docker Compose File:
version: '3.8'
services:
  client:
    build: ./client
    stdin_open: true
    image: my-client
    restart: always
    ports:
      - "3000:3000"
    volumes:
      - ./client:/client
      - /client/node_modules
    depends_on:
      - api
    networks:
      mynetwork:
        ipv4_address: 172.19.0.9
  api:
    build: ./api
    image: my-api
    restart: always
    ports:
      - "3003:3003"
    volumes:
      - ./api:/api
      - logs:/api/logs
      - /api/node_modules
    depends_on:
      - mongo
    networks:
      mynetwork:
        ipv4_address: 172.19.0.10
  mongo:
    image: mongo
    restart: always
    ports:
      - "27017:27017"
    volumes:
      - mongo_data:/data/db
    networks:
      - mynetwork
volumes:
  mongo_data:
  logs:
networks:
  mynetwork:
    driver: bridge
    ipam:
      config:
        - subnet: "172.19.0.0/24"
To summarize: there's a folder on the API side with images/files and I want to reference them with something like
<img src="mynetwork:3003/imagefolder/imagefile.png">
I can't believe I have to use this other solution... Another Stack Overflow reply.
I have been researching how to connect multiple Docker containers in the same compose file to a database (MySQL/MariaDB) on the local host. Currently the database is containerized for development, but production requires a separate database. Eventually, the database will be deployed to AWS or Azure.
There are lots of similar questions on SO, but none that seem to address this particular situation.
Given the existing docker-compose.yml:
version: '3.1'
services:
  db:
    image: mariadb:10.3
    volumes:
      - "~/data/lib/mysql:/var/lib/mysql:Z"
  api:
    image: t-api:latest
    depends_on:
      - db
  web:
    image: t-web:latest
  scan:
    image: t-scan:latest
  proxy:
    build:
      context: .
      dockerfile: nginx.Dockerfile
    image: t-proxy
    depends_on:
      - web
    ports:
      - 80:80
All these services are reverse proxied behind nginx, with both the api and scan services requiring access to the database. There are other services requiring database access, not shown for simplicity.
The production compose file would be:
version: '3.1'
services:
  api:
    image: t-api:latest
    depends_on:
      - db
  web:
    image: t-web:latest
  scan:
    image: t-scan:latest
  proxy:
    build:
      context: .
      dockerfile: nginx.Dockerfile
    image: t-proxy
    depends_on:
      - web
    ports:
      - 80:80
If there was a single container requiring database access, I could just open up port 3306:3306, but that won't work for multiple containers.
Splitting up the containers breaks the reverse proxy and adds complexity to deployment and management. I've tried extra_hosts:
extra_hosts:
  - "myhost:xx.xx.xx.xx"
but this generates EAI_AGAIN DNS errors, which is strange because you can ping the host from inside the containers. I realize this may not be possible.
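One thing my research turned up, which I haven't verified yet: on Docker 20.10 and later there is a special host-gateway entry that maps a hostname to the host from inside a container, something like:

extra_hosts:
  - "host.docker.internal:host-gateway"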
I've developed an Angular app that communicates with a uWSGI Flask API through Nginx. Currently I have 3 containers (Angular [web_admin], API [api_admin], Nginx [nginx]).
When I run it on my development machine, the communication works fine. The Angular requests go to the URL http://localhost:5000 and the API responds well; everything works.
But when I deployed it to my production server, I noticed that the application does not work, because port 5000 is not open in my firewall.
My question is simple: how do I make the Angular container call the API container through the internal network, instead of calling it externally?
version: '2'
services:
  data:
    build: data
  neo4j:
    image: neo4j:3.0
    networks:
      - back
    volumes_from:
      - data
    ports:
      - "7474:7474"
      - "7473:7473"
      - "7687:7687"
    volumes:
      - /var/diariooficial/neo4j/data:/data
  web_admin:
    build: frontend/web
    networks:
      - front
      - back
    ports:
      - "8001:8001"
    depends_on:
      - api_admin
    links:
      - "api_admin:api_admin"
    volumes:
      - /var/diariooficial/upload/diario_oficial/:/var/diariooficial/upload/diario_oficial/
  api_admin:
    build: backend/api
    volumes_from:
      - data
    networks:
      - back
    ports:
      - "5000:5000"
    depends_on:
      - neo4j
      - neo4jtest
    volumes:
      - /var/diariooficial/upload/diario_oficial/:/var/diariooficial/upload/diario_oficial/
  nginx:
    build: nginx
    volumes_from:
      - data
    networks:
      - back
      - front
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/diariooficial/log/nginx:/var/log/nginx
    depends_on:
      - api_admin
      - web_admin
networks:
  front:
  back:
Links create DNS names on the network for the services. You should have the web_admin service talk to api_admin:5000 instead of localhost:5000. The api_admin DNS name will resolve to the IP address of the api_admin service.
See https://docs.docker.com/compose/networking/ for an explanation, specifically:
Each container can now look up the hostname web or db and get back the appropriate container’s IP address. For example, web’s application code could connect to the URL postgres://db:5432 and start using the Postgres database.
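For a quick check from inside the web_admin container (assuming curl is available in the image):

$ docker-compose exec web_admin curl http://api_admin:5000/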