Docker container on EC2: frontend doesn't connect to backend - reactjs

I have deployed a single docker container with a backend and a frontend on it. For various reasons, it is much easier for me to do it this way.
The Docker container works fine locally and the FE and BE interact. However, once it's deployed to the EC2 instance, only the FE is accessible and it can't connect to the BE.
The FE is a React app running on port 3000. The BE is a Node/Express backend with a nodemon server running on port 5000. I know nodemon belongs in a dev environment, but if it runs fine locally in a Docker container, there's no reason it shouldn't work on the EC2 instance, right?
I have the security groups configured correctly for both ports, and I've checked that the container is listening on those ports, which it is.
I feel a bit out of my depth here. Aside from the entire application, is there anything I can provide that would help identify the issue?
Dockerfile:
FROM node:16.17.0
WORKDIR /client
COPY ./client/package.json ./client/package.json
RUN npm i
COPY ./client ./client
WORKDIR /server
COPY ./server/package.json ./server/package.json
RUN npm i
COPY ./server ./server
EXPOSE 3000 5000
WORKDIR /client
CMD ["npm", "run", "remote-start"]
The remote-start script launches the client and server in tandem. As I said, this works fine locally.
I also have configured in the client's package.json the following:
"proxy": "http://<IP-Address>:5000"
That works fine for the local Docker container when it's set to http://localhost:5000.

How to Start a React App and Express Backend in One Docker Container on EC2
OK, I've looked at this. Here's how to get a React FE and Node BE to connect on an EC2 instance. This presumes the structure I assume you have from your Dockerfile:
client
- package.json (starts a server on port 3000)
- all client files
server
- package.json (starts a server on port 5000)
- all server files
Use the Dockerfile you've posted above.
In the React client, use your proxy line in the client's package.json with the public IP address of the EC2 instance:
"proxy": "http://<IP-Address>:5000"
Make sure ports 3000 and 5000 are accessible in your EC2 security groups
There's no need to alter anything about where Express listens - app.listen without a host argument binds on all interfaces by default:
app.listen(5000, () => console.log('Listening on port 5000'));
To start two servers at once, you need a script backing your Docker CMD (I assume your npm run remote-start looks something like this) that can start them concurrently:
"remote-start": "cd .. && cd server && npm run dev & react-scripts start"
Now both the BE and the FE will be running when the container is deployed to your EC2 instance, and both will be accessible via the instance's IP address.
In testing this, I found the EC2 instance might be unresponsive for a while after the container starts. Give it some time and it should work fine.
As mentioned by Zac Anger, you can curl the ports, but use the IP address of the EC2 instance to test whether each server is running.
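For example, with the same <IP-Address> placeholder used in the proxy setting:
curl http://<IP-Address>:3000
curl http://<IP-Address>:5000
If port 3000 responds but port 5000 doesn't, the BE process or the security group rule for port 5000 is the first place to look.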

Related

Front-end application with yarn starting locally but when docker container is run ports are empty

The app loads and I can view it locally on port 3000 by running yarn and then yarn start. The container runs and prints:
Available on:
http://127.0.0.1:3000
http://172.17.0.2:3000
When loading the ports in my browser I see nothing. This error is occurring with two projects I am working on.
For reference these are the front-ends I am trying to place in images.
https://github.com/Uniswap/interface/
https://github.com/safe-global/web-core
I am running node v18.12.1
The Dockerfile:
FROM node:16-alpine
RUN apk add --no-cache libc6-compat git python3 py3-pip make g++
WORKDIR /app
COPY . .
# install deps
RUN yarn
ENV NODE_ENV production
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry during the build.
ENV NEXT_TELEMETRY_DISABLED 1
EXPOSE 3000
ENV PORT 3000
CMD [ "yarn", "start" ]
I am able to get the images to say they compiled all the code and ran. I expected these apps to then load on port 3000 like they do when I run them locally, but this only results in an error in the browser stating "this site can't be reached".
Port 3000 is the port inside the container. You have to publish it to a host port.
For example:
-p 80:3000
This maps port 80 on the host to port 3000 in the container, so from outside you open port 80.
https://docs.docker.com/config/containers/container-networking/
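A minimal sketch that instead keeps the familiar port 3000 on the host (my-frontend is a hypothetical image tag):
docker run -p 3000:3000 my-frontend
# then browse to http://localhost:3000 on the host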

Docker container can't connect to Redis

I have a Docker container running a C application using hiredis, which should write data to a Redis server exposed at the default address and port, running locally on the same Linux device at 127.0.0.1:6379.
The Redis server is running in a different Docker container. I start this container, exposing port 6379, as follows: sudo docker run --name redis_container -d -p 6379:6379 40c68ed3a4d2
redis-cli can connect to this via 127.0.0.1:6379 without issues.
However, no matter what I try, my container that should write to Redis always gets a connection refused error from the C code. This was my last attempt at running the container: sudo docker run --expose=6379 -i 7340dfee8ea5
What exactly am I missing here? Thanks
The C client is running inside a container, which means 127.0.0.1 points to the container itself, not to your host. You should configure the Redis client to use redis_container:6379, as that is the name you used when you ran the Redis container with docker run.
Besides, both containers need to be on the same Docker network. Use the following command to create a simple network:
docker network create my-net
and add --network my-net to both docker run commands (Redis client and Redis server).
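Putting it together with the image IDs from the question (my-net is just an example network name):
docker network create my-net
sudo docker run --name redis_container --network my-net -d -p 6379:6379 40c68ed3a4d2
sudo docker run --network my-net -i 7340dfee8ea5
# the C client in the second container should now connect to
# host "redis_container", port 6379, instead of 127.0.0.1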
You can read more about Docker networks here

anzograph - Cannot connect to the admin console

I set up AnzoGraph DB Free Edition in Docker Desktop on my Mac and ran it (per the commands below), but I can't connect to the admin console.
docker pull cambridgesemantics/anzograph
docker run cambridgesemantics/anzograph
When I use the inspect feature in Docker Desktop’s Dashboard, all of the ports for the running image are “not binded”. I would have expected to connect on port 5600 but that doesn’t work – not with localhost, not with 0.0.0.0, not with 127.0.0.1 …
Am I perhaps missing some pre-requisite? I allocated 8 GB of memory to Docker.
From the information you documented, what you are seeing is expected: your command does not publish any ports.
What you entered was the following:
docker run cambridgesemantics/anzograph
What you should run instead, as documented on the AnzoGraph download page, publishes the ports explicitly:
docker run -d -p 80:8080 -p 443:8443 --name=anzograph cambridgesemantics/anzograph:latest
The AnzoGraph frontend binds to ports 8443 (HTTPS) and 8080 (HTTP);
the AnzoGraph DB binds to ports 5600 (gRPC DB management) and 5700 (gRPC DB query) inside the Docker container.
Docker Desktop for Mac maps these container-internal ports to ports on localhost. If you do not tell Docker how to map them, they are not published at all (and with -P, Docker picks random host ports). By specifying the mapping
docker run -d -p 80:8080 -p 443:8443 -p 5600:5600 -p 5700:5700 --name=anzograph cambridgesemantics/anzograph:2.1.1-latest
you tell Docker which localhost ports to use (-p <localhost port>:<port inside the container>).
Many users new to Docker struggle with this when they use Kitematic or similar UIs: those tools make it simple to deploy a running container, but leave you to work out the randomly assigned ports yourself.
So if you are new to Docker and do not want to use Kubernetes yet, use the command line to specify the localhost ports - it ends up being easier.
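If you do end up with unknown mappings, docker port lists them for a running container (using the container name from the command above):
docker port anzograph
# prints lines such as: 8080/tcp -> 0.0.0.0:80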

Cannot open a React app in the browser after dockerising

I'm trying to dockerise a react app. I'm using the following Dockerfile to achieve this.
# base image
FROM node:9.4
# set working directory
WORKDIR /usr/src/app
# install and cache app dependencies
COPY package*.json ./
ADD package.json /usr/src/app/package.json
RUN npm install
# Bundle app source
COPY . .
# Specify port
EXPOSE 8081
# start app
CMD ["npm", "start"]
Also, in my package.json the start script is defined as
"scripts": {
"start": "webpack-dev-server --mode development --open",
....
}
I build the image as:
docker build . -t myimage
And I finally run the image as:
docker run IMAGE_ID
This command then runs the image; however, when I go to localhost:8080 or localhost:8081 I don't see anything.
However, when I go into the Docker container for myimage and do curl -X GET http://localhost:8080, I'm able to access my React app.
I also deployed this on Google Kubernetes Engine and exposed a load-balancer service on it. However, the same thing happened: I cannot access the React app on the exposed endpoint, but when I logged into the container and made a curl request, I got back the index.html.
So, how do I run this Docker image so that I can access the application through a browser?
When you use EXPOSE in a Dockerfile it simply states that the service is listening on the specified port (in your case 8081), but it does not actually create any port forwarding.
To actually forward traffic from the host machine to the service, you must use the -p flag to specify the port mapping.
For example:
docker run -d -p 80:8080 myimage would start a container and forward requests from localhost:80 to the container's port 8080.
More about EXPOSE here https://docs.docker.com/engine/reference/builder/#expose
UPDATE
Usually when you develop Node applications locally and run webpack-dev-server, it listens on 127.0.0.1, which is fine since you intend to visit the site from the same machine it is hosted on. But a Docker container can be thought of as a separate instance, which means you need to be able to access it from the "outside" world, so the dev server has to be reconfigured to listen on 0.0.0.0 (basically, all IP addresses assigned to the "instance").
So by updating the dev-server config to listen on 0.0.0.0, you should be able to visit your application from your host machine.
Link to documentation: https://webpack.js.org/configuration/dev-server/#devserverhost
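A minimal sketch of that change, assuming a webpack.config.js and the port 8081 from the Dockerfile above:
// webpack.config.js
module.exports = {
  // ...existing entry/output/module settings...
  devServer: {
    host: '0.0.0.0', // listen on all interfaces so the container is reachable from outside
    port: 8081,      // match the port you publish with -p
  },
};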

Why does docker run do nothing when I try to run my app?

I made a website in React and I'm trying to deploy it to an Nginx server using Docker. My Dockerfile is in the root folder of my project and looks like this:
FROM tiangolo/node-frontend:10 as build-stage
WORKDIR /app
COPY . ./
RUN yarn run build
# Stage 1, based on Nginx, to have only the compiled app, ready for production with Nginx
FROM nginx:1.15
COPY --from=build-stage /app/build/ /usr/share/nginx/html
# Copy the default nginx.conf provided by tiangolo/node-frontend
COPY --from=build-stage /nginx.conf /etc/nginx/conf.d/default.conf
When I run docker build -t mywebsite . in the Docker terminal, I get a small warning that I'm building a Docker image from Windows against a non-Windows Docker host, but that doesn't seem to be a problem.
However, when I run docker run mywebsite, nothing happens at all.
In case it's necessary, my project website is hosted on GitHub: https://github.com/rgomez96/Tecnolab
What are you expecting? Nothing will happen on the console except the nginx log.
You should see something happening if you go to http://<ip_of_your_container>.
Otherwise, you can just launch your container with this command :
docker container run -d -p 80:80 mywebsite
With this command you'll be able to reach your nginx at http://localhost, as you are forwarding all traffic from port 80 on your host to port 80 in your container.
