Cannot open a React app in the browser after dockerising - reactjs

I'm trying to dockerise a react app. I'm using the following Dockerfile to achieve this.
# base image
FROM node:9.4
# set working directory
WORKDIR /usr/src/app
# install and cache app dependencies
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
# Specify port
EXPOSE 8081
# start app
CMD ["npm", "start"]
Also, in my package.json the start script is defined as
"scripts": {
"start": "webpack-dev-server --mode development --open",
....
}
I build the image as:
docker build . -t myimage
And I finally run the image, as
docker run IMAGE_ID
This command then runs the image, however when I go to localhost:8080 or localhost:8081 I don't see anything.
However, when I go into the Docker container for myimage and do curl -X GET http://localhost:8080, I'm able to access my React app.
I also deployed this on Google Kubernetes Engine and exposed a load-balancer service for it. The same thing happened: I cannot access the React app on the exposed endpoint, but when I logged into the container and made the curl request, I got back the index.html.
So, how do I run this Docker image so that I can access the application through a browser?

When you use EXPOSE in a Dockerfile, it simply documents that the service is listening on the specified port (in your case 8081), but it does not actually create any port forwarding.
To actually forward traffic from the host machine to the service, you must use the -p flag to specify a port mapping.
For example:
docker run -d -p 80:8080 myimage would start a container and forward requests made to localhost:80 on the host to port 8080 inside the container.
More about EXPOSE here: https://docs.docker.com/engine/reference/builder/#expose
UPDATE
Usually, when you develop Node applications locally and run webpack-dev-server, it listens on 127.0.0.1, which is fine since you intend to visit the site from the same machine it is hosted on. But a Docker container can be thought of as a separate instance, which means you need to be able to access it from the "outside" world, so it is necessary to reconfigure the dev server to listen on 0.0.0.0 (which basically means all IP addresses assigned to the "instance").
By updating the dev-server config to listen on 0.0.0.0, you should be able to visit your application from your host machine.
Link to documentation: https://webpack.js.org/configuration/dev-server/#devserverhost
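As a sketch (assuming the webpack-dev-server version in use supports the --host and --port CLI flags, and keeping the question's port 8081), the start script can pass the host on the command line instead of editing the config, and the container port is then published with -p:

```shell
# In package.json, make the dev server bind to all interfaces
# (adjust to your own setup):
#   "start": "webpack-dev-server --mode development --host 0.0.0.0 --port 8081"

# Rebuild and run, publishing container port 8081 on host port 8081:
docker build . -t myimage
docker run -p 8081:8081 myimage

# The app should now be reachable from the host:
curl http://localhost:8081
```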

Related

Front-end application with yarn starting locally but when docker container is run ports are empty

The app loads and I can view it locally on port 3000 by running yarn and then yarn start. The container runs and says it is
Available on:
http://127.0.0.1:3000
http://172.17.0.2:3000
When loading the ports in my browser I see nothing. This error is occurring with two projects I am working on.
For reference these are the front-ends I am trying to place in images.
https://github.com/Uniswap/interface/
https://github.com/safe-global/web-core
I am running node v18.12.1
The Dockerfile
FROM node:16-alpine
RUN apk add --no-cache libc6-compat git python3 py3-pip make g++
WORKDIR /app
COPY . .
# install deps
RUN yarn
ENV NODE_ENV production
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry during the build.
ENV NEXT_TELEMETRY_DISABLED 1
EXPOSE 3000
ENV PORT 3000
CMD [ "yarn", "start" ]
I am able to get the images to say they compiled all the code and run locally. I was expecting these apps to load on port 3000 like they do when I run them locally, but this only results in an error in the browser stating "This site can't be reached".
Port 3000 is the port inside the container. You have to publish it to the host.
For example:
-p 80:3000
publishes container port 3000 on host port 80, so you open port 80 from outside.
https://docs.docker.com/config/containers/container-networking/
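Concretely (a sketch, assuming the image was built and tagged myimage, and that the Next.js server keeps port 3000 from the ENV PORT 3000 line), publishing the port looks like:

```shell
docker build -t myimage .
docker run -d -p 3000:3000 myimage
# then open http://localhost:3000 in the browser,
# or publish on host port 80 instead:
# docker run -d -p 80:3000 myimage
```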

Docker container on EC2 frontend doesn't connect to backend

I have deployed a single docker container with a backend and a frontend on it. For various reasons, it is much easier for me to do it this way.
The docker container works fine locally and the FE and BE interact. However, once it's deployed to the EC2 device, only the FE is accessible and it can't connect to the BE.
The FE is a React app running on port 3000. The BE is a Node/Express backend with a nodemon server running on port 5000. I know nodemon should be in a dev environment, but if I got it running locally in a Docker container, there's no reason it shouldn't work on the EC2 device, right?
I have security groups configured correctly for both ports, and I've checked that the container is running on those ports. Which it is.
I feel a bit out of my depth here. Aside from the entire application, is there anything I can provide that would help identify what is at issue?
Dockerfile:
FROM node:16.17.0
WORKDIR /client
COPY ./client/package.json ./client/package.json
RUN npm i
COPY ./client ./client
WORKDIR /server
COPY ./server/package.json ./server/package.json
RUN npm i
COPY ./server ./server
EXPOSE 3000 5000
WORKDIR /client
CMD ["npm", "run", "remote-start"]
The remote start script launches the servers of client and server in tandem. As said, this works fine locally.
I also have configured in the client's package.json the following:
"proxy": "http://<IP-Address>:5000"
That works fine for the local docker container when it's localhost:5000
How to Start a React App and Express Backend in One Docker Container on EC2
OK, I've looked at this. The way to get a React FE and Node BE to connect on an EC2 device is the following. This presumes the structure implied by your Dockerfile:
client
- package.json (starts a server on p 3000)
- all client files
server
- package.json (starts a server on p 5000)
- all server files
Use the Dockerfile you've posted above.
In the react-app client, use your proxy line in the client's package.json with the public IP address of the EC2 device
"proxy": "http://<IP-Address>:5000"
Make sure ports 3000 and 5000 are accessible in your EC2 security groups
There's no need to alter anything about where Express listens:
app.listen(5000, () => { /* ... */ });
To start two servers at once, you need a script to support your Docker CMD (I assume your npm run remote-start is something like this) that can start them concurrently:
"remote-start": "cd .. && cd server && npm run dev & react-scripts start"
Now both the BE and the FE will be running when the container is deployed to your EC2 device and will be accessible via the IP address of the EC2 device.
In testing this, I found the EC2 device might freeze when the container starts. Give it some time and it should work fine.
As mentioned by Zac Anger, you can curl the ports, but use the IP address of the EC2 device to test if it's running.
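For example (a sketch; replace <EC2-IP> with the instance's public IP, and make sure the security group allows both ports), you can verify both servers from your own machine once the container is up:

```shell
# front-end (react dev server)
curl -I http://<EC2-IP>:3000

# back-end (express)
curl -I http://<EC2-IP>:5000
```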

Running a React App in Prod using Docker & Nginx

I'm running a react app in prod in a docker container using the following Dockerfile
FROM node:16-alpine3.15 AS builder
WORKDIR /app
COPY . .
RUN yarn install && yarn build
FROM nginx
WORKDIR /usr/share/nginx/html
RUN rm -rf ./*
COPY --from=builder /app/build .
ENTRYPOINT ["nginx", "-g", "daemon off;"]
This runs the app at localhost:8080
I want to move this code to a remote host now. I have an IP address, say 54.22.33.99, and I want to spin up the container and serve the app at the root, 54.22.33.99/, and not on 54.22.33.99:8080/.
Can someone help me understand, how to do this?
TIA
When you have Docker installed on the remote machine, you need to get the image onto the remote system. There are a few ways to do that:
Copy the Dockerfile and your source code to the remote machine and build it there
docker save the image on your local machine, copy the tar file to the remote machine and docker load it.
Push your image to a remote Docker registry and docker pull it on the remote host
When you have the image on the remote machine, you need to map the port it's listening on to port 80 on the host machine. Port 80 is the default HTTP port, so when you don't type a port number in a URL, the request is made to port 80. If your app listens on port 8080 in the container, you map it to port 80 by using the -p 80:8080 option on your docker run command.
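As a sketch of the save/load option (assuming the image is tagged mywebsite and the nginx stage listens on its default port 80 inside the container):

```shell
# on the local machine
docker save -o mywebsite.tar mywebsite
scp mywebsite.tar user@54.22.33.99:~

# on the remote machine
docker load -i mywebsite.tar
docker run -d -p 80:80 mywebsite
# the app is now served at http://54.22.33.99/ (port 80, no :8080 needed)
```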

Running a react app inside a linux docker container

I created a Linux container in Docker with all the packages and dependencies I need for my school project. I am aware you can deploy React app containers and use Docker for deployment, but I did not want that. I just needed a Linux container with everything installed so all the members of the team would use the same versions of npm and node. After building the container, I ran inside my workdir folder:
npx create-react-app my-app
cd my-app
npm start
and this is what it shows:
(screenshot of the create-react-app dev server output)
which means that the app is running. How can I see it locally on my PC?
use this to run your image:
docker run -d -p 8080:8080 my_image
-p 8080:8080 will map your Docker container's port 8080 to your localhost's port 8080, and you should be able to just go to http://localhost:8080 to see it.
(assuming that npm start is starting the server on 8080 inside your container)
-d means detached mode: you start the container and stay outside of it.
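Note that create-react-app's dev server defaults to port 3000 rather than 8080, so if you haven't changed it, the mapping would be (a sketch, image name as in the answer above):

```shell
docker run -d -p 3000:3000 my_image
# then open http://localhost:3000
```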

Why does docker run do nothing when I try to run my app?

I made a website to React and I'm trying to deploy it to an Nginx server by using Docker. My Dockerfile is in the root folder of my project and looks like this:
FROM tiangolo/node-frontend:10 as build-stage
WORKDIR /app
COPY . ./
RUN yarn run build
# Stage 1, based on Nginx, to have only the compiled app, ready for production with Nginx
FROM nginx:1.15
COPY --from=build-stage /app/build/ /usr/share/nginx/html
# Copy the default nginx.conf provided by tiangolo/node-frontend
COPY --from=build-stage /nginx.conf /etc/nginx/conf.d/default.conf
When I run docker build -t mywebsite . in the Docker terminal, I receive a small warning that I'm building a Docker image from Windows against a non-Windows Docker host, but that doesn't seem to be a problem.
However, when I run docker run mywebsite nothing happens, at all.
In case it's necessary, my project website is hosted on GitHub: https://github.com/rgomez96/Tecnolab
What are you expecting? Nothing will happen on the console except the nginx log.
You should see something happening if you go to http://<ip_of_your_container>.
Otherwise, you can just launch your container with this command:
docker container run -d -p 80:80 mywebsite
With this command you'll be able to connect to your nginx at this address http://localhost as you are forwarding all traffic from the port 80 of your container to the port 80 of your host.
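A quick way to confirm it's serving (a sketch, assuming the container was started with the port mapping above):

```shell
docker container run -d -p 80:80 mywebsite
docker ps        # the PORTS column should show 0.0.0.0:80->80/tcp
curl -I http://localhost
```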
