To get Docker and yarn working on my corporate network, I needed to add a CA certificate to the trust store (for Docker) and set NODE_EXTRA_CA_CERTS for yarn (see here). The Dockerfile for my React application includes yarn install && yarn run build, which gives a "self signed certificate in certificate chain" error. I am able to work around the error by running yarn install on my local machine before building in Docker, removing yarn install from my Dockerfile, and removing node_modules from my .dockerignore file.
How should I be resolving this error? Should I be copying the .pem CA file into the Docker image and setting NODE_EXTRA_CA_CERTS in the Dockerfile?
Dockerfile:
FROM node:15.13-alpine
WORKDIR /react
COPY . .
# RUN yarn config set cafile ./
RUN yarn install && yarn run build
.dockerignore:
node_modules
build
I had the same issue on my corporate network. What worked for me was copying the certificate into the image and letting the OS recognize it by updating the CA certificates.
I added this in my Dockerfile:
# Copy SSL certificates into the image
COPY *.crt /usr/local/share/ca-certificates/
# Update the certificate stores
RUN update-ca-certificates --verbose --fresh && \
npm config set cafile /usr/local/share/ca-certificates/my-custom-root-certificate.crt && \
yarn config set cafile /usr/local/share/ca-certificates/my-custom-root-certificate.crt
The *.crt files are in my Docker build context (at the same level as my Dockerfile).
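To address the question's own suggestion directly: copying the .pem file into the image and pointing NODE_EXTRA_CA_CERTS at it should also work, since Node appends that file to its trusted CAs at runtime. A minimal sketch, assuming the certificate is named corp-root-ca.pem and sits in the build context (the filename and destination path are placeholders):

```dockerfile
FROM node:15.13-alpine
WORKDIR /react

# corp-root-ca.pem is a placeholder name for your corporate root CA
COPY corp-root-ca.pem /usr/local/share/ca-certificates/corp-root-ca.pem

# Node reads this variable and appends the certificate to its trusted CAs,
# which covers yarn/npm without changing their cafile settings
ENV NODE_EXTRA_CA_CERTS=/usr/local/share/ca-certificates/corp-root-ca.pem

COPY . .
RUN yarn install && yarn run build
```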
Related
I am trying to create a dockerfile for a project that has the following folder structure:
HDWD-project
|_ client
| |_ package.json
|_ server
|_ package.json
client is a React app, and I am just working with this for the moment, before including server, which is the backend.
I am having real trouble figuring out the logic of the Dockerfile and have googled furiously for the last two days. All the examples are too easy.
I just can't seem to get the React app to start in the container, and I get varying error messages. But I need to know the Dockerfile is fine before I proceed.
FROM node:latest
WORKDIR HDWD-project
COPY ./client/package.json .
RUN npm install
COPY . .
RUN cd client
CMD ["npm", "start"]
Going forward I have a script that can start both the server and the client, but I'm just trying to get my head around docker and getting the client frontend to run fine.
Would anyone be able to correct me on where the issue in this config is and explain it?
This is a Dockerfile for the frontend (the client in your case). You can put the Dockerfile under your client folder and build the image with docker build -t image-name:tag-name . Note that RUN cd client in your version has no lasting effect: each RUN executes in its own shell, so the directory change is discarded when that layer finishes. Use WORKDIR to change directories instead.
# Pull the latest node image from dockerhub
FROM node:latest
# Create app directory
WORKDIR /usr/src/app
# Copy package.json and package-lock.json to the workdir
COPY package*.json ./
# Install the dependencies
RUN npm install
# Bundle app source
COPY . .
# Run the app in docker
CMD ["npm", "start"]
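For completeness, building and running it might look like this (image-name:tag-name is an assumption carried over from above, and port 3000 is Create React App's default dev-server port):

```shell
# Build from within the client folder, where the Dockerfile lives
docker build -t image-name:tag-name .

# Publish the dev server's port so the app is reachable from the host
docker run -p 3000:3000 image-name:tag-name
```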
The dockerfile builds locally but the GitLab pipeline fails, saying:
Step 3/8 : RUN gem install bundler rake
ERROR: While executing gem ... (Gem::FilePermissionError)
You don't have write permissions for the /usr/local/bundle directory.
The project structure is a Ruby Sinatra backend and a React frontend.
The Dockerfile looks like this
FROM ruby:3.0-alpine
# Install Dependencies
RUN apk update && apk add --no-cache build-base mysql-dev rrdtool
RUN gem install bundler rake
# Copy the files and build
WORKDIR /usr/server
COPY . .
RUN bundler install
# Run bundler
EXPOSE 443
CMD ["bundle", "exec", "puma"]
I thought Docker was meant to solve the problem of "it runs on my machine"...
What I've tried
As per this post, I tried adding -n /usr/local/bundle but it did not fix the issue.
I hope you might have some guidance for me on this.
Right now I have a React app that is part of a Django app (for the sake of easily passing auth login tokens), which is now containerised in a single Dockerfile. Everything works as intended when it is run as a Docker container locally, but the Docker image is having issues, despite the fact that the webpages are visible when the image is deployed on the server.
Specifically, when the Docker image is accessed, the home page renders as expected, but then a number of fetch requests which usually go to localhost:8000/<path>/<to>/<url> return the following error:
GET http://localhost:8000/<path>/<to>/<url> net::ERR_CONNECTION_REFUSED
On a colleague's suggestion, I have tried changing localhost:8000 to the public IP address of the server the Docker image is hosted on (e.g. 172.XX.XX.XXX:8000), but when I rebuild the React app these changes do not persist, and it defaults back to localhost. Here are my questions:
Is this something I change from within the React application itself? Do I need to manually assign an IP address? (This seems unlikely to me.)
Or is this something to do with either the Django port settings, or the Dockerfile itself?
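For context on the rebuild behaviour: Create React App inlines environment variables prefixed with REACT_APP_ into the bundle at build time, so a hard-coded localhost:8000 will keep reappearing unless the base URL comes from such a variable. A minimal sketch (REACT_APP_API_URL is a hypothetical variable name, not something from the project):

```javascript
// Falls back to the local dev server when the variable is not set.
// REACT_APP_API_URL is a hypothetical name; CRA only exposes
// variables prefixed with REACT_APP_ to the client bundle.
const API_BASE = process.env.REACT_APP_API_URL || "http://localhost:8000";

// Build a full request URL from a path
function apiUrl(path) {
  return `${API_BASE}${path}`;
}
```

The server address would then be supplied at build time, e.g. REACT_APP_API_URL=http://172.XX.XX.XXX:8000 yarn build.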
Here is the Dockerfile
FROM ubuntu:18.04
# ...
RUN apt-get update && apt-get install -y \
software-properties-common
RUN add-apt-repository ppa:deadsnakes/ppa
RUN apt-get update && apt-get install -y \
python3.7 \
python3-pip
RUN python3.7 -m pip install pip
RUN apt-get update && apt-get install -y \
python3-distutils \
python3-setuptools
RUN python3.7 -m pip install pip --upgrade pip
# ???
ENV PYTHONUNBUFFERED 1
# copy file from local machine to container
COPY ./requirement.txt /requirement.txt
# install dependencies
RUN pip install -r /requirement.txt
# create app folder in container
RUN mkdir /app
# set default working directory
WORKDIR /app
# copy local app folder to container folder
COPY ./app /app
CMD ["python", "test.py"]
Multiple technologies, multiple failure points - thanks in advance!
I have a Docker Compose environment that has been behaving very erratically.
Here is the setup:
docker-compose.prod.yaml
front_end:
  image: front-end-build
  build:
    context: ./front_end
    dockerfile: front_end.build.dockerfile
nginx:
  build:
    context: ./front_end
    dockerfile: front_end.prod.dockerfile
  ports:
    - 80:80
    - 5000:5000
  environment:
    - CHOKIDAR_USEPOLLING=true
  stdin_open: true
  tty: true
  depends_on:
    - front_end
front_end.build.dockerfile
FROM node:13.12.0-alpine
COPY package.json ./
WORKDIR /srv
RUN yarn install
RUN yarn global add react-scripts
COPY . /srv
RUN yarn build
front_end.prod.dockerfile
FROM nginx
EXPOSE 80
COPY --from=front-end-build /app/build /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d
command:
docker-compose down && docker-compose -f docker-compose.prod.yaml up --build --remove-orphans nginx
It doesn't work, for various reasons on various runs.
After various errors, I'm starting with a docker system prune, which at least "resets" the problems to some starting state.
Various problems include:
yarn install says info There appears to be trouble with your network connection. Retrying... but then proceeds to continue, spitting out various deprecation/incompatibility warnings, and finally getting to "Done".
Following this, it usually takes maybe 60+ seconds to even show "Removing intermediate container" and move on to the next step in the dockerfile.
Sometimes the network error will be all I get, and then yarn install will fail which halts the whole process.
yarn install might not show that network error, but show its various warnings between "Resolving packages" and "Fetching packages", which doesn't seem to make sense although this might be normal.
yarn install might, at any point in this process (including after install is done, during install, or even during yarn build), report that we're out of space: error An unexpected error occurred: "ENOSPC: no space left on device, mkdir '/node_modules/fast-glob/package/out/providers/filters'". or something similar.
The farthest we might get is, in yarn build:
There might be a problem with the project dependency tree.
It is likely not a bug in Create React App, but something you need to fix locally.
The react-scripts package provided by Create React App requires a dependency:
"webpack-dev-server": "3.11.0"
Don't try to install it manually: your package manager does it automatically.
However, a different version of webpack-dev-server was detected higher up in the tree:
/node_modules/webpack-dev-server (version: 3.10.3)
Manually installing incompatible versions is known to cause hard-to-debug issues.
If you would prefer to ignore this check, add SKIP_PREFLIGHT_CHECK=true to an .env file in your project.
That will permanently disable this message but you might encounter other issues.
To fix the dependency tree, try following the steps below in the exact order:
1. Delete package-lock.json (not package.json!) and/or yarn.lock in your project folder.
2. Delete node_modules in your project folder.
3. Remove "webpack-dev-server" from dependencies and/or devDependencies in the package.json file in your project folder.
4. Run npm install or yarn, depending on the package manager you use.
In most cases, this should be enough to fix the problem.
If this has not helped, there are a few other things you can try:
5. If you used npm, install yarn (http://yarnpkg.com/) and repeat the above steps with it instead.
This may help because npm has known issues with package hoisting which may get resolved in future versions.
6. Check if /node_modules/webpack-dev-server is outside your project directory.
For example, you might have accidentally installed something in your home folder.
7. Try running npm ls webpack-dev-server in your project folder.
This will tell you which other package (apart from the expected react-scripts) installed webpack-dev-server.
If nothing else helps, add SKIP_PREFLIGHT_CHECK=true to an .env file in your project.
That would permanently disable this preflight check in case you want to proceed anyway.
P.S. We know this message is long but please read the steps above :-) We hope you find them helpful!
error Command failed with exit code 1.
webpack-dev-server does not actually appear anywhere in my package.json file, so there's nothing for me to change there, but otherwise I've tried those 4 steps. And then the next time I run it I get the "no space left" error.
I'll also say, almost separately from this, that there have been times when, for some reason, it goes through all the steps, except with no output whatsoever for yarn build, not even "Using cache". And this, of course, makes the nginx container fail as it tries to get the build files. Or something like that; honestly, it's been a while. But what does happen when we move on to nginx is that it says "Building nginx" for an absurd amount of time, several minutes, before it even gets to the first step in the nginx dockerfile.
But the problem with the front-end build is so big that the nginx thing is basically a separate issue.
Has anyone experienced (and solved!) anything similar to what I'm experiencing?
What you're attempting here is a multistage build using the old style before Docker 17.05.
The prod Dockerfile depends on the front-end-build image; that's why you see "Building nginx" hang until that image is ready.
You can condense both dockerfiles into one now.
Dockerfile
FROM node:13.12.0-alpine AS front-end-build
WORKDIR /srv
COPY package.json ./
RUN yarn install
RUN yarn global add react-scripts
COPY . /srv
RUN yarn build
FROM nginx
EXPOSE 80
COPY --from=front-end-build /srv/build /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d
docker-compose.yml
nginx:
  build: front_end/
  ports:
    - 80:80
    - 5000:5000
  environment:
    - CHOKIDAR_USEPOLLING=true
  stdin_open: true
  tty: true
Regarding the weird behaviour: in your build Dockerfile you copy package.json to / (the image root), and only afterwards switch WORKDIR to /srv, which is empty, and run yarn install there.
Try moving the COPY so it comes after the WORKDIR:
WORKDIR /srv
COPY package.json ./
https://docs.docker.com/develop/develop-images/multistage-build/
I have a React App, that I build with the following Dockerfile
# base image
FROM node:latest as builder
# set working directory
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
# add `/usr/src/app/node_modules/.bin` to $PATH
ENV PATH /usr/src/app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY app/package.json /usr/src/app/package.json
RUN npm install
RUN npm install react-scripts@1.1.1 -g
COPY ./app /usr/src/app
# start app
CMD ["npm", "start"]
# production environment
FROM nginx:alpine
RUN rm -rf /etc/nginx/conf.d
COPY conf /etc/nginx
COPY --from=builder /usr/src/app/build /etc/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Then I run this with the following Docker Compose
build: .
labels:
  - "traefik.frontend.rule=Host:www.example.com;PathPrefix:/path"
  - "traefik.protocol=http"
  - "traefik.frontend.entryPoints=https"
  - "traefik.port=80"
  - "traefik.enable=true"
restart: always
When calling example.com/path I get a lot of 404 errors, as the React app is not looking for the path but in the root of example.com.
The app is working when run without the PathPrefix and calling example.com directly.
Your app doesn't know that Traefik is adding a prefix.
You need to specify the homepage property in your package.json file to adjust every relative URL used in your app. After rebuilding your app with npm run build, it should be fine.
{
"homepage": "/path"
}
react documentation
I have arrived at a solution to your problem. Let me explain:
The "default" path in production environments for our project is normally /sforms. I wanted to serve the built package, without rebuilding, from any path in production environments, so "homepage": "." was mandatory. Example: /sforms_1.0, /sforms_legacy... etc.
So I have reconfigured the "start" script in package.json in the following way:
"start": "cross-env PUBLIC_URL=/sforms react-scripts start"
In this way I'm able to start my dev environment under /sforms, avoiding the "magic" of Traefik, but keeping "homepage": ".". So in Traefik I only have to redirect all the calls starting with the prefix /sforms to my development server, and all the cases are covered.
Let me paste here my Traefik configuration for that component. Maybe it could be useful:
http:
  routers:
    debugging-oe-sl-form:
      service: debugging-oe-sl-form-proxy
      rule: "PathPrefix(`/sforms`)"
      entryPoints:
        - web
  services:
    debugging-oe-sl-form-proxy:
      loadBalancer:
        servers:
          - url: "http://host.docker.internal:3000"