I'm deploying a dockerized React app to Heroku (via Travis CI, though I don't think that matters here). Heroku recommends not running as root even in development, since their production containers don't run as root, by adding:
RUN adduser -D myuser
USER myuser
However, this Dockerfile, which was running fine until I added that chunk (or similar), now fails:
# pull official base image
FROM node:13.12.0-alpine
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package.json ./
COPY package-lock.json ./
RUN npm install && npm audit fix --force
# add app
COPY . ./
### ------------------- works without this section ------------------- ###
RUN addgroup -g 1001 -S appuser && \
    adduser -u 1001 -S appuser -G appuser && \
    chown -R appuser ./
USER appuser
### ------------------------------------------------------------------ ###
# start app
CMD ["npm", "start"]
Docker Desktop outputs:
> react-scripts start
ℹ 「wds」: Project is running at http://172.18.0.2/
ℹ 「wds」: webpack output is served from
ℹ 「wds」: Content not from webpack is served from /app/public
ℹ 「wds」: 404s will fallback to /
Starting the development server...
Failed to compile.
EACCES: permission denied, open '/app/.eslintcache'
It all points to an issue with users and permissions, but I don't know whether my syntax is wrong or whether this just isn't what Heroku is actually recommending; either way, it doesn't work for me.
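For debugging this kind of error, one quick check is to compare the uid the container runs as with the numeric owner of the offending file (a sketch; myapp:dev is a hypothetical tag for an image built from this Dockerfile):

# which uid/gid does the container run as?
docker run --rm myapp:dev id
# who owns the file once the host folder is bind-mounted in? (-ln prints numeric ids)
docker run --rm -v "$PWD":/app myapp:dev ls -ln /app/.eslintcache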
I'm still new to Docker, so I don't know if this is all conceptually correct, but I think I solved it.
First I thought I had it working by removing the .:/app volume from docker-compose.yml, but then I realized that, by de-linking the host from the container, I had lost React's hot reloading. So docker-compose.yml has to look like this:
version: '3.8'
services:
  frontend:
    container_name: poeclient
    stdin_open: true
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - '.:/app' # required for hot reloading, needs Dockerfile user = local user
      - '/app/node_modules'
    ports:
      - 3001:3000
    environment:
      - CHOKIDAR_USEPOLLING=true
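With that in place, bringing the stack up is the usual command, and edits made on the host should hot reload inside the container (the host port is 3001 per the mapping above):

docker-compose up --build
# then open http://localhost:3001 and edit a source file to confirm hot reloading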
Also, because my local user id is 1000, I have to use that id in the Dockerfile. Since the Alpine image already ships with a user named 'node' with that id, I don't have to create one in the Dockerfile:
# pull official base image
FROM node:13.12.0-alpine
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package.json ./
COPY package-lock.json ./
RUN npm install && npm audit fix --force
# add app with non root owner
COPY . ./
# switch to the non-root user 'node' (uid 1000) that ships with the alpine image
# switch here rather than earlier so the preceding commands still run as root
# for React hot reloading the uid must be 1000, because the host user is also 1000
USER 1000
# start app
CMD ["npm", "start"]
Despite all this, the .eslintcache file still showed up in the container as belonging to root. The last thing needed was to chown the .eslintcache in my local folder from root to my local user (1000); after that, the container showed an .eslintcache belonging to uid 1000.
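For reference, that host-side fix is a one-liner (a sketch; run it from the project root, and adjust if your local uid/gid differ from 1000):

# hand the root-owned cache file back to the local user so both host and container can write to it
sudo chown 1000:1000 .eslintcache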
I am trying to run a Next.js app locally inside a Docker container. When I run the container, everything works as expected except that my image files fail to render on the page. Inspection via the developer tools indicates a failed network request for those images (no 4xx code is shown). The failed request looks as follows (screenshot omitted).
When I build with npm run build and run the app locally with npm run start, I see this same request succeed. Same success story when I run in development mode with npm run dev.
Here is the section of code utilizing the Next.js Image component:

import Image from "next/image";

<Image
  src="/images/computerStation.png"
  alt=""
  height={300}
  width={600}
/>
And my public directory tree:
root
│
└───public
    │
    └───images
        │
        └───computerStation.png
Given my local build/dev success, my guess is that I am doing something wrong in my Dockerfile. I pretty much just lifted this from the Next.js docs and tweaked it to run with npm instead of yarn. See the Dockerfile below:
# Install dependencies only when needed
FROM node:alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json package-lock.json ./
# npm ci is the npm equivalent of yarn install --frozen-lockfile
RUN npm ci
# Rebuild the source code only when needed
FROM node:alpine AS builder
WORKDIR /app
COPY . .
COPY --from=deps /app/node_modules ./node_modules
RUN npm run build
# Production image, copy all the files and run next
FROM node:alpine AS runner
WORKDIR /app
ENV NODE_ENV production
# You only need to copy next.config.js if you are NOT using the default configuration
# COPY --from=builder /app/next.config.js ./
COPY --from=builder /app/public ./public
COPY --from=builder /app/.next ./.next
COPY --from=builder /app/node_modules ./node_modules
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001
RUN chown -R nextjs:nodejs /app/.next
USER nextjs
EXPOSE 3000
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# The following line disables telemetry during the build.
RUN npx next telemetry disable
CMD ["node_modules/.bin/next", "start"]
Any help on this would be greatly appreciated. Thanks!
You need to check the Node version of your Docker base images; it should be:
FROM node:14-alpine AS deps
FROM node:14-alpine AS builder
FROM node:14-alpine AS runner
Next.js has some problems with Node 16, which is what the unpinned node:alpine tag resolves to.
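If you want to confirm what the floating tag resolves to before pinning, you can ask the image directly (a sketch):

# prints the Node version baked into whatever node:alpine currently points at
docker run --rm node:alpine node --version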
I am deploying a React app to Heroku via Travis CI. I'm fairly sure the fact that I'm using Heroku doesn't affect what I'm about to ask; it's just there for context. Travis successfully deploys the app until I add a testing step (the script section) to .travis.yml:
language: generic
sudo: required
services:
  - docker
before_install:
  - docker build -t myapp:prod -f Dockerfile.prod .
script:
  - docker run -e CI=true myapp:prod npm run test
after_success:
  - docker build -t myapp:prod -f Dockerfile.prod .
  - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_ID" --password-stdin
  - docker push myapp:prod
deploy:
  provider: heroku
  app: myapp
  skip_cleanup: true
  api_key:
    secure: <my_key>
However, my Dockerfile.prod is a multi-stage node + nginx build, where the nginx stage doesn't keep any Node or npm tooling:
# build environment
FROM node:13.12.0-alpine as builder
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package.json ./
COPY package-lock.json ./
# npm ci: clean, reproducible install from package-lock.json
RUN npm ci
RUN npm install react-scripts@3.4.1 -g --silent
COPY . ./
RUN npm run build
# production environment
FROM nginx:stable-alpine
COPY nginx.conf /etc/nginx/conf.d/default.conf
# If using React Router
COPY --from=builder /app/build /usr/share/nginx/html
# For Heroku
CMD sed -i -e 's/$PORT/'"$PORT"'/g' /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'
Therefore, it is my understanding that .travis.yml tries to run that npm run test command inside my nginx container, which can't execute npm commands (no Node installed, right?). So, guided by SO answers such as this one, I started adding commands to that nginx stage, such as:
COPY package.json ./
COPY package-lock.json ./
RUN apk add --update npm
but I realized I might be approaching this the wrong way. Should I perhaps be adding npm through Travis instead? That is, should the script section of .travis.yml include something like docker run -e CI=true myapp:prod apk add --update npm and whatever else is necessary? That would result in a smaller nginx image, no? However, would I then run into problems with the package.json from the node stage of Dockerfile.prod, or anything like that?
In summary: to use Travis CI to test a dockerized React app served with nginx, at what point should I install npm into my image? Does it happen in the script section of .travis.yml, or in Dockerfile.prod? If it is recommended to run the tests inside Dockerfile.prod, would I do that in the first stage (node) or the second (nginx)?
Thanks
EDIT: Not sure if this can be considered solved, but a user on Reddit recommended simply adding RUN npm run test right before RUN npm run build.
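Applied to the build stage above, that suggestion would look something like this (a sketch, not the exact file; CI=true makes react-scripts run the tests once and exit instead of entering watch mode):

# build environment
FROM node:13.12.0-alpine as builder
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json package-lock.json ./
RUN npm ci
COPY . ./
# run the test suite during the image build; a test failure aborts the build
RUN CI=true npm run test
RUN npm run build
# ...nginx stage unchanged...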
I am trying to run a React app in a Docker image, but it exits without an error message.
Dockerfile:
# pull official base image
FROM node:13.12.0-alpine
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package.json ./
COPY package-lock.json ./
RUN npm install
# add app
COPY . ./
# start app
CMD npm start --port 3000
Then I proceeded to build:
docker build -t react-app:latest .
Then I run:
docker run -p 7000:3000 react-app:latest
It gives the following output and then exits (screenshot omitted). This is what I see in the browser (screenshot omitted).
Your container closes because the TTY is not enabled. In order to work, you have to run the container with:
docker run -t -p 7000:3000 react-app:latest
For more info: https://github.com/facebook/create-react-app/issues/8688
But this should only be used for testing/development. In production you should build your React app and then serve the static files, either with serve or with nginx.
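For the serve route, a minimal sketch (assuming a create-react-app layout where the build output lands in /app/build):

# build stage: compile the static bundle
FROM node:13.12.0-alpine as builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . ./
RUN npm run build

# run stage: serve the static files with the `serve` package
FROM node:13.12.0-alpine
RUN npm install -g serve
COPY --from=builder /app/build /app/build
EXPOSE 5000
CMD ["serve", "-s", "/app/build", "-l", "5000"]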
# Dockerfile
# pull base image
FROM node:10.15.1-alpine
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY ./package.json ./
COPY ./yarn.lock ./
RUN yarn install
RUN yarn global add react-scripts@3.4.1 # not sure if this is even necessary
# add app
COPY . ./
# start app
CMD ["yarn", "start"]
Command I'm running to build it:
docker build -t matrix-fe:dev .
Command to run it:
docker run -it --rm -v ${PWD}:/app -v /app/node_modules -p 3000:3000 -e CHOKIDAR_USEPOLLING=true matrix-fe:dev
And this is my docker-compose.yml:
version: '3.7'
services:
  matrix-fe:
    container_name: matrix-fe
    build:
      context: ./a-fe
      dockerfile: Dockerfile
    volumes:
      - './a-fe:/app'
      - '/app/node_modules'
    ports:
      - 3000:3000
    environment:
      - CHOKIDAR_USEPOLLING=true
Then to build and run it:
docker-compose up --build
Error I'm getting:
matrix-fe | yarn run v1.13.0
matrix-fe | $ react-scripts start
matrix-fe | It looks like you're trying to use TypeScript but do not have typescript installed.
matrix-fe | Please install typescript by running yarn add typescript.
matrix-fe | If you are not trying to use TypeScript, please remove the tsconfig.json file from your package root (and any TypeScript files).
matrix-fe |
Why is this happening? I could obviously try installing typescript as well, but it is a dependency in package.json and should already be installed; I also added node_modules to the PATH. How is running the image with and without docker-compose different? Compose does create a different image called matrix_matrix-fe, but the Dockerfile hasn't changed. docker-compose.yml is in the top-level folder; the structure looks like this:
/matrix
  /a-fe
    ./package.json
    ./Dockerfile
    ...
  /a-be
  ./docker-compose.yml
Help me understand what's different; the volumes are the same and the ENV variables are the same, so I'm not seeing anything off.
Edit: I forgot to mention that running the image without docker-compose doesn't output errors; it works properly.
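One way to narrow down a difference like this is to compare what actually ends up in node_modules in each case (a sketch; matrix-fe is the container_name from the compose file above, and the stack must be running for docker exec to work):

# check whether typescript survived the node_modules volume setup under compose
docker exec matrix-fe ls /app/node_modules | grep typescript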
I am getting an error when Travis CI builds my app in a Docker container: the build folder is not coming down. Here are the error logs:
Deploying application
Initialized empty Git repository in /tmp/d20190115-5107-
1w5c6ge/work/.git/
Switched to a new branch 'gh-pages'
cd -
cd /tmp/d20190115-5107-1w5c6ge/work
rsync: change_dir "/app/build" failed: No such file or directory (2)
rsync error: some files/attrs were not transferred (see previous errors)
(code 23) at main.c(1183) [sender=3.1.0]
Could not copy /app/build.
Here are my .travis.yml and Dockerfile:
# Grants super user permissions
sudo: required
# travis ci installs docker into travis container
services:
  - docker
# before tests are run, build the docker image
before_install:
  - docker build -t dvontrec/fn-killers -f Dockerfile.dev .
script:
  # SHOULD ADD TESTS
  - docker run dvontrec/fn-killers pwd
  - docker run dvontrec/fn-killers ls
# Steps before deploy:
before_deploy:
  - docker run dvontrec/fn-killers npm run build
# Steps to deploy to github pages
deploy:
  provider: pages
  skip_cleanup: true
  github_token: $github_token
  on:
    branch: master
FROM node:alpine
WORKDIR './app'
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "run", "start-docker"]
Does anyone know how to get the files down from the container?
I found out what I did wrong: to deploy with Docker you need an nginx stage that copies everything down from the build stage. Here is the Dockerfile I used:
# Build phase
FROM node:alpine as builder
WORKDIR '/app'
COPY package.json .
RUN npm install
COPY . .
RUN npm run build
# Run phase
FROM nginx
EXPOSE 80
COPY --from=builder /app/build /usr/share/nginx/html
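To verify the image locally before wiring it into CI, the build-and-run step would look something like this (a sketch; the tag is arbitrary and 8080 is just a free host port mapped to nginx's port 80):

docker build -t fn-killers:prod .
docker run -p 8080:80 fn-killers:prod
# then open http://localhost:8080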