My Dockerfile uses Alpine and globally installs react-scripts. When it tries to install it, it fails with a "could not get uid/gid" error. I added the "--unsafe-perm" option to the npm install -g command. The Docker container is successfully created, but the permissions in the container are messed up for the installed files: I see the user and group set to 1000 for all of them. I tried adding the following command to the Dockerfile right before the install step, but that didn't help.
RUN npm -g config set user root
Build error
Error: could not get uid/gid
[ 'nobody', 0 ]
at /usr/local/lib/node_modules/npm/node_modules/uid-number/uid-number.js:37:16
at ChildProcess.exithandler (child_process.js:296:5)
at ChildProcess.emit (events.js:182:13)
at maybeClose (internal/child_process.js:961:16)
at Process.ChildProcess._handle.onexit (internal/child_process.js:250:5)
TypeError: Cannot read property 'get' of undefined
at errorHandler (/usr/local/lib/node_modules/npm/lib/utils/error-handler.js:205:18)
at /usr/local/lib/node_modules/npm/bin/npm-cli.js:76:20
at cb (/usr/local/lib/node_modules/npm/lib/npm.js:228:22)
at /usr/local/lib/node_modules/npm/lib/npm.js:266:24
at /usr/local/lib/node_modules/npm/lib/config/core.js:83:7
at Array.forEach (<anonymous>)
at /usr/local/lib/node_modules/npm/lib/config/core.js:82:13
at f (/usr/local/lib/node_modules/npm/node_modules/once/once.js:25:25)
at afterExtras (/usr/local/lib/node_modules/npm/lib/config/core.js:173:20)
at Conf.<anonymous> (/usr/local/lib/node_modules/npm/lib/config/core.js:231:22)
/usr/local/lib/node_modules/npm/lib/utils/error-handler.js:205
if (npm.config.get('json')) {
^
TypeError: Cannot read property 'get' of undefined
at process.errorHandler (/usr/local/lib/node_modules/npm/lib/utils/error-handler.js:205:18)
at process.emit (events.js:182:13)
at process._fatalException (internal/bootstrap/node.js:472:27)
ERROR: Service 'sample-app' failed to build: The command '/bin/sh -c npm install react-scripts@1.1.1 -g' returned a non-zero code:
Dockerfile
/usr/src/app # cat Dockerfile
# build environment
FROM node:10-alpine as builder
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
COPY package.json /usr/src/app/package.json
RUN npm install
RUN npm install react-scripts@1.1.1 -g
COPY . /usr/src/app
RUN npm run build
# production environment
FROM nginx:1.13.9-alpine
COPY --from=builder /usr/src/app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
UPD: Fixed in nodejs@12.4.0?
Check if this is linked to nodejs/docker-node issue 813:
Root cause seems to be: Thread stack size
The default stack size for new threads on glibc is determined based on the resource limit governing the main thread’s stack (RLIMIT_STACK).
It generally ends up being 2-10 MB.
There are three possible solutions:
1. Talk to the Alpine team to fix it. There have been some discussions already.
2. Fix it in the node Docker Alpine image.
3. Set default npm_config_unsafe_perm=true in the Docker image as a workaround until it's fixed.
You already tried the third option. Alternatively, you could switch to the slim (Debian) variant until this gets fixed upstream by the Alpine team.
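For reference, here is a minimal sketch of that workaround applied to the builder stage from the question (npm picks up unsafe-perm from the npm_config_unsafe_perm environment variable; the version pin is only illustrative):
# build environment
FROM node:10-alpine as builder
# workaround for "could not get uid/gid" until the Alpine/node issue is fixed upstream
ENV npm_config_unsafe_perm=true
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
COPY package.json ./
RUN npm install
RUN npm install react-scripts@1.1.1 -g
COPY . ./
RUN npm run build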
I faced the same issue with the node-alpine Docker image when dockerizing my React application.
I resolved it with the following Dockerfile configuration.
FROM node:8.10.0-alpine
# Set a working directory
WORKDIR /usr/src/app
COPY ./build/package.json .
COPY ./build/yarn.lock .
# To handle 'could not get uid/gid'
RUN npm config set unsafe-perm true
# Install Node.js dependencies
RUN yarn install --production --no-progress
# Copy application files
COPY ./build .
# Install pm2
RUN npm install -g pm2 --silent
# Run the container under "node" user by default
USER node
CMD ["pm2", "start", "mypm2config.yml", "--no-daemon", "--env", "preprod"]
Related
I am building my page inside Docker, and after the build finishes it should be pushed to Docker Hub. Then I want to deploy it like:
docker run -d --name commerce_dash -p4505:8000 -e NODE_ENV=production r2day/commerce-dash
Everything works fine, for now. But when I check the logs, it shows me:
VITE v3.2.2 ready in 641 ms
➜ Local: http://localhost:8000/
➜ Network: http://172.17.0.19:8000/
node:events:491
throw er; // Unhandled 'error' event
^
Error: spawn xdg-open ENOENT
at ChildProcess._handle.onexit (node:internal/child_process:285:19)
at onErrorNT (node:internal/child_process:483:16)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21)
Emitted 'error' event on ChildProcess instance at:
at ChildProcess._handle.onexit (node:internal/child_process:291:12)
at onErrorNT (node:internal/child_process:483:16)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
errno: -2,
code: 'ENOENT',
syscall: 'spawn xdg-open',
path: 'xdg-open',
spawnargs: [ 'http://localhost:8000/' ]
}
Node.js v19.0.0
My Dockerfile looks like this:
# Install dependencies only when needed
FROM node:alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json yarn.lock vite.config.ts ./
# COPY .env.production ./.env.production
RUN yarn install --frozen-lockfile
# Rebuild the source code only when needed
FROM node:alpine AS builder
WORKDIR /app
COPY . .
ENV BASE_PATH admin
COPY --from=deps /app/node_modules ./node_modules
RUN yarn build && yarn install --production --ignore-scripts --prefer-offline
# Production image, copy all the files and run next
FROM node:alpine AS runner
WORKDIR /app
ENV NODE_ENV production
ENV BASE_PATH admin
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001
# You only need to copy next.config.js if you are NOT using the default configuration
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./package.json
COPY --from=builder /app/vite.config.ts ./vite.config.ts
COPY --from=builder /app/packages ./packages
USER nextjs
EXPOSE 8000
ENV PORT 8000
ENV BASE_PATH admin
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry.
# ENV NEXT_TELEMETRY_DISABLED 1
CMD ["node_modules/.bin/vite", "--host", "0.0.0.0", "--port", "8000"]
Anyway, I can run it on my MacBook by using
node_modules/.bin/vite
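For what it's worth, that spawn xdg-open call is typically Vite trying to auto-open a browser after the dev server starts, which a container cannot do. A minimal sketch of turning that off in vite.config.ts, assuming the project currently enables server.open, could look like this (the host/port values just mirror the CMD above):
// vite.config.ts (sketch, not the actual config from this repo)
import { defineConfig } from 'vite';

export default defineConfig({
  server: {
    host: '0.0.0.0',
    port: 8000,
    open: false, // do not spawn xdg-open inside the container
  },
});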
I have a React App and I need to run it in Docker. Inside this container I need to build 3 instances with the same code, but with different environments, by replacing .env.production with my .env.production2 and .env.production3 files. I have a problem with the Dockerfile: if I don't use RUN npm install after changing WORKDIR, the build stops with an error:
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm WARN Local package.json exists, but node_modules missing, did you mean to install?
So right now it works only with this dockerfile:
FROM node:12 as build-box
COPY . /app/expert
COPY . /app/expert-control-chat
COPY . /app/expert-control-support
WORKDIR /app/expert
ARG NPMTOKEN
ENV NPMTOKEN=$NPMTOKEN
RUN npm config set _auth $NPMTOKEN
RUN npm install
# Build
FROM build-box as publish
WORKDIR /app/expert
RUN npm run build
WORKDIR /
RUN rm -rf /app/expert-control-chat/.env.production
COPY .env.production.chat-control /app/expert-control-chat/.env.production
WORKDIR /app/expert-control-chat
RUN npm install
RUN npm run build
WORKDIR /
RUN rm -rf /app/expert-control-support/.env.production
COPY .env.production.chat-support /app/expert-control-support/.env.production
WORKDIR /app/expert-control-support
RUN npm install
RUN npm run build
FROM nginx as runtime
RUN rm -rf /etc/nginx/conf.d
COPY nginx.conf /etc/nginx/nginx.conf
WORKDIR /app/expert
COPY --from=publish /app/expert/build ./
WORKDIR /app/expert-control-chat
COPY --from=publish /app/expert-control-chat/build ./
WORKDIR /app/expert-control-support
COPY --from=publish /app/expert-control-support/build ./
# Start
EXPOSE 3000
CMD ["nginx", "-g", "daemon off;"]
Can you tell me the correct way to make a build?
I think there is another way to build it, but I can't figure it out.
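Not knowing the rest of the repo, one pattern worth sketching (untested, and assuming this is a create-react-app project that reads .env.production at build time) is to install the dependencies once and rebuild in the same directory, swapping the env file between builds:
FROM node:12 as build-box
WORKDIR /app/expert
ARG NPMTOKEN
ENV NPMTOKEN=$NPMTOKEN
COPY . .
RUN npm config set _auth $NPMTOKEN && npm install
# default variant
RUN npm run build && mv build /tmp/expert
# chat-control variant: overwrite the env file and rebuild with the same node_modules
COPY .env.production.chat-control .env.production
RUN npm run build && mv build /tmp/expert-control-chat
# chat-support variant
COPY .env.production.chat-support .env.production
RUN npm run build && mv build /tmp/expert-control-support
FROM nginx as runtime
RUN rm -rf /etc/nginx/conf.d
COPY nginx.conf /etc/nginx/nginx.conf
COPY --from=build-box /tmp/expert /app/expert
COPY --from=build-box /tmp/expert-control-chat /app/expert-control-chat
COPY --from=build-box /tmp/expert-control-support /app/expert-control-support
EXPOSE 3000
CMD ["nginx", "-g", "daemon off;"]
This avoids the three separate npm install runs, because all three builds share the single node_modules installed in /app/expert.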
I am deploying a react app to Heroku via TravisCI. The fact that I'm using Heroku doesn't really affect what I'm about to ask, I'm pretty sure; it's just there for context. Travis successfully deploys the app until I add a testing step (the script section) in .travis.yml:
language: generic
sudo: required
services:
- docker
before_install:
- docker build -t myapp:prod -f Dockerfile.prod .
script:
- docker run -e CI=true myapp:prod npm run test
after_success:
- docker build -t myapp:prod -f Dockerfile.prod .
- echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_ID" --password-stdin
- docker push myapp:prod
deploy:
provider: heroku
app: myapp
skip_cleanup: true
api_key:
secure: <my_key>
However, my Dockerfile.prod is a multi-stage node + nginx where the nginx stage doesn't keep any node or npm stuff:
# build environment
FROM node:13.12.0-alpine as builder
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package.json ./
COPY package-lock.json ./
# some CI stuff I guess
RUN npm ci
RUN npm install react-scripts@3.4.1 -g --silent
COPY . ./
RUN npm run build
# production environment
FROM nginx:stable-alpine
COPY nginx.conf /etc/nginx/conf.d/default.conf
# If using React Router
COPY --from=builder /app/build /usr/share/nginx/html
# For Heroku
CMD sed -i -e 's/$PORT/'"$PORT"'/g' /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'
Therefore, it is my understanding that .travis.yml tries to run that npm run test command inside my nginx container and can't execute npm commands (no node installed, right?). So, guided by SO answers such as this one, I started adding commands to that nginx stage, such as
COPY package.json ./
COPY package-lock.json ./
RUN apk add --update npm
but I realized I might be approaching this the wrong way. Should I perhaps be adding npm through Travis? That is, should I include in .travis.yml, in the script section, something like docker run -e CI=true myapp:prod apk add --update npm and whatever else is necessary? This would result in a smaller nginx image, no? However, would I run into problems with package.json from the node stage in Dockerfile.prod or anything like that?
In summary, to use TravisCI to test a dockerized react app served with nginx, at what point should I install npm into my image? Does it happen as part of script in .travis.yml or does it happen in Dockerfile.prod? If it is recommended to run the npm tests inside Dockerfile.prod, would I do that in the first stage (node) or the second (nginx)?
Thanks
EDIT: Not sure if this can be considered solved, but a user on Reddit recommended to simply RUN npm run test right before the RUN npm run build.
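A sketch of that suggestion, applied to the builder stage of Dockerfile.prod above (so the nginx stage stays free of node and npm; CI=true makes react-scripts run the tests once and exit instead of watching):
# build environment
FROM node:13.12.0-alpine as builder
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json ./
COPY package-lock.json ./
RUN npm ci
RUN npm install react-scripts@3.4.1 -g --silent
COPY . ./
# fail the image build if the test suite fails
RUN CI=true npm run test
RUN npm run build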
At the top of my react component (Coffee.jsx), I have this import:
import ReactPlayer from 'react-player';
The package 'react-player' is certainly installed, present in package.json and in node_modules/.
My code runs inside a docker container. Every time I spin my containers up, like so:
docker-compose -f docker-compose-dev.yml up -d
I am getting this error:
./src/components/Coffees.jsx
Module not found: Can't resolve 'react-player' in '/usr/src/app/src/components'
This is what the console shows me:
Brewing.jsx:22 Uncaught Error: Cannot find module 'react-player'
at webpackMissingModule (Brewing.jsx:22)
at Module../src/components/Coffees.jsx (Brewing.jsx:22)
at __webpack_require__ (bootstrap:781)
at fn (bootstrap:149)
at Module../src/App.jsx (Spotify.css:4)
at __webpack_require__ (bootstrap:781)
at fn (bootstrap:149)
at Module../src/index.js (spotify-auth.js:8)
at __webpack_require__ (bootstrap:781)
at fn (bootstrap:149)
at Object.0 (index.js:10)
at __webpack_require__ (bootstrap:781)
at checkDeferredModules (bootstrap:45)
at Array.webpackJsonpCallback [as push] (bootstrap:32)
at main.chunk.js:1
docker-compose-dev.yml:
client:
build:
context: ./services/client
dockerfile: Dockerfile-dev
volumes:
- './services/client:/usr/src/app'
- '/usr/src/app/node_modules'
ports:
- 3000:3000
environment:
- NODE_ENV=development
- REACT_APP_WEB_SERVICE_URL=${REACT_APP_WEB_SERVICE_URL}
depends_on:
- web
Dockerfile-dev:
# base image
FROM node:11.12.0-alpine
# set working directory
WORKDIR /usr/src/app
# add `/usr/src/app/node_modules/.bin` to $PATH
ENV PATH /usr/src/app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /usr/src/app/package.json
COPY package-lock.json /usr/src/app/package-lock.json
RUN npm ci
RUN npm install react-scripts@2.1.8 -g --silent
# start app
CMD ["npm", "start"]
folder structure:
services/
docker-compose-dev.yml
node_modules/
client/
Dockerfile-dev
package.json
package-lock.json
node_modules/
react-player/
Temporary fix:
The hack that fixes this is waiting for some time, along with making some forced changes in my code in either Coffee.jsx or Brewing.jsx.
After I save the changed code, the package is found.
Then, when I stop the containers and bring them up again, the problem resumes. I have tried using the --build flag after up -d, to no avail.
What's going on? How do I fix this?
More persistent fix:
After removing volumes from docker-compose-dev.yml and rebuilding, like so:
#volumes:
#- './services/client:/usr/src/app'
#- '/usr/src/app/node_modules'
I still get the error:
client_1 | > client@0.1.0 start /usr/src/app
client_1 | > react-scripts start
client_1 |
client_1 | Could not find a required file.
client_1 | Name: index.html
client_1 | Searched in: /usr/src/app/public
client_1 | npm ERR! code ELIFECYCLE
client_1 | npm ERR! errno 1
client_1 | npm ERR! client@0.1.0 start: `react-scripts start`
client_1 | npm ERR! Exit status 1
client_1 | npm ERR!
client_1 | npm ERR! Failed at the client@0.1.0 start script.
client_1 | npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
client_1 |
client_1 | npm ERR! A complete log of this run can be found in:
client_1 | npm ERR! /root/.npm/_logs/2019-11-05T15_14_42_967Z-debug.log
Then it only works if I uncomment the volumes again and run the containers with them. An answer explaining the reasons behind
a) the temporary fix
b) the more persistent fix
would be very much appreciated.
Managing node_modules is such a pain with Docker. There are great discussions on Stack Overflow about how you can run a JavaScript app with Docker. Here is how I do it.
Dockerfile
FROM node:11.12.0-alpine
# first installed node_modules in cache and copy them to src folder
RUN mkdir /usr/src/cache
WORKDIR /usr/src/cache
COPY package.json .
RUN npm install -q
# now make a different directory for src code
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
# set path to run packages from node_modules
ENV NODE_PATH=/usr/src/app/node_modules/.bin
COPY . .
docker-compose.yaml
app:
build: .
image: app
container_name: services.app
volumes:
- .:/usr/src/app
ports:
- 3000:5000
# this will copy node_modules to the src folder, otherwise node_modules would be wiped out
# since we don't have node_modules on the host machine
command: /usr/src/app/entrypoint.sh prod
And my entrypoint.sh file looks like
#!/bin/bash
cp -r /usr/src/cache/node_modules/. /usr/src/app/node_modules/
exec npm start
So, the basic idea here is: when you build the image, you store the node_modules somewhere in the image, and when you actually run it, you copy that node_modules into the app folder. This way, your local node_modules never clashes with the one inside Docker.
You can add node_modules to .dockerignore if you want to make the COPY faster.
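For example, a minimal .dockerignore for this layout might look like this (only the node_modules entry is strictly relevant here; the others are just common additions):
node_modules
npm-debug.log
.git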
First, bring the services down:
docker-compose -f docker-compose-dev.yml down
Then rebuild the services (without cache):
docker-compose -f docker-compose-dev.yml build --no-cache
Finally, run the services:
docker-compose -f docker-compose-dev.yml up
I would go with installing packages whenever the container comes up, so they are installed on every start, plus a control mechanism for deciding when to install fresh modules and when to go with the build-time modules.
#!/bin/sh
npm ci
npm install react-scripts@2.1.8 -g --silent
exec npm start
With this approach the container will always install updated node modules and will not need to be rebuilt each time during development.
We can also control this behaviour:
#!/bin/sh
if [ "$PACKAGE_UPDATE" = true ] ; then
echo 'installing fresh node modules'
# you can also remove existing modules at this step
npm ci
npm install react-scripts@2.1.8 -g --silent
fi
exec npm start
So the docker run for installing fresh packages will be
docker run -e PACKAGE_UPDATE=true -it my_image
And mount the anonymous volume so the container's node_modules do not conflict with the host's:
volumes:
- './services/client:/usr/src/app'
- '/usr/src/app/node_modules'
I want to create a development environment for a reactjs application. I am new to Docker and have been trying to create an environment using Docker. Below is my Dockerfile code.
# Base node image
FROM node
# create working directory
ADD ./code /usr/src/app
WORKDIR /usr/src/app
# add node_modules path to environment
ENV PATH /usr/src/app/node_modules/.bin:PATH
# copy and install dependencies
COPY ./code/package.json /usr/src/app/package.json
RUN npm install --quiet
RUN npm install react-scripts@1.1.1 -g --silent
# start app
# CMD ["npm","start"]
However, I am getting the error "npm: not found" at line RUN npm install --quiet.
I confirm that node comes with npm:
$ docker run -it --rm node /bin/bash
root@b35e1a6d68f8:/# npm --version
5.6.0
But the line
ENV PATH /usr/src/app/node_modules/.bin:PATH
overwrites the initial PATH, so you should try replacing it with
ENV PATH /usr/src/app/node_modules/.bin:${PATH}
Also, note that your ADD ./code ... line is clumsy, because it adds all the files of your application (including ./code/package.json!) and this step comes too early with respect to Docker's cache mechanism. So I'd suggest simply removing that ADD ./code /usr/src/app line and adding a COPY ./code ./ line after the RUN npm install ... steps.
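Putting those suggestions together, a sketch of the adjusted Dockerfile (untested, keeping your paths) could be:
# Base node image
FROM node
WORKDIR /usr/src/app
# add node_modules/.bin to the environment while keeping the original PATH
ENV PATH /usr/src/app/node_modules/.bin:${PATH}
# copy and install dependencies first, to benefit from Docker's layer cache
COPY ./code/package.json ./package.json
RUN npm install --quiet
RUN npm install react-scripts@1.1.1 -g --silent
# copy the application code only after the dependencies are installed
COPY ./code ./
# start app
CMD ["npm", "start"]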
Finally, you may also want to take a look at the official documentation for "dockerizing" a Node.js app: https://nodejs.org/en/docs/guides/nodejs-docker-webapp/