client&admin react-scripts: not found - reactjs

I'm trying to install api-platform with the docker images. The problem is that the containers will not start, exiting with code 127.
$ react-scripts start
/bin/sh: react-scripts: not found
error Command failed with exit code 127.
I'm using Docker for Windows with Linux containers. I tried everything I could think of; the only suggestion I found in support channels was rebuilding the client and the admin images, but to no avail. Does anybody have an idea?
I use the provided Docker files from api-platform: https://github.com/api-platform/api-platform.
Dockerfile admin:
# https://docs.docker.com/develop/develop-images/multistage-build/#stop-at-a-specific-build-stage
# https://docs.docker.com/compose/compose-file/#target
# https://docs.docker.com/engine/reference/builder/#understand-how-arg-and-from-interact
ARG NODE_VERSION=13
ARG NGINX_VERSION=1.17
# "development" stage
FROM node:${NODE_VERSION}-alpine AS api_platform_admin_development
WORKDIR /usr/src/admin
# prevent the reinstallation of node modules at every change in the source code
COPY package.json yarn.lock ./
RUN set -eux; \
apk add --no-cache --virtual .gyp \
g++ \
make \
python \
; \
yarn install; \
apk del .gyp
COPY . ./
VOLUME /usr/src/admin/node_modules
ENV HTTPS true
CMD ["yarn", "start"]
# "build" stage
# depends on the "development" stage above
FROM api_platform_admin_development AS api_platform_admin_build
ARG REACT_APP_API_ENTRYPOINT
RUN set -eux; \
yarn build
# "nginx" stage
# depends on the "build" stage above
FROM nginx:${NGINX_VERSION}-alpine AS api_platform_admin_nginx
COPY docker/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf
WORKDIR /usr/src/admin/build
COPY --from=api_platform_admin_build /usr/src/admin/build ./
Dockerfile client:
# https://docs.docker.com/develop/develop-images/multistage-build/#stop-at-a-specific-build-stage
# https://docs.docker.com/compose/compose-file/#target
# https://docs.docker.com/engine/reference/builder/#understand-how-arg-and-from-interact
ARG NODE_VERSION=13
ARG NGINX_VERSION=1.17
# "development" stage
FROM node:${NODE_VERSION}-alpine AS api_platform_client_development
WORKDIR /usr/src/client
RUN yarn global add @api-platform/client-generator
# prevent the reinstallation of node modules at every change in the source code
COPY package.json yarn.lock ./
RUN set -eux; \
yarn install
COPY . ./
VOLUME /usr/src/client/node_modules
ENV HTTPS true
CMD ["yarn", "start"]
# "build" stage
# depends on the "development" stage above
FROM api_platform_client_development AS api_platform_client_build
ARG REACT_APP_API_ENTRYPOINT
RUN set -eux; \
yarn build
# "nginx" stage
# depends on the "build" stage above
FROM nginx:${NGINX_VERSION}-alpine AS api_platform_client_nginx
COPY docker/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf
WORKDIR /usr/src/client/build
COPY --from=api_platform_client_build /usr/src/client/build ./

I had the same problem.
I solved it by running yarn install directly in both directories (client/ and admin/). You can then start the Docker images (docker-compose up -d).
Hope it helps.
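The failure mode behind exit code 127 can be sketched without Docker. The compose file bind-mounts the host checkout over the container's working directory, so the node_modules installed inside the image are shadowed by the (not-yet-installed) host directory. The directory names below are stand-ins, not the real compose mounts:

```shell
# "image" simulates the container image filesystem, "host" the fresh checkout.
mkdir -p image/usr/src/admin/node_modules/.bin
touch image/usr/src/admin/node_modules/.bin/react-scripts  # installed at image build time
mkdir -p host/admin                                        # fresh checkout, no yarn install yet
container_view="host/admin"                                # what the container sees after the bind mount
if [ ! -e "$container_view/node_modules/.bin/react-scripts" ]; then
  echo "/bin/sh: react-scripts: not found"
fi
```

Running yarn install in the host directories, as suggested above, puts node_modules where the mount expects them, so the command resolves again.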

Related

How to get Next.JS environment variables on client side?

I have a Next.Js application that I will deploy with docker. I am passing my environment variables in docker file and docker-compose.yaml. Next version: 12.1.6
Dockerfile
# Install dependencies only when needed
FROM node:16-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
# If using npm with a `package-lock.json` comment out above and use below instead
# COPY package.json package-lock.json ./
# RUN npm ci
# Rebuild the source code only when needed
FROM node:16-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry during the build.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN yarn build
# If using npm comment out above and use below instead
# RUN npm run build
# Production image, copy all the files and run next
FROM node:16-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
ENV NEXT_PUBLIC_BASE_URL example
# ^ this is where I set it; "example" is a placeholder, not my real value
# Uncomment the following line in case you want to disable telemetry during runtime.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
# You only need to copy next.config.js if you are NOT using the default configuration
# COPY --from=builder /app/next.config.js ./
COPY --from=builder /app/public ./public
COPY --from=builder /app/package.json ./package.json
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT 3000
CMD ["node", "server.js"]
docker-compose.yaml
version: '3'
services:
frontend:
image: caneral:test-1
ports:
- '3000:3000'
environment:
- NEXT_PUBLIC_BASE_URL=https://example.com/api
I am building with the following command:
docker build -t caneral:test-1 .
Then I run docker-compose:
docker-compose up -d
While I can access the NEXT_PUBLIC_BASE_URL value on the server side, I cannot access it on the client side: it returns undefined. Shouldn't I be able to reach it, since I defined it with the NEXT_PUBLIC_ prefix? That is what the official docs state.
How to get environment variables on client side?
Details: you have
your .env file (I'm not sure how the Docker files affect this logic)
your next.config.js file
The server side has access to these two files.
The client side doesn't have access to the .env file.
What you can do:
In your next.config.js file you can declare a variable whose value is your process.env value:
const baseTrustFactor = process.env.trustFactor
IMPORTANT: do not expose your private info (keys/tokens etc.) to the client-side.
If you need to compare the tokens you can:
Send them from the backend (From NodeAPI or similar)
Make conditions in next.config.js such as:
const baseTrustFactor = process.env.trustFactor === '21'
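Either way (the NEXT_PUBLIC_ prefix or the env key in next.config.js), the value is baked into the client bundle at build time, which is why a variable set only in docker-compose at runtime comes back undefined. A rough stand-in for what the bundler does (bundle.js and the URL are made-up example values):

```shell
# at build time, references to process.env.NEXT_PUBLIC_* are replaced with literals
export NEXT_PUBLIC_BASE_URL="https://example.com/api"
echo 'fetch(process.env.NEXT_PUBLIC_BASE_URL)' \
  | sed "s|process.env.NEXT_PUBLIC_BASE_URL|\"$NEXT_PUBLIC_BASE_URL\"|" > bundle.js
cat bundle.js    # fetch("https://example.com/api")

# changing the variable at runtime does not touch the already-built bundle:
export NEXT_PUBLIC_BASE_URL="https://other.example"
cat bundle.js    # still fetch("https://example.com/api")
```

So for the Dockerfile above, the value has to be present during `yarn build` (e.g. as a build arg), not only in docker-compose's environment section.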

Cleaner way to init react .env variables in Dockerfile

Is there a cleaner way to initialize the React env variables in the Dockerfile?
FROM node:14.19 as base
ARG REACT_APP_VERSION
ARG REACT_APP_WEBSITE_NAME
ARG REACT_APP_BACKEND_BASE_URL
ARG REACT_APP_BI_BASE_URL
ARG REACT_APP_DEFAULT_DEPARTEMENT_CODE
ENV REACT_APP_VERSION $REACT_APP_VERSION
ENV REACT_APP_WEBSITE_NAME $REACT_APP_WEBSITE_NAME
ENV REACT_APP_BACKEND_BASE_URL $REACT_APP_BACKEND_BASE_URL
ENV REACT_APP_BI_BASE_URL $REACT_APP_BI_BASE_URL
ENV REACT_APP_DEFAULT_DEPARTEMENT_CODE $REACT_APP_DEFAULT_DEPARTEMENT_CODE
WORKDIR /app
COPY package.json package.json
COPY package-lock.json package-lock.json
RUN npm install
COPY . .
FROM base as development
CMD ["npm", "start"]
FROM base as build
RUN npm run build
FROM nginx:alpine as production
COPY nginx.conf /etc/nginx/conf.d/configfile.template
COPY --from=build /app/build /usr/share/nginx/html
ENV HOST 0.0.0.0
EXPOSE 8080
CMD sh -c "envsubst '\$PORT' < /etc/nginx/conf.d/configfile.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
Right now I have all the variables declared at the top of the Dockerfile, and I have to pass each .env variable as a build arg. I don't like this approach and am looking for another way. I'm using Cloud Build for my production deployment pipeline, so maybe there is something I can do with it?
You don't need to replicate all your args as env variables if they are only required for the build process: args are visible as environment variables during the build. Once you run the container, however, they are gone.
This is enough, for the first stage:
FROM node:14.19
ARG REACT_APP_VERSION="unknown" \
REACT_APP_WEBSITE_NAME="" \
REACT_APP_BACKEND_BASE_URL="" \
REACT_APP_BI_BASE_URL="" \
REACT_APP_DEFAULT_DEPARTEMENT_CODE="" \
NODE_ENV="production"
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install
COPY . .
RUN npm run build
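The ARG scoping described above can be sketched with plain shell, no Docker needed. The subshell stands in for the build stage, and the variable name reuses one from the Dockerfile:

```shell
# a subshell's exports vanish when it exits, much like ARG values after `docker build`
(
  export REACT_APP_VERSION="1.2.3"          # like ARG during the build stage
  echo "build sees: $REACT_APP_VERSION"     # build sees: 1.2.3
)
echo "container sees: '${REACT_APP_VERSION:-}'"
```

This is why ARG is enough for anything CRA only needs while `npm run build` runs.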
For your second stage you can use nginx templates dir, it will do envsubst for you and copy the templates into the conf.d directory, removing the template suffix.
It will run the following script amongst others: https://github.com/nginxinc/docker-nginx/blob/master/entrypoint/20-envsubst-on-templates.sh
So you can create templates with variable placeholders and put them in /etc/nginx/templates.
In your case, you want to put /etc/nginx/templates/default.conf.template.
FROM nginx:alpine
ENV HOST=0.0.0.0 PORT=8080
EXPOSE 8080
COPY nginx.conf /etc/nginx/templates/default.conf.template
COPY --from=0 /app/build /usr/share/nginx/html
That way, you can also keep the default entrypoint and command. You don't have to set your own.
I would also suggest using the unprivileged version of nginx, which listens on 8080 by default: https://hub.docker.com/r/nginxinc/nginx-unprivileged. This image is generally useful if you plan to drop capabilities, which is advised for production.
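What the stock nginx entrypoint's 20-envsubst-on-templates.sh effectively does can be emulated with sed (the real script uses envsubst; the file names below mirror the templates directory convention):

```shell
# render /etc/nginx/templates/default.conf.template -> conf.d/default.conf
export PORT=8080
printf 'server { listen ${PORT}; }\n' > default.conf.template
sed "s|\${PORT}|$PORT|g" default.conf.template > default.conf
cat default.conf    # server { listen 8080; }
```

Any ${VAR} placeholder in the template is filled in from the container's environment at startup, so no custom CMD is needed.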

React setting environment variables in the right way with docker and codebuild

I have a codebuild project with an environment variable named "GATEWAY_URI" set on aws console as plaintext.
I am using the following buildspec file;
version: 0.2
env:
variables:
AWS_REGION: "eu-central-1"
phases:
pre_build:
commands:
- echo logging in to ecr...
- >
aws ecr get-login-password --region $AWS_REGION \
| docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
- |
if expr "$CODEBUILD_WEBHOOK_TRIGGER" == "branch/main" >/dev/null && expr "$CODEBUILD_WEBHOOK_HEAD_REF" == "refs/heads/main" >/dev/null; then
DOCKER_TAG=prod
else
DOCKER_TAG=${CODEBUILD_RESOLVED_SOURCE_VERSION}
fi
- echo "Docker tag:" $DOCKER_TAG
- git_diff="$(git diff --name-only HEAD HEAD~1)"
- echo "$git_diff"
- buildWeb=false
- |
case $git_diff in
*Web*)
buildWeb=true
docker pull $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/web:builder || true
docker pull $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/web:$DOCKER_TAG || true
;;
esac
build:
commands:
- echo building images....
- |
if [ "$buildWeb" = true ]; then
docker build \
--target builder \
--cache-from $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/web:builder \
-f Web/Dockerfile.prod \
-t $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/web:builder \
--build-arg NODE_ENV=production \
--build-arg REACT_APP_GATEWAY_URI=$GATEWAY_URI \
./Web
docker build \
--cache-from $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/web:$DOCKER_TAG \
-f Web/Dockerfile.prod \
-t $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/web:$DOCKER_TAG \
./Web
fi
post_build:
commands:
- |
if [ "$buildWeb" = true ]; then
docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/web:builder
docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/web:$DOCKER_TAG
fi
- chmod +x ./deploy.sh
- bash deploy.sh
The "GATEWAY_URI" variable is reachable in the buildspec (I echo it beforehand). I pass it to the docker build command as a --build-arg, and in the Dockerfile I set it as an env variable before "npm run build" so CRA can pick it up. I'm using the following multi-stage Dockerfile:
###########
# BUILDER #
###########
FROM public.ecr.aws/docker/library/node:17-alpine as builder
# set working directory
WORKDIR /usr/src/app
# add `/usr/src/app/node_modules/.bin` to $PATH
ENV PATH /usr/src/app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json .
COPY package-lock.json .
RUN npm ci
RUN npm install react-scripts@5.0.0 --silent
# set environment variables
ARG REACT_APP_GATEWAY_URI
ENV REACT_APP_GATEWAY_URI $REACT_APP_GATEWAY_URI
ARG NODE_ENV
ENV NODE_ENV $NODE_ENV
# create build
COPY . .
RUN npm run build
#########
# FINAL #
#########
FROM public.ecr.aws/docker/library/nginx:stable-alpine
# update nginx conf
RUN rm -rf /etc/nginx/conf.d
COPY conf /etc/nginx
# copy static files
COPY --from=builder /usr/src/app/build /usr/share/nginx/html
# expose port
EXPOSE 80
# run nginx
CMD ["nginx", "-g", "daemon off;"]
At the end of the day, when the build passes, the "REACT_APP_GATEWAY_URI" variable under process.env ends up as an empty string.
Note: when I use the same Dockerfile with docker-compose on my local system it works fine and I can see "REACT_APP_GATEWAY_URI" on process.env. So my suspicion is whether I'm using the "- |" syntax correctly in the buildspec file; I mean, can I execute multiple docker build commands under it consecutively like that?
Edit: after also adding the --build-arg for "GATEWAY_URI" to the docker build command for the FINAL stage, it now works as expected. So I think the variable was replaced with an empty string between the builder stage and the final stage, but I thought I only needed it in the build stage.
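To the "- |" question: a YAML literal block scalar just hands CodeBuild one multi-line shell script, so several commands under it do run one after another. A minimal stand-in (echo instead of docker build):

```shell
# same shape as the build phase: one script, sequential commands
buildWeb=true
if [ "$buildWeb" = true ]; then
  echo "building builder image"
  echo "building final image"
fi > build.log
cat build.log
```

As for the Edit: that behavior is consistent with the second docker build re-running the builder stage on a cache miss; without the --build-arg there, REACT_APP_GATEWAY_URI likely defaulted to empty during that rebuild, which is why passing the arg to both builds fixed it.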

How can I expose more than 1 port with Dockerfile/Nginx/Heroku

I'm able to expose two services locally on my computer with the following command: docker run -d --name #containerName -e "PORT=8080" -p 8080:8080 -p 5000:5000 #imageName.
On port 5000, I expose my backend using a flask-restful api and on port 8080, I expose my frontend using nginx to serve my react.js application.
When I deploy that on Heroku platform I have 2 problems :
It seems Heroku tries to bind Nginx to port 80, but the PORT env var is different. Log output:
Starting process with command /bin/sh -c gunicorn\ --bind\ 0.0.0.0:5000\ app:app\ --daemon\ \&\&\ sed\ -i\ -e\ \'s/\34352/\'\"\34352\"\'/g\'\ /etc/nginx/conf.d/default.conf\ \&\&\ nginx\ -g\ \'daemon\ off\;\'
nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:1
[emerg] bind() to 0.0.0.0:80 failed (13: Permission denied)
How can I write the -p 8080:8080 -p 5000:5000 part inside the Dockerfile, or hack around it, since I can't specify the docker run [...] command on Heroku?
I'm new to Docker and Nginx, so I would be very grateful if you know a better way to achieve my goal. Thanks in advance.
# ------------------------------------------------------------------------------
# Temporary image for react.js app using a multi-stage build
# ref : https://docs.docker.com/develop/develop-images/multistage-build/
# ------------------------------------------------------------------------------
FROM node:latest as build-react
# create a shared folder and define it as current dir
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
# copy the files required for node packages installation
COPY ./react-client/package.json ./
COPY ./react-client/yarn.lock ./
# install dependencies, copy code and build bundles
RUN yarn install
COPY ./react-client .
RUN yarn build
# ------------------------------------------------------------------------------
# Production image based on ubuntu:latest with nginx & python3
# ------------------------------------------------------------------------------
FROM ubuntu:latest as prod-react
WORKDIR /usr/src/app
# update, upgrade and install packages
RUN apt-get update && apt-get install -y --no-install-recommends apt-utils
RUN apt-get upgrade -y
RUN apt-get install -y nginx curl python3 python3-distutils python3-apt
# install pip
RUN curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
RUN python3 get-pip.py
# copy flask-api requirements file and install modules
COPY ./flask-api/requirements.txt ./
RUN pip install -r requirements.txt
RUN pip install gunicorn
# copy flask code
COPY ./flask-api/app.py .
# copy built image and config onto nginx directory
COPY --from=build-react /usr/src/app/build /usr/share/nginx/html
COPY ./conf.nginx /etc/nginx/conf.d/default.conf
# ------------------------------------------------------------------------------
# Serve flask-api with gunicorn and react-client with nginx
# Ports :
# - 5000 is used for flask-api
# - 8080 is used by nginx to serve react-client
# You can change them but you'll have to change :
# - for flask-api : conf.nginx, axios calls (5000 -> #newApiPort)
# - for react-client : CORS origins (8080 -> #newClientPort)
#
# To build and run this :
# docker build -t #imageName .
# docker run -d --name #containerName -e "PORT=8080" -p 8080:8080 -p 5000:5000 #imageName
# ------------------------------------------------------------------------------
CMD gunicorn --bind 0.0.0.0:5000 app:app --daemon && \
sed -i -e 's/$PORT/'"$PORT"'/g' /etc/nginx/conf.d/default.conf && \
nginx -g 'daemon off;'
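The sed call in the CMD above, run against a sample config: Heroku assigns PORT when the dyno starts, and every literal $PORT in the nginx config gets replaced with it (24312 is a made-up example value; Heroku's actual value varies):

```shell
# the single-quoted $PORT is a literal placeholder in the config file
export PORT=24312
printf 'server { listen $PORT; }\n' > default.conf
sed -i -e 's/$PORT/'"$PORT"'/g' default.conf
cat default.conf    # server { listen 24312; }
```

The quoting matters: the first, single-quoted $PORT is the literal text sed searches for, while the double-quoted one is expanded by the shell into the runtime value.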

Deploy production create-react-app via GitLab Auto DevOps to GKE

I've been struggling to figure out why my create-react-app application won't display properly when using GitLab Auto DevOps and deploy to GKE. I'm thinking that it has something to do with how I'm serving the create-react-app and how the ingress-controller works, but I'm not totally sure.
For production, create-react-app suggests running yarn build and then serving the output with the serve package, but I don't think that serve and the ingress controller play nicely together. For reference, here is my Dockerfile:
Dockerfile
FROM node:8.9.3-alpine
ARG NODE_ENV=production
ENV NODE_ENV=$NODE_ENV
# Set a working directory
WORKDIR /usr/src/app
COPY package.json yarn.lock ./
RUN set -ex; \
if [ "$NODE_ENV" = "production" ]; then \
yarn install --no-cache --frozen-lockfile --production; \
npm install -g serve; \
elif [ "$NODE_ENV" = "test" ]; then \
touch yarn-error.log; \
mkdir -m 777 build; \
yarn install --no-cache --frozen-lockfile; \
chown -R node:node build node_modules package.json yarn.lock yarn-error.log; \
else \
touch yarn-error.log; \
mkdir -p -m 777 build node_modules /home/node/.cache/yarn; \
chown -R node:node build node_modules package.json yarn.lock yarn-error.log /home/node/.cache/yarn; \
fi;
COPY .env build* ./build/
USER node
CMD [ "serve", "-s", "build" ]
My application is really simple, it's just a single page with a few dummy routes.
When I push to master the whole pipeline succeeds, but the result is sort of a rendered view of my project's file structure. I've looked over the logs, and the only thing that seems to indicate any issue other than the state of the website is the ingress-controller log, which warns me:
error obtaining PEM from secret app-6174385/production-auto-deploy-tls: error retrieving secret app-6174385/production-auto-deploy-tls: secret app-6174385/production-auto-deploy-tls was not found
Has anyone had success deploying create-react-app to GKE via GitLab's Auto DevOps? If so, I could really use some guidance. Happy to provide any additional information that would be helpful!
This error means that the secret has not been created.
You can find information on how to set up the Kubernetes cluster integration in the Getting started with Auto DevOps documentation.
