I have a Next.js application that I deploy with Docker. I am passing my environment variables in the Dockerfile and docker-compose.yaml. Next.js version: 12.1.6
Dockerfile
# Install dependencies only when needed
FROM node:16-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
# If using npm with a `package-lock.json` comment out above and use below instead
# COPY package.json package-lock.json ./
# RUN npm ci
# Rebuild the source code only when needed
FROM node:16-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry during the build.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN yarn build
# If using npm comment out above and use below instead
# RUN npm run build
# Production image, copy all the files and run next
FROM node:16-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
# "example" is a placeholder here, not my real value
ENV NEXT_PUBLIC_BASE_URL example
# Uncomment the following line in case you want to disable telemetry during runtime.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
# You only need to copy next.config.js if you are NOT using the default configuration
# COPY --from=builder /app/next.config.js ./
COPY --from=builder /app/public ./public
COPY --from=builder /app/package.json ./package.json
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT 3000
CMD ["node", "server.js"]
docker-compose.yaml
version: '3'
services:
  frontend:
    image: caneral:test-1
    ports:
      - '3000:3000'
    environment:
      - NEXT_PUBLIC_BASE_URL=https://example.com/api
I am building with the following command:
docker build -t caneral:test-1 .
Then I run docker-compose:
docker-compose up -d
While I can access the NEXT_PUBLIC_BASE_URL value on the server side, on the client side it returns undefined. Shouldn't it be available there, since I defined it with the NEXT_PUBLIC_ prefix? That is what the official documentation states.
How to get environment variables on client side?
Details: you have
your .env file (I'm not sure how the Docker files affect this logic)
your next.config.js file
The server side has access to these two files.
The client side does not have access to the .env file.
What you can do: in your next.config.js file, declare a variable whose value is your process.env value:
const baseTrustFactor = process.env.trustFactor
IMPORTANT: do not expose your private info (keys/tokens etc.) to the client-side.
If you need to compare the tokens you can:
Send them from the backend (from a Node API or similar)
Make conditions in next.config.js, such as:
const baseTrustFactor = process.env.trustFactor === '21'
Related
I tried to pass my variable from docker-compose.yml to my Docker container, but the container doesn't see the variable's value. I have tried many approaches, all to no avail. Here are my attempts.
First try:
FROM node:alpine3.17 as build
LABEL type="production"
WORKDIR /react-app
COPY package.json ./
COPY package-lock.json ./
RUN npm install
COPY . ./
ARG REACT_APP_BACKEND_URL
ENV REACT_APP_BACKEND_URL=$REACT_APP_BACKEND_URL
RUN npm run build
# production environment
FROM nginx:stable-alpine
COPY --from=build /react-app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Second try:
FROM node:alpine3.17 as build
LABEL type="dev"
WORKDIR /react-app
COPY package.json ./
COPY package-lock.json ./
# The RUN command is only executed while the build image
RUN npm install
COPY . ./
ARG REACT_APP_BACKEND_URL
ENV REACT_APP_BACKEND_URL=$REACT_APP_BACKEND_URL
RUN npm run build
RUN npm install -g serve
EXPOSE 3000
# The CMD command is only executed while the image is running
CMD serve -s build
I built the images from these Dockerfiles, pushed them to Docker Hub with various version tags, and then ran docker-compose.yml from the remote server.
My docker-compose.yml
version: '3'
services:
  stolovaya51-react-static-server:
    container_name: stolovaya51-react-production:0.0.1 (for example)
    build:
      args:
        - REACT_APP_BACKEND_URL=REACT_APP_BACKEND_URL
    ports:
      - "80:80"
      - "3000:3000"
By the way, when I run this on my local machine I see the value of the environment variable, but when I run it on the server I only see the variable name and the value is "".
I don't know the reason. What's the matter?
I have found the answer to my question!
First, I combined the two repositories, frontend and backend, into one project.
Then I redesigned my project structure and gathered the two parts of my application together. I now have this structure:
root_project_folder:
./frontend
...some src
./frontend/docker/Dockerfile
./backend
...some src
./backend/docker/Dockerfile
docker-compose.yml
And now my frontend picks up all args from the docker-compose.yml in the root folder.
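As a sketch of what the root docker-compose.yml can look like after the restructure (the service name and paths are illustrative), the key point is that the build arg must be given a real value, e.g. from the host environment, rather than the literal string REACT_APP_BACKEND_URL:

```yaml
# docker-compose.yml at the project root -- illustrative sketch
services:
  frontend:
    build:
      context: ./frontend
      dockerfile: docker/Dockerfile
      args:
        # substitute the value from the host shell or a root .env file
        REACT_APP_BACKEND_URL: ${REACT_APP_BACKEND_URL}
    ports:
      - "80:80"
```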
I have written the following Dockerfile after looking at many multi-stage build implementations for React. I am trying to have a single Dockerfile for all environments: currently dev + prod, but in the future dev, qa, automation, staging, pt, preprod and prod.
# Create a base image with dependencies
FROM node:16.15.0-alpine3.14 AS base
ENV NPM_CONFIG_LOGLEVEL warn
RUN addgroup app && adduser -S -G app app
USER app
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json yarn.lock ./
# Create a development environment
FROM base AS development
ENV NODE_ENV development
RUN yarn install --frozen-lockfile
COPY . .
EXPOSE 3000
CMD [ "yarn", "start" ]
# Generate build
FROM base AS builder
ENV NODE_ENV production
RUN yarn install --production
COPY . .
RUN yarn build
# Create a production image
FROM nginx:1.21.6-alpine as production
ENV NODE_ENV production
COPY --from=builder /app/build /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
So I have a couple of questions:
When I target production in my docker-compose.yml file, will the development stage in the Dockerfile run as well?
If so, how can I avoid running it? The development stage has a different yarn install command, and it also copies the src folder, which would be redundant in the production stage.
Should I strive to have a single Dockerfile and multiple docker-compose.yml files, like docker-compose.dev.yml, docker-compose.prod.yml, etc.?
You can have a base file, docker-compose.yml, which contains the common services. You can then create docker-compose.prod.yml, docker-compose.dev.yml, etc., which contain environment-specific overrides. When executing the docker compose command you can specify multiple files, and the configuration will be merged and executed as a single file.
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
You can read more here.
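For illustration, an override file for the Dockerfile above might contain little more than the build target (the service name and port are assumptions):

```yaml
# docker-compose.prod.yml -- only production-specific overrides live here
services:
  frontend:
    build:
      context: .
      target: production   # build only up to the `production` stage
    ports:
      - "80:80"
```

With BuildKit enabled, stages the target does not depend on (such as `development`) are skipped entirely, which answers the first question above.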
Is there another way to initialize React env variables in the Dockerfile?
FROM node:14.19 as base
ARG REACT_APP_VERSION
ARG REACT_APP_WEBSITE_NAME
ARG REACT_APP_BACKEND_BASE_URL
ARG REACT_APP_BI_BASE_URL
ARG REACT_APP_DEFAULT_DEPARTEMENT_CODE
ENV REACT_APP_VERSION $REACT_APP_VERSION
ENV REACT_APP_WEBSITE_NAME $REACT_APP_WEBSITE_NAME
ENV REACT_APP_BACKEND_BASE_URL $REACT_APP_BACKEND_BASE_URL
ENV REACT_APP_BI_BASE_URL $REACT_APP_BI_BASE_URL
ENV REACT_APP_DEFAULT_DEPARTEMENT_CODE $REACT_APP_DEFAULT_DEPARTEMENT_CODE
WORKDIR /app
COPY package.json package.json
COPY package-lock.json package-lock.json
RUN npm install
COPY . .
FROM base as development
CMD ["npm", "start"]
FROM base as build
RUN npm run build
FROM nginx:alpine as production
COPY nginx.conf /etc/nginx/conf.d/configfile.template
COPY --from=build /app/build /usr/share/nginx/html
ENV HOST 0.0.0.0
EXPOSE 8080
CMD sh -c "envsubst '\$PORT' < /etc/nginx/conf.d/configfile.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
Currently I have all the variables declared at the top of the Dockerfile, and I have to pass each .env variable as a build arg. I don't like this approach and am looking for another way. I'm using Cloud Build for my production deployment pipeline, so maybe there is a way to do something with it?
You don't need to replicate all your args as ENV if they are only needed for the build process: ARG values are visible as environment variables during the build. Afterwards, when you run the container, they are gone.
This is enough, for the first stage:
FROM node:14.19
ARG REACT_APP_VERSION="unknown" \
    REACT_APP_WEBSITE_NAME="" \
    REACT_APP_BACKEND_BASE_URL="" \
    REACT_APP_BI_BASE_URL="" \
    REACT_APP_DEFAULT_DEPARTEMENT_CODE="" \
    NODE_ENV="production"
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install
COPY . .
RUN npm run build
For your second stage you can use nginx's templates directory: the stock entrypoint will run envsubst for you and copy the templates into the conf.d directory, removing the .template suffix.
It will run the following script amongst others: https://github.com/nginxinc/docker-nginx/blob/master/entrypoint/20-envsubst-on-templates.sh
So you can create templates with variable placeholders and put them in /etc/nginx/templates.
In your case, you want to put /etc/nginx/templates/default.conf.template.
FROM nginx:alpine
ENV HOST=0.0.0.0 PORT=8080
EXPOSE 8080
COPY nginx.conf /etc/nginx/templates/default.conf.template
COPY --from=0 /app/build /usr/share/nginx/html
That way, you can also keep the default entrypoint and command. You don't have to set your own.
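As a sketch, such a template could look like the following; everything except the ${PORT} placeholder is ordinary nginx config, and the paths mirror the COPY above:

```nginx
# /etc/nginx/templates/default.conf.template
server {
    listen       ${PORT};
    server_name  localhost;

    location / {
        root   /usr/share/nginx/html;
        index  index.html;
        # serve index.html for client-side routes
        try_files $uri $uri/ /index.html;
    }
}
```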
I would also suggest using the unprivileged version of nginx, which listens on 8080 by default: https://hub.docker.com/r/nginxinc/nginx-unprivileged. This image is generally useful if you plan to drop capabilities, which is advised for production.
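Switching to the unprivileged image is essentially a one-line change in the final stage (sketch; the tag is chosen for illustration):

```dockerfile
FROM nginxinc/nginx-unprivileged:alpine
# listens on 8080 and runs as a non-root user by default
EXPOSE 8080
```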
I am trying to deploy my create-react-app to Elastic Beanstalk with Docker.
I have set up CodePipeline with CodeBuild and Elastic Beanstalk.
I am getting this error
Stop running the command. Error: Dockerfile and Dockerrun.aws.json are both missing, abort deployment
My Dockerfile looks like this
FROM tiangolo/node-frontend:10 as build-stage
# Create app directory
# RUN mkdir -p /usr/src/app
# WORKDIR /usr/src/app
WORKDIR /app
# # fix npm private module
# ARG NPM_TOKEN
# COPY .npmrc /app/
#COPY package.json package.json
COPY package*.json /app/
COPY Dockerrun.aws.json /app/
RUN npm install
COPY ./ /app/
# RUN CI=true npm test
RUN npm run build
# FROM nginx:1.15
FROM nginx:1.13.3-alpine
# Install app dependencies
# Stage 1, based on Nginx, to have only the compiled app, ready for production with Nginx
COPY --from=build-stage /app/build/ /usr/share/nginx/html
# Copy the default nginx.conf provided by tiangolo/node-frontend
COPY --from=build-stage /nginx.conf /etc/nginx/conf.d/default.conf
RUN ls
EXPOSE 80
I also have a Dockerrun.aws.json
{
  "AWSEBDockerrunVersion": "3",
  "Image": {
    "Name": "something.dkr.ecr.us-east-2.amazonaws.com/subscribili:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "5000"
    }
  ],
  "Logging": "/var/log/nginx"
}
my buildspec.yml file looks like this
version: 0.2
phases:
  pre_build:
    commands:
      - $(aws ecr get-login --region $AWS_DEFAULT_REGION --no-include-email)
      - REPOSITORY_URI=something.dkr.ecr.us-east-2.amazonaws.com/subscribili
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=${COMMIT_HASH:=latest}
  build:
    commands:
      - docker build -t $REPOSITORY_URI:latest .
      - docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      - docker push $REPOSITORY_URI:latest
      - docker push $REPOSITORY_URI:$IMAGE_TAG
      - printf '[{"name":"nginx","imageUri":"%s"}]' $REPOSITORY_URI:$IMAGE_TAG > imagedefinitions.json
artifacts:
  files:
    - imagedefinitions.json
I am sure there is some issue with the buildspec file, but I am just not sure what.
I have read all the documentation and still couldn't figure out how to write the buildspec file for Docker.
Is there anything I am missing?
Dockerfile and Dockerrun.aws.json both need to be in the directory where the command COPY Dockerrun.aws.json /app/ is run. Make sure these files exist in that directory and this error should disappear.
The "eb deploy" command creates a zip file from your code. However, to make it as small as possible, it only takes files that are committed to git. So if you did not commit the Dockerfile and Dockerrun file, these two files won't be included in the zip.
If you do not want this behavior, you can add a .ebignore file to your project's root directory. Its syntax is the same as the .gitignore file's; you can copy everything from .gitignore to .ebignore. If there is a .ebignore, the CLI will not check whether the project is committed to source control.
To check what is included in the zip file, watch the .elasticbeanstalk folder after the "eb deploy" command. When the zip is prepared, copy it immediately and paste it into another folder. Note: the original zip file is removed after the CLI uploads it.
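A minimal .ebignore might look like this (the entries are illustrative; the syntax is the same as .gitignore):

```
# .ebignore -- same syntax as .gitignore
node_modules/
build/
.elasticbeanstalk/
```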
PROBLEM
Hey, I have not used Docker much. I am trying to run my Jest tests through the Dockerfile, but I'm getting this error when trying to build the image:
ERROR
Step 13/16 : RUN if [ "$runTests" = "True" ]; then RUN npm test; fi
---> Running in ccdb3f89fb79
/bin/sh: RUN: not found
Dockerfile
FROM node:10-alpine as builder
ARG TOKEN
WORKDIR /app
ARG runTests
COPY .npmrc-pipeline .npmrc
COPY package*.json ./
RUN npm install
COPY . .
RUN rm -f .npmrc
ENV PORT=2000
ENV NODE_ENV=production
RUN if [ "$runTests" = "True" ]; then \
RUN npm test; fi
RUN npm run build
EXPOSE 2000
CMD ["npm", "start"]
The command I am using to build the image is below; the idea is to run the tests only when runTests=True.
docker build -t d-image --build-arg runTests="True" --build-arg "MY TOOOOOKEN"
Is this possible to do with just the Dockerfile, or is it necessary to use docker-compose as well?
The conditional statement seems to work fine.
It is not possible to have two CMD commands.
I have tried this as a workaround (but it did not work):
Dockerfile
FROM node:10-alpine as builder
ARG TOKEN
WORKDIR /app
ARG runTests
COPY .npmrc-pipeline .npmrc
COPY package*.json ./
RUN npm install
COPY . .
RUN rm -f .npmrc
ENV PORT=3000
ENV NODE_ENV=production
RUN npm run build
EXPOSE 3000
CMD if [ "$runTests" = "True" ]; then \
CMD ["npm", "test"] && ["npm", "start"] ;fi
Now I'm not getting any output from the tests, but the build looks successful.
PROGRESS
I have made some progress, and the tests actually run when I build the image. I also decided to use the RUN command for the tests so that they run during the build step.
Dockerfile:
FROM node:10-alpine as builder
ARG TOKEN
WORKDIR /app
COPY .npmrc-pipeline .npmrc
COPY package*.json ./
RUN npm install
COPY . .
RUN rm -f .npmrc
ENV PORT=3000
ENV NODE_ENV=production
RUN npm run build
RUN npm test
EXPOSE 3000
Error:
FAIL src/pages/errorpage/tests/accessroles.test.jsx
● Test suite failed to run
Jest encountered an unexpected token
This usually means that you are trying to import a file which Jest cannot parse, e.g. it's not plain JavaScript.
By default, if Jest sees a Babel config, it will use that to transform your files, ignoring "node_modules".
Here's what you can do:
• If you are trying to use ECMAScript Modules, see https://jestjs.io/docs/en/ecmascript-modules for how to enable it.
• To have some of your "node_modules" files transformed, you can specify a custom "transformIgnorePatterns" in your config.
• If you need a custom transformation specify a "transform" option in your config.
• If you simply want to mock your non-JS modules (e.g. binary assets) you can stub them out with the "moduleNameMapper" config option.
You'll find more details and examples of these config options in the docs:
https://jestjs.io/docs/en/configuration.html
Details:
It seems to me that the docker build process does not use the jest: {...} configuration in my package.json, even though the file is copied and dependencies are installed in the Dockerfile. Any ideas?
RUN and CMD aren't commands; they're instructions that tell Docker what to do when building your container. So e.g.:
RUN if [ "$runTests" = "True" ]; then \
RUN npm test; fi
doesn't make sense: RUN <command> runs a shell command, but RUN isn't defined in the shell. It should just be:
ARG runTests # you need to define the argument too
RUN if [ "$runTests" = "True" ]; then \
npm test; fi
The cleaner way to do this is to set up npm as the entrypoint, and start as the specific command:
ENTRYPOINT [ "npm" ]
CMD [ "start" ]
This allows you to build the container normally (it doesn't require any build arguments) and then run an npm script other than start in the container, e.g. to run npm test:
docker run <image> test
However, note that this means all of the dev dependencies need to be in the container. It looks (from ENV NODE_ENV=production) like you intend this to be a production build, so you shouldn't be running the tests in the container at all. Also, despite the as builder, this isn't really a multi-stage build. The idiomatic script for this would be something like:
# stage 1: copy the source and build the app
FROM node:10-alpine as builder
ARG TOKEN
WORKDIR /app
COPY .npmrc-pipeline .npmrc
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# stage 2: copy the output and run it in production
FROM node:10-alpine
WORKDIR /app
ENV PORT=3000
ENV NODE_ENV=production
COPY --from=builder /app/package*.json ./
RUN npm ci
COPY --from=builder /* your build output */
EXPOSE 3000
ENTRYPOINT [ "npm" ]
CMD [ "start" ]
See e.g. this Dockerfile I put together for a full-stack React/Express app.
A couple of things on this:
Two CMD commands are possible (but NOT recommended), as I did here: https://hub.docker.com/repository/docker/djangofan/mountebank-with-ui-node
Also, I wouldn't recommend running your tests while building your container image. Instead, build the image so that it maps a folder to the location of your test files, then include your "temporary test image" in your compose file.
version: '3.4'
services:
  service-api:
    container_name: service-api
    build:
      context: .
      dockerfile: Dockerfile-apibase
    ports:
      - "8083:8083"
  e2e-tests:
    container_name: e2e-tests
    build:
      context: .
      dockerfile: Dockerfile-testbase
    command: bash -c "wait-for-it.sh service-api:8083 && gradle -q clean test -Dorg.gradle.project.buildDir=/usr/src/example"
Then execute it like so to get a 0 or 1 exit code:
docker-compose up --exit-code-from e2e-tests
After running that, the service will remain running, but the tests will shut down when finished.
Hopefully that makes sense even though my example is not exactly your situation. Here is a LINK to my example from above, which you can try yourself. It should work similarly for Jest tests.