PROBLEM
Hey, I have not used Docker much. I am trying to run my Jest tests through the Dockerfile, but I'm getting this error when trying to build the image:
ERROR
Step 13/16 : RUN if [ "$runTests" = "True" ]; then RUN npm test; fi
---> Running in ccdb3f89fb79
/bin/sh: RUN: not found
Dockerfile
FROM node:10-alpine as builder
ARG TOKEN
WORKDIR /app
ARG runTests
COPY .npmrc-pipeline .npmrc
COPY package*.json ./
RUN npm install
COPY . .
RUN rm -f .npmrc
ENV PORT=2000
ENV NODE_ENV=production
RUN if [ "$runTests" = "True" ]; then \
RUN npm test; fi
RUN npm run build
EXPOSE 2000
CMD ["npm", "start"]
The command I am using to build the image is this, and the idea is to be able to run the tests only when runTests=True.
docker build -t d-image --build-arg runTests="True" --build-arg "MY TOOOOOKEN"
Is this possible to do by just using the Dockerfile? Or is it necessary to use docker-compose as well?
The conditional statement seems to work fine. It's not possible to have two CMD instructions.
I have tried this as a workaround (but it did not work):
Dockerfile
FROM node:10-alpine as builder
ARG TOKEN
WORKDIR /app
ARG runTests
COPY .npmrc-pipeline .npmrc
COPY package*.json ./
RUN npm install
COPY . .
RUN rm -f .npmrc
ENV PORT=3000
ENV NODE_ENV=production
RUN npm run build
EXPOSE 3000
CMD if [ "$runTests" = "True" ]; then \
CMD ["npm", "test"] && ["npm", "start"] ;fi
Now I'm not getting any output from the tests, but the build looks successful.
PROGRESS
I have made some progress: the tests now actually run when I build the image. I also decided to use the RUN instruction for the tests, so that they run during the build step.
Dockerfile:
FROM node:10-alpine as builder
ARG TOKEN
WORKDIR /app
COPY .npmrc-pipeline .npmrc
COPY package*.json ./
RUN npm install
COPY . .
RUN rm -f .npmrc
ENV PORT=3000
ENV NODE_ENV=production
RUN npm run build
RUN npm test
EXPOSE 3000
Error:
FAIL src/pages/errorpage/tests/accessroles.test.jsx
● Test suite failed to run
Jest encountered an unexpected token
This usually means that you are trying to import a file which Jest cannot parse, e.g. it's not plain JavaScript.
By default, if Jest sees a Babel config, it will use that to transform your files, ignoring "node_modules".
Here's what you can do:
• If you are trying to use ECMAScript Modules, see https://jestjs.io/docs/en/ecmascript-modules for how to enable it.
• To have some of your "node_modules" files transformed, you can specify a custom "transformIgnorePatterns" in your config.
• If you need a custom transformation specify a "transform" option in your config.
• If you simply want to mock your non-JS modules (e.g. binary assets) you can stub them out with the "moduleNameMapper" config option.
You'll find more details and examples of these config options in the docs:
https://jestjs.io/docs/en/configuration.html
Details:
It seems to me that the docker build process does not use the jest: {...} configuration in my package.json, even though it is copied and installed in the Dockerfile. Any ideas?
RUN and CMD aren't commands, they're instructions that tell Docker what to do when building your container. So e.g.:
RUN if [ "$runTests" = "True" ]; then \
RUN npm test; fi
doesn't make sense: RUN <command> runs a shell command, but RUN isn't defined in the shell. It should just be:
# you need to define the argument too
ARG runTests
RUN if [ "$runTests" = "True" ]; then \
npm test; fi
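For completeness, the matching build command then needs to pass the build arg, a TOKEN= key, and a trailing build-context path (the latter two are missing from the command in the question; the token value is a placeholder):
docker build -t d-image --build-arg runTests="True" --build-arg TOKEN="MY TOOOOOKEN" .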
The cleaner way to do this is to set up npm as the entrypoint, and start as the specific command:
ENTRYPOINT [ "npm" ]
CMD [ "start" ]
This allows you to build the container normally (it doesn't require any build arguments), then run an npm script other than start in the container, e.g. to run npm test:
docker run <image> test
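Any other npm script can be invoked the same way, and the --entrypoint flag bypasses npm entirely, e.g. for a debugging shell (the lint script below is hypothetical):
docker run <image> run lint
docker run -it --entrypoint sh <image>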
However, note that this means all of the dev dependencies need to be in the container. It looks (from ENV NODE_ENV=production) like you intend this to be a production build, so you shouldn't be running the tests in the container at all. Also, despite the 'as builder' stage name, this isn't really a multi-stage build. The idiomatic script for this would be something like:
# stage 1: copy the source and build the app
FROM node:10-alpine as builder
ARG TOKEN
WORKDIR /app
COPY .npmrc-pipeline .npmrc
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# stage 2: copy the output and run it in production
FROM node:10-alpine
WORKDIR /app
ENV PORT=3000
ENV NODE_ENV=production
COPY --from=builder /app/package*.json ./
RUN npm ci
# copy your build output, e.g.:
COPY --from=builder /app/dist ./dist
EXPOSE 3000
ENTRYPOINT [ "npm" ]
CMD [ "start" ]
See e.g. this Dockerfile I put together for a full-stack React/Express app.
A couple of things on this:
Two commands are possible (but NOT recommended), as I did here: https://hub.docker.com/repository/docker/djangofan/mountebank-with-ui-node
Also, I wouldn't recommend that your tests run while building your container image. Instead, build the container image so that it maps a folder to the location of your test files. Then, include your "temporary test image" in your compose file.
version: '3.4'
services:
  service-api:
    container_name: service-api
    build:
      context: .
      dockerfile: Dockerfile-apibase
    ports:
      - "8083:8083"
  e2e-tests:
    container_name: e2e-tests
    build:
      context: .
      dockerfile: Dockerfile-testbase
    command: bash -c "wait-for-it.sh service-api:8083 && gradle -q clean test -Dorg.gradle.project.buildDir=/usr/src/example"
Then execute it like so to get a 0 or 1 exit code:
docker-compose up --exit-code-from e2e-tests
After running that, the service will remain running but the tests will shut down when finished.
Hopefully that makes sense even though the example I gave is not exactly like your situation. Here is a LINK to my example from above, which you can try yourself. It should work similarly for Jest tests.
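For reference, a hypothetical Jest-flavoured version of the same compose file (the service names, the test Dockerfile, and the wait-for-it.sh invocation are assumptions, not tested config) might look like:
version: '3.4'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
  jest-tests:
    build:
      context: .
      dockerfile: Dockerfile-test
    command: sh -c "./wait-for-it.sh app:3000 -- npm test"
You would run it the same way: docker-compose up --exit-code-from jest-tests.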
Related
I tried to pass a variable from docker-compose.yml to my Docker container, but the container doesn't see the variable's value. I have tried many approaches, all to no avail. Here are my attempts.
First try:
FROM node:alpine3.17 as build
LABEL type="production"
WORKDIR /react-app
COPY package.json ./
COPY package-lock.json ./
RUN npm install
COPY . ./
ARG REACT_APP_BACKEND_URL
ENV REACT_APP_BACKEND_URL=$REACT_APP_BACKEND_URL
RUN npm run build
# production environment
FROM nginx:stable-alpine
COPY --from=build /react-app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Second try:
FROM node:alpine3.17 as build
LABEL type="dev"
WORKDIR /react-app
COPY package.json ./
COPY package-lock.json ./
# The RUN command is only executed while building the image
RUN npm install
COPY . ./
ARG REACT_APP_BACKEND_URL
ENV REACT_APP_BACKEND_URL=$REACT_APP_BACKEND_URL
RUN npm run build
RUN npm install -g serve
EXPOSE 3000
# The CMD command is only executed while the image is running
CMD serve -s build
I built the containers from these Dockerfiles, pushed them to Docker Hub with various versions, and after that ran docker-compose.yml from the remote server.
My docker-compose.yml
version: '3'
services:
  stolovaya51-react-static-server:
    container_name: stolovaya51-react-production:0.0.1 (for example)
    build:
      args:
        - REACT_APP_BACKEND_URL=REACT_APP_BACKEND_URL
    ports:
      - "80:80"
      - "3000:3000"
By the way, when I run this code on my local machine, I see the value of the environment variable, but when I run it on the server, I only see the variable name, and the value is "".
I don't know the reason. What's the matter?
I have found the answer to my question!
First, I combined the two repositories (frontend and backend) into one project.
Then I redesigned my project structure and gathered the two parts of my application together. Now I have this structure:
root_project_folder:
  ./frontend
    ...some src
    ./frontend/docker/Dockerfile
  ./backend
    ...some src
    ./backend/docker/Dockerfile
  docker-compose.yml
And now my frontend picks up all the args from the docker-compose.yml in the root folder.
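For illustration, a root docker-compose.yml matching that layout might look like this (the backend service and the URL value are placeholders):
version: '3'
services:
  frontend:
    build:
      context: ./frontend
      dockerfile: docker/Dockerfile
      args:
        - REACT_APP_BACKEND_URL=https://backend.example.com
    ports:
      - "80:80"
  backend:
    build:
      context: ./backend
      dockerfile: docker/Dockerfile
The key detail is that the args live under build, so they reach the ARG instructions at image-build time, which is when npm run build bakes them into the React bundle.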
I have a Next.js application that I will deploy with Docker. I am passing my environment variables in the Dockerfile and docker-compose.yaml. Next version: 12.1.6.
Dockerfile
# Install dependencies only when needed
FROM node:16-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
# If using npm with a `package-lock.json` comment out above and use below instead
# COPY package.json package-lock.json ./
# RUN npm ci
# Rebuild the source code only when needed
FROM node:16-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry during the build.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN yarn build
# If using npm comment out above and use below instead
# RUN npm run build
# Production image, copy all the files and run next
FROM node:16-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
# I'm declaring it here. "example" is not my real value, it's just a placeholder.
ENV NEXT_PUBLIC_BASE_URL example
# Uncomment the following line in case you want to disable telemetry during runtime.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
# You only need to copy next.config.js if you are NOT using the default configuration
# COPY --from=builder /app/next.config.js ./
COPY --from=builder /app/public ./public
COPY --from=builder /app/package.json ./package.json
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT 3000
CMD ["node", "server.js"]
docker-compose.yaml
version: '3'
services:
  frontend:
    image: caneral:test-1
    ports:
      - '3000:3000'
    environment:
      - NEXT_PUBLIC_BASE_URL=https://example.com/api
I am building with the following command:
docker build -t caneral:test-1 .
Then I run docker-compose:
docker-compose up -d
While I can access the NEXT_PUBLIC_BASE_URL value on the server side, I cannot access it on the client side; it returns undefined. Shouldn't I be able to reach it, since I define it with the NEXT_PUBLIC_ prefix? That's what the official docs state.
How do I get environment variables on the client side?
Details: you have:
Your .env file (I'm not sure how the Docker files will affect this logic)
Your next.config.js file
The server side has access to these two files.
The client side doesn't have access to the .env file.
What you can do:
In your next.config.js file you can declare a variable whose value is your process.env value (see the sketch after this list):
const baseTrustFactor = process.env.trustFactor
IMPORTANT: do not expose your private info (keys/tokens etc.) to the client-side.
If you need to compare the tokens you can:
Send them from the backend (from a Node API or similar)
Make conditions in next.config.js such as:
const baseTrustFactor = process.env.trustFactor == '21' ? true : false
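Putting that together, a minimal next.config.js sketch for the question's variable (assuming the value is present in the environment when the build runs) could be:
// next.config.js
module.exports = {
  env: {
    // inlined into the client bundle when "next build" runs
    NEXT_PUBLIC_BASE_URL: process.env.NEXT_PUBLIC_BASE_URL,
  },
}
Keep in mind that NEXT_PUBLIC_ variables are inlined at build time, so a value supplied only at docker-compose up time will not reach the client bundle.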
I have written the following Dockerfile after looking at lots of implementations for React with multi-stage builds. I am trying to have a single Dockerfile for all environments: currently dev + prod, but in future dev, qa, automation, staging, pt, preprod, prod.
# Create a base image with dependencies
FROM node:16.15.0-alpine3.14 AS base
ENV NPM_CONFIG_LOGLEVEL warn
RUN addgroup app && adduser -S -G app app
USER app
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json yarn.lock ./
# Create a development environment
FROM base AS development
ENV NODE_ENV development
RUN yarn install --frozen-lockfile
COPY . .
EXPOSE 3000
CMD [ "yarn", "start" ]
# Generate build
FROM base AS builder
ENV NODE_ENV production
RUN yarn install --production
COPY . .
RUN yarn build
# Create a production image
FROM nginx:1.21.6-alpine as production
ENV NODE_ENV production
COPY --from=builder /app/build /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
So I have a couple of questions:
In the above Dockerfile, when I target production in my docker-compose.yml file, will it also run the development stage in the Dockerfile?
If so, how can I avoid it being run? The development stage has a different yarn install command, and it also copies the src folder, which would be redundant in the production stage.
Should I strive to have a single Dockerfile and multiple docker-compose.yml files like docker-compose.dev.yml, docker-compose.prod.yml etc?
You can have a base docker-compose.yml file which contains the common services. You can then create docker-compose.prod.yml, docker-compose.dev.yml, etc., which contain environment-specific overrides. When executing the docker-compose command, you can specify multiple files and the configuration will be merged and executed as a single file.
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
You can read more here.
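As a sketch of how the override might look for this Dockerfile (the service name and settings are assumptions based on the question):
# docker-compose.yml (base, shared)
version: '3'
services:
  web:
    build:
      context: .
# docker-compose.prod.yml (production overrides)
version: '3'
services:
  web:
    build:
      target: production
    ports:
      - "80:80"
With BuildKit enabled, building with target: production also skips stages the target doesn't depend on, such as the development stage, which addresses the first question above.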
I've built a React client application backed by an API written in Go. I would like to use Docker to run both of these apps using docker run.
I have the following project structure:
zid
|-- web/ (my React folder)
|-- main.go
|-- Dockerfile
My goal is to run the main.go file in the zid folder and start the web application in the zid/web folder. The main.go file starts an API using Gin Gonic that will listen and serve on port 10000.
So I've tried the following:
# Build the Go API
FROM golang:latest as go_builder
RUN mkdir /zid
WORKDIR /zid
COPY . /zid
RUN GOOS=linux GOARCH=amd64 go build -a -ldflags "-linkmode external -extldflags '-static' -s -w" -o /go/bin/zid
# Build the React application
FROM node:alpine as node_builder
COPY --from=go_builder /zid/web ./
RUN npm install
RUN npm run build
# Final stage build, this will be the container with Go and React
FROM alpine:latest
RUN apk --no-cache add ca-certificates
COPY --from=go_builder /go/bin/zid /go/zid
COPY --from=go_builder /zid/ca /go/ca
COPY --from=node_builder /build ./web
EXPOSE 3000
WORKDIR /go
CMD ./zid
Next I did the following:
Build it with docker build -t zid . (no errors)
Run it with docker run -p 3000:3000 --rm zid
When I run this, it starts up the API, but when I go to http://localhost:3000/ I get a "Page does not work" error: ERR_EMPTY_RESPONSE.
So the API starts up, but the npm build doesn't. I am not sure what I am doing wrong, because the Docker container contains both of the correct folders (go and web).
Everything seems to be in place. What am I missing?
EDIT:
I am using the (*gin.Engine).Run() function to listen and serve on port 10000. In my local build, my React application sends requests to localhost:10000. I have always simply used npm start on the side of my React app (localhost:3000). My goal is to do the same, but all in one Dockerfile.
I am still a little unsure if I should EXPOSE ports 10000 & 3000 in my Dockerfile.
My HandleRequest function:
//Start the router and listen/serve.
func HandleRequests() {
    router := SetupRouter()
    router.Run(":10000")
}
My SetupRouter function:
//Setup the gin router
func SetupRouter() *gin.Engine {
    router := gin.Default()
    router.Use(CORSMiddleware())
    router.POST("/auth/login", login)
    router.POST("/component/deploy", deployComponent)
    router.POST("/project/create", createProject)
    router.POST("/diagram/create", createDiagram)
    router.PATCH("/diagram/update", updateDiagram)
    router.DELETE("/diagram/delete/:id", deleteDiagram)
    router.GET("/diagram/:id", getDiagram)
    router.GET("/project/list", getProjectsByUsername)
    router.GET("/project/:id", getProject)
    router.GET("/project/diagrams/:id", getDiagramsOfProject)
    router.DELETE("/project/delete/:id", deleteProject)
    router.GET("/application/list", applicationList)
    router.GET("/instance/status/:id", getInstanceStatus)
    router.GET("/user", getUser)
    return router
}
Btw, I just want to use the Docker container for development and learning purposes only.
I've used the following multi-stage Docker build to create:
static VueJS UI HTML assets
compiled Go API http server (serving the above HTML assets)
Note: both the Go and VueJS source are downloaded from one git repo, but you could just as easily modify this to copy the two code bases from local development directories.
#
# go build
#
FROM golang:1.16.5 AS go-build
#
# here we pull pkg source directly from git (and all its dependencies)
#
RUN go get github.com/me/vue-go/rest
WORKDIR /go/src/github.com/me/vue-go/rest
RUN CGO_ENABLED=0 go build
#
# node build
#
FROM node:13.12.0 AS node-build
WORKDIR /app/vue-go
COPY --from=go-build go/src/github.com/me/vue-go/vue-go ./
# produces static html 'dist' here:
#
# /app/vue-go/dist
#
RUN npm i && npm run build
#
# final layer: include just go-binary and static html 'dist'
#
FROM scratch
COPY --from=go-build \
/go/src/github.com/me/vue-go/rest/rest \
/app/vue-go
COPY --from=node-build \
app/vue-go/dist \
/app/dist/
CMD ["/app/vue-go"]
I don't use Gin - but to use the native net/http file server to serve APIs and static HTML assets, use something like:
h := http.NewServeMux()

// serve static HTML directory:
if conf.StaticDir != "" {
    log.Printf("serving on '/' static files from %q", conf.StaticDir)
    h.Handle(
        "/",
        http.StripPrefix(
            "/",
            http.FileServer(
                http.Dir(conf.StaticDir), // e.g. "../vue-go/dist" vue.js's html/css/js build directory
            ),
        ),
    )
}

// handle API route(s)
h.Handle("/users",
    authHandler(
        http.HandlerFunc(handleUsers),
    ),
)
and start the service:
s := &http.Server{
    Addr:    ":3000", // external-facing IP/port
    Handler: h,
}
log.Fatal(s.ListenAndServe())
then to build & run:
docker build -t zid .
docker run -p 3000:3000 --rm zid
I've found a solution! I created a script based on the multi-service container approach, and then I run this script in my Dockerfile.
My script (start.sh):
#!/bin/sh
# Start the first process
./zid &
ZID_PID=$!
# Start the second process
cd /web
npm start &
WEB_PID=$!
# Naive check runs checks once a minute to see if either of the processes exited.
# This illustrates part of the heavy lifting you need to do if you want to run
# more than one service in a container. The container exits with an error
# if it detects that either of the processes has exited.
# Otherwise it loops forever, waking up every 60 seconds
while sleep 60; do
  ps -fp $ZID_PID
  ZID_PROCESS_STATUS=$?
  if [ $ZID_PROCESS_STATUS -ne 0 ]; then
    echo "ZID process has already exited."
    exit 1
  fi
  ps -fp $WEB_PID
  WEB_PROCESS_STATUS=$?
  if [ $WEB_PROCESS_STATUS -ne 0 ]; then
    echo "WEB process has already exited."
    exit 1
  fi
done
Here I first start my Go executable and then do an npm start.
In my Dockerfile I do the following:
# Build the Go API
FROM golang:latest as go_builder
RUN mkdir /zid
WORKDIR /zid
COPY . /zid
RUN GOOS=linux GOARCH=amd64 go build -a -ldflags "-linkmode external -extldflags '-static' -s -w" -o /go/bin/zid
# Build the React application
FROM node:alpine as node_builder
COPY --from=go_builder /zid/web ./web
WORKDIR /web
RUN npm install
# Final stage build, this will be the container with Go and React
FROM node:alpine
RUN apk --no-cache add ca-certificates procps
COPY --from=go_builder /go/bin/zid /go/zid
COPY --from=go_builder /zid/static /go/static
COPY --from=go_builder /zid/ca /go/ca
COPY --from=node_builder /web /web
COPY --from=go_builder /zid/start.sh /go/start.sh
RUN chmod +x /go/start.sh
EXPOSE 3000 10000
WORKDIR /go
CMD ./start.sh
Here I create the Go executable, copy my /web folder and npm install it, and in the final build stage I start my ./start.sh script.
This will start my Go application and the React development server. I hope it helps others.
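To try it out, build the image and publish both ports (the API on 10000 and the React dev server on 3000):
docker build -t zid .
docker run -p 3000:3000 -p 10000:10000 --rm zid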
I am trying to deploy my create-react-app to Elastic Beanstalk with Docker.
I have set up CodePipeline with CodeBuild and Elastic Beanstalk.
I am getting this error:
Stop running the command. Error: Dockerfile and Dockerrun.aws.json are both missing, abort deployment
My Dockerfile looks like this
FROM tiangolo/node-frontend:10 as build-stage
# Create app directory
# RUN mkdir -p /usr/src/app
# WORKDIR /usr/src/app
WORKDIR /app
# # fix npm private module
# ARG NPM_TOKEN
# COPY .npmrc /app/
#COPY package.json package.json
COPY package*.json /app/
COPY Dockerrun.aws.json /app/
RUN npm install
COPY ./ /app/
# RUN CI=true npm test
RUN npm run build
# FROM nginx:1.15
FROM nginx:1.13.3-alpine
# Install app dependencies
# Stage 1, based on Nginx, to have only the compiled app, ready for production with Nginx
COPY --from=build-stage /app/build/ /usr/share/nginx/html
# Copy the default nginx.conf provided by tiangolo/node-frontend
COPY --from=build-stage /nginx.conf /etc/nginx/conf.d/default.conf
RUN ls
EXPOSE 80
I also have a Dockerrun.aws.json
{
  "AWSEBDockerrunVersion": "3",
  "Image": {
    "Name": "something.dkr.ecr.us-east-2.amazonaws.com/subscribili:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "5000"
    }
  ],
  "Logging": "/var/log/nginx"
}
My buildspec.yml file looks like this:
version: 0.2
phases:
  pre_build:
    commands:
      - $(aws ecr get-login --region $AWS_DEFAULT_REGION --no-include-email)
      - REPOSITORY_URI=something.dkr.ecr.us-east-2.amazonaws.com/subscribili
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=${COMMIT_HASH:=latest}
  build:
    commands:
      - docker build -t $REPOSITORY_URI:latest .
      - docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      - docker push $REPOSITORY_URI:latest
      - docker push $REPOSITORY_URI:$IMAGE_TAG
      - printf '[{"name":"nginx","imageUri":"%s"}]' $REPOSITORY_URI:$IMAGE_TAG > imagedefinitions.json
artifacts:
  files:
    - imagedefinitions.json
I am sure there is some issue with the buildspec file, but I am just not sure what.
I have read all the documentation and still couldn't figure out how to write the buildspec file for Docker.
Is there anything I am missing?
The Dockerfile and Dockerrun.aws.json files need to be in the same directory where the command "COPY Dockerrun.aws.json /app/" is run. Make sure these files exist in that directory and this error should disappear.
"eb deploy" command creates a zip file from your code. However, to make it as small as possible, it only takes the file that are commited to git. So, if you did not commit Dockerfile and Dockerrun file, these two files won't be included in the zip.
If you do not want it to behave like this, you can add .ebignore file to your projects root directory. This files commands are the same as the gitignore file; you can copy everything from gitignore to ebignore. If there is a .ebignore, cli will not check if the project is commited to a source control.
Now to check what is included in zip file, watch the .elasticbeanstalk folder after "eb deploy" command. When the zip is prepared, copy it immediately and paste to another folder. Note: the original zip file will be removed after the cli upload that.
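If your .gitignore already excludes the right things, a quick starting point (just a sketch, adjust as needed) is:
cp .gitignore .ebignore
eb deploy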