What is the reason for "docker: Error response from daemon: No command specified."?

Starting from my last commit yesterday, I have been encountering a strange problem where GitLab CI fails continuously with the error shown below:
$ ssh -i $SSH_PRIVATE_KEY -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker run -d -p 8010:80 --name my_project $TAG_LATEST"
docker: Error response from daemon: No command specified.
See 'docker run --help'.
Cleaning up project directory and file based variables
ERROR: Job failed: exit code 125
This is a React app which should be built and deployed to my server.
This is my Dockerfile:
FROM node:16.14.0-alpine as build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
FROM nginx:1.19-alpine
COPY --from=build /app/build /usr/share/nginx/html
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx/nginx.conf /etc/nginx/conf.d
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
In my .gitlab-ci.yml file I have 2 stages, build and deploy. For more clarity I would like to share it as well:
stages:
  - build
  - deploy

variables:
  TAG_LATEST: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:latest
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: "/certs"

build_test:
  image: docker:latest
  stage: build
  services:
    - docker:19.03.0-dind
  script:
    - docker build -t $TAG_LATEST .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - docker push $TAG_LATEST
  environment:
    name: test
  rules:
    - if: $CI_COMMIT_BRANCH == "develop"
      when: on_success

deploy_test:
  image: alpine:latest
  stage: deploy
  only:
    - develop
  tags:
    - frontend
  script:
    - chmod og= $SSH_PRIVATE_KEY
    - apk update && apk add openssh-client
    - ssh -i $SSH_PRIVATE_KEY -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY"
    - ssh -i $SSH_PRIVATE_KEY -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker pull $TAG_LATEST"
    - ssh -i $SSH_PRIVATE_KEY -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker container rm -f my_project || true"
    - ssh -i $SSH_PRIVATE_KEY -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker run -d -p 8010:80 --name my_project $TAG_LATEST"
  environment:
    name: test
I did the following steps:
1st: I tried to run the image manually on my server using the docker run command. The result was the same!
2nd: I pulled the project from GitLab into a brand-new folder, ran docker build -t my_project . and then docker run, just as on the server. It worked on my local machine.
3rd: I re-verified my code base with ChatGPT and it found no errors.
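As far as I understand, this error means the image Docker pulled has neither a CMD nor an ENTRYPOINT recorded in its metadata, even though my Dockerfile sets one. A quick way to check that on the server (a sketch using the standard docker inspect Go template):
# Print the command metadata recorded in the image; empty values here
# line up with "No command specified".
docker image inspect --format 'CMD={{.Config.Cmd}} ENTRYPOINT={{.Config.Entrypoint}}' $TAG_LATEST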

Currently there's an issue with the latest Docker release.
Use:
image: docker:20.10.22
instead of:
image: docker:latest
For more details check this link: https://gitlab.com/gitlab-org/gitlab-runner/-/issues/29593#note_1263383415
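Some background that may explain the pin (general Docker knowledge, not from this thread): Docker 23.0 made BuildKit the default builder, which is roughly when docker:latest started behaving differently in dind setups. If pinning the image is not an option, a hedged alternative to try is forcing the classic builder in the build job:
# Assumption on my part, not confirmed in the linked issue: disable
# BuildKit so the build behaves like older Docker releases.
export DOCKER_BUILDKIT=0
docker build -t $TAG_LATEST .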

I have been getting the same error for 3 days.
I also use GitLab CI.
I solved the issue by adding this line to my docker-compose.yml file (in the service section involved):
command: ["nginx", "-g", "daemon off;"]
I have not yet found the root cause of the issue.
Hope that helps.
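For a plain docker run deployment like the one in the question, the equivalent workaround should be passing the command explicitly after the image name, which overrides the missing CMD (a sketch along the same lines, untested):
# Same idea as the compose workaround above: supply the command the
# image's CMD would normally provide.
docker run -d -p 8010:80 --name my_project $TAG_LATEST nginx -g 'daemon off;'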

Related

How to correctly tag a docker image built with gitlab-ci?

I am setting up CI/CD with GitLab.
What I want to do:
create a dev tagged image
upload the image to my registry
deploy the dev image on the dev server
create the prod tagged image
upload the image to my registry
deploy the prod image
The application is developed in React and is based on an API.
The only difference between the dev-tagged image and the prod-tagged one is the URL of the API.
(The dev React application must point to the dev API and the prod one to the prod API.)
For the moment, I've done this:
stages:
  - docker-build
  - deploy
  - deploy-prod

docker-build:
  stage: docker-build
  image: docker:latest
  services:
    - name: docker:19.03.8-dind
  script:
    - sudo docker login -u registry_user -p 'MYPASWORD' https://XXXXX.com
    - sudo docker build -t XXXXX.com/myapp .
    - sudo docker push XXXXX.com/myapp
    - sudo docker rmi XXXXX.com/myapp
    - sudo docker image prune -f
    - sudo docker logout
  tags:
    - build

deploy:
  stage: deploy
  image: kroniak/ssh-client
  before_script:
    - echo "deploying app"
  script:
    - sudo ln -s docker-compose.yml.dev docker-compose.yml
    - sudo docker login -u registry_user -p 'MYPASSWORD' https://XXXXX.com
    - sudo docker-compose pull
    - sudo docker logout
    - sudo docker-compose up -d
  tags:
    - dev

deploy-prd1:
  stage: deploy-prod
  image: kroniak/ssh-client
  before_script:
    - echo "deploying app"
  script:
    - sudo ln -s docker-compose.yml.prd docker-compose.yml
    - sudo docker login -u registry_user -p 'MYPASSWORD' https://XXXX.com
    - sudo docker-compose pull
    - sudo docker logout
    - sudo docker-compose up -d
  tags:
    - prodg
  when: manual

deploy-prd2:
  stage: deploy-prod
  image: kroniak/ssh-client
  before_script:
    - echo "deploying app"
  script:
    - sudo ln -s docker-compose.yml.prd docker-compose.yml
    - sudo docker login -u registry_user -p 'MYPASSWORD' https://XXXX.com
    - sudo docker-compose pull
    - sudo docker logout
    - sudo docker-compose up -d
  tags:
    - prods
  when: manual
In my docker-compose.yml.prd and docker-compose.yml.dev, I have:
image: XXXXX.com/myapp:prod or image: XXXXX.com/myapp:dev
The URL of the API is defined in a .js file as a constant:
const url="api-XXXX.com"
What is the best way to build 2 different images with 2 different urls?
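One approach I would consider (a sketch, not from this thread): turn the URL into a build argument and bake a different value into each image at build time. This assumes the app reads the URL from an environment variable instead of the hard-coded constant, and that it is built with create-react-app, which exposes REACT_APP_* variables to the build; the URLs below are hypothetical placeholders:
# Hypothetical Dockerfile additions, placed before the build step:
#   ARG REACT_APP_API_URL
#   ENV REACT_APP_API_URL=$REACT_APP_API_URL
# The .js file would then use process.env.REACT_APP_API_URL.
sudo docker build --build-arg REACT_APP_API_URL=https://api-dev-XXXX.com -t XXXXX.com/myapp:dev .
sudo docker build --build-arg REACT_APP_API_URL=https://api-XXXX.com -t XXXXX.com/myapp:prod .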

How to deploy a react app to an Ubuntu server with a Bitbucket pipeline

I want to build and deploy my React app from my master branch. I have managed to automate the build, but I am unable to transfer it to my server. Find my pipeline code below; I receive the error shown after it.
pipelines:
  default:
    - step:
        name: Build Title
        script:
          - npm install
          - npm run build
          - mkdir packaged
          - tar -czvf packaged/package-${BITBUCKET_BUILD_NUMBER}.tar.gz -C build .
        artifacts:
          - packaged/**
    - step:
        name: Deploy to Web
        image: alpine
        trigger: manual
        deployment: production
        script:
          - mkdir upload
          - tar -xf packaged/package-${BITBUCKET_BUILD_NUMBER}.tar.gz -C upload
          - apk update && apk add openssh rsync
          - rsync -a -e "ssh -o StrictHostKeyChecking=no" --delete upload/ $USERNAME@$SERVER:html/temp/react-${BITBUCKET_BUILD_NUMBER}
          - ssh -o StrictHostKeyChecking=no $USERNAME@$SERVER "rm -r html/www"
          - ssh -o StrictHostKeyChecking=no $USERNAME@$SERVER "mv 'html/temp/react-${BITBUCKET_BUILD_NUMBER}' 'var/www/html/deploy'"
          - ssh -o StrictHostKeyChecking=no $USERNAME@$SERVER "chmod -R u+rwX,go+rX,go-w html/www"
Error Log
+ rsync -a -e "ssh -o StrictHostKeyChecking=no" --delete upload/ $USERNAME@$SERVER:html/temp/react-${BITBUCKET_BUILD_NUMBER}
load pubkey "/opt/atlassian/pipelines/agent/ssh/id_rsa": invalid format
rsync: mkdir "/$USERNAME/html/temp/react-15" failed: No such file or directory (2)
rsync error: error in file IO (code 11) at main.c(675) [Receiver=3.1.2]
I noticed this happens only on Alpine-based images; Debian images, for example, work fine. It also happens on Buddy, not just on Bitbucket, so I expect this is an upstream Alpine bug/issue.
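Separately, the rsync: mkdir ... failed: No such file or directory line is its own problem: rsync creates only the final directory of the destination path, not missing parent directories. A pre-step like this would rule that out (my assumption, untested):
# Make sure the parent path exists before rsync tries to create the
# react-${BITBUCKET_BUILD_NUMBER} directory inside it.
ssh -o StrictHostKeyChecking=no $USERNAME@$SERVER "mkdir -p html/temp"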
I was using that same script as well. Below is what ended up working for me after a lot of banging my head against the screen; updating the image and adding the upload artifacts seemed to be the kicker.
default:
  - step:
      name: Build React Project
      script:
        - npm install
        - npm run-script build
        - mkdir packaged
        - tar -czvf packaged/package-${BITBUCKET_BUILD_NUMBER}.tar.gz -C build .
      artifacts:
        - packaged/**
  - step:
      name: Deploy to Web
      image: atlassian/default-image:latest
      trigger: manual
      deployment: production
      script:
        - mkdir upload
        - tar -xf packaged/package-${BITBUCKET_BUILD_NUMBER}.tar.gz -C upload
        - rsync -a --delete upload/ $USERNAME@$SERVER:/home/temp/react-${BITBUCKET_BUILD_NUMBER}
        - ssh $USERNAME@$SERVER "rm -r /home/www"
        - ssh $USERNAME@$SERVER "mv '/home/temp/react-${BITBUCKET_BUILD_NUMBER}' '/home/www'"
        - ssh $USERNAME@$SERVER "chmod -R u+rwX,go+rX,go-w /home/www"
      artifacts:
        - upload/**

How can I expose more than 1 port with Dockerfile/Nginx/Heroku

I'm able to expose two services locally on my computer with the following command: docker run -d --name #containerName -e "PORT=8080" -p 8080:8080 -p 5000:5000 #imageName.
On port 5000, I expose my backend using a flask-restful api and on port 8080, I expose my frontend using nginx to serve my react.js application.
When I deploy that on the Heroku platform, I have two problems.
It seems Heroku tries to bind Nginx to port 80, but the PORT env var is different. Log output:
Starting process with command /bin/sh -c gunicorn\ --bind\ 0.0.0.0:5000\ app:app\ --daemon\ \&\&\ sed\ -i\ -e\ \'s/\34352/\'\"\34352\"\'/g\'\ /etc/nginx/conf.d/default.conf\ \&\&\ nginx\ -g\ \'daemon\ off\;\'
nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:1
[emerg] bind() to 0.0.0.0:80 failed (13: Permission denied)
How can I write the -p 8080:8080 -p 5000:5000 part inside the Dockerfile, or hack around it, since I can't specify the docker run [...] command on Heroku?
I'm new to Docker and Nginx, so I would be very grateful if you know a better way to achieve my goal. Thanks in advance.
# ------------------------------------------------------------------------------
# Temporary image for react.js app using a multi-stage build
# ref : https://docs.docker.com/develop/develop-images/multistage-build/
# ------------------------------------------------------------------------------
FROM node:latest as build-react
# create a shared folder and define it as current dir
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
# copy the files required for node packages installation
COPY ./react-client/package.json ./
COPY ./react-client/yarn.lock ./
# install dependencies, copy code and build bundles
RUN yarn install
COPY ./react-client .
RUN yarn build
# ------------------------------------------------------------------------------
# Production image based on ubuntu:latest with nginx & python3
# ------------------------------------------------------------------------------
FROM ubuntu:latest as prod-react
WORKDIR /usr/src/app
# update, upgrade and install packages
RUN apt-get update && apt-get install -y --no-install-recommends apt-utils
RUN apt-get upgrade -y
RUN apt-get install -y nginx curl python3 python3-distutils python3-apt
# install pip
RUN curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
RUN python3 get-pip.py
# copy flask-api requirements file and install modules
COPY ./flask-api/requirements.txt ./
RUN pip install -r requirements.txt
RUN pip install gunicorn
# copy flask code
COPY ./flask-api/app.py .
# copy built image and config onto nginx directory
COPY --from=build-react /usr/src/app/build /usr/share/nginx/html
COPY ./conf.nginx /etc/nginx/conf.d/default.conf
# ------------------------------------------------------------------------------
# Serve flask-api with gunicorn and react-client with nginx
# Ports :
# - 5000 is used for flask-api
# - 8080 is used by nginx to serve react-client
# You can change them but you'll have to change :
# - for flask-api : conf.nginx, axios calls (5000 -> #newApiPort)
# - for react-client : CORS origins (8080 -> #newClientPort)
#
# To build and run this :
# docker build -t #imageName .
# docker run -d --name #containerName -e "PORT=8080" -p 8080:8080 -p 5000:5000 #imageName
# ------------------------------------------------------------------------------
CMD gunicorn --bind 0.0.0.0:5000 app:app --daemon && \
sed -i -e 's/$PORT/'"$PORT"'/g' /etc/nginx/conf.d/default.conf && \
nginx -g 'daemon off;'
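On the port question itself (general Heroku behavior, not from this thread): a Heroku web dyno is routed exactly one port, given in $PORT, so there is no equivalent of -p 8080:8080 -p 5000:5000 there. The usual pattern is to serve everything through nginx on $PORT and reverse-proxy API calls to gunicorn on localhost. A hedged sketch of what conf.nginx could look like for that (the paths and the /api prefix are assumptions):
# Written as a shell heredoc only for illustration; the content is the
# nginx config. $PORT stays literal here and is replaced by the sed in CMD.
cat > conf.nginx <<'EOF'
server {
    listen $PORT;
    root /usr/share/nginx/html;
    location /api/ {
        proxy_pass http://127.0.0.1:5000/;   # gunicorn stays internal
    }
    location / {
        try_files $uri /index.html;          # client-side routing fallback
    }
}
EOF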

Developing react app via docker container

I am trying to develop a simple React app and I am using Docker to run a development server, but it is not reachable in the browser.
Here is the Dockerfile.dev
FROM node:alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "run", "start"]
EXPOSE 3000
Here are the two commands to create and run the container
docker build -f Dockerfile.dev .
docker run -p 3000:3000 <image_id>
It starts a development server just as a normal npm start does, but it is not reachable in the browser at localhost:3000.
Your container will exit if you did not mention -it or -dit (if it's not a typo in the question). The reason it stops immediately is that bash can't find any pseudo-terminal to be allocated. You have to specify -it or -dit so that bash or sh can be allocated a pseudo-terminal.
docker run --name test -p 3000:3000 <image_id>
If you run docker ps | grep test you will see in the output
"/bin/bash" {some} seconds ago Exited (0) {some} seconds ago
Now try to run with
docker run --name test -dit -p 3000:3000 <image_id>
or
docker run --name test -it -p 3000:3000 <image_id>
Good to go at localhost:3000.
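Not from the answer, just a quick sanity check I'd add: confirm the container actually stayed up and see what the dev server printed before blaming the port mapping.
# List the container by name and tail its logs (name 'test' as used above).
docker ps --filter name=test
docker logs test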
Update:
For Windows with Docker Toolbox, follow these steps:
Click the appropriate machine (probably the one labeled "default").
Settings
Network > Adapter 1 > Advanced > Port Forwarding
Click "+" to add a new rule. Set Host Port 3000 and Guest Port 3000; be sure to leave Host IP and Guest IP empty.
Run the command:
docker run -dit -p 3000:3000 ${image_id}

How to deploy react project to ftp using Bitbucket Pipelines?

I am trying to set up a bitbucket-pipelines.yml file to build and then deploy a React project. My code is below.
image: node:10.15.1

pipelines:
  default: # Pipelines that are triggered manually via the Bitbucket GUI
    - step:
        name: Build
        script:
          - yarn
          - yarn build
    - step:
        name: Deploy
        script:
          - apt-get update
          - apt-get install ncftp
          - ncftpput -v -u "$FTP_USERNAME" -p "$FTP_PASSWORD" -R $FTP_HOST $FTP_SITE_ROOT_DEV build/*
          - echo Finished uploading /build files to $FTP_HOST$FTP_SITE_ROOT
I am getting the result:
+ ncftpput -v -u "$FTP_USERNAME" -p "$FTP_PASSWORD" -R $FTP_HOST $FTP_SITE_ROOT_DEV build/*
could not stat build/*: No such file or directory.
ncftpput build/*: no valid files were specified.
It says that there is no build file or directory, but yarn build does actually create the build folder (it runs react-scripts build).
From the Atlassian documentation:
Key concepts
A pipeline is made up of a set of steps.
Each step in your pipeline runs a separate Docker container. If you want, you can use different types of container for each step, by selecting different images.
So when you try to send it in the Deploy step, it's not there, because you built it in another container.
To pass files between steps you have to use artifacts:
image: node:10.15.1

pipelines:
  default: # Pipelines that are triggered manually via the Bitbucket GUI
    - step:
        name: Build
        script:
          - yarn
          - yarn build
        artifacts: # defining build/ as an artifact
          - build/**
    - step:
        name: Deploy
        script:
          - apt-get update
          - apt-get install ncftp
          - ncftpput -v -u "$FTP_USERNAME" -p "$FTP_PASSWORD" -R $FTP_HOST $FTP_SITE_ROOT_DEV build/*
          - echo Finished uploading /build files to $FTP_HOST$FTP_SITE_ROOT
