Issue while starting Python, Spark and Gunicorn - capistrano3

I am trying to restart Python, Gunicorn and Spark as soon as Capistrano completes the deployment, but I am getting the error below. However, when I execute these commands on the server over SSH, they work fine.
Function in deploy.rb:
desc 'Restart django'
task :restart_django do
  on roles(:django), in: :sequence, wait: 5 do
    within "#{fetch(:deploy_to)}/current/" do
      execute "cd #{fetch(:deploy_to)}/current/ && source bin/activate"
      execute "sudo /etc/init.d/supervisor stop && sudo fuser -k 8000/tcp && pkill -f python && pkill -f gunicorn && pkill -f spark"
      # execute "cd /home/ubuntu/code/spark-2.1.0-bin-hadoop2.7/sbin/ && ./start-master.sh && ./start-slave.sh spark://127.0.0.1:7077;"
      # execute "sleep 20"
      # execute "cd /home/ubuntu/code/ && nohup gunicorn example.wsgi:application --name example --workers 4 &"
    end
  end
end
Deploy Output:
cap dev deploy:restart_django
Using airbrussh format.
Verbose output is being written to log/capistrano.log.
00:00 deploy:restart_django
01 cd /home/ubuntu/code/ && source bin/activate
✔ 01 ubuntu@xx-xx-xx-xx-xx.us-west-1.compute.amazonaws.com 2.109s
02 sudo /etc/init.d/supervisor stop && sudo fuser -k 8000/tcp && pkill -f python gunicorn spark
(Backtrace restricted to imported tasks)
cap aborted!
SSHKit::Runner::ExecuteError: Exception while executing as ubuntu@ec2-54-244-99-254.us-west-2.compute.amazonaws.com: sudo /etc/init.d/supervisor stop && sudo fuser -k 8000/tcp && pkill -f python gunicorn spark exit status: 1
sudo /etc/init.d/supervisor stop && sudo fuser -k 8000/tcp && pkill -f python gunicorn spark stdout: Stopping supervisor: supervisord.
sudo /etc/init.d/supervisor stop && sudo fuser -k 8000/tcp && pkill -f python gunicorn spark stderr: Nothing written
SSHKit::Command::Failed: sudo /etc/init.d/supervisor stop && sudo fuser -k 8000/tcp && pkill -f python gunicorn spark exit status: 1
sudo /etc/init.d/supervisor stop && sudo fuser -k 8000/tcp && pkill -f python gunicorn spark stdout: Stopping supervisor: supervisord.
sudo /etc/init.d/supervisor stop && sudo fuser -k 8000/tcp && pkill -f python gunicorn spark stderr: Nothing written
Tasks: TOP => deploy:restart_django
(See full trace by running task with --trace)

Capistrano by default invokes a non-login, non-interactive shell. Using a login shell with Capistrano is generally not recommended, but I was able to solve the issue by invoking a login shell from Capistrano's execute with the command below.
execute "bash --login -c 'pkill -f spark'", raise_on_non_zero_exit: false
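The raise_on_non_zero_exit: false part matters on its own: pkill exits non-zero when no process matches the pattern, and SSHKit treats any non-zero exit as a failure (consistent with the exit status: 1 in the deploy log above, where the kill chain ran after the processes were already down). A minimal shell sketch of that behaviour, using a throwaway pattern that matches nothing:

```shell
# pkill returns exit status 1 when no process matches the pattern;
# SSHKit raises on any non-zero exit unless told otherwise.
pat="zzz_no_such_process_$$"    # throwaway pattern guaranteed to match nothing

pkill -f "$pat"
echo "pkill exit status: $?"    # 1: nothing matched

# Tolerating the non-zero exit mirrors raise_on_non_zero_exit: false:
pkill -f "$pat" || true
echo "wrapped exit status: $?"  # 0
```

This is why chaining several pkill calls with && is fragile: the first pattern that matches nothing aborts the whole chain.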

Related

What is the reason for "docker: Error response from daemon: No command specified."?

Since my last commit yesterday I have been encountering a strange problem where GitLab CI fails continuously with the error shown below:
$ ssh -i $SSH_PRIVATE_KEY -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker run -d -p 8010:80 --name my_project $TAG_LATEST"
docker: Error response from daemon: No command specified.
See 'docker run --help'.
Cleaning up project directory and file based variables
ERROR: Job failed: exit code 125
This is a React app which should be built and deployed to my server.
This is my Dockerfile
FROM node:16.14.0-alpine as build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
FROM nginx:1.19-alpine
COPY --from=build /app/build /usr/share/nginx/html
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx/nginx.conf /etc/nginx/conf.d
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
In the .gitlab-ci.yml file I have 2 stages, build and deploy. For clarity I will share it as well:
stages:
  - build
  - deploy

variables:
  TAG_LATEST: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:latest
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: "/certs"

build_test:
  image: docker:latest
  stage: build
  services:
    - docker:19.03.0-dind
  script:
    - docker build -t $TAG_LATEST .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - docker push $TAG_LATEST
  environment:
    name: test
  rules:
    - if: $CI_COMMIT_BRANCH == "develop"
      when: on_success

deploy_test:
  image: alpine:latest
  stage: deploy
  only:
    - develop
  tags:
    - frontend
  script:
    - chmod og= $SSH_PRIVATE_KEY
    - apk update && apk add openssh-client
    - ssh -i $SSH_PRIVATE_KEY -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY"
    - ssh -i $SSH_PRIVATE_KEY -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker pull $TAG_LATEST"
    - ssh -i $SSH_PRIVATE_KEY -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker container rm -f my_project || true"
    - ssh -i $SSH_PRIVATE_KEY -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker run -d -p 8010:80 --name my_project $TAG_LATEST"
  environment:
    name: test
I did the following steps:
1st: I tried to run the image manually on my server using the docker run command. The result was the same!
2nd: I pulled the project from GitLab into a brand-new folder, first ran docker build -t my_project ., then proceeded as I did on the server. It worked on my local machine.
3rd: I re-verified my code base with ChatGPT and it found no errors.
Currently there is an issue with the latest Docker release.
Use:
image: docker:20.10.22
instead of:
image: docker:latest
For more details, check this link: https://gitlab.com/gitlab-org/gitlab-runner/-/issues/29593#note_1263383415
I had been getting the same error for three days.
I also use GitLab CI.
I solved the issue by adding this line to my docker-compose.yml file (in the affected service's section):
command: ["nginx", "-g", "daemon off;"]
I have not yet found the root cause of the issue.
Hope this helps.

docker-compose for mssql not using .env file?

I have a .env file named .mssql filled with the basic MSSQL options:
ACCEPT_EULA=Y
MSSQL_SA_PASSWORD=12Password34
My docker-compose.yml looks like this:
version: '3'
services:
db:
image: "mcr.microsoft.com/mssql/server:2017-latest"
volumes:
- ./db-data:/var/opt/mssql
- ./sql_scripts:/sql_scripts
env_file:
- .envs/.local/.mssql
healthcheck:
test: [ "CMD", "/opt/mssql-tools/bin/sqlcmd", "-S", "localhost", "-U", "sa", "-P", "$SA_PASSWORD", "-Q", "SELECT 1" ]
interval: 30s
timeout: 30s
retries: 3
# This command runs the sqlservr process and then runs the sql scripts in the sql_scripts folder - it uses init_done.flag to determine if it has already run
command: /bin/bash -c '/opt/mssql/bin/sqlservr; if [ ! -f /sql_scripts/init_done.flag ]; then apt-get update && apt-get install -y mssql-tools unixodbc-dev; /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P $SA_PASSWORD -i /sql_scripts/db_setup.sql; /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P $SA_PASSWORD -i /sql_scripts/second_db_setup.sql; /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P $SA_PASSWORD -i /sql_scripts/third_db_setup.sql; touch /sql_scripts/init_done.flag; fi; tail -f /dev/null'
ports:
- 1433:1433
volumes:
db-data:
When I run the command docker-compose up -d --build, it gives me a warning in the terminal where I ran it:
WARN[0000] The "SA_PASSWORD" variable is not set. Defaulting to a blank string.
Then, when the container boots, it begins the startup process without issue, but I see the following in the container logs:
Logon Error: 18456, Severity: 14, State: 8.
Logon Login failed for user 'sa'. Reason: Password did not match that for the login provided.
You set MSSQL_SA_PASSWORD in your env file, but as far as I can see, in your docker-compose.yml you try to log in with $SA_PASSWORD.
So either change MSSQL_SA_PASSWORD to SA_PASSWORD in your env file, or change the docker-compose.yml (in the "test" and "command" sections) from $SA_PASSWORD to $MSSQL_SA_PASSWORD. The second option is preferable, since the image may additionally require MSSQL_SA_PASSWORD to be set correctly internally. I can't say for sure, since I don't know the MSSQL image well enough.
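The mismatch is easy to reproduce in plain shell: the env file defines MSSQL_SA_PASSWORD, while the compose file references $SA_PASSWORD, and an unset variable simply expands to an empty string. A minimal sketch (file name and password taken from the question):

```shell
# Simulate what env_file injects, then reference the two different names.
unset SA_PASSWORD MSSQL_SA_PASSWORD
echo 'MSSQL_SA_PASSWORD=12Password34' > .mssql    # contents of the env file
set -a; . ./.mssql; set +a                        # export everything it defines

echo "SA_PASSWORD='${SA_PASSWORD:-}'"             # empty: this name was never set
echo "MSSQL_SA_PASSWORD='${MSSQL_SA_PASSWORD}'"   # the name actually defined
```

This matches the WARN in the question: Compose could not find SA_PASSWORD anywhere and defaulted it to a blank string, so sqlcmd logged in with an empty password.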

Github Actions CICD to build a SQL Server container fails but will work Locally

I started working on GitHub Actions to build some sample containers instead of doing my builds locally - less reliance on my machines, and so on. It seems, though, that building and pushing from my Mac works, but an Action doing the same thing fails.
When testing locally, I made sure I had an up-to-date Dockerfile so that everything is built correctly. Part of me thinks it is related to the OS the GitHub Action builds on, but I am trying to understand it better.
The error I get is:
Error: failed to solve: executor failed running [/bin/sh -c ( /opt/mssql/bin/sqlservr & ) | grep -q "Service Broker manager has started" && /opt/sqlpackage/sqlpackage /a:Import /tsn:. /tdn:${DBNAME} /tu:sa /tp:$SA_PASSWORD /sf:/tmp/db.bacpac && rm /tmp/db.bacpac && pkill sqlservr]: exit code: 1
My workflow action is:
name: Docker Image CI MSSQL

on:
  schedule:
    - cron: '0 6 * * *'
  push:
    branches: [ master ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Prepare Variables
        id: prepare
        run: |
          DOCKER_IMAGE=fallenreaper/eve-mssql
          VERSION=$(date -u +'%Y%m%d')
          if [ "${{ github.event_name }}" = "schedule" ]; then
            VERSION=nightly
          fi
          TAGS="${DOCKER_IMAGE}:${VERSION}"
          if [[ $VERSION =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]]; then
            TAGS="$TAGS --tag ${DOCKER_IMAGE}:latest"
          fi
          echo ::set-output name=docker_image::${DOCKER_IMAGE}
          echo ::set-output name=version::${VERSION}
          echo ::set-output name=tags::${TAGS}
      - name: Login to DockerHub
        if: success() && github.event_name != 'pull_request'
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
      - name: Docker Build & Push
        if: success() && github.event_name != 'pull_request'
        uses: docker/build-push-action@v2
        with:
          push: true
          tags: ${{steps.prepare.outputs.tags}}
          context: mssql/.
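One side note on the Prepare Variables step, unrelated to the build failure itself: the ::set-output workflow command has since been deprecated by GitHub in favour of appending key=value lines to the file named by $GITHUB_OUTPUT. A sketch of the replacement ($GITHUB_OUTPUT is simulated here with a temp file; on a real runner it is provided for you):

```shell
# On a real runner GITHUB_OUTPUT is set by GitHub; simulate it for the sketch.
GITHUB_OUTPUT=$(mktemp)

DOCKER_IMAGE=fallenreaper/eve-mssql
VERSION=$(date -u +'%Y%m%d')

# Replaces: echo ::set-output name=docker_image::${DOCKER_IMAGE}
echo "docker_image=${DOCKER_IMAGE}" >> "$GITHUB_OUTPUT"
echo "version=${VERSION}" >> "$GITHUB_OUTPUT"

cat "$GITHUB_OUTPUT"
```

The outputs are then read in later steps exactly as before, via ${{ steps.prepare.outputs.docker_image }}.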
So I thought this would work. The Dockerfile I am building uses mcr.microsoft.com/mssql/server:2017-latest, which I figured would work.
Dockerfile:
FROM mcr.microsoft.com/mssql/server:2017-latest
ENV ACCEPT_EULA=Y
ENV SA_PASSWORD=Password123!
ENV MSSQL_PID=Developer
ARG DBNAME=evesde
EXPOSE 1433
RUN apt-get update \
&& apt-get install unzip -y
RUN wget --progress=bar:force -q -O sqlpackage.zip https://go.microsoft.com/fwlink/?linkid=2165213 \
&& unzip -qq sqlpackage.zip -d /opt/sqlpackage \
&& chmod +x /opt/sqlpackage/sqlpackage
RUN wget -o /tmp/db.bacpac https://www.fuzzwork.co.uk/dump/mssql-latest.bacpac
RUN ( /opt/mssql/bin/sqlservr & ) | grep -q "Service Broker manager has started" \
&& /opt/sqlpackage/sqlpackage /a:Import /tsn:. /tdn:${DBNAME} /tu:sa /tp:$SA_PASSWORD /sf:/tmp/db.bacpac \
&& rm /tmp/db.bacpac \
&& pkill sqlservr
EDIT: As I keep reading various documents, I am trying to understand and test various methods to see if I can get a build to work. I thought simulating a Mac might be useful, so I also attempted runs-on: macos-latest to see if that would solve it, but I haven't seen any gains, as the docker/login-action@v1 step still fails.
Working through each line item, I ended up with the following Dockerfile.
FROM mcr.microsoft.com/mssql/server:2017-latest
ENV ACCEPT_EULA=Y
ENV SA_PASSWORD=Password123!
ENV MSSQL_PID=Developer
ENV DBNAME=evesde
EXPOSE 1433
RUN apt-get update \
&& apt-get install unzip -y
RUN wget --progress=bar:force -q -O sqlpackage.zip https://go.microsoft.com/fwlink/?linkid=2165213 \
&& unzip -qq sqlpackage.zip -d /opt/sqlpackage \
&& chmod +x /opt/sqlpackage/sqlpackage
RUN wget --progress=bar:force -q -O /mssql-latest.bacpac https://www.fuzzwork.co.uk/dump/mssql-latest.bacpac
RUN ( /opt/mssql/bin/sqlservr & ) | grep -q "Service Broker manager has started" \
&& /opt/sqlpackage/sqlpackage /a:Import /tsn:. /tdn:$DBNAME /tu:sa /tp:$SA_PASSWORD /sf:/mssql-latest.bacpac \
&& pkill sqlservr
#Cleanup of created bulk files no longer needed.
RUN rm mssql-latest.bacpac sqlpackage.zip
The main difference is where the bacpac file is stored. There seemed to be hiccups when creating that file. After adjusting its location and breaking the import chain apart, it worked.
Notes: when the file was created in /tmp, it appeared to be only partially created, so the build was picking up an existing but corrupt file. I am not sure if there were size limits, but that was my observation. Putting it in the / directory of the build gave me a complete, accessible file, so I adjusted the /sf reference accordingly.
Lastly, because there were leftover files that were no longer needed, I found it best to do a little cleanup by deleting both the sqlpackage and bacpac files.
I suggest investigating grep -q "Service Broker manager has started" &&.
Maybe this fails because you start /opt/mssql/bin/sqlservr and then immediately check that it has started; in most cases it takes a few seconds to start.
To test whether this thesis is correct, try inserting a few sleep 10 or timeout commands in strategic places.
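Rather than a fixed sleep, the wait can be made explicit by polling the log for the readiness line under an overall timeout. A small self-contained sketch (the background subshell here just simulates sqlservr printing its readiness message after a couple of seconds):

```shell
# Simulate a slow-starting server that prints its readiness line after ~2s.
( sleep 2; echo "Service Broker manager has started" ) > server.log &

# Poll for the line with an overall timeout instead of a blind sleep.
if timeout 15 sh -c 'until grep -q "Service Broker manager has started" server.log 2>/dev/null; do sleep 1; done'; then
    echo "server is ready"
else
    echo "timed out waiting for server"
fi
```

Applied to the Dockerfile, the same until/grep loop would run between starting sqlservr in the background and invoking sqlpackage, so the import never races the server startup.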

How to stop greengrass core?

I have installed the Greengrass core software and started it via:
sudo tar -xzvf greengrass-OS-architecture-1.11.0.tar.gz -C /
sudo tar -xzvf hash-setup.tar.gz -C /greengrass
cd /greengrass/certs/
sudo wget -O root.ca.pem https://www.amazontrust.com/repository/AmazonRootCA1.pem
cd /greengrass/ggc/core/
sudo ./greengrassd start
Verified that the process is started via:
ps aux | grep PID-number
ps aux | grep -E 'greengrass.*daemon'
How to stop greengrass core?
For this, you need to run the following commands:
cd /greengrass-root/ggc/core/
sudo ./greengrassd stop
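To confirm the daemon is really gone after stopping it, pgrep with -x (exact process-name match) avoids the false positives that a plain ps aux | grep greengrass pipeline can produce from its own command line. A small sketch (it assumes the daemon's process name is greengrassd):

```shell
# -x matches the exact process name, so this script's own command line
# cannot show up as a false positive the way `ps aux | grep` can.
if pgrep -x greengrassd > /dev/null; then
    echo "greengrassd is still running"
else
    echo "greengrassd is stopped"
fi
```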

Unable to run Redis with a custom config file - I am changing the dump.rdb file to restore.rdb, which is a backup from another machine

My Redis dir is /var/lib/redis and dbfilename is dump.rdb. I have restore.rdb, a backup from another machine. I tried to replace dump.rdb with restore.rdb, but after starting Redis it is overwritten back to its previous state. So I changed dbfilename to restore.rdb and restarted the server, but dbfilename is the same as before when I check in redis-cli.
Steps I followed:
sudo /etc/init.d/redis-server stop
sudo mv /var/lib/redis/dump.rdb /var/lib/redis/dump.rdb.bak
sudo cp restore.rdb /var/lib/redis/dump.rdb
sudo redis-server config/redis.conf
Once the server is started, dump.rdb is overwritten back to its previous state. So, instead of replacing dump.rdb, I tried an alternative method:
sudo /etc/init.d/redis-server stop
sudo cp restore.rdb /var/lib/redis/dump.rdb
and then changed dbfilename to restore.rdb in redis.conf,
then restarted Redis:
sudo redis-server config/redis.conf
Then I entered redis-cli to check the config and realised that the config hadn't changed.
redis server version:
Redis server v=4.0.9 sha=00000000:0 malloc=jemalloc-3.6.0 bits=64 build=9435c3c2879311f3
The issue is that a redis-server process keeps running in the background even after stopping it with the command below:
sudo /etc/init.d/redis-server stop
So you need to make sure the redis-server process is actually killed:
sudo /etc/init.d/redis-server stop
ps aux | grep redis-server | awk '{ print $2 }' | xargs kill -9
The commands above will kill all running redis-server processes. Then:
sudo mv /var/lib/redis/dump.rdb /var/lib/redis/dump.rdb.bak
sudo cp restore.rdb /var/lib/redis/dump.rdb
sudo redis-server config/redis.conf
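As a variation on the ps | grep | awk | xargs pipeline above: pkill can match and signal in one step, and -x restricts matching to the exact process name, so the pipeline cannot accidentally target its own grep. A sketch (a harmless no-op when no redis-server is running):

```shell
# Kill every process whose name is exactly redis-server; || true keeps the
# command from failing when nothing is running (pkill exits 1 on no match).
pkill -9 -x redis-server || true
echo "redis-server processes signalled (or none were running)"
```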
