Getting container exit codes from docker-compose for CI test runs - docker-compose

Right now our Jenkins agents generate a docker-compose.yml for each of our Rails projects and then run docker-compose up. The docker-compose.yml has a main "web" container that has rbenv and all of our other Rails dependencies inside. It is linked to a DB container that contains the test Postgres DB.
The problem comes when we need to actually run the tests and generate exit codes. Our CI server will only deploy if the test script returns exit 0, but docker-compose always returns 0, even if one of the container commands fails.
The other issue is that the DB container runs indefinitely, even after the web container is done running the tests, so docker-compose up never returns.
Is there a way we can use docker-compose for this process? We would need to be able to run the containers, but exit after the web container is complete and return its exit code. Right now we are stuck manually using docker to spin up the DB container and run the web container with the --link option.

Since version 1.12.0, you can use the --exit-code-from option.
From the documentation:
--exit-code-from SERVICE
Return the exit code of the selected service container. Implies --abort-on-container-exit.
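For the setup in the question, a minimal sketch of how this could look (assuming the test-running service is named web; the name is illustrative):
$ docker-compose up --exit-code-from web
$ echo $?   # mirrors the web container's exit code; the db container is stopped when web exits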

docker-compose run is the simple way to get the exit statuses you desire. For example:
$ cat docker-compose.yml
roit:
  image: busybox
  command: 'true'
naw:
  image: busybox
  command: 'false'
$ docker-compose run --rm roit; echo $?
Removing test_roit_run_1...
0
$ docker-compose run --rm naw; echo $?
Removing test_naw_run_1...
1
Alternatively, you have the option to inspect the dead containers afterwards. You can use the -f (format) flag of docker inspect to get just the exit status.
$ docker-compose up
Creating test_naw_1...
Creating test_roit_1...
Attaching to test_roit_1
test_roit_1 exited with code 0
Gracefully stopping... (press Ctrl+C again to force)
$ docker-compose ps -q | xargs docker inspect -f '{{ .Name }} exited with status {{ .State.ExitCode }}'
/test_naw_1 exited with status 1
/test_roit_1 exited with status 0
As for the db container that never returns: if you use docker-compose up then you will need to SIGKILL that container, which is probably not what you want. Instead, you can use docker-compose up -d to run your containers daemonized and manually kill the containers when your test is complete. docker-compose run should run linked containers for you, but I have heard chatter on SO about a bug preventing that from working as intended right now.
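A hedged sketch of that daemonized flow (container names such as test_web_1 depend on your project name and are illustrative here):
$ docker-compose up -d
$ docker wait test_web_1   # blocks until the web container exits, then prints its exit code
$ docker-compose stop      # stop the db container that would otherwise run forever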

Building on kojiro's answer:
docker-compose ps -q | xargs docker inspect -f '{{ .State.ExitCode }}' | grep -v '^0' | wc -l | tr -d ' '
get container IDs
get each container's exit code from its last run
keep only the status codes that do not start with '0'
count the number of non-zero status codes
trim out whitespace
Returns how many non-0 exit codes were returned. Would be 0 if everything exited with code 0.
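Used as a CI gate, a hedged sketch of the same idea (grep -c counts the matching lines directly, so wc -l and tr are not needed):
$ test "$(docker-compose ps -q | xargs docker inspect -f '{{ .State.ExitCode }}' | grep -cv '^0')" -eq 0
The test command exits non-zero, and therefore fails the build, if any container exited with a non-zero code.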

--exit-code-from SERVICE and --abort-on-container-exit don't work in scenarios where you need to run all containers to completion but fail if any of them exited early. An example would be running two test suites concurrently in different containers.
With @spenthil's suggestion, you can wrap docker-compose in a script that fails if any container does.
#!/bin/bash
set -e
# Wrap docker-compose and return a non-zero exit code if any containers failed.
docker-compose "$#"
exit $(docker-compose -f docker-compose.ci.build.yml ps -q |
xargs docker inspect -f '{{ .State.ExitCode }}' | grep -v '^0' | wc -l | tr -d '[:space:]')
Then on your CI server simply change docker-compose up to ./docker-compose.sh up.

Use docker wait to get the exit code:
$ docker-compose -p foo up -d
$ ret=$(docker wait foo_bar_1)
foo is the "project name". In the example above, I specified it explicitly, but if you don't supply it, it's the directory name. bar is the name you give to the system under test in your docker-compose.yml.
Note that docker logs -f does the right thing, too, exiting when the container stops. So you can put
$ docker logs -f foo_bar_1
between the docker-compose up and the docker wait so you can watch your tests run.
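Putting it together, a hedged CI-script sketch (project name foo and service name bar as above):
docker-compose -p foo up -d
docker logs -f foo_bar_1        # stream the test output; returns when the container stops
ret=$(docker wait foo_bar_1)    # returns the already-recorded exit code immediately
docker-compose -p foo down
exit "$ret"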

If you're willing to use docker-compose run to manually kick off your tests, adding the --rm flag, oddly enough, causes Compose to accurately reflect your command's exit status.
Here's my example:
$ docker-compose -v
docker-compose version 1.7.0, build 0d7bf73
$ (docker-compose run bash false) || echo 'Test failed!' # False negative.
$ (docker-compose run --rm bash false) || echo 'Test failed!' # True positive.
Test failed!
$ (docker-compose run --rm bash true) || echo 'Test failed!' # True negative.

docker-rails allows you to specify which container's error code is returned to the main process, so your CI server can determine the result. It is a great solution for CI and development for Rails with Docker.
For example
exit_code: web
in your docker-rails.yml will yield the web container's exit code as the result of the command docker-rails ci test. docker-rails.yml is just a meta wrapper around the standard docker-compose.yml that gives you the ability to inherit/reuse the same base config for different environments, i.e. development vs test vs parallel_tests.

In case you run multiple docker-compose services with the same name on one Docker engine and don't know the exact container name:
docker-compose up -d
(exit "${$(docker-compose logs -f test-chrome)##* }")
echo $? - returns the exit code from the test-chrome service
Benefits:
waits for the exact service to exit
uses the service name, not the container name
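The inline ${$(...)##* } expansion works in zsh but not in plain bash. A hedged bash equivalent, assuming the final log line has the form "test-chrome_1 exited with code N":
out=$(docker-compose logs -f test-chrome)   # blocks until the test-chrome service exits
exit "${out##* }"                           # the last whitespace-separated token is the exit code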

You can see the exit status with:
echo $(docker-compose ps | grep "servicename" | awk '{print $4}')

Related

Running mssql-server-linux image in QUIET mode

I am using the microsoft/mssql-server-linux:2017-latest MS SQL docker image.
It works fine, but it outputs dozens of lines of information which I would be happy to omit. I couldn't find a command-line or environment option to run it in quiet mode; can anyone help me?
FYI the command line is:
docker run -e ACCEPT_EULA=Y -e SA_PASSWORD="<BestPwd>" -e MSSQL_PID=Express -it microsoft/mssql-server-linux:2017-latest
Just appending -q didn't help.
UPD
I know containers can be run in daemon mode; what I need is to reduce the log to warning level, not remove it completely.
I would also appreciate generic methods which are NOT connected with stdout redirection or grepping/filtering the output.
You can use the -d option to run the container in the background, using the command below.
docker run -e ACCEPT_EULA=Y -e SA_PASSWORD="<BestPwd>" -e MSSQL_PID=Express -d microsoft/mssql-server-linux:2017-latest
If you want to see the logs, you can then run docker logs -f 'containerid'.

Linux SQL Server Docker container stops after a few seconds

This is the command I execute, but the container just stops after a few seconds: docker run -it -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=dockermssql" -p 1433:1433 -v sqlvlm:/var/opt/mssql --name sql1 -d microsoft/mssql-server-linux
Your password (e.g. dockermssql) doesn't meet the complexity requirements, so try adding a non-alphanumeric character such as an exclamation point (!).
To check for errors, run docker logs ID (where ID is the container ID from docker ps), or run the container without -d.
Remove detached mode ("-d") so it runs in foreground mode. This should then give you stdout, stderr etc. in your terminal, and you may see some errors logged which can hopefully point you in the right direction.

Docker container exits immediately after running

I'm a Docker newbie and tried to resolve the issue after checking similar SO questions, without success, so please don't mark it as a duplicate.
Issue:
The container always exits immediately after it is created and starts running.
I have tried to run the mssql instance using the command
docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Technocrat123’ -p 1433:1433 -d microsoft/mssql-server-linux
When trying the approach from a similar SO link:
$ docker run -t -d --name microsoft/mssql-server-linux 0adcdf822722
I got the following error:
Unable to find image '0adcdf822722:latest' locally
docker: Error response from daemon: repository 0adcdf822722 not found: does not exist or no pull access.
When I tried to kill the process, referring to link1, I got:
Kill: illegal process id: PID
I'm using a Mac machine. Thanks in advance.
Edit:
After running the log command after the run command, like
docker logs 0adcdf822722
it shows
This is an evaluation version. There are [160] days left in the evaluation period.
The SQL Server End-User License Agreement (EULA) must be accepted before SQL
Server can start. The license terms for this product can be downloaded from
http://go.microsoft.com/fwlink/?LinkId=746388.
You can accept the EULA by specifying the --accept-eula command line option,
setting the ACCEPT_EULA environment variable, or using the mssql-conf tool.
But I have already set 'ACCEPT_EULA=Y' in the run command.
Your password (such as Technocrat123) doesn't meet the complexity requirements, so try adding a non-alphanumeric character such as an exclamation point (!). Secondly, use double quotes instead of single quotes.
To check for errors, run docker logs ID (where ID is the container ID from docker ps).
This worked for me:
docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Sprpwd1234" --name sql_server_dev -p 1433:1433 -d store/microsoft/mssql-server-linux:2017-GA
Using (") instead of ('). Running Docker on Windows 10.
There is a typo in the command you are running:
docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Technocrat123’ -p 1433:1433 -d microsoft/mssql-server-linux
'Technocrat123’ should be 'Technocrat123'. The typo is at the end: ’ vs '.
The correct command is:
docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Technocrat123' -p 1433:1433 -d microsoft/mssql-server-linux
I was running Docker on Mac and trying to install SQL Server. Initially I pasted the command provided at https://learn.microsoft.com/en-us/sql/linux/quickstart-install-connect-docker, changing the password, and then tried to run the docker image. This gave me the said error "This is an evaluation version...". As an additional step, after running the command from the link above, I ran it again as docker run -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=Very_StrongPassword' -p 1401:1433 microsoft/mssql-server-linux:2017-latest. This kicked off the installation of SQL Server, which takes around 20-30 minutes. Then the docker image is ready to use.

How to disable linux space randomization via dockerfile?

I'm trying to disable randomization via Dockerfile:
RUN sudo echo 0 | sudo tee /proc/sys/kernel/randomize_va_space
but I get
Step 9 : RUN sudo echo 0 | sudo tee /proc/sys/kernel/randomize_va_space
---> Running in 0f69e9ac1b6e
tee: /proc/sys/kernel/randomize_va_space: Read-only file system
Is there any way to work around this? (I see it's saying read-only file system; any way to get around that?) If it's something which the kernel does, this means it's outside of my container's scope; in that case, how am I supposed to work with gdb inside my container? Please note that my goal is to work with gdb in a container, because I'm experimenting with it, so I wanted a container which encapsulates gcc and gdb that I'll use for experimentation.
Run this on the host, not in Docker:
sudo echo 0 | sudo tee /proc/sys/kernel/randomize_va_space
Docker has syntax for modifying some of the sysctls (not via dockerfile though) and kernel.randomize_va_space does not seem to be one of them.
Since you've said you're interested in running gcc/gdb you could disable ASLR only for these binaries with:
setarch `uname -m` -R /path/to/gcc/gdb
Also see other answers in this question.
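For example, a hedged sketch that disables ASLR just for a gdb session (the binary path is illustrative):
$ setarch "$(uname -m)" -R gdb ./a.out
The debugged process then runs without address randomization, without touching the kernel-wide setting.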
Sounds like you are building a container for development on your own computer. Unlike in a production environment, you could (and probably should) opt for a privileged container. In a privileged container, sysfs is mounted read-write, so you can control kernel parameters as you would on the host. This is an example with the Amazon Linux container I use for development on my Debian desktop, which shows the difference:
$ docker run --rm -it amazonlinux
bash-4.2# grep ^sysfs /etc/mtab
sysfs /sys sysfs ro,nosuid,nodev,noexec,relatime 0 0
bash-4.2# exit
$ docker run --rm -it --privileged amazonlinux
bash-4.2# grep ^sysfs /etc/mtab
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
bash-4.2# exit
$
Notice the ro mount in the unprivileged case and the rw mount in the privileged case.
Note that the Dockerfile command
RUN sudo echo 0 | sudo tee /proc/sys/kernel/randomize_va_space
makes no sense. It will be executed (a) at container build time and (b) on the machine where you build the image. You want (a) to happen at the container's run time and (b) on the machine where you run the container. If you need to change sysctls on image start, write a script which does all the setup and then drops you into the interactive shell, e.g. by placing a script into /root and setting it as the ENTRYPOINT:
#!/bin/sh
sudo sysctl kernel.randomize_va_space=0
exec /bin/bash -l
(This assumes you mount the host working directory into /home/jas; that's a good practice, as bash will read your startup files etc.)
You need to make sure you have the same UID and GID inside the container, and that you can do sudo. How you enable sudo depends on the distro. In Debian, members of the sudo group have unrestricted sudo access, while on Amazon Linux (and, IIRC, other RedHat-like systems) the group wheel does. Usually this boils down to an unwieldy run command that you would rather script than type, like:
docker run -it -v $HOME:$HOME -w $HOME -u $(id -u):$(id -g) --group-add wheel amazonlinux-devenv
Since your primary UID and GID match the host's, files in mounted host directories won't end up owned by root. An alternative is to create a bona fide user for yourself during the image build (i.e., in the Dockerfile), but I find this more error-prone, because I can end up running this devenv image where my username has a different UID, and that would cause problems. The use of id(1) in the startup command guarantees a UID match.
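A quick hedged sanity check of that UID/GID mapping, reusing the illustrative image name from above:
$ docker run --rm -it -v "$HOME:$HOME" -w "$HOME" -u "$(id -u):$(id -g)" amazonlinux-devenv id
If the printed uid and gid match your host values, files created in the mounted directory will be owned by you rather than by root.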

Starting and populating a Postgres container in Docker

I have a Docker container that contains my Postgres database. It's using the official Postgres image which has a CMD entry that starts the server on the main thread.
I want to populate the database by running RUN psql -U postgres postgres < /dump/dump.sql before it starts listening to queries.
I don't understand how this is possible with Docker. If I place the RUN command after CMD, it will of course never be reached because Docker has finished reading the Dockerfile. But if I place it before the CMD, it will run before psql even exists as a process.
How can I prepopulate a Postgres database in Docker?
After a lot of fighting, I have found a solution ;-)
A comment posted at https://registry.hub.docker.com/_/postgres/ by "justfalter" was very useful to me.
Anyway, I did it this way:
# Dockerfile
FROM postgres:9.4
RUN mkdir -p /tmp/psql_data/
COPY db/structure.sql /tmp/psql_data/
COPY scripts/init_docker_postgres.sh /docker-entrypoint-initdb.d/
db/structure.sql is an SQL dump, useful for initializing the first tablespace.
Then, the init_docker_postgres.sh
#!/bin/bash
# this script is run when the docker container is initialized (on first start, not at build time)
# it imports the base database structure and creates the database for the tests
DATABASE_NAME="db_name"
DB_DUMP_LOCATION="/tmp/psql_data/structure.sql"
echo "*** CREATING DATABASE ***"
# create default database
gosu postgres postgres --single <<EOSQL
CREATE DATABASE "$DATABASE_NAME";
GRANT ALL PRIVILEGES ON DATABASE "$DATABASE_NAME" TO postgres;
EOSQL
# clean sql_dump - because I want to have a one-line command
# remove indentation
sed "s/^[ \t]*//" -i "$DB_DUMP_LOCATION"
# remove comments
sed '/^--/ d' -i "$DB_DUMP_LOCATION"
# remove new lines
sed ':a;N;$!ba;s/\n/ /g' -i "$DB_DUMP_LOCATION"
# remove other spaces
sed 's/ */ /g' -i "$DB_DUMP_LOCATION"
# remove firsts line spaces
sed 's/^ *//' -i "$DB_DUMP_LOCATION"
# append new line at the end (suggested by #Nicola Ferraro)
sed -e '$a\' -i "$DB_DUMP_LOCATION"
# import sql_dump
gosu postgres postgres --single "$DATABASE_NAME" < "$DB_DUMP_LOCATION";
echo "*** DATABASE CREATED! ***"
So finally:
# no postgres is running
[myserver]# psql -h 127.0.0.1 -U postgres
psql: could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 5432?
[myserver]# docker build -t custom_psql .
[myserver]# docker run -d --name custom_psql_running -p 5432:5432 custom_psql
[myserver]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ce4212697372 custom_psql:latest "/docker-entrypoint. 9 minutes ago Up 9 minutes 0.0.0.0:5432->5432/tcp custom_psql_running
[myserver]# psql -h 127.0.0.1 -U postgres
psql (9.2.10, server 9.4.1)
WARNING: psql version 9.2, server version 9.4.
Some psql features might not work.
Type "help" for help.
postgres=#
# postgres is now initialized with the dump
Hope it helps!
For those who want to initialize a PostgreSQL DB with millions of records during the first run.
Import using *.sql dump
You can do a simple SQL dump and copy the dump.sql file into /docker-entrypoint-initdb.d/. The problem is speed. My dump.sql script is about 17MB (small DB: 10 tables, with 100k rows in only one of them) and the initialization takes over a minute (!). That is unacceptable for local development, unit tests, etc.
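For reference, a minimal hedged sketch of this plain-SQL variant (database and file names are illustrative):
$ pg_dump -U postgres my-db > dump.sql
$ docker run -v "$PWD/dump.sql:/docker-entrypoint-initdb.d/dump.sql" postgres:11
The official image executes any *.sql file it finds in that directory on first start.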
Import using binary dump
The solution is to make a binary PostgreSQL dump and use shell scripts initialization support.
Then the same DB is initialized in about 500ms instead of 1 minute.
1. Create the dump.pgdata binary dump of a DB named "my-db"
Directly from within a container or from your local DB:
pg_dump -U postgres --format custom my-db > "dump.pgdata"
Or from the host, from a running container (postgres-container):
docker exec postgres-container pg_dump -U postgres --format custom my-db > "dump.pgdata"
2. Create a Docker image with a given dump and initialization script
$ tree
.
├── Dockerfile
└── docker-entrypoint-initdb.d
    ├── 01-restore.sh
    ├── 02-small-updates.sql
    └── dump.pgdata
$ cat Dockerfile
FROM postgres:11
COPY ./docker-entrypoint-initdb.d/ /docker-entrypoint-initdb.d/
$ cat docker-entrypoint-initdb.d/01-restore.sh
#!/bin/bash
file="/docker-entrypoint-initdb.d/dump.pgdata"
dbname=my-db
echo "Restoring DB using $file"
pg_restore -U postgres --dbname=$dbname --verbose --single-transaction < "$file" || exit 1
$ cat docker-entrypoint-initdb.d/02-small-updates.sql
-- some updates on your DB, for example for next application version
-- this file will be executed on DB during next release
UPDATE ... ;
3. Build an image and run it
$ docker build -t db-test-img .
$ docker run -it --rm --name db-test db-test-img
Alternatively, you can just mount a volume to /docker-entrypoint-initdb.d/ that contains all your DDL scripts. You can put in *.sh, *.sql, or *.sql.gz files and it will take care of executing those on start-up.
e.g. (assuming you have your scripts in /tmp/my_scripts)
docker run -v /tmp/my_scripts:/docker-entrypoint-initdb.d postgres
There is yet another option available that utilises Flocker:
Flocker is a container data volume manager that is designed to allow databases like PostgreSQL to easily run in containers in production. When running a database in production, you have to think about things like recovering from host failure. Flocker provides tools for managing data volumes across a cluster of machines like you have in a production environment. For example, as a Postgres container is scheduled between hosts in response to server failure, Flocker can automatically move its associated data volume between hosts at the same time. This means that when your Postgres container starts up on a new host, it has its data. This operation can be accomplished manually using the Flocker API or CLI, or automatically by a container orchestration tool that Flocker integrates with, for example Docker Swarm, Kubernetes or Mesos.
I followed the same solution as @damoiser. The only difference was that I wanted to import all the dump data.
Please follow the solution below (I have not done any kind of checks):
Dockerfile
FROM postgres:9.5
RUN mkdir -p /tmp/psql_data/
COPY db/structure.sql /tmp/psql_data/
COPY scripts/init_docker_postgres.sh /docker-entrypoint-initdb.d/
then the init_docker_postgres.sh script
#!/bin/bash
DB_DUMP_LOCATION="/tmp/psql_data/structure.sql"
echo "*** CREATING DATABASE ***"
psql -U postgres < "$DB_DUMP_LOCATION";
echo "*** DATABASE CREATED! ***"
and then you can build your image as
docker build -t abhije***/postgres-data .
docker run -d abhije***/postgres-data
My solution is inspired by Alex Dguez's answer, which unfortunately doesn't work for me because:
I used a pg-9.6 base image, and RUN /docker-entrypoint.sh --help never ran through for me; it always complained with The command '/bin/sh -c /docker-entrypoint.sh -' returned a non-zero code: 1
I don't want to pollute the /docker-entrypoint-initdb.d dir
The following answer is originally from my reply in another post: https://stackoverflow.com/a/59303962/4440427. It should be noted that the solution restores from a binary dump instead of from the plain SQL asked about by the OP, but it can be modified slightly to adapt to the plain-SQL case.
Dockerfile:
FROM postgres:9.6.16-alpine
LABEL maintainer="lu#cobrainer.com"
LABEL org="Cobrainer GmbH"
ARG PG_POSTGRES_PWD=postgres
ARG DBUSER=someuser
ARG DBUSER_PWD=P#ssw0rd
ARG DBNAME=sampledb
ARG DB_DUMP_FILE=example.pg
ENV POSTGRES_DB launchpad
ENV POSTGRES_USER postgres
ENV POSTGRES_PASSWORD ${PG_POSTGRES_PWD}
ENV PGDATA /pgdata
COPY wait-for-pg-isready.sh /tmp/wait-for-pg-isready.sh
COPY ${DB_DUMP_FILE} /tmp/pgdump.pg
RUN set -e && \
nohup bash -c "docker-entrypoint.sh postgres &" && \
/tmp/wait-for-pg-isready.sh && \
psql -U postgres -c "CREATE USER ${DBUSER} WITH SUPERUSER CREATEDB CREATEROLE ENCRYPTED PASSWORD '${DBUSER_PWD}';" && \
psql -U ${DBUSER} -d ${POSTGRES_DB} -c "CREATE DATABASE ${DBNAME} TEMPLATE template0;" && \
pg_restore -v --no-owner --role=${DBUSER} --exit-on-error -U ${DBUSER} -d ${DBNAME} /tmp/pgdump.pg && \
psql -U postgres -c "ALTER USER ${DBUSER} WITH NOSUPERUSER;" && \
rm -rf /tmp/pgdump.pg
HEALTHCHECK --interval=30s --timeout=30s --start-period=5s --retries=3 \
CMD pg_isready -U postgres -d launchpad
where the wait-for-pg-isready.sh is:
#!/bin/bash
set -e
get_non_lo_ip() {
local _ip _non_lo_ip _line _nl=$'\n'
while IFS=$': \t' read -a _line ;do
[ -z "${_line%inet}" ] &&
_ip=${_line[${#_line[1]}>4?1:2]} &&
[ "${_ip#127.0.0.1}" ] && _non_lo_ip=$_ip
done< <(LANG=C /sbin/ifconfig)
printf ${1+-v} $1 "%s${_nl:0:$[${#1}>0?0:1]}" $_non_lo_ip
}
get_non_lo_ip NON_LO_IP
until pg_isready -h $NON_LO_IP -U "postgres" -d "launchpad"; do
>&2 echo "Postgres is not ready - sleeping..."
sleep 4
done
>&2 echo "Postgres is up - you can execute commands now"
The above scripts together with a more detailed README are available at https://github.com/cobrainer/pg-docker-with-restored-db
I was able to load the data in by prepending /etc/init.d/postgresql start to the RUN command in my Dockerfile. My Dockerfile has the following line, which is working for me:
RUN /etc/init.d/postgresql start && /usr/bin/psql -a < /tmp/dump.sql
For E2E tests, in which we need a database with structure and data already saved in the Docker image, we have done the following:
Dockerfile:
FROM postgres:9.4.24-alpine
ENV POSTGRES_USER postgres
ENV POSTGRES_PASSWORD postgres
ENV PGDATA /pgdata
COPY database.backup /tmp/
COPY database_restore.sh /docker-entrypoint-initdb.d/
RUN /docker-entrypoint.sh --help
RUN rm -rf /docker-entrypoint-initdb.d/database_restore.sh
RUN rm -rf /tmp/database.backup
database_restore.sh:
#!/bin/sh
set -e
pg_restore -C -d postgres /tmp/database.backup
To create the image:
docker build .
To start the container:
docker run --name docker-postgres -d -p 5432:5432 <Id-docker-image>
This does not restore the database every time the container is booted; the structure and data of the database are already contained in the created Docker image.
We based this on the following article, but eliminated the multistage build:
Creating Fast, Lightweight Testing Databases in Docker
Edit: This no longer works with version 9.4-alpine, because that version does not run the database_restore.sh script. Use version 9.4.24-alpine instead.
My goal was to have an image that contains the database, i.e. saving the time to rebuild it every time I do docker run or docker-compose up.
We would just have to manage to get the line exec "$@" out of docker-entrypoint.sh. So I added to my Dockerfile:
# Copy my sql scripts into the image to /docker-entrypoint-initdb.d:
COPY ./init_db /docker-entrypoint-initdb.d
# init db
RUN grep -v 'exec "$@"' /usr/local/bin/docker-entrypoint.sh > /tmp/docker-entrypoint-without-serverstart.sh && \
    chmod a+x /tmp/docker-entrypoint-without-serverstart.sh && \
    /tmp/docker-entrypoint-without-serverstart.sh postgres && \
    rm -rf /docker-entrypoint-initdb.d/* /tmp/docker-entrypoint-without-serverstart.sh
