Parameterize docker build by passing jar file name from command line - docker-maven-plugin

I am trying to parameterize the final jar file name in the docker build. I need docker-maven-plugin to pick up the jar file name passed as a command line parameter. The Maven build does not throw any error while building the image.
If I hard-code the jar file name in the Dockerfile, it works fine.
Maven command for the docker build:
mvn -X -s settings.xml docker:build -DJAR_FILE_NAME=${filename}
My Dockerfile:
RUN curl -kfsSL https://example.com/UnlimitedJCEPolicyJDK8/US_export_policy.jar > US_export_policy.jar \
&& curl -kfsSL https://example.com//UnlimitedJCEPolicyJDK8/local_policy.jar > local_policy.jar \
&& mv local_policy.jar ${JAVA_HOME}/jre/lib/security \
&& mv US_export_policy.jar ${JAVA_HOME}/jre/lib/security \
&& rm -rf US_export_policy.jar local_policy.jar
ENV JAVA_KEYSTORE ${JAVA_HOME}/jre/lib/security/cacerts
RUN curl -kfsSL https://example.com/mycert.cer > mycert.cer \
&& ${JAVA_HOME}/bin/keytool -v -importcert -file mycert.cer -keystore ${JAVA_KEYSTORE} -storepass dummy -alias dummy -noprompt \
&& rm mycert.cer
VOLUME /tmp
#ADD myservice-2.0.2-SNAPSHOT.jar app.jar <-hard-coded name works fine
RUN echo "final jar file name"
RUN echo ${JAR_FILE_NAME}
ADD ${JAR_FILE_NAME} app.jar
RUN bash -c 'touch /app.jar'
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
My POM.xml
<plugin>
  <groupId>com.spotify</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <version>0.4.0</version>
  <configuration>
    <imageName>${docker.image.prefix}/myservice</imageName>
    <dockerDirectory>src/main/docker</dockerDirectory>
    <resources>
      <resource>
        <targetPath>${docker.resource.targetPath}</targetPath>
        <directory>${project.build.directory}</directory>
        <include>${project.build.finalName}.jar</include>
      </resource>
    </resources>
  </configuration>
</plugin>
Output from docker build process:
Step 6 : VOLUME /tmp
---> Using cache
---> xxxxxxxxx
Step 7 : RUN /bin/bash -c echo JAR_FILE_NAME1 in docker :$JAR_FILE_NAME
---> Using cache
---> xxxxxxxxx
Step 8 : RUN /bin/bash -c echo JAR_FILE_NAME2 in docker :${JAR_FILE_NAME}
---> Using cache
---> xxxxxxxxx
Step 9 : RUN echo $JAR_FILE_NAME
---> Using cache
---> xxxxxxxxx
Step 10 : RUN echo "final jar file name"
---> Using cache
---> xxxxxxxxx
Step 11 : RUN echo ${JAR_FILE_NAME}
---> Using cache
---> xxxxxxxxx
Step 12 : ADD ${JAR_FILE_NAME} app.jar
---> Using cache
---> xxxxxxxxx
Step 13 : RUN bash -c 'touch /app.jar'
---> Using cache
---> xxxxxxxxx
Step 14 : ENTRYPOINT java -Djava.security.egd=file:/dev/./urandom -jar /app.jar
---> Using cache
---> xxxxxxxxx
Successfully built xxxxxxxxx
[INFO] Built xxx/myservice
Output while pulling image:
I0603 13:48:32.849159 23106 exec.cpp:132] Version: 0.23.0
I0603 13:48:32.857393 23114 exec.cpp:206] Executor registered on slave 20170523-104056-1453378314-5050-11670-S48
Usage of loopback devices is strongly discouraged for production use. Either use `--storage-opt dm.thinpooldev` or use `--storage-opt dm.no_warn_on_loop_devices=true` to suppress this warning.
Error: Invalid or corrupt jarfile /app.jar

Why don't you use a regular docker build command instead of going through Maven? You could use a Maven base image, something like maven:onbuild even. An example here may help.

I parameterized the output jar used in the docker image in this way.
In the Dockerfile, I used a concrete name without any version and removed ${JAR_FILE_NAME} and other dynamic parameters:
ADD myservice.jar app.jar
In POM.xml, I made Maven output the artifact without the version in the jar file name, by adding the element below right under the <build> tag.
<finalName>${project.artifactId}</finalName>
So no manual update of the Dockerfile is needed. Forgetting to update the Dockerfile and then having the build fail was really disturbing. I let Maven manage the versions, and those versions are used as the image tags, so "reproducible builds" work for me.
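For reference, a minimal sketch of where that element sits in the POM (artifact name taken from the question):
<build>
  <!-- produces target/myservice.jar instead of target/myservice-2.0.2-SNAPSHOT.jar -->
  <finalName>${project.artifactId}</finalName>
</build>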

I had the same kind of problem: I don't want to hard-code the project version in my Dockerfile.
I first thought about removing the version from the jar's file name (this can be achieved using your build tool; I personally use Gradle).
Yet I was quite unhappy with that, since I consider having the version in the jar's name good practice (there is an interesting discussion about this in this question, for instance).
So I came up with this solution: copy/add the jar using a wildcard:
FROM eclipse-temurin:17-jre-alpine
WORKDIR /app/
COPY ./path/to/my/jar/*.jar /app/my-app.jar
ENTRYPOINT ["java", "-jar", "/app/my-app.jar"]
Here I managed to make my build tool produce the jar I want alone in its folder.
The point is to have a way to use a wildcard to get this single jar without ambiguity.
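For illustration, a minimal Gradle sketch of one way to get that single jar alone in a known folder (the task and folder names are my own, not from the answer):
// build.gradle - copy the one produced jar into a dedicated folder for the Docker COPY
task dockerJar(type: Copy) {
    from jar                                  // output of the java plugin's jar task
    into layout.buildDirectory.dir('docker')  // e.g. build/docker/my-app-1.2.3.jar
}
Then a COPY ./build/docker/*.jar matches exactly one file.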

Related

Is there a way to store required libraries persistently? I'm running into error code H10

I'm deploying a Sinatra webapp that makes use of a C library, through Ruby FFI. I need this library to be compiled on the target machine. I normally do this through a Rakefile, but because of Heroku's ephemeral hard drive, any time I run rake the compiled files disappear. How can I make these libraries persist? Is there a way to push the files to git from Heroku?
Possibly related: Whenever I try to access the page, the application crashes with a code=H10 error. The app runs fine locally.
My attempt at customizing a buildpack:
#!/usr/bin/env ruby
# This script compiles an application so it can run on Heroku.
# It will install the application's specified version of Ruby, its dependencies
# and certain framework specific requirements (such as calling `rake assets:precompile`
# for rails apps). You can see all features described in the devcenter
# https://devcenter.heroku.com/articles/ruby-support
$stdout.sync = true
$:.unshift File.expand_path("../../../lib", __FILE__)
require "language_pack"
require "language_pack/shell_helpers"

begin
  # my addition begins here
  `mkdir /app/src`
  `cd /app && curl -s https://www.astro.com/ftp/swisseph/swe_unix_src_2.10.02.tar.gz | tar xzvf -`
  `cd '/app/src' && make libswe.so`
  # my addition ends here
  LanguagePack::ShellHelpers.initialize_env(ARGV[2])
  if pack = LanguagePack.detect(ARGV[0], ARGV[1])
    pack.topic("Compiling #{pack.name}")
    pack.log("compile") do
      pack.compile
    end
  end
rescue Exception => e
  LanguagePack::ShellHelpers.display_error_and_exit(e)
end
Alternatively, I also tried:
#!/usr/bin/env bash
# The actual compilation code lives in `bin/support/ruby_compile`. This file instead
# bootstraps the ruby needed and then executes `bin/support/ruby_compile`
BUILD_DIR=$1
CACHE_DIR=$2
ENV_DIR=$3
BIN_DIR=$(cd $(dirname $0); pwd)
BUILDPACK_DIR=$(dirname $BIN_DIR)

# my addition begins here
mkdir /app/src
cd /app && curl -s https://www.astro.com/ftp/swisseph/swe_unix_src_2.10.02.tar.gz | tar xzvf -
cd '/app/src' && make libswe.so
# my addition ends here

source "$BIN_DIR/support/bash_functions.sh"
heroku_buildpack_ruby_install_ruby "$BIN_DIR" "$BUILDPACK_DIR"

if detect_needs_java "$BUILD_DIR"; then
  cat <<EOM
## Warning: Your app needs java
The Ruby buildpack determined your app needs java installed
we recommend you add the jvm buildpack to your application:
$ heroku buildpacks:add heroku/jvm --index=1
-----> Installing Java
EOM
  compile_buildpack_v2 "$BUILD_DIR" "$CACHE_DIR" "$ENV_DIR" "https://buildpack-registry.s3.us-east-1.amazonaws.com/buildpacks/heroku/jvm.tgz" "heroku/jvm"
fi

$heroku_buildpack_ruby_dir/bin/ruby $BIN_DIR/support/ruby_compile $@
Both seem to work at first (i.e. they compile the C library and output the files I need), but when I run heroku run bash or the web application, I'm not able to find the files. This is the specific error, btw:
/app/vendor/bundle/ruby/3.0.0/gems/ffi-1.15.5/lib/ffi/library.rb:145:in `block in ffi_lib': Could not open library '/app/src/libswe.so': /app/src/libswe.so: cannot open shared object file: No such file or directory (LoadError)
I've also tried the Heroku release phase, but even then the files did not persist. Procfile:
release: bundle exec rake
web: bundle exec thin start -R config.ru -e $RACK_ENV -p ${PORT:-5000}
Rakefile:
#require "bundler/gem_tasks"
task default: [:clean, :c_build, :get_ephe]
task :clean do
`rm ./src/libswe.so`
`rm -rf ephe`
end
task :c_build do
`wget https://www.astro.com/ftp/swisseph/swe_unix_src_2.10.02.tar.gz`
`tar xvf swe_unix_src_2.10.02.tar.gz`
`rm swe_unix_src_2.10.02.tar.gz`
`cd src && make libswe.so && echo "Compiled Library"`
end
task :get_ephe do
`mkdir ephe`
`wget -P ephe https://www.astro.com/ftp/swisseph/ephe/seas_12.se1`
`wget -P ephe https://www.astro.com/ftp/swisseph/ephe/seas_18.se1`
`wget -P ephe https://www.astro.com/ftp/swisseph/ephe/sefstars.txt`
`wget -P ephe https://www.astro.com/ftp/swisseph/ephe/semo_12.se1`
`wget -P ephe https://www.astro.com/ftp/swisseph/ephe/semo_18.se1`
`wget -P ephe https://www.astro.com/ftp/swisseph/ephe/sepl_12.se1`
`wget -P ephe https://www.astro.com/ftp/swisseph/ephe/sepl_18.se1`
end

Add Flink Job Jar in Docker Setup and run Job via Flink Rest API

We're running Flink in Cluster Session mode and automatically add Jars in the Dockerfile:
ADD pipeline-fat.jar /opt/flink/usrlib/pipeline-fat.jar
So that we can run this Jar via the Flink Rest API without the need to upload the Jar in advance:
POST http://localhost:8081/jars/:jarid/run
But the "static" jar is not shown when listing the jars to get the :jarid:
GET http://localhost:8081/jars
So my question is:
Is it possible to run a usrlib jar using the Flink REST API?
Or can you only reference such jars via
the CLI: flink run -d -c ${JOB_CLASS_NAME} /job.jar
and standalone-job --job-classname com.job.ClassName mode?
My alternative approach (workaround) would be to upload the jar in the Docker entrypoint.sh of the jobmanager container:
curl -X POST http://localhost:8084/jars/upload \
-H "Expect:" \
-F "jarfile=#./pipeline-fat.jar"
I believe that it is unfortunately currently not possible to start a Flink cluster in session mode with a jar pre-baked in the docker image and then start the job using the REST API commands (as you showed).
However your workaround approach seems like a good idea to me. I would be curious to see if it worked for you in practice.
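For the record, a sketch of what that upload-then-run sequence could look like in an entrypoint script (the wait loop and the jq parsing are my assumptions; the jar name and port 8081 are from the question's REST examples):
#!/usr/bin/env bash
# wait until the JobManager REST API is reachable
until curl -sf http://localhost:8081/jars > /dev/null; do sleep 1; done
# upload the pre-baked jar; the response carries the generated jar id in "filename"
response=$(curl -s -X POST -H "Expect:" -F "jarfile=@./pipeline-fat.jar" http://localhost:8081/jars/upload)
jar_id=$(basename "$(echo "$response" | jq -r '.filename')")
# start the job via the REST API
curl -s -X POST "http://localhost:8081/jars/${jar_id}/run"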
I managed to run a usrlib jar using the command line interface.
I edited docker compose to run a custom docker-entrypoint.sh.
I added this to the original docker-entrypoint.sh:
run_user_jars() {
  echo "Starting user jars"
  exec ./bin/flink run /opt/flink/usrlib/my-job-0.1.jar &
}
run_user_jars
...
And edited the original entrypoint for the jobmanager in the docker-compose.yml file:
entrypoint: ["bash", "/opt/flink/usrlib/custom-docker-entrypoint.sh"]

Can't run web application on Tomcat using Docker

I am trying to show on my browser the webapp I've created for a school project.
First of all, I've put my Dockerfile and my .war file in the same folder, /home/giorgio/Documenti/dockerProject. I've written the following in my Dockerfile:
# Pull base image
FROM tomcat:7-jre7
# Maintainer
MAINTAINER "xyz <xyz@email.com>"
# Copy to images tomcat path
RUN rm -rf /usr/local/tomcat/webapps/ROOT
COPY file.war /home/giorgio/Documenti/apache-tomcat-7.0.72/webapps/
Then I've built the image with this command in the Ubuntu shell:
docker build -t myName /home/giorgio/Documenti/dockerProjects
Finally, I've run on the shell:
docker run --rm -it -p 8080:8080 myName
Now, everything works fine and doesn't show any errors; however, when I try to reach localhost:8080 in my browser nothing shows up, even though Tomcat has started perfectly fine.
Any thoughts about a possible problem which I can't see?
Thank you!
Is this your whole Dockerfile?
Because you just remove all of ROOT's content (step #3),
then copy the war file with your application (step #4) - probably just a wrong folder in the question (it should be /usr/local/tomcat/webapps/).
But I don't see any entrypoint or foreground application being started.
I suppose you need to add:
CMD ["/usr/local/tomcat/bin/catalina.sh", "run"]
which just runs Tomcat. It is also routine to EXPOSE the port, but when you use -p, Docker does the exposing implicitly.
So your Dockerfile should look like:
# Pull base image
FROM tomcat:7-jre7
# Maintainer
MAINTAINER "xyz <xyz#email.com">
# Copy to images tomcat
RUN rm -rf /usr/local/tomcat/webapps/ROOT
# fixed path for copying
COPY file.war /usr/local/tomcat/webapps/
# Routine for me - optional for your case
EXPOSE 8080
# And run tomcat
CMD ["/usr/local/tomcat/bin/catalina.sh", "run"]

Reusable docker image for AngularJS

We have an AngularJS application. We wrote a Dockerfile for it so it's reusable on every system. The Dockerfile isn't best practice, and its setup (build and hosting in the same file) may look weird to some, but it was created just to run our AngularJS app locally on each developer's PC.
Dockerfile:
FROM nginx:1.10
... Steps to install nodejs-legacy + npm
RUN npm install -g gulp
RUN npm install
RUN gulp build
.. steps to move dist folder
We build our image with docker build -t myapp:latest .
Every developer is able to run our app with docker run -d -p 80:80 myapp:latest
But now we're developing other backends. So we have a backend in DEV, a backend in UAT, ...
So there are different URLs which we need to use in /config/xx.json:
{
...
"service_base": "https://backend.test.xxx/",
...
}
We don't want to change that URL every time, rebuild the image and start it. We also don't want to declare a fixed set of URLs (dev, uat, prod, ...) to choose from. We want to perform our gulp build with an environment variable instead of a hardcoded URL.
So we can start our container like this:
docker run -d -p 80:80 --env URL=https://mybackendurl.com app:latest
Does anyone have experience with this kind of issue? We'd need an env variable in our JSON, build with it, and add the URL later on, if that's possible.
EDIT: A better option is to use build args.
Instead of passing the URL at the docker run command, you can use Docker build args. It is better to have build-related commands executed during docker build than docker run.
In your Dockerfile,
ARG URL
And then run
docker build --build-arg URL=<my-url> .
See this stackoverflow question for details
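A minimal sketch of how the build arg could feed the gulp setup from the accepted workaround below (the config path and the my-url placeholder are taken from that answer):
ARG URL
# bake the backend URL into the config before building
RUN sed -i 's#my-url#'"$URL"'#' configs/config.json && gulp build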
This was my 'solution'. I know it isn't the best Docker approach, but it was a big help for our developers.
My dockerfile looks like this:
FROM nginx:1.10
RUN apt-get update && \
apt-get install -y curl
RUN sed -i "s/httpredir.debian.org/`curl -s -D - http://httpredir.debian.org/demo/debian/ | awk '/^Link:/ { print $2 }' | sed -e 's#<http://\(.*\)/debian/>;#\1#g'`/" /etc/apt/sources.list
RUN \
apt-get clean && \
apt-get update && \
apt-get install -y nodejs-legacy && \
apt-get install -y npm
WORKDIR /home/app
COPY . /home/app
RUN npm install -g gulp
RUN npm install
COPY start.sh /
CMD ["./start.sh"]
So after copying in the whole app and the npm installation inside my nginx image, I start my container with the start.sh script.
The content of start.sh:
#!/bin/bash
sed -i 's#my-url#'"$DATA_ACCESS_URL"'#' configs/config.json
gulp build
rm -r /usr/share/nginx/html/
# cp the folders created by gulp build to /usr/share/nginx/html
...
# start nginx in the foreground
/usr/sbin/nginx -g "daemon off;"
So the build happens when my container starts. Not the best way of course, but it's all for the needs of the developers: an easy local frontend.
The sed command will perform a replace on the config file which contains something like:
{
"service_base": "my-url",
}
So my-url will be replaced by the content of the environment variable which I define in my docker run command.
Then I'm able to run:
docker run -d -p 80:80 -e DATA_ACCESS_URL=https://mybackendurl.com app:latest
And every developer can use the frontend locally and connect with their own backend URL.

Why is my debian postinst script not being run?

I have made a .deb of my app using fpm:
fpm -s dir -t deb -n myapp -v 9 -a all -x "*.git" -x "*.bak" -x "*.orig" \
--after-remove debian/postrm --after-install debian/postinst \
--description "Automated build." -d mysql-client -d python-virtualenv home
Among other things, the postinst script is supposed to create a user for the app:
#!/bin/sh
set -e

APP_NAME=myapp

case "$1" in
  configure)
    virtualenv /home/$APP_NAME/local
    #supervisorctl start $APP_NAME
    ;;

  # http://www.debian.org/doc/manuals/securing-debian-howto/ch9.en.html#s-bpp-lower-privs
  install|upgrade)
    # If the package has default file it could be sourced, so that
    # the local admin can overwrite the defaults
    [ -f "/etc/default/$APP_NAME" ] && . /etc/default/$APP_NAME

    # Sane defaults:
    [ -z "$SERVER_HOME" ] && SERVER_HOME=/home/$APP_NAME
    [ -z "$SERVER_USER" ] && SERVER_USER=$APP_NAME
    [ -z "$SERVER_NAME" ] && SERVER_NAME=""
    [ -z "$SERVER_GROUP" ] && SERVER_GROUP=$APP_NAME

    # Groups that the user will be added to, if undefined, then none.
    ADDGROUP=""

    # create user to avoid running server as root
    # 1. create group if not existing
    if ! getent group | grep -q "^$SERVER_GROUP:" ; then
      echo -n "Adding group $SERVER_GROUP.."
      addgroup --quiet --system $SERVER_GROUP 2>/dev/null || true
      echo "..done"
    fi

    # 2. create homedir if not existing
    test -d $SERVER_HOME || mkdir $SERVER_HOME

    # 3. create user if not existing
    if ! getent passwd | grep -q "^$SERVER_USER:"; then
      echo -n "Adding system user $SERVER_USER.."
      adduser --quiet \
              --system \
              --ingroup $SERVER_GROUP \
              --no-create-home \
              --disabled-password \
              $SERVER_USER 2>/dev/null || true
      echo "..done"
    fi

    # … and a bunch of other stuff.
It seems like the postinst script is being called with configure, but not with install, and I am trying to understand why. In /var/log/dpkg.log, I see the lines I would expect:
2012-06-30 13:28:36 configure myapp 9 9
2012-06-30 13:28:36 status unpacked myapp 9
2012-06-30 13:28:36 status half-configured myapp 9
2012-06-30 13:28:43 status installed myapp 9
I checked that /etc/default/myapp does not exist. The file /var/lib/dpkg/info/myapp.postinst exists, and if I run it manually with install as the first parameter, it works as expected.
Why is the postinst script not being run with install? What can I do to debug this further?
I think the example script you copied is simply wrong. postinst is not supposed to be called with any install or upgrade argument, ever.
The authoritative definition of the dpkg format is the Debian Policy Manual. The current version describes postinst in chapter 6 and only lists configure, abort-upgrade, abort-remove, abort-remove, and abort-deconfigure as possible first arguments.
I don't have complete confidence in my answer, because your bad example is still up on debian.org and it's hard to believe such a bug could slip through.
I believe the answer provided by Alan Curry is incorrect, at least as of 2015 and beyond.
There must be some fault in the way your package is built, or an error in the postinst file, which is causing your problem.
You can debug your install by adding the -D (debug) option to your command line i.e.:
sudo dpkg -D2 -i yourpackage_name_1.0.0_all.deb
-D2 should sort out this type of issue
For the record, the debug levels are as follows:
Number Description
1 Generally helpful progress information
2 Invocation and status of maintainer scripts
10 Output for each file processed
100 Lots of output for each file processed
20 Output for each configuration file
200 Lots of output for each configuration file
40 Dependencies and conflicts
400 Lots of dependencies/conflicts output
10000 Trigger activation and processing
20000 Lots of output regarding triggers
40000 Silly amounts of output regarding triggers
1000 Lots of drivel about e.g. the dpkg/info dir
2000 Insane amounts of drivel
The install command calls the configure option, and in my experience the postinst script will always be run. One thing that may trip you up: when upgrading a package, the postrm script of the "old" version will be run after your current package's preinst script, which can cause havoc if you don't realise what is going on.
From the dpkg man page:
Installation consists of the following steps:
1. Extract the control files of the new package.
2. If another version of the same package was installed before the new installation, execute the prerm script of the old package.
3. Run the preinst script, if provided by the package.
4. Unpack the new files, and at the same time back up the old files, so that if something goes wrong, they can be restored.
5. If another version of the same package was installed before the new installation, execute the postrm script of the old package. Note that this script is executed after the preinst script of the new package, because new files are written at the same time old files are removed.
6. Configure the package.
Configuring consists of the following steps:
1. Unpack the conffiles, and at the same time back up the old conffiles, so that they can be restored if something goes wrong.
2. Run the postinst script, if provided by the package.
This is an old issue that has been resolved, but it seems to me that the accepted solution is not totally correct, and I believe it is necessary to provide information for those who, like me, are having this same problem.
Chapter 6.5 of the Debian Policy Manual details all the parameters with which the preinst and postinst files are called.
At https://wiki.debian.org/MaintainerScripts the installation and uninstallation flow is detailed.
Watch what happens in the following case:
apt-get install package
- Runs preinst install and then postinst configure
apt-get remove package
- Runs postrm remove, and the package will be set to the "Config Files" state
For the package to actually be in the "not installed" state, it must be purged:
apt-get purge package
That's the only way preinst install and postinst configure will run again the next time the package is installed.
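A quick way to observe this cycle yourself (package name hypothetical), using the -D2 debug flag mentioned above to print each maintainer-script invocation:
sudo dpkg -D2 -i myapp_9_all.deb     # shows preinst install, then postinst configure
sudo apt-get remove myapp            # shows prerm remove, then postrm remove; conffiles kept
sudo apt-get purge myapp             # shows postrm purge; next install runs preinst install again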
