docker build step Jenkins plugin - configured dockerFolder .....does not exist - jenkins-plugins

My Jenkins job is trying to build an image.
It tells me that the docker folder in the Jenkins workspace does not exist, which is true.
The Dockerfile is at the same level as the pom.xml, so it is not under a folder called docker.
My questions are: is there any documentation for this plugin?
Do I need to put the Dockerfile in a directory called docker?
Can I tell the plugin step where to look for the Dockerfile?
Regards, John
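(If the plugin step can't be pointed at the right folder, one common workaround is to run the build from a plain shell step instead; the sketch below is illustrative, the image name my-app is made up, and it assumes the Dockerfile sits in the workspace root next to the pom.xml.)
# Hedged workaround sketch: build from the workspace root with the plain docker CLI;
# -f points at the Dockerfile explicitly, "my-app" is an illustrative image name.
docker build -f Dockerfile -t my-app:${BUILD_NUMBER} .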

Related

How to deploy multiple containers on Heroku?

I have created an API (Flask-RESTful) service, a UI (in ReactJS) and a proxy service, each with its own Dockerfile in its respective folder.
There is also a Docker-compose.yaml file in the main repository. It works locally when I run docker-compose -f docker-compose.prod.yaml up, but I am unable to find a way to deploy multiple containers on Heroku.
Here is my github repo: https://github.com/Darpan313/Flask-React-nginx-Docker-Compose
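One way to run several containers for a single Heroku app is the container stack, where each process type gets its own Dockerfile. Below is a hedged sketch using the Heroku CLI; the app name my-app and the process-type names are illustrative, and --recursive expects Dockerfiles named Dockerfile.<process-type> (e.g. Dockerfile.web, Dockerfile.worker) rather than a docker-compose file.
# Hedged sketch, assuming the Heroku container stack; "my-app" and the process types are illustrative.
heroku container:login
heroku stack:set container -a my-app
# --recursive builds and pushes every Dockerfile.<process-type> it finds in the directory
heroku container:push --recursive -a my-app
heroku container:release web worker -a my-app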

I cannot deploy only the build folder of a ReactJS application in the Azure Linux environment

In the tutorials I found for deploying a ReactJS application to Azure, a Windows machine is used, but I would like to use a Linux machine, and in that case I can only send all of the project sources to the server:
../src
../public
I would like to know if it is possible to deploy only the contents of the build folder, because I have been trying for days and I can't get it to work.
I will explain what I have done.
First, you should change the "homepage" value to "." in package.json.
After that, set the route path to "/".
Then run "yarn -s build" or "npm run-script build" in your terminal;
the build folder will then be created.
Copy all the files in the "build" folder to your web root, e.g. /var/www/html (the exact path depends on your server).
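A minimal sketch of those last two steps as shell commands, assuming an Nginx/Apache web root at /var/www/html (adjust the path for your server):
# Hedged sketch of the build-and-copy steps above; the web root path is an assumption.
npm run-script build                # or: yarn -s build
sudo cp -r build/* /var/www/html/   # copy the static build output to the web root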
I can help you remotely.
I hope this answer helps you.

How can I use build_number variable from jenkins as a tag for a reactjs docker image?

I'm trying to create a tag for each Docker image generated by a Jenkins job; for this I'm looking for how to reference a Jenkins environment variable such as BUILD_NUMBER from my package.json.
Thank you for your help.
You can build your Docker image from a Jenkins job using the following command (the ${env.BUILD_NUMBER} form assumes a Pipeline sh step; in a freestyle shell step use ${BUILD_NUMBER}):
docker build -f Dockerfile -t react-image:${env.BUILD_NUMBER} .
It is not clear whether you want a version number defined inside package.json or the jenkins build number as the previous answer gave you.
In case you want a version number defined inside your package.json you could fetch it with grep inside a jenkins shell command and then use docker build as the above answer suggested.
If the version/build number is simply a shell environment variable, then you could use the environment variable injection plugin to export it to Jenkins.
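A hedged sketch of that grep/shell approach in a Jenkins shell step; the react-image name is illustrative, and the sed expression assumes the usual "version": "x.y.z" line in package.json:
# Hedged sketch: pull the version out of package.json, then tag the image with it plus the build number.
VERSION=$(sed -n 's/.*"version": *"\([^"]*\)".*/\1/p' package.json | head -n 1)
docker build -t react-image:${VERSION}-${BUILD_NUMBER} .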

Multiple services with different dockerfiles on GAE Flexible

I'm using Google AppEngine Flexible with python environment. Right now I have two services: default and worker that share the same codebase, configured by app.yaml and worker.yaml. Now I need to install native C++ library, so I had to switch to Custom runtime and added Dockerfile.
Here is the Dockerfile generated by the gcloud beta app gen-config --custom command:
FROM gcr.io/google-appengine/python
LABEL python_version=python3.6
RUN virtualenv --no-download /env -p python3.6
# Set virtualenv environment variables. This is equivalent to running
# source /env/bin/activate
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
ADD requirements.txt /app/
RUN pip install -r requirements.txt
ADD . /app/
CMD exec gunicorn --workers=3 --threads=3 --bind=:$PORT aces.wsgi
Previously my app.yaml and worker.yaml each had its own entrypoint: config that specified the command needed to start the service.
So, my question is how can I use two different commands to start the services?
EDIT 1
So far I have been able to solve this by rewriting the CMD line in the Dockerfile for each deploy of each service. However, I'm not quite satisfied with this solution.
The gcloud app deploy command has an --image-url flag that allows setting the image URL from GCR. I haven't researched that yet, but it seems that I can just upload images to GCR and use the URLs, since they don't change that often.
Yes, as you mentioned, I think using the --image-url flag is a good option here.
Specify a custom runtime.
Build the image locally, tag it, and push it to Google Container Registry (GCR).
Then deploy your service, specifying a custom service file and the remote image on GCR using the --image-url option.
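A minimal sketch of that flow, reusing the worker service from the question (the project ID my-proj and the image name are illustrative):
# Hedged sketch: build locally, push to Google Container Registry, then deploy pointing at the pushed image.
docker build -t gcr.io/my-proj/worker-image .
docker push gcr.io/my-proj/worker-image
gcloud app deploy worker.yaml --image-url gcr.io/my-proj/worker-image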
Here's an example that accomplishes different entrypoints in 2 services that share the same code:
...this is assuming that the "flex" and not "standard" app engine offering is being used.
Let's say you have a project called my-proj,
with a default service that is not important,
and a second service called queue-processor which uses much of the same code from the same directory.
Create a separate Dockerfile for it called QueueProcessorDockerfile
and a separate app.yaml called queue-processor-app.yaml to tell Google App Engine what I want to happen.
QueueProcessorDockerfile
FROM node:10
# Create app directory
WORKDIR /usr/src/app
COPY package.json ./
COPY yarn.lock ./
RUN npm install -g yarn
RUN yarn
# Bundle app source
COPY . .
CMD [ "yarn", "process-queue" ]
Of course, I have a "process-queue" script in my package.json.
queue-processor-app.yaml
runtime: custom
env: flex
... other stuff...
...
Build and tag the Docker image.
Check out Google's guide here: https://cloud.google.com/container-registry/docs/pushing-and-pulling
docker build -t eu.gcr.io/my-proj/queue-processor -f QueueProcessorDockerfile .
Push it to GCR:
docker push eu.gcr.io/my-proj/queue-processor
Deploy the service, specifying which YAML config file Google should use, as well as the image URL you have pushed:
gcloud app deploy queue-processor-app.yaml --image-url eu.gcr.io/my-proj/queue-processor
Since the Dockerfile name cannot be changed, the only way to avoid having to modify the Dockerfile would be to store each service in its own, separate directory. Clean separation: each service has its own Dockerfile and/or startup configuration.
But this raises a question: how to deal with the code shared by multiple services? Using symlinks (which works great for sharing code across standard env services) doesn't work for the flexible env services, see Sharing code between flexible environment modules in a GAE project.
I see a few possible approaches, none really ideal, but maybe more appealing than what you currently have:
hard-link each and every shared source code file (since hardlinking directories is not possible). A bit tedious and error-prone, but you only have to do that once per file
package and publish your shared code as an external library, added to the requirements.txt file of each service using it
split the shared code in a separate repository and have a copy of that repository in each service using it (maybe as a git submodule if using git?). You just need to ensure at service deployment time that the shared repository is pulled at the proper version - this can be done quite reliably through automation. It is a bit more complicated if you have uncommitted changes in this repo - you'd have to patch the same changes in all services.
have multiple copies of the Dockerfile with different names, which you simply copy over instead of always editing the same file (see the sketch after this list). Symlinking instead of copying might work as well: the symlink doesn't need to be followed outside of the service directory, so if it's just replicated as a symlink it will work.
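A hedged sketch of that copy-over approach; the Dockerfile.default / Dockerfile.worker names are illustrative, while app.yaml and worker.yaml are the files from the question:
# Hedged sketch: keep one Dockerfile per service under a distinct name and copy it into place before each deploy.
cp Dockerfile.default Dockerfile
gcloud app deploy app.yaml
cp Dockerfile.worker Dockerfile
gcloud app deploy worker.yaml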
So I had a very similar issue with my Java applications. We were looking to migrate from Heroku to GAE and were attempting to simulate the Heroku Procfile with GAE services. Effectively, what we did was create separate directories in our application, src/main/appengine/web and src/main/appengine/worker, where each directory contained the app.yaml and Dockerfile specific to that process. Then, using the mvn appengine:deploy capabilities, we specified -Dapp.stage.dockerDirectory and -Dapp.stage.appEngineDirectory respectively for each service we wanted to deploy. Then, using just some parameters, we were able to basically script out parallel deployments of each service from the same code base. Not sure if this works in your situation, but it was very useful for us. Here are the two example commands in their entirety:
Web Process:
mvn appengine:deploy -Dapp.stage.dockerDirectory=src/main/appengine/web -Dapp.stage.appEngineDirectory=src/main/appengine/web -Dapp.stage.stagingDirectory=target/appengine-web -Dapp.deploy.projectId=${project-id} -Dapp.deploy.version=${project-version}
Worker Process:
mvn appengine:deploy -Dapp.stage.dockerDirectory=src/main/appengine/worker -Dapp.stage.appEngineDirectory=src/main/appengine/worker -Dapp.stage.stagingDirectory=target/appengine-worker -Dapp.deploy.projectId=${project-id} -Dapp.deploy.version=${project-version}

How do I run a script after a successful build in Jenkins?

I am running automated builds in Jenkins; after a successful build the jars are copied to my SCP repository.
I configured a post-build action to publish artifacts to the SCP repository, and everything is working fine.
But I want to stop my dev server before the artifacts are copied to it from Jenkins.
Is it possible to stop my dev server from Jenkins?
I'm not sure if you are still looking for an answer, but you can install a plugin to run a script after the build.
You can install the plugin by browsing to Manage Jenkins -> Manage Plugins in Jenkins, then selecting the Available tab. Search for the Post Build Task plugin and install it.
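With such a plugin (or a plain shell build step), stopping the dev server could look roughly like the sketch below; the user, host name, and service name are all illustrative and assume SSH access to the dev server plus a systemd-managed service:
# Hedged sketch: stop the dev server over SSH before the artifacts are copied.
ssh deploy@dev-server 'sudo systemctl stop my-app'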
