Binding a database to an app in CloudBees when deploying via Jenkins - database

I have now managed to deploy a web app on RUN@cloud. I do have the CloudBees Deployer plugin on Jenkins; however, I am looking for a way to use the Bees SDK to bind a database to the deployed app. How do I go about it?
Currently, I deploy it via Jenkins as a post-build action.

You can configure the Bees SDK in DEV@cloud with a script like this (assuming that you have uploaded a build secret zip file containing your ~/.bees/bees.config and exposed it via the environment variable ${SECRET} - please see the Build Secret Plugin).
Run this as an "Execute Shell" task within Jenkins, and then you can use the Bees SDK in the normal way to bind the database (or any resource) to your app, e.g.
bees app:bind -a acme/test -db mydb
See the Database Guide for more details.
Jenkins Execute Shell Script:
# Install the CloudBees SDK into the workspace if it is not already there
if [[ ! -d "${WORKSPACE}/bees-sdks" ]]
then
    mkdir "${WORKSPACE}/bees-sdks"
fi
cd "${WORKSPACE}/bees-sdks"
curl -o cloudbees-sdk-1.5.0-bin.zip http://cloudbees-downloads.s3.amazonaws.com/sdk/cloudbees-sdk-1.5.0-bin.zip
unzip -o cloudbees-sdk-1.5.0-bin.zip
rm cloudbees-sdk-1.5.0-bin.zip
# Put the bees CLI on the PATH for the rest of this shell step
PATH=${WORKSPACE}/bees-sdks/cloudbees-sdk-1.5.0:$PATH; export PATH
# Restore the SDK configuration from the build secret
if [[ ! -d ~/.bees ]]
then
    mkdir ~/.bees
fi
cp "${SECRET}/bees.config" ~/.bees/bees.config
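For completeness, a minimal sketch of how the bind step could be appended at the end of that same "Execute Shell" script once the SDK is on the PATH and the configuration is in place (acme/test and mydb are the placeholder names used above):
# Bind the database to the deployed app (account/app and database names are placeholders)
bees app:bind -a acme/test -db mydb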
I've done an online example here that illustrates how this all works. Sorry this is a little more complicated than we would like: we are working to make it smoother and I will update this answer shortly once the changes go live.

Related

Add Flink Job Jar in Docker Setup and run Job via Flink Rest API

We're running Flink in session cluster mode and automatically add JARs in the Dockerfile:
ADD pipeline-fat.jar /opt/flink/usrlib/pipeline-fat.jar
So that we can run this JAR via the Flink REST API without the need to upload it in advance:
POST http://localhost:8081/jars/:jarid/run
But the "static" JAR is not shown when listing the jars to get the :jarid:
GET http://localhost:8081/jars
So my question is:
Is it possible to run a userlib jar using the Flink Rest API?
Or can you only reference such jars via
the CLI: flink run -d -c ${JOB_CLASS_NAME} /job.jar
or the standalone-job --job-classname com.job.ClassName mode?
My alternative approach (workaround) would be to upload the jar in the Docker entrypoint.sh of the jobmanager container:
curl -X POST http://localhost:8084/jars/upload \
-H "Expect:" \
-F "jarfile=#./pipeline-fat.jar"
I believe that it is unfortunately not currently possible to start a Flink cluster in session mode with a JAR pre-baked in the Docker image and then start the job using the REST API commands (as you showed).
However your workaround approach seems like a good idea to me. I would be curious to see if it worked for you in practice.
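For reference, a rough sketch of what that upload-then-run workaround could look like inside the jobmanager's entrypoint, assuming curl and jq are available in the container, the REST API listens on port 8081, and the entry class below is a placeholder:
# Wait until the JobManager REST endpoint is reachable
until curl -sf http://localhost:8081/overview > /dev/null; do
    sleep 2
done

# Upload the JAR that was baked into the image
response=$(curl -s -X POST http://localhost:8081/jars/upload \
    -H "Expect:" \
    -F "jarfile=@/opt/flink/usrlib/pipeline-fat.jar")

# The jar id is the basename of the returned filename
jarid=$(basename "$(echo "$response" | jq -r '.filename')")

# Start the job (entry class is a placeholder)
curl -s -X POST "http://localhost:8081/jars/${jarid}/run?entry-class=com.example.Pipeline"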
I managed to run a userlib JAR using the command line interface.
I edited docker-compose to run a custom docker-entrypoint.sh.
I added the following to the original docker-entrypoint.sh:
run_user_jars() {
    echo "Starting user jars"
    exec ./bin/flink run /opt/flink/usrlib/my-job-0.1.jar &
}
run_user_jars
...
And edited the original entrypoint for the jobmanager in the docker-compose.yml file:
entrypoint: ["bash", "/opt/flink/usrlib/custom-docker-entrypoint.sh"]

Error getting alerts from compliances Your environment may not have any index with Wazuh's alerts

I am new to Elasticsearch. I have to set up Wazuh with an Elasticsearch cluster. I did all of that, and I have also installed the Wazuh plugin on Kibana. Once I opened the app and clicked on the agents section, it says:
Error getting alerts from compliances
Your environment may not have any index with Wazuh's alerts
Please help me.
You could try to uninstall and install it again. Here is the official uninstallation guide, Uninstalling Wazuh with Elastic Stack, and afterwards you can install Elastic again with this guide: Unattended installation.
Remember that if you want to preserve your configuration you can back up these files:
cp -p /var/ossec/etc/ossec.conf.orig /var/ossec_backup/etc/ossec.conf
cp -p /var/ossec/etc/local_internal_options.conf /var/ossec_backup/etc/local_internal_options.conf
cp -p /var/ossec/etc/client.keys /var/ossec_backup/etc/client.keys
cp -rp /var/ossec/queue/rids/* /var/ossec_backup/queue/rids/
Here you have more information about that: Migrating OSSEC agent.
After that, you have to put the files back in their original paths, for example as sketched below.
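A minimal sketch of that restore step, assuming the backups were taken into /var/ossec_backup as shown above:
# Copy the backed-up configuration back into the fresh installation
cp -p /var/ossec_backup/etc/ossec.conf /var/ossec/etc/ossec.conf
cp -p /var/ossec_backup/etc/local_internal_options.conf /var/ossec/etc/local_internal_options.conf
cp -p /var/ossec_backup/etc/client.keys /var/ossec/etc/client.keys
cp -rp /var/ossec_backup/queue/rids/* /var/ossec/queue/rids/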

How can we execute a Jupyter notebook Python script automatically in SageMaker?

I used Terraform to create a SageMaker notebook instance and deploy a Jupyter notebook Python script that creates and deploys a regression model.
I was able to run the script and create the model successfully via the AWS console manually. However, I could not find a way to get it executed automatically. I even tried executing the script via shell commands through the notebook instance's lifecycle configuration, but it did not work as expected. Any other ideas, please?
Figured this out. I passed the script below to the notebook instance as a lifecycle configuration.
#!/bin/sh
sudo -u ec2-user -i <<'EOF'
# Activate the built-in python3 environment, install runipy,
# and execute the notebook in the background
source activate python3
pip install runipy
nohup runipy <<path_to_the_jupyter_notebook>> &
source deactivate
EOF
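If you create the lifecycle configuration outside of Terraform, one way to register a script like the one above is the AWS CLI. The configuration and instance names below are placeholders, start.sh is assumed to contain the script, and base64 -w0 is the GNU coreutils flag (it differs on macOS):
# Register the on-start script as a lifecycle configuration (names are placeholders)
aws sagemaker create-notebook-instance-lifecycle-config \
    --notebook-instance-lifecycle-config-name run-notebook-on-start \
    --on-start Content="$(base64 -w0 start.sh)"

# Attach it to the notebook instance (the instance typically has to be stopped first)
aws sagemaker update-notebook-instance \
    --notebook-instance-name my-notebook-instance \
    --lifecycle-config-name run-notebook-on-start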

How to get the deployment.yaml file for MSSQL cluster in Kubernetes

I am installing the SQL Server cluster in a Kubernetes setup using the document below.
Check here.
I am able to install the cluster successfully, but I need to customize the deployment by specifying a custom Docker image and adding additional containers.
May I know how to get the deployment YAML file & Dockerfile for all the images in the running containers?
I have tried "kubectl edit", but not able to edit required details.
The easiest way of doing that is using something like:
kubectl get deployment -n %yournamespace% -o yaml > export.yaml
For Kubernetes YAML files you can use:
kubectl get <pod,svc,all,etc> <name> --export=true -o yaml
or
kubectl get <pod,svc,all,etc> <name> -o yaml
For Dockerfiles, there is a whole post on Stack Overflow which explains how to create a Dockerfile from an image; one common approach is sketched below.
Kubernetes Cheat Sheet is a good source for kubectl commands.
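For reference, a rough sketch of that image-inspection approach. The image name is only a placeholder, and docker history can only recover the build steps recorded in the image layers, not the original build context:
# Show the commands recorded in each image layer (newest first, untruncated)
docker history --no-trunc mcr.microsoft.com/mssql/server:2019-latest

# Or inspect the full image metadata (entrypoint, env, labels, etc.)
docker inspect mcr.microsoft.com/mssql/server:2019-latest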
Let's say your deployments are created in the develop namespace. The command below will help you to get YAML out of your deployment; the --export flag will remove the cluster-specific information.
kubectl get deploy <your-deployment-name> --namespace=develop -o yaml --export > your-deployment.yaml
The command below will help you to get JSON out of your deployment.
kubectl get deploy <your-deployment-name> --namespace=develop -o json --export > your-deployment.json

gcloud: how to download the app via cli

I deployed an app with gcloud preview app deploy.
Is there a way to download it to another local machine?
How can I get the files? I tried it via SSH with no success (I can't access the Docker dir).
UPDATE:
I found this:
gcloud preview app modules download default --version 1 --output-dir=my_dir
but it's not loading files
Log
Downloading module [default] to [my_dir/default]
Fetching file list from server...
|- Downloading [0] files... -|
I am coming back to Google App Engine after two years, and I see that they have made lots of improvements and added tons of features. But sadly, their documentation sometimes leaves much to be desired.
I used to download the code of the uploaded version with appcfg.py using the following command.
appcfg.py download_app -A <app_id> -V <version> <output-dir>
But of course, they have now consolidated everything into the gcloud shell, where appcfg.py is not accessible.
However, the following method helped me to download the deployed code:
Go to the console and into Google App Engine.
Select the project you want to work with.
Once the project's dashboard opens, click the button at the top right to open the built-in console window.
This should load the Cloud Shell at the bottom; if you check, appcfg.py is available for you to use in this VM.
Hence, use appcfg.py download_app -A <app_id> -V <version> <output-dir> to download the code.
Now once you have the code in the desired folder, in order to download it to your local machine you can open the code editor.
Here I assumed that if I right-clicked and exported the desired folder it would work, but instead it gave me the following error message:
{"Error":"'concurrency' must be a number but it is [object Undefined]","Message":"'concurrency' must be a number but it is [object Undefined]"}
So I thought maybe it would play along nicely if the folder was an archive. Go back to the Cloud Shell and, using whatever utility you fancy, make an archive of the folder:
zip -r mycode.zip mycode
Go back to the code editor, then export and download the archive.
Of course there might be many more ways to do it (hopefully), but this is what made sense to me after returning to Google App Engine after two years.
Currently, the best way to do this is to pull the files out of Docker.
Put the instance into self-managed mode, so that you can SSH into it:
$ gcloud preview app modules set-managed-by default --version 1 --self
Find the name of the instance:
$ gcloud compute instances list | grep gae-default-1
Copy it out of the Docker container, change the permissions, and copy it back to your local machine:
$ gcloud compute ssh --zone=us-central1-f gae-default-1-1234 'sudo docker cp gaeapp:/app /tmp'
$ gcloud compute ssh --zone=us-central1-f gae-default-1-1234 "chown -R $USER /tmp/app"
$ gcloud compute copy-files --zone=us-central1-f gae-default-1-1234:/tmp/app /tmp/
$ ls /tmp/app
Dockerfile
[...]
IMHO, the best option today (Aug 2018) is:
Under the main menu, under Products, go to Tools -> Cloud Build -> Build history.
There, click the ID of the build you want.
Then, in the opened window (Build details), click the source link, the download of your compressed code begins.
As simple as that.
HTH.
As of Feb 2021, you can install appengine-sdk using pip
pip install appengine-sdk
Once installed, appcfg can be used to download the app code.
python -m appcfg download_app -A app_id [ -V version ] out-dir
Nothing else worked for me. Finally I found the source code this way: go to Google Cloud Storage, choose the bucket starting with us.artifacts...., select containers > images, and download the latest one (look at the created date). Unzip the downloaded file; it will contain all of the deployed App Engine source code.
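If you prefer the command line for that last approach, here is a rough sketch using gsutil; the bucket name pattern and object name are assumptions that depend on your project:
# List the artifact objects in the staging bucket (bucket name is an assumption)
gsutil ls gs://us.artifacts.<your-project-id>.appspot.com/containers/images/

# Copy the most recent object locally, then unzip it to inspect the deployed source
gsutil cp gs://us.artifacts.<your-project-id>.appspot.com/containers/images/<object-name> .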
