Is there a way to fail a build in Jenkins after an Xray scan when JFrog Xray reports an unapproved license violation (without a Pipeline script)?
If the Xray watch has been configured with an action to fail a build upon detecting a vulnerability or a license violation, it will fail the build. Please check Creating Xray Policies: Automatic Actions.
In order to integrate your Jenkins (or any CI tool), you can follow this wiki page.
As can be seen from the link provided, integrating the CI tool with Xray requires a pipeline job (declarative or scripted).
I hope this helps and clarifies more.
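For completeness, a scripted-pipeline sketch of such an integration, assuming the Jenkins Artifactory plugin with a configured server id of 'my-artifactory' (both names are placeholders; the failBuild flag mirrors the watch's fail-build action):

```groovy
// Scripted pipeline sketch (Jenkins Artifactory plugin assumed; ids are placeholders)
node {
    def server = Artifactory.server 'my-artifactory'

    // ... build and publish build info to Artifactory here ...

    // Ask Xray to scan the published build; with failBuild: true the
    // pipeline fails when a watch with a fail-build action is triggered.
    def scanConfig = [
        'buildName'  : env.JOB_NAME,
        'buildNumber': env.BUILD_NUMBER,
        'failBuild'  : true
    ]
    server.xrayScan scanConfig
}
```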
I am reading https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/table/sqlClient.html,
looks that it illustrates the sql-client functionalities with the standalone cluster.
I would like to ask whether the SQL client supports running against a YARN cluster. If so, how do I configure it? I didn't find a related how-to on flink.apache.org.
I don't have a yarn cluster available to test this with, but see FLINK-18273 which explains that
The SQL Client YAML has a deployment section. One can use the regular flink run options there and configure e.g. a YARN job session cluster. This is neither tested nor documented but should work due to the architecture.
and also mentions that
The deployment section has also problems with keys that use upper cases. E.g. fromsavepoint != fromSavepoint which requires to use the short option s as a workaround.
Putting those two statements together suggests that adding this entry to sql-env.yaml (where xxx is the YARN application id):
deployment:
  yid: xxx
and then starting the client via sql-client embedded -e sql-env.yaml might just work.
See also https://docs.cloudera.com/csa/1.2.0/sql-client/topics/csa-sql-client-session-config.html.
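Under that assumption, a sketch of the relevant part of sql-env.yaml (untested, per the FLINK-18273 caveat; the application id is a placeholder):

```yaml
# sql-env.yaml - SQL Client environment file (sketch, untested)
# The deployment section accepts the regular `flink run` options;
# prefer lower-case/short keys (e.g. `s` instead of `fromSavepoint`),
# since upper-case keys are known to be problematic here.
deployment:
  # attach to an existing YARN session by application id (placeholder value)
  yid: application_1577777777777_0001
```

Then start the client with sql-client.sh embedded -e sql-env.yaml and check whether your queries show up in that YARN application's Flink UI.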
Is there a way to add the "Build with Parameters" option in Jenkins for a GitHub Organization project?
I appreciate your answers.
First, if you're using the GitHub Organization Folder plugin - it is deprecated and hasn't been updated in two years:
https://plugins.jenkins.io/github-organization-folder
So you might want to consider using the Multi Branch Pipeline (MBP) in conjunction with Job DSL instead.
Next - the idea is that the MBP dynamically creates branch 'jobs' when it scans the repo you provide. You may be able to put a parameters block in the pipeline you create, but even if that works, I wonder whether the job can still be triggered automatically.
https://jenkins.io/doc/book/pipeline/syntax/#parameters
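For reference, a parameters block in a declarative Jenkinsfile looks like the sketch below (parameter names are illustrative). Note that in an MBP the parameters are only registered after the branch job has run once, so the first run uses the defaults:

```groovy
// Jenkinsfile (declarative) - illustrative parameters block for an MBP branch job
pipeline {
    agent any
    parameters {
        string(name: 'DEPLOY_ENV', defaultValue: 'staging', description: 'Target environment')
        booleanParam(name: 'RUN_TESTS', defaultValue: true, description: 'Run the test suite')
    }
    stages {
        stage('Build') {
            steps {
                // Parameters are read back through the params object
                echo "Building for ${params.DEPLOY_ENV}, tests: ${params.RUN_TESTS}"
            }
        }
    }
}
```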
I was working with the TensorFlow Object Detection API. I managed to train it locally on my computer and got decent results. However, when I tried to replicate the same on GCP, I ran into several errors. Basically, I followed the steps in the official TensorFlow "Running on Cloud" documentation.
So this is how the bucket is laid out:
Bucket
  weeddetectin-data
  Train-packages
This is how I ran the training and evaluation jobs:
Running a multiworker training job
Running an evaluation job on cloud
I then used the following command to monitor the jobs on TensorBoard:
tensorboard --logdir=gs://weeddetection --port=8080
I opened the dashboard using the preview feature in the console, but it reports: "No dashboards are active for the current data set."
So, I checked my activity page to verify that the training and evaluation jobs were really submitted:
Training Job
Evaluation Job
It seems that no event files are being written to your bucket.
The root cause could be that the guide you are using refers to an old version of the TensorFlow models repository.
Please try changing
--train_dir=gs:...
to
--model_dir=gs://${YOUR_BUCKET_NAME}/model
and resubmit the job. Once the job is running, check the model_dir in the bucket to see whether files are written there.
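A resubmission sketch, assuming the newer model_main entry point of the Object Detection API (the bucket name, package file, region, and pipeline config path are all placeholders):

```shell
# Illustrative resubmission using --model_dir instead of --train_dir.
# All gs:// paths, the package file, and the region are placeholders.
gcloud ml-engine jobs submit training object_detection_$(date +%s) \
    --job-dir=gs://${YOUR_BUCKET_NAME}/train \
    --packages dist/object_detection-0.1.tar.gz \
    --module-name object_detection.model_main \
    --region us-central1 \
    -- \
    --model_dir=gs://${YOUR_BUCKET_NAME}/model \
    --pipeline_config_path=gs://${YOUR_BUCKET_NAME}/data/pipeline.config
```

Once events appear under model_dir, point TensorBoard's --logdir at that same path.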
Check out the gcloud ml-engine jobs documentation for additional reading.
Hope it helps!
I'm using this plugin for Cucumber, and I saw that if one or more steps fail, the report includes a screenshot of the failed step.
Is there a way to get screenshots for each step even when they don't fail?
Generally this is not considered good practice due to the performance overhead; however, you can do it using AfterStep hooks.
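A sketch of such a hook for Cucumber-JVM with Selenium (the DriverFactory helper is hypothetical and stands in for however your suite obtains its WebDriver; in cucumber-java 4.x use scenario.embed instead of scenario.attach):

```java
// Hooks.java - illustrative Cucumber-JVM hook (cucumber-java 5+, Selenium assumed)
// Attaches a screenshot after EVERY step, passed or failed, at the cost of
// one screenshot capture per step.
import io.cucumber.java.AfterStep;
import io.cucumber.java.Scenario;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;

public class Hooks {

    // Hypothetical helper that returns the suite's WebDriver instance
    private final WebDriver driver = DriverFactory.getDriver();

    @AfterStep
    public void attachScreenshot(Scenario scenario) {
        byte[] png = ((TakesScreenshot) driver).getScreenshotAs(OutputType.BYTES);
        // The attachment is picked up by the report alongside the step
        scenario.attach(png, "image/png", "step screenshot");
    }
}
```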
We are an SVN/Maven/Hudson shop. We are experimenting with the Maven Release Plugin to help automate our very laborious tagging and releasing process. We are happy with what we have seen and researched so far regarding this plugin.
Our question is - if we need to have different tags for some of the modules / applications being built, is there a way to script the responses?
We have waded through the interactive dry runs successfully, however we are looking to script these out to further our automation.
Has anyone tried this or know if it is possible?
Does the "Batch Mode" allow this functionality?
Thanks
Joe R
You can use -B (batch mode), but it will use the default version names (removing -SNAPSHOT at the end).
Regarding tags per module, you can have a look at the autoVersionSubmodules parameter.
/Olivier
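To make that concrete, a non-interactive invocation might look like the sketch below (the version numbers and tag name are placeholders; the properties are the standard release-plugin ones, so the answers that the interactive dry run prompts for are supplied on the command line instead):

```shell
# Illustrative scripted release; versions and tag are placeholders.
# -DautoVersionSubmodules=true gives every module the same version,
# avoiding one prompt per module in batch mode.
mvn --batch-mode release:prepare release:perform \
    -DreleaseVersion=1.2.0 \
    -DdevelopmentVersion=1.3.0-SNAPSHOT \
    -Dtag=myapp-1.2.0 \
    -DautoVersionSubmodules=true
```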