REST API in Jenkins to get the Pipeline (stages) information - jenkins-plugins

I'm exploring the options to get the pipeline (stages) information in Jenkins through the REST API. We have the Pipeline plugin installed on Jenkins.
Any help on same is highly appreciated.

You can use this endpoint to get the pipeline runs:
https://github.com/jenkinsci/pipeline-stage-view-plugin/tree/master/rest-api#get-jobjob-namewfapiruns
Each run blob should contain the corresponding stages, so you can extract the stages of the latest run.
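As a minimal sketch (not from the plugin docs), you could call that endpoint with Kotlin's built-in java.net.http client; the Jenkins URL, job name and API token below are placeholders for your own values:

import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse
import java.util.Base64

fun main() {
    val jenkinsUrl = "https://jenkins.example.com"  // placeholder: your Jenkins base URL
    val jobName = "my-pipeline"                     // placeholder: your pipeline job name
    val auth = Base64.getEncoder().encodeToString("user:apiToken".toByteArray())

    val request = HttpRequest.newBuilder()
        .uri(URI.create("$jenkinsUrl/job/$jobName/wfapi/runs"))
        .header("Authorization", "Basic $auth")
        .GET()
        .build()

    val response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())

    // The response is a JSON array with one entry per run; each entry has a "stages"
    // field with stage names, statuses and durations. Parse it with any JSON library.
    println(response.body())
}

Taking the first element of that array gives you the latest run and its stages.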

Related

Query on automating Flink Job submission

I am trying to use the Flink REST APIs to automate the Flink job submission process via a pipeline. To call any Flink REST endpoint we need to know the Job Manager web interface IP. For my POC, I got the IP after running the flink-yarn-session command on the CLI, but what is the way to get it from code?
For automation, I am planning to call the following REST APIs in sequence:
requests.get('http://ip-10-0-127-59.ec2.internal:8081/jobs/overview')  # Get running job ID
requests.post('http://ip-10-0-127-59.ec2.internal:8081/jobs/:jobId/savepoints/')  # Cancel job with savepoint
requests.get('http://ip-10-0-127-59.ec2.internal:8081/jobs/:jobId/savepoints/:savepointId')  # Get savepoint status
requests.post('http://ip-10-0-127-59.ec2.internal:8081/jars/upload')  # Upload jar for new job
requests.post('http://ip-10-0-127-59.ec2.internal:8081/jars/de05ced9-03b7-4f8a-bff9-4d26542c853f_ATVPlaybackStateMachineFlinkJob-1.0-super-2.3.3.jar/run')  # Submit new job
requests.get('http://ip-10-0-116-99.ec2.internal:35497/jobs/:jobId')  # Get status of new job
If you have the flexibility to run on Kubernetes instead of YARN (it looks like you are on AWS from your hostnames, so you could use EKS), then I would recommend using the official Flink Kubernetes Operator - it is built for exactly this purpose by the community.
If YARN is a given for your use case, then you may follow the code that Flink itself uses to talk to the YARN ResourceManager in the flink-yarn package, especially the following (a rough sketch follows the links below):
https://github.com/apache/flink/blob/master/flink-yarn/src/main/java/org/apache/flink/yarn/YarnClusterDescriptor.java#L384
https://github.com/apache/flink/blob/master/flink-yarn/src/main/java/org/apache/flink/yarn/YarnResourceManagerDriver.java#L258
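One possible way to discover the JobManager address from code, in the spirit of the flink-yarn code linked above, is to ask the YARN ResourceManager for the running Flink session application and use its tracking URL. This is only a sketch using the Hadoop YarnClient API; the "Apache Flink" application type filter and the assumption that the tracking URL reaches the JobManager's REST endpoint (via the YARN web proxy) are mine, not from the original answer.

import org.apache.hadoop.yarn.api.records.YarnApplicationState
import org.apache.hadoop.yarn.client.api.YarnClient
import org.apache.hadoop.yarn.conf.YarnConfiguration
import java.util.EnumSet

fun flinkRestUrl(): String? {
    val yarn = YarnClient.createYarnClient()
    yarn.init(YarnConfiguration())  // picks up HADOOP_CONF_DIR / YARN_CONF_DIR settings
    yarn.start()
    try {
        // Find the running Flink session; YARN's tracking URL points (via the web proxy)
        // at the JobManager web interface, which also serves the REST API.
        val report = yarn
            .getApplications(setOf("Apache Flink"), EnumSet.of(YarnApplicationState.RUNNING))
            .firstOrNull() ?: return null
        return report.trackingUrl.trimEnd('/')
    } finally {
        yarn.stop()
    }
}

fun main() {
    println(flinkRestUrl())  // e.g. http://ip-10-0-127-59.ec2.internal:8081
}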

How to glue together Vert.x web and Kotlin react using Gradle in Kotlin MPP

Problem
It is not clear to me how to configure a Kotlin MPP (multiplatform project) using Gradle (Kotlin DSL) to use Vert.x web for the Kotlin/JVM target with Kotlin React on the Kotlin/JS target.
Update
You can check out the updated minimal example for a working solution,
inspired by the approach of Alexey Soshin.
What I've tried
Have a look at my minimal example on GitHub of a Kotlin MPP with the Vert.x web server on the JVM target and Kotlin React on the JS target.
You can make it work if you:
First run the Gradle task browserDevelopmentRun (I don't understand the magic behind it), and after the browser opens and renders the React SPA (single-page application) you can
stop that task and then
start the Vert.x backend with task run.
After that, without refreshing the SPA still open in the browser, you can confirm that it communicates with the backend by pressing the button: it will alert the received data.
Question
What are the possible ways/approaches to glue these two targets together so that, when I run my application, the JS target is assembled and served via the JVM backend conveniently?
I am thinking that perhaps Gradle should trigger some of the Kotlin browser tasks and then make them available in some way for the Vert.x backend.
If you'd like to run a single task, though, your server task needs to depend on your JS compilation. In your build.gradle.kts add the following:
tasks.withType<org.jetbrains.kotlin.gradle.tasks.KotlinCompile> {
    dependsOn(tasks.getByName<org.jetbrains.kotlin.gradle.targets.js.webpack.KotlinWebpack>("jsBrowserProductionWebpack"))
}
Now invoking run will also invoke webpack.
Next you want to serve your files. There are different ways of doing it. One is to copy them to Vert.x's resources directory using Gradle. Another is to point Vert.x to where webpack puts them by default:
route().handler(StaticHandler.create("../../../distributions"))
There are a bunch of different things going on there.
First, both your Vert.x server and the webpack dev server run on the same port. The easiest way to fix that is to start Vert.x on some other port, like 18080:
.listen(18080, "localhost") { result ->
And then change your index.kt file to use that port:
val result: SomeData = get("http://localhost:18080/data")
Because we now run on different ports, we also need to specify a CORS handler:
router.apply {
    route().handler(CorsHandler.create("*"))
}
Last is the fact that you cannot run two never-ending Gradle tasks from the same process (OK, you can, but that's complicated). So what I suggest is that you open two terminals and run:
./gradlew run
In one, and
./gradlew jsBrowserDevelopmentRun
In another.
Having done all that, you should see the React SPA load in the browser and talk to the Vert.x backend.
Now, this is for development mode. For production mode, you probably don't want to run jsBrowserDevelopmentRun, but rather tie jsBrowserProductionWebpack to your run task and serve spa.js from your Vert.x app using a StaticHandler, along the lines of the sketch below. But this answer is already too long.
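For completeness, one possible (untested) way to wire that up in build.gradle.kts is to copy the webpack output into the JVM target's resources and serve it from the classpath; the task names and the build/distributions output directory below are assumptions about the default Kotlin MPP layout and may differ between Kotlin plugin versions:

// Bundle the production SPA into the JVM jar's resources under "static/"
tasks.named<Copy>("jvmProcessResources") {
    dependsOn("jsBrowserProductionWebpack")
    // In this layout webpack's production bundle (spa.js) ends up under build/distributions
    from(layout.buildDirectory.dir("distributions")) {
        into("static")
    }
}

On the Vert.x side, route().handler(StaticHandler.create("static")) would then serve the bundle from the classpath instead of the relative ../../../distributions path.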

How to expose Hystrix jmx for Prometheus

I'm new to Hystrix and I just created my first Hystrix commands. The commands are being created and executed in a loop, so the metrics data should have been registered. I am using the Servo metrics publisher as follows:
HystrixPlugins.getInstance()
    .registerMetricsPublisher(HystrixServoMetricsPublisher.getInstance());
EDIT:
Looking at JConsole, I found the related metric definitions, as shown in the linked screenshot:
jconsole
I am not using Spring, Eureka or Servo to read the data and run the app.
I would like to know how to expose this data in a way that Prometheus can read. I tried hystrix-prometheus, but the documentation is not helpful about where the metrics are exposed, or how to retrieve or check them.
In order to retrieve Hystrix metrics, you'll first need to get Prometheus' Java simple client up and running. The setup depends on your environment, but independent of your environment the result should be a URL where you can already retrieve, for example, basic JVM metrics.
Once that is up and running, you can use the line
HystrixPrometheusMetricsPublisher.register("application_name");
to register the additional Hystrix metrics. They will be served at the same URL. Please note that you will see Hystrix metrics only after the first call of a Hystrix-enabled command.
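Put together, a minimal sketch (in Kotlin, but the same calls work from Java) could look like the following; port 9091 and the "myapp" namespace are placeholders, and the imports assume the io.prometheus:simpleclient_httpserver and com.soundcloud:prometheus-hystrix artifacts are on the classpath:

import com.soundcloud.prometheus.hystrix.HystrixPrometheusMetricsPublisher
import io.prometheus.client.exporter.HTTPServer

fun main() {
    // Serve the default CollectorRegistry at http://localhost:9091/metrics for Prometheus to scrape
    val server = HTTPServer(9091)

    // Register the Hystrix publisher once, before any Hystrix command runs;
    // its metrics are then exposed at the same /metrics URL
    HystrixPrometheusMetricsPublisher.register("myapp")

    // ... create and execute your Hystrix commands here; the Hystrix metrics
    // only show up after the first command execution ...
}

Note that Hystrix accepts only one metrics publisher, so this registration would replace the Servo publisher registration shown in the question.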

Methods to get website loading time for each step during execution of Selenium scripts?

I'm working on an automation using Selenium, but the application I'm testing is not always responsive. I need to get the time it takes for the elements to load on each page so I can clearly measure the actual execution time of the script. Is there any way to get such results?
You can try to record the HTTP traffic to create an HTTP Archive (HAR) and analyse that.
This link could be worth a read.
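As an illustration only (the original answer just links further reading), one common way to do this from a JVM-based Selenium script is to route the browser through BrowserMob Proxy and dump a HAR per page; the artifact choice, proxy setup and output path below are assumptions about your environment:

import net.lightbody.bmp.BrowserMobProxyServer
import net.lightbody.bmp.client.ClientUtil
import net.lightbody.bmp.proxy.CaptureType
import org.openqa.selenium.chrome.ChromeDriver
import org.openqa.selenium.chrome.ChromeOptions
import org.openqa.selenium.remote.CapabilityType
import java.io.File

fun main() {
    // Start an embedded proxy and let it capture request/response details
    val proxy = BrowserMobProxyServer().apply {
        start(0)  // 0 = pick any free port
        enableHarCaptureTypes(CaptureType.REQUEST_CONTENT, CaptureType.RESPONSE_CONTENT)
    }

    // Point the browser at the proxy
    val options = ChromeOptions()
    options.setCapability(CapabilityType.PROXY, ClientUtil.createSeleniumProxy(proxy))
    val driver = ChromeDriver(options)

    try {
        proxy.newHar("login-page")                // start a fresh HAR for this step
        driver.get("https://example.com/login")   // placeholder URL for the page under test
        // ... interact with the page as the script normally would ...
        proxy.har.writeTo(File("login-page.har")) // per-request timings live in the HAR
    } finally {
        driver.quit()
        proxy.stop()
    }
}

Each HAR entry contains per-request timings, so you can total the load time of every page or step and compare it against the script's overall execution time.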

How to import web sessions recorded using Fiddler into JMeter?

I use Fiddler to record a web session. I can export the recorded sessions into various formats like cURL script, WCAT script, Meddler script, HTML5 script, and Visual Studio web test.
How do I import the recorded session into JMeter?
JMeter appears to only support a proprietary XML format called jmeterTestPlan. While it would probably be possible to develop a Fiddler exporter that saves to that format, it's not clear that it would be worthwhile. Most of what you might want to do with JMeter can probably be done in Fiddler itself, and if your goal is to use JMeter for everything except the initial capture, you can use the proxy recorder included with JMeter instead of Fiddler.
