How to provide submodule functions through shared libraries while extending Jenkins Pipeline DSL - jenkins-plugins

When extending the DSL, I can extend it this way:
boo {
var1='var'
}
But I want to extend the DSL this way:
boo.RunBooWithFoo('var1')
Can someone provide an example of how to do this?

You can just create a file vars/boo.groovy in the shared library and put that function there.
def RunBooWithFoo(arg) {
    // your logic
}
Then in your pipeline you can use it like this:
@Library('shared-library-name') _
boo.RunBooWithFoo('var1')
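If you also want to keep supporting the block style from the question (boo { var1 = 'var' }), here is a sketch of what vars/boo.groovy could look like with both entry points (the echo bodies are just placeholders):
def call(Closure body) {
    // block style: boo { var1 = 'var' }
    def config = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = config
    body()
    echo "var1 is ${config.var1}"
}

def RunBooWithFoo(arg) {
    // method style: boo.RunBooWithFoo('var1')
    echo "running boo with ${arg}"
}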

Related

Using SourceFunction and SinkFunction in PyFlink

I am new to PyFlink. I have done the official training exercise in Java: https://github.com/apache/flink-training
However, the project I am working on must use Python as a programming language. I want to know if it is possible to write a data generator using the "SourceFunction". In older PyFlink versions this was possible, using Jython: https://nightlies.apache.org/flink/flink-docs-release-1.7/dev/stream/python.html#streaming-program-example
In newer examples the dataframe contains a finite set of data, which is never extended. I have not found any example of a data generator in PyFlink, e.g. https://github.com/apache/flink-training/blob/master/common/src/main/java/org/apache/flink/training/exercises/common/sources/TaxiRideGenerator.java
I am not sure what functionality the SourceFunction and SinkFunction interfaces provide. Can they be used somehow in Python, or can they only be used in combination with other pipelines or jar files? It looks like the methods "run()" and "cancel()" are not implemented, and thus they cannot be used like some other classes, by overriding them.
If they cannot be used in Python, are there other ways to use them? Perhaps someone can provide a simple example.
If it is not possible to use them, are there other ways to write a data generator in OOP style? Take this example: https://nightlies.apache.org/flink/flink-docs-stable/docs/dev/python/datastream_tutorial/ There the split() method is used to separate the stream. Basically, I want to do this with an extra class that just extends the stream, as the Java TaxiRide example does via "ctx.collect()". I am trying to avoid using Java, another framework for the pipeline, and Jython. It would be nice to get a short code example, but I appreciate any tips and advice.
I tried to use SourceFunction directly, but, as already mentioned, I think this is completely the wrong approach; it results in an error: AttributeError: 'DataGenerator' object has no attribute '_get_object_id'
class DataGenerator(SourceFunction):
    def __init__(self):
        super().__init__(self)
        self._num_iters = 1000
        self._running = True

    def run(self, ctx):
        counter = 0
        while self._running and counter < self._num_iters:
            ctx.collect('Hello World')
            counter += 1

    def cancel(self):
        self._running = False
Solution:
After looking at some older code that uses the SourceFunction and SinkFunction classes, I came to a solution. There, a Kafka connector written in Java is used. The Python code below can be taken as an example of how to use PyFlink's SourceFunction and SinkFunction.
I have only written an example for the SourceFunction:
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.datastream import SourceFunction
from pyflink.java_gateway import get_gateway


class TaxiRideGenerator(SourceFunction):
    def __init__(self):
        java_src_class = get_gateway().jvm.org.apache.flink.training.exercises.common.sources.TaxiRideGenerator
        java_src_obj = java_src_class()
        super(TaxiRideGenerator, self).__init__(java_src_obj)


def show(ds, env):
    # this is just a little helper to show the output of the pipeline
    ds.print()
    env.execute()


def streaming():
    # arm the flink ExecutionEnvironment
    env = StreamExecutionEnvironment.get_execution_environment()
    env.set_parallelism(1)
    taxi_src = TaxiRideGenerator()
    ds = env.add_source(taxi_src)
    show(ds, env)


if __name__ == "__main__":
    streaming()
The second line in the class init was hard to find. I had expected to get an object in the first line.
You have to create a jar file after building this project.
I changed into the build output directory until I could see the folder "org":
$ cd flink-training/flink-training/common/build/classes/java/main
flink-training/common/build/classes/java/main$ ls
org
flink-training/common/build/classes/java/main$ jar cvf flink-training.jar org/apache/flink/training/exercises/common/**/*.class
Copy the jar file to the pyflink/lib folder, normally under your python environment, e.g. flinkenv/lib/python3.8/site-packages/pyflink/lib. Then start the script.
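Alternatively, instead of copying the jar into pyflink/lib, more recent PyFlink releases (1.12 and later) can register the jar directly on the execution environment. A minimal sketch, with an illustrative path:
# Assumption: PyFlink >= 1.12, where add_jars() is available; the jar path is illustrative.
env = StreamExecutionEnvironment.get_execution_environment()
env.add_jars("file:///path/to/flink-training.jar")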

Method references in Camel routes

Is there a way to use method references in Camel routes? Something like:
from(X).bean(instance::method)
Thanks
There are two ways you can do this. As CookieSoup mentioned, you can use bean bindings, like bean(Instance.class, "method(String)").
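For the bean-binding option, a minimal route sketch (the endpoint URIs are illustrative; SomeClass and its methods are shown further down):
// endpoint URIs are illustrative
from("direct:in")
    .bean(SomeClass.class, "methodWithReturn(String)")
    .to("mock:out");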
Or you can use Camel's Processor and Transform clauses. There's an example on GitHub of how to use this (you'll need Camel 2.18.0 or greater).
class SomeClass {
    public void method(String body) {
    }

    public String methodWithReturn(String body) {
        return body;
    }
}
.process()
    .body(String.class, instance::method)
.transform()
    .body(String.class, instance::methodWithReturn)
Note, processors are consumers, whereas transforms are functions that return a transformed message body.

serenity-bdd with cucumber feature hooks

I am using Serenity-BDD with cucumber and I would like to run certain things only once per feature file. It looks like cucumber doesn't support this at the moment. I was wondering if serenity has some workaround for this.
I've also tried to use the JUnit @BeforeClass and @AfterClass hooks in the test suite class, but those two annotations require static methods and I cannot access the Serenity page object methods at that time (no instance has been injected at that point).
You could try setting up a static global flag which makes sure that the before method runs only once.
Set up the feature file with a tag:
@RunOnce
Feature: Run Once
Use the following hook in your step definition:
private static boolean onceFlag = true;

@Before(value = "@RunOnce")
public void beforeOnce() {
    if (onceFlag) {
        onceFlag = false;
        // Your code to run once per feature file
    }
}
You could try implementing the net.thucydides.core.steps.StepListener interface and connecting it via SPI. I described this in an answer to this post.

Turn off Hystrix functionality

I am integrating Hystrix into an application. That application is already in production, and we will be testing the Hystrix integration in a sandbox before pushing it to production.
My question: is there any way to turn Hystrix functionality on or off using some configuration setting?
There is no single setting for this. You'll need to set multiple parameters to disable Hystrix.
See https://github.com/Netflix/Hystrix/wiki/Configuration for the configuration options:
hystrix.command.default.execution.isolation.strategy=SEMAPHORE
hystrix.command.default.execution.isolation.semaphore.maxConcurrentRequests=100000 # basically 'unlimited'
hystrix.command.default.execution.timeout.enabled=false
hystrix.command.default.circuitBreaker.enabled=false
hystrix.command.default.fallback.enabled=false
Please double check your version of Hystrix for the available parameters.
This is all you need:
# Disable Circuit Breaker (Hystrix)
spring:
  cloud:
    circuit:
      breaker:
        enabled: false
hystrix:
  command:
    default:
      circuitBreaker:
        enabled: false
As ahus1 said, there is no single way to disable Hystrix entirely. To disable it in our application, we decided it was cleanest and safest to put a HystrixCommand in a wrapper class, and that wrapper class only exposed the parts of the HystrixCommand that we used (in our case, the execute() method). When constructing the wrapper class, we pass it a Callable that contains the code we want executed, and if Hystrix is disabled (according to our own config value), we simply call that Callable without ever creating a HystrixCommand. This avoids executing any Hystrix code whatsoever and makes it easier to say that Hystrix isn't affecting our application at all when it's disabled.
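A minimal sketch of that wrapper approach, assuming an application-specific flag (the class name, group key, and flag are illustrative, not from the original answer):
import java.util.concurrent.Callable;
import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;

// Only execute() is exposed; when Hystrix is disabled by our own config flag,
// the Callable runs directly and no HystrixCommand is ever created.
public class GuardedCall<T> {

    private final Callable<T> task;
    private final boolean hystrixEnabled;

    public GuardedCall(Callable<T> task, boolean hystrixEnabled) {
        this.task = task;
        this.hystrixEnabled = hystrixEnabled;
    }

    public T execute() throws Exception {
        if (!hystrixEnabled) {
            return task.call(); // bypass Hystrix entirely
        }
        return new HystrixCommand<T>(HystrixCommandGroupKey.Factory.asKey("guarded")) {
            @Override
            protected T run() throws Exception {
                return task.call(); // wrapped in a Hystrix command
            }
        }.execute();
    }
}
Callers would then use something like new GuardedCall<>(() -> doWork(), hystrixEnabled).execute() instead of constructing a HystrixCommand themselves.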
There are a couple of ways to achieve this:
Set the following for every group you use, including default. This does not actually disable Hystrix (it only forces the circuit to stay closed all the time), but you get the same result:
hystrix.command.{group-key}.circuitBreaker.forceClosed=true
If you are using Java, you can create around advice over the @HystrixCommand annotation and bypass Hystrix execution based on a flag.
Java code for the second option:
@Pointcut("@annotation(com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand)")
public void hystrixCommandAnnotationPointcut() {
}

@Around("hystrixCommandAnnotationPointcut()")
public Object methodsAnnotatedWithHystrixCommand(final ProceedingJoinPoint joinPoint) throws Throwable {
    Object result = null;
    Method method = AopUtils.getMethodFromTarget(joinPoint);
    if ("true".equals(System.getProperty("enable.hystrix"))) {
        result = joinPoint.proceed();
    } else {
        result = method.invoke(joinPoint.getTarget(), joinPoint.getArgs());
    }
    return result;
}
If your project is Spring-managed, you can comment out the bean definition of hystrixAspect in applicationContext.xml.
Comment out the following line:
bean id="hystrixAspect"class="com.netflix.hystrix.contrib.javanica.aop.aspectj.HystrixCommandAspect"/>
This will remove Hystrix from your project.
I ran into this situation where I wanted to completely turn off Hystrix using a single property (we use IBM uDeploy to manage dynamic properties). We are using the javanica library built on top of Hystrix.
1. Create a configuration class which conditionally creates the HystrixCommandAspect:
@Configuration
public class HystrixConfiguration {

    @Bean(name = "hystrixCommandAspect")
    @Conditional(HystrixEnableCondition.class)
    public HystrixCommandAspect hystrixCommandAspect() {
        return new HystrixCommandAspect();
    }
}
2. The condition class is then driven by an environment or system property:
public class HystrixEnableCondition implements Condition {

    @Override
    public boolean matches(ConditionContext context, AnnotatedTypeMetadata metadata) {
        return "YES".equalsIgnoreCase(
                   context.getEnvironment().getProperty("circuitBreaker.enabled"))
            || "YES".equalsIgnoreCase(
                   System.getProperty("circuitBreaker.enabled"));
    }
}
Setting
hystrix.command.default.execution.isolation.strategy=SEMAPHORE
is enough. Additionally, you may (or should) also disable the timeout threads with
hystrix.command.default.execution.timeout.enabled=false

How to load a component in console/shell

In CakePHP, how do I load a component in a shell?
To use a component in a controller, you include an array of components in a property called $components. That doesn't work for my Shell. Neither does the "on-the-fly" loading suggested in the documentation.
class MakePdfsShell extends Shell {
public $components = array('Document'); // <- this doesn't work
public function main()
{
$this->Document = $this->Components->load('Document'); // <- this doesn't work either
$this->Document->generate(); // <- this is what I want to do
}
...
I have some XML utilities that I use across some of my controllers. One of the controllers launches a heavy task via the cake console so that it can quietly run in the background via the PHP CLI while the user's request is completed immediately (once the task is done, it e-mails the results to the user).
The XML utilities are generic enough to be used in both controllers and shells, but too specific to the application to warrant vendor status. The solution offered with the Lib folder does not work, as there seems to be no Lib folder in CakePHP 3.
After quite some time, I managed to load my controller component into the shell task. Here is how:
namespace App\Shell;
use Cake\Console\Shell;
use Cake\Core\App;
use Cake\Controller\Component;
use Cake\Controller\ComponentRegistry;
use App\Controller\Component\XmlUtilitiesComponent; // <- resides in your app's src/Controller/Component folder
class XmlCheckShell extends Shell
{
public function initialize() {
$this->Utilities = new XmlUtilitiesComponent(new ComponentRegistry());
}
...
$this->Utilities can now be used across my entire shell class.
You simply don't.
If you think you have to load a component in a shell, your application architecture is badly designed and should be refactored.
Technically it is possible, but it doesn't make sense and can have pretty nasty side effects. Components are not made to run outside the scope of a request: a component is meant to run within the scope of an HTTP request and a controller, which are obviously not present in a shell.
Putting things in the right place
Why does XML manipulation stuff have to go into a component at all? That is simply the wrong place. It should go into a plain class, for example App\Utility\XmlUtils, with no dependencies on the request or the controller.
The logic is then properly decoupled and can be used anywhere it is needed. Also, if you receive incoming XML, the right place to do the manipulation (using your utility class) would be the model layer, not the controller.
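A minimal sketch of what such a utility class could look like (the class and method names are illustrative, not prescribed by CakePHP):
<?php
// src/Utility/XmlUtils.php -- no request or controller dependencies
namespace App\Utility;

class XmlUtils
{
    // Example helper: pretty-print an XML string.
    public static function prettyPrint(string $xml): string
    {
        $dom = new \DOMDocument('1.0');
        $dom->preserveWhiteSpace = false;
        $dom->formatOutput = true;
        $dom->loadXML($xml);

        return $dom->saveXML();
    }
}
Because it has no controller dependencies, the same class can be used from a shell, a table, or a controller with a plain use App\Utility\XmlUtils; statement.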
You will want to read about Separation of Concerns and tight coupling, because you've just gone against both principles:
https://en.wikipedia.org/wiki/Separation_of_concerns
What is the difference between loose coupling and tight coupling in the object oriented paradigm?
Search before asking
You could have searched via Google or on SO; you would have found one of these:
using components in Cakephp 2+ Shell
CakePHP using Email component from Shell cronjob
Using a plugin component from shell class in cakephp 2.0.2
...
Be aware that some of them might encourage bad practice. I haven't checked them all.
I assume that you have a component named YourComponent:
<?php
App::uses('Component', 'Controller');
class YourComponent extends Component {
public function testMe() {
return 'success';
}
}
- With CakePHP 2.x, you can load your component like this:
App::uses('ComponentCollection', 'Controller');
App::uses('YourComponent', 'Controller/Component');
class YourShell extends AppShell {
public function startup() {
$collection = new ComponentCollection();
$this->yourComponent = $collection->load('Your');
}
public function main() {
$this->yourComponent->testMe();
}
}
- With CakePHP 3.x, you can load your component like this:
<?php
namespace App\Shell;
use App\Controller\Component\YourComponent;
use Cake\Console\Shell;
use Cake\Controller\ComponentRegistry;
class YourShell extends Shell {
public function initialize() {
parent::initialize();
$this->yourComponent = new YourComponent(new ComponentRegistry(), []);
}
public function main() {
$pages = $this->yourComponent->testMe();
}
}
If you are trying to access a custom XyzComponent from a shell, then you probably have commonly-useful functionality there. The right place for commonly-useful functionality (that is also accessible from shells) is in /Lib/.
You can just move your old XyzComponent class from /Controller/Component/XyzComponent.php to /Lib/Xyz/Xyz.php. (You should rename your class to remove the "Component" suffix, e.g., "XyzComponent" becomes "Xyz".)
To access the new location, in your controller, remove 'Xyz' from your class::$components array. At the top of your controller file, add
App::uses('Xyz', 'Xyz'); // that's ('ClassName', 'folder_under_/Lib/')
Now you just need to instantiate the class: in your method you can do $this->Xyz = new Xyz();. You're still using the same code, but it can now also be accessed from your shell.
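Put together, a sketch of the shell side (class and method names are illustrative, based on the question's example):
// /Lib/Xyz/Xyz.php -- the former component, renamed
class Xyz {
    public function generate() {
        // ... the logic that used to live in XyzComponent ...
    }
}

// /Console/Command/MakePdfsShell.php
App::uses('Xyz', 'Xyz'); // ('ClassName', 'folder_under_/Lib/')

class MakePdfsShell extends AppShell {
    public function main() {
        $this->Xyz = new Xyz();
        $this->Xyz->generate();
    }
}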
// TestShell.php
class TestShell extends AppShell {
    public function test() {
        // load the component inside this function
        App::import('Component', 'CsvImporter');
        $CsvImporter = new CsvImporterComponent();
        $data = $CsvImporter->get();
    }
}

// CsvImporterComponent.php
App::uses('Component', 'Controller');

class CsvImporterComponent extends Component {
    function get() {
        // your code
    }
}
