Activiti / Camunda change boundary timer with variable - timer

I have a rather special question regarding timer boundary events on a user task in Activiti/Camunda:
When starting the process I set the timer duration via a process variable and use an expression in the boundary event definition to resolve it. The boundary event is defined on a user task.
<bpmn2:timerEventDefinition id="_TimerEventDefinition_11">
  <bpmn2:timeDuration xsi:type="bpmn2:tFormalExpression">${hurry}</bpmn2:timeDuration>
</bpmn2:timerEventDefinition>
In some cases, when the timer is already running, it can happen that the deadline (dueDate) needs to be extended because the assignee has requested more time. For this purpose I want to change the value of the process variable defining the deadline.
As it turns out, the variable is resolved at process start and the resulting due date is stored with the boundary event.
Any further changes to the variable do not affect the dueDate of the boundary timer, because the date is stored in the database and is not updated when the variable's value changes.
I know how to update the dueDate of the job via the Java API, but I want to provide a generic approach, i.e. changing the due date simply by changing the value of the variable.
The most common use case for extending the deadline will be when the boundary timer is already running.
Any ideas how to cope with this problem?
Any tips are very appreciated.
Cheers Chris

After some thought I came up with the following workaround:
I start the process with two variables. "hurry" is evaluated for the boundary timer, and "extendDeadline" is initialized with false. If the timer triggers and the process advances to the exclusive gateway, the value of "extendDeadline" is evaluated.
If a user changed the value of "extendDeadline" to true while the timer was running, the process loops back to the user task, where the boundary timer is again set to the value of "hurry".
If the "extendDeadline" is still set to false, the process can proceed.

If the timer is running, you can change its dueDate by executing a signal. If an assignee requests more time, set the new value of "hurry" and fire the signal. The old timer will be cancelled and a new timer will be created with the new due date.
runtimeService.setVariable(execution.getId(), "hurry", newDueDate);
runtimeService.signalEventReceived(signalName, execution.getId());

The solution is to have two outgoing sequence flows: one from the boundary timer on the task, and another from the task itself, as shown in the diagram added by @theFriedC.
Then you can use an exclusive gateway on the second sequence flow and route it back to the same task with a new timer value.
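A minimal sketch of that loop-back in BPMN XML, assuming an interrupting boundary timer. All ids and the `Task_Next` target here are made-up placeholders for illustration; the point is that re-entering the task re-creates the timer, so the current value of ${hurry} is read again:

```xml
<bpmn2:userTask id="Task_Work" name="Do work"/>
<bpmn2:boundaryEvent id="Boundary_Timer" attachedToRef="Task_Work">
  <bpmn2:timerEventDefinition id="TimerDef_1">
    <bpmn2:timeDuration xsi:type="bpmn2:tFormalExpression">${hurry}</bpmn2:timeDuration>
  </bpmn2:timerEventDefinition>
</bpmn2:boundaryEvent>

<!-- normal completion path, straight from the task -->
<bpmn2:sequenceFlow id="Flow_Done" sourceRef="Task_Work" targetRef="Task_Next"/>

<!-- timer path: the gateway decides between extending and proceeding -->
<bpmn2:exclusiveGateway id="Gateway_Extend"/>
<bpmn2:sequenceFlow id="Flow_TimerFired" sourceRef="Boundary_Timer" targetRef="Gateway_Extend"/>
<bpmn2:sequenceFlow id="Flow_Loop" sourceRef="Gateway_Extend" targetRef="Task_Work">
  <bpmn2:conditionExpression xsi:type="bpmn2:tFormalExpression">${extendDeadline}</bpmn2:conditionExpression>
</bpmn2:sequenceFlow>
<bpmn2:sequenceFlow id="Flow_Timeout" sourceRef="Gateway_Extend" targetRef="Task_Next">
  <bpmn2:conditionExpression xsi:type="bpmn2:tFormalExpression">${!extendDeadline}</bpmn2:conditionExpression>
</bpmn2:sequenceFlow>
```

Because the boundary event is interrupting, the loop back re-creates the task together with its timer, which is exactly what makes the extended value of "hurry" take effect.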

Related

Jmeter Pacing between thread group Iterations

In JMeter (5.4.1), I have the below thread group with 1 thread. I am controlling the frequency of the transaction using a constant timer. The constant timer_pacing in the image has the required pacing. I see that during execution the constant timer is applied to each sampler request in the particular thread group.
I am expecting all the samplers before the constant timer_pacing to be executed one after the other immediately. What am I doing wrong here? Please advise.
Alternatively, a similar setup seems to work as expected for another thread group.
If you want to add a delay between each iteration of each thread, add a Flow Control Action sampler as the first sampler and set the delay in the controller.
If you want to add a random delay, consider using the JMeter __Random function: ${__Random(1000,5000,)}
All JMeter timers obey JMeter Scoping Rules, so if you have timers at the same level as samplers, all the timers will be applied to all the samplers.
As per Timers documentation:
Note that timers are processed before each sampler in the scope in which they are found
So if you want to add a delay between defaultPhotoUrl and Submit requests you need to add a Constant Timer as a child of the "Submit" request

How to know when keyed window processing starts and when it has finished

I have a keyed window stream processing application (KeyedStream.window.process), and the window is a 15-minute tumbling window.
I would like to know when processing of a new window starts and when it ends, so that I can use that chance to do some global cleanup/initialization work.
For each window, before processing kicks off, I would like to do some initialization work, such as truncating a DB table (this should happen in only one place; it is a global operation that should not be done in the process method).
And when the window processing ends (all the process operator's tasks have finished), I would like to do some other cleanup work (again, a global operation).
I would like to know whether this is possible in Flink and how to do it, thanks!
I think you could accomplish this in an operator that follows the window, running with a parallelism of one. This operator will need to detect when a new batch of results begins to arrive from the window, and can do what's needed to close the previous window in the DB and initialize the new one at that time. It can also implement close() to do whatever wrap-up is needed if/when the job is ending or being shutdown.
Having done the initialization, this operator can simply forward on all of the events it receives from the window operator, until detecting the beginning of the next window's results.
This operator will need to keep one piece of managed state, namely some sort of identifier for the current window, so it can detect when a new window has begun. The results from the window will need to carry this identifier -- which could just be the window starting or ending timestamp.
You can use Flink's key-partitioned state for this -- you can simply key the stream by a constant. This is normally a bad idea, because it forces the effective parallelism to one (since every event will be assigned the same key), but as that's needed anyway by this (global) operator, it's not an issue.
Given these requirements, this operator could be a RichFlatMapFunction, or a KeyedProcessFunction. You'll need to use a KeyedProcessFunction if you find yourself wanting to use timers to do cleanup.
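The core of the operator described above, stripped of the Flink plumbing, is just "track the current window id and fire hooks when it changes". A plain-Java sketch (all names are assumptions; in a real job this logic would sit in a KeyedProcessFunction keyed by a constant, with currentWindowId kept in managed keyed state):

```java
import java.util.function.Consumer;

// Sketch of the boundary-detecting operator: when the window id carried by an
// incoming result differs from the current one, run the cleanup hook for the
// finished window and the init hook for the new one, then forward the event.
class WindowBoundaryDetector<T> {
    private Long currentWindowId;          // id of the window currently flowing through
    private final Runnable onWindowStart;  // global init, e.g. truncate the DB table
    private final Runnable onWindowEnd;    // global cleanup for the finished window
    private final Consumer<T> downstream;  // where forwarded events go

    WindowBoundaryDetector(Runnable onWindowStart, Runnable onWindowEnd,
                           Consumer<T> downstream) {
        this.onWindowStart = onWindowStart;
        this.onWindowEnd = onWindowEnd;
        this.downstream = downstream;
    }

    // windowId is carried by each window result, e.g. the window end timestamp.
    void process(long windowId, T element) {
        if (currentWindowId == null || windowId != currentWindowId) {
            if (currentWindowId != null) {
                onWindowEnd.run();   // previous window's results are complete
            }
            onWindowStart.run();     // initialize for the new window
            currentWindowId = windowId;
        }
        downstream.accept(element);  // forward every event unchanged
    }
}
```

Note that the "end" hook for the last window only fires when the next window's first result arrives (or in close() when the job shuts down), since the operator has no other way to know the window is complete.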

Apache Flink: When will the last watermark (with value `Long.MaxValue`) be triggered? And how should it be handled?

I want to know exactly:
When will the watermark value be set to Long.MaxValue? (On cancelling a SourceFunction? Cancelling a job through the CLI or the web panel? ...)
What does it mean for an application? (End of the job? Job failure, with no restart?)
And how should it be handled? (Clearing all the states? What about timers? As I saw, registering a new timer at this point will make the application run forever! If I need to persist state at the last watermark to recover from it in a later run, how should I persist a timer state?)
The last watermark is emitted when your SourceFunction exits its run method, and it means you have consumed all input.
Given this, you should not need to clear anything, as the job will be marked as finished once the watermark reaches all sinks.

Libev: how to schedule a callback to be called as soon as possible

I'm learning libev and I've stumbled upon this question. Assume that I want to process something as soon as possible but not now (i.e. not in the current executing function). For example I want to divide some big synchronous job into multiple pieces that will be queued so that other callbacks can fire in between. In other words I want to schedule a callback with timeout 0.
So the first idea is to use an ev_timer with a timeout of 0. The first question is: is that efficient? Is libev capable of transforming a 0-timeout timer into an efficient "call as soon as possible" job? I assume it is not.
I've been digging through libev's docs and I found other options as well:
it can artificially delay invoking the callback by using a prepare or idle watcher
So the idle watcher is probably not going to be good here because
Idle watchers trigger events when no other events of the same or higher priority are pending
Which is probably not what I want. Prepare watchers might work here. But why not a check watcher? Is there any crucial difference in the context I'm talking about?
The other option these docs suggest is:
or more sneakily, by reusing an existing (stopped) watcher and pushing it into the pending queue:
ev_set_cb (watcher, callback);
ev_feed_event (EV_A_ watcher, 0);
But that would require always having a stopped watcher around. Also, since I don't know a priori how many calls I want to schedule at the same time, I would have to keep multiple watchers, track them in some kind of list, and grow it when needed.
So am I on the right track? Are these all possibilities or am I missing something simple?
You may want to check out the ev_prepare watcher. That one is scheduled for execution as the last handler in the given event loop iteration, and it can be used for "execute this task ASAP" implementations. You can create a dedicated watcher for each task you want to execute, or you can implement a queue with a single prepare watcher that is started once the queue contains at least one task.
Alternatively, you can implement a similar mechanism using an ev_idle watcher, but then it will be executed only if the application isn't processing any 'higher-priority' watcher handlers.

How does the Timer component poll?

Let's say I have the following route with a timer component:
from("timer://foo?period=1000").setBody(constant("select * from customer")).to("jdbc:testdb").to("beanRef:processResult");
How does the timer component work here? Does it read from the database every second, or does it wait for the bean to finish processing?
If the bean is still processing the earlier result while the timer keeps polling the database, it will create a bottleneck. Is there any way to avoid that?
Okay, update: looking at the source code, the timer component relies on the Java TimerTask implementation, and your question is already answered here: Is Java's Timer task guaranteed not to run concurrently?
Short answer: a single thread executes the trigger and the routes connected to it, so there will be no concurrent execution.
That said, you might want to control the execution a bit. With timer tasks (and hence Camel timers), it is recommended to keep a margin between the timer period and the maximum task execution time.
You can use a SEDA component (with concurrentConsumers=[num threads]) in between for fine-grained control of the execution via a work queue. The timer will finish its task right away while the real route continues processing.
from("timer://foo?period=1000")
.to("seda:startRoute");
from("seda:startRoute")
.setBody(constant("select * from customer"))
.to("jdbc:testdb").to("beanRef:processResult");
Each event will stack up nonetheless, so over time you might want to tune the route so that the period > average route execution time.
You could add a shared boolean flag, either in a singleton bean or in a static class:
private static boolean running = false;

public static synchronized boolean isRunning() {
    return running;
}

public static synchronized void setRunning(boolean isRunning) {
    running = isRunning;
}
The flag tells whether the route is currently running; filter out timer events that occur while it is true. Just hook up a few processors/bean calls to handle this.
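One caveat with a separate getter and setter: checking the flag and then setting it is two steps, so two timer events could in principle both see "not running". A single synchronized test-and-set avoids that; a minimal sketch (the class and method names are made up for illustration):

```java
// Sketch of a shared-flag guard: tryStart() atomically checks and claims the
// flag, so a timer event is only let through when no previous run is active.
class RouteGuard {
    private static boolean running = false;

    // Returns true (and claims the flag) only if no run is in progress.
    public static synchronized boolean tryStart() {
        if (running) {
            return false; // previous poll still being processed: skip this event
        }
        running = true;
        return true;
    }

    // Call at the end of the route to let the next timer event through.
    public static synchronized void finish() {
        running = false;
    }
}
```

In the Camel route this could back a filter, e.g. something like .filter(method(RouteGuard.class, "tryStart")), with finish() invoked in a final processor.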
