Dispatch the highest priority in WPF

Hi guys, I would like to know what happens if I am in the middle of a task whose priority is Render and I create another task with priority Send. Will the Dispatcher wait until it is done with its Render-priority tasks before executing my higher-priority Send task?

Will the Dispatcher wait until it is done with its Render-priority tasks before executing my higher-priority Send task?
Yes. The priority is only used when the Dispatcher starts each task; it will not stop an operation already in progress.

Related

Jmeter Pacing between thread group Iterations

In Jmeter (5.4.1), I have the below thread group with 1 thread. I am controlling the frequency of the transaction using the constant timer. The constant timer_pacing in the image has the required pacing. I see that during execution, the constant timer is applied for each sample request in the particular thread group.
I am expecting all the samples before the constant timer_pacing to be executed one after the other immediately. What am I doing wrong here? Please advise.
Alternatively, a similar setup seems to work as expected for another thread group.
If you want to add a delay between each iteration of each thread add a Flow Control Action Sampler as the first sampler and set the delay in the controller
If you want to add a random delay consider using JMeter function Random ${__Random(1000,5000,)}
All JMeter timers obey JMeter Scoping Rules so if you have Timers at the same level as Samplers - all the timers will be applied to all the Samplers
As per Timers documentation:
Note that timers are processed before each sampler in the scope in which they are found
So if you want to add a delay between defaultPhotoUrl and Submit requests you need to add a Constant Timer as a child of the "Submit" request

Event driven simulation: When inserting new events into a priority queue, do old events become redundant?

I created a binary heap based priority queue in C. I'm trying to create a discrete event simulation.
Here's what I understand about event simulation:
Suppose I have 10 values in my priority queue, each value representing an event. For each value in the PQ, the program will dequeue a value and insert 10 more values. In other words, the program is making new calculations for those 10 events.
But what happens to the old values in the PQ? Since new values are being enqueued for every event, shouldn't the previous values become redundant? Shouldn't they be removed from the PQ so that the PQ doesn't get too large?
Pending events in your priority queue event list remain there until 1) they are polled and become the active event, or 2) they are cancelled (explicitly deleted from the priority queue) due to the logic of a different active event.
For example, consider a simplistic air traffic simulation. A take-off event will schedule an arrival event at the target destination at some specified time. However, a weather event or an emergency event might cancel the scheduled arrival, and either reschedule it with an additional delay or divert the airplane to arrive at a different destination with a different time. However, unless you explicitly cancelled the originally scheduled arrival, that event would be pending on the event list until its scheduled time rolled around.
Bottom line, there's no magic. It's up to you as the modeler to schedule or cancel events in a way that reflects the correct logic of your model. The priority queue just does the bookkeeping to handle the execution order.

How to make Camel AWSSQS Consumer component synchronous

I am using version 2.17.6 of the Camel AWS SQS component (consumer) to consume messages from a FIFO queue.
When the exchange processing takes longer than the visibility timeout, Camel spins up a new thread with the same message (but a new exchange). This happens because I am using long polling, and it looks like this component is built for asynchronous behavior by default.
In my case, I want to move on to the next message only when the current message has been deleted after processing (as my requirement is FIFO).
If I increase the visibility timeout to a high value (e.g. 15 mins), that solves the issue. But I do have cases where I will go for infinite retry, and in those cases I don't want to exceed the high visibility timeout and have multiple threads starting, thus changing the order of execution.
Please suggest if there is a way to not receive the next message in the queue until I am done processing the current one.
Thanks,
Sowjanya.

Libev: how to schedule a callback to be called as soon as possible

I'm learning libev and I've stumbled upon this question. Assume that I want to process something as soon as possible, but not now (i.e. not in the currently executing function). For example, I want to divide some big synchronous job into multiple pieces that will be queued so that other callbacks can fire in between. In other words, I want to schedule a callback with a timeout of 0.
So the first idea is to use ev_timer with timeout 0. The first question is: is that efficient? Is libev capable of transforming 0 timeout timer into an efficient "call as soon as possible" job? I assume it is not.
I've been digging through libev's docs and I found other options as well:
it can artificially delay invoking the callback by using a prepare or idle watcher
So the idle watcher is probably not going to be good here because
Idle watchers trigger events when no other events of the same or higher priority are pending
Which is probably not what I want. Prepare watchers might work here. But why not a check watcher? Is there any crucial difference in the context I'm talking about?
The other option these docs suggest is:
or more sneakily, by reusing an existing (stopped) watcher and pushing it into the pending queue:
ev_set_cb (watcher, callback);
ev_feed_event (EV_A_ watcher, 0);
But that would require always having a stopped watcher. Also, since I don't know a priori how many calls I want to schedule at the same time, I would have to have multiple watchers and additionally keep track of them via some kind of list, growing it when needed.
So am I on the right track? Are these all possibilities or am I missing something simple?
You may want to check out the ev_prepare watcher. It is scheduled for execution as the last handler in a given event loop iteration, and can be used for "execute this task ASAP" implementations. You can create a dedicated watcher for each task you want to execute, or you can implement a queue with a single prepare watcher that is started once the queue contains at least one task.
Alternatively, you can implement a similar mechanism using an ev_idle watcher, but in that case it will be executed only when the application isn't processing any higher-priority watcher handlers.

Activiti / Camunda change boundary timer with variable

I got a special question regarding timer boundary events on a user task in Activiti/Camunda:
When starting the process I set the timer duration with a process variable and use expressions in the boundary definition to resolve the variable. The boundary event is defined on a user task.
<bpmn2:timerEventDefinition id="_TimerEventDefinition_11">
  <bpmn2:timeDuration xsi:type="bpmn2:tFormalExpression">${hurry}</bpmn2:timeDuration>
</bpmn2:timerEventDefinition>
In some cases, when the timer is already running, it can occur that the deadline (dueDate) should be extended because the assignee has requested more time. For this purpose I want to change the value of the process variable defining the deadline.
As it happens, the variable is already resolved at the process-start and set to the boundary event.
Any further changes of the variable do not affect the dueDate of the boundary timer because it is stored in the database and is not updated when the value of the variable changes.
I know how to update the dueDate of the job element via the Java API, but I want to provide a generic approach, like setting it by changing the value of the variable.
The most common use case for extending the deadline will be when the boundary timer is already running.
Any ideas how to cope with this problem?
Any tips are very appreciated.
Cheers Chris
After some time of thinking I came up with a workaround like that:
I start the process with two variables. "hurry" is evaluated for the boundary timer. And "extendDeadline" is initialized with false. If the timer triggers and the process advances to the exclusive gateway, the value of "extendDeadline" is evaluated.
If a user has changed the value of "extendDeadline" to true while the timer was running, the process returns to the user task, where the boundary timer is again set to the value of "hurry".
If the "extendDeadline" is still set to false, the process can proceed.
If the timer is running, you can change the dueDate of the timer by executing a signal. If an assignee has requested more time, set a new value of "hurry" and execute the signal. The old timer will be cancelled and a new timer will be created with the new due date.
runtimeService.setVariable(execution.getId(), "hurry", newDueDate);
runtimeService.signalEventReceived(signalName, execution.getId());
The solution is to have two outgoing sequence flows: one from the boundary timer on the task and another from the task itself, as shown in the diagram added by @theFriedC.
Then you can use an exclusive gateway on the second sequence flow and reroute it back to the same task with a new timer value.
