Delay time between transactions - timer

I have a JMeter script which has multiple Transaction Controllers, and each Transaction Controller has multiple samplers. I want to add a 5-second delay between the Transaction Controllers. What is the right approach?
The script runs with n threads.

Add "Constant Timer" after each "Transaction Controller" by giving delay of 5 sec.It is a simplest and easy approach for that.

Alternatively, add a Constant Timer or Uniform Random Timer of 5 seconds only under the first sampler of each Transaction Controller (no need to include it under the first sampler of the first Transaction Controller), as sketched below.
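A minimal sketch of the second approach, assuming a test plan with three Transaction Controllers; since a timer that is a child of a sampler runs before that sampler executes, each Transaction Controller after the first starts roughly 5 seconds after the previous one finishes:
Test Plan
  Thread Group (n threads)
    Transaction Controller 1
      Sampler 1.1
      Sampler 1.2
    Transaction Controller 2
      Sampler 2.1
        Constant Timer (Thread Delay: 5000 ms)
      Sampler 2.2
    Transaction Controller 3
      Sampler 3.1
        Constant Timer (Thread Delay: 5000 ms)
      Sampler 3.2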

Related

How to trigger Messages at specific times?

We have a DB table where every row has a text message and a timestamp. E.g.
Mesg1 09:00
Mesg2 09:01
Mesg3 09:15
Mesg4 09:20
The timings are not at a fixed interval; they are uneven. We would like to read the table as a Source and send the Messages to a Target at the configured timestamps. Components like Quartz do not allow configuring uneven trigger times.
Is there a common pattern that can be followed for such a use case?
Regards,
Yash
Use the Camel cron component for the trigger events.
from("cron:tab?schedule=0/1+*+*+*+*+?")
.setBody().constant("event")
.log("${body}");
The schedule expression 0/3+10+*+*+*+? can also be written as 0/3 10 * * * ? and triggers an event every three seconds, but only during the tenth minute of each hour.
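For completeness, a minimal self-contained version of such a route; it assumes Camel 3.x with the camel-cron component (plus a cron implementation such as camel-quartz) on the classpath, and the endpoint name "tab" is arbitrary:
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class CronTriggerExample {
    public static void main(String[] args) throws Exception {
        DefaultCamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // every 3 seconds, but only during the 10th minute of each hour
                from("cron:tab?schedule=0/3+10+*+*+*+?")
                    .setBody().constant("event")
                    .log("${body}");
            }
        });
        context.start();
        Thread.sleep(120_000); // keep the JVM alive long enough to see some triggers
        context.stop();
    }
}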

Fraud Detection DataStream API tutorial questions

I am following the tutorial here.
Q1: Why, in the final application, do we clear all state and delete the timer whenever flagState = true, regardless of the current transaction amount? I refer to this part of the code:
// Check if the flag is set
if (lastTransactionWasSmall != null) {
    if (transaction.getAmount() > LARGE_AMOUNT) {
        // Output an alert downstream
        Alert alert = new Alert();
        alert.setId(transaction.getAccountId());
        collector.collect(alert);
    }
    // Clean up our state [WHY HERE?]
    cleanUp(context);
}
If the stream of transactions was 0.5, 10, 600, then flagState would be set for 0.5 and then cleared for 10. So for 600 we skip the code block above and never check for a large amount. But if the 0.5 and 600 transactions occurred within a minute of each other, we should have sent an alert, yet we didn't.
Q2: Why do we use processing time to determine whether two transactions are 1 minute apart? The Transaction class has a timestamp field, so isn't it better to use event time? Processing time is affected by the speed of the application, so two transactions with event times within 1 minute of each other could be processed more than 1 minute apart due to lag.
A1: The fraud model used in this example is explained by the figure in the tutorial. In your example, the transaction for 600 must immediately follow the transaction for 0.5 to be considered fraud. Because of the intervening transaction for 10, it is not fraud, even if all three transactions occur within a minute. It's just a matter of how the use case was framed.
A2: Doing this with event time would be a very valid choice, but would make the example much more complex. Not only would watermarks be required, but we would also have to sort the stream by event time, since a realistic example would have to consider that the events might be out-of-order.
At that point, implementing this with a process function would no longer be the best choice. Using the temporal pattern matching capabilities of either Flink's CEP library or Flink SQL with MATCH_RECOGNIZE would be the way to go.
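To make A2 concrete, here is a rough sketch of the same small-then-large-within-a-minute pattern expressed with Flink CEP; the Transaction class and its getters come from the tutorial, while the threshold constants and the surrounding event-time setup (timestamp assignment and watermarks) are assumed and omitted here:
import org.apache.flink.cep.CEP;
import org.apache.flink.cep.PatternStream;
import org.apache.flink.cep.pattern.Pattern;
import org.apache.flink.cep.pattern.conditions.SimpleCondition;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.windowing.time.Time;

public class FraudPatternSketch {
    private static final double SMALL_AMOUNT = 1.00;   // thresholds as used in the tutorial
    private static final double LARGE_AMOUNT = 500.00;

    public static PatternStream<Transaction> detect(DataStream<Transaction> transactions) {
        // A small transaction immediately followed by a large one, within one minute.
        Pattern<Transaction, ?> pattern = Pattern.<Transaction>begin("small")
            .where(new SimpleCondition<Transaction>() {
                @Override
                public boolean filter(Transaction t) {
                    return t.getAmount() < SMALL_AMOUNT;
                }
            })
            .next("large") // strict contiguity: nothing may come between the two events
            .where(new SimpleCondition<Transaction>() {
                @Override
                public boolean filter(Transaction t) {
                    return t.getAmount() > LARGE_AMOUNT;
                }
            })
            .within(Time.minutes(1));

        // key by account so the pattern is evaluated per account
        return CEP.pattern(transactions.keyBy(Transaction::getAccountId), pattern);
    }
}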

Drools Timer based rule fires multiple times after restart

I have a scenario where I want to use rules purely as a scheduled job for invoking other services. I am using a solution similar to Answer 2 on this question. So I have a rule which looks like:
rule "ServiceCheck"
timer ( int: 3m 5m )
no-loop true
when
then
boolean isServiceEnabled = DummyServices.getServiceEnabledProperty();
if(isServiceEnabled){
ServicesCheck servicesCheck = new ServicesCheck();
servicesCheck.setServiceEnabled(true);
insert(servicesCheck);
}
end
This inserts a ServicesCheck object every 5 minutes if services are enabled. Once this object is inserted, my other rules fire and retract the ServicesCheck fact from there.
The problem I am facing is when I switch off the app and start it the next day. At that time, the ServiceCheck rule fires a large number of times before coming to a stop. My assumption is that the last fired time is saved in the session, and when I restart, it finds a difference between the current time and the saved time and fires the rule until the two times match. So effectively, to catch up on a 1-hour gap between shutdown and restart, it will fire the rule 12 times in this case, as the interval is set to 5 minutes. Is there a way to update the last fired time in the rule session so that it starts like a fresh new session, without catching up for lost time?
I suppose you are persisting the entire session, and that you have a shutdown procedure. You can use a single fact, let's call it Trigger. Modify your rule to:
rule "ServiceCheck"
timer ( int: 3m 5m )
when
Trigger()
then
// ... same
end
You'll have to insert one Trigger fact after startup and retract it during shutdown.
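A minimal sketch of that startup/shutdown handling, assuming a Drools 5.x StatefulKnowledgeSession; the Trigger class itself is just an empty marker fact, and both names are placeholders for whatever your application uses:
import org.drools.runtime.StatefulKnowledgeSession;
import org.drools.runtime.rule.FactHandle;

public class ServiceCheckLifecycle {
    /** Empty marker fact that arms the timer rule. */
    public static class Trigger { }

    private final StatefulKnowledgeSession ksession;
    private FactHandle triggerHandle;

    public ServiceCheckLifecycle(StatefulKnowledgeSession ksession) {
        this.ksession = ksession;
    }

    public void onStartup() {
        // Arm the timer rule: it only fires while a Trigger fact is present.
        triggerHandle = ksession.insert(new Trigger());
    }

    public void onShutdown() {
        // Disarm before persisting/halting, so the timer does not "catch up" on restart.
        if (triggerHandle != null) {
            ksession.retract(triggerHandle);
        }
    }
}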
Later
I've set up an experiment (using 5.5.0) where a session is running, being called with fireUntilHalt in one thread, with a rule like "ServiceCheck". Another thread sleeps for some time, then retracts the Trigger fact and halts the session. After more than double the interval of the timer, the second thread inserts the Trigger again, signals the first thread to re-enter fireUntilHalt(), and then repeats its cycle. I can observe silence during the period where the Trigger is retracted.
If, however, the Trigger is not retracted/re-inserted, there'll be a burst of firings after the session has been restarted.
This indicates that retracting and re-inserting a Trigger does indeed stop and restart a timer rule.

Apache Camel - aggregator to space out requests, but not queuing requests

I have a route which, when sent a message, invokes a refresh service.
I only want the service to be invoked at most once every minute.
If the refresh service takes longer than 1 minute (e.g. 11 minutes), I don't want requests for it to queue up.
The first part (every 1 minute) is easy: I just create an aggregator with a completionTimeout of 1 minute.
The part about stopping requests from queueing up is not so easy, and I can't figure out how to construct it,
e.g.
from(seda_in)
    .aggregate(constant(A), blank aggregator)
    .completionTimeout(1000)
    .process(whatever)...
If the process takes 15 seconds, then potentially 15 new invoke messages could be waiting for the process when it finishes. I want at most 1 to be waiting, for however long the process takes (it's hard to predict).
How can I avoid this, or structure it better to achieve my objectives?
I believe you would be interested in the Throttler pattern, which is documented here: http://camel.apache.org/throttler.html
Hope this helps :)
EDIT: If you want to eliminate excess requests, you can also investigate setting a TTL (time-to-live) header within JMS and adding a concurrent consumer count of 1 to your route, which means any excess messages will also be discarded.
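For reference, a minimal sketch of the Throttler approach; the endpoint names ("seda:refresh", "bean:refreshService") are placeholders for the asker's actual route, and the TTL / single-consumer tuning mentioned in the edit would come on top of this:
import org.apache.camel.builder.RouteBuilder;

public class RefreshThrottleRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("seda:refresh")
            // let at most 1 exchange through per 60-second window;
            // excess exchanges are held back instead of invoking the service again
            .throttle(1).timePeriodMillis(60000)
            .to("bean:refreshService"); // placeholder for the actual refresh call
    }
}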

GAE Task Queues: how to add a delay between tasks?

In Task Queues, code is executed that connects to the server side through URL Fetch.
My queue.yaml file:
queue:
- name: default
  rate: 10/m
  bucket_size: 1
With these settings, the tasks are all performed at once, simultaneously.
The specific requirement is that there should be a delay of at least 5 seconds between requests: tasks must execute with a gap of more than 5 seconds between them (not in parallel).
What values should be set in queue.yaml?
You can't specify minimum delays between tasks in queue.yaml currently; you should do it (partly) in your own code. For example, if you specify a bucket size of 1 (so that more than one task should never be executing at once) and make sure each task runs for at least 5 seconds (record start = time.time() at the beginning and call time.sleep(max(0, 5 - (time.time() - start))) at the end), this should work. If it doesn't, have each task record in the datastore the timestamp at which it finished, and when a task starts, check whether the last task ended less than 5 seconds ago; if so, terminate immediately.
Another way could be to store the task data in a table. In your task queue, add an id parameter. Fetch the first task from the table and pass its id to the task-queue processing servlet. In the servlet, delay for 5 seconds at the end, then fetch the next task, pass its id, and so on.
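A rough sketch of that chaining idea using the App Engine Java task queue API; the /task URL, the id parameter, and the TaskStore helper are placeholders, and instead of sleeping inside the servlet this version uses the task's countdownMillis so the queue itself waits 5 seconds before dispatching the next task:
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;

public class ChainedTaskServlet extends HttpServlet {

    /** Placeholder for your own table access; replace with real queries. */
    static class TaskStore {
        static void process(String id) { /* handle the row for this id */ }
        static String nextId(String id) { return null; /* look up the next row, or null if done */ }
    }

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String id = req.getParameter("id");

        // Process the current task's data.
        TaskStore.process(id);

        // Enqueue the next task with a 5-second countdown, so the queue waits
        // before dispatching it and tasks stay at least 5 seconds apart.
        String nextId = TaskStore.nextId(id);
        if (nextId != null) {
            Queue queue = QueueFactory.getDefaultQueue();
            queue.add(TaskOptions.Builder.withUrl("/task")
                    .param("id", nextId)
                    .countdownMillis(5000));
        }
    }
}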
