I have a scenario where I want to use rules purely as a scheduled job for invoking other services. I am using a solution similar to Answer 2 on this. So I have rule 1 which looks like:
rule "ServiceCheck"
timer ( int: 3m 5m )
no-loop true
when
then
boolean isServiceEnabled = DummyServices.getServiceEnabledProperty();
if(isServiceEnabled){
ServicesCheck servicesCheck = new ServicesCheck();
servicesCheck.setServiceEnabled(true);
insert(servicesCheck);
}
end
This inserts a servicesCheck object every 5 minutes if services are enabled. Once this object is inserted, my other rules fire and retract the servicesCheck fact.
The problem I am facing is when I switch off the app and start it the next day. At that point, the ServiceCheck rule fires a load of times before coming to a stop. My assumption is that the last-fired time is saved in the session, so on restart the engine finds the difference between the current time and the saved time and fires the rule repeatedly until the two catch up. Effectively, to make up for a 1-hour gap between shutdown and restart, it will fire the rule 12 times, given the 5-minute interval. Is there a way to update the last-fired time in the session so that the engine starts fresh instead of catching up for lost time?
I suppose you are persisting the entire session, and that you have a shutdown procedure. You can use a single fact; let's call it Trigger. Modify your rule to:
rule "ServiceCheck"
timer ( int: 3m 5m )
when
Trigger()
then
// ... same
end
You'll have to insert one Trigger fact after startup and retract it during shutdown.
Later
I've set up an experiment (using 5.5.0) where a session runs, called with fireUntilHalt in one thread, with a rule like "ServiceCheck". A second thread sleeps for a while, retracts the Trigger fact, and halts the session. After more than double the timer's firing interval, the second thread inserts the Trigger again, signals the first thread to re-enter fireUntilHalt(), and then repeats its cycle. I can observe silence during the period where the Trigger is retracted.
If, however, the Trigger is not retracted and re-inserted, there is a burst of firings after the session has been restarted.
This indicates that retracting and re-inserting a Trigger does indeed stop and restart a timer rule.
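To make that concrete, here is a minimal sketch of the startup/shutdown handling against the Drools 5.x API. The Trigger class and the surrounding lifecycle class are illustrative, not taken from the question:

import org.drools.KnowledgeBase;
import org.drools.runtime.StatefulKnowledgeSession;
import org.drools.runtime.rule.FactHandle;

public class TriggerLifecycle {
    public static class Trigger { } // empty marker fact, illustrative

    private final StatefulKnowledgeSession ksession;
    private FactHandle triggerHandle;

    public TriggerLifecycle(KnowledgeBase kbase) {
        ksession = kbase.newStatefulKnowledgeSession();
    }

    public void start() {
        // Insert the Trigger so the timer rule becomes eligible to fire,
        // then run the engine in its own thread.
        triggerHandle = ksession.insert(new Trigger());
        new Thread(new Runnable() {
            public void run() { ksession.fireUntilHalt(); }
        }).start();
    }

    public void stop() {
        // Retract the Trigger before halting so the timer does not
        // "catch up" when the session is resumed later.
        ksession.retract(triggerHandle);
        ksession.halt();
    }
}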
Related
I have an AppleScript that runs in a loop every two hours to modify calendar B based on updates from another calendar A.
The script uses the on idle handler below to wait 2 hours on every loop. What happens if the computer stays idle for 1.5 hours and then goes to sleep for 10 hours? Will there be 0.5 hours left on the timer when it wakes up? Any other scenarios?
on idle
    my_code()
    return (120 * minutes)
end idle
The script really only needs to run if there is an update to calendar A, which is a shared iCloud calendar and can get updates from multiple people. The two-hour loop is what I could figure out so far, but I feel it is not efficient. Any more robust suggestions? Is there a way I can trigger the script to run only when it detects an update in calendar A? Or, along the same line of thought, is there a way to get the last timestamp at which the calendar was updated?
Thanks
I can't test the following, and I'm not sure it is the best way to solve your problem. Try it yourself:
property oldStampDates : {}

on run
    tell application "Calendar" to tell calendar "Test Calendar" to set oldStampDates to get stamp date of events
end run

on idle
    -- Retrieve the last-modified date and time of the indicated calendar's events.
    tell application "Calendar" to tell calendar "Test Calendar" to set newStampDates to get stamp date of events
    if newStampDates is not oldStampDates then display notification "Changes were detected"
    set oldStampDates to newStampDates
    return 30 -- seconds, default setting
end idle
NOTE: 1) instead of the display notification call you can call your handler my_code(); 2) instead of 30 seconds you can return another value, for example return 10 (to check every 10 seconds).
I am following the tutorial here.
Q1: Why, in the final application, do we clear all state and delete the timer whenever flagState is set, regardless of the current transaction amount? I refer to this part of the code:
// Check if the flag is set
if (lastTransactionWasSmall != null) {
    if (transaction.getAmount() > LARGE_AMOUNT) {
        // Output an alert downstream
        Alert alert = new Alert();
        alert.setId(transaction.getAccountId());
        collector.collect(alert);
    }
    // Clean up our state [WHY HERE?]
    cleanUp(context);
}
If the transaction stream were 0.5, 10, 600, then flagState would be set for 0.5 and then cleared for 10. So for 600 we skip the code block above and never check for a large amount. But if the 0.5 and 600 transactions occurred within a minute of each other, we should have sent an alert, yet we didn't.
Q2: Why do we use processing time to determine whether two transactions are 1 minute apart? The Transaction class has a timeStamp field, so isn't it better to use event time? Processing time is affected by the speed of the application, so two transactions with event times within 1 minute of each other could be processed more than 1 minute apart due to lag.
A1: The fraud model being used in this example is explained by a figure in the tutorial: a large transaction counts as fraud only when it immediately follows a small one.
In your example, the transaction 600 must immediately follow the transaction for 0.5 to be considered fraud. Because of the intervening transaction for 10, it is not fraud, even if all three transactions occur within a minute. It's just a matter of how the use case was framed.
A2: Doing this with event time would be a very valid choice, but would make the example much more complex. Not only would watermarks be required, but we would also have to sort the stream by event time, since a realistic example would have to consider that the events might be out-of-order.
At that point, implementing this with a process function would no longer be the best choice. Using the temporal pattern matching capabilities of either Flink's CEP library or Flink SQL with MATCH_RECOGNIZE would be the way to go.
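For a flavor of what that could look like, here is a rough sketch with Flink's CEP library. Transaction, LARGE_AMOUNT, and SMALL_AMOUNT are names from the tutorial; the rest (including the transactions stream variable) is my assumption, not tutorial code:

import org.apache.flink.cep.CEP;
import org.apache.flink.cep.PatternStream;
import org.apache.flink.cep.pattern.Pattern;
import org.apache.flink.cep.pattern.conditions.SimpleCondition;
import org.apache.flink.streaming.api.windowing.time.Time;

// A small transaction immediately followed by a large one within a minute.
Pattern<Transaction, ?> fraudPattern =
        Pattern.<Transaction>begin("small")
                .where(new SimpleCondition<Transaction>() {
                    @Override
                    public boolean filter(Transaction t) {
                        return t.getAmount() < SMALL_AMOUNT;
                    }
                })
                .next("large") // strict contiguity: the very next event for this key
                .where(new SimpleCondition<Transaction>() {
                    @Override
                    public boolean filter(Transaction t) {
                        return t.getAmount() > LARGE_AMOUNT;
                    }
                })
                .within(Time.minutes(1));

PatternStream<Transaction> matches =
        CEP.pattern(transactions.keyBy(Transaction::getAccountId), fraudPattern);

Note how .next() expresses the "immediately follows" semantics from A1 directly, which is exactly the part the hand-rolled process function has to encode with the flag state.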
I have a task that needs to run at 10 irregular times throughout the day.
For example at 6am, 8am, 11am, 4pm, 4:30pm ...
I am planning to do it the following way: schedule the first timer for 6am, and when it fires, schedule another timer for 8am, and so on. This should work fine until DST comes into the picture, because when DST starts, an already-scheduled timer will fire 1 hour later than the intended time.
The public void schedule(TimerTask task, Date time) API takes a Date object; however, once the timer is scheduled, if DST changes it will fire according to the new time, not at the intended wall-clock time.
Could someone provide some inputs on how to achieve this?
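Not a definitive answer, but one sketch of the chained approach using java.time, where each occurrence's delay is computed in an explicit time zone at fire time, so DST transitions are reflected in the computed delay. The zone, the times, and the task body are illustrative:

import java.time.Duration;
import java.time.LocalTime;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class WallClockScheduler {
    private static final ZoneId ZONE = ZoneId.of("America/New_York"); // illustrative
    private static final List<LocalTime> FIRE_TIMES = List.of(
            LocalTime.of(6, 0), LocalTime.of(8, 0), LocalTime.of(11, 0),
            LocalTime.of(16, 0), LocalTime.of(16, 30)); // ... up to 10 entries

    private final ScheduledExecutorService executor =
            Executors.newSingleThreadScheduledExecutor();

    public void scheduleNext(int index) {
        // Resolve the next wall-clock time in the zone *now*, so any DST
        // shift between now and then is included in the computed delay.
        ZonedDateTime now = ZonedDateTime.now(ZONE);
        ZonedDateTime next = now.toLocalDate().atTime(FIRE_TIMES.get(index)).atZone(ZONE);
        if (!next.isAfter(now)) {
            next = next.plusDays(1); // already passed today; use tomorrow
        }
        long delayMillis = Duration.between(now, next).toMillis();
        final int nextIndex = (index + 1) % FIRE_TIMES.size();
        executor.schedule(() -> {
            runTask();               // your actual work
            scheduleNext(nextIndex); // chain the next occurrence
        }, delayMillis, TimeUnit.MILLISECONDS);
    }

    private void runTask() { /* ... */ }
}

Because each delay is recomputed only when the previous timer fires, a DST change never affects more than the single currently pending occurrence, and that one is computed against the zone's rules for its target date.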
I have a route which, when sent a message, invokes a refresh service.
I only want the service to be invoked at most once every 1 minute.
If the refresh service takes longer than 1 minute (e.g. 11 minutes), I don't want requests for it to queue up.
The first part, at most every 1 minute, is easy: I just create an aggregator with a completionTimeout of 1 min.
The part about stopping requests from queueing up is not so easy, and I can't figure out how to construct it,
e.g.
from("seda:in")
    .aggregate(constant(true), new UseLatestAggregationStrategy()) // a "blank" aggregator that just keeps the latest exchange
    .completionTimeout(60000) // 1 minute
    .process(whatever)...
If the process takes 15 seconds, then potentially 15 new invoke messages could be waiting for the process when it finishes. I want at most 1 to be waiting, for however long the process takes (it's hard to predict).
How can I avoid this, or structure the route better to achieve my objectives?
I believe you would be interested in the Throttler pattern, which is documented here: http://camel.apache.org/throttler.html
Hope this helps :)
EDIT - If you want to eliminate excess requests, you can also investigate setting a TTL (time to live) header within JMS and configuring a single concurrent consumer on your route, which means any excess messages will also be discarded.
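For illustration, a minimal sketch of the Throttler in the Java DSL, assuming the same seda endpoint as above; the endpoint and processor names are placeholders:

from("seda:in")
    .throttle(1).timePeriodMillis(60000) // at most 1 exchange per minute
    .rejectExecution(true)               // throw instead of queueing excess messages
    .process(refreshService);

With rejectExecution enabled, messages over the limit fail with a ThrottlerRejectedExecutionException instead of waiting, which matches the "don't queue up" requirement; you would swallow or log that exception in an error handler.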
In Task Queues, code is executed to connect to the server side through URL Fetch.
My file queue.yaml:
queue:
- name: default
  rate: 10/m
  bucket_size: 1
With these settings, the tasks all execute at once, in parallel.
The specific requirement is that there be a delay of at least 5 seconds between requests: tasks must run more than 5 seconds apart, not in parallel.
What values should be set in queue.yaml?
You can't currently specify minimum delays between tasks in queue.yaml; you have to do it (partly) in your own code. For example, if you specify a bucket size of 1 (so that more than one task never executes at once) and make sure each task runs for at least 5 seconds (record start = time.time() at the beginning and call time.sleep(max(0, start + 5 - time.time())) at the end), this should work. If it doesn't, have each task record in the datastore the timestamp at which it finished, and when a task starts, check whether the last task ended less than 5 seconds ago; if so, terminate immediately.
The other way would be to store the task data in a table. In your task queue, add an id parameter: fetch the first task from the table and pass its id to the task-queue processing servlet. In the servlet, delay for 5 seconds at the end, fetch the next task, pass its id, and so on.
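A rough sketch of that chained-servlet idea against the App Engine Task Queue Java API; instead of sleeping inside the request, this uses countdownMillis to enqueue the next task with a 5-second delay. The /worker URL, the id parameter, and the fetchNextTaskId helper are illustrative:

import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class TaskWorkerServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) {
        String id = req.getParameter("id");
        processTask(id); // your actual work for this task

        String nextId = fetchNextTaskId(id); // illustrative: read the next row from the table
        if (nextId != null) {
            Queue queue = QueueFactory.getDefaultQueue();
            // countdownMillis delays the next task's execution by 5 seconds,
            // so consecutive tasks never run closer together than that.
            queue.add(TaskOptions.Builder.withUrl("/worker")
                    .param("id", nextId)
                    .countdownMillis(5000));
        }
    }

    private void processTask(String id) { /* ... */ }
    private String fetchNextTaskId(String id) { /* ... */ return null; }
}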