Can I use timerOnce with StateMachineRuntimePersister?

I have a Spring Statemachine (version 3.0.1) with its own instance for each user. With every HTTP request the state machine is acquired by a DefaultStateMachineService from a JpaStateMachineRepository. At the end of the HTTP request the state machine is released again. The state machine is configured to use a StateMachineRuntimePersister.
At one particular state of the state machine there is a transition triggered by a timer:
.withExternal()
    .source("S1")
    .target("S2")
    .timerOnce(60000) // fires once, 60 seconds after entering S1
I want this timer to be executed only if the state machine is still in state S1 after one minute (another cloud instance handling a later step of this state machine instance could have changed the state in the meantime, making the timer obsolete).
I tried releasing the state machine without stopping it (stateMachineService.releaseStateMachine( machineId, false );). Then the timer always fires after one minute, because the machine does not refresh its state from the database, which leads to an inconsistent state.
I also tried releasing the state machine stopped. Then the timer is killed and never fires.
One solution could be to switch the trigger from timerOnce to an event and to use a separate mechanism (e.g. a Quartz scheduler) that reads the state machine from the database, checks the state, and then explicitly fires the event.
But that would make the timer feature of Spring Statemachine useless in a cloud environment, so I suspect there is a better solution.
What would be the best solution here?

The problem you mention can be solved using Distributed States.
With this feature, Spring Statemachine uses ZooKeeper to keep the states of machines consistent across app instances.
In your case, ZooKeeper would let your state machine know when its state changes on another app instance, so it can behave accordingly with the timer.
It's worth mentioning that, according to the docs, the feature is not yet considered mature:
Distributed state functionality is still a preview feature and is not yet considered to be stable in this particular release. We expect this feature to mature towards its first official release.
You can find some technical details of the concept in the Distributed State Machine Technical Paper.
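For reference, wiring a machine into a ZooKeeper ensemble looks roughly like this (a sketch based on the reference docs; the class name, the connect string, and the /myMachine znode path are placeholders):

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.statemachine.config.EnableStateMachine;
import org.springframework.statemachine.config.StateMachineConfigurerAdapter;
import org.springframework.statemachine.config.builders.StateMachineConfigurationConfigurer;
import org.springframework.statemachine.ensemble.StateMachineEnsemble;
import org.springframework.statemachine.zookeeper.ZookeeperStateMachineEnsemble;

@Configuration
@EnableStateMachine
public class DistributedConfig extends StateMachineConfigurerAdapter<String, String> {

    @Override
    public void configure(StateMachineConfigurationConfigurer<String, String> config) throws Exception {
        config
            .withDistributed()
                .ensemble(stateMachineEnsemble());
    }

    @Bean
    public StateMachineEnsemble<String, String> stateMachineEnsemble() throws Exception {
        // all app instances join the ensemble under the same znode path
        return new ZookeeperStateMachineEnsemble<String, String>(curatorClient(), "/myMachine");
    }

    @Bean
    public CuratorFramework curatorClient() throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.builder()
                .defaultData(new byte[0])
                .retryPolicy(new ExponentialBackoffRetry(1000, 3))
                .connectString("localhost:2181") // placeholder ZooKeeper address
                .build();
        client.start();
        return client;
    }
}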
Personally speaking, I would stick with the solution you described (firing the event from a scheduler), since it gives you more control over the process.
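A minimal sketch of that approach, assuming a Spring TaskScheduler and an illustrative TIMEOUT event (neither is from your config):

import org.springframework.statemachine.StateMachine;
import org.springframework.statemachine.service.StateMachineService;

public class TimeoutCheck implements Runnable {

    private final StateMachineService<String, String> stateMachineService;
    private final String machineId;

    public TimeoutCheck(StateMachineService<String, String> stateMachineService, String machineId) {
        this.stateMachineService = stateMachineService;
        this.machineId = machineId;
    }

    @Override
    public void run() {
        // acquire rehydrates the machine from the JPA repository, so the state is current
        StateMachine<String, String> machine = stateMachineService.acquireStateMachine(machineId);
        try {
            if (machine.getState() != null && "S1".equals(machine.getState().getId())) {
                machine.sendEvent("TIMEOUT"); // external transition S1 -> S2
            }
        } finally {
            stateMachineService.releaseStateMachine(machineId);
        }
    }
}

// scheduled when S1 is entered, e.g.:
// taskScheduler.schedule(new TimeoutCheck(stateMachineService, machineId), Instant.now().plusSeconds(60));

Because the check re-reads the persisted state before firing, a state change made by another cloud instance in the meantime simply makes the event a no-op.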

Related

Codename One and Java synchronized keyword

I suspected that a bug in one of my Codename One projects was caused by concurrent executions of the same listener (if the user taps a button very quickly several times, the listener is invoked again before the previous execution has ended)... I added a lock variable to the code to avoid multiple executions at the same time, and this solved the bug.
This is the first time I've had this kind of problem. Reading on the web, it's suggested to use the synchronized Java keyword (however, I'm not sure whether it is useful in this case).
My question is whether the synchronized Java keyword is supported by Codename One.
synchronized works fine in Codename One, but if you used an action listener it's unlikely that this is what solved the issue, unless we have a huge unimaginable bug.
All events, paints, lifecycle methods, etc. are invoked on the EDT. It's a single thread, so two clicks on the button will be handled on that one thread, and synchronized would be meaningless. The EDT is used from the touch-screen interaction all the way down to the event on the component itself, and you can verify that through the isEDT() method.
A more likely scenario is that one of the action listeners on the button uses invokeAndBlock(), which can trigger weird side effects in the event dispatch chain. invokeAndBlock() is used internally by the AndWait methods, dialogs, etc.
Using synchronized will prevent concurrent execution of the method, but it will essentially queue up the requests that are made by forcing threads to wait for any current execution.
When handling this scenario, you might instead want to debounce the button clicks by preventing user interaction for some period after the button is first pressed, or for the duration of the resulting computation, by disabling the button and re-enabling it afterwards, as sketched below.
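A minimal sketch of that debounce, assuming the real work lives in a doWork() placeholder:

Button submit = new Button("Submit");
submit.addActionListener(evt -> {
    submit.setEnabled(false); // debounce: ignore further taps while the work runs
    try {
        doWork(); // placeholder for the listener's actual logic
    } finally {
        submit.setEnabled(true); // accept taps again once done
    }
});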

Apache Flink: How to make some action after the job is finished?

I'm trying to perform an action after the Flink job has finished (make a change in a DB). I want to do it in the same Flink application, with no luck so far.
I found that there is a JobStatusListener that the ExecutionGraph notifies about state changes, but I cannot find how to get hold of this ExecutionGraph to register my listener.
I've tried to completely replace ExecutionGraph in my project (yes, a bad approach, but...), but since it lives in a runtime library it is not called at all in distributed mode, only in a local run.
In short, my Flink application looks like this (sketched; MyRichOutputFormat stands in for my own implementation):

dataSource.output(new MyRichOutputFormat()); // extends RichOutputFormat
ExecutionEnvironment.getExecutionEnvironment().execute();
Can anybody please help?

Where is the state stored by default if I do not configure a StateBackend?

In my program I have enabled checkpointing,
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(5000);
but I haven't configured any StateBackend.
Where is the checkpointed state stored? Can I somehow inspect this data?
The default state backend keeps the working state on the heap of each task manager and backs it up to the job manager heap. This is the so-called MemoryStateBackend.
There's no API for directly accessing the data stored in the state backend. You can simulate a task manager failure and observe that the state is restored. If you wish to externalize the state, you can instead trigger a savepoint, though there is no tooling for directly inspecting these savepoints.
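For reference, configuring it explicitly is equivalent to the default for the Flink versions discussed here (a sketch):

import org.apache.flink.runtime.state.memory.MemoryStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(5000);
// equivalent to the default: working state on the task manager heaps,
// snapshots backed up to the job manager heap
env.setStateBackend(new MemoryStateBackend());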
This is not an answer on its own, but a small addition to the correct answer above. (I can't write comments because of my reputation.)
If you use a Flink version earlier than v1.5, the default state backend will be MemoryStateBackend with asynchronous snapshots set to false. So in your case checkpoints will be saved synchronously every 5 seconds (your pipeline will block every 5 seconds while the checkpoint is saved).
To avoid this, use the explicit constructor:
env.setStateBackend(new MemoryStateBackend(maxStateSize, true)); // the second argument enables asynchronous snapshots
Since Flink v1.5.0, MemoryStateBackend uses asynchronous snapshots by default.
For more information, see the Flink v1.4 docs.

How does Apache Flink restore state from checkpoint/savepoint?

I need to know how Apache Flink restores its state from a checkpoint, because I can't see any difference between the start time and the time the first event is seen in an operator when running a fresh job versus restoring from a savepoint.
Is the state loaded lazily from the checkpoint/savepoint?
The keyed state interfaces are designed to make this distinction transparent. As Dawid mentioned, the state is loaded during job start. Note that what it means to load the state depends on which state backend is being used.
In the case of operator state, the CheckpointedFunction interface has this method:
void initializeState(FunctionInitializationContext context) throws Exception;
where the context has an isRestored() method that lets you know if you are recovering from a failure. See the docs on managed operator state for more details, including an example.
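A condensed sketch along the lines of the docs example (the BufferingSink name and the "buffered-elements" state name are illustrative):

import java.util.ArrayList;
import java.util.List;
import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.runtime.state.FunctionInitializationContext;
import org.apache.flink.runtime.state.FunctionSnapshotContext;
import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;
import org.apache.flink.streaming.api.functions.sink.SinkFunction;

public class BufferingSink implements SinkFunction<String>, CheckpointedFunction {

    private transient ListState<String> checkpointedState;
    private final List<String> buffer = new ArrayList<>();

    @Override
    public void invoke(String value) {
        buffer.add(value);
    }

    @Override
    public void initializeState(FunctionInitializationContext context) throws Exception {
        ListStateDescriptor<String> descriptor =
                new ListStateDescriptor<>("buffered-elements", String.class);
        checkpointedState = context.getOperatorStateStore().getListState(descriptor);
        if (context.isRestored()) {
            // recovering from a checkpoint/savepoint: reload previously buffered elements
            for (String element : checkpointedState.get()) {
                buffer.add(element);
            }
        }
    }

    @Override
    public void snapshotState(FunctionSnapshotContext context) throws Exception {
        checkpointedState.clear();
        for (String element : buffer) {
            checkpointedState.add(element);
        }
    }
}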

Under what circumstances (if any) can I continue to run "out of date" GWT clients when I update my GAEJ version?

Following on from this question:
GWT detect GAE version changes and reload
I would like to further clarify some things.
I have an enterprise app (GWT 2.4 & GAEJ 1.6.4, using GWT-RPC) that my users typically run all day in their browsers; indeed, some don't bother refreshing the browser from day to day. I make new releases on a pretty regular basis, so I am trying to streamline the process for minimal impact on my users. Not all releases concern all users, so I'd like to minimize the number of restarts.
I was hoping it might be possible to categorize my releases as follows:
1) releases that will cause an IncompatibleRemoteServiceException to be thrown, and
2) those that don't, i.e. that only affect the server, or the client, but not the RPC interface.
Then I could make lots of changes to the client and server without affecting the interface between the two. As long as I don't modify the RPC interface, presumably I can change server code and/or client code and the exception won't be thrown? Right? Or will any redeployment of GAE cause an old client to get an IncompatibleRemoteServiceException?
If I were able to do that, I could batch up interface-busting changes into fairly infrequent releases and notify my users that a restart will be required.
Many thanks for any help.
I needed an answer pretty quickly, so I thought I'd just do some good old-fashioned testing to see what's possible. Hopefully this will be useful for others with production systems using GWT-RPC.
The goal is to be able to release updates/fixes without requiring all connected browsers to refresh. It turns out there is quite a lot you can do.
So, after my testing, here's what you can and can't do:
No problem:
- add a new call to a RemoteService
- update some code on the server (e.g. a simple bug fix) and redeploy
- update some client (GWT) code and redeploy (of course, anyone wanting the new client functionality will have to refresh the browser, but other users are unaffected)
Limited problems:
- add a parameter to an existing RemoteService method. This one is interesting: that particular call will throw IncompatibleRemoteServiceException (of course), but all other calls to the same RemoteService or to other RemoteServices (Impls) are unaffected.
- add a new type (as a parameter) to any method within a RemoteService. This is the most interesting one, and is what led me to do this testing. It renders the whole RemoteService out of date for existing clients, with IncompatibleRemoteServiceException, although you can still use the other RemoteServices. I need to do more testing here to fully understand it; perhaps someone else knows more?
So, if you know what you're doing, you can do quite a lot without having to bother your users with refreshes or release announcements.
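For the releases that do break compatibility, a stale client can catch the exception and prompt a reload (a sketch; myServiceAsync, doSomething, and Result are hypothetical names):

import com.google.gwt.user.client.Window;
import com.google.gwt.user.client.rpc.AsyncCallback;
import com.google.gwt.user.client.rpc.IncompatibleRemoteServiceException;

myServiceAsync.doSomething(arg, new AsyncCallback<Result>() {
    @Override
    public void onSuccess(Result result) {
        // normal handling
    }

    @Override
    public void onFailure(Throwable caught) {
        if (caught instanceof IncompatibleRemoteServiceException) {
            // the deployed RPC interface no longer matches this client
            Window.alert("A new version has been deployed; please refresh your browser.");
            Window.Location.reload();
        }
    }
});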
