How do you transfer a timer's value from one room to another in GameMaker?
I have a working timer system that counts up, but when I change rooms the timer count resets. Why does the timer count reset?
The object that the timer is attached to is obviously not set to persistent (which would let it carry over between rooms with all of its data).
To enable persistence for an object:
If the timer lives on a controller object, it should be easy to simply mark that object as persistent. If it is part of a lesser object, I recommend moving the timer into a separate controller object (set to persistent, of course) so it survives room changes.
We have a topology that uses states (ValueState and ListState) with TTL (StateTtlConfig) because we cannot use Timers (we would generate hundreds of millions of timers per day, and that does not scale: a savepoint/checkpoint would take hours to complete and might even get stuck while running).
However, we need to update the value of the TTL at runtime depending on the type of some incoming events and other logic. Is it alright to recreate a new state with a new StateTtlConfig (and updated TTL time) and copy the values from the "old" state to the "new" one in the processElement1() and processElement2() methods of a CoProcessFunction (instead of once in open() like we usually do)?
I guess the "old" state would be garbage collected (?).
Would this solution scale? Be performant? Generate any issues? Anything bad?
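Roughly, the idea being asked about would look like this inside processElement1() (a fragment for illustration only; the state names, the String value type and newTtlMinutes are assumptions, and oldState is assumed to have been obtained earlier, e.g. in open()):

// build a descriptor for the "new" state with a TTL derived from the event
ValueStateDescriptor<String> newDescriptor =
        new ValueStateDescriptor<>("session-state-ttl-" + newTtlMinutes, String.class);
newDescriptor.enableTimeToLive(
        StateTtlConfig.newBuilder(Time.minutes(newTtlMinutes)).build());
ValueState<String> newState = getRuntimeContext().getState(newDescriptor);

// copy the value from the "old" state into the new one and stop using the old one
String current = oldState.value();
if (current != null) {
    newState.update(current);
    oldState.clear();
}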
I think your approach with state re-creation at runtime can work to some extent, but it is brittle. The problem I can see is that the old state's meta information can linger somewhere, depending on the backend implementation.
For Heap (FS) backend, eventually the checkpoint/savepoint will have no records for the expired old state but the meta info can linger in memory while the job is running. It will go away if the job is restarted.
For RocksDB, the column family of the old state can linger. Moreover, the background cleanup runs only during compaction. If the table is too small, like the part which is in memory, this part (maybe even a bit on disk) will linger. It will go away after restart if cleanup on full snapshot is active (not for incremental checkpoints).
All in all, it depends on how often you have to create the new state and restart your job from savepoint/checkpoint.
I created a ticket to document what can be changed in the TTL config and when, so check the issue for some details.
I guess the "old" state would be garbage collected (?).
From the Flink documentation on Cleanup of Expired State:
By default, expired values are explicitly removed on read, such as
ValueState#value, and periodically garbage collected in the background
if supported by the configured state backend. Background cleanup can
be disabled in the StateTtlConfig:
import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.time.Time;

StateTtlConfig ttlConfig = StateTtlConfig
    .newBuilder(Time.seconds(1))
    .disableCleanupInBackground()
    .build();
or execute the cleanup when taking a full snapshot:
import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.time.Time;

StateTtlConfig ttlConfig = StateTtlConfig
    .newBuilder(Time.seconds(1))
    .cleanupFullSnapshot()
    .build();
You can change the TTL at any time according to the documentation. However, you have to restart the query (it is not applied at run-time):
For existing jobs, this cleanup strategy can be activated or
deactivated anytime in StateTtlConfig, e.g. after restart from
savepoint.
But why don't you store the timers in RocksDB, as David suggested in the referenced answer?
Description:
Currently I am working on using Flink with an IoT setup. Essentially, devices send data such as (device_id, device_type, event_timestamp, etc.), and I don't have any control over when the messages get sent. I then key the stream by device_id and device_type to perform aggregations. I would like to use event time, given that it ensures the timers which are set trigger deterministically in the event of a failure. However, given that this isn't always a high-throughput stream, a window could be opened for a 10-minute aggregation period but not receive its next point until approximately 40 minutes later. Although the aggregation would eventually be completed, it would output my desired result extremely late.
So my workaround for this is to create an additional external source that does nothing other than pump out fake messages. By having these fake messages pumped out in alignment with my 10-minute aggregation period, even if a device hasn't sent any data, the event-time windows would have something to force them closed. The critical part here is to make sure all parallel instances / operators have access to this fake message, because I need to close all the windows with this single fake message. I was thinking that Broadcast state might be the most appropriate way to accomplish this goal given: "Broadcast state is replicated across all parallel instances of a function, and might typically be used where you have two streams, a regular data stream alongside a control stream that serves rules, patterns, or other configuration messages." Quote Source
Questions:
Is broadcast state the best method for ensuring all parallel instances (e.g. windows) receive my fake messages?
Once the operators have access to this fake message via the broadcast state can this fake message then be used to advance the event time watermark?
You can make this work with broadcast state, along the lines you propose, but I'm not convinced it's the best solution.
In an ideal world I'd suggest you arrange for the devices to send occasional keepalive messages, but assuming that's not possible, I think a custom Trigger would work well here. You can extend the EventTimeTrigger so that in addition to the event time timer it creates via
ctx.registerEventTimeTimer(window.maxTimestamp());
you also create a processing time timer, as a fallback, and you FIRE the window if the window still exists when that processing time timer fires.
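A rough sketch of such a trigger, written directly against the Trigger base class rather than literally subclassing EventTimeTrigger (the class name and the timeoutMs parameter are illustrative assumptions, not an official Flink trigger):

import org.apache.flink.streaming.api.windowing.triggers.Trigger;
import org.apache.flink.streaming.api.windowing.triggers.TriggerResult;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;

// Behaves like EventTimeTrigger, but also registers a processing-time timer as a
// fallback so that a window still waiting for the watermark is eventually fired.
public class EventTimeTriggerWithTimeout extends Trigger<Object, TimeWindow> {

    private final long timeoutMs;

    public EventTimeTriggerWithTimeout(long timeoutMs) {
        this.timeoutMs = timeoutMs;
    }

    @Override
    public TriggerResult onElement(Object element, long timestamp, TimeWindow window, TriggerContext ctx) {
        if (window.maxTimestamp() <= ctx.getCurrentWatermark()) {
            return TriggerResult.FIRE; // the watermark is already past the end of the window
        }
        ctx.registerEventTimeTimer(window.maxTimestamp());
        // fallback: fire on processing time if the watermark never arrives
        // (a production version would track this timer in partitioned state
        //  so it is only registered once per window)
        ctx.registerProcessingTimeTimer(ctx.getCurrentProcessingTime() + timeoutMs);
        return TriggerResult.CONTINUE;
    }

    @Override
    public TriggerResult onEventTime(long time, TimeWindow window, TriggerContext ctx) {
        return time == window.maxTimestamp() ? TriggerResult.FIRE : TriggerResult.CONTINUE;
    }

    @Override
    public TriggerResult onProcessingTime(long time, TimeWindow window, TriggerContext ctx) {
        // the window still exists, so force it to fire
        return TriggerResult.FIRE;
    }

    @Override
    public void clear(TimeWindow window, TriggerContext ctx) {
        ctx.deleteEventTimeTimer(window.maxTimestamp());
    }
}

You would then attach it to the window with something like .window(...).trigger(new EventTimeTriggerWithTimeout(10 * 60 * 1000)).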
I'm recommending this approach because it's simpler and more directly addresses the specific need. With the broadcast state approach you'll have to introduce a source for these messages, add a broadcast state descriptor and stream, add special fake watermarks for the non-broadcast stream (set to Watermark.MAX_WATERMARK), connect the broadcast and non-broadcast streams and implement a BroadcastProcessFunction (that probably doesn't really do anything), etc. It's a lot of moving parts spread across several different operators.
I am using BroadcastState to perform streaming computation in Flink. I have defined a class extending KeyedBroadcastProcessFunction for my job. Say I have a stream A which is keyed by (user_id, location), and a stream B which is broadcast to all executors to process elements in A using my defined class. I understand I can register a timer in processBroadcastElement or processElement in this class so that when it times out, I can delete the associated state for a specific key group by calling state.clear(). I wonder, after that, does this key group still exist?
For example, in stream A, a new message comes with (user_id=1, location='usa') and we have such key group and its associated states generated. After that if another message with (user_id=1, location='usa') comes, it will trigger processElement() and emit result.
Say after 24 hours I'm no longer interested in this key group (user_id=1, location='usa'). I can register a timer to clear the associated state, but I have no control over the key group itself. As a result, after 24 hours, when another message with (user_id=1, location='usa') comes, since this key group still exists, processElement() will still be invoked. As the job runs, although the associated states will be cleared after 24 hours, will key groups accumulate, or should that not be a concern for memory usage?
Relevant blogs: https://www.da-platform.com/blog/a-practical-guide-to-broadcast-state-in-apache-flink
Flink's keyed state is organized as a distributed (or sharded) key-value store, where the keys can be simple things, like integers and strings, or composites, like (user_id=1, location='usa'). Key groups are something different than composite keys. A key group is a runtime construct that was introduced in Flink 1.2 (see FLINK-3755) to permit efficient rescaling of key-value state. A key group is a subset of the key space, and is checkpointed as an independent unit. At runtime, all of the keys in the same key group are partitioned together -- each parallel subtask has the key-value state for one or more complete key groups. This design doc gives more details. As a user working with the DataStream API, key groups are an implementation detail, and not something you work with directly.
As for timers in a KeyedBroadcastProcessFunction, they can be registered in the processElement or onTimer method, but not in the processBroadcastElement method. This is because timers are always associated with a key, and there is no key associated with a broadcast element. You can, however, manipulate any or all of the keyed state during your processBroadcastElement method by using the applyToKeyedState method on the KeyedBroadcastProcessFunction.Context object. See the docs for more details.
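For example, a broadcast element could clear (or rewrite) a given piece of keyed state for every key roughly like this (a fragment inside processBroadcastElement(); the descriptor name "myState", its Long type and the String key type are assumptions):

ctx.applyToKeyedState(
        new ValueStateDescriptor<>("myState", Long.class),
        new KeyedStateFunction<String, ValueState<Long>>() {
            @Override
            public void process(String key, ValueState<Long> state) throws Exception {
                state.clear(); // applied to the state of every key on this subtask
            }
        });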
Once you call state.clear(), the state entry for that key is deleted. New stream events for that key may, of course, arrive after the state has been cleared, and you are able to once again store value state for that key, if you wish. In order to avoid unbounded memory usage due to keeping state for no-longer-relevant keys, you do need to be careful. You might want some logic like this to expire the state 24 hours after each time it is created:
processElement:
    if state.value() is null, register timer
    state.update(...)
onTimer:
    state.clear()
Or you might need more complex logic that extends the lifetime of the state whenever it is updated or accessed.
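As a concrete illustration of that pattern, here is a sketch -- shown in a plain KeyedProcessFunction, but the same idea applies on the keyed side of a KeyedBroadcastProcessFunction; the per-key counting and all names are assumptions, not from the question:

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

public class ExpireAfter24Hours extends KeyedProcessFunction<String, String, Long> {

    private static final long TTL_MS = 24 * 60 * 60 * 1000L; // 24 hours

    private transient ValueState<Long> countState;

    @Override
    public void open(Configuration parameters) {
        countState = getRuntimeContext().getState(
                new ValueStateDescriptor<>("count", Long.class));
    }

    @Override
    public void processElement(String value, Context ctx, Collector<Long> out) throws Exception {
        Long count = countState.value();
        if (count == null) {
            // first event for this key: schedule cleanup 24 hours from now
            ctx.timerService().registerProcessingTimeTimer(
                    ctx.timerService().currentProcessingTime() + TTL_MS);
            count = 0L;
        }
        count += 1;
        countState.update(count);
        out.collect(count);
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<Long> out) throws Exception {
        countState.clear(); // drop the state for this key once it has aged out
    }
}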
Another option would be to use the state time-to-live feature.
Update:
Whenever you are in a processElement or onTimer method of any of the ProcessFunction types, there is a specific key implicitly in context, and anything done to keyed state (such as .update() or .clear()) will only affect the state for that one key.
Broadcast state works differently. Broadcast state is always MapState, and is replicated into all of the parallel subtasks. Broadcast state is keyless -- if you read broadcast state during the processElement method you will see the same value for the broadcast state regardless of what key is in context during that call.
Only in the processBroadcastElement method of a KeyedBroadcastProcessFunction can you modify (or clear) broadcast state, and it's important that whatever modifications (or deletions) occur be done in the same way in all of the parallel instances. This is designed this way so as to guarantee that every parallel instance will have the same contents in broadcast state. Ignoring this rule will lead to inconsistencies in the state, which can be very difficult to debug. See the docs for more info.
So yes, if you call .clear() on the broadcast state, then all of the broadcast state for all keys will be removed. Or you might remove a specific item from the broadcast state (remember, broadcast state is MapState), in which case that specific item will be removed for all keys.
There are several examples of working with broadcast state in the Flink training site. See
https://training.da-platform.com/exercises/ongoingRides.html
https://training.da-platform.com/exercises/nearestTaxi.html
https://training.da-platform.com/exercises/taxiQuery.html
I use an apply function to get a unique count, but I want to collect the count whenever the number of unique values changes.
Code:
hashMap
  .keyBy(x => x.hash)
  .timeWindow(Time.minutes(15))
  .apply(new DataWindow())
But the apply function is only triggered when the time window ends. How can I get the value more frequently without using a sliding window?
I would recommend using a ProcessFunction rather than a window. You will want to use key-partitioned state to hold whatever data structure you decide to use to track the unique values. You can use either an event-time timer or a processing-time timer to clear the state every 15 minutes, depending on what kind of time is appropriate to your application.
But if you want to stick with windowing, you could implement a custom Trigger. In this case you would need to keep your state in the partitioned state available on the TriggerContext. See the documentation on windows and triggers for more info.
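To make the ProcessFunction suggestion concrete, here is a sketch in Java (the question's snippet is Scala); the element type, the use of processing time, and all names are illustrative assumptions. It would replace the timeWindow(...).apply(...) with keyBy(...).process(new UniqueCountFunction()).

import org.apache.flink.api.common.state.MapState;
import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// Tracks the distinct values seen per key, emits the count whenever it changes,
// and resets the state every 15 minutes of processing time.
public class UniqueCountFunction extends KeyedProcessFunction<String, String, Long> {

    private static final long PERIOD_MS = 15 * 60 * 1000L;

    private transient MapState<String, Boolean> seen; // distinct values seen so far
    private transient ValueState<Long> count;         // current distinct count

    @Override
    public void open(Configuration parameters) {
        seen = getRuntimeContext().getMapState(
                new MapStateDescriptor<>("seen", String.class, Boolean.class));
        count = getRuntimeContext().getState(
                new ValueStateDescriptor<>("count", Long.class));
    }

    @Override
    public void processElement(String value, Context ctx, Collector<Long> out) throws Exception {
        Long current = count.value();
        if (current == null) {
            // first element for this key in the current period: schedule the reset
            ctx.timerService().registerProcessingTimeTimer(
                    ctx.timerService().currentProcessingTime() + PERIOD_MS);
            current = 0L;
        }
        if (!seen.contains(value)) {
            seen.put(value, true);
            current += 1;
            count.update(current);
            out.collect(current); // emit only when the distinct count changes
        }
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<Long> out) throws Exception {
        // 15 minutes are up: start over for this key
        seen.clear();
        count.clear();
    }
}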
I have a specific question regarding timer boundary events on a user task in Activiti/Camunda:
When starting the process I set the timer duration with a process variable and use expressions in the boundary definition to resolve the variable. The boundary event is defined on a user task.
<bpmn2:timerEventDefinition id="_TimerEventDefinition_11">
  <bpmn2:timeDuration xsi:type="bpmn2:tFormalExpression">${hurry}</bpmn2:timeDuration>
</bpmn2:timerEventDefinition>
In some cases, when the timer is already running, it can occur that the deadline (dueDate) should be extended because the assignee has requested more time. For this purpose I want to change the value of the process variable defining the deadline.
As it happens, the variable is already resolved at process start and set on the boundary event.
Any further changes to the variable do not affect the dueDate of the boundary timer, because it is stored in the database and is not updated when the value of the variable changes.
I know how to update the dueDate of the job element via the Java API, but I want to provide a generic approach like setting it by changing the value of the variable.
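For reference, the direct Java API route mentioned above looks roughly like this (a sketch against the Camunda 7 API; the lookup of the timer job by execution id and all variable names are assumptions):

import java.util.Date;
import org.camunda.bpm.engine.ManagementService;
import org.camunda.bpm.engine.runtime.Job;

class TimerExtension {
    // Sketch: find the timer job attached to the execution and move its due date.
    static void extendTimer(ManagementService managementService, String executionId, Date newDueDate) {
        Job timerJob = managementService.createJobQuery()
                .executionId(executionId)
                .timers()
                .singleResult();
        managementService.setJobDuedate(timerJob.getId(), newDueDate);
    }
}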
The most common use case for extending the deadline will be when the boundary timer is already running.
Any ideas how to cope with this problem?
Any tips are very appreciated.
Cheers, Chris
After some time of thinking I came up with a workaround like this:
I start the process with two variables. "hurry" is evaluated for the boundary timer. And "extendDeadline" is initialized with false. If the timer triggers and the process advances to the exclusive gateway, the value of "extendDeadline" is evaluated.
If a user has changed the value of "extendDeadline" to true while the timer was running, the process returns to the user task again, where the boundary timer is set to the value of "hurry".
If the "extendDeadline" is still set to false, the process can proceed.
If the timer is running you can change the dueDate of the timer by executing a signal. If an assignee requests more time, set a new value of hurry and execute the signal. The old timer will be cancelled and a new timer will be created with the new due date.
// update the variable that the timer's duration expression resolves
runtimeService.setVariable(execution.getId(), "hurry", newDueDate);
// deliver the signal to the execution; the timer is then re-created from "hurry"
runtimeService.signalEventReceived(signalName, execution.getId());
The solution is to have two outgoing sequence flows: one from the boundary timer on the task, and another one from the task itself, as shown in the diagram added by @theFriedC (see the following image as well).
Then you can use an exclusive gateway on the second sequence flow and reroute it back to the same task with a new timer value.