How to detect a gap of a certain duration in an Akka Stream? - akka-stream

I don't think this is currently possible (Akka Streams 2.5.21). I'm interested in simple workarounds, or in hearing whether this could become part of the Akka Streams library itself.
My current work-around is:
/*
 * Implement a flow that does the 'action' when there is a gap of minimum 'duration' in the stream.
 */
def onGap[T](duration: FiniteDuration, action: => Unit): Flow[T, T, NotUsed] = {
  Flow[T]
    .map(Some(_))
    .keepAlive(duration, () => { action; None })
    .collect { case Some(x) => x }
}
What I'd like is something akin to .keepAlive, but firing only once per gap and not injecting an entry into the stream.
Other approaches I considered:
.idleTimeout(duration) with Supervision.Decider, but this would need a separate ActorSystem to be created.
A custom GraphStage, but those always feel complex.
My use case for this is to detect (well, guess) that consumption of a Kinesis stream has caught up with the "current state", i.e. that all historic values have been seen.
Would there be a better way?
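On the GraphStage option: below is a minimal sketch (untested, written against the Akka Streams 2.5.x stage API, with the action passed as a plain function rather than by-name) of a pass-through stage that restarts a timer on every element and runs the action once per gap, without injecting anything into the stream.

import akka.NotUsed
import akka.stream.{Attributes, FlowShape, Inlet, Outlet}
import akka.stream.scaladsl.Flow
import akka.stream.stage.{GraphStage, GraphStageLogic, InHandler, OutHandler, TimerGraphStageLogic}
import scala.concurrent.duration.FiniteDuration

class OnGap[T](duration: FiniteDuration, action: () => Unit) extends GraphStage[FlowShape[T, T]] {
  private val in  = Inlet[T]("OnGap.in")
  private val out = Outlet[T]("OnGap.out")
  override val shape: FlowShape[T, T] = FlowShape(in, out)

  override def createLogic(attrs: Attributes): GraphStageLogic =
    new TimerGraphStageLogic(shape) with InHandler with OutHandler {
      override def preStart(): Unit = scheduleOnce("gap", duration)

      override def onPush(): Unit = {
        push(out, grab(in))
        scheduleOnce("gap", duration) // restart the gap timer on every element
      }

      override def onPull(): Unit = pull(in)

      // Fires only when no element arrived for 'duration': once per gap, nothing injected.
      override protected def onTimer(timerKey: Any): Unit = action()

      setHandlers(in, out, this)
    }
}

def onGapStage[T](duration: FiniteDuration, action: () => Unit): Flow[T, T, NotUsed] =
  Flow.fromGraph(new OnGap[T](duration, action))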

Related

Persist Apache Flink window

I'm trying to use Flink to consume bounded data from a message queue in a streaming fashion. The data will be in the following format:
{"id":-1,"name":"Start"}
{"id":1,"name":"Foo 1"}
{"id":2,"name":"Foo 2"}
{"id":3,"name":"Foo 3"}
{"id":4,"name":"Foo 4"}
{"id":5,"name":"Foo 5"}
...
{"id":-2,"name":"End"}
The start and end of a batch can be determined from the event id. I want to receive such batches and store the latest batch (overwriting the previous one) on disk or in memory. I can write a custom window trigger to extract the events using the start and end flags, as shown below:
DataStream<Foo> fooDataStream = ...

AllWindowedStream<Foo, GlobalWindow> fooWindow = fooDataStream.windowAll(GlobalWindows.create())
    .trigger(new CustomTrigger<>())
    .evictor(new Evictor<Foo, GlobalWindow>() {
        @Override
        public void evictBefore(Iterable<TimestampedValue<Foo>> elements, int size, GlobalWindow window, EvictorContext evictorContext) {
            for (Iterator<TimestampedValue<Foo>> iterator = elements.iterator(); iterator.hasNext(); ) {
                TimestampedValue<Foo> foo = iterator.next();
                if (foo.getValue().getId() < 0) {
                    iterator.remove();
                }
            }
        }

        @Override
        public void evictAfter(Iterable<TimestampedValue<Foo>> elements, int size, GlobalWindow window, EvictorContext evictorContext) {
        }
    });
But how can I persist the output of the latest window? One way would be to use a ProcessAllWindowFunction to receive all the events and write them to disk manually, but that feels like a hack. I've also looked into the Table API with a Flink CEP Pattern (like this question), but couldn't find a way to clear the Table after each batch to discard the events of the previous batch.
There are a couple of things getting in the way of what you want:
(1) Flink's window operators produce append streams, rather than update streams. They're not designed to update previously emitted results. CEP also doesn't produce update streams.
(2) Flink's file system abstraction does not support overwriting files. This is because object stores, like S3, don't support this operation very well.
I think your options are:
(1) Rework your job so that it produces an update (changelog) stream. You can do this with toChangelogStream, or by using Table/SQL operations that create update streams, such as GROUP BY (when it's used without a time window). On top of this, you'll need to choose a sink that supports retractions/updates, such as a database. (A rough sketch follows after this list.)
(2) Stick to producing an append stream and use something like the FileSink to write the results to a series of rolling files. Then do some scripting outside of Flink to get what you want out of this.
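A rough sketch of what option (1) could look like in Scala, assuming Flink 1.13+ and the Table API bridge; Foo, the view name, the LAST_VALUE aggregation, and the print() sink are illustrative stand-ins rather than a drop-in solution.

import org.apache.flink.streaming.api.scala._
import org.apache.flink.table.api.bridge.scala._

case class Foo(id: Int, name: String)

val env  = StreamExecutionEnvironment.getExecutionEnvironment
val tEnv = StreamTableEnvironment.create(env)

// Stand-in for the decoded batches from the message queue.
val foos: DataStream[Foo] = env.fromElements(Foo(1, "Foo 1"), Foo(1, "Foo 1 updated"))
tEnv.createTemporaryView("foos", foos)

// A non-windowed GROUP BY yields an update stream: each new element for an id
// retracts and replaces the row previously emitted for that id.
val latest = tEnv.sqlQuery("SELECT id, LAST_VALUE(name) AS name FROM foos GROUP BY id")

// Convert to a changelog stream; a real job would feed an upsert-capable sink instead of print().
tEnv.toChangelogStream(latest).print()
env.execute()

The essential point is the non-windowed GROUP BY plus a retract/upsert-capable sink, not the particular aggregation used here.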

How might I implement a map of maps in Flink keyed state that supports fast insert, lookup and iteration of nested maps?

I'd like to write a Flink streaming operator that maintains say 1500-2000 maps per key, with each map containing perhaps 100,000s of elements of ~100B. Most records will trigger inserts and reads, but I’d also like to support occasional fast iteration of entire nested maps.
I've written a KeyedProcessFunction that creates 1500 RocksDB-backed MapStates per key, and tested it by generating a stream of records with a single distinct key, but I find performance is poor. Just initialising all of them takes on the order of several minutes, and once data begins to flow, async incremental checkpoints frequently fail due to timeout. Is this a reasonable approach? If not, what alternatives should I consider?
Thanks!
Functionally my code is along the lines of:
import org.apache.flink.api.common.state.{MapState, MapStateDescriptor}
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.functions.KeyedProcessFunction
import org.apache.flink.streaming.api.scala._
import org.apache.flink.util.Collector

import scala.util.Random

val stream = env.fromCollection(new Iterator[(Int, String)] with Serializable {
  override def hasNext: Boolean = true
  override def next(): (Int, String) = (1, randomString())
})

stream
  .keyBy(_._1)
  .process(new KPF())
  .writeUsingOutputFormat(...)

class KPF extends KeyedProcessFunction[Int, (Int, String), String] {
  private val random = new Random()
  var states: Array[MapState[Int, String]] = _

  override def processElement(
      value: (Int, String),
      ctx: KeyedProcessFunction[Int, (Int, String), String]#Context,
      out: Collector[String]
  ): Unit = {
    if (states(0).isEmpty) {
      // insert 0-300,000 random strings <= 100B
    }
    val state = states(random.nextInt(1500))
    // Read from R random keys in state
    // Write to W random keys in state
    // With probability 0.01 iterate entire contents of state
    if (random.nextInt(100) == 0) {
      state.iterator().forEachRemaining { _ =>
        // do something trivial
      }
    }
  }

  override def open(parameters: Configuration): Unit = {
    states = (0 until 1500).map { stateId =>
      getRuntimeContext.getMapState(
        new MapStateDescriptor[Int, String](stateId.toString, classOf[Int], classOf[String]))
    }.toArray
  }
}
There's nothing in what you've described that's an obvious explanation for poor performance. You are already doing the most important thing, which is to use MapState<K, V> rather than ValueState<Map<K, V>>. This way each key/value pair in the map is a separate RocksDB object, rather than the entire Map being one RocksDB object that has to go through ser/de for every access/update for any of its entries.
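As a concrete illustration of that distinction (the descriptor names below are made up, not from the question):

import org.apache.flink.api.common.state.{MapStateDescriptor, ValueStateDescriptor}

// MapState: each map entry is its own RocksDB key/value, so a single read or write
// only (de)serializes that one entry.
val perEntry = new MapStateDescriptor[Int, String]("perEntry", classOf[Int], classOf[String])

// ValueState[Map[...]]: the whole map is one RocksDB value, so every access or update
// pays serialization of the entire map.
val wholeMap = new ValueStateDescriptor[Map[Int, String]]("wholeMap", classOf[Map[Int, String]])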
To understand the performance better, the next step might be to enable the RocksDB native metrics, and study those for clues. RocksDB is quite tunable, and better performance may be achievable. E.g., you can tune for your expected mix of read and writes, and if you are trying to access keys that don't exist, then you should enable bloom filters (which are turned off by default).
The RocksDB state backend has to go through ser/de for every state access/update, which is certainly expensive. You should consider whether you can optimize the serializer; some serializers can be 2-5x faster than others. (Some benchmarks.)
Also, you may want to investigate the new spillable heap state backend that is being developed. See https://flink-packages.org/packages/spillable-state-backend-for-flink, https://cwiki.apache.org/confluence/display/FLINK/FLIP-50%3A+Spill-able+Heap+Keyed+State+Backend, and https://issues.apache.org/jira/browse/FLINK-12692. Early benchmarking suggests this state backend is significantly faster than RocksDB, as it keeps its working state as objects on the heap and spills cold objects to disk. (How much this would help probably depends on how often you have to iterate.)
And if you don't need to spill to disk, then the FsStateBackend would be faster still.
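If that route applies, switching backends is a one-liner; a sketch with a placeholder checkpoint path, assuming the working state fits on the heap:

import org.apache.flink.runtime.state.filesystem.FsStateBackend
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment

val env = StreamExecutionEnvironment.getExecutionEnvironment
// Working state stays on the JVM heap; checkpoints are written to the given path.
env.setStateBackend(new FsStateBackend("file:///tmp/flink-checkpoints"))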

How does Flink treat timestamps within iterative loops?

How are timestamps treated within an iterative DataStream loop within Flink?
For example, here is an example of a simple iterative loop within Flink where the feedback loop is of a different type to the input stream:
DataStream<MyInput> inputStream = env.addSource(new MyInputSourceFunction());

IterativeStream.ConnectedIterativeStreams<MyInput, MyFeedback> iterativeStream =
    inputStream.iterate().withFeedbackType(MyFeedback.class);

// define an output tag so we can emit feedback objects via a side output
final OutputTag<MyFeedback> outputTag = new OutputTag<MyFeedback>("feedback-output"){};

// now do some processing
SingleOutputStreamOperator<MyOutput> combinedStreams =
    iterativeStream.process(new CoProcessFunction<MyInput, MyFeedback, MyOutput>() {
        @Override
        public void processElement1(MyInput value, Context ctx, Collector<MyOutput> out) throws Exception {
            // do some processing of the stream of MyInput values
            // emit MyOutput values downstream by calling out.collect()
            out.collect(someInstanceOfMyOutput);
        }

        @Override
        public void processElement2(MyFeedback value, Context ctx, Collector<MyOutput> out) throws Exception {
            // do some more processing on the feedback classes
            // emit feedback items
            ctx.output(outputTag, someInstanceOfMyFeedback);
        }
    });
iterativeStream.closeWith(combinedStreams.getSideOutput(outputTag));
My questions revolve around how Flink uses timestamps within a feedback loop:
Within the ConnectedIterativeStreams, how does Flink treat ordering of the input objects across the streams of regular inputs and feedback objects? If I emit an object into the feedback loop, when will it be seen by the head of the loop with respect to the regular stream of input objects?
How does the behaviour change when using event time processing?
AFAICT, Flink doesn't provide any guarantees on the ordering of input objects. I've run into this when trying to use iterations for a clustering algorithm in Flink, where the centroid updates don't get processed in a timely manner. The only solution I found was to essentially create a single (unioned) stream of the incoming events and the centroid updates, versus using a co-stream.
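A rough sketch of that unioned-stream workaround in Scala; the types and step logic here are trivial stand-ins, not the clustering code.

import org.apache.flink.streaming.api.scala._

// Stand-ins for the real input and feedback types.
case class MyInput(value: Int)
case class MyFeedback(value: Int)

val env = StreamExecutionEnvironment.getExecutionEnvironment

// Wrap both kinds of message in one type so the loop head has a single input channel.
val inputs: DataStream[Either[MyInput, MyFeedback]] =
  env.fromElements(MyInput(1), MyInput(2)).map(x => (Left(x): Either[MyInput, MyFeedback]))

// The second argument is how long the iteration head waits for feedback before terminating.
val outputs: DataStream[String] = inputs.iterate(loop => {
  // Only original inputs generate feedback here, so this toy loop drains and stops.
  val feedback = loop
    .filter(_.isLeft)
    .map(e => (Right(MyFeedback(e.left.get.value * 10)): Either[MyInput, MyFeedback]))
  val output = loop.map(e => s"saw $e")
  (feedback, output)
}, 5000)

outputs.print()
env.execute("unioned-iteration-sketch")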
FYI there's this proposal to address some of the short-comings of iterations.

Akka Streams - Creating a Flow which emits from a source when an input is received

I have a Source that provides elements of type A.
And I have a Flow that receives elements of type B.
What I would like to do, is when the flow receives an input, the next element from the source is emitted as the output of the flow.
The way I do this currently is to connect the source to a Sink.queue. Then for each element in the flow, I map over it, discard the input, and pull the next value from the queue. Once the queue is empty, I complete the flow.
I feel like there ought to be a simpler way that I'm missing; that there is probably some built-in mechanism to allow an input to trigger an element from a source.
For example:
val source = ... // some akka streams source
val queue = source.grouped(limit.toInt).runWith(Sink.queue[Seq[DataFrame]])

Flow[Message]
  .prepend(Source.single(TextMessage.Strict("start")))
  .collect {
    case TextMessage.Strict(text)         => Future.successful(text)
    case TextMessage.Streamed(textStream) => textStream.runFold("")(_ + _).flatMap(Future.successful)
  }
  // swallow the future of the incoming message's text
  .mapAsync(1)(identity)
  // take the next batch
  .mapAsync(1)(_ => queue.pull())
  // swallow the option monad, and add in an end or page-end message
  .collect {
    case Some(batch) if batch.size == limit => batch.toList :+ pageend
    case Some(batch)                        => batch.toList :+ end
    case None                               => List(end)
  }
  // flatten out the frames
  .mapConcat(identity)
end and pageend are just special frames that the ui uses. The key part of the question is around this use of a queue.
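For what it's worth, one possibly simpler alternative to the queue (a sketch with illustrative type parameters, not from the original post) is to zip the incoming elements with the source: zip emits exactly one source element per input and completes when either side completes.

import akka.NotUsed
import akka.stream.scaladsl.{Flow, Source}

// B is the incoming trigger type, A is what the source provides.
def oneFromSourcePerInput[A, B](source: Source[A, NotUsed]): Flow[B, A, NotUsed] =
  Flow[B]
    .zip(source)              // pair each incoming B with the next A from the source
    .map { case (_, a) => a } // drop the trigger, keep the source element

The completion semantics differ slightly from the Sink.queue approach: with zip the flow stops as soon as the source runs out, and any batching (like the grouped(limit) above) has to be applied to the source before zipping.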

Connecting SourceShape/PortOpts from a UniformFanOutShape to sources in Akka Streams

Most examples of fan-out shapes in Akka Streams either merge the fanned-out flows back into a single stream, or immediately connect them to sinks. I want instead to return a sequence of Sources to which I can later apply transformations and connect sinks. An example would look something like this:
def transformedSources(src: Source[Int, NotUsed]): Seq[Source[Int, NotUsed]] = {
  import GraphDSL.Implicits._
  val builder = new GraphDSL.Builder
  val bcast = builder.add(Broadcast[Int](2))
  src.out ~> bcast.in
  val sourceShapes = bcast.out.map { out => SourceShape(out) }
  ??? // How to convert sourceShapes into Sources?
}
Is there a way to achieve this?
P.S.
My real use case is an AmorphousShape that takes X sources and produces X outputs. Ideally I would like to apply the AmorphousShape stage and then continue operating as if nothing had changed.
So if my code now is
sources.map { s => s.via(someStage).runWith(Sink.seq) }
I would like to transform it into
transformSource(sources).map { s => s.via(someStage).runWith(Sink.seq) }
def transformSource(sources: Seq[Source[Int, NotUsed]]): Seq[Source[Int, NotUsed]] =
  ??? // do some magic with an AmorphousShape graph
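For the Broadcast example specifically, one possibility (a sketch, not from the original post) is to pre-materialize the upstream with a BroadcastHub and hand out the resulting Source as many times as needed; every attached consumer then sees every element.

import akka.NotUsed
import akka.stream.Materializer
import akka.stream.scaladsl.{BroadcastHub, Source}

// Runs 'src' once and returns 'n' Sources that each receive every element it emits.
def transformedSources(src: Source[Int, NotUsed], n: Int)(implicit mat: Materializer): Seq[Source[Int, NotUsed]] = {
  val hubSource: Source[Int, NotUsed] =
    src.runWith(BroadcastHub.sink[Int](bufferSize = 16))
  Seq.fill(n)(hubSource)
}

This only covers the fan-out-to-identical-copies case; an AmorphousShape with X inputs and X outputs would still need a custom graph.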
