I have a Flink (v1.13.3) application with an unbounded stream (using Kafka), and one of my streams is very busy. The busy value (which I can see in the UI) also increases over time. When I first start the Flink application:
sum by(task_name) (flink_taskmanager_job_task_busyTimeMsPerSecond{job="Flink", task_name="MyProcessFunction"}) returns 300-450 ms
After five or more hours the same query returns 5-7 seconds.
The process function is quite simple and just uses RocksDB for the state backend:
public class MyObj implements Serializable
{
private Set<String> distinctValues;
public MyObj()
{
this.distinctValues = new HashSet<>();
}
public Set<String> getDistinctValues() {
return distinctValues;
}
public void setDistinctValues(Set<String> values) {
this.distinctValues = values;
}
}
public class MyProcessFunction extends KeyedProcessFunction<String, KafkaRecord, Output>
{
private transient ValueState<MyObj> state;
@Override
public void open(Configuration parameters)
{
ValueStateDescriptor<MyObj> stateDescriptor = new ValueStateDescriptor<>("MyObj",
TypeInformation.of(MyObj.class));
state = getRuntimeContext().getState(stateDescriptor);
}
@Override
public void processElement(KafkaRecord value, Context ctx, Collector<Output> out) throws Exception
{
MyObj stateValue = state.value();
if (stateValue == null)
{
stateValue = new MyObj();
ctx.timerService().registerProcessingTimeTimer(value.getTimestamp() + 600_000L); // 10 minutes
}
stateValue.getDistinctValues().add(value.getValue());
if (stateValue.getDistinctValues().size() >= 20)
{
state.clear();
}
else
{
state.update(stateValue);
}
}
@Override
public void onTimer(long timestamp, OnTimerContext ctx, Collector<Output> out)
{
state.clear();
}
}
NOTE: Before implementing the ValueState version I was just using ListState, but with ListState flink_taskmanager_job_task_busyTimeMsPerSecond returned 25-30 seconds:
public class MyProcessFunction extends KeyedProcessFunction<String, KafkaRecord, Output>
{
private transient ListState<String> listState;
@Override
public void open(Configuration parameters)
{
ListStateDescriptor<String> listStateDescriptor = new ListStateDescriptor<>("myobj", TypeInformation.of(String.class));
listState = getRuntimeContext().getListState(listStateDescriptor);
}
@Override
public void processElement(KafkaRecord value, Context ctx, Collector<Output> out) throws Exception
{
List<String> values = IteratorUtils.toList(listState.get().iterator());
if (CollectionUtils.isEmpty(values))
{
ctx.timerService().registerProcessingTimeTimer(value.getTimestamp() + 600_000L); // 10 minutes
}
if (!values.contains(value.getValue()))
{
values.add(value.getValue());
listState.update(values);
}
if (values.size() >= 20)
{
...
}
}
@Override
public void onTimer(long timestamp, OnTimerContext ctx, Collector<Output> out)
{
listState.clear();
}
}
Some slowdown is to be expected once RocksDB reaches the point where the working state no longer fits in memory. However, in this case you should be able to dramatically improve performance by switching from ValueState to MapState.
Currently you are deserializing and re-serializing the entire HashSet for every record. As these HashSets grow over time, performance degrades.
The RocksDB state backend has an optimized implementation of MapState. Each individual key/value entry in the map is stored as a separate RocksDB object, so you can lookup, insert, and update entries without having to do serde on the rest of the map.
ListState is also optimized for RocksDB (it can be appended to without deserializing the list). In general it's best to avoid storing collections in ValueState when using RocksDB, and use ListState or MapState instead wherever possible.
Since the heap-based state backend keeps its working state as objects on the heap, it doesn't have the same issues.
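For illustration, here is a minimal sketch of what the MapState variant could look like, assuming the same KafkaRecord/Output types and the 10-minute processing-time timer from the question (the descriptor name and the boolean placeholder values are arbitrary):
public class MyProcessFunction extends KeyedProcessFunction<String, KafkaRecord, Output>
{
    // one RocksDB entry per distinct value; no serde of the whole set per record
    private transient MapState<String, Boolean> distinctValues;
    @Override
    public void open(Configuration parameters)
    {
        MapStateDescriptor<String, Boolean> descriptor =
            new MapStateDescriptor<>("distinctValues", Types.STRING, Types.BOOLEAN);
        distinctValues = getRuntimeContext().getMapState(descriptor);
    }
    @Override
    public void processElement(KafkaRecord value, Context ctx, Collector<Output> out) throws Exception
    {
        if (distinctValues.isEmpty())
        {
            ctx.timerService().registerProcessingTimeTimer(value.getTimestamp() + 600_000L); // 10 minutes
        }
        distinctValues.put(value.getValue(), Boolean.TRUE);
        // counting still iterates the keys; keeping a separate ValueState<Integer> counter avoids even that
        int size = 0;
        for (String ignored : distinctValues.keys())
        {
            size++;
        }
        if (size >= 20)
        {
            distinctValues.clear();
        }
    }
    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<Output> out)
    {
        distinctValues.clear();
    }
}
With this layout each put touches a single RocksDB entry, so the serialization cost per record no longer grows with the number of distinct values.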
I'm fairly new to Flink and would be grateful for any advice with this issue.
I wrote a job that receives some input events and compares them with some rules before forwarding them on to kafka topics based on whatever rules match. I implemented this using a flatMap and found it worked well, with one downside: I was loading the rules just once, during application startup, by calling an API from my main() method, and passing the result of this API call into the flatMap function. This worked, but it means that if there are any changes to the rules I have to restart the application, so I wanted to improve it.
I found this page in the documentation which seems to be an appropriate solution to the problem. I wrote a custom source to poll my Rules API every few minutes, and then used a BroadcastProcessFunction, with the rules added to the broadcast state in processBroadcastElement and the events processed by processElement.
The solution is working, but with one problem: my first approach using a FlatMap would process the events almost instantly, whereas with the BroadcastProcessFunction each event takes 60 seconds to process, and it seems to be more or less exactly 60 seconds every time with almost no variation. I made no changes to the rule matching logic itself.
I've had a look through the documentation and I can't seem to find a reason for this, so I'd appreciate if anyone more experienced in flink could offer a suggestion as to what might cause this delay.
The job:
public static void main(String[] args) throws Exception {
// set up the streaming execution environment
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setStreamTimeCharacteristic(TimeCharacteristic.IngestionTime);
// read the input from Kafka
DataStream<KafkaEvent> documentStream = env.addSource(
createKafkaSource(getSourceTopic(), getSourceProperties())).name("Kafka[" + getSourceTopic() + "]");
// Configure the Rules data stream
DataStream<RulesEvent> ruleStream = env.addSource(
new RulesApiHttpSource(
getApiRulesSubdomain(),
getApiBearerToken(),
DataType.DataTypeName.LOGS,
getRulesApiCacheDuration()) // Currently set to 120000
);
MapStateDescriptor<String, RulesEvent> ruleStateDescriptor = new MapStateDescriptor<>(
"RulesBroadcastState",
BasicTypeInfo.STRING_TYPE_INFO,
TypeInformation.of(new TypeHint<RulesEvent>() {
}));
// broadcast the rules and create the broadcast state
BroadcastStream<RulesEvent> ruleBroadcastStream = ruleStream
.broadcast(ruleStateDescriptor);
// extract the resources and attributes
documentStream
.connect(ruleBroadcastStream)
.process(new FanOutLogsRuleMapper()).name("FanOut Stream")
.addSink(createKafkaSink(getDestinationProperties()))
.name("FanOut Sink");
// run the job
env.execute(FanOutJob.class.getName());
}
The custom HTTP source which gets the rules
public class RulesApiHttpSource extends RichSourceFunction<RulesEvent> {
private static final Logger LOGGER = LoggerFactory.getLogger(RulesApiHttpSource.class);
private final long pollIntervalMillis;
private final String endpoint;
private final String bearerToken;
private final DataType.DataTypeName dataType;
private final RulesApiCaller caller;
private volatile boolean running = true;
public RulesApiHttpSource(String endpoint, String bearerToken, DataType.DataTypeName dataType, long pollIntervalMillis) {
this.pollIntervalMillis = pollIntervalMillis;
this.endpoint = endpoint;
this.bearerToken = bearerToken;
this.dataType = dataType;
this.caller = new RulesApiCaller(this.endpoint, this.bearerToken);
}
@Override
public void open(Configuration configuration) throws Exception {
// do nothing
}
@Override
public void close() throws IOException {
// do nothing
}
@Override
public void run(SourceContext<RulesEvent> ctx) throws IOException {
while (running) {
if (pollIntervalMillis > 0) {
try {
RulesEvent event = new RulesEvent();
event.setRules(getCurrentRulesList());
event.setDataType(this.dataType);
event.setRetrievedAt(Instant.now());
ctx.collect(event);
Thread.sleep(pollIntervalMillis);
} catch (InterruptedException e) {
running = false;
}
} else if (pollIntervalMillis <= 0) {
cancel();
}
}
}
public List<Rule> getCurrentRulesList() throws IOException {
// call API and get rules
}
@Override
public void cancel() {
running = false;
}
}
The BroadcastProcessFunction
public abstract class FanOutRuleMapper extends BroadcastProcessFunction<KafkaEvent, RulesEvent, KafkaEvent> {
protected final String RULES_EVENT_NAME = "rulesEvent";
protected final MapStateDescriptor<String, RulesEvent> ruleStateDescriptor = new MapStateDescriptor<>(
"RulesBroadcastState",
BasicTypeInfo.STRING_TYPE_INFO,
TypeInformation.of(new TypeHint<RulesEvent>() {
}));
@Override
public void processBroadcastElement(RulesEvent rulesEvent, BroadcastProcessFunction<KafkaEvent, RulesEvent, KafkaEvent>.Context ctx, Collector<KafkaEvent> out) throws Exception {
ctx.getBroadcastState(ruleStateDescriptor).put(RULES_EVENT_NAME, rulesEvent);
LOGGER.debug("Added to broadcast state {}", rulesEvent.toString());
}
// omitted rules matching logic
}
public class FanOutLogsRuleMapper extends FanOutRuleMapper {
public FanOutLogsRuleMapper() {
super();
}
@Override
public void processElement(KafkaEvent in, BroadcastProcessFunction<KafkaEvent, RulesEvent, KafkaEvent>.ReadOnlyContext ctx, Collector<KafkaEvent> out) throws Exception {
RulesEvent rulesEvent = ctx.getBroadcastState(ruleStateDescriptor).get(RULES_EVENT_NAME);
ExportLogsServiceRequest otlpLog = extractOtlpMessageFromJsonPayload(in);
for (Rule rule : rulesEvent.getRules()) {
boolean match = false;
// omitted rules matching logic
if (match) {
for (RuleDestination ruleDestination : rule.getRulesDestinations()) {
out.collect(fillInTheEvent(in, rule, ruleDestination, otlpLog));
}
}
}
}
}
Maybe you can post the complete code of the FanOutLogsRuleMapper class; currently the match variable is always false.
Can someone please help me understand when and how a (session) window in Flink is created? Or how the samples are processed?
For instance, suppose I have a continuous stream of events flowing in, the events being requests coming into an application and the responses provided by the application.
As part of the Flink processing we need to understand how much time is taken to serve a request.
I understand that there are tumbling time windows that get triggered every n (configured) seconds, and as soon as the time elapses all the events in that time window are aggregated.
So for example:
Let's assume that the time window defined is 30 seconds; if an event arrives at time t and another arrives at t+30, both will be processed, but an event arriving at t+31 will be ignored.
Please correct me if I am not right in saying the above.
Question on the above: if an event arrives at time t and another event arrives at t+3, will it still wait for the entire 30 seconds before aggregating and finalizing the results?
Now, in the case of a session window, how does this work? If the events are processed individually and the broker timestamp is used as the session_id for each individual event at deserialization time, will a session window be created for each event? If so, do we need to treat request and response events differently, because otherwise won't the response event get its own session window?
I will try posting the example (in Java) that I am playing with shortly, but any input on the above points will be helpful!
DTO's:
public class IncomingEvent{
private String id;
private String eventId;
private Date timestamp;
private String component;
//getters and setters
}
public class FinalOutPutEvent{
private String id;
private long timeTaken;
//getters and setters
}
===============================================
Deserialization of incoming events:
public class IncomingEventDeserializationScheme implements KafkaDeserializationSchema<IncomingEvent> {
private ObjectMapper mapper;
public IncomingEventDeserializationScheme(ObjectMapper mapper) {
this.mapper = mapper;
}
@Override
public TypeInformation<IncomingEvent> getProducedType() {
return TypeInformation.of(IncomingEvent.class);
}
@Override
public boolean isEndOfStream(IncomingEvent nextElement) {
return false;
}
@Override
public IncomingEvent deserialize(ConsumerRecord<byte[], byte[]> record) throws Exception {
if (record.value() == null) {
return null;
}
try {
IncomingEvent event = mapper.readValue(record.value(), IncomingEvent.class);
if(event != null) {
new SessionWindow(record.timestamp());
event.setOffset(record.offset());
event.setTopic(record.topic());
event.setPartition(record.partition());
event.setBrokerTimestamp(record.timestamp());
}
return event;
} catch (Exception e) {
return null;
}
}
}
===============================================
main logic
public class MyEventJob {
private static final ObjectMapper mapper = new ObjectMapper();
public static void main(String[] args) throws Exception {
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
MyEventJob eventJob = new MyEventJob();
InputStream inStream = eventJob.getFileFromResources("myConfig.properties");
ParameterTool parameter = ParameterTool.fromPropertiesFile(inStream);
Properties properties = parameter.getProperties();
Integer timePeriodBetweenEvents = 120;
String outWardTopicHostedOnServer = "localhost:9092";
DataStreamSource<IncomingEvent> stream = env.addSource(new FlinkKafkaConsumer<>("my-input-topic", new IncomingEventDeserializationScheme(mapper), properties));
SingleOutputStreamOperator<IncomingEvent> filteredStream = stream
.assignTimestampsAndWatermarks(new BoundedOutOfOrdernessTimestampExtractor<IncomingEvent>() {
long eventTime;
@Override
public long extractTimestamp(IncomingEvent element, long previousElementTimestamp) {
return element.getTimestamp();
}
@Override
public Watermark getCurrentWatermark() {
return new Watermark(eventTime);
}
})
.map(e -> { e.setId(e.getEventId()); return e; });
SingleOutputStreamOperator<FinalOutPutEvent> correlatedStream = filteredStream
.keyBy(new KeySelector<IncomingEvent, String> (){
@Override
public String getKey(@Nonnull IncomingEvent input) throws Exception {
return input.getId();
}
})
.window(GlobalWindows.create()).allowedLateness(Time.seconds(defaultSliceTimePeriod))
.trigger( new Trigger<IncomingEvent, Window> (){
private final long sessionTimeOut;
public SessionTrigger(long sessionTimeOut) {
this.sessionTimeOut = sessionTimeOut;
}
@Override
public TriggerResult onElement(IncomingEvent element, long timestamp, Window window, TriggerContext ctx)
throws Exception {
ctx.registerProcessingTimeTimer(timestamp + sessionTimeOut);
return TriggerResult.CONTINUE;
}
@Override
public TriggerResult onProcessingTime(long time, Window window, TriggerContext ctx) throws Exception {
return TriggerResult.FIRE_AND_PURGE;
}
@Override
public TriggerResult onEventTime(long time, Window window, TriggerContext ctx) throws Exception {
return TriggerResult.CONTINUE;
}
@Override
public void clear(Window window, TriggerContext ctx) throws Exception {
//check the clear method implementation
}
})
.process(new ProcessWindowFunction<IncomingEvent, FinalOutPutEvent, String, SessionWindow>() {
@Override
public void process(String arg0,
ProcessWindowFunction<IncomingEvent, FinalOutPutEvent, String, SessionWindow>.Context arg1,
Iterable<IncomingEvent> input, Collector<FinalOutPutEvent> out) throws Exception {
List<IncomingEvent> eventsIn = new ArrayList<>();
input.forEach(eventsIn::add);
if(eventsIn.size() == 1) {
//Logic to handle incomplete request/response events
} else if (eventsIn.size() == 2) {
//Logic to handle the complete request/response and how much time it took
}
}
} );
FlinkKafkaProducer<FinalOutPutEvent> kafkaProducer = new FlinkKafkaProducer<>(
outWardTopicHostedOnServer, // broker list
"target-topic", // target topic
new EventSerializationScheme(mapper));
correlatedStream.addSink(kafkaProducer);
env.execute("Streaming");
}
}
Thanks
Vicky
From your description, I think you want to write a custom ProcessFunction, which is keyed by the session_id. You'll have a ValueState, where you store the timestamp for the request event. When you get the corresponding response event, you calculate the delta and emit that (with the session_id) and clear out state.
It's likely you'd also want to set a timer when you get the request event, so that if you don't get a response event in safe/long amount of time, you can emit a side output of failed requests.
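A rough sketch of that timer/side-output part (assuming, inside such a KeyedProcessFunction, a ValueState<Long> requestTimeState holding the request timestamp and a String key; the output tag name and the 30-second timeout are purely illustrative):
// hypothetical side output for requests that never received a response
private static final OutputTag<String> TIMED_OUT = new OutputTag<String>("timed-out-requests") {};

// in processElement, when the request event is stored:
// requestTimeState.update(event.timestamp);
// ctx.timerService().registerEventTimeTimer(event.timestamp + 30_000L); // illustrative timeout

@Override
public void onTimer(long timestamp, OnTimerContext ctx, Collector<Long> out) throws Exception {
    if (requestTimeState.value() != null) {
        // the timer fired before a response arrived; report the key on the side output
        ctx.output(TIMED_OUT, ctx.getCurrentKey().toString());
        requestTimeState.clear();
    }
}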
So, with the default trigger, each window is finalized after its time fully passes. Depending on whether you are using event time or processing time this may mean different things, but in general Flink will always wait for the window to be closed before it is fully processed. The event at t+31 in your case would simply go to the next window.
As for session windows, they are windows too, meaning that in the end they simply aggregate samples whose timestamps differ by less than the defined gap. Internally, this is more complicated than normal windows, since session windows don't have defined starts and ends. The session window operator gets a sample and creates a new window for each individual sample. Then the operator checks whether the newly created window can be merged with already existing ones (i.e. whether their timestamps are closer than the gap) and merges them. This finally results in a window containing all elements whose timestamps are closer to each other than the defined gap.
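For reference, this is roughly how the two window types are assigned (a minimal sketch; the events stream, the Event type and its sessionId field are placeholders):
// fixed 30-second buckets: a window covers [t, t + 30s) and fires once the watermark passes t + 30s
events.keyBy(e -> e.sessionId)
      .window(TumblingEventTimeWindows.of(Time.seconds(30)))
      .reduce((a, b) -> a);   // placeholder aggregation

// session windows: a window closes only after a 30-second gap with no events for that key
events.keyBy(e -> e.sessionId)
      .window(EventTimeSessionWindows.withGap(Time.seconds(30)))
      .reduce((a, b) -> a);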
You are making this more complicated than it needs to be. The example below will need some adjustment, but will hopefully convey the idea of how to use a KeyedProcessFunction rather than session windows.
Also, the constructor for BoundedOutOfOrdernessTimestampExtractor expects to be passed a Time maxOutOfOrderness. Not sure why you are overriding its getCurrentWatermark method with an implementation that ignores the maxOutOfOrderness.
public static void main(String[] args) throws Exception {
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
DataStream<Event> events = ...
events
.assignTimestampsAndWatermarks(new TimestampsAndWatermarks(OUT_OF_ORDERNESS))
.keyBy(e -> e.sessionId)
.process(new RequestResponse())
...
}
public static class RequestResponse extends KeyedProcessFunction<KEY, Event, Long> {
private ValueState<Long> requestTimeState;
@Override
public void open(Configuration config) {
ValueStateDescriptor<Long> descriptor = new ValueStateDescriptor<>(
"request time", Long.class);
requestTimeState = getRuntimeContext().getState(descriptor);
}
@Override
public void processElement(Event event, Context context, Collector<Long> out) throws Exception {
TimerService timerService = context.timerService();
Long requestedAt = requestTimeState.value();
if (requestedAt == null) {
// haven't seen the request before; save its timestamp
requestTimeState.update(event.timestamp);
timerService.registerEventTimeTimer(event.timestamp + TIMEOUT);
} else {
// this event is the response
// emit the time elapsed between request and response
out.collect(event.timestamp - requestedAt);
}
}
@Override
public void onTimer(long timestamp, OnTimerContext context, Collector<Long> out) throws Exception {
//handle incomplete request/response events
}
}
public static class TimestampsAndWatermarks extends BoundedOutOfOrdernessTimestampExtractor<Event> {
public TimestampsAndWatermarks(Time t) {
super(t);
}
@Override
public long extractTimestamp(Event event) {
return event.eventTime;
}
}
I am writing my Apache Flink (1.10) job to update records in real time, like this:
public class WalletConsumeRealtimeHandler {
public static void main(String[] args) throws Exception {
walletConsumeHandler();
}
public static void walletConsumeHandler() throws Exception {
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
FlinkUtil.initMQ();
FlinkUtil.initEnv(env);
DataStream<String> dataStreamSource = env.addSource(FlinkUtil.initDatasource("wallet.consume.report.realtime"));
DataStream<ReportWalletConsumeRecord> consumeRecord =
dataStreamSource.map(new MapFunction<String, ReportWalletConsumeRecord>() {
@Override
public ReportWalletConsumeRecord map(String value) throws Exception {
ObjectMapper mapper = new ObjectMapper();
ReportWalletConsumeRecord consumeRecord = mapper.readValue(value, ReportWalletConsumeRecord.class);
consumeRecord.setMergedRecordCount(1);
return consumeRecord;
}
}).assignTimestampsAndWatermarks(new BoundedOutOfOrdernessGenerator());
consumeRecord.keyBy(
new KeySelector<ReportWalletConsumeRecord, Tuple2<String, Long>>() {
@Override
public Tuple2<String, Long> getKey(ReportWalletConsumeRecord value) throws Exception {
return Tuple2.of(value.getConsumeItem(), value.getTenantId());
}
})
.timeWindow(Time.seconds(5))
.reduce(new SumField(), new CollectionWindow())
.addSink(new SinkFunction<List<ReportWalletConsumeRecord>>() {
@Override
public void invoke(List<ReportWalletConsumeRecord> reportPumps, Context context) throws Exception {
WalletConsumeRealtimeHandler.invoke(reportPumps);
}
});
env.execute(WalletConsumeRealtimeHandler.class.getName());
}
private static class CollectionWindow extends ProcessWindowFunction<ReportWalletConsumeRecord,
List<ReportWalletConsumeRecord>,
Tuple2<String, Long>,
TimeWindow> {
public void process(Tuple2<String, Long> key,
Context context,
Iterable<ReportWalletConsumeRecord> minReadings,
Collector<List<ReportWalletConsumeRecord>> out) throws Exception {
ArrayList<ReportWalletConsumeRecord> employees = Lists.newArrayList(minReadings);
if (employees.size() > 0) {
out.collect(employees);
}
}
}
private static class SumField implements ReduceFunction<ReportWalletConsumeRecord> {
public ReportWalletConsumeRecord reduce(ReportWalletConsumeRecord d1, ReportWalletConsumeRecord d2) {
Integer merged1 = d1.getMergedRecordCount() == null ? 1 : d1.getMergedRecordCount();
Integer merged2 = d2.getMergedRecordCount() == null ? 1 : d2.getMergedRecordCount();
d1.setMergedRecordCount(merged1 + merged2);
d1.setConsumeNum(d1.getConsumeNum() + d2.getConsumeNum());
return d1;
}
}
public static void invoke(List<ReportWalletConsumeRecord> records) {
WalletConsumeService service = FlinkUtil.InitRetrofit().create(WalletConsumeService.class);
Call<ResponseBody> call = service.saveRecords(records);
call.enqueue(new Callback<ResponseBody>() {
@Override
public void onResponse(Call<ResponseBody> call, Response<ResponseBody> response) {
}
@Override
public void onFailure(Call<ResponseBody> call, Throwable t) {
t.printStackTrace();
}
});
}
}
And now I find that the Flink task only triggers the sink after receiving at least two records. Does the reduce operation require this?
You need two records to trigger the window. Flink only knows when to close a window (and fire the subsequent calculation) when it receives a watermark that is larger than the end timestamp of the window.
In your case, you use BoundedOutOfOrdernessGenerator, which updates the watermark according to the incoming records. So it generates a second watermark only after having seen the second record.
You can use a different watermark generator. In the troubleshooting training there is a watermark generator that also generates watermarks on timeout.
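As an illustration of that idea (not the exact generator from the training), a Flink 1.10-style periodic assigner can fall back to advancing the watermark by wall-clock time once the source has been idle for a while; the getTimestamp() accessor, the out-of-orderness bound and the idle timeout below are all assumptions:
// Sketch: advances the watermark from events, but also pushes it forward after an idle
// timeout so that windows can fire even with a single record.
public static class TimeoutAwareWatermarkGenerator
        implements AssignerWithPeriodicWatermarks<ReportWalletConsumeRecord> {

    private final long maxOutOfOrderness = 3_000L;        // assumed bound (ms)
    private final long idleTimeout = 10_000L;              // assumed idle timeout (ms)
    private long maxTimestamp = Long.MIN_VALUE + 3_000L;    // avoid underflow before the first event
    private long lastEventSeenAt = System.currentTimeMillis();

    @Override
    public long extractTimestamp(ReportWalletConsumeRecord element, long previousElementTimestamp) {
        long ts = element.getTimestamp();   // assumed getter
        maxTimestamp = Math.max(maxTimestamp, ts);
        lastEventSeenAt = System.currentTimeMillis();
        return ts;
    }

    @Override
    public Watermark getCurrentWatermark() {
        long idleFor = System.currentTimeMillis() - lastEventSeenAt;
        if (idleFor > idleTimeout) {
            // no input for a while: push the watermark forward by the idle time
            // (assumes event time roughly tracks wall-clock time) so pending windows can close;
            // a production version should also ensure the watermark never goes backwards
            return new Watermark(maxTimestamp + idleFor - idleTimeout);
        }
        return new Watermark(maxTimestamp - maxOutOfOrderness);
    }
}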
I am writing a pipeline to group sessions for a user, keyed by id and windowed with an event-time session window. I am using a periodic watermark assigner and a custom session accumulator which counts the events in a given session.
What is happening is that my window operator is consuming records but not emitting anything. I am not sure what is missing here.
FlinkKafkaConsumer010<String> eventSource =
new FlinkKafkaConsumer010<>("events", new SimpleStringSchema(), properties);
eventSource.setStartFromLatest();
DataStream<Event> eventStream = env.addSource(eventSource
).flatMap(
new FlatMapFunction<String, Event>() {
@Override
public void flatMap(String value, Collector<Event> out) throws Exception {
out.collect(Event.toEvent(value));
}
}
).assignTimestampsAndWatermarks(
new AssignerWithPeriodicWatermarks<Event>() {
long maxTime;
@Override
public long extractTimestamp(Event element, long previousElementTimestamp) {
maxTime = Math.max(previousElementTimestamp, maxTime);
return previousElementTimestamp;
}
@Nullable
@Override
public Watermark getCurrentWatermark() {
return new Watermark(maxTime);
}
}
);
DataStream<Session> session_stream = eventStream.keyBy((KeySelector<Event, String>) value -> value.id)
.window(EventTimeSessionWindows.withGap(Time.minutes(5)))
.aggregate(new AggregateFunction<Event, pipe.SessionAccumulator, Session>() {
@Override
public pipe.SessionAccumulator createAccumulator() {
return new pipe.SessionAccumulator();
}
@Override
public pipe.SessionAccumulator add(Event e, pipe.SessionAccumulator sessionAccumulator) {
sessionAccumulator.add(e);
return sessionAccumulator;
}
@Override
public Session getResult(pipe.SessionAccumulator sessionAccumulator) {
return sessionAccumulator.getLocalValue();
}
@Override
public pipe.SessionAccumulator merge(pipe.SessionAccumulator prev, pipe.SessionAccumulator next) {
prev.merge(next);
return prev;
}
}, new WindowFunction<Session, Session, String, TimeWindow>() {
@Override
public void apply(String s, TimeWindow timeWindow, Iterable<Session> iterable, Collector<Session> collector) throws Exception {
collector.collect(iterable.iterator().next());
}
});
public static class SessionAccumulator implements Accumulator<Event, Session>{
Session session;
public SessionAccumulator(){
session = new Session();
}
@Override
public void add(Event e) {
session.add(e);
}
@Override
public Session getLocalValue() {
return session;
}
@Override
public void resetLocal() {
session = new Session();
}
@Override
public void merge(Accumulator<Event, Session> accumulator) {
session.merge(Collections.singletonList(accumulator.getLocalValue()));
}
@Override
public Accumulator<Event, Session> clone() {
SessionAccumulator sessionAccumulator = new SessionAccumulator();
sessionAccumulator.session = new Session(session.id);
return sessionAccumulator;
}
}
public static class SessionAccumulator implements Accumulator<Event, Session>{
Session session;
public SessionAccumulator(){
session = new Session();
}
@Override
public void add(Event e) {
session.add(e);
}
@Override
public Session getLocalValue() {
return session;
}
@Override
public void resetLocal() {
session = new Session();
}
@Override
public void merge(Accumulator<Event, Session> accumulator) {
session.merge(Collections.singletonList(accumulator.getLocalValue()));
}
@Override
public Accumulator<Event, Session> clone() {
SessionAccumulator sessionAccumulator = new SessionAccumulator();
sessionAccumulator.session = new Session(
session.id,
session.lastEventTime,
session.earliestEventTime,
session.count
);
return sessionAccumulator;
}
}
If your watermarks are not advancing, this would explain why no results are being emitted by the window. Possible causes include:
Your events haven't been timestamped by Kafka, and thus previousElementTimestamp isn't set.
You have an idle Kafka partition holding back the watermarks. (This is a somewhat complex topic. If this turns out to be the cause of your problems, and you get stuck on it, please come back with a new question.)
Another possibility is that there is never a 5 minute-long gap in the events, in which case the events will accumulate in a never-ending session.
Also, you don't appear to have included a sink. If you don't print or otherwise send the results to a sink, Flink won't do anything.
And don't forget that you must call env.execute() to get anything to happen.
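For instance (a minimal sketch, reusing the session_stream and env variables from the question):
// send the results somewhere and actually start the job
session_stream.print();              // or .addSink(...)
env.execute("session aggregation job");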
A few other things:
Your watermark generator isn't allowing for any out-of-orderness, so the window is going to ignore all out-of-order events (because they will be late). If your events have strictly ascending timestamps you should go ahead and use an AscendingTimestampExtractor; if they can be out-of-order, then a BoundedOutOfOrdernessTimestampExtractor is appropriate (see the sketch after this list).
Your WindowFunction is superfluous. It is simply forwarding downstream the result from the aggregator, so you could remove it.
You have posted two different implementations of SessionAccumulator.
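A minimal sketch of the bounded-out-of-orderness variant, assuming the Event type exposes a timestamp field and that five seconds of out-of-orderness is acceptable (both are assumptions):
// assigns each event's own timestamp and holds the watermark back by 5 seconds
public static class EventTimestampExtractor extends BoundedOutOfOrdernessTimestampExtractor<Event> {

    public EventTimestampExtractor() {
        super(Time.seconds(5));   // assumed out-of-orderness bound
    }

    @Override
    public long extractTimestamp(Event element) {
        return element.timestamp;   // assumed field
    }
}

// usage: eventStream.assignTimestampsAndWatermarks(new EventTimestampExtractor())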
I have a StreamExecutionEnvironment job that consumes simple CQL select queries from Kafka.
I try to handle these queries asynchronously using the following code:
public class GenericCassandraReader extends RichAsyncFunction<UserDefinedType, ResultSet> {
private static final Logger logger = LoggerFactory.getLogger(GenericCassandraReader.class);
private ExecutorService executorService;
private final Properties props;
private Session client;
public ExecutorService getExecutorService() {
return executorService;
}
public GenericCassandraReader(Properties props, ExecutorService executorService) {
super();
this.props = props;
this.executorService = executorService;
}
@Override
public void open(Configuration parameters) throws Exception {
client = Cluster.builder().addContactPoint(props.getProperty("cqlHost"))
.withPort(Integer.parseInt(props.getProperty("cqlPort"))).build()
.connect(props.getProperty("keyspace"));
}
@Override
public void close() throws Exception {
client.close();
synchronized (GenericCassandraReader.class) {
try {
if (!getExecutorService().awaitTermination(1000, TimeUnit.MILLISECONDS)) {
getExecutorService().shutdownNow();
}
} catch (InterruptedException e) {
getExecutorService().shutdownNow();
}
}
}
@Override
public void asyncInvoke(final UserDefinedType input, final AsyncCollector<ResultSet> asyncCollector) throws Exception {
getExecutorService().submit(new Runnable() {
@Override
public void run() {
ListenableFuture<ResultSet> resultSetFuture = client.executeAsync(input.query);
Futures.addCallback(resultSetFuture, new FutureCallback<ResultSet>() {
public void onSuccess(ResultSet resultSet) {
asyncCollector.collect(Collections.singleton(resultSet));
}
public void onFailure(Throwable t) {
asyncCollector.collect(t);
}
});
}
});
}
}
Each response of this code provides a Cassandra ResultSet with a different number of fields.
Any ideas for handling a Cassandra ResultSet in Flink, or should I use other techniques to reach my goal?
Thanks for any help in advance!
Cassandra's ResultSet is not thread-safe. Better to try the Flink Cassandra connector, or at least write your own implementation in a similar way.
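If you stay with the custom RichAsyncFunction, one way to follow that advice is to materialize plain, serializable values from the ResultSet inside the driver callback, so the ResultSet itself never crosses threads. A rough sketch of asyncInvoke under that approach (it assumes the function's output type is changed to List<String> and that the first column of each row is textual):
@Override
public void asyncInvoke(final UserDefinedType input, final AsyncCollector<List<String>> asyncCollector) throws Exception {
    ListenableFuture<ResultSet> resultSetFuture = client.executeAsync(input.query);
    Futures.addCallback(resultSetFuture, new FutureCallback<ResultSet>() {
        public void onSuccess(ResultSet resultSet) {
            // extract the values here, on the driver's callback thread
            List<String> values = new ArrayList<>();
            for (Row row : resultSet) {
                values.add(row.getString(0)); // assumes the first column is a text column
            }
            asyncCollector.collect(Collections.singleton(values));
        }
        public void onFailure(Throwable t) {
            asyncCollector.collect(t);
        }
    });
}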