Data streaming using MongoOperations - spring-data-mongodb

I am using Spring Data's MongoOperations to issue queries against MongoDB, and my result set contains a large number of documents. These cannot all be loaded into local memory at once without hogging memory.
I checked the MongoOperations API and it does have a stream method. I am not sure whether this stream method is a wrapper on top of MongoDB cursors or something in sync with Java 8 stream support.
What would be the best way to stream data using mongoTemplate without loading all the documents into memory at once?

The mongoOperations.stream(...) method returns a CloseableIterator (an Iterator), not a Java 8 Stream. Wrap it with StreamUtils.createStreamFromIterator; with a static import it stays concise and works fine:
import static org.springframework.data.util.StreamUtils.createStreamFromIterator;

createStreamFromIterator(mongoOperations.stream(query, SomeEntity.class))
    .map(SomeEntity::getFirstName)
    ...
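
To close the underlying cursor deterministically, a fuller sketch could look like the following. This is only illustrative: it assumes a hypothetical SomeEntity with a firstName field, an illustrative "active" criteria, and the CloseableIterator overload of createStreamFromIterator, which registers the iterator for closing.

import static org.springframework.data.util.StreamUtils.createStreamFromIterator;

import java.util.stream.Stream;

import org.springframework.data.mongodb.core.MongoOperations;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;

public class FirstNameStreamer {

    private final MongoOperations mongoOperations;

    public FirstNameStreamer(MongoOperations mongoOperations) {
        this.mongoOperations = mongoOperations;
    }

    public void printActiveFirstNames() {
        // "active" is an illustrative field; adjust the criteria to your schema
        Query query = new Query(Criteria.where("active").is(true));

        // try-with-resources closes the Stream, which in turn closes the Mongo cursor;
        // documents are pulled from the cursor lazily, batch by batch
        try (Stream<SomeEntity> entities =
                     createStreamFromIterator(mongoOperations.stream(query, SomeEntity.class))) {
            entities.map(SomeEntity::getFirstName)
                    .forEach(System.out::println);
        }
    }
}

Only as many documents as the driver's current cursor batch are held in memory at a time, so the full result set never has to fit into the heap.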

Related

Flink 1.12.x DataSet --> Flink 1.14.x DataStream

I am trying to migrate from the Flink 1.12.x DataSet API to the Flink 1.14.x DataStream API. mapPartition is not available in the Flink DataStream API.
Our code using Flink 1.12.x DataSet:
dataset
    .<few operations>
    .mapPartition(new SomeMapPartitionFn())
    .<few more operations>

public static class SomeMapPartitionFn extends RichMapPartitionFunction<InputModel, OutputModel> {

    @Override
    public void mapPartition(Iterable<InputModel> records, Collector<OutputModel> out) throws Exception {
        for (InputModel record : records) {
            /*
                do some operation
            */
            if (/* some condition based on processing *MULTIPLE* records */) {
                out.collect(...); // Conditional collect ---> (1)
            }
        }
        // At the end of the data, collect
        out.collect(...); // Collect processed data ---> (2)
    }
}
(1) - Collector.collect invoked based on some condition after processing a few records
(2) - Collector.collect invoked at the end of the data
Initially we thought of using flatMap instead of mapPartition, but the Collector is not available in the close() function.
https://issues.apache.org/jira/browse/FLINK-14709 - Only available in case of chained drivers
How to implement this in Flink 1.14.x DataStream? Please advise...
Note: Our application works with only a finite set of data (Batch Mode)
In Flink's DataSet API, a MapPartitionFunction has two parameters. An iterator for the input and a collector for the result of the function. A MapPartitionFunction in a Flink DataStream program would never return from the first function call, because the iterator would iterate over an endless stream of records. However, Flink's internal stream processing model requires that user functions return in order to checkpoint function state. Therefore, the DataStream API does not offer a mapPartition transformation.
In order to implement a similar function, you need to define a window over the stream. Windows discretize streams, which is somewhat similar to mini batches, but windows offer far more flexibility.
Solution provided by Zhipeng
One solution could be using a streamOperator to implement the BoundedOneInput interface. Example code can be found here [1].
[1] https://github.com/apache/flink-ml/blob/56b441d85c3356c0ffedeef9c27969aee5b3ecfc/flink-ml-core/src/main/java/org/apache/flink/ml/common/datastream/DataStreamUtils.java#L75
Flink user mailing list thread: https://lists.apache.org/thread/ktck2y96d0q1odnjjkfks0dmrwh7kb3z
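
For reference, here is a rough sketch of that idea; the operator and its wiring are illustrative, not taken from the linked code, and InputModel/OutputModel are the types from the question. Records are buffered or processed incrementally in processElement, and the final results are emitted from endInput, which Flink calls once the bounded input is exhausted.

import java.util.ArrayList;
import java.util.List;

import org.apache.flink.streaming.api.operators.AbstractStreamOperator;
import org.apache.flink.streaming.api.operators.BoundedOneInput;
import org.apache.flink.streaming.api.operators.OneInputStreamOperator;
import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;

public class MapPartitionLikeOperator
        extends AbstractStreamOperator<OutputModel>
        implements OneInputStreamOperator<InputModel, OutputModel>, BoundedOneInput {

    // Plain in-memory buffer; fine for batch execution mode, where this
    // operator only sees bounded input and checkpointing is not involved.
    private final List<InputModel> buffer = new ArrayList<>();

    @Override
    public void processElement(StreamRecord<InputModel> element) {
        buffer.add(element.getValue());
        // Conditional collect, analogous to (1) in the question:
        // output.collect(new StreamRecord<>(new OutputModel(...)));
    }

    @Override
    public void endInput() {
        // All input of this subtask has been consumed; process `buffer`
        // and emit the final result, analogous to (2) in the question.
        // output.collect(new StreamRecord<>(new OutputModel(...)));
    }
}

It would be attached with something like stream.transform("mapPartition", TypeInformation.of(OutputModel.class), new MapPartitionLikeOperator()); the DataStreamUtils class linked in [1] does essentially this in a more polished way.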

How to join a stream and dataset?

I have a stream, and I have static data in a file. I want to enrich the stream's data using the data in the file.
Example: in the stream I get airport codes, and in the file I have the airport names and their codes.
Now I want to join the stream data with the file to form a new stream with airport names. Please provide steps on how to achieve this.
There are lots of ways to approach stream enrichment with Flink, depending on the exact requirements. https://www.youtube.com/watch?v=cJS18iKLUIY is a good talk by Konstantin Knauf that covers many different approaches, and the tradeoffs between them.
In the simple case where the enrichment data is immutable and reasonably small, I would just use a RichFlatMap and load the whole file in the open() method. That would look something like this:
public class EnrichmentWithPreloading extends RichFlatMapFunction<Event, EnrichedEvent> {

    private Map<Long, SensorReferenceData> referenceData;

    @Override
    public void open(final Configuration parameters) throws Exception {
        super.open(parameters);
        // Load the whole file into memory once per parallel instance
        referenceData = loadReferenceData();
    }

    @Override
    public void flatMap(
            final Event event,
            final Collector<EnrichedEvent> collector) throws Exception {
        SensorReferenceData sensorReferenceData =
                referenceData.get(event.getSensorId());
        collector.collect(new EnrichedEvent(event, sensorReferenceData));
    }
}
You'll find more code examples for other approaches in https://github.com/knaufk/enrichments-with-flink.
UPDATE:
If what you'd rather do is preload some larger, partitioned reference data to join with a stream, there are a few ways to approach this, some of which are covered in the video and repo I shared above. For those specific requirements, I suggest using a custom partitioner; there's an example here in that same github repo. The idea is that the enrichment data is sharded, and each streaming event is steered toward the instance with the relevant reference data.
In my opinion, this is simpler than trying to get the Table API to do this particular enrichment as a join.
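
A rough sketch of that partitioned approach, reusing the Event/EnrichedEvent placeholders from the example above; the partitioning key, the modulo scheme, and the EnrichmentWithPartitionedPreloading function are illustrative assumptions, not the repo's exact code.

import org.apache.flink.api.common.functions.Partitioner;
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.streaming.api.datastream.DataStream;

public class PartitionedEnrichmentJob {

    public static DataStream<EnrichedEvent> enrich(DataStream<Event> events) {
        // Steer each event to the subtask that holds its shard of the reference data.
        DataStream<Event> partitioned = events.partitionCustom(
                new Partitioner<Long>() {
                    @Override
                    public int partition(Long sensorId, int numPartitions) {
                        return (int) (sensorId % numPartitions);
                    }
                },
                new KeySelector<Event, Long>() {
                    @Override
                    public Long getKey(Event event) {
                        return event.getSensorId();
                    }
                });

        // Each parallel instance loads only its own shard in open(), e.g. a
        // partition-aware variant of the RichFlatMapFunction shown above
        // (EnrichmentWithPartitionedPreloading is a hypothetical name).
        return partitioned.flatMap(new EnrichmentWithPartitionedPreloading());
    }
}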

Apache Flink flatMap with millions of outputs

Whenever I receive a message, I want to do a read from a database, possibly returning millions of rows, which I then want to pass on down the stream. Is this considered good practice in Flink?
public static class StatsReader implements FlatMapFunction<Msg, Json> {

    Transactor txor = ...;

    @Override
    public void flatMap(Msg msg, Collector<Json> out) {
        // Possibly lazy and async stream
        java.util.stream.Stream<Json> results = txor.exec(Stats.read(msg));
        results.forEach(stat -> out.collect(stat));
    }
}
Edit:
Background: I would like to dynamically run a report. The DB basically acts as a huge window, and the report is based on that window plus live data. The report is highly customizable, therefore it is hard to preprocess results or define pipelines a priori.
I use vanilla java today, and the pipeline is roughly like this:
ReportDefinition -> ( elasticsearch query + realtime stream ) -> ( ReportProcessingPipeline ) -> ( Websocket push )
In principle this should be possible. However, I'd recommend using an AsyncFunction instead of a FlatMapFunction.
Please note that such a setup might require tuning the checkpointing parameters, such as the checkpoint interval.
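
A minimal sketch of that suggestion follows. Msg, Json, Transactor and Stats are taken from the question; the executor, the pool size and the rest of the wiring are illustrative assumptions.

import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

public class AsyncStatsReader extends RichAsyncFunction<Msg, Json> {

    private transient Transactor txor;
    private transient ExecutorService executor;

    @Override
    public void open(Configuration parameters) {
        txor = createTransactor();                  // build it however the "..." in the question does
        executor = Executors.newFixedThreadPool(4); // pool size is illustrative
    }

    @Override
    public void asyncInvoke(Msg msg, ResultFuture<Json> resultFuture) {
        CompletableFuture
                .supplyAsync(() -> txor.exec(Stats.read(msg))
                                       .collect(Collectors.toList()),
                             executor)
                .whenComplete((List<Json> rows, Throwable error) -> {
                    if (error != null) {
                        resultFuture.completeExceptionally(error);
                    } else {
                        resultFuture.complete(rows); // all rows produced for this msg
                    }
                });
    }

    @Override
    public void close() {
        executor.shutdown();
    }

    private Transactor createTransactor() {
        // Placeholder for the elided setup in the question.
        throw new UnsupportedOperationException("create the Transactor here");
    }
}

It would be wired in with something like AsyncDataStream.unorderedWait(stream, new AsyncStatsReader(), 30, TimeUnit.SECONDS, 100), where the timeout and the capacity need sizing for multi-million-row reads.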

Spring Batch FlatFileItemWriter does not write data to a file

I am new to Spring Batch. I am trying to use FlatFileItemWriter to write data into a file. The challenge is that the application creates the file at the given path but does not write the actual content into it.
Following are the details of the code:
List<String> dataFileList : this list contains the data that I want to write to a file
FlatFileItemWriter<String> writer = new FlatFileItemWriter<>();
writer.setResource(new FileSystemResource("C:\\Desktop\\test"));
writer.open(new ExecutionContext());
writer.setLineAggregator(new PassThroughLineAggregator<>());
writer.setAppendAllowed(true);
writer.write(dataFileList);
writer.close();
This is just generating the file at the proper place, but the contents are not getting written into it.
Am I missing something? Help is highly appreciated.
Thanks!
This is not the proper way to use a Spring Batch writer to write data. You need to declare the writer as a bean first:
Define a Job bean
Define a Step bean
Use your writer bean in the Step
Have a look at the following examples; a minimal configuration sketch follows after the links:
https://github.com/pkainulainen/spring-batch-examples/blob/master/spring-boot/src/main/java/net/petrikainulainen/springbatch/csv/in/CsvFileToDatabaseJobConfig.java
https://spring.io/guides/gs/batch-processing/
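A minimal configuration sketch along those lines, assuming Spring Batch 4's builder APIs with Spring Boot auto-configuration; the bean names, the chunk size, and the ListItemReader standing in for dataFileList are illustrative:

import java.util.Arrays;

import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.item.file.FlatFileItemWriter;
import org.springframework.batch.item.file.builder.FlatFileItemWriterBuilder;
import org.springframework.batch.item.file.transform.PassThroughLineAggregator;
import org.springframework.batch.item.support.ListItemReader;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.FileSystemResource;

@Configuration
@EnableBatchProcessing
public class DataFileJobConfig {

    @Bean
    public ListItemReader<String> dataFileReader() {
        // Stand-in for the question's dataFileList
        return new ListItemReader<>(Arrays.asList("line 1", "line 2", "line 3"));
    }

    @Bean
    public FlatFileItemWriter<String> dataFileWriter() {
        return new FlatFileItemWriterBuilder<String>()
                .name("dataFileWriter")
                .resource(new FileSystemResource("C:\\Desktop\\test"))
                .lineAggregator(new PassThroughLineAggregator<>())
                .append(true)
                .build();
    }

    @Bean
    public Step dataFileStep(StepBuilderFactory steps,
                             ListItemReader<String> dataFileReader,
                             FlatFileItemWriter<String> dataFileWriter) {
        return steps.get("dataFileStep")
                .<String, String>chunk(100)
                .reader(dataFileReader)
                .writer(dataFileWriter)
                .build();
    }

    @Bean
    public Job dataFileJob(JobBuilderFactory jobs, Step dataFileStep) {
        return jobs.get("dataFileJob")
                .start(dataFileStep)
                .build();
    }
}

When the step runs, Spring Batch manages opening, flushing and closing the writer as part of the chunk lifecycle.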
You probably need to force a sync to disk. From the docs at https://docs.spring.io/spring-batch/trunk/apidocs/org/springframework/batch/item/file/FlatFileItemWriter.html,
setForceSync
public void setForceSync(boolean forceSync)
Flag to indicate that changes should be force-synced to disk on flush. Defaults to false, which means that even with a local disk changes could be lost if the OS crashes in between a write and a cache flush. Setting to true may result in slower performance for usage patterns involving many frequent writes.
Parameters:
forceSync - the flag value to set
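If you keep the manual approach from the question, a hedged sketch with that flag set and the writer fully configured before open() would be:

FlatFileItemWriter<String> writer = new FlatFileItemWriter<>();
writer.setResource(new FileSystemResource("C:\\Desktop\\test"));
writer.setLineAggregator(new PassThroughLineAggregator<>());
writer.setAppendAllowed(true);
writer.setForceSync(true); // force-sync changes to disk on flush, as described above
writer.afterPropertiesSet();
writer.open(new ExecutionContext());
writer.write(dataFileList);
writer.close();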

Apache Flink DataStream API doesn't have a mapPartition transformation

Spark DStream has a mapPartition API, while the Flink DataStream API doesn't. Could anyone help explain the reason? What I want to do is to implement an API similar to Spark's reduceByKey on Flink.
Flink's stream processing model is quite different from Spark Streaming which is centered around mini batches. In Spark Streaming each mini batch is executed like a regular batch program on a finite set of data, whereas Flink DataStream programs continuously process records.
In Flink's DataSet API, a MapPartitionFunction has two parameters. An iterator for the input and a collector for the result of the function. A MapPartitionFunction in a Flink DataStream program would never return from the first function call, because the iterator would iterate over an endless stream of records. However, Flink's internal stream processing model requires that user functions return in order to checkpoint function state. Therefore, the DataStream API does not offer a mapPartition transformation.
In order to implement functionality similar to Spark Streaming's reduceByKey, you need to define a keyed window over the stream. Windows discretize streams, which is somewhat similar to mini batches, but windows offer far more flexibility. Since a window is of finite size, you can call reduce on the window.
This could look like:
yourStream.keyBy("myKey") // organize stream by key "myKey"
.timeWindow(Time.seconds(5)) // build 5 sec tumbling windows
.reduce(new YourReduceFunction); // apply a reduce function on each window
The DataStream documentation shows how to define various window types and explains all available functions.
Note: The DataStream API has been reworked recently. The example assumes the latest version (0.10-SNAPSHOT), which will be released as 0.10.0 in the next few days.
Assuming your input stream contains single-partition data (say String):
val new_number_of_partitions = 4

// below line partitions your data; you can broadcast data to all partitions
val step1stream = yourStream.rescale.setParallelism(new_number_of_partitions)

// flexibility for mapping
val step2stream = step1stream.map(new RichMapFunction[String, (String, Int)] {
  // var local_val_to_different_part: Type = null
  var myTaskId: Int = _

  // below function is executed once per mapper (one mapper per partition)
  override def open(config: Configuration): Unit = {
    myTaskId = getRuntimeContext.getIndexOfThisSubtask
    // do whatever initialization you want to do; read from data sources...
  }

  def map(value: String): (String, Int) = {
    (value, myTaskId)
  }
})

val step3stream = step2stream.keyBy(0).countWindow(new_number_of_partitions).sum(1).print
// Instead of sum(1), you can use .reduce((x, y) => (x._1, x._2 + y._2))
// .countWindow will first wait for a certain number of records for a particular key
// and then apply the function
Flink streaming is pure streaming (not mini-batched). Take a look at the Iterate API.
