For benchmarking purposes, I have to simulate an online data stream (time series data) to be ingested by various databases.
In this particular use case, the time series data is recorded at very high frequencies.
I would like to process this stream with Apache Kafka.
The stream essentially consists of many real-valued numbers, each with a timestamp and a text label.
I have seen many Kafka producer examples, but none fits my use case very well: how can I use Kafka to GENERATE and PROCESS such a data stream?
EDIT:
Using Python APIs, I am currently able to send data directly to various databases. The remaining issue is speed: my target frequency is close to 15 kHz.
I can barely reach 2 kHz with the same (multi-threaded) Python script.
Any suggestions?
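For reference, this is roughly the kind of generator loop I have in mind, sketched here in Java rather than Python (the topic name, serializers, and CSV encoding are just my assumptions):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class StreamSimulator {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Batching settings matter at high rates; these values are guesses to be tuned.
        props.put("linger.ms", "5");
        props.put("batch.size", "65536");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            String label = "sensor-1"; // text label attached to every sample
            while (true) {
                long ts = System.currentTimeMillis();
                double value = Math.random(); // simulated real-valued measurement
                // send() is asynchronous, so the loop itself is not the bottleneck
                producer.send(new ProducerRecord<>("timeseries", label, ts + "," + value + "," + label));
            }
        }
    }
}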
I have a Flink application running in Amazon's Kinesis Data Analytics Service (managed Flink cluster). In the app, I read in user data from a Kinesis stream, keyBy userId, and then aggregate some user information. After asking this question, I learned that Flink will split the reading of a stream across physical hosts in a cluster. Flink will then forward incoming events to the host that has the aggregator task assigned to the key space that corresponds to the given event.
With this in mind, I am trying to decide what to use as a partition key for the Kinesis stream that my Flink application reads from. My goal is to limit network traffic between hosts in the Flink cluster in order to optimize performance of my Flink application. I can either partition randomly, so the events are evenly distributed across the shards, or I can partition my shards by userId.
The decision depends on how Flink works internally. Is Flink smart enough to assign the local aggregator tasks on a host a key space that will correspond to the key space of the shard(s) the Kinesis consumer task on the same host is reading from? If this is the case, then sharding by userId would result in ZERO network traffic, since each event is streamed by the host that will aggregate it. It seems like Flink would not have a clear way of doing this, since it does not know how the Kinesis streams are sharded.
OR, does Flink randomly assign each Flink consumer task a subset of shards to read and randomly assign aggregator tasks a portion of the key space? If this is the case, then it seems a random partitioning of shards would result in the least amount of network traffic since at least some events will be read by a Flink consumer that is on the same host as the event's aggregator task. This would be better than partitioning by userId and then having to forward all events over the network because the keySpace of the shards did not align with the assigned key spaces of the local aggregators.
Ten years ago, it was really important to ship as little data as possible over the network. In the last five years, networks have become so incredibly fast that you notice little difference between accessing a chunk of data over the network or from memory (random access is of course still much faster), so I wouldn't sweat too much about the additional traffic (unless you have to pay for it). Anecdotally, Google Datastream started to stream all data between two tasks through a central shuffle server, effectively doubling the traffic; yet they still see tremendous speedups on their petabyte network.
So with that in mind, let's move on to Flink. Flink currently has no way to dynamically adjust to shards, which can come and go over time. In half a year, with FLIP-27, that could be different.
For now, there is a workaround, currently used mostly in Kafka-land (static partitioning): DataStreamUtils#reinterpretAsKeyedStream allows you to specify a logical keyBy without a physical shuffle. Of course, you are responsible for making sure the declared partitioning matches reality, or else you will get incorrect results.
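A minimal sketch of what that looks like (the Event POJO, the userId field, and the inline source standing in for your Kinesis consumer are placeholders, not your actual code):

import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.DataStreamUtils;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ReinterpretSketch {
    // Placeholder POJO; in the real job this would be the deserialized Kinesis event.
    public static class Event {
        public String userId;
        public long value;
        public Event() {}
        public Event(String userId, long value) { this.userId = userId; this.value = value; }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Stand-in for the Kinesis source; assume the stream is already sharded by userId,
        // so every event of a given user is read by the same source subtask.
        DataStream<Event> events = env.fromElements(new Event("alice", 1), new Event("alice", 2));

        // Declare the stream as already keyed by userId: no network shuffle is inserted.
        KeyedStream<Event, String> keyed = DataStreamUtils.reinterpretAsKeyedStream(
                events,
                new KeySelector<Event, String>() {
                    @Override
                    public String getKey(Event e) {
                        return e.userId;
                    }
                });

        // Any keyed aggregation now runs locally on the reading host.
        keyed.sum("value").print();
        env.execute("reinterpret-as-keyed-stream-sketch");
    }
}

This only avoids the shuffle if the physical partitioning really holds (a given userId is never split across source subtasks); otherwise the results will be wrong, as noted above.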
We are working on HomeKit-enabled IoT devices. HomeKit is designed for consumer use and does not have the ability to collect metrics (power, temperature, etc.), so we need to implement that separately.
Let's say we have 10,000 devices. Each sends one collection of metrics every 5 seconds, so every second we need to receive 10,000 / 5 = 2,000 collections. The end user needs to see graphs of each metric over a specified period of time (1 week, a month, a year, etc.). So each day the system will receive 172.8 million records. This raises a lot of questions.
First of all, there is no need to store all the raw data, since the user only needs graphs over the specified period, so some aggregation is required. What database solution fits this? I believe no RDBMS will handle such a volume of data. And how do we compute average metric values to present to the end user?
AWS has published a time-series data processing architecture:
Very much simplified, I think of it this way:
Devices push data directly to DynamoDB using an HTTP API
Metrics are stored in one table per 24 hours
At the end of the day, a procedure runs on Elastic MapReduce and produces ready-made JSON files with the data required to show graphs per time period
Old tables are stored in Redshift for further applications
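For the device-to-DynamoDB step, I imagine the write looking roughly like this (AWS SDK v2 for Java; the table-per-day name and attribute names are only my guesses):

import java.util.Map;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.PutItemRequest;

public class MetricWriter {
    public static void main(String[] args) {
        try (DynamoDbClient db = DynamoDbClient.create()) {
            // One table per 24 hours, e.g. "metrics_2024_01_15" (naming is hypothetical).
            String table = "metrics_2024_01_15";

            Map<String, AttributeValue> item = Map.of(
                    "deviceId", AttributeValue.builder().s("device-42").build(),
                    "ts", AttributeValue.builder().n(Long.toString(System.currentTimeMillis())).build(),
                    "power", AttributeValue.builder().n("12.5").build(),
                    "temperature", AttributeValue.builder().n("21.3").build());

            // Write one collection of metrics as a single item.
            db.putItem(PutItemRequest.builder()
                    .tableName(table)
                    .item(item)
                    .build());
        }
    }
}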
Has anyone done something similar before? Maybe there is a simpler architecture?
This requires big data infrastructure such as:
1) Hadoop cluster
2) Spark
3) HDFS
4) HBase
You can use Spark to read the data as a stream. The streamed data can be stored in HDFS, a file system that allows you to store large files across the Hadoop cluster. You can then run a MapReduce job to extract the required data set from HDFS and store it in HBase, which is the Hadoop database. HDFS is a distributed, scalable big data store for the raw records. Finally, you can use query tools to query HBase.
IoT data --> Spark --> HDFS --> MapReduce --> HBase --> Query HBase
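As a rough sketch of the Spark --> HDFS part of that pipeline (Structured Streaming in Java; the Kafka topic, HDFS paths, and Parquet format are assumptions on my side):

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

public class IotIngest {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("iot-ingest")
                .getOrCreate();

        // Read the IoT data as a stream (here from a Kafka topic named "iot-metrics").
        Dataset<Row> raw = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "localhost:9092")
                .option("subscribe", "iot-metrics")
                .load();

        // Continuously append the raw records to HDFS as Parquet files;
        // a separate batch job (MapReduce/Spark) can later load them into HBase.
        StreamingQuery query = raw.writeStream()
                .format("parquet")
                .option("path", "hdfs:///data/iot/raw")
                .option("checkpointLocation", "hdfs:///checkpoints/iot")
                .start();

        query.awaitTermination();
    }
}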
The reason I am suggesting this architecture is scalability. The input data can grow with the number of IoT devices; in the above architecture, the infrastructure is distributed and the nodes in the cluster can be scaled out without limit.
This is a proven architecture in big data analytics applications.
I'm currently reading up on Kafka, trying to find a way to separate our time-series database storage engine from our application by making it a generic stand-alone microservice rather than the integral part of our application it currently is.
We currently store our sample data (with timestamps) in our in-house developed time-series database, and our application enables us to run a range of analyses dedicated to our industry.
Kafka seems ideal for continuously streaming data into it and out of it (which we need as well), but querying a data source over a set period of time in the past, to get a result stream that therefore has a beginning and an end, does not seem to be within Kafka's scope.
That is, I can't find a proper way to do that in Kafka yet.
Having read this: https://www.confluent.io/blog/hello-world-kafka-connect-kafka-streams/ I think I'm very close to what I want, but I can't yet see how Kafka handles different queries for different recorded sample sets over different periods of time.
We have a lot of sample data sets over a long period of time (3+ years of tens of thousands of sample sets, sampled every 5 seconds to every 1 minute), and as our storage is limited, I hope Kafka offers a more 'transient' way of getting our data every time we want to run an analysis than storing the result data of every request for 2 days (as it is set by default), if I understand that correctly.
I'm just so close, but I can't get my head around how to do this properly in Kafka.
Thank you very much for your time.
I have a map DataStream with a parallelism of 8. I add two sinks to the DataStream: one is slow (Elasticsearch), the other one is fast (HDFS). However, my events are only written to HDFS after they have been flushed to ES, so it takes an order of magnitude longer with ES than without it.
dataStream.setParallelism(8);
dataStream.addSink(elasticsearchSink);
dataStream.addSink(hdfsSink);
It appears to me that both sinks use the same thread. Is it possible to write the output in parallel while using the same source with two sinks, or do I have to add another job, one for each sink?
I checked in the logs that Map(1/8) to Map(8/8) are deployed and receiving data.
If the Elasticsearch sink cannot keep up with the rate at which its input is produced, it slows down its input operator(s). This concept is called backpressure: a slow consumer blocks a fast producer from processing.
The only way to make your program behave as you expect (the HDFS sink writing faster than the Elasticsearch sink) would be to buffer all records that the HDFS sink has already written but the Elasticsearch sink hasn't. If the Elasticsearch sink is consistently slower, you will run out of memory or disk space at some point.
Flink's approach to handling slow consumers is backpressure.
I see two ways to fix this issue:
Increase the parallelism of the ElasticsearchSink (see the sketch after this list). Whether this helps depends on the capabilities of your Elasticsearch setup.
Run both pipelines as independent jobs. In this case you'll have to compute all results twice.
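For the first option, the parallelism can be set directly on the sink; roughly like this, reusing your variable names (the value 16 is only an example):

// Give the slow Elasticsearch sink more parallel instances than the rest of the job.
dataStream.addSink(elasticsearchSink).setParallelism(16).name("es-sink");
// The HDFS sink keeps the upstream parallelism of 8.
dataStream.addSink(hdfsSink).name("hdfs-sink");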
I am working on building an application with the requirements below, and I am just getting started with Flink.
Ingest data into Kafka with, say, 50 partitions (incoming rate: 100,000 msgs/sec)
Read data from Kafka and process each record in real time (do some computation, compare with old data, etc.)
Store the output in Cassandra
I was looking for a real-time streaming platform and found Flink to be a great fit for both real-time and batch processing.
Do you think Flink is the best fit for my use case, or should I use Storm, Spark Streaming, or another streaming platform?
Do I need to write a data pipeline in Google Dataflow to execute my sequence of steps on Flink, or is there another way to perform a sequence of steps for real-time streaming?
Say each of my computations takes about 20 milliseconds; how can I design this better with Flink and get better throughput?
Can I use Redis or Cassandra to fetch some data within Flink for each computation?
Will I be able to use a JVM in-memory cache inside Flink?
Also, can I aggregate data by key over a time window (for example, 5 seconds)? Say 100 messages come in and 10 of them have the same key; can I group all messages with the same key together and process them?
Are there any tutorials on best practices for using Flink?
Thanks, and I appreciate all your help.
Given your task description, Apache Flink looks like a good fit for your use case.
In general, Flink provides low latency and high throughput and has parameters to tune the trade-off between them. You can read data from and write data to Redis or Cassandra; however, you can also store state internally in Flink. Flink also has sophisticated support for windows. You can read the blog on the Flink website, check out the documentation for more information, or follow the Flink training to learn the API.
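For example, the 5-second keyed aggregation you mention maps directly onto a keyed tumbling window. A minimal sketch (the tuple source below is only a stand-in; a real job would read from Kafka and write to a Cassandra sink):

import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class KeyedWindowSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Stand-in for the Kafka source: (key, value) pairs.
        DataStream<Tuple2<String, Long>> messages = env.fromElements(
                Tuple2.of("deviceA", 1L), Tuple2.of("deviceA", 2L), Tuple2.of("deviceB", 3L));

        messages
                // group all messages with the same key together
                .keyBy(new KeySelector<Tuple2<String, Long>, String>() {
                    @Override
                    public String getKey(Tuple2<String, Long> t) {
                        return t.f0;
                    }
                })
                // collect each key's messages into 5-second windows
                .window(TumblingProcessingTimeWindows.of(Time.seconds(5)))
                // combine the messages of one key and window (here: sum the values)
                .reduce(new ReduceFunction<Tuple2<String, Long>>() {
                    @Override
                    public Tuple2<String, Long> reduce(Tuple2<String, Long> a, Tuple2<String, Long> b) {
                        return Tuple2.of(a.f0, a.f1 + b.f1);
                    }
                })
                .print(); // replace with a Cassandra sink in the real pipeline

        env.execute("keyed-window-sketch");
    }
}

With an unbounded Kafka source, each window emits one aggregate per key every 5 seconds; with the bounded example source above, the job may finish before the processing-time window fires, so treat it purely as a structural sketch.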