We are working on HomeKit-enabled IoT devices. HomeKit is designed for consumer use and does not have the ability to collect metrics (power, temperature, etc.), so we need to implement metrics collection separately.
Let's say we have 10,000 devices. Each one sends a collection of metrics every 5 seconds, so each second we need to receive 10,000 / 5 = 2,000 collections. The end user needs to see graphs of each metric over a specified period of time (1 week, month, year, etc.). So each day the system will receive 172.8 million records. This raises a lot of questions.
First of all, there's no need to store all of the raw data, since the user only needs graphs over the specified period, so some aggregation is required. What database solution fits this? I doubt any RDBMS will handle such a volume of data. And how do we compute average metric values to present to the end user?
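One way to think about the aggregation question is to downsample the raw 5-second samples into per-hour (or per-day) averages before long-term storage and build the graphs from the rollups. A minimal sketch with pandas, assuming hypothetical column and file names (device_id, metric, value, ts; raw_metrics.csv), not anything from the question:

```python
# Hypothetical sketch: downsampling raw 5-second metric samples into
# per-hour averages with pandas before long-term storage.
# Column names (device_id, metric, value, ts) are assumptions, not a real schema.
import pandas as pd

raw = pd.read_csv("raw_metrics.csv", parse_dates=["ts"])  # ts, device_id, metric, value

hourly = (
    raw.set_index("ts")
       .groupby(["device_id", "metric"])["value"]
       .resample("1h")
       .mean()
       .reset_index()
)

# 'hourly' now holds one averaged row per device/metric/hour -- roughly
# 10,000 devices * N metrics * 24 rows per day instead of 172.8 million raw records.
hourly.to_parquet("metrics_hourly.parquet")
```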
AWS has shared a time-series data processing architecture:
Very much simplified, I think of it this way (a rough sketch of the ingest step follows the list):
Devices push data directly to DynamoDB using the HTTP API
Metrics are stored in one table per 24 hours
At the end of the day a procedure runs on Elastic MapReduce and produces ready-made JSON files with the data required to show graphs per time period
Old tables are stored in Redshift for further applications
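For the first step, the ingest might look something like the sketch below (boto3); the per-day table naming pattern and the attribute names are assumptions for illustration, not part of the AWS reference architecture:

```python
# Hypothetical sketch of the ingest step: writing one metrics record per
# device reading into a per-day DynamoDB table. The table name pattern
# ("metrics_YYYYMMDD") and attribute names are assumptions for illustration.
import time
from decimal import Decimal

import boto3

dynamodb = boto3.resource("dynamodb")

def put_reading(device_id: str, metrics: dict) -> None:
    day = time.strftime("%Y%m%d", time.gmtime())
    table = dynamodb.Table(f"metrics_{day}")           # one table per 24 hours
    table.put_item(Item={
        "device_id": device_id,                        # partition key (assumed)
        "ts": int(time.time() * 1000),                 # sort key, ms since epoch (assumed)
        # DynamoDB numbers must be Decimal, not float
        **{k: Decimal(str(v)) for k, v in metrics.items()},
    })

put_reading("device-0001", {"power_w": 12.4, "temperature_c": 21.7})
```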
Has anyone done something similar before? Maybe there is a simpler architecture?
This requires big data infrastructure such as:
1) Hadoop cluster
2) Spark
3) HDFS
4) HBase
You can use Spark to read the data as a stream. The streamed data can be stored in HDFS, a file system that allows you to store large files across the Hadoop cluster. You can then run a MapReduce job to compute the required data set from HDFS and store it in HBase, the Hadoop database. HDFS is a distributed, scalable big data store for the raw records. Finally, you can use query tools to query HBase.
IoT data --> Spark --> HDFS --> MapReduce --> HBase --> Query HBase
The reason I am suggesting this architecture is scalability. The input data can grow with the number of IoT devices. In the above architecture the infrastructure is distributed, and the nodes in the cluster can grow without limit.
This is a proven architecture for big data analytics applications.
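A minimal sketch of the "Spark reads the stream and lands it in HDFS" step, assuming the readings arrive on a Kafka topic; the topic, broker, schema, and HDFS paths below are illustrative assumptions:

```python
# Minimal sketch of the "Spark reads the stream and lands it in HDFS" step,
# assuming the IoT readings arrive on a Kafka topic named "iot-metrics".
# Requires the spark-sql-kafka connector; all names here are illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, LongType

spark = SparkSession.builder.appName("iot-ingest").getOrCreate()

schema = StructType([
    StructField("device_id", StringType()),
    StructField("ts", LongType()),
    StructField("power_w", DoubleType()),
    StructField("temperature_c", DoubleType()),
])

readings = (
    spark.readStream
         .format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "iot-metrics")
         .load()
         .select(from_json(col("value").cast("string"), schema).alias("r"))
         .select("r.*")
)

# Append the parsed records to HDFS as Parquet; a later batch job
# (MapReduce/Spark) can aggregate them and load the results into HBase.
query = (
    readings.writeStream
            .format("parquet")
            .option("path", "hdfs:///data/iot/raw")
            .option("checkpointLocation", "hdfs:///data/iot/_checkpoints")
            .start()
)
query.awaitTermination()
```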
I am building an IoT device that will be producing 200Kb of data per second, and I need to save this data to storage. I currently have about 500 devices, and I am trying to figure out the best way to store the data, and the best database for this purpose. In the past I have stored data in GCP's BigQuery and done processing using Compute Engine instance groups, but the size of the data was much smaller.
This is my best answer based upon the limited information in your question.
The first step is to document / describe what type of data you are processing. Is it structured data (SQL) or unstructured (NoSQL)? What type of queries do you need to make? How long do you need to store the data, and what is the expected total data size? This will determine the choice of the backend performing the query processing and analytics.
Next you need to look at the rate of data being transmitted. At 200 Kbits (or is it 200 KBytes?) times 500 devices, this is 100 Mbits (or 800 Mbits) per second. How valuable is the data, and how tolerant is your design of data loss? What is the data transfer rate for each device (cellular, wireless, etc.), and how reliable is the connection?
To push the data into the cloud I would use Pub/Sub. Then process the data to merge, combine, compress, purge, etc., and push it to Google Cloud Storage or to BigQuery (though other options may be better, such as Cloud SQL or Cloud Datastore / Bigtable). The answer for the intermediate processor depends on the previous questions, but you will need some horsepower to process that rate of data stream. Options might be Google Cloud Dataproc running Spark or Google Cloud Dataflow.
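A hedged sketch of the device-to-cloud leg using the Pub/Sub Python client; the project and topic names and the payload shape are placeholders, not anything from the question:

```python
# Hypothetical sketch of the device-to-cloud leg: publishing one reading to a
# Pub/Sub topic. Project/topic names and payload fields are placeholders.
import json

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "iot-readings")

def publish_reading(device_id: str, payload: dict) -> None:
    data = json.dumps(payload).encode("utf-8")
    # Attributes let downstream consumers (Dataflow/Dataproc) route or filter
    # messages without decoding the payload.
    future = publisher.publish(topic_path, data, device_id=device_id)
    future.result(timeout=30)  # block until the message is accepted

publish_reading("device-042", {"ts": 1700000000, "bytes_captured": 200_000})
```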
There is a lot to consider for this type of design. My answer has created a bunch of questions, hopefully this will help you architect a suitable solution.
You could also look at IoT Core as a possible way to handle the load-balancing piece (it auto-scales). There would be some upfront overhead registering all your devices, but it then also handles the secure connection (TLS stack + JWT-based auth for devices using IoT Core).
With 500 devices at 200KB/s, that sounds well within the capabilities of the system. Pub/Sub is the limiter, and it handles on the order of 1-2M messages per second, so it should be fine.
I have a "database choice" and architecture question.
Use-case:
Clients will upload large .json files (or another format such as .tsv; it is irrelevant) where each line is data about their customers (e.g. name, address, etc.).
We later need to stream this data in order to process it and store the results, which will also be a large file where each line is data about each customer (approximately the same as the uploaded file).
My requirements:
Streaming should be as fast as possible (e.g. > 1000 rps), and we could have multiple processes running in parallel (for multiple clients)
The database should be scalable and fault tolerant. Because many GB of data could easily be uploaded, it should be easy for me to automatically add new commodity instances (using AWS) if storage runs low.
The database should have some kind of replication, because we don't want to lose data.
No index is required since we are just streaming data.
What database would you suggest for this problem? We tried uploading to Amazon S3 and letting it take care of scaling, etc., but reads/streaming are slow.
Initially uploading the files to S3 is fine, but then pick them up and push each line to Kinesis (or MSK, or even Kafka on EC2s if you prefer); from there, you can hook up the stream-processing framework of your choice (Flink, Spark Streaming, Samza, Kafka Streams, the Kinesis KCL) to do transformations and enrichment, and finally you'll want to pipe the results into a storage stack that allows streaming appends (a rough sketch of the S3-to-Kinesis step follows the list below). A few obvious candidates:
HBase
Druid
Keyspaces for Cassandra
Hudi (or maybe LakeFS?) on top of S3
Which one you choose is kind of up to your needs downstream in terms of query flexibility, latency, integration options/standards, etc.
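A rough sketch of the "pick the file up from S3 and push each line to Kinesis" step with boto3; the bucket, key, and stream names are placeholders:

```python
# Rough sketch: read an uploaded file from S3 line by line and push the lines
# to a Kinesis stream in batches. Bucket, key, and stream names are placeholders.
import boto3

s3 = boto3.client("s3")
kinesis = boto3.client("kinesis")

def stream_file(bucket: str, key: str, stream_name: str) -> None:
    body = s3.get_object(Bucket=bucket, Key=key)["Body"]
    records = []
    for line in body.iter_lines():
        # Hash-based partition key spreads the records across shards.
        records.append({"Data": line, "PartitionKey": str(abs(hash(line)) % 1000)})
        if len(records) == 500:                      # Kinesis limit per PutRecords call
            kinesis.put_records(StreamName=stream_name, Records=records)
            records = []
    if records:
        kinesis.put_records(StreamName=stream_name, Records=records)

stream_file("uploads-bucket", "client-42/customers.json", "customer-stream")
```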
I am trying to use Google Cloud Datastore to store streaming data from IoT devices. I am currently getting data from 10,000 devices at a rate of 2 rows (entities) per minute per device. Data entities would never be updated, only purged at regular intervals. The backend code is in PHP.
Do I need to partition my data to get better performance, as I do at present with my MySQL table? I am currently using table partitions based on a key.
If yes, should I use namespaces (one namespace per device), or should I create one kind per device, such as "device_data_1", "device_data_2"?
No, you do not need partitioning; Datastore performance is not impacted by the number of entities being written or read (as long as they're not in the same entity group, which has an overall write limit of about 1/sec).
See also this somewhat related answer: Does Google Datastore have a provisioned capacity system like DynamoDB?
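An illustrative sketch of the single-kind approach (shown with the Python client rather than the asker's PHP, same idea): one "DeviceData" kind with the device ID as a property, instead of one kind or namespace per device. All names below are assumptions:

```python
# Illustrative sketch: a single "DeviceData" kind with device_id as a property,
# instead of one kind or namespace per device. Names here are assumptions.
import datetime

from google.cloud import datastore

client = datastore.Client()

def save_reading(device_id: str, payload: dict) -> None:
    # Partial key: Datastore auto-allocates the ID, so each reading is a root
    # entity and writes for different devices never share an entity group.
    key = client.key("DeviceData")
    entity = datastore.Entity(key=key)
    entity.update({
        "device_id": device_id,
        "created": datetime.datetime.now(datetime.timezone.utc),
        **payload,
    })
    client.put(entity)

save_reading("device-17", {"temperature_c": 4.2})
```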
We are currently developing a tool to count wildlife passing through defined areas. The gadget that automatically counts the animals will be sending data (weather, # of animals passing, etc.) at a 5-minute interval via HTTP to our API. There will be hundreds of these measurement stations, and it should be scalable.
Now the question arose whether to use a filesystem or an RDBMS to save this data.
Pro DB
save exact time and date when the entry was created
directly related to area# via indexed key
Pro Filesystem
Collecting data is not as resource intensive since for every API call only 1 line will be appended to the file
Properties of the data:
only related to 1 DB entry (the area #)
the measurement stations are in remote areas, so we have to account for outages
What will be done with the data
Give an overview over time periods per area #
Act as an early warning system if the # of animals is surprisingly low/high
Probably by using a cron job and comparing to similar data
We are thinking of choosing an RDBMS to save the data, but I am worried that after millions of entries the DB will slow down and eventually stop working. A question asked here suggests that 360M entries is not really considered "big data", so I'm not quite sure about my task either.
Should we choose the recommended techniques (MongoDB ...) or can this task be handled by PostgreSQL or MySQL?
I have created such a system for marine buoys. The devices send data over GPRS / Iridium using HTTP or raw TCP sockets (to minimize bandwidth).
The receiving server stores the data in a DB table, with the data as provided and a timestamp.
The data is then parsed and records are created in another table.
The devices can also request UTC time from the server, thus not needing an RTC.
Before anything is stored in the "raw" table, a row is appended to a text file. This is purely for logging and for being able to recover from database downtime.
As for the database type, I'd recommend a regular RDBMS. Define markers for your data; we use 4-digit codes, which gives headroom for 10,000 types of measured values.
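A hedged sketch of the ingest flow described above, using PostgreSQL via psycopg2; the table and column names and the marker payload format are illustrative assumptions, not the actual system:

```python
# Hedged sketch of the described ingest flow: append the raw payload to a
# text file first, store it in a "raw" table, then parse it into a readings
# table. Table/column names and the marker payload format are assumptions.
import datetime

import psycopg2

conn = psycopg2.connect("dbname=wildlife user=ingest")

def ingest(station_id: int, raw_payload: str) -> None:
    now = datetime.datetime.now(datetime.timezone.utc)

    # 1) Plain-text log first, so data survives database downtime.
    with open("ingest.log", "a") as log:
        log.write(f"{now.isoformat()}\t{station_id}\t{raw_payload}\n")

    with conn, conn.cursor() as cur:
        # 2) Raw table: payload exactly as received, plus a timestamp.
        cur.execute(
            "INSERT INTO raw_readings (station_id, received_at, payload) "
            "VALUES (%s, %s, %s)",
            (station_id, now, raw_payload),
        )
        # 3) Parsed table: one row per marker/value pair, e.g. "0001=17;0002=3.5".
        for pair in raw_payload.split(";"):
            marker, value = pair.split("=")
            cur.execute(
                "INSERT INTO readings (station_id, received_at, marker, value) "
                "VALUES (%s, %s, %s, %s)",
                (station_id, now, marker, float(value)),
            )

ingest(42, "0001=17;0002=3.5")
```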
My company maintains a number of large time-series databases of process data. We implement a replica of a subset at a pseudo-central location. I access the data from my laptop. Data access over our internal WAN, even to the pseudo-central server, is fairly expensive (in time).
I would like to cache data requests locally on my laptop so that when I access it for a second time I actually pull from a local db.
There is a fairly ugly client-side DAO that I can wrap to maintain the cache, but I'm unsure how I can easily get the "official" client applications to talk to the cache. I have the freedom to write my own "client" graphing/plotting system, and I already have a custom application that does some data mining. The custom application dumps data into .csv files that are manually moved around on a very ad hoc basis.
What is the best approach to this sort of caching/synchronization? What tools could implement the cache?
For further info: I estimate the raw data set at approximately 5-8 TB of raw time-series data per year, with at least half of the data being very compressible. I only want to cache, say, a few hundred MB locally. Ad-hoc queries on the data tend to be very repetitive over very small chunks of the data.
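One way to approach the DAO-wrapping idea: a read-through cache keyed on (tag, start, end) backed by a local SQLite file, so a repeated request for the same series and time range is served from the laptop instead of the WAN. A sketch under the assumption of a hypothetical fetch_series(tag, start, end) method on the existing DAO:

```python
# Sketch of a read-through cache around the (hypothetical) client-side DAO:
# answered queries are stored in a local SQLite file, so a repeated request
# for the same tag/time range never crosses the WAN again.
# The DAO interface (fetch_series) is an assumption, not the real client API.
import json
import sqlite3

class CachingDAO:
    def __init__(self, remote_dao, path="ts_cache.db"):
        self.remote = remote_dao
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS cache "
            "(tag TEXT, start INTEGER, end INTEGER, data TEXT, "
            " PRIMARY KEY (tag, start, end))"
        )

    def fetch_series(self, tag: str, start: int, end: int):
        row = self.db.execute(
            "SELECT data FROM cache WHERE tag=? AND start=? AND end=?",
            (tag, start, end),
        ).fetchone()
        if row:
            return json.loads(row[0])            # cache hit: no WAN round trip
        data = self.remote.fetch_series(tag, start, end)
        self.db.execute(
            "INSERT OR REPLACE INTO cache VALUES (?, ?, ?, ?)",
            (tag, start, end, json.dumps(data)),
        )
        self.db.commit()
        return data
```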