InfluxDB data structure & database model - database

Can you please tell me which data structure InfluxDB has and which data model InfluxDB uses? Is it a key-value model? I read the full documentation and I didn't catch that.
Thank you in advance!

1. Data model and terminology
An InfluxDB database stores points. A point has four components: a measurement, a tagset, a fieldset, and a timestamp.
The measurement provides a way to associate related points that might have different tagsets or fieldsets. The tagset is a dictionary of key-value pairs to store metadata with a point. The fieldset is a set of typed scalar values—the data being recorded by the point.
The serialization format for points is defined by the [line protocol] (which includes additional examples and explanations if you’d like to read more detail). An example point from the specification helps to explain the terminology:
temperature,machine=unit42,type=assembly internal=32,external=100 1434055562000000035
The measurement is temperature.
The tagset is machine=unit42,type=assembly. The keys, machine and type, in the tagset are called tag keys. The values, unit42 and assembly, in the tagset are called tag values.
The fieldset is internal=32,external=100. The keys, internal and external, in the fieldset are called field keys. The values, 32 and 100, in the fieldset are called field values.
Each point is stored within exactly one database within exactly one retention policy. A database is a container for users, retention policies, and points. A retention policy configures how long InfluxDB keeps points (duration), how many copies of those points are stored in the cluster (replication factor), and the time range covered by shard groups (shard group duration). The retention policy makes it easy for users (and efficient for the database) to drop older data that is no longer needed. This is a common pattern in time series applications.
We’ll explain replication factor, shard groups, and shards later when we describe how the write path works in InfluxDB.
There’s one additional term that we need to get started: series. A series is simply a shortcut for saying retention policy + measurement + tagset. All points with the same retention policy, measurement, and tagset are members of the same series.
You can refer to the [documentation glossary] for these terms or others that might be used in this blog post series.
2. Receiving points from clients
Clients POST points (in line protocol format) to InfluxDB’s HTTP /write endpoint. Points can be sent individually; however, for efficiency, most applications send points in batches. A typical batch ranges in size from hundreds to thousands of points. The POST specifies a database and an optional retention policy via query parameters. If the retention policy is not specified, the default retention policy is used. All points in the body will be written to that database and retention policy. Points in a POST body can be from an arbitrary number of series; points in a batch do not have to be from the same measurement or tagset.
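For example, here is a minimal Python sketch of such a batched write; the host, database name, and the second point are illustrative assumptions, not taken from the post:

import requests

# Two points in line protocol; the database name "mydb" and the second point are made up.
batch = "\n".join([
    "temperature,machine=unit42,type=assembly internal=32,external=100 1434055562000000035",
    "temperature,machine=unit43,type=assembly internal=30,external=95 1434055563000000000",
])

resp = requests.post(
    "http://localhost:8086/write",           # InfluxDB 1.x HTTP write endpoint
    params={"db": "mydb", "rp": "autogen"},  # rp is optional; the default policy is used if omitted
    data=batch.encode("utf-8"),
)
resp.raise_for_status()                       # InfluxDB answers 204 No Content on success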
When the database receives new points, it must (1) make those points durable so that they can be recovered in case of a database or server crash and (2) make the points queryable. This post focuses on the first half, making points durable.
3. Persisting points to storage
To make points durable, each batch is written and fsynced to a write ahead log (WAL). The WAL is an append only file that is only read during a database recovery. For space and disk IO efficiency, each batch in the WAL is compressed using [snappy compression] before being written to disk.
While the WAL format efficiently makes incoming data durable, it is an exceedingly poor format for reading—making it unsuitable for supporting queries. To allow immediate query ability of new data, incoming points are also written to an in-memory cache. The cache is an in-memory data structure that is optimized for query and insert performance. The cache data structure is a map of series to a time-sorted list of fields.
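As a rough illustration only (a Python sketch, not InfluxDB's actual Go implementation), the cache's shape could be pictured like this:

# cache: series key -> time-sorted list of (timestamp, fields); purely illustrative.
cache = {}

def cache_write(measurement, tags, fields, timestamp):
    # The series key is the measurement plus the sorted tagset.
    series_key = measurement + "," + ",".join(
        "{}={}".format(k, v) for k, v in sorted(tags.items()))
    entries = cache.setdefault(series_key, [])
    entries.append((timestamp, fields))
    entries.sort(key=lambda e: e[0])          # keep the list sorted by timestamp

cache_write("temperature", {"machine": "unit42", "type": "assembly"},
            {"internal": 32, "external": 100}, 1434055562000000035)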
The WAL makes new points durable. The cache makes new points queryable. If the system crashes or shuts down before the cache is written to TSM files, it is rebuilt when the database starts by reading and replaying the batches stored in the WAL.
The combination of WAL and cache works well for incoming data but is insufficient for long-term storage. Since the WAL must be replayed on startup, it is important to constrain it to a reasonable size. The cache is limited to the size of RAM, which is also undesirable for many time series use cases. Consequently, data needs to be organized and written to long-term storage blocks on disk that are size-efficient (so that the database can store a lot of points) and efficient for query.
Time series queries are frequently aggregations over time—scans of points within a bounded time range that are then reduced by a summary function like mean, max, or moving windows. Columnar database storage techniques, where data is organized on disk by column and not by row, fit this query pattern nicely. Additionally, columnar systems compress data exceptionally well, satisfying the need to store data efficiently. There is a lot of literature on column stores. [Columnar-oriented Database Systems] is one such overview.
Time series applications often evict data from storage after a period of time. Many monitoring applications, for example, will store the last month or two of data online to support monitoring queries. It needs to be efficient to remove data from the database if a configured time-to-live expires. Deleting points from columnar storage is expensive, so InfluxDB additionally organizes its columnar format into time-bounded chunks. When the time-to-live expires, the time-bounded file can simply be deleted from the filesystem rather than requiring a large update to persisted data.
Finally, when InfluxDB is run as a clustered system, it replicates data across multiple servers for availability and durability in case of failures.
The optional time-to-live duration, the granularity of time blocks within the time-to-live period, and the number of replicas are configured using an InfluxDB retention policy:
CREATE RETENTION POLICY <retention_policy_name> ON <database_name> DURATION <duration> REPLICATION <n> [SHARD DURATION <duration>] [DEFAULT]
The duration is the optional time to live (if data should not expire, set duration to INF). SHARD DURATION is the granularity of data within the expiration period. For example, a one-hour shard duration with a 24-hour duration configures the database to store 24 one-hour shards. Each hour, the oldest shard is expired (removed) from the database. Set REPLICATION to configure the replication factor: how many copies of a shard should exist within a cluster.
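As a hedged example, the same statement could be issued from a script through the HTTP /query endpoint of a local InfluxDB 1.x instance; the host, policy name, and database name below are illustrative:

import requests

q = ('CREATE RETENTION POLICY "one_day" ON "mydb" '
     'DURATION 24h REPLICATION 1 SHARD DURATION 1h DEFAULT')

# Schema-changing statements are sent to /query via POST in InfluxDB 1.x.
resp = requests.post("http://localhost:8086/query", params={"q": q})
resp.raise_for_status()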
Concretely, the database creates this physical organization of data on disk:
Database directory:            /db
  Retention policy directory:  /db/rp
    Shard group (time bounded, logical)
      Shard directory:         /db/rp/Id#
        TSM0001.tsm (data file)
        TSM0002.tsm (data file)
        …
The in-memory cache is flushed to disk in the TSM format. When the flush completes, flushed points are removed from the cache and the corresponding WAL is truncated. (The WAL and cache are also maintained per-shard.) The TSM data files store the columnar-organized points. Once written, a TSM file is immutable. A detailed description of the TSM file layout is available in the [InfluxDB documentation].
4. Compacting persisted points
The cache is a relatively small amount of data. The TSM columnar format works best when it can store long runs of values for a series in a single block. A longer run both compresses better and reduces the seeks needed to scan a field for a query. The TSM format is based heavily on log-structured merge-trees. New (level one) TSM files are generated by cache flushes. These files are later combined (compacted) into level two files. Level two files are further combined into level three files. Additional levels of compaction occur as the files become larger and eventually become cold (the time range they cover is no longer hot for writes). The documentation reference above offers a detailed description of compaction.
There’s a lot of logic and sophistication in the TSM compaction code. However, the high-level goal is quite simple: organize values for a series together into long runs to best optimize compression and scanning queries.
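At its core, that goal amounts to merging several smaller time-sorted runs for a series into one longer run. A simplified illustration (not the real TSM compaction code):

import heapq

def compact(runs):
    # Each run is a list of (timestamp, value) pairs already sorted by time;
    # merging preserves the time order and yields one long run for the series.
    return list(heapq.merge(*runs, key=lambda point: point[0]))

level1_a = [(1, 10.0), (5, 11.2)]
level1_b = [(2, 10.5), (7, 12.0)]
level2 = compact([level1_a, level1_b])  # [(1, 10.0), (2, 10.5), (5, 11.2), (7, 12.0)]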
Refer: https://www.influxdata.com/blog/influxdb-internals-101-part-one/

It is essentially key-value: the key is time, and the value can be one or more fields/columns. There can also optionally be indexed columns, called tags in InfluxDB, which are optimised for searching along with time (which is always required). At least one non-indexed field value is required.
See schema design documentation for more details.
Much like Cassandra, in fact, though InfluxDB is essentially schema-on-write, while developers define the schema up front for Cassandra.
Storage-engine-wise it is again very similar to Cassandra, using a variation of the SSTables used in Cassandra, optimised for time series data.

I am not sure if the following influx document was there when you were looking for your answer:
https://docs.influxdata.com/influxdb/v1.5/concepts/key_concepts/
But it really helped me understand the data structure of InfluxDB.

Related

What is an appropriate database solution for high throughput updates?

Suppose I am a subscription service and I have a table with each row representing customer data.
I want to build a system that consumes a daily snapshot of customer data. This daily snapshot contains data for all currently existing customers (i.e., there will be rows for new customers, and customers who unsubscribed will not appear in this data). I also need to keep track of the duration that each customer has been subscribed, using start and end times. If a customer re-subscribes, another start/stop entry is added for that customer. A sample record/schema is shown below.
{
  "CustomerId": "12345",
  "CustomerName": "Bob",
  "MagazineName": "DatabaseBoys",
  "Gender": "Male",
  "Address": "{streetName: \"Sesame Street\", ...}",
  "SubscriptionTimeRanges": [{start:12345678, end: 23456789}, {start:34567890, end: 45678901},...]
}
I will be processing >250,000 rows of data once a day, every day
I need to know whether any record in the snapshot doesn't currently exist in the database
The total size of the table will be >250,000
There are longer-term benefits that would come from having a relational database (e.g., joining to another table that contains Magazine information)
I would like to get records by either CustomerId or MagazineName
Writes should not block reads
To achieve this, I anticipate needing to scan the entire table, iterating over every record, and individually updating the SubscriptionTimeRanges array/JSON blob of each record
Latency for writes is not a hard requirement, but at the same time, I shouldn't be expecting to take over an hour to update all of these records (could this be done in a single transaction if it's an update...?)
Reads should also be quick
Concurrent processing is always nice, but that may introduce locking for ACID compliant dbs?
I know that DynamoDB would be quick at handling this kind of use case, and the record schema is right up the NoSQL alley. I can use global secondary indexes / local secondary indexes to resolve some of my issues. I have some experience in PostgreSQL when using Redshift, but I mostly dealt with bulk inserts with no need for data modification. Now I need the data modification aspect. I'm thinking RDS Postgres would be nice for this, but would love to hear your thoughts or opinions.
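For concreteness, the daily merge I have in mind could be expressed as a single batched upsert on Postgres; a rough sketch with psycopg2, where the table and column names (customers, subscription_time_ranges) are purely illustrative assumptions:

import psycopg2
from psycopg2.extras import Json, execute_values

def merge_snapshot(conn, snapshot_rows):
    # Assumed (illustrative) table:
    #   customers(customer_id text primary key, magazine_name text,
    #             data jsonb, subscription_time_ranges jsonb)
    sql = """
        INSERT INTO customers (customer_id, magazine_name, data, subscription_time_ranges)
        VALUES %s
        ON CONFLICT (customer_id) DO UPDATE
        SET data = EXCLUDED.data,
            subscription_time_ranges =
                customers.subscription_time_ranges || EXCLUDED.subscription_time_ranges
    """
    # The jsonb || operator appends the snapshot's ranges to the stored array;
    # de-duplication of ranges is left out of this sketch.
    values = [(r["CustomerId"], r["MagazineName"], Json(r), Json(r["SubscriptionTimeRanges"]))
              for r in snapshot_rows]
    with conn.cursor() as cur:
        execute_values(cur, sql, values, page_size=1000)
    conn.commit()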
P.S. don't take the "subscription" system design too seriously, it's the best parallel example I could think of when setting an example for similar requirements.. :)
This is a subjective question, but objectively speaking, DynamoDB is not designed for scans. It can do them, but it requires making repeated requests in a loop, starting each request where the last one left off. This isn't quick for large data sets, so there is also parallel scan, but you have to juggle the threads yourself and you consume a lot of table throughput with it.
On the flip side, it is easy and inexpensive to prototype and test against DynamoDB using the SDKs.
But with the daily need to scan the data, and potential need for joins, I would be strongly inclined to go with a relational database.
250,000 rows of data processed daily probably does not justify use of Amazon Redshift. It has a sweet spot of millions to billions of rows and is typically used when you want to run queries throughout the day.
If an RDS database suits your needs, then go for it! If you wish to save cost, you could accumulate the records in Amazon S3 throughout the day and then just load and process the data once per day, turning off the database when it isn't required. (Or even terminate it and launch a new one the next day, since it seems that you don't need to access historical data.)
Amazon Athena might even suit your needs, reading the daily data from S3 and not even needing a persistent database.

Storing and accessing a large number of relatively small files

I am running lots of very slow computations with reusable results (and often computing something new relies on a computation that was already performed before). To make use of them, I want to store the results somewhere (permanently). The computations can be uniquely identified by two identifiers: experiment name and computation name, and the value is an array of floats (which I currently store as raw binary data). They need to be individually accessed (read and written) by experiment and computation name very often, and sometimes also just by experiment name (i.e. all computations with their results for a given experiment). They are also sometimes concatenated, but if reading and writing is fast, no additional support for this operation is needed. This data will not need to be accessed for any web application (used only by non-production scripts that need the results of the computations, but calculating them each time is not feasible), and there is no need for transactions, but every write needs to be atomic (e.g. turning off the computer should not result in corrupted/partial data). Reading also needs to be atomic (e.g. if two processes try to access a result of one computation, and it's not there, so one of them starts saving the new result, the other process should either receive it when it's done, or receive nothing at all). Accessing the data remotely is not required, but helpful.
So, TL;DR requirements:
permanent storage of binary data (no metadata other than the identifier needs to be stored)
very fast access (read/write) based on a compound identifier
ability to read all data by one part of a compound identifier
concurrent, atomic read/write
no need for transactions, complex queries, etc.
remote access would be nice to have, but not required
the whole thing is there mostly to save time, so speed is critical
The solutions I tried so far are:
storing them as individual files (one directory per experiment, one binary file per computation) - requires manual handling of atomicity, and most file systems support file names only up to 255 characters (and computation names may be longer than that), so an additional mapping would be required; also, I'm not sure ext4 (the filesystem I'm using and cannot change) is designed to handle millions of files
using a sqlite database (with just one table and a compound primary key) - at first it seemed perfect, but when we got to hundreds of gigabytes of data (millions of ~100 KB blobs, and both number of them and their size will increase), it started being really slow, even after applying optimizations found on the internet
Naturally, after sqlite failed, the first idea was to just move to a "proper" database like postgres, but then I realized that perhaps in this case a relational database is not really the way to go (especially since speed is critical here, and I don't need most of their features). Postgres in particular is probably not the way to go, since the closest thing to a blob is bytea, which requires additional conversions (so a performance hit is guaranteed). However, after researching a bit about key-value databases (which seemed to apply to my problem), I found out that none of the databases I checked support compound keys, and they often have length limitations for keys (e.g. couchbase allows just 250 bytes). So, should I just go with a normal relational database, try one of the NoSQL databases, or maybe something completely different like HDF5?
One way to improve on the database solution is to externalize the data blob.
You can use SeaweedFS https://github.com/chrislusf/seaweedfs as an object store, upload the blob and get a file id, and then store the file id in the database. (I am working on SeaweedFS.)
This should reduce the database load quite a bit, and querying will be much faster.
So, I ended up using a relational database anyway (since only there I could use compound keys without any hacks).
I performed a benchmark to compare sqlite with postgres and mysql: 500,000 inserts of ~60 KB blobs and then 50,000 selects by the whole key. This was not enough to slow sqlite down to the unacceptable levels I was experiencing, but it set a point of reference (i.e. the speed at which sqlite was running with this few records was acceptable to me). I assumed that I wouldn't experience a huge performance hit when adding more records with mysql and postgres (since they were designed to work with much larger amounts of data than sqlite), and when finally using one of them, that turned out to be true.
The settings (other than defaults) were following:
sqlite: journal mode=wal (required for parallel access), isolation level autocommit, values as BLOB
postgres: isolation level autocommit (can't turn off transactions, and doing everything in one huge transaction was not an option for me), values as BYTEA (which sadly includes the double conversion I wrote about)
mysql: engine=aria, transactions disabled, values as MEDIUMBLOB
As you can see, I was able to customize mysql much more to fit the task at hand. The results below reflect it well:
            sqlite        postgres      mysql
selects     90.816292     191.910514    106.363534
inserts     4367.483822   7227.473075   5081.281370
Mysql had similar speed to sqlite, with postgres being significantly slower.
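For readers who want to reproduce the setup, here is a minimal sketch of the sqlite variant described above (compound primary key, BLOB values, WAL mode, autocommit); the file and table names are illustrative:

import sqlite3

conn = sqlite3.connect("results.db", isolation_level=None)  # autocommit
conn.execute("PRAGMA journal_mode=WAL")                      # needed for parallel access
conn.execute("""
    CREATE TABLE IF NOT EXISTS results (
        experiment  TEXT NOT NULL,
        computation TEXT NOT NULL,
        value       BLOB NOT NULL,
        PRIMARY KEY (experiment, computation)
    )
""")

def save(experiment, computation, raw_bytes):
    conn.execute("INSERT OR REPLACE INTO results VALUES (?, ?, ?)",
                 (experiment, computation, raw_bytes))

def load(experiment, computation):
    row = conn.execute("SELECT value FROM results WHERE experiment=? AND computation=?",
                       (experiment, computation)).fetchone()
    return row[0] if row else None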

Storing time-series data, relational or non?

I am creating a system which polls devices for data on varying metrics such as CPU utilisation, disk utilisation, temperature etc. at (probably) 5 minute intervals using SNMP. The ultimate goal is to provide visualisations to a user of the system in the form of time-series graphs.
I have looked at using RRDTool in the past, but rejected it as storing the captured data indefinitely is important to my project, and I want higher level and more flexible access to the captured data. So my question is really:
What is better, a relational database (such as MySQL or PostgreSQL) or a non-relational/NoSQL database (such as MongoDB or Redis), with regard to performance when querying data for graphing?
Relational
Given a relational database, I would use a data_instances table, in which would be stored every instance of data captured for every metric being measured for all devices, with the following fields:
Fields: id fk_to_device fk_to_metric metric_value timestamp
When I want to draw a graph for a particular metric on a particular device, I must query this singular table filtering out the other devices, and the other metrics being analysed for this device:
SELECT metric_value, timestamp FROM data_instances
WHERE fk_to_device=1 AND fk_to_metric=2
The number of rows in this table would be:
d * m_d * f * t
where d is the number of devices, m_d is the accumulative number of metrics being recorded for all devices, f is the frequency at which data is polled for and t is the total amount of time the system has been collecting data.
For a user recording 10 metrics for 3 devices every 5 minutes for a year, we would have just under 5 million records.
Indexes
Without indexes on fk_to_device and fk_to_metric scanning this continuously expanding table would take too much time. So indexing the aforementioned fields and also timestamp (for creating graphs with localised periods) is a requirement.
Non-Relational (NoSQL)
MongoDB has the concept of a collection; unlike tables, these can be created programmatically without setup. With these I could partition the storage of data for each device, or even each metric recorded for each device.
I have no experience with NoSQL and do not know whether such systems provide any query-performance-enhancing features such as indexing; however, the previous paragraph proposes doing most of the traditional relational query work in the structure by which the data is stored under NoSQL.
Undecided
Would a relational solution with correct indexing reduce to a crawl within the year? Or does the collection based structure of NoSQL approaches (which matches my mental model of the stored data) provide a noticeable benefit?
Definitely Relational. Unlimited flexibility and expansion.
Two corrections, both in concept and application, followed by an elevation.
Correction
It is not "filtering out the un-needed data"; it is selecting only the needed data. Yes, of course, if you have an Index to support the columns identified in the WHERE clause, it is very fast, and the query does not depend on the size of the table (grabbing 1,000 rows from a 16 billion row table is instantaneous).
Your table has one serious impediment. Given your description, the actual PK is (Device, Metric, DateTime). (Please don't call it TimeStamp, that means something else, but that is a minor issue.) The uniqueness of the row is identified by:
(Device, Metric, DateTime)
The Id column does nothing, it is totally and completely redundant.
An Id column is never a Key (duplicate rows, which are prohibited in a Relational database, must be prevented by other means).
The Id column requires an additional Index, which obviously impedes the speed of INSERT/DELETE, and adds to the disk space used.
You can get rid of it. Please.
Elevation
Now that you have removed the impediment, you may not have recognised it, but your table is in Sixth Normal Form. Very high speed, with just one Index on the PK. For understanding, read this answer from the What is Sixth Normal Form ? heading onwards.
(I have one index only, not three; on the Non-SQLs you may need three indices).
I have the exact same table (without the Id "key", of course). I have an additional column Server. I support multiple customers remotely.
(Server, Device, Metric, DateTime)
The table can be used to Pivot the data (ie. Devices across the top and Metrics down the side, or pivoted) using exactly the same SQL code (yes, switch the cells). I use the table to erect an unlimited variety of graphs and charts for customers re their server performance.
Monitor Statistics Data Model.
(Too large for inline; some browsers cannot load inline; click the link. Also, that is the obsolete demo version; for obvious reasons, I cannot show you the commercial product DM.)
It allows me to produce Charts Like This, six keystrokes after receiving a raw monitoring stats file from the customer, using a single SELECT command. Notice the mix-and-match; OS and server on the same chart; a variety of Pivots. Of course, there is no limit to the number of stats matrices, and thus the charts. (Used with the customer's kind permission.)
Readers who are unfamiliar with the Standard for Modelling Relational Databases may find the IDEF1X Notation helpful.
One More Thing
Last but not least, SQL is an IEC/ISO/ANSI Standard. The freeware is actually Non-SQL; it is fraudulent to use the term SQL if they do not provide the Standard. They may provide "extras", but they are absent the basics.
I found the above answers very interesting. Let me try to add a couple more considerations here.
1) Data aging
Time-series management usually needs aging policies. A typical scenario (e.g. monitoring a server's CPU) requires storing:
1-sec raw samples for a short period (e.g. 24 hours)
5-min detail aggregate samples for a medium period (e.g. 1 week)
1-hour aggregates beyond that (e.g. up to 1 year)
Although relational models can certainly manage this appropriately (my company has implemented massive centralized databases for some large customers with tens of thousands of data series), the new breed of data stores adds interesting functionality worth exploring, like:
automated data purging (see Redis' EXPIRE command)
multidimensional aggregations (e.g. map-reduce jobs a-la-Splunk)
2) Real-time collection
Even more importantly, some non-relational data stores are inherently distributed and allow much more efficient real-time (or near-real-time) data collection, which can be a problem for an RDBMS because of hotspots (maintaining indexes while inserting into a single table). In the RDBMS space this problem is typically solved by reverting to batch import procedures (we managed it this way in the past), while NoSQL technologies have succeeded at massive real-time collection and aggregation (see Splunk, for example, mentioned in previous replies).
Your data lives in a single table, so relational vs. non-relational is not really the question. Basically you need to read a lot of sequential data. Now, if you have enough RAM to hold a year's worth of data, there is nothing like using Redis/MongoDB, etc.
Mostly, NoSQL databases will store your data in the same location on disk and in compressed form, to avoid multiple disk accesses.
NoSQL does the same thing as creating an index on device id and metric id, but in its own way. With a relational database, even if you do this, the index and the data may be in different places and there would be a lot of disk IO.
Tools like Splunk use NoSQL backends to store time series data and then use map-reduce to create aggregates (which might be what you want later). So in my opinion NoSQL is an option, as people have already tried it for similar use cases. But will a million rows bring the database to a crawl? Maybe not, with decent hardware and proper configuration.
Create a file and name it 1_2.data. Weird idea? Here is what you get:
You save up to 50% of space because you don't need to repeat the fk_to_device and fk_to_metric value for every data point.
You save even more space because you don't need any indices.
Save pairs of (timestamp, metric_value) to the file by appending the data, so you get an ordering by timestamp for free (assuming your sources don't send out-of-order data for a device).
=> Queries by timestamp run amazingly fast because you can use binary search to find the right place in the file to read from.
If you want to optimize even further, start thinking about splitting your files like this:
1_2_january2014.data
1_2_february2014.data
1_2_march2014.data
Or use kdb+ from http://kx.com, because they do all of this for you :) Column-oriented storage is what may help you.
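For the do-it-yourself route, here is a rough sketch of the append-and-binary-search idea above, assuming fixed-size (timestamp, value) records; the file name and record layout are illustrative:

import struct

RECORD = struct.Struct("<qd")   # 8-byte timestamp + 8-byte float value = 16 bytes

def append(path, timestamp, value):
    with open(path, "ab") as f:           # appending keeps the file ordered by timestamp
        f.write(RECORD.pack(timestamp, value))

def first_at_or_after(path, timestamp):
    # Binary search over fixed-size records for the first one with ts >= timestamp.
    with open(path, "rb") as f:
        f.seek(0, 2)
        n = f.tell() // RECORD.size
        lo, hi = 0, n
        while lo < hi:
            mid = (lo + hi) // 2
            f.seek(mid * RECORD.size)
            ts, _ = RECORD.unpack(f.read(RECORD.size))
            if ts < timestamp:
                lo = mid + 1
            else:
                hi = mid
        if lo == n:
            return None
        f.seek(lo * RECORD.size)
        return RECORD.unpack(f.read(RECORD.size))

append("1_2.data", 1393632000, 42.5)
print(first_at_or_after("1_2.data", 1393000000))   # -> (1393632000, 42.5)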
There is a cloud-based column-oriented solution popping up, so you may want to have a look at: http://timeseries.guru
You should look into time series databases. They were created for this purpose.
A time series database (TSDB) is a software system that is optimized for handling time series data: arrays of numbers indexed by time (a datetime or a datetime range).
A popular example of a time-series database is InfluxDB.
I think the answer to this kind of question should mainly revolve around the way your database utilizes storage.
Some database servers use RAM and disk, some use RAM only (optionally with disk for persistence), etc.
Most common SQL database solutions use memory+disk storage and write the data in a row-based layout (every inserted row is written to the same physical location).
For time-series stores, the workload is in most cases something like: massive amounts of inserts arriving at a relatively low interval, while reads are column-based (in most cases you want to read a range of data from a specific column representing a metric).
I have found that columnar databases (google it; you'll find MonetDB, InfoBright, ParAccel, etc.) do a terrific job for time series.
As for your question, which I personally think is somewhat invalid (like all discussions using the flawed term NoSQL, IMO):
You can use a database server that talks SQL on the one hand, making your life very easy since everyone has known SQL for many years and the language has been refined over and over again for data queries, but that still utilizes RAM, CPU cache, and disk in a column-oriented way, making your solution a best fit for time series.
5 million rows is nothing for today's torrential data. Expect data to be in the TB or PB range in just a few months. At that point an RDBMS does not scale to the task, and we need the linear scalability of NoSQL databases. Performance comes from the columnar partitioning used to store the data: a "more columns, fewer rows" kind of concept to boost performance. Leverage the OpenTSDB work done on top of HBase or MapR_DB, etc.
I face similar requirements regularly, and have recently started using Zabbix to gather and store this type of data. Zabbix has its own graphing capability, but it's easy enough to extract the data out of Zabbix's database and process it however you like. If you haven't already checked Zabbix out, you might find it worth your time to do so.

How to instantly query a 64 GB database

Ok everyone, I have an excellent challenge for you. Here is the format of my data:
ID-1 COL-11 COL-12 ... COL-1P
...
ID-N COL-N1 COL-N2 ... COL-NP
ID is my primary key and index. I just use ID to query my database. The data model is very simple.
My problem is as follow:
I have 64 GB+ of data as defined above, and in a real-time application I need to query my database and retrieve the data instantly. I was thinking about two solutions, but neither seems possible to set up.
First, use sqlite or mysql: one table is needed, with one index on the ID column. The problem is that the database will be too large to have good performance, especially with sqlite.
Second, store everything in memory in a huge hashtable. RAM is the limit.
Do you have another suggestion? How about serializing everything to the filesystem and then, on each query, storing the queried data in a cache system?
When I say real-time, I mean about 100-200 queries/second.
A thorough answer would take into account data access patterns. Since we don't have these, we just have to assume an equal probability that any given row will be accessed next.
I would first try using a real RDBMS, either embedded or as a local server, and measure the performance. If this gives 100-200 queries/sec then you're done.
Otherwise, if the format is simple, then you could create a memory mapped file and handle the reading yourself using a binary search on the sorted ID column. The OS will manage pulling pages from disk into memory, and so you get free use of caching for frequently accessed pages.
Cache use can be optimized more by creating a separate index, and grouping the rows by access pattern, such that rows that are often read are grouped together (e.g. placed first), and rows that are often read in succession are placed close to each other (e.g. in succession.) This will ensure that you get the most back for a cache miss.
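A rough sketch of that memory-mapped approach, assuming the file holds fixed-width records sorted by ID (the 64-byte record layout is an illustrative assumption):

import mmap
import struct

REC = struct.Struct("<q56s")   # 8-byte id + 56 bytes of payload = 64 bytes per record

def lookup(path, wanted_id):
    # Binary search over the memory-mapped, ID-sorted file; the OS pages data
    # in on demand and keeps frequently used pages cached.
    with open(path, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        lo, hi = 0, len(mm) // REC.size
        while lo < hi:
            mid = (lo + hi) // 2
            rec_id, payload = REC.unpack_from(mm, mid * REC.size)
            if rec_id < wanted_id:
                lo = mid + 1
            elif rec_id > wanted_id:
                hi = mid
            else:
                return payload
    return None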
Given the way the data is used, you should do the following:
Create a record structure (fixed size) that is large enough to contain one full row of data
Export the original data to a flat file that follows the format defined in step 1, ordering the data by ID (incremental)
Do a direct access on the file and leave caching to the OS. To get record number N (0-based), you multiply N by the size of a record (in bytes) and read the record directly from that offset in the file.
Since you're in read-only mode, and assuming you're storing your file on random-access media, this scales very well and it doesn't depend on the size of the data: each fetch is a single read from the file. You could try some fancy caching system, but I doubt it would gain you much in terms of performance unless you have a lot of requests for the same data row (and the OS you're using is doing poor caching). Make sure you open the file in read-only mode, though, as this should help the OS figure out the optimal caching mechanism.
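A minimal sketch of that direct-access scheme, assuming dense 0-based record numbers and an illustrative fixed 64-byte record layout:

import struct

REC = struct.Struct("<q56s")   # 8-byte id + 56 bytes of payload = 64 bytes per record

def read_record(path, n):
    # Record number n lives at byte offset n * REC.size: one seek, one read.
    with open(path, "rb") as f:    # read-only, so the OS can cache pages freely
        f.seek(n * REC.size)
        chunk = f.read(REC.size)
        if len(chunk) < REC.size:
            return None            # past the end of the file
        return REC.unpack(chunk)   # (id, payload)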

How should I store extremely large amounts of traffic data for easy retrieval?

For a traffic accounting system, I need to store large amounts of datasets about internet packets sent through our gateway router (containing timestamp, user id, destination or source IP, number of bytes, etc.).
This data has to be stored for some time, at least a few days. Easy retrieval should be possible as well.
What is a good way to do this? I already have some ideas:
Create a file for each user and day and append every dataset to it.
Advantage: It's probably very fast, and data is easy to find given a consistent file layout.
Disadvantage: It's not easily possible to see e.g. all UDP traffic of all users.
Use a database
Advantage: It's very easy to find specific data with the right SQL query.
Disadvantage: I'm not sure if there is a database engine that can efficiently handle a table with possibly hundreds of millions of datasets.
Perhaps it's possible to combine the two approaches: Using an SQLite database file for each user.
Advantage: It would be easy to get information for one user using SQL queries on his file.
Disadvantage: Getting overall information would still be difficult.
But perhaps someone else has a very good idea?
Thanks very much in advance.
First, get The Data Warehouse Toolkit before you do anything.
You're doing a data warehousing job, you need to tackle it like a data warehousing job. You'll need to read up on the proper design patterns for this kind of thing.
[Note Data Warehouse does not mean crazy big or expensive or complex. It means Star Schema and smart ways to handle high volumes of data that's never updated.]
SQL databases are slow, but that slow is good for flexible retrieval.
The filesystem is fast. It's a terrible thing for updating, but you're not updating, you're just accumulating.
A typical DW approach for this is the following.
Define the "Star Schema" for your data: the measurable facts and the attributes ("dimensions") of those facts. Your fact appears to be # of bytes. Everything else (address, timestamp, user id, etc.) is a dimension of that fact.
Build the dimensional data in a master dimension database. It's relatively small (IP addresses, users, a date dimension, etc.) Each dimension will have all the attributes you might ever want to know. This grows, people are always adding attributes to dimensions.
Create a "load" process that takes your logs, resolves the dimensions (times, addresses, users, etc.) and merges the dimension keys in with the measures (# of bytes). This may update the dimension to add a new user or a new address. Generally, you're reading fact rows, doing lookups and writing fact rows that have all the proper FK's associated with them.
Save these load files on the disk. These files aren't updated. They just accumulate. Use a simple notation, like CSV, so you can easily bulk load them.
When someone wants to do analysis, build them a datamart.
For the selected IP address or time frame or whatever, get all the relevant facts, plus the associated master dimension data and bulk load a datamart.
You can do all the SQL queries you want on this mart. Most of the queries will devolve to SELECT COUNT(*) and SELECT SUM(*) with various GROUP BY and HAVING and WHERE clauses.
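As a toy illustration of that star-schema shape (sqlite3 is used here only for brevity; the table and column names are made up, not prescribed by this answer):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_user    (user_id INTEGER PRIMARY KEY, user_name TEXT);
    CREATE TABLE dim_address (address_id INTEGER PRIMARY KEY, ip TEXT);
    CREATE TABLE dim_date    (date_id INTEGER PRIMARY KEY, day TEXT, hour INTEGER);

    -- The fact table: one measurable value (# of bytes) plus keys to the dimensions.
    CREATE TABLE fact_traffic (
        user_id    INTEGER REFERENCES dim_user(user_id),
        address_id INTEGER REFERENCES dim_address(address_id),
        date_id    INTEGER REFERENCES dim_date(date_id),
        bytes      INTEGER NOT NULL
    );
""")

# A typical datamart query: total bytes per user per day.
rows = conn.execute("""
    SELECT u.user_name, d.day, SUM(f.bytes)
    FROM fact_traffic f
    JOIN dim_user u ON u.user_id = f.user_id
    JOIN dim_date d ON d.date_id = f.date_id
    GROUP BY u.user_name, d.day
""").fetchall()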
I think the proper answer really depends on the definition of a "dataset". As you mention in your question, you are storing individual sets of information for each record: timestamp, user id, destination IP, source IP, number of bytes, etc.
SQL Server is perfectly capable of handling this type of data storage with hundreds of millions of records without any real difficulty. Granted, this type of logging is going to require some good hardware to handle it, but it shouldn't be too complex.
Any other solution in my opinion is going to make reporting very hard, and from the sounds of it that is an important requirement.
So you are in one of the cases where you have much more write activity than read, you want your writes not to block you, and you want your reads to be "reasonably fast" but not critical. It's a typical business intelligence use case.
You should probably use a database and store your data in a "denormalized" schema to avoid complex joins and multiple inserts for each record. Think of your table as a huge log file.
In this case, some of the "new and fancy" NoSQL databases are probably what you're looking for: they provide relaxed ACID constraints, which you should not mind terribly here (in case of a crash, you can lose the last lines of your log), but they perform much better for insertion, because they don't have to sync journals to disk at each transaction.

Resources