Is Cassandra / ScyllaDB capable of handling millions of very wide data rows?

A new business need has emerged in our firm, where a relatively "big" data set needs to be accessed by online processes (with typical latency of up to 1 second). There is only one key, with high granularity / a row count measured in the tens of millions, and the expected number of columns / fields / value columns will likely exceed hundreds of thousands.
The key column is shared among all value columns, so key-value storage, while scalable, seems rather wasteful here. Is there any hope of using Cassandra / ScyllaDB (to which we gradually narrowed down our search) for such a wide data set, while ideally also reducing data storage needs by half (by storing the common key only once)?

If I understand your use case correctly, you will have tens of millions of partitions (what you called rows), each holding hundreds of thousands of different values (each of those would be a clustering row in modern CQL - CQL no longer supports schema-less wide rows). This is a fairly reasonable data set for Scylla and Cassandra.
But I want to add that I'm not sure the storage saving you are hoping for will really be there. Yes, Scylla/Cassandra will not need to store the partition key multiple times, but unless the partition key is very long, this saving will often be negligible compared to the other overheads of storing the data on disk.
Another thing you should consider is your expected queries. How will you read from this database? If you'll want to read all 100,000 columns of a particular key, or a contiguous range of them, then the data model you described is perfect. However, if the expected use case is that you always plan to read a single column from a specific key, then this data model will be inefficient - a random-access read from the middle of a long partition is slower than reading the value from a short partition.
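If it helps to make this concrete, here is a minimal sketch of that data model as a CQL table driven from the DataStax Python driver. All of the names (my_keyspace, wide_data, id, field_name, field_value) and the text value type are assumptions, not something taken from your description:

```python
from cassandra.cluster import Cluster

# Connect to a local node; use your real contact points and keyspace.
cluster = Cluster(["127.0.0.1"])
session = cluster.connect("my_keyspace")  # hypothetical keyspace

# One partition per key, one clustering row per value column:
# the common key is stored once per partition, not once per value.
session.execute("""
    CREATE TABLE IF NOT EXISTS wide_data (
        id          bigint,
        field_name  text,
        field_value text,
        PRIMARY KEY ((id), field_name)
    )
""")

# Reading all "columns" of one key is a single contiguous partition read,
# which is the access pattern this model favors.
all_fields = session.execute(
    "SELECT field_name, field_value FROM wide_data WHERE id = %s", (42,)
)

# A contiguous range of field names is also efficient (clustering order).
some_fields = session.execute(
    "SELECT field_name, field_value FROM wide_data "
    "WHERE id = %s AND field_name >= %s AND field_name < %s",
    (42, "a", "f"),
)
```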

Related

What decides the number of partitions in a DynamoDB table?

I'm a beginner to DynamoDB, my online instructor doesn't answer his Q&A lol, and I've been confused about this.
I know that the partition key decides the partition in which the item will be placed.
I also know that the number of partitions is calculated based on throughput or storage using the famous formulas.
So let's say a table has user_id as its partition key, with 200 user_ids. Does that automatically mean that we have 200 partitions? If so, why didn't we calculate the no. of partitions based on the famous formulas?
Thanks
Let's establish 2 things.
A DynamoDB partition can support 3,000 read operations and 1,000 write operations per second. Read and write capacity are accounted for separately, so they do not interfere with each other. If you had a table configured to support 18,000 reads and 6,000 writes, you'd have at least 12 partitions, but probably a few more for some headroom.
A provisioned capacity table starts with 1 partition by default, but an on-demand table starts with 4 partitions by default.
So, to answer your question directly: just because you have 200 user_ids does not mean you have 200 partitions. It is very possible for those 200 items to be in just one partition if your table is in provisioned capacity mode. If the configuration of the table changes or it takes on more traffic, those items might move around to new partitions.
There are a few distinct times when DynamoDB will add partitions.
When a partition grows larger than 10 GB of storage. DynamoDB might see that you are taking on data and split proactively, but 10 GB is the cutoff.
When your table needs to support more operations per second than it currently can. This can happen manually, because you configured your table to support 20,000 reads/sec where before it only supported 2,000; DynamoDB has to add partitions and move data around to be able to handle those 20,000 reads/sec. Or it can happen automatically, because you configured floor and ceiling values in DynamoDB auto-scaling and DynamoDB senses your ops/sec is climbing, and it will therefore adjust the number of partitions in response to capacity exceptions.
When your table is in on-demand capacity mode and DynamoDB attempts to automatically keep 2x your previous high-water mark of capacity. For example, say your table just reached 10,000 RCU for the first time. DynamoDB would see that this is past your previous high-water mark and start adding more partitions, trying to keep 2x that capacity at the ready in case you peak again like you just did.
When DynamoDB, which actively monitors your table, sees that one or more items are being hit particularly hard (hot keys) and sit in the same partition, which might create a hot partition. If that is happening, DynamoDB might split the partition to help isolate those items and prevent or fix a hot-partition situation.
There are one or two other, rarer edge cases, but you'd likely be talking to AWS Support if you encountered them.
Note: Once DynamoDB creates partitions, the number of partitions never shrinks and this is ok. Throughput dilution is no longer a thing in DynamoDB.
The partition key value is hashed to determine the actual partition to place the data item into.
Thus the number of distinct partition key values has zero effect on the number of physical partitions.
The only things that affect the physical number of partitions are RCUs/WCUs (throughput) and the amount of data stored.
Nbr Partitions (throughput) Pt = RCU/3000 + WCU/1000
Nbr Partitions (storage) Ps = GB/10
Unless one of the above is more than 1.0, there will likely be only a single partition. The split presumably happens as you approach the limits; exactly when is something only AWS knows.
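As a rough worked example of those two formulas, here is a small Python sketch. Taking the ceiling and then the larger of the two values is just how this rule of thumb is usually described; the real partitioning logic is internal to DynamoDB, so treat the output as an estimate only:

```python
import math

def estimate_partitions(rcu: int, wcu: int, storage_gb: float) -> int:
    """Estimate the minimum partition count from the rules of thumb above:
    Pt = RCU/3000 + WCU/1000 (throughput), Ps = GB/10 (storage).
    The table needs enough partitions to satisfy both, so take the max."""
    by_throughput = math.ceil(rcu / 3000 + wcu / 1000)
    by_storage = math.ceil(storage_gb / 10)
    return max(by_throughput, by_storage, 1)

# The 18,000 reads / 6,000 writes example from the other answer:
print(estimate_partitions(rcu=18_000, wcu=6_000, storage_gb=50))  # -> 12
print(estimate_partitions(rcu=400, wcu=400, storage_gb=5))        # -> 1
```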

Index performance on Postgresql for big tables

I have been searching good information about index benchmarking on PostgreSQL and found nothing really good.
I need to understand how PostgreSQL behaves while handling a huge amount of records.
Let's say 2000M records on a single non-partitioned table.
Theoretically, B-trees are O(log(n)) for reads and writes, but in practice I think that's kind of an ideal scenario, not considering things like HUGE indexes not fitting entirely in memory (swapping?) and maybe other things I am not seeing at this point.
There are no JOIN operations, which is fine, but note this is not an analytical database, and response times below 150 ms (the less the better) are required. All searches are expected to be done using indexes, of course. We have 2-3 indexes:
UUID PK index
timestamp index
VARCHAR(20) index (non unique but high cardinality)
My concern is how writes and reads will perform once the table reaches its expected top capacity (2500M records)
... so specific questions might be:
May "adding more iron" achieve reasonable performance in such scenario?
NOTE this is non-clustered DB so this is vertical scaling.
What would be the main sources of time consumption for both reads and writes?
What would be the amount of records on a table that we can consider "too much" for this standard setup on PostgreSql (no cluster, no partitions, no sharding)?
I know this amount of records suggests taking some alternative (sharding, partitioning, etc.), but this question is about learning and understanding PostgreSQL capabilities more than anything else.
There should be no performance degradation inserting into or selecting from a table, even if it is large. The expense of index access grows with the logarithm of the table size, but the base of the logarithm is large, so the depth of the index won't be more than 5 or 6. The upper levels of the index are probably cached, so you end up reading only a handful of blocks when accessing a single table row via an index scan. Note that you don't have to cache the whole index.
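To put a rough number on that, here is a back-of-the-envelope sketch in Python. The fan-out figure is an assumption (a few hundred index entries per 8 kB B-tree page is typical for small keys such as UUIDs or timestamps), so it gives an order-of-magnitude estimate, not a measurement:

```python
import math

def btree_depth(n_rows: int, fanout: int) -> int:
    """Estimated number of levels in a B-tree index with the given fan-out."""
    return max(1, math.ceil(math.log(n_rows, fanout)))

# 2,500M rows, with assumed fan-outs per index page:
rows = 2_500_000_000
for fanout in (100, 300, 1000):
    print(f"fanout={fanout}: ~{btree_depth(rows, fanout)} levels")
# Even with a modest fan-out of 100 the tree is only ~5 levels deep,
# and the top levels are almost certainly cached.
```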

Performance of Column Family in Cassandra DB

I have a table where my queries will be purely based on id and created_time. There are 50 other columns that will also be queried purely based on id and created_time. I can design it in two ways:
Either multiple small tables with 5 columns each for all 50 parameters, or
A single table with all 50 columns, with id and created_at as the primary key.
Which will be better? My rows will increase tremendously, so should I be concerned about the size of the column family while modelling?
Actually, you need to have small tables to decrease the load on a single table, and you should also try to maintain query-based tables. If your queries read all 50 columns at once, then you can proceed with a single table. But if you are planning to get only part of the data in each query, then you should maintain query-based small tables, which will redistribute the data evenly across the nodes, or maintain multiple partitions as Alex suggested (but then you cannot do range-based queries).
This really depends on how you structure your partition key and how data is distributed inside a partition. CQL has some limits, like a maximum of 2 billion cells per partition, but this is a theoretical limit; the practical limits are something like not having partitions bigger than 100 MB, etc. (DSE has recommendations in the planning guide).
If you'll always search by id & created_time, and are not doing range queries on created_time, then you may even use a composite partition key comprising both - this will distribute data more evenly across the cluster. Otherwise, make sure that you don't have too much data inside partitions.
Or you can add another piece to the partition key; for example, sometimes people add a truncated date-time to the partition key, such as the time rounded to the hour or to the day - but this will affect your queries. It really depends on them.
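To make that bucketing idea concrete, here is a minimal sketch of what such a table and the partition-key computation could look like. The table and column names are hypothetical, and day-level bucketing is only one possible choice:

```python
from datetime import datetime, timezone

# Hypothetical CQL schema: the partition key combines id with a day
# bucket, so one id does not pile all of its history into a single
# ever-growing partition.
CREATE_TABLE_CQL = """
CREATE TABLE IF NOT EXISTS readings_by_day (
    id           bigint,
    day_bucket   date,
    created_time timestamp,
    reading      double,
    PRIMARY KEY ((id, day_bucket), created_time)
) WITH CLUSTERING ORDER BY (created_time DESC)
"""

def day_bucket(ts: datetime) -> str:
    """Truncate a timestamp to the day, for use in the partition key."""
    return ts.astimezone(timezone.utc).strftime("%Y-%m-%d")

# Querying a time range now means hitting every day bucket in that range,
# which is the "this will affect your queries" trade-off mentioned above.
print(day_bucket(datetime(2018, 3, 14, 17, 25, tzinfo=timezone.utc)))  # 2018-03-14
```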
Sort of in line with what Alex mentions, the determining factor here is going to be the size of your various partitions (which is an extension of the size of your columns).
Practically speaking, you can have problems going both ways - partitions that are too narrow can be as problematic as partitions that are too wide, so this is the type of thing you may want to try benchmarking and seeing which works best. I suspect for normal data models (staying away from the pathological edge cases), either will work just fine, and you won't see a meaningful difference (assuming 3.11).
In 3.11.x, Cassandra does a better job of skipping unrequested values than in 3.0.x, so if you do choose to join it all in one table, do consider using 3.11.2 or whatever the latest available release is in the 3.11 (or newer) branch.

Does DynamoDB scale when partition key bucket gets huge?

Does DynamoDB scale when partition key bucket gets huge? What are the possible solutions if this is really a problem?
TL;DR - Yes, DynamoDB scales very well as long as you use the service as intended!
As a user of Dynamo your responsibility is to choose a partition key (and range key combination if need be) that provides a relatively uniform distribution of the data and to allocate enough read and write capacity for your use case. If you do use a range key, this means you should aim to have approximately the same number of elements for each of the partition key values in your table.
As long as you follow this rule, Dynamo will scale very well. Even when you hit the size limit for a partition, Dynamo will automatically split the data in the original partition into two equal-sized partitions, each of which will receive about half the data (again - as long as you did a good job of choosing the partition key and range key). This is very well explained in the DynamoDB documentation.
Of course, as your table grows and you get more and more partitions, you will have to allocate more and more read and write capacity to ensure enough is provisioned to sustain all partitions of your table. The capacity is equally distributed to all partitions in your table (except for splits - though, again, if the distribution is uniform even splits will receive capacity uniformly).
For reference, see also: how partitions work.
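For what it's worth, here is a minimal boto3 sketch of creating a table with the kind of partition key / range key combination described above. The table and attribute names are made up, and on-demand billing is assumed only to keep the example short:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# A composite primary key: a high-cardinality partition key (user_id)
# spreads items across partitions, while the range key (created_at)
# keeps each user's items together and sorted within its partition.
dynamodb.create_table(
    TableName="events",  # hypothetical table name
    AttributeDefinitions=[
        {"AttributeName": "user_id", "AttributeType": "S"},
        {"AttributeName": "created_at", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "user_id", "KeyType": "HASH"},      # partition key
        {"AttributeName": "created_at", "KeyType": "RANGE"},  # sort/range key
    ],
    BillingMode="PAY_PER_REQUEST",  # on-demand capacity
)
```

The scaling behavior above only holds if access across user_id values is roughly uniform; DynamoDB hashes the partition key to place items, so a skewed key concentrates traffic no matter how much capacity the table has.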

How large can a hbase table actually grow?

Would there be any reason to split an HBase table into smaller entities, or can it grow forever (assuming available disk space)?
Background:
We have realtime data (measurements), up to let's say 500,000/s, which consists essentially of timestamp, value, flags. If we distributed the values to different tables, it would also mean inserting each of the entries individually, which is a performance killer. If we insert in bulk, it is much faster. The question is: are there any downsides to having an HBase table of extreme size?
There could be a strong reason for splitting a table: avoiding RegionServer hotspotting by distributing the load across multiple RegionServers. HBase, by its very nature, stores rows sequentially in one place. Rows with similar keys go to the same server (timeseries data, for example). This is to facilitate better range queries. However, it starts becoming a bottleneck once your data grows too big (while your disk still has space).
In cases like the above, data will continue to go to the same RegionServer, leading to hotspotting. So we split tables manually to distribute the data uniformly across the cluster.
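A closely related mitigation, offered here only as an aside, is to salt the row key so that monotonically increasing timestamps spread over several key ranges (and therefore RegionServers) instead of always hammering the newest region. A minimal sketch with a hypothetical key layout:

```python
import hashlib
import struct

NUM_SALT_BUCKETS = 16  # assumption: roughly the number of regions to spread over

def salted_row_key(ts_millis: int) -> bytes:
    """Build a row key as <salt byte><big-endian timestamp>.

    The salt is derived from the timestamp itself, so sequential writes are
    distributed over NUM_SALT_BUCKETS key ranges. The cost: a time-range scan
    must be issued once per bucket and the results merged client-side."""
    salt = int(hashlib.md5(struct.pack(">q", ts_millis)).hexdigest(), 16) % NUM_SALT_BUCKETS
    return bytes([salt]) + struct.pack(">q", ts_millis)

print(salted_row_key(1_520_000_000_000).hex())
```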
I don't see the point in manually splitting an HBase table; HBase does this on its own, and extremely well (this is what HBase table regions are).
HBase has been made to handle extremely large data, so I like to believe that the limit depends on your hardware only (of course, some configuration settings might impact performance, such as automatic major compaction, etc.).
