Estimating extra maintenance cost when using GSI in a DynamoDB table

I have a Users table in DynamoDB with a unique hash key username. However, I want to be able to find a specific user as efficiently as possible by providing either just the username or just the email (the email is also unique). I can make the email a global secondary index, but I have trouble estimating the additional cost of this approach. Will using the index to retrieve a user result in two read operations? Or how many operations exactly?
Also, I want the read and write throughput of the index to equal that of the table (and, ideally, to scale automatically). Can I achieve that by not providing specific throughput values when I create the index through the API, or do I have to provide them?

The number of read operations needed to retrieve values from the index depends on which attributes you want to read (all of them vs. just a subset) and on the index's projection type. If the projection is ALL, then it only takes one read, but the index may cost more to store and to keep up to date. If the projection is KEYS_ONLY, you will only get back the table's primary key, and you would then have to query the table again by that key. That takes more than one read, but the index may be cheaper. It all depends on your use cases and access patterns.
See "Attribute Projections" at https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GSI.html
I think you need to provide the read capacity and write capacity for the index when it is created; it will not inherit any values from the parent table. However, if the table is using auto scaling, the auto scaling configuration can be applied to the GSI automatically. See https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.Console.html#AutoScaling.Console.ExistingTable
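For reference, a minimal boto3 sketch of what this might look like. The table name Users, the index name email-index, and the capacity numbers are assumptions; on an on-demand table you would omit the ProvisionedThroughput block:

import boto3

ddb = boto3.client("dynamodb")

# Add a GSI on email to an existing table (index name and capacities are assumptions).
ddb.update_table(
    TableName="Users",
    AttributeDefinitions=[{"AttributeName": "email", "AttributeType": "S"}],
    GlobalSecondaryIndexUpdates=[{
        "Create": {
            "IndexName": "email-index",
            "KeySchema": [{"AttributeName": "email", "KeyType": "HASH"}],
            "Projection": {"ProjectionType": "ALL"},   # or KEYS_ONLY, as discussed above
            "ProvisionedThroughput": {                 # required on provisioned tables
                "ReadCapacityUnits": 5,
                "WriteCapacityUnits": 5,
            },
        }
    }],
)

# With ProjectionType ALL, a single Query against the index returns the whole item.
resp = ddb.query(
    TableName="Users",
    IndexName="email-index",
    KeyConditionExpression="email = :e",
    ExpressionAttributeValues={":e": {"S": "user@example.com"}},
)
print(resp["Items"])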

Related

Performance of Column Family in Cassandra DB

I have a table where my queries will be based purely on id and created_time. There are 50 other columns that will also be queried purely by id and created_time. I can design it in two ways:
Multiple small tables with 5 columns each, covering all 50 parameters
A single table with all 50 columns, with id and created_time as the primary key
Which will be better? My rows will increase tremendously, so should I worry about the width of the column family while modelling?
Actually, you should use small tables to decrease the load on a single table, and you should also try to maintain query-based tables. If your queries read all 50 columns, then you can proceed with a single table. But if you plan to read only part of the data in each query, then you should maintain query-based small tables, which will redistribute the data more evenly across the nodes, or maintain multiple partitions as Alex suggested (but then you cannot do range-based queries).
This really depends on how you structure your partition key and how the data is distributed inside a partition. CQL has some limits, such as a maximum of 2 billion cells per partition, but that is a theoretical limit; the practical limits are more like not having partitions bigger than 100 MB, etc. (DSE has recommendations in its planning guide).
If you will always search by id and created_time, and are not doing range queries on created_time, then you may even use a composite partition key comprising both; this will distribute data more evenly across the cluster. Otherwise, make sure that you don't have too much data inside a partition.
Or you can add another piece to the partition key; for example, people sometimes add a truncated date-time (time rounded to the hour, or to the day), but this will affect your queries, so it really depends on them.
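A rough CQL sketch of both options using the Python driver (the keyspace, table, and column names are made up):

import datetime
from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("my_ks")  # keyspace name is an assumption

# Option A: composite partition key on (id, created_time) - point lookups only.
session.execute("""
    CREATE TABLE IF NOT EXISTS readings_by_id_time (
        id           text,
        created_time timestamp,
        prop1        int,
        prop2        int,
        PRIMARY KEY ((id, created_time))
    )
""")

# Option B: bucket by day so partitions stay bounded, while range queries on
# created_time within a day remain possible.
session.execute("""
    CREATE TABLE IF NOT EXISTS readings_by_id_day (
        id           text,
        day          date,
        created_time timestamp,
        prop1        int,
        prop2        int,
        PRIMARY KEY ((id, day), created_time)
    )
""")

rows = session.execute(
    "SELECT * FROM readings_by_id_day "
    "WHERE id = %s AND day = %s AND created_time >= %s",
    ("device-1", datetime.date(2018, 3, 1), datetime.datetime(2018, 3, 1, 12, 0)))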
Sort of in line with what Alex mentions, the determining factor here is going to be the size of your various partitions (which is an extension of the size of your columns).
Practically speaking, you can have problems going both ways: partitions that are too narrow can be as problematic as partitions that are too wide, so this is the type of thing you may want to benchmark to see which works best. I suspect that for normal data models (staying away from the pathological edge cases) either will work just fine, and you won't see a meaningful difference (assuming 3.11).
In 3.11.x, Cassandra does a better job of skipping unrequested values than in 3.0.x, so if you do choose to join it all in one table, do consider using 3.11.2 or whatever the latest available release is in the 3.11 (or newer) branch.

How should I go about selecting rows with thousands of columns?

There are about a couple of million records that look like this.
idA(text), idB(int), prop1(boolean), prop2(boolean), ..., prop6000(boolean) (more prop's can be added later on)
And the primary task will be finding records with some combination of prop values.
eg: SELECT idA, idB FROM tbl WHERE prop30=true AND prop1987=false AND ... AND prop5754=true
If the SELECT speed were the main concern, how should I go about this problem?
--
I was thinking about defining the props as a list of ints, adding values only where the prop is true, and using CONTAINS when SELECTing.
i.e.: INSERT INTO tbl VALUES('id1', 1, [10, 24, 2977]) -> if prop10, prop24 and prop2977 were true
But then, it is said that secondary indexes do not scale very well and should not be used heavily.
Does it hold true even for lists? (I'm thinking maybe it's different for lists as they are sorted?)
One of the key things in Cassandra query performance is that you must - MUST - hit a partition before applying an index filter. In addition, when you apply multiple index filters, it only hits one index and filters the rest in memory (i.e., only one index is used). In your query you're not hitting a partition, so it will be a cluster-wide query that is most likely to time out.
In Cassandra 3.0, the rules will be somewhat relaxed with the introduction of global indexes. Even then, your query won't really work that well.
If all your properties are booleans, you can consider storing them as a bitfield. One integer can then hold 64 flags, which might be more efficient. On the querying side, you will still need a partition key by which you can hit a partition. With the flags approach, you can simply read in the integer and filter on the client side. All rows in the partition will be loaded, but unless you've got hundreds of thousands of rows in the same partition, that shouldn't be a problem.
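A rough Python sketch of the flag-packing idea and the client-side filter (in Cassandra you might store the mask as a blob or split it across a few bigint columns; the helper names are made up):

def pack(true_props):
    """Pack the numbers of the props that are true into one integer bitmask."""
    mask = 0
    for p in true_props:
        mask |= 1 << p          # prop N -> bit N
    return mask

def matches(mask, must_be_true=(), must_be_false=()):
    """Client-side filter: test individual bits of the mask."""
    return (all((mask >> p) & 1 for p in must_be_true)
            and not any((mask >> p) & 1 for p in must_be_false))

row_mask = pack([10, 24, 2977])           # props 10, 24 and 2977 are true
print(matches(row_mask, [10, 24], [30]))  # True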
If you truly don't have a partition key, and all you can look up by is props (as in your example above), then you'll need to carry out indexing manually. Built-in indexes won't really work that well, and you can either create index tables yourself (which may be quite difficult) or use an indexing service like Lucene, which will allow you to do the search quickly.

When to use Cassandra vs. Solr in DSE?

I'm using DSE for Cassandra/Solr integration, so that data is stored in Cassandra and indexed in Solr. It's very natural to use Cassandra for CRUD operations and Solr for full-text search, and DSE really simplifies data synchronization between Cassandra and Solr.
When it comes to querying, however, there are actually two ways to go: Cassandra secondary/manually configured indexes vs. Solr. I want to know when to use which method and what the performance difference is in general, especially under a DSE setup.
Here is one example use case from my project. I have a Cassandra table storing some item entity data. Besides the basic CRUD operations, I also need to retrieve items by equality on some field (say, category) and then sort them by some order (in my case, a like_count field).
I can think of three different ways to handle it:
Declare indexed=true in the Solr schema for both the category and like_count fields, and query in Solr
Create a denormalized table in Cassandra with primary key (category, like_count, id) (sketched just after this list)
Create a denormalized table in Cassandra with primary key (category, order, id) and use an external component, such as Spark/Storm, to sort the items by like_count
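For illustration, a minimal sketch of option 2 with the Python driver (keyspace, table, and column names are assumptions):

from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("my_ks")  # keyspace name is an assumption

session.execute("""
    CREATE TABLE IF NOT EXISTS items_by_category (
        category   text,
        like_count int,
        id         uuid,
        title      text,
        PRIMARY KEY ((category), like_count, id)
    ) WITH CLUSTERING ORDER BY (like_count DESC, id ASC)
""")

# Equality on category; rows come back already ordered by like_count.
rows = session.execute(
    "SELECT id, like_count, title FROM items_by_category "
    "WHERE category = %s LIMIT 20",
    ("books",))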
The first method seems to be the simplest to implement and maintain. I just write some trivial Solr access code and the rest of the heavy lifting is handled by Solr/DSE Search.
The second method requires manual denormalization on create and update. I also need to maintain a separate table. There is also a tombstone issue, since like_count can be updated frequently. The good part is that reads may be faster (if there are not excessive tombstones).
The third method can alleviate the tombstone issue at the cost of one extra component for sorting.
Which method do you think is the best option? What is the difference in performance?
Cassandra secondary indexes have limited use cases:
No more than a couple of columns indexed.
Only a single indexed column in a query.
Too much inter-node traffic for high cardinality data (relatively unique column values)
Too much inter-node traffic for low cardinality data (high percentage of rows will match)
Queries need to be known in advance so the data model can be optimized around them.
Because of these limitations, it is common for apps to create "index tables" that are keyed by whatever column is desired. This requires either that data be duplicated from the main table into each index table, or that an extra query be made: read the index table to get the main table's key, then read the actual row from the main table. Queries on multiple columns will have to be manually indexed in advance, making ad hoc queries problematic. And any duplicated data will have to be manually updated by the app in each index table. A sketch of such an index table follows below.
Other than that... they will work fine in cases where a "modest" number of rows will be selected from a modest number of nodes, and queries are well specified in advance and not ad hoc.
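As a sketch of that index-table pattern (reusing the users/email example from the DynamoDB question above; keyspace and column names are assumptions):

from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("my_ks")  # keyspace name is an assumption

# "Index table" keyed by email, holding only the main table's key.
session.execute("""
    CREATE TABLE IF NOT EXISTS users_by_email (
        email    text PRIMARY KEY,
        username text
    )
""")

# Read 1: the index table gives the main key; read 2: the main table gives the full row.
hit = session.execute(
    "SELECT username FROM users_by_email WHERE email = %s",
    ("user@example.com",)).one()
if hit:
    user = session.execute(
        "SELECT * FROM users WHERE username = %s", (hit.username,)).one()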
DSE/Solr is better for:
A moderate number of columns are indexed.
Complex queries with a number of columns/fields referenced - Lucene matches all specified fields in a query in parallel. Lucene indexes the data on each node, so nodes query in parallel.
Ad hoc queries in general, where the precise queries are not known in advance.
Rich text queries such as keyword search, wildcard, fuzzy/like, range, inequality.
There is a performance and capacity cost to using Solr indexing, so a proof-of-concept implementation is recommended to evaluate how much additional RAM, storage, and how many additional nodes are needed; this depends on how many columns you index, the amount of text indexed, and any text-filtering complexity (e.g., n-grams need more). It could range from a 25% increase for a relatively small number of indexed columns to 100% if all columns are indexed. Also, you need enough nodes so that the per-node Solr index fits in RAM, or mostly in RAM if using SSDs. And vnodes are not currently recommended for Solr data centers.

Database indexing - how does it work?

How does indexing increase the performance of data retrieval?
How does indexing work?
Database products (RDBMSs) such as Oracle and MySQL build their own indexing systems. They give some control to database administrators, but nobody knows exactly what happens in the background except people who do research in that area. So, why indexing?
Put simply, database indexes help speed up retrieval of data. The other great benefit of indexes is that your server doesn't have to work as hard to get the data. They are much the same as book indexes, providing the database with quick jump points on where to find the full reference (or to find the database row).
There are many indexing techniques, for example:
Primary indexing, secondary indexing
B-trees and variants (B+-trees, B*-trees)
Hashing and variants (linear hashing, spiral hashing, etc.)
For example, imagine you have a database where the primary keys are sorted and all the data is stored in blocks (on disk). Every time you want to access the data, you don't want to increase the access time (sometimes called transaction time or I/O time); the index tells you, via these primary keys, which data is stored in which block.
(The primary key is a name; not a good example, but it gives the idea.)
Block 1: Alice, ..., Az...
Block 2: Bob, Bri, ..., Bza
...
Now you have an index. In this index you only store Alice and Bob and the blocks they point to; this way, users can access the data faster. The RDBMS deals with the details.
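A toy Python sketch of that kind of sparse index, one entry per block (the keys and block layout are made up):

from bisect import bisect_right

# One index entry per block: the first primary key stored in that block.
block_index = ["Alice", "Bob", "Carol", "Dave"]

def block_for(key):
    """Find the last block whose first key is <= the search key."""
    return bisect_right(block_index, key) - 1

print(block_for("Bri"))   # 1 -> the block that starts with "Bob"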
I won't go into the details here, but if you want to delve into these topics, I suggest you take a database course or look at this popular book, which is used at most universities:
Database Management Systems by Ramakrishnan and Gehrke
Each index keeps the indexed fields stored separately, sorted (typically), and in a data structure that makes finding the right entries particularly easy. The database finds the entries in the index and then cross-references them to the entries in the tables (except in the case of clustered indexes and covering indexes, where the index already has everything needed). This cross-referencing takes time, but is faster (you hope) than scanning the entire table.
A clustered index is where the rows themselves with all columns* are stored together with the index. Scanning clustered indexes is better than scanning non-clustered non-covering indexes because fewer lookups are required.
A covering index is where the query only requires columns which are part of the index, so the rest of the row does not need to be looked up (This is often good for performance).
* typically excluding blob / long text columns etc
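A quick way to see a covering index in action, sketched with SQLite (table and index names are made up; the exact plan wording varies by engine and version):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, name TEXT, bio TEXT)")
conn.execute("CREATE INDEX idx_users_email_name ON users (email, name)")

# The query only touches columns in the index, so no lookup of the full row is needed.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT name FROM users WHERE email = ?",
    ("a@example.com",)).fetchall()
print(plan)   # typically reports "... USING COVERING INDEX idx_users_email_name ..."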
How does an index in a book increase the ease with which you find the right page?
It's much easier to look through an alphabetical list and then go to the right page than to read every page.
This is a gross oversimplification, but in general, database indexing creates another list of some of the contents of the table, arranged in a way that lets the database engine find information quickly. By organizing table contents deliberately, this eliminates the need to find a row of data by scanning the entire table, making searches much more efficient.
Indexes provide an optimal data structure for lookup queries. If your dataset changes a lot, you might consider the performance of updating/regenerating the index as well.
There are a lot of open-source indexing engines, such as Lucene, available, and you can search online for detailed performance benchmarks.

How do databases perform on dense data?

Suppose you have a dense table with an integer primary key, where you know the table will contain 99% of all values from 0 to 1,000,000.
A super-efficient way to implement such a table is an array (or a flat file on disk), assuming a fixed record size.
Is there a way to achieve similar efficiency using a database?
Clarification: when stored in a simple table/array, access to an entry is O(1), just a memory read (or a read from disk). As I understand it, databases store their nodes in trees, so they cannot achieve identical performance; access to an average node will take a few hops.
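For comparison, the fixed-record flat file the question has in mind might look like this in Python (the record layout is made up):

import struct

RECORD = struct.Struct("<q?")          # fixed size: 8-byte int value + 1-byte flag

def write_record(f, key, value, flag):
    f.seek(key * RECORD.size)          # offset is a pure function of the key
    f.write(RECORD.pack(value, flag))

def read_record(f, key):
    f.seek(key * RECORD.size)          # O(1): no tree traversal, just one seek
    return RECORD.unpack(f.read(RECORD.size))

with open("dense.dat", "w+b") as f:
    write_record(f, 123_456, value=42, flag=True)
    print(read_record(f, 123_456))     # (42, True)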
Perhaps I don't understand your question, but a database is designed to handle data. I work all day long with databases that have millions of rows. They are efficient enough.
I don't know what your definition of "achieve similar efficiency using a database" is. In a database (from my experience), what exactly you are trying to do matters for performance.
If you simply need a single record based on a primary key, the database should naturally be efficient enough, assuming it is properly structured (for example, 3NF).
Again, you need to design your database to be efficient for what you need. Furthermore, consider how you will write queries against the database in a given structure.
In my work, I've been able to cut query execution time from >15 minutes to 1 or 2 seconds simply by optimizing my joins, the where clause and overall query structure. Proper indexing, obviously, is also important.
Also, consider the database engine you are going to use. I've been assuming SQL Server or MySQL, but those may not be right. I've heard (but have never tested the idea) that SQLite is very quick, faster than either of the aforementioned. There are also many other options, I'm sure.
Update: Based on your explanation in the comments, I'd say no, you can't. You are asking about mechanisms designed for two completely different things. A database persists data over a long period of time and is usually optimized for many connections and data reads/writes. In your description, the data in an in-memory array is accessed by a single program, and that program owns the memory. It's not (usually) shared. I do not see how you could achieve the same performance.
Another thought: the absolute closest thing you could get to this, in SQL Server specifically, is a table variable. A table variable (in theory) is held in memory only. I've heard people refer to table variables as SQL Server's "array". Any regular table write or create statement prompts the RDBMS to write to disk (I think first to the log and then to the data files). And large data reads can also cause the DB to write to private temp tables to store data for later use.
There is not much you can do to specify how data will be physically stored in a database. The most you can do is specify whether data and indices will be stored separately or whether the data will be stored in one index tree (a clustered index, as Brian described).
But in your case this does not matter at all, because:
All databases make heavy use of caching. 1,000,000 records can hardly exceed 1 GB of memory, so your complete database will quickly end up in the database cache.
If you are reading a single record at a time, the main overhead you will see is accessing data over the database protocol. The process goes something like this:
connect to database - open communication channel
send SQL text from application to database
database analyzes the SQL (parses it, checks whether the command was previously compiled, compiles it if it is issued for the first time, ...)
database executes the SQL. After a few executions, the data from your example will be cached in memory, so execution will be very fast.
database packs fetched records for transport to application
data is sent over communication channel
database component in application unpacks received data into some dataset representation (e.g. ADO.Net dataset)
In your scenario, executing the SQL and finding the records takes very little time compared to the total time needed to get the data from the database to the application. Even if you could force the database to store data in an array, there would be no visible gain.
If you've got a decent amount of records in a DB (and 1MM is decent, not really that big), then indexes are your friend.
You're talking about old fixed record length flat files. And yes, they are super-efficient compared to databases, but like structure/value arrays vs. classes, they just do not have the kind of features that we typically expect today.
Things like:
searching on different columns/combinations
variable length columns
nullable columns
editability
restructuring
concurrency control
transaction control
etc., etc.
Create a DB with an ID column and a bit column. Use a clustered index for the ID column (the ID column is your primary key). Insert all 1,000,000 elements (do so in order or it will be slow). This is kind of inefficient in terms of space (you're using nlgn space instead of n space).
I don't claim this is efficient, but it will be stored in a similar manner to how an array would have been stored.
Note that the ID column can be marked as a counter (identity) column in most DB systems, in which case you can just insert 1,000,000 items and it will do the counting for you. I am not sure whether such a DB avoids explicitly storing the counter's value, but if it does, then you'd only end up using n space.
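A rough sketch of that idea using SQLite, where a table with an INTEGER PRIMARY KEY is itself stored as a B-tree keyed on the id, so it plays the role of the clustered index described above (table and column names are made up):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dense (id INTEGER PRIMARY KEY, flag INTEGER NOT NULL)")

# Insert in key order, as suggested above, to keep the B-tree inserts cheap appends.
conn.executemany(
    "INSERT INTO dense (id, flag) VALUES (?, ?)",
    ((i, i % 2) for i in range(1_000_000)))
conn.commit()

print(conn.execute("SELECT flag FROM dense WHERE id = ?", (123_456,)).fetchone())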
When your primary key is an integer sequence, it can be a good idea to use a reverse index. This makes sure that contiguous values are spread apart in the index tree.
However, there is a catch - with reverse indexes you will not be able to do range searching.
The big question is: efficient for what?
For Oracle, ideas might include:
read access by id: index organized table (this might be what you are looking for)
insert only, no update: no indexes, no spare space
read access full table scan: compressed
high concurrent write when id comes from a sequence: reverse index
For the actual question, precisely as asked: write all rows into a single blob (the table contains one column and one row). You might be able to access this like an array, but I am not sure, since I don't know what operations are possible on blobs. Even if it works, I don't think this approach would be useful in any realistic scenario.
