Cassandra vs ScyllaDB Memory Usage

I am doing performance comparisons of ScyllaDB and Cassandra, specifically looking at the impact of memory. The machines I am using each have 16GB and 8 cores.
Based on the docs, Cassandra will default to a 4GB heap (-Xmx) and leave the remaining 12GB to the file system cache.
https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsTuneJVM.html
ScyllaDB instead will use all 16GB for itself.
http://docs.scylladb.com/faq/#scylla-is-using-all-of-my-memory-why-is-that-what-if-the-server-runs-out-of-memory
What I'm wondering is whether this is a fair comparison setup (4GB Xmx for Cassandra vs 16GB for Scylla)? I realize this is what each vendor recommends, but would a fairer test be 8GB Xmx for Cassandra and --memory 8G for ScyllaDB? My workload is mostly write intensive, and I don't expect file system caching to always be able to help Cassandra. It seems odd to me that ScyllaDB expects almost no help from the file system cache, while Cassandra relies on it so heavily.

Cassandra will always use all of the system memory; the heap size (-Xmx) setting just determines how much is used by the heap and how much by other memory consumers (off-heap structures and the page cache). So if you limit Scylla's memory usage, it will be at a disadvantage compared to Cassandra.
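To make that concrete, here is a rough back-of-the-envelope sketch (in Python) of how the 16GB machine from the question gets divided in the setups being compared; the 2GB off-heap figure for Cassandra is an assumption for illustration, not a measurement.

# Back-of-the-envelope split of the 16 GB machine from the question.
# The 2 GB off-heap figure for Cassandra is an assumption for illustration.
TOTAL_GB = 16

def cassandra_budget(heap_gb, offheap_gb=2):
    # Heap (-Xmx) plus off-heap structures; whatever is left over feeds
    # the OS page cache, which Cassandra relies on for reads.
    return {"heap": heap_gb,
            "off_heap": offheap_gb,
            "page_cache": TOTAL_GB - heap_gb - offheap_gb}

def scylla_budget(memory_gb):
    # Scylla manages its own cache internally, so nearly all memory
    # handed to the process replaces what the page cache would do.
    return {"process": memory_gb,
            "page_cache": TOTAL_GB - memory_gb}

print("Cassandra, 4 GB heap :", cassandra_budget(4))
print("Cassandra, 8 GB heap :", cassandra_budget(8))
print("Scylla, --memory 16G :", scylla_budget(16))
print("Scylla, --memory 8G  :", scylla_budget(8))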

Scylla will use roughly half of its memory for memtables and the other half for the key/partition cache.
If your workload is mostly writes, extra memory will have less of an effect on performance; throughput should be bounded by either I/O or CPU.
I would recommend reading:
http://www.scylladb.com/2017/10/05/io-access-methods-scylla/
to understand how Scylla writes data, and
http://www.scylladb.com/2016/12/15/sswc-part1/
to understand how Scylla balances I/O workloads.
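If you want to drive a write-heavy test yourself, a minimal sketch with the DataStax Python driver (cassandra-driver), which speaks CQL to both Cassandra and Scylla, might look like this. The contact point, keyspace, and table are placeholders, and a real benchmark would use asynchronous/concurrent execution or a tool like cassandra-stress.

# Minimal write-heavy loop against Cassandra or Scylla over CQL.
# Contact point, keyspace and table are placeholders; a real benchmark
# would use execute_async()/concurrency or cassandra-stress instead.
import time
import uuid
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS bench
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS bench.events (id uuid PRIMARY KEY, payload text)
""")
insert = session.prepare("INSERT INTO bench.events (id, payload) VALUES (?, ?)")

payload = "x" * 1024      # ~1 KB per row, as in a write-intensive workload
n = 10000
start = time.time()
for _ in range(n):
    session.execute(insert, (uuid.uuid4(), payload))
elapsed = time.time() - start
print(f"{n} writes in {elapsed:.1f}s ({n / elapsed:.0f} writes/s)")
cluster.shutdown()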

Related

Whole Oracle database in memory

Suppose I have an Oracle database whose data files are 256 GB in size. Is it a good idea to use a server with, say, 384 GB RAM in order to host the entire database in RAM?
Is there any difference if you only have, say, 128 GB RAM?
I'm talking about caching and Oracle's inner workings, not a memory-based filesystem. Assume OLTP and a 100 GB working set.
Assuming you are talking about Oracle using the memory for caching and other processes, and not a memory-based filesystem (which is an awful idea): more memory is almost always better than less memory.
The real-world answer is "it depends." If your working set of data is a few GB or less, the extra memory won't help as much.
How much memory you need, and when extra memory stops helping, depends on your application and the style of database (OLTP, DSS); there is no simple yes/no answer.
Use the views V$SGA_TARGET_ADVICE and V$PGA_TARGET_ADVICE to predict the performance improvement of additional memory.
Oracle records many statistics about physical (disk) and logical (total) I/O requests. People used to obsess over the buffer cache hit ratio. It can be helpful but that number doesn't tell the whole story. If the ratio is 99% then your cache is probably sufficient and adding more memory won't help. If it's low then you might benefit from more memory, or perhaps the processes that use disk aren't time critical.
Be careful before you request more memory. I've seen a lot of memory wasted because some people assume more memory will solve everything. Oracle has many I/O features to help reduce memory requirements. The "in-memory database" fad is mostly hype.
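If you want to read those advisor views programmatically, a minimal sketch with the python-oracledb driver could look like the following. The connection details are placeholders, and the column names follow the documented V$SGA_TARGET_ADVICE / V$PGA_TARGET_ADVICE definitions, so verify them against your Oracle version.

# Read Oracle's memory advisors with python-oracledb.
# Connection details are placeholders; check the view columns for your version.
import oracledb

conn = oracledb.connect(user="system", password="...", dsn="dbhost/ORCLPDB1")

with conn.cursor() as cur:
    cur.execute("""
        SELECT sga_size, sga_size_factor, estd_db_time, estd_physical_reads
        FROM v$sga_target_advice
        ORDER BY sga_size
    """)
    for size_mb, factor, db_time, phys_reads in cur:
        print(f"SGA {size_mb} MB (x{factor}): est. DB time {db_time}, "
              f"est. physical reads {phys_reads}")

    cur.execute("""
        SELECT pga_target_for_estimate, pga_target_factor,
               estd_pga_cache_hit_percentage
        FROM v$pga_target_advice
        ORDER BY pga_target_for_estimate
    """)
    for target_bytes, factor, hit_pct in cur:
        print(f"PGA {target_bytes} bytes (x{factor}): est. cache hit {hit_pct}%")

conn.close()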

Storage capacity of in-memory database?

Is the storage capacity of an in-memory database limited to the size of RAM? If yes, is there any way to increase its capacity other than increasing RAM? If no, please give some explanation.
As previously mentioned, in-memory storage capacity is limited by the addressable memory, not by the amount of physical memory in the system. Simon was also correct that the OS will swap memory to the page file, but you really want to avoid that. In the context of the DBMS, the OS will do a worse job of it than if you simply used a persistent database with as large a cache as you have physical memory to support. In other words, the DBMS will manage its cache more intelligently than the OS would manage paged memory containing in-memory database content.
On a 32-bit system, each process is limited to about 3 GB of addressable memory, whether you have 3 GB of physical RAM or 512 MB. If you have more data (including the in-memory DB) and code than will fit into physical RAM, the page file on disk is used to swap out memory that is not currently in use. Swapping does slow everything down, though. There are some tricks you can use to extend that (memory-mapped files, the /3GB switch), but they are not easy to implement.
On 64-bit machines, a process's memory limit is huge - I forget the exact figure, but it's up in the TB range.
VoltDB is an in-memory SQL database that runs on a cluster of 64-bit Linux servers. It has high performance durability to disk for recovery purposes, but tables, indexes and materialized views are stored 100% in-memory. A VoltDB cluster can be expanded on the fly to increase the overall available RAM and throughput capacity without any down time. In a high-availability configuration, individual nodes can also be stopped to perform maintenance such as increasing the server's RAM, and then rejoined to the cluster without any down time.
The design of VoltDB, led by Michael Stonebraker, was for a no-compromise approach to performance and scalability of OLTP transaction processing workloads with full ACID guarantees. Today these workloads are often described as Fast Data. By using main memory, and single-threaded SQL execution code distributed for parallel processing by core, the data can be accessed as fast as possible in order to minimize the execution time of transactions.
There are in-memory solutions that can work with data sets larger than RAM. Of course, this is accomplished by adding some operations on disk. Tarantool's Vinyl, for example, can work with data sets that are 10 to 1000 times the size of available RAM. Like other databases of recent vintage such as RocksDB and Bigtable, Vinyl's write algorithm uses LSM trees instead of B trees, which helps with its speed.
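As a toy illustration of the LSM idea mentioned above (not Vinyl's or RocksDB's actual implementation): writes land in an in-memory memtable, which is flushed as a sorted, immutable run once it grows past a threshold, and reads check the memtable first and then the runs from newest to oldest.

# Toy LSM-style store: illustrates the write path only (no compaction,
# WAL, bloom filters, or on-disk format). Not any real engine's code.
import bisect

class ToyLSM:
    def __init__(self, memtable_limit=4):
        self.memtable = {}        # mutable, in-memory writes
        self.runs = []            # sorted, immutable (key, value) lists
        self.memtable_limit = memtable_limit

    def put(self, key, value):
        self.memtable[key] = value
        if len(self.memtable) >= self.memtable_limit:
            self._flush()

    def _flush(self):
        # Flush the memtable as a new sorted run; on disk this would be
        # a sequential file write, which is why LSM writes are fast.
        self.runs.append(sorted(self.memtable.items()))
        self.memtable = {}

    def get(self, key):
        if key in self.memtable:
            return self.memtable[key]
        # Search newest run first so fresher values win.
        for run in reversed(self.runs):
            i = bisect.bisect_left(run, (key,))
            if i < len(run) and run[i][0] == key:
                return run[i][1]
        return None

store = ToyLSM()
for i in range(10):
    store.put(f"k{i}", f"v{i}")
print(store.get("k3"), len(store.runs))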

High performance persistent key value store for huge amount of records

The scenario is about 1 billion records. Each record is about 1 KB in size and is stored on SSD.
Which KV store can provide the best random-read performance? It needs to reduce disk access to only one read per query, and the entire data index will be stored in memory.
Redis is fast, but it's too expensive to store 1 TB of data in memory.
LevelDB reads disk several times per query.
The closest one I found is fatcache but it's not persistent. It's an SSD-backed memcached.
Any suggestions?
RocksDB might be the choice for you; it is optimized for fast storage like memory and flash disk, and it's highly customizable. If your application is read-only after an initial bulk load, you can configure RocksDB to compact everything into one single big file. That way, reads are guaranteed to need at most a single I/O. However, if your application handles both reads and writes, then in order to have at most one I/O per read you will need to sacrifice write performance, because you have to configure RocksDB to compact very often, and that hurts writes.
A tuning guide for RocksDB can also be found here.
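As a sketch of that read-oriented setup, assuming the python-rocksdb binding, it might look like the following; the option values (bloom filter bits, cache size) are illustrative placeholders to tune for your own data set.

# Read-optimized RocksDB setup, assuming the python-rocksdb binding.
# Option values are illustrative placeholders, not recommendations.
import rocksdb

opts = rocksdb.Options()
opts.create_if_missing = True
opts.max_open_files = -1                            # keep all table files open
opts.table_factory = rocksdb.BlockBasedTableFactory(
    filter_policy=rocksdb.BloomFilterPolicy(10),    # skip SSTs that can't hold the key
    block_cache=rocksdb.LRUCache(8 * 1024 ** 3),    # large block cache (8 GB here)
)

db = rocksdb.DB("bench.db", opts)
db.put(b"user:42", b"value")
print(db.get(b"user:42"))

# For a read-only data set, a full compaction after the bulk load keeps
# point lookups close to a single I/O.
db.compact_range()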
You may want to try RocksDB; it's a Facebook library optimized for SSD storage. You could also try Ardb, a Redis-protocol-compatible NoSQL DB built on RocksDB/LevelDB/LMDB.
Have you looked at Aerospike? I haven't used it, but they claim good performance on SSD.
LMDB is faster than RocksDB and uses about a third as much memory. LMDB also requires no tuning; RocksDB requires careful tuning of over 40 parameters to get performance that approaches LMDB's.
http://www.lmdb.tech/bench/inmem/scaling.html
Also, LMDB is fully transactional and 100% crash-proof; RocksDB is neither.
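For comparison, here is minimal LMDB usage via the py-lmdb binding; map_size is just an assumed upper bound on how large the database may grow.

# Minimal LMDB usage via the py-lmdb binding. map_size is an assumed
# upper bound on database growth (it is not pre-allocated RAM).
import lmdb

env = lmdb.open("bench-lmdb", map_size=2 * 1024 ** 3)   # up to 2 GB

with env.begin(write=True) as txn:     # ACID write transaction
    txn.put(b"user:42", b"value")

with env.begin() as txn:               # read-only transaction
    print(txn.get(b"user:42"))

env.close()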

Will performance of a SQL server degrade if the DB can't fit in the memory?

Will the performance of a SQL server drastically degrade if the database is bigger than the RAM? Or does only the index have to fit in the memory? I know this is complex, but as a rule of thumb?
Only the working set or common data or currently used data needs to fit into the buffer cache (aka data cache). This includes indexes too.
There is also the plan cache, network buffers, and other stuff too. MS have put a lot of work into memory management in SQL Server, and it works well, IMHO.
Generally, more RAM will help but it's not essential.
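If you want to see how much of each database actually sits in the buffer cache, a small sketch using pyodbc and the sys.dm_os_buffer_descriptors DMV (the connection string is a placeholder) can help.

# Per-database buffer-pool usage on SQL Server (each cached page is 8 KB).
# The connection string is a placeholder.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=master;Trusted_Connection=yes;"
)
cur = conn.cursor()
cur.execute("""
    SELECT DB_NAME(database_id) AS db, COUNT(*) * 8 / 1024 AS cached_mb
    FROM sys.dm_os_buffer_descriptors
    GROUP BY database_id
    ORDER BY cached_mb DESC
""")
for db, cached_mb in cur.fetchall():
    print(f"{db or 'resource db'}: {cached_mb} MB in buffer cache")
conn.close()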
Yes, when indexes can't fit in memory or when doing full table scans. Doing aggregate functions over data that isn't in memory will also require many (and possibly random) disk reads.
For some benchmarks:
"Query time will depend significantly on whether the affected data currently resides in memory or disk access is required. For disk intensive operations, the characteristics of the disk sequential and random I/O performance are also important."
http://www.sql-server-performance.com/articles/per/large_data_operations_p7.aspx
Therefore, don't expect the same performance if your DB size > RAM size.
Edit:
http://highscalability.com/ is full of examples like:
Once the database doesn't fit in RAM you hit a wall.
http://highscalability.com/blog/2010/5/3/mocospace-architecture-3-billion-mobile-page-views-a-month.html
Or here:
Even if the DB size is just 10% bigger than RAM size this test shows a 2.6 times drop in performance.
http://www.mysqlperformanceblog.com/2010/04/08/fast-ssd-or-more-memory/
Remember, though, that this applies to hot data: data that you want to query over and can't cache. If you can cache it, you can easily live with significantly less memory.
All DB operations ultimately have to be backed by writes to disk; having more RAM is helpful, but not essential.
Loading the whole database into RAM is not practical. Databases can be terabytes in size these days, and there is little chance anyone would buy that much RAM. I think performance can still be fine even if the available RAM is one tenth the size of the database.

How much faster is a database running in RAM?

I"m looking to run PostgreSQL in RAM for performance enhancement. The database isn't more than 1GB and shouldn't ever grow to more than 5GB. Is it worth doing? Are there any benchmarks out there? Is it buggy?
My second major concern is: How easy is it to back things up when it's running purely in RAM. Is this just like using RAM as tier 1 HD, or is it much more complicated?
It might be worth it if your database is I/O bound. If it's CPU-bound, a RAM drive will make no difference.
But first things first: you should make sure that your database is properly tuned; you can get huge performance gains that way without losing any guarantees. Even a RAM-based database will perform badly if it's not properly tuned. See the PostgreSQL wiki on this, mainly shared_buffers, effective_cache_size, the checkpoint_* settings, and default_statistics_target.
Second, if you want to avoid synchronizing disk buffers on every commit (as codeka explained in his comment), disable the synchronous_commit configuration option. When your machine loses power, this will lose the most recent transactions, but your database will still be 100% consistent. In this mode, RAM will be used to buffer all writes, including writes to the transaction log. So with very infrequent checkpoints and large shared_buffers and wal_buffers, it can actually approach speeds close to those of a RAM drive.
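A minimal sketch of applying those settings with psycopg2 follows; the values are illustrative placeholders, ALTER SYSTEM needs superuser rights, and shared_buffers/wal_buffers only take effect after a restart.

# Apply the settings discussed above via ALTER SYSTEM with psycopg2.
# Values are illustrative placeholders; ALTER SYSTEM needs superuser rights
# and cannot run inside a transaction block, hence autocommit.
import psycopg2

conn = psycopg2.connect("dbname=postgres user=postgres")   # placeholder DSN
conn.autocommit = True
cur = conn.cursor()

for name, value in [
    ("shared_buffers", "2GB"),         # takes effect only after a restart
    ("wal_buffers", "64MB"),           # also requires a restart
    ("effective_cache_size", "6GB"),
    ("synchronous_commit", "off"),     # trades the newest commits for latency
]:
    cur.execute(f"ALTER SYSTEM SET {name} = %s", (value,))

cur.execute("SELECT pg_reload_conf()")  # picks up the reloadable settings
conn.close()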
Also hardware can make a huge difference. 15000 RPM disks can, in practice, be 3x as fast as cheap drives for database workloads. RAID controllers with battery-backed cache also make a significant difference.
If that's still not enough, then it may make sense to consider turning to volatile storage.
Whether to hold your database in memory depends on size and performance, as well as how robust you want it to be with writes. I assume you are writing to your database and that you want to persist the data in case of failure.
Personally, I would not worry about this optimization until I ran into performance issues. It just seems risky to me.
If you are doing a lot of reads and very few writes, a cache might serve your purpose; many ORMs come with one or more caching mechanisms.
From a performance point of view, clustering across a network to another DBMS that does all the disk writing seems a lot less efficient than just having a regular DBMS tuned to keep as much as possible in RAM.
Actually... as long as you have enough memory available, your database will already be fully running in RAM. Your filesystem will completely buffer all the data, so it won't make much of a difference.
But... there is of course always a bit of overhead, so you can still try running it all from a RAM drive.
As for backups, it's just like any other database. You could use the normal Postgres dump utilities to back up the system. Or, even better, let it replicate to another server as a backup.
An in-memory DBMS can be 5 to 40 times faster than a disk-resident DBMS. Check out Gartner's Magic Quadrant for Operational DBMSs 2013.
Gartner shows which vendors are strong and, more importantly, notes severe cautions: bugs, errors, lack of support, and products that are hard to use.
