I am using Redis as a datastore rather than a cache, but there is a maxmemory limit set. My understanding is that maxmemory specifies the RAM Redis can use, so shouldn't it swap the data back to disk once the memory limit is reached?
I have a mixture of keys; some have an expiry set and others don't.
I have tried both volatile-lru and allkeys-lru; as specified in the documentation, both remove old keys based on that property.
What configuration should I use to avoid data loss? Should I set an expiry on all keys and use volatile-lru? What am I missing?
Swapping out memory to disk (virtual memory) was deprecated in Redis 2.4 and removed in 2.6. Most likely, you are not using such an old version.
You control what Redis does when memory is exhausted with maxmemory and maxmemory-policy. Both are settings in redis.conf. Take a look. Swapping memory out to disk is not an option in recent Redis versions.
If Redis can't remove keys according to the policy, or if the policy is set to 'noeviction', Redis will start to reply with errors to commands that would use more memory, like SET, LPUSH, and so on, and will continue to reply to read-only commands like GET.
If maxmemory is reached, you lose data only if the eviction policy set in maxmemory-policy tells Redis to evict some keys and how to select them (volatile or all keys; LFU/LRU/TTL/random). Otherwise, Redis starts rejecting write commands to preserve the data already in memory; read commands continue to be served.
You can run Redis without a maxmemory setting (the default), in which case it will keep using memory until the OS memory is exhausted.
If your operating system has virtual memory enabled, and the maxmemory setting allows Redis to go over the physical memory available, then your OS (not Redis) starts to swap out memory to disk. You can expect a performance drop then.
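As a rough sketch of the configuration that avoids data loss (assuming the redis-py client; the 100mb limit is just an example value, not something from the question): with noeviction, Redis starts rejecting writes once maxmemory is reached instead of silently evicting keys.

import redis

r = redis.Redis(host="localhost", port=6379)

# Equivalent to putting these lines in redis.conf:
#   maxmemory 100mb
#   maxmemory-policy noeviction
r.config_set("maxmemory", "100mb")
r.config_set("maxmemory-policy", "noeviction")

try:
    r.set("some:key", "value")
except redis.exceptions.ResponseError as err:
    # Once the limit is hit, write commands fail with an OOM error,
    # while read-only commands such as GET keep working.
    print("write rejected:", err)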
In general as a rule of thumb:
Use the allkeys-lru policy when you expect a power-law distribution in the popularity of your requests, that is, you expect that a subset of elements will be accessed far more often than the rest. This is a good pick if you are unsure.
Use allkeys-random if you have cyclic access where all the keys are scanned continuously, or when you expect the distribution to be uniform (all elements likely accessed with the same probability).
Use volatile-ttl if you want to be able to provide hints to Redis about which keys are good candidates for expiration, by using different TTL values when you create your cache objects.
The volatile-lru and volatile-random policies are mainly useful when you want to use a single instance for both caching and to have a set of persistent keys. However it is usually a better idea to run two Redis instances to solve such a problem.
As given in the documentation
Using Redis as an LRU cache
Do not set that param if you're using Redis as a datastore; it is meant for the cache scenario.
I have a particular use case for multiple in-memory key-value maps that need very fast lookup time. They are set just once a day, so they can be considered immutable for all practical purposes. Redis is not an option, since it gets CPU-throttled when multiple threads access it. Multi-instance Redis takes up too much memory because of data replication. The important thing to consider here is that the read rate is very high in bursts: around 10 million requests in bursts from around 40-50 workers simultaneously.
I was thinking of creating a simple client-server architecture with multiple readers connecting to a server to read from shared-memory maps. However, I wonder if such an architecture already exists and has been tested thoroughly for this use case, in which case I should not be reinventing the wheel.
So, to sum up, what is my best alternative? TIA.
Might not be suitable for you but you could try RBLDNSD and store your values in DNS. It's high performance and results will be cached, and it's easy to read the values from pretty much any programming environment. To write values to it you'll need to write directly to its zone files, but the format is simple and easy to write.
You don't mention the size of your maps, but given that performance is so critical, it sounds like you may want to consider keeping copies of your 'multiple in memory key value maps' with each worker.
You could then implement a simple mechanism to notify each worker that it's time to refresh their maps (e.g. Redis PUBLISH, or any other pub/sub framework).
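A minimal sketch of that notify-and-refresh idea, assuming the redis-py client (the channel name and the load_maps callback are illustrative, not part of the answer):

import redis

r = redis.Redis()

def publish_refresh():
    # Run once a day, after the new maps have been generated.
    r.publish("maps:refresh", "reload")

def worker_loop(load_maps):
    # Each worker keeps its own in-process copy of the maps, so lookups
    # never leave local memory; it only reloads when told to.
    maps = load_maps()
    pubsub = r.pubsub()
    pubsub.subscribe("maps:refresh")
    for message in pubsub.listen():
        if message["type"] == "message":
            maps = load_maps()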
At the risk of running afoul of the Stack Overflow self-promotion police :-) eXtremeDB might be a consideration. It's not schema-less, but your schema can simply define a key-value pair. It supports MVCC (optimistic, non-blocking) concurrency, so even the relatively infrequent writes won't get in the way of readers, and you'll be able to utilize all the CPU cores.
I am looking at using a spell checker for my GAE app and we have an algorithm already for spell checking, but I'm trying to figure out how to best store and load dictionary files for best performance.
I am considering the following strategies:
Place the dictionary data in a text file(s) in local App Engine storage and load/read them using standard IO methods (open(), read(), etc.)
Place the dictionary data in GCS and load/read using GCS IO methods
Place the dictionary data in an ndb.model() and load/cache information
One cache I don't quite understand is the context cache -- is this a cache that is attached to a given instance? I.e. if I have a resident instance that is spun up, can I go ahead and load the dictionary data into the instance's RAM, so that accessing the data is extremely fast (microsecond vs millisecond seek/get times)? The dictionary data will probably be a sharded list of some sort that we'll optimize for performance. Are there other data storage methods/structures I'm not considering here that may be more appropriate? Thanks.
Cache (or, by its full name, memcache) isn't exactly RAM, but it's similar. When used with NDB it acts like a buffer: when you do writes, it writes to memcache first and then to the DB. Though this may sound slower, it's not, as writes to the DB take a while before they become accessible. When it reads, it checks memcache; if the entry exists it uses that, otherwise it pulls from the DB, stores the result in memcache, and then gives you the data. Just like RAM, though, it's volatile, so you cannot guarantee the information will always be available; it's limited (depending on what type of instance you have) and can be flushed with no warning or reason. You can read more here:
https://developers.google.com/appengine/docs/python/memcache/
https://developers.google.com/appengine/articles/scaling/memcache
Ultimately memcache will be the fastest and most accessible option, as it is shared amongst all your instances, so if one instance pulls some data from the datastore then all of them can access it quickly. Even if the data is not yet in memcache, it is still the fastest of the options, as the other ones will fill up your instance memory and may cause errors and performance issues.
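A hedged sketch of the check-the-cache-then-fall-back pattern described above, using the App Engine Python memcache API (the shard key scheme and the loader function are illustrative):

from google.appengine.api import memcache

def get_dictionary_shard(shard_id):
    key = "dict-shard:%s" % shard_id
    data = memcache.get(key)
    if data is None:
        # Cache miss: fall back to the datastore (or GCS), then populate
        # memcache so every other instance gets fast access too.
        data = load_shard_from_datastore(shard_id)  # hypothetical loader
        memcache.set(key, data)
    return data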
I see that in many cases memcached is used. Can you give examples of when to avoid memcached, other than for large files? What file sizes are appropriate for memcached?
Thanks for your answer
If you know when to bust the caches to prevent out-of-date things from being cached, there's not really a reason to avoid memcache for anything small, unless it's so trivial to compute that it'd take approximately as long to hit memcache as it would to just compute it.
I have seen Memcached used to store session data. In my view, storing sessions in Memcached is not recommended: if a session disappears, the user is often logged out, whereas if a portion of the cache disappears (say, due to a hardware crash) it should not cause your users noticeable pain. According to the memcached site, “memcached is a high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load.” So while developing your application, remember that you must have a fall-back mechanism to retrieve the data when it is not found in the Memcached server. Here are some tips:
Avoid storing frequently updated data in Memcached.
Memcached does not ship with built-in security mechanisms, so it is your responsibility to handle security issues.
Try to maintain a predefined, fixed number of connections in the connection pool, because each set/get operation is a new atomic connection to the Memcached server.
Avoid storing raw data coming straight from the database; store processed data instead.
More specifically, are there any databases that don't require secondary storage (e.g. HDD) to provide durability?
Note: This is a follow-up to my earlier question.
If you want persistence of transactions, writing to persistent storage is the only real option (you probably do not want to build many clusters with independent power supplies in independent data centers and still pray that they never fail simultaneously). On the other hand, it depends on how valuable your data is. If it is dispensable, then a pure in-memory DB with sufficient replication may be appropriate. BTW, even an HDD may fail after you have stored your data on it, so there is no ideal solution. You may look at http://www.julianbrowne.com/article/viewer/brewers-cap-theorem to choose replication tradeoffs.
Prevayler http://prevayler.org/ is an example of an in-memory system backed up with persistent storage (and the code is extremely simple, BTW). Durability is provided via transaction logs that are persisted on an appropriate device (e.g. HDD or SSD). Each transaction that modifies data is written to the log, and the log is used to restore DB state after a power failure or a database/system restart. Aside from Prevayler, I have seen a similar scheme used to persist message queues.
This is indeed similar to how a "classic" RDBMS works, except that the logs are the only data written to the underlying storage. The logs can also be used for replication, so you may send one copy of the log to a live replica and another to the HDD. Various combinations are possible, of course.
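A minimal sketch of that transaction-log idea (not Prevayler's actual API; the class and file layout are made up for illustration): each mutation is forced to disk before it is applied in memory, and replaying the log on startup rebuilds the in-memory state.

import json
import os

class JournaledDict:
    def __init__(self, log_path):
        self.data = {}
        self.log_path = log_path
        if os.path.exists(log_path):
            with open(log_path) as log:
                for line in log:            # replay the journal on startup
                    op = json.loads(line)
                    self.data[op["key"]] = op["value"]
        self.log = open(log_path, "a")

    def put(self, key, value):
        record = json.dumps({"key": key, "value": value})
        self.log.write(record + "\n")
        self.log.flush()
        os.fsync(self.log.fileno())         # force the write onto the disk first
        self.data[key] = value              # only then mutate the in-memory state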
All databases require non-volatile storage to ensure durability. A memory image does not provide a durable storage medium: very shortly after you lose power, your memory image becomes invalid. Likewise, as soon as the database process terminates, the operating system releases the memory containing the in-memory image. In either case, you lose your database contents.
Until the changes have been written to non-volatile memory, they are not truly durable. This may consist of either writing all the data changes to disk or writing a journal of the changes being made.
In space- or size-critical installations, non-volatile memory such as flash could be substituted for an HDD. However, flash is reported to have issues with the number of write cycles it can sustain.
Having reviewed your previous post, multi-server replication would work as long as you can keep that last server running. As soon as it goes down, you lose your queue. However, there are a number of alternatives to Oracle which could be considered.
PDAs often use battery backed up memory to store their databases. These databases are non-durable once the battery runs down. Backups are important.
In-memory means all the data is stored in memory so it can be accessed quickly. When data is read, it can either be read from disk or from memory; in the case of in-memory databases, it is always retrieved from memory. However, if the server is turned off suddenly, the data will be lost. Hence, in-memory databases are said to lack support for the durability part of ACID. Nevertheless, many databases implement techniques to achieve durability. These techniques are listed below.
Snapshotting - Record the state of the database at a given moment in time. In the case of Redis, for example, the dataset is periodically dumped to disk at configurable save points for durability (see the sketch after this list).
Transaction Logging - Changes to the database are recorded in a journal file, which facilitates automatic recovery.
Use of NVRAM, usually in the form of static RAM backed up by battery power. In this case, data can be recovered from its last consistent state after a reboot.
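As a hedged illustration of how Redis exposes the first two techniques (assuming the redis-py client; the values shown are common defaults, not recommendations):

import redis

r = redis.Redis()

# Snapshotting (RDB): dump the whole dataset whenever a save point triggers,
# e.g. "after 60 seconds if at least 10000 keys changed".
r.config_set("save", "900 1 300 10 60 10000")

# Transaction logging (AOF): append every write to a journal and fsync it
# roughly once per second, so at most about a second of writes can be lost.
r.config_set("appendonly", "yes")
r.config_set("appendfsync", "everysec")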
A classic in-memory database can't provide classic durability, but depending on what your requirements are you can:
use memcached (or similar) to store data in memory across enough nodes that it's unlikely the data is lost
store your Oracle database on a SAN-based filesystem; you can give it enough RAM (say 3 GB) that the whole database is in RAM, so disk seeks never slow your application down. The SAN then takes care of delayed write-back of the cache contents to disk. This is a very expensive option, but it is common in places where high performance and high availability are needed and can be afforded.
if you can't afford a SAN, mount a RAM disk and install your database on there, then use DB-level replication (like log shipping) to provide failover.
Any reason why you don't want to use persistent storage?
Here's the deal. We would have taken the complete static html road to solve performance issues, but since the site will be partially dynamic, this won't work out for us.
What we have thought of instead is using memcache + eAccelerator to speed up PHP and take care of caching for the most used data.
Here's our two approaches that we have thought of right now:
Using memcache on >>all<< major queries and leaving it alone to do what it does best.
Using memcache for the most commonly retrieved data, and combining it with a standard hard-drive-stored cache for further usage.
The major advantage of only using memcache is of course the performance, but as the number of users increases, the memory usage gets heavy. Combining the two sounds like a more natural approach to us, even with the theoretical compromise in performance.
Memcached appears to have some replication features available as well, which may come in handy when it's time to increase the number of nodes.
What approach should we use?
- Is it stupid to compromise and combine the two methods? Should we instead focus on utilizing memcache and on upgrading the memory as the load increases with the number of users?
Thanks a lot!
Compromising and combining these two methods is a very clever way to go, I think.
The most obvious cache-management rule is the latency vs. size rule, which is used in CPU caches as well. In multi-level caches, each next level should have more capacity to compensate for its higher latency: we accept higher latency in exchange for a higher cache hit ratio. So I don't recommend placing a disk-based cache in front of memcache; conversely, it should be placed behind memcache. The only exception is if your cache directory is mounted in memory (tmpfs). In that case, the file-based cache could take load off memcache and could also have latency benefits (because of data locality).
These two storages (file-based, memcache) are not the only ones convenient for caching. You could also use almost any KV database, as they are very good at concurrency control.
Cache invalidation is a separate question that deserves your attention. There are several tricks you can use to update the cache more gracefully on cache misses. One of them is dog-pile effect prevention: if several concurrent threads get a cache miss simultaneously, all of them go to the backend (database). The application should allow only one of them to proceed, and the rest should wait on the cache. The second is background cache updates: it's nice to update the cache not in the web-request thread but in the background, where you can control the concurrency level and update timeouts more gracefully.
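A rough sketch of that dog-pile guard, assuming the python-memcached client (the key names, TTLs, and the recompute callback are illustrative): add() is atomic, so only one of the concurrent callers that missed the cache wins the lock and goes to the backend, while the rest poll the cache.

import time
import memcache

mc = memcache.Client(["127.0.0.1:11211"])

def get_with_dogpile_guard(key, recompute, ttl=300, lock_ttl=30):
    value = mc.get(key)
    if value is not None:
        return value
    if mc.add(key + ":lock", 1, time=lock_ttl):   # atomic: only one caller wins
        value = recompute()                        # this caller hits the backend
        mc.set(key, value, time=ttl)
        mc.delete(key + ":lock")
        return value
    while value is None:                           # everyone else waits on the cache
        time.sleep(0.05)
        value = mc.get(key)
    return value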
Actually there is one cool method that allows you to do tag-based cache tracking (memcached-tag, for example). It's very simple under the hood: with every cache entry you save a vector of the tag versions it belongs to (for example: {directory#5: 1, user#8: 2}). When you read a cache line, you also read the current versions of all its tags from memcached (this can be done efficiently with a multiget). If at least one current tag version is greater than the version saved in the cache line, the cache entry is invalid. And when you change an object (for example a directory), the appropriate tag version should be incremented. It's a very simple and powerful method, but it has its own disadvantages. In this scheme you can't perform efficient cache invalidation: memcached could easily drop live entries while keeping stale ones.
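A condensed sketch of that tag-versioning scheme, again assuming the python-memcached client (the tag names and the entry layout are illustrative, not memcached-tag's actual API):

import memcache

mc = memcache.Client(["127.0.0.1:11211"])

def tag_versions(tags):
    # Current version of every tag in one multiget; missing tags count as 0.
    found = mc.get_multi(tags)
    return {tag: found.get(tag, 0) for tag in tags}

def cache_set(key, value, tags, ttl=600):
    # Store the value together with the tag versions it was built against.
    mc.set(key, {"value": value, "tags": tag_versions(tags)}, time=ttl)

def cache_get(key):
    entry = mc.get(key)
    if entry is None:
        return None
    current = tag_versions(list(entry["tags"]))
    if any(current[tag] > version for tag, version in entry["tags"].items()):
        return None                       # some tag was bumped: the entry is stale
    return entry["value"]

def invalidate_tag(tag):
    # Bump the tag version; incr() fails on a missing key, so seed it with add().
    if mc.incr(tag) is None:
        mc.add(tag, 1)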
And of course you should remember: "There are only two hard things in Computer Science: cache invalidation and naming things" - Phil Karlton.
Memcached is quite a scalable system. For instance, you can replicate the cache to decrease access time for certain key buckets, or implement the Ketama algorithm, which lets you add/remove Memcached instances from the pool without remapping all keys. In this way, you can easily add new machines dedicated to Memcached when you happen to have extra memory. Furthermore, as instances can be run with different sizes, you can bring one up by adding more RAM to an old machine. Generally, this approach is more economic and to some extent not inferior to the first one, especially for multiget() requests. Regarding a performance drop with data growth: the runtime of the algorithms used in Memcached does not vary with the size of the data, so access time depends only on the number of simultaneous requests. Finally, if you want to tune your memory/performance priorities, you can set expire times and memory-limit configuration values, which will restrict RAM usage or increase cache hits.
At the same time, when you use a hard disk, the file system can become a bottleneck for your application. Besides general I/O latency, things such as fragmentation and huge directories can noticeably affect your overall request speed. Also, be aware that the default Linux hard-disk settings are tuned more for compatibility than for speed, so it is advisable to configure them properly before use (for instance, you can try the hdparm utility).
Thus, before adding one more integration point, I think you should tune the existing system. Usually, a properly designed database, configured PHP, Memcached, and handling of static data should be enough even for a high-load web site.
I would suggest that you first use memcache for all major queries. Then, test to find queries that are least used or data that is rarely changed and then provide a cache for this.
If you can isolate common data from rarely used data, then you can focus on improving performance on the more commonly used data.
Memcached is something that you use when you're sure you need to. You don't worry about it being heavy on memory, because when you evaluate it, you include the cost of the dedicated boxes that you're going to deploy it on.
In most cases putting memcached on a shared machine is a waste of time, as its memory would be better used caching whatever else it does instead.
The benefit of memcached is that you can use it as a shared cache between many machines, which increases the hit rate. Moreover, you can have the cache size and performance higher than a single box can give, as you can (and normally would) deploy several boxes (per geographical location).
Also, the way memcached is normally used depends on a low-latency link from your app servers, so you wouldn't normally use the same memcached cluster across different geographical locations within your infrastructure (each DC would have its own cluster).
The process is:
Identify performance problems
Decide how much performance improvement is enough
Reproduce problems in your test lab, on production-grade hardware with necessary driver machines - this is nontrivial and you may need a lot of dedicated (even specialised) hardware to drive your app hard enough.
Test a proposed solution
If it works, release it to production, if not, try more options and start again.
You should not
Cache "everything"
Do things without measuring their actual impact.
As your performance test environment will never be perfect, you should have sufficient instrumentation / monitoring that you can measure performance and profile your app IN PRODUCTION.
This also means that every single thing that you cache should have a cache hit/miss counter on it. You can use this to determine when the cache is being wasted. If a cache has a low hit rate (< 90%, say), then it is probably not worthwhile.
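A tiny sketch of that per-cache hit/miss instrumentation (the wrapper and its names are illustrative; in production you would export the counters to whatever monitoring you already run):

class InstrumentedCache:
    # Wraps any cache client that exposes get(); counts hits and misses so a
    # low hit rate can be spotted and the cache switched off if it is wasted.
    def __init__(self, backend, name):
        self.backend = backend
        self.name = name
        self.hits = 0
        self.misses = 0

    def get(self, key):
        value = self.backend.get(key)
        if value is None:
            self.misses += 1
        else:
            self.hits += 1
        return value

    def hit_rate(self):
        total = self.hits + self.misses
        return float(self.hits) / total if total else 0.0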
It may also be worth having the individual caches switchable in production.
Remember: OPTIMISATIONS INTRODUCE FUNCTIONAL BUGS. Do as few optimisations as possible, and be sure that they are necessary AND effective.
You can delegate the combination of disk/memory cache to the OS (if your OS is smart enough).
For Solaris, you can actually even add an SSD layer in the middle; this technology is called L2ARC.
I'd recommend you to read this for a start: http://blogs.oracle.com/brendan/entry/test.