What is the cache strategy of TDengine? LRU, LFU, or something else?
How can I configure it?
I found these configuration parameters in taos.cfg:
cache
blocks
cacheLast
I think TDengine uses LRU as the default cache replacement policy. I'm not sure whether other strategies are implemented, so I'm not sure whether it's configurable.
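For reference, a taos.cfg excerpt touching those three parameters might look like the sketch below; the values are illustrative (roughly the 2.x defaults as I understand them), so verify them against the docs for your version:

    # taos.cfg (TDengine 2.x) -- illustrative values, check your version's docs
    cache      16    # size of one memory block per vnode, in MB
    blocks     6     # number of cache blocks allocated per vnode
    cacheLast  1     # 1 = keep each table's last row cached for fast last-row queries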
Related
I have a Flink app with a RichAsyncFunction that does async queries to an external database. The function is using a Guava cache. This works perfectly; however, the cache currently doesn't get included in checkpoints/savepoints. Is there a way I can get the cache data included in checkpoints/savepoints?
I notice that RichAsyncFunction doesn't support the state functionality at all. Does this mean I can't serialize my cache to checkpoints/savepoints?
There is one Guava cache for the entire Flink app, which might make this scenario simpler.
Is there a recommended way to handle this situation?
FYI, I need lock-free concurrency support including check-and-set, which is offered by both java.util.concurrent.ConcurrentMap and Guava's cache, but not Flink's MapState. Is it in line with Flink best practices to use a Guava cache?
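Not an answer to the checkpointing part, but for what it's worth, here is a minimal sketch of the lock-free check-and-set the question mentions, via the java.util.concurrent.ConcurrentMap view that Guava's cache exposes (the class name, key, and value are hypothetical):

    import com.google.common.cache.Cache;
    import com.google.common.cache.CacheBuilder;

    public class GuavaCasSketch {
        public static void main(String[] args) {
            // Guava caches are safe for concurrent use and expose a
            // ConcurrentMap view via asMap().
            Cache<String, String> cache = CacheBuilder.newBuilder()
                    .maximumSize(10_000)
                    .build();

            // Lock-free check-and-set: putIfAbsent returns null only when
            // this call installed the value, otherwise the existing value.
            String previous = cache.asMap().putIfAbsent("user-42", "resolved-value");
            System.out.println(previous == null ? "installed" : "already cached: " + previous);
        }
    }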
In a multi-node JanusGraph cluster, a data modification made on one instance is not visible to the others until the configured expiry time (cache.db-cache-time) is reached.
The documentation [1] recommends against enabling the database-level cache in a distributed setup, because cached data is not shared among instances.
Any suggestions for a solution/workaround so that I can see data changes from other JanusGraph instances immediately and avoid stale reads?
[1] https://docs.janusgraph.org/operations/cache/#cache-expiration-time
If you want immediate access to the most up-to-date version of the data, then by definition you cannot cache any of it.
The contents of the cache will be served as long as they have not expired or been evicted. Unfortunately, there is no way around this if consistency is your top priority. Cheers!
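If stale reads are unacceptable, the usual workaround is to turn the database-level cache off entirely and pay the extra latency of reading from the storage backend. A hedged janusgraph.properties excerpt (values illustrative):

    # janusgraph.properties -- disable the shared database-level cache so
    # reads always go to the storage backend (freshness over latency)
    cache.db-cache = false
    # alternatively, keep it on and bound staleness with a short expiry (ms)
    # cache.db-cache-time = 10000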
I need to set up a caching system which is able to cache a few GB of data without consuming too much RAM. Is there a way to configure Redis to spill to disk automatically when it reaches X RAM usage? If not, is there any in-memory database in which this is possible?
If you need a caching system, why do you want to spill over to disk rather than just evicting (probably old and unused) entries from the cache?
Redis has various strategies for when memory reaches a configured limit.
Please read the Redis documentation about memory configuration and eviction policies.
Bottom line: in redis.conf it is possible to configure a memory limit and an eviction policy (volatile-lru or allkeys-lru will probably help here).
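Concretely, a redis.conf excerpt along these lines should do it (the 4gb limit is just an example):

    # redis.conf -- cap memory and evict least-recently-used keys when full
    maxmemory 4gb
    # allkeys-lru may evict any key; volatile-lru only evicts keys with a TTL
    maxmemory-policy allkeys-lru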
No, you can't.
Redis stores all data in memory; disk storage is only used for persistence and recovery.
Maybe you can try another key-value store, e.g. RocksDB or LevelDB.
I have a per-user DB architecture set up as follows: there are around 200 user DBs, and each has a continuous replication link to the master CouchDB (all within the same CouchDB instance). The problem is that CPU usage is always close to 100%.
The DBs are idle, so no data is being written to or read from them. There are only a few KB of data per DB, so I don't think data volume is the issue at this point; the master DB is less than 10 MB.
How can I go about debugging this performance issue?
You should have a look at https://github.com/redgeoff/spiegel - it's a tool to handle many CouchDB replications in a scalable way. Basically it achieves that by listening to the _global_changes endpoint and creating replications only when needed.
In recent CouchDB versions (2.1.0+), the replicator has been improved, but I think for replicating per-user databases it still makes sense to use an external mechanism like Spiegel to manage the number of active replications.
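As a quick sanity check that the replications are what is burning CPU, you can count the active replication tasks; the host and credentials below are placeholders, and the _global_changes feed that Spiegel watches must be enabled in your CouchDB 2.x configuration:

    # count currently running replication tasks (placeholder host/credentials)
    curl -s 'http://admin:pass@localhost:5984/_active_tasks' \
      | grep -o '"type":"replication"' | wc -l

    # the feed Spiegel listens to instead of keeping ~200 replications open
    curl -s 'http://admin:pass@localhost:5984/_global_changes/_changes?feed=continuous&since=now'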
Just as a reminder, there are some security flaws in CouchDB 2.1.0, and you might need to upgrade to 2.1.1. Maybe you've even been hacked, like this one.
I see that memcached is used in many cases. Can you give examples of when to avoid memcached, other than for large files? How large a file is appropriate for memcached?
Thanks for your answer.
If you know when to bust the caches to prevent out-of-date data from being served, there's not really a reason to avoid memcached for anything small, unless the value is so trivial to compute that hitting memcached would take roughly as long as just computing it.
I have seen Memcached used to store session data. In my view, storing sessions in Memcached is not recommended: if a session disappears, the user is often logged out, whereas if a portion of a cache disappears (due to a hardware crash, for instance), it should not cause your users noticeable pain. According to the memcached site, "memcached is a high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load." So while developing your application, remember that you must have a fall-back mechanism to retrieve data that is not found in the Memcached server (see the sketch after the tips below). Here are some tips:
Avoid storing frequently updated data in Memcached.
Memcached does not ship with built-in security mechanisms, so it is your responsibility to handle security.
Try to maintain a predefined, fixed number of connections in the connection pool, because each set/get operation is a new atomic connection to the Memcached server.
Avoid storing raw data coming straight from the database; store processed data instead.
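To make the fall-back point concrete, here is a minimal read-through sketch using the spymemcached client; the loadFromDatabase helper and the 300-second TTL are hypothetical choices, not an official recipe:

    import java.net.InetSocketAddress;
    import net.spy.memcached.MemcachedClient;

    public class ReadThroughSketch {
        public static void main(String[] args) throws Exception {
            MemcachedClient client =
                    new MemcachedClient(new InetSocketAddress("localhost", 11211));

            String key = "user:42:profile";
            Object value = client.get(key);      // may be null: the cache is not durable
            if (value == null) {
                value = loadFromDatabase(key);   // hypothetical fall-back to the DB
                client.set(key, 300, value);     // re-populate with a 300 s TTL
            }
            System.out.println(value);
            client.shutdown();
        }

        // stand-in for the real database lookup (returns processed data,
        // per the tip above about not caching raw DB rows)
        private static Object loadFromDatabase(String key) {
            return "processed-value-for-" + key;
        }
    }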