Does updating a TLB entry leverage the data cache? - arm

In the ARM CPU architecture, we know that on a TLB miss the CPU will walk the page table to compute the physical address for the required virtual address. My question is: if the page table is cached in the data cache, does the CPU use the page table in the cache or the page table in DRAM when computing the physical address?

The hardware page table walker issues regular memory requests, just like load instructions. These requests therefore pass through the cache hierarchy, and page table entries may get cached in the data caches at any level of the hierarchy (in general). The OS is responsible for maintaining coherence between the data caches and the TLB.
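As an aside, you can observe the result of such a translation from software. The following is a Linux-specific sketch (not from the answer above, and unrelated to the ARM hardware walk itself) that performs the same virtual-to-physical translation a page table walk computes, by reading the kernel's /proc/self/pagemap interface; since Linux 4.0 the PFN field reads as zero unless the process runs as root:

```python
# Translate a virtual address to a physical one via /proc/self/pagemap.
import os
import struct

PAGE_SIZE = os.sysconf("SC_PAGE_SIZE")

def virt_to_phys(vaddr):
    """Look up the page table entry covering vaddr via pagemap."""
    with open("/proc/self/pagemap", "rb") as f:
        f.seek((vaddr // PAGE_SIZE) * 8)          # one 64-bit entry per page
        entry = struct.unpack("<Q", f.read(8))[0]
    if not (entry >> 63) & 1:                     # bit 63: page present in RAM?
        return None
    pfn = entry & ((1 << 55) - 1)                 # bits 0-54: page frame number
    return pfn * PAGE_SIZE + (vaddr % PAGE_SIZE)

buf = bytearray(b"touch me")                      # ensure the page is mapped
print(virt_to_phys(id(buf)))
```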

Related

Why is a distributed in-memory cache faster than a database query?

https://medium.com/@i.gorton/six-rules-of-thumb-for-scaling-software-architectures-a831960414f9 states about distributed caches:
Better, why query the database if you don’t need to? For data that is frequently read and changes rarely, your processing logic can be modified to first check a distributed cache, such as a memcached server. This requires a remote call, but if the data you need is in cache, on a fast network this is far less expensive than querying the database instance.
The claim is that a distributed in-memory cache is faster than querying the database. Looking at Latency Numbers Every Programmer Should Know, it shows that the latencies of the operations compare like this: Main memory reference <<< Round trip within same datacenter < Read 1 MB sequentially from SSD <<< Send packet CA->Netherlands->CA.
I interpret a network call to access the distributed cache as "Send packet CA->Netherlands->CA", since the cached data may not be in the same datacenter. Am I wrong? Should I assume the replication factor is high enough that cached data is available in every datacenter, so that the comparison between a distributed cache and a database is more like "Round trip within same datacenter" vs "Read 1 MB sequentially from SSD"?
Databases typically require accessing data from disk, which is slow. Although most will cache some data in memory, which makes frequently run queries faster, there are other overheads such as:
query parsing and syntax checking
database permission/authorisation checking
data access plan creation by the optimizer
a quite chatty protocol, especially when multiple rows are returned
All of which add latency.
Caches have none of these overheads. In the general case there are more reads than writes for caches, and the cache always has a value available in memory (unless it is a cold miss). Writing to the cache doesn't stop reads of the current value; synchronised writes just mean a slight delay between the write request and the new value being available everywhere.
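As a concrete illustration of the cache-aside pattern the article describes, here is a minimal Python sketch; the pymemcache client and the query_database() helper are illustrative assumptions, not part of the quoted article or answer:

```python
# Cache-aside: read through the cache first, hit the database only on a miss.
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))

def query_database(user_id):
    # Stand-in for a real SQL query: parsing, planning, execution, I/O.
    return f"row-for-{user_id}".encode()

def get_user(user_id):
    key = f"user:{user_id}"
    value = cache.get(key)              # one fast round trip to the cache
    if value is not None:
        return value                    # hit: the database is never touched
    value = query_database(user_id)     # miss: pay the full database cost once
    cache.set(key, value, expire=300)   # keep it hot for five minutes
    return value

print(get_user(42))
```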

How In-Memory databases avoid use of Virtual Memory

Since memory is managed by the OS, how does an in-memory database process prevent its pages from being moved out of physical memory to swap space on disk?
On some systems, it is possible to pin pages in memory, but this is discouraged - you are defeating the operating system's virtual memory manager, which might benefit the IMDS but be detrimental to overall system performance.
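For concreteness, here is a Linux-specific sketch of that kind of pinning, calling mlockall(2) through ctypes; this is exactly the technique discouraged above, and it needs CAP_IPC_LOCK or a generous RLIMIT_MEMLOCK:

```python
# Pin all current and future pages of this process into physical RAM.
import ctypes

libc = ctypes.CDLL("libc.so.6", use_errno=True)
MCL_CURRENT, MCL_FUTURE = 1, 2          # lock pages mapped now and in future

if libc.mlockall(MCL_CURRENT | MCL_FUTURE) != 0:
    err = ctypes.get_errno()
    raise OSError(err, "mlockall failed; are you missing CAP_IPC_LOCK?")
```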
Our (McObject) recommendation is to ensure that you have enough physical memory so that the operating system does not swap in-memory database pages to the swap space.
If it's not possible to ensure that you have enough physical memory, then you're better off creating a conventional persistent database, creating as large a database cache with the DBMS's facility as you can (again, within the constraints of physical memory), and allowing the DBMS to move pages into and out of its own cache. It will do so more intelligently than the operating system.

Explain analyze buffers - Does it give OS Cache as well

When we run EXPLAIN (ANALYZE, BUFFERS) on a query, the results show how much data comes from the cache and how much comes from disk.
But there are two caching layers in Postgres: the OS cache and shared_buffers itself. Does the query plan show hits from shared_buffers, the OS cache, or both?
There are extensions to see them individually, i.e. pgfincore and pg_buffercache, but what about the data I see in the query plan? Does it belong to shared_buffers, the OS cache, or both of them combined?
Postgres only controls and knows about its own cache. It can't know about the cache management of the operating system.
Does it belong to shared_buffers, the OS cache, or both of them combined?
Those figures only relate to shared_buffers, not the cache of the operating system.
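A minimal sketch of reading those figures, assuming psycopg2 and a hypothetical table t: in the "Buffers:" lines of the output, "shared hit" counts pages served from shared_buffers, while "read" counts pages requested from the kernel, which may have come from the OS page cache or from disk; the plan cannot tell the two apart.

```python
# Run EXPLAIN (ANALYZE, BUFFERS) and print the plan lines.
import psycopg2

conn = psycopg2.connect("dbname=test")
with conn.cursor() as cur:
    cur.execute("EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM t")
    for (line,) in cur.fetchall():
        print(line)
conn.close()
```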

Storage capacity of in-memory database?

Is the storage capacity of an in-memory database limited to the size of RAM? If yes, are there any ways to increase its capacity other than increasing RAM size? If no, please give some explanation.
As previously mentioned, in-memory storage capacity is limited by the addressable memory, not by the amount of physical memory in the system. Simon was also correct that the OS will swap memory to the page file, but you really want to avoid that. In the context of the DBMS, the OS will do a worse job of it than if you simply used a persistent database with as large a cache as you have physical memory to support. In other words, the DBMS will manage its cache more intelligently than the OS would manage paged memory containing in-memory database content.
On a 32-bit system, each process is limited to roughly 2-3 GB of user address space (depending on the OS), whether you have that much RAM physically or only 512 MB. If you have more data (including the in-memory DB) and code than will fit into physical RAM, then the page file on disk is used to swap out memory that is not currently being used. Swapping slows everything down, though. There are some tricks you can use for extending that, such as memory-mapped files and the /3GB switch, but these are not easy to implement.
On 64-bit machines, a process's memory limit is huge - I forget the exact figure, but it's up in the TB range.
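As a toy illustration of the memory-mapped-file trick mentioned above (the file name is made up and the file must already exist): the OS pages the mapping in and out on demand, so the mapped region can be far larger than physical RAM.

```python
# Access a large file through the virtual memory system instead of read()/write().
import mmap

with open("bigdata.bin", "r+b") as f:
    mm = mmap.mmap(f.fileno(), 0)       # map the whole file
    first = mm[0]                       # faults in only the first page
    mm[0] = (first + 1) % 256           # dirtied pages are written back lazily
    mm.close()
```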
VoltDB is an in-memory SQL database that runs on a cluster of 64-bit Linux servers. It has high performance durability to disk for recovery purposes, but tables, indexes and materialized views are stored 100% in-memory. A VoltDB cluster can be expanded on the fly to increase the overall available RAM and throughput capacity without any down time. In a high-availability configuration, individual nodes can also be stopped to perform maintenance such as increasing the server's RAM, and then rejoined to the cluster without any down time.
The design of VoltDB, led by Michael Stonebraker, was for a no-compromise approach to performance and scalability of OLTP transaction processing workloads with full ACID guarantees. Today these workloads are often described as Fast Data. By using main memory, and single-threaded SQL execution code distributed for parallel processing by core, the data can be accessed as fast as possible in order to minimize the execution time of transactions.
There are in-memory solutions that can work with data sets larger than RAM. Of course, this is accomplished by adding some operations on disk. Tarantool's Vinyl, for example, can work with data sets that are 10 to 1000 times the size of available RAM. Like other databases of recent vintage such as RocksDB and Bigtable, Vinyl's write algorithm uses LSM trees instead of B trees, which helps with its speed.
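Here is a deliberately tiny, hypothetical sketch of the LSM idea (names and the threshold are invented, and there is no compaction): writes are buffered in an in-memory memtable and flushed to disk as sorted runs, which is roughly how such engines absorb datasets larger than RAM.

```python
# Toy LSM flavor: RAM-resident memtable plus immutable sorted runs on disk.
import json

class TinyLSM:
    def __init__(self, limit=1000):
        self.memtable = {}              # recent writes live in RAM
        self.runs = []                  # sorted runs already on disk
        self.limit = limit

    def put(self, key, value):
        self.memtable[key] = value
        if len(self.memtable) >= self.limit:
            self._flush()

    def _flush(self):
        path = f"run-{len(self.runs)}.json"
        with open(path, "w") as f:      # one sequential write, no in-place updates
            json.dump(sorted(self.memtable.items()), f)
        self.runs.append(path)
        self.memtable = {}

    def get(self, key):
        if key in self.memtable:        # newest data first
            return self.memtable[key]
        for path in reversed(self.runs):
            with open(path) as f:
                for k, v in json.load(f):
                    if k == key:
                        return v
        return None
```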

Performance scenario RAM Disk and In memory database(IMDB)?

I was just wondering: we have in-memory databases (IMDB), and we also have a way to put a database on a RAM disk. So which would be faster? Your valuable comments and experiences are appreciated.
Wikipedia - Computer Data Storage
Latency
The time it takes to access a particular location in storage. The relevant unit of measurement is typically nanosecond for primary storage, millisecond for secondary storage
It really depends on the hardware architecture. However, internal memory is almost always the fastest way of storing and retrieving data, unless you have a specialized main board.
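A rough micro-benchmark sketch of that point (assuming a Linux system, where /dev/shm is a tmpfs-backed "RAM disk"): even when the backing store is RAM, the filesystem path still pays open/read syscall overhead on every access that an in-process lookup does not.

```python
# Compare an in-process lookup with a round trip through a tmpfs file.
import timeit

store = {"key": b"value"}
with open("/dev/shm/value.bin", "wb") as f:
    f.write(store["key"])

def from_memory():
    return store["key"]

def from_ramdisk():
    with open("/dev/shm/value.bin", "rb") as f:
        return f.read()

print("in-process dict:", timeit.timeit(from_memory, number=100_000))
print("tmpfs RAM disk: ", timeit.timeit(from_ramdisk, number=100_000))
```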
