What does the STREAM memory bandwidth benchmark really measure?

I have a few questions on STREAM (http://www.cs.virginia.edu/stream/ref.html#runrules) benchmark.
Below is the comment from stream.c. What is the rationale for the requirement that arrays should be 4 times the size of the cache?
* (a) Each array must be at least 4 times the size of the
* available cache memory. I don't worry about the difference
* between 10^6 and 2^20, so in practice the minimum array size
* is about 3.8 times the cache size.
I originally assumed STREAM measures the peak memory bandwidth. But I later found that when I add extra arrays and array accesses, I can get larger bandwidth numbers. So it looks to me like STREAM doesn't guarantee to saturate memory bandwidth. Then my question is: what does STREAM really measure, and how do you use the numbers reported by STREAM?
For example, I added two extra arrays and made sure to access them together with the original a/b/c arrays. I modified the byte accounting accordingly. With these two extra arrays, my bandwidth number is bumped up by ~11.5%.
> diff stream.c modified_stream.c
181c181,183
< c[STREAM_ARRAY_SIZE+OFFSET];
---
> c[STREAM_ARRAY_SIZE+OFFSET],
> e[STREAM_ARRAY_SIZE+OFFSET],
> d[STREAM_ARRAY_SIZE+OFFSET];
192,193c194,195
< 3 * sizeof(STREAM_TYPE) * STREAM_ARRAY_SIZE,
< 3 * sizeof(STREAM_TYPE) * STREAM_ARRAY_SIZE
---
> 5 * sizeof(STREAM_TYPE) * STREAM_ARRAY_SIZE,
> 5 * sizeof(STREAM_TYPE) * STREAM_ARRAY_SIZE
270a273,274
> d[j] = 3.0;
> e[j] = 3.0;
335c339
< c[j] = a[j]+b[j];
---
> c[j] = a[j]+b[j]+d[j]+e[j];
345c349
< a[j] = b[j]+scalar*c[j];
---
> a[j] = b[j]+scalar*c[j] + d[j]+e[j];
CFLAGS = -O2 -fopenmp -D_OPENMP -DSTREAM_ARRAY_SIZE=50000000
My last level cache is around 35MB.
Any comment?
Thanks!
This is for a Skylake Linux server.

Memory accesses in modern computers are a lot more complex than one might expect, and it is very hard to tell when the "high-level" model falls apart because of some "low-level" detail that you did not know about before....
The STREAM benchmark code only measures execution time -- everything else is derived. The derived numbers are based on both decisions about what I think is "reasonable" and assumptions about how the majority of computers work. The run rules are the product of trial and error -- attempting to balance portability with generality.
The STREAM benchmark reports "bandwidth" values for each of the kernels. These are simple calculations based on the assumption that each array element on the right hand side of each loop has to be read from memory and each array element on the left hand side of each loop has to be written to memory. Then the "bandwidth" is simply the total amount of data moved divided by the execution time.
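In code form, the derived number is nothing more than the following (an illustrative sketch, not the actual accounting code in stream.c; the kernel time is a made-up value):
#include <stdio.h>
/* Sketch of how STREAM derives "bandwidth": count the bytes the source
 * code asks for, then divide by the measured kernel time. */
#define N    50000000ULL   /* STREAM_ARRAY_SIZE from the question */
#define ELEM 8ULL          /* sizeof(double) */
int main(void)
{
    /* Arrays touched per iteration at the source level:
     * Copy:  c[j] = a[j];           2 arrays -> 16 bytes
     * Scale: b[j] = s*c[j];         2 arrays -> 16 bytes
     * Add:   c[j] = a[j]+b[j];      3 arrays -> 24 bytes
     * Triad: a[j] = b[j]+s*c[j];    3 arrays -> 24 bytes */
    double triad_seconds = 0.020;           /* hypothetical measured time   */
    double bytes = 3.0 * ELEM * N;          /* counted, not hardware, bytes */
    printf("Triad: %.1f MB/s\n", 1.0e-6 * bytes / triad_seconds);
    return 0;
}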
There are a surprising number of assumptions involved in this simple calculation.
The model assumes that the compiler generates code to perform all the loads, stores, and arithmetic instructions that are implied by the memory traffic counts. The approach used in STREAM to encourage this is fairly robust, but an advanced compiler might notice that all the array elements in each array contain the same value, so only one element from each array actually needs to be processed. (This is how the validation code works.)
Sometimes compilers move the timer calls out of their source-code locations. This is a (subtle) violation of the language standards, but is easy to catch because it usually produces nonsensical results.
The model assumes a negligible number of cache hits. (With cache hits, the computed value is still a "bandwidth", it is just not the "memory bandwidth".) The STREAM Copy and Scale kernels only load one array (and store one array), so if the stores bypass the cache, the total amount of traffic going through the cache in each iteration is the size of one array. Cache addressing and indexing are sometimes very complex, and cache replacement policies may be dynamic (either pseudo-random or based on run-time utilization metrics). As a compromise between size and accuracy, I picked 4x as the minimum array size relative to the cache size to ensure that most systems have a very low fraction of cache hits (i.e., low enough to have negligible influence on the reported performance).
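For the ~35 MB LLC mentioned in the question, the 4x rule works out roughly as follows (a back-of-the-envelope sketch, assuming 8-byte double elements; not part of STREAM itself):
#include <stdio.h>
/* Minimum STREAM_ARRAY_SIZE for "each array >= 4x cache" with a 35 MB LLC. */
int main(void)
{
    double llc_bytes = 35e6;                /* ~35 MB last-level cache */
    double min_bytes = 4.0 * llc_bytes;     /* per array               */
    double min_elems = min_bytes / 8.0;     /* STREAM_TYPE = double    */
    printf("minimum STREAM_ARRAY_SIZE ~ %.1f million elements\n", min_elems / 1e6);
    printf("the question uses 50 million, i.e. ~%.0f MB per array\n", 50e6 * 8.0 / 1e6);
    return 0;
}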
The data traffic counts in STREAM do not "give credit" to additional transfers that the hardware does, but that were not explicitly requested. This primarily refers to "write allocate" traffic -- most systems read each store target address from memory before the store can update the corresponding cache line. Many systems have the ability to skip this "write allocate", either by allocating a line in the cache without reading it (POWER) or by executing stores that bypass the cache and go straight to memory (x86). More notes on this are at http://sites.utexas.edu/jdm4372/2018/01/01/notes-on-non-temporal-aka-streaming-stores/
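On x86 this kind of cache-bypassing store can be requested explicitly with the non-temporal store intrinsics. A minimal sketch of a Copy-style kernel (illustrative only; it assumes AVX, 32-byte-aligned pointers, and n a multiple of 4 -- whether a compiler emits such stores on its own is implementation-dependent):
#include <immintrin.h>
/* Copy kernel using non-temporal (streaming) stores so the destination
 * lines are not read from memory first, i.e. "write allocate" is skipped. */
void copy_nt(double *dst, const double *src, long n)
{
    for (long j = 0; j < n; j += 4) {
        __m256d v = _mm256_load_pd(src + j);   /* normal (cached) load  */
        _mm256_stream_pd(dst + j, v);          /* cache-bypassing store */
    }
    _mm_sfence();   /* order the streamed stores before later stores (see the notes below) */
}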
Multicore processors with more than 2 DRAM channels are typically unable to reach asymptotic bandwidth using only a single core. The OpenMP directives that were originally provided for large shared-memory systems now must be enabled on nearly every processor with more than 2 DRAM channels if you want to reach asymptotic bandwidth levels.
Single-core bandwidth is still important, but is typically limited by the number of cache misses that a single core can generate, and not by the peak DRAM bandwidth of the system. The issues are presented in http://sites.utexas.edu/jdm4372/2016/11/22/sc16-invited-talk-memory-bandwidth-and-system-balance-in-hpc-systems/
For the single-core case, the number of outstanding L1 Data Cache misses is much too small to get full bandwidth -- for your Xeon Scalable processor about 140 concurrent cache misses are required for each socket, but a single core can only support 10-12 L1 Data Cache misses. The L2 hardware prefetchers can generate additional memory concurrency (up to ~24 cache misses per core, if I recall correctly), but reaching average values near the upper end of this range requires simultaneous accesses to multiple 4KiB pages. Your additional array reads give the L2 hardware prefetchers more opportunity to generate (close to) the maximum number of concurrent memory accesses. An increase of 11%-12% is completely reasonable.
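The arithmetic behind these numbers is Little's Law: sustained bandwidth is roughly the number of outstanding cache lines times the line size divided by the memory latency. A rough sketch with assumed values (the ~80 ns latency is an illustrative guess, not a measurement of this particular system):
#include <stdio.h>
/* Little's Law: bandwidth ~= concurrency * line_size / latency. */
int main(void)
{
    double line = 64.0, latency = 80e-9;   /* 64 B lines, assumed ~80 ns latency */
    double core_conc   = 12.0;    /* ~10-12 L1D misses one core can sustain */
    double l2pf_conc   = 24.0;    /* ~ with L2 hardware prefetchers helping */
    double socket_conc = 140.0;   /* what the whole socket needs            */
    printf("one core, demand misses : %5.1f GB/s\n", core_conc   * line / latency / 1e9);
    printf("one core + L2 prefetch  : %5.1f GB/s\n", l2pf_conc   * line / latency / 1e9);
    printf("needed for full socket  : %5.1f GB/s\n", socket_conc * line / latency / 1e9);
    return 0;
}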
Increasing the fraction of reads is also expected to increase the performance when using all the cores. In this case the benefit comes primarily from reducing the number of "read-write turnaround stalls" on the DDR4 DRAM interface. With no stores at all, sustained bandwidth should reach 90% of peak on this processor (using 16 or more cores per socket).
Additional notes on avoiding "write allocate" traffic:
In x86 architectures, cache-bypassing stores typically invalidate the corresponding address from the local caches and hold the data in a "write-combining buffer" until the processor decides to push the data to memory. Other processors are allowed to keep and use "stale" copies of the cache line during this period. When the write-combining buffer is flushed, the cache line is sent to the memory controller in a transaction that is very similar to an IO DMA write. The memory controller has the responsibility of issuing "global" invalidations on the address before updating memory. Care must be taken when these streaming stores are used to update memory that is shared across cores. The general model is to execute the streaming stores, execute a store fence, then execute an "ordinary" store to a "flag" variable. The store fence will ensure that no other processor can see the updated "flag" variable until the results of all of the streaming stores are globally visible. (With a sequence of "ordinary" stores, results always become visible in program order, so no store fence is required.)
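A sketch of that producer-side pattern, using SSE2 intrinsics and a C11 atomic flag (names and sizes are illustrative; this is not benchmark code, and it assumes buf is 16-byte aligned and n is a multiple of 2):
#include <immintrin.h>
#include <stdatomic.h>
/* Streaming stores, then a store fence, then an ordinary store to a flag. */
void publish(double *buf, long n, atomic_int *flag)
{
    __m128d zero = _mm_setzero_pd();
    for (long j = 0; j < n; j += 2)
        _mm_stream_pd(buf + j, zero);   /* cache-bypassing stores */
    _mm_sfence();                       /* drain the write-combining buffers */
    /* Ordinary store: readers that observe flag == 1 are guaranteed to also
     * see the streamed data, because of the fence above. */
    atomic_store_explicit(flag, 1, memory_order_release);
}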
In the PowerPC/POWER architecture, the DCBZ (or DCLZ) instruction can be used to avoid write allocate traffic. If the line is in cache, its contents are set to zero. If the line is not in cache, a line is allocated in the cache with its contents set to zero. One downside of this approach is that the cache line size is exposed here. DCBZ on a PowerPC with 32-Byte cache lines will clear 32 Bytes. The same instruction on a processor with 128-Byte cache lines will clear 128 Bytes. This was irritating to a vendor who used both. I don't remember enough of the details of the POWER memory ordering model to comment on how/when the coherence transactions become visible with this instruction.

The key point here, as pointed out in Dr. Bandwidth's answer, is that STREAM only counts the useful bandwidth seen by the source code. (He's the author of the benchmark.)
In practice the write stream will incur read bandwidth costs as well, for the RFO (Read For Ownership) requests. When a CPU wants to write 16 bytes (for example) to a cache line, first it has to load the original cache line and then modify it in L1d cache.
(Unless your compiler auto-vectorized with NT stores that bypass the cache and avoid that RFO. Some compilers will do that for loops they expect to write an array too large to fit in cache before any of it is re-read.)
See Enhanced REP MOVSB for memcpy for more about cache-bypassing stores that avoid an RFO.
So increasing the number of read streams vs. write streams will bring software-observed bandwidth closer to actual hardware bandwidth. (Also a mixed read/write workload for the memory may not be perfectly efficient.)
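To put a number on that gap for the Copy kernel (a sketch of the arithmetic only, assuming a write-allocate/RFO on every store miss and no NT stores):
#include <stdio.h>
/* STREAM Copy: c[j] = a[j]. Counted vs. actual DRAM traffic per iteration. */
int main(void)
{
    int elem = 8;              /* sizeof(double) */
    int counted = 2 * elem;    /* read a[j], write c[j]                     */
    int actual  = 3 * elem;    /* read a[j], RFO-read c[j], then write c[j] */
    printf("counted: %d B/iter, actual: %d B/iter, ratio %.2fx\n",
           counted, actual, (double)actual / counted);
    return 0;
}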

The purpose of the STREAM benchmark is not to measure the peak memory bandwidth (i.e., the maximum memory bandwidth that can be achieved on the system), but to measure the "memory bandwidth" of a number of kernels (COPY, SCALE, SUM, and TRIAD) that are important to the HPC community. So when the bandwidth reported by STREAM is higher, it means that HPC applications will probably run faster on the system.
It's also important to understand the meaning of the term "memory bandwidth" in the context of the STREAM benchmark, which is explained in the last section of the documentation. As mentioned in that section, there are at least three ways to count the number of bytes for a benchmark. The STREAM benchmark uses the STREAM method, which counts the number of bytes read and written at the source code level. For example, in the SUM kernel (a(i) = b(i) + c(i)), two elements are read and one element is written. Therefore, assuming that all accesses go to memory, the number of bytes accessed from memory per iteration is equal to the number of arrays multiplied by the size of an element (which is 8 bytes). STREAM calculates bandwidth by multiplying the total number of elements accessed (counted using the STREAM method) by the element size and dividing that by the execution time of the kernel. To take run-to-run variations into account, each kernel is run multiple times and the arithmetic average, minimum, and maximum bandwidths are reported.
As you can see, the bandwidth reported by STREAM is not the real memory bandwidth (at the hardware level), so it doesn't even make sense to say that it is the peak bandwidth. In addition, it's almost always much lower than the peak bandwidth. For example, this article shows how ECC and 2MB pages impact the bandwidth reported by STREAM. Writing a benchmark that actually achieves the maximum possible memory bandwidth (at the hardware level) on modern Intel processors is a major challenge and may be a good problem for a whole Ph.D. thesis. In practice, though, the peak bandwidth is less important than the STREAM bandwidth in the HPC domain. (Related: See my answer for information on the issues involved in measuring the memory bandwidth at the hardware level.)
Regarding your first question, notice that STREAM just assumes that all reads and writes are satisfied by the main memory and not by any cache. Allocating an array that is much larger than the size of the LLC helps in making it more likely that this is the case. Essentially, complex and undocumented aspects of the LLC including the replacement policy and the placement policy need to be defeated. It doesn't have to be exactly 4x larger than the LLC. My understanding is that this is what Dr. Bandwidth found to work in practice.

Related

Does the distance between read and write locations have an effect on cache performance?

I have a buffer of size n that is full, and a successor buffer of size n that is empty. I want to insert a value into the first buffer at position i, but I would need to move a range of memory forward in order to do that, since the buffer is full (i.e. a sequential insert). I have two options here (sketched in code below the list):
Prefer write close to read (adjacent):
Push the last value of the first buffer into the second.
Move the elements between i and n - 1 in the first buffer forward by one.
Insert at i.
Prefer fewer steps:
Copy the range i to n - 1 from the first into the second buffer.
Insert at i.
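In code, the two options look roughly like this (a hypothetical sketch; the names buf1, buf2, i, n and the int element type are just for illustration):
#include <string.h>
/* buf1 holds n elements and is full, buf2 is empty; insert v at position i. */
/* Option 1: prefer write close to read (adjacent). */
static void insert_adjacent(int *buf1, int *buf2, size_t n, size_t i, int v)
{
    buf2[0] = buf1[n - 1];                                        /* push last value     */
    memmove(&buf1[i + 1], &buf1[i], (n - 1 - i) * sizeof *buf1);  /* shift i..n-2 forward */
    buf1[i] = v;
}
/* Option 2: prefer fewer steps. */
static void insert_fewer_steps(int *buf1, int *buf2, size_t n, size_t i, int v)
{
    memcpy(buf2, &buf1[i], (n - i) * sizeof *buf1);               /* copy tail across    */
    buf1[i] = v;
}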
Most of what I can find only talks about locality in a read context, and I am wondering whether the distance between the read and the write memory should be considered.
Does the distance between read and write locations have an effect on cache performance?
Yes. Normally (not including rare situations where the CPU can write an entire cache line with new data) the CPU has to fetch the most recent version of a cache line into its cache before doing the write. If the cache line is already in the cache (e.g. due to a previous read of some other data that happened to be in the same cache line) then the CPU won't need to fetch the cache line before doing the write.
Note that there are also various other quirks (cache aliasing, TLB misses, etc.); and all of it depends on the specific situation and which CPU (e.g. if all of the process' data fits in the CPU's cache, there's no shared memory involved, and there are no task switches or other processes using the CPU, then you can assume everything will always be in the cache anyway).
I want to insert a value within the first buffer at position i, but I would need to move a range of memory forward in order to do that, since the buffer is full (ie. sequential insert).
Without more information (how often this happens, how much data is involved, etc) I can't really make any suggestions. However (at first glance, without much information), the entire idea seems bad. More specifically, it sounds like you're adding a bunch of hassle to make two smaller arrays behave exactly the same as one larger array would have (and then worrying about the cost of insertion because arrays aren't good for insertion in general).
this is a component deep within a data structure at the lowest level where n is small and constant
By small I assume you mean smaller than the L1 CPU cache (somewhere under 1 MB) or the L2 cache (up to 10-20 MB, depending on your CPU). If so, then no,
I am wondering whether the distance between the read and the write memory should be considered.
Sometimes; if all the data can fit into the L1, L2, or L3 cache of the CPU the process is running on, then what you think of as random access applies: it would all be the same latency. You can get nitty-gritty and delve into the differences between L1, L2, and L3 cache, but for the sake of brevity (and because I simply take it for granted), anywhere within a memory boundary it's all the same latency to access. So in your case, where n is small and it all fits into the CPU cache (the first of many boundaries), it is the manner and efficiency in which you choose to move/change values, and the number of times you end up doing that, which affects performance (time to complete).
Now if n were big, for example in a system with 2 or more sockets (over Intel QPI or UPI), and the data resided in DDR RAM located across the QPI or UPI path, on DIMMs hanging off the memory controller of the other CPU, then definitely yes, there is a big performance hit (relatively speaking), because now a boundary has been crossed: whatever could NOT fit into the cache of the CPU the process is running on (and was initially fetched from DIMMs LOCAL to that CPU's memory controller) now incurs the overhead of talking to the other CPU over the QPI or UPI path (still very fast compared to previous architectures), and that other CPU then fetches the data from its set of memory DIMMs and sends it back over QPI or UPI to the CPU your process is running on.
So when you exceed the L1 cache limit into L2 there is a performance hit, and likewise into L3 cache, all within one CPU. When a process has to repeatedly fetch from its local set of DIMMs more data than it can fit into cache, that's a performance hit. When that data is not on DIMMs local to that CPU, it is slower. When that data is not on the same motherboard and goes across some kind of high-speed fiber RDMA, slower still. When it's across Ethernet, even slower... and so on.

How does random access memory work? Why is it constant-time random-access?

Or in other words, why does accessing an arbitrary element in an array take constant time (instead of O(n) or some other time)?
I googled my heart out looking for an answer to this and did not find a very good one so I'm hoping one of you can share your low level knowledge with me.
Just to give you an idea of how low of an answer I'm hoping for, I'll tell you why I THINK it takes constant time.
When I say array[4] = 12 in a program, I'm really just storing the bit representation of the memory address into a register. This physical register in the hardware will turn on the corresponding electrical signals according to the bit representation I fed it. Those electrical signals will then somehow magically ( hopefully someone can explain the magic ) access the right memory address in physical/main memory.
I know that was rough, but it was just to give you an idea of what kind of answer I'm looking for.
(editor's note: From the OP's later comments, he understands that address calculations take constant time, and just wonders about what happens after that.)
Because software likes O(1) "working" memory and thus the hardware is designed to behave that way
The basic point is that the address space of a program is thought of as abstractly having O(1) access performance, i.e. whatever memory location you want to read, it should take some constant time (which in any case isn't related to the distance between it and the last memory access). So, since arrays are nothing more than contiguous chunks of address space, they should inherit this property (accessing an element of an array is just a matter of adding the index to the start address of the array, and then dereferencing the obtained pointer).
This property comes from the fact that, in general, the address space of a program has some correspondence with the physical RAM of the PC, which, as the name (random access memory) partially implies, should by itself have the property that, whatever location in the RAM you want to access, you get to it in constant time (as opposed, for example, to a tape drive, where the seek time depends on the actual length of tape you have to move to get there).
Now, for "regular" RAM this property is (at least AFAIK) true - when the processor/motherboard/memory controller asks a RAM chip for some data, it does so in constant time; the details aren't really relevant for software development, and the internals of memory chips have changed many times in the past and will change again in the future. If you are interested in an overview of the details of current RAMs, you can have a look here about DRAMs.
The general concept is that RAM chips don't contain a tape that must be moved, or a disk arm that must be positioned; when you ask them for a byte at some location, the work (mostly changing the settings of some hardware muxes, which connect the output to the cells where the byte status is stored) is the same for any location you could be asking for; thus, you get O(1) performance.
There is some overhead behind this (the logical address has to be mapped to a physical address by the MMU, the various motherboard pieces have to talk with each other to tell the RAM to fetch the data and bring it back to the processor, ...), but the hardware is designed to do so in more or less constant time.
So:
arrays map over the address space, which is mapped over RAM, which has O(1) random access; since all the maps are (more or less) O(1), arrays keep the O(1) random-access performance of RAM.
The point that does matter to software developers, instead, is that, although we see a flat address space and it normally maps over RAM, on modern machines it's false that accessing any element has the same cost. In fact, accessing elements that are in the same zone can be way cheaper than jumping around the address space, because the processor has several onboard caches (= smaller but faster on-chip memories) that keep recently used data and memory that is in the same neighborhood; thus, if you have good data locality, continuous operations in memory won't keep hitting the RAM (which has much longer latency than the caches), and in the end your code will run way faster.
Also, under memory pressure, operating systems that provide virtual memory can decide to move rarely used pages of your address space to disk, and fetch them on demand if they are accessed (in response to a page fault); such an operation is very costly and, again, strongly deviates from the idea that accessing any virtual memory address is the same.
The calculation to get from the start of the array to any given element takes only two operations, a multiplication (times sizeof(element)) and an addition. Both of those operations are constant time. Often with today's processors it can be done in essentially no time at all, as the processor is optimized for this kind of access.
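For instance, the address arithmetic for array[4] is just the following, and it is the same amount of work for any index (a minimal sketch):
#include <stdio.h>
#include <stdint.h>
int main(void)
{
    int array[10] = {0};
    size_t i = 4;
    /* The address of array[i] is base + i * sizeof(element): one multiply
     * and one add, independent of i and of the array length. */
    uintptr_t base = (uintptr_t)&array[0];
    uintptr_t addr = base + i * sizeof array[0];
    printf("computed %p, &array[4] = %p\n", (void *)addr, (void *)&array[4]);
    array[i] = 12;   /* on most targets this compiles to a single store */
    printf("%d\n", array[4]);
    return 0;
}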
When I say array[4] = 12 in a program, I'm really just storing the bit
representation of the memory address into a register. This physical
register in the hardware will turn on the corresponding electrical
signals according to the bit representation I fed it. Those electrical
signals will then somehow magically ( hopefully someone can explain
the magic ) access the right memory address in physical/main memory.
I am not quite sure what you are asking, but I don't see any answers related to what is really going on in the magic of the hardware. Hopefully I understood enough to go through this long-winded explanation (which is still very high level).
array[4] = 12;
So from the comments it sounds like it is understood that you have to get the base address of array, and then multiply by the size of an array element (or shift, if that optimization is possible) to get the address (from your program's perspective) of the memory location. Right off the bat we have a problem. Are these items already in registers or do we have to go get them? The base address for array may or may not be in a register, depending on the code that surrounds this line of code, in particular the code that precedes it. That address might be on the stack or in some other location depending on where you declared it and how. And that may or may not matter as to how long it takes. An optimizing compiler may (often) go so far as to pre-compute the address of array[4] and place that somewhere so it can go into a register and the multiply never happens at runtime, so it is absolutely not true that the computation of array[4] for a random access is a fixed amount of time compared to other random accesses. Depending on the processor, some immediate patterns are one instruction and others take more; whether this address is read from .text or the stack, etc. is also a factor. To not chicken-and-egg that problem to death, assume we have the address of array[4] computed.
This is a write operation, from the programmer's perspective. Start with a simple processor: no cache, no write buffer, no MMU, etc. Eventually the simple processor will put the address on the edge of the processor core, with a write strobe and data. Each processor family's bus is different from the others, but it is roughly the same: the address and data can come out in the same cycle or in separate cycles. The command type (read, write) can happen at the same time or a different one, but the command comes out. The edge of the processor core is connected to a memory controller that decodes that address. The result is a destination: is this a peripheral, and if so which one and on what bus; is this memory, and if so on what memory bus; and so on. Assume RAM, and assume this simple processor has SRAM, not DRAM. SRAM is more expensive and faster in an apples-to-apples comparison. The SRAM has an address and write/read strobes and other controls. Eventually you will have the transaction type (read/write), the address, and the data. The SRAM, however its geometry is laid out, will route and store the individual bits in their individual pairs/groups of transistors.
A write cycle can be fire and forget. All the information that is needed to complete the transaction (this is a write, this is the address, this is the data) is known right then and there. The memory controller can, if it chooses, tell the processor that the write transaction is complete, even if the data is nowhere near the memory. That address/data pair will take its time getting to the memory and the processor can keep operating. In some systems, though, the design is such that the processor's write transaction waits until a signal comes back to indicate that the write has made it all the way to the RAM. In a fire-and-forget type setup, that address/data will be queued up somewhere and work its way to the RAM. The queue can't be infinitely deep, otherwise it would be the RAM itself, so it is finite, and it is possible and likely that many writes in a row can fill that queue faster than the other end can write to RAM. At that point the current and/or next write has to wait for the queue to indicate there is room for one more. So in situations like this, how fast your write happens, whether your simple processor is I/O bound or not, has to do with prior transactions, which may or may not be write instructions that preceded the instruction in question.
Now add some complexity: ECC, or whatever name you want to call it (EDAC is another one). The way an ECC memory works is that the writes are all a fixed width; even if your implementation is four 8-bit-wide memory parts giving you 32 bits of data per write, you have to have a fixed width that the ECC covers, and you have to write the data bits plus the ECC bits all at the same time (you have to compute the ECC over the full width). So if this was, for example, an 8-bit write into a 32-bit ECC-protected memory, then that write cycle requires a read cycle. Read the 32 bits (checking the ECC on that read), modify the new 8 bits in that 32-bit pattern, compute the new ECC pattern, write the 32 bits plus ECC bits. Naturally that read portion of the write cycle can end up with an ECC error, which just makes life even more fun. Single-bit errors can usually be corrected (what good is an ECC/EDAC if it can't), multi-bit errors not. How the hardware is designed to handle these faults affects what happens next; the read fault may just trickle back to the processor, faulting the write transaction, or it may come back as an interrupt, etc. But here is another place where one random access is not the same as another: depending on the memory being accessed, and the size of the access, a read-modify-write definitely takes longer than a simple write.
DRAM can also fall into this fixed-width category, even without ECC. Actually all memory falls into this category at some point. The memory array is optimized on the silicon for a certain height and width in units of bits. You cannot violate that; the memory can only be read and written in units of that width at that level. The silicon libraries will include many geometries of RAM, and the designers will choose those geometries for their parts; the parts will have fixed limits, and often you can use multiple parts to get some integer multiple of that width. Sometimes the design will allow you to write to only one of those parts if only some of the bits are changing, or some designs will force all parts to light up. Notice how with the next DDR family of modules that you plug into your home computer or laptop, the first wave is many parts on both sides of the board. Then as that technology gets older and more boring, it may change to fewer parts on both sides of the board, eventually becoming fewer parts on one side of the board before that technology is obsolete and we are already into the next.
This fixed-width category also carries with it alignment penalties. Unfortunately most folks learn on x86 machines, which don't restrict you to aligned accesses like many other platforms do. There is a definite performance penalty on x86 or others for unaligned accesses, if allowed. It is usually when folks go to a MIPS, or usually an ARM on some battery-powered device, that they first learn as programmers about aligned accesses. And sadly they find them to be painful rather than a blessing (due to the simplicity, both in programming and in the hardware benefits that come from it). In a nutshell, if your memory is say 32 bits wide and can only be accessed, read or write, 32 bits at a time, that means it is limited to aligned accesses only. A memory bus on a 32-bit-wide memory usually does not have the lower address bits a[1:0] because there is no use for them; those lower bits, from a programmer's perspective, are zeros. If, though, our write was 32 bits against one of these 32-bit memories and the address was 0x1002, then somebody along the line has to read the memory at address 0x1000, take two of our bytes and modify that 32-bit value, then write it back; then take the 32 bits at address 0x1004, modify two bytes and write it back. Four bus cycles for a single write. If we were writing 32 bits to address 0x1008, though, it would be a simple 32-bit write, no reads.
SRAM vs DRAM. DRAM is painfully slow, but super cheap: half to a quarter of the number of transistors per bit (4 for SRAM, for example, 1 for DRAM). SRAM remembers the bit as long as the power is on. DRAM has to be refreshed like a rechargeable battery; even if the power stays on, a single bit will only be remembered for a very short period of time. So some hardware along the way (DDR controller, etc.) has to regularly perform bus cycles telling that RAM to remember a certain chunk of the memory. Those cycles steal time from your processor wanting to access that memory. DRAM is very slow; it may say 2133MHz (2.133GHz) on the box, but it is really more like 133MHz RAM, right, 0.133GHz. The first cheat is DDR. Normally things in the digital world happen once per clock cycle: the clock goes to an asserted state then goes to a deasserted state (ones and zeros), one cycle is one clock. DDR means that it can do something on both the high half cycle and on the low half cycle, so that 2133MHz memory really uses a 1066MHz clock. Then pipeline-like parallelisms happen; you can shove in commands, in bursts, at that high rate, but eventually that RAM has to actually get accessed. Overall DRAM is non-deterministic and very slow. SRAM, on the other hand, needs no refreshes; it remembers as long as the power is on, can be several times faster (133MHz * N), and so on. It can be deterministic.
The next hurdle: cache. Cache is good and bad. Cache is generally made from SRAM. Hopefully you have an understanding of a cache. If the processor or someone upstream has marked the transaction as non-cacheable, then it goes through uncached to the memory bus on the other side. If cacheable, then a portion of the address is looked up in a table and will result in a hit or miss. This being a write, depending on the cache and/or transaction settings, if it is a miss it may pass through to the other side. If there is a hit, then the data will be written into the cache memory; depending on the cache type it may also pass through to the other side, or that data may sit in the cache waiting for some other chunk of data to evict it, and then it gets written to the other side. Caches definitely make reads, and sometimes writes, non-deterministic. Sequential accesses have the most benefit as your eviction rate is lower; the first access in a cache line is slow relative to the others, then the rest are fast (which is where we get this term of random access anyway). Random accesses go against the schemes that are designed to make sequential accesses faster.
Sometimes the far side of your cache has a write buffer. A relatively small queue/pipe/buffer/fifo that holds some number of write transactions. Another fire and forget deal, with those benefits.
Multiple layers of caches: L1, L2, L3... L1 is usually the fastest, either by its technology or proximity, and usually the smallest, and it goes up from there in speed and size; some of that has to do with the cost of the memory. We are doing a write, but when you do a cache-enabled read, understand that if L1 has a miss it goes to L2, which if it has a miss goes to L3, which if it has a miss goes to main memory; then L3, L2 and L1 all store a copy. So a miss on all 3 is of course the most painful and is slower than if you had no cache at all, but sequential reads will give you the cached items, which are now in L1 and super fast. For the cache to be useful, sequential reads over the cache line should take less time overall than reading that much memory directly from the slow DRAM. A system doesn't have to have 3 layers of caches; it can vary. Likewise some systems can separate instruction fetches from data reads and have separate caches which don't evict each other, and in some the caches are not separate and instruction fetches can evict data from data reads.
Caches help with alignment issues, but of course there is an even more severe penalty for an unaligned access across cache lines. Caches tend to operate using chunks of memory called cache lines. These are often some integer multiple in size of the memory on the other side; for a 32-bit memory, for example, the cache line might be 128 bits or 256 bits. So if and when the cache line is in the cache, then the read-modify-write due to an unaligned write is against faster memory (still more painful than aligned, but not as painful). If it were an unaligned read and the address was such that part of that data is on one side of a cache line boundary and the other part on the other, then two cache lines have to be read. A 16-bit read, for example, can cost you many bytes read against the slowest memory, obviously several times slower than if you had no caches at all. Depending on how the caches and memory system in general are designed, if you do a write across a cache line boundary it may be similarly painful, or perhaps not as much: it might have one fraction written to the cache, and the other fraction go out on the far side as a smaller-sized write.
The next layer of complexity is the MMU, which allows the processor and programmer the illusion of flat memory spaces and/or control over what is cached or not, and/or memory protection, and/or the illusion that all programs are running in the same address space (so your toolchain can always compile/link for address 0x8000, for example). The MMU takes a portion of the virtual address on the processor core side and looks it up in a table, or series of tables; those lookups are often in system address space, so each one of those lookups may be one or more of everything stated above, as each is a memory cycle on the system memory. Those lookups can result in ECC faults even though you are trying to do a write. Eventually, after one or two or three or more reads, the MMU has determined what the address on the other side of the MMU is, and the properties (cacheable or not, etc.), and that is passed on to the next thing (L1, etc.) and all of the above applies. Some MMUs have in them a bit of a cache of some number of prior transactions; remember, because programs are sequential, the tricks used to boost the illusion of memory performance are based on sequential accesses, not random accesses. So some number of lookups might be stored in the MMU so it doesn't have to go out to main memory right away...
So in a modern computer with MMUs, caches, and DRAM, sequential reads in particular, but also writes, are likely to be faster than random access. The difference can be dramatic. The first transaction in a sequential read or write is at that moment a random access, as it has not been seen ever or for a while. Once the sequence continues, though, the optimizations fall in order and the next few/some are noticeably faster. The size and alignment of your transaction play an important role in performance as well. While there are so many non-deterministic things going on, as a programmer with this knowledge you can modify your programs to run much faster, or, if unlucky or on purpose, can modify your programs to run much slower. Sequential is going to be, in general, faster on one of these systems; random access is going to be very non-deterministic. array[4]=12; followed by array[37]=12; Those two high-level operations could take dramatically different amounts of time, both in the computation of the write address and in the actual writes themselves. But, for example, discarded_variable=array[3]; array[3]=11; array[4]=12; can quite often execute significantly faster than array[3]=11; array[4]=12;
Arrays in C and C++ have random access because they are stored in RAM (Random Access Memory) in a finite, predictable order. As a result, a simple linear operation is required to determine the location of a given record (the address of a[i] is a + sizeof(a[0]) * i). This calculation has constant time. From the CPU's perspective, no "seek" or "rewind" operation is required; it simply tells memory "load the value at address X".
However: on a modern CPU the idea that it takes constant time to fetch data is no longer true. It takes constant amortized time, depending on whether a given piece of data is in cache or not.
Still - the general principle is that the time to fetch a given set of 4 or 8 bytes from RAM is the same regardless of the address. E.g. if, from a clean slate, you access RAM[0] and RAM[4294967292] the CPU will get the response within the same number of cycles.
#include <iostream>
#include <cstring>
#include <chrono>

// 8Kb of space.
char smallSpace[8 * 1024];

// 64Mb of space (larger than cache)
char bigSpace[64 * 1024 * 1024];

void populateSpaces()
{
    memset(smallSpace, 0, sizeof(smallSpace));
    memset(bigSpace, 0, sizeof(bigSpace));
    std::cout << "Populated spaces" << std::endl;
}

unsigned int doWork(char* ptr, size_t size)
{
    unsigned int total = 0;
    const char* end = ptr + size;
    while (ptr < end) {
        total += *(ptr++);
    }
    return total;
}

using namespace std;
using namespace chrono;

void doTiming(const char* label, char* ptr, size_t size)
{
    cout << label << ": ";
    const high_resolution_clock::time_point start = high_resolution_clock::now();
    auto result = doWork(ptr, size);
    const high_resolution_clock::time_point stop = high_resolution_clock::now();
    auto delta = duration_cast<nanoseconds>(stop - start).count();
    cout << "took " << delta << "ns (result is " << result << ")" << endl;
}

int main()
{
    cout << "Timer resolution is " <<
        duration_cast<nanoseconds>(high_resolution_clock::duration(1)).count()
        << "ns" << endl;
    populateSpaces();

    doTiming("first small", smallSpace, sizeof(smallSpace));
    doTiming("second small", smallSpace, sizeof(smallSpace));
    doTiming("third small", smallSpace, sizeof(smallSpace));
    doTiming("bigSpace", bigSpace, sizeof(bigSpace));
    doTiming("bigSpace redo", bigSpace, sizeof(bigSpace));
    doTiming("smallSpace again", smallSpace, sizeof(smallSpace));
    doTiming("smallSpace once more", smallSpace, sizeof(smallSpace));
    doTiming("smallSpace last", smallSpace, sizeof(smallSpace));
}
Live demo: http://ideone.com/9zOW5q
Output (from ideone, which may not be ideal)
Success time: 0.33 memory: 68864 signal:0
Timer resolution is 1ns
Populated spaces
doWork/small: took 8384ns (result is 8192)
doWork/small: took 7702ns (result is 8192)
doWork/small: took 7686ns (result is 8192)
doWork/big: took 64921206ns (result is 67108864)
doWork/big: took 65120677ns (result is 67108864)
doWork/small: took 8237ns (result is 8192)
doWork/small: took 7678ns (result is 8192)
doWork/small: took 7677ns (result is 8192)
Populated spaces
strideWork/small: took 10112ns (result is 16384)
strideWork/small: took 9570ns (result is 16384)
strideWork/small: took 9559ns (result is 16384)
strideWork/big: took 65512138ns (result is 134217728)
strideWork/big: took 65005505ns (result is 134217728)
What we are seeing here are the effects of cache on memory access performance. The first time we hit smallSpace it takes ~8100ns to access all 8kb of small space. But when we call it again immediately after, twice, it takes ~600ns less at ~7400ns.
Now we go away and do bigSpace, which is bigger than the current CPU cache, so we know we've blown away the L1 and L2 caches.
Coming back to small, which we're sure is not cached now, we again see ~8100ns for the first time and ~7400 for the second two.
We blow the cache out and now we introduce a different behavior. We use a strided loop version. This amplifies the "cache miss" effect and significantly bumps the timing, although "small space" fits into L2 cache so we still see a reduction between pass 1 and the following 2 passes.

Programming to fill our system's L1 or L2 cache

How can we systematically write code to load data into our L1 or L2 cache?
I am specifically trying to target filling the L1 I-cache of my system for some higher analysis.
Any suggestions will do - with respect to writing assembly code or simple C programming. Related articles on this topic will be even more helpful.
A cache stores recently-accessed data. To fill the cache, just access the data. Or in this case, instructions. Fill a block of memory with no-op instructions (and a looping branch instruction at the end) and jump to it.
The tricky part is keeping data in there once it's loaded. You can't access anything outside the 32K (or whatever) data set as long as your benchmark is running.
I can't imagine what you get from artificially filling a cache and then keeping it filled with the same data set, but there you go.
You will need to find out the cache associativity of your CPU and the replacement policy. I can't think of a general solution to this problem that would work on all the CPUs I've worked with. Even caches advertised as fully associative with an LRU replacement policy aren't exactly that in reality and it can be very hard to figure out a pattern of memory access that completely fills the cache.
If you want this for some very specific benchmark (which is a bad idea for other reasons), I'd recommend trying to figure out how to flush the cache instead. That is actually doable.
Just last week I performed this task, for ECC filling of an L1 and L2 cache.
Basically, if you have a 64 Kbyte cache, for example, in total (x number of ways, y number of cache lines, etc.), then for the data side just access that much data linearly through the cache (you might need an MMU on to enable caching): start on some 64 Kbyte boundary and read 64 Kbytes' worth of data, ideally in cache-line-sized reads (or multiples) if possible. For the icache you need that many bytes' worth of instructions (nops, or add reg,1, or something); remember there is probably a prefetch at the end, so you might have to back off the final return a few instructions so that the prefetch takes you all the way to the end (it might take some practice, and if you don't have visibility into the logic (a chip sim) then you might not figure it out).
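For the data-cache half, a minimal sketch of that linear walk (the 64 KB size and the 64-byte line are assumptions taken from the example; adjust them to your part, and ideally align the buffer to the cache size):
#include <stddef.h>
#define CACHE_BYTES (64 * 1024)   /* assumed 64 KB data cache, as in the example */
#define LINE_BYTES  64            /* assumed cache line size                      */
static volatile unsigned char buf[CACHE_BYTES];
/* Touch one byte per cache line so every line of the buffer is brought in.
 * volatile keeps the compiler from optimizing the reads away. */
unsigned int warm_dcache(void)
{
    unsigned int sum = 0;
    for (size_t i = 0; i < CACHE_BYTES; i += LINE_BYTES)
        sum += buf[i];
    return sum;
}
You would call warm_dcache() immediately before the region you want to analyze, and avoid touching any other data afterwards, as the earlier answer notes.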
You can use the MMU or other games your logic might have to reduce the amount of memory required. For example, if you have an MMU with an entry size that covers, say, 4 KB, then you could fill 4 KB of real memory with data, then use 16 different MMU entries (with 16 different virtual addresses) and, for each of the 16, read through the 4K. Of course that only works if your cache is on the virtual-address side of the MMU.
Overall it is kind of an ugly thing to do. If your MMU prevents instruction caching, you could put the code performing the test in a non-cached space so that it doesn't mess with the icache, and only the instructions used to fill the cache are in a cached address space.
Good luck...

Do we need to take cache thrashing into account with CUDA?

I'm not familiar with the workings of GPU memory caching, so would like to know if the assumptions of temporal and spatial proximity of memory access associated with CPUs also applies with GPUs. That is, programming in CUDA C, do I need to take into account C's row-major array storage format to prevent cache thrashing?
Many thanks.
Yes, very much.
Say you are fetching 4 byte integers for each thread.
Scenario one
Each thread is fetching one integer at the index of its thread id. That means thread zero is fetching a[0], thread 1 is fetching a[1], etc. The GPU fetches in cache lines of 128 bytes. Conveniently, a warp is 32 threads, ergo 32*4 = 128 bytes. This means that for one warp, it will do one fetch request from memory.
Scenario two
If the threads are fetching in totally random order, with the distance between the indexes greater than 128 bytes, the warp will have to make 32 memory requests of 128 bytes each. This means that for each warp you fill the caches with 32 times more memory, and if your problem is big, your cache will be invalidated up to 32 times more often than in scenario one.
This means that if you were to request memory that would normally reside in the cache in scenario one, in scenario two it is highly likely that it will have to be resolved with another memory request from global memory.
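To make the line counting concrete, here is a plain-C (host-side, not CUDA) sketch that counts how many distinct 128-byte lines one 32-thread warp touches in each scenario; the stride in scenario two is just an example value:
#include <stdio.h>
/* How many distinct 128-byte lines does a 32-thread warp touch when each
 * thread loads one 4-byte int at the given element index? */
static int lines_touched(const size_t idx[32])
{
    size_t seen[32];
    int count = 0;
    for (int t = 0; t < 32; t++) {
        size_t line = (idx[t] * 4) / 128;   /* which 128-byte line */
        int dup = 0;
        for (int k = 0; k < count; k++)
            if (seen[k] == line) dup = 1;
        if (!dup) seen[count++] = line;
    }
    return count;
}
int main(void)
{
    size_t scenario1[32], scenario2[32];
    for (int t = 0; t < 32; t++) {
        scenario1[t] = t;          /* a[threadIdx.x]: consecutive elements        */
        scenario2[t] = t * 1024;   /* indexes >= 128 bytes apart (example stride) */
    }
    printf("scenario 1: %d line(s) per warp\n", lines_touched(scenario1)); /* 1  */
    printf("scenario 2: %d line(s) per warp\n", lines_touched(scenario2)); /* 32 */
    return 0;
}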
No and yes. No, because the GPU does not provide the same kind of "cache" that a CPU does.
But you have many other constraints that make the underlying C array layout, and how it is accessed by concurrent threads, very important for performance.
You may have a look at this page for basics about CUDA memory types or here for more in depth details about cache on Fermi GPU.

Out of Order Execution and Memory Fences

I know that modern CPUs can execute out of order; however, they always retire the results in order, as described by Wikipedia.
"Out of Order processors fill these "slots" in time with other instructions that are ready, then re-order the results at the end to make it appear that the instructions were processed as normal."
Now memory fences are said to be required when using multicore platforms, because owing to Out-of-Order execution, a wrong value of x can be printed here.
Processor #1:
while f == 0
;
print x; // x might not be 42 here
Processor #2:
x = 42;
// Memory fence required here
f = 1
Now my question is: since Out-of-Order processors (cores, in the case of multicore processors, I assume) always retire the results in order, then what is the necessity of memory fences? Don't the cores of a multicore processor see only results retired from other cores, or do they also see results which are in-flight?
I mean, in the example I gave above, when Processor 2 eventually retires the results, the result of x should come before f, right? I know that during out-of-order execution it might have modified f before x, but it must not have retired it before x, right?
Now with In-Order retiring of results and cache coherence mechanism in place, why would you ever need memory fences in x86?
This tutorial explains the issues: http://www.hpl.hp.com/techreports/Compaq-DEC/WRL-95-7.pdf
FWIW, where memory ordering issues happen on modern x86 processors, the reason is that while the x86 memory consistency model offers quite strong consistency, explicit barriers are still needed in cases where a later load must not be allowed to pass an earlier store. This is due to something called the "store buffer".
That is, x86 is sequentially consistent (nice and easy to reason about) except that loads may be reordered wrt earlier stores. That is, if the processor executes the sequence
store x
load y
then on the processor bus this may be seen as
load y
store x
The reason for this behavior is the aforementioned store buffer, which is a small buffer for writes before they go out on the system bus. Load latency, OTOH, is a critical issue for performance, and hence loads are permitted to "jump the queue".
See Section 8.2 in http://download.intel.com/design/processor/manuals/253668.pdf
The memory fence ensures that all changes to variables before the fence are visible to all other cores, so that all cores have an up-to-date view of the data.
If you don't put a memory fence, the cores might be working with wrong data; this can be seen especially in scenarios where multiple cores are working on the same datasets. In this case you can ensure that when CPU 0 has done some action, all changes done to the dataset are now visible to all other cores, which can then work with up-to-date information.
Some architectures, including the ubiquitous x86/x64, provide several
memory barrier instructions including an instruction sometimes called
"full fence". A full fence ensures that all load and store operations
prior to the fence will have been committed prior to any loads and
stores issued following the fence.
If a core were to start working with outdated data on the dataset, how could it ever get the correct results? It couldn't, no matter whether the end result is presented as if everything was done in the right order.
The key is in the store buffer, which sits between the cache and the CPU, and does this:
Store buffer invisible to remote CPUs
Store buffer allows writes to memory and/or caches to be saved to optimize interconnect accesses
That means that things will be written to this buffer, and then at some point the buffer will be written to the cache. So the cache could contain a view of the data that is not the most recent, and therefore another CPU, through cache coherency, will also not have the latest data. A store buffer flush is necessary for the latest data to be visible; this, I think, is essentially what the memory fence causes to happen at the hardware level.
EDIT:
For the code you used as an example, Wikipedia says this:
A memory barrier can be inserted before processor #2's assignment to f
to ensure that the new value of x is visible to other processors at or
prior to the change in the value of f.
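In portable code, that barrier is what a release store to f (paired with an acquire load on the reading side) expresses. A sketch of the question's example using C11 atomics (illustrative only; the function names are just labels for the two processors):
#include <stdatomic.h>
#include <stdio.h>
/* The example from the question with C11 atomics: the release store to f and
 * the acquire load of f provide the ordering that the "memory fence" comment
 * asks for. */
static int x;
static atomic_int f;
void processor2(void)                  /* writer */
{
    x = 42;
    atomic_store_explicit(&f, 1, memory_order_release);
}
void processor1(void)                  /* reader */
{
    while (atomic_load_explicit(&f, memory_order_acquire) == 0)
        ;                              /* spin until the flag is set */
    printf("%d\n", x);                 /* guaranteed to print 42 */
}
On x86 the release store compiles to an ordinary store (the hardware already keeps stores in order), but the atomics still prevent the compiler from reordering, and on more weakly ordered architectures they emit the needed barrier instructions.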
Just to make explicit what is implicit in the previous answers, this is correct, but is distinct from memory accesses:
CPUs can execute out of order, However they always retire the results in-order
Retirement of the instruction is separate from performing the memory access, the memory access may complete at a different time to instruction retirement.
Each core will act as if its own memory accesses occur at retirement, but other cores may see those accesses at different times.
(On x86 and ARM, I think only stores are observably subject to this, but e.g. Alpha may load an old value from memory. x86 SSE2 has instructions with weaker guarantees than normal x86 behaviour.)
PS. From memory, the abandoned SPARC ROCK could in fact retire out-of-order; it spent power and transistors determining when this was harmless. It got abandoned because of power consumption and transistor count... I don't believe any general-purpose CPU has been brought to market with out-of-order retirement.
