The default data block size of HDFS/Hadoop is 64 MB, while the block size on disk is generally 4 KB.
What does a 64 MB block size mean? Does it mean that the smallest unit read from disk is 64 MB?
If yes, what is the advantage of doing that? Is it to make sequential access to large files in HDFS easy?
Can we achieve the same thing by using the disk's original 4 KB block size?
What does 64MB block size mean?
The block size is the smallest data unit that a file system can store. If you store a file that's 1 KB or 60 MB, it'll take up one block. Once you cross the 64 MB boundary, you need a second block.
If yes, what is the advantage of doing that?
HDFS is meant to handle large files. Say you have a 1000 MB file. With a 4 KB block size, you'd have to make 256,000 requests to get that file (one request per block). In HDFS, those requests go across a network and come with a lot of overhead, and each one has to be processed by the NameNode to determine where that block can be found. That's a lot of traffic! With 64 MB blocks, the number of requests drops to 16, significantly reducing the overhead and the load on the NameNode.
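As a quick sanity check on those numbers, here is a minimal C sketch; the 1000 MB file and the two block sizes are just the figures from the example above:

    #include <stdio.h>

    int main(void) {
        long long file_size = 1000LL * 1024 * 1024;   /* the 1000 MB example file */
        long long small_blk = 4LL * 1024;             /* 4 KB disk-style block */
        long long hdfs_blk  = 64LL * 1024 * 1024;     /* 64 MB HDFS block */

        /* one request per block, rounding the last partial block up */
        long long reqs_small = (file_size + small_blk - 1) / small_blk;
        long long reqs_hdfs  = (file_size + hdfs_blk  - 1) / hdfs_blk;

        printf("4 KB blocks : %lld requests\n", reqs_small);   /* 256000 */
        printf("64 MB blocks: %lld requests\n", reqs_hdfs);    /* 16 */
        return 0;
    }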
HDFS's design was originally inspired by the design of the Google File System (GFS). Here are the reasons for large block sizes as stated in the original GFS paper (a note on GFS terminology vs. HDFS terminology: chunk = block, chunkserver = datanode, master = namenode):
A large chunk size offers several important advantages. First, it reduces clients’ need to interact with the master because reads and writes on the same chunk require only one initial request to the master for chunk location information. The reduction is especially significant for our workloads because applications mostly read and write large files sequentially. [...] Second, since on a large chunk, a client is more likely to perform many operations on a given chunk, it can reduce network overhead by keeping a persistent TCP connection to the chunkserver over an extended period of time. Third, it reduces the size of the metadata stored on the master. This allows us to keep the metadata in memory, which in turn brings other advantages that we will discuss in Section 2.6.1.
Finally, I should point out that the current default block size in Apache Hadoop is 128 MB (see dfs.blocksize).
In HDFS the block size controls the level of replication declustering. The lower the block size, the more evenly your blocks are distributed across the DataNodes; the higher the block size, the less evenly your data is potentially distributed across your cluster.
So what's the point of choosing a higher block size instead of some low value? While in theory an equal distribution of data is a good thing, having too low a block size has some significant drawbacks. The NameNode's capacity is limited, so a 4 KB block size instead of 128 MB also means 32768 times more metadata to store. MapReduce could also profit from equally distributed data by launching more map tasks on more NodeManagers and more CPU cores, but in practice the theoretical benefits are lost because sequential, buffered reads are no longer possible and because of the latency of each map task.
In a normal OS the block size is 4 KB, while in Hadoop it is 64 MB. The larger size makes it easier to maintain the metadata in the NameNode.
Suppose we had only a 4 KB block size in Hadoop and we tried to load 100 MB of data: we would need a very large number of 4 KB blocks, and the NameNode would have to maintain metadata for all of them.
If we use a 64 MB block size instead, the data is loaded into only two blocks (64 MB and 36 MB), so the amount of metadata is much smaller.
Conclusion:
To reduce the burden on the NameNode, HDFS prefers a 64 MB or 128 MB block size. The default block size is 64 MB in Hadoop 1.0 and 128 MB in Hadoop 2.0.
It has more to do with disk seeks on HDDs (hard disk drives). Over time, disk seek time has not improved much compared to disk throughput. So when the block size is small (which leads to too many blocks), there are too many disk seeks, which is not very efficient. As we move from HDDs to SSDs, seek time matters less, since an SSD has no moving parts.
Also, if there are too many blocks, it strains the NameNode. Note that the NameNode has to store all of the metadata (data about blocks) in memory. In Apache Hadoop the default block size is 64 MB, and in Cloudera's Hadoop distribution the default is 128 MB.
If the block size were set to less than 64 MB, there would be a huge number of blocks throughout the cluster, which would force the NameNode to manage an enormous amount of metadata.
Since we need a mapper for each block, there would also be a lot of mappers, each processing only a small piece of data, which isn't efficient.
The reason Hadoop chose 64MB was because Google chose 64MB. The reason Google chose 64MB was due to a Goldilocks argument.
Having a much smaller block size would cause seek overhead to increase.
Having a moderately smaller block size makes map tasks run fast enough that the cost of scheduling them becomes comparable to the cost of running them.
Having a significantly larger block size begins to decrease the available read parallelism and may ultimately make it hard to schedule tasks local to the data.
See Google Research Publication: MapReduce
http://research.google.com/archive/mapreduce.html
Below is what the book "Hadoop: The Definitive Guide", 3rd edition, explains (p. 45).
Why Is a Block in HDFS So Large?
HDFS blocks are large compared to disk blocks, and the reason is to
minimize the cost of seeks. By making a block large enough, the time
to transfer the data from the disk can be significantly longer than
the time to seek to the start of the block. Thus the time to transfer
a large file made of multiple blocks operates at the disk transfer
rate.
A quick calculation shows that if the seek time is around 10 ms and
the transfer rate is 100 MB/s, to make the seek time 1% of the
transfer time, we need to make the block size around 100 MB. The
default is actually 64 MB, although many HDFS installations use 128 MB
blocks. This figure will continue to be revised upward as transfer
speeds grow with new generations of disk drives.
This argument shouldn’t be taken too far, however. Map tasks in
MapReduce normally operate on one block at a time, so if you have too
few tasks (fewer than nodes in the cluster), your jobs will run slower
than they could otherwise.
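The book's back-of-the-envelope figure can be reproduced directly. A minimal sketch using the quoted assumptions of a ~10 ms seek and a ~100 MB/s transfer rate:

    #include <stdio.h>

    int main(void) {
        double seek_s       = 0.010;     /* ~10 ms average seek, from the quote */
        double transfer_Bps = 100e6;     /* ~100 MB/s sustained transfer rate */
        double target_frac  = 0.01;      /* seek should be ~1% of the transfer time */

        /* seek_s = target_frac * (block_bytes / transfer_Bps), solved for block_bytes */
        double block_bytes = seek_s * transfer_Bps / target_frac;

        printf("block size ~ %.0f MB\n", block_bytes / 1e6);   /* ~100 MB */
        return 0;
    }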
Related
As per my understanding, if we change the 1K block size in a fixed-block-size file system
to 2K, it will lead to better disk throughput but poorer disk space utilization.
Since the file blocks are now larger, disk space utilization can be poor: the last
non-empty block can now waste up to approximately 2K of space, whereas with 1K blocks
it could waste at most approximately 1K.
So there is poorer disk utilization. But the same file that was previously stored
in, say, 100 blocks can now be stored in 50 blocks, so reads and writes will be faster.
Is that correct reasoning?
I think your reasoning is sound for most cases, but there is an edge case where larger block sizes may cause lower throughput.
Consider what happens when you have a system with many small files and you want to read all of them. Small files imply that the files do not use the entire 2K block, but a block must be read in its entirety. So if you have a million 1K files, each using half of a block, then half of the time spent doing I/O is wasted reading parts of blocks that contain garbage.
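A small sketch of that waste calculation; the million 1 KB files and the 2 KB block size are the figures from the example above, and the printed MB values are rounded by integer division:

    #include <stdio.h>

    int main(void) {
        long long n_files   = 1000000;    /* a million small files, as above */
        long long file_size = 1024;       /* each ~1 KB of useful data */
        long long blk       = 2048;       /* 2 KB filesystem block */

        long long blocks_per_file = (file_size + blk - 1) / blk;    /* 1 */
        long long io_done = n_files * blocks_per_file * blk;        /* bytes actually read */
        long long useful  = n_files * file_size;                    /* bytes the app wanted */

        printf("useful data  : %lld MB\n", useful  / (1024 * 1024));  /* ~976 MB */
        printf("I/O performed: %lld MB\n", io_done / (1024 * 1024));  /* ~1953 MB, half wasted */
        return 0;
    }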
When appending log entries at the tail of a file, or when MySQL records its redo log, people always say that sequential writes are much faster than random writes. But why? I mean, when you write data to disk, the seek time and rotation time dominate the performance. But between your two consecutive sequential writes, there may be lots of other write requests (such as nginx recording access.log). Those requests may move the head to other tracks, and when your process does its next sequential write, it needs to move the head back again and incur the rotational delay. Even when there's no other process and the head could stand still, you still need to wait for the rotation.
So, is it true that sequential writes are better than random writes simply because, in many cases, a sequential write doesn't include the seek time while a random write always does, but both sequential and random writes include the rotation time?
The write performance of a disk is influenced by the physical properties of the storage device (e.g. the physical rotational speed in revolutions per minute in case of a mechanical disk), the disk I/O unit to I/O request size ratio and the OS/application.
Physical properties of an HDD
One major drawback of traditional mechanical disks is that, for an I/O request to be honored, the head has to reach the desired track (seek delay) and the platter has to rotate to the desired starting sector (rotational delay).
This is true for sequential and random I/O. However, with sequential I/O, this delay gets considerably less noticeable because more data can be written without repositioning the head. An "advanced format" hard disk has a sector size of 4096 bytes (the smallest I/O unit) and a cylinder size in the megabyte-range. A whole cylinder can be read without repositioning the head. So, yes, there's a seek and rotational delay involved but the amount of data that can be read/written without further repositioning is considerably higher. Also, moving from one cylinder to the next is significantly more efficient than moving from the innermost to the outermost cylinder (worst-case seek).
Writing 10 consecutive sectors involves a seek and rotational delay once, writing 10 sectors spread across the disk involves 10 counts of seeks and rotational delay.
In general, both sequential and random I/O involve seek and rotational delays; sequential I/O takes advantage of sequential locality to minimize those delays.
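A toy model of the 10-sector example above; the seek, rotation, and transfer figures are illustrative assumptions, not measurements of any particular drive:

    #include <stdio.h>

    int main(void) {
        /* illustrative (hypothetical) HDD figures, not measurements */
        double seek_ms       = 9.0;    /* average seek */
        double rotation_ms   = 4.2;    /* avg rotational latency: half a turn at 7200 rpm */
        double per_sector_ms = 0.01;   /* transfer time for one 4 KB sector */
        int    sectors       = 10;

        double sequential = seek_ms + rotation_ms + sectors * per_sector_ms;
        double scattered  = sectors * (seek_ms + rotation_ms + per_sector_ms);

        printf("10 consecutive sectors: ~%.1f ms (one seek, one rotation)\n", sequential);
        printf("10 scattered sectors  : ~%.1f ms (ten of each)\n", scattered);
        return 0;
    }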
Physical properties of an SSD
As you know, a solid-state disk does not have moving parts because it's usually built from flash memory. The data is stored in cells. Multiple cells form a page - the smallest I/O unit ranging from 2K to 16K in size. Multiple pages are managed in blocks - a block contains between 128 and 256 pages.
The problem is two-fold. Firstly, a page can only be written to once. If all pages contain data, they cannot be written to again unless the whole block is erased. Assuming that a page in a block needs to be updated and all pages contain data, the whole block must be erased and rewritten.
Secondly, the number of write cycles of an individual block is limited. To prevent approaching or exceeding the maximum number of write cycles faster for some blocks than others, a technique called wear leveling is used so that writes are distributed evenly across all blocks independent from the logical write pattern. This process also involves block erasure.
To alleviate the performance penalty of block erasure, SSDs employ a garbage-collection process that reclaims pages marked as stale: the non-stale pages of a block are copied to a new block and the original block is erased.
Both aspects can cause more data to be physically read and written than required by the logical write. A full page write can trigger a read/write sequence that is 128 to 256 times larger, depending on the page/block relationship. This is known as write amplification. Random writes potentially hit considerably more blocks than sequential writes, making them significantly more expensive.
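A minimal sketch of the worst-case write amplification implied by that page/block geometry; the 4 KB page and 256 pages per block are assumed values within the ranges mentioned above:

    #include <stdio.h>

    int main(void) {
        /* assumed flash geometry, within the ranges given above */
        long long page_bytes      = 4 * 1024;    /* one flash page */
        long long pages_per_block = 256;         /* upper end of the 128-256 range */
        long long block_bytes     = page_bytes * pages_per_block;

        /* worst case: updating one page forces the whole block to be rewritten */
        double amplification = (double)block_bytes / (double)page_bytes;

        printf("logical write : %lld bytes\n", page_bytes);
        printf("physical write: %lld bytes (x%.0f write amplification)\n",
               block_bytes, amplification);
        return 0;
    }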
Disk I/O unit to I/O request size ratio
As outlined before, a disk imposes a minimum on the I/O unit that can be involved in reads and writes. If a single byte is written to disk, the whole unit has to be read, modified, and written.
In contrast to sequential I/O, where the likelihood of triggering large writes as the I/O load increases is high (e.g. in case of a database transaction log), random I/O tends to involve smaller I/O requests. As those requests become smaller than the smallest I/O unit, the overhead of processing those increases, adding to the cost of random I/O. This is another example of write amplification as a consequence of storage device characteristics. In this case, however, HDD and SSD scenarios are affected.
OS/application
The OS has various mechanisms to optimize both sequential and random I/O.
A write triggered by an application is usually not processed immediately (unless requested by the application by means of synchronous/direct I/O or a sync command), the changes are executed in-memory based on the so-called page cache and written to disk at a later point-in-time.
By doing so, the OS maximizes the total amount of data available and the size of individual I/Os. Individual I/O operations that would have been inefficient to execute can be aggregated into one potentially large, more efficient operation (e.g. several individual writes to a specific sector can become a single write). This strategy also allows for I/O scheduling, choosing a processing order that is most efficient for executing the I/Os even though the original order as defined by the application or applications was different.
Consider the following scenario: a web server request log and a database transaction log are being written to the same disk. The web server write operations would normally interfere with the database write operations if they were executed in order, as issued by the applications involved. Due to the asynchronous execution based on the page cache, the OS can reorder those I/O requests to trigger two large sequential write requests each. While those are being executed, the database can continue to write to the transaction log without any delay.
One caveat here is that, while this should be true for the web server log, not all writes can be reordered at will. A database triggers a disk sync operation (fsync on Linux/UNIX, FlushFileBuffers on Windows) whenever the transaction log has to be written to stable storage as part of a commit. Then, the OS cannot delay the write operations any further and has to execute all previous writes to the file in question immediately. If the web server was to do the same, there could be a noticeable performance impact because the order is then dictated by those two applications. That is why it's a good idea to place a transaction log on a dedicated disk to maximize sequential I/O throughput in the presence of other disk syncs / large amounts of other I/O operations (the web server log shouldn't be a problem). Otherwise, asynchronous writes based on the page cache might not be able to hide the I/O delays anymore as the total I/O load and/or the number of disk syncs increase.
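To make the difference concrete, here is a minimal POSIX sketch of the two write patterns discussed above: a buffered append that the page cache can defer and reorder, and a commit-style append that forces the record to stable storage with fsync. The file names and helper functions are just for illustration, and error handling is kept minimal.

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    /* Buffered append: the OS can defer and reorder this write via the page cache. */
    static void append_log_line(int fd, const char *line) {
        if (write(fd, line, strlen(line)) < 0)
            return;                      /* error handling kept minimal for the sketch */
    }

    /* Commit-style append: force the record to stable storage before returning,
     * the way a database flushes its transaction log on commit. */
    static void append_and_commit(int fd, const char *rec) {
        if (write(fd, rec, strlen(rec)) < 0)
            return;
        fsync(fd);                       /* blocks until the device acknowledges */
    }

    int main(void) {
        int weblog = open("access.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
        int txlog  = open("redo.log",   O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (weblog < 0 || txlog < 0)
            return 1;

        append_log_line(weblog, "GET /index.html 200\n");   /* cheap, asynchronous */
        append_and_commit(txlog, "COMMIT txn 42\n");         /* expensive, synchronous */

        close(weblog);
        close(txlog);
        return 0;
    }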
I have a few questions on STREAM (http://www.cs.virginia.edu/stream/ref.html#runrules) benchmark.
Below is the comment from stream.c. What is the rationale behind the requirement that arrays should be 4 times the size of the cache?
* (a) Each array must be at least 4 times the size of the
* available cache memory. I don't worry about the difference
* between 10^6 and 2^20, so in practice the minimum array size
* is about 3.8 times the cache size.
I originally assumed that STREAM measures the peak memory bandwidth. But I later found that when I add extra arrays and array accesses, I can get larger bandwidth numbers. So it looks to me like STREAM doesn't guarantee to saturate memory bandwidth. Then my question is: what does STREAM really measure, and how do you use the numbers reported by STREAM?
For example, I added two extra arrays and made sure to access them together with the original a/b/c arrays, and I modified the bytes accounting accordingly. With these two extra arrays, my bandwidth number went up by ~11.5%.
> diff stream.c modified_stream.c
181c181,183
< c[STREAM_ARRAY_SIZE+OFFSET];
---
> c[STREAM_ARRAY_SIZE+OFFSET],
> e[STREAM_ARRAY_SIZE+OFFSET],
> d[STREAM_ARRAY_SIZE+OFFSET];
192,193c194,195
< 3 * sizeof(STREAM_TYPE) * STREAM_ARRAY_SIZE,
< 3 * sizeof(STREAM_TYPE) * STREAM_ARRAY_SIZE
---
> 5 * sizeof(STREAM_TYPE) * STREAM_ARRAY_SIZE,
> 5 * sizeof(STREAM_TYPE) * STREAM_ARRAY_SIZE
270a273,274
> d[j] = 3.0;
> e[j] = 3.0;
335c339
< c[j] = a[j]+b[j];
---
> c[j] = a[j]+b[j]+d[j]+e[j];
345c349
< a[j] = b[j]+scalar*c[j];
---
> a[j] = b[j]+scalar*c[j] + d[j]+e[j];
CFLAGS = -O2 -fopenmp -D_OPENMP -DSTREAM_ARRAY_SIZE=50000000
My last level cache is around 35MB.
Any comment?
Thanks!
This is for a Skylake Linux server.
Memory accesses in modern computers are a lot more complex than one might expect, and it is very hard to tell when the "high-level" model falls apart because of some "low-level" detail that you did not know about before....
The STREAM benchmark code only measures execution time -- everything else is derived. The derived numbers are based on both decisions about what I think is "reasonable" and assumptions about how the majority of computers work. The run rules are the product of trial and error -- attempting to balance portability with generality.
The STREAM benchmark reports "bandwidth" values for each of the kernels. These are simple calculations based on the assumption that each array element on the right hand side of each loop has to be read from memory and each array element on the left hand side of each loop has to be written to memory. Then the "bandwidth" is simply the total amount of data moved divided by the execution time.
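A minimal sketch of that derivation for the Triad kernel. This is not the actual stream.c code; it just mirrors the counting rule described above, uses the same array size as in the question, and assumes enough RAM for three 400 MB arrays:

    #include <stdio.h>
    #include <time.h>

    #define N 50000000                  /* matches -DSTREAM_ARRAY_SIZE=50000000 above */

    static double a[N], b[N], c[N];     /* three 400 MB arrays */

    int main(void) {
        double scalar = 3.0;
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long j = 0; j < N; j++)              /* Triad: a = b + scalar * c */
            a[j] = b[j] + scalar * c[j];
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs  = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
        /* STREAM counting: 2 arrays read + 1 array written per iteration */
        double bytes = 3.0 * sizeof(double) * N;

        printf("Triad: %.1f MB/s (a[1] = %f)\n", bytes / secs / 1e6, a[1]);
        return 0;
    }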
There are a surprising number of assumptions involved in this simple calculation.
The model assumes that the compiler generates code to perform all the loads, stores, and arithmetic instructions that are implied by the memory traffic counts. The approach used in STREAM to encourage this is fairly robust, but an advanced compiler might notice that all the array elements in each array contain the same value, so only one element from each array actually needs to be processed. (This is how the validation code works.)
Sometimes compilers move the timer calls out of their source-code locations. This is a (subtle) violation of the language standards, but is easy to catch because it usually produces nonsensical results.
The model assumes a negligible number of cache hits. (With cache hits, the computed value is still a "bandwidth", it is just not the "memory bandwidth".) The STREAM Copy and Scale kernels only load one array (and store one array), so if the stores bypass the cache, the total amount of traffic going through the cache in each iteration is the size of one array. Cache addressing and indexing are sometimes very complex, and cache replacement policies may be dynamic (either pseudo-random or based on run-time utilization metrics). As a compromise between size and accuracy, I picked 4x as the minimum array size relative to the cache size to ensure that most systems have a very low fraction of cache hits (i.e., low enough to have negligible influence on the reported performance).
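A quick check of that 4x rule against the numbers in the question (50,000,000 double elements per array, a ~35 MB LLC); the arithmetic is the only point here:

    #include <stdio.h>

    int main(void) {
        long long n         = 50000000;             /* STREAM_ARRAY_SIZE used above */
        long long elem      = 8;                    /* sizeof(double) */
        long long llc_bytes = 35LL * 1024 * 1024;   /* ~35 MB last-level cache */

        long long one_array = n * elem;             /* 400,000,000 bytes per array */
        printf("one array      : %lld MB\n", one_array / 1000000);        /* 400 MB */
        printf("4x LLC minimum : %lld MB\n", 4 * llc_bytes / 1000000);    /* ~146 MB */
        printf("rule satisfied : %s\n", one_array >= 4 * llc_bytes ? "yes" : "no");
        return 0;
    }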
The data traffic counts in STREAM do not "give credit" to additional transfers that the hardware does, but that were not explicitly requested. This primarily refers to "write allocate" traffic -- most systems read each store target address from memory before the store can update the corresponding cache line. Many systems have the ability to skip this "write allocate", either by allocating a line in the cache without reading it (POWER) or by executing stores that bypass the cache and go straight to memory (x86). More notes on this are at http://sites.utexas.edu/jdm4372/2018/01/01/notes-on-non-temporal-aka-streaming-stores/
Multicore processors with more than 2 DRAM channels are typically unable to reach asymptotic bandwidth using only a single core. The OpenMP directives that were originally provided for large shared-memory systems now must be enabled on nearly every processor with more than 2 DRAM channels if you want to reach asymptotic bandwidth levels.
Single-core bandwidth is still important, but is typically limited by the number of cache misses that a single core can generate, and not by the peak DRAM bandwidth of the system. The issues are presented in http://sites.utexas.edu/jdm4372/2016/11/22/sc16-invited-talk-memory-bandwidth-and-system-balance-in-hpc-systems/
For the single-core case, the number of outstanding L1 Data Cache misses much too small to get full bandwidth -- for your Xeon Scalable processor about 140 concurrent cache misses are required for each socket, but a single core can only support 10-12 L1 Data Cache misses. The L2 hardware prefetchers can generate additional memory concurrency (up to ~24 cache misses per core, if I recall correctly), but reaching average values near the upper end of this range requires simultaneous accesses to more 4KiB pages. Your additional array reads give the L2 hardware prefetchers more opportunity to generate (close to) the maximum number of concurrent memory accesses. An increase of 11%-12% is completely reasonable.
Increasing the fraction of reads is also expected to increase the performance when using all the cores. In this case the benefit is primarily by reducing the number of "read-write turnaround stalls" on the DDR4 DRAM interface. With no stores at all, sustained bandwidth should reach 90% peak on this processor (using 16 or more cores per socket).
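As a rough illustration of why so much concurrency is needed, Little's Law ties the required number of in-flight cache lines to bandwidth times latency. The bandwidth and latency figures below are assumed placeholders, not measurements of this system; only the ~140 target comes from the answer above:

    #include <stdio.h>

    int main(void) {
        /* Little's Law: concurrency = bandwidth * latency.
         * The two figures below are assumed placeholders, not measurements. */
        double bandwidth_Bps = 110e9;    /* ~110 GB/s per socket (assumed) */
        double latency_s     = 80e-9;    /* ~80 ns loaded memory latency (assumed) */
        double line_bytes    = 64.0;     /* cache line size */

        double lines_in_flight = bandwidth_Bps * latency_s / line_bytes;
        printf("cache lines needed in flight: ~%.0f\n", lines_in_flight);  /* ~138 */
        return 0;
    }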
Additional notes on avoiding "write allocate" traffic:
In x86 architectures, cache-bypassing stores typically invalidate the corresponding address from the local caches and hold the data in a "write-combining buffer" until the processor decides to push the data to memory. Other processors are allowed to keep and use "stale" copies of the cache line during this period. When the write-combining buffer is flushed, the cache line is sent to the memory controller in a transaction that is very similar to an IO DMA write. The memory controller has the responsibility of issuing "global" invalidations on the address before updating memory. Care must be taken when these streaming stores are used to update memory that is shared across cores. The general model is to execute the streaming stores, execute a store fence, then execute an "ordinary" store to a "flag" variable. The store fence will ensure that no other processor can see the updated "flag" variable until the results of all of the streaming stores are globally visible. (With a sequence of "ordinary" stores, results always become visible in program order, so no store fence is required.)
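A minimal sketch of that publish/consume pattern using the x86 SSE intrinsics _mm_stream_si32 and _mm_sfence; the function names and the spin-wait consumer are just for illustration:

    #include <immintrin.h>
    #include <stdatomic.h>
    #include <stdio.h>

    /* Producer: fill a shared buffer with cache-bypassing (non-temporal) stores,
     * then publish it via a flag. The sfence guarantees the streaming stores are
     * globally visible before the ordinary store to the flag can be observed. */
    static void publish(int *buf, int n, atomic_int *ready) {
        for (int i = 0; i < n; i++)
            _mm_stream_si32(&buf[i], i);   /* NT store: bypasses the cache, no RFO */

        _mm_sfence();                      /* drain the write-combining buffers */
        atomic_store_explicit(ready, 1, memory_order_release);  /* ordinary store */
    }

    /* Consumer: wait for the flag, then read the buffer normally. */
    static int consume(const int *buf, int n, atomic_int *ready) {
        while (atomic_load_explicit(ready, memory_order_acquire) == 0)
            ;                              /* spin until published */
        int sum = 0;
        for (int i = 0; i < n; i++)
            sum += buf[i];
        return sum;
    }

    int main(void) {
        static int buf[1024];
        static atomic_int ready;
        publish(buf, 1024, &ready);
        printf("sum = %d\n", consume(buf, 1024, &ready));   /* 0+1+...+1023 = 523776 */
        return 0;
    }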
In the PowerPC/POWER architecture, the DCBZ (or DCLZ) instruction can be used to avoid write allocate traffic. If the line is in cache, its contents are set to zero. If the line is not in cache, a line is allocated in the cache with its contents set to zero. One downside of this approach is that the cache line size is exposed here. DCBZ on a PowerPC with 32-Byte cache lines will clear 32 Bytes. The same instruction on a processor with 128-Byte cache lines will clear 128 Bytes. This was irritating to a vendor who used both. I don't remember enough of the details of the POWER memory ordering model to comment on how/when the coherence transactions become visible with this instruction.
The key point here, as pointed out by Dr. Bandwidth's answer, is that STREAM only counts the useful bandwidth seen by the source code. (He's the author of the benchmark.)
In practice, the write stream will incur read bandwidth costs as well, for the RFO (Read For Ownership) requests. When a CPU wants to write 16 bytes (for example) to a cache line, it first has to load the original cache line and then modify it in L1d cache.
(Unless your compiler auto-vectorized with NT stores that bypass the cache and avoid that RFO. Some compilers will do that for loops they expect to write an array too large for cache before any of it is re-read.)
See Enhanced REP MOVSB for memcpy for more about cache-bypassing stores that avoid an RFO.
So increasing the number of read streams vs. write streams will bring software-observed bandwidth closer to actual hardware bandwidth. (Also a mixed read/write workload for the memory may not be perfectly efficient.)
The purpose of the STREAM benchmark is not to measure the peak memory bandwidth (i.e., the maximum memory bandwidth that can be achieved on the system), but to measure the "memory bandwidth" of a number of kernels (COPY, SCALE, SUM, and TRIAD) that are important to the HPC community. So when the bandwidth reported by STREAM is higher, it means that HPC applications will probably run faster on the system.
It's also important to understand the meaning of the term "memory bandwidth" in the context of the STREAM benchmark, which is explained in the last section of the documentation. As mentioned in that section, there are at least three ways to count the number of bytes for a benchmark. The STREAM benchmark uses the STREAM method, which counts the number of bytes read and written at the source code level. For example, in the SUM kernel (a(i) = b(i) + c(i)), two elements are read and one element is written. Therefore, assuming that all accesses go to memory, the number of bytes accessed from memory per iteration is equal to the number of arrays multiplied by the size of an element (which is 8 bytes). STREAM calculates bandwidth by multiplying the total number of elements accessed (counted using the STREAM method) by the element size and dividing that by the execution time of the kernel. To take run-to-run variations into account, each kernel is run multiple times and the arithmetic average, minimum, and maximum bandwidths are reported.
As you can see, the bandwidth reported by STREAM is not the real memory bandwidth (at the hardware level), so it doesn't even make sense to say that it is the peak bandwidth. In addition, it's almost always much lower than the peak bandwidth. For example, this article shows how ECC and 2MB pages impact the bandwidth reported by STREAM. Writing a benchmark that actually achieves the maximum possible memory bandwidth (at the hardware level) on modern Intel processors is a major challenge and may be a good problem for a whole Ph.D. thesis. In practice, though, the peak bandwidth is less important than the STREAM bandwidth in the HPC domain. (Related: See my answer for information on the issues involved in measuring the memory bandwidth at the hardware level.)
Regarding your first question, notice that STREAM just assumes that all reads and writes are satisfied by the main memory and not by any cache. Allocating an array that is much larger than the size of the LLC helps in making it more likely that this is the case. Essentially, complex and undocumented aspects of the LLC including the replacement policy and the placement policy need to be defeated. It doesn't have to be exactly 4x larger than the LLC. My understanding is that this is what Dr. Bandwidth found to work in practice.
I was under the impression that a sequential scan of a file would actually be sequential on disk. However, I recently read that the blocks of a file might not be written contiguously on disk by a file system. If inodes are used as a map and each block is obtained by following a block pointer, I am wondering whether the mechanism by which a file system retrieves the blocks of a file is actually sequential.
If the answer is file system dependent, it would be great to cite some major filesystems.
Thanks.
Filesystems try to allocate as many sequential blocks as possible during writes. But as they age (i.e., lots of creates and deletes over time), fragmentation becomes inevitable. There are heuristics to reduce fragmentation, such as speculative preallocation and delayed allocation. Applications themselves can do things like preallocation (e.g. fallocate), enabling readahead, and running defragmentation tools, depending on the features available in the filesystem, to make the blocks contiguous or at least make reads faster.
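A minimal Linux sketch of two of those application-side options: preallocation with fallocate and a sequential readahead hint with posix_fadvise. The file name and the 1 GB reservation are arbitrary examples.

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("data.bin", O_RDWR | O_CREAT, 0644);
        if (fd < 0)
            return 1;

        /* Reserve 1 GB up front so the filesystem can pick a few large extents
         * instead of growing the file piecemeal as it is appended to. */
        if (fallocate(fd, 0, 0, 1LL << 30) != 0) {   /* Linux-specific */
            /* fall back silently; not all filesystems support it */
        }

        /* Tell the kernel the file will be read sequentially so it can ramp
         * up readahead (len = 0 means "to the end of the file"). */
        posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);

        /* ... sequential reads/writes of the file would go here ... */

        close(fd);
        return 0;
    }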
I have a program that processes a large dataset consisting of a large number (300+) of sizable (40 MB+) memory-mapped files. All the files are needed together, though they are accessed sequentially. At the moment I am memory mapping the files and then using madvise with MADV_SEQUENTIAL, since I don't want the thing to be any more of a memory hog than it needs to be (without any madvise, the memory consumption becomes a problem). The problem is that the program runs much slower (like 50x slower) than the disk I/O of the system would indicate it should, and it gets worse faster than linearly as the number of files involved increases: processing 100 files is more than 10x faster than processing 300 files, despite that being only 3x the data. I suspect that the memory-mapped files are generating a page fault every time a 4 KB page boundary is crossed, with the net result that disk seek time is greater than disk transfer time.
Can anyone think of a better way than using madvise with MADV_WILLNEED and MADV_DONTNEED every so often, and if this is the best way, any ideas as to how far to look ahead?
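One possible shape of the windowed MADV_WILLNEED / MADV_DONTNEED approach, as a hedged sketch rather than a recommendation: the 8 MB window is a guess to be tuned, and process_chunk is a stand-in for the real per-file work.

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* WINDOW is a tunable guess, not a recommended value. */
    #define WINDOW (8L * 1024 * 1024)

    /* Stand-in for the real per-chunk work: just touch every page. */
    static unsigned long process_chunk(const char *p, size_t len) {
        unsigned long sum = 0;
        for (size_t i = 0; i < len; i += 4096)
            sum += (unsigned char)p[i];
        return sum;
    }

    /* Process one mapped file front to back, asking the kernel to prefetch a
     * window ahead of the cursor and to drop what has already been consumed. */
    static void process_file(const char *path) {
        int fd = open(path, O_RDONLY);
        if (fd < 0)
            return;

        struct stat st;
        if (fstat(fd, &st) < 0) { close(fd); return; }
        size_t len = (size_t)st.st_size;

        char *map = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, 0);
        close(fd);
        if (map == MAP_FAILED)
            return;

        madvise(map, len, MADV_SEQUENTIAL);

        for (size_t off = 0; off < len; off += WINDOW) {
            size_t chunk = (len - off < (size_t)WINDOW) ? (len - off) : (size_t)WINDOW;

            /* Ask for the next window asynchronously while we work on this one. */
            if (off + chunk < len) {
                size_t ahead = (len - off - chunk < (size_t)WINDOW) ? (len - off - chunk)
                                                                    : (size_t)WINDOW;
                madvise(map + off + chunk, ahead, MADV_WILLNEED);
            }

            process_chunk(map + off, chunk);

            /* We will not touch this window again; let the kernel reclaim it. */
            madvise(map + off, chunk, MADV_DONTNEED);
        }

        munmap(map, len);
    }

    int main(int argc, char **argv) {
        for (int i = 1; i < argc; i++)
            process_file(argv[i]);
        return 0;
    }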