I need to extract a memory address from within an existing 64-bit value, and this address points to a 4K array, the starting value is:
0x000000030c486000
The address I need is stored within bits 51:12, so I extract those bits using:
address = (start >> 12) & 0x000000FFFFFFFFFF
This leaves me with the address of:
0x000000000030c486
However, the documentation I'm reading states that the array stored at the address is 4KB in size, and naturally aligned.
I'm a little bit confused over what naturally aligned actually means. I know with page aligned stuff the address normally ends with '000' (although I could be wrong on that).
I'm assuming that, because the value extracted from the starting value is only 40 bits long, I need to perform an additional bit shift to put the bits into the right position before they can be interpreted correctly.
If anyone could offer some advice on doing this, I'd appreciate it.
Thanks
Normally, "naturally aligned" means that any item is aligned to at least a multiple of its own size. For example, a 4-byte object is aligned to an address that's a multiple of 4, an 8-byte object is aligned to an address that's a multiple of 8, etc.
For an array, you don't normally look at the size of the whole array, but at the size of an element of the array.
Likewise, for a struct or union, you normally look at the size of the largest element.
Natural alignment requires that every N-byte access be aligned on a memory address boundary of N. We can express this in terms of the modulus operator: addr % N must be zero. For example:
Accessing 4 bytes of memory from address 0x10004 is aligned (0x10004 % 4 = 0).
Accessing 4 bytes of memory from address 0x10005 is unaligned (0x10005 % 4 = 1).
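For instance, here is a minimal C sketch of that modulus test (the helper name is made up; the power-of-two assumption lets it use a bitmask instead of %):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Returns 1 if addr is naturally aligned for an N-byte access.
   N is assumed to be a power of two, so the bitmask test is
   equivalent to addr % N == 0. */
static int is_naturally_aligned(uintptr_t addr, size_t n)
{
    return (addr & (n - 1)) == 0;
}

int main(void)
{
    printf("%d\n", is_naturally_aligned(0x10004, 4)); /* 1: aligned   */
    printf("%d\n", is_naturally_aligned(0x10005, 4)); /* 0: unaligned */
    return 0;
}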
From a hardware perspective, memory is typically divided into chunks of some size, such that any or all of the data within a chunk can be read or written in a single operation, but any single operation can only affect data within a single chunk.
A typical 80386-era system would have memory grouped into four-byte chunks. Accessing a two-byte or four-byte value which fit entirely within a single chunk would require one operation. If the value was stored partially in one chunk and partially in another, two operations would be required.
Over the years, chunk sizes have gotten larger than data sizes, to the point that most randomly-placed 32-bit values would fit entirely within a chunk, but a second issue may arise with some processors: if a chunk is e.g. 512 bits (64 bytes) and a 32-bit word is known to be aligned at a multiple of four bytes (32 bits), fetching each bit of the word can come from any of 16 places. If the word weren't known to be aligned, each bit could come from any of 61 places for the cases where the word fits entirely within the chunk. The circuitry to quickly select from among 61 choices is more complex than circuitry to select among 16, and most code will use aligned data, so even in cases where an unaligned word would fit within a single accessible chunk, hardware might still need a little extra time to extract it.
A “naturally aligned” address is one that is a multiple of some value that is preferred for the data type on the processor. For most elementary data types on most common processors, the preferred alignment is the same as the size of the data: Four-byte integers should be aligned on multiples of four bytes, eight-byte floating-point should be aligned on multiples of eight bytes, and so on. Some platforms require alignment, some merely prefer it. Some types have alignment requirements different from their sizes. For example, a 12-byte long double may require four-byte alignment. Specific values depend on your target platform. “Naturally aligned” is not a formal term, so some people might define it only as preferred alignment that is a multiple of the data size, while others might allow it to be used for other alignments that are preferred on the processor.
Taking bits out of a 64-bit value suggests the address has been transformed in some way. For example, key bits from the address may have been stored in a page table entry. Reconstructing the original address might or might not be as simple as extracting the bits and shifting them all the way to the “right” (low end). However, it is also common for bits such as these to be shifted to a different position (with zeroes left in the low bits). You should check the documentation carefully.
Note that a 4 KiB array, 4096 bytes, corresponds to 2^12 bytes. The coincidence of 12 with the 51:12 field in the 64-bit value suggests that the address might be obtained simply by extracting those 40 bits without shifting them at all.
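As a sketch of the two possibilities, using the questioner's value (which interpretation is right depends on the documentation for the 64-bit value):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t start = UINT64_C(0x000000030c486000);

    /* Possibility 1: bits 51:12 already hold the byte address of the
       naturally aligned 4 KiB array, so just mask them in place. */
    uint64_t mask_51_12 = ((UINT64_C(1) << 52) - 1) & ~((UINT64_C(1) << 12) - 1);
    uint64_t addr_in_place = start & mask_51_12;                /* 0x30c486000 */

    /* Possibility 2: the field is a frame number that must be shifted
       down, and then back up by 12 to rebuild the byte address. */
    uint64_t frame = (start >> 12) & ((UINT64_C(1) << 40) - 1); /* 0x30c486    */
    uint64_t addr_rebuilt = frame << 12;                        /* 0x30c486000 */

    printf("%#" PRIx64 " %#" PRIx64 " %#" PRIx64 "\n",
           addr_in_place, frame, addr_rebuilt);
    return 0;
}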
As per this link https://www.geeksforgeeks.org/structure-member-alignment-padding-and-data-packing/ , on a 32-bit machine where the size of the data bus = 4 bytes, 'double' type struct members start from addresses which are multiples of 8.
But even if they started from addresses which are multiples of 4, we'd need 2 loads to bring them from memory. So I don't see the reason for the stricter constraint that the starting address be a multiple of 8.
I am absolutely no expert, so if I'm wrong I'd love to know more too, but one reason I've seen to force doubles onto 8-byte alignment is the CPU cache. If a double were placed on only a 4-byte boundary, a cache line might hold only half of the double and force an extra read. Forcing 8-byte alignment ensures that a single cache line is used to read the whole double.
This question is similar: why is data structure alignment important for performance? Some of the answers given there may explain this better than I can.
In the model the linked page presents, there is no reason to restrict the address of a double to a multiple of eight bytes. It gives the number of four-byte memory transfers as a reason for alignment, and eight bytes can be loaded in two transfers as long as they start on a four-byte-aligned address. There is no need for an eight-byte-aligned address. (It should come as no surprise that some web page on the Internet is not of high quality.)
However, there is no single definition of a “32-bit machine” or a “64-bit machine”. Processors and systems vary in several regards, including bus width (and hence basic memory transfer size), processor register width, virtual memory mapping features, and instruction set. No single one of these makes a machine “32 bit” or “64 bit.”
A processor might require eight-byte-aligned addresses for a double simply because its instruction set encoding is designed not to have low bits for the address of a double. The “load double” instruction that loads a double into a floating-point register might not have any way of specifying the low three bits of an address in certain addressing forms; they are always taken to be zero.
Another issue could be the processor is largely a 32-bit processor, with 32-bit general registers, but has a 64-bit bus. Loads of 32-bit items to general registers only need to be four-byte aligned because the processor always loads some eight-byte-aligned 64 bits and then takes the high or low 32 bits. (Likely it also coalesces consecutive 32-bit load instructions when possible, so the full 64 bits are used.)
As another answer states, requiring eight-byte alignment for eight-byte objects prevents them from straddling cache lines or memory pages.
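If you want to see what your own compiler and ABI actually require, here is a quick C11 check (the exact numbers vary; many 32-bit x86 ABIs place struct doubles on 4-byte boundaries, while typical 64-bit ABIs use 8):

#include <stddef.h>
#include <stdio.h>

struct s {
    char   c;
    double d;   /* the offset here reveals the ABI's alignment rule for double */
};

int main(void)
{
    printf("_Alignof(double)      = %zu\n", _Alignof(double));
    printf("offsetof(struct s, d) = %zu\n", offsetof(struct s, d));
    printf("sizeof(struct s)      = %zu\n", sizeof(struct s));
    return 0;
}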
I have following lines in the code
# define __align_(x) __attribute__((aligned(x)))
I can use it like int i __align_;. What difference does it make if I use the aligned attribute as above versus just declaring my variable like int i;? Does it differ in how the variable gets created in memory?
I can use it like int i __align_;. What difference does it make?
This will not work because the macro is defined to have a parameter, __align_(x). When it is used without a parameter, it will not be replaced, and the compiler will report a syntax error. Also, identifiers starting with __ are reserved for the C implementation (for the use of the compiler, the standard library, and any other parts forming the C implementation), so a regular program should not use such a name.
When you use the macro correctly, it changes the normal alignment requirement for the type.
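For instance, here is a minimal sketch of the macro used with an argument (renamed without the reserved __ prefix; the aligned attribute is a GCC/Clang extension):

#include <stdint.h>
#include <stdio.h>

#define ALIGN_(x) __attribute__((aligned(x)))

int plain;              /* gets the type's normal alignment (typically 4)  */
int padded ALIGN_(16);  /* the compiler places this on a 16-byte boundary  */

int main(void)
{
    printf("&plain  %% 16 = %u\n", (unsigned)((uintptr_t)&plain  % 16));
    printf("&padded %% 16 = %u\n", (unsigned)((uintptr_t)&padded % 16)); /* 0 */
    return 0;
}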
Generally, objects of various types have alignment requirements: They should be located in memory at addresses that are multiples of their requirement. The reason for this is that computer hardware is usually designed to work with groups of bytes, so it may fetch data from memory in groups of, for example, four bytes: bytes 0 to 3, bytes 4 to 7, bytes 8 to 11, and so on.
If a four-byte object with four-byte alignment requirement is located at a multiple of four bytes, then it can be read from memory easily, by loading the group of bytes it is in. It can also be written to memory easily.
If the object is not at a multiple of four bytes, it cannot be loaded as one group of bytes. It can be loaded by loading the two groups of bytes it straddles, extracting the desired bytes, and combining them in one processor register. However, that takes more work, so we want to avoid it. The compiler is written to automatically align things as desired for the C implementation, and it writes load and store instructions that expect the desired alignment.¹
Different object types can have different alignment requirements even though they are bound by the same hardware behavior. For example, with a two-byte short, the alignment requirement may be two bytes. This is because, whether it starts at byte 0 or byte 2 within a group (say at address 100, 102, 104, or 106), we can load the short by loading a single group of four bytes and taking just the two bytes we want. However, if it started at byte 3 (say at address 103), we would have to load two groups of bytes (100 to 103 and 104 to 107) to get the bytes we needed for the short (103 and 104). So two-byte alignment suffices for this short even though the hardware is designed with four-byte groups.
As mentioned, the compiler handles alignment automatically. When you define a structure with multiple members of different types, the compiler inserts padding so that each member is aligned correctly, and it inserts padding at the end of the structure so that an array of them keeps the alignment from element to element in the array.
There are times when we want to override the compiler’s automatic behavior. When we are preparing to send data over a network connection, the communication protocol might require the different fields of a message to be packed together in consecutive bytes, with no padding. In this case, we can define a structure with an alignment requirement of 1 byte for it and all its members. When we are ready to send a message, we could copy data into this structure’s members and then write the structure to the network device.
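A sketch of such a wire-format structure, using the common GCC/Clang packed attribute (the message fields are invented for illustration):

#include <stdint.h>
#include <stdio.h>

/* Hypothetical message header whose protocol requires consecutive bytes
   with no padding; packing lowers every member's alignment to 1 byte. */
struct __attribute__((packed)) msg_header {
    uint8_t  version;
    uint16_t length;    /* would normally be preceded by a padding byte */
    uint32_t sequence;
};

int main(void)
{
    /* Packed: 1 + 2 + 4 = 7 bytes; unpacked it would typically be 8. */
    printf("sizeof(struct msg_header) = %zu\n", sizeof(struct msg_header));
    return 0;
}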
When you tell the compiler an object is not aligned normally, the compiler will generate instructions for that. Instead of the normal load or store instructions, it will use special unaligned load or store instructions if the computer architecture has them. If it does not, the compiler will use instructions to shift and store individual bytes or to shift and merge bytes and store them as aligned words, depending on what instructions are available in the computer architecture. This is generally inefficient; it will slow down your program. So it should not be used in normal programming. Decreasing the alignment requirements should be used only when there is a need for controlling the layout of data in memory.
Sometimes increasing the alignment requirements is used for performance. For example, an array of four-byte float elements generally only needs four-byte alignment. However, some computers have special instructions to process four float elements (16 bytes) at a time, and they benefit from having that data aligned to a multiple of 16 bytes. (And some computers have instructions for even more data at one time.) In this case, we might increase the alignment requirement for our float array (but not its individual elements) so that it works well with these instructions.
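For example, here is a sketch of over-aligning a float buffer with C11 alignas (whether vector instructions actually benefit depends on the target):

#include <stdalign.h>
#include <stdint.h>
#include <stdio.h>

/* Each float only needs 4-byte alignment, but aligning the whole array
   to 16 bytes lets 16-byte vector loads start cleanly at element 0. */
static alignas(16) float samples[256];

int main(void)
{
    printf("samples %% 16 = %u\n", (unsigned)((uintptr_t)samples % 16)); /* 0 */
    return 0;
}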
Footnote
¹ What happens if you force an object to be located at an undesired alignment without telling the compiler varies. In some computers, when a load instruction is executed with an unaligned address, the processor will “trap,” meaning it stops normal program execution and transfers control to the operating system, reporting an error in your program. In some computers, the processor will ignore the low bits of the address and load the wrong data. In some computers, the processor will load the two groups of bytes, extract the desired bytes, and merge them. On computers that trap, the operating system might do the manual fix-up of loading the bytes, or it might terminate your program or report the error to your program.
The attribute tells the compiler that the variable in question must be placed in memory in addresses that are aligned to a certain number of bytes (addr % alignement == 0).
This is important because, on many CPUs, integer values can only be operated on when they are aligned: an int32 must be 4-byte aligned and an int64 must be 8-byte aligned, and pointers likewise need to be 4- or 8-byte aligned (on 32- and 64-bit CPUs, respectively).
The attribute is mostly used for structures, where certain fields within the structure must be memory aligned in order to allow the CPU to do integer operations on them (like mov.l) without hitting a BUS ERROR from the memory controller.
If structures aren't properly aligned, the compiler has to add extra instructions to first load the unaligned value into a register using several memory operations, which costs performance.
It can also be used to improve performance in more performance-sensitive systems, by creating buffers that are page aligned (usually 4 KiB) so that paging has less of an impact, or to create DMA-able buffer zones - but that's a bit more advanced...
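A minimal sketch of such a page-aligned buffer (assuming 4 KiB pages and the GCC/Clang attribute; a real DMA buffer would usually need more than alignment alone, such as physically contiguous memory):

#include <stdint.h>
#include <stdio.h>

/* Statically allocated buffer whose start coincides with a page boundary
   on systems that use 4 KiB pages. */
static uint8_t page_buffer[4096] __attribute__((aligned(4096)));

int main(void)
{
    printf("page_buffer %% 4096 = %u\n",
           (unsigned)((uintptr_t)page_buffer % 4096)); /* 0 */
    return 0;
}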
In a 64-bit system, every memory cell is 64 bits, so how does it store an int variable that needs less space? Wouldn't it use up a whole 64-bit cell anyway? If so, why bother with different types of variables if each one is going to occupy one cell anyway?
Your use of terminology is all over the place.
A memory cell typically corresponds to a single storage element (such as a flip-flop or a DRAM capacitor) at the hardware level and is very likely to be 1 bit in size, assuming a binary computer.
What I think you are asking about is the smallest addressable unit in a computer, also known as a byte, which is very likely 8 bits large.
This has nothing to do with the data register width of the CPU, which is what one usually refers to when talking about "64 bit computers". The data register width is the largest chunk of data that the CPU can process in a single instruction, but not necessarily the smallest. And this has no relation with the address bus width of the computer, though they are often the same nowadays.
When you declare a variable in C, the size allocated depends on the system. An int, for example, is very likely 32 bits in size on both 32-bit and 64-bit computers. Notably, all mainstream 64-bit computers also support 32-bit and smaller accesses. So it doesn't necessarily make sense for the compiler to allocate more than 32 bits; you would get larger memory use for no speed gained.
I believe the term you are fishing for is alignment. It is only inefficient for the computer to read smaller chunks when they are stored at misaligned addresses, that is, addresses not evenly divisible by the size of the access (expressed in bytes). Such accesses are typically slower, or in some cases not supported at all. So a 64-bit compiler might decide to place a small variable inside an 8-byte chunk and leave the remaining, unused bytes as padding. However, when the compiler optimizes for size, it may choose to store data in a more memory-efficient way, at the cost of access time.
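A quick check of what actually gets allocated (the exact numbers are implementation-defined; on typical 64-bit targets an int stays 4 bytes rather than being widened to 8):

#include <stdio.h>

struct pair {
    char c;   /* 1 byte                                            */
    int  i;   /* usually preceded by 3 padding bytes for alignment */
};

int main(void)
{
    printf("sizeof(short)       = %zu\n", sizeof(short));
    printf("sizeof(int)         = %zu\n", sizeof(int));
    printf("sizeof(long)        = %zu\n", sizeof(long));
    printf("sizeof(struct pair) = %zu\n", sizeof(struct pair)); /* often 8 */
    return 0;
}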
Computer Systems: a Programmer's Perspective says
The x86-64 hardware will work correctly regardless of the alignment of
data. However, Intel recommends that data be aligned to improve memory
system performance. Their alignment rule is based on the principle
that any primitive object of K bytes must have an address that is a
multiple of K. We can see that this rule leads to the following
alignments:
K Types
1 char
2 short
4 int, float
8 long, double, char *
Why is it that "any primitive object of K bytes must have an address that is a multiple of K"?
How is "aligned" defined or what does it mean?
On an x86-64 machine,
if an object is K bytes (such as K=2, e.g. short, or K=4, e.g. int or float), the rule says it must have an address that is a multiple of K. But isn't the object already aligned as long as its storage falls completely between two consecutive multiples of 8? That is a less strict requirement than the address having to be a multiple of K.
If the K of an object is smaller than 8 but not equal to 1, 2 or 4, does "any primitive object of K bytes must have an address that is a multiple of K" still apply? For example if K=3,5,6, or 7?
On an x86 machine, which has 32-bit addresses,
what is the alignment rule, and does "any primitive object of K bytes must have an address that is a multiple of K" still apply?
Thanks.
Since this was tagged C as well, do note that not only does the architecture make these decisions, but so do compilers. The C compiler often has its own alignment rules that mostly follow either the required or the preferred alignment of the architecture, especially when optimizing for speed. And the compiler's requirements are what you need to worry about most of the time, not the architecture's requirements.
Even if the processor supports unaligned accesses, it might have a preferred alignment for multibyte objects that the C compiler can exploit. For example, a compiler is allowed to assume that any int will reside at, and therefore that any int * pointer will always point to, an address divisible by 4.
Now there are people who say that since x86-64 supports unaligned access, they can make an int * pointer that points to an address not divisible by 4 and things will work fine.
They're wrong.
There are some instructions in the x86-64 instruction set that require alignment. That is, "will work correctly regardless of alignment" means that these instructions too work "correctly, according to the specification, when given an unaligned access": they raise an exception that would kill your process. The reason for having them is that they can be much faster and require less silicon to implement than the versions that can deal with unaligned data.
And the compiler knows exactly when it is allowed to use these instructions! Whenever it sees an int * being dereferenced it knows that it can use an instruction that requires the operand be aligned at 4 bytes, should it be more effective.
See this question for a case where the OP ran into problems with C code that "should have been fine on x86-64 anyway": C undefined behavior. Strict aliasing rule, or incorrect alignment?
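A hedged illustration of the difference: reading a 4-byte value from an arbitrary byte offset with memcpy is always defined, whereas dereferencing a misaligned int * is undefined behavior even on x86-64, because the compiler may assume the alignment (the buffer contents are illustrative):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned char buf[8] = {0, 0x78, 0x56, 0x34, 0x12, 0, 0, 0};

    /* Defined behavior: copy the bytes out; the compiler picks suitable
       instructions for a possibly unaligned source. */
    uint32_t v;
    memcpy(&v, buf + 1, sizeof v);
    printf("%#x\n", (unsigned)v);   /* 0x12345678 on a little-endian machine */

    /* Undefined behavior: buf + 1 is almost certainly not 4-byte aligned,
       and the compiler may assume an int * is; do not do this. */
    /* uint32_t bad = *(uint32_t *)(buf + 1); */

    return 0;
}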
As for x86-32, the alignment requirement for doubles is generally 4 in C compilers because doubles need to be passed on the stack, and the stack grows in 4-byte, not 8-byte, increments.
And finally:
If the K of an object is smaller than 8 but not equal to 1, 2 or 4, does "any primitive object of K bytes must have an address that is a multiple of K" still apply? For example if K=3,5,6, or 7?
There are no primitive objects with K ∈ {3, 5, 6, 7} on x86.
The C standard's stance is that an alignment can only be a power of 2, and there are no gaps in arrays. Therefore an object with such a size would need to be padded upwards to its alignment requirement, or its alignment requirement must be 1.
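For example, a sketch of how such sizes play out in C (typical values shown in comments; they are implementation-defined):

#include <stdio.h>

struct three_chars { char a, b, c; };    /* alignment 1, size 3           */
struct short_char  { short s; char c; }; /* alignment 2, size padded to 4 */

int main(void)
{
    printf("%zu %zu\n", _Alignof(struct three_chars), sizeof(struct three_chars));
    printf("%zu %zu\n", _Alignof(struct short_char),  sizeof(struct short_char));
    return 0;
}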
The rules are different on each processor model. I will discuss one hypothetical example. We may have a processor with an eight-byte interface to the bus. Given some address X, the processor can load eight bytes from that address by requesting the memory to deliver eight bytes from its unit of storage numbered X/8. That is, the memory does not have any way to address individual bytes. The processor can only request data at a certain address that is a multiple of eight, and the memory will send the entire eight bytes at that address. (Keep in mind this is a hypothetical example to illustrate basic principles. Also, I am ignoring cache. Cache helps mask some of the effects of alignment issues, because the misalignments can be largely managed in level-one cache inside the processor. But handling this still requires extra hardware, as discussed below.)
Suppose we want the four-byte object that is in bytes 7, 8, 9, and 10. To get this, the processor has to request unit 0 from memory, which supplies bytes 0 through 7, and it has to request unit 1, which supplies bytes 8 through 15. So, already, there is a performance problem: We had to use two bus transfers to get this word that is only half the size of one transfer. That is inefficient, and the bus can only do half as many of these double transfers as it can if we loaded only aligned data requiring single transfers.
Continuing, the processor has all the bytes it needs, 0 through 15, so it extracts bytes 7 through 10, which make up the object we want. To do this, though, it has to shift the bytes to put them into a register. Ideally, if nobody did any “unaligned” loads, four-byte objects would come in from the bus only at offsets 0 and 4 in the eight-byte transfers, and the processor only needs to have wires going from those offsets to the register destinations.
However, our processor supports unaligned loads, so it has additional switches and wires so the data can be shunted down a different path, where it will be shifted by three bytes. Keep in mind, the data from both transfers has to be shifted by three bytes and then spliced together. So a lot of extra wires and switches are needed. Two eight-byte transfers is 128 bits, so there are hundreds of extra connections involved in this.
Well, fine, the processor has these wires and switches, why not use them? To make this processor fast, it supports multiple loads and stores in progress simultaneously. As soon as the bus transfers one piece of data, we want to be getting another from the bus, while the data from the first is still on its way to a register. So there are actually multiple parts of the processor moving data around for several loads. Since we expect unaligned loads to be rare, maybe only one of the parts for handling loads has the extra components to handle unaligned loads. The others all handle aligned loads. So, if you have just one unaligned load occasionally, the processor sends it to that part, and the performance effect is unnoticeable. However, if you do many unaligned loads in a row, they all have to go through the one part, so they end up waiting in a queue instead of running in parallel, and performance decreases.
That is just for loads. When you store that four-byte object, there is no way to write just bytes 7 through 10. Since the bus and the memory only work in eight-byte units, we need to write units 0 and 1, which also contain bytes 0 through 6 and bytes 11 through 15. To implement the store, the processor must:
Load memory unit 0, providing bytes 0 through 7.
Load memory unit 1, providing bytes 8 through 15.
Move the first byte of the four-byte object into byte 7.
Move the last three bytes of the object into bytes 8 through 10.
Store the changed memory unit 0.
Store the changed memory unit 1.
Again, that is twice as much work as it would be with an aligned object (load one memory unit, move the bytes in, store the unit). And, besides the time of the operations, you are occupying more resources inside the processor—it has to use two internal registers to hold the data from memory temporarily while it is merging the changes.
Actually, it is more than twice the work and resources, because it also requires extra wires and switches to shift the bytes by non-standard amounts.
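The read-modify-write sequence above can be sketched in C by treating memory as an array of aligned eight-byte units (purely an illustration of the hardware steps, not something real code should do):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* "Memory" as two eight-byte units; bytes 7..10 straddle unit 0 and unit 1. */
static uint64_t units[2];

/* Store a 4-byte value at a byte offset using only whole-unit accesses. */
static void unaligned_store32(uint32_t value, size_t byte_offset)
{
    unsigned char bytes[16];
    memcpy(bytes, units, sizeof bytes);       /* load unit 0 and unit 1   */
    memcpy(bytes + byte_offset, &value, 4);   /* merge the four new bytes */
    memcpy(units, bytes, sizeof bytes);       /* store both units back    */
}

int main(void)
{
    unaligned_store32(0x11223344, 7);
    printf("%016" PRIx64 " %016" PRIx64 "\n", units[0], units[1]);
    return 0;
}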
The processor bus, which is the medium used to access memory, normally matches the processor size in bits. This means a 32-bit processor normally accesses memory in 32-bit chunks, so only one memory read is necessary to fetch such a chunk of data from memory.
Addresses, by contrast, are byte-oriented, so a double (8 bytes) normally occupies eight contiguous memory bytes. To access a single eight-byte datum with only one bus request, the data must begin on an eight-byte boundary and finish before the next one. For old processors this was imperative: if you requested a memory access that was not aligned, an exception was raised. Current processors don't have this restriction, but beware that if you have, for example, a double at an address that is not a multiple of eight, the processor will need to make two bus accesses (with the overhead that implies) to get the data from memory.
For this reason (the time required to execute a piece of code can double, or worse, when all the data is unaligned compared with when it is properly aligned), processor vendors warn you about the alignment of data.
Modern processors have several levels of caches that are read from main memory in chunks of one cache line (64 bytes or more), so this is less of an issue. Even so, it is a good idea to keep data aligned, in case you need to run your code on a less advanced processor.
While I was taking a course on the hardware/software interface, our teacher said the CPU fetches data in machine words; for example, the CPU can fetch the value at address 0x00, but it cannot fetch the value at address 0x01, only the one at address 0x00 + word-size (normally 4 bytes). So I want to understand how we can use a short int in C, which contains only 2 bytes?
Different CPU's have different memory access restrictions. One common form is to allow access at any byte offset, but suffer performance penalties if this occurs because two word-aligned transfers might be needed in order to retrieve all bytes necessary for the operation. The second form simply disallows non-aligned memory access.
In the first case, the compiler has nothing to worry about; memory can be "packed" or word-aligned and both will work, but one will be faster and one will use less memory.
In the second case, the compiler must generate code to get an un-aligned data into the correct register location(s) to allow arithmetic and logical operations to occur. These operations shift and/or mask bits in byte-sized units. For example a 4-byte load may be needed to load a 2-byte variable. In this instance 2 bytes of the data will likely need to be "zeroed" and the 2 "good" bytes may need to be shifted into the low-order bits of the register. All of this is the responsibility of the compiler, but it adds overhead to access the data. On the other hand, it means the data occupies less memory when not stored in a register. This can be important if you have a large amount of data, such as an array of a few million integers; storing it in half the space may actually make the program run faster (even given the above overhead), since twice as many array values fit in the cache.
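A sketch of what such compiler-generated code effectively does, extracting a 2-byte value stored in the upper half of an aligned 4-byte word (little-endian layout assumed):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* One aligned 4-byte word as fetched from memory; the short we want
       sits at byte offset 2, i.e. the upper 16 bits on little-endian. */
    uint32_t word = 0xBEEF1234;

    uint16_t value = (uint16_t)((word >> 16) & 0xFFFF);  /* shift + mask */
    printf("%#x\n", (unsigned)value);   /* 0xbeef */

    return 0;
}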
A 32-bit processor uses 4 bytes for an integer, so first let me explain memory banking.
Memory banking: if the processor used a single-byte addressing scheme, it would take 4 memory cycles to perform a read/write of a 4-byte value. So, from the processor's point of view, memory is organized into banks.
So if an integer is stored in the address range 0x0000-0x0003, i.e. 4 bytes, it requires just a single memory read/write cycle; the processor is relieved of the extra memory cycles, and performance improves.
It is also possible for a variable's allocation to start at an address that is not a multiple of four; it might then require 2 or more memory cycles (float, double, etc.), depending on how the value is stored. The CPU fetches the data in the minimum number of read cycles.
For the short data type, only two bytes of memory are allocated; if it starts at 0x0001 it ends at 0x0002, and still only 1 read cycle is required. The advantage is that an integer allocates 4 bytes whereas a short allocates two, so two bytes are saved compared with the integer data type. Storage is thus improved.
References: GeeksforGeeks, IBM