Here is an interview question I was asked:
you have the following code:
long num;
char buff[50];
If buff is aligned to address 0, what is the most efficient way for num to get the first 4 bytes of buff?
If we want to get the value of buff at a specific place k (buff[k]), how will we do it?
Is it related to memory alignment?
Regards,
Ron
First, understand that the question is intended to ask about characteristics beyond those specified in the C standard. The C standard does not impose requirements about efficiency, so any question asking about efficiency is necessarily asking about C implementations, not about the C standard. The interviewer is not probing your knowledge of C per se; they are probing your knowledge of modern hardware, compilers, and so on.
As mentioned in xvan’s answer, you could use num = * (long *) buff;. This works given some assumptions implicit in the question. In order for this to work reliably:
long must not have any trap representations, or we must know that the data being copied is not a trap representation.
long must be four bytes.
The compiler must tolerate aliasing. That is, it must not assume that, because the elements of buff are char, we will not access them through a pointer to long.
buff must be four-byte aligned as stated in the question, or the target hardware must support unaligned loads.
These characteristics are not uncommon in C implementations, particularly with corresponding options selected during compilation. The result of this code is likely to be a two-instruction sequence that loads four bytes from memory to a register and that stores four bytes from a register to memory. That is the knowledge I think the interviewer was testing you for.
However, this is not a great solution. As Ilja Everilä noted in a comment, you can simply write memcpy(&num, buff, sizeof num);. This is a proper C-standard way to copy bytes, and a good compiler will optimize it. For example, I just compiled this source code using Apple LLVM 8.1.0 on macOS 10.12.6 with “-O3 -std=c11 -S” (switches that request optimization, use of the 2011 C standard, and assembly code output):
#include <stdint.h>
#include <string.h>

void foo(uint32_t *L, char *A)
{
    memcpy(L, A, sizeof *L);
}
and the resulting routine contains these instructions between the usual routine entry and exit code:
movl (%rsi), %eax
movl %eax, (%rdi)
Thus, the compiler has optimized the memcpy call into a load instruction and a store instruction. This is even though the compiler does not know what the alignment of buff might be. It apparently “believes” that unaligned loads and stores perform reasonably well on the target architecture, so it chose to implement the memcpy directly with load and store instructions rather than explicitly calling a library routine and looping to copy four individual bytes.
If a compiler does not immediately optimize the memcpy call like this, it may need a little help. For example, if the compiler does not know that buff is four-byte aligned, and the target hardware does not perform unaligned four-byte loads well (or at all), then the compiler will not optimize this memcpy into a load-store pair. In that case, some compilers have language extensions that let you tell them a pointer has more than the normal alignment, such as GCC’s __builtin_assume_aligned(), which M.M. mentions. For example, with Apple LLVM, I could do this:
typedef char AlignedBuffer[50] __attribute__((__aligned__(4)));

void foo(uint32_t *L, AlignedBuffer *A)
{
    *L = * (long *) A;
}
That typedef tells the compiler that the AlignedBuffer type is always at least four-byte aligned. This is, of course, an extension to the C language that is not available in all compilers. (Also, when doing this, I would have to be sure to use the compiler option that permits aliasing things through pointers to other types.)
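For comparison, the same hint can be given with GCC’s __builtin_assume_aligned(), mentioned above. This is just a sketch, and the builtin is a GCC/Clang extension:

#include <stdint.h>
#include <string.h>

void foo(uint32_t *L, char *A)
{
    char *a4 = __builtin_assume_aligned(A, 4); /* promise: A is 4-byte aligned */
    memcpy(L, a4, sizeof *L);
}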
Given that this compiler already knows how to optimize this case, trying to outsmart it with pointer casting is pointless. However, when working with other compilers in other situations, something like the pointer casting may be necessary to get the performance desired. But one needs to know that this is implementation dependent, and the code should be documented as such so that other people know it cannot be ported to other C implementations without addressing these issues.
Regarding the follow-up question, one can write num = * (long *) (buff + k);. It is likely the point of this follow-up question is to probe your knowledge of hardware alignment requirements. On many systems, attempting to load four-byte data from an address that is not four-byte aligned causes an exception. Therefore, this assignment statement is likely to fail on such hardware when k is not a multiple of four. (Also, we should note that k must be such that all bytes to be loaded are within buff, or are otherwise known to be accessible.) The interviewer likely wanted you to display that knowledge.
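For completeness, here is a portable sketch (my own, not part of the interview) for reading four bytes at an arbitrary k, again relying on the compiler to optimize the memcpy:

#include <stdint.h>
#include <string.h>

/* Read four bytes starting at buff[k] with no alignment assumptions.
   The caller must ensure buff[k] through buff[k+3] lie within the buffer. */
uint32_t read_at(const char *buff, size_t k)
{
    uint32_t value;
    memcpy(&value, buff + k, sizeof value);
    return value;
}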
Typically with interview questions like this, there is not necessarily a single right answer that the interviewer wants. Mostly, they want to see that you are aware of the issues, have some understanding of them, and have some knowledge of potential ways to address them.
Related
I'm trying to understand how C allocates memory on the stack. I always thought variables on the stack could be depicted like struct member variables: they occupy successive, contiguous blocks of bytes within the stack. To help illustrate this issue I found somewhere, I created this small program which reproduces the phenomenon.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

void function(int *i) {
    int *_prev_int = (int *) ((long unsigned int) i - sizeof(int));
    printf("%d\n", *_prev_int);
}

int main(void)
{
    int x = 152;
    int y = 234;
    function(&y);
    return 0;
}
See what I'm doing? Suppose sizeof(int) is 4: I'm looking 4 bytes behind the passed pointer, as that should read the 4 bytes before where int y sits in the caller's stack frame.
It did not print 152. Strangely, when I look at the next 4 bytes:
int *_prev_int = (int *) ((long unsigned int) i + sizeof(int));
and now it works: it prints whatever is in x inside the caller's stack. Why does x have a higher address than y? Are stack variables stored upside down?
Stack organization is completely unspecified and is implementation-specific. In practice, it depends a lot on the compiler (even on its version) and on optimization flags.
Some variables don't even sit on the stack (e.g. because they are just kept in registers, or because the compiler optimized them away, e.g. by inlining, constant folding, etc.).
BTW, you could have some hypothetical C implementation which does not use any stack at all (even if I cannot name such an implementation).
To understand more about stacks:
Read the wiki pages on call stacks, tail calls, threads, and continuations
Become familiar with your computer's architecture & instruction set (e.g. x86) & ABI, then ...
ask your compiler to show the assembler code and/or some intermediate compiler representations. If using GCC, compile some simple code with gcc -S -fverbose-asm (to get the assembler code foo.s when compiling foo.c) and try several optimization levels (at least -O0, -O1, -O2). Try also the -fdump-tree-all option (it dumps hundreds of files showing some internal representations of the compiler for your source code). Notice that GCC also provides return-address builtins
Read Appel's old paper arguing that garbage collection can be faster than stack allocation, and understand garbage collection techniques (since they often need to inspect and possibly change some pointers inside call stack frames). To learn more about GC, read the GC handbook.
Sadly, I know of no low-level language (like C, D, Rust, C++, Go, ...) where the call stack is accessible at the language level. This is why coding a garbage collector for C is difficult (since GCs need to scan the pointers inside the call stack)... But see Boehm's conservative GC for a very practical and pragmatic solution.
Almost all processor architectures nowadays support stack-manipulation instructions (e.g. the LDM and STM instructions on ARM). Compilers implement the stack with the help of those. In most cases, the stack pointer decrements when data is pushed onto the stack (growing downwards) and increments when data is popped from it.
So how the stack is implemented depends on the processor architecture and the compiler.
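If you want to see which way your implementation's stack grows, a quick unportable experiment (my own sketch; compile without optimization, and note that nothing here is guaranteed by the C standard) is to print the addresses of locals at increasing call depth:

#include <stdio.h>

void probe(int depth)
{
    int local = depth; /* a local whose address we can observe */
    printf("depth %d: local at %p\n", local, (void *)&local);
    if (depth < 3)
        probe(depth + 1);
}

int main(void)
{
    probe(0); /* on most mainstream systems, the addresses decrease */
    return 0;
}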
Depends on the compiler and platform. The same thing can be done in more than one way, as long as it is done consistently by a program (in this case, the compiler's translation to assembly, i.e. machine code) and the platform supports it (good compilers try to optimize the assembly to get the “most” out of each platform).
A very good source for deeply understanding what goes on behind the scenes of C, what happens when compiling a program, and why it happens, is the free book Reverse Engineering for Beginners (Understanding Assembly Language) by Dennis Yurichev; the latest version can be found at his site.
I read the System V ABI for i386 and AMD64. They say that arguments must be rounded up to a multiple of the word size, and I don't understand why.
Here is the situation. If you pass 4 char arguments to a function on the i386 architecture, they will take 16 bytes (4 bytes for each char argument). Isn't it more efficient to allocate only 4 bytes for all 4 arguments, as would happen with local variables?
Alignment is not the answer, because either way it could take 4-12 bytes of padding to reach the 16-byte stack alignment.
Putting the 4 chars in a single register (or stack location) would require packing and afterwards extracting the individual parameters, which is costly in terms of instructions. Note that even though we are talking about the stack, the memory access should be very quick, given that it will most likely be in the cache.
If you really want to save that much space, you can still do it yourself using a single 4-byte argument.
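For instance, a sketch of doing it yourself (my own illustration; the packing scheme is arbitrary):

#include <stdint.h>

/* Pack four chars into a single 4-byte argument. */
static uint32_t pack4(char a, char b, char c, char d)
{
    return  (uint32_t)(unsigned char)a
         | ((uint32_t)(unsigned char)b << 8)
         | ((uint32_t)(unsigned char)c << 16)
         | ((uint32_t)(unsigned char)d << 24);
}

void f(uint32_t packed)
{
    char a = (char)(packed & 0xFF); /* unpack the first char; others likewise */
    (void)a;
}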
Isn't it more efficient to ...
You always have to say what you want to optimize:
Fast execution speed
Small program size
Less stack usage
Simpler compilers
...
If you want to optimize for less stack usage, passing bytes to the function really would be more efficient.
However, normally you want to optimize for fast execution speed or small program size.
Unlike modern compilers (which mov the arguments to the stack), most compilers written in the 1990s that I know push the arguments onto the stack. If a compiler uses push operations, putting individual bytes on the stack would be rather complex; it would make the program slow and long.
(Note that I have never seen a pop operation done on a parameter.)
I think the original C authors had their eye on portability and maintainability more than squeezing every byte and cycle. Not that C is careless with resources, but appropriate trade-offs were made.
Promoting each parameter to the stack granule size made sense, and really still does. If you are desperate to squeeze it in, you could always replace:
int f(int a, int b, int c, int d) { ... }
with
struct fparm { char a, b, c, d; };
int f(struct fparm a) { ... }
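A call site for the packed version might then look like this (my own illustration; compound-literal syntax requires C99):

struct fparm { char a, b, c, d; };
int f(struct fparm p);

int call_f(void)
{
    return f((struct fparm){ 1, 2, 3, 4 }); /* four chars in one stack granule */
}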
Modern C compilers are not so user friendly; or rather their only friend is a luser named benchmark....
I have this StructType st = StructTypeSecondInstance->st; and it generates a segfault. The strange part is that the stack backtrace shows me:
0x1067d2cc: memcpy + 0x10 (0, 10000, 1, 1097a69c, 11db0720, bfe821c0) + 310
0x103cfddc: some_function + 0x60 (0, bfe823d8, bfe82418, 10b09b10, 0, 0) +
So, does struct assignment use memcpy?
One can't tell. Small structs may even be kept in registers. Whether memcpy is used is an implementation detail (it's not even implementation-defined or unspecified -- it's just something the compiler writer chooses and does not need to document).
From a C standard point of view, all that matters is that after the assignment, the struct members of the destination struct compare equal to the corresponding members of the source struct.
I would expect compiler writers to make a tradeoff between speed and simplicity, probably based on the size of the struct: the larger the struct, the more likely the use of memcpy. Some memcpy implementations are very sophisticated and use different algorithms depending on whether the length is a power of 2, or on the alignment of the src and dst pointers. Why reinvent the wheel or blow up the code with an inline version of memcpy?
It might, yes.
This shouldn't be surprising: the struct assignment needs to copy a bunch of bytes from one place to another as quickly as possible, which happens to be the exact thing memcpy() is supposed to be good at. Generating a call to it seems like a no-brainer if you're a compiler writer.
Note that this means that assigning structs with lots of padding might be less efficient than optimally, since memcpy() can't skip the padding.
The standard doesn't say anything at all about how assignment (or any other operator) is actually realized by the compiler. There's nothing stopping a compiler from (say) generating a function call for every operation in your source file.
The compiler has license to implement assignment as it thinks best. Most of the time, with most compilers on most platforms, this means that if the structure is reasonably small, the compiler will generate an inline sequence of move instructions; if the structure is large, calling memcpy is common.
It would be perfectly valid, however, for the compiler to loop over generating random bitfields and stop when one of them matches the source of the assignment (Let's call this algorithm bogocopy).
Compilers that support non-hosted operation usually give you a switch to turn off emitting such libcalls if you're targeting a platform without an available (or complete) libc.
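As a small illustration of the size heuristic described above (my own sketch, not tied to any particular compiler):

struct Small { int a, b; };        /* likely copied with a few inline moves */
struct Big   { char data[4096]; }; /* likely copied via a call to memcpy */

void copy_small(struct Small *d, const struct Small *s) { *d = *s; }
void copy_big(struct Big *d, const struct Big *s)       { *d = *s; }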
It depends on the compiler and platform. Assignment of big objects can use memcpy. But that in itself cannot be the reason for the segfault.
In the embedded software domain, when copying a structure to another of the same type, people don't use direct assignment; they do it with the memcpy() function or by copying each element.
Let's have, for example:
struct tag
{
    int a;
    int b;
};

struct tag exmple1 = {10, 20};
struct tag exmple2;
For copying exmple1 into exmple2, instead of writing directly:
exmple2 = exmple1;
people use:
memcpy(&exmple2, &exmple1, sizeof(struct tag));
or:
exmple2.a = exmple1.a;
exmple2.b = exmple1.b;
Why?
One way or the other, there is nothing specific about embedded systems that makes this dangerous; the language semantics are identical for all platforms.
C has been used in embedded systems for many years, and early C compilers, before ANSI/ISO standardisation did not support direct structure assignment. Many practitioners are either from that era, or have been taught by those that were, or are using legacy code written by such practitioners. This is probably the root of the doubt, but it is not a problem on an ISO compliant implementation. On some very resource constrained targets, the available compiler may not be fully ISO compliant for a number of reasons, but I doubt that this feature would be affected.
One issue (that applies to embedded and non-embedded alike), is that when assigning a structure, an implementation need not duplicate the value of any undefined padding bits, therefore if you performed a structure assignment, and then performed a memcmp() rather than member-by-member comparison to test for equality, there is no guarantee that they will be equal. However if you perform a memcpy(), any padding bits will be copied so that memcmp() and member-by-member comparison will yield equality.
So it is arguably safer to use memcpy() in all cases (not just embedded), but the improvement is marginal, and not conducive to readability. It would be a strange implementation that did not use the simplest method of structure assignment, and that is a simple memcpy(), so it is unlikely that the theoretical mismatch would occur.
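For example, a member-by-member comparison (my own sketch, reusing the struct from the question) sidesteps the padding problem that memcmp() has:

struct tag { int a; int b; };

/* Padding bits never enter the picture here. */
int tag_equal(const struct tag *x, const struct tag *y)
{
    return x->a == y->a && x->b == y->b;
}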
In your given code there is no problem even if you write:
exmple2 = exmple1;
But just assume that in the future, the struct definition changes to:
struct tag
{
    int a[1000];
    int b;
};
Now if you execute the assignment operator as above, then (some) compilers might inline the code for element-by-element copying, i.e.:
exmple2.a[0] = exmple1.a[0];
exmple2.a[1] = exmple1.a[1];
exmple2.a[2] = exmple1.a[2];
...
which will result in code bloat in your code segment. Such code-size problems are not trivial to find. That's why people use memcpy.
[However, I have heard that modern compilers are capable enough to call memcpy internally when such an assignment is encountered, especially for PODs.]
Copying C structures via memcpy() is often done by programmers who learned C decades ago and did not follow the standardization process since. They simply don't know that C supports assignment of structures (direct structure assignment was not available in all pre-ANSI-C89 compilers).
When they learn about this feature, some still stick to the memcpy() way because it is their custom. There are also motivations that originate in cargo-cult programming, e.g. it is claimed that memcpy is just faster - of course - without being able to back this up with a benchmark test case.
Structures are also memcpy()ed by some newbie programmers because they either confuse structure assignment with the assignment of a pointer to a structure - or they simply overuse memcpy() (they often also use memcpy() where strcpy() would be more appropriate).
There is also the memcmp() structure comparison anti-pattern that is sometimes cited by some programmers for using memcpy() instead of structure assignment. The reasoning behind this is the following: since C does not automatically generate a == operator for structures and writing a custom structure comparison function is tedious, memcmp() is used to compare structures. In the next step - to avoid differences in the padding bits of compared structures - memset(...,0,...) is used to initialize all structures (instead of using the C99 initializer syntax or initializing all fields separately) and memcpy() is used to copy the structures! Because memcpy() also copies the content of the padding bits ...
But note that this reasoning is flawed for several reasons:
the use of memcpy()/memcmp()/memset() introduces new error possibilities - e.g. supplying a wrong size
when the structure contains integer fields, the ordering under memcmp() changes between big- and little-endian architectures
a char array field of size n that is 0-terminated at position x must also have all elements after position x zeroed out at all times - otherwise two otherwise-equal structs compare unequal
assignment from a register to a field may also set the neighbouring padding bits to nonzero values; thus, subsequent comparisons with otherwise-equal structures yield an unequal result
The last point is best illustrated with a small example (assuming architecture X):
#include <assert.h>
#include <string.h>

struct S {
    int a;  // on X: sizeof(int) == 4
    char b; // on X: 24 padding bits are inserted after b
    int c;
};
typedef struct S S;

int main(void)
{
    S s1;
    memset(&s1, 0, sizeof(S));
    s1.a = 0;
    s1.b = 'a';
    s1.c = 0;

    S s2;
    memcpy(&s2, &s1, sizeof(S));
    assert(memcmp(&s1, &s2, sizeof(S)) == 0); // assertion is always true

    s2.b = 'x';
    assert(memcmp(&s1, &s2, sizeof(S)) != 0); // assertion is always true

    // some computation

    char x = 'x'; // on X: 'x' is stored in a 32-bit register
                  // as the least significant byte;
                  // the other bytes contain previous data
    s1.b = x;     // the complete register is copied,
                  // i.e. the higher 3 register bytes become the new
                  // padding bits in s1

    assert(memcmp(&s1, &s2, sizeof(S)) == 0); // assertion is NOT always true
    return 0;
}
The failure of the last assertion may depend on code reordering, change of the compiler, change of compiler options and stuff like that.
Conclusion
As a general rule: to increase code correctness and portability, use direct struct assignment (instead of memcpy()), C99 struct initialization syntax (instead of memset()), and a custom comparison function (instead of memcmp()).
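Using the struct S from the example above, the recommended style would look roughly like this (a sketch):

int S_equal(const S *x, const S *y)      /* custom comparison instead of memcmp() */
{
    return x->a == y->a && x->b == y->b && x->c == y->c;
}

void demo(void)
{
    S s1 = { .a = 0, .b = 'a', .c = 0 }; /* C99 initializer instead of memset() */
    S s2 = s1;                           /* direct assignment instead of memcpy() */
    (void)S_equal(&s1, &s2);             /* true here, and padding is irrelevant */
}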
In C, people probably do that because they think memcpy would be faster. But I don't think that is true; compiler optimizations would take care of that.
In C++ it may also have different semantics because of user defined assignment operator and copy constructors.
On top of what the others wrote, some additional points:
Using memcpy instead of a simple assignment gives a hint to someone who maintains the code that the operation might be expensive. Using memcpy in these cases improves the understanding of the code.
Embedded systems are often written with portability and performance in mind. Portability is important because you may want to re-use your code even if the CPU in the original design is not available or if a cheaper micro-controller can do the same job.
These days low-end micro-controllers come and go faster than the compiler developers can catch up, so it is not uncommon to work with compilers that use a simple byte-copy loop instead of something optimized for structure assignments. With the move to 32 bit ARM cores this is not true for a large part of embedded developers. There are however a lot of people out there who build products that target obscure 8 and 16 bit micro-controllers.
A memcpy tuned for a specific platform may be more optimal than what a compiler can generate. For example, on embedded platforms, having structures in flash memory is common. Reading from flash is not as slow as writing to it, but it is still a lot slower than an ordinary copy from RAM to RAM. An optimized memcpy function may use DMA or special features of the flash controller to speed up the copy process.
That is complete nonsense. Use whichever way you prefer. The simplest is:
exmple2=exmple1;
Whatever you do, don't do this:
exmple2.a=exmple1.a;
exmple2.b=exmple1.b;
It poses a maintainability problem because any time that anyone adds a member to the structure, they have to add a line of code to do the copy of that member. Someone is going to forget to do that and it will cause a hard to find bug.
On some implementations, the way in which memcpy() is performed may differ from the way in which "normal" structure assignment would be performed, in a manner that may be important in some narrow contexts. For example, one or the other structure operand may be unaligned and the compiler might not know about it (e.g. one memory region might have external linkage and be defined in a module written in a different language that has no means of enforcing alignment). Use of a __packed declaration would be better if a compiler supported such, but not all compilers do.
Another reason for using something other than structure assignment could be that a particular implementation's memcpy might access its operands in a sequence that would work correctly with certain kinds of volatile source or destination, while that implementation's struct assignment might use a different sequence that wouldn't work. This is generally not a good reason to use memcpy, however, since aside from the alignment issue (which memcpy is required to handle correctly in any case) the specifications for memcpy don't promise much about how the operation will be performed. It would be better to use a specially-written routine which performs the operations exactly as required (for example, if the target is a piece of hardware which needs to have 4 bytes of structure data written using four 8-bit writes rather than one 32-bit write, one should write a routine which does that, rather than hoping that no future version of memcpy decides to "optimize" the operation).
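Such a specially-written routine might look like this hypothetical sketch (the device and its four-8-bit-writes requirement are assumed for illustration):

#include <stdint.h>

/* Write 4 bytes of structure data as four explicit 8-bit stores,
   in ascending address order, as the hardware requires. */
void write4(volatile uint8_t *dst, const uint8_t *src)
{
    dst[0] = src[0];
    dst[1] = src[1];
    dst[2] = src[2];
    dst[3] = src[3];
}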
A third reason for using memcpy in some cases would be the fact that compilers will often perform small structure assignments using a direct sequence of loads and stores, rather than using a library routine. On some controllers, the amount of code this requires may vary depending upon where the structures are located in memory, to the point that the load/store sequence may end up being bigger than a memcpy call. For example, on a PICmicro controller with 1K words of code space and 192 bytes of RAM, copying a 4-byte structure from bank 1 to bank 0 would take 16 instructions. A memcpy call would take eight or nine (depending upon whether the count is an unsigned char or an int; with only 192 bytes of RAM total, unsigned char should be more than sufficient!). Note, however, that calling a memcpy-ish routine which assumed a hard-coded size and required both operands to be in RAM rather than code space would only require five instructions to call, and that could be reduced to four with the use of a global variable.
The first version is perfect.
The second one may be used for speed (though there is no reason for it at your struct's size).
The third one is used only if the padding differs between target and source.
I am making a memory-block copy routine and need to deal with blocks of raw memory in efficient chunks. My question is not about the specialized copy routine I'm making, but about how to correctly examine raw pointer alignment in C.
I have a raw pointer of memory, let's say it's already cast as a non-null char *.
In my architecture, I can very efficiently copy memory in 64-byte chunks when it is aligned to a 64-byte boundary. So the (standard) trick is that I will do a simple copy of 0-63 bytes "manually" at the head and/or tail, to transform the copy from an arbitrary char * of arbitrary length into a 64-byte-aligned pointer with some multiple of 64 bytes in length.
Now the question is, how do you legally "examine" a pointer to determine (and manipulate) its alignment?
The obvious way is to cast it to an integer and just examine the bits:

char *pointer = something;
int p = (int)pointer;
char *alignedPointer = (char *)((p + 63) & ~63);

Note here I realize that alignedPointer doesn't point to the same memory as pointer... this is the "rounded up" pointer that I can call my efficient copy routine on, and I'll handle any other bytes at the beginning manually.
But compilers (justifiably) freak out at casting a pointer into an integer. But how else can I examine and manipulate the pointer's lower bits in LEGAL C? Ideally so that with different compilers I'd get no errors or warnings.
For integer types that are large enough to hold pointers, C99 stdint.h has:
uintptr_t
intptr_t
For data lengths there are:
size_t
ssize_t
which have been around since well before C99.
If your platform doesn't have these, you can maximise your code's portability by still using these type names, and making suitable typedefs for them.
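For example, a sketch of the alignment test and round-up using uintptr_t (the pointer/integer round trip is implementation-defined, but behaves as expected on common flat-memory platforms):

#include <stdint.h>

static int is_aligned_64(const char *p)
{
    return ((uintptr_t)p & 63) == 0;
}

static char *align_up_64(char *p)
{
    uintptr_t u = (uintptr_t)p;
    return (char *)((u + 63) & ~(uintptr_t)63);
}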
I don't think that in the past people were as reluctant to do their own bit-banging, but maybe the current "don't touch that" mood would be conducive to someone creating some kind of standard library for aligning pointers. Lacking some kind of official API, you have no choice but to AND and OR your way through.
Instead of int, try a datatype that's guaranteed to be the same size as a pointer (INT_PTR on Win32/64). Maybe the compiler won't freak out too much. :) Or use a union, if 64-bit compatibility is not important.
Casting pointers to and from integers is valid, but the results are implementation-defined. See section 6.3.2.3 of the standard. The intention seems to be that the results are what anybody familiar with the system would expect, and indeed this appears to be routinely the case in practice.
If the architecture in question can efficiently manipulate pointers and integers interchangeably, and the issue is just whether it will work on all compilers for that system, then the answer is that it probably will anyway.
(Certainly, if I were writing this code, I would think it fine as-is until proven otherwise. My experience has been that compilers for a given system all behave in very similar ways at this sort of level; the assembly language just suggests a particular approach, that all then take.)
"Probably works" isn't very good general advice though, so my suggestion would be just write the code that works, surround it enough suitable #ifdefs that only the known compiler(s) will compile it, and defer to memcpy in other cases.
#ifdef is rarely ideal, but it's fairly lightweight compared to other possibilities. And if implementation-defined behaviour or compiler-specific tricks are needed then the options are pretty limited anyway.