What are the reasons behind a C compiler ignoring register declarations? I understand that this declaration is essentially meaningless for modern compilers since they store values in registers when appropriate. I'm taking a Computer Architecture class so it's important that I understand why this is the case for older compilers.
"A register declaration advises the compiler that the variable in
question will be heavily used. The idea is that register variables
are to be placed in machine registers, which may result in smaller and
faster programs. But compilers are free to ignore the advice." ~ The C Programming Language by Brian W. Kernighan and Dennis M. Ritchie
Thank you!
Historical C compilers might only be "smart" (complex) enough to look at one C statement at a time, like modern TinyCC. Or they might parse a whole function to find out how many variables there are, then come back and still only do code-gen one statement at a time. For examples of how simplistic and naive old compilers were, see Why do C to Z80 compilers produce poor code? - some of the examples shown could have been optimized for the special simple case, but weren't. (This is despite Z80 and 6502 being quite poor C compiler targets, because (unlike on PDP-11) "a pointer" isn't something you can just keep in a register.)
With optimization enabled, modern compilers have enough RAM (and compile time) available to use more complex algorithms that map out register allocation for the whole function and make good decisions anyway, after inlining (e.g. by transforming the program logic into SSA form). See also https://en.wikipedia.org/wiki/Register_allocation
This makes the register keyword pointless: the compiler can already notice when the address of a variable isn't taken (something the register keyword disallows), or when it can optimize away the address-taking and keep the variable in a register anyway.
TL;DR: Modern compilers no longer need hand-holding to fully apply the as-if rule in the ways the register keyword hinted at.
They basically always keep everything in registers except when forced to spill it back to memory, unless you disable optimization for fully consistent debugging. (So you can change variables with a debugger when stopped at a breakpoint, or even jump between statements.)
Fun fact: Why does clang produce inefficient asm with -O0 (for this simple floating point sum)? shows an example where register float makes modern GCC and clang create more efficient asm with optimization disabled.
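For a rough idea of what that looks like, here is a minimal sketch (not the exact code from the linked question) of a float-summing loop; per that Q&A, at -O0 the register hint lets GCC and clang keep the accumulator in a register instead of storing and reloading it every iteration:

/* Hypothetical sketch: without optimization, "sum" would normally be spilled
   to its stack slot after every addition; "register" (which also forbids
   taking &sum) lets it live in a register for the whole loop even at -O0. */
float sum_array(const float *a, int n) {
    register float sum = 0.0f;
    for (int i = 0; i < n; i++)
        sum += a[i];
    return sum;
}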
Related
I am wondering what methods there are to add typing information to generated C methods. I'm transpiling a higher-level programming language to C and I'd like to add a moving garbage collector. However to do that I need the method variables to have typing information, otherwise I could modify a primitive value that looks like a pointer.
An obvious approach would be to encapsulate all (primitive and non-primitive) variables in a struct that has an extra (enum) field for typing information; however, this would cause memory and performance overhead, and the transpiled code is meant for embedded platforms. If I were to accept the memory overhead, the obvious option would be to use a heap handle for all objects, which would let me freely move heap blocks. However, I'm wondering if there's a more efficient approach.
I've come up with a potential solution: predeclare and group variables based on whether they're primitives or not (I can do that in the transpiler), and add an offset variable at the end of each method (I need to be able to find it accurately when scanning the stack area) that tells me where the non-primitive variables begin and end, so I only have to scan those. This means each method will use an additional 16/32 bits (depending on arch) of memory, but this should still be more memory efficient than the heap-handle approach.
Example:
void my_func() {
    /* primitives grouped first */
    int i = 5;
    int z = 3;
    bool b = false;
    /* non-primitives (object references) grouped after them */
    void* person;
    void* person_info = ...;
    .... // logic
    /* sentinel telling the scanner where the non-primitive group lives */
    volatile int offset = 0x034;
}
My aim is for something that works universally across GCC compilers, thus my concerns are:
Can the compiler reorder the variables from how they're declared in the source code?
Can I force the compiler to put some data in the method's stack frame (using volatile)?
Can I find the offset accurately when scanning the stack?
I'd like to avoid assembly so this approach can work (by default) across multiple platforms; however, I'm open to methods even if they involve assembly (if they're reliable).
Typing information could be somehow encoded in the C function name; this is what C++ and other implementations do, and it is called name mangling.
Actually, you could decide, since all your C code is generated, to adopt a different convention: generate long C identifiers which are practically unique and sort-of random program-wide, such as tiziw_7oa7eIzzcxv03TmmZ, and keep their typing information elsewhere (e.g. some database). On Linux, such an approach is friendly to both libbacktrace and dlsym(3) + dladdr(3) (and of course nm(1) or readelf(1) or gdb(1)), which is why it is used in both the bismon and RefPerSys projects.
Typing information is practically tied to calling conventions and ABIs. For example, the x86-64 ABI for Linux mandates different processor registers for passing floating points or pointers.
Read the Garbage Collection Handbook or at least P. Wilson's Uniprocessor Garbage Collection Techniques survey. You could decide to use tagged integers instead of boxing them, and you could decide to have a conservative GC (e.g. Boehm's GC) instead of a precise one. In my old GCC MELT project I generated C or C++ code for a generational copying GC. Similar techniques are used in both Bismon and RefPerSys.
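As a rough illustration of the tagged-integer idea (my own sketch, not code from any of the projects mentioned): on common targets, heap pointers from malloc are at least 2-byte aligned, so the low bit of a machine word can distinguish immediate integers from object pointers, and the collector only follows values whose low bit is clear.

#include <assert.h>
#include <stdint.h>

typedef uintptr_t value;            /* one tagged machine word */

/* Integers carry a 1 in the low bit; pointers keep their natural alignment.
   (Right-shifting a negative value is implementation-defined but behaves
   arithmetically on mainstream compilers.) */
static inline value    tag_int(intptr_t i)   { return ((uintptr_t)i << 1) | 1u; }
static inline intptr_t untag_int(value v)    { return (intptr_t)v >> 1; }
static inline int      value_is_int(value v) { return (int)(v & 1u); }
static inline value    tag_ptr(void *p)      { assert(((uintptr_t)p & 1u) == 0); return (uintptr_t)p; }
static inline void    *untag_ptr(value v)    { return (void *)v; }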
Since you are transpiling to C, consider also alternatives, such as libgccjit or LLVM. Look into libjit and asmjit.
Study also the implementation of other transpilers (compilers to C), including Chicken/Scheme and Bigloo.
Can the GCC compiler reorder the variables from how they're declared in the source code?
Of course yes, depending upon the optimizations you ask for. Some variables won't even exist in memory in the compiled binary (e.g. those kept only in registers, or optimized away entirely).
Can I force the compiler to put some data in the method's stack frame (using volatile)?
Better generate a single struct variable containing all your language variables, and leave optimizations to the compiler. You will be surprised (see this draft report).
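A minimal sketch of how such a per-function struct could double as a precise root set (the names frame_hdr/shadow_top and the shadow-stack linkage are my own illustration, not taken from the draft report):

struct frame_hdr {
    struct frame_hdr *caller;   /* previous frame on the shadow stack */
    unsigned nrefs;             /* number of void * reference slots that follow */
};

extern struct frame_hdr *shadow_top;    /* assumed global kept by the runtime */

void my_func(void) {
    struct {
        struct frame_hdr hdr;
        void *person, *person_info;     /* references: exactly what the GC scans */
    } f = { { shadow_top, 2 }, 0, 0 };
    int i = 5, z = 3;                   /* primitives stay ordinary locals */
    shadow_top = &f.hdr;

    /* ... generated logic works on f.person, f.person_info, i, z ... */
    (void)i; (void)z;

    shadow_top = f.hdr.caller;          /* restore on every return path */
}

The collector then walks shadow_top and, for each frame, scans only the nrefs reference slots after the header; no stack scanning or offset sentinel is needed, and the compiler remains free to keep the primitives in registers.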
Can I find the offset accurately when scanning the stack?
This is the most difficult part, and it depends a lot on compiler optimizations (e.g. whether you run gcc with -O1 or -O3 on the generated C code; in some cases a recent GCC (e.g. GCC 9 or GCC 10 on x86-64 for Linux) is capable of tail-call optimizations; check by compiling with gcc -O3 -S -fverbose-asm and then looking at the produced assembler code). If you accept some small target-processor- and compiler-specific tricks, this is doable. Study the implementation of the OCaml compiler.
Send me (to basile#starynkevitch.net) an email for discussion. Please mention the URL of your question in it.
If you want to have an efficient generational copying GC with multi-threading, things become extremely tricky. The question is then how many years of development can you afford spending.
If you have exceptions in your language, also take great care. You could, with great caution, generate calls to longjmp.
See of course this answer of mine.
With transpiling techniques, the devil is in the details
On Linux (specifically!) see also my manydl.c program. It demonstrates that on a Linux x86-64 laptop you can, in practice, generate hundreds of thousands of dlopen(3)-ed plugins. Then read How to Write Shared Libraries
Study also the implementation of SBCL and of GNU Prolog, at least for inspiration.
PS. The dream of a totally architecture-neutral and operating-system independent transpiler is an illusion.
I have read this, which says:
For example, it is common for programs that are written primarily in C to contain portions that are in an assembly language for optimization of processor efficiency.
I have never seen a program written primarily in C that also contains assembly code, at least not directly as source code - only their example, the Linux kernel.
Is this statement true and if so, how could it possibly optimize processor efficiency?
Isn't C code just translated into assembly code by the compiler?
No, it's not true. I'd estimate that less than 1% of C programmers even know how to program in assembly, and the need to use it is very rare. It's generally only needed for very special applications, such as some parts of an OS kernel or programming embedded systems, because they need to perform machine operations that don't have corresponding C code (such as directly manipulating CPU registers). In earlier days some programmers would use it for performance-critical sections of code, but compiler optimizations have improved significantly, and CPUs have gotten faster, so this is rarely needed now. It might still be used in the built-in libraries, so that functions like strcpy() will be as fast as possible. But application programmers almost never have to resort to assembly.
Isn't C code just translated into assembly code by the compiler?
Yes, but...
There are situations where you may want to access a specific register or other platform-specific location, and Standard C doesn't provide good ways to do that. If you want to look at a status word or load/read a data register directly, then you often need to drop down to the assembler level.
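For instance, with the GCC/Clang extended-asm extension (not standard C), reading the x86-64 stack pointer into a C variable looks like the sketch below; any access to a status word or machine-specific register needs this kind of escape hatch:

#include <stdint.h>

/* GCC/Clang extended inline asm, x86-64 only: copy the stack pointer into a
   C variable. Plain standard C has no way to name %rsp at all. */
static inline uintptr_t current_stack_pointer(void) {
    uintptr_t sp;
    __asm__ volatile ("mov %%rsp, %0" : "=r"(sp));
    return sp;
}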
Also, even in this age of very smart optimizing compilers, it's still possible for a human assembly programmer to write assembly code that will out-perform code generated by the compiler. If you need to wring every possible cycle out of your code, you may need to "go manual" for a couple of routines.
I'm totally stuck on this question. I had heard that only 5 to 10 variables can be declared with the register storage class. I wish to know how many register variables can be declared; this is a level of understanding we need when executing programs, otherwise we might get stuck at runtime during execution. Thanks for your answers in advance.
1) Do the registers (available for register variables) vary across different compilers/machines?
2) How many register variables can we declare?
3) What are all these registers (CPU registers, memory registers, general-purpose registers)?
register is no longer a useful keyword in C programs compiled by a recent optimizing C compiler (e.g. recent versions of GCC or Clang/LLVM).
Today, it simply means that a variable declared register cannot be the operand of the unary & address-of operator (notice that register is a storage-class specifier, like auto or static, not a data type like int).
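Assuming a conforming compiler, the only observable effect left is a compile-time error when you try to take the address:

void f(void) {
    register int counter = 0;
    int *p = &counter;      /* error: address of register variable requested */
    (void)p;
}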
In the 1990s, register was an important keyword for compilers.
The compiler (when optimizing) does a pretty good job about register allocation and spilling.
Try compiling your favorite C function with e.g. gcc -Wall -O2 -fverbose-asm -S; you'll get an assembly file suffixed .s and you can look inside it; the compiler does pretty well at register allocation.
Notice that GCC provides language extensions to put a few global (or local) variables in explicit registers. This is rarely useful, and it is target processor and ABI specific.
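For completeness, a hedged sketch of that GCC extension (the register name is target-specific; r12 is just an illustrative callee-saved choice on x86-64, and reserving it globally can pessimize surrounding code):

/* GCC "explicit register variable" extension, x86-64 only in this sketch:
   pin a global variable to a fixed machine register. Rarely a good idea. */
register long fast_counter asm("r12");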
BTW, on desktop or laptop processors, the CPU cache matters a lot more than the registers (see references and hints in this answer to another question).
In C there is no way to explicitly place a variable in a CPU register. You can merely hint the compiler with the register specifier, but there is no point in doing so nowadays, as compilers have sophisticated optimizers that take care of register allocation.
Do not use register. The optimizer will do a better job.
What's the purpose of using assembly language inside a C program? Compilers are able to generate assembly language already. In what cases would it be better to write assembly than C? Is performance a consideration?
In addition to what everyone said: not all CPU features are exposed to C. Sometimes, especially in driver and operating system programming, one needs to explicitly work with special registers and/or commands that are not otherwise available.
Also vector extensions.
That was especially true before the advent of compiler intrinsics. Those alleviate the need for inline assembly somewhat.
One more use case for inline assembly has to do with interfacing C with reflected languages. Specifically, assembly is all but necessary if you need to call a function when its prototype is not known at compile time. In other words, when the quantity and datatypes of that function's arguments are but runtime variables. C variadic functions and the stdarg machinery won't help you in this case - they would help you parse a stack frame, but not build one. In assembly, on the other hand, it's quite doable.
This is not an OS/driver scenario. There are at least two technologies out there - Java's JNI and COM Automation - where this is a must. In case of Automation, I'm talking about the way the COM runtime is marshaling dual interfaces using their type libraries.
I can think of a very crude C alternative to assembly for that, but it'd be ugly as sin. Slightly less ugly in C++ with templates.
Yet another use case: crash/run-time error reporting. For postmortem debugging, you'd want to capture as much of program state at the point of crash as possible (i. e. all the CPU registers), and assembly is a much better vehicle for that than C. Postmortem debugging of crashing native code usually involves staring at the assembly anyway.
Yet another use case - code that is intended for execution in another process without that process' co-operation or knowledge. This is often referred to as "shellcode", but it doesn't have to be shell related. Code like that needs to be very carefully written, and it can't rely on the conveniences of a high level language (like the run time library, or having a data section) that are normally taken for granted. When one is after injecting a significant piece of functionality into a target process, they usually end up loading a dynamic library, but the initial trampoline code that loads the library and passes control to it tends to be in assembly.
I've been only covering cases where assembly is necessary. Hand-optimizing for performance is covered in other answers.
There are a few, although not many, cases where hand-optimized assembly language can be made to run more efficiently than assembly language generated by C compilers from C source code. Also, for developers used to assembly language, some things can just seem easier to write in assembler.
For these cases, many C compilers allow inline assembly.
However, this is becoming increasingly rare as C compilers get better and better at producing efficient code, and most platforms put restrictions on the low-level kinds of software that often benefit most from being written in assembler.
In general, it is performance, but performance of a very specific kind. For example, the SIMD parallel instructions of a processor might not be generated by the compiler. By using processor-specific data formats and then issuing processor-specific parallel instructions (e.g. ARM NEON or Intel SSE), very fast performance on graphics or signal-processing problems can be achieved. Even then, some compilers allow these to be expressed in C using intrinsic functions.
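As an example of the intrinsics route, here is a minimal SSE sketch for x86/x86-64 (it assumes n is a multiple of 4 and the pointers are 16-byte aligned); each intrinsic maps to one SIMD instruction without writing assembly by hand:

#include <xmmintrin.h>   /* SSE intrinsics */

void add_packed(float *dst, const float *a, const float *b, int n) {
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_load_ps(a + i);            /* load 4 floats */
        __m128 vb = _mm_load_ps(b + i);
        _mm_store_ps(dst + i, _mm_add_ps(va, vb)); /* add and store 4 at once */
    }
}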
While it used to be common to use assembly language inserts to hand-optimize critical functions, those days are largely done. Modern compilers are very good and modern processors have very complicated timing requirements so hand optimized code is often less optimal than expected.
There are various reasons to write inline assembly in C. We can simply categorize the reasons as necessary and unnecessary.
The unnecessary reasons might be:
platform compatibility
performance concerns
code optimization
etc.
I consider the above unnecessary because they can sometimes be discarded or implemented in pure C. For example, for platform compatibility you could implement a separate version for each platform, though using inline assembly might reduce the effort. We are not going to say much about the unnecessary reasons here.
The necessary reasons might be:
something the standard libraries are insufficient to do
an instruction set not supported by the compiler
object code being generated incorrectly
writing stack-sensitive code
etc.
These reasons are considered necessary because they are almost impossible to achieve in pure C. For example, in old DOS systems, the software interrupt INT 21h was not reentrant. If you wanted to write a virtual drive that fully used the INT 21h support provided by the compiler, it was impossible. In that situation, you needed to hook the original INT 21h and make it reentrant. However, the compiled code wraps every call with a prologue/epilogue, so you could never break out of those restrictions without crashing the code. You can try all sorts of tricks in pure C with libraries; but even if you find one that works, that only means you found a particular pattern of machine code the compiler happens to generate, which is to say you are trying to get the compiler to compile your code into exact machine code. So why not just write the inline assembly directly?
This example covers all of the necessary reasons above except the unsupported instruction set, but I think that one is easy to imagine.
In fact, there are more reasons to write inline assembly, but now you have some idea of them.
Just as a curiosity, I'm adding here a concrete example of something not-so-low-level that you can only do in assembly. I read this in an assembly book from my university days, where it was used to show an inherent limitation of C/C++ and how to overcome it with assembly.
The problem is: how do I invoke a function when the exact number of parameters is only known at runtime? In fact, in C/C++ you can easily define a function that takes a variable number of arguments, like printf. But when it comes to calling that function, the compiler wants to know exactly how many parameters must be passed. You may pass more parameters than required; that won't do any harm. But what if the number grows unexpectedly to 100 or 1000 parameters, and must be picked out of an array?
The solution of course is using assembly, where you can dynamically create a stack frame of the proper size, copy the parameters on the stack, invoke the function, and finally reset the stack.
In practice, this would hardly ever be a limitation (except if the library you're using is really, really badly designed). People who use assembly in C have much better reasons to do so, as others have pointed out in their answers. Still, I think it may be an interesting fact to know.
I would rather think of it as a way to write very specific code for a specific platform; optimization, though still common, is used less nowadays. Knowledge and use of assembly in C is also practiced by hats of all colors.
Is there a way to know the access specifier used by the compiler in C?
For example:
In the case of register variables, it all depends on the compiler to decide whether a variable's access specifier ends up being auto or register. Is there a way to dynamically know which access specifier is chosen by the compiler?
You are mixing up the specification level of the language and the realization of your program in machine code. The two uses of the term "register" here are only loosely related.
The wording of the keyword register is just confusing; it is a misnomer. register only implies that you are not allowed to take the address of such a variable. Whether your compiler realizes a variable on the stack and addresses it directly, or stores it in a CPU register, is nothing stable that you can rely upon; it will change between compilers, compiler versions, and optimization levels.
As others said, you can read the assembler output to find out for a particular compilation, if you are interested in micro-optimization, but in general it is nothing you should even worry about.
You could take the address of the variable and get a hint depending on the architecture. But this approach would probably force the compiler to allocate the variable in memory instead of a register.
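A tiny illustration of why that hint is self-defeating: the moment you print the address, the compiler has to give the variable a memory location anyway.

#include <stdio.h>

int main(void) {
    int x = 42;
    /* Taking &x forces x into memory, so the printed address only tells you
       where x ended up *because* its address was taken. */
    printf("&x = %p\n", (void *)&x);
    return 0;
}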
Compile the C module to assembly and read that. Be aware that some compilers may perform whole-program optimization just before linking, so even the assembler output isn't 100% reliable.