I have a question about a potential problem with multiple weak symbols; the question is from my textbook:
One module A:
int x;
int y;
p1() {...}
the other module B:
double x;
p2() {...}
and my textbook says that 'write to x in p2 might overwrite y'
I can kind of see the textbook's point (double x is twice the size of int x, and int y is placed right after int x, which is where the problem comes from), but I'm still lost in the details. I know that when there are multiple weak symbols the linker just picks one of them, so my question is: which module's x must the linker choose for a write to x in p2 to overwrite y?
This is my understanding: if the linker chooses the int x of module A, that causes the problem, because then x and y are each 4 bytes, while p2 (whose compiled image contains a movq instruction, compared with movl in p1) will change 8 bytes and therefore overwrite y.
But my instructor said that only if the linker chooses the double x of module B will y be overwritten. How come? Am I correct, or is my instructor?
According to ISO C, the program invokes undefined behavior. An external name which is used must have exactly one definition somewhere in the program.*
"Weak symbols" are a concept in some dynamic library systems like ELF on GNU/Linux. That terminology does not apply here. A linker which allows multiple definitions of an external symbol is said to be implementing the "relaxed ref/def" model. This term comes from section 6.1.2.2 of the ANSI C rationale.
If we regard the relaxed ref/def model as a documented language extension, then the multiple definitions of a name become locally defined behavior. However, what if they are inconsistently typed? That is almost certainly undefined by the reasoning that the situation resembles bad type aliasing. It is possible that if one module has int x; int y; and the other has double x, that a write through the double x alias will clobber y. This isn't something you can portably rely on. It's a very poor way to obtain an aliasing effect on purpose; you want to use a union between two structures or some such.
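If you want an aliasing effect on purpose, a hedged sketch of the union-of-structures approach mentioned above might look like this (all of the names here are invented for illustration):

/* shared.h -- hypothetical header; both modules include it, agree on one
 * type for the shared object, and exactly one module defines it, so the
 * linker never sees inconsistently typed definitions. */
union shared_view {
    struct { int x, y; } as_ints;    /* the view module A wants */
    struct { double x; } as_double;  /* the view module B wants */
};

extern union shared_view shared;     /* defined in exactly one module */

Writing one member and then reading another is still only as portable as your platform's layout rules, but at least every translation unit agrees on a single definition.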
Now about "weak symbols": those are external names in shared libraries that can be overridden by alternative definitions. For instance, most of the functions in the GNU C library on a GNU/Linux system are weak symbols. A program can define its own read function to replace the POSIX one. The library itself will not break no matter how read is redefined; when it needs to call read, it doesn't use the weak symbol read but some internal alias like __libc_read.
This mechanism is important; it allows the library to conform to ISO C. A strictly conforming ISO C program is allowed to use read as an external name.
* In the ISO C99 standard, this was given in 6.9 External Definitions: "If an identifier declared with external linkage is used in an expression (other than as part of the operand of a sizeof operator whose result is an integer constant), somewhere in the entire program there shall be exactly one external definition for the identifier; otherwise, there shall be no more than one."
Related
I'm new to C and have read that each function may only be defined once, but I can't seem to reconcile this with what I'm seeing in the console. For example, I am able to overwrite the definition of printf without an error or warning:
#include <stdio.h>

extern int printf(const char *__restrict__ format, ...) {
    putchar('a');
}

int main() {
    printf("Hello, world!");
    return 0;
}
So, I tried looking up the one-definition rule in the standard and found Section 6.9 (5) on page 155, which says (emphasis added):
An external definition is an external declaration that is also a definition of a function (other than an inline definition) or an object. If an identifier declared with external linkage is used in an expression [...], somewhere in the entire program there shall be exactly one external definition for that identifier; otherwise, there shall be no more than one.
My understanding of linkage is very shaky, so I'm not sure if this is the relevant clause or what exactly is meant by "entire program". But if I take "entire program" to mean all the stuff in <stdio.h> + my source file, then shouldn't I be prohibited from redefining printf in my source file since it has already been defined earlier in the "entire program" (i.e. in the stdio bit of the program)?
My apologies if this question is a dupe, I couldn't find any existing answers.
The C standard does not define what happens if there is more than one definition of a function.
… shouldn't I be prohibited…
The C standard has no jurisdiction over what you do. It specifies how C programs are interpreted, not how humans may behave. Although some of its rules are written using “shall,” this is not a command to the programmer about what they may or may not do. It is a rhetorical device for specifying the semantics of C programs. C 2018 4 2 tells us what it actually means:
If a “shall” or “shall not” requirement that appears outside of a constraint or runtime-constraint is violated, the behavior is undefined…
So, when you provide a definition of printf and the standard C library provides a definition of printf, the C standard does not specify what happens. In common practice, several things may happen:
The linker uses your printf. The printf in the library is not used.
The compiler has built-in knowledge of printf and uses that in spite of your definition of printf.
If your printf is in a separate source module, and that module is compiled and inserted into a library, then which printf the program uses depends on the order the libraries are specified to the linker.
While the C standard does not define what happens if there are multiple definitions of a function (or an external symbol in general), linkers commonly do. Ordinarily, when a linker processes a library file, its behavior is:
Examine each module in the library. If the module defines a symbol that is referenced by a previously incorporated object module but not yet defined, then include that module in the output the linker is building. If the module does not define any such symbol, do not use it.
Thus, for ordinary functions, the behavior of multiple definitions that appear in library files is defined by the linker, even though it is not defined by the C standard. (There can be complications, though. Suppose a program uses cos and sin, and the linker has already included a module that defines cos when it finds a library module that defines both sin and cos. Because the linker has an unresolved reference to sin, it includes this library module, which brings in a second definition of cos, causing a multiple-definition error.)
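A sketch of that complication, assuming a hypothetical math library whose single object module defines both sin and cos (real libraries may split them up differently):

/* mycos.c -- our own cos; this object file is on the link line, so the
 * linker incorporates its definition of cos first. */
double cos(double x)
{
    return 1.0;   /* deliberately trivial, just to make the override visible */
}

/* main.c -- uses both sin and cos. */
#include <stdio.h>

double sin(double);
double cos(double);

int main(void)
{
    /* sin is still unresolved when the linker scans the library, so it pulls
     * in the library module that defines sin -- and, in this hypothetical
     * layout, a second definition of cos comes along with it, producing a
     * multiple-definition error at link time. */
    printf("%f %f\n", sin(0.0), cos(0.0));
    return 0;
}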
Although the linker behavior may be well defined, this still leaves the issue that compilers have built-in knowledge about the standard library functions. Consider this example. Here, I added a second printf, so the program has:
printf("Hello, world!");
printf("Hello, world!\n");
The program output is “aHello, world!\n”. This shows the program used your definition for the first printf call but used the standard behavior for the second printf call. The program behaves as if there are two different printf definitions in the same program.
Looking at the assembly language shows what happens. For the second call, the compiler decided that, since printf("Hello, world!\n"); is printing a string with no conversion specifications and ending with a new-line character, it can use the more-efficient puts routine instead. So the assembly language has call puts for the second printf. The compiler cannot do this for the first printf because it does not end with a new-line character, which puts automatically adds.
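For reference, a minimal reconstruction of the program being described (the exact source is not shown here, so treat this as an approximation):

#include <stdio.h>

/* Our replacement printf, as in the question above. */
extern int printf(const char *__restrict__ format, ...)
{
    putchar('a');
    return 1;
}

int main()
{
    printf("Hello, world!");     /* calls our printf: prints "a" */
    printf("Hello, world!\n");   /* the compiler emits call puts instead:
                                    prints "Hello, world!" plus a newline */
    return 0;
}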
Please be aware of the difference between a declaration and a definition. The two terms are completely different.
stdio.h only provides the declaration. Therefore, when you declare/define the function in your own file, it is fine as long as the prototype matches.
You are free to define it in your source file, and if such a definition is available, the final program will link to yours instead of the one in the library.
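For example (my_function is just an invented name to show the difference):

/* Declaration: announces the name and type; this is roughly what a header
 * such as stdio.h provides for printf. */
int my_function(const char *s);

/* Definition: supplies the body; the "entire program" may contain only one
 * of these for a given external name. */
int my_function(const char *s)
{
    return s != 0;
}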
I'm trying to understand when gcc's builtin functions are used. In the following code, both gcc's sqrt() and my custom sqrt() are invoked when I compile without -fno-builtin. Can someone explain what is going on?
Also, I know the list of gcc's builtin functions is at https://gcc.gnu.org/onlinedocs/gcc/Other-Builtins.html and realize the recommended way around these types of problems is to just rename the conflicting function. Is there a gcc output option/warning that will show when a custom function is named the same as a builtin or when a builtin is used instead of the custom function?
#include <stdio.h>

double sqrt(double);

int main(void)
{
    double n;

    n = 2.0;
    printf("%f\n", sqrt(n));
    printf("%f\n", sqrt(2.0));
    return 0;
}

double sqrt(double x)
{
    printf("my_sqrt ");
    return x;
}
Running after compiling with gcc -o my_sqrt my_sqrt.c, the output is:
my_sqrt 2.000000
1.414214
Running after compiling with gcc -fno-builtin -o my_sqrt my_sqrt.c, the output is:
my_sqrt 2.000000
my_sqrt 2.000000
It's not the case that two different sqrt functions are called at runtime. The call to sqrt(2.0) happens at compile time, which is legal because 2.0 is a constant and sqrt is a standard library function, so the compiler knows its semantics. And the compiler is allowed to assume that you are not breaking the rules. We'll get around to what that means in a minute.
At runtime, there is no guarantee that your sqrt function will be called for sqrt(n), but it might be. GCC uses your sqrt function unless you declare n to be const double; Clang goes ahead and does the computation at compile time because it can figure out what n contains at that point. Both of them will use the built-in sqrt function (unless you specify -fno-builtin) for an expression whose value cannot be known at compile time. But that doesn't necessarily mean that they will issue code to call a function; if the machine has a reliable SQRT opcode, the compiler could choose to just emit it rather than emitting a function call.
The C standard gives compilers a lot of latitude here, because it only requires the observable behaviour of a program to be consistent with the results specified by the semantics in the standard, and furthermore it only requires that to be the case if the program does not exhibit undefined behaviour. So the compiler is basically free to do any computation it wants at compile-time, provided that a program without undefined behaviour would produce the same result. [Note 1].
Moreover, the definition of "the same result" is also a bit loose for floating point computations, because the standard semantics do not prevent computations from being done with more precision than the data types can theoretically represent. That may seem innocuous, but in some cases a computation with extra precision can produce a different result after rounding. So if during compilation a compiler can compute a more accurate intermediate result than would result from the code it would have generated for run-time computation of the same expression, that's fine as far as the standard is concerned. (And most of the time, it will be fine for you, too. But there are exceptions.)
To return to the main point, it still seems surprising that the compiler, which knows that you have redefined the sqrt function, can still use the built-in sqrt function in its compile-time computation. The reason is simple (although often ignored): your program is not valid. It exhibits undefined behaviour, and when your program has undefined behaviour, all bets are off.
The undefined behaviour is specified in §7.1.3 of the standard, which concerns Reserved Identifiers. It supplies a list of reserved identifiers, which really are reserved, whether the compiler you happen to be using warns you about that or not. The list includes the following, which I'll quote in full:
All identifiers with external linkage in any of the following subclauses (including the future library directions) and errno are always reserved for use as identifiers with external linkage.
The "following subclauses" at point contain the list of standard library functions, all of which have external linkage. Just to nail the point home, the standard continues with:
If the program declares or defines an identifier in a context in which it is reserved (other than as allowed by 7.1.4), the behavior is undefined. [Note 2]
You have declared sqrt as an externally-visible function, and that's not permitted whether or not you include math.h. So you're in undefined behaviour territory, and the compiler is perfectly entitled to not worry about your definition of the sqrt function when it is doing compile-time computation. [Note 3]
(You could try to declare your sqrt implementation as static in order to avoid the restriction on externally-visible names. That will work with recent versions of GCC; it allows the static declaration to override the standard library definition. Clang, which is more aggressive about compile-time computations, still uses the standard definition of sqrt. And a quick test with MSVC (on godbolt.org) seems to indicate that it just outright bans redefinition of the standard library function.)
So what if you really really want to write sqrt(x) for your own definition of sqrt? The standard does give you an out: since sqrt is not reserved for macro names, you can define it as a macro which is substituted by the name of your implementation [Note 4], at least if you don't #include <math.h>. If you do include the header, then this is probably not conformant, because in that case the identifiers are reserved as well for macro names [Note 5].
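A sketch of that macro workaround, with my_sqrt_impl as an invented name and <math.h> deliberately not included:

#include <stdio.h>

double my_sqrt_impl(double x)
{
    printf("my_sqrt ");
    return x;
}

/* Every later use of sqrt(...) now expands to my_sqrt_impl(...). */
#define sqrt(x) my_sqrt_impl(x)

int main(void)
{
    printf("%f\n", sqrt(2.0));   /* expands to my_sqrt_impl(2.0) */
    return 0;
}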
Notes
That liberty is not extended to integer constant expressions, with the result that a compiler cannot turn strlen("Hello") into the constant value 5 in a context where an integer constant expression is required. So this is not legal:
switch (i) {
case strlen("Hello"):
    puts("world");
    break;
default:
    break;
}
But this will probably not call strlen six times (although you shouldn't count on that optimisation, either):
/* Please don't do this. Calling strlen on every loop iteration
 * blows up linear-time loops into quadratic time monsters, which is
 * an open invitation for someone to do a denial-of-service attack
 * against you by supplying a very long string.
 */
for (int i = 0; i < strlen("Hello"); ++i) {
    putchar("world"[i]);
}
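If you don't want to rely on that optimisation, hoist the call yourself (assuming <string.h> is included for strlen):

size_t len = strlen("Hello");   /* computed once, outside the loop */
for (size_t i = 0; i < len; ++i) {
    putchar("world"[i]);
}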
Up to the current C standard, this statement was paragraph 2 of §7.1.3. In the C23 draft, though, it has been moved to paragraph 8 of §6.4.2.1 (the lexical rules for identifiers). There are some other changes to the restrictions on reserved identifiers (and a large number of new reserved identifiers), but that doesn't make any difference in this particular case.
In many instances of undefined behaviour, the intent is simply to let the compiler avoid doing extra sanity checks. Instead, it can just assume that you didn't break the rules, and do whatever it would otherwise do.
Please don't use the name _sqrt, even though it will probably work. Names starting with underscores are all reserved, by the same §7.1.3. If the name starts with two underscores or an underscore followed by a capital letter, it is reserved for all uses. Other identifiers starting with an underscore are reserved for use at file scope (both as a function name and as a struct tag). So don't do that. If you want to use underscores to indicate that the name is somehow internal to your code, put them at the end of the identifier rather than at the beginning.
Standard headers may also define the names of standard library functions as function-like macros, possibly in order to substitute a different reserved name, known to the compiler, which causes the generation of inline code, perhaps using special-purpose machine opcodes. Regardless, the standard requires that the functions exist, and it allows you to #undef the macros in order to guarantee that the actual function will be used. But it doesn't explicitly allow the names to be redefined.
I just realized when I define a function in C and use it, I can either use it and define the function later or define it and use it later. For example,
int mult (int x, int y)
{
    return x * y;
}

int main()
{
    int x;
    int y;
    scanf( "%d", &x );
    scanf( "%d", &y );
    printf( "The product of your two numbers is %d\n", mult( x, y ) );
}
and
int main()
{
    int x;
    int y;
    scanf( "%d", &x );
    scanf( "%d", &y );
    printf( "The product of your two numbers is %d\n", mult( x, y ) );
}

int mult (int x, int y)
{
    return x * y;
}
will both run just fine. However, in Python, the second version would fail, since Python requires mult(x, y) to be defined before you can call it, and Python executes from top to bottom (as far as I know). Obviously, that can't be the case in C, since the second one runs just fine. So how does C code actually flow?
Well, the second code is not valid C, strictly speaking.
It relies on your compiler's flexibility in allowing an implicit declaration of a function, which has been disallowed by the C standard since C99.
The C11 standard explicitly mentions the removal in its "Foreword":
Major changes in the second edition included:
...
remove implicit function declaration
You have to either:
forward declare the function (see the sketch after this list), or
define the function before its use (like snippet 1).
You can also enable the relevant warning in your compiler, and it will then produce a message to let you know about this problem.
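For example, a minimal version of the second snippet with a forward declaration of mult:

#include <stdio.h>

/* Forward declaration (prototype): the compiler now knows mult's parameter
 * and return types before it sees any call to it. */
int mult(int x, int y);

int main(void)
{
    printf("The product is %d\n", mult(6, 7));
    return 0;
}

int mult(int x, int y)
{
    return x * y;
}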
As others have noted, routines should be declared before they are used, although they do not need to be defined before they are used. Additionally, older versions of C allowed some implicit declaration of routines, and some compilers still do, although this is largely archaic now.
As to how C is able to support calling functions before they are defined: C programs are first translated into some executable format, and only afterwards is the program executed.
During translation, a C compiler reads, analyzes, and processes the entire program. Any references to functions that have not yet been defined are recorded as things that need to be resolved in the program. In the process of preparing a final executable file, a linker goes through all of the processed data, finds the function definitions, and resolves the references by inserting the addresses (or other information) of the called routines.
Most commonly, a C compiler translates source code into an object module. The object module contains machine-language instructions for the program. It also contains any data for the program that is defined in the source code, and it contains information about unresolved references that the compiler found while analyzing the source code. Multiple source files may be translated separately into multiple object modules. Sometimes these translations are done by different people at different times. A company might produce a software library which is the result of translating their source files into object modules and packaging them into a library file. Then a software developer would compile their own source files and link the resulting object modules with the object modules in the library.
Multiple object modules can be linked together to make an executable file. This is a file that the operating system is able to load into memory and execute.
For the second snippet you should use a forward declaration. That means declaring the function first, so that the compiler knows you will be using it. Right now your code is only accepted because of your C compiler's flexibility.
Python is not flexible in this way: names are resolved at run time, so the call fails if the function has not been defined by the time it is reached.
You said both programs work just fine. Well, they don't. The second snippet produces an error with my compiler, and even if it happens to compile, it should not be relied on.
It violates the C standard's rules.
Declaring functions up front is helpful when many user-defined functions are needed.
I was reading the book "Compilers: Principles, Techniques, and Tools (2nd Edition)" by Alfred V. Aho. There is an example in this book (example 1.7) which asks to analyze the scope of x in the following macro definition in C:
#define a (x+1)
From this example,
We cannot resolve x statically, that is, in terms of the program text.
In fact, in order to interpret x, we must use the usual dynamic-scope
rule. We examine all the function calls that are currently active, and
we take the most recently called function that has a declaration of x.
It is to this declaration that the use of x refers.
I've become confused reading this - as far as I know, macro substitution happens in the preprocessing stage, before compilation starts. But if I get it right, the book says it happens when the program is getting executed. Can anyone please clarify this?
The macro itself has no notion of scope, at least not in the same sense as the C language has. Wherever the symbol a appears in the source after the #define (and before a possible #undef) it is replaced by (x + 1).
But the text talks about the scope of x, the symbol in the macro substitution. That is interpreted by the usual C rules. If there is no symbol x in the scope where a was substituted, this is a compilation error.
The macro is not self-contained. It uses a symbol external to the macro, some kind of global variable if you will, but one whose meaning will change according to the place in the source text where the macro is invoked. I think what the quoted text wants to say is that we cannot know what the macro a does unless we know where it is invoked.
I've become confused reading this - as far as I know, macro substitution happens in the preprocessing stage, before compilation starts.
Yes, this is how a compiler works.
But if I get it right, the book says it happens when the program is getting executed. Can anyone please clarify this?
Speaking without referring to the book, there are other forms of program analysis besides translating source code to object code (a.k.a. compilation). A C compiler replaces macros before compiling, thus losing information about what was originally a macro, because that information is not significant to the rest of the translation process. The question of the scope of x within the macro never comes up, so the compiler may ignore the issue.
Debuggers often implement tighter integration with source code, though. One could conceive of a debugger that points at subexpressions while stepping through the program (I have seen this feature in an embedded toolchain), and furthermore points inside macros which generate expressions (this I have never seen, but it's conceivable). Or, some debuggers allow you to point at any identifier and see its value. Pointing at the macro definition would then require resolving the identifiers used in the macro, as Aho et al discuss there.
It's difficult to be sure without seeing more context from the book, but I think that passage is at least unclear, and probably incorrect. It's basically correct about how macro definitions work, but not about how the name x is resolved.
#define a (x+1)
C macros are expanded early in the compilation process, in translation phase 4 of 8, as specified in N1570 5.1.1.2. Variable names aren't resolved until phase 7.
So the name x will be meaningfully visible to the compiler, not at the point where the macro is defined, but at the point in the source code where the macro a is used. Two different uses of the a macro could refer to two different declarations of variables named x.
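A small illustration of that point:

#define a (x+1)

int f(void)
{
    int x = 10;
    return a;        /* expands to (x+1); this x is f's local int, so 11 */
}

double g(void)
{
    double x = 2.5;
    return a;        /* same macro, but here x is g's local double: 3.5 */
}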
We cannot resolve x statically, that is, in terms of the program text.
We cannot resolve it at the point of the macro definition.
In fact, in order to interpret x, we must use the usual dynamic-scope
rule. We examine all the function calls that are currently active, and
we take the most recently called function that has a declaration of x.
It is to this declaration that the use of x refers.
This is not correct for C. When the compiler sees a reference to x, it must determine what declaration it refers to (or issue a diagnostic if there is no such declaration). That determination does not depend on currently active function calls, something that can only be determined at run time. C is statically scoped, meaning that the appropriate declaration of x can be determined entirely by examining the program text.
At compile time, the compiler will examine symbol table entries for the current block, then for the enclosing block, then for the current function (x might be the name of a parameter), then for file scope.
There are languages that use dynamic scoping, where the declaration a name refers to depends on the current run-time call stack. C is not one of them.
Here's an example of dynamic scoping in Perl (note that this is considered poor style):
#!/usr/bin/perl
use strict;
use warnings;
no strict "vars";
sub inner {
    print " name=\"$name\"\n";
}

sub outer1 {
    local($name) = "outer1";
    print "outer1 calling inner\n";
    inner();
}

sub outer2 {
    local($name) = "outer2";
    print "outer2 calling inner\n";
    inner();
}

outer1();
outer2();
The output is:
outer1 calling inner
name="outer1"
outer2 calling inner
name="outer2"
A similar program in C would be invalid, since the declaration of name would not be statically visible in the function inner.
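To get an equivalent effect in valid C you would pass the value explicitly (or use a file-scope variable); a minimal sketch:

#include <stdio.h>

/* inner cannot see its callers' locals, so the value is passed in. */
void inner(const char *name)
{
    printf(" name=\"%s\"\n", name);
}

void outer1(void)
{
    printf("outer1 calling inner\n");
    inner("outer1");
}

void outer2(void)
{
    printf("outer2 calling inner\n");
    inner("outer2");
}

int main(void)
{
    outer1();
    outer2();
    return 0;
}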
In my college days I read about the auto keyword and in the course of time I actually forgot what it is. It is defined as:
defines a local variable as having a
local lifetime
I have never seen it being used anywhere. Is it really used, and if so, where and in which cases?
If you'd read the IAQ (Infrequently Asked Questions) list, you'd know that auto is useful primarily to define or declare a vehicle:
auto my_car;
A vehicle that's consistently parked outdoors:
extern auto my_car;
For those who lack any sense of humor and want "just the facts Ma'am": the short answer is that there's never any reason to use auto at all. The only time you're allowed to use auto is with a variable that already has auto storage class, so you're just specifying something that would happen anyway. Attempting to use auto on any variable that doesn't have the auto storage class already will result in the compiler rejecting your code. I suppose if you want to get technical, your implementation doesn't have to be a compiler (but it is) and it can theoretically continue to compile the code after issuing a diagnostic (but it won't).
Small addendum by kaz:
There is also:
static auto my_car;
which requires a diagnostic according to ISO C. This is correct, because it declares that the car is broken down. The diagnostic is free of charge, but turning off the dashboard light will cost you eighty dollars. (Twenty or less, if you purchase your own USB dongle for on-board diagnostics from eBay).
The aforementioned extern auto my_car also requires a diagnostic, and for that reason it is never run through the compiler, other than by city staff tasked with parking enforcement.
If you see a lot of extern static auto ... in any code base, you're in a bad neighborhood; look for a better job immediately, before the whole place turns to Rust.
auto is a modifier like static. It defines the storage class of a variable. However, since the default for local variables is auto, you don't normally need to manually specify it.
This page lists different storage classes in C.
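In other words, inside a function these two declarations mean exactly the same thing:

void example(void)
{
    auto int i = 0;   /* explicit auto storage class */
    int j = 0;        /* identical: block-scope variables are auto by default */
    (void)i;
    (void)j;
}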
The auto keyword is useless in the C language. It is there because before the C language there existed a B language in which that keyword was necessary for declaring local variables. (B was developed into NB, which became C).
Here is the reference manual for B.
As you can see, the manual is rife with examples in which auto is used. This is so because there is no int keyword. Some kind of keyword is needed to say "this is a declaration of a variable", and that keyword also indicates whether it is a local or external (auto versus extrn). If you do not use one or the other, you have a syntax error. That is to say, x, y; is not a declaration by itself, but auto x, y; is.
Since code bases written in B had to be ported to NB and to C as the language was developed, the newer versions of the language carried some baggage for improved backward compatibility that translated to less work. In the case of auto, the programmers did not have to hunt down every occurrence of auto and remove it.
It's obvious from the manual that the now obsolescent "implicit int" cruft in C (being able to write main() { ... } without any int in front) also comes from B. That's another backward compatibility feature to support B code. Functions do not have a return type specified in B because there are no types. Everything is a word, like in many assembly languages.
Note how a function can just be declared extrn putchar, and then the only thing that makes it a function is how that identifier is used: it appears in a function call expression like putchar(x), and that is what tells the compiler to treat that typeless word as a function pointer.
In C, auto is a keyword that indicates a variable is local to a block. Since that's the default for block-scoped variables, it's unnecessary and very rarely used (I don't think I've ever seen it used outside of examples in texts that discuss the keyword). I'd be interested if someone could point out a case where the use of auto was required to get a correct parse or behavior.
However, in the C++11 standard the auto keyword has been 'hijacked' to support type inference, where the type of a variable can be taken from the type of its initializer:
auto someVariable = 1.5; // someVariable will have type double
Type inference was added mainly to support declaring variables in templates, or variables that hold values returned from template functions, where types that depend on a template parameter (or are deduced by the compiler when a template is instantiated) can be quite painful to write out manually.
With the old Aztec C compiler, it was possible to turn all automatic variables to static variables (for increased addressing speed) using a command-line switch.
But variables explicitly declared with auto were left as-is in that case. (A must for recursive functions which would otherwise not work properly!)
The auto keyword is similar to semicolons in Python: it was required by a previous language (B), but developers realized it was redundant because most things were auto.
I suspect it was left in to help with the transition from B to C. In short, one use is B language compatibility.
For example in B and 80s C:
/* The following function will print a non-negative number, n, to
   the base b, where 2<=b<=10. This routine uses the fact that
   in the ASCII character set, the digits 0 to 9 have sequential
   code values. */
printn(n, b) {
    extern putchar;
    auto a;

    if (a = n / b)       /* assignment, not test for equality */
        printn(a, b);    /* recursive */
    putchar(n % b + '0');
}
auto can only be used for block-scoped variables. extern auto int is rubbish, because the compiler can't decide whether to use an external definition or to override the extern with an auto definition (auto and extern also imply entirely different storage durations, just as static auto int is obviously rubbish too). The compiler could always choose to interpret it one way, but instead it treats it as an error.
There is one feature that auto does provide: it enables the "everything is an int" rule inside a function. Outside of a function, a=3 is interpreted as a definition int a = 3, because assignments don't exist at file scope. Inside a function, however, a=3 is an error, because the compiler apparently always interprets it as an assignment to an external variable rather than as a definition, even if there is no extern int a forward declaration in the function or at file scope. A specifier such as static, const, volatile or auto implies that it is a definition, and the compiler takes it as one; auto is the specifier that has none of the side effects of the others. auto a=3 is therefore implicitly auto int a = 3. Admittedly, signed a = 3 has the same effect, and unsigned a = 3 is always an unsigned int.
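A sketch of that rule; because it depends on the old "implicit int" behaviour, expect it to compile only under a pre-C99 dialect (for example gcc -std=c89), and usually with a warning even then:

int f(void)
{
    auto a = 3;       /* implicit int: treated as auto int a = 3; */
    signed b = 3;     /* signed on its own already means signed int */
    unsigned c = 3;   /* always an unsigned int */
    return a + b + (int)c;
}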
Also note 'auto has no effect on whether an object will be allocated to a register (unless some particular compiler pays attention to it, but that seems unlikely)'
The auto keyword is a storage-class specifier (a storage class is what decides a variable's lifetime and where it is stored). A variable declared with it has a lifetime that extends only within the enclosing curly braces:
{
    auto int x = 8;
    printf("%d", x);   // here x is 8
    {
        auto int x = 3;
        printf("%d", x);   // here x is 3
    }
    printf("%d", x);   // here x is 8
}
I am sure you are familiar with the storage class specifiers in C, which are "extern", "static", "register" and "auto".
The definition of "auto" is pretty much given in the other answers, but here is a possible use of the "auto" keyword that I am not sure about; I think it is compiler-dependent.
You see, with respect to storage class specifiers, there is a rule: we cannot use multiple storage class specifiers for one variable. That is why a static global variable cannot also be declared extern; it is known only to its own file.
When you go into your compiler settings, you can enable an optimization flag for speed. One of the ways the compiler optimizes is that it looks for variables without storage class specifiers and then, based on the availability of cache memory and some other factors, decides whether to treat that variable as if it had the register specifier. Now, what if we want to optimize our code for speed while knowing that a specific variable in our program is not very important, and we don't want the compiler to even consider it for a register? I thought that by writing auto, the compiler would be unable to add the register specifier to the variable, since typing "register auto int a;" or "auto register int a;" raises the error of using multiple storage class specifiers.
To sum it up, I thought auto could prohibit the compiler from treating a variable as register during optimization.
This theory did not work with the GCC compiler; however, I have not tried other compilers.