I am trying to print a particularly large value after reading it from the console. When I print it in two different ways, once from a variable assigned the result of strtol and once directly from the return value of the strtol call, I get different output! Can someone please explain why I am seeing two different outputs?
Input value is: 4294967290
Here is the code snippet.
long int test2 = strtol(argv[1], &string, 10);
printf("the strtol value is %ld\n", test2);
printf("the strtol function value is %ld\n", strtol(argv[1], &string, 10));
Output
the strtol value is -6
the strtol function value is 4294967290
You need to do two things:
1. Add the line #include <stdlib.h> at the beginning of your program. That will cause strtol to be declared. (You also need #include <stdio.h> in order to declare printf.)
2. Add -Wall to your compiler flags (if you are using gcc or clang). That will cause the compiler to tell you that you need to declare strtol, and it might even suggest which header to include.
What is going on is that you haven't declared strtol, with the result that the compiler assumes that it returns an int.
Since 4294967290 is 2^32 - 6, it is the 32-bit two's-complement representation of -6. Because the compiler assumes that strtol returns an int, the code it produces only looks at the low-order 32 bits. Since you are assigning the value to a long, the compiler needs to emit code which sign-extends the int. In other words, it takes the low-order 32 bits as though they were a signed integer (which would be -6) and then widens that -6 to a long.
In the second call to printf, the return value of strtol is inserted in the printf argument list without conversion. You tell printf that the argument is a long (by using the l flag in %ld), and by luck the entire 64 bits are in the argument list, so printf reads them out as a long and prints the "correct" value.
Needless to say, all this is undefined behaviour, and the actual output you are seeing is in no way guaranteed; it just happens to work that way with your compiler on your architecture. On some other compiler or on some other architecture, things might have been completely different, including the bit-length of int and long. So the above explanation, although possibly interesting, is of no practical value. Had you simply included the correct header and thereby told the compiler the real return type of strtol, you would have gotten the output you expected.
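For reference, a minimal complete version of the corrected program might look like this (the main wrapper, the end-pointer name, and the argument check are additions for illustration):

#include <stdio.h>   /* declares printf */
#include <stdlib.h>  /* declares strtol */

int main(int argc, char *argv[])
{
    char *end;

    if (argc < 2)
        return 1;

    long test2 = strtol(argv[1], &end, 10);
    printf("the strtol value is %ld\n", test2);
    return 0;
}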
Related
My question involves the memory layout and mechanics behind the C printf() function. Say I have the following code:
#include <stdio.h>
int main()
{
    short m_short;
    int m_int;
    m_int = -5339876;
    m_short = m_int;
    printf("%x\n", m_int);
    printf("%x\n", m_short);
    return 0;
}
On GCC 7.5.0 this program outputs:
ffae851c
ffff851c
My question is, where is the ffff actually coming from in the second hex number? If I'm correct, those fs should be outside the bounds of the short, but printf is getting them from somewhere.
When I properly format with specifier %hx, the output is rightly:
ffae851c
851c
As far as I have studied, the compiler simply truncates the top half of the number, as shown in the second output. So in the first output, is the program actually reading the leading four fs from memory it shouldn't touch? Or does the C compiler behind the scenes still reserve a full, sign-extended integer even for a short, with the high half being undefined behavior if used?
Note: I am performing research, in a real-world application, I would never try to abuse the language.
When a char or short (including signed and unsigned versions) is used as a function argument where there is no specific type (as with the ... arguments to printf(format,...))1, it is automatically promoted to an int (assuming it is not already as wide as an int2).
So printf("%x\n", m_short); has an int argument. What is the value of that argument? In the assignment m_short = m_int;, you attempted to assign it the value −5339876 (represented with bytes 0xffae851c). However, −5339876 will not fit in this 16-bit short. In assignments, a conversion is automatically performed, and, when a conversion of an integer to a signed integer type does not fit, the result is implementation-defined. It appears your implementation, as many do, uses two’s complement and simply takes the low bits of the integer. Thus, it puts the bytes 0x851c in m_short, representing the value −31460.
Recall that this is being promoted back to int for use as the argument to printf. In this case, it fits in an int, so the result is still −31460. In a two’s complement int, that is represented with the bytes 0xffff851c.
Now we know what is being passed to printf: An int with bytes 0xffff851c representing the value −31460. However, you are printing it with %x, which is supposed to receive an unsigned int. With this mismatch, the behavior is not defined by the C standard. However, it is a relatively minor mismatch, and many C implementations let it slide. (GCC and Clang do not warn even with -Wall.)
Let’s suppose your C implementation does not treat printf as a special known function and simply generates code for the call as you have written it, and that you later link this program with a C library. In this case, the compiler must pass the argument according to the specification of the Application Binary Interface (ABI) for your platform. (The ABI specifies, among other things, how arguments are passed to functions.) To conform to the ABI, the C compiler will put the address of the format string in one place and the bits of the int in another, and then it will call printf.
The printf routine will read the format string, see %x, and look for the corresponding argument, which should be an unsigned int. In every C implementation and ABI I know of, an int and an unsigned int are passed in the same place. It may be a processor register or a place on the stack. Let’s say it is in register r13. So the compiler designed your calling routine to put the int with bytes 0xffff851c in r13, and the printf routine looked for an unsigned int in r13 and found bytes 0xffff851c.
So the result is that printf interprets the bytes 0xffff851c as if they were an unsigned int, formats them with %x, and prints “ffff851c”.
Essentially, you got away with this because (a) a short is promoted to an int, which is the same size as the unsigned int that printf was expecting, and (b) most C implementations are not strict about mismatching integer types of the same width with printf. If you had instead tried printing an int using %ld, you might have gotten different results, such as “garbage” bits in the high bits of the printed value. Or you might have a case where the argument you passed is supposed to be in a completely different place from the argument printf expected, so none of the bits are correct. In some architectures, passing arguments incorrectly could corrupt the stack and break the program in a variety of ways.
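As a well-defined counterpart to the code in the question, here is a sketch (assuming a 16-bit short and a 32-bit two's-complement int, as on the asker's system) that produces the same two outputs with matching specifier/argument pairs:

#include <stdio.h>

int main(void)
{
    int m_int = -5339876;         /* bytes 0xffae851c */
    short m_short = (short)m_int; /* implementation-defined: keeps the low 16 bits, 0x851c (-31460) */

    /* With the casts, each argument's type matches its specifier, so the calls are well defined. */
    printf("%x\n", (unsigned int)m_short);    /* ffff851c: sign-extended in the conversion */
    printf("%hx\n", (unsigned short)m_short); /* 851c: explicitly narrowed to 16 bits */
    return 0;
}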
Footnotes
1 This automatic promotion happens in many other expressions too.
2 There are some technical details regarding these automatic integer promotions that need not concern us at the moment.
I am unable to deduce the internal happenings inside the machine when we print data using format specifiers.
I was trying to understand the concept of signed and unsigned integers and found the following:
unsigned int b=-12;
printf("%d\n",b); //prints -12
printf("%u\n\n",b); //prints 4294967284
I am guessing that b actually stores the binary version of -12 as 11111111111111111111111111110100.
So, since b is unsigned, b technically stores 4294967284.
But still, the format specifier %d causes the binary value of b to be printed as its signed version, i.e. -12.
However,
printf("%f\n",2); //prints 0.000000
printf("%f\n",100); //prints 0.000000
printf("%d\n",3.2); //prints 2147483639
printf("%d\n",3.1); //prints 2147483637
I kind of expected the 2 to be printed as 2.000000 and the 3.2 to be printed as 3, as per type-conversion norms.
Why does this not happen and what exactly takes place at machine level ?
Mismatching the format specifier and the argument type (like using the floating-point specifier "%f" to print an int value) leads to undefined behavior.
Remember that 2 is an integer value, and variadic functions (like printf) don't really know the types of their arguments. The printf function has to rely on the format specifier and assume the argument is of the specified type.
To better understand how you get the results you get, to understand "the internal happenings", we first must make two assumptions:
The system uses 32 bits for the int type
The system uses 64 bits for the double type
Now what happens with
printf("%f\n",2); //prints 0.000000
is that the printf function sees the "%f" specifier and fetches the next argument as a 64-bit double value. Since the int value you provided in the argument list is only 32 bits, half of the bits in the double value will be unknown. The printf function will then print the (invalid) double value. If you're unlucky, some of the unknown bits might make the value a trap value, which can cause a crash.
Similarly with
printf("%d\n",3.2); //prints 2147483639
the printf function fetches the next argument as a 32-bit int value, losing half of the bits of the 64-bit double value provided as the actual argument. Exactly which 32 bits are copied into the internal int value depends on endianness. Integers don't have trap values, so no crash happens; just an unexpected value is printed.
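For comparison, here is a sketch of the well-defined equivalents, where each argument type actually matches its specifier (with an explicit conversion where needed):

#include <stdio.h>

int main(void)
{
    printf("%f\n", 2.0);         /* a double literal: prints 2.000000 */
    printf("%f\n", (double)100); /* explicit conversion: prints 100.000000 */
    printf("%d\n", (int)3.2);    /* conversion truncates toward zero: prints 3 */
    return 0;
}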
what exactly takes place at machine level ?
The stdio.h functions are quite far from the machine level. They provide a standardized abstraction layer on top of various OS API. Whereas "machine level" would refer to the generated assembler. The behavior you experience is mostly related to details of the C language rather than the machine.
On the machine level there exist no signed numbers; everything is treated as raw binary data. The compiler can turn raw binary data into a signed number by using an instruction that tells the CPU: "use what's stored at this location and treat it as a signed number", specifically as a two's complement signed number on all common computers. But this is irrelevant when explaining why your code misbehaves.
The integer constant 12 is of type int. When we write -12 we apply the unary - operator on that. The result is still of type int but now of value -12.
Then you attempt to store this negative number in an unsigned int. This triggers an implicit conversion to unsigned int, which should be carried out according to the C standard:
Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.
The maximum value of a 32-bit unsigned int is 2^32 - 1 = 4294967295. "One more than the maximum" is therefore 2^32 = 4294967296. If we calculate -12 + 4294967296 we get 4294967284. This is in the range of an unsigned int and is the result you see later.
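The same arithmetic, spelled out in a small sketch (assuming a 32-bit unsigned int, as on the asker's system):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* -12 is converted by adding UINT_MAX + 1 = 4294967296 (with 32-bit unsigned int). */
    unsigned int b = -12;

    printf("%u\n", b);        /* 4294967284 */
    printf("%u\n", UINT_MAX); /* 4294967295, i.e. 2^32 - 1 */
    return 0;
}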
Now as it happens, the printf family of functions is very unsafe. If you provide a format specifier which doesn't match the type, they might crash or display the wrong result: the program invokes undefined behavior.
So when you use %d or %i, which are reserved for signed int, but pass an unsigned int, anything can happen. "Anything" includes printf reinterpreting the bits of what you passed as the type the format specifier promises. That's what happened when you used %d.
When you pass values of types that completely mismatch the format specifier, the program just prints gibberish, because you are still invoking undefined behavior.
I kind of expected the 2 to be printed as 2.000000 and the 3.2 to be printed as 3, as per type-conversion norms.
The reason the printf family can't do anything intelligent, like assuming that 2 should be converted to 2.0, is that they are variadic (variable-argument) functions, meaning they can take any number of arguments. To make that possible, the parameters are essentially passed as raw binary through something called a va_list, and all type information is lost. The printf implementation is therefore left with no type information except the format string you gave it. This is why variadic functions are so unsafe to use.
Unlike a regular function which has more type safety - if you declare void foo (float f) and pass the integer constant 2 (type int), it will attempt to implicitly convert from integer to float, and perhaps also give a conversion warning.
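To illustrate how type information is lost, here is a hypothetical minimal variadic function; like printf with its format string, the callee recovers each argument only by the type the caller promises:

#include <stdarg.h>
#include <stdio.h>

/* Sums 'count' arguments, trusting the caller that all of them are int.
 * va_arg never verifies the type; a double in the list would be misread. */
static int sum_ints(int count, ...)
{
    va_list ap;
    int total = 0;

    va_start(ap, count);
    for (int i = 0; i < count; i++)
        total += va_arg(ap, int); /* the type is asserted here, not checked */
    va_end(ap);
    return total;
}

int main(void)
{
    printf("%d\n", sum_ints(3, 1, 2, 3)); /* prints 6 */
    return 0;
}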
The behaviors you observe are the result of printf interpreting the bits given to it as the type specified by the format specifier. In particular, at least for your system:
The bits for an int argument and an unsigned argument in the same position within the argument list would be passed in the same place, so when you give printf one and tell it to format the other, it uses the bits you give it as if they were the bits of the other.
The bits for an int argument and a double argument would be passed in different places—possibly a general register for the int argument and a special floating-point register for the double argument, so when you give printf one and tell it to format the other, it does not get the bits for the double to use for the int; it gets completely unrelated bits that were left lying around by previous operations.
Whenever a function is called, values for its arguments must be placed in certain places. These places vary according to the software and hardware used, and they vary by the type and number of arguments. However, for any particular argument type, argument position, and specific software and hardware used, there is a specific place (or combination of places) where the bits of that argument should be stored to be passed to the function. The rules for this are part of the Application Binary Interface (ABI) for the software and hardware being used.
First, let us neglect any compiler optimization or transformation and examine what happens when the compiler implements a function call in source code directly as a function call in assembly language. The compiler will take the arguments you provide for printf and write them to the places designated for those types of arguments. When printf executes, it examines the format string. When it sees a format specifier, it figures out what type of argument it should have, and it looks for the value of that argument in the place for that type of argument.
Now, there are two things that can happen. Say you passed an unsigned but used a format specifier for int, like %d. In every ABI I have seen, an unsigned and an int argument (in the same position within the list of arguments) are passed in the same place. So, when printf looks for the bits of the int it expects, it will get the bits of the unsigned you passed.
Then printf will interpret those bits as if they encoded the value for an int, and it will print the results. In other words, the bits of your unsigned value are reinterpreted as the bits of an int.1
This explains why you see “-12” when you pass the unsigned value 4,294,967,284 to printf to be formatted with %d. When the bits 11111111111111111111111111110100 are interpreted as an unsigned, they represent the value 4,294,967,284. When they are interpreted as an int, they represent the value −12 on your system. (This encoding system is called two’s complement. Other encoding systems include one’s complement and sign-and-magnitude, in which these bits would represent −11 and −2,147,483,636, respectively. Those systems are rare for plain integer types these days.)
That is the first of the two things that can happen, and it is common when the type you pass is similar in size and nature to the type that is expected: it is passed in the same place. The second thing that can happen is that the argument you pass is placed somewhere different from where the expected argument would go. For example, if you pass a double as an argument, it is, on many systems, placed in a separate set of registers for floating-point values. When printf goes looking for an int argument for %d, it will not find the bits of your double at all. Instead, what it finds in the place where it looks for an int argument might be whatever bits happened to be left in a register or memory location from previous operations, or it might be the bits of the next argument in the list. In any case, the value printf prints for the %d will have nothing to do with the double value you passed, because the bits of the double are not involved in any way; a completely different set of bits is used.
This is also part of the reason the C standard says it does not define the behavior when the wrong argument type is passed for a printf conversion. Once you have messed up the argument list by passing double where an int should have been, all the following arguments may be in the wrong places too. They might be in different registers from where they are expected, or they might be in different stack locations from where they are expected. printf has no way to recover from this mistake.
As stated, all of the above neglects compiler optimization. The rules of C arose out of various needs, such as accommodating the problems above and making C portable to a variety of systems. However, once those rules are written, compilers can take advantage of them to allow optimization. The C standard permits a compiler to make any transformation of a program as long as the changed program has the same behavior as the original program under the rules of the C standard. This permission allows compilers to speed up programs tremendously in some circumstances. But a consequence is that, if your program has behavior not defined by the C standard (and not defined by any other rules the compiler follows), it is allowed to transform your program into anything. Over the years, compilers have grown increasingly aggressive about their optimizations, and they continue to grow. This means, aside from the simple behaviors described above, when you pass incorrect arguments to printf, the compiler is allowed to produce completely different results. Therefore, although you may commonly see the behaviors I describe above, you may not rely on them.
Footnote
1 Note that this is not a conversion. A conversion is an operation whose input is one type and whose output is another type but has the same value (or as nearly the same as is possible, in some sense, as when we convert a double 3.5 to an int 3). In some cases, a conversion does not require any change to the bits—an unsigned 3 and an int 3 use the same bits to represent 3, so the conversion does not change the bits, and the result is the same as a reinterpretation. But they are conceptually different.
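A short sketch of the distinction the footnote draws, assuming two's complement; memcpy is used as the portable way to reinterpret bits:

#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned int u = 4294967284u;

    /* Conversion: an operation on the value; out-of-range results for signed
     * targets are implementation-defined (commonly -12 on two's complement). */
    int converted = (int)u;

    /* Reinterpretation: the bits are copied unchanged into an int object. */
    int reinterpreted;
    memcpy(&reinterpreted, &u, sizeof reinterpreted);

    printf("%d %d\n", converted, reinterpreted); /* typically: -12 -12 */
    return 0;
}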
Here is my program:
#include <stdio.h>
int main()
{
    int a = 0x09;
    int b = 0x10;
    unsigned long long c = 0x123456;
    printf("%x %llx\n", a, b, c); /* in "%llx", l is a lowercase 'L', not the digit 1 */
    return 0;
}
the output was:
9 12345600000010
I want to know:
how is the printf() function executed?
what happens if the number of arguments doesn't match the number of format specifiers?
Please use this program as an example in your explanation.
The problem is that your types don't match. This is undefined behavior.
Your second argument b does not match the type of the format. So what's happening is that printf() is reading past the 4 bytes holding b (printf is expecting an 8-byte operand, but b is only 4 bytes). Therefore you're getting junk. The 3rd argument isn't printed at all since your printf() only has 2 format codes.
Since the arguments are usually passed consecutively (and adjacent) in memory, the 4 extra bytes that printf() is reading are actually the lower 4 bytes of c.
So in the end, the second number that's being printed is equal to b + ((c & 0xffffffff) << 32).
But I want to reiterate: this behavior is undefined. It's just that most systems today behave like this.
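For reference, a well-defined version of the call gives every argument a specifier matching its type; the casts to unsigned int are added here to make the %x conversions strictly correct:

#include <stdio.h>

int main(void)
{
    int a = 0x09;
    int b = 0x10;
    unsigned long long c = 0x123456;

    /* Three specifiers for three arguments, each matching its argument's type. */
    printf("%x %x %llx\n", (unsigned int)a, (unsigned int)b, c); /* prints: 9 10 123456 */
    return 0;
}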
If the arguments that you pass to printf don't match the format specification then you get undefined behavior. This means that anything can happen and you cannot reason about the results that you happen to see on your specific system.
In your case, %llx requires an argument of type unsigned long long but you supplied an int. This alone causes undefined behaviour.
It is not an error to pass more arguments to printf than there are format specifiers; the excess arguments are evaluated but ignored.
printf() walks through the arguments one at a time, as directed by the format string. If the format string calls for more arguments than were actually passed, printf() will output data from unknown memory locations. But if more arguments are passed than the format string consumes, no harm is done. Compilers such as gcc will warn you if the format specifiers and the arguments don't match.
Greetings. Today, while experimenting with C under the C99 standard, I came across a problem which I cannot comprehend, and I need an expert's help.
The Code:
#include <stdio.h>
int main(void)
{
    int Fnum = 256; /* The first number to be printed out */
    printf("The number %d in long long specifier is %lld\n", Fnum, Fnum);
    return 0;
}
The Question:
1.) This code prompts a warning message when I compile it.
2.) But the strange thing is, when I change the specifier %lld to %hd or %ld, the warning message is not shown and the value printed on the console is the correct digit, 256. Everything also seems normal if I try %u, %hu, or %lu. In short, the warning message and the wrong printed digit only happen when I use a variation of the long long specifier.
3.) Why is this happening? I thought the memory size of long long is large enough to hold the value 256, so why can it not be used to print out the appropriate value?
The Warning Message :(For the above source code)
C:\Users\Sam\Documents\Pelles C Projects\Test1\Test.c(7): warning #2234: Argument 3 to 'printf' does not match the format string; expected 'long long int' but found 'int'.
Thanks for spending time reading my question. God bless.
You're passing the Fnum variable to printf, which is typed int, but it's expecting long long. This has very little to do with whether a long long can hold 256, just that the variable you chose is typed int.
If you just want to print 256, you can get a constant that's typed to unsigned long long as follows:
printf("The number %d in long long specifier is %lld\n" ,256 , 256ULL);
or cast:
printf("The number %d in long long specifier is %lld\n" , Fnum , (long long int)Fnum);
There are three things going on here.
printf takes a variable number of arguments. That means the compiler doesn't know what type the arguments (beyond the format string) are supposed to be. So it can't convert them to an appropriate type.
For historical reasons, however, integer types smaller than int are "promoted" to int when passed in a variable argument list.
You appear to be using Windows. On Windows, int and long are the same size, even when pointers are 64 bits wide (this is a willful violation of C89 on Microsoft's part - they actually forced the standard to be changed in C99 to make it "okay").
The upshot of all this is: the compiler is not allowed to convert your int to a long long just because you used %lld in the format string. (It is allowed to warn you that you forgot the cast, because warnings are outside standard behavior.) With %lld, therefore, your program doesn't work. But if you use any other size specifier, printf winds up looking for an argument the same size as int, and it works.
When dealing with a variadic function, the caller and callee need some way of agreeing on the types of the variable arguments. In the case of printf, this is done via the format string. GCC is clever enough to read the format string itself and work out whether printf will interpret the arguments in the same way they have actually been provided.
You can get away with slightly different argument types in some cases. For example, if you pass a short, it gets implicitly converted to an int. And when sizeof(int) == sizeof(long int), there is also no distinction. But sizeof(int) != sizeof(long long int) here, so the argument fails to match the format string in that case.
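A quick way to check the size relationships this answer relies on is a sketch like the following (the sizes printed vary by platform):

#include <stdio.h>

int main(void)
{
    /* Typical results: 4 4 8 on 64-bit Windows, 4 8 8 on 64-bit Linux. */
    printf("%zu %zu %zu\n", sizeof(int), sizeof(long), sizeof(long long));
    return 0;
}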
This is due to the way varargs work in C. Unlike a normal function, printf() can take any number of arguments. It is up to the programmer to tell printf() what to expect by providing a correct format string.
Internally, printf() uses the format specifiers to access the raw memory that corresponds to the input arguments. If you specify %lld, it will try to access a 64-bit chunk of memory (on Windows) and interpret what it finds as a long long int. However, you've only provided a 32-bit argument, so the result would be undefined (it will combine your 32-bit int with whatever random garbage happens to appear next on the stack).
I have an unsigned long long that I use to track volume. The volume is incremented by another unsigned long long. Every 5 seconds I print this value out, and when the value reaches the 32-bit unsigned maximum, printf gives me a negative value. The code snippet follows:
unsigned long long vol, vold;
char voltemp[10];
vold = 0;
Later...
while (TRUE) {
vol = atoi(voltemp);
vold += vol;
fprintf(fd2, "volume = %llu);
}
What am I doing wrong? This runs under RedHat 4 (2.6.9-78.0.5.ELsmp), gcc version 3.4.5.
Since you say it prints a negative value, there must be something else wrong, apart from your use of atoi instead of strtoull. A %llu format specifier just doesn't print a negative value.
It strongly looks like the problem is the fprintf call. Check that you included stdio.h and that the argument list is indeed what is in the source code.
Well, I can't really tell because your code has syntax errors, but here is a guess:
vol = atoi(voltemp);
atoi converts ASCII to integer. You might want to try atol, but that only gets you a long, not a long long.
Your C standard library MIGHT have atoll.
You can't use atoi if the number can exceed the bounds of signed int.
EDIT: atoll (which is apparently standard), as suggested, is another good option. Just keep in mind that limits you to signed long long. Actually, the simplest option is strtoull, which is also standard.
Are you sure fprintf can take a long long as a parameter, rather than a pointer to one? It looks like it is converting your long long to an int before passing it in.
I'd guess the problem is that printf is not handling %llu the way you think it is.
It's probably taking only 32 bits off the stack, not 64.
%llu is only standard since C99. Maybe your compiler likes %LU better?
For clarification the fprintf statement was copied incorrectly (my mistake, sorry). The fprintf statement should actually read:
fprintf(fd2, "volume = %llu\n", vold);
Also, while admittedly sloppy, the maximum length of the array voltemp is 9 bytes (digits), which is well within the limits of a 32-bit integer.
When I pull this code out of the program it is part of and run it in a test program, I get the result I would expect, which is puzzling.
If voltemp is ever really big, you'll need to use strtoull, not atoi.
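A sketch of the strtoull approach with basic error checking (the buffer contents and the error handling here are illustrative assumptions, not the asker's actual code):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char voltemp[32] = "4294967290"; /* example input; the real program fills this elsewhere */
    char *end;
    unsigned long long vol;

    errno = 0;
    vol = strtoull(voltemp, &end, 10);
    if (end == voltemp || errno == ERANGE) {
        fprintf(stderr, "bad volume value: %s\n", voltemp);
        return 1;
    }

    printf("volume = %llu\n", vol);
    return 0;
}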