Printing the wrong value of an unsigned int variable in C

I have written this small program using C:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main()
{
unsigned int un_i = 1112;
printf ("%d and %d", (1 - un_i), (1 - un_i)/10);
return 0;
}
My expectation is: "-1111 and -111"
But my result is: "-1111 and 429496618"
I don't know why it prints 429496618 instead of -111. Please explain it to me.
I use gcc 4.4.7 on CentOS with kernel 2.6.32.
Thank you very much!

That is because un_i is of type unsigned int, which cannot represent negative values. If you expect the result to be negative, you will need a signed type such as int. Try this:
unsigned int un_i = 1112;
printf ("PRINTF: un_i[%d] and u_i/10[%d]\n", (1 - un_i), (1 - (int)un_i)/10);

You expect it to print -1111 and -111. However, 1 - un_i produces a result of type unsigned int, which is always non-negative. If unsigned int is 32 bits wide (as it is in your case), the result will be 4294966185, and that divided by 10 gives 429496618.
The %d conversion expects a (signed) int, not an unsigned int. The C11 standard says that passing a variable argument of the wrong type is undefined behaviour, except when
one type is a signed integer type, the other type is the corresponding unsigned integer type, and the value is representable in both types
Thus printing 429496618 with %d has defined behaviour, as this same value is representable as both signed int and unsigned int.
However, 1 - 1112U has the value UINT_MAX - 1110, which, when passed to printf and converted with %d, leads to undefined behaviour, since UINT_MAX - 1110 is not representable as a signed int; getting -1111 printed just happens to be the (undefined) behaviour in this case.
Since you really want signed numbers, you should declare your variable un_i as an int instead of unsigned int.
If you expect to do signed math with numbers larger than int can hold, use long int, long long int, or better yet, a type such as int64_t, instead of this unsigned trickery.
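Putting that advice together, a minimal corrected version of the program might look like this (a sketch, assuming the goal is ordinary signed arithmetic; int64_t is one possible choice):
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    int64_t n = 1112;  /* signed, and wide enough for large values */
    /* 1 - n and (1 - n)/10 are now ordinary signed operations */
    printf("%" PRId64 " and %" PRId64 "\n", 1 - n, (1 - n) / 10);
    return 0;
}
This prints "-1111 and -111", as originally expected.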

Your issue comes from signed/unsigned conversion.
1. In the case of unsigned int un_i = 1112;
(1 - un_i) = 0xfffffba9 /* most significant bit is 1 */
Even though the most significant bit is 1, un_i is unsigned, so that bit is treated as an ordinary value bit, and it becomes 0 after the division:
(1 - un_i)/10 = 0x1999992a /* most significant bit is 0 */
printf ("%d", (1 - un_i)/10); /* 0x1999992a = 429496618 because the sign bit is 0 */
2. In the case of int sig_i = 1112, the most significant bit is treated as the sign bit, and it stays 1 (negative) after the division by 10:
(1 - sig_i) = 0xfffffba9 /* most significant bit is 1 */
(1 - sig_i)/10 = 0xffffff91 /* most significant bit is 1 */
printf ("%d", (1 - sig_i)/10); /* 0xffffff91 = -111 because the sign bit is 1 */
Run this code to see the detailed result:
#include <stdio.h>

int main(void)
{
    unsigned int un_i = 1112;
    int sig_i = 1112;
    printf ("Unsigned\n%d [hex: %x]\n", (1 - un_i), (1 - un_i));
    printf ("and %d [hex: %x]\n", (1 - un_i)/10, (1 - un_i)/10);
    printf ("Signed\n%d [hex: %x]\n", (1 - sig_i), (1 - sig_i));
    printf ("and %d [hex: %x]\n", (1 - sig_i)/10, (1 - sig_i)/10);
    return 0;
}
Result
Unsigned
-1111 [hex: fffffba9]
and 429496618 [hex: 1999992a]
Signed
-1111 [hex: fffffba9]
and -111 [hex: ffffff91]

Related

When the R.H.S. has a negative int and an unsigned int outside the range of int in an arithmetic operation

I apologize for the title since I had to somehow find a unique one.
Consider the code below:
#include<stdio.h>
int main(void)
{
int b = 2147483648; // To show the maximum value of int type here is 2147483647
printf("%d\n",b);
unsigned int a = 2147483650;
unsigned int c = a+(-1);
printf("%u\n",c);
}
The output of the above program when run on a 64-bit OS with the gcc compiler is:
-2147483648
2147483649
Please see my understanding of the case:
unsigned int a is outside the range of the signed int type. On the R.H.S., (-1) will be converted to unsigned int since the operands are of different types. The result of converting -1 to unsigned int is:
-1 + (UINT_MAX + 1) = UINT_MAX = 4294967295
Now the R.H.S. will be:
UINT_MAX + 2147483650
This looks like it is outside the range of unsigned int. I do not know how to proceed from here, and it looks like even if I continue with this explanation I will not reach the observed output.
Please give a proper explanation.
P.S.: Understanding how int b = 2147483648 became -2147483648 is not my goal; I just added that line so it is clear that 2147483650 is outside the range of int.
2147483648 does not fit in a 32-bit int; it is just above INT_MAX, whose value is 2147483647 on such platforms.
int b = 2147483648; is therefore implementation-defined. On your platform, it seems to perform 32-bit wrap-around, which is typical of two's complement architectures but not guaranteed by the C Standard.
As a consequence printf("%d\n", b); outputs -2147483648.
The rest of the code is perfectly defined on 32-bit systems, and the output 2147483649 is correct and expected. The fact that the OS is 64-bit plays a very subtle role in the evaluation steps, but it is mostly irrelevant to the actual result, which is fully defined by the C Standard.
Here are steps:
unsigned int a = 2147483650;: no surprise here. a is an unsigned int and its initializer is an int, a long int or a long long int, depending on which of these types has at least 32 value bits. On Windows and 32-bit Linux it would be long long int, whereas on 64-bit Linux it would be long int. The value is converted to unsigned int upon storing into the variable (2147483650 fits, so it is unchanged).
You can verify these steps by adding this code:
printf("sizeof(2147483650) -> %d\n", (int)sizeof(2147483650));
printf(" sizeof(a) -> %d\n", (int)sizeof(a));
The second definition unsigned int c = a+(-1); undergoes the same steps:
c is defined as an unsigned int and its initializer is truncated to 32 bits when stored into c. The initializer is an addition:
the first term is an unsigned int with value 2147483650U.
the second term is a parenthesized expression with the unary negation of an int with value 1. Hence it is an int with value -1 as you correctly analyzed.
the second term is converted to unsigned int: the conversion is performed modulo 2^32, hence the value is 4294967295U.
the addition is then performed using unsigned arithmetic, which is specified as taking place modulo 2^32 (one more than the maximum value of unsigned int); hence the result is an unsigned int with value 2147483649U (6442450945 modulo 2^32).
This unsigned int value is stored into c and prints correctly with printf("%u\n", c); as 2147483649.
If the expression had instead been 2147483650 + (-1), the computation would have taken place in 64-bit signed arithmetic, with type long int or long long int depending on the architecture, with a result of 2147483649. This value would then be converted to unsigned int when stored into c, hence the same value 2147483649 for c.
Note that the above steps do not depend on the actual representation of negative values. They are fully defined for all architectures, only the width of type int matters.
You can verify these steps with extra code. Here is a complete instrumented program to illustrate these steps:
#include <limits.h>
#include <stdio.h>
int main(void) {
printf("\n");
printf(" sizeof(int) -> %d\n", (int)sizeof(int));
printf(" sizeof(unsigned int) -> %d\n", (int)sizeof(unsigned int));
printf(" sizeof(long int) -> %d\n", (int)sizeof(long int));
printf(" sizeof(long long int) -> %d\n", (int)sizeof(long long int));
printf("\n");
int b = 2147483647; // To show the maximum value of int type here is 2147483647
printf(" int b = 2147483647;\n");
printf(" b -> %d\n",b);
printf(" sizeof(b) -> %d\n", (int)sizeof(b));
printf(" sizeof(2147483647) -> %d\n", (int)sizeof(2147483647));
printf(" sizeof(2147483648) -> %d\n", (int)sizeof(2147483648));
printf(" sizeof(2147483648U) -> %d\n", (int)sizeof(2147483648U));
printf("\n");
unsigned int a = 2147483650;
printf(" unsigned int a = 2147483650;\n");
printf(" a -> %u\n", a);
printf(" sizeof(2147483650U) -> %d\n", (int)sizeof(2147483650U));
printf(" sizeof(2147483650) -> %d\n", (int)sizeof(2147483650));
printf("\n");
unsigned int c = a+(-1);
printf(" unsigned int c = a+(-1);\n");
printf(" c -> %u\n", c);
printf(" sizeof(c) -> %d\n", (int)sizeof(c));
printf(" a+(-1) -> %u\n", a+(-1));
printf(" sizeof(a+(-1)) -> %d\n", (int)sizeof(a+(-1)));
#if LONG_MAX == 2147483647
printf(" 2147483650+(-1) -> %lld\n", 2147483650+(-1));
#else
printf(" 2147483650+(-1) -> %ld\n", 2147483650+(-1));
#endif
printf(" sizeof(2147483650+(-1)) -> %d\n", (int)sizeof(2147483650+(-1)));
printf(" 2147483650U+(-1) -> %u\n", 2147483650U+(-1));
printf("sizeof(2147483650U+(-1)) -> %d\n", (int)sizeof(2147483650U+(-1)));
printf("\n");
return 0;
}
Output:
sizeof(int) -> 4
sizeof(unsigned int) -> 4
sizeof(long int) -> 8
sizeof(long long int) -> 8
int b = 2147483647;
b -> 2147483647
sizeof(b) -> 4
sizeof(2147483647) -> 4
sizeof(2147483648) -> 8
sizeof(2147483648U) -> 4
unsigned int a = 2147483650;
a -> 2147483650
sizeof(2147483650U) -> 4
sizeof(2147483650) -> 8
unsigned int c = a+(-1);
c -> 2147483649
sizeof(c) -> 4
a+(-1) -> 2147483649
sizeof(a+(-1)) -> 4
2147483650+(-1) -> 2147483649
sizeof(2147483650+(-1)) -> 8
2147483650U+(-1) -> 2147483649
sizeof(2147483650U+(-1)) -> 4
int b = 2147483648;
printf("%d\n",b);
// -2147483648
Conversion of an integer (any signed or unsigned) that is outside the range of the target signed type:
... either the result is implementation-defined or an implementation-defined signal is raised. (C11 §6.3.1.3 ¶3)
In your case with the signed integer 2147483648, the implementation-defined behavior appears to map the lowest 32 bits of the source 2147483648 to your int's 32 bits. This may not be the result with another compiler.
a+(-1) is the same as a + (-(1u)), which is the same as a + (-1u + UINT_MAX + 1u), which is a + UINT_MAX. The addition exceeds the unsigned range, but unsigned arithmetic wraps around, so the sum is 2147483649 before the assignment. With the code below there is no out-of-range conversion. The only conversions are signed 1 to unsigned 1, and long 2147483650 (or long long 2147483650) to unsigned 2147483650; both are in-range conversions.
unsigned int a = 2147483650;
unsigned int c = a+(-1);
printf("%u\n",c);
// 2147483649
Look at it like this
  2147483650    0x80000002
+         -1   +0xFFFFFFFF
  ----------    ----------
  2147483649    0x80000001
Where does the 0xFFFFFFFF come from? Well, 0 is 0x00000000, and if you subtract 1 from that you get 0xFFFFFFFF because unsigned arithmetic is well-defined to "wrap".
Or taking your decimal version further, 0 - 1 is UINT_MAX because unsigned int wraps, and so does the sum.
your value      2147483650
UINT_MAX      + 4294967295
                ----------
                6442450945
modulo 2^32   % 4294967296
                ----------
                2147483649
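The same arithmetic can be checked directly in code; here is a small sketch (assuming a 32-bit unsigned int, as in the question):
#include <limits.h>
#include <stdio.h>

int main(void)
{
    unsigned int a = 2147483650u;
    /* 0u - 1 wraps to UINT_MAX because unsigned arithmetic is modulo 2^32 */
    printf("0u - 1   = %u (UINT_MAX = %u)\n", 0u - 1u, UINT_MAX);
    /* In a + (-1), the int -1 converts to UINT_MAX and the sum wraps */
    printf("a + (-1) = %u\n", a + (-1));  /* 2147483649 */
    return 0;
}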

Signed/Unsigned int, short and char

I am trying to understand the output of the code given at : http://phrack.org/issues/60/10.html
Quoting it here for reference:
#include <stdio.h>
int main(void){
int l;
short s;
char c;
l = 0xdeadbeef;
s = l;
c = l;
printf("l = 0x%x (%d bits)\n", l, sizeof(l) * 8);
printf("s = 0x%x (%d bits)\n", s, sizeof(s) * 8);
printf("c = 0x%x (%d bits)\n", c, sizeof(c) * 8);
return 0;
}
The output I get on my machine is:
l = 0xdeadbeef (32 bits)
s = 0xffffbeef (16 bits)
c = 0xffffffef (8 bits)
Here is my understanding:
The assignments s = l and c = l will result in s and c getting the last 16 bits (0xbeef) and the last 8 bits (0xef) of l, respectively, and being promoted to ints when passed to printf.
printf tries to interpret each of the above values (l, s and c) as unsigned integers (as %x is passed as the format specifier). From the output I see that sign extension has taken place. My doubt is: since %x represents an unsigned int, why has sign extension taken place while printing s and c? Should the output for s not be 0x0000beef, and for c 0x000000ef?
why has the sign extension taken place while printing s and c
Let's see the following code:
unsigned char ucr8bit; /* Range is 0 to 255 on my machine */
signed char cr8bit;    /* Range is -128 to 127 on my machine */
int i32bit;
cr8bit = -100;   /* bit pattern 0x9C */
i32bit = cr8bit; /* i32bit is -100, i.e. 0xFFFFFF9C */
As you can see, although the number -100 is the same, its representation in a wider type is not a mere matter of prepending 0s: for a signed type, the sign bit (MSB) is replicated into the new high bits, in both two's-complement and one's-complement systems.
In your example you are printing s and c as a wider type, and hence you get the sign-bit replication.
Also, your code contains several sources of undefined and unspecified behavior and thus may give different output on different compilers.
(For instance, you should use signed char instead of char, as char may behave as unsigned char on some implementations and as signed char on others.)
l = 0xdeadbeef; /* Initializing signed l from an unsigned constant that is
                   out of range if int is 32 bits: implementation-defined */
s = l;          /* Implicit conversion from a wider to a narrower type:
                   again implementation-defined for the out-of-range value */
printf("l = 0x%x (%d bits)\n", l, sizeof(l) * 8); /* Using %x to print a
                   signed number and %d to print a size_t */
You are using a 32-bit signed integer. That means that only 31 bits can be used for positive numbers. 0xdeadbeef uses 32 bits. Therefore, assigning it to a 32-bit signed integer makes it a negative number.
When shown with an unsigned conversion specifier, %x, it looks like the negative number that it is (with the sign extension).
When copying it into a short or char, the property of it being a negative number is retained.
To further show this, try setting:
l = 0xef;
The output is now:
l = 0xef (32 bits)
s = 0xef (16 bits)
c = 0xffffffef (8 bits)
0xef needs only 8 bits, so it is a positive value when placed into a 32-bit or 16-bit variable. When you place that 8-bit pattern into a signed 8-bit variable (char), its high bit becomes the sign bit, and you are creating a negative number.
To see the retention of the negative number, try the reverse:
c = 0xef;
s = c;
l = c;
The output is:
l = 0xffffffef (32 bits)
s = 0xffffffef (16 bits)
c = 0xffffffef (8 bits)
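If the goal is to print only the bits the narrow variable actually holds, one option (a sketch, not from the original answers) is to convert to the corresponding unsigned type and use the matching length modifier:
#include <stdio.h>

int main(void)
{
    int l = (int)0xdeadbeefu;  /* out-of-range conversion: implementation-defined */
    short s = (short)l;
    char c = (char)l;
    /* Converting to the matching unsigned type strips the sign extension;
       the h/hh length modifiers describe the original width to printf. */
    printf("s = 0x%hx\n", (unsigned short)s);  /* 0xbeef */
    printf("c = 0x%hhx\n", (unsigned char)c);  /* 0xef */
    return 0;
}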

Unsigned int from 32-bit to 64-bit OS

This code snippet is excerpted from a linux book.
If this is not appropriate to post the code snippet here, please let me know.
I will delete it. Thanks.
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
char buf[30];
char *p;
int i;
unsigned int index = 0;
//unsigned long index = 0;
printf("index-1 = %lx (sizeof %d)\n", index-1, sizeof(index-1));
for(i = 'A'; i <= 'Z'; i++)
buf[i - 'A'] = i;
p = &buf[1];
printf("%c: buf=%p p=%p p[-1]=%p\n", p[index-1], buf, p, &p[index-1]);
return 0;
}
On 32-bit OS environment:
This program works fine whether the data type of index is unsigned int or unsigned long.
On 64-bit OS environment:
The same program runs into a core dump if index is declared as unsigned int.
However, if I only change the data type of index from unsigned int to a) unsigned long or b) unsigned short,
the program works fine too.
The book only tells me that on 64-bit the core dump happens because the value is a (huge) non-negative number, but I have no idea exactly why unsigned long and unsigned short work while unsigned int does not.
What confuses me is that
p + (0u - 1) == p + UINT_MAX when index is unsigned int,
BUT
p + (0ul - 1) == &p[-1] when index is unsigned long.
I am stuck here.
If anyone can help to elaborate the details, it is highly appreciated!
Thank you.
Here are some results from my 32-bit machine (RHEL 5.10 / gcc 4.1.2 20080704) and my 64-bit machine (RHEL 6.3 / gcc 4.4.6 20120305). I am not sure whether the gcc version makes any difference here, so I include that information as well.
On 32 bit:
I tried two changes:
1) Modify unsigned int index = 0 to unsigned short index = 0.
2) Modify unsigned int index = 0 to unsigned char index = 0.
The program can run without problem.
index-1 = ffffffff (sizeof 4)
A: buf=0xbfbdd5da p=0xbfbdd5db p[-1]=0xbfbdd5da
It seems that the data type of index is promoted to a 4-byte int because of the - 1.
On 64 bit:
I tried three changes:
1) Modify unsigned int index = 0 to unsigned char index = 0.
It works!
index-1 = ffffffff (sizeof 4)
A: buf=0x7fffef304ae0 p=0x7fffef304ae1 p[-1]=0x7fffef304ae0
2) Modify unsigned int index = 0 to unsigned short index = 0.
It works!
index-1 = ffffffff (sizeof 4)
A: buf=0x7fff48233170 p=0x7fff48233171 p[-1]=0x7fff48233170
3) Modify unsigned int index = 0 to unsigned long index = 0.
It works!
index-1 = ffffffff (sizeof 8)
A: buf=0x7fffb81d6c20 p=0x7fffb81d6c21 p[-1]=0x7fffb81d6c20
BUT only unsigned int index = 0 runs into the core dump at the last printf:
index-1 = ffffffff (sizeof 4)
Segmentation fault (core dumped)
Do not lie to the compiler!
Passing printf an int where it expects a long (%ld) is undefined behavior.
(Creating a pointer pointing outside any valid object (and not just behind one) is UB too...)
Correct the format specifiers and the pointer arithmetic (that includes indexing as a special case) and everything will work.
UB includes "It works as expected" as well as "Catastrophic failure".
BTW: if you politely ask your compiler for all warnings, it will warn you. Use -Wall -Wextra -pedantic or similar.
One other problem your code has is in the printf():
printf ("index-1 = %lx (sizeof %d)\n", index-1, sizeof(index-1));
Let's simplify:
int i = 100;
printf("%lx", i - 100);
You are telling printf that the argument is a long, but in reality you are sending an int. clang gives the right warning (gcc should emit a similar one). See:
test1.c:6:19: warning: format specifies type 'unsigned long' but the argument has type 'int' [-Wformat]
printf("%lx", i - 100);
~~~ ^~~~~~~
%x
1 warning generated.
The solution is simple: either pass a long to printf, or tell printf to print an int:
printf("%lx", (long)(i-100) );
printf("%x", i-100);
You got lucky on 32-bit and your app did not crash. Porting it to 64-bit revealed a bug in your code, and now you can fix it.
Arithmetic on unsigned values is always defined, in terms of wrap-around. E.g. (unsigned)-1 is the same as UINT_MAX. So an expression like
p + (0u-1)
is equivalent to
p + UINT_MAX
(&p[0u-1] is equivalent to &*(p + (0u-1)) and p + (0u-1)).
Maybe this is easier to understand if we replace the pointers with unsigned integer types. Consider:
uint32_t p32; // say, this is a 32-bit "pointer"
uint64_t p64; // a 64-bit "pointer"
Assuming 16, 32, and 64 bit for short, int, and long, respectively (entries on the same line equal):
p32 + (unsigned short)-1   p32 + USHRT_MAX    p32 + (UINT_MAX>>16)
p32 + (0u-1)               p32 + UINT_MAX     p32 - 1
p32 + (0ul-1)              p32 + ULONG_MAX    p32 + UINT_MAX       p32 - 1
p64 + (0u-1)               p64 + UINT_MAX
p64 + (0ul-1)              p64 + ULONG_MAX    p64 - 1
You can always replace operands of addition, subtraction and multiplication on unsigned types by something congruent modulo the maximum value + 1. For example,
-1 ≡ 0xffffffff (mod 2^32)
(0xffffffff is 2^32 - 1, or UINT_MAX), and also
0xffffffffffffffff ≡ 0xffffffff (mod 2^32)
(for a 32-bit unsigned type, you can always truncate to the least-significant 8 hex digits).
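A quick sketch showing this congruence in code (assuming a 32-bit unsigned int and a 64-bit unsigned long, as on the 64-bit machine above):
#include <stdio.h>

int main(void)
{
    unsigned long big = 0xffffffffffffffffUL;  /* -1 modulo 2^64 */
    unsigned int small = (unsigned int)big;    /* conversion is modulo 2^32 */
    printf("big   = %lx\n", big);    /* ffffffffffffffff */
    printf("small = %x\n", small);   /* ffffffff, i.e. -1 modulo 2^32 */
    printf("0u-1  = %x\n", 0u - 1u); /* also ffffffff: wrap-around */
    return 0;
}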
Your examples:
32-bit
unsigned short index = 0;
In index - 1, index is promoted to int. The result has type int and value -1 (which is negative). Same for unsigned char.
64-bit
unsigned char index = 0;
unsigned short index = 0;
Same as for 32-bit. index is promoted to int, index - 1 is negative.
unsigned long index = 0;
The output
index-1 = ffffffff (sizeof 8)
is weird, it’s your only correct use of %lx but looks like you’ve printed it with %x (expecting 4 bytes); on my 64-bit computer (with 64-bit long) and with %lx I get:
index-1 = ffffffffffffffff (sizeof 8)
0xffffffffffffffff is -1 modulo 2^64.
unsigned index = 0;
An int cannot hold every value an unsigned int can, so in index - 1 nothing is promoted to int; the result has type unsigned int and value “-1” (which is positive, being the same as UINT_MAX, i.e. 0xffffffff, since the type is unsigned). For 32-bit addresses, adding this value is the same as subtracting one:
   bfbdd5db      bfbdd5db
 + ffffffff    -        1
 = 1bfbdd5da
 =  bfbdd5da   =  bfbdd5da
(Note the wrap-around/truncation.) For 64-bit addresses, however:
  00007fff b81d6c21
+          ffffffff
= 00008000 b81d6c20
with no wrap-around. This is trying to access an invalid address, so you get a segfault.
Maybe have a look at 2’s complement on Wikipedia.
Under my 64-bit Linux, using a specifier expecting a 32-bit value while passing a 64-bit type (and the other way round) seems to “work”; only the 32 least-significant bits are read. But use the correct ones: lx expects an unsigned long, unmodified x an unsigned int, and hx an unsigned short (an unsigned short is promoted to int when passed to printf as a variable argument, due to the default argument promotions). The length modifier for size_t is z, as in %zu:
printf("index-1 = %lx (sizeof %zu)\n", (unsigned long)(index-1), sizeof(index-1));
(The conversion to unsigned long doesn’t change the value of an unsigned int, unsigned short, or unsigned char expression.)
sizeof(index-1) could also have been written as sizeof(+index), the only effect on the size of the expression are the usual arithmetic conversions, which are also triggered by unary +.
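For reference, here is a sketch of the question's program with the format specifiers and the index type fixed (using a signed index is my choice here, assuming the intent was to reach p[-1]):
#include <stdio.h>

int main(void)
{
    char buf[30];
    char *p;
    int i;
    long index = 0;  /* signed: index - 1 really is -1 on both 32- and 64-bit */
    printf("index-1 = %lx (sizeof %zu)\n", (unsigned long)(index - 1), sizeof(index - 1));
    for (i = 'A'; i <= 'Z'; i++)
        buf[i - 'A'] = (char)i;
    p = &buf[1];
    /* &p[index - 1] is &buf[0]: a valid address on any platform */
    printf("%c: buf=%p p=%p p[-1]=%p\n", p[index - 1], (void *)buf, (void *)p, (void *)&p[index - 1]);
    return 0;
}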

cast without * operator

Could someone explain to me what's happening to "n" in this situation?
main.c
unsigned long temp0;
PLLSYS0_FWD_DIV_A_DECODE(n);
main.h
#define PLLSYS0_FWD_DIV_A_DECODE(n) ((((unsigned long)(n))>>8)& 0x0000000f)
I understand that n is being shifted right by 8 bits and then ANDed with 0x0000000f. So what does (unsigned long)(n) actually do?
#include <stdio.h>
int main(void)
{
unsigned long test1 = 1;
printf("test1 = %d \n", test1);
printf("(unsigned long)test1 = %d \n", (unsigned long)(test1));
return 0;
}
Output:
test1 = 1
(unsigned long)test1 = 1
In your code example, the cast doesn't make much sense because test1 is already an unsigned long, but it makes sense when the macro is used on a different type like unsigned char etc.
Also, you should use %lu in printf to print an unsigned long:
printf("(unsigned long)test1 = %lu\n", (unsigned long)(test1));
// ^^
It widens it to the size of an unsigned long. Imagine if you called this with a char and shifted it 8 bits to the right: the ANDing wouldn't work the same.
As for why the cast is to an unsigned type: an unsigned operand forces a logical right shift, in which the vacated left-most bits are filled with zeros. A negative signed value typically gets an arithmetic shift instead, in which the vacated left-most bits are filled with copies of the sign bit (in C, right-shifting a negative value is implementation-defined).
Example:
11000011 ( unsigned, shifted to the right by 1 )
01100001
11000011 ( signed, shifted to the right by 1 )
11100001
Could someone explain to me what's happening to "n" in this situation?
You are casting n to unsigned long.
So what does (unsigned long)(n) actually do?
It converts n to unsigned long.
Casting the input is all it does before the bit shift and the ANDing, being careful about order of operations and operator precedence. It's pretty ugly.
But it looks like they're avoiding the sign bit, and by writing a macro instead of a function there's no type checking on n.
Better form would be a clean, clear function with input type checking, as sketched below.
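For instance, a hypothetical inline-function version of the macro (a sketch; the function name is made up, and it mirrors the macro's semantics):
#include <stdio.h>

/* Typed replacement for PLLSYS0_FWD_DIV_A_DECODE: the parameter type
   documents and enforces the expected width and unsignedness. */
static inline unsigned long pllsys0_fwd_div_a_decode(unsigned long n)
{
    return (n >> 8) & 0x0000000ful;  /* extract bits 8..11 */
}

int main(void)
{
    printf("%lu\n", pllsys0_fwd_div_a_decode(0x00000f00ul));  /* prints 15 */
    return 0;
}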
That ensures that n has the proper size (in bits) and, most importantly, is treated as unsigned. When the shift operator is applied to a signed, negative number, the usual (implementation-defined) behavior is sign extension: the vacated bits are filled with 1s, not 0s. That means a negative number shifted right will stay negative.
For example:
#include <stdio.h>

int main(void)
{
    long i = -1;
    long x, y;
    x = ((unsigned long)i) >> 8; /* logical shift: zero-filled from the left */
    y = i >> 8;                  /* arithmetic shift: sign bit replicated */
    printf("%ld %ld\n", x, y);
    return 0;
}
On my machine it outputs:
72057594037927935 -1
Because of the sign extension in y, the number remains -1.

Datatype promotion in C

In the following code:
#include "stdio.h"
signed char a= 0x80;
unsigned char b= 0x01;
void main (void)
{
if(b*a>1)
printf("promoted\n");
else if (b*a<1)
printf("why doesnt promotion work?");
while(1);
}
I expected "promoted" to be printed, but it doesn't get printed. Why?
If I change the datatypes to signed and unsigned int, with a as a negative number (e.g. 0x80000000) and b as a positive number (0x01), "promoted" gets printed as expected.
Please help me understand what the problem is!
You've just been caught by the messy type-promotion rules of C.
In C, operands of integer types narrower than int are automatically promoted to int.
So you have:
0x80 * 0x01 = -128 * 1
0x80 gets sign-extended to type int:
0xffffff80 * 0x00000001 = -128 * 1 = -128
So the result is -128 and thus is less than 1.
When you use int and unsigned int, the signed operand gets converted to unsigned int. 0x80000000 * 0x01 = 0x80000000, which as an unsigned integer is bigger than 1.
So here's a side-by-side comparison of the type promotion taking place:
(signed char) * (unsigned char) -> int
(signed int ) * (unsigned int ) -> unsigned int
(signed char)0x80       * (unsigned char)0x01 -> (int)         0xffffff80
(signed int )0x80000000 * (unsigned int )0x01 -> (unsigned int)0x80000000
(int)         0xffffff80 is negative -> prints "why doesnt promotion work?"
(unsigned int)0x80000000 is positive -> prints "promoted"
The relevant rules are the integer promotions and the usual arithmetic conversions (C11 §6.3.1.1 and §6.3.1.8).
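A minimal program contrasting the two cases (a sketch; the values follow the answer above and assume an 8-bit char, a 32-bit int, and two's complement):
#include <stdio.h>

int main(void)
{
    signed char sc = -128;  /* bit pattern 0x80 */
    unsigned char uc = 1;
    int si = (int)0x80000000u;  /* -2147483648 on two's complement */
    unsigned int ui = 1;
    /* Both char operands promote to int: the product is the signed value -128. */
    printf("sc * uc > 1 ? %s\n", (sc * uc > 1) ? "yes" : "no");  /* no */
    /* The int operand converts to unsigned int: the product is 0x80000000u. */
    printf("si * ui > 1 ? %s\n", (si * ui > 1) ? "yes" : "no");  /* yes */
    return 0;
}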
The reason printf("promoted\n"); never runs is that b*a is always -128, which is less than 1:
   a        b
0x80  *  0x01 = -128 * 1
