Incorrect multiplication result using fixed point in C

I'm trying to implement signed × unsigned multiplication in C using fixed-point arithmetic, but I get an incorrect result. I can't work out how to solve this problem; I think there is some problem in the bit extension.
Here is the piece of code:
int16_t audio_sample = 0x1FF; // signed Q1.8 -> value represented = -0.00390625
uint8_t gain = 0xA;           // unsigned Q5.2 -> value represented = 2.5
int16_t result = (int16_t)((int32_t)audio_sample * (int32_t)gain);
printf("%x",result);
The result from printf is 0x13F6, which is of course the result of 0x1FF * 0xA, but fixed-point arithmetic says that the correct result would be 0x3FF6, considering the proper bit extension. 0x3FF6 in Q6.10 format represents -0.009765625 = -0.00390625 * 2.5.
Please help me find my mistake.
Thanks in advance.

You should use unsigned types here. The representation is in your head (or the comments), not in the data types in the code.
2's complement means the 1 on the left is theoretically continued forever. e.g. 0x1FF in Q1.8 is the same as 0xFFFF in Q8.8 (-1 / 256).
If you have a 16-bit integer, you cannot have Q1.8; it will always be Q8.8, because the machine will not ignore the other bits. So 0x1FF in Q1.8 should be 0xFFFF in Q8.8. The 0xA in Q5.2 does not change when widened to Q6.2.
0xFFFF * 0xA = 0x9FFF6, cut away the overflow (therefore use unsigned) and you have 0xFFF6 in Q6.10, which is -10 / 1024, which is your expected result.

It is best to think of fixed-point as a matter of scaling, and to express your calculation simply and clearly in terms of numbers rather than bits.
A Q1.8 or Q5.2 number in AMD Q notation is a real number scaled by a factor of 2^8 or 2^2 respectively.
But C doesn't have 9 or 7-bit number types. Your int16_t and uint8_t variables have enough range to store such numbers. But for arithmetic operations, it is unwise to use unsigned integers, or to mix signed and unsigned types. int has enough range and avoids some efficiency pitfalls.
int audio_sample = -0.00390625*256; // Q1.8
int gain = 2.5*4; // Q5.2
The product of numbers scaled by 2^8 and 2^2 has a scale of 2^10.
int result = audio_sample * gain; // Q6.10
To convert back to the real value, divide by the scale factor.
printf("%lg * %lg = %lg\n",
(double)audio_sample/256,
(double)gain/4,
(double)result/1024);
Please help me find my mistake.
The mistake was in assigning 0x1FF to audio_sample, instead of -1. 0x1FF is the unsigned truncation of the 9-bit two's-complement value -1. But audio_sample is wider and would require more leading 1 bits. It would have been clearer and safer to express your intent by assigning -0.00390625*256 to audio_sample.
the fixed-point arithmetics said that the correct results would be 0x3FF6, considering the proper bit-extension
0x3FF6 is the unsigned 14-bit truncation of the correct two's complement answer. But the result requires 16 bits, so you're probably looking for the value 0xFFF6.
printf("unsigned Q6.10: 0x%x\n", (unsigned)result & 0xFFFF);


C - Unsigned long long to double on 32-bit machine

Hi I have two questions:
uint64_t vs double, which has a higher range limit for covering positive numbers?
How to convert double into uint64_t if only the whole number part of double is needed.
Direct casting apparently doesn't work due to how double is defined.
Sorry for any confusion, I'm talking about the 64bit double in C on a 32bit machine.
As for an example:
//operation for conversion I used:
double sampleRate = (
(union { double i; uint64_t sampleRate; })
{ .i = r23u.outputSampleRate}
).sampleRate;
//the following are printouts on the command line
//                    double              uint64_t
//printed by          %.16llx             %.16llx
//outputSampleRate:   0x41886a0000000000  0x41886a0000000000  (sampleRate)
//printed by          %f                  %llu
//outputSampleRate:   51200000.000000     4722140757530509312 (sampleRate)
So the two numbers remain the same bit pattern but when print out as decimals, the uint64_t is totally wrong.
Thank you.
uint64_t vs double, which has a higher range limit for covering positive numbers?
uint64_t, where supported, has 64 value bits, no padding bits, and no sign bit. It can represent all integers between 0 and 2^64 - 1, inclusive.
Substantially all modern C implementations represent double in IEEE-754 64-bit binary format, but C does not require nor even endorse that format. It is so common, however, that it is fairly safe to assume that format, and maybe to just put in some compile-time checks against the macros defining FP characteristics. I will assume for the balance of this answer that the C implementation indeed does use that representation.
IEEE-754 binary double precision provides 53 bits of mantissa, therefore it can represent all integers between 0 and 2^53 - 1. It is a floating-point format, however, with an 11-bit binary exponent. The largest number it can represent is (2^53 - 1) * 2^971, which is nearly 2^1024. In this sense, double has a much greater range than uint64_t, but the vast majority of integers between 0 and its maximum value cannot be represented exactly as doubles, including almost all of the numbers that can be represented exactly by uint64_t.
How to convert double into uint64_t if only the whole number part of double is needed
You can simply assign (conversion is implicit), or you can explicitly cast if you want to make it clear that a conversion takes place:
double my_double = 1.2345678e18;  /* whole part representable in uint64_t */
uint64_t my_uint;
uint64_t my_other_uint;
my_uint = my_double;
my_other_uint = (uint64_t) my_double;
Any fractional part of the double's value will be truncated. The integer part will be preserved exactly if it is representable as a uint64_t; otherwise, the behavior is undefined.
The code you presented uses a union to overlay storage of a double and a uint64_t. That's not inherently wrong, but it's not a useful technique for converting between the two types. Casts are C's mechanism for all non-implicit value conversions.
double can hold substantially larger numbers than uint64_t; the value range for an 8-byte IEEE 754 double is 4.94065645841246544e-324 to 1.79769313486231570e+308 (positive or negative). However, if you add small values to a number near the top of that range, you will be in for a surprise, because at some point the precision will not be able to represent, e.g., an addition of 1, and will round down to the lower value, essentially making a loop steadily incremented by 1 non-terminating.
This code for example:
#include <stdio.h>

int main(void)
{
    for (double i = 100000000000000000000000000000000.0; i < 1000000000000000000000000000000000000000000000000.0; ++i)
        printf("%lf\n", i);
    return 0;
}
gives me a constant output of 100000000000000005366162204393472.000000. That's also why we have nextafter and nexttoward functions in math.h. You can also find ceil and floor functions there, which, in theory, will allow you to solve your second problem: removing the fraction part.
However, if you really need to hold large numbers you should look at bigint implementations instead, e.g. GMP. Bigints were designed to do operations on very large integers, and operations like an addition of one will truly increment the number even for very large values.

is it safe to subtract between unsigned integers?

The following C code displays the result correctly, -1.
#include <stdio.h>

int main(void)
{
    unsigned x = 1;
    unsigned y = x - 2;
    printf("%d", y);
}
But in general, is it always safe to do subtraction involving
unsigned integers?
The reason I ask the question is that I want to do some conditioning
as follows:
unsigned x = 1; // x was defined by someone else as unsigned,
                // which I had better not change.
for (int i = -5; i < 5; i++) {
    if (x + i < 0) continue;
    f(x + i); // f is a function
}
Is it safe to do so?
How are unsigned integers and signed integers different in
representing integers? Thanks!
1: Yes, it is safe to subtract unsigned integers. The definition of arithmetic on unsigned integers includes that if an out-of-range value would be generated, then that value should be adjusted modulo the maximum value for the type, plus one. (This definition is equivalent to truncating high bits).
Your posted code has a bug though: printf("%d", y); causes undefined behaviour because %d expects an int, but you supplied unsigned int. Use %u to correct this.
2: When you write x+i, the i is converted to unsigned. The result of the whole expression is a well-defined unsigned value. Since an unsigned can never be negative, your test will always fail.
You also need to be careful using relational operators because the same implicit conversion will occur. Before I give you a fix for the code in section 2, what do you want to pass to f when x is UINT_MAX or close to it? What is the prototype of f ?
3: Unsigned integers use a "pure binary" representation.
Signed integers have three options. Two can be considered obsolete; the most common one is two's complement. All options require that a positive signed integer value has the same representation as the equivalent unsigned integer value. In two's complement, a negative signed integer is represented the same as the unsigned integer generated by adding UINT_MAX+1, etc.
If you want to inspect the representation, then do unsigned char *p = (unsigned char *)&x; printf("%02X%02X%02X%02X", p[0], p[1], p[2], p[3]);, depending on how many bytes are needed on your system.
It's always safe to subtract unsigned values, as in
unsigned x = 1;
unsigned y=x-2;
y will take on the value of -1 mod (UINT_MAX + 1), which is UINT_MAX.
Is it always safe to do subtraction, addition, or multiplication involving unsigned integers? Yes: there is no UB. The answer will always be the expected mathematical result reduced modulo UINT_MAX + 1.
But do not do printf("%d", y); - that is UB. Instead use printf("%u", y);
C11 §6.2.5 9 "A computation involving unsigned operands can never overflow, because a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting type."
When unsigned and int are used with +, the int is converted to unsigned. So x + i has an unsigned result, and that sum is never < 0. Safe, but now if (x+i<0) continue is pointless. f(x+i); is safe, but we would need to see the f() prototype to best explain what may happen.
Unsigned integers always range from 0 to 2^N - 1 and have well-defined "overflow" results. Signed integers are 2's complement, 1's complement, or sign-magnitude and have UB on overflow. Some compilers take advantage of that and assume it never occurs when generating optimized code.
Rather than really answering your questions directly, which has already been done, I'll make some broader observations that really go to the heart of your questions.
The first is that using unsigned in loop bounds where there's any chance that a signed value might crop up will eventually bite you. I've done it a bunch of times over 20 years and it has ultimately bit me every time. I'm now generally opposed to using unsigned for values that will be used for arithmetic (as opposed to being used as bitmasks and such) without an excellent justification. I have seen it cause too many problems when used, usually with the simple and appealing rationale that “in theory, this value is non-negative and I should use the most restrictive type possible”.
I understand that x, in your example, was decided to be unsigned by someone else, and you can't change it, but you want to do something involving x over an interval potentially involving negative numbers.
The “right” way to do this, in my opinion, is first to assess the range of values that x may take. Suppose that the length of an int is 32 bits. Then the length of an unsigned int is the same. If it is guaranteed to be the case that x can never be larger than 2^31-1 (as it often is), then it is safe in principle to cast x to a signed equivalent and use that, i.e. do this:
int y = (int)x;
// Do your stuff with *y*
x = (unsigned)y;
If you have a long that is longer than unsigned, then even if x uses the full unsigned range, you can do this:
long y = (long)x;
// Do your stuff with *y*
x = (unsigned)y;
Now, the problem with either of these approaches is that before assigning back to x (e.g. x=(unsigned)y; in the immediately preceding example), you really must check that y is non-negative. However, these are exactly the cases where working with the unsigned x would have bitten you anyway, so there's no harm at all in something like:
long y = (long)x;
// Do your stuff with *y*
assert( y >= 0L );
x = (unsigned)y;
At least this way, you'll catch the problems and find a solution, rather than having a strange bug that takes hours to find because a loop bound is four billion unexpectedly.
No, it's not safe.
Integers are usually 4 bytes long, which equals 32 bits. Their difference in representation is:
As far as signed integers are concerned, the most significant bit is used for the sign, so they can represent values between -2^31 and 2^31 - 1.
Unsigned integers don't use any bit for the sign, so they represent values from 0 to 2^32 - 1.
Part 2 isn't safe either for the same reason as Part 1. As int and unsigned types represent integers in a different way, in this case where negative values are used in the calculations, you can't know what the result of x + i will be.
No, it's not safe. Trying to represent negative numbers with unsigned ints smells like a bug. Also, you should use %u to print unsigned ints.
If we slightly modify your code to put %u in printf:
#include <stdio.h>

int main(void)
{
    unsigned x = 1;
    unsigned y = x - 2;
    printf("%u", y);
}
The number printed is 4294967295
The reason the result is correct is because C doesn't do any overflow checks and you are printing it as a signed int (%d). This, however, does not mean it is safe practice. If you print it as it really is (%u) you won't get the correct answer.
An unsigned integer type should be thought of not as representing a number, but as a member of something called an "abstract algebraic ring", specifically the equivalence class of integers congruent modulo (MAX_VALUE+1). For purposes of examples, I'll assume "unsigned int" is 16 bits for numerical brevity; the principles would be the same with 32 bits, but all the numbers would be bigger.
Without getting too deep into the abstract-algebraic nitty-gritty, when assigning a number to an unsigned type [abstract algebraic ring], zero maps to the ring's additive identity (so adding zero to a value yields that value), and one maps to the ring's multiplicative identity (so multiplying a value by one yields that value). Adding a positive integer N to a value is equivalent to adding the multiplicative identity, N times; adding a negative integer -N, or subtracting a positive integer N, will yield the value which, when added to +N, would yield the original value.
Thus, assigning -1 to a 16-bit unsigned integer yields 65535, precisely because adding 1 to 65535 will yield 0. Likewise -2 yields 65534, etc.
Note that in an abstract algebraic sense, every integer can be uniquely mapped to algebraic rings of the indicated form, and a ring member can be uniquely mapped into a smaller ring whose modulus is a factor of its own [e.g. a 16-bit unsigned integer maps uniquely to one 8-bit unsigned integer], but ring members are not uniquely convertible to larger rings or to integers. Unfortunately, C sometimes pretends that ring members are integers, and implicitly converts them; that can lead to some surprising behavior.
Subtracting a value, signed or unsigned, from an unsigned value which is no smaller than int, and no smaller than the value being subtracted, will yield a result according to the rules of algebraic rings, rather than the rules of integer arithmetic. Testing whether the result of such computation is less than zero will be meaningless, because ring values are never less than zero. If you want to operate on unsigned values as though they are numbers, you must first convert them to a type which can represent numbers (i.e. a signed integer type). If the unsigned type can be outside the range that is representable with the same-sized signed type, it will need to be upcast to a larger type.

Converting a large integer to a floating point number in C

I recently wrote a block of code that takes as an input an 8 digit hexadecimal number from the user, transforms it into an integer and then converts it into a float. To go from integer to float I use the following:
int myInt;
float myFloat;
myFloat = *(float *)&myInt;
printf("%g", myFloat);
It works perfectly for small numbers. But when the user inputs hexadecimal numbers such as:
0x0000ffff
0x7eeeeeef
I get that myInt = -2147483648 and that myFloat = -0. I know that the number I get for myInt is the smallest possible number that can be stored in an int variable in C.
Because of this problem, the input range of my program is extremely limited. Does anyone have any advice on how I could expand the range of my code so that it could handle a number as big as:
0xffffffff
Thank you so much for any help you may give me!
The correct way to get the value transferred as accurately as float will allow is:
float myFloat = myInt;
If you want better accuracy, use double instead of float.
What you're doing is trying to reinterpret the bit pattern for the int as if it was a float, which is not a good idea. There are hexadecimal floating-point constants and conversions available in C99 and later. (However, if that's what you are trying, your code in the question is correct — your problem appears to be in converting hex to integer.)
If you get -2147483648 from 0x0000FFFF (or from 0x7EEEFFFF), there is a bug in your conversion code. Fix that before doing anything else. How are you doing the hex-to-integer conversion? Using strtol() is probably a good way (sscanf() and friends are also possible), but be careful about overflows.
Does anyone have any advice on how I could expand the range of my code so that it could
handle a number as big as 0xffffffff
You can't store 0xffffffff in a 32-bit int; the largest positive hex value you can store in a 32-bit int is 0x7FFFFFFF, i.e. (2^31 - 1) or 2147483647, while the negative limit is -2^31 or -2147483648.
The ranges are due to obvious limitations in the number of bits available and the 2's complement system.
Use an unsigned int if you want 0xffffffff.

Why is this bit-hack code portable?

int v;
int sign; // the sign of v ;
sign = -(int)((unsigned int)((int)v) >> (sizeof(int) * CHAR_BIT - 1));
Q1: Since v is defined as int, why bother to cast it to int again? Is it related to portability?
Edit:
Q2:
sign = v >> (sizeof(int) * CHAR_BIT - 1);
this snippet isn't portable, since right-shifting a signed int is implementation-defined: how the vacated high bits are filled is up to the compiler. So
-(int)((unsigned int)((int)v) >> (sizeof(int) * CHAR_BIT - 1))
does the portable trick. Please explain why this works.
Isn't a right shift of an unsigned int always guaranteed to fill the high bits with 0?
It's not strictly portable, since it is theoretically possible that int and/or unsigned int have padding bits.
In a hypothetical implementation where unsigned int has padding bits, shifting right by sizeof(int)*CHAR_BIT - 1 would produce undefined behaviour since then
sizeof(int)*CHAR_BIT - 1 >= WIDTH
But for all implementations where unsigned int has no padding bits - and as far as I know that means all existing implementations - the code
int v;
int sign; // the sign of v ;
sign = -(int)((unsigned int)((int)v) >> (sizeof(int) * CHAR_BIT - 1));
must set sign to -1 if v < 0 and to 0 if v >= 0. (Note - thanks to Sander De Dycker for pointing it out - that if int has a negative zero, that would also produce sign = 0, since -0 == 0. If the implementation supports negative zeros and the sign for a negative zero should be -1, neither this shifting, nor the comparison v < 0 would produce that, a direct inspection of the object representation would be required.)
The cast to int before the cast to unsigned int before the shift is entirely superfluous and does nothing.
It is - disregarding the hypothetical padding bits problem - portable because the conversion to unsigned integer types and the representation of unsigned integer types is prescribed by the standard.
Conversion to an unsigned integer type is reduction modulo 2^WIDTH, where WIDTH is the number of value bits in the type, so that the result lies in the range 0 to 2^WIDTH - 1 inclusive.
Since without padding bits in unsigned int the size of the range of int cannot be larger than that of unsigned int, and the standard mandates (6.2.6.2) that signed integers are represented in one of
sign and magnitude
ones' complement
two's complement
the smallest possible representable int value is -2^(WIDTH-1). So a negative int of value -k is converted to 2^WIDTH - k >= 2^(WIDTH-1) and thus has the most significant bit set.
A non-negative int value, on the other hand cannot be larger than 2^(WIDTH-1) - 1 and hence its value will be preserved by the conversion and the most significant bit will not be set.
So when the result of the conversion is shifted by WIDTH - 1 bits to the right (again, we assume no padding bits in unsigned int, hence WIDTH == sizeof(int)*CHAR_BIT), it will produce a 0 if the int value was non-negative, and a 1 if it was negative.
It should be quite portable because when you convert int to unsigned int (via a cast), you receive a value that is 2's complement bit representation of the value of the original int, with the most significant bit being the sign bit.
UPDATE: A more detailed explanation...
I'm assuming there are no padding bits in int and unsigned int and all bits in the two types are utilized to represent integer values. It's a reasonable assumption for the modern hardware. Padding bits are a thing of the past, from where we're still carrying them around in the current and recent C standards for the purpose of backward compatibility (i.e. to be able to run code on old machines).
With that assumption, if int and unsigned int have N bits in them (N = CHAR_BIT * sizeof(int)), then per the C standard we have 3 options to represent int, which is a signed type:
sign-and-magnitude representation, allowing values from -(2^(N-1) - 1) to 2^(N-1) - 1
one's complement representation, also allowing values from -(2^(N-1) - 1) to 2^(N-1) - 1
two's complement representation, allowing values from -2^(N-1) to 2^(N-1) - 1 or, possibly, from -(2^(N-1) - 1) to 2^(N-1) - 1
The sign-and-magnitude and one's complement representations are also a thing of the past, but let's not throw them out just yet.
When we convert int to unsigned int, the rule is that a non-negative value v (>= 0) doesn't change, while a negative value v (< 0) changes to the positive value 2^N + v; hence (unsigned int)-1 = UINT_MAX.
Therefore, (unsigned int)v for a non-negative v will always be in the range from 0 to 2^(N-1) - 1, and the most significant bit of (unsigned int)v will be 0.
Now, for a negative v in the range from -2^(N-1) to -1 (this range is a superset of the negative ranges of the three possible representations of int), (unsigned int)v will be in the range from 2^N + (-2^(N-1)) to 2^N + (-1), which simplifies to the range from 2^(N-1) to 2^N - 1. Clearly, the most significant bit of this value will always be 1.
If you look carefully at all this math, you will see that the value of (unsigned)v looks exactly the same in binary as v in 2's complement representation:
...
v = -2: (unsigned)v = 2^N - 2 = 111...110 in binary
v = -1: (unsigned)v = 2^N - 1 = 111...111 in binary
v =  0: (unsigned)v = 0       = 000...000 in binary
v =  1: (unsigned)v = 1       = 000...001 in binary
...
So, there, the most significant bit of the value (unsigned)v is going to be 0 for v>=0 and 1 for v<0.
Now, let's get back to the sign-and-magnitude and one's complement representations. These two representations may allow two zeroes, a +0 and a -0. But arithmetic computations do not visibly distinguish between +0 and -0, it's still a 0, whether you add it, subtract it, multiply it or compare it. You, as an observer, normally wouldn't see +0 or -0 or any difference from having one or the other.
Trying to observe and distinguish +0 and -0 is generally pointless and you should not normally expect or rely on the presence of two zeroes if you want to make your code portable.
(unsigned int)v won't tell you the difference between v=+0 and v=-0, in both cases (unsigned int)v will be equivalent to 0u.
So, with this method you won't be able to tell whether internally v is a -0 or a +0, you won't extract v's sign bit this way for v=-0.
But again, you gain nothing of practical value from differentiating between the two zeroes and you don't want this differentiation in portable code.
So, with this I dare to declare the method for sign extraction presented in the question quite/very/pretty-much/etc portable in practice.
This method is an overkill, though. And (int)v in the original code is unnecessary as v is already an int.
This should be more than enough and easy to comprehend:
int sign = -(v < 0);
Nope, it's just excessive casting. There is no need to cast it to an int. It doesn't hurt, however.
Edit: It's worth noting that it may be done like that so the type of v can be changed to something else, or v may once have been another data type, and after it was converted to an int the cast was never removed.
It isn't. The Standard does not define the representation of integers, and therefore it's impossible to guarantee exactly what the result of that will be portably. The only way to get the sign of an integer is to do a comparison.

Integer overflow problem

Please explain the following paragraph.
"The next question is whether we can assign a certain value to a variable without losing precision. It is not sufficient if we just check for overflow during the addition or subtraction, because someone might add 1 to -5 and assign the result to an unsigned int. Then the actual addition does not overflow, but the result still does not fit."
When I add 1 to -5, I don't see any reason to worry; the answer is, as it should be, -4.
So what is the problem of the result not fitting?
you can find the full article here through which i was going:
http://www.fefe.de/intof.html
The binary representation of -4, in a 32-bit word, is as follows (hex notation)
0xfffffffc
When interpreted as an unsigned integer, this bit pattern represents the number 2**32 - 4, or 4294967292. I'm not sure I would call this phenomenon "overflow", but it is a common mistake to assign a small negative integer to a variable of unsigned type and wind up with a really big positive integer.
This trick is actually exploited for bounds checking: if you have a signed integer i and want to know if it is in the range 0 <= i < n, you can test
if ((unsigned)i < n) { ... }
which gives you the answer using one comparison instead of two. The cast to unsigned has no run-time cost; it just tells the compiler to generate an unsigned comparison instead of a signed comparison.
Try assigning it to an unsigned int, not an int.
The term unsigned int is the key: by default an int datatype will hold negative and positive numbers; however, unsigned ints are always positive. They provide this option because uints can technically hold greater positive values than regular signed ints, because they do not need to use a bit to keep track of whether the value is negative or positive.
Please see:
Signed versus Unsigned Integers
The problem is that you're storing -4 in an unsigned int. Unsigned ints can only contain zero and positive values. If you assign -4 to one, you'll actually end up getting a very large positive number (the actual value depends on how wide an int you're using).
The problem is that the sizes of storage such as unsigned int can only hold so much. With 1 and -5 it does not matter, but with 1 and -500000000 you might end up with a confusing result. Also, unsigned storage will interpret anything stored in it as positive, so you cannot put a negative value in an unsigned variable.
Two big things to watch out for:
1. Overflow in the operation itself: 1 + -500000000
2. Issues in casting: (unsigned int)(1 + -500)
Unsigned variables, like unsigned int, cannot hold negative values. So assigning 1 - 5 to an unsigned int won't give you -4; you'll get -4 wrapped modulo UINT_MAX + 1, a very large positive number.
Some code:
signed int first, second;
unsigned int result;
first = obtain(); // happens to be 1
second = obtain(); // happens to be -5
result = first + second; // unexpected result here - very large number - and it's too late to check that there's a problem
Say you obtained those values from keyboard. You need to check before addition that the result can be represented in unsigned int. That's what the article talks about.
By definition the number -4 cannot be represented in an unsigned int. -4 is a signed integer. The same goes for any negative number.
When you assign a negative integer to an unsigned int the actual bits of the number do not change, but they are merely represented differently. You'll get some ridiculously-large number due to the way integers are represented in binary (two's complement).
In two's complement, -4 is represented as 0xfffffffc. When 0xfffffffc is represented as an unsigned int you'll get the number 4,294,967,292.
You have to remember that fundamentally you're working with bits. So you can assign a value of -4 to an unsigned integer, and this will place a series of bits into that memory location. Those bits can be interpreted as -4 in certain circumstances. One such circumstance is the obvious one: you've told the compiler/system that the bits in that memory location should be interpreted as a two's complement signed number. So if you do printf("%d", i), printf does its magic and converts the two's complement number to a magnitude and sign. The magnitude will be 4 and the sign will be negative, so it displays '-4'.
However, if you tell the compiler that the data at that memory location is not signed then the bits don't change but instead their interpretation does. So when you do your addition, store the result in an unsigned integer memory location and then call printf on the result it doesn't bother looking for the sign because by definition it is always positive. It calculates the magnitude and prints it. The magnitude will be off because the sign information is still encoded in the bits but it's treated as magnitude information.
