converting from signed int to unsigned int without casting - c

int isNegative(int x) {
    return ((unsigned) x) >> 31;
}
I'm writing a function that takes 32 bits and returns 1 if x < 0, and 0 otherwise. How do I convert a signed int to an unsigned int without casting?

OP's post contains two different implied questions:
a function that takes 32 bits and returns 1 if x<0, and 0 otherwise.
int isNegative(int x) {
    return x < 0;
}

// or maybe return bool/_Bool (needs <stdbool.h>)
bool isNegative(int x) {
    return x < 0;
}

// or pedantically (needs <stdint.h>)
int isNegative(int_least32_t x) {
    return x < 0;
}

// or pedantically and nearly universally portable
int isNegative(int32_t x) {
    return x < 0;
}
converting from signed int to unsigned int without casting (and)
How do I convert a signed int to an unsigned int without casting?
Simply assign the value
int i;
unsigned u = i;
Attempting to use >> to combine these two risks implementation-defined behavior and should be avoided unless there is a compelling reason.
EXAMPLE An example of implementation-defined behavior is the propagation of the high-order bit when a signed integer is shifted right. C11 §3.4.2 2

In this case you don't need to. If you get rid of the cast, right shifting retains the sign bit.
Assuming int is 32 bits, removing the cast results in -1 for negative numbers and 0 for non-negative numbers. While the actual return value differs in the negative case, in a Boolean context it will work as you expect.

Your question is to write a function that takes 32 bits and returns 1 if x < 0
int isNegative(int x) {
    return x < 0;
}
This will work regardless of the size of int and does not invoke any type conversions. I suspect you have overthought this problem.

There is a variety of operations that perform automatic type conversions, among them assignments, arithmetic operations under certain conditions, and function calls (conversion of certain argument values).
Thus, you could achieve conversion of (signed) int arguments to unsigned int simply by declaring that to be the parameter type:
int isNegative(unsigned x) {
    return x >> 31;
}
Admittedly, however, that function interface could be a little confusing. You might therefore prefer to do this:
int isNegative(int x) {
    unsigned ux = x;
    return ux >> 31;
}
However, I don't think either of those is as clear as your original version, with its cast. Type conversion is the entire purpose of a cast, and when conversion is what you need (and all that you need), a cast is the right tool for the job.
Of course, I do overall prefer the even simpler family of approaches suggested by @chux.

Related

How is int converted to char, and how is char converted to int?

In the following example the bit representation of byte with all ones is printed:
#include <stdio.h>

int main (void)
{
    char c = 255;
    char z;
    for (int i = 7; i >= 0; i--) {
        z = 1 << i;
        if ((z & c) == z) printf("1"); else printf("0");
    }
    printf("\n");
    return 0;
}
The output is 11111111
Now we change char c to int c, so that the example becomes:
#include <stdio.h>

int main (void)
{
    int c = 255;
    char z;
    for (int i = 7; i >= 0; i--) {
        z = 1 << i;
        if ((z & c) == z) printf("1"); else printf("0");
    }
    printf("\n");
    return 0;
}
Now the output is 01111111.
Why is the output different?
UPDATE
Compile the following test.c:
#include <stdio.h>
int main(void)
{
char c=-1;
printf("%c",c);
return 0;
}
$ gcc test.c
$ ./a.out | od -b
0000000 377
0000001
The output is 377, which means that glibc contradicts gcc, because the signed char is converted to unsigned char automatically.
Why such complications? It is reasonable to have char unsigned by default. Is there any specific reason why not?
The first problem here is the char type. This type should never be used for storing integer values, because it has implementation-defined signedness. This means that it could be either signed or unsigned, and you will get different results on different compilers. If char is unsigned on the given compiler, then this code will behave as you expected.
But if char is signed, char c = 255; will result in a value that is too large to store. The value 255 will then get converted to a signed number in some compiler-specific way, usually by reinterpreting the raw data as its two's complement equivalent.
Good compilers like GCC will give a warning for this: "overflow in implicit constant conversion".
Solve this bug by never using char for storing integers. Use uint8_t instead.
The same problem appears when you try to store 1 << 7 inside a char type that is signed on your given compiler. z will end up as a negative value (-128) when that happens.
In the expression z & c, both operands are silently integer promoted to type int. This happens in most C expressions whenever you use small integer types such as char.
The & operator doesn't care if the operands are signed or not, it will do a bitwise AND on the "raw data" values of the variables. When c is a signed char and has the raw value 0xFF, you will get a result which is negative, with the sign bit set. Value -1 on two's complement computers.
So to answer why you get different results in the two cases:
When you switch type to int, the value 255 will fit inside c without getting converted to a negative value. The result of the & operation will also be an int and the sign bit of this int will never be set, unlike it was in the char case.
When you execute -128 & 255 the result will be 128 (0x80). This is a positive integer. z is however a negative integer with the value -128. It will get promoted to int by the == operator but the sign is preserved. Since 128 is not equal to -128, the MSB will get printed as a zero.
You would get the same result if you switched char to uint8_t.
For char-to-int comparisons, you have to declare char as unsigned, because on many platforms plain char is signed by default.
#include <stdio.h>

int main (void)
{
    int c = 255;
    unsigned char z;
    int i;
    for (i = 7; i >= 0; i--) {
        z = 1 << i;
        if ((z & c) == z) printf("1"); else printf("0");
    }
    printf("\n");
    return 0;
}
(edit to clarify "signed by default")
In the first listing, ((z & c) == z) involves only char operands; in the second listing, it mixes a char and an int.
In order to perform the & and == operations between a char and an int, the compiler expands the char to the size of an int.
Regarding bit 7 (the 8th bit):
If your compiler considered char to be unsigned by default, the condition
(((int)128 & (int)255) == (int)128)
would evaluate to true, and a 1 would be printed. However, in your case the result is false, and a 0 is displayed.
The reason is likely that your compiler considers char to be signed (as gcc does by default). In that case, a char set to 1 << 7 is actually -128, while in an int (at least two bytes) 255 is positive.
(char)-128 expanded to an int is (int)-128, thus the condition
if ((z & c) == z)
reads
if (((int)(-128) & (int)255) == (int)-128)
which is false in this case.

How to assign an int to unsigned long using C?

I am using the C language. There is a function named npu_session_total, and I want to assign its return value to an unsigned long variable, accelerated_count.
int npu_session_total(void)
{
    // this will return an int
    return atomic_read(&npu_session_count);
}
........
unsigned long accelerated_count = npu_session_total();
Will this cause any problems? How can I do the cast?
Thanks!
Assigning an int to an unsigned long can be done simply, as OP did. It is well defined in C. When some_int_value >= 0, it will always fit unchanged into an unsigned long.
INT_MAX <= UINT_MAX <= ULONG_MAX
No cast, masking, nor math is needed - just like OP did.
unsigned long some_unsigned_long_object = some_int_value;
The trick is when some_int_value < 0. The value saved will be some_int_value + ULONG_MAX + 1, as @AnT notes. Is this OK for OP's code? Perhaps not.
A safer conversion would test for a negative value first.
int session_total = npu_session_total();
if (session_total < 0) {
    Handle_Negative_Case(session_total);
} else {
    unsigned long accelerated_count = session_total;
    ...
}
OP comments that the int value should never be negative. Defensive coding would still detect negative values and handle them, perhaps with a simple error message and exit.

Adding 32 bit signed in C

I have been given this problem and would like to solve it in C:
Assume you have a 32-bit processor and that the C compiler does not support long long (or long int). Write a function add(a,b) which returns c = a+b where a and b are 32-bit integers.
I wrote this code which is able to detect overflow and underflow
#include <stdio.h>

#define INT_MIN (-2147483647 - 1) /* minimum (signed) int value */
#define INT_MAX 2147483647        /* maximum (signed) int value */

int add(int a, int b)
{
    if (a > 0 && b > INT_MAX - a)
    {
        /* handle overflow */
        printf("Handle overflow\n");
    }
    else if (a < 0 && b < INT_MIN - a)
    {
        /* handle underflow */
        printf("Handle underflow\n");
    }
    return a + b;
}
I am not sure how to implement the long using 32-bit registers so that I can print the value properly. Can someone help me use the overflow and underflow information to store the result properly in the c variable, which I think should span two 32-bit locations? I think that is what the problem is hinting at when it says that long is not supported. Would the variable c be two 32-bit registers put together somehow to hold the correct result so that it can be printed? What action should I perform when the result overflows or underflows?
Since this is a homework question I'll try not to spoil it completely.
One annoying aspect here is that the result is bigger than anything you're allowed to use (I interpret the ban on long long to also include int64_t, otherwise there's really no point to it). It may be tempting to go for "two ints" for the result value, but that makes the value awkward to interpret. So I'd go for two uint32_ts and interpret them as the two halves of a 64-bit two's complement integer.
Unsigned multiword addition is easy and has been covered many times (just search). The signed variant is really the same if the inputs are sign-extended: (not tested)
uint32_t a_l = a;
uint32_t a_h = -(a_l >> 31); // sign-extend a
uint32_t b_l = b;
uint32_t b_h = -(b_l >> 31); // sign-extend b
// todo: implement the addition
return some struct containing c_l and c_h
It can't overflow the 64 bit result when interpreted signed, obviously. It can (and should, sometimes) wrap.
To print that thing, if that's part of the assignment, first reason about which values c_h can have. There aren't many possibilities. It should be easy to print using existing integer printing functions (that is, you don't have to write a whole multiword-itoa, just handle a couple of cases).
As a hint for the addition: what happens when you add two decimal digits and the result is larger than 9? Why is the low digit of 7 + 6 = 13 a 3? Given only 7, 6 and 3, how can you determine the second digit of the result? You should be able to apply all this to base 2^32 as well.
First, the simplest solution that satisfies the problem as stated:
double add(int a, int b)
{
    // this will not lose precision, as a double-precision float
    // has more than 33 bits in the mantissa
    return (double) a + b;
}
More seriously, the professor probably expected the number to be decomposed into a combination of ints. Holding the sum of two 32-bit integers requires 33 bits, which can be represented with an int and a bit for the carry flag. Assuming unsigned integers for simplicity, adding would be implemented like this:
#include <limits.h>

struct add_result {
    unsigned int sum;
    unsigned int carry:1;
};

struct add_result add(unsigned int a, unsigned int b)
{
    struct add_result ret;
    ret.sum = a + b;
    ret.carry = b > UINT_MAX - a;
    return ret;
}
The harder part is doing something useful with the result, such as printing it. As proposed by harold, a printing function doesn't need to do full division, it can simply cover the possible large 33-bit values and hard-code the first digits for those ranges. Here is an implementation, again limited to unsigned integers:
#include <stdio.h>

void print_result(struct add_result n)
{
    if (!n.carry) {
        // no carry flag - just print the number
        printf("%u\n", n.sum);
        return;
    }
    if (n.sum < 705032704u)
        printf("4%09u\n", n.sum + 294967296u);
    else if (n.sum < 1705032704u)
        printf("5%09u\n", n.sum - 705032704u);
    else if (n.sum < 2705032704u)
        printf("6%09u\n", n.sum - 1705032704u);
    else if (n.sum < 3705032704u)
        printf("7%09u\n", n.sum - 2705032704u);
    else
        printf("8%09u\n", n.sum - 3705032704u);
}
Converting this to signed quantities is left as an exercise.

summing unsigned and signed ints, same or different answer?

If I have the following code in C
int main()
{
    int x = <a number>;
    int y = <a number>;
    unsigned int v = x;
    unsigned int w = y;
    int ssum = x + y;
    unsigned int usum = v + w;
    printf("%d\n", ssum);
    printf("%d\n", usum);
    if (ssum == usum) {
        printf("Same\n");
    } else {
        printf("Different\n");
    }
    return 0;
}
Which would print the most? Would they be equal, since signed and unsigned would produce the same result? If you have a negative value like -1, when it gets assigned to int x its bits become all ones (0xFF in an 8-bit illustration). If you then compute -1 + (-1) the signed way you get -2 = 0xFE, and since the unsigned variables would also hold 0xFF, adding them also yields 0xFE. The same holds for 2 + (-3) or -2 + 3; in the end the hexadecimal values are identical. So in C, is that what's looked at when it sees signedSum == unsignedSum? It doesn't care that one is actually a large number and the other is -2, as long as the 1's and 0's are the same?
Are there any values that would make this not true?
The examples you have given are incorrect in C. Also, converting between signed and unsigned types is not required to preserve bit patterns (the conversion is by value), although with some representations bit patterns are preserved.
There are circumstances where the result of operations will be the same, and circumstances where the result will differ.
If the (actual) sum of adding two ints would overflow an int (i.e. produce a value outside the range an int can represent), the result is undefined behaviour. Anything can happen at that point (including the program terminating abnormally); subsequently converting to an unsigned doesn't change anything.
Converting an int with negative value to unsigned int uses modulo arithmetic (modulo the maximum value that an unsigned can represent, plus one). That is well defined by the standard, but means -1 (type int) will convert to the maximum value that an unsigned can represent (i.e. UINT_MAX, an implementation-defined value specified in <limits.h>).
Similarly, adding two variables of type unsigned int always uses modulo arithmetic.
Because of things like this, your question "which would produce the most?" is meaningless.

Correct way to take absolute value of INT_MIN

I want to perform some arithmetic in unsigned, and need to take absolute value of negative int, something like
do_some_arithmetic_in_unsigned_mode(int some_signed_value)
{
    unsigned int magnitude;
    int negative;
    if (some_signed_value < 0) {
        magnitude = 0 - some_signed_value;
        negative = 1;
    } else {
        magnitude = some_signed_value;
        negative = 0;
    }
    ...snip...
}
But INT_MIN might be problematic, 0 - INT_MIN is UB if performed in signed arithmetic.
What is a standard/robust/safe/efficient way to do this in C?
EDIT:
If we know we are on a two's complement machine, maybe an implicit conversion and explicit bit operations would be standard? If possible, I'd like to avoid this assumption.
do_some_arithmetic_in_unsigned_mode(int some_signed_value)
{
    unsigned int magnitude = some_signed_value;
    int negative = some_signed_value < 0;
    if (negative) {
        magnitude = (~magnitude) + 1;
    }
    ...snip...
}
Conversion from signed to unsigned is well-defined: you get the corresponding representative modulo 2^N. Therefore, the following will give you the correct absolute value of n:
int n = /* ... */;
unsigned int abs_n = n < 0 ? UINT_MAX - ((unsigned int)n) + 1U
                           : (unsigned int)n;
Update: As @aka.nice suggests, we can actually replace UINT_MAX + 1U by 0U:
unsigned int abs_n = n < 0 ? -((unsigned int)n)
                           : +((unsigned int)n);
In the negative case, take some_signed_value + 1. Negate it (this is safe because it can't be INT_MIN). Convert to unsigned. Then add one.
You can always test for >= -INT_MAX; this is always well defined. The only interesting case for you is when INT_MIN < -INT_MAX and some_signed_value == INT_MIN. You'd have to test that case separately.
I want to perform some arithmetic in unsigned, and need to take absolute value of negative int, ...
To handle pedantic cases:
The |SOME_INT_MIN|1 has some special cases:
1. Non-two's complement
Ones' complement and sign-magnitude are rarely seen these days.
SOME_INT_MIN == -SOME_INT_MAX and some_abs(some_int) is well defined. This is the easy case.
#if INT_MIN == -INT_MAX
some_abs(x); // use matching abs, labs, llabs, imaxabs
#endif
2. SOME_INT_MAX == SOME_UINT_MAX, 2's complement
C allows the max of the signed and unsigned version of an integer type to be the same. This is rarely seen these days.
Two approaches:
1) Use a wider integer type, if one exists.
#if -INTMAX_MAX <= SOME_INT_MIN
imaxabs((intmax_t)x)
#endif
2) Use wide(st) floating-point (FP) type.
Conversion to a wide FP will work for SOME_INT_MIN (2's complement) as that value is a -(power-of-2). For other large negatives, the cast may lose precision for a wide integer and not so wide long double. E.g. 64-bit long long and 64-bit long double.
fabsl(x); // see restriction above.
3. SOME_INT_MAX < SOME_UINT_MAX
This is the common case, well handled by @Kerrek SB's answer. The below also handles case 1.
x < 0 ? -((unsigned) x) : ((unsigned) x);
Higher Level Alternative
In cases where code is doing .... + abs(x), a well-defined alternative is to subtract the negative absolute value: .... - nabs(x). Or, as in abs(x) < 100, use nabs(x) > -100.
// This is always well defined.
int nabs(int x) {
    return x < 0 ? x : -x;
}
1 SOME_INT implies int, long, long long or intmax_t.
#include <limits.h>

static unsigned absolute(int x)
{
    if (INT_MIN == x) {
        /* Avoid tricky arithmetic overflow possibilities */
        return ((unsigned) -(INT_MIN + 1)) + 1U;
    } else if (x < 0) {
        return -x;
    } else {
        return x;
    }
}
