How to programmatically determine the maximum and minimum limits of the int type in C?

I am attempting exercise 2.1 of K&R. The exercise reads:
Write a program to determine the ranges of char, short, int, and long variables, both signed and unsigned, by printing appropriate values from standard headers and by direct computation. Harder if you compute them: determine the ranges of the various floating-point types.
Printing the values of the constants in the standard headers is easy, like this (only int shown as an example):
printf("Integral Ranges (from constants)\n");
printf("int max: %d\n", INT_MAX);
printf("int min: %d\n", INT_MIN);
printf("unsigned int max: %u\n", UINT_MAX);
However, I want to determine the limits programmatically.
I tried this code which seems like it should work but it actually goes into an infinite loop and gets stuck there:
printf("Integral Ranges (determined programmatically)\n");
int i_max = 0;
while ((i_max + 1) > i_max) {
    ++i_max;
}
printf("int max: %d\n", i_max);
Why is this getting stuck in a loop? It would seem that when an integer overflows it jumps from 2147483647 to -2147483648. The incremented value is obviously smaller than the previous value so the loop should end, but it doesn't.

Ok, I was about to write a comment but it got too long...
Are you allowed to use sizeof?
If so, then there is an easy way to find the max value for any type:
For example, I'll find the maximum value for an integer:
Definition: INT_MAX = (1 << 31) - 1 for a 32-bit integer (2^31 - 1).
The previous expression overflows if we compute it with int, so it has to be rewritten to avoid that:
INT_MAX = (1 << 31) - 1
        = ((1 << 30) * 2) - 1
        = (((1 << 30) - 1) * 2 + 2) - 1
        = (((1 << 30) - 1) * 2) + 1
And using sizeof:
INT_MAX = (((1 << (sizeof(int)*8 - 2)) - 1) * 2) + 1
You can do the same for any signed/unsigned type by just reading the rules for each type.
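For instance, a minimal sketch of that formula wrapped in a full program (using CHAR_BIT from <limits.h> instead of assuming 8-bit bytes; the two's complement assumption is only needed for the minimum):
#include <stdio.h>
#include <limits.h>

int main(void) {
    /* (((1 << (w-2)) - 1) * 2) + 1, with w = CHAR_BIT * sizeof(int); never overflows */
    int i_max = (((1 << (CHAR_BIT * (int)sizeof(int) - 2)) - 1) * 2) + 1;
    int i_min = -i_max - 1;           /* assumes two's complement */
    printf("int max: %d\n", i_max);   /* 2147483647 for a 32-bit int */
    printf("int min: %d\n", i_min);   /* -2147483648 */
    return 0;
}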

So it actually wasn't getting stuck in an infinite loop. C code is usually so fast that I assume it's broken if it doesn't complete immediately.
It did eventually return the correct answer after I let it run for about 10 seconds. It turns out that 2,147,483,647 increments take quite a few cycles to complete.
I should also note that I compiled with cc -O0 to disable optimizations, so this wasn't the problem.
A faster solution might look something like this:
int i_max = 0;
int step_size = 256;
while ((i_max + step_size) > i_max) {
    i_max += step_size;
}
while ((i_max + 1) > i_max) {
    ++i_max;
}
printf("int max: %d\n", i_max);
However, since signed overflow is undefined behavior, it is probably a terrible idea ever to try to guess this programmatically in practice. Better to use INT_MAX.

The simplest I could come up with is:
signed int max_signed_int = ~(1U << ((sizeof(int) * 8) - 1));  /* 1U avoids the UB of shifting a 1 into the sign bit */
signed int min_signed_int = (1U << ((sizeof(int) * 8) - 1));   /* out-of-range conversion: implementation-defined; INT_MIN on two's complement */
unsigned int max_unsigned_int = ~0U;
unsigned int min_unsigned_int = 0U;
On my system:
// max_signed_int = 2147483647
// min_signed_int = -2147483648
// max_unsigned_int = 4294967295
// min_unsigned_int = 0

Assuming a two's complement processor, use unsigned math:
unsigned ... smax, smin;
smax = ((unsigned ...)0 - (unsigned ...)1) / (unsigned ...) 2;
smin = ~smax;
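For example, a sketch of that template instantiated for int (the subtraction is just UINT_MAX computed without <limits.h>; converting smin back to int is implementation-defined):
unsigned int smax, smin;
smax = ((unsigned int)0 - (unsigned int)1) / (unsigned int)2;  /* UINT_MAX / 2 == INT_MAX */
smin = ~smax;                                                  /* bit pattern of INT_MIN */
printf("int max: %d\nint min: %d\n", (int)smax, (int)smin);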

As has been pointed out in other answers here, trying to overflow an integer in C is undefined behaviour, but, at least in this case, I think you can get a valid answer, even from the U.B.:
The point is that if you increment a value and compare the new value with the last one, you always get a greater value, except on an overflow (in that case you'll get a value less than or equal to the previous one, since there are no greater values to move to; that is what an overflow means). So you can try at least:
int i_old = 0, i = 0;
while (++i > i_old)
    i_old = i;
printf("MAX_INT guess: %d\n", i_old);
After this loop, you will have got the expected overflow, and i_old will store the last valid number. Of course, to go down instead, you'll have to use this snippet of code:
int i_old = 0, i = 0;
while (--i < i_old)
    i_old = i;
printf("MIN_INT guess: %d\n", i_old);
Of course, U.B. can even mean the program stops running (in that case, you'll have to add traces to get at least the last value printed).
By the way, in the ancient times of K&R, integers used to be 16 bits wide, a value easily reachable by counting up (easier than now; try overflowing a 64-bit integer from 0 up).

I would use the properties of two's complement to compute the values.
unsigned int uint_max = ~0U;
signed int int_max = uint_max >> 1;
signed int int_min1 = (-int_max - 1);
signed int int_min2 = ~int_max;
2^3 is 1000. 2^3 - 1 is 0111. 2^4 - 1 is 1111.
w is the length in bits of your data type.
uint_max is 2^w - 1, or 111...111. This effect is achieved by using ~0U.
int_max is 2^(w-1) - 1, or 0111...111. This effect can be achieved by bitshifting uint_max 1 bit to the right. Since uint_max is an unsigned value, the >> operator applies a logical shift, meaning it shifts in leading zeroes instead of extending the sign bit.
int_min is -2^(w-1), or 100...000. In two's complement, the most significant bit has a negative weight!
This is how to visualize the first expression for computing int_min1:
...
011...111   int_max          +2^(w-1) - 1
100...000   (-int_max - 1)   -2^(w-1)      ==  (-2^(w-1) + 1) - 1
100...001   -int_max         -2^(w-1) + 1  ==  -(+2^(w-1) - 1)
...
Adding 1 moves down the table, and subtracting 1 moves up. First we negate int_max in order to produce a valid int value, then we subtract 1 to get int_min. We can't just negate (int_max + 1), because int_max + 1 would exceed int_max itself, the biggest int value.
In C, the intermediate expression (int_max + 1) is a signed overflow, which is undefined behavior, so we need to compute int_min programmatically in this roundabout way (negate first, then subtract) to keep every intermediate result a valid int value.
If that's a bit (or byte) too complicated for you, you can just do ~int_max, observing that int_max is 011...111 and int_min is 100...000.
Keep in mind that the techniques I've mentioned here can be used for any bit width w of an integer data type: char, short, int, long, and also long long. Keep in mind that unsuffixed integer constants have type int (typically 32 bits), so you may have to cast the 0U to a type of the appropriate bit width (e.g. 0ULL) before bitwise NOTing it. But other than that, these techniques are based on the fundamental mathematical principles of two's complement integer representation. That said, they won't work if your computer uses a different way of representing integers, for example ones' complement or sign-and-magnitude.
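As a sketch of the width point, here are the same three lines at long long width (the suffix/cast is the part that matters; the minimum again assumes two's complement):
unsigned long long ull_max = ~0ULL;   /* 111...111 at 64 bits */
long long ll_max = ull_max >> 1;      /* 0111...111 == LLONG_MAX */
long long ll_min = ~ll_max;           /* 100...000 == LLONG_MIN */
printf("long long max: %lld\nlong long min: %lld\n", ll_max, ll_min);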

The assignment says that "printing appropriate values from standard headers" is allowed, and in the real world, that is what you would do. As your prof wrote, direct computation is harder, and why make things harder for its own sake when you're working on another interesting problem and you just want the result? Look up the constants in <limits.h>, for example, INT_MIN and INT_MAX.
Since this is homework and you want to solve it yourself, here are some hints.
The language standard technically allows any of three different representations for signed numbers: two's-complement, one's-complement and sign-and-magnitude. Sure, every computer made in the last fifty years has used two's-complement (with the partial exception of legacy code for certain Unisys mainframes), but if you really want to language-lawyer, you could compute the smallest number for each of the three possible representations and find the minimum by comparing them.
Attempting to find the answer by overflowing or underflowing a signed value does not work! This is undefined behavior! You may in theory (though not really in practice) increment an unsigned value of the same width, convert it to the corresponding signed type, and compare the result against the conversion of the previous or next unsigned value. For a 32-bit long, this might just be tolerable; it will not scale to a machine where long is 64 bits wide.
You want to use the bitwise operators, particularly ~ and <<, to calculate the largest and smallest value for every type. Note: CHAR_BIT * sizeof(x) gives you the number of bits in x, and left-shifting 0x01UL by one fewer than that, then casting to the desired type, sets the highest bit.
For floating-point values, the only portable way is to use the constants in <float.h> and <math.h>; floating-point values might or might not be able to represent positive and negative infinity, and are not constrained to use any particular format. That said, if your compiler supports the optional Annex F of the C11 standard, which specifies IEC 60559 floating-point arithmetic, then dividing a nonzero floating-point number by zero is defined as producing infinity, which does allow you to "compute" infinity and negative infinity. If so, the implementation will #define __STDC_IEC_559__ as 1.
If you detect that infinity is not supported on your implementation, for instance by checking whether INFINITY and -INFINITY are infinities, you would want to use HUGE_VAL and -HUGE_VAL instead.
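A minimal sketch of that fallback (isinf() and the INFINITY and HUGE_VAL macros are standard C99 <math.h>):
#include <math.h>
#include <stdio.h>

int main(void) {
    double f_max, f_min;
    if (isinf(INFINITY)) {     /* does this implementation really have infinities? */
        f_max = INFINITY;
        f_min = -INFINITY;
    } else {
        f_max = HUGE_VAL;      /* fall back to the "huge" values instead */
        f_min = -HUGE_VAL;
    }
    printf("floating 'max': %g, 'min': %g\n", f_max, f_min);
    return 0;
}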

#include <stdio.h>

int main(void) {
    int n = 1;
    /* Shift the set bit left until it lands in the sign bit.
       Note: this left shift overflows a signed int, which is undefined
       behavior; it relies on the common two's complement wrap-around. */
    while (n > 0) {
        n = n << 1;
    }
    int int_min = n;
    int int_max = -(n + 1);
    printf("int_min is: %d\n", int_min);
    printf("int_max is: %d\n", int_max);
    return 0;
}

unsigned long LMAX = (unsigned long)-1L;
long SLMAX = LMAX / 2;
long SLMIN = -SLMAX - 1;
If you don't have the L suffix, just use a variable or cast to signed before casting to unsigned.
For long long:
unsigned long long LLMAX = (unsigned long long)-1LL;
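Presumably the signed long long limits then follow the same pattern, i.e.:
long long SLLMAX = LLMAX / 2;
long long SLLMIN = -SLLMAX - 1;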

Related

Generating random 64-, 32-, 16-, and 8-bit integers in C

I'm hoping that somebody can give me an understanding of why the code works the way it does. I'm trying to wrap my head around things but am lost.
My professor has given us this code snippet which we have to use in order to generate random numbers in C. The snippet in question generates a 64-bit integer, and we have to adapt it to also generate 32-bit, 16-bit, and 8-bit integers. I'm completely lost on where to start; I'm not necessarily asking for a solution, just an explanation of how the original snippet works, so that I can adapt it from there.
long long rand64()
{
    int a, b;
    long long r;
    a = rand();
    b = rand();
    r = (long long)a;
    r = (r << 31) | b;
    return r;
}
Questions I have about this code are:
Why is it shifted 31 bits? I thought rand() generated a number between 0-32767 which is 16 bits, so wouldn't that be 48 bits?
Why do we say | (or) b on the second to last line?
I'm making the relatively safe assumption that, in your computer's C implementation, long long is a 64-bit data type.
The key here is that, since long long r is signed, any value with the highest bit set will be negative. Therefore, the code shifts r by 31 bits to avoid setting that bit.
The | is the bitwise OR operator; it combines the two values by setting in r all of the bits which are set in b.
EDIT:
After reading some of the comments, I realized that my answer needs correction. rand() returns a value no greater than RAND_MAX, which is typically 2^31-1. Therefore, r is a 31-bit integer. If you shifted it 32 bits to the left, you'd guarantee that bit 31 (counting from 0) of the result would always be zero.
rand() generates a random value in [0...RAND_MAX]. It is of questionable repute, but let us set that reputation aside and assume rand() is good enough and that RAND_MAX is a Mersenne number (a power of 2, minus 1).
Weakness in OP's code: if RAND_MAX == pow(2,31)-1, a common occurrence, then OP's rand64() only returns values in [0...pow(2,62)). @Nate Eldredge
Instead, loop as many times as needed.
To find how many random bits are returned with each call, we need log2(RAND_MAX + 1). Fortunately, this is easy with an awesome macro from Is there any way to compute the width of an integer type at compile-time?
#include <stdlib.h>
/* Number of bits in inttype_MAX, or in any (1<<k)-1 where 0 <= k < 2040 */
#define IMAX_BITS(m) ((m)/((m)%255+1) / 255%255*8 + 7-86/((m)%255+12))
#define RAND_MAX_BITWIDTH (IMAX_BITS(RAND_MAX))
Example: rand_ul() returns a random value in the [0...ULONG_MAX] range, whether unsigned long is 32-bit, 64-bit, etc.
unsigned long rand_ul(void) {
    unsigned long r = 0;
    for (int i = 0; i < IMAX_BITS(ULONG_MAX); i += RAND_MAX_BITWIDTH) {
        r <<= RAND_MAX_BITWIDTH;
        r |= rand();
    }
    return r;
}
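For the narrower widths the assignment asks about, one hypothetical approach (my sketch, not part of the original answer) is to generate at full width and truncate, since the low bits of the result are as uniformly distributed as rand() itself:
#include <stdint.h>

/* hypothetical helpers built on rand_ul(); each cast keeps only the low bits */
uint32_t rand_u32(void) { return (uint32_t)rand_ul(); }
uint16_t rand_u16(void) { return (uint16_t)rand_ul(); }
uint8_t  rand_u8(void)  { return (uint8_t)rand_ul(); }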

Unary negation of unsigned integer 4

If x is an unsigned int type is there a difference in these statements:
return (x & 7);
and
return (-x & 7);
I understand negating an unsigned value gives a value of max_int - value. But is there a difference in the return value (i.e. true/false) among the above two statements under any specific boundary conditions OR are they both same functionally?
Test code:
#include <stdio.h>
static unsigned neg7(unsigned x) { return -x & 7; }
static unsigned pos7(unsigned x) { return +x & 7; }
int main(void)
{
    for (unsigned i = 0; i < 8; i++)
        printf("%u: pos %u; neg %u\n", i, pos7(i), neg7(i));
    return 0;
}
Test results:
0: pos 0; neg 0
1: pos 1; neg 7
2: pos 2; neg 6
3: pos 3; neg 5
4: pos 4; neg 4
5: pos 5; neg 3
6: pos 6; neg 2
7: pos 7; neg 1
For the specific case of 4 (and also 0), there isn't a difference; for other values, there is a difference. You can extend the range of the input, but the outputs will produce the same pattern.
If you ask specifically for true/false (i.e. is zero / is not zero) and assume two's complement, then there is indeed no difference. (You do, however, return not just a simple truth value but allow different bit patterns for true. As long as the caller does not distinguish between them, that is fine.)
Consider how a two's complement negation is formed: invert the bits then increment. Since you take only the least significant bits, there will be no carry in for the increment. This is a necessity, so you can't do this with anything but a range of least significant bits.
Let's look at the two cases:
First, if the three low bits are zero (for a false equivalent). Inverting gives all ones, incrementing turns them to zero again. The fourth and more significant bits might be different, but they don't influence the least significant bits and they don't influence the result since they are masked out. So this stays.
Second, if the three low bits are not all zero (for a true equivalent). The only way this can change into false is when the increment operation leaves them at zero, which can only happen if they were all ones before, which in turn could only happen if they were all zeros before the inversion. That can't be, since that is the first case. Again, the more significant bits don't influence the three low bits and they are masked out. So the result does not change.
But again, this only works when the caller considers only the truth value (all bits zero / not all bits zero) and when the mask allows a range of bits starting from the least significant without a gap.
Firstly, negating an unsigned int value produces UINT_MAX - original_value + 1 (for example, 0 remains 0 under negation). An alternative way to describe unsigned negation is: invert all the bits, then increment.
It is not clear why you'd even ask this question, since basically the very first example that comes to mind (the unsigned int value 1) already produces different results in your expressions: 1u & 7 is 1, while -1u & 7 is 7. Did you mean something else, by any chance?

How to sign extend a 9-bit value when converting from an 8-bit value?

I'm implementing a relative branching function in my simple VM.
Basically, I'm given an 8-bit relative value. I then shift this left by 1 bit to make it a 9-bit value. So, for instance, if you were to say "branch +127", this would really mean 127 instructions, and thus would add 254 to the IP.
My current code looks like this:
uint8_t argument = 0xFF; //-1 or whatever
int16_t difference = argument << 1;
*ip += difference; //ip is a uint16_t
I don't believe difference will ever be detected as less than 0 with this, however. I'm rusty on how signed-to-unsigned conversion works. Beyond that, I'm not sure difference would be correctly subtracted from the IP in the case argument is, say, -1 or -2.
Basically, I'm wanting something that would satisfy these "tests"
//case 1
argument = -5
difference -> -10
ip = 20 -> 10 //ip starts at 20, but becomes 10 after applying difference
//case 2
argument = 127 (must fit in a byte)
difference -> 254
ip = 20 -> 274
Hopefully that makes it a bit more clear.
Anyway, how would I do this cheaply? I saw one "solution" to a similar problem, but it involved division. I'm working with slow embedded processors (assumed to be without efficient ways to multiply and divide), so that's a pretty big thing I'd like to avoid.
To clarify: you worry that left shifting a negative 8 bit number will make it appear like a positive nine bit number? Just pad the top 9 bits with the sign bit of the initial number before left shift:
diff = 0xFF;
int16_t diff16 = (diff + (diff & 0x80) * 0x01FE) << 1;
Now your diff16 is signed 2*diff
As was pointed out by Richard J Ross III, you can avoid the multiplication (if that's expensive on your platform) with a conditional branch:
int16_t diff16 = (diff + ((diff & 0x80) ? 0xFF00 : 0)) << 1;
If you are worried about things staying in range and such ("undefined behavior"), you can do
int16_t diff16 = diff;
diff16 = (diff16 | ((diff16 & 0x80) ? 0x7F00 : 0)) << 1;
At no point does this produce numbers that are going out of range.
The cleanest solution, though, seems to be "cast and shift":
diff16 = (signed char)diff; // recognizes and preserves the sign of diff
diff16 = (short int)(((unsigned short)diff16) << 1); // left shift on unsigned, then back to signed
This produces the expected result, because the compiler automatically takes care of the sign bit (so no need for the mask) in the first line; and in the second line, it does a left shift on an unsigned int (for which overflow is well defined per the standard); the final cast back to short int ensures that the number is correctly interpreted as negative. I believe that in this form the construct is never "undefined".
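A quick sketch exercising the cast-and-shift version against the two test cases from the question (the variable names are mine; uint16_t/int16_t come from <stdint.h>):
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint16_t ip = 20;
    uint8_t argument = (uint8_t)-5;   /* case 1: bit pattern 0xFB */
    int16_t difference = (int16_t)((uint16_t)(int8_t)argument << 1);
    ip += difference;                  /* unsigned wrap-around implements the subtraction */
    printf("case 1: ip = %u (expect 10)\n", ip);

    ip = 20;
    argument = 127;                    /* case 2 */
    difference = (int16_t)((uint16_t)(int8_t)argument << 1);
    ip += difference;
    printf("case 2: ip = %u (expect 274)\n", ip);
    return 0;
}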
All of my quotes come from the C standard, section 6.3.1.3. Unsigned to signed is well defined when the value is within range of the signed type:
1 When a value with integer type is converted to another integer type
other than _Bool, if the value can be represented by the new type, it
is unchanged.
Signed to unsigned is well defined:
2 Otherwise, if the new type is unsigned, the value is converted by
repeatedly adding or subtracting one more than the maximum value that
can be represented in the new type until the value is in the range of
the new type.
Unsigned to signed, when the value lies out of range isn't too well defined:
3 Otherwise, the new type is signed and the value cannot be
represented in it; either the result is implementation-defined or an
implementation-defined signal is raised.
Unfortunately, your question lies in the realm of point 3. C doesn't guarantee any implicit mechanism to convert out-of-range values, so you'll need to explicitly provide one. The first step is to decide which representation you intend to use: Ones' complement, two's complement or sign and magnitude
The representation you use will affect the translation algorithm you use. In the example below, I'll use two's complement: if the sign bit is 1 and the value bits are all 0, this corresponds to your lowest value. Your lowest value is another choice you must make: in the case of two's complement, it'd make sense to use either INT16_MIN (-32768) or INT8_MIN (-128). In the case of the other two, it'd make sense to use INT16_MIN - 1 or INT8_MIN - 1, due to the presence of negative zeros, which should probably be translated to be indistinguishable from regular zeros. In this example, I'll use INT8_MIN, since it makes sense that (uint8_t)-1 should translate to -1 as an int16_t.
Separate the sign bit from the value bits. The value should be the absolute value, except in the case of a two's complement minimum value, when sign will be 1 and the value will be 0. Of course, the sign bit can be wherever you like it to be, though it's conventional for it to rest at the far left hand side. Hence, shifting right 7 places obtains the conventional "sign" bit:
uint8_t sign = input >> 7;
uint8_t value = input & (UINT8_MAX >> 1);
int16_t result;
If the sign bit is 1, we'll call this a negative number and add to INT8_MIN to construct the sign so we don't end up in the same conundrum we started with, or worse: undefined behaviour (which is the fate of one of the other answers).
if (sign == 1) {
    result = INT8_MIN + value;
}
else {
    result = value;
}
This can be shortened to:
int16_t result = (input >> 7) ? INT8_MIN + (input & (UINT8_MAX >> 1)) : input;
... or, better yet:
int16_t result = input <= INT8_MAX ? input
               : INT8_MIN + (int8_t)(input % (uint8_t)INT8_MIN);
The sign test now involves checking if it's in the positive range. If it is, the value remains unchanged. Otherwise, we use addition and modulo to produce the correct negative value. This is fairly consistent with the C standard's language above. It works well for two's complement, because int16_t and int8_t are guaranteed to use a two's complement representation internally. However, types like int aren't required to use a two's complement representation internally. When converting unsigned int to int for example, there needs to be another check, so that we're treating values less than or equal to INT_MAX as positive, and values greater than or equal to (unsigned int) INT_MIN as negative. Any other values need to be handled as errors; In this case I treat them as zeros.
/* Generate some random input */
srand(time(NULL));
unsigned int input = rand();
for (unsigned int x = UINT_MAX / ((unsigned int) RAND_MAX + 1); x > 1; x--) {
    input *= (unsigned int) RAND_MAX + 1;
    input += rand();
}
int result = /* Handle positives: */ input <= INT_MAX ? input
           : /* Handle negatives: */ input >= (unsigned int) INT_MIN ? INT_MIN + (int)(input % (unsigned int) INT_MIN)
           : /* Handle errors:   */ 0;
If the offset is in the 2's complement representation, then
convert this
uint8_t argument = 0xFF; //-1
int16_t difference = argument << 1;
*ip += difference;
into this:
uint8_t argument = 0xFF; //-1
int8_t signed_argument;
signed_argument = argument; // this relies on implementation-defined
// conversion of unsigned to signed, usually it's
// just a bit-wise copy on 2's complement systems
// OR
// memcpy(&signed_argument, &argument, sizeof argument);
*ip += signed_argument + signed_argument;

Why is my bitwise division function producing a segmentation fault?

My code is below, and it works for most inputs, but I've noticed that for very large numbers(2147483647 divided by 2 for a specific example), I get a segmentation fault and the program stops working. Note that the badd() and bsub() functions simply add or subtract integers respectively.
unsigned int bdiv(unsigned int dividend, unsigned int divisor){
    int quotient = 1;
    if (divisor == dividend)
    {
        return 1;
    }
    else if (dividend < divisor)
    {
        return -1; // this represents dividing by zero
    }
    quotient = badd(quotient, bdiv(bsub(dividend, divisor), divisor));
    return quotient;
}
I'm also having a bit of trouble with my bmult() function. It works for some values, but the program fails for values such as -8192 times 3. This function is also listed. Thanks in advance for any help. I really appreciate it!
int bmult(int x, int y){
    int total = 0;
    /*for (i = 31; i >= 0; i--)
    {
        total = total << 1;
        if(y&1 == 1)
            total = badd(total, x);
    }
    return total;*/
    while (x != 0)
    {
        if ((x & 1) != 0)
        {
            total = badd(total, y);
        }
        y <<= 1;
        x >>= 1;
    }
    return total;
}
The problem with your bdiv is most likely a result of recursion depth. In the example you gave, you will be putting about 1073741824 frames onto the stack, basically using up your allotted memory.
In fact, there is no real reason this function needs to be recursive. It could quite easily be converted to an iterative solution, alleviating the stack issue.
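A minimal iterative sketch of the same algorithm (still repeated subtraction, with badd() and bsub() assumed to behave as in the question; note it returns 0 rather than -1 for dividend < divisor, and a zero divisor is still not handled):
unsigned int bdiv_iter(unsigned int dividend, unsigned int divisor) {
    unsigned int quotient = 0;
    while (dividend >= divisor) {            /* loop instead of recursing */
        dividend = bsub(dividend, divisor);  /* running remainder */
        quotient = badd(quotient, 1);
    }
    return quotient;
}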
In the multiplication, this line is going to overflow and truncate y, and so badd() will be getting wrong inputs:
y<<=1;
This line:
x>>=1;
is not going to work well for negative x. Most compilers will do a so-called arithmetic shift here, which is like a logical shift except that instead of shifting 0 into the most significant bit, the most significant (sign) bit keeps its value. So, shifting any negative value right will eventually give you -1, and -1 shifted right remains -1, resulting in an infinite loop in your multiplication.
You should not be using the algorithm for multiplication of unsigned integers to multiply signed integers. It's unlikely to work well (if at all) if it uses signed types in its core.
If you want to multiply signed integers, you can first implement multiplication for unsigned ones, using unsigned types. And then you can actually use it for signed multiplication. This will work on virtually all systems because they use 2's complement representation of signed integers.
Examples (assuming 16-bit 2's complement integers):
-1 * +1 -> 0xFFFF * 1 = 0xFFFF -> convert back to signed -> -1
-1 * -1 -> 0xFFFF * 0xFFFF = 0xFFFE0001 -> truncate to 16 bits & convert to signed -> 1
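A sketch of that suggestion (the function names are mine; converting the unsigned result back to a signed type is implementation-defined, but on 2's complement systems it preserves the bit pattern):
#include <stdint.h>

/* shift-and-add multiply using only unsigned arithmetic: wrap-around and
   right shifts are both well defined for unsigned operands */
static uint32_t umult(uint32_t x, uint32_t y) {
    uint32_t total = 0;
    while (x != 0) {
        if (x & 1)
            total += y;   /* or badd(total, y) */
        y <<= 1;
        x >>= 1;          /* logical shift: terminates even for "negative" bit patterns */
    }
    return total;
}

static int32_t smult(int32_t x, int32_t y) {
    /* conversion to unsigned wraps mod 2^32; the product is correct mod 2^32 */
    return (int32_t)umult((uint32_t)x, (uint32_t)y);
}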
In the division the following two lines
else if (dividend < divisor)
{ return -1; }// this represents dividing by zero
Are plain wrong. Think, how much is 1/2? It's 0, not -1 or (unsigned int)-1.
Further, how much is UINT_MAX/1? It's UINT_MAX. So, when your division function returns UINT_MAX or (unsigned int)-1 you won't be able to tell the difference, because the two values are the same. You really should use a different mechanism to notify the caller of the overflow.
Oh, and of course, this line:
quotient = badd(quotient, bdiv(bsub(dividend, divisor), divisor));
is going to cause a stack overflow when the quotient is expected to be big. Don't do this recursively. At the very least, use a loop instead.

How can I check if a signed integer is positive?

Using bitwise operators and I suppose addition and subtraction, how can I check if a signed integer is positive (specifically, not negative and not zero)? I'm sure the answer to this is very simple, but it's just not coming to me.
If you really want an "is strictly positive" predicate for int n without using conditionals (assuming 2's complement):
-n will have the sign (top) bit set if n was strictly positive, and clear in all other cases except n == INT_MIN;
~n will have the sign bit set if n was strictly positive, or 0, and clear in all other cases including n == INT_MIN;
...so -n & ~n will have the sign bit set if n was strictly positive, and clear in all other cases.
Apply an unsigned shift to turn this into a 0 / 1 answer:
int strictly_positive = (unsigned)(-n & ~n) >> ((sizeof(int) * CHAR_BIT) - 1);
EDIT: as caf points out in the comments, -n causes an overflow when n == INT_MIN (still assuming 2's complement). The C standard allows the program to fail in this case (for example, you can enable traps for signed overflow using GCC with the -ftrapv option). Casting n to unsigned fixes the problem (unsigned arithmetic does not cause overflows). So an improvement would be:
unsigned u = (unsigned)n;
int strictly_positive = (-u & ~u) >> ((sizeof(int) * CHAR_BIT) - 1);
Check the most significant bit: 0 means non-negative, 1 means negative.
If you can't use the obvious comparison operators, then you have to work harder:
int i = anyValue;
if (i && !(i & (1U << (sizeof(int) * CHAR_BIT - 1))))
/* I'm almost positive it is positive */
The first term checks that the value is not zero; the second checks that the value does not have the leading bit set. That should work for 2's-complement, 1's-complement or sign-magnitude integers.
Consider how the signedness is represented. Often it's done with two's complement or with a simple sign bit; I think both of these could be checked with a simple bitwise AND.
Check that it is not 0 and that the most significant bit is 0, something like:
int positive(int x) {
    return x && !(x & 0x80000000);  /* note the !; 0x80000000 assumes a 32-bit int */
}
