Check if unsigned is less than zero - C

Playing with some sources, I found code like this:

#include <stdio.h>

void foo(unsigned int i)
{
    if (i < 0)
        printf("less than zero\n");
    else
        printf("greater or equal\n");
}

int main()
{
    int bar = -2;
    foo(bar);
    return 0;
}
I think this check makes no sense, but are there cases (security, perhaps?) that would make it sensible?

An unsigned int cannot be less than 0, by definition. So, to answer your question directly: you're right in thinking that this makes no sense. It is not a meaningful security check either, unless you worry about something like a loop that accidentally decrements a signed int past 0 and then converts it to an unsigned int for use as an array index, thereby indexing memory outside the array.

i will always be >= 0 because it is declared as unsigned and thus interpreted as an unsigned integer.
So your first test will always be false.
Your call foo(bar) actually converts an int to an unsigned int. This may be what confuses you. The conversion does not change the bits of your integer value; it is just a matter of formal typing and interpretation.
See this answer for examples of signed/unsigned conversions.
Here is a simple example (the exact output depends on the number of bytes of an unsigned int on your system; for me it is 4 bytes).
Code:
printf("%u\n", (unsigned int) -2);
Output:
4294967294

Related

Wrong conversion of two-byte array to short in C

I'm trying to convert a 2-byte array to an unsigned short.
This is the code for the conversion:
short bytesToShort(char* bytesArr)
{
    short result = (short)((bytesArr[1] << 8) | bytesArr[0]);
    return result;
}
I have an input file that stores bytes, and I read its bytes in a loop (2 bytes each time) and store them in a char array N in this manner:
char N[3];
N[2] = '\0';
while (fread(N, 1, 2, inputFile) == 2)
When the (hex) value of N[0] is 0, the computation is correct; otherwise it's wrong. For example:
0x62 (N[0]=0x0, N[1]=0x62) returns 98 (as a short value), but 0x166 in hex (N[0]=0x6, N[1]=0x16) returns 5638 (as a short value).
In the first place, it's generally best to use type unsigned char for the bytes of raw binary data, because that correctly expresses the semantics of what you're working with. Type char, although it can be, and too frequently is, used as a synonym for "byte", is better reserved for data that are actually character in nature.
In the event that you are furthermore performing arithmetic on byte values, you almost surely want unsigned char instead of char, because the signedness of char is implementation-defined. It does vary among implementations, and on many common implementations char is signed.
With that said, your main problem appears simple. You said
166 in hex (N[0]=6,N[1]=16) will return 5638 (in short value).
but 0x166 packed into a two-byte little-endian array would be (N[0]=0x66,N[1]=0x1). What you wrote would correspond to 0x1606, which indeed is the same as decimal 5638.
The problem is sign extension due to using char. You should use unsigned char instead:
#include <stdio.h>

short bytesToShort(unsigned char* bytesArr)
{
    short result = (short)((bytesArr[1] << 8) | bytesArr[0]);
    return result;
}

int main()
{
    /* Cast the string literals: they have type char[], not unsigned char[].
       %hx converts the (promoted) argument back to unsigned short for printing. */
    printf("%04hx\n", bytesToShort((unsigned char *)"\x00\x11")); // expect 0x1100
    printf("%04hx\n", bytesToShort((unsigned char *)"\x55\x11")); // expect 0x1155
    printf("%04hx\n", bytesToShort((unsigned char *)"\xcc\xdd")); // expect 0xddcc
    return 0;
}
Note: the real problem in the OP's code is not the one the OP diagnosed. With plain char, the input "\xcc\xdd" returns the wrong result: both bytes are sign-extended, so the OR produces 0xffcc where it should produce 0xddcc.

For loop "Expression Result Unused" warning, but code still works. What's wrong?

Xcode gives me the warning "Expression Result Unused" for these loops, but the code still works. I don't want output from inside the loop; I just need the last number it gets to. What's wrong, and why the warning message?
#include <stdio.h>

int main(int argc, const char * argv[])
{
    // how much memory a float consumes
    printf("a float consumes %zu bytes\n\n", sizeof(float));

    // what is the smallest number a short can hold, and its largest?
    short x;
    short y;
    for (x; x > -1; x++) {
        continue;
    }
    for (y; y < 1; y--) {
        continue;
    }
    printf("\nSmallest short %d\nlargest short %d\n", x, y);

    // same question, but for unsigned short
    unsigned short i;
    for (i; i < 1; i--) {
        continue;
    }
    printf("largest unsigned short is %d\n", i);
    return 0;
}
You did not initialize x, y, or i, so they do not hold coherent values, and reading them is undefined behavior. The warning itself refers to the first clause of each for loop: an expression like x by itself computes a value and does nothing with it, so the compiler flags its result as unused.
The first fix is to initialize the variables.
short x = 0;
short y = 0;
The second problem is that the conditions are still nonsense, even if you do start from zero.
for (x; x > -1; x++) { // Keep adding one as long as the result is positive.
// But one plus any positive number is always positive.
for (y; y < 1; y--) { // Keep subtracting one as long as the result is negative.
// But any negative number minus one is always negative.
The program is based on a common misconception: that integer types wrap around from the maximum value to the minimum value when addition generates a carry into the sign bit. Although this often happens in practice, signed overflow is classified as undefined behavior and is usually considered a computational malfunction, not a feature. Wraparound may be guaranteed by a particular compiler, as documented in its manual; C++ even provides a way to inspect this, namely std::numeric_limits<T>::is_modulo. However, Clang and most other modern compilers instead assume, for purposes of static analysis, that overflow does not occur and that adding positive numbers only produces positive numbers. Leading the compiler to make a faulty assumption may cause it to produce a faulty executable. This may be a second, independent reason for the warning message.
(Unsigned types do always wrap around from the maximum value to zero, so the third loop should work, although you don't actually need a loop: (unsigned short)-1 would do the trick.)
To cleanly find the limits of short and unsigned short, use this:
#include <limits.h>
short smin = SHRT_MIN, smax = SHRT_MAX;
unsigned short usmax = USHRT_MAX;
What Potatoswatter said is correct, the immediate problem is that x, y and i are not initialised, so the compiler can either skip the loops or use whatever garbage happens to be in those registers at runtime.
However, even if they were correctly initialised, signed integer overflow is also undefined behaviour in C and C++, so the loops may function as intended (wrapping around until the sign flips), or not, or the compiler may ignore them altogether.
DO NOT rely on signed integers wrapping around. Here be dragons.

Is it bad to underflow then overflow an unsigned variable?

Kraaa.
I am a student at a programming school that requires us to write C functions of fewer than 25 lines, so basically every line counts. Sometimes I need to shorten assignments like so:
#include <stddef.h>
#include <stdio.h>

#define ARRAY_SIZE 3

int main(void)
{
    int nbr_array[ARRAY_SIZE] = { 1, 2, 3 };
    size_t i;

    i = -1;
    while (++i < ARRAY_SIZE)
        printf("nbr_array[%zu] = %i\n", i, nbr_array[i]);
    return (0);
}
The important part of this code is the size_t counter named i. In order to save a few lines of code, I would like to pre-increment it in the loop's condition. But, insofar as the C standard defines size_t as an unsigned type, what I am basically doing here is wrapping the i variable around (from 0 to a very big value when I assign -1), then wrapping it once more (from that big value back to 0 on the first increment).
My question is the following: regardless of the bad practice of having to shorten our code, is it safe to set an unsigned (size_t) variable to -1 and then pre-increment it at each iteration to traverse an array?
Thanks!
The i = -1; part of your program is fine.
Converting -1 to an unsigned integer type is defined in C, and results in a value that, if incremented, results in zero.
This said, you are not actually saving any lines of code compared with the idiomatic for (i = 0; i < ARRAY_SIZE; i++) ….
Unsigned arithmetic never "overflows/underflows" (at least in the way the standard talks about the undefined behavior of signed arithmetic overflow). All unsigned arithmetic is actually modular arithmetic, and as such is safe (i.e. it won't cause undefined behavior in and of itself).
To be precise, the C standard guarantees two things:
Any integer conversion to an unsigned type is well defined (as if the signed number were represented in two's complement)
Overflow/underflow of unsigned integers is well defined (modular arithmetic, modulo 2^n)
Since size_t is an unsigned type, you are not doing anything evil.

What is the difference between #define and declaration

#include <stdio.h>
#include <math.h>

// #define LIMIT 600851475143

int isP(long i);
void run();

// 6857
int main()
{
    //int i = 6857;
    //printf("%d\n", isP(i));
    run();
}

void run()
{
    long LIMIT = 600851475143;
    // 3, 5
    // under 1000
    long i, largest = 1, temp = 0;
    for (i = 3; i <= 775147; i += 2)
    {
        temp = ((LIMIT / i) * i);
        if (LIMIT == temp)
            if (isP(i) == 1)
                largest = i;
    }
    printf("%d\n", largest);
}

int isP(long i)
{
    long j;
    for (j = 3; j <= i / 2; j += 2)
        if (i == (i / j) * j)
            return 0;
    return 1;
}
I just ran into an interesting issue. As shown above, this piece of code is designed to find the largest prime factor of LIMIT. The program as it stands gave me an answer of 29, which is incorrect.
Yet, miraculously, when I #define the LIMIT value (instead of declaring it as a long), it gives me the correct value: 6857.
Could someone help me figure out the reason? Thanks a lot!
A long on many platforms is a 4 byte integer, and will overflow at 2,147,483,647. For example, see Visual C++'s Data Type Ranges page.
When you use a #define, the compiler is free to choose a more appropriate type which can hold your very large number. This can cause it to behave correctly, and give you the answer you expect.
In general, however, I would recommend being explicit about the data type, and choosing a data type that will represent the number correctly without requiring compiler and platform specific behavior, if possible.
I would suspect a numeric type issue here.
#define is a preprocessor directive, so it would replace LIMIT with that number in the code before running the compiler. This leaves the door open for the compiler to interpret that number how it wants, which may not be as a long.
In your case, long probably isn't big enough, so the compiler chooses something else when you use #define. For consistent behavior, you should specify a type that you know has an appropriate range and not rely on the compiler to guess correctly.
You should also turn on full warnings in your compiler; it may be able to detect this sort of problem for you.
When you enter an expression like:
(600851475143 + 1)
everything is fine, as the compiler automatically promotes both of those constants to an appropriate type (like long long in your case) large enough to perform the calculation. You can do as many expressions as you want in this way. But when you write:
long n = 600851475143;
the compiler tries to assign a long long (or whatever the constant is implicitly converted to) to a long, which results in a problem in your case. Your compiler should warn you about this, gcc for example says:
warning: overflow in implicit constant conversion [-Woverflow]
Of course, if a long is big enough to hold that value, there's no problem, since the constant will be a type either the same size as long or smaller.
It is probably because 600851475143 is larger than LONG_MAX (2147483647 on platforms where long is 32 bits).
Try replacing the type of LIMIT with long long. As the code stands, the value wraps around (try printing LIMIT as a long). With the #define, the constant keeps its own, sufficiently large type.
Your code basically reduces to these two possibilities:
long LIMIT = 600851475143;
x = LIMIT / i;
vs.
#define LIMIT 600851475143
x = LIMIT / i;
The first one is equivalent to converting the constant to long:
x = (long)600851475143 / i;
while the second one will be precompiled into:
x = 600851475143 / i;
And here lies the difference: 600851475143 is too big for your compiler's long type, so when it is converted to long it overflows and goes haywire. But when it is used directly in the division, the compiler knows that it doesn't fit in a long, automatically treats it as a long long literal, promotes i, and performs the division in long long.
Note, however, that even if the algorithm seems to work most of the time, you still have overflows elsewhere, so the code is incorrect. You should declare any variable that may hold these big values as long long.

Printing int type with %lu - C+XINU

I have some given code, and in my opinion there is something wrong with it.
I compile under XINU.
These are the relevant variables:

unsigned long ularray[];
int num;
char str[100];
There is a function that returns an int:

int func(int i)
{
    return ularray[i];
}

Now the code is like this:

num = func(i);
sprintf(str, "number = %lu\n", num);
printf(str);
The problem is that I get big numbers when printing with %lu, which is not correct.
If I change the %lu to %d, I get the correct number.
For example: with %lu I get 27654342, while with %d I get 26; the latter is correct.
The variables are given, and the declaration of the function is given; I wrote the body, but it must return int.
My questions are:
I'm not familiar with sprintf; maybe the problem is there?
I assigned an unsigned long to an int and then printed the int with %lu; is that correct?
How can I fix the problem?
Thanks in advance.
Thanks everyone for answering.
I just want to mention that I'm working under XINU. I changed the order in which the files are compiled and, what do you know, it works and shows the same numbers for %lu and %d.
I'm well aware that assigning an unsigned long to an int and then printing with %lu is incorrect coding and may lose data.
But as I said, the code is given; I couldn't change the variables or the printing command.
I had no errors or warnings, by the way.
I have no idea why changing the compilation order fixed the problem; if someone has an idea, you are more than welcome to share.
I want to thank all of you who tried to help me.
I assigned unsigned long to int and then print the int with %lu, is That correct?
No, it isn't correct at all. Think about it a bit: printf accesses the memory represented by the variables you pass in, and in your case unsigned long is represented with more bits than int, so when printf is told to print an unsigned long it reads past your actual int into other memory, which probably holds garbage; hence the random numbers. If printf's prototype mentioned an unsigned long parameter exactly, the compiler could perform an implicit conversion for you, but since printf is variadic that's not the case, so you have to use one of these solutions:
One, explicit cast your variable:
printf("%lu", (unsigned long)i);
Two, use the correct format specifier:
printf("%d", i);
Also, there are problems with assigning an unsigned long to an int: if the long contains too big a number, it won't fit into the int and gets truncated.
1) The misunderstanding here is format specifiers in general.
2) num is an int; therefore %d is correct when an int is what you want to print.
3) Ideally,
int func(int i) would be unsigned long func(size_t i),
int num would be unsigned long num,
and sprintf(str, "number = %d\n", num); would be sprintf(str, "number = %lu\n", num);
That way there would be no narrowing and no conversions; the types/values would be correctly preserved throughout execution.
And to be pedantic, printf should be printf("%s", str);
If you turn your warning levels way up, your compiler will warn you about some of these things. I have been programming for a long time, and I still leave the warning level obnoxiously high (by some people's standards).
If you have an int, use %d (or %u for unsigned int). If you have a long, use %ld (or %lu for unsigned long).
If you tell printf that you're giving it a long but only pass an int, you'll print random garbage. (Technically that would be undefined behavior.)
It doesn't matter that the int somehow "came from" a long. Once you've assigned it to something shorter, the extra bytes are lost; you only have an int left.
I assigned unsigned long to int and then print the int with %lu, is That correct?
No, and I suggest not converting to int first, or else simply using int as the array's element type; it is senseless to store a much larger representation and only use a smaller one. Either way, the sprintf results will be off until you properly match the type of the variable with the format's conversion specifier. This means that if you pass an unsigned long, use %lu; if it's an int, use %d or %i (for printf the two are identical; the difference only matters for scanf, where %i also accepts input in other bases).
How can I fix the problem?
Change the return type of func and the type of num to unsigned long:

unsigned long ularray[];
unsigned long num;
char str[100];

unsigned long func(int i)
{
    return ularray[i];
}