comparison between signed and unsigned integer expressions [-Wsign-compare] warning - c

for (i = 0; i < sizeof(r)/sizeof(r[0]); ++i) {
    r[i] = 0;
}
This is the for loop I'm having trouble with. How can I rewrite it so that I don't get the warning:
comparison between signed and unsigned integer expressions [-Wsign-compare]

The sizeof operator yields an unsigned integer of type size_t, so use an index of the same type.
size_t i;
for (i = 0; i < sizeof(r)/sizeof(r[0]); ++i) {
    r[i] = 0;
}
I recommend not using int size = sizeof(r)/sizeof(r[0]);. The range of size_t may greatly exceed the positive range of int, so the assignment could lose significant bits.
size_t is the best type for indexing arrays. Remember, though, that since it is an unsigned integer type, it cannot represent negative indexes.
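For example, here is a minimal sketch of that advice (the array r and its size 16 are just placeholders): compute the element count once into a size_t so the whole comparison stays in unsigned arithmetic.
double r[16];                          /* hypothetical array */
size_t n = sizeof r / sizeof r[0];     /* element count computed once, as size_t */
for (size_t i = 0; i < n; ++i) {       /* no signed/unsigned mixing (C99 loop declaration) */
    r[i] = 0;
}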

In your code:
for (i = 0; i < sizeof(r)/sizeof(r[0]); ++i) {
    r[i] = 0;
}
I think i is declared as an int; try unsigned int i, like this:
for (unsigned int i = 0; i < sizeof(r)/sizeof(r[0]); ++i) {
    r[i] = 0;
}
Run your code again and the warning should be gone.

False min and max assignments in function

I have a function which is supposed to find the min and max in an array using a struct.
But somehow the function assigns wrong values to the min and max variables. Could someone please explain where my mistake is? Thank you very much. P.S. In my assignment the function doesn't need to take the first element of the array.
min_max_t min_max(unsigned int *array, int size)
{
    min_max_t flag;
    flag.min = array[1];
    flag.max = array[1];
    printf("Flag.min: %d | ", flag.min);
    printf("Flag.max: %d\n", flag.max);
    for (int i = 1; i < size; i++)
    {
        printf("i = %d - [A:%d - Min:%d - Max:%d]\n", i, array[i], flag.min, flag.max);
        if (array[i] > flag.max)
        {
            flag.max = array[i];
        }
        else if (array[i] < flag.min)
        {
            flag.min = array[i];
        }
        printf("i = %d - [A:%d - Min:%d - Max:%d]\n\n", i, array[i], flag.min, flag.max);
    }
    return flag;
}
Screenshot of function process
There is nothing wrong with the logic for finding min and max.
The problem with your code is that you print unsigned int using %d. When printing unsigned int values use %u.
Another issue that you may want to handle is illegal values of the function parameter size. Your function requires that size is at least 2; to avoid undefined behavior you may want to check for that.
At the start of the function you could, for instance, add
assert(size >= 2);
or
if (size < 2)
{
    // return some suitable value
}
That said, you can also just document that the function requires size to be at least 2. In C it's not uncommon to set such contract requirements for functions; several standard library functions have them.
BTW: if you add a check for size you should probably also check that array is not NULL (see the sketch below).
BTW: your screenshot indicates that you pass an array of int to the function. If that's true then you have a bug in the caller code; don't pass an array of int to a function expecting an array of unsigned int.
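Putting those points together, here is a minimal sketch of the same function with the guards and %u applied (the struct layout is an assumption based on how the question uses it, and the assert-based checks are just one way to express the contract):
#include <assert.h>
#include <stdio.h>

typedef struct {                     /* assumed layout, matching the question's usage */
    unsigned int min;
    unsigned int max;
} min_max_t;

min_max_t min_max(unsigned int *array, int size)
{
    assert(array != NULL);           /* catch a NULL pointer during development */
    assert(size >= 2);               /* contract: at least two elements */

    min_max_t flag;
    flag.min = array[1];             /* the assignment says to skip element 0 */
    flag.max = array[1];
    for (int i = 1; i < size; i++)
    {
        if (array[i] > flag.max)
            flag.max = array[i];
        else if (array[i] < flag.min)
            flag.min = array[i];
    }
    printf("Min: %u | Max: %u\n", flag.min, flag.max);   /* %u for unsigned int */
    return flag;
}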
For starters the function should be declared like
min_max_t min_max( const unsigned int *array, size_t size );
and the structure min_max_t should contain two data members that store the indices of the minimum and maximum elements. For example:
typedef struct min_max_t
{
    size_t min;
    size_t max;
} min_max_t;
Otherwise the function can invoke undefined behavior when, for example, the user passes 0 as the second argument.
Array indices start from 0, so your function skips the first element of the passed array.
As the array has elements of type unsigned int, an expression like -1 is implicitly converted to the maximum value of unsigned int. So you need to decide whether you really want to deal with unsigned integer arrays or with signed integer arrays.
Using the conversion specifier %d instead of %u in a call of printf to output an object of type unsigned int can invoke undefined behavior if the value does not fit in an int.
So your function could look the following way:
typedef struct min_max_t
{
    size_t min;
    size_t max;
} min_max_t;

min_max_t min_max( const unsigned int *array, size_t size )
{
    min_max_t flag = { .min = 0, .max = 0 };

    printf( "Flag.min: %zu | ", flag.min );
    printf( "Flag.max: %zu\n", flag.max );

    for ( size_t i = 1; i < size; i++ )
    {
        printf( "i = %zu - [A:%u - Min:%u - Max:%u]\n", i, array[i], array[flag.min], array[flag.max] );
        if ( array[flag.max] < array[i] )
        {
            flag.max = i;
        }
        else if ( array[i] < array[flag.min] )
        {
            flag.min = i;
        }
        printf( "i = %zu - [A:%u - Min:%u - Max:%u]\n\n", i, array[i], array[flag.min], array[flag.max] );
    }

    return flag;
}
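A minimal usage sketch for this index-returning version, assuming the typedef and function above are in scope (the sample array is just an illustration):
#include <stdio.h>

int main( void )
{
    unsigned int data[] = { 7, 3, 9, 1, 4 };
    size_t n = sizeof data / sizeof data[0];

    min_max_t result = min_max( data, n );

    /* the struct holds indices, so index back into the array for the values */
    printf( "min = data[%zu] = %u\n", result.min, data[result.min] );
    printf( "max = data[%zu] = %u\n", result.max, data[result.max] );

    return 0;
}
With this sample input the minimum is reported at index 3 (value 1) and the maximum at index 2 (value 9).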

Conversion of string constant to numeric value using C

I have written a C program which uses two different algorithms to convert a string constant representing a numeric value to its integer value. For some reason, the first function, atoi(), doesn't execute properly on large values, while the second, atoi_imp(), works fine. Is this an optimization issue or some other error? The problem is that the first function makes the program terminate with an error.
#include <stdio.h>
#include <string.h>

unsigned long long int atoi(const char[]);
unsigned long long int atoi_imp(const char[]);

int main(void) {
    printf("%llu\n", atoi("9417820179"));
    printf("%llu\n", atoi_imp("9417820179"));
    return 0;
}

unsigned long long int atoi(const char str[]) {
    unsigned long long int i, j, power, num = 0;
    for (i = strlen(str) - 1; i >= 0; --i) {
        power = 1;
        for (j = 0; j < strlen(str) - i - 1; ++j) {
            power *= 10;
        }
        num += (str[i] - '0') * power;
    }
    return num;
}

unsigned long long int atoi_imp(const char str[]) {
    unsigned long long int i, num = 0;
    for (i = 0; str[i] >= '0' && str[i] <= '9'; ++i) {
        num = num * 10 + (str[i] - '0');
    }
    return num;
}
atoi is part of the C standard library, with the signature int atoi(const char *);.
You are declaring that a function with that name exists, but giving it a different return type. Note that in C, the function name is the only thing that matters for linking, and the toolchain can only trust what you tell it in the source code. If you lie to the compiler, like here, all bets are off.
You should select a different name for your own implementation to avoid issues.
As researched by #pmg, the C standard (C99 7.1.3) says that using names from the C standard library for your own global symbols (functions or global variables) is explicitly undefined behavior. Beware of nasal demons!
OK, there is at least one problem with your function atoi: you are looping down on an unsigned value and checking whether it is greater than or equal to zero, which is always true, so decrementing past zero wraps around (underflows).
The easiest fix is index shifting, i.e.:
unsigned long long int my_atoi(const char str[]) {
    unsigned long long int i, j, power, num = 0;
    for (i = strlen(str); i != 0; --i) {
        power = 1;
        for (j = 0; j < strlen(str) - i; ++j) {
            power *= 10;
        }
        num += (str[i-1] - '0') * power;
    }
    return num;
}
Too late, but it may help. I did it for base 10; if you change the base you need to take care of how the digit value is computed in *p - '0'.
I would use Horner's rule to compute the value.
#include <stdio.h>

int main(void)
{
    char *a = "5363", *p = a;
    int unsigned base = 10;
    long unsigned x = 0;

    while (*p) {
        x *= base;
        x += (*p - '0');
        p++;
    }
    printf("%lu\n", x);
    return 0;
}
Your function has an infinite loop: as i is unsigned, i >= 0 is always true.
It can be improved in different ways:
you should compute the length of str just once. strlen() is not cheap, it must scan the string until it finds the null terminator. The compiler is not always capable of optimizing away redundant calls for the same argument.
power could be computed incrementally, avoiding the need for a nested loop.
you should not use the name atoi as it is a standard function in the C library. Unless you implement its specification exactly and correctly, you should use a different name.
Here is a corrected and improved version:
unsigned long long int atoi_power(const char str[]) {
    size_t i, len = strlen(str);
    unsigned long long int power = 1, num = 0;
    for (i = len; i-- > 0; ) {
        num += (str[i] - '0') * power;
        power *= 10;
    }
    return num;
}
Modified this way, the function should have performance similar to the atoi_imp version. Note however that they do not implement the same semantics: atoi_power must be given a string containing only digits, whereas atoi_imp tolerates trailing characters.
As a matter of fact, neither atoi_imp nor atoi_power implements the specification of atoi extended to handle larger unsigned integers:
atoi skips any leading white-space characters,
atoi accepts an optional sign, either '+' or '-',
atoi consumes all following decimal digits; the behavior on overflow is undefined,
atoi ignores any trailing characters that are not decimal digits.
Given these semantics, the natural implementation of atoi is that of atoi_imp with extra tests (a sketch follows). Note that strtoull(), which you could use to implement your function, also handles white space and an optional sign, although the conversion of negative values may give surprising results.
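Here is a minimal sketch of such an extended version (the name atou_ext is just a placeholder, and overflow is deliberately left unchecked, as in the question's own code):
#include <ctype.h>

/* sketch: atoi-like semantics, but returning unsigned long long */
unsigned long long int atou_ext(const char *str) {
    unsigned long long int num = 0;
    int negative = 0;

    while (isspace((unsigned char)*str))     /* skip leading white space */
        str++;
    if (*str == '+' || *str == '-') {        /* optional sign */
        negative = (*str == '-');
        str++;
    }
    while (*str >= '0' && *str <= '9')       /* stop at the first non-digit */
        num = num * 10 + (*str++ - '0');

    /* negating an unsigned value wraps modulo ULLONG_MAX + 1 */
    return negative ? 0 - num : num;
}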

Comparing unsigned and signed int

I guess this is one of the classical questions.
As far as I know, comparing unsigned and signed int is performed using unsigned arithmetic, which means that a -1 here becomes the maximum of a 32-bit unsigned int.
The code can be fixed either by declaring length to be an int, or by changing the test of the for loop to i < length.
Declaring length to be an int is easy to understand, but why changing the loop to i < length works is not.
If we have the situation 5 < -1, which, performed using unsigned arithmetic, yields 5 < 4294967295 on my computer, how can this be a solution? It seems like it will access undefined elements.
Code
float sum_elements(float a[], unsigned length)
{
    int i;
    float result = 0;
    for (i = 0; i <= length-1; i++)
        result += a[i];
    return result;
}
Consider the condition:
i <= length-1
As you mentioned, if length is zero then you end up in a situation like 5 < 4294967295.
Changing the condition to i < length prevents this.
Changing the type of the variable i to unsigned also makes sense because (a) it is an array index and (b) you are comparing it with an unsigned.
So I would prefer this code:
float sum_elements(float a[], unsigned length)
{
    unsigned i = 0;
    //float result = 0.0; // Refer comment section.
    double result = 0.0;
    for (i = 0; i < length; i++)
        result += (double)a[i];
    return result;
}
Option #1:
for (i = 0; i <= (int)length-1; i++)
Option #2:
for (i = 0; i+1 <= length; i++)
Option #3:
for (i = 0; i < length; i++)
It's your compiler's job: when it builds its parser and lexer, it keeps a table of your variables. If it sees something like:
float a = b + 60
the 60 will be converted to 60.0 by the compiler. I think the same thing happens here:
(unsigned int)length - (int)1
becomes:
(unsigned int)length - (unsigned int)1;
If you want to be warned about such mixed comparisons, you should use the flag -Wextra.
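For reference, here is a minimal sketch that reproduces the diagnostic (the function, file name and compile command are just an example; GCC enables -Wsign-compare for C via -Wextra):
/* signcmp.c: hypothetical file name; compile with: gcc -c -Wextra signcmp.c */
int all_zero(const int a[], unsigned length)
{
    for (int i = 0; i < length; i++)    /* int compared with unsigned: -Wsign-compare fires here */
        if (a[i] != 0)
            return 0;
    return 1;
}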
A pedantic <= compare of an int against an unsigned would test for negative-ness first.
for (i = 0; i < 0 || ((unsigned) i) <= length-1; i++)
Removing the -1 helps to avoid overflow.
for (i = 0; i < 0 || ((unsigned) i) < length; i++)
A good compiler will likely optimize the code so 2 compares are not actually in the executable.
If -Wsign-conversion or its equivalent compiler option is not used, the cast can be dropped for cleaner code (per #R..).
for (i = 0; i < 0 || i < length; i++)
As commented by #chqrlie, the compare may perform well, but subsequent operations on i may be a problem; in particular, when i == INT_MAX, the i++ is UB.
It is better to use size_t (an unsigned type) for array size computation and indexing:
float sum_elements(float a[], size_t length) {
    float result = 0;
    size_t i;
    for (i = 0; i < length; i++)
        result += a[i];
    return result;
}
Your code will not perform as expected in 2 cases:
if length == 0, then length - 1, computed using unsigned arithmetic, is a very large number, and the comparison i <= length - 1 will always be true because it is also performed using unsigned arithmetic (demonstrated in the sketch after this list).
if length is larger than the maximum int value, i can never reach such a value; although the comparison, performed using unsigned arithmetic, works as expected, the indexing a[i] becomes incorrect on 64-bit systems once i overflows, because the negative index points outside the array.
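A tiny sketch of the first case (assuming a 32-bit unsigned int), just to make the wraparound visible:
#include <stdio.h>

int main(void)
{
    unsigned length = 0;
    /* 0u - 1 wraps around to UINT_MAX; prints 4294967295 with a 32-bit unsigned int */
    printf("%u\n", length - 1);
    return 0;
}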
The compiler correctly diagnoses a real problem. Using a signed type for i and comparing that to an unsigned length expression can lead to unexpected behavior. Correct the problem this way:
float sum_elements(float a[], unsigned length) {
    double result = 0.0;
    for (unsigned i = 0; i < length; i++) {
        result += a[i];
    }
    return result;
}
Notes:
the types for length and i really should be size_t as this may be a larger type than unsigned.
the sum should be computed using double arithmetic, to achieve better precision than float. Precision will be better, but still limited: summing the array elements in a different order can produce a different result (see the sketch below).
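A small sketch illustrating the precision point (the value and count are arbitrary): accumulating many small floats into a float accumulator drifts noticeably, while a double accumulator stays much closer to the exact sum.
#include <stdio.h>

int main(void)
{
    float  f_sum = 0.0f;
    double d_sum = 0.0;

    /* add 0.1f ten million times; the exact sum of these values is about 1,000,000 */
    for (int i = 0; i < 10000000; i++) {
        f_sum += 0.1f;
        d_sum += 0.1f;
    }
    /* the float accumulator is visibly off, the double one is much closer */
    printf("float  accumulator: %f\n", f_sum);
    printf("double accumulator: %f\n", d_sum);
    return 0;
}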
Lose the i variable to save a little stack space and make the function faster:
float sum_elements(float a[], unsigned length)
{
    float result = 0;
    while (length--)
        result += *a++;
    return result;
}

Two's complement and loss of information in C

I want to take the two's complement of a float value.
unsigned long Temperature ;
Temperature = (~(unsigned long)(564.48))+1;
But the problem is that the cast loses information: 564 instead of 564.48.
Can I take the two's complement without losing information?
That is a very weird thing to do; floating-point numbers are not stored as two's complement, so it doesn't make a lot of sense.
Anyway, you can perhaps use the good old union trick:
union {
    float real;
    unsigned long integer;
} tmp = { 564.48 };

tmp.integer = ~tmp.integer + 1;

printf("I got %f\n", tmp.real);
When I tried it (on ideone) it printed:
I got -0.007412
Note that this relies on unspecified behavior, so it might break if your compiler does not implement the access in the most straightforward manner. This is distinct from undefined behavior (which would make the code invalid), but it's still not optimal. Someone did tell me that newer standards make it clearer, but I've not found an exact reference, so... consider yourself warned (see the memcpy sketch below for an alternative).
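An alternative that avoids the union is memcpy-based type punning. A minimal sketch, assuming a 4-byte float and using the fixed-width uint32_t type: copy the float's bytes into an integer, flip and add one, then copy them back.
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    float f = 564.48f;
    uint32_t bits;                     /* assumes sizeof(float) == sizeof(uint32_t) */

    memcpy(&bits, &f, sizeof bits);    /* copy the float's bytes into an integer */
    bits = ~bits + 1u;                 /* two's complement of the bit pattern */
    memcpy(&f, &bits, sizeof f);       /* copy the modified bytes back */

    printf("Reinterpreted result: %f\n", f);
    return 0;
}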
You can't apply ~ to a float (the operand must be an integer type):
#include <stdio.h>

void print_binary(size_t const size, void const * const ptr)
{
    unsigned char *b = (unsigned char *) ptr;
    unsigned char byte;
    int i, j;

    for (i = size - 1; i >= 0; i--) {
        for (j = 7; j >= 0; j--) {
            byte = b[i] & (1 << j);
            byte >>= j;
            printf("%u", byte);
        }
    }
    printf("\n");
}

int main(void)
{
    float f = 564.48f;
    char *p = (char *)&f;
    size_t i;

    print_binary(sizeof(f), &f);
    for (i = 0; i < sizeof(float); i++) {
        p[i] = ~p[i];
    }
    print_binary(sizeof(f), &f);
    f += 1.f;
    return 0;
}
Output:
01000100000011010001111010111000
10111011111100101110000101000111
Of course print_binary is only there to test the result, so you can remove it, and (as pointed out by #barakmanos) print_binary assumes little endian; the rest of the code is not affected by endianness:
#include <stdio.h>

int main(void)
{
    float f = 564.48f;
    char *p = (char *)&f;
    size_t i;

    for (i = 0; i < sizeof(float); i++) {
        p[i] = ~p[i];
    }
    f += 1.f;
    return 0;
}
Casting a floating-point value to an integer value changes the "bit contents" of that value.
In order to perform two's complement on the "bit contents" of a floating-point value:
float f = 564.48f;
unsigned long Temperature = ~*(unsigned long*)&f+1;
Make sure that sizeof(long) == sizeof(float), or use double instead of float.
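One way to enforce that size assumption at compile time is a static assertion. A minimal sketch using C11 static_assert (pre-C11 compilers need a different trick); on typical 64-bit Linux, where unsigned long is 8 bytes, this intentionally fails and tells you to pick a matching pair of types:
#include <assert.h>   /* static_assert (C11) */

/* refuse to compile if the pun between float and unsigned long cannot work */
static_assert(sizeof(unsigned long) == sizeof(float),
              "unsigned long and float must be the same size for this cast");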

Bit vector implementation from a given array

I'm trying to create a bit vector set from a given array, and I'm not sure how to start. For example, given the array int rows[] = {1, 2, 5}, I need to write a function unsigned short MakeBitVector(int values[], int nValues). You're allowed to assume that the range of the elements in the array is 1-9. Here's what I have so far:
unsigned short MakeBitVector(int values[], int nValues)
{
    unsigned short int set = calloc(nValues, sizeof(unsigned short));   /* line 55 */
    for (int i = 0; i < nValues; i++) {
        set[i] = values[i];                                             /* line 57 */
    }
    return set;
}
I keep getting warnings and errors:
bits.c:55: warning: initialization makes integer from pointer without a cast
bits.c:57: error: subscripted value is neither array nor pointer
Any ideas on how to fix this?
You definitely need your set to be a pointer:
unsigned short int* set = calloc(nValues, sizeof(unsigned short));
And you have to change the return type of the function to a pointer as well.
Edit: if you want to pack everything into one integer, there is a simpler way:
unsigned short MakeBitVector(int values[], int nValues)
{
    unsigned short int set = 0;
    for (int i = 0; i < nValues; i++)
        set |= 1 << values[i];
    return set;
}
You don't need to allocate a single int; returning a copy is just fine.
I don't think you need dynamic allocation at all; calloc is just confusing things. Also, you will need to operate on single bits somewhere, which your code isn't doing at present. What about this:
unsigned short MakeBitVector(int values[], int nValues) {
    unsigned short int set = 0;
    for (int i = 0; i < nValues; i++) {
        set |= 1 << values[i];
    }
    return set;
}
Obviously the output of this is undefined if the input contains indices >= 16, but you said that shouldn't be a problem (and you could easily extend it to 32 anyway).
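A minimal usage sketch for this version (the test values are just an illustration): with rows[] = {1, 2, 5} the result has bits 1, 2 and 5 set, i.e. 0x26.
#include <stdio.h>

int main(void)
{
    int rows[] = { 1, 2, 5 };
    int nRows = sizeof rows / sizeof rows[0];

    unsigned short set = MakeBitVector(rows, nRows);

    printf("bit vector: 0x%04x\n", set);                         /* prints 0x0026 */
    printf("contains 2: %s\n", (set & (1 << 2)) ? "yes" : "no"); /* yes */
    printf("contains 3: %s\n", (set & (1 << 3)) ? "yes" : "no"); /* no */
    return 0;
}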
set isn't a pointer; change it to one. You would also need to return a pointer as well.
