C code output: unexpected/expected behaviour

What is wrong with this code? Can anyone explain?
#include <stdio.h>
#include <malloc.h>
#define TOTAL_ELEMENTS (sizeof(array) / sizeof(array[0]))
int array[] = {23,34,12,17,204,99,16};
int main()
{
    int num;
    int d;
    int size = TOTAL_ELEMENTS - 2;
    printf("%d\n", (TOTAL_ELEMENTS - 2));
    for (d = -1; d <= (TOTAL_ELEMENTS - 2); d++)
        printf("%d\n", array[d+1]);
    return 0;
}
When I print it, it gives 5, but what is happening inside the for loop?

The sizeof operator returns a value of type size_t, which is an unsigned type. In your for loop condition test:
d <= (TOTAL_ELEMENTS-2)
you are comparing a signed value (d) with an unsigned value (TOTAL_ELEMENTS-2). This is usually a warning condition, and you should turn up the warning level on your compiler (e.g. -Wextra with GCC or Clang, which enables -Wsign-compare) so you properly get a warning message.
The compiler can only generate code for either a signed or an unsigned comparison, and in this case the comparison is unsigned. The integer value in d is converted to an unsigned value, which on a 2's complement architecture ends up being 0xFFFFFFFF or similar. This is not less than or equal to your TOTAL_ELEMENTS-2 value, so the comparison is false and the loop body never runs.
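A quick illustration of that conversion (a minimal sketch; the exact value of (unsigned)-1 depends on the width of unsigned int on your platform):
#include <stdio.h>

int main(void)
{
    int d = -1;
    /* In a mixed signed/unsigned comparison, d is converted to unsigned. */
    printf("(unsigned)d = %u\n", (unsigned)d); /* e.g. 4294967295 with 32-bit unsigned int */
    printf("d <= 5u yields %d\n", d <= 5u);    /* prints 0: 4294967295 <= 5 is false */
    return 0;
}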

You're starting the loop by setting d = -1 and compensating with array[d+1]. That happens to stay in bounds here, but it is an error-prone off-by-one style; start at d = 0 instead.
If you fix that, then you can change your printf to be
printf("%d\n",array[d]);
As you've also marked this as homework, I'd advise taking another look at your loop terminating condition as well. For reference, here is one conventional way to write the whole loop once both issues are fixed (a sketch; the cast to int sidesteps the signed/unsigned comparison discussed above):
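#include <stdio.h>

#define TOTAL_ELEMENTS (sizeof(array) / sizeof(array[0]))
int array[] = {23, 34, 12, 17, 204, 99, 16};

int main(void)
{
    int d;
    /* Start at 0 and compare against a signed element count. */
    for (d = 0; d < (int)TOTAL_ELEMENTS; d++)
        printf("%d\n", array[d]);
    return 0;
}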

Related

Finding maximum value of a short int variable in C

I was working on Exercise 2-1 of K&R, where the goal is to calculate the range of different variable types. Below is my function to calculate the maximum value a short int can hold:
short int max_short(void) {
    short int i = 1, j = 0, k = 0;
    while (i > k) {
        k = i;
        if (((short int)2 * i) > (short int)0)
            i *= 2;
        else {
            j = i;
            while (i + j <= (short int)0)
                j /= 2;
            i += j;
        }
    }
    return i;
}
My problem is that the value returned by this function is -32768, which is obviously wrong since I'm expecting a positive value. I can't figure out where the problem is; I used the same function (with changes to the variable types) to calculate the maximum value an int can contain and it worked.
I thought the problem could be caused by the comparisons inside the if and while statements, hence the typecasting, but that didn't help.
Any ideas what is causing this? Thanks in advance!
EDIT: Thanks to Antti Haapala for his explanations: overflow into the sign bit results in undefined behavior, NOT in negative values.
You can't use calculations like this to deduce the range of signed integers, because signed integer overflow has undefined behaviour, and a narrowing conversion at best results in an implementation-defined value, or a signal being raised. The proper solution is to just use SHRT_MAX, INT_MAX ... from <limits.h>. Deducing the maximum value of signed integers via arithmetic is a trick question in standardized C, and has been ever since the first standard was published in 1989.
Note that the original edition of K&R predates the standardization of C by 11 years, and even the 2nd one - the "ANSI-C" version - predates the finalized standard and differs from it somewhat; they were written for a language that is not quite the C of this day.
You can do it easily for unsigned integers though:
unsigned int i = -1;
// i now holds the maximum value of `unsigned int`.
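A quick way to verify this (a minimal sketch; UINT_MAX comes from <limits.h>):
#include <stdio.h>
#include <limits.h>

int main(void)
{
    unsigned int i = -1;             /* converting -1 to unsigned is well-defined: it wraps to UINT_MAX */
    printf("%u\n%u\n", i, UINT_MAX); /* both lines print the same value */
    return 0;
}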
By definition, you cannot calculate the maximum value of a type in C by using variables of that very same type: the type will overflow when it goes "over the top". In the case of signed integer overflow, the behavior is undefined, meaning you will get a major bug if you attempt it.
The correct way to do this is to simply check SHRT_MAX from limits.h.
An alternative, somewhat more questionable way would be to create the maximum of an unsigned short and then divide that by 2. We can create the maximum by taking the bitwise inversion of the value 0.
#include <stdio.h>
#include <limits.h>

int main(void)
{
    printf("%hd\n", SHRT_MAX); // best way

    unsigned short ushort_max = ~0u;
    short short_max = ushort_max / 2;
    printf("%hd\n", short_max);

    return 0;
}
One note about your code:
Casts such as ((short int)2*i)>(short int)0 are completely superfluous. Most binary operators in C, such as * and >, perform something called "the usual arithmetic conversions", a way to implicitly convert and balance the types in an expression. These implicit conversion rules silently make both operands type int despite your casts.
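You can see the promotion in action with sizeof (a small sketch relying only on standard integer promotion):
#include <stdio.h>

int main(void)
{
    short a = 2, b = 3;
    /* Both operands are promoted to int before the multiplication,
       so a * b has type int, not short. */
    printf("sizeof(a)     = %zu\n", sizeof a);
    printf("sizeof(a * b) = %zu\n", sizeof(a * b)); /* same as sizeof(int) */
    return 0;
}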
You forgot to cast to short int during comparison
OK, here I assume that the machine handles integer overflow by wrapping around into negative integers, as I believe you assumed when writing this program. (Strictly speaking, narrowing a value that doesn't fit into short is implementation-defined, and signed overflow is undefined, as noted above.)
Code that outputs 32767:
#include <stdlib.h>
#include <stdio.h>

short int max_short(void)
{
    short int i = 1, j = 0, k = 0;
    while (i > k)
    {
        k = i;
        if (((short int)(2 * i)) > (short int)0)
            i *= 2;
        else
        {
            j = i;
            while ((short int)(i + j) <= (short int)0)
                j /= 2;
            i += j;
        }
    }
    return i;
}

int main() {
    printf("%d", max_short());
    while (1); /* keep the console window open */
}
I added 2 casts.
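Why do the casts matter? 2 * i is computed as int, so without the cast the comparison never sees a short-sized overflow; the cast narrows the result back to short, which on typical two's-complement implementations (where this implementation-defined conversion wraps) turns 2 * 16384 into a negative value. A small sketch of just that step, under the same assumption:
#include <stdio.h>

int main(void)
{
    short i = 16384;
    int wide = 2 * i;           /* computed as int: 32768, no overflow here */
    short narrow = (short)wide; /* implementation-defined narrowing; wraps to -32768 on typical machines */
    printf("wide = %d, narrow = %d\n", wide, narrow);
    return 0;
}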

C Program - About Sizeof and Constant Used Together

Below is the code I am having issues with. I understand constants and believe I understand sizeof, but I must be missing something. Here is what I have tried in order to solve this on my own:
- a printf statement with TOTAL_ELEMENTS as the %d argument - it returns 7
- a printf statement of TOTAL_ELEMENTS - 2 - it returns 5 (as expected)
- substituting 5 in the for loop - the loop runs correctly
- initializing a global int variable, setting it equal to (sizeof(array) / sizeof(array[0])), and using that variable in the for loop where TOTAL_ELEMENTS would go - again the loop runs correctly
So (at least in my head) it has to be something involving both the constant and sizeof - I'm positive the array/array[0] part also plays a role, but through testing and substitution I can't figure out what the issue is. I have read up on sizeof as well as constants to no avail. I have tried searching but have gotten nowhere, as I'm not fully certain what I'm searching for. I don't necessarily need answers, but if someone could point me in the right direction it would be much appreciated. Thank you in advance.
#include <stdio.h>
#define TOTAL_ELEMENTS (sizeof(array) / sizeof(array[0]))
int array[] = {23,34,12,17,204,99,16};
int main()
{
    int d;
    for (d = -1; d <= (TOTAL_ELEMENTS-2); d++)
        printf("%d\n", array[d+1]);
    return 0;
}
This issue is not related to sizeof itself; it is caused by comparing a signed value with an unsigned one. In your code, (TOTAL_ELEMENTS-2) has an unsigned value, but d is a signed variable, so in the condition d is converted to unsigned: -1 becomes 0xFFFFFFFF, and since 0xFFFFFFFF is greater than 5, the condition d <= (TOTAL_ELEMENTS-2) is always false!
For example:
#include <stdio.h>

int array[] = {23,34,12,17,204,99,16};

int main()
{
    int d;
    unsigned int e = 5;
    for (d = -1; d <= e; d++) /* d is converted to unsigned, so this is always false */
        printf("%d\n", array[d + 1]);
    return 0;
}
It does not print anything, same as your code.
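One way to make the comparison behave as intended (a sketch; casting the unsigned bound to int assumes the value fits in an int, which it does here):
#include <stdio.h>

int array[] = {23, 34, 12, 17, 204, 99, 16};

int main(void)
{
    int d;
    unsigned int e = 5;
    /* Casting e to int keeps the comparison signed, so d = -1 stays -1. */
    for (d = -1; d <= (int)e; d++)
        printf("%d\n", array[d + 1]);
    return 0;
}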

undefined behaviour of macro in for loop

Can anyone explain the output to me?
I have code like this:
#define TOTAL_ELEMENTS (sizeof(array) / sizeof(array[0]))
int array[] = {23,34,12,17,204,99,16};

for (d = -1; d <= (TOTAL_ELEMENTS); d++)
{
    printf("%d\n", array[d+1]);
}
It shows no output. Why is that?
But when I change the value of d in the for loop, e.g. to d = 1, it shows output. Why?
And if I remove the macro TOTAL_ELEMENTS and use d <= 4, I get the desired output. Why?
As others stated in their answers, with d = -1 it does not print anything because in:
d <= TOTAL_ELEMENTS
d is converted to an unsigned integer type (TOTAL_ELEMENTS is of type size_t because of sizeof). After the conversion, d's value becomes a huge unsigned integer, and the comparison with TOTAL_ELEMENTS fails.
Then, if the loop did run:
printf("%d\n",array[d+1]);
would read past the end of your array, as the last element is at index TOTAL_ELEMENTS - 1 while you access indices up to TOTAL_ELEMENTS + 1.
To display your array elements just use the regular form starting from index 0:
int i;
for (i = 0; i < TOTAL_ELEMENTS; i++)
{
    printf("%d\n", array[i]);
}
You need to understand the conversion rules for comparison between signed and unsigned types. In the example for(d=-1;d <= (TOTAL_ELEMENTS);d++), d is a signed int and TOTAL_ELEMENTS is unsigned, so d <= TOTAL_ELEMENTS converts d to unsigned. -1 as unsigned is a huge number, which is not <= TOTAL_ELEMENTS, so the loop never executes. Typecast as shown below and it will work.
for(d=-1;d <= (int)(TOTAL_ELEMENTS);d++)
Try casting TOTAL_ELEMENTS to int:
for(d=-1;d <= (int)(TOTAL_ELEMENTS);d++)

automatic type conversions [duplicate]

Possible Duplicate:
signed to unsigned conversions
A riddle (in C)
I am trying to understand why this program is not working:
#include <stdio.h>
#define TOTAL_ELEMENTS (sizeof(array) / sizeof(array[0]))
int array[] = {23,34,12,17,204,99,16};
int main()
{
    int d;
    for (d = -1; d <= (TOTAL_ELEMENTS-2); d++)
        printf("%d\n", array[d+1]);
    return 0;
}
I came across this program when I was going through automatic type conversions in C, but I don't understand how conversions happen between signed and unsigned data types. Please explain.
Thank you,
Harish
The result of sizeof is an unsigned type, while d is signed.
You check whether d is smaller than an unsigned integer, so the signed d is converted to unsigned.
But the bit representation of the signed -1, when read as unsigned, is greater than 2^31, and obviously greater than TOTAL_ELEMENTS-2, so the condition is never met and you do not enter the for loop even once.
Look at this code snippet; it might clear things up for you:
#include <stdio.h>
#include <stdlib.h>

int main() {
    unsigned int x = 50;
    int y = -1;
    printf("y < x is actually %u < %u which yields %u\n", y, x, y < x);
    return 0;
}
The above code prints:
y < x is actually 4294967295 < 50 which yields 0

A Macro using sizeof(arrays) is not giving the expected output

#include <stdio.h>

int arr[] = {1, 2, 3, 4, 5};
#define TOT (sizeof(arr)/sizeof(arr[0]))

int main()
{
    int d = -1, x = 0;
    if (d <= TOT) {
        x = arr[4];
        printf("%d", TOT);
    }
    printf("%d", TOT);
}
TOT has the value 5, but the if condition is failing. Why is that?
Because "the usual arithmetic conversions" are at work in the if.
The sizeof operator returns an unsigned type ... and d is converted to unsigned, making it greater than the number of elements in arr.
Try
#define TOT (int)(sizeof(arr)/sizeof(arr[0]))
or
if (d <= (int)TOT) {
That's because sizeof returns an unsigned number, while d is signed. d is implicitly converted to an unsigned number, and then it is much, much larger than TOT.
The compiler should warn you about the signed/unsigned comparison (in GCC or Clang the warning is -Wsign-compare, enabled by -Wextra).
Your expression for TOT is an unsigned value because the sizeof() operator always returns unsigned (positive) values.
When you compare the signed variable d with it, d gets automatically converted to a very large unsigned value, and hence becomes larger than TOT.
The result of sizeof is an unsigned integer; that is why the if is failing: d, which is treated as signed by the compiler, becomes greater than TOT after conversion to unsigned.
