for(int ii or for(char ii in C?

I've always seen people writing for(int ii = 0; ii < 50; ii++),
but for counts below 256, why don't people write char ii instead of int ii, since the value is guaranteed to fit in 8 bits and not more?

C does all arithmetic on values smaller than int with the values converted to int first (the "integer promotions"), and the compiler will most likely keep ii in a register anyway, so going for a smaller type won't win you anything. It might even be slightly worse, because the result has to be converted back to char after every operation.
Next, you have a correctness problem: char is only guaranteed to go up to 127, so 255 would be out of bounds. You would have to use unsigned char.
Not that it matters for your example loop: after compilation there is most likely no difference at all.
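As a sketch of why the smaller type buys nothing (the bound 50 comes from the question, and the printf is only there to give the loops a visible effect):

#include <stdio.h>

int main(void)
{
    /* The usual form: all arithmetic already happens at int width. */
    for (int ii = 0; ii < 50; ii++)
        printf("%d\n", ii);

    /* The "smaller" form: ii is promoted to int for the comparison and
       the increment, then converted back to unsigned char on each store. */
    for (unsigned char ii = 0; ii < 50; ii++)
        printf("%d\n", ii);

    return 0;
}

With optimization enabled, mainstream compilers will most likely emit essentially the same machine code for both loops.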

Related

Why is this loop running infinitely?

I tried to understand the logic behind it but couldn't. What is happening behind the scenes, and why does it loop infinitely?
char j = 1;
while (j <= 255)
{
    printf("%d", j);
    j = j + 1;
}
In C, char may be signed or unsigned; that is implementation-defined. If it is signed, then the range (on most platforms) is -128 to +127, so j is always less than 255.
Changing the type as follows:
unsigned char j=1;
will remove the ambiguity. But even then, j <= 255 will always be true on most common platforms, because an unsigned char can never be greater than 255. Instead it will "wrap" to zero, so:
while(j != 0)
will work. Or, more simply and with no platform-dependency issues, just change the type of j to int:
int j=1;
which makes much more sense.
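For illustration, a self-contained sketch of the unsigned char variant (assuming the usual 8-bit char; it prints 1 through 255 and then stops):

#include <stdio.h>

int main(void)
{
    unsigned char j = 1;
    do {
        printf("%d ", j);
        j = j + 1;          /* 255 + 1 wraps around to 0 */
    } while (j != 0);       /* false exactly once, after printing 255 */
    return 0;
}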

Iterating an array backward in C using an unsigned index

Is the following code safe for iterating over an array backward?
for (size_t s = array_size - 1; s != -1; s--)
    array[s] = <do something>;
Note that I'm comparing s, which is unsigned, against -1.
Is there a better way?
This code is surprisingly tricky. If my reading of the C standard is correct, then your code is safe if size_t is at least as big as int. This is normally the case because size_t is usually implemented as something like unsigned long int.
In this case -1 is converted to size_t (the type of s). -1 can't be represented by an unsigned type, so we apply modulo arithmetic to bring it in range. This gives us SIZE_MAX (the largest possible value of type size_t). Similarly, decrementing s when it is 0 is done modulo SIZE_MAX+1, which also results in SIZE_MAX. Therefore your loop ends exactly where you want it to end, after processing the s = 0 case.
On the other hand, if size_t were something like unsigned short (and int bigger than short), then int could represent all possible size_t values and s would be converted to int. In other words, after the wraparound the comparison would be done as (int)SIZE_MAX != -1, which is always true, so the loop would never terminate, thus breaking your code. But I've never seen a system where this could happen.
You can avoid any potential problems by using SIZE_MAX (which is provided by <stdint.h>) instead of -1:
for (size_t s = array_size - 1; s != SIZE_MAX; s--)
    ...
But my favorite solution is this:
for (size_t s = array_size; s--; )
    ...
Strictly speaking, s itself is never -1; the ending condition fires only because -1 is converted to SIZE_MAX (as explained above), which many readers find too subtle. If you'd rather not rely on that conversion, start at the full size and subtract one from the index everywhere you use it:
for (size_t s = array_size; s > 0; s--)
    array[s-1] = <do something>;
Or you can combine this functionality into the for loop's syntax:
for (size_t s = array_size; s--;)
    array[s] = <do something>;
The condition checks s against 0 and then decrements it before entering the body, so the body sees indices array_size-1 down to 0.
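For completeness, a runnable sketch of that idiom (the array contents are made up for illustration):

#include <stdio.h>

int main(void)
{
    int array[] = { 10, 20, 30, 40 };
    size_t array_size = sizeof array / sizeof array[0];

    /* The condition tests the old value of s and then decrements it,
       so the body runs with s = 3, 2, 1, 0 and never sees a wrapped value. */
    for (size_t s = array_size; s--; )
        printf("array[%zu] = %d\n", s, array[s]);

    return 0;
}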
IMO, just use a signed type that is large enough for the iteration; it is easier for humans to read.

Proper way to count down with unsigned

I am reading the Carnegie Mellon slides on computer systems for my quiz. On slide 49:
Counting Down with Unsigned
Proper way to use unsigned as loop index
unsigned i;
for (i = cnt-2; i < cnt; i--)
    a[i] += a[i+1];
Even better
size_t i;
for (i = cnt-2; i < cnt; i--)
    a[i] += a[i+1];
I don't get why this is not an infinite loop. I am decrementing i, and since it is unsigned it should always be less than cnt. Please explain.
The best option for down counting loops I have found so far is to use
for(unsigned i=N; i-->0; ) { }
This invokes the loop body with i=N-1 ... 0. This works the same way for both signed and unsigned data types and does not rely on any overflows.
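For example, a minimal sketch (N = 5 is just an illustrative bound):

#include <stdio.h>

int main(void)
{
    unsigned N = 5;
    /* i-- > 0 tests the old value and then decrements, so the body runs
       with i = 4, 3, 2, 1, 0; the exit test sees 0 and never depends on wrapping. */
    for (unsigned i = N; i-- > 0; )
        printf("%u\n", i);
    return 0;
}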
This loop simply relies on the fact that i will be decremented past 0, which wraps it around to the maximum unsigned value. That breaks the loop, because i < cnt is then false.
Per Overflowing of Unsigned Int:
unsigned numbers can't overflow, but instead wrap around using the
properties of modulo.
Both the C and C++ standards guarantee this unsigned wrapping behavior, but overflow is undefined for signed integers.
The goal of the loops is to loop from cnt-2 down to 0. It achieves the effect of writing i >= 0.
The previous slide correctly explains why a loop condition of i >= 0 doesn't work: unsigned numbers are always greater than or equal to 0, so such a condition would be vacuously true. A loop condition of i < cnt instead loops until i goes past 0 and wraps around. When you decrement an unsigned 0 it becomes UINT_MAX (2^32 - 1 for a 32-bit integer). When that happens, i < cnt is guaranteed to be false, and the loop terminates.
I would not write loops like this. It is technically correct but very poor style. Good code is not just correct, it is readable, so others can easily figure out what it's doing.
It's taking advantage of what happens when you decrement unsigned integer 0. Here's a simple example.
unsigned cnt = 2;
for (int i = 0; i < 5; i++) {
    printf("%u\n", cnt);
    cnt--;
}
That produces...
2
1
0
4294967295
4294967294
Unsigned integer 0 - 1 becomes UINT_MAX. So instead of looking for -1, you watch for when your counter becomes bigger than its initial state.
Simplifying the example a bit, here's how you can count down to 0 from 5 (exclusive).
unsigned i;
unsigned cnt = 5;
for (i = cnt-1; i < cnt; i--) {
    printf("%u\n", i);
}
That prints:
4
3
2
1
0
After the i = 0 pass, the decrement wraps i to UINT_MAX, which can never be less than cnt, so i < cnt is false.
size_t is "better" because it's unsigned and large enough to index any object in C, so you don't have to make sure that cnt has the same type as i.
This appears to be an alternative expression of the established idiom for implementing the same thing:
for (unsigned i = N; i != -1; --i)
    ...;
They simply replaced the more readable condition of i != -1 with a slightly more cryptic i < cnt. When 0 is decremented in the unsigned domain it actually wraps around to the UINT_MAX value, which compares equal to -1 (in the unsigned domain) and which is greater than or equal to cnt. So, either i != -1 or i < cnt works as a condition for continuing iterations.
Why would they do it that way specifically? Apparently because they start from cnt - 2 and the value of cnt can be smaller than 2, in which case their condition does indeed work properly (and i != -1 doesn't). Aside from such situations there's no reason to involve cnt in the termination condition. One might say that an even better idea would be to pre-check the value of cnt and then use the i != -1 idiom:
if (cnt >= 2)
    for (unsigned i = cnt - 2; i != -1; --i)
        ...;
Note, BTW, that as long as the starting value of i is known to be non-negative, the implementation based on the i != -1 condition works regardless of whether i is signed or unsigned.
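A quick sketch checking that claim (both loops print 4 down to 0):

#include <stdio.h>

int main(void)
{
    /* Unsigned: -1 converts to UINT_MAX, which i reaches by wrapping past 0. */
    for (unsigned i = 4; i != -1; --i)
        printf("%u ", i);
    printf("\n");

    /* Signed: a plain comparison against -1, no wrapping involved. */
    for (int i = 4; i != -1; --i)
        printf("%d ", i);
    printf("\n");

    return 0;
}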
I think you are confusing the int and unsigned int data types. These two are different. With a 2-byte int (as on some platforms), int has the range -32,768 to 32,767, whereas unsigned int (same 2-byte storage size) has the range 0 to 65,535.
In the example above, the variable i is of type unsigned int. It decrements down to i = 0, and when it is decremented past 0 it wraps around, making i < cnt false and ending the for loop.

C pointer to negative-length array: strange behaviour with the Apple LLVM compiler

I know that negative-length arrays have undefined behaviour, but with the standard defining E1[E2] as identical to (*((E1)+(E2))), I expected something to work.
So in this instance I create an array that spans in the -32 direction, so the accessible indexes I expected to get are -31 up to 0.
I am using 8-bit unsigned chars (0 to 255) and signed chars (-128 to +127), but this happens with 32-bit and 64-bit integers too.
I use C99's ability to declare arrays of variable length to construct the negative spanning array, specifically I am compiling to the GNU99 C standards.
I assign these values to the indexes and print them out as I go, all seems to work fine.
It goes strange when I make a pointer to the value at array index [-31] and then loop through that, 0 to 31, printing the values.
const signed char length = 32;
const signed char negativeLength = -length;
signed char array[negativeLength];
for ( signed char ii = 0; ii > negativeLength; ii-- ) {
    array[ii] = ii;
    printf( "array %d\n", array[ii] ); /* Prints out expected values */
}
printf( "==========\n" );
signed char * const pointer = &array[negativeLength + 1];
for ( unsigned char ii = 0; ii < length; ii++ ) {
    printf( "pointer %d\n", pointer[ii] ); /* Begins printing expected values then goes funky */
}
I get different results every time, but with 32 it generally starts out okay for the first 3 values; it then goes funky, printing out -93 up to +47, and then at index pointer[8] (array[-23]) it goes fine again.
I am running this on an iPad 2.
What exactly is going on here? Is the iPad messing with the pointer or the array when it detects the negative spanning array length?
I sometimes advocate understanding the behavior observed in some C implementations in situations where the C standard does not define the behavior, because it can be illuminating about how particular implementations work or how computers work. In this case, however: do not do that.
To access an array with arbitrary integer indices, from X (inclusive) to Y (exclusive), do this:
ElementType ArrayMemory[Y-X], *Array = ArrayMemory - X;
If X <= 0 <= Y and X < Y, the behavior of this is defined by the C standard.
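A concrete instance of that pattern for the question's intended range of -31 through 0 (so X = -31 and Y = 1; the element type and stored values are just for illustration):

#include <stdio.h>

int main(void)
{
    /* Y - X = 1 - (-31) = 32 elements; Array points at ArrayMemory + 31. */
    signed char ArrayMemory[32], *Array = ArrayMemory - (-31);

    for (int i = -31; i <= 0; i++)
        Array[i] = (signed char)i;            /* every access lands inside ArrayMemory */

    printf("%d %d\n", Array[-31], Array[0]);  /* prints: -31 0 */
    return 0;
}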
Why would you expect something to work when you've done something with undefined behaviour?
While E1[E2] being equivalent to *((E1)+(E2)) is well defined, the data you're accessing is not well defined, so all bets are off.

C code running well only up to a certain point

I wrote this simple code to generate the 4th power of all positive integers up to 1005. It works fine only up to the integer 215; after that it gives erroneous readings. Why so?
#include <stdio.h>

int main(void)
{
    int i;
    unsigned long long int j;
    for (i = 1; i <= 1005; i++) {
        j = i*i*i*i;
        printf("%i.........%llu\n", i, j);
    }
    return 0;
}
You can fix it by making this small change:
unsigned long long i;
The problem is that in the line j = i*i*i*i; the right-hand side is calculated as an int before it is assigned to j. So once i^4 exceeds the limits of int, the high bits are clipped and the value wraps around, going negative. When that negative number is assigned to the unsigned j, it is converted modulo ULLONG_MAX + 1, turning -x into ULLONG_MAX + 1 - x, which is where the huge numbers come from. If you widen i this way, you will also need to change its printf format specifier from %i to %llu.
You can also fix this by doing the following:
j = (unsigned long long)i*i*i*i;
The cast forces i up to the type of j before the multiplications, so each one is performed in unsigned long long.
Sanity check: 215^4 = 2,136,750,625, which is very close to the upper limit of a signed 32-bit int, 2,147,483,647.
i*i produces an int, and so do i*i*i and i*i*i*i. 215 is the greatest positive integer whose 4th power fits into a 32-bit signed int.
Beyond that the result is typically truncated, though strictly speaking this is a case of undefined behavior: signed integer overflow is UB per the C standard.
You may want to cast i to unsigned long long or define it as unsigned long long, so the multiplications are 64-bit:
j = (unsigned long long)i*i*i*i;
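Putting the fix together, a runnable sketch using the cast (the output format is kept from the question):

#include <stdio.h>

int main(void)
{
    unsigned long long j;
    for (int i = 1; i <= 1005; i++) {
        /* The cast widens the first operand, so all three multiplications
           are carried out in unsigned long long and cannot overflow here. */
        j = (unsigned long long)i * i * i * i;
        printf("%i.........%llu\n", i, j);
    }
    return 0;
}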
