Finding the maximum value of a short int variable in C

I was working on Exercise 2-1 of K&R; the goal is to calculate the range of different variable types. Below is my function to calculate the maximum value a short int can contain:
short int max_short(void) {
    short int i = 1, j = 0, k = 0;
    while (i > k) {
        k = i;
        if (((short int)2 * i) > (short int)0)
            i *= 2;
        else {
            j = i;
            while (i + j <= (short int)0)
                j /= 2;
            i += j;
        }
    }
    return i;
}
My problem is that the value returned by this function is -32768, which is obviously wrong since I'm expecting a positive value. I can't figure out where the problem is; I used the same function (with changes in the variable types) to calculate the maximum value an int can contain, and it worked...
I thought the problem could be caused by the comparisons inside the if and while statements, hence the typecasting, but that didn't help...
Any ideas what is causing this? Thanks in advance!
EDIT: Thanks to Antti Haapala for his explanation: overflow into the sign bit results in undefined behavior, NOT in negative values.

You can't use calculations like this to deduce the range of signed integers, because signed integer overflow has undefined behaviour, and narrowing conversion at best results in an implementation-defined value, or a signal being raised. The proper solution is to just use SHRT_MAX, INT_MAX ... of <limits.h>. Deducing the maximum value of signed integers via arithmetic is a trick question in standardized C language, and has been so ever since the first standard was published in 1989.
Note that the original edition of K&R predates the standardization of C by 11 years, and even the 2nd one - the "ANSI-C" version - predates the finalized standard and differs from it somewhat; they were written for a language that is close to, but not quite, the C of today.
You can do it easily for unsigned integers though:
unsigned int i = -1;
// i now holds the maximum value of `unsigned int`.
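For instance, a minimal sketch (hypothetical, just to verify the trick against UINT_MAX from <limits.h>):
#include <stdio.h>
#include <limits.h>

int main(void)
{
    unsigned int i = -1;              /* -1 converted to unsigned int wraps to the maximum */
    printf("%u\n%u\n", i, UINT_MAX);  /* both lines print the same value */
    return 0;
}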

By definition, you cannot calculate the maximum value of a type in C by using variables of that very same type. It simply doesn't make any sense. The type will overflow when it goes "over the top". In case of signed integer overflow, the behavior is undefined, meaning you will get a major bug if you attempt it.
The correct way to do this is to simply check SHRT_MAX from limits.h.
An alternative, somewhat more questionable way would be to create the maximum of an unsigned short and then divide that by 2. We can create the maximum by taking the bitwise inversion of the value 0.
#include <stdio.h>
#include <limits.h>

int main()
{
    printf("%hd\n", SHRT_MAX); // best way

    unsigned short ushort_max = ~0u;
    short short_max = ushort_max / 2;
    printf("%hd\n", short_max);

    return 0;
}
One note about your code:
Casts such as ((short int)2*i)>(short int)0 are completely superfluous. Most binary operators in C, such as * and >, apply something called "the usual arithmetic conversions", a set of rules that implicitly converts and balances the types of an expression. These implicit conversion rules will silently make both operands type int despite your casts.
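A minimal sketch illustrating that point (assuming a platform with 16-bit short and 32-bit int):
#include <stdio.h>

int main(void)
{
    short a = 30000, b = 2;
    /* Both operands are promoted to int before '*' is applied, so the
       product 60000 is computed in int even though it would not fit in a
       16-bit short. */
    printf("%d\n", a * b);   /* a * b has type int; prints 60000 */
    return 0;
}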

You forgot to cast to short int during comparison
OK, here I assume that the computer handles integer overflow by wrapping into negative integers, as I believe you assumed when writing this program.
Code that outputs 32767:
#include <stdlib.h>
#include <stdio.h>

short int max_short(void)
{
    short int i = 1, j = 0, k = 0;
    while (i > k)
    {
        k = i;
        if (((short int)(2 * i)) > (short int)0)
            i *= 2;
        else
        {
            j = i;
            while ((short int)(i + j) <= (short int)0)
                j /= 2;
            i += j;
        }
    }
    return i;
}

int main() {
    printf("%d", max_short());
    while (1); /* keep the console window open */
}
Added two casts (in the if condition and the inner while condition).

Related

Integer overflow vs implicit conversion from long long to int

Take, for example, int a = INT_MAX - 1; and int b = INT_MAX - 1;, assume that int is 32-bit, and consider a function
int product(int a, int b)
{
    return a * b;
}
Now here the product a*b overflows, resulting in undefined behavior. From the standard:
If an exceptional condition occurs during the evaluation of an expression (that is, if the result is not mathematically defined or not in the range of representable values for its type), the behavior is undefined.
However if we have instead
int product(int a, int b)
{
    long long x = (long long)a * b;
    return x;
}
Then, assuming this answer is correct and applies to long long as well, the standard makes the result of converting the out-of-range long long back to int implementation-defined.
I'm thinking that undefined behavior can cause anything, including a crash, so it's better to avoid it at all costs, hence the second version seems preferable. But I'm not quite sure if my reasoning is okay.
Question: Is the second version preferable, is the first one, or are they equally preferable?
Both of the options are bad because they do not produce the desired result. IMHO it is a moot point trying to rank them in badness order.
My advice would be to fix the function to be well-defined for all use cases.
If you (the programmer) will never (ever!) pass values to the product() function that will cause undefined behavior, then the first version is fine, why not.
The second version returns the sizeof(int)*CHAR_BIT least significant bits of the result (strictly speaking, the conversion back to int is implementation-defined) and may still overflow on architectures where LLONG_MAX == INT_MAX. The second version may also take ages to execute on an 8-bit processor with really bad support for long long multiplication, and maybe you should handle the overflow when converting long long to int with something like if (x > INT_MAX) return INT_MAX;, unless you are really only interested in the least significant bits of the product.
The preferable version is the one where no undefined behavior exists. If you aren't sure whether multiplying a and b will result in undefined behavior, you should check whether it will and prepare for such a case.
#include <assert.h>
#include <limits.h>

/* Note: this check only covers positive operands; without the a > 0 && b > 0
   guard, a zero operand would divide by zero and negative operands would
   slip through unchecked. */
int product(int a, int b)
{
    assert(a > 0 && b > 0 && a < INT_MAX / b && b < INT_MAX / a);
    if (!(a > 0 && b > 0 && a < INT_MAX / b && b < INT_MAX / a))
        return INT_MAX;
    return a * b;
}
or in GNUC:
int product(int a, int b) {
    int c;
    if (__builtin_smul_overflow(a, b, &c)) { /* signed multiplication with overflow check */
        assert(0);
        return INT_MAX;
    }
    return c;
}
I believe that a slightly tweaked second version might be interesting to you:
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

int product(int a, int b)
{
    long long x = (long long)a * b;
    if (x < INT_MIN || x > INT_MAX)
    {
        fprintf(stderr, "Error in product(): Result out of range of int\n");
        abort();
    }
    return x;
}
This function converts the two int arguments to long long, computes their product, and checks whether the result is in the range of int. If it is, we can return it from the function without any bad consequences. If it is not, we can print an error message and abort, or do a different kind of error handling.
EDIT 1: But this code still expects that (long long)a * b does not overflow, which is not guaranteed, e.g. when sizeof(long long) == sizeof(int). In such a case an overflow check should be added to make sure this does not happen. The (6.54) Integer Overflow Builtins could be interesting for you if you don't mind using GCC-dependent code. If you want to stay in C without any extensions, there are methods to detect multiplication overflow as well; see this Stack Overflow answer: https://stackoverflow.com/a/1815371/1003701
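For reference, a minimal portable sketch of such a pre-check, with no extensions (the helper name mul_would_overflow_int is made up for illustration):
#include <limits.h>
#include <stdbool.h>

/* Hypothetical helper: returns true if a * b would overflow int, decided by
   dividing against INT_MAX/INT_MIN instead of multiplying. Division
   truncates toward zero (C99), which the comparisons rely on. */
bool mul_would_overflow_int(int a, int b)
{
    if (a == 0 || b == 0)
        return false;
    if (a > 0 && b > 0)  return a > INT_MAX / b;
    if (a > 0 && b < 0)  return b < INT_MIN / a;
    if (a < 0 && b > 0)  return a < INT_MIN / b;
    return b < INT_MAX / a;   /* both negative: the product is positive */
}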

C - erroneous output after multiplication of large numbers

I'm implementing my own decrease-and-conquer method for exponentiation, a^n.
Here's the program:
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include <time.h>

double dncpow(int a, int n)
{
    double p = 1.0;
    if (n != 0)
    {
        p = dncpow(a, n / 2);
        p = p * p;
        if (n % 2)
        {
            p = p * (double)a;
        }
    }
    return p;
}

int main()
{
    int a;
    int n;
    int a_upper = 10;
    int n_upper = 50;
    int times = 5;
    time_t t;
    srand(time(&t));
    for (int i = 0; i < times; ++i)
    {
        a = rand() % a_upper;
        n = rand() % n_upper;
        printf("a = %d, n = %d\n", a, n);
        printf("pow = %.0f\ndnc = %.0f\n\n", pow(a, n), dncpow(a, n));
    }
    return 0;
}
My code works for small values of a and n, but a mismatch in the output of pow() and dncpow() is observed for inputs such as:
a = 7, n = 39
pow = 909543680129861204865300750663680
dnc = 909543680129861348980488826519552
I'm pretty sure that the algorithm is correct, but dncpow() is giving me wrong answers.
Can someone please help me rectify this? Thanks in advance!
Simple as that, these numbers are too large for what your computer can represent exactly in a single variable. With a floating point type, there's an exponent stored separately and therefore it's still possible to represent a number near the real number, dropping the lowest bits of the mantissa.
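A minimal sketch of that effect (assuming the common 64-bit IEEE double with a 53-bit mantissa):
#include <stdio.h>
#include <float.h>

int main(void)
{
    double big = 9007199254740992.0;   /* 2^53 */
    printf("%d mantissa bits\n", DBL_MANT_DIG);
    printf("%.0f\n", big);             /* 9007199254740992 */
    printf("%.0f\n", big + 1.0);       /* also 9007199254740992: the added 1 is dropped */
    return 0;
}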
Regarding this comment:
I'm getting similar outputs upon replacing 'double' with 'long long'. The latter is supposed to be stored exactly, isn't it?
If you call a function taking double, it won't magically operate on long long instead. Your value is simply converted to double and you'll just get the same result.
Even with a function handling long long (which has 64 bits on today's typical platforms), you can't deal with such large numbers; 64 bits aren't enough to store them. With an unsigned integer type, they will just "wrap around" to 0 on overflow. With a signed integer type, the behavior on overflow is undefined (but still somewhat likely a wrap-around). So you'll get some number that has absolutely nothing to do with your expected result. That's arguably worse than the result with a floating point type, which is merely imprecise.
For exact calculations on large numbers, the only way is to store them in an array (typically of unsigned integers like uintmax_t) and implement all the arithmetic yourself. That's a nice exercise, and a lot of work, especially when performance is of interest (the "naive" arithmetic algorithms are typically very inefficient).
For a real-life program, you won't reinvent the wheel here, as there are libraries for handling large numbers. The arguably best known one is libgmp. Read the manuals there and use it.
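For instance, a minimal sketch with GMP (assuming libgmp is installed; link with -lgmp):
#include <stdio.h>
#include <gmp.h>

int main(void)
{
    mpz_t r;
    mpz_init(r);
    mpz_ui_pow_ui(r, 7, 39);   /* r = 7^39, computed exactly */
    gmp_printf("%Zd\n", r);
    mpz_clear(r);
    return 0;
}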

INT_MAX does not behave right in an If-Statement

My C program performs a "Turmrechnung" (tower calculation): a predefined number init_num is multiplied by a predefined range of numbers (init_num*2*3*4*5*6*7*8*9 in my case, defined by the variable h), and after that it is divided by those same numbers, so the result should be the initial value of init_num. My task is to integrate a way to stop the calculation if the value of init_num becomes larger than INT_MAX (from limits.h).
But the if statement is always true, even when it shouldn't be, in the case of a larger initial value of init_num, which results in values bigger than INT_MAX along the way of the calculation.
It only works if I replace INT_MAX with a number smaller than INT_MAX, like 200000000, in my if statement. Why?
#include <limits.h>
#include <stdio.h>

int main() {
    int init_num = 1000000;
    int h = 9;
    for (int i = 2; i < h + 1; ++i)
    {
        if (init_num * i < INT_MAX)
        {
            printf("%10i %s %i\n", init_num, "*", i);
            init_num *= i;
        }
        else
        {
            printf("%s\n", "An overflow has occurred!");
            break;
        }
    }
    for (int i = 2; i < h + 1; ++i)
    {
        printf("%10i %s %i\n", init_num, ":", i);
        init_num /= i;
    }
    printf("%10i\n", init_num);
}
if (init_num * i < INT_MAX)
INT_MAX is the maximum value of int; therefore, this condition will never be false, or in other words 0 (except when init_num * i is exactly equal to INT_MAX).
If you want you can write your condition like this -
if (init_num < INT_MAX/i)
init_num * i < INT_MAX will only be 0 if init_num * i is INT_MAX. This is not particularly likely. Note that signed integer overflow is undefined behaviour in C, so do be particularly careful here.
You can rewrite your statement to init_num < INT_MAX / i in your particular case. Do note that integer division truncates though.
The problem is signed integer overflow is undefined behaviour. Concentrate on the "undefined" part and think about it. Briefly: avoid under all circumstances.
To avoid this, you can either use a wider type which is guaranteed to hold the result of the multiplication, and then test:
// ensure the type we use for cast is large enough
_Static_assert(LLONG_MAX > INT_MAX, "LLONG too small.");
if ( (long long)init_num * i < (long long)INT_MAX )
This apparently does not work if you are already at the limit (i.e. already using the largest data type). So you have to check in advance:
if ( init_num < (INT_MAX / i) ) {
init_num *= i;
Although more time-consuming due to the extra division, this is in general the better approach, as it does not require a larger data type (where multiplication might also be more expensive).
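Applied to the multiplication loop from the question, a minimal sketch of that pre-check might look like this (assuming init_num and i stay positive, so the exact overflow condition is init_num > INT_MAX / i):
#include <limits.h>
#include <stdio.h>

int main(void)
{
    int init_num = 1000000;
    int h = 9;
    for (int i = 2; i < h + 1; ++i)
    {
        if (init_num > INT_MAX / i)   /* would init_num * i overflow? */
        {
            printf("An overflow has occurred!\n");
            break;
        }
        printf("%10i * %i\n", init_num, i);
        init_num *= i;
    }
    printf("%10i\n", init_num);
    return 0;
}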
init_num * i
results in an int and thus cannot grow beyond the maximum possible int (INT_MAX) by definition.
So provide "room" for calculations "larger" than an int by replacing
if (init_num * i < INT_MAX)
with
if ((long) init_num * i < (long) INT_MAX)
The cast to long leads to a long result.
(The above approach assumes long is wider than int.)

Is the modulus operator only applicable to integer data types?

My algorithm calculates the arithmetic operations given below. For small values it works perfectly, but for large numbers such as 218194447 it returns a random value. I have tried to use long long int and double, but nothing works, because the modulus operator I have used can only be used with integer types. Can anyone explain how to solve it, or provide links that could be useful?
#include <stdio.h>
#include <math.h>

int main()
{
    long long i, j;
    int t, n;
    scanf("%d\n", &t);
    while (t--)
    {
        scanf("%d", &n);
        long long k;
        i = (n * n);
        k = (1000000007);
        j = (i % k);
        printf("%d\n", j);
    }
    return 0;
}
You could declare your variables as int64_t or long long; then they would compute the modulus in their range (e.g. 64 bits for int64_t). And it would work correctly only if all intermediate values fit in their range.
However, you probably want or need bignums. I suggest you learn and use GMPlib for that.
BTW, don't use pow since it computes in floating point. Try i = n * n; instead of i = pow(n,2);
P.S. This is not for a beginner in C programming; using gmplib requires some fluency with C (and programming in general).
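For this particular case, a minimal sketch of the cast-to-long-long approach (assuming n fits in an int, so n squared fits in 64 bits):
#include <stdio.h>

int main(void)
{
    int n = 218194447;
    long long i = (long long)n * n;   /* the squaring is done in 64 bits, no int overflow */
    long long j = i % 1000000007LL;
    printf("%lld\n", j);
    return 0;
}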
The problem in your code is that intermediate values of your computation exceed the range of values that can be stored in an int. With a 32-bit int, n^2 already overflows once n exceeds about 46340 (the square root of INT_MAX).
Follow the link above given by R.T. for a way of doing modulo on big numbers. That won't be enough though, since you also need a library that can handle big integer values. With only the standard C libraries in place, that is otherwise a tough task to do on your own. (OK, for values up to 2^31 a 64-bit integer would do, but if you're going even larger, you're out of luck again.)
After the accepted answer:
To find the modulo of a number n raised to some power p (2 in OP's case), there is no need to first calculate power(n,p). Instead, calculate intermediate modulo values as n is raised to intermediate powers.
The following code works with p==2 as needed by OP, but also works quickly if p=1000000000.
The only wider integers needed are integers that are twice as wide as n.
Performing all this with unsigned integers simplifies the needed code.
The resultant code is quite small.
#include <stdint.h>

uint32_t powmod(uint32_t base, uint32_t expo, uint32_t mod) {
    // `y = 1u % mod` needed only for the cases expo==0, mod<=1
    // otherwise `y = 1u` would do.
    uint32_t y = 1u % mod;
    while (expo) {
        if (expo & 1u) {
            y = ((uint64_t) base * y) % mod;
        }
        expo >>= 1u;
        base = ((uint64_t) base * base) % mod;
    }
    return y;
}

#include <stdio.h>
#include <math.h>

int main(void) {
    unsigned long j;
    unsigned t, n;
    scanf("%u\n", &t);
    while (t--) {
        scanf("%u", &n);
        unsigned long k;
        k = 1000000007u;
        j = powmod(n, 2, k);
        printf("%lu\n", j);
    }
    return 0;
}

warning: comparison of unsigned expression >= 0 is always true

I have the following error when compiling a C file:
t_memmove.c: In function ‘ft_memmove’:
ft_memmove.c:19: warning: comparison of unsigned expression >= 0 is always true
Here's the full code, via cat ft_memmove.c:
#include "libft.h"
#include <string.h>
void *ft_memmove(void *s1, const void *s2, size_t n)
{
char *s1c;
char *s2c;
size_t i;
if (!s1 || !s2 || !n)
{
return s1;
}
i = 0;
s1c = (char *) s1;
s2c = (char *) s2;
if (s1c > s2c)
{
while (n - i >= 0) // this triggers the error
{
s1c[n - i] = s2c[n - i];
++i;
}
}
else
{
while (i < n)
{
s1c[i] = s2c[i];
++i;
}
}
return s1;
}
I do understand that size_t is unsigned and that both integers will be >= 0 because of that. But since I'm subtracting one from the other, I don't get it. Why does this error come up?
If you subtract two unsigned integers in C, the result will be interpreted as unsigned. It doesn't automatically become signed just because you subtracted. One way to fix that is to use n >= i instead of n - i >= 0.
Consider this loop:
for (unsigned int i = 5; i >= 0; i--)
{
}
This loop will be infinite, because whenever i becomes -1 it'll be interpreted as a very large positive value, as the sign bit is absent in unsigned int.
This is the reason a warning is generated here.
According to section 6.3.1.8 of the draft C99 standard Usual arithmetic conversions, since they are both of the same type, the result will also be size_t. The section states:
[...]Unless explicitly stated otherwise, the common real type is also the corresponding real type of the result[...]
and later on says:
If both operands have the same type, then no further conversion is needed.
Mathematically, you can just move the i over to the other side of the expression, like so:
n >= i
Arithmetic on unsigned operands results in an unsigned value, and that's why you are getting this warning. Better to change n - i >= 0 to n >= i.
Operations with unsigned operands are performed in the domain of the unsigned type. Unsigned arithmetic follows the rules of modular arithmetic. This means that the result will never be negative, even if you are subtracting something from something. For example, 1u - 5u does not produce -4. It produces UINT_MAX - 3, which is a huge positive value congruent to -4 modulo UINT_MAX + 1.
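A minimal sketch showing that wrap-around:
#include <stdio.h>
#include <limits.h>

int main(void)
{
    unsigned int d = 1u - 5u;            /* wraps modulo UINT_MAX + 1 */
    printf("%u\n", d);                   /* prints UINT_MAX - 3 */
    printf("%d\n", d == UINT_MAX - 3);   /* prints 1 */
    return 0;
}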
