I wrote this simple code to generate the 4th power of all positive integers up to 1005. It works fine only up to the integer 215; after that it gives erroneous results. Why is that?
#include <stdio.h>

int main(void)
{
    int i;
    unsigned long long int j;

    for (i = 1; i <= 1005; i++) {
        j = i*i*i*i;
        printf("%i.........%llu\n", i, j);
    }
    return 0;
}
You can fix it by making this small change.
unsigned long long i;
The problem is that in the line j = i*i*i*i;, the right-hand side is calculated as an int before it is assigned to j. Because of this, once i^4 exceeds the int range, the higher bits are clipped and the value wraps around, typically showing up first as a negative number. When that negative value -x is assigned to j, which is unsigned, it is converted to ULLONG_MAX + 1 - x, which is where the huge numbers come from. You will also need to change the printf format specifier for i from %i to %llu.
You can also fix it with the following change:
j = (unsigned long long)i*i*i*i;
This forces the first operand up to the type of j, so the multiplications are carried out in unsigned long long.
Sanity check: 215^4 = 2,136,750,625, which is just below the upper limit of a 32-bit signed int, 2,147,483,647.
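Putting the pieces together, a minimal corrected sketch of the original program (either the unsigned long long i change or the cast alone is enough):

#include <stdio.h>

int main(void)
{
    unsigned long long i;                     /* i is now 64-bit, so i*i*i*i is computed in 64 bits */
    unsigned long long j;

    for (i = 1; i <= 1005; i++) {
        j = i * i * i * i;
        printf("%llu.........%llu\n", i, j);  /* %llu now matches the type of i */
    }
    return 0;
}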
i*i produces an int. And so do i*i*i and i*i*i*i. 215 is the greatest positive integer whose 4th power fits into a 32-bit int.
Beyond that the result is typically truncated ("typically" because, strictly speaking, this is undefined behavior: signed integer overflow is UB per the C standard).
You may want to cast i to unsigned long long or define it as unsigned long long, so the multiplications are 64-bit:
j = (unsigned long long)i*i*i*i;
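As a quick sanity check of that 215 boundary, a small sketch (not part of the original code; it assumes a 32-bit int):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    unsigned long long p215 = 215ULL * 215 * 215 * 215;   /* 2,136,750,625 */
    unsigned long long p216 = 216ULL * 216 * 216 * 216;   /* 2,176,782,336 */

    printf("INT_MAX = %d\n", INT_MAX);
    printf("215^4 = %llu, fits in int: %s\n", p215,
           p215 <= (unsigned long long)INT_MAX ? "yes" : "no");
    printf("216^4 = %llu, fits in int: %s\n", p216,
           p216 <= (unsigned long long)INT_MAX ? "yes" : "no");
    return 0;
}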
EDIT: After some discussion in the comments it came out that, because of a lack of knowledge about how floating-point numbers are implemented in C, I asked something different from what I meant to ask.
I wanted to use (do operations with) integers larger than those I can have with unsigned long long (which for me is 8 bytes), possibly without resorting to arrays or bigint libraries. Since my long double is 16 bytes, I thought it might be possible by just switching type. It turned out that even though it is possible to represent larger integers, you can't do operations with these larger long double integers without losing precision, so it's not possible to achieve what I wanted to do. Actually, as stated in the comments, it is not possible for me; in general, whether it is possible or not depends on the floating-point characteristics of your long double.
// end of EDIT
I am trying to understand what's the largest integer that I can store in a long double.
I know it depends on the environment the program is built in, but I don't know exactly how. For what it's worth, I have sizeof(long double) == 16.
Now, in this answer they say that the largest integer a 64-bit double can store exactly should be 2^53, which is around 9 x 10^15, exactly 9007199254740992.
When I run the following program, it just works:
#include <stdio.h>

int main() {
    long double d = 9007199254740992.0L, i;
    printf("%Lf\n", d);
    for (i = -3.0; i < 4.0; i++) {
        printf("%.Lf) %.1Lf\n", i, d + i);
    }
    return 0;
}
It works even with 11119007199254740992.0L, which is the same number with four 1s added at the start. But when I add one more 1, the first printf works as expected, while all the others show the same number as the first print.
So I tried to get the largest value of my long double with this program
#include <stdio.h>
#include <math.h>

int main() {
    long double d = 11119007199254740992.0L, i;
    for (i = 0.0L; d + i == d + i - 1.0; i++) {
        if (!fmodl(i, 10000.0L)) printf("%Lf\n", i);
    }
    printf("%.Lf\n", i);
    return 0;
}
But it prints 0.
(Edit: I just realized that I needed the condition != in the for)
In that same answer, they also say that the largest possible value of a double is DBL_MAX, or approximately 1.8 x 10^308.
I have no idea what that means, but if I run
printf("%e\n", LDBL_MAX);
I get a different value every time, always around 6.9 x 10^(-310).
(Edit: I should have used %Le, getting as output a value around 1.19 x 10^4932)
I took LDBL_MAX from here.
I also tried this one
printf("%d\n", LDBL_MAX_10_EXP);
That gives the value 4932 (which I also found in this C++ question).
Since we have 16 bytes for a long double, even if all of them were used for the integer part of the type, we would be able to store numbers up to 2^128, which is around 3.4 x 10^38. So I don't get what 308, -310 and 4932 are supposed to mean.
Can someone tell me how to find out the largest integer that I can store in a long double?
Inasmuch as you express in comments that you want to use long double as a substitute for long long to obtain increased range, I assume that you also require unit precision. Thus, you are asking for the largest number representable by the available number of mantissa digits (LDBL_MANT_DIG) in the radix of the floating-point representation (FLT_RADIX). In the very likely event that FLT_RADIX == 2, you can compute that value like so:
#include <float.h>
#include <math.h>

long double get_max_integer_equivalent() {
    long double max_bit = ldexpl(1, LDBL_MANT_DIG - 1);
    return max_bit + (max_bit - 1);
}
The ldexp family of functions scale floating-point values by powers of 2, analogous to what the bit-shift operators (<< and >>) do for integers, so the above is similar to
// not reliable for the purpose!
unsigned long long max_bit = 1ULL << (DBL_MANT_DIG - 1);
return max_bit + (max_bit - 1);
Inasmuch as you suppose that your long double provides more mantissa digits than your long long has value bits, however, you must assume that bit shifting would overflow.
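A short self-contained usage sketch of the function above:

#include <float.h>
#include <math.h>
#include <stdio.h>

/* same function as above, repeated so the sketch is self-contained */
static long double get_max_integer_equivalent(void) {
    long double max_bit = ldexpl(1, LDBL_MANT_DIG - 1);   /* 2^(LDBL_MANT_DIG - 1) */
    return max_bit + (max_bit - 1);                        /* 2^LDBL_MANT_DIG - 1   */
}

int main(void)
{
    printf("LDBL_MANT_DIG = %d\n", LDBL_MANT_DIG);
    printf("largest integer with unit precision: %.0Lf\n", get_max_integer_equivalent());
    return 0;
}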
There are, of course, much larger values that your long double can express, all of them integers. But they do not have unit precision, and thus the behavior of your long double will diverge from the expected behavior of integers when its values are larger. For example, if a long double variable d contains such a larger value, then at least one of d + 1 == d and d - 1 == d will likely evaluate to true.
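A tiny sketch of that divergence; the exact threshold depends on LDBL_MANT_DIG on your platform:

#include <float.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* 2^LDBL_MANT_DIG is the first integer whose successor is no longer representable */
    long double d = ldexpl(1, LDBL_MANT_DIG);

    printf("d = %.0Lf\n", d);
    printf("d + 1 == d is %s\n", (d + 1 == d) ? "true" : "false");   /* typically true  */
    printf("d - 1 == d is %s\n", (d - 1 == d) ? "true" : "false");   /* typically false */
    return 0;
}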
You can print the maximum value on your machine using limits.h; the value is ULLONG_MAX.
There is a C++ example at https://www.geeksforgeeks.org/climits-limits-h-cc/.
The format specifier for printing an unsigned long long with printf() is %llu; for printing a long double it is %Lf:
printf("unsigned long long int: %llu ",(unsigned long long) ULLONG_MAX);
printf("long double: %Lf ",(long double) LDBL_MAX);
https://www.tutorialspoint.com/format-specifiers-in-c
See also Printing unsigned long long int Value Type Returns Strange Results.
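A self-contained sketch combining the two printf lines above with the headers they need (%Le is used for LDBL_MAX here so the output stays readable):

#include <stdio.h>
#include <limits.h>   /* ULLONG_MAX */
#include <float.h>    /* LDBL_MAX   */

int main(void)
{
    printf("unsigned long long int: %llu\n", ULLONG_MAX);
    printf("long double: %Le\n", LDBL_MAX);
    return 0;
}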
Assuming you mean "stored without loss of information", LDBL_MANT_DIG gives the number of bits used for the floating-point mantissa, so that's how many bits of an integer value can be stored without loss of information.*
You'd need 128-bit integers to easily determine the maximum integer value that can be held in a 128-bit float, but this will at least emit the hex value (this assumes unsigned long long is 64 bits - you can use CHAR_BIT and sizeof( unsigned long long ) to get a portable answer):
#include <stdio.h>
#include <float.h>
#include <limits.h>

int main( int argc, char **argv )
{
    int tooBig = 0;
    unsigned long long shift = LDBL_MANT_DIG;
    if ( shift >= 64 )
    {
        tooBig = 1;
        shift -= 64;
    }
    unsigned long long max = ( 1ULL << shift ) - 1ULL;
    printf( "Max integer value: 0x" );

    // don't emit an extraneous zero if LDBL_MANT_DIG is
    // exactly 64
    if ( max )
    {
        printf( "%llx", max );
    }
    if ( tooBig )
    {
        printf( "%llx", ULLONG_MAX );
    }
    printf( "\n" );
    return( 0 );
}
* - pedantically, it's the number of digits in FLT_RADIX base, but that base is almost certainly 2.
I have written the following C code to find the sum of the first 49 numbers of a given array, but the sum comes out negative.
#include <stdio.h>

int main()
{
    int i;
    long int sum = 0;
    long int a[50] = {846930887, 1681692778, 1714636916, 1957747794, 424238336,
        719885387, 1649760493, 596516650, 1189641422, 1025202363, 1350490028,
        783368691, 1102520060, 2044897764, 1967513927, 1365180541, 1540383427,
        304089173, 1303455737, 35005212, 521595369, 294702568, 1726956430,
        336465783, 861021531, 278722863, 233665124, 2145174068, 468703136,
        1101513930, 1801979803, 1315634023, 635723059, 1369133070, 1125898168,
        1059961394, 2089018457, 628175012, 1656478043, 1131176230, 1653377374,
        859484422, 1914544920, 608413785, 756898538, 1734575199, 1973594325,
        149798316, 2038664371, 1129566414};
    for (i = 0; i < 49; i++)
    {
        sum = sum + a[i];
        printf("sum is : %ld\n", sum);
    }
    printf("\nthe total sum is %ld", sum);
}
I don't know why this happens. Please help.
Using long long instead of long, the program works:
Output: 56074206897
Reason
Range of a 32-bit long (two's complement): -2^31 to 2^31 - 1
Range of a 64-bit long long: -2^63 to 2^63 - 1
As you can see, 2^31 - 1 = 2,147,483,647 < 56,074,206,897, but 2^63 - 1 = 9,223,372,036,854,775,807 > 56,074,206,897.
This leads to overflow. According to the C standard, the result of signed integer overflow is undefined behavior. What that means is that if this condition ever happens at runtime, the compiler is allowed to make your code do anything. Your program could crash, or produce the wrong answer, or have unpredictable effects on other parts of your code, or it might silently do what you intended.
In your case it is overflowing the maximum value of long int on your system. Because long int is signed, when the most significant bit gets set the value is interpreted as negative (on a typical two's complement system).
I didn't actually add them up, but just looking at them, I'd say it's a pretty safe guess that you are running into an integer overflow error.
A long int (when it is 32 bits) has a maximum value of about 2 billion (2^31 - 1). If you add more than that, it will typically wrap around and go to -2^31.
You'll need to use a data type that can hold more than that if you want to sum up those numbers. Probably a long long int should work. If you're sure it'll always be positive, even better to use an unsigned long long int.
Since a (32-bit) long int has a maximum value of 2,147,483,647 and the value of sum exceeds that range, it comes out as a negative value. You can use the following code:
#include <stdio.h>

int main()
{
    int i;
    long long int sum = 0;   // Taking long long int instead of long int
    int a[50] = {846930887, 1681692778, 1714636916, 1957747794, 424238336,
        719885387, 1649760493, 596516650, 1189641422, 1025202363, 1350490028,
        783368691, 1102520060, 2044897764, 1967513927, 1365180541, 1540383427,
        304089173, 1303455737, 35005212, 521595369, 294702568, 1726956430,
        336465783, 861021531, 278722863, 233665124, 2145174068, 468703136,
        1101513930, 1801979803, 1315634023, 635723059, 1369133070, 1125898168,
        1059961394, 2089018457, 628175012, 1656478043, 1131176230, 1653377374,
        859484422, 1914544920, 608413785, 756898538, 1734575199, 1973594325,
        149798316, 2038664371, 1129566414};
    for (i = 0; i < 49; i++)
    {
        sum = sum + a[i];
        printf("sum is : %lld\n", sum);
    }
    printf("\nTotal sum is %lld", sum);
}
As Vlad from Moscow said, this is an overflow issue, which leads to undefined behavior. On your system, the long int variable sum does not have the capacity to hold the total value. You can use long long int sum = 0; (available since C99). If that is still not large enough, look for a "BigInteger" implementation.
I was working on Exercise 2-1 of K&R; the goal is to calculate the range of different variable types. Below is my function to calculate the maximum value a short int can contain:
short int max_short(void) {
    short int i = 1, j = 0, k = 0;

    while (i > k) {
        k = i;
        if (((short int)2 * i) > (short int)0)
            i *= 2;
        else {
            j = i;
            while (i + j <= (short int)0)
                j /= 2;
            i += j;
        }
    }
    return i;
}
My problem is that the value returned by this function is -32768, which is obviously wrong since I'm expecting a positive value. I can't figure out where the problem is; I used the same function (with changes to the variable types) to calculate the maximum value an int can contain and it worked...
I thought the problem could be caused by the comparisons inside the if and while statements, hence the typecasting, but that didn't help...
Any ideas what is causing this ? Thanks in advance!
EDIT: Thanks to Antti Haapala for his explanations, the overflow to the sign bit results in undefined behavior NOT in negative values.
You can't use calculations like this to deduce the range of signed integers, because signed integer overflow has undefined behaviour, and narrowing conversion at best results in an implementation-defined value, or a signal being raised. The proper solution is to just use SHRT_MAX, INT_MAX ... of <limits.h>. Deducing the maximum value of signed integers via arithmetic is a trick question in standardized C language, and has been so ever since the first standard was published in 1989.
Note that the original edition of K&R predates the standardization of C by 11 years, and even the 2nd one - the "ANSI-C" version - predates the finalized standard and differs from it somewhat; they were written for a language that was almost, but not quite, entirely unlike the C language of this day.
You can do it easily for unsigned integers though:
unsigned int i = -1;
// i now holds the maximum value of `unsigned int`.
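The same trick works for any unsigned type, since converting -1 to an unsigned type is well defined and yields that type's maximum; a small sketch:

#include <stdio.h>

int main(void)
{
    unsigned char uc = -1;        /* becomes UCHAR_MAX  */
    unsigned short us = -1;       /* becomes USHRT_MAX  */
    unsigned int ui = -1;         /* becomes UINT_MAX   */
    unsigned long long ull = -1;  /* becomes ULLONG_MAX */

    printf("unsigned char: %u\n", (unsigned)uc);
    printf("unsigned short: %u\n", (unsigned)us);
    printf("unsigned int: %u\n", ui);
    printf("unsigned long long: %llu\n", ull);
    return 0;
}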
By definition, you cannot calculate the maximum value of a type in C by using variables of that very same type. It simply doesn't make any sense: the type will overflow when it goes "over the top". In the case of signed integer overflow, the behavior is undefined, meaning you will get a major bug if you attempt it.
The correct way to do this is to simply check SHRT_MAX from limits.h.
An alternative, somewhat more questionable way would be to create the maximum of an unsigned short and then divide that by 2. We can create the maximum by taking the bitwise inversion of the value 0.
#include <stdio.h>
#include <limits.h>

int main()
{
    printf("%hd\n", SHRT_MAX); // best way

    unsigned short ushort_max = ~0u;
    short short_max = ushort_max / 2;
    printf("%hd\n", short_max);

    return 0;
}
One note about your code:
Casts such as ((short int)2*i)>(short int)0 are completely superfluous. Most binary operators in C such as * and > implement something called "the usual arithmetic conversions", which is a way to implicitly convert and balance types of an expression. These implicit conversion rules will silently make both of the operands type int despite your casts.
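A tiny sketch that makes the promotion visible (sizeof does not evaluate its operand, it only reports the type's size):

#include <stdio.h>

int main(void)
{
    short a = 2, b = 3;

    /* both operands are promoted to int before the multiplication,
       so a * b has type int, not short, regardless of any casts on the operands */
    printf("sizeof(short) = %zu\n", sizeof(short));
    printf("sizeof(a * b) = %zu\n", sizeof(a * b));   /* equals sizeof(int) */
    return 0;
}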
You forgot to cast to short int during the comparison.
OK, here I assume that the computer handles integer overflow by wrapping around into negative integers, as I believe you assumed when writing this program.
code that outputs 32767:
#include <stdlib.h>
#include <stdio.h>
#include <malloc.h>

short int max_short(void)
{
    short int i = 1, j = 0, k = 0;

    while (i > k)
    {
        k = i;
        if (((short int)(2 * i)) > (short int)0)
            i *= 2;
        else
        {
            j = i;
            while ((short int)(i + j) <= (short int)0)
                j /= 2;
            i += j;
        }
    }
    return i;
}

int main() {
    printf("%d", max_short());
    while (1);
}
added 2 casts
I am new to C (and programming in general, minus a few weeks with Python). I am interested in learning how information is handled on a machine level, therefore I moved to C. Currently, I am working through some simple coding challenges and am having trouble finding information to resolve my current issue.
The challenge is to take N large integers into an array from input and print the sum of the numbers. The transition from Python to C has actually been more difficult than I expected due to the simplified nature of Python code.
Example input for the code below:
5
1000000001 1000000002 1000000003 1000000004 1000000005
Expected output:
5000000015
Code:
#include <stdio.h>

int main() {
    long long unsigned int sum = 0;
    int nums[200], n, i;

    scanf("%i", &n);
    for (i = 0; i =! n; i++) {
        scanf("%i", &nums[i]);
        sum = sum + nums[i];
    }
    printf("%llu", sum);
    return 0;
}
The program seems to accept input for N, but it stops there.
One last question: in simple terms, what is the difference between a signed and an unsigned variable?
Change your for loop like this:
for (i = 0; i != n; i++) {
    scanf("%i", &nums[i]);
    sum = sum + nums[i];
}
If you write i =! n, that is the same as i = !n. What that does is assign the logical negation of n to i. Since you gave a non-zero value to n, the result is zero, the condition is false, and the loop body never executes.
Welcome to C!
Regarding the signed vs unsigned question: signed types can have negative values and unsigned types can't, but they both take up the same space (number of bits) in memory. For instance, assuming two's complement representation and a 32-bit integer, the ranges of values are
signed: -2^31 to 2^31 - 1, or -2,147,483,648 to 2,147,483,647
unsigned: 0 to 2^32 - 1, or 0 to 4,294,967,295
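Rather than memorizing the numbers, you can print the actual limits of your platform from limits.h; a small sketch:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    printf("signed int: %d to %d\n", INT_MIN, INT_MAX);
    printf("unsigned int: 0 to %u\n", UINT_MAX);
    return 0;
}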
My problem is that, when I sum the values of an array (all positive; I verified this by printing them), I end up with a negative value. My code for the sum is:
int summcp = 0;

for (k = 0; k < SIMUL; k++)
{
    summcp += mcp[k];
}
printf("summcp: %d.\n", summcp);
Any hint about this problem would be appreciated.
You are invoking undefined behaviour.
As integer variables can only hold a limited range of values, going beyond this range is undefined by the standard; basically anything can happen. In your (and the most common) case, the value simply wraps around in its binary representation, and because the upper half of that range is interpreted as negative values, you read the result as negative.
To circumvent this, use a type for summcp which can hold all possible values. An alternative would be to check whether the next addition would overflow:
if ( summcp > INT_MAX - mcp[k] ) {
    // handle positive overflow
}
Note that the above only works for mcp[k] >= 0 and positive overflow. The other 3 cases have to be handled differently. It is in general best, faster and much easier to use a large enough type for the sum and test later on for overflow, if required.
Do not feel tempted to add first and test the result afterwards! As signed integer overflow is undefined behaviour, this will not work reliably on all architectures.
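For completeness, a sketch of a checked addition that covers both directions; the helper name add_checked and the sample values are made up for illustration (the sample assumes a 32-bit int):

#include <limits.h>
#include <stdio.h>

/* Hypothetical helper: returns 1 and stores a + b in *out if the addition
   is safe, 0 if it would overflow an int in either direction. */
static int add_checked(int a, int b, int *out)
{
    if (b > 0 && a > INT_MAX - b) return 0;   /* would exceed INT_MAX   */
    if (b < 0 && a < INT_MIN - b) return 0;   /* would go below INT_MIN */
    *out = a + b;
    return 1;
}

int main(void)
{
    int sum = 0;
    int sample[] = { 2000000000, 2000000000 };   /* made-up values that overflow a 32-bit int */
    size_t k;

    for (k = 0; k < sizeof sample / sizeof sample[0]; k++) {
        if (!add_checked(sum, sample[k], &sum)) {
            printf("overflow detected at element %zu\n", k);
            return 1;
        }
    }
    printf("sum: %d\n", sum);
    return 0;
}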
This smells like an overflow issue; you might want to add a check against that like
for (k = 0; k < SIMUL && (INT_MAX - summcp >= mcp[k]); k++)
{
    summcp += mcp[k];
}
if (k < SIMUL)
{
    // sum of all mcp values is larger than what an int can represent
}
else
{
    // use summcp
}
If you are running into overflow issues, you might want to use a wider type for summcp like long or long long.
EDIT
The problem is that the behavior of signed integer overflow is not well-defined; you'll get a different result for INT_MAX + 1 based on whether your platform uses one's complement, two's complement, sign magnitude, or some other representation for signed integers.
Like I said in my comment below, if mcp can contain negative values, you should add a check for underflow as well.
Regardless of the type you use (int, long, long long), you should keep the over- and underflow checks.
If mcp only ever contains non-negative values, then consider using an unsigned type for your sum. The advantage of this is that the behavior of unsigned integer overflow is well-defined, and the result will be the sum of all elements of mcp modulo UINT_MAX + 1 (or ULONG_MAX + 1, or ULLONG_MAX + 1, etc.).
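A tiny illustration of that well-defined wrap-around:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    unsigned int u = UINT_MAX;

    /* unsigned arithmetic is defined to wrap modulo UINT_MAX + 1 */
    printf("UINT_MAX = %u\n", u);
    printf("UINT_MAX + 1 = %u\n", u + 1u);   /* prints 0 */
    return 0;
}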
Declare the variable summcp like this:
long long int summcp = 0;
I think your final sum is going out of range. To fix this, you will have to declare summcp as long long int:
long long int summcp = 0;

for (k = 0; k < SIMUL; k++)
{
    summcp += mcp[k];
}
printf("summcp: %lld.\n", summcp);
An int variable yields garbage values if its range is exceeded.