Multiplying two long numbers - c

I have tried to multiply two numbers, i.e. 100000 and 100000 + 1, through a C program. But I am not getting the correct output.
printf("%lld",(100000)*(100001));
I have tried the above code on different compilers, but I am getting the same 1410165408 instead of 10000100000.

Well, let's multiply
int64_t a = 100000;
int64_t b = 100001;
int64_t c = a * b;
And we'll get (binary)
1001010100000011010110101010100000 /* 10000100000 decimal */
but if you convert it to int32_t
int32_t d = (int32_t) c;
you'll get the last 32 bits only (and throw away the top two bits, 10):
01010100000011010110101010100000 /* 1410165408 decimal */
The simplest way out, probably, is to declare both constants as 64-bit values (the LL suffix stands for long long):
printf("%lld",(100000LL)*(100001LL));

In C, the type used for a calculation is determined from the types of the operands, not from the type of the variable you store the result in.
Plain integer constants such as 100000 are of type int, because they fit inside one. The multiplication 100000 * 100001 will however not fit, so you get integer overflow and undefined behavior. Switching to long won't necessarily solve anything, because long might be 32 bits too.
In addition, printing an int with the %lld format specifier is also undefined behavior on most systems.
The root of all evil here is the crappy default types in C (called "primitive data types" for a reason). Simply get rid of them and all their uncertainties, and all your bugs will go away with them:
#include <stdio.h>
#include <inttypes.h>
int main(void)
{
printf("%"PRIu64, (uint64_t)100000 * (uint64_t)100001);
return 0;
}
Or equivalent: UINT64_C(100000) * UINT64_C(100001).

Your two integers are int, which makes the result int too. That the printf() format specifier says %lld, which needs long long int, doesn't matter.
You can cast or use suffixes:
printf("%lld", 100000LL * 100001LL);
This prints 10000100000. Of course there's still a limit, since the number of bits in a long long int is still constant.

You can do it like this
long long int a = 100000;
long long int b = 100001;
printf("%lld",(a)*(b));
this will give the correct answer.
What you are doing is (100000)*(100001): by default the compiler treats both constants as int, so the multiplication is also done as int (and overflows).
printf then reads that int result as if it were a long long int.

Related

What data types in C language are these variables and where I am wrong?

Now I am learning the C language and I have two tasks: define variables and print them. Here is my code:
#include <stdio.h>
int main()
{
printf("Excersise 1\n\n");
int cVarOne = -250;
int cVarTwo = 250;
int cVarThree = 4589498;
double cVarFour = 10000000000000000000;
double cVarFive = -9000000000000000000;
printf("%d\n%d\n%d\n%.0lf\n%.0lf\n\n", cVarOne,cVarTwo,cVarThree,cVarFour,cVarFive);
printf("Excersise Two\n\n");
unsigned short cVarSix = 43112;
int cVarSeven = -1357674;
int cVarEight = 1357674;
int cVarNine = -1357674000;
unsigned int cVarTen = 3657895000;
printf("%u\n%d\n%d\n%d\n%lu\n", cVarSeven, cVarEight, cVarNine, cVarTen);
return 0;
}
The problem is that for exercise 1 the following message appears:
Excersise 1
and for exercise two the numbers that are assigned to the variables are not printed properly. Here is the result:
Excersise 2
I tried to use different types, but without result. Where am I wrong? What data types and specifiers should be used and why do these errors occur?
I am using this online compiler: https://www.onlinegdb.com/online_c_compiler
Thanks.
The warning in Exercise 2 is self-explanatory: simply use the correct conversion specifier for unsigned int, as taught in chapter 1 of your favourite C book. The rest of this answer is regarding Exercise 1:
Why you get the warning:
Integer constants, these things: 123, have a type just like variables.
10000000000000000000 is not a floating point type since it doesn't end with . (double) or .f (float), but a fixed point signed integer. The compiler will try to find the smallest signed integer type where the value will fit, but it fails to find one at all.
The reason is because 10,000,000,000,000,000,000 = 10^19 is larger than the largest signed 64 bit integer, which can only hold values up to 2^63 - 1 = 9.22*10^18.
The compiler notes however that the value might fit in a 64 bit unsigned integer since 2^64 -1 = 18.44*10^18, large enough. Hence it tries that, but gives you the warning for writing such smelly code.
Solution:
Use 10000000000000000000.0 which is of type double.
Or better yet, use 10e18 which is also double but readable by humans.
General best practices:
Never mix floating point and fixed point in the same expression.
Never mix float and double in the same expression. (Stay clear of float entirely on most mainstream systems like PC.)
When size and portability matters, never use the default integer types of C like int, long etc. Use the types from stdint.h instead.
Each integer type has a fixed size in bytes for internal representations of values. As a result there is a limit for the maximum and a minimum value that can be stored in an integer type.
For example for the type uintmax_t the following characteristics can be provided by the compiler
#include <stdio.h>
#include <inttypes.h>
int main( void )
{
uintmax_t x = UINTMAX_MAX;
printf( "sizeof( uintmax_t ) = %zu and the maximum value is %"
PRIuMAX "\n", sizeof( x ), x );
}
The program output is
sizeof( uintmax_t ) = 8 and the maximum value is 18446744073709551615
When you use an integer constant without a suffix, like this 10000000000000000000, the compiler selects as the type of the constant the first signed integer type that can represent it. However, no signed integer type, not even long long int, can represent this constant. So the compiler issues a message that such a constant can be represented only in an unsigned integer type.
You could use a constant with a suffix like
double cVarFour = 10000000000000000000llu;
The suffix determines that the constant has the type unsigned long long int.
As for the second message, you have to use the correct conversion specifiers for the arguments.
The conversion specifier lu is designed to output values of objects of the type unsigned long int. However the corresponding argument of the call of printf
printf("%u\n%d\n%d\n%d\n%lu\n", cVarSeven, cVarEight, cVarNine, cVarTen);
has the type unsigned int
unsigned int cVarTen = 3657895000;
So either use the conversion specifier u like
printf("%u\n%d\n%d\n%d\n%u\n", cVarSeven, cVarEight, cVarNine, cVarTen);
or cast the argument to the type unsigned long like
printf("%u\n%d\n%d\n%d\n%lu\n", cVarSeven, cVarEight, cVarNine, ( unsigned long )cVarTen);
Also, the length modifier l in the format specification %.0lf has no effect. You could just write %.0f.

Declaration of Long (Modifier) in c

Original question
I have a piece of code here:
unsigned long int a =100000;
int a =100000UL;
Do the above two lines represent the same thing?
Revised question
#include <stdio.h>
int main(void)
{
long int x=50000*1024000;
printf("%ld\n",x);
return 0;
}
For a long int, my compiler uses 8 bytes, so the maximum value is 2^63 - 1. So here 50000*1024000 results in something which is definitely less than the maximum of long int. So why does my compiler warn of overflow and give the wrong output?
Original question
The two definitions are not the same.
The types of the variables are different — unsigned long versus (signed) int. The behaviour of these types is quite different because of the difference in signedness. They also may have quite different ranges of valid values.
Technically, the numeric constants are different too; the first is a (signed) int unless int cannot hold the value 100,000, in which case it will be (signed) long instead. That will be converted to unsigned long and assigned to the first a. The other constant is an unsigned long value because of the UL integer suffix, and will be converted to int using the normal rules. If int cannot hold the value 100,000, the normal conversion rules will apply. It is legitimate, though very unusual these days, for sizeof(int) == 2 where CHAR_BIT is 8, so int is a 16-bit signed type. That size is normally associated with short, and int is normally a 32-bit signed type, but the standard does not rule out the alternative.
Most likely, the two variants of a both end up holding the value 100,000, but they are not the same because of the difference in signedness.
Revised question
The arithmetic is done in terms of the two operands of the * operator, and those are 50000 and 1024000. Each of those fits in a 32-bit int, so the calculation is done as int — and the result would be 51200000000, but that requires at least 36 bits to represent the value, so you have 32-bit arithmetic overflow, and the result is undefined behaviour.
After the arithmetic is complete, the int result is converted to 64-bit long — not before.
The compiler is correct to warn, and because you invoked undefined behaviour, anything that is printed is 'correct'.
To fix the code, you can write:
#include <stdio.h>
int main(void)
{
long x = 50000L * 1024000L;
printf("%ld\n", x);
return 0;
}
Strictly, you only need one of the two L suffixes, but symmetry suggests using both. You could use one or two (long) casts instead if you prefer. You can save on spaces too, if you wish, but they help the readability of the code.
The long int and int are not necessarily the same, but they might be. Unsigned and signed are not the same thing. Numerical constants can represent the same value without being the same thing, as in 100000 and 100000UL (the former being a signed int, the latter being unsigned long).

Integer Overflow and the difference between pow() and multiplication

When I tried this multiplication, the compiler gave an integer overflow error:
int main(){
long long int x;
x = 55201 * 55201;
printf("%lld", x);
return 0;
}
But when I do the same operation with the pow() function, I do not get any error.
int main(){
long long int x;
x = pow(55201, 2);
printf("%lld", x);
return 0;
}
Why is that so? I must use the first code.
You need to change your code like this
int main(){
long long int x;
x = 55201LL * 55201LL; // <--- notice the LL
printf("%lld", x);
return 0;
}
so that the multiplication is done as long long.
When you use the pow function you don't see any problems because pow uses floating point for calculations.
Here (Linux 64 bits, gcc 5.2.1), 55201 is an integer literal of size 4, and the expression 55201 * 55201 is evaluated in a 4-byte integer before being assigned to your long long int.
One option is storing the factor in another variable before multiplying, to increase the range.
int main(){
long long int x, factor;
factor = 55201;
x = factor * factor;
printf("%lld", x);
return 0;
}
In the code below, 55201 is taken as an int by default, so the result of the multiplication will also be an int. Because both operands are constants, the compiler evaluates the multiplication at compile time and sees that it overflows the int limit. That's why the compiler generates the warning, i.e. integer overflow:
int main(){
long long int x;
x = 55201 * 55201;
printf("%lld", x);
return 0;
}
The declaration of pow is:
double pow(double x, double y);
But in the second case the function pow takes every argument as double, so 55201 and 2 are implicitly converted to double, and the calculation takes place in double precision. The result does not cross the limit of the double type, and hence the compiler does not generate any overflow message in this case.
The same result can be established using method 1 as follows:
long long int result, number;
number = 55201;
result = number * number;
// Print result as..
printf("%lld\n", result);
The problem is that the operation is performed with the largest type of the operands, at least int (C standard, 6.3.1.8). The assignment is just another expression in C, and the type of the left-hand side of the = is irrelevant for the right-hand side operation.
On your platform, both constants fit into an int, so the expression 55201 * 55201 is evaluated as int. The problem is that the result does not fit into an int, which generates an overflow.
Signed integer overflow is undefined behaviour. This means everything can happen. Luckily your compiler is clever enough to detect this and warn you instead of the computer jumping out of the window. Briefly: avoid it!
The solution is to perform the operation with a type which can hold the full result. A short calculation yields that the product requires 32 bits to represent the value, thus an unsigned long would be sufficient. If you want a signed integer, you need another bit for the sign, i.e. 33 bits. Such a type is very rare nowadays, so you have to use a long long, which has at least 64 bits. (Don't feel tempted to use long, even if it has 64 bits on your platform; this makes your code implementation-defined, thus non-portable, without any benefit.)
For this, you need at least one of the operands to have the type of the result type:
x = 55201LL * 55201; // first factor is a long long constant
If variables are involved use a cast:
long f1 = 55201; // an int is not guaranteed to hold this value!
x = (long long)f1 * 55201;
Note that no LL suffix is used for the constants here. They automatically get the smallest type (at least int) which can represent the value.
The other expression x = pow(55201, 2) uses a floating point function. Thus the arguments are converted to double before pow is called. The double result is converted by the assignment operator to the left hand side type.
This has two problems:
A double is not guaranteed to have a mantissa of 63 bits (excluding sign) like a long long; the common IEEE754 implementations, with their 53-bit significand, have this problem. (This is not relevant for this specific calculation.)
All floating point arithmetic may include rounding errors, so the result might deviate from the exact result. That's why you have to use the first version.
As a general rule one should never use floating point arithmetic if an exact result is required. And mixing floating point and integer should be done very cautiously.

left shifting a positive int is giving me a negative number

I'm using shifts to make optimizations on some math I have to do in a program written in C:
int h;
h = 104;
h = (h<<8)+(h<<6);
printf("offset: %d",h);
I get this result:
offset: -32256
When what I expect is
(104* (2^8) ) + (104 * (2^6)) = 33280
Can anyone explain why I'm getting the negative result and what I can do to get the result I expect?
I'm using the BORLANDC compiler with DOSBOX to run my program, if that's useful.
On the Borland C compiler, the size of int is 16 bits. So the statement
h = (h<<8)+(h<<6);
overflows a signed integer. The individual shift operations are fine as the results are within the limit of what a 16-bit signed integer can hold. But the addition results in overflow. Signed integer overflow is undefined behaviour in C.
It seems that in the environment where the program was run, type int has a width of 2 bytes and its maximum value is +32767.
So when a positive value exceeds this limit, the sign bit can be set and you get a negative value.
You can check the maximum positive value that can be stored in an object of type int the following way:
#include <stdio.h>
#include <limits.h>
int main( void )
{
printf( "The maximum is %d\n", INT_MAX );
}
Instead of the type int you could use the type unsigned int, or the types long or unsigned long.
For example the maximum value for type unsigned int provided by your compiler can be equal to 65535. It is enough to store the result of your expression.

Why is there a need for a cast in the following program?

Posted the whole code(if needed)
/* multiplication of n1 and n2*/
#include<stdio.h>
int arr[200]; //for storing the binary representation of n2
int k=0; // to keep the count of the no of digits in the binary representation
void bin(int n)
{
if(n>1)
bin(n/2);
arr[k++]=n%2;
}
int main()
{
int n1=1500000;
int n2=10000000;
int i=0,t=0;
long long int temp,pro=0;
bin(n2); // calculating the binary of n2 and storing it in 'arr'
for(i=k-1; i>=0 ;i--)
{
temp = (n1*arr[i]) << t; /* Why need cast here ? */
pro = pro + temp;
t++;
}
printf("product: %lld",pro);
return 0;
}
In the above program, I am not getting the desired output. But when I do this:
temp = (n1*(long long int)arr[i]) << t;
then I am getting the correct output!
I am not able to understand the above behavior.
It seems likely that on your system int is 32 bits and long long int is 64 bits.
n1 and arr[i] are both int, and the result of the multiplication is then int as well. But there are not enough bits in int to hold the answer, because it is too big.
When you cast one operand of the multiplication to long long int, the result will be long long int too.
n1, as well as arr[i], are integers. Without the cast, you get an integer multiplication (which may overflow), shift that integer left (again producing an integer result that may overflow), then assign that integer to temp.
If you cast arr[i] to long long, all calculations are done in long long. So, if your integers are 32 bit (and thus limited to about 2e9), your integers will overflow, but the long longs, which should be 64 bit, will not.
I suspect that an int isn't large enough to store the result of (n1*arr[i]), so casting to a (long long int) gives enough room to store the result.
int: not smaller than short; at least 16 bits.
long long int: not smaller than long; at least 64 bits.
Have a look at C++ Data Types.
A multiplication between two integers goes into an integer unless you cast one of them to something else. So in the first case the result is stored into an integer (32-bit signed on a 32-bit system), in the second case into a long long (64-bit signed on a 32-bit system). You may prefer using types like int64_t to have better portability.
You can use sizeof(int), sizeof(long long) to see the difference between the two.