I have written code to read two long integers (A and B) from standard input and print (to standard output) A to the power of B.
It works for 1^(some huge number), 3^3, etc.
But not for 13^16.
I've tried declaring ans as long int to solve it; that gave me a different value, but still not the right one.
#include <stdio.h>
#include <math.h>

int main()
{
    int x, n;
    long int ans;
    scanf("%d \n %d", &x, &n);
    ans = pow(x, n);
    printf("%d", ans);
    return 0;
}
pow(1, anything) is always 1. pow(3, 3) is 27. These are both quite small numbers and easily fit into a 32-bit integer. pow(13, 16) is (approximately) 6.65 × 10^17. This is too big for a 32-bit integer to contain. It will go into a 64-bit integer (although pow(14, 17) will not). It's likely that your compiler treats a long as a 32-bit value, which is not uncommon. You could try long long, which is likely to be 64 bits, or int64_t, which is explicitly 64 bits long.
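As a quick sanity check (a minimal sketch, not part of the original question), you can print the widths of these types on your platform to see whether long really is only 32 bits:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* On many 32-bit targets and on 64-bit Windows, long is only 32 bits wide. */
    printf("long      : %zu bits\n", sizeof(long) * 8);
    printf("long long : %zu bits\n", sizeof(long long) * 8);
    printf("int64_t   : %zu bits\n", sizeof(int64_t) * 8);
    return 0;
}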
Note though that the prototype for pow() is
double pow(double x, double y);
which means that it is returning a double-precision floating-point number and then coercing it into the type of your variable. A double (a 64-bit floating-point number) only has 53 bits of precision in its mantissa, which means you are not going to get the exact number when you cast it back to even a 64-bit integer. You could use powl(), whose prototype is
long double powl(long double x, long double y);
But long double might be defined as 80 bits or 128 bits, or even only 64 bits (Microsoft). It might give you the precision you need, but, such is the nature of power operations, your input numbers won't have to get much bigger to exceed the precision of even the longest long double.
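As an illustration only (a rough sketch, not guaranteed to be exact on every platform), you could compute the power with powl and round the result back to an integer:

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Sketch: 13^16 via powl, rounded back to an integer with llroundl. */
    long double r = powl(13.0L, 16.0L);
    unsigned long long v = (unsigned long long)llroundl(r);
    printf("%llu\n", v);
    /* Whether this prints exactly 665416609183179841 depends on how much
       precision your long double has (e.g. 80-bit x87 vs. 64-bit on MSVC). */
    return 0;
}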
If you really need to raise large numbers to large powers, you are going to need a big integer library.
Rather than use the floating-point pow() and friends, with their potentially limited precision, for an integer problem that fits within 64 bits (13^16 needs 60 bits), use an integer power function.
#include <stdio.h>

// Integer exponentiation by squaring: computes x^y exactly, as long as the
// result fits in an unsigned long long.
unsigned long long upow(unsigned x, unsigned y) {
    unsigned long long z = 1;
    unsigned long long xx = x;
    while (y) {
        if (y % 2) {   // current bit of the exponent is set
            z *= xx;
        }
        y /= 2;
        xx *= xx;      // square the base for the next bit
    }
    return z;
}

int main() {
    printf("%llu\n", upow(3, 3));
    printf("%llu\n", upow(13, 16));
}
Output
27
665416609183179841
If code needs to handle answers of more than 64 bits, consider long double (with potential loss of precision) or a big-integer library.
You have defined ans as long int, but then you are trying to print it as an int (%d takes the next argument and prints it as an int). So change printf("%d", ans) to printf("%ld", ans). Your code would look something like this:
#include <stdio.h>
#include <math.h>

int main()
{
    int x, n;
    long int ans;
    scanf("%d \n %d", &x, &n);
    ans = pow(x, n);
    printf("%ld", ans);
    return 0;
}
Related
Tried calculating the factorial of 65, got correct output. Anything greater than 65 results in an output of 0. Shocking, since I'm using unsigned long int. What is amiss?
Code:
#include <stdio.h>

void factorial(int unsigned long);

int main()
{
    int unsigned long num, result;
    printf("\nEnter number to obtain factorial : ");
    scanf("%ld", &num);
    factorial(num);
}

void factorial(int unsigned long x)
{
    register int unsigned long f = 1;
    register int unsigned long i;
    for (i = x; i >= 1; i--)
        f = f * i;
    printf("\nFactorial of %lu = %lu\n", x, f);
}
You certainly did not get the correct result for 65! log2(65!) is just over 302 bits (Google it), so you'd need a long int of at least 303 bits to calculate that correctly. There is no computer in the world where long int is over 300 bits (let's see how this answer ages!).
The largest factorial you can compute in 64 bits is 20! (which is about 2.4e18).
Adding on to @JohnZwinck's answer: the maximum value for a variable of type unsigned long long (ULLONG_MAX) is 18446744073709551615, so all factorials greater than 20! are going to overflow and produce garbage values.
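If you stay with 64-bit integers, you can at least detect the overflow instead of silently getting garbage. Here is a minimal sketch (the name checked_factorial is made up for illustration) that refuses to go past 20!:

#include <stdio.h>
#include <limits.h>

/* Returns n!, or 0 if the result would not fit in unsigned long long. */
unsigned long long checked_factorial(unsigned n)
{
    unsigned long long f = 1;
    for (unsigned i = 2; i <= n; i++) {
        if (f > ULLONG_MAX / i)   /* the next multiplication would overflow */
            return 0;
        f *= i;
    }
    return f;
}

int main(void)
{
    printf("20! = %llu\n", checked_factorial(20));  /* 2432902008176640000 */
    printf("21! = %llu\n", checked_factorial(21));  /* 0: does not fit in 64 bits */
    return 0;
}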
You can refer to this for more information.
For the following program.
#include <stdio.h>

int main()
{
    unsigned int a = 10;
    unsigned int b = 20;
    unsigned int c = 30;
    float d = -((a*b)*(c/3));
    printf("d = %f\n", d);
    return 0;
}
It is very strange that output is
d = 4294965248.000000
When I change the magic number 3 in the expression for d to 3.0, I get the correct result:
d = 2000.000000
If I change the type of a, b, and c to int, I also get the correct result.
I guess this error is caused by the conversion from unsigned int to float, but I do not know the details of how the strange result comes about.
I think you realize that you are applying the minus to an unsigned int before the assignment to float. If you run the code below, you will most likely get 4294965296:
#include <stdio.h>

int main()
{
    unsigned int a = 10;
    unsigned int b = 20;
    unsigned int c = 30;
    printf("%u", -((a*b)*(c/3)));
    return 0;
}
The -2000 to the right of your equals sign is set up as a signed integer (probably 32 bits in size) and will have the hexadecimal value 0xFFFFF830. The compiler generates code to move this signed integer into your unsigned integer x, which is also a 32-bit entity. The compiler assumes you only have a positive value to the right of the equals sign, so it simply moves all 32 bits into x. x now has the value 0xFFFFF830, which is 4294965296 if interpreted as a positive number. But the printf format of %d says the 32 bits are to be interpreted as a signed integer, so you get -2000. If you had used %u it would have printed as 4294965296.
#include <stdio.h>
#include <limits.h>

int main()
{
    float d = 4294965296;
    printf("d = %f\n\n", d);
    return 0;
}
When you convert 4294965296 to float, the number is too long to fit exactly into the fraction (mantissa) part, so some precision is lost. Because of that loss, you get 4294965248.000000, as I did.
The IEEE-754 floating-point standard is a standard for representing and manipulating floating-point quantities that is followed by all modern computer systems.

bit   31 | 30 ..... 23 | 22 .................... 0
       S |   EEEEEEEE  | MMMMMMMMMMMMMMMMMMMMMMM

The bit numbers count from the least-significant bit. The first bit is the sign (0 for positive, 1 for negative). The following 8 bits are the exponent in excess-127 binary notation; this means that the binary pattern 01111111 = 127 represents an exponent of 0, 10000000 = 128 represents 1, 01111110 = 126 represents -1, and so forth. The 24-bit mantissa fits in the remaining 23 bits, with its leading 1 stripped off as described above. (Source)
As you can see, when converting 4294965296 to float, the trailing bits (00011000) do not fit in the mantissa and are lost:
11111111111111111111100 00011000 0 <-- 4294965296
11111111111111111111100 00000000 0 <-- 4294965248
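If you want to see this for yourself, here is a small sketch (using memcpy to view the float's bits, which is one portable way to do it) that shows the rounded value and its IEEE-754 encoding:

#include <stdio.h>
#include <string.h>
#include <inttypes.h>

int main(void)
{
    float f = 4294965296.0f;            /* 2^32 - 2000: needs 32 significant bits */
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);     /* copy out the raw IEEE-754 encoding */
    printf("value: %.6f\n", f);         /* prints the rounded 4294965248.000000 */
    printf("bits : 0x%08" PRIX32 "\n", bits);  /* sign | exponent | 23-bit mantissa */
    return 0;
}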
This is because you use - on an unsigned int. Unary minus on an unsigned value does not produce a negative number; it wraps around modulo 2^32. Let's print some unsigned integers:
printf("Positive: %u\n", 2000);
printf("Negative: %u\n", -2000);
// Output:
// Positive: 2000
// Negative: 4294965296
Lets print the hex values:
printf("Positive: %x\n", 2000);
printf("Negative: %x\n", -2000);
// Output
// Positive: 7d0
// Negative: fffff830
As you can see, fffff830 is the two's-complement bit pattern of -2000, i.e. 2^32 - 2000. So the huge value comes from using - on an unsigned int, not from casting unsigned int to float.
As others have said, the issue is that you are trying to negate an unsigned number. Most of the solutions already given have you do some form of casting to float such that the arithmetic is done on floating point types. An alternate solution would be to cast the results of your arithmetic to int and then negate, that way the arithmetic operations will be done on integral types, which may or may not be preferable, depending on your actual use-case:
#include <stdio.h>

int main(void)
{
    unsigned int a = 10;
    unsigned int b = 20;
    unsigned int c = 30;
    float d = -(int)((a*b)*(c/3));
    printf("d = %f\n", d);
    return 0;
}
Your whole calculation will be done unsigned, so it is the same as
float d = -(2000u);
-2000 in unsigned int (assuming 32-bit int) is 4294965296.
This gets written into your float d, but as a float cannot store this exact number, it gets saved as 4294965248.
As a rule of thumb, you can say that float has a precision of 7 significant base-10 digits.
What is calculated is 2^32 - 2000, and then floating-point precision does the rest.
If you instead use 3.0 this changes the types in your calculation as follows
float d = -((a*b)*(c/3.0));
float d = -((unsigned*unsigned)*(unsigned/double));
float d = -((unsigned)*(double));
float d = -(double);
leaving you with the correct negative value.
You need to cast the ints to floats. Change
float d = -((a*b)*(c/3));
to
float d = -(((float)a*(float)b)*((float)c/3.0));
-((a*b)*(c/3)); is all performed in unsigned integer arithmetic, including the unary negation. Unary negation is well-defined for an unsigned type: mathematically the result is taken modulo 2^N, where N is the number of bits in unsigned int. When you assign that large number to the float, you encounter some loss of precision; due to its magnitude, the result is the nearest float to the unsigned value, and floats of that magnitude are spaced 256 apart.
If you change 3 to 3.0, then c / 3.0 is a double type, and the result of a * b is therefore converted to a double before being multiplied. This double is then assigned to a float, with the precision loss already observed.
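To check the claim about spacing, here is a small sketch using nextafterf: near 2^32, adjacent floats are 256 apart, which is why 4294965296 lands on 4294965248:

#include <stdio.h>
#include <math.h>

int main(void)
{
    unsigned int n = 4294965296u;          /* 2^32 - 2000 */
    float f = (float)n;                    /* nearest representable float */
    float up = nextafterf(f, INFINITY);    /* the next float above f */
    printf("f       = %.1f\n", f);         /* 4294965248.0 */
    printf("spacing = %.1f\n", up - f);    /* 256.0 at this magnitude */
    return 0;
}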
How do I use unsigned int properly? My function unsigned int sub(int num1, int num2); doesn't work when the user's input a is less than b. If I simply use int, we can expect a negative answer, and I'd need to determine which is larger before subtracting. I know that's easy, but maybe there is a way to use unsigned int to avoid that?
For instance, when a = 7 and b = 4, the answer is 3.000000. But when I flip them, the code gives me 4294967296.000000, which I think is a memory address.
#include <stdio.h>

unsigned int sub(int num1, int num2){
    unsigned int diff = 0;
    diff = num1 - num2;
    return diff;
}

int main(){
    printf("sub(7,4) = %u\n", sub(7,4));
    printf("sub(4,7) = %u\n", sub(4,7));
}
output:
sub(7,4) = 3
sub(4,7) = 4294967293
Unsigned numbers are unsigned... This means that they cannot be negative.
Instead they wrap (underflow or overflow).
If you do an experiment with an 8-bit unsigned, then you can see the effect of subtracting 1 from 0:
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t i;
    i = 1;
    printf("i: %hhu\n", i);
    i -= 1;
    printf("i: %hhu\n", i);
    i -= 1;
    printf("i: %hhu\n", i);
    return 0;
}
i: 1
i: 0
i: 255
255 is the largest value that an 8-bit unsigned can hold (2^8 - 1).
We can then do the same experiment with a 32-bit unsigned, using your 4 - 7:
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t i;
    i = 4;
    printf("i: %u\n", i);
    i -= 7;
    printf("i: %u\n", i);
    return 0;
}
i: 4
i: 4294967293
4294967293 is effectively 4 - 7 = -3, which wraps around to 2^32 - 3.
You also need to be careful of assigning an integer value (the return type of your sub() function) to a float... Generally this is something to avoid.
See below. x() returns 4294967293 as an unsigned int, but it is stored in a float... This float is then printed as 4294967296???
#include <stdio.h>
#include <stdint.h>

unsigned int x(void) {
    return 4294967293U;
}

int main(void) {
    float y;
    y = x();
    printf("y: %f\n", y);
    return 0;
}
y: 4294967296.000000
This is actually to do with the precision of float... it is impossible to store the exact value 4294967293 in a float.
What you see is called unsigned integer overflow. As you noted, unsigned integers can't hold negative numbers, so when you try to do something that would result in a negative number, seemingly weird things can occur.
If you work with 32-bit integers:
int (int32_t) can hold numbers between -2^31 and +2^31 - 1 (INT_MIN and INT_MAX);
unsigned int (uint32_t) can hold numbers between 0 and 2^32 - 1 (0 and UINT_MAX).
When you try to add something to an int that would lead to a number greater than the type can hold, it will overflow.
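You can print the actual limits for your platform from limits.h (a small sketch, assuming the usual 32-bit int):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    printf("INT_MIN  = %d\n", INT_MIN);    /* typically -2147483648 */
    printf("INT_MAX  = %d\n", INT_MAX);    /* typically  2147483647 */
    printf("UINT_MAX = %u\n", UINT_MAX);   /* typically  4294967295 */
    return 0;
}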
unsigned int cannot be used to represent a negative value. I believe what you want is the absolute value of the difference between a and b. If so, you can use the abs() function from stdlib.h; it takes an int i and returns the absolute value of i.
The reason why unsigned int gives a huge number in your case is the way integers are represented. The subtraction num1 - num2 is done in int, which is capable of storing the negative value -3, but when that result is converted to the unsigned int diff, the same sequence of bits is interpreted as 4294967293 instead.
It becomes a negative number; unsigned data types don't hold negative numbers, so instead it becomes a large number.
"absolute value of the difference between a and b" does not work for many combinations of a,b. Any a-b that overflows is undefined behavior. Obviously abs(INT_MAX - INT_MIN) will not generate the correct answer.
Also, using abs() invokes undefined behavior with a select value of int. abs(INT_MIN) is undefined behavior when -INT_MIN is not representable as an int.
To calculate the absolute value difference of 2 int, subtract them as unsigned.
unsigned int abssub(int num1, int num2){
    return (num1 > num2) ? (unsigned) num1 - num2 : (unsigned) num2 - num1;
}
I am getting precision loss when converting a big double (17+ digits) number to integer.
#include <stdio.h>

int main() {
    int n = 20;
    double acum = 1;
    while (n--) acum *= 9;
    printf("%.0f\n", acum);
    printf("%llu\n", (unsigned long long)acum);
    return 0;
}
The output of this code is:
12157665459056929000
12157665459056928768
I can't use unsigned long long for the calculations because this is just pseudocode, and I need the precision in the real code, where divisions are involved.
If I increase the number of decimals, the first output becomes, e.g., 12157665459056929000.0000000000.
I've tried round(acum) and trunc(acum), and in both cases the result was the same as the second output. Shouldn't they be equal to the first??
I know float has only about 6 decimal digits of precision and double has about 17. But what's wrong with the digits?!?
Actually, when I change acum's type to unsigned long long, like:
unsigned long long acum = 1;
the result is:
12157665459056928801
When I use Python to calculate the accurate answer:
>>> 9**20
12157665459056928801L
You see?
12157665459056929000 is not an accurate answer at all; it is actually an approximation of the exact value.
Then I change the code like this:
printf("%llu\n", (unsigned long long)1.2157665459056929e+019);
printf("%llu\n", (unsigned long long)1.2157665459056928e+019);
printf("%llu\n", (unsigned long long)1.2157665459056927e+019);
printf("%llu\n", (unsigned long long)1.2157665459056926e+019);
And result is:
12157665459056928768
12157665459056928768
12157665459056926720
12157665459056926720
In fact, 19 significant digits exceed the precision of a double (roughly 15-17 decimal digits), so the result of converting such a big number back to an integer is not exact and cannot be relied upon.
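You can query those limits from float.h; a minimal sketch:

#include <stdio.h>
#include <float.h>

int main(void)
{
    printf("DBL_MANT_DIG = %d bits\n", DBL_MANT_DIG);       /* typically 53 */
    printf("DBL_DIG      = %d decimal digits\n", DBL_DIG);  /* typically 15 */
    return 0;
}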
So recently, I met this problem:
Given: int x; unsigned int y; x = 0xAB78; y = 0xAB78; write a program to display the decimal values of x and y on the screen.
And here is the program I wrote (I am on a 64-bit Windows 7 machine):
#include <stdio.h>
#include <math.h>

int main()
{
    short int x;
    unsigned short int y;
    x = 0xAB78;
    y = 0xAB78;
    printf("The decimal values of x and y are: %d & %hu.\n", x, y);
    return 0;
}
The output I got is:
x=-21640, and y=43896.
I am ok with the unsigned hex number,
since 0xAB78 = 10*16^3 + 11*16^2 + 7*16^1 + 8*16^0 = 43896.
However for the signed hex number,
should it be: -1*16^3 + 11*16^2 + 7*16^1 +8*16^0 = -1160?
why is it -21640 then?
Thank you for your time!
The range of signed short int is [-32768, 32767]. The value 0xAB78 (i.e. 43896) won't fit into it, so it wraps around to -32768 + (43896 - 32768), ending up as -21640.
If the constant does not fit in your signed integer type, the result is implementation-defined.
C allows 1s-complement, sign-and-magnitude, and 2s-complement for signed numbers. Also, short must be at least 16 bits wide and may contain padding (non-value) bits. Trap representations are also possible.
That gives quite a lot of possible results.
For POSIX and Windows machines, 16-bit 2s complement is mandatory, so just reduce the constant modulo 2^16; if the result is >= 2^15, subtract 2^16.
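As an illustration of that reduction (a sketch assuming a 16-bit two's-complement short, as on POSIX and Windows):

#include <stdio.h>

int main(void)
{
    unsigned long v = 0xAB78;          /* 43896 */
    long wrapped = (long)(v % 65536);  /* reduce modulo 2^16 */
    if (wrapped >= 32768)              /* values >= 2^15 map to negative shorts */
        wrapped -= 65536;
    printf("%ld\n", wrapped);          /* prints -21640 */
    return 0;
}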