I want to multiply 2 very big hex numbers and print them out like for example:
28B2D48D74212E4F x 6734B42C025D5CF7 = 1068547cd3052bbe5688de35695b1239
Since I expected it to be a very big number I used unsigned long long int type:
unsigned long long int x = 0x28B2D48D74212E4F;
unsigned long long int y = 0x6734B42C025D5CF7;
and print the multiplication like this:
fprintf(stdout, "%llx\n", x*y);
What I get is exactly the lower half of the expected result:
5688de35695b1239
Why does it truncate to exactly the lower half? Is there something bigger than unsigned long long?
The result you're looking for won't fit in a 64-bit unsigned long long, which is its usual size on a 64-bit platform; unsigned multiplication wraps modulo 2^64, so the high bits are simply dropped.
Newer versions of GCC do support 128-bit integers on 64-bit machines with __int128 (and unsigned __int128), and this works:
unsigned long long int x = 0x28B2D48D74212E4FULL;
unsigned long long int y = 0x6734B42C025D5CF7ULL;
unsigned __int128 xy = x * (unsigned __int128)y;
Note that you have to cast one of x or y to the wider type so the multiplication is done in 128 bits; otherwise that promotion to 128 is not done until after the (truncated) 64-bit multiply.
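To make the difference concrete, here is a minimal sketch (assuming a GCC/Clang target that provides unsigned __int128) contrasting the two orderings:
#include <stdio.h>
int main(void)
{
    unsigned long long x = 0x28B2D48D74212E4FULL;
    unsigned long long y = 0x6734B42C025D5CF7ULL;
    /* Wrong: x*y is evaluated in 64 bits first, then widened - high bits already lost. */
    unsigned __int128 truncated = (unsigned __int128)(x * y);
    /* Right: one operand is widened first, so the multiply happens in 128 bits. */
    unsigned __int128 full = (unsigned __int128)x * y;
    printf("truncated high word: %016llx\n", (unsigned long long)(truncated >> 64)); /* 0 */
    printf("full high word:      %016llx\n", (unsigned long long)(full >> 64));      /* 1068547cd3052bbe */
    return 0;
}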
The problem is, as far as I can tell, printf() doesn't have a way to do this easily, so you're going to have to roll your own a bit.
Some reasonable discussion here: how to print __uint128_t number using gcc?
But this worked for me on:
gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39)
#include <stdio.h>
int main()
{
unsigned long long int x = 0x28B2D48D74212E4F;
unsigned long long int y = 0x6734B42C025D5CF7;
unsigned __int128 xy = x * (unsigned __int128)y;
printf("Result = %016llx%016llx\n",
(unsigned long long)( xy >> 64),
(unsigned long long)( xy & 0xFFFFFFFFFFFFFFFFULL));
return 0;
}
The casts inside the printf are important: otherwise the shifting/masking are done in 128-bit scalars, and those 128 bits pushed onto the stack, but then each %llx expects 64 bits.
Note that this is all entirely dependent on the underlying platform and is not portable; there's surely a way to use various #ifdefs and sizeofs to make it more general, but there's probably no super awesome way to make this work everywhere.
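For instance, one common guard is the __SIZEOF_INT128__ macro, which GCC and Clang predefine when the 128-bit type is available; a sketch under that assumption:
#include <stdio.h>
int main(void)
{
    unsigned long long x = 0x28B2D48D74212E4FULL;
    unsigned long long y = 0x6734B42C025D5CF7ULL;
#ifdef __SIZEOF_INT128__
    /* 128-bit path: widen one operand, then split the product for printf. */
    unsigned __int128 xy = (unsigned __int128)x * y;
    printf("%016llx%016llx\n",
           (unsigned long long)(xy >> 64),
           (unsigned long long)(xy & 0xFFFFFFFFFFFFFFFFULL));
#else
    /* No 128-bit type: fall back to a hand-rolled wide multiply or a bignum library. */
    (void)x; (void)y;
    puts("unsigned __int128 not available with this compiler");
#endif
    return 0;
}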
Related
I am trying to store "10000000000000000000" (19 zeros) in a long long int data type.
#if __WORDSIZE == 64
typedef long long int intmaximum_t;
#else
__extension__
typedef unsigned long int intmaximum_t;
#endif
const intmaximum_t n = 10000000000000000000;
But it is giving output "-8446744073709551616" (in negative).
I have 64 bit machine with ubuntu OS. How to store this value?
The largest possible value for a 64 bit long long int is 9,223,372,036,854,775,807. Your number is bigger than that. (Note that the value you got differs from the value you wanted by exactly 2^64 = 18,446,744,073,709,551,616 - that's not a coincidence but a property of 2's complement wraparound.)
It would fit in an unsigned long long int, but if you need a signed type you'll need to use a large number library for your number, or __int128 if your compiler supports it.
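For illustration, a minimal sketch using GCC's __int128 (assuming a compiler that provides it; the narrowing cast in the printf is safe only because 10^19 happens to fit in 64 unsigned bits):
#include <stdio.h>
int main(void)
{
    __int128 big = 10000000000000000000ULL; /* the constant itself fits in unsigned long long */
    printf("%llu\n", (unsigned long long)big); /* printf has no standard __int128 specifier */
    return 0;
}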
The value 10000000000000000000 is too large to fit in a signed 64-bit integer but will fit in an unsigned 64-bit integer. So when you attempt to assign the value it gets converted in an implementation-defined way (typically by just reinterpreting the binary representation directly), and it prints as negative because you are most likely using %d or %ld as your format specifier.
You need to declare your variable as unsigned long long and print it with the %llu format specifier.
unsigned long long x = 10000000000000000000ULL; /* ULL suffix: an unsuffixed decimal constant can only take a signed type */
printf("x=%llu\n", x);
So... the modulo operation doesn't seem to work on a 64-bit value of all ones.
Here is my C code to set up the edge case:
#include <stdio.h>
int main(int argc, char *argv[]) {
long long max_ll = 0xFFFFFFFFFFFFFFFF;
long long large_ll = 0x0FFFFFFFFFFFFFFF;
long long mask_ll = 0x00000F0000000000;
printf("\n64-bit numbers:\n");
printf("0x%016llX\n", max_ll % mask_ll);
printf("0x%016llX\n", large_ll % mask_ll);
long max_l = 0xFFFFFFFF;
long large_l = 0x0FFFFFFF;
long mask_l = 0x00000F00;
printf("\n32-bit numbers:\n");
printf("0x%08lX\n", max_l % mask_l);
printf("0x%08lX\n", large_l % mask_l);
return 0;
}
The output shows this:
64-bit numbers:
0xFFFFFFFFFFFFFFFF
0x000000FFFFFFFFFF
32-bit numbers:
0xFFFFFFFF
0x000000FF
What is going on here?
Why doesn't modulo work on a 64-bit value of all ones, but it will on a 32-bit value of all ones?
Is this a bug with the Intel CPU? Or with C somehow? Or is it something else?
More Info
I'm on a Windows 10 machine with an Intel i5-4570S CPU. I used the cl compiler from Visual Studio 2015.
I also verified this result using the Windows Calculator app (Version 10.1601.49020.0) in Programmer mode. If you take 0xFFFF FFFF FFFF FFFF modulo anything, it just returns itself.
Specifying unsigned vs signed didn't seem to make any difference.
Please enlighten me :) I actually did have a use case for this operation... so it's not purely academic.
Your program causes undefined behaviour by using the wrong format specifier.
%llX may only be used for unsigned long long. If you use the right specifier, %lld, then the apparent mystery will go away:
#include <stdio.h>
int main(int argc, char* argv[])
{
long long max_ll = 0xFFFFFFFFFFFFFFFF;
long long mask_ll = 0x00000F0000000000;
printf("%lld %% %lld = %lld\n", max_ll, mask_ll, max_ll % mask_ll);
}
Output:
-1 % 16492674416640 = -1
In ISO C the definition of the % operator is such that (a/b)*b + a%b == a. Also, for negative numbers, / follows "truncation towards zero".
So -1 / 16492674416640 is 0, therefore -1 % 16492674416640 must be -1 to make the above formula work.
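A quick check of that identity:
#include <stdio.h>
int main(void)
{
    long long a = -1;
    long long b = 0x00000F0000000000LL; /* 16492674416640 */
    printf("a / b = %lld\n", a / b);    /* 0: division truncates toward zero */
    printf("a %% b = %lld\n", a % b);   /* -1 */
    printf("(a/b)*b + a%%b = %lld\n", (a / b) * b + a % b); /* -1 == a */
    return 0;
}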
As discussed in comments, the following line:
long long max_ll = 0xFFFFFFFFFFFFFFFF;
causes implementation-defined behaviour (assuming that your system has long long as a 64-bit type). The constant 0xFFFFFFFFFFFFFFFF has type unsigned long long, and it is out of range for long long whose maximum permitted value is 0x7FFFFFFFFFFFFFFF.
When an out-of-range assignment is made to a signed type, the behaviour is implementation-defined, which means the compiler documentation must say what happens.
Typically, this will be defined as generating the value which is in range of long long and has the same representation as the unsigned long long constant. In 2's complement, (long long)-1 has the same representation as the unsigned long long value 0xFFFFFFFFFFFFFFFF, which explains why you ended up with max_ll holding the value -1.
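A minimal check of that conversion (the -1 result assumes a two's-complement implementation, which is what mainstream compilers document):
#include <stdio.h>
int main(void)
{
    unsigned long long u = 0xFFFFFFFFFFFFFFFFULL;
    long long s = (long long)u; /* out of range: result is implementation-defined */
    printf("%lld\n", s);        /* prints -1 on two's-complement systems */
    return 0;
}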
Actually it does make a difference whether the values are defined as signed or unsigned:
#include <stdio.h>
#include <limits.h>
int main(void) {
#if ULLONG_MAX == 0xFFFFFFFFFFFFFFFF
long long max_ll = 0xFFFFFFFFFFFFFFFF; // converts to -1LL
long long large_ll = 0x0FFFFFFFFFFFFFFF;
long long mask_ll = 0x00000F0000000000;
printf("\n" "signed 64-bit numbers:\n");
printf("0x%016llX\n", max_ll % mask_ll);
printf("0x%016llX\n", large_ll % mask_ll);
unsigned long long max_ull = 0xFFFFFFFFFFFFFFFF;
unsigned long long large_ull = 0x0FFFFFFFFFFFFFFF;
unsigned long long mask_ull = 0x00000F0000000000;
printf("\n" "unsigned 64-bit numbers:\n");
printf("0x%016llX\n", max_ull % mask_ull);
printf("0x%016llX\n", large_ull % mask_ull);
#endif
#if UINT_MAX == 0xFFFFFFFF
int max_l = 0xFFFFFFFF; // converts to -1;
int large_l = 0x0FFFFFFF;
int mask_l = 0x00000F00;
printf("\n" "signed 32-bit numbers:\n");
printf("0x%08X\n", max_l % mask_l);
printf("0x%08X\n", large_l % mask_l);
unsigned int max_ul = 0xFFFFFFFF;
unsigned int large_ul = 0x0FFFFFFF;
unsigned int mask_ul = 0x00000F00;
printf("\n" "unsigned 32-bit numbers:\n");
printf("0x%08X\n", max_ul % mask_ul);
printf("0x%08X\n", large_ul % mask_ul);
#endif
return 0;
}
Produces this output:
signed 64-bit numbers:
0xFFFFFFFFFFFFFFFF
0x000000FFFFFFFFFF
unsigned 64-bit numbers:
0x000000FFFFFFFFFF
0x000000FFFFFFFFFF
signed 32-bit numbers:
0xFFFFFFFF
0x000000FF
unsigned 32-bit numbers:
0x000000FF
0x000000FF
64 bit hex constant 0xFFFFFFFFFFFFFFFF has value -1 when stored into a long long. This is actually implementation defined because of out of range conversion into a signed type, but on Intel processors, with current compilers, the conversion just keeps the same bit pattern.
Note that you are not using the fixed-size integers defined in <stdint.h>: int64_t, uint64_t, int32_t and uint32_t. The long long type is specified by the standard as having at least 64 bits, and on Intel x86_64 it has exactly 64. long has at least 32 bits, but for the same processor its size differs between environments: 32 bits on Windows 10 (even in 64-bit mode) and 64 bits on macOS and 64-bit Linux. This is why you may observe surprising behavior in the long case, where unsigned and signed can produce the same result: they don't on Windows, but they do on Linux and macOS, because there the computation is done in 64 bits and these values are just positive numbers.
Also note that LLONG_MIN / -1 and LLONG_MIN % -1 both invoke undefined behavior because of signed arithmetic overflow, and this is not silently ignored on Intel PCs: it usually raises an uncaught exception that exits the program, just like 1 / 0 and 1 % 0.
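A hedged sketch of the usual defensive check (safe_div is just an illustrative name), rejecting both hazardous cases before dividing:
#include <limits.h>
#include <stdio.h>
/* Returns 1 and stores the quotient on success; returns 0 for the two
   undefined cases: division by zero and LLONG_MIN / -1. */
static int safe_div(long long a, long long b, long long *q)
{
    if (b == 0 || (a == LLONG_MIN && b == -1))
        return 0;
    *q = a / b;
    return 1;
}
int main(void)
{
    long long q;
    if (safe_div(LLONG_MIN, -1, &q))
        printf("%lld\n", q);
    else
        puts("division would overflow or divide by zero");
    return 0;
}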
Try putting unsigned before your long long. As a signed number, your 0xFF...FF is actually -1 on most platforms.
Also, in your code, your "32-bit" numbers are declared long, whose size is platform-dependent (32 bits on Windows, 64 bits on 64-bit Linux/macOS), so they are not guaranteed to be 32 bits.
Posted the whole code (if needed):
/* multiplication of n1 and n2*/
#include<stdio.h>
int arr[200]; //for storing the binary representation of n2
int k=0; // to keep the count of the no of digits in the binary representation
void bin(int n)
{
if(n>1)
bin(n/2);
arr[k++]=n%2;
}
int main()
{
int n1=1500000;
int n2=10000000;
int i=0,t=0;
long long int temp,pro=0;
bin(n2); // calculating the binary of n2 and storing it in 'arr'
for(i=k-1; i>=0 ;i--)
{
temp = (n1*arr[i]) << t; /* Why need cast here ? */
pro = pro + temp;
t++;
}
printf("product: %lld",pro);
return 0;
}
In the above program, I am not getting the desired output. But when I do this:
temp = (n1*(long long int)arr[i]) << t;
then I am getting the correct output!
I am not able to understand this behavior.
It seems likely that on your system int is 32 bits and long long int is 64 bits.
n1 and arr[i] are both int, so the result of the multiplication is also int. But there are not enough bits in an int to hold the answer, because it is too big.
When you cast one member of the operation to long long int, the result will be long long int as well.
n1, as well as arr[i], are integers. Without the cast, you get an integer multiplication (which may overflow), shift that integer left (again producing an integer result that may overflow), then assign that integer to temp.
If you cast arr[i] to long long, all calculations are done in long long. So, if your ints are 32 bit (and thus limited to about 2.1e9), the int arithmetic will overflow, but the long longs, which should be 64 bit, will not.
I suspect that an int isn't large enough to store the result of (n1*arr[i]), so casting to a (long long int) gives enough room to store the result.
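A minimal sketch of the difference (1500000 shifted left by 23 bits needs about 44 bits, far more than a 32-bit int can hold):
#include <stdio.h>
int main(void)
{
    int n1 = 1500000;
    long long bad  = n1 << 23;            /* shift evaluated in int: signed overflow, undefined */
    long long good = (long long)n1 << 23; /* shift evaluated in long long: well-defined */
    printf("bad  = %lld\n", bad);
    printf("good = %lld\n", good);        /* 12582912000000 */
    return 0;
}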
int: not smaller than short. At least 16 bits.
long long int: not smaller than long. At least 64 bits.
Have a look at C++ Data Types.
A multiplication between two ints yields an int unless you cast one of them to something wider. So in the first case the result is computed as an int (32-bit signed on a 32-bit system); in the second case as a long long (64-bit signed). You may prefer types like int64_t for better portability.
You can use sizeof(int), sizeof(long long) to see the difference between the two.
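For example:
#include <stdio.h>
int main(void)
{
    printf("sizeof(int)       = %zu\n", sizeof(int));       /* typically 4 */
    printf("sizeof(long long) = %zu\n", sizeof(long long)); /* at least 8 */
    return 0;
}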
As we know, 4294967295 is the largest value an unsigned int can hold. If I multiply this number by itself, how do I display the result? I have tried:
long unsigned int NUMBER = 4294967295 * 4294967295;
but I still get 1 as the answer.
You are getting an overflow. Consider the multiplication in hexadecimal:
0xffffffff * 0xffffffff == 0xfffffffe00000001
                                     ^^^^^^^^
                           only the last 32 bits are returned
The solution is to use a larger type such as long long unsigned:
long long unsigned int NUMBER = 4294967295ULL * 4294967295ULL;
The suffix ULL means unsigned long long.
See it working online: ideone
The multiplication overflows.
#include <stdio.h>
int main()
{
unsigned int a = 4294967295;
unsigned int b = 4294967295;
// force to perform multiplication based on larger type than unsigned int
unsigned long long NUMBER = (unsigned long long)a * b;
printf("%llu\n", NUMBER);
}
You state in your question that you know 4294967295 is the largest value an unsigned int can hold. That means you can't store a number larger than that in an unsigned int.
An unsigned long stores up to 18,446,744,073,709,551,615 on a 64-bit Unix system, so there you need only suffix your numbers with UL: 4294967295UL
If you aren't using a 64-bit Unix system (where long is 64 bits), you should use unsigned long long int and suffix with ULL
Yes, it's an overflow. Plain C has no built-in arbitrary-precision arithmetic, so once a product exceeds the widest integer type you either have to write the multi-word multiplication yourself or use a library; some other languages support big integers natively.
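If you ever need a product wider than the widest type your compiler offers, here is a hedged sketch of the classic schoolbook approach, splitting each operand into 32-bit halves (mul64x64 is just an illustrative name; the expected output is the full product from the first question above):
#include <stdio.h>
/* Multiply two 64-bit values into a 128-bit result delivered as two
   64-bit words (hi, lo). Standard C, no compiler extensions needed. */
static void mul64x64(unsigned long long a, unsigned long long b,
                     unsigned long long *hi, unsigned long long *lo)
{
    unsigned long long a_lo = a & 0xFFFFFFFFULL, a_hi = a >> 32;
    unsigned long long b_lo = b & 0xFFFFFFFFULL, b_hi = b >> 32;
    unsigned long long p0 = a_lo * b_lo; /* contributes to bits 0..63 */
    unsigned long long p1 = a_lo * b_hi; /* contributes to bits 32..95 */
    unsigned long long p2 = a_hi * b_lo; /* contributes to bits 32..95 */
    unsigned long long p3 = a_hi * b_hi; /* contributes to bits 64..127 */
    unsigned long long mid = (p0 >> 32) + (p1 & 0xFFFFFFFFULL) + (p2 & 0xFFFFFFFFULL);
    *lo = (mid << 32) | (p0 & 0xFFFFFFFFULL);
    *hi = p3 + (p1 >> 32) + (p2 >> 32) + (mid >> 32);
}
int main(void)
{
    unsigned long long hi, lo;
    mul64x64(0x28B2D48D74212E4FULL, 0x6734B42C025D5CF7ULL, &hi, &lo);
    printf("%016llx%016llx\n", hi, lo); /* 1068547cd3052bbe5688de35695b1239 */
    return 0;
}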
The printf function takes a format specifier for each argument, such as %d or %i for a signed int. However, I don't see one for a long value.
Put an l (lowercased letter L) directly before the specifier.
unsigned long n;
long m;
printf("%lu %ld", n, m);
I think you mean:
unsigned long n;
printf("%lu", n); // unsigned long
or
long n;
printf("%ld", n); // signed long
On some platforms (64-bit Windows and all 32-bit systems), long and int are the same size (32 bits); on 64-bit Linux and macOS, long is 64 bits. Either way, it has its own format specifier:
long n;
unsigned long un;
printf("%ld", n); // signed
printf("%lu", un); // unsigned
For 64 bits, you'd want a long long:
long long n;
unsigned long long un;
printf("%lld", n); // signed
printf("%llu", un); // unsigned
Oh, and of course, it's different in Windows, where MSVC historically used a capital-I length prefix:
printf("%I64d", n); // signed
printf("%I64u", un); // unsigned
Frequently, when I'm printing 64-bit values, I find it helpful to print them in hex (usually with numbers that big, they are pointers or bit fields).
unsigned long long n;
printf("0x%016llX", n); // "0x" followed by "0-padded", "16 char wide", "long long", "HEX with 0-9A-F"
will print:
0x00000000DEADBEEF
Btw, "long" doesn't mean that much anymore (on mainstream x64). "int" is the platform default int size, typically 32 bits. "long" is usually the same size. However, they have different portability semantics on older platforms (and modern embedded platforms!). "long long" is a 64-bit number and usually what people meant to use unless they really really knew what they were doing editing a piece of x-platform portable code. Even then, they probably would have used a macro instead to capture the semantic meaning of the type (eg uint64_t).
char c; // 8 bits
short s; // 16 bits
int i; // 32 bits (on modern platforms)
long l; // 32 bits on Windows and 32-bit systems; 64 bits on 64-bit Linux/macOS
long long ll; // 64 bits
Back in the day, "int" was 16 bits. You'd think it would now be 64 bits, but no, that would have caused insane portability issues. Of course, even this is a simplification of the arcane and history-rich truth. See wiki:Integer
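As the note above about uint64_t suggests, the fixed-width types from <stdint.h> together with the format macros from <inttypes.h> capture that intent and sidestep the l/ll guesswork entirely:
#include <inttypes.h>
#include <stdio.h>
int main(void)
{
    uint64_t u = 0xDEADBEEFULL;
    int64_t  s = -42;
    printf("0x%016" PRIX64 "\n", u); /* the macro expands to the right length prefix per platform */
    printf("%" PRId64 "\n", s);
    return 0;
}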
It depends: if you are referring to unsigned long, the formatting character is "%lu"; if you mean signed long, it is "%ld".
%ld see printf reference on cplusplus.com
I needed to print unsigned long long, so I found this works:
unsigned long long n;
printf("%llu", n);
For all other combinations, I believe you use the table from the printf manual, taking the row, then column label for whatever type you're trying to print (as I do with printf("%llu", n) above).
I think to answer this question definitively would require knowing the compiler name and version that you are using and the platform (CPU type, OS etc.) that it is compiling for.