Printing multiple integers as one arbitrarily long decimal string - C

Say I have 16 64-bit unsigned integers. I have been careful to feed the carry as appropriate between them when performing operations. Could I feed them into a method to convert all of them into a single string of decimal digits, as though it was one 1024-bit binary number? In other words, is it possible to make a method that will work for an arbitrary number of integers that represent one larger integer?
I imagine that it would be more difficult for signed integers, as there is the sign in the most significant bit to deal with. I suppose the most significant integer would be signed and the rest unsigned, to represent the remaining 'parts' of the number.
(This is semi-related to another question.)

You could use the double dabble algorithm, which circumvents the need for multi-precision multiplication and division. In fact, the Wikipedia page contains a C implementation for this algorithm.

This is a bit unclear.
Of course, a function such as
void print_1024bit(uint64_t digits[]);
could be written to do this. But if you mean if any of the standard library's printf()-family of functions can do this, then I think the answer is no.
As you probably saw in the other question, the core of converting a binary number into a different base b is made of two operations:
Modulo b, to figure out the current least significant digit
Division by b, to remove that digit once it's been generated
When applied until the number is 0, this generates all the digits in reverse order.
So, you need to implement "modulo 10" and "divide by 10" for your 1024-bit number.
For instance, consider the number decimal 4711, which we want to convert to octal just for this example:
4711 % 8 is 7, so the right-most digit is 7
4711 / 8 is 588
588 % 8 is 4, the next digit is 4
588 / 8 is 73
73 % 8 is 1
73 / 8 is 9
9 % 8 is 1
9 / 8 is 1
1 % 8 is 1
1 / 8 is 0, we're done.
So, reading the remainder digits from the bottom up, we conclude that 4711₁₀ = 11147₈. You can use a calculator to verify this, or just trust me. :)
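To make this concrete, here is a minimal sketch (mine, not from the question) of "modulo 10" and "divide by 10" for a number stored as an array of uint64_t words, least significant word first. It relies on the GCC/Clang __uint128_t extension for the 64×64-bit intermediate step, and the helper names are made up for illustration:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Divide the multi-word number in place by 10 and return the remainder.
   words[0] is the least significant 64-bit word. */
static unsigned divmod10(uint64_t *words, size_t nwords)
{
    unsigned rem = 0;
    for (size_t i = nwords; i-- > 0; ) {   /* most significant word first */
        __uint128_t cur = ((__uint128_t)rem << 64) | words[i];
        words[i] = (uint64_t)(cur / 10);
        rem = (unsigned)(cur % 10);
    }
    return rem;
}

static int is_zero(const uint64_t *words, size_t nwords)
{
    for (size_t i = 0; i < nwords; i++)
        if (words[i] != 0) return 0;
    return 1;
}

void print_1024bit(uint64_t digits[16])
{
    uint64_t tmp[16];
    char buf[310];   /* 1024 bits is at most 309 decimal digits */
    size_t n = 0;
    memcpy(tmp, digits, sizeof tmp);
    do {
        buf[n++] = '0' + divmod10(tmp, 16);  /* least significant digit first */
    } while (!is_zero(tmp, 16));
    while (n > 0)
        putchar(buf[--n]);                   /* print in reverse order */
    putchar('\n');
}

Replacing 10 with any other base b yields the digits in that base, exactly as in the octal walk-through above.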

It's possible, of course, but not terribly straightforward.
Rather than reinventing the wheel, how about reusing a library?
The GNU Multiple Precision Arithmetic Library (GMP) is one such possibility. I've not needed such things myself, but it seems to fit the bill.

Related

Problem of a simple float number serialization example

I am reading the Serialization section of a tutorial http://beej.us/guide/bgnet/html/#serialization .
And I am reviewing the code which Encode the number into a portable binary form.
#include <stdint.h>

uint32_t htonf(float f)
{
    uint32_t p;
    uint32_t sign;

    if (f < 0) { sign = 1; f = -f; }
    else { sign = 0; }

    p = ((((uint32_t)f)&0x7fff)<<16) | (sign<<31); // whole part and sign
    p |= (uint32_t)(((f - (int)f) * 65536.0f))&0xffff; // fraction

    return p;
}

float ntohf(uint32_t p)
{
    float f = ((p>>16)&0x7fff); // whole part
    f += (p&0xffff) / 65536.0f; // fraction

    if (((p>>31)&0x1) == 0x1) { f = -f; } // sign bit set

    return f;
}
I ran into problems with this line: p = ((((uint32_t)f)&0x7fff)<<16) | (sign<<31); // whole part and sign.
According to the original code comments, this line extracts the whole part and sign, and the next line deals with fraction part.
Then I found an image about how float is represented in memory and started the calculation by hand.
From Wikipedia Single-precision floating-point format:
So I then presumed that the whole part == the exponent part.
But ((((uint32_t)f)&0x7fff)<<16) is getting the last 15 bits of the fraction part, if based on the image above.
Now I'm confused: where did I go wrong?
It's important to realize what this code is not. This code does not do anything with the individual bits of a float value. (If it did, it wouldn't be portable and machine-independent, as it claims to be.) And the "portable" string representation it creates is fixed point, not floating point.
For example, if we use this code to convert the number -123.125, we will get the binary result
10000000011110110010000000000000
or in hexadecimal
807b2000
Now, where did that number 10000000011110110010000000000000 come from? Let's break it up into its sign, whole number, and fractional parts:
1 000000001111011 0010000000000000
The sign bit is 1 because our original number was negative. 000000001111011 is the 15-bit binary representation of 123. And 0010000000000000 is 8192. Where did 8192 come from? Well, 8192 ÷ 65536 is 0.125, which was our fractional part. (More on this below.)
How did the code do this? Let's walk through it step by step.
(1) Extract sign. That's easy: it's the ordinary test if(f < 0).
(2) Extract whole-number part. That's also easy: We take our floating-point number f, and cast it to type uint32_t. When you convert a floating-point number to an integer in C, the behavior is pretty obvious: it throws away the fractional part and gives you the integer. So if f is 123.125, (uint32_t)f is 123.
(3) Extract fraction. Since we've already got the integer part, we can isolate the fraction by starting with the original floating-point number f, and subtracting the integer part. That is, 123.125 - 123 = 0.125. Then we multiply the fractional part by 65536, which is 2¹⁶.
It may not be obvious why we multiplied by 65536 and not some other number. In one sense, it doesn't matter what number you use. The goal here is to take a fractional number f and turn it into two integers a and b such that we can recover the fractional number f again later (although perhaps approximately). The way we're going to recover the fractional number f again later is by computing
a + b / x
where x is, well, some other number. If we chose 1000 for x, we'd break 123.125 up into a and b values of 123 and 125. We're choosing 65536, or 2¹⁶, for x because that lets us make maximal use of the 16 bits we've allocated for the fractional part in our representation. Since x is 65536, b has to be some number we can divide by 65536 in order to get 0.125. So since b / 65536 = 0.125, by simple algebra we have b = 0.125 × 65536. Make sense?
Anyway, let's now look at the actual code for performing steps 1, 2, and 3.
if (f < 0) { sign = 1; f = -f; }
Easy peasy. If f is negative, our sign bit will be 1, and we want the rest of the code to operate on the positive version of f.
p = ((((uint32_t)f)&0x7fff)<<16) | (sign<<31);
As mentioned, the important part here is (uint32_t)f, which just grabs the integer (whole-number) part of f. The bitmask & 0x7fff extracts the low-order 15 bits of it, throwing anything else away. (This is because our "portable representation" only allocates 15 bits for the whole-number part, meaning that numbers greater than 2¹⁵−1, or 32767, can't be represented.) The shift << 16 moves it into the high half of the eventual uint32_t result, where it belongs. And then | (sign<<31) takes the sign bit and puts it in the high-order position where it belongs.
p |= (uint32_t)(((f - (int)f) * 65536.0f))&0xffff; // fraction
Here, (int)f recomputes the integer (whole-number) part of f, and then f - (int)f extracts the fraction. We multiply it by 65536, as explained above. There may still be a fractional part (even after the multiplication, that is), so we cast to (uint32_t) again to throw that away, retaining only the integer part. We can only handle 16 bits of fraction, so we extract those bits (discarding anything else) with & 0xffff, although this should be unnecessary since we started with a positive fractional number less than 1, and multiplied it by 65536, so we should end up with a positive number less than 65536, i.e. we shouldn't have a number that won't exactly fit in 16 bits. Finally, the p |= operation stuffs these 16 bits we've just computed into the low-order half of p, and we're done.
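As a quick check, here is a small test harness (my own, not part of the tutorial) that assumes the htonf() and ntohf() functions quoted in the question are pasted above it; it reproduces the 807b2000 pattern derived earlier:

#include <stdio.h>
#include <inttypes.h>

int main(void)
{
    uint32_t p = htonf(-123.125f);
    printf("%08" PRIx32 "\n", p);   /* prints 807b2000 */
    printf("%f\n", ntohf(p));       /* prints -123.125000: recovered exactly,
                                       since 0.125 * 65536 is an integer */
    return 0;
}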
Addendum: It may still not be obvious where the number 65536 came from, and why that was used instead of 10000 or some other number. So let's review two key points: we're ultimately dealing with integers here. Also, in one sense, the number 65536 actually was pretty arbitrary.
At the end of the day, any bit pattern we're working with is "really" just an integer. It's not a character, or a floating-point number, or a pointer — it's just an integer. If it has N bits, it represents integers from 0 to 2^N − 1.
In the fixed-point representation we're using here, there are three subfields: a 1-bit sign, a 15-bit whole-number part, and a 16-bit fraction part.
The interpretation of the sign and whole-number parts is obvious. But the question is: how shall we represent a fraction using a 16-bit integer?
And the answer is, we pick a number, almost any number, to divide by. We can call this number a "scaling factor".
We really can pick almost any number. Suppose I chose the number 3467 as my scaling factor. Here is how I would then represent several different fractions as integers:
    ½ → 1734/3467 → 1734
    ⅓ → 1155/3467 → 1155
    0.125 → 433/3467 → 433
So my fractions ½, ⅓, and 0.125 are represented by the integers 1734, 1155, and 433. To recover my original fractions, I just divide by 3467:
    1734 → 1734 ÷ 3467 → 0.500144
    1155 → 1155 ÷ 3467 → 0.333141
    433 → 433 ÷ 3467 → 0.124891
Obviously I wasn't able to recover my original fractions exactly, but I came pretty close.
The other thing to wonder about is, where does that number 3467 "live"? If you're just looking at the numbers 1734, 1155, and 433, how do you know you're supposed to divide them by 3467? And the answer is, you don't know, at least, not just by looking at them. 3467 would have to be part of the definition of my silly fractional number format; people would just have to know, because I said so, that they had to multiply by 3467 when constructing integers to represent fractions, and divide by 3467 when recovering the original fractions.
And the other thing to look at is what the implications are of choosing various different scaling factors. The first thing is that, since in the end we're going to be using a 16-bit integer for the fractional representation, we absolutely can't use a scaling factor any greater than 65536. If we did, sometimes we'd end up with an integer greater than 65535, and it wouldn't fit in 16 bits. For example, suppose we tried to use a scaling factor of 70000, and suppose we tried to represent the fraction 0.95. Now, 0.95 is equal to 66500/70000, so our integer would be 66500, but that doesn't fit in 16 bits.
On the other hand, it turns out that ideally we don't want to use a number less than 65536, either. The smaller a number we use, the more of our 16-bit fractional representation we'll waste. When I chose 3467 in my silly example a little earlier, that meant I would represent fractions from 0/3467 = 0.00000 and 1/3467 = 0.000288 up to 3466/3467 = 0.999711. But I'd never use any of the integers from 3467 through 65535. They'd be wasted, and by not using them, I'd unnecessarily limit the precision of the fractions I could represent.
The "best" (least wasteful) scaling factor to use is 65536, although there's one other consideration, namely, which fractions do you want to be able to represent exactly? When I used 3467 as my scaling factor, I couldn't represent any of my test numbers ½, ⅓, or 0.125 exactly. If we use 65536 as the scaling factor, it turns out that we can represent fractions involving small powers of two exactly — that is, halves, quarters, eights, sixteenths, etc. — but not any other fractions, and in particular not most of the decimal fractions like 0.1. If we wanted to be able to represent decimal fractions exactly, we would have to use a scaling factor that was a power of 10. The largest power of 10 that will fit in 16 bits is 10000, and that would indeed let us exactly represent decimal fractions as small as 0.00001, although we'd waste about 5/6 (or 85%) of our 16-bit fractional range.
So if we wanted to represent decimal fractions exactly, without wasting precision, the inescapable conclusion is that we should not have allocated 16 bits for our fraction field in the first place. Better choices would have been 10 bits (ideal scaling factor 1024, we'd use 1000, wasting only 2%) or 20 bits (ideal scaling factor 1048576, we'd use 1000000, wasting about 5%).
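The trade-off is easy to see in a couple of lines. This sketch (mine, not from the original answer) encodes 0.1 with both scaling factors; 65536 cannot represent it exactly, while 10000 can, at the cost of wasted range:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    double frac = 0.1;
    uint16_t a = (uint16_t)(frac * 65536.0);   /* binary scaling factor */
    uint16_t b = (uint16_t)(frac * 10000.0);   /* decimal scaling factor */
    printf("%u -> %.6f\n", (unsigned)a, a / 65536.0);  /* 6553 -> 0.099991 */
    printf("%u -> %.6f\n", (unsigned)b, b / 10000.0);  /* 1000 -> 0.100000 */
    return 0;
}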
The relevant excerpts from the page are
The thing to do is to pack the data into a known format and send that over the wire for decoding. For example, to pack floats, here’s something quick and dirty with plenty of room for improvement
and
On the plus side, it’s small, simple, and fast. On the minus side, it’s not an efficient use of space and the range is severely restricted—try storing a number greater-than 32767 in there and it won’t be very happy! You can also see in the above example that the last couple decimal places are not correctly preserved.
The code is presented only as an example. It is really quick and dirty, because it packs and unpacks the float as a fixed point number with 16 bits for fractions, 15 bits for integer magnitude and one for sign. It is an example and does not attempt to map floats 1:1.
It is in fact a rather incredibly stupid algorithm: it can map 1:1 all IEEE 754 float32s within the magnitude range ~256...32767 without losing a bit of information, it truncates the fractions of floats in the range 0...255 to 16 bits, and it fails spectacularly for any number >= 32768. And for NaNs.
As for the endianness problem: for any protocol that does not intrinsically work with integers >= 32 bits, someone needs to decide how to serialize these integers in turn into the other format. For example, at the lowest levels of the Internet, data consists of 8-bit octets.
There are 24 obvious ways of mapping a 32-bit unsigned integer onto 4 octets, of which 2 are now in general use, and some more have been used historically. Of course, there are countably infinitely many (and exponentially sillier) ways of encoding them...

Can I use long double to compute integers in C?

I'm trying to write a factorial function to compute a large number, factorial(105); its result has 168 digits, so I use long double, but the result seems to be wrong. Can't it be used like this?
#include <stdio.h>

long double factorial(long double n, long double base = 1){
    if (n <= 1){
        return 1 * base;
    }else{
        return factorial(n - 1, n * base);
    }
}

int main(){
    printf("%.0Lf\n", factorial(25)); // this result is correct
    printf("%.0Lf\n", factorial(26));
    // correct result is 403291461126605635584000000,
    // but it returns   403291461126605635592388608
    return 0;
}
Back of the envelope calculation: 25! is slightly more than 10²⁵; three orders of magnitude are roughly 10 bits, so you would need roughly 83 bits of mantissa even just to represent the result precisely.
Given that a long double, on platforms that support it, is generally 80 bits for the whole value (not just the mantissa!), there's clearly no way you have enough mantissa to perform calculations of that order of magnitude with integer precision.
However: factorial here is a bit magic, as many of the factors contain powers of two, which just add binary zeroes to the right, which do not require mantissa (they end up in the exponent). In particular:
25! = (2 · 2² · 2 · 2³ · 2 · 2² · 2 · 2⁴ · 2 · 2² · 2 · 2³) · (3 · 5 · 3 · 7 · 9 · 5 · 11 · 3 · 13 · 7 · 15 · 17 · 9 · 19 · 5 · 21 · 11 · 23 · 3 · 25) = 2²² · m
(m being the product of all the odd factors, namely m = 3¹⁰ · 5⁶ · 7³ · 11² · 13 · 17 · 19 · 23, so effectively the data we have to store in the mantissa)
Hence, our initial estimate exceeds the actual requirements by 22 bits.
It turns out that
log₂(m) = 10·log₂3 + 6·log₂5 + 3·log₂7 + 2·log₂11 + log₂13 + log₂17 + log₂19 + log₂23 ≈ 61.68
which is indeed just below the size of the mantissa of an 80-bit long double (64 bits). But when you multiply by 26 (so, excluding the factor 2, which ends up in the exponent, by 13), you are adding log₂13 ≈ 3.7 bits. 61.7 + 3.7 is 65.4, so from 26! onwards you no longer have the precision to perform your calculation exactly.
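These bit counts are easy to check numerically. The sketch below (mine) estimates log₂(n!) with lgamma() and subtracts the exponent of 2 in n! obtained from Legendre's formula; it reproduces the 61.68 figure for 25! and shows 26! crossing the 64-bit line (compile with -lm):

#include <math.h>
#include <stdio.h>

int main(void)
{
    for (int n = 25; n <= 26; n++) {
        double log2fact = lgamma(n + 1.0) / log(2.0);  /* log2(n!) */
        int twos = 0;                                  /* exponent of 2 in n! */
        for (int p = 2; p <= n; p *= 2)
            twos += n / p;                             /* Legendre's formula */
        printf("%d!: %.2f bits, %.2f without the 2^%d factor\n",
               n, log2fact, log2fact - twos, twos);
    }
    return 0;
}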
Firstly, nobody knows what long double can or cannot represent. Specific properties of the format depend on the implementation.
Secondly, the x86 extended precision floating-point format has a 64-bit significand with an explicit leading 1, which means that it can represent contiguous integers in the ±2⁶⁴ range. Beyond that range representable integers are non-contiguous (i.e. they begin to "skip" with wider and wider gaps). Your factorials fall far outside that range, meaning that it is perfectly expected that they won't be represented precisely.
Since 105! is a huge number that does not fit in a single word (or two of them), you want arbitrary precision arithmetic, also known as bignums. Use Stirling's approximation to get an idea of how big 105! is, and read the wikipage on factorials.
Standard C (read n1570 to check) doesn't have bignums natively, but you'll find many libraries for that.
I recommend GMPlib. BTW, some of its code is hand-written assembly for performance reasons (when coding bignum addition, you want to take advantage of add-with-carry machine instructions).
I recommend avoiding writing your own bignum operations. The existing libraries use very clever algorithms (you'd need PhD-level work to come up with something better). If you try coding your own bignum library, it will probably be much worse than the competition (unless you spend years of work).
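For example, computing 105! exactly takes only a few lines with GMP, using its built-in mpz_fac_ui (link with -lgmp):

#include <stdio.h>
#include <gmp.h>

int main(void)
{
    mpz_t f;
    mpz_init(f);
    mpz_fac_ui(f, 105);              /* f = 105!, computed exactly */
    gmp_printf("105! = %Zd\n", f);   /* prints all the digits */
    mpz_clear(f);
    return 0;
}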

Subtracting 1 from 0 in 8 bit binary

I have an 8-bit int zero = 0b00000000; and an 8-bit int one = 0b00000001;
According to the rules of binary arithmetic,
0 - 1 = 1 (borrow 1 from the next significant bit).
So if I have:
int s = zero - one;
s = -1;
-1 = 0b11111111;
Where are all those 1s coming from? There is nothing to borrow, since all the bits in the zero variable are 0.
This is a great question and has to do with how computers represent integer values.
If you’re writing out a negative number in base ten, you just write out the regular number and then prefix it with a minus sign. But if you’re working inside a computer where everything needs to either be a zero or a one, you don’t have any minus signs. The question then comes up of how you then choose to represent negative values.
One popular way of doing this is to use signed two's complement form. The way this works is that you write the number using ones and zeros, except that the meaning of those ones and zeros differs from "standard" binary in how they're interpreted. Specifically, if you have a signed 8-bit number, the lower seven bits have their standard meanings as 2⁰, 2¹, 2², etc. However, the meaning of the most significant bit is changed: instead of representing 2⁷, it represents the value −2⁷.
So let’s look at the number 0b11111111. This would be interpreted as
−2⁷ + 2⁶ + 2⁵ + 2⁴ + 2³ + 2² + 2¹ + 2⁰
= -128 + 64 + 32 + 16 + 8 + 4 + 2 + 1
= -1
which is why this collection of bits represents -1.
There’s another way to interpret what’s going on here. Given that our integer only has eight bits to work with, we know that there’s no way to represent all possible integers. If you pick any 257 integer values, given that there are only 256 possible bit patterns, there’s no way to uniquely represent all these numbers.
To address this, we could alternatively say that we’re going to have our integer values represent not the true value of the integer, but the value of that integer modulo 256. All of the values we’ll store will be between 0 and 255, inclusive.
In that case, what is 0 - 1? It’s -1, but if we take that value mod 256 and force it to be nonnegative, then we get back that -1 = 255 (mod 256). And how would you write 255 in binary? It’s 0b11111111.
There’s a ton of other cool stuff to learn here if you’re interested, so I’d recommend reading up on signed and unsigned two’s-complement numbers.
As some exercises: what would -4 look like in this format? How about -9?
These aren't the only ways you can represent numbers in a computer, but they're probably the most popular. Some older computers used the balanced ternary number system (notably the Setun machine). There's also the one's complement format, which isn't super popular these days.
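If you want to check your answers to the exercises above, a little sketch like this one (mine) prints the 8-bit two's-complement pattern of any small value:

#include <stdio.h>
#include <stdint.h>

/* print the 8-bit two's-complement pattern of v */
static void print_bits8(int8_t v)
{
    for (int i = 7; i >= 0; i--)
        putchar((((uint8_t)v >> i) & 1) ? '1' : '0');
    printf(" (%d)\n", v);
}

int main(void)
{
    print_bits8(-1);   /* 11111111 */
    print_bits8(-4);
    print_bits8(-9);
    return 0;
}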
Zero minus one must give some number such that if you add one to it, you get zero. The only number you can add one to and get zero is the one represented in binary as all 1's. So that's what you get.
So long as you use any valid form of arithmetic, you get the same results. If there are eight cars and someone takes away three cars, the value you get for how many cars are left should be five, regardless of whether you do the math with binary, decimal, or any other kind of representation.
So any valid system of representation that supports the operations you are using with their normal meanings must produce the same result. When you take the representation for zero and perform the subtraction operation using the representation for one, you must get the representation such that when you add one to it, you get the representation for zero. Otherwise, the result is just wrong based on the definitions of addition, subtraction, zero, one, and so on.

GMP most significant digits

I'm performing some calculations on arbitrary precision integers using GNU Multiple Precision (GMP) library. Then I need the decimal digits of the result. But not all of them: just, let's say, a hundred of most significant digits (that is, the digits the number starts with) or a selected range of digits from the middle of the number (e.g. digits 100..200 from a 1000-digit number).
Is there any way to do it in GMP?
I couldn't find any functions in the documentation to extract a range of decimal digits as a string. The conversion functions which convert mpz_t to character strings always convert the entire number. One can only specify the radix, but not the starting/ending digit.
Is there any better way to do it other than converting the entire number into a humongous string only to take a small piece of it and throw out the rest?
Edit: What I need is not to control the precision of my numbers or limit it to a particular fixed amount of digits, but selecting a subset of digits from the digit string of the number of arbitrary precision.
Here's an example of what I need:
7¹³¹⁶⁸³¹ = 19821203202357042996...2076482743
The actual number has 1112852 digits, which I contracted into the ....
Now, I need only an arbitrarily chosen substring of this humongous string of digits. For example, the ten most significant digits (1982120320 in this case). Or the digits from 1112841th to 1112849th (21203202 in this case). Or just a single digit at the 1112841th position (2 in this case).
If I were to first convert my GMP number to a string of decimal digits with mpz_get_str, I would have to allocate a tremendous amount of memory for these digits only to use a tiny fraction of them and throw out the rest. (Not to mention that the original mpz_t number in binary representation already eats up quite a lot.)
If you know the number of decimal digits of x = 7¹³¹⁶⁸³¹ in advance (e.g., 1112852), then you get your lower, say, 10 digits with
x % 10¹⁰, and the upper 20 digits with
x / 10^(1112852 − 20).
Note: I get 19821203202357042995 for the latter; 5 as the final digit, not 6.
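In GMP that recipe looks like the sketch below (my code, using only documented calls; note that mpz_sizeinbase may report one digit more than the true count, so the "high" slice can be off by one position). It still has to compute and store all of x, of course:

#include <stdio.h>
#include <gmp.h>

int main(void)
{
    mpz_t x, t, lo, hi;
    mpz_inits(x, t, lo, hi, NULL);

    mpz_ui_pow_ui(x, 7, 1316831);            /* x = 7^1316831 */
    size_t n = mpz_sizeinbase(x, 10);        /* digit count, exact or one too big */

    mpz_ui_pow_ui(t, 10, 10);
    mpz_tdiv_r(lo, x, t);                    /* lowest 10 digits: x % 10^10 */

    mpz_ui_pow_ui(t, 10, n - 20);
    mpz_tdiv_q(hi, x, t);                    /* highest ~20 digits: x / 10^(n-20) */

    gmp_printf("low:  %Zd\nhigh: %Zd\n", lo, hi);
    mpz_clears(x, t, lo, hi, NULL);
    return 0;
}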
I don't think you can do that in GMP. However you can use Boost Multiprecision Library
Depending upon the number type, precision may be arbitrarily large (limited only by available memory), fixed at compile time (for example 50 or 100 decimal digits), or a variable controlled at run-time by member functions. The types are expression-template-enabled for better performance than naive user-defined types.
Emphasis mine
Another alternative is ttmath, with the type ttmath::Big<e,m>, whose precision you can control. Any fixed-precision type will work, provided that you only need the most significant digits, since they all drop the least significant digits, just as float and double do. Those digits don't affect the high digits of the result, so they can be omitted safely. For instance, if you need the high 20 digits, use a type that can store 20 digits and a little more, in order to provide enough data for correct rounding later.
For demonstration, let's take the simple example of 7⁷ = 823543, where you only need the top 2 digits. Using a 4-digit type for the calculation you'll get this:
7⁵ = 16807 → round to 1681×10¹ and store
7⁵×7 = 1681×10¹×7 = 11767×10¹ ≈ 1177×10²
7⁵×7×7 = 1177×10²×7 = 8239×10²
As you can see, the top digits are the same even without computing the full exact result. Calculating at full precision with GMP wastes not only a lot of time but also a lot of memory. Think about the amount of memory you need to store the result of another operation on 2 bigints just to get the digits you want. By fixing the precision instead of leaving it unbounded, you'll decrease the CPU and memory usage significantly.
If you need the 100th to 200th high-order digits then use a type that has enough room for 201 digits and a little more, and extract those 101 digits after the calculation. But this will be more wasteful, so you may want to change to an arbitrary-precision (or fixed-precision) type that uses a base that's a power of 10 for its limbs (I'm using GMP notation here). For example, if the type uses base 10⁹ then each limb represents 9 digits in the decimal output, and you can get an arbitrary decimal digit directly without any conversion from binary to decimal. That means zero waste for the string. I'm not sure which library uses base 10ⁿ, but you can look at Mini-Pi's implementation, which uses base 10⁹, or write it yourself. This way it also works for efficiently getting the high digits.
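To illustrate the base-10⁹ limb idea, here is a tiny sketch (my own, not taken from any particular library) that multiplies a little-endian base-10⁹ number by small factors and reads the decimal digits straight out of the limbs, with no binary-to-decimal conversion step:

#include <stdio.h>
#include <stdint.h>

#define BASE 1000000000u   /* base 10^9: each limb holds 9 decimal digits */

/* multiply a little-endian base-10^9 number by a small factor, in place;
   returns the new limb count */
static size_t mul_small(uint32_t *limbs, size_t n, uint32_t factor)
{
    uint64_t carry = 0;
    for (size_t i = 0; i < n; i++) {
        uint64_t cur = (uint64_t)limbs[i] * factor + carry;
        limbs[i] = (uint32_t)(cur % BASE);
        carry = cur / BASE;
    }
    while (carry) {
        limbs[n++] = (uint32_t)(carry % BASE);
        carry /= BASE;
    }
    return n;
}

int main(void)
{
    uint32_t limbs[8] = { 1 };
    size_t n = 1;
    for (int i = 0; i < 7; i++)
        n = mul_small(limbs, n, 7);          /* 7^7 = 823543 */

    printf("%u", (unsigned)limbs[n - 1]);    /* top limb, no padding */
    for (size_t i = n - 1; i-- > 0; )
        printf("%09u", (unsigned)limbs[i]);  /* inner limbs zero-padded */
    printf("\n");
    return 0;
}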
See
How are extremely large floating-point numbers represented in memory?
What is the simplest way of implementing bigint in C?

Modulo operation in output

In most coding competitions where the output of a program is presumed to be very large, it is generally instructed to give the output modulo 10000007 (or some such prime number). What is the significance of the number being prime, given that in many cases I find that a non-prime number such as 100004 is given instead?
A prime number is used for two reasons. One reason is that the integers modulo a prime form a mathematical field. Arithmetic in a field works in many ways like arithmetic on integers. This makes a field useful in certain contest problems where otherwise the sequences used might collapse in some cases. Certain arithmetic might produce zero, other trivial results, or simpler results than desired, because the numbers involved happen upon a factor of the modulus, causing some reduction or elimination to occur.
Another reason is to compel the programmer to deal with arithmetic on integers of a certain size. If a composite number were used, then other techniques could be used that did not resort to arithmetic with large integers.
For example, suppose we want to know what 13² is modulo 35, but we have only a very small processor that cannot handle three-digit numbers, so it cannot compute 13² = 169.
Well, 35 = 5·7, and 13 is congruent to 3 modulo 5 and to 6 modulo 7. Instead of computing the square of 13, we can compute the squares of these residues, which tells us that 13² is congruent to 3² = 9 ≡ 4 modulo 5 and congruent to 6² = 36 ≡ 1 modulo 7. Combining these residues requires some additional knowledge (the Extended Euclidean Algorithm). For these particular numbers, we can multiply the residue modulo 5 by 21 and the residue modulo 7 by 15 to get 4·21 + 1·15 = 99. Reducing that modulo 35 yields 29, which is the answer (the residue of 13² modulo 35).
If the modulus is prime, this circumvention of the arithmetic is not available. A prime modulus essentially requires the use of arithmetic on numbers up to the square of the modulus (or time-consuming workarounds), but a composite modulus would allow the use of arithmetic on smaller numbers, up to just twice the modulus.
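The worked example fits in a few lines of C; this sketch (mine) hard-codes the combining weights 21 and 15 from the text:

#include <stdio.h>

int main(void)
{
    int r5 = (13 % 5) * (13 % 5) % 5;   /* 3*3 = 9, so 4 modulo 5 */
    int r7 = (13 % 7) * (13 % 7) % 7;   /* 6*6 = 36, so 1 modulo 7 */
    int combined = (r5 * 21 + r7 * 15) % 35;
    printf("%d\n", combined);           /* 29, matching 169 modulo 35 */
    return 0;
}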
