BigInteger in C?

What is the easiest way to handle huge numbers in C? I need to store values in the area of 1000^900, or in more human-readable form, 10^2700.
Does anybody know of an easy way to do that? Any help would really be appreciated!

Use libgmp:
GMP is a free library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and floating-point numbers. There is no practical limit to the precision except the ones implied by the available memory in the machine GMP runs on...
Since version 6, GMP is distributed under the dual licenses, GNU LGPL v3 and GNU GPL v2...
GMP's main target platforms are Unix-type systems, such as GNU/Linux, Solaris, HP-UX, Mac OS X/Darwin, BSD, AIX, etc. It also is known to work on Windows in both 32-bit and 64-bit mode...
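For the numbers in question, a minimal sketch with GMP might look like this (compile with -lgmp; the variable names are just illustrative):

#include <gmp.h>
#include <stdio.h>

int main(void)
{
    mpz_t big;                        /* arbitrary-precision integer */
    mpz_init(big);

    mpz_ui_pow_ui(big, 1000, 900);    /* big = 1000^900, i.e. 10^2700 */

    /* Number of decimal digits (may be 1 too big, per the GMP docs),
       followed by the value itself. */
    printf("digits: %zu\n", mpz_sizeinbase(big, 10));
    gmp_printf("%Zd\n", big);

    mpz_clear(big);
    return 0;
}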

There are a few libraries to help you do this (arbitrary precision mathematics):
BigDigits;
iMath;
decNumber; and
no doubt others.
Assuming this isn't work related (i.e. you're doing it for fun, as a hobby, or just as an opportunity to learn something), coding up a library for arbitrary-precision maths is a relatively interesting project. But if you need to rely on it absolutely and aren't interested in the nuts and bolts, just use a library.

There are a number of libraries for handling huge numbers around. Do you need integer or floating point arithmetic?
You could look at the code built into Python for the task.
You could look at the extensions for Perl for the task.
You could look at the code in OpenSSL for the task.
You could look at the GNU MP (multi-precision) library - as mentioned by kmkaplan.

You can also try OpenSSL's BIGNUMs; see https://www.openssl.org/docs/man1.0.2/man3/bn.html, https://www.openssl.org/docs/man1.1.1/man3/, and "Convert a big number given as a string to an OpenSSL BIGNUM" for details.
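A hedged sketch with the BIGNUM API (link with -lcrypto); it just raises 1000 to the 900th power and prints the result in decimal:

#include <openssl/bn.h>
#include <stdio.h>

int main(void)
{
    BN_CTX *ctx = BN_CTX_new();
    BIGNUM *base = BN_new(), *exp = BN_new(), *result = BN_new();

    BN_set_word(base, 1000);
    BN_set_word(exp, 900);
    BN_exp(result, base, exp, ctx);     /* result = 1000^900 */

    char *dec = BN_bn2dec(result);      /* decimal string, caller frees */
    printf("%s\n", dec);

    OPENSSL_free(dec);
    BN_free(base);
    BN_free(exp);
    BN_free(result);
    BN_CTX_free(ctx);
    return 0;
}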

Related

How does an AVR perform floating point arithmetic

I'm trying to implement support for double and float, and the corresponding basic arithmetic, on a CPU without an FPU.
I know that it is possible on all AVR ATmega controllers. An ATmega also has no FPU. So here comes the question: how does it work? Are there any suggestions for literature or links with explanations and examples?
In the best case I will provide support for code like this:
double twice(double x)
{
    return x * x;
}
Many thanks in advance,
Alex
Here are AVR related links with explanations and examples for implementing soft double:
You will find one double floating point lib here.
Another one can be found in the last message here.
Double is very slow, so if speed is a concern, you might opt for fixed-point math. Just read my messages in this thread.
This post may be interesting for you: Floating point calculations in a processor with no FPU
As stated:
Your compiler may provide support, or you may need to roll your own.
There are freely-available implementations, too.
If it's for an ATmega, you probably don't have to write anything yourself. The available libraries are probably already optimized much further than you could manage yourself. If you need more performance, you could consider converting the floating-point values to fixed point. You should consider this anyway: if you can get the job done in fixed point, stay away from floating point.
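To illustrate the fixed-point suggestion, here is a minimal sketch of 16.16 fixed-point arithmetic; the format and helper names are made up for illustration, and you would pick a format that fits your value range:

#include <stdint.h>

typedef int32_t fix16;                 /* 16.16 fixed point: value * 65536 */

#define FIX16_ONE ((fix16)65536)

static fix16 fix16_from_int(int x)
{
    return (fix16)(x * 65536);         /* ok for |x| <= 32767 */
}

static fix16 fix16_mul(fix16 a, fix16 b)
{
    /* Widen to 64 bits so the intermediate product doesn't overflow,
       then shift back down to 16.16. */
    return (fix16)(((int64_t)a * b) >> 16);
}

static fix16 fix16_div(fix16 a, fix16 b)
{
    return (fix16)(((int64_t)a * 65536) / b);   /* caller ensures b != 0 */
}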
AVR uses avr-libc, which you can download and examine.
There is a math library, but that is not what you are looking for. That contains the standard functions defined in math.h.
Instead, you need functions that perform multiplication and things like that. These are also in avr-libc, under fplib, and written in assembly language. The user doesn't call them directly; when the compiler comes across a multiplication involving floats, it chooses the correct function to insert.
You can browse through the AVR fplib to get an idea of what to do, but you are going to have to write your own assembly or bit-twiddling code for your processor.
You need to find out what standard your processor and language are using for floating point. IEEE, perhaps? And you'll also need to know if the system is little-endian or big-endian.
I am assuming your system doesn't have a C compiler already. Otherwise, all the floating-point operations would already be implemented, and that "twice()" function (actually a square()) would work just fine as it is.

How to keep 9 points of precision with double in C [duplicate]

I'm looking for a good arbitrary-precision math library in C or C++. Could you please give me some advice or suggestions?
The primary requirements:
It must handle arbitrarily big integers; my primary interest is integers. In case you don't know what arbitrarily big means, imagine something like 100000! (the factorial of 100000).
The precision must not need to be specified during library initialization or object creation. The precision should only be constrained by the available resources of the system.
It should utilize the full power of the platform, and should handle “small” numbers natively. That means on a 64-bit platform, calculating (2^33 + 2^32) should use the available 64-bit CPU instructions. The library should not calculate this in the same way as it does with (2^66 + 2^65) on the same platform.
It must efficiently handle addition (+), subtraction (-), multiplication (*), integer division (/), remainder (%), power (**), increment (++), decrement (--), GCD, factorial, and other common integer arithmetic calculations. The ability to handle functions like square root and logarithm that do not produce integer results is a plus. The ability to handle symbolic computations is even better.
Here is what I have found so far:
Java's BigInteger and BigDecimal classes: I have been using these so far. I have read the source code, but I don't understand the math underneath. It may be based on theories and algorithms that I have never learnt.
The built-in integer types, or those in the core libraries, of bc, Python, Ruby, Haskell, Lisp, Erlang, OCaml, PHP, and some other languages: I have used some of these, but I have no idea which library or which kind of implementation they are using.
What I already know:
Using char for decimal digits and char* for decimal strings, and doing calculations on the digits using a for-loop.
Using int (or long int, or long long) as a basic "unit" and an array of that type as an arbitrarily long integer, and doing calculations on the elements using a for-loop (see the sketch after this list).
Using an integer type to store a decimal digit (or a few digits) as BCD (Binary-coded decimal).
Booth’s multiplication algorithm.
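As a reference point for the second item, here is a minimal sketch of what the limb-array approach looks like for addition, assuming 32-bit limbs stored least-significant first (the function and limb size are just one common choice):

#include <stdint.h>
#include <stddef.h>

/* r = a + b. All operands are arrays of n 32-bit limbs, least significant
   limb first. Returns the final carry (0 or 1). */
uint32_t bignum_add(uint32_t *r, const uint32_t *a, const uint32_t *b, size_t n)
{
    uint64_t carry = 0;
    for (size_t i = 0; i < n; i++) {
        uint64_t sum = (uint64_t)a[i] + b[i] + carry;
        r[i]  = (uint32_t)sum;        /* low 32 bits become the result limb */
        carry = sum >> 32;            /* high bits propagate to the next limb */
    }
    return (uint32_t)carry;
}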
What I don’t know:
Printing the binary array mentioned above in decimal without using naive methods. An example of a naive method: (1) add the bits from the lowest to the highest: 1, 2, 4, 8, 16, 32, …; (2) use a char* string as mentioned above to store the intermediate decimal results.
What I appreciate:
Good comparisons on GMP, MPFR, decNumber (or other libraries that are good in your opinion).
Good suggestions on books and articles that I should read. For example, an illustration with figures on how a non-naive binary-to-decimal conversion algorithm works would be good. The article “Binary to Decimal Conversion in Limited Precision” by Douglas W. Jones is an example of a good article.
Any help in general.
Please do not answer this question if you think that using double (or long double, or long long double) can solve this problem easily. If you do think so, you don’t understand the issue in question.
GMP is the popular choice. Squeak Smalltalk has a very nice library, but it's written in Smalltalk.
You asked for relevant books or articles. The tricky part of bignums is long division. I recommend Per Brinch Hansen's paper Multiple-Length Division Revisited: A Tour of the Minefield.
Overall, the fastest general-purpose arbitrary-precision library is GMP. If you want to work with floating-point values, look at the MPFR library. MPFR is based on GMP.
Regarding native arbitrary-precision support in other languages, Python uses its own implementation for license, code size, and code portability reasons. The GMPY module lets Python access the GMP library.
See TTMath, a small templated header-only library free for personal and commercial use.
I've not compared arbitrary precision arithmetic libraries to each other myself, but people who do seem to have more or less uniformly settled on GMP. For what it's worth, the arbitrary precision integers in GHC Haskell and GNU Guile Scheme are both implemented using GMP, and the fastest implementation of the pidigits benchmark on the language shootout is based on GMP.
What about PARI? It's built on top of GMP and provides all the number-theory operations you'll ever need (and a lot of symbolic computation).

Calculate Pi in C up to a few million digits

I am actually very new to C, but for a project, I'd like to be able to calculate the value of Pi from 1 million to at least 32 million decimal places.
Basically, like what SuperPi/HyperPi does for benchmarking a CPU.
But obviously, the standard C library is incapable of this.
What library can I use, and what algorithm do I use for this task?
And precision matters too; anyone can cook up a rand() bloat and call it the "ultimate value of Pi".
My compiler is GCC, so if possible, I'd like the library to be able to compile with it (I have the BigNum library).
I've used the quadratic algorithm from there with success. I'd suggest MPFR for the library part.
As for the algorithm, see http://en.wikipedia.org/wiki/Chudnovsky_algorithm. For a library to deal with bignums, check http://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic#Libraries. Have fun.
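If you let MPFR do the heavy lifting, the library part is only a few lines. A hedged sketch (compile with -lmpfr -lgmp) that leans on MPFR's built-in mpfr_const_pi rather than hand-coding Chudnovsky:

#include <mpfr.h>
#include <stdio.h>

int main(void)
{
    /* Decimal digits wanted: roughly 3.33 bits per digit, plus some slack.
       Crank this up toward millions once the small case works. */
    const long digits = 10000;
    const mpfr_prec_t prec = (mpfr_prec_t)(digits * 3.3219280948873626) + 64;

    mpfr_t pi;
    mpfr_init2(pi, prec);
    mpfr_const_pi(pi, MPFR_RNDN);      /* pi to prec bits */

    /* Print the first 50 digits as a sanity check; to dump all digits,
       mpfr_out_str() or mpfr_get_str() is more appropriate. */
    mpfr_printf("%.50Rf\n", pi);

    mpfr_clear(pi);
    mpfr_free_cache();                 /* mpfr_const_pi caches internally */
    return 0;
}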

Implementation of ceil() and floor()

Just curious how these are implemented. I can't see where I would start. Do they work directly on the float's/double's bits?
Also where can I find the source code of functions from math.h? All I find are either headers with prototypes or files with functions that call other functions from somewhere else.
EDIT: Part of the message was lost after editing the title. What I meant in particular were the ceil() and floor() functions.
If you're interested in seeing source code for algorithms for this kind of thing, then fdlibm - the "Freely Distributable libm", originally from Sun, and the reference implementation for Java's math libraries - might be a good place to start. (For casual browsing, it's certainly a better place to start than GNU libc, where the pieces are scattered around various subdirectories - math/, sysdeps/ieee754/, etc.)
fdlibm assumes that it's working with an IEEE 754 format double, and if you look at the implementations - for example, the core of the implementation of log() - you'll see that they use all sorts of clever tricks, often using a mixture of both standard double arithmetic, and knowledge of the bit representation of a double.
(And if you're interested in algorithms for supporting basic IEEE 754 floating point arithmetic, such as might be used for processors without hardware floating point support, take a look at John R. Hauser's SoftFloat.)
As for your edit: in general, ceil() and floor() might well be implemented in hardware; for example, on x86, GCC (with optimisations enabled) generates code using the frndint instruction with appropriate fiddling of the FPU control word to set the rounding mode. But fdlibm's pure software implementations (s_ceil.c, s_floor.c) do work using the bit representation directly.
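This is not how fdlibm actually does it (fdlibm operates on the IEEE 754 bit pattern directly), but as a portable sketch of the truncate-and-adjust idea behind floor():

#include <math.h>

double my_floor(double x)
{
    /* Doubles with |x| >= 2^52 have no fractional bits, so they are already
       integers; this test is also false for NaN, which we return unchanged.
       (Unlike the real floor(), this sketch does not preserve -0.0.) */
    if (!(fabs(x) < 4503599627370496.0))   /* 2^52 */
        return x;

    double t = (double)(long long)x;       /* truncate toward zero */
    return (t > x) ? t - 1.0 : t;          /* step down for negative non-integers */
}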
math.h is part of the Standard C Library.
If you are interested in source code, the GNU C Library (glibc) is available to inspect.
EDIT TO ADD:
As others have said, math functions are typically implemented at the hardware level.
Math functions like addition and division are almost always implemented by machine instructions. The exceptions are mostly small processors, like the 8048 family, which use a library to implement the functions for which there is no simple machine instruction sequence.
Math functions like sin(), sqrt(), log(), etc. are almost always implemented in the runtime library. A few rare CPUs, like the Cray, have a square root instruction.
Tell us which particular implementation (gcc, MSVC, etc./Mac, Linux, etc.) you are using and someone will direct you precisely where to look.
On many platforms (such as any modern x86-compatible), many of the maths functions are implemented directly in the floating-point hardware (see for example http://en.wikipedia.org/wiki/X86_instruction_listings#x87_floating-point_instructions). Not all of them are used, though (as I learnt through comments to other answers here). But, for instance, the sqrt library function is often implemented directly in terms of the SSE hardware instruction.
For some ideas on how the underlying algorithms work, you might try reading Numerical Recipes, which is available as PDFs somewhere.
A lot of it is done on processors these days.
The chip I cut my teeth on didn't even have a multiply instruction (the Z80).
We had to approximate things using Taylor series.
About halfway down this page, you can see how sin and cos are approximated.
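For example, a minimal Taylor-series sine around 0 (only accurate for small |x|; a real implementation would range-reduce the argument first):

#include <stdio.h>

/* sin(x) = x - x^3/3! + x^5/5! - x^7/7! + ...
   Each term is the previous one times -x^2 / ((2n)(2n+1)). */
double taylor_sin(double x, int terms)
{
    double term = x, sum = x;
    for (int n = 1; n < terms; n++) {
        term *= -x * x / ((2.0 * n) * (2.0 * n + 1.0));
        sum += term;
    }
    return sum;
}

int main(void)
{
    printf("%f\n", taylor_sin(1.0, 10));   /* ~0.841471 */
    return 0;
}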
While modern CPUs do have hardware implementations of common transcendental functions like sin, cos, etc., they are rarely used as is. That may be for reasons of portability, speed, or accuracy. Instead, approximation algorithms are used.

Adding very large numbers [duplicate]


Resources