I am working on a project where I need to crunch large integers (like 3^361) with absolute precision and as much speed as possible. C is the fastest language I am familiar with, so I am trying to code my solution in that language.
The problem is that I have not been able to find a good C implementation of a data type for arbitrarily large integers, other than Python's source code, and it is taking me a long time to go through that code and determine what I need.
I would much rather use someone else's tested code with a full set of functionality (addition, subtraction, multiplication, division, modulo, exponentiation, equality checking... even bitwise operations would be sweet) than spend the weeks it would take me to even begin to get my own version up to par. While it would be a great learning experience, it is not the focus of my problem, and I'd rather get to the part that interests me :)
A couple of people have already mentioned GMP. I would only add that at least the last time I looked, it was pretty well restricted to working with gcc.
If you want to use other compilers, a couple you might consider are NTL and MIRACL. I've tested MIRACL a bit, and it seems to work reasonably well. I've used NTL quite a bit more, and while large integers are more of a sideline for it, it still does them quite nicely. It doesn't claim to be as fast as GMP (and, in fact, can use GMP to do basic operations), but when I've done some minimal benchmarking between the two I haven't found many significant differences (though that was long enough ago that I doubt it's still valid).
GNU MP (GMP) provides a bignum library.
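For instance, a minimal sketch of the asker's 3^361 computation (assuming GMP is installed; link with -lgmp):

    #include <gmp.h>
    #include <stdio.h>

    int main(void)
    {
        mpz_t result;
        mpz_init(result);
        mpz_ui_pow_ui(result, 3, 361);       /* result = 3^361, exactly */
        gmp_printf("3^361 = %Zd\n", result);
        mpz_clear(result);
        return 0;
    }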
The OpenSSL library also provides a solid BigNum implementation (<openssl/bn.h>).
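A comparable sketch with OpenSSL's BIGNUM API (link with -lcrypto; error checking omitted for brevity):

    #include <openssl/bn.h>
    #include <stdio.h>

    int main(void)
    {
        BN_CTX *ctx = BN_CTX_new();
        BIGNUM *base = BN_new(), *exp = BN_new(), *result = BN_new();
        BN_set_word(base, 3);
        BN_set_word(exp, 361);
        BN_exp(result, base, exp, ctx);      /* result = base^exp */
        char *dec = BN_bn2dec(result);       /* decimal string form */
        printf("3^361 = %s\n", dec);
        OPENSSL_free(dec);
        BN_free(base); BN_free(exp); BN_free(result);
        BN_CTX_free(ctx);
        return 0;
    }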
I use MAPM, which is a portable arbitrary-precision (integer and floating point) library.
libtommath, from the same LibTom family as libtomcrypt, is probably the smallest, simplest, and fastest. (Funny how those three superlatives almost always come together...) If you can't find an upstream, you can get the source from the dropbear ssh source tree.
If you want ANSI Standard C, get the code in Dave Hanson's C Interfaces and Implementations. Very clear and well designed.
If gcc and gcc extensions are OK, then as others have pointed out the Gnu Multiprecision Library (GMP) is well thought of and widely used.
Mbed TLS has a bignum implementation that serves as the basis for its crypto functions. It is heavily used on microcontrollers.
As far as I know, it is almost true that any code that can be represented in the LLVM intermediate language can also be represented in C, with two important exceptions:
Exceptions. (No pun intended.)
Signed integer arithmetic with well-defined behavior on overflow.
Is there anything else that can be represented in LLVM but not in C?
In addition to exception handling, other big features are garbage collection and out-of-the-box coroutines. Going to a lower level, there are trampoline intrinsics, patch points for JITs, and direct support for Obj-C ARC Runtime intrinsics.
C is Turing complete, so all of these things can be brought to C with libraries and so on, but I list them because they are built into the LLVM language.
Metadata, for example, including LLVM's branch-weight and debugloc metadata.
Except that they can if you're willing to be tortuous enough about the C you write. I think that's general: IF you're willing to write really tortuous, unidiomatic C, THEN you can write anything. So I vote to close this as unclear.
EDIT: Most things probably are expressible in C given enough discipline, verbosity and preprocessing directives, but I wonder about aliasing.
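To illustrate the point for the signed-overflow item: LLVM's wrapping add can be written in C by detouring through unsigned arithmetic, a sketch (the final conversion back to a signed type is implementation-defined in standard C, though it wraps on essentially every real compiler, which is exactly the "tortuous but possible" situation described above):

    #include <stdint.h>

    /* Emulates LLVM's 'add' (two's-complement wraparound) for 32-bit
       signed integers. Unsigned overflow is well-defined in C; the
       conversion back to int32_t is implementation-defined but wraps
       on virtually every real compiler. */
    int32_t wrapping_add(int32_t a, int32_t b)
    {
        return (int32_t)((uint32_t)a + (uint32_t)b);
    }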
I have been given a task to write a C language analyser using an AFD (a deterministic finite automaton, or DFA). I can choose whichever implementation language I want, so I think I will go for Ruby. However, this task is a little overwhelming to grasp at the beginning.
The problem I stumble across is: how do I even represent the AFD of the entire C language?
I have been doing a little bit of digging and I ended up reading this on lexical analysis. In this paper the author defines every token of the language as a transition between two states (which is very logical). I find it almost impossible not to miss a few tokens, or to build such a big AFD by hand without many mistakes. Any tips?
The task you have is similar to one posed to undergraduate students in compiler courses every year in thousands of universities, and the notes you cite are a good sample of the many sets of course notes available on the topic.
The solution is the same as any software engineering problem: testing against the specification.
Although the intellectual problem of analysing and creating an AFD for a whole language by hand might seem overwhelmingly error-prone, don't forget you are also tasked with implementing it (in your chosen language, Ruby).
This implementation can be tested by feeding it carefully graded and selected samples of C language input. When it does not deliver the expected result, the error will be either in the coding of the AFD or a fault in the AFD you constructed. You make the necessary change and go around the testing loop again.
You will eventually end up with a valid AFD for the entire C language and an analyser for it written in Ruby.
It is often a good idea to start small: implement a subset of the C language, get that working first, and then add more using stepwise refinement. This is a less risky strategy than attempting the whole thing in one go; a sketch of the starting point follows.
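To make the "start small" advice concrete, here is a minimal sketch (in C, since that is the language being analysed; the states and character classes are illustrative, not taken from the cited notes) of a table-driven AFD that recognises just identifiers and decimal integer literals:

    #include <ctype.h>
    #include <stdio.h>

    /* States of a toy AFD recognising identifiers and integer literals. */
    enum state { START, IN_IDENT, IN_NUMBER, DONE };

    /* One transition of the automaton: current state + one character. */
    static enum state next_state(enum state s, int c)
    {
        switch (s) {
        case START:
            if (isalpha(c) || c == '_') return IN_IDENT;
            if (isdigit(c))             return IN_NUMBER;
            return DONE;                 /* anything else ends scanning */
        case IN_IDENT:
            return (isalnum(c) || c == '_') ? IN_IDENT : DONE;
        case IN_NUMBER:
            return isdigit(c) ? IN_NUMBER : DONE;
        default:
            return DONE;
        }
    }

    /* Scan one token from src; *len gets its length, and the return
       value is the state in which the token was accepted. */
    static enum state scan_token(const char *src, int *len)
    {
        enum state s = START, prev = START;
        int i = 0;
        while (src[i] != '\0') {
            prev = s;
            s = next_state(s, (unsigned char)src[i]);
            if (s == DONE)
                break;                   /* token ended one char back */
            i++;
        }
        *len = i;
        return (i > 0) ? (s == DONE ? prev : s) : DONE;
    }

    int main(void)
    {
        const char *input = "count1 = 42";
        int len;
        enum state kind = scan_token(input, &len);
        printf("first token: %.*s (%s)\n", len, input,
               kind == IN_IDENT ? "identifier" : "number");
        return 0;
    }

Each new token class (operators, string literals, comments) then becomes a handful of extra states and transitions, tested one at a time.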
You need to take all those techniques you should have learned about building specifications, designs, programs, and tests, and apply them to this problem. Just apply good computer science and software engineering.
I tried to find a good place to ask my question, which isn't strictly about programming, though it involves programming in C.
Our schoolteachers told us that we need to start exercising our programming skills, in C, on math exercises. I searched the web for the best ways to solve such exercises and came up with the math library [<math.h>], but I couldn't find a good page with many examples of solved exercises. My best source so far is Wikipedia, but I understand that Wikipedia can't document all the functionality the library provides. I have looked at some examples; what I want is a complete coursebook for solving all the good math exercises we can do with paper and pen!
Does anyone have any good ideas?
So, you are trying to learn programming to solve math problems; that's good. But I think you are getting the wrong idea about programming: programming does not solve problems for you. To solve a problem, you have to decide on an algorithm and then express it as program statements, in whichever language you like; the program you have created will then give you an outcome based on that algorithm. You have to take the outcome and decide for yourself what to do next.
For example, finding the factorial of a number:

    int i = number;     /* number is assumed to be set elsewhere */
    long fact = 1;      /* declared here; long delays overflow a bit */
    while (i > 0)
        fact *= i;      /* note: i is never decremented */
This way you will get the factorial of the number you specify. It is you who has to decide whether your algorithm is working correctly, by comparing its output with manual work or with known results. As you can see, the above program is an infinite loop, and it is me who has to debug such issues; the program does not do that. So what does a program do? It automates what you would do manually, helping you save time and improve efficiency.
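Once you have spotted that, a corrected sketch that terminates looks like this (wrapped in a function for clarity; long only postpones overflow, which kicks in quickly for factorials):

    long factorial(int number)
    {
        int i = number;
        long fact = 1;
        while (i > 0) {
            fact *= i;
            i--;            /* the fix: count i down towards zero */
        }
        return fact;        /* factorial(0) correctly gives 1 */
    }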
So, to solve exercises, you have to understand the problem statement and then write the program. I see you are talking about the math library; there are good resources online covering its most useful functions, and for an in-depth understanding of how they work you can open up the library sources and study them.
A program can never solve your problem without you writing it and deciding on its efficiency.
See if <cmath> solves your problem (its plain-C counterpart is <math.h>):
Header <cmath> declares a set of functions to compute common mathematical operations and transformations.
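As a small taste of the kind of paper-and-pen exercise these functions help with, here is a sketch that solves a quadratic equation with <math.h> (the coefficients are made up for illustration; compile with -lm):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* Solve x^2 - 5x + 6 = 0, i.e. a = 1, b = -5, c = 6. */
        double a = 1.0, b = -5.0, c = 6.0;
        double disc = b * b - 4.0 * a * c;

        if (disc < 0) {
            printf("no real roots\n");
        } else {
            double x1 = (-b + sqrt(disc)) / (2.0 * a);
            double x2 = (-b - sqrt(disc)) / (2.0 * a);
            printf("roots: %g and %g\n", x1, x2);   /* prints 3 and 2 */
        }
        return 0;
    }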
I've been writing some Cython code to implement multi-precision array operations (mostly dot products and matrix inversion) that I want to use from Python. I used MPFR as the underlying C library, and by testing both in C and in Cython I find MPFR (at 200 bits of precision) to be 50-200 times slower (depending on the operation) than NumPy (at machine precision). I know MPFR is very fast, but I still find this overhead surprisingly large. Since my needs are very limited (fixed precision; only basic operations such as add, multiply, etc.), I was wondering if I could just hand-code some multi-precision operations (disregarding careful rounding, etc.). Unfortunately this involves quite a lot of work, so I was hoping to find some free code snippets in C or Intel assembly for doing basic multi-precision arithmetic. I would appreciate any references to the latter, or reasons why I should or should not take this approach.
UPDATE: I should have mentioned that I've already tried the QD library, and it's actually (slightly) slower than MPFR at similar precision (212 bits). I guess this must be due to C++ overhead.
You could try a double-double or quad-double library. These libraries take advantage of existing double-precision hardware for speed (I wrote a summary as part of a question of my own). There seems to be code for the latter.
These libraries require the underlying hardware to operate exactly as mandated by the IEEE 754 standard. They break down if computations are made with excess precision. If you target a modern desktop processor, make sure your compiler generates SSE2 instructions for the floating-point computations. If you are stuck with 8087 instructions for some reason, you are better off using a double-double-extended library (numbers represented as the sum of two 80-bit numbers). There is one within CRlibm that could be extracted without too much work.
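To give a flavor of the technique, the basic building block of double-double arithmetic is the "two-sum" error-free transformation (this is Knuth's classic version; a sketch assuming strict IEEE 754 double arithmetic, e.g. SSE2, as noted above):

    #include <stdio.h>

    /* Knuth's TwoSum: computes s and err such that s + err == a + b
       exactly, where s = fl(a + b) and err is the rounding error.
       Breaks down if the compiler keeps intermediates in excess
       precision (e.g. 8087 registers). */
    static void two_sum(double a, double b, double *s, double *err)
    {
        double sum = a + b;
        double bv  = sum - a;
        double av  = sum - bv;
        *err = (a - av) + (b - bv);
        *s   = sum;
    }

    int main(void)
    {
        double s, err;
        two_sum(1.0, 1e-30, &s, &err);
        /* err recovers the 1e-30 that rounding discarded from s */
        printf("s = %.17g, err = %.17g\n", s, err);
        return 0;
    }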
Alternatively, it may be worth trying GMP's mpf type. It could be faster, since it does not try to be as nice as MPFR (according to the latter's FAQ).
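A minimal sketch of what 212-bit mpf arithmetic looks like (assuming GMP is installed; link with -lgmp; the values are arbitrary):

    #include <gmp.h>
    #include <stdio.h>

    int main(void)
    {
        mpf_set_default_prec(212);      /* at least 212 bits of mantissa */

        mpf_t a, b, prod;
        mpf_inits(a, b, prod, NULL);
        mpf_set_ui(a, 1);
        mpf_div_ui(a, a, 3);            /* a = 1/3 at 212-bit precision */
        mpf_set_d(b, 2.5);
        mpf_mul(prod, a, b);            /* prod = a * b */
        gmp_printf("1/3 * 2.5 = %.60Ff\n", prod);
        mpf_clears(a, b, prod, NULL);
        return 0;
    }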
c89
gcc (GCC) 4.7.2
Hello,
I am looking at some string functions as I need to search for different words in a sentence.
I am just wondering: are the C standard library functions fully optimized?
For example, functions like these:
memchr, strstr, strspn, strchr, etc.
I need high performance, as that is what this is for. Is there anything better?
Regards,
You will almost certainly find that the standard library functions have been optimised as much as they can be, and they will probably outdo anything you code up in C.
Note that this is for the general case. If there is some restriction you're able to put on the functions, or some extra piece of information you have on the data itself, you may be able to get your code to run faster, since you have the advantage of that restriction or information.
For example, I've written C code for malloc that blew a library-supplied malloc away because I knew the application would never ask for more than 256 bytes so I just gave 256 bytes on every request. Yes, that was a waste of memory but it allowed speed improvements beyond the general case.
But, for the general case, you're better off sticking with the supplied stuff.
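To sketch the idea behind that malloc anecdote (names and sizes are illustrative, not the actual code): a fixed-size free-list allocator that exploits the "never more than 256 bytes" restriction can hand out and reclaim blocks in constant time:

    #include <stddef.h>

    #define POOL_BLOCKS 1024

    /* Every request gets a 256-byte block, so allocation is just a pop
       from a free list (or a bump of the high-water mark). The union
       guarantees each block is aligned well enough to hold the link. */
    union block {
        union block *next;
        unsigned char bytes[256];
    };

    static union block pool[POOL_BLOCKS];
    static union block *free_list = NULL;
    static size_t next_unused = 0;

    void *fixed_malloc(size_t n)
    {
        if (n > sizeof pool[0].bytes)
            return NULL;                  /* contract: at most 256 bytes */
        if (free_list != NULL) {
            union block *p = free_list;   /* reuse a freed block */
            free_list = p->next;
            return p;
        }
        if (next_unused < POOL_BLOCKS)
            return &pool[next_unused++];  /* carve a fresh block */
        return NULL;                      /* pool exhausted */
    }

    void fixed_free(void *p)
    {
        union block *b = p;
        if (b == NULL)
            return;
        b->next = free_list;              /* push back onto the free list */
        free_list = b;
    }

It wastes memory on small requests, exactly as described above, which is the trade the restriction buys.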
Fully optimized? Optimized for what?
Yes, the C standard library functions are written to be very efficient and have been tested and debugged for years, so you definitely shouldn't worry about most of them.
Assuming that you always align your data to 16-byte boundaries and always allocate about 16 bytes extra, it's definitely possible to speed up most stdlib routines.
But given that, e.g., the length of a string is not known in advance, and that reading even one byte too many can cause a segmentation fault, I wouldn't bother.