I have a task to implement the Toom-Cook 3-way multiplication algorithm. I'm following the description on Wikipedia (http://en.wikipedia.org/wiki/Toom%E2%80%93Cook_multiplication), and I have managed to store the two big numbers in strings and split the strings into smaller ones according to the "Splitting" step on the Wikipedia page.

The next step is "Evaluation", where I have to calculate a new number p0 = m0 + m2 ("Faster evaluation" by Bodrato, described on the same page), where m0 and m2 are the parts I created by splitting the large number in the previous step. The problem is that I cannot simply add up m0 and m2, since those two numbers can still be very large and impossible to add in the standard way.

Does this mean that I have to implement my own algorithm for adding large numbers (as well as subtracting and dividing, since they are also needed), or am I missing something? If anyone could link me to a possible implementation, or even pseudocode, it would be appreciated.
You have to implement your own methods for addition, subtraction, modulo, etc. Some time ago I tried to implement a BigInteger library, and I found some resources that may be useful for you:
The BigNum Math book (as pointed out by the previous answer)
The Java OpenJDK BigInteger implementation, with documentation
Algorithms and Data Structures: The Basic Toolbox (I learned Karatsuba from this book)
By the way, I recommend using base 2 for your numbers (see here), because you can take advantage of the binary nature of the computer to make your operations easier and faster.
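For the decimal-string representation you describe in the question, schoolbook addition is only a few lines. Here is a minimal sketch (the names are illustrative, not taken from any of the references above):

```c
#include <stdio.h>
#include <string.h>

/* Sketch of schoolbook addition on decimal strings (most significant digit
 * first, as in the question). result must have room for
 * max(strlen(a), strlen(b)) + 2 characters. */
void big_add(const char *a, const char *b, char *result)
{
    int i = (int)strlen(a) - 1;
    int j = (int)strlen(b) - 1;
    int k = (i > j ? i : j) + 1;          /* result may gain one extra digit */
    int carry = 0;

    result[k + 1] = '\0';
    while (i >= 0 || j >= 0 || carry) {
        int sum = carry;
        if (i >= 0) sum += a[i--] - '0';
        if (j >= 0) sum += b[j--] - '0';
        result[k--] = (char)('0' + sum % 10);  /* keep one digit */
        carry = sum / 10;                      /* push the rest up */
    }
    while (k >= 0) result[k--] = '0';     /* pad any unused leading position */
}

int main(void)
{
    char out[64];
    big_add("987654321987654321", "123456789123456789", out);
    printf("%s\n", out);                  /* prints 1111111111111111110 */
    return 0;
}
```

Subtraction follows the same pattern with a borrow instead of a carry. Note the sketch can leave one leading zero when there is no final carry; a real routine would strip it. Once this feels comfortable, switching to a base-2^k representation is mostly a matter of replacing 10 with the new base.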
LibTomMath is open source and includes a Toom-Cook multiplication; have a look.
I recently had an interview which I failed; in the end I was told I did not have enough experience to work for them.
The position was embedded C software developer. The target platform was some kind of very simple 32-bit architecture whose processor does not support floating-point numbers or operations on them, so double and float cannot be used.
The task was to develop a C routine for this architecture that takes one integer and returns whether or not it is a Fibonacci number. However, only an additional 1 KB of temporary memory may be used during execution. That means even if I simulated very large integers, I couldn't just build up the sequence and iterate through it.
As far as I know, a positive integer n is a Fibonacci number exactly when one of

5n² + 4

or

5n² − 4

is a perfect square. So I answered that it is simple: the routine just has to determine whether that is the case.
They then responded: on the target architecture no floating-point-like operations are supported, so the square root cannot be obtained with the standard library's sqrt function. It was also mentioned that basic operations like division and modulus may not work either, because of the architecture's limitations.
Then I said, okay, we could build an array of the square numbers up to 256, iterate through it, and compare the entries to the numbers given by the formulas above. They said this was a bad approach, even if it would work, so they did not accept that answer.
Finally I gave up, since I had no other ideas. I asked what the solution would be; they said they wouldn't tell me, but advised me to look for it myself. They did hint that my first approach (the two formulas) should be the key, but that the square root would have to be computed some other way.

I googled a lot at home, but never found any "alternative" square-root algorithms; everything I found relied on floating-point numbers.
For operations like division and modulus, so-called integer division can be used. But what can be used for the square root?

Even though I failed the interview test, this is a very interesting topic for me: working on architectures where no floating-point operations are available.
Therefore my questions:
How can floating-point numbers be simulated, if only integers may be used?
What would be a possible solution in C for the problem described above? Code examples are welcome.
The point of this type of interview is to see how you approach new problems. If you happen to already know the answer, that is undoubtedly to your credit but it doesn't really answer the question. What's interesting to the interviewer is watching you grapple with the issues.
For this reason, it is common that an interviewer will add additional constraints, trying to take you out of your comfort zone and seeing how you cope.
I think it's great that you knew that fact about recognising Fibonacci numbers. I wouldn't have known it without consulting Wikipedia. It's an interesting fact but does it actually help solve the problem?
Apparently, it would be necessary to compute 5n²±4, compute the square roots, and then verify that one of them is an integer. With access to a floating point implementation with sufficient precision, this would not be too complicated. But how much precision is that? If n can be an arbitrary 32-bit signed number, then n² is obviously not going to fit into 32 bits. In fact, 5n²+4 could be as big as 65 bits, not including a sign bit. That's far beyond the precision of a double (normally 52 bits) and even of a long double, if available. So computing the precise square root will be problematic.
Of course, we don't actually need a precise computation. We can start with an approximation, square it, and see whether it is either four more or four less than 5n². And it's easy to see how to compute a good guess: it will be very close to n×√5. By using a good precomputed approximation of √5, we can easily do this computation without the need for floating point, without division, and without a sqrt function. (If the approximation isn't accurate, we might need to adjust the result up or down, but that's easy to do using the identity (n+1)² = n²+2n+1; once we have n², we can compute (n+1)² with only addition.)
We still need to solve the problem of precision, so we'll need some way of dealing with 66-bit integers. But we only need to implement addition and multiplication of positive integers, which is considerably simpler than a full-fledged bignum package. Indeed, if we can prove that our square-root estimate is close enough, we could safely do the verification modulo 2³¹.
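To make the analytic route concrete, here is a sketch for the easier case where 5n²±4 still fits in an unsigned 64-bit integer (roughly n below 1.9×10⁹, so not quite the full 32-bit range discussed above, and not necessarily what the interviewer had in mind). The digit-by-digit integer square root uses only shifts, additions and comparisons, so it needs neither floating point nor division:

```c
#include <stdint.h>

/* Classic shift-and-subtract integer square root: returns floor(sqrt(x))
 * using only shifts, additions and comparisons (no division, no FP). */
static uint64_t isqrt64(uint64_t x)
{
    uint64_t res = 0;
    uint64_t bit = (uint64_t)1 << 62;      /* 4^31, the largest power of 4 */
    while (bit > x) bit >>= 2;
    while (bit != 0) {
        if (x >= res + bit) {
            x -= res + bit;
            res = (res >> 1) + bit;
        } else {
            res >>= 1;
        }
        bit >>= 2;
    }
    return res;
}

/* Fibonacci test via the 5n^2 +/- 4 property. Assumes 5*n*n + 4 fits in
 * 64 bits, i.e. n below about 1.9e9, not quite the full 32-bit range. */
int is_fib_square_test(uint32_t n)
{
    uint64_t t = 5ull * n * n;
    uint64_t r1 = isqrt64(t + 4);
    int ok1 = (r1 * r1 == t + 4);
    int ok2 = 0;
    if (t >= 4) {
        uint64_t r2 = isqrt64(t - 4);
        ok2 = (r2 * r2 == t - 4);
    }
    return ok1 || ok2;
}
```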
So the analytic solution can be made to work, but before diving into it, we should ask whether it's the best solution. One very common category of suboptimal programming is clinging desperately to the first idea you come up with, even as its complications become increasingly evident. That will be one of the things the interviewer wants to know about you: how flexible are you when presented with new information or new requirements?
So what other ways are there to know whether n is a Fibonacci number? One interesting fact is that if n is Fib(k), then k is the floor of logφ(n×√5 + 0.5). Since logφ is easily computed from log2, which in turn can be approximated by a simple bitwise operation, we could try finding an approximation of k and verifying it using the classic O(log k) recursion for computing Fib(k). None of the above involves numbers bigger than the capacity of a 32-bit signed type.
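The O(log k) recursion mentioned here is usually written with the fast-doubling identities F(2m) = F(m)·(2F(m+1) − F(m)) and F(2m+1) = F(m)² + F(m+1)². A minimal sketch, using 64-bit intermediates for simplicity:

```c
#include <stdint.h>

/* Fast-doubling Fibonacci: returns F(k) and stores F(k+1) in *next.
 * Uses F(2m) = F(m)*(2F(m+1) - F(m)) and F(2m+1) = F(m)^2 + F(m+1)^2.
 * Recursion depth is about log2(k), so stack use stays tiny. */
static uint64_t fib_pair(unsigned k, uint64_t *next)
{
    if (k == 0) {
        *next = 1;                    /* F(1) */
        return 0;                     /* F(0) */
    }
    uint64_t b;
    uint64_t a = fib_pair(k / 2, &b); /* a = F(m), b = F(m+1), m = k/2 */
    uint64_t c = a * (2 * b - a);     /* F(2m)   */
    uint64_t d = a * a + b * b;       /* F(2m+1) */
    if (k % 2 == 0) {
        *next = d;
        return c;                     /* k = 2m   */
    } else {
        *next = c + d;                /* F(2m+2)  */
        return d;                     /* k = 2m+1 */
    }
}
```

Given the estimate of k from the logarithm, it is then enough to compute Fib(k) this way (perhaps also for k±1) and compare it against n.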
Even more simply, we could just run through the Fibonacci series in a loop, checking to see if we hit the target number. Only 47 loops are necessary. Alternatively, these 47 numbers could be precalculated and searched with binary search, using far less than the 1k bytes you are allowed.
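A sketch of that loop, which needs only three variables and nowhere near 1 KB:

```c
#include <stdint.h>

/* Walk the Fibonacci sequence until it reaches or passes n.
 * At most about 47 iterations are needed for a 32-bit n; 64-bit
 * accumulators keep the final sum from wrapping. */
int is_fibonacci(uint32_t n)
{
    uint64_t a = 0, b = 1;            /* F(0), F(1) */
    while (b < n) {
        uint64_t t = a + b;           /* F(k+1) = F(k-1) + F(k) */
        a = b;
        b = t;
    }
    return n == b || n == a;          /* n == a catches n == 0 */
}
```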
It is unlikely an interviewer for a programming position would be testing for knowledge of a specific property of the Fibonacci sequence. Thus, unless they present the property to be tested, they are examining the candidate’s approaches to problems of this nature and their general knowledge of algorithms. Notably, the notion to iterate through a table of squares is a poor response on several fronts:
At a minimum, binary search should be the first thought for table look-up. Some calculated look-up approaches could also be proposed for discussion, such as using a find-first-set-bit instruction to index into a table.
Hashing might be another idea worth considering, especially since an efficient customized hash might be constructed.
Once we have decided to use a table, it is likely a direct table of Fibonacci numbers would be more useful than a table of squares.
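Combining the last two points, here is a sketch of a direct Fibonacci table searched with binary search; the table is built once at first use and occupies 192 bytes, comfortably inside the 1 KB budget:

```c
#include <stdint.h>

/* The 48 Fibonacci numbers that fit in 32 bits, located by binary search. */
int is_fib_table(uint32_t n)
{
    static uint32_t fib[48];
    static int count = 0;

    if (count == 0) {                        /* fill the table on first use */
        uint64_t a = 0, b = 1;
        while (a <= 0xFFFFFFFFull && count < 48) {
            fib[count++] = (uint32_t)a;
            uint64_t t = a + b;
            a = b;
            b = t;
        }
    }

    int lo = 0, hi = count - 1;              /* standard binary search */
    while (lo <= hi) {
        int mid = (lo + hi) / 2;
        if (fib[mid] == n) return 1;
        if (fib[mid] < n) lo = mid + 1;
        else hi = mid - 1;
    }
    return 0;
}
```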
I am seeking advice on how to incorporate C or C++ code into my R code to speed up an MCMC program that uses a Metropolis-Hastings algorithm. I am using MCMC to model the likelihood, given various covariates, that an individual will be assigned a particular rank in a social status hierarchy by a third party (the judge): each judge (approx. 80, across 4 villages) was asked to rank a group of individuals (approx. 80, across 4 villages) based on their assessment of each individual's social status. Therefore, for each judge I have a vector of ranks corresponding to their judgement of each individual's position in the hierarchy.
To model this I assume that, when assigning ranks, judges are basing their decisions on the relative value of some latent measure of an individual's utility, u. Given this, it can then be assumed that a vector of ranks, r, produced by a given judge is a function of an unobserved vector, u, describing the utility of the individuals being ranked, where the individual with the kth highest value of u will be assigned the kth rank. I model u, using the covariates of interest, as a multivariate normally distributed variable and then determine the likelihood of the observed ranks, given the distribution of u generated by the model.
In addition to estimating the effect of, at most, 5 covariates, I also estimate hyperparameters describing variance between judges and items. Therefore, for every iteration of the chain I estimate a multivariate normal density approximately 8-10 times. As a result, 5000 iterations can take up to 14 hours. Obviously, I need to run it for much more than 5000 runs and so I need a means for dramatically speeding up the process. Given this, my questions are as follows:
(i) Am I right to assume that the best speed gains will be had by running some, if not all of my chain in C or C++?
(ii) assuming the answer to question 1 is yes, how do I go about this? For example, is there a way for me to retain all my R functions, but simply do the looping in C or C++: i.e. can I call my R functions from C and then do looping?
(iii) I guess what I really want to know is how best to approach the incorporation of C or C++ code into my program.
First make sure your slow R version is correct. Debugging R code might be easier than debugging C code. Done that? Great. You now have correct code you can compare against.
Next, find out what is taking the time. Use Rprof to run your code and see what is taking the time. I did this for some code I inherited once, and discovered it was spending 90% of the time in the t() function. This was because the programmer had a matrix, A, and was doing t(A) in a zillion places. I did one tA=t(A) at the start, and replaced every t(A) with tA. Massive speedup for no effort. Profile your code first.
Now, you've found your bottleneck. Is it code you can speed up in R? Is it a loop that you can vectorise? Do that. Check your results against your gold-standard correct code. Always. Yes, I know it's hard to compare algorithms that rely on random numbers, so set the seeds the same and try again.
Still not fast enough? Okay, now maybe you need to rewrite parts (the lowest level parts, generally, and those that were taking the most time in the profiling) in C or C++ or Fortran, or if you are really going for it, in GPU code.
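If you do get to that stage, the lowest-friction way to move one small, numeric-only kernel into C (without Rcpp) is R's .C interface. A sketch with made-up names, just to show the mechanics:

```c
/* vec_sum.c -- illustrative example of R's .C interface (names are made up).
 *
 * Compile:      R CMD SHLIB vec_sum.c
 * Call from R:
 *   dyn.load("vec_sum.so")                       # .dll on Windows
 *   .C("vec_sum", as.double(x), as.integer(length(x)), out = double(1))$out
 */

/* With .C, every argument arrives as a pointer to the R vector's data. */
void vec_sum(double *x, int *n, double *out)
{
    double s = 0.0;
    for (int i = 0; i < *n; i++)     /* the hot loop identified by profiling */
        s += x[i];
    *out = s;
}
```

For anything that needs real R objects rather than plain numeric vectors, the .Call interface or the Rcpp package recommended in the other answers is the more comfortable route.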
Again, really check the code is giving the same answers as the correct R code. Really check it. If at this stage you find any bugs anywhere in the general method, fix them in what you thought was the correct R code and in your latest version, and rerun all your tests. Build lots of automatic tests. Run them often.
Read up about code refactoring. It's called refactoring because if you tell your boss you are rewriting your code, he or she will say 'why didn't you write it correctly first time?'. If you say you are refactoring your code, they'll say "hmmm... good". THIS ACTUALLY HAPPENS.
As others have said, Rcpp is made of win.
A complete example using R, C++ and Rcpp is provided by this blog post, which was inspired by a post on Darren Wilkinson's blog (and he has more follow-ups). The example is also included with recent releases of Rcpp in a directory RcppGibbs and should get you going.
I have a blog post which discusses exactly this topic which I suggest you take a look at:
http://darrenjw.wordpress.com/2011/07/31/faster-gibbs-sampling-mcmc-from-within-r/
(this post is more relevant than the post of mine that Dirk refers to).
I think the best way at the moment to integrate C or C++ is Dirk Eddelbuettel's Rcpp package. You can find a lot of information on his website. There is also a talk at Google, available on YouTube, that might be interesting.
Check out this project:
https://github.com/armstrtw/rcppbugs
Also, here is a link to the R/Fin 2012 talk:
https://github.com/downloads/armstrtw/rcppbugs/rcppbugs.pdf
I would suggest benchmarking each step of the MCMC sampler and identifying the bottleneck. If you put each full conditional or M-H step into a function, you can use the R compiler package, which might give you a 5%-10% speed gain. The next step is to use Rcpp.

I think it would be really nice to have a general-purpose Rcpp function which generates a single draw using the M-H algorithm, given a likelihood function.

However, with Rcpp some things become difficult if you only know the R language: non-standard random distributions (especially truncated ones) and using arrays. You have to think more like a C programmer there.
The multivariate normal is actually a big issue in R. dmvnorm is very inefficient and slow; dmnorm is faster, but in some models it gives me NaNs sooner than dmvnorm does.

Neither takes an array of covariance matrices, so it is impossible to vectorize the code in many instances. As long as you have a common covariance and means, however, you can vectorize, which is the R-ish strategy for speeding things up (and the opposite of what you would do in C).
I have a very large number, possibly up to 50 digits long. I am taking it as string input, but I need to perform operations on it, so I need to convert it to a proper base, say 256.

What would be the best algorithm to do this?
Multiple-precision arithmetic (a.k.a. bignums) is a difficult subject, and the good algorithms are not intuitive (there are books about them).

There are several libraries handling bignums, e.g. the GMP library (and there are others). Most of them take advantage of hardware instructions (e.g. add with carry) with carefully tuned small chunks of assembler code, so they perform better than anything you would be able to write in a couple of months.
I strongly recommend using existing bignum libraries. Writing your own would take you years of work, if you want it to be competitive.
See also answers to this question.
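That said, the specific conversion asked about (a decimal string of up to ~50 digits into base 256) is small enough to sketch without a library: repeatedly divide the decimal string by 256 and collect the remainders. The names below are illustrative only:

```c
#include <string.h>

/* Sketch: convert a decimal string (up to ~50 digits) into base-256 "digits"
 * (bytes), least significant byte first, by repeated long division of the
 * decimal string by 256. Returns the number of bytes written, or -1 on error. */
int decimal_to_base256(const char *dec, unsigned char *out, int out_cap)
{
    char buf[64];
    size_t len = strlen(dec);
    if (len == 0 || len >= sizeof buf)
        return -1;
    memcpy(buf, dec, len + 1);

    int nbytes = 0;
    while (!(len == 1 && buf[0] == '0')) {       /* until the quotient is 0 */
        int rem = 0;
        size_t j = 0;
        for (size_t i = 0; i < len; i++) {       /* long division by 256 */
            int cur = rem * 10 + (buf[i] - '0'); /* cur < 2560 */
            int q   = cur / 256;                 /* next quotient digit, 0..9 */
            rem     = cur % 256;
            if (j > 0 || q > 0)                  /* skip leading zeros */
                buf[j++] = (char)('0' + q);
        }
        if (j == 0) { buf[0] = '0'; j = 1; }     /* quotient was zero */
        buf[j] = '\0';
        len = j;

        if (nbytes >= out_cap) return -1;
        out[nbytes++] = (unsigned char)rem;      /* one base-256 digit */
    }
    if (nbytes == 0) {                           /* the input was "0" */
        if (out_cap < 1) return -1;
        out[nbytes++] = 0;
    }
    return nbytes;
}
```

The bytes come out least significant first (so "300" becomes {44, 1}, i.e. 1·256 + 44); reverse them if you want the most significant byte first.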
Recently I asked this question: How to get the fundamental frequency from FFT? (you don't actually need to read it)
My question right now is: how do I use the cepstral algorithm?

I just don't know how to use it, because the only language I know is ActionScript 3, so I have few references about the native functions found in C, Java and so on, and about how I should implement them in AS. Most articles are about those languages =/

(Although answers in languages other than AS are welcome, just please explain how the code works.)
The articles I found about using the cepstrum to find the fundamental frequency from an FFT result told me that I should do this:
signal → FT → abs() → square → log → FT → abs() → square → power cepstrum
mathematically:
|F{log(|F{f(t)}|²)}|²
Important info:
I am developing a GUITAR TUNER in flash
This is the first time I am dealing with advanced sound
I am using an FFT to extract frequency bins from the signal that reaches the user's microphone, but I got stuck on getting the fundamental frequency from it
I don't know:
How to square an ARRAY (I mean, the data that my FFT gives me is an array; should I multiply it by itself? ActionScript's debugger throws errors when I try fftResults * fftResults)
How to apply the "log". I would not know how to apply it even if I had a single number.
What is the difference between the complex cepstrum and the power cepstrum, and which of them should I use? I am trying to develop a guitar tuner.
Thanks!
Note that the output of an FFT is an array of complex values, i.e. each bin = re + j*im. I think you can just combine the abs and square operations and calculate re*re + im*im for each bin. This gives you a single positive value for each bin, and obviously you can calculate the log of that value for each bin quite easily. You then need to do a second FFT on this log-squared data, and again from the output of this second FFT you calculate re*re + im*im for each bin. You will then have an array of positive values which will have one or more peaks representing the fundamental frequency or frequencies of your input.
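A sketch of that pipeline in C (rather than ActionScript); the fft() routine here is only a placeholder for whatever FFT implementation you already use, and the small offset inside the log avoids taking log(0):

```c
#include <math.h>

/* Hypothetical FFT provided elsewhere: in-place complex FFT of length n
 * (n a power of two), operating on parallel re[]/im[] arrays. */
void fft(double *re, double *im, int n);

/* Power-cepstrum pipeline described above:
 * FFT -> |.|^2 -> log -> FFT -> |.|^2
 * Assumes n <= 4096 for the scratch buffers. */
void power_cepstrum(const double *signal, double *cep, int n)
{
    double re[4096], im[4096];

    for (int i = 0; i < n; i++) { re[i] = signal[i]; im[i] = 0.0; }
    fft(re, im, n);                                 /* first transform */

    for (int i = 0; i < n; i++) {
        double p = re[i] * re[i] + im[i] * im[i];   /* abs then square */
        re[i] = log(p + 1e-12);                     /* avoid log(0) */
        im[i] = 0.0;
    }
    fft(re, im, n);                                 /* second transform */

    for (int i = 0; i < n; i++)                     /* power cepstrum */
        cep[i] = re[i] * re[i] + im[i] * im[i];
}
```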
The autocorrelation is the easiest and most logical approach, and the best place to start.
To get this working, start with a simple autocorrelation, and then, if necessary, improve it following the outline provided by YIN. (YIN is based on the autocorrelation with refinements. But whether or not you'll need these refinements depends on details of your situation.) This way also, you can learn as you go rather than trying to understand the whole thing in one shot.
Although FFT approaches can also work, they are a bit more confusing. The issue is that what you are really after is the period, and this isn't well represented by the FFT. The missing fundamental is a good example of this, where if you have 2Hz and 3Hz, the fundamental is 1Hz, but is nowhere in the FFT, while 1Hz is obvious in a time based representation (e.g. the autocorrelation). Add to this that overtones aren't necessarily harmonic, and noise, etc... and all of these issues make it usually best to start with a direct approach to the problem.
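For comparison, here is what a naive autocorrelation pitch estimate looks like, before any of YIN's refinements; the lag range and names are illustrative and would need tuning for guitar:

```c
/* Naive autocorrelation pitch detector: for each candidate lag, correlate the
 * signal with a shifted copy of itself and pick the lag with the strongest
 * peak. The lag range roughly covers 80 Hz to 1000 Hz. */
double estimate_pitch(const double *x, int n, double sample_rate)
{
    int min_lag = (int)(sample_rate / 1000.0);   /* highest pitch of interest */
    int max_lag = (int)(sample_rate / 80.0);     /* lowest pitch of interest  */
    int best_lag = 0;
    double best = 0.0;

    for (int lag = min_lag; lag <= max_lag && lag < n; lag++) {
        double sum = 0.0;
        for (int i = 0; i + lag < n; i++)
            sum += x[i] * x[i + lag];            /* autocorrelation at this lag */
        if (sum > best) { best = sum; best_lag = lag; }
    }
    return best_lag ? sample_rate / best_lag : 0.0;
}
```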
There are many ways of finding the fundamental frequency (F0).
For languages like Java there are many libraries with these types of algorithms already implemented (you can study their sources):
MFCC (based on the cepstrum), implemented in Comirva (open source).
Audacity (beta version!) (open source) presents the cepstrum, autocorrelation, and enhanced autocorrelation.
Yin, based on autocorrelation (example).
Finding the maximum signal values after an FFT.
All these algorithms may be very helpful for you. However, the easiest way to get F0 (a single value in Hz) would be to use Yin.
I'm doing some Project Euler problems and most of the time, the computations involve large numbers beyond int, float, double etc.
Firstly, I know that I should be looking for more efficient ways of calculating so as to avoid the large-number problem, and I've heard of bignum libraries.

But, out of academic interest, I'd like to know how to code my own solution to this problem.
Can any expert please help me out? (My language is C)
You need to store the big numbers in a base that your computer can easily handle with its native types, and then store the digits in a variable length array. I'd suggest that for simplicity you start by storing the numbers in base 10 just to get the hang of how to do this. It will make debugging a lot easier.
Once you have a class that can store the numbers in this form, it's just a matter of implementing the operations add, subtract, multiply, etc. on this class. Each operation will have to iterate over digits of its operands and combine them, being careful to carry correctly so that your digits are never larger than the base. Addition and subtraction are simple. Multiplication requires a bit more work as the naive algorithm requires nested loops. Then once you have that working, you can try implementing exponentiation in an efficient manner (e.g. repeated squaring).
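As an illustration of the nested-loop multiplication mentioned above, here is a sketch operating on little-endian base-10 digit arrays (digit 0 is the least significant); the names are illustrative only:

```c
#include <string.h>

/* Schoolbook multiplication on little-endian base-10 digit arrays.
 * result must have room for alen + blen digits; returns the number of
 * digits actually used. */
int big_mul(const unsigned char *a, int alen,
            const unsigned char *b, int blen,
            unsigned char *result)
{
    memset(result, 0, alen + blen);
    for (int i = 0; i < alen; i++) {
        int carry = 0;
        for (int j = 0; j < blen; j++) {
            int t = result[i + j] + a[i] * b[j] + carry;
            result[i + j] = (unsigned char)(t % 10);   /* keep one digit */
            carry = t / 10;                            /* push the rest up */
        }
        result[i + blen] += (unsigned char)carry;      /* final carry column */
    }
    int len = alen + blen;
    while (len > 1 && result[len - 1] == 0) len--;     /* trim leading zeros */
    return len;
}
```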
If you are planning to write a serious bignum implementation, base 10 won't cut it. It's wasteful of memory and it will be slow. You should choose a base that is natural for the computer, such as 256 or the word size (2**32). However this will make simple operations more difficult as you will get overflows if you naively add two digits, so you will need to handle that very carefully.
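The carry handling that needs care looks like this when the "digits" are 32-bit limbs; accumulating into a 64-bit temporary is one common way to make the carry explicit (a sketch, not a full library):

```c
#include <stdint.h>

/* Addition on base-2^32 "digits" (limbs), least significant limb first.
 * Assumes both numbers have n limbs and out has room for n + 1. */
void limb_add(const uint32_t *a, const uint32_t *b, uint32_t *out, int n)
{
    uint64_t carry = 0;
    for (int i = 0; i < n; i++) {
        uint64_t t = (uint64_t)a[i] + b[i] + carry;  /* cannot overflow 64 bits */
        out[i] = (uint32_t)t;                        /* low 32 bits: the digit  */
        carry = t >> 32;                             /* high bit: carry out     */
    }
    out[n] = (uint32_t)carry;                        /* final carry, 0 or 1     */
}
```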
C is not a good choice for Project Euler. The benefits of C are raw speed, machine portability (to an extent, with standard C), language interoperability (if one language communicates with another, C is a popular first choice), staying close to a specific library or platform's API (because C is so common, e.g. OS APIs), and a stable language and standard library. None of these benefits apply to solving Project Euler problems. Not even raw speed, because most of the problems aren't about raw computation but about finding the required algorithm, and you can let a slow program run all day before submitting anyway.
If you are attempting Project Euler problems to broaden your experience with C, that's perfectly fine, just realize this experience doesn't necessarily apply to long-lived and real-world C projects you may work on.
For this kind of short, one-off problem those languages commonly described as "scripting languages" will work better, faster (in dev time), and easier. Try Python, it stays close to C in many ways, including a C API, and out of the various popular "scripting languages" is possibly the one for which you will find the most use in conjunction with C projects.
This may become an unpopular answer, but it isn't a rant—plus I really like C and use C/C++ often—and there is an explicit answer here to your problem: "don't use C", with your final large number solution depending on which alternative you choose. Again picking on Python, integers do not have an upper bound (note below), and I use this to naturally code answers to Project Euler problems, where in other languages I have to use a painful-by-comparison alternative number library.
(Python integers: There are two integer types in 2.x, 'int' and 'long' (which have been completely unified in 3.x). The conversion between them is practically seamless, and 'long' allows arbitrarily large values, instead of just being a bigger 'int' type as C's long is.)
A popular bignum library for C/C++ is the GNU MP Bignum Library (GMP). I've used it for several Project Euler problems, but the fact remains that C isn't a very suitable language for Euler problems. If performance were more important, C would have more to give, but as it is you're much better off using a language with built-in bignum support, such as Ruby (there are lots of others).
A simple way is to think of the number as its string representation in base b. With b = 10, simple arithmetic operations like addition on two such strings can be done using the same method we use when adding numbers with pen and paper. The same goes for other simple operations. For better performance, you can use a larger base.
A simple bignum implementation like that should be enough for most Project Euler problems (probably all, but I haven't solved many of them, so I can't be sure), but there are much faster algorithms for operations such as multiplication and division/mod.
Although I recommend writing your own bignum for practice, if you are really stuck you can take ideas from the code of already-implemented bigint libraries. For a serious implementation, something like GMP is the obvious choice, but you can also find small bigints written by other people for similar practice problems online (e.g. Abednego's bigint.cpp).
Here's a nice and simple bignum module for C. You can learn from it for ideas. The C code isn't the highest quality, but the algorithm is well implemented and quite common.
For more advanced stuff, look up GMP.
If you want a nice C++ version (I know, you said C, but this is really interesting code), take a look at the internals of CGAL: http://www.cgal.org/