Algorithm to calculate the factorial of a number with least complexity

What is the fastest algorithm to calculate the factorial of a given number?
I would prefer the algorithm with the least time complexity.
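As a baseline, here is a minimal sketch of the straightforward iterative approach, which performs O(n) multiplications (the dominant cost for large n is actually big-integer arithmetic, which libraries such as CPython's `math.factorial` optimize with divide-and-conquer products):

```python
import math

def factorial(n: int) -> int:
    """Plain iterative factorial: O(n) multiplications."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

print(factorial(10))        # 3628800
print(math.factorial(10))   # 3628800, library implementation
```

In practice, preferring the library routine is usually the right call; the hand-written loop is mainly useful for illustration.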

Related

Improving time complexity of finding triplets

The program in question is to find the number of triplets in an array/list with a given sum.
My approach has been to first sort the array and then use the two-pointer technique to find such triplets. The overall time complexity turns out to be O(n^2).
Is there any way I can further improve the time complexity?
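The described approach (sort, then two pointers) can be sketched as follows; `count_triplets` is a hypothetical helper name, and the duplicate handling is one way to count all index triplets correctly:

```python
def count_triplets(arr, target):
    """Count index triplets with arr[i]+arr[j]+arr[k] == target.
    Sort once (O(n log n)), then run two pointers per anchor: O(n^2)."""
    arr = sorted(arr)
    n, count = len(arr), 0
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = arr[i] + arr[lo] + arr[hi]
            if s < target:
                lo += 1
            elif s > target:
                hi -= 1
            elif arr[lo] == arr[hi]:
                # All remaining values are equal: choose any 2 of them.
                m = hi - lo + 1
                count += m * (m - 1) // 2
                break
            else:
                # Count runs of equal values on both ends.
                l0, h0 = lo, hi
                while arr[lo] == arr[l0]:
                    lo += 1
                while arr[hi] == arr[h0]:
                    hi -= 1
                count += (lo - l0) * (h0 - hi)
    return count

print(count_triplets([1, 2, 3, 4, 5], 9))  # 2  (1+3+5 and 2+3+4)
```

For general inputs this O(n^2) bound is hard to beat; 3SUM-style problems are not known to have substantially faster algorithms.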

Worst case runtime for Quicksort when a constant difference after partition is guaranteed

We want to sort with the standard quicksort, and we are guaranteed that, after calling the partition method, the sizes of the two sections differ by at most a constant factor a. What is the worst-case runtime for this algorithm?
With a bounded size ratio between partitions, quicksort is worst-case O(n log(n)).
Essentially, quicksort does O(n) total work across each level of splits. Therefore, we need only consider the worst-case partitioning and count how many levels are required to get the section sizes down to 1 (or 2).
Now, if we are guaranteed that the larger of the two sections is at most a times as large as the other, then the worst case is when the larger section is always exactly a times as large.
In this case, the number of "layers" in the quicksort equals the number of times we have to divide the original size of the array by (1+a)/a to get 1 (each layer keeps the fraction a/(1+a) of the elements in the larger section). This is the logarithm with base (1+a)/a of the input size. Because a is constant, so is (1+a)/a, and therefore the number of layers is O(log(n)), which means the algorithm runs in O(n log(n)) worst-case.
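The layer-counting argument above can be checked numerically; this small sketch (function name is mine) simulates always recursing into the larger section:

```python
import math

def worst_case_layers(n: int, a: float) -> int:
    """Count splits until the largest section shrinks to size 1,
    assuming the larger side always holds a fraction a/(1+a)."""
    layers, size = 0, float(n)
    while size > 1:
        size *= a / (1 + a)   # keep following the larger section
        layers += 1
    return layers

# a = 1 is the balanced case: log2(n) layers.
print(worst_case_layers(1024, 1))  # 10
# Larger a gives more layers, but still only a constant factor more
# than log(n): log base (1+a)/a of n.
print(round(math.log(1024, (1 + 2) / 2)))
```

Doubling n adds only a constant number of layers for any fixed a, which is exactly the O(log n) depth claimed.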

Finding the k smallest eigenvalues and their corresponding eigenvectors for a large matrix

For a symmetric sparse square matrix of size 300,000 × 300,000, what is the best way to find the 10 smallest eigenvalues and their corresponding eigenvectors, within an hour or so, in any language or package?
The Lanczos algorithm, which operates on a Hermitian matrix, is one good way to find the lowest and greatest eigenvalues and corresponding eigenvectors. Note that a real symmetric matrix is by definition Hermitian. Lanczos requires O(N) storage and also roughly O(N) time to evaluate the extreme eigenvalues/eigenvectors. This contrasts with brute force diagonalization which requires O(N^2) storage and O(N^3) running time. For this reason, the Lanczos algorithm made possible approximate solutions to many problems which previously were not computationally feasible.
Here is a useful link to a UC Davis site, which lists implementations of Lanczos in a number of languages/packages, including FORTRAN, C/C++, and MATLAB.
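One readily available Lanczos-based implementation is SciPy's `eigsh` (a wrapper around ARPACK's implicitly restarted Lanczos). A sketch on a small random symmetric matrix standing in for the 300,000 × 300,000 case:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Small random sparse matrix as a stand-in for the real problem.
n = 500
A = sp.random(n, n, density=0.01, format="csr", random_state=0)
A = (A + A.T) * 0.5  # symmetrize, so the matrix is Hermitian

# which="SA" asks for the smallest algebraic eigenvalues.
vals, vecs = eigsh(A, k=10, which="SA")
print(vals.shape, vecs.shape)  # (10,) (500, 10)
```

For very large matrices, shift-invert mode (`sigma=0`) often converges much faster on the smallest eigenvalues, at the cost of a sparse factorization.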

Can all recursion-based algorithms be converted into iterative (loop-based) algorithms?

As in the case of finding the factorial of a number or the Fibonacci series, we can write both recursion-based and loop-based solutions. Is recursion always convertible to a loop-based solution? If not, please give an example.
You can convert any recursive algorithm to an iterative one by explicitly storing the stack in your own object and maintaining it between iterations.
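A minimal sketch of this idea, using a binary-tree sum where the recursion is not a simple tail call (the function names and the tuple-based tree encoding are mine, for illustration):

```python
def sum_tree_recursive(node):
    """node is (value, left, right) or None."""
    if node is None:
        return 0
    value, left, right = node
    return value + sum_tree_recursive(left) + sum_tree_recursive(right)

def sum_tree_iterative(node):
    """Same traversal, but with an explicit stack object
    replacing the call stack."""
    total, stack = 0, [node]
    while stack:
        n = stack.pop()
        if n is None:
            continue
        value, left, right = n
        total += value
        stack.append(left)
        stack.append(right)
    return total

tree = (1, (2, None, None), (3, (4, None, None), None))
print(sum_tree_recursive(tree), sum_tree_iterative(tree))  # 10 10
```

The iterative version also sidesteps recursion-depth limits, since the stack lives on the heap.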

Overall complexity with multiple operations?

If I have an unsorted array, and I first sort it using quicksort with a running time of O(n log n) and then search for an element using binary search with a running time of O(log n), what is the overall running time of both operations? Are the individual running times added?
It will be O(n log n), because O(n log n + log n) = O(n log n).
So yes, you sum them, but in this case it doesn't matter:
If the function f can be written as a finite sum of other functions, then the fastest-growing one determines the order of f(n). (Wikipedia)
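The two-step pipeline can be sketched as follows (`sort_and_search` is a name chosen here; the binary search uses Python's standard `bisect` module):

```python
from bisect import bisect_left

def sort_and_search(arr, x):
    """Sort (O(n log n)), then binary search (O(log n)).
    The sort dominates, so the total is O(n log n)."""
    arr = sorted(arr)
    i = bisect_left(arr, x)
    return i < len(arr) and arr[i] == x

print(sort_and_search([5, 3, 8, 1], 8))  # True
print(sort_and_search([5, 3, 8, 1], 2))  # False
```

Note that if many searches follow one sort, the amortized picture changes: the O(n log n) sort is paid once, and each additional search costs only O(log n).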
