Additivity of time complexity? - c

I'm currently working on a school project that has a time complexity restriction. I'm still beginning to learn to program, so apologies if this question is stupid.
Essentially, let's say I have a constraint that my program must run in O(n^2 + m log m) time, where n and m are sizes of the input. Does that mean that's the maximum time complexity any specific component of my program can run in? That is, if I have a ton of, say, O(n^2 + m) functions, but the most complex function runs in O(n^2 + m log m), will I still satisfy the time bound? For example, say my program runs the following functions in order:
functionA
functionB
functionC
functionD
Functions A, B, and C are all O(n^2 + m) or something less than the restriction, but the last function is O(n^2 + m log m). Will my program be within the time constraint? Or would the overall complexity somehow be O(n^2 + m + n^2 + m + n^2 + m + n^2 + m log m)?
I'm still learning about time complexity, so any help for this would be much appreciated. Thanks a bunch.

If you're executing different functions in sequence, the costs add, and the sum is dominated by the worst term, so you just take the worst complexity and that's the overall complexity.
For example, if fnA were O(n) and fnB were O(n·n!), then any effect of fnA would be totally swamped by fnB, assuming they're run sequentially (not nested).
In your case, the final function meets the requirement exactly and the first three are better, so the overall complexity is that of the final one.
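For concreteness, here's a minimal, compilable sketch (the phase names and the placeholder work inside them are made up, not your actual code) showing why sequential phases add and the largest term wins:

```c
/* Illustrative only: two hypothetical phases whose loop structure matches
   the complexities in the comments. */
#include <stdio.h>

static long phaseA(long n, long m) {          /* O(n^2 + m) */
    long s = 0;
    for (long i = 0; i < n; i++)
        for (long j = 0; j < n; j++) s++;     /* n^2 work */
    for (long k = 0; k < m; k++) s++;         /* + m work */
    return s;
}

static long phaseD(long n, long m) {          /* O(n^2 + m log m) */
    long s = 0;
    for (long i = 0; i < n; i++)
        for (long j = 0; j < n; j++) s++;     /* n^2 work */
    for (long k = 0; k < m; k++)
        for (long g = 1; g < m; g *= 2) s++;  /* ~m * log2(m) work */
    return s;
}

int main(void) {
    long n = 100, m = 100;
    /* Sequential phases: costs add, and the largest term dominates:
       O(n^2 + m) + O(n^2 + m) + O(n^2 + m) + O(n^2 + m log m)
       = O(n^2 + m log m). */
    long total = phaseA(n, m) + phaseA(n, m) + phaseA(n, m) + phaseD(n, m);
    printf("work units: %ld\n", total);
    return 0;
}
```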

Related

Overall complexity with multiple operations?

If I have an unsorted array, and I first sort it using quicksort with a run time of O(n log n) and then search for an element using binary search with a run time of O(log n), what would be the overall run time of both operations? Would the individual run times be added?
It will be O(n log n), because O(n log n + log n) = O(n log n).
So yes, you sum them, but in this case it doesn't matter.
"If the function f can be written as a finite sum of other functions, then the fastest growing one determines the order of f(n)." (Wikipedia)
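As a concrete sketch of that pattern (using the standard C library's qsort and bsearch, not any code from the question), the total cost is the O(n log n) sort plus an O(log n) search, which is still O(n log n):

```c
/* Sort once, then binary-search: O(n log n) + O(log n) = O(n log n). */
#include <stdio.h>
#include <stdlib.h>

static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

int main(void) {
    int a[] = {42, 7, 19, 3, 88, 23};
    size_t n = sizeof a / sizeof a[0];
    int key = 23;

    qsort(a, n, sizeof a[0], cmp_int);                     /* O(n log n) on average */
    int *hit = bsearch(&key, a, n, sizeof a[0], cmp_int);  /* O(log n)              */

    if (hit) printf("found %d\n", *hit);
    else     printf("%d not found\n", key);
    return 0;
}
```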

Algorithm Complexity vs Running Time

I have an algorithm used for signal quantization, and an equation that calculates its complexity for different parameter values. The algorithm is implemented in C. Sometimes the equation says the complexity is lower, yet the running time is higher. I'm not 100% sure about the equation.
My question: do running time and algorithmic complexity always have a direct relationship? That is, does higher complexity always mean higher running time, or does it differ from one algorithm to another?
Time complexity is more a measure of how time varies with input size than an absolute measure.
(This is an extreme simplification, but it will do for explaining the phenomenon you're seeing.)
If n is your problem size and your actual running time is 1000000000 * n, it has linear complexity, while 0.000000001*n^2 would be quadratic.
If you plot them against each other, you'll see that 0.000000001*n^2 is smaller than 1000000000 * n all the way up to around n = 1e18, despite its "greater complexity".
(0.000000001*n^2 + 1000000000 * n would also be quadratic, but always have worse execution time than both.)
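To make that concrete, here's a small, purely illustrative program that tabulates those two made-up cost models at a few sizes; the coefficients are just the ones used above, not measurements of any real algorithm:

```c
/* Illustrative only: compare a "linear" and a "quadratic" cost model to see
   where the higher-complexity one actually becomes slower (around n = 1e18). */
#include <stdio.h>

int main(void) {
    double sizes[] = {1e3, 1e9, 1e18, 1e20};
    for (int i = 0; i < 4; i++) {
        double n = sizes[i];
        double linear    = 1e9  * n;      /* 1000000000 * n    */
        double quadratic = 1e-9 * n * n;  /* 0.000000001 * n^2 */
        printf("n=%.0e  linear=%.3e  quadratic=%.3e\n", n, linear, quadratic);
    }
    return 0;
}
```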
No, running time and algorithmic complexity do not have a simple relationship.
Estimating or comparing run times can easily get very complicated and detailed. There are many variables that vary even with the same program and input data - that's why benchmarks do multiple runs and process them statistically.
If you're looking for big differences, generally the two most significant factors are algorithmic complexity ("big O()") and start up time. Frequently, the lower "big O()" algorithm requires more complex startup; that is, it takes more initial setup in the program before entering the actual loop. If it takes longer to do that initial setup than run the rest of the algorithm for small data sets, the larger O() rated algorithm will run faster for those small data sets. For large data sets, the lower O() algorithm will be faster. There will be a data set size where the total time is equal, called the "crossover" size.
For performance, you'd want to check if most of your data was above or below that crossover as part of picking the algorithm to implement.
Getting more and more detail and accuracy in runtime predictions gets much more complex very quickly.

What is the running time of comb sort?

It seems to me that comb sort should run in sub-quadratic time, just like shell sort, because comb sort is related to bubble sort in the same way that shell sort is related to insertion sort: shell sort sorts the array by applying insertion sort over a gap sequence, and comb sort sorts the array by applying bubble sort over a gap sequence. So what is the running time of comb sort?
(This question has been unanswered for a while, so I'm converting my comment into an answer.)
Although there are similarities between shell sort and comb sort, the average-case runtime of comb sort is O(n^2). Proving this is a bit tricky, and the technique that I've seen used to prove it is the incompressibility method, an information-theoretic technique involving Kolmogorov complexity.
Hope this helps!
With what sequence of increments?
If the increments are chosen to be the set of all numbers of the form 2^p * 3^q that are less than N, then yes, the running time is better than quadratic: it's proportional to N times the square of the logarithm of N. With that set of increments, Combsort performs exactly the same exchanges as a Shellsort using the same increments (the "Pratt sequence"). But that's not what people usually have in mind when they're talking about Combsort.
In theory...
With increments that are decreasing geometrically (e.g. on each pass over the input the increment is, say, about 80% of the previous increment), which is what people usually mean when they talk about Combsort... yes, asymptotically, it is quadratic in both the worst-case and the average case. But...
In practice...
So long as the increments are relatively prime and the ratio between one increment and the next is sensible (80% is fine), n has to be astronomically large before the average running time is much more than n log(n). I've sorted hundreds of millions of records at a time with Combsort, and I've only ever seen quadratic running times when I've deliberately engineered them by constructing "killer inputs". In practice, with relatively prime increments (and a ratio between adjacent increments of 1.25:1), even for millions of records, Combsort requires, on average, about 3 times as many comparisons as a mergesort and typically takes between 2 and 3 times as long to run.
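For reference, here is a minimal sketch of the geometric-gap Combsort being discussed; it uses the commonly quoted shrink factor of 1.3 rather than the 1.25 ratio and relatively-prime tweaks mentioned above, and it's illustrative rather than tuned code:

```c
/* Comb sort with a geometric gap sequence (each gap is roughly 77% of the
   previous one). This is the "in practice" variant, not the Pratt-sequence
   variant with its provable sub-quadratic bound. */
#include <stdio.h>

static void comb_sort(double a[], int n) {
    int gap = n;
    int swapped = 1;
    while (gap > 1 || swapped) {
        gap = (int)(gap / 1.3);           /* shrink the gap geometrically */
        if (gap < 1) gap = 1;
        swapped = 0;
        for (int i = 0; i + gap < n; i++) {
            if (a[i] > a[i + gap]) {      /* bubble-sort-style compare/swap */
                double tmp = a[i];
                a[i] = a[i + gap];
                a[i + gap] = tmp;
                swapped = 1;
            }
        }
    }
}

int main(void) {
    double a[] = {5.5, 1.25, 9.0, -3.0, 2.75, 0.5};
    int n = (int)(sizeof a / sizeof a[0]);
    comb_sort(a, n);
    for (int i = 0; i < n; i++) printf("%g ", a[i]);
    printf("\n");
    return 0;
}
```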

Parallel algorithms O(log p)

First off, this isn't for any homework question; it's just about a general type of algorithm. In a parallel computing course I'm taking, I'm having trouble wrapping my head around a style of algorithm that has runtime O(something + ... log p). For example, we've looked at sequence reduction algorithms that are O(n/p + log p), where p = number of processors and n is the problem size. Log base 2.
The problem I have is the idea of log(p). For one, I'm used to seeing log(n) everywhere, from reducing problems to two subproblems of size n/2 and so on. The second is just the idea of the step complexity of an algorithm being log(p), because that would imply that, for a problem of fixed size, increasing the number of processors increases the number of steps in the algorithm. I have always thought of the step complexity of an algorithm as its inherent sequential aspect, so increasing or decreasing the number of processors shouldn't have any effect on it. Is this a bad way to think of it?
I guess what would be helpful is some pseudocode of algorithms that have log(p) running time somewhere in them.
Consider computing the sum of n numbers. Each processor can be assigned n/p numbers, but how do you add up the results from the individual processors? You could pass all p partial results to one processor, for a runtime of O(n/p + p), but you can combine the sums faster in a tree-like fashion, which takes only O(log p) combining steps.
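Here's a small illustrative sketch (plain serial C with hypothetical values, simulating what the processors would do in lockstep) of that tree-style combination; the point is that the number of combining rounds is about log2(p):

```c
/* Tree-style combination of p partial sums: each round halves the number of
   active partial results, so the combining phase takes ~log2(p) rounds. */
#include <stdio.h>

int main(void) {
    double partial[8] = {1, 2, 3, 4, 5, 6, 7, 8}; /* p = 8 partial sums */
    int p = 8;

    /* In a real parallel program each iteration of the inner loop would run
       on a different processor; here we just show the round structure. */
    for (int stride = 1; stride < p; stride *= 2) {         /* log2(p) rounds  */
        for (int i = 0; i + stride < p; i += 2 * stride) {  /* pairwise merges */
            partial[i] += partial[i + stride];
        }
    }
    printf("total = %g\n", partial[0]); /* 36 */
    return 0;
}
```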
I think O(n/p + log(p)) does make sense: for a fixed problem size, n/p + log(p) decreases as you add processors (at least until p gets very large), so the running time goes down as you add processors and the bound behaves the way you'd expect. A running time of just log(p) on its own wouldn't be natural, because it only gets worse as the number of processors grows.

how to sort an array in C in one cycle?

Is there any algorithm to sort an array of floating-point numbers in one cycle?
If you mean one pass, then no. Comparison-based sorting generally requires O(N log N) comparisons, while a single pass implies O(N).
Radix sort takes O(N*k) with average key-length k. Even though it's linear time, it requires multiple passes. It is also not usually suitable for sorting floats.
Take a laptop with a quicksort program on it. Then put the laptop on a unicycle. Tada! Sorting in one cycle.
Check out counting sort.
It runs in O(N + M) time, where N is the input array size and M is the range of key values (the size of the counting array).
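Here's a minimal counting sort sketch for small non-negative integer keys (note that it doesn't apply directly to the floats in the question); N is the number of elements and M the range of key values, giving O(N + M):

```c
/* Counting sort for integer keys in 0..RANGE-1: O(N + M) time, O(M) extra space. */
#include <stdio.h>
#include <string.h>

#define RANGE 10  /* M: keys are assumed to lie in 0..RANGE-1 */

static void counting_sort(int a[], int n) {
    int count[RANGE];
    memset(count, 0, sizeof count);
    for (int i = 0; i < n; i++) count[a[i]]++;      /* O(N): tally each key      */
    int k = 0;
    for (int v = 0; v < RANGE; v++)                 /* O(N + M): rewrite in order */
        while (count[v]-- > 0) a[k++] = v;
}

int main(void) {
    int a[] = {4, 1, 9, 1, 0, 7, 4, 3};
    int n = (int)(sizeof a / sizeof a[0]);
    counting_sort(a, n);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}
```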
There are some sorting algorithms that are O(n) in the best-case. See here.
No, there is no such algorithm that is O(n). It might be possible using as many parallel processors as there are elements in your array, or using quantum computers, but if you want O(n) right now on a regular computer, you can forget about it.
No.
Sorting algorithm "in one cycle" - I believe you mean algorithm with linear complexity. In addition to these answers you can also check Bucket Sort Algorithm. It has average performance O(n+k).
