Overall complexity with multiple operations? - arrays

If I have an unsorted array, and I first sort it using quicksort with a run time of O(n log n), and then search for an element using binary search, with a run time of O(log n), what would be the overall run time of both operations? Would the individual run times be added?

It will be O(n log n), because O(n log n + log n) = O(n log n).
So yes, you sum them, but in this case it doesn't matter:
"If the function f can be written as a finite sum of other functions, then the fastest growing one determines the order of f(n)." (Wikipedia)
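To make the arithmetic concrete, here is a minimal Python sketch of the two steps (Python's built-in sort is Timsort rather than quicksort, but it is also O(n log n); the function name is illustrative):

```python
from bisect import bisect_left

def sort_then_search(arr, target):
    arr.sort()                    # O(n log n) -- this term dominates
    i = bisect_left(arr, target)  # O(log n) binary search
    return i if i < len(arr) and arr[i] == target else -1

print(sort_then_search([5, 2, 9, 1, 7], 7))  # index of 7 in the sorted array: 3
```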

Related

Improving time complexity of finding triplets

The program in question is to find the number of triplets in an array/list with a given sum.
My approach has been to first sort the array and then use the two-pointer technique to find such triplets. The overall time complexity turns out to be O(n^2).
Is there any way I can further improve the time complexity?
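For reference, a sketch of the sorted two-pointer approach described above (the function name and the duplicate-handling details are one possible formulation, not taken from the question):

```python
def count_triplets(nums, target):
    """Count triplets (i < j < k) with nums[i] + nums[j] + nums[k] == target.
    Sort: O(n log n); outer loop with two pointers: O(n^2) overall."""
    nums = sorted(nums)
    n, count = len(nums), 0
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = nums[i] + nums[lo] + nums[hi]
            if s < target:
                lo += 1
            elif s > target:
                hi -= 1
            else:
                if nums[lo] == nums[hi]:
                    # everything between lo and hi is equal: choose any pair
                    k = hi - lo + 1
                    count += k * (k - 1) // 2
                    break
                # count runs of equal values at both ends, then skip them
                l_run = h_run = 1
                while nums[lo + l_run] == nums[lo]:
                    l_run += 1
                while nums[hi - h_run] == nums[hi]:
                    h_run += 1
                count += l_run * h_run
                lo += l_run
                hi -= h_run
    return count
```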

Additivity of time complexity?

I'm currently working on a project for school in which I have a time complexity restriction. I'm still beginning to learn to program, so apologies if this question is stupid.
Essentially, let's say I have a constraint that my program must satisfy the time complexity O(n^2 + mlogm), where n and m are some sizes of input. Does that mean that's the maximum time complexity any specific component of my program can run in? That is, if I have a ton of, say, O(n^2 + m) functions, but the most complex function runs in O(n^2 + mlogm), will I still satisfy this time bound? For example, suppose my program runs the following functions in order:
functionA
functionB
functionC
functionD
And functions A, B, and C are all O(n^2 + m) or something less than the time complexity restraint, but the last function is O(n^2 + mlogm). Will my program be within the time constraint? Would the overall complexity not somehow be O((n^2 + m) + (n^2 + m) + (n^2 + m) + (n^2 + mlogm))?
I'm still learning about time complexity, so any help for this would be much appreciated. Thanks a bunch.
If you're executing different functions in sequence, the overall complexity is the sum of their complexities, and that sum simplifies to the worst one.
For example, if fnA was O(n) and fnB was O(n·n!), then any effect of fnA would be totally swamped by fnB, assuming they're run sequentially (not nested).
In your case, the final function meets the requirement and the first three are better, so the overall complexity is that of the final one: O(3(n^2 + m) + n^2 + mlogm) = O(n^2 + mlogm).
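A toy illustration of why the sum collapses to the worst term (the function names and step counts are made up to mirror the complexities in the question, not real code from it):

```python
import math

# Each stand-in returns a "step count" proportional to its complexity.
def function_a(n, m): return n * n + m                       # O(n^2 + m)
def function_b(n, m): return n * n + m                       # O(n^2 + m)
def function_c(n, m): return n * n + m                       # O(n^2 + m)
def function_d(n, m): return n * n + int(m * math.log2(m))   # O(n^2 + m log m)

def total_steps(n, m):
    # Sequential execution: total work is the SUM of the stages,
    # and 3(n^2 + m) + n^2 + m log m = O(n^2 + m log m).
    return sum(f(n, m) for f in (function_a, function_b, function_c, function_d))
```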

Should time complexity of Juggling algorithm be O(n)?

The time complexity of the Juggling algorithm for array rotation (suppose the array is rotated by 'd' positions) is computed as O(n), where 'n' is the size of the array. But for any number of rotations (i.e. for any value of 'd'), the algorithm runs for exactly n steps. So shouldn't the time complexity of the algorithm be "Theta(n)" instead of O(n)? It always loops n times in any case. If not, can anyone provide a test case where it doesn't run for exactly 'n' steps?
Saying that f is in Θ(n) is the same thing as saying that it's in both O(n) and Ω(n). Colloquially, O(·) is often used when Θ(·) would be more precise. But a function in Θ(n) is definitely also in O(n).
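A sketch of the juggling rotation with a write counter makes the Θ(n) claim concrete: every element is written exactly once, whatever 'd' is (the counter is added here for illustration):

```python
from math import gcd

def rotate_left(arr, d):
    """Juggling rotation of arr left by d positions, in place.
    Returns the number of element writes, which is always n when d % n != 0."""
    n = len(arr)
    d %= n
    if d == 0:
        return 0
    writes = 0
    # gcd(n, d) independent cycles together cover every index exactly once
    for start in range(gcd(n, d)):
        temp = arr[start]
        j = start
        while True:
            k = (j + d) % n
            if k == start:
                break
            arr[j] = arr[k]
            j = k
            writes += 1
        arr[j] = temp
        writes += 1
    return writes

a = list(range(7))
w = rotate_left(a, 3)
# a == [3, 4, 5, 6, 0, 1, 2]; w == 7, i.e. exactly n writes
```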

What is the running time of comb sort?

It seems to me that comb sort should also run in sub-quadratic time, just like shell sort. This is because comb sort is to bubble sort what shell sort is to insertion sort: shell sort sorts the array according to gap sequences applying insertion sort, and similarly comb sort sorts the array according to gap sequences applying bubble sort. So what is the running time of comb sort?
(This question has been unanswered for a while, so I'm converting my comment into an answer.)
Although there are similarities between shell sort and comb sort, the average-case runtime of comb sort is O(n^2). Proving this is a bit tricky, and the technique that I've seen used to prove it is the incompressibility method, an information-theoretic technique involving Kolmogorov complexity.
Hope this helps!
With what sequence of increments?
If the increments are chosen to be: the set of all numbers of the form (2^p * 3^q), that are less than N, then, yes, the running time is better than quadratic (it's proportional to N times the square of the logarithm of N). With that set of increments, Combsort performs exactly the same exchanges as a Shellsort using the same increments (the "Pratt sequence"). But that's not what people usually have in mind when they're talking about Combsort.
In theory...
With increments that are decreasing geometrically (e.g. on each pass over the input the increment is, say, about 80% of the previous increment), which is what people usually mean when they talk about Combsort... yes, asymptotically, it is quadratic in both the worst-case and the average case. But...
In practice...
So long as the increments are relatively prime and the ratio between one increment and the next is sensible (80% is fine), n has to be astronomically large before the average running time will be much more than n log n. I've sorted hundreds of millions of records at a time with Combsort, and I've only ever seen quadratic running times when I've deliberately engineered them by constructing "killer inputs". In practice, with relatively prime increments (and a ratio between adjacent increments of 1.25:1), even for millions of records, Combsort requires on average about 3 times as many comparisons as a mergesort and typically takes between 2 and 3 times as long to run.
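For concreteness, here is a standard comb sort sketch with the usual geometric shrink factor of about 1.3 (roughly the 80% ratio discussed above):

```python
def comb_sort(arr, shrink=1.3):
    """Comb sort: bubble sort passes over a geometrically shrinking gap.
    Quadratic in the worst case asymptotically, but usually close to
    n log n behavior in practice with a sensible shrink factor."""
    n = len(arr)
    gap = n
    swapped = True
    while gap > 1 or swapped:
        gap = max(1, int(gap / shrink))  # shrink the gap each pass
        swapped = False
        for i in range(n - gap):
            if arr[i] > arr[i + gap]:
                arr[i], arr[i + gap] = arr[i + gap], arr[i]
                swapped = True
    return arr
```

Once the gap reaches 1 it degenerates into plain bubble sort, which is why the `swapped` flag is still needed to finish the job.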

how to sort an array in C in one cycle?

Is there any algorithm to sort an array of float numbers in one cycle?
If you mean one pass, then no. Comparison sorting generally requires O(N log N); a single pass implies O(N).
Radix sort takes O(N*k) with average key-length k. Even though it's linear time, it requires multiple passes. It is also not usually suitable for sorting floats.
Take a laptop with a quicksort program on it. Then put the laptop on a unicycle. Tada! Sorting in one cycle.
check Counting sort
it runs in O(N + M) time, where N is the input array size and M is the size of the counting array (the range of key values)
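A minimal counting sort sketch for non-negative integer keys (the integer-key requirement is also why it doesn't directly apply to the floats in the question):

```python
def counting_sort(arr):
    """Counting sort for small non-negative integers.
    O(N + M) time, where N = len(arr) and M = max value + 1."""
    if not arr:
        return []
    m = max(arr) + 1
    counts = [0] * m
    for x in arr:              # O(N): tally each value
        counts[x] += 1
    out = []
    for v in range(m):         # O(M): emit values in sorted order
        out.extend([v] * counts[v])
    return out
```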
There are some sorting algorithms that are O(n) in the best-case. See here.
No, there is no algorithm that is O(n). Possibly using as many parallel computers as there are elements in your array or using quantum computers, but if you want O(n) now on a regular computer, you can forget about it.
No.
Sorting algorithm "in one cycle" - I believe you mean an algorithm with linear complexity. In addition to these answers, you can also check the bucket sort algorithm. It has average performance O(n + k).
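A sketch of bucket sort for floats in [0, 1) (the bucket count is an arbitrary illustrative choice); the average O(n + k) bound assumes roughly uniform input, so that each bucket stays small:

```python
def bucket_sort(values, num_buckets=10):
    """Bucket sort for floats in [0, 1): scatter into buckets, then
    sort each small bucket. Average O(n + k) for roughly uniform input."""
    buckets = [[] for _ in range(num_buckets)]
    for v in values:
        buckets[int(v * num_buckets)].append(v)  # scatter pass: O(n)
    out = []
    for b in buckets:
        out.extend(sorted(b))  # small buckets make per-bucket sorts cheap
    return out
```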
