I am doing a question on www.hackerrank.com and I have been stuck on it for days.
Here is the statement of the question: https://www.hackerrank.com/challenges/insertion-sort. Basically, I have to count how many swaps occur in insertion sort for a given array, in O(n log n) time.
Here is my submitted code: http://paste.ubuntu.com/12637144/. I use merge sort and count how many times each element is displaced. This code passes more than half of the site's tests. When it fails, it doesn't time out, and it doesn't have a compilation error or a segmentation fault.
Furthermore, I took the input for one of the failed test cases (here is the input it failed on the site: http://paste.ubuntu.com/12637165/) and ran it through this variation of my code, http://paste.ubuntu.com/12637127/, which actually performs insertion sort, counts the swaps along the way, and checks that count against the merge sort count; it passes. I have also generated thousands of random test cases, and they all pass with this check as well.
I don't think it's a problem on the site's end, because in the discussions for the problem other people seem to be passing the tests just fine, without any questions or complaints. So maybe I am misunderstanding the question, or I am writing both the algorithm and my test cases for it incorrectly. Does anyone have any suggestions?
If n can be up to 100000, then the number of inversions can be roughly n^2 / 2, about 5 * 10^9, which won't fit in a 32-bit integer. Try using a 64-bit integer for the count and for the return value of mergeSort.
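For reference, counting inversions with merge sort and a 64-bit count looks roughly like this (a generic sketch, not the poster's paste; the names mergeCount and countSwaps are just illustrative):

    #include <vector>
    using namespace std;

    // Sorts a[lo, hi) while counting how many inversions (insertion sort swaps)
    // it removes. The count is a long long so the n = 100000 worst case cannot overflow it.
    long long mergeCount(vector<int>& a, vector<int>& tmp, int lo, int hi) {
        if (hi - lo <= 1) return 0;
        int mid = lo + (hi - lo) / 2;
        long long count = mergeCount(a, tmp, lo, mid) + mergeCount(a, tmp, mid, hi);
        int i = lo, j = mid, k = lo;
        while (i < mid && j < hi) {
            if (a[j] < a[i]) {
                count += mid - i;          // a[j] is inverted with everything left in the first half
                tmp[k++] = a[j++];
            } else {
                tmp[k++] = a[i++];
            }
        }
        while (i < mid) tmp[k++] = a[i++];
        while (j < hi)  tmp[k++] = a[j++];
        for (int x = lo; x < hi; ++x) a[x] = tmp[x];
        return count;
    }

    long long countSwaps(vector<int> a) {   // by value: the caller's array is untouched
        vector<int> tmp(a.size());
        return mergeCount(a, tmp, 0, (int)a.size());
    }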
First of all, let me talk about the motivation for this question. Suppose you have to find both the minimum and the maximum value in an array. You have two ways of doing so.
The first one consists of iterating over the array to find the maximum value, then doing the same thing again to find the minimum value. This solution is O(2n).
The second one consists of iterating over the array just once and finding both the minimum and the maximum value at the same time. This solution is O(n).
Even though the time complexity has been halved, each iteration of the O(n) solution now executes twice as many instructions (ignoring how the compiler might optimize them), so I believe the two should take about the same amount of time to execute.
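For concreteness, the one-pass version looks something like this (just a sketch; the function name is made up):

    #include <utility>
    #include <vector>
    using namespace std;

    // One pass over the array, two comparisons per element (assumes a non-empty array).
    pair<int, int> minMaxOnePass(const vector<int>& a) {
        int mn = a[0], mx = a[0];
        for (int x : a) {
            if (x < mn) mn = x;
            if (x > mx) mx = x;
        }
        return {mn, mx};
    }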
Let me give you a second example. Now you need to reverse an array. Again, you have two ways of doing so.
The first one is to create a second array and iterate over the data array, filling the new array in reverse order. This solution is O(n).
The second one is to iterate over the data array in place, swapping the 0th and (n-1)th elements, then the 1st and (n-2)th elements, and so on (using the XOR swap trick) until you reach the middle of the array. This solution is O((1/2)n).
Again, even though the time complexity has been cut in half, you have three times as many instructions per iteration. You iterate over (1/2)n elements, but each iteration performs three XOR instructions. If you used an auxiliary variable instead of XOR, you would still need two extra instructions to perform the swap, so now I believe that O((1/2)n) should actually be worse than O(n).
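Again for concreteness, the in-place version looks roughly like this (a sketch using std::swap; the XOR variant only changes the body of the swap):

    #include <algorithm>
    #include <vector>
    using namespace std;

    // (1/2)n iterations, but each one performs a full swap
    // (three moves with a temporary, or three XORs with the XOR trick).
    void reverseInPlace(vector<int>& a) {
        for (size_t i = 0, j = a.size(); i + 1 < j; ++i, --j) {
            swap(a[i], a[j - 1]);
        }
    }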
Having said these things, my question is the following:
Ignoring space complexity, garbage collection, and possible compiler optimizations: given an O(c1*n) algorithm and an O(c2*n) algorithm with c1 > c2, can I be sure that the O(c1*n) algorithm is as fast as or faster than the O(c2*n) one?
This question matters because it can make a difference in how I write code from here on. If the "more complex" (c1) way is as fast as the "less complex" (c2) way but more readable, I'm sticking with the "more complex" one.
c1 > c2, can I be sure that the algorithm that gives me O(c1n) is as fast as or faster than the one that gives me O(c2n)?
The whole issue lies in the words "fast" and "faster". Computational complexity doesn't strictly measure what we intuitively understand as "fast". Without going into mathematical details (although reading them is a good idea: https://en.wikipedia.org/wiki/Big_O_notation), it answers the question "how much slower does it get as my input grows". So with O(n^2) complexity you can roughly expect that doubling the size of the input will make your algorithm take 4 times as long, whereas for linear complexity a 2 times bigger input only doubles the time. As you can see, it's relative, so any constants cancel out.
To sum up: from the way you ask your question, it doesn't seem the big-O notation is the correct tool here.
By definition, if c1 and c2 are constants, O(c1*n) === O(c2*n) === O(n). That is, the number of operations per element of your array of length n is completely irrelevant in this kind of complexity analysis.
All that it will tell you is that "it's linear". That is, if you have 1 bazillion operations for an array of length n, then you'll have 2 bazillion operations for an array of length 2*n (plus or minus something that grows slower than linear).
Having O(c1n) and O(c2n) algorithms such that c1 > c2, can I be sure that the algorithm that gives me O(c1n) is as fast as or faster than the one that gives me O(c2n)?
Nope, not at all.
First, because the constants there are meaningless in that analysis. There's no other way to put it: whatever restrictions you put on c1 and c2 are absolutely irrelevant to big-O analysis. The whole idea is that it discards those constants.
Second, because they don't tell you anything that would enable you to compare the two algorithms' runtimes for a specific value of n.
Such complexity analysis only enables you to compare the asymptotic behavior of algorithms. Real-world problems in general don't care about where the asymptotes are.
Assume that A1(n) is the number of operations Algorithm 1 needs for an input of length n, and A2(n) is the same for Algorithm 2. You could have:
A1(n) = 10n + 900
A2(n) = 100n
The complexity of both is O(A1) = O(A2) = O(n). For small inputs, A2 is faster. For large inputs, A1 is faster. The point where they change is n == 10.
This question matters because it can make a difference in how I write code from here on. If the "more complex" (c1) way is as fast as the "less complex" (c2) way but more readable, I'm sticking with the "more complex" one.
Not only that, but also there's the fact that when you have 2 different algorithms that are really of different complexity classes (e.g., linear vs quadratic), it might still make sense to use the one of higher complexity as it may still be faster.
For example:
A3(n) = n^2
A4(n) = n + 10^20.
That is, Algorithm 3 is quadratic, while Algorithm 4 is linear but has a huge constant initialization time.
For inputs of size of up to around n == 10^10, it will be faster to use the quadratic algorithm.
It may very well be the case that all relevant inputs for your specific problem fall within that range, meaning that the quadratic algorithm would be the better, faster choice.
The bottom line is: for analyzing the actual time it will take to run an algorithm on a given input (or a given bounded range of inputs, as nearly all real-world problems are) and compare it with another algorithm, big-O analysis is meaningless.
Another way to put it: you're asking a practical "engineering" question (i.e., which option is better / faster) but trying to answer the question with a tool that's only useful for "theoretical" analysis. That tool is important, yes. But it has no chance of giving you the answer you're looking for, by design.
By definition, time complexity ignores constants. So O((1/2)n) == O(n) == O(2n) == O(cn).
Your example of O((1/2)n) shows why this is the case, because the constants can measure units of anything, so comparing them is meaningless.
You can never tell exactly which algorithm is faster based only on the time complexity; at best, you can tell which one will eventually be faster as n approaches infinity, and only when their complexities differ. Since constants are removed from the time complexity, O(c1n) and O(c2n) are considered equal, and therefore you still cannot tell which one is faster, even as n approaches infinity.
(my theoretical computer science courses were a couple of decades ago)
O(cn) is O(n).
It's still a linear search over the array.
I got the following question:
Consider an array of distinct numbers sorted in increasing order. The array has been rotated (clockwise) k number of times. Given such an array, find the value of k.
I understand it's a well-known question which has been asked (and that there are some answers in the web for that), but my question is not about the solution to this problem.
I had this problem on a test, and I said that a solution could be to find the first number in the array that is smaller than its predecessor. The way I chose to do it is to make increasing jumps, by powers of 2, from the start of the array. I will demonstrate:
assume k=29
Then I check places 2, 4, 8, 16, 32. The value in place 32 is smaller, but it is not the first such value. So I jump again, from place 17 (jumping as described above): 17, 19, 23, 31. The value in place 31 is smaller, but again not the first.
Next stage, check places 24, 26, 30. This time the value in place 30 is smaller and it is the first such value, so return 30 - 1 = 29.
(I could write this out as an algorithm, and a rough sketch follows below, but I think the demonstration makes the thinking clear.)
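Here is that sketch (0-indexed, so it returns k directly; the helper name findK and the exact restart rule are one possible reading of the demonstration above):

    #include <vector>
    using namespace std;

    // Returns k, the index of the minimum element of a rotated, strictly
    // increasing array, by jumping 1, 2, 4, 8, ... places from the current start.
    int findK(const vector<int>& a) {
        int n = (int)a.size();
        if (n == 1 || a[0] < a[n - 1]) return 0;           // not rotated at all
        int start = 0;                                      // a[start] always stays in the "large" front part
        while (true) {
            if (a[start + 1] < a[start]) return start + 1;  // the drop is right here
            int lastGood = start, step = 1;
            while (start + step < n && a[start + step] >= a[start]) {
                lastGood = start + step;                    // still before the drop
                step *= 2;
            }
            start = lastGood;                               // the drop lies somewhere after lastGood; jump again from there
        }
    }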
My thinking is that this will work recursively and will certainly give me the answer.
My lecturer said to me that I was wrong.
So my question is: am I really wrong?
Secondly, I struggled to find the running time of the algorithm. I know the first pass is at most O(log k) and the later passes (on the sub-arrays) are shorter, but what is their sum in the worst case? To tell the truth, I thought at first that it would still be O(log k), but now I am not so sure.
Thank you.
If there are any, it would be my great pleasure if anyone could direct me to them, preferably with a computer program that works for that purpose.
I'm actually referring to a polynomial-time algorithm that only tests (without producing the actual partition) whether a set of integers can be partitioned into two subsets of equal sum: if yes, the program returns true, and if no, it returns false.
It is NP-hard (and NP-complete as well).
What I mean is that we have not been able to find a polynomial-time algorithm, and we have not proved that one doesn't exist. It is widely believed that no polynomial-time algorithm exists, since people have tried hard for years. There are many such NP-complete problems for which we have neither proved nor disproved the existence of a polynomial-time algorithm.
But it turns out that if you prove or disprove the existence of a polynomial-time algorithm for one such problem, the same result applies to every problem in the NP-complete class.
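That said, if the numbers involved are not too large, there is the standard pseudo-polynomial dynamic program (subset sum up to half of the total). It runs in O(n * sum) time, which is polynomial in the numeric value of the input but not in its size in bits, so it does not contradict the NP-completeness above. A rough sketch, assuming non-negative integers:

    #include <numeric>
    #include <vector>
    using namespace std;

    // Returns true if nums can be split into two subsets of equal sum.
    bool canPartition(const vector<int>& nums) {
        long long total = accumulate(nums.begin(), nums.end(), 0LL);
        if (total % 2 != 0) return false;                  // an odd total can never be split evenly
        long long target = total / 2;
        vector<char> reachable(target + 1, 0);
        reachable[0] = 1;                                  // the empty subset sums to 0
        for (int x : nums) {
            for (long long s = target; s >= x; --s) {      // go downwards so each number is used at most once
                if (reachable[s - x]) reachable[s] = 1;
            }
        }
        return reachable[target] != 0;
    }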
I came across this video, which discusses how most recursive functions can be written with for loops, but when I thought about it, I couldn't see the logical difference between the two. I found this topic here, but it only focuses on the practical difference, as do many other similar topics on the web. So what is the logical difference in the way a loop and a recursion are handled?
Bottom line up front -- recursion is more versatile but in practice is generally less efficient than looping.
A loop could in principle always be implemented as a recursion if you wished to do so. In practice the limits of stack resources put serious constraints on the size of the problems you can address. I can and have built loops that iterate a billion times, something I'd never try with recursion unless I was certain the compiler could and would convert the recursion into a loop. Because of the stack limits and efficiency, people often try to find a looping equivalent for recursions.
Tail recursions can always be converted to loops. However, there are recursions that can't be converted into a fixed set of nested loops. As an example, I work with statistical design of experiments. Sometimes a large design is constructed by "crossing" several smaller sub-designs. Crossing is where you concatenate every row of a second design to each row of the first. For two sub-designs, all this needs is simple nested looping, but for three or more designs you need to increase the level of nesting, adding one level of nesting for each additional sub-design. So while this is nested looping in principle, in practice the amount of nesting is variable. If you tried to implement it with looping you'd have to revise your program to add or remove nested loops every time you were dealing with a different number of sub-designs to be crossed, so you can't write a fixed loop-based version. This can easily be implemented with recursion. In this case, I'm happy to trade a slight amount of efficiency, because I wrote and debugged the code 6 years ago and haven't had to revise it since, despite creating lots of crossed designs of varying complexity since then.
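Roughly, the recursive crossing looks like this (an illustrative sketch with made-up types and names, not my actual code):

    #include <vector>
    using namespace std;

    using Row = vector<int>;
    using Design = vector<Row>;

    // Cross an arbitrary number of sub-designs: concatenate one row from each
    // sub-design, in every combination. The recursion depth stands in for the
    // variable number of nested loops you would otherwise have to hard-code.
    void crossFrom(const vector<Design>& designs, size_t idx,
                   const Row& partial, Design& out) {
        if (idx == designs.size()) {            // one fully crossed row is complete
            out.push_back(partial);
            return;
        }
        for (const Row& r : designs[idx]) {     // one "loop level" per sub-design
            Row next = partial;
            next.insert(next.end(), r.begin(), r.end());
            crossFrom(designs, idx + 1, next, out);
        }
    }

    Design cross(const vector<Design>& designs) {
        Design out;
        crossFrom(designs, 0, Row{}, out);
        return out;
    }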
One way to think through this is that the choice for recursion or iteration depends on how you think about the problem being solved. Certain "ways of thinking" lead more naturally to recursive solutions, and other ways of thinking lead to more iterative solutions. For any problem, you can in principle think in a way that gives you a recursive solution or a way that gives you an iterative solution. (Sometimes the iterative solution will just end up simulating a recursion stack, but there is no actual recursion there.)
Here's an example. You have an array of integers (positive or negative), and you want to find the maximum segment sum. A segment is a piece of the array that is contiguous. So in the array [3, -4, 2, 1, -2, 4], the maximum segment sum is 5, and you get that from the segment [2, 1, -2, 4]; its sum is 5.
OK - so how might we solve this problem? One thing you might do is reason like this: "if I knew the maximum segment sum in the left half, and the maximum segment sum in the right half, then maybe I could somehow jam those together and figure out the maximum segment sum overall". This idea would require you to find the maximum segment sum on the two subhalves, and this is a smaller instance of the original problem. This is recursion, and a direct translation of this idea into code would therefore be recursive.
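A sketch of that divide-and-conquer idea might look like this (one possible coding, not the only one; it assumes a non-empty array and non-empty segments):

    #include <algorithm>
    #include <climits>
    #include <vector>
    using namespace std;

    // Maximum segment sum over a[lo, hi): best of the left half, the right half,
    // and the best segment that straddles the midpoint.
    int maxSegmentSum(const vector<int>& a, int lo, int hi) {
        if (hi - lo == 1) return a[lo];
        int mid = lo + (hi - lo) / 2;
        int best = max(maxSegmentSum(a, lo, mid), maxSegmentSum(a, mid, hi));
        int leftBest = INT_MIN, sum = 0;        // best suffix of the left half
        for (int i = mid - 1; i >= lo; --i) { sum += a[i]; leftBest = max(leftBest, sum); }
        int rightBest = INT_MIN; sum = 0;       // best prefix of the right half
        for (int i = mid; i < hi; ++i)      { sum += a[i]; rightBest = max(rightBest, sum); }
        return max(best, leftBest + rightBest);
    }

Calling maxSegmentSum(a, 0, a.size()) on [3, -4, 2, 1, -2, 4] returns 5, matching the example above.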
But the maximum segment sum problem isn't "recursive" or "iterative" -- it can be both, depending on how you think about the solution. I gave a recursive thought process above. Here is an iterative process: "well, if I add up the elements in each of the segments that start at some index i and end at some index j, I can just take the maximum of these to solve the problem". And directly trying to code this approach would give you triply nested loops (and a bad mark on an assignment because it's horribly inefficient!).
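And the direct, triply nested translation of that iterative idea (correct, but O(n^3)):

    #include <algorithm>
    #include <climits>
    #include <vector>
    using namespace std;

    // Try every segment [i, j] and add it up from scratch each time.
    int maxSegmentSumBrute(const vector<int>& a) {
        int best = INT_MIN;
        for (size_t i = 0; i < a.size(); ++i)
            for (size_t j = i; j < a.size(); ++j) {
                int sum = 0;
                for (size_t k = i; k <= j; ++k) sum += a[k];
                best = max(best, sum);
            }
        return best;
    }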
So, the same problem, depending on how the problem is conceptualized, can lead to a recursive or iterative solution. Now, I happened to choose a problem where there are many ways of solving it, and where there are reasonable recursive and iterative solutions. Some problems, however, admit only one type of solution, and that solution may be most naturally implemented using recursion or iteration. For example, if I asked you to write a function that keeps asking the user to enter a letter until they enter y or n, you might start thinking: "keep repeating the prompt and asking for input..." and before you know it you have some iterative code. Perhaps you might instead think recursively: "if the user enters y or n, I am done; otherwise ask the user for y or n"... in which case you'd generate a recursive algorithm. But the recursion here doesn't give you much: it unnecessarily uses a stack and doesn't make the program any faster. (Recursion sometimes makes it easier to prove correctness, in which case you might present something recursively even though you could alternately give a reasonable iterative solution.)
It seems to me that comb sort should also run in sub-quadratic time, just like shell sort, because comb sort is related to bubble sort in the same way that shell sort is related to insertion sort. Shell sort sorts the array over a gap sequence using insertion sort, and similarly comb sort sorts the array over a gap sequence using bubble sort. So what is the running time of comb sort?
(This question has been unanswered for a while, so I'm converting my comment into an answer.)
Although there are similarities between shell sort and comb sort, the average-case runtime of comb sort is O(n^2). Proving this is a bit tricky, and the technique that I've seen used to prove it is the incompressibility method, an information-theoretic technique involving Kolmogorov complexity.
Hope this helps!
With what sequence of increments?
If the increments are chosen to be the set of all numbers of the form 2^p * 3^q that are less than N, then yes, the running time is better than quadratic: it's proportional to N times the square of the logarithm of N, i.e. O(N log^2 N). With that set of increments, Combsort performs exactly the same exchanges as a Shellsort using the same increments (the "Pratt sequence"). But that's not what people usually have in mind when they're talking about Combsort.
In theory...
With increments that are decreasing geometrically (e.g. on each pass over the input the increment is, say, about 80% of the previous increment), which is what people usually mean when they talk about Combsort... yes, asymptotically, it is quadratic in both the worst-case and the average case. But...
In practice...
So long as the increments are relatively prime and the ratio between one increment and the next is sensible (80% is fine), n has to be astronomically large before the average running time will be much more than n*log(n). I've sorted hundreds of millions of records at a time with Combsort, and I've only ever seen quadratic running times when I've deliberately engineered them by constructing "killer inputs". In practice, with relatively prime increments (and a ratio between adjacent increments of 1.25:1), even for millions of records, Combsort requires on average about 3 times as many comparisons as a mergesort and typically takes between 2 and 3 times as long to run.
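For reference, a bare-bones Combsort with a gap ratio of about 1.25:1 looks roughly like this (a generic sketch, not my production sorter):

    #include <algorithm>
    #include <vector>
    using namespace std;

    void combSort(vector<int>& a) {
        size_t gap = a.size();
        bool swapped = true;
        while (gap > 1 || swapped) {
            gap = max<size_t>(1, (gap * 4) / 5);     // next gap is about 80% of the previous one
            swapped = false;
            for (size_t i = 0; i + gap < a.size(); ++i) {
                if (a[i] > a[i + gap]) {             // bubble-sort style compare/exchange at distance "gap"
                    swap(a[i], a[i + gap]);
                    swapped = true;
                }
            }
        }
    }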