I'm a beginner in C programming, so I need some help analyzing the time complexity of this function.
int test(int n)
{
    if (n <= 1)
        return n;
    int i = random(n - 1);
    return test(i) + test(n - 1 - i);
}
I don't know how to deal with this problem because of the random function, which runs in O(1) and returns a random number.
Well, clearly you treat the random(n - 1) call itself as a simple (constant-time) call. Taken in isolation, that is straightforward. The interesting thing is what effect the value returned by the call has on the performance.
Hint: first consider the best-case and worst-case performance for the algorithm.
Hint: for the purposes of analysis, consider a hypothetical version of random which generates a number sequence that is the antithesis of random numbers :-)
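If you want to experiment with that last hint, here is a minimal sketch (assuming random(k) returns a value between 0 and k, and using a hypothetical fake_random helper as the "antithesis of random") that counts how many calls the recursion makes:

#include <stdio.h>

static long calls; /* counts how many times test() is entered */

/* Stand-in for random(n - 1): always returns 0, a deliberately
   adversarial "random" sequence. Swap in a real RNG to compare. */
static int fake_random(int max)
{
    (void)max;
    return 0;
}

static int test(int n)
{
    calls++;
    if (n <= 1)
        return n;
    int i = fake_random(n - 1);
    return test(i) + test(n - 1 - i);
}

int main(void)
{
    for (int n = 1; n <= 64; n *= 2) {
        calls = 0;
        test(n);
        printf("n = %2d -> %ld calls\n", n, calls);
    }
    return 0;
}

Printing the counts for a few values of n should give you a good guess at the growth rate, which you can then try to prove.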
The time complexity of the juggling algorithm for array rotation (say, by 'd' positions) is computed as O(n), where n is the size of the array. But for any number of rotations (i.e. for any value of 'd'), the algorithm runs for exactly n steps. So shouldn't the time complexity of the algorithm be Theta(n)? It always loops n times in any case. If not, can anyone provide a test case where it doesn't run n times?
It is unclear what you are asking, but if we look at https://www.geeksforgeeks.org/array-rotation/ we see that it is described as O(n) time. However, if we want to rotate zero steps, it can be done in O(1) time, so it doesn't always take n steps; i.e. Theta(n) would be wrong, but O(n) is correct.
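For reference, here is one way the juggling rotation is commonly written in C (my own sketch, not the code from the linked page), with the d % n == 0 early exit made explicit; for every other input it touches all n elements once:

#include <stdio.h>

/* Greatest common divisor; gcd(d, n) is the number of cycles the
   juggling algorithm walks through. */
static int gcd(int a, int b)
{
    while (b != 0) {
        int t = b;
        b = a % b;
        a = t;
    }
    return a;
}

/* Left-rotates arr[0..n-1] by d positions. */
static void rotate(int arr[], int n, int d)
{
    d %= n;
    if (d == 0)
        return; /* nothing to move: the O(1) case */
    int cycles = gcd(d, n);
    for (int start = 0; start < cycles; start++) {
        int temp = arr[start];
        int i = start;
        for (;;) {
            int j = (i + d) % n;
            if (j == start)
                break;
            arr[i] = arr[j];
            i = j;
        }
        arr[i] = temp;
    }
}

int main(void)
{
    int a[] = {1, 2, 3, 4, 5, 6, 7};
    rotate(a, 7, 2);
    for (int i = 0; i < 7; i++)
        printf("%d ", a[i]); /* prints: 3 4 5 6 7 1 2 */
    printf("\n");
    return 0;
}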
I am interested in which of these forms of the for-loop test expression is preferable (from the standpoint of performance and good coding practice):
for (i = 0; i < size - 1; i++) {
    /* do something */
}
or
int decreased_size = size - 1;
for (i = 0; i < decreased_size; i++) {
    /* do something */
}
Is the test expression size - 1 calculated every time in the first example, or does the compiler optimize it to a constant value, so there is no need to create the additional variable decreased_size?
I have always created an additional variable, but now, looking at other solutions on Codeforces, I have doubts about whether it makes sense.
Compiler: GCC version 5.4.0 20160609
Neither makes more sense than the other. Indeed, with optimization enabled, both produce the same code: https://godbolt.org/g/vzVJVF
Secondly, the time consumed by size - 1 is in most cases negligible compared with the time consumed by the action in the loop, so optimizing this part has a really small effect on the system.
In conclusion, optimize only when it's needed (i.e. when you can see there is a time or memory issue). In everyday code, prefer readable, easy-to-understand code.
I agree with @Garf365. If you also look at https://www.tutorialspoint.com/assembly_programming/assembly_loops.htm you will see that the loop count is loaded into a register before the loop starts, so size - 1 has to be computed and loaded only once.
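If you want to verify this yourself, a self-contained pair of functions like the following (the names are mine) can be compiled with gcc -O2 -S, or pasted into the Compiler Explorer link above; with optimization on, both variants typically produce identical assembly because size - 1 is loop-invariant:

/* Two variants of the same loop: bound computed in the test vs.
   bound hoisted by hand. Compare the assembly with `gcc -O2 -S`. */
void fill_a(int *a, int size)
{
    for (int i = 0; i < size - 1; i++)
        a[i] = i;
}

void fill_b(int *a, int size)
{
    int decreased_size = size - 1;
    for (int i = 0; i < decreased_size; i++)
        a[i] = i;
}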
I've heard that, let's say:
while (1) {
    i = !2;
    wait(1);
}
is power efficient. Does this while loop stop at i != 2, and is it therefore not polling? Let's say:
while (x == 3) {
    if (c == 3) {
        x = 4;
    }
    wait(1);
}
Does this follow a similar concept, or is i = !2 a condition that must be met in order to continue the while loop? Is the second example just as power efficient as the first?
An example I've been shown of power-inefficient polling is:
while (x == 3) { }
The important thing from an efficiency standpoint is that the code doesn't just continually cycle. In your example, presumably the wait() function is returning control to your OS so that it can immediately dispatch another task.
In short, yes, your second example is power efficient as well, assuming wait() returns control to the OS.
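As a concrete illustration (wait() is not a standard C function, so this sketch substitutes the POSIX sleep() and assumes x and c are changed elsewhere, e.g. by another thread or an interrupt handler):

#include <unistd.h> /* sleep() -- POSIX */

volatile int x = 3; /* assumed to be updated by another thread or an ISR */
volatile int c = 0;

/* Power-hungry: spins at full speed and never yields the CPU. */
void busy_poll(void)
{
    while (x == 3) {
        /* burns cycles doing nothing */
    }
}

/* Friendlier: checks the condition, then yields the CPU for a second,
   letting the OS run other tasks or idle the core. */
void sleeping_poll(void)
{
    while (x == 3) {
        if (c == 3)
            x = 4;
        sleep(1); /* stand-in for the question's wait(1) */
    }
}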
The Warshall-Floyd algorithm is based on essentially the idea: exploit a relationship between a problem and its simpler rather than smaller version. Warshall and Floyd published their algorithms without mentioning dynamic programming. Nevertheless, the algorithms certainly have a dynamic programming flavor and have come to be considered applications of this technique.
ALGORITHM Warshall(A[1..n, 1..n])
// Implements Warshall's algorithm for computing the transitive closure
// Input: The adjacency matrix A of a digraph with n vertices
// Output: The transitive closure of the digraph
R(0) ← A
for k ← 1 to n do
    for i ← 1 to n do
        for j ← 1 to n do
            R(k)[i, j] ← R(k−1)[i, j] or (R(k−1)[i, k] and R(k−1)[k, j])
return R(n)
We can speed up the above implementation of Warshall's algorithm for some inputs by restructuring its innermost loop.
My questions on the above text are the following:
1. What does the author mean by the idea "exploit a relationship between a problem and its simpler rather than smaller version"? Please elaborate.
2. How can we improve the speed of the above implementation, as the author mentioned?
The formulation from 1. means that the shortest-path problem (which can be seen as a generalization of the transitive-closure problem) has the optimal substructure property; however, there is no formal description of this property (in the sense of a mathematical definition). The optimal substructure property is necessary for a problem to be amenable to dynamic programming.
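Regarding question 2, the restructuring usually meant here is: if R(k−1)[i, k] is false, the whole innermost j loop leaves row i unchanged and can be skipped. A sketch in C (updating a single matrix in place, which is valid for Warshall's algorithm):

#include <string.h>

#define N 4 /* number of vertices; small for illustration */

/* Computes the transitive closure of the digraph with adjacency
   matrix a, writing the result into r. */
void warshall(const int a[N][N], int r[N][N])
{
    memcpy(r, a, sizeof(int) * N * N);
    for (int k = 0; k < N; k++)
        for (int i = 0; i < N; i++) {
            if (!r[i][k])
                continue; /* row i cannot gain anything via vertex k */
            for (int j = 0; j < N; j++)
                if (r[k][j])
                    r[i][j] = 1;
        }
}

On inputs where the matrix stays sparse, the skip avoids many iterations of the innermost loop, although the worst case is still Theta(n^3).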
I am trying to make a word scrambler and am wondering if there are any algorithms I should use or if I should just build it from scratch. Any pointers would be helpful!
The standard algorithm for finding a random permutation of a sequence of elements (or, in your case, letters in a word) is the Fisher-Yates shuffle, which in linear time produces a truly random permutation of a sequence of elements. The algorithm is well-established and many standard libraries provide implementations of it (for example, the C++ std::random_shuffle algorithm is typically implemented using this algorithm), so you may be able to find a prewritten implementation. If not, the algorithm is extremely easy to implement, and here's some pseudocode for it:
for each index i = 0 to n - 1, inclusive:
    choose a random index j in the range i to n - 1, inclusive
    swap A[i] and A[j]
Be careful when implementing this that when picking a random index, you do not pick an index between 0 and n - 1 inclusive on every iteration; doing so produces a nonuniform distribution of permutations (you can read more about that in this earlier question).
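If you'd rather start from working code than the pseudocode above, here is a minimal C sketch (the seeding and the modulo-based index choice are simplifications; rand() % k has a slight bias when k does not evenly divide RAND_MAX + 1):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Fisher-Yates shuffle of the characters of s, in place. */
void scramble(char *s)
{
    size_t n = strlen(s);
    for (size_t i = 0; i + 1 < n; i++) {
        /* random index j in [i, n - 1] */
        size_t j = i + (size_t)rand() % (n - i);
        char tmp = s[i];
        s[i] = s[j];
        s[j] = tmp;
    }
}

int main(void)
{
    char word[] = "scrambler";
    srand((unsigned)time(NULL));
    scramble(word);
    printf("%s\n", word);
    return 0;
}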
Hope this helps!
Go with the Knuth Shuffle (AKA the Fisher–Yates Shuffle). It has the desirable feature of ensuring that every permutation of the set is equally likely. Here's a link to an implementation in C (along with implementations in other languages) that works on arbitrarily sized objects.