void fn(int n){
    int p, q;
    for(int i=0; i<n; i++){
        p = 0;
        for(int j=n; j>1; j=j/2)
            ++p;
        for(int k=1; k<p; k=k*2)
            ++q;
    }
}
I think its complexity is n log n.
My friend says it's n log(log n).
Also, please tell me: do the inner loops depend on each other in this function?
It's actually of undefined complexity because you use q uninitialised.
Ignoring that small bug: the outer loop is obviously O(n). The first inner loop is O(log n). The second inner loop is O(log p), and p is log n, so it's O(log log n). But that doesn't matter, because it is executed sequentially after the first inner loop, so the total for both inner loops is O(log n) + O(log log n) = O(log n) (when you add two complexities, the overall complexity is the fastest-growing one). As for the dependence: the second loop's bound p is computed by the first loop, but because the loops run one after the other, their costs add rather than multiply. So your overall complexity is O(n) * O(log n) = O(n log n).
I don't know how to work out its complexity. How can I tell whether it is faster than other sorting algorithms?
I find this difficult because I'm a little bit bad at math.
#include <stdio.h>
#include <stdlib.h>
void func(int arr[])
{
    int temp;
    int numofarrays = 9999;
    for(int runtime=1; runtime<=numofarrays/2; runtime++)
    {
        for(int i=0; i<numofarrays; i++)
        {
            if(arr[i] > arr[i+1])
            {
                temp = arr[i];
                arr[i] = arr[i+1];
                arr[i+1] = temp;
            }
            if(arr[numofarrays-i-1] > arr[numofarrays-i])
            {
                temp = arr[numofarrays-i-1];
                arr[numofarrays-i-1] = arr[numofarrays-i];
                arr[numofarrays-i] = temp;
            }
        }
    }
    for(int i=0; i<=9999; i++)
    {
        printf("%i\n", arr[i]);
    }
}
int main()
{
    int arr[10000];
    for(int i=0; i<=9999; i++)
    {
        arr[i] = rand() % 10;
    }
    func(arr);
}
Big-O notation describes the limit of the number of steps your code performs as the input size goes to infinity. Since we take that limit, we neglect the coefficients and look only at the largest term.
for(int runtime=1; runtime<=numofarrays/2; runtime++)
{
    for(int i=0; i<numofarrays; i++)
    {
Here, the largest term of the first loop is n, and the largest term of the second is also n. But since loop 2 makes n iterations for every iteration of loop 1, we multiply n by n.
The result is O(n^2).
Here:
for(int runtime=1; runtime<=numofarrays/2; runtime++)
{
    for(int i=0; i<numofarrays; i++)
    {
you have two nested for loops that both depend on the array size, so the complexity is O(N^2).
Further you ask:
How to know if it is faster than other sorting algorithms?
Well, big-O does not directly tell you the execution time. For some values of N, an O(N) implementation may be slower than an O(N^2) implementation.
Big-O tells you how the execution time increases as N increases. So you know that an O(N^2) implementation will be slower than an O(N) implementation once N gets above a specific value, but you can't know what that value is! It could be 1, 5, 10, 100, 1000, ...
Example:
The empirical approach is to time (t) your algorithm for a given input size n, then repeat the experiment for larger and larger n, say n = 2^k for k = 0 to 20. Plot the points (n, t) on a graph. If you get a straight line, it's a linear algorithm. If you get something that grows faster, try plotting (n, log t) and see if you get a line; that would indicate an exponential algorithm, and so on.
The analytical approach looks scary and is sometimes math-heavy, but it doesn't have to be. You decide what your expensive operation is, then you count how many of those you do. Anything that runs in a loop, including recursive functions, is the main candidate: the run time goes up as the loop runs more times.
Let's say we want count the number of swaps.
The inner loop performs up to numofarrays swaps per pass, which is O(n).
The outer loop runs numofarrays/2 times, which is also O(n); we drop the factor 1/2 because it doesn't matter when n gets large.
This mean we do O(n) * O(n) = O(n^2) number of swaps.
Your print loop does no swaps, so we consider it "free"; but if we decide that prints are as expensive as swaps, then we want to count those too. Your algorithm does numofarrays print operations, which is O(n).
O(n^2) + O(n) is just O(n^2) as the larger O(n^2) dominates O(n) for sufficiently large n.
I've seen many similar questions, but not quite what I'm looking for. I'm supposed to find the complexity of the code below. What makes this code different from the ones I've already seen here is that the function whose complexity I have to find calls another function with a given complexity.
I think I can solve this but can't arrive at the correct answer. Any detailed explanation would be very nice, also to help me better understand the flow of finding the complexity of those kinds of functions. The code is in C.
void f(int v[], int n, int i, int j){
    int a = v[i];
    int b = v[j];
    int m = (i+j)/2;
    g(v,n);
    if(i<j){
        if(a<b) f(v,n,i,m);
        else f(v,n,m,j);
    }
    return;
}
The f function is called in the main where v is an array: f(v, n, 0, n-1).
The g function complexity is O(n).
Now, I really can't decide between O(log n) and O(n log n). Seeing that we're dividing the workspace in half using the int m, I know it's logarithmic, but does the g function add up and turn everything into O(n log n)?
Thank you.
PS: if an answer like this has been asked already, I couldn't find it and redirection would be great in case anyone else stumbles on the same problem as mine.
Your f function will execute about log(n) times (the range between i and j is halved at every step); each of these times, it will execute g, with an additional cost of O(n). Therefore, the total complexity is O(n * log(n)), which is the total number of times the inner loop* of g is called.
(* I am assuming that there is an inner loop in g for explanation purposes, because that is what you find in many, but certainly not all, O(n) functions).
I'm wondering what is the time-complexity of the inner for-loop, is it sqrt(n) or log(n)?
void foo(int n)
{
    for (int i=0; i<n*n; ++i)
        for (int j=1; j*j<n; j*=2)
            printf("Hello there!\n");
}
j in the inner for loop takes the values 1, 2, 4, ..., 2^t.
The loop condition j*j < n means 2^(2t) < n, so the loop stops once 2^(2t) >= n.
That gives t = (1/2)*log2(n).
Therefore the inner loop has time complexity O(log(n)).
I thought at first that the inner for-loop had complexity O(sqrt(n)), but that would only be the case if j were incremented by a constant.
EDIT
It should be O(log(n)), since j doubles on every iteration.
This is the function:
void f(int n)
{
    for(int i=0; i<n; ++i)
        for(int j=0; j<i; ++j)
            for(int k=i*j; k>0; k/=2)
                printf("~");
}
In my opinion, the calculation of the time complexity would end up to be something like this:
log((n-1)(n-2)) + log((n-1)(n-3)) + ... + log(n-1) + log((n-2)(n-3)) + ... + log(n-2) + ... + log(2)
So I get a time complexity of n*log(n!) (because log(a) + log(b) = log(a*b), and because n-1, n-2, n-3, ... each appears n-1 times in total).
However, the correct answer is n^2*log(n), and I have no idea where my mistake is. Could anyone here help?
Thanks a lot!
log(n!) can be approximated as (n + 1/2)*log(n) - n + constant (see https://math.stackexchange.com/questions/138194/approximating-log-of-factorial).
So n*log(n!) is n*n*log(n), as expected.
Simpler: compute the complexity of each loop independently and multiply them.
The first two outer loops are trivial: n each, which makes n^2.
The inner loop starts at k = i*j <= n^2, so it has log(n^2) complexity, which is the same as log(n).
So n^2*log(n) is the correct answer.
The complexity is O(N*N*log2(N^2)).
The first and the second loop are both O(N), and the last loop over k grows logarithmically.
Since log2(N^2) = 2*log2(N), O(N*M) = O(N)*O(M), and constant factors are dropped, the growth of the last loop can also be written as O(log2(N^2)) = O(log(N)), giving O(N^2*log(N)) overall.
i=1;
while(i<n*n)
    i=i+n;
From my lecturer's provided answer:
the big-O notation is O(n) instead of O(n^2). Why?
Because after each pass through the loop, n is added to i. So the loop has to run at most n times for i to reach n^2, at which point it ends. Hence O(n).
O(n^2) would be:
i=1;
while(i<n*n)
    i=i+1;