If my function void foo(n) has a time complexity of O(n), and I have a function call foo(4), would I say that the time complexity of foo(4) is O(4)?
Normally, the complexity of a function doesn't change depending on the arguments passed in, because the complexity is calculated over the parameter itself. In your case the complexity is O(n), expressed in terms of the parameter n (to which you pass 4). Let's say your function contains a for loop. Here is an example in pseudo-code:
fun foo(int n) {
    for (int i = 0; i < n; i++) {
        print(i);
    }
}
This function prints the numbers from 0 to n. Since increasing n increases the number of iterations only linearly, the function is O(n) regardless of the value of n.
Another example in pseudo-code:
fun foo(int n) {
    for (int i = 0; i < 2^n; i++) {
        print(i);
    }
}
In this case, the function prints the values from 0 to 2^n. Increasing n increases the number of iterations exponentially, so the function is O(2^n). Again, changing the value of n does not change the complexity of the function.
But what happens if we have a function like this one?
fun foo(int n, boolean b) {
    if (b == true) {
        for (int i = 0; i < n; i++) {
            print(i);
        }
    } else {
        for (int i = 0; i < 2^n; i++) {
            print(i);
        }
    }
}
In this case, the complexity of the function is O(n) if b is true, and O(2^n) if b is false. So yes, the complexity of a function can change depending on the value of a parameter, but only if that parameter is not the one over which the complexity is expressed.
Big-O notation tells you how the running time grows with the input n. In this case, if foo() is always called with the same argument, the running time is effectively constant. But keep in mind that most metrics used to measure time and/or memory complexity only make sense for large n, because for small inputs performance will be fine anyway. So with Big-O you look at your function's behavior as n grows; that is what makes it measurable and comparable.
I hope this will help you.
Big-O notation (O(n)) gives an order of growth for the execution time of your algorithm as the input size n increases. For example, if the complexity of your algorithm foo(n) is O(n^2), then in the worst case (when the algorithm performs the maximum number of loop iterations), the execution time grows on the order of n^2, where n is the size of the input data. I hope it helped you.
No.
O(n) simply means that as n gets large, the time it takes the function to execute is proportional to n; in other words, it's a qualitative statement, not a quantitative one. The time complexity of foo is O(n) whether the input is 4, 4000, 4000000, or 4000000000; all O(n) says is that the runtime grows linearly with n.
O(4) implies that the runtime is constant regardless of the size of the input - it's equivalent to writing O(1). Your function cannot be both O(n) and O(4).
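To see the difference concretely, here is a minimal sketch (the loop body is just a stand-in for whatever foo actually does):

#include <stdio.h>

/* Stand-in for the foo in the question: does exactly n units of work. */
void foo(int n)
{
    int steps = 0;
    for (int i = 0; i < n; i++) {
        steps++;                /* one unit of work per iteration */
    }
    printf("foo(%d) took %d steps\n", n, steps);
}

int main(void)
{
    foo(4);        /* a single fixed call: always 4 steps, i.e. a constant */
    foo(4000);     /* O(n) is about this scaling: 1000x the input, 1000x the steps */
    foo(4000000);
    return 0;
}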
I don't know how to work out its complexity and such. How to know if it is faster than other sorting algorithms?
I find it difficult because I'm a little bit bad at math.
#include <stdio.h>
#include <stdlib.h>

void func(int arr[])
{
    int temp;
    int numofarrays = 9999;
    for (int runtime = 1; runtime <= numofarrays / 2; runtime++)
    {
        for (int i = 0; i < numofarrays; i++)
        {
            if (arr[i] > arr[i+1])
            {
                temp = arr[i];
                arr[i] = arr[i+1];
                arr[i+1] = temp;
            }
            if (arr[numofarrays-i-1] > arr[numofarrays-i])
            {
                temp = arr[numofarrays-i-1];
                arr[numofarrays-i-1] = arr[numofarrays-i];
                arr[numofarrays-i] = temp;
            }
        }
    }
    for (int i = 0; i <= 9999; i++)
    {
        printf("%i\n", arr[i]);
    }
}

int main()
{
    int arr[10000];
    for (int i = 0; i <= 9999; i++)
    {
        arr[i] = rand() % 10;
    }
    func(arr);
    return 0;
}
int main()
{
int arr[10000];
for(int i=0; i<=9999; i++)
{
arr[i]=rand() % 10;
}
func(arr);
}
Big-O notation describes the number of steps your code takes in the limit as the input size goes to infinity. Since we take that limit, we neglect the coefficients and look only at the largest term.
for (int runtime = 1; runtime <= numofarrays / 2; runtime++)
{
    for (int i = 0; i < numofarrays; i++)
    {
Here, the largest term of the first loop is n, and the largest term of the second is also n. But since the second loop does n iterations for every iteration of the first loop, we multiply n by n.
Result is O(n^2).
Here:
for (int runtime = 1; runtime <= numofarrays / 2; runtime++)
{
    for (int i = 0; i < numofarrays; i++)
    {
you have two nested for loops that both depend on the array size, so the complexity is O(N^2).
Further you ask:
How to know if it is faster than other sorting algorithms?
Well, big-O does not directly tell you about execution time. For some values of N, an O(N) implementation may be slower than an O(N^2) implementation.
Big-O tells you how the execution time increases as N increases. So you know that an O(N^2) implementation will be slower than an O(N) one once N gets above a specific value, but you can't know what that value is! It could be 1, 5, 10, 100, 1000, ...
Example: an algorithm that takes 100*N steps is slower than one that takes N^2 steps for every N below 100, even though O(N) is the better asymptotic class.
The empirical approach is to time (t) your algorithm for a given input size n, then repeat the experiment for larger and larger n, say n = 2^k for k = 0 to 20. Plot (n, t) on a graph. If you get a straight line, it's a linear algorithm. If you get something that grows faster, try plotting ln(t) instead and see if that gives a line, which would indicate an exponential algorithm, and so on.
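Here is a rough sketch of that timing experiment in C; work() is a hypothetical stand-in for whatever algorithm you are measuring:

#include <stdio.h>
#include <time.h>

/* Hypothetical stand-in for the algorithm being measured. */
void work(long n)
{
    volatile long sink = 0;          /* volatile so the loop isn't optimized away */
    for (long i = 0; i < n; i++)
        sink += i;
}

int main(void)
{
    for (int k = 10; k <= 20; k++) {
        long n = 1L << k;            /* n = 2^k */
        clock_t start = clock();
        work(n);
        double t = (double)(clock() - start) / CLOCKS_PER_SEC;
        printf("n=%ld t=%f\n", n, t); /* plot (n, t) to judge the growth */
    }
    return 0;
}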
The analytical approach looks scary and sometimes math heavy, but it doesn't really have to be. You decide what your expensive operation is, then you count how many of those you do. Anything that runs in a loop, including recursive functions, is the main candidate: the run time goes up as the loop runs more times.
Let's say we want to count the number of swaps.
The inner loop swaps up to numofarrays times, which is O(n).
The outer loop runs numofarrays/2 times, which is also O(n); we drop the factor 1/2 because it doesn't matter when n gets large.
This means we do O(n) * O(n) = O(n^2) swaps.
Your print loop is not doing any swaps, so we consider it "free"; but if we determine that prints are as expensive as swaps, then we want to count those too. Your algorithm does numofarrays print operations, which is O(n).
O(n^2) + O(n) is just O(n^2), as the larger O(n^2) dominates O(n) for sufficiently large n.
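If you want to check that prediction empirically, here is a rough sketch (my simplified, forward-pass-only version of your sort, parameterized by n) that counts swaps for growing n; doubling n should roughly quadruple the count:

#include <stdio.h>
#include <stdlib.h>

/* Simplified version of the sort above, with a swap counter added. */
long sort_and_count(int *arr, int n)
{
    long swaps = 0;
    for (int runtime = 1; runtime <= n / 2; runtime++) {
        for (int i = 0; i < n - 1; i++) {
            if (arr[i] > arr[i + 1]) {
                int temp = arr[i];
                arr[i] = arr[i + 1];
                arr[i + 1] = temp;
                swaps++;
            }
        }
    }
    return swaps;
}

int main(void)
{
    for (int n = 1000; n <= 8000; n *= 2) {
        int *arr = malloc(n * sizeof *arr);
        for (int i = 0; i < n; i++)
            arr[i] = rand() % 10;
        /* for an O(n^2) algorithm, doubling n should ~quadruple the swaps */
        printf("n=%d swaps=%ld\n", n, sort_and_count(arr, n));
        free(arr);
    }
    return 0;
}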
I would like to know exactly how to compute the big O of the second while loop, given that its number of repetitions keeps going down over time.
int duplicate_check(int a[], int n)
{
    int i = n;
    while (i > 0)              /* runs n times */
    {
        i--;
        int j = i - 1;
        while (j >= 0)         /* runs i times: n-1, n-2, ..., 1, 0 */
        {
            if (a[i] == a[j])
            {
                return 1;      /* duplicate found */
            }
            j--;
        }
    }
    return 0;                  /* no duplicates */
}
Still O(n^2), regardless of the inner loop getting shorter each time.
The value you are computing is the sum of (n-k) for k = 0 to n.
This equates to (n^2 + n) / 2, which, since O() ignores constants and lower-order terms, is O(n^2).
Note you can solve this problem more efficiently by sorting the array, which is O(n log n), and then searching for two consecutive numbers that are the same, which is O(n), for a total of O(n log n).
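A sketch of that approach in C (note it sorts the caller's array in place):

#include <stdlib.h>

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* Returns 1 if the array contains a duplicate, 0 otherwise. */
int duplicate_check_fast(int a[], int n)
{
    qsort(a, n, sizeof a[0], cmp_int);   /* O(n log n) */
    for (int i = 1; i < n; i++)          /* O(n) scan of adjacent pairs */
        if (a[i] == a[i - 1])
            return 1;
    return 0;
}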
Big-O is an estimate of theoretical growth; it's not an exact calculation.
Like twain249 said: regardless, the time complexity is O(n^2).
Big-O describes the worst-case time complexity of an algorithm, i.e. the maximum time it can ever take. It is an upper bound, which means that whatever the input, the running time will always stay under that bound.
In your case the worst case is when i iterates all the way down to 0. Then the complexity works out like this:
For i = n, j runs n-1 times; for i = n-1, j runs n-2 times; and so on.
Adding it all up: (n-1) + (n-2) + (n-3) + ... + (n-n) = (n-1)*n/2 = n^2/2 - n/2.
After ignoring the lower-order term (n/2) and the constant factor (1/2), it becomes n^2.
So O(n^2); that's how it is computed.
#include <stdio.h>
#include <stdlib.h>   /* abs() is declared here, not in math.h */

void printboard(int n);
int place(int k, int i);

int x[100];           /* x[k] = column of the queen placed in row k */

/* Try every column i for queen k; recurse to the next row on success. */
void NQueen(int k, int n)
{
    int i;
    for (i = 1; i <= n; i++)
    {
        if (place(k, i) == 1)
        {
            x[k] = i;
            if (k == n)
            {
                printf("Solution\n");
                printboard(n);
            }
            else
                NQueen(k + 1, n);
        }
    }
}

/* Check that column i is safe for queen k: no earlier queen shares
   the column or a diagonal. */
int place(int k, int i)
{
    int j;
    for (j = 1; j < k; j++)
    {
        if ((x[j] == i) || abs(x[j] - i) == abs(j - k))
            return 0;
    }
    return 1;
}

void printboard(int n)
{
    int i;
    for (i = 1; i <= n; i++)
        printf("%d ", x[i]);
}

int main()
{
    int n;
    printf("Enter Value of N:");
    scanf("%d", &n);
    NQueen(1, n);
    return 0;
}
I think it has time complexity O(n^n), since NQueen calls itself recursively. But is there any tighter bound possible for this program? What about the best-case and worst-case time complexity? I am also confused about the place() function, which is O(k) and is called from NQueen().
There are a lot of optimizations that can improve the time complexity of the algorithm.
There is more information in these links:
https://sites.google.com/site/nqueensolver/home/algorithm-results
https://sites.google.com/site/nqueensolver/home/algorithms/2backtracking-algorithm
For your function, T(n) = n*T(n-1) + O(n^2), which translates to approximately O(n!) time complexity.
The time complexity of the N-Queens problem is O(N!).
Explanation:
If we add all this up and define the running time as T(N), then T(N) = O(N^2) + N*T(N-1). If you draw a recursion tree using this recurrence, the final term will be something like N^3 + N!*O(1). By the definition of Big-O, this reduces to O(N!) running time.
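To sketch why the recurrence collapses to O(N!), write the per-level O(N^2) work as c*N^2 and expand:

T(N) = N*T(N-1) + c*N^2
     = N*(N-1)*T(N-2) + c*N*(N-1)^2 + c*N^2
     = ...
     = N! * T(0) + c * ( N!/1! * 1^2 + N!/2! * 2^2 + ... + N!/N! * N^2 )

Each term in the parenthesis is N! times k^2/k!, and the series of k^2/k! sums to a constant (2e), so the whole expression is O(N!).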
O(n^n) is definitely an upper bound on solving n-queens using backtracking.
I'm assuming that you are solving this by assigning a queen column-wise.
However, consider this: when you assign a location to the queen in the first column, you have n options; after that, you have only n-1 options, since you can't place the queen in the same row as the first queen; then n-2, and so on. Thus, the worst-case complexity is still upper bounded by O(n!).
Hope this answers your question even though I'm almost 4 years late!
Let us consider that our queen is a rook, meaning we need not take care of diagonal conflicts.
Time complexity in this case will be O(N!) in the worst case, supposing we are on a hunt to check whether any solution exists or not. Here is a simple explanation.
Let us take an example where N=4.
Suppose we want to fill the 2-D matrix. X represents a vacant position while 1 represents a taken position.
At the start, the answer matrix (which we need to fill) looks like:
X X X X
X X X X
X X X X
X X X X
Let us fill this row-wise, meaning we will select one location in each row and then move on to the next row.
For the first row, since nothing is filled in the matrix as a whole, we have 4 options.
For the second row, we have 3 options, as one column has already been taken.
Similarly, for the third row, we have 2 options, and for the final row, we are left with just 1 option.
Total options = 4*3*2*1 = 24 (4!)
Now, this was the case if our queen were a rook, but since we have more constraints in the case of a queen, the complexity should be less than O(N!) in terms of the actual number of operations.
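To make the 4! = 24 count concrete, here is a small sketch of my own (not from the question) that enumerates rook placements row by row, tracking used columns in a bitmask:

#include <stdio.h>

/* Counts row-wise rook placements (one per row, no column reuse) on an n x n board. */
static int count_rooks(int row, int n, int used_cols)
{
    if (row == n)
        return 1;                          /* placed one rook in every row */
    int count = 0;
    for (int col = 0; col < n; col++)
        if (!(used_cols & (1 << col)))     /* column still free */
            count += count_rooks(row + 1, n, used_cols | (1 << col));
    return count;
}

int main(void)
{
    printf("%d\n", count_rooks(0, 4, 0));  /* prints 24, i.e. 4! */
    return 0;
}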
The complexity is n^n; here is the explanation.
Here n represents the number of queens and remains the same for every function call.
k is the row number, and the function is called recursively until k reaches n. So if n = 8, we have n rows and n queens.
T(n) = n*(n + T(one level deeper)); expanded over the at most n levels of recursion, this gives n * n * ... * n = n^n, since the maximum value of k is n.
Note: the function has two parameters. In the loop, n is not decreasing; it remains the same for every call. But the number of rows left to fill decreases with each call, so the recursion can terminate.
The complexity is (n+1)!*n^n. Begin with T(i) = O(n * i * T(i+1)) and T(n) = n^3. So T(1) = n*T(2) = 2*n^2*T(3) = ... = (n-1)! * n^(n-1) * T(n), which is on the order of (n+1)!*n^n.
I have a matrix m * n and, for each row, I need to compare all of its elements with each other.
For each pair I find, I'll call a function that performs some calculations.
Example:
my_array -> {1, 2, 3, 4, 5, ...}
I take 1 and I have: (1,2)(1,3)(1,4)(1,5)
I take 2 and I have: (2,1)(2,3)(2,4)(2,5)
and so on
Using C I wrote this:
for (i = 0; i < array_length; i++) {
    for (k = 0; k < array_length; k++) {
        if (i == k) continue;
        // Do something
    }
}
I was wondering if I can use an algorithm with lower complexity.
No, it's O(n^2) by definition [ too long to explain here, but trust me (-: ]
But you can cut the number of iterations in half:
for (i = 0; i < array_length; i++) {
    for (k = i + 1; k < array_length; k++) { // <-- no need to check the values before "i"
        // Do something
        // If the order of i and k makes a difference, then here you should
        // 'Do something' for (i,k) and 'Do something' for (k,i)
    }
}
There are several things you might do, but which are possible and which are not depends on the nature of the array and on the formula you apply. The overall complexity will probably remain unchanged or even grow, even if the calculation can be made to go faster, unless the formula's cost depends on its arguments, in which case a decrease in complexity may be achievable.
Also, going from A*O(N^a) to B*O(N^b) with b > a (higher complexity) can still be worth pursuing, for some range of N, if B is sufficiently smaller than A.
In no particular order:
if the matrix has several repeated items, it can be convenient to use a caching function:
result function(arg1, arg2) {
    int i = index(arg1, arg2);  // Depending on the values, it could be
                                // something like arg1*(MAX_ARG2+1) + arg2;
    if (!stored[i]) {           // stored and values are allocated and initialised
                                // somewhere else - or in this function using a
                                // static flag.
        stored[i] = 1;
        values[i] = true_function(arg1, arg2);
    }
    return values[i];
}
Then you have a memory overhead proportional to the number of distinct pairs of values, which can be O(|arg1|*|arg2|); but in some circumstances (e.g. true_function() is expensive) the savings will more than offset the added cost.
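For concreteness, here is one possible C version of that caching idea (a sketch only: it assumes both arguments are small non-negative integers below the bounds chosen here, and that true_function is the expensive formula, defined elsewhere):

#define MAX_ARG1 100                      /* assumed bound on arg1 (exclusive) */
#define MAX_ARG2 100                      /* assumed bound on arg2 (exclusive) */

static char   stored[MAX_ARG1 * MAX_ARG2];
static double values[MAX_ARG1 * MAX_ARG2];

double true_function(int arg1, int arg2); /* the expensive formula, defined elsewhere */

/* Cached wrapper: evaluates each (arg1, arg2) pair at most once. */
double cached_function(int arg1, int arg2)
{
    int i = arg1 * MAX_ARG2 + arg2;       /* unique index per pair */
    if (!stored[i]) {
        stored[i] = 1;
        values[i] = true_function(arg1, arg2);
    }
    return values[i];
}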
chop the formula into pieces (not possible for every formula) and express it as:
F(x,y) = G(x) op H(y) op J(x,y)
Then you can do an O(max(M,N)) pass pre-calculating G[] and H[]. This also has an O(M+N) memory cost. It is only convenient if the difference in computational cost between F and J is significant. Or you might do:
for (i in 0..N) {
    g = G(array[i]);
    for (j in 0..N) {
        if (i != j) {
            result = f(array[i], array[j], g);
        }
    }
}
which moves the cost of computing G out of the O(N^2) inner loop and into the O(N) outer loop.
the first two techniques can be used in tandem if G() or H() is practical to cache (limited argument range, expensive function).
find a "law" to link F(a, b) with F(a+c, b+d). Then you can run the caching algorithm much more efficiently, reusing the same calculations. This shifts some complexity from O(N^2) to O(N) or even O(log N), so that while the overall cost is still quadratic, it grows much more slowly, and a higher bound for N becomes practical. If F is itself of a higher order of complexity than constant in (a,b), this may also reduce this order (as an extreme example, suppose F is iterative in a and/or b).
No, you can only get lower computational complexity if you include knowledge of the contents of the array and the semantics of the operation in order to optimize your algorithm.
I am pretty sure about my answer, but today I had a discussion with my friend, who said I was wrong.
I think the complexity of this function is O(n^2) in average and worst case and O(n) in best case. Right?
Now what happens when k is not the length of the array? Here k is the number of elements you want to sort (rather than the whole array).
Is it O(nk) in average and worst case and O(n) in best case?
Here is my code:
#include <stdio.h>

void bubblesort(int *arr, int k)
{
    // k is the number of items of the array you want to sort
    // e.g. arr[] = { 4,15,7,1,19 } with k as 3 will give
    // { 4,7,15,1,19 }, only first k elements are sorted
    int i = k, j = 0;
    char test = 1;
    while (i && test)
    {
        test = 0;
        --i;
        for (j = 0; j < i; ++j)
        {
            if ((arr[j]) > (arr[j+1]))
            {
                // swap
                int temp = arr[j];
                arr[j] = arr[j+1];
                arr[j+1] = temp;
                test = 1;
            }
        } // end for loop
    } // end while loop
}
int main()
{
    int i = 0;
    int arr[] = { 89,11,15,13,12,10,55 };
    int n = sizeof(arr)/sizeof(arr[0]);
    bubblesort(arr, n-3);
    for (i = 0; i < n; i++)
    {
        printf("%d ", arr[i]);
    }
    return 0;
}
P.S. This is not homework, it just looks like one. The function we were discussing is very similar to bubble sort. In any case, I have added the homework tag.
Please help me confirm if I was right or not. Thank you for your help.
Complexity is normally given as a function over n (or N), like O(n), O(n*n), ...
Regarding your code, the complexity is as you stated: O(n) in the best case and O(n*n) in the worst case.
What might have led to the misunderstanding in your case is that you have a variable n (length of the array) and a variable k (length of the part of the array to sort). Of course the complexity of your sort does not depend on the length of the array but on the length of the part that you want to sort. So with respect to your variables the complexity is O(k) or O(k*k). But since complexity is normally stated in terms of n, you would say O(n) or O(n*n), where n is the length of the part to sort.
Is it O(nk) in average and worst case and O(n) in best case?
No, it's O(k^2) worst case and O(k) best case. Sorting the first k elements of an array of size n is exactly the same as sorting an array of k elements.
That's O(k^2): the outer while loop goes from k down to 1 (possibly stopping earlier for specific data, but we're talking worst case here), and the inner for loop goes from 0 to i (which in turn goes up to k), so multiplied they give k^2 in the worst case.
If you care about the best case, that's O(k), because on already-sorted input no swap ever sets test, so the outer while loop executes only once and then stops, leaving a single O(k) pass.