Time complexity of N-Queens using backtracking? - c

#include <stdio.h>
#include <stdlib.h>   /* abs() is declared in stdlib.h, not math.h */

void printboard(int n);
int place(int k, int i);

int x[100];           /* x[k] = column of the queen placed in row k */

/* Try to place queens in rows k..n (1-indexed). */
void NQueen(int k, int n)
{
    int i;
    for (i = 1; i <= n; i++) {
        if (place(k, i)) {
            x[k] = i;
            if (k == n) {
                printf("Solution\n");
                printboard(n);
            } else {
                NQueen(k + 1, n);
            }
        }
    }
}

/* Return 1 if a queen may go in row k, column i: no earlier
   queen shares that column or either diagonal. */
int place(int k, int i)
{
    int j;
    for (j = 1; j < k; j++) {
        if (x[j] == i || abs(x[j] - i) == abs(j - k))
            return 0;
    }
    return 1;
}

void printboard(int n)
{
    int i;
    for (i = 1; i <= n; i++)
        printf("%d ", x[i]);
    printf("\n");
}

int main(void)
{
    int n;
    printf("Enter Value of N: ");
    scanf("%d", &n);
    NQueen(1, n);
    return 0;
}
I think its time complexity is O(n^n), since NQueen calls itself recursively, but is there a tighter bound for this program? What about the best-case and worst-case time complexity? I am also confused about the place() function, which is O(k) and is called from NQueen().

There are a lot of optimizations that can improve the time complexity of the algorithm.
There is more information in these links:
https://sites.google.com/site/nqueensolver/home/algorithm-results
https://sites.google.com/site/nqueensolver/home/algorithms/2backtracking-algorithm

For your function, the recurrence is T(n) = n*T(n-1) + O(n^2), which works out to roughly O(n!) time complexity.
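To see that growth empirically, here is a minimal sketch (our own, not from any answer) that counts how many (row, column) placement attempts the backtracking tree actually makes; the counter name nodes is an invention for illustration:

#include <stdio.h>
#include <stdlib.h>

static int x[100];
static long long nodes;   /* total (row, column) placement attempts */

static int place(int k, int i)
{
    for (int j = 1; j < k; j++)
        if (x[j] == i || abs(x[j] - i) == abs(j - k))
            return 0;
    return 1;
}

static void count_nodes(int k, int n)
{
    for (int i = 1; i <= n; i++) {
        nodes++;                  /* one unit of work per attempt */
        if (place(k, i)) {
            x[k] = i;
            if (k < n)
                count_nodes(k + 1, n);
        }
    }
}

int main(void)
{
    /* The counts grow super-polynomially, but far closer to n! than n^n. */
    for (int n = 4; n <= 10; n++) {
        nodes = 0;
        count_nodes(1, n);
        printf("n = %2d  nodes = %lld\n", n, nodes);
    }
    return 0;
}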

THE TIME COMPLEXITY OF THE N-QUEENS PROBLEM IS
> O(N!)
Explanation:
If we add all this up and define the running time as T(N), then T(N) = O(N^2) + N*T(N-1). If you draw a recursion tree using this recurrence, the final term will be something like N^3 + N!*O(1). By the definition of Big O, this can be reduced to O(N!) running time.

O(n^n) is definitely an upper bound on solving n-queens using backtracking.
I'm assuming that you are solving this by assigning a queen column-wise.
However, consider this - when you assign a location of the queen in the first column, you have n options, after that, you only have n-1 options as you can't place the queen in the same row as the first queen, then n-2 and so on. Thus, the worst-case complexity is still upper bounded by O(n!).
Hope this answers your question even though I'm almost 4 years late!

Let us consider that our queen is a rook, meaning we need not worry about diagonal conflicts.
Time complexity in this case will be O(N!) in the worst case, say if we were checking whether any solution exists at all. Here is a simple explanation.
Let us take an example where N = 4.
Suppose we want to fill the 2-D matrix. X represents a vacant position while 1 represents a taken position.
At the start, the answer matrix (which we need to fill) looks like:
X X X X
X X X X
X X X X
X X X X
Let us fill this row-wise, meaning we will select one location in each row and then move on to the next row.
For the first row, since nothing is filled in the matrix yet, we have 4 options.
For the second row, we have 3 options, as one column has already been taken.
Similarly, for the third row we have 2 options, and for the final row we are left with just 1 option.
Total options = 4*3*2*1 = 24 (4!)
Now, this was the case if our queen were a rook, but a queen adds diagonal constraints on top of these. The complexity should therefore be less than O(N!) in terms of the actual number of operations.
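A quick way to convince yourself of the 4! count is to enumerate the rook placements directly. This small sketch (ours, not part of the original answer) counts one-rook-per-row placements for N = 4 and prints 24:

#include <stdio.h>

static int used[16];   /* used[c] = 1 if column c already holds a rook */
static long count;

/* Place one rook per row; only the column constraint applies. */
static void rooks(int row, int n)
{
    if (row == n) { count++; return; }
    for (int c = 0; c < n; c++) {
        if (!used[c]) {
            used[c] = 1;
            rooks(row + 1, n);
            used[c] = 0;
        }
    }
}

int main(void)
{
    int n = 4;
    rooks(0, n);
    printf("N = %d: %ld placements (N! = 24)\n", n, count);
    return 0;
}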

The complexity is n^n, and here is the explanation.
Here n represents the number of queens and stays the same for every function call.
k is the row number, and the function keeps being called until k reaches n. Thus if n = 8, we have n rows and n queens.
T(n) = n*(n + T(max of k - 1)) = n^(max of k) = n^n, since the maximum value of k is n.
Note: the function has two parameters. In the loop, n is not decreasing; it stays the same for every call. But the number of remaining rows (driven by k) decreases so that the recursion can terminate.

The complexity is roughly (n+1)!*n^n. Begin with T(i) = O(n*i*T(i+1)) and T(n) = n^3.
So T(1) = n*T(2) = 2*n^2*T(3) = ... = (n-1)!*n^(n-1)*T(n), which is on the order of (n+1)!*n^n.

Related

Does time complexity change based on parameters?

If my function void foo(n) has a time complexity of O(n), and I have a function call foo(4), would I say that the time complexity of foo(4) is O(4)?
Normally the complexity of a function doesn't change depending on the arguments passed in, since it is expressed over that parameter. If the complexity is O(n), it is O(n) regardless of the value you pass (in your case, 4). Let's say that your function contains a for loop. Let's see an example in pseudo-code:
fun foo(int n) {
    for (int i = 0; i < n; i++) {
        print(i);
    }
}
This function prints the numbers from 0 to n. Since increasing n only increases the number of elements linearly, the function is O(n) independently of the value of n.
Another example in pseudo-code:
fun foo(int n) {
    for (int i = 0; i < 2^n; i++) {
        print(i);
    }
}
In this case, the function prints the values from 0 to 2^n. Increasing n increases the number of elements exponentially, so the function is O(2^n). Changing the value of n does not change the complexity of the function.
But what happens if we have a function like this one?:
fun foo(int n, boolean b) {
    if (b == true) {
        for (int i = 0; i < n; i++) {
            print(i);
        }
    } else {
        for (int i = 0; i < 2^n; i++) {
            print(i);
        }
    }
}
In this case, the complexity of the function is O(n) if b is true, and O(2^n) if b is false. So yes, the complexity of a function can change depending on the value of a parameter, when that parameter is not the one the complexity is expressed over.
Using Big-O notation gives you a notion of how the running time grows with the input n. In this case, if foo() is always given the same argument, its behavior is effectively constant. But keep in mind that most metrics used to measure complexity in time and/or memory only make sense for large n, because when the input is small the performance will be high anyway. So with Big-O you look at your function for growing n; only then is it measurable and comparable.
I hope this will help you.
Big-O notation (O(n)) gives an order of growth for the execution time of your algorithm as n increases. For example, if the complexity of your algorithm foo(n) is of order n^2 (O(n^2)), then in the worst case (when your algorithm performs the maximum number of loop iterations) the running time grows as the square of n, where n represents the size of the input data. I hope it helped you.
No.
O(n) simply means that as n gets large, the time it takes the function to execute is proportional to n; In other words, it's a qualitative statement, not a quantitative one. The time complexity of foo is O(n) whether the input is 4, 4000, 4000000, or 4000000000; all O(n) says is that the runtime grows linearly with n.
O(4) implies that the runtime is constant regardless of the size of the input - it's equivalent to writing O(1). Your function cannot be both O(n) and O(4).

How to sort an int array in linear time?

I was given homework to write a program that sorts an array in ascending order. I did this:
#include <stdio.h>

int main(void)
{
    int a[100], i, n, j, temp;

    printf("Enter the number of elements: ");
    scanf("%d", &n);
    for (i = 0; i < n; ++i) {
        printf("%d. Enter element: ", i + 1);
        scanf("%d", &a[i]);
    }

    /* Compare-and-exchange sort: O(n^2) comparisons */
    for (j = 0; j < n; ++j) {
        for (i = j + 1; i < n; ++i) {
            if (a[j] > a[i]) {
                temp = a[j];
                a[j] = a[i];
                a[i] = temp;
            }
        }
    }

    printf("Ascending order: ");
    for (i = 0; i < n; ++i)
        printf("%d ", a[i]);
    return 0;
}
The input will not be more than 10 numbers. Can this be done in less code than I have here? I want the code to be as short as possible. Any help will be appreciated. Thanks!
If you know the range of the array elements, one way is to use another array to store the frequency of each element (all elements should be int :) ) and then print the sorted array. I am posting it for a large number of elements (10^6). You can reduce it according to your need:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* calloc zero-initializes, so every count starts at 0 */
    int t, num, *freq = calloc(1000001, sizeof(int));

    scanf("%d", &t);              /* number of elements (upper limit is 1000000) */
    for (int i = 0; i < t; i++) {
        scanf("%d", &num);
        freq[num]++;              /* count each value */
    }
    for (int i = 0; i < 1000001; i++) {
        while (freq[i]--) {       /* emit each value as many times as it was seen */
            printf("%d\n", i);
        }
    }
    free(freq);
    return 0;
}
This algorithm can be modified further. The modified version is known as Counting sort and it sorts the array in Θ(n) time.
Counting sort [1]:
Counting sort assumes that each of the n input elements is an integer in the range
0 to k, for some integer k. When k = O(n), the sort runs in Θ(n) time.
Counting sort determines, for each input element x, the number of elements less
than x. It uses this information to place element x directly into its position in the
output array. For example, if 17 elements are less than x, then x belongs in output
position 18. We must modify this scheme slightly to handle the situation in which
several elements have the same value, since we do not want to put them all in the
same position.
In the code for counting sort, we assume that the input is an array A[1...n] and
thus A.length = n. We require two other arrays: the array B[1....n] holds the
sorted output, and the array C[0....k] provides temporary working storage.
The pseudo-code for this algorithm:
for i ← 0 to k do
    C[i] ← 0
for j ← 1 to n do
    C[A[j]] ← C[A[j]] + 1
// C[i] now contains the number of elements equal to i
for i ← 1 to k do
    C[i] ← C[i] + C[i-1]
// C[i] now contains the number of elements ≤ i
for j ← n downto 1 do
    B[C[A[j]]] ← A[j]
    C[A[j]] ← C[A[j]] - 1
[1] Content has been taken from Introduction to Algorithms by Thomas H. Cormen and others.
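Here is a minimal runnable C version of that pseudo-code, assuming non-negative elements no larger than a known bound k (the value K = 100 below is an arbitrary choice for the sketch):

#include <stdio.h>
#include <string.h>

#define K 100   /* assumed maximum element value */

/* Counting sort of a[0..n-1] into b[0..n-1], values in 0..K. */
void counting_sort(const int a[], int b[], int n)
{
    int c[K + 1];
    memset(c, 0, sizeof c);

    for (int j = 0; j < n; j++)        /* c[i] = number of elements equal to i */
        c[a[j]]++;
    for (int i = 1; i <= K; i++)       /* c[i] = number of elements <= i */
        c[i] += c[i - 1];
    for (int j = n - 1; j >= 0; j--)   /* place each element, keeping the sort stable */
        b[--c[a[j]]] = a[j];
}

int main(void)
{
    int a[] = { 5, 2, 4, 6, 1, 3 }, b[6];
    int n = sizeof a / sizeof a[0];

    counting_sort(a, b, n);
    for (int i = 0; i < n; i++)
        printf("%d ", b[i]);           /* prints: 1 2 3 4 5 6 */
    printf("\n");
    return 0;
}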
You have 10 lines doing the sorting. If you're allowed to use someone else's work (subsequent notes indicate that you can't do this), you can reduce that by writing a comparator function and calling the standard C library qsort() function:
static int compare_int(void const *v1, void const *v2)
{
    int i1 = *(int *)v1;
    int i2 = *(int *)v2;
    if (i1 < i2)
        return -1;
    else if (i1 > i2)
        return +1;
    else
        return 0;
}
And then the call is:
qsort(a, n, sizeof(a[0]), compare_int);
Now, I wrote the function the way I did for a reason. In particular, it avoids arithmetic overflow which writing this does not:
static int compare_int(void const *v1, void const *v2)
{
    return *(int *)v1 - *(int *)v2;
}
Also, the original pattern generalizes to comparing structures, etc. You compare the first field for inequality returning the appropriate result; if the first fields are unequal, then you compare the second fields; then the third, then the Nth, only returning 0 if every comparison shows the values are equal.
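For instance, here is a hedged sketch of that multi-field pattern (the struct and its fields are invented for illustration):

#include <string.h>

struct person {
    char name[32];
    int  age;
};

/* Order by name first; break ties on age. Returns <0, 0, or >0. */
static int compare_person(void const *v1, void const *v2)
{
    struct person const *p1 = v1;
    struct person const *p2 = v2;
    int cmp = strcmp(p1->name, p2->name);
    if (cmp != 0)
        return cmp;
    if (p1->age < p2->age)
        return -1;
    else if (p1->age > p2->age)
        return +1;
    return 0;
}

You would then sort an array of these with qsort(people, n, sizeof(people[0]), compare_person);.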
Obviously, if you're supposed to write the sort algorithm, then you'll have to do a little more work than calling qsort(). Your algorithm is a Bubble Sort. It is one of the most inefficient sorting techniques — it is O(N^2). You can look up Insertion Sort (also O(N^2), but more efficient than Bubble Sort), Selection Sort (also quadratic), Shell Sort (very roughly O(N^(3/2))), Heap Sort (O(N lg N)), Quick Sort (O(N lg N) on average, but O(N^2) in the worst case), or Intro Sort. The only ones that might be shorter than what you wrote are Insertion and Selection sorts; the others will be longer but faster for large amounts of data. For small sets like 10 or 100 numbers, efficiency is immaterial — all sorts will do. But as you get towards 1,000 or 1,000,000 entries, the sorting algorithm really matters. You can find a lot of questions on Stack Overflow about different sorting algorithms, and you can easily find information on Wikipedia for any and all of the algorithms mentioned.
Incidentally, if the input won't be more than 10 numbers, you don't need an array of size 100.

Calculating the number of steps in insertion sort

Here are two versions of insertion sort that I implemented, one from pseudo-code and one directly. I want to know which version takes more steps and space (even a little extra space counts).
void insertion_sort(int a[], int n)
{
    int key, i, j;
    for (i = 1; i < n; i++) {
        key = a[i];
        j = i - 1;
        while (j >= 0 && a[j] > key) {
            a[j + 1] = a[j];   /* shift larger elements one slot right */
            j--;
        }
        a[j + 1] = key;        /* drop key into the gap */
    }
}
and this one
/* Assumes a typedef'd element type `item` and a swap() helper. */
void insertion_sort(item s[], int n)
{
    int i, j;
    for (i = 1; i < n; i++) {
        j = i;
        while ((j > 0) && (s[j] < s[j - 1])) {
            swap(&s[j], &s[j - 1]);   /* one swap costs three assignments */
            j = j - 1;
        }
    }
}
Here is the sample array to sort: a = {5, 2, 4, 6, 1, 3}.
In my opinion the 2nd version takes more steps, because it swaps numbers one at a time, while the 1st shifts the greater numbers in the while loop and then places the smallest number once. For example:
Up to index = 3, both versions take an equal number of steps, but when index = 4 comes, i.e. moving the number 1, the 2nd takes more steps than the 1st.
What do you think?
"Number of steps" isn't a useful measure of anything.
Is a step a line? A statement? An expression? An assembler instruction? A CPU micro-op?
That is, your "steps" are transformed into assembler and then optimized, and the resulting instructions can have different (and potentially variable) runtime costs.
Sensible questions you might ask:
1. What is the algorithmic complexity?
As given in Rafe Kettler's comment and Arpit's answer, this is about how the algorithm scales as the input size grows.
2. How does it perform?
If you want to know which is faster (for some set of inputs), you should just measure it.
If you just want to know which performs more swaps, why not just write a swap function that increments a global counter every time it is called, and find out?
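A minimal sketch of that idea (the counter and helper names are ours), instrumenting the second insertion sort from the question:

#include <stdio.h>

static long swap_count;   /* bumped on every swap so runs can be compared */

static void counted_swap(int *a, int *b)
{
    int t = *a;
    *a = *b;
    *b = t;
    swap_count++;
}

int main(void)
{
    int s[] = { 5, 2, 4, 6, 1, 3 };
    int n = sizeof s / sizeof s[0];

    /* The second insertion sort from the question, instrumented */
    for (int i = 1; i < n; i++)
        for (int j = i; j > 0 && s[j] < s[j - 1]; j--)
            counted_swap(&s[j], &s[j - 1]);

    printf("swaps: %ld\n", swap_count);   /* prints: swaps: 9 */
    return 0;
}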
Number of swaps is the wrong term; you should count the number of assignments. swap() expands to three assignments, and you therefore usually end up with more assignments in the second version without saving space (you may not have key in the second version, but swap() internally has something similar).
Both versions use two loops, so the complexity is O(n*n) time, considering constant time for all other statements.
Let's analyze it line by line. I assume the cost of swap to be 3.
a)
Computational complexity:
3 + (n-1)*(1 + 1 + ((n-1)/2)*(1+1+1)*(1+1) + 1) = 1 + (n-1)*(3n) = 3n^2 - 3n + 1
(We use n/2 because it appears to be the average over consecutive worst-case scenarios.)
Memory:
3 ints, +1 (for loop)
b)
Computational complexity:
2 + (n-1)*(1 + ((n-1)/2)*(1+1+1)*(3+1)) = 2 + (n-1)*(6n - 5) = 6n^2 - 11n + 7
Memory:
2 ints, + cost of swap (most likely an additional 1 integer)
Not counting the input memory, as it is the same in both cases.
Hope it helps.

Time complexity of this function

I am pretty sure about my answer, but today I had a discussion with my friend who said I was wrong.
I think the complexity of this function is O(n^2) in the average and worst cases and O(n) in the best case. Right?
Now what happens when k is not the length of the array? k is the number of elements you want to sort (rather than the whole array).
Is it O(nk) in the average and worst cases and O(n) in the best case?
Here is my code:
#include <stdio.h>

void bubblesort(int *arr, int k)
{
    // k is the number of items of the array you want to sort
    // e.g. arr[] = { 4,15,7,1,19 } with k as 3 will give
    // { 4,7,15,1,19 }; only the first k elements are sorted
    int i = k, j = 0;
    char test = 1;
    while (i && test) {
        test = 0;               // no swap yet in this pass
        --i;
        for (j = 0; j < i; ++j) {
            if (arr[j] > arr[j + 1]) {
                // swap
                int temp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = temp;
                test = 1;       // a swap happened, so another pass is needed
            }
        } // end for loop
    } // end while loop
}

int main(void)
{
    int i = 0;
    int arr[] = { 89, 11, 15, 13, 12, 10, 55 };
    int n = sizeof(arr) / sizeof(arr[0]);

    bubblesort(arr, n - 3);
    for (i = 0; i < n; i++) {
        printf("%d ", arr[i]);
    }
    return 0;
}
P.S. This is not homework; it just looks like it. The function we were discussing is very similar to bubble sort. In any case, I have added the homework tag.
Please help me confirm whether I was right or not. Thank you for your help.
Complexity is normally given as a function over n (or N), like O(n), O(n*n), ...
Regarding your code the complexity is as you stated. It is O(n) in best case and O(n*n) in worst case.
What might have led to the misunderstanding in your case is that you have a variable n (length of the array) and a variable k (length of the part of the array to sort). Of course the complexity of your sort does not depend on the length of the array but on the length of the part that you want to sort. So with respect to your variables the complexity is O(k) or O(k*k). But since complexity is normally expressed over n, you would say O(n) or O(n*n), where n is the length of the part to sort.
Is it O(nk) in the average and worst cases and O(n) in the best case?
No, it's O(k^2) worst case and O(k) best case. Sorting the first k elements of an array of size n is exactly the same as sorting an array of k elements.
That's O(n^2), the outer while goes from k down to 1 (possibly stopping earlier for specific data, but we're talking worst case here), and the inner for goes from 0 to i (which in turn goes up to k), so multiplied they're k^2 in the worst case.
If you care about the best case, that's O(n) because the outer while loop only executes once then gets aborted.

What is time complexity of this algorithm as Big-O notation?

This is an algorithm for this question: rotate an array of n elements left by i positions. For instance, with n = 8 and i = 3, the array abcdefgh is rotated to defghabc.
/* Alg 1: Rotate by reversal */
void reverse(int i, int j)
{
    int t;
    while (i < j) {
        t = x[i]; x[i] = x[j]; x[j] = t;   /* swap the endpoints */
        i++;
        j--;
    }
}

void revrot(int rotdist, int n)
{
    reverse(0, rotdist - 1);   /* reverse the first block */
    reverse(rotdist, n - 1);   /* reverse the rest */
    reverse(0, n - 1);         /* reverse the whole array */
}
What is the time complexity of this method? And is there any better solution to this problem?
Thanks indeed.
It should be roughly linear, O(n).
Each reverse(i, j) call loops at most (j - i + 1)/2 times, so a single call costs O(j - i). The three calls in revrot together touch each element a constant number of times, which is O(n) overall.
Big-O notation:
A loop over n elements contributes O(n), as it has to go through n iterations; a statement executed a fixed number of times contributes O(1).
Agreed, it'd be O(n), since we're merely shifting.
As food for thought, another possible algorithm is to build a new array with the original appended to itself (i.e. abcd --> abcdabcd), then read n characters starting at position i; see the sketch below. Of course, you'll need two pointers, one for the beginning and one for the end, and remember to cut off the end with '\0'.
Same run time, by the way.
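A minimal sketch of that doubled-buffer idea for C strings (our own illustration, with a fixed-size buffer for simplicity):

#include <stdio.h>
#include <string.h>

/* Rotate the string s left by i positions via a doubled buffer. */
void rotate_left(char *s, int i)
{
    char buf[256];                 /* assumes strlen(s) < 128 */
    int n = (int)strlen(s);

    memcpy(buf, s, n);             /* buf = s + s, e.g. "abcd" -> "abcdabcd" */
    memcpy(buf + n, s, n);

    memcpy(s, buf + (i % n), n);   /* the rotation is the n chars at offset i */
    s[n] = '\0';
}

int main(void)
{
    char s[] = "abcdefgh";
    rotate_left(s, 3);
    printf("%s\n", s);             /* prints: defghabc */
    return 0;
}

Unlike the reversal trick, this uses O(n) extra space, though the running time is the same O(n).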
