Finding subsets discretely in C

for (i = 0; i < (pow(2,n)-1); i++) {
    x = binary_conversion(i);
    for (j = (n-1); j > 0; j--) {
        if (x == 0) {
            M[i][j] = 0;
        }
        else {
            M[i][j] = x % 10;
            x = x / 10;
        }
    }
}
I want to print the subsets of a set, so for a set of n elements I compute 2^n. For each value from 0 to 2^n, I convert it to binary and keep the binary digits in a matrix. Then, as I go through the matrix, whenever a digit is 1 I print the corresponding element of the original set. But while building the matrix, the code assigns the same binary value to two consecutive rows, so in the end I don't even get half of the subsets. What do you think is wrong with the code?

Ah, that's because you don't cover the LSB, i.e. the 0th element. It should be for (j = (n-1); j >= 0; j--): you missed the =.
Also, you have to check whether the j-th bit is set in i or not.
And instead of pow you can simply use (1 << n), which is equivalent to 2^n.
Your code is not very readable, so I will post pseudocode.
for (int i = 0; i <= (1 << numOfSetElmts) - 1; i++)
{
    // print subset i
    for (int pos = 0; pos <= numOfSetElmts - 1; pos++)
        if (i & (1 << pos))
            print Set[pos]
}
Why am I not using pow?
The pow function is implemented as a library routine that uses floating-point values and operations to compute the power.
So a value raised to the power n is not necessarily computed by multiplying it n times; as a result you can end up with small rounding errors, and execution is a bit slower too.
Bitwise is faster?
Yes, it is. Even though modern implementations keep improving the architecture, you won't lose performance by using bitwise operations: on most of them a shift performs at least as well as an addition, if not better.
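For a quick illustration of the difference, here is a minimal C sketch (the truncating cast in the pow version is where rounding can bite, depending on the math library):

#include <math.h>
#include <stdio.h>

int main(void) {
    int n = 10;
    int via_pow   = (int) pow(2, n);  /* double rounded down; can be off by one on some libms */
    int via_shift = 1 << n;           /* exact integer arithmetic, no rounding possible */
    printf("%d %d\n", via_pow, via_shift);  /* expected: 1024 1024 */
    return 0;
}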

Your program is also not the most efficient way to do this (there are solutions with less bookkeeping), but the actual bug is the missing '=': the outer loop condition should be
i <= pow(2,n)-1
Also, you can use i < (1<<n) instead; both compute the same bound, but the second one is simpler and faster. The same problem occurs in the inner loop, where you didn't put the '=' sign, i.e. it should be j >= 0. Other than that, the program is fine.
A better solution for your problem may look like this:
void subsets(char A[], int N)
{
    int i, j;
    for (i = 0; i < (1 << N); ++i)
    {
        for (j = 0; j < N; ++j)
            if (i & (1 << j))
                printf("%c ", A[j]);
        printf("\n");
    }
}
With this approach, no external binary conversion or matrix is needed.
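For instance, a minimal driver (assuming the subsets function above is in the same file) prints all 2^3 = 8 subsets of {a, b, c}, with the empty set appearing as a blank line:

#include <stdio.h>

void subsets(char A[], int N);  /* defined above */

int main(void) {
    char set[] = {'a', 'b', 'c'};
    subsets(set, 3);  /* prints one subset per line */
    return 0;
}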

Related

More efficient way of iterating over every small square in big square array

I'm in my first few months of learning to code in C through a high school program. Someone recently mentioned to me that there's often a way to make code more efficient and I think I have a problem that could be made more efficient. I'm not sure how but I have a hunch that it could be made faster.
We're given a 2D square array of integers with row and col size n. We have subsquares within the 2D array with row and col size s, and we can always assume that s evenly divides n. Currently, my code to iterate over each subsquare looks something like this:
int **grid;
int n, s, i, j, k, l;
// reading in inputs, other processing
for (i = 0; i < n; i += s) {
    for (j = 0; j < n; j += s) {
        for (k = 0; k < s; k++) {
            for (l = 0; l < s; l++) {
                printf("%d \n", grid[i + k][j + l]);
            }
        }
        printf("next subsquare: \n");
    }
}
As you can see, I've got 4 nested for loops and I feel like it's a bit messy to have it in this format. Is there a better way to do this? Later on I might be summing each subsquare or performing some other operation with each subsquare.
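One common cleanup (a sketch of my own, under the same assumption as the question that s evenly divides n) is to pull the two inner loops into a helper that handles a single subsquare. The iteration count is unchanged, but each piece reads more simply, and the helper is easy to swap out later for summing:

#include <stdio.h>

/* Process one s-by-s subsquare whose top-left corner is (row, col). */
static void print_subsquare(int **grid, int row, int col, int s) {
    for (int k = 0; k < s; k++)
        for (int l = 0; l < s; l++)
            printf("%d \n", grid[row + k][col + l]);
}

void print_all_subsquares(int **grid, int n, int s) {
    for (int i = 0; i < n; i += s) {
        for (int j = 0; j < n; j += s) {
            print_subsquare(grid, i, j, s);
            printf("next subsquare: \n");
        }
    }
}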

Counting number of primes within a given range of long int using C

Well, there are lots of such questions available on SO as well as other forums. However, none of them helped.
I wrote a program in C to find the number of primes within a range, where the range is a long int. I am using the Sieve of Eratosthenes algorithm, with an array of long ints to store all the numbers from 1 up to the limit; I could not think of a way to do this without an array. The code works fine up to 10000000, but beyond that it runs out of memory and exits. Below is my code.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

typedef unsigned long uint_32;

int main() {
    uint_32 i, N, *list, cross = 0, j = 4, k, primes_cnt = 0;
    clock_t start, end;
    double exec_time;
    system("cls");
    printf("Enter N\n");
    scanf("%lu", &N);
    list = (uint_32 *) malloc((N + 1) * sizeof(uint_32));
    start = clock();
    for (i = 0; i <= N; i++) {  /* the array holds indices 0..N */
        list[i] = i;
    }
    for (i = 0; cross <= N/2; i++) {
        if (i == 0)
            cross = 2;
        else if (i == 1)
            cross = 3;
        else {
            /* find the next number that has not been crossed out */
            for (j = cross + 1; j <= N; j++) {
                if (list[j] != 0) {
                    cross = list[j];
                    break;
                }
            }
        }
        /* cross out all multiples of `cross` */
        for (k = cross * 2; k <= N; k += cross) {
            list[k] = 0;
        }
    }
    for (i = 2; i <= N; i++) {
        if (list[i] != 0)
            primes_cnt++;
    }
    printf("%lu", primes_cnt);
    end = clock();
    exec_time = (double)(end - start);
    printf("\n%f", exec_time);
    return 0;
}
I am stuck and can't think of a better way to achieve this. Any help will be hugely appreciated. Thanks.
Edit:
My aim is to generate and print all prime numbers below the range. As printing consumed a lot of time, I thought of getting the first step right.
There are other algorithms that do not require you to generate the primes up to N in order to count the number of primes below N. The easiest to implement is Legendre's prime counting method, which requires you to generate only about sqrt(N) primes to determine the number of primes below N.
The idea behind the algorithm is that
pi(n) = phi(n, sqrt(n)) + pi(sqrt(n)) - 1
where
pi(n) = number of primes below n
phi(n, m) = number of numbers below n that are not divisible by any prime up to m.
That means phi(n, sqrt(n)) counts exactly 1 plus the primes between sqrt(n) and n. For how to calculate phi, you can go to the following link (Feasible implementation of a Prime Counting Function).
The reason this is more efficient is that it is much easier to compute phi(n, m) than to compute pi(n). Say I want to compute phi(100, 3), i.e. how many numbers below or equal to 100 are divisible by neither 2 nor 3. By inclusion-exclusion (with integer division): phi(100, 3) = 100 - 100/2 - 100/3 + 100/6.
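As a rough sketch of how phi can be computed (using the common convention where the second argument counts how many of the smallest primes are excluded; primes[] is an assumed precomputed table, e.g. from a small sieve):

/* phi(x, a): how many numbers in [1, x] are divisible by none of the
   first a primes. Uses phi(x, a) = phi(x, a-1) - phi(x / p_a, a-1). */
long long phi(long long x, int a, const int primes[]) {
    if (a == 0) return x;  /* no primes excluded: all x numbers remain */
    return phi(x, a - 1, primes) - phi(x / primes[a - 1], a - 1, primes);
}

In practice this recursion is memoized on (x, a); without caching it blows up quickly.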
Your code uses about 32 times as much memory as it needs. Note that since you initialized list[i] = i the assignment cross = list[j] can be replaced with cross = j, making it possible to replace list with a bit vector.
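To illustrate, a bit vector needs only N/8 + 1 bytes instead of (N+1) * sizeof(unsigned long). A minimal sketch of the whole sieve on top of two helper macros (names are mine, not from the answer above):

#include <stdio.h>
#include <stdlib.h>

#define BIT_SET(v, i)  ((v)[(i) >> 3] |= (unsigned char)(1u << ((i) & 7)))
#define BIT_TEST(v, i) ((v)[(i) >> 3] &  (1u << ((i) & 7)))

int main(void) {
    unsigned long N = 100;
    unsigned char *composite = calloc(N / 8 + 1, 1);  /* one bit per number */
    for (unsigned long p = 2; p * p <= N; p++)
        if (!BIT_TEST(composite, p))
            for (unsigned long k = p * p; k <= N; k += p)
                BIT_SET(composite, k);
    unsigned long count = 0;
    for (unsigned long i = 2; i <= N; i++)
        if (!BIT_TEST(composite, i)) count++;
    printf("%lu primes up to %lu\n", count, N);  /* 25 primes up to 100 */
    free(composite);
    return 0;
}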
However, this is not enough to bring the range to 2^64, because your implementation would still require 2^61 bytes (2 exbibytes) of memory, so you need to optimize some more.
The next thing to notice is that you do not need to go up to N/2 when "crossing" the numbers: √N is sufficient (you should be able to prove this by thinking about the result of dividing a composite number by its divisors above √N). This brings memory requirements within your reach, because your "crossing" primes would fit in about 4 GB of memory.
Once you have an array of crossing primes, you can build a partial sieve for any range without keeping in memory all ranges that precede it. This is called the Segmented sieve. You can find details on it, along with a simple implementation, on the page of primesieve generator. Another advantage of this approach is that you can parallelize it, bringing the time down even further.
You can tweak the algorithm a bit to calculate the prime numbers in chunks.
Load a part of the array (as much as fits in memory), and in addition hold a list of all known prime numbers.
Whenever you load a chunk, first go through the already known primes and, just as in the regular sieve, mark all of their multiples within the chunk as composite.
Then go over the chunk again, mark whatever else you can, and add the newly found primes to the list.
When done, you'll have a list containing all your prime numbers.
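A hedged sketch of that chunked idea in C (the function name and segment size are illustrative choices, not part of the answer above): only the primes up to sqrt(N) plus one fixed-size segment live in memory at any time.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>

/* Count primes in [2, N], sieving one segment at a time. */
unsigned long count_primes_segmented(unsigned long N) {
    unsigned long limit = (unsigned long) sqrt((double) N) + 1;
    /* 1. Small sieve up to sqrt(N) collects the "crossing" primes. */
    char *small = calloc(limit + 1, 1);
    unsigned long *primes = malloc((limit + 1) * sizeof *primes);
    unsigned long nprimes = 0, count = 0;
    for (unsigned long p = 2; p <= limit; p++) {
        if (small[p]) continue;
        primes[nprimes++] = p;
        for (unsigned long k = p * p; k <= limit; k += p) small[k] = 1;
    }
    /* 2. Sieve segments [lo, hi] that fit comfortably in memory. */
    const unsigned long SEG = 1UL << 20;  /* arbitrary 1 MiB chunk */
    char *seg = malloc(SEG);
    for (unsigned long lo = 2; lo <= N; lo += SEG) {
        unsigned long hi = (lo + SEG - 1 < N) ? lo + SEG - 1 : N;
        memset(seg, 0, hi - lo + 1);
        for (unsigned long i = 0; i < nprimes; i++) {
            unsigned long p = primes[i];
            if (p * p > hi) break;
            unsigned long start = (lo + p - 1) / p * p;  /* first multiple >= lo */
            if (start < p * p) start = p * p;  /* smaller multiples were crossed by smaller primes */
            for (unsigned long k = start; k <= hi; k += p) seg[k - lo] = 1;
        }
        for (unsigned long k = lo; k <= hi; k++)
            if (!seg[k - lo]) count++;
    }
    free(small); free(primes); free(seg);
    return count;
}

int main(void) {
    printf("%lu\n", count_primes_segmented(10000000));  /* expect 664579 */
    return 0;
}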
I can see that the approach you are using is the basic implementation of Eratosthenes, which first crosses out all multiples of 2, then all multiples of 3, and so on.
But I have a better solution for this question. There is actually a problem on SPOJ, PRINT; please go through it and check the constraints it imposes. Below is my code snippet for that problem:
#include <stdio.h>
#include <math.h>
#include <stdbool.h>

int num[46501] = {0}, prime[5000], prime_index = -1;

int main() {
    /* First, calculate the primes up to sqrt(N) (preferably slightly
       more than sqrt(N)). */
    prime[++prime_index] = 2;
    int i, j, k;
    for (i = 3; i < 216; i += 2) {
        if (num[i] == 0) {
            prime[++prime_index] = i;
            for (j = i * i, k = 2 * i; j <= 46500; j += k) {
                num[j] = 1;
            }
        }
    }
    for (; i <= 46500; i += 2) {
        if (num[i] == 0) {
            prime[++prime_index] = i;
        }
    }
    int t; // number of test cases
    scanf("%i", &t);
    while (t--) {
        bool arr[1000005] = {0};
        int m, n, j, k;
        scanf("%i%i", &m, &n);
        if (m == 1)
            m++;
        if (m == 2 && m <= n) {
            printf("2\n");
        }
        int sqt = sqrt(n) + 1;
        for (i = 0; i <= prime_index; i++) {
            if (prime[i] > sqt) {
                sqt = i;  /* reuse sqt as an index into prime[] */
                break;
            }
        }
        for (; m <= n && m <= prime[prime_index]; m++) {
            if (m & 1 && num[m] == 0) {
                printf("%i\n", m);
            }
        }
        if (m % 2 == 0) {
            m++;
        }
        for (i = 1; i <= sqt; i++) {
            j = (m % prime[i]) ? (m + prime[i] - m % prime[i]) : m;
            for (k = j; k <= n; k += prime[i]) {
                arr[k - m] = 1;
            }
        }
        for (i = 0; i <= n - m; i += 2) {
            if (!arr[i]) {
                printf("%i\n", m + i);
            }
        }
        printf("\n");
    }
    return 0;
}
I hope you get the idea.
And, as you mentioned, your program works fine up to 10^7 but fails above that; it must be because you are running out of memory.
NOTE: I'm sharing my code only for learning purposes. Please don't copy and paste it until you've understood the point.

Leetcode: Four Sum

Problem: Given an array S of n integers, are there elements a, b, c, and d in S such that a + b + c + d = target? Find all unique quadruplets in the array which gives the sum of target.
Note:
Elements in a quadruplet (a,b,c,d) must be in non-descending order. (ie, a ≤ b ≤ c ≤ d)
The solution set must not contain duplicate quadruplets.
For example, given array S = {1 0 -1 0 -2 2}, and target = 0.
A solution set is:
(-1, 0, 0, 1)
(-2, -1, 1, 2)
(-2, 0, 0, 2)
I know there's an O(n^3) solution to this problem, but I was wondering if there's a faster algorithm. I googled a lot and found that many people gave an O(n^2 log n) solution, which fails to correctly deal with cases where there are duplicate pair sums in S (like here
and here). I hope someone can give me a correct version of an O(n^2 log n) algorithm, if it really exists.
Thanks!
The brute-force algorithm takes time O(n^4): use four nested loops to form all combinations of four items from the input, and keep any that sum to the target.
A simple improvement takes time O(n^3): use three nested loops to form all combinations of three items from the input, and for each triple look up the remaining value (the target minus the triple's sum) in a hash table of the input.
The best algorithm I know is a meet-in-the-middle algorithm that operates in time O(n^2): use two nested loops to form all combinations of two items from the input, storing the pairs and their totals in some kind of dictionary (hash table, balanced tree) indexed by total. Then use two more nested loops to again form all combinations of two items, and for each pair look up the total that completes the target (the target minus the pair's sum) in the dictionary; each match yields the two items from the loops plus the two items stored in the dictionary.
I have code at my blog.
IMHO, for the O(n^2 lg n) algorithm, the problem of duplicates can be solved when creating the aux[] array (I'm using the name from the second link you provided). The basic idea is to first sort the elements of the input, and then skip duplicates while processing the array.
vector<int> createAuxArray(vector<int> input) {
    int len = input.size();
    vector<int> aux;
    sort(input.begin(), input.end());
    for (int i = 0; i < len; ++i) {
        if (i != 0 && input[i] == input[i - 1]) continue; // skip when encountering a duplicate
        for (int j = i + 1; j < len; ++j) {
            if (j != i + 1 && input[j] == input[j - 1]) continue; // same idea
            aux.push_back(createAuxElement(input[i], input[j]));
        }
    }
    return aux;
}
Complexity for this module is O(n lg n) + O(n^2) = O(n^2), which doesn't affect the overall performance. Once we have created the aux array, we can plug it into the code mentioned in the post and the results will be correct.
Note that a BST or hash table could be used in place of the sorting, but in general that doesn't decrease the complexity, since you still have to insert/query (O(lg n)) inside a doubly nested loop.
This is a modified version of the geeksforgeeks solution which handles duplicate pair sums as well. I noticed that some of the pairs were missing because the hash table was overwriting the old pairs whenever it found a new pair with the same sum. The fix is to avoid overwriting by storing all pairs for a given sum in a vector of pairs. Hope this helps!
vector<vector<int> > fourSum(vector<int> &a, int t) {
    unordered_map<int, vector<pair<int, int> > > twoSum;
    set<vector<int> > ans;
    int n = a.size();
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            twoSum[a[i] + a[j]].push_back(make_pair(i, j));
    for (int i = 0; i < n; i++) {
        for (int j = i + 1; j < n; j++) {
            if (twoSum.find(t - a[i] - a[j]) != twoSum.end()) {
                for (auto comp : twoSum[t - a[i] - a[j]]) {
                    if (comp.first != i and comp.first != j and
                        comp.second != i and comp.second != j) {
                        vector<int> row = {a[i], a[j], a[comp.first], a[comp.second]};
                        sort(row.begin(), row.end());
                        ans.insert(row);
                    }
                }
            }
        }
    }
    vector<vector<int> > ret(ans.begin(), ans.end());
    return ret;
}

Determining the complexities given codes

Given a snippet of code, how do you determine its complexity in general? I find myself getting very confused with Big-O questions. For example, a very simple question:
for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
        System.out.println("*");
    }
}
The TA explained this with something like combinations. Like this is n choose 2 = (n(n-1))/2 = 0.5n^2 - 0.5n, then remove the constant factor and the lower-order term so it becomes n^2. I can put in test values and try, but how does this combination thing come in?
What if there's an if statement? How is the complexity determined?
for (int i = 0; i < n; i++) {
    if (i % 2 == 0) {
        for (int j = i; j < n; j++) { ... }
    } else {
        for (int j = 0; j < i; j++) { ... }
    }
}
Then what about recursion ...
int fib(int a, int b, int n) {
    if (n == 3) {
        return a + b;
    } else {
        return fib(b, a + b, n - 1);
    }
}
In general, there is no way to determine the complexity of a given function.
Warning! Wall of text incoming!
1. There are very simple algorithms that no one knows whether they even halt or not.
There is no algorithm that can decide whether a given program halts or not, if given a certain input. Calculating the computational complexity is an even harder problem since not only do we need to prove that the algorithm halts but we need to prove how fast it does so.
//The Collatz conjecture states that the sequence generated by the following
// algorithm always reaches 1, for any initial positive integer. It has been
// an open problem for 70+ years now.
function col(n){
    if (n == 1){
        return 0;
    } else if (n % 2 == 0){ // even
        return 1 + col(n/2);
    } else { // odd
        return 1 + col(3*n + 1);
    }
}
2. Some algorithms have weird and off-beat complexities
A general "complexity determining scheme" would easily get too complicated because of these guys:
//The Ackermann function. One of the first examples of a non-primitive-recursive algorithm.
function ack(m, n){
    if (m == 0){
        return n + 1;
    } else if (n == 0){
        return ack(m-1, 1);
    } else {
        return ack(m-1, ack(m, n-1));
    }
}

function f(n){ return ack(n, n); }

//f(1) = 3
//f(2) = 7
//f(3) = 61
//f(4) takes longer than your wildest dreams to terminate.
3. Some functions are very simple but will confuse many kinds of static analysis attempts
//Mc'Carthy's 91 function. Try guessing what it does without
// running it or reading the Wikipedia page ;)
function f91(n){
    if (n > 100){
        return n - 10;
    } else {
        return f91(f91(n + 11));
    }
}
That said, we still need a way to find the complexity of stuff, right? For loops are a simple and common pattern. Take your initial example:
for (i = 0; i < N; i++){
    for (j = 0; j < i; j++){
        print something
    }
}
Since each print something is O(1), the time complexity of the algorithm is determined by how many times we run that line. Well, as your TA mentioned, we do this by looking at the combinations. Here the inner line runs (0 + 1 + ... + (N-1)) times, for a total of N(N-1)/2.
Since we disregard constant factors and lower-order terms, we get O(N^2).
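If you don't trust the combinatorics, you can confirm the count empirically; a throwaway C check (not part of the original answer):

#include <stdio.h>

int main(void) {
    int n = 1000;
    long count = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < i; j++)
            count++;  /* stand-in for "print something" */
    printf("%ld %ld\n", count, (long) n * (n - 1) / 2);  /* both print 499500 */
    return 0;
}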
Now for the more tricky cases we can get more mathematical. Try to create a function whose value represents how long the algorithm takes to run, given the size N of the input. Often we can construct a recursive version of this function directly from the algorithm itself, so calculating the complexity becomes the problem of putting bounds on that function. We call this function a recurrence.
For example:
function fib_like(n){
    if (n <= 1){
        return 17;
    } else {
        return 42 + fib_like(n-1) + fib_like(n-2);
    }
}
it is easy to see that the running time, in terms of N, will be given by
T(N) = 1                     if N <= 1
T(N) = T(N-1) + T(N-2)       otherwise
Well, T(N) is just the good-old Fibonacci function. We can use induction to put some bounds on that.
For example, let's prove by induction that T(N) <= 2^N for all N (i.e., that T(N) is O(2^N)).
Base case: N = 0 or N = 1
T(0) = 1 <= 1 = 2^0
T(1) = 1 <= 2 = 2^1
Inductive case (N > 1):
T(N) = T(N-1) + T(N-2)
Applying the inductive hypothesis to T(N-1) and T(N-2)...
T(N) <= 2^(N-1) + 2^(N-2)
so...
T(N) <= 2^(N-1) + 2^(N-1)
     <= 2^N
(we can try doing something similar to prove the lower bound too)
In most cases, having a good guess about the final runtime of the function will allow you to solve recurrence problems easily with an induction proof. Of course, this requires you to be able to guess first; only lots of practice can help you here.
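A quick numeric sanity check of the bound we just proved (a sketch that simply evaluates the recurrence; the column names are mine):

#include <stdio.h>

/* T(n) = 1 for n <= 1, else T(n-1) + T(n-2): the recurrence from above. */
long t(int n) { return n <= 1 ? 1 : t(n - 1) + t(n - 2); }

int main(void) {
    for (int n = 0; n <= 20; n++)
        printf("n=%2d  T(n)=%6ld  2^n=%7ld  bound_holds=%d\n",
               n, t(n), 1L << n, t(n) <= (1L << n));  /* bound_holds is always 1 */
    return 0;
}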
And as a final note, I would like to point out the Master theorem, the only commonly used rule I can think of right now for the more difficult recurrence problems. Use it when you have to deal with a tricky divide-and-conquer algorithm.
Also, in your "if case" example, I would solve that by cheating and splitting it into two separate loops that don't have an if inside.
for (int i = 0; i < n; i++) {
    if (i % 2 == 0) {
        for (int j = i; j < n; j++) { ... }
    } else {
        for (int j = 0; j < i; j++) { ... }
    }
}
Has the same runtime as
for (int i = 0; i < n; i += 2) {
    for (int j = i; j < n; j++) { ... }
}

for (int i = 1; i < n; i += 2) {
    for (int j = 0; j < i; j++) { ... }
}
And each of the two parts can easily be seen to be O(N^2), for a total that is also O(N^2).
Note that I used a nice trick to get rid of the "if" here. There is no general rule for doing so, as shown by the Collatz algorithm example.
In general, deciding algorithm complexity is theoretically impossible.
However, one cool and code-centric method for doing it is to actually just think in terms of programs directly. Take your example:
for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
        System.out.println("*");
    }
}
Now we want to analyze its complexity, so let's add a simple counter that counts the number of executions of the inner line:
int counter = 0;
for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
        System.out.println("*");
        counter++;
    }
}
Because the System.out.println line doesn't really matter, let's remove it:
int counter = 0;
for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
        counter++;
    }
}
Now that we have only the counter left, we can obviously simplify the inner loop out:
int counter = 0;
for (int i = 0; i < n; i++) {
    counter += n;
}
... because we know that the increment is run exactly n times. And now we see that counter is incremented by n exactly n times, so we simplify this to:
int counter = 0;
counter += n * n;
And we emerged with the (correct) O(n^2) complexity :) It's there in the code :)
Let's look at how this works for a recursive Fibonacci calculator:
int fib(int n) {
    if (n < 2) return 1;
    return fib(n - 1) + fib(n - 2);
}
Change the routine so that it returns the number of iterations spent inside it instead of the actual Fibonacci numbers:
int fib_count(int n) {
    if (n < 2) return 1;
    return fib_count(n - 1) + fib_count(n - 2);
}
It's still Fibonacci! :) So we know now that the recursive Fibonacci calculator is of complexity O(F(n)) where F is the Fibonacci number itself.
Ok, let's look at something more interesting, say simple (and inefficient) mergesort:
void mergesort(Array a, int from, int to) {
    if (from >= to - 1) return;
    int m = (from + to) / 2;
    /* Recursively sort halves */
    mergesort(a, from, m);
    mergesort(a, m, to);
    /* Then merge */
    Array b = new Array(to - from);
    int i = from;
    int j = m;
    int ptr = 0;
    while (i < m || j < to) {
        if (i == m || (j < to && a[j] < a[i])) {
            b[ptr] = a[j++];
        } else {
            b[ptr] = a[i++];
        }
        ptr++;
    }
    for (i = from; i < to; i++)
        a[i] = b[i - from];
}
Because we are not interested in the actual result but the complexity, we change the routine so that it actually returns the number of units of work carried out:
int mergesort(Array a, int from, int to) {
    if (from >= to - 1) return 1;
    int m = (from + to) / 2;
    /* Recursively sort halves */
    int count = 0;
    count += mergesort(a, from, m);
    count += mergesort(a, m, to);
    /* Then merge */
    Array b = new Array(to - from);
    int i = from;
    int j = m;
    int ptr = 0;
    while (i < m || j < to) {
        if (i == m || (j < to && a[j] < a[i])) {
            b[ptr] = a[j++];
        } else {
            b[ptr] = a[i++];
        }
        ptr++;
        count++;
    }
    for (i = from; i < to; i++) {
        count++;
        a[i] = b[i - from];
    }
    return count;
}
Then we remove those lines that do not actually impact the counts and simplify:
int mergesort(Array a, int from, int to) {
    if (from >= to - 1) return 1;
    int m = (from + to) / 2;
    /* Recursively sort halves */
    int count = 0;
    count += mergesort(a, from, m);
    count += mergesort(a, m, to);
    /* Then merge */
    count += to - from;
    /* Copy the array */
    count += to - from;
    return count;
}
Still simplifying a bit:
int mergesort(Array a, int from, int to) {
    if (from >= to - 1) return 1;
    int m = (from + to) / 2;
    int count = 0;
    count += mergesort(a, from, m);
    count += mergesort(a, m, to);
    count += (to - from) * 2;
    return count;
}
We can now actually dispense with the array:
int mergesort(int from, int to) {
    if (from >= to - 1) return 1;
    int m = (from + to) / 2;
    int count = 0;
    count += mergesort(from, m);
    count += mergesort(m, to);
    count += (to - from) * 2;
    return count;
}
We can now see that actually the absolute values of from and to do not matter any more, but only their distance, so we modify this to:
int mergesort(int d) {
    if (d <= 1) return 1;
    int count = 0;
    count += mergesort(d / 2);
    count += mergesort(d / 2);
    count += d * 2;
    return count;
}
And then we get to:
int mergesort(int d) {
    if (d <= 1) return 1;
    return 2 * mergesort(d / 2) + d * 2;
}
Here obviously d on the first call is the size of the array to be sorted, so you have the recurrence for the complexity M(x) (this is in plain sight on the second line :)
M(x) = 2(M(x/2) + x)
and this you need to solve in order to get a closed-form solution. The easiest way is to guess the solution M(x) = x log x and verify it against the right side:
2 (x/2 log(x/2) + x)
= x log(x/2) + 2x
= x (log x - log 2) + 2x
= x (log x + C), with C = 2 - log 2
and then verify that it is asymptotically equivalent to the left side:
x log x + Cx
------------ = 1 + [Cx / (x log x)] = 1 + [C / log x] --> 1 + 0 = 1.
x log x
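You can also check the guess numerically; a small C sketch (the recurrence is read directly off the simplified code, and the printed ratio settles toward a constant, consistent with M(x) being Theta(x log x)):

#include <stdio.h>
#include <math.h>

/* M(d) = 2*M(d/2) + 2*d for d > 1, M(1) = 1. */
long m(long d) { return d <= 1 ? 1 : 2 * m(d / 2) + 2 * d; }

int main(void) {
    for (long d = 2; d <= (1L << 20); d *= 2)
        printf("d=%8ld  M(d)=%10ld  M(d)/(d*log2(d))=%.3f\n",
               d, m(d), m(d) / (d * log2((double) d)));
    return 0;
}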
Even though this is an over-generalization, I like to think of Big-O in terms of lists, where the length of the list is N items.
Thus, if you have a for-loop that iterates over everything in the list, it is O(N). In your code, you have one line that (in isolation all by itself) is O(N):
for (int i = 0; i < n; i++) {
If you have a for loop nested inside another for loop, and you perform an operation on each item in the list that requires you to look at every item in the list, then you are doing an operation N times for each of N items, thus O(N^2). In your example above, you do in fact have another for loop nested inside your for loop. So you can think about it as if each for loop is O(N), and then, because they are nested, multiply them together for a total value of O(N^2).
Conversely, if you are just doing a quick operation on a single item, then that is O(1). There is no 'list of length N' to go over, just a single one-time operation. To put this in context, in your example above, the operation:
if (i % 2 == 0)
is O(1). What is important isn't the 'if', but the fact that checking whether a single item is equal to another item is a quick operation on a single item. As before, the if statement is nested inside your external for loop. However, because it is O(1), you are multiplying everything by 1, and so there is no 'noticeable' effect on your final calculation for the run time of the entire function.
For logs, and for dealing with more complex situations (like this business of counting up to j or i, and not just n again), I would point you towards a more elegant explanation here.
I like to use two things for Big-O notation: standard Big-O, which is worst case scenario, and average Big-O, which is what normally ends up happening. It also helps me to remember that Big-O notation is trying to approximate run-time as a function of N, the number of inputs.
The TA explained this with something like combinations. Like this is n choose 2 = (n(n-1))/2 = 0.5n^2 - 0.5n, then remove the constant factor and the lower-order term so it becomes n^2. I can put in test values and try, but how does this combination thing come in?
As I said, normal Big-O is the worst-case scenario. You can try to count the number of times each line gets executed, but it is simpler to just look at the first example and say that there are two loops over the length of n, one embedded in the other, so it is n * n. If they were one after another, it'd be n + n, equaling 2n; since it's an approximation, you just say n, or linear.
What if there's an if statement? How is the complexity determined?
This is where having average case and best case helps a lot in organizing my thoughts. In the worst case, you ignore the if and say n^2. In the average case, for your example, you have a loop over n, with another loop over part of n that happens half of the time. This gives you n * (n/x) / 2, where x is whatever fraction of n gets looped over in your embedded loops. That is n^2/(2x), so you'd end up with n^2 just the same, because it's an approximation.
I know this isn't a complete answer to your question, but hopefully it sheds some kind of light on approximating complexities in code.
As has been said in the answers above mine, it is clearly not possible to determine this for all snippets of code; I just wanted to add the idea of using average case Big-O to the discussion.
For the first snippet, it's just n^2 because you perform n operations n times. If j was initialized to i, or went up to i, the explanation you posted would be more appropriate but as it stands it is not.
For the second snippet, you can easily see that half of the time the first one will be executed, and the second will be executed the other half of the time. Depending on what's in there (hopefully it's dependent on n), you can rewrite the equation as a recursive one.
The recursive cases (including the third snippet) can be written as recurrences; the third one would appear as
T(n) = T(n-1) + 1
Which we can easily see is O(n).
Big-O is just an approximation; it doesn't say how long an algorithm takes to execute, it just says something about how much longer it takes when the size of its input grows.
So if the input is size N and the algorithm evaluates an expression of constant complexity, O(1), N times, the complexity of the algorithm is linear: O(N). If the expression has linear complexity, the algorithm has quadratic complexity: O(N*N).
Some expressions have exponential complexity, O(N^N), or logarithmic complexity, O(log N). For an algorithm with loops and recursion, multiply the complexities of each level of loop and/or recursion; in terms of complexity, looping and recursion are equivalent. For an algorithm that has different complexities at different stages, choose the highest complexity and ignore the rest. And finally, all constant complexities are considered equivalent: O(5) is the same as O(1), and O(5*N) is the same as O(N).

How to calculate the total number of iterations of the innermost loop of nested for loops? Is there a formula?

For example:
int count = 0;
for (int i = 0; i < 12; i++)
    for (int j = i + 1; j < 10; j++)
        for (int k = j + 1; k < 8; k++)
            count++;
System.out.println("count = " + count);
or
for (int i = 0; i < I; i++)
    for (int j = i + 1; j < J; j++)
        for (int k = j + 1; k < K; k++)
            :
            :
            :
            for (int z = y + 1; z < Z; z++)
                count++;
What is the value of count after all iterations? Is there a formula to calculate it?
It's a math problem of summation
Basically, one can prove that:
for (i = a; i < b; i++)
    count += 1;
is equivalent to
count += b - a;
Similarly,
for (i = a; i < b; i++)
    count += i;
is equivalent to
count += 0.5 * (b*(b-1) - a*(a-1));
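A two-minute check of both identities with arbitrary sample values (just to confirm the algebra):

#include <stdio.h>

int main(void) {
    long a = 3, b = 10, count1 = 0, count2 = 0;
    for (long i = a; i < b; i++) count1 += 1;  /* should equal b - a = 7 */
    for (long i = a; i < b; i++) count2 += i;  /* should equal 3+4+...+9 = 42 */
    printf("%ld == %ld\n", count1, b - a);
    printf("%ld == %ld\n", count2, (b * (b - 1) - a * (a - 1)) / 2);
    return 0;
}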
You can get similar formulas using, for instance, wolframalpha (Wolfram's Mathematica).
This system will do the symbolic calculation for you; for instance,
for (int i = 0; i < A; i++)
    for (int j = i + 1; j < B; j++)
        for (int k = j + 1; k < C; k++)
            count++;
is a Mathematica query:
http://www.wolframalpha.com/input/?i=Sum[Sum[Sum[1,{k,j%2B1,C-1}],{j,i%2B1,B-1}],{i,0,A-1}]
Not a full answer, but when I, J and K are all the same (say they're all n), the formula is C(n, nb_for_loops), which may already interest you :)
final int n = 50;
int count = 0;
for (int i = 0; i < n; i++) {
    for (int j = i + 1; j < n; j++) {
        for (int k = j + 1; k < n; k++) {
            for (int l = k + 1; l < n; l++) {
                count++;
            }
        }
    }
}
System.out.println(count);
This will give 230300, which is C(50, 4).
You can compute this easily using the binomial coefficient:
http://en.wikipedia.org/wiki/Binomial_coefficient
One formula to compute it is: n! / (k! * (n-k)!)
For example, if you want to know how many different sets of 5 cards can be taken out of a 52-card deck, you can either use 5 nested loops or use the formula above; both give 2,598,960.
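A small helper makes both numbers quoted above easy to check (a sketch using the multiplicative form of the formula, which avoids computing full factorials):

#include <stdio.h>

/* C(n, k) as a running product; the division is exact at every step. */
long long binom(int n, int k) {
    long long r = 1;
    for (int i = 1; i <= k; i++)
        r = r * (n - k + i) / i;
    return r;
}

int main(void) {
    printf("%lld\n", binom(50, 4));  /* 230300: the nested-loop count above */
    printf("%lld\n", binom(52, 5));  /* 2598960: 5-card hands from 52 cards */
    return 0;
}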
That's roughly the volume of a hyperpyramid (http://www.physicsinsights.org/pyramids-1.html): 1/d * (n^d), with d the number of dimensions.
The formula works for real numbers, so you have to adapt it for integers.
(For the case d = 2, where the hyperpyramid is a triangle, 1/2 * (n*n) becomes the well-known formula n(n+1)/2, or n(n-1)/2, depending on whether you include the diagonal or not. I'll let you do the math.)
I think the fact that you're not using n every time, but I, J, K, is not a problem, as you can rewrite each loop as two loops stopping in the middle, so they all stop at the same number;
the formula might then become 1/d * ((n/2)^d) * 2 (I'm not sure, but something similar should be OK).
That's not really the answer to your question, but I hope it will help you find a real one.
