Sorting in O(n) without any temp variable - c

I need to design an algorithm that sorts an array containing only the numbers -1, 0, and 1, without using any temp variable or extra array, using only swapping. I have come up with the following method, but I'm not sure whether it is O(n).
#include <stdio.h>

int main(void)
{
    int list[] = {-1, 0, -1, 0, 1, 1, 0, 1};
    int size = sizeof(list) / sizeof(list[0]);

    for (int i = 1; i < size; i++) {
        if (list[i] < list[i - 1]) {
            /* swap adjacent elements without a temp variable */
            list[i] = list[i] + list[i - 1];
            list[i - 1] = list[i] - list[i - 1];
            list[i] = list[i] - list[i - 1];
            i = 0;   /* restart the scan from the beginning */
        }
    }

    printf("Sorted array is...\n");
    for (int i = 0; i < size; i++) {
        printf("%d\n", list[i]);
    }
    return 0;
}

The algorithm is definitely not O(n).
You are setting i to 0 when you do a swap. At worst, it is O(n^2).

The reason your algorithm is not O(n) has been stated correctly by @RSahu: you are resetting the counter to 0, which means you can do as many as 1 + 2 + ... + n iterations, i.e. O(n^2) in the worst case.
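To see the quadratic behaviour concretely, here is a small counting sketch (the test input and the steps counter are additions for illustration, not part of the original code):
#include <stdio.h>

int main(void)
{
    /* a bad case for the restart strategy: all +1s first, then -1s */
    int list[] = {1, 1, 1, 1, 0, 0, -1, -1};
    int size = sizeof(list) / sizeof(list[0]);
    long steps = 0;                 /* counts outer-loop iterations */

    for (int i = 1; i < size; i++) {
        steps++;
        if (list[i] < list[i - 1]) {
            list[i] = list[i] + list[i - 1];
            list[i - 1] = list[i] - list[i - 1];
            list[i] = list[i] - list[i - 1];
            i = 0;                  /* restart from the beginning */
        }
    }

    printf("n = %d, iterations = %ld\n", size, steps);
}
The iteration count grows much faster than n as the array gets longer, which is exactly the 1 + 2 + ... + n behaviour described above.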
Here is a small example that processes the array in linear time:
#include <iostream>
#include <array>
using namespace std;

int main() {
    array<int, 10> A{-1, 0, -1, 0, 1, 1, 0, 1, 0, -1};
    int i = 0, j = 0, k = 9;
    while (j <= k) {
        if (A[j] == 0) {
            ++j;
        }
        else if (A[j] == -1) {
            swap(A[i], A[j]);
            ++i; ++j;
        }
        else {
            swap(A[j], A[k]);
            --k;
        }
    }
    for (auto ai : A)
        cout << ai << " ";
    cout << endl;
}
How does it work? We maintain three indexes i, j and k with the invariants that:
all items in the range [0, i) are -1
all items in the range [i, j) are 0
all items in the range (k, n-1] are +1
where [ or ] denotes an inclusive bound, and ( or ) an exclusive bound.
Initially
i = j = 0 and k = n-1. All three ranges are empty, so the invariants hold trivially.
First case
if (A[j] == 0) {
    ++j;
}
The value of A[j] is 0, so we can increment j and the invariants still hold.
Second case
else if (A[j] == -1) {
    swap(A[i], A[j]);
    ++i; ++j;
}
Since i is the exclusive upper bound of the -1 range, the swap puts a -1 at position i and we must increment i. If the range [i, j) was not empty, a 0 has been moved to position j and we must increment j as well. If the range was empty, then i == j, and since we increment i we must also increment j to keep the ranges well formed. Either way, the invariants still hold after this step.
Third case
else {
    swap(A[j], A[k]);
    --k;
}
Here A[j] is +1, so we can swap it with the value at A[k] and decrement k. The +1 now sits just past the new k, and the element brought to position j has not been classified yet and will be examined on the next iteration, so the invariants still hold.
Termination and correctness
The final point is proving that the program terminates. Each iteration either:
- increments j, or
- decrements k.
So the distance k - j decreases by 1 on every iteration.
That distance is initially n-1, and the loop runs only while j <= k, i.e. while k - j >= 0, so there are at most n iterations. Each iteration does at most one swap, so there are at most n swaps.
When the loop exits we have j == k + 1, and the invariants give a fully sorted array:
from 0 to i excluded, all -1
from i to j excluded, all 0
from j to n excluded, all +1

Related

Check whether there exists an index k such that the elements of array A[] moved clockwise make a reverse bitonic array

Check whether there exists an index 0 <= k < n - 2 such that the elements of array A[], moved clockwise by k indexes, form a reverse bitonic array.
My approach, aiming for O(n) time complexity:
#include <stdbool.h>

// Returns whether there is an index k such that, after moving the
// elements of A[] clockwise by k positions, the array is reverse
// bitonic: strictly decreasing, then strictly increasing.
bool is_antibitonicable(int A[], int n) {
    if (n < 3)
        return false;

    // is_increasing[i] == 1 means the i-th run of A[] is increasing,
    // == 0 means that run is decreasing, == -1 is the unused default
    int is_increasing[3] = { -1, -1, -1 };

    for (int i = 0, j; i < n - 1;) {
        if (A[i] < A[i + 1]) {          // increasing run
            j = 0;
            while (j < 3 && is_increasing[j] != -1)
                j++;
            if (j == 3)
                return false;
            is_increasing[j] = 1;
            while (i < n - 1 && A[i] < A[i + 1])
                i++;
        }
        else if (A[i] > A[i + 1]) {     // decreasing run
            j = 0;
            while (j < 3 && is_increasing[j] != -1)
                j++;
            if (j == 3)
                return false;
            is_increasing[j] = 0;
            while (i < n - 1 && A[i] > A[i + 1])
                i++;
        }
        else // equal neighbours: neither increasing nor decreasing
            return false;
    }

    // if A[] is only increasing/decreasing
    if (is_increasing[1] == is_increasing[2])
        return false;
    // if A[] is increasing->decreasing->increasing, check if the increasing
    // parts can be merged into one increasing sequence
    if (is_increasing[0] == 1 && is_increasing[1] == 0 && is_increasing[2] == 1)
        return (A[0] > A[n - 1]);
    // decreasing->increasing->decreasing
    if (is_increasing[0] == 0 && is_increasing[1] == 1 && is_increasing[2] == 0)
        return (A[0] < A[n - 1]);

    return true; // increasing -> decreasing or the opposite
}
I'd be very glad if someone could look at my solution and comment on whether it is correct or how it could be done better; any feedback will be appreciated.
Your solution doesn't look bad, but it does incorrectly return false in the // if A[] is only increasing/decreasing case. Such a sequence can always be turned into a first-decreasing-then-increasing one by rotating it by one position in the appropriate direction, as the sketch below illustrates.
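For instance, [1, 2, 3, 4] rotated clockwise by one index gives [4, 1, 2, 3], which is strictly decreasing then strictly increasing. A minimal check (the helper is_reverse_bitonic below is my own illustration, not from the question) confirms it:
#include <stdio.h>
#include <stdbool.h>

/* returns true if a[] is strictly decreasing then strictly increasing */
static bool is_reverse_bitonic(const int a[], int n) {
    int i = 0;
    while (i < n - 1 && a[i] > a[i + 1]) i++;   /* decreasing prefix */
    while (i < n - 1 && a[i] < a[i + 1]) i++;   /* increasing suffix */
    return i == n - 1;
}

int main(void) {
    /* [1,2,3,4] rotated clockwise by one index */
    int rotated[] = {4, 1, 2, 3};
    printf("%s\n", is_reverse_bitonic(rotated, 4) ? "reverse bitonic" : "not");
}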

efficient way to check if an array has all integers between 0 and n-1

Regarding my previous post (Complexity to find if there is a missing element in an array): I am trying to write an algorithm that checks whether an array contains all the integers between 0 and n - 1 in the most time-efficient way, without using an extra array. I came up with two solutions. Could you help me determine which one is more efficient, and which one I should turn in? Thank you.
/* first attempt */
int check_missing_a(int *a, int n)
{
    int i, flag = 0;
    for (i = 0; i < n; i++)
    {
        if (a[i] < 0 || a[i] >= n)           // check for unwanted integers
            return 0;
        while (i != a[i])
        {
            swap(&a[a[i]], &a[i]);           // puts numbers at their index
            flag++;
            if (flag > 1 && a[i] == a[a[i]]) // check for duplicates
                return 0;
        }
        flag = 0;
    }
    return 1;
}
/* second attempt */
int check_missing_b(int *a, int n)
{
    int i, sum_a = 0, sum_i = 0, sum_aa = 0, sum_ii = 0;
    for (i = 0; i < n; i++)
    {
        if (a[i] < 0 || a[i] >= n)  // check for unwanted integers
            return 0;
        else
        {
            sum_a += a[i];          // sum of elements should equal the sum of the indexes
            sum_i += i;
            sum_aa += a[i] * a[i];  // sum of squared elements should equal the sum of squared indexes
            sum_ii += i * i;
        }
    }
    return (sum_aa == sum_ii && sum_a == sum_i);
}
First of all, we need to fix check_missing_a because it's buggy: after the swap, a[i] might be out of bounds for the following access a[a[i]]. Fixed version:
int check_missing_a2(int *a, int n) {
    for (int i = 0; i < n; ++i) {
        while (i != a[i]) {
            if (a[i] < i || a[i] >= n || a[i] == a[a[i]])
                return 0;
            swap(&a[i], &a[a[i]]);
        }
    }
    return 1;
}
We can even save a few comparisons as follows (thanks to @chmike):
int check_missing_a2(int *a, int n)
{
    for (int i = 0; i < n; ++i)
        if (a[i] < 0 || a[i] >= n)
            return 0;
    for (int i = 0; i < n; ++i) {
        while (i != a[i]) {
            if (a[i] == a[a[i]])
                return 0;
            swap(&a[i], &a[a[i]]);
        }
    }
    return 1;
}
Complexity of check_missing_a2
At first glance, one might think that check_missing_a2 is slower than O(N) because the outer loop does N passes and there's another inner loop.
However, the inner loop performs at most N-1 swaps in total. For example, the following table shows, for N=8, how many of the 8! arrangements of the numbers 0..N-1 need each number of swaps:
# swaps   # arrangements
-------   --------------
   0                 1
   1                28
   2               322
   3              1960
   4              6769
   5             13132
   6             13068
   7              5040
As @4386427 explained, every swap places at least one element in its correct position. Consequently there can't be more than N swaps.
This means that no part of the function is executed more than 2*N times, for a resulting complexity of O(N).
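The table can be reproduced with a small brute-force harness (my own addition, not part of the answer) that runs the swap loop over all 8! arrangements of 0..7 and tallies the swap counts:
#include <stdio.h>

#define N 8

static long histogram[N];   /* histogram[s] = number of arrangements needing s swaps */

/* count the swaps the placement loop performs on a copy of p[] */
static int count_swaps(const int p[])
{
    int a[N], swaps = 0;
    for (int i = 0; i < N; i++) a[i] = p[i];
    for (int i = 0; i < N; i++) {
        while (i != a[i]) {
            int t = a[i];
            a[i] = a[t];
            a[t] = t;
            swaps++;
        }
    }
    return swaps;
}

/* enumerate all permutations of 0..N-1 by picking each unused value in turn */
static void enumerate(int p[], int used[], int pos)
{
    if (pos == N) {
        histogram[count_swaps(p)]++;
        return;
    }
    for (int v = 0; v < N; v++) {
        if (!used[v]) {
            used[v] = 1;
            p[pos] = v;
            enumerate(p, used, pos + 1);
            used[v] = 0;
        }
    }
}

int main(void)
{
    int p[N], used[N] = {0};
    enumerate(p, used, 0);
    for (int s = 0; s < N; s++)
        printf("%d swaps: %6ld arrangements\n", s, histogram[s]);
}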
Complexity of check_missing_b
A single loop with N passes, for a complexity of O(N).
As for actual performance, I suspect that check_missing_a2 will always be faster than check_missing_b.
Of course, there's also the issue that check_missing_a2 changes the array and that check_missing_b could overflow.
Function check_missing_b is definitely O(n) because it has only one loop. It also has the property of not modifying the input array a. However, it limits the magnitude of n, because sum_ii might overflow.
Function check_missing_a has two loops and is less obvious to analyze. It also sorts the values in the array a and thus modifies the input array, which might be a problem. On the other hand, it is not subject to overflow, which is an advantage over the other function.
The second loop behaves like an in-place cycle sort: each swap puts a value in its final place, so there are fewer than n swaps in total. The function is thus O(n).
Unfortunately, this function also has a problem: a value a[i] may be out of the range [0, n), in which case the access a[a[i]] reads out of bounds. The function therefore needs two passes over the elements: a first pass ensures that no value is smaller than 0 or bigger than n-1, and a second pass does the actual reordering.
Here is my suggested implementation of the function check_missing_a.
int check_missing_c(int *a, int n)
{
    for (int i = 0; i < n; i++)
        if (a[i] < 0 || a[i] >= n)
            return 0;
    for (int i = 0; i < n; i++)
        while (i != a[i]) {
            if (a[i] == a[a[i]])
                return 0;
            int tmp = a[i];
            a[i] = a[tmp];
            a[tmp] = tmp;
        }
    return 1;
}
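A quick sanity check (the test arrays and the main below are mine; note that the function permutes its input array):
#include <stdio.h>

/* check_missing_c as defined above */

int main(void)
{
    int ok[]  = {3, 0, 1, 4, 2};      /* a permutation of 0..4 */
    int bad[] = {3, 0, 1, 4, 4};      /* 2 is missing, 4 is duplicated */

    printf("%d\n", check_missing_c(ok, 5));   /* prints 1 */
    printf("%d\n", check_missing_c(bad, 5));  /* prints 0 */
}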

Get the distinct elements from an array when comparing with another array, in linear time (without using collections)

INPUT: 2 sorted arrays (array a and array b)
OUTPUT: Elements in array a that are not in array b
Constraints: linear time, without the use of collections
This is my attempt thus far in Java:
static void findMissing(int a[], int b[], int n, int m) {
    for (int i = 0; i < n; i++) {
        int j;
        for (j = 0; j < m; j++)
            if (a[i] == b[j])
                break;
        if (j == m)
            System.out.print(a[i] + " ");
    }
}
Your attempt does not use the condition that both arrays a and b are sorted, which is why it takes quadratic time. A linear solution needs two pointers i and j, i for array a and j for array b. Start with i = 0 and j = 0 and compare a[i] with b[j]. There are 3 cases to consider:
a[i] == b[j]: a[i] is present in b, so increment both i and j by 1.
a[i] < b[j]: since both arrays are sorted and every unprocessed element of b is at least b[j], a[i] will never find a match. Output a[i] and increment i by 1.
a[i] > b[j]: similarly, b[j] cannot match a[i] or any later element of a, so increment j by 1.
If there are still unprocessed elements in a when all elements in b have been processed, output all these remaining elements in a as they are not in b.
Assuming you want to output duplicated elements multiple times if they don't exist in b, the following code implements this two pointer algorithm.
public class Solution {
    public static void findMissing(int[] a, int[] b, int n, int m) {
        int i = 0, j = 0;
        while (i < n && j < m) {
            if (a[i] == b[j]) {
                i++;
                j++;
            }
            else if (a[i] < b[j]) {
                System.out.print(a[i] + " ");
                i++;
            }
            else {
                j++;
            }
        }
        while (i < n) {
            System.out.print(a[i] + " ");
            i++;
        }
    }

    public static void main(String[] args) {
        int[] a = {1, 3, 5, 7, 8, 9, 10}, b = {3, 5, 7, 9, 11};
        findMissing(a, b, 7, 5);
    }
}
If you only want to output duplicated elements once, then each time i or j is advanced, check whether the next element is the same as the previous one; if it is, keep incrementing until the adjacent elements differ or you reach the end of the array. All the rest of the logic stays the same; a sketch of this variant follows.
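Here is that skip-duplicates variant, written as a C sketch for brevity (the function name find_missing_unique and the sample data are my own; the two-pointer logic is the same as in the Java version above):
#include <stdio.h>

/* prints elements of a[] that are not in b[], each distinct value once;
   both arrays must be sorted in ascending order */
static void find_missing_unique(const int a[], int n, const int b[], int m)
{
    int i = 0, j = 0;
    while (i < n && j < m) {
        if (a[i] == b[j]) {
            int v = a[i];
            while (i < n && a[i] == v) i++;   /* skip duplicates in a */
            while (j < m && b[j] == v) j++;   /* skip duplicates in b */
        } else if (a[i] < b[j]) {
            int v = a[i];
            printf("%d ", v);
            while (i < n && a[i] == v) i++;   /* print each value once */
        } else {
            int v = b[j];
            while (j < m && b[j] == v) j++;
        }
    }
    while (i < n) {
        int v = a[i];
        printf("%d ", v);
        while (i < n && a[i] == v) i++;
    }
    printf("\n");
}

int main(void)
{
    int a[] = {1, 1, 3, 5, 5, 8, 10};
    int b[] = {3, 5, 7, 9, 11};
    find_missing_unique(a, 7, b, 5);   /* prints: 1 8 10 */
}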

Design the binomial coefficient algorithm using a single dimensional array

I have already designed the following algorithm that determines the binomial coefficient using a two dimensional array. For example, to calculate the binomial coefficient of n choose k, we can create a two dimensional array like so:
int[][] arr = new int[n + 1][k + 1];
We can populate the array in the following way:
for (int i = 0; i <= n; i++) {
    for (int j = 0; j <= minimum(i, k); j++) {
        if (j == 0 || i == j) {
            arr[i][j] = 1;
        } else {
            arr[i][j] = arr[i - 1][j - 1] + arr[i - 1][j];
        }
    }
}
However, I need to redesign this algorithm to use a one dimensional array from indexes 0-k. I am having a lot of trouble pinpointing how to do this. I have started in small steps, and realized some common occurrences:
If k = 0, arr[0] will be 1, and that will be returned regardless of n.
If k = 1, arr[0] will be 1, arr[1] should be n, if I'm designing it in a loop.
When I say k = 2, this is where it gets tricky, because the value of arr[2] will really depend on the previous values. I believe that as I loop (say from i = 0 to i = n), the values of arr[] will change but I can't quite grasp how. I've started with something along these lines:
for (int i = 0; i <= n; i++) {
    for (int j = 0; j <= minimum(i, k); j++) {
        if (j == 0 || i == j) {
            arr[j] = 1;
        } else if (j == 1) {
            arr[j] = i;
        } else {
            arr[j] = ??; // I can't access previous values, because I didn't record them?
        }
    }
}
How should I handle this?
Here is code that uses only a single one-dimensional array:
int[] coefficients = new int[k + 1];
coefficients[0] = 1;
for (int i = 1; i <= n; i++) {
    for (int j = k; j >= 1; j--) {
        coefficients[j] += coefficients[j - 1];
    }
}
Why is it correct? To compute coefficients[j] for row i, we need coefficients[j] and coefficients[j - 1] as they were for row i - 1. Because we iterate j from k down to 1, position j - 1 has not been overwritten yet when we update position j, so every value we still need is the old one.
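For reference, here is the same idea as a complete, runnable C sketch (the wrapper name binomial, the size cap, and the test values are my own):
#include <stdio.h>

#define MAX_K 64

/* computes C(n, k) with a single one-dimensional array of size k+1 */
static long long binomial(int n, int k)
{
    long long c[MAX_K + 1] = {0};
    if (k < 0 || k > MAX_K || k > n)
        return 0;
    c[0] = 1;                       /* C(i, 0) = 1 for every i */
    for (int i = 1; i <= n; i++)
        for (int j = k; j >= 1; j--)
            c[j] += c[j - 1];       /* C(i, j) = C(i-1, j) + C(i-1, j-1) */
    return c[k];
}

int main(void)
{
    printf("C(5, 2)  = %lld\n", binomial(5, 2));   /* prints 10 */
    printf("C(10, 3) = %lld\n", binomial(10, 3));  /* prints 120 */
}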

Will this modified partitioning algorithm in quicksort function the same as the original in all situations?

The code given in class for QuickSort partitioning procedure had two inner loops with an
empty body. Suppose we rewrite these loops by moving the increment/decrement from the
test into the body of the loop, and by modifying the initial values of the indexes accordingly.
The original and modified partitioning procedures are as follows:
int partition(int A[], int l, int r)
{
    int pivot = A[l];
    int i = l, j = r + 1;
    while (i < j)
    {
        while (A[--j] > pivot);
        while (A[++i] < pivot);
        if (i < j) swap(A[i], A[j]);
    }
    swap(A[l], A[j]);
    return j;
}
int partition(int A[], int l, int r)
{
    int pivot = A[l];
    int i = l + 1, j = r;
    while (i < j)
    {
        while (A[j] > pivot) j--;
        while (A[i] < pivot) i++;
        if (i < j) swap(A[i], A[j]);
    }
    swap(A[l], A[j]);
    return j;
}
Does the modified partitioning procedure work correctly in all situations? (ignore the
glitch of i running off the array when the pivot is the maximal element). Hint: consider what
happens when the current subarray contains at least two other keys equal to the pivot.
The modified partitioning procedure gets into an infinite loop when the subarray contains at least two other values that are equal to the pivot.
Let's consider a case where we have:
int A[] = { 3, 3, 1, 3 };
And we call:
partition(A, 0, 3);
On the first iteration of the outer while loop, i is 1 and j is 3:
3   3   1   3
    ^       ^
    i       j
Consider the first test:
while (A[j] > pivot) j--;
It is not true that 3 is greater than 3, so j doesn't get decremented.
Now the second test:
while (A[i] < pivot) i++;
Similarly, it is not true that 3 is less than 3, so i doesn't get incremented.
When A[i] is swapped with A[j], the array doesn't change because the values at i and j are the same.
The loop starts a new iteration because i is still less than j. Because nothing has changed since the previous iteration, the loop will go through the same tests and do the same thing, which amounts to nothing. Thus the infinite loop.
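By contrast, the original (pre-increment) version cannot loop forever on this input, because --j and ++i each advance the indexes at least one position per outer iteration. A small C harness (the swap helper and the main function are my additions) shows it terminating on the same array:
#include <stdio.h>

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* the original partitioning procedure from the question,
   with an explicit pointer-based swap helper */
static int partition(int A[], int l, int r)
{
    int pivot = A[l];
    int i = l, j = r + 1;
    while (i < j)
    {
        while (A[--j] > pivot);
        while (A[++i] < pivot);
        if (i < j) swap(&A[i], &A[j]);
    }
    swap(&A[l], &A[j]);
    return j;
}

int main(void)
{
    int A[] = {3, 3, 1, 3};
    int p = partition(A, 0, 3);
    printf("pivot position: %d, array: %d %d %d %d\n", p, A[0], A[1], A[2], A[3]);
    /* terminates and prints: pivot position: 2, array: 1 3 3 3 */
}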
