Bubble Sort Outer Loop and N-1 - c

I've read multiple posts on Bubble Sort, but still have difficulty verbalizing why my code works, particularly with respect to the outer loop.
for (int i = 0; i < (n - 1); i++)
{
    for (int j = 0; j < (n - i - 1); j++)
    {
        if (array[j] > array[j + 1])
        {
            int temp = array[j];
            array[j] = array[j + 1];
            array[j + 1] = temp;
        }
    }
}
For any array of length n, at most n-1 pairwise comparisons of adjacent elements are possible. That said, if we stop at i < n-1, we never see the final element. If, in the worst case, the array's elements (I'm thinking of ints here) are in reverse order, we cannot assume the final element is already in its proper place. So, if the outer loop never examines the final array element, how can this possibly work?

Array indexing runs from 0 to n-1, so if there are 10 elements in the array, the last index is 9. In the first pass of the inner loop, n-1 comparisons take place, and that first pass of bubble sort bubbles the largest number up to its final position.
In the next pass, n-1-1 comparisons take place and the second-largest value is bubbled up to its place, and so on until the whole array is sorted.
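To see those passes concretely, here is a small self-contained C sketch (the sample values are arbitrary and mine) that prints the array after every iteration of the outer loop; after pass i, the last i positions already hold their final values, which is why n-1 passes suffice and the remaining first element is sorted by elimination:
#include <stdio.h>

int main(void)
{
    int array[] = { 5, 1, 4, 2, 8 };
    int n = sizeof(array) / sizeof(array[0]);

    for (int i = 0; i < (n - 1); i++)
    {
        for (int j = 0; j < (n - i - 1); j++)
        {
            if (array[j] > array[j + 1])
            {
                int temp = array[j];
                array[j] = array[j + 1];
                array[j + 1] = temp;
            }
        }
        /* After pass i, the last i + 1 positions hold their final values. */
        printf("after pass %d:", i + 1);
        for (int k = 0; k < n; k++)
            printf(" %d", array[k]);
        printf("\n");
    }
    return 0;
}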

In this line you are accessing one element ahead of the current position of j:
array[j + 1];
In the first iteration of the outer loop you run j from 0 to j < (n - 0 - 1), so the largest index you can reach through array[j + 1] is n - 1. If you declare your array as array[n], that is exactly its last element.

n is typically the number of elements in your array, so with 10 elements the indices run from 0 to 9. You would not want to access array[10], as reading outside the array bounds is undefined behaviour (often a segfault), hence the use of n - 1 in the loop condition. In C, when writing and calling a function that iterates over an array, the size of the array is also passed as a parameter.

Here n means the number of elements. The loop index starts at 0 and ranges from 0 to n-1, so we visit n elements; every element is traversed.

Related

Selection sort: What is n-1?

void selectionSort(int arr[], int n);
void swap(int *xp, int *yp);

int main() {
    int arr[] = { 64, 25, 12, 22, 11 };
    int n = sizeof(arr) / sizeof(arr[0]);
    selectionSort(arr, n);
    return 0;
}

void selectionSort(int arr[], int n) {
    int i, j, min_idx;
    // One by one move boundary of unsorted subarray
    for (i = 0; i < n - 1; i++) {
        // Find the minimum element in unsorted array
        min_idx = i;
        for (j = i + 1; j < n; j++)
            if (arr[j] < arr[min_idx])
                min_idx = j;
        // Swap the found minimum element with the first element
        swap(&arr[min_idx], &arr[i]);
    }
}

// swap helper (not shown in the original snippet)
void swap(int *xp, int *yp) {
    int temp = *xp;
    *xp = *yp;
    *yp = temp;
}
I have seen this C code for the sorting algorithm called selection sort, but my question is about the selectionSort function.
Why, in the first for loop, is the condition i < n - 1, whereas in the second loop it is j < n?
What does i < n - 1 do exactly, and why is the condition different in the second loop? Can you please explain this code to me like I'm a sixth grader? Thank you.
The first loop only has to iterate up to index n-2 (thus i < n-1) because the second for loop checks positions i+1 up to n-1 (thus j < n). If i were allowed to reach n - 1, the inner loop would start at j = n and its condition j < n would fail immediately, so that last outer iteration would do nothing useful.
You could think of this implementation of selection sort as moving from left to right over the array, always leaving a sorted array on its left. That's why the second for loop starts visiting elements from index i+1.
You could find many resources online to visualize how selection sort works, e.g., Selection sort in Wikipedia
The implementation on Wikipedia is annotated and explains it.
/* advance the position through the entire array */
/* (could do i < aLength-1 because single element is also min element) */
for (i = 0; i < aLength-1; i++)
Selection sort works by finding the smallest element and swapping it in place. When there's only one unsorted element left it is the smallest unsorted element and it is at the end of the sorted array.
For example, let's say we have {3, 5, 1}.
i = 0 {3, 5, 1} // swap 3 and 1
       ^
i = 1 {1, 5, 3} // swap 5 and 3
          ^
i = 2 {1, 3, 5} // swap 5 and... 5?
             ^
For three elements we only need two swaps. For n elements we only need n-1 swaps.
It's an optimization which might improve performance a bit on very small arrays, but otherwise inconsequential in an O(n^2) algorithm like selection sort.
Why is the condition in the first for loop i < n-1, but in the second loop j < n?
The loop condition for the inner loop is j < n because the index of the last element to be sorted is n - 1, so that when j >= n we know that it is past the end of the data.
The loop condition for the outer loop could have been i < n, but observe that no useful work would then be done on the iteration when i took the value n - 1. The initial value of j in the inner loop is i + 1, which in that case would be n. Thus no iterations of the inner loop would be performed.
But no useful work is not the same as no work at all. On every outer-loop iteration in which i took the value n - 1, some bookkeeping would be performed, and arr[i] would be swapped with itself. Stopping the outer loop one iteration sooner avoids that guaranteed-useless extra work.
All of this is directly related to the fact that no work needs to be expended to sort a one-element array.
Here is the logic of these nested loops:
for each position i in the array
find the smallest element of the slice starting at this position extending to the end of the array
swap the smallest element and the element at position i
The smallest element of the 1 element slice at the end of the array is obviously already in place, so there is no need to run the last iteration of the outer loop. That's the reason for the outer loop to have a test i < n - 1.
Note however that there is a nasty pitfall in this test: if instead of int we use size_t for the type of the index and count of elements in the array, which is more correct as arrays can have more elements nowadays than the range of type int, i < n - 1 would be true for i = 0 and n = 0 because n - 1 is not negative but the largest size_t value which is huge. In other words, the code would crash on an empty array.
It would be safer to write:
for (i = 0; i + 1 < n; i++) { ...
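To see the wraparound concretely, here is a small standalone sketch (mine, assuming C99's %zu and <stddef.h>) that compares the two conditions for n == 0:
#include <stdio.h>
#include <stddef.h>

/* With unsigned size_t, n - 1 wraps around to SIZE_MAX when n == 0,
   so "i < n - 1" holds and the loop body would index an empty array.
   "i + 1 < n" avoids the subtraction entirely. */
int main(void)
{
    size_t n = 0;

    if ((size_t)0 < n - 1)
        printf("i < n - 1 is true for n == 0 (n - 1 wrapped to %zu)\n", n - 1);

    if (!((size_t)0 + 1 < n))
        printf("i + 1 < n is false for n == 0, as intended\n");

    return 0;
}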

Need help proving loop invariant (simple bubble sort, partial correctness)

The bubble-sort algorithm (pseudo-code):
Input: Array A[1...n]
for i <- n,...,2 do
    for j <- 2,...,i do
        if A[j - 1] >= A[j] then
            swap the values of A[j-1] and A[j];
I think my proof works, but it is overly convoluted. Could you help me clean it up?
Loop-invariant: After each iteration i, the n - i + 1 greatest
elements of A are in the positions they would occupy were A sorted
non-descendingly. In the case that array A contains more than one
maximal value, let the greatest element be the one with the smallest index
of all the possible maximal values.
Induction-basis (i = n): The inner loop iterates over every element of
A. Eventually, j points to the greatest element. This value will be
swapped until it reaches position i = n, which is the highest position
in array A and hence the final position for the greatest element of A.
Induction-step: (i = m -> i = m - 1, for all m with 3 <= m <= n): The inner loop
iterates over every element of A. Eventually, j points to the greatest
element of the ones not yet sorted. This value will be swapped until
it reaches position i = m - 1, which is the highest position of the
positions not-yet-sorted in array A and hence the final position for
the greatest not-yet-sorted element of A.
After the algorithm was fully executed, the remaining element at
position 1 is also in its final position because were it not, the
element to its right side would not be in its final position, which is
a contradiction. Q.E.D.
I'd be inclined to recast your proof in the following terms:
Bubble sort A[1..n]:
for i in n..2
    for j in 2..i
        swap A[j - 1], A[j] if they are not already in order
Loop invariant:
    let P(i) <=> for all k s.t. i < k <= n. A[k] = max(A[1..k])
Base case:
    initially i = n and the invariant P(n) is trivially satisfied.
Induction step:
    assuming the invariant P(m + 1) holds, show that after the inner
    loop executes for i = m + 1, the invariant P(m) holds.
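If it helps to connect the invariant back to running code, here is a rough C sketch (0-based indices instead of the pseudo-code's A[1..n], arbitrary sample data, and an illustrative max_prefix helper of my own) that asserts P after every pass of the outer loop:
#include <assert.h>
#include <stdio.h>

/* Maximum of a[0..k], used only to check the invariant. */
static int max_prefix(const int *a, int k)
{
    int m = a[0];
    for (int t = 1; t <= k; t++)
        if (a[t] > m)
            m = a[t];
    return m;
}

int main(void)
{
    int a[] = { 4, 2, 5, 1, 3 };
    int n = (int)(sizeof a / sizeof a[0]);

    for (int b = n - 1; b >= 1; b--) {          /* i <- n..2 in the pseudo-code */
        for (int j = 1; j <= b; j++) {          /* j <- 2..i in the pseudo-code */
            if (a[j - 1] >= a[j]) {
                int tmp = a[j - 1];
                a[j - 1] = a[j];
                a[j] = tmp;
            }
        }
        /* Invariant: every position from b to n-1 already holds its final value. */
        for (int k = b; k < n; k++)
            assert(a[k] == max_prefix(a, k));
    }

    for (int k = 0; k < n; k++)
        printf("%d ", a[k]);
    printf("\n");
    return 0;
}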

How do you reorganize an array within O(n) runtime & O(1) space complexity?

I'm a 'space-complexity' neophyte and was given a problem.
Suppose I have an array of arbitrary integers:
[1,0,4,2,1,0,5]
How would I reorder this array to have all the zeros at one end:
[1,4,2,1,5,0,0]
...and compute the count of non-zero integers (in this case: 5)?
... in O(n) runtime with O(1) space complexity?
I'm not good at this.
My background is more environmental engineering than computer science so I normally think in the abstract.
I thought I could do a sort, then count the non-zero integers.
Then I thought I could merely do an element-by-element copy as I rearrange the array.
Then I thought of something like a bubble sort, swapping neighboring elements until the zeroes reached the end.
I thought I could save on the space complexity by shifting array members' addresses, given that the array pointer points to the array with offsets to its members.
Every idea either improves the runtime at the expense of the space complexity or vice versa.
What's the solution?
A two-pointer approach will solve this task within the time and memory constraints.
Start by placing one pointer at the end, another at the start of the array. Then decrement the end pointer until you see the first non-zero element.
Now the main loop:
If the start pointer points to zero, swap it with the value pointed to by the end pointer; then decrement the end pointer.
Always increment the start pointer.
Finish when the start pointer becomes greater than or equal to the end pointer.
Finally, return the position of the start pointer - that's the number of nonzero elements.
This is the Swift code for the smart answer provided by #kfx
func putZeroesToLeft(inout nums: [Int]) {
    guard var firstNonZeroIndex: Int = (nums.enumerate().filter { $0.element != 0 }).first?.index else { return }
    for index in firstNonZeroIndex..<nums.count {
        if nums[index] == 0 {
            swap(&nums[firstNonZeroIndex], &nums[index])
            firstNonZeroIndex += 1
        }
    }
}
Time complexity
There are 2 simple (not nested) loops repeated max n times (where n is the length of input array). So time is O(n).
Space complexity
Besides the input array we only use the firstNonZeroIndex Int variable. So the space is definitely constant: O(1).
As indicated by the other answers, the idea is to have two pointers, p and q, one pointing at the beginning of the array and the other at the end (specifically at the last nonzero entry). Scan the array with p; each time you hit a 0, swap the elements pointed to by p and q and decrement q (that is, make it point to the next nonzero entry from behind); increment p on every step; iterate as long as p < q.
In C++, you could do something like this:
#include <vector>
#include <utility>  // std::swap

void rearrange(std::vector<int>& v) {
    int p = 0, q = v.size() - 1;
    // make q point to the right position
    while (q >= 0 && !v[q]) --q;
    while (p < q) {
        if (!v[p]) { // found a zero element
            std::swap(v[p], v[q]);
            while (q >= 0 && !v[q]) --q; // make q point to the right position
        }
        ++p;
    }
}
Start at the far end of the array and work backwards. First scan until you hit a nonzero (if any) and keep track of its location. Keep scanning: whenever you encounter a zero, swap it with the element at the tracked location and move that location one step left; otherwise increase the count of nonzeros.
A Python implementation:
def consolidateAndCount(nums):
    count = 0
    # first locate the last nonzero
    i = len(nums) - 1
    while i >= 0 and nums[i] == 0:
        i -= 1
    if i < 0:
        # no nonzeros encountered
        return 0
    count = 1  # since a nonzero was encountered
    for j in range(i - 1, -1, -1):
        if nums[j] == 0:
            # move to end
            nums[j], nums[i] = nums[i], nums[j]  # swap is constant space
            i -= 1
        else:
            count += 1
    return count
For example:
>>> nums = [1,0,4,2,1,0,5]
>>> consolidateAndCount(nums)
5
>>> nums
[1, 5, 4, 2, 1, 0, 0]
The suggested answers with two pointers and swapping change the order of the non-zero array elements, which conflicts with the example provided. (Although the OP doesn't state that restriction explicitly, so maybe it is irrelevant.)
Instead, go through the list from left to right and keep track of the number of 0s encountered so far.
Set counter = 0 (zeros encountered so far).
In each step, do the following:
Check if the current element is 0 or not.
If the current element is 0, increment the counter.
Otherwise, move the current element counter positions to the left.
Go to the next element.
When you reach the end of the list, overwrite the values from array[end-counter] to the end of the array with 0s.
The number of non-zero integers is the size of the array minus the counted zeros.
This algorithm has O(n) time complexity, as we go through the whole array at most twice (in the worst case of an array of all 0s; the update scheme could be tweaked to make exactly one pass). It only uses one additional counting variable, which satisfies the O(1) space constraint.
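A minimal C sketch of this counter-and-shift idea, assuming the non-zero order must be preserved (the function name compact_nonzeros and the sample values are mine):
#include <stdio.h>

/* Shift every non-zero element left by the number of zeros seen so far,
   then zero-fill the tail.  Returns the count of non-zero elements. */
static int compact_nonzeros(int *a, int n)
{
    int zeros = 0;

    for (int i = 0; i < n; i++) {
        if (a[i] == 0)
            zeros++;
        else
            a[i - zeros] = a[i];   /* move left by the zeros counted so far */
    }
    for (int i = n - zeros; i < n; i++)
        a[i] = 0;                  /* overwrite the freed tail with zeros */

    return n - zeros;
}

int main(void)
{
    int a[] = { 1, 0, 4, 2, 1, 0, 5 };
    int n = (int)(sizeof a / sizeof a[0]);
    int count = compact_nonzeros(a, n);

    printf("non-zero count: %d\n", count);   /* 5 */
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);                 /* 1 4 2 1 5 0 0 */
    printf("\n");
    return 0;
}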
Iterate over the array with an index (say i) while maintaining a count of the zeros encountered so far (say zero_count).
At each step, look at the element at index i + zero_count. If it is 0, do not advance i; instead increment zero_count.
Otherwise, copy the value at index i + zero_count to the current index i and advance i.
Terminate the loop when i + zero_count reaches the array length.
Set the remaining array elements to 0.
Pseudo code:
zero_count = 0;
i = 0;
while i + zero_count < arr.length
    if (arr[i + zero_count] == 0) {
        zero_count++;
    } else {
        arr[i] = arr[i + zero_count];
        i++;
    }
while i < arr.length
    arr[i] = 0;
    i++;
Additionally, this also preserves the order of the non-zero elements in the array.
You can actually solve a more general problem called the Dutch national flag problem, which is used in quicksort. It partitions an array into three parts according to a given mid value: first all numbers less than mid, then all numbers equal to mid, and then all numbers greater than mid.
Here you can treat 0 as if it were infinity and pick mid as that same infinity, so every non-zero value ends up before the zeros.
The pseudocode given by the above link:
procedure three-way-partition(A : array of values, mid : value):
    i ← 0
    j ← 0
    n ← size of A - 1
    while j ≤ n:
        if A[j] < mid:
            swap A[i] and A[j]
            i ← i + 1
            j ← j + 1
        else if A[j] > mid:
            swap A[j] and A[n]
            n ← n - 1
        else:
            j ← j + 1
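As a rough illustration of the "treat 0 as infinity" trick, here is a direct C translation of that pseudocode; the key() helper and function names are mine, and an input that actually contains INT_MAX would need a different sentinel:
#include <stdio.h>
#include <limits.h>

static int key(int x)
{
    return x == 0 ? INT_MAX : x;   /* pretend 0 is +infinity */
}

static void three_way_partition(int *a, int len, int mid)
{
    int i = 0, j = 0, n = len - 1;

    while (j <= n) {
        if (key(a[j]) < mid) {
            int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
            i++; j++;
        } else if (key(a[j]) > mid) {
            int tmp = a[j]; a[j] = a[n]; a[n] = tmp;
            n--;
        } else {
            j++;
        }
    }
}

int main(void)
{
    int a[] = { 1, 0, 4, 2, 1, 0, 5 };
    int len = (int)(sizeof a / sizeof a[0]);

    three_way_partition(a, len, INT_MAX);  /* mid = "infinity" */
    for (int i = 0; i < len; i++)
        printf("%d ", a[i]);               /* non-zeros first, zeros last */
    printf("\n");
    return 0;
}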

Smallest Lexicographic Subsequence of size k in an Array

Given an array of integers, find the lexicographically smallest subsequence of size k.
EX: Array: [3,1,5,3,5,9,2], k = 4
Expected solution: 1 3 5 2
The problem can be solved in O(n) by maintaining a double-ended queue (deque). We iterate over the elements from left to right and ensure that the deque always holds the lexicographically smallest sequence up to that point. We should only pop off an element if the current element is smaller than the element at the back of the deque and the elements in the deque plus those remaining to be processed still number at least k.
#include <deque>
#include <vector>
using namespace std;

vector<int> smallestLexo(vector<int> s, int k) {
    deque<int> dq;
    for (int i = 0; i < s.size(); i++) {
        while (!dq.empty() && s[i] < dq.back() && (dq.size() + (s.size() - i - 1)) >= k) {
            dq.pop_back();
        }
        dq.push_back(s[i]);
    }
    return vector<int>(dq.begin(), dq.end());
}
Here is a greedy algorithm that should work:
Choose Next Number ( lastChoosenIndex, k ) {
    minNum = Find out what is the smallest number from lastChoosenIndex to ArraySize-k
    //Now we know this number is the best possible candidate to be the next number.
    lastChoosenIndex = earliest possible occurrence of minNum after lastChoosenIndex
    //do the same process for k-1
    Choose Next Number ( lastChoosenIndex, k-1 )
}
The algorithm above has high complexity.
But we can pre-sort all the array elements paired with their array index and do the same process greedily using a single loop.
Since we used sorting, the complexity will still be n*log(n).
Ankit Joshi's answer works. But I think it can be done with just a vector itself, not using a deque, as all the operations used are available on a vector too. Also, in Ankit Joshi's answer the deque can contain extra elements; we have to manually pop off those elements before returning. Add these lines before returning.
while(dq.size() > k)
{
    dq.pop_back();
}
It can be done with RMQ in O(n) + Klog(n).
Construct an RMQ in O(n).
Now find the sequence in which the ith element is the smallest number in positions [x(i-1)+1, n-(K-i)] (for i in [1, K], where x(0) = 0 and x(i) is the position at which the ith element of the answer was found in the given array).
If I've understood the question right, here's a DP Algorithm that should work but it takes O(NK) time.
//k is the given size and n is the size of the array
create an array dp[k+1][n+1]
initialize the first column with the maximum integer value (we'll need it later)
and the first row with 0's (keep element dp[0][0] = 0)
now run the loop while building the solution
for(int i=1; i<=k; i++) {
    for(int j=1; j<=n; j++) {
        //if the number of elements in the array is less than the size required (K)
        //initialize it with the maximum integer value
        if( j < i ) {
            dp[i][j] = MAX_INT_VALUE;
        } else {
            //last minimum of size k-1 with present element or last minimum of size k
            dp[i][j] = minimum(dp[i-1][j-1] + arr[j-1], dp[i][j-1]);
        }
    }
}
//it contains the solution
return dp[k][n];
The last cell of the table, dp[k][n], contains the solution.
I suggest trying a modified merge sort. The place to modify is the
merge step: discard duplicate values there.
Then select the smallest four.
The complexity is O(n log n).
I'm still thinking about whether the complexity can be O(n).

BUBBLE SORT Help in C: line-based questions from a source code

http://www.sanfoundry.com/c-program-sorting-bubble-sort/
My questions are:
At line 28: Why num - i - 1?
At line 30: what does the if condition mean? Especially, why j + 1?
How do I display the elements of the sorted array randomly, in neither ascending nor descending order?
How to differ in the displayed random numbers?
At line 30 the code accesses index j+1; that is why the condition is j < num-i-1, so that an out-of-bounds access will not occur.
You can change the logic as follows:
for (i = 0; i < num; i++)
{
    for (j = 1; j < (num - i); j++)
    {
        if (array[j-1] > array[j])
        {
            temp = array[j];
            array[j] = array[j-1];
            array[j-1] = temp;
        }
    }
}
If you want to display the array in a random way, then there is no need for sorting, because there is already no order in the array.
You can write your own logic to display the array elements randomly.
1) Because we're going to be adding one to the index, we can't loop up to num - i or we'd overshoot.
2) That adds one to the index in order to look at, and compare, the j:th and the (j + 1):th elements.
3) That doesn't make a lot of sense. If you want to display it randomly, don't sort it. "Shuffle" is the term for randomizing an array.
4) No idea what you mean there.
1) num is the number of numbers to sort, i is the number of elements already sorted (at the end of the array), and the extra 1 accounts for the element at j + 1 that each comparison looks at.
In the loop it compares the elements array[j] and array[j+1] to see if they need to be swapped. The loop should stop when j is the second-to-last index to test and j+1 is the last. This is why j starts at 0 and the loop stops when j reaches num - i - 1.
2) It compares the number at position j with the number at position j+1. If you wrote the numbers in a line from left to right, this would be a comparison between the number at position j and the number that follows it (to its right).
3) You would have to create an array of element indices, initialized with the values 0 to num-1. You would then iterate over this array and swap each entry with another entry chosen at random; to do this, pick a random number between 0 and num-1. This shuffles the index array (see the sketch after this answer).
To print the numbers in random order, use the values stored in the index array as the indices of the numbers to display. This ensures the order is random, that each number is displayed only once, and that all numbers are displayed.
4) I don't understand that question.
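For point 3, a minimal C sketch (sample values and names are mine) that builds an index array, shuffles it with Fisher-Yates, and prints the elements in that order:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    int array[] = { 5, 1, 4, 2, 8, 7, 3, 6 };
    int num = (int)(sizeof array / sizeof array[0]);
    int idx[sizeof array / sizeof array[0]];

    srand((unsigned)time(NULL));

    for (int i = 0; i < num; i++)
        idx[i] = i;                      /* index array 0 .. num-1 */

    for (int i = num - 1; i > 0; i--) {  /* Fisher-Yates shuffle */
        int r = rand() % (i + 1);
        int tmp = idx[i];
        idx[i] = idx[r];
        idx[r] = tmp;
    }

    for (int i = 0; i < num; i++)
        printf("%d ", array[idx[i]]);    /* display in shuffled order */
    printf("\n");
    return 0;
}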
