I've been learning sorting algorithms for a couple of days. Presently I'm doing Insertion Sort. The general algorithm is:
void insertionSort(int N, int arr[]) {
    int i, j;
    int value;
    for (i = 1; i < N; i++)
    {
        value = arr[i];
        j = i - 1;
        while (j >= 0 && value < arr[j])
        {
            arr[j+1] = arr[j];
            j = j - 1;
        }
        arr[j+1] = value;
    }
    for (j = 0; j < N; j++)
    {
        printf("%d ", arr[j]);
    }
    printf("\n");
}
Now I've done this:
void print_array(int arr_count, int* arr) {
    int i;
    for (i = 0; i < arr_count; i++) {
        printf("%d ", arr[i]);
    }
    printf("\n");
}

void swap(int* m, int* n) {
    int t = *m;
    *m = *n;
    *n = t;
}

void insertionSort(int arr_count, int* arr) {
    int i, j;
    for (i = 0; i < arr_count; i++) {
        for (j = 0; j < i; j++) {
            if (arr[i] < arr[j]) {
                swap(arr + i, arr + j);
            }
        }
        //if (i != 0)
        //    print_array(arr_count, arr);
    }
    print_array(arr_count, arr);
}
Now, my question is: what's the difference between my custom approach and the traditional approach? Both have O(N^2) complexity.
Please help.
Thanks in advance.
At each iteration, the original code you present moves each element into place by moving elements in a cycle. For an n-element cycle, that involves n+1 assignments.
It is possible to implement Insertion Sort by moving elements with pairwise swaps instead of in larger cycles. It is sometimes taught that way, in fact. This is possible because any permutation (not just cycles) can be expressed as a series of swaps. Implementing an n-element cycle via swaps requires n-1 swaps, and each swap, being a 2-element cycle, requires 2+1 = 3 assignments. For cycles larger than two elements, then, the approach using pairwise swaps does more work, scaling as 3*(n-1) as opposed to n+1. That does not change the asymptotic complexity, however, as you can see by the fact that the exponent of n does not change.
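For concreteness, here is a minimal sketch of the swap-based variant described above (still scanning backward, as in the traditional version); the function name and the standalone swap helper are illustrative choices of mine, not code from the question:

#include <stdio.h>

static void swap(int *a, int *b) {
    int t = *a;
    *a = *b;
    *b = t;
}

/* Insertion sort that moves each element into place with pairwise swaps
   (3 assignments per step) instead of one larger shifting cycle. */
void insertionSortSwaps(int n, int arr[]) {
    int i, j;
    for (i = 1; i < n; i++) {
        for (j = i; j > 0 && arr[j] < arr[j - 1]; j--) {
            swap(&arr[j], &arr[j - 1]);
        }
    }
}

int main(void) {
    int a[] = {5, 2, 4, 6, 1, 3};
    int n = sizeof a / sizeof a[0];
    insertionSortSwaps(n, a);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);
    printf("\n");   /* prints: 1 2 3 4 5 6 */
    return 0;
}

It produces the same result as the shifting version; it simply spends roughly three assignments per step instead of one.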
But note another key difference between the original code and yours: the original code scans backward through the list to find the insertion position, whereas you scan forward. Whether you use pairwise swaps or a larger cycle, scanning backward has the advantage that you can perform the needed reordering as you go, so that once you find the insertion position, you are done. This is one of the things that makes Insertion Sort so good among comparison sorts, and why it is especially fast for inputs that are initially nearly sorted.
Scanning forward means that once you find the insertion position, you've only started. You then have to cycle the elements. As a result, your approach examines every element of the sorted array head on every iteration. Additionally, when it actually performs the reordering, it does a bunch of unneeded comparisons. It could instead use the knowledge that the head of the list started sorted, and just perform a cycle (either way) without any more comparisons. The extra comparisons disguise the fact that the code is just performing the appropriate element cycling at that point (did you realize that?) and it's probably why several people mistook your implementation for a Bubble Sort.
Technically, yours is still an Insertion Sort, but it is an implementation that takes no advantage of the characteristics of the abstract Insertion Sort algorithm that give well-written implementations an advantage over other sorts of the same asymptotic complexity.
The main difference between the insertion sort algorithm and your custom algorithm is the direction of processing. Insertion sort moves the smaller elements in the range one by one to the left side, while your algorithm moves the larger elements in the range one by one to the right side.
Another key difference is the best-case time complexity.
Insertion sort stops as soon as value < arr[j] is no longer satisfied, so it has a best-case complexity of O(n) (when the array is already sorted), while your algorithm always searches from index 0 to j, so it takes O(n^2) steps even when the array is already sorted.
Related
I don't know how to work out its complexity and such. How to know if it is faster than other sorting algorithms?
I find this difficult because I'm a little bit bad at math.
#include <stdio.h>
#include <stdlib.h>   /* for rand() */

void func(int arr[])
{
    int temp;
    int numofarrays = 9999;   /* highest valid index */
    for (int runtime = 1; runtime <= numofarrays / 2; runtime++)
    {
        for (int i = 0; i < numofarrays; i++)   /* keep i+1 within bounds */
        {
            if (arr[i] > arr[i+1])
            {
                temp = arr[i];
                arr[i] = arr[i+1];
                arr[i+1] = temp;
            }
            if (arr[numofarrays-i-1] > arr[numofarrays-i])
            {
                temp = arr[numofarrays-i-1];
                arr[numofarrays-i-1] = arr[numofarrays-i];
                arr[numofarrays-i] = temp;
            }
        }
    }
    for (int i = 0; i <= 9999; i++)
    {
        printf("%i\n", arr[i]);
    }
}

int main()
{
    int arr[10000];
    for (int i = 0; i <= 9999; i++)
    {
        arr[i] = rand() % 10;
    }
    func(arr);
    return 0;
}
Big-O notation describes how the number of steps your code performs grows as the input size goes to infinity. Because we take that limit, we drop the constant coefficients and keep only the largest term.
for (int runtime = 1; runtime <= numofarrays / 2; runtime++)
{
    for (int i = 0; i < numofarrays; i++)
    {
Here, the largest term of the outer loop is n, and the largest term of the inner loop is also n. Since the inner loop makes n iterations for each iteration of the outer loop, we multiply n by n.
Result is O(n^2).
Here:
for (int runtime = 1; runtime <= numofarrays / 2; runtime++)
{
    for (int i = 0; i < numofarrays; i++)
    {
you have two nested for loops that both depend on the array size, so the complexity is O(N^2).
Further you ask:
How to know if it is faster than other sorting algorithms?
Well big-O does not directly tell you about execution time. For some values of N an O(N) implementation may be slower than an O(N^2) implementation.
Big-O tells you how the execution time increases as N increases. So you know that an O(N^2) implementation will be slower than an O(N) implementation once N gets above a specific value, but you can't know what that value is! It could be 1, 5, 10, 100, 1000, ...
Example:
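For instance (a made-up illustration, not numbers measured from your code): suppose one routine is O(N) but costs about 1000*N operations because of heavy per-element bookkeeping, while another is O(N^2) and costs about 2*N^2 operations. The quadratic routine is faster whenever 2*N^2 < 1000*N, i.e. for N < 500; only past that crossover does the linear one win.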
The empirical approach is to time (t) your algorithm for a given input size. Then do that experiment for larger and larger sizes, say 2^k for k = 0 to 20. Plot (2^k, t) on a graph. If you get a straight line, it's a linear algorithm. If you get something that grows faster, try graphing ln(t) and see if you get a line; that would indicate an exponential algorithm, etc.
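As a concrete illustration of that empirical approach, here is a minimal timing-harness sketch; sortToMeasure is just a stand-in insertion sort, so swap in whatever routine you actually want to measure:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* The routine being measured; replace with your own sort. */
static void sortToMeasure(int *a, int n)
{
    for (int i = 1; i < n; i++) {
        int v = a[i], j = i - 1;
        while (j >= 0 && a[j] > v) { a[j + 1] = a[j]; j--; }
        a[j + 1] = v;
    }
}

int main(void)
{
    for (int k = 10; k <= 17; k++) {          /* sizes 2^10 .. 2^17 */
        int n = 1 << k;
        int *a = malloc(sizeof(int) * n);
        for (int i = 0; i < n; i++)
            a[i] = rand();

        clock_t start = clock();
        sortToMeasure(a, n);
        double t = (double)(clock() - start) / CLOCKS_PER_SEC;

        printf("%d %f\n", n, t);              /* plot (n, t) and look at the curve's shape */
        free(a);
    }
    return 0;
}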
The analytical approach looks scary and sometimes math-heavy, but it doesn't really have to be. You decide what your expensive operation is, then you count how many of those you do. Anything that runs in loops, including recursive functions, is the main candidate, since its run-time goes up as the loop runs more times:
Let's say we want to count the number of swaps.
The inner loop does up to numofarrays swaps, which is O(n).
The outer loop runs numofarrays/2 times, which is also O(n); we drop the factor 1/2 because it doesn't matter when n gets large.
This means we do O(n) * O(n) = O(n^2) swaps.
Your print loop does no swaps, so we consider it "free", but if we decide that prints are as expensive as swaps then we want to count those too. Your algorithm does numofarrays print operations, which is O(n).
O(n^2) + O(n) is just O(n^2) as the larger O(n^2) dominates O(n) for sufficiently large n.
I have read sources that say that the time complexities for Selection sort are:
Best-case: O(n^2)
Average-case: O(n^2)
Worst-case: O(n^2)
I was wondering if it is worth it to "optimize" the algorithm by adding a certain line of code to make the algorithm "short-circuit" itself if the remaining part is already sorted.
Here's the code written in C:
I have also added a comment which indicates which lines are part of the "optimization" part.
#include <stdio.h>
#include <stdlib.h>
#include <conio.h>

void printList(int* num, int numElements) {
    int i;
    for (i = 0; i < numElements; i++) {
        printf("%d ", *(num + i));
    }
    printf("\n");
}

int main() {
    int numElements = 0, i = 0, j = 0, min = 0, swap = 0, numSorted = 0;
    printf("Enter number of elements: ");
    scanf("%d", &numElements);
    int* num = malloc(sizeof(int) * numElements);
    for (i = 0; i < numElements; i++) {
        printf("Enter number = ");
        scanf(" %d", num + i);
    }
    for (i = 0; i < numElements - 1; i++) {
        numSorted = i + 1; // "optimized"
        min = i;
        for (j = i + 1; j < numElements; j++) {
            numSorted += *(num + j - 1) <= *(num + j); // "optimized"
            if (*(num + min) > *(num + j))
                min = j;
        }
        if (numSorted == numElements) // "optimized"
            break;
        if (min != i) {
            swap = *(num + i);
            *(num + i) = *(num + min);
            *(num + min) = swap;
        }
        printList(num, numElements);
    }
    printf("Sorted list:\n");
    printList(num, numElements);
    free(num);
    getch();
    return 0;
}
Optimizing selection sort is a little silly. It has awful best-case, average, and worst-case time complexity, so if you want a remotely optimized sort you would (almost?) always pick another sort. Even insertion sort tends to be faster and it's hardly much more complicated to implement.
More to the point, checking whether the list is sorted increases the time the algorithm takes in the worst-case scenarios (the average case too, I'm inclined to think). And even a mostly sorted list will not necessarily go any faster this way: consider 1,2,3,4,5,6,7,9,8. Even though the list only needs two elements swapped at the end, the algorithm will not short-circuit, because the remaining portion is never sorted until the very end.
Just because something can be optimized doesn't necessarily mean it should be. Assuming profiling (or "boss-says-so") indicates optimization is warranted, there are a few things you can do.
As with any algorithm involving iteration over memory, anything that reduces the number of iterations can help.
- keep track of the min AND max values - cuts the number of iterations in half (a sketch follows after this list)
- keep track of multiple min/max values (4 each will be 1/8th the iterations)
  - at some point the temporary values will not fit in registers
  - the code will get more complex
It can also help to maximize cache locality.
- do a backward iteration after the forward iteration
  - the recently accessed memory should still be cached
  - going straight to another forward iteration would cause a cache miss
  - since you are moving backward, the cache predictor may prefetch the rest
  - this could actually be worse on some architectures (RISC-V)
- operate on a cache line at a time where possible
  - this can allow the next cache line to be prefetched in the meantime
  - you may need to align the data or specially handle the first and last data
  - even with increased alignment, the last few elements may need "padding"
Use SIMD instructions and registers where useful and practical.
- useful for a non-branching rank-order sort of temporaries
- they can hold many data points simultaneously (AVX-512 can do a cache line)
- this avoids memory access (and thus fewer cache misses)
If you use multiple max/min values, optimize sorting the n values of max and min.
- see here for techniques to sort a small fixed number of values
- save memory swaps until the end of each iteration and do them once
  - keep temporaries (or pointers) in registers in the meantime
There are quite a few more optimization methods available, but eventually the resemblance to selection sort starts to get foggy. Each of these is going to increase complexity and therefore maintenance cost to the point where a simpler implementation of a more appropriate algorithm may be a better choice.
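As an illustration of the first bullet above (finding both the minimum and the maximum of the unsorted region in a single pass), here is a minimal sketch of a double-ended selection sort; the names and structure are mine, not taken from the question:

#include <stdio.h>

/* Selection sort that finds both the minimum and the maximum of the
   unsorted middle region on each pass, roughly halving the number of passes. */
void doubleSelectionSort(int *a, int n)
{
    int lo = 0, hi = n - 1;
    while (lo < hi) {
        int minIdx = lo, maxIdx = lo;
        for (int i = lo; i <= hi; i++) {
            if (a[i] < a[minIdx]) minIdx = i;
            if (a[i] > a[maxIdx]) maxIdx = i;
        }
        /* Put the minimum at the front of the region. */
        int t = a[lo]; a[lo] = a[minIdx]; a[minIdx] = t;
        /* If the maximum was sitting at lo, it just moved to minIdx. */
        if (maxIdx == lo) maxIdx = minIdx;
        /* Put the maximum at the back of the region. */
        t = a[hi]; a[hi] = a[maxIdx]; a[maxIdx] = t;
        lo++;
        hi--;
    }
}

int main(void)
{
    int a[] = {9, 1, 8, 2, 7, 3, 6, 4, 5};
    int n = sizeof a / sizeof a[0];
    doubleSelectionSort(a, n);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);
    printf("\n");   /* 1 2 3 4 5 6 7 8 9 */
    return 0;
}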
The only way I see to answer this is to define the purpose for which you are optimizing it.
Is it worth it in a professional setting: on the job, for code running "in production"? Most likely (even almost certainly) not.
Is it worth it as a teaching/learning tool? Sometimes, yes.
I teach programming to individuals and sometimes I teach them algorithms and data structures. I consider selection sort to be one of the easiest to explain and teach - it flows so naturally after explaining the algorithm for finding the minimum and swapping two values (swap()). Then, at the end, I introduce the concept of optimization, where we can implement this counter-based "already sorted" detection.
Admittedly, bubble sort is even better for introducing optimization, because it has at least three easy-to-explain and substantial optimizations.
I was wondering if it is worth it to "optimize" the algorithm by adding a certain line of code to make the algorithm "short-circuit" itself if the remaining part is already sorted.
Clearly this change reduces the best-case complexity from O(n^2) to O(n). This will be observed for inputs that are already sorted except for O(1) leading elements. If such inputs are a likely case, then the suggested code change might indeed yield an observable and worthwhile performance improvement.
Note, however, that your change more than doubles the work performed in the innermost loop, and consider that for uniform random inputs, the expected number of outer-loop iterations saved is 1. Consider also that any outer-loop iterations you do manage to trim off will be the ones that otherwise would do the least work. Overall, then, although you do not change the asymptotic complexity, the actual performance in the average and worst cases will be noticeably worse -- runtimes on the order of twice as long.
If you're after better speed then your best bet is to choose a different sorting algorithm. Among the comparison sorts, Insertion Sort will perform about the same as your optimized Selection Sort on the latter's best case, but it has a wider range of best-case scenarios, and will usually outperform (regular) Selection Sort in the average case. How the two compare in the worst case depends on implementation.
If you want better performance still then consider Merge Sort or Quick Sort, both of which are pretty simple to implement. Or if your data are suited to it then Counting Sort is pretty hard to beat.
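For instance, if the values are known to be small non-negative integers, a counting sort is only a few lines; the range bound maxValue below is an assumption about the data, not something given in the question:

#include <stdio.h>
#include <stdlib.h>

/* Counting sort for values known to lie in [0, maxValue].
   Runs in O(n + maxValue) time, with no comparisons at all. */
void countingSort(int *a, int n, int maxValue)
{
    int *count = calloc(maxValue + 1, sizeof *count);
    if (!count) return;                 /* allocation failed; leave input untouched */
    for (int i = 0; i < n; i++)
        count[a[i]]++;
    int k = 0;
    for (int v = 0; v <= maxValue; v++)
        while (count[v]-- > 0)
            a[k++] = v;
    free(count);
}

int main(void)
{
    int a[] = {3, 1, 4, 1, 5, 9, 2, 6};
    countingSort(a, 8, 9);
    for (int i = 0; i < 8; i++) printf("%d ", a[i]);   /* 1 1 2 3 4 5 6 9 */
    printf("\n");
    return 0;
}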
We can optimize selection sort so that the best case is O(n) instead of O(n^2).
Here is my optimization code.
import java.util.Arrays;

public class SelectionSort {

    static void selectionSort(int[] arr) {
        for (int i = 0; i < arr.length; i++) {
            int maxValue = arr[0];
            int maxIndex = 0;
            int cnt = 1;
            for (int j = 1; j < arr.length - i; j++) {
                if (maxValue <= arr[j]) {
                    maxValue = arr[j];
                    maxIndex = j;
                    cnt++;
                }
            }
            if (cnt == arr.length) break;
            arr[maxIndex] = arr[arr.length - 1 - i];
            arr[arr.length - 1 - i] = maxValue;
        }
    }

    public static void main(String[] args) {
        int[] arr = {1, -3, 0, 8, -45};
        selectionSort(arr);
        System.out.println(Arrays.toString(arr));
    }
}
I'm trying to write a program that counts the number of swaps made by insertion sort. My program works on small inputs, but produces the wrong answer on large inputs. I'm also not sure how to use the long int type.
This problem came up in a setting described at https://drive.google.com/file/d/0BxOMrMV58jtmNF9EcUNQNGpreDQ/edit?usp=sharing
Input is given as
The first line contains the number of test cases T. T test cases follow.
The first line for each case contains N, the number of elements to be sorted.
The next line contains N integers a[1],a[2]...,a[N].
Code I used is
#include <stdio.h>
#include <stdlib.h>

int insertionSort(int ar_size, int *ar)
{
    int i, j, temp, count;
    count = 0;
    int n = ar_size;
    for (i = 0; i < n - 1; i++)
    {
        j = i;
        while (j >= 0 && ar[j+1] < ar[j])   /* j >= 0 keeps us from reading ar[-1] */
        {
            temp = ar[j+1];
            ar[j+1] = ar[j];
            ar[j] = temp;
            j--;
            count++;
        }
    }
    return count;
}

int main()
{
    int _ar_size, tc, i, _ar_i;
    scanf("%d", &tc);
    int sum = 0;
    for (i = 0; i < tc; i++)
    {
        scanf("%d", &_ar_size);
        int *_ar = (int *)malloc(sizeof(int) * _ar_size);
        for (_ar_i = 0; _ar_i < _ar_size; _ar_i++)
        {
            scanf("%d", &_ar[_ar_i]);
        }
        sum = insertionSort(_ar_size, _ar);
        printf("%d\n", sum);
        free(_ar);
    }
    return 0;
}
There are two issues that I currently see with the solution you have.
First, there's an issue brought up in the comments about integer overflow. On most systems, the int type can hold numbers up through 2^31 - 1. In insertion sort, the number of swaps that need to be made in the worst case on an array of length n is n(n - 1) / 2 (details later), so for an array of size 2^17, you may end up not being able to store the number of swaps that you need inside an int. To address this, consider using a larger integer type. For example, the uint64_t type can store numbers up to roughly 10^18, which should be good enough to store the answer for arrays up to length around 10^9. You mentioned that you're not sure how to use it, but the good news is that it's not that hard. Just add the line
#include <stdint.h>
(for C) or
#include <cstdint>
(for C++) to the top of your program. After that, you should just be able to use uint64_t in place of int without making any other modifications and everything should work out just fine.
Next, there's an issue of efficiency. The code you've posted essentially runs insertion sort and therefore takes time O(n^2) in the worst case. For large inputs - say, inputs around size 10^8 - this is prohibitively expensive. Amazingly, though, you can actually determine how many swaps insertion sort will make without actually running insertion sort.
In insertion sort, the number of swaps made is equal to the number of inversions that exist in the input array (an inversion is a pair of elements that are out of order). There's a beautiful divide-and-conquer algorithm for counting inversions that runs in time O(n log n), which likely will scale up to work on much larger inputs than just running insertion sort. I think that the "best" answer to this question would be to use this algorithm, while taking care to use the uint64_t type or some other type like it, since it will make your algorithm work correctly on much larger inputs.
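Here is a minimal sketch of that divide-and-conquer inversion count, using uint64_t for the running total; the function names and the sample array are mine, not part of the original problem:

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Merge a[lo..mid) and a[mid..hi) using buf, returning the number of
   inversions between the two halves. */
static uint64_t mergeCount(int *a, int *buf, int lo, int mid, int hi)
{
    uint64_t inv = 0;
    int i = lo, j = mid, k = lo;
    while (i < mid && j < hi) {
        if (a[j] < a[i]) {
            buf[k++] = a[j++];
            inv += (uint64_t)(mid - i);   /* a[j] is smaller than all of a[i..mid) */
        } else {
            buf[k++] = a[i++];
        }
    }
    while (i < mid) buf[k++] = a[i++];
    while (j < hi)  buf[k++] = a[j++];
    for (k = lo; k < hi; k++) a[k] = buf[k];
    return inv;
}

/* Count inversions in a[lo..hi) in O(n log n) time; this equals the number
   of swaps insertion sort would perform. The array ends up sorted. */
static uint64_t countInversions(int *a, int *buf, int lo, int hi)
{
    if (hi - lo < 2) return 0;
    int mid = lo + (hi - lo) / 2;
    uint64_t inv = countInversions(a, buf, lo, mid)
                 + countInversions(a, buf, mid, hi);
    return inv + mergeCount(a, buf, lo, mid, hi);
}

int main(void)
{
    int a[] = {2, 1, 3, 1, 2};
    int n = sizeof a / sizeof a[0];
    int *buf = malloc(sizeof(int) * n);
    printf("%llu\n", (unsigned long long)countInversions(a, buf, 0, n));  /* prints 4 */
    free(buf);
    return 0;
}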
Based on this logic given as an answer on SO to a different (similar) question about removing repeated numbers from an array in O(N) time, I implemented that logic in C, as shown below. But my code does not return only unique numbers. I tried debugging but could not work out the logic well enough to fix it.
#include <stdio.h>

int remove_repeat(int *a, int n)
{
    int i, k;
    k = 0;
    for (i = 1; i < n; i++)
    {
        if (a[k] != a[i])
        {
            a[k+1] = a[i];
            k++;
        }
    }
    return (k+1);
}

int main()
{
    int a[] = {1, 4, 1, 2, 3, 3, 3, 1, 5};
    int n;
    int i;

    n = remove_repeat(a, 9);
    for (i = 0; i < n; i++)
        printf("a[%d] = %d\n", i, a[i]);
    return 0;
}
1] What is incorrect in the above code for removing duplicates?
2] Is there any other O(N) or O(N log N) solution for this problem? What is its logic?
Heap sort in O(n log n) time.
Iterate through in O(n) time replacing repeating elements with a sentinel value (such as INT_MAX).
Heap sort again in O(n log n) to distil out the repeating elements.
Still bounded by O(n log n).
Your code only checks whether an item in the array is the same as its immediate predecessor.
If your array starts out sorted, that will work, because all instances of a particular number will be contiguous.
If your array isn't sorted to start with, that won't work because instances of a particular number may not be contiguous, so you have to look through all the preceding numbers to determine whether one has been seen yet.
To do the job in O(N log N) time, you can sort the array, then use the logic you already have to remove duplicates from the sorted array. Obviously enough, this is only useful if you're all right with rearranging the numbers.
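A minimal sketch of that sort-then-dedup approach, assuming your remove_repeat from the question is compiled alongside; the comparator is mine:

#include <stdio.h>
#include <stdlib.h>

int remove_repeat(int *a, int n);   /* your existing function, unchanged */

static int cmp_int(const void *p, const void *q)
{
    int x = *(const int *)p, y = *(const int *)q;
    return (x > y) - (x < y);
}

int main(void)
{
    int a[] = {1, 4, 1, 2, 3, 3, 3, 1, 5};
    int n = 9;

    qsort(a, n, sizeof a[0], cmp_int);   /* O(N log N): duplicates become adjacent */
    n = remove_repeat(a, n);             /* your adjacent-duplicate removal now works */

    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);             /* 1 2 3 4 5 */
    printf("\n");
    return 0;
}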
If you want to retain the original order, you can use something like a hash table or bit set to track whether a number has been seen yet or not, and only copy each number to the output when/if it has not yet been seen. To do this, we change your current:
if (a[k] != a[i])
    a[k+1] = a[i];
to something like:
if (!hash_find(hash_table, a[i])) {
    hash_insert(hash_table, a[i]);
    a[k+1] = a[i];
}
If your numbers all fall within fairly narrow bounds or you expect the values to be dense (i.e., most values are present) you might want to use a bit-set instead of a hash table. This would be just an array of bits, set to zero or one to indicate whether a particular number has been seen yet.
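A minimal sketch of that idea, using a byte-per-value flag array standing in for a packed bit set, and assuming (purely for illustration) that all values lie in [0, 999]; it keeps the first occurrence of each value and preserves order:

#include <stdio.h>

#define MAX_VALUE 1000   /* assumed bound on the values, for illustration */

/* Remove duplicates in O(N) while preserving order, using a seen[] flag array. */
int remove_repeat_bitset(int *a, int n)
{
    unsigned char seen[MAX_VALUE] = {0};   /* one flag per possible value */
    int k = 0;
    for (int i = 0; i < n; i++) {
        if (!seen[a[i]]) {
            seen[a[i]] = 1;
            a[k++] = a[i];
        }
    }
    return k;   /* new logical length */
}

int main(void)
{
    int a[] = {1, 4, 1, 2, 3, 3, 3, 1, 5};
    int n = remove_repeat_bitset(a, 9);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);   /* 1 4 2 3 5 */
    printf("\n");
    return 0;
}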
On the other hand, if you're more concerned with the upper bound on complexity than the average case, you could use a balanced tree-based collection instead of a hash table. This will typically use more memory and run more slowly, but its expected complexity and worst case complexity are essentially identical (O(N log N)). A typical hash table degenerates from constant complexity to linear complexity in the worst case, which will change your overall complexity from O(N) to O(N2).
Your code would appear to require that the input is sorted. With unsorted inputs as you are testing with, your code will not remove all duplicates (only adjacent ones).
You can get an O(N) solution if the range of the integers is known up front and is smaller than the amount of memory you have :). Make one pass to record which integers are present using auxiliary storage, then another to output the unique values.
Code below is in Java, but hopefully you get the idea.
int[] removeRepeats(int[] a) {
    // Assume these are the integers between 0 and 1000
    Boolean[] v = new Boolean[1000]; // A lazy way of getting a tri-state var (false, true, null)
    for (int i = 0; i < a.length; ++i) {
        v[a[i]] = Boolean.TRUE;
    }
    // v[i] = null => number not seen
    // v[i] = true => number seen
    int[] out = new int[a.length];
    int ptr = 0;
    for (int i = 0; i < a.length; ++i) {
        if (v[a[i]] != null && v[a[i]].equals(Boolean.TRUE)) {
            out[ptr++] = a[i];
            v[a[i]] = Boolean.FALSE;
        }
    }
    // Out now doesn't contain duplicates, order is preserved and ptr represents how
    // many elements are set.
    return out;
}
You are going to need two loops, one to go through the source and one to check each item in the destination array.
You are not going to get O(N).
[EDIT]
The article you linked to suggests a sorted output array, which means the search for duplicates in the output array can be a binary search... which is O(log N).
Your logic is just wrong, so the code is wrong too. Work the logic out by hand before coding it.
I suggest an O(N ln N) approach based on a modification of heapsort.
With heapsort, we go from a[i] to a[n], find the minimum and swap it into a[i], right?
So now the modification: if the minimum is the same as a[i-1], swap the minimum with a[n] and reduce the array's item count by 1.
That should do the trick in O(N ln N).
Your code will work only in particular cases. You're only checking adjacent values, but duplicate values can occur anywhere in the array, so it's wrong in general.
I want to write a program to find the n-th smallest element without using any sorting technique.
Can we do it recursively, divide-and-conquer style like quicksort?
If not, how?
You can find information about that problem here: Selection algorithm.
What you are referring to is the Selection Algorithm, as previously noted. Specifically, your reference to quicksort suggests you are thinking of the partition based selection.
Here's how it works:
Like in Quicksort, you start by picking a good pivot: something that you think is nearly half-way through your list. Then you go through your entire list of items, swapping things back and forth until all the items less than your pivot are at the beginning of the list, and all things greater than your pivot are at the end. Your pivot goes into the leftover spot in the middle.

Normally in a quicksort you'd recurse on both sides of the pivot, but for the Selection Algorithm you'll only recurse on the side that contains the index you are interested in. So, if you want to find the 3rd lowest value, recurse on whichever side contains index 2 (because index 0 is the 1st lowest value).

You can stop recursing when you've narrowed the region to just the one index. At the end, you'll have one unsorted list of the "m-1" smallest objects, and another unsorted list of the "n-m" largest objects. The "m"th object will be in between.
This algorithm is also good for finding a sorted list of the highest m elements... just select the m'th largest element, and sort the list above it. Or, for an algorithm that is a little bit faster, do the Quicksort algorithm, but decline to recurse into regions not overlapping the region for which you want to find the sorted values.
The really neat thing about this is that it normally runs in O(n) time. The first time through, it sees the entire list. On the first recursion, it sees about half, then one quarter, etc. So, it looks at about 2n elements, therefore it runs in O(n) time. Unfortunately, as in quicksort, if you consistently pick a bad pivot, you'll be running in O(n^2) time.
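A minimal sketch of that partition-based selection (often called quickselect); the names are mine, and the pivot here is simply the last element of the range, so the caveat above about consistently bad pivots applies:

#include <stdio.h>

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Partition a[lo..hi] around a[hi]; return the pivot's final index. */
static int partition(int *a, int lo, int hi)
{
    int pivot = a[hi], store = lo;
    for (int i = lo; i < hi; i++) {
        if (a[i] < pivot) {
            swap(&a[i], &a[store]);
            store++;
        }
    }
    swap(&a[store], &a[hi]);
    return store;
}

/* Return the element that would be at index k (0-based) if a[] were sorted. */
int quickselect(int *a, int n, int k)
{
    int lo = 0, hi = n - 1;
    while (lo < hi) {
        int p = partition(a, lo, hi);
        if (p == k) return a[p];
        if (p < k) lo = p + 1;   /* keep only the right side */
        else       hi = p - 1;   /* ... or only the left side */
    }
    return a[lo];
}

int main(void)
{
    int a[] = {7, 2, 9, 4, 1, 5, 8};
    printf("%d\n", quickselect(a, 7, 2));   /* 3rd smallest: 4 */
    return 0;
}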
This task is quite possible to complete within roughly O(n) time (n being the length of the list) by using a heap structure (specifically, a priority queue based on a Fibonacci heap), which gives O(1) insertion time and O(log n) removal time.
Consider the task of retrieving the m-th smallest element from the list. By simply looping over the list and adding each item to the priority queue (of size m), you can effectively create a queue of each of the items in the list in O(n) time (or possibly fewer using some optimisations, though I'm not sure this is exceedingly helpful). Then, it is a straightforward matter of removing the element with lowest priority in the queue (highest priority being the smallest item), which only takes O(log m) time in total, and you're finished.
So overall, the time complexity of the algorithm would be O(n + log n), but since log n << n (i.e. n grows a lot faster than log n), this reduces to simply O(n). I don't think you'll be able to get anything significantly more efficient than this in the general case.
You can use a binary heap if you don't want to use a Fibonacci heap.
Algorithm:
Construct a min binary heap from the array; this operation takes O(n) time.
Since this is a min binary heap, the element at the root is the minimum value.
So keep removing the element at the root until you get your k-th minimum value; reading the root is an O(1) operation.
Make sure that after every removal you restore the heap, an O(log n) operation.
So the running time here is O(n) + O(k log n), i.e. O(n + k log n).
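A minimal sketch of that approach (heapify in O(n), then pop the root repeatedly); the function names are my own:

#include <stdio.h>

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Sift the element at index i down in a min-heap of size n. */
static void siftDown(int *h, int n, int i)
{
    for (;;) {
        int smallest = i, l = 2 * i + 1, r = 2 * i + 2;
        if (l < n && h[l] < h[smallest]) smallest = l;
        if (r < n && h[r] < h[smallest]) smallest = r;
        if (smallest == i) break;
        swap(&h[i], &h[smallest]);
        i = smallest;
    }
}

/* Return the k-th smallest element (k = 1 for the minimum).
   The array is reordered in the process. Runs in O(n + k log n). */
int kthSmallest(int *a, int n, int k)
{
    /* Build a min-heap bottom-up: O(n). */
    for (int i = n / 2 - 1; i >= 0; i--)
        siftDown(a, n, i);

    /* Pop the root k-1 times, restoring the heap each time: O(k log n). */
    for (int popped = 0; popped < k - 1; popped++) {
        a[0] = a[n - 1];   /* move the last element to the root */
        n--;
        siftDown(a, n, 0);
    }
    return a[0];           /* the root is now the k-th smallest */
}

int main(void)
{
    int a[] = {7, 2, 9, 4, 1, 5, 8};
    printf("%d\n", kthSmallest(a, 7, 3));   /* 3rd smallest: 4 */
    return 0;
}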
Two stacks can be used like this to locate the Nth smallest number in one pass.
Start with empty Stack-A and Stack-B
PUSH the first number into Stack-A
The next number onwards, choose to PUSH into Stack-A only if the number is smaller than its top
When you have to PUSH into Stack-A, run through these steps
While TOP of Stack-A is larger than new number, POP TOP of Stack-A and push it into Stack-B
When Stack-A goes empty or its TOP is smaller than new number, PUSH in the new number and restore the contents of Stack-B over it
At this point you have inserted the new number to its correct (sorted) place in Stack-A and Stack-B is empty again
If Stack-A depth is now sufficient you have reached the end of your search
I generally agree with Noldorin's optimization analysis.
This stack solution aims for a simple scheme that works (with relatively more data movement, back and forth across the two stacks). The heap scheme reduces the fetch of the Nth smallest number to a tree traversal (log m).
If your target is an optimal solution (say for a large set of numbers or maybe for a programming assignment, where optimization and the demonstration of it are critical) you should use the heap technique.
The stack solution can be compressed in space requirements by implementing the two stacks within the same space of K elements (where K is the size of your data set). So, the downside is just extra stack movement as you insert.
Here is the answer to find the Kth smallest element from an array:
#include <stdio.h>
#include <conio.h>
#include <cstdlib>   // for malloc
#include <iostream>
using namespace std;

int Nthmin = 0, j = 0, i;
int GetNthSmall(int numbers[], int NoOfElements, int Nthsmall);

int main()
{
    int size;
    cout << "Enter Size of array\n";
    cin >> size;
    int *arr = (int*)malloc(sizeof(int) * size);
    cout << "\nEnter array elements\n";
    for (i = 0; i < size; i++)
        cin >> *(arr + i);
    cout << "\n";
    for (i = 0; i < size; i++)
        cout << *(arr + i) << " ";
    cout << "\n";
    int result = GetNthSmall(arr, size, 3);
    printf("Result = %d", result);
    getch();
    return 0;
}

int GetNthSmall(int numbers[], int NoOfElements, int Nthsmall)
{
    int min = numbers[0];
    while (j < Nthsmall)
    {
        Nthmin = numbers[0];
        for (i = 1; i < NoOfElements; i++)
        {
            if (j == 0)
            {
                if (numbers[i] < min)
                {
                    min = numbers[i];
                }
                Nthmin = min;
            }
            else
            {
                if (numbers[i] < Nthmin && numbers[i] > min)
                    Nthmin = numbers[i];
            }
        }
        min = Nthmin;
        j++;
    }
    return Nthmin;
}
The simplest way to find the nth largest element in an array without using any sorting methods.
public static void kthLargestElement() {
    int[] a = { 5, 4, 3, 2, 1, 9, 8 };
    int n = 3;
    int max = a[0], min = a[0];
    for (int i = 0; i < a.length; i++) {
        if (a[i] < min) {
            min = a[i];
        }
        if (a[i] > max) {
            max = a[i];
        }
    }
    int max1 = max, c = 0;
    for (int i = 0; i < a.length; i++) {
        for (int j = 0; j < a.length; j++) {
            if (a[j] > min && a[j] < max) {
                max = a[j];
            }
        }
        min = max;
        max = max1;
        c++;
        if (c == (a.length - n)) {
            System.out.println(min);
        }
    }
}