How to group data in an array in C

I was trying to make a non-preemptive longest-job-first scheduler. I know the algorithm for SJF.
At first I sorted the arrays according to the arrival time:
for(int i=0; i<noOfProcesses; i++)
{
    for(int j=0; j<noOfProcesses; j++)
    {
        if(arrivalTime[i]<arrivalTime[j])
        {
            temp=processID[j];
            processID[j]=processID[i];
            processID[i]=temp;

            temp=arrivalTime[j];
            arrivalTime[j]=arrivalTime[i];
            arrivalTime[i]=temp;

            temp=burstTime[j];
            burstTime[j]=burstTime[i];
            burstTime[i]=temp;
        }
    }
}
Then I was trying to make the scheduler. I got it working for shortest job first, but I can't make it work for longest job first.
Processes whose arrival time is less than the burst time of the current process are to be put into the queue in descending order of burst time.
for(int j=0; j<n; j++)
{
    btime=btime+burstTime[j];
    int min=burstTime[k];
    for(i=k; i<n; i++)
    {
        if (btime>=arrivalTime[i] && burstTime[i]<min)
        {
            temp=processID[k];
            processID[k]=processID[i];
            processID[i]=temp;

            temp=arrivalTime[k];
            arrivalTime[k]=arrivalTime[i];
            arrivalTime[i]=temp;

            temp=burstTime[k];
            burstTime[k]=burstTime[i];
            burstTime[i]=temp;
        }
    }
    k++;
}
This code works for selecting processes in ascending order of burst time, but I can't make it work for descending order.
Example:

Process ID   Arrival Time   Burst Time
    1            11             22
    2            12             24
    3            14             28
    4            22             44

This may be taken as a sample input.
Process ID   Arrival Time   Burst Time
    1            11             22
    2            14             28
    3            12             24
    4            22             44

This will be the output, as process 1 will run from time 11 to 33.
Meanwhile, processes 2 and 3 will have arrived. Process 1 will finish at 11+22=33, so the CPU will have two choices to run, i.e. process 2 and process 3. Even though process 2 arrived before process 3, the CPU will select process 3 next because it has the higher burst time; the code as written selects process 2 because it has the lower burst time.
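For the descending-order selection itself, the usual change is to track the largest burst time seen so far instead of the smallest, and to swap when a larger one is found. Below is a minimal sketch of that change only; it keeps the question's loop structure and arrival test unchanged (so whatever quirks there are in how btime is accumulated are inherited from the code above), and it also refreshes the comparison value after each swap:

for (int j = 0; j < n; j++)
{
    btime = btime + burstTime[j];
    int max = burstTime[k];                  /* longest burst seen so far among candidates */
    for (int i = k; i < n; i++)
    {
        /* consider only processes that have already arrived,
           and prefer the one with the LARGER burst time */
        if (btime >= arrivalTime[i] && burstTime[i] > max)
        {
            max = burstTime[i];              /* keep the comparison value current */

            temp = processID[k];
            processID[k] = processID[i];
            processID[i] = temp;

            temp = arrivalTime[k];
            arrivalTime[k] = arrivalTime[i];
            arrivalTime[i] = temp;

            temp = burstTime[k];
            burstTime[k] = burstTime[i];
            burstTime[i] = temp;
        }
    }
    k++;
}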

Related

How do I synchronize threads similarly to a barrier, but with the caveat that the number of threads is dynamic?

I have tried to create a function that works similarly to a barrier function, except that it can handle the number of active threads changing. (I can't seem to get it to work by destroying and reinitializing the barrier whenever a thread exits the function loop, either.)
My issue is that I can't get my replacement function to run properly, i.e. the program softlocks for some reason.
So far nothing I've tried has worked to ensure that the threads are synchronized and the program doesn't softlock.
I've tried using barriers, and I've tried making the exiting threads enter the barrier wait as well, to help with the barriers (but I couldn't figure out how to avoid softlocking with the exiting threads, as I always ended up with some thread(s) invariably being left inside the barrier_wait function).
This is my replacement function for the pthread_barrier_wait function:

void SynchThreads()
{
    pthread_mutex_lock(&lock);
    if (threadsGoingToWait < maxActiveThreads)
    {
        threadsGoingToWait++;
        pthread_cond_signal(&condVar2);
        pthread_cond_wait(&condVar1, &lock);
    }
    else
    {
        threadsGoingToWait = 1;
        pthread_cond_broadcast(&condVar1);
    }
    pthread_mutex_unlock(&lock);
}
To change the value of maxActiveThreads, I have the threads do the following before they exit the function loop:
pthread_mutex_lock(&tlock);
maxActiveThreads--;
if (maxActiveThreads > 0)
{
    pthread_cond_wait(&condVar2, &tlock);
    pthread_cond_broadcast(&condVar1);
}
else
    pthread_cond_broadcast(&condVar2);
pthread_mutex_unlock(&tlock);
I have the pthread variables initialized before the thread creation like this:
pthread_barrier_init(&barrier, NULL, maxActiveThreads);
pthread_mutex_init(&lock, NULL);
pthread_mutex_init(&tlock, NULL);
pthread_cond_init(&condVar1, NULL);
pthread_cond_init(&condVar2, NULL);
I have no clue why the program is softlocking right now, since as far as I know, as long as there's at least one thread either remaining or in the waiting field, it should release the other threads from the cond_wait they're in.
Edit:
If I remove condVar2 from being used, and instead end the function loop with a barrier_wait, the program no longer softlocks; however, it still doesn't behave as if it's being synchronized properly.
To give some more detail about what I'm working on: I'm trying to make a sequential Gaussian elimination function parallel. The issues I've had so far are that either the matrix has the wrong values, or the vectors have the wrong values, or they all have the wrong values. I was hoping that having synchronization points distributed as follows would fix the synchronization errors:
static void* gauss_par(void* params)
{
    /* getting the threads and the related data */
    for (int k = startRow; k < N; k += threadCount) /* Outer loop */
    {
        SynchThreads();
        /* Division step */
        SynchThreads();
        /* Vector y and matrix diagonal */
        SynchThreads();
        for (int i = k+1; i < N; i++)
        {
            /* Elimination step */
            SynchThreads();
            /* Vector b and matrix zeroing */
            SynchThreads();
        }
    }
}
As a preliminary, I see &lock in your SynchThreads() and &tlock in your other code snippet. These almost certainly do not go with each other, because proper protection via a mutex relies on all threads involved using the same mutex to guard access to the data in question. I'm having trouble coming up with a way, in C, that the expressions &lock and &tlock could evaluate to pointers of type pthread_mutex_t *, pointing to the same object. Unless maybe one of lock and tlock were a macro expanding to the other, which would be nasty.
With that said,
I have tried to create a function that works similarly to a barrier function, except that it can handle the number of active threads changing. (I can't seem to get it to work by destroying and reinitializing the barrier whenever a thread exits the function loop, either.)
Destroying and reinitializing the (same) barrier object should work if you can ensure that no thread is ever waiting at the barrier when you do so or arrives while you are doing so. But in practice, that's often a difficult condition to ensure.
Barriers are a somewhat specialized synchronization tool, whereas the Swiss army knife of thread synchronization is the condition variable. This appears to be the direction you've taken, and it's probably a good one. However, I suspect you're looking at CVs from a functional perspective rather than a semantic one. This leads to all sorts of issues.
The functional view is along these lines: a condition variable allows threads to block until another thread signals them.
The semantic view is this: a condition variable allows threads to block until some testable condition is satisfied. Generally, that condition is a function of one or more shared variables, and access to those is protected by the same mutex as is used with the CV. The same CV can be used by different threads at the same time to wait for different conditions, and that can make sense when the various conditions are all related to the same data.
The semantic view guides you better toward appropriate usage idioms, and it helps you come to the right answers on questions about who should signal or broadcast to the CV, under what conditions, and even whether to use a CV at all.
Diagrammatically, the basic usage pattern for a CV wait is this:
                  |
++++++++++++++++++|+++++++++++++++++++++++++++++++
+                 V              CRITICAL REGION +
+            +- - - - - +                        +
+            : optional :                        +
+            + - - - - -+                        +
+                 |                              +
+                 V                              +
+   +-----------------------------+              +
+   | Is the condition satisfied? | <-+          +
+   +-----------------------------+   |          +
+        |                |           |          +
+        | Yes            | No        | on waking+
+        V                V           |          +
+   +- - - - - +     +---------+      |          +
+   : optional :     | CV wait |------+          +
+   + - - - - -+     +---------+                 +
+        |                                       +
+++++++++|++++++++++++++++++++++++++++++++++++++++
         V
In particular, since the whole point is that threads don't proceed past the wait unless the condition is satisfied, it is essential that they check after waking whether they should, in fact, proceed (a short code sketch of this idiom follows the list below). This protects against three things:
the thread was awakened via a signal / broadcast even though the condition it is waiting for was not satisfied;
the thread was awakened via a signal / broadcast but by the time it gets a chance to run, the condition it was waiting for is no longer satisfied; and
the thread woke despite the CV not having been signaled / broadcasted to (so there's no reason to expect the condition to be satisfied, though it's OK to continue if it happens to be satisfied anyway).
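In code, that post-wakeup re-check is simply a loop around the wait. A generic sketch of the idiom (the mutex, condition variable, and predicate names here are placeholders, not taken from the question's code):

pthread_mutex_lock(&mutex);
while (!condition_is_satisfied())        /* re-check the predicate after every wakeup */
{
    pthread_cond_wait(&cv, &mutex);      /* atomically releases the mutex and blocks; */
                                         /* the mutex is re-acquired before returning */
}
/* the condition holds here, and the mutex is still held */
pthread_mutex_unlock(&mutex);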
With that in mind, let's consider your pseudo-barrier function. In English, the condition for a thread passing through would be something like "all currently active threads have (re-)reached the barrier since the last time it released." I take maxActiveThreads to be the number of currently active threads (and thus, slightly misnamed). There are some simple but wrong ways to implement that condition, most of them falling over in the event that a thread passes through the barrier and then returns to it before some of the other threads have passed through. A simple counter of waiting threads is not enough, given threads' need to check the condition before proceeding.
One thing you can do is switch which variable you use for a waiter count between different passages through the barrier. That might look something like this:
int wait_count[2];
int wait_counter_index;
void cycleBarrier() {
// this function is to be called only while holding the mutex locked
wait_counter_index = !wait_counter_index; // flip between 0 and 1
wait_count[wait_counter_index] = 0;
}
void SynchThreads() {
pthread_mutex_lock(&lock);
// A thread-specific copy of the wait counter number upon arriving
// at the barrier
const int my_wait_index = wait_counter_index;
if (++wait_count[my_wait_index] < maxActiveThreads) {
do {
pthread_cond_wait(&condVar1, &lock);
} while (wait_count[my_wait_index] < maxActiveThreads);
} else {
// This is the thread that crested the barrier, and the first one
// through it; prepare for the next barrier passage
assert(wait_counter_index == my_wait_index);
cycleBarrier();
}
pthread_mutex_unlock(&lock);
}
That flip-flops between two waiter counts, so that each thread can check whether the count for the barrier passage it is trying to make has been reached, while also providing for threads arriving at the next barrier passage.
Do note also that although this can accommodate the number of threads decreasing, it needs more work if you need to accommodate the number of threads increasing. If the thread count is increased at a point where some threads have passed the barrier but others are still waiting to do so then the ones still waiting will erroneously think that their target thread count has not yet been reached.
Now let's consider your strategy when modifying maxActiveThreads. You've put a wait on condVar2 in there, so what condition does a thread require to be satisfied in order to proceed past that wait? The if condition guarding it suggests maxActiveThreads <= 0 would be the condition, but surely that's not correct. In fact, I don't think you need a CV wait here at all. Acquiring the mutex in the first place is all a thread ought to need to do to be able to reduce maxActiveThreads.
However, having modified maxActiveThreads does mean that other threads waiting for a condition based on that variable might be able to proceed, so broadcasting to condVar1 is the right thing to do. On the other hand, there's no particular need to signal / broadcast to a CV while holding its associated mutex locked, and it can be more efficient to avoid doing so. Also, it's cheap and safe to signal / broadcast to a CV that has no waiters, and that can allow some code simplification.
Lastly, what happens if all the other threads have already arrived at the barrier? In the above code, it is the one to crest the barrier that sets up for the next barrier passage, and if all the other threads have reached the barrier already, then that would be this thread. That's why I introduced the cycleBarrier() function -- the code that reduces maxActiveThreads might need to cycle the barrier, too.
That leaves us with this:
pthread_mutex_lock(&lock);
maxActiveThreads--;
if (wait_count[wait_counter_index] >= maxActiveThreads) {
    cycleBarrier();
}
pthread_mutex_unlock(&lock);
pthread_cond_broadcast(&condVar1);
Finally, be well aware that all these mutex and cv functions can fail. For robustness, a program must check their return values and take appropriate action in the event of failure. I've omitted all that for clarity and simplicity, but I am rigorous about error checking in real code, and I advise you to be, too.
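For illustration, a call site might check for failure along these lines (a sketch only, assuming <stdio.h>, <string.h>, and <stdlib.h> are included; pthreads functions return an error number rather than setting errno, and what "appropriate action" means is application-specific):

int rc = pthread_mutex_lock(&lock);
if (rc != 0)
{
    /* e.g. EINVAL or EDEADLK; report it and bail out rather than
       touching the shared data unprotected */
    fprintf(stderr, "pthread_mutex_lock: %s\n", strerror(rc));
    abort();
}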
An Ada language solution to this problem is shown below. This example implements a producer-consumer pattern supporting N producers and M consumers. The test of the solution implements one producer and 5 consumers, but the solution is the same for any positive number of producers and consumers, up to the limits of your hardware.
This example is implemented in three files. The first file is the package specification, which defines the interface to the producer and consumer task types. The second file is the package body which contains the implementation of the task types and their shared buffer. The shared buffer is implemented as an Ada protected object. Ada protected objects are passive elements of concurrency which implicitly control their own locking and unlocking based on the kind of methods implemented.
This example uses only protected entries which acquire an exclusive read-write lock on the object only when their guard condition evaluates to true. The guard conditions are implicitly evaluated at the end of each entry. Tasks suspended because the guard condition on an entry evaluates to FALSE are placed in an entry queue with FIFO ordering by default.
package specification:
package multiple_consumers is

   task type producer is
      entry Set_Id(Id : Positive);
      entry stop;
   end producer;

   task type consumer is
      entry Set_Id(Id : Positive);
      entry stop;
   end consumer;

end multiple_consumers;
package body:
with Ada.Text_IO; use Ada.Text_IO;

package body multiple_consumers is

   type Index_Type is mod 10;
   type Buffer_Type is array (Index_Type) of Integer;

   ------------
   -- Buffer --
   ------------

   protected Buffer is
      entry Enqueue (Item : in Integer);
      entry Dequeue (Item : out Integer);
   private
      Buff        : Buffer_Type;
      Write_Index : Index_Type := 0;
      Read_Index  : Index_Type := 0;
      Count       : Natural    := 0;
   end Buffer;

   protected body Buffer is

      entry Enqueue (Item : in Integer) when Count < Buff'Length is
      begin
         Buff (Write_Index) := Item;
         Write_Index := Write_Index + 1;
         Count := Count + 1;
      end Enqueue;

      entry Dequeue (Item : out Integer) when Count > 0 is
      begin
         Item := Buff (Read_Index);
         Read_Index := Read_Index + 1;
         Count := Count - 1;
      end Dequeue;

   end Buffer;

   --------------
   -- producer --
   --------------

   task body producer is
      Value : Integer := 1;
      Me    : Positive;
   begin
      accept Set_Id (Id : in Positive) do
         Me := Id;
      end Set_Id;
      loop
         select
            accept stop;
            exit; -- Exit loop
         else
            select
               Buffer.Enqueue (Value);
               Put_Line
                 (" Producer" & Me'Image & " produced " & Value'Image);
               Value := Value + 1;
            or
               delay 0.001;
            end select;
         end select;
      end loop;
      Put_Line ("Producer" & Me'Image & " stopped.");
   end producer;

   --------------
   -- consumer --
   --------------

   task body consumer is
      My_Value : Integer;
      Me       : Positive;
   begin
      accept Set_Id (Id : Positive) do
         Me := Id;
      end Set_Id;
      loop
         select
            accept stop;
            exit; -- exit loop
         else
            select
               Buffer.Dequeue (My_Value);
               Put_Line
                 ("Consumer" & Me'Image & " consumed " & My_Value'Image);
            or
               delay 0.001;
            end select;
         end select;
      end loop;
      Put_Line ("Consumer" & Me'Image & " stopped.");
   end consumer;

end multiple_consumers;
main procedure:
with multiple_consumers; use multiple_consumers;

procedure Main is
   subtype Consumer_Idx is Positive range 1 .. 5;
   Consumer_list : array (Consumer_Idx) of consumer;
   P             : producer;
begin
   for I in Consumer_list'Range loop
      Consumer_list (I).Set_Id (I);
   end loop;
   P.Set_Id (1);
   delay 0.01; -- wait 0.01 seconds
   P.stop;
   for I in Consumer_list'Range loop
      Consumer_list (I).stop;
   end loop;
end Main;
Output:
Producer 1 produced 1
Consumer 1 consumed 1
Producer 1 produced 2
Consumer 2 consumed 2
Consumer 3 consumed 3
Producer 1 produced 3
Producer 1 produced 4
Producer 1 produced 5
Consumer 4 consumed 4
Consumer 5 consumed 5
Producer 1 produced 6
Producer 1 produced 7
Consumer 1 consumed 6
Consumer 2 consumed 7
Consumer 3 consumed 8
Producer 1 produced 8
Producer 1 produced 9
Producer 1 produced 10
Consumer 4 consumed 9
Consumer 5 consumed 10
Consumer 1 consumed 11
Producer 1 produced 11
Producer 1 produced 12
Consumer 2 consumed 12
Producer 1 produced 13
Consumer 3 consumed 13
Producer 1 produced 14
Consumer 4 consumed 14
Producer 1 produced 15
Consumer 5 consumed 15
Producer 1 produced 16
Consumer 1 consumed 16
Producer 1 produced 17
Consumer 2 consumed 17
Consumer 3 consumed 18
Producer 1 produced 18
Producer 1 produced 19
Consumer 4 consumed 19
Producer 1 produced 20
Consumer 5 consumed 20
Producer 1 produced 21
Consumer 1 consumed 21
Producer 1 produced 22
Consumer 2 consumed 22
Producer 1 produced 23
Consumer 3 consumed 23
Producer 1 produced 24
Consumer 4 consumed 24
Producer 1 produced 25
Consumer 5 consumed 25
Producer 1 produced 26
Consumer 1 consumed 26
Producer 1 produced 27
Consumer 2 consumed 27
Consumer 3 consumed 28
Producer 1 produced 28
Producer 1 produced 29
Consumer 4 consumed 29
Producer 1 produced 30
Consumer 5 consumed 30
Producer 1 produced 31
Consumer 1 consumed 31
Producer 1 produced 32
Consumer 2 consumed 32
Consumer 3 consumed 33
Producer 1 produced 33
Producer 1 produced 34
Consumer 4 consumed 34
Producer 1 produced 35
Consumer 5 consumed 35
Producer 1 produced 36
Consumer 1 consumed 36
Producer 1 produced 37
Consumer 2 consumed 37
Producer 1 produced 38
Consumer 3 consumed 38
Producer 1 produced 39
Consumer 4 consumed 39
Producer 1 produced 40
Consumer 5 consumed 40
Producer 1 produced 41
Consumer 1 consumed 41
Producer 1 produced 42
Consumer 2 consumed 42
Consumer 3 consumed 43
Producer 1 produced 43
Producer 1 produced 44
Consumer 4 consumed 44
Producer 1 produced 45
Consumer 5 consumed 45
Producer 1 produced 46
Consumer 1 consumed 46
Producer 1 produced 47
Consumer 2 consumed 47
Consumer 3 consumed 48
Producer 1 produced 48
Producer 1 produced 49
Consumer 4 consumed 49
Producer 1 produced 50
Consumer 5 consumed 50
Producer 1 produced 51
Consumer 1 consumed 51
Producer 1 produced 52
Consumer 2 consumed 52
Producer 1 produced 53
Consumer 3 consumed 53
Producer 1 produced 54
Consumer 4 consumed 54
Producer 1 produced 55
Consumer 5 consumed 55
Producer 1 produced 56
Consumer 1 consumed 56
Producer 1 produced 57
Consumer 2 consumed 57
Consumer 3 consumed 58
Producer 1 produced 58
Producer 1 produced 59
Consumer 4 consumed 59
Producer 1 produced 60
Consumer 5 consumed 60
Producer 1 produced 61
Consumer 1 consumed 61
Producer 1 produced 62
Consumer 2 consumed 62
Consumer 3 consumed 63
Producer 1 produced 63
Producer 1 produced 64
Consumer 4 consumed 64
Consumer 5 consumed 65
Producer 1 produced 65
Producer 1 produced 66
Consumer 1 consumed 66
Consumer 2 consumed 67
Producer 1 produced 67
Producer 1 produced 68
Consumer 3 consumed 68
Producer 1 produced 69
Consumer 4 consumed 69
Producer 1 produced 70
Consumer 5 consumed 70
Producer 1 produced 71
Consumer 1 consumed 71
Producer 1 produced 72
Consumer 2 consumed 72
Producer 1 produced 73
Consumer 3 consumed 73
Producer 1 produced 74
Consumer 4 consumed 74
Producer 1 produced 75
Consumer 5 consumed 75
Producer 1 produced 76
Consumer 1 consumed 76
Producer 1 stopped.
Consumer 1 stopped.
Consumer 2 stopped.
Consumer 3 stopped.
Consumer 4 stopped.
Consumer 5 stopped.

Do while loop in SAS

I am referring to the SAS documentation page to understand the difference between DO WHILE and DO UNTIL loops in SAS. While reading this document, I came to the conclusion that whenever I use a DO WHILE loop, the inequality in the WHILE statement must always be less-than (or less-than-or-equal); it cannot be greater-than (or greater-than-or-equal).
The reason is as follows. When I ran this code:
data _null_;
  n=0;
  do while(n<5);
    put n=;
    n+1;
  end;
run;
I got the output as 0,1,2,3,4.
But when I ran this code, I got nothing.
data _null_;
  n=0;
  do while(n>5);
    put n=;
    n+1;
  end;
run;
Is my conclusion correct, or am I missing something?
Thank you
It is not really about whether you use > or <; the logic of what to test is reversed. With DO WHILE() you execute the loop when the condition is TRUE. With DO UNTIL() you continue executing when the condition is FALSE.
11 data _null_;
12 do while(n<5);
13 put n #;
14 n+1;
15 end;
16 run;
0 1 2 3 4
NOTE: DATA statement used (Total process time):
real time 0.00 seconds
cpu time 0.00 seconds
17 data _null_;
18 do until(n>=5);
19 put n #;
20 n+1;
21 end;
22 run;
0 1 2 3 4
NOTE: DATA statement used (Total process time):
real time 0.00 seconds
cpu time 0.00 seconds
The other big difference is when the test is made. With DO WHILE() the test is before the loop starts. With DO UNTIL() it is after the loop finishes. So with DO UNTIL() the code always executes at least once.
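For readers coming from C, the distinction is the same as between a while loop (test before the body, so the body may run zero times) and a do ... while loop (test after the body, so the body always runs at least once). A small illustrative program, unrelated to the SAS log above:

#include <stdio.h>

int main(void)
{
    int n = 10;

    while (n < 5)            /* tested before the body: it never executes here */
    {
        printf("while: %d\n", n);
        n++;
    }

    do                       /* tested after the body: it executes once, printing 10 */
    {
        printf("do-while: %d\n", n);
        n++;
    } while (n < 5);

    return 0;
}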
Note your example is more easily done using an iterative DO loop instead.
6 data _null_;
7 do n=0 to 4;
8 put n #;
9 end;
10 run;
0 1 2 3 4

Assigning Values to a struct from a text file with an unknown amount of data

I'm stuck on a problem where I need to find the highest average inside a text file with an unknown amount of data.
The data inside the txt is like this
id group score
2203 1 33
5123 2 58
3323 3 92
5542 2 86
....
....
and the file keeps going.
I'm currently trying to create a struct and then store the values inside of it, but I cannot determine the size of the struct array, since the file has an unknown amount of data and it might change every run.
What I tried is this:
while(!feof(fptr)) {
    for(i = 0; i < sizeoffile; i++ ) { // here i should add the size or the amount of data.
        fscanf(fptr,"%d %d %d",&p[i].num, &p[i].grp, &p[i].score);
    }
}
I tried adding a counter inside the while loop to get the amount of data, but it doesn't work. I'm not sure if I need to use malloc or something else.
Example run:
The code reads the following file:
1312 1 30
1234 1 54
2931 2 23
2394 2 99
9545 3 95
8312 3 100
8542 4 70
2341 4 56
1233 1 70
2323 1 58
output
group 3 has the highest average of 97
The code could go through the file once to determine the count of records, N, and then read it again, this time saving into an array of size N.
Alternatively, use a linked list.
Or allocate some memory and re-allocate as needed.
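A minimal sketch of that last option, growing the array with realloc as records are read. The struct, field, and file names here are illustrative, not taken from the question, and it assumes the file contains only the numeric rows (skip or strip the "id group score" header line first if it is present):

#include <stdio.h>
#include <stdlib.h>

struct record
{
    int id;
    int grp;
    int score;
};

int main(void)
{
    FILE *fptr = fopen("data.txt", "r");   /* hypothetical file name */
    if (fptr == NULL)
        return 1;

    struct record *p = NULL;
    struct record r;
    size_t count = 0, capacity = 0;

    /* keep reading while fscanf converts all three integers of a record */
    while (fscanf(fptr, "%d %d %d", &r.id, &r.grp, &r.score) == 3)
    {
        if (count == capacity)
        {
            capacity = capacity ? capacity * 2 : 16;   /* grow geometrically */
            struct record *tmp = realloc(p, capacity * sizeof *p);
            if (tmp == NULL)
            {
                free(p);
                fclose(fptr);
                return 1;
            }
            p = tmp;
        }
        p[count++] = r;
    }
    fclose(fptr);

    printf("read %zu records\n", count);
    /* ... compute per-group averages over p[0..count-1] here ... */

    free(p);
    return 0;
}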

FreeRTOS priority 1 is special?

FreeRTOS priority 1 is special?
In my system I have 6 priorities, 0-5. I know the idle task runs at priority 0.
I assign one task priority 1 and the others priorities 2-5. From the CPU time and the idle task info, I can see the CPU has enough time to do all the tasks.
I found a problem: the task at priority 1 does not run at the right time. Its frequency should be 10 Hz, but sometimes it doesn't keep that rate, maybe 8 Hz or lower, even lower than 1 Hz.
When I set the task to priority 2, it's OK and works at 10 Hz.
The code structure is like this:

void SYS_MONITOR::run() {
    int ret = 0;
    while (1) {
        vTaskDelayUntil(&last_wake_time, SYS_MONITOR_RUN_INTERVAL_MS / portTICK_RATE_MS);
        // do something....
    }
}
ID State Prio Mark CPU(%) Name
1 S 1 261 0.0000000 God
2 R 0 109 69.6136779 IDLE
3 S 5 470 3.9053585 Tmr Svc
...
...
44 B 2 179 0.0242588 SYS_MONITOR_run
Heap : Total 491520 , Used 193696 , Remain 297824
DmaHeap: Total 16384 , Used 2048 , Remain 14336
There is not enough information to answer this.
You have quite a large setup, judging by the number of tasks you have.
One thing:
1 S 1 261 0.0000000 God
.....
4 B 2 179 0.0242588 SYS_MONITOR_run
5 R 1 303 0.0142761 SYS_CLI_SERV_run
You have at least 2 tasks with priority 1 there. If your SYS_MONITOR_run was also at priority 1 and started working "better" after you bumped its priority to 2 (higher), that is not surprising.
It depends on your scheduler configuration how equal-priority tasks get a chance to run, e.g.: do you have time-slicing round-robin or FIFO on equal-priority tasks? That's one. ...
Two, you have a complex setup (44 tasks!) and too little info to really answer your question.
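For reference, whether ready tasks of equal priority round-robin on each tick is controlled from FreeRTOSConfig.h. A typical fragment looks like this (illustrative values; check your own configuration):

/* FreeRTOSConfig.h (excerpt, illustrative values) */
#define configUSE_PREEMPTION    1   /* preemptive scheduling                             */
#define configUSE_TIME_SLICING  1   /* 1: ready tasks of equal priority share the CPU,   */
                                    /*    switching on each tick (round-robin);          */
                                    /* 0: an equal-priority task keeps the CPU until it  */
                                    /*    blocks or yields                               */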

(Flip sorting) Same input, but different output in ideone and codeblocks

I was facing this problem while solving a problem on UVa. The name of the problem is "10327 - Flip Sort".
The problem statement is:
Sorting in computer science is an important part. Almost every problem can be solved effeciently if
sorted data are found. There are some excellent sorting algorithm which has already acheived the lower
bound n · lg n. In this problem we will also discuss about a new sorting approach. In this approach
only one operation (Flip) is available and that is you can exchange two adjacent terms. If you think a
while, you will see that it is always possible to sort a set of numbers in this way.
A set of integers will be given. Now using the above approach we want to sort the numbers in
ascending order. You have to find out the minimum number of flips required. Such as to sort ‘1 2 3’
we need no flip operation whether to sort ‘2 3 1’ we need at least 2 flip operations.
Input
The input will start with a positive integer N (N ≤ 1000). In next few lines there will be N integers.
Input will be terminated by EOF.
Output
For each data set print ‘Minimum exchange operations : M’ where M is the minimum flip operations required to perform sorting. Use a seperate line for each case.
Sample Input
3
1 2 3
3
2 3 1
Sample Output
Minimum exchange operations : 0
Minimum exchange operations : 2
Which is pretty simple.
Here is my code:
#include <stdio.h>
#include <stdlib.h>

int main()
{
    int i,j,n,a[1001],temp,c;
    while(scanf("%d",&n)!=EOF)
    {
        c=0;
        for(i=0; i<n; i++)
        {
            scanf("%d",&a[i]);
        }
        i=0;
        while(i<n)
        {
            j=0;
            while(j<n)
            {
                if(a[j]>a[j+1])
                {
                    temp=a[j];
                    a[j]=a[j+1];
                    a[j+1]=temp;
                    c++;
                }
                j++;
            }
            i++;
        }
        printf("Minimum exchange operations : %d\n",c);
    }
    return 0;
}
For the following test cases:
5
6 2 1 5 4
8
3 5 8 2 1 9 7 0
2
0 0
12
0 1 2 5 4 1 2 0 5 0 7 9
50
21 5 6 5 8 452 12 5 96 21 21 5 6 5 8 452 12 5 96 21 21 5 6 5 8 452 12 5 96 21 21 5 6 5 8 452 12 5 96 21 21 5 6 5 8 452 12 5 96 21
the accepted output is
Minimum exchange operations : 6
Minimum exchange operations : 16
Minimum exchange operations : 0
Minimum exchange operations : 19
Minimum exchange operations : 485
This is what I got in Code::Blocks. But ideone keeps showing these outputs:
Minimum exchange operations : 11
Minimum exchange operations : 23
Minimum exchange operations : 0
Minimum exchange operations : 28
Minimum exchange operations : 535
And I keep getting Wrong Answer, and I have no clue why.
