Round robin: a special case where one process uses up its time quantum and another arrives at the same time

I need help with this.
In round robin there is a special case where one process has used up its time quantum and another process arrives at exactly the same time.
For example, we have the following processes:
Process P1:
+ Arrival time: 0
+ Burst time: 7
Process P2:
+ Arrival time: 5
+ Burst time: 7
Assume that the time quantum is q = 5 and that, when its time quantum expires, a process that has not yet completed is added to the end of the queue.
My confusion is about time 5. At this moment, P1's time quantum expires and P2 also arrives. Which one should enter the queue first?

The newly arrived process is placed in the ready queue ahead of the process whose time quantum just expired, in order to minimize the average response time.
The time taken for context switching is negligible.
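Under that convention, the resulting timeline (my own working from the numbers above) would be: P1 runs 0-5, P2 runs 5-10, P1 runs 10-12, P2 runs 12-14. Both processes still finish at 14, but P2 starts the moment it arrives, which is what keeps the average response time down.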

I think this question is similar to yours and the answer will be helpful:
Special case scheduling
In short:
| P1 | P1 | P2 | P2 |
0    5    7    12   14
The reason for this, as given in the above link, is that the OS prefers P1 since it was just running, and hence it can avoid an unnecessary context switch; i.e. P1 -> P1 -> P2 -> P2 is better than P1 -> P2 -> P1 -> P2.

Related

How to control the time offset of two threads in Linux?

Is there any way to create two SCHED_DEADLINE threads with a specific offset (phase) to a global cycle?
I am trying to create two threads, both with a period of 10 ms, but with an offset of 5ms between their arrival times. The behaviour
should look like this, with | being the arrival time, x the actual start time and D being the absolute deadline. Both threads are independent, so there is no need to synchronize them using mutexes etc. They just need the time offset.
Thread 1 |-----xooooo---------D-------------|
Thread 2 ------------------|-----xoooo--D-----
Maybe you can use timer_create() with timer_settime().
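For example, each thread could arm its own periodic POSIX timer whose first expiration is delayed by the thread's phase offset, and then block on the timer signal once per cycle. Below is a minimal sketch of that idea (the 10 ms period and 5 ms offset are taken from the question; the per-thread signal choice, struct, and names are my own assumptions, and the SCHED_DEADLINE setup itself is omitted):

#include <pthread.h>
#include <signal.h>
#include <stdio.h>
#include <time.h>

struct phase {
    int  signo;          /* RT signal dedicated to this thread's timer  */
    long offset_ns;      /* delay of the first activation (the "phase") */
    long period_ns;      /* distance between subsequent activations     */
    const char *name;
};

static void *worker(void *arg)
{
    struct phase *p = arg;
    sigset_t set;
    sigemptyset(&set);
    sigaddset(&set, p->signo);

    /* Per-thread timer on the monotonic clock, delivering p->signo. */
    struct sigevent sev = {
        .sigev_notify = SIGEV_SIGNAL,
        .sigev_signo  = p->signo,
    };
    timer_t timer;
    timer_create(CLOCK_MONOTONIC, &sev, &timer);

    /* First expiry after the offset, then one expiry per period.
       it_value must be non-zero, otherwise the timer stays disarmed. */
    struct itimerspec its = {
        .it_value.tv_nsec    = p->offset_ns ? p->offset_ns : 1,
        .it_interval.tv_nsec = p->period_ns,
    };
    timer_settime(timer, 0, &its, NULL);

    for (int i = 0; i < 5; i++) {
        siginfo_t si;
        sigwaitinfo(&set, &si);           /* sleep until the next activation */
        printf("%s: activation %d\n", p->name, i);
        /* ... the thread's job for this cycle would run here ... */
    }
    return NULL;
}

int main(void)
{
    /* Block both timer signals in every thread so each one is consumed
       only by the matching sigwaitinfo() call. */
    sigset_t block;
    sigemptyset(&block);
    sigaddset(&block, SIGRTMIN);
    sigaddset(&block, SIGRTMIN + 1);
    pthread_sigmask(SIG_BLOCK, &block, NULL);

    struct phase a = { SIGRTMIN,     0,               10 * 1000 * 1000, "thread 1" };
    struct phase b = { SIGRTMIN + 1, 5 * 1000 * 1000, 10 * 1000 * 1000, "thread 2" };

    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, &a);
    pthread_create(&t2, NULL, worker, &b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}

For actual SCHED_DEADLINE behaviour you would still set the policy on each thread with sched_setattr(); the timers above only control when each job becomes runnable. Older glibc versions need -lrt for the timer_* functions.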

How to schedule processes

I have a problem with a process-scheduling homework assignment and I need help.
Process        P1    P2    P3    P4    P5
Service time  120    60   180    50   300
Draw a Gantt chart that shows the completion times for each process using
the following CPU scheduling:
first-come, first-served
shortest-job-first
round-robin with a time slice of 60.
Process   Priority   Arrival Time   Service Time
P0        1          3              5
P1        2          2              6
P2        1          4              7
P3        2          1              3
Draw a Gantt chart that shows the completion times for each process using each of the following CPU scheduling techniques (non-preemptive). Then calculate the average waiting time and the average turnaround time for each case.
a) first-come, first served
b) Shortest-Job-First
c) Priority Scheduling
d) Round Robin Scheduling (q = 5)
Assume you keep track of the current time. At time = 0 you can figure out (for each scheduler, except round robin maybe) which process will be given CPU time first, then determine how long it will run for (for each scheduler, including round robin), then draw a line on your Gantt chart showing how long that process used the CPU. Then you'll have a new value for the time (the end of the line you just drew, when the first process stopped using the CPU), so you repeat the same steps to figure out which process gets CPU time next and draw another line; and you keep doing this (for each scheduler) until all the processes have finished.
You can/should try this on a piece of paper, like a rough draft. If you understand how each of the scheduling algorithms works it's not hard (and if you don't know how some of them work it's easy to find out - e.g. find a search engine and ...).
The only problem that I can see is that the order that processes are given CPU time isn't specified for round robin. You could say that (for round robin) P1 gets CPU time first, or P2 gets CPU time first, or ... I'd be tempted to assume they're given CPU time in numerical order (P1 gets CPU time first, then P2, then P3, ..); but I'd also state whatever assumption I made.
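To make the loop concrete, here is a rough sketch of that bookkeeping for the simplest case: FCFS with the first table's service times and every process arriving at time 0 (the variable names and output format are mine, not part of the assignment):

#include <stdio.h>

int main(void)
{
    const int service[] = { 120, 60, 180, 50, 300 };   /* P1..P5 from the question */
    const int n = sizeof service / sizeof service[0];
    int now = 0;                                       /* the current "time"       */

    for (int i = 0; i < n; i++) {                      /* FCFS: run in input order  */
        int start = now;
        now += service[i];                             /* process runs to completion */
        int completion = now;
        int turnaround = completion;                   /* arrival time is 0 for all  */
        int waiting    = turnaround - service[i];
        printf("P%d: start=%d completion=%d waiting=%d turnaround=%d\n",
               i + 1, start, completion, waiting, turnaround);
    }
    return 0;
}

The same loop shape works for the other schedulers: only the rule for picking the next process changes (shortest remaining service time for SJF, the given priority for priority scheduling), and for round robin you additionally cap each run at the quantum and re-queue whatever work is left.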

Why is the sys time of a process so high when strace shows significantly less time?

I have written a program in C which has two threads.
Initially the code was:
for (int i = 0; i < n; i++) {
    long_operation(arr[i]);
}
Then I divided the loop between two threads so they execute concurrently.
One thread carries out the operation for arr[0] to arr[n/2], the other works on arr[n/2] to arr[n-1].
The long_operation function is thread-safe.
Initially I was using join, but it was spending a lot of sys time in the futex system call, which I observed using strace.
So I used two volatile variables in the two threads to keep track of whether each thread has completed, plus a busy loop in the thread-spawning function to halt execution of the later code; I made the threads detached and removed the join.
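Roughly, the structure now looks like this (the names, types, and fixed halves below are just to illustrate what I described, not my exact code):

#include <pthread.h>
#include <stddef.h>

extern void long_operation(int value);   /* thread-safe, as mentioned above */

static int *arr;
static size_t n;
static volatile int done_lo, done_hi;    /* one completion flag per thread */

static void *run_lo(void *unused) {
    for (size_t i = 0; i < n / 2; i++) long_operation(arr[i]);
    done_lo = 1;
    return NULL;
}

static void *run_hi(void *unused) {
    for (size_t i = n / 2; i < n; i++) long_operation(arr[i]);
    done_hi = 1;
    return NULL;
}

void process_all(int *data, size_t len)
{
    arr = data;
    n   = len;

    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);

    pthread_t t1, t2;
    pthread_create(&t1, &attr, run_lo, NULL);
    pthread_create(&t2, &attr, run_hi, NULL);

    while (!(done_lo && done_hi))
        ;   /* busy loop instead of pthread_join() */
}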
It improved performance a little bit, but when I used the time command, it reported:
real 0m31.368s
user 0m53.738s
sys 0m15.203s
But when I checked using the strace command, the output was:
% time     seconds  usecs/call     calls    errors syscall
 55.79    0.000602           9        66           clone
 44.21    0.000477           3       177           write
------ ----------- ----------- --------- --------- ----------------
100.00    0.001079                     243           total
So the time command says that around 15 seconds of CPU time were spent in the kernel by this process, but strace shows that almost no time was spent in system calls.
Then why were 15 seconds spent in the kernel?
I have a dual-core hyper-threaded Intel CPU.

Trouble Understanding CPU Scheduling Concepts

I have to write a CPU scheduling simulation with kernel level threads. I have to be able to use either first come first served (FCFS) or round robin (RR) algorithms. Data for the processes and their threads is given in the form of a text file. At the moment my program reads in the text file data into linked lists. I'm not really sure how to start the simulation (I've never programmed a simulation before).
Is this how I would proceed in the case of FCFS? When I get to the first thread of the first process, I add the CPU time to the clock time. Then do I simply add the I/O time to the clock as well while the CPU is idle? Or should I put it back in a waiting queue and allow the next thread to start running on the CPU? If so, how do I keep track of how much of each thread has already been executed?
Here is an example test file:
2 4 6 // number_of_processes thread_switch process_switch
1 5 // process_number(1) number_of_threads(1)
1 0 4 // thread_number(1) arrival_time(1) number_of_CPU(1)
1 15 100 // 1 cpu_time io_time
2 18 120 // 2 cpu_time io_time
3 12 100 // 3 cpu_time io_time
4 16 // 4 cpu_time
2 4 4 // thread_number(2) arrival_time(2) number_of_CPU(2)
1 18 110
2 15 80
3 20 75
4 15
3 6 5 //thread(3)
1 40 100
2 20 70
3 15 80
4 18 90
5 50
4 8 4 //thread(4)
1 25 60
2 15 50
3 20 80
4 18
5 18 4 //thread(5)
1 8 60
2 15 120
3 12 80
4 10
The I/O time seems ambiguous based upon the information provided. Do you have a spec/instructions to go along with this?
In general I would think that the I/O time provides you with a window in which the currently executing thread(s) can be swapped out while they wait for I/O requests to complete. Though nothing seems to indicate if the I/O time occurs before, after, or intermixed with the CPU time. That may be a design decision you're expected to make when implementing your simulator.
There's also ambiguity with respect to what impact the 'number_of_CPU' option has. How many CPU cores are you simulating? Or is this just a count of the number of requests that the thread will make of the CPU (which it seems like it is, since you can't have a single thread running on multiple CPU's concurrently)?
In any case, you're correct about the general approach for handling the FCFS algorithm. Essentially you'll want to maintain a queue of requests, and whenever the CPU is idle you simply pull the next thing out of the queue and execute it. Assuming a single-core CPU and ignoring I/O time, the result should look something like this (a rough code skeleton of the same loop follows the trace):
time=0
Thread 1 arrives, and wants to do 15 time-units of work
Thread 1 starts executing 15 time-units of work
time=4
Thread 2 arrives, and wants to do 18 time-units of work
Thread 2 is blocked because Thread 1 is executing
time=6
Thread 3 arrives, and wants to do 40 time-units of work
Thread 3 is blocked because Thread 1 is Executing
time=8
Thread 4 arrives and wants to do 25 time-units of work
Thread 4 is blocked because Thread 1 is Executing
time=15
Thread 1 completes its initial set of work
Thread 2 is next in the queue, and begins executing
Thread 1 wants to do 18 time-units of work
Thread 1 is blocked because Thread 2 is executing
time=18
Thread 5 arrives and wants to do 8 time-units of work
Thread 5 is blocked because Thread 2 is executing
time=33
Thread 2 completes its initial set of work
Thread 3 is next in the queue, and begins executing
Thread 2 wants to do 15 time-units of work
Thread 2 is blocked because Thread 3 is executing
...
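If it helps, the FCFS core of such a simulator can be reduced to a clock plus a queue. The skeleton below replays just the first CPU bursts from the trace above on a single simulated core, ignoring I/O, re-queuing of later bursts, and the thread/process switch overheads (the struct and names are mine):

#include <stdio.h>

struct burst { const char *name; int arrival; int cpu_time; };

int main(void)
{
    /* First CPU bursts and arrival times from the example trace. */
    struct burst jobs[] = {
        { "Thread 1", 0, 15 }, { "Thread 2",  4, 18 }, { "Thread 3", 6, 40 },
        { "Thread 4", 8, 25 }, { "Thread 5", 18,  8 },
    };
    const int n = sizeof jobs / sizeof jobs[0];

    int now = 0;
    for (int i = 0; i < n; i++) {            /* FCFS: arrival order = service order */
        if (now < jobs[i].arrival)
            now = jobs[i].arrival;           /* CPU idles until the next arrival    */
        printf("t=%3d  %s starts %d time-units of work\n",
               now, jobs[i].name, jobs[i].cpu_time);
        now += jobs[i].cpu_time;
        printf("t=%3d  %s completes\n", now, jobs[i].name);
    }
    return 0;
}

Re-queuing each thread's next burst once its (simulated) I/O finishes, and charging the thread-switch and process-switch times from the input file whenever the running thread changes, would be layered on top of this loop; the round robin variant is the same loop with each run capped at the time slice.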

Pthread scheduling FIFO, RR mixed results

We are trying to analyze the effect of different scheduling algorithms on an Ubuntu system for Pthreads. We create several (1, 2, 4) threads and let them run on 1, 2 or 4 CPUs. Each thread is a for loop with one mathematical operation. One thread takes several seconds to finish.
When starting 2 threads on 1 CPU with FIFO, they finish a long time apart (logical: FIFO finishes the first created thread first). For RR they finish closer together (half a second difference in some cases). This is all as expected. Now we run each test 10 times, and about a third of the measurements take half as long as the others. We measure the time for all threads to finish, so on 1 CPU we wait for 2 threads to finish. RR or FIFO makes little difference, but running the test multiple times gives about 6 s for 2 or 3 of the runs and about 12 s for 5 or 6 of them. The extraordinary thing is that the program never finishes in around 9 or 10 seconds: it's either between 5 and 6 or between 11 and 13. We did these measurements for 4, 2, and 1 threads on 4, 2, and 1 CPUs, for both FIFO and RR. The priority has been set both to 0 and to 99 (real-time). No heavy application was using the CPU; on the cores used, more than 97% of the CPU time went to threads spawned by our program.
When using SCHED_OTHER we have no such phenomenon.
Has anyone got an explanation for this behavior?
It's hard to see how many context switches happen. For FIFO the number of context switches should be close to 0, and for RR it should be a lot larger but should still hardly affect the total execution time. For SCHED_OTHER I'm guessing there are the most context switches, but I'm not entirely sure.
Another interesting fact is that the total execution time for OTHER is more or less the same as the short time for FIFO with the same number of threads and CPUs. So FIFO is sometimes as fast as OTHER, but sometimes takes double the time.
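For reference, the real-time threads in these tests are set up roughly like this (a sketch; the attribute calls, priority value, and loop body here are illustrative rather than our exact code):

#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

static void *spin(void *arg)
{
    volatile double x = 0.0;                 /* volatile keeps the loop from being optimized away */
    for (long i = 0; i < 2000000000L; i++)
        x += 1.0;                            /* the "one mathematical operation"                  */
    return NULL;
}

int main(void)
{
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);            /* or SCHED_RR */

    struct sched_param sp = { .sched_priority = 99 };
    pthread_attr_setschedparam(&attr, &sp);

    pthread_t t[2];
    for (int i = 0; i < 2; i++) {
        int rc = pthread_create(&t[i], &attr, spin, NULL);
        if (rc != 0) {                       /* typically EPERM without root/CAP_SYS_NICE */
            fprintf(stderr, "pthread_create: %s\n", strerror(rc));
            return 1;
        }
    }
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    return 0;
}

Restricting a run to 1, 2 or 4 CPUs can be done with taskset or pthread_setaffinity_np().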
Regards,
Roel Storms

Resources