So I have to implement a discrete event CPU scheduler for my OS class, but I don't quite understand how it works. Every explanation/textbook I've read puts things in terms a little too abstract for me to figure out how it actually works, and most don't put things in terms of CPU bursts and IO bursts (some did, but still not helpfully enough).
I'm not posting any of the code I have (I wrote a lot, actually, but I think I'm going to rewrite it after I figure out what is actually going on). Instead I just want help figuring out a sort of pseudocode I can then implement.
We are given multiple processes, each with an Arrival Time (AT), Total CPU (TC), CPU burst (CB), and IO burst (IO).
Suppose that I was given: p1 (AT=1, TC=200, CB=10, IO=20) and p2 (AT=1000, TC=200, CB=20, IO=10). And suppose I am implementing a First Come First Serve scheduler.
I also put question marks (?) where I'm not sure.
Put all processes into the eventQueue
initialize all processes.state = CREATED
while (eventQueue not empty):
    process = eventQueue.getFront()
    if process.state == CREATED:    // it can transition to READY
        clock = process.AT
        process.state = READY
        then I add it back to the end (?) of the eventQueue
    if process.state == READY:      // it can transition to RUN
        clock = process.AT + process.CPU_time_had + process.IO_time_had (?)
        CPU_Burst = Rand(between 1 and process.CB)
        if (CPU_Burst >= process.TC - process.CPU_time_had):
            // then it's done, I don't add it back
            process.finish_time = clock + CPU_Burst
            continue
        else:
            process.CPU_time_had += CPU_Burst
            (?) Not sure if I put the process into BLOCK or READY here
            add it to the back of the eventQueue (?)
    if process.state == BLOCK:
        No idea what happens (?)
        Or do things never get BLOCKED in FCFS (which would make sense)?
Also, how do IO bursts enter into this picture???
Thanks for the help guys!
Look at the arrival time of each thread: you can sort the queue so that threads with earlier arrival times appear before threads with later ones. Run the thread at the front of the queue (this is a thread scheduler). Run the thread one burst at a time; when the burst's CPU time is up, enter a new event at the back of the queue with an arrival time of the current time plus the burst's IO time, then sort the queue again on arrival times. This way other threads can execute while a thread is performing IO (see the sketch below).
(My answer is assuming you are in the same class as me. [CIS*3110])
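To make that concrete, here is a minimal sketch of the whole event loop in C. It is one possible shape, not the official solution for the assignment: the burst here is simply CB capped by the remaining CPU time (swap in your course's random-burst call where noted), the ready queue is a plain FIFO because FCFS never preempts, and a blocked process just reappears as an IO_DONE event scheduled at clock + IO. That last point answers the BLOCK and IO-burst questions above: BLOCK isn't a queue you poll, it's a future event.

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

typedef enum { ARRIVAL, BURST_DONE, IO_DONE } EvType;

typedef struct { int at, tc, cb, io, cpu_done, finish; } Proc;

typedef struct Event {
    int time;
    Proc *p;
    EvType type;
    struct Event *next;
} Event;

static Event *eventq;        /* sorted by time; ties keep insertion order */
static Proc  *readyq[64];    /* FIFO ready queue (fixed size for brevity) */
static int rq_head, rq_tail;
static bool cpu_busy;

static void put_event(int time, Proc *p, EvType type)
{
    Event *e = malloc(sizeof *e);
    e->time = time; e->p = p; e->type = type;
    Event **cur = &eventq;
    while (*cur && (*cur)->time <= time)   /* <= keeps FIFO order on ties */
        cur = &(*cur)->next;
    e->next = *cur;
    *cur = e;
}

static void try_dispatch(int clock)
{
    if (cpu_busy || rq_head == rq_tail)
        return;
    Proc *p = readyq[rq_head++ % 64];
    int remaining = p->tc - p->cpu_done;
    int burst = p->cb < remaining ? p->cb : remaining;  /* randomize per your spec */
    p->cpu_done += burst;
    cpu_busy = true;
    put_event(clock + burst, p, BURST_DONE);
}

int main(void)
{
    static Proc procs[2] = {
        {    1, 200, 10, 20, 0, 0 },    /* p1 from the question */
        { 1000, 200, 20, 10, 0, 0 },    /* p2 */
    };
    for (int i = 0; i < 2; i++)
        put_event(procs[i].at, &procs[i], ARRIVAL);

    while (eventq) {
        Event *e = eventq;
        eventq = e->next;
        int clock = e->time;
        switch (e->type) {
        case ARRIVAL:                       /* CREATED -> READY */
        case IO_DONE:                       /* BLOCKED -> READY */
            readyq[rq_tail++ % 64] = e->p;
            break;
        case BURST_DONE:                    /* RUNNING -> DONE or BLOCKED */
            cpu_busy = false;
            if (e->p->cpu_done >= e->p->tc)
                e->p->finish = clock;       /* done: never re-queued */
            else
                put_event(clock + e->p->io, e->p, IO_DONE);
            break;
        }
        try_dispatch(clock);    /* hand the CPU to the next READY process */
        free(e);
    }
    printf("p1 finished at %d, p2 finished at %d\n",
           procs[0].finish, procs[1].finish);
    return 0;
}

With the two example processes, p1 alternates 10 units of CPU with 20 units of IO and finishes all 200 units of CPU at t=581, before p2 even arrives at t=1000, so the ready queue never holds more than one process in this particular run.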
In my project, I have a data provider that delivers data every 2 milliseconds. The following is the delegate method in which the data is received.
func measurementUpdated(_ measurement: Double) {
    measurements.append(measurement)
    guard measurements.count >= 300 else { return }
    ecgView.measurements = Array(measurements.suffix(300))
    DispatchQueue.main.async {
        self.ecgView.setNeedsDisplay()
    }
    guard measurements.count >= 50000 else { return }
    let olderMeasurementsPrefix = measurements.count - 50000
    measurements = Array(measurements.dropFirst(olderMeasurementsPrefix))
    print("Measurement Count : \(measurements.count)")
}
What I am trying to do is that when the array has more than 50,000 elements, delete the oldest measurements from the front of the array, for which I am using Array's dropFirst method.
But, I am getting a crash with the following message:
Fatal error: Can't form Range with upperBound < lowerBound
I think the issue is due to threading: both appending and deletion might happen at the same time, since the delegate fires every 2 milliseconds. Can you suggest an optimized way to resolve this issue?
So to really fix this, we need to first address two of your claims:
1) You said, in effect, that measurementUpdated() would be called on the main thread (you said both append and dropFirst would be called on the main thread). 2) You also said several times that measurementUpdated() would be called every 2 ms. You do not want to be calling a method every 2 ms on the main thread: you'll pile up quite a lot of calls very quickly and get many delays in their processing, as the main thread always has UI work to do, and that eats up time.
So, first rule: measurementUpdated() should always be called on a thread other than the main thread. Keep it on the same thread each time, though.
Second rule: The entire code path from whatever collects the data to where measurementUpdated() is called must also be on a non-main thread. It can be on the same thread that calls measurementUpdated(), but doesn't have to be.
Third rule: You do not need your UI graph to update every 2 ms. The human eye can't perceive UI changes much faster than about 150 ms. Also, the device's main thread will get totally bogged down trying to re-render as frequently as every 2 ms; I doubt your graph UI can even render a single pass in 2 ms. So give your main thread a break by only updating the graph every, say, 150 ms: measure the current time in milliseconds and compare it against the last time you updated the graph from this routine.
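The throttle itself is just a timestamp comparison. As a minimal, language-agnostic sketch (shown in C with a monotonic clock; CACurrentMediaTime() or DispatchTime.now() play the same role in Swift):

#include <stdbool.h>
#include <time.h>

/* Returns true at most once per ~150 ms; callers skip redrawing otherwise. */
static bool should_redraw(void)
{
    static struct timespec last;    /* zero-initialized: first call passes */
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    long elapsed_ms = (now.tv_sec - last.tv_sec) * 1000
                    + (now.tv_nsec - last.tv_nsec) / 1000000;
    if (elapsed_ms < 150)
        return false;
    last = now;
    return true;
}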
Fourth rule: don't change any array (or any object) in two different threads without doing a mutex lock, as they'll sometimes collide (one thread will be trying to do an operation on it while another is too). An excellent article that covers all the current swift ways of doing mutex locks is Matt Gallagher's Mutexes and closure capture in Swift. It's a great read, and has both simple and advanced solutions and their tradeoffs.
One other suggestion: You're allocating or reallocating a few arrays every 2 ms. That's unnecessary and, I'd think, adds undue stress on the memory pools under the hood. I suggest not doing the append and dropFirst calls. Try rewriting such that you have a single array that holds 50,000 doubles and never changes size. Simply change values in the array, and keep two indexes so that you always know where the "start" and the "end" of the data set are within the array; i.e., pretend the next array element after the last is the first (the array loops around to the front). Then you're not churning memory at all, and it'll operate much quicker too. You can surely find Array extensions people have written to make this trivial to use (there is also a sketch below). Every 150 ms you can copy the data into a second pre-allocated array in the correct order for your graph UI to consume, or just pass the two indexes to your graph UI if you own it and can adjust it to accommodate.
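To make the circular-buffer idea concrete, here is a minimal sketch; it's shown in C to keep the index arithmetic bare, and all the names are illustrative. The same logic ports directly to a Swift Array extension.

#include <stddef.h>

#define CAPACITY 50000

typedef struct {
    double data[CAPACITY];
    size_t head;    /* index of the oldest sample */
    size_t count;   /* number of valid samples */
} RingBuffer;

/* Append one sample, overwriting the oldest once the buffer is full. */
static void rb_push(RingBuffer *rb, double value)
{
    size_t tail = (rb->head + rb->count) % CAPACITY;
    rb->data[tail] = value;
    if (rb->count < CAPACITY)
        rb->count++;
    else
        rb->head = (rb->head + 1) % CAPACITY;   /* drop the oldest */
}

/* Copy the newest n samples, oldest first, into out (caller-allocated). */
static size_t rb_latest(const RingBuffer *rb, double *out, size_t n)
{
    if (n > rb->count)
        n = rb->count;
    size_t start = (rb->head + rb->count - n) % CAPACITY;
    for (size_t i = 0; i < n; i++)
        out[i] = rb->data[(start + i) % CAPACITY];
    return n;
}

rb_push() is what the 2 ms callback would do; every 150 ms, rb_latest() hands the newest 300 samples to the graph in order. Nothing ever reallocates.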
I don't have time right now to write an example that covers all of this end to end (maybe someone else does), but I'll try to revisit this tomorrow. It'd actually be a lot better for you to make a renewed stab at it yourself, and then ask us a new question (in a new Stack Overflow post) if you get stuck.
Update: As @Smartcat correctly pointed out, this solution has the potential of causing memory issues if the main thread is not fast enough to consume the arrays at the same pace the worker thread produces them.
The problem seems to be caused by ecgView's measurements property: you are writing to it on the thread receiving the data, while the view tries to read from it on the main thread, and simultaneous accesses to the same data from multiple threads are (unfortunately) likely to generate race conditions.
In conclusion, you need to make sure that both reads and writes happen on the same thread. This can easily be achieved by moving the setter call inside the async dispatch:
let ecgViewMeasurements = Array(measurements.suffix(300))
DispatchQueue.main.async {
    self.ecgView.measurements = ecgViewMeasurements
    self.ecgView.setNeedsDisplay()
}
According to what you say, I will assume the delegate is calling the measurementUpdated method from a concurrent thread.
If that's the case, and the problem is really related to threading, this should fix your problem:
// Use one stored serial queue: creating a new DispatchQueue on every
// call would not serialize anything, since each block would land on a
// different queue.
private let serialQueue = DispatchQueue(label: "MySerialQueue")

func measurementUpdated(_ measurement: Double) {
    serialQueue.async {
        self.measurements.append(measurement)
        guard self.measurements.count >= 300 else { return }
        self.ecgView.measurements = Array(self.measurements.suffix(300))
        DispatchQueue.main.async {
            self.ecgView.setNeedsDisplay()
        }
        guard self.measurements.count >= 50000 else { return }
        let olderMeasurementsPrefix = self.measurements.count - 50000
        self.measurements = Array(self.measurements.dropFirst(olderMeasurementsPrefix))
        print("Measurement Count : \(self.measurements.count)")
    }
}
This will put the code on a serial queue, which ensures that only one of these blocks runs at a time.
I am trying to implement a Selective Repeat protocol in C for a networking assignment, but am stumped at how to simulate a timer for each individual packet. I only have access to a single timer and can only call the functions described below.
/* start timer at A or B (int), increment in time*/
extern void starttimer(int, double);
/* stop timer at A or B (int) */
extern void stoptimer(int);
Kurose and Ross mentioned in their networking textbook that
A single hardware timer can be used to mimic the
operation of multiple logical timers [Varghese 1997].
And I found the following hint for a similar assignment
You can simulate multiple virtual timers using a single physical timer. The basic idea is that you keep a chain of virtual timers ordered in their expiration time and the physical timer will go off at the first virtual timer expiration.
However, I do not have access to any time variables other than RTT as the emulator is on another layer of abstraction. How can I implement the timer for individual packets in this case?
You can do that the same way it is implemented at the kernel level. You need a linked list of "timers" where each timer has a timeout relative to the preceding one. Say the timers are:
Timer1: 500 ms from t0, Timer2: 400 ms from t0, Timer3: 1000 ms from t0.
Then you will have a linked list in which each element has the timeout relative to the previous one, like this:
HEAD->Timer2(400ms)->Timer1(100ms)->Timer3(500ms)
Every element contains, at minimum: a timer ID, the relative timeout, and the absolute init time (timestamp from epoch). You can add a callback pointer per timer.
You use your only timer and set the timeout to the relative timeout of the first element in the list: 400ms (Timer2)
After the timeout you remove the first element, probably execute a callback related to Timer2 (ideally on another worker thread), and then set the new timeout to the relative timeout of the next element, Timer1: 100 ms.
Now, suppose that 300 ms after t0 you need to create a new timer, Timer4, that expires 3,000 ms from now, i.e. at absolute time 3,300 ms. You walk the linked list, accumulating the relative timeouts of the elements you pass, to find the corresponding position; Timer4's relative timeout is whatever is left over: 3,300 - (400 + 100 + 500) = 2,300 ms. (Equivalently: rebase the new delay onto the head timer's start time by adding the already-elapsed part, then subtract each predecessor's relative timeout.) Your linked list will become:
HEAD->Timer2(400ms)->Timer1(100ms)->Timer3(500ms)->Timer4(2300ms)
In this way you implement many logical timers with one physical timer. Timer creation and lookup are O(n), though you can add various improvements for insertion performance. Most importantly, timeout handling and update are O(1). Deletion is O(n) to find the timer and O(1) to unlink it.
You have to take care of possible race conditions between the thread controlling the timer and any thread inserting or deleting a timer. One way to implement this timer in user space is with condition variables and a timed wait.
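Here is a sketch of the delta list in C. The names are illustrative, the locking just discussed is omitted, and the calls to the question's starttimer()/stoptimer() are shown only in comments:

#include <stdlib.h>

typedef struct VTimer {
    int    id;
    long   rel_timeout;             /* ms relative to the previous node */
    long   abs_start;               /* creation timestamp, for bookkeeping */
    void (*callback)(void *);
    void  *arg;
    struct VTimer *next;
} VTimer;

/* Insert a timer that should fire `delay` ms from now. `head_elapsed` is
 * how long the physical timer for the current head has already been
 * running: the "now - absolute init time" part of the formula above. */
static void vtimer_insert(VTimer **head, VTimer *t, long delay, long head_elapsed)
{
    delay += head_elapsed;              /* rebase onto the head's start time */
    VTimer **cur = head;
    while (*cur && delay >= (*cur)->rel_timeout) {
        delay -= (*cur)->rel_timeout;   /* time covered by predecessors */
        cur = &(*cur)->next;
    }
    t->rel_timeout = delay;
    if (*cur)
        (*cur)->rel_timeout -= delay;   /* successor now waits that much less */
    t->next = *cur;
    *cur = t;
    /* If t became the new head, re-arm the physical timer, e.g.:
     *   stoptimer(A); starttimer(A, (double)(t->rel_timeout - head_elapsed)); */
}

/* Called when the physical timer fires: pop the head, run its callback
 * (ideally on a worker thread), and re-arm for the next virtual timer. */
static void vtimer_expire(VTimer **head)
{
    VTimer *t = *head;
    if (!t)
        return;
    *head = t->next;
    t->callback(t->arg);
    /* if (*head) starttimer(A, (double)(*head)->rel_timeout); */
    free(t);
}

With the list above, inserting Timer4 at 300 ms after t0 with delay = 3,000 and head_elapsed = 300 walks past 400, 100, and 500 and stores 2,300, matching the example.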
I'm creating a card game in pygame for my college project, and a large aspect of the game is how the game's AI reacts to the current situation. I have a function to randomly generate a number between two parameters, and this is how long I want the program to wait.
All of the code for my AI is contained within an if statement, and once called I want the program to wait the generated amount of time and then make its decision on what to do.
Originally I had:
pygame.time.delay(calcAISpeed(AIspeed))
This would work well if it didn't pause the rest of the program while the AI is waiting, stopping the user from interacting with it. This means I cannot use while loops to create my timer either.
What is the best way to work around this without going into multi-threading or other complex solutions? My project is due in soon and I don't want to make massive changes. I've tried using pygame.time.Clock functions to compare the current time to the generated one, but resetting the clock once the operation has been performed has proved troublesome.
Thanks for the help and I look forward to your input.
The easiest way around this is to have a variable in your AI called something like wait and set it to a random number (it will have to be tweaked to your program's speed; I'll explain in the code below). Then, in your update function, have a conditional that checks whether that wait number is zero or below, and if not, subtract a certain amount of time from it. Below is a basic set of code to illustrate this...
class AI(object):
    def __init__(self):
        # put the stuff you want in your AI in here
        self.currentwait = 100
        # ^^^ All you need is this variable defined somewhere
        # If you want a static number as your wait time, add this variable
        self.wait = 100  # Your number here

    def updateAI(self):
        # If the wait number is less than or equal to zero, then do stuff
        if self.currentwait <= 0:
            pass  # Do your AI stuff here
        else:
            # Based on your game's tick speed and how long you want
            # your AI to wait, you can change the amount removed from
            # your "currentwait" variable
            self.currentwait -= 100  # Your number here
To give you an idea of what is going on above: you have a variable called currentwait, which describes the time the program still has to wait. If this number is greater than 0, there is still time to wait, so nothing gets executed; however, time is subtracted from the variable, so every tick there is less time to wait. You can control the rate using the clock tick rate. For example, if your clock rate is set to 60, you can make the program wait 1 second by setting currentwait to 60 and taking 1 off every tick until the number reaches zero.
Like I said this is very basic so you will probably have to change it to fit your program slightly, but it should do the trick. Hope this helps you and good luck with your project :)
The other option is to create a timer event on the event queue and listen for it in the event loop: How can I detect if the user has double-clicked in pygame?
I'm currently working on TinyOS and I am trying to reset a timer (let's say to 2 seconds while it is running at 45 seconds), but it is not working and I can't figure out why. Can someone help me figure it out? Here is the code:
printf("timer before resetting it %ld",call Timer1.getNow());
offset = ((TimeMote_t*) payload)->tdata;
call Timer1.startPeriodic(offset);
printf("timer after resetting it %ld",call Timer1.getNow());
Now, this should have reset the timer to offset, but it's not resetting it: both printf statements print the same time.
No, it shouldn't. Timer.getNow() returns absolute time, which can't be changed or reset. The Timer interface is used to schedule events at particular moments in the future. Timer.startPeriodic(offset) starts the timer, meaning that the event Timer.fired() will be signaled in the future: in this particular example, the event will be signaled offset units after the call to Timer.startPeriodic and then repeated every offset units, infinitely or until a call to Timer.stop(). The return value of Timer.getNow() doesn't change and increases monotonically regardless of whether the timer is started or not.
See: Interface: tos.lib.timer.Timer
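If the intent was to make the next fired() event arrive offset units from now, restart the timer itself rather than looking at getNow(). Per the Timer interface, startPeriodic() replaces any previous setting, so a sketch like this is enough:

// Restart the period: the next Timer1.fired() will come `offset` units
// from now. getNow() keeps increasing regardless; that's by design.
call Timer1.stop();    // optional: startPeriodic already replaces the old setting
call Timer1.startPeriodic(offset);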
Can someone show me how to create a non-blocking timer to delete the data in a struct?
I have this struct:
struct info {
    char buf;
    int expire;
};
Now, when expire elapses, I need to delete the data in my struct. The thing is, at the same time my program is doing something else. So how can I do this, while also avoiding the use of signals?
It won't work. The time it takes to delete the structure is most likely much less than the time it would take to arrange for the structure to be deleted later. The reason is that in order to delete the structure later, some structure has to be created to hold the information needed to find the structure later when we get around to deleting it. And then that structure itself will eventually need to be freed. For a task so small, it's not worth the overhead of dispatching.
In a different case, where the deletion is really complicated, it may be worth it. For example, if the structure contains lists or maps with numerous sub-elements that must be traversed to destroy each one, then it might be worth dispatching a thread to do the deletion.
The details vary depending on what platform and threading standard you're using. But the basic idea is that somewhere you have a function that causes a thread to be tasked with running a particular chunk of code.
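As a minimal POSIX-threads sketch of that idea (all names here are illustrative): a detached thread runs the expensive teardown so the caller never blocks on it.

#include <pthread.h>
#include <stdlib.h>

struct big { /* lists, maps, many sub-elements... */ int placeholder; };

static void big_destroy(struct big *b)
{
    /* traverse and free all the sub-elements, then the struct itself */
    free(b);
}

static void *destroy_task(void *arg)
{
    big_destroy(arg);
    return NULL;
}

/* Hand the teardown to a detached thread; fall back to doing it
 * synchronously if thread creation fails. */
static void destroy_async(struct big *b)
{
    pthread_t t;
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
    if (pthread_create(&t, &attr, destroy_task, b) != 0)
        big_destroy(b);
    pthread_attr_destroy(&attr);
}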
Update: Hmm, wait, a timer? If code is not going to access it, why not delete it now? And if code is going to access it, why are you setting the timer now? Something's fishy with your question. Don't even think of arranging to have anything deleted until everything is 100% finished with it.
If you don't want to use signals, you're going to need threads of some kind. Any more specific answer will depend on what operating system and toolchain you're using.
I think the goal is to have a timer that expires, as in client/server logic: you need to delete those entries whose time has expired, so when a timer expires, you delete that data.
If so, it can be implemented in a couple of ways.
a) Single-threaded: You create a queue sorted on the (interval - now) difference, so that the shortest span receives its callback first; you can implement the timer queue using a map in C++. Whenever your other work is done, you call the timer function to check whether any expired request is in the queue, and if so, it deletes that data. The prototypes might look like void set_timer(void (*pf)(void)); to install the timer function and void add_timer(void *context, long time_to_expire); to add a timer.
b) Multi-threaded: The add_timer logic is the same; it accesses the global map and adds the entry after taking a lock. The timer thread sleeps (on a condition variable) for the shortest time in the map. Meanwhile, if anything is added to the timer queue, it gets a notification from the thread that added the data. The reason it needs to sleep on a condition variable is that it might be handed a timer with a shorter interval than the existing minimum.
For example, suppose the first timer was for 5 seconds from now, and then a second timer arrives for 3 seconds from now. If the timer thread were in a plain sleep rather than waiting on a condition variable, it would wake up after 5 seconds, whereas it is expected to wake up after 3. A sketch of this waiting pattern follows.
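A minimal sketch of that wait loop with POSIX threads (simplified to track only the single soonest expiry; a real version pops the next-earliest entry from the sorted map):

#include <errno.h>
#include <pthread.h>
#include <time.h>

static pthread_mutex_t lock   = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  wakeup = PTHREAD_COND_INITIALIZER;
static struct timespec earliest;    /* absolute expiry of the soonest timer */
static int have_timer = 0;

/* Add a timer; if it is now the soonest, signal the timer thread so it
 * re-arms its wait. A real version also inserts into the sorted map. */
void add_timer(struct timespec when)
{
    pthread_mutex_lock(&lock);
    if (!have_timer || when.tv_sec < earliest.tv_sec ||
        (when.tv_sec == earliest.tv_sec && when.tv_nsec < earliest.tv_nsec)) {
        earliest = when;
        have_timer = 1;
        pthread_cond_signal(&wakeup);
    }
    pthread_mutex_unlock(&lock);
}

void *timer_thread(void *unused)
{
    (void)unused;
    pthread_mutex_lock(&lock);
    for (;;) {
        while (!have_timer)
            pthread_cond_wait(&wakeup, &lock);
        /* Sleep until the earliest expiry OR until add_timer() signals
         * that a sooner timer arrived (then loop around and re-arm). */
        if (pthread_cond_timedwait(&wakeup, &lock, &earliest) == ETIMEDOUT) {
            /* Expired: delete the data for this entry here, then load
             * the next-earliest entry from the map (omitted). */
            have_timer = 0;
        }
    }
}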
Hope this clarifies your question.
Cheers,