Simulate Multiple Virtual Timers with one Physical Timer - c

I am trying to implement a Selective Repeat protocol in C for a networking assignment, but I am stumped at how to simulate a timer for each individual packet. I only have access to a single timer and can only call the functions described below.
/* start timer at A or B (int), increment in time*/
extern void starttimer(int, double);
/* stop timer at A or B (int) */
extern void stoptimer(int);
Kurose and Ross mentioned in their networking textbook that
A single hardware timer can be used to mimic the
operation of multiple logical timers [Varghese 1997].
And I found the following hint for a similar assignment
You can simulate multiple virtual timers using a single physical timer. The basic idea is that you keep a chain of virtual timers ordered in their expiration time and the physical timer will go off at the first virtual timer expiration.
However, I do not have access to any time variables other than RTT as the emulator is on another layer of abstraction. How can I implement the timer for individual packets in this case?

You can do this the same way it is implemented at the kernel level. You keep a linked list of "timers" in which each timer's timeout is relative to the one before it. Say you have:
Timer1: 500 ms from t0, Timer2: 400 ms from t0, Timer3: 1000 ms from t0.
Then you build a linked list in which each element holds the timeout relative to the previous one, like this:
HEAD->Timer2(400ms)->Timer1(100ms)->Timer3(500ms)
Each element contains, at a minimum: a timer ID, the relative timeout, and the absolute creation time (a timestamp from the epoch). You can also add a callback pointer per timer.
You arm your only physical timer with the relative timeout of the first element in the list: 400 ms (Timer2).
When it fires, you remove the first element and execute the callback associated with Timer2 (ideally on a separate worker thread). Then you re-arm the physical timer with the relative timeout of the next element, Timer1: 100 ms.
Now, when you need to create a new timer, say one that expires 3,000 ms from now, at 300 ms after t0, you insert it at the proper position by walking the linked list. Its relative timeout will be 2,300 ms: compute the remaining time of the head with (Timer2.RelativeTimeout - (now - Timer2.AbsoluteInitTime)), then walk the list, subtracting each element's relative timeout from the new timer's 3,000 ms until you find the right position. Your linked list becomes:
HEAD->Timer2(400ms)->Timer1(100ms)->Timer3(500ms)->Timer4(2,300ms)
In this way you implement many logical timers with one physical timer. Creating or finding a timer is O(n), though you can add various improvements to insertion performance. Most importantly, handling a timeout and re-arming the physical timer is O(1). Deletion is O(n) to find the timer and O(1) to unlink it.
You have to take care of possible race conditions between the thread controlling the timer and threads inserting or deleting timers. One way to implement this kind of timer in user space is with condition variables and a timed wait.

Related

Increase count value of hardware timer (on µC) more than one at each timer tick

Has anyone heard of a hardware-timer which can count by different values with one timer tick?
Normally a timer of a µC counts up or down by one. But I have a challenge where I need to add e.g. 500 each timer tick.
There are multiple options for your question. Depending on your microcontroller and timer you could:
Use the timer's interrupt generation to manually increment a variable by a set amount - 500 in your case.
Change the timer prescalers so that, instead of triggering 500 times in the expected period, the timer triggers only once during that period.
I personally don't know of a timer that has a variable increment amount, but that doesn't mean it doesn't exist. Creating such a timer in VHDL or Verilog may be an option.

Global Timer Interrupt for Time in Microcontroller

I have a lot of different times to keep track of in my design, but nothing is super critical - 10 ms +/- a few ms isn't a big deal at all. But there might be 10 different timers all counting with different periods at the same time, and I obviously don't have enough dedicated hardware timers in the MSP-430 to give each one its own.
My solution is to create a single ISR for an MSP-430 hardware timer that fires at 1 kHz. It simply increments an unsigned long on each ISR entry (so each tick is 1 ms). Elsewhere in my code I can use the SET_TIMER and EXPIRED macros below to check whether a certain amount of time has elapsed. My question is: is this a good way to keep a "global" time?
Timer Definitions:
typedef unsigned long TIMER;
extern volatile TIMER Tick;
#define SET_TIMER(t,i) ((t)=Tick+(i))
#define EXPIRED(t) ((long)((t)-Tick)<0)
Timer Interrupt Service Routine:
void TIMER_B0_ISR(void)
{
    Tick++;
}
Example usage in a single file:
case DO_SOMETHING:
    if (EXPIRED(MyTimer1))
    {
        StateMachine = DO_SOMETHING_ELSE;
        SET_TIMER(MyTimer1, 100);
    }
    break;
case DO_SOMETHING_ELSE:
    if (EXPIRED(MyTimer1))
    ...
With your scheme, checking for timer wraparound is relatively costly - and you don't seem to do it currently. (You would need that check in every place where you test for "time expired", which is exactly why you normally want only one such place.)
I typically use a sorted linked list of timer expiration entries with the list head as the timer that is going to expire earliest. The ISR then only has to check this single entry and can directly notify that one single subscriber.

Tiny OS timer not resetting

I am currently working on TinyOS and trying to reset a timer - say, back to 2 seconds when it is currently at 45 seconds - but it is not working, and I can't figure out why. Can someone help me? Here is the code:
printf("timer before resetting it %ld",call Timer1.getNow());
offset = ((TimeMote_t*) payload)->tdata;
call Timer1.startPeriodic(offset);
printf("timer after resetting it %ld",call Timer1.getNow());
It should have reset the timer to offset, but it's not resetting it: both printf statements print the same time.
No, it shouldn't. Timer.getNow() returns absolute time, which can't be changed or reset. The Timer interface is used to schedule events at a particular moment in the future. Timer.startPeriodic(offset) starts the timer, meaning the event Timer.fired() will be signaled in the future - in this case, offset units after the call to Timer.startPeriodic, and then repeatedly every offset units until a call to Timer.stop(). The return value of Timer.getNow() increases monotonically regardless of whether the timer is started or not.
See: Interface: tos.lib.timer.Timer
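If the goal is to make the timer fire 2 seconds from now instead of on its old schedule, stop it and start it again; what changes is when fired() is signaled, not what getNow() returns. A sketch in nesC fragments (assuming, as in the question, that Timer1 is wired to the Timer interface and printf is available):

```nesc
// Re-schedule: cancel the pending schedule, then start again.
call Timer1.stop();
call Timer1.startPeriodic(offset);   // fired() in `offset` units, then periodically

// Observe the effect inside fired(), not by reading getNow() around the
// start call - getNow() is a monotonic clock and never resets:
event void Timer1.fired() {
    printf("fired at %ld\n", call Timer1.getNow());
}
```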

Process scheduler

So I have to implement a discrete-event CPU scheduler for my OS class, but I don't quite understand how it works. Every explanation/textbook I've read puts things in terms a little too abstract for me to figure out how it actually works, and few put things in terms of CPU bursts and IO bursts (some did, but still not helpfully enough).
I'm not posting any of the code I have (I wrote a lot, actually, but I think I'm going to rewrite it after I figure out, in the words of Trump, what is actually going on). Instead I just want help figuring out pseudocode that I can then implement.
We are given multiple processes with an Arrival Time (AT), Total Cpu (TC), Cpu burst (CB), and Io burst (IO).
Suppose that I was given: p1 (AT=1, TC=200, CB=10, IO=20) and p2 (AT=1000, TC=200, CB=20, IO=10). And suppose I am implementing a First Come First Serve scheduler.
I also put question marks (?) where I'm not sure.
Put all processes into eventQ
initialize all processes.state = CREATED
While(eventQueue not empty) process = eventQueue.getFront()
if process.state==CREATED, it can transition to READY
clock= process.AT
process.state = READY
then I add it back to the end (?) of the eventQueue.
if process.state==READY, it can transition to run
clock= process.AT + process.CPU_time_had + process.IO_time_had (?)
CPU_Burst = process.CB * Rand(b/w 0 and process.CB)
if (CB >= process.TC - process.CPU_time_had)
then it's done I don't add it back
process.finish_time = clock + CB
continue
else
process.CPU_time_had += CB
(?) Not sure if I put the process into BLOCK or READY here
Add it to the back of eventQueue (?)
if process.state==BLOCK
No idea what happens (?)
Or do things never get Blocked in FCFS (which would make sense)
Also how do IO bursts enter into this picture???
Thanks for the help guys!
Look at the arrival time of each thread: you can sort the queue so that earlier arrival times appear before later ones. Run the thread at the front of the queue (this is a thread scheduler), one burst at a time. When the burst's CPU time is up, enter a new event at the back of the queue with an arrival time of the current time plus the burst's IO time, then re-sort the queue on arrival times. This way other threads can execute while a thread is performing IO.
(My answer is assuming you are in the same class as me. [CIS*3110])

Create a non-blocking timer to erase data

Can someone show me how to create a non-blocking timer to delete the data of a struct?
I have this struct:
struct info {
    char buf;
    int expire;
};
Now, when expire elapses, I need to delete the data in my struct. The thing is, my program is doing something else at the same time, so how can I do this - ideally while avoiding the use of signals?
It won't work. The time it takes to delete the structure is most likely much less than the time it would take to arrange for it to be deleted later. The reason is that in order to delete the structure later, some other structure has to be created to hold the information needed to find the first one when we finally get around to deleting it - and then that second structure will itself eventually need to be freed. For a task this small, the overhead of dispatching isn't worth it.
In a different case, where the deletion is really complicated, it may be worth it. For example, if the structure contains lists or maps with numerous sub-elements that must be traversed to destroy each one, then it might be worth dispatching a thread to do the deletion.
The details vary depending on what platform and threading standard you're using. But the basic idea is that somewhere you have a function that causes a thread to be tasked with running a particular chunk of code.
Update: Hmm, wait, a timer? If code is not going to access it, why not delete it now? And if code is going to access it, why are you setting the timer now? Something's fishy with your question. Don't even think of arranging to have anything deleted until everything is 100% finished with it.
If you don't want to use signals, you're going to need threads of some kind. Any more specific answer will depend on what operating system and toolchain you're using.
I think the idea, as in client-server logic, is to have a timer and, when it expires, delete the entries whose time has expired.
If so, it can be implemented in a couple of ways.
a) Single-threaded: you create a queue sorted on (interval - now), so that the shortest span receives its callback first. You can implement the timer queue using a map in C++. When your other work is done, you call a timer function to check whether any expired request is in the queue; if so, it deletes that data. The prototypes might look like set_timer(void (*pf)(void)) to set the callback and add_timer(void *context, long time_to_expire) to add a timer.
b) Multi-threaded: the add_timer logic is the same, except it accesses a global map and adds entries after taking a lock. The timer thread sleeps (on a condition variable) for the shortest time in the map; if anything is added to the timer queue in the meantime, it gets a notification from the thread that added the data. It needs to sleep on a condition variable, rather than in a plain sleep, because a newly added timer may have a shorter interval than the current minimum. Suppose the first timer is for 5 seconds from now and the second is for 3 seconds from now: if the timer thread were in a plain sleep, it would wake up after 5 seconds, when it is expected to wake up after 3.
Hope this clarifies your question.
Cheers,
