My professor has given me an assignment to implement the Selective Repeat ARQ algorithm in C for packet transaction between sender and receiver.
At the sender there is a timer associated with each packet, started when that packet is sent; based on these timers it is decided which packet needs to be retransmitted.
But I don't know how to set the timer of each packet.
Please suggest some method for it.
Thanks in advance!
Keep a data structure (e.g. a priority queue or ordered map or some such) that contains each packet you're planning to (re)send, along with the time at which you're intending to (re)send it. Ideally this data structure will be such that it is efficient to determine the smallest timestamp currently in the data structure, but if the number of scheduled packets will be relatively small, a simpler unordered data structure like a linked list could work too.
On each iteration of your event loop, determine the smallest timestamp value in the data structure. Subtract the current time from that timestamp value to get a delay time (in milliseconds or microseconds or similar).
If you're using select() or similar, you can pass that delay time as your timeout argument. If you're doing something simpler without multiplexing, you might be able to get away with passing the delay time to usleep() or similar instead.
After select() (or usleep()) returns, check the current time again. If the current time is now greater than or equal to your target time, you can send the packet with the smallest timestamp, and then remove it from your data structure. (If you think you might want to resend it again later, you can re-insert it into the data structure with a new/updated timestamp value)
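For example, a rough sketch of one iteration of that loop in C (next_resend_time(), pop_earliest() and send_packet() are hypothetical helpers over whatever data structure you choose; it assumes at least one packet is currently scheduled):
#include <stddef.h>
#include <sys/select.h>
#include <sys/time.h>

struct packet;                                   /* opaque; defined elsewhere        */
extern long long      next_resend_time(void);    /* smallest scheduled time, in usec */
extern struct packet *pop_earliest(void);        /* remove that entry                */
extern void           send_packet(int fd, struct packet *p);

static long long now_usec(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (long long)tv.tv_sec * 1000000 + tv.tv_usec;
}

void event_loop_iteration(int sockfd)
{
    long long target = next_resend_time();       /* earliest scheduled (re)send */
    long long delay  = target - now_usec();      /* how long until it is due    */
    if (delay < 0)
        delay = 0;

    struct timeval timeout = { delay / 1000000, delay % 1000000 };

    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(sockfd, &readfds);

    /* Wake up either when data (e.g. an ACK) arrives or when the (re)send is due. */
    select(sockfd + 1, &readfds, NULL, NULL, &timeout);

    if (now_usec() >= target) {
        struct packet *p = pop_earliest();
        send_packet(sockfd, p);
        /* re-insert p with an updated timestamp here if it may need resending */
    }
    /* handle any readable data on sockfd here */
}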
You can also use threads for this purpose, which is quite easy and needs fewer lines of code.
You just need to create and define this function:
DWORD WINAPI packetTimer(LPVOID pn){
    //This is our packet timer; it runs on its own thread.
    //While ack[thisPacketNumber] is still false we keep checking the elapsed time.
    //If the timeout is reached we send the packet again and reset the timer.
    //Once ack[] becomes true we break out of the loop and the thread ends.
    int pno = (int)(intptr_t)pn;
    std::clock_t start;
    double duration;
    start = std::clock();
    while(1){
        if(!ack[pno]){
            duration = ( std::clock() - start ) / (double) CLOCKS_PER_SEC;
            if(duration > 0.5){
                //No ACK received for this packet yet, so send it again
                printf("SendBuffer for Packet %d: %s", pno, packets[pno]->data);
                //Resending the packet
                send_unreliably(s, packets[pno]->data, (result->ai_addr));
                //Resetting the timer
                start = std::clock();
            }
        }else{
            break;
        }
    }
    return 0;
}
And inside your while loop, where you send and receive packets, you simply call:
DWORD tid; //This should be outside the while loop,
           //ideally at the beginning of the main function
CreateThread(NULL, 0, packetTimer, (LPVOID)(intptr_t)packetNumber, 0, &tid);
This implementation is for Windows; for UNIX we need to use pthreads instead.
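For example, a rough POSIX sketch, assuming the same ack[], packets[] and send_unreliably() as above (startPacketTimer is just an illustrative wrapper name):
#include <pthread.h>
#include <stdint.h>

/* Same body as packetTimer above, but with the pthread signature. */
void *packetTimerPosix(void *pn)
{
    int pno = (int)(intptr_t)pn;
    /* ... the same retransmission loop as in packetTimer(), using pno ... */
    return NULL;
}

/* Call this inside your send loop instead of CreateThread. */
void startPacketTimer(int packetNumber)
{
    pthread_t tid;
    pthread_create(&tid, NULL, packetTimerPosix, (void *)(intptr_t)packetNumber);
    pthread_detach(tid);    /* the timer thread cleans up after itself when it exits */
}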
This is it.
And don't forget to add the required header files, like:
#include <windows.h>
#include <stdlib.h>
#include <stdint.h>
#include <cstdio>
#include <ctime>
I have a lot of different times to keep track of in my design, but nothing is super critical: 10 ms +/- a few ms isn't a big deal at all. But there might be 10 different timers all counting with different periods at the same time, and I obviously don't have enough dedicated hardware timers in the MSP-430 to give each of them its own.
My solution is to create a single ISR for an MSP-430 timer that fires at 1 kHz. It simply increments an unsigned long on each ISR entry (so each tick is 1 ms). Then elsewhere in my code I can use the SET_TIMER and EXPIRED macros below to check whether a certain amount of time has elapsed. My question is: is this a good way to keep a "global" time?
Timer Definitions:
typedef unsigned long TIMER;
extern volatile TIMER Tick;
#define SET_TIMER(t,i) ((t)=Tick+(i))
#define EXPIRED(t) ((long)((t)-Tick)<0)
Timer Interrupt Service Routine:
void TIMER_B0_ISR(void)
{
Tick++;
}
Example usage in a single file:
case DO_SOMETHING:
if (EXPIRED(MyTimer1))
{
StateMachine = DO_SOMETHING_ELSE;
SET_TIMER(MyTimer1, 100);
}
break;
case DO_SOMETHING_ELSE:
if (EXPIRED(MyTimer1))
...
Your scheme makes it relatively costly to check for timer wraparound, which you don't seem to do at the moment. (You would have to check for it in every place where you test for "time expired", which is why you normally want only one such place.)
I typically use a sorted linked list of timer expiration entries, with the list head being the timer that will expire earliest. The ISR then only has to check that single entry and can directly notify that one subscriber.
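A minimal sketch of that idea (the entry type and field names here are illustrative, not taken from your code):
/* One entry per pending software timer, kept sorted by expiry tick
 * (head = earliest). Insertion into the sorted list happens outside the ISR.
 */
typedef struct timer_entry {
    unsigned long        expires;             /* absolute tick at which it fires */
    void               (*callback)(void *arg);
    void                *arg;
    struct timer_entry  *next;
} timer_entry;

static timer_entry *timer_list;               /* head of the sorted list     */
static volatile unsigned long Tick;           /* the same 1 kHz tick counter */

/* 1 kHz tick ISR: only ever looks at the head of the list. */
void TIMER_B0_ISR(void)
{
    Tick++;
    while (timer_list && (long)(timer_list->expires - Tick) <= 0) {
        timer_entry *t = timer_list;
        timer_list = t->next;
        t->callback(t->arg);                  /* or just set a flag and run it from main() */
    }
}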
I am trying to implement a Selective Repeat protocol in C for a networking assignment, but I am stumped at how to simulate a timer for each individual packet. I only have access to a single timer and can only call the functions described below.
/* start timer at A or B (int), increment in time*/
extern void starttimer(int, double);
/* stop timer at A or B (int) */
extern void stoptimer(int);
Kurose and Ross mentioned in their networking textbook that
A single hardware timer can be used to mimic the
operation of multiple logical timers [Varghese 1997].
And I found the following hint for a similar assignment
You can simulate multiple virtual timers using a single physical timer. The basic idea is that you keep a chain of virtual timers ordered in their expiration time and the physical timer will go off at the first virtual timer expiration.
However, I do not have access to any time variables other than RTT as the emulator is on another layer of abstraction. How can I implement the timer for individual packets in this case?
You can do that in the same way it is implemented at the kernel level. You need a linked list of "timers" in which each timer stores a timeout relative to the preceding one. Say you want something like:
Timer1: 500 ms from t0, Timer2: 400 ms from t0, Timer3 1000 ms from t0.
Then you will have a linked list in which each element has the timeout relative to the previous one, like this:
HEAD->Timer2(400ms)->Timer1(100ms)->Timer3(500ms)
Each element contains at minimum: a timer ID, the relative timeout, and the absolute creation time (timestamp since epoch). You can also add a callback pointer per timer.
You use your only timer and set the timeout to the relative timeout of the first element in the list: 400ms (Timer2)
When the timeout fires you remove the first element and execute the callback associated with Timer2 (ideally on a separate worker thread). Then you re-arm the physical timer with the relative timeout of the next element, Timer1: 100 ms.
Now suppose that 300 ms after t0 you need to create a new timer, Timer4, with a 3,000 ms timeout. You insert it at the proper position by walking the linked list and accumulating relative timeouts: the time still left on the head is (Timer2.RelativeTimeout - (now - Timer2.AbsoluteInitTime)) = 400 - 300 = 100 ms, plus 100 ms for Timer1 and 500 ms for Timer3 gives 700 ms, so Timer4's relative timeout becomes 3,000 - 700 = 2,300 ms.
HEAD->Timer2(400ms)->Timer1(100ms)->Timer3(500ms)->Timer4(2,300)
In this way you implement many logical timers with one physical timer. Creating and inserting a timer is O(n), though you can add various improvements to speed up insertion. The important point is that handling a timeout and updating the list is O(1). Deletion is O(n) to find the timer and O(1) to unlink it.
You have to take care of possible race conditions between the thread controlling the timer and a thread inserting or deleting a timer. One way to implement this timer in user space is with condition variables and wait timeout.
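A rough C sketch of such a node and of the insertion logic (the struct and function names are illustrative, not from any particular kernel):
/* Each node stores its timeout relative to the node before it (a "delta list"). */
struct vtimer {
    int            id;
    long           delta_ms;      /* ms after the previous node expires */
    void         (*callback)(void *arg);
    void          *arg;
    struct vtimer *next;
};

/* Insert timer t with a timeout of delta_ms, measured from the moment the
 * head's physical timer was last armed (the caller adds any time that has
 * already elapsed on the head before calling this).
 */
void vtimer_insert(struct vtimer **head, struct vtimer *t, long delta_ms)
{
    struct vtimer **pp = head;

    /* Walk the list, consuming our delta as we pass timers that expire earlier. */
    while (*pp && (*pp)->delta_ms <= delta_ms) {
        delta_ms -= (*pp)->delta_ms;
        pp = &(*pp)->next;
    }
    t->delta_ms = delta_ms;
    t->next = *pp;
    if (t->next)                  /* the follower now waits relative to t */
        t->next->delta_ms -= delta_ms;
    *pp = t;
    /* If t became the new head, re-arm the physical timer with t->delta_ms. */
}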
So I have to implement a discrete event CPU scheduler for my OS class, but I don't quite understand how it works. Every explanation/textbook I've read puts things in terms too abstract for me to figure out how it actually works, and few of them put things in terms of CPU bursts and I/O bursts (some did, but still not helpfully enough).
I'm not posting any of the code I have (I actually wrote a lot, but I think I'm going to rewrite it after I figure out, in the words of Trump, what is actually going on). Instead I just want help figuring out a sort of pseudocode that I can then implement.
We are given multiple processes, each with an Arrival Time (AT), Total CPU (TC), CPU burst (CB), and I/O burst (IO).
Suppose that I was given: p1 (AT=1, TC=200, CB=10, IO=20) and p2 (AT=1000, TC=200, CB=20, IO=10). And suppose I am implementing a First Come First Serve scheduler.
I also put question marks (?) where I'm not sure.
Put all processes into eventQ
initialize all processes.state = CREATED
While(eventQueue not empty) process = eventQueue.getFront()
if process.state==CREATED state, it can transition to ready
clock= process.AT
process.state = READY
then I add it back to the end (?) of the eventQueue.
if process.state==READY, it can transition to run
clock= process.AT + process.CPU_time_had + process.IO_time_had (?)
CPU_Burst = process.CB * Rand(b/w 0 and process.CB)
if (CB >= process.TC - process.CPU_time_had)
then it's done I don't add it back
process.finish_time = clock + CB
continue
else
process.CPU_time_had += CB
(?) Not sure if I put the process into BLOCK or READY here
Add it to the back of eventQueue (?)
if process.state==BLOCK
No idea what happens (?)
Or do things never get Blocked in FCFS (which would make sense)
Also how do IO bursts enter into this picture???
Thanks for the help guys!
Look at the arrival time of each thread: you can sort the queue so that earlier arrival times appear before threads with later arrival times. Run the thread at the front of the queue (this is a thread scheduler). Run the thread one burst at a time; when the burst's CPU time is up, enter a new event at the back of the queue with an arrival time of the current time plus the burst's I/O time (then sort the queue again on arrival times). This way other threads can execute while a thread is performing I/O.
(My answer is assuming you are in the same class as me. [CIS*3110])
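A compact C sketch of that burst/IO cycle (the process fields mirror the question; the event-queue helpers are hypothetical):
/* The process fields mirror the question; the event queue is left opaque. */
struct process {
    int arrival_time;   /* AT: when the process (re)enters the ready queue */
    int total_cpu;      /* TC: total CPU time still needed                 */
    int cpu_burst;      /* CB: CPU time consumed per burst                 */
    int io_burst;       /* IO: time spent doing I/O after a burst          */
};

struct event_queue;                                                  /* illustrative only      */
extern int             queue_empty(struct event_queue *q);
extern struct process *queue_pop_earliest(struct event_queue *q);    /* smallest arrival_time  */
extern void            queue_push_sorted(struct event_queue *q, struct process *p);

void run_fcfs(struct event_queue *q)
{
    int clock = 0;

    while (!queue_empty(q)) {
        struct process *p = queue_pop_earliest(q);

        if (clock < p->arrival_time)        /* CPU idles until the process arrives */
            clock = p->arrival_time;

        int burst = (p->cpu_burst < p->total_cpu) ? p->cpu_burst : p->total_cpu;
        clock        += burst;              /* run one CPU burst                   */
        p->total_cpu -= burst;

        if (p->total_cpu > 0) {
            /* Block for I/O: re-enter the queue at clock + io_burst,  */
            /* which is what lets other processes run in the meantime. */
            p->arrival_time = clock + p->io_burst;
            queue_push_sorted(q, p);
        }
        /* else: the process finished at time clock */
    }
}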
Can someone show me how to create a non-blocking timer to delete the data in a struct?
I have this struct:
struct info{
char buf;
int expire;
};
Now, when the expire time is reached, I need to delete the data in my struct. The problem is that my program is doing something else at the same time, so how can I do this, preferably without using signals?
It won't work. The time it takes to delete the structure is most likely much less than the time it would take to arrange for the structure to be deleted later. The reason is that in order to delete the structure later, some structure has to be created to hold the information needed to find the structure later when we get around to deleting it. And then that structure itself will eventually need to be freed. For a task so small, it's not worth the overhead of dispatching.
In a different case, where the deletion is really complicated, it may be worth it. For example, if the structure contains lists or maps with numerous sub-elements that must be traversed to destroy each one, then it might be worth dispatching a thread to do the deletion.
The details vary depending on what platform and threading standard you're using. But the basic idea is that somewhere you have a function that causes a thread to be tasked with running a particular chunk of code.
Update: Hmm, wait, a timer? If code is not going to access it, why not delete it now? And if code is going to access it, why are you setting the timer now? Something's fishy with your question. Don't even think of arranging to have anything deleted until everything is 100% finished with it.
If you don't want to use signals, you're going to need threads of some kind. Any more specific answer will depend on what operating system and toolchain you're using.
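For example, with POSIX threads a "delete it later" version could be as small as this (using the info struct from the question; the thread just sleeps for expire seconds and then frees the data):
#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>

struct info {
    char buf;
    int  expire;                 /* seconds until the data may be deleted */
};

static void *delete_later(void *arg)
{
    struct info *p = arg;
    sleep(p->expire);            /* this thread blocks, the rest of the program does not */
    free(p);                     /* nothing else may touch p after this point            */
    return NULL;
}

void schedule_delete(struct info *p)
{
    pthread_t tid;
    pthread_create(&tid, NULL, delete_later, p);
    pthread_detach(tid);         /* fire and forget */
}
As noted above, nothing else may touch the struct once the deletion has been scheduled.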
I think the idea is that you have timers, as in typical client/server logic, and when a timer expires you need to delete the entries whose time has expired.
If so, it can be implemented in a couple of ways.
a) Single-threaded: you create a queue sorted on (expiry time - now), so that the entry with the shortest remaining time gets its callback first. You can implement the timer queue using a map in C++. When your other work is done, you call a timer function that checks whether any expired entry is in the queue and, if so, deletes its data. The prototypes might look like set_timer(void (*pf)(void)); to register the callback and add_timer(void *context, long time_to_expire); to add a timer.
b) Multi-threaded: the add_timer logic stays the same; it takes a lock and inserts the entry into the global map. A timer thread sleeps (on a condition variable) for the shortest interval in the map. If anything is added to the timer queue in the meantime, the thread that added the data notifies it. It has to sleep on a condition variable because the new timer might have a shorter interval than the current minimum.
Suppose the first timer is for 5 seconds from now and the second timer is for 3 seconds from now. If the timer thread simply slept instead of waiting on the condition variable, it would wake up after 5 seconds, whereas it should wake up after 3 seconds.
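A minimal sketch of that wake-up logic with pthread_cond_timedwait (the variable names and the "earliest deadline" bookkeeping are illustrative; the map itself is omitted):
#include <errno.h>
#include <pthread.h>
#include <time.h>

static pthread_mutex_t timer_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  timer_cond = PTHREAD_COND_INITIALIZER;
static struct timespec earliest;      /* absolute CLOCK_REALTIME deadline of the nearest timer */
static int             have_timer;    /* nonzero once the map is non-empty                     */

/* Timer thread: sleep until the earliest deadline OR until add_timer() signals
 * that a new, even earlier deadline has been inserted.
 */
void *timer_thread(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&timer_lock);
    for (;;) {
        while (!have_timer)
            pthread_cond_wait(&timer_cond, &timer_lock);

        int rc = pthread_cond_timedwait(&timer_cond, &timer_lock, &earliest);
        if (rc == ETIMEDOUT) {
            /* Deadline reached: delete the expired entry (or entries) from the map
             * here, then recompute earliest and have_timer from what is left.      */
        }
        /* rc == 0 means add_timer() signalled: loop around and re-read earliest. */
    }
    return NULL;    /* not reached */
}

/* add_timer() would, under timer_lock, insert the new deadline into the map and,
 * if it is earlier than `earliest`, update `earliest` and call
 * pthread_cond_signal(&timer_cond) so the sleeping thread re-arms itself.
 */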
Hope this clarifies your question.
Cheers,
First stackoverflow question! I've searched...I promise. I haven't found any answers to my predicament. I have...a severely aggravating problem to say the least. To make a very long story short, I am developing the infrastructure for a game where mobile applications (an Android app and an iOS app) communicate with a server using sockets to send data to a database. The back end server script (which I call BES, or Back End Server), is several thousand lines of code long. Essentially, it has a main method that accepts incoming connections to a socket and forks them off, and a method that reads the input from the socket and determines what to do with it. Most of the code lies in the methods that send and receive data from the database and sends it back to the mobile apps. All of them work fine, except for the newest method I have added. This method grabs a large amount of data from the database, encodes it as a JSON object, and sends it back to the mobile app, which also decodes it from the JSON object and does what it needs to do. My problem is that this data is very large, and most of the time does not make it across the socket in one data write. Thus, I added one additional data write into the socket that informs the app of the size of the JSON object it is about to receive. However, after this write happens, the next write sends empty data to the mobile app.
The odd thing is, when I remove this first write that sends the size of the JSON object, the actual sending of the JSON object works fine. It's just very unreliable and I have to hope that it sends it all in one read. To add more oddity to the situation, when I make the size of the data that the second write sends a huge number, the iOS app will read it properly, but it will have the data in the middle of an otherwise empty array.
What in the world is going on? Any insight is greatly appreciated! Below is just a basic snippet of my two write commands on the server side.
Keep in mind that EVERYWHERE else in this script the reads and writes work fine; this is the only place where I do two write operations back to back.
The server script runs on an Ubuntu server and is written in native C using Berkeley sockets; the iOS side uses a wrapper class called AsyncSocket.
int n;
//outputMessage contains a string that tells the mobile app how long the next message
//(returnData) will be
n = write(sock, outputMessage, sizeof(outputMessage));
if(n < 0)
//error handling is here
//returnData is a JSON encoded string (well, char[] to be exact, this is native-C)
n = write(sock, returnData, sizeof(returnData));
if(n < 0)
//error handling is here
The mobile app makes two read calls, and gets outputMessage just fine, but returnData is always just a bunch of empty data, unless I overwrite sizeof(returnData) to some hugely large number, in which case, the iOS will receive the data in the middle of an otherwise empty data object (NSData object, to be exact). It may also be important to note that the method I use on the iOS side in my AsyncSocket class reads data up to the length that it receives from the first write call. So if I tell it to read, say 10000 bytes, it will create an NSData object of that size and use it as the buffer when reading from the socket.
Any help is greatly, GREATLY appreciated. Thanks in advance everyone!
It's just very unreliable and I have to hope that it sends it all in one read.
The key to successful programming with TCP is that there is no concept of a TCP "packet" or "block" of data at the application level. The application only sees a stream of bytes, with no boundaries. When you call write() on the sending end with some data, the TCP layer may choose to slice and dice your data in any way it sees fit, including coalescing multiple blocks together.
You might write 10 bytes two times and read 5 then 15 bytes. Or maybe your receiver will see 20 bytes all at once. What you cannot do is just "hope" that some chunks of bytes you send will arrive at the other end in the same chunks.
What might be happening in your specific situation is that the two back-to-back writes are being coalesced into one, and your reading logic simply can't handle that.
Thanks for all of the feedback! I incorporated everyone's answers into the solution. I created a method that writes the data described by an iovec struct to the socket using writev instead of write. The wrapper class I'm using on the iOS side, AsyncSocket (which is fantastic, by the way... check it out here -->AsyncSocket Google Code Repo), handles the data just fine, apparently behind the scenes, as it does not require any additional effort on my part to read all of the data correctly. The AsyncSocket class now does not call my delegate method didReadData until it has received all of the data specified in the iovec struct.
Again, thank you all! This helped greatly. Literally overnight I got responses for an issue I've been up against for a week now. I look forward to becoming more involved in the stackoverflow community!
Sample code for solution:
//returnData is the JSON encoded string I am returning
//sock is my predefined socket descriptor
struct iovec iov[1];
int iovcnt = 0;
iov[0].iov_base = returnData;
iov[0].iov_len = strlen(returnData);
iovcnt = sizeof(iov) / sizeof(struct iovec);
n = writev(sock, iov, iovcnt);
if(n < 0)
//error handling here
while(n < (int)iov[0].iov_len)
//rebuild the iovec with the remaining data from returnData (from byte n to the end of the string) and call writev() again
You should really define a function write_complete that completely writes a buffer to a socket. Check the return value of write: it might also be a positive number that is smaller than the size of the buffer, in which case you need to write the remaining part of the buffer.
Oh, and using sizeof is error-prone, too. In such a write_complete function you should therefore print the given size and compare it to what you expect.
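Such a function might look like this (a sketch; it loops until every byte has been handed to the kernel):
#include <unistd.h>

/* Write exactly len bytes from buf to fd, retrying on partial writes.
 * Returns 0 on success, -1 on error.
 */
int write_complete(int fd, const char *buf, size_t len)
{
    size_t written = 0;

    while (written < len) {
        ssize_t n = write(fd, buf + written, len - written);
        if (n < 0)
            return -1;    /* real error: check errno (you may want to retry on EINTR) */
        written += (size_t)n;
    }
    return 0;
}
Calling it as write_complete(sock, returnData, strlen(returnData)) also sidesteps the sizeof pitfall.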
Ideally, on the server you want to write the header (the size) and the data atomically; I'd do that using the scatter/gather call writev(). Also, if there is any chance that multiple threads can write to the same socket concurrently, you may want to hold a mutex around the write call.
writev() will also write all the data before returning (if you are using blocking I/O).
On the client you may have to have a state machine that reads the length of the buffer and then sits in a loop reading until all the data has been received, as large buffers will be fragmented and arrive in various-sized blocks.
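For example, a sketch of that receive loop, assuming (just for illustration) that the sender prefixes each message with its length as a 4-byte network-order integer rather than a string:
#include <arpa/inet.h>
#include <stdint.h>
#include <unistd.h>

/* Read exactly len bytes into buf, looping over partial reads.
 * Returns 0 on success, -1 on error or if the peer closed the connection early.
 */
int read_complete(int fd, char *buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = read(fd, buf + got, len - got);
        if (n <= 0)
            return -1;                 /* error or EOF before the full message arrived */
        got += (size_t)n;
    }
    return 0;
}

/* State machine: first the fixed-size header, then the variable-size body. */
int read_message(int fd, char *body, size_t maxlen)
{
    uint32_t netlen;
    if (read_complete(fd, (char *)&netlen, sizeof(netlen)) < 0)
        return -1;
    uint32_t len = ntohl(netlen);      /* header = body length in network byte order */
    if (len > maxlen)
        return -1;
    return read_complete(fd, body, len);
}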