C - UDP receiving packets from unknown sources

I'm relatively new to programming C sockets and I have to solve a task in C.
There are multiple nodes in the network, each with its own settings. Each node broadcasts its current settings every second. It also has to listen for these broadcasts from other nodes and store their settings too. Finally, it has to be able to send a packet to another node directly. I'm planning to store all node settings in a struct array.
I've managed to finish the broadcast, which is implemented in its own thread, but I'm not sure what the correct procedure is for receiving packets from an unknown number of other nodes on the network and storing their addresses so I can send packets to them directly later.
Any tips?
Thanks!

Thanks for all the advice.
In the end, I just chose to compare each incoming packet's source IP with all the registered units; if no match was found, I added a new unit and stored the IP in its struct.
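As a rough sketch of that approach (the struct layout and function names here are illustrative, not the actual code), the receiving thread could look something like this:

#include <netinet/in.h>
#include <sys/socket.h>

#define MAX_UNITS 64

struct unit {
    struct sockaddr_in addr;   /* source address, kept for direct sends later */
    /* ... last received settings for this node ... */
};

static struct unit units[MAX_UNITS];
static int unit_count;

/* Find an existing unit by source IP, or register a new one. */
static struct unit *find_or_add_unit(const struct sockaddr_in *src)
{
    for (int i = 0; i < unit_count; i++)
        if (units[i].addr.sin_addr.s_addr == src->sin_addr.s_addr)
            return &units[i];
    if (unit_count == MAX_UNITS)
        return NULL;                       /* table full */
    units[unit_count].addr = *src;
    return &units[unit_count++];
}

static void receive_loop(int sock)
{
    char buf[512];
    struct sockaddr_in src;
    socklen_t srclen = sizeof src;

    for (;;) {
        ssize_t n = recvfrom(sock, buf, sizeof buf, 0,
                             (struct sockaddr *)&src, &srclen);
        if (n < 0)
            continue;                      /* or handle the error */
        struct unit *u = find_or_add_unit(&src);
        if (u != NULL) {
            /* parse the n bytes in buf and update u's stored settings */
        }
        srclen = sizeof src;               /* reset for the next recvfrom */
    }
}

Since the broadcast runs in its own thread, access to the units array would also need a mutex (or similar) once other threads read it to send packets directly.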

Related

Where can data be buffered on an RFCOMM connection?

I'm trying to pass data between two Bluetooth devices, each connected to a different computer.
After setting up an HCI device on each of the computers, I'm using RFCOMM to pass information between the two.
I'm trying to pass 10MB of random data just to check the capability of the system.
At the beginning everything seems to work fine. After a few seconds, though, there appears to be a delay between the sender and the receiver: sometimes data stops arriving, and then a "massive" amount of data suddenly arrives at the receiver, exactly as if some buffer were holding all the data. The longer I keep sending data, the longer the delay gets.
I'm trying to figure out where in the chain such a buffer could be, or how to work around this buffering.
Many thanks :)

libuv combines multiple async calls and invokes callback once

Requirement: a UDP server that, on receiving a UDP packet, stores it in one of two queues. A worker thread is associated with each queue; the associated thread picks up a packet from the front of its queue, processes it, and writes it into an in-memory cache system.
Constraints: the solution has to be based on an event loop (libuv) and written in C.
My Solution
Register a callback for incoming UDP which adds the received packet to one of the two queues and raises a uv_async_send.
Two global uv_async_t handles are created, one for each queue, and used as the parameter for uv_async_send. For example, if a packet is added to queue one, then handle 1 is passed to uv_async_send; similarly, if a packet is added to queue two, handle 2 is used.
Two threads are started, each having its own loop and a handle bound to a callback.
In thread one, handle 1 is bound to a function (say funcA).
In thread two, handle 2 is bound to another function (say funcB).
funcA and funcB read a SINGLE packet from the corresponding queue and store it in the in-memory cache.
The problem
The client sends a large number of packets, which triggers a large number of events on the server. Now the problem is that libuv coalesces multiple uv_async_send calls into one and invokes a single callback (which removes a SINGLE node from the queue). This leads to a situation where nodes are added to the queue at a much faster rate than they are removed. Can these rates be balanced?
Is there a better way to design the server using the event-loop library libuv?
Since you are queueing the packets in one thread but processing in another, it's possible that they work at slightly different rates. I'd use a thread-safe queue (have a look at concurrencykit.org) and process the entire queue on the async callback, instead of just processing a single packet.
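A minimal sketch of that suggestion, assuming a simple linked-list queue protected by a uv_mutex_t (the queue layout and cache_store() are placeholders, not part of the original design; libuv 1.x callback signature):

#include <stdlib.h>
#include <uv.h>

typedef struct node {
    void *pkt;
    struct node *next;
} node_t;

static node_t *queue_head;      /* one of the two queues (simplified to one here) */
static uv_mutex_t queue_lock;   /* protects queue_head */

void cache_store(void *pkt);    /* hypothetical in-memory cache write */

/* Bound to the uv_async_t for this queue. */
static void on_async(uv_async_t *handle)
{
    for (;;) {
        /* Take the whole batch in one go; one wakeup may stand for many
         * uv_async_send calls, so never assume "one callback = one packet". */
        uv_mutex_lock(&queue_lock);
        node_t *n = queue_head;
        queue_head = NULL;
        uv_mutex_unlock(&queue_lock);

        if (n == NULL)
            break;              /* queue drained */

        while (n != NULL) {
            node_t *next = n->next;
            cache_store(n->pkt);
            free(n);
            n = next;
        }
    }
}

The same pattern would apply to each of the two queues, with its own handle and lock.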

x number of threads sending data to Server for displaying output on GUI

I have developed a single server/multiple client TCP Application.
The client consists of x number of threads, each thread processing its own data and then sending it over the TCP socket to the server for display.
The server is basically a GUI with a window; it receives data from the client and displays it.
Now, the problem is that since there are 40 threads inside the client and each thread wants to send data, how can I achieve this using one connected socket?
My Suggestion:
My approach was to create a data structure inside each of the 40 threads in which the data to be sent will be maintained. A separate Send Thread, with the one connected socket on the client side, is then created. This thread will read data from the data structure of the first thread, send it over the socket, then read the data from the second thread, and so on.
Confusions:
But I am not sure how this would be implemented, as I am new to all this. :( What if a thread is writing to its data structure while the Send Thread tries to read the data at the same time? I am familiar with mutexes, critical sections, etc., but that sounds too complex for my simple application.
Any other suggestions/comments other than my own suggestion are welcome.
If you think my own approach is correct, then please help me resolve the confusions I mentioned above.
Thanks a lot in advance :)
Edit:
Can I put a timer on the Send Thread so that, after a specific time, it suspends Thread #1 (so that it can access its data structure without any synchronization issues), reads data from its data structure, sends it over the TCP socket, and resumes Thread #1; then suspends Thread #2, reads data from its data structure, sends it over the TCP socket, resumes Thread #2, and so on?
A common approach is to have one thread dedicated to sending the data. The other threads post their data into a shared container (list, deque, etc) and signal the sender thread that data is available. The sender then wakes up and processes whatever data is available.
EDIT:
The gist of it is as follows:
HANDLE data_available_event;        // manual-reset event; set when the queue has data, reset when it is empty
CRITICAL_SECTION cs;                // protects access to the data queue
std::deque<std::string> data_to_send;

void WorkerThread()
{
    while(do_work)
    {
        std::string data = generate_data();
        EnterCriticalSection(&cs);
        data_to_send.push_back(data);
        SetEvent(data_available_event);   // signal the sender thread that data is available
        LeaveCriticalSection(&cs);
    }
}

void SenderThread()
{
    while(do_work)
    {
        WaitForSingleObject(data_available_event, INFINITE);
        EnterCriticalSection(&cs);
        std::string data = data_to_send.front();
        data_to_send.pop_front();
        if(data_to_send.empty())
        {
            ResetEvent(data_available_event);  // queue is empty; reset the event and wait until more data arrives
        }
        LeaveCriticalSection(&cs);
        send_data(data);
    }
}
This is of course assuming the data can be sent in any order. I use strings only for illustrative purposes; you probably want some kind of custom object that knows how to serialize the data it holds.
Suspending Thread #1 so you can access its data structure does not avoid synchronization issues. When you suspend it, Thread #1 could be in the midst of an update to the data, so the socket thread gets part of the old data and part of the new. That is data corruption.
You need a shared data structure such as a FIFO queue. The worker threads add to the queue, and the socket thread removes the oldest item from the queue. All access to this shared queue must be protected with a critical section unless you implement a lock-free queue (for example, a circular buffer; see the sketch below).
Depending on your application needs, if you implement this queue you might not need the socket thread at all. Just do the dequeueing in the display thread.
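If you do go the lock-free route, the usual shape is a single-producer/single-consumer ring buffer, one per worker thread, all drained by the sender thread. A rough sketch using C11 atomics, purely for illustration:

#include <stdatomic.h>
#include <stdbool.h>

#define RING_SIZE 256                  /* must be a power of two */

typedef struct {
    void *slots[RING_SIZE];
    _Atomic unsigned head;             /* advanced by the consumer (sender thread) */
    _Atomic unsigned tail;             /* advanced by the producer (worker thread) */
} ring_t;

static bool ring_push(ring_t *r, void *item)   /* called only by the worker */
{
    unsigned tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    unsigned head = atomic_load_explicit(&r->head, memory_order_acquire);
    if (tail - head == RING_SIZE)
        return false;                  /* full: caller can retry or drop */
    r->slots[tail & (RING_SIZE - 1)] = item;
    atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
    return true;
}

static bool ring_pop(ring_t *r, void **item)   /* called only by the sender */
{
    unsigned head = atomic_load_explicit(&r->head, memory_order_relaxed);
    unsigned tail = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (head == tail)
        return false;                  /* empty */
    *item = r->slots[head & (RING_SIZE - 1)];
    atomic_store_explicit(&r->head, head + 1, memory_order_release);
    return true;
}

With one ring per worker there is exactly one producer and one consumer per ring, which is what keeps this simple version safe; a single ring shared by all 40 workers would need a multi-producer queue or a lock instead.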
There are a couple of ways of achieving this; Luke's idea suffers from race conditions that will still create data corruption.
You can avoid that by using UDP instead of TCP as the transport protocol. It'd be an especially good choice if you don't mind missing an occasional packet (which is okay when displaying rapidly changing data); it's well suited to real-time updates of data where the exact history doesn't matter (missing a point in a relatively smooth curve while plotting graphs is okay).
If the data packets are small and more or less represent a stream, then UDP is a great choice. Its benefit increases if you have multiple senders on different systems all displaying on a single screen.

What shall we do to get a C# Silverlight TCP packet in one piece?

When we send a large amount of data to the client, its ReceiveAsync event is called more than once, and each time we get only a piece of the packet.
What should we do to get the C# Silverlight TCP data in one piece and through one event?
Thank you in advance.
You can't. The very nature of TCP is that data gets broken up into packets. Keep receiving data until you've got the whole message (whatever that will be). Some options for this:
First send the size of the message before the message itself (see the sketch below)
Close the connection when the message has been sent (so the client can basically read until the connection is closed)
Add a delimiter to indicate the end of the message
I generally dislike the final option, as it means "understanding" the message as you're reading it, which can be tricky - and may mean you need to add escape sequences etc if your delimiter can naturally occur within the message.
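As an illustration of the first option (in C, for brevity; the Silverlight ReceiveAsync version follows the same pattern of accumulating bytes until the announced length has arrived), assuming the sender writes a 4-byte network-order length before each message:

#include <stdint.h>
#include <stddef.h>
#include <sys/socket.h>
#include <arpa/inet.h>

/* Read exactly len bytes, looping over short reads; returns 0 on success. */
static int recv_all(int fd, void *buf, size_t len)
{
    char *p = buf;
    while (len > 0) {
        ssize_t n = recv(fd, p, len, 0);
        if (n <= 0)
            return -1;                 /* error or connection closed */
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

static int recv_message(int fd, char *out, size_t max)
{
    uint32_t netlen;
    if (recv_all(fd, &netlen, sizeof netlen) != 0)
        return -1;
    uint32_t len = ntohl(netlen);      /* sender wrote the length first */
    if (len > max)
        return -1;                     /* message too large for the buffer */
    return recv_all(fd, out, len) == 0 ? (int)len : -1;
}

The key point is the loop: a single receive call may hand back any fraction of the message, so the code keeps reading until the announced length has been consumed.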

"Sliding Window" - Is it possible to add reliability to a protocol and avoid flow control Implementation? [closed]

As part of a personal project, I am making an application-level protocol (encapsulated in UDP) which is reliable.
For the implementation of reliability, I have to keep track of which packets I send and which packets are received at the other end. This is done with the help of a sliding window, which also maintains the flow control.
Is there a way to implement reliability apart from the standard sliding window / flow control technique?
If no, will someone share their experience, design rationale, or code and discuss it in this post?
If yes, have you implemented it, or do you know of any implementation of the concept?
It's sad that the TCP/IP stack doesn't include a reliable datagram protocol, but it just doesn't. You can search for lots of attempts and proposals.
If this is a personal project, your time is probably the most scarce resource, and unless your goal is to reinvent this particular wheel, just build your protocol on top of TCP and move on.
After reading all these answers, learning from other implementations, and reading some papers, I am writing up what is most pertinent. First let me talk about flow control, and later I will talk about reliability.
There are two kinds of flow control:
Rate based -- packet transmission is paced over time.
Window based -- the standard window-based approach, which can be static or dynamic (sliding window).
Rate-based flow control is difficult to implement, as it is based on an RTT (round-trip time -- not as simple as ping's RTT) calculation. If you decide to provide a proprietary congestion-control system, and you are providing it from the current release, then you can go for rate-based flow control. There is also explicit congestion control, but it is router dependent, so it is out of the picture.
In window-based flow control, a window is used to keep track of all sent packets until the sender is sure the receiver has received them. A static window is simple to implement, but throughput will be miserable. A dynamic window (also known as a sliding window) is a better implementation, though a little more complex, and it depends on the kind of acknowledgement mechanism used.
Now reliability...
Reliability is making sure the receiver has received your packet/information, which means the receiver has to tell the sender "yes, I got it." This notification mechanism is called acknowledgement.
Now, of course, one also needs throughput for the data transferred, so you should be able to send as many packets as you can, i.e. MAX[sender's sending limit, receiver's receiving limit], provided you have available bandwidth at both ends and throughout the path.
Combining all of this: although reliability and flow control are different concepts, implementation-wise the underlying mechanism is best realized with a sliding window.
So finally, in short: for me or anyone else designing a new application protocol that needs to be reliable, a sliding window is the way to achieve it. If you are planning to implement congestion control as well, you might as well use a hybrid (window + rate based) approach, UDT for example.
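To make the sliding-window bookkeeping concrete, here is a rough sketch of the sender-side state in C (the window size, packet limit, and cumulative-ACK convention are assumptions for illustration; retransmission timers and congestion control are omitted):

#include <stdint.h>
#include <string.h>

#define WINDOW_SIZE 32
#define MAX_PKT 1400

typedef struct {
    uint32_t seq;              /* sequence number of the packet in this slot */
    size_t   len;
    char     data[MAX_PKT];    /* copy kept in case retransmission is needed */
    int      in_use;           /* 1 while waiting for an ACK */
} slot_t;

typedef struct {
    uint32_t base;             /* oldest unacked sequence number */
    uint32_t next_seq;         /* next sequence number to assign */
    slot_t   window[WINDOW_SIZE];
} sender_t;

/* Buffer a packet for (re)transmission; returns -1 if the window is full. */
static int window_add(sender_t *s, const char *data, size_t len)
{
    if (len > MAX_PKT || s->next_seq - s->base >= WINDOW_SIZE)
        return -1;                         /* too big, or wait for ACKs */
    slot_t *slot = &s->window[s->next_seq % WINDOW_SIZE];
    slot->seq = s->next_seq++;
    slot->len = len;
    memcpy(slot->data, data, len);
    slot->in_use = 1;
    return 0;
}

/* Cumulative ACK: everything below ack_seq has been received. */
static void window_ack(sender_t *s, uint32_t ack_seq)
{
    while (s->base != ack_seq && s->window[s->base % WINDOW_SIZE].in_use) {
        s->window[s->base % WINDOW_SIZE].in_use = 0;
        s->base++;
    }
}

The flow-control side falls out of the same structure: the sender simply stops when next_seq - base reaches the window size, and the receiver can shrink or grow the advertised window to throttle it.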
I sort of agree with Nik on this one; it sounds like you're using UDP to do TCP's job: reliable transmission and flow control. However, sometimes there are reasons to do this yourself.
To answer your questions: there are UDP-based protocols that do reliable transmission and don't care much about ordering, meaning only dropped packets carry a performance penalty for making it to the destination (by requiring retransmission).
The best example of this that we use daily is a protocol called RADIUS, which we use for authentication and accounting on our EVDO network. Each packet between the source and destination gets an identifier field (RADIUS only uses 1 byte; you may want more), and each identifier needs to be acked. Now, RADIUS doesn't really use the concept of a sliding window, since it's really just control-plane traffic, but it's entirely viable to borrow the concept from TCP. So, because each packet needs to be acked, you buffer copies of outgoing packets until they are acked by the remote endpoint, and everyone is happy. Flow control can use feedback from this acknowledgement mechanism to scale the packet rate up or down, which may be most easily controlled by the size of the list of packets transmitted but awaiting acknowledgement.
Just keep in mind, there are literally decades of research into the use of sliding window and adjustments to TCP/IP stacks around packet loss and the sliding window, so you may have something that works, but it may be difficult to replicate the same thing you get in a highly tweaked stack on a modern OS.
The real benefit of this method is when you need reliable transport but you really don't need ordering, because you're sending disjoint pieces of information. This is where TCP/IP breaks down, because a dropped packet stops all subsequent packets from making it up the stack until it is retransmitted.
You could have a simple Send->Ack protocol: for every packet you require an Ack before proceeding with the next. (Effectively this is a window size of 1 packet, which I wouldn't call a sliding window. :-)
You could have something like the following:
Part 1: Initialization
1) Sender sends a packet with the # of packets to be sent. This packet may require some kind of control bit to be set or be a different size so that the receiver can distinguish it from regular packets.
2) Receiver sends ACK to Sender.
3) Repeat steps 1-2 until ACK received by Sender.
Part 2: Send bulk of data
4) Sender then sends all packets with a sequence number attached to the front of the data portion.
5) Receiver receives all the packets and arranges them as specified by the sequence numbers. The receiver keeps a data structure to keep track of which sequence numbers have been received.
Part 3: Send missing data
6) After some timeout period has elapsed with no more packets received, the Receiver sends a message to the Sender requesting the missing packets.
7) Sender sends the missing packets to the Receiver.
8) Repeat steps 6-7 until Receiver has received all required packets.
9) Receiver sends a special "Done" packet to the Sender.
10) Sender sends ACK to Receiver.
11) Repeat steps 9-10 until the ACK is received by the Receiver.
Data Structure
There are a few ways the Receiver can keep track of the missing sequence numbers. The most straightforward way would be to keep a boolean for each sequence number, but this will be extremely inefficient. You may instead want to keep a list of missing packet ranges. For example, if there are 100 total packets, you would start with a list with one element: [(1-100)]. Each time a packet is received, update the range that contains it (shrinking it from either end, or splitting it in two). Let's say you receive packets 1-12 successfully, miss packets 13-14, receive 15-44, and miss 45-46; then you end up with something like [(13-14), (45-46)]. It would be quite easy to put this data structure into a packet and send it off to the Sender. You could maybe make it even better by using a tree instead; I'm not sure.
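As an illustration of that range list (the fixed-size array and names are my own, purely for the sketch):

#include <stdint.h>

#define MAX_RANGES 128

typedef struct { uint32_t lo, hi; } range_t;     /* inclusive bounds */

typedef struct {
    range_t missing[MAX_RANGES];
    int     count;
} gap_list_t;

static void gaps_init(gap_list_t *g, uint32_t first, uint32_t last)
{
    g->missing[0].lo = first;
    g->missing[0].hi = last;
    g->count = 1;
}

/* Mark sequence number seq as received. */
static void gaps_mark_received(gap_list_t *g, uint32_t seq)
{
    for (int i = 0; i < g->count; i++) {
        range_t *r = &g->missing[i];
        if (seq < r->lo || seq > r->hi)
            continue;                            /* not in this gap */
        if (r->lo == r->hi) {                    /* gap closes entirely */
            for (int j = i; j < g->count - 1; j++)
                g->missing[j] = g->missing[j + 1];
            g->count--;
        } else if (seq == r->lo) {
            r->lo++;                             /* shrink from the front */
        } else if (seq == r->hi) {
            r->hi--;                             /* shrink from the back */
        } else if (g->count < MAX_RANGES) {      /* split the gap in two */
            for (int j = g->count; j > i + 1; j--)
                g->missing[j] = g->missing[j - 1];
            g->missing[i + 1].lo = seq + 1;
            g->missing[i + 1].hi = r->hi;
            r->hi = seq - 1;
            g->count++;
        }
        return;
    }
}

A "request missing packets" message in step 6 would then simply be the current contents of the missing array.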
You might want to look at SCTP - it's reliable and message-oriented.
It might be a better approach to maintain local state in the UDP applications to check that the necessary data has been transferred, confirming reliability at that level, rather than trying to do complete packet-level reliability and flow control.
Trying to replicate TCP's reliability and flow-control mechanisms on a UDP path is not a good answer at all.
