I'm coding part of a somewhat complex communication protocol to control multiple medical devices from a single computer terminal. The terminal needs to manage about 20 such devices, and every device uses the same communication protocol, called DEP. I've created a loop that multiplexes over the different devices to send requests and receive the patient data associated with each device. The general structure of this loop is something like this:
Begin Loop
    Select Device i
        if Device.Socket has Data
            Strip Header
            Copy Data onto Queue
        end if
        rem_time = TIMEOUT - (CurrentTime - Device.Session.LastRequestTime)
        if rem_time <= 0
            Send Re-association Request to Device
        else
            Sort Pending Requests According to Time
            Select First Request
                Send the Request
                Set Request Priority to Least
            end Select
        end if
    end Select
end Loop
I might have made some mistakes in the pseudo-code above, but I hope it makes clear what this loop is trying to do. I have a priority-list structure that selects the device and the pending request for that device, so that all the requests and devices are serviced at reasonably optimal intervals.
I forgot to mention that the loop above does not actually parse the received data; it only strips off the header and puts the data in a queue. The data in the queue is parsed in a different thread and recorded in a file or database.
I wish to add a feature so that other computers can also import the data and control the devices attached to the computer terminal remotely. For this, I would need to create a socket that listens for commands in this INFINITE LOOP and sends the data to the different thread where the PARSING is performed.
Now, my question to all the concurrency experts is this:
Is it a good design to use a single socket for reading and writing from two different threads, where each thread is strictly involved in either reading or writing but not both? Also, I believe a socket is synchronized at the process level, so do I need locks to synchronize the reads and writes over one socket from different threads?
There is nothing inherently wrong with having multiple threads handle a single socket; however, there are many good and bad designs based around this one very general idea. If you do not want to rediscover the problems as you code your application, I suggest you search around for designs that best fit your planned particular style of packet handling.
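For the specific split asked about (one thread that only reads, another that only writes), here is a minimal sketch. On POSIX it is safe for one thread to call recv() while another calls send() on the same connected socket, so no lock is needed around the socket itself; any shared request queue, however, still needs its own synchronization. Names and buffer sizes below are illustrative only:

/* Sketch: one thread strictly receives, one strictly sends, on the same fd.
 * Thread creation (pthread_create) and the outgoing request queue are omitted;
 * the queue, not the socket, is what would need a mutex. */
#include <pthread.h>
#include <sys/socket.h>

static void *reader_thread(void *arg)
{
    int fd = *(int *)arg;
    char buf[1024];
    ssize_t n;
    while ((n = recv(fd, buf, sizeof buf, 0)) > 0) {
        /* strip the header and push the payload onto the parsing queue ... */
    }
    return NULL;
}

static void *writer_thread(void *arg)
{
    int fd = *(int *)arg;
    /* in a real implementation: pop the next pending request from a
     * mutex-protected queue in a loop and send it */
    const char request[] = "REQ";
    send(fd, request, sizeof request - 1, 0);
    return NULL;
}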
There is also nothing inherently wrong with having a single thread handle a single socket; however, if you put the logic handling on that thread, then you have selected a bad design, because that thread cannot handle new requests while it is "working" on the last request.
In your particular code, you might have an issue. If your packets support fragmentation, or even if your algorithm gets a little ahead of the hardware due to timing issues, you might have just part of the packet "received" in the buffer. In that case, your algorithm will fail in two ways.
It will process a partial packet, one which contains only the first part of its data.
It will mis-process the subsequent packet, as the information in the buffer will not start with a valid packet header.
Such failures are difficult to anticipate and diagnose until they are encountered. Perhaps your library already buffers and splits messages; perhaps not.
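For illustration only, since the DEP framing is not described in the question, here is a sketch of the usual fix: accumulate received bytes and only hand complete frames onward. The 2-byte big-endian length prefix is an assumption standing in for whatever the real header provides:

/* Sketch: reassemble length-prefixed frames from a TCP byte stream.
 * ASSUMPTION: each frame starts with a 2-byte big-endian payload length.
 * Frames larger than the buffer are not handled in this sketch. */
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>

#define ACC_SIZE 4096

struct accumulator {
    uint8_t buf[ACC_SIZE];
    size_t  used;
};

/* Call whenever select()/poll() reports the socket readable.
 * Returns the number of complete frames handed to handle_frame(),
 * or -1 on error / peer close. */
static int drain_socket(int fd, struct accumulator *acc,
                        void (*handle_frame)(const uint8_t *payload, size_t len))
{
    ssize_t n = recv(fd, acc->buf + acc->used, ACC_SIZE - acc->used, 0);
    if (n <= 0)
        return -1;
    acc->used += (size_t)n;

    int frames = 0;
    for (;;) {
        if (acc->used < 2)
            break;                               /* header not complete yet */
        size_t len = ((size_t)acc->buf[0] << 8) | acc->buf[1];
        if (acc->used < 2 + len)
            break;                               /* payload not complete yet */
        handle_frame(acc->buf + 2, len);
        memmove(acc->buf, acc->buf + 2 + len, acc->used - 2 - len);
        acc->used -= 2 + len;
        frames++;
    }
    return frames;
}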
In short, your design is not dictated by how many threads are accessing your socket: how many threads access your socket is dictated by your design.
I am an experienced network programmer and am faced with a situation where I need some advice.
I am required to distribute some data over several outgoing interfaces (via different TCP socket connections, each corresponding to one interface). However, the important part is that I should be able to send more (ideally most) of the data on the interface with better bandwidth, i.e. the one that can send faster.
My idea was to use the select API (on both Unix and Windows) for this purpose. I have used select, poll and even epoll in the past, but always for READING from multiple sockets whenever data was available.
Here I intend to write successive packets on several interfaces in sequence, then monitor each of them for write readiness (via select), and whichever becomes available first (meaning it was able to send its packet first), I would keep sending more packets via that descriptor.
Will I be able to achieve my intention here? That is, if I have an interface on a 10 Mbps link versus another on a 1 Mbps link, I hope to get most of the packets out via the faster interface.
Update 1: I was wondering what select's behavior would be in this case. When you call select on read descriptors, the ones with data available are returned. However, in my scenario, when we are writing on the descriptors and waiting for select to return the one that finished writing first, does select guarantee it returns only when the entire packet has been written? Say I tried writing 1200 bytes in one go: will it return only when all 1200 bytes have been written, or when there is a permanent error? I am not sure how select behaves here and have failed to find any documentation describing it.
I'd adopt the producer/consumer pattern. In this case, one producer and several consumers.
Let the main thread handle your source (be the producer) and spawn off one thread for each connection (being the consumers).
The threads pull chunks of the source in parallel and send them over their connections one by one.
The thread holding the fastest connection is expected to send the most chunks in this setup.
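A minimal sketch of that split, assuming the source is already in memory as one buffer and each consumer owns an already-connected blocking TCP socket (the struct and function names are mine, and thread creation is omitted):

/* Sketch of the producer/consumer split described above. */
#include <pthread.h>
#include <stddef.h>
#include <sys/socket.h>

#define CHUNK 1400

struct source {
    const char     *data;
    size_t          len;
    size_t          next;       /* offset of the next unsent chunk */
    pthread_mutex_t lock;
};

struct consumer {
    int            fd;          /* already-connected TCP socket */
    struct source *src;
};

static void *consumer_thread(void *arg)
{
    struct consumer *c = arg;
    for (;;) {
        pthread_mutex_lock(&c->src->lock);
        size_t off = c->src->next;
        size_t n = c->src->len - off;
        if (n == 0) {                           /* nothing left to claim */
            pthread_mutex_unlock(&c->src->lock);
            break;
        }
        if (n > CHUNK)
            n = CHUNK;
        c->src->next = off + n;                 /* claim this chunk */
        pthread_mutex_unlock(&c->src->lock);

        /* blocking send: a slow link keeps this thread busy longer,
         * so the faster link naturally claims more chunks */
        if (send(c->fd, c->src->data + off, n, 0) < 0)
            break;
    }
    return NULL;
}

Note that with several links the chunks arrive out of order at the far end, so in practice each chunk would need a small header with an offset or sequence number for reassembly; that detail is left out of the sketch.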
Using poll/epoll/select for writing is rather tricky. The reason is that sockets are mostly ready for writing unless their socket send buffer is full. So, polling for 'writable' is apt to just spin without ever waiting.
You need to proceed as follows:
1. When you have something to write to a socket, write it, in a loop that terminates when all the data has been written or write() returns -1 with errno == EAGAIN/EWOULDBLOCK.
2. At that point you have a full socket send buffer. So, you need to register this socket with the selector/poll/epoll for writability.
3. When you have nothing else to do, select/poll/epoll and repeat the writes that caused the associated sockets to be polled for writability.
4. Do those writes the same way as at (1), but this time, if the write completes, deregister the socket for writability.
In other words, you must only select/poll for writability if you already know the socket's send buffer is full, and you must stop doing so as soon as you know it isn't.
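A minimal sketch of steps (1), (2) and (4) using epoll, assuming a non-blocking socket already registered with the epoll instance for reads; error handling is trimmed and the function name is mine:

/* Sketch: write until the send buffer fills, then (and only then)
 * ask epoll to watch the socket for writability. */
#include <errno.h>
#include <sys/epoll.h>
#include <unistd.h>

/* Returns 1 if everything was written, 0 if the socket blocked and has
 * been registered for EPOLLOUT, -1 on error. */
static int write_or_register(int epfd, int fd, const char *buf, size_t len,
                             size_t *written)
{
    while (*written < len) {
        ssize_t n = write(fd, buf + *written, len - *written);
        if (n > 0) {
            *written += (size_t)n;
            continue;
        }
        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            /* step (2): send buffer is full, start watching for writability */
            struct epoll_event ev = { .events = EPOLLIN | EPOLLOUT,
                                      .data.fd = fd };
            epoll_ctl(epfd, EPOLL_CTL_MOD, fd, &ev);
            return 0;
        }
        return -1;
    }
    /* step (4): all data written, stop watching for writability */
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };
    epoll_ctl(epfd, EPOLL_CTL_MOD, fd, &ev);
    return 1;
}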
How you fit all this into your application is another question.
I have developed a single-server/multiple-client UDP application, where the server can handle x clients at a time. The server has x threads, each thread dedicated to one client.
The code works perfectly fine. Now I want to check my application for all possible scenarios, i.e. validate my application. For this purpose, I need to design a test bed.
Initial Design:
The test bed I initially designed has the following functionality:
1. The Server GUI has a button on it. When the button is clicked, each thread in the Server reads a text file, picks up a few bytes of the text file, and sends that chunk to its respective client. The thread then picks the next chunk of bytes from the text file, sends it to the client, and so on until EOF is reached.
2. The Client on the other side keeps receiving these chunks of bytes, creates a text file, and keeps storing these chunks of bytes in its text file.
3. When EOF from the Server is received, the Client starts sending the completely received text file back to the Server over its socket.
4. When the file is completely received back (echoed), the Server compares the two text files, the sent file and the echoed one. If both files are the same, the communication process has occurred without any fault and the communication protocol is validated.
The above-mentioned validation technique (sending the text file, receiving the echoed file and then comparing both) checks the following things:
The number of bytes sent = the number of bytes received.
No data is corrupted.
The data is received in the proper order.
If any of these three conditions is not fulfilled, there is some error in the communication.
Now I have been asked to make changes to this test bed and add more functionality to it. Can the procedure that I am using actually check the above-mentioned three conditions in all scenarios?
Are there other conditions that must be checked besides the three mentioned above?
What could be other methods of checking the communication protocol besides the one I designed, i.e. sending a text file, getting it echoed back and comparing?
I have to add more functionality to this test bed to make the validation system more efficient, or completely replace the above test bed with some better option.
Please help me with your suggestions.
Thanks in advance :)
The first two of your conditions are guaranteed by UDP. Picking "a few bytes", i.e. anything less than 65535 bytes (64 KiB isn't really a "few" bytes), will result in a single datagram being sent, and anything larger than that will fail. You will not want to max out the largest possible datagram size, though, as it will incur IP fragmentation (staying below 1280 bytes is a good idea).
You will be able to receive exactly the amount you sent or nothing at all, never more or less. UDP does not guarantee that any datagram that is sent out arrives (it cannot guarantee that, since IP does not), but it does guarantee that the entire datagram arrives as-is, or nothing at all. Never anything in between.
It further guarantees that the data inside the datagram matches its checksum (the underlying protocols including IP/ethernet/ATM further do their own checksumming) and thus arrives in the same binary representation as it was sent. In other words, data arrives in order (inside the datagram) and is not corrupted.
It is of course in theory possible that a bit error passes all three layers of checksums, but this is extremely unlikely and will not happen in practice. Unless you need to guard against someone maliciously tampering with packets, you do not need to worry. The kinds of bit errors that happen accidentally are reliably picked up by the checksums used in these protocols.
If, on the other hand, you do need to guard against malicious modification of your data, you must add a MAC (or a checksum, and encrypt the entire packet; adding a checksum alone is useless).
To ensure that data spanning several datagrams arrives in order, you must add sequence numbers to your packets (in the same manner TCP does). And with that, you might as well use TCP, which is likely more efficient and less error-prone. The main reason to use UDP is normally that in-order delivery and reliability are not needed, or sometimes that reliability is needed but in-order delivery is not.
In-order delivery is the main cause of TCP's latency during packet loss (in the absence of packet loss, TCP is exactly as "fast" as UDP), so if in-order delivery is needed, there is no sane reason not to use TCP in the first place. It is a protocol that has been fine-tuned and has worked reliably for literally billions of people for four decades.
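If you nonetheless stay with UDP and need ordering across datagrams, a minimal application header carrying a sequence number might look like the sketch below (field names and sizes are assumptions, not part of your protocol):

/* Sketch: per-datagram header for ordering/reassembly over UDP. */
#include <stdint.h>
#include <arpa/inet.h>   /* htonl/htons for network byte order */

struct app_header {
    uint32_t seq;         /* monotonically increasing sequence number */
    uint16_t payload_len;
    uint16_t flags;       /* e.g. a LAST_CHUNK bit to signal EOF */
};

/* Fill a header in network byte order before prepending it to a datagram. */
static void make_header(struct app_header *h, uint32_t seq,
                        uint16_t payload_len, uint16_t flags)
{
    h->seq = htonl(seq);
    h->payload_len = htons(payload_len);
    h->flags = htons(flags);
}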
Also, using one socket and one thread per client is possibly not the best approach. The disk won't read any faster, and the network card won't send any faster either. UDP doesn't need a socket per client either. When using TCP, you'll have no choice but to use one socket per client, but multiplexing with a readiness-notification system will still give you much better performance and fewer opportunities for threading errors.
Also, sending back a checksum such as one from the SHA family (or a MAC, if it needs to be secure) may be more efficient than echoing back the whole lot of data. The likelihood that the checksum matches while the data accidentally doesn't is negligible.
Entire revision-control systems that manage millions of lines of code for millions of people (such as git) rely on this to identify files: such collisions just don't happen (well, in theory they can, of course; you just won't live to see one).
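As a sketch of the digest idea, assuming OpenSSL is available on both sides (the function and variable names here are mine): the client hashes the bytes it received, sends only the digest back, and the server compares it against the digest of what it sent.

/* Sketch: compare the client's echoed digest with the expected one. */
#include <openssl/sha.h>
#include <string.h>

/* Returns 1 if the digest sent back by the client matches the digest of
 * the file the server originally sent. */
static int digests_match(const unsigned char *sent_file, size_t sent_len,
                         const unsigned char *echoed_digest)
{
    unsigned char expected[SHA256_DIGEST_LENGTH];
    SHA256(sent_file, sent_len, expected);
    return memcmp(expected, echoed_digest, SHA256_DIGEST_LENGTH) == 0;
}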
I have a question here: why UDP and not TCP, especially when you are worried about packet order and data corruption? In my view (I may be wrong), UDP is good only when the data is time-sensitive, like a video stream.
Secondly, yes, there are other methods of checking the integrity of transmitted data. The simplest may be checking an MD5 or SHA-1 checksum.
Can the procedure that I am using actually check the above-mentioned three conditions in all scenarios?
Yes.
What could be other methods of checking the communication protocol besides the one I designed, i.e. sending a text file, getting it echoed back and comparing?
It doesn't have to be a file, but it has to be something you can check once you get the response. You could just generate some random data and hold on to it until you get the response.
You'd have to tell us what you really want to test. If you are trying to make sure that UDP doesn't give you bad data or out-of-order data, you're using the wrong protocol. By checking whether you get the exact data in the exact order you sent it over UDP, you're not testing anything except the networking infrastructure you have in place.
You say you want to test your application for "all possible scenarios", but that doesn't really mean anything. You're testing whether a behavior that is part of the UDP specification exists, hoping to show that it doesn't? Well, it does, even if you never see it.
I'm writing a small library in C for controlling various embedded devices. I'm using UDP sockets to communicate with each of the devices. The devices send various interesting data, alarms and notifications, and at the same time they send some data that is used internally by the library but may not be interesting to users. So, I've implemented a callback approach, where a user can register a callback function for the interesting events on each of the devices. Right now, the overall design of this library is something like this:
I have two threads running.
In one of the threads, there is an infinite event loop that uses select and non-blocking sockets to maintain communication with each of the devices.
Basically, every time I receive a packet from any of the devices, I strip off the header, which is 20 bytes of some useless information, and add my own header containing DEVICE_ID, REQUEST_TIME (the time the request was sent to retrieve that packet), RETRIEVAL_TIME (the time the packet actually arrived), REQUEST_ID and REQUEST_TYPE (alarm, data, notification, etc.).
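For concreteness, the replacement header could be modelled as a struct like the one below (a sketch only; the exact field widths and the REQUEST_TYPE values are my assumptions, not part of the library):

/* Sketch of the header thread #1 prepends before queueing a packet. */
#include <stdint.h>
#include <time.h>

enum request_type { REQ_ALARM, REQ_DATA, REQ_NOTIFICATION };

typedef struct {
    uint16_t          device_id;
    uint32_t          request_id;
    enum request_type request_type;
    struct timespec   request_time;    /* when the request was sent */
    struct timespec   retrieval_time;  /* when the reply actually arrived */
} packet_header_t;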
This thread (the one with the infinite loop) then puts the packet with the new header into a ring buffer and notifies the other thread (thread #2) to parse this information.
In thread #2, when the notification is received, it locks the buffer, pops the packet, and starts parsing it.
Every message contains some information the user may not be interested in, so I'm providing the callback approach to act only upon the data that is useful to the user.
Basically, I'm doing something like this in thread #2:
THREAD #2

lock(buffer_mutex)
wait(data_put_in_buffer_cond, buffer_mutex)   /* wait with the mutex held */
packet_t* packet = pop_packet_from_buffer(buf);
unlock(buffer_mutex)

/* parse the packet... */
parsed_packet_t* parsed_packet = parse_and_change_endianess(packet->data);

/* the header was put by thread #1 in host byte order, so no parsing necessary */
header_t* header = get_header(packet);

/* thread #1 sets a free callback for each kind of packet it puts in the buffer.
 * This is not a critical section of the buffer, so it is fine without locks. */
buffer.free_callback(packet);

foreach attribute in parsed_packet->attribute_list {
    register_info_t* rinfo = USER_REGISTRED_EVENT_TABLE[header->device_id][attribute.attr_id];
    /* the user registered for this attribute ID on this device ID */
    if (rinfo != NULL) {
        rinfo->callback(packet);
    }
    // Do some other stuff with this attribute...
}
free(parsed_packet);
Now, my concern is: what happens if the callback function the user implements takes some time to complete, and meanwhile I drop some packets because the ring buffer is in overwriting mode? I've tested my API with 3 to 4 devices, and I don't see many drop events even if the callback function waits a decent amount of time, but I suspect this approach may not be the best.
Would it be a better design to use some sort of thread pool to run the user callback functions? In that case, would I need to make an explicit copy of the packet before I send it to the user callback? Each packet is about 500 to 700 bytes, and I get around 2 packets per second from each device. Any suggestions or comments on improving the current design or solving this issue would be appreciated.
Handling 500-700 bytes per packet at 2 packets per second per device is not a problem at all, especially if you only have 3-4 devices. Even if you had, let's say, 100 devices, it should not be a problem, and the copy overhead would most probably be negligible. So my suggestion would be: do not try to optimize beforehand until you are certain that buffer copying is your bottleneck.
About losing packets: as you say in your question, you are already using a ring buffer (I assume that is something like a circular queue, right?). If the queue becomes full, then you just need to make thread #1 wait until there is some available space in the queue. Clearly, more events from different devices may arrive in the meantime, but that should not be a problem: once you have space again, select will let you know that you have data available from the different devices, and you just process all of it then. Of course, to have a balanced system, you can set the size of the queue to a value that reduces as much as possible the number of times the queue is full and thread #1 needs to wait.
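A sketch of that "block the producer when full" behaviour, using a condition variable (the names are mine, not your API's; the mutex and condition variables must be initialized, e.g. with PTHREAD_MUTEX_INITIALIZER / PTHREAD_COND_INITIALIZER):

/* Sketch: bounded ring buffer whose push blocks while the queue is full,
 * so thread #1 waits instead of overwriting unparsed packets. */
#include <pthread.h>
#include <stddef.h>

#define RING_CAP 256

typedef struct {
    void           *slots[RING_CAP];
    size_t          head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t  not_full;
    pthread_cond_t  not_empty;
} ring_t;

void ring_push(ring_t *r, void *packet)      /* called by thread #1 */
{
    pthread_mutex_lock(&r->lock);
    while (r->count == RING_CAP)             /* full: wait, don't overwrite */
        pthread_cond_wait(&r->not_full, &r->lock);
    r->slots[r->tail] = packet;
    r->tail = (r->tail + 1) % RING_CAP;
    r->count++;
    pthread_cond_signal(&r->not_empty);
    pthread_mutex_unlock(&r->lock);
}

void *ring_pop(ring_t *r)                    /* called by thread #2 */
{
    pthread_mutex_lock(&r->lock);
    while (r->count == 0)
        pthread_cond_wait(&r->not_empty, &r->lock);
    void *packet = r->slots[r->head];
    r->head = (r->head + 1) % RING_CAP;
    r->count--;
    pthread_cond_signal(&r->not_full);
    pthread_mutex_unlock(&r->lock);
    return packet;
}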
I'm writing a program in linux to interface, through serial, with a piece of hardware. The device sends packets of approximately 30-40 bytes at about 10Hz. This software module will interface with others and communicate via IPC so it must perform a specific IPC sleep to allow it to receive messages that it's subscribed to when it isn't doing anything useful.
Currently my code looks something like:
while (1) {
    IPC_sleep(some_time);
    read_serial();
    process_serial_data();
}
The problem with this is that sometimes the read will be performed while only a fraction of the next packet is available at the serial port, which means that it isn't all read until next time around the loop. For the specific application it is preferable that the data is read as soon as it's available, and that the program doesn't block while reading.
What's the best solution to this problem?
The best solution is not to sleep! What I mean is that a good solution is probably to mix the IPC events and the serial events. select is a good tool to do this; then you have to find an IPC mechanism that is select-compatible:
socket-based IPC is select()able
pipe-based IPC is select()able
POSIX message queues are also select()able
And then your loop looks like this:
while (1) {
    fd_set readfds;

    FD_ZERO(&readfds);
    FD_SET(serial_fd, &readfds);
    FD_SET(ipc_fd, &readfds);

    /* still schematic: max_fd is the larger of serial_fd and ipc_fd;
     * error handling omitted */
    select(max_fd + 1, &readfds, NULL, NULL, NULL);

    if (FD_ISSET(serial_fd, &readfds)) {
        parse_serial(serial_fd, serial_context);
        if (complete_serial_message)
            process_serial_data(serial_context);
    }
    if (FD_ISSET(ipc_fd, &readfds)) {
        do_ipc();
    }
}
read_serial is replaced with parse_serial, because if you spend all your time waiting for a complete serial packet, then all the benefit of the select is lost. But from your question, it seems you are already doing that, since you mention getting serial data across two different loop iterations.
With the proposed architecture you get good reactivity on both the IPC and the serial side: you read serial data as soon as it is available, but without stopping to process IPC.
Of course, it assumes you can change the IPC mechanism. If you can't, perhaps you can make a "bridge process" that interfaces on one side with whatever IPC you are stuck with, and on the other side uses a select()able IPC to communicate with your serial code.
Store what you have received of the message so far in a buffer of some sort.
If you don't want to block while waiting for new data, use something like select() on the serial port to check that more data is available. If not, you can continue doing some processing or whatever needs to be done instead of blocking until there is data to fetch.
When the rest of the data arrives, add to the buffer and check if there is enough to comprise a complete message. If there is, process it and remove it from the buffer.
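A sketch of that check on the serial descriptor, using select() with a zero timeout so the call never blocks (serial_fd, the buffer size and the message-completeness test are placeholders of my choosing):

/* Sketch: poll the serial fd without blocking, append whatever is
 * available to a persistent buffer, and then test for a full message. */
#include <sys/select.h>
#include <unistd.h>

static char   msg_buf[512];
static size_t msg_len;

void poll_serial(int serial_fd)
{
    fd_set rfds;
    struct timeval tv = { 0, 0 };        /* zero timeout: just a check */

    FD_ZERO(&rfds);
    FD_SET(serial_fd, &rfds);
    if (select(serial_fd + 1, &rfds, NULL, NULL, &tv) > 0 &&
        FD_ISSET(serial_fd, &rfds)) {
        ssize_t n = read(serial_fd, msg_buf + msg_len,
                         sizeof msg_buf - msg_len);
        if (n > 0)
            msg_len += (size_t)n;
    }
    /* if msg_buf now holds a complete packet, process it and shift any
     * remaining bytes to the front of the buffer */
}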
You must buffer enough of the message to know whether it is, or will become, a complete valid message.
If it is not valid, or will not be complete within an acceptable timeframe, then you toss it; otherwise, you keep it and process it.
This is typically called implementing a parser for the device's protocol.
This is the algorithm (blocking) that is needed:
while(! complete_packet(p) && time_taken < timeout)
{
p += reading_device.read(); //only blocks for t << 1sec.
time_taken.update();
}
//now you have a complete packet or a timeout.
You can intersperse a callback if you like, or inject relevant portions in your processing loops.
Basically, I set up a test to see which method is the fastest way to get data from another computer on my network, for a server with only a few clients (10 at most, 1 at minimum).
I tried two methods, both done in a thread-per-client fashion, and looped the read 10000 times. I timed the loop from the creation of the threads to the joining of the threads right after. In my threads I used these two methods; both used standard read(2)/write(2) calls and SOCK_STREAM/AF_INET:
In one, my client polled for data and read (non-blocking) whenever data was available, and my server sent data immediately whenever it got a connection. My thread returned on a read of the correct number of bytes (which happened every time).
In the other, my client sent a message to the server on connect, and my server sent a message to my client on a read (both sides blocked here to make this more turn-based and synchronous). My thread returned after my client's read.
I was pretty sure polling would be faster. I made a histogram of times to complete threads and, as expected, polling was faster by a slight margin, but two things about the read/write method were unexpected. Firstly, the read/write method gave me two distinct time spikes, i.e. some event occasionally occurred that slowed the read/write down by about .01 microseconds. I ran this test over a switch initially and thought this might be a collision of packets, but then I ran the server and client on the same computer and still got these two different time spikes. Does anyone know what event may be occurring?
The other issue: my read function sometimes returned too many bytes, and some of the bytes were garbage. I know streams don't guarantee you'll get all the data correctly, but why would the read function return extra garbage bytes?
It seems you are confusing the purposes of these two alternatives:
The connection-per-thread approach does not need polling (unless your protocol allows a random sequence of messages in either direction, which would be very confusing to implement). Blocking reads and writes will always be faster here, since you skip one extra system call to select(2)/poll(2)/epoll(7).
The polling approach lets you multiplex I/O on many sockets/files in a single-threaded or fixed-number-of-threads setup. This is how web servers like nginx handle thousands of client connections in very few threads. The idea is that waiting on any given file descriptor does not block the others; you wait on all of them at once.
So I would say you are comparing apples and goblins :) Take a look here:
High Performance Server Architecture
The C10K problem
libevent
As for the spikes: check whether TCP goes into retransmission, i.e. one of the sides is not reading fast enough to drain the receive buffers; play with the SO_RCVBUF and SO_SNDBUF socket options.
Too many bytes is definitely wrong and looks like API misuse: check whether you are comparing signed and unsigned numbers, and compile with a high warning level.
Edit:
It looks like you are dealing with two separate issues: data corruption and data-transfer performance. I would strongly recommend focusing on the first one before tackling the second. Reduce the test to a minimum and try to figure out what you are doing wrong with the sockets, i.e. where that garbage data comes from. Do you check the return values of the read(2) and write(2) calls? Do you share buffers between threads? Paste the reduced code sample into the question (or provide a link to it) if you are really stuck.
Hope this helps.
I know streams don't guarantee you'll get all the data correctly, but why would the read function return extra garbage bytes?
Actually, streams do guarantee you will get all the data correctly, and in order. Datagrams (UDP) are what you were thinking of, SOCK_DGRAM, which is not what you are using. Within AF_INET, SOCK_STREAM means TCP and TCP means reliable.
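One practical note (my inference about the likely bug, not something stated in the question): because SOCK_STREAM delivers a byte stream, a single read(2) may return fewer bytes than the peer wrote, and only the first n bytes of the buffer are then valid; anything beyond that is whatever was in the buffer before. A sketch of reading an exact amount from a stream socket:

/* Sketch: read exactly `want` bytes from a stream socket.
 * Returns 0 on success, -1 on error or premature EOF. */
#include <unistd.h>

static int read_exact(int fd, char *buf, size_t want)
{
    size_t got = 0;
    while (got < want) {
        ssize_t n = read(fd, buf + got, want - got);
        if (n == 0)
            return -1;              /* peer closed before `want` bytes */
        if (n < 0)
            return -1;              /* real error (check errno) */
        got += (size_t)n;           /* only these n bytes are valid */
    }
    return 0;
}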