In my program, UDP sockets are stored in a fixed-size array; a socket can be removed from the array, which results in a restructuring of the array to keep it sequential.
I want to create a thread that waits for a certain message on those sockets, and I would like to use non-blocking I/O for that task.
My problem is that the array of sockets can change over time. With poll, if I want to take those changes into account, I need to modify the array of pollfd structures. But if poll has already been called, how can I make it take those changes into account?
I could still use a timeout, but I find that solution unsatisfactory because it is essentially a busy wait (I hope that's the right term in English).
Thank you in advance
I have made a multi-client server which uses select() to determine which clients are currently sending. However, I want to send data that is larger than my buffer size (e.g. text from a file) while remaining non-blocking.
At first I found solutions that place the send/recv calls inside while loops, with the loop condition being the number of bytes sent, but wouldn't that block the server for a certain amount of time, especially if the file's contents are large?
I was thinking of sending, say, 1024 bytes in one iteration of my server's main loop, then the next 1024 bytes to the client on the following iteration, and so on. That would have consequences on the client side, though; possibly the client could ask the server for the next x bytes with each query?
Please let me know if there is a standard way to go about this. Thanks.
You don't need to do anything special for this. Your sockets are presumably already configured as non-blocking, so when you write to them, pass as much data as you have, and check the return value to see how much was actually sent. Then keep the rest of the data in a buffer, and wait until the file descriptor is ready again before attempting to write more.
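For illustration, here is a minimal sketch of that buffering approach in C (the struct and function names are my own, and error handling is trimmed): send as much as the kernel will take, keep the remainder, and call the flush again when select()/poll() reports the descriptor writable.

    #include <errno.h>
    #include <string.h>
    #include <sys/socket.h>

    struct outbuf {
        char   data[4096];
        size_t len;              /* bytes still waiting to be sent */
    };

    /* Call whenever the descriptor is reported writable. */
    void flush_outbuf(int fd, struct outbuf *ob)
    {
        while (ob->len > 0) {
            ssize_t n = send(fd, ob->data, ob->len, 0);
            if (n < 0) {
                if (errno == EAGAIN || errno == EWOULDBLOCK)
                    return;      /* kernel buffer full: wait and retry later */
                return;          /* real error: handle/report it here */
            }
            /* slide the unsent remainder to the front of the buffer */
            memmove(ob->data, ob->data + n, ob->len - (size_t)n);
            ob->len -= (size_t)n;
        }
    }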
I have a real-time system, so I use non-blocking sockets to send my data.
But sometimes the socket buffer is full, so the send function's return value is less than my data length.
If I just save the returned length and re-send the rest, how is that any different from a blocking socket?
So, can I get the remaining size of the socket buffer? Then I could check it first, and call send only if there is enough room, skipping the send otherwise.
Thank you, all.
Well, there is a difference between blocking and non-blocking: if you experience a short write, you don't block. That's the whole point of non-blocking. It gives you an opportunity to do something more pressing while waiting for some buffer space to free up.
Your concern seems to be the repeated attempts to write a full message, that is, a form of polling. But checking the number of free bytes in the buffer is the same thing; you are just substituting the availability call for the write call. You really don't gain anything efficiency-wise.
The commonplace solution is to use something like select or poll to monitor the socket descriptor for the ability to write (at least some) bytes. This lets you stop polling and foist the work of monitoring space availability off onto the kernel.
That said, if you really want to check how much space is available, there are usually workarounds, but they tend to be somewhat platform-specific, mostly ioctl calls with various platform-specific request codes such as FIONWRITE or SIOCOUTQ. You would need to investigate exactly what your platform provides. But, again, it is better to first consider whether this is really something you need.
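For what it's worth, a Linux-flavoured sketch of that check might look like the following (SIOCOUTQ comes from <linux/sockios.h>; other platforms use different ioctls). Note it is advisory only: the value can change before you actually call send, and Linux reports SO_SNDBUF with kernel bookkeeping overhead included, so treat the result as a rough estimate.

    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <linux/sockios.h>

    long send_buffer_free(int fd)
    {
        int queued = 0;                 /* bytes queued but not yet sent */
        int sndbuf = 0;                 /* total send buffer size */
        socklen_t len = sizeof(sndbuf);

        if (ioctl(fd, SIOCOUTQ, &queued) < 0)
            return -1;
        if (getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len) < 0)
            return -1;

        return (long)sndbuf - queued;   /* rough free-space estimate */
    }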
If the asynchronous send fails with EWOULDBLOCK/EAGAIN, no data is sent. You could then try to send something else, or wait until the buffer is free again.
Also see https://stackoverflow.com/questions/19391208/when-a-non-blocking-send-only-transfers-partial-data-can-we-assume-it-would-r - a related issue is discussed there.
I'm currently writing a proxy application that reads from one socket and writes to another. Both are set as non-blocking, allowing multiple socket pairs to be handled.
To keep a proper flow between the sockets, the application should NOT read from the source socket if writing to the target socket may block.
The idea is nice; however, I have found no way to detect a blocking state of the target socket without first writing to it... and that is not what is needed.
I know of the option of using SIOCOUTQ (via ioctl()) to calculate the remaining buffer space, but this seems ugly compared to a simple check of whether the target socket is ready for writing.
I guess I could also use select() for just this one socket, but that seems like a waste of such a heavy system call.
select or poll should be able to give you this information.
I assume you're already using one of them to detect which of your reading sockets has data.
When a reading socket becomes available for reading, replace it with the corresponding writing socket (in the write fds this time, of course) and call select again. Then, once the writing socket is reported ready, you can read and write.
Note that it's possible the writing socket is ready to accept data, but not as much as you want. So you might manage to read 100 bytes but write only 50.
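As a rough sketch of that swap (using poll() here, though select() works the same way; one socket pair, error handling omitted):

    #include <poll.h>
    #include <unistd.h>

    void relay_once(int src, int dst)
    {
        char buf[1024];
        struct pollfd pfd;

        pfd.fd = src;                 /* first, wait for data on the source */
        pfd.events = POLLIN;
        poll(&pfd, 1, -1);

        pfd.fd = dst;                 /* then wait until the destination can */
        pfd.events = POLLOUT;         /* take at least some bytes            */
        poll(&pfd, 1, -1);

        ssize_t n = read(src, buf, sizeof(buf));
        if (n > 0)
            write(dst, buf, (size_t)n);   /* may still be a short write */
    }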
Thank you all for the feedback.
Summarizing all comments and answers up until now:
To answer the question directly: the only known way to detect that a socket would block on a write attempt is to use select/poll/epoll.
The original objective was to build a proxy which reads from one socket and writes to another, keeping a proper balance between them (reading at the same rate as writing and vice versa) and using no major buffering in the application. For this, the following options were presented:
Use SIOCOUTQ to find out how much buffer space is left on the destination socket and transmit no more than that. As pointed out by @ugoren, this has the disadvantage of being unreliable, mainly because between reading the value, calculating with it, and attempting to write, the actual value may change. It can also introduce busy-waiting if wrongly managed. If this technique is to be used, it should probably be backed by a more reliable one for full protection.
Use select/poll/epoll with a small, limited buffer per read socket. Initially, all read sockets are added to the poll. When one becomes ready for reading, we read into the limited buffer, then remove the read socket from the poll and add the destination socket for writing in its place. When the poll reports that the destination socket is ready for writing, we write the buffer; once all the data has been accepted by the socket, we remove the destination socket from the poll and put the read socket back. The disadvantage is that we increase the number of system calls (adding to and removing from the poll/select), and we need to keep an internal buffer per socket. This seems to be the preferred approach, and some optimizations could reduce the number of system calls (for example, trying to write immediately after reading from the read socket, and only doing the above if something is left over).
Thank you all again for participating in this discussion, it helped me a lot ordering the ideas.
This isn't a show-stopping programming problem as such, but perhaps more of a design-pattern issue. I'd have thought it would be a common design issue on embedded, resource-limited systems, but none of the questions I've found so far on SO seem relevant (please point out anything relevant that I may have missed).
Essentially, I'm trying to work out the best strategy of estimating the largest buffer size required by some writer function, when that writer function's output isn't fixed, particularly because some of the data are text strings of variable length.
This is a C application that runs on a small ARM micro. The application needs to send various message types via TCP socket. When I want to send a TCP packet, the TCP stack (Keil RL) provides me with a buffer (which the library allocates from its own pool) into which I may write the packet data payload. That buffer size depends of course on the MSS; so let's assume it's 1460 at most, but it could be smaller.
Once I have this buffer, I pass it and its length to a writer function, which in turn may call various nested writer functions in order to build the complete message. The reason for this structure is that I'm actually generating a small XML document, where each writer function typically generates a specific XML element. Each writer function wants to write a number of bytes to my allocated TCP packet buffer. I only know exactly how many bytes a given writer function writes at run-time, because some of the encapsulated content depends on user-defined text strings of variable length.
Some messages need to be around (say) 2K in size, meaning they're likely to be split across at least two TCP packet send operations. Those messages will be constructed by calling a series of writer functions that produce, say, a hundred bytes at a time.
Prior to making a call to each writer function, or perhaps within the writer function itself, I initially need to compare the buffer space available with how much that writer function requires; and if there isn't enough space available, then transmit that packet and continue writing into a fresh packet later.
Possible solutions I am considering are:
Use another, much larger buffer to write everything into initially. This isn't preferred because of resource constraints. Furthermore, I would still want a means of working out algorithmically how much space my message writer functions need.
At compile time, produce a 'worst case size' constant for each writer function. Each writer function typically generates an XML element such as <START_TAG>[string]</START_TAG>, so I could have something like: #define SPACE_NEEDED ( START_TAG_LENGTH + START_TAG_LENGTH + MAX_STRING_LENGTH + SOME_MARGIN ). All of my content writer functions are picked out of a table of function pointers anyway, so the worst-case size estimate constants could exist as a new column in that table (a sketch of how this might look follows this list). At run-time, I check the buffer room against that estimate constant. This is probably my favourite solution at the moment; the only downside is that it relies on correct maintenance to keep working.
My writer functions provide a special 'dummy run' mode, in which they run through and calculate how many bytes they want to write, but don't actually write anything. This could be achieved by simply passing NULL in place of the buffer pointer, in which case the function's return value (which usually states the amount written to the buffer) just states how much it wants to write. The only thing I don't like about this is that, between the 'dummy' and the 'real' call, the underlying data could, at least in theory, change. A possible solution for that would be to take a static snapshot of the underlying data.
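To illustrate the second option, a sketch of how such a table might look (all names and sizes here are illustrative, not taken from the actual code):

    #include <stddef.h>

    #define START_TAG_LENGTH   16
    #define MAX_STRING_LENGTH  64
    #define SOME_MARGIN        8
    #define SPACE_NEEDED (START_TAG_LENGTH + START_TAG_LENGTH + \
                          MAX_STRING_LENGTH + SOME_MARGIN)

    typedef size_t (*writer_fn)(char *buf, size_t room);

    struct writer_entry {
        writer_fn write;        /* emits one XML element into buf */
        size_t    worst_case;   /* upper bound on bytes it may write */
    };

    size_t write_name_element(char *buf, size_t room);   /* hypothetical */

    static const struct writer_entry writers[] = {
        { write_name_element, SPACE_NEEDED },
        /* ... one row per element writer ... */
    };

The sender walks this table, and before invoking each entry's write function it compares worst_case against the room left in the packet buffer.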
Thanks in advance for any thoughts and comments.
Solution
Something I had actually already started doing since posting the question was to make each content writer function accept a state, or 'iteration', parameter, which allows the writer to be called many times over by the TCP send function. The writer is called until it flags that it has nothing more to write. If, after a certain iteration, the TCP send function decides the buffer is nearing full, it sends the packet, and the process continues later with a new packet buffer. This technique is very similar, I think, to Max's answer, which I've therefore accepted.
A key point is that, on each iteration, a content writer must be designed so that it won't write more than LENGTH bytes to the buffer; and after each call to the writer, the TCP send function checks that it has LENGTH bytes of room left in the packet buffer before calling the writer again. If not, it continues in a new packet.
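A minimal sketch of this iterated-writer pattern (tcp_send() and write_next_chunk() are hypothetical stand-ins for the real routines, not the actual API):

    #include <stddef.h>

    #define LENGTH      128
    #define PACKET_SIZE 1460                /* MSS-sized payload buffer */

    typedef struct {
        int step;                           /* writer's position, kept across calls */
    } writer_state;

    size_t write_next_chunk(writer_state *st, char *buf, int *done); /* hypothetical */
    void   tcp_send(const char *buf, size_t len);                    /* hypothetical */

    void send_message(writer_state *st)
    {
        char   packet[PACKET_SIZE];
        size_t used = 0;
        int    done = 0;

        while (!done) {
            if (PACKET_SIZE - used < LENGTH) {  /* not enough room for a chunk */
                tcp_send(packet, used);         /* ship this packet now */
                used = 0;                       /* continue in a fresh buffer */
            }
            used += write_next_chunk(st, packet + used, &done);
        }
        if (used > 0)
            tcp_send(packet, used);             /* flush the final packet */
    }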
Another step I took was to think seriously about how I structure my message headers. It became apparent that, as with almost all protocols that use TCP, it is essential to build into the application protocol some means of indicating the total message length, because TCP is a stream-based protocol, not a packet-based one. This is where it got to be a bit of a headache, because I needed some up-front means of knowing the total message length to insert into the starting header. The simple solution was to insert a message header at the start of every sent TCP packet, rather than only at the start of the application-protocol message (which may of course span several TCP packets), and to essentially implement fragmentation. So, in the header, I implemented two flags: a fragment flag and a last-fragment flag. The length field in each header therefore only needs to state the size of the payload in that particular packet. At the receiving end, individual header+payload chunks are read out of the stream and reassembled into a complete protocol message.
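Hypothetically, such a per-packet header might look something like this (the field widths are assumptions, not the exact wire format used):

    #include <stdint.h>

    #define FLAG_FRAGMENT       0x01   /* payload is part of a larger message */
    #define FLAG_LAST_FRAGMENT  0x02   /* this packet completes the message   */

    struct msg_header {
        uint16_t payload_len;          /* payload bytes in THIS packet only */
        uint8_t  flags;
    };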
This is no doubt, very simplistically, how HTTP and so many other protocols work over TCP. It's just quite interesting that only once I attempted to write a robust protocol over TCP did I start to realise the importance of really thinking through the message structure in terms of headers, framing, and so forth, so that it works over a stream protocol.
I had a related problem in a much smaller embedded system, running on a PIC 16 micro-controller (and written in assembly language rather than C). My 'buffer size' was always going to be the two-byte UART transmit queue, and I had only one 'writer' function, which walked a DOM and emitted its XML serialisation.
The solution I came up with was to turn the problem 'inside out'. The writer function becomes a task: each time it is called, it writes as many bytes as it can (which may be more than 2, depending on the serial data transmission rate) until the transmit buffer is full, then returns. However, it remembers, in a state variable, how far through the DOM it had got, and the next time it is called it carries on from the point previously reached. If there is no free buffer space, it returns immediately without changing its state. The writer task is called repeatedly from an infinite loop, which acts as a round-robin scheduler for this task and the others in the system. Each time around the loop there is a delay that waits for the TMR0 timer to overflow, so each task gets called exactly once per fixed time slice.
In my implementation, the data is transmitted by a TxEmpty interrupt routine, but it could also be sent by another task.
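Expressed in C rather than the original PIC assembly, the structure is something like this (every helper name here is illustrative):

    typedef struct {
        int dom_pos;                        /* how far through the DOM we are */
    } xml_task_state;

    int  uart_tx_has_room(void);            /* hypothetical buffer check */
    int  emit_next_byte(xml_task_state *);  /* hypothetical: 0 when DOM is done */
    void other_tasks(void);                 /* hypothetical */
    void tick_wait(void);                   /* hypothetical: wait for TMR0 overflow */

    void xml_writer_task(xml_task_state *st)
    {
        while (uart_tx_has_room()) {
            if (!emit_next_byte(st))        /* serialisation finished */
                return;
        }
        /* buffer full: state stays in *st, resume on the next time slice */
    }

    void scheduler(void)
    {
        xml_task_state xml = { 0 };
        for (;;) {                          /* round-robin loop */
            xml_writer_task(&xml);
            other_tasks();
            tick_wait();                    /* one fixed time slice per pass */
        }
    }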
I guess the 'pattern' here is that one role of the program counter is to hold the current state of the flow of control, and that this role can be abstracted away from the PC to another data structure.
Obviously, this isn't immediately applicable to your larger, higher-level system. But it is a different way of looking at the problem, which may spark your own particular insight.
Good luck!
I have a client using select() to check whether there is anything to be received; otherwise it times out and the user is able to send(). This works well enough. However, the program locks up waiting for user input, so it can't recv() again until the user has sent something.
I am not having much luck with threads either, as I cannot seem to find a good resource that shows me how to use them.
I have tried creating two threads (using CreateThread) for the send and recv functions, which are basically two functions using while loops to keep sending and receiving. The two CreateThread() calls are then wrapped in a while loop, because otherwise the program just seemed to drop out.
I have basically no previous experience with threads, so my description of what I've been doing probably sounds ridiculous. But I would appreciate any help in using them properly for this kind of task; an alternative method would also be great.
Can't find a good resource on socket programming? Bah. Read the bible:
Unix Network Programming
This has a strong feel of assignment work...
I am not too much into Windows programming, but when I did some earlier there was a kbhit() function (_kbhit() in <conio.h>) that allowed you to check whether the user had pressed a key. You could look for something similar and try a non-blocking check for user input before starting your select() again.
You could then get this working without multi-threading (unless, er, you have to use that).
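A bare-bones sketch of that approach (Windows-specific, assuming Winsock and _kbhit() from <conio.h>): give select() a short timeout, then check the keyboard, so neither recv() nor user input can block the loop.

    #include <winsock2.h>
    #include <conio.h>

    void client_loop(SOCKET s)
    {
        char buf[512];

        for (;;) {
            fd_set readfds;
            struct timeval tv = { 0, 100000 };       /* 100 ms timeout */

            FD_ZERO(&readfds);
            FD_SET(s, &readfds);

            /* on Windows the first select() argument is ignored */
            if (select(0, &readfds, NULL, NULL, &tv) > 0)
                recv(s, buf, sizeof(buf), 0);        /* data waiting: read it */

            if (_kbhit()) {
                /* a key was pressed: read the user's line and send() it */
            }
        }
    }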