How to find out how much I should read from a socket? - Silverlight

In .NET there is the DataAvailable property on NetworkStream and the Available property on TcpClient.
However, Silverlight lacks both.
Should I send a header with the length of the message? I'd rather not waste network resources.
Is there any other way?

You are micro-optimizing. Why do you think that another 4 bytes would affect performance?
In other words: Use a length header.
Update
I saw your comment on the other answer. You are using BeginRead in the wrong way: it will never block or wait until the entire buffer has been filled.
You should declare a buffer which can receive your entire message. The return value from EndRead will report the number of bytes received.
You should also know that TCP is stream-based. There are no guarantees that your entire JSON message will be received at once (or that only your first message is received). Therefore you must have some way of knowing when a message is complete.
And I say it again: a length header will hardly affect performance.
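For illustration, here is a minimal sketch in C of the length-prefixed framing being suggested (the helper names are invented for this example; the same pattern maps onto BeginRead/EndRead in Silverlight): read a 4-byte big-endian length header in full, then exactly that many payload bytes.

#include <stddef.h>
#include <stdint.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* Read exactly len bytes, looping over partial reads (TCP is a stream). */
static int read_fully(int fd, void *buf, size_t len)
{
    uint8_t *p = buf;
    while (len > 0) {
        ssize_t n = recv(fd, p, len, 0);
        if (n <= 0)
            return -1;              /* error or peer closed the connection */
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

/* Receive one length-prefixed message: 4-byte length header, then payload. */
static int recv_message(int fd, uint8_t *out, size_t cap, size_t *msg_len)
{
    uint32_t netlen;
    if (read_fully(fd, &netlen, sizeof netlen) < 0)
        return -1;
    uint32_t len = ntohl(netlen);   /* those "wasted" 4 bytes: the header */
    if (len > cap)
        return -1;                  /* message larger than caller's buffer */
    if (read_fully(fd, out, len) < 0)
        return -1;
    *msg_len = len;
    return 0;
}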

What do you mean by 'waste network resources'? Every network read API I am aware of returns the actual number of bytes read, somehow. What's the actual problem here?

Related

Why should I use circular buffers when reading and writing to sockets in C?

I'm doing an assignment where the goal is to create a basic FTP server in C capable of handling multiple clients at once.
The subject tells us to "wisely use circular buffers", but I don't really understand why or how.
I'm already using select to know when I can read or write into my socket without blocking as I'm not allowed to use recv, send or O_NONBLOCKING.
Each connection has a structure where I store everything related to that client, like the communication file descriptor, the network information and the buffers.
Why can't I just use read on my socket into a fixed-size buffer and then pass this buffer to the parsing function?
Same goes for writing: why can't I just dprintf my response into the socket?
From my point of view using a circular buffer adds a useless layer of complexity just to be translated back into a string to parse the command or to send back the response.
Did I misunderstand the subject? Instead of storing individual characters, should I store commands and responses as circular buffers of strings?
Why should I use circular buffers when reading and writing to sockets in C?
The socket interface does not itself provide a reason for using circular buffers (a.k.a. ring buffers). You should be looking instead at the protocol requirements of the application using the socket -- the FTP protocol in this case. This will be colored by the characteristics of the underlying network protocol (TCP for FTP) and their effect on the behavior of the socket layer.
Why can't I just use read on my socket into a fixed-size buffer and then pass this buffer to the parsing function?
You surely could do without circular buffers, but that wouldn't be as simple as you seem to suppose. And that's not the question you should be asking anyway: it's not whether circular buffers are required, but what benefit they can provide that you might not otherwise get. More on that later.
Also, you surely can have fixed size circular buffers -- "circular" and "fixed size" are orthogonal characteristics. However, it is usually among the objectives of using a circular buffer to minimize or eliminate any need for dynamically adjusting the buffer size.
Same goes for writing: why can't I just dprintf my response into the socket?
Again, you probably could do as you describe. The question is what do you stand to gain from interposing a circular buffer? Again, more later.
From my point of view using a circular buffer adds a useless layer of complexity just to be translated back into a string to parse the command or to send back the response.
Did I misunderstand the subject?
That you are talking about translating to and from strings makes me think that you did indeed misunderstand the subject.
Instead of storing individual characters, should I store commands and responses as circular buffers of strings?
Again, where do you think "of strings" comes into it? Why are you supposing that the elements of the buffer(s) would represent (whole) messages?
A circular buffer is more a manner of use of an ordinary, flat, usually fixed-size buffer than it is a separate data structure of its own. There is a little bit of extra bookkeeping data involved, however, so I won't quibble with anyone who wants to call it a data structure in its own right.
Circular buffers for input
Among the main contexts for circular buffers' usefulness is data arriving with stream semantics (such as TCP provides) rather than with message semantics (such as UDP provides). With respect to your assignment, consider this: when the server reads command input, how does it know where the command ends? I suspect you're supposing that you will get one complete command per read(), but that is in no way a safe assumption, regardless of the implementation of the client. You may get partial commands, multiple commands, or both on each read(), and you need to be prepared to deal with that.
So suppose, for example, that you receive one and a half control messages in one read(). You can parse and respond to the first, but you need to read more data before you can act on the second. Where do you put that data? Ok, you read it into the end of the buffer. And what if on the next read() you get not only the rest of a message, but also part of another message?
You cannot keep on indefinitely adding data at the end of the buffer, not even if you dynamically allocate more space as needed. You could at some point move the unprocessed data from the tail of the buffer to the beginning, thus opening up space at the end, but that is costly, and at this point we are well past the simplicity you had in mind. (That simplicity was always imaginary.) Alternatively, you can perform your reads into a circular buffer, so that consuming data from the (logical) beginning of the buffer automatically makes space available at the (logical) end.
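A minimal sketch of that read-side arrangement (all names here are invented for the example): read() fills the contiguous free run starting at the logical end, and the parser consumes complete commands from the logical beginning, which automatically frees space.

#include <stddef.h>
#include <unistd.h>

#define RB_SIZE 4096                /* fixed capacity, never grown */

typedef struct {
    char   buf[RB_SIZE];
    size_t head;                    /* index of the oldest unconsumed byte */
    size_t len;                     /* number of unconsumed bytes */
} ring_t;

/* Fill the ring from the socket; call only when select() reports fd readable.
   Reads into the contiguous free run at the logical end; a wrapped remainder
   is picked up by the next call. */
static ssize_t ring_fill(ring_t *rb, int fd)
{
    if (rb->len == RB_SIZE)
        return 0;                   /* full: the parser must catch up first */
    size_t tail  = (rb->head + rb->len) % RB_SIZE;
    size_t chunk = (tail >= rb->head) ? RB_SIZE - tail : rb->head - tail;
    ssize_t n = read(fd, rb->buf + tail, chunk);
    if (n > 0)
        rb->len += (size_t)n;
    return n;
}

/* Called by the parser after it has recognised a complete n-byte command at
   the logical beginning; consuming it makes space available to ring_fill. */
static void ring_consume(ring_t *rb, size_t n)
{
    rb->head = (rb->head + n) % RB_SIZE;
    rb->len -= n;
}

Note that the command terminator (CRLF for FTP) may itself straddle the physical end of the buffer, so the scan for it has to use the same modular arithmetic.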
Circular buffers for output
Something similar applies on the writing side with a stream-oriented network protocol. Consider that you cannot write() an arbitrary amount of data at a time, and it is very hard to know in advance exactly how much you can write. That's more likely to bite you on the data connection than on the control connection, but in principle it applies to both. If you have only one client to feed at a time then you can keep write()ing in a loop until you've successfully transferred all the data, which is what dprintf() would do. But that is potentially a blocking operation, so it undercuts your responsiveness when you are serving multiple clients at the same time, and maybe even with just one if (as with FTP) there are multiple connections per client.
You need to buffer data on the server, especially for the data connection, and now you have pretty much the same problem that you did on the reading side: when you've written only part of the data you want to send, and the socket is not ready for you to send more, what do you do? You could just track where you are in the buffer, and send more pieces as you can until the buffer is empty. But then you are wasting opportunities to read more data from the source file, or to buffer more control responses, until you work through the buffer. Once again, a circular buffer can mitigate that, by giving you a place to buffer more data without requiring it to start at the beginning of the buffer or being limited by the available space before the physical end of the buffer.
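Reusing the ring_t from the read-side sketch above, the write side drains from the logical beginning whenever select() reports the socket writable, while producers keep appending at the logical end:

/* Drain buffered output to the socket; call when select() reports fd writable.
   Writes at most the contiguous run before the physical end of the buffer;
   the wrapped remainder goes out on the next call. */
static ssize_t ring_drain(ring_t *rb, int fd)
{
    if (rb->len == 0)
        return 0;                   /* nothing pending; stop polling for writability */
    size_t chunk = RB_SIZE - rb->head;
    if (chunk > rb->len)
        chunk = rb->len;
    ssize_t n = write(fd, rb->buf + rb->head, chunk);
    if (n > 0) {
        rb->head = (rb->head + (size_t)n) % RB_SIZE;
        rb->len -= (size_t)n;
    }
    return n;
}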

Periodically sending data over sockets in C

I have made a multi-client server which uses select() to determine which clients are currently sending. However, I want to send data that is larger than my buffer size (e.g. text from a file) while keeping the server non-blocking.
At first I found solutions that place the send/recv calls inside while loops, with the loop condition being the number of bytes sent, but wouldn't that block the server for a certain amount of time? Especially if the contents of the file are large?
I was thinking of sending, say, 1024 bytes in one iteration of my server's while loop, and then on the next iteration sending the next 1024 bytes to the client, and so on. Although this would have consequences on the client side; possibly the client could ask for the next x bytes per query to the server?
Please let me know if there is a standard way to go about this. Thanks.
You don't need to do anything special for this. Your sockets are presumably already configured as non-blocking, so when you write to them, pass as much data as you have, and check the return value to see how much was actually sent. Then keep the rest of the data in a buffer, and wait until the file descriptor is ready again before attempting to write more.
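As a concrete sketch (the struct and names are mine, not from any standard API), each client carries a cursor into its pending output; on each writable event you send what you can and advance:

#include <stddef.h>
#include <unistd.h>

struct out_state {
    const char *data;               /* data still being sent */
    size_t      off;                /* bytes already written to the socket */
    size_t      len;                /* total bytes to send */
};

/* Call when select() marks the socket writable; returns 1 once everything
   has been sent, so the caller can stop watching the fd for writability. */
static int flush_some(int fd, struct out_state *o)
{
    ssize_t n = write(fd, o->data + o->off, o->len - o->off);
    if (n > 0)
        o->off += (size_t)n;
    return o->off == o->len;
}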

Can I set SO_RCVBUF to 1 on a UDP socket

I have a system where a single-byte message is sent via a UDP socket from one process to another when something happens. On the receiving end, it matters not if this event has happened once or a dozen or even a million times.
Rather than making many useless recvfrom() calls, I thought maybe I could just use setsockopt to set SO_RCVBUF to 1, which I am hoping would cause the system to simply ignore all but the first message (until it is read, thus allowing another). However, googling turns up some evidence that this might not be 100% portable (it looks like some systems might quietly enforce a minimum size?).
So before I go to the work of trying this I was wondering if anyone knew how likely this would be to work? This is on Red Hat Linux if that matters in any way.
On Linux, the actual size is bounded below by SOCK_MIN_RCVBUF and above by sysctl_rmem_max, but the call to setsockopt will never fail. This size includes overhead, among other things, for a struct sk_buff.
The value of SOCK_MIN_RCVBUF is:
#define SOCK_MIN_RCVBUF (2048 + sizeof(struct sk_buff))
As to what you want to do: AFAIK, there is no way to clear the receive buffer without reading (or closing the socket).
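To see what the kernel actually grants, you can read the value back with getsockopt. A small sketch (the doubling on set is documented behaviour per socket(7); the clamping is per the answer above):

#include <stdio.h>
#include <sys/socket.h>

/* Request a 1-byte receive buffer, then read back what was granted.
   On Linux the kernel doubles the requested value (to allow for bookkeeping
   overhead) and clamps it to at least SOCK_MIN_RCVBUF, so the result will be
   far larger than 1 even though setsockopt itself reports success. */
static void show_rcvbuf(int fd)
{
    int request = 1, granted = 0;
    socklen_t len = sizeof granted;
    setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &request, sizeof request);
    getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &granted, &len);
    printf("requested %d, granted %d bytes\n", request, granted);
}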
You are assuming SO_RCVBUF is measured in bytes of application data. For UDP, it isn't on any platform that I know of.

Strategy for estimating / calculating buffer space needed by writer function on embedded system

This isn't a show-stopping programming problem as such, but perhaps more of a design pattern issue. I'd have thought it'd be a common design issue on embedded resource-limited systems, but none of the questions I found so far on SO seem relevant (but please point out anything relevant that I could have missed).
Essentially, I'm trying to work out the best strategy of estimating the largest buffer size required by some writer function, when that writer function's output isn't fixed, particularly because some of the data are text strings of variable length.
This is a C application that runs on a small ARM micro. The application needs to send various message types via TCP socket. When I want to send a TCP packet, the TCP stack (Keil RL) provides me with a buffer (which the library allocates from its own pool) into which I may write the packet data payload. That buffer size depends of course on the MSS; so let's assume it's 1460 at most, but it could be smaller.
Once I have this buffer, I pass this buffer and its length to a writer function, which in turn may call various nested writer functions in order to build the complete message. The reason for this structure is because I'm actually generating a small XML document, where each writer function typically generates a specific XML element. Each writer function wants to write a number of bytes to my allocated TCP packet buffer. I only know exactly how many bytes a given writer function writes at run-time, because some of the encapsulated content depends on user-defined text strings of variable length.
Some messages need to be around (say) 2K in size, meaning they're likely to be split across at least two TCP packet send operations. Those messages will be constructed by calling a series of writer functions that produce, say, a hundred bytes at a time.
Prior to making a call to each writer function, or perhaps within the writer function itself, I initially need to compare the buffer space available with how much that writer function requires; and if there isn't enough space available, then transmit that packet and continue writing into a fresh packet later.
Possible solutions I am considering are:
Use another much larger buffer to write everything into initially. This isn't preferred because of resource constraints. Furthermore, I would still wish for a means to algorithmically work out how much space I need by my message writer functions.
At compile time, produce a 'worst case size' constant for each writer function. Each writer function typically generates an XML element such as <START_TAG>[string]</START_TAG>, so I could have something like: #define SPACE_NEEDED ( START_TAG_LENGTH + START_TAG_LENGTH + MAX_STRING_LENGTH + SOME_MARGIN ). All of my content writer functions are picked out of a table of function pointers anyway, so I could have the worst-case size estimate constants for each writer function exist as a new column in that table (a sketch of such a table appears after this list). At run-time, I check the buffer room against that estimate constant. This is probably my favourite solution at the moment. The only downside is that it relies on correct maintenance to work.
My writer functions provide a special 'dummy run' mode where they run through and calculate how many bytes they want to write, but don't write anything. This could be achieved by simply passing NULL in place of the buffer pointer, in which case the function's return value (which usually states the amount written to the buffer) just states how much it wants to write. The only thing I don't like about this is that, between the 'dummy' and 'real' calls, the underlying data could - at least in theory - change. A possible solution for that could be to statically capture the underlying data.
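A sketch of the table from the second option above (all names and sizes are invented for illustration):

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

#define TAG_LEN(s)        (sizeof(s) - 1)   /* length of a literal, minus NUL */
#define MAX_STRING_LENGTH 64
#define SOME_MARGIN       8

typedef size_t (*writer_fn)(uint8_t *buf, size_t room);

/* Example content writer: emits one XML element. */
static size_t write_name_element(uint8_t *buf, size_t room)
{
    int n = snprintf((char *)buf, room, "<NAME>%s</NAME>", "example");
    return n < 0 ? 0 : (size_t)n;
}

/* The dispatch table, with the worst-case size estimate as an extra column. */
static const struct {
    writer_fn fn;
    size_t    worst_case;           /* most bytes fn could ever write */
} writers[] = {
    { write_name_element,
      TAG_LEN("<NAME>") + TAG_LEN("</NAME>") + MAX_STRING_LENGTH + SOME_MARGIN },
};

/* Before dispatching writers[i].fn, compare the remaining packet room with
   writers[i].worst_case; if it doesn't fit, transmit the packet and continue
   into a fresh buffer. */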
Thanks in advance for any thoughts and comments.
Solution
Something I had actually already started doing since posting the question was to make each content writer function accept a state, or 'iteration' parameter, which allows the writer to be called many times over by the TCP send function. The writer is called until it flags that it has no more to write. If the TCP send function decides after a certain iteration that the buffer is now nearing full, it sends the packet and then the process continues later with a new packet buffer. This technique is very similar I think to Max's answer, which I've therefore accepted.
A key thing is that on each iteration, a content writer must be designed so that it won't write more than LENGTH bytes to the buffer; and after each call to the writer, the TCP send function will check that it has LENGTH room left in the packet buffer before calling the writer again. If not, it continues in a new packet.
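A sketch of that calling convention (the names and string-based content are mine, purely to show the shape): the writer carries its progress in a state parameter and never emits more than LENGTH bytes per call, and the send loop checks for LENGTH bytes of room before each call.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define LENGTH 128                  /* per-iteration cap on bytes written */

typedef struct {
    size_t pos;                     /* how far through its content the writer is */
    int    done;                    /* set once there is nothing left to write */
} writer_state;

/* Write at most LENGTH bytes of the remaining content into buf.
   Returns bytes written; flags completion via st->done. */
static size_t writer_step(writer_state *st, const char *content,
                          uint8_t *buf, size_t room)
{
    size_t total = strlen(content);
    size_t chunk = total - st->pos;
    if (chunk > LENGTH) chunk = LENGTH;
    if (chunk > room)   chunk = room;
    memcpy(buf, content + st->pos, chunk);
    st->pos += chunk;
    st->done = (st->pos == total);
    return chunk;
}

/* The send loop then looks roughly like:
 *   while (!st.done) {
 *       if (room_left < LENGTH) { transmit packet; get fresh buffer; }
 *       size_t n = writer_step(&st, content, buf + used, room_left);
 *       used += n; room_left -= n;
 *   }
 */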
Another step I did was to have a serious think about how I structure my message headers. It became apparent that, as I suppose with almost all protocols that use TCP, it is essential to implement into the application protocol some means of indicating the total message length. The reason is that TCP is a stream-based protocol, not a packet-based protocol. This is again where it got to be a bit of a headache, because I needed some upfront means of knowing the total message length for insertion into the start header. The simple solution was to insert a message header into the start of every sent TCP packet, rather than only at the start of the application protocol message (which may of course span several TCP packets), and basically implement fragmentation. So, in the header, I implemented two flags: a fragment flag and a last-fragment flag. The length field in each header therefore only needs to state the size of the payload in the particular packet. At the receiving end, individual header+payload chunks are read out of the stream and then reassembled into a complete protocol message.
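A sketch of such a per-packet header (the layout and flag values are invented; the real protocol's may differ):

#include <stddef.h>
#include <stdint.h>

#define HDR_FLAG_FRAGMENT      0x01 /* payload is part of a larger message */
#define HDR_FLAG_LAST_FRAGMENT 0x02 /* payload completes the message */

struct app_header {
    uint8_t  flags;
    uint16_t payload_len;           /* size of this packet's payload only */
};

/* Serialized by hand to avoid struct padding and endianness surprises. */
static size_t header_serialize(const struct app_header *h, uint8_t out[3])
{
    out[0] = h->flags;
    out[1] = (uint8_t)(h->payload_len >> 8);    /* big-endian length */
    out[2] = (uint8_t)(h->payload_len & 0xffu);
    return 3;
}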
This of course is no doubt, very simplistically, how HTTP and so many other protocols work over TCP. It's quite interesting that only once I attempted to write a robust protocol that works over TCP did I start to realise the importance of really thinking through your message structure in terms of headers, framing and so forth, so that it works over a stream protocol.
I had a related problem in a much smaller embedded system, running on a PIC 16 micro-controller (and written in assembly language rather than C). My 'buffer size' was always going to be the two-byte UART transmit queue, and I had only one 'writer' function, which walked a DOM and emitted its XML serialisation.
The solution I came up with was to turn the problem 'inside out'. The writer function becomes a task: each time it is called it writes as many bytes as it can (which may be >2 depending on the serial data transmission rate) until the transmit buffer is full, then it returns. However, it remembers, in a state variable, how far through the DOM it had got. The next time it is called, it carries on from the point previously reached. If there is no free buffer space, it returns immediately without changing its state. The writer task is called repeatedly from an infinite loop, which acts as a round-robin scheduler for this task and the others in the system. Each time round the loop, there is a delay which waits for the TMR0 timer to overflow, so each task gets called exactly once in a fixed time slice.
In my implementation, the data is transmitted by a TxEmpty interrupt routine, but it could also be sent by another task.
I guess the 'pattern' here is that one role of the program counter is to hold the current state of the flow of control, and that this role can be abstracted away from the PC to another data structure.
Obviously, this isn't immediately applicable to your larger, higher-level system. But it is a different way of looking at the problem, which may spark your own particular insight.
Good luck!

Is there a function in Wireshark which I can use to decode packets in a memory buffer rather than a capture file?

I have been reading the docs that come with Wireshark along with analyzing the source to figure out if there is a function or collection of functions in Wireshark which would allow me to pass a buffer containing capture data (rather than using an external file).
I asked this question on the Wireshark-dev list as well, but had no responses.
As far as I understand, you tell Wireshark where to look for packets (in terms of a file, pipe or socket) and it tells you whenever it's got a packet from that stream, so my gut feeling is that there isn't a way to pass a pointer to a buffer containing capture data and have Wireshark parse whatever it can find there.
However, I am by no means an expert on the internals of Wireshark, so I am looking to SO to help me figure out a definitive answer. Thank you.
Have you tried creating a new tvbuff_t as TVBUFF_REAL_DATA and providing your memory region as the data parameter?
/* mem_buffer_ptr is your memory buffer;
   mem_buf_length is the number of bytes in the buffer,
   passed again as the reported length */
tvbuff_t *tvb = tvb_new_real_data(mem_buffer_ptr, mem_buf_length, mem_buf_length);
Wireshark historically does not play well with other applications, so whether this will work or not is questionable, especially if you think your data may change while it is being dissected. Is it not possible to write out a temporary file?
