C sendmsg() no buffer space available

I'm not an expert in C programming, but I'm trying to write a fairly simple program using sendmsg() and recvmsg() to send a message between a client and a server (both are on the same machine, so basically I'm sending a message to localhost).
After initialising the required structures (namely the iovec and the msghdr) and successfully connecting the client to the server, my sendmsg() call fails with a "no buffer space available" errno.
This is what the Linux man page reports about this type of error:
The output queue for a network interface was full. This generally indicates that the interface has stopped sending, but may be caused by transient congestion. (Normally, this does not occur in Linux. Packets are just silently dropped when a device queue overflows.)
I looked around on the Internet and found that sendmsg() is not widely used, and nobody seemed to have run into this particular error. The only useful advice I found was to check for an excessive number of open sockets, but again, I always close EVERY socket I create.
So I'm stuck, basically because, being quite the noob, I don't know exactly where to look to fix this kind of problem.
If anybody knows how to proceed, it would be great.
(And please don't tell me not to use sendmsg(), because the whole purpose of my work is to understand this syscall, and not send a message to myself)
Here's the code I've written so far on pastebin: client and server
--SOLVED--
Thank you very much. I've been able to solve the problem and I fixed other mistakes I made, so here's the functioning code for sendmsg() and recvmsg() working message-passing: Client and Server

As others have pointed out, iovlen should be 1. But also, you want to zero out mh before initializing some of its fields since you're probably sending in garbage in the uninitialized fields and the syscall gets confused. Also, it doesn't really make sense to set msg_name and msg_namelen since you're connected and can't change your mind about where to send the data anyway.
This is what works for me in your client code:
/* The message header contains parameters for sendmsg. */
memset(&mh, 0, sizeof(mh));
mh.msg_iov = iov;
mh.msg_iovlen = 1;
printf("mh structure initialized \n");

The msg_iovlen field contains the number of elements in the iov array, not its size in bytes.
The system interpreted the following uninitialized memory as iov elements, ended up with a packet that is larger than the socket buffer space available, and thus refused to send the data.

Okay, so in your code I found this:
mh.msg_iovlen = sizeof(iov);
This sets the msg_iovlen member to the size of the iovec structure in bytes. But the documentation says this about the field:
size_t msg_iovlen; /* # elements in msg_iov */
So your code is wrong: it tells sendmsg() that it's going to send far more elements than you actually initialize.
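For reference, here is a minimal sketch of a correct setup for a connected socket; the helper name send_one and the surrounding details are illustrative, not taken from the pastebin code:

#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Send a single buffer over an already-connected socket with sendmsg().
   Hypothetical helper for illustration only. */
static ssize_t send_one(int sock, const void *buf, size_t len)
{
    struct iovec iov;
    struct msghdr mh;

    iov.iov_base = (void *)buf;
    iov.iov_len  = len;

    memset(&mh, 0, sizeof(mh));   /* zero msg_name, msg_control, flags, ... */
    mh.msg_iov    = &iov;
    mh.msg_iovlen = 1;            /* number of iovec elements, not sizeof(iov) */

    return sendmsg(sock, &mh, 0);
}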

Related

Is it possible to write a packet, read by libpcap, with libnet, in C?

I'm trying to get libpcap to read a pcap file, get the user to select a packet and write that packet using libnet, in C.
I got the reading-from-file part done. Libpcap puts that packet into a const unsigned char. I have worked with libnet before, but never with libnet's advanced functions. I would just create the packet using libnet's build functions, then send it on its way. I realize there is a function, libnet_adv_write_link(), that takes the libnet context, a pointer to the packet to inject (const uint8_t *) and the size of the packet. I tried passing the 'packet' that I got from libpcap, and it compiled and executed without errors. However, I am not seeing anything in Wireshark.
Would this be the right way to tackle this problem, or should I read from libpcap and build a separate packet with libnet, based on what libpcap read?
EDIT: I believe I somewhat solved the problem. I read the packet with libpcap, put all the bytes after the 16th byte into another uchar buffer, and wrote that to the wire using libnet_adv_write_raw_ipv4(), with libnet initialized with LIBNET_RAW4_ADV. I believe, maybe because of the driver, I don't have much power over the ETH layer. So basically I just let it be written automatically this way, and the new uchar packet is just whatever is left after the ETH layer in the original packet. Works fine so far.
I'm the current libnet maintainer.
You should call libnet_write_link() to write a packet. If you aren't seeing it, it's possible you haven't opened the correct device, that you lack permissions (you checked the return value of libnet_write_link(), I hope), and also possible that the packet injected was invalid.
If you don't need to build the packet, it sounds like you should be using pcap to send the packet, though; see http://www.tcpdump.org/manpages/pcap_inject.3pcap.html
Also, your statement "Libpcap puts that packet into a const unsigned char" is odd. A packet doesn't fit in a single char; what pcap does, depending on the API, is return a pointer to the packet data. It's worth including a snippet of code showing how you get the packet data, and how you pass it to libnet. It's possible you aren't using the pointers correctly.
If you are using libpcap, why not use libpcap to send the packet? No, it's not well known, but yes it does work. See the function pcap_sendpacket.
The packet libpcap returns is simply an array of bytes. Anything that takes an array of bytes (including the ethernet frame) should work. However, note that your OS and/or hardware may stop you from sending packets with incorrect or malformed source MAC addresses.
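To make that concrete, here is a minimal sketch of reading one packet from a capture file and re-injecting it with pcap_sendpacket(); the file name "capture.pcap" and the interface "eth0" are illustrative assumptions, not taken from the question:

#include <pcap.h>
#include <stdio.h>

/* Sketch: read one packet from a capture file and re-inject it on a live
   interface with pcap_sendpacket(). */
int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    struct pcap_pkthdr *hdr;
    const u_char *data;

    pcap_t *in  = pcap_open_offline("capture.pcap", errbuf);  /* hypothetical file */
    pcap_t *out = pcap_open_live("eth0", 65535, 0, 1000, errbuf);
    if (in == NULL || out == NULL) {
        fprintf(stderr, "pcap open failed: %s\n", errbuf);
        return 1;
    }

    if (pcap_next_ex(in, &hdr, &data) == 1) {
        /* hdr->caplen is the number of bytes actually captured */
        if (pcap_sendpacket(out, data, (int)hdr->caplen) != 0)
            fprintf(stderr, "send failed: %s\n", pcap_geterr(out));
    }

    pcap_close(in);
    pcap_close(out);
    return 0;
}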

What is the minimum size of data guaranteed to be sent in a single call of send() (TCP sockets)? [duplicate]

This question already has an answer here:
Why is it assumed that send may return with less than requested data transmitted on a blocking socket?
(1 answer)
Closed 9 years ago.
After select() returns with the write fd set for a TCP socket, if I try to send data on that socket, what is the minimum guaranteed size of data to be sent at once using the send API? I understand that I have to run a loop to make sure all the data is sent. Still, I want to understand what the minimum guaranteed amount of data sent is, and why.
This has come up before. I'm still searching for the referenced answer.
Let's start with the function prototype for send()
ssize_t send(int sockfd, const void *buf, size_t len, int flags);
For blocking TCP sockets - all the documentation will suggest that send() and write() will return a value between [1..len], unless there was an error. However, the reality is that no one I know has ever observed send() returning something other than -1 (error) or just "len" in the success case to indicate all of "buf" was sent in one call. I've never felt good about this, so I code defensively and just put my blocking send calls in a loop until the entire buffer is sent.
For non-blocking TCP sockets - you should just code as if the minimum was "1" (or -1 on error). Don't make any assumptions about a minimum data size.
And for recv(), you should always assume recv() will return some random value between 1..len in the success case, or 0 (closed), or -1 (error). Don't EVER assume recv will return a full buffer.
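As a sketch of that defensive pattern for the blocking case (a hypothetical helper, not code from the question):

#include <sys/types.h>
#include <sys/socket.h>

/* Keep calling send() until the whole buffer has gone out or an error occurs. */
static int send_all(int sockfd, const char *buf, size_t len)
{
    size_t sent = 0;

    while (sent < len) {
        ssize_t n = send(sockfd, buf + sent, len - sent, 0);
        if (n == -1)
            return -1;          /* caller inspects errno */
        sent += (size_t)n;
    }
    return 0;
}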
From the official POSIX reference:
A descriptor shall be considered ready for writing when a call to an output function with O_NONBLOCK clear would not block, whether or not the function would transfer data successfully.
As you can see, it doesn't actually mention any size, or even that the write will be successful, just that you will be able to write to the socket without blocking.
So the answer is that there is no minimum guaranteed size.
When select() indicates that a socket is writable it is guaranteed you can transfer at least one byte without incurring EWOULDBLOCK or EAGAIN. Nothing to say you won't incur a different error though :-)
I don't think that at the POSIX layer you have any control over that. send() will accept anything you pass into the internal buffers of the TCP/IP stack implementation. What happens next with it is another story, which you can watch over using the same select() call.
If you are asking about the size that will fit in one packet, that's the MTU (but this assumption should also be used with care, in view of fragmentation and reassembly of the packets along the way).
UPD:
I will answer your comment here. No, you shouldn't bother about fragmentation at all; leave it to the TCP/IP stack. There are a lot of reasons why you shouldn't, but here is one as an example. Your application works on the Application (7) layer of the OSI model (although I consider the OSI model an evil thing in most cases, it's really applicable for this example), and from this layer you are trying to affect the functionality/properties of logic which sits on much lower layers (Session/Transport). You shouldn't do this. POSIX calls like send() and recv() are designed to give your application the ability to instruct the underlying layers that you need to pass a certain amount of data, plus a way to monitor the execution of that command (select()); that's all you have to do. The lower layers are supposed to do their best to deliver the data you hand them in the most optimal way, depending on OS network settings etc.
UPD2: Everything above mostly concerns non-blocking sockets. Sorry, I forgot to mention this; I haven't used blocking sockets in my projects for ages. In case your socket is blocking, I would still consider passing everything at once and just waiting for the operation result, in another thread for example, because trying to optimise this could lead to very OS/driver-dependent code.
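To make the non-blocking case concrete, here is a sketch (a hypothetical helper, assuming the socket already has O_NONBLOCK set) that waits for writability with select() and handles partial writes and EWOULDBLOCK:

#include <errno.h>
#include <sys/select.h>
#include <sys/socket.h>

/* Send 'len' bytes on a non-blocking socket, waiting with select() whenever
   the kernel buffers are full. */
static int send_nonblocking(int sockfd, const char *buf, size_t len)
{
    size_t sent = 0;

    while (sent < len) {
        ssize_t n = send(sockfd, buf + sent, len - sent, 0);
        if (n >= 0) {
            sent += (size_t)n;
            continue;
        }
        if (errno != EWOULDBLOCK && errno != EAGAIN)
            return -1;                     /* real error */

        fd_set wfds;
        FD_ZERO(&wfds);
        FD_SET(sockfd, &wfds);
        if (select(sockfd + 1, NULL, &wfds, NULL, NULL) == -1)
            return -1;                     /* select failed */
    }
    return 0;
}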

Data transfer over TCP-IPv6 connection

I am working on a client-server application in C on the Linux platform. What I am trying to achieve is to change the socket ID over a TCP connection on both client and server without data loss, where the client sends the data from a file to the server in the main thread. The application is multithreaded, and the other threads change the socket ID based on some global flags being set.
Problem: The application has two TCP socket connections established, over both IPv4 and IPv6 paths. I am transferring a file over the TCP-IPv4 connection first in the main thread. The other thread is checking some global flags and has access to/shares the socket IDs created for each protocol in the main thread. The send and recv calls use a pointer variable to point to the socket ID to be used for the data transfer. The data is transferred initially over TCP-IPv4. Once the global flags are set and a few other checks are made, the other thread changes the socket ID used in the send call to point to the IPv6 socket. This thread also takes care of communicating the change between the two hosts. I am getting all the data sent over IPv4 completely before switching. I am also getting data sent over IPv6 just after the socket ID is switched. But further into the transfer there is loss of data over the IPv6 connection. (I am using a pointer variable in the send function on the server side, send(*p_dataSocket.socket_id,sentence,p_size,0);, to change the pointer to the IPv6 socket ID on the fly.)
The errno after the recv and send calls on both sides respectively says ESPIPE: Illegal seek, but this error exists even before switching, so I am pretty sure it has nothing to do with the data loss.
I am using pselect() to check for the available data for each socket. I can somehow understand data loss while switching (if not properly handled), but I am not able to figure out why the data loss is occurring further into the transfer after switching. I hope I am clear on what the issue is. I have also checked sending the data individually over each protocol without switching, and there is no data loss. If I initially transfer the data over IPv6 and then switch to IPv4, there is no data loss. I would also really appreciate knowing how to investigate this issue apart from using errno or netstat.
When you are using TCP to send data, you just can't lose a part of the information in between. You either receive the byte stream the way it was sent or receive nothing at all, provided that you are using the socket-related functions correctly.
There are several points you may want to investigate.
First of all, you must make sure that you are really sending the data which is lost. Add some logging to the server-side application: dump anything that you transmit with send() into some file. Include some extra info as well, like:
Data packet no.==1234, *p_dataSocket.socket_id==11, Data=="data_contents_here", 22 bytes total; send() return==22
The important thing here is to watch the contents of *p_dataSocket.socket_id. Make sure that you are using a mutex or something like that, because you have a thread which regularly reads the socket_id contents and another thread which occasionally changes it. You are not guaranteed against getting a wrong value from that address unless your threads have exclusive access to it while reading/writing. This is important both for normal program operation and for generating the debugging information.
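A minimal sketch of that idea (the names active_fd and fd_lock are hypothetical, not from the question's code):

#include <pthread.h>

/* Shared state: the currently active socket and a lock protecting it. */
static int active_fd;
static pthread_mutex_t fd_lock = PTHREAD_MUTEX_INITIALIZER;

/* Called by the switching thread when the flags say "move to the IPv6 socket". */
static void switch_socket(int new_fd)
{
    pthread_mutex_lock(&fd_lock);
    active_fd = new_fd;
    pthread_mutex_unlock(&fd_lock);
}

/* Called by the sending thread: read the fd under the lock, then use it. */
static int current_socket(void)
{
    pthread_mutex_lock(&fd_lock);
    int fd = active_fd;
    pthread_mutex_unlock(&fd_lock);
    return fd;
}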
Another possible problem here is the logic which selects the sentence to send. Corruption of this variable may be hard to track in a multithreaded program. The logging of transmitted information will help you here too.
Use any TCP sniffer to check what the TCP stack really transmits. Are there packets with lost data? If there are no such packets, try to find out which send() call was responsible for sending that data. If those packets exist, check the receiving side for bugs.
The errno value should not be used alone. Its value has meaning only when you get an erroneous return from a function. Try to find out when exactly errno becomes ESPIPE. That may happen when any of the API functions returns something like -1 (depends on the function). When you find out where it happens, you should find out what is wrong in that particular piece of code (a debugger is your friend). Keep in mind that errno behaviour in a multithreaded environment depends on your system implementation. Make sure that you use the -pthread option (gcc), or at least compile with -D_REENTRANT, to minimize the risks.
Check this question for some info about the possible cause of your situation with errno==ESPIPE, and try some of the debugging techniques suggested there. An errno value of ESPIPE gives a hint that you are using file descriptors incorrectly somewhere in your program. Maybe somewhere you are using a socket fd as a regular file, or something like that. This may be caused by some race condition (simultaneous access to one object from several threads).

Why am I losing this byte in the send() function?

Our application is a C server (this problem is for the Windows port of said server) that communicates with a Windows Java client. In this particular instance we are sending data to the client; the message consists of a 7-byte header where the first 3 bytes all have a specific meaning (op type, flags, etc.) and the last 4 bytes contain the size of the rest of the message. For some reason I absolutely can't figure out, the third byte in the header is somehow changing: if I put a breakpoint on the send() I can see that the third byte is what I'm expecting (0xfe), but when I check in the client, that byte is set to 0. Every other byte is fine. I did some traffic capturing with Wireshark and saw that the byte was 0 leaving the server, which I find even more baffling. The third byte is set via a define, a la:
#define GET_TOP_FRAME 0xfe
Some testing I did that further confuses the issue:
I changed the value from using the define to first 0x64, 0xff, 0xfd: all came across to the client.
I changed the value from using the define to using 0xfe itself: the value was zero at the client.
I changed the value of the define itself from 0xfe to 0xef: the value was zero at the client.
Nothing about this makes a lick of sense. The code goes through several levels of functions, but here is most of the core code:
int nbytes;                 /* message length in network byte order */
static int sendsize = 7;    /* 3 header bytes + 4-byte length */
unsigned char tbuffer[7];
tbuffer[0] = protocolByte;
tbuffer[1] = op;
tbuffer[2] = GET_TOP_FRAME;
nbytes = htonl(bytes);
memcpy((tbuffer + 3), &nbytes, JAVA_INT);   /* copy the 4-byte length after the 3 header bytes */
send(fd, tbuffer, sendsize, 0);
Where fd is a previously put together socket, protocolByte, op, and bytes are previously set. It then sends the rest of the message with a very similar send command immediately after this one. As I mentioned, if I put a break point on that send function, the tbuffer contains exactly what I expect.
Anybody have any ideas here? I'm completely stumped; nothing about this makes sense to me. Thanks.
It might be that there's a simple bug somewhere in your system doing a simple buffer overflow or similar, but it's hard to tell where, given this little information. However, keep in mind:
TCP doesn't send messages; it's a stream. One send() call might take several recv() calls to receive, and one recv() call might receive partial "messages" as your application defines them.
Are you checking the return value of send()? And the return values of recv()? send() might send fewer bytes than you tell it to, and recv() might receive fewer bytes than you tell it to, or it might receive more data than one of your application "messages" if you've given it a large enough buffer.
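As a sketch of handling the receiving side defensively (a hypothetical helper; the fixed 7-byte header size comes from the question, so a call might look like recv_exact(fd, header, 7)):

#include <sys/types.h>
#include <sys/socket.h>

/* Keep calling recv() until exactly 'len' bytes have arrived, so a header
   split across TCP segments is still read completely. */
static int recv_exact(int sockfd, unsigned char *buf, size_t len)
{
    size_t got = 0;

    while (got < len) {
        ssize_t n = recv(sockfd, buf + got, len - got, 0);
        if (n == 0)
            return -1;          /* peer closed the connection */
        if (n == -1)
            return -1;          /* caller inspects errno */
        got += (size_t)n;
    }
    return 0;
}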
Turns out something else was getting in the way: in addition to the C server, we have a Tomcat/Java server that sort of runs on top of everything; I'll be honest, I'm not involved in that piece of development so I don't understand it very well. As it turns out, we have some code that is shared between that middle-tier portion and our client; this shared code was taking that particular byte as it was leaving the server and, if it was set to 0xfe, setting it to an uninitialized value (hence the zero). Then, when it got to the client, it was wrong. That's why when I set the values to different things in the C code it was making it to the other side, and also why the behavior seemed inconsistent; I was toggling values in the Java quite a bit to see what was going on.
Thanks for the pointers all, but as it turns out it was merely a case of me not understanding the path properly.

(How) Can I determine the socket family from the socket file descriptor

I am writing an API which includes IPC functions which send data to another process which may be local or on another host. I'd really like the send function to be as simple as:
int mySendFunc(myDataThing_t* thing, int sd);
without the caller having to know -- in the immediate context of the mySendFunc() call -- whether sd leads to a local or remote process. It seems to me that I could then do something like:
switch (socketFamily(sd)) {
case AF_UNIX:
case AF_LOCAL:
    // Send without byteswapping
    break;
default:
    // Use htons() and htonl() on multi-byte values
    break;
}
It has been suggested that I might implement socketFamily() as:
unsigned short socketFamily(int sd)
{
    struct sockaddr sa;
    socklen_t len = sizeof(sa);   /* len must be a socklen_t, initialized to the buffer size */

    if (getsockname(sd, &sa, &len) == -1)
        return AF_UNSPEC;         /* let the caller see the failure */
    return sa.sa_family;
}
But I'm a little concerned about the efficiency of getsockname() and wonder if I can afford to do it every time I send.
See getsockname(2). You then inspect the struct sockaddr for the family.
EDIT: As a side note, it's sometimes useful to query the info documentation as well; in this case, info libc sockets.
EDIT:
You really can't know without looking it up every time. It can't be simply cached, as the socket number can be reused by closing and reopening it. I just looked into the glibc code and it seems getsockname is simply a syscall, which could be nasty performance-wise.
But my suggestion is to use some sort of object-oriented concepts. Make the user pass a pointer to a struct you had previously returned to him, i.e. have him register/open sockets with your API. Then you can cache whatever you want about that socket.
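A minimal sketch of that caching idea (the struct and function names here are hypothetical, not part of any existing API):

#include <sys/socket.h>

/* Wrapper the API hands back when the caller registers a socket; the family
   is looked up once and cached for later sends. */
typedef struct {
    int         sd;
    sa_family_t family;
} my_socket_t;

static int my_register_socket(int sd, my_socket_t *out)
{
    struct sockaddr_storage ss;
    socklen_t len = sizeof(ss);

    if (getsockname(sd, (struct sockaddr *)&ss, &len) == -1)
        return -1;

    out->sd = sd;
    out->family = ss.ss_family;   /* cached; no per-send getsockname() */
    return 0;
}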
Why not always send in network byte order?
If you control the client and server code I have a different suggestion, which I've used successfully in the past.
Have the first four bytes of your message be a known integer value. The receiver can then inspect the first four bytes to see if it matches the known value. If it matches, then no byte swapping is needed.
This saves you from having to do byte swapping when both machines have the same endianness.
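A minimal sketch of that check on the receiving side (the magic value 0x01020304 is an arbitrary illustration; both sides just have to agree on it):

#include <stdint.h>
#include <string.h>

#define WIRE_MAGIC 0x01020304u   /* example value agreed on by sender and receiver */

/* Inspect the first four bytes of a received message and decide whether the
   rest of the payload needs byte swapping. */
static int needs_byteswap(const unsigned char *msg)
{
    uint32_t magic;
    memcpy(&magic, msg, sizeof(magic));

    if (magic == WIRE_MAGIC)
        return 0;   /* sender has the same byte order as us */
    return 1;       /* sender's byte order differs; swap multi-byte fields */
}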
