Our application is a C server (this problem is for the Windows port of said server) that communicates with a Windows Java client. In this particular instance we are sending data to the client; the message consists of a 7-byte header where the first 3 bytes each have a specific meaning (op type, flags, etc.) and the last 4 bytes contain the size of the rest of the message.
For some reason I absolutely can't figure out, the third byte in the header is somehow changing: if I put a breakpoint on the send() I can see that the third byte is what I'm expecting (0xfe), but when I check on the client, that byte is set to 0. Every other byte is fine. I did some traffic capturing with Wireshark and saw that the byte was 0 leaving the server, which I find even more baffling. The third byte is set via a define, like so:
#define GET_TOP_FRAME 0xfe
Some testing I did that further confuses the issue:
I changed the value from using the define to a literal 0x64, then 0xff, then 0xfd: all of them came across to the client.
I changed the value from using the define to a literal 0xfe itself: the value was zero at the client.
I changed the value of the define itself from 0xfe to 0xef: the value was zero at the client.
Nothing about this makes a lick of sense. The code goes through several levels of functions, but here is most of the core code:
int nbytes; /* network order bytes */
static int sendsize = 7;
unsigned char tbuffer[7];
tbuffer[0]= protocolByte;
tbuffer[1]= op;
tbuffer[2]= GET_TOP_FRAME;
nbytes = htonl(bytes);
memcpy((tbuffer+3), &nbytes, JAVA_INT);
send(fd, tbuffer, sendsize, 0);
Where fd is a previously put together socket, protocolByte, op, and bytes are previously set. It then sends the rest of the message with a very similar send command immediately after this one. As I mentioned, if I put a break point on that send function, the tbuffer contains exactly what I expect.
Anybody have any ideas here? I'm completely stumped; nothing about this makes sense to me. Thanks.
It might be that there's a simple bug somewhere in your system, such as a buffer overflow, but it's hard to tell where given this little information. However, keep in mind:
TCP doesn't send messages - it's a stream. One send() call might take several recv() calls to receive. One recv() call might receive partial "messages" that your application has defined.
Are you checking the return value of send()? And the return values of recv()? send() might send fewer bytes than you tell it to. recv() might receive fewer bytes than you asked for, or it might receive more than one of your application's "messages" if you've given it a large enough buffer.
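In particular, send() on a stream socket may write only part of the buffer, so the usual pattern is a loop that honors the return value. A POSIX-style sketch (the helper name send_all is mine, not from the question; on Windows the send() signature differs slightly):
#include <sys/types.h>
#include <sys/socket.h>

/* Keep calling send() until every byte has gone out, or an error occurs. */
int send_all(int fd, const void *buf, size_t len)
{
    const char *p = buf;
    size_t sent = 0;

    while (sent < len) {
        ssize_t n = send(fd, p + sent, len - sent, 0);  /* may send fewer bytes than asked */
        if (n <= 0)
            return -1;          /* error, or the connection was closed */
        sent += (size_t)n;
    }
    return 0;
}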
Turns out something else was getting in the way: in addition to the C server, we have a Tomcat/Java server that sort of runs on top of everything; I'll be honest, I'm not involved in that piece of development so I don't understand it very well. As it turns out, we have some code that is shared between that middle-tier portion and our client; this shared code was taking that particular byte as it left the server and, if it was set to 0xfe, was setting it to an uninitialized value (hence the zero). Then, when it got to the client, it was wrong. That's why the values made it to the other side when I set them to different things in the C code, and also why the behavior seemed inconsistent; I was toggling values in the Java quite a bit to see what was going on.
Thanks for the pointers all, but as it turns out it was merely a case of me not understanding the path properly.
Related
My question is mainly educational so please don't tell me you shouldn't do that or you should do that. Thanks.
This is basically about how TCP works.
I'm implementing an HTTP client in C that basically just sends an HTTP GET and reads the response from the server. Now I want to separate the headers from the body. The question is: is it possible to read from the socket byte-by-byte:
while(recv(sockfd, buffer, 1, 0) > 0)
{
// do whatever with buffer[0]
}
Or is it the case that once the server has written, say, 1000 bytes to the socket, then as soon as the client reads even 1 byte, the rest of that message is "wasted" and cannot be read anymore?
I ask because I remember from dealing with sockets some time in the past that this was how I understood it to work.
Yes, in TCP that is possible. TCP does not have messages, only bytes, like a file. You can send 1000 bytes and then receive 1 byte 1000 times.
In fact that is a convenient way to start out, because if you try to receive more than 1 byte, you might not get the same number of bytes you asked for, you could get any number down to 1 byte. If you only ask to receive 1 byte, then you always get exactly 1 byte (unless the connection is closed).
It is inefficient, however, because the receive function uses a certain amount of CPU time just to process the fact that you want to receive - i.e. the function call itself costs a bit of CPU time. If you ask for, say, 100000 bytes at once, then you don't have to call the receive function 100000 times to get it. It's probably fast enough for you, just not as fast as it could be.
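For the HTTP case in the question, a byte-by-byte sketch that stops at the blank line ending the headers (the helper name, buffer handling, and return convention are my own assumptions, not from the question):
#include <string.h>
#include <sys/socket.h>

/* Read from sockfd one byte at a time until the "\r\n\r\n" that ends the
   HTTP headers.  Returns the number of header bytes stored in header. */
size_t read_headers(int sockfd, char *header, size_t cap)
{
    size_t len = 0;
    char c;

    while (len < cap - 1 && recv(sockfd, &c, 1, 0) == 1) {
        header[len++] = c;
        if (len >= 4 && memcmp(header + len - 4, "\r\n\r\n", 4) == 0)
            break;               /* blank line reached; the body starts next */
    }
    header[len] = '\0';
    return len;
}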
Although it is not good practice, the data isn't "wasted". Most socket implementations in OSes do use a socket buffer, ranging from 4K to 6MB. So yes, you can, as long as you read it fast enough.
But still, it is safer to just copy the stuff into your own memory-managed buffer.
In my code, the client initiates a request to the server; upon receiving it, the server keeps sending data to the client until a special output is sent (say "END"), at which point the client stops reading.
Here's some code (just an example):
/**Client**/
char req[] = "some-request";
char readline[100];
write(socket,req,strlen(req));
while(1)
{
    read(socket,readline,100);
    strtok(readline, "\n");
    if(strcmp(readline,"END") == 0){ //if end of sending
        break;
    }
    /** HANDLE OUTPUT **/
}
/** Server **/
int data[] = {5532,127,332,65,723,8,912,2,421,126,8,3,2};
int i;
for(i = 0; i < sizeof(data)/sizeof(data[0]); i++)
{
    write(socket, &data[i], sizeof(data[i]));
}
write(socket, "END", 3);
This code works fine, but due to some context switching between processes, the server sometimes writes twice before the client reads, so the client reads two lines at once, say:
5532127
332
65723
8912
2421
1268
32END
As seen, sometimes 2 or more writes are grouped into 1 read, and as a result the END isn't handled, since it is combined with other data.
I tried using the usleep() function, which works most of the time, but it still isn't a real solution.
So, any idea how I can make sure the server doesn't write to the buffer while it is not empty?
You are using a stream-oriented socket. The server can't know whether the client has already received the data or not, and there's not really any difference between your multiple writes in a loop and one write that sends the whole array.
You will need to use something packet-oriented, or have a way to figure out where your "packets" begin and end. As you've discovered, you can't rely on timing, so you'll need to define some sort of protocol. Something simple would be to just read the ints and have a special value as the end marker.
By the way, your code also has a couple of issues with number representation. You need to use a fixed-length type (int can be 16 or 32 bits, so use something like uint32_t), and you need to be aware of endianness issues (you should probably use htonl/ntohl or similar), as in the sketch below.
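A minimal sketch of what that looks like (the helper name send_u32 is mine):
#include <arpa/inet.h>
#include <stdint.h>
#include <sys/types.h>
#include <unistd.h>

/* Send one 32-bit value with a fixed width and fixed (network) byte order. */
int send_u32(int sock, uint32_t value)
{
    uint32_t wire = htonl(value);
    return write(sock, &wire, sizeof(wire)) == (ssize_t)sizeof(wire) ? 0 : -1;
}
The receiver reads exactly 4 bytes and converts back with ntohl().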
I can see some problems in your code.
You use strtok(readline, "\n") on the client side while sending raw ints on the server side. That means that if one of the sent integers is 0x..0a or 0x0a.. (that is, it contains the '\n' byte), it will be changed to (respectively) 0x..00 or 0x00... Never use the strxxx functions on raw bytes!
Next, you are using stream-oriented sockets. You have no guarantee that the data you send will not be grouped together or split apart. So you should accumulate everything on the receiving side until you see 3 consecutive chars containing END. But you cannot use the strxxx functions for that either...
Finally, you transfer raw integers but wait for the 3 characters END, and raw data can contain that pattern by accident. For example, with 32-bit ints on a big-endian system, sending {12, 17742, 1140850688, 13} puts exactly the bytes "END" into the stream, since 17742 = 0x0000454E ends with 'E','N' and 1140850688 = 0x44000000 begins with 'D'. On a little-endian system, a single value such as 4476485 = 0x00444E45 already contains the bytes "END".
The rules are:
If you want to transfer binary data, you should use a length + data pattern, because any byte pattern can occur in raw data. It is also good practice to convert all data longer than one byte to network order (the htonx functions) to avoid endianness problems.
If you want to use delimiters, you should only use character data, ensuring (via escaping or similar) that the end pattern cannot occur in the data.
In your use case, you could simply transfer a textual representation, converting each integer with sprintf, separating them with spaces, and terminating with \n.
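For example, a sketch only (the buffer size and helper name are my assumptions):
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/* Send the integers as space-separated text terminated by '\n', so the data
   can never be confused with a binary pattern such as "END". */
int send_as_text(int sock, const int *data, size_t count)
{
    char line[1024];
    size_t pos = 0;
    size_t i;

    for (i = 0; i < count && pos < sizeof(line) - 16; i++)
        pos += (size_t)snprintf(line + pos, sizeof(line) - pos, "%d ", data[i]);

    if (pos > 0)
        line[pos - 1] = '\n';            /* replace the trailing space with '\n' */

    return write(sock, line, pos) == (ssize_t)pos ? 0 : -1;
}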
I'm not an expert in C programming, but I'm trying to write a fairly simple program using sendmsg() and recvmsg() to send a message between a client and a server (both are on the same machine, so basically I'm sending a message to localhost).
After initialising the required structures (the iovec and the msghdr) and successfully connecting the client to the server, my sendmsg() call fails with a "no buffer space available" errno.
This is what linux man reports about this type of error:
The output queue for a network interface was full. This generally indicates that the interface has stopped sending, but may be caused by transient congestion. (Normally, this does not occur in Linux. Packets are just silently dropped when a device queue overflows.)
I looked around on the Internet and found that sendmsg() is not widely used, and nobody could relate to this type of error. The only useful advice I found was to check for a possible excess of open sockets, but I always close EVERY socket I create.
So I'm stuck, basically because being quite the noob I don't know exactly where to look to fix this kind of problem.
If anybody knows how to proceed, it would be great.
(And please don't tell me not to use sendmsg(), because the whole purpose of my work is to understand this syscall, and not send a message to myself)
Here's the code I've written so far on pastebin: client and server
--SOLVED--
Thank you very much. I've been able to solve the problem and I fixed other mistakes I made, so here's the functioning code for sendmsg() and recvmsg() working message-passing: Client and Server
As others have pointed out, iovlen should be 1. But also, you want to zero out mh before initializing some of its fields, since you're probably passing garbage in the uninitialized fields and the syscall gets confused. Also, it doesn't really make sense to set msg_name and msg_namelen, since you're connected and can't change your mind about where to send the data anyway.
This is what works for me in your client code:
/* The message header contains parameters for sendmsg. */
memset(&mh, 0, sizeof(mh));
mh.msg_iov = iov;
mh.msg_iovlen = 1;
printf("mh structure initialized \n");
The msg_iovlen field contains the number of elements in the iov array, not its size in bytes.
The system interpreted the uninitialized memory that followed as additional iov elements, ended up with a packet larger than the available socket buffer space, and thus refused to send the data.
Okay, so in your code I found this:
mh.msg_iovlen = sizeof(iov);
Which sets the msg_iovlen member to the size of struct iovec. But the documentation says this about this field:
size_t msg_iovlen; /* # elements in msg_iov */
So your code is wrong: it tells sendmsg() that it's going to send far more elements than you actually initialized.
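For completeness, a minimal sketch of a correct sendmsg() call on an already-connected socket (the wrapper name is mine, not from the linked code):
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Send one buffer with sendmsg() over a connected socket. */
ssize_t send_one(int sockfd, const void *payload, size_t len)
{
    struct iovec iov;
    struct msghdr mh;

    iov.iov_base = (void *)payload;
    iov.iov_len  = len;

    memset(&mh, 0, sizeof(mh));  /* msg_name, msg_control, etc. stay unset */
    mh.msg_iov    = &iov;
    mh.msg_iovlen = 1;           /* number of iovec elements, not sizeof(iov) */

    return sendmsg(sockfd, &mh, 0);
}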
I am working on a client-server project and need to implement logic to check whether I have received the last data over a TCP socket connection before I proceed.
To make sure that I have received all the data, I am planning to append a flag to the last packet sent. I had two options in mind, listed below, along with the problems related to each.
i. Use a struct as below and populate vst_pad for the last packet sent, then check for its presence on the receive side. The advantage over option two is that I don't have to remove the flag from the actual data before writing it to a file; I just check the first member of the struct:
typedef struct
{
/* String holding padding for last packet when socket is changed */
char vst_pad[10];
/* Pointer to data being transmitted */
char *vst_data;
//unsigned char vst_data[1];
} st_packetData;
The problem is that I have to serialize the struct on every send call. Also, I am not sure whether I will receive the entire struct over TCP in one recv call, so I would have to add logic/overhead to check this every time. I had implemented this, but figured out later that stream-based TCP does not guarantee that the entire struct is received in one call.
ii. Use a function like strncat to append the flag to the end of the last data being sent.
The problem is that I would have to check on every receive call, using regex functions or a function like strstr, for the presence of that flag, and if it is present, remove it from the data.
This application is going to be used for large data transfers, hence I want to add minimal overhead on every send/recv/read/write call. I would really appreciate knowing whether there is a better option than the above two, or any other way to check for receipt of the last packet. The program is multithreaded.
Edit: I do not know the total size of the file I am going to send, but I am sending fixed amounts of data. That is, fgets reads until the specified size minus 1, or until a newline is encountered.
Do you know the size of the data in advance, and is it a requirement that you implement an end-of-message flag?
Because I would simplify the design: add a 4-byte header (assuming you're not sending more than 4 GB of data per message) that contains the expected size of the message.
Thus you parse out the first 4 bytes, calculate the size, then continue calling recv until you get that much data.
You'll need to handle the case where your recv call gets data from the next message, and obviously error handling.
Another issue not raised with your 10-byte pad solution is what happens if the actual message contains 10 zero bytes (assuming you're padding with zeros). You'd need to escape those 10 bytes of zeros, otherwise you may mistakenly truncate the message.
Using a fixed-size header and a known size value alleviates this problem.
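A sketch of the receive side of such a length-prefixed scheme (the helper names and the 4-byte big-endian length are assumptions consistent with the suggestion above):
#include <arpa/inet.h>
#include <stdint.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Helper: read exactly len bytes (recv may return fewer per call). */
static int recv_exact(int sock, void *buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = recv(sock, (char *)buf + got, len - got, 0);
        if (n <= 0)
            return -1;                 /* error or connection closed */
        got += (size_t)n;
    }
    return 0;
}

/* Read a 4-byte big-endian length header, then that many data bytes.
   Returns the message length, or -1 on error or if the caller's buffer is too small. */
ssize_t recv_message(int sock, void *buf, size_t cap)
{
    uint32_t netlen;
    uint32_t len;

    if (recv_exact(sock, &netlen, sizeof(netlen)) < 0)
        return -1;
    len = ntohl(netlen);
    if (len > cap)
        return -1;                     /* message too large for buffer */
    if (recv_exact(sock, buf, len) < 0)
        return -1;
    return (ssize_t)len;
}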
For a message (data packet), first send a short (in network byte order) containing the size, followed by the data. This can be achieved in one write system call.
On the receiving end, just read the short and convert it back into host order (this will let you use different processors at a later stage). You can then read the rest of the data.
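A sketch of that sending side, using writev() so the prefix and the data still go out in a single system call (the helper name and the 16-bit length are assumptions following the answer above):
#include <arpa/inet.h>
#include <stdint.h>
#include <sys/types.h>
#include <sys/uio.h>

/* Send a 2-byte network-order length prefix followed by the data. */
ssize_t send_prefixed(int sock, const void *data, uint16_t len)
{
    uint16_t nlen = htons(len);
    struct iovec iov[2];

    iov[0].iov_base = &nlen;
    iov[0].iov_len  = sizeof(nlen);
    iov[1].iov_base = (void *)data;
    iov[1].iov_len  = len;

    return writev(sock, iov, 2);       /* one system call for prefix + payload */
}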
In such cases, it's common to break the data up into chunks and provide a chunk header as well as a trailer. The header contains the length of the data in the chunk, so the peer knows when the trailer is expected - all it has to do is count received bytes and then check for a valid trailer. The chunks allow large data transfers without huge buffers at both ends.
It's no great hassle to add a 'status' byte in the header that can identify the last chunk.
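Purely as an illustration, one possible chunk layout (field names and sizes are mine, not from any protocol mentioned here; multi-byte fields would go over the wire in network byte order):
#include <stdint.h>

struct chunk_header {
    uint32_t data_len;   /* number of payload bytes that follow */
    uint8_t  status;     /* e.g. 0 = more chunks follow, 1 = last chunk */
};

struct chunk_trailer {
    uint32_t magic;      /* fixed value checked after data_len payload bytes */
};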
An alternative is to open another data connection, stream the entire serialization and then close this data connection, (like FTP does).
Could you make use of an open-source network communication library written in C#? If so, check out networkComms.net.
If this is truly the last data sent by your application, use shutdown(socket, SHUT_WR); on the sender side.
This will set the FIN TCP flag, which signals that the sender->receiver stream is over. The receiver will know this because its recv() will return 0 (just like an EOF condition) when everything has been received. The receiver can still send data afterward, and the sender can still listen for it, but the sender cannot send any more over this connection.
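A sketch of both sides (the function names are mine): the sender signals end-of-data with shutdown(), and the receiver loops until recv() returns 0.
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Sender: call after the last send(); the peer will see EOF. */
void finish_sending(int sock)
{
    shutdown(sock, SHUT_WR);   /* no more data will be sent; recv() still works */
}

/* Receiver: read until the peer shuts down its sending side. */
void drain(int sock)
{
    char buf[4096];
    ssize_t n;

    while ((n = recv(sock, buf, sizeof(buf), 0)) > 0) {
        /* process n bytes from buf */
    }
    if (n < 0)
        perror("recv");
}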
If data is sent to the client but the client is busy executing something else, how long will the data be available to read using recvfrom()?
Also, what happens if a second packet is sent before the first one is read? Is the first one lost and the next one sitting there waiting to be read?
(windows - udp)
If data is sent to the client but the client is busy executing something else, how long will the data be available to read using recvfrom()?
Forever, or not at all, or until you close the socket or read as much as a single byte.
The reason for that is:
UDP delivers datagrams, or it doesn't. This sounds like nonsense, but it is exactly what it is.
A single UDP datagram corresponds to either exactly one or several "fragments", which are IP packets (further encapsulated in some "on the wire" protocol, but that doesn't matter). The network stack collects all the fragments for a datagram. If the checksum on any of the fragments is bad, or anything else makes the network stack unhappy, the complete datagram is discarded, and you get nothing, not even an error. You simply don't know anything happened.
If all goes well, a complete datagram is placed into the receive buffer. Never anything less, and never anything more. If you try to recvfrom later, that is what you'll get.
The receive buffer is obviously necessarily large enough to hold at least one max-size datagram (65535 bytes), but since usually datagrams will not be maximum size, but rather something below 1280 bytes (or 1500 if you will), it can usually hold quite a few of them (on most platforms, the buffer defaults to something around 128-256k, and is configurable).
If there is not enough room left in the buffer, the datagram is discarded, and you get nothing (well, you do still get the ones that are already in the buffer). Again, you don't even know something happened.
Each time you call recvfrom, a complete datagram is removed from the buffer (important detail!), and you get up to the number of bytes that you requested. Which means if you naively try to read a few bytes and then a few bytes again, it just won't work. The first read will discard the rest of the datagram, and subsequent reads will get the first bytes of some future datagrams (and possibly block)!
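A POSIX-style sketch of the right way to do it, passing a buffer big enough for the largest datagram you expect (the function name is mine, for illustration only):
#include <sys/socket.h>
#include <sys/types.h>

/* One recvfrom() call returns at most one complete datagram; anything that
   does not fit in buf is discarded, never carried over to the next call. */
ssize_t read_one_datagram(int sock, char *buf, size_t buflen)
{
    struct sockaddr_storage src;
    socklen_t srclen = sizeof(src);

    return recvfrom(sock, buf, buflen, 0,
                    (struct sockaddr *)&src, &srclen);
}
Call it with, say, a 65535-byte buffer if you don't know the sender's datagram sizes in advance.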
This is very different from how TCP works. There you can actually read a few bytes and then a few bytes again, and it will just work, because the network layer simulates a data stream. You don't need to give a crap how it works, because the network stack makes sure it works.
Also, what happens if a second packet is sent before the first one is read, is the first one lost and the next one sitting there waiting to be read?
You probably meant to say "received" rather than "sent". Send and receive have different buffers, so that would not matter at all. As for receiving another packet while one is still in the buffer, see the explanation above. If the buffer can hold the second datagram, it will be stored; otherwise it silently goes *poof*.
This does not affect any datagrams already in the buffer.
Normally, the data will be buffered until it's read. I suppose if you wait long enough that the driver completely runs out of space, it'll have to do something, but assuming your code works halfway reasonably, that shouldn't be a problem.
A typical network driver will be able to buffer a number of packets without losing any.
If data is sent to the client but the client is busy executing something else, how long will the data be available to read using recvfrom()?
This depends on the OS. On Windows, I believe the default buffer for each UDP socket is 8192 bytes; it can be raised with setsockopt() (see the Winsock documentation), as in the sketch below. So, as long as the buffer isn't full, the data will stay there until the socket is closed or it is read.
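A Winsock-style sketch of raising that buffer (the helper name and the requested size are just examples, not prescribed values):
#include <winsock2.h>

/* Ask for a larger UDP receive buffer; returns 0 on success, -1 on failure. */
int raise_udp_rcvbuf(SOCKET sock, int bytes)
{
    if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF,
                   (const char *)&bytes, sizeof(bytes)) == SOCKET_ERROR)
        return -1;   /* details via WSAGetLastError() */
    return 0;
}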
Also, what happens if a second packet is sent before the first one is read, is the first one lost and the next one sitting there waiting to be read?
If the buffer has room, they are both stored; if not, one of them gets discarded. I believe it's the newest one, but I'm not 100% sure.