I want to send a byte array X times until I've reached the buffer size. My client sends the byte array until the buffer size is reached. The problem is packet loss: the server only receives about 8/10 of the buffer size.
Client:
while (writeSize < bufferSize)
{
    bytes_sent += sendto(servSocket, package, sizeof(package), 0, (struct sockaddr *)&server, sizeof(server));
    writeSize += sizeof(package);
}
Server:
while (packetSize < bufferSize)
{
    bytes_recv += recvfrom(servSocket, package, sizeof(package), 0, (struct sockaddr *)&client, &size);
    packetSize += sizeof(package);
    printf("Bytes received: %d\n", bytes_recv);
}
I can't really come up with a solution to this problem. When the client has sent all the packets and the server suffers from packet loss, the while loop won't end. Any ideas how I can solve this?
Many thanks
Hmpf. You are using UDP. For this protocol it's perfectly OK to throw away packets if there is a need to do so, so it's not reliable in the sense that "what is sent will arrive". What you have to do in your client is to check whether all the packets you need have arrived, and if not, ask the server politely to resend the ones you did not receive. Implementing this is not easy, and you might end up with a solution that performs worse than plain old TCP sockets, so my recommendation would be, in the first instance, to switch to a TCP connection.
If you have to use UDP to transfer a largish chunk of data, then design a small application-level protocol that handles possible packet loss and re-ordering (that's part of what TCP does for you). I would go with something like this:
Datagrams smaller than the MTU (allowing for the IP and UDP headers), say 1024 bytes, to avoid IP fragmentation.
A fixed-length header for each datagram that includes the data length and a sequence number, so you can stitch the data back together and detect missed, duplicate, and re-ordered parts.
Acknowledgements from the receiving side of what has been successfully received and put together.
Timeout and retransmission on the sending side when these acks don't come within an appropriate time (a sketch follows below).
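To make the last two items concrete, here is a minimal sender-side sketch of send-then-wait-for-ack with retransmission, using select() for the timeout. The 4-byte ack format, ACK_TIMEOUT_SEC, and MAX_RETRIES are illustrative assumptions, not something from the question:

#include <stdint.h>
#include <sys/select.h>
#include <sys/socket.h>

#define ACK_TIMEOUT_SEC 1   /* assumed retransmission timeout */
#define MAX_RETRIES     5

/* Send one datagram, then wait for a 4-byte ack echoing the same
 * sequence number; retransmit on timeout. Returns 0 on success. */
static int send_with_retry(int sock, const void *pkt, size_t len, uint32_t seq,
                           const struct sockaddr *dst, socklen_t dstlen)
{
    for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
        if (sendto(sock, pkt, len, 0, dst, dstlen) < 0)
            return -1;

        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(sock, &rfds);
        struct timeval tv = { ACK_TIMEOUT_SEC, 0 };

        if (select(sock + 1, &rfds, NULL, NULL, &tv) > 0) {
            uint32_t ack;
            if (recv(sock, &ack, sizeof(ack), 0) == sizeof(ack) && ack == seq)
                return 0;   /* acknowledged */
        }
        /* timeout or stray datagram: fall through and retransmit */
    }
    return -1;   /* receiver unreachable or link too lossy */
}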
Here you go. We started building TCP again ...
Of course TCP would be the better choice, but if you only have the UDP option, I'd wrap the data in a structure with sequence and buf_length fields.
It is absolutely NOT an optimal solution... It just uses the sequence number to ensure the data has all arrived and in the right order. If not, it simply stops receiving data. (It could do something smarter, like asking the sender to resend pieces and rebuilding the data despite the wrong order, but that becomes really complex.)
struct udp_pkt_s {
    int seq;
    size_t buf_len;
    char buf[BUFSIZE];
};
sending side:
struct udp_pkt_s udp_pkt;
udp_pkt.seq = 0;
while (writeSize < bufferSize)
{
    // switch to the next package block
    // ...
    memcpy(udp_pkt.buf, package, sizeof(package));
    udp_pkt.buf_len = sizeof(package);
    udp_pkt.seq++;
    bytes_sent += sendto(servSocket, &udp_pkt, sizeof(udp_pkt), 0, (struct sockaddr *)&server, sizeof(server));
    writeSize += sizeof(package);
}
receiving side:
int last_seq = 0;
while (packetSize < bufferSize)
{
    bytes_recv += recvfrom(servSocket, &udp_pkt, sizeof(udp_pkt), 0, (struct sockaddr *)&client, &size);
    // break if there's a hole (a missing or re-ordered packet)
    if (udp_pkt.seq != (last_seq + 1))
        break;
    last_seq = udp_pkt.seq;
    packetSize += udp_pkt.buf_len;
    printf("Bytes received: %zu\n", udp_pkt.buf_len);
}
After sendto you can wait for the socket's send buffer to drain to zero; you can call the following function after each sendto call (Linux-specific, using the SIOCOUTQ ioctl):
#include <sys/ioctl.h>
#include <linux/sockios.h>   /* SIOCOUTQ */
#include <unistd.h>

void waitForSendBuffer(void)
{
    int outstanding = 0;
    while (1)
    {
        // SIOCOUTQ reports the number of unsent bytes in the send queue
        ioctl(socketFd_, SIOCOUTQ, &outstanding);
        if (outstanding == 0)
            break;
        printf("going to sleep\n");
        usleep(1000);
    }
}
I want to send data of varying sizes over UDP. The size of the data to be sent is not fixed. I have the following scenario:
unsigned char buffer[BUFFERSIZE];
int bytes = fill_buffer(buffer, sizeof(buffer)); // Returns number of filled bytes.
sendto(socket, buffer, bytes, 0, (struct sockaddr *)&server, sizeof(server));
In the example above, the receiving side does not know how many bytes to receive. I also thought of first sending the number of bytes to receive and then sending the data. But in that case, I don't know what would happen if the packets arrive out of order.
The sender side would be:
sendto(socket, &bytes, sizeof(bytes), 0, (struct sockaddr *)&server, sizeof(server))
sendto(socket, buffer, bytes, 0, (struct sockaddr *)&server, sizeof(server))
The receiving side would be:
recvfrom(socket, &bytes, sizeof(bytes), 0, NULL, NULL)
recvfrom(socket, buffer, bytes, 0, NULL, NULL)
But could it be that the sent data arrives out of order?
I think you can send both in a single datagram if you add a message header.
The sender only sends the amount of payload data it has.
The receiver always requests the maximum payload size but examines the header and the return from recvfrom to determine the actual length.
Here's some rough code that illustrates what I'm thinking of:
#include <stdint.h>

typedef uint32_t u32;   /* the code below assumes this fixed-width alias */

struct header {
    u32 magic_number;
    u32 seq_no;
    u32 msg_type;
    u32 payload_length;
} __attribute__((__packed__));

#define MAXPAYLOAD 1024

struct message {
    struct header info;
    unsigned char payload[MAXPAYLOAD];
} __attribute__((__packed__));
void
sendone(int sockfd, const void *buf, size_t buflen)
{
    struct message msg;
    static u32 seqno = 0;

    memcpy(&msg.payload[0], buf, buflen);
    msg.info.magic_number = 0xDEADADDE;
    msg.info.seq_no = seqno++;
    msg.info.payload_length = buflen;
    sendto(sockfd, &msg, sizeof(struct header) + buflen, ...);
}
ssize_t
getone(int sockfd, void *buf, size_t buflen)
{
    struct message msg;
    ssize_t rawlen;
    ssize_t paylen;
    static u32 seqno = 0;

    rawlen = recvfrom(sockfd, &msg, sizeof(struct header) + MAXPAYLOAD, ...);
    paylen = msg.info.payload_length;
    if (rawlen != (ssize_t)(sizeof(struct header) + paylen))
        return -1;  // error: short read or corrupt length field
    memcpy(buf, &msg.payload[0], paylen);
    return paylen;
}
The receiver can check the magic number and sequence number to look for corruption or missing/dropped packets, etc.
In fact, you can probably get more efficiency by using sendmsg and recvmsg, since they allow you to send a single message using a scatter/gather list. That is, the data would not have to be copied in/out with memcpy from the message struct (you'd only need struct header), so it's closer to zero-copy buffering; a sketch follows.
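For illustration, a gather-send with sendmsg might look like this sketch. It reuses the struct header defined above; the destination plumbing and the zero msg_type are assumptions:

#include <sys/socket.h>
#include <sys/uio.h>

/* Gather-send: the kernel assembles header + payload on the way out,
 * so no memcpy into a staging struct is needed. */
ssize_t sendone_gather(int sockfd, const void *buf, size_t buflen,
                       const struct sockaddr *dst, socklen_t dstlen)
{
    static u32 seqno = 0;
    struct header info = {
        .magic_number   = 0xDEADADDE,
        .seq_no         = seqno++,
        .msg_type       = 0,            /* assumed message type */
        .payload_length = buflen,
    };
    struct iovec iov[2] = {
        { .iov_base = &info,       .iov_len = sizeof(info) },
        { .iov_base = (void *)buf, .iov_len = buflen       },
    };
    struct msghdr msg = {
        .msg_name    = (void *)dst,
        .msg_namelen = dstlen,
        .msg_iov     = iov,
        .msg_iovlen  = 2,
    };
    return sendmsg(sockfd, &msg, 0);
}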
Another option may be to use the MSG_PEEK flag with recvfrom/recvmsg. I've never used this myself, but it would be something like:
Do a recvmsg with a length of sizeof(struct header) and the MSG_PEEK flag.
Do a second recvmsg with a length of sizeof(struct header) + msg.info.payload_length.
This is just the nicety of not having to always provide a maximum-sized buffer. Since it involves two syscalls, it may be a bit slower. But it might allow some tricks with selecting a payload buffer from a pool, based on the type and/or length of the message, as in the sketch below.
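A rough sketch of that two-step read, reusing the structs above (the malloc stands in for whatever buffer pool you'd select from; error handling abbreviated):

#include <stdlib.h>
#include <sys/socket.h>

/* Peek at the fixed-size header first, then pull the whole datagram
 * with an exact-size buffer. Returns total bytes read, or -1. */
ssize_t getone_peek(int sockfd, unsigned char **bufp)
{
    struct header info;

    /* Step 1: MSG_PEEK leaves the datagram queued in the kernel. */
    if (recvfrom(sockfd, &info, sizeof(info), MSG_PEEK, NULL, NULL) < 0)
        return -1;
    if (info.payload_length > MAXPAYLOAD)
        return -1;                      /* corrupt or hostile header */

    /* Step 2: read the full message into a right-sized buffer. */
    size_t total = sizeof(struct header) + info.payload_length;
    unsigned char *buf = malloc(total);     /* stand-in for a pool */
    if (buf == NULL)
        return -1;

    ssize_t n = recvfrom(sockfd, buf, total, 0, NULL, NULL);
    if (n < 0) {
        free(buf);
        return -1;
    }
    *bufp = buf;
    return n;
}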
Unlike TCP, which is a stream-based protocol, meaning that calls to recv don't exactly correspond to calls to send, UDP is packet-based, meaning that each recvfrom matches exactly one sendto. This also means you need to take care of how large each message you send is.
If you send a UDP datagram that is larger than what can be contained in a single IP packet, the UDP message will be fragmented across multiple IP packets, increasing the chance of data loss. That's something you want to avoid. Also, if you're using IPv6, you'll get an error when you attempt to send, because IPv6 doesn't support fragmentation.
What does this mean in relation to what you're doing? It means that, roughly speaking, your messages shouldn't be any larger than about 1450 bytes, so you can use that value as the size of your input buffer. Then you can use the return value of recvfrom to see how many bytes were actually read. If your messages are larger than that, you should break them up into multiple messages, as sketched below.
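As a rough sketch of that chunking on the sending side (the CHUNK constant and function name are made up for illustration; error handling omitted):

#define CHUNK 1450   /* assumed: stays under a typical Ethernet MTU */

/* Sketch: send a large buffer as a series of small datagrams.
 * A real protocol would also prepend a sequence-numbered header so
 * the receiver can reassemble the pieces and detect loss. */
void send_chunked(int sock, const unsigned char *data, size_t len,
                  const struct sockaddr *dst, socklen_t dstlen)
{
    for (size_t off = 0; off < len; off += CHUNK) {
        size_t n = len - off;
        if (n > CHUNK)
            n = CHUNK;
        sendto(sock, data + off, n, 0, dst, dstlen);
    }
}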
As with any UDP-based protocol, you need to account for the case where messages get lost and need to be retransmitted, or where messages arrive out of order.
Actually, the answer to this question was quite simple.
Given:
unsigned char buffer[BUFFERSIZE];
int bytes = fill_buffer(buffer, sizeof(buffer)); // Returns number of filled bytes.
sendto(socket, buffer, bytes, 0, (struct sockaddr *)&server, sizeof(server));
The return value of recvfrom tells us how many bytes were received, even though we ask for a full buffer's worth:
int bytesReceived = recvfrom(socket, buffer, sizeof(buffer), 0, NULL, NULL);
// Process bytesReceived number of bytes in the buffer
Just for the purpose of learning raw sockets in C, I am writing a simple server that uses raw sockets to receive and send messages.
I create the socket:
if ((r_sock = socket(AF_INET, SOCK_RAW, IPPROTO_UDP)) < 0) {
    perror("socket");
    exit(-1);
}
Then I create an infinite loop and start receiving, processing, and replying:
while (1) {
    if ((n = recvfrom(r_sock, buffer, BUFLEN, 0, (struct sockaddr *)&client, &client_len)) < 0) {
        perror("recvfrom");
        exit(-1);
    }

    // Discard messages not intended for the server
    if (htons(udp->uh_dport) != my_port) {
        continue;
    }

    // Do whatever with the data received and then send a reply to the client
    // ....

    if ((n = sendto(r_sock, udp, ntohs(udp->uh_len), 0, (struct sockaddr *)&client, client_len)) < 0) {
        perror("sendto");
        exit(-1);
    }
}
I am not showing the definition of every single variable here, but for the sake of completeness: buffer is a char array of size BUFLEN (big enough) and udp is a struct udphdr pointer to the right position in the buffer.
The point is that I have another program serving as the client, which uses standard UDP sockets (SOCK_DGRAM) and is proven to work properly (I also tried with netcat just in case). When I send a message with the client, it never receives the reply. It seems that when the server sends the reply, the server itself gets the message and the client gets nothing.
So, my question is: is there a way of solving this with raw sockets? That is, to make the server not receive its own messages, and to prevent others from receiving them?
Thanks in advance!
I have just realised that it was a problem with the checksum... Once the UDP checksum was correct, the packet was received by the client.
Wireshark gave me the lead to the solution. I saw that the checksum was not being validated, so I went to Edit > Preferences > Protocols > UDP > Validate the UDP checksum if possible and checked it.
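For reference, the UDP checksum is the 16-bit one's-complement sum over an IPv4 pseudo-header (source address, destination address, protocol, UDP length) plus the UDP header and payload, per RFC 768. A minimal sketch of computing it (addresses passed in host byte order; not hardened for production use):

#include <stddef.h>
#include <stdint.h>
#include <netinet/in.h>

/* udp points at the UDP header; len is the UDP length field
 * (header + payload) in bytes. The checksum field in the header must
 * be zero while computing. Returns the checksum in host order;
 * store it into the header with htons(). */
uint16_t udp_checksum(uint32_t src_addr, uint32_t dst_addr,
                      const uint8_t *udp, size_t len)
{
    uint32_t sum = 0;

    /* IPv4 pseudo-header, as 16-bit words */
    sum += (src_addr >> 16) & 0xFFFF;
    sum += src_addr & 0xFFFF;
    sum += (dst_addr >> 16) & 0xFFFF;
    sum += dst_addr & 0xFFFF;
    sum += IPPROTO_UDP;
    sum += len;

    /* UDP header + payload, big-endian 16-bit words */
    while (len > 1) {
        sum += (uint32_t)(udp[0] << 8) | udp[1];
        udp += 2;
        len -= 2;
    }
    if (len == 1)                /* odd trailing byte is zero-padded */
        sum += (uint32_t)udp[0] << 8;

    while (sum >> 16)            /* fold the carries back in */
        sum = (sum & 0xFFFF) + (sum >> 16);

    return (uint16_t)~sum;
}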
Hope it helps
I have been working on how to transfer an image using UDP in C. I have created code that sometimes works and sometimes doesn't. I think the issue is that sometimes the server receives more packets than it writes. I know I am essentially trying to recreate TCP, but that is kind of what I'm looking for; I'm just not sure how to do it.
I think that to fix it, the client should send part of the image buffer and only send the second part when the server replies back to the client.
Here is the code:
Client:
while (!feof(p))
{
    fread(c, 1, BLEN, p);
    sprintf(buf, "%s", c);
    temp = sendto(s, buf, BLEN, 0, (struct sockaddr *)&si_other, slen);
    //sleep(3);
    //printf("%d ", temp);
    if (temp < 0)
    {
        fprintf(stderr, "sendto error.\n");
        printf("erro");
        exit(1);
    }
    i++;
}
Server:
while(1){
if(recvfrom(s, buf, BLEN, 0, (struct sockaddr *) &si_other, (unsigned int *) &slen)==-1){
perror("recvfrom error.\n");
exit(1);
}
//printf("%s ", &si_other);
flagr[0] = buf[0];
flagr[1] = buf[1];
flagr[2] = buf[2];
if (strcmp(flagr, flag) == 0 ){
break;
}
fwrite(buf, 1, BLEN, pp);
i++;
}
UDP is a datagram protocol, meaning that each call to sendto sends one message. If that message is larger than an IP packet can hold, it will be fragmented across multiple IP datagrams. If any one of those fragments fails to arrive, the whole thing is dropped at the OS level.
The data needs to be sent in chunks of no more than about 1450 bytes. Then the receiving side will need to read each packet and, because UDP does not guarantee that data will arrive in order, you will need to reassemble them in the proper order.
That means each packet has to have a user-defined header which contains the sequence number, so that the receiver knows what order to put them in (see the sketch below).
You also need to worry about retransmissions, since UDP doesn't guarantee that a packet that is sent is actually received.
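To illustrate the reassembly idea, here is a sketch in which each datagram carries a chunk index, and the receiver writes every payload at its computed file offset with pwrite() so that arrival order doesn't matter. The img_pkt header, CHUNK size, and total_chunks bookkeeping are all assumptions for illustration:

#include <stdint.h>
#include <sys/socket.h>
#include <unistd.h>

#define CHUNK 1400            /* assumed payload bytes per datagram */

struct img_pkt {              /* hypothetical application header */
    uint32_t seq;             /* chunk index, 0-based */
    uint32_t len;             /* payload bytes actually used */
    unsigned char payload[CHUNK];
};

/* Sketch only: no acks or retransmission, and a duplicated datagram
 * would be counted twice; a real protocol tracks which seqs arrived. */
void receive_image(int sock, int fd, uint32_t total_chunks)
{
    struct img_pkt pkt;
    uint32_t got = 0;

    while (got < total_chunks) {
        ssize_t n = recvfrom(sock, &pkt, sizeof(pkt), 0, NULL, NULL);
        if (n < (ssize_t)(2 * sizeof(uint32_t)) || pkt.len > CHUNK)
            continue;                       /* runt or corrupt: skip */
        pwrite(fd, pkt.payload, pkt.len, (off_t)pkt.seq * CHUNK);
        got++;
    }
}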
There's a program I wrote called UFTP which does all of this. Take a look at the documentation and code to get an idea of what you need to do to implement reliable data transfer over UDP.
I am using domain sockets (AF_UNIX) to communicate between two threads for inter-process communication. This was chosen to work well with libev: I use it on the recv end of the domain socket. This works very well, except that the data I am sending is a constant 4864 bytes. I cannot afford to have this data fragmented. I always thought domain sockets wouldn't fragment data, but as it turns out, they do. When communication between the threads is at its peak, I observe the following:
Thread 1:
SEND = 4864 actual size = 4864
Thread 2:
READ = 3328 actual size = 4864
Thread 1:
SEND = 4864 actual size = 4864
Thread 2:
READ = 1536 actual size = 4864
As you can see, thread 2 received the data in fragments (3328 + 1536). This is really bad for my application. Is there any way to make it not fragment the data? I understand that IP_DONTFRAG can be set only for the AF_INET family. Can someone suggest an alternative?
Update: sendto code
ssize_t
socket_domain_writer_dgram_send(int *domain_sd, domain_packet_t *pkt) {
    struct sockaddr_un remote;
    unsigned long len = 0;
    ssize_t ret = 0;

    memset(&remote, '\0', sizeof(struct sockaddr_un));
    remote.sun_family = AF_UNIX;
    strncpy(remote.sun_path, DOMAIN_SOCK_PATH, strlen(DOMAIN_SOCK_PATH));
    len = strlen(remote.sun_path) + sizeof(remote.sun_family) + 1;

    ret = sendto(*domain_sd, pkt, sizeof(*pkt), 0, (struct sockaddr *)&remote, sizeof(struct sockaddr_un));
    if (ret == -1) {
        bps_log(BPS_LOGGER_RD, ASL_LEVEL_ERR, "Domain writer could not connect send packets", errno);
    }

    return ret;
}
SOCK_STREAM by definition doesn't preserve message boundaries. Try again with SOCK_DGRAM or SOCK_SEQPACKET:
http://man7.org/linux/man-pages/man7/unix.7.html
On the other hand, consider that you may be passing messages larger than your architecture's page size. For example, on amd64 a memory page is 4K. If that's a problem for any reason, it might make sense to split the packets in two.
Note, however, that it's not a real issue for the packets to arrive fragmented. It's common to have a packet assembler on the receiving end of the socket. What's wrong with implementing one?
4864 + 3328 = 8192. My guess is that you're transmitting two 4864-byte packets back to back in some cases, and it's filling an 8 KB kernel buffer somewhere. IP_DONTFRAG isn't applicable because IP is not involved here — the "fragmentation" you're seeing is happening via a completely different mechanism.
If all the data you're transmitting consists of packets, you would do well to use a datagram socket (SOCK_DGRAM) instead of a stream. This should make the send() block when the kernel buffer doesn't have sufficient space to store an entire packet, rather than allowing a partial write through, and will make each recv() return exactly one packet, so you don't need to deal with framing.
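Since both endpoints here are threads in the same process, a socketpair is the simplest way to get a connected pair of boundary-preserving sockets; here's a sketch (the helper name is made up):

#include <stdio.h>
#include <sys/socket.h>

/* Create a connected pair of AF_UNIX sockets that preserve message
 * boundaries: one end per thread. SOCK_SEQPACKET is reliable and
 * ordered; SOCK_DGRAM also works for a socketpair. */
int make_ipc_pair(int fds[2])
{
    if (socketpair(AF_UNIX, SOCK_SEQPACKET, 0, fds) < 0) {
        perror("socketpair");
        return -1;
    }
    return 0;
}

/* Usage sketch: a 4864-byte send() arrives as one 4864-byte recv(),
 * never as 3328 + 1536:
 *     int fds[2];
 *     make_ipc_pair(fds);
 *     send(fds[0], pkt, 4864, 0);               // thread 1
 *     ssize_t n = recv(fds[1], buf, 4864, 0);   // thread 2
 */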
I have written an HTTP proxy that does some stuff that's not relevant here, but it increases the client's time-to-serve by a huge amount (600us without the proxy vs 60000us with it). I think I have found where the bulk of that time comes from: between my proxy finishing sending back to the client and the client finishing receiving it. For now, server, proxy, and client are running on the same host, using localhost as the addresses.
Once the proxy has finished sending (once it has returned from send() at least), I print the result of gettimeofday which gives an absolute time. When my client has received, it prints the result of gettimeofday. Since they're both on the same host, this should be accurate. All send() calls are with no flags, so they are blocking. The difference between the two is about 40000us.
The proxy's socket on which it listens for client connections is set up with the hints AF_UNSPEC, SOCK_STREAM and AI_PASSIVE. Presumably a socket from accept()ing on that will have the same parameters?
If I'm understanding all this correctly, Apache manages to do everything in 600us (including the equivalent of whatever is causing this 40000us delay). Can anybody suggest what might be causing it? I have tried setting the TCP_NODELAY option (I know I shouldn't; it's just to see if it made a difference), and the delay between finishing sending and finishing receiving went right down; I forget the number, but <1000us.
This is all on Ubuntu Linux 2.6.31-19. Thanks for any help
40ms is the TCP ACK delay on Linux, which indicates that you are likely encountering a bad interaction between delayed acks and the Nagle algorithm. The best way to address this is to send all of your data using a single call to send() or sendmsg(), before waiting for a response. If that is not possible then certain TCP socket options including TCP_QUICKACK (on the receiving side), TCP_CORK (sending side), and TCP_NODELAY (sending side) can help, but can also hurt if used improperly. TCP_NODELAY simply disables the Nagle algorithm and is a one-time setting on the socket, whereas the other two must be set at the appropriate times during the life of the connection and can therefore be trickier to use.
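For example, where a header and body would otherwise go out in two back-to-back send() calls, a single writev() hands both pieces to TCP at once, leaving Nagle nothing to hold back; a minimal sketch (the header/body split is hypothetical):

#include <stddef.h>
#include <sys/uio.h>

/* Coalesce header and body into one write so TCP can emit them as a
 * single segment instead of delaying the second small packet. */
ssize_t send_response(int sock, const void *hdr, size_t hdrlen,
                      const void *body, size_t bodylen)
{
    struct iovec iov[2] = {
        { .iov_base = (void *)hdr,  .iov_len = hdrlen  },
        { .iov_base = (void *)body, .iov_len = bodylen },
    };
    return writev(sock, iov, 2);
}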
You can't really do meaningful performance measurements on a proxy with the client, proxy and origin server on the same host.
Place them all on different hosts on a network. Use real hardware machines for them all, or specialised hardware test systems (e.g. Spirent).
Your methodology makes no sense. Nobody has 600us of latency to their origin server in practice anyway. Running all the tasks on the same host creates contention and a wholly unrealistic network environment.
INTRODUCTION:
I already praised mark4o for the truly correct answer to the general question of lowering latency. I would like to restate that answer in terms of how it helped solve my latency issue, because I think it's the answer most people will come here looking for.
ANSWER:
In a real-time network app (such as a multiplayer game) where getting short messages between nodes as quickly as possible is critical, TURN NAGLE OFF. In most cases this means setting the "no-delay" flag to true.
DISCLAIMER:
While this may not solve the OP's specific problem, most people who come here will probably be looking for this answer to the general question of latency issues.
ANECDOTAL BACK-STORY:
My game was doing fine until I added code to send two messages separately, very close to each other in execution time. Suddenly I was getting 250ms of extra latency. As this was part of a larger code change, I spent two days trying to figure out what the problem was. When I combined the two messages into one, the problem went away. Logic led me to mark4o's post, so I set the .NET socket member "NoDelay" to true, and now I can send as many messages in a row as I want.
From e.g. the RedHat documentation:
Applications that require lower latency on every packet sent should be run on sockets with TCP_NODELAY enabled. It can be enabled through the setsockopt command with the sockets API:
int one = 1;
setsockopt(descriptor, SOL_TCP, TCP_NODELAY, &one, sizeof(one));
For this to be used effectively, applications must avoid doing small, logically related buffer writes. Because TCP_NODELAY is enabled, these small writes will make TCP send these multiple buffers as individual packets, which can result in poor overall performance.
In your case, that 40ms is probably just a scheduler time quantum. In other words, that's how long it takes your system to get back around to the other tasks. Try it on a real network and you'll get a completely different picture. If you have a multi-core machine, using virtual OS instances in VirtualBox or some other VM would give you a much better idea of what will really happen.
For a TCP proxy, it would seem prudent on the LAN side to increase the TCP initial window size, as discussed on linux-netdev and /. recently.
http://www.amailbox.org/mailarchive/linux-netdev/2010/5/26/6278007
http://developers.slashdot.org/story/10/11/26/1729218/Google-Microsoft-Cheat-On-Slow-Start-mdash-Should-You
Including paper on the topic by Google,
http://www.google.com/research/pubs/pub36640.html
And an IETF draft also by Google,
http://zinfandel.levkowetz.com/html/draft-ietf-tcpm-initcwnd-00
For Windows, I'm not sure if setting TCP_NODELAY helps. I tried it, but latency was still bad. One person suggested I try UDP instead, and that did the trick.
A few complicated examples of UDP did not work for me, but I ran across a simple one and it did the trick...
#include <Winsock2.h>
#include <WS2tcpip.h>
#include <system_error>
#include <string>
#include <iostream>
class WSASession
{
public:
    WSASession()
    {
        int ret = WSAStartup(MAKEWORD(2, 2), &data);
        if (ret != 0)
            throw std::system_error(WSAGetLastError(), std::system_category(), "WSAStartup Failed");
    }
    ~WSASession()
    {
        WSACleanup();
    }

private:
    WSAData data;
};
class UDPSocket
{
public:
    UDPSocket()
    {
        sock = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
        if (sock == INVALID_SOCKET)
            throw std::system_error(WSAGetLastError(), std::system_category(), "Error opening socket");
    }
    ~UDPSocket()
    {
        closesocket(sock);
    }

    void SendTo(const std::string& address, unsigned short port, const char* buffer, int len, int flags = 0)
    {
        sockaddr_in add;
        add.sin_family = AF_INET;
        add.sin_addr.s_addr = inet_addr(address.c_str());
        add.sin_port = htons(port);
        int ret = sendto(sock, buffer, len, flags, reinterpret_cast<SOCKADDR *>(&add), sizeof(add));
        if (ret < 0)
            throw std::system_error(WSAGetLastError(), std::system_category(), "sendto failed");
    }
    void SendTo(sockaddr_in& address, const char* buffer, int len, int flags = 0)
    {
        int ret = sendto(sock, buffer, len, flags, reinterpret_cast<SOCKADDR *>(&address), sizeof(address));
        if (ret < 0)
            throw std::system_error(WSAGetLastError(), std::system_category(), "sendto failed");
    }
    sockaddr_in RecvFrom(char* buffer, int len, int flags = 0)
    {
        sockaddr_in from;
        int size = sizeof(from);
        // leave room for the terminator below so it cannot overflow the buffer
        int ret = recvfrom(sock, buffer, len - 1, flags, reinterpret_cast<SOCKADDR *>(&from), &size);
        if (ret < 0)
            throw std::system_error(WSAGetLastError(), std::system_category(), "recvfrom failed");
        // make the buffer zero terminated
        buffer[ret] = 0;
        return from;
    }
    void Bind(unsigned short port)
    {
        sockaddr_in add;
        add.sin_family = AF_INET;
        add.sin_addr.s_addr = htonl(INADDR_ANY);
        add.sin_port = htons(port);
        int ret = bind(sock, reinterpret_cast<SOCKADDR *>(&add), sizeof(add));
        if (ret < 0)
            throw std::system_error(WSAGetLastError(), std::system_category(), "Bind failed");
    }

private:
    SOCKET sock;
};
Server
#define TRANSACTION_SIZE 8
static void startService(int portNumber)
{
    try
    {
        WSASession Session;
        UDPSocket Socket;
        char tmpBuffer[TRANSACTION_SIZE];
        INPUT input;
        input.type = INPUT_MOUSE;
        input.mi.mouseData = 0;
        input.mi.dwFlags = MOUSEEVENTF_MOVE;
        Socket.Bind(portNumber);
        while (1)
        {
            sockaddr_in add = Socket.RecvFrom(tmpBuffer, sizeof(tmpBuffer));
            // ...do something with tmpBuffer...
            Socket.SendTo(add, data, len);
        }
    }
    catch (std::system_error& e)
    {
        std::cout << e.what();
    }
}
Client
const char *targetIP = "192.168.1.xxx";
Socket.SendTo(targetIP, targetPort, data, len);