Slow UDP streaming in C on Linux inside Windows but fast on Apple? - c

I am working on a live video streaming over Wi-Fi project.
We use UDP to send datagrams with the sendto() function from socket.h.
Each datagram is 1420 bytes, and they are sent continuously with a 250 µs delay between each one.

While sending the data from Ubuntu (running inside Windows) to an iPad, the sendto() function stalls for some reason and the stream becomes slow and unstable (I get a stream rate of about 1.28 mb/sec).
This only happens when the iPad is connected to the network; if it is not connected, we get a flow rate of about 4 mb/sec.
This is strange to me, since I was under the impression that UDP would keep on going and not be affected by the receiver side.

To make things even stranger, when I run the same code on an iMac from the terminal I get an excellent stream rate with perfect video on the other side.

The router is 2.4 GHz 802.11n with a 20 MHz channel.

Any ideas?
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Globals referenced below (declared elsewhere in the original program). */
static int server_socket;
static struct sockaddr_in base;
static socklen_t sock_struct_size = sizeof(struct sockaddr_in);

void init_udp_send_socket(char *server_ip, int server_port)
{
    if ((server_socket = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP)) == -1) {
        perror("Failed to open socket");
        exit(2);
    }

    memset((char *) &base, 0, sizeof(base));
    base.sin_family = AF_INET;
    base.sin_port = htons(server_port);

    if (inet_aton(server_ip, &base.sin_addr) == 0) {
        perror("Bad base IP address\n");
        exit(1);
    }
}

void send_udp_datagram(char *blob, int size)
{
    printf("size is: %d\n", size);

    if (sendto(server_socket, blob, size, 0,
               (struct sockaddr *) &base, sock_struct_size) == -1) {
        perror("Failed sending datagram\n");
    }
}
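For reference, a minimal sketch of the send loop described above: one 1420-byte datagram, then a 250 µs pause. fill_next_chunk() is a hypothetical placeholder for whatever produces the next slice of video; the helper names are not from the original code.

#include <time.h>   /* nanosleep */

/* Hypothetical pacing loop matching the description in the question. */
int  fill_next_chunk(char *buf, int size);      /* hypothetical */
void send_udp_datagram(char *blob, int size);   /* shown above */

void stream_video(void)
{
    char blob[1420];
    struct timespec gap = { .tv_sec = 0, .tv_nsec = 250 * 1000 };  /* 250 us */

    while (fill_next_chunk(blob, (int)sizeof(blob))) {
        send_udp_datagram(blob, (int)sizeof(blob));
        nanosleep(&gap, NULL);   /* coarse pacing; the real gap is often longer */
    }
}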

Related

Sockets "hanging up" in C/buffer smaller than usual

When I send() and recv() data from my program locally, it works fine.
However, on my remote server the same program, which usually receives data in chunks of 4096 bytes, receives buffers capped at 1428 bytes, and they rarely jump above this number.
Worst of all, after a minute or so of transferring data the socket just freezes and stops execution, and the program stays in this frozen state perpetually, like so:
Received: 4096
Received: 4096
Received: 3416
The server is simple: it accepts a connection from a client and receives data in chunks of 4096 bytes. This works absolutely fine locally, but on my remote server it fails consistently, unless I only send a small amount of data (sending 1000-byte files worked fine).
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* STANDARD_PORT and BACKLOG are defined elsewhere in the original program. */

int main()
{
    while (1) {
        int servSock = socket(AF_INET, SOCK_STREAM | SOCK_NONBLOCK, IPPROTO_TCP);
        if (servSock < 0) {
            fprintf(stderr, "Socket error.\n");
            continue;
        }

        struct sockaddr_in servAddr;
        memset(&servAddr, 0, sizeof(servAddr));
        servAddr.sin_family = AF_INET;
        servAddr.sin_addr.s_addr = htonl(INADDR_ANY);
        servAddr.sin_port = htons(atoi(STANDARD_PORT));

        if (bind(servSock, (struct sockaddr *) &servAddr, sizeof(servAddr)) < 0) {
            fprintf(stderr, "Bind error.\n");
            close(servSock);
            continue;
        }

        if (listen(servSock, BACKLOG) < 0) {
            fprintf(stderr, "Listen error.\n");
            close(servSock);
            continue;
        }

        printf("%s", "Listening on socket for incoming connections.\n");

        struct sockaddr_in clntAddr;
        socklen_t clntAddrLen = sizeof(clntAddr);

        while (1) {
            int newsock = accept(servSock, (struct sockaddr *) &clntAddr, &clntAddrLen);
            if (newsock < 0) {
                fprintf(stderr, "Accept connection error");
                return 1;
            }

            char clntName[INET_ADDRSTRLEN];
            if (inet_ntop(AF_INET, &clntAddr.sin_addr.s_addr, clntName, sizeof(clntName)) != NULL)
                printf("Handling client %s:%d\n", clntName, ntohs(clntAddr.sin_port));

            /* Read the (up to 16-byte) file name sent by the client. */
            char file[17];
            memset(file, 0, sizeof(file));
            int recvd = recv(newsock, file, 16, 0);
            file[16] = '\0';                 /* was file[17], which wrote out of bounds */

            char local_file_path[200];
            memset(local_file_path, 0, sizeof(local_file_path));
            strcat(local_file_path, "/home/");
            strcat(local_file_path, file);
            printf("%s\n", local_file_path); /* avoid passing received data as the format string */

            FILE *fp = fopen(local_file_path, "wb");

            char buffer[4096];
            while (1) {
                memset(buffer, 0, sizeof(buffer));
                recvd = recv(newsock, buffer, sizeof(buffer), 0);
                printf("Received: %d\n", recvd);
                if (recvd <= 0) {            /* error or EOF: stop before writing */
                    fclose(fp);
                    break;
                }
                fwrite(buffer, sizeof(char), recvd, fp);
            }
            close(newsock);
        }
        close(servSock);
    }
    return 1;
}
EDIT: For more context, this is a Windows server I am adapting to Linux. Perhaps the recv() call is blocking when it shouldn't be; I'm going to test with flags.
However, on my remote server the same program, which usually receives data in chunks of 4096 bytes, receives buffers capped at 1428 bytes, and they rarely jump above this number.
Insufficient context has been presented for confidence, but that looks like a plausible difference between a socket whose peer is on the same machine (one connected to localhost, for example) and one whose peer is physically separated from it by an Ethernet network. The 1428 is pretty close to the typical MTU for such a network, once you allow space for protocol headers.
Additionally, you might be seeing that one system coalesces the payloads from multiple transport-layer packets more, or differently, than the other does, for any of a variety of reasons.
In any case, at the userspace level, the difference in transfer sizes for a stream socket is not semantically meaningful. In particular, you cannot rely upon one end of the connection to read data in the same size chunks that the other sends it. Nor can you necessarily rely on receiving data in full-buffer units, regardless of the total amount being transferred or the progress of the transfer.
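For example, if a fixed-size record is expected, the usual remedy is to loop until the full amount has arrived, regardless of how the kernel chunks it. A minimal sketch (the helper name is hypothetical):

#include <sys/types.h>
#include <sys/socket.h>

/* Keep calling recv() until exactly `len` bytes have arrived.
   Returns len on success, 0 on clean EOF before len bytes, -1 on error. */
ssize_t recv_exactly(int fd, void *buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = recv(fd, (char *)buf + got, len - got, 0);
        if (n < 0)
            return -1;      /* error (check errno) */
        if (n == 0)
            return 0;       /* peer closed before sending len bytes */
        got += n;
    }
    return (ssize_t)len;
}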
Worst of all, after a minute or so of transferring data the socket just freezes and stops execution, and the program stays in this frozen state perpetually, like so:
"Worst" suggests there are other problems that you have not described. But yes, your code is susceptible to freezing. You will not see EOF on the socket until the remote peer closes its side, cleanly. The closure part is what EOF means for a network socket; the cleanness part is required, at the protocol level, for the local side to recognize the closure. If the other end holds the connection open but doesn't write anything else to it, then just such a freeze will occur. If the other side is abruptly terminated, or physically or logically cut off from the network without a chance to close its socket, then just such a freeze will occur as well.
And indeed, you remarked in comments that ...
Both the client and the server are hanging. The client program just stops sending data, and the server freezes as well.
If the client hangs mid-transfer, then, following from the above, there is every reason to expect that the server will freeze, too. Thus, it sounds like you may be troubleshooting the wrong component.
Perhaps the recv() call is blocking when it shouldn't be; I'm going to test with flags.
There is every reason to think the recv() call is indeed blocking when you don't expect it to. It is highly unlikely that it is blocking when it shouldn't.
It is possible to set timeouts for socket operations, so that they eventually will fail instead of hanging indefinitely when the remote side fails. Doing so would allow your server to recover, but it would not resolve the client-side issue. You'll need to look into that more deeply.*
*You might see the client unfreeze after the server times out and closes the connection on its end. Don't take that as a resolution.
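For illustration, a minimal sketch of that timeout approach using the standard SO_RCVTIMEO socket option; the helper name and any particular timeout value are only examples:

#include <stdio.h>
#include <sys/socket.h>
#include <sys/time.h>

/* After this call, a recv() on fd that has waited `seconds` with no data
   fails with errno EAGAIN/EWOULDBLOCK instead of blocking forever. */
static int set_recv_timeout(int fd, int seconds)
{
    struct timeval tv = { .tv_sec = seconds, .tv_usec = 0 };
    if (setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv)) < 0) {
        perror("setsockopt(SO_RCVTIMEO)");
        return -1;
    }
    return 0;
}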

Raw socket send and receive

Just for the purpose of learning raw sockets in C, I am writing a simple server that uses raw sockets to receive and send messages.
I create the socket:
if ((r_sock = socket(AF_INET, SOCK_RAW, IPPROTO_UDP)) < 0) {
    perror("socket");
    exit(-1);
}
Then I create an infinite loop and start receiving, processing, and replying:
while (1) {
    if ((n = recvfrom(r_sock, buffer, BUFLEN, 0,
                      (struct sockaddr *) &client, &client_len)) < 0) {
        perror("recvfrom");
        exit(-1);
    }

    // Discard messages not intended for the server
    if (htons(udp->uh_dport) != my_port) {
        continue;
    }

    // Do whatever with the data received, then send the reply to the client
    // ....
    if ((n = sendto(r_sock, udp, ntohs(udp->uh_len), 0,
                    (struct sockaddr *) &client, client_len)) < 0) {
        perror("sendto");
        exit(-1);
    }
}
I am not showing the definition of every single variable here, but for the sake of completeness: buffer is a char array of size BUFLEN (big enough) and udp is a struct udphdr pointer to the right position in the buffer.
The point is that I have another program that serves as the client, using standard UDP sockets (SOCK_DGRAM), which has been shown to work properly (I also tried with netcat, just in case). When I send a message with the client, it never receives the reply. It seems that when the server sends the reply to the client, the server itself gets the message and the client gets nothing.
So, my question is: is there a way of solving this with raw sockets? That is, making the server not receive its own messages and preventing others from receiving them?
Thanks in advance!
I have just realised that it was a problem with the checksum... Once I had a correct checksum in the UDP header, the packet was correctly received by the client.
Wireshark gave me the lead to the solution. I saw that the checksum was not being validated, so I went to Edit > Preferences > Protocols > UDP > Validate the UDP checksum if possible and checked it.
Hope it helps
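For anyone hitting the same issue: the checksum in question is the standard one's-complement sum over the IPv4 pseudo-header plus the whole UDP segment. A rough, generic sketch of computing it (this is an illustration under IPv4 assumptions, not the poster's code):

#include <stdint.h>
#include <stddef.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* src_be/dst_be are IPv4 addresses in network byte order (as in sin_addr.s_addr);
   seg points at the UDP header + payload with the checksum field zeroed;
   seg_len is the segment length in host byte order.
   Returns the value to store into uh_sum (network byte order). */
static uint16_t udp_checksum(uint32_t src_be, uint32_t dst_be,
                             const uint8_t *seg, uint16_t seg_len)
{
    uint32_t sum = 0;
    uint32_t src = ntohl(src_be), dst = ntohl(dst_be);

    /* Pseudo-header: source, destination, protocol 17, UDP length. */
    sum += (src >> 16) + (src & 0xFFFF);
    sum += (dst >> 16) + (dst & 0xFFFF);
    sum += IPPROTO_UDP;
    sum += seg_len;

    /* UDP header and payload, as big-endian 16-bit words. */
    for (size_t i = 0; i + 1 < seg_len; i += 2)
        sum += ((uint32_t)seg[i] << 8) | seg[i + 1];
    if (seg_len & 1)                       /* odd trailing byte, zero-padded */
        sum += (uint32_t)seg[seg_len - 1] << 8;

    while (sum >> 16)                      /* fold carries */
        sum = (sum & 0xFFFF) + (sum >> 16);

    uint16_t csum = (uint16_t)~sum;
    return htons(csum ? csum : 0xFFFF);    /* 0 means "no checksum" in UDP */
}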

Cannot sniff UDP packets in C without Wireshark running

I have a setup that looks like this:
Target ---- Switch ---- Switch ---- Windows computer
|
Linux computer
So I have a target connected to a switch; it sends out UDP packets for debugging purposes. Normally these packets go to a Windows computer for analysis, and that works. I have now added a Linux computer as well. To get the same data to both Linux and Windows I have set up a managed switch to mirror the traffic, which works fine when I look in Wireshark. I have then written a simple C application for analysing the data on the Linux computer, but this software only works if Wireshark is running at the same time. Otherwise it does not receive any data from the target. Why is this?
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/if_packet.h>  /* AF_PACKET */
#include <linux/if_ether.h>   /* ETH_P_ALL */
#include <arpa/inet.h>        /* htons */

/* BUFFER_SIZE and processPacket() are defined elsewhere in the original program. */

int main()
{
    int saddr_size, data_size;
    struct sockaddr saddr;
    unsigned char *buffer = (unsigned char *) malloc(BUFFER_SIZE);

    printf("Starting...\n");

    /* Capture every protocol arriving on the machine's interfaces. */
    int sock_raw = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (sock_raw < 0)
    {
        printf("Socket Error");
        return 1;
    }

    while (1)
    {
        saddr_size = sizeof saddr;
        data_size = recvfrom(sock_raw, buffer, BUFFER_SIZE, 0,
                             &saddr, (socklen_t *) &saddr_size);
        if (data_size < 0)
        {
            printf("Recvfrom error, failed to get packets\n");
            return 1;
        }
        processPacket(buffer);
    }

    close(sock_raw);
    printf("Finished");
    return 0;
}
The data coming from the target is sent in a format similar to RTP and is addressed to the Windows computer.
So to sum up: why do I not receive any data from the target in my C application without Wireshark running?
Same as here, you need to put the interface (not the socket, as I originally posted) into promiscuous mode. Wireshark does that, which is why your code works when Wireshark is running.
Just a guess: promiscuous mode is not turned on and the Ethernet controller is discarding frames not addressed to it.
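For completeness, a minimal sketch of enabling promiscuous mode from the program itself, via a PACKET_ADD_MEMBERSHIP request on the packet socket; the helper and interface names are only placeholders:

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <linux/if_packet.h>  /* struct packet_mreq, PACKET_MR_PROMISC */
#include <net/if.h>           /* if_nametoindex */

/* Ask the kernel to put `ifname` into promiscuous mode for the lifetime
   of this packet socket. */
static int enable_promisc(int sock_raw, const char *ifname)
{
    struct packet_mreq mreq;
    memset(&mreq, 0, sizeof(mreq));
    mreq.mr_ifindex = if_nametoindex(ifname);
    mreq.mr_type = PACKET_MR_PROMISC;

    if (mreq.mr_ifindex == 0) {
        perror("if_nametoindex");
        return -1;
    }
    if (setsockopt(sock_raw, SOL_PACKET, PACKET_ADD_MEMBERSHIP,
                   &mreq, sizeof(mreq)) < 0) {
        perror("setsockopt(PACKET_ADD_MEMBERSHIP)");
        return -1;
    }
    return 0;
}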

SO_RCVTIMEO option on LwIP

I'm using LwIP with FreeRTOS. My project is based on the example at this URL: FreeRTOS with LwIP project. I'm also using an LPC1769 with LPCXpresso version 6 and CMSIS version 2.
I'm using LwIP to stream MP3 files over a UDP socket. The transfer runs at a nice speed, but the thing is that sometimes lwip_recvfrom blocks after thousands of operations.
I can never see the timeout condition, so I think I'm doing something wrong.
These are the steps I follow:
int socket = lwip_socket(AF_INET, SOCK_DGRAM, 0);

if (lwip_setsockopt(socket,
                    SOL_SOCKET,
                    SO_RCVTIMEO,
                    (int)timeoutTimeInMiliSeconds,
                    sizeof(int)) == -1)
{
    return -1;
}

....

if (lwip_bind(protocolConfig.socket,
              (struct sockaddr *)&sLocalAddr,
              sizeof(sLocalAddr)) == -1)
{
    return -1;
}

bytesWritten = lwip_sendto(socket,
                           transmitBuffer,
                           transmitBufferIndex,
                           0,
                           (struct sockaddr *)&sDestAddr,
                           sizeof(sDestAddr));

.....

bytesReceived = lwip_recvfrom(socket,
                              receptionBuffer,
                              receptionBufferSize,
                              0,
                              NULL,
                              NULL);

if (bytesReceived < 0)
{
    //Error stuff, this condition is never reached.
}
Does anybody know what's wrong here?
Problem solved.
lwip_setsockopt has this prototype:
int lwip_setsockopt(int socket, int level, int option_name, const void *option_value, socklen_t option_len);
And I was passing the value of option_value by copy instead of a pointer to it.
The timeout is working fine now.
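A sketch of what the corrected call might look like, assuming an LwIP build where SO_RCVTIMEO takes the timeout as an int in milliseconds (as the original sizeof(int) suggests; newer LwIP versions take a struct timeval instead). The wrapper name and include path are only illustrative:

#include "lwip/sockets.h"

/* Pass a pointer to the timeout value, not the value itself. */
static int set_recv_timeout_ms(int sock, int timeoutTimeInMiliSeconds)
{
    return lwip_setsockopt(sock,
                           SOL_SOCKET,
                           SO_RCVTIMEO,
                           &timeoutTimeInMiliSeconds,   /* pointer, not an (int) cast */
                           sizeof(timeoutTimeInMiliSeconds));
}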

UDP Sockets in C

I'm working on a homework problem for class. I want to write a UDP server that listens for a file request, opens the file, and sends it back to the requesting client with UDP.
Here's the server code.
// Create UDP socket
if ((sockfd = socket(AF_INET, SOCK_DGRAM, 0)) == -1) {
    perror("Can't create socket");
    exit(-1);
}

// Configure socket
memset(&server, 0, sizeof server);
server.sin_family = AF_INET;                 // Use IPv4
server.sin_addr.s_addr = htonl(INADDR_ANY);  // My IP
server.sin_port = htons(atoi(argv[1]));      // Server port

// Bind socket
if ((bind(sockfd, (struct sockaddr *) &server, sizeof(server))) == -1) {
    close(sockfd);
    perror("Can't bind");
}

printf("listener: waiting to recvfrom...\n");

if (listen(sockfd, 5) == -1) {
    perror("Can't listen for connections");
    exit(-1);
}

while (1) {
    client_len = sizeof(struct sockaddr_in);
    newsockfd = accept(sockfd, (struct sockaddr *) &client, &client_len);
    if (newsockfd < 0) {
        perror("ERROR on accept");
    }

    // Somehow parse the request
    // Do I use recv or recvfrom?
    // How do I make a new UDP socket to send data back to the client?
    sendFile(newsockfd, filename);
    close(newsockfd);
}
close(sockfd);
I'm kind of lost: how do I recv data from the client? And how do I make a new UDP connection back to the client?
How UDP is different from TCP:
- Message-oriented, not stream-oriented. You don't read/write or send/recv; you sendto/recvfrom. The size of a message is limited to 64K. Each call to recvfrom gets one message sent by a call to sendto. If recvfrom is passed a buffer that's smaller than the size of the message, the rest of the message is gone for good.
- No connections. Therefore no listen/accept/connect. You send a message to a particular address/port. When you receive a message (on the address/port to which your socket is bound), you get the source of the incoming message as an output parameter of recvfrom.
- No guarantees. Messages can be dropped or received out of order. If I remember correctly, though, they cannot be truncated in transit.
One last word of caution - you may find yourself re-inventing TCP over UDP. In that case, stop and go back to TCP.
I have written a UDP server and client in C, where the client sends a registration number and the server sends back a name as the response.
SERVER
0. Variable initialization
1. sock()
2. bind()
3. recvfrom()
4. sendto()
CLIENT
0. gethostbyname()
1. sock()
2. bzero()
4. sendto()
5. recvfrom()
Hope it helps. You can find the example code here: udp server/client
accept is only used for connection-oriented (STREAM) sockets. UDP is not stream-oriented, so there are no connections and you can't use accept(2) -- it will return EOPNOTSUPP.
Instead, you just read packets directly from the bound service socket (generally using recvfrom(2) so you can tell where they came from, though you can use recv or just read if you don't care), after which you can send packets back using the same socket (generally using sendto(2)).
Keep in mind that UDP is connectionless. It only sends packets and is not suitable for sending files, unless the entire content fits in one UDP packet.
If you want to send/receive UDP packets anyway, you simply call sendto/recvfrom with the appropriate addresses.
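Putting the answers together, a minimal sketch of a UDP request/reply server with no listen()/accept(); the port number and reply text are only placeholders:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int sockfd = socket(AF_INET, SOCK_DGRAM, 0);
    if (sockfd == -1) { perror("socket"); exit(1); }

    struct sockaddr_in server = {0};
    server.sin_family = AF_INET;
    server.sin_addr.s_addr = htonl(INADDR_ANY);
    server.sin_port = htons(9000);              /* example port */

    if (bind(sockfd, (struct sockaddr *)&server, sizeof(server)) == -1) {
        perror("bind");
        exit(1);
    }

    for (;;) {
        char request[1024];
        struct sockaddr_in client;
        socklen_t client_len = sizeof(client);

        /* One recvfrom() returns one whole datagram and tells us who sent it. */
        ssize_t n = recvfrom(sockfd, request, sizeof(request), 0,
                             (struct sockaddr *)&client, &client_len);
        if (n == -1) { perror("recvfrom"); continue; }

        /* Reply on the same socket, addressed back to the sender. */
        const char reply[] = "got your request";
        if (sendto(sockfd, reply, sizeof(reply), 0,
                   (struct sockaddr *)&client, client_len) == -1)
            perror("sendto");
    }
}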
