I am working on a LAN-based solution with a "server" that has to control a number of "players".
My protocol of choice is UDP because it's easy, I do not need connections, my traffic consists only of short commands from time to time, and I want to use a mix of broadcast messages for syncing and single-target messages for player-individual commands.
Multicast would be an alternative, but it's more complicated, not exactly suited to the task, and often not well supported by hardware.
Unfortunately I am running into a strange problem:
The first datagram sent to a specific IP using "sendto" is lost.
Any datagram sent shortly afterwards to the same IP is received.
But if I wait some time (a few minutes), the first "sendto" is lost again.
Broadcast datagrams always work.
Local sends (to the same computer) always work.
I presume the operating system or the router/switch has a translation table from IP to MAC addresses which is forgotten when not used for some minutes, and that unfortunately causes datagrams to be lost.
I could observe that behaviour with different router/switch hardware, so my suspicion is the Windows networking layer.
I know that UDP is by definition "unreliable", but I cannot believe that this goes so far that packets can get lost even when the physical connection is working and everything is well defined. Then it would be literally worthless.
Technically I am opening a UDP socket,
binding it to a port and INADDR_ANY.
Then I am using "sendto" and "recvfrom".
I never call connect - I don't want to, because I have several players. As far as I know, UDP should work without connect.
My current workaround is to regularly send dummy datagrams to all specific player IPs - that solves the problem, but it is somewhat "unsatisfying".
Question: Does anybody know this problem? Where does it come from? How can I solve it?
Edit:
I boiled it down to the following test program:
#include <winsock2.h>
#include <stdio.h>
#include <conio.h>
#include <tchar.h>
#pragma comment(lib, "ws2_32.lib")

int _tmain(int argc, _TCHAR* argv[])
{
    WSADATA wsaData;
    WSAStartup(MAKEWORD(2, 2), &wsaData);
    SOCKET Sock = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    SOCKADDR_IN Local = {0};
    Local.sin_family = AF_INET;
    Local.sin_addr.S_un.S_addr = htonl(INADDR_ANY);
    Local.sin_port = htons(1234);
    bind(Sock, (SOCKADDR*)&Local, sizeof(Local));
    printf("Press any key to send...\n");
    int Ret, i = 0;
    char Buf[4096];
    SOCKADDR_IN Remote = {0};
    Remote.sin_family = AF_INET;
    Remote.sin_addr.S_un.S_addr = inet_addr("192.168.1.12"); // Replace this with a valid LAN IP which is not the host's own
    Remote.sin_port = htons(1235);
    while (true) {
        _getch();
        sprintf(Buf, "ping %d", ++i);
        printf("Multiple sending \"%s\"\n", Buf);
        // Ret = connect(Sock, (SOCKADDR*)&Remote, sizeof(Remote));
        // if (Ret == SOCKET_ERROR) printf("Connect Error!\n");
        // Send the same datagram three times in a row.
        Ret = sendto(Sock, Buf, (int)strlen(Buf), 0, (SOCKADDR*)&Remote, sizeof(Remote));
        if (Ret != (int)strlen(Buf)) printf("Send Error!\n");
        Ret = sendto(Sock, Buf, (int)strlen(Buf), 0, (SOCKADDR*)&Remote, sizeof(Remote));
        if (Ret != (int)strlen(Buf)) printf("Send Error!\n");
        Ret = sendto(Sock, Buf, (int)strlen(Buf), 0, (SOCKADDR*)&Remote, sizeof(Remote));
        if (Ret != (int)strlen(Buf)) printf("Send Error!\n");
    }
    return 0;
}
The program opens a UDP socket and, on every keystroke, sends 3 datagrams in a row to a specific IP.
Run it with Wireshark observing your UDP traffic, press a key, wait a while, and press a key again.
You do not need a receiver on the remote IP; it makes no difference, except that you won't get the black-marked "not reachable" packets.
This is what you get:
As you can see, the first send initiated an ARP search for the IP. While that search was pending, the first 2 of the 3 successive sends were lost.
The second keystroke (after the ARP search was complete) properly sent 3 messages.
You may now repeat sending messages, and it will work until you wait again (about a minute, until the address translation expires); then you will see dropouts again.
That means: while an ARP request is pending, there is no send buffer for UDP messages! All messages get lost except the last one.
Also, "sendto" does not block until it has successfully delivered, and there is no error return!
Well, that surprises me and makes me a little sad, because it means that I have to live with my current workaround or implement an ACK system that sends only one message at a time and then waits for the reply - which would no longer be easy and would imply many difficulties.
I'm posting this long after it's been answered by others, but it's directly related.
Winsock drops UDP packets if there's no ARP entry for the destination address (or the gateway for the destination).
Thus it's quite likely some of the first UDP packets get dropped, as at that time there's no ARP entry - and unlike most other operating systems, Winsock only queues 1 packet while the ARP request completes.
This is documented here:
ARP queues only one outbound IP datagram for a specified destination
address while that IP address is being resolved to a MAC address. If a
UDP-based application sends multiple IP datagrams to a single
destination address without any pauses between them, some of the
datagrams may be dropped if there is no ARP cache entry already
present. An application can compensate for this by calling the
Iphlpapi.dll routine SendArp() to establish an ARP cache entry, before
sending the stream of packets.
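Based on that note, here is a minimal sketch (not the original poster's code) of priming the ARP cache with the Iphlpapi routine SendARP() before sending a burst of datagrams; the destination string is a placeholder:

#include <winsock2.h>
#include <iphlpapi.h>
#pragma comment(lib, "iphlpapi.lib")
#pragma comment(lib, "ws2_32.lib")

// Resolve the destination's MAC address once, so an ARP cache entry
// exists before the real datagrams are sent.
BOOL PrimeArpCache(const char* destIp)
{
    ULONG mac[2];                    // room for the 6-byte MAC address
    ULONG macLen = sizeof(mac);
    IPAddr dest = inet_addr(destIp); // IPv4 address in network byte order
    // SendARP blocks until the address is resolved or the request times out.
    return SendARP(dest, 0, mac, &macLen) == NO_ERROR;
}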
The same behavior can be observed on Mac OS X and FreeBSD:
When an interface requests a mapping for an address not in the cache, ARP queues the message which requires the mapping and broadcasts a message on the associated network requesting the address mapping. If a response is provided, the new mapping is cached and any pending message is transmitted. ARP will queue at most one packet while waiting for a response to a mapping request; only the most recently "transmitted" packet is kept.
UDP packets are supposed to be buffered on receipt, but a UDP packet (or the ethernet frame holding it) can be dropped at several points on a given machine:
network card does not have enough space to accept it,
OS network stack does not have enough buffer memory to copy it to,
firewall/packet filtering drop-rule match,
no application is listening on destination IP and port,
receive buffer of the listening application socket is full.
The first two points are about too much traffic, which is not likely the case here. Then I trust that point 4 is not applicable and your software is listening and waiting for the data. Point 5 is about your application not processing network data fast enough - that also does not seem to be the case.
Translation between MAC and IP addresses is done via the Address Resolution Protocol (ARP). This does not cause packet drops if your network is properly configured.
I would disable the Windows firewall and any anti-virus/deep-packet-inspection software and check what's on the wire with Wireshark. This will most likely point you in the right direction: if you can sniff those first packets on the "sent-to" machines, then check the local configuration (firewall, etc.); if you can't, then check your network - something in the path is interfering with your traffic.
Hope this helps.
Erm... it's your computer doing an ARP request. When you first start sending, your computer does not know the receiver's MAC address, hence it is unable to send any packets. It uses the IP address of the receiver to do an ARP request to get the MAC address. During this process, any UDP packets you try to send cannot be sent out, as the destination MAC address is still not known.
Once your computer receives the MAC address, it can start sending. However, the MAC address will only remain in your computer's ARP cache for 2 minutes (if no further activity is detected between you and the receiver) or 10 minutes (full clear of the ARP cache, even if the connection is active). That is why you experience this problem every few minutes.
Does anybody know this problem?
The real problem is that you assumed that UDP packet sending is reliable. It isn't.
The fact that you lose the first packet with your current network configuration is really a secondary issue. You might be able to fix that, but at any point you are still vulnerable to packet losses.
If packet loss is a problem for you, then you really should use TCP. You can build a reliable protocol on UDP, but unless you have a very good reason to do so, it's not recommended.
Related
I believe TCP is reliable. If write(socket, buf, buf_len) and close(socket) return without error, the receiver will receive exactly the same data of buf with length buf_len.
But this article says TCP is not reliable.
A:
sock = socket(AF_INET, SOCK_STREAM, 0);
connect(sock, &remote, sizeof(remote));
write(sock, buffer, 1000000); // returns 1000000
close(sock);
B:
int sock = socket(AF_INET, SOCK_STREAM, 0);
bind(sock, &local, sizeof(local));
listen(sock, 128);
int client = accept(sock, &local, &locallen);
write(client, "220 Welcome\r\n", 13);

int bytesRead = 0, res;
for(;;) {
    res = read(client, buffer, 4096);
    if(res < 0) {
        perror("read");
        exit(1);
    }
    if(!res)
        break;
    bytesRead += res;
}
printf("%d\n", bytesRead);
Quiz question - what will program B print on completion?
A) 1000000
B) something less than 1000000
C) it will exit reporting an error
D) could be any of the above
The right answer, sadly, is ‘D’. But how could this happen? Program A
reported that all data had been sent correctly!
If this article is true, I have to change my mind. But I am not sure if this article is true or not.
Is this article true?
TCP is reliable (at least when the lower-level protocols are), but programmers may use it in an unreliable way.
The problem here is that a socket should not be closed before all sent data has been correctly received by the other side: the close may destroy the connection while the last of the data is still in transit.
The correct way to ensure proper reception by the peer is a graceful shutdown.
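For illustration, a minimal sketch of such a graceful shutdown on the sending side (BSD sockets; it assumes the peer closes its end once it has consumed all the data):

// Signal end-of-data, then wait until the peer has closed its side
// before tearing the socket down.
shutdown(sock, SHUT_WR);             // send FIN: "no more data from me"

char tmp[128];
while (read(sock, tmp, sizeof(tmp)) > 0)
    ;                                // drain until read() returns 0 (peer closed)

close(sock);                         // both directions are now finished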
TCP/IP is reliable, for a very particular (and limited) meaning of the word "reliable".
Specifically, when write() returns 1000000, it is making you the following promises:
Your 1000000 bytes of data have been copied into the TCP socket's outgoing-data-buffer, and from now on, the TCP/IP stack is responsible for delivering those bytes to the remote program.
If those bytes can be delivered (with a reasonable amount of effort), then they will be delivered, even if some of the transmitted TCP packets get dropped during delivery.
If the bytes do get delivered, they will be delivered accurately and in-order (relative to each other and also relative to the data passed to previous and subsequent calls to write() on that same socket).
But there are also some guarantees that write() doesn't (and, in general, can't) provide. In particular:
write() cannot guarantee that the receiving application won't exit or crash before it calls recv() to get all 1000000 of those bytes.
write() cannot guarantee that the receiving application will do the right thing with the bytes it does receive (i.e. it might just ignore them or mishandle them, rather than acting on them)
write() cannot guarantee that the receiving application will ever call recv() to receive those bytes at all (i.e. it might just keep the socket open but never call recv() on it)
write() cannot guarantee that the network infrastructure between your computer and the remote computer will work to deliver the bytes -- i.e. if some packets are dropped, no problem, the TCP layer will resend them, but e.g. if someone has pulled the power cable out of your cable modem, then there's simply no way for the bytes to get to their destination, and after a few minutes of trying and failing to get an ACK, the TCP layer will give up and error out the connection.
write() can't guarantee that the receiving application handles socket-shutdown issues 100% correctly (if it doesn't, then it's possible for any data you send just before the remote program closes the socket to be silently dropped, since there's no receiver left to receive it)
write() doesn't guarantee that the bytes will be received by the receiving application in the same segment sizes that you sent them in. i.e. just because you sent 1000000 bytes in a single call to write() doesn't mean that the receiving app can receive 1000000 bytes in a single call to recv(). It might instead receive the data in any-sized chunks, e.g. 1 byte per recv() call, or 1000 bytes per recv() call, or any other size the TCP layer feels like providing.
Note that in all of the cases listed above, the problems occur after write() has returned, which is why write() can't just return an error code when these happen. (A later call to write() might well return an error code, of course, but that won't particularly help you know where the line between delivered-bytes and non-delivered bytes was located)
TLDR: TCP/IP is more reliable than UDP, but not a 100% iron-clad guarantee that nothing will ever go wrong. If you really want to make sure your bytes got processed on the receiving end, you'll want to program your receiving application to send back some sort of higher-level acknowledgement that it received (and successfully handled!) the bytes you sent it.
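A tiny sketch of such a higher-level acknowledgement (the two-byte "OK" token is an illustrative assumption, not a standard; a real implementation would loop, since read() may return fewer bytes than requested):

// Receiver side: after successfully handling all the data, confirm it.
write(client, "OK", 2);

// Sender side: only treat the transfer as complete once the token arrives.
char ack[2];
if (read(sock, ack, 2) == 2 && ack[0] == 'O' && ack[1] == 'K') {
    // peer confirmed it received and processed our bytes
}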
TCP/IP offers reliability, which means it allows for the retransmission of lost packets, thereby making sure that all data transmitted is (eventually) received or you get a timeout error. Either your data gets delivered or you are presented with a timeout error.
What is the correct way of reading from a TCP socket in C/C++?
And by the way, TCP/IP is reliable only to some extent: it guarantees delivery assuming the network is working... if you unplug the cable, TCP/IP will not deliver your packet.
P.S. You should at least advance the pointer inside the buffer:
void ReadXBytes(int socket, unsigned int x, void* buffer)
{
    unsigned int bytesRead = 0;
    int result;
    while (bytesRead < x)
    {
        // Advance the pointer by the number of bytes already read.
        result = read(socket, (char*)buffer + bytesRead, x - bytesRead);
        if (result < 1)
        {
            // Throw your error.
        }
        bytesRead += result;
    }
}
Program A reported that all data had been sent correctly!
No. The return of write() in program A only means that the returned number of bytes has been delivered to the operating system kernel. It does not claim that the data has been sent by the local system, received by the remote system, or even processed by the remote application.
I believe TCP is reliable.
TCP provides reliability at the transport layer only. This means only that it will make sure that data loss gets detected (and packets retransmitted), data duplication gets detected (and duplicates discarded), and packet reordering gets detected and fixed.
TCP does not claim any reliability for higher layers. If the application needs this, it has to provide it itself.
Problem: on raw sockets, recvfrom can capture more bytes than sendto can send, preventing me from retransmitting packets larger than the MTU.
Background: I'm programming an application that will capture and retransmit packets. Basically host A sends data to X, which logs it and forwards it to B; all are Linux machines. I'm using a raw socket so I can capture all data, and it's created with socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL)).
Then, there's code waiting for and reading incoming packets:
const int buffer_size = 2048;
uint8_t* buffer = new uint8_t[buffer_size];
sockaddr_ll addr = {0};
socklen_t addr_len = sizeof(addr);
int received_bytes = recvfrom(_raw_socket, buffer, buffer_size, 0, (struct sockaddr*)&addr, &addr_len);
Packet processing follows and the loop is finished with sending packet out again:
struct sockaddr_ll addr;
memset(&addr, 0, sizeof(struct sockaddr_ll));
addr.sll_family = AF_PACKET; // sll_family is AF_PACKET, in host byte order
addr.sll_protocol = eth_hdr->type;
addr.sll_ifindex = interface().id();
addr.sll_halen = HardwareAddress::byte_size;
memcpy(&(addr.sll_addr), eth_hdr->dest_mac, HardwareAddress::byte_size);
// Try to send packet
if(sendto(raw_socket(), data, length, 0, (struct sockaddr*)&addr, sizeof(addr)) < 0)
The problem is that I don't expect to receive packets larger than the Ethernet MTU (1500 bytes), and I shouldn't, since I'm using raw sockets that process each packet individually. But sometimes I do receive packets larger than the MTU. I thought it might be an error in my code, but Wireshark confirms it, so there must be some reassembly going on at a lower level, like the network controller itself.
Well, OK: then I don't think there's a way to disable this for just one application, and I can't change the host configuration, so I might increase the buffer size. But the problem is that when I call sendto with anything larger than the MTU size (actually 1514 bytes, because of the Ethernet header), I get errno 80: Message too long. And that's the problem stated above: I can't send out the same packet I received. What could be a possible solution for this? And what buffer size would I need to always capture whole packets?
EDIT: I have just checked the machines with ethtool -k interf and got tcp-segmentation-offload: on on all of them, so it seems it really is the NIC reassembling fragments. But I wonder why sendto doesn't behave like recvfrom: if packets can be automatically reassembled, why not fragmented?
A side note: The application needs to send those packets. Setting up forwarding with iptables etc. won't work.
Your network card probably has segmentation offload enabled, which means the hardware can re-assemble TCP segments before they reach the OS or your code.
You can check whether that is the case by running ethtool -k.
While transparently capturing TCP traffic and re-transmitting it at such a low level is often more trouble than it is worth (one is often better off doing this at the application layer: terminate the TCP connection and set up a new TCP connection towards your host B), you cannot capture and re-send packets if your network card has messed with them. You need to:
Turn off generic-segmentation-offload
Turn off generic-receive-offload
Turn off tcp-segmentation-offload
Turn off udp-fragmentation-offload if you are also dealing with UDP
Turn off rx-vlan-offload/tx-vlan-offload if your packets are VLAN encapsulated
Possibly turn off rx-checksumming and tx-checksumming. It either works when both are enabled, or it's broken wrt. RAW sockets when enabled, depending on your kernel version and type of network card.
These can be turned on/off with the ethtool -K command; the exact syntax is described in the ethtool manpage.
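For example, to turn the relevant offloads off in one go (a sketch; eth0 stands in for your actual interface name, and older ethtool versions may not recognize every option):

ethtool -K eth0 gso off gro off tso off ufo off rxvlan off txvlan off rx off tx off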
I'm 99% new to sockets and any sort of network programming, so please bear with me.
I am aiming to connect to a port (2111 in this case) on my local machine (192.168.0.1). From there, I'm planning on sending and receiving basic information, but that's for another day.
I've currently tried this:
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <stdlib.h>
#include <errno.h>
#include <unistd.h>
#include <string.h>
int main(int argc, char **argv)
{
    int sd;
    int port;
    int start;
    int end;
    int rval;
    struct hostent *hostaddr;
    struct sockaddr_in servaddr;

    start = 2111;
    end = 2112;

    for(port = start; port <= end; port++)
    {
        sd = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP);
        if(sd == -1)
        {
            perror("socket()");
            return errno;
        }

        memset(&servaddr, 0, sizeof(servaddr));
        servaddr.sin_family = AF_INET;
        servaddr.sin_port = htons(port);

        hostaddr = gethostbyname("192.168.0.1");
        if(hostaddr == NULL)
        {
            herror("gethostbyname()");
            close(sd);
            return 1;
        }
        memcpy(&servaddr.sin_addr, hostaddr->h_addr, hostaddr->h_length);

        rval = connect(sd, (struct sockaddr *)&servaddr, sizeof(servaddr));
        if(rval == -1)
            printf("Port %d is closed\n", port);
        else
            printf("Port %d is open\n", port);

        close(sd); /* close the socket in either case */
    }
    return 0;
}
However, my connect() call hangs for about 90 seconds, then returns -1.
The device is directly connected to my Mac Mini's ethernet port and the manufacturer has confirmed that the port is 2111 or 2112.
What am I doing wrong? Also, can it be in the ELI5 (explain like I'm 5) format? I'm much better off with examples.
When you call connect() to connect to a host, your computer sends a SYN packet to begin the three-way handshake of the TCP connection. From here, there are 3 possible scenarios:
If the peer is listening on that port, it responds with a SYN+ACK packet, your computer responds with a final ACK, and the connection is established—connect() returns successfully.
If the peer is not listening on that port, it responds with a TCP RST packet, which causes your connect() call to fail almost immediately with the error ECONNREFUSED (connection refused). Under normal circumstances, this takes 1 network round-trip time (RTT) to happen, which is typically tens or hundreds of milliseconds.
If your computer never receives either an appropriate SYN+ACK packet or an RST packet, it assumes that its original SYN packet got dropped by the network somewhere and will try to resend the SYN packet several times until it gets one of those packets back or it hits an OS-dependent timeout, at which point the connect() call fails with ETIMEDOUT. This is typically 1-2 minutes, depending on the OS and its TCP settings.
You're clearly hitting case #3. This can be caused by a few different issues:
Your original SYN packets were getting lost in the network, possibly due to a faulty link, overloaded router, or firewall
The peer's SYN+ACK or RST response packets were getting lost in the network, possibly due to a faulty link, overloaded router, or firewall
The destination address may be unroutable/unreachable
The peer may be failing to properly respond at all with a SYN+ACK or RST packet
If you're directly connecting to the device over ethernet, then that rules out #1 and #2. #4 is possible, but I think #3 is the most likely explanation.
A brief aside on packet routing
Your computer has multiple network interfaces—ethernet (sometimes multiple ethernet interfaces), Wi-Fi, the loopback device, VPN tunnels, etc. Whenever you create a socket, it has to be bound to one or more particular network interfaces in order for the OS to know which NIC to actually send the packet through. For listen sockets for servers, you typically bind to all network interfaces (to listen for connections on all of them), but you can also bind to a particular network interface to only listen on that one.
For client sockets, when you connect them to other peers, you don't normally bind them to a particular interface. By default, your computer uses its internal routing tables along with the destination IP address to determine which network interface to use. For example, if you have a gateway machine with two NICs, one of which is connected to the public internet with IP 54.x.y.z and the other of which is connected to an internal, private network with IP 192.168.1.1, then that machine will in all likelihood have routing tables that say "for packets destined to 192.168.0.0/16, use NIC 2; for all other packets, use NIC 1". If you want to bypass the routing tables, you can bind the socket to the network interface you want by calling bind() on the socket before the call to connect().
Putting it all together
So, what does that all mean for you?
First, make sure that 192.168.0.1 is in fact the correct destination address you should be connecting to. How is that address determined? Is your computer acting as a DHCP server to assign that address to the other host? Is that host using a static IP configuration?
Next, make sure that your routing tables are correct. If the other machine is assigning itself a static IP, chances are that your Mac isn't aware of how to route to that destination and is probably trying to route through the wrong interface. You can manually adjust the routes on Mac OS X with the route(8) utility, but these get reset every reboot; this blog post shows an example of using a startup item to automate adding the new route on startup. You'll want to use the IP address associated with the ethernet interface connected to the target host.
Alternatively, instead of using routing tables, you could call bind() on your socket before connect() to bind to the local address of the interface you want to use, but this won't work for other programs unless they also provide that functionality. For example, the curl(1) utility lets you pass the --interface <name> command line flag to direct it to bind to a particular interface.
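A small sketch of that bind-before-connect idea, fitting the code above (the local address 192.168.0.2 is a placeholder for the IP of your ethernet interface):

struct sockaddr_in local;
memset(&local, 0, sizeof(local));
local.sin_family = AF_INET;
local.sin_addr.s_addr = inet_addr("192.168.0.2"); /* placeholder: your NIC's address */
local.sin_port = htons(0);                        /* 0 lets the OS pick a source port */

/* Bind first, so the connect() below goes out through that interface. */
if (bind(sd, (struct sockaddr *)&local, sizeof(local)) == -1)
    perror("bind()");
/* ...then call connect(sd, ...) exactly as before. */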
Basically, connect() is failing (check errno for why).
You might consider implementing some kind of timeout for the connect. To do this, set the socket to non-blocking mode, call connect(), and then use select() to wait for the result with a timeout, as in the sketch below.
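A minimal sketch of such a ConnectWithTimeout() for Linux (assumptions: POSIX sockets, the caller supplies the timeout in seconds, and the socket is restored to blocking mode afterwards):

#include <errno.h>
#include <fcntl.h>
#include <sys/select.h>
#include <sys/socket.h>

/* Returns 0 on success, -1 on error or timeout. */
int ConnectWithTimeout(int sd, const struct sockaddr *addr,
                       socklen_t addrlen, int timeout_sec)
{
    int flags = fcntl(sd, F_GETFL, 0);
    fcntl(sd, F_SETFL, flags | O_NONBLOCK);       /* switch to non-blocking */

    int rc = connect(sd, addr, addrlen);
    if (rc == -1 && errno == EINPROGRESS) {       /* handshake started */
        fd_set wset;
        struct timeval tv = { timeout_sec, 0 };
        FD_ZERO(&wset);
        FD_SET(sd, &wset);
        rc = select(sd + 1, NULL, &wset, NULL, &tv);
        if (rc <= 0) {
            rc = -1;                              /* timeout or select error */
        } else {
            int err = 0;
            socklen_t len = sizeof(err);
            getsockopt(sd, SOL_SOCKET, SO_ERROR, &err, &len);
            rc = err ? -1 : 0;                    /* fetch the real result */
        }
    }
    fcntl(sd, F_SETFL, flags);                    /* restore blocking mode */
    return rc;
}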
I'm writing a UDP server/client application in which the server is sending data and the client is receiving. When a packet is lost, the client should send a NACK to the server. I set the socket to O_NONBLOCK so that I can notice if the client does not receive a packet:
if (( bytes = recvfrom (....)) != -1 ) {
do something
}else{
send nack
}
My problem is that if the server does not start to send packets, the client behaves as if a packet were lost and starts to send NACKs to the server (recvfrom fails when no data is available). I want some advice on how I can distinguish between these cases: the server not sending any packets at all, versus the server sending packets that really get lost.
You are using UDP. For this protocol it's perfectly OK to throw away packets if there is a need to do so. So it's not reliable in terms of "what is sent will arrive". What you have to do in your client is to check whether all the packets you need arrived, and if not, ask your server politely to resend those packets you did not receive. This is not that easy to implement.
If you have to use UDP to transfer a largish chunk of data, then design a small application-level protocol that handles possible packet loss and re-ordering (that's part of what TCP does for you). I would go with something like this (a sketch of such a header follows the list):
Datagrams that, together with the IP and UDP headers, stay below the MTU in size (say 1024 bytes of payload), to avoid IP fragmentation.
Fixed-length header for each datagram that includes data length and a sequence number, so you can stitch data back together, and detect missed, duplicate, and re-ordered parts.
Acknowledgements from the receiving side of what has been successfully received and put together.
Timeout and retransmission on the sending side when these acks don't come within appropriate time.
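As promised, a sketch of what such a fixed-length header could look like (field names and sizes are illustrative assumptions, not a standard format):

#include <stdint.h>

#pragma pack(push, 1)
typedef struct {
    uint32_t seq;   /* sequence number: detects loss, duplication, reordering */
    uint16_t len;   /* number of payload bytes following this header */
    uint16_t flags; /* e.g. bit 0 set = this datagram is an ACK, not data */
} dgram_header;
#pragma pack(pop)

/* Each datagram is then: [dgram_header][up to 1024 - sizeof(dgram_header) payload bytes] */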
You have a loop calling either select() or poll() to determine if data has arrived; if so, you then call recvfrom() to read the data.
You can set a timeout for receiving data as follows:
ssize_t
recv_timeout(int fd, void *buf, size_t len, int flags)
{
    ssize_t ret;
    struct timeval tv;
    fd_set rset;

    // init set
    FD_ZERO(&rset);
    // add to set
    FD_SET(fd, &rset);

    // this is set to 60 seconds
    tv.tv_sec = 60; /* e.g. config.idletimeout */
    tv.tv_usec = 0;

    // wait until data arrives or the timeout expires
    ret = select(fd + 1, &rset, NULL, NULL, &tv);
    if (ret == 0) {
        log_message(LOG_INFO, "Idle Timeout (after select)");
        return 0;
    } else if (ret < 0) {
        log_message(LOG_ERR,
                    "recv_timeout: select() error \"%s\". Closing connection (fd:%d)",
                    strerror(errno), fd);
        return -1;
    }

    ret = recvfrom(fd, buf, len, flags, NULL, NULL);
    return ret;
}
Normally, once select() has reported readiness, read() should return up to the maximum number of bytes that you've specified, possibly including zero bytes (this is actually a valid thing to happen!), but it should never block after previously having reported readiness. However, the Linux select() man page warns:
Under Linux, select() may report a socket file descriptor as "ready
for reading", while nevertheless a subsequent read blocks. This could
for example happen when data has arrived but upon examination has
wrong checksum and is discarded. There may be other circumstances in
which a file descriptor is spuriously reported as ready. Thus it may
be safer to use O_NONBLOCK on sockets that should not block.
Look up the sliding window protocol.
The idea is that you divide your payload into packets that fit in a physical UDP datagram, then number them. You can visualize the buffers as a ring of slots, numbered sequentially in some fashion, e.g. clockwise.
Then you start sending from 12 o'clock, moving to 1, 2, 3... In the process, you may (or may not) receive ACK packets from the server that contain the slot number of a packet you sent.
If you receive an ACK, you can remove that packet from the ring and place there the next unsent packet that is not already in the ring.
If you receive a NAK for a packet you sent, it means that packet was received by the server with data corruption, and you then resend it from the ring slot reported in the NAK.
This protocol class allows transmission over channels with data or packet loss (like RS232, UDP, etc.). If your underlying transmission protocol does not provide checksums, then you need to add a checksum to each ring packet you send, so the server can check its integrity and report back to you.
ACK and NAK packets from the server can also be lost. To handle this, you need to associate a timer with each ring slot; if you don't receive either an ACK or a NAK for a slot before the timer reaches a timeout limit you set, then you retransmit the packet and reset the timer.
Finally, to detect fatal connection loss (i.e. the server went down), you can establish a maximum timeout value for all your packets in the ring. To evaluate this, you just count how many consecutive timeouts you have for single slots. If this count exceeds the maximum you have set, you can consider the connection lost.
Obviously, this protocol class requires dataset assembly on both sides based on packet numbers, since packets may not be sent or received in sequence. The 'ring' helps with this: packets are removed only after successful transmission and, on the receiving side, only when the previous packet number has already been removed and appended to the growing dataset. However, this is only one strategy; there are others.
Hope this helps.
I have the following issue here: I want to write a server streaming data on a UDP socket on a specific port, and clients should be able to connect to it and receive the data being sent without too much hassle: they just connect, and from the moment they start they should get data from the server using recvfrom.
I have some problems with setting up the network-related parts. So here is a rough piece of code that I am trying to make work:
int udpSock = socket(AF_INET, SOCK_DGRAM, 0);
if(udpSock == -1)
{
    perror("Could not create audio output socket");
    exit(1);
}

struct sockaddr_in *sin = (struct sockaddr_in*)&gOutgoingAddr;
sin->sin_port = htons(40200);
if(bind(udpSock, (const sockaddr*)sin, sizeof(struct sockaddr_in)) == -1)
{
    perror("Cannot bind audio socket");
    exit(1);
}

int buffer_size = 0;
char* data = get_next_buffer(&buffer_size);
while(buffer_size > 0)
{
    if(sendto(udpSock, (const void*)(data), buffer_size, 0, NULL, 0) == -1)
    {
        perror("sendto failure");
    }
    data = get_next_buffer(&buffer_size);
}
Do not worry about the gOutgoingAddr variable; it is obtained correctly using getifaddrs and it is valid. I am troubled by the parametrization of the sendto call, because right now the output of the application is:
sendto failure: Destination address required
That's true, because I don't have a destination address: all the examples I have found so far show the server getting the client's address when a client connects. But since no client has connected yet, I'd still want to stream out.
I appreciate all help; I have no idea what I should put for the parameters of sendto:
The gOutgoingAddr, which is the address where I created the socket? I have tried this, but if I use the tcpdump Linux command on the specified port, I get nothing.
Should I create a multicast socket? This somehow makes no sense...
Something else?
You cannot stream out to "nowhere". Streaming data via UDP is not multicast: if you have 100 clients connected, you must send exactly the same data 100 times, once to each of the clients that shall receive it. Multicast was not really part of the initial IPv4 design; it was added later on and is not widely supported. This is contrary to IPv6, where multicast has been part of the initial design.

The only other thing you could do is broadcast the traffic within your local network. This will only work if all clients are in your local network segment. To broadcast, your server would simply send the data to 255.255.255.255 and a fixed UDP port. All clients then have to listen on that specific port and will receive the data. Please note that on most systems you need special permissions for broadcasting (e.g. it is common that only programs running with root privileges are allowed to broadcast), and that broadcasts pollute your network, since broadcast packets are delivered to all clients in the network, whether they care for them or not.

Without broadcasts you have only unicast, and unicast means one sender, one receiver. For one sender and multiple receivers, you must send out the same data multiple times to multiple addresses.
What is audioUdpSock, by the way?
Shouldn't you be using udpSock instead?
Do a recvfrom in your server, and have the client send a message (with whatever content you want; this is just a way to establish the connection, a greeting). Then the server will have the client's address from the recvfrom, and can send packets to it.
As UDP sockets are connection-less (there is no need for accept and connect when using UDP sockets), you need another way to inform the server of the existence of the client (and the client needs an out-of-band way to know the server address; generally the user gives it, or it is hard-coded).
If you can have multiple clients, then you'll have to use select, poll, ... on the socket to know when it is safe to call recvfrom without blocking (or you could configure your socket to be non-blocking).
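A minimal sketch of that greeting pattern on the server side, reusing the names from your code (udpSock, data, buffer_size):

struct sockaddr_in client;
socklen_t client_len = sizeof(client);
char greeting[64];

// Block until a client announces itself; the message content is irrelevant.
ssize_t n = recvfrom(udpSock, greeting, sizeof(greeting), 0,
                     (struct sockaddr*)&client, &client_len);
if (n >= 0)
{
    // From now on we have a destination address for this client.
    sendto(udpSock, data, buffer_size, 0,
           (struct sockaddr*)&client, sizeof(client));
}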
Edit: I highly recommend Beej's Guide to Network Programming to everyone, and for your question, you can directly go to the sample usage of Datagram Socket.