Sockets: Setting up a client and server - iOS 11

I want to create a toy example with a client and an echo server.
Is there a way for an iOS app running in the Simulator to connect to a server running on my MacBook? Or, is there something similar to localhost in iOS? For instance, can I run a client and a server in two different background threads? I actually tried that, but the server doesn't receive any data from the client:
//This uses SwiftSocket installed with CocoaPods:
override func viewDidLoad() {
    super.viewDidLoad()
    // Do any additional setup after loading the view, typically from a nib.

    DispatchQueue.global(qos: .background).async {
        let udpServer = UDPServer(address: "localhost", port: 12033)
        var count = 1
        while true {
            print("Server about to recv() \(count)")
            let (data, sender, port) = udpServer.recv(6)
            print("data: \(data)\nsender: \(sender)\nport: \(port)\n\n")
            if let data = data {
                print("Server received: \(data) from \(sender) on port \(port)")
            }
            sleep(1)
            count += 1
        }
    }

    DispatchQueue.global(qos: .background).async {
        let udpClient = UDPClient(address: "localhost", port: 12033)
        switch udpClient.send(string: "Hello\n") {
        case .success:
            print("Client sent message to server.")
        case .failure(let error):
            print("Client failed to send message to server: \(error)")
        }
        udpClient.close()
    }
}
--output:--
Server about to recv() 1
Client sent message to server.
data: nil
sender: no ip
port: 0
Server about to recv() 2
data: nil
sender: no ip
port: 0
Server about to recv() 3
data: nil
sender: no ip
port: 0
Server about to recv() 4
data: nil
sender: no ip
port: 0
I'm a little confused by the recv(). Shouldn't that block?

Okay, the server will receive the data (and the recv() blocks), if I use "127.0.0.1" for the host string instead of "localhost".
And, if I run a server on my MacBook that listens on "127.0.0.1" (with the ports in my iOS code and the server code matching!), then the server running on my MacBook receives the data from the iOS client.
One question I had was whether the recv() for a UDP socket needs to request the exact length of the data. My results show that you can recv() in efficient chunks: if the data is shorter than the specified chunk, recv() reads what is available and then returns. (The converse is worth knowing too: if a datagram is longer than the buffer you supply, the excess bytes are discarded.)
And if you are confused about whether you need to loop over a recv() to gather all the data that was in a send(), here is a great post on the matter:
Examples. Blocks of contiguous characters correspond to send() calls:
TCP:
Send: AA BBBB CCC DDDDDD E
Recv: A ABB B BCC CDDD DDDE
All data sent is received in order, but not necessarily in the same chunks.
UDP:
Send: AA BBBB CCC DDDDDD E
Recv: CCC AA E
Data is not necessarily received in the same order, and not necessarily received at all, but messages are preserved in their entirety.
With TCP, you need to loop over recv() until you find the marker that indicates the end of the data you are trying to read, e.g. a newline, a special sequence of characters, or the zero-length read that recv() reports when the socket has been closed by the sender.
With UDP, a single recv() reads all the data from a single send().
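To make the TCP case concrete, here is a minimal sketch in C (plain BSD sockets; the function name, the newline delimiter, and the buffer size are just example choices, and error handling is trimmed):

#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Read from a TCP socket until a newline arrives, the peer closes,
   or buf fills up. Returns the bytes stored in buf, or -1 on error. */
ssize_t recv_line(int sock, char *buf, size_t cap)
{
    size_t total = 0;
    while (total < cap) {
        ssize_t n = recv(sock, buf + total, cap - total, 0);
        if (n < 0)
            return -1;                        /* recv() failed */
        if (n == 0)
            break;                            /* peer closed the connection */
        total += (size_t)n;
        if (memchr(buf, '\n', total) != NULL)
            break;                            /* end-of-message marker found */
    }
    return (ssize_t)total;
}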

Related

Is there a way to detect that TCP socket has been closed by the remote peer, without reading from it?

First, a little background to explain the motivation: I'm working on a very simple select()-based TCP "mirror proxy", that allows two firewalled clients to talk to each other indirectly. Both clients connect to this server, and as soon as both clients are connected, any TCP bytes sent to the server by client A is forwarded to client B, and vice-versa.
This more or less works, with one slight gotcha: if client A connects to the server and starts sending data before client B has connected, the server doesn't have anywhere to put the data. I don't want to buffer it up in RAM, since that could end up using a lot of RAM; and I don't want to just drop the data either, as client B might need it. So I go for the third option, which is to not select()-for-read-ready on client A's socket until client B has also connected. That way client A just blocks until everything is ready to go.
That more or less works too, but the side effect of not selecting-for-read-ready on client A's socket is that if client A decides to close his TCP connection to the server, the server doesn't get notified about that fact -- at least, not until client B comes along and the server finally selects-for-read-ready on client A's socket, reads any pending data, and then gets the socket-closed notification (i.e. recv() returning 0).
I'd prefer it if the server had some way of knowing (in a timely manner) when client A closed his TCP connection. Is there a way to know this? Polling would be acceptable in this case (e.g. I could have select() wake up once a minute and call IsSocketStillConnected(sock) on all sockets, if such a function existed).
If you want to check whether the socket has been closed, without consuming any data, you can add the MSG_PEEK flag on recv() to see whether data has arrived, or whether you get 0 or an error.
/* handle readable on A */
if (B_is_not_connected) {
    char c;
    ssize_t x = recv(A_sock, &c, 1, MSG_PEEK);
    if (x > 0) {
        /* ...have data, leave it in socket buffer until B connects */
    } else if (x == 0) {
        /* ...handle FIN from A */
    } else {
        /* ...handle errors */
    }
}
Even if A closes after sending some data, your proxy probably wants to forward that data to B before forwarding the FIN, so there is little point in learning that A has sent a FIN any sooner than after having read all the data it sent.
A TCP connection isn't considered closed until both sides have sent a FIN. However, if A has forcibly shut down its endpoint, you will not know that until you attempt to send data on it and receive an EPIPE (assuming you have suppressed SIGPIPE).
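On Linux, one way to get that EPIPE without suppressing SIGPIPE process-wide is the MSG_NOSIGNAL send flag; a minimal sketch, assuming a connected TCP socket (on BSD/macOS the SO_NOSIGPIPE socket option plays a similar role):

#include <errno.h>
#include <sys/socket.h>

/* Returns 1 if the peer is gone (send() failed with EPIPE), 0 otherwise.
   MSG_NOSIGNAL suppresses SIGPIPE for this one call (Linux-specific). */
int send_or_detect_close(int sock, const void *buf, size_t len)
{
    ssize_t n = send(sock, buf, len, MSG_NOSIGNAL);
    return (n < 0 && errno == EPIPE) ? 1 : 0;
}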
After reading about your mirror proxy a bit more: since this is a firewall traversal application, it seems you actually need a small control protocol to verify that these peers are allowed to talk to each other. Once you have a control protocol, many solutions become available, but the one I would advocate is to have one connection describe itself as the server and the other describe itself as the client. Then you can reset the client's connection if there is no server present to take it, and you can let servers wait for a client connection up to some timeout. A server should not initiate any data, and if it does without a connected client, you can reset the server's connection. This eliminates the issue of buffering data for a dead connection.
It appears the answer to my question is "no, not unless you are willing and able to modify your TCP stack to get access to the necessary private socket-state information".
Since I'm not able to do that, my solution was to redesign the proxy server to always read data from all clients, and throw away any data that arrives from a client whose partner hasn't connected yet. This is non-optimal, since it means that the TCP streams going through the proxy no longer have the stream-like property of reliable in-order delivery that TCP-using programs expect, but it will suffice for my purpose.
For me the solution was to poll the socket status.
On Windows 10, the following code seemed to work (but equivalent implementations seem to exist for other systems):
WSAPOLLFD polledSocket;
polledSocket.fd = socketItf;
polledSocket.events = POLLRDNORM | POLLWRNORM;

if (WSAPoll(&polledSocket, 1, 0) > 0)
{
    if (polledSocket.revents & (POLLERR | POLLHUP))  // '&', not '&=': test the bits
    {
        // socket closed
        return FALSE;
    }
}
I don't see the problem the way you see it. Say A connects to the server, sends some data, and closes; it does not need any message back. The server won't read A's data until B connects; once B does, the server reads from socket A and sends the data to B. The first read returns the data A sent, and the next read returns either 0 or -1; in either case the socket is closed, so the server closes B. If A sends a big chunk of data, A's send() will block until the server starts reading and drains the buffer.
I would use a function with a select() which returns 0, 1, 2, 11, 22 or -1, where:
0=No data in either socket (timeout)
1=A has data to read
2=B has data to read
11=A socket has an error (disconnected)
22=B socket has an error (disconnected)
-1: One/both socket is/are not valid
int WhichSocket(int sd1, int sd2, int seconds, int microsecs) {
    fd_set sfds, efds;
    struct timeval timeout = {0, 0};
    int bigger;
    int ret;

    FD_ZERO(&sfds);
    FD_ZERO(&efds);              // was missing: efds must be cleared before FD_SET
    FD_SET(sd1, &sfds);
    FD_SET(sd2, &sfds);
    FD_SET(sd1, &efds);
    FD_SET(sd2, &efds);
    timeout.tv_sec = seconds;
    timeout.tv_usec = microsecs;

    if (sd1 > sd2) bigger = sd1;
    else bigger = sd2;

    // bigger+1 is required for Berkeley compatibility; Microsoft ignores this param.
    ret = select(bigger + 1, &sfds, NULL, &efds, &timeout);
    if (ret > 0) {
        if (FD_ISSET(sd1, &sfds)) return 1;   // sd1 has data
        if (FD_ISSET(sd2, &sfds)) return 2;   // sd2 has data
        if (FD_ISSET(sd1, &efds)) return 11;  // sd1 has an error
        if (FD_ISSET(sd2, &efds)) return 22;  // sd2 has an error
    }
    else if (ret < 0) return -1;  // one of the sockets is not valid
    return 0;                     // timeout
}
Since Linux 2.6.17, you can poll/epoll for POLLRDHUP/EPOLLRDHUP. See epoll_ctl(2):
EPOLLRDHUP (since Linux 2.6.17)
Stream socket peer closed connection, or shut down writing half of connection. (This flag is especially useful for writing simple code to detect peer shutdown when using Edge Triggered monitoring.)
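A minimal sketch of that check using poll() (Linux-specific; _GNU_SOURCE must be defined before the includes to expose POLLRDHUP):

#define _GNU_SOURCE    /* exposes POLLRDHUP */
#include <poll.h>

/* Returns 1 if the peer has closed or shut down its writing half,
   0 otherwise. The zero timeout makes this a non-blocking check. */
int peer_has_hung_up(int sock)
{
    struct pollfd pfd = { .fd = sock, .events = POLLRDHUP };

    if (poll(&pfd, 1, 0) > 0 && (pfd.revents & (POLLRDHUP | POLLHUP)))
        return 1;
    return 0;
}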
If your proxy must be a general-purpose proxy for any protocol, then you should also handle those clients which send data and immediately call close after the send (one-way data transfer only).
So if client A sends data and closes the connection before the connection to B is opened, don't worry: just forward the data to B normally (once the connection to B is opened).
There is no need to implement special handling for this scenario.
Your proxy will detect the closed connection when:
read returns zero after the connection to B is opened and all pending data from A has been read, or
your program tries to send data (from B) to A.
You could check if the socket is still connected by trying to write to the file descriptor for each socket. Then if the write returns -1 with errno set to EPIPE, you know that socket has been closed. For example:
#include <errno.h>
#include <unistd.h>

int isSockStillConnected(int *fileDescriptors, int numFDs) {
    int i;
    ssize_t n;
    for (i = 0; i < numFDs; i++) {
        // write to the descriptor itself, not to a pointer offset
        n = write(fileDescriptors[i], "heartbeat", 9);
        if (n < 0) return -1;  // write failed; errno is EPIPE if the peer closed
    }
    // made it here, must be okay
    return 0;
}

How network event FD_WRITE is generated when using Event Driven Sockets?

I am working on a network-event-based socket application.
When the client has sent some data and there is something to be read on the socket, an FD_READ network event is generated.
Now, according to my understanding, when the server wants to write over the socket, there must be an event generated as well, i.e. FD_WRITE. But how is this message generated?
When there is something available to be read, FD_READ is generated automatically, but what about FD_WRITE when the server wants to write something?
Can anyone help me clear up this confusion?
Following is the code snippet:
WSAEVENT hEvent = WSACreateEvent();
WSANETWORKEVENTS events;
DWORD waitRet;

WSAEventSelect(newSocketIdentifier, hEvent, FD_READ | FD_WRITE);

while(1)
{ //while(1) starts
    waitRet = WSAWaitForMultipleEvents(1, &hEvent, FALSE, WSA_INFINITE, FALSE);
    //WSAResetEvent(hEvent);

    if(WSAEnumNetworkEvents(newSocketIdentifier, hEvent, &events) == SOCKET_ERROR)
    {
        //Failure
    }
    else
    { //else event occurred starts
        if(events.lNetworkEvents & FD_READ)
        {
            //recvfrom()
        }
        if(events.lNetworkEvents & FD_WRITE)
        {
            //sendto()
        }
    }
}
FD_WRITE means you can write to the socket right now. If the send buffers fill up (you're sending data faster than it can be sent on the network), eventually you won't be able to write anymore until you wait a bit.
Once you make a write that fails due to the buffers being full, this message will be sent to you to let you know you can retry that send.
It's also sent when you first open up the socket to let you know it's there and you can start writing.
http://msdn.microsoft.com/en-us/library/windows/desktop/ms741576(v=vs.85).aspx
The FD_WRITE network event is handled slightly differently. An FD_WRITE network event is recorded when a socket is first connected with a call to the connect, ConnectEx, WSAConnect, WSAConnectByList, or WSAConnectByName function or when a socket is accepted with accept, AcceptEx, or WSAAccept function and then after a send fails with WSAEWOULDBLOCK and buffer space becomes available. Therefore, an application can assume that sends are possible starting from the first FD_WRITE network event setting and lasting until a send returns WSAEWOULDBLOCK. After such a failure the application will find out that sends are again possible when an FD_WRITE network event is recorded and the associated event object is set.
So, ideally you're probably keeping a flag as to whether it's OK to write, right now. It starts off as true, but eventually, you get a WSAEWOULDBLOCK when calling sendto, and you set it to false. Once you receive FD_WRITE, you set the flag back to true and resume sending packets.
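As a sketch of that flag pattern (Winsock; the function name and the buffer/destination parameters here are only illustrative, not part of the question's code):

#include <winsock2.h>

/* Attempt a send and maintain the "OK to write" flag.
   Returns the updated flag: FALSE means wait for the next FD_WRITE. */
BOOL SendWithFlag(SOCKET s, const char *buf, int len,
                  const struct sockaddr *dest, int destLen, BOOL canWrite)
{
    if (!canWrite)
        return FALSE;                /* still waiting for FD_WRITE */

    if (sendto(s, buf, len, 0, dest, destLen) == SOCKET_ERROR &&
        WSAGetLastError() == WSAEWOULDBLOCK)
        return FALSE;                /* buffers full: pause until FD_WRITE */

    return TRUE;                     /* send worked; keep writing */
}

When WSAEnumNetworkEvents() reports FD_WRITE in lNetworkEvents, set the flag back to TRUE and resume sending.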

Refresh multicast group membership

I have several embedded machines listening and streaming rtp audio data to a multicast group. They are connected to a smart managed switch (Netgear GS108Ev2) which does basic igmp snooping and multicast filtering on its ports, so that the rest of my (W)LAN doesn't get flooded.
At start everything works fine for about 500-520 seconds. After that, they don't receive any more data until they leave and join the group again. I guess the switch is "forgetting" about the join after a timeout.
Is there any way to refresh the group membership, i.e. to let the switch know that there is still someone listening, without losing packets?
System info:
Arch: blackfin
# cat /proc/version
Linux version 2.6.28.10-ADI-2009R1-uCBF54x-EMM
(gcc version 4.3.3 (ADI) ) #158 PREEMPT Tue Jun 5 20:05:42 CEST 2012
This is the way multicast / the IGMP protocol works. A client has to join the group periodically by sending a Membership Report, or it will be assumed to have left the group after some short timeout. However, those reports are usually sent only in response to a Membership Query from the local multicast router. Either your clients don't receive the query or they don't respond with a report.
Try using a tool like Wireshark to see which IGMP packets are sent through your network.
You need an IGMP querier to send the Membership Queries, as was already explained by scai.
If you can't configure your router to do that, you can use one of your computers. Seeing how running a full multicast routing daemon would be overkill (and I've never done that), I suggest you try to abuse igmpproxy.
First create a dummy upstream interface (this is not persistent!):
ip tuntap add dev tap6 mode tap
Write igmpproxy.conf:
# Dummy upstream interface.
phyint tap6 upstream ratelimit 0 threshold 1
# Local interface.
phyint eth0 downstream ratelimit 0 threshold 1
# Explicitly disable any other interfaces (yes, it sucks).
phyint NAME disabled
...
Finally start igmpproxy (as root):
igmpproxy -v /path/to/igmpproxy.conf
If your embedded devices are running Linux, you need to turn off the reverse path filter on them, or they won't respond to group membership queries. In that case the upstream switch will assume there is no one listening to that multicast and switch it off.
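For reference, a sketch of how that is commonly done on Linux (the interface name eth0 is just an example; adjust to your system):

# Disable reverse path filtering so the device answers IGMP
# membership queries for the multicast group:
sysctl -w net.ipv4.conf.all.rp_filter=0
sysctl -w net.ipv4.conf.eth0.rp_filter=0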
I had the same problem: multicast on WiFi was lost after 260 seconds. I solved it in my application by adding AddSourceMembership on the socket.
private void StartListner(IPAddress sourceIp, IPAddress multicastGroupIp, IPAddress localIp, int port)
{
    try
    {
        Socket socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        IPEndPoint localEndpoint = new IPEndPoint(localIp, port);
        socket.Bind(localEndpoint);

        byte[] membershipAddresses = new byte[12]; // 3 IPs * 4 bytes (IPv4)
        Buffer.BlockCopy(multicastGroupIp.GetAddressBytes(), 0, membershipAddresses, 0, 4);
        Buffer.BlockCopy(sourceIp.GetAddressBytes(), 0, membershipAddresses, 4, 4);
        Buffer.BlockCopy(localIp.GetAddressBytes(), 0, membershipAddresses, 8, 4);
        socket.SetSocketOption(SocketOptionLevel.IP, SocketOptionName.AddSourceMembership, membershipAddresses);

        try
        {
            byte[] b = new byte[1024 * 2];
            int length = socket.Receive(b);
        }
        catch { }
    }
    catch (Exception ex)
    {
        logger.Error("Exception: " + ex);
    }
}

recv() links messages

I've got a piece of code:
while(1) {
    if(recvfrom(*val, buffer, 1024, MSG_PEEK, NULL, NULL) == -1) {
        perror("recv");
        exit(1);
    } else printf("recv msgpeek\n");

    if(*(int*)buffer > 5) {
        if(recvfrom(*val, buffer, 1024, 0, NULL, NULL) == -1) {
            perror("recv");
            exit(1);
        } else printf("recv\n");

        if(*(int*)buffer == 6) {
            printf("%d\n", *(int*)(buffer + sizeof(int) + 30));
            printf("%s\n", (char*)buffer + sizeof(int));
        }
    }
}
This is part of a client program. I'm sending messages from the server to this client, and I've noticed that when the client receives these messages, they arrive concatenated. I'm using SOCK_STREAM sockets. Does anyone know how to keep the messages separate?
If I understood you correctly, you are reading from a TCP socket and expecting to get exactly the same number of bytes as were "sent" from the other side. That assumption is wrong. A TCP socket is a bi-directional stream, i.e. it does not preserve the boundaries of the application messages you send through it. A "write" on one side of the connection can result in multiple "reads" on the other side, and the other way around: multiple "writes" can be received together. That last case is what you are seeing. It is your responsibility to keep track of message boundaries.
Related question - Receiving data in TCP.
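One common way to keep track of those boundaries is to length-prefix each message. Here is a minimal sketch in C, assuming the sender writes a 4-byte big-endian length before each payload (the function names are just for illustration):

#include <arpa/inet.h>   /* ntohl */
#include <stdint.h>
#include <sys/socket.h>

/* Receive exactly len bytes, looping over recv(). */
static int recv_all(int sock, void *buf, size_t len)
{
    char *p = buf;
    while (len > 0) {
        ssize_t n = recv(sock, p, len, 0);
        if (n <= 0)
            return -1;              /* error or peer closed mid-message */
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

/* Read one length-prefixed message into buf (capacity cap).
   Returns the payload length, or -1 on error or overflow. */
ssize_t recv_message(int sock, void *buf, size_t cap)
{
    uint32_t netlen;
    if (recv_all(sock, &netlen, sizeof netlen) < 0)
        return -1;
    uint32_t len = ntohl(netlen);
    if (len > cap)
        return -1;                  /* message too large for the buffer */
    if (recv_all(sock, buf, len) < 0)
        return -1;
    return (ssize_t)len;
}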
If I understood correctly, your problem is that you send, for example, two messages but receive one containing the contents of both. This is due to Nagle's algorithm, which TCP uses to improve efficiency. If you want to disable this algorithm, use the TCP_NODELAY option.
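A minimal sketch of setting that option in C (note that even with TCP_NODELAY, TCP remains a byte stream and still gives no message-boundary guarantee):

#include <netinet/in.h>
#include <netinet/tcp.h>   /* TCP_NODELAY */
#include <sys/socket.h>

/* Disable Nagle's algorithm so small writes are not coalesced
   while waiting for outstanding ACKs. Returns 0 on success. */
int disable_nagle(int sock)
{
    int one = 1;
    return setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);
}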

How do I receive a message after enabling loopback?

I have my multicast (udp) sender/receiver program up and running. If I use setsockopt to enable loopback with the sender like so:
if(setsockopt(sockfd, IPPROTO_IP, IP_MULTICAST_LOOP, &loop, sizeof(loop)) < 0)
error("loopback failed.");
and later on I send out the message to every subscriber, how does my sender get the message that's sent out? The sender doesn't store its own IP address and port number and send itself a message (basically subscribing to itself), does it?
So it should be something like:
receiver1 (subscription) -> sender
receiver2 (subscription) -> sender
when it's time to send:
sender (info) -> receiver1
sender (info) -> receiver2
sender (info) -> sender? //how does this step work?
Thanks for the help :)
In your code, loop must be of type u_char, not int. Of course, this will also change the final setsockopt() parameter to have the value 1. I have no personal experience of this, but W. Richard Stevens says so in UNIX Network Programming (3rd edition), Vol. 1, Section 21.6, so it must be so.
He also says that using type int here is a common programming error.
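So the corrected call would look like this (a sketch mirroring the question's snippet; the wrapper function name is just for illustration):

#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>   /* u_char */

/* IP_MULTICAST_LOOP takes a u_char on IPv4 sockets, not an int. */
int enable_loopback(int sockfd)
{
    u_char loop = 1;
    return setsockopt(sockfd, IPPROTO_IP, IP_MULTICAST_LOOP,
                      &loop, sizeof loop);
}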
In addition to enabling loopback (which actually may be enabled by default, according to http://tldp.org/HOWTO/Multicast-HOWTO-6.html#ss6.1), you also need to subscribe to the multicast group.
It isn't necessary to send a separate copy of the packet to each receiver. If the multicast subscriptions are correct and you're on a network that supports multicast, then a single transmission is sufficient.
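Subscribing is the usual IP_ADD_MEMBERSHIP setsockopt() on the receiving socket; a minimal sketch (the group-address parameter and the INADDR_ANY interface choice are example assumptions):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Join a multicast group on sockfd, letting the kernel pick the
   local interface (INADDR_ANY). Returns 0 on success. */
int join_group(int sockfd, const char *group_addr)
{
    struct ip_mreq mreq;
    mreq.imr_multiaddr.s_addr = inet_addr(group_addr);
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    return setsockopt(sockfd, IPPROTO_IP, IP_ADD_MEMBERSHIP,
                      &mreq, sizeof mreq);
}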
