Is it possible to receive data from more than one multicast group on a single socket?
For example:
void AddGroup(int sock,
              const char* mc_addr_str,
              int mc_port,
              const char* interface) {
  // bind the socket to the multicast address and port
  struct sockaddr_in mc_addr;
  memset(&mc_addr, 0, sizeof(mc_addr));
  mc_addr.sin_family = AF_INET;
  mc_addr.sin_addr.s_addr = inet_addr(mc_addr_str);
  mc_addr.sin_port = htons(mc_port);

  if (bind(sock, (struct sockaddr*) &mc_addr, sizeof(mc_addr)) < 0) {
    perror("bind() failed");
    exit(1);
  }

  // construct an IGMP join request structure
  struct ip_mreq mc_req;
  mc_req.imr_multiaddr.s_addr = inet_addr(mc_addr_str);
  mc_req.imr_interface.s_addr = inet_addr(interface);

  // join the multicast group on the given interface
  if (setsockopt(sock, IPPROTO_IP, IP_ADD_MEMBERSHIP,
                 (void*) &mc_req, sizeof(mc_req)) < 0) {
    perror("setsockopt() failed");
    exit(1);
  }
}
This code works when I add one multicast group, but when I try to add another, the bind() fails. I don't quite understand why the bind needs to be there in the first place (the code doesn't work without it, though).
Ideally I would like to call AddGroup multiple times on the same socket. Is this possible? Or do I need one socket per group and then just use polling?
You can join as many multicast groups as you like, using the appropriate setsockopt() call with the IP_ADD_MEMBERSHIP option, rather than bind().
You only bind a socket once. Skip the bind the second time and see what happens.
You can join as many multicast groups as you want on a single socket. See setsockopt() with IP_PKTINFO for a way to recognize which multicast group you are reading data from.
Bind to the passive address, i.e. 0.0.0.0 for IPv4, and use ASM or SSM to pull in additional groups, e.g. IP_ADD_MEMBERSHIP as listed.
You can only bind once.
Yes, it's possible: look at the example at the link (http://www.tenouk.com/Module41c.html).
To sum it up in a few steps:
You setsockopt() with SO_REUSEADDR.
You bind() on INADDR_ANY.
You setsockopt() with IP_ADD_MEMBERSHIP for every group you want to receive datagrams from.
It seems to me that using IP_PKTINFO gives an option to distinguish received packets, but the sender must take care to prepare them accordingly (see Setting the source IP for a UDP socket).
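Putting those three steps together, a minimal sketch (error handling omitted; the group addresses and port below are placeholders, not values from the question):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

int make_multigroup_socket(void) {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);

    // allow other sockets to bind the same port
    int reuse = 1;
    setsockopt(sock, SOL_SOCKET, SO_REUSEADDR, &reuse, sizeof(reuse));

    // bind on INADDR_ANY so the socket accepts any destination address
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(1234);                          // placeholder port
    bind(sock, (struct sockaddr*) &addr, sizeof(addr));

    // join each group with IP_ADD_MEMBERSHIP on the same socket
    const char* groups[] = { "239.0.0.1", "239.0.0.2" };  // placeholders
    for (int i = 0; i < 2; i++) {
        struct ip_mreq req;
        req.imr_multiaddr.s_addr = inet_addr(groups[i]);
        req.imr_interface.s_addr = htonl(INADDR_ANY);
        setsockopt(sock, IPPROTO_IP, IP_ADD_MEMBERSHIP, &req, sizeof(req));
    }
    return sock;
}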
In Unix-based OSes:
If you need to bind to a multicast address, you cannot call bind() more than once. And you will need to bind to a multicast address when you expect more than one multicast stream using the same destination port, with multiple processes on the same device receiving those multicasts.
For example, with the multicast streams 239.0.0.1:1234, 239.0.0.2:1234, 239.0.0.3:1234, and 239.0.0.4:1234: if you want to receive 239.0.0.1 and 239.0.0.2 in process A and 239.0.0.3 and 239.0.0.4 in process B, you cannot accomplish this when both processes are running on the same device.
Related
As this is my first question, my apologies if I don't ask it in the proper way.
I am implementing server-client communication using lwIP.
My server must be connected to multiple clients all the time.
For that, I made my server listen on all of its ports; when a client wants to connect, it sends a connect request and the server should accept it.
But the problem I am having is that the server keeps waiting in lwip_accept() for each client, which blocks the other clients' connections as well.
For example:
if there are 3 clients that want to connect to the server, and the server is listening on 3 of its ports, the lwip_accept() calls are made like below:
client1Conn = lwip_accept(sock1, (struct sockaddr*)&client, (socklen_t*)&lenClient);
client2Conn = lwip_accept(sock2, (struct sockaddr*)&client, (socklen_t*)&lenClient);
client3Conn = lwip_accept(sock3, (struct sockaddr*)&client, (socklen_t*)&lenClient);
Now, if client 1 is not present, the server will not move past the first lwip_accept() call until client 1 becomes available, and until then the other two clients, which are available, cannot connect to the server.
What I want is for the server to check for a client connection and, if that client is not available, skip it and move on to the next client connection.
I tried using lwip_fcntl() like below:
int flags = lwip_fcntl(sock1, F_GETFL, 0);
lwip_fcntl(sock1, F_SETFL, flags | O_NONBLOCK);
client1Conn = lwip_accept(sock1, (struct sockaddr*)&client, (socklen_t*)&lenClient);
But it had no effect. I also tried lwip_ioctl() like below:
int nonblocking = 1;
lwip_ioctl(sock1, FIONBIO, &nonblocking);
client1Conn = lwip_accept(sock1, (struct sockaddr*)&client, (socklen_t*)&lenClient);
This makes the accept fail all the time.
I tried the solution given in How do I change a TCP socket to be non-blocking? but it didn't work either.
Is there any possible way to implement the required functionality?
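One way to get there (a sketch, assuming your lwIP build has the sockets API and select() support enabled, and reusing the client/lenClient variables from the question) is to wait on all listening sockets with lwip_select() and only call lwip_accept() where a connection is actually pending, so one absent client cannot block the rest:

#include "lwip/sockets.h"

// wait on all listening sockets at once, then accept only those
// that are readable
fd_set readset;
int socks[3] = { sock1, sock2, sock3 };
int conns[3] = { -1, -1, -1 };
int maxfd = 0;

FD_ZERO(&readset);
for (int i = 0; i < 3; i++) {
    FD_SET(socks[i], &readset);
    if (socks[i] > maxfd) maxfd = socks[i];
}

struct timeval tv = { .tv_sec = 1, .tv_usec = 0 };  // poll interval
if (lwip_select(maxfd + 1, &readset, NULL, NULL, &tv) > 0) {
    for (int i = 0; i < 3; i++) {
        if (FD_ISSET(socks[i], &readset)) {
            conns[i] = lwip_accept(socks[i], (struct sockaddr*)&client,
                                   (socklen_t*)&lenClient);
        }
    }
}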
On a system with several network interfaces, I can have the same multicast address and port combination used on different networks, with different data on them. I want to be able to connect to them with several network cards and receive different data on each interface.
To do so, I bind to the interface I want to receive on using the IP_MULTICAST_IF option:
struct ip_mreqn mreqn;
memset(&mreqn, 0, sizeof(mreqn));
mreqn.imr_multiaddr.s_addr = inet_addr(mc);
mreqn.imr_address.s_addr = INADDR_ANY;
mreqn.imr_ifindex = if_nametoindex(device);

if (setsockopt(mct->fd, IPPROTO_IP, IP_MULTICAST_IF, &mreqn, sizeof(mreqn)) < 0) {
    perror("setsockopt multicast if");
    return 1;
}
and make sure the join request is only sent on that interface by setting IP_ADD_MEMBERSHIP with the same structure:
if (setsockopt(mct->fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreqn, sizeof(mreqn)) < 0) {
    perror("setsockopt add membership");
    return 1;
}
While the IP_ADD_MEMBERSHIP code works (the join request is only sent on the specified interface), IP_MULTICAST_IF does not. Instead, once any of the interfaces has joined the multicast group, I receive the same data through all of the sockets, even if they have different imr_ifindex values set.
The IP_MULTICAST_IF option does not control incoming selection; it sets the socket's default interface for its outgoing multicast packets.
IP_ADD_MEMBERSHIP is the only mechanism for configuring incoming multicast, and the memberships it creates are for the entire host; the host does not tailor delivery to individual sockets based on their requested memberships. (You can observe the host's memberships with netstat -gn; the reference count there determines when the host can stop listening, not which sockets receive the fan-out. If any socket holds a matching membership, all sockets that made an applicable bind(2) will start receiving that multicast, even if they have never used IP_ADD_MEMBERSHIP.)
The usual method to differentiate these packets without changing the system setup is to receive them all on one socket and use ancillary data to identify their arrival interface. On Linux, this ancillary data setup is done with IP_PKTINFO as described in ip(7).
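On Linux, a minimal sketch of that IP_PKTINFO approach (assuming fd is an already-bound socket that has joined its groups):

#define _GNU_SOURCE            // for struct in_pktinfo on glibc
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

void recv_with_ifindex(int fd) {
    // ask the kernel to attach struct in_pktinfo to each datagram
    int on = 1;
    setsockopt(fd, IPPROTO_IP, IP_PKTINFO, &on, sizeof(on));

    char buf[2048];
    char cbuf[CMSG_SPACE(sizeof(struct in_pktinfo))];
    struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
    struct msghdr msg;
    memset(&msg, 0, sizeof(msg));
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cbuf;
    msg.msg_controllen = sizeof(cbuf);

    if (recvmsg(fd, &msg, 0) < 0)
        return;

    // walk the ancillary data to find the arrival interface
    for (struct cmsghdr* c = CMSG_FIRSTHDR(&msg); c != NULL;
         c = CMSG_NXTHDR(&msg, c)) {
        if (c->cmsg_level == IPPROTO_IP && c->cmsg_type == IP_PKTINFO) {
            struct in_pktinfo* pi = (struct in_pktinfo*) CMSG_DATA(c);
            // pi->ipi_ifindex: arrival interface; pi->ipi_addr: destination
            (void) pi;
        }
    }
}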
I need to implement the following scheme (in C, however the language is not the case here):
client(192.168.1.2) <-> proxy(addr4: 192.168.1.1:1000, addr6: FE80:0000:0000:0000:0202:B3FF:FE1E:8329) <-> some_remote_host(remote_addr6)
The thing is, addr6 on the proxy side must be changed dynamically according to the incoming IPv4 port. For example:
client connects to 192.168.1.1:1000, outgoing connection is made via addr6_0
client connects to 192.168.1.1:1001, outgoing connection is made via addr6_1
etc ...
The most straightforward implementation would be to assign multiple static IPv6 addresses to the Ethernet interface and bind() the socket before making the outgoing connection. The problem is that the number of incoming ports/outgoing addresses can be ~10000 (and as far as I understand, the recommended value for net.ipv6.conf.all.max_addresses is 32 or 64, with a default of 16).
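For reference, a minimal sketch of that per-connection source binding (the function name and arguments here are illustrative, not from the original code):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

int connect_from(const char* src_addr6, const struct sockaddr_in6* dst) {
    int fd = socket(AF_INET6, SOCK_STREAM, 0);

    // bind the outgoing socket to one specific source address
    struct sockaddr_in6 src;
    memset(&src, 0, sizeof(src));
    src.sin6_family = AF_INET6;
    inet_pton(AF_INET6, src_addr6, &src.sin6_addr);
    bind(fd, (struct sockaddr*) &src, sizeof(src));

    connect(fd, (const struct sockaddr*) dst, sizeof(*dst));
    return fd;
}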
The questions are:
What problems can I expect if I assign 10000 IPv6 addresses to one interface? I assume performance issues?
Is there a better way to achieve the goal?
I have an ambiguous issue: I have a multicast group between two users, one of them a sender and the other a receiver. Here is what I did on each side:
Receiver:
create a udp socket.
bind to a multicast group address
connect to the sender side (connect(sender ip))
join the multicast group
recv from the multicast group.
Sender:
create a udp socket.
send to the multicast group.
In the scenario above, when the sender sent data, the receiver couldn't receive it, yet checking the receiver side with tcpdump showed data arriving from the multicast group.
But if the receiver does not connect to the sender side, the data is received.
However, if we let the sender bind to the multicast address before sending to the multicast group, while the receiver still connects to the sender side as in the scenario above, the data is received successfully!
Is there any explanation for why adding the bind on the sender side changes this?
You might want to connect(2) the sender's socket to the multicast group to speed up sending, but don't connect(2) the receiver, since that restricts what it will receive (yes, it's a bit confusing, but that's how it works). Just bind(2) the receiver to the group/port and do the setsockopt(2) with IP_ADD_MEMBERSHIP to join the group.
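A sketch of that sender setup (the group address and port are placeholders):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

int make_multicast_sender(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    // connect the sender's socket to the group so plain send() works
    struct sockaddr_in group;
    memset(&group, 0, sizeof(group));
    group.sin_family = AF_INET;
    group.sin_addr.s_addr = inet_addr("239.0.0.1");  // placeholder group
    group.sin_port = htons(1234);                    // placeholder port
    connect(fd, (struct sockaddr*) &group, sizeof(group));

    send(fd, "hello", 5, 0);
    return fd;
}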
There is no such thing as 'connected UDP multicast'. It is either connected or it is multicast. Remove the connect() steps completely.
On the receiver side, both the bind() and connect() calls do the same thing: they associate a given Internet socket address with a given connectionless socket. For the bind() call -- in which the Internet socket address is the address of the multicast group -- this means that the socket will only receive UDP packets whose destination address is that of the multicast group. For the connect() call -- in which the Internet socket address is that of the sender -- this means that the socket will only receive UDP packets whose destination address is that of the sender, which isn't what you want.
The connect() call is overriding the bind() call, resulting in no packets being received.
Replace the bind() call with a connect() call to the multicast group and you should still receive the UDP packets -- or keep only the bind() call. It's your call.
I have a C application that sends data to a UDP server every few seconds. If the client loses its network connection for a few minutes and then gets it back, it will send all of the accumulated data to the server, which may result in a hundred or more requests arriving at the server at the same time from that client.
Is there any way to prevent these messages from being sent from the client if an error occurs during transmission using UDP? Would a connect call from the UDP client help to determine if the client can connect to the server? Or would this only be possible using TCP?
int socketDescriptor;
struct sockaddr_in serverAddress;

if ((socketDescriptor = socket(AF_INET, SOCK_DGRAM, 0)) < 0)
{
    printf("Could not create socket.\n");
    return;
}

memset(&serverAddress, 0, sizeof(serverAddress));  // zero the address struct
serverAddress.sin_family = AF_INET;
serverAddress.sin_addr.s_addr = inet_addr(server_ip);
serverAddress.sin_port = htons(server_port);

if (sendto(socketDescriptor, data, strlen(data), 0,
           (struct sockaddr *)&serverAddress, sizeof(serverAddress)) < 0)
{
    printf("Could not send data to the server.\n");
    return;
}

close(socketDescriptor);
It sounds like the behavior you're getting comes from datagrams being buffered in the socket's send buffer (sndbuf), and you would prefer that those datagrams be dropped if they can't be sent immediately?
If that's the case, you might have luck setting the size of the sndbuf to zero.
Word of warning: this area of behavior sounds like it treads very close to "implementation specific" territory.
As explained here, to retrieve errors on a UDP send you should call connect() first and then use send(); yet on Linux it seems to behave the same with or without the connect().
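A sketch of that connect-then-send pattern, reusing socketDescriptor, serverAddress, and data from the question's code; after connect(), an asynchronous ICMP error (e.g. port unreachable) triggered by an earlier datagram can be reported as errno on a later call:

// connect the UDP socket so ICMP errors are reported back to it
if (connect(socketDescriptor, (struct sockaddr *)&serverAddress,
            sizeof(serverAddress)) < 0)
{
    printf("Could not connect socket.\n");
    return;
}

// a failure from a previous datagram can surface here, e.g. ECONNREFUSED
if (send(socketDescriptor, data, strlen(data), 0) < 0)
{
    perror("send");
    return;
}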