Router Alert options on IGMPv2 packets - c

I'm trying to forge an IGMPv2 Membership Report packet and send it on a RAW socket.
RFC 3376 states:
IGMP messages are encapsulated in IPv4 datagrams, with an IP protocol number of 2. Every IGMP message described in this document is sent with an IP Time-to-Live of 1, IP Precedence of Internetwork Control (e.g., Type of Service 0xc0), and carries an IP Router Alert option [RFC-2113] in its IP header
So the IP_ROUTER_ALERT option must be set.
I'm trying to forge only the strictly necessary parts of the packet (i.e. just the IGMP header & payload), so I'm using setsockopt() to edit the IP options.
Some useful variables:
#define C_IP_MULTICAST_TTL 1
#define C_IP_ROUTER_ALERT 1
int sockfd = 0;
int ecsockopt = 0;
int bytes_num = 0;
int ip_multicast_ttl = C_IP_MULTICAST_TTL;
int ip_router_alert = C_IP_ROUTER_ALERT;
Here's how I open the RAW socket:
sock_domain = AF_INET;
sock_type = SOCK_RAW;
sock_proto = IPPROTO_IGMP;
if ((ecsockopt = socket(sock_domain, sock_type, sock_proto)) < 0) {
    printf("Error %d: Can't open socket.\n", errno);
    return 1;
} else {
    printf("** Socket opened.\n");
}
sockfd = ecsockopt;
Then I set the TTL and Router Alert option:
// Set the sent packets TTL
if ((ecsockopt = setsockopt(sockfd, IPPROTO_IP, IP_MULTICAST_TTL, &ip_multicast_ttl, sizeof(ip_multicast_ttl))) < 0) {
    printf("Error %d: Can't set TTL.\n", ecsockopt);
    return 1;
} else {
    printf("** TTL set.\n");
}

// Set the Router Alert
if ((ecsockopt = setsockopt(sockfd, IPPROTO_IP, IP_ROUTER_ALERT, &ip_router_alert, sizeof(ip_router_alert))) < 0) {
    printf("Error %d: Can't set Router Alert.\n", ecsockopt);
    return 1;
} else {
    printf("** Router Alert set.\n");
}
The setsockopt() call for IP_ROUTER_ALERT returns 0. After forging the packet, I send it with sendto() like this:
// Send the packet
if((bytes_num = sendto(sockfd, packet, packet_size, 0, (struct sockaddr*) &mgroup1_addr, sizeof(mgroup1_addr))) < 0) {
printf("Error %d: Can't send Membership report message.\n", bytes_num);
return 1;
} else {
printf("** Membership report message sent. (bytes=%d)\n",bytes_num);
}
The packet is sent, but the Router Alert option (checked with Wireshark) is missing from the IP header.
Am I doing something wrong? Is there some other way to set the IP_ROUTER_ALERT option?
Thanks in advance.

Finally I've found out that the Router Alert option has to be set by the Linux kernel itself. IGMP Membership Reports are sent after an IP_ADD_MEMBERSHIP is done, and the kernel takes care of setting the Router Alert option in their IP headers.
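For illustration, here is a minimal sketch of that approach: joining the group on an ordinary UDP socket and letting the kernel emit the IGMPv2 Membership Report (with TTL 1, TOS 0xc0 and the Router Alert option) on its own. The group address 239.1.2.3 is just a placeholder.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Join a multicast group; the kernel then sends the IGMP Membership
 * Report itself, including the Router Alert option in the IP header. */
int join_group(const char *group_ip)           /* e.g. "239.1.2.3" */
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) {
        perror("socket");
        return -1;
    }

    struct ip_mreq mreq;
    memset(&mreq, 0, sizeof(mreq));
    inet_pton(AF_INET, group_ip, &mreq.imr_multiaddr);
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);

    if (setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq)) < 0) {
        perror("IP_ADD_MEMBERSHIP");
        close(fd);
        return -1;
    }
    return fd;   /* keep it open for as long as the membership is needed */
}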

I do not know why your code is not working (it looks fine to me), but I can suggest a workaround: Drop one more layer down on your raw socket and build Ethernet frames. You may also want to take a look at Libnet, which handles building packets like this for you.

The ip(7) documentation for IP_ROUTER_ALERT states:
Pass all to-be forwarded packets with the IP Router Alert option set to this socket. Only valid for raw sockets. This is useful, for instance, for user-space RSVP daemons. The tapped packets are not forwarded by the kernel, it is the user's responsibility to send them out again. Socket binding is ignored, such packets are only filtered by protocol. Expects an integer flag.
This sounds as if the option only matters when receiving packets on the socket, not when sending them. If you're sending raw packets, can't you just set the required option in the IP header yourself?
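For example, here is a minimal sketch of building the 24-byte IPv4 header (20 bytes plus the 4-byte Router Alert option 0x94 0x04 0x00 0x00) by hand. This assumes the socket has IP_HDRINCL enabled; the function name and parameters are illustrative only, and the IGMP checksum still has to be computed by the caller.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/ip.h>
#include <stdint.h>
#include <string.h>

/* Write an IPv4 header with the Router Alert option in front of an
 * IGMP message of igmp_len bytes; returns the header length (24). */
static size_t build_igmp_ip_header(unsigned char *buf,
                                   uint32_t saddr, uint32_t daddr,
                                   size_t igmp_len)
{
    struct iphdr *ip = (struct iphdr *)buf;

    memset(ip, 0, sizeof(*ip));
    ip->version  = 4;
    ip->ihl      = 6;                      /* 24 bytes = 20 + 4-byte option */
    ip->tos      = 0xc0;                   /* Internetwork Control */
    ip->tot_len  = htons(24 + igmp_len);
    ip->ttl      = 1;
    ip->protocol = IPPROTO_IGMP;
    ip->saddr    = saddr;                  /* network byte order */
    ip->daddr    = daddr;
    /* ip->check stays 0: with IP_HDRINCL, Linux fills in the header
     * checksum (and the ID if left zero), see raw(7). */

    buf[20] = 0x94;                        /* IPOPT_RA: Router Alert */
    buf[21] = 0x04;                        /* option length */
    buf[22] = 0x00;                        /* value 0: "router shall examine" */
    buf[23] = 0x00;
    return 24;                             /* the IGMP message follows here */
}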

As a reference I would recommend one of the many IGMP-aware programs out there.
One example is igmpproxy:
https://github.com/ViToni/igmpproxy/blob/logging/src/igmp.c#L54
/*
 * Open and initialize the igmp socket, and fill in the non-changing
 * IP header fields in the output packet buffer.
 */
void initIgmp(void) {
    struct ip *ip;

    recv_buf = malloc(RECV_BUF_SIZE);
    send_buf = malloc(RECV_BUF_SIZE);

    k_hdr_include(true);             /* include IP header when sending */
    k_set_rcvbuf(256*1024, 48*1024); /* lots of input buffering        */
    k_set_ttl(1);                    /* restrict multicasts to one hop */
    k_set_loop(false);               /* disable multicast loopback     */

    ip = (struct ip *)send_buf;
    memset(ip, 0, sizeof(struct ip));
    /*
     * Fields zeroed that aren't filled in later:
     * - IP ID (let the kernel fill it in)
     * - Offset (we don't send fragments)
     * - Checksum (let the kernel fill it in)
     */
    ip->ip_v   = IPVERSION;
    ip->ip_hl  = (sizeof(struct ip) + 4) >> 2; /* +4 for Router Alert option */
    ip->ip_tos = 0xc0;               /* Internet Control */
    ip->ip_ttl = MAXTTL;             /* applies to unicasts only */
    ip->ip_p   = IPPROTO_IGMP;

    allhosts_group   = htonl(INADDR_ALLHOSTS_GROUP);
    allrouters_group = htonl(INADDR_ALLRTRS_GROUP);
    alligmp3_group   = htonl(INADDR_ALLIGMPV3_GROUP);
}
and https://github.com/ViToni/igmpproxy/blob/logging/src/igmp.c#L271
/*
 * Construct an IGMP message in the output packet buffer. The caller may
 * have already placed data in that buffer, of length 'datalen'.
 */
static void buildIgmp(uint32_t src, uint32_t dst, int type, int code, uint32_t group, int datalen) {
    struct ip *ip;
    struct igmp *igmp;
    extern int curttl;

    ip = (struct ip *)send_buf;
    ip->ip_src.s_addr = src;
    ip->ip_dst.s_addr = dst;
    ip_set_len(ip, IP_HEADER_RAOPT_LEN + IGMP_MINLEN + datalen);

    if (IN_MULTICAST(ntohl(dst))) {
        ip->ip_ttl = curttl;
    } else {
        ip->ip_ttl = MAXTTL;
    }

    /* Add Router Alert option */
    ((unsigned char *)send_buf + MIN_IP_HEADER_LEN)[0] = IPOPT_RA;
    ((unsigned char *)send_buf + MIN_IP_HEADER_LEN)[1] = 0x04;
    ((unsigned char *)send_buf + MIN_IP_HEADER_LEN)[2] = 0x00;
    ((unsigned char *)send_buf + MIN_IP_HEADER_LEN)[3] = 0x00;

    igmp = (struct igmp *)(send_buf + IP_HEADER_RAOPT_LEN);
    igmp->igmp_type         = type;
    igmp->igmp_code         = code;
    igmp->igmp_group.s_addr = group;
    igmp->igmp_cksum        = 0;
    igmp->igmp_cksum        = inetChksum((unsigned short *)igmp,
                                         IP_HEADER_RAOPT_LEN + datalen);
}

Related

Print HTTP packet data from inside Kernel Module

I am trying to write a kernel module that will dump all HTTP packet data to dmesg.
I registered an nf_hook at POST_ROUTING (I also tried hooking OUTPUT), printing all packets whose source port is 80 (HTTP responses).
I read the following post - Print TCP Packet Data - and really got it to work! But I have a problem: the kernel module prints only the first line of the response - HTTP/1.0 200 OK - without printing the rest of the HTTP headers and the HTML.
This is my hook function -
struct iphdr *iph;          /* IPv4 header */
struct tcphdr *tcph;        /* TCP header */
u16 sport, dport;           /* Source and destination ports */
u32 saddr, daddr;           /* Source and destination addresses */
unsigned char *user_data;   /* TCP data begin pointer */
unsigned char *tail;        /* TCP data end pointer */
unsigned char *it;          /* TCP data iterator */

/* Network packet is empty, seems like some problem occurred. Skip it */
if (!skb)
    return NF_ACCEPT;

iph = ip_hdr(skb);          /* get IP header */

/* Skip if it's not a TCP packet */
if (iph->protocol != IPPROTO_TCP)
    return NF_ACCEPT;

tcph = tcp_hdr(skb);        /* get TCP header */

/* Convert network endianness to host endianness */
saddr = ntohl(iph->saddr);
daddr = ntohl(iph->daddr);
sport = ntohs(tcph->source);
dport = ntohs(tcph->dest);

/* Watch only the port of interest */
if (sport != PTCP_WATCH_PORT)
    return NF_ACCEPT;

/* Calculate pointers for begin and end of TCP packet data */
user_data = (unsigned char *)((unsigned char *)tcph + (tcph->doff * 4));
tail = skb_tail_pointer(skb);

/* ----- Print all needed information from received TCP packet ------ */

/* Show only HTTP packets */
if (user_data[0] != 'H' || user_data[1] != 'T' || user_data[2] != 'T' ||
    user_data[3] != 'P') {
    return NF_ACCEPT;
}

/* Print packet route */
pr_debug("print_tcp: %pI4h:%d -> %pI4h:%d\n", &saddr, sport,
         &daddr, dport);

/* Print TCP packet data (payload) */
pr_debug("print_tcp: data:\n");
for (it = user_data; it != tail; ++it) {
    char c = *(char *)it;
    if (c == '\0')
        break;
    printk("%c", c);
}
printk("\n\n");

return NF_ACCEPT;
I want to print the whole packet, not only the first row.
Why is it printing only the first row? My guess is that there is some routing caching (like when using iptables); is there a way to disable the caching?
I had the same problem until I closed my proxy client (like v2ray or shadowsocks); it looks like the proxy client caches/simplifies the HTTP request data. You can try tcpdump to capture the HTTP request data; there is only a line like: HTTP GET / 1.1.
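Whether or not a proxy is involved here, one thing worth ruling out is printk's line handling: without KERN_CONT, consecutive printk("%c", ...) calls are not guaranteed to be joined into one line on recent kernels. A way to sidestep that is to dump the whole payload in a single call with print_hex_dump(); a minimal sketch that could replace the character loop in the hook above:

/* Dump the entire TCP payload (hex + ASCII) in one call instead of one
 * printk() per byte; DUMP_PREFIX_OFFSET prints the offset of each row. */
print_hex_dump(KERN_DEBUG, "print_tcp: ", DUMP_PREFIX_OFFSET,
               16, 1, user_data, tail - user_data, true);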

AF_XDP-Socket vs Linux Sockets: Why does my AF-XDP Socket lose packets whereas a generic linux socket doesn't?

I am comparing AF-XDP sockets vs Linux sockets in terms of how many packets they can process without packet loss (packet loss is defined as: the RTP sequence number of the current packet is not equal to the RTP sequence number of the previous packet + 1).
I noticed that my AF-XDP socket program (I can't determine whether this problem is related to the kernel program or the user-space program) is losing around ~25 packets per second at around 390,000 packets per second, whereas an equivalent program with generic Linux sockets doesn't lose any packets.
I implemented a so-called distributor program which loads the XDP kernel program once, sets up a generic Linux socket, and calls setsockopt(IP_ADD_MEMBERSHIP) on this generic socket for every multicast address I pass to the program via the command line.
After this, the distributor loads the file descriptor of a BPF_MAP_TYPE_HASH placed in the XDP kernel program and inserts routes for the traffic in case a single AF-XDP socket needs to share its UMEM later on.
The XDP kernel program then checks for each IPv4/UDP packet whether there is an entry in that hash map. This basically looks like this:
const struct pckt_idntfy_raw raw = {
    .src_ip = 0, /* not used at the moment */
    .dst_ip = iph->daddr,
    .dst_port = udh->dest,
    .pad = 0
};

const int *idx = bpf_map_lookup_elem(&xdp_packet_mapping, &raw);
if (idx != NULL) {
    if (bpf_map_lookup_elem(&xsks_map, idx)) {
        bpf_printk("Found socket # index: %d!\n", *idx);
        return bpf_redirect_map(&xsks_map, *idx, 0);
    } else {
        bpf_printk("Didn't find connected socket for index %d!\n", *idx);
    }
}
In case idx exists this means that there is a socket sitting behind that index in the BPF_MAP_TYPE_XSKMAP.
After doing all that the distributor spawns a new process via fork() passing all multicast-addresses (including destination port) which should be processed by that process (one process handles one RX-Queue). In case there are not enough RX-Queues, some processes may receive multiple multicast-addresses. This then means that they are going to use SHARED UMEM.
I basically oriented my AF-XDP user-space program on this example code: https://github.com/torvalds/linux/blob/master/samples/bpf/xdpsock_user.c
I am using the same xsk_configure_umem, xsk_populate_fill_ring and xsk_configure_socket functions.
Because I figured I don't need the lowest possible latency for this application, I send the process to sleep for a specified time (around 1 - 2 ms), after which it loops through every AF-XDP socket (most of the time it is only one socket) and processes every received packet for that socket (via handle_receive_packets(), sketched after the loop below), verifying that no packets have been missed:
while (!global_exit) {
    nanosleep(&spec, &remaining);

    for (int i = 0; i < cfg.ip_addrs_len; i++) {
        struct xsk_socket_info *socket = xsk_sockets[i];
        if (atomic_exchange(&socket->stats_sync.lock, 1) == 0) {
            handle_receive_packets(socket);
            atomic_fetch_xor(&socket->stats_sync.lock, 1); /* release socket-lock */
        }
    }
}
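handle_receive_packets() is not shown above; for reference, here is a minimal sketch of such a drain loop built on the libbpf/libxdp xsk ring helpers, assuming the xsk_socket_info / xsk_umem_info struct layout of the xdpsock sample referenced above. BATCH_SIZE and verify_rtp_seq() are placeholders, and this is not meant to pinpoint the cause of the drops.

/* Drain the RX ring and immediately recycle the frames into the fill ring
 * (ring helpers come from <bpf/xsk.h> in libbpf, or <xdp/xsk.h> in libxdp). */
static void handle_receive_packets(struct xsk_socket_info *xsk)
{
    __u32 idx_rx = 0, idx_fq = 0;
    unsigned int i, rcvd;

    rcvd = xsk_ring_cons__peek(&xsk->rx, BATCH_SIZE, &idx_rx);
    if (!rcvd)
        return;

    /* Reserve the same number of fill-ring slots so the NIC never runs
     * out of buffers to DMA into. */
    while (xsk_ring_prod__reserve(&xsk->umem->fq, rcvd, &idx_fq) != rcvd)
        ;

    for (i = 0; i < rcvd; i++) {
        const struct xdp_desc *desc = xsk_ring_cons__rx_desc(&xsk->rx, idx_rx++);
        unsigned char *pkt = xsk_umem__get_data(xsk->umem->buffer, desc->addr);

        verify_rtp_seq(pkt, desc->len);                 /* placeholder */

        *xsk_ring_prod__fill_addr(&xsk->umem->fq, idx_fq++) = desc->addr;
    }

    xsk_ring_prod__submit(&xsk->umem->fq, rcvd);
    xsk_ring_cons__release(&xsk->rx, rcvd);
}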
In my opinion there is nothing too fancy about this, but somehow I lose ~25 packets per second at around 390,000 pps even though my UMEM is close to 1 GB of RAM.
In comparison, my generic Linux socket program looks like this (in short):
int fd = socket(AF_INET, SOCK_RAW, IPPROTO_UDP);
/* setting some socket options */

struct sockaddr_in sin;
memset(&sin, 0, sizeof(struct sockaddr_in));
sin.sin_family = AF_INET;
sin.sin_port = cfg->ip_addrs[0]->pckt.dst_port;
inet_aton(cfg->ip_addrs[0]->pckt.dst_ip, &sin.sin_addr);

if (bind(fd, (struct sockaddr *)&sin, sizeof(struct sockaddr)) < 0) {
    fprintf(stderr, "Error on binding socket: %s\n", strerror(errno));
    return -1;
}

ioctl(fd, SIOCGIFADDR, &intf);
The distributor program creates a new process for every given multicast IP in case generic Linux sockets are used (because there are no sophisticated mechanisms such as SHARED UMEM for generic sockets, I don't bother with multiple multicast streams per process).
Later on I of course join the multicast membership:
struct ip_mreqn mreq;
memset(&mreq, 0, sizeof(struct ip_mreqn));

const char *multicast_ip = cfg->ip_addrs[0]->pckt.dst_ip;
if (inet_pton(AF_INET, multicast_ip, &mreq.imr_multiaddr.s_addr)) {
    /* Local interface address */
    memcpy(&mreq.imr_address, &cfg->ifaddr, sizeof(struct in_addr));
    mreq.imr_ifindex = cfg->ifindex;

    if (setsockopt(igmp_socket_fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(struct ip_mreqn)) < 0) {
        fprintf(stderr, "Failed to set `IP_ADD_MEMBERSHIP`: %s\n", strerror(errno));
        return;
    } else {
        printf("Successfully added Membership for IP: %s\n", multicast_ip);
    }
}
and start processing packets (not sleeping, but in a busy-loop fashion):
void read_packets_recvmsg_with_latency(struct config *cfg, struct statistic *st, void *buff, const int igmp_socket_fd) {
    char ctrl[CMSG_SPACE(sizeof(struct timeval))];
    struct msghdr msg;
    struct iovec iov;

    msg.msg_control = (char*)ctrl;
    msg.msg_controllen = sizeof(ctrl);
    msg.msg_name = &cfg->ifaddr;
    msg.msg_namelen = sizeof(cfg->ifaddr);
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    iov.iov_base = buff;
    iov.iov_len = BUFFER_SIZE;

    struct timeval time_user, time_kernel;
    struct cmsghdr *cmsg = (struct cmsghdr*)&ctrl;

    const int64_t read_bytes = recvmsg(igmp_socket_fd, &msg, 0);
    if (read_bytes == -1) {
        return;
    }
    gettimeofday(&time_user, NULL);

    if (cmsg->cmsg_level == SOL_SOCKET && cmsg->cmsg_type == SCM_TIMESTAMP) {
        memcpy(&time_kernel, CMSG_DATA(cmsg), sizeof(struct timeval));
    }

    if (verify_rtp(cfg, st, read_bytes, buff)) {
        const double timediff = (time_user.tv_sec - time_kernel.tv_sec) * 1000000 + (time_user.tv_usec - time_kernel.tv_usec);
        if (timediff > st->stats.latency_us) {
            st->stats.latency_us = timediff;
        }
    }
}

int main(...) {
    ....
    while (!is_global_exit) {
        read_packets_recvmsg_with_latency(&cfg, &st, buffer, igmp_socket_fd);
    }
}
That's pretty much it.
Please note that in the described use case where I start to lose packets I don't use SHARED UMEM; it's just a single RX queue receiving a multicast stream. When I process a smaller multicast stream of around 150,000 pps, the AF-XDP solution doesn't lose any packets. But it also goes the other way: at around 520,000 pps on the same RX queue (using SHARED UMEM) I get a loss of 12,000 pps.
Any ideas what I am missing?

minissdpd query for number of connected devices failed

I'm working in an OpenWrt Linux environment, trying to enable UPnP on my LAN network while monitoring the connected devices at any given point.
For that, I've enabled miniupnpd on the system, as well as minissdpd.
I have written the following function to query minissdpd in an attempt to understand which devices are currently connected (based on the minissdpd code owner's example):
static int query_connectedDevices(void)
{
    struct sockaddr_un addr;
    int s, nRet = 0;
    const char * minissdpdsocketpath = "/var/run/minissdpd.sock";
    unsigned char buffer[2048];
    unsigned char * p;
    const char * device = "urn:schemas-upnp-org:device:InternetGatewayDevice:1";
    int device_len = (int)strlen(device);

    /* Open communication socket with minissdpd process */
    s = socket(AF_UNIX, SOCK_STREAM, 0);
    if (s < 0) {
        return -1;
    }
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, minissdpdsocketpath, sizeof(addr.sun_path));
    if (connect(s, (struct sockaddr *)&addr, sizeof(struct sockaddr_un)) < 0) {
        return -1;
    }

    buffer[0] = 1; /* request type 1 : request devices/services by type */
    p = buffer + 1;
    CODELENGTH(device_len, p);
    memcpy(p, device, device_len);
    p += device_len;

    nRet = write(s, buffer, p - buffer);
    if (nRet < 0) {
        goto query_exit;
    }

    memset(buffer, 0x0, sizeof(buffer));
    nRet = read(s, buffer, sizeof(buffer));
    if (nRet < 0) {
        goto query_exit;
    }
    nRet = 0;

query_exit:
    close(s);
    return nRet;
}
My problem is that I always receive back the value '1' from minissdpd, no matter how many devices are actually connected to the UPnP network.
Taken from the minissdpd page -
For these three request types, the response is as follows: the first byte (n) is the number of devices/services in the response. For each service/device there are three strings: Location (URL), service type (ST: in M-SEARCH replies) and USN (unique id).
Edit -
I've tried to trigger all 3 supported request types; these are the responses. Note that an empty read back ("") means no data was read back:
Buffer value = 3urn:schemas-upnp-org:device:InternetGatewayDevice:1
Buffer value read back = return value = 1
Buffer value = 3urn:schemas-upnp-org:device:InternetGatewayDevice:1
Buffer value read back = return value = 1
Buffer value = 3urn:schemas-upnp-org:device:InternetGatewayDevice:1
Buffer value read back = $http://192.168.1.1:5000/rootDesc.xml/urn:schemas-upnp-org:service:Layer3Forwarding:1Zuuid:27f10a12-a448-434f-9b33-966bcf662cc3::urn:schemas-upnp-org:service:Layer3Forwarding:1$http://192.168.1.1:5000/rootDesc.xml.urn:schemas-upnp-org:service:WANIPConnection:1Yuuid:27f10a12-a448-434f-9b33-966bcf662cc3::urn:schemas-upnp-org:service:WANIPConnection:1$http://192.168.1.1:5000/rootDesc.xmlupnp:rootdevice:uuid:27f10a12-a448-434f-9b33-966bcf662cc3::upnp:rootdevice return value = 463
Am I doing something wrong?
Thanks!
What do you mean by "enable UPnP on my LAN" ?
miniupnpd is a IGD implementation, it has nothing to do with UPnP AV for example.
To discover all UPnP devices present on your LAN, you can use the listdevice tool provided with miniupnpc.
minissdpd monitors available UPnP devices on the network. It is used by libminiupnpc if available to get device list.
It is OK that it returns only 1 device when requested devices with type urn:schemas-upnp-org:device:InternetGatewayDevice:1 (IGD).
I guess there is only one UPnP IGD on your LAN : the router.
What do you mean by "no matter how many clients are actually connected to UPnP network.". There is no such thing as "client" in UPnP official terminology. There are UPnP Devices on the network, and Control Points

Convert IPv4 packet to IPv6

I am modifying a kernel module (called map) in Vyatta to convert IPv4 packets to IPv6.
http://enog.jp/~masakazu/vyatta/map/
I could do the conversion by removing the IPv4 header and adding a new IPv6 header, but after I call ip6_local_out() it returns no error, yet the packet is still in the struct sk_buff skb I am using. When I use tcpdump I cannot see the new IPv6 packet. Can anyone tell me where I went wrong?
skb_dst_drop(skb);
skb_dst_set(skb, dst);

memcpy(&orig_iph, iph, sizeof(orig_iph));
skb_pull(skb, orig_iph.ihl * 4);
skb_push(skb, sizeof(struct ipv6hdr));
skb_reset_network_header(skb);
skb->protocol = htons(ETH_P_IPV6);

ipv6h = ipv6_hdr(skb);
ipv6h->version = 6;
ipv6h->priority = 0; /* XXX: */
ipv6h->flow_lbl[0] = 0;
ipv6h->flow_lbl[1] = 0;
ipv6h->flow_lbl[2] = 0;
ipv6h->payload_len = htons(ntohs(orig_iph.tot_len) - orig_iph.ihl * 4);
ipv6h->hop_limit = orig_iph.ttl;
memcpy(&ipv6h->saddr, &fl6.saddr, sizeof(struct in6_addr));
ipv6h->nexthdr = orig_iph.protocol;

pkt_len = skb->len;
skb->local_df = 1;

if (df)
    err = ip6_local_out(skb);
else
    err = ip6_fragment(skb, ip6_local_out);

return 0;
I use a method of my own called map_debug_print_skb("map_trans_forward_v4v6", skb), which prints the IP header and the transport header. I can see all the new IPv6 header details with it, but the packet does not show up in tcpdump.

Multicasting via UDP from a different thread

I was trying to create an application that allows me to multicast my webcam feed over my LAN using a specific multicast address, using sendto() to just send the frame buffer. The application I am trying to build is pretty much the same as on this site:
http://nashruddin.com/Streaming_OpenCV_Videos_Over_the_Network
and uses the same architecture.
Only instead of a TCP socket I use SOCK_DGRAM. The problem is that when I use the sendto() function from a different thread it tends to fail, i.e. it returns -1 and errno gets set to 90 (EMSGSIZE), which basically means the packet formed is too large to be sent over the network.
But this happens even if I try to send a simple string (like "hello") to the same multicast address. It seems to work fine if the application is single-threaded, that is to say, I just capture the image and multicast it all in the same thread. This is the code:
#include <netinet/in.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <pthread.h>
#include "cv.h"
#include "highgui.h"
#define PORT 12345
#define GROUP "225.0.0.37"
CvCapture* capture;
IplImage* img0;
IplImage* img1;
int is_data_ready = 0;
int serversock, clientsock;
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
void* streamServer(void* arg);
void quit(char* msg, int retval);
int main(int argc, char** argv)
{
    pthread_t thread_s;
    int key;

    if (argc == 2) {
        capture = cvCaptureFromFile(argv[1]);
    } else {
        capture = cvCaptureFromCAM(0);
    }

    if (!capture) {
        quit("cvCapture failed", 1);
    }

    img0 = cvQueryFrame(capture);
    img1 = cvCreateImage(cvGetSize(img0), IPL_DEPTH_8U, 1);
    cvZero(img1);
    cvNamedWindow("stream_server", CV_WINDOW_AUTOSIZE);

    /* print the width and height of the frame, needed by the client */
    fprintf(stdout, "width: %d\nheight: %d\n\n", img0->width, img0->height);
    fprintf(stdout, "Press 'q' to quit.\n\n");

    /* run the streaming server as a separate thread */
    if (pthread_create(&thread_s, NULL, streamServer, NULL)) {
        quit("pthread_create failed.", 1);
    }

    while (key != 'q') {
        /* get a frame from camera */
        img0 = cvQueryFrame(capture);
        if (!img0) break;

        img0->origin = 0;
        cvFlip(img0, img0, -1);

        /**
         * convert to grayscale
         * note that the grayscaled image is the image to be sent to the client
         * so we enclose it with pthread_mutex_lock to make it thread safe
         */
        pthread_mutex_lock(&mutex);
        cvCvtColor(img0, img1, CV_BGR2GRAY);
        is_data_ready = 1;
        pthread_mutex_unlock(&mutex);

        /* also display the video here on server */
        cvShowImage("stream_server", img0);
        key = cvWaitKey(30);
    }

    /* user has pressed 'q', terminate the streaming server */
    if (pthread_cancel(thread_s)) {
        quit("pthread_cancel failed.", 1);
    }

    /* free memory */
    cvDestroyWindow("stream_server");
    quit(NULL, 0);
}
/**
 * This is the streaming server, run as a separate thread
 * This function waits for a client to connect, and send the grayscaled images
 */
void* streamServer(void* arg)
{
    struct sockaddr_in server;

    /* make this thread cancellable using pthread_cancel() */
    pthread_setcancelstate(PTHREAD_CANCEL_ENABLE, NULL);
    pthread_setcanceltype(PTHREAD_CANCEL_ASYNCHRONOUS, NULL);

    /* open socket */
    if ((serversock = socket(AF_INET, SOCK_DGRAM, 0)) == -1) {
        quit("socket() failed", 1);
    }

    memset(&server, 0, sizeof(server));
    server.sin_family = AF_INET;
    server.sin_port = htons(PORT);
    server.sin_addr.s_addr = inet_addr(GROUP);

    int opt = 1;
    //if(setsockopt(serversock,SOL_SOCKET,SO_BROADCAST,&opt,sizeof(int))==-1){
    //    quit("setsockopt failed",0);
    //}

    // /* setup server's IP and port */
    // memset(&server, 0, sizeof(server));
    // server.sin_family = AF_INET;
    // server.sin_port = htons(PORT);
    // server.sin_addr.s_addr = INADDR_ANY;
    //
    // /* bind the socket */
    // if (bind(serversock, (const void*)&server, sizeof(server)) == -1) {
    //     quit("bind() failed", 1);
    // }
    //
    // /* wait for connection */
    // if (listen(serversock, 10) == -1) {
    //     quit("listen() failed.", 1);
    // }
    //
    // /* accept a client */
    // if ((clientsock = accept(serversock, NULL, NULL)) == -1) {
    //     quit("accept() failed", 1);
    // }

    /* the size of the data to be sent */
    int imgsize = img1->imageSize;
    int bytes = 0, i;

    /* start sending images */
    while (1)
    {
        /* send the grayscaled frame, thread safe */
        pthread_mutex_lock(&mutex);
        if (is_data_ready) {
            // bytes = send(clientsock, img1->imageData, imgsize, 0);
            is_data_ready = 0;
            if ((bytes = sendto(serversock, img1->imageData, imgsize, 0, (struct sockaddr*)&server, sizeof(server))) == -1) {
                quit("sendto FAILED", 1);
            }
        }
        pthread_mutex_unlock(&mutex);

        // /* if something went wrong, restart the connection */
        // if (bytes != imgsize) {
        //     fprintf(stderr, "Connection closed.\n");
        //     close(clientsock);
        //
        //     if ((clientsock = accept(serversock, NULL, NULL)) == -1) {
        //         quit("accept() failed", 1);
        //     }
        // }

        /* have we terminated yet? */
        pthread_testcancel();

        /* no, take a rest for a while */
        usleep(1000);
    }
}
/**
 * this function provides a way to exit nicely from the system
 */
void quit(char* msg, int retval)
{
    if (retval == 0) {
        fprintf(stdout, (msg == NULL ? "" : msg));
        fprintf(stdout, "\n");
    } else {
        fprintf(stderr, (msg == NULL ? "" : msg));
        fprintf(stderr, "\n");
    }

    if (clientsock) close(clientsock);
    if (serversock) close(serversock);
    if (capture) cvReleaseCapture(&capture);
    if (img1) cvReleaseImage(&img1);
    pthread_mutex_destroy(&mutex);

    exit(retval);
}
In the sendto() call, you reference imgsize which is initialized to img1->imageSize.
But I don't see where img1->imageSize is set and it appears that imgsize is never updated.
So first check that the imgsize value being passed to sendto() is correct.
Then check that it is not too large:
UDP/IP datagrams have a hard payload limit of 65,507 bytes. However, an IPv4 network is not required to support more than 548 bytes of payload. (576 is the minimum IPv4 MTU size, less 28 bytes of UDP/IP overhead). Most networks have an MTU of 1500, giving you a nominal payload of 1472 bytes.
Most networks allow you to exceed the MTU by breaking the datagram into IP fragments, which the receiving OS must reassemble. This is invisible to your application: recvfrom() either gets the whole reassembled packet or it gets nothing. But the odds of getting nothing go up with fragmentation, because the loss of any fragment will cause the entire packet to be lost. In addition, some routers and operating systems have obscure security rules which will block some UDP patterns or fragments of certain sizes.
Finally, any given network may enforce a maximum datagram size even with fragmentation, and this is often much less than 65507 bytes.
Since you are dealing with a specific network, you will need to experiment to see how big you can reliably go.
UDP/IP at Wikipedia
IPv4 at Wikipedia
Are you absolutely sure that you don't try to send more than the UDP limit, which is around 65,500 bytes? From my experience you shouldn't even send more than the Ethernet frame limit, which is around 1500 bytes, if you want to keep UDP reliability at its best.
I think that right now you are trying to send much more data than that, in the form of a stream. UDP isn't a stream protocol and you can't replace TCP with it. But of course it is possible to use UDP to send a video stream over multicast; you just need some protocol on top of UDP that handles UDP's message size limit. In the real world, RTP on top of UDP is used for this kind of task.
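To illustrate the size limit, here is a minimal sketch of splitting a frame into datagrams that stay under a typical MTU before calling sendto(). The 1400-byte chunk size and the 4-byte sequence header are arbitrary choices for the example, not a standard; a real application would use RTP as suggested above.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>

#define CHUNK 1400   /* payload per datagram, comfortably below a 1500-byte MTU */

/* Send one frame as a series of small datagrams, each prefixed with a
 * 32-bit sequence number so the receiver can detect loss and reorder. */
static int send_frame_chunked(int sock, const struct sockaddr_in *dst,
                              const char *data, int size)
{
    char pkt[CHUNK + 4];
    uint32_t seq = 0;
    int off = 0;

    while (off < size) {
        int n = (size - off > CHUNK) ? CHUNK : (size - off);
        uint32_t be_seq = htonl(seq++);

        memcpy(pkt, &be_seq, 4);            /* sequence header */
        memcpy(pkt + 4, data + off, n);     /* slice of the frame */
        if (sendto(sock, pkt, n + 4, 0,
                   (const struct sockaddr *)dst, sizeof(*dst)) == -1)
            return -1;                      /* caller checks errno */
        off += n;
    }
    return 0;
}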
