I'm working in an OpenWRT Linux environment, trying to enable UPnP on my LAN while monitoring which devices are connected at any given point.
For that, I've enabled miniupnpd on the system, as well as minissdpd.
I have written the following function to query minissdpd, in an attempt to understand which devices are currently connected (based on the minissdpd maintainer's example):
/* requires <sys/socket.h>, <sys/un.h>, <string.h>, <unistd.h>
 * and miniupnpc's codelength.h for the CODELENGTH macro */
static int query_connectedDevices(void)
{
    struct sockaddr_un addr;
    int s, nRet = 0;
    const char * minissdpdsocketpath = "/var/run/minissdpd.sock";
    unsigned char buffer[2048];
    unsigned char * p;
    const char * device = "urn:schemas-upnp-org:device:InternetGatewayDevice:1";
    int device_len = (int)strlen(device);

    /* Open communication socket with the minissdpd process */
    s = socket(AF_UNIX, SOCK_STREAM, 0);
    if(s < 0) {
        return -1;
    }
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, minissdpdsocketpath, sizeof(addr.sun_path));
    if(connect(s, (struct sockaddr *)&addr, sizeof(struct sockaddr_un)) < 0) {
        close(s); /* don't leak the descriptor on a failed connect */
        return -1;
    }
    buffer[0] = 1; /* request type 1 : request devices/services by type */
    p = buffer + 1;
    CODELENGTH(device_len, p); /* 7-bit variable-length encoding of the string length */
    memcpy(p, device, device_len);
    p += device_len;
    nRet = write(s, buffer, p - buffer);
    if (nRet < 0) {
        goto query_exit;
    }
    memset(buffer, 0x0, sizeof(buffer));
    nRet = read(s, buffer, sizeof(buffer));
    if (nRet < 0) {
        goto query_exit;
    }
    nRet = 0;
query_exit:
    close(s);
    return nRet;
}
My problem is that I always receive back the value '1' from minissdpd, no matter how many devices are actually connected to the UPnP network.
Taken from the minissdpd page:
For these three request types, the response is as follows:
The first byte (n) is the number of devices/services in the response
For each service/device, three Strings : Location (url), service type
(ST: in M-SEARCH replies) and USN (unique id).
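For reference, a decoding loop for that format would look roughly like this (the length prefixes use the same 7-bit variable-length encoding as the CODELENGTH macro above; this is a sketch with only minimal bounds checking):

#include <stdio.h>

static void print_devices(const unsigned char *buf, int len)
{
    const unsigned char *p = buf;
    const unsigned char *end = buf + len;
    int ndev = *p++;                       /* first byte: device/service count */
    static const char *field_names[] = { "Location", "ST", "USN" };

    for (int i = 0; i < ndev && p < end; i++) {
        for (int f = 0; f < 3 && p < end; f++) {
            unsigned int slen = 0;         /* decode the 7-bit varint length */
            do {
                slen = (slen << 7) | (*p & 0x7f);
            } while ((*p++ & 0x80) && p < end);
            if (p + slen > end)
                return;                    /* truncated response */
            printf("%s: %.*s\n", field_names[f], (int)slen, (const char *)p);
            p += slen;
        }
    }
}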
**Edit:**
I've tried to trigger all 3 supported request types; these are the responses (an empty read back = "" means no data was read back):
Buffer value = 3urn:schemas-upnp-org:device:InternetGatewayDevice:1
Buffer value read back = ""
return value = 1
Buffer value = 3urn:schemas-upnp-org:device:InternetGatewayDevice:1
Buffer value read back = ""
return value = 1
Buffer value = 3urn:schemas-upnp-org:device:InternetGatewayDevice:1
Buffer value read back = $http://192.168.1.1:5000/rootDesc.xml/urn:schemas-upnp-org:service:Layer3Forwarding:1Zuuid:27f10a12-a448-434f-9b33-966bcf662cc3::urn:schemas-upnp-org:service:Layer3Forwarding:1$http://192.168.1.1:5000/rootDesc.xml.urn:schemas-upnp-org:service:WANIPConnection:1Yuuid:27f10a12-a448-434f-9b33-966bcf662cc3::urn:schemas-upnp-org:service:WANIPConnection:1$http://192.168.1.1:5000/rootDesc.xmlupnp:rootdevice:uuid:27f10a12-a448-434f-9b33-966bcf662cc3::upnp:rootdevice
return value = 463
Am I doing something wrong?
Thanks!
What do you mean by "enable UPnP on my LAN"?
miniupnpd is an IGD implementation; it has nothing to do with UPnP AV, for example.
To discover all UPnP devices present on your LAN, you can use the listdevice tool provided with miniupnpc.
minissdpd monitors available UPnP devices on the network. It is used by libminiupnpc, if available, to get the device list.
It is OK that it returns only 1 device when you request devices of type urn:schemas-upnp-org:device:InternetGatewayDevice:1 (IGD).
I guess there is only one UPnP IGD on your LAN: the router.
What do you mean by "no matter how many clients are actually connected to UPnP network"? There is no such thing as a "client" in official UPnP terminology. There are UPnP Devices on the network, and Control Points.
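If you'd rather do this from your own code than run listdevice, here is a minimal sketch using libminiupnpc's discovery API. The signature below is the miniupnpc 2.x one (with the ttl argument); older versions differ, so check your header. Passing the minissdpd socket path makes the library query minissdpd instead of multicasting itself:

#include <stdio.h>
#include <miniupnpc/miniupnpc.h>

int main(void)
{
    int error = 0;
    /* upnpDiscoverAll() searches for "ssdp:all", i.e. every device/service */
    struct UPNPDev *devlist = upnpDiscoverAll(2000 /* ms */, NULL,
                                              "/var/run/minissdpd.sock",
                                              0 /* localport */, 0 /* ipv6 */,
                                              2 /* ttl */, &error);
    for (struct UPNPDev *dev = devlist; dev; dev = dev->pNext)
        printf("%s\n  st: %s\n", dev->descURL, dev->st);
    freeUPNPDevlist(devlist);
    return 0;
}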
I am comparing AF-XDP sockets vs. generic Linux sockets in terms of how many packets they can process without packet loss (a lost packet meaning the RTP sequence number of the current packet is not equal to the RTP sequence number of the previous packet + 1).
I noticed that my AF-XDP socket program (I can't determine whether this problem is related to the kernel program or the user-space program) loses around ~25 packets per second at around 390,000 packets per second, whereas an equivalent program with generic Linux sockets doesn't lose any packets.
I implemented a so-called distributor program which loads the XDP kernel program once, sets up a generic Linux socket and calls setsockopt(IP_ADD_MEMBERSHIP) on that socket for every multicast address I pass to the program via the command line.
After this, the distributor loads the file descriptor of a BPF_MAP_TYPE_HASH placed in the XDP kernel program and inserts routes for the traffic, in case a single AF-XDP socket later needs to share its UMEM.
For each IPv4/UDP packet, the XDP kernel program then checks whether there is an entry in that hash map. This basically looks like this:
const struct pckt_idntfy_raw raw = {
    .src_ip = 0, /* not used at the moment */
    .dst_ip = iph->daddr,
    .dst_port = udh->dest,
    .pad = 0
};

const int *idx = bpf_map_lookup_elem(&xdp_packet_mapping, &raw);
if(idx != NULL) {
    if (bpf_map_lookup_elem(&xsks_map, idx)) {
        bpf_printk("Found socket # index: %d!\n", *idx);
        return bpf_redirect_map(&xsks_map, *idx, 0);
    } else {
        bpf_printk("Didn't find connected socket for index %d!\n", *idx);
    }
}
If idx exists, it means that a socket is sitting behind that index in the BPF_MAP_TYPE_XSKMAP.
After doing all that, the distributor spawns a new process via fork(), passing all multicast addresses (including destination port) which should be processed by that process (one process handles one RX queue). If there are not enough RX queues, some processes may receive multiple multicast addresses, which means they will use SHARED UMEM.
I basically based my AF-XDP user-space program on this example code: https://github.com/torvalds/linux/blob/master/samples/bpf/xdpsock_user.c
I am using the same xsk_configure_umem, xsk_populate_fill_ring and xsk_configure_socket functions.
Because I figured I don't need the lowest possible latency for this application, I send the process to sleep for a specified time (around 1-2 ms), after which it loops through every AF-XDP socket (most of the time it is only one socket) and processes every received packet for that socket, verifying that no packets have been missed:
while(!global_exit) {
    nanosleep(&spec, &remaining);
    for(int i = 0; i < cfg.ip_addrs_len; i++) {
        struct xsk_socket_info *socket = xsk_sockets[i];
        if(atomic_exchange(&socket->stats_sync.lock, 1) == 0) {
            handle_receive_packets(socket);
            atomic_fetch_xor(&socket->stats_sync.lock, 1); /* release socket-lock */
        }
    }
}
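For reference, handle_receive_packets() does roughly the following (a simplified sketch using libbpf's xsk helpers, the same ones xdpsock_user.c uses; the xsk_socket_info/umem fields mirror the structs in that sample, and verify_rtp_sequence() is a stand-in for my actual RTP check). Every consumed RX descriptor is recycled into the fill ring, since otherwise the NIC would run out of buffers and drop packets:

#include <stdint.h>
#include <bpf/xsk.h>

extern void verify_rtp_sequence(const uint8_t *pkt, uint32_t len); /* stand-in */

static void handle_receive_packets(struct xsk_socket_info *xsk)
{
    uint32_t idx_rx = 0, idx_fq = 0;

    unsigned int rcvd = xsk_ring_cons__peek(&xsk->rx, 64, &idx_rx);
    if (!rcvd)
        return;

    /* reserve as many fill-queue slots as we consumed, for recycling */
    while (xsk_ring_prod__reserve(&xsk->umem->fq, rcvd, &idx_fq) != rcvd)
        ; /* fill queue full: spin until slots free up */

    for (unsigned int i = 0; i < rcvd; i++) {
        const struct xdp_desc *desc = xsk_ring_cons__rx_desc(&xsk->rx, idx_rx++);
        uint8_t *pkt = xsk_umem__get_data(xsk->umem->buffer, desc->addr);

        verify_rtp_sequence(pkt, desc->len);

        /* hand the frame back to the kernel via the fill ring */
        *xsk_ring_prod__fill_addr(&xsk->umem->fq, idx_fq++) = desc->addr;
    }
    xsk_ring_prod__submit(&xsk->umem->fq, rcvd);
    xsk_ring_cons__release(&xsk->rx, rcvd);
}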
In my opinion there is nothing too fancy about this, but somehow I lose ~25 packets per second at around 390,000 pps even though my UMEM is close to 1 GB of RAM.
In comparison, my generic Linux socket program looks like this (in short):
int fd = socket(AF_INET, SOCK_RAW, IPPROTO_UDP);
/* setting some socket options */

struct sockaddr_in sin;
memset(&sin, 0, sizeof(struct sockaddr_in));
sin.sin_family = AF_INET;
sin.sin_port = cfg->ip_addrs[0]->pckt.dst_port;
inet_aton(cfg->ip_addrs[0]->pckt.dst_ip, &sin.sin_addr);

if(bind(fd, (struct sockaddr*)&sin, sizeof(struct sockaddr)) < 0) {
    fprintf(stderr, "Error on binding socket: %s\n", strerror(errno));
    return -1;
}

ioctl(fd, SIOCGIFADDR, &intf);
The distributor program creates a new process for every given multicast IP when generic Linux sockets are used (because there are no mechanisms such as SHARED UMEM for generic sockets, I don't bother with multiple multicast streams per process).
Later on I of course join the multicast membership:
struct ip_mreqn mreq;
memset(&mreq, 0, sizeof(struct ip_mreqn));

const char *multicast_ip = cfg->ip_addrs[0]->pckt.dst_ip;
if(inet_pton(AF_INET, multicast_ip, &mreq.imr_multiaddr.s_addr)) {
    /* Local interface address */
    memcpy(&mreq.imr_address, &cfg->ifaddr, sizeof(struct in_addr));
    mreq.imr_ifindex = cfg->ifindex;
    if(setsockopt(igmp_socket_fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(struct ip_mreqn)) < 0) {
        fprintf(stderr, "Failed to set `IP_ADD_MEMBERSHIP`: %s\n", strerror(errno));
        return;
    } else {
        printf("Successfully added Membership for IP: %s\n", multicast_ip);
    }
}
and start processing packets (not sleeping, but in a busy-loop fashion):
void read_packets_recvmsg_with_latency(struct config *cfg, struct statistic *st, void *buff, const int igmp_socket_fd) {
    char ctrl[CMSG_SPACE(sizeof(struct timeval))];
    struct msghdr msg;
    struct iovec iov;
    memset(&msg, 0, sizeof(msg)); /* avoid reading uninitialized msghdr fields */
    msg.msg_control = (char*)ctrl;
    msg.msg_controllen = sizeof(ctrl);
    msg.msg_name = &cfg->ifaddr;
    msg.msg_namelen = sizeof(cfg->ifaddr);
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    iov.iov_base = buff;
    iov.iov_len = BUFFER_SIZE;

    struct timeval time_user, time_kernel;
    const int64_t read_bytes = recvmsg(igmp_socket_fd, &msg, 0);
    if(read_bytes == -1) {
        return;
    }
    gettimeofday(&time_user, NULL);

    /* CMSG_FIRSTHDR() returns NULL if no control message was delivered */
    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    if(cmsg && cmsg->cmsg_level == SOL_SOCKET && cmsg->cmsg_type == SCM_TIMESTAMP) {
        memcpy(&time_kernel, CMSG_DATA(cmsg), sizeof(struct timeval));
    }

    if(verify_rtp(cfg, st, read_bytes, buff)) {
        const double timediff = (time_user.tv_sec - time_kernel.tv_sec) * 1000000 + (time_user.tv_usec - time_kernel.tv_usec);
        if(timediff > st->stats.latency_us) {
            st->stats.latency_us = timediff;
        }
    }
}
int main(...) {
    ....
    while(!is_global_exit) {
        read_packets_recvmsg_with_latency(&cfg, &st, buffer, igmp_socket_fd);
    }
}
That's pretty much it.
Please note that in the described use case where I start to lose packets I don't use SHARED UMEM; it's just a single RX queue receiving a multicast stream. When I process a smaller multicast stream of around 150,000 pps, the AF-XDP solution doesn't lose any packets. But it also goes the other way: at around 520,000 pps on the same RX queue (using SHARED UMEM) I get a loss of 12,000 pps.
Any ideas what I am missing?
There's something strange in my client/server socket program using RSA.
If I test it on localhost, everything works fine, but if I put the client on one PC and the server on another PC, something goes wrong.
After calling connect, the client calls a method to exchange public keys with the server. This part of the code works fine.
After this, the client sends a request to the server:
strcpy(send_pack->op, "help\n");
RSA_public_encrypt(strlen(send_pack->op), send_pack->op,
                   encrypted_send->op, rsa_server, padding);
rw_value = write(server, encrypted_send, sizeof (encrypted_pack));
if (rw_value == -1) {
    stampa_errore(write_error);
    close(server);
    exit(1);
}
if (rw_value == 0) {
    stampa_errore(no_response);
    close(server);
    exit(1);
}
printf("---Help send, waiting for response\n");
set_alarm();
rw_value = read(server, encrypted_receive, sizeof (encrypted_pack));
alarm(0);
if (rw_value == -1) {
    stampa_errore(read_error);
    exit(1);
}
if (rw_value == 0) {
    stampa_errore(no_response);
    close(server);
    exit(1);
}
RSA_private_decrypt(RSA_size(rsa), encrypted_receive->message,
                    receive_pack->message, rsa, padding);
printf("%s\n", receive_pack->message);
return;
}
But when the server tries to decrypt the received message on its side, the "help" string doesn't appear. This happens only over the network; on localhost the same code works fine...
EDIT:
typedef struct pack1 {
    unsigned char user[encrypted_size];
    unsigned char password[encrypted_size];
    unsigned char op[encrypted_size];
    unsigned char obj[encrypted_size];
    unsigned char message[encrypted_size];
    int id;
} encrypted_pack;
encrypted_size is 512, and the padding used is RSA_PKCS1_PADDING.
You are assuming that you read the whole thing, sizeof(encrypted_pack) bytes, in one go. This doesn't always happen. You can get less than that, so you should read(2) in a loop until you have your complete application message.
Edit 0:
You are trying to decrypt an incomplete message. TCP is a stream of bytes and you have to treat it as such: it doesn't know about your application message boundaries. You should be doing something like this:
char buffer[sizeof( encrypted_pack )];
size_t to_read = sizeof( encrypted_pack );
size_t offset = 0;

while ( true ) {
    /* to_read already tracks the bytes still missing, so it is the read size */
    ssize_t rb = ::read( fd, buffer + offset, to_read );
    if ( rb == -1 ) { /* handle error */ }
    else if ( rb == 0 ) { /* handle EOF */ }
    else {
        offset += rb;
        to_read -= rb;
        if ( to_read == 0 ) break;
    }
}
// handle complete message in buffer
You should do the same on the sending side too: write bytes into the socket in a loop.
It "works" over loopback because the MTU of that virtual interface is usually around 16K vs. 1500 for normal Ethernet, so TCP transfers your data in one chunk. But you cannot rely on that.
I was trying to create an application that lets me multicast my webcam feed over my LAN, using a specific multicast address and sendto() to just send the frame buffer. The application I am trying to build is pretty much the same as on this site
http://nashruddin.com/Streaming_OpenCV_Videos_Over_the_Network
and uses the same architecture.
Only instead of a TCP socket I use SOCK_DGRAM. The problem is that when I call sendto() from a different thread it tends to fail, i.e. it returns -1 and errno is set to 90 (EMSGSIZE), which basically means the packet formed is too large to be sent over the network.
But this happens even if I try to send a simple string (like "hello") to the same multicast address. It seems to work fine if the application is single-threaded, that is to say, if I capture the image and multicast it all in the same thread. This is the code:
#include <netinet/in.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <pthread.h>
#include "cv.h"
#include "highgui.h"
#define PORT 12345
#define GROUP "225.0.0.37"
CvCapture* capture;
IplImage* img0;
IplImage* img1;
int is_data_ready = 0;
int serversock, clientsock;
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
void* streamServer(void* arg);
void quit(char* msg, int retval);
int main(int argc, char** argv)
{
    pthread_t thread_s;
    int key = 0; /* initialize: key is tested before the first cvWaitKey() */

    if (argc == 2) {
        capture = cvCaptureFromFile(argv[1]);
    } else {
        capture = cvCaptureFromCAM(0);
    }
    if (!capture) {
        quit("cvCapture failed", 1);
    }

    img0 = cvQueryFrame(capture);
    img1 = cvCreateImage(cvGetSize(img0), IPL_DEPTH_8U, 1);
    cvZero(img1);
    cvNamedWindow("stream_server", CV_WINDOW_AUTOSIZE);

    /* print the width and height of the frame, needed by the client */
    fprintf(stdout, "width: %d\nheight: %d\n\n", img0->width, img0->height);
    fprintf(stdout, "Press 'q' to quit.\n\n");

    /* run the streaming server as a separate thread */
    if (pthread_create(&thread_s, NULL, streamServer, NULL)) {
        quit("pthread_create failed.", 1);
    }

    while(key != 'q') {
        /* get a frame from camera */
        img0 = cvQueryFrame(capture);
        if (!img0) break;
        img0->origin = 0;
        cvFlip(img0, img0, -1);

        /**
         * convert to grayscale
         * note that the grayscaled image is the image to be sent to the client
         * so we enclose it with pthread_mutex_lock to make it thread safe
         */
        pthread_mutex_lock(&mutex);
        cvCvtColor(img0, img1, CV_BGR2GRAY);
        is_data_ready = 1;
        pthread_mutex_unlock(&mutex);

        /* also display the video here on server */
        cvShowImage("stream_server", img0);
        key = cvWaitKey(30);
    }

    /* user has pressed 'q', terminate the streaming server */
    if (pthread_cancel(thread_s)) {
        quit("pthread_cancel failed.", 1);
    }

    /* free memory */
    cvDestroyWindow("stream_server");
    quit(NULL, 0);
}
/**
 * This is the streaming server, run as a separate thread
 * This function waits for a client to connect, and send the grayscaled images
 */
void* streamServer(void* arg)
{
    struct sockaddr_in server;

    /* make this thread cancellable using pthread_cancel() */
    pthread_setcancelstate(PTHREAD_CANCEL_ENABLE, NULL);
    pthread_setcanceltype(PTHREAD_CANCEL_ASYNCHRONOUS, NULL);

    /* open socket */
    if ((serversock = socket(AF_INET, SOCK_DGRAM, 0)) == -1) {
        quit("socket() failed", 1);
    }

    memset(&server, 0, sizeof(server));
    server.sin_family = AF_INET;
    server.sin_port = htons(PORT);
    server.sin_addr.s_addr = inet_addr(GROUP);
    int opt = 1;
    //if(setsockopt(serversock,SOL_SOCKET,SO_BROADCAST,&opt,sizeof(int))==-1){
    //    quit("setsockopt failed",0);
    //}

    // /* setup server's IP and port */
    // memset(&server, 0, sizeof(server));
    // server.sin_family = AF_INET;
    // server.sin_port = htons(PORT);
    // server.sin_addr.s_addr = INADDR_ANY;
    //
    // /* bind the socket */
    // if (bind(serversock, (const void*)&server, sizeof(server)) == -1) {
    //     quit("bind() failed", 1);
    // }
    //
    // /* wait for connection */
    // if (listen(serversock, 10) == -1) {
    //     quit("listen() failed.", 1);
    // }
    //
    // /* accept a client */
    // if ((clientsock = accept(serversock, NULL, NULL)) == -1) {
    //     quit("accept() failed", 1);
    // }

    /* the size of the data to be sent */
    int imgsize = img1->imageSize;
    int bytes = 0, i;

    /* start sending images */
    while(1)
    {
        /* send the grayscaled frame, thread safe */
        pthread_mutex_lock(&mutex);
        if (is_data_ready) {
            // bytes = send(clientsock, img1->imageData, imgsize, 0);
            is_data_ready = 0;
            if((bytes = sendto(serversock, img1->imageData, imgsize, 0,
                               (struct sockaddr*)&server, sizeof(server))) == -1) {
                quit("sendto FAILED", 1);
            }
        }
        pthread_mutex_unlock(&mutex);

        // /* if something went wrong, restart the connection */
        // if (bytes != imgsize) {
        //     fprintf(stderr, "Connection closed.\n");
        //     close(clientsock);
        //
        //     if ((clientsock = accept(serversock, NULL, NULL)) == -1) {
        //         quit("accept() failed", 1);
        //     }
        // }

        /* have we terminated yet? */
        pthread_testcancel();

        /* no, take a rest for a while */
        usleep(1000);
    }
}
/**
 * this function provides a way to exit nicely from the system
 */
void quit(char* msg, int retval)
{
    /* use "%s" so a message containing '%' cannot be misread as a format string */
    if (retval == 0) {
        fprintf(stdout, "%s\n", (msg == NULL ? "" : msg));
    } else {
        fprintf(stderr, "%s\n", (msg == NULL ? "" : msg));
    }
    if (clientsock) close(clientsock);
    if (serversock) close(serversock);
    if (capture) cvReleaseCapture(&capture);
    if (img1) cvReleaseImage(&img1);
    pthread_mutex_destroy(&mutex);
    exit(retval);
}
In the sendto() call, you reference imgsize, which is initialized to img1->imageSize.
But I don't see where img1->imageSize is set, and it appears that imgsize is never updated.
So first check that the imgsize value being passed to sendto() is correct.
Then check that it is not too large:
UDP/IP datagrams have a hard payload limit of 65,507 bytes. However, an IPv4 network is not required to support more than 548 bytes of payload. (576 is the minimum IPv4 MTU size, less 28 bytes of UDP/IP overhead). Most networks have an MTU of 1500, giving you a nominal payload of 1472 bytes.
Most networks allow you to exceed the MTU by breaking the datagram into IP fragments, which the receiving OS must reassemble. This is invisible to your application: recvfrom() either gets the whole reassembled packet or it gets nothing. But the odds of getting nothing go up with fragmentation, because the loss of any single fragment causes the entire packet to be lost. In addition, some routers and operating systems have obscure security rules which will block some UDP patterns or fragments of certain sizes.
Finally, any given network may enforce a maximum datagram size even with fragmentation, and this is often much less than 65507 bytes.
Since you are dealing with a specific network, you will need to experiment to see how big you can reliably go.
UDP/IP at Wikipedia
IPv4 at Wikipedia
Are you absolutely sure that you don't try to send more than the UDP limit, which is around 65,500 bytes? From my experience you shouldn't even send more than the Ethernet frame limit, which is around 1500 bytes, if you want the best UDP reliability.
I think that right now you are trying to send much more data than that, in the form of a stream. UDP isn't a stream protocol and you can't simply swap it in for TCP. It is of course possible to use UDP to send a video stream to a multicast group, but you need some protocol on top of UDP that handles the message size limit of UDP. In the real world, the RTP protocol on top of UDP is used for this kind of task.
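To make the idea concrete, here is a minimal sketch of such a protocol: each frame is split into chunks that fit comfortably into one Ethernet MTU, and a small invented header (frame id, chunk index, chunk count) lets the receiver reassemble frames and detect loss. Real systems use RTP (RFC 3550) instead of a homegrown header:

#include <stdint.h>
#include <string.h>
#include <netinet/in.h>
#include <sys/socket.h>

#define CHUNK_PAYLOAD 1400  /* stays safely under a 1500-byte MTU */

struct chunk_hdr {
    uint16_t frame_id;   /* which frame this chunk belongs to */
    uint16_t chunk_idx;  /* position of this chunk within the frame */
    uint16_t chunk_cnt;  /* total number of chunks in the frame */
};

static int send_frame(int sock, const struct sockaddr_in *dst,
                      const char *data, int len, uint16_t frame_id)
{
    char pkt[sizeof(struct chunk_hdr) + CHUNK_PAYLOAD];
    uint16_t cnt = (uint16_t)((len + CHUNK_PAYLOAD - 1) / CHUNK_PAYLOAD);

    for (uint16_t i = 0; i < cnt; i++) {
        int n = (i == cnt - 1) ? len - i * CHUNK_PAYLOAD : CHUNK_PAYLOAD;
        struct chunk_hdr hdr = { htons(frame_id), htons(i), htons(cnt) };
        memcpy(pkt, &hdr, sizeof hdr);
        memcpy(pkt + sizeof hdr, data + i * CHUNK_PAYLOAD, (size_t)n);
        /* each datagram is now well below the EMSGSIZE threshold */
        if (sendto(sock, pkt, sizeof hdr + (size_t)n, 0,
                   (const struct sockaddr *)dst, sizeof *dst) == -1)
            return -1;
    }
    return 0;
}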
I currently have a client app that works, but it is single-threaded.
My packets look like this: <len_of_data>|<data>
"|" is used as a separator for my data.
<len_of_data> is always 4 digits long, followed by the separator.
<data> looks like: |<transaction id>|<command>|<buflen>|<buf>|<checksum>|
My code to create the packets is:
_snprintf_s(data_buffer, WS_MAX_DATA_PACKET_SIZE,
            WS_MAX_DATA_PACKET_SIZE - 1,
            "%s%d%s%d%s%d%s%s%s%d%s",
            WS_PACKET_SEP, pkt->transaction_id,
            WS_PACKET_SEP, pkt->command,
            WS_PACKET_SEP, pkt->bufsize,
            WS_PACKET_SEP, pkt->buf,
            WS_PACKET_SEP, pkt->checksum, WS_PACKET_SEP);

buf_len = strlen(data_buffer);

_snprintf_s(send_buffer, WS_MAX_DATA_PACKET_SIZE,
            WS_MAX_DATA_PACKET_SIZE - 1, "%04d%s%s",
            buf_len, WS_PACKET_SEP, data_buffer);

buf_len = strlen(send_buffer);

// Send buffer
bytes_sent = send(ConnectSocket, send_buffer, buf_len, 0);
The client thread sends a command to the server, then calls a GetIncomingPackets() function. In GetIncomingPackets(), I call recv() to get 5 bytes; this should be the length of the rest of the packet. I parse these 5 bytes and verify that they match my expected format, then convert the first 4 bytes to an integer, x. Then I call recv() again to get x more bytes, and parse those out into my packet structure.
The problem happens when I add another thread to do the same thing (send and receive commands).
I start my app, fire 2 threads and have them send different commands and wait for responses. When the threads call GetIncomingPackets(), the data I am getting back is invalid. The first 5 bytes I am expecting are sometimes missing and I just get the following 5 bytes, so I am unable to get my <len_of_data> packet.
I even added a critical section block around the 2 recv() calls in my GetIncomingPackets() so the threads don't interrupt each other while reading a full packet.
Without some extra code for error checking, this is how the function looks:
#define WS_SIZE_OF_LEN_PACKET 5

bool GetIncomingPackets(SOCKET sd, dev_sim_packet_t *pkt )
{
    char len_str_buf[WS_SIZE_OF_LEN_PACKET + 1] = {0}; // + 1 for NULL char
    char data_buf[WS_MAX_DATA_PACKET_SIZE + 1] = {0};
    int ret = 0;
    int data_len = 0;
    int nReadBytes = 0; // declared here; the original snippet relied on an outer declaration

    EnterCriticalSection( &recv_critical_section );
    nReadBytes = WS_RecvAll(sd, len_str_buf, WS_SIZE_OF_LEN_PACKET );
    ret = WS_VerifyLenPacket(len_str_buf);

    // Convert data packet length string received to int
    data_len = WS_ConvertNumberFromString(len_str_buf, WS_SIZE_OF_LEN_PACKET );

    // Get data from packet
    nReadBytes = WS_RecvAll(sd, data_buf, data_len);
    LeaveCriticalSection( &recv_critical_section );

    ret = ParseMessager(data_buf, data_len, pkt);
    return ret == 0; // missing in the original; assumes ParseMessager() returns 0 on success
}
My question is: what could be causing this problem, and how can I fix it? Or are there better ways to do what I am trying to do? The reason I'm making it multi-threaded is that my app will communicate with 2 other sources, and I want a thread to handle each request that comes in from either source.
Thanks in advance, and feel free to ask questions if I didn't explain something well.
Here's the code for WS_RecvAll(). The buffer is a static buffer declared in GetIncomingPackets() like this:
char data_buf[WS_MAX_DATA_PACKET_SIZE + 1] = {0}; // + 1 for NULL char
int WS_RecvAll(SOCKET socket_handle, char* buffer, int size)
{
    int ret = 0;
    int read = 0;
    int i = 0;
    char err_buf[100] = {0};

    while(size)
    {
        ret = recv(socket_handle, &buffer[read], size, 0);
        if (ret == SOCKET_ERROR)
        {
            printf("***ERROR***: recv failed, error = %d\n", WSAGetLastError());
            return WS_ERROR_RECV_FAILED;
        }
        if (ret == 0) {
            break;
        }
        read += ret;
        size -= ret;
    }
    return read;
}
It's very difficult to debug MT problems, particularly at one remove, but if you are using a static buffer, shouldn't:
LeaveCriticalSection( &recv_critical_section );
ret = ParseMessager(data_buf, data_len, pkt);
be:
ret = ParseMessager(data_buf, data_len, pkt);
LeaveCriticalSection( &recv_critical_section );
And why use a static buffer in any case?
I'm curious to know whether you have used the same socket descriptor in both threads to connect to the server.
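If they do share one descriptor, one thread can consume the reply meant for the other even with the critical section in place, because nothing ties a response to the thread that sent the request. A sketch of the one-connection-per-thread alternative (connect_to_server(), SendCommandPacket(), SERVER_ADDR and SERVER_PORT are hypothetical stand-ins for your existing connect and send code):

static unsigned __stdcall worker_thread(void *arg)
{
    const char *command = (const char *)arg;
    dev_sim_packet_t pkt;

    /* each worker gets its own connection, so replies cannot interleave */
    SOCKET sd = connect_to_server(SERVER_ADDR, SERVER_PORT);
    SendCommandPacket(sd, command);
    GetIncomingPackets(sd, &pkt); /* this reply can only belong to this thread */
    closesocket(sd);
    return 0;
}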
I'm trying to forge an IGMPv2 Membership Request packet and send it on a RAW socket.
RFC 3376 states:
IGMP messages are encapsulated in IPv4 datagrams, with an IP protocol number of 2. Every IGMP message described in this document is sent with an IP Time-to-Live of 1, IP Precedence of Internetwork Control (e.g., Type of Service 0xc0), and carries an IP Router Alert option [RFC-2113] in its IP header
So the IP_ROUTER_ALERT flag must be set.
I'm trying to forge only the bare minimum of the packet (i.e. only the IGMP header & payload), so I'm using setsockopt to edit the IP options.
Some useful variables:
#define C_IP_MULTICAST_TTL 1
#define C_IP_ROUTER_ALERT 1
int sockfd = 0;
int ecsockopt = 0;
int bytes_num = 0;
int ip_multicast_ttl = C_IP_MULTICAST_TTL;
int ip_router_alert = C_IP_ROUTER_ALERT;
Here's how I open the RAW socket:
sock_domain = AF_INET;
sock_type = SOCK_RAW;
sock_proto = IPPROTO_IGMP;

if ((ecsockopt = socket(sock_domain, sock_type, sock_proto)) < 0) {
    printf("Error %d: Can't open socket.\n", errno);
    return 1;
} else {
    printf("** Socket opened.\n");
}
sockfd = ecsockopt;
Then I set the TTL and Router Alert option:
// Set the sent packets' TTL
if((ecsockopt = setsockopt(sockfd, IPPROTO_IP, IP_MULTICAST_TTL, &ip_multicast_ttl, sizeof(ip_multicast_ttl))) < 0) {
    printf("Error %d: Can't set TTL.\n", ecsockopt);
    return 1;
} else {
    printf("** TTL set.\n");
}

// Set the Router Alert
if((ecsockopt = setsockopt(sockfd, IPPROTO_IP, IP_ROUTER_ALERT, &ip_router_alert, sizeof(ip_router_alert))) < 0) {
    printf("Error %d: Can't set Router Alert.\n", ecsockopt);
    return 1;
} else {
    printf("** Router Alert set.\n");
}
The setsockopt for IP_ROUTER_ALERT returns 0. After forging the packet, I send it with sendto this way:
// Send the packet
if((bytes_num = sendto(sockfd, packet, packet_size, 0, (struct sockaddr*) &mgroup1_addr, sizeof(mgroup1_addr))) < 0) {
    printf("Error %d: Can't send Membership report message.\n", bytes_num);
    return 1;
} else {
    printf("** Membership report message sent. (bytes=%d)\n", bytes_num);
}
The packet is sent, but the IP Router Alert option (checked with Wireshark) is missing.
Am I doing something wrong? Is there some other method to set the IP_ROUTER_ALERT option?
Thanks in advance.
Finally I've found out that IP_ROUTER_ALERT has to be set by the Linux kernel. IGMP membership reports are sent after an IP_ADD_MEMBERSHIP is done, and the kernel takes charge of setting the IP Router Alert option.
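In other words, a minimal sketch of the approach that works: join the group with IP_ADD_MEMBERSHIP on an ordinary socket, and the kernel emits the IGMP membership report (Router Alert option included) on its own:

#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int join_group(const char *group_ip)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0)
        return -1;

    struct ip_mreq mreq;
    memset(&mreq, 0, sizeof(mreq));
    inet_pton(AF_INET, group_ip, &mreq.imr_multiaddr);
    mreq.imr_interface.s_addr = htonl(INADDR_ANY); /* let the kernel pick the interface */

    /* the kernel sends the IGMP membership report (with Router Alert) here */
    if (setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq)) < 0) {
        close(fd);
        return -1;
    }
    return fd; /* keep it open: closing the socket triggers an IGMP leave */
}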
I do not know why your code is not working (it looks fine to me), but I can suggest a workaround: drop one more layer down on your raw socket and build Ethernet frames yourself. You may also want to take a look at Libnet, which handles building packets like this for you.
The documentation states:
Pass all to-be forwarded packets with the IP Router Alert option set to this socket. Only valid for raw sockets. This is useful, for instance, for user space RSVP daemons. The tapped packets are not forwarded by the kernel, it is the user's responsibility to send them out again. Socket binding is ignored, such packets are only filtered by protocol. Expects an integer flag.
This sounds as if the option only matters when receiving packets on the socket, not when sending them. If you're sending raw packets, can't you just set the required option in the IP header yourself?
As a reference I would recommend one of the many IGMP aware programs out there.
One example is igmpproxy:
https://github.com/ViToni/igmpproxy/blob/logging/src/igmp.c#L54
/*
 * Open and initialize the igmp socket, and fill in the non-changing
 * IP header fields in the output packet buffer.
 */
void initIgmp(void) {
    struct ip *ip;

    recv_buf = malloc(RECV_BUF_SIZE);
    send_buf = malloc(RECV_BUF_SIZE);

    k_hdr_include(true);             /* include IP header when sending */
    k_set_rcvbuf(256*1024, 48*1024); /* lots of input buffering */
    k_set_ttl(1);                    /* restrict multicasts to one hop */
    k_set_loop(false);               /* disable multicast loopback */

    ip = (struct ip *)send_buf;
    memset(ip, 0, sizeof(struct ip));
    /*
     * Fields zeroed that aren't filled in later:
     * - IP ID (let the kernel fill it in)
     * - Offset (we don't send fragments)
     * - Checksum (let the kernel fill it in)
     */
    ip->ip_v   = IPVERSION;
    ip->ip_hl  = (sizeof(struct ip) + 4) >> 2; /* +4 for Router Alert option */
    ip->ip_tos = 0xc0;   /* Internet Control */
    ip->ip_ttl = MAXTTL; /* applies to unicasts only */
    ip->ip_p   = IPPROTO_IGMP;

    allhosts_group   = htonl(INADDR_ALLHOSTS_GROUP);
    allrouters_group = htonl(INADDR_ALLRTRS_GROUP);
    alligmp3_group   = htonl(INADDR_ALLIGMPV3_GROUP);
}
and https://github.com/ViToni/igmpproxy/blob/logging/src/igmp.c#L271
/*
 * Construct an IGMP message in the output packet buffer. The caller may
 * have already placed data in that buffer, of length 'datalen'.
 */
static void buildIgmp(uint32_t src, uint32_t dst, int type, int code, uint32_t group, int datalen) {
    struct ip *ip;
    struct igmp *igmp;
    extern int curttl;

    ip = (struct ip *)send_buf;
    ip->ip_src.s_addr = src;
    ip->ip_dst.s_addr = dst;
    ip_set_len(ip, IP_HEADER_RAOPT_LEN + IGMP_MINLEN + datalen);

    if (IN_MULTICAST(ntohl(dst))) {
        ip->ip_ttl = curttl;
    } else {
        ip->ip_ttl = MAXTTL;
    }

    /* Add Router Alert option */
    ((unsigned char*)send_buf + MIN_IP_HEADER_LEN)[0] = IPOPT_RA;
    ((unsigned char*)send_buf + MIN_IP_HEADER_LEN)[1] = 0x04;
    ((unsigned char*)send_buf + MIN_IP_HEADER_LEN)[2] = 0x00;
    ((unsigned char*)send_buf + MIN_IP_HEADER_LEN)[3] = 0x00;

    igmp = (struct igmp *)(send_buf + IP_HEADER_RAOPT_LEN);
    igmp->igmp_type         = type;
    igmp->igmp_code         = code;
    igmp->igmp_group.s_addr = group;
    igmp->igmp_cksum        = 0;
    igmp->igmp_cksum        = inetChksum((unsigned short *)igmp,
                                         IP_HEADER_RAOPT_LEN + datalen);
}