I have a setup that looks like this:
Target ---- Switch ---- Switch ---- Windows computer
|
Linux computer
So I have a target connected to a switch, and it sends out UDP packets for debugging purposes. Normally these packets go to a Windows computer for analysis, and this works. I have now added a Linux computer as well. To get the same data to both the Linux and the Windows machine I have set up a managed switch to mirror the traffic, and this works fine when I look in Wireshark. I have then written a simple C application for analysing the data on the Linux computer, but this software only works if Wireshark is running at the same time. Otherwise it does not receive any data from the target. Why is this?
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <linux/if_ether.h>

#define BUFFER_SIZE 65536   /* large enough for any Ethernet frame */

void processPacket(unsigned char *buffer);   /* defined elsewhere */

int main()
{
    socklen_t saddr_size;
    int data_size;
    struct sockaddr saddr;
    unsigned char *buffer = (unsigned char *) malloc(BUFFER_SIZE);

    printf("Starting...\n");

    /* Raw packet socket, all protocols */
    int sock_raw = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (sock_raw < 0)
    {
        perror("Socket Error");
        return 1;
    }
    while (1)
    {
        saddr_size = sizeof saddr;
        data_size = recvfrom(sock_raw, buffer, BUFFER_SIZE, 0, &saddr, &saddr_size);
        if (data_size < 0)
        {
            perror("Recvfrom error, failed to get packets");
            return 1;
        }
        processPacket(buffer);
    }
    close(sock_raw);
    printf("Finished");
    return 0;
}
The data coming from the target is sent in a format similar to RTP and is addressed to the Windows computer.
So to sum up: why do I not receive any data from the target in my C application without Wireshark running?
Same as here, you need to put the interface (not socket as I originally posted) into promiscuous mode. Wireshark does that, which is why your code works when Wireshark is running.
Just a guess: promiscuous mode is not turned on and the ethernet controller is discarding frames not addressed to it.
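As a rough illustration (not part of the original answer), promiscuous mode can also be requested from the application itself via the PACKET_ADD_MEMBERSHIP / PACKET_MR_PROMISC socket option described in packet(7); the interface name "eth0" below is an assumption, since the question does not name the capture interface.

#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/if_packet.h>

/* Sketch: put the interface into promiscuous mode for the lifetime of
 * this packet socket. The interface name is an assumption. */
static int enable_promisc(int sock_raw, const char *ifname)
{
    struct ifreq ifr;
    struct packet_mreq mreq;

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
    if (ioctl(sock_raw, SIOCGIFINDEX, &ifr) < 0) {
        perror("SIOCGIFINDEX");
        return -1;
    }

    memset(&mreq, 0, sizeof(mreq));
    mreq.mr_ifindex = ifr.ifr_ifindex;
    mreq.mr_type = PACKET_MR_PROMISC;
    if (setsockopt(sock_raw, SOL_PACKET, PACKET_ADD_MEMBERSHIP, &mreq, sizeof(mreq)) < 0) {
        perror("PACKET_ADD_MEMBERSHIP");
        return -1;
    }
    return 0;
}

For a quick test, the same effect can be had from the shell with ip link set eth0 promisc on.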
I am working on a live video stream over WiFi project.
We use UDP to send datagrams with the sendto() function from the sockets API.
Each datagram is 1420 bytes, and they are sent continuously with a 250 µs delay between each.
Now, while sending the data from Ubuntu (running over Windows) to an iPad, for some reason the sendto() function stalls and the stream is slow and unstable (I get a stream rate of about 1.28 mb/sec).
This phenomenon only happens when the iPad is connected to the network; if it is not connected we get a flow rate of about 4 mb/sec.
It is strange to me, since I was under the impression that the UDP protocol would keep on going and not be affected by the receiver side.
To make things even stranger, when I run the same code on an iMac from the terminal I get an amazing stream rate with perfect video on the other side...
The router is 2.4 GHz 802.11n with a 20 MHz channel.
Any ideas?
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

static int server_socket;            /* shared between init and send */
static struct sockaddr_in base;

void init_udp_send_socket(char *server_ip, int server_port) {
    if ((server_socket = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP)) == -1)
    {
        perror("Failed to open socket");
        exit(2);
    }
    memset((char *) &base, 0, sizeof(base));
    base.sin_family = AF_INET;
    base.sin_port = htons(server_port);
    if (inet_aton(server_ip, &base.sin_addr) == 0)
    {
        perror("Bad base IP address\n");
        exit(1);
    }
}

void send_udp_datagram(char *blob, int size) {
    printf("size is: %d\n", size);
    if (sendto(server_socket, blob, size, 0, (struct sockaddr *) &base, sizeof(base)) == -1)
    {
        perror("Failed sending datagram\n");
    }
}
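For completeness, a minimal driver loop matching the description above (1420-byte datagrams paced at roughly 250 µs) is sketched below. It relies on the two functions from the question; the destination address, port and dummy payload are assumptions for illustration, not values from the original post.

#include <unistd.h>

/* Hypothetical pacing loop: send 1420-byte datagrams every ~250 µs. */
int main(void)
{
    static char chunk[1420];                     /* dummy payload */

    init_udp_send_socket("192.168.1.50", 5004);  /* assumed iPad address/port */
    for (;;) {
        send_udp_datagram(chunk, sizeof(chunk));
        usleep(250);                             /* ~250 µs between datagrams */
    }
    return 0;
}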
I am working on a custom embedded Linux system that needs to read and write messages on a CAN bus. SocketCAN is being used to accomplish this.
The CAN interface can0 is brought up on boot with a baudrate set to 500 kbps. I am using CANoe, cangen, and candump to test reception and transmission of messages. When CANoe is set to send messages to the embedded system, candump has no problem reading these messages on the embedded system. When cangen is set to send messages, CANoe has no problem reading the messages from the embedded system.
I wrote a small program to read messages from the can0 interface using the read() function. When the read() function is called to read a single CAN message, the function blocks and then never returns. I am certain that the CAN interface is receiving data since the number of received bytes reported by ifconfig increases as expected. Running candump concurrently with my program also shows that the interface is receiving CAN messages from the bus. Below is the relevant code for opening and reading the CAN interface. Error checking has been omitted.
Opening the socket:
int socketNum = 0;
char interface[10] = "can0";
struct sockaddr_can addr;
struct ifreq ifr;
memset(&addr, 0, sizeof(addr));
memset(&ifr, 0, sizeof(ifr));
socketNum = socket(PF_CAN, SOCK_RAW, CAN_RAW);
addr.can_family = AF_CAN;
strncpy(ifr.ifr_name, interface, sizeof(interface));
ioctl(socketNum, SIOCGIFINDEX, &ifr);
addr.can_ifindex = ifr.ifr_ifindex;
bind(socketNum, (struct sockaddr *)&addr, sizeof(addr));
Reading the socket:
struct can_frame frame;
int nbytes = 0;
memset(&frame, 0, sizeof(frame));
/* Never returns despite interface receiving messages */
nbytes = read(socketNum, &frame, sizeof(frame));
Am I missing something in my code or doing something wrong? Has anyone else encountered this issue and found a solution?
I have found a work-around for my issue.
The embedded platform I am working on uses an i.MX 8 and an NXP driver for the FLEXCAN IP. My device tree is set up with the disable-fd-mode option. Even though FD mode should be disabled, I am required to "enable" FD mode with setsockopt:
canfd_enabled = 1;
error_code = setsockopt(socketNum, SOL_CAN_RAW, CAN_RAW_FD_FRAMES, &canfd_enabled, sizeof(int));
After adding these lines of code I can read and write from the socket as expected. I also read and write up to sizeof(canfd_frame) bytes instead of sizeof(can_frame) bytes. It is likely there is something wrong with the FLEXCAN driver. In my experience, this is not unusual for NXP drivers.
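For illustration only, reading on a socket with CAN_RAW_FD_FRAMES enabled might look like the sketch below. It reuses socketNum from the question and tells classic frames from FD frames by the number of bytes returned (CAN_MTU vs. CANFD_MTU).

#include <stdio.h>
#include <unistd.h>
#include <linux/can.h>
#include <linux/can/raw.h>

/* Sketch: read one frame from a CAN_RAW_FD_FRAMES-enabled socket. */
static void read_one_frame(int socketNum)
{
    struct canfd_frame frame;   /* large enough for classic and FD frames */
    int nbytes = read(socketNum, &frame, sizeof(frame));

    if (nbytes == CANFD_MTU)
        printf("CAN FD frame, id=0x%X, len=%u\n", frame.can_id, (unsigned) frame.len);
    else if (nbytes == CAN_MTU)
        /* Classic frame: the two layouts overlap, so can_id and len are still valid. */
        printf("classic CAN frame, id=0x%X, len=%u\n", frame.can_id, (unsigned) frame.len);
    else
        perror("read");
}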
I also had this problem, using a Colibri iMX6
The lines of code which worked for me were:
int canfd_enabled = 1;
int error_code;
error_code = setsockopt(s, SOL_CAN_RAW, CAN_RAW_FD_FRAMES, &canfd_enabled, sizeof(int));
nbytes = read(s, &frame, sizeof(struct can_frame));
Thanks Dschumanji!
I need to write a program in C using raw sockets that acts as a proxy between two hosts.
I've written some code for it (and set some iptables rules to change the destination address of packets to the proxy's interface), where I receive a packet, print the data in the packet, and then send the packet on to the receiver.
It works with my simple client/server programs on raw sockets, but when I try to establish a connection through the proxy it doesn't work.
Do you have any ideas on how I can write this program without using the kernel?
#include <unistd.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <netinet/ip.h>
#include <netinet/tcp.h>

#define PCKT_LEN 8192

int main(void){
    int s;
    char buffer[PCKT_LEN];
    struct sockaddr saddr;
    struct sockaddr_in daddr;

    memset(buffer, 0, PCKT_LEN);

    s = socket(AF_INET, SOCK_RAW, IPPROTO_TCP);
    if(s < 0){
        printf("socket() error");
        return -1;
    }

    int saddr_size = sizeof(saddr);
    int header_size = sizeof(struct iphdr) + sizeof(struct tcphdr);
    unsigned int count;

    daddr.sin_family = AF_INET;
    daddr.sin_port = htons(1234);
    daddr.sin_addr.s_addr = inet_addr("2.2.2.1");

    while(1){
        if(recvfrom(s, buffer, PCKT_LEN, 0, &saddr, &saddr_size) < 0){
            printf("recvfrom() error");
            return -1;
        }
        else{
            int i = header_size;
            for(; i < PCKT_LEN; i++)
                printf("%c", buffer[i]);
            if (sendto(s, buffer, PCKT_LEN, 0, &daddr, &saddr_size) < 0){
                printf("sendto() error");
                return -1;
            }
        }
    }
    close(s);
    return 0;
}
(Your code has serious bugs. For example, the last argument to sendto(2) should not be a pointer. I'll assume it's not the real code and that the real code compiles without warnings.)
With the nagging out of the way, I think one problem is that you're accidentally including an extra IP header in the packets you send. raw(7) has the following:
The IPv4 layer generates an IP header when sending a packet unless the IP_HDRINCL socket option is enabled on the socket. When it is enabled, the packet must contain an IP header. For receiving the IP header is always included in the packet.
IP_HDRINCL is not enabled by default unless protocol is IPPROTO_RAW (see a bit further down in raw(7)), meaning it's disabled in your case. (I also checked with getsockopt(2).)
You will have to either enable IP_HDRINCL using setsockopt(2) to tell the kernel that you're supplying the header yourself, or not include the header in sendto().
It's better to look at the IHL field in the IP header than assume it has fixed size by the way. The IP header could include options.
There could be other issues as well depending on what you're trying to do, and details might vary for IPv6.
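As a rough sketch of those two points (illustrative only, the helper names are my own): enabling IP_HDRINCL and reading the real header length from the IHL field could look like this:

#include <netinet/ip.h>
#include <sys/socket.h>

/* Sketch: tell the kernel that we supply the IP header ourselves. */
static int enable_hdrincl(int s)
{
    int on = 1;
    return setsockopt(s, IPPROTO_IP, IP_HDRINCL, &on, sizeof(on));
}

/* Sketch: the real IP header length comes from the IHL field (in 32-bit
 * words), since the header may carry options. */
static int ip_header_len(const char *buffer)
{
    const struct iphdr *iph = (const struct iphdr *) buffer;
    return iph->ihl * 4;
}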
Whatever you are doing I don't think using raw sockets is the way. Those are used for network debugging only.
First of all, observe that you are basically copying content from an existing, established connection rather than tunnelling it. You are not doing what was proposed.
If you want to capture connections to a given server:port, for instance, 2.2.2.1:1234, into your application so that you can tunnel it through a proxy, you can use iptables.
iptables -t nat -A OUTPUT -p tcp -d 2.2.2.1 --dport 1234 -j REDIRECT
Create an application bound to ip 0.0.0.0 listening to TCP port 1234 and every connection attempt to 2.2.2.1:1234 will connect to your application instead, and you can do whatever you please with it.
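A minimal listener for that setup might look like the sketch below; it just accepts the redirected connections on port 1234 and is an illustration, not part of the original answer.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* Sketch: accept connections redirected by the iptables REDIRECT rule. */
int main(void)
{
    int ls = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);   /* 0.0.0.0 */
    addr.sin_port = htons(1234);

    if (bind(ls, (struct sockaddr *) &addr, sizeof(addr)) < 0 || listen(ls, 16) < 0) {
        perror("bind/listen");
        return 1;
    }
    for (;;) {
        int c = accept(ls, NULL, NULL);
        if (c < 0)
            continue;
        /* Inspect, log or tunnel the connection here, then close it. */
        close(c);
    }
}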
I wish to send UDP multicast packets to the loopback address and receive them in another application. All tests were done on Fedora Core 17 Linux.
The idea is to receive a video stream via RTSP/HTTP or any other network protocol and multicast it on the loopback address so that I can use VLC to play the stream using the multicast address. Leaving aside other bitrate and controlled-multicast issues, I tried to read one video file and multicast it on the loopback device. But when I tried to play it in VLC it didn't work. I'm able to see the packets being transmitted in Wireshark, but the source IP is taken from my default network interface (i.e. the interface that is my default gateway).
I have already tried the following commands:
sudo ifconfig lo multicast
sudo ip route add 239.252.10.10 dev lo
Any suggestion in this regard would be very helpful.
Test program code pasted below
#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#define MULTICAST_ADDRESS "239.252.10.10"
#define UDP_PORT 1234
#define INTERFACE_IP "127.0.0.1"
#define MTU 1474
#define DATA_BUFFER_SIZE (1024*1024)
static int socket_init(char *intf_ip) {
    int sd;
    struct in_addr localInterface;

    sd = socket(AF_INET, SOCK_DGRAM, 0);
    if (sd < 0) {
        perror("Opening datagram socket error");
        return -1;
    }
    else
        printf("Opening the datagram socket...OK.\n");

    localInterface.s_addr = inet_addr(intf_ip);
    if (setsockopt(sd, IPPROTO_IP, IP_MULTICAST_IF, (char *) &localInterface, sizeof(localInterface)) < 0) {
        perror("Setting local interface error");
        close(sd);
        return -1;
    }
    else
        printf("Setting the local interface...OK\n");

#if 1
    char loopch = 1;
    if (setsockopt(sd, IPPROTO_IP, IP_MULTICAST_LOOP, (char *) &loopch, sizeof(loopch)) < 0) {
        perror("Setting IP_MULTICAST_LOOP error");
        close(sd);
        return -1;
    }
    else
        printf("Enabling the loopback...OK.\n");
#endif
    return sd;
}
static int transmit_packet(int sd, char *databuf, int size, char *ip, unsigned short port){
    struct sockaddr_in groupSock;
    int len, datalen, rc;

    memset((char *) &groupSock, 0, sizeof(groupSock));
    groupSock.sin_family = AF_INET;
    groupSock.sin_addr.s_addr = inet_addr(ip);
    groupSock.sin_port = htons(port);

    len = 0;
    while(len < size){
        /* Send at most one MTU-sized chunk, advancing through the buffer. */
        datalen = MTU;
        if(size - len < MTU)
            datalen = size - len;
        rc = sendto(sd, databuf + len, datalen, 0, (struct sockaddr *) &groupSock, sizeof(groupSock));
        if(rc < 0){
            perror("Sending datagram message error");
            return -1;
        }
        usleep(10000);
        len += rc;
    }
    return len;
}
static int transmit_file(char *filepath, char *dstip, char *srcip, unsigned short port) {
    FILE *fp;
    int sd, rc;
    char *databuf;

    fp = fopen(filepath, "r");
    if (!fp) {
        printf("transmit_file : no such file or directory %s \n", filepath);
        return -1;
    }
    sd = socket_init(srcip);
    if (sd < 0) {
        printf("Socket initialization failed \n");
        fclose(fp);
        return -1;
    }
    databuf = (char *) malloc(sizeof(char) * DATA_BUFFER_SIZE);
    if (!databuf) {
        printf("Unable to allocate databuf\n");
        close(sd);
        fclose(fp);
        return -1;
    }
    while (!feof(fp)) {
        rc = fread(databuf, 1, DATA_BUFFER_SIZE, fp);
        if (rc <= 0) {
            printf("read failed or EOF reached\n");
            break;
        }
        if (transmit_packet(sd, databuf, rc, dstip, port) < 0)
            printf("Transmit failed\n");
    }
    close(sd);
    fclose(fp);
    free(databuf);
    return 0;
}
int main(int argc, char *argv[]) {
    if (argc != 3) {
        printf("%s <filename> <ip>\n", argv[0]);
        return -1;
    }
    transmit_file(argv[1], argv[2], INTERFACE_IP, UDP_PORT);
    return 0;
}
You can use multicast on the loopback interface, but you have to add a route, because by default the OS sends multicast out of the default external interface. Multicast may also be disabled on the loopback interface by default. On Linux you can change both with these commands:
route add -net 224.0.0.0 netmask 240.0.0.0 dev lo
ifconfig lo multicast
Binding or routing to the loopback device is necessary if you do not want IP multicast traffic (such as IGMP messages) to be sent across the network. However, this is typically only necessary if there are other computers on the network that may interfere by using the same multicast group.
The real problem is having programs on the same host receive multicast data sent by each other (or, equivalently, having sockets within a single program receive multicast data sent by each other), when they're both configured to use the same multicast group.
This is quite a common question with many StackOverflow questions on it, but they are often misunderstood or poorly worded. It is difficult to search for this problem specifically with regards to operating system behavior or standardization.
On the hardware level, multicast traffic is treated like broadcast traffic in that it is not routed back to the physical port it was sent from in order to prevent link level loops. This means that the operating system is responsible for forwarding traffic to other programs or sockets on the same host that joined a multicast group, since it won't be read from the interface.
This is configured by the standard IP_MULTICAST_LOOP option, which is best summarized by the IP Multicast MSDN article (archived):
Currently, most IP multicast implementations use a set of socket options proposed by Steve Deering to the Internet Engineering Task Force (IETF). Five operations are thus made available:
[...]
IP_MULTICAST_LOOP—Controls loopback of multicast traffic.
[...]
The Winsock version of the IP_MULTICAST_LOOP option is semantically different than the UNIX version of the IP_MULTICAST_LOOP option:
In Winsock, the IP_MULTICAST_LOOP option applies only to the receive path.
In the UNIX version, the IP_MULTICAST_LOOP option applies to the send path.
For example, applications ON and OFF (which are easier to [keep track of] than X and Y) join the same group on the same interface; application ON sets the IP_MULTICAST_LOOP option on, application OFF sets the IP_MULTICAST_LOOP option off. If ON and OFF are Winsock applications, OFF can send to ON, but ON cannot send to OFF. In contrast, if ON and OFF are UNIX applications, ON can send to OFF, but OFF cannot send to ON.
From what I have read, this setting may be disabled by default on Windows and enabled by default on Linux, but I haven't tested it myself.
As an important side note, the IP_MULTICAST_LOOP option is entirely different from the IPV6_MULTICAST_LOOP option, referring to the Linux ip(7) and ipv6(7) man pages:
IP_MULTICAST_LOOP (since Linux 1.2)
Set or read a boolean integer argument that determines whether sent multicast packets should be looped back to the local sockets.
IPV6_MULTICAST_LOOP
Control whether the socket sees multicast packets that it has [sent] itself. Argument is a pointer to boolean.
IP_MULTICAST_LOOP allows IP multicast traffic to be received on different sockets on the same host it was sent from. IPV6_MULTICAST_LOOP allows IPv6 multicast traffic to be received on the same socket it was sent from -- something which is not typically possible with IPv4.
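To make the receiving side concrete, a same-host receiver might be set up roughly as below; the group and port mirror the question's values, everything else is an assumption for illustration, and whether loopback delivery actually happens still depends on the routing and IP_MULTICAST_LOOP points discussed above.

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* Sketch: join 239.252.10.10 on the loopback interface and receive datagrams. */
int main(void)
{
    int sd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in local;
    struct ip_mreq mreq;
    char buf[2048];

    memset(&local, 0, sizeof(local));
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    local.sin_port = htons(1234);
    if (bind(sd, (struct sockaddr *) &local, sizeof(local)) < 0) {
        perror("bind");
        return 1;
    }

    mreq.imr_multiaddr.s_addr = inet_addr("239.252.10.10");
    mreq.imr_interface.s_addr = inet_addr("127.0.0.1");
    if (setsockopt(sd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq)) < 0) {
        perror("IP_ADD_MEMBERSHIP");
        return 1;
    }

    for (;;) {
        ssize_t n = recv(sd, buf, sizeof(buf), 0);
        if (n > 0)
            printf("got %zd bytes\n", n);
    }
}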
If anyone has references to official standards about the intended behavior of implementations (RFCs, IEEE POSIX standards, etc.), please post them in the comments or edit this answer.
I wish to send UDP multicast packets to the loopback address
Stop right there. You can't do that. It's impossible. You can only send multicasts to multicast addresses. Your code doesn't do any multicasting, just sending to 127.0.0.1.
If you're only sending to the localhost, why are you using multicast at all? Do you have multiple listening processes?
the src ip is taken from my default network interface (i.e. the interface which is my default gateway)
Very likely, as you haven't bound your socket. What did you expect?
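If the goal is for the packets to carry 127.0.0.1 as the source address, one option (a sketch, not from the original answer) is to bind the sending socket to the loopback address before sending:

#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* Sketch: bind the sending socket so its source address is the loopback
 * address rather than the address of the default-route interface. */
static int bind_to_loopback(int sd)
{
    struct sockaddr_in src;

    memset(&src, 0, sizeof(src));
    src.sin_family = AF_INET;
    src.sin_addr.s_addr = inet_addr("127.0.0.1");
    src.sin_port = 0;                /* let the kernel pick a source port */
    return bind(sd, (struct sockaddr *) &src, sizeof(src));
}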
What is the right (portable, stable) way to get the ToS byte of a received packet? I'm doing UDP with recvmsg() and on linux I can get the ToS if I setsockopt() IP_RECVTOS/IPV6_RECVTCLASS, but IP_RECVTOS doesn't seem to be available on my BSD systems. What is the right way to do this?
I primarily want this to work on the BSDs and Solaris.
Edit:
To clarify:
I currently use recvmsg(), where I get the TTL and TOS in the msg_control field on Linux, but in order to get TTL and TOS I need to setsockopt()-enable IP_RECVTTL and IP_RECVTOS. And since Solaris and BSD (I'm working with FreeBSD at the moment) don't have IP_RECVTOS, from what I can see I don't get a TOS CMSG when looping over the CMSG data.
I tried enabling IP_RECVOPTS and IP_RECVRETOPTS, but I still don't get any IP_TOS type CMSG.
Edit 2:
I want the ToS so I can verify (as much as possible) that it wasn't overwritten in transit. If, for example, a VoIP app all of a sudden notices that it's not getting EF-tagged packets, then something is wrong and there should be an alarm. (And no, I'm not expecting EF to be respected or preserved over the public Internet.)
I want the TTL basically just because I can. Hypothetically this could be used to trigger "something changed in the network between me and the other side" alerts, which can be useful to know if something stops working at the same time.
I was thinking you could create two sockets:
One socket of type DGRAM used exclusively for sending.
One raw socket used exclusively for receiving.
Since you are using UDP, you can call bind() + recvfrom() on the raw socket fd and then manually unpack the IP header to determine the TOS or TTL.
When you want to send, use the DGRAM fd so you don't have to build the UDP and IP headers yourself.
There may be issues, e.g. the kernel may pass the received buffer to both sockets, to the UDP socket instead of the raw socket, or just to the raw socket. If that is the case (or if it is implementation dependent) then we are back to square one. However, you can try calling bind on the raw socket and see if it helps. I am aware this may be a hack, but searching the net for a suitable setsockopt for BSD returned nothing.
EDIT: I wrote a sample program.
It kind of achieves the objective.
The code below creates two sockets (one raw and one UDP). The UDP socket is bound to the actual port on which I am expecting to receive data, whereas the raw socket is bound to port 0. I tested this on Linux and, as I expected, any data for port 2905 is received by both sockets. I am, however, able to retrieve the TTL and TOS values. Don't downvote for the quality of the code; I am just experimenting to see whether it will work.
Further EDIT: Disabled receiving on the UDP socket.
I have further enhanced the code to disable receiving on the UDP socket. Using setsockopt, I set the UDP socket's receive buffer to 0. This ensures the kernel does not pass the packet to the UDP socket. IMHO, you can now use the UDP socket exclusively for sending and the raw socket for reading. This should work for you on BSD and Solaris as well.
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/select.h>
#include <netinet/in.h>
#include <netinet/ip.h>
#include <arpa/inet.h>
#include <string.h>
#include "protHeaders.x"
#include "gen.h"

int main(void)
{
    S32 rawSockFd;
    S32 udpSockFd;
    struct sockaddr_in rsin;
    struct sockaddr_in usin;
    S32 one = 1;
    const S32 *val = &one;
    struct timeval tv;
    fd_set rfds;
    S32 maxFd;
    S16 ret;
    S8 rawBuffer[2048];
    S8 udpBuffer[2048];
    struct sockaddr udpFrom, rawFrom;
    socklen_t rLen, uLen;

    memset(rawBuffer, 0, sizeof(rawBuffer));
    memset(udpBuffer, 0, sizeof(udpBuffer));
    memset(&udpFrom, 0, sizeof(udpFrom));
    memset(&rawFrom, 0, sizeof(rawFrom));
    memset(&rsin, 0, sizeof(rsin));
    memset(&usin, 0, sizeof(usin));

    if ((rawSockFd = socket(PF_INET, SOCK_RAW, IPPROTO_UDP)) < 0)
    {
        perror("socket:create");
        RETVALUE(RFAILED);
    }
    /* doing the IP_HDRINCL call */
    if (setsockopt(rawSockFd, IPPROTO_IP, IP_HDRINCL, val, sizeof(one)) < 0)
    {
        perror("Server:setsockopt");
        RETVALUE(RFAILED);
    }

    rsin.sin_family = AF_INET;
    rsin.sin_addr.s_addr = htonl(INADDR_ANY);
    rsin.sin_port = htons(0);

    usin.sin_family = AF_INET;
    usin.sin_addr.s_addr = htonl(INADDR_ANY);
    usin.sin_port = htons(2905);

    if (bind(rawSockFd, (struct sockaddr *)&rsin, sizeof(rsin)) < 0)
    {
        perror("Server: bind failed");
        RETVALUE(RFAILED);
    }
    if ((udpSockFd = socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP)) < 0)
    {
        perror("socket:create");
        RETVALUE(RFAILED);
    }
    if (bind(udpSockFd, (struct sockaddr *)&usin, sizeof(usin)) < 0)
    {
        perror("Server: bind failed on udpsocket");
        RETVALUE(RFAILED);
    }
    /* set udp socket receive buffer to 0 */
    one = 0;
    if (setsockopt(udpSockFd, SOL_SOCKET, SO_RCVBUF, (char *)&one, sizeof(one)) < 0)
    {
        perror("Server:setsockopt on udpsocket failed");
        RETVALUE(RFAILED);
    }

    tv.tv_sec = 0;
    tv.tv_usec = 0;   /* zero timeout: select() polls in a busy loop */
    maxFd = (rawSockFd > udpSockFd) ? rawSockFd : udpSockFd;

    while (1)
    {
        FD_ZERO(&rfds);
        FD_SET(rawSockFd, &rfds);
        FD_SET(udpSockFd, &rfds);

        ret = select(maxFd + 1, &rfds, 0, 0, &tv);
        if (ret == -1)
        {
            perror("Select Failed");
            RETVALUE(RFAILED);
        }
        if (FD_ISSET(rawSockFd, &rfds))
        {
            printf("Raw Socket Received Message\n");
            rLen = sizeof(rawFrom);
            if (recvfrom(rawSockFd, rawBuffer, sizeof(rawBuffer), 0, &rawFrom, &rLen) == -1)
            {
                perror("Raw socket recvfrom failed");
                RETVALUE(RFAILED);
            }
            /* print the tos and ttl straight out of the IP header */
            printf("TOS:%x\n", (unsigned char) rawBuffer[1]);
            printf("TTL:%x\n", (unsigned char) rawBuffer[8]);
        }
        if (FD_ISSET(udpSockFd, &rfds))
        {
            printf("UDP Socket Received Message\n");
            uLen = sizeof(udpFrom);
            if (recvfrom(udpSockFd, udpBuffer, sizeof(udpBuffer), 0, &udpFrom, &uLen) == -1)
            {
                perror("Udp socket recvfrom failed");
                RETVALUE(RFAILED);
            }
            printf("%s\n", udpBuffer);
        }
    }
    RETVALUE(ROK);
}
The "proper" and standard solution is probably to use cmsg(3). You'll find a complete description in Stevens' "Unix network programming" book, a must-read.
Google Code Search found me this example of use.
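On Linux (and on any system that does provide IP_RECVTOS), the cmsg-based retrieval the question describes looks roughly like the sketch below; it is illustrative only and does not solve the BSD/Solaris portability gap.

#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <netinet/in.h>

/* Sketch: receive one datagram and pull the TOS byte out of the ancillary
 * data. IP_RECVTOS must have been enabled on the socket beforehand, e.g.:
 *   int on = 1;
 *   setsockopt(fd, IPPROTO_IP, IP_RECVTOS, &on, sizeof(on));
 */
static ssize_t recv_with_tos(int fd, char *buf, size_t len, unsigned char *tos_out)
{
    char cbuf[256];
    struct iovec iov = { .iov_base = buf, .iov_len = len };
    struct msghdr msg;

    memset(&msg, 0, sizeof(msg));
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cbuf;
    msg.msg_controllen = sizeof(cbuf);

    ssize_t n = recvmsg(fd, &msg, 0);
    if (n < 0)
        return n;

    for (struct cmsghdr *c = CMSG_FIRSTHDR(&msg); c != NULL; c = CMSG_NXTHDR(&msg, c)) {
        if (c->cmsg_level == IPPROTO_IP && c->cmsg_type == IP_TOS)
            *tos_out = *(unsigned char *) CMSG_DATA(c);
    }
    return n;
}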
My understanding is that, firstly, BSD does not support IP_RECVTOS-like functionality and, secondly, BSD raw sockets do not support reception of UDP or TCP packets. However, there are two other ways of doing this: by using the /dev/bpf interface, either directly or via libpcap, or by using DIVERT sockets, which allow diversion of specified traffic flows to userland.
Has anyone actually tested the code above on a BSD box? (it may work on Solaris...)
On Linux this approach will work, but as mentioned it is also possible (and more convenient) to use setsockopt() with IP_TOS on the outgoing socket to set the outgoing TOS byte, and setsockopt() with IP_RECVTOS on the incoming socket together with recvmsg() to retrieve the TOS byte.
Unfortunately this sort of thing usually varies across different *ixes. On Solaris you want to use getsockopt with IP_TOS; I don't know about BSD.
See man 7 ip for details.