How to modify IP header options for SYN/ACK packets? (C)

To produce packets with an extended IP header, a setsockopt() call can be made with level SOL_IP and option IP_OPTIONS:
int ipoption = 0xbaadf00d;   /* dummy 4-byte option payload */
int sockfd = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP);
setsockopt(sockfd, SOL_IP, IP_OPTIONS, &ipoption, sizeof ipoption);
After this, when connecting, the TCP stack produces packets with the expected extended header.
The problem is how to do the same for a server socket: I want a TCP server socket that answers a connect with a SYN/ACK carrying a specific IP header extension. But the same setsockopt() on the server socket has no effect, no matter when I call it: before listen(), before accept(), and so on. Is it possible to apply an IP option to a server socket without switching to raw sockets?
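For reference, a minimal sketch of the failing server side (the port and backlog are made up, error handling omitted):
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

int srvfd = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP);
int ipoption = 0xbaadf00d;              /* same dummy option payload as above */

struct sockaddr_in addr;
memset(&addr, 0, sizeof addr);
addr.sin_family = AF_INET;
addr.sin_addr.s_addr = htonl(INADDR_ANY);
addr.sin_port = htons(8080);            /* illustrative port */
bind(srvfd, (struct sockaddr *) &addr, sizeof addr);

/* Tried both before listen() and before accept();
   neither changes the SYN/ACK the stack emits: */
setsockopt(srvfd, SOL_IP, IP_OPTIONS, &ipoption, sizeof ipoption);
listen(srvfd, 16);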

Related

How to properly prepare a DTLS server in OpenSSL 1.1.1

I am trying to get a DTLS "connection" going using OpenSSL 1.1.1.
I constantly get an SSL_ERROR_SYSCALL when trying to run DTLSv1_listen() on the socket.
I use a single AF_INET, SOCK_DGRAM UDP socket to receive all incoming data. I assumed I could leave it at that and OpenSSL would take care of determining the sender whenever a datagram is received, but I am starting to think I am mistaken.
I have: (error handling omitted for brevity)
SSL_CTX *ctx = SSL_CTX_new(DTLS_server_method());
SSL_CTX_use_certificate_file(ctx, "certs/server-cert.pem", SSL_FILETYPE_PEM);
SSL_CTX_use_PrivateKey_file(ctx, "certs/server-key.pem", SSL_FILETYPE_PEM);
SSL_CTX_set_cookie_generate_cb(ctx, generate_cookie);
SSL_CTX_set_cookie_verify_cb(ctx, &verify_cookie);
int fd = socket(AF_INET, SOCK_DGRAM, 0);
int on = 1;
setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, (const void *) &on, (socklen_t) sizeof(on));
bind(fd, (const struct sockaddr *) &server_addr, sizeof(struct sockaddr_in));
SSL *ssl = SSL_new(ctx);
SSL_set_fd(ssl, fd);
SSL_set_accept_state(ssl);
while(DTLSv1_listen(ssl, (BIO_ADDR *) BIO_get_conn_address(SSL_get_rbio(ssl))) <= 0)
...
As I mentioned, that last line gives me an SSL_ERROR_SYSCALL.
errno gives me 0.
I suspect I'm missing some steps in the CTX configuration but I'm not sure what.
I've been looking through some examples and one in particular caught my eye. It seems to create a new socket whenever it receives a datagram and does a connect() on that socket to the remote address. This seems a bit ridiculous to me, as I don't think UDP requires a socket per client, AFAIK.
Regarding connect() with UDP:
In BSD sockets one can do a connect() on a UDP socket, but this
basically just sets the default destination address for send() (instead
of giving it explicitly to sendto()).
So success may depend on which send function is used.
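For illustration, a minimal sketch (the address and port are made up); connect() here only pins the default peer, no packet is sent:
#include <arpa/inet.h>
#include <string.h>
#include <sys/socket.h>

int fd = socket(AF_INET, SOCK_DGRAM, 0);

struct sockaddr_in peer;
memset(&peer, 0, sizeof peer);
peer.sin_family = AF_INET;
peer.sin_port = htons(4433);                  /* illustrative DTLS port */
inet_pton(AF_INET, "192.0.2.1", &peer.sin_addr);

/* Sets the default destination (and the only peer recv() will accept). */
connect(fd, (struct sockaddr *) &peer, sizeof peer);

send(fd, "hello", 5, 0);       /* now equivalent to sendto() with &peer */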
Generally you can use one UDP socket for DTLS communication with many peers. That requires some mapping between the association (keys/sequence numbers) and the other peer's address, but since pretty much the same is needed on the server side anyway, it should not be impossible. However, a lot of TLS-derived implementations don't enable such UDP-specific features, so you may be forced to use separate sockets.
One pitfall will be left anyway: if you want to use SNI (Server Name Indication) to access the same physical server under different DNS names from the same peer, you will fail.

Receiving multicast UDP packets from a single network interface on macOS

This is a macOS question. I am trying to set up a UDP socket that receives SSDP messages, i.e. UDP packets sent to multicast addresses. I want to restrict receiving these packets to a single network interface.
I tried
int fd = socket(AF_INET, SOCK_DGRAM, 0);
char* itf = "en0";
int res = setsockopt(fd, SOL_SOCKET, IP_RECVIF, itf, strlen(itf));
The setsockopt call fails with errno 42 (Protocol not available).
I have also found SO_BINDTODEVICE that can be used for the same purpose, but it seems that this is not available on macOS.
Using bind() with the interface's port and address does not work either; then no packets sent to the multicast address are received on that socket.
From the OSX documentation on IP multicast...
A host must become a member of a multicast group before it can receive datagrams sent to the group. To join a multicast group, use the IP_ADD_MEMBERSHIP option...
To receive multicast traffic on a specific interface you need to tell the OS that you want to join that multicast group. Follow these steps (you were almost there)...
1. Create a datagram socket (done).
2. Bind to INADDR_ANY with the expected port.
3. Join the multicast group via setsockopt() with the IP_ADD_MEMBERSHIP option. Here you can pass the IP address of the specific network interface you wish to receive multicast traffic on in the ip_mreq struct.
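A minimal sketch of those steps; the interface address 192.168.1.10 (standing in for en0's IPv4 address) is an assumption, and error handling is omitted:
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

int fd = socket(AF_INET, SOCK_DGRAM, 0);

/* Step 2: bind to INADDR_ANY and the SSDP port. */
struct sockaddr_in addr;
memset(&addr, 0, sizeof addr);
addr.sin_family = AF_INET;
addr.sin_addr.s_addr = htonl(INADDR_ANY);
addr.sin_port = htons(1900);                   /* SSDP port */
bind(fd, (struct sockaddr *) &addr, sizeof addr);

/* Step 3: join the SSDP group on one specific interface,
   selected by its unicast IPv4 address. */
struct ip_mreq mreq;
mreq.imr_multiaddr.s_addr = inet_addr("239.255.255.250");  /* SSDP group */
mreq.imr_interface.s_addr = inet_addr("192.168.1.10");     /* en0's address */
setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof mreq);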

Raw Sockets in C

1.
socket(AF_INET, SOCK_RAW, IPPROTO_RAW);
I have read what the Linux manual page says about this code, and I have some questions.
If the IP_HDRINCL socket option is set, I can build the IP header myself. Am I right?
If so, does the socket above also let me build the TCP header?
And if IP_HDRINCL is not set, what does the socket above mean?
2.
socket(AF_INET, SOCK_RAW, IPPROTO_TCP);
socket(AF_INET, SOCK_RAW, IPPROTO_UDP);
What do these calls mean compared to the code in question 1?
I know IPPROTO_RAW can't receive any IP packets, and that these sockets can only receive TCP packets and UDP packets respectively. (Can I see the IP header, and the Ethernet header, as well?)
But how about sending? I don't know exactly how that works.
IP_HDRINCL means: I want my data (for send and recv) to include the IP header. And if your data includes the IP header, that means the TCP header follows (just after the IP header), and finally the application's message too (the message you normally give to send()). Without IP_HDRINCL, you have access to the application data only.
Yes, IPPROTO_TCP and IPPROTO_UDP with SOCK_RAW are just filters, as you say, for both sending and receiving. Use IPPROTO_RAW to be able to send any TCP/IP packet (no filter). But to also receive packets, you need to change AF_INET into AF_PACKET.
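A minimal sketch of sending with a hand-built IPv4 header (the addresses are made up; no TCP header or payload is actually attached, so this is purely illustrative):
#include <arpa/inet.h>
#include <netinet/ip.h>
#include <string.h>
#include <sys/socket.h>

int fd = socket(AF_INET, SOCK_RAW, IPPROTO_RAW);
/* IPPROTO_RAW implies IP_HDRINCL (see raw(7)); this socket is send-only. */

char pkt[sizeof(struct iphdr)];
struct iphdr *ip = (struct iphdr *) pkt;
memset(pkt, 0, sizeof pkt);
ip->version  = 4;
ip->ihl      = 5;                        /* 20-byte header, no options */
ip->ttl      = 64;
ip->protocol = IPPROTO_TCP;              /* a TCP header would follow here */
ip->saddr    = inet_addr("192.0.2.1");   /* example addresses */
ip->daddr    = inet_addr("192.0.2.2");
/* tot_len and check are left zero: on Linux the kernel fills them in. */

struct sockaddr_in dst;
memset(&dst, 0, sizeof dst);
dst.sin_family = AF_INET;
dst.sin_addr.s_addr = ip->daddr;
sendto(fd, pkt, sizeof pkt, 0, (struct sockaddr *) &dst, sizeof dst);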

Why is the IP_HDRINCL socket option so important for a socket in an ICMP request?

I am new to socket programming.
I saw an ICMP request program in which setsockopt() was used on the socket:
int on = 1;
setsockopt(s, IPPROTO_IP, IP_HDRINCL, &on, sizeof(on));
But even if I do not use this statement, the program runs correctly. Why is it so
important to tell the kernel that this socket includes the IP structure?
The IP_HDRINCL option does the following (from the man page):
The IPv4 layer generates an IP header when sending a packet unless the IP_HDRINCL socket option is enabled on the socket. When it is enabled, the packet must contain an IP header. For receiving the IP header is always included in the packet.
Presumably your program is constructing an IP header. If you remove this option, the kernel's IP header will be used instead. Whether that 'works' or not depends on what your program does: perhaps under some circumstances it wants to customise the IP header, and with the option removed that will not work.
If you post the rest of the program or tell us a bit about it, we might be able to help.
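For contrast, a minimal sketch of an ICMP echo request without IP_HDRINCL, where the kernel generates the IP header itself (the destination address and echo id are made up; needs root, error handling omitted):
#include <arpa/inet.h>
#include <netinet/ip_icmp.h>
#include <string.h>
#include <sys/socket.h>

/* Standard one's-complement checksum over the ICMP message. */
static unsigned short cksum(const unsigned short *p, int len)
{
    unsigned long sum = 0;
    while (len > 1) { sum += *p++; len -= 2; }
    if (len) sum += *(const unsigned char *) p;
    sum = (sum >> 16) + (sum & 0xffff);
    sum += (sum >> 16);
    return (unsigned short) ~sum;
}

int s = socket(AF_INET, SOCK_RAW, IPPROTO_ICMP);   /* no IP_HDRINCL */

struct icmphdr icmp;
memset(&icmp, 0, sizeof icmp);
icmp.type = ICMP_ECHO;                             /* echo request */
icmp.un.echo.id = htons(1234);                     /* made-up id */
icmp.un.echo.sequence = htons(1);
icmp.checksum = cksum((unsigned short *) &icmp, sizeof icmp);

struct sockaddr_in dst;
memset(&dst, 0, sizeof dst);
dst.sin_family = AF_INET;
dst.sin_addr.s_addr = inet_addr("192.0.2.1");      /* made-up destination */

/* Only the ICMP message is handed to the kernel;
   the IP header is generated for us. */
sendto(s, &icmp, sizeof icmp, 0, (struct sockaddr *) &dst, sizeof dst);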

Strange Linux socket protocols behaviour

I'm a little confused about the difference between the definitions of protocols on Linux when using socket(). I am attempting to listen for connections over TCP using socket(PF_INET, SOCK_STREAM, proto), where proto is (in my mind) disputed, or at least seems odd.
From <netinet/in.h>:
...
IPPROTO_IP = 0, /* Dummy protocol for TCP. */
...
IPPROTO_TCP = 6, /* Transmission Control Protocol. */
...
This agrees with /etc/protocols:
ip 0 IP # internet protocol, pseudo protocol number
hopopt 0 HOPOPT # hop-by-hop options for ipv6
...
tcp 6 TCP # transmission control protocol
...
I learned from an online tutorial, and also from the man page tcp(7) that you initialise a TCP socket using
tcp_socket = socket(AF_INET, SOCK_STREAM, 0);
which works absolutely fine, and certainly is a TCP socket. One thing about using the above arguments to initialise a socket is that the code
struct timeval timeout = {1, 0};
setsockopt(tcp_socket, 0, SO_RCVTIMEO, &timeout, sizeof(timeout)); // 1s timeout
// Exactly the same for SO_SNDTIMEO here
works absolutely fine, but it stops working after replacing all the protocol arguments (including in socket()) with IPPROTO_TCP instead of the IPPROTO_IP they have above.
So after experimenting with the difference, I've needed to ask a few searching questions:
Why, when I replace all protocol arguments with IPPROTO_TCP, do I get error 92 ("Protocol not available") when setting timeouts, when protocol 0 is apparently just a 'dummy' protocol for TCP?
Why does socket() require the information of whether it should be a stream, datagram or raw socket when that information is (always?) implicitly known from the protocol, and vice versa? (i.e. TCP is a stream protocol, UDP is a datagram protocol, ...)
What could be meant by "dummy TCP"?
What is hopopt, and why does it have the same protocol number as 'ip'?
Many thanks.
Giving 0 as protocol to socket just means that you want to use the default protocol for the family/socktype pair. In this case that is TCP, and thus you get the same result as with IPPROTO_TCP.
Your error is in the setsockopt call. The correct one would be
setsockopt(tcp_socket, SOL_SOCKET, SO_RCVTIMEO, &timeout, sizeof(timeout)); // 1s timeout
The 0 there is not a protocol, but an option level. IPPROTO_TCP is another option level, but you can't combine it with SO_RCVTIMEO; that option can only be used together with SOL_SOCKET.
The options you use with level IPPROTO_TCP are the ones listed in tcp(7), e.g. TCP_NODELAY.
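To make the pairing concrete, a short illustrative snippet (tcp_socket as above; <netinet/tcp.h> is needed for TCP_NODELAY):
struct timeval timeout = {1, 0};
int nodelay = 1;

/* SO_* options belong to the SOL_SOCKET level: */
setsockopt(tcp_socket, SOL_SOCKET, SO_RCVTIMEO, &timeout, sizeof(timeout));

/* TCP_* options belong to the IPPROTO_TCP level: */
setsockopt(tcp_socket, IPPROTO_TCP, TCP_NODELAY, &nodelay, sizeof(nodelay));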
socket(AF_INET, SOCK_STREAM, IPPROTO_TCP); should work fine.
Passing 0 as the protocol just means: give me the default, which on every system is TCP for stream sockets and UDP for datagram sockets when dealing with IP. But socket() can be used for many other things besides handing you a TCP or UDP socket.
socket() is quite general in nature. socket(AF_INET, SOCK_STREAM, 0); just reads as: "give me a streaming socket within the IP protocol family". Passing 0 means you have no preference over the protocol, though TCP is the obvious choice for any system. But theoretically, it could have given you e.g. an SCTP socket.
Whether you want datagram or streaming sockets is not implicit in the protocol. There are many protocols besides the IP-based ones, and some can be used in either datagram or streaming mode, such as SCCP used in SS7 networks.
For IP-based protocols, SCTP can be used in a datagram-based or streaming fashion, so a hypothetical socket(AF_INET, IPPROTO_SCTP) call, giving only the protocol, would be ambiguous. And for datagram sockets there are other choices as well: UDP, DCCP, UDP-Lite.
socket(AF_INET, SOCK_SEQPACKET, 0); is another interesting choice. It cannot return a TCP socket, since TCP is not packet based. It cannot return a UDP socket, since UDP gives no guarantee of sequential delivery. But an SCTP socket would do, if the system supports it.
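To actually get SCTP you have to name it explicitly; a sketch, assuming the system provides SCTP support (e.g. the Linux sctp module):
#include <netinet/in.h>
#include <sys/socket.h>

/* One-to-one, stream-oriented SCTP socket: */
int s1 = socket(AF_INET, SOCK_STREAM, IPPROTO_SCTP);

/* One-to-many, message-oriented SCTP socket: */
int s2 = socket(AF_INET, SOCK_SEQPACKET, IPPROTO_SCTP);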
I have no explanation for why someone wrote the "dummy protocol for TCP" comment in the Linux netinet/in.h.
hopopt is the IPv6 hop-by-hop option. In IPv6, the protocol discriminator field is also used as an extension mechanism. An IPv4 packet has a protocol field that acts as the protocol discriminator; it is set to IPPROTO_TCP if the IPv4 datagram carries TCP. If that IPv4 packet also carries additional info (options), those are coded by other mechanisms.
IPv6 does this differently: if there is an extension (option), that extension is coded in the protocol field. So if an IPv6 packet needs the hop-by-hop option, IPPROTO_HOPOPTS is placed in the protocol field. The hop-by-hop option itself also has a protocol discriminator, which signals what the next protocol is, which might be IPPROTO_TCP, or yet another option.
