Changing an OpenSSL BIO from blocking to non-blocking mode (C)

I have a multithreaded C application that makes heavy use of OpenSSL. It was designed around the assumption that all of its SSL connections block; specifically, it uses blocking BIOs. They are all created from a single incoming port like this:
ssl = SSL_new(ctx);
SSL_set_mode(ssl, SSL_MODE_AUTO_RETRY);    /* retry I/O transparently across renegotiations */
sock = BIO_new_socket(socket, BIO_CLOSE);  /* the BIO owns the fd and closes it when freed */
SSL_set_bio(ssl, sock, sock);              /* same BIO for both reading and writing */
As it turns out, though, there are a few small parts of the codebase where non-blocking BIOs would be the better choice. Those parts have no way of knowing in advance which SSL connections will belong to them, so they always receive blocking BIOs.
The question is, can the blocking BIOs be changed to be non-blocking?
I know that BIO_set_nbio can be used to make a BIO non-blocking but the documentation says:
The call to BIO_set_nbio() should be made before the connection is established because non blocking I/O is set during the connect process.
Another option I have considered would be to copy the BIO and recreate it as non-blocking, while somehow preserving all of its state.

I did non-blocking SSL connections in my own "lion" code, but I did not use OpenSSL's BIO functionality at all.
Rather, I went for the calls
SSL_set_fd(ssl, fd) and SSL_get_fd(ssl), handling my own fd_sets and calling select() myself.
The biggest 'gotcha' that took a while to track down was that I had to set SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER and SSL_MODE_ENABLE_PARTIAL_WRITE for it to work the way I wanted.
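As a rough sketch of that approach, assuming ctx is an initialized SSL_CTX* and sock is an already-connected TCP socket (make_nonblocking_ssl is a hypothetical helper name, not something from lion):

#include <fcntl.h>
#include <openssl/ssl.h>

/* Hypothetical helper: wrap an existing connected socket in a
 * non-blocking SSL, bypassing the BIO layer entirely. */
static SSL *make_nonblocking_ssl(SSL_CTX *ctx, int sock)
{
    /* Put the underlying socket itself into non-blocking mode. */
    fcntl(sock, F_SETFL, fcntl(sock, F_GETFL, 0) | O_NONBLOCK);

    SSL *ssl = SSL_new(ctx);
    SSL_set_fd(ssl, sock);

    /* The 'gotcha' modes: without these, a retried SSL_write() must be
     * handed the exact same buffer address and length, which rarely
     * suits a select() loop. */
    SSL_set_mode(ssl, SSL_MODE_ENABLE_PARTIAL_WRITE |
                      SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER);
    return ssl;
}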
If you want to read the SSL part of the code, it is here:
https://github.com/lundman/lion/blob/master/src/tls.c

Related

Drop an open TCP connection without sending RST

Looking into nginx: ignore some requests without proper Host header got me thinking that it's not actually possible to close(2) a TCP connection without the OS properly terminating the underlying TCP connection by sending an RST (and/or FIN) to the other end.
One workaround would be to use something like tcpdrop(8). However, as can be seen from usr.sbin/tcpdrop/tcpdrop.c on OpenBSD and FreeBSD, it's implemented through a sysctl-based interface and may have portability issues outside the BSDs. (In fact, even the sysctl-based implementation may differ enough between OpenBSD and FreeBSD to require a porting layer: OpenBSD uses the tcp_ident_mapping structure (which in turn contains two sockaddr_storage elements, plus some other info), whereas FreeBSD, DragonFly and NetBSD use an array of two sockaddr_storage elements directly.)
It also turns out that OpenBSD's tcpdrop does appear to send the R packet, as seen with tcpdump(8); this can be confirmed by looking at /sys/netinet/tcp_subr.c :: tcp_drop(), which calls tcp_close() in the end (and tcp_close() is confirmed to send RST elsewhere on SO). So it appears that it wouldn't work here either.
If I'm establishing the connection myself through C, is there a way to subsequently drop it without an acknowledgement to the other side, e.g., without initiating RST?
No. Even if there were, if the peer subsequently sent anything it would be answered by an RST.
NB Normal TCP termination uses a FIN, not an RST.
Cheating an attacker in this way could be a good idea. Of course, in this case you are already reserving server resources for the established connection. As the most basic approach, you can use netfilter to drop any outgoing TCP segment that has the RST or FIN flag set. These iptables rules could serve as an example:
sudo iptables -A OUTPUT -p tcp --tcp-flags RST RST -j DROP
sudo iptables -A OUTPUT -p tcp --tcp-flags FIN FIN -j DROP
Of course, these rules will affect all your TCP connections. I wrote them just to give you a lead on how it can be done. Go to https://www.netfilter.org/ for more ideas to build your own solution on; basically, you should be able to do the same for selected connections only.
Because of how TCP works, if you're able to implement this, the client (or attacker) will keep the connection open for a long time. To understand the effect it has on a client, read here: TCP, recv function hanging despite KEEPALIVE, where I provide the results of a test in which the other side doesn't return any TCP segment (not even an ACK). In my configuration, it takes 13 minutes for the socket to enter an error state (this depends on Linux parameters like tcp_retries1 and tcp_retries2).
Just consider that a DoS attack will usually involve connections from thousands of different devices, not necessarily many connections from the same device, and that this is very easy to detect and block in the firewall. So it's very improbable that you are going to cause resource exhaustion on the client. Also, this solution will not work against half-open connection attacks.

Reading multiple UDP messages without polling

I would like to use the recvmmsg call to read multiple UDP messages from ONE single socket at once. I'm reading data from a single multicast group.
When I read TCP data, I usually use poll/select with a non-blocking socket (and a timeout) to be notified when data is ready to be read. I follow this approach as I am aware of the issue of spurious wakeup and potential troubles of having a blocking socket.
As my application must be very quick, if I follow the same approach with recvmmsg I will introduce an extra system call (poll/select) that might slow down the execution.
So my two questions are the following:
With UDP, can I safely read from BLOCKING sockets using recvmmsg without poll/select or do I have to apply the same principle I've used for TCP (non-blocking+poll)?
Suppose I have a huge amount of multicast traffic, would you go for non-blocking socket + recvmmsg only (no poll) and burn a lot of CPU?
I am using Linux: CentOS 7 and Oracle Linux.
You can always use blocking mode, with both TCP and UDP sockets.
If you want to impose a read timeout there is setsockopt() with the SO_RCVTIMEO option.
I follow this approach as I am aware of the issue of spurious wakeup
What spurious wakeup? Never seen it in 25 years of network programming.
and potential troubles of having a blocking socket.
Never heard of those either.
Using select() and non-blocking mode with a single socket is pointless unless your platform doesn't support SO_RCVTIMEO. It's an extra system call, for a start.
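A minimal sketch of that suggestion, assuming Linux (recvmmsg is Linux-specific); read_batch is a hypothetical helper that drains up to VLEN datagrams from a blocking UDP socket, with SO_RCVTIMEO providing the timeout:

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/uio.h>

#define VLEN  8
#define BUFSZ 1500

static int read_batch(int fd)
{
    /* Give up after one second instead of blocking forever. */
    struct timeval tv = { .tv_sec = 1, .tv_usec = 0 };
    if (setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv)) < 0)
        return -1;

    static char bufs[VLEN][BUFSZ];
    struct iovec iov[VLEN];
    struct mmsghdr msgs[VLEN];
    memset(msgs, 0, sizeof(msgs));
    for (int i = 0; i < VLEN; i++) {
        iov[i].iov_base = bufs[i];
        iov[i].iov_len  = BUFSZ;
        msgs[i].msg_hdr.msg_iov    = &iov[i];
        msgs[i].msg_hdr.msg_iovlen = 1;
    }

    /* Blocks until at least one datagram arrives (or the timeout
     * fires), then returns as many as are queued, up to VLEN. */
    int n = recvmmsg(fd, msgs, VLEN, 0, NULL);
    if (n < 0)
        return -1;  /* errno is EAGAIN/EWOULDBLOCK on timeout */
    for (int i = 0; i < n; i++)
        printf("datagram %d: %u bytes\n", i, msgs[i].msg_len);
    return n;
}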
The choice between blocking and non-blocking depends on the final purpose of the application.
- Say it's just a sample chat application showing the usage of UDP combined with TCP; then you can use either.
- But if you are planning to make this module part of a heavily used application with lots of data flowing, then creating multiple threads/processes to handle different tasks will probably come in handy. The parent thread will wait for the message but spawn a child thread to process it, making the parent available for the next message.
But in a nutshell, I don't see any issue with your first option of using a blocking socket without poll/select for a UDP application, considering it's just for homework purposes.

How to use SO_KEEPALIVE option properly to detect that the client at the other end is down?

I was trying to learn the usage of option SO_KEEPALIVE in socket programming in C language under Linux environment.
I created a server socket and used my browser to connect to it. It was successful and I was able to read the GET request, but I got stuck on the usage of SO_KEEPALIVE.
I checked this link keepalive_description#tldg.org but I could not find any example which shows how to use it.
As soon as I detect the client's request in accept(), I set the SO_KEEPALIVE option to 1 on the client socket. Now I don't know how to check whether the client is down, how to change the time interval between the probes sent, etc.
I mean, how will I get the signal that the client is down? (Without reading from or writing to the client; I thought I would get some signal when probes are not replied to by the client.) And how should I program it after setting the SO_KEEPALIVE option?
Also, suppose the probes are sent every 3 seconds and the client goes down in between; then I will not get to know that the client is down, and I may get SIGPIPE.
Anyway, most importantly, I want to know how to use SO_KEEPALIVE in the code.
To modify the number of probes or the probe intervals, you write values to the /proc filesystem like
echo 600 > /proc/sys/net/ipv4/tcp_keepalive_time
echo 60 > /proc/sys/net/ipv4/tcp_keepalive_intvl
echo 20 > /proc/sys/net/ipv4/tcp_keepalive_probes
Note that these values are global for all keepalive-enabled sockets on the system. You can also override these settings on a per-socket basis via setsockopt(); see section 4.2 of the document you linked.
You can't "check" the status of the socket from userspace with keepalive. Instead, the kernel is simply more aggressive about forcing the remote end to acknowledge packets, and determining if the socket has gone bad. When you attempt to write to the socket, you will get a SIGPIPE if keepalive has determined remote end is down.
The way you find out is the same whether or not you enable SO_KEEPALIVE: typically the socket is reported as ready and you then get an error when you read from it.
You can set the keepalive timeout on a per-socket basis under Linux (this may be a Linux-specific feature). I'd recommend this rather than changing the system-wide setting. See the tcp(7) man page for more info.
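A minimal sketch of that per-socket approach, assuming Linux (TCP_KEEPIDLE, TCP_KEEPINTVL and TCP_KEEPCNT are Linux-specific; enable_keepalive is a hypothetical helper name):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

static int enable_keepalive(int fd)
{
    int on = 1, idle = 600, intvl = 60, cnt = 20;

    if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0)
        return -1;
    /* Seconds of idle time before the first probe is sent. */
    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle)) < 0)
        return -1;
    /* Seconds between successive unanswered probes. */
    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl)) < 0)
        return -1;
    /* Unanswered probes before the connection is declared dead. */
    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt)) < 0)
        return -1;
    return 0;
}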
Finally, if your client is a web browser, it's quite likely that it will close the socket fairly quickly anyway: most of them will only hold keepalive (HTTP 1.1) connections open for a relatively short time (30s, 1 min, etc.). Of course, if the client machine has disappeared or the network is down (which is what SO_KEEPALIVE is really useful for detecting), then it won't be able to actively close the socket.
As already discussed, SO_KEEPALIVE makes the kernel more aggressive about continually verifying the connection even when you're not doing anything, but does not change or enhance the way the information is delivered to you. You'll find out when you try to actually do something (for example "write"), and you'll find out right away since the kernel is now just reporting the status of a previously set flag, rather than having to wait a few seconds (or much longer in some cases) for network activity to fail. The exact same code logic you had for handling the "other side went away unexpectedly" condition will still be used; what changes is the timing (not the method).
Virtually every "practical" sockets program in some way provides non-blocking access to the sockets during the data phase (maybe with select()/poll(), or maybe with fcntl()/O_NONBLOCK/EINPROGRESS&EWOULDBLOCK, or if your kernel supports it maybe with MSG_DONTWAIT). Assuming this is already done for other reasons, it's trivial (sometimes requiring no code at all) to in addition find out right away about a connection dropping. But if the data phase does not already somehow provide non-blocking access to the sockets, you won't find out about the connection dropping until the next time you try to do something.
(A TCP socket connection without some sort of non-blocking behaviour during the data phase is notoriously fragile: if the wrong packet encounters a network problem, it's very easy for the program to "hang" indefinitely, and there's not a whole lot you can do about it.)
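To illustrate the idea, here is a minimal sketch assuming fd is a connected TCP socket with SO_KEEPALIVE enabled (check_socket is a hypothetical helper name):

#include <poll.h>
#include <unistd.h>

static int check_socket(int fd)
{
    struct pollfd p = { .fd = fd, .events = POLLIN };

    int r = poll(&p, 1, 0);    /* 0 ms timeout: just sample the state */
    if (r < 0)
        return -1;             /* poll itself failed */
    if (r == 0)
        return 0;              /* nothing new; connection looks fine */

    if (p.revents & (POLLERR | POLLHUP))
        return -1;             /* keepalive (or an RST) marked it dead */

    char buf[512];
    ssize_t n = read(fd, buf, sizeof(buf));
    if (n <= 0)
        return -1;             /* error (e.g. ETIMEDOUT) or orderly EOF */
    /* ... handle n bytes of data ... */
    return 1;
}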
Short answer: add

int flags = 1;
if (setsockopt(sfd, SOL_SOCKET, SO_KEEPALIVE,
               (void *)&flags, sizeof(flags)) < 0) {
    perror("ERROR: setsockopt(), SO_KEEPALIVE");
    exit(1);   /* exit with a non-zero status on failure */
}

on the server side, and read() will be unblocked when the client is down.
A full explanation can be found here.

Confused about OpenSSL non-blocking I/O

In general, the OpenSSL library (C API) seems to offer two ways to do everything: you can either use plain system sockets configured to your liking, or you can use OpenSSL BIO objects which are sort of like streams.
However, I'm often confused by some of the duplicated functionality. For example, how do you make an SSL connection non-blocking? One way seems to be to simply access the underlying file descriptor and set it to non-blocking using fcntl. But there is also an OpenSSL API function called BIO_set_nbio which takes in a BIO* object and sets it to non-blocking mode.
So what is the best way to set up a non-blocking SSL socket? What happens if you pass OpenSSL a native file descriptor which is already set to non-blocking mode via fcntl? Do you still need to call BIO_set_nbio specifically to make the BIO object non-blocking?
I think most people prefer the BIO interface, but the BIO routines just use whatever native non-blocking socket APIs that are available on the platform. I don't know what happens if you mix and match.
Note that non-blocking I/O for SSL is much trickier than for TCP in general. If you don't understand this going in, you're going to be torturing yourself. There is a book by John Viega and another by Eric Rescorla that go into this, and you can certainly read the OpenSSL mailing list to get a sense of the heartburn this has caused. Some good code examples showing non-blocking SSL programming with OpenSSL are contained in the software for the TOR project and the curl utility.
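To give a feel for the trickiness, here is a minimal sketch of the retry logic that non-blocking SSL I/O requires, assuming an SSL* already attached to a file descriptor made non-blocking via fcntl (try_read is a hypothetical helper name):

#include <openssl/ssl.h>

static int try_read(SSL *ssl, char *buf, int len)
{
    int n = SSL_read(ssl, buf, len);
    if (n > 0)
        return n;                  /* got data */

    switch (SSL_get_error(ssl, n)) {
    case SSL_ERROR_WANT_READ:
        /* Wait (select/poll) for the fd to become readable,
         * then retry the same SSL_read() call. */
        return 0;
    case SSL_ERROR_WANT_WRITE:
        /* Handshake/renegotiation in progress: wait for the fd to
         * become *writable*, then retry the same SSL_read() call. */
        return 0;
    case SSL_ERROR_ZERO_RETURN:
        return -1;                 /* peer sent a clean TLS shutdown */
    default:
        return -1;                 /* fatal; inspect the error queue */
    }
}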

Add SSL support to an open source application

I need Mosquitto http://mosquitto.org to work with SSL.
I've read several examples with OpenSSL, but as I've never worked with sockets in C, can someone tell me what I have to change for my existing sockets? (accept, write, read?)
Thank you very much
My understanding is that after you've called accept(), you then have to configure the socket for use with OpenSSL - assuming you've already initialised the library as well.
After that, you can use SSL_read() and SSL_write() instead of read() and write().
When you want to close the socket, you need to disable SSL support before calling close().
It's certainly a reasonable undertaking: the socket code isn't really the problem; it's understanding what you need to do to start and stop the TLS support, and ensuring that you don't miss something that could lead to vulnerabilities.
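A minimal sketch of that flow, assuming ctx is an SSL_CTX* already loaded with a certificate and private key (serve_one is a hypothetical helper name):

#include <openssl/ssl.h>
#include <sys/socket.h>
#include <unistd.h>

static void serve_one(SSL_CTX *ctx, int listen_fd)
{
    int client = accept(listen_fd, NULL, NULL);
    if (client < 0)
        return;

    SSL *ssl = SSL_new(ctx);
    SSL_set_fd(ssl, client);              /* wrap the accepted socket */

    if (SSL_accept(ssl) == 1) {           /* perform the TLS handshake */
        char buf[1024];
        int n = SSL_read(ssl, buf, sizeof(buf));    /* instead of read() */
        if (n > 0)
            SSL_write(ssl, buf, n);                 /* instead of write() */
    }

    SSL_shutdown(ssl);                    /* shut TLS down before closing */
    SSL_free(ssl);
    close(client);
}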
