C HTTP Server & OpenSSL - Works fine for HTTP - Multiple/rapid/concurrent connections being dropped using HTTPS

I'm writing an HTTP server in C using sockets. It can listen on multiple ports on a one-thread-per-port basis: each port's thread runs a listening loop, and each loop spawns another thread to deliver the response.
The code works perfectly when delivering standard HTTP responses. To stress-test the server, I have it respond with an HTML page containing JavaScript that just refreshes the browser repeatedly, and I've tested this with my computer running as the server and 4 other devices spamming it with requests at the same time.
No crashes, no dropped connections and no memory leaks. CPU usage never jumps beyond 5% running on a 2.0 GHz Intel Core 2 Duo in HTTP mode with 4 devices spamming requests.
I just added OpenSSL yesterday so it can deliver secure responses over HTTPS. That went fairly smoothly, as it seems all I had to do was replace some standard socket calls with their OpenSSL counterparts for secure mode (based on the solution to this question: Turn a simple socket into an SSL socket).
There is one SSL context and SSL struct per connection. It does work but not very reliably. Again, each response happens on its own thread but multiple/rapid/concurrent requests in secure mode are getting dropped seemingly at random, though there are still no crashes or memory leaks in my code.
When a connection is dropped, the browser either says it's waiting for a response that never arrives (Chrome) or just says the connection was reset (Firefox).
For reference, here is the updated connection creation and closing code.
Connection creation code (main part of the listening loop):
// Note: sslCtx and sslConnection exist
// elsewhere in memory allocated specifically
// for each connection.
struct sockaddr_in clientAddr; // memset-ed to 0 before accept
int clientAddrLength = sizeof(clientAddr);
...
int clientSocketHandle = accept(serverSocketHandle, (struct sockaddr *)&clientAddr, &clientAddrLength);
...
if (useSSL)
{
    int use_cert, use_privateKey, accept_result;
    sslCtx = SSL_CTX_new(SSLv23_server_method());
    SSL_CTX_set_options(sslCtx, SSL_OP_SINGLE_DH_USE);
    // Certificate and private key are read from the same PEM file.
    use_cert = SSL_CTX_use_certificate_file(sslCtx, sslCertificatePath, SSL_FILETYPE_PEM);
    use_privateKey = SSL_CTX_use_PrivateKey_file(sslCtx, sslCertificatePath, SSL_FILETYPE_PEM);
    sslConnection = SSL_new(sslCtx);
    SSL_set_fd(sslConnection, clientSocketHandle);
    accept_result = SSL_accept(sslConnection); // note: the handshake runs on the listening thread
}
... // Do other things and spawn request handling thread
Connection closing code:
int recvResult = 0;
if (!useSSL)
{
    shutdown(clientSocketHandle, SHUT_WR);
    while (TRUE)
    {
        recvResult = recv(clientSocketHandle, NULL, 0, 0);
        if (recvResult <= 0) break;
    }
}
else
{
    SSL_shutdown(sslConnection);
    while (TRUE)
    {
        recvResult = SSL_read(sslConnection, NULL, 0);
        if (recvResult <= 0) break;
    }
    SSL_free(sslConnection);
    SSL_CTX_free(sslCtx);
}
closesocket(clientSocketHandle);
Again, this works 100% perfect for HTTP responses. What could be going wrong for HTTPS responses?
Update
I've updated the code with OpenSSL callbacks for multi-threaded environments, using code from an answer to this question: OpenSSL and multi-threads. The server is slightly more reliable.
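For anyone following along, here is a minimal sketch of those locking callbacks for the OpenSSL 1.0-style API, assuming POSIX threads (on Windows you would use CRITICAL_SECTIONs instead); the function names are illustrative:

#include <openssl/crypto.h>
#include <pthread.h>
#include <stdlib.h>

static pthread_mutex_t *lockArray; /* one mutex per OpenSSL internal lock */

static void lockingCallback(int mode, int n, const char *file, int line)
{
    if (mode & CRYPTO_LOCK)
        pthread_mutex_lock(&lockArray[n]);
    else
        pthread_mutex_unlock(&lockArray[n]);
}

static unsigned long idCallback(void)
{
    return (unsigned long)pthread_self();
}

void initOpenSSLLocking(void)
{
    int i;
    lockArray = malloc(CRYPTO_num_locks() * sizeof(pthread_mutex_t));
    for (i = 0; i < CRYPTO_num_locks(); i++)
        pthread_mutex_init(&lockArray[i], NULL);
    CRYPTO_set_id_callback(idCallback);
    CRYPTO_set_locking_callback(lockingCallback);
}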
I wrote a small command-line program to spam the server with HTTPS requests, and it does not drop any connections with 5 instances of it running at the same time. Multiple instances of Firefox also appear not to drop any connections.
What is interesting, however, is that connections are still being dropped with modern WebKit-based browsers. Chrome starts to drop connections after less than 30 seconds of spamming; Safari on an iPhone 4 (iOS 5.1) rarely makes it past 3 refreshes before saying the connection was lost; and Safari on an iPad 2 (iOS 5.0) seems to cope the longest but ultimately ends up dropping connections as well.

You should call SSL_accept() in your request handling thread. This will allow your listening thread to process the TCP accept/listen queue more quickly, and reduce the chance of new connections getting a RESET from the TCP stack because of a full accept/listen queue.
The SSL handshake is compute-intensive. I would guess that your spammer is not using an SSL session cache, which forces a full handshake every time and drives your server to maximum CPU. That starves the other connections, and new incoming ones, of CPU time.
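A minimal sketch of that restructuring, assuming pthreads (handleRequest and connState are illustrative names; the shared SSL_CTX would be created once at startup instead of per connection):

#include <openssl/ssl.h>
#include <pthread.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <unistd.h>

struct connState { int fd; };
extern SSL_CTX *sharedCtx; // created once at server startup

static void *handleRequest(void *arg)
{
    struct connState *conn = arg;
    SSL *ssl = SSL_new(sharedCtx);
    SSL_set_fd(ssl, conn->fd);
    if (SSL_accept(ssl) == 1) // the expensive handshake now runs off the listening thread
    {
        // ... read request, send response ...
    }
    SSL_shutdown(ssl);
    SSL_free(ssl);
    close(conn->fd);
    free(conn);
    return NULL;
}

static void listeningLoop(int serverSocketHandle)
{
    for (;;)
    {
        int fd = accept(serverSocketHandle, NULL, NULL);
        if (fd < 0) continue;
        struct connState *conn = malloc(sizeof(*conn));
        conn->fd = fd;
        pthread_t tid;
        pthread_create(&tid, NULL, handleRequest, conn);
        pthread_detach(tid); // don't leak thread handles
    }
}

Sharing one SSL_CTX across connections also means the certificate and private key are loaded once rather than on every accept.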

Related

LwIP client can't establish a connection

I want to connect two F746ZG boards so that they can communicate via TCP. I am using the STM implementation of LwIP with the netconn API. The IP address is supplied via DHCP, but it is always the same address. Also, the address matches the expected value. The problem I am facing is that the client seemingly can't establish a connection. I am binding the connection to port 8880. Since I ran into this issue, I have written a debug client that should just periodically send a predefined message to a server. Here is the code for the client:
static void tcpecho_client_thread(void const *arg)
{
    struct netconn *xNetConn = NULL;
    err_t bind_err, connect_err;
    char* b_data = "OK"; // Data to be sent
    uint16_t b_len = sizeof ( b_data ); // NB: this is the size of the pointer, not of the string
    IP4_ADDR(&local_ip, IP_ADDR0_CLIENT, IP_ADDR1_CLIENT, IP_ADDR2_CLIENT, IP_ADDR3_CLIENT);
    IP4_ADDR(&pc_ip, IP_ADDR0_PC, IP_ADDR0_PC, IP_ADDR2_PC, IP_ADDR3_PC); // NB: IP_ADDR0_PC appears twice
    xNetConn = netconn_new ( NETCONN_TCP );
    if (xNetConn != NULL){
        bind_err = netconn_bind ( xNetConn, &local_ip, TCP_PORT_NETCONN );
        if(bind_err == ERR_OK){
            // Try to connect to server
            for(;;){
                connect_err = netconn_connect ( xNetConn, &pc_ip, TCP_PORT_NETCONN);
                if (connect_err == ERR_OK){
                    // We are connected
                    while(1){
                        BSP_LED_On(LED1);
                        netconn_write(xNetConn, b_data, b_len, NETCONN_COPY);
                        vTaskDelay(1000); // To see the result easily in Comm Operator
                    }
                }
            }
        }else{
            // Failed to bind the connection
            BSP_LED_On(LED3);
        }
    }else{
        // Failed to allocate a new connection
        BSP_LED_On(LED3);
    }
}
When I debug this, netconn_connect never manages to actually connect to anything. Since I am able to ping the board and get a response, I am confused about what is going wrong here. I have tried using Hercules to set up a TCP server on my PC so that the board can connect to it, but that also doesn't work. Using Wireshark, I can see the responses to my ping command coming in, but I don't see anything that would indicate the board trying to connect to my PC.
I have tested the corresponding server on the second board, but that runs fine. I can connect to it with Hercules and send data, so I doubt there is anything fundamentally wrong with the LwIP stack.
My guess is that I messed up the netconn_bind call; I am not 100% sure which IP you are supposed to bind the connection to. The way it currently is is how I read the documentation. For the server, I have bound it to IP_ADDR_ANY. Besides that, my implementation mostly matches the examples you can find online (e.g. the LwIP wiki).
I have figured out the problem: after deleting the netconn_bind call, everything works fine.
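For illustration, the connect path then reduces to something like this (a sketch based on the code above, with the send loop trimmed):

xNetConn = netconn_new(NETCONN_TCP);
if (xNetConn != NULL) {
    // No netconn_bind: the stack picks a local ephemeral port itself.
    connect_err = netconn_connect(xNetConn, &pc_ip, TCP_PORT_NETCONN);
    if (connect_err == ERR_OK) {
        netconn_write(xNetConn, b_data, b_len, NETCONN_COPY);
    }
}

Explicitly binding an outgoing TCP connection to a fixed local address and port is rarely needed; letting the stack choose an ephemeral port avoids clashes with earlier connections.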

MQTT TLS session resumption in C

I'm using the Eclipse Paho MQTT C client to connect to a mosquitto broker with TLS using openssl. This is part of my code:
MQTTClient client;
MQTTClient_connectOptions conn_opts = MQTTClient_connectOptions_initializer;
MQTTClient_message pubmsg = MQTTClient_message_initializer;
MQTTClient_SSLOptions sslOptions = MQTTClient_SSLOptions_initializer;
MQTTClient_deliveryToken token;
int rc;

MQTTClient_create(&client, ADDRESS, CLIENTID, MQTTCLIENT_PERSISTENCE_NONE, NULL);
conn_opts.keepAliveInterval = 20;
conn_opts.cleansession = 1;

/* TLS */
sslOptions.enableServerCertAuth = 0;
sslOptions.trustStore = "ca_rsp.crt";
conn_opts.ssl = &sslOptions;

if ((rc = MQTTClient_connect(client, &conn_opts)) != MQTTCLIENT_SUCCESS)
{
    printf("Failed to connect, return code %d\n", rc);
    exit(EXIT_FAILURE);
}
Currently, every time I reconnect to the broker the client performs a full handshake. I would like to use TLS session resumption to reduce the overhead. I've searched around the web but haven't found any example of how to implement that in a simple way.
Any suggestions?
Thanks
This came up recently on the mosquitto-dev mailing list: https://dev.eclipse.org/mhonarc/lists/mosquitto-dev/msg01606.html
The following excerpt seems to imply it may not be possible just yet with the code as it is.
How can I use Mosquitto / OpenSSL C API to leverage session tickets in an
MQTT C client ?
Not at the moment, this needs code changes that are a bit more
involved - it looks like we need to use SSL_set_session() to apply a
saved session to your client and SSL_CTX_sess_set_new_cb() to save the
session out.
Is there any way I could persist session tickets on the clients, so they
would remain valid across reboot ?
With the above changes, yes.
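For reference, those two calls would be wired up roughly like this on the client side. This is a sketch of plain OpenSSL usage, not of the Paho API, and savedSession is an illustrative name:

#include <openssl/ssl.h>

static SSL_SESSION *savedSession; /* would need a lock in threaded code */

/* Called by OpenSSL whenever the server establishes a new session with us. */
static int newSessionCb(SSL *ssl, SSL_SESSION *sess)
{
    if (savedSession)
        SSL_SESSION_free(savedSession);
    savedSession = sess;
    return 1; /* returning 1 means we keep the reference */
}

void setupResumption(SSL_CTX *ctx)
{
    SSL_CTX_set_session_cache_mode(ctx, SSL_SESS_CACHE_CLIENT);
    SSL_CTX_sess_set_new_cb(ctx, newSessionCb);
}

void connectWithResumption(SSL *ssl)
{
    if (savedSession)
        SSL_set_session(ssl, savedSession); /* offer the saved session */
    SSL_connect(ssl); /* the server may then resume instead of doing a full handshake */
}

Persisting the session across reboots would additionally require serializing it, e.g. with i2d_SSL_SESSION()/d2i_SSL_SESSION().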
Set conn_opts.cleansession = 0;
Disabling the clean-session flag in Paho client programs enables session resumption. I have verified this with Wireshark.
In the capture of the first connection (a full handshake) there are four exchanges between client and server, and the certificates are transferred. In the capture of the second connection there are only three exchanges and no certificates, which shows the server negotiated an abbreviated handshake rather than a full one.
The session resumption time limit is 7200 seconds.
Setting the cleansession flag to 1 always performs a full handshake, which means no session resumption.
I think it was a good decision by the Paho maintainers to link the clean-session flag to session resumption, because the mosquitto client provided on GitHub lacks this built-in session resumption feature.
See the MQTT v3.1.1 specification, or the specification links on the MQTT website.

Winsock becomes unstable with 15 simultaneous connections

My server is listening on port 1234 for incoming connections. I made an array of sockets and loop through it looking for a free socket (Closed = 0), growing the array to hold new incoming sockets if no free one is available. That index into the array later identifies each connection. I'm not sure if that is a good approach, but it works fine and stays stable until I stress the server with about 15 clients connecting at the same time. The problem I am facing is that some of the client apps get a connection time-out, and the server becomes unstable while handling those simultaneous connections. I am not sending much data, only 20 bytes on each connect event received from the server. I have tried increasing the backlog value in the listen call, but that didn't help either.
I have to wait longer for those clients to connect, but they do connect eventually, and the server still responds to new connections later on (if I close all those client apps, for example, or open a new client app, it connects immediately). These connections also stay open; I do not close the socket from the client side.
I am using CSocketPlus Class
Connection request event
Private Sub Sockets_ConnectionRequest(ByVal Index As Variant, ByVal requestID As Long)
    Dim sckIndex As Integer
    sckIndex = GetFreeSocketIndex(Sockets)
    Sockets.Accept sckIndex, requestID
End Sub

Function GetFreeSocketIndex(Sockets As CSocketPlus) As Integer
    Dim i As Integer, blnFound As Boolean
    If Sockets.ArrayCount = 1 Then 'First check if we have not loaded any arrays yet (first connector)
        Sockets.ArrayAdd 1
        GetFreeSocketIndex = 1
        ReDim UserInfo(1)
        Exit Function
    Else
        'Else loop through all arrays and find a free (not connected) one
        For i = 1 To Sockets.ArrayCount - 1
            If Sockets.State(i) <> sckConnected Then
                'Found one, use it
                blnFound = True
                Sockets.CloseSck i
                GetFreeSocketIndex = i
                If UBound(UserInfo) < i Then
                    ReDim Preserve UserInfo(i)
                End If
                Exit Function
            End If
        Next i
        'Didn't find any, load one and use it
        If blnFound = False Then
            Sockets.ArrayAdd i
            Sockets.CloseSck i
            GetFreeSocketIndex = i
            ReDim Preserve UserInfo(i)
        End If
    End If
End Function
Does anyone know why there are performance issues when multiple connections occur at the same time? Why does the server become slow to respond?
What would make the server accept faster?
I have also tried not accepting the connection (i.e. not calling Accept in the connection request event), but the issue is the same.
EDIT: I added a debug variable to print the socket number in the FD_ACCEPT event of the CSocket class, and it seems that WndProc is slow to post the messages when there are a lot of connections.
OK, the problem seems to be my connection. I moved my server to my RDP host, which has a 250 Mbps download / 200 Mbps upload connection, and it works very well there. I tested it with 100 clients making connections, and every one of them connected immediately! I wonder why I have such issues when my home connection is 40/40 Mbps... Does anyone know why that happens?
Edit: it turned out to be router options:
Enable Firewall
Enable DoS protection
I disabled the firewall (just for testing purposes) and everything works flawlessly! So the router was treating the flood of connections as a DoS attack and slowing down the traffic.

Is there a way to detect that TCP socket has been closed by the remote peer, without reading from it?

First, a little background to explain the motivation: I'm working on a very simple select()-based TCP "mirror proxy" that allows two firewalled clients to talk to each other indirectly. Both clients connect to this server, and as soon as both clients are connected, any TCP bytes sent to the server by client A are forwarded to client B, and vice versa.
This more or less works, with one slight gotcha: if client A connects to the server and starts sending data before client B has connected, the server doesn't have anywhere to put the data. I don't want to buffer it up in RAM, since that could end up using a lot of RAM; and I don't want to just drop the data either, as client B might need it. So I go for the third option, which is to not select()-for-read-ready on client A's socket until client B has also connected. That way client A just blocks until everything is ready to go.
That more or less works too, but the side effect of not selecting-for-read-ready on client A's socket is that if client A decides to close his TCP connection to the server, the server doesn't get notified about that fact -- at least, not until client B comes along and the server finally selects-for-read-ready on client A's socket, reads any pending data, and then gets the socket-closed notification (i.e. recv() returning 0).
I'd prefer it if the server had some way of knowing (in a timely manner) when client A closed his TCP connection. Is there a way to know this? Polling would be acceptable in this case (e.g. I could have select() wake up once a minute and call IsSocketStillConnected(sock) on all sockets, if such a function existed).
If you want to check whether the socket has been closed, rather than reading data, you can add the MSG_PEEK flag on recv() to see whether data has arrived or whether you get 0 or an error.
/* handle readable on A */
if (B_is_not_connected) {
    char c;
    ssize_t x = recv(A_sock, &c, 1, MSG_PEEK);
    if (x > 0) {
        /* ...have data, leave it in socket buffer until B connects */
    } else if (x == 0) {
        /* ...handle FIN from A */
    } else {
        /* ...handle errors */
    }
}
Even if A closes after sending some data, your proxy probably wants to forward that data to B before forwarding the FIN to B, so there is no point in learning that A has sent a FIN on the connection any sooner than after having read all the data it has sent.
A TCP connection isn't considered closed until both sides have sent a FIN. However, if A has forcibly shut down its endpoint, you will not know that until you attempt to send data on it and receive an EPIPE (assuming you have suppressed SIGPIPE).
After reading about your mirror proxy application a bit more, since this is a firewall-traversal application, it seems that you actually need a small control protocol to let you verify that these peers are actually allowed to talk to each other. If you have a control protocol, then you have many solutions available; the one I would advocate is to have one of the connections describe itself as the server, and the other connection describe itself as the client. Then you can reset the client's connection if there is no server present to take it. You can let servers wait for a client connection up to some timeout. A server should not initiate any data, and if it does without a connected client, you can reset the server's connection. This eliminates the issue of buffering data for a dead connection.
It appears the answer to my question is "no, not unless you are willing and able to modify your TCP stack to get access to the necessary private socket-state information".
Since I'm not able to do that, my solution was to redesign the proxy server to always read data from all clients, and throw away any data that arrives from a client whose partner hasn't connected yet. This is non-optimal, since it means that the TCP streams going through the proxy no longer have the stream-like property of reliable in-order delivery that TCP-using programs expect, but it will suffice for my purpose.
For me the solution was to poll the socket status.
On Windows 10, the following code seemed to work (but equivalent implementations seem to exist for other systems):
WSAPOLLFD polledSocket;
polledSocket.fd = socketItf;
polledSocket.events = POLLRDNORM | POLLWRNORM;
if (WSAPoll(&polledSocket, 1, 0) > 0)
{
    if (polledSocket.revents & (POLLERR | POLLHUP))
    {
        // socket closed
        return FALSE;
    }
}
I don't see the problem the way you see it. Say A connects to the server, sends some data, and closes; it does not need any message back. The server won't read A's data until B connects; once B does, the server reads socket A and sends the data to B. The first read returns the data A sent, and the second returns either 0 or -1; in either case the socket is closed, and the server then closes B. If A sends a big chunk of data, A's send() will block until the server starts reading and consumes the buffer.
I would use a function with a select which returns 0, 1, 2, 11, 22 or -1, where:
0 = no data in either socket (timeout)
1 = A has data to read
2 = B has data to read
11 = A socket has an error (disconnected)
22 = B socket has an error (disconnected)
-1 = one/both sockets is/are not valid
int WhichSocket(int sd1, int sd2, int seconds, int microsecs) {
    fd_set sfds, efds;
    struct timeval timeout = {0, 0};
    int bigger;
    int ret;
    FD_ZERO(&sfds);
    FD_ZERO(&efds); /* was missing: efds must be cleared before FD_SET */
    FD_SET(sd1, &sfds);
    FD_SET(sd2, &sfds);
    FD_SET(sd1, &efds);
    FD_SET(sd2, &efds);
    timeout.tv_sec = seconds;
    timeout.tv_usec = microsecs;
    if (sd1 > sd2) bigger = sd1;
    else bigger = sd2;
    // bigger+1 is necessary to be Berkeley-compatible; Microsoft ignores this param.
    ret = select(bigger + 1, &sfds, NULL, &efds, &timeout);
    if (ret > 0) {
        if (FD_ISSET(sd1, &sfds)) return(1);  // sd1 has data
        if (FD_ISSET(sd2, &sfds)) return(2);  // sd2 has data
        if (FD_ISSET(sd1, &efds)) return(11); // sd1 has an error
        if (FD_ISSET(sd2, &efds)) return(22); // sd2 has an error
    }
    else if (ret < 0) return -1; // one of the sockets is not valid
    return(0); // timeout
}
Since Linux 2.6.17, you can poll/epoll for POLLRDHUP/EPOLLRDHUP. See epoll_ctl(2):
EPOLLRDHUP (since Linux 2.6.17)
Stream socket peer closed connection, or shut down writing half of connection. (This flag is especially useful for writing simple code to detect peer shutdown when using Edge Triggered monitoring.)
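A minimal sketch of using it (Linux-specific; epfd would come from epoll_create1(0), and a real proxy would keep the socket registered rather than re-adding it every time):

#include <sys/epoll.h>

/* Returns 1 if the peer on fd has closed or shut down its write side. */
int peerClosed(int epfd, int fd, int timeout_ms)
{
    struct epoll_event ev = {0};
    ev.events = EPOLLRDHUP; /* no EPOLLIN: we only want the shutdown notification */
    ev.data.fd = fd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);

    struct epoll_event out;
    int n = epoll_wait(epfd, &out, 1, timeout_ms);
    return n > 0 && (out.events & (EPOLLRDHUP | EPOLLHUP));
}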
If your proxy must be a general-purpose proxy for any protocol, then you should also handle clients which send data and immediately call close after the send (one-way data transfer only).
So if client A sends data and closes the connection before the connection to B is open, don't worry: just forward the data to B normally when the connection to B opens.
There is no need to implement special handling for this scenario.
Your proxy will detect the closed connection when:
read returns zero after the connection to B is opened and all pending data from A has been read, or
your program tries to send data (from B) to A.
You could check whether a socket is still connected by trying to write to its file descriptor. If the return value of the write is -1 and errno is EPIPE, you know the socket has been closed. For example:
#include <errno.h>
#include <unistd.h>

int isSockStillConnected(int *fileDescriptors, int numFDs){
    int i;
    ssize_t n;
    for (i = 0; i < numFDs; i++){
        /* Assumes SIGPIPE is ignored. Note this injects bytes into the
           stream, so the protocol must tolerate a heartbeat message. */
        n = write(fileDescriptors[i], "heartbeat", 9);
        if (n < 0) return -1; /* e.g. errno == EPIPE if the peer closed */
    }
    //made it here, must be okay
    return 0;
}

Refresh multicast group membership

I have several embedded machines listening and streaming rtp audio data to a multicast group. They are connected to a smart managed switch (Netgear GS108Ev2) which does basic igmp snooping and multicast filtering on its ports, so that the rest of my (W)LAN doesn't get flooded.
At start everything works fine for about 500-520 seconds. After that, they don't receive any more data until they leave and join the group again. I guess the switch is "forgetting" about the join after a timeout.
Is there any way to refresh the group membership, i.e. letting the switch know that there is still someone listening, without losing packets?
System info:
Arch: blackfin
# cat /proc/version
Linux version 2.6.28.10-ADI-2009R1-uCBF54x-EMM
(gcc version 4.3.3 (ADI) ) #158 PREEMPT Tue Jun 5 20:05:42 CEST 2012
This is the way multicast / the IGMP protocol works. A client has to confirm its group membership periodically by sending a Membership Report, or it will be assumed to have left the group after some short timeout. However, those reports are usually sent only in response to a Membership Query from the local multicast router. Either your clients don't receive the query or they don't respond with a report.
Use a tool like Wireshark to see which IGMP packets are sent through your network.
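(For completeness, the leave-and-rejoin workaround the question already uses can be done from the application with setsockopt; a sketch follows, with groupAddr and localAddr as illustrative parameters. Note that it can lose packets in the gap, which is exactly what a proper IGMP querier avoids.)

#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

/* Force the kernel to emit a fresh IGMP Membership Report by
   leaving and immediately re-joining the group. */
void rejoinGroup(int sock, struct in_addr groupAddr, struct in_addr localAddr)
{
    struct ip_mreq mreq;
    memset(&mreq, 0, sizeof(mreq));
    mreq.imr_multiaddr = groupAddr;
    mreq.imr_interface = localAddr;
    setsockopt(sock, IPPROTO_IP, IP_DROP_MEMBERSHIP, &mreq, sizeof(mreq));
    setsockopt(sock, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));
}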
You need an IGMP querier to send the Membership Queries, as was already explained by scai.
If you can't configure your router to do that, you can use one of your computers. Seeing how running a full multicast routing daemon would be overkill (and I've never done that), I suggest you try to abuse igmpproxy.
First create a dummy upstream interface (this is not persistent!):
ip tuntap add dev tap6 mode tap
Write igmpproxy.conf:
# Dummy upstream interface.
phyint tap6 upstream ratelimit 0 threshold 1
# Local interface.
phyint eth0 downstream ratelimit 0 threshold 1
# Explicitly disable any other interfaces (yes, it sucks).
phyint NAME disabled
...
Finally start igmpproxy (as root):
igmpproxy -v /path/to/igmpproxy.conf
If your embedded devices are running linux, you need to turn off the reverse packet filter on them or they won't respond to group membership queries. In that case the upstream switch will assume there is no-one listening to that multicast and switch it off.
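On Linux that is the rp_filter sysctl; assuming the multicast traffic arrives on eth0, something like:
sysctl -w net.ipv4.conf.all.rp_filter=0
sysctl -w net.ipv4.conf.eth0.rp_filter=0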
I had the same problem: multicast over Wi-Fi was lost after 260 seconds. I solved it in my application by adding AddSourceMembership on the socket.
private void StartListner(IPAddress sourceIp, IPAddress multicastGroupIp, IPAddress localIp, int port)
{
    try
    {
        Socket socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        IPEndPoint localEndpoint = new IPEndPoint(localIp, port);
        socket.Bind(localEndpoint);
        byte[] membershipAddresses = new byte[12]; // 3 IPs * 4 bytes (IPv4)
        Buffer.BlockCopy(multicastGroupIp.GetAddressBytes(), 0, membershipAddresses, 0, 4);
        Buffer.BlockCopy(sourceIp.GetAddressBytes(), 0, membershipAddresses, 4, 4);
        Buffer.BlockCopy(localIp.GetAddressBytes(), 0, membershipAddresses, 8, 4);
        socket.SetSocketOption(SocketOptionLevel.IP, SocketOptionName.AddSourceMembership, membershipAddresses);
        try
        {
            byte[] b = new byte[1024 * 2];
            int length = socket.Receive(b);
        }
        catch { }
    }
    catch (Exception ex)
    {
        logger.Error("Exception: " + ex);
    }
}
