I wrote a program in C where a client sends some information to a server once, using TCP sockets. Some time later, the server computes a result and should send it back to the client. How can I detect whether the connection was broken on the server or the client side?
You may want to try TCP keepalives.
# cat /proc/sys/net/ipv4/tcp_keepalive_time
7200
# cat /proc/sys/net/ipv4/tcp_keepalive_intvl
75
# cat /proc/sys/net/ipv4/tcp_keepalive_probes
9
In the above example, the TCP keep-alive timer kicks in after the connection has been idle for 7200 seconds. If a keep-alive probe goes unanswered, it is retried at 75-second intervals. After 9 successive failed probes, the connection is brought down.
The keepalive time can be modified at boot time by placing a startup script in /etc/init.d.
There is a way to detect, on Linux, dead sockets without reading or writing to them:
Get the numeric (uint) file descriptor from the socket handler.
readlink the file /proc/[pid]/fd/[#handler]. If it's a socket it will return a string like socket:[#inode].
Read /proc/net/tcp, search for the line with that inode (11th column).
Read the status (st) column on that line (4th column). If it's 0x07 (TCP_CLOSE) or 0x08 (TCP_CLOSE_WAIT), the socket is dead.
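A rough C sketch of these steps, assuming an IPv4 socket owned by the calling process; the helper name tcp_state_of and the parsing details are illustrative, not a fixed API:

#include <stdio.h>
#include <unistd.h>

/* Returns the "st" value from /proc/net/tcp for sockfd, or -1 on failure. */
int tcp_state_of(int sockfd)
{
    char path[64], link[64], line[512];
    unsigned long inode, ino;
    ssize_t n;

    snprintf(path, sizeof path, "/proc/self/fd/%d", sockfd);
    n = readlink(path, link, sizeof link - 1);
    if (n < 0)
        return -1;
    link[n] = '\0';
    if (sscanf(link, "socket:[%lu]", &inode) != 1)   /* e.g. "socket:[12345]" */
        return -1;

    FILE *f = fopen("/proc/net/tcp", "r");
    if (!f)
        return -1;

    int state = -1;
    fgets(line, sizeof line, f);                     /* skip the header line */
    while (fgets(line, sizeof line, f)) {
        char local[64], rem[64], queues[64], timers[64];
        unsigned st;
        unsigned long retr, uid, timo;
        /* sl local_address rem_address st tx:rx tr:when retrnsmt uid timeout inode */
        if (sscanf(line, "%*d: %63s %63s %x %63s %63s %lx %lu %lu %lu",
                   local, rem, &st, queues, timers, &retr, &uid, &timo, &ino) == 9
            && ino == inode) {
            state = (int)st;                         /* 0x07 = CLOSE, 0x08 = CLOSE_WAIT */
            break;
        }
    }
    fclose(f);
    return state;
}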
The only way I know for a program to know for certain that a TCP connection is dead is to attempt to send something on it. The attempt will either time-out or return with an error condition. Thus, the program doesn't need to do anything special -- just send the stuff it was designed to send. It does need to handle, however, all possible error-conditions. On time-out, it could retry for a limited time or decide that the connection is dead. The latter case is appropriate if sending the same data multiple times would be harmful. After this or an error condition, the program should close the current connection and, if appropriate, re-establish it.
TCP keep-alive is a reliable way to determine whether the peer is dead, i.e. whether the peer application exited without properly closing the open TCP connection.
http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/overview.html
Look at how to enable TCP keep-alives (SO_KEEPALIVE) per socket using the setsockopt() call.
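For example, a minimal sketch on Linux (TCP_KEEPIDLE, TCP_KEEPINTVL and TCP_KEEPCNT are Linux-specific, and the timer values here are only illustrative):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Enable keepalives on an already-connected socket. Returns 0 or -1. */
int enable_keepalive(int sockfd)
{
    int on = 1;
    int idle = 60;      /* start probing after 60 s of idle time */
    int interval = 10;  /* send a probe every 10 s               */
    int count = 5;      /* give up after 5 unanswered probes     */

    if (setsockopt(sockfd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on) < 0)
        return -1;
    if (setsockopt(sockfd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof idle) < 0)
        return -1;
    if (setsockopt(sockfd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof interval) < 0)
        return -1;
    if (setsockopt(sockfd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof count) < 0)
        return -1;
    return 0;
}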
Another method is for the client and server applications to agree on heartbeats at regular intervals. Non-arrival of heartbeats indicates that the peer is dead.
I have a very simple client-server setup with one blocking socket doing full-duplex communication. I've enabled SSL/TLS in the application. The model is that of a typical producer-consumer: the client produces data, sends it to the server, and the server processes it. The only catch is that once in a while the server sends data back to the client, which the client handles accordingly. Below is a very simple pseudo code of the application:
 1  Client:
 2  -------
 3  while (true)
 4  {
 5      if (poll(pollin, timeout=0) || 0 < SSL_pending(ssl))
 6      {
 7          SSL_read();
 8          // Handle WANT_READ or WANT_WRITE appropriately.
 9          // If no error, handle the received control message.
10      }
11      // produce data.
12      while (!poll(pollout))
13          ; // Wait until the pipe is ready for a send().
14      SSL_write();
15      // Handle WANT_READ or WANT_WRITE appropriately.
16      if (time to renegotiate)
17          SSL_renegotiate(ssl);
18  }
19
20  Server:
21  -------
22  while (true)
23  {
24      if (poll(pollin, timeout=1s) || 0 < SSL_pending(ssl))
25      {
26          SSL_read();
27          // Handle WANT_READ or WANT_WRITE appropriately.
28          // If no error, consume data.
29      }
30      if (control message needs to be sent)
31      {
32          while (!poll(pollout))
33              ; // Wait until the pipe is ready for a send().
34          SSL_write();
35          // Handle WANT_READ or WANT_WRITE appropriately.
36      }
37  }
The trouble happens when, for testing purposes, I force SSL renegotiation (lines 16-17). The session starts nice and easy, but after a while, I get the following errors:
Client:
-------
error:140940F5:SSL routines:SSL3_READ_BYTES:unexpected record
Server:
-------
error:140943F2:SSL routines:SSL3_READ_BYTES:sslv3 alert unexpected message
Turns out, around the same time that the client initiates a renegotiation (line 14), the server ends up sending application data to the client (line 34). The client, as part of the renegotiation process, receives this application data and bombs with an "unexpected record" error. Similarly, when the server does the subsequent receive (line 26), it ends up receiving renegotiation data when it was expecting application data.
What am I doing wrong? How should I handle/test SSL renegotiations over a full-duplex channel? Note that there are no threads involved. It's a simple single-threaded model with reads/writes happening on either end of the socket.
UPDATE: To verify that there is nothing wrong with the application I have written, I could even reproduce this quite comfortably with OpenSSL's s_client and s_server implementations. I started an s_server and, once the s_client got connected to the server, I programmatically sent a bunch of application data from the server to the client and a bunch of 'R' (renegotiation requests) from the client to the server. Eventually, they both fail in exactly the same manner as described above.
s_client:
RENEGOTIATING
4840:error:140940F5:SSL routines:SSL3_READ_BYTES:unexpected record:s3_pkt.c:1258:
s_server:
Read BLOCK
ERROR
4838:error:140943F2:SSL routines:SSL3_READ_BYTES:sslv3 alert unexpected message:s3_pkt.c:1108:SSL alert number 10
4838:error:140940E5:SSL routines:SSL3_READ_BYTES:ssl handshake failure:s3_pkt.c:1185:
UPDATE 2:
Ok. As suggested by David, I reworked the test application to use non-blocking sockets and to always do SSL_read and SSL_write first, doing the select based on what they return, and I still get the same errors during renegotiation (SSL_write ends up getting application data from the other side in the midst of renegotiation). The question is: at any point in time, if SSL_read returns WANT_READ, can I assume it is because there is nothing in the pipe and go ahead with SSL_write since I have something to write? If not, that's probably why I end up with errors. Either that, or I am doing the renegotiation all wrong. Note that if SSL_read returns WANT_WRITE, I always do a select and call SSL_read again.
You're trying to "look through" the SSL black box. This is a huge mistake.
if (poll(pollin, timeout=0) || 0 < SSL_pending(ssl))
{
SSL_read();
You're making the assumption that in order for SSL_read to make forward progress, it needs to read data from the socket. This is an assumption that can be false. For example, if a renegotiation is in progress, the SSL engine may need to send data next, not read data.
while (!poll(pollout))
; // Wait until the pipe is ready for a send().
SSL_write();
How do you know the SSL engine wants to write data to the pipe? Did it give you a WANT_WRITE indication? If not, maybe it needs to read renegotiation data in order to send.
To use SSL in non-blocking mode, just attempt the operation you want to do. If you want to read decrypted data, call SSL_read. If you want to send encrypted data, call SSL_write. Only call poll if the SSL engine tells you to, with a WANT_READ or WANT_WRITE indication.
Update: You have a "half of each" hybrid between blocking and non-blocking approaches. This cannot possibly work. The problem is simple: until you call SSL_read, you don't know whether or not it needs to read from the socket. If you call poll first, you will block even if SSL_read does not need to read from the socket. If you call SSL_read first, it will block if it does need to read from the socket. SSL_pending won't help you. If SSL_read needs to write to the socket to make forward progress, SSL_pending will return zero, but calling poll will block forever.
You have two sane choices:
Blocking. Leave the sockets set blocking. Just call SSL_read when you want to read and SSL_write when you want to write. They will block. Blocking sockets can block, that's how they work.
Non-blocking. Set the sockets non-blocking. Just call SSL_read when you want to read and SSL_write when you want to write. They will not block. If you get a WANT_READ indication, poll in the read direction. If you get a WANT_WRITE indication, poll in the write direction. Note that it is perfectly normal for SSL_read to return WANT_WRITE, and then you poll in the write direction. Similarly, SSL_write can return WANT_READ, and then you poll in the read direction.
Your code would (mostly) work if the implementation of SSL_read was basically, "read some data then decrypt it" and SSL_write was "encrypt some data and send it". The problem is, these functions actually run a sophisticated state machine that reads and writes to the socket as needed and ultimately causes the effect of giving you decrypted data or encrypting your data and sending it.
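A rough sketch of the non-blocking read path described above, assuming the file descriptor is already non-blocking and the SSL session is established (the helper name is illustrative); the write path is symmetric, with SSL_write in place of SSL_read:

#include <openssl/ssl.h>
#include <poll.h>

/* Returns plaintext bytes read, 0 on clean shutdown, -1 on a real error. */
int ssl_read_when_ready(SSL *ssl, int fd, void *buf, int len)
{
    for (;;) {
        int n = SSL_read(ssl, buf, len);
        if (n > 0)
            return n;

        int err = SSL_get_error(ssl, n);
        struct pollfd pfd = { .fd = fd, .events = 0, .revents = 0 };

        if (err == SSL_ERROR_WANT_READ)
            pfd.events = POLLIN;      /* engine needs more ciphertext from the peer */
        else if (err == SSL_ERROR_WANT_WRITE)
            pfd.events = POLLOUT;     /* engine needs to flush, e.g. renegotiation data */
        else if (err == SSL_ERROR_ZERO_RETURN)
            return 0;                 /* peer sent close_notify */
        else
            return -1;

        if (poll(&pfd, 1, -1) < 0)    /* wait only in the direction the engine asked for */
            return -1;
    }
}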
After spending time debugging my application with OpenSSL, I figured out the answer to the question I originally posted. I am sharing it here in case it helps others like me.
The question I had posted originally had to do with a clear error from OpenSSL indicating that it was receiving application data in the middle of a handshake. What I had failed to realize is that OpenSSL gets confused when it receives application data in the middle of a handshake: it's fine to receive handshake data while receiving/sending application data, but not the other way around (at least with OpenSSL). This is also the reason most SSL-enabled applications run fine: most of them are half-duplex in nature (HTTPS, for instance), which implicitly guarantees that no application data arrives asynchronously during a handshake.
What this means is that if you are designing a custom client-server full-duplex protocol (which is the case I am in) and want to slap SSL onto it, it's the application's responsibility to initiate a renegotiation when neither end is sending any data. This is clearly documented in Mozilla's NSS API. Not to mention there is an open ticket in OpenSSL's bug repository regarding this issue. The moment I changed my application to initiate a handshake when there is nothing for the client/server to say to one another, I no longer faced the above errors.
Also, I agree with David's comments about blocking sockets and I've read many of his arguments in the OpenSSL mailing list as well. But, the sad thing is that most legacy applications are built around poll and blocking sockets and they "Just Work Fine (TM)". The issue arises when dealing with SSL renegotiation. I still believe at least my application can deal with SSL renegotiation in the presence of blocking sockets since it is a very confined and custom protocol and we (as the application developer) can decide to do the renegotiation when the protocol is quiescent. If that doesn't work, I will go the non-blocking socket route.
So the basic premise of my program is that I'm supposed to create a tcp session, direct traffic through it, and detect any connection losses. If the connection does break, I need to close the sockets and reopen them (using the same ports) in such a way that it will seem like the connection (almost) never died. It should also be noted that the two programs will be treated as proxies (data gets sent to them, if the connection breaks it gets stored until connection is fixed, then data is sent off).
I've done some research and gone ahead and used setsockopt() with the SO_REUSEADDR option to set the socket options so that I can reuse the address.
Here's the basic algorithm I do to detect a connection break using signals:
1. After initial setup of the sockets, begin sending data.
2. After x seconds, set a flag to false, which prevents all other data from being sent.
3. Send a single piece of data to let the other program know the connection is still open; reset the timer to x seconds.
4. If I receive the same piece of data back from the other program, set the flag to true to continue sending.
5. If I don't receive the data after x seconds, close the socket and attempt to reconnect.
(step 5 is where I'm getting the error).
Essentially one program is a client(on one VM) and one program is a server(on another VM), each sending and receiving data to/from each other and to/from another program on each VM.
My question is: Given that I'm still getting this error after setting the socket options, why am I not allowed to re-bind the address when a connection has been detected?
The server is the one complaining when a disconnect is detected (I close the socket, open a new one, set the option, and attempt to bind the port with the same information).
One other thing of note is the way I'm receiving the data from the sockets. If I have a socket open, I'm basically reading it by doing the following:
while ((x = recv(socket, buff, 1, 0)) >= 0) {
    // add to buffer
    // send out to other program if connection is alive
}
Since I'm using the timer to close/reopen the socket, and this is in a different thread, will this prevent the socket from closing?
SO_REUSEADDR only allows limited reuse of ports. Specifically, it does not allow reuse of a port that some other socket is currently actively listening for incoming connections on.
There seems to be an epidemic here of people calling bind() and then setsockopt() and wondering why the setsockopt() doesn't fix an error that had already happened on bind().
You have to call setsockopt() first.
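A minimal sketch of that ordering (the function name and values are illustrative):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

/* Create, configure and bind a listening socket; returns the fd or -1. */
int make_listener(unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    int yes = 1;                       /* set SO_REUSEADDR *before* bind() */
    if (setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof yes) < 0) {
        close(fd);
        return -1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);

    if (bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0 ||
        listen(fd, SOMAXCONN) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}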
But I don't understand your problem. Why do you think you need to use the same ports? Why are you setting a flag preventing you from sending data? You don't need any of this. Just handle the errors on send() when and if they arise, creating a new connection when necessary. Don't try to out-think TCP. Many have tried, few if any have succeeded.
I have a client-server application where each side communicate with the other via TCP socket.
I properly establish the connection and then I crash the server BEFORE any data is written on the socket by the client.
What I see is that the first write() attempt (client-side) is successful and it returns the actual number of written bytes, while the following ones return (as I expected) -1 (receiving a SIGPIPE) and errno=EPIPE.
Why is the first write() successful even though the socket is already closed?
EDIT
Sometimes the subsequent write() calls also return positive values, as if everything went well.
You're confused by what the return value of write() means. It doesn't mean, "the peer got the data and acknowledged it". Instead, it means, "I buffered so-many bytes to send to the peer and they're my responsibility now, so you can forget about them (and I don't have any pending errors)".
That is, if the TCP stack accepts the write and returns n bytes, that doesn't mean they've been written yet, just queued for writing. It'll take some time, perhaps 30s after it starts sending network traffic, before the stack gives up and returns an error to you. During that time, you could have done several calls to write() which were successful at queueing data for sending. (The write error will be returned in c.30s if the peer has vanished, or immediately if the peer can be contacted and sends a RST packet straight away to indicate the connection is dead.)
This has to do with how TCP/IP works, which can be roughly described as two mostly independent half-connections. When you close the socket at the server, the client is told that it will not receive further data on the C<-S half-connection, waking up read() immediately, but it is told nothing about the C->S direction. It only gets a reply resetting the connection after it tries to send some data. I recommend the TCP/IP Guide for further details.
The reason you can sometimes write() twice is that you write faster than the round-trip time and can squeeze in a second write() before the reply to the first one arrives.
I'm using the following method to detect a disconnected server condition:
After getting the select() timeout on a socket (nothing was received, though was supposed to),
the 'system("ping -c 1 -w 1 server");' command is activated.
If the server is up and just lagging, the ping command will return in less than 0.1 seconds.
Otherwise (the server is down), the ping command will return in 1 second.
I am debugging a C-based Linux socket program. As in all the examples available on websites, I applied the following structure:
sockfd= socket(AF_INET, SOCK_STREAM, 0);
connect(sockfd, (struct sockaddr *) &serv_addr, sizeof(serv_addr));
send_bytes = send(sockfd, sock_buff, (size_t)buff_bytes, MSG_DONTWAIT);
I can detect the disconnection when the remote server closes its server program. But if I unplug the ethernet cable, the send function still returns positive values rather than -1.
How can I check the network connection in a client program assuming that I can not change server side?
But if I unplug the ethernet cable, the send function still returns positive values rather than -1.
First of all, you should know that send doesn't actually send anything; it's just a memory-copying function/system call. It copies data from your process to the kernel; some time later the kernel will fetch that data and send it to the other side after packaging it in segments and packets. Therefore send can only return an error if:
The socket is invalid (for example bogus file descriptor)
The connection is clearly invalid, for example it hasn't been established or has already been terminated in some way (FIN, RST, timeout - see below)
There's no more room to copy the data
The main point is that send doesn't send anything and therefore its return code doesn't tell you anything about data actually reaching the other side.
Back to your question: when TCP sends data, it expects a valid acknowledgement within a reasonable amount of time. If it doesn't get one, it resends. How often does it resend? Each TCP stack does things differently, but the norm is to use exponential backoff: wait 1 second, then 2, then 4, and so on. On some stacks this process can take minutes.
The main point is that in the case of an interruption, TCP will declare a connection dead only after a seriously long period of silence (on Linux it does something like 15 retries, i.e. more than 5 minutes).
One way to solve this is to implement some acknowledgement mechanism in your application. You could for example send a request to the server "reply within 5 seconds or I'll declare this connection dead" and then recv with a timeout.
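One way to do the "recv with a timeout" part, sketched with SO_RCVTIMEO (the 5-second value matches the example above and is only illustrative):

#include <errno.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/types.h>

/* Returns bytes received, 0 if the peer closed, -1 on error, -2 on timeout. */
ssize_t recv_with_timeout(int sockfd, void *buf, size_t len)
{
    struct timeval tv = { .tv_sec = 5, .tv_usec = 0 };
    if (setsockopt(sockfd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv) < 0)
        return -1;

    ssize_t n = recv(sockfd, buf, len, 0);
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        return -2;    /* no reply within 5 s: treat the connection as dead */
    return n;
}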
To detect a remote-disconnect, do a read()
Check this thread for more info:
Can read() function on a connected socket return zero bytes?
You can't detect an unplugged Ethernet cable just by calling write().
That's because of TCP retransmission, which the kernel's TCP stack performs without your application's knowledge.
Here are some solutions.
Even if you have already set the keepalive option on your application socket, you can't detect the dead-connection state of the socket in time if your app keeps writing to the socket.
That's because of TCP retransmission by the kernel TCP stack.
tcp_retries1 and tcp_retries2 are kernel parameters for configuring the TCP retransmission timeout.
It's hard to predict the precise retransmission timeout because it's calculated by the RTT mechanism.
You can see this computation in RFC 793 (3.7. Data Communication).
https://www.rfc-editor.org/rfc/rfc793.txt
Each platform has kernel configuration parameters for TCP retransmission.
Linux : tcp_retries1, tcp_retries2 : (exist in /proc/sys/net/ipv4)
http://linux.die.net/man/7/tcp
HPUX : tcp_ip_notify_interval, tcp_ip_abort_interval
http://www.hpuxtips.es/?q=node/53
AIX : rto_low, rto_high, rto_length, rto_limit
http://www-903.ibm.com/kr/event/download/200804_324_swma/socket.pdf
You should set a lower value for tcp_retries2 (default 15) if you want to detect a dead connection earlier, but as I already said, the timing is not precise.
In addition, you currently can't set those values for a single socket only; they are global kernel parameters.
There was an attempt to add a per-socket TCP retransmission option (http://patchwork.ozlabs.org/patch/55236/), but I don't think it was merged into the kernel mainline. I can't find those option definitions in the system header files.
For reference, you can monitor the keepalive timer of your socket with 'netstat --timers', like below.
https://stackoverflow.com/questions/34914278
netstat -c --timer | grep "192.0.0.1:43245 192.0.68.1:49742"
tcp 0 0 192.0.0.1:43245 192.0.68.1:49742 ESTABLISHED keepalive (1.92/0/0)
tcp 0 0 192.0.0.1:43245 192.0.68.1:49742 ESTABLISHED keepalive (0.71/0/0)
tcp 0 0 192.0.0.1:43245 192.0.68.1:49742 ESTABLISHED keepalive (9.46/0/1)
tcp 0 0 192.0.0.1:43245 192.0.68.1:49742 ESTABLISHED keepalive (8.30/0/1)
tcp 0 0 192.0.0.1:43245 192.0.68.1:49742 ESTABLISHED keepalive (7.14/0/1)
tcp 0 0 192.0.0.1:43245 192.0.68.1:49742 ESTABLISHED keepalive (5.98/0/1)
tcp 0 0 192.0.0.1:43245 192.0.68.1:49742 ESTABLISHED keepalive (4.82/0/1)
In addition, when a keepalive timeout occurs, you may get different return events depending on the platform, so you must not decide the dead-connection status by the return event alone.
For example, HP-UX returns a POLLERR event and AIX returns just a POLLIN event when the keepalive timeout occurs.
You will get an ETIMEDOUT error from the recv() call at that time.
In recent kernel versions (since 2.6.37), you can use the TCP_USER_TIMEOUT option, which works well. This option can be set on a single socket.
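A minimal sketch of setting it (the 10-second value is illustrative; on older toolchains the constant may live in <linux/tcp.h> instead):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Abort the connection if transmitted data stays unacknowledged this long. */
int set_user_timeout(int sockfd)
{
    unsigned int timeout_ms = 10000;   /* value is in milliseconds */
    return setsockopt(sockfd, IPPROTO_TCP, TCP_USER_TIMEOUT,
                      &timeout_ms, sizeof timeout_ms);
}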
Finally, you can call recv() with the MSG_PEEK flag, which lets you check whether the socket is okay. (MSG_PEEK just peeks at whether data has arrived in the kernel buffer and never copies the data into the user buffer.)
So you can use this flag to check that the socket is okay without any side effect.
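A sketch of such a check (MSG_DONTWAIT keeps the call from blocking; the return convention is just for illustration):

#include <errno.h>
#include <sys/socket.h>

/* 1 = looks alive, 0 = peer closed, -1 = error (e.g. ETIMEDOUT, ECONNRESET). */
int socket_alive(int sockfd)
{
    char c;
    ssize_t n = recv(sockfd, &c, 1, MSG_PEEK | MSG_DONTWAIT);
    if (n > 0)
        return 1;                      /* data is waiting to be read          */
    if (n == 0)
        return 0;                      /* orderly shutdown by the peer        */
    if (errno == EAGAIN || errno == EWOULDBLOCK)
        return 1;                      /* nothing to read yet, not known dead */
    return -1;
}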
Check the return value, and see if it's equal to this value:
EPIPE
This socket was connected but the connection is now broken. In this case, send generates a SIGPIPE signal first; if that signal is ignored or blocked, or if its handler returns, then send fails with EPIPE.
Also handle the SIGPIPE signal, to make the behaviour more controllable.
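For instance, a minimal sketch (MSG_NOSIGNAL suppresses SIGPIPE for this call on Linux; alternatively, ignore SIGPIPE once at startup with signal(SIGPIPE, SIG_IGN)):

#include <errno.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Returns bytes queued for sending, or -1; EPIPE means the connection is broken. */
ssize_t send_checked(int sockfd, const void *buf, size_t len)
{
    ssize_t n = send(sockfd, buf, len, MSG_NOSIGNAL);
    if (n < 0 && errno == EPIPE) {
        /* The peer is gone: close this socket and reconnect if appropriate. */
    }
    return n;
}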
Hi, I'm writing a simple HTTP port forwarder. I read data from port 80 and pass it to my lighttpd server on port 8080.
As long as I write() data on the socket on port 8080 (forwarding the request) there's no problem, but when I read() data from that socket (forwarding the response), the last read() hangs a lot (about 1 or 2 seconds) before realizing there's no more data and returning 0.
I tried to set the socket to non-blocking, but this doesn't work, as it sometimes returns EWOULDBLOCK even if there's some data left (lighttpd + cgi can be quite slow).
I tried to set a timeout with select(), but, as above, a slow cgi could timeout the socket when there's actually some data to transmit.
Update: SOLVED. It was the keepalive after all. After I disabled it in my lighttpd configuration file, the whole thing runs flawlessly.
Well, for the sake of completion, and as per my comment:
It is likely that the HTTP server itself (lighttpd in your case) is maintaining a persistent connection to your proxy because your proxy relayed a header containing “Connection: keep-alive”. This header aids when the client wants to make multiple requests over the same connection. So, because lighttpd received this header, it assumed it was going to receive further requests and kept the socket open, causing read to block in your proxy.
Disabling keep-alive in your lighttpd configuration is one way to fix it, but you could also strip the "Connection: keep-alive" header before you relay it to your web server.
Using both non-blocking sockets and select is the right way to go. Returning EWOULDBLOCK doesn't mean that the entire stream of data is finished being received; it means that, right now, there is nothing to read. That's exactly what you want, because it means that read won't wait even half a second for more data to show up. If the data isn't immediately available, it will return immediately.
Now, obviously, this means you will need to call read multiple times to get the complete data. The general format for doing this is a select loop. In pseudocode:
do
    select ( my_sockets )
    if ( select error )
        handle_error
    else
        for each ( socket in my_sockets ) do
            if ( socket is ready ) then
                nonblocking read from socket
                if ( no data was read ) then
                    close socket
                    remove socket from my_sockets
                endif
            endif
        loop
    endif
loop
The idea is that select will tell you which sockets have data available for reading right now. If you read one of those sockets, you are guaranteed either to get data or to get a return value of 0, indicating that the remote end closed the socket.
If you use this method, you will never be stuck in a read call that is not reading data, for any length of time. The blocking operation is the select call, and you can also select over writeable sockets if you need to write, and set a timeout if you need to do things periodically.
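A compact C rendering of that loop, assuming the connections are non-blocking and stored in fds[] with closed slots marked as -1 (bookkeeping such as compacting the array is left out):

#include <errno.h>
#include <sys/select.h>
#include <unistd.h>

void forward_loop(int *fds, int nfds)
{
    char buf[4096];
    for (;;) {
        fd_set rset;
        FD_ZERO(&rset);
        int maxfd = -1;
        for (int i = 0; i < nfds; i++) {
            if (fds[i] < 0)
                continue;                       /* slot already closed */
            FD_SET(fds[i], &rset);
            if (fds[i] > maxfd)
                maxfd = fds[i];
        }
        if (select(maxfd + 1, &rset, NULL, NULL, NULL) < 0) {
            if (errno == EINTR)
                continue;
            return;                             /* handle_error */
        }
        for (int i = 0; i < nfds; i++) {
            if (fds[i] < 0 || !FD_ISSET(fds[i], &rset))
                continue;
            ssize_t n = read(fds[i], buf, sizeof buf);   /* non-blocking read */
            if (n > 0) {
                /* forward the n bytes to the other side */
            } else if (n == 0) {
                close(fds[i]);                  /* remote end closed the socket */
                fds[i] = -1;
            }
            /* n < 0 with EAGAIN just means "nothing to read right now" */
        }
    }
}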
Don't do that!
Keepalives boost performance for other clients. Instead, fix your client: send a Connection: close header in your client and make sure your request doesn't claim HTTP/1.1 compliance. (If for no other reason than that you probably don't handle chunked encoding either.)
I guess I would use non-blocking I/O to its full extent. Instead of setting timeouts, I'd rather wait for events:
while (select(...)) {
    switch (...) {
    case ...: // Handle accepting new connection
    case ...: // Handle reading from socket
    ...
    }
}
A single-threaded, blocking forwarder will cause problems anyway with multiple clients.
Sorry, I don't remember the exact calls. It can also be awkward in some cases (IIRC you need to handle writes), but there are libraries which simplify the task.