Reasons for TLS Client Hello not being answered

I have written a basic TLS client for use in embedded systems (written in C). It uses TLS 1.2 and works great in 90% of situations: HTTPS works fine, and so do various FTP servers using implicit and explicit FTPS.

This week I've encountered an issue when using it with Cerberus FTP and ProFTPD, though. The TLS handshake goes through absolutely fine when opening the control channel on port 21, but when using passive mode and opening the passive port, my client sends the TLS Client Hello (and I can see the server reply with a TCP ACK), but the FTP server never replies with a Server Hello. Does anyone know of a reason why that might be? I'm guessing there is something different in the way Cerberus and ProFTPD have implemented TLS that my client doesn't cater to. My Client Hello on both connections is identical (apart from the port number in the TCP headers), and I am not reusing the session data. I don't have this issue when testing against vsftpd or FileZilla servers.

Found the reason for the lack of response, and it's an interesting one if anyone is ever writing their own FTP client and needs to use FTPS with it. The FTP client I had written issued the PASV command and then immediately opened the data channel port, before then issuing the STOR command on the control channel. This behaviour is fine for all FTP servers when using unencrypted FTP. However, as I discovered, you have to beware when using TLS. With ProFTPD and Cerberus FTP, the server doesn't seem to attach a TLS listener to the data port until you issue the STOR command (or equivalent), so it won't negotiate TLS on that port until the command has been sent, whereas other FTP servers like vsftpd and FileZilla are happy to negotiate TLS as soon as the port is opened. So the solution was to open the data connection after sending the STOR command.
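A minimal sketch of the corrected ordering, in C. Note that send_cmd, read_reply, parse_pasv, tcp_connect, and tls_handshake are hypothetical helpers wrapping your own socket and TLS code, not calls from any real library:

    /* Sketch only: send_cmd/read_reply/parse_pasv/tcp_connect/tls_handshake
     * are hypothetical wrappers around the control socket and TLS library. */
    send_cmd(ctrl, "PASV");
    reply = read_reply(ctrl);        /* 227 Entering Passive Mode (h1,h2,h3,h4,p1,p2) */
    parse_pasv(reply, host, &port);  /* data port = p1 * 256 + p2 */

    /* Wrong order (fine for plain FTP, hangs on ProFTPD/Cerberus with FTPS):
     *   data = tcp_connect(host, port);
     *   tls_handshake(data);              <- Server Hello never arrives
     *   send_cmd(ctrl, "STOR file.txt");
     */

    /* Correct order: issue STOR first, then open and secure the data channel. */
    send_cmd(ctrl, "STOR file.txt");
    data = tcp_connect(host, port);  /* server attaches its TLS listener after STOR */
    tls_handshake(data);             /* Server Hello now arrives */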

Related

Problem with Zentyal proxy 6.1 and CONNECT request in embedded application

I am working on a small application on an embedded platform which has to send some classified information to a server. This connection to the server is encrypted using SSL. The encryption is tunneled through a proxy - in this case a Zentyal proxy. The embedded application is written in C and the connectivity part is done with wolfSSL and lwIP.
The application works fine with Zentyal 5.1. But recently the proxy server was updated to 6.1 and now the connection is failing all the time. Debugging the issue, I have found the problem occurs when the application asks the server for a tunnel connection. What I see happening is that the application sends the CONNECT request to the proxy ...
... to which the proxy answers with a 200 Connection Established.
But after that packet is received, the proxy sends another message with Proxy-Connection: Close, which has the effect that the connection is shut down before the SSL handshake.
I have tried different configurations in Zentyal (transparent proxy enabled, cache disabled, etc.) but the error remains the same. Also, I have added different HTTP headers like Proxy-Connection: Keep-Alive, but the connection is still being closed.
Maybe this is a problem with the 6.1 version. I have tried looking through the change log but there is no reference to any change in the processing of the CONNECT request. Nor are there any known problems related to the way CONNECT is being handled.
Any advice?
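For reference, this is roughly the tunnel setup being described; a minimal sketch using plain C sockets (the hostname and port are placeholders, and with lwIP the sockets API equivalent would apply):

    /* Sketch: establish a CONNECT tunnel through the proxy over an already
     * connected TCP socket `s`, then hand the socket to the TLS library. */
    const char *req =
        "CONNECT example.com:443 HTTP/1.1\r\n"
        "Host: example.com:443\r\n"
        "Proxy-Connection: Keep-Alive\r\n"
        "\r\n";
    send(s, req, strlen(req), 0);
    /* Read until "\r\n\r\n" and expect "HTTP/1.1 200 Connection Established".
     * Only after that should the TLS handshake (e.g. wolfSSL_connect) begin;
     * a premature close at this point is exactly the failure described above. */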

Error 138: Connection Timed out on NoMachine Client - always

I am trying to connect from a NoMachine client on a Windows 7 machine to an OpenSUSE machine. I can only connect via NX; however, I keep running into Error 138: Connection Timed out. I can connect via SSH from my command prompt, but I seem to be unable to connect this way. Does anyone know a solution? I've been at this since morning with no light in sight!
Routers supporting UPnP or NAT-PMP are configured automatically to pass connections to NoMachine, and all required information is displayed on the initial screen (Welcome to NoMachine).
Routers not supporting UPnP or NAT-PMP, and firewalls, have to be configured manually to pass traffic to port 4000 (NX protocol), 22 (SSH protocol on Linux/Mac OS X) or 4022 (SSH protocol on Windows).
So, check the configuration first.
I had a similar issue setting up my FTP server.
There are a couple of possible reasons why the connection was not established, but in my case, and perhaps yours, you must allow the service you're trying to run in your firewall settings.
In my case I allowed the FTP port and some other specific port for TCP communication.
This (together with the proper service, router, etc. setup) allowed the communication to be established.

425 Unable to build data connection: No route to host

The firewall is turned off, but I am still not able to execute the put command on FTP:
    ftp> cd /web
    250 CWD command successful
    ftp> binary
    200 Type set to I
    ftp> put C:\sample.xml
    200 PORT command successful
    425 Unable to build data connection: No route to host
Your script works in active mode, i.e. the client gives the server a client-side IP and port using the PORT command and the server tries to connect to it. Because the server replies that it has no route to your host, my guess is that:
- you are in an internal network, i.e. using a private IP which is not directly accessible from the internet; this IP is used in the PORT command,
- the server is on the internet,
- from the internet the server of course cannot connect into a private network, thus you get this error message.
Fix: use passive mode (i.e. you should see a PASV, not a PORT, command).
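For illustration, passive mode means the client parses the server's 227 reply and opens the data connection itself. A small C sketch of parsing that reply (the 227 reply format is standard, but treat the helper itself as an assumption, not part of any library):

    #include <stdio.h>
    #include <string.h>

    /* Sketch: extract host/port from a "227 Entering Passive Mode" reply.
     * Returns the data port, or -1 on a malformed reply. */
    static int parse_pasv(const char *reply, char host[16])
    {
        int h1, h2, h3, h4, p1, p2;
        const char *p = strchr(reply, '(');
        if (!p || sscanf(p, "(%d,%d,%d,%d,%d,%d)",
                         &h1, &h2, &h3, &h4, &p1, &p2) != 6)
            return -1;
        snprintf(host, 16, "%d.%d.%d.%d", h1, h2, h3, h4);
        return p1 * 256 + p2;   /* e.g. (...,195,80) -> port 50000 */
    }

The client then connects a fresh TCP socket to that host and port for the transfer, instead of waiting for the server to connect back in.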
Windows ftp.exe operates in active mode only, and your FTP server (or the network path to it) may require passive FTP transfers.
Test your commands on a different FTP server to see if it works elsewhere.

Nagios client TCP connection to Nagios server using NSCA - how to make this connection stay up forever

I have set up a Nagios distributed monitoring environment and I am able to send passive checks to the Nagios server using send_nsca. When I look at the handshake between the Nagios client and the Nagios server, I see that the client establishes a TCP connection to the server whenever it has something to send and terminates the connection once it is done sending the information. I want the TCP connection to stay up forever instead of terminating every time after a data transfer is done. Could anyone please let me know how to make this happen?
You cannot do this without modifying the standard NSCA daemon. Normally, it will time out and that's why the NSCA client reestablishes the connection.
I've implemented send_nsca in both Perl and Ruby, and in both cases cannot make a persistent connection work.
A better solution, though, if you are using Nagios 3.x, is to install the Livestatus module (part of check_mk). This allows passive checks to be submitted, but supports a persistent connection and a whole lot more. We've moved to using this instead for many cases.

Socket server not accepting clients from other computers (connect failed: 10060)

I am trying to use this C socket class, but it only works when I use it on my own computer.
Desktop only
Server is started like this: cSocketServer -p:2030 -i:192.168.178.22
Client connects: cSocketclient -p:2030 -s:192.168.178.22
Works fine.
Desktop server, laptop client
Server: cSocketServer -p:2030 -i:192.168.178.22
Client: cSocketclient -p:2030 -s:192.168.178.22
The exact same as above, but this fires the connect failed: 10060 error, which essentially means it timed out.
Desktop only (external address)
Server: cSocketServer -p:2030 -i:192.168.178.22
Client: cSocketclient -p:2030 -s:xx.xx.xx.xx
Where xx.xx.xx.xx is my external IP address.
Same error: connect failed: 10060. Port 2030 is definitely open and accessible, because I tested it with a few unrelated applications that allow their users to choose their own ports (like uTorrent). While those run, whatismyip.org states port 2030 is open, but when I run my application it says it timed out. Those applications do not have any special privileges in the firewall.
But even if I did mess up some firewall/router settings (which I'm fairly sure I didn't) that wouldn't explain why I can't connect to the server from within my local network. Other services (such as file sharing) work fine so there is definitely a connection between the 2 computers.
Both client and server run on windows 7 64-bit.
Also, for some reason, each client that connects gets its own inbound port assigned or something? Is that normal? When clients connect, the server states:
Accepted client: 192.168.178.22:55156
Accepted client: 192.168.178.22:55164
Accepted client: 192.168.178.22:55176
What's that all about?
If two TCP connections have the same source IP, destination IP, source port, and destination port, there would be no way to tell them apart. To ensure they differ somewhere, clients typically assign a unique source port to every outbound connection they make.
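A short sketch of where those numbers come from, using POSIX sockets (the Winsock calls are analogous): the port the server prints is the client's ephemeral source port, as reported by accept().

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>

    /* Sketch: accept one client and print the peer address the way the
     * server output above does. */
    void report_client(int listen_fd)
    {
        struct sockaddr_in peer;
        socklen_t len = sizeof peer;
        int client = accept(listen_fd, (struct sockaddr *)&peer, &len);
        if (client >= 0)
            printf("Accepted client: %s:%d\n",
                   inet_ntoa(peer.sin_addr),   /* same source IP ...        */
                   ntohs(peer.sin_port));      /* ... unique source port    */
    }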
As for the errors, you really need to do some troubleshooting. Do the listening sockets show up in a 'netstat'? Do you get the same problem with the firewalls turned off? Are the server and client on the same LAN (for the internal address case)? Is port forwarding enabled and working in the router (for the external address case)?
My bet is that the external address case won't work because you haven't configured the port to be forwarded by your router or your router doesn't support hairpin (local access to external IP). Other programs may work because they support UPnP or don't rely on hairpin (all access to external IPs come from outside your LAN).
I have no immediate explanation for why your desktop-to-laptop won't work inside your LAN. Are you sure both computers are in the same LAN? Can they ping each other?
Get rid of the -i argument to the server, or specify 0.0.0.0 and fix the code so that isn't treated as an error - the fact that it currently is, is itself a bug.
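A minimal sketch of that fix with plain C sockets (POSIX shown; the Winsock version is analogous, and port 2030 matches the example above):

    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>

    /* Sketch: bind the listener to 0.0.0.0 (INADDR_ANY) so connections are
     * accepted on every local interface, not just one configured address. */
    int make_listener(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);  /* 0.0.0.0 */
        addr.sin_port = htons(2030);
        if (fd < 0 || bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0)
            return -1;
        listen(fd, SOMAXCONN);
        return fd;
    }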
