libcurl gnutls_handshake() failed: A TLS warning alert has been received

I am essentially trying to do the following:
http://curl.haxx.se/libcurl/c/https.html
I have my apache server set up with mod_ssl and a server cert. I added the line:
curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L);
and also tried:
curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, false);
but I keep getting the error: gnutls_handshake() failed: A TLS warning alert has been received.
Does anyone know how to fix this or get around it?
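For reference, here is a minimal sketch of what I am doing, based on the linked example (the URL is a placeholder for my own server):
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    CURL *curl;
    CURLcode res;

    curl_global_init(CURL_GLOBAL_DEFAULT);
    curl = curl_easy_init();
    if (curl) {
        /* placeholder URL for my mod_ssl apache server */
        curl_easy_setopt(curl, CURLOPT_URL, "https://localhost/");
        /* the option mentioned above: skip peer certificate verification */
        curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L);

        res = curl_easy_perform(curl);
        if (res != CURLE_OK)
            fprintf(stderr, "curl_easy_perform() failed: %s\n",
                    curl_easy_strerror(res));
        curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
    return 0;
}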

Realizing that the question is fairly old and one answer was already accepted, here is an alternative answer that worked perfectly for me and may be useful for folks coming here from search:
Replace libcurl-gnutls with the libcurl-openssl alternative.
I noticed that the certificate error was only generated by programs using libcurl, and not by browsers, so I assumed that GnuTLS was at fault here rather than the certificates.
Here is what worked for me (Ubuntu 12.04 LTS):
$ sudo apt-get remove libcurl4-gnutls-dev
$ sudo apt-get install libcurl4-openssl-dev
All programs that were relying on libcurl started working fine immediately after I replaced the libraries (I also recompiled the programs just in case).
Note: this solution will only help you if you get a warning with GNUTLS but not with, say, browsers. That is, I am assuming the certificate chain is actually set up correctly.

I attempted the above solution and it did not work for me. In Python, I tried the following option, which effectively forces SSLv3 and disables TLS.
c.setopt(pycurl.SSLVERSION, pycurl.SSLVERSION_SSLv3)
In PHP, the same should be achievable by setting the CURLOPT_SSLVERSION option to 3.
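For completeness, the equivalent in libcurl's C API would be a sketch like the following (CURL_SSLVERSION_SSLV3 is libcurl's constant for this; note that SSLv3 is considered insecure and is disabled in modern builds):
/* force SSLv3 instead of negotiating TLS (sketch; see caveat above) */
curl_easy_setopt(curl, CURLOPT_SSLVERSION, CURL_SSLVERSION_SSLV3);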

That "usual" warning you get means there is no well-known organization (certificate authority) willing to vouch for its authenticity. That is what the "usual" warning means and that is what the TLS warning is telling you.
Try to set CURLOPT_SSL_VERIFYHOST to 0 or install a proper certificate.
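If you go the proper-certificate route, you can keep verification enabled and point libcurl at the CA bundle that signed your server certificate; a sketch, assuming a typical bundle path:
/* keep verification on; the CA bundle path is an assumption, adjust for your system */
curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 1L);
curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 2L);
curl_easy_setopt(curl, CURLOPT_CAINFO, "/etc/ssl/certs/ca-certificates.crt");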

Related

Undefined reference for TLSv1_1_client_method though 'nm' says otherwise

In my SSL client code, I get an undefined reference error when compiling with TLSv1_1_client_method(). If I remove the TLS method call, linking is fine. Running ldd on the binary I see:
%ldd client_sim_ssl
libssl.so.10 => /usr/lib64/libssl.so.10
libcrypto.so.10 => /usr/lib64/libcrypto.so.10
Now, if I check nm for /usr/lib64/libssl.so.10 :
%nm /usr/lib64/libssl.so.10 | grep TLSv1_1_client_method
0000000000030d30 T TLSv1_1_client_method
OpenSSL version installed: OpenSSL 1.0.1g 7 Apr 2014
Why the undefined reference error when the library it links to has the definition? What is the missing piece?
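For reference, a minimal reproduction of the failing code looks roughly like this (a sketch of the OpenSSL 1.0.1 API; only the TLSv1_1_client_method() line triggers the undefined reference):
#include <openssl/ssl.h>

int main(void)
{
    SSL_library_init();
    /* this is the call that fails to link: */
    const SSL_METHOD *method = TLSv1_1_client_method();
    SSL_CTX *ctx = SSL_CTX_new(method);
    if (ctx)
        SSL_CTX_free(ctx);
    return 0;
}
/* compiled with, e.g.: gcc client_sim_ssl.c -o client_sim_ssl -lssl -lcrypto */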
EDIT 1:
I had logged off the system I was working on. While experimenting with upgrading the version to OpenSSL 1.0.1g, I think I messed up the libraries. The SSH connection is having problems. :-(
ssh root@10.200.2.197
ssh_exchange_identification: Connection closed by remote host
Maybe my original problem is also related to this?
I will update the post with more details shortly, once I fix the SSH connection issue.
EDIT 2:
My system is RHEL 6.1. For the SSH issue I had to reinstall the OpenSSL RPM from the CD, since ssh reported a version mismatch error. With this OpenSSL reinstallation, the libraries in /usr/lib/ and /usr/lib64/ have been set right. Now I no longer see TLSv1_1_client_method() with nm.
I must have put the libraries from 1.0.1g into /usr/lib64/, which resulted in nm showing the TLS method, while compiling used other versions? Not sure.
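One way to see which copies of the library the dynamic linker actually knows about (a quick sketch):
$ ldconfig -p | grep libssl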

Google compute engine returned 399 internal server error

Google compute engine console return 399 error code already asks my question, but the solution is not as suggested there. Since that URL is a little old, I am starting a new thread.
I am trying to do a wget using:
wget https://console.developers.google.com/m/cloudstorage/b/m-lab/o/ndt/2012/05/23/20120523T000000Z-mlab1-ams01-ndt-0000.tgz
I see the error:
Resolving console.developers.google.com (console.developers.google.com)... 216.239.32.27
Connecting to console.developers.google.com (console.developers.google.com)|216.239.32.27|:443... connected.
HTTP request sent, awaiting response... 399 Internal Server Error
2014-08-26 20:02:18 ERROR 399: Internal Server Error.
I am new to Linux commands, so I wanted to know if I am missing something obvious.
The address works when I use the Chrome downloader but fails with wget for me as well.
I have never seen this behaviour before.
You can also use cURL to download files. I used the -v switch and got a DNS error (no idea why):
curl -v http://console.developers.googlO.com/m/cloudstorage/b/m-lab/o/ndt/2012/05/23/20120523T000000Z-mlab1-ams01-ndt-0000.tgz
These files cannot be downloaded with traditional tools; you have to use the gsutil utility provided by Google, which also makes automation possible.
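For example, with gsutil installed and configured, the download would look something like this (a sketch using the same bucket and object path as the URL above):
$ gsutil cp gs://m-lab/ndt/2012/05/23/20120523T000000Z-mlab1-ams01-ndt-0000.tgz .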
You need to use the following URI pattern:
http://storage.googleapis.com/<bucket>/<object>
In this case, you can download that file using the command:
wget http://storage.googleapis.com/m-lab/ndt/2012/05/23/20120523T000000Z-mlab1-ams01-ndt-0000.tgz

OpenSSL unknown protocol

I wrote some code that worked great for me; I don't remember modifying it.
I compiled it today and tried to run it, but I got this error:
error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
I also tried to connect to the host name with the openssl client, but I got an error saying: Linked closed: ping timeout.
I also tried installing OpenSSL again, and even installing an older version, but it didn't work.
Any solutions?
I also tried to connect to the host name with the openssl client, but I got an error saying: Linked closed.
It sounds like the remote host is not available. If you can't connect to the remote host using command line tools, then there's nothing you can do with your code to make it work. Verify that the remote host is responding correctly first.
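For example, you can test the TLS endpoint directly with the openssl command-line client (a sketch; replace the host and port with your own):
$ openssl s_client -connect example.com:443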

port selfupdate: "MacPorts sources: command execution failed"

I am trying to selfupdate my Macports, but I am getting the following message:
Error: /opt/local/bin/port: port selfupdate failed: Error synchronizing
MacPorts sources: command execution failed
I checked my /opt/local/bin/macports and the directory does not exist. Instead, it is in /opt/local/var. Could that be the issue?
Running with -dt, I get the following:
[Users/user] > selfupdate
DEBUG: MacPorts sources location: /opt/local/var/macports/sources/rsync.macports.org/release/base
---> Updating MacPorts base sources using rsync
rsync: failed to connect to rsync.macports.org: Connection refused (61)
rsync error: error in socket IO (code 10) at /SourceCache/rsync/rsync-42/rsync/clientserver.c(105) [receiver=2.6.9]
Command failed: /usr/bin/rsync -rtzv --delete-after rsync://rsync.macports.org/release/base/ /opt/local/var/macports/sources/rsync.macports.org/release/base
Exit code: 10
DEBUG: Error synchronizing MacPorts sources: command execution failed
while executing
"macports::selfupdate [array get global_options] base_updated"
Error: /opt/local/bin/port: port selfupdate failed: Error synchronizing MacPorts sources: command execution failed
What is error 61? Any ideas how I can fix that?
I had this same problem recently; it turned out I had forgotten to run the command as root. If anyone else is having this problem, be sure to run the command like so:
sudo port selfupdate
I was behind a firewall. Tried on a different network and it worked.
There is no /opt/local/bin/macports. The executable you need is /opt/local/bin/port. (Port files are in /opt/local/var/..., which is correct.)
Based on the "command execution failed" message:
you might have forgotten to run as root.
port forks the following programs: rsync, tclsh, openssl, tar, chmod, chown.
Are these executable and in the PATH? (Is /opt/local/bin in your PATH as well? A quick check is sketched below.)
If that doesn't help, run port with -dt to get all sorts of debug info. That might help with finding the problem. Append the interesting parts to your question, maybe.
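A quick sketch of that check:
$ which rsync tclsh openssl tar chmod chown
$ echo $PATH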
I faced the same issue, but I used the following method afterwards.
Go to:
$prefix/etc/macports/sources.conf
(my path is like this):
/opt/local/etc/macports/sources.conf
comment out the rsync entry, and add a new entry as follows:
#rsync://rsync.macports.org/release/tarballs/ports.tar [default]
https://distfiles.macports.org/ports.tar.gz [default]
After that you can run:
sudo port -d sync
It's also explained on the MacPorts website.
Update for Mavericks: to ensure the XCode command line tools are installed, open a terminal and run xcode-select --install, then follow the instructions in the resulting pop-up window:
accept license
Of course, this is in addition to the other tips such as making sure to run sudo port selfupdate.
If anybody else is having this issue and they've recently updated XCode, the root of my problem was that Command Line Tools had been omitted from the latest build.
Opening XCode and installing Command Line Tools via the XCode preference panel fixed this error being thrown by MacPorts.
If your company blocks access via rsync, you can use the HTTP tarball, as explained here.
Hope this helps.
EDIT: I now prefer to use Homebrew.
I had the same error too. It happens because the network connection is being rejected. If you are using university/company WiFi or a public connection, a firewall may be refusing the connection.
You can see this in the -dt output: "rsync: failed to connect to rsync.macports.org: Connection refused (61)".
There are workarounds available, provided on the MacPorts site:
1) Using svn.
2) If svn fails too, you can try the daily tarball.
You can test the changes by running "sudo port -d sync".
Note: if https fails, you can replace it with http, but doing so is not recommended, as you will be fetching over an insecure connection.
I faced the same issue.
The main problem was my network: the port was blocked for
rsync://rsync.macports.org/release/tarballs/ports.tar
Try using another network.
For anyone whose problem still exists: maybe you forgot to agree to the Xcode license:
# sudo xcodebuild -license
Remember to read it through and type 'agree' at the end.
In my case, the problem was internal to MacPorts! I had updated rsync with MacPorts (the one delivered by Apple is old), and then MacPorts failed to use it (/opt/local/bin/rsync), instead asking for /usr/bin/rsync, which did not exist (or had been erased to force use of the MacPorts rsync?). I created a soft link between the two, and now it works again.
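The soft link was something like this (a sketch; paths as described above):
$ sudo ln -s /opt/local/bin/rsync /usr/bin/rsync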

Nagios: CRITICAL - Socket timeout after 10 seconds

I've been running nagios for about two years, but recently this problem started appearing with one of my services.
I'm getting
CRITICAL - Socket timeout after 10 seconds
for a check_http -H my.host.com -f follow -u /abc/def check, which used to work fine. No other services are reporting this problem. The remote site is up and healthy, and I can do a wget http://my.host.com/abc/def from the nagios server, and it downloads the response just fine. Also, doing a check_http -H my.host.com -f follow works just fine, i.e. it's only when I use the -u argument that things break. I also tried passing it a different user agent string, no difference. I tried increasing the timeout, no luck. I tried with -v, but all I get is:
GET /abc/def HTTP/1.0
User-Agent: check_http/v1861 (nagios-plugins 1.4.11)
Connection: close
Host: my.host.com
CRITICAL - Socket timeout after 10 seconds
... which does not tell me what's going wrong.
Any ideas how I could resolve this?
Thanks!
Try using the -N option of check_http.
I ran into similar problems, and in my case the web server didn't terminate the connection after sending the response (https was working, http wasn't). check_http tries to read from the open socket until the server closes the connection. If that doesn't happen then the timeout occurs.
The -N option tells check_http to receive only the header, but not the content of the page / document.
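In the case above, the check would then become, e.g.:
check_http -H my.host.com -f follow -u /abc/def -N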
I tracked my issue down to the security providers configured in the most recent version of openSUSE.
From summaries of other web pages, it appears to be an issue with an attempt to use the TLSv2 protocol, which does not appear to work correctly, or is missing something in the default configuration to allow it to work.
To overcome the problem I commented out the security provider in question from the JRE security configuration file.
#security.provider.10=sun.security.pkcs11.SunPKCS11
The security.provider. value may be different in your configuration, but essentially the SunPKCS11 provider is at issue.
This configuration is normally found in
$JAVA_HOME/lib/security/java.security
of the JRE that you are using.
Fixed with this URL in nrpe.cfg (on Debian 6.0 Squeeze, using nagios-nrpe-server):
command[check_http]=/usr/lib/nagios/plugins/check_http -H localhost -p 8080 -N -u /login?from=%2F
For whoever is interested: I stumbled into this problem too, and it ended up being caused by mod_itk on the web server.
A patch is available, even though it seems it is not included in the current CentOS or Debian packages:
https://lists.err.no/pipermail/mpm-itk/2015-September/000925.html
In my case, the /etc/postfix/main.cf file was not configured correctly.
My mail relay host was not defined, and the configuration was also very restrictive.
I had to add:
relayhost = mailrelay.ext.example.com
smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination
