I researched this question and could not find any pointers to this error. I am essentially trying to connect to a server using this libcurl example program:
https://curl.haxx.se/libcurl/c/ftpsget.html
The program compiles fine but gives a runtime error as follows:
Trying (Some IP)...
* Connected to (Some server name) (same ip as above) port 21 (#0)
* found 173 certificates in /etc/ssl/certs/ca-certificates.crt
* found 697 certificates in /etc/ssl/certs
* ALPN, offering http/1.1
* gnutls_handshake() failed: An unexpected TLS packet was received.
* Closing connection 0
curl told us 35
I have access to this server through a username and password.
Kind of an old question, but in case it helps others...
You might need to use --insecure --ftp-ssl --tlsv1, and sometimes ftps:// but other times ftp://, depending on the server.
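For example, a full command of that shape might look roughly like this (host, credentials, and remote path are placeholders; on newer curl releases --ftp-ssl is spelled --ssl):
curl -v --insecure --ftp-ssl --tlsv1 --user myuser:mypassword ftp://ftp.example.com/dir/file.txt -o file.txt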
Hope it helps!
I have a .NET 6 service running in a container (based on the mcr.microsoft.com/dotnet/aspnet:6.0-focal image). When my service needs to talk to the SQL Server database, I must set SECLEVEL=1 in my OpenSSL config. I run the following when creating the container (taken from this github issue: https://github.com/dotnet/SqlClient/issues/776#issuecomment-825418533)
RUN sed -i '1i openssl_conf = default_conf' /etc/ssl/openssl.cnf && echo "\n[ default_conf ]\nssl_conf = ssl_sect\n[ssl_sect]\nsystem_default = system_default_sect\n[system_default_sect]\nMinProtocol = TLSv1.2\nCipherString = DEFAULT:@SECLEVEL=1" >> /etc/ssl/openssl.cnf
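For clarity, once that command has run, /etc/ssl/openssl.cnf starts with openssl_conf = default_conf and ends with roughly this appended block:
[ default_conf ]
ssl_conf = ssl_sect
[ssl_sect]
system_default = system_default_sect
[system_default_sect]
MinProtocol = TLSv1.2
CipherString = DEFAULT:@SECLEVEL=1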
If I don't, I get this error:
A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: SSL Provider, error: 31 - Encryption(ssl/tls) handshake failed)
But... my connection string does not set anything about encryption, or anything else that indicates I must make an encrypted connection. And when I look at EF Core 6 or lower, Encrypt=False is the default. So if you don't do anything explicit, I assume the connection is not encrypted.
My connection string looks like this:
Server=123,456;Database=123;User ID=123;PWD=123;multipleactiveresultsets=True;
On the .NET side I'm using Microsoft.EntityFrameworkCore.SqlServer 6.0.13, which has a dependency on Microsoft.Data.SqlClient 2.1.4. Both of these have Encrypt=False as the default for connection strings.
And that's where I'm unable to understand what happens.
If the connection is not encrypted, why do I have to set SECLEVEL=1 to avoid handshake errors? Why does a handshake even happen?
If the SQL Server you are connecting to is configured with force encryption, TLS/SSL will be used for all communication regardless of whether the client requests encryption or not.
Even if encryption is not required by client or server, login packets for the credential exchange are still encrypted. The setup needed to do so occurs as part of the pre-login handshake as described in this answer. This introduces the TLS/SSL requirement.
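For illustration, even a connection string that opts out explicitly (placeholders as in the question) still goes through that pre-login handshake to protect the credential exchange:
Server=123,456;Database=123;User ID=123;PWD=123;multipleactiveresultsets=True;Encrypt=False;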
I have two domain controllers: one is the PDC, which also hosts the Root CA (not best practice) and DNS; the other is just a domain controller.
I have done all the needed configuration for LDAPS on the second domain controller and tested with ldp, which works fine from both the workstation and the DC itself. However, I could not connect to it from a Linux server.
When using openssl s_client -connect dc02.domainname:636 -showcerts, it always returns "no peer certificate available":
CONNECTED(00000003)
write:errno=104
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 289 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : 0000
Session-ID:
Session-ID-ctx:
Master-Key:
Key-Arg : None
Krb5 Principal: None
PSK identity: None
PSK identity hint: None
Start Time: 1610484233
Timeout : 300 (sec)
Verify return code: 0 (ok)
---
The firewall and everything else are OK, since the connection tests are all good. I don't know where to start troubleshooting this problem, as I can see the certs are in the computer store and the service keystore.
Could someone provide a hint as to where I can continue this troubleshooting and investigation?
Thanks.
This may depend on the permissions of the cert that you are attempting to grab from the server. You may want to add an Everyone group and allow it read-only access, or add the Linux server to the domain via SSSD and then give the server access in the same menu. Also, ensure the cert you are attempting to push is publicly visible.
REF (How to add a Linux machine to AD): https://www.datasunrise.com/blog/professional-info/integrating-a-linux-machine-into-windows-active-directory-domain/
(Screenshot: CA Properties)
It turns out that port 88 (Kerberos) is also needed when doing LDAPS authentication and such. After opening port 88 for both TCP and UDP, the test and everything else works fine.
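For reference, opening that port on the DC might look roughly like this with Windows Firewall rules (rule names are arbitrary):
netsh advfirewall firewall add rule name="Kerberos TCP 88" dir=in action=allow protocol=TCP localport=88
netsh advfirewall firewall add rule name="Kerberos UDP 88" dir=in action=allow protocol=UDP localport=88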
Thanks for the suggestions.
(Submitting the following Q&A exchange, as it may be of benefit to others receiving similar error messages...)
Question submitted by "M":
I have the Windows 7 64-bit ODBC driver. When using Attunity Replicate to read 1 TB of data from Snowflake, it gives the below error after running for around 5 hours:
Result download worker error: Worker error: [Snowflake][Snowflake] (4)
REST request for URL https://sfc-va-ds1-2-customer-stage.s3.amazonaws.com/ogn3-s-vass2706/results/018ef7de-01c5-8ec1-0000-2ab10047f27a_0/main/data_1_5_225?x-amz-server-side-encryption-customer-algorithm=AES256&response-content-encoding=gzip&AWSAccessKeyId=AKIAJP5BI3JZEVKDRXDQ&Expires=1568894118&Signature=MoTOQPf5ZiBjX8YNYWJ6J0KaH5Q%3D failed: CURLerror (curl_easy_perform() failed) - code=2 msg='Failed initialization'.
Note: this error occurs around 5 hours after the job is triggered.
"KM" Response #1:
1) Is this issue happening intermittently or all the time?
2) Is this issue happening with a small dataset?
3) What Snowflake ODBC version are you using? Can you use the latest ODBC driver version 2.19.14 and let us know the behavior?
4) Are you using a proxy in your network?
5) Please run the below statement from the Snowflake Web GUI or from a SnowSQL terminal to get the list of endpoints that need to be whitelisted in the firewall/network. (Share the endpoint details with your networking team.)
SELECT SYSTEM$WHITELIST();
OR (if you want a slightly more readable output):
select t.value:type::varchar as type, t.value:host::varchar as host, t.value:port as port from table(flatten(input => parse_json(system$whitelist()))) as t;
Note: In order to function properly, Snowflake must be able to access a set of HTTP/HTTPS addresses. If your server policy denies access to most or all external IP addresses and web sites, you must whitelist these addresses to allow normal service operation.
All communication with Snowflake happens over port 443. However, CRL and OCSP certificate checks are transmitted over port 80. The network administrator for your organization must open your firewall to traffic on ports 443 and 80.
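As a quick sanity check of outbound access on both ports, something like the following could be run from the affected server (the first hostname is a placeholder for your account's deployment URL; the OCSP hostname is an assumption here):
curl -v https://myaccount.snowflakecomputing.com
curl -v http://ocsp.snowflakecomputing.com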
"M" Follow-Up Response #1:
Please find my response below for your questions:
1) This is an intermittent issue; it does not always fail.
2) The issue does not occur with a small data set. For a larger data set, the job runs for 11-12 hours and then fails with the specified error.
3) We are using ODBC driver version 2.19.09.00. We will check with the higher version.
4) No. We are not using any proxy in the network.
5) OK.
I will check and whitelist all the Snowflake IP addresses in our network, install the latest ODBC driver, and run the job again. I will keep you posted about the result.
"M" Follow-Up Response #2:
I have upgraded the ODBC driver to the latest version 2.19.14.
Now the running job fails after around 24 hours with a different error.
Error message:
Result download worker error: Worker error: [Snowflake][Snowflake] (4)
REST request for URL https://sfc-va-ds1-2-customer-stage.s3.amazonaws.com/ogn3-s-vass2706/results/018f030a-0164-1b0c-0000-2ab1004b8b96_0/main/data_5_6_219?x-amz-server-side-encryption-customer-algorithm=AES256&response-content-encoding=gzip&AWSAccessKeyId=AKIAJP5BI3JZEVKDRXDQ&Expires=1569308247&Signature=2DSCUhY7DU56cpq6jo31rU5LKRw%3D failed: CURLerror (curl_easy_perform() failed) - code=28 msg='Timeout was reached'.
Could you please advise on this?
KM Response #2:
1) What is your operating system?
2) Is this issue happening with a small dataset or a big dataset?
3) Can you try to clear some space in the TEMP location of the server? For example, on Windows it will be C:\Windows\Temp and C:\Users\\AppData\Local\Temp\, and on Linux /tmp.
4) Can you make sure the URL https://sfc-va-ds1-2-customer-stage.s3.amazonaws.com is whitelisted?
5) Try the below command to check connectivity:
curl -v -k https://sfc-va-ds1-2-customer-stage.s3.amazonaws.com
"M" Follow-Up Response #3:
1) Operating System: Windows Server 2012 R2.
2) This issue is happening only with a big dataset - particularly when the job runs for around 24 hours.
3) Done. Cleared the space.
4) The URL is whitelisted.
5) On Windows PowerShell, this command gives an error:
Invoke-WebRequest : A parameter cannot be found that matches parameter name 'k'.
At line:1 char:9
+ curl -v -k https://sfc-va-ds1-2-customer-stage.s3.amazonaws.com
+ ~~
+ CategoryInfo : InvalidArgument: (:) [Invoke-WebRequest], ParameterBindingException
+ FullyQualifiedErrorId : NamedParameterNotFound,Microsoft.PowerShell.Commands.InvokeWebRequestCommand
KM Response #3:
Use the curl command to test connectivity to Snowflake. (Make sure curl is installed on the machine; if not, you can download it from a third party, e.g. https://curl.haxx.se/download.html.)
curl -v -k https://sfc-va-ds1-2-customer-stage.s3.amazonaws.com
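Note that in Windows PowerShell, curl is an alias for Invoke-WebRequest, which is why the -k parameter was not recognized above; invoking the binary as curl.exe bypasses the alias:
curl.exe -v -k https://sfc-va-ds1-2-customer-stage.s3.amazonaws.com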
Sometimes this issue can happen when there is not much space left in the TEMP location. You can try to run the job and monitor the %TEMP% space.
I am not sure how the Attunity tool works, but some ETL tools, like Informatica, create temporary files on the server and utilize the %TEMP% location.
"M" Follow-Up Response #4:
Using the curl command, it is now able to connect successfully. I will trigger the job now and monitor the %TEMP% location.
Any other ideas, recommendations, or possible work-arounds?
I'm baffled by this. Working my way through all the answers I've found here on Stack Overflow, I have tried everything I know. Hence posting and asking here.
### sonar.properties
# AD AUTHENTICATION
sonar.security.realm=LDAP
ldap.url=ldap://server.domain.com:389
ldap.bindDn=cn=service_account,ou=mygroup,dc=domain,dc=com
ldap.bindPassword=*********
ldap.authentication=simple
ldap.realm=domain.com
ldap.user.baseDn=ou=mygroup,dc=domain,dc=com
ldap.user.request=(&(objectClass=user)(sAMAccountName={login}))
## ERROR in log
2017.07.27 05:10:27 INFO web[][o.s.p.l.LdapContextFactory] Test LDAP connection: FAIL
2017.07.27 05:10:27 ERROR web[][o.s.s.p.Platform] Background initialization failed. Stopping SonarQube
org.sonar.plugins.ldap.LdapException: Unable to open LDAP connection
...
Caused by: java.net.ConnectException: Connection refused (Connection refused)
Using ldapsearch with the same connection details (URL, bind DN, password, search base, ...) works like a charm.
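For reference, that working test looks roughly like this (the password and the sample login jdoe are placeholders):
ldapsearch -H ldap://server.domain.com:389 -D "cn=service_account,ou=mygroup,dc=domain,dc=com" -w '*********' -b "ou=mygroup,dc=domain,dc=com" "(sAMAccountName=jdoe)"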
Can anybody point me in the right direction or provide some insight?
Thanks in advance,
Eric V.
I’m trying to connect to an LDAP directory over SSL using the Windows LDAP C-API. This fails with error code 0x51 = LDAP_SERVER_DOWN while the event log on the client computer has this:
"The certificate received from the remote server does not contain the expected name. It is therefore not possible to determine whether we are connecting to the correct server. The server name we were expecting is eim-tsi2.sam.develop.beta.ads. The SSL connection request has failed. The attached data contains the server certificate."
This can't be true, since "Ldap Admin" is able to connect over SSL on port 636.
The LDAP directory is an Oracle DSEE which has the CA and the server certificate in the appropriate cert store.
The client has the CA installed in the "Trusted Root Certification Authorities" store, in the "Local Computer" physical store. I assumed this to be the right place for the CA, since my little client program uses the Windows LDAP C-API; LDAP Admin indeed expects the CA there.
Here is an excerpt of my program omitting the error handling and other obvious source code:
ld = ldap_sslinit(host, LDAP_SSL_PORT, 1); // third parameter 1 = establish the connection over SSL (port 636)
// Set options: LDAP version, timeout ...
rc = ldap_set_option(ld, LDAP_OPT_SSL, LDAP_OPT_ON);
// Now connect:
rc = ldap_connect(ld, NULL);
Result:
0x51 = LDAP_SERVER_DOWN
Connecting without SSL succeeds so the LDAP server is generally accessible.
Since Ldap Admin is able to connect over SSL, I assume the certificates are valid and in the right place. But obviously the LDAP API expects them somewhere else and cannot get the server certificate from the server. I configured the certs as described here: https://msdn.microsoft.com/en-us/library/aa366105%28v=vs.85%29.aspx
What am I doing wrong?
Sometimes it helps to read error messages more carefully. The entry in the event viewer caused by an unsuccessful bind over SSL was "The server name we were expecting is eim-tsi2.sam.develop.beta.ads."
I should have noticed that the name should have been eim-tsi2.cgn.de (etc.) instead; the domain name part was wrong.
This is a bug in Schannel which can be solved by an entry in the registry as described here: https://support.microsoft.com/en-us/kb/2275950.
I still do not know why Ldap Admin was able to connect without that additional registry key, although it also uses the WINLDAP API and therefore should have run into the same error. But that doesn't matter any more.
Thanks, Andrew, for your help.