I have two domain controllers. One holds the PDC role and also runs the root CA (not best practice) and DNS; the other one is just a domain controller.
I have done all the required LDAPS configuration on the second domain controller and tested it with ldp.exe, which works fine from both a workstation and the DC itself. However, I cannot connect to it from a Linux server.
When using openssl s_client -connect dc02.domainname:636 -showcerts, it always returns "no peer certificate available":
CONNECTED(00000003)
write:errno=104
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 289 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : 0000
Session-ID:
Session-ID-ctx:
Master-Key:
Key-Arg : None
Krb5 Principal: None
PSK identity: None
PSK identity hint: None
Start Time: 1610484233
Timeout : 300 (sec)
Verify return code: 0 (ok)
---
The firewall and everything else are OK, since the basic connection tests are all good. I don't know where to continue troubleshooting this problem, as I can see the certificates are in the computer store and the service keystore.
Could someone give me a hint on where to continue this troubleshooting and investigation?
Thanks.
This may depend on the permissions of the certificate that you are attempting to grab from the server. You may want to add the Everyone group and allow it read-only access, or add the Linux server to the domain via SSSD and then give the server access in the same menu. Also, ensure the cert you are attempting to push is publicly visible.
REF (How to add a Linux machine to AD): https://www.datasunrise.com/blog/professional-info/integrating-a-linux-machine-into-windows-active-directory-domain/
(Screenshot: CA Properties)
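If you prefer to check or adjust the private key permissions from the command line instead of the Certificates MMC, a rough sketch (the MachineKeys path applies to CAPI/RSA keys; the container file name is a placeholder that certutil prints as "Unique container name"):

rem On the DC: list the machine certificates and note the key container behind the LDAPS certificate
certutil -store My

rem Grant the Everyone group read access on the key file (path and container name are examples)
icacls "%ProgramData%\Microsoft\Crypto\RSA\MachineKeys\<unique-container-name>" /grant "Everyone:(R)"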
It turns out that port 88 (Kerberos) is also needed for LDAPS authentication. After opening port 88 for both TCP and UDP, the test and everything else works fine.
Thanks for the suggestions.
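For anyone landing here with the same symptom, a rough sketch of how to confirm this from the Linux side (host name, base DN and bind account are placeholders taken from the question; ldapsearch comes from the OpenLDAP client tools):

# Kerberos (88) and LDAPS (636) should both be reachable from the Linux server
# (assumes a netcat build that supports -z)
nc -vz dc02.domainname 88
nc -vz dc02.domainname 636

# The TLS handshake should now return the DC certificate chain
openssl s_client -connect dc02.domainname:636 -showcerts </dev/null

# Optional: an authenticated LDAPS search that trusts the exported root CA certificate
LDAPTLS_CACERT=/path/to/rootca.pem ldapsearch -H ldaps://dc02.domainname:636 \
  -D "user@domainname" -W -b "dc=domainname" "(sAMAccountName=user)" cn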
I have a .NET 6 service running in a container (based on the mcr.microsoft.com/dotnet/aspnet:6.0-focal image). When my service needs to talk to the SQL Server database, I must set SECLEVEL=1 in my OpenSSL config. I run the following when building the container image (taken from this GitHub issue: https://github.com/dotnet/SqlClient/issues/776#issuecomment-825418533):
RUN sed -i '1i openssl_conf = default_conf' /etc/ssl/openssl.cnf && \
    echo "\n[ default_conf ]\nssl_conf = ssl_sect\n[ssl_sect]\nsystem_default = system_default_sect\n[system_default_sect]\nMinProtocol = TLSv1.2\nCipherString = DEFAULT:@SECLEVEL=1" >> /etc/ssl/openssl.cnf
If I don't, I get this error:
A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: SSL Provider, error: 31 - Encryption(ssl/tls) handshake failed)
But... my connection string does not set anything about encryption or anything else indicating that I must make an encrypted connection. And when I look at EF Core 6 or earlier, Encrypt=False is the default. So if you don't do anything explicit, I assume the connection is not encrypted.
My connection string looks like this:
Server=123,456;Database=123;User ID=123;PWD=123;multipleactiveresultsets=True;
On the .NET side I'm using Microsoft.EntityFrameworkCore.SqlServer 6.0.13, which has a dependency on Microsoft.Data.SqlClient 2.1.4. Both of these have Encrypt=False as the default for connection strings.
And that's where I'm unable to understand what happens.
If the connection is not encrypted, why do I have to set SECLEVEL=1 to avoid handshake errors? Why does a handshake even happen?
If the SQL Server you are connecting to is configured with force encryption, TLS/SSL will be used for all communication regardless of whether the client requests encryption or not.
Even if encryption is not required by client or server, login packets for the credential exchange are still encrypted. The setup needed to do so occurs as part of the pre-login handshake as described in this answer. This introduces the TLS/SSL requirement.
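For completeness, this is roughly what explicit opt-out and opt-in look like in the connection string (the values are the placeholders from the question; Encrypt and TrustServerCertificate are standard SqlClient keywords). Neither variant removes the encrypted login exchange described above:

Explicit opt-out (matches the Microsoft.Data.SqlClient 2.x default):
Server=123,456;Database=123;User ID=123;PWD=123;multipleactiveresultsets=True;Encrypt=False;

Explicit opt-in, skipping certificate chain validation (test environments only):
Server=123,456;Database=123;User ID=123;PWD=123;multipleactiveresultsets=True;Encrypt=True;TrustServerCertificate=True;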
We have a Windows Event Collector (WEC) in DOMAIN1. DOMAIN1 and DOMAIN2 have a two-way transitive forest trust. Events from sources in D1 are forwarding fine to the WEC in D1.
D2 is set up to communicate with the same FQDN subscription manager over HTTP/5985 (Server=http://server1.domain1.com:5985/wsman/SubscriptionManager/WEC,Refresh=60), using source-initiated event collection. Port 5985 is open and reachable from D2 machines to the WEC in D1.
Machines in D2 are getting this in their Eventlog-ForwardingPlugin Operational logs:
The forwarder is having a problem communicating with subscription manager at address http://wec1.domain1.com:5985/wsman/SubscriptionManager/WEC. Error code is 2150858909 and Error Message is <f:WSManFault xmlns:f="http://schemas.microsoft.com/wbem/wsman/1/wsmanfault" Code="2150858909" Machine="server1.domain2.com"><f:Message>WinRM cannot process the request. The following error with errorcode 0xc0000413 occurred while using Kerberos authentication: An unknown security error occurred.
Possible causes are:
-The user name or password specified are invalid.
-Kerberos is used when no authentication method and no user name are specified.
-Kerberos accepts domain user names, but not local user names.
-The Service Principal Name (SPN) for the remote computer name and port does not exist.
-The client and remote computers are in different domains and there is no trust between the two domains.
After checking for the above issues, try the following:
-Check the Event Viewer for events related to authentication.
-Change the authentication method; add the destination computer to the WinRM TrustedHosts configuration setting or use HTTPS transport.
Note that computers in the TrustedHosts list might not be authenticated.
-For more information about WinRM configuration, run the following command: winrm help config. </f:Message></f:WSManFault>.
Event log screenshot: https://i.stack.imgur.com/VVF0Y.png
I don't know enough about Kerberos to know whether tickets from D2 can be used in D1, or how to make that work. Anyone got any ideas? I can't find much about this exact issue and WEF.
thanks
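A sketch of the checks implied by the causes listed in that error text (the collector name wec1.domain1.com comes from the error message; run setspn from an admin prompt in DOMAIN1 and klist from an affected DOMAIN2 machine):

rem Does an HTTP SPN for the collector exist anywhere in the forest?
setspn -Q HTTP/wec1.domain1.com

rem Can a DOMAIN2 machine obtain a cross-realm service ticket for the collector?
klist get HTTP/wec1.domain1.com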
I'm writing a server in Go that uses MongoDB and I was doing some research on how to enable SSL for the connection to the database. I found several examples that explain how to add the CA file. Like so:
mongo.NewClientWithOptions(connectionString, mongo.ClientOpt.SSLCaFile(caFilePath))
I'm using a hosted database on Atlas and they state that all connections use SSL by default. This answer on a different question shows how to connect to Atlas with Go but the code example doesn't use a CA file. I also couldn't find an option to download the CA file from Atlas that I could use.
This confuses me a bit and leads to the following questions. When is it necessary to provide a CA file like shown above to use SSL? If it's always required for SSL to provide a CA file, where do I get the CA file from to connect to a managed cluster on Atlas?
You always need a CA certificate to validate the server when initiating a TLS connection. Sometimes this is already installed on your platform and used automatically; you only have to provide a CA file at connection time when such a root certificate is not available. The CA file is used to validate the certificate presented by the server. A trusted third party provides this CA and also (possibly through a chain of trusted parties) issues a certificate to the server, so you can verify the server is who it claims to be by validating its certificate against the CA.
All platforms come with an initial set of root certificates that can validate well-known third-party issued certificates. The MongoDB server you're connecting to is probably using such a certificate, and thus your OS certificates can be used to validate it. If you had your own PKI with your own CA that is not validated by a third party, you'd need a separate CA file containing your own CA's root certificate, and you'd have to pass that file to validate the server, because the platform's root store will not contain your custom CA.
The CA file specifies which self-signed root certificates you trust, and can include intermediate certificate authorities as well.
When the application connects to the server, the server sends its certificate as part of the handshake. The server's certificate was digitally signed.
In order to check that the server certificate was not tampered with, the issuer's certificate is consulted, which contains a public key that can be used to validate the digital signature.
If the issuer was an intermediate CA, then its certificate was also signed by another CA, so that CA's certificate will be consulted to validate the signature on the intermediate certificate.
This continues until the chain reaches a certificate that was signed by itself. This is the root certificate. Since it signs itself, you have to explicitly indicate that you trust it in order to trust the entire chain, including the server being connected to.
The bottom line here is you need to provide a CA file when:
You care about verifying the identity of the server you are connecting to (i.e. preventing man in the middle attacks), and
The root certificate will not already be trusted implicitly by inclusion in a local trust store
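If you want to see which root your Atlas cluster's certificates actually chain to (and therefore whether your platform's bundle already covers it), a quick check from the shell works; the host name below is a made-up placeholder for one of your cluster nodes:

# Show the chain presented by an Atlas node; the issuer (i:) lines reveal the signing CA
openssl s_client -connect cluster0-shard-00-00.abcde.mongodb.net:27017 \
  -servername cluster0-shard-00-00.abcde.mongodb.net -showcerts </dev/null 2>/dev/null | grep -E ' (s|i):'

If that chain ends in a publicly trusted root, the driver's default trust store is enough and no CA file option is needed; a CA file only becomes necessary in the self-managed or custom-PKI case described above.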
I'm trying to set up a PKINIT-based Kerberos login against Active Directory. The login is to be performed using sssd on Linux. However, the Kerberos server does not accept the client certificate. We receive an error with event ID 21, "Certificate for user REALM/Domainuser is not valid on the server", and sssd says: "Client name mismatch".
The certificates were created with the following procedure on a different Linux machine:
openssl req -new -key keyfile.pem -out reqfile.pem \
  -subj "/CN=Domainuser/O=AAA/C=DE/OU=BBB"
env REALM=REALM.local CLIENT=Domainuser openssl x509 \
-CAkey ../ca_privkey.pem -CA ../ca_cert.pem -req -in reqfile.pem \
-extensions client_cert -extfile extensions.client \
-days 365 -out certfile.pem
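Before handing the certificate to AD, it can help to dump what actually ended up in it and compare the EKU and SAN against what the KDC expects (purely a read-only check on the issued file):

# Show the Extended Key Usage and Subject Alternative Name of the issued certificate
openssl x509 -in certfile.pem -noout -text | grep -A2 -E 'X509v3 Extended Key Usage|X509v3 Subject Alternative Name'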
We installed the AD CA on the Windows Server that hosts the AD itself. We exported the certificate of this CA to the Linux machine and stored its private key and certificate in the ca_privkey.pem and ca_cert.pem files to use it with openssl.
The client_cert extension section we used during signing was created from the template suggested by sssd. The only thing we added was the crlDistributionPoints option to include the CRL of the CA:
[client_cert]
basicConstraints=CA:FALSE
keyUsage=digitalSignature,keyEncipherment,keyAgreement
extendedKeyUsage=1.3.6.1.5.2.3.4
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid,issuer
issuerAltName=issuer:copy
subjectAltName=otherName:1.3.6.1.5.2.2;SEQUENCE:princ_name
crlDistributionPoints=URI:http://link-to-CRL-of-CA
[princ_name]
realm=EXP:0,GeneralString:${ENV::REALM}
principal_name=EXP:1,SEQUENCE:principal_seq
[principal_seq]
name_type=EXP:0,INTEGER:1
name_string=EXP:1,SEQUENCE:principals
[principals]
princ1=GeneralString:${ENV::CLIENT}
The realm used for the authentication is REALM.local (equal to the AD domain). The environment variables REALM and CLIENT are set to REALM.LOCAL and pkuser during certificate creation, and the user pkuser also exists in AD (password-based login works).
I have no clue why the authentication is not successful. Do you have any ideas what might be wrong in the configuration, or a hint on how to make Windows print a more detailed error message? Note that certutil -verify -urlfetch certfile.pem is able to validate the whole certificate chain and does not print any errors when executed from an administrator command prompt on the AD server.
I assume there are some configuration errors on the Windows Server. This is the first time I have configured a Windows Server ;)
Finally, we figured out what the problem was. Windows AD requires additional extended key usage fields to allow the authentication.
You have to add 1.3.6.1.5.5.7.3.2 (Client Authentication) and 1.3.6.1.4.1.311.20.2.2 (Smart Card Logon).
Further, the SAN must carry the user's UPN: otherName:1.3.6.1.4.1.311.20.2.3;UTF8:<client-name>@<realm>
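Put together with the extension file from the question, the adjusted section would look roughly like this (a sketch; pkuser@REALM.LOCAL is an example UPN for the user mentioned above):

[client_cert]
basicConstraints=CA:FALSE
keyUsage=digitalSignature,keyEncipherment,keyAgreement
# id-pkinit-KPClientAuth plus the two EKUs AD expects (Client Authentication, Smart Card Logon)
extendedKeyUsage=1.3.6.1.5.2.3.4,1.3.6.1.5.5.7.3.2,1.3.6.1.4.1.311.20.2.2
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid,issuer
issuerAltName=issuer:copy
# UPN-style SAN in place of the princ_name SEQUENCE
subjectAltName=otherName:1.3.6.1.4.1.311.20.2.3;UTF8:pkuser@REALM.LOCAL
crlDistributionPoints=URI:http://link-to-CRL-of-CA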
I’m trying to connect to an LDAP directory over SSL using the Windows LDAP C-API. This fails with error code 0x51 = LDAP_SERVER_DOWN while the event log on the client computer has this:
"The certificate received from the remote server does not contain the expected name. It is therefore not possible to determine whether we are connecting to the correct server. The server name we were expecting is eim-tsi2.sam.develop.beta.ads. The SSL connection request has failed. The attached data contains the server certificate."
This can't be true, since "Ldap Admin" is able to connect over SSL on port 636.
The LDAP directory is an Oracle DSEE which has the CA and the server certificate in the appropriate cert store.
The client has the CA installed in the "Trusted Root Certification Authorities" and there in the "Local Computer" physical store. I assumed this to be the right place for the CA since my little client program uses the Windows LDAP C-API; LDAP Admin indeed expects the CA there.
Here is an excerpt of my program omitting the error handling and other obvious source code:
ld = ldap_sslinit(host, LDAP_SSL_PORT, 1);
// Set options: LDAP version, timeout ...
rc = ldap_set_option(ld, LDAP_OPT_SSL, LDAP_OPT_ON);
// Now connect:
rc = ldap_connect(ld, NULL);
Result:
0x51 = LDAP_SERVER_DOWN
Connecting without SSL succeeds so the LDAP server is generally accessible.
Since Ldap Admin is able to connect over SSL, I assume the certificates are valid and in the right place. But obviously the LDAP API expects them somewhere else and cannot get the server certificate from the server. I configured the certs as described here: https://msdn.microsoft.com/en-us/library/aa366105%28v=vs.85%29.aspx
What am I doing wrong?
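One way to see exactly which names the directory server's certificate carries (and compare them with the host name you connect to and the name in the event log entry) is to dump the certificate from the handshake on any machine with OpenSSL available; substitute the host name you actually use:

# Dump subject and subjectAltName of the certificate presented on port 636
openssl s_client -connect eim-tsi2.sam.develop.beta.ads:636 </dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep -E -A1 'Subject:|Subject Alternative Name'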
Sometimes it helps to read error messages more carefully. The entry in the event viewer caused by an unsuccessful bind over SSL was: "The server name we were expecting is eim-tsi2.sam.develop.beta.ads."
I should have noticed that the name should have been eim-tsi2.cgn.de (etc.) instead, so the domain name part was wrong.
This is a bug in Schannel which can be solved by an entry in the registry as described here: https://support.microsoft.com/en-us/kb/2275950.
I still do not know why LDAPAdmin was able to connect without that additional registry key although it also uses the WINLDAP API and therefore should have run into the same error. But that doesn’t matter any more.
Thanks, Andrew, for your help.