What does this LDAP traffic signify? "<ROOT>" baseObject - active-directory

The requesting host is compromised and is sending traffic to the DC (the compromised host is running Sharphound to perform recon, though I don't know if that is part of this). I am very new to LDAP and am clueless as to what this means. Is this malicious? The details are below; any help would be appreciated!

For regular LDAP over TCP, the "null DN" entry (indicated as <ROOT> by Wireshark) is often called the "rootDSE", and is used for protocol negotiation – it contains attributes indicating what LDAP protocol extensions are supported by the server. For example, whenever an LDAP client wants to use SASL authentication or StartTLS, it first makes a query for the rootDSE entry to make sure that's available:
$ ldapsearch -x -b "" -s base + \*
[...]
supportedControl: 1.3.6.1.1.13.1 # 'pre-read' control supported
supportedExtension: 1.3.6.1.4.1.1466.20037 # StartTLS supported
supportedExtension: 1.3.6.1.4.1.4203.1.11.1 # password change supported
supportedFeatures: 1.3.6.1.1.14 # 'Increment' operation supported
supportedSASLMechanisms: GSSAPI # Kerberos authentication supported
[...]
(The rootDSE also has some AD-specific parameters, but that's not its main purpose.)
Seeing such requests over TCP is normal for any LDAP communications.
The CLDAP request over UDP is a slightly different thing – it's a Netlogon ping used by Windows AD clients to quickly check communications with a domain controller. It serves a similar purpose as the above rootDSE search, but deals with AD Netlogon parameters rather than LDAP parameters.
Seeing an occasional CLDAP ping from any AD member to your DCs is normal (as long as the payload makes sense, and as long as it's not a flood of requests).
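If you want to see such a ping for yourself, the DC locator on any domain-joined Windows machine can be asked to rediscover a DC, which emits the same kind of CLDAP query (the domain name below is a placeholder):
C:\> nltest /dsgetdc:example.com /force   # locate a DC, ignoring cached data; watch UDP 389 in Wireshark while it runs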

Related

Limit the usage of Kerberos TGTs

I'm pretty new to Kerberos. I'm testing the Single Sign On feature using Kerberos. The environment: Windows clients (with Active Directory authentication) connecting to an Apache server running on a Linux machine. The called CGI script (in Perl) connects to a DB server using the forwarded user TGT. Everything works fine (I have the principals, the keytab files, config files and the result from the DB server :) ). So, if as win_usr_a on the Windows side I launch my CGI request, the CGI script connects to the remote DB, queries select user from dual and gets back win_usr_a@EXAMPLE.COM.
I have only one issue I'd like to solve. Currently the credential cache is stored as FILE:.... On the intermediate Apache server, the user running the Apache server gets the forwarded TGTs of all authenticated users (as it can see all the credential caches) and, while the TGTs' lifetimes have not expired, it can request any service principal for those users.
I know that hosts are considered trusted in Kerberos by definition, but I would be happy if I could limit the usability of the forwarded TGTs. For example, can I set Active Directory to limit a forwarded TGT to be valid only for requesting a given service principal? And/or is there a way to make a forwarded TGT usable only once, i.e. after requesting any service principal it becomes invalid? Or is there a way the CGI script could detect whether the forwarded TGT was used by someone else (maybe check a usage counter)?
Right now I have only one solution: I can set the lifetime of the forwarded TGT to 2 seconds and initiate a kdestroy in the CGI script after the DB connection is established (the CGI script can be executed by the apache user, but that user cannot modify its code). Can I do a bit more?
The credential caches should be hidden somehow. I think defining the credential cache as API: would be nice, but this is only defined for Windows. On Linux, maybe KEYRING:process:name or MEMORY: could be a better solution, as these are local to the current process and destroyed when the process exits. As far as I know, Apache creates a new process for each new connection, so this may work. Maybe KEYRING:thread:name is the solution? But, according to the thread-keyring(7) man page, it is not inherited by clone and is cleared by the execve syscall. So if, e.g., Perl is called via execve it will not get the credential cache. Maybe using mod_perl + KEYRING:thread:name?
Any idea would be appreciated! Thanks in advance!
The short answer is that Kerberos itself does not provide any mechanism to limit who can use a credential if the client happens to have all the necessary bits at a given point in time. Once you have a usable TGT, you have a usable TGT, and can do with it what you like. This is a fundamentally flawed design as far as security is concerned.
Windows refers to this as unconstrained delegation, and specifically has a solution for this through a Kerberos extension called [MS-SFU] which is more broadly referred to as Constrained Delegation.
The gist of the protocol is that you send a regular service ticket (without an attached TGT) to the server (Apache), and the server is enlightened enough to know that it can exchange that service ticket for a service ticket from Active Directory to a delegated server (the DB). The server then uses the new service ticket to authenticate to the DB, and the DB sees it's a service ticket for win_usr_a despite being sent by Apache.
The trick of course is that enlightenment bit. Without knowing more about the specifics of how the authentication is happening in your CGI, it's impossible to say whether whatever you're doing supports [MS-SFU].
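If you have (or can ask an admin for) read access to AD, you can at least check whether the Apache service account is set up for constrained delegation with protocol transition. A rough sketch using the ActiveDirectory PowerShell module follows; the account name svc-apache and the SPN are placeholders, not anything from your environment:
PS C:\> Get-ADUser -Identity svc-apache -Properties msDS-AllowedToDelegateTo,TrustedToAuthForDelegation
PS C:\> # an admin would enable S4U roughly like this (again, placeholder names):
PS C:\> Set-ADAccountControl -Identity svc-apache -TrustedToAuthForDelegation $true
PS C:\> Set-ADUser -Identity svc-apache -Add @{'msDS-AllowedToDelegateTo'='MSSQLSvc/db01.example.com:1433'}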
Quoting a previous answer of mine (to a different question, focused on "race conditions" when updating the cache):
If multiple processes create tickets independently, then they have no reason to use the same credentials cache. In the worst case they would even use different principals, and the side effects would be... interesting.
Solution: change the environment of each process so that KRB5CCNAME points to a specific file -- and preferably, in an application-specific directory.
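For example, a minimal per-process setup might look like this (the paths, keytab and principal below are placeholders, not anything mandated by Kerberos):
$ export KRB5CCNAME=FILE:/var/lib/myapp/krb5cc_$$     # a private, per-process cache file
$ kinit -k -t /etc/myapp.keytab HTTP/www.example.com  # obtain credentials into that cache
$ klist                                               # confirm which cache is in use
$ kdestroy                                            # wipe the cache as soon as it is no longer needed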
If your focus is on securing the credentials, then go one step further and don't use a cache. Modify your client app so that it creates the TGT and service tickets on the fly and keeps them private.
Note that Java never publishes anything to the Kerberos cache; it may either read from the cache or bypass it altogether, depending on the JAAS config. Too bad the Java implementation of Kerberos is limited and rather brittle, cf. https://steveloughran.gitbooks.io/kerberos_and_hadoop/content/sections/jdk_versions.html and https://steveloughran.gitbooks.io/kerberos_and_hadoop/content/sections/jaas.html

How can I get the client IP in PAM?

I am writing a custom PAM module where authentication is controlled only from particular IP addresses.
I am not able to get the IP address of the client making the connection.
Is there an example?
I am using this function in my code
....
err = pam_get_item(pamh, PAM_RHOST, (const void **) &pHost);
.....
But pHost always comes back NULL.
First off:
where authentication is controlled only from particular IP addresses
It's a bad idea to base this on IP addresses, as they can be forged ridiculously easily. Simply don't do that.
Secondly:
As man pam_get_item will tell you:
The requesting hostname (the hostname of the machine from which the PAM_RUSER entity is requesting service). That is PAM_RUSER@PAM_RHOST does identify the requesting user. In some applications, PAM_RHOST may be NULL. In such situations, it is unclear where the authentication request is originating from.
That will be the case in many applications nowadays.
You might simply be confusing the origin of PAM requests (which should never be trusted -- those are the people trying to get authenticated, so trusting them before they are authenticated makes your own auth mechanism useless) with the "authenticator" working in the background.
If you need host-based validation, there's already a mature (albeit a little complex to set up) and widely deployed solution: Kerberos exists for exactly that purpose, authenticating hosts, so that further authentications can take host authenticity into consideration. Don't reinvent the wheel, especially in security contexts.
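As a small illustration (the host name below is a placeholder): with a keytab deployed, a machine proves its identity cryptographically rather than by its IP address:
$ kinit -k host/client.example.com   # authenticate as the host principal using the default keytab
$ klist                              # the resulting ticket identifies the machine, not an address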

Kerberos: kvno is '1' in client tickets

We're configuring SSO for our web app for a customer, but unfortunately we don't have access to the domain controller (one more reason why we can't do more experimenting to check our assumptions). So we asked them to run ktpass.exe and prepare a keytab file to use for our server configuration.
The issue we are facing is "specified version of key is not available".
I looked at the keytab file (kvno = 5) and checked the traffic with Wireshark on our web server:
As you can see, kvno = 1 in the AP-REQ ticket. I suppose that's the right ticket to check the kvno against.
I know there are compatibility issues with Windows 2000 domains (/kvno 1 must be used for Windows 2000 domain compatibility), but we are told we are dealing with a Windows 2008 R2 server (and I can see the value msDS-Behavior-Version = 4 for our domain controller, which matches 2008 R2!).
Is there anything like a W2K domain compatibility mode that we could be running into here?
Would explicit kvno=1 help to resolve the issue? I.e., ktpass.exe [..] /kvno 1
EDIT #1
The problem was an incorrectly specified SPN. It was HTTP/computer_name@DOMAIN.COM instead of one using the fully-qualified domain name. This would only work if WINS were enabled, but it turned out it wasn't.
After generating the keytab with the correct SPN, everything works fine, and the kvno is sent according to the actual account value.
I will kindly accept an answer that explains the effect I observed.
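(For reference, a ktpass invocation using an FQDN-based SPN looks roughly like the sketch below; the host, domain and account names are placeholders, not our real ones:)
C:\> ktpass /princ HTTP/webserver.example.com@DOMAIN.COM /mapuser DOMAIN\svc-web /pass * /ptype KRB5_NT_PRINCIPAL /crypto AES256-SHA1 /out web.keytab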
I do not know the internals well, but MIT Kerberos clients do forward resolution of the hostname part of a host-based service principal to canonicalize the hostname. In my experience, if the name does not resolve, it does affect Kerberos auth. When I set up service accounts for SQL Server to do Kerberos, I always have to register an SPN with both the host name and the fully-qualified domain name, because different SQL components seem to use different resolution methods.
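For instance, the registration I typically do looks roughly like this (the service account and host names are placeholders; setspn -S checks for duplicates before adding):
C:\> setspn -S MSSQLSvc/dbhost:1433 DOMAIN\sqlservice
C:\> setspn -S MSSQLSvc/dbhost.example.com:1433 DOMAIN\sqlservice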
In a very basic network topology, WINS would be able to resolve the name. Even without WINS, though, the NetBIOS service would be able to resolve the hostname. WINS and NetBIOS rely heavily on broadcasts, so if your web server is on a different subnet, NetBIOS name resolution would fail, and WINS would too if not configured correctly. Also, Windows needs the TCP/IP NetBIOS Helper service to be running.

TCP Connections to Postgres Secure? SSL Required?

Good morning,
I was going through the PostgreSQL configuration files and recently noticed that there is an ssl option. I was wondering when this is required.
Say you have an app server and a database server that are not running inside a private network. If a user tries to log in and SSL is not enabled, will the app server transmit the user's password in cleartext to the database when checking whether it is a valid username/password?
What is standard practice here? Should I be setting up my DB to use SSL?
If that is the case, is there any difference in the connection settings in config/database.yml in my Rails app?
Thanks!
Like for other protocols, using SSL/TLS for PostgreSQL allows you to secure the connection between the client and the server. Whether you need it depends on your network environment.
Without SSL/TLS, the traffic between the client and the server will be visible to an eavesdropper: all the queries and responses, and possibly the password, depending on how you've configured your pg_hba.conf (whether the client is using md5 or a plaintext password).
As far as I'm aware, it's the server that requests MD5 or plaintext password authentication, so an active man-in-the-middle attacker could certainly downgrade that and get your password anyway when SSL/TLS is not used.
A well-configured SSL/TLS connection should allow you to prevent eavesdropping and MITM attacks, against both passwords and data.
You can require SSL on the server side using hostssl entries in pg_hba.conf, but that's only part of the problem. Ultimately, just like for web servers, it's up to the client to verify that SSL is used at all, and that it's used with the right server.
Table 31-1 in the libpq documentation summarises the levels of protection you get.
Essentially:
if you think you have a reason to use SSL, disable, allow and prefer are useless (don't take "No" or "Maybe" if you want security).
require is barely useful, since it doesn't verify the identity of the remote server at all.
verify-ca doesn't verify the host name, which makes it vulnerable to MITM attacks.
The one you'll want if security matters to you is verify-full.
These SSL mode names are set by libpq. Other clients might not use the same names (e.g. a pure-Ruby implementation or JDBC).
As far as I can see, ruby-pg relies on libpq. Unfortunately, it only lists "disable|allow|prefer|require" for its sslmode. Perhaps verify-full might work too if it's passed directly; however, there would also need to be a way to configure the CA certificates.
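As a rough sketch of what a properly verified client connection looks like with any libpq-based tool (the host, database, user and certificate path below are placeholders), with the server side requiring TLS via a hostssl line in pg_hba.conf:
# server side, pg_hba.conf (placeholder values):  hostssl  myapp  myuser  0.0.0.0/0  md5
$ psql "host=db.example.com dbname=myapp user=myuser sslmode=verify-full sslrootcert=/etc/ssl/certs/my-root-ca.crt"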
Considering data other than the password: whether you use SSL or not is pretty much a security posture issue. How safe do you need your system to be? If the connection only goes over your private network, then anyone on that network can listen in. If that is acceptable, don't use SSL; I would not enable it. If the connection goes over the internet, SSL should be enabled.
As @Wooble says, you should never send the password as cleartext in the first place; if you do, you have a problem. The standard solution in this case is to store a hash in the database and only send the hash for validation.
Here are some links about the Rails part.

LDAPS with ActiveDirectoryMembershipProvider on ASP.Net Webforms

I have set the ActiveDirectoryMembershipProvider connectionProtection attribute to Secure. The MSDN documentation states that when this is set to Secure, the following holds:
"The ActiveDirectoryMembershipProvider class will attempt to connect to Active Directory using SSL. If SSL fails, a second attempt to connect to Active Directory using sign-and-seal will be made. If both attempts fail, the ActiveDirectoryMembershipProvider instance will throw a ProviderException exception."
The code works and queries can be made against LDAP, but one issue that has me a little confused is that my connection string is prefixed with LDAP and not LDAPS. Changing it to LDAPS results in the following error:
"Parser Error Message: Error HRESULT E_FAIL has been returned from a call to a COM component."
What is happening here? In the first instance, where the connection string is simply LDAP, is SSL being used? The documentation indicates that if it is not, an exception should be thrown. If SSL is not in use, then what would be the likely cause of this error in this context?
As far as I know, LDAP DNs (distinguished names) always have an LDAP-only prefix, whether they're over a secure link or not. The secure aspect typically shows up in the port being used on the server: 389 is the default for non-secure, and 636 the default for secure communication.
But the spec for LDAP distinguished names doesn't really have an LDAPS prefix. I did a lot of LDAP work a few years ago, and I remember having to deal with different ports for trusted or secured communications, but I never once had a fully-qualified LDAP path with anything other than an LDAP:// prefix (case sensitive, too!).
The LDAP:// prefix is used both for clear and SSL connections; to check whether the communication is indeed under SSL, try step 3 of this blog entry: http://erlend.oftedal.no/blog/?blogid=7
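One quick, independent way to confirm that the directory really is speaking SSL on the secure port is to connect to it with openssl (the host name below is a placeholder):
$ openssl s_client -connect dc.example.com:636 -showcerts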

Resources