I thought I understood how Kerberos works; now I am not sure at all.
We have a problem with Kerberos authentication between a 3rd-party server and Windows Active Directory. The server's support team insists that what they call the "kerberos server" somehow passes additional information, namely fields identified as uid and email, and before they can help any further I need to confirm that these are indeed "sent" by the server. I read "kerberos server" as the KDC, which "sends" information by placing it into the TGT, and the uid may be the good old UPN, except I do not understand why I am being asked to confirm it is really there. But what is the email attribute?
I even read the whole of RFC 4120, but could not find any possible place for additional info in any of the tickets. Section 1.5.2 talks about extending the protocol, but in a very abstract manner. There are also the KRB_SAFE and KRB_PRIV messages, which can be used to pass arbitrary octet strings (3.4, 3.5), but the standard takes no step towards defining their structure. There is also the padata extension, which 5.2.7 notes has "also been used as a typed hole with which to extend protocol exchanges with the KDC", but this seems to be sent one-way. And nowhere does the RFC seem to talk about additional, identified fields that the authentication server can attach to the ticket.
My question is thus twofold:
Theoretical: how are additional attributes passed in Kerberos, presumably in an interoperable way (not, e.g., via Active Directory extensions)? What am I being asked to confirm?
Practical, if anyone can help with that: how can I see what AD places into these attributes?
The server's support team is doing a lousy job of telling you what they actually need. Here is what you need: you want the KDC to send PAC data with the generated service ticket. Here is Microsoft's reference: https://msdn.microsoft.com/en-us/library/cc237917.aspx.
How to verify? You need a keytab for the account that accepts the security context. Configure it in Wireshark and capture the traffic. You should see the TGS-REP for the service you'd like to use. Expand it; if the keytab is correct, you will see the decrypted information. Somewhere down below you should see the Authorization Data fields, type 1 (AD-IF-RELEVANT). That is an ASN.1-encoded sequence of elements: even element positions describe the sub-type, odd element positions the octet string. In that octet string is again an ASN.1-encoded sequence, with type 128 (AD-WIN2K-PAC), and that is the PAC data. Unfortunately, Wireshark can only decode up to that level; the rest is an opaque byte buffer. I have a minimal, working (though incomplete) Java implementation of the PAC data decoding.
The email value is not included in that structure, but what you do have is the RID in the KERB_VALIDATION_INFO structure and the userPrincipalName in the UPN_DNS_INFO structure. The latter is extremely easy to decode.
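For illustration only (this is not the Java code mentioned above), a rough C# sketch of pulling the UPN out of a UPN_DNS_INFO buffer might look like the following. The field layout follows [MS-PAC] section 2.10, and the code assumes you have already located that single buffer via the PACTYPE header and are running on a little-endian machine:

using System;
using System.Text;

// Rough sketch: decode a UPN_DNS_INFO buffer ([MS-PAC] 2.10).
// "buffer" is assumed to hold the raw bytes of that one PAC info buffer.
static class UpnDnsInfoDecoder
{
    public static void Decode(byte[] buffer)
    {
        // All fields are little-endian; offsets are relative to the
        // beginning of the UPN_DNS_INFO structure itself.
        ushort upnLength = BitConverter.ToUInt16(buffer, 0);
        ushort upnOffset = BitConverter.ToUInt16(buffer, 2);
        ushort dnsLength = BitConverter.ToUInt16(buffer, 4);
        ushort dnsOffset = BitConverter.ToUInt16(buffer, 6);
        // Bytes 8..11 hold the Flags field; not needed here.

        // The strings are UTF-16LE and not NUL-terminated.
        string upn = Encoding.Unicode.GetString(buffer, upnOffset, upnLength);
        string dnsDomain = Encoding.Unicode.GetString(buffer, dnsOffset, dnsLength);
        Console.WriteLine(upn + " @ " + dnsDomain);
    }
}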
First, check via LDAP that the desired client account's userAccountControl does not have the NA flag set.
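If it helps, here is a rough way to check that bit, assuming "NA" refers to the NO_AUTH_DATA_REQUIRED flag (0x02000000), which suppresses PAC generation; the domain DN and account name below are placeholders:

using System;
using System.DirectoryServices;

// Sketch: read userAccountControl for the client account and test the
// NO_AUTH_DATA_REQUIRED bit, which suppresses the PAC in issued tickets.
class CheckUac
{
    static void Main()
    {
        const int NO_AUTH_DATA_REQUIRED = 0x02000000;
        using (var root = new DirectoryEntry("LDAP://DC=example,DC=com"))
        using (var s = new DirectorySearcher(root, "(sAMAccountName=svc-app)", new[] { "userAccountControl" }))
        {
            int uac = (int)s.FindOne().Properties["userAccountControl"][0];
            Console.WriteLine((uac & NO_AUTH_DATA_REQUIRED) != 0
                ? "PAC is suppressed for this account"
                : "PAC should be included");
        }
    }
}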
Godspeed.
Related
I have been assigned a task to export the AD attributes and then find out which systems are using these attributes. I have not had much luck finding a script or a tool that can provide just that. Is this feasible, and if so, how? I have already exported the attributes; I just need to find out which systems are using them.
This isn't possible with any reasonable accuracy, especially if "using" isn't defined for you.
The event logs on the domain controllers will tell you where login events are coming from, but only by IP. That doesn't tell you which application is authenticating. You would have to do monitoring on that computer and see which application is making the connection. But then the logs would be cluttered with connections made by Windows itself, or by Exchange (if you use Exchange for email). It would be very difficult to identify what is coming from a 3rd-party application rather than from Windows itself.
Also, applications can request more information than they need. It's very easy when programming with LDAP to request every attribute for an object, even if you only intend to use one. For example, take this C# code:
using System;
using System.DirectoryServices;
var de = new DirectoryEntry("LDAP://example.com");
Console.WriteLine(de.Properties["name"].Value);
That only "uses" the name attribute. But because of the way LDAP works, it actually requests every non-constructed attribute that has a value. (there is a way to specifically ask for only one attribute, but you have to know that and use that)
So even if you could find logs saying that "this IP requested all of these attributes", and then figure out which application made that request, that doesn't mean it "used" all of those attributes.
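For completeness, here is a sketch of restricting the request to specific attributes, with a hypothetical filter and attribute list; DirectorySearcher.PropertiesToLoad tells the server which attributes to return:

using System;
using System.DirectoryServices;

// Sketch: request only the attributes you actually need.
var root = new DirectoryEntry("LDAP://example.com");
var searcher = new DirectorySearcher(root, "(sAMAccountName=jdoe)");
searcher.PropertiesToLoad.Add("name"); // only this attribute is requested
SearchResult result = searcher.FindOne();
if (result != null)
    Console.WriteLine(result.Properties["name"][0]);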
A user's password and salt determine the Kerberos keys generated by ktpass. I have noticed that ktpass sometimes changes the user's salt, but other times it does not. I was able to discover the salt by capturing a packet trace of a kinit. The salt appears to be generated based on the Kerberos realm and the userPrincipalName. However, it's not this simple. If the UPN is later updated manually, the salt is not updated. (I suspect that whether the /mapop option is specified may play a role.)
In what circumstances does ktpass set the user's salt?
How is the salt determined?
Is the salt stored in AD, or just in the KDCs?
Is there a straightforward way to read the current value of the salt?
Is there a way to manually change the salt?
In Microsoft Windows Active Directory, which has used Kerberos v5 since its inception in Windows 2000, the ktpass command sets the salt automatically. A salt is always used in Kerberos v5 (except with RC4-HMAC, whose keys are derived from the NT hash without a salt); in Kerberos v4, a salt was never used.
The complete principal name (including the realm) is used as the salt, e.g., accountname/somedomain.com@SOMEDOMAIN.COM, which is then combined with the password when deriving the key, to ensure the result is unique throughout the AD forest.
As mentioned, the salt is the complete principal name (including the realm). It is stored in the ntds.dit file, which is the Active Directory database. The KDC is spun up in a process spawned by kdcsvc.dll, and it works against the values stored in ntds.dit. So while the KDC and the AD database are not one and the same inside the runtime environment, they are, so to speak, "joined at the hip". I think that when the domain controller shuts down, all the important elements within the KDC are persisted inside ntds.dit. Microsoft does not provide exact details on how this is done; I have looked extensively, and my partial knowledge is drawn from careful study and inferences made from bits and pieces of articles found on the web. Note that the ntds.dit database is also the LDAP database. It is also the DNS database, if DNS is AD-integrated. All these protocols working together, plus a few more, form "Active Directory".
Open up Active Directory Users and Computers, open the account, and go to the Account tab. The "User logon name" is the most straightforward way to "read" the salt. You don't see the realm name concatenated with it right there, but it is implied. The SPN, if one is defined, is listed in a straightforward way under the Attribute Editor tab (look for servicePrincipalName); make sure you have View > Advanced Features enabled in order to expose this tab. A corresponding UPN will also be listed lower down in the same section, in a form that looks exactly like accountname/somedomain.com@SOMEDOMAIN.COM.
When you change the account name on the AD Account tab, you have just changed the salt. Note that if there is a keytab out there tied to this AD account, you will have just invalidated it, because the secret key inside it is derived from the password hash and the salt. When either the salt or the password changes, the keys will no longer match between the AD account and the keytab, and you will have to re-generate it at that point.
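To see why both inputs matter, here is a hedged C# sketch of the first step of the AES string-to-key function from RFC 3962, which feeds both the password and the salt into PBKDF2 with HMAC-SHA1 and 4096 iterations. The final DK step that turns this intermediate value into the actual Kerberos key is omitted for brevity:

using System;
using System.Security.Cryptography;
using System.Text;

// Sketch only: intermediate PBKDF2 value of the AES256 string-to-key
// (RFC 3962). Changing either the password or the salt changes this value,
// which is why an existing keytab stops matching.
static class S2KSketch
{
    static byte[] Pbkdf2Step(string password, string salt)
    {
        using (var kdf = new Rfc2898DeriveBytes(
            password, Encoding.UTF8.GetBytes(salt), 4096, HashAlgorithmName.SHA1))
        {
            return kdf.GetBytes(32); // 32 bytes for aes256-cts-hmac-sha1-96
        }
    }
}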
Does that make sense? This is really a field explanation. To learn more about Kerberos as it relates to AD, start here: Kerberos Survival Guide
There is a slightly easier way to read the current salt value (it is not exactly straightforward either, but at least no packet tracing is required):
Install MIT Kerberos for Windows
Open a PowerShell and run:
$env:KRB5_TRACE="kinit-trace.log"
& 'C:\Program Files\MIT\Kerberos\bin\kinit.exe' -fV UPN-or-USER@REALM
Get-Content $env:KRB5_TRACE | Select-String "salt"
rm $env:KRB5_TRACE
I assume here that MIT Kerberos is installed in its default location. If not, adjust the path in the second command.
This solution was originally suggested for Linux here on Stack Overflow by user Spezieh.
The JSON-LD documentation mentions that clients can provide a profile parameter in the Accept header to control the representation. It defines three default profiles for requesting compacted, expanded, or flattened JSON-LD documents. It also says that
If the profile parameter is given, a server should return a document that honors the profiles in the list which are recognized by the server.
It does not, however, explain whether there are any specific rules the server should follow. Is it completely up to the server to decide what the behavior is for custom profile URIs? Are there any discussions on that subject?
Would the examples below be correct?
Example 1
The client requests with
Accept: application/ld+json;
profile="http://www.w3.org/ns/json-ld#compacted http://schema.org"
And the server returns a compacted document with http://schema.org as the @context?
Example 2
The client requests with
Accept: application/ld+json; profile="http://schema.org"
And the server returns a compacted document with http://schema.org as the @context?
The JSON-LD 1.0 spec defines profile in its IANA Considerations section. This defines the profile identifiers, such as the compacted one you identified above. It doesn't provide a way to specify a particular context to use, and the semantics of profile would make it difficult to know what is meant by a different profile URI, as there is no way (AFAIK) to register this meaning elsewhere.
That said, I think it would be useful to be able to specify a context to use for compacted or expanded, and, if/when we support framing, a frame to use. This might take the form of type-specific Accept parameters, context and/or frame, which would be used to specify the requested context or frame to be used when serializing the document. However, as with other profiles, these are SHOULD, not MUST; a client needs to be able to cope with getting back a document that is not serialized that way, perhaps using a local jsonld.js instance to re-encode the returned document. It might also be useful to recommend that the same parameters be used with Content-Type in the response, so the server can communicate the profile/context/frame used as part of the response.
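Purely as an illustration of that idea (these context parameters are not defined anywhere today, so this exchange is hypothetical), the request and response could look like:
Accept: application/ld+json; profile="http://www.w3.org/ns/json-ld#compacted"; context="http://schema.org"
Content-Type: application/ld+json; profile="http://www.w3.org/ns/json-ld#compacted"; context="http://schema.org"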
Please consider raising an issue at https://github.com/json-ld/json-ld.org/issues, as we're starting to look at new Community Group (i.e., not W3C Recommendations) drafts of the specs to address long outstanding community feature requests.
I am new to LDAP, and I am wondering whether attribute names like "maxPwdAge" and "pwdLastSet" are standard attribute names for LDAP in general, not just for AD.
The reason I want to know is that I want to write a program to calculate the password expiration time for all systems that use LDAP. If the names are not constant across systems, it could get pretty complicated for me.
pwdLastSet is peculiar to Active Directory as far as I know.
pwdMaxAge comes from an Internet Draft, 'LDAP Password Policy' (the step before an RFC), which technically expired years ago but is nevertheless implemented by a number of LDAP servers. In OpenLDAP you have to add the ppolicy overlay to get the password-policy attributes to appear.
You should also note that you may not have access to the pwdLastSet attribute, and that pwdMaxAge is not an attribute of the user at all: it is an attribute of the policy entry, which you may not have access to either.
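If the target happens to be Active Directory, a rough C# sketch of the calculation could look like the one below. The domain DN, account name, and the Int64 marshalling of the large-integer attributes are assumptions, and corner cases (password never expires, pwdLastSet of 0, fine-grained password policies) are ignored:

using System;
using System.DirectoryServices;

// Sketch, Active Directory only: expiry = pwdLastSet + |maxPwdAge|.
// pwdLastSet is a FILETIME on the user; maxPwdAge is a negative
// 100-nanosecond interval stored on the domain naming context head.
class PasswordExpiry
{
    static void Main()
    {
        var domainRoot = new DirectoryEntry("LDAP://DC=example,DC=com");

        var domainSearch = new DirectorySearcher(
            domainRoot, "(objectClass=*)", new[] { "maxPwdAge" }) { SearchScope = SearchScope.Base };
        long maxPwdAge = (long)domainSearch.FindOne().Properties["maxPwdAge"][0];

        var userSearch = new DirectorySearcher(
            domainRoot, "(sAMAccountName=jdoe)", new[] { "pwdLastSet" });
        long pwdLastSet = (long)userSearch.FindOne().Properties["pwdLastSet"][0];

        DateTime lastSet = DateTime.FromFileTimeUtc(pwdLastSet);
        DateTime expires = lastSet + TimeSpan.FromTicks(Math.Abs(maxPwdAge));
        Console.WriteLine("Password expires (UTC): " + expires);
    }
}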
No. There is no universal standard for determining when a password expires.
As @EJP mentioned, there is an Internet Draft, 'LDAP Password Policy', that has been implemented to some degree by several LDAP server implementations, but it is not universal.
-jim
I am implementing an automatic update feature and need some advice on how to do this securely using best practices. I would like to use the downloaded file's Authenticode signature to verify that it is safe to run (i.e. originates from our company and hasn't been tampered with). My question is very similar to question #2008519.
The bottom-line question: what's the best, most secure way to check Authenticode signatures for an automatic update feature? What fields in the certificate should be checked? Requirements being: (1) check signature is valid, (2) check it's my signature, (3) old clients can still update when my certificate expires and I get a new one.
Here's some background information / ideas from my research: I believe this could be broken into two steps:
Verify that the signature is valid. I believe this should be easy using WinVerifyTrust as outlined in http://msdn.microsoft.com/en-us/library/aa382384(VS.85).aspx - I don't expect problems here.
Verify that the signature corresponds to our company, and not another company. This seems to be a more difficult question to answer:
One possibility is to check some of the strings in the signature. These could be obtained via the code in MS KB article #323809, but that article doesn't make recommendations on which fields should be checked for this type of application (or any other, for that matter). Question #1072540 also illustrates how to get some certificate info (a rough sketch of doing this follows below), but again doesn't recommend which fields to actually check. My concern is that the strings might not be the best check: what if another person were able to obtain a certificate with the same name, for example? Or what if there is a valid reason for us to change the strings in the future?
The person at question #2008519 has a very similar requirement. His need for a "TrustedByUs" function is identical to mine. However, he goes about doing the check by comparing public keys. While this would work in the short-term, it seems like it won't work for an automatic update feature. This is because code signing certificates are only valid for 2 - 3 years max. Therefore, in the future, when we buy a new certificate in 2 years, the old clients wouldn't be able to update any more due to the change in public key.
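For reference, here is roughly what I mean by inspecting the signer's certificate, as a sketch with a placeholder path; note that X509Certificate.CreateFromSignedFile only extracts the certificate and does not validate the signature, so it would be combined with the WinVerifyTrust check from step 1:

using System;
using System.Security.Cryptography.X509Certificates;

// Sketch: pull the signer's certificate out of an Authenticode-signed file
// and show the fields one might compare (subject string vs. public key).
class SignerInfo
{
    static void Main()
    {
        var signer = new X509Certificate2(
            X509Certificate.CreateFromSignedFile(@"C:\updates\update.exe"));

        Console.WriteLine("Subject:    " + signer.Subject);
        Console.WriteLine("Thumbprint: " + signer.Thumbprint);
        Console.WriteLine("Public key: " + signer.GetPublicKeyString());
    }
}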
Since the concern is that the application trusts you rather than that a person trusts you, you could just use self-signing and embed any public keys needed in the applications themselves. This gives you much more control over the process. This is inappropriate when asking a user or application not under your control to give trust, but in this case the application is under your control, so it will work fine. This allows you to very easily avoid the concern of mistaking someone else's similar-looking certificate for your own.
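A hedged sketch of that approach (.NET 5 or later; the embedded key, file names, and detached-signature format are illustrative assumptions, not a prescription):

using System;
using System.IO;
using System.Security.Cryptography;

// Sketch: verify a downloaded update against a detached signature using a
// public key embedded in the client at build time.
class UpdateVerifier
{
    // Placeholder; only you hold the matching private key used at release time.
    const string EmbeddedPublicKeyPem = @"-----BEGIN PUBLIC KEY-----
...base64-encoded SubjectPublicKeyInfo...
-----END PUBLIC KEY-----";

    static bool IsUpdateTrusted(string updatePath, string signaturePath)
    {
        byte[] payload = File.ReadAllBytes(updatePath);
        byte[] signature = File.ReadAllBytes(signaturePath);

        using var rsa = RSA.Create();
        rsa.ImportFromPem(EmbeddedPublicKeyPem);

        // True only if the signature was made with your private key over
        // exactly these bytes.
        return rsa.VerifyData(payload, signature,
                              HashAlgorithmName.SHA256,
                              RSASignaturePadding.Pkcs1);
    }
}

To handle key rollover (requirement 3), the client can ship with a small list of acceptable public keys and accept a signature from any of them, so old clients keep updating after you introduce a new signing key.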