Issue creating SamlResponse when following your example IdP code - within the LoginResponse method - itfoxtec-identity-saml2

I have created an IdP using the code contained within https://github.com/ITfoxtec/ITfoxtec.Identity.Saml2/blob/master/test/TestIdPCore/Controllers/AuthController.cs
This throws an error when I attempt to bind the AuthnResponse using the following code:
var responsebinding = new Saml2PostBinding();
var samlResponse = responsebinding.Bind(saml2AuthnResponse).XmlDocument.OuterXml;
This is the same code as within the PostContent method, but I've opted to use this code directly as I just need the SamlResponse.
The error is:
Microsoft.IdentityModel.Tokens.Saml2.Saml2SecurityTokenWriteException: 'IDX13129: The SAML2:AttributeStatement must contain at least one SAML2:Attribute.'
With the following abridged stack trace:
at Microsoft.IdentityModel.Tokens.Saml2.Saml2Serializer.WriteAttributeStatement(XmlWriter writer, Saml2AttributeStatement statement)
at Microsoft.IdentityModel.Tokens.Saml2.Saml2Serializer.WriteStatement(XmlWriter writer, Saml2Statement statement)
at Microsoft.IdentityModel.Tokens.Saml2.Saml2Serializer.WriteAssertion(XmlWriter writer, Saml2Assertion assertion)
at Microsoft.IdentityModel.Tokens.Saml2.Saml2SecurityTokenHandler.WriteToken(XmlWriter writer, SecurityToken securityToken)
at ITfoxtec.Identity.Saml2.Tokens.Saml2ResponseSecurityTokenHandler.WriteToken(SecurityToken token)
at ITfoxtec.Identity.Saml2.Saml2AuthnResponse.ToXml()
I have used your example code almost exactly, so is there an issue within it, or am I missing something?
Many thanks

Here's a deeper analysis and two possible solutions/workarounds.
The situation: creating an ITfoxtec.Identity.Saml2.Saml2AuthnResponse for a ClaimsIdentity that has only one claim: the nameidentifier.
The relevant code snippet (not complete, just the part that is relevant, but the ITfoxtec samples have the full code):
var response = new Saml2AuthnResponse(config);
response.ClaimsIdentity = new ClaimsIdentity(new[] { new Claim(ClaimTypes.NameIdentifier, "someone@somewhere.com") });
response.NameId = new Saml2NameIdentifier(....etc...);
var token = response.CreateSecurityToken(appliesToAddress);
//so far all is well, but the problem has been sneakily introduced!
//which is why the next line will give the error: Microsoft.IdentityModel.Tokens.Saml2.Saml2SecurityTokenWriteException: 'IDX13129: The SAML2:AttributeStatement must contain at least one SAML2:Attribute.'
return binding.Bind(response).ToActionResult();
Explanation:
ITfoxtec code: The nameidentifier claim is removed from the claims when the token is created. This makes sense, as it is supposed to be in the NameId property. The remaining claims are set as the Subject in the SecurityTokenDescriptor that is fed to the Saml2SecurityTokenHandler, which is Microsoft code.
var tokenDescriptor = new SecurityTokenDescriptor();
tokenDescriptor.Subject = new ClaimsIdentity(claims.Where(c => c.Type != ClaimTypes.NameIdentifier));
The claims in this token descriptor then end up as Attributes in the AttributeStatement of the generated Saml2SecurityToken (via a Saml2SecurityTokenHandler.CreateToken(tokenDescriptor) call).
Unfortunately, if the nameidentifier was the only claim you had, you end up with an AttributeStatement that has no Attributes, and you subsequently run into the problem when binding.Bind(response), deep down in its bowels, does its XML serialization.
Unless you are always supposed to have an AttributeStatement, this looks to me like a bug / edge case in the Microsoft.IdentityModel.Tokens.Saml library.
There are two ways to solve it:
Option 1: Prevent ending up with no claims. Simply add another claim to the identity; it doesn't have to be email, it can be anything:
response.ClaimsIdentity.AddClaim(new Claim("x", "y"));
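Note the ordering matters: the extra claim must be present before CreateSecurityToken is called, since the token is built from the identity's claims at that point. A minimal sketch, assuming the same response and appliesToAddress variables as the snippet above:

// The placeholder claim has to be added before the token is created;
// adding it after CreateSecurityToken has no effect on the assertion.
response.ClaimsIdentity.AddClaim(new Claim("x", "y"));
var token = response.CreateSecurityToken(appliesToAddress);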
Option 2: After the CreateSecurityToken call but before the call to Bind, check whether the AttributeStatement is empty and, if so, remove it. A quick and dirty example:
var attributeStatement = token.Assertion.Statements.OfType<Saml2AttributeStatement>().FirstOrDefault();
if (attributeStatement?.Attributes.Count == 0)
{
    token.Assertion.Statements.Remove(attributeStatement);
}
Personally, I prefer option 1, as it is generally safer and less code. Plus I'm sure there can always be 'something' to further attribute the identity with...

Maybe you are missing the part where claims are added and the token is created?
saml2AuthnResponse.SessionIndex = sessionIndex;
var claimsIdentity = new ClaimsIdentity(claims);
saml2AuthnResponse.NameId = new Saml2NameIdentifier(claimsIdentity.Claims.Where(c => c.Type == ClaimTypes.NameIdentifier).Select(c => c.Value).Single(), NameIdentifierFormats.Persistent);
saml2AuthnResponse.ClaimsIdentity = claimsIdentity;
var token = saml2AuthnResponse.CreateSecurityToken(relyingParty.Issuer, subjectConfirmationLifetime: 5, issuedTokenLifetime: 60);
https://github.com/ITfoxtec/ITfoxtec.Identity.Saml2/blob/master/test/TestIdPCore/Controllers/AuthController.cs#L110

I have found that you need both the ClaimTypes.NameIdentifier and ClaimTypes.Email claims in order for the token to be generated successfully.
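For example, building the identity with both claims before creating the token (a sketch assuming the same controller code as the snippet above; the values are placeholders):

var claims = new List<Claim>
{
    new Claim(ClaimTypes.NameIdentifier, "someone@somewhere.com"),
    // The Email claim survives the NameIdentifier filtering described above,
    // so the AttributeStatement contains at least one Attribute.
    new Claim(ClaimTypes.Email, "someone@somewhere.com")
};
var claimsIdentity = new ClaimsIdentity(claims);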

Related

Getting "Signature is invalid." when using Artifact Binding during the artifact consumption step

I have an IdP and an SP set up using the ITfoxtec SAML2 libraries, and everything works great when not using artifact binding, or when not validating signatures. When using artifact binding and validating signatures, I get a "Signature is invalid." exception in the ACS when trying to retrieve and bind the actual response/assertion.
It seems to unbind the artifact response fine, then when it goes to retrieve and unbind the artifact from the ArtifactResolutionService it fails, specifically on the last line of this block:
var soapEnvelope = new Saml2SoapEnvelope();
saml2AuthnResponse = new Saml2AuthnResponse(config);
await soapEnvelope.ResolveAsync(httpClient, saml2ArtifactResolve, saml2AuthnResponse);
I've checked that my signature validation certificate is correct and I've dug through the source code but am scratching my head. I've tried to validate the "saml2p:ArtifactResponse" myself but there isn't much out there.
If I put this line before the chunk above, everything works as expected, as it no longer validates the signature:
config.SignatureValidationCertificates.Clear();
One thing I noticed is that in the 'saml2p:ArtifactResponse' there is a signature inside of that node, but not inside the contained 'saml2p:Response' node. Is it possible that the saml2p:Response is being isolated and then a signature check is being performed? I tried to see if it was supposed to be signing the response/assertion in the artifact cache on the IdP side (artifactSaml2AuthnResponseCache), but it doesn't sign the response at all. I'm doing this before putting it in the cache, just like in the example and just like I do when using POST binding:
var token = saml2AuthnResponse.CreateSecurityToken(relyingParty.Issuer, subjectConfirmationLifetime: 5, issuedTokenLifetime: 60);
artifactSaml2AuthnResponseCache[saml2ArtifactResolve.Artifact] = saml2AuthnResponse;
EDIT: I have determined that the ArtifactResponse just isn't signed properly. Another tool reports that the digest in the XML doesn't match the computed value. This is after stepping through the source and grabbing the XML that the code is trying to validate directly. I can see that the ArtifactResolve is being signed and validated properly (I checked with the external tool too), but the ArtifactResponse isn't. Even in the code it fails at the final validation of the signature (and not at any checks before it).
EDIT 2: Found the problem in the source. The .ToXmlDocument() extension is breaking the signed XML. The final test was done by replacing it in place with a new method that just returns the string directly via envelope.ToString(SaveOptions.DisableFormatting):
protected virtual XmlDocument ToSoapXml()
{
    var envelope = new XElement(Saml2Constants.SoapEnvironmentNamespaceX + Saml2Constants.Message.Envelope);
    envelope.Add(GetXContent());
    return envelope.ToXmlDocument();
}

protected string ToSoapXmlString()
{
    var envelope = new XElement(Saml2Constants.SoapEnvironmentNamespaceX + Saml2Constants.Message.Envelope);
    envelope.Add(GetXContent());
    return envelope.ToString(SaveOptions.DisableFormatting); //.ToXmlDocument();
}
And directly save that to the SoapResponseXml of the Saml2SoapEnvelope:
protected override Saml2SoapEnvelope BindInternal(Saml2Request saml2Request, string messageName)
{
    if (!(saml2Request is Saml2ArtifactResponse))
        throw new ArgumentException("Only Saml2ArtifactResponse is supported");

    BindInternal(saml2Request);
    SoapResponseXml = ToSoapXmlString(); // ToSoapXml().OuterXml;
    return this;
}
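For what it's worth, the usual way in .NET to keep signed XML byte-for-byte intact when loading it into an XmlDocument is to set PreserveWhitespace before loading. A hypothetical replacement extension along those lines (a sketch, not the library's actual code):

using System.Xml;
using System.Xml.Linq;

public static class XmlDocumentExtensions
{
    // PreserveWhitespace = true stops XmlDocument from reformatting the XML,
    // so the digests computed over the signed bytes still match.
    public static XmlDocument ToXmlDocumentPreservingWhitespace(this XElement element)
    {
        var document = new XmlDocument { PreserveWhitespace = true };
        document.LoadXml(element.ToString(SaveOptions.DisableFormatting));
        return document;
    }
}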
I would initiate a pull request for this change but honestly I'm not that up to speed with Git. I'm also not sure if this is the best way to fix the issue.
Thank you for your question and code to solve the problem. I'll look into the problem.
EDIT: I'm trying to reproduce the error but no luck. The sample is both an IdP and an RP; what have you changed to get the error?

Why am I seeing duplicate Scopes on IdentityServer4's consent screen?

I am writing an IdentityServer4 implementation and using the Quickstart project described here.
When you define an ApiResource (using InMemory classes for now), it looks like IdentityServer creates a Scope with the same name as the resource. For example:
public static IEnumerable<ApiResource> GetApiResources()
{
    return new List<ApiResource>
    {
        new ApiResource("api", "My API")
    };
}
will create a Scope called "api" (this is done in the ApiResource constructor). If I add "api" as an allowed Scope on my Client object (using InMemoryClients for a proof of concept) and request this api Scope in the scope query string parameter of my auth request from my JavaScript client, I get an invalid_scope error message.
I found, by following this documentation, that you can add Scopes to the ApiResource through the Scopes property, like so:
new ApiResource
{
    Name = "api",
    DisplayName = "Custom API",
    Scopes = new List<Scope>
    {
        new Scope("api.read"),
        new Scope("api.write")
    }
}
So now if I instead define my ApiResource like this and request the Scopes api.read and api.write (and add them to the AllowedScopes property on the Client object), then everything works fine EXCEPT the consent page, which shows duplicate Scopes: it shows api.read 2 times and api.write 2 times. See the consent screen here
The Client configuration is as follows:
new Client
{
    ClientId = "client.implicit",
    ClientName = "JavaScript Client",
    AllowedGrantTypes = GrantTypes.Implicit,
    AllowAccessTokensViaBrowser = true,
    RedirectUris = { "http://localhost:3000/health-check" },
    PostLogoutRedirectUris = { "http://localhost:3000" },
    AllowedCorsOrigins = { "http://localhost:3000" },
    AllowedScopes = {
        IdentityServerConstants.StandardScopes.OpenId,
        IdentityServerConstants.StandardScopes.Profile,
        "customApi.read", "customApi.write"
    }
}
Why is this happening? Am I doing something obviously wrong?
Update:
Here is a portion of the discovery document that shows the Scopes are only listed once...
It looks like the problem is with the Quickstart UI... or with the Scope.cs class, depending on how you look at it. Specifically, in the method and line shown in the class ConsentService.cs.
The following code
vm.ResourceScopes = resources.ApiResources.SelectMany(x => x.Scopes).Select(x => CreateScopeViewModel(x, vm.ScopesConsented.Contains(x.Name) || model == null)).ToArray();
is not filtering out the duplicates. That is, even if two Scopes have the same name, they are not considered equal. So if GetHashCode and Equals were overridden in Scope.cs (which is in IdentityServer4, not the Quickstart), it would solve this problem: SelectMany would return a unique set, because the ApiResources property is implemented as a HashSet. Alternatively, you could write your own logic to make this return a unique set of Scopes. This is how I solved the problem: I wrote something very similar to Jon Skeet's answer in this post that filtered out the duplicate Scopes.
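A sketch of that approach, assuming a DistinctBy-style extension method along the lines of Jon Skeet's answer (this helper is illustrative; it is not part of IdentityServer4 or the Quickstart):

public static class EnumerableExtensions
{
    // Yields only the first element seen for each distinct key, preserving order.
    public static IEnumerable<TSource> DistinctBy<TSource, TKey>(
        this IEnumerable<TSource> source, Func<TSource, TKey> keySelector)
    {
        var seenKeys = new HashSet<TKey>();
        foreach (var element in source)
        {
            if (seenKeys.Add(keySelector(element)))
                yield return element;
        }
    }
}

With that in place, the view model line can dedupe on scope name, e.g. resources.ApiResources.SelectMany(x => x.Scopes).DistinctBy(x => x.Name).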
The problem lies within IdentityServer4 code in the implementation of InMemoryResourcesStore.FindApiResourcesByScopeAsync and was fixed with this commit. You can use the dev branch, where it has been included since June 22nd 2017, but it was never released in any of the NuGet packages targeting .NET Standard 1.4, which is very annoying.
I created an issue and requested it to get patched:
https://github.com/IdentityServer/IdentityServer4/issues/1470
For fixing the view, I added the line marked with TODO to ConsentService.cs:
var resources = await _resourceStore.FindEnabledResourcesByScopeAsync(request.ScopesRequested);
if (resources != null && (resources.IdentityResources.Any() || resources.ApiResources.Any()))
{
    // TODO: Hotfix to cleanup scope duplication:
    resources.ApiResources = resources.ApiResources.DistinctBy(p => p.Name).ToList();
    return CreateConsentViewModel(model, returnUrl, request, client, resources);
}
This solves the display problem, but the scope will still be included multiple times in the access token, which makes it bigger, since it squares the scope count for that API. I had 3 scopes, so each one was included 3 times, adding 6 unneeded scope copies. But at least it's usable until it gets fixed.
There was a bug that was just fixed in 1.5 that addresses this: https://github.com/IdentityServer/IdentityServer4/pull/1030. Please upgrade and see if that fixes the issue for you. Thanks.

Provide a callback URL in Google Cloud Storage signed URL

When uploading to GCS (Google Cloud Storage) using the BlobStore's createUploadURL function, I can provide a callback together with header data that will be POSTed to the callback URL.
There doesn't seem to be a way to do that with GCS's signed URLs.
I know there is Object Change Notification but that won't allow the user to provide upload specific information in the header of a POST, the way it is possible with createUploadURL's callback.
My feeling is, if createUploadURL can do it, there must be a way to do it with signed URLs, but I can't find any documentation on it. I was wondering if anyone may know how createUploadURL achieves that callback-calling behavior.
PS: I'm trying to move away from createUploadURL because of the __BlobInfo__ entities it creates, which for my specific use case I do not need, and which somehow seem to be indelible and are wasting storage space.
Update: It worked! Here is how:
Short Answer: It cannot be done with PUT, but can be done with POST
Long Answer:
If you look at the signed-URL page, in front of HTTP_Verb, under Description, there is a subtle note that this page is only relevant to GET, HEAD, PUT, and DELETE, but POST is a completely different game. I had missed this, but it turned out to be very important.
There is a whole page of HTTP Headers that does not list an important header that can be used with POST; that header is success_action_redirect, as voscausa correctly answered.
In the POST page Google "strongly recommends" using PUT, unless dealing with form data. However, POST has a few nice features that PUT does not have. They may worry that POST gives us too many strings to hang ourselves with.
But I'd say it is totally worth dropping createUploadURL, and writing your own code to redirect to a callback. Here is how:
Code:
If you are working in Python voscausa's code is very helpful.
I'm using apejs to write javascript in a Java app, so my code looks like this:
var json = {}; // fields the client will post; the original snippet assumed this was declared
var exp = new Date();
exp.setTime(exp.getTime() + 1000 * 60 * 100); // 100 minutes
json['GoogleAccessId'] = String(appIdentity.getServiceAccountName());
json['key'] = keyGenerator();
json['bucket'] = bucket;
json['Expires'] = exp.toISOString();
json['success_action_redirect'] = "https://" + request.getServerName() + "/test2/";
json['uri'] = 'https://' + bucket + '.storage.googleapis.com/';
var policy = {
    'expiration': json.Expires,
    'conditions': [
        ["starts-with", "$key", json.key],
        {'Expires': json.Expires},
        {'bucket': json.bucket},
        {"success_action_redirect": json.success_action_redirect}
    ]
};
var plain = StringToBytes(JSON.stringify(policy));
json['policy'] = String(Base64.encodeBase64String(plain));
var result = appIdentity.signForApp(Base64.encodeBase64(plain, false));
json['signature'] = String(Base64.encodeBase64String(result.getSignature()));
The code above first provides the relevant fields.
Then it creates a policy object, stringifies it, and converts it into a byte array (you can use .getBytes in Java; I had to write a function for JavaScript, shown further below).
A base64-encoded version of this array populates the policy field.
Then it is signed using the appidentity package. Finally, the signature is base64 encoded, and we are done.
On the client side, all members of the json object will be added to the Form, except the uri, which is the form's address.
var formData = new FormData(document.forms.namedItem('upload'));
var blob = new Blob([thedata], {type: 'application/json'});
var keys = ['GoogleAccessId', 'key', 'bucket', 'Expires', 'success_action_redirect', 'policy', 'signature'];
for (var field in keys) {
    formData.append(keys[field], url[keys[field]]);
}
formData.append('file', blob);
var rest = new XMLHttpRequest();
rest.open('POST', url.uri);
rest.onload = callback_function;
rest.send(formData);
If you do not provide a redirect, the response status will be 204 for success; if you do redirect, the status will be 200. If you get a 403 or 400, something about the signature or policy may be wrong. Look at the responseText; it is often helpful.
A few things to note:
Both POST and PUT have a signature field, but these mean slightly different things. In the case of POST, it is a signature of the policy.
PUT has a base URL which contains the key (object name), but the URL used for POST may only include the bucket name.
PUT requires the expiration as seconds from the UNIX epoch, but POST wants it as an ISO string.
A PUT signature should be URL encoded (in Java: by wrapping it with a URLEncoder.encode call). But for POST, Base64 encoding suffices.
By extension, for POST use Base64.encodeBase64String(result.getSignature()), and do not use the Base64.encodeBase64URLSafeString function.
You cannot pass extra headers with the POST; only those listed in the POST page are allowed.
If you provide a URL for success_action_redirect, it will receive a GET with the key, bucket and eTag.
The other benefit of using POST is that you can provide size limits. With PUT, however, if a file breaches your size restriction, you can only delete it after it has been fully uploaded, even if it is multiple terabytes.
What is wrong with createUploadURL?
The method above is a manual createUploadURL.
But:
You don't get those __BlobInfo__ objects which create many indexes and are indelible. This irritates me as it wastes a lot of space (which reminds me of a separate issue: issue 4231. Please go give it a star)
You can provide your own object name, which helps create folders in your bucket.
You can provide different expiration dates for each link.
For the very, very few JavaScript App Engine developers:
function StringToBytes(sz) {
    return sz.split('').map(function(x) { return x.charCodeAt(0); });
}
You can include success_action_redirect in a policy document when you use the GCS POST Object API.
Docs: https://cloud.google.com/storage/docs/xml-api/post-object
Python example here: https://github.com/voscausa/appengine-gcs-upload
Example callback result:
def ok(self):
    """ GCS upload success callback """
    logging.debug('GCS upload result : %s' % self.request.query_string)
    bucket = self.request.get('bucket', default_value='')
    key = self.request.get('key', default_value='')
    key_parts = key.rsplit('/', 1)
    folder = key_parts[0] if len(key_parts) > 1 else None
A solution I am using is to turn on Object Change Notification. Any time an object is added, a POST is sent to a URL - in my case, a servlet in my project.
In the doPost() I get all the info about the object added to GCS, and from there I can do whatever I need.
This worked great in my App Engine project.

Parse.com iOS SDK: Create New Related Object on Object Save

I'm trying to use Cloud Code to create a new 'credit' every time a new User is created; the credit is for that user, as in it is a related object. For some reason I can't get writing to the 'Logs' tab to work using lines like console.log('tell me what is going on!'); so I'm stumped, with no way of knowing where I've gone wrong.
Parse.Cloud.afterSave("User", function(request) {
var Credit = Parse.Object.extend("credit");
var credit = new Credit();
credit.set("parent", request.object);
credit.set("expiry", null);
credit.set("type", "Opening");
credit.save();
});
You need to change it to this:
Parse.Cloud.afterSave(Parse.User, function(request)
Parse.User rather than "User".
For whatever reason this doesn't seem to be in the docs here: https://www.parse.com/docs/cloud_code_guide#functions-aftersave

How to add a filter in the "middle of the URL" using Restlet?

I have the following routes:
/projects/{projectName}
and
/projects/{projectName}/Wall/{wallName}
Now I'd like all GETs to be allowed, but PUT, POST, and DELETE should only be allowed for project members, i.e. users who are members of that project. I have a special class that, given a user id and a project name, can get the status of the user's membership - something like MyEnroler.getRole(userId, projectName) - where the userId is part of the request header and the projectName is taken from the URI.
I've tried a number of things but nothing works. Here's the idea:
public class RoleMethodAuthorizer extends Authorizer {
    @Override
    protected boolean authorize(Request req, Response resp) {
        // If it's a GET request then no further authorization is needed.
        if (req.getMethod().equals(Method.GET)) {
            return true;
        } else {
            String authorEmail = req.getClientInfo().getUser().getIdentifier();
            String projectName = req.getAttributes().get("project").toString();
            Role userRole = MyEnroler.getRole(authorEmail, projectName);
            // Forbid updates to resources by non-members of the project.
            if (userRole.equals(MyEnroler.NON_MEMBER)) {
                return false;
            }
            // For everybody else, allow the request.
            return true;
        }
    }
}
Now simply doing the following completely fails when creating the inbound root in the Application:
Router projectRouter = new Router(getContext());
RoleMethodAuthorizer rma = new RoleMethodAuthorizer();
// Guard declaration here, then setNext Restlet.
guard.setNext(projectRouter);
projectRouter.attach("/projects/{project}", rma);
Router wallRouter = new Router(getContext());
wallRouter.attach("/Wall/{wallName}", WallResource.class);
rma.setNext(wallRouter);
// return guard;
So a request to /projects/stackoverflow/Wall/restlet fails: the URL is never found. I'm guessing it's trying to match it with the projectRouter. I tried the various modes (MODE_BEST_MATCH or MODE_FIRST/NEXT_MATCH) to no avail.
Nothing seems to work. Conceptually this should work: I'm only intercepting a call and staying transparent to the request, but I don't know how things work on the inside.
I could move the authorizer just after the guard, but then I'd lose access to the projectName request attribute - I don't wish to parse the URL myself to find the projectName, since the URL pattern could change and that would break the functionality, i.e. require two changes instead of one.
Any ideas how to achieve this?
I would use the standard RoleAuthorizer class to supply the list of allowed roles, along with your custom enroler (probably split into two). I would then add a custom Filter class that does something like this to call your enrolers:
protected int beforeHandle(final Request request, final Response response) throws ResourceException {
    final String projectName = (String) request.getAttributes().get("projectName");
    // Check that a projectName is supplied; we should not have got this far otherwise, but let's check.
    if (projectName == null || projectName.isEmpty()) {
        throw new ResourceException(Status.CLIENT_ERROR_NOT_FOUND);
    }
    if (Method.GET.equals(request.getMethod())) {
        new ReadEnroler(projectName).enrole(request.getClientInfo());
    } else {
        new MutateEnroler(projectName).enrole(request.getClientInfo());
    }
    return super.beforeHandle(request, response);
}
The enrolers would then set the appropriate values in the clientInfo.getRoles() collection when enrole is called.
