I am using the HTTP Form Adapter in PingFederate. How do I get user attributes from the SAML Response? - saml-2.0

The HTTP Form Adapter serves as the authentication service for my application. I have not implemented any application on the Identity Provider side to collect user input.
Therefore, on successful authentication, the SP verifies the user's signature and redirects to the application. At my target resource, I receive an OpenToken. Is it still possible to use the OpenToken JAR to read the user attributes from the OTK?
**Note:** On the Service Provider, I use the OpenToken Adapter.
Also, please let me know if there is any other possible way of getting the user attributes besides the OpenToken Adapter/HTTP Form Adapter.
Thanks.

There are numerous SP Adapters you can choose from for your last-mile integration with your application. The OpenToken Adapter is just one of them. If your application is in Java and you are using the SP OpenToken Adapter, then you would most likely use the Java OpenToken Agent implementation within your application to read the OTK (documented in the Java Integration Kit). If you look at the Add-Ons list, there are actually three flavors of OTK Agents (.NET, Java, and PHP from Ping Identity; Ruby on Rails and Perl are available via their respective open-source repositories).
However, you are not limited to OpenToken Adapters. The Agentless Integration Kit is also very popular for SP/last-mile integration with PingFederate.
Unfortunately, the question is just too open-ended for the Stack Overflow format. I would suggest talking to your Ping Identity Solution Architect, who can help steer you in the right direction and ask the necessary follow-up questions about your use case.

If I understand the question correctly, you want attributes to be fulfilled that the web application can read and use. This starts with the SP Connection configuration. I am going to assume you are using Active Directory and have already configured that data source along with the Password Credential Validator (PCV) for the HTML Form IdP Adapter. In the SP Connection you will need to extend the attribute contract to define the values to put into the SAML assertion and then use the Active Directory data source to fulfill the attributes. When the SAML assertion is received by the PingFederate SP-role server, the SP Adapter maps the attribute values from the SAML assertion into the OpenToken. When your application receives the OpenToken, it can read the values.

Related

Not able to create events using Microsoft Graph SDK

I am trying to create an Event using the Microsoft Graph SDK, following this documentation:
https://learn.microsoft.com/en-us/graph/api/user-post-events?view=graph-rest-beta&tabs=csharp
1. Created "authProvider"
2. Created a GraphClient with the above authProvider
3. Creating the Event using
The event is not being created, and no exception or error is thrown. Could anyone help me here?
This is happening because this call is being made with the same transactionId repeatedly; the server uses that value to avoid unnecessary retries.
It is an optional parameter, so just comment out this property and try again. It should work.
Note: this identifier is specified by a client app so the server can avoid redundant POST operations when the client retries creating the same event. It is also useful when low network connectivity causes the client to time out before receiving the server's response to its prior create-event request.
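For reference, a minimal sketch of what the create-event call might look like with the Graph SDK's request-builder style (SDK v3/v4), with TransactionId left commented out as suggested above. The subject, dates, and the authProvider parameter are placeholders, not taken from the question:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Graph;

public static class CreateEventSample
{
    // authProvider is whatever IAuthenticationProvider was built in step 1.
    public static async Task RunAsync(IAuthenticationProvider authProvider)
    {
        var graphClient = new GraphServiceClient(authProvider);

        var newEvent = new Event
        {
            Subject = "Team sync",
            Start = new DateTimeTimeZone { DateTime = "2020-01-25T12:00:00", TimeZone = "UTC" },
            End   = new DateTimeTimeZone { DateTime = "2020-01-25T13:00:00", TimeZone = "UTC" },
            // TransactionId = "7E163156-7762-4BEB-A1C6-729EA81755A7"
            // Leaving TransactionId out (or making it unique per logical request) avoids the
            // server silently treating the call as a duplicate of an earlier create-event request.
        };

        Event created = await graphClient.Me.Events
            .Request()
            .AddAsync(newEvent);

        Console.WriteLine($"Created event with id {created.Id}");
    }
}
```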
More info is required here, as the reply from Allen Wu stated. Without any details, I would focus my efforts on the authProvider piece and the Azure app registration piece, as the rest of the example is just sending a POST request to the Graph API.
But what is recommended really depends on what type of application you are trying to build, e.g. a service daemon, a web app, a mobile app, a desktop app, a single-page app, etc.

ADFS 2.0 Not handling 'Extension' tag in SAML AuthnRequest - Throwing Exception MSIS7015

We currently have ADFS 2.0 with hotfix rollup 2 installed and working properly as an identity provider for several external relying parties using SAML authentication. This week we attempted to add a new relying party; however, when a client presents the authentication request from the new party, ADFS simply returns an error page with a reference number and does not prompt the client for credentials.
I checked the server ADFS 2.0 event log for the reference number, but it is not present (searching the correlation id column). I enabled the ADFS trace log, re-executed the authentication attempt and this message was presented:
Failed to process the Web request because the request is not valid. Cannot get protocol message from HTTP query. The following errors occurred when trying to parse incoming HTTP request:
Microsoft.IdentityServer.Protocols.Saml.HttpSamlMessageException: MSIS7015: This request does not contain the expected protocol message or incorrect protocol parameters were found according to the HTTP SAML protocol bindings.
at Microsoft.IdentityServer.Web.HttpSamlMessageFactory.CreateMessage(HttpContext httpContext)
at Microsoft.IdentityServer.Web.FederationPassiveContext.EnsureCurrent(HttpContext context)
As the message indicates that the request is not well formed, I went ahead and ran the request through xmlsectool and validated it against the SAML protocol XSD (http://docs.oasis-open.org/security/saml/v2.0/saml-schema-protocol-2.0.xsd) and it came back clean:
C:\Users\ebennett\Desktop\xmlsectool-1.2.0>xmlsectool.bat --validateSchema --inFile metaauth_kld_request.xml --schemaDirectory . --verbose
INFO XmlSecTool - Reading XML document from file 'metaauth_kld_request.xml'
DEBUG XmlSecTool - Building DOM parser
DEBUG XmlSecTool - Parsing XML input stream
INFO XmlSecTool - XML document parsed and is well-formed.
DEBUG XmlSecTool - Building W3 XML Schema from file/directory 'C:\Users\ebennett\Desktop\xmlsectool-1.2.0\.'
DEBUG XmlSecTool - Schema validating XML document
INFO XmlSecTool - XML document is schema valid
So, I'm thinking that ADFS isn't fully compliant with the SAML specification. To verify, I manually examined the submitted AuthnRequest and discovered that our vendor is making use of the 'Extensions' element to embed their custom properties, which is valid according to the SAML specification (note: the "ns33" prefix below is correctly bound to "urn:oasis:names:tc:SAML:2.0:protocol" elsewhere in the request):
<ns33:Extensions>
<vendor_ns:fedId xmlns:vendor_ns="urn:vendor.name.here" name="fedId" value="http://idmfederation.vendorname.org"/>
</ns33:Extensions>
If I remove the previous element from the AuthnRequest and resubmit it to ADFS, everything goes swimmingly. And, in fact, I can leave the 'Extensions' container and simply edit out the vendor namespaced element, and ADFS succeeds.
Now, I guess I have 3 questions:
Why was the reference number not logged to the ADFS log? That really would have helped my early debugging efforts.
Is it a known issue that ADFS's SAML handler cannot handle custom elements defined within the Extensions element, and if so, is there a way to add support (or at least not crash while handling it)? My vendor has offered to change the SAML AuthnRequest generated to omit that tag, but said that it 'may take some time'-- and we all know what that means...
Does anyone think that installing ADFS hotfix rollup 3 will address this situation? I didn't see anything in the doc to indicate the affirmative.
Thanks for your feedback.
When facing an MSIS7015 ADFS error, the best place to start is enabling ADFS tracing. Log in to the ADFS server as admin and run the following commands. If you have a very busy ADFS server, it might be wise to do this when the server is not as busy.
C:\Windows\System32\> wevtutil sl "AD FS Tracing/Debug" /L:5
C:\Windows\System32\> eventvwr.msc
In Event Viewer select “Application and Services Logs”, right-click and select “View – Show Analytics and Debug Logs”
Go to AD FS Tracing – Debug, right-click and select “Enable Log” to start Trace Debugging.
Process your ADFS login/logout steps and, when finished, go back to the Event Viewer MMC, find the subtree AD FS Tracing – Debug, right-click, and select "Disable Log" to stop trace debugging.
Look for Event ID 49 - the incoming AuthnRequest - and check the values being sent against your CAP values. For example, in my case, I was receiving the following values: IsPassive='False', ForceAuthn='False'.
In my case, to address the issue, all I needed to do was create an incoming claim transformer rule for the distinct endpoints.
Once the CAP values were transformed to lowercase true and false, authentication started working.

Do we need Keystore/JKSKeyManager in IDP initiated SSO (SAML)?

I've successfully implemented SSO authentication using the Spring SAML extension. The primary requirement for us is to support IdP-initiated SSO to our application. By using the configuration from spring-security-saml2-sample, the SP-initiated SSO flow also works for us.
Question: Is the keystore used in IdP-initiated SSO (if the metadata contains a certificate)? If it is not used, I would like to get rid of the keystore configuration from securityContext.xml.
Note: SP-initiated SSO and global logout are not needed for us. We use Okta as the IdP.
This is a good feature request. I've opened https://jira.spring.io/browse/SES-160 for you and support is available in Spring SAML's trunk with the following documentation:
In case your application doesn't need to create digital signatures and/or decrypt incoming messages, it is possible to use an empty implementation of the keystore which doesn't require any JKS file - org.springframework.security.saml.key.EmptyKeyManager. This can be the case for example when using only IDP-initialized single sign-on. Please note that when using the EmptyKeyManager some of Spring SAML features will be unavailable. This includes at least SP-initialized Single Sign-on, Single Logout, usage of additional keys in ExtendedMetadata and verification of metadata signatures. Use the following bean in order to initialize the EmptyKeyManager:
<bean id="keyManager" class="org.springframework.security.saml.key.EmptyKeyManager"/>

Silverlight: accept invalid certificate

I'm doing HTTPS web requests in Silverlight using the "WebRequest"/"WebResponse" framework classes.
Problem is: I make a request to a URL like: https://12.34.56.78
I receive back a VeriSign-signed certificate whose subject is a domain name like www.mydomain.com.
Hence this results in a remote certificate mismatch error.
First question: Can I somehow accept the invalid certificate and get the WebResponse content? (Even if it involves using other libraries, I'm open to it.)
Additional details: (for those interested on why I need this scenario)
I'm trying to give a client access to a Silverlight app deployed on a test server.
Client accesses the silverlight app at: www.mydomain.com/app
Then I do some rest requests to: https://xx.mydomain.com
Problem is I don't want to do requests on https://xx.mydomain.com, since that is on our productive server. For this reason I use https://12.34.56.78 instead of https://xx.mydomain.com.
The client has some firewalls/proxies, and if I simply change his hosts file to map xx.mydomain.com to 12.34.56.78, web requests don't resolve to the mapped IP.
I say this because on his network the web requests fail if I try that, while on my network the hosts-file change works without problems.
UPDATE: Fixed the problem by deploying test releases to an alternative address, https://yy.domain.com, and allowing the user to configure, for test purposes, the base URL to which requests are made to be https://yy.domain.com.
Using a certificate that contained the IP in the subject or a subject alternative name would probably have worked too, but it would have cost some money to be issued by a certificate authority and would not be ideal because IPs might change.
After doing more research, it looks like Microsoft won't add this feature anytime soon, unless there's a scenario for non-testing/debugging uses.
See: http://connect.microsoft.com/VisualStudio/feedback/details/368047/add-system-net-servicepointmanager-servercertificatevalidationcallback-property

WCF security between WinForms client and Shared Host webserver

Ok,
I have developed this WinForms client, which interacts with a server (ASPX Application) by means of WCF calls. I would now like to deploy the server to my shared webhost, but I'm kinda new to WCF and especially the security possibilities behind it.
The goal is to secure the WCF service somewhat, so that not everybody who knows or finds out the endpoint address can call it. Rather, only my WinForms client must be able to call the WCF service.
I do not need authentication on a user basis, so no authentication is required from the user of the client. But I want only instances of this WinForms client to be able to interact with the service. The information passed between server and client is not very sensitive, so it's not really required to secure it, but it's a plus if it can easily be done.
Is this possible in a shared-host (IIS) environment (no HTTPS at my disposal)? What bindings and options should I use? I suppose wsHttpBinding, but how would I set up the security options?
Using .NET 4.0
Thanks
From what I understand, you have an internet-facing service which you want to limit so that only your client app is able to call it - correct? Or do you envision other clients (like PHP, Ruby etc.) also wanting to call into your service at some point?
To secure your message, you have two options in WCF - message or transport security. Over the internet, with an unknown number of hops between your client and your service, transport security doesn't work - you're left with message security (encrypting the message as it travels across the 'net). For this to work, you typically add a digital certificate to your service (only server-side) that the client can discover and use to encrypt the messages with. Only your service will be able to decrypt them - so you're safe on that end.
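As a rough illustration of that first point, here is what a message-security setup could look like in code; on a shared IIS host the equivalent settings would normally live in web.config. The service/contract names, address, and certificate subject below are placeholders, not from the question:

```csharp
using System;
using System.Security.Cryptography.X509Certificates;
using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    string Ping();
}

public class OrderService : IOrderService
{
    public string Ping() => "pong";
}

public static class HostSketch
{
    public static void Main()
    {
        // wsHttpBinding with message security: callers stay anonymous,
        // but every message is encrypted for the service.
        var binding = new WSHttpBinding(SecurityMode.Message);
        binding.Security.Message.ClientCredentialType = MessageCredentialType.None;

        var host = new ServiceHost(typeof(OrderService),
                                   new Uri("http://www.example.com/OrderService"));

        // The service authenticates itself with a certificate; clients use it to
        // negotiate encryption, so traffic is protected even without HTTPS.
        host.Credentials.ServiceCertificate.SetCertificate(
            StoreLocation.LocalMachine, StoreName.My,
            X509FindType.FindBySubjectName, "www.example.com");

        host.AddServiceEndpoint(typeof(IOrderService), binding, "");
        host.Open();

        Console.WriteLine("Service running...");
        Console.ReadLine();
        host.Close();
    }
}
```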
The next point is: who can call your service? If you want to be totally open to anyone, then yes, you need wsHttpBinding (or the RESTful variant - webHttpBinding). If you want to allow non-.NET clients, you're typically limited to no authentication (anyone can call), or username/password schemes which you will validate on the server side against a database of valid users.
If you only want to allow your own .NET client in, then you can do several things:
disable metadata on your service; with this, you would "hide" your endpoints and the services they provide - someone using a "metadata scanner" (if that exists) wouldn't be able to just stumble across your service and find out what methods it provides etc. This however also makes it impossible for another developer outside your organization to do an Add Service Reference to your service.
you could define and use a custom binary http binding - only other clients with this setup could even call your service. The binary http binding would bring some speed improvements, too. See this blog post on how to do this.
you need to somehow identify those callers that are allowed in - one possible method would be to put an extra header into your WCF messages that you then check for on the server side (a sketch follows after this list). This would simply make sure that a casual hacker who discovers your service and figures out the binary http binding would still be rejected (at least for some time). See this blog post here on how to implement such a message inspector extension for WCF.
the ultimate step would be to install a digital certificate on the client machine along with your service. You would then set up your client side proxy to authenticate with the service using that certificate. Only client machine that have that certificate could then call into your service.
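To make the extra-header idea from the third bullet concrete, here is a hedged sketch of a pair of message inspectors. The header name, namespace, and token value are made up, and in a real setup you would register these inspectors via an IEndpointBehavior on the client and the service:

```csharp
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

public class ClientTokenInspector : IClientMessageInspector
{
    // Add a custom SOAP header to every outgoing message from the WinForms client.
    public object BeforeSendRequest(ref Message request, IClientChannel channel)
    {
        request.Headers.Add(MessageHeader.CreateHeader("X-App-Token", "urn:myapp", "shared-secret"));
        return null;
    }

    public void AfterReceiveReply(ref Message reply, object correlationState) { }
}

public class ServiceTokenInspector : IDispatchMessageInspector
{
    // Reject any call that does not carry the expected header/value.
    public object AfterReceiveRequest(ref Message request, IClientChannel channel, InstanceContext instanceContext)
    {
        int index = request.Headers.FindHeader("X-App-Token", "urn:myapp");
        if (index < 0 || request.Headers.GetHeader<string>(index) != "shared-secret")
        {
            throw new FaultException("Caller not recognized.");
        }
        return null;
    }

    public void BeforeSendReply(ref Message reply, object correlationState) { }
}
```

This is obfuscation rather than real authentication (the shared value ships inside the client), which is why the answer above treats it as one layer among several rather than the whole story.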
It really depends on how far you want to go - WCF gives you a lot of options, but you need to decide how much effort you want to put into it.
The first thing you need to ask yourself is: "What can someone do to your WCF service if they connect their own customized client?" Look at all of the functionality that is being exposed via WCF and assume that it can be accessed at will. You have absolutely no control over the client, and you never will have.
HTTPS is beautiful; it's a damn shame that you're forced to be vulnerable to OWASP A9: Insufficient Transport Layer Protection. If it were up to me, I would move to a different host that cared about security. If you are throwing usernames and passwords over the network, then you're putting your users in danger.
One of the biggest problems I have seen with a WCF service is an "executeQuery()" function that was exposed, the developer allowing the client to build queries to be executed by the server. This approach is fundamentally flawed, as you are just handing your database over to an attacker. This type of vulnerability isn't SQL Injection; it falls under CWE-602: Client-Side Enforcement of Server-Side Security.
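To illustrate the contrast, here is a hypothetical contract shape; the names are invented, not taken from the question:

```csharp
using System.Data;
using System.ServiceModel;

[ServiceContract]
public interface IReportService
{
    // Anti-pattern: the client hands the server raw SQL, so anyone who crafts
    // their own client effectively gets direct database access (CWE-602).
    [OperationContract]
    DataTable ExecuteQuery(string sql);

    // Safer shape: the server owns the query; the client only supplies a constrained
    // input that the server validates against the authenticated caller.
    [OperationContract]
    DataTable GetOrdersForCustomer(int customerId);
}
```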
Along the same lines as CWE-602 is OWASP A4: Insecure Direct Object References. Could an attacker fool your WCF service into thinking it's another user by providing a different user ID? Are you trusting the client to tell the truth?
The next classification of vulnerabilities that you must take into consideration is OWASP A1: Injection, otherwise known as "taint and sink". For instance, suppose you are exposing a function where one of its parameters is used in a CreateProcess() call which invokes cmd.exe. This parameter could be controlled by the attacker, and therefore this variable is "tainted"; the call to CreateProcess() is a "sink". There are many types of vulnerabilities along these lines, including but not limited to SQL Injection, LDAP Injection, and XPath Injection. These types of vulnerabilities affect all web applications.
