I have a simple web application. I want to set up a Shibboleth native SP in front of my web app so that it issues/asserts the SAML-related things and forwards requests to my web app. Is there a complete tutorial on how to achieve that?
Use TestShib to test your app; it makes things much easier.
Follow these steps:
Download and install the SP on your machine.
Include Shibboleth's configuration in your Apache setup.
In your httpd.conf, add Include "PATH/opt/path/etc/apache22" (if your version is Apache 2.2; otherwise use the file appropriate for your version).
In the apache22.config file, add the location you want to secure - it is /secure by default.
In your shibboleth2.xml file (in the etc folder), put your entity ID in the ApplicationDefaults element, e.g. https://mywebsite.com/shibboleth - this can be anything, not necessarily a real path.
Put the entity ID of your IdP in the SSO element; in the case of TestShib it is https://idp.testshib.org/idp/shibboleth.
In the MetadataProvider element, point the metadata URI at your IdP's metadata; in the case of TestShib it is http://www.testshib.org/metadata/testshib-providers.xml (a sketch of these shibboleth2.xml elements follows these steps).
Download your metadata from https://mywebsitehost.com/Shibboleth.sso/Metadata - here mywebsitehost must be a real host; the rest of the path is handled automatically by Shibboleth and returns your SP's metadata file.
Upload your metadata file to TestShib via its register page.
You are ready to go. Go to https://mywebsitehost.com/secure and you should be redirected to the IdP to authenticate.
NOTE: Make sure you have a domain name configured with SSL (HTTPS).
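For reference, here is a minimal sketch of the pieces described above, assuming the TestShib IdP and the example entity ID used in these steps; paths, hostnames and attribute settings are placeholders you will need to adapt. In the Apache configuration (apache22.config or your own vhost):

<Location /secure>
  AuthType shibboleth
  ShibRequestSetting requireSession 1
  Require valid-user
</Location>

And the corresponding elements in shibboleth2.xml:

<ApplicationDefaults entityID="https://mywebsite.com/shibboleth"
                     REMOTE_USER="eppn persistent-id targeted-id">
  <Sessions lifetime="28800" timeout="3600" relayState="ss:mem" handlerSSL="true">
    <!-- Entity ID of the IdP to use for single sign-on -->
    <SSO entityID="https://idp.testshib.org/idp/shibboleth">
      SAML2 SAML1
    </SSO>
  </Sessions>
  <!-- Where the SP fetches the IdP's metadata from -->
  <MetadataProvider type="XML"
                    uri="http://www.testshib.org/metadata/testshib-providers.xml"
                    backingFilePath="testshib-providers.xml" reloadInterval="7200"/>
</ApplicationDefaults>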
How can I use Kerberos to secure the Solr Admin panel on a standalone (non-SolrCloud) configuration? I've tried using https://cwiki.apache.org/confluence/display/solr/Kerberos+Authentication+Plugin but I don't understand how to set up authentication without ZooKeeper/security.json.
As specified in the same wiki page you link to, you can specify that you want to use the Kerberos plugin as a Java system property at node start-up.
For example, in your solr.in.sh, you can add SOLR_AUTHENTICATION_OPTS="-DauthenticationPlugin=org.apache.solr.security.KerberosPlugin". You'll also need a JAAS config file and some additional properties; these are covered in the "Define a JAAS Configuration File" and "Solr Startup Parameters" sections on the same page (a sketch follows below).
Note: The solr.kerberos.principal you specify must be the SPNEGO SPN (i.e. HTTP/solr.example.com@EXAMPLE.COM) for the fully qualified domain name of the host the Solr node is running on.
This is likely different from the service principal you use for inter-node communication and register in your JAAS config file (something like solr/solr.example.com@EXAMPLE.COM).
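Putting that together, a minimal sketch of what the standalone setup might look like - the hostname, realm, keytab paths and the JAAS section name are placeholders, so check them against the reference guide sections mentioned above:

# solr.in.sh
SOLR_AUTHENTICATION_OPTS="-DauthenticationPlugin=org.apache.solr.security.KerberosPlugin \
  -Djava.security.auth.login.config=/etc/solr/jaas-client.conf \
  -Dsolr.kerberos.cookie.domain=solr.example.com \
  -Dsolr.kerberos.principal=HTTP/solr.example.com@EXAMPLE.COM \
  -Dsolr.kerberos.keytab=/etc/solr/keytabs/solr.keytab"

# /etc/solr/jaas-client.conf
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/etc/solr/keytabs/solr.keytab"
  storeKey=true
  useTicketCache=true
  debug=true
  principal="HTTP/solr.example.com@EXAMPLE.COM";
};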
The HTTP Form Adapter serves as the authentication service in my application. I have not implemented any application on the Identity Provider side to collect user input.
Therefore, on successful authentication, the SP verifies the user's signature and redirects to the application. At my target resource, I receive an OpenToken. Is it still possible to use the OpenToken JAR to read the user attributes from the OTK?
Note: on the Service Provider, I use the OpenToken Adapter.
Also, please let me know if there is any other possible way of getting the user attributes other than using the OpenToken Adapter/HTTP Form Adapter.
Thanks.
There are numerous SP adapters you can choose from for your last-mile integration with your application. The OpenToken Adapter is just one of them. If your application is in Java and you are using the SP OpenToken Adapter, then you would most likely use the Java OpenToken Agent implementation within your application to read the OTK (documented in the Java Integration Kit). If you look at the Add-Ons list, there are actually three flavors of OTK agents (.NET, Java and PHP from Ping Identity; Ruby on Rails and Perl are available via their respective open source repositories).
However, you are not limited to OpenToken Adapters. The Agentless Integration Kit is also very popular for SP/last-mile integration with PingFederate.
Unfortunately, the question is just too open-ended for the Stack Overflow format. I would suggest talking to your Ping Identity Solution Architect, who can help steer you in the right direction and ask the necessary follow-up questions about your use case.
If I understand the question correctly, you want attributes to be fulfilled that the web application can read and use. This starts with the SP connection configuration. I am going to assume you are using Active Directory and have already configured that data source along with the Password Credential Validator (PCV) for the HTML Form IdP Adapter. In the SP connection you will need to extend the attribute contract to define the values to put into the SAML assertion, and then use the Active Directory data source to fulfill those attributes. When the SAML assertion is received by the PingFederate SP-role server, the SP adapter maps the attribute values from the SAML assertion into the OpenToken. When your application receives the OpenToken, it can read the values.
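To make the flow concrete, here is a hypothetical fragment of what the assertion carries once the attribute contract is extended and fulfilled - the attribute names and values are illustrative only, your contract defines the real ones:

<saml:AttributeStatement xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml:Attribute Name="mail">
    <saml:AttributeValue>jdoe@example.com</saml:AttributeValue>
  </saml:Attribute>
  <saml:Attribute Name="displayName">
    <saml:AttributeValue>John Doe</saml:AttributeValue>
  </saml:Attribute>
</saml:AttributeStatement>

The SP adapter then copies these values into the OpenToken, where your application reads them as key/value pairs.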
I've successfully implemented SSO authentication using the Spring SAML extension. The primary requirement for us is to support IdP-initiated SSO to our application. By using the configuration from spring-security-saml2-sample, the SP-initiated SSO flow also works for us.
Question: Is the keystore used in IdP-initiated SSO (if the metadata has a certificate)? If it is not used, I would like to get rid of the keystore configuration in securityContext.xml.
Note: SP-initiated SSO and global logout are not needed for us. We use Okta as the IdP.
This is a good feature request. I've opened https://jira.spring.io/browse/SES-160 for you and support is available in Spring SAML's trunk with the following documentation:
In case your application doesn't need to create digital signatures and/or decrypt incoming messages, it is possible to use an empty implementation of the keystore which doesn't require any JKS file - org.springframework.security.saml.key.EmptyKeyManager. This can be the case for example when using only IDP-initialized single sign-on.
Please note that when using the EmptyKeyManager some of Spring SAML's features will be unavailable. This includes at least SP-initialized single sign-on, single logout, usage of additional keys in ExtendedMetadata and verification of metadata signatures. Use the following bean in order to initialize the EmptyKeyManager:
<bean id="keyManager" class="org.springframework.security.saml.key.EmptyKeyManager"/>
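In other words, the EmptyKeyManager takes the place of the JKSKeyManager bean you would otherwise keep in securityContext.xml. For comparison, the bean being replaced looks roughly like the one shipped with spring-security-saml2-sample (the keystore path, passwords and key alias below are the sample's values, not something your setup requires):

<bean id="keyManager" class="org.springframework.security.saml.key.JKSKeyManager">
    <constructor-arg value="classpath:security/samlKeystore.jks"/>
    <constructor-arg type="java.lang.String" value="nalle123"/>
    <constructor-arg>
        <map>
            <entry key="apollo" value="nalle123"/>
        </map>
    </constructor-arg>
    <constructor-arg type="java.lang.String" value="apollo"/>
</bean>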
I would like to configure Tomcat to be able to connect to AD and authenticate users accordingly.
In addition, I would also like to invoke some web services (in this case, Share Point) using the client credentials.
So far, I've managed to successfully configure Tomcat to use SPNEGO authentication, as described in the tutorial at http://tomcat.apache.org/tomcat-7.0-doc/windows-auth-howto.html. Note that I have used Tomcat's SPNEGO authentication (not Source Forge's or Waffle).
I did not use Source Forge's implementation since I wanted to keep things simple and use Tomcat's as provided out of the box. In addition, I wanted all the authentication and authorization to be handled by Tomcat, using the SPNEGO as the authentication method in WEB.XML and Tomcat's JNDI realm for authorization.
Also I have not used WAFFLE, since this is Windows only.
I'm using CXF as my web service stack. According to the CXF documentation at http://cxf.apache.org/docs/client-http-transport-including-ssl-support.html#ClientHTTPTransport%28includingSSLsupport%29-SpnegoAuthentication%28Kerberos%29, all you need to do to authenticate with a web service (in my case, SharePoint) is to use:
<conduit name="{http://example.com/}HelloWorldServicePort.http-conduit"
         xmlns="http://cxf.apache.org/transports/http/configuration">
    <authorization>
        <AuthorizationType>Negotiate</AuthorizationType>
        <Authorization>CXFClient</Authorization>
    </authorization>
</conduit>
and to configure CXFClient in jaas.conf (in my case, where Tomcat's server JAAS configuration is located), such that my jaas.conf looks like:
CXFClient {
    com.sun.security.auth.module.Krb5LoginModule required client=true useTicketCache=true debug=true;
};

com.sun.security.jgss.krb5.initiate {
    com.sun.security.auth.module.Krb5LoginModule required
    doNotPrompt=true
    principal="HTTP/tomcatsrv.corporate.intra@CORPORATE.INTRA"
    useKeyTab=true
    keyTab="C:/Program Files/Apache/apache-tomcat-7.0.27/conf/tomcatsrv.keytab"
    storeKey=true
    debug=true;
};

com.sun.security.jgss.krb5.accept {
    com.sun.security.auth.module.Krb5LoginModule required
    doNotPrompt=true
    principal="HTTP/tomcatsrv.corporate.intra@CORPORATE.INTRA"
    useKeyTab=true
    keyTab="C:/Program Files/Apache/apache-tomcat-7.0.27/conf/tomcatsrv.keytab"
    storeKey=true
    debug=true;
};
Yet, when I'm invoking the web service, it is invoked under the service username (i.e. Tomcat's username configured in AD and in tomcatsrv.keytab), rather than the client's username (e.g. duncan.attard).
So my question is this: is there some way in which the client's username can be delegated (or some sort of impersonation used) with CXF, so that when I invoke SharePoint's web service (e.g. I want to upload a file using Copy.asmx), the file is uploaded as duncan.attard and not as tomcat.srv?
Thanks all, your help is much appreciated.
Technically, this works perfectly. Here's the recipe:
You do not need a login module name if you work with credential delegation.
You have to make sure that the user account is eligible for delegation.
Take a look at the implementation of Tomcat's GenericPrincipal; it stores the GSS credential for you if there is one. Cast request.getPrincipal to GenericPrincipal and get the credential.
Now say you have the credential:
Construct a Subject with the Principal and the GSSCredential as private credential.
Wrap the CXF code into a PrivilegedAction.
Pass the constructed subject and an instance of your privileged action to the Subject.doAs method. The system will construct an AccessControlContext on behalf of the passed subject and invoke everything in JAAS within that context; CXF should pick those credentials up if it is implemented correctly. This is like su or sudo on Unix (see the sketch after this answer).
The easiest way to test that is to create an InitialDirContext against your Active Directory inside the privileged action, on behalf of the client. This is how I test a working credential delegation environment.
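A minimal Java sketch of those steps, using the InitialDirContext test described above - the cast, the LDAP URL and the surrounding class are assumptions that depend on your Tomcat version and setup (the SPNEGO authenticator must have stored the delegated credential), and a CXF proxy call would go in the same place as the directory lookup:

import java.security.Principal;
import java.security.PrivilegedExceptionAction;
import java.util.Collections;
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.security.auth.Subject;
import javax.servlet.http.HttpServletRequest;
import org.apache.catalina.realm.GenericPrincipal;
import org.ietf.jgss.GSSCredential;

public class DelegatedLookup {

    public DirContext bindAsClient(HttpServletRequest request) throws Exception {
        // Tomcat's SPNEGO authenticator stores the delegated credential (if any)
        // in its GenericPrincipal; adjust the cast if your setup wraps the principal.
        GenericPrincipal principal = (GenericPrincipal) request.getUserPrincipal();
        GSSCredential delegated = principal.getGssCredential(); // null unless delegation was granted

        // Subject carrying the client's principal plus the delegated GSS credential
        // as a private credential.
        Subject subject = new Subject(true,
                Collections.<Principal>singleton(principal),
                Collections.emptySet(),
                Collections.<Object>singleton(delegated));

        // Everything inside run() executes on behalf of the client (like su/sudo);
        // the CXF call to SharePoint would go where this lookup is.
        return Subject.doAs(subject, new PrivilegedExceptionAction<DirContext>() {
            public DirContext run() throws Exception {
                Hashtable<String, String> env = new Hashtable<String, String>();
                env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
                env.put(Context.PROVIDER_URL, "ldap://dc.corporate.intra:389"); // placeholder AD host
                env.put(Context.SECURITY_AUTHENTICATION, "GSSAPI");             // Kerberos SASL bind
                return new InitialDirContext(env);
            }
        });
    }
}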
I followed the installation guide for an Apache Web Policy Agent, but it always results in an endless redirect loop between web and application server. Firefox says "The page isn't redirecting properly" and Chrome thinks that "This webpage has a redirect loop". The setup is an Apache 2 on port 80 with a small demo app and a Web Policy Agent, and a Tomcat 7 server on port 8080 with an OpenAM server (the former OpenSSO from Sun):
App URL http://hostname.example.com:80/ (App and Agent, running on Apache 2.2.16)
OpenAM Server URL http://hostname.example.com:8080/openam (running on Tomcat 7.0.12)
The Live HTTP Header Firefox plugin shows that the policy agent and the OpenAM server (i.e. the Apache and Tomcat servers) redirect to each other, although the server sets the SSO Token Cookie correctly. The name of the SSO Token Cookie has the default value "iPlanetDirectoryPro". Any idea how to solve the problem?
After a whole week I finally figured it out, with the help of Stack Overflow and the OpenAM mailing list. There were two main problems: missing log files and missing cookie domains. Installing the OpenAM server and the Web Policy Agent is difficult; there are a lot of log files and many different configuration options, and if you select the wrong options, it won't work. It is impossible to make it work without knowing what is going on, which can only be determined by a suitable log file.
Missing Log for Web Policy Agent: the log level must be set in the Java properties files. There are two of them for the Web Policy Agent, OpenSSOAgentBootstrap.properties and OpenSSOAgentConfiguration.properties. The log and debug level, named com.sun.identity.agents.config.debug.level, can and must be defined in both (!) files, and it should be set to a high level, all:4 or all:5. The format is important. Even if you do this, the AgentConfiguration.properties file is only used when the agent is not working in centralized config mode; the profile must be set to "local".
Missing Cookie Domain: do not forget to enter the right cookie domain during the setup of the OpenAM server in the beginning, or add it afterwards if it is missing. On the OpenAM server, go to Configuration > System > Platform and change the Cookie Domain value to your domain (for instance .example.com) if it is missing. Otherwise the browser will lose your cookie during the redirect process. Somehow I had an empty entry for the cookie domain on the OpenAM server; I guess I forgot a dot (example.com instead of .example.com) so that it was invalid (or something like that).
This troubleshooting site was helpful to locate the problem.
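For reference, the debug setting looks like this in both properties files (the repository.location line is, as far as I remember, the property that switches the agent from centralized to local configuration mode - double-check it against your agent's documentation):

com.sun.identity.agents.config.debug.level=all:5
com.sun.identity.agents.config.repository.location=local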
@0x4a6f4672, your post was absolutely helpful. Some more to add to your answer: the following changes are what I had to do in the config to get it running, at least for Alfresco.
com.sun.identity.agents.config.user.mapping.mode=USER_ID (don't use HTTP_HEADER)
com.sun.identity.agents.config.user.attribute.name=uid
com.sun.identity.agents.config.user.principal=true (don't use false)
com.sun.identity.agents.config.user.token=SsoUserHeader (keep it as per what is specified in your application - in my case Alfresco)
Since you are now running the agent not in centralized mode but in local mode, the profile attribute settings can only be set via the property file, so add the following:
com.sun.identity.agents.config.profile.attribute.fetch.mode=HTTP_HEADER
com.sun.identity.agents.config.profile.attribute.mapping[uid]=SsoUserHeader (whatever you want the header to appear as in the browser)
As 0x4a6f4672 said, it is difficult to debug unless you are in local mode, so switch to local mode immediately, start tracing the logs, and make the property changes accordingly.