GitLab CI/CD Pipeline - Setting VAULT_NAMESPACE to Root for OIDC

I am implementing single sign-on for HashiCorp Vault (Enterprise) and Azure AD, using OIDC as the auth method and Terraform for my IaC. The entire solution runs off a GitLab CI/CD Pipeline.
The OIDC auth method must be enabled in the Vault root namespace. I initially implemented and tested the solution successfully using a dedicated namespace issued by our Vault Admin, with the VAULT_ADDRESS and VAULT_NAMESPACE variables in my .gitlab-ci.yml set in the format below:
VAULT_ADDRESS = "https://my-company-domain.org/"
VAULT_NAMESPACE = "/nnn/nnn"
To reiterate, the OIDC single sign-on authentication works perfectly with my VAULT_NAMESPACE set as depicted above.
However, to set it to the root namespace, I have implemented two changes in my .gitlab-ci.yml file, each of which I expected to produce the desired result, but neither has been successful. By that I mean that after the GitLab pipeline has successfully provisioned my GitLab project code, no login redirection occurs at the Vault login prompt, unlike the previous, expected behaviour.
The two changes I have tried out are:
Commenting out the VAULT_NAMESPACE line entirely, i.e. # VAULT_NAMESPACE = "/nnn/nnn"
Setting the VAULT_NAMESPACE variable to an empty string, i.e. VAULT_NAMESPACE = ""
What could I be doing wrong, and how do I achieve the desired result? Is this a change required in my project code, or perhaps one for the Vault Admin?
For reference, the GitLab documentation describes VAULT_NAMESPACE as follows:
VAULT_NAMESPACE - Optional. The Vault Enterprise namespace to use for reading secrets and authentication.
If no namespace is specified, Vault uses the root ("/") namespace.
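Based on that documentation, a minimal variables block worth trying might look like the following; the explicit "/" value and the omitted-key variant are untested suggestions derived from the quoted docs, not confirmed fixes:

variables:
  VAULT_ADDRESS: "https://my-company-domain.org/"
  # Option A: target the root namespace explicitly, per the docs' "/" notation
  VAULT_NAMESPACE: "/"
  # Option B: remove the VAULT_NAMESPACE key entirely (rather than setting it
  # to ""), so no namespace header is sent and Vault defaults to root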

Related

GCP: Remove IAM policy from Service Account using Terraform

I'm creating an App Engine app using the google_app_engine_flexible_app_version resource.
By default, Google creates a Default App Engine Service Account with roles/editor permissions.
I want to reduce the permissions of my AppEngine.
Therefore, I want to remove the roles/editor permission and add my custom role instead.
To remove it, I know I can use the gcloud projects remove-iam-policy-binding CLI command.
But I want it to be part of my terraform plan.
If you are using https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/app_engine_flexible_app_version to create your infrastructure, then you must have seen the following line in it.
role = "roles/compute.networkUser"
This role is used when setting up your infra, and you can adjust it after consulting https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/iam_deny_policy
Note: When setting up the role, please ensure valid permissions are in place for your App Engine to work properly.
I. Using the Provided Terraform Code as a Template and Tinkering with It
One simple approach I would suggest is to (1) first set up your infrastructure with the basic Terraform code you have, (2) update/tinker with your infra as per your expectations, and then (3) run terraform refresh and terraform plan to find the differences required to update your code.
The example below is unrelated to App Engine, but it illustrates the workflow.
resource "google_dns_record_set" "default" {
name = google_dns_managed_zone.default.dns_name
managed_zone = google_dns_managed_zone.default.name
type = "A"
ttl = 300
rrdatas = [
google_compute_instance.default.network_interface.0.access_config.0.nat_ip
]
}
Above is the code for creating a DNS record using Terraform. After following steps 1, 2 and 3 above, I get the following differences with which to update my code:
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # google_dns_record_set.default will be updated in-place
  ~ resource "google_dns_record_set" "default" {
        id   = "projects/mmterraform03/managedZones/example-zone-googlecloudexample/rrsets/googlecloudexample.com./A"
        name = "googlecloudexample.com."
      ~ ttl  = 360 -> 300
        # (4 unchanged attributes hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
II. Using Terraform Import
Several tools available today, such as Google Cloud Platform's gcloud, Terraform itself, and other open-source platforms, can read your existing infrastructure and write Terraform code for you.
So you can check terraform import or Google's docs: https://cloud.google.com/docs/terraform/resource-management/import
But to use this method, you have to setup your infrastructure first. You either do it completely manually from Google Console UI or use terraform first and then update it.
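As a concrete sketch, importing the DNS record from the earlier example might look like this (the resource address and ID format are taken from the plan output above; substitute your own resource and ID):

# Bring the existing DNS record under Terraform management.
terraform import google_dns_record_set.default \
    projects/mmterraform03/managedZones/example-zone-googlecloudexample/rrsets/googlecloudexample.com./A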
As a third (III) option, you can hire a Terraform expert to do this task for you, but options I and II work best in many cases.
On a different note, please see https://stackoverflow.com/help/how-to-ask and https://stackoverflow.com/help/minimal-reproducible-example. Opinion-based and open-ended how-to questions are usually discouraged on Stack Overflow.
This is one situation where you might consider using google_project_iam_policy.
That could be used to knock out the Editor role, but it will knock out everything else you don't explicitly list in the policy!
Beware - There is a risk of locking yourself out of your project if you are not sure what you are doing.
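To make that risk concrete, here is a minimal sketch of the authoritative approach; the project ID, custom role and member are placeholder assumptions, and any binding not listed here is removed:

# Hypothetical example: after applying, these become the ONLY bindings on
# the project. roles/editor disappears, but so does everything else omitted.
data "google_iam_policy" "project" {
  binding {
    role = "projects/my-project/roles/myCustomRole" # assumed custom role
    members = [
      "serviceAccount:my-project@appspot.gserviceaccount.com",
    ]
  }
  # ...every other binding the project still needs must be listed here too...
}

resource "google_project_iam_policy" "project" {
  project     = "my-project"
  policy_data = data.google_iam_policy.project.policy_data
}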
Another option would be to use a custom service account.
Use terraform to create the account and apply the desired roles.
Use gcloud app deploy --service-account={custom-sa} to deploy a service to app engine that uses the custom account.
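A minimal Terraform sketch of those two steps, where the project ID, account ID and role name are placeholder assumptions:

resource "google_service_account" "app" {
  account_id   = "custom-appengine-sa"
  display_name = "Custom App Engine service account"
}

resource "google_project_iam_member" "app_custom_role" {
  project = "my-project"
  role    = "projects/my-project/roles/myCustomRole" # assumed custom role
  member  = "serviceAccount:${google_service_account.app.email}"
}

The deploy step then becomes something like gcloud app deploy --service-account=custom-appengine-sa@my-project.iam.gserviceaccount.com.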
But you may still wish to remove the Editor role from the default service account. Given that you already have the gcloud command to do it (gcloud projects remove-iam-policy-binding), you could use the terraform-google-gcloud module to execute the command from Terraform.
See also this feature request.

gcloud cli app engine domain mapping error

I am trying to get multiple microservices to run on a single App Engine within a single project. I am following this official documentation from GCP:
https://cloud.google.com/appengine/docs/standard/python3/mapping-custom-domains
When I try to create a wildcard mapping like this
gcloud app domain-mappings create '*.example.com'
So that GCP backend engines can match the request accordingly:
[VERSION_ID].[SERVICE_ID].example.com
I get the following error
ERROR: (gcloud.app.domain-mappings.create) INVALID_ARGUMENT: A managed certificate cannot be created on a wildcard domain mapping. Set `ssl_management_type` to `MANUAL` and retry the domain mapping creation. You can manually create an SSL certificate with `AuthorizedCertificates.CREATE` and map it to this domain mapping with `AuthorizedCertificates.UPDATE`.
Could anyone help with this?
It looks like by default the command attempts to configure managed SSL certificates, which aren't compatible with wildcard domain mappings. From Wildcard mappings:
Note: Wildcard mappings are not supported for managed SSL certificates.
As the error message suggests you can disable that with an option. From gcloud beta app domain-mappings create:
--certificate-management=CERTIFICATE_MANAGEMENT
Type of certificate management. 'automatic' will provision an SSL
certificate automatically while 'manual' requires the user to provide
a certificate id to provision. CERTIFICATE_MANAGEMENT must be one
of: automatic, manual.
So just try instead:
gcloud app domain-mappings create '*.example.com' --certificate-management=manual
I see a discrepancy: the error message mentions the ssl_management_type option while the doc page shows certificate-management. Try both if needed - it may be just an error or it may be a renamed option (which may or may not still be supported under the hood).
Of course, if you want SSL, you'd have to manage the SSL certificate(s) yourself (maybe using the --certificate-id option, documented on the same page?). In that case also check out the related Google App Engine custom subdomain mapping for a specific version for potential implications of variable domain nesting.
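For completeness, a possible manual-certificate flow might look like the following; the certificate files, display name and CERTIFICATE_ID are placeholders, and the flag spellings come from the same gcloud docs pages, so verify them against your gcloud version:

# Upload a wildcard certificate you manage yourself
gcloud app ssl-certificates create \
    --display-name="wildcard-example-com" \
    --certificate="example_com.crt" \
    --private-key="example_com.key"

# Map the wildcard domain, referencing the uploaded certificate by ID
gcloud app domain-mappings create '*.example.com' \
    --certificate-management=manual \
    --certificate-id=CERTIFICATE_ID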

Do we need Keystore/JKSKeyManager in IDP initiated SSO (SAML)?

I've successfully implemented SSO authentication using the Spring SAML extension. The primary requirement for us is to support IdP-initiated SSO to our application. Using the configurations from spring-security-saml2-sample, even the SP-initiated SSO flow works for us.
Question: Is the keystore used in IdP-initiated SSO (if the metadata has a certificate)? If not, I would like to get rid of the keystore configuration in securityContext.xml.
Note: SP-initiated SSO and global logout are not needed for us. We use Okta as the IdP.
This is a good feature request. I've opened https://jira.spring.io/browse/SES-160 for you and support is available in Spring SAML's trunk with the following documentation:
In case your application doesn't need to create digital signatures and/or decrypt incoming messages, it is possible to use an empty implementation of the keystore which doesn't require any JKS file - org.springframework.security.saml.key.EmptyKeyManager. This can be the case for example when using only IDP-initialized single sign-on.
Please note that when using the EmptyKeyManager some of Spring SAML features will be unavailable. This includes at least SP-initialized Single Sign-on, Single Logout, usage of additional keys in ExtendedMetadata and verification of metadata signatures. Use the following bean in order to initialize the EmptyKeyManager:
<bean id="keyManager" class="org.springframework.security.saml.key.EmptyKeyManager"/>
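For context, this empty implementation takes the place of the usual JKSKeyManager bean; in the spring-security-saml2-sample that bean looks roughly like the following (keystore path, passwords and alias are the sample's demo values, so treat them as placeholders):

<bean id="keyManager" class="org.springframework.security.saml.key.JKSKeyManager">
    <constructor-arg value="classpath:security/samlKeystore.jks"/>
    <constructor-arg type="java.lang.String" value="nalle123"/>
    <constructor-arg>
        <map>
            <entry key="apollo" value="nalle123"/>
        </map>
    </constructor-arg>
    <constructor-arg type="java.lang.String" value="apollo"/>
</bean>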

Configure Tomcat for Kerberos and Impersonation

I would like to configure Tomcat to be able to connect to AD and authenticate users accordingly.
In addition, I would also like to invoke some web services (in this case, Share Point) using the client credentials.
So far, I've managed to successfully configure Tomcat to use SPNEGO authentication, as described in the tutorial at http://tomcat.apache.org/tomcat-7.0-doc/windows-auth-howto.html. Note that I have used Tomcat's SPNEGO authentication (not Source Forge's or Waffle).
I did not use Source Forge's implementation since I wanted to keep things simple and use Tomcat's as provided out of the box. In addition, I wanted all the authentication and authorization to be handled by Tomcat, using the SPNEGO as the authentication method in WEB.XML and Tomcat's JNDI realm for authorization.
Also I have not used WAFFLE, since this is Windows only.
I'm using CXF as my Web Service stack. According to the CXF documentation at http://cxf.apache.org/docs/client-http-transport-including-ssl-support.html#ClientHTTPTransport%28includingSSLsupport%29-SpnegoAuthentication%28Kerberos%29, all you need to do to authenticate with a web service (in my case, SharePoint) is to use:
<conduit name="{http://example.com/}HelloWorldServicePort.http-conduit"
         xmlns="http://cxf.apache.org/transports/http/configuration">
  <authorization>
    <AuthorizationType>Negotiate</AuthorizationType>
    <Authorization>CXFClient</Authorization>
  </authorization>
</conduit>
and configure CXFClient in jaas.conf (in my case, where Tomcat's server JAAS configuration is located), such that my jaas.conf looks like:
CXFClient {
  com.sun.security.auth.module.Krb5LoginModule required client=true useTicketCache=true debug=true;
};

com.sun.security.jgss.krb5.initiate {
  com.sun.security.auth.module.Krb5LoginModule required
    doNotPrompt=true
    principal="HTTP/tomcatsrv.corporate.intra@CORPORATE.INTRA"
    useKeyTab=true
    keyTab="C:/Program Files/Apache/apache-tomcat-7.0.27/conf/tomcatsrv.keytab"
    storeKey=true
    debug=true;
};

com.sun.security.jgss.krb5.accept {
  com.sun.security.auth.module.Krb5LoginModule required
    doNotPrompt=true
    principal="HTTP/tomcatsrv.corporate.intra@CORPORATE.INTRA"
    useKeyTab=true
    keyTab="C:/Program Files/Apache/apache-tomcat-7.0.27/conf/tomcatsrv.keytab"
    storeKey=true
    debug=true;
};
Yet, when I'm invoking the web service, it is invoked under the service username (i.e. Tomcat's username configured in AD and in tomcatsrv.keytab), rather than the client's username (e.g. duncan.attard).
So my question is this: is there some way in which the client's username can be delegated (or some sort of impersonation used) with CXF, so that when I invoke SharePoint's web service (e.g. uploading a file using Copy.asmx), the file is uploaded as duncan.attard and not as tomcat.srv?
Thanks all, your help is much appreciated.
Technically, this works perfectly. Here's the recipe:
You do not need a login module name if you work with credential delegation.
You have to make sure that the user account is eligible for delegation.
Take a look at the implementation of Tomcat's GenericPrincipal; it will save the GSS credential for you if there is one. Cast request.getUserPrincipal() to GenericPrincipal and get the credential.
Now say you have the credential:
Construct a Subject with the Principal and the GSSCredential as private credential.
Wrap the CXF code into a PrivilegedAction.
Pass the constructed subject and an instance of your privileged action to the Subject.doAs method and the system will construct an AccessControlContext on behalf of the passed subject and will invoke everything in JAAS on behalf of that context. CXF should use those if it is implemented correctly. This is like su or sudo on Unix.
The easiest way to test that is to create an InitialDirContext in the privileged action on behalf of the client to your Active Directory. This is how I test a working credential delegation environment.
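Putting the recipe together, here is a minimal sketch; the class and method names are illustrative, and it assumes Tomcat's SPNEGO authenticator has stored a delegated GSSCredential on the principal:

import java.security.PrivilegedAction;
import java.util.Collections;

import javax.security.auth.Subject;
import javax.servlet.http.HttpServletRequest;

import org.apache.catalina.realm.GenericPrincipal;
import org.ietf.jgss.GSSCredential;

public class DelegatedCallHelper {

    // Runs 'action' (e.g. the CXF call to SharePoint) as the client
    // authenticated in 'request' - conceptually like su/sudo on Unix.
    public static <T> T callAsClient(HttpServletRequest request,
                                     PrivilegedAction<T> action) {
        // Tomcat's SPNEGO authenticator stores the delegated credential
        // on its GenericPrincipal implementation.
        GenericPrincipal principal =
                (GenericPrincipal) request.getUserPrincipal();
        GSSCredential credential = principal.getGssCredential();
        if (credential == null) {
            throw new IllegalStateException(
                "No delegated credential - is the account trusted for delegation?");
        }

        // Subject carrying the client's principal, with the GSSCredential
        // as a private credential.
        Subject subject = new Subject(
                false,
                Collections.singleton(principal),
                Collections.emptySet(),
                Collections.singleton(credential));

        // Everything inside 'action' now runs under an AccessControlContext
        // built for the client's subject; a correctly implemented stack
        // (such as CXF's Negotiate conduit) picks up the delegated credential.
        return Subject.doAs(subject, action);
    }
}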

302 status when copying data to another app in AppEngine

I'm trying to use the "Copy to another app" feature of AppEngine and keep getting an error:
Fetch to http://datastore-admin.moo.appspot.com/_ah/remote_api failed with status 302
This is for a Java app but I followed the instructions on setting up a default Python runtime.
I'm 95% sure it's an authentication issue and the call to remote_api is redirecting to the Google login page. Both apps use Google Apps as the authentication mechanism. I've also tried copying to and from a third app we have which uses Google Accounts for authentication.
Notes:
The user account I log in with is an Owner on all three apps. It's a Google Apps account (if that wasn't obvious).
I have a gmail account that is an Owner on all three apps as well. When I log in to the admin console with it, I don't see the Datastore Admin console at all.
I'm able to use the remote_api just fine from the command-line after I enter my details
Tried with both the Python remote_api built-in and the Java one.
I've found similar questions/blog posts about this, one of which required logging in from a browser, then manually submitting the ACSID cookie you get after that's done. Can't do that here, obviously.
OK, I think I got this working.
I'll refer to the two appIDs as "source" and "dest".
To enable datastore admin (as you know) you need to upload a Python project with the app.yaml and appengine_config.py files as described in the docs.
Either I misread the docs or there is an error. The "appID" in the .yaml should be the app ID you are uploading to in order to enable DS admin.
The other appID in the appengine_config file, specifically this line:
remoteapi_CUSTOM_ENVIRONMENT_AUTHENTICATION = (
    'HTTP_X_APPENGINE_INBOUND_APPID', ['appID'])
should be the appID of the "source", i.e. the app ID of where the data is coming from in the DS copy operation.
I think this line is what allows the source appID to be authenticated as having permissions to write to the "dest" app ID.
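In other words, the corrected file on the "dest" app would look something like this, where 'source-app-id' is a placeholder for the app the data is copied from:

# appengine_config.py, deployed to the "dest" app.
# 'source-app-id' stands in for the appID the data is copied FROM.
remoteapi_CUSTOM_ENVIRONMENT_AUTHENTICATION = (
    'HTTP_X_APPENGINE_INBOUND_APPID', ['source-app-id'])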
So I changed that .py file and uploaded it again to my "dest" app ID. To be safe, I set this dummy Python app as the default version and left it that way.
Then on the source app ID I tried the DS copy again, and all the copy jobs were kicked off OK - so it seems to have fixed it.