AzureAD - oauth2AllowIdTokenImplicitFlow for app-registrations

While attempting to remove the additional configuration on an app registration created by PowerShell, I came across some strange behaviour:
the command az ad app update --id $app.appid --set oauth2AllowIdTokenImplicitFlow='false'
results in an exception, namely "az : Property 'oauth2AllowIdTokenImplicitFlow' not found on root. Send it as an additional property". However, it does apply the value.
Is there no documented way to automate this setting? It can be set via the portal and via the API (fire and forget, as per How to setup oauth2AllowIdTokenImplicitFlow in azure AD application from console?)
I was also expecting it to be part of the permission grants..?

The command you shared will not work, because oauth2AllowIdTokenImplicitFlow is not valid syntax for az ad app update, as per the Microsoft documentation.
The correct syntax is:
az ad app update --id $app.appid --oauth2-allow-implicit-flow false

Microsoft split oauth2AllowIdTokenImplicitFlow out of oauth2AllowImplicitFlow.
To set it reliably from within an Az context, use the Microsoft Graph API instead:
az rest --method PATCH --uri 'https://graph.microsoft.com/v1.0/applications/<Object_Id>' --headers 'Content-Type=application/json' --body '{\"web\":{\"implicitGrantSettings\":{\"enableIdTokenIssuance\":false}}}'
(reference: https://github.com/Azure/azure-cli/issues/10579)
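Since the question originated in PowerShell, here is a minimal sketch of the same PATCH from a PowerShell session. The $app variable, the id query, and the quote escaping are assumptions, not confirmed parts of the original workflow; older CLI versions expose the object id as objectId rather than id.
# Hypothetical sketch: look up the app registration's object id, then PATCH
# the implicit-grant setting via Microsoft Graph.
$objectId = az ad app show --id $app.appId --query id -o tsv
# Backslash-escaped quotes as in the command above; adjust for your shell.
$body = '{\"web\":{\"implicitGrantSettings\":{\"enableIdTokenIssuance\":false}}}'
az rest --method PATCH `
  --uri "https://graph.microsoft.com/v1.0/applications/$objectId" `
  --headers 'Content-Type=application/json' `
  --body $body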


GCP: Remove IAM policy from Service Account using Terraform

I'm creating an App Engine app using the following module: google_app_engine_flexible_app_version.
By default, Google creates a default App Engine service account with roles/editor permissions.
I want to reduce the permissions of my App Engine app.
Therefore, I want to remove the roles/editor permission and add my custom role instead.
I know I can remove it with the gcloud projects remove-iam-policy-binding CLI command.
But I want it to be part of my Terraform plan.
If you are using https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/app_engine_flexible_app_version to create your infrastructure, then you must have seen the following line in it:
role = "roles/compute.networkUser"
This role is used when setting up your infra, and you can adjust it after referring to https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/iam_deny_policy
Note: when setting up the role, please ensure valid permissions are in place for your App Engine app to work properly.
I. Using the Provided Terraform Code as a Template and Tinkering with It
One simple approach I would suggest is to
(1) first set up your infrastructure with the basic Terraform code you have, then (2) update/tinker with your infra to match your expectations, and (3) run terraform refresh and terraform plan to find the differences required to update your code.
The following is not related, but serves as an example.
resource "google_dns_record_set" "default" {
  name         = google_dns_managed_zone.default.dns_name
  managed_zone = google_dns_managed_zone.default.name
  type         = "A"
  ttl          = 300
  rrdatas = [
    google_compute_instance.default.network_interface.0.access_config.0.nat_ip
  ]
}
The above code creates a DNS record using Terraform. After following steps 1, 2 and 3 above, I got the following differences with which to update my code:
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # google_dns_record_set.default will be updated in-place
  ~ resource "google_dns_record_set" "default" {
        id   = "projects/mmterraform03/managedZones/example-zone-googlecloudexample/rrsets/googlecloudexample.com./A"
        name = "googlecloudexample.com."
      ~ ttl  = 360 -> 300
        # (4 unchanged attributes hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
II. Using Terraform Import
The Google Cloud Platform tool gcloud, Terraform itself, and several other open-source tools available today can read your existing infrastructure and write Terraform code for you.
So you can check terraform import or Google's docs - https://cloud.google.com/docs/terraform/resource-management/import
But to use this method, you have to set up your infrastructure first. You either do it completely manually from the Google Console UI, or use Terraform first and then update it.
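As a rough illustration of terraform import (the resource address and IDs below are placeholders, not values from the question; the import ID format follows the provider docs):
# Hypothetical example: bring an existing App Engine flexible version
# under Terraform management.
terraform import google_app_engine_flexible_app_version.myapp apps/my-project/services/default/versions/v1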
As a third (III) option, you can reach out to or hire a Terraform expert to do this task for you, but options I and II work best in many cases.
On a different note, please see https://stackoverflow.com/help/how-to-ask and
https://stackoverflow.com/help/minimal-reproducible-example. Opinion-based and "how/what should I do" questions are usually discouraged on Stack Overflow.
This is one situation where you might consider using google_project_iam_policy.
That could be used to knock out the Editor role, but it will also knock out everything else you don't explicitly list in the policy!
Beware - there is a risk of locking yourself out of your project if you are not sure what you are doing.
Another option would be to use a custom service account.
Use terraform to create the account and apply the desired roles.
Use gcloud app deploy --service-account={custom-sa} to deploy a service to App Engine that uses the custom account, as sketched below.
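A minimal Terraform sketch of that option (the account id, project variable, and custom role name are assumptions):
# Create a dedicated service account and grant it a custom role instead of
# roles/editor (all names are placeholders).
resource "google_service_account" "appengine" {
  account_id   = "appengine-custom"
  display_name = "Custom App Engine service account"
}

resource "google_project_iam_member" "appengine_custom_role" {
  project = var.project_id
  role    = "projects/${var.project_id}/roles/myCustomRole"
  member  = "serviceAccount:${google_service_account.appengine.email}"
}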
But you may still wish to remove the Editor role from the default service account. Given that you already have the gcloud command to do it (gcloud projects remove-iam-policy-binding), you could use the terraform-google-gcloud module to execute that command from Terraform, as in the sketch below.
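A sketch of that approach (the module version, project variable, and the default App Engine service account address are assumptions):
# Run the gcloud command from Terraform via the terraform-google-modules
# "gcloud" module; the binding is removed when the module is applied.
module "remove_default_sa_editor" {
  source  = "terraform-google-modules/gcloud/google"
  version = "~> 3.0"

  create_cmd_entrypoint = "gcloud"
  create_cmd_body       = "projects remove-iam-policy-binding ${var.project_id} --member=serviceAccount:${var.project_id}@appspot.gserviceaccount.com --role=roles/editor"
}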
See also this feature request.

GitLab CI/CD Pipeline - Setting VAULT_NAMESPACE to Root for OIDC

I am implementing single sign-on for HashiCorp Vault (Enterprise) and Azure AD, using OIDC as the auth method and Terraform for my IaC. The entire solution runs off a GitLab CI/CD Pipeline.
The OIDC auth method is required to be enabled in the Vault root namespace. I initially implemented and tested the solution successfully using a dedicated namespace issued by our Vault Admin, with the VAULT_ADDRESS and VAULT_NAMESPACE variables in my .gitlab-ci.yml set in the below format:
VAULT_ADDRESS = "https://my-company-domain.org/"
VAULT_NAMESPACE = "/nnn/nnn"
To reiterate, the OIDC single sign-on authentication works perfectly with my VAULT_NAMESPACE set as depicted above.
However, to set it to the root namespace, I have implemented two changes in my .gitlab-ci.yml file, either of which I expected to produce the desired result, but neither has been successful. By that I mean that after the GitLab pipeline has successfully provisioned my GitLab project code, no login redirection occurs at the Vault login prompt, unlike the expected and previous behaviour.
The two changes I have tried are (see the sketch after this list):
Commenting out the VAULT_NAMESPACE line entirely, i.e. # VAULT_NAMESPACE = "/nnn/nnn"
Setting the VAULT_NAMESPACE variable to an empty string, i.e. VAULT_NAMESPACE = ""
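For illustration, here is the relevant variables block of the .gitlab-ci.yml with both attempts marked (the address and namespace values are the placeholders used above):
variables:
  VAULT_ADDRESS: "https://my-company-domain.org/"
  # Attempt 1: comment out the namespace entirely
  # VAULT_NAMESPACE: "/nnn/nnn"
  # Attempt 2: set it to an empty string
  VAULT_NAMESPACE: ""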
What could I be doing wrong, or how do I achieve the desired result? Is this a change required in my project code, or perhaps one to be made by the Vault Admin?
Meanwhile, a GitLab documentation page on GitLab.com describes VAULT_NAMESPACE as:
VAULT_NAMESPACE - Optional. The Vault Enterprise namespace to use for reading secrets and authentication.
If no namespace is specified, Vault uses the root ("/") namespace.

SonarQube LDAP Authentication seems to load but won't allow login via domain user

I've been trying to set up SonarQube (v4.1) with the LDAP authentication plugin (v1.4), and I just can't get it to authenticate against my domain user. My config is set up as follows:
#########################
# LDAP configuration
#########################
# General Configuration
sonar.security.realm=LDAP
sonar.security.savePassword=true
sonar.security.updateUserAttributes=true
sonar.authenticator.downcase=true
sonar.authenticator.createUsers=true
ldap.authentication=simple
ldap.realm=mydomain.co.uk
ldap.bindDn=CN=USERNAME,OU=developers,DC=mydomain,DC=co,DC=uk
ldap.bindPassword=PASSWORD
# User Configuration
#ldap.user.baseDn=OU=developers,DC=mydomain,DC=co,DC=uk
ldap.user.request=(&(objectClass=user)(sAMAccountName={login}))
ldap.user.realNameAttribute=cn
ldap.user.emailAttribute=mail
# Group Configuration
ldap.group.baseDn=CN=Domain Users,CN=Users,DC=adastra,DC=co,DC=uk
ldap.group.request=(&(objectClass=group)(member={dn}))
and the log outputs the following messages, which seem to say that the LDAP connection is working fine:
2014.01.20 16:12:32 INFO [org.sonar.INFO] Security realm: LDAP
2014.01.20 16:12:32 INFO [o.s.p.l.LdapSettingsManager] Auto discovery mode
2014.01.20 16:12:32 INFO [o.s.p.l.LdapSettingsManager] Detected server: ldap://dc02.mydomain.co.uk:389
2014.01.20 16:12:32 INFO [o.s.p.l.LdapSettingsManager] User mapping: LdapUserMapping{baseDn=dc=mydomain,dc=co,dc=uk, request=(&(objectClass=user)(sAMAccountName={0})), realNameAttribute=cn, emailAttribute=mail}
2014.01.20 16:12:32 INFO [o.s.p.l.LdapSettingsManager] Group mapping: LdapGroupMapping{baseDn=CN=Domain Users,CN=Users,DC=mydomain,DC=co,DC=uk, idAttribute=cn, requiredUserAttributes=[dn], request=(&(objectClass=group)(member={0}))}
2014.01.20 16:12:32 INFO [o.s.p.l.LdapContextFactory] Test LDAP connection on ldap://dc02.mydomain.co.uk:389: OK
2014.01.20 16:12:32 INFO [org.sonar.INFO] Security realm started
But it just doesn't seem to work for my domain user; it only works with a local user. When enabling logging on the wrapper by setting:
wrapper.console.loglevel=DEBUG
I get the following error in the logs, which doesn't really help that much! :)
2014.01.20 17:07:10 ERROR [rails] Error from external users provider:
I just worked through getting the SonarQube LDAP plugin to work with Active Directory myself. Since everyone's network is set up differently, you often can't just copy and paste a configuration. Here is the process I used to figure out the correct configuration at my company:
As stated in the documentation, this configuration goes in the file:
SONARQUBE_HOME/conf/sonar.properties
The following line is required as-is: sonar.security.realm=LDAP. The other lines will differ per company.
It was helpful for me to test the configuration with a GUI tool. I used the Softerra LDAP Browser (the free read-only version of LDAP Administrator). In that LDAP Browser:
Create a new profile.
The Lookup Servers button will help determine ldap.url. You will need to end up with something along the lines of ldap.url=ldap://dc01.mycompany.local:3268. NOTE: as stated in another answer, this may need to be a different port than the one listed on this screen.
The Base DN box can be left blank for now.
For authentication, I just chose the currently logged on user.
The filter can also be left blank for now.
Click Finish and it should display items at the top level of your AD.
F3 toggles the Quick Search bar.
Since you can't connect SonarQube to AD with Anonymous Authentication, you will need to select a user for the SonarQube service to connect as. Search for that user in the Quick Search.
You should find a CN (Common Name) entry. Double-click that to open it up.
Find the distinguishedName field and copy its value to use as your ldap.bindDn
ldap.bindPassword should be that user's password.
That should be enough to let SonarQube start successfully, but it is NOT enough to let it search for the user that is trying to log into your SonarQube web portal. For that, first pick a sample person (such as yourself).
Do another Quick Search for the sample person and open up their CN entry
Take the value of their distinguishedName, remove the "CN={Their Name}," piece, and put that into ldap.user.baseDn
The last piece you need to determine is ldap.user.request. The suggestion from the SonarQube docs for use with AD worked for me: (&(objectClass=user)(sAMAccountName={login})). Let me explain why, in case it does not work for you. The {login} will be replaced by whatever the SonarQube user enters for their username, so that request string (which is called "Filter" in LDAP Browser) essentially says to search for all entries with an objectClass of "user" and an sAMAccountName equal to whatever is typed into the username textbox of your SonarQube web portal. Inside the sample person's info, there should be at least one field called objectClass, and one of them should have the value "user". There should also be a field for sAMAccountName. Use its value in the username textbox of your SonarQube web portal.
To test whether that request string should work for you, do a Directory Search in LDAP Browser (Ctrl+F3). Put your ldap.user.baseDn value in the "Search DN" textbox and your ldap.user.request value in the Filter textbox (be sure to manually replace {login} with your sample username). It should return the CN entry for the sample person.
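Putting the discovered values together, a sketch of the resulting sonar.properties entries (the host, DNs, and password are placeholders, not values from any real directory):
# LDAP settings assembled from the steps above
sonar.security.realm=LDAP
ldap.url=ldap://dc01.mycompany.local:3268
ldap.bindDn=CN=Service User,OU=Service Accounts,DC=mycompany,DC=local
ldap.bindPassword=PASSWORD
ldap.user.baseDn=OU=Developers,DC=mycompany,DC=local
ldap.user.request=(&(objectClass=user)(sAMAccountName={login}))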
Thanks to @aaron, who managed to point me in the right direction! My issue turned out to be a problem with auto-discovery and the forest I was connecting to. According to http://technet.microsoft.com/en-us/library/cc978012.aspx you should use a different port when connecting to a forest, so that the whole forest is searched rather than just the domain you happen to connect to (which I suppose might not be the correct one in auto-discovery mode). In the end, the configuration that worked for me was:
# General Configuration
ldap.realm=mydomain.com
sonar.security.realm=LDAP
sonar.authenticator.createUsers=true
sonar.security.savePassword=true
sonar.security.updateUserAttributes=true
ldap.url=ldap://dc.mydomain.com:3268
# User Configuration
ldap.user.request=(&(objectClass=user)(sAMAccountName={login}))
ldap.user.realNameAttribute=cn
ldap.user.emailAttribute=mail
This is actually quite simple and doesn't require a user account to connect with. It means the plugin is in SIMPLE authentication mode (I can't seem to get it to work in anything else), but that is fine with me as it's an internal-only system.
I am using SonarQube 3.7.3 and have attached my working configuration. I hope it is useful.
# General Configuration
sonar.security.realm=LDAP
sonar.authenticator.createUsers=true
sonar.security.savePassword=true
sonar.security.updateUserAttributes=true
ldap.url=ldap://...
ldap.bindDn=user
ldap.bindPassword=password
# User Configuration
ldap.user.baseDn=ou=People,dc=company,dc=local
ldap.user.request=(&(objectClass=user)(sAMAccountName={login}))
ldap.user.realNameAttribute=cn
ldap.user.emailAttribute=mail
My Fix
I had painstakingly verified my settings, even to the point of using the log file's "User mapping" output line to configure a manual ldapsearch command and verify that my user was being retrieved properly.
For some reason, specifying this setting fixed it for me:
ldap.user.realNameAttribute=cn
Why?
This attribute is supposed to be optional and to default to cn anyway, but it only works for me if I specify it manually. This might be a bug.
Debugging with ldapsearch
ldapsearch lets you bypass the application and query LDAP directly.
I looked in the log file for this line:
INFO web[o.s.p.l.LdapSettingsManager] User mapping: LdapUserMapping{baseDn=DC=my-ad,DC=example,DC=com, request=(&(objectClass=user)(sAMAccountName={0})), realNameAttribute=cn, emailAttribute=mail}
And then built an ldapsearch command like:
ldapsearch -D CN=myldapuser,CN=Users,DC=my-ad,DC=example,DC=com -W -h my-ad.example.com -b "DC=my-ad,DC=example,DC=com" "(&(objectClass=user)(sAMAccountName=myuser))"
-D specifies the bind DN, basically the login username for LDAP
-W means that ldapsearch will prompt you for the password
-h specifies the LDAP host
-b should be the baseDn from the log file line
The last positional parameter is the request value from the log file line, after replacing {0} with a real username.
If you get real user info back, your basic settings are right - which is a hint that something else is going wrong.
http://blogs.msdn.com/b/visualstudioalm/archive/2015/11/13/support-for-active-directory-and-single-sign-on-sso-in-the-sonarqube-ldap-plugin.aspx
With the new v1.5, only one line is required:
# LDAP configuration
sonar.security.realm=LDAP
Using port 3268 did the trick for me. Here is my configuration that works with SonarQube 5.0.1 and Active Directory:
sonar.security.realm=LDAP
sonar.security.savePassword=true
sonar.security.updateUserAttributes=true
sonar.authenticator.createUsers=true
ldap.url=ldap://dc101.office.company.com:3268
ldap.bindDn=CN=Service Account,OU=Windows Service,OU=Accounts,OU=Resources,DC=office,DC=company,DC=com
ldap.bindPassword=PASSWORD
ldap.user.baseDn=DC=office,DC=company,DC=com
ldap.user.request=(&(objectClass=user)(sAMAccountName={login}))
ldap.user.realNameAttribute=cn
ldap.user.emailAttribute=mail
None of the solutions above worked for me, but this did:
# Configuration
sonar.realm=myreal.domain.com
sonar.security.realm=LDAP
sonar.authenticator.createUsers=true
sonar.security.savePassword=true
sonar.security.updateUserAttributes=true
ldap.url=ldap://myreal.domain.com:389
ldap.bindDn=cn=CNUser,dc=domain,dc=com
ldap.bindPassword=password
# User Configuration
ldap.user.baseDn=ou=people,dc=domain,dc=com
ldap.user.request=(&(objectClass=user)(sAMAccountName={login}))
ldap.user.realNameAttribute=cn
ldap.user.emailAttribute=mail
# log what happens
wrapper.console.loglevel=DEBUG
My LDAP server does need authentication, so I can't avoid that. If it doesn't work for you, try not specifying ldap.user.request: it all depends on the configuration of your network's LDAP server.

Plone LDAP/AD authentication encryption

We have a Plone 4.3 site on Debian 7 that we'd like to authenticate against an existing AD controller. Using the excellent plone.app.ldap product we have this working, but the 'manager' username/password are being sent over the wire in plain text.
No doubt this is because we are using the protocol: 'LDAP' and not 'LDAP over SSL', our problem is how to implement 'LDAP over SSL' on the AD server in a way that works with Plone. Has anyone had any experience configuring the AD machine to accept these types of requests?
From what I understand it needs to be a new service on a new port, similar to https (i.e. not TLS), but I don't know enough about AD to know what to ask the AD admin mob.
EDIT: following the comment from @Martijn Pieters, I add that if we set the 'manager dn usage' to 'not always', we get this error in the event log:
OPERATIONS_ERROR: {'info': '000004DC: LdapErr: DSID-0C0906E8, comment: In order to perform this operation a successful bind must be completed on the connection., data 0, v1db1', 'desc': 'Operations error'}
Thanks for any ideas.
You could set up the LDAP connection to use a certificate instead of a password.
Ross Patterson outlines the procedure, but the mailing list post he links to is gone. The same thread is, however, still available on GMane.

302 status when copying data to another app in AppEngine

I'm trying to use the "Copy to another app" feature of AppEngine and keep getting an error:
Fetch to http://datastore-admin.moo.appspot.com/_ah/remote_api failed with status 302
This is for a Java app but I followed the instructions on setting up a default Python runtime.
I'm 95% sure it's an authentication issue and the call to remote_api is redirecting to the Google login page. Both apps use Google Apps as the authentication mechanism. I've also tried copying to and from a third app we have which uses Google Accounts for authentication.
Notes:
The user account I log in with is an Owner on all three apps. It's a Google Apps account (if that wasn't obvious).
I have a Gmail account that is an Owner on all three apps as well. When I log in to the admin console with it, I don't see the Datastore Admin console at all when I click it.
I'm able to use the remote_api just fine from the command line after I enter my details.
Tried with both the Python remote_api built-in and the Java one.
I've found similar questions/blog posts about this, one of which required logging in from a browser, then manually submitting the ACSID cookie you get after that's done. Can't do that here, obviously.
OK, I think I got this working.
I'll refer to the two appIDs as "source" and "dest".
To enable datastore admin (as you know) you need to upload a Python project with the app.yaml and appengine_config.py files as described in the docs.
Either I misread the docs or there is an error: the "appID" in the .yaml should be the app ID you are uploading to in order to enable DS Admin.
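For reference, a sketch of the app.yaml uploaded to the "dest" app, along the lines of the Datastore Admin docs (the application id and version are placeholders):
application: dest-app-id
version: 1
runtime: python27
api_version: 1
threadsafe: true

builtins:
- remote_api: on
- datastore_admin: on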
The other appID goes in the appengine_config file, specifically this line:
remoteapi_CUSTOM_ENVIRONMENT_AUTHENTICATION = (
    'HTTP_X_APPENGINE_INBOUND_APPID', ['appID'])
It should be the appID of the "source", i.e. the app ID of where the data is coming from in the DS copy operation.
I think this line is what allows the source appID to be authenticated as having permissions to write to the "dest" app ID.
So I changed that .py and uploaded it again to my "dest" app ID. To be sure, I made this dummy Python app the default version and left it at that.
Then on the source app ID I tried the DS copy again, and all the copy jobs kicked off OK - so that seems to have fixed it.
