GCP: Remove IAM policy from Service Account using Terraform

I'm creating an App Engine app using the following resource: google_app_engine_flexible_app_version.
By default, Google creates a Default App Engine Service Account with roles/editor permissions.
I want to reduce the permissions of my App Engine app.
Therefore, I want to remove the roles/editor permission and add my custom role instead.
In order to remove it, I know I can use the gcloud projects remove-iam-policy-binding CLI command.
But I want it to be part of my terraform plan.

If you are using https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/app_engine_flexible_app_version to create your infrastructure, then you must have seen the following line in it:
role = "roles/compute.networkUser"
This role is used when setting up your infra, and you can tinker with it afterwards; for reference, see https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/iam_deny_policy
Note: when setting up roles, please ensure valid permissions are in place for your App Engine app to work properly.
I. Using the provided Terraform code as a template and tinkering with it
One simple hack I would suggest is to
(1) first set up your infrastructure with the basic Terraform code you have, then (2) update/tinker with your infra as per your expectations, and (3) run terraform refresh and terraform plan to find the differences required to update your code.
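In practice, steps (2) and (3) come down to two commands (a minimal sketch; run them from the directory holding your Terraform code):

# pull the current state of the real infrastructure into Terraform state
terraform refresh
# diff your .tf code against the refreshed state
terraform plan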
The example below is unrelated to App Engine, but it illustrates the approach.
resource "google_dns_record_set" "default" {
name = google_dns_managed_zone.default.dns_name
managed_zone = google_dns_managed_zone.default.name
type = "A"
ttl = 300
rrdatas = [
google_compute_instance.default.network_interface.0.access_config.0.nat_ip
]
}
Above is the code for creating a DNS record using Terraform. After following steps 1, 2 & 3 above, I get the following differences to update my code:
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # google_dns_record_set.default will be updated in-place
  ~ resource "google_dns_record_set" "default" {
        id   = "projects/mmterraform03/managedZones/example-zone-googlecloudexample/rrsets/googlecloudexample.com./A"
        name = "googlecloudexample.com."
      ~ ttl  = 360 -> 300
        # (4 unchanged attributes hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
II. Using Terraform Import
Google Cloud Platform's gcloud tool, Terraform itself, and several other open-source tools available today can read your existing infrastructure and write Terraform code for you.
So you can check terraform import or Google's docs: https://cloud.google.com/docs/terraform/resource-management/import
But to use this method, you have to set up your infrastructure first: either do it completely manually from the Google Console UI, or use Terraform first and then update it.
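As an illustration, importing the DNS record from the earlier example into Terraform state would look roughly like this (a sketch; the resource address and ID format follow the plan output shown above):

# adopt the existing record into state under the matching resource block
terraform import google_dns_record_set.default "projects/mmterraform03/managedZones/example-zone-googlecloudexample/rrsets/googlecloudexample.com./A"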
As a third option, you can reach out to or hire a Terraform expert to do this task for you, but options I and II work best for many cases.
On a different note, please see https://stackoverflow.com/help/how-to-ask and
https://stackoverflow.com/help/minimal-reproducible-example. Opinion-based and open-ended how/what-to-do questions are usually discouraged on Stack Overflow.

This is one situation where you might consider using google_project_iam_policy.
That could be used to knock out the Editor role, but it will knock out everything else you don't explicitly list in the policy!
Beware - There is a risk of locking yourself out of your project if you are not sure what you are doing.
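For reference, the authoritative shape of that looks roughly like this (a minimal sketch, not a drop-in: any binding you omit is removed from the project, so you must list every role/member pair the project still needs, including your own admin access; the project ID, custom role, and member below are placeholders):

# WARNING: google_project_iam_policy is authoritative - it replaces the
# project's entire IAM policy with exactly the bindings listed here.
data "google_iam_policy" "project" {
  binding {
    # hypothetical custom role replacing roles/editor for the default SA
    role = "projects/my-project/roles/myAppEngineRole"
    members = [
      "serviceAccount:my-project@appspot.gserviceaccount.com",
    ]
  }
  # ...repeat a binding block for EVERY other role/member pair the
  # project needs, or those grants will be wiped out on apply.
}

resource "google_project_iam_policy" "project" {
  project     = "my-project"
  policy_data = data.google_iam_policy.project.policy_data
}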
Another option would be to use a custom service account.
Use terraform to create the account and apply the desired roles.
Use gcloud app deploy --service-account={custom-sa} to deploy a service to app engine that uses the custom account.
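A sketch of that account setup, assuming a hypothetical custom role and a placeholder project ID (tailor the permission list to what your app actually needs):

# custom service account for the App Engine service
resource "google_service_account" "app" {
  account_id   = "my-appengine-sa" # hypothetical name
  display_name = "Custom App Engine service account"
}

# hypothetical custom role with only the permissions the app needs
resource "google_project_iam_custom_role" "app" {
  role_id     = "myAppEngineRole"
  title       = "My App Engine role"
  permissions = ["logging.logEntries.create"] # placeholder permission list
}

# grant the custom role to the custom service account
resource "google_project_iam_member" "app" {
  project = "my-project"
  role    = google_project_iam_custom_role.app.id
  member  = "serviceAccount:${google_service_account.app.email}"
}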
But you may still wish to remove the Editor role from the default service account. Given that you already have the gcloud command to do it, gcloud projects remove-iam-policy-binding, you could use the terraform-google-gcloud module to execute the command from Terraform.
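A hedged sketch of that approach, using the terraform-google-modules/gcloud module's command hooks (the project ID is a placeholder; note Terraform only runs the command here, it won't track drift on the binding):

module "remove_default_editor" {
  source  = "terraform-google-modules/gcloud/google"
  version = "~> 3.0"

  platform = "linux"

  # strip roles/editor from the default App Engine service account on apply
  create_cmd_entrypoint = "gcloud"
  create_cmd_body       = "projects remove-iam-policy-binding my-project --member='serviceAccount:my-project@appspot.gserviceaccount.com' --role='roles/editor'"

  # put the binding back on destroy so the change is reversible
  destroy_cmd_entrypoint = "gcloud"
  destroy_cmd_body       = "projects add-iam-policy-binding my-project --member='serviceAccount:my-project@appspot.gserviceaccount.com' --role='roles/editor'"
}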
See also this feature request.

Related

How do you resolve an "Access Denied" error when invoking `image_uris.retrieve()` in AWS Sagemaker JumpStart?

I am working in a SageMaker environment that is locked down. For example, my user account is prevented from creating S3 buckets. But, I can successfully run vanilla ML training jobs by passing in role=get_execution_role to an instance of the Estimator class when using an out-of-the-box algorithm such as XGBoost.
Now, I'm trying to use an algorithm (LightGBM) that is only available via the JumpStart feature in SageMaker, but I can't get it to work. When I try to retrieve an image URI via image_uris.retrieve(), it returns the following error:
ClientError: An error occurred (AccessDenied) when calling the GetObject operation: Access Denied.
This makes some sense to me if my user permissions are being used when creating an object. But what I want to do is specify another role - like the one returned from get_execution_role - to perform these tasks.
Is that possible? Is there another work-around available? How can I see which role is being used?
Thanks,
When I encountered this issue, it was a permissions issue with a bucket that had changed.
In the SageMaker Python SDK source code, there is a cache located in an AWS-owned bucket, jumpstart-cache-prod-{region}, along with a manifest.json that translates the ECR path for the image for you.
If you look at the stack trace, it could be erroring out at the code that is looking for the manifest.
One place to look is whether there are new restrictions placed in IAM. Included here is the minimum policy you need to access JumpStart (pretrained) models:
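A minimal sketch of such a policy, assuming read-only access to the regional JumpStart cache bucket (named as above) is all that is required - the exact minimum may differ per the current AWS docs:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::jumpstart-cache-prod-*",
        "arn:aws:s3:::jumpstart-cache-prod-*/*"
      ]
    }
  ]
}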

What's the recommended way to stop the current version of app engine using gcloud?

I want to automatically start/stop our app engine services by running a bash script.
I know it's easy to run gcloud app versions start/stop, but I don't want to manually check the version number. I want to dynamically pass the version that is serving 100% traffic to gcloud and tell it to stop.
On the flip side, I also want to tell gcloud to start the most recently deployed version.
What's the recommended way to do this?
Thanks!
One way to do that is to use gcloud's keys and flags: projections, --format, --filter. To read more directly from the terminal, use gcloud topic; for example:
gcloud topic projections
In order to see what fields/properties are available use --format=flattened, like:
gcloud app services list --format=flattened
For the sake of simplicity, I will leave out everything but gcloud.
for SERVICE in $(gcloud app services list --format='table[no-heading](id)'); do
  echo "for service $SERVICE :"
  # take the last listed version as the most recently deployed one
  RECENT=$(gcloud app versions list --format='table[no-heading](id)' --filter="service=$SERVICE" | tail -n1)
  echo 'y' | gcloud app versions start $RECENT
  # every other version of this service that is still serving
  VERSIONS=$(gcloud app versions list --format='table[no-heading](id)' --filter="service=$SERVICE AND version.servingStatus=SERVING AND NOT id=$RECENT" | tr '\n' ' ')
  echo 'y' | gcloud app versions stop $VERSIONS
done
'table[no-heading](id)' outputs a table without a heading, which is set in brackets, and a single column with IDs, which is set in parentheses.
--filter="service=$SERVICE AND version.servingStatus=SERVING AND NOT id=$RECENT" will only show versions from indicated service that are serving, except the one indicated by RECENT.
Additionally, if you would want to use dates for filtering:
gcloud app versions list --format='table(id, version.servingStatus, version.createTime.date(format="%s"))' --filter="service=default" --sort-by="~version.createTime"
version.createTime.date(format="%s") is a function date converting version.createTime.date into the number of seconds since the Epoch.
%s comes from strftime(3) and returns dates in Epoch format which is easier to understand and compare.
--sort-by="~version.createTime"sorts by creation date and because of ~ in descending order.
One approach is to use the --stop-previous-version and/or --promote options when deploying with gcloud app deploy (they should be the default if I interpret the docs correctly, unless you use --no-stop-previous-version and/or --no-promote):
--promote
Promote the deployed version to receive all traffic. Overrides the
default app/promote_by_default property value for this command
invocation. Use --no-promote to disable.
--stop-previous-version
Stop the previously running version when deploying a new version that
receives all traffic. Overrides the default
app/stop_previous_version property value for this command
invocation. Use --no-stop-previous-version to disable.
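So a deploy that explicitly promotes the new version and stops the previous one would look like this (a sketch; app.yaml stands in for your service's config):

# promote the new version and stop the one it replaces
gcloud app deploy app.yaml --promote --stop-previous-version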
But, if you're using the standard environment and dynamic scaling, you should be aware that if the previous version handles a lot of traffic there may be service degradation/interruptions during the switch (it may take a while for the GAE autoscaler to determine how many new-version instances it needs to spin up to handle that traffic). Use traffic migration or splitting when switching to a new default version; you can perform these programmatically. This is not applicable to the flex environment, which doesn't support traffic splitting.
Also potentially of interest: GAE shutdown or restart all the active instances of a service/app
You can only control which deployed version(s) traffic is routed to by default; you can't really stop all traffic to a deployed version, as it can always be reached via targeted routing.
BTW, the gcloud app versions [start|stop] commands are only applicable to manually scaled services:
It may only be used if the scaling module for your service has been
set to manual.
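Given that, it may be worth checking a version's scaling mode before scripting start/stop against it (a sketch; $SERVICE and $RECENT as in the loop above - the field comes back empty for auto-scaled versions):

# non-empty output means the version is manually scaled
gcloud app versions describe "$RECENT" --service="$SERVICE" --format='value(manualScaling.instances)'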

WSO2 Identity Server XML config of service providers

My company is using WSO2 IS version 5.2. We have implemented it clustered with 1 manager node and 3 worker nodes. We do not use multiple tenants. We are implementing a SAML approach to authentication. Our first implementation was in a development environment which included quite a bit of manual (UI based) configuration. The following was done using the management console:
adding custom claims
adding service providers (we have 3 currently)
assigning custom claims to SPs
configure the resident IdP
We now must set up and configure 50 more development, QA and UAT environments. We would like to be able to do this entirely through XML configuration, with no human data entry. Is there a specific resource that can walk me through the above 4 items? Note: we have determined how to add our own custom claims through XML config, so item #1 is no longer an issue, but I included it for reference. I am really mostly interested in items 2, 3 and 4.
We did find the following topic in the docs:
https://docs.wso2.com/display/IS520/Configuring+a+SP+and+IdP+Using+Configuration+Files
However, the above link does not go far enough to explain how to map custom claims to SPs. We also found this, which asks a very similar question but gives only part of what we are looking for.
Thanks for any assistance.
You could set up a basic environment and copy the database from the directory conf/repository/database.
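For item #2 specifically, the file-based SP registration from the linked doc lives in <IS_HOME>/repository/conf/identity/sso-idp-config.xml; a hedged sketch (element names per the IS 5.x docs; the issuer and URLs are placeholders):

<SSOIdentityProviderConfig>
    <ServiceProviders>
        <ServiceProvider>
            <!-- placeholder issuer; must match the SP's SAML issuer -->
            <Issuer>myapp</Issuer>
            <AssertionConsumerServiceURLs>
                <AssertionConsumerServiceURL>https://myapp.example.com/saml/acs</AssertionConsumerServiceURL>
            </AssertionConsumerServiceURLs>
            <DefaultAssertionConsumerServiceURL>https://myapp.example.com/saml/acs</DefaultAssertionConsumerServiceURL>
            <!-- needed so claims are sent in the assertion -->
            <EnableAttributeProfile>true</EnableAttributeProfile>
            <IncludeAttributeByDefault>true</IncludeAttributeByDefault>
        </ServiceProvider>
    </ServiceProviders>
</SSOIdentityProviderConfig>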

devappserver2, remote_api, and --default_partition

To access a remote datastore locally using the original dev_appserver I would set --default_partition=s as mentioned here
In March 2013 Google made devappserver2 the default development server, and it does not support --default_partition resulting in the original, dreaded:
BadRequestError: app s~appname cannot access app dev~appname's data
It appears that the first few requests are served correctly with
os.environ["APPLICATION_ID"] == 's~appname'
Then a subsequent request results in a call to /_ah/warmup and then
os.environ["APPLICATION_ID"] == 'dev~appname'
The docs specifically mention related topics but appear geared to dev_appserver here
Warning! Do not get the App ID from the environment variable. The development server simulates the production App Engine service. One way in which it does this is to prepend a string (dev~) to the APPLICATION_ID environment variable, which is similar to the string prepended in production for applications using the High Replication Datastore. You can modify this behavior with the --default_partition flag, choosing a value of "" to match the master-slave option in production. Google recommends always getting the application ID using the get_application_id() method, and never using the APPLICATION_ID environment variable.
You can do the following dirty little trick:
import os
from google.appengine.datastore.entity_pb import Reference

DEV = os.environ['SERVER_SOFTWARE'].startswith('Development')

def myApp(*args):
    # report the production-style partition (s~) instead of dev~
    return os.environ['APPLICATION_ID'].replace("dev~", "s~")

if DEV:
    # monkey-patch datastore References so keys use the s~ partition
    Reference.app = myApp

302 status when copying data to another app in AppEngine

I'm trying to use the "Copy to another app" feature of AppEngine and keep getting an error:
Fetch to http://datastore-admin.moo.appspot.com/_ah/remote_api failed with status 302
This is for a Java app but I followed the instructions on setting up a default Python runtime.
I'm 95% sure it's an authentication issue and the call to remote_api is redirecting to the Google login page. Both apps use Google Apps as the authentication mechanism. I've also tried copying to and from a third app we have which uses Google Accounts for authentication.
Notes:
The user account I log in with is an Owner on all three apps. It's a Google Apps account (if that wasn't obvious).
I have a gmail account that is an Owner on all three apps as well. When I log in to the admin console with it, I don't see the datastore admin console at all when I click it.
I'm able to use the remote_api just fine from the command-line after I enter my details
Tried with both the Python remote_api built-in and the Java one.
I've found similar questions/blog posts about this, one of which required logging in from a browser, then manually submitting the ACSID cookie you get after that's done. Can't do that here, obviously.
OK, I think I got this working.
I'll refer to the two appIDs as "source" and "dest".
To enable datastore admin (as you know) you need to upload a Python project with the app.yaml and appengine_config.py files as described in the docs.
Either I misread the docs or there is an error: the "appID" in the .yaml should be the app ID you are uploading to, to enable DS admin.
The other appID goes in the appengine_config file, specifically this line:
remoteapi_CUSTOM_ENVIRONMENT_AUTHENTICATION = (
    'HTTP_X_APPENGINE_INBOUND_APPID', ['appID'])
It should be the appID of the "source", i.e. the app ID of where the data is coming from in the DS copy operation.
I think this line is what allows the source appID to be authenticated as having permissions to write to the "dest" app ID.
So, I changed that .py and uploaded it again to my "dest" app ID. To be sure, I made this dummy Python app the default version and left it at that.
Then on the source app ID I tried the DS copy again, and all the copy jobs were kicked off OK - so it seems to have fixed it.
