Drive account associated with Appengine Service account - google-app-engine

I recently noticed that there is a Drive account related to my App Engine app's service account, where I can push/read files using Python code on App Engine with my service account as the user.
My questions are:
1.) Is the Drive account available by default as one per GCP application or one per service account (since I can create multiple service accounts)?
2.) What is the quota on the Drive account available in this case?
3.) How can I access the Drive account associated with a service account from the web UI (as we do with a user account)?
4.) Can you provide any additional information on how this Drive account is similar to or different from general Drive accounts?
Thanks in advance.

I have been experimenting a little with this API.
1) The Drive account is global to the project.
2) It's not clear whether there is a quota, but it looks like a regular user, so you should expect the same limits as for your business accounts.
3) You can't. What I do is give the account Edit access to a folder I own, so that it can generate new files and update existing ones.
4) It looks like a regular user, but it's intended to be used by machines; there is no way to set a password AFAIK.
However, I must warn you that there is currently a problem when trying to access the Drive API with those service accounts, because they don't support scoping, which is needed for the Drive API. See: Insufficient Permission with Appengine Flex service account to access Drive folder.

Yes, there is a Drive account associated with every service account, and its limit is the same as for a normal Gmail user.
But you can't browse its Drive content in a GUI the way you can with a normal Gmail user.
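
For illustration, here is a minimal Python sketch of the folder-sharing approach described above: a human user shares a folder with the service account's email address, and the service account then creates files inside it. The key file path, folder ID and file name are placeholders; this uses a downloaded key file rather than the built-in App Engine identity, which sidesteps the scoping caveat mentioned above.

# Minimal sketch, assuming a downloaded key file and a Drive folder that a
# human user has shared with the service account's email ("Editor" access).
from google.oauth2 import service_account
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload

SCOPES = ["https://www.googleapis.com/auth/drive.file"]
FOLDER_ID = "YOUR_SHARED_FOLDER_ID"  # hypothetical: ID of the shared folder

creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES)
drive = build("drive", "v3", credentials=creds)

# Create the file inside the shared folder so the human owner sees it in the web UI.
media = MediaFileUpload("report.csv", mimetype="text/csv")
created = drive.files().create(
    body={"name": "report.csv", "parents": [FOLDER_ID]},
    media_body=media,
    fields="id").execute()
print("Uploaded file id:", created["id"])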

How to share access to many GCP Cloud Storage Buckets from many regional projects

What is the best way of sharing a bucket between a multi-region App Engine app's projects?
I am enhancing an App Engine project to be multi-regional, which means having many projects, each set to one region. The existing app has per-user buckets for uploading files, which are created by the back-end as needed, with resumable/signed URLs etc. used for the uploads from the user. The permissions are all left at the defaults, using the API call storage.createBucket(bucketName);. The "App Engine Flexible service account" has full access by default, and I've also granted read access to another service account I made for a Cloud Function which processes the uploaded files.
Now I've come to need multiple projects accessing the same user-buckets (depending on how I balance the regions), so I need to add permissions for the additional region-projects' two service accounts to create and access these buckets.
I know I could grant access specifically (like this answer https://stackoverflow.com/a/20731155/209288), but as I will have N users and R regions, I would need to add N x R permissions, and would need to back-fill the new permissions each time a new region is added (which would not be often, but will happen). I want new user buckets to be instantly usable from all regions, and new region apps to be able to access all previously existing user buckets.
So I was hoping there's some kind of group functionality I can use to grant group access to all the buckets on creation, and then assign the region apps' service accounts to that group. That way the matrix of access is established instantly via the group.
I found there is something for human users: https://cloud.google.com/storage/docs/collaboration#group but I'm not sure if this is appropriate for service accounts.
I looked at IAM access conditions, but they seem to only constrain the resources, not the "calling" users.
Another worry I have is that the existing buckets are all in one region/project (just the region I was using when the app was single-region), and future buckets will also be created from whichever region/project the user happens to be served from. If I shut down a region/project I don't want to lose those buckets.
I wondered if I should have a "host" project for these shared buckets, and perhaps a project-level setting in that project could unify the access from the other regional projects? (This is similar to Guillome's idea from this related question of mine.)
I have tried one solution which seems fairly simple: using project-level permissions instead of bucket-level ones.
I.e., I add the "App Engine Default Service Account" from each regional project to the IAM permissions of the Cloud Storage project (be it the original region project or the host project, if I go that route).
In my region-project build script I add:
gcloud projects add-iam-policy-binding $GCS_IMPORT_PROJECT_ID --member="serviceAccount:$PROJECT_ID@appspot.gserviceaccount.com" --role="roles/storage.admin"
Where GCS_IMPORT_PROJECT_ID is the original or host project, and PROJECT_ID is the new region project being created.
The drawback of this is that it allows the other project to access all buckets: not just the user-upload buckets, but also infrastructure buckets like Cloud Build artefacts and my app's default and config buckets. This isn't a security issue as it's one app, but it could possibly result in a mix-up down the line.
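
If project-wide access feels too broad, one narrower possibility (a sketch only, not tested against the setup above) is the group idea from the question: grant a single Google group on every user bucket as it is created, and then manage the regional service accounts as members of that group. The group address, role and bucket handling below are assumptions.

# Sketch: bucket-level IAM grant at creation time, assuming a Google group
# ("gcs-upload-writers@example.com", hypothetical) that contains the regional
# App Engine service accounts.
from google.cloud import storage

UPLOAD_GROUP = "gcs-upload-writers@example.com"  # hypothetical group

client = storage.Client()

def create_user_bucket(bucket_name):
    bucket = client.create_bucket(bucket_name)
    # Add one binding per bucket; new region service accounts only need to be
    # added to the group, not to every existing bucket.
    policy = bucket.get_iam_policy(requested_policy_version=3)
    policy.bindings.append({
        "role": "roles/storage.objectAdmin",
        "members": {"group:" + UPLOAD_GROUP},
    })
    bucket.set_iam_policy(policy)
    return bucket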

Using Google Pub/Sub Java client library without adding the GOOGLE_APPLICATION_CREDENTIALS environment variable for authentication

Pub/Sub is really easy to use from my local workstation. I set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the path to my .json authentication object.
But what if you need to interact with multiple different Pub/Sub projects? This seems like an odd way to do authentication. Shouldn't there be a way to pass the .json object in from the Java code?
How can I use the client libraries without setting the system's environment variable?
Thanks
You can grant access to Pub/Sub in different projects using a single service account and set it as the env variable. See Pub/Sub Access Control for more details.
A service account JSON key file lets you identify as a service account on GCP. This service account is the equivalent of a user account, but for the app (no human user, only a machine).
Thereby, if your app needs to interact with several topics from different projects, you simply have to grant the service account's email the correct role on each topic/project.
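
To answer the "pass the .json object in from the code" part: the client libraries accept explicit credentials instead of the environment variable. A minimal Python sketch follows (the Java client exposes an analogous setCredentialsProvider on its builders); the key path, project and topic names are placeholders.

# Sketch: load the JSON key explicitly instead of relying on
# GOOGLE_APPLICATION_CREDENTIALS. Names below are placeholders.
from google.oauth2 import service_account
from google.cloud import pubsub_v1

creds = service_account.Credentials.from_service_account_file("sa-key.json")

# The same credentials can publish to topics in any project where this
# service account has been granted roles/pubsub.publisher.
publisher = pubsub_v1.PublisherClient(credentials=creds)
topic_path = publisher.topic_path("other-project-id", "my-topic")
future = publisher.publish(topic_path, b"hello")
print("Published message id:", future.result())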
Another important thing: service account key files are useful for apps outside GCP; otherwise, using them is a bad idea. Let me explain. The service account key file is a secret that you have to keep and store securely. And as a file, it's easy to copy, to send by email, and even to commit to a public git repository... In addition, it's recommended to rotate this key at least every 90 days for security reasons.
So, for all these reasons and the security difficulties that service account key files represent, I don't recommend you use them.
In your local environment, use your user account (simply use the gcloud SDK and run gcloud auth application-default login). You aren't a machine (I hope!!).
With GCP components, use the component's identity (attach a service account when you deploy a service, create a VM, ..., and grant the correct roles to this service account) without generating a JSON key.
With external apps (another cloud provider, on-premises, Apigee, a CI/CD pipeline, ...), generate a JSON key file for your service account; you can't avoid them in this case.
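
For the second case (code running on GCP), a short sketch of what "no key file" looks like in practice: Application Default Credentials pick up whatever service account is attached to the App Engine service or VM, so nothing secret ships with the code.

# Sketch: on GCP, the attached service account is discovered automatically.
import google.auth
from google.cloud import pubsub_v1

creds, project_id = google.auth.default()
publisher = pubsub_v1.PublisherClient(credentials=creds)  # or simply PublisherClient()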

Upload file to google drive using cron and google app engine

I studied and could successfully replicate the quickstart.py example at https://developers.google.com/drive/web/quickstart/quickstart-python to upload a file to my Google Drive using the command line.
However, I wish to write an app that does the same, but through a cron job, i.e. uploads a file every day at 8am, say, without the need to authenticate each time. Are there sample code/examples that I can look at to implement the OAuth steps without the command-line intervention?
Thanks!
You can use your App Engine app's built-in Service Account to authorize requests to the Google Drive API.
https://cloud.google.com/appengine/docs/python/appidentity/
https://developers.google.com/identity/protocols/OAuth2ServiceAccount
Your app will need to have an embedded Refresh Token, or some way of fetching it from a secure server. The Refresh Token acts a bit like a stored username/password, albeit with constrained access, so you need to consider the security implications. For example, since it's only uploading, it will only need the drive.file scope, so your corpus of Drive files remains inaccessible.
If you're happy with the security implications, then the steps you need are described in How do I authorise an app (web or installed) without user intervention? (canonical?)
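
As a rough illustration of that refresh-token approach, here is a hedged Python sketch of the upload step a cron-triggered handler could run; the refresh token, client ID/secret and file name are placeholders and should come from a secure store, not source code.

# Sketch: build Drive credentials from a stored refresh token (drive.file scope
# only), then upload a file. All values are placeholders.
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload

creds = Credentials(
    token=None,
    refresh_token="STORED_REFRESH_TOKEN",
    token_uri="https://oauth2.googleapis.com/token",
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    scopes=["https://www.googleapis.com/auth/drive.file"],
)
drive = build("drive", "v3", credentials=creds)

def daily_upload():
    # Call this from the handler mapped to your 8am cron entry.
    media = MediaFileUpload("daily-report.csv", mimetype="text/csv")
    drive.files().create(body={"name": "daily-report.csv"},
                         media_body=media, fields="id").execute()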

Google App Engine - verifying a domain that I own but is not currently hosted by a domain registrar

I am asking this question here because it is my understanding that this is the official GAE support site.
The background:
I had a website, mysite.com, hosted on somehostingsite.net. Recently, I ported the website to GAE, mysite.appspot.com. This occurred just as the renewal fee from somehostingsite.net was coming due, so I cancelled the somehostingsite.net account. I haven't transferred my domain to another registrar.
The problem:
I want to set up my domain with Google Apps so that I can have a URL like myapp.mysite.com. So, the verification process to verify my ownership of the domain name presupposes mysite is currently up and running. Mine isn't right now. However, my domain is registered at OpenSRS, but they do not provide DNS support.
The main issue: mysite.appspot.com is free, which is why I ported the website to it.
What is the best way to proceed to get my domain set up with GAE given my current circumstance?
"So, the verification process to verify my ownership of the domain name presupposes mysite is currently up and running."
Not quite: all you need is DNS for your domain up and running, and then you can create a DNS TXT record to verify your domain. See: http://support.google.com/a/bin/answer.py?hl=en&answer=60216
So it sounds to me like you need to get (configurable) DNS hosting set up somewhere. You're going to need that anyway, because you need to set up a CNAME record in order to serve your App Engine app off of your domain.
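
Concretely, the two records would look something like the following (the TXT value comes from the verification wizard and is a placeholder here; ghs.googlehosted.com is the usual CNAME target for serving App Engine on a custom domain):

; hypothetical zone entries at your DNS host
mysite.com.        IN  TXT    "google-site-verification=PLACEHOLDER_TOKEN"
www.mysite.com.    IN  CNAME  ghs.googlehosted.com.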

Restricting files from Google Cloud Storage to the users that have authenticated with my Google App Engine app?

I have a GAE application with a database of users.
When one of the users tries to download, say, the file myapplication.appspot.com/somefile.jpg, I would:
check on the GAE database whether he is allowed to
if he is allowed, redirect him to a cloud storage bucket of mine from where he can download somefile.jpg
if he is not allowed, return him a 404 error code, and do some magic so that directly trying to download somefile.jpg from the cloud storage bucket does not complete.
Now what’s unclear to me is how to control access to somefile.jpg. How can I restrict the download to this scope of users?
PS: using something else than Google Storage is not an option (for those of you guys who thought about blobstore).
You don't need to restrict access on a per-user basis; you can restrict access on a per-application (Google App Engine app) basis.
Every application has a service account; what you can do is set an ACL on the bucket to allow access to the application's service account.
Now all you need is to write a handler that accesses Google Storage and returns the data to the user.
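
A minimal sketch of such a handler follows (Flask and the google-cloud-storage client are used for illustration; the bucket name, the header used to identify the user, and the entitlement lookup are placeholders for the app's real datastore check). The bucket itself stays private, so users can't fetch somefile.jpg directly.

# Sketch: app checks its own user database, then streams the object back.
import io
from flask import Flask, abort, request, send_file
from google.cloud import storage

app = Flask(__name__)
gcs = storage.Client()
BUCKET = "my-private-bucket"  # hypothetical

def user_may_download(user_id, filename):
    # Placeholder for the GAE database check described in the question.
    entitlements = {"alice": {"somefile.jpg"}}  # stand-in data
    return filename in entitlements.get(user_id, set())

@app.route("/files/<path:filename>")
def serve_file(filename):
    user_id = request.headers.get("X-User")  # stand-in for the app's real auth
    if not user_may_download(user_id, filename):
        abort(404)
    blob = gcs.bucket(BUCKET).blob(filename)
    if not blob.exists():
        abort(404)
    return send_file(io.BytesIO(blob.download_as_bytes()),
                     download_name=filename)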
As Shay noted, every App Engine application automatically has an internal account associated with it, called the “service account”. Normally, the service account name follows the pattern “your-app-id@appspot.gserviceaccount.com”; however, you can confirm the exact name by visiting the App Engine Administration Console, then clicking on your app name, followed by the “Application Settings” link, at which point you should see your service account name.
Once you find your service account name, add it to the “Team” subpage on the APIs console with “Can edit” permissions. This is even easier than updating the bucket ACL because you don't have to change any ACLs, however, bear in mind this applies to all buckets in your project. If you'd like to restrict your app to only have access to a subset of the buckets owned by your project then you'll want to update the per-bucket ACL(s), as Shay proposed.
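
If you do go the per-bucket route instead, the grant itself is small; for example, something like the following (the service account address and bucket name are illustrative):

gsutil acl ch -u your-app-id@appspot.gserviceaccount.com:READ gs://your-private-bucket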
