Granting directory level permissions for Google Cloud Storage REST APIs

My aim is to be able to call the Google Storage REST APIs with an OAuth token granting an authenticated Google user the read/write permissions on a directory called "directoryName" inside my storage bucket.
So far, I have successfully managed to use the Storage APIs after adding the user to the ACL for the bucket. However, I do not want to grant the user READ or WRITE permissions on the complete bucket, just on the user's directory inside the bucket, e.g. bucket/directoryName.
For example, I want to be able to call storage.objects.list for a directory inside the bucket without granting the user permissions on the whole bucket, only on that directory (and its subdirectories).
What I've tried so far: when I call the GET method on https://www.googleapis.com/storage/v1/b/bucket/o?fields=kind%2Citems%28name%29&maxResults=150&prefix=directoryName with the user added to the directory's ACL (as Owner), I get the error response "code":403,"message":"myEmail@gmail.com does not have storage.objects.list access to myBucketName.appspot.com."
Is it possible to provide directory level permissions with Google Cloud Storage and list the contents of that directory only?

As explained in the documentation, there is no such thing as a directory in Cloud Storage. As far as Storage is concerned, there are only buckets, and inside them objects/files that may or may not have "/" in their names.
Due to this design choice, there is no way to set permissions on a "directory" in Cloud Storage. Note, however, that you can create as many buckets as you want at no extra charge, so you could create one bucket per user to fit your requirement.
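To make the prefix idea concrete, here is a minimal sketch (assuming the Python google-cloud-storage client and a placeholder bucket name) showing that a "directory" is just a name prefix you filter on when listing:

from google.cloud import storage

client = storage.Client()

# "directoryName/" is only a shared prefix in object names, not a real folder.
# delimiter="/" makes the listing behave like a single directory level.
blobs = client.list_blobs("my-bucket", prefix="directoryName/", delimiter="/")

for blob in blobs:
    print(blob.name)  # e.g. "directoryName/file1.csv"

Note that the prefix only filters what is returned; it does not restrict what the caller is permitted to list, which is exactly why per-directory permissions are not possible.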

Related

Can't access image files in Google Cloud storage from App Engine

Our GAE app has been running fine for years. I'm trying to switch to IAM roles to manage buckets from the default (fine-grained access). It doesn't work.
After switching to uniform access, I give StorageAdmin permissions to the GAE service account. At that point our code fails in getServingUrl():
// "service" is an App Engine ImagesService instance.
String filename = "/gs/" + bucketName + "/" + fileName;
String url = service.getServingUrl(ServingUrlOptions.Builder.withGoogleStorageFileName(filename));
An IllegalArgumentException is thrown with no detailed error message.
So I play around with the permissions a bit more and add allUsers with StorageAdmin permissions to the bucket. Two interesting things to note: 1) I can access the image directly from a browser using https://storage.googleapis.com/bucket/filename.png, and 2) nothing changes in our app; we still get the same behavior as described above!
To me, this makes no sense. Doesn't allUsers mean anyone or any service can access the files? And why doesn't adding the GAE service account work?
Cloud Storage allows two types of permissions for access to any bucket or object: IAM and ACLs. If you are using IAM to access buckets, make sure you follow the guidance in the documentation:
In most cases, IAM is the recommended method for controlling access to your resources. IAM controls permissioning throughout Google Cloud and allows you to grant permissions at the bucket and project levels. You should use IAM for any permissions that apply to multiple objects in a bucket to reduce the risks of unintended exposure. To use IAM exclusively, enable uniform bucket-level access to disallow ACLs for all Cloud Storage resources.
If you use IAM and ACLs on the same resource, Cloud Storage grants the broader permission set on the resource. For example, if your IAM permissions only allow a few users to access my-object, but your ACLs make my-object public, then my-object is exposed to the public. In general, IAM cannot detect permissions granted by ACLs, and ACLs cannot detect permissions granted by IAM.
You can also refer to the Stack Overflow question where a similar issue was resolved by changing the object's access control list permission to READ or FULL_CONTROL.
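As a rough sketch of the IAM route (the bucket name and service account address below are placeholders, not values from the question), granting the App Engine service account roles/storage.objectViewer on the bucket with the Python client looks like this:

from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-bucket")

# Fetch the current IAM policy; version 3 also covers conditional bindings.
policy = bucket.get_iam_policy(requested_policy_version=3)

# Grant object read access to the GAE default service account (placeholder).
policy.bindings.append({
    "role": "roles/storage.objectViewer",
    "members": {"serviceAccount:my-project@appspot.gserviceaccount.com"},
})

bucket.set_iam_policy(policy)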

Download Google Storage file using acl and redirect

Users can download CSV files from my Google App Engine application.
Larger files hit the 1-minute timeout window, so I have reworked my solution to grant users read access via the Storage ACL and redirect them to a download link.
This is a working example that grants the user access and redirects to "https://storage.cloud.google.com/[bucket]/[object]" (1):
from google.appengine.api import app_identity
from google.cloud import storage

storage_client = storage.Client()
bucket = storage_client.get_bucket(app_identity.get_default_gcs_bucket_name())

# Grant the signed-in user READ on the bucket's ACL.
acl = bucket.acl
acl.user(ndb_user.primaryEmail).grant_read()
acl.save()
# "filename" is expected to look like "/[bucket]/[object]".
self.redirect("https://storage.cloud.google.com" + filename)
It seems my solution doesn't work programmatically or in the browser.
Any ideas why or suggestions on alternatives ways to actually do this?
1: https://cloud.google.com/storage/docs/xml-api/reference-uris
This doesn't work because you are only granting READ access on the bucket via the ACL API. This maps to the IAM role roles/storage.legacyBucketReader, which does not grant the storage.objects.get permission (it only grants storage.buckets.get and storage.objects.list). If you switch to the new IAM API and instead grant roles/storage.objectViewer, this should work.
That being said, I strongly recommend looking at signed URLs instead. Signed URLs grant temporary access via delegation rather than permanently granting the user access via ACLs, and they are made for precisely this use case. Google's google-cloud library has a generate_signed_url function you can use to generate them instead of rolling your own. Here is the documentation.
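For reference, a minimal sketch of generating one with the Python google-cloud library (the bucket and object names are placeholders):

import datetime
from google.cloud import storage

# Requires credentials capable of signing, e.g. a service account key.
client = storage.Client()
blob = client.bucket("my-bucket").blob("exports/report.csv")

# A V4 signed URL granting read access for 15 minutes only.
url = blob.generate_signed_url(
    version="v4",
    expiration=datetime.timedelta(minutes=15),
    method="GET",
)

# Redirect the user to this URL instead of permanently editing the bucket ACL.
print(url)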

Can I have GCS private isolated buckets with a unique api key per bucket?

I'd like to give each of my customers access to their own bucket under my GCS enabled app.
I also need to make sure that a user's bucket is safe from other users' actions.
Last but not least, the customer will be a client application, so the whole process needs to be done transparently without asking the user to login.
If I apply an ACL on each bucket, granting access only to the user I want, can I create an API key only for that bucket and hand that API key to the client app to perform GCS API calls?
Unfortunately you only have two good options here:
Have a service which authenticates the individual app according to whatever scheme you like (some installation license, a random GUID assigned at creation time, whatever) and vends GCS signed URLs, which the end user could then use for a single operation, like uploading an object or listing a bucket's contents; see the sketch after this list. The downside here is that all requests must involve your service. All resources would belong entirely to your application.
Abandon the "without asking the user to login" requirement and require a single Google login at install time.
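If you go with the first option, here is a rough sketch of the vending service (assuming Flask; verify_app_token is a hypothetical stand-in for whatever scheme identifies the client app):

import datetime
from flask import Flask, abort, request
from google.cloud import storage

app = Flask(__name__)
client = storage.Client()

@app.route("/signed-upload-url")
def signed_upload_url():
    # verify_app_token is a placeholder: check an installation license,
    # a GUID assigned at creation time, or whatever scheme you choose.
    customer_id = verify_app_token(request.headers.get("X-App-Token"))
    if customer_id is None:
        abort(403)

    # One bucket per customer; all resources belong to your application.
    bucket = client.bucket("app-customer-" + customer_id)
    blob = bucket.blob(request.args["object"])

    # Short-lived URL the client can PUT to directly, with no Google login.
    return blob.generate_signed_url(
        version="v4",
        expiration=datetime.timedelta(minutes=10),
        method="PUT",
    )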

Drive Realtime API not granting permission to realtime document; normal drive API freaking out

My app uses the Drive REST API and the Drive Realtime API in combination. We set the file's permissions so that the public has view access, and then emailed a link to it to a few thousand customers.
The file's permissions are set so that the public has view access, but:
When a user tries to open the realtime document, we get Drive Realtime API Error: not_found: File not found.
When a user tries to copy the non-realtime file, we get The authenticated user has not granted the app 689742286244 write access to the file 0B-NHh5QARZiUUFctN0Zjc3RKdWs (of course we are not asking to write).
You can see the effects for yourself at https://peardeck.com/editor/0B-NHh5QARZiUUFctN0Zjc3RKdWs , and our embarrassing attempts to cover for the errors.
Interesting notes:
Sharing the file directly with a particular google account seems to lift the curse, and then that google account can find the file like normal. No extra permissions, just an explicit reference to a google account.
Setting the file so that the public has full write access seems to have no effect
Other files with the exact same settings in the exact same Drive folder can be opened successfully (but presumably have not been opened by so many people in the past). This makes me think there is some broken state within Google.
How can I avoid this? What's going on?! Thanks for any help!
The realtime API does not support anonymous access. All users must have a Google account and be explicitly authorized to view or write to the file.

Google Drive SDK: Modify application-owned file as user

I have a Google App Engine application that:
Authenticates a user and authorizes the drive.file scope;
Creates and stores a file on behalf of a user via an application-owned 'regular' Google account;
Shares that file with the user (grants write access).
However, when a user attempts to update one of these files via an authorized Drive service created by the app, the following exception is raised:
403: The authenticated user has not granted the app {appId} access to the file {fileId}.
What am I missing? Given that the file was both initially created by and is still owned by the application, why is it necessary for the user to specifically grant the application access to the file?
My goal is for users to modify files (to which they have write access, that are stored in/owned by an application-owned account) as themselves in order to maintain appropriate 'last modifying user' attribution.
Is there anything I can do to work around this, other than (a) authorizing the 'drive' scope, (b) using the Google Picker or Drive UI to 'explicitly' open files with my app (does this imply the file must live in the user's Drive account?), or (c) having my application-owned account perform all file update operations?
File scope authorization is currently done as a user-app pair. Each user must individually authorize the app to access the file.
Given that, I think you've identified the possible solutions. For (b), the file doesn't need to be owned by the user; they just need access to it. Having shared it with them should be sufficient.
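For the sharing step itself, a minimal sketch with the current Drive v3 Python client (creds, file_id, and the email address are placeholders):

from googleapiclient.discovery import build

# "creds" is assumed to be the application-owned account's credentials.
drive = build("drive", "v3", credentials=creds)

# Share the app-owned file with the user, granting write access.
drive.permissions().create(
    fileId=file_id,
    body={"type": "user", "role": "writer", "emailAddress": "user@example.com"},
    sendNotificationEmail=False,
).execute()

Note that sharing grants the user access to the file, but each user still has to authorize the app for the drive.file scope separately.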
