I created a Google App Engine app that listens for Google Cloud Storage notifications; whenever a new object is created on GCS, the app needs to open the new object and perform operations based on its contents. The problem is that I can't access the object contents when the app and the GCS bucket are in different projects.
Configuration:
I created a service account in project A with Storage Object Admin permissions, associated the GAE app with it, and activated the service account using:
gcloud auth activate-service-account [ACCOUNT] --key-file=KEY_FILE
I then created a bucket gs://some_bucket in project B in the same region as my GAE app, and added my service account as an owner of the bucket.
I added my service account as a member of project B with "Storage Object Admin" permissions.
I created a watchbucket channel between my application and the bucket using
gsutil notification watchbucket -i [ChannelId] -t [Token] https://[app-name].appspot.com/ gs://some_bucket
My application is now receiving POST requests; I can parse them and find the source bucket, the size, the object name, etc., but I can't read the objects themselves. I get the following error:
{Location: ""; Message: "Access Denied: File gs://some_bucket/some_object: Access Denied"; Reason: "accessDenied"}
I tested the above configuration within the same project (project A), and I am able to read the objects and operate on them. This is a permissions issue that I can't figure out.
GCS bucket permissions are different from GCS object permissions: being a bucket owner does not make you an object owner or give you access to the objects. You can grant read permissions on all existing GCS objects in your bucket recursively using the following:
gsutil -m acl ch -u name@project.iam.gserviceaccount.com:R -r gs://example-bucket
which will recursively grant the service account read permission to all objects in the bucket.
You might also want to change the bucket's default object permissions so that all future objects added to your GCS bucket get the desired permissions:
gsutil defacl ch -u name@project.iam.gserviceaccount.com:READ gs://example-bucket
Changing object ACLs: https://cloud.google.com/storage/docs/gsutil/commands/acl
Changing default object ACLs: https://cloud.google.com/storage/docs/gsutil/commands/defacl
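If the bucket is (or will be) using uniform bucket-level access, ACL edits are not available; a roughly equivalent IAM grant at the bucket level would look like the sketch below, where the bucket name and service account address are placeholders:
# Grant the service account read access to all objects in the bucket via IAM rather than ACLs.
gsutil iam ch serviceAccount:name@project.iam.gserviceaccount.com:objectViewer gs://example-bucket
# Inspect the resulting IAM policy to confirm the binding was added.
gsutil iam get gs://example-bucket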
Related
Our GAE app has been running fine for years. I'm trying to switch bucket access management from the default (fine-grained access) to IAM roles, and it doesn't work.
After switching to uniform access, I grant Storage Admin permissions to the GAE service account. At that point our code fails in getServingUrl():
String filename = "/gs/" + bucketName + "/" + fileName;
String url = service.getServingUrl(ServingUrlOptions.Builder.withGoogleStorageFileName(filename));
An IllegalArgumentException is thrown with no detailed error message.
So, I play around with the permissions a bit more. I add allUsers with StorageAdmin permissions to the bucket. Two interesting things to note: 1) I can access the image directly from a browser using: https://storage.googleapis.com/bucket/filename.png. 2) Nothing changes on our app. Still get the same behavior as described above!
To me, this makes no sense. Doesn't allUsers mean anyone or any service can access the files? And why doesn't adding the GAE service account work?
Cloud Storage allows two systems for controlling access to buckets and objects: IAM and ACLs. If you are using IAM to access buckets, make sure you follow the guidance in the documentation:
In most cases, IAM is the recommended method for controlling access to your resources. IAM controls permissioning throughout Google Cloud and allows you to grant permissions at the bucket and project levels. You should use IAM for any permissions that apply to multiple objects in a bucket to reduce the risks of unintended exposure. To use IAM exclusively, enable uniform bucket-level access to disallow ACLs for all Cloud Storage resources.
If you use IAM and ACLs on the same resource, Cloud Storage grants the broader permission set on the resource. For example, if your IAM permissions only allow a few users to access my-object, but your ACLs make my-object public, then my-object is exposed to the public. In general, IAM cannot detect permissions granted by ACLs, and ACLs cannot detect permissions granted by IAM.
You can also refer to the Stack Overflow question where the OP faced a similar issue and resolved it by changing the permission in the object's access control list to READ or FULL_CONTROL.
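When debugging this kind of mismatch, it can also help to confirm whether uniform bucket-level access is actually enabled and what the bucket's IAM policy contains. A quick sketch (the bucket name is a placeholder):
# Check whether uniform bucket-level access (which disables ACLs) is turned on.
gsutil uniformbucketlevelaccess get gs://example-bucket
# List the bucket's IAM bindings to verify the App Engine service account's role.
gsutil iam get gs://example-bucket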
My aim is to be able to call the Google Storage REST APIs with an OAuth token granting an authenticated Google user the read/write permissions on a directory called "directoryName" inside my storage bucket.
So far, I have successfully managed to use the Storage APIs after adding the user to the ACL for the bucket. However, I do not want to grant the user READ or WRITE permissions on the complete bucket, but just on the user's directory inside the bucket, e.g. bucket/directoryName.
e.g. I want to be able to call storage.objects.list for a directory inside the bucket without providing the user the permissions for the bucket but just for that directory (and subdirectories).
What I've tried so far: when I call the GET method on https://www.googleapis.com/storage/v1/b/bucket/o?fields=kind%2Citems%28name%29&maxResults=150&prefix=directoryName with the user added to the directory's ACL (as Owner), I get the error response "code":403,"message":"myEmail@gmail.com does not have storage.objects.list access to myBucketName.appspot.com."
Is it possible to provide directory level permissions with Google Cloud Storage and list the contents of that directory only?
As explained in the documentation, there is no such thing as a directory in Cloud Storage. As far as Storage is concerned, there are only buckets, and inside them objects/files that may or may not have "/" in their name.
Due to this design choice, there's no option to set permissions on a "directory" in Cloud Storage. Please note however that you can create as many buckets as you want for no extra charge. You may create one bucket per user to fit your requirement.
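A minimal sketch of the one-bucket-per-user approach, assuming hypothetical project, bucket, and user names:
# Create a dedicated bucket for this user (project ID and location are placeholders).
gsutil mb -p my-project -l us-central1 gs://my-app-user-12345
# Grant only this user read/write access to objects in their own bucket.
gsutil iam ch user:someuser@gmail.com:objectAdmin gs://my-app-user-12345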
$ gsutil acl get gs://mybucket/xyz.css
GSResponseError: status=403, code=AccessDenied, reason="Forbidden", message="Access denied.", detail="Access denied to mybucket/xyz.css"
How should I diagnose the problem?
I'm the owner of the bucket. But as the answer points out, I'm not the owner of the object.
In order to view the ACL for an object, you must have the FULL_CONTROL permission for that object (bucket permissions are irrelevant). There are a variety of reasons that you might not be able to access this object, but the most obvious is that you are using gsutil with an account that does not have FULL_CONTROL permission for this object. The account that owns an object always has FULL_CONTROL permission.
Did you create this bucket? Did you create this object? Did you create them with the same account with which you've configured gsutil? Perhaps you've set up gsutil with a service account but the object is owned by your user account?
Here are some possible reasons why this might be the case:
This isn't your bucket and/or object. Perhaps you've misspelled it?
You're using gsutil as an anonymous user or as a different user than the owner of this object.
Some other user or service had permission to create objects in your bucket and created it with custom ACLs that don't include you.
Here are some random troubleshooting ideas:
Try using the cloud console at https://cloud.google.com/console. Can you see the object's ACL from there? If so, it's an issue with your gsutil config.
Only users granted FULL_CONTROL access are allowed to read the object's ACL. Do you know who created this object? If so, you could ask that person to run the gsutil acl get command on it; then you could see who is granted FULL_CONTROL access.
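A couple of quick checks for figuring out which identity gsutil is actually using and what it can see, assuming gsutil is used through the Cloud SDK (the bucket and object names are from the question):
# Show which account the Cloud SDK is currently authenticated as.
gcloud auth list
# Long-list the object; -L prints its ACL if your account is allowed to read it.
gsutil ls -L gs://mybucket/xyz.css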
I am trying to use the Google Prediction API for the first time.
I am just following the steps given in the article https://developers.google.com/appengine/articles/prediction_service_accounts.
I am getting a strange problem while executing step 2.4 in the above mentioned article.
I have followed the steps as below.
1) I have an application created in, say, the xyz.com domain, and the service account name of my application is "myapp@appspot.gserviceaccount.com".
2) Then I went to the "Team" tab on the Google API Console and tried to add the service account name of my application to the project in which I have activated the Prediction API and Google Cloud Storage.
While adding the service account to the project, it gives me an error saying:
"Only users in domain xyz.com may be added to the project".
The same kind of message is also displayed on the bottom of the "Team" tab.
xyz.com is the domain in which my application is deployed.
Could anyone please help me understand why this kind of message is coming up?
Are there any domain level admin settings required to add the service account to the Google Console API project?
Currently, if you created a project with your Apps account, you can only add members of that same domain.
What you'll have to do is create a new project from something like an xxx@gmail.com account (NOT your Apps domain account). You can then add both the @appspot.gserviceaccount.com service account and yourself@xyz.com.
I think you can even remove xxx@gmail.com later on, once you've added yourself@xyz.com. You can even activate billing for yourself@xyz.com, not xxx@gmail.com, if you need to.
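The "Team" tab described above belongs to the old API Console; with current tooling, adding a service account from another project to a project's members would be done through IAM instead. A sketch, with a placeholder project ID and role:
# Grant the App Engine default service account a role on the other project.
gcloud projects add-iam-policy-binding my-prediction-project \
    --member="serviceAccount:myapp@appspot.gserviceaccount.com" \
    --role="roles/editor"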
Taken from https://developers.google.com/appengine/docs/python/googlestorage/overview
You can modify the ACL of the bucket manually:
An alternate way to grant the app access to a bucket is to manually edit and set the bucket ACL and the default object ACL, using the gsutil utility:
Get the ACL for the bucket and save it to a file for editing: gsutil getacl gs://mybucket > myAcl.txt
Add the following Entry to the ACL file you just retrieved:
<Entry>
  <Scope type="UserByEmail">
    <EmailAddress>
      your-application-id@appspot.gserviceaccount.com
    </EmailAddress>
  </Scope>
  <Permission>
    WRITE
  </Permission>
</Entry>
If you are adding multiple apps to the ACL, repeat the above entry for each app, changing only the email address to reflect each app's service name.
Set the modified ACL on your bucket: gsutil setacl myAcl.txt gs://mybucket
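Note that getacl/setacl come from an old gsutil version, and current gsutil prints ACLs as JSON rather than the XML shown above. With current gsutil the same flow would look roughly like this sketch:
# Download the bucket ACL to a local file for editing.
gsutil acl get gs://mybucket > myAcl.txt
# (edit myAcl.txt, adding the app's service account as described above)
# Upload the modified ACL back to the bucket.
gsutil acl set myAcl.txt gs://mybucket
# Optionally do the same for the default object ACL so new objects inherit it.
gsutil defacl get gs://mybucket > myDefAcl.txt
gsutil defacl set myDefAcl.txt gs://mybucket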
Pretty basic question but I haven't been able to find an answer. Using Transit I can "move" files from one S3 bucket on one AWS account to another S3 bucket on another AWS account, but what it actually does is download the files from the first then upload them to the second.
Is there a way to move files directly from one S3 account to another without downloading them in between?
Yes, there is a way. And it's pretty simple, though it's hard to find it. 8)
For example, suppose your first account username is acc1@gmail.com and the second is acc2@gmail.com.
Open the AWS Management Console as acc1. Get to the Amazon S3 bucket properties, and in the "Permissions" tab click "Add more permissions". Then add List and View Permissions for "Authenticated Users".
Next, in AWS IAM (accessible from the console tabs) of acc2, create a user with full access to the S3 bucket (to be more secure, you can set up exact permissions, but I prefer to create a temporary user for the transfer and then delete it).
Then you can use s3cmd (using the credentials of the newly created user in acc2) to do something like:
s3cmd cp s3://acc1_bucket/folder/ s3://acc2_bucket/folder --recursive
All transfer will be done on Amazon's side.
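Before running the copy, a quick sanity check with acc2's credentials can confirm that the cross-account grant took effect; a sketch:
# Configure s3cmd with the access key and secret of the IAM user created in acc2.
s3cmd --configure
# Verify that the source bucket is listable with those credentials before copying.
s3cmd ls s3://acc1_bucket/folder/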
Use the AWS CLI (I used an Ubuntu 14 EC2 instance) and just run the following command:
aws s3 sync s3://bucket1 s3://bucket2
You will need to specify the account details for one, and have public write access or public read access to the other.
This will sync the two buckets. You can use the same command again later to sync quickly. The best part is that it doesn't seem to require any local bandwidth (i.e. the files do not pass through the local computer).
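If the two buckets belong to different accounts, one way to run the sync is under a named CLI profile for whichever identity has read access to the source and write access to the destination. A sketch, with placeholder profile and bucket names:
# Store the credentials of the IAM user that can read bucket1 and write bucket2.
aws configure --profile transfer-user
# Run the server-side sync under that profile.
aws s3 sync s3://bucket1 s3://bucket2 --profile transfer-user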
If you are just looking for a ready-made solution, there are a few tools out there that can do this. Bucket Explorer works on Mac and Windows and can copy across accounts, as can Cloudberry S3 Explorer and S3 Browser, but the latter two are Windows-only, so they may not work for you.
I suspect the AWS console could also do it with the appropriate permissions setup but I haven't tested this.
You can also do it using the AWS API as long as you have given the AWS account you are using write permissions to the destination bucket.
boto works well. See this thread. Using boto, you copy objects straight from one bucket to another, rather than downloading them to the local machine and uploading them to another bucket.
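The same server-side copy can also be expressed with the AWS CLI's low-level s3api commands, which map onto the same CopyObject API that boto uses under the hood. A sketch with placeholder bucket and key names:
# CopyObject performs the copy entirely on S3's side; nothing is downloaded locally.
aws s3api copy-object \
    --copy-source source-bucket/path/to/object.txt \
    --bucket destination-bucket \
    --key path/to/object.txt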
Move S3 files from one account to another
Let's say there are two accounts, a source account and a destination account, and two buckets, source-bucket and destination-bucket. We want to move all files from source-bucket to destination-bucket. We can do it with the following steps:
aws configure
Configure your destination account using its credentials or an IAM role.
Create a user policy for the destination account user.
Give the destination user access to source-bucket by modifying the source-bucket policy and adding the destination account user to it. This way, the destination user will have access to source-bucket (a sample policy sketch follows these steps).
aws s3 ls s3://source-bucket/
This checks whether the destination account has access to source-bucket; run it just for confirmation.
aws s3 cp s3://source-bucket s3://destination-bucket --recursive
This copies all files from source-bucket to destination-bucket; the --recursive flag is what makes it copy everything.
aws s3 mv s3://source-bucket s3://destination-bucket --recursive
This moves all the files from source-bucket to destination-bucket (copying them and then deleting them from the source).
Alternatively, you can use the sync command:
aws s3 sync s3://source-bucket s3://destination-bucket
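As a sketch of the bucket-policy step above, the policy attached to source-bucket in the source account could grant the destination account list, read, and delete access along these lines; the account ID and bucket names are placeholders:
# Write a bucket policy that lets the destination account (222222222222) list, read,
# and delete objects in the source bucket (delete is needed for the mv step).
cat > source-bucket-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DelegateAccessToDestinationAccount",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::222222222222:root"},
      "Action": ["s3:ListBucket", "s3:GetObject", "s3:DeleteObject"],
      "Resource": [
        "arn:aws:s3:::source-bucket",
        "arn:aws:s3:::source-bucket/*"
      ]
    }
  ]
}
EOF
# Apply the policy to the source bucket (run with source-account credentials).
aws s3api put-bucket-policy --bucket source-bucket --policy file://source-bucket-policy.json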
On Mac OS X I used the Transmit app from Panic. I opened one window for each S3 account (using the API Keys and secrets). I could then drag from one bucket in one window to another bucket in the other window. No need to download files locally first.
Andrew is correct: Transmit downloads the files locally and then uploads them.
CrossFTP can copy S3 files straight from one bucket to another without downloading them. It is a GUI S3 client that works on Windows, Mac, and Linux.
You can use Cyberduck (open source).
For newly created files (NOT existing objects), you can take advantage of new functionality from AWS. It is Cross-Region Replication (under "Versioning" for the S3 bucket). You can create a policy that will allow you to replicate new objects to a bucket in a different account.
For existing objects, you will still need to copy your objects using another method - unless AWS introduces native functionality for this in the future.
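A rough sketch of what enabling this looks like with the CLI, assuming the replication IAM role already exists and the destination bucket's policy allows it; every name, ARN, and account ID below is a placeholder:
# Replication requires versioning on both the source and destination buckets.
aws s3api put-bucket-versioning --bucket source-bucket \
    --versioning-configuration Status=Enabled
# Replicate new objects into a bucket owned by another account (222222222222),
# handing ownership of the replicas to the destination account.
cat > replication.json <<'EOF'
{
  "Role": "arn:aws:iam::111111111111:role/s3-replication-role",
  "Rules": [
    {
      "ID": "ReplicateToOtherAccount",
      "Status": "Enabled",
      "Priority": 1,
      "Filter": {"Prefix": ""},
      "DeleteMarkerReplication": {"Status": "Disabled"},
      "Destination": {
        "Bucket": "arn:aws:s3:::destination-bucket",
        "Account": "222222222222",
        "AccessControlTranslation": {"Owner": "Destination"}
      }
    }
  ]
}
EOF
aws s3api put-bucket-replication --bucket source-bucket \
    --replication-configuration file://replication.json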
One can do it by running the following:
aws s3 mv s3://source-bucket s3://destination-bucket --recursive (use sync instead of mv if you just want to keep the buckets in sync)
Attach a bucket policy to the source bucket in Source Account.
Attach an AWS Identity and Access Management (IAM) policy to a user or role in Destination Account.
Use the IAM user or role in Destination Account to perform the cross-account move.
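For reference, the IAM policy attached in the destination account might look roughly like the sketch below; the bucket, user, and policy names are placeholders:
# Policy for the destination-account user or role performing the cross-account move:
# list/read/delete on the source bucket, write on the destination bucket.
cat > cross-account-move-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetObject", "s3:DeleteObject"],
      "Resource": [
        "arn:aws:s3:::source-bucket",
        "arn:aws:s3:::source-bucket/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:PutObject"],
      "Resource": [
        "arn:aws:s3:::destination-bucket",
        "arn:aws:s3:::destination-bucket/*"
      ]
    }
  ]
}
EOF
# Attach it as an inline policy to the user in the destination account.
aws iam put-user-policy --user-name transfer-user \
    --policy-name cross-account-s3-move \
    --policy-document file://cross-account-move-policy.json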