I am using the default bucket name, but whenever I try to write a file, I get a 403 Forbidden. It tries to write to a bucket named: app_default_bucket.
This is the default bucket retrieved by file.DefaultBucketName(ctx).
Local file permissions also seem to be okay.
In production everything works as expected.
It's trying to write to your remote Google Cloud Storage account. This seems to be a current bug in the dev server. For now you might have to create/reconfigure the default bucket on your account.
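If you want to double-check which name the dev server resolves for the default bucket, here is a minimal sketch using the Python runtime's equivalent call for illustration (Go's file.DefaultBucketName should resolve the same underlying value):

from google.appengine.api import app_identity

# On the dev server this typically resolves to 'app_default_bucket';
# deployed, it is usually '<your-app-id>.appspot.com'.
bucket_name = app_identity.get_default_gcs_bucket_name()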
I am trying to host my website with Amazon S3 static website hosting.
I created a bucket, completed permissions and the bucket policy etc.
And it was returning a 403 Forbidden when I tried to access my endpoint.
After leaving it for a weekend I went back to have another go and it was working.
Now I have deleted the contents of the bucket and added some different files (basically the same, with a few changes to some paragraphs).
And once again it is giving me 403 Forbidden. My question is: is there a waiting period or something when a bucket or its contents are changed?
Or is it just me doing something wrong? I didn't change my policy or permissions, so I don't see why it has gone back to giving me a "403 Forbidden" message again.
I have looked at previous questions and the AWS documentation but couldn't find anything specific to this.
Appreciate any information.
You need to make all the files public. You can make them public using the S3 console/interface or the AWS CLI. The public rule does not take any time to apply; it takes effect instantly.
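If you would rather script it than click through the console, here is a minimal sketch with boto3 (the bucket and key names are placeholders):

import boto3

s3 = boto3.client('s3')

# Grant public read on one object; repeat for every file the site serves.
s3.put_object_acl(Bucket='my-site-bucket', Key='index.html', ACL='public-read')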
In Google App Engine (GAE), files that get stored to the local Cloud Storage show up in the admin console with a path. Example:
/gs/myapp.appspot.com/somefile.jpg
This one seems to get closer:
http://localhost:8080/_ah/img/encoded_gs_file:somefile.jpg
But that generates an error:
Error 404 ApplicationError: 6: Could not read blob.
This one works, but it requires that I know the key:
http://localhost:8080/_ah/img/encoded_gs_key:some_key
Is there a way to use the local url but use the filename instead of a key?
I think you should go through the details of this GitHub code about how to read and write blobs. The code confirms that for image files you always need the key.
For images, you need the key: http://localhost:8080/_ah/img/encoded_gs_file:[Keys]
For other files, you can use the file name directly: http://localhost:8080/_ah/gcs/default_bucket/file_name
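That said, you can derive the key from the filename at runtime instead of looking it up by hand. A minimal sketch on the Python runtime (bucket and object names are placeholders):

from google.appengine.api import images
from google.appengine.ext import blobstore

# Turn a /gs/<bucket>/<object> path into an encoded blob key, then ask the
# images service for a serving URL; on the dev server this should produce
# the /_ah/img/encoded_gs_file:... form.
gs_path = '/gs/myapp.appspot.com/somefile.jpg'
blob_key = blobstore.create_gs_key(gs_path)
url = images.get_serving_url(blob_key)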
I am following the guide below.
https://github.com/GoogleCloudPlatform/getting-started-java/tree/master/bookshelf-standard/3-binary-data
I created a new Google Cloud project, followed the above instructions, and everything was fine on the remote server.
Then I tried using an existing, older App Engine project (created 4-5 years ago), and I get the following error at the given code:
"Caller does not have storage.objects.create access to bucket ..."
storage.create(
    BlobInfo.newBuilder(bucketName, fileName)
        // Modify access list to allow all users with link to read file
        .setAcl(new ArrayList<>(Arrays.asList(Acl.of(User.ofAllUsers(), Role.READER))))
        .build(),
    fileStream.openStream());
The following is the stack trace:
Uncaught exception from servlet
com.google.cloud.storage.StorageException: Caller does not have storage.objects.create access to bucket asw12.
at com.google.cloud.storage.spi.v1.HttpStorageRpc.translate(HttpStorageRpc.java:189)
at com.google.cloud.storage.spi.v1.HttpStorageRpc.create(HttpStorageRpc.java:240)
at com.google.cloud.storage.StorageImpl$3.call(StorageImpl.java:151)
at com.google.cloud.storage.StorageImpl$3.call(StorageImpl.java:148)
at com.google.api.gax.retrying.DirectRetryingExecutor.submit(DirectRetryingExecutor.java:94)
at com.google.cloud.RetryHelper.runWithRetries(RetryHelper.java:54)
at com.google.cloud.storage.StorageImpl.create(StorageImpl.java:148)
at com.google.cloud.storage.StorageImpl.create(StorageImpl.java:141)
at com.example.getstarted.util.CloudStorageHelper.uploadFile(CloudStorageHelper.java:65)
at com.example.getstarted.basicactions.CreateBookServlet.doPost(CreateBookServlet.java:70)
I checked the Google service accounts in my old project and the account exists. How do I know who the 'Caller' is?
If you use the google-cloud libraries from App Engine and don't otherwise specify credentials, you will be acting as your project's App Engine default service account. Its name is probably something like your-project-id@appspot.gserviceaccount.com.
To get the service account name, open the Service Accounts page in the console, or check the settings on your App Engine page.
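If you would rather check from code, the App Engine identity API reports the account your app runs as; a minimal sketch on the Python runtime (the Java AppIdentityService exposes an equivalent getServiceAccountName method):

from google.appengine.api import app_identity

# This is the identity used for outgoing calls (e.g. to Cloud Storage)
# when no explicit credentials are configured.
print(app_identity.get_service_account_name())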
TransformationError
This error keeps coming up for a specific image.
There are no problems with other images and I'm wondering what the reason for this exception could be.
From Google:
"Error while attempting to transform the image."
Update:
On the development server it works fine; it only fails live.
Thanks
Without more information I'd say either the image is corrupted, or it's in a format that cannot be used with get_serving_url (an animated GIF, for example).
I fought this error forever. In case anyone else gets the dreaded TransformationError, note that you need to make sure your app has owner permissions on the files you want to generate a URL for.
It'll look something like this in your IAM tab:
App Engine app default service account
your-project-name-here@appspot.gserviceaccount.com
In IAM, on that member, scroll down to Storage and grant "Storage Object Admin" to that user. That works as long as your storage bucket is under the same project... if not, I'm not sure how to set it up...
This TransformationError exception also shows up for permission errors, so it is a bit misleading.
I was getting this error because I had used the Bucket Policy Only permissions on a bucket in a different project.
After changing this back to object-level permissions and giving my App Engine app access (from a different project), I was able to perform the App Engine Standard images operation (google.appengine.api.images.get_serving_url) that I was trying to implement.
Make sure that you set your permissions correctly either in the Console UI or via gsutil like so:
gsutil acl ch -u my-project-a@appspot.gserviceaccount.com:OWNER gs://my-project-b
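The same grant, sketched with the google-cloud-storage Python client (the project and bucket names mirror the placeholders in the gsutil command):

from google.cloud import storage

client = storage.Client(project='my-project-b')
bucket = client.get_bucket('my-project-b')

# Grant the other project's App Engine service account OWNER on the bucket.
# This only works with fine-grained (object-level) ACLs, not with
# Bucket Policy Only / uniform bucket-level access.
bucket.acl.user('my-project-a@appspot.gserviceaccount.com').grant_owner()
bucket.acl.save()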
I'm trying to use the "Copy to another app" feature of AppEngine and keep getting an error:
Fetch to http://datastore-admin.moo.appspot.com/_ah/remote_api failed with status 302
This is for a Java app but I followed the instructions on setting up a default Python runtime.
I'm 95% sure it's an authentication issue and the call to remote_api is redirecting to the Google login page. Both apps use Google Apps as the authentication mechanism. I've also tried copying to and from a third app we have which uses Google Accounts for authentication.
Notes:
The user account I log in with is an Owner on all three apps. It's a Google Apps account (if that wasn't obvious).
I have a Gmail account that is an Owner on all three apps as well. When I log in to the admin console with it, I don't see the Datastore Admin console at all when I click it.
I'm able to use the remote_api just fine from the command line after I enter my details.
Tried with both the Python remote_api built-in and the Java one.
I've found similar questions/blog posts about this, one of which required logging in from a browser, then manually submitting the ACSID cookie you get after that's done. Can't do that here, obviously.
OK, I think I got this working.
I'll refer to the two appIDs as "source" and "dest".
To enable datastore admin (as you know) you need to upload a Python project with the app.yaml and appengine_config.py files as described in the docs.
Either I misread the docs or there is an error in them. The "appID" in the .yaml should be the app ID you are uploading to in order to enable DS admin.
The other appID in the appengine_config file, specifically this line:
remoteapi_CUSTOM_ENVIRONMENT_AUTHENTICATION = (
    'HTTP_X_APPENGINE_INBOUND_APPID', ['appID'])
should be the appID of the "source", i.e. the app ID of the app the data is coming from in the DS copy operation.
I think this line is what allows the source appID to be authenticated as having permissions to write to the "dest" app ID.
So I changed that .py file and uploaded it again to my "dest" app ID. To be sure, I made this dummy Python app the default version and left it that way.
Then on the source app ID I tried the DS copy again, and all the copy jobs were kicked off OK - so it seems to have fixed it.