Not able to create the file on Google Cloud Storage - google-app-engine

I am following the link below:
https://github.com/GoogleCloudPlatform/getting-started-java/tree/master/bookshelf-standard/3-binary-data
I created a new Google Cloud Project, followed the above instructions, and everything works fine on the remote server.
When I tried using an existing, old App Engine project (created 4-5 years ago), I get the following error at the code below:
"Caller does not have storage.objects.create access to bucket ..."
storage.create(
    BlobInfo.newBuilder(bucketName, fileName)
        // Modify access list to allow all users with link to read file
        .setAcl(new ArrayList<>(Arrays.asList(Acl.of(User.ofAllUsers(), Role.READER))))
        .build(),
    fileStream.openStream());
Following is the stack trace:
Uncaught exception from servlet
com.google.cloud.storage.StorageException: Caller does not have storage.objects.create access to bucket asw12.
at com.google.cloud.storage.spi.v1.HttpStorageRpc.translate(HttpStorageRpc.java:189)
at com.google.cloud.storage.spi.v1.HttpStorageRpc.create(HttpStorageRpc.java:240)
at com.google.cloud.storage.StorageImpl$3.call(StorageImpl.java:151)
at com.google.cloud.storage.StorageImpl$3.call(StorageImpl.java:148)
at com.google.api.gax.retrying.DirectRetryingExecutor.submit(DirectRetryingExecutor.java:94)
at com.google.cloud.RetryHelper.runWithRetries(RetryHelper.java:54)
at com.google.cloud.storage.StorageImpl.create(StorageImpl.java:148)
at com.google.cloud.storage.StorageImpl.create(StorageImpl.java:141)
at com.example.getstarted.util.CloudStorageHelper.uploadFile(CloudStorageHelper.java:65)
at com.example.getstarted.basicactions.CreateBookServlet.doPost(CreateBookServlet.java:70)
I checked the Google service accounts in my old project and the default one exists. How do I find out who the 'Caller' is?

If you use the google-cloud libraries from App Engine and don't otherwise specify credentials, you will be acting as your project's App Engine default service account. Its name is probably something like your-project-id@appspot.gserviceaccount.com.
To get the service account name, open the Service Accounts page in the console, or check the settings on your App Engine page.
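To double-check who that is from code, here is a minimal sketch using the google-auth library that the google-cloud Java clients sit on top of; it is only a rough diagnostic, since on App Engine standard the credential object may be a different subclass than ServiceAccountCredentials:

import com.google.auth.oauth2.GoogleCredentials;
import com.google.auth.oauth2.ServiceAccountCredentials;

public class WhoIsTheCaller {
  public static void main(String[] args) throws Exception {
    // Application Default Credentials: on App Engine this normally resolves
    // to the default service account unless you configured something else.
    GoogleCredentials creds = GoogleCredentials.getApplicationDefault();
    if (creds instanceof ServiceAccountCredentials) {
      // This email is the "Caller" that Cloud Storage checks permissions for.
      System.out.println(((ServiceAccountCredentials) creds).getClientEmail());
    } else {
      System.out.println("Credential type: " + creds.getClass().getName());
    }
  }
}

Once you know the caller, granting it storage.objects.create on the bucket (for example via the Storage Object Creator or Storage Object Admin role) should clear the error.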

Related

"An internal error occurred while ensuring the default service account exists" while creating Google App Engine

I am trying to create a Google App Engine application using the "create application" option, but I am getting the error below:
An internal error occurred while ensuring the default service account exists.
Can you please help me with a solution?
I tried creating it in a different location and got the same error.
Make sure the default App Engine service account is not missing (e.g. due to an accidental deletion) from the IAM & admin > Service accounts section of the Cloud Console. It is named after your project followed by "@appspot.gserviceaccount.com".
If you do not see it, you can recover it by making, for example, a REST API call as documented here:
POST https://iam.googleapis.com/v1/projects/[PROJECT-ID]/serviceAccounts/[SA-NAME]@[PROJECT-ID].iam.gserviceaccount.com:undelete
If the deletion was made more than 30 days ago, the only way left to fix the issue would be to create a new project and use its brand new default Service Account.
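If you would rather issue that call from Java than from curl, a rough sketch is below; the project ID is a placeholder, it assumes the undelete endpoint works exactly as documented above, and it must be run with credentials that are allowed to manage service accounts in the project:

import com.google.auth.oauth2.GoogleCredentials;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Collections;

public class UndeleteDefaultServiceAccount {
  public static void main(String[] args) throws Exception {
    String projectId = "your-project-id";  // placeholder
    String saEmail = projectId + "@appspot.gserviceaccount.com";

    // Get an OAuth token from Application Default Credentials.
    GoogleCredentials creds = GoogleCredentials.getApplicationDefault()
        .createScoped(Collections.singletonList("https://www.googleapis.com/auth/cloud-platform"));
    creds.refreshIfExpired();
    String token = creds.getAccessToken().getTokenValue();

    // POST .../serviceAccounts/<email>:undelete with an empty body.
    URL url = new URL("https://iam.googleapis.com/v1/projects/" + projectId
        + "/serviceAccounts/" + saEmail + ":undelete");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Authorization", "Bearer " + token);
    conn.setDoOutput(true);
    conn.getOutputStream().close();
    System.out.println("HTTP " + conn.getResponseCode());
  }
}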

403 Forbidden on local Google Cloud Storage call

I am using the default bucket name, but whenever I try to write a file, I get a 403 Forbidden. It tries to write to a bucket named: app_default_bucket.
This is the default bucket retrieved by file.DefaultBucketName(ctx).
Local file permissions also seem to be okay.
In production everything works as expected.
It's trying to write to your remote Google Cloud Storage account. This seems to be a current bug; for now you might have to create/reconfigure the default bucket on your account.
Using the client library with the dev server not working in Go

TransformationError on blob via get_serving_url (app engine)

TransformationError
This error keeps coming up for a specific image.
There are no problems with other images and I'm wondering what the reason for this exception could be.
From Google:
"Error while attempting to transform the image."
Update:
On the development server it works fine; it only fails live.
Thanks
Without more information I'd say it's either that the image is corrupted, or it's in a format that cannot be used with get_serving_url (an animated GIF, for example).
I fought this error forever. In case anyone else hits the dreaded TransformationError, please note that you need to make sure your app has owner permissions on the files you want to generate a URL for.
It'll look something like this in your IAM tab:
App Engine app default service account
your-project-name-here@appspot.gserviceaccount.com
In IAM, on that member, scroll down to Storage and grant "Storage Object Admin" to that user. That works as long as your storage bucket is under the same project... if not, I'm not sure how.
This TransformationError exception seems to show up for permission errors, so it is a bit misleading.
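If you prefer making that grant from code rather than the Console UI, here is a hedged sketch using the same google-cloud-storage Java client as in the question at the top; the bucket name and service account email are placeholders:

import com.google.cloud.Identity;
import com.google.cloud.Policy;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import com.google.cloud.storage.StorageRoles;

public class GrantObjectAdmin {
  public static void main(String[] args) {
    Storage storage = StorageOptions.getDefaultInstance().getService();
    String bucket = "your-bucket-name";  // placeholder
    Identity appEngineSa =
        Identity.serviceAccount("your-project-name-here@appspot.gserviceaccount.com");

    // Read-modify-write the bucket IAM policy, adding Storage Object Admin
    // for the App Engine default service account.
    Policy policy = storage.getIamPolicy(bucket);
    storage.setIamPolicy(
        bucket,
        policy.toBuilder().addIdentity(StorageRoles.objectAdmin(), appEngineSa).build());
  }
}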
I was getting this error because I had used Bucket Policy Only permissions on a bucket in a different project.
However, after changing this back to object-level permissions and giving my App Engine app (from the other project) access, I was able to perform the App Engine standard Images operation (google.appengine.api.images.get_serving_url) that I was trying to implement.
Make sure that you set your permissions correctly either in the Console UI or via gsutil like so:
gsutil acl ch -u my-project-a@appspot.gserviceaccount.com:OWNER gs://my-project-b
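For reference, a roughly equivalent version of that gsutil command using the google-cloud-storage Java client (reusing the placeholder project names from the command above):

import com.google.cloud.storage.Acl;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

public class GrantBucketOwner {
  public static void main(String[] args) {
    Storage storage = StorageOptions.getDefaultInstance().getService();
    // Same effect as the gsutil command: give project A's App Engine default
    // service account OWNER on project B's bucket (object-level ACLs).
    Acl acl = Acl.of(new Acl.User("my-project-a@appspot.gserviceaccount.com"), Acl.Role.OWNER);
    storage.createAcl("my-project-b", acl);
  }
}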

Google App Engine: this application does not exist

When I deploy my application to GAE, I get this error:
This application does not exist (app_id=u'qsse-ss').
The scenario is that this application was already deployed to GAE by another user. Now I have made some changes and I want to update it in GAE, so when I right-click on my app in Eclipse and go to "deploy to appengine", it gives me this error.
Am I doing something wrong? What's the correct way of doing it?
Thanks
That happened to me as well when I provided a username that wasn't an owner/developer, and even specifying a different account with -e or --email didn't work. What fixed it was this:
appcfg.py update . --no_cookies
(same as: https://stackoverflow.com/a/10004722)
You have to log in as a developer or owner to deploy the application. This is the bottom-left icon in Eclipse.
Also check your application name. If the application ID is correct and the user you log in as has the owner/developer role for this application, you will be able to deploy.
You have to make sure that the application name in your local GAE client matches the one in your Google account. Check the app.yaml file to make sure that the application name matches there as well.
That also happened to me when I changed the email to create a new app ID for goagent.
My solution was to delete the cookie file in the server directory:
rm server/.appcfg_cookies
After that, everything is OK!
Deleting .appcfg_oauth2_tokens under c:/usrs/etc worked for me after a long-overdue upgrade to python27.
At least as of 2017-06-22, it's not enough to create a Cloud project. You have to go to the App Engine section of the Cloud Console and choose a language. When it's done saying "Preparing your App Engine services...", then you can deploy.
The problem was a different name, which I changed via right-click project -> App Engine settings -> Application ID.
This name should be the same as in your Google account.
Removing the oauth2 token file ~/.appcfg_oauth2_tokens or specifying a different token store file with the flag "--oauth2_credential_file" might be a permanent solution. The macOS GoogleAppEngineLauncher.app does not let you change this flag/path when you push the deploy button.
appcfg.py --oauth2_credential_file=~/.appcfg_oauth2_tokens_myappid
I had the same error message:
This application does not exist (project_id=u'xxxx-123456'). To create an
App Engine application in this project, run "gcloud app create" in
your console.
I solved it by executing the command below:
gcloud app create
It will create an app inside your project and assign the selected region.

302 status when copying data to another app in AppEngine

I'm trying to use the "Copy to another app" feature of AppEngine and keep getting an error:
Fetch to http://datastore-admin.moo.appspot.com/_ah/remote_api failed with status 302
This is for a Java app but I followed the instructions on setting up a default Python runtime.
I'm 95% sure it's an authentication issue and the call to remote_api is redirecting to the Google login page. Both apps use Google Apps as the authentication mechanism. I've also tried copying to and from a third app we have which uses Google Accounts for authentication.
Notes:
The user account I log in with is an Owner on all three apps. It's a Google Apps account (if that wasn't obvious).
I have a Gmail account that is an Owner on all three apps as well. When I log in to the admin console with it, I don't see the datastore admin console at all when I click it.
I'm able to use the remote_api just fine from the command line after I enter my details.
I tried with both the Python remote_api built-in and the Java one.
I've found similar questions/blog posts about this, one of which required logging in from a browser, then manually submitting the ACSID cookie you get after that's done. Can't do that here, obviously.
OK, I think I got this working.
I'll refer to the two appIDs as "source" and "dest".
To enable datastore admin (as you know) you need to upload a Python project with the app.yaml and appengine_config.py files as described in the docs.
Either I misread the docs or there is an error. The "appID" in the .yaml should be the app ID you are uploading to in order to enable DS admin.
The other appID in the appengine_config file, specifically this line:
remoteapi_CUSTOM_ENVIRONMENT_AUTHENTICATION = (
    'HTTP_X_APPENGINE_INBOUND_APPID', ['appID'])
should be the appID of the "source", i.e. the app ID where the data is coming from in the DS copy operation.
I think this line is what allows the source appID to be authenticated as having permissions to write to the "dest" app ID.
So, I changed that .py and uploaded it again to my "dest" app ID. To be sure, I made this dummy Python app the default version and left it that way.
Then on the source app ID I tried the DS copy again, and all the copy jobs were kicked off OK - so it seems to have fixed it.
