gcloud.app.deploy - The caller does not have permission [duplicate] - google-app-engine

I have followed the steps in the Google Cloud Platform guide, but I'm still getting a permission error which says the caller does not have permission. What am I doing wrong?
This is the output of the command gcloud config list:
[compute]
region = us-central1
zone = us-central1-f
[core]
account = <gmail-account>
disable_usage_reporting = True
project = <project-id>
Your active configuration is: [default]
This is the error it raised:
ERROR: (gcloud.app.deploy) Error Response: [13] Flex operation projects/<project-id>/regions/europe-west1/operations/error [INTERNAL]: An internal error occurred while processing task /appengine-flex-v1/insert_flex_deployment/flex_create_resources>2020-07-28T15:45:31.962Z49210.jv.11: Deployment Manager operation <project-id>/operation-... errors: [code: "RESOURCE_ERROR"
location: "/deployments/aef-default-..../resources/aef-default-...."
message: "{\"ResourceType\":\"compute.beta.regionAutoscaler\",
\"ResourceErrorCode\":\"403\",
\"ResourceErrorMessage\":{\"code\":403,
\"message\":\"The caller does not have permission\",
\"status\":\"PERMISSION_DENIED\",
\"statusMessage\":\"Forbidden\",
\"requestPath\":\"https://compute.googleapis.com/compute/beta/projects/<project-id>/regions/europe-west1/autoscalers\",
\"httpMethod\":\"POST\"}}"

Please check your project quotas. This error is usually raised when your project doesn't have enough IPs or VMs (App Engine Flex uses Compute Engine VMs) and the scaling strategy in your app.yaml exceeds those quotas.
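For example, current usage of the regional quotas (instances and in-use IP addresses) can be checked with the standard gcloud command, using the region shown in the error message:
gcloud compute regions describe europe-west1 --project=<project-id>
The quotas section of the output lists limit/usage pairs such as INSTANCES and IN_USE_ADDRESSES.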
Please try to add one of the following blocks in your app.yaml file.
For automatic scaling:
automatic_scaling:
  min_num_instances: 1
  max_num_instances: 2
For manual scaling:
manual_scaling:
  instances: 2
To avoid exhausting these quotas, please delete or stop the App Engine service versions that you don't need.
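For example, unused versions can be listed and then stopped or deleted with the gcloud CLI (the version ID and service name below are placeholders):
gcloud app versions list
gcloud app versions stop <version-id> --service=default
gcloud app versions delete <version-id> --service=default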
For more information about scaling strategies, please check this reference guide.
For example:
Every VM takes 1 IP and your project has a quota of 4.
If your App Engine service has 3 VMs running (3 IPs used), then on the next deploy you only have 1 IP available; if min_num_instances or instances in your app.yaml file is greater than 1, the deploy will fail.
This is because it is not possible to allocate more than 4 IPs in your project, and App Engine first turns on the new instances and only afterwards shuts down the old ones, to avoid a service interruption.
If you need to increase these resource quotas, it is necessary to contact a GCP sales representative.

Related

Sudden datastore exceptions: DatastoreFailureException: Missing or invalid authentication

I have a Google App Engine app (Java) that has been running for years; its datastore was recently auto-migrated to "Cloud Firestore in Datastore mode" (completed on 6/6/2022).
It has now started throwing these exceptions in new deployments without any relevant code change to trigger them:
com.google.appengine.api.datastore.DatastoreFailureException: Missing or invalid authentication.
at com.google.appengine.api.datastore.DatastoreApiHelper.translateError(DatastoreApiHelper.java:69)
at com.google.appengine.api.datastore.DatastoreApiHelper$1.convertException(DatastoreApiHelper.java:127)
at com.google.appengine.api.utils.FutureWrapper.get(FutureWrapper.java:97)
at com.google.appengine.api.utils.FutureWrapper.get(FutureWrapper.java:89)
at com.google.appengine.api.datastore.FutureHelper.getInternal(FutureHelper.java:68)
at com.google.appengine.api.datastore.FutureHelper.quietGet(FutureHelper.java:32)
at com.google.appengine.api.datastore.BaseQueryResultsSource.getIndexList(BaseQueryResultsSource.java:154)
at com.google.appengine.api.datastore.BaseQueryResultsSource.loadMoreEntities(BaseQueryResultsSource.java:187)
at com.google.appengine.api.datastore.BaseQueryResultsSource.loadMoreEntities(BaseQueryResultsSource.java:166)
at com.google.appengine.api.datastore.QueryResultIteratorImpl.ensureLoaded(QueryResultIteratorImpl.java:146)
at com.google.appengine.api.datastore.QueryResultIteratorImpl.hasNext(QueryResultIteratorImpl.java:64)
at com.googlecode.objectify.impl.KeysOnlyIterator.hasNext(KeysOnlyIterator.java:29)
at com.googlecode.objectify.impl.ChunkIterator.next(ChunkIterator.java:48)
at com.googlecode.objectify.impl.ChunkIterator.next(ChunkIterator.java:20)
at com.google.common.collect.MultitransformedIterator.hasNext(MultitransformedIterator.java:52)
at com.google.common.collect.MultitransformedIterator.hasNext(MultitransformedIterator.java:50)
at com.google.common.collect.Iterators$PeekingImpl.hasNext(Iterators.java:1105)
at com.googlecode.objectify.impl.ChunkingIterator.hasNext(ChunkingIterator.java:51)
at com.google.common.collect.Iterators.addAll(Iterators.java:366)
at com.google.common.collect.Lists.newArrayList(Lists.java:163)
at com.googlecode.objectify.util.MakeListResult.translate(MakeListResult.java:22)
at com.googlecode.objectify.util.MakeListResult.translate(MakeListResult.java:12)
at com.googlecode.objectify.util.ResultTranslator.nowUncached(ResultTranslator.java:21)
at com.googlecode.objectify.util.ResultCache.now(ResultCache.java:30)
at com.googlecode.objectify.util.ResultProxy.invoke(ResultProxy.java:34)
at com.sun.proxy.$Proxy68.isEmpty(Unknown Source)
My code. After loading a list of entities, the exception is thrown on the isEmpty() call on the returned list:
List<SystemSetting> applicationSettings = ofy().load().type(SystemSetting.class).filter("name", name).list();
if (applicationSettings.isEmpty()) { // Exception thrown here
It's also worth noting that on previous versions, datastore reads and writes are still happening without any issues; it is only new deployments that exhibit this problem. However, redeploying previous versions from previous commits, using the same CI service to deploy them, also fails.
Is this caused by the datastore migration?
Slight update: these errors are only happening on newly deployed versions... In fact, I deployed a new version (from CI, bitbucket pipeline) that worked, but when I redeployed it with no changes it no longer works. Same app engine version id, code and users etc.
Things we've tried without success:
Re-deploying last known version
Clearing memcache
Deploying from local dev
Stopping instances
Trying an alternative service account key (see the command sketch after this list)
Expanding service account permissions
Clearing BitBucket pipeline caches (the CI builder)
Have recreated the issue with datastore api directly (no objectify)
Have redeployed with different user account - no success
Have redeployed with new service account - no success
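A typical way to deploy with an alternative service account key from a CI step looks roughly like this (placeholder key file and project ID; not the exact Bitbucket pipeline configuration):
gcloud auth activate-service-account --key-file=deploy-key.json
gcloud config set project <project-id>
gcloud app deploy app.yaml --quiet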
Update 1:
If I run the following code in a local Java application (pointing at the live Datastore using the same service account), it works locally, but it fails if I deploy it to App Engine with my project ID.
import com.google.cloud.datastore.Datastore;
import com.google.cloud.datastore.DatastoreOptions;
import com.google.cloud.datastore.Entity;
import com.google.cloud.datastore.Key;

// Uses Application Default Credentials (on App Engine, the instance's service account)
DatastoreOptions datastoreOptions = DatastoreOptions.getDefaultInstance();
Datastore datastore = datastoreOptions.getService();
// A known entity in my live datastore
Key key = datastore.newKeyFactory()
    .setKind("MyEntityType")
    .newKey(123123123L);
Entity retrieved = datastore.get(key);
When it fails, I can get slightly more information:
java.io.IOException: Unexpected Error code 500 trying to get security access token from Compute Engine metadata for the default service account: Could not fetch URI /computeMetadata/v1/instance/service-accounts/default/token
full stack:
com.google.cloud.datastore.DatastoreException: I/O error
at com.google.cloud.datastore.spi.v1.HttpDatastoreRpc.translate(HttpDatastoreRpc.java:138)
at com.google.cloud.datastore.spi.v1.HttpDatastoreRpc.translate(HttpDatastoreRpc.java:123)
at com.google.cloud.datastore.spi.v1.HttpDatastoreRpc.lookup(HttpDatastoreRpc.java:173)
at com.google.cloud.datastore.DatastoreImpl$3.call(DatastoreImpl.java:434)
at com.google.cloud.datastore.DatastoreImpl$3.call(DatastoreImpl.java:431)
at com.google.api.gax.retrying.DirectRetryingExecutor.submit(DirectRetryingExecutor.java:103)
at com.google.cloud.RetryHelper.run(RetryHelper.java:76)
at com.google.cloud.RetryHelper.runWithRetries(RetryHelper.java:50)
at com.google.cloud.datastore.DatastoreImpl.lookup(DatastoreImpl.java:430)
...
Caused by: com.google.datastore.v1.client.DatastoreException: I/O error, code=UNAVAILABLE
at com.google.datastore.v1.client.RemoteRpc.makeException(RemoteRpc.java:171)
at com.google.datastore.v1.client.RemoteRpc.call(RemoteRpc.java:117)
at com.google.datastore.v1.client.Datastore.lookup(Datastore.java:93)
at com.google.cloud.datastore.spi.v1.HttpDatastoreRpc.lookup(HttpDatastoreRpc.java:171)
... 74 more
Caused by: java.io.IOException: Unexpected Error code 500 trying to get security access token from Compute Engine metadata for the default service account: Could not fetch URI /computeMetadata/v1/instance/service-accounts/default/token
at com.google.auth.oauth2.ComputeEngineCredentials.refreshAccessToken(ComputeEngineCredentials.java:206)
at com.google.auth.oauth2.OAuth2Credentials$1.call(OAuth2Credentials.java:257)
at com.google.auth.oauth2.OAuth2Credentials$1.call(OAuth2Credentials.java:254)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:31)
at com.google.auth.oauth2.OAuth2Credentials$AsyncRefreshResult.executeIfNew(OAuth2Credentials.java:580)
at com.google.auth.oauth2.OAuth2Credentials.asyncFetch(OAuth2Credentials.java:220)
at com.google.auth.oauth2.OAuth2Credentials.getRequestMetadata(OAuth2Credentials.java:170)
at com.google.auth.http.HttpCredentialsAdapter.initialize(HttpCredentialsAdapter.java:96)
at com.google.cloud.http.HttpTransportOptions$1.initialize(HttpTransportOptions.java:159)
at com.google.cloud.http.CensusHttpModule$CensusHttpRequestInitializer.initialize(CensusHttpModule.java:109)
at com.google.cloud.datastore.spi.v1.HttpDatastoreRpc$1.initialize(HttpDatastoreRpc.java:91)
at com.google.datastore.v1.client.RemoteRpc.call(RemoteRpc.java:95)
... 76 more
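As a diagnostic (not part of the original stack traces), the metadata URI named in that error can be probed directly; on Flex you can SSH into the VM, while on Standard you would need a temporary debug handler making the same HTTP call:
curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"
A healthy instance returns a JSON access token; here the instances were getting a 500 from that URI instead.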
Update 2:
I've since paid for Google support and they have now confirmed it "is linked to a known issue related to the GAE standard Service Agent and access token which is affecting some users"

We are deploying a Java application in Google App Engine and we are getting a capacity error

We are deploying a Java application that uses the Vision API in Google App Engine and we are getting a capacity error. We were asked to try different zones, but we still get the same error:
GCLOUD: ERROR: (gcloud.app.deploy) Error Response: [8] Flex operation projects/text-convert-304513/regions/us-east1/operations/6d4717fc-a5e9-419c-85cc-72394ed9e68a error [RESOURCE_EXHAUSTED]: An internal error occurred while processing task /app-engine-flex/insert_flex_deployment/flex_create_resources>2021-02-11T13:54:32.406Z50061.ue.1: The requested amount of instances has exceeded GCE's default quota. Please see https://cloud.google.com/compute/quotas for more information on GCE resources.
As per the GCP documentation on the 'max_num_instances' parameter, the maximum number of instances in your project is 8 by default, and it looks like you want to use more than that limit. I would suggest requesting an increase of the quota limit for your project; that will solve the issue.
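Alternatively, until the quota is raised, the scaling block in app.yaml can be capped so the deployment stays within the default limit (the values below are only illustrative):
automatic_scaling:
  min_num_instances: 1
  max_num_instances: 8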

App Engine for Cloud Monitoring metrics throwing 500 error when writing to BigQuery

I want to export metrics from Cloud Monitoring to BigQuery, and Google has provided a solution for how to do this. I am following this article.
I have downloaded the code from GitHub and I am able to successfully deploy and run the application (Python 2.7).
I have set the aggregate alignment period to 86400s (I want to aggregate metrics per day, starting from 1st July).
One of the App Engine services, the write-metrics app, which writes the metrics to BigQuery after receiving the API response as a Pub/Sub message, is always throwing these errors:
> Exceeded soft memory limit of 256 MB with 270 MB after servicing 5 requests total. Consider setting a larger instance class in app.yaml.
> While handling this request, the process that handled this request was found to be using too much memory and was terminated. This is likely to cause a new process to be used for the next request to your application. If you see this message frequently, you may have a memory leak in your application or may be using an instance with insufficient memory. Consider setting a larger instance class in app.yaml.
The above is a 500 error and occurs very frequently, and I find that duplicate records still get inserted into the BigQuery table.
I also see this one below:
DeadlineExceededError: The overall deadline for responding to the HTTP request was exceeded.
The App Engine logs frequently show POST requests with codes 500 and 200.
In App Engine (standard) I have set scaling to automatic in app.yaml as below:
automatic_scaling:
  target_cpu_utilization: 0.65
  min_instances: 5
  max_instances: 25
  min_pending_latency: 30ms
  max_pending_latency: automatic
  max_concurrent_requests: 50
but this seems to have no effect. I am very new to App Engine, Google Cloud and its Stackdriver metrics.
This change makes it work:
instance_class: F4_1G
This needs to be an independent, top-level tag; previously I had made the mistake of putting it under automatic_scaling:, so it gave an "illegal modifier" error.
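For clarity, the corrected layout looks roughly like this (only the relevant keys are shown; the scaling values are the ones from the question):
instance_class: F4_1G
automatic_scaling:
  target_cpu_utilization: 0.65
  min_instances: 5
  max_instances: 25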

Google Cloud Memorystore (Redis) ETIMEDOUT in App Engine

I'm writing a Node.js app and trying to connect to GCP's Redis Memorystore, but I'm getting the ETIMEDOUT 10.47.29.131:6379 error message. The 10.47.29.131 corresponds to the REDISHOST. I'm trying to reach the server by its internal private IP.
While the app works locally with a local Redis installed, it does not when deployed to GCP App Engine.
My GCP setup:
Redis instance running at location europe-west3-a
Created a connector under "Serverless VPC access" which is in europe-west3
Redis and the VPC-connector are on the same network "default".
App Engine running in europe-west
Redis instance:
VPC-connector:
The app.yml:
runtime: nodejs
env: flex
automatic_scaling:
# or this but without env: flex (standard)
vpc_access_connector:
  name: "projects/project-ID/locations/europe-west/connectors/connector-name"
beta_settings:
  cloud_sql_instances: project-ID:europe-west3:name
env_variables:
  REDISHOST: '10.47.29.131'
  REDISPORT: '6379'
# removed this when trying without env: flex (standard)
network:
  name: default
  session_affinity: true
I followed these instructions to set everything up: https://cloud.google.com/memorystore/docs/redis/connect-redis-instance-standard
Digging deeper, I found https://cloud.google.com/vpc/docs/configure-serverless-vpc-access, where they mention something about permissions and serverless-vpc-access-images. While trying to follow the instructions at https://cloud.google.com/compute/docs/images/restricting-image-access#trusted_images, I couldn't find "Define trusted image projects." anywhere.
What am I missing here?
Well, it turns out the problem was the region I had selected for the Redis instance.
From Documentation:
Important: In order to connect to a Memorystore for Redis instance, the connecting client must be located within the same region as the instance.
A region is a specific geographical location where you can run your resources. Each region is subdivided into several zones.
For example, the us-central1 region in the central United States has zones us-central1-a, us-central1-b, us-central1-c, and us-central1-f.
Although the documentation clearly says that App Engine and Memorystore have to be in the same region, my assumption about what regions actually are was false.
When I created the App Engine app, I created it in europe-west, which is the same as europe-west1. On the other hand, when I created the Redis instance, I used europe-west3, assuming that west3 is the same region as west, which it is not.
Since the App Engine region cannot be changed, I created another Redis instance in europe-west1 and now everything works.
So, the Redis region must be exactly the same as the App Engine region: region1 is the same as region, but region2 or region3 are not.
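For example, the App Engine region can be confirmed and a matching Redis instance created with the standard gcloud commands (the instance name and size below are placeholders):
gcloud app describe
The locationId field in the output shows the App Engine region (europe-west in this case, i.e. europe-west1).
gcloud redis instances create my-redis --size=1 --region=europe-west1 --network=default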

google app engine: while using gcloud app deploy, you do not have permission to access project

I'm using gcloud app deploy but this is what comes back:
Server responded with code [400]:
Bad Request Unexpected HTTP status 400.
Failed Project Preparation (app_id='b~eloquent-env-168317'). Out of retries. Last error: Temporary error occurred while verifying project: TEMPORARY_ERROR: Operation does not satisfy the following requirements: billing-enabled {Billing must be enabled for activation of service '' in project 'eloquent-env-168317' to proceed., https://console.developers.google.com/project/eloquent-env-168317/settings}
com.google.api.management.server.common.exceptions.ServiceManagementNonRetriableStorageException: Operation does not satisfy the following requirements: billing-enabled {Billing must be enabled for activation of service '' in project 'eloquent-env-168317' to proceed., https://console.developers.google.com/project/eloquent-env-168317/settings}
Beginning deployment of service [default]...
Building and pushing image for service [default]
Some files were skipped. Pass --verbosity=info to see which ones.
You may also view the gcloud log file, found at
[/Users/majnun/.config/gcloud/logs/2017.05.22/01.24.54.321942.log].
ERROR: (gcloud.app.deploy) You do not have permission to access project [eloquent-env-168317] (or it may not exist): The caller does not have permission
What's wrong and how can I fix it? Thanks a lot.
On 2017-May-21 your project was suspended. You should have received an email explaining the reason for the suspension, and the steps you need to follow to un-suspend your project. You can find out more information in the Project Suspension Guidelines.
You can see in the error you provided "Billing must be enabled for activation of service..". Therefore, submitting the Account Verification form should be all you need to do.
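If billing just needs to be (re-)attached to the project, that can also be done from the gcloud CLI once a billing account is available (the billing account ID below is a placeholder; the billing commands were under the beta component at the time):
gcloud beta billing projects link eloquent-env-168317 --billing-account=XXXXXX-XXXXXX-XXXXXX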
