Google App Engine : Quota Error - google-app-engine

I got the following error when running the command "gcloud preview app deploy app.yaml --promote":
ERROR: (gcloud.preview.app.deploy) Error Response: [13] CPU Quota Exceeded: in use: 7, requested: 2, limit: 8 Version: 20151201t142343.388948918338383472
Is there any way to resolve this?

Since I just ran into this too, and was a bit boggled:
it seems that every deploy you do for e.g. a Node.js project creates a new Compute Engine instance. If you go into your project's Cloud Console dashboard, you can see the number of Compute Engine instances; remove the old ones, and you can deploy again.
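A sketch of that cleanup with the gcloud CLI (the version IDs below are examples; substitute your own — deleting a version also frees the instances backing it):

```
# List all deployed versions, including stopped ones that still hold instances.
gcloud app versions list

# Delete the versions you no longer need.
gcloud app versions delete 20151201t142343 20151130t101502
```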

I just went through unexpected downtime with this, running the App Engine flexible environment with a Meteor app with minimal resource usage (bandwidth, disk, memory, QPS).
It turns out I have to navigate to the Quotas section at https://console.cloud.google.com/iam-admin/quotas?project=your-project-id
and submit a request for the resources that had been auto-set to zero. Thanks for the heads-up, Google. I trust the SLA to leave me at the mercy of their whims.
For others, hopefully this gives a better indication of what to do when encountering:
DEBUG: HttpError accessing <https://appengine.googleapis.com/v1/apps/<project-id>/services/default/versions?alt=json>: response: <{'status': '400', 'content-length': '217', 'x-xss-protection': '1; mode=block', 'x-content-type-options': 'nosniff', 'transfer-encoding': 'chunked', 'vary': 'Origin, X-Origin, Referer', 'server': 'ESF', '-content-encoding': 'gzip', 'cache-control': 'private', 'date': 'Wed, 25 Oct 2017 04:42:45 GMT', 'x-frame-options': 'SAMEORIGIN', 'alt-svc': 'quic=":443"; ma=2592000; v="39,38,37,35"', 'content-type': 'application/json; charset=UTF-8'}>, content <{
"error": {
"code": 400,
"message": "The following quotas were exceeded: DISKS_TOTAL_GB (quota: 0, used: 10 + needed: 0), INSTANCES (quota: 0, used: 1 + needed: 0).",
"status": "INVALID_ARGUMENT"
}
}
>
DEBUG: (gcloud.app.deploy) Error Response: [400] The following quotas were exceeded: DISKS_TOTAL_GB (quota: 0, used: 10 + needed: 0), INSTANCES (quota: 0, used: 1 + needed: 0).
https://appengine.googleapis.com/v1/apps/<project-id>/services/default/versions?alt=json
Traceback (most recent call last):
File "~/.pyenv/versions/2.7.11/envs/env-2.7.11/lib/python2.7/site-packages/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 789, in Execute
resources = args.calliope_command.Run(cli=self, args=args)
File "~/.pyenv/versions/2.7.11/envs/env-2.7.11/lib/python2.7/site-packages/google-cloud-sdk/lib/googlecloudsdk/calliope/backend.py", line 754, in Run
resources = command_instance.Run(args)
File "~/.pyenv/versions/2.7.11/envs/env-2.7.11/lib/python2.7/site-packages/google-cloud-sdk/lib/surface/app/deploy.py", line 62, in Run
use_service_management=use_service_management)
File "~/.pyenv/versions/2.7.11/envs/env-2.7.11/lib/python2.7/site-packages/google-cloud-sdk/lib/googlecloudsdk/command_lib/app/deploy_util.py", line 624, in RunDeploy
flex_image_build_option=flex_image_build_option)
File "~/.pyenv/versions/2.7.11/envs/env-2.7.11/lib/python2.7/site-packages/google-cloud-sdk/lib/googlecloudsdk/command_lib/app/deploy_util.py", line 418, in Deploy
endpoints_info)
File "~/.pyenv/versions/2.7.11/envs/env-2.7.11/lib/python2.7/site-packages/google-cloud-sdk/lib/googlecloudsdk/api_lib/app/appengine_api_client.py", line 162, in DeployService
self.client.apps_services_versions.Create, create_request)
File "~/.pyenv/versions/2.7.11/envs/env-2.7.11/lib/python2.7/site-packages/google-cloud-sdk/lib/googlecloudsdk/api_lib/app/api/requests.py", line 71, in MakeRequest
raise exc
HttpException: Error Response: [400] The following quotas were exceeded: DISKS_TOTAL_GB (quota: 0, used: 10 + needed: 0), INSTANCES (quota: 0, used: 1 + needed: 0).
https://appengine.googleapis.com/v1/apps/<project-id>/services/default/versions?alt=json
ERROR: (gcloud.app.deploy) Error Response: [400] The following quotas were exceeded: DISKS_TOTAL_GB (quota: 0, used: 10 + needed: 0), INSTANCES (quota: 0, used: 1 + needed: 0).
https://appengine.googleapis.com/v1/apps/<project-id>/services/default/versions?alt=json
And here is the amazingly friendly error message when visiting the site over HTTP:
Error: Server Error
The server encountered a temporary error and could not complete your
request.
Please try again in 30 seconds.
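Before filing that quota increase request, you can also inspect the current quota usage and limits from the CLI (a sketch; substitute your own region and project ID):

```
# Regional quotas: CPUS, DISKS_TOTAL_GB, INSTANCES, ...
gcloud compute regions describe us-central1

# Project-wide quotas.
gcloud compute project-info describe --project your-project-id
```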

Related

Issue with Websockets - Flexible app-engine

For the past few days, I have been getting the error below when I try to run the example Python WebSocket flexible App Engine application (https://github.com/GoogleCloudPlatform/python-docs-samples/tree/master/appengine/flexible/websockets). It used to work before, but now when I run it, I see the error below.
I don't see this issue when I run locally using "gunicorn -b 127.0.0.1:8080 -k flask_sockets.worker main:app".
I only see this issue when I deploy to Google Cloud.
File "/env/lib/python3.6/site-packages/gevent/pywsgi.py", line 999, in handle_one_response
self.run_application()
File "/env/lib/python3.6/site-packages/geventwebsocket/handler.py", line 75, in run_application
self.run_websocket()
File "/env/lib/python3.6/site-packages/geventwebsocket/handler.py", line 52, in run_websocket
list(self.application(self.environ, lambda s, h, e=None: []))
File "/env/lib/python3.6/site-packages/flask/app.py", line 2464, in __call__
return self.wsgi_app(environ, start_response)
File "/env/lib/python3.6/site-packages/flask_sockets.py", line 40, in __call__
handler, values = adapter.match()
File "/env/lib/python3.6/site-packages/werkzeug/routing.py", line 2026, in match
raise WebsocketMismatch()
werkzeug.routing.WebsocketMismatch: 400 Bad Request: The browser (or proxy) sent a request that this server could not understand.
{
textPayload: "2021-05-14T22:33:23Z {'REMOTE_ADDR': '172.17.0.4', 'REMOTE_PORT': '23300', 'HTTP_HOST': 'malware-sandboxing.uc.r.appspot.com', (hidden keys: 39)} failed with WebsocketMismatch
"
insertId: "nn71n3jmo5u1ymw9s"
resource: {2}
timestamp: "2021-05-14T22:33:23Z"
labels: {4}
logName: "projects/malware-sandboxing/logs/appengine.googleapis.com%2Fstderr"
receiveTimestamp: "2021-05-14T22:33:26.911503907Z"
}
Please help
Werkzeug released version 2.0.0, which also broke things for me.
https://github.com/pallets/werkzeug/releases/tag/2.0.0
Adding Werkzeug==1.0.1 to the top of my requirements file fixed the issue.
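Concretely, the pin in the requirements file looks like this (a sketch assuming a typical flask_sockets setup; the packages other than Werkzeug are illustrative and should match your own dependencies):

```
Werkzeug==1.0.1
Flask
Flask-Sockets
gunicorn
```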

PubSub returns 503 - Service Unavailable all the time

I created a small program in Python for reading messages from a Pub/Sub subscription. I am using Python 3.7 and google-cloud-pubsub 1.1.0.
My code is very simple:
from google.cloud import pubsub_v1
from google.auth import jwt
import json

service_account_info = json.load(open("service-account-info.json"))
audience_sub = "https://pubsub.googleapis.com/google.pubsub.v1.Subscriber"
credentials_sub = jwt.Credentials.from_service_account_info(
    service_account_info, audience=audience_sub
)
subscriber_ring = pubsub_v1.SubscriberClient(credentials=credentials_sub)

def callback1(message):
    print("In callback!!")
    print(message.data)
    message.ack()

sub_path = "projects/my-project/subscriptions/my-sub"
future = subscriber_ring.subscribe(sub_path, callback=callback1)
future.result()
When the code reaches "future.result()", it hangs there forever and times out 10 minutes later with the error
pubsub 503 failed to connect to all addresses
I already verified that:
Pub/Sub is up and running
My service account has all the needed permissions. I even tried with my personal Google Cloud account (I am the project owner) with the same results.
There are unacked messages in the Topic
My network connection is OK
but I cannot make it work. Any ideas?
EDIT: I got some more info from the exception:
om_grpc_error(exc), exc)
File "<string>", line 3, in raise_from
google.api_core.exceptions.ServiceUnavailable: 503 failed to connect to all addresses
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/anaconda3/envs/loadtest/lib/python3.7/site-packages/google/cloud/pubsub_v1/publisher/_batch/thread.py", line 219, in _commit
response = self._client.api.publish(self._topic, self._messages)
File "/usr/local/anaconda3/envs/loadtest/lib/python3.7/site-packages/google/cloud/pubsub_v1/gapic/publisher_client.py", line 498, in publish
request, retry=retry, timeout=timeout, metadata=metadata
File "/usr/local/anaconda3/envs/loadtest/lib/python3.7/site-packages/google/api_core/gapic_v1/method.py", line 143, in __call__
return wrapped_func(*args, **kwargs)
File "/usr/local/anaconda3/envs/loadtest/lib/python3.7/site-packages/google/api_core/retry.py", line 286, in retry_wrapped_func
on_error=on_error,
File "/usr/local/anaconda3/envs/loadtest/lib/python3.7/site-packages/google/api_core/retry.py", line 206, in retry_target
last_exc,
File "<string>", line 3, in raise_from
google.api_core.exceptions.RetryError: Deadline of 60.0s exceeded while calling functools.partial(<function _wrap_unary_errors.<locals>.error_remapped_callable at 0x7fa030891a70>,
, metadata=[('x-goog-request-params', 'topic=projects/my-project/subscriptions/my-sub'), ('x-goog-api-client', 'gl-python/3.7.6 grpc/1.26.0 gax/1.16.0 gapic/1.2.0')]), last exception: 503 failed to connect to all addresses
It is likely that there is a firewall rule in place or some network configuration that is disallowing/dropping connections to *.googleapis.com (or specifically pubsub.googleapis.com). You can see an example of this with another Google product.
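One quick way to test that theory from the affected machine (a minimal probe of my own, not part of the Pub/Sub client library): check whether a plain TCP connection to the API endpoint succeeds at all. The gRPC channel the client uses needs exactly this kind of connectivity on port 443.

```python
import socket

def can_reach(host, port=443, timeout=5.0):
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# If this prints False while ordinary browsing works, look for a proxy or
# egress firewall rule: gRPC traffic is often blocked where HTTPS is not.
print(can_reach("pubsub.googleapis.com"))
```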

Solr error in production, fine in development

I'm searching on a CKAN site, and I get this error from Solr on the test site which I don't get on the dev site:
[Mon Jun 10 13:30:10.723687 2019] [:error] [pid 126661:tid 140082572949248] Solr responded with an error (HTTP 400): [Reason: org.apache.solr.search.SyntaxError: Cannot parse '\xd7\x95\xd7\x95\xd7\xa2\xd7\x93\xd7\x95\xd7\xaa \xd7\x9c\xd7\x90\xd7\x95\xd7\x9e\xd7\x99\xd7\x95\xd7\xaa \xd7\x9e\xd7\x95\xd7\x9c\xd7\x9e\xd7\x95"\xd7\xa4 owner_org:"762ccf21-0422-4022-8c98-e47df9fb0a98"': Lexical error at line 1, column 72. Encountered: <EOF> after : ""]
[Mon Jun 10 13:30:10.724038 2019] [:error] [pid 126661:tid 140082572949248] Group search error: ('SOLR returned an error running query: {\\'sort\\': u\\'score desc, metadata_modified desc\\', \\'fq\\': [u\\'\\', u\\'+site_id:"default"\\', \\'+state:active\\'], \\'facet.mincount\\': 1, \\'rows\\': 21, \\'facet.field\\': [u\\'organization\\', u\\'groups\\', u\\'tags\\', u\\'res_format\\', u\\'license_id\\'], \\'facet.limit\\': \\'50\\', \\'facet\\': \\'true\\', \\'q\\': u\\'\\\\u05d5\\\\u05d5\\\\u05e2\\\\u05d3\\\\u05d5\\\\u05ea \\\\u05dc\\\\u05d0\\\\u05d5\\\\u05de\\\\u05d9\\\\u05d5\\\\u05ea \\\\u05de\\\\u05d5\\\\u05dc\\\\u05de\\\\u05d5"\\\\u05e4 owner_org:"762ccf21-0422-4022-8c98-e47df9fb0a98"\\', \\'start\\': 0, \\'wt\\': \\'json\\', \\'fl\\': \\'id validated_data_dict\\'} Error: SolrError(u\\'Solr responded with an error (HTTP 400): [Reason: org.apache.solr.search.SyntaxError: Cannot parse \\\\\\'\\\\u05d5\\\\u05d5\\\\u05e2\\\\u05d3\\\\u05d5\\\\u05ea \\\\u05dc\\\\u05d0\\\\u05d5\\\\u05de\\\\u05d9\\\\u05d5\\\\u05ea \\\\u05de\\\\u05d5\\\\u05dc\\\\u05de\\\\u05d5"\\\\u05e4 owner_org:"762ccf21-0422-4022-8c98-e47df9fb0a98"\\\\\\': Lexical error at line 1, column 72. Encountered: <EOF> after :
What does the error mean?
What can I do to reproduce it on dev?
Which configuration might I be missing on dev compared to test?
Thanks
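For what it's worth (my own reading of the log, not a confirmed fix): the query contains an abbreviation with an embedded " character, and Solr's "Lexical error ... Encountered: <EOF>" is what an unbalanced double quote in the q parameter produces. One way to guard against this is to backslash-escape Lucene/Solr query metacharacters in the free-text part of user input before sending it, as in this sketch:

```python
import re

# Characters with special meaning in the Lucene/Solr query syntax.
_SOLR_SPECIAL = re.compile(r'([+\-!(){}\[\]^"~*?:\\/&|])')

def escape_solr_query(text):
    """Backslash-escape Solr query metacharacters in raw user input."""
    return _SOLR_SPECIAL.sub(r'\\\1', text)

# An unbalanced double quote no longer reaches Solr unescaped:
print(escape_solr_query('abbrev"tion'))  # abbrev\"tion
```

Note that this should be applied only to the user-typed free text, not to fielded filters like owner_org:"..." that your own code constructs.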

App Engine dev_appserver.py with WebPack return 404 from Google Storage Local API

I'm developing an app using the Google App Engine standard environment with Python, and a frontend built with Webpack. The application stores photos in Google Cloud Storage and displays them. When I deploy the app to App Engine, everything works fine.
When I run the local dev_appserver.py together with Webpack, I get the exception below when trying to read from the Google Storage API while accessing the app via the Webpack DevServer port (3001 in my case). If I build the Webpack static output and make the exact same request against the dev_appserver.py port (8080), I get the photo back.
[appengine] ERROR 2017-08-26 20:19:18,786 photo_storage.py:31] Fail to read photo file
[appengine] Traceback (most recent call last):
[appengine] File "c:\myapp\server\api\photo_storage.py", line 20, in read_photo_from_storage
[appengine] file_stat = gcs.stat(filename)
[appengine] File "c:\myapp\server\lib\cloudstorage\cloudstorage_api.py", line 151, in stat
[appengine] body=content)
[appengine] File "c:\myapp\server\lib\cloudstorage\errors.py", line 132, in check_status
[appengine] raise NotFoundError(msg)
[appengine] NotFoundError: Expect status [200] from Google Storage. But got status 404.
[appengine] Path: '/app_default_bucket/pics/185804764220139124118/395ebb68_main.png'.
[appengine] Request headers: None.
[appengine] Response headers: {'date': 'Sat, 26 Aug 2017 20:19:18 GMT', 'connection': 'keep-alive', 'content-type': 'text/html; charset=utf-8', 'content-security-policy': "default-src 'self'", 'x-content-type-options': 'nosniff', 'x-powered-by': 'Express', 'content-length': '211'}.
[appengine] Body: ''.
[appengine] Extra info: None.
This is the code that reads and writes the files:
import logging
import os
import urllib

from google.appengine.api import app_identity
import cloudstorage as gcs

from models import Photo


def read_photo_from_storage(photo, label, response):
    bucket_name = os.environ.get('BUCKET_NAME',
                                 app_identity.get_default_gcs_bucket_name())
    filename = format_photo_file_name(bucket_name, photo.created_by_user_id,
                                      photo.crc32c, label)
    try:
        file_stat = gcs.stat(filename)
        gcs_file = gcs.open(filename)
        response.headers['Content-Type'] = file_stat.content_type
        response.headers['Cache-Control'] = 'private, max-age=31536000'  # cache for up to 1 year
        response.headers['ETag'] = file_stat.etag
        response.write(gcs_file.read())
        gcs_file.close()
    except gcs.NotFoundError:
        logging.exception("Fail to read photo file")
        response.status = 404
        response.write('photo file not found')


def write_photo_to_storage(user_id, checksum, label, file_type, image_content):
    bucket_name = os.environ.get('BUCKET_NAME',
                                 app_identity.get_default_gcs_bucket_name())
    filename = format_photo_file_name(bucket_name, user_id, checksum, label)
    write_retry_params = gcs.RetryParams(backoff_factor=1.1)
    gcs_file = gcs.open(filename,
                        'w',
                        content_type=file_type,
                        options={'x-goog-meta-crc32c': "{0:x}".format(checksum)},
                        retry_params=write_retry_params)
    gcs_file.write(image_content)
    gcs_file.close()


def format_photo_file_name(bucket_name, user_id, checksum, label):
    return urllib.quote("/{0}/pics/{1}/{2:x}_{3}.png".format(bucket_name, user_id, checksum, label))

Drive API Files Patch Method fails with "Precondition Failed" "conditionNotMet"

It seems that overnight the Google Drive API method files().patch( , ).execute() has stopped working and throws an exception. The problem is also observable through Google's reference page https://developers.google.com/drive/v2/reference/files/patch if you "try it".
The exception response is:
500 Internal Server Error
cache-control: private, max-age=0
content-encoding: gzip
content-length: 162
content-type: application/json; charset=UTF-8
date: Thu, 22 Aug 2013 12:32:06 GMT
expires: Thu, 22 Aug 2013 12:32:06 GMT
server: GSE
{
"error": {
"errors": [
{
"domain": "global",
"reason": "conditionNotMet",
"message": "Precondition Failed",
"locationType": "header",
"location": "If-Match"
}
],
"code": 500,
"message": "Precondition Failed"
}
}
This is really impacting our application.
We're experiencing this as well. A quick fix is to add this header: If-Match: * (ideally you should use the etag of the entity, but you might not have logic for conflict resolution right now).
Google Developers, please give us a heads-up if you're planning to deploy breaking changes.
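As a sketch of that quick fix (the helper below is my own illustration, not part of any Google client library): whichever HTTP stack you use to call files.patch, the point is to send an If-Match header, ideally carrying the entity's etag and falling back to *:

```python
def build_patch_headers(etag=None):
    """Headers for a Drive files.patch request.

    Sends the entity's etag when available; otherwise If-Match: * disables
    the precondition check entirely (the quick fix described above).
    """
    headers = {"Content-Type": "application/json"}
    headers["If-Match"] = etag if etag is not None else "*"
    return headers

# Without an etag, the wildcard bypasses the server-side precondition:
headers = build_patch_headers()
```

Using * trades away optimistic-concurrency protection, so switch back to real etags once you have conflict-resolution logic.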
It looks like sometime in the last 24 hours the Files.Patch issue has been reverted to how it worked prior to Aug 22.
We were also hitting this issue whenever we attempted to patch the last-modified timestamp of a file; see the log extract below:
20130826 13:30:45 - GoogleApiRequestException: retry number 0 for file patch of File/Folder Id 0B9NKEGPbg7KfdXc1cVRBaUxqaVk
20130826 13:31:05 - ***** GoogleApiRequestException: Inner exception: 'System.Net.WebException: The remote server returned an error: (500) Internal Server Error.
at System.Net.HttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
at Google.Apis.Requests.Request.InternalEndExecuteRequest(IAsyncResult asyncResult) in c:\code.google.com\google-api-dotnet-client\default_release\Tools\BuildRelease\bin\Debug\output\default\Src\GoogleApis\Apis\Requests\Request.cs:line 311', Exception: 'Google.Apis.Requests.RequestError
Precondition Failed [500]
Errors [
Message[Precondition Failed] Location[If-Match - header] Reason[conditionNotMet] Domain[global]
]
'
20130826 13:31:07 - ***** Patch file request failed after 0 tries for File/Folder 0B9NKEGPbg7KfdXc1cVRBaUxqaVk
Today's run of the same process succeeds whenever it patches a file's timestamp, just as it did prior to Aug 22.
As a result of this four- or five-day glitch, we now have hundreds (possibly thousands) of files with the wrong timestamps.
I know the API is in beta, but please, Google Developers, let us know in advance of any trial fixes, and at least post in this forum to acknowledge the issue, to save us time hunting for faults in our own programs.
Duplicated here: Getting 500: Precondition Failed when Patching a folder. Why?
I recall a comment from one of the dev videos saying "use Update instead of Patch as it has one less server round trip internally". I inferred from this that Patch checks etags but Update doesn't. I changed my code to use Update in place of Patch, and the problem hasn't recurred since.
Gotta love developing against a moving target ;-)
