To be specific, this question is about how to get the specified quota raised or lifted, not how to be more efficient within the existing quota limit.
When running a MapReduce job on GAE, I hit the quota limit listed below: 100 GB of "file bytes received" per day, which, as far as I can tell, is file bytes received from the Blobstore. Increasing my budget has no effect on this 100 GB/day limit. I'd like the limit lifted entirely and the ability to pay for what I use.
Output in logs:
The API call file.Open() required more quota than is available.
Traceback (most recent call last):
File "/base/python27_runtime/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1511, in __call__
rv = self.handle_exception(request, response, e)
File "/base/python27_runtime/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1505, in __call__
rv = self.router.dispatch(request, response)
File "/base/python27_runtime/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1253, in default_dispatcher
return route.handler_adapter(request, response)
File "/base/python27_runtime/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1077, in __call__
return handler.dispatch()
File "/base/python27_runtime/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 547, in dispatch
return self.handle_exception(e, self.app.debug)
File "/base/python27_runtime/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 545, in dispatch
return method(*args, **kwargs)
File "/base/data/home/apps/s~utest-appgraph/69.358421800203055451/mapreduce/base_handler.py", line 68, in post
self.handle()
File "/base/data/home/apps/s~utest-appgraph/69.358421800203055451/mapreduce/handlers.py", line 168, in handle
for entity in input_reader:
File "/base/data/home/apps/s~utest-appgraph/69.358421800203055451/mapreduce/mapreduce_pipeline.py", line 109, in __iter__
for binary_record in super(_ReducerReader, self).__iter__():
File "/base/data/home/apps/s~utest-appgraph/69.358421800203055451/mapreduce/input_readers.py", line 1615, in __iter__
record = self._reader.read()
File "/base/data/home/apps/s~utest-appgraph/69.358421800203055451/mapreduce/lib/files/records.py", line 335, in read
(chunk, record_type) = self.__try_read_record()
File "/base/data/home/apps/s~utest-appgraph/69.358421800203055451/mapreduce/lib/files/records.py", line 292, in __try_read_record
header = self.__reader.read(HEADER_LENGTH)
File "/base/data/home/apps/s~utest-appgraph/69.358421800203055451/mapreduce/lib/files/file.py", line 569, in read
with open(self._filename, 'r') as f:
File "/base/data/home/apps/s~utest-appgraph/69.358421800203055451/mapreduce/lib/files/file.py", line 436, in open
exclusive_lock=exclusive_lock)
File "/base/data/home/apps/s~utest-appgraph/69.358421800203055451/mapreduce/lib/files/file.py", line 269, in __init__
self._open()
File "/base/data/home/apps/s~utest-appgraph/69.358421800203055451/mapreduce/lib/files/file.py", line 393, in _open
self._make_rpc_call_with_retry('Open', request, response)
File "/base/data/home/apps/s~utest-appgraph/69.358421800203055451/mapreduce/lib/files/file.py", line 397, in _make_rpc_call_with_retry
_make_call(method, request, response)
File "/base/data/home/apps/s~utest-appgraph/69.358421800203055451/mapreduce/lib/files/file.py", line 243, in _make_call
rpc.check_success()
File "/base/python27_runtime/python27_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 558, in check_success
self.__rpc.CheckSuccess()
File "/base/python27_runtime/python27_lib/versions/1/google/appengine/api/apiproxy_rpc.py", line 133, in CheckSuccess
raise self.exception
OverQuotaError: The API call file.Open() required more quota than is available.
It seems you need to talk to Google directly: on the quotas page there is a link to a form for requesting a quota increase: http://support.google.com/code/bin/request.py?&contact_type=AppEngineCPURequest
I've hit this error too. We are using the "experimental backup" feature of App Engine, which in turn runs a MapReduce to back up all App Engine data to Google Cloud Storage. Currently, however, the backup fails with this error:
OverQuotaError: The API call file.Open() required more quota than is available.
And in the quota dashboard we see:
Other Quotas With Warnings
These quotas are only shown when they have warnings
File Bytes Sent 100% 107,374,182,400 of 107,374,182,400 Limited
So apparently there is a hidden "File Bytes Sent" quota, which we have hit. It is not documented anywhere, and we could never have known we would hit it. Now we're stuck.
Related
We are randomly getting the following error on the Google App Engine standard environment (Python 2.7):
AttributeError: 'Connection' object has no attribute 'commit'
An example stack trace:
Traceback (most recent call last):
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/fdc6d631da52d25b/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1535, in __call__
rv = self.handle_exception(request, response, e)
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/fdc6d631da52d25b/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1529, in __call__
rv = self.router.dispatch(request, response)
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/fdc6d631da52d25b/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1278, in default_dispatcher
return route.handler_adapter(request, response)
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/fdc6d631da52d25b/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1102, in __call__
return handler.dispatch()
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/fdc6d631da52d25b/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 572, in dispatch
return self.handle_exception(e, self.app.debug)
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/fdc6d631da52d25b/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 570, in dispatch
return method(*args, **kwargs)
File "/base/data/home/apps/s~boutirapp/prod20210407-0912.434271348867830388/api/sixdegree.py", line 172, in post
body = self.process(*args, **kwargs)
File "/base/data/home/apps/s~boutirapp/prod20210407-0912.434271348867830388/pages/paydollar_datafeed.py", line 89, in process
BTOrder.confirmOrder(Ref, user)
File "/base/data/home/apps/s~boutirapp/prod20210407-0912.434271348867830388/entities/btorder.py", line 2034, in confirmOrder
self = db.run_in_transaction_options(xg_on_retry_5, cls._confirmOrderTransaction, orderId, paypalTxId, paypalResp)
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/fdc6d631da52d25b/python27/python27_lib/versions/1/google/appengine/api/datastore.py", line 2641, in RunInTransactionOptions
function, *args, **kwargs)
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/fdc6d631da52d25b/python27/python27_lib/versions/1/google/appengine/api/datastore.py", line 2716, in _RunInTransactionInternal
ok, result = _DoOneTry(function, args, kwargs)
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/fdc6d631da52d25b/python27/python27_lib/versions/1/google/appengine/api/datastore.py", line 2770, in _DoOneTry
if _GetConnection().commit():
AttributeError: 'Connection' object has no attribute 'commit'
The error occurs randomly when we are committing a datastore transaction using the google.appengine.ext.db library. The same lines of code might work fine when we retry immediately after the error. It also doesn't look like an error in our code, since the missing attribute is on a library object.
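As a stopgap, an immediate retry usually succeeds, so we are considering wrapping the call in something like the sketch below (xg_on_retry_5 mirrors the option name in the trace and is assumed to have been built with db.create_transaction_options; the wrapper itself is illustrative):
import logging

from google.appengine.ext import db

# Transaction options matching the name used in the trace above
# (assumed to have been created roughly like this).
xg_on_retry_5 = db.create_transaction_options(xg=True, retries=5)


def run_in_transaction_with_retry(func, *args, **kwargs):
    """Retry when the datastore Connection unexpectedly lacks commit().

    This only papers over the symptom; the same call usually succeeds
    when repeated, so try a few times before giving up.
    """
    last_error = None
    for attempt in range(3):
        try:
            return db.run_in_transaction_options(
                xg_on_retry_5, func, *args, **kwargs)
        except AttributeError as e:
            if 'commit' not in str(e):
                raise  # an unrelated AttributeError; don't swallow it
            last_error = e
            logging.warning('Connection missing commit(), retry %d', attempt)
    raise last_error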
Any help to tackle the error is appreciated.
Edit: the error started to occur on 8 March 2021. We didn't change any related code close to that day.
GAE seems to throw an ApplicationError: 1 on some code that has worked before. It could be a general GAE problem or some version upgrade issue, since the code was last used a while ago. How would I debug that error?
ApplicationError: 1 (/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py:1552)
Traceback (most recent call last):
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1535, in __call__
rv = self.handle_exception(request, response, e)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1529, in __call__
rv = self.router.dispatch(request, response)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1278, in default_dispatcher
return route.handler_adapter(request, response)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1102, in __call__
return handler.dispatch()
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 572, in dispatch
return self.handle_exception(e, self.app.debug)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 570, in dispatch
return method(*args, **kwargs)
File "/base/data/home/apps/e~(AppId)/dev.(VersionId)/worker.py", line 732, in post
channel.send_message('status-' + userId, str(emailCount) + ":" + str(emailTotal))
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/api/channel/channel.py", line 242, in send_message
raise _ToChannelError(e)
ApplicationError: ApplicationError: 1
The Channel API has been shut down; the deprecation page is here.
It states:
The Channel API did not scale well enough for the workloads it was intended for and so did not find wide adoption. Accordingly, support for the Channel API will be shut down for brief periods of time for maintenance beginning on July 13, 2017, as described in the shutdown timetable. The service will be turned off permanently on October 31, 2017.
The recommended alternative is to use Firebase:
See Using Firebase for realtime events on App Engine for information on replacing Channel API functionality with the Firebase Realtime Database.
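For what it's worth, a minimal sketch of pushing the same kind of status update through the Firebase Realtime Database REST API instead of channel.send_message. The database URL and path are placeholders, and authentication/security rules are left out:
import json

from google.appengine.api import urlfetch

# Placeholder database URL; in practice the request would also carry an
# auth token or be permitted by the database's security rules.
FIREBASE_DB_URL = 'https://your-project.firebaseio.com'


def send_status(user_id, email_count, email_total):
    """Write the progress string to a per-user path that clients listen on."""
    url = '%s/status/%s.json' % (FIREBASE_DB_URL, user_id)
    payload = json.dumps({'progress': '%d:%d' % (email_count, email_total)})
    result = urlfetch.fetch(
        url=url,
        payload=payload,
        method=urlfetch.PUT,
        headers={'Content-Type': 'application/json'},
        deadline=10)
    if result.status_code != 200:
        raise RuntimeError('Firebase write failed: %d' % result.status_code)
Clients then subscribe to the /status/<userId> path with the Firebase client library instead of opening a channel.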
I've noticed that service accounts on GCE instances now issue much longer tokens than before, and I suspect this is why AppEngine applications can no longer work with them, resulting in an InvalidOAuthParametersError.
I obtain a token from a Compute Engine instance like this:
# curl "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" -H "Metadata-Flavor: Google"
{"access_token":".......","expires_in":2544,"token_type":"Bearer"}
If I take the resulting 153-character token and POST it to an AppEngine Python application in an 'Authorization: Bearer ...' header, it causes the app to fail. The app works fine with a shorter (73-character) token generated for a user account.
The OAuth bit of the app that throws the exception is:
SCOPE = 'https://www.googleapis.com/auth/userinfo.email'
email = oauth.get_current_user(SCOPE).email().lower()
The error from AppEngine is:
Traceback (most recent call last):
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1511, in __call__
rv = self.handle_exception(request, response, e)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1505, in __call__
rv = self.router.dispatch(request, response)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1253, in default_dispatcher
return route.handler_adapter(request, response)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1077, in __call__
return handler.dispatch()
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 547, in dispatch
return self.handle_exception(e, self.app.debug)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 545, in dispatch
return method(*args, **kwargs)
File "/...../v4.398557956806027726/oauth.py", line 13, in post
user_name = self._get_authd_user()
File "/...../v4.398557956806027726/oauth.py", line 36, in _get_authd_user
email = oauth.get_current_user(SCOPE).email().lower()
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/api/oauth/oauth_api.py", line 109, in get_current_user
_maybe_call_get_oauth_user(_scope)
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/api/oauth/oauth_api.py", line 220, in _maybe_call_get_oauth_user
_maybe_raise_exception()
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/api/oauth/oauth_api.py", line 237, in _maybe_raise_exception
raise InvalidOAuthParametersError(error_detail)
InvalidOAuthParametersError
If I truncate the token to 73 characters, I see an InvalidOAuthToken error (as you'd expect). If I use the full token at https://www.googleapis.com/oauth2/v3/tokeninfo?access_token=... then it returns correct information for the service account, including the correct scopes (userinfo.email).
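For reference, that tokeninfo check can also be done from inside the app, which at least confirms the token itself is valid even when oauth.get_current_user() rejects it (a rough sketch using urlfetch; the helper name is illustrative):
import json

from google.appengine.api import urlfetch

TOKENINFO_URL = 'https://www.googleapis.com/oauth2/v3/tokeninfo?access_token=%s'


def describe_token(access_token):
    """Return the tokeninfo payload for a bearer token, or None if invalid."""
    result = urlfetch.fetch(TOKENINFO_URL % access_token, deadline=10)
    if result.status_code != 200:
        return None
    # For the failing service-account token this still reports the expected
    # scope (userinfo.email), which is what makes the rejection so puzzling.
    return json.loads(result.content)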
I'm also 99% sure that the tokens were shorter last week, and that they worked fine in this scenario.
My question is: why does AppEngine think that these service account tokens are invalid? The error would seem to suggest that it is the length causing the issue, or perhaps I'm missing something.
I am trying to use BeautifulSoup v4 to parse a document. I call BeautifulSoup on note.content, which is a string returned by Evernote's API:
soup = BeautifulSoup(note.content)
I have enabled lxml in my app.yaml file:
libraries:
- name: lxml
version: "2.3"
Note that this works on my local development server. However, when deployed to Google's cloud I get the following error:
Error Trace:
Unicode parsing is not supported on this platform
Traceback (most recent call last):
File "/base/python27_runtime/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1511, in __call__
rv = self.handle_exception(request, response, e)
File "/base/python27_runtime/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1505, in __call__
rv = self.router.dispatch(request, response)
File "/base/python27_runtime/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1253, in default_dispatcher
return route.handler_adapter(request, response)
File "/base/python27_runtime/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1077, in __call__
return handler.dispatch()
File "/base/python27_runtime/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 547, in dispatch
return self.handle_exception(e, self.app.debug)
File "/base/python27_runtime/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 545, in dispatch
return method(*args, **kwargs)
File "/base/data/home/apps/s~ever-blog/1.356951374446096208/controller/blog.py", line 101, in get
soup = BeautifulSoup(note.content)
File "/base/data/home/apps/s~ever-blog/1.356951374446096208/lib/bs4/__init__.py", line 168, in __init__
self._feed()
File "/base/data/home/apps/s~ever-blog/1.356951374446096208/lib/bs4/__init__.py", line 181, in _feed
self.builder.feed(self.markup)
File "/base/data/home/apps/s~ever-blog/1.356951374446096208/lib/bs4/builder/_lxml.py", line 62, in feed
self.parser.feed(markup)
File "parser.pxi", line 1077, in lxml.etree._FeedParser.feed (third_party/apphosting/python/lxml/src/lxml/lxml.etree.c:76196)
ParserError: Unicode parsing is not supported on this platform
UPDATE:
I checked out parser.pxi, and I found these lines of code which generated the error:
elif python.PyUnicode_Check(data):
if _UNICODE_ENCODING is NULL:
raise ParserError, \
u"Unicode parsing is not supported on this platform"
I think there must be something about GAE's deployment environment which causes this error, but I am not sure what.
UPDATE 2:
Because BeautifulSoup will automatically fall back on other parsers, I ended up removing lxml from my application entirely. Doing so fixed the problem.
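An alternative to removing lxml entirely would be to name a specific parser when constructing the soup, so bs4 never hands the markup to the lxml tree builder (a sketch, assuming note.content as in the question):
from bs4 import BeautifulSoup

# Explicitly pick the pure-Python parser so bs4 never uses lxml,
# which is what raised the ParserError in production.
soup = BeautifulSoup(note.content, "html.parser")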
Try parsing a UTF-8 encoded byte string instead of a unicode string.
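Something along these lines, assuming note.content is a unicode object as in the question:
from bs4 import BeautifulSoup

# Hand the parser a UTF-8 byte string rather than a unicode object, and
# say which encoding was used, so lxml's unicode code path is never hit.
soup = BeautifulSoup(note.content.encode("utf-8"), from_encoding="utf-8")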
As of May 2012 this bug is still present in production, but not in the SDK (1.6.6).
However, rolling back to BeautifulSoup 3 bypasses it in production for the time being:
http://www.crummy.com/software/BeautifulSoup/bs3/documentation.html
App Engine's LogService has an undocumented quota:
You can make up to 1,000,000 reads from it per day, after which you'll receive the following error:
Traceback (most recent call last):
File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/webapp/_webapp25.py", line 701, in __call__
handler.get(*groups)
File "/base/data/home/apps/xxx/3.356325783019142341/xxx.py", line 355, in get
for request_log in logservice.fetch(start_time=start_time, end_time=end_time, version_ids=["3"]):
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/logservice/logservice.py", line 414, in __iter__
self._advance()
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/logservice/logservice.py", line 427, in _advance
response)
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 94, in MakeSyncCall
return stubmap.MakeSyncCall(service, call, request, response)
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 308, in MakeSyncCall
rpc.CheckSuccess()
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/apiproxy_rpc.py", line 133, in CheckSuccess
raise self.exception
OverQuotaError: The API call logservice.Read() required more quota than is available.
Also, when you reach this quota, you'll start seeing the following line on your dashboard (AFAIK you don't see it there before):
At that point the quota isn't documented at all, and it seems it isn't billable either.
See this too: http://groups.google.com/group/google-appengine/browse_thread/thread/61fac55e1a2d521
Hope it'll save you some time.
Let me know if you can think of a workaround... (just to make it a question ;) )
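One way to stay under the limit would be to hit logservice less often, e.g. by caching a summary of each fetched time window so repeated requests don't trigger new logservice.Read calls. A rough sketch (the cache key scheme and expiry are arbitrary):
from google.appengine.api import memcache
from google.appengine.api.logservice import logservice


def fetch_logs_cached(start_time, end_time, version_ids, expiry=600):
    """Cache a summary of the logs for a window to cut logservice.Read calls.

    Only plain fields are kept; the RequestLog objects themselves wrap
    protocol buffers, so a small tuple per request is safer to cache.
    """
    key = 'logs:%s:%s:%s' % (start_time, end_time, ','.join(version_ids))
    summary = memcache.get(key)
    if summary is None:
        summary = [(log.start_time, log.resource, log.status)
                   for log in logservice.fetch(start_time=start_time,
                                               end_time=end_time,
                                               version_ids=version_ids)]
        memcache.set(key, summary, time=expiry)
    return summary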
You can request to have your limits increased: http://support.google.com/code/bin/request.py?&contact_type=AppEngineCPURequest