What does TombstonedTaskError mean? It is being raised while trying to add a task to the queue from a cron job:
Traceback (most recent call last):
File "/base/python_lib/versions/1/google/appengine/ext/webapp/__init__.py", line 501, in __call__
handler.get(*groups)
File "/base/data/home/apps/.../tasks.py", line 132, in get
).add(queue_name = 'userfeedcheck')
File "/base/python_lib/versions/1/google/appengine/api/labs/taskqueue/taskqueue.py", line 495, in add
return Queue(queue_name).add(self)
File "/base/python_lib/versions/1/google/appengine/api/labs/taskqueue/taskqueue.py", line 563, in add
self.__TranslateError(e)
File "/base/python_lib/versions/1/google/appengine/api/labs/taskqueue/taskqueue.py", line 619, in __TranslateError
raise TombstonedTaskError(error.error_detail)
TombstonedTaskError
The documentation only has the following to say:
exception TombstonedTaskError(InvalidTaskError)
Task has been tombstoned.
...which isn't particularly helpful.
I couldn't find anything useful in the App Engine code either.
You've added a task with that exact name before. Although it has already run, executed task names are kept around for some time to prevent accidental duplicates. If you're assigning task names, you should use names that are globally unique to prevent this from occurring.
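If you do need named tasks, a minimal sketch of one naming scheme follows; the handler URL is hypothetical, while the labs import path and queue name match the traceback above:

import uuid
from google.appengine.api.labs import taskqueue

task = taskqueue.Task(
    url='/tasks/userfeedcheck',  # hypothetical handler URL
    # A uuid4 component makes the name globally unique, so it cannot
    # collide with a tombstoned (recently executed) task name.
    name='userfeedcheck-%s' % uuid.uuid4().hex,
)
task.add(queue_name='userfeedcheck')

Alternatively, leaving name unset lets App Engine generate a unique name for you, at the cost of losing the deduplication that named tasks provide.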
I got this error:
TransactionFailedError: too much contention on these datastore entities. please try again.
Even though I'm not doing any transactions. The line of my code that causes the error is
ndb.put_multi(entity_list) # entity_list is a list of 100 entities
This error doesn't happen often so it isn't a big deal, but I'm curious why I get this error. Any ideas?
Here is most of the traceback:
Traceback (most recent call last):
...
File "/base/data/home/runtimes/python27_experiment/python27_lib/versions/1/google/appengine/ext/deferred/deferred.py", line 318, in post
self.run_from_request()
File "/base/data/home/runtimes/python27_experiment/python27_lib/versions/1/google/appengine/ext/deferred/deferred.py", line 313, in run_from_request
run(self.request.body)
File "/base/data/home/runtimes/python27_experiment/python27_lib/versions/1/google/appengine/ext/deferred/deferred.py", line 155, in run
return func(*args, **kwds)
File "/base/data/home/apps/s~opavote/2017-09-15.404125237783169549/tasks.py", line 70, in start_election
models.Voter.create(e.eid, chunk)
File "/base/data/home/apps/s~opavote/2017-09-15.404125237783169549/models.py", line 2426, in create
ndb.put_multi(voters + vbs)
File "/base/data/home/runtimes/python27_experiment/python27_lib/versions/1/google/appengine/ext/ndb/model.py", line 3958, in put_multi
for future in put_multi_async(entities, **ctx_options)]
File "/base/data/home/runtimes/python27_experiment/python27_lib/versions/1/google/appengine/ext/ndb/tasklets.py", line 383, in get_result
self.check_success()
File "/base/data/home/runtimes/python27_experiment/python27_lib/versions/1/google/appengine/ext/ndb/tasklets.py", line 427, in _help_tasklet_along
value = gen.throw(exc.__class__, exc, tb)
File "/base/data/home/runtimes/python27_experiment/python27_lib/versions/1/google/appengine/ext/ndb/context.py", line 824, in put
key = yield self._put_batcher.add(entity, options)
File "/base/data/home/runtimes/python27_experiment/python27_lib/versions/1/google/appengine/ext/ndb/tasklets.py", line 427, in _help_tasklet_along
value = gen.throw(exc.__class__, exc, tb)
File "/base/data/home/runtimes/python27_experiment/python27_lib/versions/1/google/appengine/ext/ndb/context.py", line 358, in _put_tasklet
keys = yield self._conn.async_put(options, datastore_entities)
File "/base/data/home/runtimes/python27_experiment/python27_lib/versions/1/google/appengine/ext/ndb/tasklets.py", line 513, in _on_rpc_completion
result = rpc.get_result()
File "/base/data/home/runtimes/python27_experiment/python27_lib/versions/1/google/appengine/datastore/datastore_rpc.py", line 928, in get_result
result = rpc.get_result()
File "/base/data/home/runtimes/python27_experiment/python27_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 613, in get_result
return self.__get_result_hook(self)
File "/base/data/home/runtimes/python27_experiment/python27_lib/versions/1/google/appengine/datastore/datastore_rpc.py", line 1893, in __put_hook
self.check_rpc_success(rpc)
File "/base/data/home/runtimes/python27_experiment/python27_lib/versions/1/google/appengine/datastore/datastore_rpc.py", line 1385, in check_rpc_success
raise _ToDatastoreError(err)
TransactionFailedError: too much contention on these datastore entities. please try again.
Note that the error is actually received from the datastore itself, in the RPC response: self.check_rpc_success(rpc).
This makes me suspect that on the datastore side, to ensure consistency and reliability across the redundant pieces of infrastructure supporting it, every write operation actually uses the same (or similar) mechanisms as transactional operations. The difference would be that explicit transactions also perform transactional checks on the client side, before and after the RPC exchange, and maybe issue explicit transaction start/end RPCs to the datastore.
From Life of a Datastore Write, a quote suggesting that some common mechanisms are used regardless of whether the operation is transactional (emphasis mine):
If the commit phase has succeeded but the apply phase failed, the datastore will roll forward to apply the changes to indexes under two circumstances:

The next time you execute a read or write or start a transaction on this entity group, the datastore will first roll forward and fully apply this committed but unapplied write, based on the data in the log.
And one of the possible reasons for failure would be simply too many parallel accesses to the same entities, even if they're read-only. See Contention problems in Google App Engine, though in that case the contention is for client-side transactions.
Note that this is just a theory ;)
It might be worth re-reviewing transactions and entity groups, noting the various definitions and limits.
Taken together, "Every attempt to create, update, or delete an entity takes place in the context of a transaction" and "There is a write throughput limit of about one transaction per second within a single entity group" probably speak to what you're seeing, particularly if entity_list contains entities that fall into the same entity group.
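Whatever the underlying mechanism, the practical mitigation is the same: keep writes to any single entity group at roughly one per second or less, and retry when contention hits. A minimal sketch under those assumptions; put_multi_with_retry is a hypothetical helper, not from the question's code:

import time
from google.appengine.ext import ndb
from google.appengine.api import datastore_errors

def put_multi_with_retry(entity_list, retries=3):
    # Group entities by entity-group root so one hot group does not
    # fail the whole batch; keyless (new, auto-ID) entities share a bucket.
    groups = {}
    for entity in entity_list:
        root = entity.key.root() if entity.key else None
        groups.setdefault(root, []).append(entity)
    for group in groups.values():
        for attempt in range(retries):
            try:
                ndb.put_multi(group)
                break
            except datastore_errors.TransactionFailedError:
                if attempt == retries - 1:
                    raise
                time.sleep(2 ** attempt)  # simple exponential backoff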
Has anyone been successful in backing up large datastore kinds to Cloud Storage? This is an experimental feature, so support is pretty sketchy on the Google end.
The kind we want to back up to Cloud Storage (ultimately with the goal of ingesting it from Cloud Storage into BigQuery) is currently sitting at 1.2 TB.
- description: BackUp
  url: /_ah/datastore_admin/backup.create?name=OurApp&filesystem=gs&gs_bucket_name=OurBucket&queue=backup&kind=LargeKind
  schedule: every day 00:00
  timezone: America/Regina
  target: ah-builtin-python-bundle
We keep running into the following error message:
Traceback (most recent call last):
File "/base/data/home/apps/s~steprep-prod-hrd/prod-339.366560204640641232/lib/mapreduce/handlers.py", line 182, in handle
input_reader, shard_state, tstate, quota_consumer, ctx)
File "/base/data/home/apps/s~steprep-prod-hrd/prod-339.366560204640641232/lib/mapreduce/handlers.py", line 263, in process_inputs
entity, input_reader, ctx, transient_shard_state):
File "/base/data/home/apps/s~steprep-prod-hrd/prod-339.366560204640641232/lib/mapreduce/handlers.py", line 318, in process_data
output_writer.write(output, ctx)
File "/base/data/home/apps/s~steprep-prod-hrd/prod-339.366560204640641232/lib/mapreduce/output_writers.py", line 711, in write
ctx.get_pool("file_pool").append(self._filename, str(data))
File "/base/data/home/apps/s~steprep-prod-hrd/prod-339.366560204640641232/lib/mapreduce/output_writers.py", line 266, in append
self.flush()
File "/base/data/home/apps/s~steprep-prod-hrd/prod-339.366560204640641232/lib/mapreduce/output_writers.py", line 288, in flush
f.write(data)
File "/python27_runtime/python27_lib/versions/1/google/appengine/api/files/file.py", line 297, in __exit__
self.close()
File "/python27_runtime/python27_lib/versions/1/google/appengine/api/files/file.py", line 291, in close
self._make_rpc_call_with_retry('Close', request, response)
File "/python27_runtime/python27_lib/versions/1/google/appengine/api/files/file.py", line 427, in _make_rpc_call_with_retry
_make_call(method, request, response)
File "/python27_runtime/python27_lib/versions/1/google/appengine/api/files/file.py", line 250, in _make_call
rpc.check_success()
File "/python27_runtime/python27_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 570, in check_success
self.__rpc.CheckSuccess()
File "/python27_runtime/python27_lib/versions/1/google/appengine/api/apiproxy_rpc.py", line 133, in CheckSuccess
raise self.exception
DeadlineExceededError: The API call file.Close() took too long to respond and was cancelled.
There seems to be an undocumented time limit of 30 seconds for write operations from GAE to Cloud Storage.
This also applies to writes made from a backend, so the maximum file size you can create from GAE in Cloud Storage depends on your throughput. Our solution is to split the file: each time the writer task approaches 20 seconds, it closes the current file and opens a new one, and then we join the files locally. For us this results in files of about 500 KB (compressed), so this might not be an acceptable solution for you...
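A minimal sketch of that splitting scheme, assuming the legacy Files API from the traceback above (files.gs.create, files.open, files.finalize); the bucket, object names, and 20-second threshold are illustrative:

import time
from google.appengine.api import files

SPLIT_AFTER_SECONDS = 20  # stay well under the ~30 s limit observed above

def write_split_files(bucket, base_name, chunks):
    part, start = 0, time.time()
    name = files.gs.create('/gs/%s/%s.part%03d' % (bucket, base_name, part),
                           mime_type='application/octet-stream')
    f = files.open(name, 'a')
    for chunk in chunks:
        if time.time() - start > SPLIT_AFTER_SECONDS:
            f.close()
            files.finalize(name)  # finalize each part before the deadline
            part, start = part + 1, time.time()
            name = files.gs.create('/gs/%s/%s.part%03d' % (bucket, base_name, part),
                                   mime_type='application/octet-stream')
            f = files.open(name, 'a')
        f.write(chunk)
    f.close()
    files.finalize(name)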
We receive this error quite frequently from our App Engine application. Are other people receiving it? Does anyone know how to get around it?
File "/base/python27_runtime/python27_lib/versions/1/google/appengine/api/mail.py", line 894, in send
make_sync_call('mail', self._API_CALL, message, response)
File "/base/python27_runtime/python27_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 94, in MakeSyncCall
return stubmap.MakeSyncCall(service, call, request, response)
File "/base/python27_runtime/python27_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 308, in MakeSyncCall
rpc.CheckSuccess()
File "/base/python27_runtime/python27_lib/versions/1/google/appengine/api/apiproxy_rpc.py", line 133, in CheckSuccess
raise self.exception
DeadlineExceededError: The API call mail.Send() took too long to respond and was cancelled.
Thanks
I was getting a lot of those errors (on Python 2.5), so I decided to move the send-mail call into a task. This way I at least get a retry each time it fails.
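A minimal sketch of that approach using the deferred library; sender and recipient addresses are placeholders:

from google.appengine.api import mail
from google.appengine.ext import deferred

def send_notification(to, subject, body):
    # Runs on a task queue; if mail.Send() times out here, the task
    # fails and is retried automatically.
    mail.send_mail(sender='noreply@example.com', to=to,
                   subject=subject, body=body)

# In the request handler, enqueue instead of sending inline:
deferred.defer(send_notification, 'user@example.com', 'Subject', 'Body')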
You can pass a custom make_sync_call function to set a more lenient deadline.
Take a look at this answer.
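A hedged sketch of that idea, assuming EmailMessage.send() accepts a make_sync_call argument as the mail.py frame in the traceback suggests; the 10-second deadline is arbitrary:

from google.appengine.api import apiproxy_stub_map, mail

def make_sync_call_with_deadline(service, call, request, response):
    # Same shape as apiproxy_stub_map.MakeSyncCall, but with a longer
    # RPC deadline than the default.
    rpc = apiproxy_stub_map.UserRPC(service, deadline=10)
    rpc.make_call(call, request, response)
    rpc.wait()
    rpc.check_success()

message = mail.EmailMessage(sender='noreply@example.com',
                            to='user@example.com',
                            subject='Subject', body='Body')
message.send(make_sync_call=make_sync_call_with_deadline)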
On GAE this line of code:
file_name = files.blobstore.create(mime_type='image/png')
raises google.appengine.runtime.DeadlineExceededError.
Here is the full method code:
class UploadsHandler(JSONRequestHandler):
    def upload_blob(self, content, filename):
        # Create a writable blobstore file (legacy Files API).
        file_name = files.blobstore.create(mime_type='image/png')
        # split_len is a user helper that chunks content into 65520-byte pieces.
        file_str_list = split_len(content, 65520)
        with files.open(file_name, 'a') as f:
            for line in file_str_list:
                f.write(line)
        files.finalize(file_name)
        return files.blobstore.get_blob_key(file_name)
Logging message ends with:
A serious problem was encountered with the process that handled this request, causing it to exit. This is likely to cause a new process to be used for the next request to your application. If you see this message frequently, you may be throwing exceptions during the initialization of your application. (Error code 104)
Full error stack:
<class 'google.appengine.runtime.DeadlineExceededError'>:
Traceback (most recent call last):
File "/base/data/home/apps/s~mockup-cloud/1.352909931378411668/main.py", line 389, in main
util.run_wsgi_app(application)
File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/webapp/util.py", line 98, in run_wsgi_app
run_bare_wsgi_app(add_wsgi_middleware(application))
File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/webapp/util.py", line 116, in run_bare_wsgi_app
result = application(env, _start_response)
File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/webapp/_webapp25.py", line 703, in __call__
handler.post(*groups)
File "/base/data/home/apps/s~mockup-cloud/1.352909931378411668/main.py", line 339, in post
original_key = "%s" % self.upload_blob(src)
File "/base/data/home/apps/s~mockup-cloud/1.352909931378411668/main.py", line 268, in upload_blob
file_name = files.blobstore.create(mime_type='image/png')
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/files/blobstore.py", line 68, in create
return files._create(_BLOBSTORE_FILESYSTEM, params=params)
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/files/file.py", line 487, in _create
_make_call('Create', request, response)
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/files/file.py", line 228, in _make_call
rpc.wait()
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 533, in wait
self.__rpc.Wait()
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/apiproxy_rpc.py", line 119, in Wait
rpc_completed = self._WaitImpl()
File "/base/python_runtime/python_lib/versions/1/google/appengine/runtime/apiproxy.py", line 131, in _WaitImpl
rpc_completed = _apphosting_runtime___python__apiproxy.Wait(self)
The blob is created during a file upload. Other methods of the app work great. It looks like the blobstore does not respond within 30 seconds.
Any ideas why this happens?
Thanks!
Seems like you're not the only one having this issue:
http://groups.google.com/group/google-appengine/browse_thread/thread/27e52484946cbdc1#
(posted today)
It seems Google was reconfiguring some of their servers. Now everything's working fine, as before.
A runtime.DeadlineExceededError occurs when your request handler takes too long to execute - the blobstore call just happened to be what was running when that happened. You need to profile your handler with appstats to see why it's so slow.
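For what it's worth, enabling Appstats on the Python runtime is a one-hook change in appengine_config.py; a minimal sketch:

# appengine_config.py
def webapp_add_wsgi_middleware(app):
    from google.appengine.ext.appstats import recording
    return recording.appstats_wsgi_middleware(app)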
I frequently get this Application error. What does it mean?
File "/base/data/home/apps/0xxopdp/10.347467753731922836/matrices.py", line 215, in insert_into_db
obj.put()
File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/db/__init__.py", line 895, in put
return datastore.Put(self._entity, config=config)
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/datastore.py", line 404, in Put
return _GetConnection().async_put(config, entities, extra_hook).get_result()
File "/base/python_runtime/python_lib/versions/1/google/appengine/datastore/datastore_rpc.py", line 601, in get_result
self.check_success()
File "/base/python_runtime/python_lib/versions/1/google/appengine/datastore/datastore_rpc.py", line 572, in check_success
rpc.check_success()
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 502, in check_success
self.__rpc.CheckSuccess()
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/apiproxy_rpc.py", line 126, in CheckSuccess
raise self.exception
ApplicationError: ApplicationError: 5
I do make many calls to the datastore. What caused this problem?
The ApplicationError: 5 message typically indicates a timeout error.
The error is raised by the datastore API, so your application is probably trying to make more than the allowed five writes per second to the datastore.
I would recommend reading this insightful article about Handling Datastore Errors, which does a very good job of explaining the possible causes of timeouts and how to deal with them.
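The usual pattern such articles recommend is a retry with exponential backoff around the put; a minimal sketch, assuming db.Timeout is the error being surfaced (put_with_retry is a hypothetical helper):

import time
from google.appengine.ext import db

def put_with_retry(obj, retries=3):
    for attempt in range(retries):
        try:
            return obj.put()  # the same call that fails in matrices.py above
        except db.Timeout:
            if attempt == retries - 1:
                raise
            time.sleep(0.1 * 2 ** attempt)  # exponential backoff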