Taskqueue error while testing locally - google-app-engine

I've been developing with App Engine for the past six months and haven't had many issues, but I just switched my box from Arch Linux to a Linux Mint install and am now hitting a strange error I haven't seen before. I'm using the 1.8.0 Python SDK. I'm trying to dispatch tasks onto a named queue and am getting the following traceback:
ERROR 2013-06-05 14:53:33,762 taskqueue_stub.py:1892] Failed to dispatch task
Traceback (most recent call last):
File "/opt/google_appengine/google/appengine/api/taskqueue/taskqueue_stub.py", line 1890, in ExecuteTask
'0.1.0.2')
File "/opt/google_appengine/google/appengine/tools/devappserver2/dispatcher.py", line 532, in add_request
headers_dict['Host'], urlparse.urlsplit(relative_url).path)
File "/opt/google_appengine/google/appengine/tools/devappserver2/dispatcher.py", line 580, in _resolve_target
raise request_info.ServerDoesNotExistError(prefix)
ServerDoesNotExistError: 15.bqdownloader-test
ERROR 2013-06-05 14:53:33,762 taskqueue_stub.py:1965] An error occured while sending the task "task16" (Url: "/_ah/warmup") in queue "download-queue". Treating as a task error.
My backends.py contains the following relevant section:
- name: bqdownloader-test
class: B1
instances: 20
options: dynamic
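For reference, enqueueing a task onto a named queue that targets a backend looks roughly like this (a minimal sketch; the handler URL and params are placeholders, not my actual code):

from google.appengine.api import taskqueue

# Enqueue onto the named queue, targeting the backend by name.
# 'download-queue' and 'bqdownloader-test' match the config above;
# the URL and params are placeholders.
taskqueue.add(
    url='/tasks/download',
    queue_name='download-queue',
    target='bqdownloader-test',
    params={'file_id': '123'})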
I've looked around and found one mention of this error, but no fix. Has anyone encountered this? I don't really know how to address it. Thanks in advance.

Related

Google App Engine with Flask: Memorystore/redis produces [Errno 104] Connection reset by peer

My Flask-based GAE app had been running for a few weeks without issue. Today I noticed that the root URL produces a 500 Internal Server Error most of the time. The logging suggests this is related to session handling in Flask (using Flask-Session). Before transitioning to GAE, this app ran on a VM with a local Redis instance for well over a year without any problems.
The Memorystore instance holds only about 1,500 keys and 3-4 MB of data at this time, so it is not heavily loaded. The server itself receives very little traffic (just me and the occasional robot). I am looking for insight into what produced this change in behavior, or what diagnostic procedures I should pursue, since I am new to GAE and the Google Cloud environment.
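For context, the Flask-Session/Memorystore wiring described above typically looks something like this sketch (the environment variable names and defaults are assumptions, not taken from the post):

from os import environ

from flask import Flask
from flask_session import Session
from redis import Redis

app = Flask(__name__)
# Store server-side sessions in the Memorystore (Redis) instance.
app.config["SESSION_TYPE"] = "redis"
app.config["SESSION_REDIS"] = Redis(
    host=environ.get("REDISHOST", "localhost"),
    port=int(environ.get("REDISPORT", 6379)),
)
Session(app)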
A typical traceback of the failure looks like this:
Traceback (most recent call last):
  File "/env/lib/python3.7/site-packages/flask/app.py", line 1969, in finalize_request
    response = self.process_response(response)
  File "/env/lib/python3.7/site-packages/flask/app.py", line 2268, in process_response
    self.session_interface.save_session(self, ctx.session, response)
  File "/env/lib/python3.7/site-packages/flask_session/sessions.py", line 166, in save_session
    time=total_seconds(app.permanent_session_lifetime))
  File "/env/lib/python3.7/site-packages/redis/client.py", line 1540, in setex
    return self.execute_command('SETEX', name, time, value)
  File "/env/lib/python3.7/site-packages/redis/client.py", line 836, in execute_command
    conn = self.connection or pool.get_connection(command_name, **options)
  File "/env/lib/python3.7/site-packages/redis/connection.py", line 1065, in get_connection
    if connection.can_read():
  File "/env/lib/python3.7/site-packages/redis/connection.py", line 682, in can_read
    return self._parser.can_read(timeout)
  File "/env/lib/python3.7/site-packages/redis/connection.py", line 295, in can_read
    return self._buffer and self._buffer.can_read(timeout)
  File "/env/lib/python3.7/site-packages/redis/connection.py", line 205, in can_read
    raise_on_timeout=False)
  File "/env/lib/python3.7/site-packages/redis/connection.py", line 173, in _read_from_socket
    data = recv(self._sock, socket_read_size)
  File "/env/lib/python3.7/site-packages/redis/_compat.py", line 58, in recv
    return sock.recv(*args, **kwargs)
ConnectionResetError: [Errno 104] Connection reset by peer
Again, this is new behavior. The server worked flawlessly for a couple of weeks. What might have changed and where should I look?
Possible related issue: https://github.com/andymccurdy/redis-py/issues/1186
Using health_check_interval eliminated most, but not all, of these "Connection reset by peer" errors for us (GAE Python 2.7):
from os import environ
from redis import Redis

self._redis = Redis(
    environ.get("REDISHOST", "localhost"),
    int(environ.get("REDISPORT", 6379)),
    health_check_interval=30,
)
Perhaps a value lower than 30 would eliminate the remaining occurrences.
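For background: with health_check_interval set, redis-py sends a PING on any connection that has been idle longer than the interval, so connections silently dropped by the server are detected before a real command is sent. As a complementary mitigation (a sketch of my own, assuming redis-py 3.x on Python 3 as in the question's traceback; not part of the original answer), you can retry the write once when a reset still slips through:

from redis import Redis
from redis.exceptions import ConnectionError as RedisConnectionError

def setex_with_retry(r, key, ttl, value, retries=1):
    # r is a Redis client; key/ttl/value are placeholders.
    for attempt in range(retries + 1):
        try:
            return r.setex(key, ttl, value)
        except (RedisConnectionError, ConnectionResetError):
            if attempt == retries:
                raise
            # redis-py disconnects the broken connection on error and
            # reconnects on the next command, so one retry usually works.

r = Redis(host="localhost", port=6379, health_check_interval=30)
setex_with_retry(r, "session:abc", 3600, b"payload")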

Starting Go server on App Engine localhost throws error

Can anyone tell me why this is happening:
$ dev_appserver.py nmg_server
INFO 2017-07-08 17:15:37,369 application_configuration.py:461] No version specified. Generated version id: 20170708t171537
WARNING 2017-07-08 17:15:37,369 application_configuration.py:166] The Managed VMs runtime is deprecated, please consider migrating your application to use the Flexible runtime. See https://cloud.google.com/appengine/docs/flexible/python/migrating for more details.
INFO 2017-07-08 17:15:37,472 devappserver2.py:116] Skipping SDK update check.
INFO 2017-07-08 17:15:37,513 api_server.py:312] Starting API server at: http://localhost:54096
INFO 2017-07-08 17:15:37,517 api_server.py:938] Applying all pending transactions and saving the datastore
INFO 2017-07-08 17:15:37,517 api_server.py:941] Saving search indexes
Traceback (most recent call last):
File "/Users/dgaedcke/gcloud_tools/google-cloud-sdk/platform/google_appengine/dev_appserver.py", line 103, in <module>
_run_file(__file__, globals())
File "/Users/dgaedcke/gcloud_tools/google-cloud-sdk/platform/google_appengine/dev_appserver.py", line 97, in _run_file
execfile(_PATHS.script_file(script_name), globals_)
File "/Users/dgaedcke/gcloud_tools/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/devappserver2.py", line 381, in <module>
main()
File "/Users/dgaedcke/gcloud_tools/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/devappserver2.py", line 369, in main
dev_server.start(options)
File "/Users/dgaedcke/gcloud_tools/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/devappserver2.py", line 196, in start
options.api_host, apiserver.port, wsgi_request_info_, options.grpc_apis)
File "/Users/dgaedcke/gcloud_tools/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/dispatcher.py", line 223, in start
_module.start()
File "/Users/dgaedcke/gcloud_tools/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/module.py", line 1647, in start
self._add_instance()
File "/Users/dgaedcke/gcloud_tools/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/module.py", line 1799, in _add_instance
expect_ready_request=True)
File "/Users/dgaedcke/gcloud_tools/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/go_runtime.py", line 189, in new_instance
self._go_application.maybe_build()):
TypeError: maybe_build() takes exactly 2 arguments (1 given)
I'm trying to run my server for testing on localhost, and it keeps exiting with this error.
This appears to be a bug in the Cloud SDK affecting dev_appserver.py when used with App Engine Managed VMs. It does not seem to affect the App Engine Standard or App Engine Flex environments.
Until Google releases a new Cloud SDK with the fix, you can modify the CLOUD_SDK_INSTALL_DIR/platform/google_appengine/google/appengine/tools/devappserver2/go_managedvm.py file locally as shown below (I've included both a patchable unified diff and the before/after versions for convenience).
Also consider moving to App Engine Flex, since Managed VMs are deprecated and will not be supported after October 27, 2017. From the deprecation notice:
Warning: The Managed VMs beta environment (applications deployed with
vm:true) is deprecated and will be decommissioned. This page is for
users who are already using the flexible environment with vm:true in
their app.yaml and want to upgrade to the latest release. If you are
updating your application from the standard environment, see the
Migrating Services from the Standard Environment to the Flexible
Environment instead.
Diff in patchable format:
--- /Users/tuxdude/google-cloud-sdk-orig/platform/google_appengine/google/appengine/tools/devappserver2/go_managedvm.py 2017-07-08 11:11:11.000000000 -0700
+++ /Users/tuxdude/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/go_managedvm.py 2017-07-08 11:11:11.000000000 -0700
@@ -152,15 +152,9 @@
logging.debug('Build succeeded:\n%s\n%s', stdout, stderr)
self._go_executable = exe_name
- def maybe_build(self, maybe_modified_since_last_build):
+ def maybe_build(self):
"""Builds an executable for the application if necessary.
- Args:
- maybe_modified_since_last_build: True if any files in the application root
- or the GOPATH have changed since the last call to maybe_build, False
- otherwise. This argument is used to decide whether a build is Required
- or not.
-
Returns:
True if compilation was successfully performed (will raise
an exception if compilation was attempted but failed).
@@ -173,9 +167,6 @@
self._work_dir = tempfile.mkdtemp('appengine-go-bin')
atexit.register(_rmtree, self._work_dir)
- if self._go_executable and not maybe_modified_since_last_build:
- return False
-
if self._go_executable:
logging.debug('Rebuilding Go application due to source modification')
else:
Before:
def maybe_build(self, maybe_modified_since_last_build):
  """Builds an executable for the application if necessary.

  Args:
    maybe_modified_since_last_build: True if any files in the application root
      or the GOPATH have changed since the last call to maybe_build, False
      otherwise. This argument is used to decide whether a build is Required
      or not.

  Returns:
    True if compilation was successfully performed (will raise
    an exception if compilation was attempted but failed).
    False if compilation was not attempted.

  Raises:
    BuildError: if building the executable fails for any reason.
  """
  if not self._work_dir:
    self._work_dir = tempfile.mkdtemp('appengine-go-bin')
    atexit.register(_rmtree, self._work_dir)

  if self._go_executable and not maybe_modified_since_last_build:
    return False

  if self._go_executable:
    logging.debug('Rebuilding Go application due to source modification')
  else:
    logging.debug('Building Go application')
  self._build()
  return True
After:
def maybe_build(self):
  """Builds an executable for the application if necessary.

  Returns:
    True if compilation was successfully performed (will raise
    an exception if compilation was attempted but failed).
    False if compilation was not attempted.

  Raises:
    BuildError: if building the executable fails for any reason.
  """
  if not self._work_dir:
    self._work_dir = tempfile.mkdtemp('appengine-go-bin')
    atexit.register(_rmtree, self._work_dir)

  if self._go_executable:
    logging.debug('Rebuilding Go application due to source modification')
  else:
    logging.debug('Building Go application')
  self._build()
  return True
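With the patched file in place, re-running dev_appserver.py should start the Go module normally; the dev server is plain Python source, so the change takes effect on the next run with no SDK rebuild needed.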

Pepper Robot - getImageLocal generates error

When trying to get an image from the robot using getImageLocal, I receive an error message. This is despite the fact that I am running the code directly on the robot. The error message is:
Traceback (most recent call last):
File "test.py", line 13, in <module>
video_device.getImageLocal(handle)
RuntimeError: Uncaught error: Pointer serialization not implemented
The code I've used to obtain this error is below (I receive the same error when using C++ as well):
import qi
import sys

if __name__ == "__main__":
    app = qi.Application(sys.argv)
    # start the event loop
    app.start()
    video_device = app.session.service("ALVideoDevice")
    handle = video_device.subscribe('handler', 0, 0, 10)
    video_device.getImageLocal(handle)
    video_device.releaseImage(handle)
I'm currently running this code using:
python test.py --qi-url=tcp://pepper.local
I would be very interested to know if it is something that I am doing wrong here, or if there is a more serious underlying issue.
Even if you run this code directly on the robot, you won't be able to retrieve the image this way from Python: getImageLocal returns a raw pointer to the image buffer, which the Python bindings cannot deserialize, hence the "Pointer serialization not implemented" error. The fact that you get the same error from C++ is quite surprising, though...
If you want to work in Python, consider using the getImageRemote() method instead. It works whether your code runs on the robot or on a remote computer.
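A minimal sketch of that approach (the resolution, color space, and fps values are assumptions for illustration, not taken from the question):

import qi
import sys

if __name__ == "__main__":
    app = qi.Application(sys.argv)
    app.start()
    video_device = app.session.service("ALVideoDevice")
    # subscribeCamera(name, cameraIndex, resolution, colorSpace, fps);
    # 2 = VGA and 11 = RGB are assumed values for this sketch.
    handle = video_device.subscribeCamera("handler", 0, 2, 11, 10)
    try:
        image = video_device.getImageRemote(handle)
        width, height = image[0], image[1]
        raw = bytes(bytearray(image[6]))  # element 6 holds the pixel data
        print(width, height, len(raw))
    finally:
        video_device.unsubscribe(handle)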
If you need to retrieve images faster, you could also consider using GStreamer (here is a link to a post describing how to use it; it's written for Nao, but it works for Pepper as well).
Which version of NAOqi are you using?

Undocumented Managed VM task queue RPCFailedError

I'm running into a very peculiar and undocumented issue with a GAE Managed VM and task queues. I understand that the Managed VM service is in beta, so this question may not be relevant forever, but it's definitely causing me a lot of headaches now.
The main symptom of the issue is that, in certain circumstances (not completely known to me), I see the following error/traceback:
File "/home/vmagent/my_app/some_file.py", line 265, in some_ndb_tasklet
res = yield some_task.add_async('some-task-queue-name')
File "/home/vmagent/python_vm_runtime/google/appengine/ext/ndb/tasklets.py", line 472, in _on_rpc_completion
result = rpc.get_result()
File "/home/vmagent/python_vm_runtime/google/appengine/api/apiproxy_stub_map.py", line 613, in get_result
return self.__get_result_hook(self)
File "/home/vmagent/python_vm_runtime/google/appengine/api/taskqueue/taskqueue.py", line 1948, in ResultHook
rpc.check_success()
File "/home/vmagent/python_vm_runtime/google/appengine/api/apiproxy_stub_map.py", line 579, in check_success
self.__rpc.CheckSuccess()
File "/home/vmagent/python_vm_runtime/google/appengine/ext/vmruntime/vmstub.py", line 312, in _WaitImpl
raise self._ErrorException(*_DEFAULT_EXCEPTION)
RPCFailedError: The remote RPC to the application server failed for call taskqueue.BulkAdd().
I've traced this through my local App Engine SDK, and I can follow it up to the last line of the trace, but google/appengine/ext/vmruntime/ doesn't exist on my machine at all, so I have no idea what's happening in vmstub.py. From the local code, some_task.add_async('the-queue') spins up an RPC and waits for it to finish, but this error is not what the except apiproxy_errors.ApplicationError, e: at line 1949 of taskqueue.py is expecting...
The code that's generating the error looks something like this:
@ndb.tasklet
def kickoff_tasks(batch_of_payloads):
    for task_payload in batch_of_payloads:
        # task_payload is a dict
        task = taskqueue.Task(
            url='/the/handler/url',
            params=task_payload)
        res = yield task.add_async('some-valid-task-queue-name')
Other things worth noting:

- This code itself is running in a task handler kicked off by another task.
- I first saw this error before implementing this sort of batching, and assumed the issue was that I had added too many tasks from within a task handler.
- In some cases I can run this successfully with a batch size of 100, but in others it fails consistently at 100 (depending on the data in the payloads) and sometimes succeeds at batch sizes of 50.
- The task payloads themselves contain batches of items and are tuned to be just small enough to fit in a task. App Engine advertises a maximum task size of 100KB, so I'm keeping the payloads under 90,000 bytes for now. Lowering the size further doesn't seem to help.
- I've also tried an exponential backoff that retries the kickoff_tasks method when this error appears, but it seems that once the error is raised, I can't add any tasks at all from within the same handler (i.e. I can't kick off a "continue from where you left off" task; I just have to let this one fail and restart itself).
So, my question is: what is actually causing this error? How can I avoid it, or handle it correctly?
This is a known issue that is being worked on. There are actually two issues - the RPC failure itself and the lack of handling of the RPCFailedError exception by the SDK.
There is some public discussion of the issue here.
If you're using App Engine Flexible with the python-compat-multicore image, a related bug has popped up: App Engine started using a newer version of the requests library, which broke communication between App Engine Flexible and the datastore. You can fix this error by monkey-patching the library in your appengine_config.py file.
Add the following code to appengine_config.py:
try:
    import google.appengine.ext.vmruntime.vmstub as vmstub
except ImportError:
    pass
else:
    if isinstance(vmstub.DEFAULT_TIMEOUT, (int, long)):
        # Newer requests libraries do not accept integers as header values.
        # Be sure to convert the header value before sending.
        # See Support Case ID 11235929.
        vmstub.DEFAULT_TIMEOUT = bytes(vmstub.DEFAULT_TIMEOUT)
Note that if you do not have an appengine_config.py file, you can just create it in your base project directory (wherever you put your app.yaml file). This file is run during App Engine startup.

Exception in idle (python 2.7) - possible bug in idle?

I'm trying to run a meta-analysis on a database of fMRI data, using the neurosynth Python library through IDLE. When I try to run even some of the most basic functions, I get an error: not an error in my own code or in the neurosynth modules, but what seems to be a bug in IDLE itself.
I uninstalled and reinstalled Python 2.7, reinstalled neurosynth and its dependencies, and ran into the same error. I've pasted my code below, followed by the error message, which appears in the unix shell (not in the IDLE shell).
Has anybody come across this error before while using IDLE and Python 2.7?
The script:
from neurosynth.base.dataset import Dataset
from neurosynth.analysis import meta, decode, network
import neurosynth
neurosynth.set_logging_level('info')
dataset = Dataset('data/database.txt')
dataset.add_features('data/features.txt')
dataset.save('dataset.pkl')
print 'done'
The error message which appeared in the unix shell:
----------------------------------------
Unhandled server exception!
Thread: SockThread
Client Address: ('127.0.0.1', 46779)
Request: <socket._socketobject object at 0xcb8d7c0>
Traceback (most recent call last):
File "/usr/global/python/2.7.3/lib/python2.7/SocketServer.py", line 284, in _handle_request_noblock
self.process_request(request, client_address)
File "/usr/global/python/2.7.3/lib/python2.7/SocketServer.py", line 310, in process_request
self.finish_request(request, client_address)
File "/usr/global/python/2.7.3/lib/python2.7/SocketServer.py", line 323, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/usr/global/python/2.7.3/lib/python2.7/idlelib/rpc.py", line 503, in __init__
SocketServer.BaseRequestHandler.__init__(self, sock, addr, svr)
File "/usr/global/python/2.7.3/lib/python2.7/SocketServer.py", line 638, in __init__
self.handle()
File "/usr/global/python/2.7.3/lib/python2.7/idlelib/run.py", line 265, in handle
rpc.RPCHandler.getresponse(self, myseq=None, wait=0.05)
File "/usr/global/python/2.7.3/lib/python2.7/idlelib/rpc.py", line 280, in getresponse
response = self._getresponse(myseq, wait)
File "/usr/global/python/2.7.3/lib/python2.7/idlelib/rpc.py", line 300, in _getresponse
response = self.pollresponse(myseq, wait)
File "/usr/global/python/2.7.3/lib/python2.7/idlelib/rpc.py", line 424, in pollresponse
message = self.pollmessage(wait)
File "/usr/global/python/2.7.3/lib/python2.7/idlelib/rpc.py", line 376, in pollmessage
packet = self.pollpacket(wait)
File "/usr/global/python/2.7.3/lib/python2.7/idlelib/rpc.py", line 347, in pollpacket
r, w, x = select.select([self.sock.fileno()], [], [], wait)
error: (4, 'Interrupted system call')
*** Unrecoverable, server exiting!
----------------------------------------
Thanks in advance!
IDLE is meant for interactive exploration in its shell, for editing in its editor, and for testing programs by running them from the editor. It is not meant for production runs of a program once developed. If there is a problem, you should separate the IDLE part from the running-with-Python part: in the unix shell, run python -m idlelib (for instance) to see whether IDLE starts correctly; then, in an appropriate directory, run python path-to-my-file.py. Which part does not work?
The error message is definitely odd, in that it contains more than just the Python traceback. On the other hand, it does not start with a line of your code. I have no idea why the select call would be interrupted.
