Pepper Robot - getImageLocal generates error

When trying to get an image from the robot using getImageLocal, I receive an error message. This is despite the fact that I am running the code directly on the robot. The error message is:
Traceback (most recent call last):
File "test.py", line 13, in <module>
video_device.getImageLocal(handle)
RuntimeError: Uncaught error: Pointer serialization not implemented
The code I've used to obtain this error is below (I receive the same error when using C++ as well):

import qi
import sys

if __name__ == "__main__":
    app = qi.Application(sys.argv)
    # start the eventloop
    app.start()
    video_device = app.session.service("ALVideoDevice")
    handle = video_device.subscribe('handler', 0, 0, 10)
    video_device.getImageLocal(handle)
    video_device.releaseImage(handle)
I'm currently running this code using:
python test.py --qi-url=tcp://pepper.local
I would be very interested to know if it is something that I am doing wrong here, or if there is a more serious underlying issue.

Even if you run this code directly on the robot, you won't be able to retrieve the image this way from Python: getImageLocal returns a raw pointer to the image, and the qi messaging layer cannot serialize a pointer (hence the "Pointer serialization not implemented" error). The fact that you get the same error while using C++ is quite disturbing, though.
If you want to work in Python, you should consider using the getImageRemote() method to get the images instead. That method works whether your code runs on the robot or on a remote computer.
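As a minimal sketch of that approach (the subscribeCamera parameters used here, camera index 0, resolution 2 for VGA, color space 11 for RGB, and 10 fps, are illustrative assumptions rather than values from the question):

import qi
import sys

if __name__ == "__main__":
    app = qi.Application(sys.argv)
    app.start()
    video_device = app.session.service("ALVideoDevice")
    # subscribeCamera(name, cameraIndex, resolution, colorSpace, fps)
    handle = video_device.subscribeCamera('handler', 0, 2, 11, 10)
    try:
        image = video_device.getImageRemote(handle)
        # getImageRemote returns a list whose first entries are the
        # image metadata; index 6 holds the raw pixel buffer
        width, height, raw = image[0], image[1], image[6]
    finally:
        video_device.unsubscribe(handle)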
If you want to retrieve the images faster, you could also consider using GStreamer (here is a link to a post describing how to use it. It's a valid solution for Nao, but it can be used for Pepper as well).
Which version of NAOqi are you using?

Related

Why is NAO's startup interfering with my default behavior?

I created a main behavior for my NAOv6 (nao_main_behavior_V2) that is supposed to be executed on startup. It sets the robot's language to German, activates some of NAO's LEDs, and waits for me to touch its head sensors to activate a different behavior called Manual_Mode.
Every time I start NAO, it executes the main behavior but stops as soon as I touch its head sensors to activate the other behavior. Using Choregraphe I can find this error message:
[ERROR] behavior.box :onInput_onStart:16 _Behavior__nao_main_behavior_v28d046fMain2726901504:/Error! Manual_Mode_12: _Behavior__nao_main_behavior_v28d046fMain2726901504:/Manual Mode_29: ALBehaviorManager::runBehavior Box _Behavior__nao_main_behavior_v28d046fManualMode2716269632:/Speech Reco. Ger_1 has failed with error:
Traceback (most recent call last):
  File "/opt/aldebaran/lib/python2.7/site-packages/albehavior.py", line 120, in _safeCallOfUserMethod
    func()
  File "", line 55, in onInput_onStart
RuntimeError: ALSpeechRecognition::pushContexts AsrHybridNuance::xPushContexts You need to stop or pause the ASR engine to be able to make this call.
The Manual_Mode behavior accesses speech recognition so that I can say commands, but I never access anything speech-recognition related before that point.
Starting (or restarting) the behavior manually from Choregraphe works just fine: everything works as it is supposed to and I don't get any errors.
I tried using a third behavior during startup that starts the Main_Behavior, but I get the same result.
The behavior was originally written for a NAOv5 and it worked without any problems.
I think this is related to Autonomous Life running in the background (depending on the robot/version you use).
Try stopping Autonomous Life with an API call when starting your "behavior_v2", or change the nature of your "manual_mode" behavior (interactive/solitary).
About stopping Autonomous Life: refer to the ALAutonomousLife API, specifically the setState or enableAnAbility methods (depending on your version); a sketch follows below.
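As a hedged sketch of that first option from Python (the robot address is a placeholder, and "disabled" is one of the states accepted by ALAutonomousLife.setState):

import qi

session = qi.Session()
session.connect("tcp://nao.local:9559")  # placeholder address
life = session.service("ALAutonomousLife")
# "disabled" switches Autonomous Life off entirely; "solitary" and
# "interactive" are among the other states setState accepts
life.setState("disabled")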
About changing the nature of a behavior, see this video for example: https://youtu.be/xPdNoiuaQag
I found a workaround. I created a launch trigger condition for my behavior:
(('Launchpad/LifeTime' ~30 ))
After NAO finishes its startup and 30 seconds have passed, my behavior is triggered and works without any problems.

CommandSequence taking too long to download

With both DKPy-SITL and our APM2 board, the wait_ready method is causing our program to raise an API Exception due to the command list (waypoints) taking too long to download. In the past (with droneapi) this wasn't an issue for me. Some waypoints are being downloaded, but the process takes about 10 seconds for each one, which leads me to believe something weird is going on.
Are there any ways to speed up the download process? I've posted the relevant code below.
self.vehicle = connect(connection_string, baud=baud_rate,
                       status_printer=dronekit_printer, wait_ready=True)
and later, in another asynchronous method:

def commands(self):
    commands = self.vehicle.commands
    commands.download()
    commands.wait_ready()
    return commands
The error occurs on commands.wait_ready(). There has to be a faster way to download commands than sitting there for over 30 seconds on an i7 4790k processor, especially since I've run the same code off a slower computer in the past with droneapi. If need be, I can raise an issue on the dronekit github as well.
I had the same issue. The first download call always goes well (0 commands), but once you have uploaded some commands, the second attempt to download fails with a 'Timeout' exception.
What I did to solve this was to call clear, without downloading again, after the first time.
Something like this:
cmds = vehicle.commands
if not cmds.count > 0:
    # Download
    cmds.download()
    # Wait until download is finished
    cmds.wait_ready()
cmds.clear()
# Add / Modify the commands here and then upload them
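For completeness, a hedged sketch of the add/upload step that would follow the clear (the single waypoint and its coordinates are illustrative assumptions):

from dronekit import Command
from pymavlink import mavutil

# Add a hypothetical waypoint at lat 50.0, lon 8.0, 20 m relative altitude
cmds.add(Command(0, 0, 0,
                 mavutil.mavlink.MAV_FRAME_GLOBAL_RELATIVE_ALT,
                 mavutil.mavlink.MAV_CMD_NAV_WAYPOINT,
                 0, 0, 0, 0, 0, 0,
                 50.0, 8.0, 20))
cmds.upload()  # push the modified mission back to the vehicle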

Undocumented Managed VM task queue RPCFailedError

I'm running into a very peculiar and undocumented issue with a GAE Managed VM and Task Queues. I understand that the Managed VM service is in beta, so this question may not be relevant forever, but it's definitely causing me lots of headache now.
The main symptom of the issue is that, in certain (not completely known to me) circumstances, I'm seeing the following error/traceback:
File "/home/vmagent/my_app/some_file.py", line 265, in some_ndb_tasklet
res = yield some_task.add_async('some-task-queue-name')
File "/home/vmagent/python_vm_runtime/google/appengine/ext/ndb/tasklets.py", line 472, in _on_rpc_completion
result = rpc.get_result()
File "/home/vmagent/python_vm_runtime/google/appengine/api/apiproxy_stub_map.py", line 613, in get_result
return self.__get_result_hook(self)
File "/home/vmagent/python_vm_runtime/google/appengine/api/taskqueue/taskqueue.py", line 1948, in ResultHook
rpc.check_success()
File "/home/vmagent/python_vm_runtime/google/appengine/api/apiproxy_stub_map.py", line 579, in check_success
self.__rpc.CheckSuccess()
File "/home/vmagent/python_vm_runtime/google/appengine/ext/vmruntime/vmstub.py", line 312, in _WaitImpl
raise self._ErrorException(*_DEFAULT_EXCEPTION)
RPCFailedError: The remote RPC to the application server failed for call taskqueue.BulkAdd().
I've gone through my local App Engine SDK to trace this through, and I can get up to the last line of the trace, but google/appengine/ext/vmruntime/ doesn't exist on my machine at all, so I have no idea what's happening in vmstub.py. From looking at the local code, some_task.add_async('the-queue') is spinning up an RPC and waiting for it to finish, but this error is not what the except apiproxy_errors.ApplicationError, e: at line 1949 of taskqueue.py is expecting...
The code that's generating the error looks something like this:
@ndb.tasklet
def kickoff_tasks(batch_of_payloads):
    for task_payload in batch_of_payloads:
        # task_payload is a dict
        task = taskqueue.Task(
            url='/the/handler/url',
            params=task_payload)
        res = yield task.add_async('some-valid-task-queue-name')
Other things worth noting:
- This code itself is running in a task handler kicked off by another task.
- I first saw this error before implementing this sort of batching, and assumed the issue was that I had added too many tasks from within a task handler.
- In some cases I can run this successfully with a batch size of 100, but in others it fails consistently at 100 (depending on the data in the payloads), and sometimes succeeds at batch sizes of 50.
- The task payloads themselves include batches of items and are tuned to be just small enough to fit in a task. App Engine advertises a maximum task size of 100KB, so I'm keeping the payloads under 90,000 bytes right now; lowering the size even more doesn't seem to help (see the size check sketched after this list).
- I've also tried implementing an exponential backoff to retry the kickoff_tasks method when this error appears, but it seems that once the error is raised, I can't add any other tasks at all from within the same handler (i.e. I can't kick off a "continue from where you left off" task; I just have to let this one fail and restart itself).
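As a quick sanity check on that size limit, something like the following can measure a payload's encoded size before enqueueing (a hedged sketch: urlencode approximates how a params dict is serialized for a POST task on the Python 2 runtime):

import urllib

def payload_size(task_payload):
    # Approximate encoded size of a params dict (Python 2 / GAE runtime)
    return len(urllib.urlencode(task_payload))

# e.g. guard before enqueueing (90,000 bytes mirrors the margin above)
assert payload_size(task_payload) < 90000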
So, my question is, what is actually causing this error? How can I avoid it, or fix this so that I'm handling it correctly?
This is a known issue that is being worked on. There are actually two issues - the RPC failure itself and the lack of handling of the RPCFailedError exception by the SDK.
There is some public discussion of the issue here.
If you're using App Engine Flexible and the python-compat-multicore image, a new bug has popped up: App Engine now uses a newer version of the requests library, which broke communication between App Engine Flexible and the datastore. You can fix this error by monkey-patching the library in your appengine_config.py file.
Add the following code to appengine_config.py:
try:
    import appengine.ext.vmruntime.vmstub as vmstub
except ImportError:
    pass
else:
    if isinstance(vmstub.DEFAULT_TIMEOUT, (int, long)):
        # Newer requests libraries do not accept integers as header values.
        # Be sure to convert the header value before sending.
        # See Support Case ID 11235929.
        vmstub.DEFAULT_TIMEOUT = bytes(vmstub.DEFAULT_TIMEOUT)
Note that if you do not have an appengine_config.py file, you can just create it in your base project directory (wherever you put your app.yaml file). This file is run during App Engine startup.

Taskqueue error while testing locally

I've been developing with App Engine for the past 6 months and haven't had too many issues, but I just switched my box from an Arch Linux to a Linux Mint install and am now having a weird issue I haven't seen before. I'm using the 1.8.0 Python SDK. I'm trying to dispatch tasks onto a named queue and am getting the following traceback:
ERROR 2013-06-05 14:53:33,762 taskqueue_stub.py:1892] Failed to dispatch task
Traceback (most recent call last):
  File "/opt/google_appengine/google/appengine/api/taskqueue/taskqueue_stub.py", line 1890, in ExecuteTask
    '0.1.0.2')
  File "/opt/google_appengine/google/appengine/tools/devappserver2/dispatcher.py", line 532, in add_request
    headers_dict['Host'], urlparse.urlsplit(relative_url).path)
  File "/opt/google_appengine/google/appengine/tools/devappserver2/dispatcher.py", line 580, in _resolve_target
    raise request_info.ServerDoesNotExistError(prefix)
ServerDoesNotExistError: 15.bqdownloader-test
ERROR 2013-06-05 14:53:33,762 taskqueue_stub.py:1965] An error occured while sending the task "task16" (Url: "/_ah/warmup") in queue "download-queue". Treating as a task error.
My backends.py contains the following relevant section:

- name: bqdownloader-test
  class: B1
  instances: 20
  options: dynamic
I've looked around and have found one mention of this error, but no fix. Has anyone encountered this? I don't really know what to do to address this issue. Thanks in advance.
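For context, a task is routed to such a backend via the target parameter, along the lines of this hedged sketch (the handler URL is a placeholder; 'download-queue' is the queue named in the error output, and the '15.' prefix in ServerDoesNotExistError is presumably the backend instance the dev server tried to resolve):

from google.appengine.api import taskqueue

# Hypothetical enqueue aimed at the backend defined in backends.py
taskqueue.add(
    url='/path/to/worker',  # placeholder handler URL
    queue_name='download-queue',
    target='bqdownloader-test')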

Java method lastModified() not working in Clojure

I am trying to get the last modified time of a file in Clojure by calling a Java method.
Using java.io.File.lastModified I am supposed to be able to get the UNIX time, but this works neither when executing the script nor in the REPL.
My code is:
(java.io.File.lastModified "/home/lol/lolness.txt")
and my error is:
java.lang.ClassNotFoundException: java.io.File.lastModified (NO_SOURCE_FILE:24)
(java.io.File.separator) works, however.
EDIT:
Clojure version 1.2.0-master-SNAPSHOT
Java version OpenJDK 1.6.0
lastModified is a method of java.io.File objects. To access it in Clojure, use the following syntax:
(.lastModified (java.io.File. "/home/lol/lolness.txt"))
Note that the namespaces clojure.contrib.java-utils (1.1) / clojure.java.io (bleeding edge) provide a function file which makes the creation of java.io.File objects more convenient. Since you're on the bleeding edge, the following should work for you:
(require '[clojure.java.io :as io])
(.lastModified (io/file "/home/lol/lolness.txt"))
