We are trying to move Pepper around a floor using the ALNavigation SLAM APIs. We created a map using the ALNavigation:explore() method. The application works most of the time, but sometimes Pepper stops mid-route and the application crashes because some safeguard feature kicks in.
We are using ALNavigation:navigateToInMap to move Pepper around the map.
Here are some logs:
[W] 15:01:26 ALMotion.OmniWheelFollowPath: Stitch failed. Stopping path:
["Circle", [0.436987877, 11.431554794], [2.869375944, 11.368105888],
-0.046996359]
[W] 15:01:26 ALTouch.TouchManager: My Base is touched. Reasons: Wheel.
[W] 15:01:26 AutonomousLife: Robot was moved!
[W] 15:01:26 AutonomousLife: Robot moved, must enter safeguard state. Will
immediately re-enter solitary state.
Is there any way to fix this issue, or is it a hardware issue with Pepper's wheels, or something wrong in the code? I am simply calling navigateToInMap after localizing the robot, and this works most of the time, but the issue is getting more and more frequent.
Thanks
Pepper has a system to detect if she's been pushed, and (with current versions) there are sometimes false positives - especially on a floor with irregularities, or when Pepper is moving quickly or accelerating abruptly.
Some solutions:
Make Pepper move / accelerate a bit more slowly
Have a system to launch the application again as soon as it exits safeguard (for example with a ShouldBeExploring or shouldBeNavigating trigger condition) - in my experience, when this false positive happens the robot is in safeguard for a very short time, maybe less than a second.
I recommend the second solution, because that's usually what you want to do anyway when the safeguard is not a false positive - when someone bumps into Pepper, or shakes her, etc.
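A minimal sketch of that relaunch loop, assuming a Python script running off-robot - the robot IP and the in-map target are placeholders, and the return value of navigateToInMap is worth double-checking against your NAOqi version's documentation:

from naoqi import ALProxy
import time

ROBOT_IP = "<robot-ip>"  # placeholder: your robot's address
life = ALProxy("ALAutonomousLife", ROBOT_IP, 9559)
navigation = ALProxy("ALNavigation", ROBOT_IP, 9559)
target = [2.0, 1.0, 0.0]  # hypothetical in-map target [x, y, theta]

while True:
    # getState() returns "safeguard" while push detection is active and
    # drops back to "solitary" shortly after a false positive.
    if life.getState() == "solitary":
        if navigation.navigateToInMap(target):  # blocks until reached or given up
            break
    time.sleep(0.5)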
We recently got a Pepper robot and are actively trying to develop apps for it. Lately we encounter the same error more and more often - 720 (the indication for this error is that the shoulder LEDs start blinking yellow).
When we push the button we hear the following description of the error:
Description: Some of my motors are getting hot in my Neck. I will need to rest soon.
Solution based on the documentation: the robot's motors are getting hot or are already too hot to be able to move. Put the robot in a crouch, unstiffened, and wait a few tens of minutes to let its motors cool down before using it again.
Is it normal that we encounter this error several times a day? When we push the button behind the tablet, the error goes away and we can continue our work.
The room temperature is about 22-24 °C.
Is there something we can do to prevent this error from occurring?
Best Regards.
Also check that you don't have objects near Pepper (within about 1 m to 1.5 m around her). If you do, Pepper might consider those objects potential kids and consistently try to look down at them. Over time this heats up the neck, because the head's weight creates torque on the neck motor.
Constant movement heats up the motors. If you do not need the robot to be alive or moving around while you develop, just put it in the resting position:
http://doc.aldebaran.com/2-5/naoqi/motion/control-stiffness-api.html?highlight=relax#ALMotionProxy::rest
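A minimal sketch of that call, with <robot-ip> as a placeholder for your robot's address:

from naoqi import ALProxy

motion = ALProxy("ALMotion", "<robot-ip>", 9559)
motion.rest()  # goes to a safe crouching posture and removes stiffness so the motors can cool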
If you encounter this error too often, the neck motor might be damaged.
You can contact support using this form: https://www.ald.softbankrobotics.com/en/about-us/contact . They will tell you if your Pepper needs repair.
I am in the process of building a mobile game with Corona SDK, which is based on Lua. Until now I didn't need any help, but this time I can't seem to find the cause, and I've been searching for hours.
It's one of those timer problems where, after leaving, removing, and revisiting the scene, items that are spawned within a loop multiply themselves on every relaunch. More specifically, this happens every time a "forbidden" collision occurs, which leads to the relaunch according to my onCollision function.
What I already corrected after hours of strenuous research:
--the code inside the onCollision function is now inside the "began" phase, so that can't cause the multiplication
--the scene phases are also correctly used
--transitions and timers are all canceled right before the relaunch
Since the code would be too long for you to look through, I'd rather ask for some hints:
What do you think could cause such problems, besides what I already mentioned?
I appreciate every answer! Thanks a lot.
The above comments are valid; it is going to be hard to diagnose the problem without being able to look at the code.
In the past, I have found it very helpful to name all my objects when dealing with collisions. That way, when a collision happens, I know which objects caused it, which is very useful for debugging.
It sounds like you have an issue with how you are starting the scene and deallocating resources when the scene ends. You may want to stop physics when the scene is left and restart it when it comes back, but without code I can't give a concrete answer.
I'm trying to use the whole body balancer made by Aldebaran to make my NAO dance more steadily and be less dependent on how level the floor is, i.e. to neglect some small tilt.
I've succeeded in requesting NAO to go to balance, but enabling the balance constraint gives me nothing. For testing, I designed an ill-balanced timeline that makes the robot fall down when the whole body balancer is disabled and that should keep the robot stable as long as it's enabled - that's the use case Aldebaran declares. However, the robot still falls (I keep him upright with my hand) and then goes to balance due to ALMotionProxy::wbGoToBalance. It is strange, however, that he reaches balance in one rapid move, rather than over the 3.0 seconds I requested.
My hypothesis now is: the whole body balancer needs some resources (joints) that are already used by my timeline (it uses all the joints). Is that correct? Can anyone confirm or deny this?
The source I use is generally this one:
self.proxy = ALProxy("ALMotion")
self.proxy.wbEnable(True)  # activate whole body motion
self.proxy.wbFootState("Fixed", "LLeg")  # left foot anchored to the ground
self.proxy.wbFootState("Free", "RLeg")  # right foot free to move
self.proxy.wbEnableBalanceConstraint(True, "LLeg")  # keep balance over the left leg
I use this source inside a box in Choregraphe 1.14 and it is definitely called (it leaves logs, which I stripped out above). And it definitely gives me no exceptions; I check and log them.
Yes, I think you must remove some joints from your timeline.
The test is easy: disable the ankles in your timeline, for instance, and see the results.
Disabling some joints is easy:
- open the timeline
- click the small pen beside the "Motion" caption on the left
- uncheck some circles (for instance the LAnkleRoll circle), so those joints' animations are disabled
- retest
I recently wrote a program to display data on a set of LCD TVs. The data is for the most part static, with the exception of a refresh from the database every 60 seconds. I know screen burn isn't as big an issue with LCDs as with plasma TVs; however, I would like to try to minimize the risk. These screens will be running for 8 hours a day.
I programmed a small square that bounces around the screens on top of all the data. The square constantly changes colors as it goes. I did test that it hits every pixel on the screen. It completes a "cycle" every couple of minutes.
Is that sufficient to mitigate the risk of burn in? Or do I need to make something more complicated?
Discard all that effort altogether; LCDs do not suffer from that problem at all.
And that square is probably annoying - even if it were to do any good, it would have to stay in each spot for a longer period of time.
I wouldn't worry; 8 hours per day is normal. If you are paranoid, you can move the window / reposition the text every so often.
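If you do go the paranoid route, a minimal sketch of that repositioning idea, assuming a Python/tkinter display - the label text, offsets, and 5-minute period are all arbitrary placeholders:

import tkinter as tk

OFFSETS = [(0, 0), (2, 0), (2, 2), (0, 2)]  # small cycle of pixel offsets

root = tk.Tk()
content = tk.Label(root, text="dashboard data", font=("Arial", 48))
content.place(x=20, y=20)

def jitter(step=0):
    # nudge the whole content by a couple of pixels every few minutes so
    # no static element sits on exactly the same pixels all day
    dx, dy = OFFSETS[step % len(OFFSETS)]
    content.place(x=20 + dx, y=20 + dy)
    root.after(5 * 60 * 1000, jitter, step + 1)

jitter()
root.mainloop()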
That is not exactly true. While LCDs don't suffer from true burn-in, they do have a similar problem (image persistence), especially when used as a computer screen or left on a TV guide. An image will stick if left on the screen long enough; it usually goes away, but it can be permanent.
The program you are describing sounds like it would work just fine.
So the idea is that a computer agent would be programmed in two layers, the conscious and unconscious.
The unconscious part is essentially a set of input and output devices, which I typically think of as sensors (keyboard, temperature, etc., to the limit of your imagination) and output methods (notably the screen and speakers in the case of a home PC, but again to the limit of your imagination). Sensors can be added or removed at any time, and this layer provides two main channels to the conscious layer: a single input and a single output. Defining what kind of information travels between these two layers is somewhat difficult, but the basic idea is that the conscious part is constantly receiving signals (at various levels of abstraction) from the output of the unconscious part, and the conscious part can send whatever it wants down to the unconscious layer through the input channel.
The conscious layer initially knows little to nothing; it is just being completely blasted by inputs from the unconscious layer, and it knows how to send signals back, though it knows nothing about how any particular signal will affect the unconscious part. The conscious part has a large amount of storage space and processing power; however, it is all volatile memory.
Now for the question. I would like the conscious part of the system to "grow": it has no idea what it can do, it just knows it can send signals, so it starts out by sending signals down the pipe and seeing how that affects the sensor data it receives back. The dead end is that the computer is not initially trying to satisfy a goal; it is just sending signals around. Think of it like a newborn baby: it needs food, or sleep, or to be moved out of the sun, etc. The sensory inputs of the baby are fed to its brain, which then decides to try making use of its outputs in order to get what it needs.
What kind of natural need can a computer have?
What have I tried?
Thinking specifically about how a baby becomes hungry - I certainly haven't read any research on CAT scans performed on crying, hungry children or anything - I thought perhaps a particular signal comes from the unconscious at a constantly growing rate, and is only satiated when the signals sent back cause the baby to eat. The conscious brain's job would be to minimize the rate at which each type of signal comes in. In other words, the "instinct" of the computer is to limit the rate of each incoming signal. What other "instincts" could there be? The problem with this analogy, of course, is that computers don't need to eat. Or at least I haven't been able to translate eating into something a computer needs.
Outside of the scope of this question
The end goal of this is to teach a computer that knows nothing except how it interacts with the world to play tic-tac-toe. So another idea I had was to supply a button you could press to manually raise the rate of a particular signal entering the conscious layer when it does something bad, or to manually soothe the rate of a particular signal when it does something good.
Machine intelligence programs generally start at the Esteem level of Maslow's Hierarchy of Needs, because they don't have a way to perceive Physiological, Safety & Security, or Social needs. However...
At the physiological level the computer feeds on electricity. Plug in a UPS that tells the computer when it is running on battery and you have a potentially useful input for perceiving physiological needs.
Give it the ability to "perceive" that it has "lost time" or has gaps in its time record (due to power failure) and you might be able to introduce the need for Safety and Security.
Introduce social needs by making it need to interact. It could "feel" lonely when lots of time passes between inputs from the keyboard.
Detecting lost time, time passed since last keyboard interaction, and running on battery could be among the inputs available to the unconscious layer that can periodically be bumped to the attention of the conscious layer.
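A minimal sketch of that sampling idea, in Python with stubbed-out sensor reads - read_on_battery and seconds_since_keyboard are hypothetical placeholders for a real UPS query and input-idle query:

import time

def read_on_battery():
    return False  # stub: True when the UPS reports mains power lost

def seconds_since_keyboard():
    return 0.0  # stub: idle time reported by the input subsystem

last_tick = time.monotonic()

while True:
    now = time.monotonic()
    needs = {
        # physiological: running on battery means "food" is running out
        "power": 1.0 if read_on_battery() else 0.0,
        # safety/security: a gap in the time record suggests a crash or power loss
        "lost_time": max(0.0, (now - last_tick) - 5.0),
        # social: loneliness grows with keyboard idle time
        "lonely": seconds_since_keyboard() / 3600.0,
    }
    last_tick = now
    urgent = max(needs, key=needs.get)  # bump the most pressing need upward
    print("most urgent need:", urgent, needs[urgent])
    time.sleep(1.0)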
The computer scientists in The Two Faces of Tomorrow approach a similar problem: training a computer sandboxed on a satellite to become aware. They give it those needs by, for example, making it aware that it will cease to function without electricity, then providing appropriate stimulation and observing the response.
The Adolescence of P-1 is another interesting work along these lines.
A robot was programmed to believe that it liked herring sandwiches. This was actually the most difficult part of the whole experiment. Once the robot had been programmed to believe that it liked herring sandwiches, a herring sandwich was placed in front of it. Whereupon the robot thought to itself, "Ah! A herring sandwich! I like herring sandwiches."
It would then bend over and scoop up the herring sandwich in its herring sandwich scoop, and then straighten up again. Unfortunately for the robot, it was fashioned in such a way that the action of straightening up caused the herring sandwich to slip straight back off its herring sandwich scoop and fall on to the floor in front of the robot. Whereupon the robot thought to itself, "Ah! A herring sandwich..., etc., and repeated the same action over and over and over again. The only thing that prevented the herring sandwich from getting bored with the whole damn business and crawling off in search of other ways of passing the time was that the herring sandwich, being just a bit of dead fish between a couple of slices of bread, was marginally less alert to what was going on than was the robot.
The scientists at the Institute thus discovered the driving force behind all change, development and innovation in life, which was this: herring sandwiches. They published a paper to this effect, which was widely criticised as being extremely stupid. They checked their figures and realised that what they had actually discovered was "boredom", or rather, the practical function of boredom. In a fever of excitement they then went on to discover other emotions, like "irritability", "depression", "reluctance", "ickiness" and so on. The next big breakthrough came when they stopped using herring sandwiches, whereupon a whole welter of new emotions became suddenly available to them for study, such as "relief", "joy", "friskiness", "appetite", "satisfaction", and most important of all, the desire for "happiness".
This was the biggest breakthrough of all.
~from The Hitchhiker's Guide to the Galaxy by Douglas Adams
Bonus
Have a look at Reinforcement Learning.
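For a concrete taste, here is a minimal tabular Q-learning sketch on a toy five-state corridor (not tic-tac-toe, and not tied to the layered architecture above - just the bare learn-from-a-scalar-reward loop that the stimulate/soothe button idea maps onto):

import random

# Toy environment: states 0..4 on a line; action 0 = left, 1 = right.
# Reaching state 4 pays +1 and ends the episode.
def step(state, action):
    nxt = max(0, min(4, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == 4 else 0.0
    return nxt, reward, nxt == 4

q = {(s, a): 0.0 for s in range(5) for a in (0, 1)}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(500):
    state, done = 0, False
    while not done:
        if random.random() < epsilon:
            action = random.choice((0, 1))  # explore
        else:
            action = max((0, 1), key=lambda a: q[(state, a)])  # exploit
        nxt, reward, done = step(state, action)
        best_next = max(q[(nxt, 0)], q[(nxt, 1)])
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt

# After training, the learned policy should be "always go right".
print({s: max((0, 1), key=lambda a: q[(s, a)]) for s in range(5)})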