How to stop the NAO robot from detecting people automatically? - nao-robot

I am working on a project that records audio with NAO and analyses the sound data. I first set the head yaw and pitch angles to specified values and then start recording. The problem is that whenever somebody faces the robot's camera, it turns its head to look at the person, which is really annoying.
It seems this face tracking runs by default; could anybody tell me how to disable it?
Also, is it possible to stop the robot from swaying its body while it is idle?
Thank you.

This is probably due to Autonomous Life - you can disable it from Choregraphe, from the robot's settings, or simply by pressing the chest button twice - see the documentation.
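For a scripted recording session, Autonomous Life can also be switched off programmatically. Below is a minimal sketch assuming the NAOqi Python SDK; the proxy-factory indirection is only there so the logic can be exercised without a robot, and the robot address is a placeholder you would replace:

```python
def disable_autonomous_life(proxy_factory):
    """Turn off Autonomous Life so NAO stops tracking faces and swaying.

    proxy_factory(name) should return a NAOqi service proxy. With the
    Python SDK (an assumption here) that could be:
        from naoqi import ALProxy
        proxy_factory = lambda name: ALProxy(name, "<robot-ip>", 9559)
    """
    life = proxy_factory("ALAutonomousLife")
    life.setState("disabled")        # stops face tracking and idle motion
    motion = proxy_factory("ALMotion")
    motion.wakeUp()                  # keep the joints stiff for recording
```

Disabling Autonomous Life also stops the idle "breathing" sway mentioned in the question; `wakeUp()` is there because disabling Life can leave the robot resting.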

Related

Strange Posture of NAO Robot

The Aldebaran NAO robot suddenly stopped moving. It boots properly, and both its web page and Choregraphe can be accessed.
However, it shows the strange posture in the attached photo and never moves its motors.
I tried a hard reset and re-installed NAOqi, but no luck yet.
Any recommendation for resolving this strange issue?
It seems your robot is decalibrated.
You will need to contact support.

Nao robot stopped recognizing and responding to spoken words

I'm working with two Nao robots. Their speech recognition abilities have been working alright so far, but recently they just stopped working altogether.
I am using Choregraphe, and if I type words into the dialog box the robot responds as intended, but when I speak the words out loud, the robot either does not recognize that words are being spoken at all, or just displays: Human: <...> and that's it. I have tried turning autonomous life on and off, and creating a simple dialog with a single rule, like "u:(_*) Hello.", but nothing works.
In autonomous life mode the robot's eyes turn blue and Nao nods occasionally as if it hears words, but I get no response and see nothing in the console.
The robot I have is a Nao model 6 (the dark grey one and, as far as I know, the newest model).
However, if I use a speech recognition box, Nao does understand the spoken words, just not in the dialog. Do you have any idea what's going on here?
Hi, I had a similar issue with Pepper.
I also encountered recognition stopping working.
In my Choregraphe log I had:
[WARN ] Dialog.StrategyRemote.personalData :prepare:0 FreeSpeechToText is not available
So the support let me know that:
The problem you observed happened because Pepper got a timeout from
the Nuance Remote server; she will then consider that the server is
unavailable and will not try to contact it again for one hour (during
which Free Speech will not work). This could be because the server is
indeed unavailable, or because of network issues.
Fortunately, to work around a bad network you can change those
parameters with ALSpeechRecognition.setParameter(parameter_name,
parameter_value).
The parameters that will interest you are:
RemoteTimeout: How long Pepper waits for a response from the Nuance
Remote server, in milliseconds. Default value: 10000.0 (ms)
RemoteTryAgain: Number of minutes before trying to use Nuance Remote
again after a timeout. Default value: 60.0 (minutes)
Note that you will need to reset those values again after each boot.
Maybe that can also help you with Nao.
Also I learned that the Remote ASR seems to have a limit of about 200-250 invocations per day.
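Since those values reset on every boot, it helps to wrap them in a small startup script. A minimal sketch, assuming the NAOqi Python SDK; the chosen values are examples rather than recommendations, and the proxy is passed in so the function can be tested without a robot:

```python
# Example values only; parameter names and defaults are from the support answer.
ASR_REMOTE_PARAMS = {
    "RemoteTimeout": 30000.0,   # ms; default 10000.0
    "RemoteTryAgain": 5.0,      # minutes; default 60.0
}

def apply_asr_params(asr, params=ASR_REMOTE_PARAMS):
    """Apply each remote-ASR parameter via ALSpeechRecognition.setParameter."""
    for name, value in sorted(params.items()):
        asr.setParameter(name, value)

# On the robot (NAOqi Python SDK assumed):
# from naoqi import ALProxy
# apply_asr_params(ALProxy("ALSpeechRecognition", "<robot-ip>", 9559))
```

Running this once after each boot (e.g. from an autoload script) saves re-entering the values by hand.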

Implementing collision detection python script on dronekit

We're building off of the Tower app, which was built with dronekit-android, and flying the 3DR Solo with it. We're thinking about adding some sort of collision detection to it.
Is it feasible to run a Python script on the drone that reads an IR or ultrasound sensor via the accessory bay and alerts the Android tablet when it detects something? That way, the tablet could tell the drone to fly backwards or similar.
Otherwise, would we use the dronekit-python libs to do that? How would we use a tablet/computer to get Tower-like functionality with that?
Thanks a bunch.

How to send waypoints programmatically to drone?

I am very new at this and trying to get an understanding of it. I have read a lot on the DroneKit-Python site trying to figure out exactly how I can communicate with the drone.
The drone I am currently using is an Iris+.
I have looked around and there is software that already provides this, but I want to be able to control it and more.
I want to set waypoints, tell it to fly to the given waypoints, and keep going to them. I also want it to be able to arm itself, which is in the example, and override the safety mechanism.
Here is the basics of what I am trying to use it for: have it fly up at a certain time, go to waypoints 1, 2, 3, 1, etc., then after X amount of time or on low battery return to the launch point and land.
I have found plenty of code that looks like what I need, though I don't know if it will work, and more importantly I don't even know how to start programming for this. Maybe I have the wrong approach?
I kind of want this to be a light API, so that in the future I can make a simple UI on my phone, insert some coordinates to give it waypoints, and that is it. I know there is software out there already that does this, but I want to remove the need for touching the drone. I want it to start and end autonomously.
If anyone could help provide some info that much would be greatly appreciated.
Assuming you have no companion computer (the Iris+ does not by default), that you are OK with running a ground-station app (you won't be out of range to send the "end mission on time expiry" command), and that driving the behaviour from your phone is important, I would be looking at DroneKit Android.
Some notes:
You're going to have to touch the drone at some point to attach the batteries.
You can arm the device from DroneKit.
You can override the safety mechanism from a script. I hope you have a lot of money to pay for the new drones you're going to have to buy when they crash, and for all the litigation from damaged people and property (in other words, "don't do it").
The default behaviour is to return the device to launch (RTL) on low battery. This is configurable.
Setting a time is more problematic. You can have a timer in a script that then sends return-to-launch, but the script needs to be connected to the UAV. This means that you either have to be running a connected ground station (which might potentially be out of range) or a companion computer.
The Iris+ does not have a companion computer. You would have to install one or connect from a ground control station.
DroneKit-Python runs on Linux, macOS or Windows. You can't just run it on an ordinary phone, though you could find some other mechanism to send messages/scripts to it running on a companion computer.
DroneKit Android runs on Android. We do have a planned iOS version too. In theory these could run on a companion computer, but in practice they are currently only used as ground stations.
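To make the "fly waypoints 1, 2, 3, 1, ... then RTL after X minutes" idea concrete, here is a hedged sketch of the mission loop. The goto/rtl callables are injected so the loop can be tested off-line; with dronekit-python (an assumption, not verified against your setup) they would wrap vehicle.simple_goto and switching the flight mode to RTL:

```python
import time

def run_mission(goto, rtl, waypoints, duration_s,
                now=time.time, dwell=lambda: None):
    """Cycle through waypoints until duration_s elapses, then return to launch.

    goto(wp) sends the vehicle to a waypoint; rtl() triggers return-to-launch.
    With dronekit-python (an assumption, not tested here) these could be, e.g.:
        goto = lambda wp: vehicle.simple_goto(LocationGlobalRelative(*wp))
        rtl  = lambda: setattr(vehicle, "mode", VehicleMode("RTL"))
    """
    deadline = now() + duration_s
    visited = []
    i = 0
    while now() < deadline:
        wp = waypoints[i % len(waypoints)]   # 1, 2, 3, 1, 2, 3, ...
        goto(wp)
        visited.append(wp)
        dwell()   # crude pause; a real script would poll the vehicle position
        i += 1
    rtl()
    return visited
```

Note that a low-battery RTL needs no code at all: as the answer says, it is the autopilot's default failsafe.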

How to get started on creating a safe that will open and close upon entering a passcode into it?

I want to work on a project dealing with some hardware things but I have no knowledge in this area and I would really appreciate some help with getting started.
I am programming in Node.js so I have had a look at NodeBots and Arduino; however, I am not sure if this is the right place to even start.
The main thing I want to be able to do is:
create a safe where it will open when I enter the correct passcode into an iPad or some touch screen
be able to set the passcode
When working on such a project, do I need the hardware first? (If so, what are some things I should get?)
Secondly, where do I start coding and what are good languages for this? (I am unsure which aspect of this project to focus on first.)
I made one with a keypad (not connected to other devices, but it's similar). In my experience, the mechanical part is the hardest: designing and building the safe itself (I used plexiglass) was the worst part.
As for your questions, I think that yes, you should start with the hardware design, or at least decide:
how are you closing/opening the door? I used a stepper motor, but there are tons of choices: door locks, servomotors, DC motors...
how are you detecting whether it is open or closed?
what will be the interface? You mentioned an iPad, so I suppose you want some Bluetooth Apple-enabled device.
Then you will be able to start coding. The code will be really simple once you know what the hardware will look like (in the end, it's just "wait for the correct passcode, then open the safe, then close it again").
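To show just how simple that software side is, here is a minimal passcode state machine sketched in Python (a Node.js/Johnny-Five version would look much the same); the actual lock driving is hardware-specific and only marked with a comment:

```python
class Safe:
    """Minimal passcode logic: wait for the code, open, close again."""

    def __init__(self, passcode):
        self._passcode = passcode
        self.open = False

    def set_passcode(self, old, new):
        """Change the passcode; requires the current one."""
        if old == self._passcode:
            self._passcode = new
            return True
        return False

    def try_open(self, attempt):
        if attempt == self._passcode:
            self.open = True   # here you would drive the motor/lock
        return self.open

    def close(self):
        self.open = False      # here you would re-engage the lock
```

The interface (keypad, iPad over Bluetooth, etc.) only needs to feed attempts into `try_open`, which is why the hardware decisions dominate the project.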
