NAO robot V4 - wrist motors issue

I have two NAO robots V4. One of them has no motors or sensors in its wrist. Is it possible to replace its wrist with the second robot's wrist, which does have motors? However, the wrist motors don't appear in Choregraphe. Should I define these motors, or what should I do to make them recognized by its system?
Many Thanks :)

It seems like you have a hardware issue, so just contact support. There's a form on the website of the company that produces these robots. Or try something like support#aldebaran.com

Related

Quartus 18.0 Lite MAX10 device board model number not listed in programmer menu

I have an assignment at my university which involves using Quartus - they use Quartus 18.0 Lite.
The board is a Terasic DE10-Lite, which uses the chip 10M50DAF484C7G.
I have installed this on both my Windows and Linux machines, with the same issue on both.
I downloaded Quartus with the MAX10 device .qdz file, so the device support should be in there.
Note: when creating a project I can set the device to 10M50DAF484C7G, but when it comes to uploading my logic circuit design to the board, it is not listed in the choices. I have attached a screenshot for clarity.
If anyone is able to help it would be greatly appreciated, as without this I cannot test my work on the weekends - our electronics lab is only open 9am to 5pm on weekdays for obvious health and safety reasons.
I see "10M50DAF484" is listed, you can probably use that, since the C7G just indicates the operating temperature (basic), fabric speed grade (medium), and version (production).

Nao robot stopped recognizing and responding to spoken words

I'm working with two Nao robots. Their speech recognition abilities have been working alright so far, but recently they just stopped working altogether.
I am using Choregraphe, and when I enter words in the dialog box the robot responds as intended, but when I speak words aloud, the robot either doesn't recognize that words are being spoken at all, or just displays Human: <...> and nothing more. I have tried turning autonomous life on and off, and creating a simple dialog that has only one line of functionality, like "u:(_*) Hello.", and it doesn't do anything.
In autonomous life mode the robot's eyes go blue and Nao nods occasionally as if it were hearing words, but I get no response and see nothing in the console.
The robot I have is a Nao model 6 (the dark grey one and, as far as I know, the newest model).
However, if I use a speech recognition box, Nao will understand the spoken words, just not in the dialog. Do you have any idea what's going on here?
Hi, I had a similar issue with Pepper: speech recognition also stopped working.
In my Choregraphe log I had:
[WARN ] Dialog.StrategyRemote.personalData :prepare:0 FreeSpeechToText is not available
Support let me know that:
The problem you observed happened because Pepper got a timeout from the Nuance Remote server; she will consider that the server is unavailable and will not try to contact it again for one hour (during which Free Speech will not work). This could be because the server is indeed unavailable, or because of network issues.
Fortunately, to work around a bad network you can change those parameters with ALSpeechRecognition.setParameter(parameter_name, parameter_value).
The parameters that will interest you are:
RemoteTimeout: how long Pepper waits for a response from the Nuance Remote server, in milliseconds. Default value: 10000.0 (ms)
RemoteTryAgain: number of minutes before trying to use Nuance Remote again after a timeout. Default value: 60.0 (minutes)
Note that you will need to set those values again after each boot.
Maybe that can also help you with Nao.
Also I learned that the Remote ASR seems to have a limit of about 200-250 invocations per day.
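For reference, here is a minimal sketch of applying those parameters with the Python NAOqi SDK; the robot's IP address and the chosen values are assumptions to adapt to your setup:

# Minimal sketch: relax the Nuance Remote timeout/retry settings via
# ALSpeechRecognition. The IP address and values below are assumptions.
# Remember: these parameters revert to their defaults after each boot.
from naoqi import ALProxy

ROBOT_IP = "192.168.1.10"  # assumption: replace with your robot's address
asr = ALProxy("ALSpeechRecognition", ROBOT_IP, 9559)

asr.setParameter("RemoteTimeout", 20000.0)  # wait 20 s for the server (default 10 s)
asr.setParameter("RemoteTryAgain", 5.0)     # retry after 5 min (default 60 min)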

How to stop the NAO robot from detecting people automatically?

I am working on a project for NAO recording, and I am trying to analyse the sound data. I first set the head yaw and pitch angles to specified degrees, and then start the recording process. The problem comes when somebody faces its camera: the robot moves its head to face the person, which is really annoying.
It seems that this face tracking runs by default - could anybody tell me how to disable it?
Also, is it possible to stop the robot from shaking its body when it is at rest?
Thank you.
This is probably due to Autonomous Life - you can disable it from Choregraphe, from robot settings, or simply by pressing the chest button twice - see the documentation.
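If you would rather do this programmatically at the start of your recording script, here is a small sketch with the Python NAOqi SDK; the robot IP and head angles are assumptions:

# Sketch: disable Autonomous Life so NAO stops tracking faces and
# swaying ("breathing") while you record. IP and angles are assumptions.
from naoqi import ALProxy

ROBOT_IP = "192.168.1.10"  # assumption: your robot's address
life = ALProxy("ALAutonomousLife", ROBOT_IP, 9559)
life.setState("disabled")  # also stops the idle body animation

# Disabling life relaxes the motors; wake them up, then aim the head.
motion = ALProxy("ALMotion", ROBOT_IP, 9559)
motion.wakeUp()
motion.setAngles(["HeadYaw", "HeadPitch"], [0.0, 0.3], 0.2)  # radians, speed fraction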

Implementing collision detection python script on dronekit

We're building off of the Tower app, which was built with dronekit-android, and flying the 3DR Solo with it. We're thinking about adding some sort of collision detection to it.
Is it feasible to run a Python script on the drone that reads an IR or ultrasound sensor via the accessory bay and yells at the Android tablet when it detects something? That way, the tablet can tell the drone to fly backwards or something.
Otherwise, would we use the dronekit-python libs to do that? How would we use a tablet / computer to get Tower-like functionality with that?
Thanks a bunch.
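To illustrate the dronekit-python route, here is a rough sketch of an on-drone companion script that backs the vehicle up when a distance sensor trips. read_distance_cm() is a hypothetical stand-in for whatever IR/ultrasound driver you attach via the accessory bay, and the connection string and thresholds are assumptions:

import time
from dronekit import connect, VehicleMode
from pymavlink import mavutil

def read_distance_cm():
    """Hypothetical sensor read; wire this to your accessory-bay sensor."""
    raise NotImplementedError

# Assumption: the Solo's autopilot is reachable on this local UDP port.
vehicle = connect("udpin:0.0.0.0:14550", wait_ready=True)

def fly_backwards(speed_m_s=0.5, duration_s=1.0):
    # Body-frame velocity command: negative x is backwards.
    msg = vehicle.message_factory.set_position_target_local_ned_encode(
        0, 0, 0,                             # time_boot_ms, target system/component
        mavutil.mavlink.MAV_FRAME_BODY_NED,  # velocities in the body frame
        0b0000111111000111,                  # type_mask: use only the velocity fields
        0, 0, 0,                             # position (ignored)
        -speed_m_s, 0, 0,                    # vx, vy, vz
        0, 0, 0, 0, 0)                       # accel, yaw, yaw rate (ignored)
    end = time.time() + duration_s
    while time.time() < end:
        vehicle.send_mavlink(msg)            # resend at ~10 Hz while backing up
        time.sleep(0.1)

while True:
    if read_distance_cm() < 100:             # assumption: react inside 1 m
        vehicle.mode = VehicleMode("GUIDED") # velocity commands need GUIDED mode
        fly_backwards()
    time.sleep(0.05)

This keeps the reaction loop on the drone itself; the tablet app could instead just be notified over the same Wi-Fi link if you want the Tower side to stay in charge.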

iOS 6 Audio multi-route - use external microphone AND internal speaker simultaneously

This presentation on Core Audio in iOS 6 (http://www.slideshare.net/invalidname/core-audioios6portland) seems to suggest (slide 87) that it is possible to override the automatic input/output routing of audio devices using AVAudioSession.
So, specifically: is it possible to have an external mic plugged into an iOS 6 device and output sound through the internal speaker? I've seen this asked before on this site (iOS: Route audio-IN thru jack, audio-OUT thru inbuilt speaker) but no answer was forthcoming.
Many thanks!
According to Apple's documentation:
https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVAudioSession_ClassReference/Reference/Reference.html#//apple_ref/occ/instm/AVAudioSession/overrideOutputAudioPort:error:
https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVAudioSession_ClassReference/Reference/Reference.html#//apple_ref/doc/c_ref/AVAudioSessionPortOverride
You can override to the speaker, but look more closely at the reference for the C-based Audio Session Services (which are being deprecated, but still have helpful information):
https://developer.apple.com/library/ios/documentation/AudioToolbox/Reference/AudioSessionServicesReference/Reference/reference.html#//apple_ref/doc/constant_group/Audio_Session_Property_Identifiers
If a headset is plugged in at the time you set this property's value to kAudioSessionOverrideAudioRoute_Speaker, the system changes the audio routing for input as well as for output: input comes from the built-in microphone; output goes to the built-in speaker.
I would suggest looking at the documentation for iOS 7 to see if they've added any new functionality. I'd also suggest running tests with external devices like iRiffPort or USB based inputs (if you have an iPad with CCK).
