Synchronizing Nao Robots Via Choregraphe

Is it possible to synchronize 2 Nao robots without using Python? We want to avoid using Python as much as possible, but if it's the only way to get them synchronized, we will switch.

In Choregraphe, using the embedded connection, you can only connect to one robot at a time.
However, it's easy to connect to another robot from a box in Choregraphe by typing only two lines of Python code.
Here's an example:
1. Connect to your NAO as usual, and make a simple program to make it say "hello".
2. Then create another box to ask the other robot to speak. Create an empty box (right click on the central area), double click on it to show the code, and in the onStart method add these lines:
tts = ALProxy("ALTextToSpeech", "ip of the other robot", 9559)
tts.say("hello")
And tadaa, both of your robots are speaking, in parallel or sequentially depending on how you connect the two boxes...
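For reference, the complete box script could look something like this (MyClass/GeneratedClass is the skeleton Choregraphe generates for an empty box; the IP address is a placeholder for your second robot):

class MyClass(GeneratedClass):
    def __init__(self):
        GeneratedClass.__init__(self)

    def onInput_onStart(self):
        # ALProxy is available inside Choregraphe box scripts without an import;
        # "192.168.1.12" is a placeholder for the second robot's IP address
        tts = ALProxy("ALTextToSpeech", "192.168.1.12", 9559)
        tts.say("hello")
        self.onStopped()  # fire the box's onStopped output so the flow can continue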

Related

How do I send simulated keystroke to steam games on Windows 10?

I want to create a simple game controller/bot inside of a Steam game, similar to the one done here on GTA V, to try out machine learning techniques. I've tried everything from the pyautogui package, pynput, and this go-to method that was also used in the previous link; all of them work on desktop apps and browsers, but none of them work once I start the game.
I've also tried making and compiling this C code that I'm guessing the Python is wrapping around, but it's the same issue where it works until I enter the game screen.
I'm guessing the issue is something with the method of input; in the previous case it had to be changed to scan codes instead of VKs, but I have no idea how to even figure out which input method must be used.
My current setup is Python 3 on Windows 10 Home edition.
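For context, the ScanCodes-vs-VKs distinction refers to sending input as hardware scan codes through SendInput, which DirectInput games typically read while ignoring virtual-key events. A minimal ctypes sketch of that technique (scan code 0x11, "W", is just an example):

import ctypes
import time

SendInput = ctypes.windll.user32.SendInput
PUL = ctypes.POINTER(ctypes.c_ulong)

class KeyBdInput(ctypes.Structure):
    _fields_ = [("wVk", ctypes.c_ushort), ("wScan", ctypes.c_ushort),
                ("dwFlags", ctypes.c_ulong), ("time", ctypes.c_ulong),
                ("dwExtraInfo", PUL)]

class HardwareInput(ctypes.Structure):
    _fields_ = [("uMsg", ctypes.c_ulong), ("wParamL", ctypes.c_short),
                ("wParamH", ctypes.c_ushort)]

class MouseInput(ctypes.Structure):
    _fields_ = [("dx", ctypes.c_long), ("dy", ctypes.c_long),
                ("mouseData", ctypes.c_ulong), ("dwFlags", ctypes.c_ulong),
                ("time", ctypes.c_ulong), ("dwExtraInfo", PUL)]

class Input_I(ctypes.Union):
    # union must include all three members so sizeof(Input) matches Win32's INPUT
    _fields_ = [("ki", KeyBdInput), ("mi", MouseInput), ("hi", HardwareInput)]

class Input(ctypes.Structure):
    _fields_ = [("type", ctypes.c_ulong), ("ii", Input_I)]

KEYEVENTF_SCANCODE = 0x0008
KEYEVENTF_KEYUP = 0x0002

def press_key(scan_code):
    extra = ctypes.c_ulong(0)
    ii = Input_I()
    ii.ki = KeyBdInput(0, scan_code, KEYEVENTF_SCANCODE, 0, ctypes.pointer(extra))
    x = Input(ctypes.c_ulong(1), ii)  # type 1 = INPUT_KEYBOARD
    SendInput(1, ctypes.byref(x), ctypes.sizeof(x))

def release_key(scan_code):
    extra = ctypes.c_ulong(0)
    ii = Input_I()
    ii.ki = KeyBdInput(0, scan_code, KEYEVENTF_SCANCODE | KEYEVENTF_KEYUP, 0,
                       ctypes.pointer(extra))
    x = Input(ctypes.c_ulong(1), ii)
    SendInput(1, ctypes.byref(x), ctypes.sizeof(x))

press_key(0x11)   # scan code 0x11 = "W"
time.sleep(0.1)
release_key(0x11)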

RemoteApp using drivers for scanner

One of my clients wants to use a check scanner. They purchased software and have a scanner however they do not want to store any of the data on the workstation the scanner is attached to. I'm wondering if we can utilize RemoteApp to deploy the software? I've built a test of the application being deployed via RemoteApp and it seems to work however I don't have a check scanner to test with. Will I run into driver issues or should this POC work?
I set up a test environment using RemoteApp and the software works fine; however, I do not have a check scanner to test with.
It should work OK, but it will often depend on the scanner software. Usually these scanners simply type as if the keyboard were being pressed. So you place your cursor in the field on the form, scan, and it "types in" what the scanner saw. So you can launch Word, or Access, or even Notepad for this to work. If you are using Remote Desktop, then this should also work. If the scanner does not type keys as it scans, then you can't use Remote Desktop, but in most cases it should work.
And in most cases, the field (text box) you scan into will likely need to parse out the bits and parts of the string into separate text boxes, as sketched below.
So given how most scanners work, you should be OK. You install the scanner software on the client side, and all it really does is press keys as if you were typing. The trick then becomes ensuring that your cursor is in the right text box before you scan.
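As a sketch of that parsing step in Python, assuming a purely hypothetical "routing:account:check" layout for the scanned string (the real format depends on the scanner and should be taken from its documentation):

def parse_scan(scan):
    # hypothetical delimited layout; real check scanners emit a MICR line
    # whose exact format is device-specific
    routing, account, check_no = scan.split(":")
    return {"routing": routing, "account": account, "check_no": check_no}

print(parse_scan("021000021:123456789:1001"))
# {'routing': '021000021', 'account': '123456789', 'check_no': '1001'}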

Nao robot stopped recognizing and responding to spoken words

I'm working with two Nao robots. Their speech recognition abilities have been working alright so far, but recently they just stopped working altogether.
I am using Choregraphe and I can enter words in the dialog box and the robot will respond as intended, but when I speak words out loud, the robot will either not even recognize words being spoken, or will just display: Human: <...> and that's it. I have tried turning autonomous life on and off, and creating a simple dialog with only one line of functionality, like: "u:(_*) Hello.", and it doesn't do anything.
In autonomous life mode the robot's eyes go blue and Nao nods occasionally as if it would hear words, but I get no response and see nothing in the console.
The robot I have is Nao model 6 (the dark grey one and as far as I know the newest model).
However if I use a speech recognition box, Nao will understand the spoken words, just not in the dialog. Do you have any idea what's going on here?
Hi, I had a similar issue with Pepper.
I also encountered recognition stopping working.
In my Choregraphe log I had:
[WARN ] Dialog.StrategyRemote.personalData :prepare:0 FreeSpeechToText is not available
Support let me know that:
The problem you observed happened because Pepper got a timeout from the Nuance Remote server; she will consider that the server is unavailable and will not try to contact it again for one hour (during which Free Speech will not work). This could be because the server is indeed unavailable, or because of network issues.
Fortunately, to work around a bad network you can change those parameters with ALSpeechRecognition.setParameter(parameter_name, parameter_value).
The parameters that will interest you are:
RemoteTimeout: how long Pepper waits for a response from the Nuance Remote server, in milliseconds. Default value: 10000.0 (ms).
RemoteTryAgain: number of minutes before trying to use Nuance Remote again after a timeout. Default value: 60.0 (minutes).
Note that you will need to reset those values again after each boot.
Maybe that can also help you with Nao.
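For example, a minimal Python sketch of raising those limits (the address "nao.local" is a placeholder, the values are illustrative, and as noted above they must be re-applied after each boot):

from naoqi import ALProxy

asr = ALProxy("ALSpeechRecognition", "nao.local", 9559)  # placeholder address
asr.setParameter("RemoteTimeout", 30000.0)  # wait up to 30 s for the Nuance Remote server
asr.setParameter("RemoteTryAgain", 1.0)     # retry after 1 minute instead of 60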
Also I learned that the Remote ASR seems to have a limit of about 200-250 invocations per day.

Autohotkey - How to detect all input areas/checkboxes in an application?

Is there a way to detect input areas such as textboxes and checkboxes within an application? I want to label each input area with a number so I can jump between input fields with AHK using my keyboard.
For example: Once the script is activated and active window is Google Chrome, Chrome could have its address bar labeled #1. When I press "1", the cursor will be directed to that area.
I'm basically trying to create a workaround for applications that are not very keyboard friendly.
Most Windows applications use standard Windows elements.
For these...
https://autohotkey.com/docs/commands/WinGet.htm - with the ControlList parameter, gets a list of all standard controls.
For those:
https://autohotkey.com/docs/commands/ControlGet.htm - can get the type of control, and
https://autohotkey.com/docs/commands/ControlGetPos.htm - can get position and dimensions of the control.
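If you want to prototype what those commands can see without AHK, the same Win32 enumeration they wrap can be sketched in Python with ctypes (the window title "Untitled - Notepad" is just an example):

import ctypes
from ctypes import wintypes

user32 = ctypes.windll.user32
user32.FindWindowW.restype = wintypes.HWND  # avoid handle truncation on 64-bit
EnumChildProc = ctypes.WINFUNCTYPE(wintypes.BOOL, wintypes.HWND, wintypes.LPARAM)

def list_controls(window_title):
    hwnd = user32.FindWindowW(None, window_title)
    controls = []

    def callback(child, lparam):
        buf = ctypes.create_unicode_buffer(256)
        user32.GetClassNameW(child, buf, 256)            # control class, e.g. "Edit", "Button"
        rect = wintypes.RECT()
        user32.GetWindowRect(child, ctypes.byref(rect))  # screen position and size
        controls.append((buf.value, rect.left, rect.top,
                         rect.right - rect.left, rect.bottom - rect.top))
        return True                                      # keep enumerating

    user32.EnumChildWindows(hwnd, EnumChildProc(callback), 0)
    return controls

for i, (cls, x, y, w, h) in enumerate(list_controls("Untitled - Notepad"), 1):
    print(i, cls, x, y, w, h)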
Some can also be controlled through COM: https://gist.github.com/kheybot/7026077#automation-of-office-applications
Commandline and console programs can sometimes be communicated with directly, using the standard streams (STDIN, STDOUT, STDERR, LPTn, PRN, NUL), or you can communicate with the terminal that displays the program using COM or WSH:
https://gist.github.com/kheybot/7026077#interact-with-command-line
This is important for a lot of legacy data-entry programs.
Browsers (e.g. Chrome), unfortunately, don't use these heavyweight controls for page content, because there may be far too many elements on a page; but there are other options for communicating with them, such as COM, DDE, etc., to talk to the DOM:
https://gist.github.com/kheybot/7026077#browser-automation
For a web browser, I'd be inclined to go for a hybrid approach, combining AHK-handling of the web browser's input areas (address bar, etc) with a Greasemonkey/Tampermonkey script to handle input fields within the web page itself - the Javascript will be better able to handle input areas using the DOM than any screen-scraping software could. There's also the possibility of using a functional-testing suite like Selenium for automation, and using the browser's plug-in functionality to write an extension to handle its UI.
This would mean that you now have TWO programming problems, of course...
Java applications, Flash applications, HTML5 applications, some graphic design software, and just about all computer games are essentially just graphics, with no way of externally identifying controls.
For these, you have to use basic screen scraping techniques: http://www.autohotkey.com/docs/commands/ImageSearch.htm and http://www.autohotkey.com/docs/commands/PixelSearch.htm to identify specific areas, which can only really be done by individually programming the specific control.
One option for generic detection, though, is to have something that detects shadows (drop shadows, buttonized components, etc.) and allows you to tab between and send a click to the components detected that way. Unfortunately, modern flat design means this won't always work, so you could also try searching for flat-colored rectangles... except sometimes they have curved corners. Because graphic designers hate people.
At this point, you will hopefully see that what you have here is an infinite rabbithole of fractal complexity.
You can make a simple ControlGet solution which doesn't work for a lot of applications you would use regularly... or you can create a hybrid approach that targets many applications individually, while also trying to have a generic solution for unrecognized apps.
If you are creating this for your own use, I'd say aim for making it work with the apps you know and use regularly, and that should be enough.
If you're writing it as accessibility software for others to use, I'd say aim for having it user-configurable for each application: let them control what input element they want to click, and in what order, because auto-detection will never work perfectly, and will only rarely pick the ideal solution.
The answer is yes, if the number of check boxes and their position in the application is fixed and you know on which machine the automation takes place.
Please research ImageSearch on how to locate them from screenshots.
If you know the X/Y position of the checkbox in the window, you can also use PixelGetColor to check if a check is visible or not.
You should also examine your application with the included AutoIt Spy. This program shows you what it can see in the application window.
To get your labelling, check out the Gui commands. If you make your GUI transparent and don't give it focus, you can write labels on top of the application.
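As a rough illustration of the PixelGetColor idea, here is the same check sketched in Python via the Win32 GetPixel call (the coordinates and the expected colour are hypothetical):

import ctypes

user32 = ctypes.windll.user32
gdi32 = ctypes.windll.gdi32

def pixel_rgb(x, y):
    hdc = user32.GetDC(0)              # device context for the whole screen
    color = gdi32.GetPixel(hdc, x, y)  # COLORREF, laid out as 0x00BBGGRR
    user32.ReleaseDC(0, hdc)
    return color & 0xFF, (color >> 8) & 0xFF, (color >> 16) & 0xFF

# hypothetical: the checkbox interior at (412, 335) is white when unchecked
print("checked" if pixel_rgb(412, 335) != (255, 255, 255) else "unchecked")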

RDP connectivity/responsiveness test

I want to write an app to test whether a Windows machine is responding correctly to RDP (Remote Desktop) - i.e. to check if the machine not only allows the connection, but is also responding normally, and is not hung or otherwise responding abnormally.
Is there a library or utility that I can use to do this? My searches turned up full RDC clients, but I'm hoping there's something out there that at least offers an API for testing. I would most like to use Java or a scripting language to do this, but I'm open to suggestions.
You can find some good answers in this question: Programmatically create and launch an RDP session (without gui)
Because RDP is a constantly evolving proprietary protocol, I'm guessing there isn't some simple open-source code you can take and use. This leaves us with two possible paths to follow:
Use Microsoft RDP ActiveX control (on Windows)
Launch mstsc.exe and send keyboard events to it (also on Windows, using your favorite language)
For the second option, I suggest AutoHotkey. It is perfect for automating Windows programs and comes with a powerful library. It also has a strong community behind it, so you can find lots of useful scripts on the internet. I use it to control Winamp (like 'I hate this song! delete it and move to next') (well, technically 'move to the next song and delete the previous', because you can't delete the file while it's in use, but you get the idea). If you choose this path, I can help you with the script.
Found this on Experts Exchange:
use Net::Telnet ();
$t = new Net::Telnet (Timeout => 10, Prompt => '', Port => 3389);
if ($t->open("computer.name.or.ip")) {
    print "Connect successful\n";
}
else {
    print "Could not connect\n";
}
The idea was to attempt a connection and if it can't connect within 'x' amount of seconds, assume it isn't going to work. Gets a bit more complicated if you're trying to see if a login for a specific user works or not, but this should at least get you started.
NOTE: As pointed out in the comments, the original solution left out the RDP port, so I included that in this...
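If Perl isn't your thing, the same reachability check is easy to sketch in Python; like the snippet above, it only assumes that TCP port 3389 accepts a connection within the timeout:

import socket

def rdp_port_open(host, port=3389, timeout=10):
    # a completed TCP handshake means something is listening on the RDP port;
    # it does not prove a full RDP session would succeed
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print("Connect successful" if rdp_port_open("computer.name.or.ip") else "Could not connect")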
