Changing Choregraphe Dialog Confidence Interval for Nao

I am currently working with a Nao robot using Choregraphe and am trying to lower the confidence threshold required to act upon a request made through QiChat from the default 50% to 30%.
I have found this solution, https://community.ald.softbankrobotics.com/en/forum/change-speech-engine-confidence-threshold-choregraphe-dialog-8624, but unfortunately the scripting functionality for Dialog boxes is deprecated in Choregraphe v2.1. Does anyone know what the "new" way to do this is?

I have found the solution. Scripting for Dialog boxes is not allowed, but you can add a Python script box before the Dialog box to change this threshold. The code that should go in this box is below.
class MyClass(GeneratedClass):
    def __init__(self):
        GeneratedClass.__init__(self)

    def onLoad(self):
        # put initialization code here
        pass

    def onUnload(self):
        # put clean-up code here
        pass

    def onInput_onStart(self):
        # Lower confidence threshold from 50% to 30%
        ALDialog = ALProxy('ALDialog')
        ALDialog.setASRConfidenceThreshold(0.3)
        self.onStopped()  # activate the output of the box

    def onInput_onStop(self):
        self.onUnload()  # it is recommended to reuse the clean-up as the box is stopped
        self.onStopped()  # activate the output of the box

Two ways to increase the recognition rate:
1) Add more variants to your input - for example, if you're listening for "yes", you should also make sure you listen for "yep", "yup", "yeah", "sure", "okay", "fine", etc. Concepts are useful for that - e.g. concept:(yes) [yes yep yup yeah sure okay fine] and then u:(~yes) ... - see the qichat doc.
2) As you suggest, set the confidence threshold - a more compact version (I prefer less boilerplate):
class MyClass(GeneratedClass):
    def onInput_onStart(self):
        # Lower confidence threshold from 50% to 30%
        ALProxy('ALDialog').setASRConfidenceThreshold(0.3)
        self.onStopped()  # activate the output of the box
HOWEVER, note that this is not very elegant; you will need to reset it afterwards (a sketch of a reset box follows below), and it greatly increases the risk of false positives, so you should only use it if you can't solve the problem just by adding more variants.
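If you do lower it this way, a minimal sketch of a matching "reset" box, run once the Dialog box has finished, could look like the following (assuming you want to restore the 50% default mentioned above):
class MyClass(GeneratedClass):
    def onInput_onStart(self):
        # Restore the confidence threshold to the assumed 50% default
        ALProxy('ALDialog').setASRConfidenceThreshold(0.5)
        self.onStopped()  # activate the output of the box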

setASRConfidenceThreshold is for Nao V5; in Pepper and Nao V6 you should use setConfidenceThreshold:
class MyClass(GeneratedClass):
    def onInput_onStart(self):
        # Lower confidence threshold from 50% to 30%
        ALProxy('ALDialog').setConfidenceThreshold("BNF", 0.3)
        self.onStopped()  # activate the output of the box


How can I get updated system DPI information from X11 in a C program?

I'm trying to create a DPI aware app which responds to user requested DPI change events by resizing the window.
The program in question is created in C and uses SDL2, however to retrieve system DPI information I use xlib directly, as the SDL DPI support in X11 is lacking.
I found two ways to get the correct DPI information on program startup, both involving getting Xft.dpi information from Xresource: one is to use XGetDefault(display, "Xft", "dpi"), while the other is to use XResourceManagerString, XrmGetStringDatabase and XrmGetResource. Both of them return the correct DPI value when the program is created.
The problem is, if the user changes the system scale while the program is running, both XGetDefault and XrmGetResource still return the old DPI value, even though when I run "xrdb -query | grep Xft.dpi" the value has indeed changed.
Does anyone know a way to get the updated Xft.dpi value?
I found out a way to do exactly what I wanted, even though it's rather hackish.
The solution (using XLib) is to create a new, temporary connection to the X server using XOpenDisplay and XCloseDisplay, and poll the resource information from that new connection.
The reason this is needed is that Xlib fetches the resource information only once per connection, when the connection is opened, and never updates it. Therefore, by opening a new connection you get the updated Xresource data, which can then be used with the old main connection.
Be mindful that constantly opening and closing new X connections may not be great for performance, so only do it when you absolutely need to. In my case, since the window has borders, I only check for DPI changes when the title height has changed, as a DPI change will change the size of your title border due to font size differences.
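For illustration, here is a rough sketch of the same idea using the python-xlib bindings instead of plain C (treat the exact python-xlib calls as an assumption, not tested here; in C you would use XOpenDisplay / XResourceManagerString / XrmGetResource / XCloseDisplay as described above). It opens a throwaway connection, reads the RESOURCE_MANAGER property from the root window, parses Xft.dpi out of it, and closes the connection again:
# Sketch only: re-read Xft.dpi via a temporary X connection (assumed python-xlib API).
from Xlib import display, Xatom

def current_xft_dpi():
    tmp = display.Display()   # temporary connection, so the resource string is fresh
    try:
        prop = tmp.screen().root.get_full_property(Xatom.RESOURCE_MANAGER, Xatom.STRING)
        if prop is None:
            return None
        text = prop.value.decode() if isinstance(prop.value, bytes) else prop.value
        for line in text.splitlines():
            if line.startswith("Xft.dpi:"):
                return float(line.split(":", 1)[1].strip())
        return None
    finally:
        tmp.close()           # throw the temporary connection away again

print(current_xft_dpi())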
First off it must be noted that the value of the Xft.dpi resource isn't necessarily accurate -- it depends on whether the system and/or user login scripts have set it correctly or not.
Also it is important to remember that the Xft.dpi resource is intended to be used by the Xft library, not by arbitrary programs looking for the screen resolution.
The Xft.dpi resource can be set as follows. This example effectively only deals with a display with a single screen, and note that it uses xdpyinfo. This also shows how it might not be exact, but could be rounded. Finally this example shows calculation of both the horizontal and vertical resolution, but Xft really only wants the horizontal resolution:
SCREENDPI=$(xdpyinfo | sed -n 's/^[ ]*resolution:[ ]*\([^ ][^ ]*\) .*$/\1/p;//q')
SCREENDPI_X=$(expr "$SCREENDPI" : '\([0-9]*\)x')
SCREENDPI_Y=$(expr "$SCREENDPI" : '[0-9]*x\([0-9]*\)')
# Default to the true resolution reported by the server....
FontXDPI=$SCREENDPI_X
FontYDPI=$SCREENDPI_Y
# N.B.: If true screen resolution is within 10% of 100DPI it makes the most
# sense to claim 100DPI to avoid font-scaling artifacts for bitmap fonts.
if expr \( $SCREENDPI_X / 100 = 1 \) \& \( $SCREENDPI_X % 100 \<= 10 \) >/dev/null; then
    FontXDPI=100
fi
if expr \( $SCREENDPI_Y / 100 = 1 \) \& \( $SCREENDPI_Y % 100 \<= 10 \) >/dev/null; then
    FontYDPI=100
fi
echo "Xft.dpi: ${FontYDPI}" | xrdb -merge
I really wish I knew why Xft didn't at least try to find out the screen's resolution itself instead of relying entirely on its "dpi" resource being set, but I've found that the current implementation only uses the resource setting, so something like the above is always necessary to set the resource properly. Further, one must also make sure the X server itself has been properly configured with the correct physical screen dimensions.
From a C program you want to do just what xdpyinfo itself does and skip all the nonsense about Xft's resources. Here's the xdpyinfo code paraphrased:
#include <X11/Xlib.h>

Display *dpy;
char *displayname = NULL;   /* NULL means "use $DISPLAY" */
int scr;

dpy = XOpenDisplay(displayname);
for (scr = 0; scr < ScreenCount(dpy); scr++) {
    int xres, yres;

    /*
     * there are 2.54 centimeters to an inch; so there are 25.4 millimeters.
     *
     *     dpi = N pixels / (M millimeters / (25.4 millimeters / 1 inch))
     *         = N pixels / (M inch / 25.4)
     *         = N * 25.4 pixels / M inch
     */
    xres = ((((double) DisplayWidth(dpy, scr)) * 25.4) /
            ((double) DisplayWidthMM(dpy, scr))) + 0.5;
    yres = ((((double) DisplayHeight(dpy, scr)) * 25.4) /
            ((double) DisplayHeightMM(dpy, scr))) + 0.5;
}
XCloseDisplay(dpy);
Note also that if you are for some odd reason scaling your whole display (e.g. with xrandr), then you should want the fonts to scale equally with everything else. It's just a horrible hack to use whole-screen scaling to scale only the fonts, especially when for most things it's simpler to tell the application to use properly scaled fonts that will display at a constant on-screen point size (which is exactly what Xft uses the "dpi" resource for). I'm guessing Ubuntu does something stupid to change the screen resolution, e.g. using xrandr to scale up the apparent size of icons and other on-screen widgets without applications having to know about screen size and resolution, and then it has to lie to Xft by rewriting the Xft.dpi resource.
Note that if you avoid whole-screen scaling, then applications that don't use Xft can still get proper font scaling by requesting a properly scaled font, i.e. even bitmap fonts can be scaled to the proper physical on-screen size by using the screen's actual resolution in the font-spec. E.g. continuing from the above shell fragment:
# For pre-Xft applications we can specify physical font text sizes IFF we also tell
# it the screen's actual resolution when requesting a font. Note the use of the
# rounded values here.
#
DecentDeciPt="80"
DecentPt="8"
export DecentDeciPt DecentPt
#
# Best is to arrange one's font-path to get the desired one first, but....
# If you know the name of a font family that you like and you can be sure
# it is installed and in the font-path somewhere....
#
DefaultFontSpec='-*-liberation mono-medium-r-*-*-*-${DecentDeciPt}-${FontXDPI}-${FontYDPI}-m-*-iso10646-1'
export DefaultFontSpec
#
# For Xft we have set the Xft.dpi resource so this allows the physical font size to
# be specified (e.g. with Xterm's "-fs" option) and for a decent scalable font
# to be chosen:
#
DefaultFTFontSpec="-*-*-medium-r-*-*-*-*-0-0-m-*-iso10646-1"
DefaultFTFontSpecL1="-*-*-medium-r-*-*-*-*-0-0-m-*-iso8859-1"
export DefaultFTFontSpec DefaultFTFontSpecL1
# Set a default font that should work for everything
#
eval echo "*font: ${DefaultFontSpec}" | xrdb -merge
Finally, here's an example of starting an xterm (that's been compiled to use Xft) with the above settings (i.e. the Xft.dpi resource and the shell variables above) to show text at a physical size of 10.0 points on the screen:
xterm -fs 10 -fa $DefaultFTFontSpec
You could try to use xdpyinfo(1); on my system it outputs, among a lot of other things:
dimensions: 1280x1024 pixels (332x250 millimeters)
resolution: 98x104 dots per inch
depths (7): 24, 1, 4, 8, 15, 16, 32
I don't know whether it can help you, because I don't know how you change the DPI of your screen, but chances are it works. Good luck!
--- UPDATE after comment ---
In a comment below, the OP says that "there is a setting to change the DPI"... still I don't know which. Anyway, I tried Ctrl+Alt+Plus and Ctrl+Alt+Minus to change the resolution of the X server on the fly. After having changed the resolution, and seeing everything bigger than before, I ran xdpyinfo again. IT DIDN'T WORK: still the same output. But maybe the method you use (which?) works instead...

What am I doing wrong with this AI?

I am creating a very naive AI (it maybe shouldn't even be called an AI, as it just tests out a lot of possibilities and picks the best one) for a board game I am making. This is to reduce the amount of manual testing I will need to do to balance the game.
The AI plays alone, doing the following: on each turn the AI, playing one of the heroes, attacks one of the at most 9 monsters on the battlefield. Its goal is to finish the battle as fast as possible (in the fewest turns) and with the fewest monster activations.
To achieve this, I've implemented a think-ahead algorithm for the AI: instead of performing the best possible move at the moment, it selects a move based on the possible outcomes of the other heroes' future moves. This is the code snippet where it does this; it is written in PHP:
/** Perform think ahead moves
 *
 * @param int $thinkAheadLeft (the number of think ahead moves left)
 * @param int $innerIterator (the iterator for the move)
 * @param array $performedMoves (the moves performed so far)
 * @param Battlefield $originalBattlefield (the previous state of the Battlefield)
 * @param string $tabs (indentation prefix used for the debug output)
 */
public function performThinkAheadMoves($thinkAheadLeft, $innerIterator, $performedMoves, $originalBattlefield, $tabs) {
    if ($thinkAheadLeft == 0) return $this->quantify($originalBattlefield);
    $nextThinkAhead = $thinkAheadLeft - 1;
    $moves = $this->getPossibleHeroMoves($innerIterator, $performedMoves);
    $Hero = $this->getHero($innerIterator);
    $innerIterator++;
    $nextInnerIterator = $innerIterator;
    foreach ($moves as $moveid => $move) {
        $performedUpFar = $performedMoves;
        $performedUpFar[] = $move;
        $attack = $Hero->getAttack($move['attackid']);
        $monsters = array();
        foreach ($move['targets'] as $monsterid) $monsters[] = $originalBattlefield->getMonster($monsterid)->getName();
        if (self::$debug) echo $tabs . "Testing sub move of " . $Hero->Name . ": $moveid of " . count($moves) . " (Think Ahead: $thinkAheadLeft | InnerIterator: $innerIterator)\n";
        $moves[$moveid]['battlefield']['after']->performMove($move);
        if (!$moves[$moveid]['battlefield']['after']->isBattleFinished()) {
            if ($innerIterator == count($this->Heroes)) {
                $moves[$moveid]['battlefield']['after']->performCleanup();
                $nextInnerIterator = 0;
            }
            $moves[$moveid]['quantify'] = $moves[$moveid]['battlefield']['after']->performThinkAheadMoves($nextThinkAhead, $nextInnerIterator, $performedUpFar, $originalBattlefield, $tabs . "\t");
        } else $moves[$moveid]['quantify'] = $moves[$moveid]['battlefield']['after']->quantify($originalBattlefield);
    }
    usort($moves, function($a, $b) {
        if ($a['quantify'] === $b['quantify']) return 0;
        else return ($a['quantify'] > $b['quantify']) ? -1 : 1;
    });
    return $moves[0]['quantify'];
}
What this does is recursively check future moves until the $thinkAheadLeft depth is exhausted, OR until a solution is found (i.e. all monsters are defeated). When it reaches its exit condition, it quantifies the state of the battlefield compared to $originalBattlefield (the battlefield state before the first move). The calculation is made in the following way:
/** Quantify the current state of the battlefield
 *
 * @param Battlefield $originalBattlefield (the original battlefield)
 *
 * @return int (an integer with the battlefield quantification)
 */
public function quantify(Battlefield $originalBattlefield) {
    $points = 0;
    foreach ($originalBattlefield->Monsters as $originalMonsterId => $OriginalMonster) {
        $CurrentMonster = $this->getMonster($originalMonsterId);
        $monsterActivated = $CurrentMonster->getActivations() - $OriginalMonster->getActivations();
        $points += $monsterActivated * ($this->quantifications['activations'] + $this->quantifications['activationsPenalty']);
        if ($CurrentMonster->isDead()) $points += $this->quantifications['monsterKilled'] * $CurrentMonster->Priority;
        else {
            $enragePenalty = floor($this->quantifications['activations'] * (($CurrentMonster->Enrage['max'] - $CurrentMonster->Enrage['left']) / $CurrentMonster->Enrage['max']));
            $points += ($OriginalMonster->Health['left'] - $CurrentMonster->Health['left']) * $this->quantifications['health'];
            $points += ($CurrentMonster->Enrage['max'] - $CurrentMonster->Enrage['left']) * $enragePenalty;
        }
    }
    return $points;
}
When quantifying, some things add positive points and some add negative points to the state. Instead of using the points calculated after its current move to decide which move to take, the AI uses the points calculated after the think-ahead portion, selecting a move based on the possible moves of the other heroes.
Basically, the AI is saying: attacking Monster 1 isn't the best option at the moment, but IF the other heroes perform such-and-such actions, it will be the best outcome in the long run.
After selecting a move, the AI performs that single move with the hero, then repeats the process for the next hero, with one more move already performed.
ISSUE: My issue is that I was presuming that an AI which 'thinks ahead' 3-4 moves should find a better solution than an AI that only performs the best possible move at the moment. But my test cases show differently: in some cases an AI that does not use the think-ahead option, i.e. only plays the best possible move at the moment, beats an AI that thinks ahead a single move. Sometimes the AI that thinks ahead only 3 moves beats an AI that thinks ahead 4 or 5 moves. Why is this happening? Is my presumption incorrect? If so, why? Am I using the wrong weights? I investigated this and ran a test to automatically calculate the weights to use, testing an interval of possible weights and keeping the best outcome (i.e. the ones that yield the fewest turns and/or the fewest activations), yet the problem described above still persists with those weights as well.
I am limited to a 5-move think-ahead with the current version of my script; with any larger think-ahead number the script gets REALLY slow (with a 5-move think-ahead it finds a solution in roughly 4 minutes, but with 6 it didn't even find the first possible move in 6 hours).
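(Some back-of-the-envelope arithmetic shows why: the number of explored states grows exponentially with the look-ahead depth, so if each step has, say, on the order of 20 candidate moves, a 5-move think-ahead visits roughly 20^5 = 3.2 million states, while a 6-move one visits about 20^6 = 64 million, a 20-fold jump.)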
HOW THE FIGHT WORKS: A number of heroes (2-4) controlled by the AI, each having a number of different attacks (1-x) which can be used once or multiple times in a combat, are attacking a number of monsters (1-9). Based on the values of the attack, the monsters lose health until they die. After each attack, the attacked monster gets enraged if it didn't die, and after each hero has performed a move, all monsters get enraged. When a monster reaches its enrage limit, it activates.
DISCLAIMER: I know that PHP is not the language to use for this kind of operation, but as this is only an in-house project, I've preferred to sacrifice speed, to be able to code this as fast as possible, in my native programming language.
UPDATE: The quantifications that we currently use look something like this:
$Battlefield->setQuantification(array(
    'health'             => 16,
    'monsterKilled'      => 86,
    'activations'        => -46,
    'activationsPenalty' => -10
));
If there is randomness in your game, then anything can happen. Pointing that out since it's just not clear from the materials you have posted here.
If there is no randomness and the actors can see the full state of the game, then a longer look-ahead absolutely should perform better. When it does not, it is a clear indication that your evaluation function is providing incorrect estimates of the value of a state.
Looking at your code, the values of your quantifications are not listed, and in your simulation it looks like you just have the same player make moves repeatedly without considering the possible actions of the other actors. You need to run a full simulation, step by step, in order to produce accurate future states, and you need to look at the value estimates of the various states to see if you agree with them, making adjustments to your quantifications accordingly.
An alternative way to frame the problem of estimating value is to explicitly predict your chances of winning the round as a percentage on a scale of 0.0 to 1.0 and then choose the move that gives you the highest chance of winning. Calculating the damage done and number of monsters killed so far doesn't tell you much about how much you have left to do in order to win the game.
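To make the recommended structure concrete, here is a tiny, self-contained Python sketch (not the OP's PHP classes; the rules, helper names and numbers are made up for illustration) of a depth-limited look-ahead that fully simulates future states and backs the evaluation up to the current choice; in the real game, each recursion level would be the next actor's turn:
# Toy look-ahead: made-up "reduce all monster HP to zero" game, illustrative only.

def finished(monsters):
    return all(hp <= 0 for hp in monsters)

def possible_moves(monsters):
    # A "move" is (target index, damage); the damage options are arbitrary.
    return [(i, dmg) for i, hp in enumerate(monsters) if hp > 0 for dmg in (2, 3)]

def apply_move(monsters, move):
    target, dmg = move
    nxt = list(monsters)
    nxt[target] -= dmg
    return nxt

def evaluate(monsters):
    # Crude stand-in for quantify(): less total remaining HP is better.
    return -sum(max(hp, 0) for hp in monsters)

def look_ahead(monsters, depth):
    """Return (best backed-up score, best immediate move)."""
    if depth == 0 or finished(monsters):
        return evaluate(monsters), None
    best_score, best_move = float("-inf"), None
    for move in possible_moves(monsters):
        score, _ = look_ahead(apply_move(monsters, move), depth - 1)
        if score > best_score:
            best_score, best_move = score, move
    return best_score, best_move

print(look_ahead([5, 4, 3], depth=3))   # -> (-3, (0, 3)) with these toy numbers

The key point is that the score attached to an immediate move comes from a state several fully simulated steps in the future, evaluated by the same function you would trust on its own; if the greedy player beats the look-ahead player, the evaluation of those future states is what deserves scrutiny.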

wxpython: Is there a quicker method for inserting large quantities of text into text controls?

I'm coding a GUI in wxPython right now and one of its features is a text control. This text control often has tens of thousands of numbers of varying length inserted into it. When it is being filled with the data, it takes a long time (perhaps 30 seconds or more).
Just wondering, is there a method to filling the text control with data that will make it do it quicker? Thanks.
I suppose it rather depends on whether the delay is due to the act of getting the numbers or the loading of the text control.
If it is the text control, you can pre-load a variable with the data and then load it in one hit.
import wx
import time

class test(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None)
        self.panel = wx.Panel(self, wx.ID_ANY)
        self.tc = wx.TextCtrl(self.panel, wx.ID_ANY, size=(300, 400),
                              style=wx.TE_MULTILINE | wx.TE_READONLY | wx.VSCROLL)
        text = ""
        start = time.time()
        for i in range(1, 30000):
            text += str(i) + '\n'
            # self.tc.AppendText(str(i) + "\n")
        self.tc.WriteText(text)
        self.Show()
        end = time.time()
        print(end - start)

if __name__ == '__main__':
    app = wx.App()
    frame = test()
    app.MainLoop()
Here, I am building text with the numbers and then loading it once, with WriteText.
If you comment out the text += str(i) + '\n' and self.tc.WriteText(text) lines and uncomment the self.tc.AppendText(str(i) + "\n") line, which loads the text control one number at a time, you should see that the first method runs many times faster. At least it does on my box.
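As a further, standard-Python micro-optimisation (a sketch, not benchmarked here), building the string with str.join is usually faster than repeated += concatenation, so the loop in the example above could be replaced with:
# Build the whole string in one go, then load it into the control in one hit.
text = "\n".join(str(i) for i in range(1, 30000)) + "\n"
self.tc.WriteText(text)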

Using PyMEL to set the "Alpha to Use" attribute in an object of class psdFileTex

I am using Maya to do some procedural work, and I have a lot of textures that I need to load into Maya, and they all have transparencies (alpha channels). I would very much like to be able to automate this process. Using PyMEL, I can create my textures and hook them up to a shader, but the alpha doesn't set properly by default. There is an attribute in the psdFileTex node called "Alpha to Use", and it must be set to "Transparency" in order for my alpha channel to work. My question is this - how do I use PyMEL scripting to set the "Alpha to Use" attribute properly?
Here is the code I am using to set up my textures:
import pymel.core as pm
pm.shadingNode('lambert', asShader=True, name='myShader1')
pm.sets(renderable=True, noSurfaceShader=True, empty=True, name='myShader1SG')
pm.connectAttr('myShader1.outColor', 'myShader1SG.surfaceShader', f=True)
pm.shadingNode('psdFileTex', asTexture=True, name='myShader1PSD')
pm.connectAttr('myShader1PSD.outColor', 'myShader1.color')
pm.connectAttr('myShader1PSD.outTransparency', 'myShader1.transparency')
pm.setAttr('myShader1ColorPSD.fileTextureName', '<pathway>/myShader1_texture.psd', type='string')
If anyone can help me, I would really appreciate it.
Thanks
With any node, you can use listAttr() to get the available editable attributes. Run listAttr('myShader1PSD') and note that its output includes two attributes called 'alpha' and 'alphaList'. 'alpha' will return the currently selected alpha channel; 'alphaList' will return however many alpha channels you have in your PSD.
Example
pm.PyNode('myShader1PSD').alphaList.get()
# Result: [u'Alpha 1', u'Alpha 2'] #
If you know you'll only ever be using just the one alpha, or the first alpha channel, you can simply do this.
psdShader = pm.PyNode('myShader1PSD')
alphaList = psdShader.alphaList.get()
if len(alphaList) > 0:
    psdShader.alpha.set(alphaList[0])
else:
    # No alpha channel
    pass
Remember that lists are indexed from 0, so our first alpha channel is located at position 0.
Additionally, and unrelated: while you're still using maya.cmds-style commands wrapped for PyMEL, there are some more PyMEL-native alternatives that help make your code read nicer.
pm.setAttr('myShader1ColorPSD.fileTextureName', '<pathway>/myShader1_texture.psd', type='string')
We can convert this to pymel like so:
pm.PyNode('myShader1ColorPSD').fileTextureName.set('<pathway>/myShader1_texture.psd')
And:
pm.connectAttr('myShader1PSD.outColor', 'myShader1.color')
Can be converted to:
pm.PyNode('myShader1PSD').outColor.connect(pm.PyNode('myShader1').color)
While they may only be small changes, it reads a little bit nicer, and it's native PyMEL.
Anyway, I hope I have helped you!

Get the size of a window in OpenCV

I have searched online and wasn't able to find an answer to this, so I figured I could ask the experts here. Is there any way to get the current window resolution in OpenCV? I've tried cvGetWindowProperty, passing in the name of the window, but I can't find a flag to use.
Any help would be greatly appreciated.
You can get the width and height of the contents of the window by using shape[1] and shape[0] respectively.
I think when you use OpenCV, the image from the camera is stored as a NumPy array, with the shape being [rows, cols, bgr_channels], like [480, 640, 3].
code e.g.
import cv2
cv2.namedWindow("myWindow")
cap = cv2.VideoCapture(0) #open camera
ret,frame = cap.read() #start streaming
windowWidth=frame.shape[1]
windowHeight=frame.shape[0]
print(windowWidth)
print(windowHeight)
cv2.waitKey(0) #wait for a key
cap.release() # Destroys the capture object
cv2.destroyAllWindows() # Destroys all the windows
console output:
640
480
You could also call getWindowImageRect() which gets a whole rectangle: x,y,w,h
e.g.
import cv2
cv2.namedWindow("myWindow")
cap = cv2.VideoCapture(0) #open camera
ret,frame = cap.read() #start streaming
windowWidth=cv2.getWindowImageRect("myWindow")[2]
windowHeight=cv2.getWindowImageRect("myWindow")[3]
print(windowWidth)
print(windowHeight)
cv2.waitKey(0) #wait for a key
cap.release() # Destroys the capture object
cv2.destroyAllWindows() # Destroys all the windows
which, very curiously, printed 800 and 500 (the actual widescreen format from the camera)
Hmm... it's not really a great answer (pretty hacky!), but you could always call cvGetWindowHandle. With that native window handle, I'm sure you could figure out some native calls to get the contained image sizes. Ugly, hackish, and not very portable, but that's the best I could suggest given my limited OpenCV exposure.
