NAO robot grasps a ball on the ground - nao-robot

I created a timeline in Choregraphe and switched to animation mode. I am having trouble making NAO crouch and getting its hand to reach the ball on the ground. How do I record each gesture, what intervals should there be between two gestures, and how many gestures should I record? Also, the robot often falls down... How can I adjust the gestures?

Choregraphe's animation mode helps you roughly design the robot's movement, but it is up to you to refine the result by spacing the keyframes properly and tuning the movement to maintain balance.
Robot animation requires time and skill, and includes specific challenges:
You may have to move the robot into positions that are not balanced and that require too much strength to maintain, resulting in hot joints*, loss of strength, or the robot falling. Hence you need to be ready to support the robot regularly when recording or testing a position.
The positions are unavoidably constrained by the mechanics.
Real joints are imperfect: they have limited maximum strength and speed, and they have play.
The real world has real obstacles, and the robot tries to avoid them, altering the desired trajectories.
*When you get hot joints, just let the robot rest for two minutes; you do not need to reboot it.

Related

Why is the result of minimax of tic-tac-toe always a draw?

I found the below text from here, saying that the result for minimax for games like tic-tac-toe and chess will always be a draw. I also saw minimax algorithms for unbeatable tic-tac-toe. But I don't quite understand the reason why minimax results in a draw. Is it because there is no guaranteed winning or losing move and thus the best possible option for both players is a draw?
a computer running a minimax algorithm without any sort of enhancements will discover that, if both it and its opponent play optimally, the game will end in a draw no matter where it starts, and thus have no clue as to which opening play is the "best." Even in more interesting win-or-lose games like chess, even if a computer could play out every possible game situation (a hopelessly impossible task), this information alone would still lead it to the conclusion that the best it can ever do is draw (which would in fact be true, if both players had absolutely perfect knowledge of all possible results of each move).
The information from the site you’ve linked is slightly incorrect.
We know from a brute-force exploration of the game that with perfect play tic-tac-toe will always end in a draw. That is, if both players play the game according to the best possible strategy, then the game ends in a draw. There’s a wonderful xkcd graphic that details how to play perfectly.
If you were to run a minimax search over the game all the way to the end, it isn’t necessarily the case that minimax won’t know what option to pick. Rather, minimax would select any move that leads to a forced draw, since it always picks a move that leads to the best possible result for the player. It’s “unbeatable” in the sense that a perfect minimax player will never lose and, if you play against it with a suboptimal strategy, it may be able to find a forced win and beat you.
As for chess - as of now (December 2021) no one knows whether chess ends in a draw with perfect play or whether one of the players has a forced win. We simply aren’t able to explore the game tree in that much depth. It’s entirely possible that white has a forced win, for example, in which case a minimax search given sufficient time and resources playing as white will always outplay you.
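To make that concrete, here is a minimal minimax sketch for tic-tac-toe (Python; the board encoding and helper names are illustrative, not taken from the quoted site):

# Minimal minimax for tic-tac-toe. The board is a list of 9 cells: 'X', 'O' or ' '.
LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    # +1 if 'X' can force a win, -1 if 'O' can, 0 if perfect play leads to a draw.
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0                                   # board full with no winner: a draw
    values = []
    for i in moves:
        board[i] = player
        values.append(minimax(board, 'O' if player == 'X' else 'X'))
        board[i] = ' '
    return max(values) if player == 'X' else min(values)

print(minimax([' '] * 9, 'X'))                     # prints 0: the game is a forced draw

Every opening move from the empty board evaluates to 0 here, which is exactly why plain minimax has no preference among openings; it only guarantees it will never do worse than a draw.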

Mobile geolocation precision - cordova / phonegap

I want to develop an app that detects how far the user/device is from points on a map.
Calculating the distance is easy, but when you get within about 30 meters I need it to be as precise as possible.
Basically I want some lights on the UI to get brighter the closer you get to the target/point.
How do I achieve this if the GPS position sometimes bounces around by 5-10 meters or more?
Any ideas on how to approach this?
Thanks!
In general, GPS positions are inaccurate on the order of meters, so the bouncing will be there and it is rather impossible to get rid of it entirely. One suggestion would be to collect the last few locations (3-10, depending on your logic) and calculate an average of them. With fast movements your position would lag, of course, but during slow movements the displayed position should be more stable. You could also add logic to determine the direction of movement and accept location changes in that direction more quickly.
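A minimal sketch of that averaging idea (Python for illustration; the buffer size of 5 is an arbitrary placeholder, and in a Cordova app you would feed in each fix from the geolocation watch callback):

from collections import deque

class SmoothedPosition:
    # Keep the last few GPS fixes and report their average to damp the bouncing.
    def __init__(self, size=5):                    # 3-10 fixes, tune to your logic
        self.fixes = deque(maxlen=size)

    def update(self, lat, lon):
        self.fixes.append((lat, lon))
        avg_lat = sum(f[0] for f in self.fixes) / len(self.fixes)
        avg_lon = sum(f[1] for f in self.fixes) / len(self.fixes)
        return avg_lat, avg_lon                    # smoothed position to drive the UI

smoother = SmoothedPosition()
print(smoother.update(60.1699, 24.9384))           # call once per incoming fix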
You will not get precision better than 3 m to the target.
At low speeds, like walking, you will not do better than 8-10 m.
Count the distance since the last used fix; if it exceeds 12 m, use the new fix and mark it as the last used one.
This is a simple filter which works well at walking speeds.
At higher speeds (> 10 km/h), switch off the filter.
GPS should not jump at that speed.
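A minimal sketch of that distance gate (Python for illustration; the 12 m and 10 km/h values are from the answer above, while the haversine helper and the rest are my assumptions):

import math

def distance_m(lat1, lon1, lat2, lon2):
    # Haversine distance in meters, one standard way to measure the gap between fixes.
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

last_used = None

def filter_fix(lat, lon, speed_kmh):
    # Accept a new fix only if we moved more than 12 m, unless we are moving fast anyway.
    global last_used
    if last_used is None or speed_kmh > 10 or distance_m(*last_used, lat, lon) > 12:
        last_used = (lat, lon)
    return last_used                               # position to display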

Snake Game Artificial Intelligence

I am developing a snake game for iOS:
https://github.com/ScottBouloutian/Snake
My goal is to have the AI complete the game of snake optimally (have the snake fill the board).
I am using IDA* to find a path from a snake's current location to the food. This works. However, the algorithm doesn't take into account the fact that it may need to get more food in the future. As a result, sometimes it tends to box itself in.
i.e. the snake's goal at any given time is to find the food, whereas its goal should be to fill the board (finding food along the way).
How can I add to or modify this approach to make the AI win the game of snake? Is there a better approach I should use instead? I'm just trying to come up with some ideas. Thanks!
If the board is a static rectangle (not a torus, so there is no crossing through the borders), then the only optimal strategy is to find a set of longest closed paths through the board, such that each point of the board lies on at least one path.
If the board is empty (there are no obstacles), then there exists an "ultimate" path of the form
16|.1|.6|.7
15|.2|.5|.8
14|.3|.4|.9
13|12|11|10
which goes through all tiles; a snake following this pattern will eventually eat all the food and fill the whole board.
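For illustration, here is one way such a loop can be generated for any empty board with an even number of columns (transpose the board for an even number of rows). This construction mirrors the 4x4 diagram above; it is a sketch, not part of the original answer:

def board_cycle(w, h):
    # Closed tour of an empty w x h board, w even. Cells are (row, col); each cell is
    # adjacent to the next, and the last cell is adjacent to the first, so the snake
    # can follow the loop forever while it grows.
    path = []
    for c in range(1, w):                          # serpentine over columns 1..w-1, rows 0..h-2
        rows = range(h - 1) if c % 2 else range(h - 2, -1, -1)
        path.extend((r, c) for r in rows)
    path.extend((h - 1, c) for c in range(w - 1, -1, -1))   # bottom row, right to left
    path.extend((r, 0) for r in range(h - 2, -1, -1))       # first column, back to the top
    return path

print(board_cycle(4, 4))                           # reproduces the 16-step loop shown above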
If there are obstacles, such a path does not have to exist. In that case you should find a set of such longest paths and switch between them when food appears in a spot that is unreachable from the current path.
For example
#######
#.....#
#.#.#.#
#.....#
#######
Here there are two paths to consider: the longest one, going around the whole board but missing the central spot, and a small loop going through the center. As long as food does not appear in the center, you should use the outer loop. If food appears in the center only after you have filled all the remaining tiles, you will "win". If it appears there sooner, you have to eat it (switch to the other loop), and depending on your current length you will either get back to the best loop or hit your tail and "lose". In either case, your score will be the best possible to achieve on the board with these food locations.
A non-A*-based approach will find the optimum; this is a completely different problem, where you should look for the longest closed path, not the shortest.

Collision Detection in 2D Motion

I have created a very simple numerical simulation that models an object being thrown off a building at some angle, and when the object hits the ground, the simulation stops. Now I want to add in collision detection. How would I go about doing this?
I know I need to find the exact time that the object (a ball) hits the ground, as well as the velocity in the x and y direction, and position of the object when it hits the ground, and I have to add in parameters that say how much the ball will bounce on impact. But I don't know how to go about doing this. I know that there are various ways of detecting collision but since I am new to this, the most comprehensible method would be best.
Make a coordinate system with the ground at y = 0. Track the coordinates of the ball as it flies, and check when it reaches y = 0 (or dips below it between time steps); that's where it hits the ground. You can also keep track of the x and y velocity as the ball moves.
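A minimal sketch of that time-stepping idea, with a bounce added through a restitution coefficient (Python; the step size, initial conditions, and the 0.7 bounce factor are placeholders):

import math

dt = 0.01                                  # time step in seconds
g = 9.81                                   # gravity, m/s^2
restitution = 0.7                          # fraction of vertical speed kept after a bounce

x, y = 0.0, 20.0                           # thrown from a 20 m building (placeholder)
speed, angle = 10.0, math.radians(30)      # initial speed and launch angle (placeholders)
vx, vy = speed * math.cos(angle), speed * math.sin(angle)

t = 0.0
while t < 10.0:
    x, y = x + vx * dt, y + vy * dt
    vy -= g * dt
    if y <= 0.0 and vy < 0.0:              # the ball crossed the ground this step
        # For the exact impact time you could interpolate between the previous and
        # current step instead of snapping to the end of the step.
        print(f"hit ground near t={t:.2f} s, x={x:.2f} m, impact vy={vy:.2f} m/s")
        y = 0.0
        vy = -vy * restitution             # reverse and damp vertical velocity: the bounce
    t += dt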
Use Physics skillz. This is a good tutorial. If you have it, I recommend Fundamentals of Physics by Halliday, Resnick and Walker. They have a very good chapter on this.
If you are just looking for the math that you could write C code for, I found this one helpful: Math Models
Collision detection simply involves determining the distance between 2 objects.
If you are only interested in collisions between objects and the ground, you can use:
if(object.y <= ground.y) {
//collision occurred
}
To do collisions between objects, you can loop through all objects and compare them to each other in the same way.
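A sketch of that pairwise loop, assuming circular objects with a position and radius (the Ball type here is made up for illustration):

from itertools import combinations
from dataclasses import dataclass

@dataclass
class Ball:                                # illustrative object: center position and radius
    x: float
    y: float
    r: float

def colliding(a, b):
    # Two circles touch when the squared distance between centers is at most
    # the square of the sum of their radii (avoids a square root).
    return (a.x - b.x) ** 2 + (a.y - b.y) ** 2 <= (a.r + b.r) ** 2

objects = [Ball(0, 0, 1), Ball(1.5, 0, 1), Ball(10, 10, 1)]
for a, b in combinations(objects, 2):
    if colliding(a, b):
        print("collision between", a, "and", b)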

Given an audio stream, find when a door slams (sound pressure level calculation?)

Not unlike a clap detector ("Clap on! clap clap Clap off! clap clap Clap on, clap off, the Clapper! clap clap ") I need to detect when a door closes. This is in a vehicle, which is easier than a room or household door:
Listen: http://ubasics.com/so/van_driver_door_closing.wav
Look:
It's sampled at 16 bits, 4 kHz, and I'd like to avoid lots of processing or storage of samples.
When you look at it in audacity or another waveform tool it's quite distinctive, and almost always clips due to the increase in sound pressure in the vehicle - even when the windows and other doors are open:
Listen: http://ubasics.com/so/van_driverdoorclosing_slidingdoorsopen_windowsopen_engineon.wav
Look:
I expect there's a relatively simple algorithm that would take readings at 4 kHz, 8 bits, and keep track of the 'steady state'. When the algorithm detects a significant increase in the sound level it would mark the spot.
What are your thoughts?
How would you detect this event?
Are there code examples of sound pressure level calculations that might help?
Can I get away with less frequent sampling (1kHz or even slower?)
Update: Playing with Octave (open source numerical analysis - similar to Matlab) and seeing if the root mean square will give me what I need (which results in something very similar to the SPL)
Update2: Computing the RMS finds the door close easily in the simple case:
Now I just need to look at the difficult cases (radio on, heat/air on high, etc). The CFAR looks really interesting - I know I'm going to have to use an adaptive algorithm, and CFAR certainly fits the bill.
-Adam
Looking at the screenshots of the source audio files, one simple way to detect a change in sound level would be to do a numerical integration of the samples to find out the "energy" of the wave at a specific time.
A rough algorithm would be:
Divide the samples up into sections
Calculate the energy of each section
Take the ratio of the energies between the previous window and the current window
If the ratio exceeds some threshold, determine that there was a sudden loud noise.
Pseudocode
samples = load_audio_samples()  # array containing audio samples
WINDOW_SIZE = 1000              # sample window of 1000 samples (example)
THRESHOLD = 4.0                 # ratio that counts as "sudden" (determine empirically)
energy = 0.0
last_energy = None
for i in range(0, len(samples), WINDOW_SIZE):
    # Numerically integrate the current window by summing squared samples
    # (summing the raw signed samples would mostly cancel out).
    for j in range(WINDOW_SIZE):
        if i + j < len(samples):
            energy += samples[i + j] ** 2
    # Take the ratio of the energies of the last window and the current window;
    # a large jump in the ratio means there was a sudden loud noise.
    if last_energy and energy / last_energy > THRESHOLD:
        sudden_sound_detected()
    last_energy = energy
    energy = 0.0
I should add a disclaimer that I haven't tried this.
It should be possible to perform this without having all the samples recorded first. As long as there is a buffer of some length (WINDOW_SIZE in the example), the numerical integration can be performed to calculate the energy of that section of sound. This does mean, however, that there will be a delay in the processing, which depends on the length of WINDOW_SIZE. Determining a good length for a section of sound is another concern.
How to Split into Sections
In the first audio file, it appears that the duration of the sound of the door closing is 0.25 seconds, so the window used for numerical integration should probably be at most half of that, or even more like a tenth, so the difference between the silence and sudden sound can be noticed, even if the window is overlapping between the silent section and the noise section.
For example, if the integration window was 0.5 seconds, and the first window covered 0.25 seconds of silence and 0.25 seconds of the door closing, and the second window covered 0.25 seconds of the door closing and 0.25 seconds of silence, it may appear that the two sections of sound have the same level of noise, therefore not triggering the sound detection. I imagine having a short window would alleviate this problem somewhat.
However, having a window that is too short will mean that the rise in the sound may not fully fit into one window, and it may appear that there is little difference in energy between adjacent sections, which can cause the sound to be missed.
I believe the WINDOW_SIZE and THRESHOLD are both going to have to be determined empirically for the sound which is going to be detected.
For the sake of determining how many samples that this algorithm will need to keep in memory, let's say, the WINDOW_SIZE is 1/10 of the sound of the door closing, which is about 0.025 second. At a sampling rate of 4 kHz, that is 100 samples. That seems to be not too much of a memory requirement. Using 16-bit samples that's 200 bytes.
Advantages / Disadvantages
The advantage of this method is that processing can be performed with simple integer arithmetic if the source audio is fed in as integers. The catch is, as mentioned already, that real-time processing will have a delay, depending on the size of the section that is integrated.
There are a couple of problems that I can think of to this approach:
If the background noise is too loud, the difference in energy between the background noise and the door closing will not be easily distinguished, and it may not be able to detect the door closing.
Any abrupt noise, such as a clap, could be mistaken for the door closing.
Perhaps you could combine this with the suggestions in the other answers, such as analyzing the frequency signature of the door closing using Fourier analysis; this would require more processing but would make the detection less prone to error.
It's probably going to take some experimentation before finding a way to solve this problem.
You should tap in to the door close switches in the car.
Trying to do this with sound analysis is overengineering.
There are a lot of suggestions about different signal processing approaches to take, but really, by the time you learn about detection theory, build an embedded signal processing board, learn the processing architecture for the chip you chose, attempt an algorithm, debug it, and then tune it for the car you want to use it on (and then re-tune and re-debug it for every other car), you will be wishing you had just sticky-taped a reed switch inside the car and hot-glued a magnet to the door.
Not that it's not an interesting problem to solve for the DSP experts, but from the way you're asking this question, it's clear that sound processing isn't the route you want to take. It will just be such a nightmare to make it work right.
Also, the Clapper is just a high-pass filter fed into a threshold detector (plus a timer to make sure the two claps come quickly enough together).
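For what it's worth, a toy sketch of that filter-plus-threshold idea (Python; the filter coefficient and threshold are arbitrary placeholders, and samples are assumed normalized to [-1, 1]):

def detect_claps(samples, alpha=0.95, threshold=0.5):
    # One-pole high-pass filter followed by a threshold: the filter passes sharp
    # transients (claps, slams) and suppresses slowly changing background sound.
    hits = []
    prev_x, prev_y = 0.0, 0.0
    for n, x in enumerate(samples):
        y = alpha * (prev_y + x - prev_x)
        if abs(y) > threshold:
            hits.append(n)                 # sample index where a sharp transient occurred
        prev_x, prev_y = x, y
    return hits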
There is a lot of relevant literature on this problem in the radar world (it's called detection theory).
You might have a look at "cell averaging CFAR" (constant false alarm rate) detection. Wikipedia has a little bit here. Your idea is very similar to this, and it should work! :)
Good luck!
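A minimal sketch of cell-averaging CFAR applied to a series of per-window energies (for example, the windowed energies or RMS values discussed above). This is Python for illustration; the training/guard cell counts and the threshold factor are placeholders to tune:

def ca_cfar(energies, train=8, guard=2, factor=4.0):
    # Cell-averaging CFAR: flag a window whose energy exceeds a multiple of the
    # average energy of the surrounding "training" windows, skipping a few "guard"
    # windows right next to it so the event itself does not inflate the noise estimate.
    detections = []
    for i in range(train + guard, len(energies) - train - guard):
        left = energies[i - guard - train : i - guard]
        right = energies[i + guard + 1 : i + guard + train + 1]
        noise = sum(left + right) / (2 * train)
        if noise > 0 and energies[i] > factor * noise:
            detections.append(i)           # window i contains a sudden loud event
    return detections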
I would start by looking at the spectrum. I did this on the two audio files you gave, and there does seem to be some similarity you could use. For example, the main difference between the two seems to be around 40-50 Hz. My $0.02.
UPDATE
I had another idea after posting this. If you can, add an accelerometer to the device, then correlate the vibrational and acoustic signals. This should help with cross-vehicle door detection. I'm thinking they should be well correlated since the sound is vibrationally driven, whereas the stereo, for example, is not. I've had a device that was able to detect my engine RPM with a windshield mount (suction cup), so the sensitivity might be there. (I make no promises this works!)
(source: charlesrcook.com)
%% Test Script (Matlab)
clear
hold all %keep plots open
dt=.001
%% Van driver door
data = wavread('van_driver_door_closing.wav');
%Frequency analysis
NFFT = 2^nextpow2(length(data));
Y = fft(data(:,2), NFFT)/length(data);
freq = (1/dt)/2*linspace(0,1,NFFT/2);
spectral = [freq' 2*abs(Y(1:NFFT/2))];
plot(spectral(:,1),spectral(:,2))
%% Repeat for van sliding door
data = wavread('van_driverdoorclosing.wav');
%Frequency analysis
NFFT = 2^nextpow2(length(data));
Y = fft(data(:,2), NFFT)/length(data);
freq = (1/dt)/2*linspace(0,1,NFFT/2);
spectral = [freq' 2*abs(Y(1:NFFT/2))];
plot(spectral(:,1),spectral(:,2))
The process of finding distinct spikes in audio signals is called transient detection. Applications like Sony's Acid and Ableton Live use transient detection to find the beats in music for beat matching.
The distinct spike you see in the waveform above is called a transient, and there are several good algorithms for detecting it. The paper Transient detection and classification in energy matters describes three methods for doing this.
I would imagine that the frequency and amplitude would also vary significantly from vehicle to vehicle. The best way to determine that would be to take a sample in a Civic versus a big SUV. Perhaps you could have the user close the door in a "learning" mode to capture the amplitude and frequency signature, and then use that for comparison in normal usage.
You could also consider using Fourier analysis to eliminate background noises that aren't associated with the door close.
Maybe you should try to detect a significant instantaneous rise in air pressure, which should mark a door closing. You could pair this with the waveform and sound-level analysis, and together these might give you a better result.
On the issue of less frequent sampling: the highest sound frequency which can be captured is half of the sampling rate. Thus, if the car door sound were strongest at 1000 Hz (for example), then a sampling rate below 2000 Hz would lose that sound entirely.
A very simple noise gate would probably do just fine in your situation. Simply wait for the first sample whose amplitude is above a specified threshold value (to avoid triggering with background noise). You would only need to get more complicated than this if you need to distinguish between different types of noise (e.g. a door closing versus a hand clap).
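A minimal sketch of such a noise gate (Python; samples are assumed normalized to [-1, 1] and the threshold is a placeholder you would tune above the background noise level):

def first_loud_sample(samples, threshold=0.6):
    # Return the index of the first sample whose amplitude exceeds the gate threshold,
    # or None if the stream never gets that loud.
    for i, s in enumerate(samples):
        if abs(s) > threshold:
            return i
    return None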
