Sound of a rolling ball

I'm looking for the most realistic way of playing the sound of a rolling ball. Currently I'm using a WAV sample that I play over and over as long as the ball is moving, which just doesn't feel right.
I've been thinking about completely synthesizing the sound, which I know very little about (almost nothing). I'd be grateful for any tutorials, research materials, or samples concerning synthesis of the sound of a ball made of one material rolling on a surface made of another. If this idea is completely wrong, please suggest another way of doing this.
Thanks!

I would guess that you'll get the biggest bang for your buck by doing a dynamic frequency adjustment on the sound that makes the playback frequency proportional to the velocity of the ball. I don't know what type of sound library you use, but most will support some variant of this.
For example, in FMOD you could use the Channel::setFrequency method. Ideally, you would compute your desired playback frequency based on your WAV's original sample frequency (Fo), the ball's current velocity (Vc), and the ball's 'ideal' velocity at which the default WAV sounds right (Vi). Something generally like:
F = Fo * ( Vc / Vi )
This will tend to break down as the ball gets farther away from the 'ideal' velocity. You might want to have several different WAVs appropriate for different speed ranges, switching between them at certain threshold velocities. Within each WAV's bracket, you'd do the same kind of frequency adjustment.
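For illustration, here's a sketch of the bracket lookup plus the frequency formula (Java, just for the sketch; all bracket numbers are invented, and the result would be handed to whatever call your sound library exposes, e.g. Channel::setFrequency in FMOD):
class RollSound {
    // hypothetical brackets: upper velocity bound, the WAV's original
    // sample frequency Fo (Hz), and the 'ideal' velocity Vi at which
    // that WAV sounds right
    static final double[] MAX_V   = {  2.0,   6.0,  15.0 };
    static final double[] BASE_F  = {44100, 44100, 44100 };
    static final double[] IDEAL_V = {  1.0,   4.0,  10.0 };

    static int bracketFor(double v) {
        for (int i = 0; i < MAX_V.length; i++)
            if (v <= MAX_V[i]) return i;
        return MAX_V.length - 1;           // clamp to the fastest bracket
    }

    // F = Fo * (Vc / Vi), using the current bracket's own Fo and Vi
    static double playbackFrequency(double vc) {
        int i = bracketFor(vc);
        return BASE_F[i] * (vc / IDEAL_V[i]);
    }
}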
Another note: this is probably not something that is worth doing every frame. I'd guess that doing this more than 20 times per second would be a waste of time.
ADDENDUM: Playback frequency scaling like this can also be used to simulate the Doppler effect. Once you have your adjusted playback frequency, you'd perform another scaling of it based on the velocity of the ball relative to the 'listener' (the camera).
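A minimal sketch of that second pass, assuming a stationary listener and the textbook moving-source formula:
class Doppler {
    static final double SPEED_OF_SOUND = 343.0;   // m/s in air

    // vAway = component of the ball's velocity pointing away from the
    // listener (negative while the ball approaches, raising the pitch)
    static double dopplerShift(double frequency, double vAway) {
        return frequency * SPEED_OF_SOUND / (SPEED_OF_SOUND + vAway);
    }
}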

Have you tried playing the sound forward, then playing it backward, and looping that? I use this trick graphically to create repeating patterns. I don't know much about sound, but it might work?

One approach might be to analyze the sound of a rolling ball, and decompose it into its component waveforms. Then you'd be able to generate your own wav file with synthesized waves.
You should be able to do this using an FFT on a sample of the sound.
One drawback is that the sound will likely sound synthesized - you'll have to add noise and such to make it sound more realistic. Getting it to sound real enough may be the hardest part.
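To make the idea concrete, here's a rough sketch of both halves. It uses a naive O(n^2) DFT instead of a proper FFT library and ignores phase, so treat it as illustration only:
class SpectralSketch {
    // magnitude spectrum of one window of the recorded sample
    static double[] magnitudes(double[] sample) {
        int n = sample.length;
        double[] mag = new double[n / 2];
        for (int k = 0; k < n / 2; k++) {
            double re = 0, im = 0;
            for (int t = 0; t < n; t++) {
                double a = -2 * Math.PI * k * t / n;
                re += sample[t] * Math.cos(a);
                im += sample[t] * Math.sin(a);
            }
            mag[k] = Math.hypot(re, im);
        }
        return mag;
    }

    // rebuild a buffer from a handful of the strongest bins
    // (phases dropped for brevity - audible, but fine for a sketch)
    static double[] resynthesize(int[] bins, double[] mag, int n) {
        double[] out = new double[n];
        for (int t = 0; t < n; t++)
            for (int k : bins)
                out[t] += (mag[k] / n) * Math.sin(2 * Math.PI * k * t / n);
        return out;
    }
}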

I don't think synthesizing it is worth the trouble; it would be very hard to make it even sound convincing.
Depending on how your scene is, you could loop the sound forward/backwards and simulate a Doppler effect by applying a low-pass filter and/or changing its pitch.
By the way, freesound.org is a great place for free samples. They are not professionally recorded, but they're a good starting point for manipulation. On the other hand, Sound Ideas has some great sample CDs (they're actually industry standard) if you can find them on the cheap. You just have to search for which one has rolling ball sounds.

I really like the approach described in this SIGGRAPH paper:
http://www.cs.ubc.ca/~kvdoel/publications/foleyautomatic.pdf
It describes synthesizing the sound of a rock rolling in a wok (no, really :). The idea is to use modal synthesis (i.e. convolved impulse responses), and the results can be very convincing.
Here's a link to the video demo that goes with the paper:
http://www.cs.ubc.ca/~kvdoel/publications/foleyautomatic.mpeg
And here's a link to the JASS library (written by one of the authors), which was used to create the sound for the video:
http://www.cs.ubc.ca/~kvdoel/jass/jass.html
I'm not sure if you could make it run on a smart phone, but with an efficient enough convolution routine/approximation you might be able to do something interesting...
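To give a feel for what modal synthesis boils down to: each mode is an exponentially decaying sinusoid, and the output is their sum. The parameters below are invented; in the paper they are fitted from recorded impulse responses, and rolling is approximated by re-exciting the modes with a stream of micro-impacts rather than a single strike:
class ModalSketch {
    // impulse response of a bank of modes:
    // y(t) = sum_i gain_i * exp(-damp_i * t) * sin(2*pi*freq_i*t)
    static double[] modalImpulse(double[] freq, double[] damp, double[] gain,
                                 int sampleRate, double seconds) {
        int n = (int) (sampleRate * seconds);
        double[] out = new double[n];
        for (int t = 0; t < n; t++) {
            double time = (double) t / sampleRate;
            for (int i = 0; i < freq.length; i++)
                out[t] += gain[i] * Math.exp(-damp[i] * time)
                                  * Math.sin(2 * Math.PI * freq[i] * time);
        }
        return out;
    }
}
// made-up, stone-like example parameters:
// freq = {520, 1130, 2210}, damp = {6, 9, 14}, gain = {1.0, 0.5, 0.25}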

My question is 'why?' - do you see some benefit in this, or is it just for fun? Your question implies that you aren't happy with the WAV you are using, but I strongly believe that synthesising your own is going to sound far inferior.
If your WAV sample doesn't sound right, I'd suggest trying to find another sample. Synthesising a sound is not easy and is never going to sound as realistic as a good sample.
Real-time synthesis may also require considerable resources for processing and computation. You may very well end up prerendering your synthesised sound into a WAV file and just playing that back.
If you want to simulate the sound of different materials, you can use some DSP, or even simple tricks like slowing down or speeding up the WAV playback. The simplest way is to prerender these in another application and store one copy of the file for each use.

Related

Processing and Kinect: Change speed of video according to distance from the sensor

I'm asking a question for a school project. I am trying to play a video in a loop and change its speed according to the distance from the Kinect 1414: when you are far away the video plays at normal speed, and the speed increases as you get closer.
I have tried different approaches, but either the video doesn't loop, or it doesn't show and I can only hear the audio. Other solutions don't let me change the speed of the video. Do you know any way to have the distance of a person affect the video?
Thanks.
Stack Overflow isn't designed for general "how do I do this" type questions like this. It's for specific "I tried X, expected Y, but got Z instead" type questions. But I'll try to help in a general sense.
You need to break your problem down into smaller pieces and then take on those pieces one at a time. For example, can you start with a very basic sketch that just displays the distance from the Kinect? Don't worry about the video yet, just display the distance on the screen.
Separately from that, can you create another sketch that just shows a video playing in a loop? Work your way up from there: can you make it so its speed is based on a hard-coded value? How about on a value like mouseX?
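For instance, a minimal Processing sketch for that middle step (assuming the bundled video library and a hypothetical loop.mp4 in the sketch's data folder) could look like:
import processing.video.*;

Movie movie;

void setup() {
  size(640, 360);
  movie = new Movie(this, "loop.mp4");
  movie.loop();
}

void draw() {
  // mouseX stands in for the Kinect distance for now:
  // map it to a playback rate between 1x and 4x
  float rate = map(mouseX, 0, width, 1.0, 4.0);
  movie.speed(rate);
  image(movie, 0, 0, width, height);
}

// the video library calls this when a new frame is available
void movieEvent(Movie m) {
  m.read();
}
Once that works on its own, swapping mouseX for the depth reading is a comparatively small change.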
Get those two basic sketches working perfectly by themselves before you start thinking about combining them. Then if you get stuck on one of those steps, you can post a MCVE along with a specific question (in a new question post), and we'll go from there. Good luck.

Robotic Navigation using Kinect

Till now I have been able to create an application where the Kinect sensor is in one place. I have used speech recognition, EmguCV (OpenCV), and AForge.NET to help me process an image, learn, and recognize objects. It all works fine, but there is always scope for improvement, and I have some problems: [Ignore the first three; I want the answer for the fourth]
The frame rate is horrible. It's like 5 fps even though it should be around 30 fps. (This is WITHOUT any of the processing.) My application is running fine; it gets color as well as depth frames from the camera and displays them. Still the frame rate is bad. The samples run awesome, at around 25 fps. Even though I ran the exact same code from the samples, it just won't budge. :-( [There is no need for code; please tell me the possible problems.]
I would like to create a little robot on which the Kinect and my laptop will be mounted. I tried using the Mindstorms kit, but the low-torque motors don't do the trick. Please tell me how I can achieve this.
How do I supply power on board? I know that the Kinect uses 12 volts for the motor, but it gets that from an AC adapter. [I would not like to cut my cable and replace it with a 12-volt battery]
The biggest question: how in this world will it navigate? I have done A* and flood-fill algorithms. I have read this paper like a thousand times and got nothing. I have the navigation algorithm in my mind, but how on earth will it localize itself? [It should not use GPS or any other kind of sensor, just its eyes, i.e. the Kinect]
Any help would be awesome. I am a newbie, so please don't expect me to know everything. I have been on the internet for 2 weeks with no luck.
Thanks a lot!
Localisation is a tricky task, as it depends on having prior knowledge of the environment in which your robot will be placed (i.e. a map of your house). While algorithms exist for simultaneous localisation and mapping, they tend to be domain-specific and as such not applicable to the general case of placing a robot in an arbitrary location and having it map its environment autonomously.
However, if your robot does have a rough (probabilistic) idea of what its environment looks like, Monte Carlo localisation is a good choice. On a high level, it goes something like:
Firstly, the robot should make a large number of random guesses (called particles) as to where it could possibly be within its known environment.
With each update from the sensor (i.e. after the robot has moved a short distance), it adjusts the probability that each of its random guesses is correct using a statistical model of its current sensor data. This can work especially well if the robot takes 360° sensor measurements, but this is not completely necessary.
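As a toy illustration of those two steps, here's a 1-D particle filter: the robot moves along a corridor of known length and senses the distance to the far wall. The corridor length, noise levels, and particle count are all invented:
import java.util.Random;

class MonteCarloLocalization {
    static final int N = 1000;            // number of particles (guesses)
    static final double CORRIDOR = 10.0;  // known corridor length, metres
    final Random rng = new Random();
    double[] x = new double[N];           // particle positions
    double[] w = new double[N];           // particle weights

    MonteCarloLocalization() {            // start with uniform random guesses
        for (int i = 0; i < N; i++) x[i] = rng.nextDouble() * CORRIDOR;
    }

    // the robot drove 'dist' metres and now senses 'measured' metres to the wall
    void update(double dist, double measured) {
        double sigma = 0.3;               // assumed sensor noise, metres
        double sum = 0;
        for (int i = 0; i < N; i++) {
            x[i] += dist + rng.nextGaussian() * 0.05;  // noisy motion model
            double expected = CORRIDOR - x[i];         // predicted sensor reading
            double err = measured - expected;
            w[i] = Math.exp(-err * err / (2 * sigma * sigma));
            sum += w[i];
        }
        resample(sum);                    // keep likely guesses, drop the rest
    }

    void resample(double sum) {
        double[] nx = new double[N];
        for (int i = 0; i < N; i++) {     // roulette-wheel selection by weight
            double r = rng.nextDouble() * sum, acc = 0;
            int j = 0;
            while (j < N - 1 && acc + w[j] < r) acc += w[j++];
            nx[i] = x[j];
        }
        x = nx;                           // the mean of x estimates the position
    }
}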
This lecture by Andrew Davison at Imperial College London gives a good overview of the mathematics involved. (The rest of the course will most likely be very interesting to you as well, given what you are trying to create). Good luck!

Are there any well known algorithms to count steps based on the accelerometer?

I'm implementing an accelerometer-based pedometer, and I was wondering if there are any known algorithms to handle that.
You have probably found this:
Enhancing the Performance of Pedometers Using a Single Accelerometer
Anyhow, I am also interested in finding a good algorithm; I am curious what other answers you will get. :)
There is an app called Sensor data that you can use to gather experimental data, so you can then analyze it and try to find an algorithm.
It's going to be quite tricky to find a very good algorithm, especially for the iPhone, since its accelerometer is quite noisy.
There's an interesting paper (with source code) here that may be of help: http://www.analog.com/static/imported-files/application_notes/47076299220991AN_900.pdf.
The charts are interesting. If I were to do this myself, I would probably sample the data at a fairly high frequency, convert it to the frequency domain with an FFT, apply a digital band-pass filter to cut off all frequencies outside the expected minimum/maximum walking speeds (including any DC offset), do an inverse FFT to reconstruct the now-filtered signal, and then run the resulting data through an edge detector with hysteresis. This is all pure speculation of course, but looking at those charts I think it would work, it would be relatively fast to code up, and it would be well within the processing power of a mobile phone.
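As a rough sketch of the back half of that pipeline, here's a time-domain stand-in: two exponential moving averages approximate the band-pass, and a hysteresis comparator does the edge detection. The smoothing factors and thresholds are invented and would need tuning against real data:
class StepCounter {
    double fast = 0, slow = 0;   // two exponential moving averages
    boolean high = false;        // hysteresis state
    int steps = 0;

    // feed this the magnitude of each accelerometer sample (~50 Hz assumed)
    void sample(double accelMagnitude) {
        fast += 0.30 * (accelMagnitude - fast);   // drops high-frequency jitter
        slow += 0.02 * (accelMagnitude - slow);   // tracks DC offset / gravity
        double band = fast - slow;                // crude band-passed signal

        if (!high && band > 0.15) {        // rising edge: count one step
            high = true;
            steps++;
        } else if (high && band < 0.05) {  // must fall well below the upper
            high = false;                  // threshold before the next step
        }
    }
}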

How to imitate a player in an online game

I'd like to write an application, which would imitate a player in an online game.
About the game: it is a strategy game where you can:
train your army (you have to have enough resources, then click on a unit, click train)
build buildings (mines, armories, houses,...)
attack enemies (select a unit, select an enemy, click attack)
transport resources between buildings
make researches (economics, military, technologic,...)
This is a simplified list and is just an example. The main thing is that you have to do a lot of clicking if you want to advance...
I already have the 'navigational' part of the application (I used the WatiN library - http://watin.sourceforge.net/). That means that I can use high-level objects and manipulate them, for example:
Soldiers soldiers = Navigator.GetAllSoldiers();
soldiers.Move(someLocation);
Now I'd like to take the next step - write a kind of AI, which would simulate my gaming style. For this I have two ideas (and I don't like either of them):
log in to the game and then follow a bunch of if statements in a loop (check if someone is attacking me, check if I can build something, check if I can attack somebody, loop)
design a kind of scripting language and write a compiler for it. This way I could write simple scripts and run them (Login(); CheckForAnAttack(); BuildSomething(); ...)
Any other ideas?
PS: some might take this as cheating, and it probably is, but I look on this as a learning project and it will never be published or resold.
A bunch of if statements is the best option if the strategy is not too complicated. However, this solution does not scale very well.
Making a scripting language (or domain-specific language, as one would call it nowadays) does not buy you much. You are not going to have other people create AI agents, are you? You are better off using your programming language for that.
If the strategy gets more involved, you may want to look at Bayesian Belief Networks or Decision Graphs. These are good at looking for the best action in an uncertain environment in a structured and explicit way. If you google on these terms you'll find plenty of information and libraries to use.
Sounds like you want a finite state machine. I've used them with varying degrees of success in coding bots. Depending on the game you're botting, you could be better off coding an AI that learns, but it sounds like yours is simple enough not to need that complexity.
Don't make a new language, just make a library of functions you can call from your state machine.
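A minimal sketch of what that state machine might look like, with the WatiN-backed helpers stubbed out (all names hypothetical; sketched in Java, but the same shape works in C#):
enum BotState { IDLE, DEFENDING, BUILDING, ATTACKING }

class GameBot {
    BotState state = BotState.IDLE;

    void tick() {                         // called once per main-loop pass
        switch (state) {
            case IDLE:
                if (underAttack())    state = BotState.DEFENDING;
                else if (canBuild())  state = BotState.BUILDING;
                else if (canAttack()) state = BotState.ATTACKING;
                break;
            case DEFENDING:
                defend();
                if (!underAttack()) state = BotState.IDLE;
                break;
            case BUILDING:
                buildSomething();
                state = BotState.IDLE;
                break;
            case ATTACKING:
                attackWeakestEnemy();
                state = BotState.IDLE;
                break;
        }
    }

    // stubs - these would wrap the WatiN-based navigation layer
    boolean underAttack() { return false; }
    boolean canBuild()    { return false; }
    boolean canAttack()   { return false; }
    void defend()             {}
    void buildSomething()     {}
    void attackWeakestEnemy() {}
}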
Most strategy game AIs use a "hierarchical" approach, much in the same way you've already described: define relatively separate domains of action (i.e. deciding what to research is mostly independent from pathfinding), and then create an AI layer to handle just that domain. Then have a "top-level" AI layer that directs the intermediate layers to perform tasks.
How each of those intermediate layers works (and how your "general" layer works) can be determined separately. You might come up with something rather rigid and straightforward for the "What To Research" layer (based on your preferences), but you may need a more complicated approach for the "General" layer (which is likely directing and responding to inputs of the other layers).
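A skeleton of that layering might look like this (interface and class names invented):
interface AILayer {
    void setGoal(String goal);   // directive from the layer above
    void update();               // one decision step in this domain
}

class ResearchAI implements AILayer {
    private String goal = "none";
    public void setGoal(String goal) { this.goal = goal; }
    public void update() {
        // pick the next research item consistent with the current goal
    }
}

class GeneralAI {
    private final java.util.List<AILayer> layers = new java.util.ArrayList<>();

    void add(AILayer layer) { layers.add(layer); }

    void update() {
        // the top level reads the game state, picks a directive
        // (hard-coded here), and lets each domain layer act on it
        for (AILayer layer : layers) {
            layer.setGoal("expand");
            layer.update();
        }
    }
}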
Do you have the source code behind the game? If not, it's going to be kind of hard to trace the positions of each CPU (your computer, in your case) you're battling against. You'll have to develop some sort of plugin that can do it, because from the sound of it you're dealing with some kind of RTS, and that requires evaluating a lot of different position scenarios between a lot of different CPUs.
If you want to simulate your movements, you could trace your mouse using the WinAPI quite easily. You can also record your screen as you play (which probably won't help much, but might be of assistance if you're determined enough).
To be brutally honest, what you're trying to do is damn near impossible for the type of game you're playing with. You don't seem to have thought this through yet. Programming is a useful skill, but it's not magic.
Check out some stuff (if you can find any) on MIT Battlecode. It might be up your alley in terms of programming for this sort of thing.
First of all, I must point out that this project (which only serves educational purposes) is too large for a single person to complete within a reasonable amount of time. But if you want the AI to imitate your personal playing style, another alternative that comes to mind is neural networks: you play the game a lot (really a lot) and record all the moves you make, then feed that data to such a network, and if all goes well the AI should play roughly the same as you do. But I'm afraid this is just a third idea you won't like, because it would take a tremendous amount of time to get it right.

Need suggestions for an Applied AI project

I have a course in my current semester in which I'm required to do a project on an application of AI. I have decided to do this on game AI. I have two basic ideas: implementing an FPS bot (or bots) or implementing soccer AI.
I'm quite a noob at AI right now. I've implemented basic pathfinding algorithms (A*, etc.), and have studied finite state machines, some first-order logic, basic neural network stuff (the backpropagation algorithm), and am currently doing a course on genetic algorithms.
Our main focus is on the bot right now. Our plans include:
Each 'bot' would be implemented using a finite state machine (FSM), which would contain the possible states the bot could have, and the rules for the actions/state changes that take place when it receives an input.
In bot group movement, each bot would use neural networks to decide whether to strike and how to strike, based on range, the number of bots, and existing fights.
By using genetic algorithms, the opponent's next move could be anticipated based on repetitive moves.
Although I've programmed a few 2D games in my free time (like Pacman, Tetris, etc.), I've never really gone into the 3D area. We will most probably be using a 3D engine.
We want to concentrate most of our energy on the AI part. We would rather not be bothered with unnecessary details about the animation, 3D models, etc. For example, if we could find a framework which has functions like Moveright() which just moves the bot to the right, it would be really awesome.
My basic question is: is it too ambitious to go about it in the way we have planned, considering the duration of the project is about 3 months? Should we go 3D and use a 3D game engine? Is it easy to use such engines if you have no prior experience with them? If yes, what kind of engine would be suitable for our project?
I came across another idea, given in the book Programming Game AI by Example, where the player would have a top-down view of the bots. Would that approach be more appropriate?
Thanks .. sorry about the length of the question .. it's just that my problem is a bit too specific.
My basic question is: is it too ambitious to go about it in the way we have planned, considering the duration of the project is about 3 months?
Yes -- but that's not necessarily a bad thing :)
Should we go 3D and use a 3D game engine?
No. Mainly because you said:
We want to concentrate most of our energy on the AI part.
Here's what I'd do, based on my experience (and knowing that, as a student, I often bit off way more than I could chew, too):
Make your simulation function independently of any graphical component. Have it publish "updates" to another layer that consist of player and ball vectors. By doing so you'll be keeping your AI tasks separate from everything else, which means you have fewer bugs to worry about, and you can unit test your underlying simulation much more easily.
Take those "updates" and create your first "visualization" layer -- make it the simplest 2D representation possible. It could just be a stream of text lines: "Player 1 has the ball / Player 1 kicked ball at (30,40) with speed 20kph". That will be hard enough for your first pass since you'll be figuring out how to take data published by the simulation and doing something with it.
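A sketch of that split (all names invented): the simulation publishes plain-data updates, and the text renderer is just one subscriber:
interface SimListener {
    void onUpdate(String event);   // could be a richer record type
}

class SoccerSim {
    private final java.util.List<SimListener> listeners =
            new java.util.ArrayList<>();

    void subscribe(SimListener l) { listeners.add(l); }

    void step() {
        // ...advance players and ball here, then publish what happened:
        publish("Player 1 kicked ball at (30,40) with speed 20kph");
    }

    private void publish(String event) {
        for (SimListener l : listeners) l.onUpdate(event);
    }
}

// the simplest possible visualization: a stream of text lines
class TextView implements SimListener {
    public void onUpdate(String event) { System.out.println(event); }
}
Fancier visualizations later are then just additional subscribers.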
Your next visualization might add a 2D grid of ANSI graphics (think rogue-like) to actually show players and the ball moving. Your next one after that might be sprites. And so on. Note how you incrementally increase the complexity of your visualization... don't make your first step go to using a technology (3d graphics engine) you've never used before. (You'll never finish your project in that case.)
As for your questions about which route to take - FSMs, NNs, GAs, top-down design - you should rank your interest in them from most to least (along with the rest of your group) and then tackle them in that order. You might consider doing one style for one team and a different design for the other team. You might want to make your FSM team play against an FSM team that's had an additional tweak done to it, in order to compare and contrast whether your changes are actually beneficial (you might be surprised and find they make the team worse). Actually, that's where unit testing and splitting the simulation from the visualization come in very, very handy - you should be able to "sim" as many games as you need to get experimental results, without worrying about graphics. You might even do it in batches overnight with scripts.
In general, my advice to you is this: break down your project into the tiniest pieces you can, and tackle them one at a time, so no matter where you're at when time runs out, you'll have something interesting to show off.
You could have a look at guntactyx; that's what I had to use when I did my AI unit at uni.
It takes care of all the display, physics, sound, etc. for you; all you have to do is program your team of bots.
The API includes functions to make the bot move left or right, shoot, hear sounds (like gunshots), etc., and it comes with a few sample bots so you don't start from scratch.
Also, it's quite fun to watch your bots battling your friends' bots :)
