Gyroscope drift on mobile phones

Many posts discuss the gyro drift problem. Some say the raw gyro reading drifts, while others say the drift appears in the integration.
The raw gyro reading has drift [link].
The integration has drift [link] (Answer 1).
So I conducted an experiment; the next two figures show what I got. The first figure shows that the gyro reading doesn't drift at all, but it has an offset. Because of the offset, the integration is horrible. So it seems that the drift comes from the integration, doesn't it?
The next figure shows that when the offset is reduced, the integration doesn't drift at all.
In addition, I conducted another experiment. First, I kept the phone stationary on the desk for about 10 s. Then I rotated it to the left and back, then to the right and back. The following figure tracks the angle quite well. All I did was subtract the offset and then integrate.
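In code, that processing is roughly the following sketch (the array name, the sample rate and the length of the stationary window are placeholders, not the exact values from my experiment):

    import numpy as np

    def integrate_with_offset_removal(gyro_z, fs=100.0, t_still=10.0):
        """Estimate the offset from the initial stationary window, subtract it,
        and integrate the corrected rate to get an angle."""
        n_still = int(t_still * fs)                # samples recorded while the phone is still
        offset = gyro_z[:n_still].mean()           # constant offset estimate
        corrected = gyro_z - offset                # offset-reduced rate
        angle = np.cumsum(corrected) / fs          # simple rectangular integration
        return angle, offset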
So my real question is: is the offset the essence of the gyro drift (integration drift)? Can a complementary filter or Kalman filter be applied to remove the gyro drift in this situation?
Any help is appreciated.

If the gyro reading has "drift", it is called bias and not drift.
The drift is due to the integration, and it occurs even if the bias is exactly zero, because you are accumulating the white noise of the reading through the integration.
For drift cancellation, I highly recommend the Direction Cosine Matrix IMU: Theory manuscript; I have implemented sensor fusion for Shimmer 2 devices based on it.
(Edit: The document is from the MatrixPilot project, which has since moved to Github, and can be found in the Downloads section of the wiki there.)
If you insist on the Kalman filter then see https://stackoverflow.com/q/5478881/341970.
But why are you implementing your own sensor fusion algorithm?
Both Android (SensorManager with Sensor.TYPE_ROTATION_VECTOR) and iPhone (Core Motion) offer their own.

Our dear Ali wrote something that is really questionable and imprecise (wrong).
The drift is the integration of the bias; it is the visible "effect" of the bias when you integrate. Noise - any kind of stationary noise with zero mean - consequently has zero integral (I am not talking about the integral of the PSD, but about the additive noise on the signal, integrated over time).
The bias changes over time, as a function of supply voltage and operating temperature. E.g. if the voltage changes (and it does change), the bias changes. The bias is neither fixed nor "predictable".
That is why you cannot eliminate the bias by subtracting an estimated bias from the signal, as proposed. Moreover, any estimate has an error, and this error accumulates over time. If the error is smaller, the effect of the accumulation (the drifting) only becomes visible over a longer interval, but it is still there.
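To make this concrete: model the stationary reading as a residual bias b (whatever is left after any subtraction) plus zero-mean noise n(t). The integrated angle error is then

    θ_err(t) = ∫₀ᵗ (b + n(τ)) dτ = b·t + ∫₀ᵗ n(τ) dτ

so even a tiny residual b grows linearly with time, on top of whatever the noise term contributes.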
Theory says that a total elimination of the bias is not possible today. At the current state of the art, no one has found a way - based only on gyroscopes, accelerometers and magnetometers - to filter all of the bias out.
Android and iPhone have limited implementations of bias-elimination algorithms; they are not totally free of bias effects (e.g. over short intervals). For some applications this can cause severe problems and unpredictable results.

In this discussion both Ali and Stefano have raised two fundamental aspects of the drift that arises from ideal integration.
Basically, zero-mean white noise is an idealized concept, and even for such ideal noise an integrator offers higher gain over the lower-frequency components of the noise, which introduces a low-frequency drift in the integrated signal. In theory zero-mean noise should not cause any drift if observed over a sufficiently long time, but in practice ideal integration never works out that way.
On the other hand, even a minor DC offset in the reading (the input signal) can cause a significant drift over time if an ideal (lossless) integration is performed on it: an ideal integrator has infinite gain on the DC component of its input, so it ramps up even a very small offset. Therefore, for practical purposes, we substitute the ideal integration with a low-pass filter whose cut-off can be as low as required, but cannot be zero or too low.
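As an illustration of that substitution (the sample rate and the time constant below are arbitrary choices, not recommendations), the ideal integrator can be replaced by a "leaky" one, which is just a first-order low-pass filter:

    import numpy as np

    def leaky_integrate(rate, fs=100.0, tau=5.0):
        """Integrate `rate` (sampled at fs Hz) with a first-order leak of time
        constant `tau` seconds, so a constant DC offset settles to a bounded
        value (offset * tau) instead of ramping up forever."""
        dt = 1.0 / fs
        alpha = dt / tau                      # leak factor; alpha -> 0 recovers the ideal integrator
        angle = np.zeros_like(rate, dtype=float)
        for k in range(1, len(rate)):
            angle[k] = (1.0 - alpha) * angle[k - 1] + rate[k] * dt
        return angle

With tau going to infinity this reduces to plain rectangular integration; any finite tau turns the infinite DC gain into a finite gain of tau, which is exactly the trade-off described above.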

Motivated by Ali's reply (thanks Ali!), I did some reading and some numerical experiments and decided to post my own answer about the nature of gyro drift.
I've written a simple Octave Online script that plots white noise and integrated white noise:
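An equivalent sketch of that script (here in Python/NumPy rather than Octave; not the original code) is:

    import numpy as np
    import matplotlib.pyplot as plt

    np.random.seed(0)
    n = 10000
    noise = np.random.randn(n)      # zero-mean white noise
    walk = np.cumsum(noise)         # integrated white noise = random walk

    fig, axes = plt.subplots(3, 1, figsize=(6, 8))
    axes[0].plot(noise)
    axes[0].set_title("white noise")
    axes[1].plot(walk)
    axes[1].set_title("integrated white noise (random walk)")
    axes[2].hist(walk, bins=50)
    axes[2].set_title("histogram of the random walk")
    plt.tight_layout()
    plt.show()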
The angle plot with reduced offset shown in the question seems to resemble a typical random walk. A mathematical random walk has zero mean, so that alone cannot be called drift. However, I believe numerical integration of white noise leads to a non-zero mean (as can be seen in a histogram of the resulting random walk). This, together with the linearly increasing variance, could be what is referred to as gyro drift.
There is a great introduction to errors arising from gyroscopes and accelerometers here. In any case, I still have much to learn, so I could be wrong.
Regarding the complementary filter, there's some discussion here showing how it reduces the gyro drift. The article is very informal, but I found it interesting.

Related

Monte Carlo Tree Search Improvements

I'm trying to implement the MCTS algorithm for a game. I can only use around 0.33 seconds per move. In that time I can generate one or two games per child from the start state, which has around 500 child nodes. My simulations aren't random, but of course I can't make a good choice based on only 1 or 2 simulations. Later in the game the tree becomes smaller and my choices are based on more simulations.
So my problem is the first few moves. Is there a way to improve the MCTS algorithm so it can simulate more games, or should I use another algorithm?
Is it possible to come up with some heuristic evaluation function for states? I realise that one of the primary benefits of MCTS is that in theory you wouldn't need this, BUT: if you can create a somewhat reasonable evaluation function anyway, this will allow you to stop simulations early, before they reach a terminal game state. Then you can back-up the evaluation of such a non-terminal game state instead of just a win or a loss. If you stop your simulations early like this, you may be able to run more simulations (because every individual simulation takes less time).
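A sketch of what such an early-terminated simulation could look like (the state interface, the evaluate function and the depth cutoff below are placeholders, not your actual game code):

    import random

    def simulate(state, evaluate, max_depth=20):
        """Run one (semi-)random playout from `state`. If no terminal state is
        reached within `max_depth` moves, back up a heuristic evaluation instead
        of a win/loss result."""
        depth = 0
        while not state.is_terminal() and depth < max_depth:
            state = state.apply(random.choice(state.legal_moves()))
            depth += 1
        if state.is_terminal():
            return state.result()     # e.g. +1 / 0 / -1 for win / draw / loss
        return evaluate(state)        # heuristic value mapped into the same range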
Apart from that, you'll want to try to find ways to "generalize". If you run one simulation, you should try to see if you can also extract some useful information from that simulation for other nodes in the tree which you didn't go through. Examples of enhancements you may want to consider in this spirit are AMAF, RAVE, Progressive History, and the N-Gram Selection Technique.
Do you happen to know where the bottleneck is for your performance? You could investigate this using a profiler. If most of your processing time is spent in functions related to the game (move generation, advancing from one state to the next, etc.), you know for sure that you're going to be limited in the number of simulations you can do. You should then try to implement enhancements that make each individual simulation as informative as possible. This can for example mean using really good, computationally expensive evaluation functions. If the game code itself already is very well optimized and fast, moving extra computation time into things like evaluation functions will be more harmful to your simulation count and probably pay off less.
For more on this last idea, it may be interesting to have a look through some of the things I wrote on my MCTS-based agent in General Video Game AI, which is also a real-time environment with a very computationally expensive game, meaning that simulation counts are severely constrained (but the branching factor is much, much smaller than it seems to be in your case). PDF files of my publications on this are also available online.

Linear Probing vs Chaining

In Algorithm Design: Foundations, Analysis, and Internet Examples by Michael T. Goodrich and Roberto Tamassia, Section 2.5.5 (Collision-Handling Schemes), the last paragraph says:
These open addressing schemes save some space over the separate chaining method, but they are not necessarily faster. In experimental and theoretical analysis, the chaining method is either competitive or faster than the other methods, depending upon the load factor of the methods.
But regarding speed, a previous SO answer says the exact opposite:
Linear probing wins when the load factor α = n/m is small, i.e. when the number of elements is small compared to the number of slots. But exactly the reverse happens as the load factor tends to 1: the table becomes saturated, and on every operation we may have to travel nearly the whole table, so the cost grows very rapidly (roughly as 1/(1 − α)² for an unsuccessful search). Chaining, on the other hand, still grows only linearly with the load factor.
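To see where that crossover comes from, compare the textbook expected probe counts for an unsuccessful search (a sketch under the usual uniform-hashing assumptions, using Knuth's classical formulas):

    def probes_linear_probing(load):
        """Expected probes, unsuccessful search, linear probing: 1/2 * (1 + 1/(1-a)^2)."""
        return 0.5 * (1.0 + 1.0 / (1.0 - load) ** 2)

    def probes_chaining(load):
        """Expected probes, unsuccessful search, separate chaining: roughly 1 + a."""
        return 1.0 + load

    for load in (0.25, 0.50, 0.75, 0.90, 0.99):
        print(f"load {load:4.2f}   linear probing {probes_linear_probing(load):9.1f}   chaining {probes_chaining(load):5.2f}")

At low load factors the two are comparable (and probing additionally benefits from cache locality, which these formulas ignore); near saturation the probing cost blows up while chaining keeps growing linearly.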

Distance Threshold using bluetooth

I know many people have asked this about larger distances, but is there a way to determine, using NFC/BLE/Bluetooth, whether two phones are within a small distance of one another (say, on the order of 2-3 inches)? Determining the precise location would be interesting to see, but it's not critical.
At a very low level, you can check the intensity of the signal.
I don't think you can build an app that gives you a precise distance, because it depends on the power of the emitter and the power of the receiver.
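At best you get a rough estimate: received signal strength is usually mapped to distance with the log-distance path-loss model (a sketch; the 1 m reference power and the path-loss exponent below are assumptions that vary per device and per environment):

    def rssi_to_distance(rssi_dbm, rssi_at_1m_dbm=-59.0, path_loss_exponent=2.0):
        """Estimate distance in metres from a received signal strength reading.
        rssi_at_1m_dbm is the expected RSSI at 1 m (device-specific assumption);
        path_loss_exponent is ~2 in free space, higher indoors."""
        return 10.0 ** ((rssi_at_1m_dbm - rssi_dbm) / (10.0 * path_loss_exponent))

    # With these made-up constants, a reading of -65 dBm maps to roughly 2 m.
    print(rssi_to_distance(-65.0))

At the 2-3 inch scale the RSSI noise is usually larger than the difference you are trying to detect, so the mere fact that an NFC link (range of a few centimetres) can be established at all is often a more reliable proximity test.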

A.I.: How would I train a Neural Network across multiple machines?

So, for larger networks with large data sets, training takes a while. It would be awesome if there were a way to share the computing time across multiple machines. However, the issue is that while a neural network is training, the weights are constantly being altered every iteration, and each iteration is more or less based on the last -- which makes the idea of distributed computing at the very least a challenge.
I've thought that for each portion of the network, the server could send maybe 1000 sets of data to train the network on... but... you'd have roughly the same computing time, since I wouldn't be able to train on different sets of data simultaneously (which is what I want to do).
But even if I could split up the network's training into blocks of different data sets to train on, how would I know when I'm done with that set of data, especially if the amount of data sent to the client machine isn't enough to achieve the desired error?
I welcome all ideas.
Quoting http://en.wikipedia.org/wiki/Backpropagation#Multithreaded_Backpropagation:
When multicore computers are used multithreaded techniques can greatly decrease the amount of time that backpropagation takes to converge. If batching is being used, it is relatively simple to adapt the backpropagation algorithm to operate in a multithreaded manner.
The training data is broken up into equally large batches for each of the threads. Each thread executes the forward and backward propagations. The weight and threshold deltas are summed for each of the threads. At the end of each iteration all threads must pause briefly for the weight and threshold deltas to be summed and applied to the neural network.
which is essentially what other answers here describe.
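For concreteness, here is a sketch of that summed-deltas scheme for a plain linear model (NumPy only; the gradient function, the shard split and the learning rate are illustrative placeholders, not a full backpropagation implementation):

    import numpy as np
    from multiprocessing import Pool

    def shard_gradient(args):
        """Gradient of the mean squared error for a linear model on one data shard."""
        weights, x_shard, y_shard = args
        residual = x_shard @ weights - y_shard
        return x_shard.T @ residual / len(x_shard)

    def parallel_epoch(weights, x, y, n_workers=4, learning_rate=0.01):
        """One epoch of batch training: each worker computes its shard's gradient,
        the deltas are averaged and applied to the weights once at the end."""
        shards = list(zip(np.array_split(x, n_workers), np.array_split(y, n_workers)))
        with Pool(n_workers) as pool:   # on Windows, call this under `if __name__ == "__main__":`
            grads = pool.map(shard_gradient, [(weights, xs, ys) for xs, ys in shards])
        return weights - learning_rate * np.mean(grads, axis=0)

Each worker only ever sees its own shard; the single weight update at the end of the epoch is what keeps the parallel version essentially equivalent to sequential batch training.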
Depending on your ANN model, you can exploit some parallelism across multiple machines by running the same model with the same training and validation data on several machines, but with different ANN properties (initial values, ANN parameters, noise, etc.) for the different runs.
I used to do this a lot to make sure I had explored the problem space effectively and wasn't stuck in local minima, etc. This is a very easy way to take advantage of multiple machines without having to recode your algorithm. Just another approach you might want to consider.
My assumption is that you have more than one training set and that you have a gold standard. I also assume you have some way of storing the state of the neural network (whether it's a list of probability weights for each node, or something along those lines).
Using as many compute nodes in a cluster as you can, launch the program on a different data set on each node. Save the results for each, and test them against the gold standard. Whichever neural network state performs best becomes the input for the next round of training. Repeat as many rounds as you see fit.
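A sketch of that loop (the train and score functions below are trivial stand-ins for your real training run and your gold-standard evaluation, just so the sketch is self-contained):

    import random
    from multiprocessing import Pool

    def train(seed, data):
        """Stand-in for one full training run; returns a 'network state'
        (here just a random weight vector so the sketch runs on its own)."""
        rng = random.Random(seed)
        return [rng.uniform(-1.0, 1.0) for _ in range(8)]

    def score(state, gold_standard):
        """Stand-in for evaluating a network state against the gold standard."""
        return -sum((w - g) ** 2 for w, g in zip(state, gold_standard))

    def best_of_round(seeds, data, gold_standard, n_workers=4):
        """Train one candidate per seed in parallel and keep the best performer,
        which then seeds the next round of training."""
        with Pool(n_workers) as pool:   # needs an `if __name__ == "__main__":` guard on Windows
            states = pool.starmap(train, [(s, data) for s in seeds])
        return max(states, key=lambda st: score(st, gold_standard))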
If I understand correctly, you're trying to figure out a way to train an ANN on a cluster of machines? As you stated, partitioning the network isn't the right approach, and as far as I know, is seemingly unfeasible for most models. A possible approach might be to partition the training sets and run local copies of your network, and then merge the results. An intuitive way to do this and gain some validation along the way would be with cross-validation. As you stated, knowing when the network has had the right amount of training is a problem, but that variability is a problem inherent to neural nets in general, not in parallelizing the work.
As you also stated, the updates that happen during each iteration of training depend on the current state of the weights, but without mixing up training and validation sets you're likely to overfit. This is why CV is nice: your training sets will all get a chance to play a role in the training and in the validation, across multiple samples.
If you do batch training, the weights are only altered after you have been through the entire dataset. You can compute the weight-update vector for each data point in the set on a separate machine/core, add them all up at the end, and then proceed with the next epoch.
Here is a link to a question about batch training.

Efficient dataset size for feed-forward neural network training

I'm using a feed-forward neural network in Python, using the pybrain implementation. For training, I'll be using the back-propagation algorithm. I know that with neural networks we need just the right amount of data in order not to under-/over-train the network. I can get about 1200 different templates of training data for the datasets.
So here's the question:
How do I calculate the optimal amount of data for my training? Since I've tried with 500 items in the dataset and it took many hours to converge, I would prefer not to have to try too many sizes. The results were quite good with this last size, but I would like to find the optimal amount. The neural network has about 7 inputs, 3 hidden nodes and one output.
How do I calculate the optimal amount of data for my training?
It's completely solution-dependent. There's also a bit of art to go with the science. The only way to know whether you're in overfitting territory is to regularly test your network against a set of validation data (that is, data you do not train with). When performance on that data set begins to drop, you've probably trained too far -- roll back to the last iteration.
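In code, that loop is roughly the following sketch (the trainer and validation_error callables are placeholders for whatever your framework provides, e.g. a function that runs one pybrain epoch and one that evaluates the held-out set):

    def train_with_early_stopping(trainer, validation_error, max_epochs=1000, patience=5):
        """Generic early-stopping loop: train one epoch at a time and stop once
        the validation error has not improved for `patience` epochs."""
        best_error = float("inf")
        epochs_without_improvement = 0
        for epoch in range(max_epochs):
            trainer()                          # one pass over the training data
            error = validation_error()         # error on data never trained on
            if error < best_error:
                best_error = error
                epochs_without_improvement = 0
                # in a real run, also snapshot the weights here
            else:
                epochs_without_improvement += 1
                if epochs_without_improvement >= patience:
                    break                      # roll back to the snapshot and stop
        return best_error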
The results were quite good with this last size but I would like to find the optimal amount.
"Optimal" isn't necessarily possible; it also depends on your definition. What you're generally looking for is a high degree of confidence that a given set of weights will perform "well" on unseen data. That's the idea behind a validation set.
The diversity of the dataset is much more important than the quantity of samples you are feeding to the network.
You should customize your dataset to include and reinforce the data you want the network to learn.
After you have crafted this custom dataset, you have to start playing with the amount of samples, as it is completely dependent on your problem.
For example: If you are building a neural network to detect the peaks of a particular signal, it would be completely useless to train your network with a zillion samples of signals that do not have peaks. There lies the importance of customizing your training dataset no matter how many samples you have.
Technically speaking, in the general case, and assuming all examples are correct, more examples are always better. The real question is: what is the marginal improvement (the first derivative of answer quality)?
You can test this by training it with 10 examples, checking quality (say 95%), then 20, and so on, to get a table like:
10 95%
20 96%
30 96.5%
40 96.55%
50 96.56%
You can then clearly see your marginal gains and make your decision accordingly.
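A sketch of such a sweep (train and evaluate are placeholders for your own training and test code; only the shape of the loop matters):

    def learning_curve(train, evaluate, dataset, sizes=(10, 20, 30, 40, 50)):
        """Train on increasing subsets and report accuracy for each size, so the
        marginal gain of adding more data becomes visible."""
        results = []
        for size in sizes:
            model = train(dataset[:size])      # placeholder training routine
            accuracy = evaluate(model)         # placeholder evaluation on held-out data
            results.append((size, accuracy))
            print(f"{size:5d} examples -> {accuracy:.2%}")
        return results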
