I am developing (for my senior project) a dumbbell that is able to classify and record different exercises. The device has to be able to classify a range of these exercises based on the data given from an IMU (Inertial Measurement Unit). I have acceleration, gyroscope, compass, pitch, yaw, and roll data.
I am leaning towards using an Artificial Neural Network to do this, but am open to other suggestions as well. Ultimately I want to pass the IMU data into the network and have it tell me what kind of exercise it is (bicep curl, incline fly, etc.).
If I use an ANN, what kind should I use (recurrent or not) and how should I implement it? I am not sure how to get the network to recognize an exercise when I am passing it a continuous stream of data. I was thinking about constantly performing an FFT on a portion of the inputs and sending a set number of frequency magnitudes into the network, but am not sure if that will work either. Any suggestions/comments?
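To make the windowed-FFT idea concrete, here is a minimal sketch of sliding a fixed-length window over the stream and taking a fixed number of FFT magnitudes per channel as the network's input vector (the window length, step, and bin count are illustrative assumptions, and the stream is assumed to be available as one row per IMU sample):

```python
# A rough sketch, assuming `samples` is a NumPy array of shape
# (n_samples, n_channels), e.g. accel x/y/z, gyro x/y/z, pitch, yaw, roll.
import numpy as np

def fft_features(samples, window=128, step=64, n_bins=16):
    """For each sliding window, return the magnitudes of the first n_bins
    FFT coefficients of every channel as one fixed-size feature vector."""
    features = []
    for start in range(0, len(samples) - window + 1, step):
        chunk = samples[start:start + window]          # (window, n_channels)
        spectrum = np.abs(np.fft.rfft(chunk, axis=0))  # magnitude spectrum
        features.append(spectrum[:n_bins].ravel())     # fixed-size vector
    return np.array(features)                          # one row per window

# Each row could then be fed to the classifier, labelled with the exercise
# performed during that window.
```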
Your first task should be to collect some data from the dumbbell. There are many, many different schemes that could be used to classify the data, but until you have some sample data to work with, it is hard to predict exactly what will work best.
If you get 5 different people to do all of the exercises and look at the resulting data yourself (e.g. plot the different parts of the data collected), can you distinguish which exercise is which? This may give you hints about what pre-processing you might want to perform on the data before sending it to a classifier.
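If it helps to get started, here is a minimal plotting sketch for eyeballing two recordings side by side (the file names and channel index are hypothetical placeholders; each recording is assumed to be a NumPy array of shape (n_samples, n_channels)):

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical recordings of one subject doing two different exercises.
curl = np.load("subject1_bicep_curl.npy")
fly = np.load("subject1_incline_fly.npy")

fig, axes = plt.subplots(2, 1, sharex=True)
axes[0].plot(curl[:, 0])              # e.g. acceleration along the x axis
axes[0].set_title("Bicep curl")
axes[1].plot(fly[:, 0])
axes[1].set_title("Incline fly")
plt.show()
```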
First, create a large training set. Then train the network on it, telling it which exercise was actually performed.
You might use averages of the data as well. For example, alongside the raw movement, feed in the movement averaged over 2, 5, and 10 seconds, and use those as additional input nodes (see the sketch below).
While exercising, the trained network can then be fed the averaged data too, i.e. the last x samples divided by x; this gives a more stable approach, as otherwise the network's output can become erratic. Note that the training set will then need to contain the averaged data as well, so you will need a large training set.
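A minimal sketch of that averaging idea, assuming the raw samples are a NumPy array and an assumed sample rate; the 2, 5, and 10 second windows come from the suggestion above:

```python
import numpy as np

def with_averages(samples, sample_rate=50, windows_sec=(2, 5, 10)):
    """Return rows containing the raw sample followed by its trailing
    averages over 2, 5 and 10 seconds (the last x samples divided by x)."""
    rows = []
    for i in range(len(samples)):
        row = [samples[i]]
        for sec in windows_sec:
            x = sec * sample_rate
            window = samples[max(0, i - x + 1):i + 1]
            row.append(window.mean(axis=0))   # trailing average over x samples
        rows.append(np.concatenate(row))
    return np.array(rows)                     # extra input nodes per sample
```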
I built a convolutional neural network for image classification that works well when there is a large amount of data for each class, but now I want to apply it to a specific dataset with only a limited amount of data per class (maybe 1, 2, or 3 examples). The accuracy of the same model is very low even though I used data augmentation, batch normalization, and dropout. How can I raise the accuracy with so little data available? Is there a model specialized for this case, or anything else I could add to my system or do to my images in order to get a well-performing system? Can anyone please help me, I'm confused. Thanks.
If you haven't tried with a small amount of data, you should: a conv net can work well even with a limited amount of data, depending on how "hard" the classification task is.
A few options I see with a small amount of data:
Transfer learning, either from your own network trained on the big database or, for a more realistic setup, from a deep CNN trained by Google or another large group (if you only take weights from your own CNN, you will never know whether you could have achieved the same performance starting from just the small database). There is a rough sketch of this at the end of this answer.
If there is existing research on your classification task, find out what feature engineering people do and apply it. Then try different classifiers on the extracted features, such as SVMs or random forests. Also look at ensemble learning and stacked models, which are currently used a lot.
PS: as far as I know there are two ways to classify images: automatic feature extraction, which is done by the neural network, and "manual" feature extraction, which requires deep knowledge of the field, both as a data scientist and as a professional in that domain.
Once you have extracted your features you can use different classifiers; most people who extract features with a conv net also use that neural network as the classifier.
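To illustrate the transfer-learning option, here is a rough sketch that uses a pretrained Keras network purely as a fixed feature extractor and trains a classical classifier on top; `train_images` and `train_labels` are placeholders, and the choice of MobileNetV2 plus a linear SVM is an assumption, not a recommendation:

```python
import numpy as np
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from sklearn.svm import SVC

# Pretrained network used only to extract features (weights stay frozen).
extractor = MobileNetV2(weights="imagenet", include_top=False,
                        pooling="avg", input_shape=(224, 224, 3))

# train_images: (n, 224, 224, 3) array, train_labels: (n,) class labels.
features = extractor.predict(preprocess_input(train_images.astype("float32")))

# With only a handful of images per class, a simple classifier on these
# features is often more stable than training the whole network.
clf = SVC(kernel="linear")
clf.fit(features, train_labels)
```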
I wish to use an artificial neural network pattern recognition tool to predict the traffic flow of an urban area using previous traffic count data.
I want to know whether this is a good technique for predicting traffic conditions.
Probably should be posted on CrossValidated.
The exact effectiveness depends on what features you are looking at when predicting traffic conditions. The question "whether it's a good technique" is too vague: neural networks might work pretty well under certain circumstances, while they might also work really badly in other situations. Without a specific context it's hard to tell.
Typically neural networks work pretty well on predicting patterns. If you can form your problem into specific pattern recognition tasks then it's possible that neural networks will work pretty well.
-- Update --
Based on the following comment
What I need to predict is the vehicle count of a given road at a given time and day, using the previous data set. As an example, when I enter the name of the road I need to travel, the time I wish to travel, and the day, I need to get the vehicle count of that road at that time and day.
I would say be very cautious with using neural networks here, because depending on your data source, your data may get really sparse. Let's say you have 10,000 roads; over a one-month period you are then dividing your data set by 30 days, then 24 hours, then 10,000 roads.
If you want your neural network to work, you need at least enough data for each partition of your data set. If you divide your data set in the way described above, you already have 10,000 × 30 × 24 = 7,200,000 partitions. Just think about how much data you need in total. With a small dataset, most of those 7.2 million partitions will have no data in them, which implies that your neural network prediction will not work most of the time, since you don't have data to start with.
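As a rough way to check how sparse such a partitioning would be, assuming the counts live in a pandas DataFrame (the file and column names here are placeholders):

```python
import pandas as pd

df = pd.read_csv("traffic_counts.csv", parse_dates=["timestamp"])  # hypothetical file

# Count how many observations fall into each (road, day-of-month, hour) cell.
per_cell = df.groupby([df["road"],
                       df["timestamp"].dt.day,
                       df["timestamp"].dt.hour])["vehicle_count"].count()

total_cells = df["road"].nunique() * 30 * 24   # e.g. 10,000 * 30 * 24 = 7,200,000
print("non-empty partitions:", len(per_cell), "out of", total_cells)
```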
This is part of the reason why big companies are sort of crazy about big data, because you just never get enough of it.
But anyway, do ask on CrossValidated since people there are more statistician-y and can provide better explanations.
And please note, there might be other ways to split your data (or not splitting at all) to make it work. The above is just an example of pitfalls you might encounter.
I am a beginner when it comes to NNs. I understand the basics, but I am not sure about the following. Let's consider a handwriting recognition network. I understand you can train a network to recognize a pattern, i.e. the weights get set appropriately. But if the network is trained to recognize "A", how could it then recognize "B", which would certainly require the weights to be set differently?
Or does the network only search for the one letter it is currently trained on? I hope I made myself clear: I am basically trying to understand how a trained network can recognize various characters if the weights get mixed when training on all of them.
When a neural network is being trained, what is happening is that the network is searching for a set of weights which, when combined with the inputs, will yield the expected output.
One of the key parameters in neural networks is the learning rate. What this means, essentially, is how much of the previously acquired information is kept.
It is important that this value be neither too high (if memory serves, setting it to 1 would mean that the weights are changed by taking into consideration only the current training case) nor too low (setting it to zero would mean that no weight change is made at all). In either case, the neural network would never converge.
When training for handwriting, as far as I know, the training set contains various letters written in various forms. That said, although neural networks tend to fare better than other AI approaches when there are variations in their input, there are always limitations.
EDIT:
As per your question, assuming that you are dealing with a back propagation neural network, what you do is that at each layer, you apply an activation function and pass the result of the current layer to the next.
The extra bit comes when you compare the result you got with the result you want. This is where you apply the backpropagation algorithm to amend the weights, and this is where the learning rate comes in.
As you mentioned in your comment, the weights will indeed be changed; however, the value of the learning rate determines how much they change. Usually you want them to change relatively slowly so that they converge, hence why you want to keep the learning rate relatively low. With a very high learning rate, however, each training case will, as you say, wash out the improvements made by the ones before it.
The way to look at it is that during training, the neural network is searching for a set of weights which, given the training inputs, will yield the expected results. So basically, you are looking for weights which satisfy all of your training cases.
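A minimal sketch of the role the learning rate plays in a single weight update (the numbers are arbitrary; the gradient would normally come from backpropagation):

```python
import numpy as np

learning_rate = 0.01                 # small -> slow but stable convergence

def update(weights, gradient, lr=learning_rate):
    """One gradient-descent step. With lr close to 0 nothing changes;
    with a very large lr each example can undo what earlier examples taught."""
    return weights - lr * gradient

w = np.array([0.5, -0.3])
grad = np.array([0.2, -0.1])         # pretend this came from backprop
w = update(w, grad)                  # w moves a little toward lower error
```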
My professor asked my class to make a neural network to try to predict if a breast cancer is benign or malignant. To do this I'm using the Breast Cancer Wisconsin (Diagnostic) Data Set.
As a tip for doing this, my professor said that not all 30 attributes need to be used as inputs (there are 32, but the first 2 are the ID and the diagnosis). What I want to ask is: how am I supposed to take those 30 inputs (which would create 100+ weights depending on how many neurons I use) and reduce them to a smaller number?
I've already found how to "prune" a neural net, but I don't think that's what I want. I'm not trying to eliminate unnecessary neurons, but to shrink the input itself.
PS: Sorry for any English errors, it's not my native language.
That is a question that is under active research right now. It is called feature selection, and there are already several techniques. One is Principal Component Analysis (PCA), which reduces the dimensionality of your dataset by keeping the components that retain the most variance. Another thing you can do is check whether there are highly correlated variables: if two inputs are highly correlated, they may carry almost the same information, so one of them can be removed without worsening the performance of your classifier much. A third technique you could use is deep learning, which tries to learn the features that will later be used to feed your trainer. More info about deep learning and PCA can be found here: http://deeplearning.stanford.edu/wiki/index.php/Main_Page
This problem is called feature selection. It is mostly the same for neural networks as for other classifiers. You could prune your dataset while retaining most of the variance using PCA. To go further, you could use a greedy approach and evaluate your features one by one, training and testing your network with each feature excluded in turn.
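A minimal sketch of the PCA route with scikit-learn, assuming the 30 attributes have already been loaded into `X` (shape n_samples x 30) with labels `y`; the number of components kept is an arbitrary illustrative choice:

```python
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X_scaled = StandardScaler().fit_transform(X)   # PCA is sensitive to feature scale
pca = PCA(n_components=10)                     # keep 10 components (arbitrary)
X_reduced = pca.fit_transform(X_scaled)

print(pca.explained_variance_ratio_.sum())     # variance retained by 10 components
# X_reduced (n_samples x 10) can now feed the network instead of the 30 inputs.
```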
There is a technique for feature selection using just neural networks
Split your dataset into three groups:
Training data used for supervised training
Validation data used to verify that the neural network is able to generalize
Accuracy-test data, used to test which of the features are required
The steps:
Train a network on your training and validation set, just like you would normally do.
Test the accuracy of the network with the third dataset.
Locate the variable that yields the smallest drop in the accuracy test above when dropped ("dropped" meaning always feeding a zero as that input signal), and remove it
Retrain your network with the new selection of features
Keep doing this until either the network can no longer be trained successfully or there is just one variable left.
Here is a paper on the technique
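A rough sketch of step 3, the zero-the-input test (the `model.predict` interface and the datasets are placeholders for whatever network and split you are using):

```python
import numpy as np

def accuracy(model, X, y):
    return np.mean(model.predict(X) == y)

def least_useful_feature(model, X_acc, y_acc):
    """Zero out one input at a time and return the index whose removal
    causes the smallest drop in accuracy on the accuracy-test set."""
    baseline = accuracy(model, X_acc, y_acc)
    drops = []
    for j in range(X_acc.shape[1]):
        X_zeroed = X_acc.copy()
        X_zeroed[:, j] = 0.0                 # "drop" feature j by feeding zeros
        drops.append(baseline - accuracy(model, X_zeroed, y_acc))
    return int(np.argmin(drops))             # best candidate to remove next
```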
I currently have a lot of data that will be used to train a prediction neural network (gigabytes of weather data for major airports around the US). I have data for almost every day, but some airports have missing values in their data. For example, an airport might not have existed before 1995, so I have no data before then for that specific location. Also, some are missing whole years (one might span from 1990 to 2011, missing 2003).
What can I do to train with these missing values without misguiding my neural network? I thought about filling the empty data with 0s or -1s, but I feel like this would cause the network to predict these values for some outputs.
I'm not an expert, but surely this would depend on the type of neural network you have?
The whole point of neural networks is they can deal with missing information and so forth.
I agree though, filling the empty data with 0s and -1s can't be a good thing.
Perhaps you could give some info on your neural network?
I use NNs a lot for forecasting, and I can tell you that you can simply leave those "holes" in your data. NNs are able to learn relationships within the observed data, so if a specific period is missing it doesn't matter. If you set the empty data to a constant value, you will be giving your training algorithm misleading information. NNs don't need "continuous" data; in fact, it is good practice to shuffle the data set before training so that the backpropagation phase runs on non-contiguous samples.
Well, a type of neural network called an autoencoder is suitable for this task. Autoencoders can be used to reconstruct their input: an autoencoder is trained to learn the underlying data manifold/distribution. They are mostly used for signal reconstruction tasks such as images and sound, but you could also use them to fill in the missing features.
There is also another technique known as matrix factorization, which is used in many recommendation systems. People use matrix factorization to fill huge matrices that have a lot of missing values. For instance, suppose there are 1 million movies on IMDb. Almost no one has watched even a tenth of those movies, but each user has voted for some of them. The matrix is N by M, where N is the number of users and M the number of movies. Matrix factorization is among the techniques used to fill in the missing values and suggest movies to users based on their previous votes for other movies.
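A minimal matrix-factorization sketch for filling missing entries, assuming the data can be arranged as an N x M matrix with NaN where values are missing; the rank, learning rate, and iteration count are arbitrary illustrative values:

```python
import numpy as np

def factorize(R, rank=5, lr=0.01, epochs=200):
    """Fit low-rank factors U, V to the observed entries of R only,
    then return the dense reconstruction U @ V.T as the filled matrix."""
    mask = ~np.isnan(R)                        # which entries are observed
    n, m = R.shape
    rng = np.random.default_rng(0)
    U = rng.normal(scale=0.1, size=(n, rank))
    V = rng.normal(scale=0.1, size=(m, rank))
    for _ in range(epochs):
        E = np.where(mask, R - U @ V.T, 0.0)   # error only on observed entries
        U += lr * E @ V                        # gradient steps on the factors
        V += lr * E.T @ U
    return U @ V.T

# filled = factorize(data_matrix)  # NaNs replaced by predicted values
```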