I built a convolutional neural network for image classification that works well when a large amount of data is available for each class, but I want to apply it to a specific dataset with only a very limited amount of data per class (e.g. 1, 2, or 3 images). The accuracy of the same model is very low, even though I used data augmentation, batch normalization, and dropout. How can I raise the system's accuracy with such a small amount of data? Is there a model specialized for this case, or some other addition to my system or preprocessing of my images that would give a well-performing classifier? Can anyone please help me? I'm confused. Thanks.
If you haven't yet tested with a small amount of data, you should try it; a conv net can work well even with limited data, depending on how "hard" the classification task is.
A few options I see with a small amount of data:
Transfer learning (from your own network trained on the big database, or, for a more realistic condition, from a DCNN trained by Google or another big player, since if you only take weights from your own CNN you will never know whether you could have achieved that performance starting from just the small database).
If there is existing research on your classification task, find out what feature engineering people do and apply it. Then try different classifiers on the extracted features, such as SVM or random forest. Also look at ensemble learning and model stacking, which are currently used a lot.
PS: as far as I know there are two ways to classify images: automatic feature extraction, which is done by the neural network, and "manual" feature extraction, where the features are identified by someone with deep knowledge of the field, as a data scientist AND as a professional of that field.
Once you have extracted your features you can use different classifiers; most people who extract features with a conv net use the neural network itself as the classifier. A rough sketch of the pretrained-extractor route is shown below.
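For illustration only, here is a minimal sketch of that route, assuming a tiny labelled set: a pretrained DCNN (MobileNetV2 here, an arbitrary choice) is used purely as a fixed feature extractor, and a linear SVM is fit on the extracted features. The array shapes and random images are placeholders for your real data.

```python
# Sketch only: pretrained DCNN as a frozen feature extractor + linear SVM.
import numpy as np
from sklearn.svm import SVC
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input

# Pretrained backbone, used only to turn images into feature vectors.
extractor = MobileNetV2(weights="imagenet", include_top=False, pooling="avg")

def extract_features(images):
    """images: float array of shape (N, 224, 224, 3) with values in [0, 255]."""
    return extractor.predict(preprocess_input(images), verbose=0)

# Placeholder "tiny dataset": 4 classes with 3 random images each.
X_small = np.random.rand(12, 224, 224, 3).astype("float32") * 255.0
y_small = np.repeat(np.arange(4), 3)

clf = SVC(kernel="linear").fit(extract_features(X_small), y_small)
```

With only a handful of images per class, fitting a small classical classifier on frozen pretrained features usually overfits less than fine-tuning the whole network.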
I wish to use an artificial neural network pattern recognition tool to predict traffic flow in an urban area using previous traffic count data.
I want to know whether this is a good technique for predicting traffic conditions.
Probably should be posted on CrossValidated.
The exact effectiveness depends on what features you are looking at when predicting traffic conditions. The question "whether it's a good technique" is too vague: neural networks might work quite well under certain circumstances, while they might also work really badly in other situations. Without a specific context it's hard to tell.
Typically neural networks work well at predicting patterns. If you can frame your problem as a specific pattern recognition task, then it's quite possible that a neural network will work well.
-- Update --
Based on the following comment
What I need to predict is the vehicle count of a given road, for a given time and day, using the previous data set. As an example, when I enter the name of the road I need to travel on, the time I wish to travel, and the day, I need to get the vehicle count of that road at that time and day.
I would say be very cautious about using neural networks here, because depending on your data source, your data may get really sparse. Let's say you have 10,000 roads; for a one-month period you are then dividing your data set by 30 days, then 24 hours, then 10,000 roads.
If you want your neural network to work, you need at least enough data for each partition of your data set. If you divide your data set as described above, you already have 7,200,000 partitions. Just think about how much data you would need in total. With a small dataset, most of those 7.2 million partitions will have no data in them at all, which means your neural network prediction will not work most of the time, simply because there is no data to start with.
This is part of the reason why big companies are sort of crazy about big data, because you just never get enough of it.
But anyway, do ask on CrossValidated since people there are more statistician-y and can provide better explanations.
And please note, there might be other ways to split your data (or not split it at all) that make it work. The above is just an example of the pitfalls you might encounter.
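As a quick sanity check before modelling, you can count how many (road, day, hour) buckets actually contain data. A rough sketch with pandas, using a tiny synthetic table that stands in for your real traffic counts:

```python
# Sketch: count how many (road, day, hour) buckets actually have data.
import numpy as np
import pandas as pd

# Tiny synthetic table standing in for a real traffic-count log.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "road": rng.choice(["A1", "B2", "C3"], size=200),
    "timestamp": pd.Timestamp("2023-01-01")
                 + pd.to_timedelta(rng.integers(0, 30 * 24, size=200), unit="h"),
    "vehicle_count": rng.integers(0, 500, size=200),
})

df["day"] = df["timestamp"].dt.date
df["hour"] = df["timestamp"].dt.hour
non_empty = df.groupby(["road", "day", "hour"]).size()

n_possible = df["road"].nunique() * df["day"].nunique() * 24
print(f"non-empty buckets: {len(non_empty)} out of {n_possible} possible")
```

If most buckets come back empty on your real data, that is a strong hint the model will have nothing to learn from for most queries.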
I am developing (for my senior project) a dumbbell that is able to classify and record different exercises. The device has to be able to classify a range of these exercises based on the data given from an IMU (Inertial Measurement Unit). I have acceleration, gyroscope, compass, pitch, yaw, and roll data.
I am leaning towards using an Artificial Neural Network in order to do this, but am open to other suggestions as well. Ultimately I want to pass in the IMU data into the network and have it tell me what kind of exercise it is (Bicep curl, incline fly etc...).
If I use an ANN, what kind should I use (recurrent or not) and how should I implement it? I am not sure how to get the network to recognize an exercise when I am passing it a continuous stream of data. I was thinking about repeatedly performing an FFT on a window of the input and sending a fixed number of frequency magnitudes into the network, but I am not sure whether that will work either. Any suggestions/comments?
Your first task should be to collect some data from the dumbbell. There are many, many different schemes that could be used to classify the data, but until you have some sample data to work with, it is hard to predict exactly what will work best.
If you get 5 different people to do all of the exercises and look at the resulting data yourself (e.g. plot the different parts of the data collected), can you distinguish which exercise is which? This may give you hints about what pre-processing you might want to perform on the data before sending it to a classifier.
First, create a large training set.
Then train the network on it, telling it what each sample actually is.
You might use averages of the data as well.
Perhaps use the raw movement plus the movement averaged over 2 s, 5 s, and 10 s windows, and feed those as additional input nodes.
While exercising, the trained network can be fed the averaged data as well (e.g. the last x samples divided by x); this gives a more stable behaviour, otherwise the neural network's output can become erratic.
Note that the training set will then need to contain the averaged data too, so you will need a large training set. A rough sketch of the averaging is shown below.
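A minimal sketch of that averaging idea, assuming a fixed sample rate (50 Hz here, an arbitrary choice) and a raw IMU stream with one column per channel:

```python
# Sketch: raw IMU samples plus running means over a few window lengths.
import numpy as np

def add_running_means(samples, sample_rate=50, windows_sec=(2, 5, 10)):
    """samples: (T, n_channels) array of raw IMU readings at `sample_rate` Hz."""
    features = [samples]
    for w in windows_sec:
        k = w * sample_rate
        kernel = np.ones(k) / k
        smoothed = np.apply_along_axis(
            lambda col: np.convolve(col, kernel, mode="same"), 0, samples)
        features.append(smoothed)
    return np.concatenate(features, axis=1)

imu = np.random.randn(500, 9)        # fake accel/gyro/compass stream, 10 s at 50 Hz
inputs = add_running_means(imu)
print(inputs.shape)                  # (500, 36): raw + three averaged copies
```

Each original channel then appears once raw and once per window length, which is one simple way to give the network both fast and slow views of the movement.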
My professor asked my class to make a neural network to try to predict if a breast cancer is benign or malignant. To do this I'm using the Breast Cancer Wisconsin (Diagnostic) Data Set.
As a tip, my professor said that not all 30 attributes need to be used as inputs (there are 32 columns, but the first 2 are the ID and the diagnosis). What I want to ask is: how am I supposed to take those 30 inputs (which would create 100+ weights depending on how many neurons I use) and reduce them to a smaller number?
I've already found how to "prune" a neural net, but I don't think that's what I want. I'm not trying to eliminate unnecessary neurons, but to shrink the input itself.
PS: Sorry for any English errors, it's not my native language.
That is an area of active research. It is called feature selection, and there are several established techniques. One is Principal Component Analysis (PCA), which reduces the dimensionality of your dataset by keeping the components that retain the most variance. Another thing you can do is check whether there are highly correlated variables: if two inputs are highly correlated, they probably carry almost the same information, so one of them can be removed without hurting the classifier's performance much. A third technique you could use is deep learning, which tries to learn the features that will later be used to feed your classifier. More about deep learning and PCA can be found here: http://deeplearning.stanford.edu/wiki/index.php/Main_Page
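As a hedged illustration of the PCA route, here is a short scikit-learn sketch on the same Wisconsin dataset (the 95% variance threshold is an arbitrary choice):

```python
# Sketch: scale the 30 inputs, then keep the principal components
# that explain 95% of the variance.
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)    # X has shape (569, 30)
X_scaled = StandardScaler().fit_transform(X)  # PCA is sensitive to feature scale

pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X_scaled)
print(X_reduced.shape)  # roughly (569, 10): far fewer inputs for the network
```

The reduced matrix can then be fed to your neural network in place of the original 30 attributes.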
This problem is called feature selection. It is mostly the same for neural networks as for other classifiers. You could reduce your dataset while retaining most of the variance using PCA. To go further, you could use a greedy approach and evaluate your features one by one, training and testing your network with each feature excluded in turn.
There is also a technique for feature selection that uses just the neural network itself.
Split your dataset into three groups:
Training data, used for supervised training
Validation data, used to verify that the neural network is able to generalize
Accuracy-testing data, used to test which of the features are actually required
The steps:
Train a network on your training and validation set, just like you would normally do.
Test the accuracy of the network with the third dataset.
Locate the variable which, when dropped, yields the smallest drop in the accuracy test above ("dropped" meaning always feeding a zero as that input signal), and remove it.
Retrain your network with the new selection of features.
Keep doing this until either the network can no longer be trained or there is just one variable left. (A code sketch of this loop follows below.)
Here is a paper on the technique
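A rough sketch of that loop with scikit-learn's MLP classifier; for brevity the three-way split from the steps above is compressed into a single train/test split, and the network size is an arbitrary choice:

```python
# Sketch: repeatedly drop the feature whose zeroing hurts test accuracy least.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

active = list(range(X.shape[1]))          # indices of features still in play
while len(active) > 1:
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                        random_state=0).fit(X_train[:, active], y_train)
    base_acc = net.score(X_test[:, active], y_test)

    # "Drop" each feature by always feeding zero on that input.
    drops = []
    for i in range(len(active)):
        X_zeroed = X_test[:, active].copy()
        X_zeroed[:, i] = 0.0
        drops.append(base_acc - net.score(X_zeroed, y_test))

    weakest = int(np.argmin(drops))
    print(f"removing feature {active[weakest]} (accuracy drop {drops[weakest]:.3f})")
    del active[weakest]
```

In practice you would stop as soon as the accuracy starts degrading noticeably rather than running all the way down to one feature.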
I am new to machine learning. I am familiar with SVMs, neural networks, and GAs. I'd like to know the best technique to learn for classifying pictures and audio. SVM does a decent job but takes a lot of time. Does anyone know a faster and better one? I'd also like to know the fastest library for SVM.
Your question is a good one, and it touches on the state of the art in classification algorithms. As you say, the choice of classifier depends on your data. For images, I can tell you that there is a method called AdaBoost; read this and this to learn more about it. On the other hand, you will find that a lot of people are doing research on this; for example, in "Gender Classification of Faces Using Adaboost" [Rodrigo Verschae, Javier Ruiz-del-Solar and Mauricio Correa] the authors say:
"Adaboost-mLBP outperforms all other Adaboost-based methods, as well as baseline methods (SVM, PCA and PCA+SVM)"
Take a look at it.
If your main concern is speed, you should probably take a look at VW (Vowpal Wabbit) and, more generally, at stochastic gradient descent based algorithms for training SVMs.
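For illustration, a minimal scikit-learn sketch of that idea: SGDClassifier with hinge loss trains a linear SVM by stochastic gradient descent, which scales much better than a kernel SVM (the synthetic data is a placeholder):

```python
# Sketch: linear SVM trained by stochastic gradient descent (hinge loss).
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=50_000, n_features=100, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

svm = SGDClassifier(loss="hinge", alpha=1e-4, random_state=0)
svm.fit(X_train, y_train)
print("test accuracy:", svm.score(X_test, y_test))
```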
If the number of features is large in comparison to the number of training examples,
then you should go for logistic regression or an SVM without a kernel (a linear SVM).
If the number of features is small and the number of training examples is intermediate,
then you should use an SVM with a Gaussian (RBF) kernel.
If the number of features is small and the number of training examples is large,
use logistic regression or an SVM without a kernel.
That's according to the Stanford ML class. (The same rule of thumb is written out as a small helper below.)
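For what it's worth, the same rule of thumb written out as a tiny helper (the sample-size threshold is illustrative, not from the course):

```python
# Sketch: the class's rule of thumb as a helper (thresholds are illustrative).
def pick_model(n_features, n_samples):
    if n_features >= n_samples:
        return "logistic regression or linear SVM (no kernel)"
    if n_samples <= 10_000:
        return "SVM with a Gaussian (RBF) kernel"
    return "logistic regression or linear SVM (no kernel)"

print(pick_model(n_features=10_000, n_samples=500))  # many features, few examples
print(pick_model(n_features=50, n_samples=5_000))    # few features, intermediate examples
```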
For such a task you may need to extract features first; only after that is classification feasible.
I think feature extraction and selection are important.
For image classification there are a lot of possible features, such as raw pixels, SIFT features, color, texture, etc. It is best to choose the ones suitable for your task.
I'm not familiar with audio classification, but there are probably spectral features, like the Fourier transform of the signal, or MFCCs.
The method used to classify is also important. Besides the methods in the question, KNN is a reasonable choice too.
In the end, which features and method to use depends closely on the task.
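For the audio side, here is a hedged sketch of pulling MFCC features with librosa (my choice of library, not the answer's) from a synthetic tone that stands in for a real clip:

```python
# Sketch: MFCC features from an audio clip (a synthetic 440 Hz tone here).
import numpy as np
import librosa

sr = 22_050
t = np.linspace(0.0, 1.0, sr, endpoint=False)
signal = 0.5 * np.sin(2 * np.pi * 440.0 * t)      # 1 second of a pure tone

mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
print(mfcc.shape)                    # (13, n_frames)
clip_features = mfcc.mean(axis=1)    # one fixed-length vector per clip for a classifier
```

Averaging the coefficients over time is one common way to get a fixed-length vector that a KNN or SVM can consume.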
The method mostly depends on the problem at hand. There is no method that is always the fastest for every problem. Having said that, you should also keep in mind that once you choose an algorithm for speed, you often start compromising on accuracy.
For example, since you're trying to classify images, there may be a lot of features compared to the number of training samples at hand. In such cases, if you go for an SVM with a kernel, you could end up overfitting, with the variance being too high.
So you would want to choose a method that has higher bias and lower variance. Using logistic regression or a linear SVM are some ways to do that.
You could also use different types of regularization, or techniques such as SVD, to remove the features that do not contribute much to your output prediction and keep only the most important ones; in other words, choose features that have little or no correlation between them. Once you do this, you should be able to speed up your SVM without sacrificing much accuracy.
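One possible way to realise that suggestion, sketched with scikit-learn: truncated SVD drops weakly informative directions before a linear SVM is trained on the reduced data (the synthetic dataset is a placeholder):

```python
# Sketch: truncated SVD to reduce the feature space, then a linear SVM.
from sklearn.datasets import make_classification
from sklearn.decomposition import TruncatedSVD
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=2_000, n_features=500, n_informative=30,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = make_pipeline(TruncatedSVD(n_components=30, random_state=0), LinearSVC())
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```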
Hope it helps.
There are some good techniques in machine learning, such as boosting and AdaBoost.
Boosting is one classification method: it iteratively re-weights the data, which is then classified by a particular base classifier in each iteration, and these iterations together build the classification model. Boosting assigns a weight to each data point in each iteration, and the weight changes according to how difficult that point is to classify.
AdaBoost is one such ensemble technique; it uses an exponential loss function to improve the accuracy of the predictions made.
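A minimal AdaBoost sketch with scikit-learn, using its default shallow decision trees as the base classifier (the synthetic data is a placeholder):

```python
# Sketch: AdaBoost with scikit-learn's default shallow decision-tree base learner.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ada = AdaBoostClassifier(n_estimators=200, random_state=0)
ada.fit(X_train, y_train)
print("test accuracy:", ada.score(X_test, y_test))
```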
I think your question is very open-ended, and the "best classifier for images" will largely depend on the type of images you want to classify. But in general, I suggest you study convolutional neural networks (CNNs) and transfer learning; these are currently the state-of-the-art techniques for this problem.
Check out the pre-trained CNN models available in PyTorch or TensorFlow.
Regarding images, I also suggest you study image pre-processing; pre-processing techniques are very important for highlighting particular features of the image and improving the generalization of the classifier.
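As a hedged sketch of the pre-trained route mentioned above, here is one way to load a pretrained CNN from torchvision and swap its final layer for a new task (the class count is an assumption, and a recent torchvision version is assumed for the weights API):

```python
# Sketch: load a pretrained ResNet-18, freeze it, and replace the final layer.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False               # freeze the pretrained backbone

num_classes = 5                           # assumed number of classes in your task
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Only the new head is optimised; inputs must be resized and normalised the
# way the pretrained weights expect (224x224, ImageNet statistics).
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```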
We need to decide between Support Vector Machines and Fast Artificial Neural Network for some text processing project.
It includes contextual spelling correction and then tagging the text with certain phrases and their synonyms.
Which will be the right approach? Or is there an alternative to both of these, something more appropriate than either FANN or SVM?
I think you'll get competitive results from both of the algorithms, so you should aggregate the results... think about ensemble learning.
Update:
I don't know if this is specific enough: use a Bayes optimal classifier to combine the predictions from each algorithm. You have to train both of your algorithms, then train the Bayes optimal classifier to take the outputs of those algorithms as its input and make optimal predictions based on them.
Separate your training data into 3 sets:
The 1st data set will be used to train the (artificial) neural network and the support vector machine.
The 2nd data set will be used to train the Bayes optimal classifier, taking the raw predictions from the ANN and SVM as its inputs.
The 3rd data set will be your qualification data set, where you test your trained Bayes optimal classifier. (A rough sketch of this kind of stacking follows below.)
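A rough sketch of that kind of stacking with scikit-learn; note that StackingClassifier replaces the separate 2nd dataset with internal cross-validation, and the synthetic features stand in for your processed text:

```python
# Sketch: stack an MLP and an SVM, with logistic regression as the combiner.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=2_000, n_features=50, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("ann", MLPClassifier(max_iter=1000, random_state=0)),
                ("svm", SVC(random_state=0))],
    final_estimator=LogisticRegression(),  # the combining classifier
    cv=5,                                   # internal split stands in for the 2nd dataset
)
stack.fit(X_train, y_train)
print("held-out accuracy:", stack.score(X_test, y_test))
```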
Update 2.0:
Another way to create an ensemble of the algorithms is to use 10-fold (or more generally, k-fold) cross-validation:
Break the data into 10 sets of size n/10.
Train on 9 of them and test on the remaining 1.
Repeat 10 times and take the mean accuracy (see the sketch below).
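A minimal 10-fold cross-validation sketch with scikit-learn (the SVC and synthetic data are placeholders):

```python
# Sketch: 10-fold cross-validation of a single classifier.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)
scores = cross_val_score(SVC(), X, y, cv=10)
print("mean accuracy over 10 folds:", scores.mean())
```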
Remember that you can generally combine many of these classifiers and validation methods in order to produce better results. It's just a matter of finding what works best for your domain.
You might also want to take a look at maxent classifiers (/log-linear models).
They're really popular for NLP problems. Modern implementations, which use quasi-Newton methods for optimization rather than the slower iterative scaling algorithms, train more quickly than SVMs. They also seem to be less sensitive to the exact value of the regularization hyperparameter. You should probably only prefer SVMs over maxent if you'd like to use a kernel to get feature conjunctions for free.
As for SVMs vs. neural networks, using SVMs would probably be better than using ANNs. Like maxent models, training SVMs is a convex optimization problem. This means, given a data set and a particular classifier configuration, SVMs will consistently find the same solution. When training multilayer neural networks, the system can converge to various local minima. So, you'll get better or worse solutions depending on what weights you use to initialize the model. With ANNs, you'll need to perform multiple training runs in order to evaluate how good or bad a given model configuration is.
This question is very old, and a lot of developments have happened in the NLP area in the last 7 years.
Convolutional neural networks and recurrent neural networks evolved during this time.
Word embeddings: words appearing in similar contexts tend to have similar meanings. Word embeddings are pre-trained on a task where the objective is to predict a word based on its context.
CNN for NLP:
Sentences are first tokenized into words, which are then transformed into a word embedding matrix (i.e. the input embedding layer) of dimension d.
Convolutional filters are applied to this input embedding layer to produce a feature map.
A max-pooling operation over each filter's output yields a fixed-length vector and reduces the dimensionality of the output. (A minimal example follows below.)
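A hedged Keras sketch of that embedding → convolution → max-pooling pipeline; the vocabulary size, sequence length, and filter settings are illustrative assumptions:

```python
# Sketch: tokenised sentences -> embedding layer -> 1D convolution -> max pooling.
from tensorflow.keras import layers, models

vocab_size, seq_len, embed_dim = 20_000, 100, 128

model = models.Sequential([
    layers.Input(shape=(seq_len,)),               # integer word indices
    layers.Embedding(vocab_size, embed_dim),      # input embedding layer
    layers.Conv1D(filters=64, kernel_size=5, activation="relu"),
    layers.GlobalMaxPooling1D(),                  # fixed-length vector per sentence
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),        # binary classification head
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```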
Since CNNs have the shortcoming of not preserving long-distance contextual information, RNNs were introduced.
RNNs are specialized neural approaches that are effective at processing sequential information.
An RNN memorizes the result of previous computations and uses it in the current computation.
There are a few variations of the RNN, notably the Long Short-Term Memory unit (LSTM) and the Gated Recurrent Unit (GRU). (A sketch follows below.)
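And the recurrent counterpart, as a hedged sketch: an LSTM over the same embedded sequence, keeping only its final state for classification:

```python
# Sketch: the recurrent variant, with an LSTM summarising the whole sequence.
from tensorflow.keras import layers, models

vocab_size, seq_len, embed_dim = 20_000, 100, 128

model = models.Sequential([
    layers.Input(shape=(seq_len,)),
    layers.Embedding(vocab_size, embed_dim),
    layers.LSTM(64),                              # keeps long-range context in its state
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```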
Have a look at the resources below:
deep-learning-for-nlp
Recent trends in deep learning paper
You can use a convolutional neural network (CNN) or a recurrent neural network (RNN) to train an NLP model. I think CNNs achieve state-of-the-art results now.