How do I train by changing the order of the training set? - dataset

I want to compare the performance when the two training sets, A and B, are used in different orders, for example training on Train Set A first and then on Train Set B, but I don't know how to set this up.
When I modify the data.yaml file, do I just have to change the order of the train part?
And what should I do if I want to mix the training sets randomly?
train: # train images (relative to 'path')
  - images/data_A
  - images/data_B

or

train: # train images (relative to 'path')
  - images/data_B
  - images/data_A

Related

How to apply gradient descent on the weights of a neural network?

Consider a neural network with two hidden layers. In this case we have three matrices of weights. Let's say I'm starting the training. In the first round I'll set random values for all the weights of the three matrices. If this is correct, I have two questions:
1- Should I do the training from the input layer toward the output layer, or the other way around?
2- In the second round of the training I have to apply gradient descent to the weights. Should I apply it to all the weights of all the matrices and then calculate the error, or apply it weight by weight, checking whether the error has decreased before moving to the next weight, and only then go to the next training round?
You need to be familiar with forward propagation and backward propagation. In a neural network, you first initialize the weights randomly. Then you predict the y value (let's say y_pred) from the training set values (X_train). For each X_train sample you have y_train, which is the true output (the ground truth) for that training sample. Then you calculate a loss value according to the loss function; for simplicity, let's say loss = y_pred - y_train (this is not the actual loss function, it is a bit more complex than that). This is forward propagation in short.
Once you have the loss, you calculate how much you need to change the weights in order to improve your neural network in the next iteration. For this we use the gradient descent algorithm: you calculate new weights using the loss value you got. This is backward propagation in short.
You repeat these steps multiple times, and your weights will improve from random values to trained weights.
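A minimal sketch of those two steps with NumPy (one hidden layer, mean squared error as the loss, toy data; all of it chosen just for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X_train = rng.normal(size=(50, 4))                                  # 50 samples, 4 features
y_train = (X_train.sum(axis=1, keepdims=True) > 0).astype(float)    # toy ground-truth labels

# Round 1: set random values for all weight matrices (here: one hidden layer of 8 units).
W1 = rng.normal(scale=0.5, size=(4, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))
lr = 0.5

for epoch in range(200):
    # Forward propagation: predict y_pred from X_train.
    h = sigmoid(X_train @ W1)
    y_pred = sigmoid(h @ W2)

    # Loss (mean squared error here, purely for the sketch).
    loss = np.mean((y_pred - y_train) ** 2)

    # Backward propagation: gradients w.r.t. ALL weight matrices,
    # computed from the output layer back toward the input layer.
    grad_out = 2 * (y_pred - y_train) / len(X_train) * y_pred * (1 - y_pred)
    grad_W2 = h.T @ grad_out
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    grad_W1 = X_train.T @ grad_h

    # Gradient descent: update every weight matrix in the same iteration.
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

    if epoch % 50 == 0:
        print(epoch, loss)   # the loss should drift downward as the weights improve
```

All weight matrices are updated together in each iteration; you do not need to check the error weight by weight, because the gradient already gives the update direction for every weight at once.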

Natural language classifier returns classifications for untrained items

I am confused as to how NLC works. My expectation is that when it is asked to classify text that has no relation to its training data, it should return no results, or results with very low confidence scores.
I have trained a model with a set of training data and when I attempt to classify text that is outside of the training data I am getting results with high confidence values (~60%).
Here's an example of my training data:
foo,1,2,3,4
bar,1,2,3,4
baz,1,2,3,4
When I try to classify the text "This should not exist" I receive a high confidence that this text is "1".
Is my assumption correct that I should not be returned high-confidence results in this case? Am I training the classifier on foo, bar, and baz incorrectly? If not, what should I expect from the NLC service?
Imagine that you have 3 buckets and you have to throw a coin into one of them. Each bucket has a 33.3% chance of getting the coin. The same happens with the Natural Language Classifier service: it is trained to classify input text into predefined classes.
If you create a classifier with 3 classes and you try to classify text that wasn't in the training data, NLC will still classify your sentence into one of the three classes you defined. If your top result is 60%, then the other two buckets get the remaining 40%.
Sometimes you could get a high score and that's normal when you have classes that are very different.
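That "buckets" intuition can be illustrated with a plain softmax over per-class scores (a made-up stand-in, not the service's actual model): the confidences always sum to 1 over the classes you defined, so even unrelated text lands mostly in one bucket.

```python
import numpy as np

def softmax(scores):
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

classes = ["1", "2", "3"]

# Hypothetical raw scores for the out-of-scope text "This should not exist":
# the model still produces some score for each of the trained classes.
raw_scores = np.array([0.9, 0.3, 0.1])

confidences = softmax(raw_scores)
print({c: round(float(p), 2) for c, p in zip(classes, confidences)})
# {'1': 0.5, '2': 0.27, '3': 0.22}  -- one class always "wins"
print(round(float(confidences.sum()), 6))   # 1.0 -- the buckets cover everything
```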

How is the hold out set chosen in vowpal wabbit

I am using Vowpal Wabbit for logistic regression. I came to know that Vowpal Wabbit selects a hold-out set for validation from the given training data. Is this set chosen randomly? I have a very unbalanced dataset with 100 +ve examples and 1000 -ve examples. Given this training data, how does Vowpal Wabbit select the hold-out examples?
And how do I assign more weight to the +ve examples?
By default each 10th example is used for holdout (you can change it with --holdout_period,
see https://github.com/JohnLangford/vowpal_wabbit/wiki/Command-line-arguments#holdout-options).
This means the model trained with holdout evaluation on is trained only on 90% of the training data.
This may result in slightly worse accuracy.
On the other hand, it allows you to use --early_terminate (which is set to 3 passes by default),
which makes it easier to reduce the risk of overtraining caused by too many training passes.
Note that by default holdout evaluation is on only if multiple passes are used (otherwise VW uses progressive validation loss).
As for the second question, you can add importance weight to the positive examples. The default importance weight is 1. See https://github.com/JohnLangford/vowpal_wabbit/wiki/Input-format
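As a small illustration of both points (a hedged sketch: the feature names below are made up, but the options are the standard ones mentioned above), the importance weight goes right after the label in the input file:

```
1 10 |f num_words:3 has_link:1
-1 |f num_words:7 has_link:0
```

Here the 10 after the positive label is the importance weight (the negative example keeps the default of 1), and f is just an arbitrary namespace. A run with an explicit holdout period and early termination could then look like:

```
vw train.vw --loss_function logistic -c --passes 20 --holdout_period 10 --early_terminate 3 -f model.vw
```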

House pricing using neural network

I wrote a multilayer perceptron implementation (in Python) which is able to classify the Iris dataset. It was trained with the backpropagation algorithm and uses sigmoid activation functions on the hidden and output layers.
But now I want to change it so that it can approximate house prices.
(I have a dataset of ~300 estates with prices and input parameters like rooms, location, etc.)
Right now the output of my perceptron is in the range [0; 1]. But as far as I understand, if I want to get the resulting house price on the output neuron, I need to change that activation function somehow, right?
Can somebody help me?
I'm new to neural networks
Thanks in advance.
Assuming, for instance, that house prices range between $1 and $1,000,000, you can just map the 0...1 range to the final price range, both for training and for testing. Just note that 300 estates is a fairly small data set.
To be precise, if a house is $500k, then the target training output becomes 0.5. You basically divide by your maximum possible home value to get the target training value. When you get the output value, you multiply by the maximum home value to get the predicted price.
So, view the output of the neural network as the percentage of the total cost.
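A minimal sketch of that mapping (assuming, as above, a maximum possible price of $1,000,000):

```python
MAX_PRICE = 1_000_000.0   # assumed upper bound on house prices

def price_to_target(price):
    """Scale a dollar price into the [0, 1] range the sigmoid output can produce."""
    return price / MAX_PRICE

def target_to_price(output):
    """Map the network's [0, 1] output back to dollars."""
    return output * MAX_PRICE

print(price_to_target(500_000))   # 0.5 -> training target for a $500k house
print(target_to_price(0.25))      # 250000.0 -> predicted price for an output of 0.25
```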

Document classification with incomplete training set

Advice please. I have a collection of documents that all share a common attribute (e.g. the word "French" appears). Some of these documents have been marked as not pertinent to this collection (e.g. "French kiss" appears), but not all such documents are guaranteed to have been identified. What is the best method to figure out which other documents don't belong?
Assumptions
Given your example "French", I will work under the assumption that the feature is a word that appears in the document. Also, since you mention that "French kiss" is not relevant, I will further assume that in your case, a feature is a word used in a particular sense. For example, if "pool" is a feature, you may say that documents mentioning swimming pools are relevant, but those talking about pool (the sport, like snooker or billiards) are not relevant.
Note: Although word sense disambiguation (WSD) methods would work, they require too much effort and are overkill for this purpose.
Suggestion: localized language model + bootstrapping
Think of it this way: You don't have an incomplete training set, but a smaller training set. The idea is to use this small training data to build bigger training data. This is bootstrapping.
For each occurrence of your feature in the training data, build a language model based only on the words surrounding it. You don't need to build a model for the entire document. Ideally, just the sentences containing the feature should suffice. This is what I am calling a localized language model (LLM).
Build two such LLMs from your training data (let's call it T_0): one for pertinent documents, say M1, and another for irrelevant documents, say M0. Now, to build a bigger training data, classify documents based on M1 and M0. For every new document d, if d does not contain the feature-word, it will automatically be added as a "bad" document. If d contains the feature-word, then consider a local window around this word in d (the same window size that you used to build the LLMs), and compute the perplexity of this sequence of words with M0 and M1. Classify the document as belonging to the class which gives lower perplexity.
To formalize, the pseudo-code is:
T_0 := initial training set (consisting of relevant/irrelevant documents)
D0  := additional data to be bootstrapped
N   := iterations for bootstrapping

for i = 0 to N-1
    T_i+1 := empty training set
    Build M0 and M1 as discussed above using a window-size w
    for d in D0
        if feature-word not in d
            then add d to irrelevant documents of T_i+1
        else
            compute perplexity scores P0 and P1 corresponding to M0 and M1
                using window size w around the feature-word in d
            if P0 < P1 - delta
                add d to irrelevant documents of T_i+1
            else if P1 < P0 - delta
                add d to relevant documents of T_i+1
            else
                do not use d in T_i+1
            end
        end
    end
    Select a small random sample from relevant and irrelevant documents in
        T_i+1, and (re)classify them manually if required.
end
T_N is your final training set. In the above bootstrapping, the parameter delta needs to be determined through experiments on some held-out data (also called development data).
The manual reclassification on a small sample is done so that the noise during this bootstrapping is not accumulated through all the N iterations.
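
To make the perplexity comparison concrete, here is a small self-contained sketch (not the author's code): a bigram localized language model with add-one smoothing, trained on made-up windows around the feature word, and the P0/P1 comparison that decides where a new document goes.

```python
import math
from collections import Counter

def bigram_model(windows):
    """Build a bigram 'localized language model' with add-one smoothing."""
    unigrams, bigrams, vocab = Counter(), Counter(), set()
    for w in windows:
        tokens = ["<s>"] + w
        vocab.update(tokens)
        unigrams.update(tokens[:-1])
        bigrams.update(zip(tokens[:-1], tokens[1:]))
    V = len(vocab)
    def prob(prev, cur):
        return (bigrams[(prev, cur)] + 1) / (unigrams[prev] + V)
    return prob

def perplexity(prob, window):
    tokens = ["<s>"] + window
    log_p = sum(math.log(prob(p, c)) for p, c in zip(tokens[:-1], tokens[1:]))
    return math.exp(-log_p / (len(tokens) - 1))

# Made-up token windows around the feature word "french".
relevant_windows   = [["speaks", "french", "fluently"], ["learn", "french", "grammar"]]
irrelevant_windows = [["a", "french", "kiss"], ["the", "french", "kiss", "scene"]]

M1 = bigram_model(relevant_windows)     # LLM of pertinent documents
M0 = bigram_model(irrelevant_windows)   # LLM of irrelevant documents

new_window = ["studying", "french", "grammar"]   # window from a new document d
P1, P0 = perplexity(M1, new_window), perplexity(M0, new_window)
delta = 0.0   # margin; tune it on held-out (development) data
label = "relevant" if P1 < P0 - delta else "irrelevant" if P0 < P1 - delta else "undecided"
print(P1, P0, label)   # P1 is lower here, so d would be added as relevant
```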
Firstly, you should take care of how you extract features from the sample documents. Counting every word is not a good approach. You might need a technique like TF-IDF to teach the classifier which words are important for classification and which are not.
Build the right dictionary. In your case, the phrase "French kiss" should be a single token, instead of the sequence "French" + "kiss". Using the right technique to build the dictionary is important.
The remaining errors in the samples are normal; we call such data "not linearly separable". There is a huge amount of advanced research on how to solve this problem. For example, an SVM (support vector machine) may be what you want to use. Please note that a single-layer Rosenblatt perceptron usually shows very bad performance on datasets which are not linearly separable.
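A small sketch of those suggestions combined with scikit-learn (TF-IDF features, a vocabulary that keeps "french kiss" as a bigram token via ngram_range, and an SVM); the documents and labels are made up:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Made-up labeled documents: 1 = pertinent, 0 = not pertinent.
docs = [
    "the french language is spoken in france",
    "learning french grammar takes practice",
    "the scene ends with a french kiss",
    "a french kiss is a type of kiss",
]
labels = [1, 1, 0, 0]

# ngram_range=(1, 2) keeps both single words and bigrams such as "french kiss"
# in the dictionary, so the classifier can weight the phrase on its own.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LinearSVC(),
)
model.fit(docs, labels)

new_docs = ["a course on french vocabulary", "they shared a french kiss"]
print(model.predict(new_docs))   # the "french kiss" bigram pushes the second document toward class 0
```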
Some kinds of neural networks (like the Rosenblatt perceptron) can be trained on an erroneous data set and can still show better performance than the trainer has. Moreover, in many cases you should allow some errors in order to avoid over-training.
You can label all unlabeled documents randomly, train several nets, and estimate their performance on the test set (of course, you should not include unlabeled documents in the test set). After that you can, in a cycle, recalculate the weights of the unlabeled documents as w_i = sum of quality(j) * w_ij, then repeat the training, recalculate the weights again, and so on. Because this procedure is equivalent to introducing a new hidden layer and recalculating its weights by a Hebbian procedure, the overall procedure should converge if your positive and negative sets are linearly separable in some network feature space.
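One possible reading of that procedure, sketched with scikit-learn (logistic regression stands in for the "nets", and the features, the quality measure, and the normalization are my assumptions, not the answerer's code):

```python
import numpy as np
from scipy.sparse import vstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Made-up data: labeled training documents, a labeled test set, and unlabeled documents.
train_docs = ["french grammar lesson", "spoken french in france",
              "a french kiss scene", "french kiss in the movie"]
train_y = np.array([1, 1, 0, 0])
test_docs = ["learning french verbs", "the french kiss moment"]   # no unlabeled docs in here
test_y = np.array([1, 0])
unlabeled_docs = ["french vocabulary drills", "they shared a french kiss"]

vec = TfidfVectorizer().fit(train_docs + test_docs + unlabeled_docs)
X_tr, X_te, X_unl = (vec.transform(d) for d in (train_docs, test_docs, unlabeled_docs))

rng = np.random.default_rng(0)
w = rng.integers(0, 2, size=len(unlabeled_docs)).astype(float)   # random initial labels

for _ in range(5):
    weighted_votes = np.zeros(len(unlabeled_docs))
    total_quality = 0.0
    for j in range(3):   # "several nets" -- here three slightly different classifiers
        clf = LogisticRegression(C=float(j + 1))
        X_all = vstack([X_tr, X_unl])
        y_all = np.concatenate([train_y, (w >= 0.5).astype(int)])
        clf.fit(X_all, y_all)
        quality = clf.score(X_te, test_y)                             # quality(j) on the test set
        weighted_votes += quality * clf.predict_proba(X_unl)[:, 1]    # quality(j) * w_ij
        total_quality += quality
    # w_i = sum_j quality(j) * w_ij, normalized here (my addition) to stay in [0, 1]
    w = weighted_votes / max(total_quality, 1e-12)

print(w.round(2))   # final soft labels for the unlabeled documents
```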
