Benchmarking a Model in Caffe - Does dataset make a difference?

I use the following command to benchmark my model:
./build/tools/caffe time -model /path/to/deploy.prototxt -weights /path/to/caffemodel -gpu all
My question is: does the dataset make any difference? In this case my deploy file does not point to any dataset. Also, the caffemodel file should not make a difference, even if it was trained for just one epoch. I believe this because the number of multiplications and additions in the forward pass stays the same no matter how well trained the model is. Therefore, the benchmark time should be the same and accurate regardless of which .caffemodel file is used. Is my assumption correct?

No, the dataset doesn't make a difference in the benchmark. In fact, there is no need to pass the -weights flag at all: the time functionality uses dummy data to benchmark the model described in deploy.prototxt.
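For example, a timing run with no weights file could look like this (the -iterations flag is optional; caffe time defaults to 50 iterations):
./build/tools/caffe time -model /path/to/deploy.prototxt -gpu all -iterations 50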

Related

My RPROP neural network gets stuck

Since the implementation of the algorithm is correct (I checked it hundreds of times), I think I have misunderstood some theoretical facts.
I suppose that:
given that j refers to the hidden layer and k to the output layer,
∂E/∂wjk is calculated by doing:
outputNeuron[k].errInfo=(target[k]-outputNeuron[k].out)*derivate_of_sigmoid(outputNeuron[k].in);
∂E/∂wjk=outputNeuron[k].errInfo*hiddenNeuron[j].out;
For ∂E/∂wij, where 'i' refers to the input layer and 'j' to the hidden layer, it's a bit longer.
Each hidden unit (Zj, j = 1, ..., p) sums its delta inputs (from units in the output layer):
errorInfo_in[j] = sum for k = 1 to m (the number of output units) of: outputNeuron[k].errInfo*w[j][k]
Then I calculate the error info of the hidden unit:
hiddenNeuron[j].errInfo=errorInfo_in[j]*derivated_sigmoid(hiddenNeuron[j].in);
Finally the ∂E/∂wij is:
hiddenNeuron[j].errInfo*x[i] (where x[i] is the output of an input unit)
I apply RPROP as described here: http://www.inf.fu-berlin.de/lehre/WS06/Musterererkennung/Paper/rprop.pdf
to all the weights between the input and hidden layers, and between the hidden and output layers.
I'm trying to recognize letters made of '#' and '-', on a 9 (rows) x 7 (columns) grid.
The MSE just gets stuck at 172 after a few epochs.
I know that RPROP is a batch learning algorithm, but I'm using online learning because I read that it works anyway.
RPROP does not work well with pure online learning; it might work with mini-batch learning, provided the mini-batch is large enough. The absolute value of the MSE is a poor indicator of anything, especially on custom datasets where gold-standard values are unknown.
It is best to test newly implemented NN algorithms on simple things like logic gates (AND, OR, XOR), before moving to something more complex. This way, you will always be confident in your code and methodologies. For character recognition tasks, you may also want to test on well known datasets such as MNIST, where expected results are known and you can compare your results to previous work.
For classification tasks one usually wants to measure classification accuracy, as it is a much better performance indicator than MSE.
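To make the batch point concrete, here is a minimal sketch of the RPROP- update in Python/NumPy (the names and constants are my own illustration, not taken from the question's code); the update is applied once per full batch and uses only the sign of the accumulated gradient:

import numpy as np

ETA_PLUS, ETA_MINUS = 1.2, 0.5     # standard RPROP step-size factors
STEP_MIN, STEP_MAX = 1e-6, 50.0    # limits on the per-weight step size

def rprop_update(w, grad, prev_grad, step):
    # grad must be the gradient accumulated over the WHOLE batch; per-pattern
    # (online) gradients keep flipping sign, which is why pure online RPROP stalls
    sign_change = grad * prev_grad
    # same sign as last time -> grow the step; sign flip -> shrink it
    step = np.where(sign_change > 0.0, np.minimum(step * ETA_PLUS, STEP_MAX), step)
    step = np.where(sign_change < 0.0, np.maximum(step * ETA_MINUS, STEP_MIN), step)
    grad = np.where(sign_change < 0.0, 0.0, grad)   # RPROP-: skip the update after a flip
    w = w - np.sign(grad) * step                    # step by the adaptive size, not the gradient magnitude
    return w, grad, step                            # the returned grad becomes next call's prev_grad

# per epoch: w, prev_grad, step = rprop_update(w, batch_gradient, prev_grad, step)
# with step initialized to e.g. 0.1 * np.ones_like(w) and prev_grad to zeros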

How to make training and testing set from a dataset?

What's the best method:
splitting my data into training and testing sets, using 70% of the data for training and 30% for testing, or
using similar data for both the training and the testing set.
A) Is the second method correct, and what are its disadvantages?
B) My dataset contains 3 attributes and 1000 objects; is this suitable for selecting the training and testing sets from this dataset?
The second method is wrong (at least if by 'similar' you mean 'same').
You shouldn't use the test set for training.
If you use just one data set, you could achieve perfect accuracy by simply memorizing this set (i.e., by overfitting).
Generally, this isn't what you want, because the algorithm should learn the general concept behind the examples. A way of testing whether this happens is to use separate datasets for training and testing.
The test set gives you a forecast of the performance of your model in the "real world", because it is independent (during the training/validation phase you don't make any choices based on the test data).
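To make the first option concrete, here is a minimal sketch with scikit-learn (the generated placeholder data and the choice of classifier are my own, not from the question):

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# placeholder data standing in for the 1000 objects with 3 attributes
X, y = make_classification(n_samples=1000, n_features=3,
                           n_informative=3, n_redundant=0, random_state=0)

# 70% for training, 30% held out for testing
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=42, stratify=y)

model = DecisionTreeClassifier().fit(X_train, y_train)   # learn on the training set only
print(accuracy_score(y_test, model.predict(X_test)))     # forecast of "real world" performance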
The second option is wrong; the first option is the best.
With the LingPipe classifier, for example, we can train and test on news data. But if you provide the same data used in training for testing purposes, it will of course show accurate output. What we want is to predict the output for unknown cases; that is how we really test accuracy.
So what you have to do is:
1) Train on your data
2) Build a model
3) Apply the test data to the model to get output for unknown sets/cases too.
Building a model is nothing but writing the trained object to a file, so each time you run the program you can load that model instead of training again. This saves time (a sketch of the idea is shown below). I hope my answer will help you. Best regards.
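The "write the trained object into a file" step could look like the following; this is a sketch in Python using pickle and scikit-learn (the question mentions LingPipe, which is a Java library, so treat this purely as an illustration of the idea):

import pickle
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# placeholder data and classifier
X, y = make_classification(n_samples=1000, n_features=3,
                           n_informative=3, n_redundant=0, random_state=0)
model = DecisionTreeClassifier().fit(X, y)      # 1) train  2) build a model

with open("model.pkl", "wb") as f:              # persist the trained object once
    pickle.dump(model, f)

with open("model.pkl", "rb") as f:              # later runs: load instead of retraining
    model = pickle.load(f)
print(model.predict(X[:5]))                     # 3) apply it to new/unknown cases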
You can create train and test sets from a dataset on the command line:
java -cp weka.jar weka.filters.unsupervised.instance.RemovePercentage -P 30 -i dataset.arff -o train.arff
java -cp weka.jar weka.filters.unsupervised.instance.RemovePercentage -P 30 -V -i dataset.arff -o test.arff
(The -V flag inverts the selection, so the test file receives exactly the 30% that was removed from the training file and the two sets do not overlap.)
And A): unless "all" possible future data combinations exist in your dataset, using the same data for training and testing is a bad solution. It does not assess how well your model handles different new cases, and it cannot tell you whether you are overfitting (fitting your current data without reusable logic). Why don't you use "cross-validation"? It is very effective if you want to use a single dataset: it automatically splits the data into different parts, tests each part against the rest of the data, and then computes the average result. (A sketch is given at the end of this answer.)
B) If you mean 3 attributes and 1000 instances, it could be OK, provided you don't have too many different types of outputs (classes) to predict and the instances map to good use cases.
FYI: if you want to test your data on many different classifiers to find the best one, use the Weka Experimenter.
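The cross-validation suggested in A) above, as a sketch with scikit-learn (Weka offers the same thing in the Explorer and on the command line; the placeholder data is mine):

from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=3,
                           n_informative=3, n_redundant=0, random_state=0)

# 10-fold cross-validation: split into 10 parts, test each part against a
# model trained on the other 9 parts, then average the scores
scores = cross_val_score(DecisionTreeClassifier(), X, y, cv=10)
print(scores.mean(), scores.std())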

Viola-Jones AdaBoost method

I have read the paper about the Viola-Jones method for object detection and am confused by a few things.
1 - For AdaBoost, does each round mean that we calculate all 160k features across all images and then find the one with the least error (which, as I understand it, is a 'weak classifier'; please correct me if I am wrong)? If yes, then won't this take extremely long, possibly months, to train on a large set of images? And how many rounds would you run it for, if this is correct?
2 - If the above point is wrong, does it mean that for each feature we evaluate all the face and non-face images with that one feature, compare the result to a certain acceptable error threshold, and if it lies below this threshold we take the feature as a weak classifier and then update the weights before using the next of the 160k features?
I have tried understanding the MATLAB code at this link http://www.ece301.com/ml-doc/54-face-detect-matlab-1.html but I am not sure whether the way he implemented AdaBoost is correct.
It would also be a great help if there were a link that explains the AdaBoost used by Viola-Jones in a simple and clear way.
AdaBoost tries out multiple weak classifiers over several rounds, selecting the best weak classifier in each round and combining the selected classifiers to create a strong classifier.
Example for AdaBoost:
Data point    Classifier 1    Classifier 2    Classifier 3
x1            fail            pass            fail
x2            pass            fail            pass
x3            fail            pass            pass
x4            pass            fail            pass
AdaBoost can even make use of classifiers that are consistently wrong, by reversing their decisions.
1) Yes. If your parameters are, say, 5000 positive windows and 5000 negative ones (extracted from the set of negative background images you provided initially), then at each stage every feature is evaluated in turn for all 10000 windows and the best one is added to the feature set.
And yes, in the original Viola-Jones paper each feature is a weak classifier.
The process of checking all the features takes a long time, but not weeks. In fact, the bottleneck is gathering negative windows for the last stages, which can require days or even weeks. The number of rounds depends on reaching the given stopping conditions. Each stage requires a number of rounds (usually more in the final stages). So, for example: stage 1 requires 3 features (3 rounds), stage 2 requires 5 features (5 rounds), stage 3 requires 6 features (6 rounds), etc.
2) Since point 1 above is true, this scenario doesn't apply.
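To make the round-by-round procedure concrete, here is a sketch of a single round of discrete AdaBoost over precomputed feature values, using threshold stumps (Python/NumPy; this is my own simplification and not the full Viola-Jones cascade with its attentional stages):

import numpy as np

def adaboost_round(F, y, w):
    # F: (n_windows, n_features) precomputed feature values (e.g. Haar responses)
    # y: labels in {-1, +1}; w: current window weights (summing to 1)
    best = None
    for j in range(F.shape[1]):                      # evaluate every feature ...
        for thr in np.unique(F[:, j]):               # ... at every candidate threshold
            for polarity in (1, -1):                 # polarity = "reverse the decision"
                pred = polarity * np.sign(F[:, j] - thr)
                pred[pred == 0] = polarity
                err = w[pred != y].sum()             # weighted error of this weak classifier
                if best is None or err < best[0]:
                    best = (err, j, thr, polarity)
    err, j, thr, polarity = best
    alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-12))   # weight of the chosen weak classifier
    pred = polarity * np.sign(F[:, j] - thr)
    pred[pred == 0] = polarity
    w = w * np.exp(-alpha * y * pred)                # boost the misclassified windows
    return (j, thr, polarity, alpha), w / w.sum()

# a "stage" is several such rounds; the strong classifier is
# sign(sum of alpha_t * stump_t(x)) over the selected stumps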

How to go about creating a prolog program that can work backwards to determine steps needed to reach a goal

I'm not sure what exactly I'm trying to ask. I want to be able to make some code that can easily take an initial and final state and some rules, and determine paths/choices to get there.
So think, for example, of a game like Starcraft. To build a factory I need to have a barracks and a command center already built. So if I have nothing and I want a factory I might say ->Command Center->Barracks->Factory. Each thing takes time and resources, and that should be noted and considered in the path. If I want my factory at 5 minutes there are fewer options than if I want it at 10.
Also, the engine should be able to calculate available resources and utilize them effectively. Those three buildings might cost 600 total minerals, but the engine should plan the Command Center for when it would have 200 (or whatever it costs).
This would ultimately have requirements similar to: 10 marines at 5 minutes, infantry weapons upgrade at 6:30, 30 marines at 10 minutes, a factory at 11, etc.
So, how do I go about doing something like this? My first thought was to use some procedural language and make all the decisions from the ground up. I could simulate the system, branching and making different choices. Ultimately, some choices are quickly going to make it impossible to reach goals later (if I build 20 Supply Depots I'm probably not going to get that factory on time).
So then I thought weren't functional languages designed for this? I tried to write some prolog but I've been having trouble with stuff like time and distance calculations. And I'm not sure the best way to return the "plan".
I was thinking I could write:
depends_on(factory, barracks)
depends_on(barracks, command_center)
builds_from(marine, barracks)
build_time(command_center, 60)
build_time(barracks, 45)
build_time(factory, 30)
minerals(command_center, 400)
...
build(X) :-
depends_on(X, Y),
build_time(X, T),
minerals(X, M),
...
Here's where I get confused. I'm not sure how to construct this predicate and a query to get anything even close to what I want. I would have to somehow account for the rate at which minerals are gathered during the time spent building, and for other possible paths with extra minerals. If I only want 1 marine in 10 minutes I would want the engine to generate lots of plans, because there are lots of ways to end up with 1 marine at 10 minutes (maybe cut it off after so many; I'm not sure how you do that in Prolog).
I'm looking for advice on how to continue down this path, or advice about other options. I haven't been able to find anything more useful than Towers of Hanoi and ancestry examples for AI, so even some good articles explaining how to use Prolog to DO REAL THINGS would be amazing. And if I somehow can get these rules set up in a useful way, how do I get the "plans" Prolog came up with (ways to solve the query), other than writing to stdout like all the Towers of Hanoi examples do? Or is that the preferred way?
My other question is: my main code is in Ruby (and potentially other languages), and the options to communicate with Prolog are calling my Prolog program from within Ruby, accessing a virtual file system from within Prolog, or some kind of database structure (unlikely). I'm using SWI-Prolog at the moment; would I be better off doing this procedurally in Ruby, or would constructing this in a language like Prolog or Haskell be worth the extra effort of integrating?
I'm sorry if this is unclear, I appreciate any attempt to help, and I'll re-word things that are unclear.
Your question is typical and very common for users of procedural languages who first try Prolog. It is very easy to solve: you need to think in terms of relations between successive states of your world. A state of your world consists, for example, of the time elapsed, the minerals available, the things you already built, etc. Such a state can easily be represented with a Prolog term, and could look for example like time_minerals_buildings(10, 10000, [barracks,factory]). Given such a state, you need to describe what the state's possible successor states look like. For example:
state_successor(State0, State) :-
State0 = time_minerals_buildings(Time0, Minerals0, Buildings0),
Time is Time0 + 1,
can_build_new_building(Buildings0, Building),
building_minerals(Building, MB),
Minerals is Minerals0 - MB,
Minerals >= 0,
State = time_minerals_buildings(Time, Minerals, [Building|Buildings0]).
I am using the explicit naming convention (State0 -> State) to make clear that we are talking about successive states. You can of course also pull the unifications into the clause head. The example code is purely hypothetical and could look rather different in your final application. In this case, I am describing that the new state's elapsed time is the old state's time + 1, that the new amount of minerals decreases by the amount required to build Building, and that I have a predicate can_build_new_building(Bs, B), which is true when a new building B can be built assuming that the buildings given in Bs are already built. I assume it is a non-deterministic predicate in general, and will yield all possible answers (= new buildings that can be built) on backtracking, and I leave it as an exercise for you to define such a predicate.
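For what it is worth, a minimal (hypothetical) version of that exercise could look as follows, assuming depends_on/2 facts like those in your question; building/1 is a helper I am introducing here, and each building is assumed to be built at most once:

% a building can be built if it is not already built and all of its
% prerequisites (if any) are already among the built ones
building(command_center).
building(barracks).
building(factory).

can_build_new_building(Built, Building) :-
    building(Building),
    \+ member(Building, Built),
    forall(depends_on(Building, Prereq), member(Prereq, Built)).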
Given such a predicate state_successor/2, which relates a state of the world to its direct possible successors, you can easily define a path of states that lead to a desired final state. In its simplest form, it will look similar to the following DCG that describes a list of successive states:
states(State0) -->
( { final_state(State0) } -> []
; [State0],
{ state_successor(State0, State1) },
states(State1)
).
You can then use for example iterative deepening to search for solutions:
?- initial_state(S0), length(Path, _), phrase(states(S0), Path).
Also, you can keep track of states you already considered and avoid re-exploring them etc.
The reason you get confused with the example code you posted is essentially that build/1 does not have enough arguments to describe what you want. You need at least two arguments: One is the current state of the world, and the other is a possible successor to this given state. Given such a relation, everything else you need can be described easily. I hope this answers your question.
Caveat: my Prolog is rusty and shallow, so this may be off base
Perhaps a 'difference engine' approach would be appropriate:
given a goal like 'build factory',
backwards-chaining relations would check for has-barracks and tell you first to build-barracks,
which would check for has-command-center and tell you to build-command-center,
and so on,
accumulating a plan (and costs) along the way
If this is practical, it may be more flexible than a state-based approach... or it may be the same thing wearing a different t-shirt!
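A tiny sketch of that backward-chaining idea, using only the depends_on/2 facts from the question (hypothetical code: it ignores time and minerals and assumes a single prerequisite per building):

% chain(+Goal, -RevPlan): buildings needed to reach Goal, most recent first
chain(Goal, [Goal]) :-
    \+ depends_on(Goal, _).
chain(Goal, [Goal|Rest]) :-
    depends_on(Goal, Prereq),
    chain(Prereq, Rest).

build_plan(Goal, Plan) :-          % ?- build_plan(factory, Plan).
    chain(Goal, RevPlan),          % Plan = [command_center, barracks, factory].
    reverse(RevPlan, Plan).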

Help--100% accuracy with LibSVM?

Nominally a good problem to have, but I'm pretty sure it is because something funny is going on...
As context, I'm working on a problem in the facial expression/recognition space, so getting 100% accuracy seems incredibly implausible (not that it would be plausible in most applications...). I'm guessing there is either some consistent bias in the data set that is making it overly easy for an SVM to pull out the answer, or, more likely, I've done something wrong on the SVM side.
I'm looking for suggestions to help understand what is going on--is it me (=my usage of LibSVM)? Or is it the data?
The details:
About ~2500 labeled data vectors/instances (transformed video frames of individuals--<20 individual persons total), binary classification problem. ~900 features/instance. Unbalanced data set at about a 1:4 ratio.
Ran subset.py to separate the data into test (500 instances) and train (remaining).
Ran "svm-train -t 0 ". (Note: apparently no need for '-w1 1 -w-1 4'...)
Ran svm-predict on the test file. Accuracy=100%!
Things tried:
Checked about 10 times over that I'm not training & testing on the same data files, through some inadvertent command-line argument error
re-ran subset.py (even with -s 1) multiple times and did the train/test split on multiple different data sets (in case I randomly hit upon the most magical train/test partition)
ran a simple diff-like check to confirm that the test file is not a subset of the training data
svm-scale on the data has no effect on accuracy (accuracy=100%). (Although the number of support vectors does drop from nSV=127, nBSV=64 to nSV=72, nBSV=0.)
((weird)) using the default RBF kernel (vice linear -- i.e., removing '-t 0') results in accuracy going to garbage(?!)
(sanity check) running svm-predict using a model trained on a scaled data set against an unscaled data set results in accuracy = 80% (i.e., it always guesses the dominant class). This is strictly a sanity check to make sure that somehow svm-predict is nominally acting right on my machine.
Tentative conclusion?:
Something about the data is whacked: somehow, within the data set, there is a subtle, experimenter-driven effect that the SVM is picking up on.
(This doesn't, on first pass, explain why the RBF kernel gives garbage results, however.)
Would greatly appreciate any suggestions on a) how to fix my usage of LibSVM (if that is actually the problem) or b) determine what subtle experimenter-bias in the data LibSVM is picking up on.
Two other ideas:
Make sure you're not training and testing on the same data. This sounds kind of dumb, but in computer vision applications you should take care: make sure you're not repeating data (say, two frames of the same video falling into different folds), that you're not training and testing on the same individual, etc. It is more subtle than it sounds.
Make sure you search for gamma and C parameters for the RBF kernel. There are good theoretical (asymptotic) results that justify that a linear classifier is just a degenerate RBF classifier. So you should just look for a good (C, gamma) pair.
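A sketch of that (C, gamma) search using scikit-learn, whose SVC wraps LibSVM (the question uses the LibSVM command-line tools, where tools/grid.py plays the same role; the placeholder data below is mine):

from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# placeholder data standing in for the ~2500 x ~900 (scaled) feature matrix
X, y = make_classification(n_samples=500, n_features=100, random_state=0)

param_grid = {"C": [0.01, 0.1, 1, 10, 100],
              "gamma": [1e-4, 1e-3, 1e-2, 1e-1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)   # cross-validated grid search
search.fit(X, y)
print(search.best_params_, search.best_score_)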
Notwithstanding that the devil is in the details, here are three simple tests you could try:
Quickie (~2 minutes): Run the data through a decision tree algorithm. This is available in Matlab via classregtree, or you can load into R and use rpart. This could tell you if one or just a few features happen to give a perfect separation.
Not-so-quickie (~10-60 minutes, depending on your infrastructure): Iteratively split the features (i.e. from 900 to 2 sets of 450), train, and test. If one of the subsets gives you perfect classification, split it again. It would take fewer than 10 such splits to find out where the problem variables are. If it happens to "break" with many variables remaining (or even in the first split), select a different random subset of features, shave off fewer variables at a time, etc. It can't possibly need all 900 to split the data.
Deeper analysis (minutes to several hours): try permutations of labels. If you can permute all of them and still get perfect separation, you have some problem in your train/test setup. If you select increasingly larger subsets to permute (or, if going in the other direction, to leave static), you can see where you begin to lose separability. Alternatively, consider decreasing your training set size and if you get separability even with a very small training set, then something is weird.
Method #1 is fast and should be insightful; a quick sketch of it is given below. There are some other methods I could recommend, but #1 and #2 are easy, and it would be odd if they don't give any insights.
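Method #1 as a sketch in Python, for anyone without Matlab or R at hand (scikit-learn; X and y below are generated placeholders for the ~2500 x ~900 data):

from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# placeholder data; substitute the real feature matrix and labels
X, y = make_classification(n_samples=2500, n_features=900,
                           n_informative=10, random_state=0)

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(tree.score(X, y))      # near-perfect already? suspicious
print(export_text(tree))     # shows which one or two features carry the separation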
