I'm using Gatling to load test a GET endpoint.
.../catByName?name=XYZ
In order to feed the load test with cat names, I have a CSV with the names.
Currently, there are four options to read the names from the file: Queue, Random, Shuffle, and Circular.
I'd like to better simulate real-life scenarios that follow a long-tail/Pareto distribution.
That is, 20% of the names should be used in 80% of the requests, and the other 80% of the names in the remaining 20% of the requests.
Is there a way to do it?
Have two different feeders, one four times larger than the other, and a randomSwitch with an 80/20 ratio.
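Roughly, with the Gatling Java DSL it could look like the sketch below. This assumes the names CSV has been pre-split into two hypothetical files, hot_names.csv (the popular 20% of names) and cold_names.csv (the remaining 80%); the base URL, the injection profile, and the exact randomSwitch builder syntax (which has changed between Gatling versions) are placeholders/assumptions:

```java
import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

import io.gatling.javaapi.core.*;
import io.gatling.javaapi.http.*;

public class CatByNameSimulation extends Simulation {

  // Hypothetical feeder files: hot_names.csv holds the "hot" 20% of names,
  // cold_names.csv the remaining 80%. Both have a "name" column.
  FeederBuilder<String> hotNames = csv("hot_names.csv").circular();
  FeederBuilder<String> coldNames = csv("cold_names.csv").circular();

  HttpProtocolBuilder httpProtocol = http.baseUrl("http://localhost:8080"); // placeholder

  ScenarioBuilder scn = scenario("catByName long tail")
      .randomSwitch().on(
          // 80% of requests draw from the small "hot" feeder...
          percent(80.0).then(
              feed(hotNames)
                  .exec(http("catByName (hot)").get("/catByName").queryParam("name", "#{name}"))),
          // ...20% of requests draw from the large "cold" feeder.
          percent(20.0).then(
              feed(coldNames)
                  .exec(http("catByName (cold)").get("/catByName").queryParam("name", "#{name}"))));

  {
    // Arbitrary injection profile, just for illustration.
    setUp(scn.injectOpen(constantUsersPerSec(20).during(60))).protocols(httpProtocol);
  }
}
```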
I'm wondering how I can retrieve every other document in a Firestore collection. I have a collection of documents that include a date field. I'd like to sort them by date and then retrieve 1 document from every X sized block in the sorted collection. I'm adding a new document about every 10 seconds and I'm trying to display historical data on the front end without having to download so many records.
Sure can, just need to plan for it ahead of time.
Random Sampling
Let's call this 'random sampling'; you'll need to determine your sample rate when you write the document. Let's assume you want to sample approximately 1 of every 10 documents (but not strictly 1 in every 10).
When you write a document, add a field called sample-10 and set it to random(1,10). At query time, add .where("sample-10", "==", random(1,10)) to your query.
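As a rough sketch with the Firestore Java client (the collection name readings and the field names are just placeholders, and combining the equality filter with an orderBy on another field will likely require a composite index):

```java
import com.google.cloud.Timestamp;
import com.google.cloud.firestore.Firestore;
import com.google.cloud.firestore.FirestoreOptions;
import com.google.cloud.firestore.QuerySnapshot;

import java.util.HashMap;
import java.util.Map;
import java.util.Random;

public class SampledReadings {

  private static final Random RNG = new Random();

  // Write path: tag each document with a random bucket 1..10.
  static void writeReading(Firestore db, double value) throws Exception {
    Map<String, Object> doc = new HashMap<>();
    doc.put("value", value);
    doc.put("date", Timestamp.now());
    doc.put("sample-10", RNG.nextInt(10) + 1); // random(1,10)
    db.collection("readings").document().set(doc).get(); // wait for the write in this sketch
  }

  // Read path: pick one bucket at random; this returns roughly 1 in 10 documents.
  static QuerySnapshot readSample(Firestore db) throws Exception {
    int bucket = RNG.nextInt(10) + 1;
    return db.collection("readings")
        .whereEqualTo("sample-10", bucket)
        .orderBy("date")   // needs a composite index on (sample-10, date)
        .get()
        .get();            // ApiFuture -> QuerySnapshot
  }

  public static void main(String[] args) throws Exception {
    Firestore db = FirestoreOptions.getDefaultInstance().getService();
    writeReading(db, 21.5);
    System.out.println("Sampled documents: " + readSample(db).size());
  }
}
```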
Non-Random Sampling
This is harder when the source of your writes is distributed (e.g. many mobile devices), so I won't talk about it here.
If writes come from a single source (for example, you might be graphing sensor data from a single sensor), it's easier: just increment a counter and write its value modulo 10 into sample-10.
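For that single-writer case, a tiny hedged variant of the write path: keep a local counter in the writer process (hypothetical helper, same placeholder field name):

```java
import java.util.concurrent.atomic.AtomicLong;

// Single-writer, deterministic sampling: cycle sample-10 through 1..10
// so exactly every 10th document lands in any given bucket.
public class CounterSampler {
  private final AtomicLong counter = new AtomicLong();

  int nextBucket() {
    return (int) (counter.getAndIncrement() % 10) + 1; // 1, 2, ..., 10, 1, 2, ...
  }
}
```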
Other Sample Rates
You'll need a separate sample-n field for each different sample rate n you want to support.
I am using a recurrent neural network with LSTM units for time series prediction. The inputs are sequences, with the output being the next datum after the input sequence. I have hundreds of inputs, one hidden layer of equal size, and a single output in the output layer. However much I train, the predicted result is always much higher than the actual value (with other activation functions too), shown by green and blue respectively in my plot. What is the solution?
It seems that LSTM is not suited for this kind of pattern. Softmax works well.
What's the best method:
splitting my data into training and testing sets by making 70% of the data as training and 30% test, or
using similar data for the training and testing sets.
A- Is the second method correct, and what are its disadvantages?
B- My dataset contains 3 attributes and 1000 objects; is this good for selecting the training and testing sets from it?
The second method is wrong (at least if by 'similar' you mean 'same').
You shouldn't use the test set for training.
If you use just one data set, you could achieve perfect accuracy by simply learning this set (with the risk of overfitting).
Generally, this isn't what you want because the algorithm should learn the general concept behind the examples. A way of testing whether this happens is to use separate datasets for training and testing.
The test set gives you an estimate of the performance of your model in the "real world" because it's independent (during the training/validation phase you don't make any choices based on the test data).
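To make the first option concrete, here is a hedged sketch with Weka's Java API (Weka comes up in other answers below); the dataset path, the J48 classifier, and the 70/30 ratio are just placeholders:

```java
import java.util.Random;

import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class HoldoutSplit {
  public static void main(String[] args) throws Exception {
    // Load the full dataset (placeholder path) and set the class attribute.
    Instances data = DataSource.read("dataset.arff");
    data.setClassIndex(data.numAttributes() - 1);

    // Shuffle, then split 70% train / 30% test on *instances*, keeping all attributes.
    data.randomize(new Random(42));
    int trainSize = (int) Math.round(data.numInstances() * 0.7);
    Instances train = new Instances(data, 0, trainSize);
    Instances test = new Instances(data, trainSize, data.numInstances() - trainSize);

    // Train on the training set only...
    J48 classifier = new J48(); // any classifier works here
    classifier.buildClassifier(train);

    // ...and estimate real-world performance on the held-out test set.
    Evaluation eval = new Evaluation(train);
    eval.evaluateModel(classifier, test);
    System.out.println(eval.toSummaryString());
  }
}
```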
The second option is wrong; the first option is the best.
Using the LingPipe classifier we can train and test news data. But if you provide the same data used in training for testing purposes, it will of course show accurate output. What we want is to predict the output for unknown cases; that's how we test accuracy.
So what you have to do is:
1) Train your data
2) Build a model
3) Apply test data to the model to get output for unknown sets/cases too
Building a model is nothing but writing the trained object to a file, so each time you run the program you feed the data to that model instead of training each time. This saves time. I hope my answer helps. Best regards.
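The answer above is about LingPipe, but the same train-once / save / reuse workflow can be sketched with Weka's SerializationHelper (the classifier choice and the file names are placeholders):

```java
import weka.classifiers.Classifier;
import weka.classifiers.bayes.NaiveBayes;
import weka.core.Instances;
import weka.core.SerializationHelper;
import weka.core.converters.ConverterUtils.DataSource;

public class TrainOnceReuse {
  public static void main(String[] args) throws Exception {
    // 1) Train on the training data.
    Instances train = DataSource.read("train.arff");
    train.setClassIndex(train.numAttributes() - 1);
    Classifier model = new NaiveBayes();
    model.buildClassifier(train);

    // 2) Build (persist) the model: write the trained object to a file.
    SerializationHelper.write("news.model", model);

    // 3) Later runs: load the saved model and classify unseen cases without retraining.
    Classifier reloaded = (Classifier) SerializationHelper.read("news.model");
    Instances unseen = DataSource.read("test.arff");
    unseen.setClassIndex(unseen.numAttributes() - 1);
    for (int i = 0; i < unseen.numInstances(); i++) {
      double predicted = reloaded.classifyInstance(unseen.instance(i));
      System.out.println("instance " + i + " -> " + unseen.classAttribute().value((int) predicted));
    }
  }
}
```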
You can create train and test sets from a dataset on the command line:
java -cp weka.jar weka.filters.unsupervised.instance.RemovePercentage -P 30 -i dataset.arff -o train.arff
java -cp weka.jar weka.filters.unsupervised.instance.RemovePercentage -P 30 -V -i dataset.arff -o test.arff
(the -V flag inverts the selection, so the second command outputs exactly the 30% of instances that the first command removed, keeping train and test disjoint)
And A): unless "all" possible future data combinations already exist in your dataset, using the same data for training and testing is a bad solution. It does not assess how well your model handles new, different cases, and it cannot tell you whether you are overfitting (fitting your current data without learning reusable logic). Why not use cross-validation? It is very effective if you want to use a single dataset: it automatically splits the data into parts, tests each part against a model trained on the rest of the data, and then computes the average result.
B) If you mean 3 attributes and 1000 instances, that could be OK, as long as you don't have too many different outputs (classes) to predict and the instances cover the use cases well.
FYI: if you want to test your data on many different classifiers to find the best one, use the Weka Experimenter.
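A short hedged sketch of the cross-validation suggested in A), again with Weka's Java API (10 folds and J48 are arbitrary choices, and the path is a placeholder):

```java
import java.util.Random;

import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class CrossValidate {
  public static void main(String[] args) throws Exception {
    Instances data = DataSource.read("dataset.arff"); // placeholder path
    data.setClassIndex(data.numAttributes() - 1);

    // 10-fold cross-validation: Weka splits the data into 10 parts,
    // tests each part against a model trained on the other 9,
    // and averages the results.
    Evaluation eval = new Evaluation(data);
    eval.crossValidateModel(new J48(), data, 10, new Random(1));
    System.out.println(eval.toSummaryString("=== 10-fold cross-validation ===", false));
  }
}
```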
One approach to split the data into two disjoint sets, one for training and one for tests is taking the first 80% as the training set and the rest as the test set. Is there another approach to split the data into training and test sets?
For example, I have a dataset containing 20 attributes and 5000 objects. I will take 12 attributes and 1000 objects as my training data, and 3 of those 12 attributes as the test set. Is this method correct?
No, that's invalid. You would always use all features in all data sets. You split by "objects" (examples).
It's not clear why you are taking just 1000 objects and trying to extract a training set from that. What happened to the other 4000 you threw away?
Train on 4000 objects / 20 features. Cross-validate on 500 objects / 20 features. Evaluate performance on the remaining 500 objects / 20 features.
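In code, that split-by-instances idea (every subset keeps all 20 features) could look like this hedged Weka sketch; the 4000/500/500 counts follow the answer above and the file path is a placeholder:

```java
import java.util.Random;

import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class ThreeWaySplit {
  public static void main(String[] args) throws Exception {
    Instances data = DataSource.read("dataset.arff"); // 5000 instances, 20 attributes (assumed)
    data.setClassIndex(data.numAttributes() - 1);
    data.randomize(new Random(7));

    // Split by instances only; every subset keeps all 20 attributes.
    Instances train      = new Instances(data, 0, 4000);
    Instances validation = new Instances(data, 4000, 500);
    Instances test       = new Instances(data, 4500, 500);

    System.out.printf("train=%d, validation=%d, test=%d, attributes=%d%n",
        train.numInstances(), validation.numInstances(),
        test.numInstances(), train.numAttributes());
  }
}
```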
If your training produces a classifier based on 12 features, it could be (very) hard to evaluate its performances on a test set based only on a subset of these features (your classifier is expecting 12 inputs and you'll give only 3).
Feature/attribute selection/extraction is important if your data contains many redundant or irrelevant features. So you could identify and use only the most informative features (maybe 12 features), but your training/validation/test sets should be based on the same set of features (e.g., since you're mentioning Weka, see the FAQ entry "Why do I get the error message 'training and test set are not compatible'?").
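One hedged way to keep the attribute sets compatible in Weka is to let a meta-classifier do the selection internally, so the train and test files themselves keep the full attribute set (the evaluator, search method, and paths below are arbitrary choices):

```java
import weka.attributeSelection.CfsSubsetEval;
import weka.attributeSelection.GreedyStepwise;
import weka.classifiers.Evaluation;
import weka.classifiers.meta.AttributeSelectedClassifier;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class ConsistentFeatureSelection {
  public static void main(String[] args) throws Exception {
    Instances train = DataSource.read("train.arff");
    Instances test = DataSource.read("test.arff"); // same attribute set as train (assumed)
    train.setClassIndex(train.numAttributes() - 1);
    test.setClassIndex(test.numAttributes() - 1);

    // The meta-classifier selects attributes on the training data only and
    // applies the *same* selection to anything it later classifies.
    AttributeSelectedClassifier asc = new AttributeSelectedClassifier();
    asc.setEvaluator(new CfsSubsetEval());
    asc.setSearch(new GreedyStepwise());
    asc.setClassifier(new J48());
    asc.buildClassifier(train);

    Evaluation eval = new Evaluation(train);
    eval.evaluateModel(asc, test);
    System.out.println(eval.toSummaryString());
  }
}
```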
Remaining on a training/validation/test split (holdout method), a problem you can face is that the samples might not be representative.
For example, some classes might be represented with very few instances, or even with no instances at all.
A possible improvement is stratification: sampling for training and testing within classes. This ensures that each class is represented with approximately equal proportions in both subsets.
However, by partitioning the available data into fixed training/test sets, you drastically reduce the number of samples which can be used for learning the model. An alternative is cross-validation.
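A hedged sketch of stratification with Weka's Instances API (the class attribute must be nominal; 10 folds and the path are arbitrary):

```java
import java.util.Random;

import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class StratifiedHoldout {
  public static void main(String[] args) throws Exception {
    Instances data = DataSource.read("dataset.arff"); // placeholder path
    data.setClassIndex(data.numAttributes() - 1);     // class must be nominal for stratification

    data.randomize(new Random(1));
    data.stratify(10); // reorder so every block of folds mirrors the class distribution

    // Use fold 0 as a stratified test set (~10%) and the rest as training (~90%);
    // both sets now contain each class in roughly the same proportion.
    Instances train = data.trainCV(10, 0);
    Instances test  = data.testCV(10, 0);

    System.out.println("train=" + train.numInstances() + ", test=" + test.numInstances());
  }
}
```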
I'm getting 100% test accuracy with LibSVM. Nominally a good problem to have, but I'm pretty sure it is because something funny is going on...
As context, I'm working on a problem in the facial expression/recognition space, so getting 100% accuracy seems incredibly implausible (not that it would be plausible in most applications...). I'm guessing there is either some consistent bias in the data set that is making it overly easy for an SVM to pull out the answer, or, more likely, I've done something wrong on the SVM side.
I'm looking for suggestions to help understand what is going on--is it me (=my usage of LibSVM)? Or is it the data?
The details:
About ~2500 labeled data vectors/instances (transformed video frames of individuals--<20 individual persons total), binary classification problem. ~900 features/instance. Unbalanced data set at about a 1:4 ratio.
Ran subset.py to separate the data into test (500 instances) and train (remaining).
Ran "svm-train -t 0 ". (Note: apparently no need for '-w1 1 -w-1 4'...)
Ran svm-predict on the test file. Accuracy=100%!
Things tried:
Checked about 10 times over that I'm not training & testing on the same data files, through some inadvertent command-line argument error
re-ran subset.py (even with -s 1) multiple times and did train/test on multiple different data sets (in case I had randomly hit upon the most magical train/test partition)
ran a simple diff-like check to confirm that the test file is not a subset of the training data
svm-scale on the data has no effect on accuracy (accuracy=100%). (Although the number of support vectors does drop from nSV=127, nBSV=64 to nSV=72, nBSV=0.)
((weird)) using the default RBF kernel (vice linear -- i.e., removing '-t 0') results in accuracy going to garbage(?!)
(sanity check) running svm-predict using a model trained on a scaled data set against an unscaled data set results in accuracy = 80% (i.e., it always guesses the dominant class). This is strictly a sanity check to make sure that somehow svm-predict is nominally acting right on my machine.
Tentative conclusion?:
Something with the data is wacky--somehow, within the data set, there is a subtle, experimenter-driven effect that the SVM is picking up on.
(This doesn't, on first pass, explain why the RBF kernel gives garbage results, however.)
Would greatly appreciate any suggestions on a) how to fix my usage of LibSVM (if that is actually the problem) or b) determine what subtle experimenter-bias in the data LibSVM is picking up on.
Two other ideas:
Make sure you're not training and testing on the same data. This sounds kind of dumb, but in computer vision applications you should take care: make sure you're not repeating data (say, two frames of the same video falling into different folds), that you're not training and testing on the same individual, etc. It is more subtle than it sounds.
Make sure you search for gamma and C parameters for the RBF kernel. There are good theoretical (asymptotic) results that justify that a linear classifier is just a degenerate RBF classifier. So you should just look for a good (C, gamma) pair.
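LibSVM ships a grid.py script in its tools directory for exactly this; if you'd rather stay in Java, here is a hedged sketch of the same coarse (C, gamma) grid search using LibSVM's Java bindings and their built-in cross-validation (the input path and grid ranges are placeholders):

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.util.ArrayList;
import java.util.List;

import libsvm.svm;
import libsvm.svm_node;
import libsvm.svm_parameter;
import libsvm.svm_problem;

public class GridSearch {

  // Minimal parser for libsvm-format lines: "label index:value index:value ...".
  static svm_problem load(String path) throws Exception {
    List<Double> labels = new ArrayList<>();
    List<svm_node[]> rows = new ArrayList<>();
    try (BufferedReader in = new BufferedReader(new FileReader(path))) {
      String line;
      while ((line = in.readLine()) != null) {
        if (line.trim().isEmpty()) continue;
        String[] tok = line.trim().split("\\s+");
        labels.add(Double.parseDouble(tok[0]));
        svm_node[] row = new svm_node[tok.length - 1];
        for (int i = 1; i < tok.length; i++) {
          String[] kv = tok[i].split(":");
          row[i - 1] = new svm_node();
          row[i - 1].index = Integer.parseInt(kv[0]);
          row[i - 1].value = Double.parseDouble(kv[1]);
        }
        rows.add(row);
      }
    }
    svm_problem prob = new svm_problem();
    prob.l = rows.size();
    prob.x = rows.toArray(new svm_node[0][]);
    prob.y = new double[prob.l];
    for (int i = 0; i < prob.l; i++) prob.y[i] = labels.get(i);
    return prob;
  }

  public static void main(String[] args) throws Exception {
    svm_problem prob = load("train.scaled"); // placeholder: your scaled training file

    svm_parameter param = new svm_parameter();
    param.svm_type = svm_parameter.C_SVC;
    param.kernel_type = svm_parameter.RBF;
    param.cache_size = 200;
    param.eps = 1e-3;

    // Coarse log2 grid over C and gamma, scored by 5-fold cross-validation accuracy.
    for (int log2c = -5; log2c <= 15; log2c += 2) {
      for (int log2g = -15; log2g <= 3; log2g += 2) {
        param.C = Math.pow(2, log2c);
        param.gamma = Math.pow(2, log2g);
        double[] predicted = new double[prob.l];
        svm.svm_cross_validation(prob, param, 5, predicted);
        int correct = 0;
        for (int i = 0; i < prob.l; i++) {
          if (predicted[i] == prob.y[i]) correct++;
        }
        System.out.printf("log2C=%d log2gamma=%d cv-accuracy=%.2f%%%n",
            log2c, log2g, 100.0 * correct / prob.l);
      }
    }
  }
}
```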
Notwithstanding that the devil is in the details, here are three simple tests you could try:
Quickie (~2 minutes): Run the data through a decision tree algorithm. This is available in Matlab via classregtree, or you can load into R and use rpart. This could tell you if one or just a few features happen to give a perfect separation.
Not-so-quickie (~10-60 minutes, depending on your infrastructure): Iteratively split the features (i.e. from 900 to 2 sets of 450), train, and test. If one of the subsets gives you perfect classification, split it again. It would take fewer than 10 such splits to find out where the problem variables are. If it happens to "break" with many variables remaining (or even in the first split), select a different random subset of features, shave off fewer variables at a time, etc. It can't possibly need all 900 to split the data.
Deeper analysis (minutes to several hours): try permutations of labels. If you can permute all of them and still get perfect separation, you have some problem in your train/test setup. If you select increasingly larger subsets to permute (or, if going in the other direction, to leave static), you can see where you begin to lose separability. Alternatively, consider decreasing your training set size and if you get separability even with a very small training set, then something is weird.
Method #1 is fast & should be insightful. There are some other methods I could recommend, but #1 and #2 are easy and it would be odd if they don't give any insights.
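For the quickie in #1, if you'd rather stay in Java than Matlab/R, a Weka J48 tree (a hedged stand-in for classregtree/rpart) will show you whether one or two features separate the classes on their own; the ARFF path is a placeholder, assuming the libsvm-format data has been converted first:

```java
import java.util.Random;

import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class LeakageCheck {
  public static void main(String[] args) throws Exception {
    Instances data = DataSource.read("faces.arff"); // placeholder: your ~2500 x ~900 data
    data.setClassIndex(data.numAttributes() - 1);

    // Fit a decision tree and inspect it: if the printed tree is tiny and
    // cross-validated accuracy is ~100%, one or two features are leaking the label.
    J48 tree = new J48();
    tree.buildClassifier(data);
    System.out.println(tree); // prints the tree; look at which attributes it splits on

    Evaluation eval = new Evaluation(data);
    eval.crossValidateModel(new J48(), data, 10, new Random(1));
    System.out.println(eval.toSummaryString());
  }
}
```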