I have a question about performance with Flink.
Can someone tell me what is wrong if my program has an execution plan like the picture below?
Thank you.
[execution plan screenshot]
From your description I cannot immediately see why you need more than one hash per source. Any kind of network shuffle limits throughput, so avoiding all unnecessary shuffles seems like the best solution in your case.
The final picture should look like
Source 1 --\
            \
Source 2 ----\
              +---> Map ---> Sink
    ...      /
            /
Source N --/
Such that each input record is rekeyed only once.
Aside from these general considerations, I'd need many more details and your CEP pseudocode to give more specific recommendations.
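As a rough sketch of that "union first, key once" shape (here in PyFlink with made-up collection sources and a placeholder key selector, since I don't know your actual sources or CEP logic):

    # Hedged sketch: union all sources first, then key/shuffle exactly once.
    from pyflink.datastream import StreamExecutionEnvironment

    env = StreamExecutionEnvironment.get_execution_environment()

    # Placeholder sources; in a real job these would be Kafka/socket/etc. connectors.
    source1 = env.from_collection([("deviceA", 1), ("deviceB", 2)])
    source2 = env.from_collection([("deviceA", 3), ("deviceC", 4)])

    merged = source1.union(source2)          # no network shuffle yet
    keyed = merged.key_by(lambda r: r[0])    # the single rekey/hash step
    keyed.map(lambda r: (r[0], r[1] * 10)).print()

    env.execute("single-shuffle sketch")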
We are using Apache IoTDB Server in version 0.11.2 in a scenario and observe a data directory / tsfiles that are way bigger than they should be (about 130 sensors with 4 million double values for each sensor, but the files are about 200 GB).
Are there known issues, or do you have any ideas what could cause this or how to track it down?
The only thing we could think of is some merge artefacts, as we write many datapoints out of order, so merging has to happen frequently.
Are there any ideas or tools on how to debug / inspect the tsfiles to get an idea of what's happening here?
Any help or hints are appreciated!
This may be due to the compaction strategy.
You could fix this in either of two ways (no need to do both):
(1) upgrade to the 0.12.2 version
(2) enable this setting in iotdb-engine.properties: force_full_merge=true
The reason is:
The unsequenced data compaction in the 0.11.2 version has two strategies.
E.g.,
Chunks in a sequence TsFile: [1,3], [4,5]
Chunks in a unsequence TsFile: [2]
(I use [1,3] to indicate the timestamps of 2 data points in a Chunk)
(1) When using full merge (rewriting all data), we get a tidy sequence file: [1,2,3],[4,5]
(2) However, to speed up the compaction, we use append merge by default, which produces a sequence TsFile: [1,3],[4,5],[1,2,3]. In this TsFile, [1,3] has no metadata at the end of the file; it is garbage data.
So, if you have lots of out-of-order data merged frequently, this will happen (you get a very big TsFile).
The big TsFile will become tidy after a new compaction.
You could also use TsFileSketchTool or example/tsfile/TsFileSequenceRead to see the structure of the TsFile.
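To make the size effect concrete, here is a tiny illustrative simulation in Python (this is not IoTDB code; chunks are modeled as plain lists of timestamps, following the [1,3] / [4,5] / [2] example above):

    # Illustration only: why append merge grows the file while full merge stays tidy.

    def full_merge(seq_chunks, unseq_points):
        # Rewrite all data into tidy, non-overlapping chunks (3 points per chunk here).
        points = sorted({t for chunk in seq_chunks for t in chunk} | set(unseq_points))
        return [points[i:i + 3] for i in range(0, len(points), 3)]

    def append_merge(seq_chunks, unseq_points):
        # Rewrite only the overlapping chunk and append it; the old chunk stays
        # in the file without metadata, i.e. as garbage data.
        overlapping = seq_chunks[0]                                # [1, 3]
        rewritten = sorted(set(overlapping) | set(unseq_points))   # [1, 2, 3]
        return seq_chunks + [rewritten]

    seq = [[1, 3], [4, 5]]
    unseq = [2]
    print(full_merge(seq, unseq))    # [[1, 2, 3], [4, 5]]
    print(append_merge(seq, unseq))  # [[1, 3], [4, 5], [1, 2, 3]] -- file keeps growing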
I am automating a process which asks questions (via SMS, but that shouldn't matter) of real people. The questions have yes/no answers, but the person might respond in a number of ways, such as: sure, not at this time, yeah, never, or in any other way they might choose. I would like to attempt to parse this text and determine whether it was a yes or no answer (of course it might not always be right).
I figured the ideas and concepts to do this might already exist, as it seems like a common task for an AI, but I don't know what it might be called, so I can't find information on how I might implement it. So my question is: have algorithms been developed to do this kind of parsing, and if so, where can I find more information on how to implement them?
This can be viewed as a binary (yes or no) classification task. You could classify with either a rule-based model or a statistics-based model.
A rule-based model would be something like: if answer in ["never", "not at this time", "nope"] then the answer is "no". When spam filters first came out they contained a lot of rules like these.
A statistics-based model would probably be more suitable here, as writing your own rules gets tiresome and does not handle new cases as well.
For this you need to label a training dataset. After a little preprocessing (like lowercasing all the words, removing punctuation and maybe even a little stemming) you could get a dataset like:
0 | never in a million years
0 | never
1 | yes sir
1 | yep
1 | yes yes yeah
0 | no way
Now you can run classification algorithms like Naive Bayes or Logistic Regression over this set and learn which words more often belong to which class. First, vectorize the words: either as binary features (is the word present or not), as word counts (term frequency), or as tf-idf floats (which prevent bias toward longer answers and common words).
In the above example, yes would be strongly correlated with a positive answer (1) and never would be strongly correlated with a negative answer (0). You could work with n-grams, so that not no would be treated as a single token counting in favor of the positive class. This is called the bag-of-words approach.
To combat spelling errors you can add a spellchecker like Aspell to the preprocessing step. You could use a character vectorizer too, so a word like nno would be interpreted as nn and no, you would catch errors like hellyes, and you could trust your users to repeat spelling errors: if 5 users make the spelling error neve for the word never, then the token neve will automatically start to count for the negative class (if labeled as such).
You could write these algorithms yourself (Naive Bayes is doable; Paul Graham has written a few accessible essays on how to classify spam with Bayes' Theorem, and nearly every ML library has a tutorial on how to do this) or make use of libraries or programs like Scikit-Learn (MultinomialNB, SGDClassifier, LinearSVC, etc.) or Vowpal Wabbit (logistic regression, quantile loss, etc.).
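For instance, a minimal scikit-learn sketch of the statistics-based approach, trained on the toy dataset from above (in practice you would need far more labeled answers, and the exact preprocessing choices here are assumptions):

    # Minimal sketch: bag-of-words + Naive Bayes on the toy yes/no dataset above.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    texts = [
        "never in a million years", "never", "yes sir",
        "yep", "yes yes yeah", "no way",
    ]
    labels = [0, 0, 1, 1, 1, 0]  # 1 = yes, 0 = no

    # CountVectorizer lowercases and strips punctuation by default;
    # ngram_range=(1, 2) also captures bigrams such as "no way".
    model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
    model.fit(texts, labels)
    print(model.predict(["yeah sure", "not at this time"]))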
Thinking off the top of my head: if you get a response that you can't tell is yes or no, you could keep it in a DB table like unknown_answers, with two more tables for affirmative_answers / negative_answers. Then, in a little backend system, every time you get a new unknown_answer you classify it as yes or no; the system "learns" from this, and with time you will have a very big and good database of affirmative / negative answers.
Nominally a good problem to have, but I'm pretty sure it is because something funny is going on...
As context, I'm working on a problem in the facial expression/recognition space, so getting 100% accuracy seems incredibly implausible (not that it would be plausible in most applications...). I'm guessing there is either some consistent bias in the data set that is making it overly easy for an SVM to pull out the answer, or, more likely, I've done something wrong on the SVM side.
I'm looking for suggestions to help understand what is going on--is it me (=my usage of LibSVM)? Or is it the data?
The details:
About ~2500 labeled data vectors/instances (transformed video frames of individuals--<20 individual persons total), binary classification problem. ~900 features/instance. Unbalanced data set at about a 1:4 ratio.
Ran subset.py to separate the data into test (500 instances) and train (remaining).
Ran "svm-train -t 0 ". (Note: apparently no need for '-w1 1 -w-1 4'...)
Ran svm-predict on the test file. Accuracy=100%!
Things tried:
Checked about 10 times over that I'm not training & testing on the same data files, through some inadvertent command-line argument error
re-ran subset.py (even with -s 1) multiple times and did train/test on multiple different data sets (in case I randomly hit upon the most magical train/test pair)
ran a simple diff-like check to confirm that the test file is not a subset of the training data
svm-scale on the data has no effect on accuracy (accuracy=100%). (Although the number of support vectors does drop from nSV=127, nBSV=64 to nSV=72, nBSV=0.)
((weird)) using the default RBF kernel (instead of linear -- i.e., removing '-t 0') results in accuracy going to garbage (?!)
(sanity check) running svm-predict using a model trained on a scaled data set against an unscaled data set results in accuracy = 80% (i.e., it always guesses the dominant class). This is strictly a sanity check to make sure that somehow svm-predict is nominally acting right on my machine.
Tentative conclusion?:
Something with the data is wacked--somehow, within the data set, there is a subtle, experimenter-driven effect that the SVM is picking up on.
(This doesn't, on first pass, explain why the RBF kernel gives garbage results, however.)
Would greatly appreciate any suggestions on a) how to fix my usage of LibSVM (if that is actually the problem) or b) determine what subtle experimenter-bias in the data LibSVM is picking up on.
Two other ideas:
Make sure you're not training and testing on the same data. This sounds kind of dumb, but in computer vision applications you should take care: make sure you're not repeating data (say, two frames of the same video falling into different folds), that you're not training and testing on the same individual, etc. It is more subtle than it sounds.
Make sure you search over the gamma and C parameters for the RBF kernel. There are good theoretical (asymptotic) results showing that a linear classifier is just a degenerate RBF classifier, so you should just look for a good (C, gamma) pair (a search sketch follows below).
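A hedged sketch of that (C, gamma) grid search, here with scikit-learn's SVC rather than the LibSVM command-line tools (LibSVM also ships tools/grid.py for the same job); the X and y below are dummy stand-ins for your ~2500 x 900 data:

    # Sketch: cross-validated grid search over (C, gamma) for an RBF SVM.
    # X and y are placeholders; substitute your real feature matrix and labels.
    import numpy as np
    from sklearn.model_selection import GridSearchCV, StratifiedKFold
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 50))        # dummy data so the sketch runs
    y = rng.integers(0, 2, size=200)

    param_grid = {
        "svc__C": [2.0 ** k for k in range(-5, 16, 2)],
        "svc__gamma": [2.0 ** k for k in range(-15, 4, 2)],
    }
    pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf", class_weight="balanced"))
    search = GridSearchCV(pipe, param_grid, cv=StratifiedKFold(5), n_jobs=-1)
    search.fit(X, y)
    print(search.best_params_, search.best_score_)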
Notwithstanding that the devil is in the details, here are three simple tests you could try:
Quickie (~2 minutes): Run the data through a decision tree algorithm. This is available in Matlab via classregtree, or you can load into R and use rpart. This could tell you if one or just a few features happen to give a perfect separation.
Not-so-quickie (~10-60 minutes, depending on your infrastructure): Iteratively split the features (i.e. from 900 to 2 sets of 450), train, and test. If one of the subsets gives you perfect classification, split it again. It would take fewer than 10 such splits to find out where the problem variables are. If it happens to "break" with many variables remaining (or even in the first split), select a different random subset of features, shave off fewer variables at a time, etc. It can't possibly need all 900 to split the data.
Deeper analysis (minutes to several hours): try permutations of labels. If you can permute all of them and still get perfect separation, you have some problem in your train/test setup. If you select increasingly larger subsets to permute (or, if going in the other direction, to leave static), you can see where you begin to lose separability. Alternatively, consider decreasing your training set size and if you get separability even with a very small training set, then something is weird.
Method #1 is fast and should be insightful. There are some other methods I could recommend, but #1 and #2 are easy, and it would be odd if they didn't give any insights. (A Python sketch of #1 and #3 follows below.)
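For what it's worth, here is a hedged sketch of test #1 (decision tree) and test #3 (label permutation) using scikit-learn instead of Matlab/R; X and y are again dummy stand-ins for your data:

    # Test #1: if a very shallow tree already scores ~100%, one or two
    # "leaky" features are probably giving the answer away.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 90))      # dummy stand-in for the real features
    y = rng.integers(0, 2, size=300)

    tree = DecisionTreeClassifier(max_depth=2)
    print("tree CV accuracy:", cross_val_score(tree, X, y, cv=5).mean())
    print("most important features:",
          np.argsort(tree.fit(X, y).feature_importances_)[-3:])

    # Test #3: shuffle the labels. If accuracy stays near 100%, the
    # train/test setup itself is leaking information.
    y_perm = rng.permutation(y)
    print("permuted-label CV accuracy:", cross_val_score(tree, X, y_perm, cv=5).mean())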
I'm using MPI to parallelize a program that tries to solve the Metric TSP problem. I have P processors and N cities to pass.
Each thread asks the master for work, receives a chunk -- a range of permutations that it should check -- and calculates the minimum among them. I am optimizing this by pruning bad routes in advance.
There are (N-1)! routes to calculate in total. Each worker gets a chunk with a number representing the first route it has to check and also the last. In addition, the master sends it the most recent best result known, so it can easily prune bad routes in advance with a lower bound on their remainders.
Each time a worker finds a result that is better than the global best, it asynchronously sends it to all the other workers and to the master.
I'm not looking for a better solution -- I'm just trying to determine which chunk size is best.
The best chunk size I've found so far is (n!)/(n/2)!, but it doesn't yield very good results.
Please help me understand which chunk size is best here. I'm trying to balance the amount of computation against the amount of communication.
Thanks
This depends heavily on factors beyond your control: MPI implementation, total load on the machine, etc. However, I'd hazard a guess that it also heavily depends on how many worker processes there are. On that note, understand that MPI spawns processes, not threads.
Ultimately, as is often the case with most optimization questions, the answer is simply "test a lot of different settings and see which one is best". You may want to do this manually, or write a tester app that implements some sort of heuristic (e.g. a genetic algorithm).
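As an illustration of the "test a lot of different settings" advice, here is a rough sweep-harness sketch in Python; the mpiexec command line and the ./tsp binary with a --chunk-size flag are hypothetical placeholders for however your program is actually launched:

    # Sketch: sweep candidate chunk sizes and time a full run for each.
    import math
    import subprocess
    import time

    def run_tsp_job(chunk_size, n_procs=8):
        # Hypothetical command; replace with your real binary and flags.
        subprocess.run(
            ["mpiexec", "-n", str(n_procs), "./tsp", "--chunk-size", str(chunk_size)],
            check=True,
        )

    def sweep(n_cities, n_procs=8):
        total_routes = math.factorial(n_cities - 1)
        # Candidate chunk sizes spanning a few orders of magnitude.
        candidates = [max(1, total_routes // (n_procs * k)) for k in (1, 4, 16, 64, 256)]
        timings = {}
        for chunk in candidates:
            start = time.perf_counter()
            run_tsp_job(chunk, n_procs)
            timings[chunk] = time.perf_counter() - start
        return timings

    if __name__ == "__main__":
        print(sweep(n_cities=12))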
Question: A PID controller has three parameters Kp, Ki and Kd which could affect the output performance. A differential driving robot is controlled by a PID controller. The heading information is sensed by a compass sensor. The moving forward speed is kept constant. The PID controller is able to control the heading information to follow a given direction. Explain the outcome on the differential driving robot performance when the three parameters are increased individually.
This is a question that has come up in a past paper and most likely won't show up this year, but it still worries me. It's the only question that has had me thinking for quite some time. I'd love an answer in simple terms. Most of what I've read on the internet doesn't make much sense to me, as it goes heavily into detail and off topic for my case.
My take on this:
I know that the proportional term, Kp, is based entirely on the error; doubling the error doubles the proportional force applied. This implies that Kp is increased when the robot is heading in the wrong direction, to ensure the robot turns back toward the right direction, or at least reduces the error as time passes. So an increase in Kp would affect the robot by adjusting its heading so it stays on the right path.
The derivative term, Kd, is based on the rate of change of the error, so an increase in Kd implies that the rate of change of the error has increased over time; double the rate of change would result in double the force. If the robot's heading error doubles compared with the previous feedback result, the heading correction doubles as well. Kd causes the robot to react faster as the error grows.
An increase in the integral term, Ki, means that the error has accumulated over time; the integral accounts for the sum of the error over time. Even a small increase in the error would increase the integral, so the robot would have to head in the right direction for an equal amount of time for the integral to balance back to zero.
I would appreciate a much better answer, and it would be great to feel confident about a similar question coming up in the finals.
Side note: I posted this question on the Robotics site earlier, but seeing that questions there are hardly ever noticed, I've also posted it here.
I would highly recommend reading the article PID Without a PhD; it gives a great explanation along with some implementation details. The best part is the numerous graphs: they show you what changing the P, I, or D term does while holding the others constant.
And if you want a real-world application, Atmel provides example code on their site (for an 8-bit MCU) that perfectly mirrors the PID Without a PhD article. It follows the code from AVR's website exactly (they make the ATmega328P microcontroller chip on the Arduino UNO boards): PDF explanation and Atmel code in C.
But here is a general explanation the way I understand it.
Proportional: This is a proportional relationship between the error and the corrective output, something like Kp * (target - actual). It's simply a scaling factor on the error. It decides how quickly the system should react to an error (if it is of any help, you can think of it like an amplifier's slew rate). A large value will try to fix errors quickly, and a small value will take longer. With higher values, though, we get into an overshoot condition, and that's where the next terms come into play.
Integral: This is meant to account for errors in the past. In fact it is the sum of all past errors. This is often useful for things like a small DC/constant offset that a proportional controller can't fix on its own. Imagine you give a step input of 1, and after a while the output settles at 0.9 and it's clear it's not going anywhere. The integral portion will see this error is always ~0.1 too small, so it will add it back in, to hopefully bring the control closer to 1. This term usually helps to stabilize the response curve. Since it is taken over a long period of time, it should reduce noise and any fast changes (like those found in overshoot/ringing conditions). Because it's an aggregate, it is a very sensitive measurement and is usually very small compared with the other terms. A lower value will make changes happen very slowly and create a very smooth response (this can also cause "wind-up"; see the article).
Derivative: This is supposed to account for the "future". It uses the slope of the most recent samples. Remember, this is the slope; it has nothing to do with the position error (current - goal), it is the previously measured position minus the current measured position. This is the most sensitive to noise, and when it is too high it often causes ringing. A higher value encourages change, since we are "amplifying" the slope.
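Putting the three terms together, here is a minimal discrete PID step as a sketch (Python, fixed time step; the gains, the time step, and the heading numbers are made-up placeholders, not tuned values):

    # Minimal discrete PID sketch: error = target heading - measured heading.
    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def step(self, error):
            self.integral += error * self.dt                  # I: sum of past errors
            derivative = (error - self.prev_error) / self.dt  # D: slope of the error
            self.prev_error = error
            return (self.kp * error                           # P: react to current error
                    + self.ki * self.integral
                    + self.kd * derivative)

    # Hypothetical usage: steer a differential drive toward a target bearing.
    pid = PID(kp=2.0, ki=0.1, kd=0.5, dt=0.02)
    target, heading = 90.0, 75.0
    steering = pid.step(target - heading)  # feeds the left/right wheel speed difference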
I hope that helps. Maybe someone else can offer another viewpoint, but that's typically how I think about it.