In permutation testing, do you always compare absolute values?

I want to implement permutation testing. The usual instruction is to calculate the percentage of permuted outcome values that are more extreme than the "true" outcome value. This seems to suggest that you should compare absolute values.
However, I am confused about what to do when the outcome variable can take both positive and negative values and the sign matters. If, say, my true outcome is +1 and most permutation outcomes are negative (say more than 95% of them are around -2 or lower), comparing absolute values alone would lead to the conclusion that the true outcome is not significantly more extreme than the permuted ones. Intuitively, however, the true outcome seems significant.
So the question is whether I should look at absolute values in a case like this. Should I instead check whether the true outcome is greater than 95% of permutation outcomes (if the true outcome is > 0) or less than 95% of them (if it is < 0)?
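To make the options concrete, here is a minimal sketch in C (the function names are illustrative, not from any particular library) of the three p-values being discussed: right-tailed, left-tailed, and a two-sided version based on absolute values. If the permutation distribution is not centered at zero, as in your example, the absolute-value version is the misleading one; a common alternative two-sided p-value is twice the smaller of the two one-sided ones.

#include <math.h>

/* observed: the "true" outcome; perm: nperm permuted outcomes.
   The +1 terms count the observed value itself among the
   permutations, a common convention. */
double p_right(double observed, const double *perm, int nperm) {
    int k = 0;
    for (int i = 0; i < nperm; i++)
        if (perm[i] >= observed) k++;              /* at least as large */
    return (k + 1.0) / (nperm + 1.0);
}

double p_left(double observed, const double *perm, int nperm) {
    int k = 0;
    for (int i = 0; i < nperm; i++)
        if (perm[i] <= observed) k++;              /* at least as small */
    return (k + 1.0) / (nperm + 1.0);
}

double p_two_sided_abs(double observed, const double *perm, int nperm) {
    int k = 0;
    for (int i = 0; i < nperm; i++)
        if (fabs(perm[i]) >= fabs(observed)) k++;  /* as extreme in |.| */
    return (k + 1.0) / (nperm + 1.0);
}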

Related

Simplest way to make a histogram of an unknown, finite list of discrete floating point numbers

I have code that generates a sequence of configurations of some system of interest (Markov chain Monte Carlo). For each configuration, I measure a particular value, which is bounded between zero and some maximum that I can presumably predict beforehand; call it Rmax. The value can only take a finite number of discrete values between 0 and Rmax, but those values may be irrational, are not evenly spaced, and I don't know them a priori, nor necessarily how many there are (though I could probably estimate an upper bound). I want to generate a very large number of configurations (on the order of 1e8) and make a histogram of the distribution of these values; the issue I am facing is how to keep track of them effectively.
For example, if the values were integers in the range [0,N-1], I would just create an integer array of N elements, initially set to zero, and increment the appropriate array element for each configuration, e.g. in pseudocode
do i = 1, 100000000    ! 1e8 configurations
    call generateConfig()
    R = measureR()     ! R is an integer
    Rhist(R) = Rhist(R) + 1
end do
How can I do something similar to count or tally the number of times each of these irrational, non-uniformly distributed numbers occurs?
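One way to do the bookkeeping, sketched here in C under the assumption that two measurements closer than some EPS are really the same discrete value (EPS must be smaller than the true spacing between distinct R values, which only you can judge; MAXDISTINCT and the names are illustrative): keep the distinct values in a sorted array with a parallel array of counts, and binary-search each new measurement.

#include <math.h>
#include <string.h>

#define MAXDISTINCT 100000   /* assumed upper bound on distinct values */
#define EPS 1e-12            /* assumed: smaller than any true spacing */

static double value[MAXDISTINCT];
static long   count[MAXDISTINCT];
static int    ndistinct = 0;

void tally(double r) {
    int lo = 0, hi = ndistinct;
    while (lo < hi) {                    /* first index with value >= r-EPS */
        int mid = (lo + hi) / 2;
        if (value[mid] < r - EPS) lo = mid + 1;
        else                      hi = mid;
    }
    if (lo < ndistinct && fabs(value[lo] - r) <= EPS) {
        count[lo]++;                     /* an already-known value */
        return;
    }
    /* a new distinct value: insert at lo, keeping the array sorted */
    memmove(&value[lo+1], &value[lo], (ndistinct - lo) * sizeof value[0]);
    memmove(&count[lo+1], &count[lo], (ndistinct - lo) * sizeof count[0]);
    value[lo] = r;
    count[lo] = 1;
    ndistinct++;                         /* no overflow check, for brevity */
}

Each measurement then costs O(log k) for the search plus an occasional O(k) insertion; once all k distinct values have been seen, no further insertions happen, so 1e8 calls stay cheap.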

Are floating point numbers changed when sorted in Perl?

I'm running a statistical bootstrap at 10k permutations, which I'm trying to compare against an observed value. The observed is supposed to be identical to the max of the 10k permutations. The way I am measuring this is by attempting to find its percentile.
All results of the 10k permutations (10,000 random numbers) are stored in an array, which I sort using:
my @sorted = sort {$a <=> $b} @permutednumbers;
When I then compare the observed value $truevalue, I'm getting an inaccurate comparison. These are stored as floating point numbers. The bootstrapping procedure uses the same formula for generating the random number so it should be absolutely identical, but when comparing the same value, it becomes inaccurate. I'm testing this with:
if ($sorted[$#sorted] == $truevalue) {
    print "sorted: $sorted[$#sorted] is eq truevalue:$truevalue\n";
} elsif ($sorted[$#sorted] > $truevalue) {
    print "sorted: $sorted[$#sorted] is gt truevalue:$truevalue\n";
} elsif ($sorted[$#sorted] < $truevalue) {
    print "sorted: $sorted[$#sorted] is lt truevalue:$truevalue, totalpermvalues: $totalpermvalues\n";
}
output:
sorted: 0.937864522389543 is gt truevalue:0.937864522389543
So I get that floating point numbers aren't printed with complete accuracy, but I always assumed that internally the computer stores the correct numbers. Is that not a correct assumption? Of course I can fix this quickly by converting them into integers of some sort, but is this something that I should be doing automatically all the time? Are floating point numbers just dangerous to use? Those exact values should be identical given that they are outputs of identical inputs, which is what is confusing me...
If this matters, the values are individually calculated using the linear_interpolate function in Math::Interpolate package, but the inputs are identical.
If I understand correctly, you are wondering why == is returning false and > is returning true for what appear to be identical numbers. Obviously, the numbers are not actually identical. You can see this by printing more digits.
printf "sorted: %.80e is gt truevalue:%.80e\n", $sorted[$#sorted], $truevalue;
No, sort will not change values. One has to assume that there is a difference in the way these two values have been produced.
It is most certainly possible to use == with floating point numbers (FPNs): it returns true if the two 64-bit quantities are identical. But one has to be very careful when asking the question "Are these two FPNs equal?"
A (relatively small but still considerable) set of integers and rational numbers can be represented exactly as FPNs. For these (and only these), a question such as "Is the FPN $a equal to 1.5?" (written as $a == 1.5) may make sense, but only if you are confident about the genesis of the value in $a. Don't take this lightly: will both of the following statements print 1?
print 0.12345678901234567 == 1.2345678901234567E-1,"\n";
print 0.12345678901234567 == 12.345678901234567E-2,"\n";
FPNs are not only representatives of the one value x that they encode exactly. Each is also responsible for an interval of real numbers (rational, irrational, transcendental, and even integer numbers) "a little greater and a little smaller" than x. You can quantify "a little": it is about 1e-16 for x == 1.0, and it shrinks or grows proportionally with x. So, for instance, 1 + 1e-17 will be 1.0 on your computer. You can type in that number, but the FPN will be 1.0 all the same. Asking whether an FPN produced by some computation equals 1 + 1e-17 makes no sense, since you cannot even tell the computer that value.
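To check that claim, a tiny C program (assuming IEEE binary64 doubles):

#include <stdio.h>

int main(void) {
    double x = 1.0 + 1e-17;       /* rounds to exactly 1.0: 1e-17 < 2^-53 */
    printf("%d\n", x == 1.0);     /* prints 1 */
    printf("%.17g\n", x);         /* prints 1 */
    return 0;
}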
The solution isn't difficult. Instead of asking for equality, you have to ask "Is the FPN $a in an interval [p,q] around x?" Determining p and q deserves a little thought, as a suitable choice of these values primarily depends on x. The usual formula is something like
abs( $a - $expect ) <= $expect*PRECISION
where PRECISION could be, for instance, 1e-12. (The value to use here may depend on the algorithm you use for computing $a, or on your needs, or both.)
Finally: due to the mathematical properties of floating-point machine instructions, the usual arithmetic laws of associativity and distributivity are not guaranteed to hold. The effect of truncation in addition or subtraction can, for instance, cause heavy distortion in a result. A typical way to illustrate this is to compute some Taylor series twice: once adding terms in decreasing order until they become smaller than a given limit, and once adding the same terms in increasing order.
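Here is a small C demonstration of the ordering effect, using the series of terms 1/n^2 in single precision (floats are used only so the distortion is visible after few terms; the same happens with doubles over longer sums):

#include <stdio.h>

int main(void) {
    const int N = 100000;
    float big_first = 0.0f, small_first = 0.0f;

    for (int n = 1; n <= N; n++)          /* decreasing term order: once the   */
        big_first += 1.0f / ((float)n * (float)n);   /* terms fall below the   */
                                          /* sum's precision, they vanish      */
    for (int n = N; n >= 1; n--)          /* increasing term order: the small  */
        small_first += 1.0f / ((float)n * (float)n); /* terms accumulate first */

    printf("big terms first:   %.7f\n", big_first);
    printf("small terms first: %.7f\n", small_first);  /* the accurate one */
    return 0;
}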

How to compare double numbers?

I know that when I want to check whether double == double I should write:
bool AreSame(double a, double b)
{
    return fabs(a - b) < EPSILON;
}
But what about when I want to check whether a > b or b > a?
There is no general solution for comparing floating-point numbers that contain errors from previous operations. The code that must be used is application-specific. So, to get a proper answer, you must describe your situation more specifically. For example, if you are sorting numbers in a list or other data structure, you should not use any tolerance for comparison.
Usually, if your program needs to compare two numbers for order but cannot do so because it has only approximations of those numbers, then you should redesign the program rather than try to allow numbers to be ordered incorrectly.
The underlying problem is that performing a correct computation using incorrect data is in general impossible. If you want to compute some function of two exact mathematical values x and y, but the only data you have are incorrectly computed approximations x' and y', it is generally impossible to compute the exactly correct result. For example, suppose you want to know the sum x+y, but you only know x' = 3 and y' = 4, not the true, exact x and y. Then you cannot compute x+y.
If you know that x' and y' are approximately x and y, then you can compute an approximation of x+y by adding x' and y'. This works when the function being computed has a reasonable derivative: slightly changing the inputs of such a function slightly changes its outputs. It fails when the function has a discontinuity or a large derivative. For example, if you want to compute the square root of x (in the real domain) using an approximation x', but x' has become negative due to previous rounding errors, then computing sqrt(x') may produce an exception. Similarly, comparing for inequality or order is a discontinuous function: a slight change in inputs can change the answer completely.
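A concrete C illustration of the square-root case (the particular values here are made up for the demonstration):

#include <math.h>
#include <stdio.h>

int main(void) {
    /* true mathematical value: 0.0, on the edge of sqrt's domain */
    double approx = -1e-16;          /* rounding pushed it just below zero */
    printf("%f\n", sqrt(approx));    /* nan: a tiny input error gives a
                                        completely different answer */
    return 0;
}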
The common bad advice is to compare with a “tolerance”. This method trades false negatives (incorrect rejections of numbers that would satisfy the comparison if the true mathematical values were compared) for false positives (incorrect acceptance of numbers that would not satisfy the comparison).
Whether or not an application can tolerate false acceptance depends on the application. Therefore, there is no general solution.
The level of tolerance to set, and even the nature by which it is calculated, depend on the data, the errors, and the previous calculations. So, even when it is acceptable to compare with a tolerance, the amount of tolerance to use and how to calculate it depends on the application. There is no general solution.
The analogous comparisons are:
a > b - EPSILON
and
b > a - EPSILON
I am assuming that EPSILON is some small positive number.
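Spelled out in the style of the AreSame function above, with EPSILON remaining application-specific per the caveats in this answer:

#include <stdbool.h>

#define EPSILON 1e-9   /* placeholder; must be chosen per application */

/* "a > b", giving b the benefit of the doubt */
bool GreaterWithTolerance(double a, double b) {
    return a > b - EPSILON;
}

The mirror image, b > a - EPSILON, covers the other direction. Note that two values within EPSILON of each other satisfy both, so these are "greater or approximately equal" tests; the strict variant would be a > b + EPSILON.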

How to make an infinity value in C? (especially an integer value)

I made a weighted directed graph, like this:
6
0 3 4 INFINITY INFINITY INFINITY
INFINITY 0 INFINITY 7 INFINITY INFINITY
INFINITY 3 0 5 11 INFINITY
INFINITY INFINITY INFINITY 0 6 3
INFINITY INFINITY INFINITY INFINITY 0 4
INFINITY INFINITY INFINITY INFINITY INFINITY 0
At first, I used some integer value such as 99 or 20000 to express infinity.
But then I found this is wrong: the path v5 -> v4 should come out as infinity, yet some finite integer value is printed instead,
ex : Shortest Path from v2 to v3 : v2 v3 (length : 200000)
Is there an infinity value for integers?
A friend of mine suggested ~(1<<31), but it doesn't work.
Unlike floating-point types, integer types don't have a standard value for infinity. If you have to have one, you'll have to pick a value yourself (e.g. INT_MAX) and correctly handle it throughout your code. Note that if you do this, you can use the special value in assignments and comparisons, but not in arithmetic expressions.
Infinity doesn't exist for integers. What your friend suggested is the biggest number a 32-bit signed integer can hold, but that still is not infinity. It also introduces the possibility of overflow: if you add it to something (for example in a shortest-path computation), you might actually end up with a smaller number. So don't do that.
The proper way is to handle infinity case by case. Use a flag value for infinity, for example that same ~(1<<31), or perhaps even better -1, and in your code, whenever you want to add two values, first check whether either of them equals the flag (is infinity); if so, set the result to infinity without doing any summation. Likewise, when checking whether one value is smaller than another, first check whether one of them equals the flag: if so, the other is definitely smaller, again without actually doing the comparison.
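For example, a sketch in C using INT_MAX as the flag (a -1 flag would work the same way with the tests adjusted):

#include <limits.h>
#include <stdbool.h>

#define INF INT_MAX   /* flag value; real edge weights must stay below it */

/* addition that never overflows past the flag */
int add_weights(int a, int b) {
    if (a == INF || b == INF)
        return INF;               /* inf + anything = inf, no summation */
    return a + b;
}

/* "a < b" with the flag treated as larger than every finite value */
bool less_than(int a, int b) {
    if (a == INF) return false;
    if (b == INF) return true;
    return a < b;
}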
edit: didn't realize you were specifying integers.
A solution might be to use -1 instead of infinity as your sentinel value. If I recall correctly, such graphs should not have negative edge weights anyway.

Shuffling biased random numbers

While thinking about this question and conversing with the participants, the idea came up that shuffling a finite set of clearly biased random numbers makes them random because you don't know the order in which they were chosen. Is this true, and if so, can someone point me to some resources?
EDIT: I think I might have been a little unclear. Suppose a bad random number generator. Take n values from it. These are biased (the RNG is bad). Is there a way, through shuffling, to make the output of the RNG over multiple trials statistically match the output of a known good RNG?
False.
There is an easy test: assume the bias in the original set-creation algorithm is "creates sets whose arithmetic average is significantly lower than the expected average". Obviously, shuffling the result of the algorithm will not change the average and thus will not remove the bias.
Also, regarding your clarification: how would you shuffle the set? Using the same bad output from the bad RNG that created the set in the first place? Or using a better RNG? Which raises the question of why you don't just use that one directly.
It's not true. In the other question, the problem is to select 30 random numbers in [1..9] with a sum of 200. After choosing, on average, about 20 of them randomly, you reach a point where you can't select nines anymore because that would push the total sum over 200. Of the remaining 10 numbers, most will be ones and twos. So in the end, ones and twos are heavily overrepresented in the selected numbers. Shuffling doesn't change that. But it's not clear what the random distribution really should look like, so one could say this is as good a solution as any.
In general, if your "random" numbers will be biased to, say, low numbers, they will be biased that way no matter the ordering.
Just shuffling a set of already-random numbers won't do anything to the probability distribution, of course. That would mean false. Perhaps I misunderstand your question, though?
I would say false, with a caveat:
I think there is random, and then there is 'random-enough'. For most applications that I have needed to work on, 'random-enough' was more than enough, i.e. picking a 'random' ad to display on a page from a list of 300 or so that have paid to be placed on that site.
I am sure a mathematician could prove that my very basic "random" selection criterion is not truly random at all, but in fact predictable; for my clients, and for the users, nobody cares.
On the other hand, if I were writing a video game to be used in Las Vegas where large amounts of money were at stake, I'd define random differently (and might have a hard time achieving truly random).
False
The set is finite; suppose it consists of n numbers. What happens if you need to choose n+1 numbers? Let's also consider a basic random function as implemented in many languages, which gives you a random number in [0,1). Suppose this number is limited to three digits after the decimal point, giving you a set of 1000 possible numbers (0.000 - 0.999). In most cases, though, you will not need to use all 1000 of these numbers, so this amount of randomness is more than enough.
However, for some uses you will need a better random generator than this. So it all comes down to exactly how many random numbers you are going to need, and how random you need them to be.
Addition after reading the original question: in the case that you have some sort of limitation (such as in the original question, where each set of selected numbers must sum to a certain N), you are not really selecting random numbers per se, but rather choosing numbers in a random order from a given set (specifically, a permutation of numbers summing to N).
Addition to edit: Suppose your bad number generator generated the sequence (1,1,1,2,2,2). Does the permutation (1,2,2,1,1,2) satisfy your definition of random?
Completely and utterly untrue: shuffling doesn't remove a bias, it just conceals it from the casual observer. It's like removing your dog's fondly-laid present from your carpet by just pushing it under the sofa: you really haven't solved the problem, you've just made it less conspicuous. Anyone with a nose knows there is still a problem that needs removing.
The randomness must be applied evenly over the whole range, so here's one way (off the top of my head, with lots of assumptions, yadda yadda). The point is the approach, not the code: start with everything even, then introduce your randomness in a consistent fashion until you're done. The only bias left depends on the values chosen for 'target' and 'numberofnumbers', which is part of the question.
#include <stdlib.h>   /* rand() */

void fill(int numbers[], int numberofnumbers, int target) {
    int total = 9 * numberofnumbers;      /* running sum of the array */
    for (int i = 0; i < numberofnumbers; i++)
        numbers[i] = 9;                   /* start with everything even */
    while (total > target) {              /* spread the reduction randomly */
        numbers[rand() % numberofnumbers]--;
        total--;
    }
}
/* in the question's setting: int numbers[30]; fill(numbers, 30, 200); */
False. Consider a bad random number generator producing only zeros (I said it was BAD :-). No amount of shuffling the zeros would change any property of that sequence.
