Symmetry breaking edges of a line graph on a coordinate grid - logic-programming

A classic trick for optimizing Datalog programs is symmetry-breaking. Symmetry breaking can halve the number of facts you must compute in your database, since you only need to compute edge(0, 1) and not also edge(1, 0).
Consider an n x m coordinate grid. If we want to write a relation describing undirected edges on this grid, a simple way to break symmetry is to say that edges only proceed from smaller to larger coordinates (e.g., (2, 3) to (3, 3), but not vice versa). Here is a simple clingo program which models a 2x2 grid in precisely this way:
row(0..1).
col(0..1).
node(n(I, J)) :- row(I), col(J).
delta(0,1).
delta(1,0).
edge(e(N1, N2)) :-
    node(N1), node(N2),
    N1 = n(I1, J1),
    N2 = n(I2, J2),
    delta(I2-I1, J2-J1).
#show edge/1.
We can see that this generates four facts about edge, rather than the eight it would generate if symmetry had not been broken:
edge(e(n(0,0),n(0,1)))
edge(e(n(1,0),n(1,1)))
edge(e(n(0,0),n(1,0)))
edge(e(n(0,1),n(1,1)))
Now consider the line graph induced by this coordinate grid. This line graph is also undirected, and we'd like to symmetry break in a similar manner. What is a compact way to break symmetry on this graph?
For reference, here is a short definition of edges on the line graph which does not break symmetry.
ledge(l(E1, E2)) :-
    edge(E1), edge(E2),
    E1 = e(N1, N2),
    E2 = e(N1, N3; N3, N1; N3, N2; N2, N3),
    N3 != N1, N3 != N2.
#show ledge/1.
We can see it does not break symmetry as it generates these two facts (among others):
ledge(l(e(n(0,0),n(1,0)),e(n(1,0),n(1,1))))
ledge(l(e(n(1,0),n(1,1)),e(n(0,0),n(1,0))))

To symmetry break, it is sufficient to define some total order on elements: you can then simply say that p(A, B) holds only when A < B. Clingo defines a total order on all terms (compound terms with the same name and arity are compared lexicographically by their arguments), so the solution is in fact as simple as:
ledge(l(E1, E2)) :-
    edge(E1), edge(E2),
    E1 < E2,
    E1 = e(N1, N2),
    E2 = e(N1, N3; N3, N1; N3, N2; N2, N3),
    N3 != N1, N3 != N2.
giving
ledge(l(e(n(0,0),n(1,0)),e(n(1,0),n(1,1))))
ledge(l(e(n(0,0),n(0,1)),e(n(0,1),n(1,1))))
ledge(l(e(n(0,1),n(1,1)),e(n(1,0),n(1,1))))
ledge(l(e(n(0,0),n(0,1)),e(n(0,0),n(1,0))))
as desired. There may be opportunities for other optimizations, though!

Related

Concatenating individual bits in C

I have the following situation:
A function F1 generates an output O1 that is N1 bits in size, where N1 will in general not be a multiple of 8 bits. Another function F2, in turn, generates an output O2 that is N2 bits in size, where N2 may, or may not, be a multiple of 8 bits.
I want to obtain an output string O constructed with O2 appended to O1, with the following proviso:
Imagine that N1 % 8 is 3, and the last three bits of O1 are 101. Also imagine that the first eight bits of O2 are 01011111. The sequence of bytes that I would end up with would have the first floor(N1 / 8) bytes from O1, then the byte 10101011, then bytes with the rest of the bits from O2, starting with the 111 left over from above.
The thing is, both O1 and O2 can potentially be extremely long - far bigger than the RAM in the host computer. Thus, I would have to store the bits of O1 and O2 on the hard drive, then perform the concatenation as described above, saving the resulting output O.
I am looking for suggestions on how to carry out the chore above as efficiently as possible, bearing in mind that, as I said, both N1 and N2 can be arbitrarily large. What worries me in particular is the bit shifting that one must carry out on O2.
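One way to approach this is to stream the bits: copy O1 through a one-byte output buffer, then keep streaming O2 into the same buffer, so only constant memory is needed no matter how large N1 and N2 are. Below is a rough C sketch; the helper names (BitWriter, put_bit, copy_bits), the MSB-first packing within each byte, and the zero-padding of the final output byte are my own assumptions, and error handling is omitted.
#include <stdint.h>
#include <stdio.h>

/* One-byte output buffer; bits are packed MSB-first (an assumption). */
typedef struct { FILE *f; uint8_t buf; unsigned used; } BitWriter;

static void put_bit(BitWriter *w, int bit) {
    w->buf = (uint8_t)((w->buf << 1) | (bit & 1));
    if (++w->used == 8) { fputc(w->buf, w->f); w->buf = 0; w->used = 0; }
}

/* Copy exactly nbits from src (packed MSB-first) into the writer. */
static void copy_bits(BitWriter *w, FILE *src, uint64_t nbits) {
    int ch;
    while (nbits > 0 && (ch = fgetc(src)) != EOF) {
        unsigned take = nbits < 8 ? (unsigned)nbits : 8;
        for (unsigned i = 0; i < take; i++)
            put_bit(w, (ch >> (7 - i)) & 1);
        nbits -= take;
    }
}

static void finish(BitWriter *w) {
    while (w->used != 0) put_bit(w, 0);   /* zero-pad the final byte */
}

void concat_bitstreams(FILE *out, FILE *o1, uint64_t n1, FILE *o2, uint64_t n2) {
    BitWriter w = { out, 0, 0 };
    copy_bits(&w, o1, n1);   /* the first floor(N1/8) bytes pass through unchanged;
                                the trailing N1 % 8 bits stay buffered */
    copy_bits(&w, o2, n2);   /* O2's bits are shifted into place as they stream */
    finish(&w);
}
A byte-at-a-time version (OR-ing each O2 byte shifted right by N1 % 8 with the low bits carried over from the previous byte) follows the same structure and avoids the per-bit loop, which matters once the streams are far larger than RAM.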

Big O analysis of GCD computation function [duplicate]

I am having difficulty deciding what the time complexity of Euclid's greatest common divisor algorithm is. This algorithm in pseudo-code is:
function gcd(a, b)
    while b ≠ 0
        t := b
        b := a mod b
        a := t
    return a
It seems to depend on a and b. My thinking is that the time complexity is O(a % b). Is that correct? Is there a better way to write that?
One trick for analyzing the time complexity of Euclid's algorithm is to follow what happens over two iterations:
a', b' := a % b, b % (a % b)
Now a and b will both decrease, instead of only one, which makes the analysis easier. You can divide it into cases:
Tiny A: 2a <= b
Tiny B: 2b <= a
Small A: 2a > b but a < b
Small B: 2b > a but b < a
Equal: a == b
Now we'll show that every single case decreases the total a+b by at least a quarter:
Tiny A: b % (a % b) < a and 2a <= b, so b is decreased by at least half, so a+b decreased by at least 25%
Tiny B: a % b < b and 2b <= a, so a is decreased by at least half, so a+b decreased by at least 25%
Small A: b will become b-a, which is less than b/2, decreasing a+b by at least 25%.
Small B: a will become a-b, which is less than a/2, decreasing a+b by at least 25%.
Equal: a % b is 0, so the pair collapses immediately and the algorithm terminates, certainly decreasing a+b by at least 25%.
Therefore, by case analysis, every double-step decreases a+b by at least 25%. There's a maximum number of times this can happen before a+b is forced to drop below 1. The total number of steps (S) until we hit 0 must satisfy (4/3)^S <= A+B. Now just work it:
(4/3)^S <= A+B
S <= lg[4/3](A+B)
S is O(lg[4/3](A+B))
S is O(lg(A+B))
S is O(lg(A*B)) //because A*B asymptotically greater than A+B
S is O(lg(A)+lg(B))
//Input size N is lg(A) + lg(B)
S is O(N)
So the number of iterations is linear in the number of input digits. For numbers that fit into cpu registers, it's reasonable to model the iterations as taking constant time and pretend that the total running time of the gcd is linear.
Of course, if you're dealing with big integers, you must account for the fact that the modulus operations within each iteration don't have a constant cost. Roughly speaking, the total asymptotic runtime is going to be n^2 times a polylogarithmic factor. Something like n^2 lg(n) 2^O(log* n). The polylogarithmic factor can be avoided by instead using a binary gcd.
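To make the double-step argument above concrete, here is a small self-check in C (my own illustrative code, not part of the answer) that counts iterations and asserts the 25% drop across every double-step, for positive inputs:
#include <assert.h>
#include <stdio.h>

/* For positive a, b, every double-step (a, b) -> (a % b, b % (a % b))
   should drop a + b by at least 25%, per the case analysis above. */
static unsigned gcd_steps(unsigned long a, unsigned long b) {
    unsigned steps = 0;
    while (b != 0) {
        unsigned long before = a + b, t;
        t = a % b; a = b; b = t; steps++;                  /* first half-step  */
        if (b != 0) { t = a % b; a = b; b = t; steps++; }  /* second half-step */
        assert(4 * (a + b) <= 3 * before);                 /* at least a 25% drop */
    }
    return steps;
}

int main(void) {
    printf("gcd(121393, 75025): %u iterations\n", gcd_steps(121393, 75025));
    printf("gcd(1000000, 1):    %u iterations\n", gcd_steps(1000000, 1));
    return 0;
}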
A suitable way to analyze an algorithm is to determine its worst-case scenarios.
The Euclidean GCD's worst case occurs when Fibonacci pairs are involved.
void EGCD(fib[i], fib[i - 1]), where i > 0.
For instance, let's opt for the case where the dividend is 55 and the divisor is 34 (recall that we are still dealing with Fibonacci numbers).
As you may notice, this operation cost 8 iterations (or recursive calls).
Let's try larger Fibonacci numbers, namely 121393 and 75025. We can notice here as well that it took 24 iterations (or recursive calls).
You can also notice that each iteration yields a Fibonacci number. That's why we have so many operations; indeed, we only obtain this many iterations with Fibonacci numbers.
Hence, the time complexity this time is going to be represented by small oh (an upper bound). The lower bound is intuitively Omega(1): the case of 500 divided by 2, for instance.
Solving the recurrence relation, we may say that the Euclidean GCD can make log(xy) operations at most.
There's a great look at this in the Wikipedia article.
It even has a nice plot of complexity for value pairs.
It is not O(a%b).
It is known (see the article) that it will never take more steps than five times the number of digits in the smaller number. So the maximum number of steps grows with the number of digits (ln b). The cost of each step also grows with the number of digits, so the complexity is bounded by O(ln^2 b) where b is the smaller number. That's an upper limit, and the actual time is usually less.
See here.
In particular this part:
Lamé showed a bound on the number of steps needed to arrive at the greatest common divisor for two numbers less than n.
So O(log min(a, b)) is a good upper bound.
Here's an intuitive understanding of the runtime complexity of Euclid's algorithm. The formal proofs are covered in various texts such as Introduction to Algorithms and TAOCP Vol 2.
First think about what happens if we try to take the gcd of two Fibonacci numbers F(k+1) and F(k). You might quickly observe that Euclid's algorithm iterates on to F(k) and F(k-1); that is, with each iteration we move down one number in the Fibonacci series. As Fibonacci numbers are O(Phi^k), where Phi is the golden ratio, we can see that the runtime of GCD is O(log n), where n = max(a, b) and the log has base Phi. Next, we can prove that this is the worst case by observing that Fibonacci numbers consistently produce pairs where the remainders remain large enough in each iteration and never become zero until you arrive at the start of the series.
We can make the O(log n) bound, where n = max(a, b), even tighter. Assume that b >= a, so we can write the bound as O(log b). First, observe that GCD(ka, kb) = GCD(a, b). As the biggest value of k is gcd(a, b), we can replace b with b/gcd(a, b) in our runtime, leading to the tighter bound O(log(b/gcd(a, b))).
Here is the analysis in the book Data Structures and Algorithm Analysis in C by Mark Allen Weiss (second edition, 2.4.4):
Euclid's algorithm works by continually computing remainders until 0 is reached. The last nonzero remainder is the answer.
Here is the code:
unsigned int Gcd(unsigned int M, unsigned int N)
{
    unsigned int Rem;

    while (N > 0) {
        Rem = M % N;    /* the remainder becomes the new divisor */
        M = N;
        N = Rem;
    }
    return M;           /* the last nonzero remainder */
}
Here is a THEOREM that we are going to use:
If M > N, then M mod N < M/2.
PROOF:
There are two cases. If N <= M/2, then since the remainder is smaller
than N, the theorem is true for this case. The other case is N > M/2.
But then N goes into M once with a remainder M - N < M/2, proving the
theorem.
So, we can make the following inference:
Variables        M        N            Rem
initial          M        N            M % N
1 iteration      N        M % N        N % (M % N)
2 iterations     M % N    N % (M % N)  (M % N) % (N % (M % N)) < (M % N) / 2
So, after two iterations, the remainder is at most half of its original value. This shows that the number of iterations is at most 2 log N = O(log N).
Note that the algorithm computes Gcd(M, N), assuming M >= N. (If N > M, the first iteration of the loop swaps them.)
The worst case arises when n and m are consecutive Fibonacci numbers.
gcd(F_n, F_{n-1}) = gcd(F_{n-1}, F_{n-2}) = ... = gcd(F_1, F_0) = 1, and the nth Fibonacci number is about 1.618^n, where 1.618 is the golden ratio.
So, to find gcd(n, m), the number of recursive calls will be Θ(log n).
The worst case of Euclid's algorithm is when the remainders are the biggest possible at each step, i.e. for two consecutive terms of the Fibonacci sequence.
When n and m are the numbers of digits of a and b, assuming n >= m, the algorithm uses O(m) divisions.
Note that complexities are always given in terms of the sizes of the inputs, in this case the number of digits.
Gabriel Lamé's theorem bounds the number of steps by log(1/sqrt(5)*(a+1/2)) - 2, where the base of the log is (1+sqrt(5))/2. This is the worst-case scenario for the algorithm, and it occurs when the inputs are consecutive Fibonacci numbers.
A slightly more liberal bound of log a, where the base of the log is sqrt(2), is implied by Koblitz.
For cryptographic purposes we usually consider the bitwise complexity of the algorithms, taking into account that the bit size is given approximately by k = log a.
Here is a detailed analysis of the bitwise complexity of Euclid's algorithm:
Although in most references the bitwise complexity of Euclid's algorithm is given as O((log a)^3), there exists a tighter bound, which is O((log a)^2).
Consider r_0 = a, r_1 = b, and the division steps r_0 = q_1*r_1 + r_2, ..., r_{i-1} = q_i*r_i + r_{i+1}, ..., r_{m-2} = q_{m-1}*r_{m-1} + r_m, r_{m-1} = q_m*r_m.
Observe that a = r_0 >= b = r_1 > r_2 > r_3 > ... > r_{m-1} > r_m > 0  (1)
and r_m is the greatest common divisor of a and b.
By a claim in Koblitz's book (A Course in Number Theory and Cryptography) it can be proven that r_{i+1} < r_{i-1}/2  (2)
Again in Koblitz, the number of bit operations required to divide a k-bit positive integer by an l-bit positive integer (assuming k >= l) is given as (k - l + 1)*l  (3)
By (1) and (2) the number of divisions is O(log a), and so by (3) the total complexity is O((log a)^3).
Now this may be reduced to O((log a)^2) by a remark in Koblitz.
Consider k_i = log(r_i) + 1.
By (1) and (2) we have k_{i+1} <= k_i for i = 0, 1, ..., m-1 and k_{i+2} <= k_i - 1 for i = 0, 1, ..., m-2,
and by (3) the total cost of the m divisions is bounded by SUM over i of (k_{i-1} - (k_i - 1)) * k_i.
Rearranging this: SUM over i of (k_{i-1} - (k_i - 1)) * k_i <= 4*k_0^2.
So the bitwise complexity of Euclid's algorithm is O((log a)^2).
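As a sketch of why the rearranged sum is on the order of k_0^2 (my own filling-in of this step, not taken from Koblitz), use k_i <= k_0, telescope the sum, and apply m <= 2*k_0, which follows from k_{i+2} <= k_i - 1 together with every k_i >= 1:
\sum_{i=1}^{m} (k_{i-1} - k_i + 1)\, k_i
  \le k_0 \sum_{i=1}^{m} (k_{i-1} - k_i + 1)
  = k_0 \,(k_0 - k_m + m)
  \le k_0 \,(k_0 + 2 k_0)
  \le 4 k_0^2.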
For the iterative algorithm, however, we have:
long long iterativeEGCD(long long n, long long m) {
    long long a;
    int numberOfIterations = 0;
    while (n != 0) {
        a = m;
        m = n;
        n = a % n;
        numberOfIterations++;
    }
    printf("\nIterative GCD iterated %d times.", numberOfIterations);
    return m;
}
With Fibonacci pairs, there is no difference between iterativeEGCD() and iterativeEGCDForWorstCase() where the latter looks like the following:
long long iterativeEGCDForWorstCase(long long n, long long m) {
    long long a;
    int numberOfIterations = 0;
    while (n != 0) {
        a = m;
        m = n;
        n = a - n;
        numberOfIterations++;
    }
    printf("\nIterative GCD iterated %d times.", numberOfIterations);
    return m;
}
Yes, with Fibonacci pairs, n = a % n and n = a - n are exactly the same thing, because the quotient at every step is 1.
We also know that, from an earlier response to the same question, there is a prevailing decreasing factor: factor = m / (n % m).
Therefore, to shape the iterative version of the Euclidean GCD in a defined form, we may depict it as a "simulator" like this:
void iterativeGCDSimulator(long long x, long long y) {
    long long i;
    double factor = x / (double)(x % y);
    int numberOfIterations = 0;
    for (i = x * y; i >= 1; i = i / factor) {
        numberOfIterations++;
    }
    printf("\nIterative GCD Simulator iterated %d times.", numberOfIterations);
}
Based on the work (last slide) of Dr. Jauhar Ali, the loop above is logarithmic.
Yes, small oh, because the simulator gives an upper bound on the number of iterations; non-Fibonacci pairs take fewer iterations than Fibonacci pairs when run through the Euclidean GCD.
At every step, there are two cases (assume a >= b, which holds after the first step; otherwise the step just swaps them):
b >= a / 2: then a, b = b, a % b makes the new b = a - b at most half of a's previous value
b < a / 2: then a, b = b, a % b makes the new a at most half of its previous value, since b is less than a / 2
So at every step, the algorithm replaces at least one of the numbers with a value at most half of the old a.
In at most O(log a) + O(log b) steps, this is reduced to the simple cases, which yields an O(log n) algorithm, where n is the upper limit of a and b.
I have found it here

Finding all combinations of elements from two sets such that their geometric mean falls into third set

I have the integers from 1 to n. I randomly allot every integer to one of three sets A, B and C (A ∩ B = B ∩ C = C ∩ A = Ø). Every integer belongs to exactly one set. I need to find all pairs of elements (a, b) such that a ∈ A, b ∈ B, and the geometric mean of a and b belongs to C; basically, sqrt(a*b) ∈ C.
My solution is to first mark on an array of size n whether each element went into set A, B or C. Then I loop through the array for all elements that belong to A. When I encounter one, I loop again through all elements that belong to B. If sqrt(a*b) is an integer and array[sqrt(a*b)] == C, then I add (a, b, sqrt(a*b)) as one possible combination. Doing this over the entire array is O(n^2).
Is a more optimal solution possible?
It can be done with better complexity than O(n^2). The solution sketched here is in O(n * sqrt(n) * log(n)).
The main idea is the following:
let (a, b, c) be a good solution, i.e. one with sqrt(a * b) = c. We can write a as a = s * t^2, where s is the product of the prime numbers that have odd exponents in a's prime factorization. It's guaranteed that the remaining part of a is a perfect square. Since a * b is a perfect square, then b must be of the form s * k^2. For each a (there are O(n) such numbers), after finding s from the decomposition above (this can be done in O(log(n)), as it will be described next), we can restrict our search for the number b to those of the form b = s * k^2, but there are only O(sqrt(n)) numbers like this smaller than n. For each pair a, b enumerated like this we can test in O(1) whether there is a good c, using the representation you used in the question.
One critical part in the idea above is decomposing a into s * t^2, i.e. finding the primes that have odd power in a's factorization.
This can be done using a pre-processing step, that finds the prime factors (but not also their powers) of every number in {1, 2, .. n}, using a slightly modified sieve of Eratosthenes. This modified version would not only mark a number as "not prime" when iterating over the multiples of a prime, but would also append the current prime number to the list of the factors of the current multiple. The time complexity of this pre-processing step is n * sum{for each prime p < n}(1/p) = n * log(log(n)) -- see this for details.
Using the result of the pre-processing, which is the list of primes which divide a, we can find those primes with odd power in O(log(n)). This is achieved by dividing a by each prime in the list until it is no longer divisible by that prime. If we made an odd number of divisions, then we use the current prime in s. After all divisions are done, the result will be equal to 1. The complexity of this is O(log(n)) because in the worst case we always divide the initial number by 2 (the smallest prime number), thus it will take at most log2(a) steps to reach value 1.
The complexity of the main step dominates the complexity of the preprocessing, thus the overall complexity of this approach is O(n * sqrt(n) * log(n)).
Remark: in the decomposition a = s * t^2, s is the product of the prime numbers in a with odd exponents, but their exponent is not used in s (i.e. s is just the product of those primes, with exponent 1). Only in this situation is it guaranteed that b should be of the form s * k^2. Indeed, since a * b = c * c, the prime factorization of the right hand side uses only even exponents, thus all primes from s should also appear in b with odd exponents, and all other primes from b's factorization should have even exponents.
Expanding on the following line: "we can restrict our search for the number b to those of the form b = s * k^2, but there are only O(sqrt(n)) numbers like this smaller than n".
Let's consider an example. Imagine that we have something like n = 10,000 and we are currently looking for solutions having a = 360 = 2^3 * 3^2 * 5. The primes with odd exponent in a's factorization are 2 and 5 (thus s = 2 * 5; a = 10 * 6^2).
Since a * b is a perfect square, it means that all primes in the prime factorization of a * b have even exponents. This implies that those two primes (2 and 5) need to also appear in b's factorization with odd exponents, and the rest of the exponents in b's prime factorization need to be even. Thus b is of the form s * k^2 = 10 * k ^ 2.
So we proved that b = 10 * k ^ 2. This is helpful, because we can now enumerate all the b values of this form quickly (in O(sqrt(n))). We only need to consider k = 1, k = 2, ..., k = (int)sqrt(n / 10). Larger values of k result in values of b larger than n. Each of these k values determines one b value, which we need to verify. Note that when verifying one of these b values, it should first be checked whether it is indeed in set B, which can be done in O(1), and whether sqrt(a * b) is in the set C, which can also be done in O(1).
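For concreteness, here is a rough C sketch of the whole approach; the demo set assignment, the names set_of and spf, and the use of a smallest-prime-factor sieve in place of the full factor lists are my own assumptions (the asymptotics are unchanged):
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int n = 48;                                  /* small demo size */
    char *set_of = malloc(n + 1);                /* 'A', 'B' or 'C' per integer */
    int  *spf    = calloc(n + 1, sizeof(int));   /* smallest prime factor */

    /* demo assignment: just cycle A, B, C (replace with the real allotment) */
    for (int i = 1; i <= n; i++) set_of[i] = "ABC"[i % 3];

    /* sieve of smallest prime factors, O(n log log n) */
    for (int i = 2; i <= n; i++)
        if (spf[i] == 0)
            for (int j = i; j <= n; j += i)
                if (spf[j] == 0) spf[j] = i;

    for (int a = 1; a <= n; a++) {
        if (set_of[a] != 'A') continue;
        /* squarefree part s of a: product of primes with odd exponent; a = s * t^2 */
        int s = 1, t = 1, rest = a;
        while (rest > 1) {
            int p = spf[rest], e = 0;
            while (rest % p == 0) { rest /= p; e++; }
            if (e % 2) s *= p;
            for (int q = 0; q < e / 2; q++) t *= p;
        }
        /* b must have the form s * k^2, and then c = sqrt(a*b) = s * t * k */
        for (long long k = 1; s * k * k <= n; k++) {
            long long b = s * k * k, c = (long long)s * t * k;
            if (set_of[b] == 'B' && c <= n && set_of[c] == 'C')
                printf("a=%d b=%lld c=%lld\n", a, b, c);
        }
    }
    free(set_of); free(spf);
    return 0;
}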

C Program to detect right angled triangles

I am given 100 points in the coordinate plane, and I have to find whether there exists a right-angled triangle among those vertices.
Is there a way to detect a right-angled triangle among those vertices without having to check every combination of 3 vertices and apply the Pythagorean theorem to each?
Can there be a better algorithm for this?
Thanks for any help. :)
Here's an O(n^2 log n)-time algorithm for two dimensions only. I'll describe what goes wrong in higher dimensions.
Let S be the set of points, which have integer coordinates. For each point o in S, construct the set of nonzero vectors V(o) = {p - o | p in S - {o}} and test whether V(o) contains two orthogonal vectors in linear time as follows.
Method 1: canonize each vector (x, y) to (x/gcd(x, y), y/gcd(x, y)), where |gcd(x, y)| is the largest integer that divides both x and y, and where gcd(x, y) is negative if y is negative, positive if y is positive, and |x| if y is zero. (This is very similar to putting a fraction in lowest terms.) The key fact about two dimensions is that, for each nonzero vector, there exists exactly one canonical vector orthogonal to that vector, specifically, the canonization of (-y, x). Insert the canonization of each vector in V(o) into a set data structure and then, for each vector in V(o), look up its canonical orthogonal mate in that data structure. I'm assuming that the gcd and/or set operations take time O(log n).
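Here is a sketch of the canonization step in C; the struct, the helper names, and the gcd routine are mine, and the vectors are assumed nonzero, as in V(o):
#include <stdlib.h>

typedef struct { long long x, y; } Vec;

/* Greatest common divisor of non-negative integers. */
static long long gcd_ll(long long a, long long b) {
    while (b != 0) { long long t = a % b; a = b; b = t; }
    return a;
}

/* Canonize (x, y) as described above: divide by g, where |g| = gcd(|x|, |y|),
   g takes y's sign, and g = |x| when y == 0. Canonizing (-y, x) then yields
   the unique canonical vector orthogonal to (x, y). */
static Vec canonize(long long x, long long y) {
    long long g;
    if (y == 0) {
        g = llabs(x);
    } else {
        g = gcd_ll(llabs(x), llabs(y));
        if (y < 0) g = -g;
    }
    Vec c = { x / g, y / g };
    return c;
}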
Method 2: define a comparator on vectors as follows. Given vectors (a, b), (c, d), write (a, b) < (c, d) if and only if
s1 s2 (a d - b c) < 0,
where
s1 = -1 if b < 0 or (b == 0 and a < 0), and 1 otherwise
s2 = -1 if d < 0 or (d == 0 and c < 0), and 1 otherwise.
Sort the vectors using this comparator. (This is very similar to comparing the fraction a/b with c/d.) For each vector (x, y) in V(o), binary search for its orthogonal mate (-y, x).
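A sketch of that comparator in C, suitable for qsort/bsearch (the struct duplicates the one in the previous sketch; coordinates are assumed small enough that the cross product fits in long long):
#include <stdlib.h>

typedef struct { long long x, y; } Vec;

/* -1 for the lower half-plane (or the negative x-axis), +1 otherwise. */
static int half_sign(long long x, long long y) {
    return (y < 0 || (y == 0 && x < 0)) ? -1 : 1;
}

/* (a, b) < (c, d) iff s1 * s2 * (a*d - b*c) < 0, per the definition above.
   Vectors that are positive multiples of each other compare equal, which is
   fine here: any equal match found by binary search is still orthogonal. */
static int cmp_vec(const void *pu, const void *pv) {
    const Vec *u = pu, *v = pv;
    long long s = (long long)half_sign(u->x, u->y) * half_sign(v->x, v->y)
                  * (u->x * v->y - u->y * v->x);
    return (s < 0) ? -1 : (s > 0) ? 1 : 0;
}

/* usage: qsort(vecs, count, sizeof(Vec), cmp_vec); then, for each (x, y),
   bsearch for (-y, x) with the same comparator. */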
In three dimensions, the set of vectors orthogonal to the unit vector along the z-axis is the entire x-y-plane, and the equivalent of canonization fails to map all vectors in this plane to one orthogonal mate.

Efficient algorithm for shortest distance between two line segments in 1D

I can find plenty of formulas for finding the distance between two skew lines. I want to calculate the distance between two line segments in one dimension.
It's easy to do with a bunch of IF statements. But I was wondering if there is a more efficient math formula.
E.g. 1:
----L1x1-------L2x1-------L1x2------L2x2----------------------------
L1 = line segment 1, L2 = line segment 2;
the distance here is 0 because of intersection
E.g. 2:
----L1x1-------L1x2-------L2x1------L2x2----------------------------
the distance here is L2x1 - L1x2
EDIT:
The only assumption is that the line segments are ordered, i.e. x2 is always > x1.
Line segment 1 may be to the left of, to the right of, or coincident with line segment 2, etc. The algorithm has to handle all of these cases.
EDIT 2:
I have to implement this in T-SQL (SQL Server 2008). I just need the logic... I can write the T-SQL.
EDIT 3:
If one line segment lies entirely within the other, the distance is 0.
----L1x1-------L2x1-------L2x2------L1x2----------------------------
Line segment 2 is a segment of line segment 1, making the distance 0.
If they intersect or touch, the distance is 0.
This question is the same as the question "Do two ranges intersect, and if not, what is the distance between them?" The answer depends slightly on whether you already know which range is smaller, and whether the points in the ranges are ordered correctly (that is, whether the lines have the same direction).
if (a.start < b.start) {
    first = a;
    second = b;
} else {
    first = b;
    second = a;
}
Then:
distance = max(0, second.start - first.end);
Depending on where you're running this, your compiler should optimise it nicely. In any case, you should probably profile to make sure that your code is a bottleneck before making it less readable for a theoretical performance improvement.
This works in all cases:
d = (s1 max s2 - e1 min e2) max 0
As a bonus, removing max 0 means a negative result indicates exactly how much of the two segments overlap.
Proof
Note that the algorithm is symmetric, so asymmetric cases only need to be covered once. So I'm going to assert s2 >= s1 w.l.o.g. Also note e1 >= s1 and e2 >= s2.
Cases:
L2 starts after L1 ends (s2 >= e1): s1 max s2 = s2, e1 min e2 = e1. Result is s2 - e1, which is non-negative and clearly the value we want (the distance).
L2 inside L1 (s2 <= e1, e2 <= e1): s1 max s2 = s2, e1 min e2 = e2. s2 - e2 is non-positive by s2 <= e2, so the result is 0 as expected during overlap.
L2 starts within L1 but ends after (s2 <= e1, e2 >= e1): s1 max s2 = s2, e1 min e2 = e1. s2 - e1 is non-positive by s2 <= e1, so the result is 0 as expected during overlap.
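Written out in C (this is just the formula above; the helper names are mine), with the aside that the same logic translates to T-SQL with CASE expressions standing in for max and min:
#include <stdio.h>

static double max2(double a, double b) { return a > b ? a : b; }
static double min2(double a, double b) { return a < b ? a : b; }

/* Distance between [s1, e1] and [s2, e2]; 0 when they touch or overlap. */
static double segment_distance(double s1, double e1, double s2, double e2) {
    return max2(0.0, max2(s1, s2) - min2(e1, e2));
}

int main(void) {
    printf("%g\n", segment_distance(0, 2, 5, 7));   /* disjoint: prints 3 */
    printf("%g\n", segment_distance(0, 5, 3, 7));   /* overlapping: prints 0 */
    return 0;
}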
I do not think there is a way around the conditions. But this is succinct:
var diff1 = L2x1 - L1x2;
var diff2 = L2x2 - L1x1;
return diff1 > 0 ? max(0, diff1) : -min(0,diff2);
This assumes LNx1 < LNx2.
I think that since all the line segments in 1D are of the form (X, 0) or (0, Y), you can store all these x values in an array, sort the array, and the minimum distance will be the difference between the first 2 elements of the array.
Here you need to be careful while storing elements in the array so that duplicate elements are not stored.
This formula seems to work in all cases but the one where one line lies fully within the other.
return -min(a2-b1,b2-a1)

Resources