I just realized that was a dumb question. Curious if anyone can still find a loophole though.
Source code:
married(trump,obama).
married(trump,goat).
married(pepee,pepper).
married(X,Y) :- married(Y,X),!. % not awesome because of infinite recursion
Goal: ex. married(trump, putin).
Trace:
first base case fails.
second base case fails.
third base case fails.
married(trump,putin) = married(putin,trump),!.
What I want it to do is try married(putin,trump) again, but all the earlier base cases will fail again. We already tried switching the arguments and failed, so don't recurse; just return false.
I get a stack overflow because married(putin,trump) (or the other way around) before the ! never returns true or false, so the cut is never reached.
The easier and saner way is to just rewrite the code to prevent the recursion. I'm curious whether there is a way to try switching the arguments once and return false if that fails. If you have a long list of facts, you can cut that list in half if you can try arg1,arg2 and then vice versa, and potentially much more in crazier permutation scenarios.
Any insights would be awesome, thanks.
You are on the right track with "switching args once and return fail if that fails", even though that is worded very imperatively and does not cover all modes we expect from such a relation.
For this to work, you need to separate this into two predicates. It is easy to show that a single predicate with the given interface is not sufficient.
First, the auxiliary predicate:
married_(a, b).
married_(c, d).
etc.
Then, the main predicate, essentially as you suggest:
married(X, Y) :- married_(X, Y).
married(X, Y) :- married_(Y, X).
Adding impurities to your solution makes matters worse: Almost invariably, you will destroy the generality of your relations, raising the question why you are using a declarative language at all.
Example query:
?- married(X, Y).
X = a,
Y = b ;
X = c,
Y = d ;
X = b,
Y = a ;
X = d,
Y = c.
Strictly speaking, you can of course also do this with only a single predicate, but you need to carry around additional information if you do it this way.
For example:
married(_, a, b).
married(_, c, d).
married(first, X, Y) :- married(second, Y, X).
Example query:
?- married(_, X, Y).
X = a,
Y = b ;
X = c,
Y = d ;
X = b,
Y = a ;
X = d,
Y = c.
This closely follows the approach you describe: "We tried switching args before. So don't do it again."
How does this code work? I would have thought it worked for the first possible values of C and X, but somehow it loops.
path(A, B, [A, B], X) :-
    route(A, B, X).
path(A, B, PathAB, Length) :-
    route(A, C, X),
    path(C, B, PathCB, LengthCB),
    PathAB = [A | PathCB],
    Length is X + LengthCB.
There are routes defined as facts, e.g. route(bahirdar, mota, 32).
Taking a simpler example, suppose you have the following facts:
foo(1).
foo(2).
Then you query:
| ?- foo(X).
Prolog will succeed with X = 1 and prompt:
X = 1 ?
The ? indicates that there was a choice point (it found additional options to explore for foo), and if you press ; and Enter, it will backtrack and try to find another solution, which it does, and prompts:
X = 2 ?
Now if you press ; and Enter it will fail and stop because it can't succeed any further.
Let's try something a little more elaborate with a conjunction. Use the facts:
foo(1).
foo(2).
foo(3).
And the rule:
odd(X) :- X /\ 1 =:= 1. % X is odd if X bit-wise and with 1 is 1
And then do a query that says I want X to be foo and I want X to be odd:
| ?- foo(X), odd(X).
X = 1 ? ;
X = 3 ? ;
no
| ?-
Note that we only get the odd solutions. What happens in this query is as follows:
Prolog calls foo(X) and succeeds with X = 1.
Prolog calls odd(1) (since X is instantiated as 1) and succeeds
Prolog shows the result X = 1.
The user indicates a backtrack is desired (find more solutions)
Prolog backtracks to odd(1), which has no variables to reinstantiate, so Prolog backtracks further
Prolog backtracks to foo(X) and succeeds with X = 2 and proceeds ahead again
Prolog calls odd(2) and fails. The failure causes Prolog to backtrack to the foo call
Prolog backtracks to foo(X) and succeeds with X = 3 and proceeds ahead again
Prolog calls odd(3) and succeeds and displays the solution X = 3
...
Apply this now to your predicate:
path(A, B, [A, B], X) :-
    route(A, B, X).
path(A, B, PathAB, Length) :-
    route(A, C, X),
    path(C, B, PathCB, LengthCB),
    PathAB = [A | PathCB],
    Length is X + LengthCB.
If a query is made to path, Prolog first attempts to match the query to the head of the first clause path(A, B, [A, B], X). A match to this head would mean that the 3rd argument would have to be a list consisting of exactly 2 elements. If there's a match, Prolog will call route(A, B, X). If route(A, B, X) succeeds, Prolog will display the values of A, B, and X that resulted in the success. If the user prompts for more solutions, Prolog will backtrack which would either (a) call route(A, B, X) again if there was a choice point left in the prior call, or (b) backtrack further and attempt to match the original call to path to the second clause, path(A, B, PathAB, Length). Similarly, if the original call to route(A, B, X) had failed, Prolog would backtrack to attempt to match the second clause.
If executing the second clause, you have the conjunction case like shown in the prior simplified example. Here, it's conjunctive sequence of four calls starting with route(A, C, X). Prolog will attempt each of these calls in sequence and move to the next as long as the prior succeeds. When a failure is encountered, Prolog will backtrack to the prior call and, if there was a choice point, attempt a reinstantiation of arguments to make the prior call succeed again, etc.
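For instance, assuming only the single fact from the question, route(bahirdar, mota, 32), is loaded, the interaction should look roughly like this (a sketch, not captured output):
| ?- path(bahirdar, mota, Path, Length).
Path = [bahirdar,mota]
Length = 32 ? ;
no
The first clause matches (the requested path has exactly two stops) and route(bahirdar, mota, X) succeeds with X = 32. Asking for more solutions retries with the second clause, but its recursive call path(mota, mota, PathCB, LengthCB) finds no route leaving mota, so it fails and the query terminates.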
Perhaps you can see how this is different from looping. In a typical imperative language, you might have a loop consisting of the following statements:
while (something) do
    A
    B
    C
end
Which would execute as A B C A B C A B C ... until the loop condition is met. In Prolog, you might have:
A,
B,
C,
Which may execute as: A(succeeds) B(fails - backtrack) A(succeeds) B(fails - backtrack) A(succeeds) B(succeeds) C(fails - backtrack) B(succeeds) C(succeeds) and then finally yield results.
If this were a really good answer, I'd include a bunch of diagrams to illustrate this. But I hope the description helps enough. :)
I am looking for a programming language and a way to automate the following problem.
Given a formula connecting different variables, say g=GM/r^2, and values for all but one of the variables, (g=9.8,M=5E25,G=6.7E-11), how can I program a routine which:
a) Identifies the unknown variable
b) symbolically, solves the formula
c) finally, substitutes values of known variables and solves the equation for the unknown.
I am far from an expert in programming, and the only thing that came to my mind was a slow process in which one checks variable after variable to see which one has not been set to a value, and then uses the appropriate rearrangement of the formula to calculate the unknown.
(e.g. in our case, the program checks variable after variable until it finds that r is the unknown. Then it uses the same formula, but rearranged to calculate r, i.e. r = sqrt(GM/g).)
I am sure there is a fast and elegant language to do this, but I cannot figure it out.
Thanks in advance for your help.
Well, here is one way to do it, using Maxima.
eq : g = G * M / r^2;
known_values : [g = 9.8, M = 5e25, G = 6.7e-11];
eq1 : subst (known_values, eq);
remaining_var : listofvars (eq1);
solve (eq1, remaining_var);
=> [r = -5000000*sqrt(670)/7, r = 5000000*sqrt(670)/7]
You can use the function float to get a floating point value from that.
You can probably also do it with Sympy or something else.
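As a rough sketch of the same three steps with SymPy (identify the unknown, substitute the known values, solve); the symbol and variable names here are just for illustration:
from sympy import symbols, Eq, solve

g, G, M, r = symbols('g G M r', positive=True)
eq = Eq(g, G * M / r**2)
known = {g: 9.8, M: 5e25, G: 6.7e-11}
unknown = (eq.free_symbols - set(known)).pop()   # the one variable without a value, here r
print(unknown, solve(eq.subs(known), unknown))   # roughly 1.849e7, matching the Maxima result above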
For such a simple case, the approach that you suggest is quite appropriate.
The "slow" process might take on the order of 10 nanoseconds to find the unknown variable (using a compiled language), so I wouldn't worry so much.
Indeed, symbolic computation programs are able to derive the explicit formulas, which you can then transcribe into most programming languages:
g = GM/r²
G = gr²/M
M = gr²/G
r = √(GM/g)
// C code (the unknown is the variable left at 0; needs <math.h>)
if (g == 0) g = G * M / (r * r);
else if (G == 0) G = g * r * r / M;
else if (M == 0) M = g * r * r / G;
else r = sqrt(G * M / g);
For instance, the free Microsoft Mathematics can do it. But in this particular case, just do it by hand.
For a completely integrated solution with built-in scripting, think of Mathematica, Mathcad, Maple and the like.
This is another step of my battle with multi-dimensional arrays in R; my previous question is here :)
I have a big R array with the following dimensions:
> data = array(..., dim = c(x, y, N, value))
I'd like to perform a sort of bootstrap comparing the mean (see here for a discussion about it) obtained with:
> vmean = apply(data, c(1,2,3), mean)
with the mean obtained by sampling the N values randomly with replacement. To explain better: if data[1,1,,1] equals [v1 v2 v3 ... vN], I'd like to replace it with something like [v_k1 v_k2 v_k3 ... v_kN], with the k values sampled with sample(N, N, replace = T).
Of course I want to AVOID a for loop. I've read this, but I don't know how to perform efficient indexing of this array while avoiding a loop over x and y.
Any ideas?
UPDATE: the important thing here is that I want a different resampling for each slice along the fourth (value) dimension; otherwise it would be simple to do something like:
> dataSample = data[,,sample(N, N, replace = T), ]
Also, there's the compiler package, which speeds up for loops by using a just-in-time compiler.
Adding these lines at the top of your code enables the compiler for all code.
require("compiler")
compilePKGS(enable=T)
enableJIT(3)
setCompilerOptions(suppressAll=T)
[This was initially on matrices, but I guess it applies to any variable generically]
Say we have Var1 * Var2 * Var3 * Var4.
One of them changes sporadically; which one changes is random.
Is it possible to minimize the number of multiplications?
If I do
In case Var1 changes: newVar1 * savedVar2Var3Var4
I noticed that I then need to recalculate savedVar2Var3Var4 each time Var2, Var3, or Var4 changes.
Would that recalculation of 'saved combinations' defeat the purpose?
If you had a lot more numbers to multiply, or if multiplication were extremely expensive, then there is one thing I can think of to do.
If you had a huge number of numbers to multiply then you could separate them into sub-sets and memoize the product of each set. When a particular set changes, due to one of its members changing, then the memoized product becomes invalid and needs to be recomputed. You could do this at several levels depending on how expensive multiplication is, how much memory you have available, and how often things change. How to best implement this in C probably depends on how the variables go about changing -- if an event comes in that says "here is a new value for C" then you can invalidate all products that had C in them (or check that old C actually is different from new C before invalidation). If they are volatile variables then you will probably just have to compare each of the current values to the previous values (and this will probably take as much or more time as just multiplying on any machine with a hardware multiply instruction).
So, if you have:
answer = A * B * C * D * E * F * G * H;
then you could separate those out to:
answer = ( (A * B) * (C * D) ) * ( (E * F) * (G * H) );
Then, if rather than having this multiplication done directly in C you were to do it on an expression tree:
               answer
                  *
                /   \
              /       \
            /           \
        ABCD            EFGH
          *               *
        /   \           /   \
       /     \         /     \
     AB      CD      EF      GH
      *       *       *       *
     / \     / \     / \     / \
    A   B   C   D   E   F   G   H
Then at each level (well maybe just the top few levels) you could have a memoized sub-answer as well as some data to tell you if the variables below it had changed. If events come in to tell you to change a variable then that could cause the invalidation of the expression to propagate upward upon receipt of the event (or just recompute the memoized sub-answers for each event). If variables just magically change and you have to examine them to tell that they did change then you have more work to do.
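Here is a rough C sketch of that propagation for the fixed 8-leaf tree above (the struct and function names are mine, and it assumes an event tells you which leaf changed):
#include <stdio.h>

/* v[0..7] are the leaves A..H; pair[i] caches v[2i]*v[2i+1],
   half[j] caches pair[2j]*pair[2j+1], answer caches half[0]*half[1]. */
typedef struct {
    double v[8];
    double pair[4];
    double half[2];
    double answer;
} ProductTree;

void tree_init(ProductTree *t, const double vals[8])
{
    for (int i = 0; i < 8; i++) t->v[i] = vals[i];
    for (int p = 0; p < 4; p++) t->pair[p] = t->v[2*p] * t->v[2*p + 1];
    for (int h = 0; h < 2; h++) t->half[h] = t->pair[2*h] * t->pair[2*h + 1];
    t->answer = t->half[0] * t->half[1];
}

/* Recompute only the path from the changed leaf to the root:
   3 multiplications instead of the 7 a full recompute needs. */
void tree_set(ProductTree *t, int leaf, double value)
{
    if (t->v[leaf] == value) return;   /* value did not actually change */
    t->v[leaf] = value;
    int p = leaf / 2;
    t->pair[p] = t->v[2*p] * t->v[2*p + 1];
    int h = p / 2;
    t->half[h] = t->pair[2*h] * t->pair[2*h + 1];
    t->answer = t->half[0] * t->half[1];
}

int main(void)
{
    double vals[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    ProductTree t;
    tree_init(&t, vals);
    tree_set(&t, 2, 10.0);             /* C changes from 3 to 10 */
    printf("%g\n", t.answer);          /* 1*2*10*4*5*6*7*8 = 134400 */
    return 0;
}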
Oh, another way to do this just popped into my head, and I'm ashamed that I didn't think of it earlier. If you do know the old and new values of a variable that has changed then, as long as the old value was not 0, you could just:
new_answer = (old_answer * new_var) / old_var;
In real math this would work, but in computer math this might lose too much precision for your purposes.
In the first place, micro-optimizations like this are almost never worthwhile. Time your program to see if there is a performance problem, profile to see where the problem is, and test after making changes to see if you've made things better or worse.
In the second place, multiplications of numbers are generally fast in modern CPUs, while branches can be more expensive.
In the third place, the way you're setting it up, if Var1 changes, you'll need to recalculate savedVar1Var2Var3, savedVar1Var2Var4, savedVar1Var3Var4, and the whole product. Obviously, you're better off just recalculating the total.
Yes, it is possible.
For scalars there will probably be no benefit. For largish matrix math, you could compute and store Var1*Var2 and Var3*Var4; your result is the product of these 2 things. Now when one variable changes you only need to update 2 products instead of 3: update whichever of the 2 stored products it belongs to, and then update the result.
There you have it, 2 multiplications instead of 3 with each update. This will only benefit you if the common case really is for only one of them to update, but if that's true it should help a lot.
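A minimal C illustration of that counting (the Var names come from the question; the values and the helper names p12/p34 are made up):
#include <stdio.h>

int main(void)
{
    double Var1 = 2, Var2 = 3, Var3 = 5, Var4 = 7;
    double p12 = Var1 * Var2;          /* cached half-products */
    double p34 = Var3 * Var4;
    double result = p12 * p34;         /* 2*3*5*7 = 210 */

    Var2 = 4;                          /* Var2 changed ...                          */
    p12 = Var1 * Var2;                 /* ... refresh its half (1st multiplication) */
    result = p12 * p34;                /* ... and the result (2nd multiplication)   */
    printf("%g\n", result);            /* 2*4*5*7 = 280 */
    return 0;
}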
I don't think you save any time. Every time one of the N variables changes, you need to calculate (N - 1) additional products, right? Say you have A, B, C, and D. A changes, and you have saved the product of B, C, and D, but now you must recalculate your cached ABC, ABD, and ACD products. You are, in fact, doing additional work. ABCD is three multiply operations, while ABCD, ABC, ACD, and ABD work out to SEVEN.
The answer depends on how often the values change. With your example, calculating savedVar2Var3Var4 costs you two multiplications, with one additional multiplication each time Var1 changes (or you otherwise need to calculate the total). So: how many times do Var2, Var3, Var4 change, compared to Var1?
If Var1 changes more than about 3 times as often as the others, it should be worth recalculating savedVar2Var3Var4 as needed.
I don't think the gain is worth the effort, unless your "multiply" operation involves heavy calculations (matrices?).
edit: I've added an example that shows you... it's not worth it :)
template <typename T>   // generic T, written as a C++ function template so the example compiles
T multiply(T v1, T v2, T v3, T v4)
{
    static T v1xv2 = v1*v2;
    static T v1xv3 = v1*v3;
    static T v1xv4 = v1*v4;
    static T v2xv3 = v2*v3;
    static T v2xv4 = v2*v4;
    static T v3xv4 = v3*v4;
    static T v1save = v1;
    static T v2save = v2;
    static T v3save = v3;
    static T v4save = v4;
    if (v1save != v1)
    {
        v1save = v1;
        v1xv2 = v1*v2;
        v1xv3 = v1*v3;
        v1xv4 = v1*v4;
    }
    if (v2save != v2)
    {
        v2save = v2;
        v1xv2 = v1*v2;
        v2xv3 = v2*v3;
        v2xv4 = v2*v4;
    }
    if (v3save != v3)
    {
        v3save = v3;
        v1xv3 = v1*v3;
        v2xv3 = v2*v3;
        v3xv4 = v3*v4;
    }
    if (v4save != v4)
    {
        v4save = v4;
        v1xv4 = v1*v4;
        v2xv4 = v2*v4;
        v3xv4 = v3*v4;
    }
    return v1xv2*v3xv4;
}
Suppose you had a sum of many many variables, like Sum = A+B+C+D+.... and one of them changed, like C. If C' is the old value of C, then you could just say Sum += (C-C');
Same idea for a product: Product *= C/C';. (For matrices, they would have to be invertible.)
Of course, you might get creeping roundoff errors, so once in a while you could recalculate the whole thing.
I would try something like this:
var12 = var1*var2;
var34 = var3*var4;
result = var12*var34;
while (values available) {
    get new values;
    if (var1 changes || var2 changes) var12 = var1*var2;
    if (var3 changes || var4 changes) var34 = var3*var4;
    result = var12*var34;
}
There is no overhead (only change checking), and it can be used for matrices (it doesn't rely on commutativity, only associativity).
OK, continuing with my solving of the problems on Project Euler, I am still beginning to learn Haskell and programming in general.
I need to find the lowest number divisible by all of the numbers 1 to 20.
So I started with:
divides :: Int -> Int -> Bool
divides d n = rem n d == 0
divise n a | divides n a == 0 = n : divise n (a+1)
           | otherwise        = n : divise (n+1) a
What I want to happen is for it to keep moving up through values of n until one is magically evenly divisible by [1..20].
But this doesn't work, and now I am stuck as to where to go from here. I assume I need to use:
[1..20]
for the value of a but I don't know how to implement this.
Well, having recently solved the Euler problem myself, I'm tempted to just post my answer for that, but for now I'll abstain. :)
Right now, the flow of your program is a bit chaotic, to sound like a feng-shui person. Basically, you're trying to do one thing: increment n until 1..20 divides n. But really, you should view it as two steps.
Currently, your code is saying: "if a doesn't divide n, increment n. If a does divide n, increment a". But that's not what you want it to say.
You want (I think) to say "increment n, and see if it is divisible [Edit: by ALL the numbers 1..20]. If not, increment n again, and test again, etc." What you want to do, then, is have a sub-test: one that takes a number, and tests it against 1..20, and then returns a result.
Hope this helps! Have fun with the Euler problems!
Edit: I really, really should remember all the words.
Well, as an algorithm, this kinda sucks.
Sorry.
But you're getting misled by the list. I think what you're trying to do is iterate through all the available numbers, until you find one that everything in [1..20] divides. In your implementation above, if a doesn't divide n, you never go back and check b < a for n+1.
An easy implementation of your algorithm would be:
lcmAll :: [Int] -> Maybe Int
lcmAll nums = find (\n -> all (\d -> divides d n) nums) [1..]
(using Data.List.find and Data.List.all).
A better algorithm would be to find the lcm's pairwise, using foldl:
lcmAll :: [Int] -> Int
lcmAll = foldl lcmPair 1
lcmPair :: Int -> Int -> Int
lcmPair a b = lcmPair' a b
  where lcmPair' a' b' | a' < b'   = lcmPair' (a + a') b'
                       | a' > b'   = lcmPair' a' (b + b')
                       | otherwise = a'
Of course, you could use the lcm function from the Prelude instead of lcmPair.
This works because the least common multiple of any set of numbers is the same as the least common multiple of [the least common multiple of two of those numbers] and [the rest of the numbers].
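For example, using the lcm from the Prelude as the fold function, a quick GHCi check gives the 1..20 case directly:
Prelude> foldl lcm 1 [1..20]
232792560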
The function 'divise' never stops; it doesn't have a base case. Both branches call divise, so they are both recursive. You're also using the function divides as if it returned an int (like rem does), but it returns a Bool.
I see you have already started to divide the problem into parts; this is usually good for understanding and for making it easier to read.
Another thing that can help is to write the types of the functions. If your function works but you're not sure of its type, try :i myFunction in ghci. Here I've fixed the type error involving divides (although other errors remain):
*Main> :i divise
divise :: Int -> Int -> [Int] -- Defined at divise.hs:4:0-5
Did you want it to return a list?
Leaving you to solve the problem, try to further divide the problem into parts. Here's a naive way to do it:
A function that checks if one number is evenly divisible by another. This is your divides function.
A function that checks if a number is dividable by all numbers [1..20].
A function that iterates over all numbers and tries them on the function in #2.
Here's my quick, more Haskell-y approach, using your algorithm:
Prelude> let divisibleByUpTo i n = all (\x -> (i `rem` x) == 0) [1..n]
Prelude> take 1 $ filter (\x -> snd x == True) $ map (\x -> (x, divisibleByUpTo x 4)) [1..]
[(12,True)]
divisibleByUpTo returns True if the number i is divisible by every integer from 1 up to and including n, similar to your divides function.
The next line probably looks pretty difficult to a Haskell newcomer, so I'll explain it bit-by-bit:
Starting from the right, we have map (\x -> (x, divisibleByUpTo x 4)) [1..] which says for every number x from 1 upwards, do divisibleByUpTo x 4 and return it in a tuple of (x, divisibleByUpTo x 4). I'm using a tuple so we know which number exactly divides.
Left of that, we have filter (\x -> snd x == True); meaning only return elements where the second item of the tuple is True.
And at the leftmost of the statement, we take 1 because otherwise we'd have an infinite list of results.
This will take quite a long time for a value of 20. Like others said, you need a better algorithm -- consider how for a value of 4, even though our "input" numbers were 1-2-3-4, ultimately the answer was only the product of 3*4. Think about why 1 and 2 were "dropped" from the equation.
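Concretely, lcm(1,2,3,4) = 2^2 * 3 = 12: the factor 2 is already absorbed by 4 = 2^2, and 1 divides everything, which is why only 3 and 4 survive in the product. The same kind of collapsing happens for 1..20, which is why combining the numbers (for example pairwise with lcm) beats testing every candidate by trial division.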