How can I get pairs of consecutive elements in Prolog? - arrays

I have to complete a problem in which one task is to get the consecutive pairs of an array.
For example, if the array is [1, 2, 3], the result should be
X=1 Y=2 and X=2 Y=3
Up to that point my code works fine, but afterwards it doesn't output 'no'; instead, it gets stuck in an infinite loop. The hard part is that I have to do this without recursion.
My code so far is the following:
part_of(X, Y, List):-
    length(X, 1),
    append(_, X, Part),
    length(Y, 1),
    append(Part, Y, Part2),
    append(Part2, _, List).
I'm not familiar with logic programming. Everything that goes through my mind has to do with returning values, which, of course, is not how it works here.

X and Y are consecutive elements of some List if List is of the following form: some Prefix list, then X, then Y, then some Rest list.
This is kind of what you were trying to express, but you got confused on some details. First, the one-element list containing X is written as [X]. This is probably what you were trying to say with length(X, 1), but that wouldn't work as written.
Second, you got confused with your uses of append/3. The way you are trying to use it, the last argument is the whole list that you are trying to decompose. So in this scenario, the third argument should always be a list that is already known -- either because it is passed in as an argument, or because it was computed by an earlier goal. In your code, the first append/3 goal is append(_, X, Part), where both _ and Part are unknown. Under these circumstances there is an infinite number of solutions, which causes the nontermination you see:
?- append(_, X, Part).
X = Part ;
Part = [_G2897|X] ;
Part = [_G2897, _G2903|X] ;
Part = [_G2897, _G2903, _G2909|X] ;
Part = [_G2897, _G2903, _G2909, _G2915|X] ;
Part = [_G2897, _G2903, _G2909, _G2915, _G2921|X] .
In short, you have the right idea, but the order of binding things isn't quite right. The following works:
?- List = [1, 2, 3], append(Prefix, Part1, List), append([X], Part2, Part1), append([Y], Rest, Part2).
List = Part1, Part1 = [1, 2, 3],
Prefix = [],
X = 1,
Part2 = [2, 3],
Y = 2,
Rest = [3] ;
List = [1, 2, 3],
Prefix = [1],
Part1 = [2, 3],
X = 2,
Part2 = [3],
Y = 3,
Rest = [] ;
false.
This first splits the known list List = [1, 2, 3] into its parts, of which there are only finitely many. This binds Part1 to a finite list. Then it splits the finite list Part1, binding Part2 to a finite list, and finally it splits that. There is no room for nontermination if the initial List is a finite list.
All that said, there is an easier way of expressing "some list, then two adjacent elements X and Y, then some other list":
?- append(_Prefix, [X, Y | _Rest], [1, 2, 3]).
_Prefix = [],
X = 1,
Y = 2,
_Rest = [3] ;
_Prefix = [1],
X = 2,
Y = 3,
_Rest = [] ;
false.

Here is how I would do this:
pairs([X,Y|_],X,Y).
pairs([_,Y|T],A,B) :- pairs([Y|T],A,B).
The first clause succeeds when it can take a pair of elements from the start of the list.
The second clause succeeds when it can strip the first element from the list and recursively call pairs/3 to get a subsequent pair.
Here's the output of my run:
?- pairs([a,b,c],X,Y).
X = a,
Y = b ;
X = b,
Y = c ;
false.

Here is my very simple solution.
show_elements([X|[]], _, _).
show_elements([X, Y|Q], A, B):- A is X, B is Y; show_elements([Y|Q], A,B).
I don't know if I correctly understood the task.
As you can see, I used recursion to solve the problem. Make sure you correctly understand recursion. It's used a lot in Prolog.
Also check the concept of unification. It is necessary to start writing programs in Prolog.
There is a lot of material online, and you can check this very useful guide: Learn Prolog Now!

Related

Find common elements in subarrays of arrays

I have two numpy arrays of shapes arr1=(~140000, 3) and arr2=(~450000, 10). The first 3 elements of each row, for both arrays, are coordinates (z, y, x). I want to find the rows of arr2 that have the same coordinates as arr1 (which can be considered a subgroup of arr2).
For example:
arr1 = [[1,2,3],[1,2,5],[1,7,8],[5,6,7]]
arr2 = [[1,2,3,7,66,4,3,44,8,9],[1,3,9,6,7,8,3,4,5,2],[1,5,8,68,7,8,13,4,53,2],[5,6,7,6,67,8,63,4,5,20], ...]
I want to find common coordinates (same first 3 elements):
list_arr = [[1,2,3,7,66,4,3,44,8,9], [5,6,7,6,67,8,63,4,5,20], ...]
At the moment I'm doing this double loop, which is extremely slow:
list_arr = []
for i in arr1:
    for j in arr2:
        if i[0]==j[0] and i[1]==j[1] and i[2]==j[2]:
            list_arr.append(j)
I also tried to create (after the 1st loop) a subarray of arr2, filtering it on the value of i[0] (arr2_filt = [el for el in arr2 if el[0]==i[0]]). This speeds up the operation a bit, but it still remains really slow.
Can you help me with this?
Approach #1
Here's a vectorized one with views -
import numpy as np

# https://stackoverflow.com/a/45313353/ #Divakar
def view1D(a, b): # a, b are arrays
    a = np.ascontiguousarray(a)
    b = np.ascontiguousarray(b)
    void_dt = np.dtype((np.void, a.dtype.itemsize * a.shape[1]))
    return a.view(void_dt).ravel(), b.view(void_dt).ravel()

a, b = view1D(arr1, arr2[:,:3])
out = arr2[np.in1d(b, a)]
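For instance, on the small sample arrays from the question (this assumes both arr1 and arr2 are integer NumPy arrays of the same dtype, which the void-view trick requires), reusing view1D from above picks out the two matching rows:
import numpy as np

arr1 = np.array([[1,2,3],[1,2,5],[1,7,8],[5,6,7]])
arr2 = np.array([[1,2,3,7,66,4,3,44,8,9],
                 [1,3,9,6,7,8,3,4,5,2],
                 [1,5,8,68,7,8,13,4,53,2],
                 [5,6,7,6,67,8,63,4,5,20]])

a, b = view1D(arr1, arr2[:,:3])   # one void element per coordinate row
out = arr2[np.in1d(b, a)]         # rows [1,2,3,...] and [5,6,7,...], as in list_arr above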
Approach #2
Another with dimensionality-reduction for ints -
d = np.maximum(arr2[:,:3].max(0),arr1.max(0))
s = np.r_[1,d[:-1].cumprod()]
a,b = arr1.dot(s),arr2[:,:3].dot(s)
out = arr2[np.in1d(b,a)]
Improvement #1
We could use np.searchsorted to replace np.in1d for both of the approaches listed earlier -
unq_a = np.unique(a)
idx = np.searchsorted(unq_a,b)
idx[idx==len(unq_a)] = 0
out = arr2[unq_a[idx] == b]
Improvement #2
For the last improvement on using np.searchsorted that also uses np.unique, we could use argsort instead -
sidx = a.argsort()
idx = np.searchsorted(a,b,sorter=sidx)
idx[idx==len(a)] = 0
out = arr2[a[sidx[idx]]==b]
You can do it with the help of a set:
arr = np.array([[1,2,3],[4,5,6],[7,8,9]])
arr2 = np.array([[7,8,9,11,14,34],[23,12,11,10,12,13],[1,2,3,4,5,6]])
# create array from arr2 with only first 3 columns
temp = [i[:3] for i in arr2]
aset = set([tuple(x) for x in arr])
bset = set([tuple(x) for x in temp])
np.array([x for x in aset & bset])
Output
array([[7, 8, 9],
       [1, 2, 3]])
Edit
Use list comprehension
l = [list(i) for i in arr2 if i[:3] in arr]
print(l)
Output:
[[7, 8, 9, 11, 14, 34], [1, 2, 3, 4, 5, 6]]
For integers Divakar already gave an excellent answer. If you want to compare floats you have to consider e.g. the following:
>>> 1.+1e-15==1.
False
>>> 1.+1e-16==1.
True
If this behaviour could lead to problems in your code, I would recommend performing a nearest neighbour search and checking whether the distances are within a specified threshold.
import numpy as np
from scipy import spatial

def get_indices_of_nearest_neighbours(arr1, arr2):
    tree = spatial.cKDTree(arr2[:,0:3])
    # You can check here if the distance is small enough and otherwise raise an error
    dist, ind = tree.query(arr1, k=1)
    return ind
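A hedged usage sketch of that idea (the tolerance tol and the mask name close_enough are my own placeholders, not part of the answer): keep only the arr2 rows whose nearest arr1 coordinate lies within tol.
from scipy import spatial

tol = 1e-8                             # assumed tolerance; pick one meaningful for your coordinates
tree = spatial.cKDTree(arr2[:, 0:3])   # index the arr2 coordinates
dist, ind = tree.query(arr1, k=1)      # nearest arr2 row for each arr1 row
close_enough = dist <= tol
list_arr = arr2[ind[close_enough]]     # arr2 rows matching some arr1 row within tol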

Swift For loop Enumeration in Sort differs

I'm trying to do a manual sort on the array below.
The issue here is that the result varies when reading the item from the for-loop enumeration (noted as // (2)) versus reading it via a subscript (noted as // (1)). It could be a minor issue I'm overlooking. Appreciate your time.
var mySortArray : Array<Int> = []
mySortArray = [1,5,3,3,21,11,2]
for (itemX, X) in mySortArray.enumerated() {
    for (itemY, Y) in mySortArray.enumerated() {
        // if mySortArray[itemX] < mySortArray[itemY] // (1)
        if X < Y // (2)
        {
            // Swap the position of item in the array
            mySortArray.swapAt(itemX, itemY)
        }
    }
}
print(mySortArray)
// Prints [1, 2, 3, 3, 5, 11, 21] ( for condition // (1))
// Prints [2, 1, 3, 5, 11, 3, 21] ( for condition // (2))
mySortArray = [1,5,3,3,21,11,2]
print("Actual Sort Order : \(mySortArray.sorted())")
// Prints Actual Sort Order : [1, 2, 3, 3, 5, 11, 21]
The problem here is that the function .enumerated() returns a new sequence and iterates that. Think of it as a new array.
So, you are working with 3 different arrays here.
You have an unsorted array that you want to fix. Let's call this w (the "working array"), and then you have array x and array y.
So, w is [1,5,3,3,21,11,2], x and y are effectively the same as w at the beginning.
Now you get your first two values that need to swap...
valueX is at index 1 of x (5). valueY is at index 2 of y (3).
And you swap them... in w.
So now w is [1,3,5,3,21,11,2] but x and y are unchanged.
So now your indexes are being thrown off. You are comparing items in x with items in y and then swapping them in w, which is completely different.
You need to work with one array the whole time.
Of course... there is also the issue that your function is currently very slow: O(n^2). There are much more efficient ways of sorting.
If you are doing this as an exercise in learning how to write sort algorithms then keep going. If not you should really be using the .sort() function.
Really what you want to be doing is not using .enumerated() at all. Just use ints to get (and swap) values in w.
i.e. something like
for indexX in 0..<w.count {
    for indexY in indexX..<w.count {
        // do some comparison stuff...
        if w[indexX] > w[indexY] {
            // ...and do some swapping stuff.
            w.swapAt(indexX, indexY)
        }
    }
}

What's the cleanest way to construct a Ruby array using a while loop?

Ruby has lots of nice ways of iterating and directly returning that result. These mostly involve array methods. For example:
def ten_times_tables
  (1..5).map { |i| i * 10 }
end

ten_times_tables # => [10, 20, 30, 40, 50]
However, I sometimes want to iterate using while and directly return the resulting array. For example, the contents of the array may depend on the expected final value or some accumulator, or even on conditions outside of our control.
A (contrived) example might look like:
def fibonacci_up_to(max_number)
  sequence = [1, 1]
  while sequence.last < max_number
    sequence << sequence[-2..-1].reduce(:+)
  end
  sequence
end

fibonacci_up_to(5) # => [1, 1, 2, 3, 5]
To me, this sort of approach feels quite "un-Ruby". The fact that I construct, name, and later return an array feels like an anti-pattern. So far, the best I can come up with is using tap, but it still feels quite icky (and quite nested):
def fibonacci_up_to(max_number)
  [1, 1].tap do |sequence|
    while sequence.last < max_number
      sequence << sequence[-2..-1].reduce(:+)
    end
  end
end
Does anyone else have any cleverer solutions to this sort of problem?
Something you might want to look into for situations like this (though maybe your contrived example fits this a lot better than your actual use case) is creating an Enumerator. Using the fibonacci example from the docs for Enumerator.new, your contrived example becomes:
fib = Enumerator.new do |y|
  a = b = 1
  loop do
    y << a
    a, b = b, a + b
  end
end
and then call it:
p fib.take_while { |elem| elem <= 5 }
#=> [1, 1, 2, 3, 5]
So you create an enumerator that yields all your values, and once you have that, you can iterate through it and collect the values you want for your array in any of the usual Ruby-ish ways.
Similar to Simple Lime's Enumerator solution, you can write a method that wraps itself in an Enumerator:
def fibonacci_up_to(max_number)
  return enum_for(__callee__, max_number) unless block_given?

  a = b = 1
  while a <= max_number
    yield a
    a, b = b, a + b
  end
end

fibonacci_up_to(5).to_a # => [1, 1, 2, 3, 5]
This achieves the same result as returning an Enumerator instance from a method, but it looks a bit nicer and you can use the yield keyword instead of a yielder block variable. It also lets you do neat things like:
fibonacci_up_to(5) do |i|
  # ..
end

Why does deepcopy change values of numpy array?

I am having a problem in which values in a Numpy array change after copying it with copy.deepcopy or numpy.copy; in fact, I get different values if I just print the array before copying it.
I am using Python 3.5, Numpy 1.11.1, Scipy 0.18.0
My starting array is contained in a list of tuples; each tuple is a pair: a float (a time point) and a numpy array (the solution of an ODE at that time point), e.g.:
[(0.0, array([ 0., ... 0.])), ...
(3.0, array([ 0., ... 0.]))]
In this case, I want the array for the last time point.
When I call the following:
tandy = c1.IntegrateColony(3)
ylast = copy.deepcopy(tandy[-1][1])
print(ylast)
I get something that makes sense for the system I'm trying to simulate:
[7.14923891e-07 7.14923891e-07 ... 8.26478813e-01 8.85589634e-01]
However, with the following:
tandy = c1.IntegrateColony(3)
print(tandy[-1][1])
ylast = copy.deepcopy(tandy[-1][1])
print(ylast)
I get all zeros:
[0.00000000e+00 0.00000000e+00 ... 0.00000000e+00 0.00000000e+00]
[ 0. 0. ... 0. 0.]
I should add that with larger systems and different parameters, displaying tandy[k][1] (either with print() or just by calling it at the command line) shows values that are all non-zero but very close to zero, i.e. <1e-70, which is still not sensible for the system.
With:
tandy = c1.IntegrateColony(3)
ylast = np.copy(tandy[-1][1])
print(ylast)
I get sensible output again:
[7.14923891e-07 7.14923891e-07 ... 8.26478813e-01 8.85589634e-01]
The function that generates 'tandy' is the following (edited for clarity), which uses scipy.integrate.ode, and the set_solout method to get the solution at intermediate time points:
def IntegrateColony(self, tmax=1):
    # I edited out initialization of dCdt & first_step for clarity.
    y = ode(dCdt)
    y.set_integrator('dopri5', first_step=dt0, nsteps=2000)

    sol = []
    def solout(tcurrent, ytcurrent):
        sol.append((tcurrent, ytcurrent))
    y.set_solout(solout)

    y.set_initial_value(y=C0, t=0)
    yfinal = y.integrate(tmax)

    return sol
Although I could get the last time point by returning yfinal, I'd like to get the whole time course once I figure out why it's behaving the way it is.
Thanks for your suggestions!
Mickey
Edit:
If I print all of sol (print(tandy) or print(IntegrateColony...)), it comes out as shown above (with the values in the arrays as 0), i.e.:
[(0.0, array([ 0., ... 0.])), ...
(3.0, array([ 0., ... 0.]))]
However, if I copy it with (y = copy.deepcopy(tandy); print(y)), the arrays take on values between 1e-7 and 1e+1.
If I do print(tandy[-1][1]) twice in a row, they're filled with zeros, but the format changes (from 0.0000 to 0.).
One other feature I noticed while following the suggestions in LutzL's and hpaulj's comments: if I run tandy = c1.IntegrateColony(3) in the console (running Spyder), the arrays are filled with zeros in the variable explorer. However, if I run the following in the console:
tandy = c1.IntegrateColony(3); ylast=copy.deepcopy(tandy)
Both the arrays in tandy and in ylast are filled with values in the range I would expect, and print(tandy[-1][1]) now gives:
[7.14923891e-07 7.14923891e-07 ... 8.26478813e-01 8.85589634e-01]
Even if I find a solution that stops this behavior, I'd appreciate anyone's insight about what's going on so I don't make the same mistakes again.
Thanks!
Edit:
Here's a simple case that gives this behavior:
import numpy as np
from scipy.integrate import ode

def testODEint(tmax=1):
    C0 = np.ones((3,))
    # C0 = 1 # This seems to behave the same

    def dCdt_simpleinputs(t, C):
        return C

    y = ode(dCdt_simpleinputs)
    y.set_integrator('dopri5')

    sol = []
    def solout(tcurrent, ytcurrent):
        sol.append((tcurrent, ytcurrent))        # Behaves oddly
        # sol.append((tcurrent, ytcurrent.copy())) # LutzL's idea: Works
    y.set_solout(solout)

    y.set_initial_value(y=C0, t=0)
    yfinal = y.integrate(tmax)
    return sol
tandy = testODEint(1)
ylast = np.copy(tandy[-1][1])
print(ylast) # Expect same values as tandy[-1][1] below
tandy = testODEint(1)
tandy[-1][1]
print(tandy[-1][1]) # Expect same values as ylast above
When I run this, I get the following output for ylast and tandy[-1][1]:
[ 2.71828196 2.71828196 2.71828196]
[ 0.00000000e+00 0.00000000e+00 0.00000000e+00]
The code I was working on when I ran into this problem is an embarrassing mess, but if you want to take a look, an old version is here: https://github.com/mvondassow/BryozoanModel2
The details of why this is happening are tied to how ytcurrent is handled in integrate. But there are various contexts in Python where all values of a list end up the same - contrary to expectations.
For example:
In [159]: x
Out[159]: [0, 1, 2]
In [160]: x=[]
In [161]: y=np.array([1,2,3])
In [162]: for i in range(3):
     ...:     y += i
     ...:     x.append(y)
In [163]: x
Out[163]: [array([4, 5, 6]), array([4, 5, 6]), array([4, 5, 6])]
All elements of x have the same value - because they all are pointers to the same y, and thus show its final value.
But if I copy y before appending it to the list, I see the changes:
In [164]: x=[]
In [165]: for i in range(3):
     ...:     y += i
     ...:     x.append(y.copy())
In [166]: x
Out[166]: [array([4, 5, 6]), array([5, 6, 7]), array([7, 8, 9])]
Now that does not explain why the print statement changes the values. But that whole solout callback mechanism is a bit obscure. I wonder if there are any warnings in scipy about pitfalls in defining such a callback?

Python looping: idiomatically comparing successive items in a list

I need to loop over a list of objects, comparing them like this: 0 vs. 1, 1 vs. 2, 2 vs. 3, etc. (I'm using pysvn to extract a list of diffs.) I wound up just looping over an index, but I keep wondering if there's some way to do it which is more closely idiomatic. It's Python; shouldn't I be using iterators in some clever way? Simply looping over the index seems pretty clear, but I wonder if there's a more expressive or concise way to do it.
for revindex in xrange(len(dm_revisions) - 1):
    summary = \
        svn.diff_summarize(svn_path,
                           revision1=dm_revisions[revindex],
                           revision2=dm_revisions[revindex+1])
This is called a sliding window. There's an example in the itertools documentation that does it. Here's the code:
from itertools import islice

def window(seq, n=2):
    "Returns a sliding window (of width n) over data from the iterable"
    "   s -> (s0,s1,...s[n-1]), (s1,s2,...,sn), ...                   "
    it = iter(seq)
    result = tuple(islice(it, n))
    if len(result) == n:
        yield result
    for elem in it:
        result = result[1:] + (elem,)
        yield result
With that, you can say this:
for r1, r2 in window(dm_revisions):
    summary = svn.diff_summarize(svn_path, revision1=r1, revision2=r2)
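As a quick sanity check, the recipe yields adjacent pairs from a literal list:
>>> list(window([1, 2, 3, 4]))
[(1, 2), (2, 3), (3, 4)]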
Of course you only care about the case where n=2, so you can get away with something much simpler:
def adjacent_pairs(seq):
    it = iter(seq)
    a = it.next()
    for b in it:
        yield a, b
        a = b

for r1, r2 in adjacent_pairs(dm_revisions):
    summary = svn.diff_summarize(svn_path, revision1=r1, revision2=r2)
I'd probably do:
import itertools

for rev1, rev2 in zip(dm_revisions, itertools.islice(dm_revisions, 1, None)):
    summary = svn.diff_summarize(svn_path, revision1=rev1, revision2=rev2)
Something similarly cleverer and not touching the iterators themselves could probably be done using
So many complex solutions posted, why not keep it simple?
myList = range(5)

for idx, item1 in enumerate(myList[:-1]):
    item2 = myList[idx + 1]
    print item1, item2
>>>
0 1
1 2
2 3
3 4
Store the previous value in a variable. Initialize the variable with a value you're not likely to find in the sequence you're handling, so you can know if you're at the first element. Compare the old value to the current value.
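A minimal sketch of that approach, reusing dm_revisions and svn from the question (the sentinel object and the name prev are my own additions, not part of the answer):
_SENTINEL = object()  # a marker value that cannot appear in dm_revisions

prev = _SENTINEL
for rev in dm_revisions:
    if prev is not _SENTINEL:  # skip the comparison for the very first element
        summary = svn.diff_summarize(svn_path, revision1=prev, revision2=rev)
    prev = rev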
Reduce can be used for this purpose, if you take care to leave a copy of the current item in the result of the reducing function.
def diff_summarize(revisionList, nextRevision):
    '''helper function (adaptor) for using svn.diff_summarize with reduce'''
    if revisionList:
        # remove the previously tacked on item
        r1 = revisionList.pop()
        revisionList.append(svn.diff_summarize(
            svn_path, revision1=r1, revision2=nextRevision))
    # tack the current item onto the end of the list for use in next iteration
    revisionList.append(nextRevision)
    return revisionList

summaries = reduce(diff_summarize, dm_revisions, [])
EDIT: Yes, but nobody said the result of the function in reduce has to be a scalar. I changed my example to use a list. Basically, the last element is always the previous revision (except on the first pass), with all preceding elements being the results of the svn.diff_summarize call. This way, you get a list of results as your final output...
EDIT2: Yep, the code really was broken. I have here a workable dummy:
>>> def compare(lst, nxt):
...     if lst:
...         prev = lst.pop()
...         lst.append((prev, nxt))
...     lst.append(nxt)
...     return lst
...
>>> reduce(compare, "abcdefg", [])
[('a', 'b'), ('b', 'c'), ('c', 'd'), ('d', 'e'), ('e', 'f'), ('f', 'g'), 'g']
This was tested in the shell, as you can see. You will want to replace (prev, nxt) in the lst.append call of compare to actually append the summary of the call to svn.diff_summarize.
>>> help(reduce)
Help on built-in function reduce in module __builtin__:

reduce(...)
    reduce(function, sequence[, initial]) -> value

    Apply a function of two arguments cumulatively to the items of a sequence,
    from left to right, so as to reduce the sequence to a single value.
    For example, reduce(lambda x, y: x+y, [1, 2, 3, 4, 5]) calculates
    ((((1+2)+3)+4)+5). If initial is present, it is placed before the items
    of the sequence in the calculation, and serves as a default when the
    sequence is empty.
