Haskell - Sum of the differences between each element in each matrix - loops

I am very new to Haskell (and functional programming in general) and I am trying to write a function called
"profileDistance m1 m2" that takes two matrices as parameters and needs to calculate the sum of the differences between each element in each matrix... I might have not explained that very well. Let me show it instead.
The matrices are of the form [[(Char,Int)]],
where each matrix might look something like this:
m1 = [[('A',1),('A',2)],
      [('B',3),('B',4)],
      [('C',5),('C',6)]]
m2 = [[('A',7),('A',8)],
      [('B',9),('B',10)],
      [('C',11),('C',12)]]
(Note: I wrote the numbers in order in this example, but they can be ANY numbers in any order. The chars in each row of each matrix will, however, match as shown in the example.)
The result (in the case above) would look something like this (pseudocode):
result = ((snd m1['A'][0])-(snd m2['A'][0]))+((snd m1['A'][1])-(snd m2['A'][1]))+((snd m1['B'][0])-(snd m2['B'][0]))+((snd m1['B'][1])-(snd m2['B'][1]))+((snd m1['C'][0])-(snd m2['C'][0]))+((snd m1['C'][1])-(snd m2['C'][1]))
This would be easy to do in any non-functional language that has for-loops, but I have no idea how to do this in Haskell. I have a feeling that functions like map, fold or sum would help me here (admittedly I am not 100% sure how fold works). I hope there is an easy way to do this... please help.

Here's a proposal:
solution m1 m2 = sum $ zipWith diffSnd flatM1 flatM2
  where
    diffSnd t1 t2 = snd t1 - snd t2
    flatM1 = concat m1
    flatM2 = concat m2
I wrote it so that it's easier to understand the building blocks.
The basic idea is to iterate simultaneously over our two lists of pairs using zipWith. Here is its type:
zipWith :: (a -> b -> c) -> [a] -> [b] -> [c]
It means it takes a function of type a -> b -> c, a list of a's and a list of b's, and it returns a list of c's. In other words, zipWith takes care of the iteration; you just have to specify what you want to do with every pair of items the iteration yields, which in your case will be one pair from the first matrix and the corresponding pair from the second.
The function passed to zipWith takes the snd element from each pair and computes the difference. Looking back at zipWith's signature, you can deduce it will return a list of numbers. So the last thing we need to do is sum them, using the function sum.
There's one last problem. We actually do not have two lists of pairs to pass to zipWith, but two matrices. We need to "flatten" them into lists, preserving the order of the elements. That's exactly what concat does, hence the calls to that function in the definitions of flatM1 and flatM2.
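Putting it all together as a complete little program, using the matrices from the question (every pairwise difference here is -6, six pairs in total, so the result is -36):

solution m1 m2 = sum $ zipWith diffSnd flatM1 flatM2
  where
    diffSnd t1 t2 = snd t1 - snd t2
    flatM1 = concat m1
    flatM2 = concat m2

m1, m2 :: [[(Char, Int)]]
m1 = [[('A',1),('A',2)], [('B',3),('B',4)], [('C',5),('C',6)]]
m2 = [[('A',7),('A',8)], [('B',9),('B',10)], [('C',11),('C',12)]]

main :: IO ()
main = print (solution m1 m2)   -- prints -36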
I suggest you look into the implementation of every function I mentioned to have a better grasp of how iteration is expressed by means of recursion. HTH

Julia: How to efficiently sort subarrays of 2 large arrays in parallel?

I have large 1D arrays a and b, and an array of pointers I that separates them into subarrays. My a and b barely fit into RAM and are of different dtypes (one contains UInt32s, the other Rational{Int64}s), so I don’t want to join them into a 2D array, to avoid changing dtypes.
For each i in I[2:end], I wish to sort the subarray a[I[i-1],I[i]-1] and apply the same permutation to the corresponding subarray b[I[i-1],I[i]-1]. My attempt at this is:
function sort!(a,b)
    p=sortperm(a);
    a[:], b[:] = a[p], b[p]
end
Threads.@threads for i in I[2:end]
    sort!( a[I[i-1], I[i]-1], b[I[i-1], I[i]-1] )
end
However, already on a small example, I see that sort! does not alter the original arrays when passed subarrays:
a, b = rand(1:10,10), rand(-1000:1000,10) .//1
sort!(a,b); println(a,"\n",b) # works like it should
a, b = rand(1:10,10), rand(-1000:1000,10) .//1
sort!(a[1:5],b[1:5]); println(a,"\n",b) # does nothing!!!
Any help on how to create such a sort! function (as efficient as possible) is welcome.
Background: I am dealing with data coming from sparse arrays:
using SparseArrays, Random
n=10^6; x=sprand(n,n,1000/n); # random matrix with 1000 entries per column on average
x = SparseMatrixCSC(n,n,x.colptr,x.rowval,rand(-99:99,nnz(x)).//1); # changing entries to rationals
U = randperm(n) # permutation of rows of matrix x
a, b, I = U[x.rowval], x.nzval, x.colptr;
Thus these a,b,I serve as good examples to my posted problem. What I am trying to do is sort the row indices (and corresponding matrix values) of entries in each column.
Note: I already asked this question on Julia discourse here, but received no replies nor comments. If I can improve on the quality of the question, don't hesitate to tell me.
The problem is that a[1:5] is not a view, it's just a copy. Instead, make a view, like this:
function sort!(a,b)
    p=sortperm(a);
    a[:], b[:] = a[p], b[p]
end
Threads.@threads for i in I[2:end]
    sort!(view(a, I[i-1]:I[i]-1), view(b, I[i-1]:I[i]-1))
end
This is what you are looking for.
P.S. The @view a[2:3] or @view(a[2:3]) forms, or the @views macro, can help make things more readable.
First of all, you shouldn't redefine Base.sort! like this. Now, sort! will shadow Base.sort! and you'll get errors if you call sort!(a).
Also, a[I[i-1], I[i]-1] and b[I[i-1], I[i]-1] are not slices, they are just single elements, so nothing should happen when you sort them, whether you use views or not. And sorting arrays in a moving-window way like this is not correct.
What you want to do here, since your vectors are huge, is call p = partialsortperm(a[i:end], i:i+block_size-1) repeatedly in a loop, choosing a block_size that fits into memory, and modify both a and b according to p, then continue to the remaining part of a and find next p and repeat until nothing remains in a to be sorted. I'll leave the implementation as an exercise for you, but you can come back if you get stuck on something.

How to convert vectors to arrays in ECLiPSe (CLP)? (or Prolog)

I have to solve Sudoku puzzles in the format of a vector containing 9 vectors (of length 9 each). Seeing as vectors are linked lists in Prolog, I figured the search would go faster if I transformed the puzzles into a 2D array format first.
Example puzzle:
puzzle(P) :- P =
    [[_,_,8,7,_,_,_,_,6],
     [4,_,_,_,_,9,_,_,_],
     [_,_,_,5,4,6,9,_,_],
     [_,_,_,_,_,3,_,5,_],
     [_,_,3,_,_,7,6,_,_],
     [_,_,_,_,_,_,_,8,9],
     [_,7,_,4,_,2,_,_,5],
     [8,_,_,9,_,5,_,2,3],
     [2,_,9,3,_,8,7,6,_]].
I'm using ECLiPSe CLP to implement a solver. The best I've come up with so far is to write a domain like this:
domain(P):-
    dim(P,[9,9]),
    P[1..9,1..9] :: 1..9.
and a converter for the puzzle (parameter P is the given puzzle and Sudoku is the newly defined grid with the 2D array). But I'm having trouble linking the values from the given initial puzzle to my 2D array.
convertVectorsToArray(Sudoku,P):-
    ( for(I,1,9),
      param(Sudoku,P)
    do
        ( for(J,1,9),
          param(Sudoku,P,I)
        do
            Sudoku[I,J] is P[I,J]
        )
    ).
Before this, I tried using array_list (http://eclipseclp.org/doc/bips/kernel/termmanip/array_list-2.html), but I kept getting type errors. How I did it before:
convertVectorsToArray(Sudoku,P):-
    ( for(I,1,9),
      param(Sudoku,P)
    do
        ( for(J,1,9),
          param(Sudoku,P,I)
        do
            A is Sudoku[I],
            array_list(A,P[I])
        )
    ).
When my Sudoku finally outputs the example puzzle P in the following format:
Sudoku = []([](_Var1, _Var2, 8, 7, ..., 6), [](4, ...), ...)
then I'll be happy.
update
I tried again with the array_list; it almost works with the following code:
convertVectorsToArray(Sudoku,P):-
    ( for(I,1,9),
      param(Sudoku,P)
    do
        X is Sudoku[I],
        Y is P[I],
        write(I),nl,
        write(X),nl,
        write(Y),nl,
        array_list(X, Y)
    ).
The writes are there to see what the vectors/arrays look like. For some reason, it stops at the second iteration (instead of running 9 times) and outputs the rest of the example puzzle as a vector of vectors. Only the first vector gets assigned correctly.
update2
While I'm sure the answer given by jschimpf is correct, I also figured out my own implementation:
convertVectorsToArray(Sudoku,[],_).
convertVectorsToArray(Sudoku,[Y|Rest],Count):-
    X is Sudoku[Count],
    array_list(X, Y),
    NewCount is Count + 1,
    convertVectorsToArray(Sudoku,Rest,NewCount).
Thanks for the added explanation on why it didn't work before though!
The easiest solution is to avoid the conversion altogether by writing your puzzle specification directly as a 2-D array. An ECLiPSe "array" is simply a structure with the functor '[]'/N, so you can write:
puzzle(P) :- P = [](
    [](_,_,8,7,_,_,_,_,6),
    [](4,_,_,_,_,9,_,_,_),
    [](_,_,_,5,4,6,9,_,_),
    [](_,_,_,_,_,3,_,5,_),
    [](_,_,3,_,_,7,6,_,_),
    [](_,_,_,_,_,_,_,8,9),
    [](_,7,_,4,_,2,_,_,5),
    [](8,_,_,9,_,5,_,2,3),
    [](2,_,9,3,_,8,7,6,_)).
You can then use this 2-D array directly as the container for your domain variables:
sudoku(P) :-
    puzzle(P),
    P[1..9,1..9] :: 1..9,
    ...
However, if you want to keep your list-of-lists puzzle specification, and convert that to an array-of-arrays format, you can use array_list/2. But since that only works for 1-D arrays, you have to convert the nesting levels individually:
listoflists_to_matrix(Xss, Xzz) :-
    % list of lists to list of arrays
    ( foreach(Xs,Xss), foreach(Xz,Xzs) do
        array_list(Xz, Xs)
    ),
    % list of arrays to array of arrays
    array_list(Xzz, Xzs).
As for the reason your own code didn't work: this is due to the subscript notation P[I], which requires P to be an array (you were using it on lists) and works only in contexts where an arithmetic expression is expected, e.g. the right-hand side of is/2, in arithmetic constraints, etc.

Labview: element-wise array multiplication operations

Does there exist a function similar to numpy's * operator for two arrays, which multiplies their elements element-wise and returns an array of the same type?
For example:
#Let's define:
a = [0,1,2,3]
b = [1,2,3,4]
d = [[1,2] , [3,4], [5,6]]
e = [3,4,5]
#I want:
a * 2 == [2*0, 1*2, 2*2, 2*3]
a * b == [0*1, 1*2, 2*3, 3*4]
d * e == [[1*3, 2*3], [3*4, 4*4], [5*5, 6*5]]
d * d == [[1*1, 2*2], [3*3, 4*4], [5*5, 6*6]]
Note that * is NOT regular matrix multiplication; it is element-wise multiplication.
My current best solution is to write some c code, which does this, and import a compiled dll.
There must exist a better solution.
EDIT:
Using LabVIEW 2011 - Needs to be fast.
The first two multiplications can be done by using the 'multiply' primitive. Make sure the arrays in the second case are of the same length.
For the third multiplication you can use a for loop (with auto-indexing). This is needed because you need to instruct LabVIEW what the basic index is.
The last multiplication can (again) be done using the multiply primitive.
My result is different (opposite) from the previous poster's. I generated a 4x1000 array of random numbers (magnitude 1000) which I multiplied by a 4x4 array of integers (1,2,3,4,...). I did this 100,000 times using the matrix multiplication VI and also using for loops to perform the operation on the arrays. I'm seeing times on the order of 0.328s for the matrix VIs and 0.051s for the for loops. Using a compiled DLL may be faster than LabVIEW, but this does not seem to be true for the built-in functions.
This is certainly not what I expected, but it is consistent over many cycles. The VI is standard execution thread. All data types are set before the timed operations - no coercion takes place in the loops. The operations are performed separately, staged by a flat sequence structure, as is the time measurement. Parallelism is turned off.

Growing arrays in Haskell

I have the following (imperative) algorithm that I want to implement in Haskell:
Given a sequence of pairs [(e0,s0), (e1,s1), (e2,s2),...,(en,sn)], where both the "e" and "s" parts are natural numbers (not necessarily distinct), at each time step one element of this sequence is randomly selected, say (ei,si), and based on the values of (ei,si) a new element is built and added to the sequence.
How can I implement this efficiently in Haskell? The need for random access would make it bad for lists, while the need for appending one element at a time would make it bad for arrays, as far as I know.
Thanks in advance.
I suggest using either Data.Set or Data.Sequence, depending on what you're needing it for. The latter in particular provides you with logarithmic index lookup (as opposed to linear for lists) and O(1) appending on either end.
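For example, one update step of your algorithm on a Seq might look like this (a minimal sketch: the combine rule for building the new pair and the choice of the random index are placeholders, since they are not specified in the question):

import Data.Sequence (Seq, (|>))
import qualified Data.Sequence as Seq

-- One step: look up the randomly chosen index i, build a new element
-- from the selected pair, and append it on the right.
step :: (Int -> Int -> (Int, Int)) -> Int -> Seq (Int, Int) -> Seq (Int, Int)
step combine i s =
  let (e, v) = Seq.index s i    -- logarithmic random access
  in  s |> combine e v          -- O(1) append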
"while the need for appending one element at a time would make it bad for arrays" Algorithmically, it seems like you want a dynamic array (aka vector, array list, etc.), which has amortized O(1) time to append an element. I don't know of a Haskell implementation of it off-hand, and it is not a very "functional" data structure, but it is definitely possible to implement it in Haskell in some kind of state monad.
If you know approximately how many elements you will need in total, then you can create an array of that size, which is "sparse" at first, and then fill in elements as needed.
Something like below can be used to represent this new array:
data MyArray = MyArray (Array Int Int) Int
(where the last Int represents how many elements of the array are in use)
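For instance, filling the next free slot could be sketched like this (put is just an illustrative helper; note that (//) on an immutable Array copies the whole array, so for frequent updates you would want the mutable arrays in Data.Array.ST or Data.Array.IO instead):

import Data.Array (Array, (//))

data MyArray = MyArray (Array Int Int) Int   -- pre-sized array + number of used slots

-- Write the new element into the first unused slot.
put :: Int -> MyArray -> MyArray
put x (MyArray arr used) = MyArray (arr // [(used, x)]) (used + 1)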
If you really need stop-and-start resizing, you could think about using the simple-rope package along with a StringLike instance for something like Vector. In particular, this might accommodate scenarios where you start out with a large array and are interested in relatively small additions.
That said, adding individual elements into the chunks of the rope may still induce a lot of copying. You will need to try out your specific case, but you should be prepared to use a mutable vector as you may not need pure intermediate results.
If you can build your array in one shot and just need the indexing behavior you describe, something like the following may suffice,
import Data.Array.IArray
test :: Array Int (Int,Int)
test = accumArray (flip const) (0,0) (0,20) [(i, f i) | i <- [0..19]]
  where f 0 = (1,0)
        f i = let (e,s) = test ! (i `div` 2) in (e*2,s+1)
Taking a note from ivanm, I think Sets are the way to go for this.
import Data.Set as Set
import System.Random (RandomGen, getStdGen)

startSet :: Set (Int, Int)
startSet = Set.fromList [(1,2), (3,4)] -- etc. Whatever the initial set is

-- grow the set by randomly producing "n" elements.
growSet :: (RandomGen g) => g -> Set (Int, Int) -> Int -> (Set (Int, Int), g)
growSet g s n | n <= 0    = (s, g)
              | otherwise = growSet g'' s' (n-1)
  where s' = Set.insert (x,y) s
        ((x,_), g')  = randElem s g
        ((_,y), g'') = randElem s g'

randElem :: (RandomGen g) => Set a -> g -> (a, g)
randElem = undefined

main = do
  g <- getStdGen
  let (grownSet,_) = growSet g startSet 2
  print $ grownSet -- or whatever you want to do with it
This assumes that randElem is an efficient, definable method for selecting a random element from a Set. (I asked this SO question regarding efficient implementations of such a method). One thing I realized upon writing up this implementation is that it may not suit your needs, since Sets cannot contain duplicate elements, and my algorithm has no way to give extra weight to pairings that appear multiple times in the list.

Help with a special case of permutations algorithm (not the usual)

I have always been interested in algorithms, sort, crypto, binary trees, data compression, memory operations, etc.
I read Mark Nelson's article about permutations in C++ with the STL function next_perm(), very interesting and useful. After that I wrote a class method to get the next permutation in Delphi, since that is the tool I presently use most. This function works in lexicographic order; I got the idea for the algorithm from an answer in another topic here on Stack Overflow, but now I have a big problem. I'm working with permutations with repeated elements in a vector, and there are lots of permutations that I don't need. For example, I have this first permutation of 7 elements in lexicographic order:
6667778 (6 = 3 times consecutively, 7 = 3 times consecutively)
For my work I consider valid only those permutations with at most 2 identical elements in a row, like this:
6676778 (6 = 2 times consecutively, 7 = 2 times consecutively)
In short, I need a function that returns only permutations that have at most N consecutive repetitions, according to the parameter received.
Does anyone know if there is some algorithm that already does this?
Sorry for any mistakes in the text, I still don't speak English very well.
Thank you so much,
Carlos
My approach is a recursive generator that doesn't follow branches that contain illegal sequences.
Here's the Python 3 code:
def perm_maxlen(elements, prefix = "", maxlen = 2):
    if not elements:
        yield prefix + elements
        return
    used = set()
    for i in range(len(elements)):
        element = elements[i]
        if element in used:
            # already searched this path
            continue
        used.add(element)
        suffix = prefix[-maxlen:] + element
        if len(suffix) > maxlen and len(set(suffix)) == 1:
            # would exceed maximum run length
            continue
        sub_elements = elements[:i] + elements[i+1:]
        for perm in perm_maxlen(sub_elements, prefix + element, maxlen):
            yield perm

for perm in perm_maxlen("6667778"):
    print(perm)
The implementation is written for readability, not speed, but the algorithm should be much faster than naively filtering all permutations.
For example, it runs the following in milliseconds, where the naive filtering solution would take millennia or something:
print(len(list(perm_maxlen("a"*100 + "b"*100, "", 1))))
So, in the homework-assistance kind of way, I can think of two approaches.
Work out all permutations that contain 3 or more consecutive repetitions (which you can do by treating the three-in-a-row as just one pseudo-digit and feeding it to a normal permutation generation algorithm). Make a lookup table of all of these. Now generate all permutations of your original string, and look them up in the lookup table before adding them to the result.
Use a recursive permutation-generating algorithm (select each possibility for the first digit in turn, recurse to generate permutations of the remaining digits), but in each recursion pass along the last two digits generated so far. Then, in the recursively called function, if the two values passed in are the same, don't allow the first digit to be the same as those.
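If it helps to see that second idea concretely, here is a short Haskell sketch of it (not the poster's Delphi, and generalised from "the last two digits" to an arbitrary maximum run length):

import Data.List (delete, nub)

-- Build permutations recursively, skipping any choice that would create
-- a run of more than maxRun equal elements. `recent` is the output
-- built so far, most recent element first.
permsMaxRun :: Eq a => Int -> [a] -> [[a]]
permsMaxRun maxRun = go []
  where
    go _      [] = [[]]
    go recent xs =
      [ x : rest
      | x <- nub xs                                -- don't branch on duplicate candidates
      , length (takeWhile (== x) recent) < maxRun  -- would this extend the run too far?
      , rest <- go (x : recent) (delete x xs)
      ]

For example, permsMaxRun 2 "6667778" yields only the permutations with at most two equal digits in a row.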
Why not just make a wrapper around the normal permutation function that skips values that have more than N consecutive repetitions? Something like:
(pseudocode)
function custom_perm(int max_rep)
    do
        p := next_perm()
    while count_max_reps(p) > max_rep
    return p
Krusty, I'm already doing that at the end of the function, but it doesn't solve the problem, because it still has to generate all the permutations and check each one.
consecutive := 1;
IsValid := True;
for n := 0 to len - 2 do
begin
  if anyVector[n] = anyVector[n + 1] then
    consecutive := consecutive + 1
  else
    consecutive := 1;
  if consecutive > MaxConsecutiveRepeats then
  begin
    IsValid := False;
    Break;
  end;
end;
Since I start from the first permutation in lexicographic order, this approach ends up generating a lot of unnecessary permutations along the way.
This is easy to make, but rather hard to make efficient.
If you need to build a single piece of code that only considers valid outputs, and thus doesn't bother walking over the entire combination space, then you're going to have some thinking to do.
On the other hand, if you can live with the code internally producing all combinations, valid or not, then it should be simple.
Make a new enumerator, one which you can call that next_perm method on, and have this internally use the other enumerator, the one that produces every combination.
Then simply make the outer enumerator run in a while loop asking the inner one for more permutations until you find one that is valid, then produce that.
Pseudo-code for this:
generator1:
    when called, yield the next combination
generator2:
    internally keep a generator1 object
    when called, keep asking generator1 for a new combination
        check the combination
        if valid, then yield it
