I'm in the process of coding the knight's tour function, and I've gotten as far as this, where I'm getting an infinite loop in GHCi:
type Field = (Int, Int)
nextPositions :: Int -> Field -> [Field]
nextPositions n (x,y) = filter onBoard
    [(x+2,y-1),(x+2,y+1),(x-2,y-1),(x-2,y+1),(x+1,y-2),(x+1,y+2),(x-1,y-2),(x-1,y+2)]
  where onBoard (x,y) = x `elem` [1..n] && y `elem` [1..n]
type Path = [Field]
knightTour :: Int -> Field -> [Path]
knightTour n start = [posi:path | (posi,path) <- tour (n*n)]
  where tour 1 = [(start, [])]
        tour k = [(posi', posi:path) | (posi, path) <- tour (k-1), posi' <- filter (`notElem` path) (nextPositions n posi)]
For example, knightTour 10 (4,4) does not produce any output!
Any advice?
I think one of the main problems is checking whether you have already visited a square: that takes too much time. You should look for a data structure that makes that check more efficient.
For small boards, for example up to 8×8, you can make use of a 64-bit integer for that. A 64-bit integer can be seen as 64 booleans, each representing whether the knight has already visited the corresponding square.
We can thus implement this with:
{-# LANGUAGE BangPatterns #-}
import Data.Bits(testBit, setBit)
import Data.Word(Word64)
testPosition :: Int -> Word64 -> (Int, Int) -> Bool
testPosition !n !w (!r, !c) = testBit w (n*r + c)
setPosition :: Int -> (Int, Int) -> Word64 -> Word64
setPosition !n (!r, !c) !w = setBit w (n*r + c)
nextPositions :: Int -> Word64 -> (Int, Int) -> [(Int, Int)]
nextPositions !n !w (!x, !y) = [ c
    | c@(x', y') <- [(x-1,y-2), (x-1,y+2), (x+1,y-2), (x+1,y+2), (x-2,y-1), (x-2,y+1), (x+2,y-1), (x+2,y+1)]
    , x' >= 0
    , y' >= 0
    , x' < n
    , y' < n
    , not (testPosition n w c)
    ]
knightTour :: Int -> (Int, Int) -> [[(Int, Int)]]
knightTour n p0 = go (n*n-1) (setPosition n p0 0) p0
  where go 0 _ _ = [[]]
        go !k !w !ps = [ (ps':rs)
            | ps' <- nextPositions n w ps
            , rs <- go (k-1) (setPosition n ps' w) ps'
            ]
main = print (knightTour 6 (1,1))
If I compile this with the -O2 flag and run this locally for a 5×5 board where the knight starts at (1,1), all the solutions are generated in 0.32 seconds. For a 6×6 board, it takes 2.91 seconds to print the first solution, but it takes forever to find all solutions that start at (1,1). For an 8×8 board, the first solution was found in 185.76 seconds:
[(0,3),(1,5),(0,7),(2,6),(1,4),(0,2),(1,0),(2,2),(3,0),(4,2),(3,4),(4,6),(5,4),(6,2),(5,0),(3,1),(2,3),(3,5),(2,7),(0,6),(2,5),(1,3),(0,1),(2,0),(3,2),(2,4),(0,5),(1,7),(3,6),(4,4),(5,6),(7,7),(6,5),(7,3),(6,1),(4,0),(5,2),(7,1),(6,3),(7,5),(6,7),(5,5),(4,7),(6,6),(7,4),(5,3),(7,2),(6,0),(4,1),(3,3),(2,1),(0,0),(1,2),(0,4),(1,6),(3,7),(4,5),(5,7),(7,6),(6,4),(4,3),(5,1),(7,0)]
It is, however, not a good idea to solve this with a brute-force approach. If we assume an average branching factor of ~6 moves, then for a 6×6 board (36 squares) there are already roughly 6^36 ≈ 1.031×10^28 possible sequences we have to examine.
It is better to work with a divide-and-conquer approach. It is easy to split a board like 8×8 into four 4×4 boards. You then determine the places where you can hop from one 4×4 board to another and solve the subproblems on a 4×4 board. For small boards you can easily store the paths from any square to any other square of a 4×4 board and reuse them for all quadrants, which saves computational effort, especially since you do not need to store symmetrical queries multiple times: if you know how to go from (1,0) to (2,3) on a 4×4 board, you can use this to go from (3,0) to (2,3) on the same board, just by mirroring the path.
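For example, a stored path on a 0-based n×n sub-board can be reflected to answer the mirrored query; since a reflection maps knight moves to knight moves, the reflected list is again a valid path (a minimal sketch, not part of the code above):

-- reflect the rows of a stored sub-board path, turning a path between two
-- squares into the path between their mirror images
mirrorRows :: Int -> [(Int, Int)] -> [(Int, Int)]
mirrorRows n = map (\(r, c) -> (n - 1 - r, c))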
I am currently trying to create n-nested loops in order to compute a rather tricky sum.
Each term in the sum depends on the last, which is why the nesting is required. Here is some pseudo-code, written in Julia, explaining what I'm trying to compute:
# u,v are fixed integers
# m is a fixed, n-length array of integers
for i_1 in 0:min(m[1], u)                      # trio of loops we want to repeat n times  (#1)
    for j_1 in 0:min(m[1], v)
        for t_1 in i_1:(i_1 + j_1)

            for i_2 in 0:min(m[2], u - i_1)    # trio #2
                for j_2 in 0:min(m[2], v - j_1)
                    for t_2 in i_2:(i_2 + j_2)

                        # ... repeat until the nth trio ...

                        for i_n in 0:min(m[n], u - (i_1 + ... + i_(n-1)))   # trio #n
                            for j_n in 0:min(m[n], v - (j_1 + ... + j_(n-1)))
                                for t_n in i_n:(i_n + j_n)

                                    # sum a function which depends on the i's, j's and t's
                                    X += f(i_1,j_1,t_1) * f(i_2,j_2,t_2) * ... * f(i_n,j_n,t_n)
I am completely unsure of how to properly code this for an arbitrary number n of trios. If n is small then obviously this can be written out explicitly, but if not, then we are stuck. I assume that there is a recursive way to write this code, but I am unsure how to go about it.
Any and all help would be appreciated.
Here is a quick attempt at a slightly simplified version of your problem.
function inner(X, f, ms)
    is = [Symbol(:i_, n) for n in 1:length(ms)]
    js = [Symbol(:j_, n) for n in 1:length(ms)]
    fs = [:($f($i, $j)) for (i,j) in zip(is, js)]
    :($X += *($(fs...)))
end
inner(:X, :g, [3,4,5]) # :(X += g(i_1, j_1) * g(i_2, j_2) * g(i_3, j_3))
function looper(ex::Expr, ms, u)
    isempty(ms) && return ex
    n = length(ms)
    m_n = ms[n]
    i_n = Symbol(:i_,n)
    j_n = Symbol(:j_,n)
    out = :(for $i_n in 0:min($m_n, $u)
                for $j_n in 0:min($m_n, $u)
                    $ex
                end
            end)
    looper(out, ms[1:end-1], u)
end
looper(inner(:X, :g, [4,5]), [4,5], 3)
# :(for i_1 = 0:min(4, 3)
# for j_1 = 0:min(4, 3)
# for i_2 = 0:min(5, 3)
# for j_2 = 0:min(5, 3)
# X += g(i_1, j_1) * g(i_2, j_2) ...
macro multi(X, f, ms, u)
    ms isa Expr && ms.head == :vect || error("expected a literal vector like [1,2,3]")
    looper(inner(esc(X), esc(f), ms.args), ms.args, esc(u))
end
@macroexpand @multi X f [4,5] u
# :(for var"#74#i_1" = 0:Main.min(4, u) ...
function multi(X, f, u)
    @multi X f [4,5] u
    X
end
multi(0.0, atan, 1)
I tried to learn how STArray works, but I couldn't. (The documentation is poor, or at least the one I found is.)
Anyway, I have the following algorithm, but it uses a lot of !!, which is slow. How can I convert it to use an STArray?
-- The Algorithm prints the primes present in [1 .. n]
main :: IO ()
main = print $ primesUpTo 100
type Nat = Int
primesUpTo :: Nat -> [Nat]
primesUpTo n = primesUpToAux n 2 [1]
primesUpToAux :: Nat -> Nat -> [Nat] -> [Nat]
primesUpToAux n current primes =
    if current > n
      then primes
      else primesUpToAux n (current + 1) newAcum
  where
    newAcum = case isPrime current primes of
      True  -> primes ++ [current]
      False -> primes
isPrime :: Nat -> [Nat] -> Bool
isPrime 1 _ = True
isPrime 2 _ = True
isPrime x neededPrimes = isPrimeAux x neededPrimes 1
isPrimeAux x neededPrimes currentPrimeIndex =
    if sqrtOfX < currentPrime
      then True
      else if mod x currentPrime == 0
        then False
        else isPrimeAux x neededPrimes (currentPrimeIndex + 1)
  where
    sqrtOfX = sqrtNat x
    currentPrime = neededPrimes !! currentPrimeIndex
sqrtNat :: Nat -> Nat
sqrtNat = floor . sqrt . fromIntegral
Edit
Oops, the !! wasn't the problem; in the next version of the algorithm (below) I've removed the use of !!. I also fixed 1 being treated as a prime, which it is not, as pointed out by @pedrorodrigues.
main :: IO ()
main = print $ primesUpTo 20000
type Nat = Int
primesUpTo :: Nat -> [Nat]
primesUpTo n = primesUpToAux n 1 []
primesUpToAux :: Nat -> Nat -> [Nat] -> [Nat]
primesUpToAux n current primesAcum =
    if current > n
      then primesAcum
      else primesUpToAux n (current + 1) newPrimesAcum
  where
    newPrimesAcum = case isPrime current primesAcum of
      True  -> primesAcum ++ [current]
      False -> primesAcum
isPrime :: Nat -> [Nat] -> Bool
isPrime 1 _ = False
isPrime 2 _ = True
isPrime x neededPrimes =
    if sqrtOfX < currentPrime
      then True
      else if mod x currentPrime == 0
        then False
        else isPrime x restOfPrimes
  where
    sqrtOfX = sqrtNat x
    currentPrime:restOfPrimes = neededPrimes
sqrtNat :: Nat -> Nat
sqrtNat = floor . sqrt . fromIntegral
Now this question is really about two questions:
1.- How to transform this algorithm to use arrays instead of lists? (This is for the sake of learning how to handle state and arrays in Haskell.)
Somebody already answered this in the comments, but only by pointing to an example that isn't very well explained.
2.- How to eliminate the concatenation of lists every time a new prime is found?
True -> primesAcum++[current]
Here's a more or less direct translation of your code into working with an unboxed array of integers:
import Control.Monad
import Control.Monad.ST
import Data.Array.ST
import Data.Array.Unboxed
import Control.Arrow
main :: IO ()
main = print . (length &&& last) . primesUpTo $ 1299709
primesUpTo :: Int -> [Int]
primesUpTo = takeWhile (> 0) . elems . primesUpToUA
primesUpToUA :: Int -> UArray Int Int
primesUpToUA n = runSTUArray $ do
    let k = floor (1.25506 * fromIntegral n / log (fromIntegral n))   -- WP formula
    ar <- newArray (0,k+1) 0      -- all zeroes initially, two extra spaces
    let
      loop c i | c > n = return ar            -- upper limit is reached
               | otherwise = do               -- `i` primes currently in the array
                   b <- isPrime 0 i c         -- is `c` a prime?
                   if b then do { writeArray ar i c ; loop (c+1) (i+1) }
                        else loop (c+1) i
      isPrime j i c | j >= i = return True    -- `i` primes
                    | otherwise = do          -- currently in the array
                        p <- readArray ar j
                        if p*p > c then return True
                          else if rem c p == 0 then return False
                          else isPrime (j+1) i c
    loop 2 0
This is more or less self-explanatory when you read it slowly, one statement at a time.
Using arrays, there are no problems with list concatenation, as there are no lists: we use the array of primes itself as we're adding new items to it.
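For reference, with the module above loaded this gives (1299709 happens to be the 100,000th prime, so the filled prefix of the array is exactly the list of primes up to the limit):

ghci> primesUpTo 30
[2,3,5,7,11,13,17,19,23,29]
ghci> (length &&& last) (primesUpTo 1299709)
(100000,1299709)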
Of course you can re-write your list-based code to behave better; the simplest re-write is
ps :: [Int]
ps = 2 : [i | i <- [3..],
              and [rem i p > 0 | p <- takeWhile ((<=i).(^2)) ps]]
primesTo n = takeWhile (<= n) ps
The key is to switch from recursive thinking to corecursive thinking: not how to add at the end, explicitly, but how the list is to be produced, letting the lazy semantics take care of the rest.
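For example, in GHCi (ps is only ever computed as far as it is demanded):

ghci> take 10 ps
[2,3,5,7,11,13,17,19,23,29]
ghci> primesTo 50
[2,3,5,7,11,13,17,19,23,29,31,37,41,43,47]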
EDIT3: I'm writing code to process a very long input list of Ints with only a few hundred non-duplicates. I use two auxiliary lists to maintain cumulative partial sums to calculate some accumulator value; the hows and whys are not important. I want to ditch all the lists here and turn it into a nice destructive loop, and I don't know how. I don't need the whole code, just a skeleton where reading/writing is done on two auxiliary arrays and some end result is returned would be great. What I have right now would run for 0.5 hours on the input. I've now coded this in C++, and it runs in 90 seconds for the same input.
I can't understand how to do this, at all. This is the list-based code that I have right now (but the Map-based code below is clearer):
ins :: (Num b, Ord a) => a -> b -> [(a, b)] -> ([(a, b)], b)
ins n x [] = ( [(n,x)], 0)
ins n x l@((v, s):t) =
    case compare n v of
      LT -> ( (n,s+x) : l , s )
      EQ -> ( (n,s+x) : t , if null t then 0 else snd (head t))
      GT -> let (u,z) = ins n x t
            in ((v,s+x):u,z)
This is used in a loop to process a list of numbers of known length (I changed it to foldl now):
scanl g (0,([],[])) ns -- ns :: [Int]
g :: (Num t, Ord t, Ord a) =>
     (t, ([(a, t)], [(a, t)])) -> a -> (t, ([(a, t)], [(a, t)]))
g (c,( a, b)) n =
    let
      (a2,x) = ins n 1 a
      (b2,y) = if x>0 then ins n x b else (b,0)
      c2 = c + y
    in
      (c2,( a2, b2))
This works, but I need to speed it up. In C, I would keep the lists (a,b) as arrays; use binary search to find the element with the key just above or equal to n (instead of the sequential search used here); and use in-place update to change all the preceding entries.
I'm only really interested in final value. How is this done in Haskell, with mutable arrays?
I tried something, but I really don't know what I'm doing here, and am getting strange and very long error messages (like "can not deduce ... from context ..."):
goarr top = runSTArray $ do
    let sz = 10000
    a <- newArray (1,sz) (0,0) :: ST s (STArray s Int (Integer,Integer))
    b <- newArray (1,sz) (0,0) :: ST s (STArray s Int (Integer,Integer))
    let p1 = somefunc 2          -- somefunc :: Integer -> [(Integer, Int)]
    go1 p1 2 0 top a b

go1 p1 i c top a b =
    if i >= top
      then do
        return c
      else
        go2 p1 i c top a b

go2 p1 i c top a b =
    do
      let p2 = somefunc (i+1)    -- p2 :: [(Integer, Int)]
      let n = combine p1 p2      -- n :: Int
      -- update arrays and calc new c
      -- like the "g" function is doing:
      --   (a2,x) = ins n 1 a
      --   (b2,y) = if x>0 then ins n x b else (b,0)
      --   c2 = c + y
      go1 p2 (i+1) c2 top a b    -- a2 b2??
This doesn't work at all. I don't even know how to encode loops in do notation. Please help.
UPD: the Map-based code, which runs 3 times slower:
ins3 :: (Ord k, Num a) => k -> a -> Map.Map k a -> (Map.Map k a, a)
ins3 n x a | Map.null a = (Map.insert n x a , 0)
ins3 n x a = let (p,q,r) = Map.splitLookup n a in
             case q of
               Nothing -> (Map.union (Map.map (+x) p)
                                     (Map.insert n (x+leftmost r) r) , leftmost r)
               Just s  -> (Map.union (Map.map (+x) p)
                                     (Map.insert n (x+s) r) , leftmost r)

leftmost r | Map.null r = 0
           | otherwise  = snd . head $ Map.toList r
UPD2: The error message is "Could not deduce (Num (STArray s1 i e)) from the context () arising from the literal `0' at filename.hs:417:11".
That's where it says return c in the go1 function. Perhaps c is expected to be an array, but I want to return the accumulator value that is built up while using the two auxiliary arrays.
EDIT3: I've replaced scanl and (!!) with foldl and take as per Chris's advice, and now it runs in constant space with sane empirical complexity and is actually projected to finish in under 0.5 hours, as opposed to ... 3 days! I knew about that of course, but I was so sure GHC would optimize the stuff away for me that I thought it surely wouldn't make that much of a difference! And so I felt that only mutable arrays could help... Bummer.
Still, C++ does the same in 90 seconds, and I would very much appreciate help in learning how to code this with mutable arrays, in Haskell.
Are the input values ever EQ? If they are not EQ, then the way scanl g (0,([],[])) ns is used means that the first [(,)] array, call it a, always has map snd a == reverse [1..length a] at each stage of g. For example, in a length-10 list the value of snd (a !! 4) is going to be 10-4. Keeping these reversed index values by mutating the second value of each preceding entry in a is quite wasteful. If you need speed then this is one place to make a better algorithm.
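For example, inserting the distinct keys 5, 3 and 7 with weight 1 each (tracing ins from the question by hand):

ins 5 1 []               == ( [(5,1)]              , 0 )
ins 3 1 [(5,1)]          == ( [(3,2),(5,1)]        , 1 )
ins 7 1 [(3,2),(5,1)]    == ( [(3,3),(5,2),(7,1)]  , 0 )

so map snd of the first list is [3,2,1] == reverse [1..3].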
None of this applies to the second [(,)], whose purpose is still mysterious to me. It records all insertions that were not done at the end of a, so perhaps it allows one to reconstruct the initial sequence of values.
You said "I'm only really interested in final value." Do you mean you only care about the last value in list output by the scanl .. line? If so then you need a foldl instead of scanl.
Edit: I am adding a non-mutable solution using a custom Finger Tree. It passes my ad hoc testing (at bottom of code):
{-# LANGUAGE MultiParamTypeClasses #-}
import Data.Monoid
import Data.FingerTree
data Entry a v = E !a !v deriving Show
data ME a v = NoF | F !(Entry a v) deriving Show
instance Num v => Monoid (ME a v) where
  mempty = NoF
  NoF `mappend` k = k
  k `mappend` NoF = k
  (F (E _a1 v1)) `mappend` (F (E a2 v2)) = F (E a2 (v1 + v2))

instance Num v => Measured (ME a v) (Entry a v) where
  measure = F

type M a v = FingerTree (ME a v) (Entry a v)

getV NoF = 0
getV (F (E _a v)) = v

expand :: Num v => M a v -> [(a, v)]
expand m = case viewl m of
  EmptyL -> []
  (E a _v) :< m' -> (a, getV (measure m)) : expand m'

ins :: (Ord a, Num v) => a -> v -> M a v -> (M a v, v)
ins n x m =
  let comp (F (E a _)) = n <= a
      comp NoF = False
      (lo, hi) = split comp m
  in case viewl hi of
       EmptyL -> (lo |> E n x, 0)
       (E v s) :< higher
         | n < v     -> (lo >< (E n x <| hi), getV (measure hi))
         | otherwise -> (lo >< (E n (s+x) <| higher), getV (measure higher))

g :: (Num t, Ord t, Ord a) =>
     (t, (M a t, M a t)) -> a -> (t, (M a t, M a t))
g (c, (a, b)) n =
  let (a2, x) = ins n 1 a
      (b2, y) = if x>0 then ins n x b else (b, 0)
  in (c+y, (a2, b2))

go :: (Ord a, Num v, Ord v) => [a] -> (v, ([(a, v)], [(a, v)]))
go ns = let (t, (a, b)) = foldl g (0, (mempty, mempty)) ns
        in (t, (expand a, expand b))

up = [1..6]
down = [5,4..1]
see'tests = map go [ up, down, up ++ down, down ++ up ]
main = putStrLn . unlines . map show $ see'tests
Slightly unorthodox, I am adding a second answer using a mutable technique. Since user1308992 mentioned Fenwick trees, I have used them to implement the algorithm. Two STUArray are allocated and mutated during the run. The basic Fenwick tree keeps totals for all smaller indices and the algorithm here needs totals for all larger indices. This change is handled by the (sz-x) subtraction.
import Control.Monad.ST(runST,ST)
import Data.Array.ST(STUArray,newArray)
import Data.Array.Base(unsafeRead, unsafeWrite)
import Data.Bits((.&.))
import Debug.Trace(trace)
import Data.List(group,sort)
{-# INLINE lsb #-}
lsb :: Int -> Int
lsb i = (negate i) .&. i
go :: [Int] -> Int
go xs = compute (maximum xs) xs
-- Require "top == maximum xs" and "all (>=0) xs"
compute :: Int -> [Int] -> Int
compute top xs = runST mutating where
  -- Have (sz - (top+1)) > 0 to keep algorithm simple
  sz = top + 2

  -- Reversed Fenwick tree (no bounds checking)
  insert :: STUArray s Int Int -> Int -> Int -> ST s ()
  insert arr x v = loop (sz-x) where
    loop i | i > sz = return ()
           | i <= 0 = error "wtf"
           | otherwise = do
               oldVal <- unsafeRead arr i
               unsafeWrite arr i (oldVal + v)
               loop (i + lsb i)

  getSum :: STUArray s Int Int -> Int -> ST s Int
  getSum arr x = loop (sz - x) 0 where
    loop i acc | i <= 0 = return acc
               | otherwise = do
                   val <- unsafeRead arr i
                   loop (i - lsb i) $! acc + val

  ins n x arr = do
    insert arr n x
    getSum arr (succ n)

  mutating :: ST s Int
  mutating = do
    -- Start index from 0 to make unsafeRead, unsafeWrite easy
    a <- newArray (0,sz) 0 :: ST s (STUArray s Int Int)
    b <- newArray (0,sz) 0 :: ST s (STUArray s Int Int)
    let loop [] c = return c
        loop (n:ns) c = do
          x <- ins n 1 a
          y <- if x > 0
                 then ins n x b
                 else return 0
          loop ns $! c + y
    -- Without debugging use the next line
    -- loop xs 0
    -- With debugging use the next five lines
    c <- loop xs 0
    a' <- see a
    b' <- see b
    trace (show (c,(a',b'))) $ do
      return c

  -- see is only used in debugging
  see arr = do
    let zs = map head . group . sort $ xs
    vs <- sequence [ getSum arr z | z <- zs ]
    let ans = filter (\(a,v) -> v>0) (zip zs vs)
    return ans
up = [1..6]
down = [5,4..1]
see'tests = map go [ up, down, up ++ down, down ++ up ]
main = putStrLn . unlines . map show $ see'tests
Let me call the function accumrArray.
accumrArray ::
      (e' -> e -> e)   -- An accumulating function
   -> e                -- A default element
   -> (i, i)           -- The bounds of the array
   -> [(i, e')]        -- List of associations
   -> a i e            -- The array
accumrArray (:) [] (1,2) [(1,1),(2,2),(2,3)] === array [(1,[1]), (2,[2,3])]
head $ (accumrArray (:) [] (1,1) [(1,x)|x<-[4..]]) ! 1 === 4
How strange... I wrote this function a few days ago for someone else. The function first appeared in LML (I believe), but never made it into the Haskell array library.
Here you go:
{-# LANGUAGE ScopedTypeVariables #-}
import Data.Array
import System.IO.Unsafe
import Data.IORef
import Data.Array.MArray
import Data.Array.Base
import Control.Monad
import Data.Array.IO
accumArrayR :: forall a e i. Ix i => (a -> e -> e) -> e -> (i,i) -> [(i,a)] -> Array i e
accumArrayR f e bounds@(l,u) assocs = unsafePerformIO $ do
    ref <- newIORef assocs
    arr <- newArray_ bounds
    let _ = arr :: IOArray i e
    let n = safeRangeSize (l,u)
    let elem x = unsafePerformIO $ do
            ass <- readIORef ref
            let loop [] = writeIORef ref [] >> return e
                loop ((y,a):rest) = do
                    let ix = safeIndex bounds n y
                    let r = f a (elem x)
                    unsafeWrite arr ix r
                    if (ix == x)
                      then writeIORef ref rest >> return r
                      else loop rest
            loop ass
    forM_ [0..n] $ \ix -> unsafeWrite arr ix (elem ix)
    unsafeFreeze arr
A challenge for the reader: use accumArrayR to implement linear-time depth-first-search of a graph.
Edit: I should mention that the function isn't thread-safe as written. Turning the IORef into an MVar would fix it, but there might be better ways.
Not the most efficient, but...
accumrArray f x b l = accumArray (flip f) x b (reverse l)
I would argue that
accumrArray f x b l = accumArray (flip f) x b (reverse l)
is indeed the best solution (credits to sclv's answer).
Its supposed "inefficiency" comes from the fact that foldr applies the function f from right to left.
However, since accumArray is strict, l can never be infinite; otherwise the program would be incorrect, as it would never terminate.
Therefore, foldl (flip f) is just as good as a foldr.
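A quick check of that on the question's first example (the association list is finite, so reversing it is fine):

import Data.Array (accumArray, elems)

main :: IO ()
main = print (elems (accumArray (flip (:)) [] (1,2) (reverse [(1,1),(2,2),(2,3)])))
-- prints [[1],[2,3]], i.e. array [(1,[1]),(2,[2,3])] as required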