F# Manhattan distances between 2 Array2D arrays

So I have two boards, for example:
let startingBoard = [|[|1; 4; 7|];
                      [|6; 3; 5|];
                      [|0; 8; 2|]|]

let goal = [|[|1; 2; 3|];
             [|4; 5; 6|];
             [|7; 8; 0|]|]
and I want to find the sum of the Manhattan distances between like elements of the two arrays (for example, the Manhattan distance for the 4 tile would be 2: one move down, one move left). All I have so far is the code below, which finds the Manhattan distance between the indexes it's given.
let totalManhattenDistance board goal =
    let manhattenDistance (x1, y1) (x2, y2) =
        abs(x1 - x2) + abs(y1 - y2)
    // solution here
The problem is that I can't imagine doing this without for loops, but that seems unidiomatic.

Here's one version
let totalManhattanDistance board goal =
    let manhattanDistance ((x1, y1), (x2, y2)) = abs(x1 - x2) + abs(y1 - y2)
    let indexed xs =
        xs
        |> Seq.mapi (fun i -> Seq.mapi (fun j x -> (i, j), x))
        |> Seq.concat
        |> Seq.sortBy snd
        |> Seq.map fst
    Seq.zip (indexed board) (indexed goal)
    |> Seq.map manhattanDistance
    |> Seq.sum
The three Seq operations at the end could be done with a single Array.fold2, but I don't know if this makes the code any clearer:
let totalManhattanDistance board goal =
    let manhattanDistance (x1, y1) (x2, y2) = abs(x1 - x2) + abs(y1 - y2)
    let indexed xs =
        xs
        |> Array.mapi (fun i -> Array.mapi (fun j x -> (i, j), x))
        |> Array.concat
        |> Array.sortBy snd
        |> Array.map fst
    let folder = fun acc n m -> acc + manhattanDistance n m
    Array.fold2 folder 0 (indexed board) (indexed goal)

Using 2D arrays, the problem seems more natural:
let startingBoard = array2D [|[|1; 4; 7|];
                              [|6; 3; 5|];
                              [|0; 8; 2|]|]

let goal = array2D [|[|1; 2; 3|];
                     [|4; 5; 6|];
                     [|7; 8; 0|]|]
Unfortunately there is no findIndex2D function (like Array.findIndex). You have to define it yourself:
let findIndex2D (p : 'A -> bool) (a : 'A [,]) =
    a |> Array2D.mapi (fun x y v -> x, y, v)
      |> Seq.cast<int*int*'A>
      |> Seq.pick (fun (x, y, v) -> if p v then Some (x, y) else None)
Straightforward definition of the Manhattan distance:
let manhattanDistance (x1, y1) (x2, y2) = abs(x1 - x2) + abs (y1 - y2)
And the sum of all Manhattan distances:
let totalManhattanDistance board goal =
    board |> Array2D.mapi (fun x y v -> manhattanDistance (x, y)
                                        <| findIndex2D ((=) v) goal)
          |> Seq.cast<int> // flatten
          |> Seq.reduce (+)
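For the example boards this totals 16 (e.g. the 4 tile contributes 2, as in the question). A quick check in F# Interactive:

// Uses the array2D boards defined above.
totalManhattanDistance startingBoard goal
// val it : int = 16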

Another version:
let newpos (start : int[][]) (finish : int[][]) (i, j) =
    let rw =
        finish |> Array.fold (fun (found, y, x) row ->
            if found then (found, y, x)
            else
                match row |> Array.tryFindIndex ((=) start.[i].[j]) with
                | Some nX -> (true, y, nX)
                | None -> (false, y+1, x)
            ) (false, 0, 0)
    match rw with
    | (true, x, y) -> (x, y)
    | _ -> failwith "Not found"

let totalManhattenDistance board goal =
    let manhattenDistance (x1, y1) (x2, y2) = abs(x1 - x2) + abs(y1 - y2)
    board |> Array.mapi (fun i arr ->
        arr |> Array.mapi (fun j v ->
            let (i1, j1) = newpos board goal (i, j)
            manhattenDistance (i, j) (i1, j1)
        )
    )

totalManhattenDistance startingBoard goal
totalManhattenDistance startingBoard goal
The result is the matrix of per-tile distances:
val it : int [] [] =
  [|[|0; 2; 4|];
    [|2; 2; 1|];
    [|2; 0; 3|]|]

Here's an F#-idiomatic (I hope) version, not too dissimilar to mikkoma's:
let flatten a2D =
    a2D
    |> Array.mapi (fun i1 a1D -> a1D |> Array.mapi (fun i2 el -> (el, (i1, i2))))
    |> Array.concat
    |> Array.sortBy fst

let manhattan b1 b2 =
    flatten b1
    |> Array.zip (flatten b2)
    |> Array.sumBy (fun ((_, (i1, j1)), (_, (i2, j2))) -> abs(i2-i1) + abs(j2-j1))
flatten transforms the 2D array into a 1D array where each element is paired with its coordinates on the board.
manhattan then simply zips the two flattened arrays together and sums up the coordinate offsets.
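Applied to the jagged-array boards from the question, manhattan gives the same total of 16:

// startingBoard and goal are the int[][] boards from the question.
manhattan startingBoard goal
// val it : int = 16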

Related

Strange profiling overhead for IOArray and STArray

I'm testing the speed of various memoizing methods. The code below compares two implementations of memoizing with an array; I tested them on a recursive function. The complete code is below.
Running the program with stack test for memoweird 1000, memoweird 5000, etc. shows that IOArray is consistently faster than STArray by a couple of seconds, and the difference seems to be O(1). However, running the same program with stack test --profile reverses the result, and STArray becomes consistently faster by about one second.
{-# LANGUAGE ScopedTypeVariables #-}
module Main where

import Data.Array
import Data.Array.ST
import Control.Monad.ST
import Data.Array.IO
import GHC.IO
import Control.Monad
import Data.Time

memoST :: forall a b. (Ix a)
       => (a, a)       -- range of the argument memoized
       -> ((a -> b)    -- a recursive function, but uses its first argument for recursive calls instead
           -> a -> b)
       -> (a -> b)     -- memoized function
memoST r f = (runSTArray compute !)
  where
    compute :: ST s (STArray s a b)
    compute = do
        arr <- newArray_ r
        forM_ (range r) (\i -> do
            writeArray arr i $ f (memoST r f) i)
        return arr

memoArray :: forall a b. (Ix a)
          => (a, a)
          -> ((a -> b) -> a -> b)
          -> a -> b
memoArray r f = (unsafePerformIO compute !)  -- safe!
  where
    compute :: IO (Array a b)
    compute = do
        arr <- newArray_ r :: IO (IOArray a b)
        forM_ (range r) (\i -> do
            writeArray arr i $ f (memoArray r f) i)
        freeze arr

weird :: (Int -> Int) -> Int -> Int
weird _ 0 = 0
weird _ 1 = 0
weird f i = f (i `div` 2) + f (i - 1) + 1

stweird :: Int -> Int
stweird n = memoST (0, n) weird n

arrayweird :: Int -> Int
arrayweird n = memoArray (0, n) weird n

main :: IO ()
main = do
    t0 <- getCurrentTime
    print (stweird 5000)
    t1 <- getCurrentTime
    print (arrayweird 5000)
    t2 <- getCurrentTime
    let sttime = diffUTCTime t0 t1
    let artime = diffUTCTime t1 t2
    print (sttime - artime)
Is there a reason why the profiling overhead is so different (albeit small) on the two array types?
I'm using Stack Version 2.7.3, GHC version 8.10.4 on OS X.
Some data on my computer.
Running this a couple times:
Without Profiling:
-0.222663s -0.116007s -0.202765s -0.205319s -0.130202s
Avg -0.1754s
Std 0.0486s
With Profiling:
0.608895s -0.755541s -0.61222s -0.83613s 0.450045s
1.879662s -0.181789s 3.251379s 0.359211s 0.122721s
Avg 0.4286s
Std 1.2764s
Apparently, the random fluctuations of the profiler cover up the difference. The data here is not sufficient to confirm a difference.
You really should use criterion for benchmarking.
benchmarking stweird
time 3.116 s (3.109 s .. 3.119 s)
1.000 R² (1.000 R² .. 1.000 R²)
mean 3.112 s (3.110 s .. 3.113 s)
std dev 2.220 ms (953.8 μs .. 2.807 ms)
variance introduced by outliers: 19% (moderately inflated)
benchmarking marrayweird
time 3.170 s (2.684 s .. 3.602 s)
0.997 R² (0.989 R² .. 1.000 R²)
mean 3.204 s (3.148 s .. 3.280 s)
std dev 72.66 ms (1.810 ms .. 88.94 ms)
variance introduced by outliers: 19% (moderately inflated)
My system is noisy, but it does appear that the standard deviations don't overlap. I don't actually care much about figuring out why, though, because the code is exceptionally slow. 3 seconds for memoizing 5000 operations? Something has gone horribly wrong.
The code as written is a super-exponential algorithm: there's no sharing of memoized functions in the memoization code, so each sub-evaluation could create an entirely new array and populate it. You're being saved from that situation by two things. First is laziness: most values are never evaluated, so the algorithm actually terminates instead of eagerly evaluating array entries forever. Second, and more importantly, GHC does some constant-lifting, lifting the expression (memoST r f) (or its memoArray counterpart) out of the loop body. This creates sharing within each loop body, so the two sub-calls actually share memoization. It's not great, but it does give some speedup - mostly by accident.
The traditional approach to this sort of memoization is to just let laziness do the necessary mutation:
memoArray
    :: forall a b. (Ix a)
    => (a, a)
    -> ((a -> b) -> a -> b)
    -> a -> b
memoArray r f = fetch
  where
    fetch n = arr ! n
    arr = listArray r $ map (f fetch) (range r)
Note the knot-tying between fetch and arr internally. This ensures that the same array is used in every calculation. It benchmarks a bit better:
benchmarking arrayweird
time 212.0 μs (211.5 μs .. 212.6 μs)
1.000 R² (0.999 R² .. 1.000 R²)
mean 213.3 μs (212.4 μs .. 215.0 μs)
std dev 4.104 μs (2.469 μs .. 6.194 μs)
variance introduced by outliers: 12% (moderately inflated)
213 microseconds is much more like what I'd expect from only 5000 iterations. Still, one might be curious whether doing explicit sharing could work with the other variants. And it can:
memoST'
    :: forall a b. (Ix a)
    => (a, a)
    -> ((a -> b) -> a -> b)
    -> a -> b
memoST' r f = fetch
  where
    fetch n = arr ! n
    arr = runSTArray compute

    compute :: ST s (STArray s a b)
    compute = do
        a <- newArray_ r
        forM_ (range r) $ \i -> do
            writeArray a i $ f fetch i
        return a

memoMArray'
    :: forall a b. (Ix a)
    => (a, a)
    -> ((a -> b) -> a -> b)
    -> a -> b
memoMArray' r f = fetch
  where
    fetch n = arr ! n
    arr = unsafePerformIO compute

    compute :: IO (Array a b)
    compute = do
        a <- newArray_ r :: IO (IOArray a b)
        forM_ (range r) $ \i -> do
            writeArray a i $ f fetch i
        freeze a
Those use explicit sharing to introduce the same sort of knot-tying, though significantly more indirectly.
benchmarking stweird'
time 168.1 μs (167.1 μs .. 169.9 μs)
1.000 R² (0.999 R² .. 1.000 R²)
mean 167.1 μs (166.7 μs .. 167.8 μs)
std dev 1.636 μs (832.3 ns .. 3.007 μs)
benchmarking marrayweird'
time 171.1 μs (170.7 μs .. 171.7 μs)
1.000 R² (1.000 R² .. 1.000 R²)
mean 170.9 μs (170.5 μs .. 171.4 μs)
std dev 1.554 μs (1.076 μs .. 2.224 μs)
And those actually seem to beat the listArray variant. I really don't know what's up with that. listArray must be doing some surprising extra amount of work. Oh well.
In the end, I don't actually know what's leading to these small performance differences. But none of them are significant in comparison to actually using an efficient algorithm.
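As an aside, the lazy-array knot-tying used above is a general trick rather than something specific to this memoizer. A minimal standalone sketch (my own example, not part of the benchmarked code) is a memoized Fibonacci built the same way:

import Data.Array

-- The memo array's elements are defined in terms of the memoized
-- function itself, so every recursive call hits the same array.
fib :: Int -> Integer
fib n = fetch n
  where
    fetch i = memo ! i
    memo = listArray (0, n) (map go [0 .. n])
    go 0 = 0
    go 1 = 1
    go i = fetch (i - 1) + fetch (i - 2)

main :: IO ()
main = print (fib 50)  -- 12586269025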
Full code, for your perusal:
{-# LANGUAGE ScopedTypeVariables #-}
module Main where

import Data.Array
import Data.Array.ST
import Data.Array.Unsafe
import Control.Monad.ST
import Data.Array.IO
import GHC.IO.Unsafe
import Control.Monad
import Criterion.Main

memoST
    :: forall a b. (Ix a)
    => (a, a)
    -> ((a -> b) -> a -> b)
    -> a -> b
memoST r f = (runSTArray compute !)
  where
    compute :: ST s (STArray s a b)
    compute = do
        arr <- newArray_ r
        forM_ (range r) $ \i -> do
            writeArray arr i $ f (memoST r f) i
        return arr

memoMArray
    :: forall a b. (Ix a)
    => (a, a)
    -> ((a -> b) -> a -> b)
    -> a -> b
memoMArray r f = (unsafePerformIO compute !)
  where
    compute :: IO (Array a b)
    compute = do
        arr <- newArray_ r :: IO (IOArray a b)
        forM_ (range r) $ \i -> do
            writeArray arr i $ f (memoMArray r f) i
        freeze arr

memoArray
    :: forall a b. (Ix a)
    => (a, a)
    -> ((a -> b) -> a -> b)
    -> a -> b
memoArray r f = fetch
  where
    fetch n = arr ! n
    arr = listArray r $ map (f fetch) (range r)

memoST'
    :: forall a b. (Ix a)
    => (a, a)
    -> ((a -> b) -> a -> b)
    -> a -> b
memoST' r f = fetch
  where
    fetch n = arr ! n
    arr = runSTArray compute

    compute :: ST s (STArray s a b)
    compute = do
        a <- newArray_ r
        forM_ (range r) $ \i -> do
            writeArray a i $ f fetch i
        return a

memoMArray'
    :: forall a b. (Ix a)
    => (a, a)
    -> ((a -> b) -> a -> b)
    -> a -> b
memoMArray' r f = fetch
  where
    fetch n = arr ! n
    arr = unsafePerformIO compute

    compute :: IO (Array a b)
    compute = do
        a <- newArray_ r :: IO (IOArray a b)
        forM_ (range r) $ \i -> do
            writeArray a i $ f fetch i
        freeze a

weird :: (Int -> Int) -> Int -> Int
weird _ 0 = 0
weird _ 1 = 0
weird f i = f (i `div` 2) + f (i - 1) + 1

stweird :: Int -> Int
stweird n = memoST (0, n) weird n

marrayweird :: Int -> Int
marrayweird n = memoMArray (0, n) weird n

arrayweird :: Int -> Int
arrayweird n = memoArray (0, n) weird n

stweird' :: Int -> Int
stweird' n = memoST' (0, n) weird n

marrayweird' :: Int -> Int
marrayweird' n = memoMArray' (0, n) weird n

main :: IO ()
main = do
    let rounds = 5000

    print $ stweird rounds
    print $ marrayweird rounds
    print $ arrayweird rounds
    print $ stweird' rounds
    print $ marrayweird' rounds

    putStrLn ""

    defaultMain
        [ bench "stweird" $ whnf stweird rounds
        , bench "marrayweird" $ whnf marrayweird rounds
        , bench "arrayweird" $ whnf arrayweird rounds
        , bench "stweird'" $ whnf stweird' rounds
        , bench "marrayweird'" $ whnf marrayweird' rounds
        ]

Game of Life Ocaml with Matrix

I have generated a matrix for my Game of Life, and I'm trying to make it loop through and continuously print out the next generation. I am using code I found online, and it doesn't seem to work. Here is my code:
let generation = ref 1

let get g x y =
  try g.(x).(y)
  with _ -> 0
;;

let neighbourhood g x y =
  (get g (x-1) (y-1)) +
  (get g (x-1) (y  )) +
  (get g (x-1) (y+1)) +
  (get g (x  ) (y-1)) +
  (get g (x  ) (y+1)) +
  (get g (x+1) (y-1)) +
  (get g (x+1) (y  )) +
  (get g (x+1) (y+1))

let next_cell g x y =
  let n = neighbourhood g x y in
  match g.(x).(y), n with
  | 1, 0 | 1, 1 -> 0                        (* lonely *)
  | 1, 4 | 1, 5 | 1, 6 | 1, 7 | 1, 8 -> 0   (* overcrowded *)
  | 1, 2 | 1, 3 -> 1                        (* lives *)
  | 0, 3 -> 1                               (* get birth *)
  | _ -> 0

let copy g = Array.map Array.copy g

let rec next g =
  let width = Array.length g
  and height = Array.length g.(0)
  and new_g = copy g in
  for x = 0 to pred width do
    for y = 0 to pred height do
      new_g.(x).(y) <- (next_cell g x y)
    done
  done;
  next new_g

let print g =
  let width = Array.length g
  and height = Array.length g.(0) in
  for x = 0 to pred width do
    for y = 0 to pred height do
      if g.(x).(y) = 0
      then print_char '.'
      else print_char 'o'
    done;
    print_newline ()
  done
;;

print_string "Width ";
let num = read_int () in
print_string "Height";
let num2 = read_int () in
while !generation < 100 do
  let myArray = Array.init num (fun _ -> Array.init num2 (fun _ -> Random.int 2)) in
  print_string "Generation: "; print_int !generation; print_string "\n";
  print (next myArray);
  generation := !generation + 1;
  print_newline ()
done;;
It only prints out the initial board and the generation right after it, rather than a new one each time, since the parameter is always the original array. However, when I put print (next new_g) it gives me an unbound value. Is there a way I can continuously print out the subsequent generations? Shouldn't it overwrite the existing new_g when I do that?
Looking just at the main while loop, it allocates and initializes a random array each time around the loop. It doesn't seem like this could be right.
I also don't see how the function next could work, as it never returns. Its last action is to call itself again, unconditionally.
Update
If you change the last line of next to this:
new_g
it returns the next generation each time you call it.
Here's a way to drive the code in a functional (rather than imperative) style:
let rec generation n array =
  if n < 100 then
    begin
      print array;  (* print the current board *)
      generation (n + 1) (next array)
    end
The outermost code might look like this:
let myArray =
  Array.init num
    (fun _ -> Array.init num2 (fun _ -> Random.int 2))
in
generation 0 myArray
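Putting the pieces together, a minimal sketch of a complete driver (my assembly of the fixes above; it assumes the question's print and a version of next whose last line is new_g) could look like this:

(* Sketch only: relies on the question's definitions of print and the
   corrected next that returns new_g instead of recursing. *)
let () =
  print_string "Width ";
  let num = read_int () in
  print_string "Height ";
  let num2 = read_int () in
  let board = Array.init num (fun _ -> Array.init num2 (fun _ -> Random.int 2)) in
  let rec generation n array =
    if n < 100 then begin
      print_string "Generation: "; print_int n; print_newline ();
      print array;
      print_newline ();
      generation (n + 1) (next array)
    end
  in
  generation 0 board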

Array.create and jagged array

I can't understand the reason for this behavior:
let example count =
    let arr = Array.create 2 (Array.zeroCreate count)
    for i in [0..count - 1] do
        arr.[0].[i] <- 1
        arr.[1].[i] <- 2
    arr

example 2 |> Array.iter(printfn "%A")
This prints:
[|2; 2|]
[|2; 2|]
https://dotnetfiddle.net/borMmO
If I replace:
let arr = Array.create 2 (Array.zeroCreate count)
with:
let arr = Array.init 2 (fun _ -> Array.zeroCreate count)
everything works as expected:
let example count =
    let arr = Array.init 2 (fun _ -> Array.zeroCreate count)
    for i in [0..count - 1] do
        arr.[0].[i] <- 1
        arr.[1].[i] <- 2
    arr

example 2 |> Array.iter(printfn "%A")
This prints:
[|1; 1|]
[|2; 2|]
https://dotnetfiddle.net/uXmlbn
I think the reason is that arrays are a reference type, but I want to understand why this is happening, since I didn't expect such results.
When you write:
let arr = Array.create 2 (Array.zeroCreate count)
You are creating an array where each element is a reference to the same array. This means that mutating a value using arr.[0] also mutates the value in arr.[1] - because the two array elements are pointing to the same mutable array. You end up with:
[|  x  ;  x  |]
     \   /
   [| 0; 0 |]
When you write:
let arr = Array.init 2 (fun _ -> Array.zeroCreate count)
The provided function is called for each position in the arr array and so you'll end up with different array for each element (and so arr.[0] <> arr.[1]). You end up with:
[|     x     ;     y     |]
      /               \
[| 0; 0 |]        [| 0; 0 |]
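A quick way to observe the difference directly (my addition, not part of the original answer) is to compare the inner array references:

// Array.create reuses one inner array; Array.init builds a fresh one per slot.
let shared = Array.create 2 (Array.zeroCreate<int> 2)
let distinct = Array.init 2 (fun _ -> Array.zeroCreate<int> 2)

printfn "%b" (obj.ReferenceEquals(shared.[0], shared.[1]))     // true
printfn "%b" (obj.ReferenceEquals(distinct.[0], distinct.[1])) // false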

Arity-generic programming in Agda

How do you write arity-generic functions in Agda? Is it possible to write fully dependent and universe-polymorphic arity-generic functions?
I'll take an n-ary composition function as an example.
The simplest version
open import Data.Vec.N-ary

comp : ∀ n {α β γ} {X : Set α} {Y : Set β} {Z : Set γ}
     -> (Y -> Z) -> N-ary n X Y -> N-ary n X Z
comp 0       g y = {!!}
comp (suc n) g f = {!!}
Here is how N-ary is defined in the Data.Vec.N-ary module:
N-ary : ∀ {ℓ₁ ℓ₂} (n : ℕ) → Set ℓ₁ → Set ℓ₂ → Set (N-ary-level ℓ₁ ℓ₂ n)
N-ary zero    A B = B
N-ary (suc n) A B = A → N-ary n A B
I.e. comp receives a number n, a function g : Y -> Z, and a function f, which has arity n and result type Y.
In the comp 0 g y = {!!} case we have
Goal : Z
y : Y
g : Y -> Z
hence the hole can be easily filled by g y.
In the comp (suc n) g f = {!!} case, N-ary (suc n) X Y reduces to X -> N-ary n X Y and N-ary (suc n) X Z reduces to X -> N-ary n X Z. So we have
Goal : X -> N-ary n X Z
f : X -> N-ary n X Y
g : Y -> Z
C-c C-r reduces the hole to λ x -> {!!}, and now Goal : N-ary n X Z, which can be filled by comp n g (f x). So the whole definition is
comp : ∀ n {α β γ} {X : Set α} {Y : Set β} {Z : Set γ}
     -> (Y -> Z) -> N-ary n X Y -> N-ary n X Z
comp 0       g y = g y
comp (suc n) g f = λ x -> comp n g (f x)
I.e. comp receives n arguments of type X, applies f to them and then applies g to the result.
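As a small usage example of my own (assuming open import Data.Nat and Relation.Binary.PropositionalEquality), composing suc with _+_ at arity 2 gives a function that adds its two arguments and then increments the result:

-- Hypothetical example, not from the original post.
plus-then-suc : ℕ -> ℕ -> ℕ
plus-then-suc = comp 2 suc _+_

test-plus-then-suc : plus-then-suc 3 4 ≡ 8
test-plus-then-suc = refl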
The simplest version with a dependent g
When g has type (y : Y) -> Z y
comp : ∀ n {α β γ} {X : Set α} {Y : Set β} {Z : Y -> Set γ}
     -> ((y : Y) -> Z y) -> (f : N-ary n X Y) -> {!!}
comp 0       g y = g y
comp (suc n) g f = λ x -> comp n g (f x)
What should go in the hole? We can't use N-ary n X Z as before, because Z is a function now. If Z is a function, we need to apply it to something that has type Y, but the only way to get something of type Y is to apply f to n arguments of type X. This is just like our comp, but at the type level:
Comp : ∀ n {α β γ} {X : Set α} {Y : Set β}
     -> (Y -> Set γ) -> N-ary n X Y -> Set (N-ary-level α γ n)
Comp 0       Z y = Z y
Comp (suc n) Z f = ∀ x -> Comp n Z (f x)
And comp then is
comp : ∀ n {α β γ} {X : Set α} {Y : Set β} {Z : Y -> Set γ}
     -> ((y : Y) -> Z y) -> (f : N-ary n X Y) -> Comp n Z f
comp 0       g y = g y
comp (suc n) g f = λ x -> comp n g (f x)
A version with arguments of different types
There is the "Arity-generic datatype-generic programming" paper, which describes, among other things, how to write arity-generic functions that receive arguments of different types. The idea is to pass a vector of types as a parameter and fold it pretty much in the style of N-ary:
arrTy : {n : ℕ} → Vec Set (suc n) → Set
arrTy {0}     (A ∷ []) = A
arrTy {suc n} (A ∷ As) = A → arrTy As
However, Agda is unable to infer that vector, even if we provide its length. Hence the paper also provides a currying operator, which turns a function that explicitly receives a vector of types into a function that receives n implicit arguments.
This approach works, but it doesn't scale to fully universe-polymorphic functions. We can avoid all these problems by replacing the Vec datatype with the _^_ operator:
_^_ : ∀ {α} -> Set α -> ℕ -> Set α
A ^ 0     = Lift ⊤
A ^ suc n = A × A ^ n
A ^ n is isomorphic to Vec A n. Then our new N-ary is
_->ⁿ_ : ∀ {n} -> Set ^ n -> Set -> Set
_->ⁿ_ {0}     _       B = B
_->ⁿ_ {suc _} (A , R) B = A -> R ->ⁿ B
All types lie in Set for simplicity. comp now is
comp : ∀ n {Xs : Set ^ n} {Y Z : Set}
     -> (Y -> Z) -> (Xs ->ⁿ Y) -> Xs ->ⁿ Z
comp 0       g y = g y
comp (suc n) g f = λ x -> comp n g (f x)
And a version with a dependent g:
Comp : ∀ n {Xs : Set ^ n} {Y : Set}
     -> (Y -> Set) -> (Xs ->ⁿ Y) -> Set
Comp 0       Z y = Z y
Comp (suc n) Z f = ∀ x -> Comp n Z (f x)

comp : ∀ n {Xs : Set ^ n} {Y : Set} {Z : Y -> Set}
     -> ((y : Y) -> Z y) -> (f : Xs ->ⁿ Y) -> Comp n Z f
comp 0       g y = g y
comp (suc n) g f = λ x -> comp n g (f x)
Fully dependent and universe polymorphic comp
The key idea is to represent a vector of types as nested dependent pairs:
Sets : ∀ {n} -> (αs : Level ^ n) -> ∀ β -> Set (mono-^ (map lsuc) αs ⊔ⁿ lsuc β)
Sets {0}     _        β = Set β
Sets {suc _} (α , αs) β = Σ (Set α) λ X -> X -> Sets αs β
The second case reads like "there is a type X such that all other types depend on an element of X". Our new N-ary is trivial:
Fold : ∀ {n} {αs : Level ^ n} {β} -> Sets αs β -> Set (αs ⊔ⁿ β)
Fold {0}     Y       = Y
Fold {suc _} (X , F) = (x : X) -> Fold (F x)
An example:
postulate
  explicit-replicate : (A : Set) -> (n : ℕ) -> A -> Vec A n

test : Fold (Set , λ A -> ℕ , λ n -> A , λ _ -> Vec A n)
test = explicit-replicate
But what are the types of Z and g now?
comp : ∀ n {β γ} {αs : Level ^ n} {Xs : Sets αs β} {Z : {!!}}
     -> {!!} -> (f : Fold Xs) -> Comp n Z f
comp 0       g y = g y
comp (suc n) g f = λ x -> comp n g (f x)
Recall that f previously had type Xs ->ⁿ Y, but Y is now hidden at the end of these nested dependent pairs and can depend on an element of any X from Xs. Z previously had type Y -> Set γ, hence we now need to append Set γ to Xs, making all the xs implicit:
_⋯>ⁿ_ : ∀ {n} {αs : Level ^ n} {β γ}
      -> Sets αs β -> Set γ -> Set (αs ⊔ⁿ β ⊔ γ)
_⋯>ⁿ_ {0}     Y       Z = Y -> Z
_⋯>ⁿ_ {suc _} (_ , F) Z = ∀ {x} -> F x ⋯>ⁿ Z
OK, Z : Xs ⋯>ⁿ Set γ; what type does g have? Previously it was (y : Y) -> Z y. Again we need to append something to the nested dependent pairs, since Y is again hidden, only in a dependent way now:
Πⁿ : ∀ {n} {αs : Level ^ n} {β γ}
   -> (Xs : Sets αs β) -> (Xs ⋯>ⁿ Set γ) -> Set (αs ⊔ⁿ β ⊔ γ)
Πⁿ {0}     Y       Z = (y : Y) -> Z y
Πⁿ {suc _} (_ , F) Z = ∀ {x} -> Πⁿ (F x) Z
And finally
Comp : ∀ n {αs : Level ^ n} {β γ} {Xs : Sets αs β}
     -> (Xs ⋯>ⁿ Set γ) -> Fold Xs -> Set (αs ⊔ⁿ γ)
Comp 0       Z y = Z y
Comp (suc n) Z f = ∀ x -> Comp n Z (f x)

comp : ∀ n {β γ} {αs : Level ^ n} {Xs : Sets αs β} {Z : Xs ⋯>ⁿ Set γ}
     -> Πⁿ Xs Z -> (f : Fold Xs) -> Comp n Z f
comp 0       g y = g y
comp (suc n) g f = λ x -> comp n g (f x)
A test:
length : ∀ {α} {A : Set α} {n} -> Vec A n -> ℕ
length {n = n} _ = n

explicit-replicate : (A : Set) -> (n : ℕ) -> A -> Vec A n
explicit-replicate _ _ x = replicate x

foo : (A : Set) -> ℕ -> A -> ℕ
foo = comp 3 length explicit-replicate

test : foo Bool 5 true ≡ 5
test = refl
Note the dependency in the arguments and the resulting type of explicit-replicate. Besides, Set lies in Set₁, while ℕ and A lie in Set — this illustrates universe polymorphism.
Remarks
AFAIK, there is no comprehensible theory for implicit arguments, so I don't know how all this will work when the second function (i.e. f) receives implicit arguments. This test:
foo' : ∀ {α} {A : Set α} -> ℕ -> A -> ℕ
foo' = comp 2 length (λ n -> replicate {n = n})

test' : foo' 5 true ≡ 5
test' = refl
passes, at least.
comp can't handle functions where the universe in which some type lies depends on a value. For example,
explicit-replicate' : ∀ α -> (A : Set α) -> (n : ℕ) -> A -> Vec A n
explicit-replicate' _ _ _ x = replicate x

-- ... because this would result in an invalid use of Setω ...
error : ∀ α -> (A : Set α) -> ℕ -> A -> ℕ
error = comp 4 length explicit-replicate'
But this is common in Agda; e.g., you can't apply the explicit id to itself:
idₑ : ∀ α -> (A : Set α) -> A -> A
idₑ _ _ x = x

-- ... because this would result in an invalid use of Setω ...
error = idₑ _ _ idₑ
The code.

prove bubble sort is ordered by lemma

I have already tried to prove that fun bubble_main is ordered, but no approach seems to work. Could someone here please help me prove the lemma is_sorted (bubble_main L)?
I just deleted all my previous lemmas because none of them seemed to help Isabelle find a proof.
Here is my code/theory:
text{*check if the list is ordered ascendant*}
fun is_sorted :: "nat list ⇒ bool" where
  "is_sorted (x1 # x2 # xs) = (x1 < x2 ∧ is_sorted (x2 # xs))" |
  "is_sorted x = True"

fun bubble_once :: "nat list ⇒ nat list" where
  "bubble_once (x1 # x2 # xs) = (if x1 < x2
     then x1 # bubble_once (x2 # xs)
     else x2 # bubble_once (x1 # xs))" |
  "bubble_once xs = xs"

text{*calls fun bubble_once *}
fun bubble_all where
  "bubble_all 0 L = L" |
  "bubble_all (Suc n) L = burbuja_all n (bubble_once L)"

text{*main function *}
fun bubble_main where
  "bubble_main L = bubble_main (length L) L"

text{*-----prove by induction-----*}
lemma "is_sorted (bubble_main L)"
  apply (induction L)
  apply auto
  quickcheck
  oops
First of all, I would repair your definitions. E.g., your version of is_sorted is too strict, in the sense that [0, 0] is not sorted. This is also detected by quickcheck.
fun is_sorted :: "nat list ⇒ bool" where
  "is_sorted (x1 # x2 # xs) = (x1 <= x2 ∧ is_sorted (x2 # xs))" |
  "is_sorted x = True"
bubble_all has to call itself recursively.
fun bubble_all where
  "bubble_all 0 L = L" |
  "bubble_all (Suc n) L = bubble_all n (bubble_once L)"
and bubble_main has to invoke bubble_all.
fun bubble_main where
  "bubble_main L = bubble_all (length L) L"
Then there are several auxiliary lemmas required to prove the result. Some are listed here; others are visible in the sorrys.
lemma length_bubble_once[simp]: "length (bubble_once L) = length L"
  by (induct rule: bubble_once.induct, auto)

lemma is_sorted_last: assumes "⋀ x. x ∈ set xs ⟹ x ≤ y"
  and "is_sorted xs"
  shows "is_sorted (xs @ [y])" sorry
And of course, the main algorithm is bubble_all, so you should prove
the property for bubble_all, not for bubble_main inductively.
Moreover, an induction over the length of the list (or the number of iterations)
is advantageous here, since the list is changed by bubble_all in the recursive call.
lemma bubble_all_sorted: "n ≥ length L ⟹ is_sorted (bubble_all n L)"
proof (induct n arbitrary: L)
  case (0 L) thus ?case by auto
next
  case (Suc n L)
  show ?case
  proof (cases "L = []")
    case True
    from Suc(1)[of L] True
    show ?thesis by auto
  next
    case False
    let ?BL = "bubble_once L"
    from False have "length ?BL ≠ 0" by auto
    hence "?BL ≠ []" by (cases "?BL", auto)
    hence "?BL = butlast ?BL @ [last ?BL]" by auto
    then obtain xs x where BL: "?BL = xs @ [x]" ..
    from BL have x_large: "⋀ y. y ∈ set xs ⟹ y ≤ x" sorry
    from Suc(2) have "length ?BL ≤ Suc n" by auto
    with BL have "length xs ≤ n" by auto
    from Suc(1)[OF this] have sorted: "is_sorted (bubble_all n xs)" .
    from x_large have id: "bubble_all n (xs @ [x]) = bubble_all n xs @ [x]" sorry
    show ?thesis unfolding bubble_all.simps BL id
    proof (rule is_sorted_last[OF x_large sorted])
      fix x
      assume "x ∈ set (bubble_all n xs)"
      thus "x ∈ set xs" sorry
    qed
  qed
qed
The final theorem is then easily achieved.
lemma "is_sorted (bubble_main L)"
using bubble_all_sorted by simp
I hope this helps a bit to see what direction is required.
