I keep running into examples of Applicatives that are not Monads. I like the multi-dimensional array example, but I did not get it completely.
Let's take a matrix M[A]. Could you show, with Scala code, that M[A] is an Applicative but not a Monad? Do you have any "real-world" examples of using matrices as Applicatives?
Something like M[T] <*> M[T => U] is applicative:
val A = [[1,2],[1,2]] //let's assume such imaginary syntax for arrays
val B = [[*2, *3], [*5, *2]]
A <*> B === [[2,6],[5,4]]
There may be more complex applicatives in signal processing, for example. Using applicatives lets you build one matrix of functions (each doing N or fewer element operations) and then perform only one matrix operation instead of N.
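For concreteness, here is a minimal Scala sketch of such a zip-style Applicative for a fixed-shape matrix. The Matrix wrapper and method names below are my own toy illustration (not a library type), and all shapes are assumed to match:
// Toy wrapper around a rectangular Vector of Vectors (shape checks omitted).
case class Matrix[A](rows: Vector[Vector[A]]) {
  def map[B](f: A => B): Matrix[B] =
    Matrix(rows.map(_.map(f)))

  // Zip-style <*>: apply the function at position (i, j) to the value at (i, j).
  def ap[B](fs: Matrix[A => B]): Matrix[B] =
    Matrix(rows.zip(fs.rows).map { case (vs, gs) =>
      vs.zip(gs).map { case (v, g) => g(v) }
    })
}

object Matrix {
  // pure for a fixed n x m shape is the constant matrix of that shape.
  def pure[A](n: Int, m: Int)(a: A): Matrix[A] =
    Matrix(Vector.fill(n, m)(a))
}

// The A <*> B example from above:
val A = Matrix(Vector(Vector(1, 2), Vector(1, 2)))
val B = Matrix(Vector(
  Vector((x: Int) => x * 2, (x: Int) => x * 3),
  Vector((x: Int) => x * 5, (x: Int) => x * 2)
))
A.ap(B) // Matrix(Vector(Vector(2, 6), Vector(5, 4)))
A lawful flatMap for this type would have to flatten a matrix of differently shaped inner matrices into a single rectangle, which is exactly the padding problem discussed below.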
A matrix is not a monoid by default: you have to define "+" (concatenation, or more precisely a fold) between matrices for that. And not every (even monoidal) matrix is a monad: you additionally have to define fmap (not flatMap, just map in Scala) to make it a Functor (an endofunctor if it returns a matrix). So out of the box, a matrix is neither a Functor, a Monoid, nor a Monad (Functor + Monoid).
Now, about monadic matrices. A matrix can be a monoid: you may define dimension-bound concatenation for matrices that have the same size along the orthogonal dimension. A dimension/size-independent concatenation would be something like:
val A = [[11,12],[21,22]]; val B = [[11,12,13],[21,22,23],[31,32,33]]
A + B === [[11,12,0,0,0],[21,22,0,0,0],[0,0,11,12,13],[0,0,21,22,23],[0,0,31,32,33]]
The identity element will be the empty matrix [].
So you can also build the monad (pseudocode again):
def flatMap[T, U](a: M[T])(f: T => M[U]): M[U] = {
  val mapped = a.map(f) // M[M[U]] -- the map step
  def normalize(xn: Int, yn: Int)(m: M[U]): M[U] = ... // pad m with zeros to a strict xn * yn size
  mapped
    .map(normalize(mapped.max(_.xn), mapped.max(_.yn)))
    .reduceHorizontal(_ concat _)
    .reduceVertical(_ concat _) // the flatten step
}
val res = flatMap([[1,1],[2,1]], x => if(x == 1)[[2,2]] else [[3,3,3]])
res === [[2,2,0,2,2],[3,3,3,2,2]]
Unfortunately, you must have a zero element (or some default) for T, not only for the monoid itself. That does not make T a magma (no binary operation on T is required, only some constant of type T), but it may create additional problems, depending on your use case.
Hi, total Haskell beginner here: what does pattern matching on an array look like inside a function? For example, I simply want to add 1 to the first element of my array:
> a = array (1,10) ((1,1) : [(i,( i * 2)) | i <- [2..10]])
My first thought was:
> arraytest :: Array (Int,Int) Int -> Array (Int,Int) Int
> arraytest (array (mn,mx) (a,b):xs) = (array (mn,mx) (a,b+1):xs)
I hope you understand my problem :)
You can't pattern match on arrays because the data declaration in the Data.Array.IArray module for the Array type doesn't have any of its data constructors exposed. This is a common practice in Haskell because it allows the author to update the internal representation of their data type without making a breaking change for users of their module.
The only way to use an Array, therefore, is to use the functions provided by the module. To access the first value in an array, you can use a combination of bounds and (!), or take the first key/value pair from assocs. Then you can use (//) to make an update to the array.
arraytest arr = arr // [(index, value + 1)]
where
index = fst (bounds arr)
value = arr ! index
If you choose to use assocs, you can pattern match on its result:
arraytest arr = arr // [(index, value + 1)]
where
(index, value) = head (assocs arr) -- `head` will crash if the array is empty
Or you can make use of the Functor instances for lists and tuples:
arraytest arr = arr // take 1 (fmap (fmap (+1)) (assocs arr))
You will probably quickly notice, though, that the array package is lacking a lot of convenience functions. All of the solutions above are fairly verbose compared to how the operation would be implemented in other languages.
To fix this, we have the lens package (and its cousins), which add a ton of convenience functions to Haskell and make packages like array much more bearable. This package has a fairly steep learning curve, but it's used very commonly and is definitely worth learning.
import Control.Lens
arraytest arr = arr & ix (fst (bounds arr)) +~ 1
If you squint your eyes, you can almost see how it says arr[0] += 1, but we still haven't sacrificed any of the benefits of immutability.
This is more of an extended comment on 4castle's answer. You cannot pattern match on an Array because its implementation is hidden; you must use its public API to work with it. However, you can use the public API to define such a pattern (with the appropriate language extensions):
{-# LANGUAGE PatternSynonyms, ViewPatterns #-}
-- PatternSynonyms: Define patterns without actually defining types
-- ViewPatterns: Construct patterns that apply functions as well as match subpatterns
import Control.Arrow((&&&)) -- solely to dodge an ugly lambda; inline if you wish
pattern Array :: Ix i => (i, i) -> [(i, e)] -> Array i e
-- the type signature hints that this is the array function but bidirectional
pattern Array bounds' assocs' <- ((bounds &&& assocs) -> (bounds', assocs'))
-- When matching against Array bounds' assocs', apply bounds &&& assocs to the
-- incoming array, and match the resulting tuple to (bounds', assocs')
where Array = array
-- Using Array in an expression is the same as just using array
arraytest (Array bs ((i,x):xs)) = Array bs ((i,x+1):xs)
I'm fairly sure that the conversions to and from [] make this absolutely abysmal for performance.
Given:
let weights = [0.5;0.4;0.3]
let X = [[2;3;4];[7;3;2];[5;3;6]]
what I want is: wX = [(0.5)*[2;3;4];(0.4)*[7;3;2];(0.3)*[5;3;6]]
I would like to know an elegant way to do this with lists as well as with arrays. Additional optimization information is welcome.
You write about a list of lists, but your code (as originally posted) showed a list of tuples. Taking the liberty to adjust for that, a solution would be:
let weights = [0.5;0.4;0.3]
let X = [[2;3;4];[7;3;2];[5;3;6]]
X
|> List.map2 (fun w x ->
x
|> List.map (fun xi ->
(float xi) * w
)
) weights
Depending on how comfortable you are with the syntax, you may prefer a one-liner like:
List.map2 (fun w x -> List.map (float >> (*) w) x) weights X
The same library functions exist for sequences (Seq.map2, Seq.map) and arrays (in the Array module).
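For instance, here is a short sketch of the array version; the array literals are my own, mirroring the sample data above:
let weightsA = [| 0.5; 0.4; 0.3 |]
let XA = [| [| 2; 3; 4 |]; [| 7; 3; 2 |]; [| 5; 3; 6 |] |]
// Array.map2 pairs each weight with its row; the inner Array.map scales the row.
let wXA = Array.map2 (fun w row -> Array.map (float >> (*) w) row) weightsA XA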
This is much more than an answer to the specific question, but after a chat in the comments, and learning that the question was part of a neural network implementation in F#, I am posting this, which covers the question and implements the feedforward part of a neural network. It makes use of MathNet Numerics.
This code is an F# translation of part of the Python code from Neural Networks and Deep Learning.
Python
def backprop(self, x, y):
"""Return a tuple ``(nabla_b, nabla_w)`` representing the
gradient for the cost function C_x. ``nabla_b`` and
``nabla_w`` are layer-by-layer lists of numpy arrays, similar
to ``self.biases`` and ``self.weights``."""
nabla_b = [np.zeros(b.shape) for b in self.biases]
nabla_w = [np.zeros(w.shape) for w in self.weights]
# feedforward
activation = x
activations = [x] # list to store all the activations, layer by layer
zs = [] # list to store all the z vectors, layer by layer
for b, w in zip(self.biases, self.weights):
z = np.dot(w, activation)+b
zs.append(z)
activation = sigmoid(z)
activations.append(activation)
F#
module NeuralNetwork1 =
//# Third-party libraries
open MathNet.Numerics.Distributions // Normal.Sample
open MathNet.Numerics.LinearAlgebra // Matrix
type Network(sizes : int array) =
let mutable (_biases : Matrix<double> list) = []
let mutable (_weights : Matrix<double> list) = []
member __.Biases
with get() = _biases
and set value =
_biases <- value
member __.Weights
with get() = _weights
and set value =
_weights <- value
member __.Backprop (x : Matrix<double>) (y : Matrix<double>) =
// Note: There is a separate member for feedforward. This one is only used within Backprop
// Note: In the text layers are numbered from 1 to n with 1 being the input and n being the output
// In the code layers are numbered from 0 to n-1 with 0 being the input and n-1 being the output
// Layers
// 1 2 3 Text
// 0 1 2 Code
// 784 -> 30 -> 10
let feedforward () : (Matrix<double> list * Matrix<double> list) =
let (bw : (Matrix<double> * Matrix<double>) list) = List.zip __.Biases __.Weights
let rec feedfowardInner layer activation zs activations =
match layer with
| x when x < (__.NumLayers - 1) ->
let (bias, weight) = bw.[layer]
let z = weight * activation + bias
let activation = __.Sigmoid z
feedfowardInner (layer + 1) activation (z :: zs) (activation :: activations)
| _ ->
// Normally with recursive functions that build list for returning
// the final list(s) would be reversed before returning.
// However since the returned list will be accessed in reverse order
// for the backpropagation step, we leave them in the reverse order.
(zs, activations)
feedfowardInner 0 x [] [x]
In weight * activation, the * is an overloaded operator operating on Matrix<double>.
Relating this back to your example data and using MathNet Numerics arithmetic:
let weights = [0.5;0.4;0.3]
let X = [[2;3;4];[7;3;2];[5;3;6]]
First the values for X need to be converted to float:
let x1 = [[2.0;3.0;4.0];[7.0;3.0;2.0];[5.0;3.0;6.0]]
Now notice that x1 is a matrix and weights is a vector, so we can just multiply:
let wx1 = weights * x1
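If you want to actually run that multiplication with MathNet, here is a rough sketch. It assumes the MathNet.Numerics.FSharp package, which provides the matrix and vector builder functions used below; the names x1 and w1 are mine:
open MathNet.Numerics.LinearAlgebra  // Matrix<'T>, Vector<'T>, matrix/vector builders

let x1 =
    matrix [ [ 2.0; 3.0; 4.0 ];
             [ 7.0; 3.0; 2.0 ];
             [ 5.0; 3.0; 6.0 ] ]
let w1 = vector [ 0.5; 0.4; 0.3 ]

// Vector * Matrix treats w1 as a row vector: entry j of the result is the
// weighted sum over rows, i.e. w1.[0]*x1.[0,j] + w1.[1]*x1.[1,j] + w1.[2]*x1.[2,j].
let wx1 = w1 * x1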
Since the way I validated the code was more involved than usual, I will explain it, so that you don't have doubts about its validity.
When working with Neural Networks and in particular mini-batches, the starting numbers for the weights and biases are random and the generation of the mini-batches is also done randomly.
I know the original Python code was valid and I was able to run it successfully and get the same results as indicated in the book, meaning that the initial successes were within a couple of percent of the book and the graphs of the success were the same. I did this for several runs and several configurations of the neural network as discussed in the book. Then I ran the F# code and achieved the same graphs.
I also copied the starting random number sets from the Python code into the F# code, so that while the data generated was random, both the Python and F# code used the same starting numbers, of which there are thousands. I then single-stepped both the Python and F# code to verify that each individual function returned a comparable float value, i.e. I put a breakpoint on each line and checked each one. This actually took a few days, because I had to write export and import code and massage the data from Python to F#.
See: How to determine type of nested data structures in Python?
I also tried a variation where I replaced the F# list with a LinkedList<Matrix<double>>, but found no increase in speed. It was an interesting exercise.
If I understand correctly,
let wX = weights |> List.map (fun w ->
X |> List.map (fun (a, b, c) ->
w * float a,
w * float b,
w * float c))
This is an alternate way to achieve this using Math.Net: https://numerics.mathdotnet.com/Matrix.html#Arithmetics
Let's say I have an array of vectors:
""" simple line equation """
function getline(a::Array{Float64,1},b::Array{Float64,1})
line = Vector[]
for i=0:0.1:1
vector = (1-i)a+(i*b)
push!(line, vector)
end
return line
end
This function returns an array of vectors containing x-y positions:
Vector[11]
> Float64[2]
> Float64[2]
> Float64[2]
> Float64[2]
...
Now I want to separate all the x and y coordinates of these vectors in order to plot them with PlotlyJS.
I have already tested some approaches, with no success.
What is the correct way in Julia to achieve this?
You can broadcast getindex:
xs = getindex.(vv, 1)
ys = getindex.(vv, 2)
Alternatively, use list comprehensions:
xs = [v[1] for v in vv]
ys = [v[2] for v in vv]
For performance reasons, you should use StaticArrays to represent 2D points. E.g.:
getline(a,b) = [(1-i)a+(i*b) for i=0:0.1:1]
p1 = SVector(1.,2.)
p2 = SVector(3.,4.)
vv = getline(p1,p2)
Broadcasting getindex and list comprehensions will still work, but you can also reinterpret the vector as a 2×11 matrix:
to_matrix(a::Vector{T}) where {T<:SVector} = reshape(reinterpret(eltype(T), a), (size(T, 1), length(a)))
m = to_matrix(vv)
Note that this does not copy the data. You can simply use m directly or define, e.g.,
xs = @view m[1,:]
ys = @view m[2,:]
By the way, not restricting the types of the arguments of the getline function has many advantages and is preferred in general. The version above will work for any type that implements multiplication by a scalar and addition, e.g. a possible struct Point ... end implementation (making it fully generic will require a bit more work, though).
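As a rough sketch of that genericity, here is a hypothetical Point type with just the two operations getline needs (scalar multiplication and addition); getline itself is unchanged:
# Hypothetical 2-D point type; only the operations getline needs are defined.
struct Point
    x::Float64
    y::Float64
end

Base.:*(s::Real, p::Point) = Point(s * p.x, s * p.y)
Base.:+(p::Point, q::Point) = Point(p.x + q.x, p.y + q.y)

getline(a, b) = [(1 - i)a + i * b for i in 0:0.1:1]

getline(Point(1.0, 2.0), Point(3.0, 4.0))  # 11-element Vector{Point}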
Repa has fromListUnboxed that allows to create a 1-dimensional array from a list of values. But how can I create a 2-dimensional one given a list of 1-dimensional unboxed ones (of equal lengths)?
Use the reshape function: reshape :: (Shape sh1, Shape sh2, Source r1 e) => sh2 -> Array r1 sh1 e -> Array D sh2 e.
It's compile-time only (no runtime overhead).
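For example, here is a small sketch of that approach with made-up example data; it uses Repa's append to join the delayed 1-D rows end to end, and then reshape to reinterpret the result as a 3 x 4 array:
import Data.Array.Repa as R
import Prelude as P

-- Three 1-D unboxed rows of length 4 (stand-ins for the question's arrays).
rows :: [Array U DIM1 Int]
rows = P.replicate 3 (fromListUnboxed (ix1 4) [0 .. 3])

-- Append the delayed rows into one 1-D array, then reinterpret it as 3 x 4.
twoD :: Array D DIM2 Int
twoD = reshape (ix2 3 4) (foldr1 R.append (P.map delay rows))

main :: IO ()
main = print (computeUnboxedS twoD)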
I also stumbled upon this problem. I resolved it by converting the list of arrays to unboxed vectors, concatenating those, and converting the result back to a Repa array. Very clumsy, but that's all I could think of.
import Data.Array.Repa as R
import Data.Vector.Unboxed as V
import Prelude as P
arrs = P.replicate 5 $ fromListUnboxed (ix1 10) [0..9 :: Int]
main = print concatenatedArrs
where vectors = P.map toUnboxed arrs
concatenatedVectors = V.concat vectors
concatenatedArrs = fromUnboxed (R.ix2 5 10) concatenatedVectors
I have defined a Matrix module as follows:
module Matrix =
struct
type 'a matrix = 'a array array
let make (nr: int) (nc: int) (init: 'a) : 'a matrix =
let result = Array.make nr (Array.make nc init) in
for i = 0 to nr - 1 do
result.(i) <- Array.make nc init
done;
result
let copy (m: 'a matrix) : 'a matrix =
let l = nbrows m in
if l = 0 then m else
let result = Array.make l m.(0) in
for i = 0 to l - 1 do
result.(i) <- Array.copy m.(i)
done;
result
...
Then I can write, for instance, let mat = Matrix.make 5 5 100. The advantage of defining a Matrix module is to hide the representation of its components: I may later want to implement matrices with 'a list list or with a map, and I will only need to change this module, not the code that uses it.
But one problem I have realized is that if I write let m1 = m0 in ..., then m1 and m0 share the same physical structure: any change to m1 will affect m0. This is exactly what the copy function is for, but is there a way to make the module always perform a copy on such a binding?
Worse, for a function let f (m: 'a matrix) = ..., any change f makes to m will affect the caller's matrix. Is there a way to prevent f from doing so?
You can easily define shadow (copy-on-write) copies, something along the lines of:
type 'a m =
| Shared of 'a matrix
| Matrix of 'a array array
and 'a matrix = {
mutable m : 'a m;
}
let copy_matrix m = [... YOUR CODE FOR COPY ...]
(* Shadow copy *)
let copy matrix =
{ m = Shared matrix }
let rec content m =
match m.m with
| Shared m -> content m
| Matrix m -> m
let write m x y k =
let c = match m.m with
| Shared matrix ->
(* Break the shared chain & copy the initial shared matrix *)
let c = copy_matrix (content matrix) in
m.m <- Matrix c;
c
| Matrix m -> m in
c.(x).(y) <- k
When you write let m1 = m0, the names m1 and m0 denote the same object. This is not an assignment, it is a binding of a value to a name. Since the expression after the = sign is a simple name, both names m1 and m0 have the same value bound to them.
If you want to make a copy of a mutable data structure, you must request that copy explicitly.
If you want to be able to pass data around without having to modify it, this data must be immutable. This, indeed, is a key reason to use immutable data. When you use mutable data, you need to think carefully about sharing between data structures and who is responsible for copying when needed.
While you can reorganize any data structure to be immutable, dense matrices are not an example where immutability shines, because immutable representations of dense matrices tend to require rather more memory and more processing time.
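Still, to make the trade-off concrete, here is a rough sketch of a purely functional update on the question's 'a array array representation (the set function below is my own, not part of any library): every write allocates a fresh matrix, so callers can never observe each other's modifications.
(* Purely functional update: returns a new matrix, leaving m untouched.
   Rows other than row i are copied too, so the result shares no mutable
   storage with the input. *)
let set (m : 'a array array) (i : int) (j : int) (v : 'a) : 'a array array =
  Array.mapi
    (fun r row ->
       if r = i then Array.mapi (fun c x -> if c = j then v else x) row
       else Array.copy row)
    m
For example, let m1 = set m0 0 0 42 leaves m0 unchanged, at the cost of copying the whole matrix, which is the overhead mentioned above.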