Let's say I have an array of vectors:
""" simple line equation """
function getline(a::Array{Float64,1},b::Array{Float64,1})
line = Vector[]
for i=0:0.1:1
vector = (1-i)a+(i*b)
push!(line, vector)
end
return line
end
This function returns an array of vectors containing x-y positions
Vector[11]
> Float64[2]
> Float64[2]
> Float64[2]
> Float64[2]
.
.
.
Now I want to separate all x and y coordinates of these vectors to plot them with PlotlyJS.
I have already tested some approaches with no success.
What is the correct way to achieve this in Julia?
You can broadcast getindex:
xs = getindex.(vv, 1)
ys = getindex.(vv, 2)
Edit 3:
Alternatively, use list comprehensions:
xs = [v[1] for v in vv]
ys = [v[2] for v in vv]
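Either way, the extracted coordinates can then be handed to PlotlyJS; a minimal sketch (the exact trace options are up to you):
using PlotlyJS
plot(scatter(x=xs, y=ys, mode="lines+markers"))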
Edit:
For performance reasons, you should use StaticArrays to represent 2D points. E.g.:
getline(a,b) = [(1-i)a+(i*b) for i=0:0.1:1]
p1 = SVector(1.,2.)
p2 = SVector(3.,4.)
vv = getline(p1,p2)
Broadcasting getindex and list comprehensions will still work, but you can also reinterpret the vector as a 2×11 matrix:
to_matrix(a::Vector{T}) where {T<:SVector} = reshape(reinterpret(eltype(T), a), size(T, 1), length(a))
m = to_matrix(vv)
Note that this does not copy the data. You can simply use m directly or define, e.g.,
xs = @view m[1,:]
ys = @view m[2,:]
Edit 2:
Btw., not restricting the argument types of the getline function has many advantages and is generally preferred. The version above will work for any type that implements multiplication by a scalar and addition, e.g., a possible implementation of struct Point ... end (making it fully generic will require a bit more work, though).
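For illustration, a minimal sketch of that generic setup (the Point type below is hypothetical and only meant to show the idea):
struct Point
    x::Float64
    y::Float64
end
Base.:*(s::Real, p::Point) = Point(s * p.x, s * p.y)
Base.:+(p::Point, q::Point) = Point(p.x + q.x, p.y + q.y)

getline(a, b) = [(1 - i) * a + i * b for i in 0:0.1:1]
line = getline(Point(1.0, 2.0), Point(3.0, 4.0)) # works with no type annotations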
With the Julia Language, I defined a function to sample points uniformly inside the sphere of radius 3.14 using rejection sampling as follows:
function spherical_sample(N::Int64)
    # generate N points uniformly distributed inside sphere
    # using rejection sampling:
    points = pi*(2*rand(5*N,3).-1.0)
    ind = sum(points.^2,dims=2) .<= pi^2
    ## ideally I wouldn't have to do this:
    ind_ = dropdims(ind,dims=2)
    return points[ind_,:][1:N,:]
end
I found a hack for subsetting arrays:
ind = sum(points.^2,dims=2) .<= pi^2
## ideally I wouldn't have to do this:
ind_ = dropdims(ind,dims=2)
But, in principle array indexing should be a one-liner. How could I do this better in Julia?
The problem is that you are creating a 2-dimensional index array. You can avoid it by using eachrow:
ind = sum.(eachrow(points.^2)) .<= pi^2
So that your full answer would be:
function spherical_sample(N::Int64)
    points = pi*(2*rand(5*N,3).-1.0)
    ind = sum.(eachrow(points.^2)) .<= pi^2
    return points[ind,:][1:N,:]
end
Here is a one-liner:
points[(sum(points.^2,dims=2) .<= pi^2)[:],:][1:N, :]
Note that [:] is dropping a dimension so the BitArray can be used for indexing.
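An equivalent spelling uses vec instead of [:] to drop the singleton dimension:
points[vec(sum(points.^2, dims=2) .<= pi^2), :][1:N, :]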
This does not answer your question directly (you already have two suggestions), but rather hints at how you could implement the whole procedure differently if you want it to be efficient.
The first point is to avoid generating 5*N rows of data - the problem is that this is not guaranteed to produce N valid samples. The probability of a valid sample in your model is only ~50%, so it is possible that there will not be enough points to choose from, and the [1:N, :] selection will then throw an error.
Below is the code I would use that avoids this problem:
using Random # needed for rand!

function spherical_sample(N::Integer) # no need to require Int64 only here
    points = 2 .* pi .* rand(N, 3) .- pi # note that all operations are vectorized to avoid excessive allocations
    while N > 0 # we will run the code until we have N valid rows
        v = @view points[N, :] # use a view to avoid allocating
        if sum(x -> x^2, v) <= pi^2 # sum accepts a transformation function as a first argument
            N -= 1 # row is valid - move to the previous one
        else
            rand!(v) # row is invalid - resample it in place
            @. v = 2 * pi * v - pi # again - do the computation in place via broadcasting
        end
    end
    return points
end
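A quick sanity check of the result might look like this (a sketch; the sample size is arbitrary):
pts = spherical_sample(1_000)
@assert size(pts) == (1_000, 3)
@assert all(sum(pts .^ 2, dims=2) .<= pi^2) # every point lies inside the radius-pi ball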
This one is pretty fast, and uses StaticArrays. You can probably also implement something similar with ordinary tuples:
using StaticArrays

function sphsample(N)
    T = SVector{3, Float64}
    v = Vector{T}(undef, N)
    n = 1
    while n <= N
        p = rand(T) .- 0.5
        @inbounds v[n] = p .* 2π
        n += (sum(abs2, p) <= 0.25)
    end
    return v
end
On my laptop it is ~9x faster than the solution with views.
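For reference, such a comparison can be timed with BenchmarkTools; a sketch (the sample size and results are illustrative only):
using BenchmarkTools
@btime spherical_sample(10_000);
@btime sphsample(10_000);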
Given:
let weights = [0.5;0.4;0.3]
let X = [[2;3;4];[7;3;2];[5;3;6]]
what I want is: wX = [(0.5)*[2;3;4];(0.4)*[7;3;2];(0.3)*[5;3;6]]
I would like to know an elegant way to do this with lists as well as with arrays. Additional optimization information is welcome.
You write about a list of lists, but your code shows a list of tuples. Taking the liberty to adjust for that, a solution would be
let weights = [0.5;0.4;0.3]
let X = [[2;3;4];[7;3;2];[5;3;6]]
X
|> List.map2 (fun w x ->
    x
    |> List.map (fun xi ->
        (float xi) * w
    )
) weights
Depending on how comfortable you are with the syntax, you may prefer a one-liner like
List.map2 (fun w x -> List.map (float >> (*) w) x) weights X
The same library functions exist for sequences (Seq.map2, Seq.map) and arrays (in the Array module).
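For instance, a sketch of the same computation over arrays (the identifiers are illustrative):
let weightsA = [| 0.5; 0.4; 0.3 |]
let XA = [| [| 2; 3; 4 |]; [| 7; 3; 2 |]; [| 5; 3; 6 |] |]
// Array.map2 pairs each weight with the corresponding row; Array.map scales that row
let wXA = Array.map2 (fun w x -> Array.map (fun xi -> w * float xi) x) weightsA XA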
This is much more than an answer to the specific question, but after a chat in the comments and learning that the question is part of a neural network in F#, I am posting this, as it covers the question and implements the feedforward part of a neural network. It makes use of Math.NET Numerics.
This code is an F# translation of part of the Python code from Neural Networks and Deep Learning.
Python
def backprop(self, x, y):
    """Return a tuple ``(nabla_b, nabla_w)`` representing the
    gradient for the cost function C_x. ``nabla_b`` and
    ``nabla_w`` are layer-by-layer lists of numpy arrays, similar
    to ``self.biases`` and ``self.weights``."""
    nabla_b = [np.zeros(b.shape) for b in self.biases]
    nabla_w = [np.zeros(w.shape) for w in self.weights]
    # feedforward
    activation = x
    activations = [x] # list to store all the activations, layer by layer
    zs = [] # list to store all the z vectors, layer by layer
    for b, w in zip(self.biases, self.weights):
        z = np.dot(w, activation)+b
        zs.append(z)
        activation = sigmoid(z)
        activations.append(activation)
F#
module NeuralNetwork1 =

    //# Third-party libraries
    open MathNet.Numerics.Distributions  // Normal.Sample
    open MathNet.Numerics.LinearAlgebra  // Matrix

    type Network(sizes : int array) =

        let mutable (_biases : Matrix<double> list) = []
        let mutable (_weights : Matrix<double> list) = []

        member __.Biases
            with get() = _biases
            and set value =
                _biases <- value

        member __.Weights
            with get() = _weights
            and set value =
                _weights <- value

        member __.Backprop (x : Matrix<double>) (y : Matrix<double>) =
            // Note: There is a separate member for feedforward. This one is only used within Backprop
            // Note: In the text layers are numbered from 1 to n with 1 being the input and n being the output
            //       In the code layers are numbered from 0 to n-1 with 0 being the input and n-1 being the output
            //       Layers
            //         1     2     3   Text
            //         0     1     2   Code
            //       784 -> 30 -> 10
            let feedforward () : (Matrix<double> list * Matrix<double> list) =
                let (bw : (Matrix<double> * Matrix<double>) list) = List.zip __.Biases __.Weights
                let rec feedfowardInner layer activation zs activations =
                    match layer with
                    | x when x < (__.NumLayers - 1) ->
                        let (bias, weight) = bw.[layer]
                        let z = weight * activation + bias
                        let activation = __.Sigmoid z
                        feedfowardInner (layer + 1) activation (z :: zs) (activation :: activations)
                    | _ ->
                        // Normally with recursive functions that build lists for returning,
                        // the final list(s) would be reversed before returning.
                        // However since the returned lists will be accessed in reverse order
                        // for the backpropagation step, we leave them in the reverse order.
                        (zs, activations)
                feedfowardInner 0 x [] [x]
In weight * activation, the * is an overloaded operator operating on Matrix<double>.
Relating back to your example data and using Math.NET Numerics arithmetic:
let weights = [0.5;0.4;0.3]
let X = [[2;3;4];[7;3;2];[5;3;6]]
First, the values for X need to be converted to float:
let x1 = [[2.0;3.0;4.0];[7.0;3.0;2.0];[5.0;3.0;6.0]]
Now notice that x1 is a matrix and weights is a vector, so we can just multiply:
let wx1 = weights * x1
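For this multiplication to actually compile, weights and x1 have to be Math.NET vector/matrix values rather than plain F# lists; a sketch assuming the MathNet.Numerics.FSharp package (which provides the vector and matrix builder functions):
open MathNet.Numerics.LinearAlgebra

let v  = vector [0.5; 0.4; 0.3]
let m1 = matrix [[2.0; 3.0; 4.0]; [7.0; 3.0; 2.0]; [5.0; 3.0; 6.0]]
let wx1 = v * m1 // Math.NET's vector-matrix product overload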
Since the way I validated the code was more thorough than most, I will explain it so that you don't have doubts about its validity.
When working with Neural Networks and in particular mini-batches, the starting numbers for the weights and biases are random and the generation of the mini-batches is also done randomly.
I know the original Python code was valid and I was able to run it successfully and get the same results as indicated in the book, meaning that the initial successes were within a couple of percent of the book and the graphs of the success were the same. I did this for several runs and several configurations of the neural network as discussed in the book. Then I ran the F# code and achieved the same graphs.
I also copied the starting random number sets from the Python code into the F# code so that while the data generated was random, both the Python and F# code used the same starting numbers, of which there are thousands. I then single stepped both the Python and F# code to verify that each individual function was returning a comparable float value, e.g. I put a break point on each line and made sure I checked each one. This actually took a few days because I had to write export and import code and massage the data from Python to F#.
See: How to determine type of nested data structures in Python?
I also tried a variation where I replaced the F# list with LinkedList<Matrix<double>>, but found no increase in speed. It was an interesting exercise.
If I understand correctly,
let wX = weights |> List.map (fun w ->
    X |> List.map (fun (a, b, c) ->
        w * float a,
        w * float b,
        w * float c))
This is an alternate way to achieve this using Math.Net: https://numerics.mathdotnet.com/Matrix.html#Arithmetics
I want to create a 2D list that can have elements of variable lengths inside. For example, if I have a 10x10 list in MATLAB, I can define it with:
z = cell(10,10)
and start assigning some elements by doing this:
z{2,3} = ones(3,1)
z{1,1} = zeros(100,1)
z{1,2} = []
z{1,3} = randn(20,1)
...
What is the optimal way to define such an empty 2D list in Torch? Moreover, is there a way to exploit the tensor structure to do this?
In Python, I can do something like this to define an empty 10x10 2D list:
z = [[None for j in range(10)] for i in range(10)]
My best guess for torch is doing something like
z = torch.Tensor(10,10)
for i=1,10 do
    for j=1,10 do
        z[{{i},{j}}] = torch.Tensor()
    end
end
But this does not work, and defining a tensor inside a tensor seems like a bad idea ...
This is a follow up to the question asked here (however in the link it is asked in python): Create 2D lists in python with variable length indexed vectors
From the documentation I've read, tensors only support primitive numeric data types, so you won't be able to use a tensor for your intended usage. Leverage tables instead.
local function makeMatrix(initialVal, ...)
    local isfunc = type(initialVal) == "function"
    local dimtable = {...}
    local function helper(depth)
        if depth == 0 then
            return isfunc and initialVal() or initialVal
        else
            local plane = {}
            for i = 1, dimtable[depth] do
                plane[i] = helper(depth-1)
            end
            return plane
        end
    end
    return helper(#dimtable)
end
p = makeMatrix(0, 2, 3, 5) -- makes 3D matrix of size 2x3x5 with all elements initialized to 0
makeMatrix(torch.Tensor, m, n) -- initialVal is a function here, so a fresh empty torch.Tensor() is created for each element
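A usage sketch mirroring the MATLAB example above (the result is a plain nested table, so ordinary table indexing applies):
local z = makeMatrix(torch.Tensor, 10, 10)
z[2][3] = torch.ones(3, 1)    -- assign a 3x1 tensor
z[1][1] = torch.zeros(100, 1) -- assign a 100x1 tensor
print(z[2][3]:size())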
Answer from Torch's Google Group forums, agreeing that tables are the solution:
z = {}
for i=1,10 do
    z[i] = {}
    for j=1,10 do
        z[i][j] = torch.Tensor()
    end
end
I have run into examples of Applicatives that are not Monads. I like the multi-dimensional array example, but I did not fully get it.
Let's take a matrix M[A]. Could you show, with Scala code, that M[A] is an Applicative but not a Monad? Do you have any "real-world" examples of using matrices as Applicatives?
Something like M[T] <*> M[T => U] is applicative:
val A = [[1,2],[1,2]] //let's assume such imaginary syntax for arrays
val B = [[*2, *3], [*5, *2]]
A <*> B === [[2,6],[5,4]]
There may be more complex applicatives in signal processing, for example. Using applicatives allows you to build one matrix of functions (each doing N or fewer element-operations) and do only 1 matrix-operation instead of N.
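To make this concrete, here is a minimal sketch in plain Scala (no cats/scalaz; the Matrix wrapper and its method names are illustrative only) of element-wise ap for a fixed-shape matrix, reproducing the result above:
case class Matrix[A](rows: Vector[Vector[A]]) {
  def map[B](f: A => B): Matrix[B] =
    Matrix(rows.map(_.map(f)))
  // element-wise application; both matrices are assumed to have the same shape
  def ap[B](fs: Matrix[A => B]): Matrix[B] =
    Matrix(rows.zip(fs.rows).map { case (r, fr) =>
      r.zip(fr).map { case (a, f) => f(a) }
    })
}

val A = Matrix(Vector(Vector(1, 2), Vector(1, 2)))
val B = Matrix(Vector(Vector((_: Int) * 2, (_: Int) * 3),
                      Vector((_: Int) * 5, (_: Int) * 2)))
A.ap(B) // Matrix(Vector(Vector(2, 6), Vector(5, 4)))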
Matrix is not a monoid by definition - you have to define "+" (concatenation) between matrices for that (a fold, more precisely). And not every (even monoidal) matrix is a monad - you additionally have to define fmap (not flatMap - just map in Scala) to make it a Functor (an endofunctor if it returns a matrix). So by default Matrix isn't a Functor/Monoid/Monad (Monad = Functor + Monoid).
About monadic matrices: a matrix can be a monoid - you may define dimension-bound concatenation for matrices that have the same size along the orthogonal dimension. Dimension/size-independent concatenation would be something like:
val A = [[11,12],[21,22]]; val B = [[11,12,13],[21,22,23],[31,32,33]]
A + B === [[11,12,0,0,0],[21,22,0,0,0],[0,0,11,12,13],[0,0,21,22,23],[0,0,31,32,33]]
The identity element will be [].
So you can also build the monad (pseudocode again):
def flatMap[T, U](a: M[T])(f: T => M[U]) = {
  val mapped = a.map(f) // M[M[U]] // map
  def normalize(xn: Int, yn: Int) = ... // pad a matrix with zeros to a strict xn * yn size
  mapped.map(normalize(mapped.max(_.xn), mapped.max(_.yn)))
    .reduceHorizontal(_ concat _)
    .reduceVertical(_ concat _) // flatten
}
val res = flatMap([[1,1],[2,1]], x => if(x == 1)[[2,2]] else [[3,3,3]])
res === [[2,2,0,2,2],[3,3,3,2,2]]
Unfortunately, you must have a zero element (or some default) for T (not only for the monoid itself). That doesn't make T itself some kind of magma (no binary operation on the set is required - only some constant defined for T), but it may create additional problems (depending on your challenges).
I'm trying to teach myself Haskell (coming from OOP languages). I'm having a hard time grasping the immutable-variables stuff. I'm trying to sort a 2D array in row-major order.
In Java, for example (pseudocode):
int array[3][3] = **initialize array here
for(i = 0; i < 3; i++)
    for(j = 0; j < 3; j++)
        if(array[i][j] < current_low)
            current_low = array[i][j]
How can I implement this same sort of thing in Haskell? If I create a temp array to add the low values to after each iteration, I won't be able to add to it because it is immutable, correct? Also, Haskell doesn't have loops, right?
Here's some useful stuff I know in Haskell:
main = do
    let a = [[10,4],[6,10],[5,2]] --assign random numbers
    print (a !! 0 !! 1) --will print a[0][1] in java notation
    --How can we loop through the values?
First, your Java code does not sort anything. It just finds the smallest element. And, well, there's a kind of obvious Haskell solution... guess what, the function is called minimum! Let's see what it does:
GHCi> :t minimum
minimum :: Ord a => [a] -> a
OK, so it takes a list of values that can be compared (hence Ord) and outputs a single value, namely the smallest. How do we apply this to a "2D list" (nested list)? Well, basically we need the minimum amongst all minima of the sub-lists. So we first replace the list of lists with the list of minima
allMinima = map minimum a
...and then use minimum allMinima.
Written compactly:
main :: IO ()
main = do
    let a = [[10,4],[6,10],[5,2]] -- don't forget the indentation
    print (minimum $ map minimum a)
That's all!
Indeed "looping through values" is a very un-functional concept. We generally don't want to talk about single steps that need to be taken, rather think about properties of the result we want, and let the compiler figure out how to do it. So if we weren't allowed to use the pre-defined minimum, here's how to think about it:
If we have a list and look at a single value... under what circumstances is it the correct result? Well, if it's smaller than all other values. And what is the smallest of the other values? Exactly, the minimum amongst them.
minimum' :: Ord a => [a] -> a
minimum' (x:xs)
  | x < minimum' xs = x
If it's not smaller, then we just use the minimum of the other values
minimum' (x:xs)
  | x < minxs = x
  | otherwise = minxs
  where minxs = minimum' xs
One more thing: if we recurse through the list this way, there will at some point be no first element left to compare with something. To prevent that, we first need the special case of a single-element list:
minimum' :: Ord a => [a] -> a
minimum' [x] = x -- obviously smallest, since there's no other element.
minimum' (x:xs)
  | x < minxs = x
  | otherwise = minxs
  where minxs = minimum' xs
Alright, well, I'll take a stab. Zach, this answer is intended to get you thinking in recursions and folds. Recursions, folds, and maps are the fundamental ways that loops are replaced in functional style. Just try to believe that, in reality, the question of nested looping rarely arises naturally in functional programming. When you actually need to do it, you'll often enter a special section of code, called a monad, in which you can do destructive writes in an imperative style. Here's an example. But since you asked for help with breaking out of loop thinking, I'm going to focus on that part of the answer instead. @leftaroundabout's answer is also very good, and you can fill in his definition of minimum here.
flatten :: [[a]] -> [a]
flatten [] = []
flatten xs = foldr (++) [] xs
squarize :: Int -> [a] -> [[a]]
squarize _ [] = []
squarize len xs = (take len xs) : (squarize len $ drop len xs)
crappySort :: Ord a => [a] -> [a]
crappySort [] = []
crappySort xs =
    let smallest = minimum xs
        rest = filter (smallest /=) xs
        count = (length xs) - (length rest)
    in
        replicate count smallest ++ crappySort rest
sortByThrees xs = squarize 3 $ crappySort $ flatten xs
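Applied to the list from the question, a quick check might look like this (the expected output here is worked out by hand):
main :: IO ()
main = do
    let a = [[10,4],[6,10],[5,2]]
    print (sortByThrees a) -- [[2,4,5],[6,10,10]]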