I am looking at Accelerate to compute the mean and standard deviation of arrays in Swift.
I can compute the mean; how do I compute the standard deviation?
let rr: [Double] = [ 18.0, 21.0, 41.0, 42.0, 48.0, 50.0, 55.0, 90.0 ]
var mn: Double = 0.0
vDSP_meanvD(rr, 1, &mn, vDSP_Length(rr.count))
print(mn) // prints correct mean as 45.6250
// Standard Deviation should be 22.3155
You can compute the standard deviation from the mean value and the mean square value, using the identity Var(X) = E[X^2] - (E[X])^2 (compare https://en.wikipedia.org/wiki/Standard_deviation#Identities_and_mathematical_properties and https://en.wikipedia.org/wiki/Algebraic_formula_for_the_variance):
import Accelerate
let rr: [Double] = [ 18.0, 21.0, 41.0, 42.0, 48.0, 50.0, 55.0, 90.0 ]
var mn: Double = 0.0 // mean value
vDSP_meanvD(rr, 1, &mn, vDSP_Length(rr.count))
var ms: Double = 0.0 // mean square value
vDSP_measqvD(rr, 1, &ms, vDSP_Length(rr.count))
let sddev = sqrt(ms - mn * mn) * sqrt(Double(rr.count)/Double(rr.count - 1)) // Bessel's correction: population -> sample standard deviation
print(mn, sddev)
// 45.625 22.315513501982
Alternatively (for iOS 9.0 and later or macOS 10.11 and later), use vDSP_normalizeD:
var mn = 0.0
var sddev = 0.0
vDSP_normalizeD(rr, 1, nil, 1, &mn, &sddev, vDSP_Length(rr.count))
sddev *= sqrt(Double(rr.count)/Double(rr.count - 1)) // Bessel's correction, as above
print(mn, sddev)
// 45.625 22.315513501982
An add-on to @Martin R's answer: there is also a vDSP_normalize function for Float/single precision.
func vDSP_normalize(UnsafePointer<Float>, vDSP_Stride, UnsafeMutablePointer<Float>?, vDSP_Stride, UnsafeMutablePointer<Float>, UnsafeMutablePointer<Float>, vDSP_Length)
// Compute mean and standard deviation and then calculate new elements to have a zero mean and a unit standard deviation. Single precision.
func vDSP_normalizeD(UnsafePointer<Double>, vDSP_Stride, UnsafeMutablePointer<Double>?, vDSP_Stride, UnsafeMutablePointer<Double>, UnsafeMutablePointer<Double>, vDSP_Length)
// Compute mean and standard deviation and then calculate new elements to have a zero mean and a unit standard deviation. Double precision.
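For example, a minimal single-precision sketch (same numbers as above; pass nil for the output vector when you only need the mean and standard deviation, and apply Bessel's correction as in @Martin R's answer):
import Accelerate
let rrf: [Float] = [18, 21, 41, 42, 48, 50, 55, 90]
var mnf: Float = 0
var sdf: Float = 0
vDSP_normalize(rrf, 1, nil, 1, &mnf, &sdf, vDSP_Length(rrf.count))
sdf *= (Float(rrf.count) / Float(rrf.count - 1)).squareRoot() // Bessel's correction
print(mnf, sdf) // approximately 45.625 22.315514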
Trying to come up with a fast way to make sure a matrix column is monotonic in Julia.
The slow (and obvious) way to do it that I have been using is something like this:
function check_monotonicity(
    timeseries::Array,
    column::Int
)
    current = timeseries[1, column]
    for row in 1:size(timeseries, 1)
        if timeseries[row, column] > current
            return false
        end
        current = copy(timeseries[row, column])
    end
    return true
end
So that it works something like this:
julia> using Distributions
julia> mono_matrix = hcat(collect(0:0.1:10), rand(Uniform(0.4,0.6),101), reverse(collect(0.0:0.1:10.0)))
101×3 Matrix{Float64}:
0.0 0.574138 10.0
0.1 0.490671 9.9
0.2 0.457519 9.8
0.3 0.567687 9.7
⋮
9.8 0.513691 0.2
9.9 0.589585 0.1
10.0 0.405018 0.0
julia> check_monotonicity(mono_matrix, 2)
false
And then for the opposite example:
julia> check_monotonicity(mono_matrix, 3)
true
Does anyone know a more efficient way to do this for long time series?
Your implementation is certainly not slow! It is very nearly optimally fast. You should definitely get rid of the copy, though: it doesn't hurt when the matrix elements are just plain data, but it can be costly in other cases, for example with BigInt.
This is close to your original effort, but also more robust with respect to indexing and array types:
function ismonotonic(A::AbstractMatrix, column::Int, cmp = <)
    current = A[begin, column]  # begin instead of 1
    for i in axes(A, 1)[2:end]  # skip the first element
        newval = A[i, column]   # don't use copy here
        cmp(newval, current) && return false
        current = newval
    end
    return true
end
Another tip: You don't need to use collect. In fact, you should almost never use collect. Do this instead (I removed Uniform since I don't have Distributions.jl):
mono_matrix = hcat(0:0.1:10, rand(101), reverse(0:0.1:10)) # or 10:-0.1:0
Or perhaps this is better, since you have more control over the number of elements in the range:
mono_matrix = hcat(range(0, 10, 101), rand(101), range(10, 0, 101))
Then you can use it like this:
1.7.2> ismonotonic(mono_matrix, 3)
false
1.7.2> ismonotonic(mono_matrix, 3, >=)
true
1.7.2> ismonotonic(mono_matrix, 1)
true
In mathematics, a sequence is typically defined to be monotonic if it is non-decreasing or non-increasing. If this is what you want, then do:
issorted(view(mono_matrix, :, 2), rev=true)
to check if it is non-increasing, and:
issorted(view(mono_matrix, :, 2))
to check if it is non-decreasing.
If you want a decreasing check do:
issorted(view(mono_matrix, :, 3), rev=true, lt = <=)
for decreasing, but treating 0.0 and -0.0 as equal or
issorted(view(mono_matrix, :, 3), lt = <=)
for increasing, but treating 0.0 and -0.0 as equal.
If you want to distinguish 0.0 and -0.0 then do respectively:
issorted(view(mono_matrix, :, 3), rev=true, lt = (x, y) -> isequal(x, y) || isless(x, y))
issorted(view(mono_matrix, :, 3), lt = (x, y) -> isequal(x, y) || isless(x, y))
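For example, a two-element sanity check (a sketch; the first call treats 0.0 and -0.0 as equal, so the pair does not count as strictly decreasing, while the second call distinguishes them):
julia> issorted([0.0, -0.0], rev=true, lt = <=)
false
julia> issorted([0.0, -0.0], rev=true, lt = (x, y) -> isequal(x, y) || isless(x, y))
true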
I'm currently working on creating a subtype of AbstractArray in Julia which allows you to store a vector in addition to the Array itself. You can think of the vector as the column "names", with element types restricted to subtypes of AbstractFloat. Hence, it has some similarities to the NamedArrays.jl package, but is restricted to labelling the columns with Floats (in the case of matrices).
The struct that I've created so far (following the guide to create a subtype of AbstractArray) is defined as follows:
struct FooArray{T, N, AT, VT} <: AbstractArray{T, N}
    data::AT
    vec::VT
    function FooArray(data::AbstractArray{T1, N}, vec::AbstractVector{T2}) where {T1 <: AbstractFloat, T2 <: AbstractFloat, N}
        length(vec) == size(data, 2) || error("Inconsistent dimensions")
        new{T1, N, typeof(data), typeof(vec)}(data, vec)
    end
end
@inline Base.@propagate_inbounds Base.getindex(fooarr::FooArray, i::Int) = getindex(fooarr.data, i)
@inline Base.@propagate_inbounds Base.getindex(fooarr::FooArray, I::Vararg{Int, 2}) = getindex(fooarr.data, I...)
@inline Base.@propagate_inbounds Base.size(fooarr::FooArray) = size(fooarr.data)
Base.IndexStyle(::Type{<:FooArray}) = IndexLinear()
This already seems to be enough to create objects of type FooArray and do some simple math with them. However, I've observed that some essential functions, such as matrix-vector multiplication, seem to be imprecise. For example, the following should consistently return a vector of zeros, but:
R = rand(100, 3)
S = FooArray(R, collect(1.0:3.0))
y = rand(100)
S'y - R'y
3-element Vector{Float64}:
-7.105427357601002e-15
0.0
3.552713678800501e-15
While the differences are very small, they can quickly add up over many different calculations, leading to significant errors.
Where do these differences come from?
A look at the calculations via the macro @code_llvm reveals that apparently different matmul functions from LinearAlgebra are used (with other minor differences):
@code_llvm S'y
...
# C:\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.6\LinearAlgebra\src\matmul.jl:111 within `*'
...
@code_llvm R'y
...
# C:\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.6\LinearAlgebra\src\matmul.jl:106 within `*'
...
Redefining the adjoint and * functions on our FooArray object provides the expected, correct result:
import Base: *, adjoint, /
Base.adjoint(a::FooArray) = FooArray(a.data', zeros(size(a.data, 1)))
*(a::FooArray{T, 2, AT, VT} where {AT, VT}, b::AbstractVector{S}) where {T, S} = a.data * b
S'y - R'y
3-element Vector{Float64}:
0.0
0.0
0.0
However, this solution (which is also done in NamedArrays here) would require defining and maintaining all sorts of functions, not just the standard functions in Base, adding more and more dependencies just because of this small error margin.
Is there any simpler way to get rid of this issue without redefining every operation and possibly many other functions from other packages?
I'm using Julia version 1.6.1 on Windows 64-bit system.
Yes, the implementation of matrix multiplication will vary depending upon your array type. The builtin Array will use BLAS, whereas your custom FooArray will use a generic implementation. Due to the non-associativity of floating-point arithmetic, these different approaches will indeed yield different values, and note that both may differ from the ground truth, even for the builtin Arrays!
julia> using Random; Random.seed!(0); R = rand(100, 3); y = rand(100);
julia> R'y - Float64.(big.(R)'big.(y))
3-element Vector{Float64}:
-3.552713678800501e-15
0.0
0.0
You may be able to implement your custom array as a DenseArray, which will ensure that it uses the same (BLAS-enabled) codepath. You just need to implement a few more methods, most importantly strides and unsafe_convert:
julia> struct FooArray{T, N} <: DenseArray{T, N}
           data::Array{T, N}
       end
Base.getindex(fooarr::FooArray, i::Int) = fooarr.data[i]
Base.size(fooarr::FooArray) = size(fooarr.data)
Base.IndexStyle(::Type{<:FooArray}) = IndexLinear()
Base.strides(fooarr::FooArray) = strides(fooarr.data)
Base.unsafe_convert(P::Type{Ptr{T}}, fooarr::FooArray{T}) where {T} = Base.unsafe_convert(P, fooarr.data)
julia> R = rand(100, 3); S = FooArray(R); y = rand(100)
R'y - S'y
3-element Vector{Float64}:
0.0
0.0
0.0
julia> R = rand(100, 1000); S = FooArray(R); y = rand(100)
R'y == S'y
true
As an exercise in the PyTorch framework (0.4.1), I am trying to display the gradient of X (gX or dSdX) in a simple linear layer (Z = X.W + B). To simplify my toy example, I call backward() on a sum of Z (not a loss).
To sum up, I want gX(dSdX) of S=sum(XW+B).
The problem is that the gradient of Z (dSdZ) is None. As a result, gX is wrong too of course.
import torch
X = torch.tensor([[0.5, 0.3, 2.1], [0.2, 0.1, 1.1]], requires_grad=True)
W = torch.tensor([[2.1, 1.5], [-1.4, 0.5], [0.2, 1.1]])
B = torch.tensor([1.1, -0.3])
Z = torch.nn.functional.linear(X, weight=W.t(), bias=B)
S = torch.sum(Z)
S.backward()
print("Z:\n", Z)
print("gZ:\n", Z.grad)
print("gX:\n", X.grad)
Result:
Z:
tensor([[2.1500, 2.9100],
[1.6000, 1.2600]], grad_fn=<ThAddmmBackward>)
gZ:
None
gX:
tensor([[ 3.6000, -0.9000, 1.3000],
[ 3.6000, -0.9000, 1.3000]])
I have exactly the same result if I use nn.Module as below:
class Net1Linear(torch.nn.Module):
    def __init__(self, wi, wo, W, B):
        super(Net1Linear, self).__init__()
        self.linear1 = torch.nn.Linear(wi, wo)
        self.linear1.weight = torch.nn.Parameter(W.t())
        self.linear1.bias = torch.nn.Parameter(B)

    def forward(self, x):
        return self.linear1(x)
net = Net1Linear(3,2,W,B)
Z = net(X)
S = torch.sum(Z)
S.backward()
print("Z:\n", Z)
print("gZ:\n", Z.grad)
print("gX:\n", X.grad)
First of all, you only calculate gradients for tensors where you enable the gradient by setting requires_grad to True.
So your output is just as one would expect: you get the gradient for X.
PyTorch does not save the gradients of intermediate results, for performance reasons. So you will just get the gradient for those tensors for which you set requires_grad to True.
However, you can use register_hook to extract the intermediate gradient during the calculation or to save it manually. Here I just save it to the grad attribute of the tensor Z:
import torch
# function to extract grad
def set_grad(var):
    def hook(grad):
        var.grad = grad
    return hook
X = torch.tensor([[0.5, 0.3, 2.1], [0.2, 0.1, 1.1]], requires_grad=True)
W = torch.tensor([[2.1, 1.5], [-1.4, 0.5], [0.2, 1.1]])
B = torch.tensor([1.1, -0.3])
Z = torch.nn.functional.linear(X, weight=W.t(), bias=B)
# register_hook for Z
Z.register_hook(set_grad(Z))
S = torch.sum(Z)
S.backward()
print("Z:\n", Z)
print("gZ:\n", Z.grad)
print("gX:\n", X.grad)
This will output:
Z:
tensor([[2.1500, 2.9100],
[1.6000, 1.2600]], grad_fn=<ThAddmmBackward>)
gZ:
tensor([[1., 1.],
[1., 1.]])
gX:
tensor([[ 3.6000, -0.9000, 1.3000],
[ 3.6000, -0.9000, 1.3000]])
Hope this helps!
Btw.: Normally you would want gradients to be activated for your parameters, i.e. your weights and biases. Because as the code stands now, using an optimizer would alter your inputs X rather than your weights W and bias B. So usually the gradient is activated for W and B in such a case.
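For illustration, a minimal sketch of that usual setup (the learning rate is chosen arbitrarily; only W and B are registered with the optimizer, so only they get updated):
import torch

X = torch.tensor([[0.5, 0.3, 2.1], [0.2, 0.1, 1.1]])  # inputs: no gradient needed
W = torch.nn.Parameter(torch.tensor([[2.1, 1.5], [-1.4, 0.5], [0.2, 1.1]]))
B = torch.nn.Parameter(torch.tensor([1.1, -0.3]))
optimizer = torch.optim.SGD([W, B], lr=0.01)  # arbitrary learning rate

S = torch.sum(torch.nn.functional.linear(X, weight=W.t(), bias=B))
S.backward()
optimizer.step()       # updates W and B, not X
optimizer.zero_grad()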
There's a much simpler way. Simply use retain_grad():
https://pytorch.org/docs/stable/autograd.html#torch.Tensor.retain_grad
Z.retain_grad()
before calling backward()
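Applied to the question's snippet, that looks like this (a minimal sketch):
import torch

X = torch.tensor([[0.5, 0.3, 2.1], [0.2, 0.1, 1.1]], requires_grad=True)
W = torch.tensor([[2.1, 1.5], [-1.4, 0.5], [0.2, 1.1]])
B = torch.tensor([1.1, -0.3])

Z = torch.nn.functional.linear(X, weight=W.t(), bias=B)
Z.retain_grad()          # keep the gradient of the intermediate tensor Z
torch.sum(Z).backward()
print("gZ:\n", Z.grad)   # a tensor of ones instead of None
print("gX:\n", X.grad)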
blue-phoenox, thanks for your answer. I am pretty happy to have heard about register_hook().
What led me to think that I had a wrong gX is that it was independent of the values of X. I will have to do the math to understand it. But using CrossEntropyLoss instead of SUM makes things much cleaner, so I updated the example for those who might be interested. Using SUM was a bad idea in this case.
T_dec = torch.tensor([0, 1])
X = torch.tensor([[0.5, 0.8, 2.1], [0.7, 0.1, 1.1]], requires_grad=True)
W = torch.tensor([[2.7, 0.5], [-1.4, 0.5], [0.2, 1.1]])
B = torch.tensor([1.1, -0.3])
Z = torch.nn.functional.linear(X, weight=W.t(), bias=B)
print("Z:\n", Z)
L = torch.nn.CrossEntropyLoss()(Z,T_dec)
Z.register_hook(lambda gZ: print("gZ:\n",gZ))
L.backward()
print("gX:\n", X.grad)
Result:
Z:
tensor([[1.7500, 2.6600],
[3.0700, 1.3100]], grad_fn=<ThAddmmBackward>)
gZ:
tensor([[-0.3565, 0.3565],
[ 0.4266, -0.4266]])
gX:
tensor([[-0.7843, 0.6774, 0.3209],
[ 0.9385, -0.8105, -0.3839]])
I am trying to create time stamp arrays in Swift.
So, say I want to go from 0 to 4 seconds, I can use Array(0...4), which gives [0, 1, 2, 3, 4]
But how can I get [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]?
Essentially I want a flexible delta, such as 0.5, 0.05, etc.
You can use stride(from:through:by:):
let a = Array(stride(from: 0.0, through: 4.0, by: 0.5))
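If you would rather fix the number of elements than the step, a small alternative sketch derives each element from an integer index:
let delta = 0.5
let a = (0...8).map { Double($0) * delta }
// [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]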
An alternative for non-constant increments (even more viable in Swift 3.1)
The stride(from:through:by:) function, as covered in @Alexander's answer, is the fit-for-purpose solution here. But for the case where readers of this Q&A want to construct a sequence (/collection) of non-constant increments (in which case the linear-sequence-constructing stride(...) falls short), I'll also include another alternative.
For such scenarios, sequence(first:next:) is a good method of choice; it constructs a lazy sequence that can be repeatedly queried for the next element.
E.g., constructing the first 5 ticks for a log10 scale (Double array)
let log10Seq = sequence(first: 1.0, next: { 10*$0 })
let arr = Array(log10Seq.prefix(5)) // [1.0, 10.0, 100.0, 1000.0, 10000.0]
Swift 3.1 is intended to be released in the spring of 2017, and with this (among lots of other things) comes the implementation of the following accepted Swift evolution proposal:
SE-0045: Add prefix(while:) and drop(while:) to the stdlib
prefix(while:) in combination with sequence(first:next:) provides a neat tool for generating sequences with everything from simple next methods (such as imitating the simple behaviour of stride(...)) to more advanced ones. The stride(...) example of this question is a good minimal (very simple) example of such usage:
/* this we can do already in Swift 3.0 */
let delta = 0.5
let seq = sequence(first: 0.0, next: { $0 + delta})
/* 'prefix(while:)' soon available in Swift 3.1 */
let arr = Array(seq.prefix(while: { $0 <= 4.0 }))
// [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
// ...
for elem in sequence(first: 0.0, next: { $0 + delta })
    .prefix(while: { $0 <= 4.0 }) {
    // ...
}
Again, this is not in contest with stride(...) for the simple case of this question, but it is very viable as soon as the useful but simple applications of stride(...) fall short, e.g. for constructing non-linear sequences.
I am writing a data mining algorithm in Scala, and I want to write the Euclidean distance function for a given test instance against several train instances. I have an Array[Array[Double]] with test and train instances. I have a method which loops through each test instance against all training instances and calculates distances between the two (picking one test and one train instance per iteration), returning a Double.
Say, for example, I have the following data points:
testInstance = Array(Array(3.2, 2.1, 4.3, 2.8))
trainPoints = Array(Array(3.9, 4.1, 6.2, 7.3), Array(4.5, 6.1, 8.3, 3.8), Array(5.2, 4.6, 7.4, 9.8), Array(5.1, 7.1, 4.4, 6.9))
I have a method stub (highlighting the distance function) which returns neighbours around a given test instance:
def predictClass(testPoints: Array[Array[Double]], trainPoints: Array[Array[Double]], k: Int): Array[Double] = {
  for (testInstance <- testPoints) {
    for (trainInstance <- trainPoints) {
      for (i <- 0 to k) {
        distance = euclideanDistanceBetween(testInstance, trainInstance) // need help in defining this function
      }
    }
  }
  return distance
}
I know how to write a generic Euclidean Distance formula as:
math.sqrt(math.pow((x1 - y1), 2) + math.pow((x2 - y2), 2))
I have some pseudo steps as to what I want the method to do with a basic definition of the function:
def distanceBetween(testInstance: Array[Double], trainInstance: Array[Double]): Double = {
  // subtract each element of trainInstance from the corresponding element of testInstance
  // for example,
  // iteration 1 will do [Array(3.9, 4.1, 6.2, 7.3) - Array(3.2, 2.1, 4.3, 2.8)]
  // i.e. sqrt((3.9-3.2)^2 + (4.1-2.1)^2 + (6.2-4.3)^2 + (7.3-2.8)^2)
  // return result
  // iteration 2 will do [Array(4.5, 6.1, 8.3, 3.8) - Array(3.2, 2.1, 4.3, 2.8)]
  // i.e. sqrt((4.5-3.2)^2 + (6.1-2.1)^2 + (8.3-4.3)^2 + (3.8-2.8)^2)
  // return result, and so on...
}
How can I write this in code?
So the formula you put in only works for two-dimensional vectors. You have four dimensions, but you should probably write your function to be flexible on this. The general formula is d(x, y) = sqrt((x1 - y1)^2 + (x2 - y2)^2 + ... + (xn - yn)^2).
So what you really want to say is:
for each position i:
    subtract the ith element of Y from the ith element of X
    square it
add all of those up
square root the whole thing
To make this more functional-programming style it will be more like:
square root the:
    sum of:
        zip X and Y into pairs
        for each pair, square the difference
So that would look like:
import math._
def distance(xs: Array[Double], ys: Array[Double]) = {
  sqrt((xs zip ys).map { case (x, y) => pow(y - x, 2) }.sum)
}
val testInstances = Array(Array(5.0, 4.8, 7.5, 10.0), Array(3.2, 2.1, 4.3, 2.8))
val trainPoints = Array(Array(3.9, 4.1, 6.2, 7.3), Array(4.5, 6.1, 8.3, 3.8), Array(5.2, 4.6, 7.4, 9.8), Array(5.1, 7.1, 4.4, 6.9))
distance(testInstances.head, trainPoints.head)
// 3.2680269276736382
As for predicting the class, you can make that more functional too, but it's unclear what the Double is that you are intending to return. It seems like you would want to predict the class for each test instance? Maybe choosing the class c corresponding to the nearest training point?
def findNearestClasses(testPoints: Array[Array[Double]], trainPoints: Array[Array[Double]]): Array[Int] = {
  testPoints.map { testInstance =>
    trainPoints.zipWithIndex.map { case (trainInstance, c) =>
      c -> distance(testInstance, trainInstance)
    }.minBy(_._2)._1
  }
}
findNearestClasses(testInstances, trainPoints)
// Array(2, 0)
Or maybe you want the k-nearest neighbors:
def findKNearestClasses(testPoints: Array[Array[Double]], trainPoints: Array[Array[Double]], k: Int): Array[Int] = {
  testPoints.map { testInstance =>
    val distances =
      trainPoints.zipWithIndex.map { case (trainInstance, c) =>
        c -> distance(testInstance, trainInstance)
      }
    val classes = distances.sortBy(_._2).take(k).map(_._1)
    val classCounts = classes.groupBy(identity).mapValues(_.size)
    classCounts.maxBy(_._2)._1
  }
}
findKNearestClasses(testInstances, trainPoints, 3)
// Array(2, 1)
The formula for the Euclidean distance between two points (x1, y1) and (x2, y2) is:
math.sqrt(math.pow((x1 - x2), 2) + math.pow((y1 - y2), 2))
You can only compare the x coordinate with the x, and y with the y.
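Extended to vectors of any length, matching coordinates pairwise (a sketch, essentially the same approach as the distance function above, with an explicit dimension check):
def euclidean(a: Array[Double], b: Array[Double]): Double = {
  require(a.length == b.length, "vectors must have the same dimension")
  math.sqrt(a.zip(b).map { case (ai, bi) => math.pow(ai - bi, 2) }.sum)
}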