I have a problem with values stored in a dictionary: I don't understand why they change unintentionally inside a loop.
Here, x_exog["B_l_pre"][2] changed from 0.5 to 0.525, but I only modified x_exog["B_l_post"][2]. Why?
## Parameters and set up the environment
# Set exogenous parameters
x_exog = Dict{String, Any}()
# set amenity
x_exog["B_l_pre"] = [1;0.5;2]
x_exog["B_h_pre"] = [1;0.5;2]
x_exog["B_l_post"] = x_exog["B_l_pre"]
x_exog["B_h_post"] = x_exog["B_h_pre"]
x_exog_baseline = x_exog
# define the parameters
shock = "amenity"
for run in ["baseline", "pro_poor_program", "pro_rich_program"]
    # set the initial values for exogenous variables
    local x_exog = x_exog_baseline
    x_exog["run"] = run
    x_exog["shock"] = shock
    # define the policy shock
    if shock == "amenity"
        # improve amenity slum
        if run == "pro_poor_program"
            x_exog["B_l_post"][2] = x_exog["B_l_post"][2] * 1.05
        elseif run == "pro_rich_program"
            x_exog["B_h_post"][2] = x_exog["B_h_post"][2] * 1.05
        else
            x_exog["B_l_post"][2] = x_exog["B_l_post"][2] * 1.05
            x_exog["B_h_post"][2] = x_exog["B_h_post"][2] * 1.05
        end
    end
    print(x_exog["B_l_pre"][2], x_exog["B_h_pre"][2]) ### Why has the loop changed x_exog["B_l_pre"] and x_exog["B_h_pre"]?
end
Julia uses pass-by-sharing (see this SO question How to pass an object by reference and value in Julia?).
Basically, for primitive types the assignment operator assigns a value, while for complex (mutable) types it assigns a reference. As a result, both x_exog["B_l_post"] and x_exog["B_l_pre"] point to the same memory location (=== compares mutable objects by their address in memory):
julia> x_exog["B_l_post"] === x_exog["B_l_pre"]
true
What you need to do is create a copy of the object:
x_exog["B_l_post"] = deepcopy(x_exog["B_l_pre"])
Now they are two separate objects just having the same value:
julia> x_exog["B_l_post"] === x_exog["B_l_pre"]
false
julia> x_exog["B_l_post"] == x_exog["B_l_pre"]
true
Hence, in your case, the fix is x_exog["B_l_post"] = deepcopy(x_exog["B_l_pre"]) (and likewise for B_h_post; note that local x_exog = x_exog_baseline inside the loop aliases the Dict in exactly the same way).
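Concretely, a corrected version of the setup from the question might look like this (a sketch; only the aliasing lines change):

```julia
x_exog = Dict{String, Any}()
x_exog["B_l_pre"] = [1; 0.5; 2]
x_exog["B_h_pre"] = [1; 0.5; 2]
# copy instead of alias, so mutating *_post never touches *_pre
x_exog["B_l_post"] = deepcopy(x_exog["B_l_pre"])
x_exog["B_h_post"] = deepcopy(x_exog["B_h_pre"])
x_exog_baseline = x_exog

for run in ["baseline", "pro_poor_program", "pro_rich_program"]
    # deepcopy here too: a plain Dict assignment would alias x_exog_baseline
    local x_exog = deepcopy(x_exog_baseline)
    x_exog["B_l_post"][2] *= 1.05
    # x_exog["B_l_pre"][2] is still 0.5, and so is the baseline
end
```

With the copies in place, each iteration works on its own data and the baseline is never mutated.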
It's quite simple: it happens because you wrote
x_exog["B_l_post"] = x_exog["B_l_pre"]
And remember that you defined x_exog["B_l_pre"] as:
x_exog["B_l_pre"] = [1;0.5;2]
So x_exog["B_l_post"] refers to the same object in memory as x_exog["B_l_pre"]. To avoid this, you can assign a copy of x_exog["B_l_pre"] to x_exog["B_l_post"]:
julia> x_exog["B_l_post"] = copy(x_exog["B_l_pre"])
3-element Vector{Float64}:
1.0
0.5
2.0
julia> x_exog["B_l_post"][2] = 2
2
julia> x_exog["B_l_post"]
3-element Vector{Float64}:
1.0
2.0
2.0
julia> x_exog["B_l_pre"]
3-element Vector{Float64}:
1.0
0.5
2.0
As you can see, I changed the second element of x_exog["B_l_post"] to 2, but the change does not appear in x_exog["B_l_pre"], because they are now two separate objects.
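One caveat worth adding (not stated in the answers above): for a flat Vector{Float64}, copy is enough, and deepcopy only behaves differently once the elements are themselves mutable. A quick sketch:

```julia
v = [[1.0], [2.0]]    # a vector whose elements are themselves vectors
shallow = copy(v)     # new outer vector, but the inner vectors are shared
deep = deepcopy(v)    # everything duplicated recursively

shallow[1][1] = 99.0  # mutates v[1] as well, because it is shared
v[1][1]               # 99.0
deep[1][1]            # still 1.0
```

So for the flat vectors in the question either copy or deepcopy works; for nested structures only deepcopy gives full independence.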
I'm currently working on creating a subtype of AbstractArray in Julia which stores a vector alongside the array itself. You can think of that vector as column "names", except its element type is a subtype of AbstractFloat. It therefore has some similarity to the NamedArrays.jl package, but is restricted to float-valued column "names" (in the matrix case).
The struct that I've created so far (following the guide to create a subtype of AbstractArray) is defined as follows:
struct FooArray{T, N, AT, VT} <: AbstractArray{T, N}
    data::AT
    vec::VT
    function FooArray(data::AbstractArray{T1, N}, vec::AbstractVector{T2}) where {T1 <: AbstractFloat, T2 <: AbstractFloat, N}
        length(vec) == size(data, 2) || error("Inconsistent dimensions")
        new{T1, N, typeof(data), typeof(vec)}(data, vec)
    end
end
@inline Base.@propagate_inbounds Base.getindex(fooarr::FooArray, i::Int) = getindex(fooarr.data, i)
@inline Base.@propagate_inbounds Base.getindex(fooarr::FooArray, I::Vararg{Int, 2}) = getindex(fooarr.data, I...)
@inline Base.@propagate_inbounds Base.size(fooarr::FooArray) = size(fooarr.data)
Base.IndexStyle(::Type{<:FooArray}) = IndexLinear()
This already seems to be enough to create objects of type FooArray and do some simple math with them. However, I've observed that some essential operations, such as matrix-vector multiplication, seem to be imprecise. For example, the following should return a vector of exact zeros, but:
R = rand(100, 3)
S = FooArray(R, collect(1.0:3.0))
y = rand(100)
S'y - R'y
3-element Vector{Float64}:
-7.105427357601002e-15
0.0
3.552713678800501e-15
While the differences are very small, they can quickly add up over many different calculations, leading to significant errors.
Where do these differences come from?
A look at the calculations via the @code_llvm macro reveals that apparently different matmul methods from LinearAlgebra are used (along with other minor differences):
@code_llvm S'y
...
# C:\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.6\LinearAlgebra\src\matmul.jl:111 within `*'
...
@code_llvm R'y
...
# C:\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.6\LinearAlgebra\src\matmul.jl:106 within `*'
...
Redefining the adjoint and * functions on our FooArray object provides the expected, correct result:
import Base: *, adjoint, /
Base.adjoint(a::FooArray) = FooArray(a.data', zeros(size(a.data, 1)))
*(a::FooArray{T, 2, AT, VT} where {AT, VT}, b::AbstractVector{S}) where {T, S} = a.data * b
S'y - R'y
3-element Vector{Float64}:
0.0
0.0
0.0
However, this solution (which is also what NamedArrays does here) would require defining and maintaining all sorts of functions, not just the standard functions in Base, adding more and more dependencies just to remove this small error margin.
Is there any simpler way to get rid of this issue without redefining every operation and possibly many other functions from other packages?
I'm using Julia version 1.6.1 on Windows 64-bit system.
Yes, the implementation of matrix multiplication will vary depending upon your array type. The builtin Array will use BLAS, whereas your custom FooArray will use a generic fallback implementation, and due to the non-associativity of floating-point arithmetic, these different approaches will indeed yield different values; note that they may both differ from the ground truth, even for the builtin Array!
julia> using Random; Random.seed!(0); R = rand(100, 3); y = rand(100);
julia> R'y - Float64.(big.(R)'big.(y))
3-element Vector{Float64}:
-3.552713678800501e-15
0.0
0.0
You may be able to implement your custom array as a DenseArray, which will ensure that it uses the same (BLAS-enabled) codepath. You just need to implement a few more methods, most importantly strides and unsafe_convert:
julia> struct FooArray{T, N} <: DenseArray{T, N}
data::Array{T, N}
end
Base.getindex(fooarr::FooArray, i::Int) = fooarr.data[i]
Base.size(fooarr::FooArray) = size(fooarr.data)
Base.IndexStyle(::Type{<:FooArray}) = IndexLinear()
Base.strides(fooarr::FooArray) = strides(fooarr.data)
Base.unsafe_convert(P::Type{Ptr{T}}, fooarr::FooArray{T}) where {T} = Base.unsafe_convert(P, fooarr.data)
julia> R = rand(100, 3); S = FooArray(R); y = rand(100)
R'y - S'y
3-element Vector{Float64}:
0.0
0.0
0.0
julia> R = rand(100, 1000); S = FooArray(R); y = rand(100)
R'y == S'y
true
type ExtendedJumpArray{T,T2} <: AbstractArray{Float64,1}
    u::T
    jump_u::T2
end
Base.length(A::ExtendedJumpArray) = length(A.u)
Base.size(A::ExtendedJumpArray) = (length(A),)
function Base.getindex(A::ExtendedJumpArray, i::Int)
    i <= length(A.u) ? A.u[i] : A.jump_u[i - length(A.u)]
end
function Base.setindex!(A::ExtendedJumpArray, v, i::Int)
    i <= length(A.u) ? (A.u[i] = v) : (A.jump_u[i - length(A.u)] = v)
end
similar(A::ExtendedJumpArray) = deepcopy(A)
indices(A::ExtendedJumpArray) = Base.OneTo(length(A.u) + length(A.jump_u))
I thought I was the cool kid on the block, creating an array which could index past its length (I am doing it for a specific reason). But Julia apparently doesn't like this:
julia> ExtendedJumpArray([0.2],[-2.0])
Error showing value of type ExtendedJumpArray{Array{Float64,1},Array{Float64,1}}:
ERROR: MethodError: no method matching inds2string(::Int64)
Closest candidates are:
inds2string(::Tuple{Vararg{AbstractUnitRange,N}}) at show.jl:1485
in _summary(::ExtendedJumpArray{Array{Float64,1},Array{Float64,1}}, ::Int64) at .\show.jl:1490
in #showarray#330(::Bool, ::Function, ::IOContext{Base.Terminals.TTYTerminal}, ::ExtendedJumpArray{Array{Float64,1},Array{Float64,1}}, ::Bool) at .\show.jl:1599
in display(::Base.REPL.REPLDisplay{Base.REPL.LineEditREPL}, ::MIME{Symbol("text/plain")}, ::ExtendedJumpArray{Array{Float64,1},Array{Float64,1}}) at .\REPL.jl:132
in display(::Base.REPL.REPLDisplay{Base.REPL.LineEditREPL}, ::ExtendedJumpArray{Array{Float64,1},Array{Float64,1}}) at .\REPL.jl:135
in display(::ExtendedJumpArray{Array{Float64,1},Array{Float64,1}}) at .\multimedia.jl:143
in print_response(::Base.Terminals.TTYTerminal, ::Any, ::Void, ::Bool, ::Bool, ::Void) at .\REPL.jl:154
in print_response(::Base.REPL.LineEditREPL, ::Any, ::Void, ::Bool, ::Bool) at .\REPL.jl:139
in (::Base.REPL.##22#23{Bool,Base.REPL.##33#42{Base.REPL.LineEditREPL,Base.REPL.REPLHistoryProvider},Base.REPL.LineEditREPL,Base.LineEdit.Prompt})(::Base.LineEdit.MIState, ::Base.AbstractIOBuffer{Array{UInt8,1}}, ::Bool) at .\REPL.jl:652
in run_interface(::Base.Terminals.TTYTerminal, ::Base.LineEdit.ModalInterface) at .\LineEdit.jl:1579
in run_frontend(::Base.REPL.LineEditREPL, ::Base.REPL.REPLBackendRef) at .\REPL.jl:903
in run_repl(::Base.REPL.LineEditREPL, ::Base.##932#933) at .\REPL.jl:188
in _start() at .\client.jl:360
Is there an easy way to do this without breaking the show methods, and whatever else may be broken? Or is there a better way to do this in general?
indices needs to return a tuple, just like size does.
julia> Base.similar(A::ExtendedJumpArray) = deepcopy(A)
julia> Base.indices(A::ExtendedJumpArray) = (Base.OneTo(length(A.u) + length(A.jump_u)),)
julia> ExtendedJumpArray([0.2],[-2.0])
2-element ExtendedJumpArray{Array{Float64,1},Array{Float64,1}}:
0.2
-2.0
julia> length(ans)
1
Having indices and size disagree in the dimensionality of an array, though, is likely to end with confusion and strife. Some functions use size, whereas others use indices. See display vs. length above.
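For reference, indices was later renamed axes in Julia 1.0. A version where length, size, and axes all agree avoids the mismatch entirely; a sketch in current struct syntax (ConsistentJumpArray is a hypothetical name):

```julia
struct ConsistentJumpArray{T,T2} <: AbstractVector{Float64}
    u::T
    jump_u::T2
end
# length, size, and the default axes all agree: the array spans both vectors
Base.length(A::ConsistentJumpArray) = length(A.u) + length(A.jump_u)
Base.size(A::ConsistentJumpArray) = (length(A),)
Base.getindex(A::ConsistentJumpArray, i::Int) =
    i <= length(A.u) ? A.u[i] : A.jump_u[i - length(A.u)]
```

Now ConsistentJumpArray([0.2], [-2.0]) both displays as and reports a 2-element vector, at the cost of giving up the "length shorter than the index range" trick.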
When I construct a 2-element array in two different ways (a and b below), I get two different results when I add an element to one of the inner arrays. This also happens with append!. Based on the output after constructing each, I'd expect them to behave exactly the same?
julia> a = [[],[]]
2-element Array{Array{Any,1},1}:
Any[]
Any[]
julia> push!(a[1],1.0)
1-element Array{Any,1}:
1.0
julia> a
2-element Array{Array{Any,1},1}:
Any[1.0]
Any[]
julia> b = fill([],2)
2-element Array{Array{Any,1},1}:
Any[]
Any[]
julia> push!(b[1],1.0)
1-element Array{Any,1}:
1.0
julia> b
2-element Array{Array{Any,1},1}:
Any[1.0]
Any[1.0]
fill creates an array whose n elements are all the very same object (nothing is copied), so that b[1] === b[2]; when you update b[1], you're updating the same object that b[2] also points to.
Take a look at the following examples.
Hopefully the behavior is as you expect.
Input[1]:
a_1 = []
a_2 = []
a = [a_1, a_2]
push!(a_1, 1.0)
@show a_1
@show a
Output[1]:
a_1 = Any[1.0]
a = Array{Any,1}[Any[1.0],Any[]]
Input[2]:
push!(a[1], 2.0)
@show a_1 # guess what this shows
@show a
Output[2]:
a_1 = Any[1.0,2.0]
a = Array{Any,1}[Any[1.0,2.0],Any[]]
Input[3]:
b_n = []
b = fill(b_n, 2)
push!(b_n, 1.0)
@show b_n
@show b
Output[3]:
b_n = Any[1.0]
b = Array{Any,1}[Any[1.0],Any[1.0]]
Input[4]:
push!(b[1], 2.0)
@show b_n # guess what this shows
@show b
Output[4]:
b_n = Any[1.0,2.0]
b = Array{Any,1}[Any[1.0,2.0],Any[1.0,2.0]]
Input[5]:
c_n = []
c = [c_n, c_n]
push!(c_n, 1.0)
@show c_n
@show c
Output[5]:
c_n = Any[1.0]
c = Array{Any,1}[Any[1.0],Any[1.0]]
Input[6]:
push!(c[1], 2.0)
@show c_n
@show c
Output[6]:
c_n = Any[1.0,2.0]
c = Array{Any,1}[Any[1.0,2.0],Any[1.0,2.0]]
So Input[1] is the same as your a, and Input[3] is the same as your b.
Each time you write [] you construct a new Vector.
So in the first case, a = [[],[]] creates a vector containing two new vectors, which I call a_1 and a_2 in Input[1].
In the second case, b = fill([], 2) creates one vector, which I call b_n in Input[3], and then fills a vector of length 2 with that same vector b_n.
This is equivalent to the example in Input[5] (with c).
I might as well have written [b_n, b_n] instead of fill(b_n, 2).
So every position references the same vector, and changing one changes both.
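If the goal was two independent empty vectors, the standard fix (not mentioned above, but a common idiom) is an array comprehension, which evaluates [] once per element:

```julia
b = [Any[] for _ in 1:2]  # a fresh empty vector for each slot
push!(b[1], 1.0)
# b[1] is now Any[1.0] while b[2] is still empty, and b[1] !== b[2]
```

This gives the same behavior as the literal [[],[]] while still letting you choose the length programmatically.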
I'm porting some code from Julia 0.4.7 to 0.5.1, and I've noticed an incompatibility related to arrays of anonymous functions. The code is here:
f = x::Array{Function} -> size(x)
# Option 1
f([k -> k+1, k-> k+1]) # This works in 0.4 & 0.5
# Option 2
f(repmat([k -> k+1], 2)) # This only works in 0.4
As far as I can see, the difference is that while in 0.4 the array of anonymous functions is internally seen as Array{Function, 1}, in 0.5 it is seen as Array{#11#12, 1} (the numbers may change), so a MethodError is raised because the types don't match.
Although the example is contrived, it shows what I really need: to replicate an anonymous function a variable number of times.
Thanks!
In Julia 0.5+, every function (including each anonymous function) gets its own concrete type, which is a subtype of the abstract type Function; since Array is invariant in its element type parameter, an array of one specific function type is not an Array{Function}.
julia> typeof(x -> 2x)
##1#2
julia> typeof(x -> 2x) <: Function
true
julia> typeof([x -> 2x]) <: Array{Function}
false
As a result, the correct way to define f is:
f{T<:Function}(x::Array{T}) = size(x)
julia> f(repmat([k -> k+1], 2))
(2,)
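An alternative to the type-parameter method (a standard trick, not from the answer above) is forcing the array's element type with a typed array literal, so that the array really is an Array{Function}; note that in current Julia repmat has been replaced by repeat:

```julia
f(x::Array{Function}) = size(x)

# Function[...] forces the element type to Function, which repeat preserves
xs = repeat(Function[k -> k + 1], 2)
f(xs)  # (2,)
```

The type-parameter signature is usually preferable (it avoids the abstract element type and the dispatch cost that comes with it), but the typed literal is handy when you cannot change f.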
I have functions which are called many times and require temporary arrays. Rather than array allocation happening every time the function is called, I would like the temporary to be statically allocated once.
How do I create a statically allocated array in Julia, with function scope?
OK, let's assume that your function is called foo with an argument x, and your array has just 100 elements (each a 64-bit value) in one dimension. Then you can wrap the function in a scope:
let A = Array{Int64}(100)
    global foo
    function foo(x)
        # do your tasks, reusing A
    end
end
A is a let-bound variable, so it does not overwrite any global A.
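In current Julia the uninitialized-array constructor takes undef, so a runnable version of this pattern looks like the sketch below (the body of foo, which just fills the buffer and sums it, is an invented placeholder):

```julia
let A = Vector{Float64}(undef, 100)  # allocated exactly once
    global function foo(x)
        A .= x        # reuse the preallocated buffer on every call
        return sum(A)
    end
end

foo(2.0)  # 200.0, with no fresh allocation of A per call
```

Each call to foo overwrites the same 100-element buffer instead of allocating a new one.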
You can wrap the temp array and the function together in a composite type:
type MyWrapper
    thetmparray
    thefunction::Function
    function MyWrapper(outertmp::Array)
        this = new(outertmp)
        this.thefunction = function ()
            # use this.thetmparray or outertmp
        end
        return this
    end
end
This way you can avoid global variables and (in the future) have a per-executor/thread/process/machine temp array.
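In post-0.6 syntax (type was replaced by mutable struct), the same idea might be sketched as follows; the name TmpWrapper and the closure body (fill the buffer, sum it) are invented for illustration:

```julia
mutable struct TmpWrapper
    thetmparray::Vector{Float64}
    thefunction::Function
    function TmpWrapper(outertmp::Vector{Float64})
        this = new(outertmp)           # partial init; thefunction set next
        this.thefunction = function (x)
            this.thetmparray .= x      # reuse the wrapped buffer
            return sum(this.thetmparray)
        end
        return this
    end
end

w = TmpWrapper(zeros(10))
w.thefunction(3.0)  # 30.0
```

The closure captures the wrapper, so every call reuses the same buffer without any globals.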
You can use either a let block or partial application (I prefer the latter for cases like this):
function bind_array(A::Array)
    function f(x)
        A = A * x  # rebinds the captured A, so state persists between calls
    end
end
Now you can bind a private array to every new "instance" of your f:
julia> f_x = bind_array(ones(1,2))
f (generic function with 1 method)
julia> display(f_x(2))
1x2 Array{Float64,2}:
2.0 2.0
julia> display(f_x(3))
1x2 Array{Float64,2}:
6.0 6.0
julia> f_y = bind_array(ones(3,2))
f (generic function with 1 method)
julia> display(f_y(2))
3x2 Array{Float64,2}:
2.0 2.0
2.0 2.0
2.0 2.0
julia> display(f_y(3))
3x2 Array{Float64,2}:
6.0 6.0
6.0 6.0
6.0 6.0