I am trying to save multiple solutions of my ODE in an array. Right now this is what I have:
sols = []
for i in 1:numSim
    if solver == "Rosenbrock23"
        solution = solve(odeprob, Rosenbrock23())
        append!(sols, solution)
    end
end
As you can see, I only want to append to this array if a certain ODE solver is used. However, the append! statement seems to ignore this condition and runs on every iteration of the loop. I tried preallocating the array sols so I could use a statement like this: sols[i] = solution
But here I'm struggling with the type declaration of the array sols.
I tried
sols = zeros(length)
and then
sols[i] = solution
However, solution is of type ODESolution and cannot be converted to Float64.
Please provide adequate information to get an exact answer. I can't reproduce your problem since you didn't mention what odeprob is or the value of numSim. Since you declared sols = [], the eltype of sols is Any; hence it should be capable of containing any element.
However, I can replicate the code in the official docs (Example 1: Solving Scalar Equations (ODE)) and combine it with your approach:
using DifferentialEquations
f(u,p,t) = 1.01*u
u0 = 1/2
tspan = (0.0,1.0)
odeprob = ODEProblem(f,u0,tspan)
sols = []
numSim = 2
solver = "Rosenbrock23"
for i in 1:numSim
    if solver == "Rosenbrock23"
        solution = solve(odeprob, Rosenbrock23())
        append!(sols, solution)
    end
end
Then if I call the sols variable:
julia> sols
22-element Vector{Any}:
0.5
0.5015852274675505
0.5177172276935554
0.5423763076371255
0.5852018498590001
0.6425679823795596
0.7275742030693312
0.835930076418226
0.9846257728490266
1.1713401410831334
1.3738764155543854
0.5
0.5015852274675505
0.5177172276935554
0.5423763076371255
0.5852018498590001
0.6425679823795596
0.7275742030693312
0.835930076418226
0.9846257728490266
1.1713401410831334
1.3738764155543854
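Note that append!(sols, solution) iterates over the solution object and copies its saved state values into sols one by one, which is why you get 22 numbers above instead of 2 solution objects. If the goal is to keep each ODESolution intact, a minimal sketch (reusing the odeprob, numSim and solver defined above) would use push! instead:
sols = []
for i in 1:numSim
    if solver == "Rosenbrock23"
        solution = solve(odeprob, Rosenbrock23())
        push!(sols, solution)   # stores the whole ODESolution, not its individual values
    end
end
After this loop sols has one entry per solve, each a complete ODESolution, so fields like sols[1].t and sols[1].u remain accessible. If you prefer preallocation, sols = Vector{Any}(undef, numSim) followed by sols[i] = solution works as well, since an element of type Any can hold an ODESolution.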
In MATLAB, I have a struct array of the following form:
a(1).b.c = rand(1,10);
a(1).b.cSize = length(a(1).b.c);
a(2).b.c = rand(1,11);
a(2).b.cSize = length(a(2).b.c);
a(3).b.c = rand(1,12);
a(3).b.cSize = length(a(3).b.c);
a(4).b.c = rand(1,13);
a(4).b.cSize = length(a(4).b.c);
a(5).b.c = rand(1,14);
a(5).b.cSize = length(a(5).b.c);
a(6).b.c = rand(1,15);
a(6).b.cSize = length(a(6).b.c);
I would like to create a cell array c that contains the differently sized fields a.b.c of the nested struct, without using for loops.
I tried the following:
c = {a.b.c}
which is not working and returns the following error message:
Expected one output from a curly brace or dot indexing expression,
but there were 6 results.
The best solution I've found so far is the following:
cellfun(@(x) x.c, {a.b}, 'UniformOutput', false)
Is there a faster solution without using cellfun? Maybe some reshape command?
You can create a structure array from a.b, then extract the field c from that array.
ab = [a.b];
result = {ab.c}
Just for fun, here's a one-line version of rahnema1's answer:
[result{1:numel(a)}] = subsref([a.b], substruct('.','c'));
I strongly discourage you from using this in the wild though; almost no one understands it on first read (which is a good rule of thumb to use for coding).
I've got multiple arrays that you can't quite fit a curve/equation to, but I do need to solve them for a lot of values. Simplified, it looks like this when I plot it, but the real ones have a lot more points:
So say I would like to solve for y = 22, how would I do that? As you can see, there would be three solutions to this, but I only need the leftmost one.
Linear is okay, but I'd rather use a non-linear method.
The only way I've found is to fit an equation to a set of points and solve that equation, but an equation can't approximate the array accurately enough.
This implementation uses first-order (linear) interpolation; if you're looking for higher accuracy and it feels appropriate, you can use a similar strategy with a higher-order estimator.
Assuming data is the name of your array, with x values in the first column and y values in the second, that the rows are sorted by increasing or decreasing x values, and that you want to find all points where y = 22:
searchPoint = 22;  %search for all solutions where y = 22
matchPoints = [];  %vector collecting all matching x values
for ii = 1:length(data)-1
    if (data(ii,2)>searchPoint)&&(data(ii+1,2)<searchPoint)
        xMatch = data(ii,1)+(searchPoint-data(ii,2))*(data(ii+1,1)-data(ii,1))/(data(ii+1,2)-data(ii,2)); %Linear interpolation to solve for xMatch
        matchPoints = [matchPoints xMatch];
    elseif (data(ii,2)<searchPoint)&&(data(ii+1,2)>searchPoint)
        xMatch = data(ii,1)+(searchPoint-data(ii,2))*(data(ii+1,1)-data(ii,1))/(data(ii+1,2)-data(ii,2)); %Linear interpolation to solve for xMatch
        matchPoints = [matchPoints xMatch];
    elseif (data(ii,2)==searchPoint) %check if data(ii,2) hits the search value exactly
        matchPoints = [matchPoints data(ii,1)];
    end
end
if(data(end,2)==searchPoint) %ii only runs to length(data)-1, so check the last point separately
    matchPoints = [matchPoints data(end,1)];
end
This was written without a compiler at hand, but the logic was tested in Octave (in other words, sorry if there's a slight typo in a variable name, but the math should be correct).
Using R on a Windows machine, I am currently running a nested loop over a 3D array (720x360x1368), which cycles through d1 and d2 to apply a function over d3 and assembles the output into a new array of the same dimensionality.
In the following reproducible example, I have reduced the dimensions by a factor of 10 to make execution faster.
library(SPEI)
old.array = array(abs(rnorm(50)), dim=c(72,36,136))
new.array = array(dim=c(72,36,136))
for (i in 1:72) {
    for (j in 1:36) {
        new.listoflists <- spi(ts(old.array[i,j,], freq=12, start=c(1901,1)), 1, na.rm = T)
        new.array[i,j,] = new.listoflists$fitted
    }
}
where spi() is a function from the SPEI package returning a list of lists, of which one particular element, $fitted, of length 1368, is taken in each loop iteration to construct the new array.
While this loop works flawlessly, it takes quite a long time to compute. I have read that foreach can be used to parallelize for loops.
However, I do not understand how the nesting and the assembly of the new array can be achieved such that the dimnames of the old and the new array stay consistent.
(In the end, what I want to be able to, is to transform both the old and the new array into a "flat" long panel data frame using as.data.frame.table() and merge them along their three dimensions.)
Any help on how I can achieve the desired output using parallel computing will be highly appreciated!
Cheers
CubicTom
It would have been better with a reproducible example; here is what I came up with:
First, create the cluster to use:
library(doSNOW)  # for registerDoSNOW; attaches foreach and snow
cl <- makeCluster(6, type = "SOCK")
registerDoSNOW(cl)
Then you create the loop and close the cluster:
zz <- foreach(i = 1:720, .combine = c) %:%
    foreach(j = 1:360, .combine = c) %dopar% {
        new.listoflists <- FUN(old.array[i,j,])
        new.array[i,j,] <- new.listoflists$list
    }
stopCluster(cl)
This will create a list zz containing the result of every iteration over new.array[i,j,]; you can then bind them together with:
new.obj <- plyr::ldply(zz, data.frame)
Hope this helps you!
I did not use dimensions as large as in your question because I wanted to make sure the behavior was correct.
So here I use mapply, which takes multiple arguments. The result is a list of the individual results, which I then wrapped with matrix() to get the dimensions you hoped for.
Please note that i is repeated using times and j is repeated using each. This is critical because matrix() fills column by column by default: it runs down the rows of the first column and wraps to the next column once the number of rows is reached, so the (i, j) pairs must be supplied in that same column-major order.
new.array = array(1:(5*10*4), dim=c(5,10,4))
# FUN: a dummy function which returns a list (standing in for the real computation)
FUN <- function(x){
    list(lapply(x, rep, times=3))
}
# result of the computation
result <- matrix(
    mapply(
        function(i, j, ...){
            FUN(new.array[i,j,])
        },
        i = rep(1:nrow(new.array), times = ncol(new.array)),
        j = rep(1:ncol(new.array), each = nrow(new.array)),
        MoreArgs = list(new.array = new.array)  # passed once per call rather than vectorised over
    ),
    nrow = nrow(new.array),
    ncol = ncol(new.array)
)
I am quite new to Julia and I don't know how to remove consecutive duplicates in an array. For example, if you take this array:
v = [8,8,8,9,5,5,8,8,1];
I would like to obtain the vector v1 such that:
v1 = [8,9,5,8,1];
Could anyone help me? Many thanks.
One method could be to define:
function fastuniq(v)
    v1 = Vector{eltype(v)}()
    if length(v) > 0
        laste = v[1]
        push!(v1, laste)
        for e in v
            if e != laste
                laste = e
                push!(v1, laste)
            end
        end
    end
    return v1
end
And with this function, you have:
julia> println(fastuniq(v))
[8,9,5,8,1]
But when dealing with arrays, one needs to decide whether elements are to be deep or shallow copied. In the case of integers, it doesn't matter.
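For instance, here is a small illustrative sketch (with a made-up vector of vectors w) showing that the returned vector shares its elements with the input rather than copying them:
w = [[1,2], [1,2], [3]]
w1 = fastuniq(w)   # gives [[1,2], [3]]; duplicates are compared by value (==)
w1[1][1] = 99      # also mutates w[1], because the inner array was not copied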
In StatsBase.jl there is an rle function (Run-length encoding) that does exactly this.
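A minimal sketch of that approach, assuming StatsBase is installed and using the v from the question (rle returns the run values together with the run lengths):
using StatsBase
vals, lens = rle(v)
vals   # [8, 9, 5, 8, 1] -- the consecutive duplicates collapsed
lens   # [3, 1, 2, 2, 1] -- how many times each value was repeated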
This is a lot slower than @DanGetz's function, but here is a way to do it in one line:
function notsofastunique(v)
    return [v[1]; v[2:end][v[2:end] .!= v[1:end-1]]]
end
julia> println(notsofastunique(v))
[8,9,5,8,1]
Maybe it's useful for someone looking for a vectorised solution.
In the spirit of @niczky12's one-liner goal, the following uses the Iterators.jl package (very useful and slowly migrating into Base).
using Iterators # install with Pkg.add("Iterators")
neatuniq(v) = map(first,filter(p->p[1]!=p[2],partition(chain(v,[nothing]),2,1)))
I have not done any benchmarks, but it should be OK (though slower than the longer for-based function).
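If anyone wants to check, a quick sketch of how one could benchmark the two (assuming the BenchmarkTools package is installed and v is the vector from the question):
using BenchmarkTools
@btime fastuniq($v)    # the for-based function from the answer above
@btime neatuniq($v)    # the Iterators.jl one-liner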
Just for the sake of practicing...
Here is another little function you can use to do this. This function will work for non-negative values (including 0) only.
function anotherone(v)
    v1 = zeros(eltype(v), length(v))
    v1[1] = v[1] + 1
    for e = 2:length(v)
        if v[e] != v[e-1]
            v1[e] = v[e] + 1
        end
    end
    return v1[find(v1)] - 1
end
Edit:
Adding one more version, based on the input in the comments. I think this one should be even faster; maybe you could test it out :) This version should work for negative numbers as well.
function anotherone(v)
    v1 = falses(length(v))
    v1[1] = true
    for e = 2:length(v)
        if v[e] != v[e-1]
            v1[e] = true
        end
    end
    return v[v1]
end
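For reference, running it on the v from the question should give the same result as the other answers:
julia> println(anotherone(v))
[8,9,5,8,1]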
Consider the following code snippet
for i = 1:100
    Yi = x(i:i + 3); % i in Yi is not an index but subscript,
                     % x is some array having sufficient values
    i = i + 3
end
Basically, I want the subscript to change from 1 to 2, 3, ..., 100 each time the for loop runs. So in effect, after 100 iterations I will have 100 arrays, Y1 through Y100.
What could be the simplest way to implement this in MATLAB?
UPDATE
This is to be run 15 times
Y1 = 64;
fft_x = 2 * abs(Y1(5));
For simplicity I have taken constant inputs.
Now I am trying to use a cell array, based on Marc's answer:
Y1 = cell(15,1);
fft_x = cell(15,1);
for i = 1:15
    Y1{i,1} = 64;
    fft_x{i,1} = 2 * abs(Y1(5));
end
I think I need to do some changes in abs(). Please suggest.
It is impossible to make variably-named variables in MATLAB. The common solution is to use a cell array for Y:
Y = cell(100,1);
for i = 1:100
    Y{i,1} = x(i:i+3);
    i = i+3;
end
Note that the line i=i+3 inside the for-loop has no effect. You can just remove it.
Y = cell(100,1);
for i = 1:100
    Y{i,1} = x(i:i+3);
end
It is possible to make variably-named variables in MATLAB. If you really want this, do something like this:
for i = 1:4:100
    eval(['Y', num2str((i+3)/4), '=x(i:i+3);']);
end
How you organize your indexing depends on what you plan to do with x of course...
Yes, you can dynamically name variables. However, it's almost never a good idea and there are much better/safer/faster alternatives, e.g. cell arrays as demonstrated by @Marc Claesen.
Look at the assignin function (and the related eval). You could do what you asked for with:
for i = 1:100
    assignin('caller', ['Y' int2str(i)], rand(1,i))
end
Another related function is genvarname. Don't use these unless you really need them.