Creating a 3D plot in Matlab - arrays

I want to create a 3D plot of the final fraction of the Earth covered by grass 2 billion years from now (call it A) as a function of a varying grass death rate (D) and growth rate (G).
The final value of A (2 billion years from now) can be calculated in a loop using the following discretised equation:
A(t+dt) = A(t)*((1-A(t))*G-D)*dt + A(t)
%Define variables and arrays
D=0.1; %constant value
G=0.4; %constant value
A=0.001; %initial value of A at t=0
t=0;
dt=10E6;
startloop=1; %define number of iterations
endloop=200;
timevector=zeros(1,endloop); %create vector with 0
grassvector=zeros(1,endloop);
%Define the loop
for t=startloop:endloop
    A = A.*((1-A).*G - D) + A;
    grassvector(t) = A;
    timevector(t) = t*dt;
end
Now I'm stuck on how to create a 3D plot of this final value of A as a function of varying G and D. I tried the following, but after a few attempts it still gives errors:
%(1) Create array of values for G and D varying between 0 and 1
A=0.001;
G=[0.005:0.005:1]; %Vary from 0.005 to 1 in steps of 0.005
D=[0.005:0.005:1]; %Vary from 0.005 to 1 in steps of 0.005
%(2) Meshgrid both variables = all possible combinations in a matrix
[Ggrid,Dgrid]=meshgrid(G,D);
%(3) Calculate the final grass fraction with varying G and D
D=0.1;
G=0.4;
A=0.001;
t=0;
dt=10E6;
startloop=1; %define number of iterations
endloop=200;
timevector=zeros(1,endloop); %create vector with 0
grassvector=zeros(1,endloop);
%Define the loop
for t=startloop:endloop
    A = A.*((1-A).*Ggrid - Dgrid) + A;
    grassvector(t) = A;
    timevector(t) = t*dt;
end
%(4) mesh together with D and G
...??
Can someone help? Thanks!

Your code fails because grassvector(t)=A; cannot be executed: A is now a matrix (one value per (G,D) pair), so the sizes are not consistent. I think you want:
grassvector=zeros([size(Ggrid),endloop]);
and in the loop:
grassvector(:,:,t)=A;
Also, while computationally unnecessary, you may want to initialize A as A=0.001*ones(size(Dgrid)), as it makes more sense logically.
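Putting the fixes together, step (3) of your attempt becomes something like this (a minimal sketch keeping your variable names and update rule):

[Ggrid,Dgrid] = meshgrid(0.005:0.005:1, 0.005:0.005:1);
A = 0.001*ones(size(Dgrid));  %initial grass fraction for every (G,D) pair
startloop = 1;
endloop = 200;
grassvector = zeros([size(Ggrid),endloop]);
for t=startloop:endloop
    A = A.*((1-A).*Ggrid - Dgrid) + A; %element-wise update on the whole grid
    grassvector(:,:,t) = A;            %store the full grid at step t
end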
Anyway, this is how you can plot it in the end:
surf(Ggrid,Dgrid,A,'LineStyle','none');
xlabel('growth rate ')
ylabel('death rate ')
zlabel('grass')
colorbar
This gives a surface of the final grass fraction over the (G, D) plane (figure omitted).
But, as I was actually interested in your research, I decided to make a couple of plots to see how fast the grass grows. Here is some plotting code; you can modify it to change the appearance. I use a custom colormap, so if it doesn't work, delete the colormap(viridis()) line.
fh = figure();
filename = 'grass.gif';
for t=startloop:endloop
    clf
    hold on
    surf(Ggrid,Dgrid,grassvector(:,:,t),'LineStyle','none');
    [c,h] = contour3(Ggrid,Dgrid,grassvector(:,:,t)+0.05,[0:0.1:1],'LineColor',[153,0,18]/255,'LineWidth',2);
    clabel(c,h);
    xlabel('growth rate')
    ylabel('death rate')
    zlabel('grass')
    title(['Years passed: ' num2str(t*dt/1000000) ' million'])
    colormap(viridis())
    axis([0 1 0 1 0 1])
    grid on
    view(-120,40);
    frame = getframe(fh);
    im = frame2im(frame);
    [imind,cm] = rgb2ind(im,256);
    if t == 1
        imwrite(imind,cm,filename,'gif','Loopcount',inf);
    else
        imwrite(imind,cm,filename,'gif','WriteMode','append','DelayTime',0.1);
    end
end
Results: an animated GIF of the evolving grass surface (not shown here).

Related

Take numbers from two intervals in concentric spheres in Julia

I am trying to take numbers from two intervals in Julia. The problem is the following:
I am trying to create concentric spheres, and I need to generate vectors of dimension 15 filled with numbers taken from each sphere. The code is:
rmax = 5
ra = fill(0.0,1,rmax)
for i=1:rmax-1
    ra[:,i] .= rad/i
    ra[:,rmax] .= 0
end
for i=1:3
    ptset = Any[]
    for j=1:200
        yt = 0
        yt = rand(Truncated(Normal(0, 1), -ra[i], ra[i]))
        if -ra[(i+1)] < yt <= -ra[i] || ra[(i+1)] <= yt < ra[i]
            push!(ptset, yt)
            if length(ptset) == 15
                break
            end
        end
    end
end
Here, I am trying to generate spheres with uniform random numbers inside each one; in this case, yt is only part of the construction of the numbers inside the sphere.
I would like to generate points in a sphere with radius r0 (ra[:,4] for this case), then points distributed from the edge of the first sphere to the second one with radius r1 (here ra[:,3]), and so on.
To do that, I try to take elements that fulfill one of the two conditions -ra[(i+1)] < yt <= -ra[i] or ra[(i+1)] <= yt < ra[i], i.e. I would like to generate a vector with both positive and negative numbers. I used the operator ||, but it seems to take only the positive part. I am new to Julia and I am not sure how to take the elements from both parts of the interval. Does anyone have a hint on how to do it? Thanks in advance.
I hope I understood you correctly. First, we need to be able to sample uniformly from an N-dimensional shell with radii r0 and r1:
using Random
using LinearAlgebra: normalize
struct Shell{N}
    r0::Float64
    r1::Float64
end
Base.eltype(::Type{<:Shell}) = Vector{Float64}
function Random.rand(rng::Random.AbstractRNG, d::Random.SamplerTrivial{Shell{N}}) where {N}
    shell = d[]
    Δ = shell.r1 - shell.r0
    θ = normalize(randn(rng, N))           # uniformly distributed N-dimensional direction of length 1
    r = shell.r0 .* θ                      # scale to a point on the inner surface of the shell
    return r .+ (Δ * sqrt(rand(rng))) .* θ # move outward along θ by a single random radial factor
end
(See here for more info about hooking into Random. You could equally implement a new Distribution, but that's not really necessary.)
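Once these methods are defined, the ordinary rand interface works for free (the radii here are just illustrative values):

rand(Shell{3}(1.0, 2.0))      # one random point in a 3-D shell
rand(Shell{2}(0.0, 5.0), 15)  # 15 random points in a 2-D disk of radius 5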
Most importantly, a truncated normal will not result in a uniform distribution, but neither will adding a uniform scaling in the radial direction: see here for why the square root is necessary (and I hope I got it right; you should check the math once more).
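For reference, the 2-D inverse-CDF argument goes like this (my own sketch; worth double-checking): for a uniform distribution on an annulus, the probability mass inside radius $r$ grows like the area, so
$$P(R \le r) = \frac{r^2 - r_0^2}{r_1^2 - r_0^2}, \qquad R = \sqrt{r_0^2 + u\,(r_1^2 - r_0^2)}, \quad u \sim \mathrm{U}(0,1),$$
and the $\sqrt{u}$ in the code is the $r_0 = 0$ special case; in $N$ dimensions the exponent becomes $1/N$.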
Then we can just create a sequence of shell samples with nested radii:
rmax = 5
rad = 10.0
ra = range(0, rad, length=rmax)
ptset = [rand(Shell{2}(ra[i], ra[i+1]), 15) for i = 1:(rmax - 1)]
(This part I wasn't really sure about, but the point should be clear.)

Matlab: Plot array such that each value has random shape and a color map

In Matlab:
How do I modify plot(x,y,'o'), where x=1:10 and y=ones(1,10), such that each point in the plot will have a random shape?
And how can I give it colors chosen from a scheme where the value at x=1 is the darkest blue, and x=10 is red (namely some sort of heat map)?
Can this be done without using loops? Perhaps I should replace "plot" with a different function for this purpose (like "scatter"? I don't know...)? The reason is that I am plotting this inside another loop, which is already very long, so I am interested in keeping the running-time short.
Thanks!
First, the plain code:
x = 1:20;
nx = numel(x);
y = ones(1, nx);
% Color map
cm = [linspace(0, 1, nx).' zeros(nx, 1) linspace(1, 0, nx).'];
% Possible markers
m = 'o+*.xsd^vph<>';
nm = numel(m);
figure(1);
hold on;
for k = 1:nx
    plot(x(k), y(k), ...
         'MarkerSize', 12, ...
         'Marker', m(ceil(nm * rand())), ...
         'MarkerFaceColor', cm(k, :), ...
         'MarkerEdgeColor', cm(k, :) ...
         );
end
hold off;
And the output: a row of randomly shaped markers shading from blue at x=1 to red at x=20 (figure omitted).
Most of this can be found in the MATLAB help for the plot command, in the Specify Line Width, Marker Size, and Marker Color section. Colormaps are simply n-by-3 matrices with RGB values ranging from 0 to 1. So, I interpreted the darkest blue as [0 0 1], whereas plain red is [1 0 0]; you then just need a linear interpolation between those two for n values. Shuffling the marker type is done by a simple rand. (One could generate a random vector of size n beforehand, of course.) I'm not sure whether all of this can be put into one single plot command, but I'm highly sceptical; thus, a loop was the easiest way.
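For what it's worth, a partly vectorized alternative (my own sketch, not from the code above): scatter accepts one color per point in a single call, but the marker symbol is fixed per call, so you can group the points by marker and issue one scatter call per distinct marker:

x = 1:20;
nx = numel(x);
y = ones(1, nx);
cm = [linspace(0, 1, nx).' zeros(nx, 1) linspace(1, 0, nx).'];
m = 'o+*.xsd^vph<>';
mi = randi(numel(m), 1, nx); % random marker index for each point
figure(2);
hold on;
for j = unique(mi)
    idx = (mi == j); % all points sharing this marker
    scatter(x(idx), y(idx), 144, cm(idx, :), m(j));
end
hold off;

With 13 possible markers this needs at most 13 scatter calls regardless of the number of points, which may or may not beat the point-by-point loop in practice.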

How to optimize this random entropy calculation for a large number of iterations?

What my code is trying to do :
I have an initial array containing an arrangement of molecules, each in a cell and confined to moving in a 2D array (up, down, left, right; array size 200x200). At each step I take a random molecule and move it into a random adjacent cell.
Starting from a certain iteration, and every few iterations thereafter, I calculate the entropy of this grid: the grid is cut into small squares of 25x25, and I use the Shannon entropy of the per-square molecule fractions as the entropy of the system.
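(Concretely, with $p_i$ the fraction of molecules in block $i$ of the resulting $8 \times 8$ partition, the quantity computed below is the Shannon entropy $S = -\sum_{i=1}^{64} p_i \ln p_i$, with the convention $0 \ln 0 = 0$.)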
Objective:
Doing 1e8+ iterations in a decent time using an i5-6500, no GPU.
My code:
function advance_multi_lattice(grid) #Find the next state of the system
    rnd = rand(1:count(!iszero,grid)) #Random number to be used for a random molecule.
    slots = find(!iszero,grid)        #Cells containing molecules.
    chosen_slot = slots[rnd]          #Random cell. May contain multiple molecules.
    dim = size(grid)[1]               #Need this for rnd=3,4 later.
    grid[chosen_slot] -= 1            #Remove the molecule from the cell.
    rnd_arr = [1,2,3,4]               #Array to sample the direction from.
    while true
        rnd = rand(rnd_arr) #Random number to see which side the molecule should go.
        if rnd==1 #Right, for example.
            try #In case moving right is impossible (it would take the molecule out of the grid): remove 1 from rnd_arr and repeat.
                grid[chosen_slot+1] += 1
                break
            catch
                filter!(e->e!=1,rnd_arr)
            end
        elseif rnd==2
            try #Same
                grid[chosen_slot-1] += 1
                break
            catch
                filter!(e->e!=2,rnd_arr)
            end
        end
        #Repeat for the other numbers: 3 and 4...
    end
    return grid
end
function S(P) #Entropy, if no molecules then return 0.
    s = []
    for k in P
        if k==0
            push!(s,0)
        else
            push!(s,-k*log(k))
        end
    end
    return s
end
function find_molecules(grid) #How many molecules are in the array
    s = 0
    for slot in grid
        s += slot
    end
    return s
end
function entropy_scale(grid,total_molecules) #Calculate the entropy of the grid.
    P_array = Array{Float64}([])
    for i=1:8
        for j=1:8
            push!(P_array, find_molecules(grid[(i-1)*25+1:i*25,(j-1)*25+1:j*25]))
        end
    end
    P_array = P_array./total_molecules
    return sum(S(P_array))
end
function entropy_evolution(grid,n) #The loop function. Changes the grid and returns the entropy as a function of steps.
    t_arr = Array{Int64}([])
    S_arr = Array{Float64}([])
    p = Progress(Int(n)) #Progress bar, using ProgressMeter.
    total_molecules = find_molecules(grid)
    for k=1:1e3
        grid = advance_multi_lattice(grid)
        next!(p)
    end
    for k=1e3+1:n
        grid = advance_multi_lattice(grid)
        if k%500==0 #Only record entropy every 500 steps
            push!(S_arr, entropy_scale(grid,total_molecules))
        end
        next!(p)
    end
    return S_arr,grid
end
Results for my code:
For 1e5 iterations I get 43 seconds, which means that for an interesting result (1e9+ iterations) I need a lot of time, upwards of an hour. Changing the entropy calculation interval barely affects the performance unless it is really small.
I am assuming you are working under Julia 1.0 (for Julia 0.6 a small change is needed - I noted it in the code).
In order to improve the performance you should keep a vector of molecules - not a grid (you do not need it as you allow molecules to occupy the same location).
We will encode the location of a molecule as a tuple (x,y). Now you need a function that randomly moves one molecule. Here is how you can implement it (I hard-coded the boundaries, but of course you could make them a parameter):
function move_molecule((x,y)) # in Julia 0.6 it should be move_molecule(t)
    # and here in Julia 0.6 you should add: x, y = t
    if x == 1
        if y == 1
            ((1,2), (2,1))[rand(1:2)]
        elseif y == 200
            ((1,199), (2,200))[rand(1:2)]
        else
            ((2,y), (1,y-1), (1,y+1))[rand(1:3)]
        end
    elseif x == 200
        if y == 1
            ((200,2), (199,1))[rand(1:2)]
        elseif y == 200
            ((200,199), (199,200))[rand(1:2)]
        else
            ((199,y), (200,y-1), (200,y+1))[rand(1:3)]
        end
    else
        if y == 1
            ((x,2), (x-1,1), (x+1,1))[rand(1:3)]
        elseif y == 200
            ((x,199), (x-1,200), (x+1,200))[rand(1:3)]
        else
            ((x+1,y), (x-1,y), (x,y+1), (x,y-1))[rand(1:4)]
        end
    end
end
Now, a function that moves one random molecule per step, for a given number of steps:
function go_sim!(molecules, steps)
    for k in 1:steps
        i = rand(axes(molecules, 1)) # in Julia 0.6 it should be: i = rand(1:length(molecules))
        @inbounds molecules[i] = move_molecule(molecules[i])
        if k % 500 == 0
            # here do the entropy calculation
        end
    end
end
You did not provide a fully reproducible example, so I stop here - but it should be easy enough to rewrite the rest of the code for the entropy calculation using this data structure (actually it might be even simpler). Here is a benchmark (the performance depends neither on the size of the grid nor on the number of molecules, which is an important advantage over the grid-based code):
julia> molecules = [(rand(1:200), rand(1:200)) for i in 1:1000];

julia> @time go_sim!(molecules, 1e9)
 66.212943 seconds (22.64 k allocations: 1.191 MiB)
And you get 1e9 steps in around one minute (without entropy calculation).
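For completeness, here is one way the entropy part could look with this data structure (my own sketch, assuming the same 200x200 grid cut into 25x25 blocks as in the question):

function entropy(molecules)
    counts = zeros(Int, 8, 8)                # 200/25 = 8 blocks per side
    for (x, y) in molecules
        counts[(x-1) ÷ 25 + 1, (y-1) ÷ 25 + 1] += 1
    end
    total = length(molecules)
    s = 0.0
    for k in counts
        p = k / total
        p > 0 && (s -= p * log(p))           # treat 0*log(0) as 0
    end
    return s
end

Counting block occupancies directly from the molecule list avoids slicing the grid into 64 subarrays on every entropy evaluation.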
What are the key elements needed for good performance:
do not use try-catch blocks, as they are very slow;
try to avoid allocating memory (i.e. creating mutable objects); my code essentially does no allocations - in particular, that is why I used tuples everywhere (you could use matrices in the move_molecule function for simplicity, but the performance would be around 2x worse).
Hope this helps.

Best way to pick random elements from an array with at least a min diff in R

I would like to randomly choose a certain number of elements from an array, in a way that they always respect a limit in their reciprocal distance.
For example, having a vector a <- seq(1,1000), how can I pick 20 elements with a minimum distance of 15 between each other?
For now, I am using a simple iteration in which I reject a choice whenever it is too close to an already-picked element, but it is cumbersome and tends to be slow when the number of elements to pick is high. Is there a best practice/function for this?
EDIT - Summary of answers and analysis
So far I have received two working answers, which I wrapped in two specific functions.
# dash2 approach
# ---------------
rand_pick_min <- function(ar, min.dist, n.picks){
  stopifnot(is.numeric(min.dist),
            is.numeric(n.picks), n.picks%%1 == 0)
  if(length(ar)/n.picks < min.dist)
    stop('The number of picks exceeds the maximum number of divisions that the array allows, which is: ',
         floor(length(ar)/min.dist))
  picked <- array(NA, n.picks)
  copy <- ar
  for (i in 1:n.picks) {
    stopifnot(length(copy) > 0)
    picked[i] <- sample(copy, 1)
    copy <- copy[ abs(copy - picked[i]) >= min.dist ]
  }
  return(picked)
}
# denis approach
# ---------------
rand_pick_min2 <- function(ar, min.dist, n.picks){
  require(Surrogate)
  stopifnot(is.numeric(min.dist),
            is.numeric(n.picks), n.picks%%1 == 0)
  if(length(ar)/n.picks < min.dist)
    stop('The number of picks exceeds the maximum number of divisions that the array allows, which is: ',
         floor(length(ar)/min.dist))
  lar <- length(ar)
  dist <- Surrogate::RandVec(a=min.dist, b=(lar-(n.picks)*min.dist),
                             s=lar, n=(n.picks+1), m=1, Seed=sample(1:lar, size = 1))$RandVecOutput
  return(cumsum(round(dist))[1:n.picks])
}
Using the same example proposed, I ran three tests. First, the effective validity of the minimum-distance limit:
# Libs
require(ggplot2)
require(microbenchmark)
# Inputs
a <- seq(1, 1000) # test vector
md <- 15 # min distance
np <- 20 # number of picks
# Run
dist_vec <- c(sapply(1:500, function(x) c(dist(rand_pick_min(a, md, np))))) # sol 1
dist_vec2 <- c(sapply(1:500, function(x) c(dist(rand_pick_min2(a, md, np))))) # sol 2
# Tests - break the min
cat('Any distance breaking the min in sol 1?', any(dist_vec < md), '\n') # FALSE
cat('Any distance breaking the min in sol 2?', any(dist_vec2 < md), '\n') # FALSE
Second, I tested the distribution of the resulting distances, obtaining the first two plots in order of solution (sol1 [A] is dash2's solution, while sol2 [B] is denis's; figures omitted).
pa <- ggplot() + theme_classic() +
  geom_density(aes_string(x = dist_vec), fill = 'lightgreen') +
  geom_vline(aes_string(xintercept = mean(dist_vec)), col = 'darkred') + xlab('Distances')
pb <- ggplot() + theme_classic() +
  geom_density(aes_string(x = dist_vec2), fill = 'lightgreen') +
  geom_vline(aes_string(xintercept = mean(dist_vec2)), col = 'darkred') + xlab('Distances')
print(pa)
print(pb)
Lastly, I measured the computation times of the two approaches as follows, obtaining the last figure.
comp_times <- microbenchmark::microbenchmark(
  'solution_1' = rand_pick_min(a, md, np),
  'solution_2' = rand_pick_min2(a, md, np),
  times = 500
)
ggplot2::autoplot(comp_times); ggsave('stckoverflow2.png')
In light of the results, I am asking myself whether the distance distribution is to be expected, or whether it is a deviation due to the applied methods.
EDIT2 - Answer to the last question, following the comment made by denis
Using many more sampling runs (5000), I produced an empirical density of the resulting positions, and indeed your approach contains an artefact that makes your solution (B) deviate from the one I needed. Nonetheless, it would be interesting to have the ability to enforce a specific final distribution of positions.
If you want to avoid hit-and-miss methods, you will have to translate your problem into sampling distances with a constraint on the sum of the distances.
Basically, here is how I translate what you want: your N sampled positions are equivalent to N+1 distances, each ranging from the minimum distance up to the size of your vector minus N*mindist (the case where all your samples are packed together). You then need to constrain the sum of the distances to be equal to 1000 (the size of your vector).
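In symbols: with ordered positions $p_1 < \dots < p_N$ drawn from $\{1,\dots,L\}$, set $d_1 = p_1$, $d_i = p_i - p_{i-1}$ for $i = 2,\dots,N$, and a final slack $d_{N+1} = L - p_N$. The constraints used by the code below are
$$d_i \ge d_{\min} \quad (i = 1,\dots,N+1), \qquad \sum_{i=1}^{N+1} d_i = L,$$
which is exactly a fixed-sum sampling problem over the $N+1$ distances.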
In this case the solution uses Surrogate::RandVec from the Surrogate package (see Random sampling to give an exact sum), which allows sampling with a fixed sum.
library(Surrogate)
a <- seq(1,1000)
mind <- 15
N <- 20
dist <- Surrogate::RandVec(a=mind, b=(1000-(N)*mind), s=1000, n=(N+1), m=1, Seed=sample(1:1000, size = 1))$RandVecOutput
pos <- cumsum(round(dist))[1:20]
pos
> pos
[1] 22 59 76 128 204 239 289 340 389 440 489 546 567 607 724 773 808 843 883 927
dist is the sample of distances. You reconstruct the positions by taking the cumulative sum of the distances; this gives pos, the vector of your index positions.
The advantage is that you can get any value, and the sampling is supposed to be random. As for speed I don't know; you will need to compare it to your method on your big-data case.
Here is a histogram of 1000 tries (figure omitted):
I think the best solution, which guarantees randomness in some sense (I'm not exactly sure what sense!), may be:
1. Pick a random element.
2. Remove all elements that are too close to that element.
3. Pick another element.
4. Return to step 2.
So:
min_dist <- 15
a <- seq(1, 1000)
picked <- integer(20)
copy <- a
for (i in 1:20) {
  stopifnot(length(copy) > 0)
  picked[i] <- sample(copy, 1)
  copy <- copy[ abs(copy - picked[i]) >= min_dist ]
}
Whether this is faster than sample-and-reject may depend on the characteristics of the original vector. Also, as you can see, you are not guaranteed to be able to get all the elements you want, though in your particular case there won't be a problem because 19 intervals of width 30 could never cover the whole of seq(1, 1000).

What's the fastest way to find deepest path in a 3D array?

I've been trying to find a solution to my problem for more than a week, and I couldn't come up with anything better than a million-iteration program, so I think it's time to ask someone for help.
I've got a 3D array. Let's say we're talking about the ground, and the first layer is the surface.
The other layers are floors below the ground. I have to find the deepest path's length, the count of isolated caves underground, and the size of the biggest cave.
Here's the visualisation of my problem (figure omitted):
Input:
5 5 5 // x, y, z
xxxxx
oxxxx
xxxxx
xoxxo
ooxxx
xxxxx
xxoxx
and so...
Output:
5 // deepest path - starting from the surface
22 // size of the biggest cave
3 // number of isolated caves (the red ones in the figure; isolated = a cave that doesn't reach the surface)
Note that even though the red cell on the 2nd floor is placed next to a green one, it's not the same cave, because it's placed diagonally and that doesn't count.
I've been told that the best way to do this might be a recursive "divide and conquer" algorithm, but I don't really know what it would look like.
I think you should be able to do it in O(N).
When you parse your input, assign each node a 'caveNumber' initialized to 0. Set it to a valid number whenever you visit a cave:
CaveCount = 0, IsolatedCaveCount = 0
AllSizes = new Vector
For each node:
    ProcessNode(size=0, depth=0)

ProcessNode(size, depth):
    If node.isCave and !node.caveNumber:
        if (size == 0) ++CaveCount
        if (size == 0 and depth != 0) IsolatedCaveCount++
        node.caveNumber = CaveCount
        AllSizes[CaveCount]++
        For each neighbor of node:
            if (goingDeeper) depth++
            ProcessNode(size+1, depth)
You will visit each node 7 times in the worst case: once from the outer loop, and possibly once from each of its six neighbors. But you'll only do real work on each node once, since after that its caveNumber is set and you ignore it.
You can do the depth tracking by adding a depth parameter to the recursive ProcessNode call, and only incrementing it when visiting a lower neighbor.
The solution shown below (as a Python program) runs in time O(n lg*(n)), where lg*(n) is the nearly-constant iterated-log function often associated with union operations in disjoint-set forests.
In the first pass through all cells, the program creates a disjoint-set forest using routines called makeset(), findset(), link(), and union(), just as explained in section 22.3 (Disjoint-set forests) of the first edition of Cormen/Leiserson/Rivest. In later passes through the cells, it counts the number of members of each disjoint forest, checks the depth, etc. The first pass runs in time O(n lg*(n)) and later passes run in time O(n), but simple program changes would let some of the passes run in O(c) or O(b), for c caves with a total of b cells.
Note that the code shown below is not subject to the error contained in a previous answer, where the previous answer's pseudo-code contains the line
if (size==0 and depth!=0) IsolatedCaveCount++
The error in that line is that a cave with a connection to the surface might have underground rising branches, which the other answer would erroneously add to its total of isolated caves.
The code shown below produces the following output:
Deepest: 5 Largest: 22 Isolated: 3
(Note that the count of 24 shown in your diagram should be 22, from 4+9+9.)
v = [0b0000010000000000100111000,  # Cave map
     0b0000000100000110001100000,
     0b0000000000000001100111000,
     0b0000000000111001110111100,
     0b0000100000111001110111101]
nx, ny, nz = 5, 5, 5
inlay, ncells = (nx+1) * ny, (nx+1) * ny * nz
masks = []
for r in range(ny):
    masks += [2**j for j in range(nx*ny)][nx*r:nx*r+nx] + [0]
p = [-1 for i in range(ncells)]  # parent links
r = [ 0 for i in range(ncells)]  # rank
c = [ 0 for i in range(ncells)]  # forest-size counts
d = [-1 for i in range(ncells)]  # depths

def makeset(x):  # Ref: CLR 22.3, Disjoint-set forests
    p[x] = x
    r[x] = 0

def findset(x):
    if x != p[x]:
        p[x] = findset(p[x])
    return p[x]

def link(x, y):
    if r[x] > r[y]:
        p[y] = x
    else:
        p[x] = y
        if r[x] == r[y]:
            r[y] += 1

def union(x, y):
    link(findset(x), findset(y))

fa = 0  # fa = floor above
bc = 0  # bc = floor's base cell #
for f in v:      # f = current-floor map
    cn = bc - 1  # cn = cell #
    ml = 0
    for m in masks:
        cn += 1
        if m & f:
            makeset(cn)
            if ml & f:
                union(cn, cn-1)
            mr = m >> nx
            if mr and mr & f:
                union(cn, cn-nx-1)
            if m & fa:
                union(cn, cn-inlay)
        ml = m
    bc += inlay
    fa = f

for i in range(inlay):
    findset(i)
    if p[i] > -1:
        d[p[i]] = 0
for i in range(ncells):
    if p[i] > -1:
        c[findset(i)] += 1
        if d[p[i]] > -1:
            d[p[i]] = max(d[p[i]], i//inlay)
isola = len([i for i in range(ncells) if c[i] > 0 and d[p[i]] < 0])
print("Deepest:", 1+max(d), " Largest:", max(c), " Isolated:", isola)
It sounds like you're solving a "connected components" problem. If your 3D array can be converted to a bit array (e.g. 0 = bedrock, 1 = cave, or vice versa) then you can apply a technique used in image processing to find the number and dimensions of either the foreground or background.
Typically this algorithm is applied in 2D images to find "connected components" or "blobs" of the same color. If possible, find a "single pass" algorithm:
http://en.wikipedia.org/wiki/Connected-component_labeling
The same technique can be applied to 3D data. Googling "connected components 3D" will yield links like this one:
http://www.ecse.rpi.edu/Homepages/wrf/pmwiki/pmwiki.php/Research/ConnectedComponents
Once the algorithm has finished processing your 3D array, you'll have a list of labeled, connected regions, and each region will be a list of voxels (volume elements analogous to image pixels). You can then analyze each labeled region to determine volume, closeness to the surface, height, etc.
Implementing these algorithms can be a little tricky, and you might want to try a 2D implementation first. Though it might not be as efficient as you'd like, you could create a 3D connected-component labeling algorithm by applying a 2D algorithm iteratively to each layer and then relabeling the connected regions from the top layer to the bottom layer:
1. For layer 0, find all connected regions using the 2D connected-component algorithm.
2. For layer 1, find all connected regions.
3. If any labeled pixel in layer 0 sits directly over a labeled pixel in layer 1, change all the labels in layer 1 to the label in layer 0.
4. Apply this labeling technique iteratively through the stack until you reach layer N.
One important consideration in connected-component labeling is how one considers regions to be connected. In a 2D image (or 2D array) of bits, we can consider either the "4-connected" region of neighbor elements
X 1 X
1 C 1
X 1 X
where "C" is the center element, "1" indicates neighbors that would be considered connected, and "X" are adjacent neighbors that we do not consider connected. Another option is to consider "8-connected neighbors":
1 1 1
1 C 1
1 1 1
That is, every element adjacent to the central pixel is considered connected. At first this may sound like the better option, but in real-world 2D image data a chessboard pattern of noise or a diagonal string of single noise pixels would be detected as one connected region, so we typically test for 4-connectivity.
For 3D data you can consider either 6-connectivity or 26-connectivity: 6-connectivity considers only the neighbor pixels that share a full cube face with the center voxel, and 26-connectivity considers every adjacent pixel around the center voxel. You mention that "diagonally placed" doesn't count, so 6-connectivity should suffice.
You can view it as a graph in which (non-diagonally) adjacent elements are connected if they are both empty (part of a cave). Note that you don't have to convert it to a graph; you can use the normal 3D array representation.
Finding the caves is then the same task as finding the connected components of a graph (O(N)), and the size of a cave is the number of nodes in its component.
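Here is a minimal sketch of that idea (my own; it assumes grid is a nested 3D list of booleans with True marking a cave cell, indexed grid[z][y][x] with z = 0 at the surface, and uses the 6-connectivity discussed above):

from collections import deque

def cave_stats(grid):
    # BFS flood fill over 6-connected cave cells.
    # Returns (number of caves, size of the largest cave).
    nz, ny, nx = len(grid), len(grid[0]), len(grid[0][0])
    seen = set()
    sizes = []
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                if grid[z][y][x] and (z, y, x) not in seen:
                    size = 0  # start of a new component
                    queue = deque([(z, y, x)])
                    seen.add((z, y, x))
                    while queue:
                        cz, cy, cx = queue.popleft()
                        size += 1
                        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0),
                                           (0,-1,0), (0,0,1), (0,0,-1)):
                            wz, wy, wx = cz+dz, cy+dy, cx+dx
                            if (0 <= wz < nz and 0 <= wy < ny and 0 <= wx < nx
                                    and grid[wz][wy][wx]
                                    and (wz, wy, wx) not in seen):
                                seen.add((wz, wy, wx))
                                queue.append((wz, wy, wx))
                    sizes.append(size)
    return len(sizes), max(sizes, default=0)

Depth and isolation can be tracked in the same sweep by recording, per component, the minimum and maximum z it reaches (a component is isolated if its minimum z is greater than 0).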
