Error: input size in adaptive_avg_pool2d() in a MONAI U-Net

I'm using MONAI to build a U-Net for a segmentation task. I want to crop the input images with CropForegroundd as follows:
monai_load = [
    LoadImaged(keys=["image", "segmentation"], image_only=False, reader=PILReader()),
    EnsureTyped(keys=["image", "segmentation"], data_type="numpy"),
    AddChanneld(keys=["segmentation", "image"]),
    RepeatChanneld(keys=["image"], repeats=3),
    AsChannelFirstd(keys=["image"], channel_dim=0),
]
monai_transforms = [
    AsDiscreted(keys=["segmentation"], threshold=0.5),
    CropForegroundd(keys=["image", "segmentation"], source_key="image", margin=20),
    Resized(keys=["image", "segmentation"], spatial_size=(240, 240)),
    ToTensord(keys=["image", "segmentation"]),
]
train_transforms = Compose(monai_load + monai_transforms)
train_ds = IterableDataset(data=train_data, transform=train_transforms)
train_loader = DataLoader(dataset=train_ds, batch_size=batch_size, num_workers=0, pin_memory=True)
While training is running, it stops at a certain step and gives me this error:
RuntimeError: adaptive_avg_pool2d(): Expected input to have non-zero size for non-batch dimensions, but input has sizes [1, 3, 0, 0] with dimension 2 being empty
I think the problem is not the CropForegroundd margin parameter, but I'm not able to solve this error.
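One way to narrow it down, as a minimal debugging sketch (assuming the monai_load, monai_transforms, and train_data objects defined above): CropForegroundd keeps the region where its select_fn is true (by default, values greater than zero), so an image with no positive values can crop down to a zero-sized array, which could then reach the network as the [1, 3, 0, 0] input. Running the pipeline only up to the crop and checking shapes should flag any such sample:
from monai.transforms import Compose

# Apply the pipeline only up to (and including) CropForegroundd,
# then report any sample whose crop collapsed to zero size.
check = Compose(monai_load + monai_transforms[:2])
for idx, item in enumerate(train_data):
    out = check(item)
    if 0 in out["image"].shape:
        print(f"sample {idx} crops to an empty image: {tuple(out['image'].shape)}")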
Thank you in advance for getting back to me.
Federico

Related

Storing numpy.ndarrays from a loop

I am trying to store the numpy.ndarrays defined as x_c, y_c, and z_c for every iteration of the loop:
for z_value in np.arange(0, 5, 1):
    ms.set_current_mesh(0)
    planeoffset: float = z_value
    ms.compute_planar_section(planeaxis='Z Axis', planeoffset=planeoffset)
    m = ms.current_mesh()
    matrix_name = m.vertex_matrix()
    x_c = matrix_name[:, 0]
    y_c = matrix_name[:, 1]
    z_c = matrix_name[:, 2]
I would like to be able to recall the three arrays at any z_value, preferably with reference to the z_value, e.g. x_c at z_value = 2 or similar.
Thanks for any help!
P.S. I'm very new to coding, so please go easy on me.
You have to store each array in an external variable, for example a dictionary:
x_c = {}
y_c = {}
z_c = {}
for z_value in np.arange(0, 5, 1):
    ms.set_current_mesh(0)
    planeoffset = float(z_value)
    ms.compute_planar_section(planeaxis='Z Axis', planeoffset=planeoffset)
    m = ms.current_mesh()
    m.compact()
    print(m.vertex_number(), "vertices in Planar Section Z =", planeoffset)
    matrix_name = m.vertex_matrix()
    x_c[planeoffset] = matrix_name[:, 0]
    y_c[planeoffset] = matrix_name[:, 1]
    z_c[planeoffset] = matrix_name[:, 2]
Please ensure you call m.compact() before accessing the vertex_matrix, or you will get a MissingCompactnessException error. Also note that in a Python dict, numeric keys that compare equal, such as 2 and 2.0, refer to the same entry, so it is clearest to pick one key type (integers or floats) and use it consistently (in this example, they are floats).
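A quick sketch of that key behavior (plain Python, independent of the mesh code): since 2 == 2.0 and equal numbers hash identically, int and float keys address the same dictionary slot:
d = {}
d[2.0] = "stored with a float key"
print(d[2])  # -> "stored with a float key": 2 and 2.0 are the same dict key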
Later, you can recall values like this:
print("X Values with z=2.0")
print(x_c[2.0])

"Data arrays must have the same length, and match time discretization in dynamic problems" error in GEKKO

I want to find the value of the parameter m that minimizes my variable x subject to a system of differential equations. I have the following code:
from gekko import GEKKO
import numpy as np

def run_model_m(days, population, case, k_val, b_val, u0_val, sigma_val, Kmax0, a_val, c_val):
    list_x = []
    list_u = []
    list_Kmax = []
    for i in range(len(days)):
        list_xi = []
        list_ui = []
        list_Ki = []
        for j in range(len(days[i])):
            m = GEKKO(remote=False)
            #m.time = days[i][j]
            eval = np.linspace(days[i][j][0], days[i][j][-1], 100, endpoint=True)
            m.time = eval
            x_data = population[i][j]
            variable = np.linspace(population[i][j][0], population[i][j][-1], 100, endpoint=True)
            x = m.Var(value=population[i][j][0], lb=0)
            sigma = m.Param(sigma_val)
            d = m.Param(c_val)
            k = m.Param(k_val)
            b = m.Param(b_val)
            r = m.Param(a_val)
            step = np.ones(len(eval))
            step = 0.2*step
            step[0] = 1
            m_param = m.CV(value=1, lb=0, ub=1, integer=True); m_param.STATUS = 1
            u = m.Var(value=u0_val, lb=0, ub=1)
            #m.free(u)
            a = m.Param(a_val)
            c = m.Param(c_val)
            Kmax = m.Param(Kmax0)
            if case == 'case0':
                m.Equations([x.dt() == x*(r*(1-x/Kmax) - m_param/(k+b*u) - d),
                             u.dt() == sigma*(m_param*b/((k+b*u)**2))])
            elif case == 'case4':
                m.Equations([x.dt() == x*(r*(1-u**2)*(1-x/Kmax) - m_param/(k+b*u) - d),
                             u.dt() == sigma*(-2*u*r*(1-x/Kmax) + (b*m_param)/(b*u+k)**2)])
            p = np.zeros(len(eval))
            p[-1] = 1.0
            final = m.Param(value=p)
            m.Obj(x)
            m.options.IMODE = 6
            m.options.MAX_ITER = 15000
            m.options.SOLVER = 1
            # optimize
            m.solve(disp=False, GUI=False)
            #m.open_folder(dataset_path+'inf')
            list_xi.append(x.value)
            list_ui.append(u.value)
            list_Ki.append(m_param.value)
        list_x.append(list_xi)
        list_Kmax.append(list_Ki)
        list_u.append(list_ui)
    return list_x, list_u, list_Kmax, m.options.OBJFCNVAL
scaled_days[i][j] = [-7.0, 42.0, 83.0, 125.0, 167.0, 217.0, 258.0, 300.0, 342.0]
scaled_pop[i][j] = [0.01762491277346285, 0.020592540360308997, 0.017870838266697213, 0.01690069378982034, 0.015512320147187675, 0.01506701796298272, 0.014096420738841563, 0.013991224004743027, 0.010543380664478205]
k0, b0, group, case0, u0, sigma0, K0, a0, c0 = (100, 20, 'Size3, Inc', 'case0', 0.1, 0.05, 2, 0, 0.01)
list_x2, list_u2, list_Kmax2, final = run_model_m(days=[[scaled_days[i][j]]],
                                                  population=[[scaled_pop[i][j]]],
                                                  case=case1, k_val=list_b1[i0][0], b_val=b1,
                                                  u0_val=list_u1[i0][j0], sigma_val=sigma1,
                                                  Kmax0=K1, a_val=list_Kmax1[0][0], c_val=c1)
I get the error "Data arrays must have the same length, and match time discretization in dynamic problems", but I don't understand why. I have tried making x and m_param arrays, with x = m.Var, m_param = m.MV..., but I still get the same error, even when they are all arrays of the same length. Is this the right way to find the solution of the minimization problem?
I think the error was just that in run_model_m I was passing a list as u0_val, so it didn't have the same dimensions as m.time. It should instead be u0_val=list_u1[0][0][0].
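For context, a minimal sketch of the rule GEKKO enforces here (hypothetical values, not the original model): in a dynamic problem (IMODE 6), any array passed as the value of a Var or Param must match the length of m.time, while a scalar is always accepted:
from gekko import GEKKO
import numpy as np

m = GEKKO(remote=False)
m.time = np.linspace(0, 1, 100)

u_ok = m.Var(value=0.1)             # scalar initial condition: fine
p_ok = m.Param(value=np.ones(100))  # array matching len(m.time): fine
#p_bad = m.Param(value=[0.1, 0.2])  # length 2 != 100 -> triggers the "Data arrays
#                                   # must have the same length ..." error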

Map Layer Issues in ggplot2

I'm having a few issues with finalizing my map for a report. I think I'm warm on the solutions, but haven't quite figured them out. I would really appreciate any help on solutions so that I can finally move on!
1) The scale bar will NOT populate in the MainMap code and the subsequent Figure1 plot. This is "remedied" in the MainMap code if I comment out the BCWA_land map layer. However, when I retain the BCWA_land layer, the scale bar is eliminated and this error is produced:
Warning message: Removed 3 rows containing missing values (geom_text).
And this is the code:
MainMap <- ggplot(QOI) +
  geom_sf(aes(fill = quadID)) +
  scale_fill_manual(values = c("#6b8c42",
                               "#70b2ae",
                               "#d65a31")) +
  labs(fill = "Quadrants of Interest",
       caption = "Figure 1: Map depicting the quadrants in the WSDOT project area as well as other quadrants of interest in the Puget Sound area.") +
  ggtitle("WSDOT Project Area and Quadrants of Interest") +
  scalebar(x.min = -123, x.max = -122.8, y.min = 47, y.max = 47.1, location = "bottomleft",
           transform = TRUE, dist = 10, dist_unit = "km", st.size = 3, st.bottom = TRUE, st.dist = 0.1) +
  north(data = QOI, location = "topleft", scale = 0.1, symbol = 12, x.min = -123, y.min = 48.3, x.max = -122.7, y.max = 48.4) +
  theme_bw() +
  theme(panel.grid = element_line(color = "gray50"),
        panel.background = element_blank(),
        panel.ontop = TRUE,
        legend.text = element_text(size = 11, margin = margin(l = 3), hjust = 0),
        legend.position = c(0.95, 0.1),
        legend.justification = c(0.85, 0.1),
        legend.background = element_rect(colour = "#3c4245", fill = "#f4f4f3"),
        axis.title = element_blank(),
        plot.title = element_text(face = "bold", colour = "#3c4245", hjust = 0.5, margin = margin(b = 10, unit = "pt")),
        plot.caption = element_text(face = "italic", colour = "#3c4245", margin = margin(t = 7), hjust = 0, vjust = 0.5)) +
  geom_sf(data = BCWA_land) + # this is the layer I've tried commenting out to isolate the scale bar problem
  xlim(-123.1, -121.4) +
  ylim(47.0, 48.45)
MainMap
MainMap
InsetRect <- data.frame(xmin = -123.2, xmax = -122.1, ymin = 47.02, ymax = 48.45)
InsetMap <- ggplotGrob(ggplot(quads) +
  geom_sf(aes(fill = "")) +
  scale_fill_manual(values = c("#eefbfb")) +
  geom_sf(data = BCWA_land) +
  scale_x_continuous(expand = c(0, 0), limits = c(-124.5, -122.0)) +
  scale_y_continuous(expand = c(0, 0), limits = c(47.0, 49.5)) +
  geom_rect(data = InsetRect, aes(xmin = xmin, xmax = xmax, ymin = ymin, ymax = ymax),
            color = "#3c4245",
            size = 1.25,
            fill = NA,
            inherit.aes = FALSE) +
  theme_bw() +
  theme(legend.position = "none",
        panel.grid = element_blank(),
        axis.title = element_blank(),
        axis.text = element_blank(),
        axis.ticks = element_blank(),
        plot.margin = margin(0, 0, 0, 0)))
InsetMap
InsetMap
Figure1 <- MainMap +
annotation_custom(grob = InsetMap, xmin = -122.2, xmax = -121.3,
ymin = 47.75, ymax = 48.5)
Figure1
As you can see, I'm not getting this issue or error for my north arrow, so I'm not really sure what is happening with the scale bar!
2) This problem is probably a little too OCD, but I REALLY don't want the gridlines to show up on the InsetMap. I was hoping the InsetMap would overlay on top of the MainMap without gridlines, since I set those parameters to element_blank() in the InsetMap code.
Here is an image of my plot. If you would like the data for this, please let me know. Because these are shapefiles, the data is unwieldy and not super conducive to SO's character limit for a post...
If anyone has any insight into a solution(s) I would so so appreciate that!! Thanks for your time!
The issue was the
xlim(-123.1, -121.4) +
ylim(47.0, 48.45)
call that I made. Instead, I used coord_sf(xlim = c(min, max), ylim = c(min, max)). I thought this would be helpful to someone who might be in my position later on!
Essentially, the difference is that the plain x/y lim calls truncate the data available in your dataset, whereas coord_sf simply "focuses" the graph on that extent without altering the data you have available.

R: how to properly create rx_forest_model object?

I'm trying to do a churn analysis with R and SQL Server 2016.
I have uploaded my dataset to a database on a local SQL Server and did all the preliminary work on it.
Now I have this function, trainModel(), which I use to estimate my random forest model:
trainModel = function(sqlSettings, trainTable) {
    sqlConnString = sqlSettings$connString
    trainDataSQL <- RxSqlServerData(connectionString = sqlConnString,
                                    table = trainTable,
                                    colInfo = cdrColInfo)
    ## Create training formula
    labelVar = "churn"
    trainVars <- rxGetVarNames(trainDataSQL)
    trainVars <- trainVars[!trainVars %in% c(labelVar)]
    temp <- paste(c(labelVar, paste(trainVars, collapse = "+")), collapse = "~")
    formula <- as.formula(temp)
    ## Train the random forest with rxDForest on the SQL data source
    library(RevoScaleR)
    rx_forest_model <- rxDForest(formula = formula,
                                 data = trainDataSQL,
                                 nTree = 8,
                                 maxDepth = 16,
                                 mTry = 2,
                                 minBucket = 1,
                                 replace = TRUE,
                                 importance = TRUE,
                                 seed = 8,
                                 parms = list(loss = c(0, 4, 1, 0)))
    return(rx_forest_model)
}
But when I run the function, I get this unexpected output:
> system.time({
+ trainModel(sqlSettings, trainTable)
+ })
user system elapsed
0.29 0.07 58.18
Warning message:
In tempGetNumObs(numObs) :
Number of observations not available for this data source. 'numObs' set to 1e6.
And alongside this warning message, the function trainModel() does not create the rx_forest_model object.
Does anyone have any suggestions on how to solve this problem?
After several attempts, I found the reason why the function trainModel() did not work properly.
It is not a connection string problem, nor a data source type issue.
The problem is in the syntax of the function trainModel().
It is enough to remove the following statement from the body of the function:
return(rx_forest_model)
This way, the function produces the same warning message but creates the rx_forest_model object correctly.
So, the correct function is:
trainModel = function(sqlSettings, trainTable) {
    sqlConnString = sqlSettings$connString
    trainDataSQL <- RxSqlServerData(connectionString = sqlConnString,
                                    table = trainTable,
                                    colInfo = cdrColInfo)
    ## Create training formula
    labelVar = "churn"
    trainVars <- rxGetVarNames(trainDataSQL)
    trainVars <- trainVars[!trainVars %in% c(labelVar)]
    temp <- paste(c(labelVar, paste(trainVars, collapse = "+")), collapse = "~")
    formula <- as.formula(temp)
    ## Train the random forest with rxDForest on the SQL data source
    library(RevoScaleR)
    rx_forest_model <- rxDForest(formula = formula,
                                 data = trainDataSQL,
                                 nTree = 8,
                                 maxDepth = 16,
                                 mTry = 2,
                                 minBucket = 1,
                                 replace = TRUE,
                                 importance = TRUE,
                                 seed = 8,
                                 parms = list(loss = c(0, 4, 1, 0)))
}

numpy slicing using user defined input

I have (in a larger project) data contained in a numpy array.
Based on user input I need to move a selected axis (dimAxisNr) to the first dimension of the array and slice one or more dimensions (including the first) based on user input (such as Select2 and Select0 in the example).
Using this input I generate a DataSelect which contains the information needed to slice. But the output size of the sliced array differs from the one obtained with inline indexing. So basically I need a way to generate the '37:40:2' and '0:2' parts from an input list.
import numpy as np

dimAxisNr = 1
Select2 = [37, 39]
Select0 = [0, 1]

plotData = np.random.random((102, 72, 145, 2))
DataSetSize = np.shape(plotData)

DataSelect = [slice(0, item) for item in DataSetSize]
DataSelect[2] = np.array(Select2)
DataSelect[0] = np.array(Select0)

def shift(seq, n):
    n = n % len(seq)
    return seq[n:] + seq[:n]

# Sort and slice the data
print(np.shape(plotData))
print(DataSelect)
plotData = np.transpose(plotData, np.roll(range(plotData.ndim), -dimAxisNr))
DataSelect = shift(DataSelect, dimAxisNr)
print(DataSelect)
print(np.shape(plotData))
plotData = plotData[DataSelect]
print(np.shape(plotData))
plotDataDirect = plotData[slice(0, 72, None), 37:40:2, slice(0, 2, None), 0:2]
print(np.shape(plotDataDirect))
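(For the literal '37:40:2' part, a minimal aside in plain Python: a slice object can be built directly from a list of start/stop/step values.)
# slice(*[37, 40, 2]) is the programmatic equivalent of the literal 37:40:2.
idx = slice(*[37, 40, 2])
print(list(range(100))[idx])  # -> [37, 39]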
I'm not sure I've understood your question at all...
But if the question is "How do I generate a slice based on a list of indices like [37, 39, 40, 23]?",
then I would answer: you don't have to. Just use the list as-is to select the right indices, like so:
a = np.random.rand(4,5)
print(a)
indices = [2,3,1]
print(a[0:2,indices])
Note that the sorting of the list matters: [2,3,1] yields a different result from [1,2,3]
Output :
>>> a
array([[ 0.47814802, 0.42069094, 0.96244966, 0.23886243, 0.86159478],
[ 0.09248812, 0.85569145, 0.63619014, 0.65814667, 0.45387509],
[ 0.25933109, 0.84525826, 0.31608609, 0.99326598, 0.40698516],
[ 0.20685221, 0.1415642 , 0.21723372, 0.62213483, 0.28025124]])
>>> a[0:2,[2,3,1]]
array([[ 0.96244966, 0.23886243, 0.42069094],
[ 0.63619014, 0.65814667, 0.85569145]])
I have found the answer to my question. I need to use numpy.ix_.
Here is the working code:
import numpy as np

dimAxisNr = 1
Select2 = [37, 39]
Select0 = [0, 1]

plotData = np.random.random((102, 72, 145, 2))
DataSetSize = np.shape(plotData)

DataSelect = [np.arange(0, item) for item in DataSetSize]
DataSelect[2] = Select2
DataSelect[0] = Select0

def shift(seq, n):
    n = n % len(seq)
    return seq[n:] + seq[:n]

# Sort and slice the data
print(np.shape(plotData))
print(DataSelect)
plotData = np.transpose(plotData, np.roll(range(plotData.ndim), -dimAxisNr))
DataSelect = shift(DataSelect, dimAxisNr)
plotDataSlice = plotData[np.ix_(*DataSelect)]
print(np.shape(plotDataSlice))
plotDataDirect = plotData[slice(0, 72, None), 37:40:2, slice(0, 2, None), 0:1]
print(np.shape(plotDataDirect))
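For completeness, a small sketch of why np.ix_ is needed (toy array, not the original data): indexing several axes with plain integer lists pairs the lists element-wise, while np.ix_ builds an open mesh and selects the full cross-product, which is what the per-axis DataSelect lists are meant to do:
import numpy as np

a = np.arange(20).reshape(4, 5)

# Plain fancy indexing pairs the lists element-wise:
print(a[[0, 2], [1, 3]])          # [a[0, 1], a[2, 3]] -> shape (2,)

# np.ix_ selects every combination of rows and columns:
print(a[np.ix_([0, 2], [1, 3])])  # rows 0 and 2, columns 1 and 3 -> shape (2, 2)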
