FileNotFoundError: [WinError 3] The system cannot find the path specified: './train' - python-3.10

train_datagen = ImageDataGenerator(rescale=1./255, rotation_range=10, zoom_range=0.1, horizontal_flip=False, fill_mode="nearest")
test_datagen = ImageDataGenerator(rescale=1./255, rotation_range=10, zoom_range=0.1, horizontal_flip=False, fill_mode="nearest")
training_set = train_datagen.flow_from_directory('./train', target_size=(150, 150), batch_size=40, class_mode='categorical')
test_set = test_datagen.flow_from_directory('./test', target_size=(150, 150), batch_size=40, class_mode='categorical')
print(len(training_set.filenames))
I tried giving the dataset directory explicitly, but it didn't work out for me.
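For reference, a quick sanity check of where Python is actually resolving './train' from (a sketch, assuming the dataset folders are meant to sit next to the script):

import os

# flow_from_directory resolves './train' against the current working
# directory, which is not necessarily the folder the script lives in.
print(os.getcwd())
print(os.path.isdir('./train'), os.path.isdir('./test'))

# Building an absolute path from the script location is usually safer
# (assumes train/ and test/ sit next to this script).
base_dir = os.path.dirname(os.path.abspath(__file__))
training_set = train_datagen.flow_from_directory(
    os.path.join(base_dir, 'train'), target_size=(150, 150),
    batch_size=40, class_mode='categorical')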

Related

How to create my own layers on MONAI U-Net?

I'm using MONAI on Spyder Anaconda to build a U-Net network. I want to add/modify layers starting from this baseline.
# Imports assumed for this snippet (not shown in the original post):
import torch
from monai.networks import nets, layers
from monai import losses, utils
from monai.transforms import Compose, EnsureType, Activations, AsDiscrete
from monai.inferers import SimpleInferer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nets.UNet(
    spatial_dims=2,
    in_channels=3,
    out_channels=1,
    channels=(4, 8, 16, 32, 64),
    strides=(2, 2, 2, 2),
    num_res_units=3,
    norm=layers.Norm.BATCH,
    kernel_size=3,
).to(device)
loss_function = losses.DiceLoss()
torch.backends.cudnn.benchmark = True
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=0)
post_pred = Compose([EnsureType(), Activations(sigmoid=True), AsDiscrete(threshold=0.5)])
post_label = Compose([EnsureType()])
inferer = SimpleInferer()
utils.set_determinism(seed=46)
My final aim is to create a MultiResUNet that has different layers such as:
class Conv2d_batchnorm(torch.nn.Module):
    '''
    2D Convolutional layers

    Arguments:
        num_in_filters {int} -- number of input filters
        num_out_filters {int} -- number of output filters
        kernel_size {tuple} -- size of the convolving kernel
        stride {tuple} -- stride of the convolution (default: {(1, 1)})
        activation {str} -- activation function (default: {'relu'})
    '''

    def __init__(self, num_in_filters, num_out_filters, kernel_size, stride=(1, 1), activation='relu'):
        super().__init__()
        self.activation = activation
        self.conv1 = torch.nn.Conv2d(in_channels=num_in_filters, out_channels=num_out_filters,
                                     kernel_size=kernel_size, stride=stride, padding='same')
        self.batchnorm = torch.nn.BatchNorm2d(num_out_filters)

    def forward(self, x):
        x = self.conv1(x)
        x = self.batchnorm(x)
        if self.activation == 'relu':
            return torch.nn.functional.relu(x)
        else:
            return x
This is just an example of a different Conv2d layer that I would use instead of the native one in the baseline.
I hope some of you can help me figure out how to proceed.
Thanks, Fede
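As a hedged sketch (not MONAI-specific): the usual PyTorch pattern for swapping layers in an already-built model is to walk named_children() and replace modules in place. The example below only replaces stride-1 convolutions, because the Conv2d_batchnorm block above uses padding='same', which PyTorch supports only for stride 1:

import torch

def replace_stride1_convs(module):
    # Recursively replace every stride-1 nn.Conv2d with Conv2d_batchnorm.
    for name, child in module.named_children():
        if isinstance(child, torch.nn.Conv2d) and child.stride == (1, 1):
            setattr(module, name, Conv2d_batchnorm(
                child.in_channels, child.out_channels,
                kernel_size=child.kernel_size, stride=child.stride))
        else:
            replace_stride1_convs(child)

replace_stride1_convs(model)
model.to(device)  # move the newly created layers to the same device
print(model)      # inspect the result to confirm the swap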

Getting No such file or directory [[{{node ReadFile}}]] [[IteratorGetNext]] [Op:__inference_train_function_9137] error

This may have a simple answer, but I'm currently building a neural network using Keras and I ran into this problem with the following code:
EPOCHS = 50

callbacks = [
    tf.keras.callbacks.ReduceLROnPlateau(
        monitor='val_loss', factor=0.1, patience=10, verbose=1, mode='min', min_delta=0.0001),
    tf.keras.callbacks.ModelCheckpoint(
        'weights.tf', monitor='val_loss', verbose=1, save_best_only=True, save_weights_only=True),
    tf.keras.callbacks.EarlyStopping(
        monitor='val_loss', min_delta=0, patience=15, verbose=1, restore_best_weights=True)
]

history = model.fit(
    train_ds,
    validation_data=val_ds,
    verbose=1,
    callbacks=callbacks,
    epochs=EPOCHS,
)

model.load_weights('weights.tf')
model.evaluate(val_ds)
Output:
Epoch 1/50
NotFoundError                             Traceback (most recent call last)
<ipython-input-15-265d39d703c7> in <module>
     10 ]
     11
---> 12 history = model.fit(
     13     train_ds,
     14     validation_data=val_ds,

1 frames
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
     52   try:
     53     ctx.ensure_initialized()
---> 54     tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
     55                                         inputs, attrs, num_outputs)
     56   except core._NotOkStatusException as e:

NotFoundError: Graph execution error:

train/60377.jpg; No such file or directory
    [[{{node ReadFile}}]]
    [[IteratorGetNext]] [Op:__inference_train_function_9137]
Here's my data:
FairFace Dataset from Kaggle
Here's how I preprocessed the images from the FairFace dataset (using code I borrowed).
IMG_SIZE = 224
AUTOTUNE = tf.data.AUTOTUNE
BATCH_SIZE = 224
NUM_CLASSES = len(labels_map)

# Dataset creation
y_train = tf.keras.utils.to_categorical(train.race, num_classes=NUM_CLASSES, dtype='float32')
y_val = tf.keras.utils.to_categorical(val.race, num_classes=NUM_CLASSES, dtype='float32')

train_ds = tf.data.Dataset.from_tensor_slices((train.file, y_train)).shuffle(len(y_train))
val_ds = tf.data.Dataset.from_tensor_slices((val.file, y_val))

assert len(train_ds) == len(train.file) == len(train.race)
assert len(val_ds) == len(val.file) == len(val.race)

# Read files
def map_fn(path, label):
    image = tf.io.decode_jpeg(tf.io.read_file(path))
    image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))
    return image, label

# Read files
train_ds = train_ds.map(lambda path, lbl: (tf.io.decode_jpeg(tf.io.read_file(path)), lbl), num_parallel_calls=AUTOTUNE)
val_ds = val_ds.map(lambda path, lbl: (tf.io.decode_jpeg(tf.io.read_file(path)), lbl), num_parallel_calls=AUTOTUNE)

# Batch and resize after batch, then prefetch
train_ds = val_ds.map(lambda imgs, lbls: (tf.image.resize(imgs, (IMG_SIZE, IMG_SIZE)), lbls), num_parallel_calls=AUTOTUNE)
val_ds = val_ds.map(lambda imgs, lbls: (tf.image.resize(imgs, (IMG_SIZE, IMG_SIZE)), lbls), num_parallel_calls=AUTOTUNE)

train_ds = train_ds.batch(BATCH_SIZE)
val_ds = val_ds.batch(BATCH_SIZE)

# Performance enhancement - cache, batch, prefetch
train_ds = train_ds.prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.prefetch(buffer_size=AUTOTUNE)
I tried changing the jpg file name but to no avail.
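The traceback shows the relative path train/60377.jpg being resolved against the current working directory. A hedged sketch of one way to make the paths absolute before building the pipeline (DATA_DIR is a hypothetical location; train, val, y_train and y_val are taken from the snippet above):

import os

DATA_DIR = '/content/fairface'  # hypothetical root of the downloaded dataset

train_files = train.file.map(lambda p: os.path.join(DATA_DIR, p)).tolist()
val_files = val.file.map(lambda p: os.path.join(DATA_DIR, p)).tolist()

# Fail early if the first file still cannot be found.
assert os.path.exists(train_files[0]), train_files[0]

train_ds = tf.data.Dataset.from_tensor_slices((train_files, y_train)).shuffle(len(y_train))
val_ds = tf.data.Dataset.from_tensor_slices((val_files, y_val))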

how to generate a COCO dataset from black and white masks

I have a dataset composed of welds and masks (white for the weld and black for the background), but I need to use Mask R-CNN, so I have to convert them to COCO dataset annotations. Does anybody have any suggestions on how to do this?
I tried this one: https://github.com/chrise96/image-to-coco-json-converter
but I'm getting this error:
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-3-0ddc235b1528> in <module>
94
95 # Create images and annotations sections
---> 96 coco_format["images"], coco_format["annotations"], annotation_cnt = images_annotations_info(mask_path)
97
98 with open("output/{}.json".format(keyword),"w") as outfile:
<ipython-input-3-0ddc235b1528> in images_annotations_info(maskpath)
57 sub_masks = create_sub_masks(mask_image_open, w, h)
58 for color, sub_mask in sub_masks.items():
---> 59 category_id = category_colors[color]
60
61 # "annotations" info
KeyError: '(1, 1, 1)'
Here is the code; I've just added the weld category:
import glob
from src.create_annotations import *

# Label ids of the dataset
category_ids = {
    "outlier": 0,
    "window": 1,
    "wall": 2,
    "balcony": 3,
    "door": 4,
    "roof": 5,
    "sky": 6,
    "shop": 7,
    "chimney": 8,
    "weld": 9,
}

# Define which colors match which categories in the images
category_colors = {
    "(0, 0, 0)": 0,        # Outlier
    "(255, 0, 0)": 1,      # Window
    "(255, 255, 0)": 2,    # Wall
    "(128, 0, 255)": 3,    # Balcony
    "(255, 128, 0)": 4,    # Door
    "(0, 0, 255)": 5,      # Roof
    "(128, 255, 255)": 6,  # Sky
    "(0, 255, 0)": 7,      # Shop
    "(128, 128, 128)": 8,  # Chimney
    "(255, 255, 255)": 9   # Weld
}

# Define the ids that are a multipolygon. In our case: weld, wall, roof and sky
multipolygon_ids = [9, 2, 5, 6]

# Get "images" and "annotations" info
def images_annotations_info(maskpath):
    # This id will be automatically increased as we go
    annotation_id = 0
    image_id = 0
    annotations = []
    images = []

    for mask_image in glob.glob(maskpath + "*.png"):
        # The mask image is *.png but the original image is *.jpg.
        # We make a reference to the original file in the COCO JSON file
        original_file_name = os.path.basename(mask_image).split(".")[0] + ".jpg"

        # Open the image and (to be sure) convert it to RGB
        mask_image_open = Image.open(mask_image).convert("RGB")
        w, h = mask_image_open.size

        # "images" info
        image = create_image_annotation(original_file_name, w, h, image_id)
        images.append(image)

        sub_masks = create_sub_masks(mask_image_open, w, h)
        for color, sub_mask in sub_masks.items():
            category_id = category_colors[color]

            # "annotations" info
            polygons, segmentations = create_sub_mask_annotation(sub_mask)

            # Check if we have classes that are a multipolygon
            if category_id in multipolygon_ids:
                # Combine the polygons to calculate the bounding box and area
                multi_poly = MultiPolygon(polygons)
                annotation = create_annotation_format(multi_poly, segmentations, image_id, category_id, annotation_id)
                annotations.append(annotation)
                annotation_id += 1
            else:
                for i in range(len(polygons)):
                    # Cleaner to recalculate this variable
                    segmentation = [np.array(polygons[i].exterior.coords).ravel().tolist()]
                    annotation = create_annotation_format(polygons[i], segmentation, image_id, category_id, annotation_id)
                    annotations.append(annotation)
                    annotation_id += 1
        image_id += 1

    return images, annotations, annotation_id

if __name__ == "__main__":
    # Get the standard COCO JSON format
    coco_format = get_coco_json_format()

    for keyword in ["train", "val"]:
        mask_path = "dataset/{}_mask/".format(keyword)

        # Create category section
        coco_format["categories"] = create_category_annotation(category_ids)

        # Create images and annotations sections
        coco_format["images"], coco_format["annotations"], annotation_cnt = images_annotations_info(mask_path)

        with open("output/{}.json".format(keyword), "w") as outfile:
            json.dump(coco_format, outfile)

        print("Created %d annotations for images in folder: %s" % (annotation_cnt, mask_path))
Check that (255, 255, 255) is really the value of the object (the weld) in the mask.
Also check the bit depth of the masks; it must be the same for all masks.
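The KeyError '(1, 1, 1)' means the mask contains a color that is not listed in category_colors (nearly-black pixels left over from anti-aliasing or a 1-bit/16-bit export, for example). A hedged sketch that snaps every mask pixel to pure black or white before running the converter (it overwrites the mask files in place, so work on a copy):

import glob
import numpy as np
from PIL import Image

for path in glob.glob("dataset/train_mask/*.png") + glob.glob("dataset/val_mask/*.png"):
    arr = np.array(Image.open(path).convert("RGB"))
    # Pixels brighter than 127 in any channel become white (weld),
    # everything else becomes black (outlier/background).
    binary = np.where(arr.max(axis=-1, keepdims=True) > 127, 255, 0).astype(np.uint8)
    Image.fromarray(np.repeat(binary, 3, axis=-1)).save(path)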

Estimator doesn't work even if input is right

I'm adapting a script to achieve transfer learning. I found many scripts that retrain a model from TFRecord files, but none of them worked for me because of something related to TF 2.0 and contrib, so I'm trying to convert one of those scripts to TF2 and to my model.
This is my script at the moment:
from __future__ import absolute_import, division, print_function, unicode_literals

import os
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf

keras = tf.keras

EPOCHS = 1

# Data preprocessing
import pathlib
#data_dir = tf.keras.utils.get_file(origin="/home/pi/venv/raccoon_dataset/", fname="raccoons_dataset")
#data_dir = pathlib.Path(data_dir)
data_dir = "/home/pi/.keras/datasets/ssd_mobilenet_v1_coco_2018_01_28/saved_model/saved_model.pb"

######################
# Read the TFRecords #
######################
def imgs_input_fn(filenames, perform_shuffle=False, repeat_count=1, batch_size=1):
    def _parse_function(serialized):
        features = {
            'image': tf.io.FixedLenFeature([], tf.string),
            'label': tf.io.FixedLenFeature([], tf.int64)
        }
        # Parse the serialized data so we get a dict with our data.
        parsed_example = tf.io.parse_single_example(serialized=serialized,
                                                    features=features)
        print("\nParsed example:\n", parsed_example, "\nEnd of parsed example:\n")
        # Get the image as raw bytes.
        image_shape = tf.stack([300, 300, 3])
        image_raw = parsed_example['image']
        label = tf.cast(parsed_example['label'], tf.float32)
        # Decode the raw bytes so it becomes a tensor with type.
        image = tf.io.decode_raw(image_raw, tf.uint8)
        image = tf.cast(image, tf.float32)
        image = tf.reshape(image, image_shape)
        #image = tf.subtract(image, 116.779) # Zero-center by mean pixel
        #image = tf.reverse(image, axis=[2]) # 'RGB'->'BGR'
        d = dict(zip(["image"], [image])), [label]
        return d

    dataset = tf.data.TFRecordDataset(filenames=filenames)
    # Parse the serialized data in the TFRecords files.
    # This returns TensorFlow tensors for the image and labels.
    #print("\nDataset before parsing:\n",dataset,"\n")
    dataset = dataset.map(_parse_function)
    #print("\nDataset after parsing:\n",dataset,"\n")
    if perform_shuffle:
        # Randomizes input using a window of 256 elements (read into memory)
        dataset = dataset.shuffle(buffer_size=256)
    dataset = dataset.repeat(repeat_count)  # Repeats dataset this # times
    dataset = dataset.batch(batch_size)     # Batch size to use
    print("\nDataset batched:\n", dataset, "\nEnd dataset\n")
    iterator = tf.compat.v1.data.make_one_shot_iterator(dataset)
    print("\nIterator shape:\n", tf.compat.v1.data.get_output_shapes(iterator), "\nEnd\n")
    #print("\nIterator:\n",iterator.get_next(),"\nEnd Iterator\n")
    batch_features, batch_labels = iterator.get_next()
    return batch_features, batch_labels

raw_train = tf.compat.v1.estimator.TrainSpec(input_fn=imgs_input_fn(
    "/home/pi/venv/raccoon_dataset/data/train.record",
    perform_shuffle=True,
    repeat_count=5,
    batch_size=20),
    max_steps=1)
and this is the resulting screen:
Parsed example:
{'image': <tf.Tensor 'ParseSingleExample/ParseSingleExample:0' shape=() dtype=string>, 'label': <tf.Tensor 'ParseSingleExample/ParseSingleExample:1' shape=() dtype=int64>}
End of parsed example:
Dataset batched:
<BatchDataset shapes: ({image: (None, 300, 300, 3)}, (None, 1)), types: ({image: tf.float32}, tf.float32)>
End dataset
Iterator shape:
({'image': TensorShape([None, 300, 300, 3])}, TensorShape([None, 1]))
End
2019-11-20 14:01:14.493817: W tensorflow/core/framework/op_kernel.cc:1622] OP_REQUIRES failed at example_parsing_ops.cc:240 : Invalid argument: Feature: image (data type: string) is required but could not be found.
2019-11-20 14:01:14.495019: W tensorflow/core/framework/op_kernel.cc:1622] OP_REQUIRES failed at iterator_ops.cc:929 : Invalid argument: {{function_node __inference_Dataset_map__parse_function_27}} Feature: image (data type: string) is required but could not be found.
[[{{node ParseSingleExample/ParseSingleExample}}]]
Traceback (most recent call last):
File "transfer_learning.py", line 127, in <module>
batch_size=20),
File "transfer_learning.py", line 107, in imgs_input_fn
batch_features, batch_labels = iterator.get_next()
File "/home/pi/venv/lib/python3.7/site-packages/tensorflow_core/python/data/ops/iterator_ops.py", line 737, in get_next
return self._next_internal()
File "/home/pi/venv/lib/python3.7/site-packages/tensorflow_core/python/data/ops/iterator_ops.py", line 651, in _next_internal
output_shapes=self._flat_output_shapes)
File "/home/pi/venv/lib/python3.7/site-packages/tensorflow_core/python/ops/gen_dataset_ops.py", line 2673, in iterator_get_next_sync
_six.raise_from(_core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: {{function_node __inference_Dataset_map__parse_function_27}} Feature: image (data type: string) is required but could not be found.
[[{{node ParseSingleExample/ParseSingleExample}}]] [Op:IteratorGetNextSync]
I don't know what I'm doing wrong.
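"Feature: image ... is required but could not be found" usually means the keys in the parse spec don't match the keys the TFRecord was written with (records produced with the TF Object Detection API, for instance, typically use names like 'image/encoded' rather than 'image'). A small check, assuming TF 2.x eager mode, to list the keys actually stored in the first record:

import tensorflow as tf

record_path = "/home/pi/venv/raccoon_dataset/data/train.record"
for raw in tf.data.TFRecordDataset(record_path).take(1):
    example = tf.train.Example.FromString(raw.numpy())
    # These are the names _parse_function has to use in FixedLenFeature.
    print(sorted(example.features.feature.keys()))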

Tensorflow > 2GB array as an input for tf.slice_input_producer

Using Python 3 and TensorFlow, I've tried to feed my data as training data into tf.train.slice_input_producer and tf.train.shuffle_batch:
def batch_data():
    ...
    # trX as training_data and trY as training_labels.
    # Both are numpy arrays
    data_queues = tf.train.slice_input_producer([trX, trY])
    X, Y = tf.train.shuffle_batch(data_queues, num_threads=num_threads,
                                  batch_size=batch_size,
                                  capacity=batch_size * 64,
                                  min_after_dequeue=batch_size * 32,
                                  allow_smaller_final_batch=False)
    return X, Y
But I got a "tensor larger than 2GB" error:
data_queues = tf.train.slice_input_producer([trX, trY])
File "C:\Users\ellamunde\AppData\Local\Continuum\anaconda2\envs\python36\lib\site-packages\tensorflow\python\training\input.py", line 302, in slice_input_producer
tensor_list = ops.convert_n_to_tensor_or_indexed_slices(tensor_list)
File "C:\Users\ellamunde\AppData\Local\Continuum\anaconda2\envs\python36\lib\site-packages\tensorflow\python\framework\ops.py", line 1153, in convert_n_to_tensor_or_indexed_slices
values=values, dtype=dtype, name=name, as_ref=False)
File "C:\Users\ellamunde\AppData\Local\Continuum\anaconda2\envs\python36\lib\site-packages\tensorflow\python\framework\ops.py", line 1124, in internal_convert_n_to_tensor_or_indexed_slices
value, dtype=dtype, name=n, as_ref=as_ref))
File "C:\Users\ellamunde\AppData\Local\Continuum\anaconda2\envs\python36\lib\site-packages\tensorflow\python\framework\ops.py", line 1083, in internal_convert_to_tensor_or_indexed_slices
value, dtype=dtype, name=name, as_ref=as_ref)
File "C:\Users\ellamunde\AppData\Local\Continuum\anaconda2\envs\python36\lib\site-packages\tensorflow\python\framework\ops.py", line 926, in internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "C:\Users\ellamunde\AppData\Local\Continuum\anaconda2\envs\python36\lib\site-packages\tensorflow\python\framework\constant_op.py", line 229, in _constant_tensor_conversion_function
return constant(v, dtype=dtype, name=name)
File "C:\Users\ellamunde\AppData\Local\Continuum\anaconda2\envs\python36\lib\site-packages\tensorflow\python\framework\constant_op.py", line 208, in constant
value, dtype=dtype, shape=shape, verify_shape=verify_shape))
File "C:\Users\ellamunde\AppData\Local\Continuum\anaconda2\envs\python36\lib\site-packages\tensorflow\python\framework\tensor_util.py", line 447, in make_tensor_proto
"Cannot create a tensor proto whose content is larger than 2GB.")
ValueError: Cannot create a tensor proto whose content is larger than 2GB.
I tried to handle it with variables:
def batch_data():
    ...
    Xplaceholder = tf.placeholder(trX.dtype, shape=trX.shape, name='Xplaceholder')
    Xvar = tf.get_variable('XVariable', shape=trX.shape, dtype=trX.dtype, initializer=tf.zeros_initializer())
    Yplaceholder = tf.placeholder(trY.dtype, shape=trY.shape, name='Yplaceholder')
    Yvar = tf.get_variable('YVariable', shape=trY.shape, dtype=trY.dtype, initializer=tf.zeros_initializer())
    Xassign = Xvar.assign(Xplaceholder)
    Yassign = Yvar.assign(Yplaceholder)

    with tf.Session() as session:
        session.run(tf.global_variables_initializer())
        session.run(Xassign, feed_dict={Xplaceholder: trX})
        session.run(Yassign, feed_dict={Yplaceholder: trY})
        session.close()

    data_queues = tf.train.slice_input_producer([Xvar, Yvar])
    X, Y = tf.train.shuffle_batch(data_queues, num_threads=num_threads,
                                  batch_size=batch_size,
                                  capacity=batch_size * 64,
                                  min_after_dequeue=batch_size * 32,
                                  allow_smaller_final_batch=False)
Actually, it works. But the loss value is different on each training run, and the loss always increases during training and never decreases.
Can anybody give me some insight into why this happens?
Thanks
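One thing worth noting: variable values do not survive session.close(), so if the training session later runs tf.global_variables_initializer() again, XVariable and YVariable are reset to zeros and the model effectively trains on empty data, which could explain the odd loss behaviour. A hedged alternative sketch, using the TF 1.x documented workaround for the 2GB GraphDef limit (feeding the arrays through placeholders into a tf.data pipeline with an initializable iterator):

import tensorflow as tf

Xplaceholder = tf.placeholder(trX.dtype, shape=trX.shape)
Yplaceholder = tf.placeholder(trY.dtype, shape=trY.shape)

dataset = (tf.data.Dataset.from_tensor_slices((Xplaceholder, Yplaceholder))
           .shuffle(buffer_size=batch_size * 32)
           .repeat()
           .batch(batch_size))
iterator = dataset.make_initializable_iterator()
X, Y = iterator.get_next()

with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    # The arrays are fed once here instead of being baked into the graph,
    # so the 2GB tensor-proto limit no longer applies.
    session.run(iterator.initializer, feed_dict={Xplaceholder: trX, Yplaceholder: trY})
    # ... training loop using X and Y goes here ...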
