I am loading images from a CSV file. The images are 300 x 300 pixels but flattened to 90000 values. I am getting an error about the input shape. I am using the TensorFlow backend. I have attached an image of my CSV file as well as an image of the error. It looks like it's passing the whole list of arrays instead of passing each line.
"ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 arrays but instead got the following list of 380 arrays:[array([ 43., 45., 46., ..., 161., 152., 146.]), array([ 211., 222., 224., ..., 212., 213., 213.]), array([ 201., 201., "
[image: csv file]
[image: error]
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout
import csv
import cv2
import re

loaded_images_train = []
loaded_labels_train = []
loaded_images_test = []
loaded_labels_test = []

with open('images_train.csv') as f:
    csvReader = csv.reader(f, lineterminator='\n')
    for row in csvReader:
        row = np.asarray(row, dtype='float')
        loaded_images_train.append(row)

with open('labels_train.csv') as f:
    csvReader = csv.reader(f, lineterminator='\n')
    for row in csvReader:
        row = str(row)
        row = row.strip(',')
        loaded_labels_train.append(row)

with open('images_test.csv') as f:
    csvReader = csv.reader(f, lineterminator='\n')
    for row in csvReader:
        row = np.asarray(row, dtype='float')
        loaded_images_test.append(row)

with open('labels_test.csv') as f:
    csvReader = csv.reader(f, lineterminator='\n')
    for row in csvReader:
        row = str(row)
        row = row.strip(',')
        loaded_labels_test.append(row)
# load data
x_train = loaded_images_train
y_train = loaded_labels_train
print("Loaded Training Data")

x_test = loaded_images_test
y_test = loaded_labels_test
print("Loaded Testing Data")

model = Sequential()
model.add(Dense(64, input_shape=(90000,), activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

model.fit(x_train, y_train,
          epochs=20,
          batch_size=128)
score = model.evaluate(x_test, y_test, batch_size=128)
print(score)
The way you are converting each line with asarray and then feeding Keras a list of arrays is not working.
I've tested your code with a slightly different approach and it ran flawlessly for me with the CSV you provided in the comments (changing input_size to 400).
Read all lines from the file into loaded_images_train. It will be a list of lists:
input_size = 90000

with open('images_train.csv') as f:
    csvReader = csv.reader(f, lineterminator='\n')
    for row in csvReader:
        assert len(row) == input_size
        loaded_images_train.append(row)
I've included the assertion following your feedback to my comment.
You can also assert len(row) == output_size for the labels.
On the other hand, if you are sure about the sizes of the rows, you can replace the loop with a simple:
loaded_images_train = list(csvReader)
Whichever you choose, do the same for the test images.
Then do the conversion to numpy.ndarray when declaring x_train:
x_train = np.asarray(loaded_images_train, dtype=float) # you don't really need the quotes here
Finally, printing the shape of the loaded data can help you know that everything is ok. For example:
print("Loaded Training Data", x_train.shape)
The reason you hit this problem is that your dataset's type is list, but the only input type a Keras model accepts is a NumPy array.
You need to convert the lists to NumPy arrays with np.asarray(loaded_images_train) and make sure the shape of the data is (n, 90000).
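A minimal sketch of that conversion, using the variable names from the question (the label dtype is an assumption and depends on how your labels are stored):
import numpy as np

# stack the list of 1-D rows into a single 2-D array
x_train = np.asarray(loaded_images_train, dtype=float)
# labels must be a NumPy array too; for binary_crossentropy they
# should end up numeric (0/1), so dtype=float is assumed here
y_train = np.asarray(loaded_labels_train, dtype=float)

print(x_train.shape)  # expect (n, 90000)
print(y_train.shape)  # expect (n,)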
I was wondering if someone could help me with an error message I am getting. First allow me to briefly walk through my workflow:
imported a raster image via rasterio.open
converted the raster to an array via raster.read(band number)
did some calculations on the array
tried to convert the final result into a GeoTiff
But when I try to execute my code I get the following error message:
AttributeError: 'DatasetReader' object has no attribute 'open'
Here is my code:
# Get necessary information
driver = "GTiff"
nlines = raster.height
ncols = raster.width
nbands = raster.count
data_type = "float32"
crs = raster.crs
transform = raster.transform
count = raster.count
file_name = "C:/file_path/file_name.tif"

# Writing the GeoTiff
with raster.open("C:/file_path/file_name.tif", "w",
                 driver=driver,
                 height=height,
                 width=width,
                 count=count,
                 dtype=dtype,
                 crs=crs,
                 transform=transform) as dst:
    dst.write(raster_array)
I am trying to write a NumPy array as a GeoTiff. I even checked whether my data is a NumPy array, and the answer was True.
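The thread doesn't include a fix, but the traceback points at raster.open: raster is already an open DatasetReader, so it has no open attribute. A sketch of the likely fix, assuming the module was imported as rasterio (it also uses the variables actually defined above, nlines/ncols/data_type, rather than the undefined height/width/dtype):
import rasterio

# call open() on the rasterio module, not on the DatasetReader instance
with rasterio.open(file_name, "w",
                   driver=driver,
                   height=nlines,
                   width=ncols,
                   count=count,
                   dtype=data_type,
                   crs=crs,
                   transform=transform) as dst:
    # for a 2-D array, write() needs the target band index;
    # a 3-D (bands, rows, cols) array can be written without it
    dst.write(raster_array, 1)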
I am new to pytorch. I am trying to create a DataLoader for a dataset of images where each image has a corresponding ground truth (with the same name):
root:
--->RGB:
------>img1.png
------>img2.png
------>...
------>imgN.png
--->GT:
------>img1.png
------>img2.png
------>...
------>imgN.png
When I use the path of the root folder (which contains the RGB and GT folders) as input for torchvision.datasets.ImageFolder, it reads all of the images as if they were all intended for input (classified as RGB and GT), and there seems to be no way to pair the RGB-GT images. I would like to pair the RGB-GT images, shuffle them, and divide them into batches of a defined size. How can this be done? Any advice will be appreciated.
Thanks.
I think a good starting point is to use the VisionDataset class as a base. What we are going to use here is the DatasetFolder source code, and we are going to create something similar. You may notice this class depends on two other functions from the datasets.folder module: default_loader and make_dataset.
We are not going to modify default_loader, because it is already fine; it just helps us load images, so we will import it.
But we need a new make_dataset function that prepares the right pairs of images from the root folder. The original make_dataset pairs images (image paths, to be more precise) with their root folder as the target class (class index), giving a list of (path, class_to_idx[target]) pairs, but we need (rgb_path, gt_path) pairs. Here is the code for the new make_dataset:
import os

def make_dataset(root: str) -> list:
    """Reads a directory with data.
    Returns a dataset as a list of tuples of paired image paths: (rgb_path, gt_path)
    """
    dataset = []

    # Our dir names
    rgb_dir = 'RGB'
    gt_dir = 'GT'

    # Get all the filenames from the RGB folder
    rgb_fnames = sorted(os.listdir(os.path.join(root, rgb_dir)))

    # Compare file names from the GT folder to file names from RGB:
    for gt_fname in sorted(os.listdir(os.path.join(root, gt_dir))):
        if gt_fname in rgb_fnames:
            # if we have a match - create a pair of full paths to the corresponding images
            rgb_path = os.path.join(root, rgb_dir, gt_fname)
            gt_path = os.path.join(root, gt_dir, gt_fname)
            item = (rgb_path, gt_path)
            # append the pair to the dataset list
            dataset.append(item)

    return dataset
What do we have now? Let's compare our function with the original one:
from torchvision.datasets.folder import make_dataset as make_dataset_original
dataset_original = make_dataset_original(root, {'RGB': 0, 'GT': 1}, extensions='png')
dataset = make_dataset(root)
print('Original make_dataset:')
print(*dataset_original, sep='\n')
print('Our make_dataset:')
print(*dataset, sep='\n')
Original make_dataset:
('./data/GT/img1.png', 1)
('./data/GT/img2.png', 1)
...
('./data/RGB/img1.png', 0)
('./data/RGB/img2.png', 0)
...
Our make_dataset:
('./data/RGB/img1.png', './data/GT/img1.png')
('./data/RGB/img2.png', './data/GT/img2.png')
...
I think it works great. Now it's time to create our Dataset class. The most important part here is the __getitem__ method, because it loads the images, applies transformations, and returns tensors that can be consumed by dataloaders. We need to read a pair of images (rgb and gt) and return a tuple of two tensor images:
from torchvision.datasets.folder import default_loader
from torchvision.datasets.vision import VisionDataset


class CustomVisionDataset(VisionDataset):

    def __init__(self,
                 root,
                 loader=default_loader,
                 rgb_transform=None,
                 gt_transform=None):
        super().__init__(root,
                         transform=rgb_transform,
                         target_transform=gt_transform)

        # Prepare dataset
        samples = make_dataset(self.root)

        self.loader = loader
        self.samples = samples
        # list of RGB images
        self.rgb_samples = [s[0] for s in samples]
        # list of GT images
        self.gt_samples = [s[1] for s in samples]

    def __getitem__(self, index):
        """Returns a data sample from our dataset.
        """
        # get the paths to the images
        rgb_path, gt_path = self.samples[index]

        # import each image using the loader (by default it's PIL)
        rgb_sample = self.loader(rgb_path)
        gt_sample = self.loader(gt_path)

        # here go the transforms if needed
        # maybe we need different transforms for each type of image
        if self.transform is not None:
            rgb_sample = self.transform(rgb_sample)
        if self.target_transform is not None:
            gt_sample = self.target_transform(gt_sample)

        # now we return the imported pair of images (tensors)
        return rgb_sample, gt_sample

    def __len__(self):
        return len(self.samples)
Let's test it:
from torch.utils.data import DataLoader
from torchvision.transforms import ToTensor
import matplotlib.pyplot as plt

bs = 4  # batch size
transforms = ToTensor()  # we need this to convert PIL images to Tensor
shuffle = True

dataset = CustomVisionDataset('./data', rgb_transform=transforms, gt_transform=transforms)
dataloader = DataLoader(dataset, batch_size=bs, shuffle=shuffle)

for i, (rgb, gt) in enumerate(dataloader):
    print(f'batch {i+1}:')
    # some plots (use a different inner loop variable so the batch index isn't shadowed)
    for j in range(bs):
        plt.figure(figsize=(10, 5))
        plt.subplot(221)
        plt.imshow(rgb[j].squeeze().permute(1, 2, 0))
        plt.title(f'RGB img{j+1}')
        plt.subplot(222)
        plt.imshow(gt[j].squeeze().permute(1, 2, 0))
        plt.title(f'GT img{j+1}')
        plt.show()
Out:
batch 1:
...
Here you can find a notebook with the code and a simple dummy dataset.
I have the following question: how can I change the format of curve2 (a list)? I want something similar to curve:
curve = [0.0556, 0.0563]
curve2 = [[0.0159, 0.0178]]
Context: I'd like to apply a certain piece of code, but I don't get the result I expect since the input has a different format.
My code is something like:
import pandas as pd
import numpy as np

curve = [0.0556, 0.0563]
curve2 = [[0.0159, 0.0178]]

df = pd.DataFrame()

def SUM(curve):
    df['COl1'] = curve
    return df

print(SUM(curve))
PS: curve2 is a row extracted from an array (shown here as a list):
[[ 0.01593353 0.01783041]
[ 0.00917833 0.00593893]
[ 0.00829569 0.02123637]
[-0.03057529 -0.04138836]
[ 0.05212978 0.03239212]]
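The thread doesn't include an accepted fix; one straightforward approach (a sketch, not from the original post) is to flatten the nested list before passing it on:
import numpy as np

curve2 = [[0.0159, 0.0178]]

# Option 1: take the inner list directly, since curve2 has exactly one row
flat = curve2[0]                  # [0.0159, 0.0178]

# Option 2: let NumPy flatten it; this also works for multi-row inputs
flat = np.ravel(curve2).tolist()  # [0.0159, 0.0178]

print(SUM(flat))  # `flat` now has the same format as `curve`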
I'm using rpart with rpy2 (version 2.8.6) on python 3.5, and want to train a decision tree for classification. My code snippet looks like this:
import rpy2.robjects.packages as rpackages
from rpy2.robjects.packages import importr
from rpy2.robjects import numpy2ri
from rpy2.robjects import pandas2ri
from rpy2.robjects import DataFrame, Formula
rpart = importr('rpart')
numpy2ri.activate()
pandas2ri.activate()
dataf = DataFrame({'responsev': owner_train_label,
                   'predictorv': owner_train_data})
formula = Formula('responsev ~.')
clf = rpart.rpart(formula=formula, data=dataf, method="class",
                  control=rpart.rpart_control(minsplit=10, xval=10))
where owner_train_label is a numpy float64 array of shape (12610,) and
owner_train_data is a numpy float64 array of shape (12610,88)
This is the error I'm getting when I run the last line of code to fit the data.
RRuntimeError: Error in ((xmiss %*% rep(1, ncol(xmiss))) < ncol(xmiss)) & !ymiss :
non-conformable arrays
I get that it is telling me the arrays are non-conformable, but I don't know why, since with the same training data I can train sklearn's decision tree successfully.
Thanks for your help.
I got around this by creating the dataframe using pandas and passing the pandas dataframe to rpart, using rpy2's pandas2ri to convert it to an R dataframe.
import pandas as pd
from rpy2.robjects.packages import importr
from rpy2.robjects import pandas2ri
from rpy2.robjects import Formula

rpart = importr('rpart')
pandas2ri.activate()

df = pd.DataFrame(data=owner_train_data)
df['l'] = owner_train_label
formula = Formula('l ~.')
clf = rpart.rpart(formula=formula, data=df, method="class",
                  control=rpart.rpart_control(minsplit=10, xval=10))
I want to get the data in column D that comes after the comma at the end of each sentence, reading from left to right, so I end up with the phrase shown in the linked screenshot:
[1]: http://prntscr.com/fye9hi "here"
Can someone help me, please?
This is my code, but it doesn't do what I want.
import xlrd

file_location = "C:/Users/admin/DataKH.xlsx"
wb = xlrd.open_workbook(file_location)
sheet = wb.sheet_by_index(0)
print(sheet.nrows)
print(sheet.ncols)
for rows in range(sheet.nrows):
    row_0 = sheet.cell_value(rows, 0)

from xlwt import Workbook
import xlwt
from xlwt import Formula

workbook = xlrd.open_workbook(file_location)
sheet = workbook.sheet_by_index(0)
data = [sheet.cell_value(row, 3) for row in range(sheet.nrows)]
data1 = [sheet.cell_value(row, 4) for row in range(sheet.nrows)]

workbook = xlwt.Workbook()
sheet = workbook.add_sheet('test')
for index, value in enumerate(data):
    sheet.write(index, 0, value)
for index, value in enumerate(data1):
    sheet.write(index, 1, value)
workbook.save('output.xls')
How about using the split(",") method? It returns a list of phrases, so you can easily iterate through it.
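For example, a minimal sketch with a made-up cell value:
cell = "John Smith, 123 Main Street, Phnom Penh"
parts = cell.split(",")          # ['John Smith', ' 123 Main Street', ' Phnom Penh']
last_phrase = parts[-1].strip()  # 'Phnom Penh'
print(last_phrase)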
@MinhTuấnNgô: I'm confused by xlrd's syntax, so I switched to pandas instead.
import pandas as pd
df = pd.read_excel('SampleData.xlsx')
df['Extracted Address'] = pd.Series((cell.split(',')[-1] for cell in df['Address']), index = df.index)
Not sure what you mean by 'getting the data after the comma' but this shows a way to manipulate the cell data.
After you've finished formatting the data, you can export it back to excel using df.to_excel(<filepath>)
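Putting it together, a sketch of the full round trip ('SampleData.xlsx' and the 'Address' column come from the snippet above; the output filename is made up):
import pandas as pd

df = pd.read_excel('SampleData.xlsx')
# keep only the phrase after the last comma in each address
df['Extracted Address'] = df['Address'].str.split(',').str[-1].str.strip()
df.to_excel('SampleData_extracted.xlsx', index=False)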
For xlrd, you can iterate through a specific column using this syntax:
for row in ws.col(2)[1:]:
    print(row.value)  # each item is an xlrd Cell object; .value holds the cell contents
This should skip the first row (as taken care of in the case of pandas anyway) and iterate all remaining rows.