Problems saving arrays as greyscale 'L' images using matplotlib?

I'm trying to save an array as an image using plt.imsave(). The original image is a 16-bit greyscale 'L' TIFF, but I keep getting the error:
AttributeError: 'str' object has no attribute 'shape'
figsize = [x / float(dpi) for x in (arr.shape[1], arr.shape[0])]
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
from PIL import Image
im2 = plt.imread('C:\Documents\Image\pic.tif')
plt.imsave(im2, '*.tif')
The image is 2048x2048 and the array is 2048Lx2048L. Nothing I've tried works: shape=[2048,2048], im2.shape(2048,2048). Can anybody tell me how to pass the shape as a keyword argument? Or is there an easier way to do this, preferably avoiding PIL, since it seems to have issues with 16-bit greyscale TIFFs and I absolutely have to use that format?

I think you've got the arguments backwards. From help(plt.imsave):
Help on function imsave in module matplotlib.pyplot:
imsave(*args, **kwargs)
Saves a 2D :class:`numpy.array` as an image with one pixel per element.
The output formats available depend on the backend being used.
Arguments:
*fname*:
A string containing a path to a filename, or a Python file-like object.
If *format* is *None* and *fname* is a string, the output
format is deduced from the extension of the filename.
*arr*:
A 2D array.
i.e.:
>>> im2.shape
(256, 256)
>>> plt.imsave(im2, "pic.tif")
Traceback (most recent call last):
File "<ipython-input-36-a7bbfaeb1a4c>", line 1, in <module>
plt.imsave(im2, "pic.tif")
File "/usr/lib/pymodules/python2.7/matplotlib/pyplot.py", line 1753, in imsave
return _imsave(*args, **kwargs)
File "/usr/lib/pymodules/python2.7/matplotlib/image.py", line 1230, in imsave
figsize = [x / float(dpi) for x in arr.shape[::-1]]
AttributeError: 'str' object has no attribute 'shape'
>>> plt.imsave("pic.tif", im2)
>>>
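For a greyscale array you will usually also want to pass a colormap, otherwise matplotlib applies its default one. A minimal sketch (the output name pic_out.tif and the vmin/vmax values are illustrative; note that imsave maps the data through the colormap, so the file holds the colormapped result rather than the raw 16-bit values):
import matplotlib.pyplot as plt
im2 = plt.imread('pic.tif')
# filename first, array second; cmap='gray' keeps the output greyscale,
# vmin/vmax pin the scaling to the full 16-bit range
plt.imsave('pic_out.tif', im2, cmap='gray', vmin=0, vmax=65535)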

Related

Rasterio Creating TIFF file

I tried to use the code from the Rasterio documentation to write an array to disk as a TIFF:
https://rasterio.readthedocs.io/en/latest/topics/writing.html
with rasterio.Env():
    profile = src.profile
    profile.update(
        dtype=rasterio.uint8,
        count=1,
        compress='lzw')
    with rasterio.open('example.tif', 'w', **profile) as dst:
        dst.write(array.astype(rasterio.uint8), 1)
When I run the code, I get the following error: NameError: name 'array' is not defined.
I tried 'np.array' instead of 'array' in the last line, to indicate that it is a numpy array, but that didn't work either.
The variable 'array' stands for the data that should be written to disk. Create such a numpy array first, for example:
import numpy as np
array = np.array(my_array_data)
Then you can write this data to disk as described in the tutorial.
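Putting it together, here is a minimal self-contained sketch; the file name, array size and values are illustrative, and the profile is built from scratch rather than copied from a source dataset:
import numpy as np
import rasterio
# Illustrative data: a 100x100 uint8 array
array = (np.arange(100 * 100) % 256).reshape(100, 100).astype(rasterio.uint8)
# Build the profile from scratch rather than copying src.profile
with rasterio.open(
        'example.tif', 'w',
        driver='GTiff',
        height=array.shape[0],
        width=array.shape[1],
        count=1,
        dtype=rasterio.uint8,
        compress='lzw') as dst:
    dst.write(array, 1)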

Converting RGB data into an array from a text file to create an Image

I am trying to convert the RGB data in file.txt into an array, and then use that array to create an image.
(RGB data is found at this github repository: IR Sensor File.txt).
I am trying to convert the .txt file into an array that I can pass to the PIL Image library, and then put it through the following script to create my image.
My roadblock right now is converting the arrays in file.txt into an appropriate format to work with the Image function.
from PIL import Image
import numpy as np
data = [ARRAY FROM THE file.txt]
img = Image.fromarray(data, 'RGB')
img.save('my.png')
img.show()
The RGB data looks like as follows, and can also be found at the .txt file from that github repository linked above:
[[(0,255,20),(0,255,50),(0,255,10),(0,255,5),(0,255,10),(0,255,25),(0,255,40),(0,255,71),(0,255,137),(0,255,178),(0,255,147),(0,255,158),(0,255,142),(0,255,163),(0,255,112),(0,255,132),(0,255,137),(0,255,153),(0,255,101),(0,255,122),(0,255,122),(0,255,147),(0,255,66),(0,255,66),(0,255,30),(0,255,61),(0,255,0),(0,255,0),(0,255,40),(0,255,66),(15,255,0),(0,255,15)],
[(0,255,40),(0,255,45),(15,255,0),(20,255,0),(10,255,0),(35,255,0),(0,255,5),(0,255,56),(0,255,173),(0,255,168),(0,255,153),(0,255,137),(0,255,158),(0,255,147),(0,255,127),(0,255,117),(0,255,142),(0,255,142),(0,255,122),(0,255,122),(0,255,137),(0,255,137),(0,255,101),(0,255,66),(0,255,71),(0,255,61),(0,255,25),(0,255,25),(0,255,61),(0,255,35),(0,255,0),(35,255,0)],
[(0,255,15),(0,255,25),(51,255,0),(71,255,0),(132,255,0),(101,255,0),(35,255,0),(0,255,20),(0,255,91),(0,255,153),(0,255,132),(0,255,147),(0,255,132),(0,255,158),(0,255,122),(0,255,132),(0,255,142),(0,255,158),(0,255,122),(0,255,137),(0,255,142),(0,255,147),(0,255,101),(0,255,101),(0,255,86),(0,255,86),(0,255,50),(0,255,45),(0,255,50),(0,255,56),(0,255,30),(56,255,0)],
[(0,255,45),(0,255,10),(76,255,0),(127,255,0),(132,255,0)]]
I think this should work - no idea if it's decent Python:
#!/usr/local/bin/python3
from PIL import Image
import numpy as np
import re

# Read in entire file
with open('sensordata.txt') as f:
    s = f.read()

# Find anything that looks like numbers
l = re.findall(r'\d+', s)

# Convert to a uint8 numpy array and reshape to 24 rows x 32 columns x RGB
data = np.array(l, dtype=np.uint8).reshape((24, 32, 3))

# Convert to image and save
img = Image.fromarray(data, 'RGB')
img.save('result.png')
I enlarged and contrast-stretched the image subsequently so you can see it!
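For reference, the enlargement can be done with PIL as well; a quick sketch, assuming a nearest-neighbour scale-up by an arbitrary factor of 10:
# Scale the 32x24 image up 10x with nearest-neighbour so the pixels stay crisp
big = img.resize((img.width * 10, img.height * 10), Image.NEAREST)
big.save('result_big.png')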

numpy array has different shape when I pass it as input to a keras layer

I have a keras encoder (part of an autoencoder) built this way:
input_vec = Input(shape=(200,))
encoded = Dense(20, activation='relu')(input_vec)
encoder = Model(input_vec, encoded)
I want to generate a dummy input using numpy.
>>> np.random.rand(200).shape
(200,)
But if I try to pass it as input to the encoder I get a ValueError:
>>> encoder.predict(np.random.rand(200))
>>> Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/home/francesco/PycharmProjects/W2VAutoencoded/venv/lib/python3.6/site-packages/keras/engine/training.py", line 1817, in predict
check_batch_axis=False)
File "/home/francesco/PycharmProjects/W2VAutoencoded/venv/lib/python3.6/site-packages/keras/engine/training.py", line 123, in _standardize_input_data
str(data_shape))
ValueError: Error when checking : expected input_1 to have shape (200,) but got array with shape (1,)
What am I missing?
While Keras Layers (Input, Dense, etc.) take as parameters the shape(s) for a single sample, Model.predict() takes as input batched data (i.e. samples stacked over the 1st dimension).
Right now your model believes you are passing it a batch of 200 samples of shape (1,).
This would work:
batch_size = 1
encoder.predict(np.random.rand(batch_size, 200))
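Equivalently, if you already have a single sample of shape (200,), you can add the batch axis to it; a small sketch using the same dummy data and the encoder defined above:
import numpy as np
sample = np.random.rand(200)            # shape (200,)
batch = np.expand_dims(sample, axis=0)  # shape (1, 200): a batch of one sample
encoder.predict(batch)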

PyTorch 2d Convolution with sparse filters

I am trying to perform a spatial convolution (e.g. on an image) in pytorch on dense input using a sparse filter matrix.
Sparse Tensors are implemented in PyTorch. I tried to use a sparse Tensor, but it ends up with a segmentation fault.
import torch
from torch.autograd import Variable
from torch.nn import functional as F
# build sparse filter matrix
i = torch.LongTensor([[0, 1, 1],[2, 0, 2]])
v = torch.FloatTensor([3, 4, 5])
filter = Variable(torch.sparse.FloatTensor(i, v, torch.Size([3,3])))
inputs = Variable(torch.randn(1,1,6,6))
F.conv2d(inputs, filter)
Can anyone give me a hint on how to do that?
Thanks in advance!
dymat
I know this question is outdated but I also know that there are still people looking for an answer (like myself) so here goes...
On sparse filters
If you'd like sparse convolution without the freedom to specify the sparsity pattern yourself, take a look at dilated conv (also called atrous conv). This is implemented in PyTorch and you can control the degree of sparsity by adjusting the dilation param in Conv2d.
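A minimal sketch of that option (the kernel size and dilation value here are arbitrary):
import torch
import torch.nn as nn
# A 3x3 kernel with dilation=2 covers a 5x5 area but only touches 9 of its 25 positions
conv = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3, dilation=2)
x = torch.randn(1, 1, 6, 6)
y = conv(x)        # spatial size: 6 - (3 - 1) * 2 = 2, so y has shape (1, 1, 2, 2)
print(y.shape)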
If you'd like to specify the sparsity pattern yourself, to the best of my knowledge, this feature is not currently available in PyTorch. But you may want to check this out if you are ok with using Tensorflow. There is also a blog post providing more details on this repo.
On sparse input
A list of existing and TODO sparse tensor operations is available here.
This talks about the current state of sparse tensors in PyTorch.
This lets you propose your own sparse tensor use case to the PyTorch contributors.
But at the time of this writing, I did not see conv on sparse tensors being an implemented feature or on the TODO list. nn.Linear on sparse input, however, is supported.
And if you build a sparse tensor and apply a conv layer to it, PyTorch (1.1.0) throws an exception:
>>> a = torch.zeros((1, 3, 2, 2), layout=torch.sparse_coo)
>>> net = torch.nn.Conv2d(1, 1, 1)
>>> b = net(a)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 338, in forward
self.padding, self.dilation, self.groups)
RuntimeError: sparse tensors do not have is_contiguous
>>> torch.__version__
'1.1.0'
Changing to a linear layer, however, works:
>>> c = torch.zeros((1, 2), layout=torch.sparse_coo)
>>> another_net = torch.nn.Linear(2, 1)
>>> d = another_net(c)
>>> d
tensor([[0.1944]], grad_fn=<AddmmBackward>)
>>> d.backward()
>>> another_net.weight.grad
tensor([[0., 0.]])
>>> another_net.bias.grad
tensor([1.])
These guys did something like a sparse conv2d: https://github.com/numenta/nupic.torch/

cv.Save and cv.Load (array)

I need to save and load an array, but I get this error:
cv.Save('i.xml',i)
TypeError: Cannot identify type of 'structPtr'
This is the code:
import cv
i = [[1,2],[3,4],[5,6],[7,8]]
cv.Save('i.xml',i)
That's because cv.Save needs to receive the object to be stored in the file as an OpenCV object. For example, the following is a minimal workable example that saves a numpy array in a file using cv.Save:
import cv2
import numpy as np
i = np.eye(3)
cv2.cv.Save('i.xml', cv2.cv.fromarray(i))
As you can see here, arrays should be converted back to numpy from OpenCV as well after reading.
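A sketch of the full round trip, assuming the legacy cv2.cv bindings; np.asarray is one way to get back a numpy view of the loaded matrix:
import cv2
import numpy as np
i = np.eye(3)
cv2.cv.Save('i.xml', cv2.cv.fromarray(i))   # numpy -> OpenCV, then save
loaded = cv2.cv.Load('i.xml')               # loads back as an OpenCV matrix
arr = np.asarray(loaded)                    # OpenCV -> numpy again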
Regards.
