How to convert an RGB image to grayscale in Python

I need help with the following. This code:
show_picture(x_train[0])
print(x_train.shape)
plt.imshow(x_train,cmap=cm.Greys_r,aspect='equal')
returns the following:
(267, 100, 100, 3)
TypeError Traceback (most recent call last)
<ipython-input-86-649cf879cecf> in <module>()
2 show_picture(x_train[0])
3 print(x_train.shape)
----> 4 plt.imshow(x_train,cmap=cm.Greys_r,aspect='equal')
5
5 frames
/usr/local/lib/python3.7/dist-packages/matplotlib/image.py in set_data(self, A)
697 or self._A.ndim == 3 and self._A.shape[-1] in [3, 4]):
698 raise TypeError("Invalid shape {} for image data"
--> 699 .format(self._A.shape))
700
701 if self._A.ndim == 3:
TypeError: Invalid shape (267, 100, 100, 3) for image data
What's the correct procedure to do this?

First of all, it seems like you are working with an array of 267 RGB images of size 100x100. I am assuming this is a NumPy array. To convert the images to grayscale you can use the method proposed in this answer:
def rgb2gray(rgb):
    return np.dot(rgb[..., :3], [0.2989, 0.5870, 0.1140])

x_train_gray = rgb2gray(x_train)
Note that this converts all images in one pass, and the resulting shape should be (267, 100, 100). However, plt.imshow only works on one image at a time, so to plot a single image in grayscale you can do the following:
plt.imshow(x_train_gray[0], cmap=plt.get_cmap('gray'), vmin=0, vmax=1)
plt.show()
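As a self-contained check of the approach above (using random data in place of the real x_train, so the numbers are illustrative only):

```python
import numpy as np

def rgb2gray(rgb):
    # weighted sum over the last (channel) axis using standard luminosity weights
    return np.dot(rgb[..., :3], [0.2989, 0.5870, 0.1140])

# random stand-in for x_train: 267 RGB images of 100x100, values in [0, 1]
x_train = np.random.rand(267, 100, 100, 3)
x_train_gray = rgb2gray(x_train)
print(x_train_gray.shape)  # (267, 100, 100)
```

If your images are uint8 arrays in [0, 255], the grayscale values will also be in [0, 255], so pass vmax=255 to plt.imshow instead of vmax=1.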

Related

Problems in np.reshape arrays

I have a problem when reshaping an array with np.reshape.
I have a list roi_x with a total of 120 elements, each of which is a NumPy array of floats of a different size:
len(roi_x)
Out: 120
for i in range(len(roi_x)):
    print(len(roi_x[i]))
625
3125
6250
625
3125
6250
... # and so on
I want to reshape it, but I get a warning (even though I do get the reshaped array):
roi_x = np.reshape(roi_x, (20,6))
VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
result = getattr(asarray(obj), method)(*args, **kwds)
After a bit of research, I adjusted my code as suggested and I don't get the error anymore and I am happy with the outcome:
roi_x = np.array(roi_x, dtype=object) # I added dtype=object
roi_x.shape
Out: (120,) # I get just the first dimension
roi_x = np.reshape(roi_x, (20,6))
roi_x.shape
Out: (20, 6)
However, when I try to do the same with another array roi_x of different size:
len(roi_x)
Out: 192
for i in range(len(roi_x)):
    print(len(roi_x[i]))
625
625
625
625
625
625
... # in this case, all with the same size
And I try to reshape it following the same procedure (but with a different shape):
roi_x = np.array(roi_x, dtype=object)
roi_x.shape
Out: (192, 625) # it is already a different outcome compared to before, there is a 2nd dimension
roi_x = np.reshape(roi_x, (48,4)) # 48*4=192 so it should be ok
I get the following error:
ValueError: cannot reshape array of size 120000 into shape (48,4)
I can't figure out what I am doing differently between my two examples.
Any ideas?
I had a look at this: Reshaping arrays in an array of arrays
and I tried to reshape each item, but I get an error:
roi_x = [i.reshape(48, 4) for i in roi_x]
ValueError: cannot reshape array of size 625 into shape (48,4)
Thank you in advance!
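For what it's worth, the difference between the two cases can be reproduced with a small sketch (with only two sub-arrays for brevity): when the sub-arrays have different lengths, np.array(..., dtype=object) can only build a 1-D array of references, but when they all have the same length it packs them into a full 2-D array, so reshape then sees 192 × 625 = 120000 elements instead of 192.

```python
import numpy as np

# different lengths -> a 1-D object array of references
ragged = [np.zeros(625), np.zeros(3125)]
arr = np.array(ragged, dtype=object)
print(arr.shape)  # (2,)

# equal lengths -> NumPy packs them into a full 2-D array, even with dtype=object
uniform = [np.zeros(625), np.zeros(625)]
arr2 = np.array(uniform, dtype=object)
print(arr2.shape)  # (2, 625)

# to force a 1-D object array regardless of element lengths, preallocate and fill
forced = np.empty(len(uniform), dtype=object)
forced[:] = uniform
print(forced.shape)  # (2,)
```

With the preallocation trick, a (192,) object array reshapes to (48, 4) without error, because NumPy only rearranges the 192 references rather than the 120000 underlying floats.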

ValueError: could not broadcast input array from shape (180,180,3) into shape (1,3,180,180)

I'm trying to get an image shape of (1,3,180,180) from original shape, which is (1224, 1842, 3). I've tried specifying the shape like this:
im_cv = cv.imread('test.jpg')
im_cv = cv.resize(im_cv, (1,3,180,180))
But get the error
File "/Users/lucasjacaruso/Desktop/hawknet-openvino/experiment.py", line 47, in <module>
im_cv = cv.resize(im_cv, (1,3,180,180))
cv2.error: OpenCV(4.5.3-openvino) :-1: error: (-5:Bad argument) in function 'resize'
> Overload resolution failed:
> - Can't parse 'dsize'. Expected sequence length 2, got 4
> - Can't parse 'dsize'. Expected sequence length 2, got 4
However, the model will not accept anything other than (1,3,180,180). If I simply specify the shape as (180,180), it's not accepted by the model:
ValueError: could not broadcast input array from shape (180,180,3) into shape (1,3,180,180)
How can I get the shape to be (1,3,180,180)?
Many thanks.
resize changes the spatial dimensions of the image, e.g. from (180,180,3) to, say, (300,300,3); its dsize argument takes exactly two values, which is why the four-element tuple fails. You need to add a batch dimension with np.newaxis. Further, since imread returns the depth (color) dimension as axis 2 and the model expects channels first, you need to move that axis:
im_cv = np.moveaxis(im_cv, 2, 0)[np.newaxis]
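Putting the pieces together (sketched with a random array standing in for the real cv.imread/cv.resize output, since those need an actual file): first resize to (180, 180) with cv.resize(im_cv, (180, 180)), then rearrange the axes:

```python
import numpy as np

# stand-in for cv.resize(im_cv, (180, 180)), which returns shape (180, 180, 3)
im_resized = np.random.rand(180, 180, 3)

# channels to the front: (180, 180, 3) -> (3, 180, 180),
# then add a leading batch axis: -> (1, 3, 180, 180)
im_model = np.moveaxis(im_resized, 2, 0)[np.newaxis]
print(im_model.shape)  # (1, 3, 180, 180)
```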

Size of outputs in OpenMDAO

Is it possible to have lists or arrays passed as outputs of components in OpenMDAO?
Since my problem relies on 6x6 matrices to solve an equation of motion with 6 degrees of freedom, I would like to be able to do the following:
M = np.ones([6, 6])
outputs['M'] = M
However, that results in an error:
ValueError: cannot reshape array of size 36 into shape (1,)
Is there any way to avoid passing each of the 36 values separately?
Yes, you can declare an output of any size or shape in your component's setup method by doing the following:
self.add_output('M', shape=(6, 6))
or
self.add_output('M', val=np.ones((6, 6)))

qiskit VQC with Amplitude Encoding for state preparation

I am trying to implement a Quantum Neural Network in qiskit, using the VQC class.
The problem is that each data point consists of 190 features, which just can't be encoded with VQC's default feature map (ZZFeatureMap), since that would mean creating a circuit with 190 qubits.
The solution that I would like to adopt is the Amplitude Encoding, which would allow me to use only 8 qubits (with 256 amplitudes = 190 features + 66 zeros).
How can I implement a parameterized circuit in qiskit that performs this?
I tried the following (as an example on 2 qubits), but it just doesn't work:
custom_circ = QuantumCircuit(2)
x = ParameterVector('x', 4)
custom_circ.initialize(x)
EDIT:
My problem is not with the parameters, but with the Amplitude Encoding.
Usually, if I need to encode a vector of 4 numbers in 2 qubits, I just do the following:
circuit = QuantumCircuit(2)
vector = [0.124, -0.124, 0.124, 0.124]
circuit.initialize(vector)
In this way I encode my vector as the amplitudes of the qubits.
But now I need to parameterize this (the vector is not fixed).
The problem is that the "initialize" function doesn't accept parameters:
Traceback (most recent call last):
[...]
File "D: ... \qiskit\extensions\quantum_initializer\initializer.py", line 455, in initialize
return self.append(Initialize(params, num_qubits), qubits)
File "D: ... \qiskit\extensions\quantum_initializer\initializer.py", line 89, in init
if not math.isclose(sum(np.absolute(params) ** 2), 1.0,
TypeError: bad operand type for abs(): 'Parameter'
Is there a way to create an amplitude encoding that is also parameterized?
EDIT 2:
I resolved the problem, thank you.
If you want to parameterize the Initialize circuit, just use RawFeatureVector.
You can build parameterized circuits in qiskit using the Parameter class. Here is an example:
In [1]: from qiskit import QuantumCircuit
...: from qiskit.circuit import Parameter
In [2]: custom_circ = QuantumCircuit(2)
...: theta = Parameter("\u03B8")
...: custom_circ.rz(theta, range(2))
...: custom_circ.draw()
Out[2]:
┌───────┐
q_0: ┤ RZ(θ) ├
├───────┤
q_1: ┤ RZ(θ) ├
└───────┘

Making an array of images in numpy python 2.7

I want to have an array of images. For example, I would have a 4x1 array, (called imslice below) where each element is a nxnx3 image.
I want to do this so that I can do matrix operations on my imslice matrix as if it were a normal matrix. For example, multiply it by a regular 2x2 matrix (called V). When I try to do this right now, I am getting an array with 5 dimensions, and when I try to multiply it by my V matrix I get the error that the dimensions don't agree (even though mathematically it's fine, because the inner dimensions agree).
Code:
imslice = np.array(([imslice1q, imslice2q, imslice3q, imslice4q]))
print imslice.shape
V = mh.gen_vmonde(4, 2, 1)
V.shape
C = np.dot(np.transpose(V), imslice)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
in ()
      6 V.shape
      7
----> 8 np.dot(np.transpose(V), imslice)
      9
ValueError: shapes (6,4) and (4,178,178,3) not aligned: 4 (dim 1) != 178 (dim 2)
Both np.dot and np.matmul treat more-than-two-dimensional arrays as stacks of matrices, so the last and last but one dimensions have to match.
A simple workaround in your case would be transposing:
np.dot(imslice.T, V).T
If you need something more flexible, there is np.einsum:
np.einsum('ji,jklm', V, imslice)
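A quick shape check of both variants, using random stand-ins for V and imslice with the dimensions taken from the traceback above:

```python
import numpy as np

imslice = np.random.rand(4, 178, 178, 3)  # stack of 4 RGB images
V = np.random.rand(4, 6)                  # stand-in for mh.gen_vmonde(4, 2, 1)

# contract V against the stack axis j of imslice
out = np.einsum('ji,jklm', V, imslice)
print(out.shape)  # (6, 178, 178, 3)

# the transpose trick gives the same result
out2 = np.dot(imslice.T, V).T
print(np.allclose(out, out2))  # True
```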
