Creating an array of images in Python / OpenCV / NumPy

I'm trying to implement a LoG (Laplacian of Gaussian) blob detector using Python and OpenCV.
The idea is to create 10-15 levels of LoG filters, apply each of them to my original grayscale image, save the responses in an array of size height x width x numOfLevels, and then find the local maxima in that 3D array.
The problem is that I'm not sure how to save these in an array.
I tried to do the following:
import cv2
import numpy as np
import matplotlib.pyplot as plt

myImage = cv2.imread('butterfly.jpg')
gray_image = cv2.cvtColor(myImage, cv2.COLOR_BGR2GRAY)
sigma = 2
k = 2**(0.25)
std2 = float(sigma**2)
arr = []
for i in range(10):
    filt_size = 2*np.ceil(3*sigma)+1
    H = log_filt(filt_size, sigma)  # log_filt is my own LoG kernel function
    H *= sigma**2
    dst = cv2.filter2D(gray_image, -1, H)
    arr.append(dst)
    cv2.imshow('Gray', dst)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
    sigma = sigma * k
    std2 = float(sigma**2)
    plt.imshow(H, interpolation='nearest')
But then if I try to take one of the images and use cv2.imshow(arr[0]), I get the following error:
TypeError: Required argument 'mat' (pos 2) not found
What am I doing wrong here?
Is there a better way to save these in an array?
Maybe using np.array somehow?

Your error:
TypeError: Required argument 'mat' (pos 2) not found
occurs because you pass the image (arr[0]) as the first argument, but cv2.imshow expects the window name first and the image second:
cv2.imshow('WindowName', arr[0])
See the OpenCV 2.4 and 3.0 documentation of imshow:
cv2.imshow(winname, mat)
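As for a better way to save the responses (the "maybe using np.array somehow?" part), here is a minimal sketch, assuming arr already holds the filter responses and that they all have the same height and width (cv2.filter2D keeps the input size, so they do):

import numpy as np

# stack the per-level responses into one height x width x numOfLevels array
scale_space = np.dstack(arr)
print(scale_space.shape)  # e.g. (height, width, 10)

# the strongest level per pixel, as a simple starting point for the maxima search
strongest_level = scale_space.argmax(axis=2)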

Related

MATLAB App Designer - convert an image to the right type to use it in the "imread" function

I use App Designer in MATLAB to show photos with modified RGB values, but there is a problem with the type of the photo variable.
When I step into my own function "changeRGB", "choosenImage" has 20 bytes, class "char" and size 1x10. OK!
There is no problem when it is called from "function OpenButtonValueChanged". OK!
The problem is in "function UploadButtonPushed".
ABOUT THE PROBLEM:
When I click the button whose callback is "function UploadButtonPushed", I get the error:
"Error using imread>parse_inputs (line 502)
The file name or URL argument must be a character
vector or string scalar."
"Error in imread (line 342)
[source, fmt_s, extraArgs, was_cached_fmt_used] =
parse_inputs(cached_fmt, varargin{:});"
WHY?
Because in "function UploadButtonPushed" my choosenImage has 1977624 bytes, class "uint8" and size 681x968x3, so it's too big for "imread".
WHAT I TRIED:
When in "function OpenButtonValueChanged" I convert a photo, adding "char": (myimage = char(app.clickedImage)); the class of the photo is changing from uint8 to char but the size.
When I use "num2cell", the claas of the photo is changing on "cell" but size and number of bytes are the same- so big. And I get the Error: "Error using imread>parse_inputs (line 502) The file name or URL argument must be a character vector or string scalar."
In my own function "changeRGB" I used "imread(image)" and here is the problem with the size of the photo. Do you know how to get the correct one?
% my own properties in App Designer - to use them in different functions
properties (Access = public)
    clickedImage;
    addR = 1;
    addG = 1;
    addB = 1;
end
% first function in App Designer
function OpenButtonValueChanged(app, event)
    value = app.OpenButton.Value;
    [file, howManyFiles] = chooseImagesFromComputer; % my own function
    % I load 3 images which are shown as thumbnails
    myFile1 = imread(file{1});
    imshow(myFile1, 'Parent', app.UIAxes1_1);
    myFile2 = imread(file{2});
    imshow(myFile2, 'Parent', app.UIAxes1_2);
    myFile3 = imread(file{3});
    imshow(myFile3, 'Parent', app.UIAxes1_3);
    % take the values of the changed RGB from the sliders
    app.addR = app.SliderR.Value
    app.addG = app.SliderG.Value
    app.addB = app.SliderB.Value
    % work only on one image to change its colors. app.clickedImage, app.addR,
    % app.addG, app.addB are the properties defined at the beginning of the code
    app.clickedImage = file{1};
    app.clickedImage = changeRGB(app.clickedImage,app.addR,app.addG,app.addB); % changeRGB - my own function - here is the problem, I add it at the bottom
    imshow(app.clickedImage,'Parent',app.UIAxesMain);
end
% second function in App Designer
% here is the button to upload the color of the photo
function UploadButtonPushed(app, event)
    myimage = app.clickedImage;
    myimage = changeRGB(myimage,app.addR,app.addG,app.addB);
    imshow(myimage);
end
% here is my own function in MATLAB, not in App Designer, which causes the problem:
function [changedImage] = changeRGB(choosenImage, addR, addG, addB)
    whos
    loadedImage = imread(choosenImage);
    R = loadedImage(:,:,1); % extract one of the color channels
    G = loadedImage(:,:,2);
    B = loadedImage(:,:,3);
    RBG = cat(3,R,G,B);
    R_adj2 = R + addR;
    G_adj2 = G + addG;
    B_adj2 = B + addB;
    changedImage = cat(3,R_adj2,G_adj2,B_adj2);
end
First, you are doing unnecessary operations in changeRGB; it can be reduced to:
function [changedImage] = changeRGB(choosenImage, addR, addG, addB)
    loadedImage = imread(choosenImage);
    changedImage = bsxfun(@plus, loadedImage, reshape([addR, addG, addB], [1 1 3]));
end
Then this function returns an array (the modified image), so in UploadButtonPushed(app, event), when you run myimage = app.clickedImage;, you are passing the modified image array instead of the image path, because you overwrote it earlier with app.clickedImage = changeRGB(app.clickedImage,app.addR,app.addG,app.addB);
So you have to change the design of your variables, because app.clickedImage currently holds either the image path or the image itself, depending on where you are in the code. Consider using two different properties.
It is also good advice to use the MATLAB debugger, which is a really good help in finding the source of this kind of problem.

Error using linsolve: Matrix must be positive definite

I am trying to use compressed sensing on a 2D matrix. I am running the following piece of code:
Nf = 800;
N = 401;
E = E(Nf,N); % matrix of the signal (this is only for sampling; the real E is a 2D matrix of size Nf x N)
% compressive sensing
M = ceil(0.3*N);
psi = fft(eye(N));
phi = randi(M,N);
EE = permute(E,[2 1]);
theta = phi*psi;
for k = 1:Nf
    y(:,k) = phi*EE(:,k);
end
x0 = theta.'*y;
for p = 1:Nf
    X_hat(:,p) = l1eq_pd(x0(:,p), theta, [], y(:,p), 1e-5); % l1eq_pd is from l1-magic
end
X1_hat = psi*X_hat;
XX_hat = permute(X1_hat,[2 1]);
But while running the code I get the following error:
Error using linsolve
Matrix must be positive definite.
Error in l1eq_pd (line 77)
[w, hcond] = linsolve(A*A', b, opts);
Error in simulation_mono_SAR (line 91)
X_hat(:,p) = l1eq_pd(x0(:,p), theta, [], y(:,p), 1e-5);
Could someone point out what the problem is? Is it a problem inherent to l1-magic? Should I use another solver?

Display a numpy array in Kivy

First of all, I'm totally new to Kivy, so I'm struggling a bit.
I'm trying to display a NumPy array in a Kivy window.
So far I have figured out that this should work using the Texture class (http://kivy.org/docs/api-kivy.graphics.texture.html).
As my NumPy array changes from time to time, I'm trying to adapt the following code to my application.
# create a 64x64 texture, defaults to rgb / ubyte
texture = Texture.create(size=(64, 64))
# create 64x64 rgb tab, and fill with values from 0 to 255
# we'll have a gradient from black to white
size = 64 * 64 * 3
buf = [int(x * 255 / size) for x in range(size)]
# then, convert the array to a ubyte string
buf = b''.join(map(chr, buf))
# then blit the buffer
texture.blit_buffer(buf, colorfmt='rgb', bufferfmt='ubyte')
# that's all ! you can use it in your graphics now :)
# if self is a widget, you can do this
with self.canvas:
    Rectangle(texture=texture, pos=self.pos, size=(64, 64))
It seems that creating the texture and changing it works as it should, but I don't get how to display the texture.
Can anybody explain to me how to use
with self.canvas:
    Rectangle(texture=texture, pos=self.pos, size=(64, 64))
in a way that lets me see my picture/NumPy array?
Thanks a lot in advance!
Holzroller
Edit:
I figured out that using Kivy 1.8.0 and the Texture class is a bit messy, so I upgraded to Kivy 1.9.0 via GitHub (installing Kivy via apt-get on Ubuntu 14.04 LTS gives you the 1.8.0 version) and I get to see the texture using the following code. I hope this helps people who are having the same problem as me.
from kivy.graphics.texture import Texture
from kivy.graphics import Rectangle
from kivy.uix.widget import Widget
from kivy.base import runTouchApp
from array import array
from kivy.core.window import Window
# create a 1280x1024 texture, rgb / ubyte
texture = Texture.create(size=(1280, 1024), colorfmt='rgb')
# create a 1280x1024 rgb buffer, and fill it with values from 0 to 255
# we'll have a gradient from black to white
size = 1280 * 1024 * 3
buf = [int(x * 255 / size) for x in range(size)]
# then, convert the array to a ubyte string
arr = array('B', buf)
# buf = b''.join(map(chr, buf))
# then blit the buffer
texture.blit_buffer(arr, colorfmt='rgb', bufferfmt='ubyte')
# that's all ! you can use it in your graphics now :)
# if self is a widget, you can do this
root = Widget()
with root.canvas:
    Rectangle(texture=texture, pos=(0, 0), size=(1280*3, 1024*3))
runTouchApp(root)
Edit 2:
Basically I'm back to the original problem:
I have a NumPy array (type 'numpy.ndarray', dtype 'uint8') and I'm trying to convert it into a format so that the texture will show me the image. I tried to break it down the same way it is done in the example code I posted above, but sadly it doesn't work. I really do not know what I'm doing wrong here.
(My NumPy array is called im2 in the following code.)
list1 = numpy.array(im2).reshape(-1,).tolist()
arr = array('B', list1)
texture.blit_buffer(arr, colorfmt='rgb', bufferfmt='ubyte')
NumPy arrays have a tostring() method that you can use directly if the source array is of uint8 type. You don't even need to reshape:
texture = Texture.create(size=(16, 16), colorfmt="rgb")
arr = numpy.ndarray(shape=[16, 16, 3], dtype=numpy.uint8)
# fill your numpy array here
data = arr.tostring()
texture.blit_buffer(data, bufferfmt="ubyte", colorfmt="rgb")
About the issue you mention in the comment, I see two points:
Ensure the callback from ROS is called in the main thread. Maybe the update is simply being ignored.
When you manually change the texture in place, the objects that use it are not notified; you need to do it yourself. Add a self.canvas.ask_update() to ensure the canvas is redrawn at the next frame.
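A minimal sketch of how those two points could look together in a widget (update_image, on_ros_frame and self.texture are just illustrative names, not from your code), assuming the incoming frame is a uint8 NumPy array of shape (height, width, 3):

from kivy.clock import Clock
from kivy.uix.widget import Widget

class ImageView(Widget):
    # hypothetical widget that already owns a Texture in self.texture

    def on_ros_frame(self, frame):
        # called from the ROS thread: push the work onto Kivy's main thread
        Clock.schedule_once(lambda dt: self.update_image(frame))

    def update_image(self, frame):
        # upload the new pixels into the existing texture
        self.texture.blit_buffer(frame.tostring(), colorfmt='rgb', bufferfmt='ubyte')
        # the canvas is not notified automatically, so ask for a redraw
        self.canvas.ask_update()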

How can I adjust contrast in OpenCV in C?

I'm just trying to adjust the contrast/brightness of a grayscale image to highlight the whites in it, using OpenCV in C. How can I do that? Is there any function in OpenCV that performs this task?
Original image:
Modified image:
Thanks in advance!
I think you can adjust contrast here in two ways:
1) Histogram equalization:
But when I tried this with your image, the result was not what you expected. Check it below:
2) Thresholding:
Here, I compared each pixel value of the input with an arbitrary value (I took 127). Below is the logic, for which OpenCV has a built-in function. But remember, the output is a binary image, not grayscale as in your case.
if (input pixel value >= 127):
    output pixel value = 255
else:
    output pixel value = 0
And below is the result I got:
For this, you can use the threshold function or the compare function.
3) If you must have a grayscale image as output, do as follows (the code is in OpenCV-Python, but corresponding C functions for every function are available at opencv.itseez.com):
for each pixel in image:
    if pixel value >= 127: add 'x' to the pixel value
    else: subtract 'x' from the pixel value
('x' is an arbitrary value.) Thus the difference between light and dark pixels increases.
import cv2
import numpy as np

img = cv2.imread('brain.jpg', 0)
bigmask = cv2.compare(img, np.uint8([127]), cv2.CMP_GE)
smallmask = cv2.bitwise_not(bigmask)
x = np.uint8([90])
big = cv2.add(img, x, mask=bigmask)
small = cv2.subtract(img, x, mask=smallmask)
res = cv2.add(big, small)
And below is the result obtained:
You could also check out the OpenCV CLAHE algorithm. Instead of equalizing the histogram globally, it splits up the image into tiles and equalizes those locally, then stitches them together. This can give a much better result.
With your image in OpenCV 3.0.0:
import cv2
inp = cv2.imread('inp.jpg',0)
clahe = cv2.createCLAHE(clipLimit=4.0, tileGridSize=(8,8))
res = clahe.apply(inp)
cv2.imwrite('res.jpg', res)
This gives something pretty nice.
Read more about it here, though the page is not super helpful:
http://docs.opencv.org/3.1.0/d5/daf/tutorial_py_histogram_equalization.html#gsc.tab=0
Although this post is a bit aged:
What about using "cvAddWeighted()"?
What it does is:
dst = src1*alpha + src2*beta + gamma
What I understand from applying brightness and contrast is that one wants to do:
dst = src*contrast + brightness;
so if
src1 = input image
src2 = any image of the same type as src1
alpha = contrast value
beta = 0.0
gamma = brightness value
dst = resulting image (must be of the same type as src1)
one should be pretty much done with the task, no?
This approach works for me using CvMat* images.
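For reference, a minimal OpenCV-Python sketch of the same dst = src*contrast + brightness idea (the contrast and brightness values below are arbitrary, just for illustration):

import cv2
import numpy as np

src = cv2.imread('inp.jpg', 0)   # grayscale input
contrast = 1.5                   # alpha
brightness = 20.0                # gamma

# dst = src*contrast + zeros*0.0 + brightness, saturated to uint8 by OpenCV
dst = cv2.addWeighted(src, contrast, np.zeros_like(src), 0.0, brightness)
cv2.imwrite('out.jpg', dst)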

R - Loop in matrix

I have two variables: the first is a 1D flow vector containing 230 data points, and the second is a 2D temperature matrix (230 x 44219).
I am trying to compute the correlation between each flow value and the corresponding 44219 temperatures. This is my code below.
Houlgrave_flow_1981_2000 = window(Houlgrave_flow_average, start = as.Date("1981-11-15"),end = as.Date("2000-12-15"))
> str(Houlgrave_flow_1981_2000)
‘zoo’ series from 1981-11-15 to 2000-12-15
Data: num [1:230] 0.085689 0.021437 0.000705 0 0.006969 ...
Index: Date[1:230], format: "1981-11-15" "1981-12-15" "1982-01-15" "1982-02-15" ...
Hulgrave_SST_1981_2000=X_sst[1:230,]
> str(Hulgrave_SST_1981_2000)
num [1:230, 1:44219] -0.0733 0.432 0.2783 -0.1989 0.1028 ...
sf_Houlgrave_SF_SST = NULL
sst_Houlgrave_SF_SST = NULL
cor_Houlgrave_SF_SST = NULL
for (i in 1:230) {
    for(j in 1:44219){
        sf_Houlgrave_SF_SST[i] = Houlgrave_flow_1981_2000[i]
        sst_Houlgrave_SF_SST[i,j] = Hulgrave_SST_1981_2000[i,j]
        cor_Houlgrave_SF_SST[i,j] = cor(sf_Houlgrave_SF_SST[i],Hulgrave_SST_1981_2000[i,j])
    }
}
The error message always says:
Error in sst_Houlgrave_SF_SST[i, j] = Hulgrave_SST_1981_2000[i, j] :
incorrect number of subscripts on matrix
Thank you for your help.
Try this:
# prepare an empty matrix of the correct size
cor_Houlgrave_SF_SST <- matrix(nrow=dim(Hulgrave_SST_1981_2000)[1],
                               ncol=dim(Hulgrave_SST_1981_2000)[2])
# good practice: do not hard-code "230" or "44219"; use the dimensions instead
for (i in 1:dim(Hulgrave_SST_1981_2000)[1]) {
    for(j in 1:dim(Hulgrave_SST_1981_2000)[2]){
        cor_Houlgrave_SF_SST[i,j] <- cor(sf_Houlgrave_SF_SST[i], Hulgrave_SST_1981_2000[i,j])
    }
}
The two redefinitions inside your loop were superfluous, I believe. The main problem with your code was not defining the matrix, i.e. the cor variable did not have two dimensions, hence the error.
It is apparently also good practice to define empty result matrices for for-loops by explicitly giving them the correct dimensions in advance; this is meant to make the code more efficient.
