Standard deviation of pixel values in a masked image - arrays

I have a DICOM image with a mask applied. It looks like a black background with a white circle in the middle (the circle is the area the mask leaves untouched; everything outside it is zeroed).
The code for which is:
import numpy as np
import dicom
import pylab
ds = dicom.read_file("C:\Users\uccadmin\Desktop\James_Phantom_CT_Dec_16th\James Phantom CT Dec 16th\Images\SEQ4Recon_3_34\IM-0268-0001.dcm")
lx, ly = ds.pixel_array.shape
X, Y = np.ogrid[0:lx, 0:ly]
mask = (X - lx/2)**2 + (Y - ly/2)**2 > lx*ly/8 # defining mask
ds.pixel_array[mask] = 0
print np.std(ds.pixel_array) # trying to get standard deviation
pylab.imshow(ds.pixel_array, cmap=pylab.cm.bone) # shows image with mask
I want to get the standard deviation of the pixel values INSIDE the white circle ONLY i.e. exclude the black space outside the circle (the mask).
I do not think the value I am getting with the above code is correct, as it is ~500, and the white circle is almost homogeneous.
Any ideas how to make sure that I get the standard deviation of the pixel values within the white circle ONLY in a Pythonic way?

I think the reason you are getting a big number is that your standard deviation includes all the zero values outside the circle.
Is it enough for you to simply ignore all zero values? (This is fine provided that no, or very few, pixels inside the circle have value 0.) If so,
np.std(ds.pixel_array[ds.pixel_array > 0])
should do the trick. If this isn't good enough, then you can reverse the condition in your mask to be
mask = (X - lx/2)**2 + (Y - ly/2)**2 < lx*ly/8 # defining mask, < instead of >
and do
np.std(ds.pixel_array[mask])

Related

What is the most efficient way to put multiple colours on a window, especially in frame-by-frame format?

I am making a game with C and X11. I've been trying for quite a while to find a way to put different coloured pixels on a window, frame by frame. I've seen fully developed games get thousands of frames per second. What is the most efficient way of doing this?
I have seen 2-coloured bitmaps with XImages, allocating 256 colours on a black-to-white fade, and using XPutPixel with XImages (though I wasn't able to figure out how to properly create an XImage that could later have pixels put on it).
I have made this for loop that creates a random image, but it is, obviously, pixel-by-pixel instead of frame-by-frame and takes 18 seconds to render one entire frame.
XColor pixel;
for (int x = 0; x < currentWindowWidth; x++) {
    for (int y = 0; y < currentWindowHeight; y++) {
        pixel.red = rand() % 256 * 256;   // scale an 8-bit value up to XColor's 16-bit range
        pixel.green = rand() % 256 * 256;
        pixel.blue = rand() % 256 * 256;
        XAllocColor(display, XDefaultColormap(display, screenNumber), &pixel); // This probably takes the most time,
        XSetForeground(display, graphics, pixel.pixel); // as does this.
        XDrawPoint(display, window, graphics, x, y);
    }
}
After three or so more weeks of testing things off and on, I finally figured out how to do it, and it was rather simple. As I said in the OP, XAllocColor and XSetForeground take quite a bit of time (relatively) to work. XDrawPoint was also slow, as it does more than just put a pixel at a point on an image.
First I tested how Xlib's colour format works (for the unsigned long represented as pixel.pixel, which was what I needed XAllocColor for): 100% red is 16711680 (0xFF0000), 100% green is 65280 (0x00FF00), and 100% blue is 255 (0x0000FF), which is obviously a pattern: the components are packed as 0xRRGGBB. I found the maximum to be 4286019447 (0xFF777777), roughly 50% of each colour, which is a solid grey.
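Given that layout, here is a minimal sketch (my assumption: a 24-bit TrueColor visual with red mask 0xFF0000, green 0x00FF00, blue 0x0000FF, which is what those values indicate) of packing the pixel value directly, with no XAllocColor round trip:
/* Pack 8-bit R, G, B into an unsigned long pixel value.
   Assumption: 24-bit TrueColor with the 0xRRGGBB layout observed above. */
unsigned long rgb_to_pixel(unsigned char r, unsigned char g, unsigned char b)
{
    return ((unsigned long)r << 16) | ((unsigned long)g << 8) | (unsigned long)b;
}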
Next, I made sure my XVisualInfo would be supported by my system with a test using XMatchVisualInfo([expected visual info values]). That ensures the depth and TrueColor visual class I want to use are supported.
Finally, I made an XImage copied from the root window's image to manipulate. I used XPutPixel on each pixel in the window, setting it to a random value between 0 and 4286019446, creating the random image. I then used XPutImage to paste the image to the window.
Here's the final code:
if (!XMatchVisualInfo(display, screenNumber, 24, TrueColor, &visualInfo)) {
    exit(0);
}
frameImage = XGetImage(display, rootWindow, 0, 0, screenWidth, screenHeight, AllPlanes, ZPixmap);
while (1) {
    for (unsigned short x = 0; x < currentWindowWidth; x += pixelSize) {
        for (unsigned short y = 0; y < currentWindowHeight; y += pixelSize) {
            XPutPixel(frameImage, x, y, rand() % 4286019447);
        }
    }
    XPutImage(display, window, graphics, frameImage, 0, 0, 0, 0, currentWindowWidth, currentWindowHeight);
}
This puts a random image on the screen, at a stable 140 frames per second on fullscreen. I don't necessarily know if this is the most efficient way, but it works way better than anything else I've tried. Let me know if there is any way to make it better.
Thousands of frames per second is not possible. The monitor refresh rate is about 100 Hz, i.e. 100 cycles per second, and that is roughly the maximum useful frame rate. This is still very fast; the human eye wouldn't pick up faster frame rates.
The monitor response time is about 5 ms, so any single point on the screen cannot be refreshed more than about 200 times per second.
8 bits is 1 byte, so an 8-bit image uses one byte per pixel, and each pixel ranges from 0 to 255. The pixel doesn't have red, green, blue components; instead, each pixel is an index into a color table, which holds 256 colors. There is a trick where you keep the pixels the same and change the color table; this makes the image fade in and out or do other weird things.
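A minimal sketch of that color-table trick (the palette struct and the upload call are hypothetical stand-ins; the real API depends on your windowing system):
#include <stdint.h>

typedef struct { uint8_t r, g, b; } palette_entry;

/* Fade an 8-bit paletted image toward black by scaling every color-table
   entry; the pixel data itself never changes. */
void fade_palette(palette_entry table[256], int percent)
{
    for (int i = 0; i < 256; i++) {
        table[i].r = (uint8_t)(table[i].r * percent / 100);
        table[i].g = (uint8_t)(table[i].g * percent / 100);
        table[i].b = (uint8_t)(table[i].b * percent / 100);
    }
    /* upload_palette(table); <- hypothetical: hand the new table to the display */
}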
In a 24-bit image, each pixel has red, green, blue components. Each color is 1 byte, so each pixel is 3 bytes, or 24 bits.
uint8_t red = rand() % 256;
uint8_t grn = rand() % 256;
uint8_t blu = rand() % 256;
A 16-bit image uses an odd format to store red, green, blue. Since 16 is not divisible by 3, often two colors are assigned 5 bits each and the third color gets 6 bits; then you have to fit these into one uint16_t-sized pixel. It's probably not worth exploring.
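If you do need it, here is a sketch of the common RGB565 packing (an assumption about the layout; green usually gets the extra bit because the eye is most sensitive to it):
#include <stdint.h>

/* Pack 8-bit R, G, B into 16 bits: 5 bits red, 6 bits green, 5 bits blue. */
uint16_t pack_rgb565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}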
The slowness of your routine is because you are painting one pixel at a time. You should paint to a buffer instead and render the buffer once per frame. You might consider using other frameworks like SDL. Other games may use things like OpenGL, which takes advantage of GPU optimization for matrix operations, etc.
You must use a GPU. GPUs have a highly parallel architecture optimized for graphics (hence the name). To access the GPU you would use an API like OpenGL or Vulkan, or make use of a game engine.

2D Deconvolution using FFT in Matlab Problems

I have convolved an image I created in MATLAB with a 2D Gaussian function which I have also defined in MATLAB, and now I am trying to deconvolve the resultant matrix to see if I get the 2D Gaussian function back, using the fft2 and ifft2 commands. However, the matrix I get as a result is incorrect (to my knowledge). Here is the code for what I have done thus far:
% Code for input image (img) [300x300 array]
N = 100;
t = linspace(0,2*pi,50);
r = (N-10)/2;
circle = poly2mask(r*cos(t)+N/2+0.5, r*sin(t)+N/2+0.5,N,N);
img = repmat(circle,3,3);
% Code for 2D Gaussian Function with c = 0 sig = 1/64 (Z) [300x300 array]
x = linspace(-3,3,300);
y = x';
[X Y] = meshgrid(x,y);
Z = exp(-((X.^2)+(Y.^2))/(2*1/64));
% Code for 2D Convolution of img with Z (C) [599x599 array]
C = conv2(img,Z);
% I have tested that this convolution is correct using cross-section profile vectors for img and C, and the resulting x-y plots are what I expect from the convolution.
% From my knowledge of convolution, it acts as a multiplier in Fourier space; therefore, by dividing the Fourier transform of my output (the convolved image) by that of my input (img), I should get back the point spread function (Z, the 2D Gaussian) after applying the inverse Fourier transform to the result of this division.
% Code for attempted 2D deconvolution
Fimg = fft2(img,599,599);
% zero padding added to increase result to 599x599 array
FC = fft2(C);
R = FC/Fimg;
% I now get this error prompt: Warning: Matrix is close to singular or badly scaled. Results may be inaccurate. RCOND = 2.551432e-22
iFR = ifft2(R);
I'm expecting iFR to be close to Z, but I'm getting something completely different. It may be an approximation of Z with complex values, but I can't seem to check, since I don't know how to plot a 3D complex matrix in MATLAB. So can anyone tell me whether my answer is correct or incorrect, and how to get this deconvolution to work? I'd much appreciate it.
R = FC/Fimg needs to be R = FC./Fimg; you need to do the division element-wise.
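That is just the convolution theorem at work: convolution in the spatial domain is element-wise multiplication in the frequency domain, so

F{C}(u,v) = F{img}(u,v) * F{Z}(u,v)
F{Z}(u,v) = F{C}(u,v) / F{img}(u,v)

bin by bin at each frequency (u,v). MATLAB's plain / is matrix right-division (it solves a linear system), which is where your "Matrix is close to singular" warning came from. Also note that element-wise deconvolution blows up wherever F{img}(u,v) is close to zero; a common remedy is to add a small regularization constant to the denominator.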
Here are some Octave (version 3.6.2) plots of that deconvolved Gaussian.
% deconvolve in frequency domain
Fimg = fft2(img,599,599);
FC = fft2(C);
R = FC ./ Fimg;
r = ifft2(R);
% show deconvolved Gaussian
figure(1);
subplot(2,3,1), imshow(img), title('image');
subplot(2,3,2), imshow(Z), title('Gaussian');
subplot(2,3,3), imshow(C), title('image blurred by Gaussian');
subplot(2,3,4), mesh(X,Y,Z), title('initial Gaussian');
subplot(2,3,5), imagesc(real(r(1:300,1:300))), colormap gray, title('deconvolved Gaussian');
subplot(2,3,6), mesh(X,Y,real(r(1:300,1:300))), title('deconvolved Gaussian');
% show difference between Gaussian and deconvolved Gaussian
figure(2);
gdiff = Z - real(r(1:300,1:300));
imagesc(gdiff), colorbar, colormap gray, title('difference between initial Gaussian and deconvolved Gaussian');

What is the OpenCV FindChessboardCorners convention?

I'm using OpenCV 2.2.
If I use cvFindChessboardCorners to find the corners of a chessboard, in what order are the corners stored in the variable corners (e.g. top-left corner first, then row then column)?
Documentation (which didn't help much).
It seems to me the docs are lacking this kind of detail.
For 3.2.0-dev, it seems to depend on the angle of rotation of the chessboard.
With this snippet:
cv::Size patternsize(4,3); //number of centers
cv::Mat cal = cv::imread(cal_name);
std::vector<cv::Point2f> centers; //this will be filled by the detected centers
bool found = cv::findChessboardCorners( cal, patternsize, centers, cv::CALIB_CB_ADAPTIVE_THRESH );
std::cout << found << "\n";
if(found){
    cv::drawChessboardCorners(cal,patternsize,centers,found);
}
You will get these results:
First image:
First image rotated by 180 degrees:
Note the colored corners connected with lines drawn by drawChessboardCorners: the two results differ only by color. In the original image the red line is at the bottom; in the rotated image the red line is at the top.
If you pass a grayscale image to drawChessboardCorners, you will lose this piece of information.
If I need the first corner at the top left of the image and if I can assume that:
the angle of the chessboard in the scene will only be close to 0 or close to 180 degrees;
the tilt of the camera will be negligible;
then the following snippet will reorder the corners if needed:
cv::Size patternsize(4,3); //number of centers
cv::Mat cal = cv::imread(cal_name);
std::vector<cv::Point2f> centers; //this will be filled by the detected centers
bool found = cv::findChessboardCorners( cal, patternsize, centers, cv::CALIB_CB_ADAPTIVE_THRESH );
std::cout << found << "\n";
if(found){
    cv::drawChessboardCorners(cal,patternsize,centers,found);
    // I need the first corner at top-left
    if(centers.front().y > centers.back().y){
        std::cout << "Reverse order\n";
        std::reverse(centers.begin(),centers.end());
    }
    for(size_t r=0;r<patternsize.height;r++){
        for(size_t c=0;c<patternsize.width;c++){
            std::ostringstream oss;
            oss << "("<<r<<","<<c<<")";
            cv::putText(cal, oss.str(), centers[r*patternsize.width+c], cv::FONT_HERSHEY_PLAIN, 3, CV_RGB(0, 255, 0), 3);
        }
    }
}
I am pretty sure it orders the chessboard corners by rows, starting with the one closest to the top-left corner of the image.
The chessboard pattern has no specified origin (one of the deficiencies of that calibration device), so if you turn it 90 or 180 degrees the corners you get back won't be in the same order.
The way to make sure is to look at the actual point values you get back and see if they are what you expected.
Edit: at least in the case of OpenCV 3.3.1 and a 5x3 chessboard, the corners can be returned either starting top-left or bottom-right, depending on the rotation.

OpenGL - Mapping between x and y in glVertex2f(x, y) to screen integer coordinates

I would like to know how the vertices of glVertex2f(x, y) map to actual screen integer co-ordinates.
I intend to use a complex plane with bounds minR, minI and maxR, maxI (R and I being the real and imaginary parts), such that the plane gets mapped to 512 x 512 pixels on the screen. I have 512 sample points between the min and max values.
The mapping of the vertices is unclear to me, since I had to scale my planar image using glScalef(100, 100, 0) to get it to roughly fit the screen. Even then, a large portion of the window is left blank.
Please note that I am using the glBegin(GL_POINTS) routine to map the points in the plane to the screen.
The code looks like this:
for (X = 0; X < 512; X++)
    for (Y = 0; Y < 512; Y++)
        glVertex2f(Complexplane[X][Y].real, Complexplane[X][Y].imag);
P.S.:
Complexplane[0][0].real = -2, Complexplane[0][0].imag = -1.2
Complexplane[511][511].real = 1.0, Complexplane[511][511].imag = 1.8
I'm assuming you haven't set the projection or modelview matrices; they are set to the identity matrix by default.
For X,Y coordinates, a point will be visible if: -1 <= X <= 1, -1 <= Y <= 1
The glViewport function describes how this range is mapped to the window. It is initially set to (0, 0, window_width, window_height) when the GL context is created. The fact that your glScalef(100, 100, 0) result only takes up a portion of the window suggests that you are applying another transform elsewhere.
The mapping depends on the transformation matrices currently set. Up to OpenGL-2 the pipeline is
v_eye = ModelviewMatrix * v
v_projected = ProjectionMatrix * v_eye
v_clipped = clip(v_projected)
v_NDC.xyzw = v_clipped.xyzw / v_clipped.w
The default matrices are identity, so the only operation applied in the default state is the clipping. v_NDC then undergoes the viewport transform:
p.xyz = (v_NDC.xyz + 1) * viewport.wh / 2 + viewport.xy
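Applied to the question: rather than fudging with glScalef, set an orthographic projection that maps the complex-plane bounds straight onto the viewport. A sketch, assuming the fixed-function pipeline and the minR/maxR/minI/maxI bounds from the question:
/* Map the complex plane [minR,maxR] x [minI,maxI] onto a 512x512 viewport. */
glViewport(0, 0, 512, 512);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(minR, maxR, minI, maxI, -1.0, 1.0);  /* x = real axis, y = imaginary axis */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
/* glVertex2f(re, im) now lands at pixel
   ((re - minR) / (maxR - minR) * 512,  (im - minI) / (maxI - minI) * 512). */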

Looking for a fast outlined line rendering algorithm

I'm looking for a fast algorithm to draw an outlined line. For this application, the outline only needs to be 1 pixel wide. It should be possible, whether by default or through an option, to make two lines connect together seamlessly, if they share a common point.
Excuse the ASCII art but this is probably the best way to demonstrate it.
Normal line:
  ##
    ##
      ##
        ##
          ##
            ##
"Outlined" line:
  **
 *##**
  **##**
    **##**
      **##**
        **##**
          **##*
            **
I'm working on a dsPIC33FJ128GP802, a small microcontroller/digital signal processor capable of 40 MIPS (million instructions per second). It is only capable of integer math (add, subtract and multiply; it can do division, but that takes ~19 cycles). It's processing an OSD layer at the same time, and only 3-4 MIPS of the processing time is available for calculations, so speed is critical. Each pixel has one of three states: black, white and transparent, and the video field is 192x128 pixels. This is for Super OSD, an open source project: http://code.google.com/p/super-osd/
The first solution I thought of was to draw 3x3 rectangles of outline pixels on the first pass and normal pixels on the second pass, but this could be slow, since for every line pixel at least 3 pixels are overwritten and the time spent drawing them is wasted. So I'm looking for a faster way. Each pixel costs around 30 cycles. The target is <50,000 cycles to draw a line 100 pixels long.
I suggest this (C/pseudocode mix):
void draw_outline(int x1, int y1, int x2, int y2)
{
    int x, y;
    double slope;
    if (abs(x2-x1) >= abs(y2-y1)) {
        // line closer to horizontal than vertical
        if (x2 < x1) swap_points(1, 2);
        // now x1 <= x2
        slope = 1.0*(y2-y1)/(x2-x1);
        draw_pixel(x1-1, y1, '*');
        for (x = x1; x <= x2; x++) {
            y = y1 + round(slope*(x-x1));
            draw_pixel(x, y-1, '*');
            draw_pixel(x, y+1, '*');
            // here draw_line() does draw_pixel(x, y, '#');
        }
        draw_pixel(x2+1, y2, '*');
    }
    else {
        // same as above, but swap x and y
    }
}
Edit: If you want successive lines to connect seamlessly, I think you really have to draw all the outlines in the first pass, and then the lines. I edited the code above to draw only the outlines. The draw_line() function would be exactly the same, but with one single draw_pixel(x, y, '#'); instead of the four draw_pixel(..., ..., '*');.
And then you just:
void draw_polyline(point p[], int n)
{
    int i;
    for (i = 0; i < n-1; i++)
        draw_outline(p[i].x, p[i].y, p[i+1].x, p[i+1].y);
    for (i = 0; i < n-1; i++)
        draw_line(p[i].x, p[i].y, p[i+1].x, p[i+1].y);
}
My approach would be to use Bresenham's algorithm to draw multiple lines. Looking at your ASCII art, you'll note that the outline lines are just the same as the Bresenham line, shifted 1 pixel up and down, plus a single pixel to the left of the first point and to the right of the last.
For a generic version, you'll need to determine whether your line is flat or steep -- i.e., whether abs(y1 - y0) <= abs(x1 - x0). For steep lines, the outlines are shifted by 1 pixel to the left and right, and the closing pixels are above the starting and below the ending point.
It could be worth optimizing this by drawing the line and two outline pixels in one go for each line pixel. However, if you need seamless outlines, the simplest solution would be to first draw all outlines, then the lines themselves -- which wouldn't work with the "three-pixel-Bresenham" optimization.
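A minimal sketch of that two-pass scheme for the flat case, reusing hypothetical draw_line()/draw_pixel() helpers like the ones in the answer above (here with an extra color argument):
/* First pass, per segment: the outline is the body line shifted one pixel up
   and one pixel down, plus a cap pixel before the start and after the end.
   Draw all outlines first, then all body lines, so joints stay seamless. */
void outline_flat(int x0, int y0, int x1, int y1)
{
    draw_line(x0, y0 - 1, x1, y1 - 1, '*');  /* shifted up   */
    draw_line(x0, y0 + 1, x1, y1 + 1, '*');  /* shifted down */
    draw_pixel(x0 - 1, y0, '*');             /* start cap    */
    draw_pixel(x1 + 1, y1, '*');             /* end cap      */
}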
