Why does the pixel value stay the same? - arrays

So, I'm trying to flood fill different regions of the image with different shades of grey.
This is my input image:
[input image]
So here is my code for flood filling one of the regions with some grey shade:
import cv2 as cv

image = cv.imread('img.jpg', 0)
height, width = image.shape[:2]
for i in range(height):
    for j in range(width):
        if image[i][j] == 255:
            cv.floodFill(image, None, (i, j), 90)
            cv.imwrite('test1.jpg', image)
            break
    else:
        continue
    break
After this I get:
[result image]
And if I load the new image again and go through the pixels, the same pixel that was used to start the flood fill in the previous example still has the value 255 instead of 90.
How is that? What am I missing here?
Thanks, guys!

floodFill() and other OpenCV functions that take a point expect it in (x, y) coordinates. You're passing in (row, col) coordinates instead (i.e., (y, x)), so you're flood-filling a different area than the one you actually detected as white. Pass (j, i) as the seed point. (Note also that JPEG is lossy: if you save and re-read a .jpg, pixel values can shift slightly, so PNG is safer when you need exact values.)
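To see the mix-up without OpenCV, here is a minimal pure-Python flood fill (my own sketch, not OpenCV's implementation) whose seed is taken as (x, y) = (col, row), the same convention cv.floodFill uses:

```python
from collections import deque

def flood_fill(img, seed_xy, new_val):
    """Fill the 4-connected region around seed_xy, given as (x, y) = (col, row)."""
    x, y = seed_xy
    h, w = len(img), len(img[0])
    old = img[y][x]          # row index is y, column index is x
    if old == new_val:
        return img
    q = deque([(x, y)])
    while q:
        cx, cy = q.popleft()
        if 0 <= cx < w and 0 <= cy < h and img[cy][cx] == old:
            img[cy][cx] = new_val
            q.extend([(cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)])
    return img

# 3x3 image with a single white pixel at row 0, column 2
img = [[0, 0, 255],
       [0, 0, 0],
       [0, 0, 0]]
i, j = 0, 2                  # (row, col) where the loop found 255
flood_fill(img, (j, i), 90)  # seed must be (x, y) = (col, row), i.e. (j, i)
```

Calling flood_fill(img, (i, j), 90) with the (row, col) pair instead would seed at img[2][0], a black pixel, so the black background would be filled and the white pixel would keep its 255, which is exactly the symptom in the question.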


With SDL 2.0 in C, how do I make a rectangle fade in and out?

I'm making a small 2D game and I'm trying to make a rectangle fade from transparent (white) to its original color and then back to white again, and just continue to pulse from white to its color and back. For example:
white -> fade to red -> red -> fade to white -> repeat...
I've made my rectangle with:
SDL_Rect rect;
I then set the color and drew the rect to the screen (after setting its x and y position and size):
SDL_SetRenderDrawColor(renderer, 255, 51, 0, 255);
SDL_RenderFillRect(renderer, &rect);
But I don't know what the last parameter is (alpha). In the wiki, the documentation for the SetRenderDrawColor is:
int SDL_SetRenderDrawColor(SDL_Renderer* renderer,
                           Uint8 r,
                           Uint8 g,
                           Uint8 b,
                           Uint8 a)
Where 'a' is the alpha value (and r, g, b are red, green, and blue). Is this what allows me to change the opacity of the color? If I have a loop and each frame a variable changes between 0 and 255, and I make that the alpha value, would that allow me to change the transparency and make the rectangle's color "pulse"? How can I do this?
Do I have to make my rectangle a texture? Is there a way to turn this feature on for me to change the alpha value?
EDIT: I realised I wasn't using RenderClear in the loop! That fixed the problem.
You need to adjust the alpha value. Alpha represents transparency: the maximum value (255) means fully opaque (not transparent at all), and the minimum value of zero means completely transparent (not visible at all). If you set it to, say, 128, it would be 50% transparent. To fade in and out, adjust the alpha value each frame. Note that for the alpha to affect draw calls you also need to enable blending with SDL_SetRenderDrawBlendMode(renderer, SDL_BLENDMODE_BLEND); the default blend mode ignores alpha.
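The per-frame alpha can be computed as a simple triangle wave; here is the arithmetic sketched in Python (the function name pulse_alpha and the period value are my own choices). In the SDL render loop you would feed the result into SDL_SetRenderDrawColor each frame:

```python
def pulse_alpha(frame, period=120):
    """Triangle wave: alpha ramps 0 -> 255 over period/2 frames, then back down."""
    t = frame % period
    half = period // 2
    if t < half:
        return round(255 * t / half)      # rising edge: fade in
    return round(255 * (period - t) / half)  # falling edge: fade out

# per frame, conceptually:
#   SDL_SetRenderDrawColor(renderer, 255, 51, 0, pulse_alpha(frame))
a0, a_mid = pulse_alpha(0), pulse_alpha(60)
```

With a period of 120 frames (about two seconds at 60 FPS) the rectangle fades fully in and back out once per cycle.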

Diagonal line in PNM P6 not drawing correctly

I am currently writing a program for an assignment that requires a single line to be drawn perfectly solid and diagonal (so that x = y along the whole line) from the upper-left corner of a standard PNM P6 file. I have had no issues with file I/O.
However, I cannot get the diagonal line to display properly. Instead of the single, solid white line I need from the corner, I get dotted lines wrapping over the image, as shown in this picture.
Does anyone have any idea as to what is going wrong?
My function is as follows:
Image *
DiagonalWhite(Image *img)
{
    int i, j;
    for (i = 0; i < img->x; i++)
    {
        for (j = 0; j < img->y; j++)
        {
            if (i == j)
            {
                img->data[i*img->y+j].red = 255;
                img->data[i*img->y+j].green = 255;
                img->data[i*img->y+j].blue = 255;
            }
        }
    }
    return img;
}
You don't give any definition of Image *img, so this question cannot really be answered with confidence; however, I assume you are in the same class as yesterday's Issues writing PNM P6.
You are multiplying by the wrong dimension. img->y holds the height of the image, but in a row-major buffer the row stride is the width, so you should be multiplying by img->x -- the width -- to go down by i rows (followed by adding j pixels to go right).
img->data[i*img->x+j].red=255; /* x, not y */
Note: Better names for these properties would have been width and height.
Note: It's easier and quicker to loop only once, over the minimum of width and height, and set pixel (i, i) directly, rather than testing every (i, j) pair for equality.
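That single-loop version, sketched in Python over a row-major flat buffer (width, height and the pixels list stand in for img->x, img->y and img->data; the names are my own):

```python
def diagonal_white(pixels, width, height):
    """Whiten pixel (i, i) along the main diagonal of a row-major flat buffer."""
    for i in range(min(width, height)):
        pixels[i * width + i] = 255   # row i * stride (the width) + column i
    return pixels

pixels = [0] * (4 * 3)                # a 4-wide, 3-tall image, all black
diagonal_white(pixels, 4, 3)
```

One loop over min(width, height) touches each diagonal pixel exactly once, instead of scanning the whole image and comparing indices.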

Saving grayscale in CMYK using libjpeg in c

If this function does what I think it does, it seems that, on my machine at least, C=0, M=0, Y=0 and K=0 in CMYK does not correspond to white! What is the problem?
float *arr is a float array with size elements. I want to save this array as a JPEG with IJG's libjpeg in two color spaces upon demand: g: Gray scale and c: CMYK. I follow their example and make the input JSAMPLE *jsr array with the number of JSAMPLE elements depending on the color space: size elements for gray scale and 4*size elements for CMYK. JSAMPLE is just another name for unsigned char on my machine at least. The full program can be seen on Github. This is how I fill jsr:
void
floatfilljsarr(JSAMPLE *jsr, float *arr, size_t size, char color)
{
    size_t i;
    double m;
    float min, max;

    /* Find the minimum and maximum of the array: */
    fminmax(arr, size, &min, &max);
    m = (double)UCHAR_MAX / ((double)max - (double)min);

    if (color == 'g')
    {
        for (i = 0; i < size; i++)
            jsr[i] = (arr[i] - min) * m;
    }
    else
        for (i = 0; i < size; i++)
        {
            jsr[i*4+3] = (arr[i] - min) * m;
            jsr[i*4] = jsr[i*4+1] = jsr[i*4+2] = 0;
        }
}
I should note that color has been checked before this function to be either c or g.
I then write the JPEG image exactly as the example.c program in libjpeg's source.
Here is the output after printing both images in a TeX document. Grayscale is on the left and CMYK is on the right. Both images are made from the same ordered array, so the bottom left element (the first in the array as I have defined it and displayed it here) has JSAMPLE value 0 and the top right element has JSAMPLE value 255.
Why aren't the two images similar? Given the different nature of the two color spaces, I would expect the CMYK image to be reversed, with its bottom bright and its top black. The displayed JSAMPLE values (the only value in grayscale, and the K channel in CMYK) are identical, but this output is not what I expected! The CMYK image does get brighter toward the top, but only very faintly.
It seems that C=0, M=0, Y=0 and K=0 does not correspond to white, at least with this algorithm and on my machine. How can I get white when I only want to change the K channel and keep the rest zero?
Try inverting your K channel:
jsr[i*4+3] = UCHAR_MAX - (arr[i]-min)*m;
I think I found the answer myself. First I tried setting all four colors to the same value. It produced a reasonable result, but the output was not inverted as I expected: the pixel with the largest value in all four colors was white, not black!
It then suddenly occurred to me that somewhere in the process (either in IJG's libjpeg or in the JPEG standard in general, I have no idea which) the CMYK values are stored inverted, so that, for example, a Cyan value of 0 is actually interpreted as UCHAR_MAX on the display or printing device, and vice versa. If this is the case, it easily explains why the image in the question was so dark and why its grey shade matched the greyscale image (I had set the three other colors to zero, which was actually interpreted as maximum intensity!).
So I set the first three CMYK colors to the full range (=UCHAR_MAX):
jsr[i*4]=jsr[i*4+1]=jsr[i*4+2]=UCHAR_MAX; /* Was 0 before! */
Then, to my surprise, the image worked. The greyscale (left) shades of grey are darker, but at least everything can now be explained and the two are reasonably similar. I have checked separately: absolute black is identical in both, but the shades of grey in greyscale are darker for the same pixel value.
After checking them in print (below), the results differ even less, although the greyscale shades are still darker. (Image taken with my smartphone!)
Also, until I made this change, a normal image viewer (I am using Scientific Linux) showed the image as completely black; that is why I thought I couldn't view a CMYK image at all! After this correction I could see the CMYK image like an ordinary image. In fact, in Eye of GNOME (the default image viewer in GNOME), the two appear nearly identical.
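The final fill can be sanity-checked in a few lines of plain Python (jsr modeled as a flat list, UCHAR_MAX = 255; fill_cmyk is my own name): scale each sample into the K channel and set C, M, Y to full range, which the inverted convention interprets as zero ink.

```python
UCHAR_MAX = 255

def fill_cmyk(arr):
    """Pack samples into an interleaved C,M,Y,K buffer with inverted C, M, Y."""
    lo, hi = min(arr), max(arr)
    m = UCHAR_MAX / (hi - lo)
    jsr = [0] * (4 * len(arr))
    for i, v in enumerate(arr):
        jsr[i*4:i*4+3] = [UCHAR_MAX] * 3    # C, M, Y full range = zero ink (inverted)
        jsr[i*4+3] = round((v - lo) * m)    # K channel carries the grey value
    return jsr

jsr = fill_cmyk([0.0, 0.5, 1.0])
```

The smallest sample gets K=0 and the largest K=255, with C, M, Y pinned at UCHAR_MAX throughout, matching the fix above.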

How to get separate contours (and fill them) in OpenCV?

I'm trying to separate the contours of an image (in order to find uniform regions), so I applied cvCanny and then cvFindContours. Then I use the following code to draw one contour each time I press a key:
for ( ; contours2 != 0; contours2 = contours2->h_next) {
    cvSet(img6, cvScalar(0,0,0));
    CvScalar color = CV_RGB(rand()&255, rand()&255, rand()&255);
    cvDrawContours(img6, contours2, color, cvScalarAll(255), 100);
    //cvFillConvexPoly(img6, (CvPoint *)contours2, sizeof(contours2), color);
    area = cvContourArea(contours2);
    cvShowImage("3", img6);
    printf(" %d", area);
    cvWaitKey();
}
But in the first iteration it draws ALL the contours, in the second it draws ALL but one, the third draws all but two, and so on.
And if I use the cvFillConvexPoly function, it fills most of the screen (although as I wrote this I realized a convex polygon won't work for me; I need to fill just the inside of the contour).
So, how can I draw just one contour on each iteration of the loop, instead of all the remaining contours?
Thanks.
You need to change the last parameter you are passing to the function, which is currently 100, to either 0 or a negative value, depending on whether you want to draw the children.
According to the documentation (http://opencv.willowgarage.com/documentation/drawing_functions.html#drawcontours),
the function has the following signature:
void cvDrawContours(CvArr *img, CvSeq* contour, CvScalar external_color,
CvScalar hole_color, int max_level, int thickness=1, int lineType=8)
From the same docs, max_level has the following purpose (most applicable part is in bold):
max_level – Maximal level for drawn contours. If 0, only contour is drawn. If 1, the contour and all contours following it on the same level are drawn. If 2, all contours following and all contours one level below the contours are drawn, and so forth. If the value is negative, the function does not draw the contours following after contour but draws the child contours of contour up to the |max_level|-1 level.
Edit:
To fill the contour, use a negative value for the thickness parameter:
thickness – Thickness of lines the contours are drawn with. If it is
negative (For example, =CV_FILLED), the contour interiors are drawn.

HSV color space in OpenCV

I am trying to detect a yellow object through my system camera using OpenCV. I got some help from the tutorial Object Recognition in Open CV, but I am not clear about the following line of code and what it does. Please explain the line below, which I am using:
cvInRangeS(imgHSV, cvScalar(20, 100, 100), cvScalar(30, 255, 255), imgThreshed);
The other part of the program:
CvMoments *moments = (CvMoments*)malloc(sizeof(CvMoments));
cvMoments(imgYellowThresh, moments, 1);
// The actual moment values
double moment10 = cvGetSpatialMoment(moments, 1, 0);
double moment01 = cvGetSpatialMoment(moments, 0, 1);
double area = cvGetCentralMoment(moments, 0, 0);
What about reading the documentation?
inRange:
Checks if array elements lie between the elements of two other arrays.
And actually that article contains a clear explanation:
And the two cvScalars represent the lower and upper bound of values
that are yellowish in colour.
About the second snippet: from those calculations the author finds the center of the object and its area. Quote from the article:
You first allocate memory to the moments structure, and then you
calculate the various moments. And then using the moments structure,
you calculate the two first order moments (moment10 and moment01) and
the zeroth order moment (area).
Dividing moment10 by area gives the X coordinate of the yellow ball,
and similarly, dividing moment01 by area gives the Y coordinate.
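The moment arithmetic described in the quote can be checked with plain Python (a small binary grid stands in for the thresholded image; centroid is my own helper name):

```python
def centroid(binary):
    """Zeroth- and first-order moments of a binary image, then the centroid."""
    m00 = m10 = m01 = 0
    for y, row in enumerate(binary):
        for x, v in enumerate(row):
            m00 += v           # zeroth-order moment: the area
            m10 += x * v       # first-order moment in x
            m01 += y * v       # first-order moment in y
    return m10 / m00, m01 / m00   # (X, Y) of the blob's center

# a 2x2 white blob whose center sits at (1.5, 1.5)
img = [[0, 0, 0],
       [0, 1, 1],
       [0, 1, 1]]
cx, cy = centroid(img)
```

This is the same m10/m00 and m01/m00 division the article describes for locating the yellow ball.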
