If this function does what I think it does, it seems that, on my machine at least, C=0, M=0, Y=0 and K=0 does not correspond to white in CMYK! What is the problem?
float *arr is a float array with size elements. I want to save this array as a JPEG with IJG's libjpeg in one of two color spaces on demand: 'g' for grayscale and 'c' for CMYK. Following their example, I build the input JSAMPLE *jsr array, whose number of JSAMPLE elements depends on the color space: size elements for grayscale and 4*size elements for CMYK. JSAMPLE is just another name for unsigned char, on my machine at least. The full program can be seen on GitHub. This is how I fill jsr:
void
floatfilljsarr(JSAMPLE *jsr, float *arr, size_t size, char color)
{
    size_t i;
    double m;
    float min, max;

    /* Find the minimum and maximum of the array: */
    fminmax(arr, size, &min, &max);
    m = (double)UCHAR_MAX / ((double)max - (double)min);

    if(color=='g')
    {
        for(i=0;i<size;i++)
            jsr[i] = (arr[i]-min)*m;
    }
    else
        for(i=0;i<size;i++)
        {
            jsr[i*4+3] = (arr[i]-min)*m;
            jsr[i*4] = jsr[i*4+1] = jsr[i*4+2] = 0;
        }
}
I should note that color has been checked before this function and is either 'c' or 'g'.
I then write the JPEG image exactly as the example.c program in libjpeg's source.
Here is the output after printing both images in a TeX document. Grayscale is on the left and CMYK is on the right. Both images are made from the same ordered array, so the bottom left element (the first in the array as I have defined it and displayed it here) has JSAMPLE value 0 and the top right element has JSAMPLE value 255.
Why aren't the two images similar? Given the different nature of the two color spaces, I would expect the CMYK image to be reversed, with its bottom bright and its top black. The JSAMPLE values driving the display (the only value in grayscale and the K channel in CMYK) are identical, but this output is not what I expected! The CMYK image does get brighter toward the top, but only very faintly!
It seems that C=0, M=0, Y=0 and K=0 does not correspond to white, at least with this algorithm and on my machine! How can I get white when I only want to change the K channel and keep the rest zero?
Try inverting your K channel:
jsr[i*4+3] = UCHAR_MAX - (arr[i]-min)*m;
I think I found the answer myself. First I tried setting all four colors to the same value. It produced a reasonable result, but the output was not inverted as I expected: the pixel with the largest value in all four colors was white, not black!
It then suddenly occurred to me that somewhere in the process, either in IJG's libjpeg or in the JPEG standard in general (I have no idea which), CMYK colors are stored inverted, so that for example a cyan value of 0 is actually interpreted as UCHAR_MAX on the display or printing device, and vice versa. If this is the case, the fact that the image in the question was so dark, and that its grey shade was the same as the grayscale image, is easily explained (since I set the other three colors to zero, which was actually interpreted as maximum intensity!).
So I set the first three CMYK colors to the full range (=UCHAR_MAX):
jsr[i*4]=jsr[i*4+1]=jsr[i*4+2]=UCHAR_MAX /* Was 0 before! */;
Then, to my surprise, the image worked. The shades of grey in the grayscale image (left) are darker, but at least everything can generally be explained and the two are reasonably similar. I have checked separately: absolute black is identical in both, but the grayscale shades of grey are darker for the same pixel value.
When I checked them in print (below), the results differed even less, although the grayscale shades of grey are still darker! (Image taken with my smartphone.)
Also, until I made this change, the image would appear completely black in a normal image viewer (I am using Scientific Linux); that is why I thought I couldn't see a CMYK image at all! After this correction, I could open the CMYK image just like an ordinary image. In fact, using Eye of GNOME (the default image viewer in GNOME), the two appear nearly identical.
Related
I'm trying to play around with image manipulation in C and I want to be able to read and write pixels on an SDL Surface. (I'm loading a bmp to a surface to get the pixel data) I'm having some trouble figuring out how to properly use the following functions.
SDL_CreateRGBSurfaceFrom();
SDL_GetRGB();
SDL_MapRGB();
I have only found examples of these in C++ and I'm having a hard time implementing them in C because I don't fully understand how they work.
So my questions are:
How do you properly retrieve pixel data using SDL_GetRGB, and how is a pixel addressed by x, y coordinates?
What kind of array would I use to store the pixel data?
How do you use SDL_CreateRGBSurfaceFrom() to draw the new pixel data back to a surface?
Also, I want to access the pixels individually in a nested for loop over y and x, like so:
for(int y = 0; y < h; y++)
{
    for (int x = 0; x < w; x++)
    {
        // get/put the pixel data
    }
}
First have a look at SDL_Surface.
The parts you're interested in:
SDL_PixelFormat *format
int w, h
int pitch
void *pixels
What else you should know:
At position x, y (where x must be at least 0 and less than w, and y likewise for h) the surface contains a pixel.
The pixel is described by the format field, which tells us how the pixel is organized in memory.
The Remarks section of SDL_PixelFormat gives more information on the data type used.
The pitch field is the length of one row of pixels in bytes: basically the width of the surface multiplied by the size of a pixel (BytesPerPixel), possibly rounded up for alignment padding.
With the function SDL_GetRGB, one can easily convert a pixel of any format to an RGB(A) triple/quadruple.
SDL_MapRGB is the reverse of SDL_GetRGB: one specifies a pixel as an RGB(A) triple/quadruple, and it is mapped to the closest color representable in the format parameter.
The SDL wiki provides many examples for these functions; I think you will find the right ones to solve your problem.
So I'm working on a project that involves taking pre-existing skeleton code of an application that simulates "fluid flow and visualization" and applying different visualization techniques on it.
The first step of the project is to apply different color-mapping techniques on three different data-sets which are as follows: fluid density (rho), fluid velocity magnitude (||v||) and the force field magnitude ||f||.
The skeleton code provided already has an example that I can study to be able to determine how best to design and implement different color-maps such as red-to-white or blue-to-yellow or what have you.
The snippet of code I'm trying to understand is the following:
//rainbow: Implements a color palette, mapping the scalar 'value' to a rainbow color RGB
void rainbow(float value, float* R, float* G, float* B)
{
    const float dx = 0.8;
    if (value < 0) value = 0; if (value > 1) value = 1;
    value = (6 - 2*dx)*value + dx;
    *R = max(0.0, (3 - fabs(value-4) - fabs(value-5))/2);
    *G = max(0.0, (4 - fabs(value-2) - fabs(value-4))/2);
    *B = max(0.0, (3 - fabs(value-1) - fabs(value-2))/2);
}
The float value being passed as the first parameter is, as far as I can tell, the fluid density. I've determined this by studying these two snippets:
//set_colormap: Sets three different types of colormaps
void set_colormap(float vy)
{
    float R, G, B;
    if (scalar_col == COLOR_BLACKWHITE)
        R = G = B = vy;
    else if (scalar_col == COLOR_RAINBOW)
        rainbow(vy, &R, &G, &B);
    else if (scalar_col == COLOR_BANDS)
    {
        const int NLEVELS = 7;
        vy *= NLEVELS; vy = (int)(vy); vy /= NLEVELS;
        rainbow(vy, &R, &G, &B);
    }
    glColor3f(R, G, B);
}
and
set_colormap(rho[idx0]);
glVertex2f(px0, py0);
set_colormap(rho[idx1]);
glVertex2f(px1, py1);
set_colormap(rho[idx2]);
glVertex2f(px2, py2);
set_colormap(rho[idx0]);
glVertex2f(px0, py0);
set_colormap(rho[idx2]);
glVertex2f(px2, py2);
set_colormap(rho[idx3]);
glVertex2f(px3, py3);
With all of this said, could somebody please explain to me how the first method works?
Here's the output when the method is invoked by the user and matter is injected into the window by means of using the cursor:
Whereas otherwise it would look like this (gray-scale):
I suspect that this is a variation of HSV to RGB.
The idea is that you can map your fluid density (on a linear scale) to the Hue parameter of a color in HSV format. Saturation and Value can just maintain constant value of 1. Normally Hue starts and ends at red, so you also want to shift your Hue values into the [red, blue] range. This will give you a "heatmap" of colors in HSV format depending on the fluid density, which you then have to map to RGB in the shader.
Because some of your values can be kept constant and because you don't care about any of the intermediate results, the algorithm that transforms fluid density to RGB can be simplified to the snippet above.
I'm not sure which part of the function you don't understand, so let me explain this line by line:
void rainbow(float value,float* R,float* G,float* B){
}
This part is probably clear to you - the function takes in a single density/color value and outputs a rainbow color in rgb space.
const float dx=0.8;
Next, the constant dx is initialised. I'm not sure what the name "dx" stands for, but it looks like it is later used to determine which part of the color spectrum is used.
if (value<0) value=0; if (value>1) value=1;
This clamps the input to a value between 0 and 1.
value = (6-2*dx)*value+dx;
This maps the input to a value between dx and 6-dx.
*R = max(0.0,(3-fabs(value-4)-fabs(value-5))/2);
This is probably the most complicated part. If value is smaller than 4, this simplifies to max(0.0,(2*value-6)/2) or max(0.0,value-3). This means that if value is less than 3, the red output will be 0, and if it is between 3 and 4, it will be value-3.
If value is between 4 and 5, this line instead simplifies to max(0.0,(3-(value-4)-(5-value))/2) which is equal to 1. So if value is between 4 and 5, the red output will be 1.
Lastly, if value is greater than 5, this line simplifies to max(0.0,(12-2*value)/2) or just 6-value.
So the red output R is 1 when value is between 4 and 5, 0 when value is smaller than 3, and something in between otherwise. The calculations for the green and blue outputs are pretty much the same, just with shifted values: green is brightest for values between 2 and 4, and blue is brightest for values between 1 and 2. This way the output forms a smooth rainbow color spectrum.
So, I'm trying to flood-fill different regions of the image with different shades of grey.
This is my input image:
So here is my code for flood filling one of the regions with some grey shade:
image = cv.imread('img.jpg', 0)
height, width = image.shape[:2]

for i in range(height):
    for j in range(width):
        if image[i][j] == 255:
            cv.floodFill(image, None, (i, j), 90)
            cv.imwrite('test1.jpg', image)
            break
    else:
        continue
    break
After this I get:
And if I try to load the new image again, and go through the pixels, the same pixel that was used to start flood fill in the previous example, still has the 255 value instead of 90.
How is that? What am I missing here?
Thanks, guys!
floodFill() and other functions that take a point expect it in x, y coordinates. You're passing row, col coordinates instead (i.e., y, x), so you're floodFill()ing a different area than the one you actually detected as white.
I want to detect a yellow object through my system camera using OpenCV. I got some help from the tutorial Object Recognition in Open CV, but I am not clear about what this line of code does. Please explain the line below, which I am using:
cvInRangeS(imgHSV, cvScalar(20, 100, 100), cvScalar(30, 255, 255), imgThreshed);
The other part of the program:
CvMoments *moments = (CvMoments*)malloc(sizeof(CvMoments));
cvMoments(imgYellowThresh, moments, 1);
// The actual moment values
double moment10 = cvGetSpatialMoment(moments, 1, 0);
double moment01 = cvGetSpatialMoment(moments, 0, 1);
double area = cvGetCentralMoment(moments, 0, 0);
What about reading the documentation?
inRange:
Checks if array elements lie between the elements of two other arrays.
And actually that article contains clear explanation:
And the two cvScalars represent the lower and upper bound of values
that are yellowish in colour.
About the second piece of code: from those calculations the author finds the center of the object and its area. Quote from the article:
You first allocate memory to the moments structure, and then you
calculate the various moments. And then using the moments structure,
you calculate the two first order moments (moment10 and moment01) and
the zeroth order moment (area).
Dividing moment10 by area gives the X coordinate of the yellow ball,
and similarly, dividing moment01 by area gives the Y coordinate.
I have a binary image. Within a certain region of interest, I need to count the number of black pixels. There is always the way of looping through the pixels and counting them, but I'm looking for a more efficient method as I need to do it real-time.
I found a way to count the number of nonzero pixels (using cvCountNonZero()). Is there an equivalent function for counting zero pixels (there doesn't seem to be, as far as I've seen)? If not, what is the most efficient way to count the black pixels?
I believe the number of zero pixels can be computed as:
int TotalNumberOfPixels = width * height;
int ZeroPixels = TotalNumberOfPixels - cvCountNonZero(cv_image);