SDL pixel has wrong values - c

I am trying to create image processing software.
I get some weird results trying to create an Unsharp Mask effect.
I will attach my code here and explain what it does and where the problems are (or at least, where I think they are):
void unsharpMask(SDL_Surface* inputSurface, SDL_Surface* outputSurface)
{
    Uint32* pixels = (Uint32*)inputSurface->pixels;
    Uint32* outputPixels = (Uint32*)outputSurface->pixels;
    Uint32* blurredPixels = (Uint32*)blurredSurface->pixels;
    meanBlur(inputSurface, blurredSurface);
    for (int i = 0; i < inputSurface->h; i++)
    {
        for (int j = 0; j < inputSurface->w; j++)
        {
            Uint8 rOriginal, gOriginal, bOriginal;
            Uint8 rBlurred, gBlurred, bBlurred;
            Uint32 rMask, gMask, bMask;
            Uint32 rFinal, gFinal, bFinal;
            SDL_GetRGB(blurredPixels[i * blurredSurface->w + j], blurredSurface->format, &rBlurred, &gBlurred, &bBlurred);
            SDL_GetRGB(pixels[i * inputSurface->w + j], inputSurface->format, &rOriginal, &gOriginal, &bOriginal);
            rMask = rOriginal - rBlurred;
            rFinal = rOriginal + rMask;
            if (rFinal > 255) rFinal = 255;
            if (rFinal <= 0) rFinal = 0;
            gMask = gOriginal - gBlurred;
            gFinal = gOriginal + gMask;
            if (gFinal > 255) gFinal = 255;
            if (gFinal < 0) gFinal = 0;
            bMask = bOriginal - bBlurred;
            bFinal = bOriginal + bMask;
            if (bFinal > 255) bFinal = 255;
            if (bFinal < 0) bFinal = 0;
            Uint32 pixel = SDL_MapRGB(outputSurface->format, rFinal, gFinal, bFinal);
            outputPixels[i * outputSurface->w + j] = pixel;
        }
    }
}
So, as you can see, my function gets two arguments: the image source (from which pixel data is extracted) and a target (where the result is written). I blur the original image, then I subtract the RGB values of the blurred image from the source image to get "the mask", and then I add the mask to the original image. I added some clamping to make sure everything stays in the correct range, and then I write every resulting pixel to the output surface. All these surfaces have been converted to SDL_PIXELFORMAT_ARGB8888. The output surface is loaded into a texture (also SDL_PIXELFORMAT_ARGB8888) and rendered to the screen.
The results are pretty good in about 90% of the image and I get the effect I want; however, there are some pixels that look weird in some places.
Original:
Result:
I tried to fix this every way I knew. I thought it was a format problem and played with the pixel bit depth, but I couldn't get a good result. What I found is that all the values > 255 are negative values, and I tried to make them completely white. That works for the sky, for example, but as you can see in my examples, the dark values on the grass are also affected, which makes this not a good solution.
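To make the per-channel math I am aiming for explicit, here is a small sketch of it written with plain signed ints before clamping (not my actual code, just the arithmetic):
// Sketch: per-channel unsharp-mask value computed with a signed intermediate.
static Uint8 sharpenChannel(Uint8 original, Uint8 blurred)
{
    int mask   = (int)original - (int)blurred;   // may be negative
    int result = (int)original + mask;           // may fall outside 0..255
    if (result > 255) result = 255;
    if (result < 0)   result = 0;
    return (Uint8)result;
}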
I also get these kinds of wrong pixels when I add contrast or do a sharpen using kernel convolution, whenever the values are really bright or dark.
In my opinion there may be a problem with the pixel format, but I'm not sure if that's true.
Has anyone had this kind of problem before, or does anyone know a potential solution?

Related

How to compare two images with C

I'd like to implement an algorithm capable of comparing two images using the C API of OpenCV 3.0.0.
The images are captured by a webcam, so they have the same resolution (don't worry, it's small) and the same file format (JPEG), but we can change that if we want.
My idea is to take the two images, convert them to black and white, and compare them bit by bit with a simple for construct.
In black and white we can consider the image as a matrix full of 0s and 1s.
Example :
Image1 (black and white)
0 0 0 1
0 1 0 1
0 0 0 0
0 0 1 1
I know there are a lot of ways to do this with OpenCV, but I have read a lot of examples that are useless for me.
So my idea is to work with every pixel manually. Considering the image as a matrix, we can compare it pixel by pixel with a simple double for construct (pixel_image[r][c] == pixel_image2[r][c], with r over the rows and c over the columns), but only if the "pixel" can be treated as an integer. So the problem is: how can I access the value of a pixel from the IplImage type?
I realize that this way the code is probably badly optimized, and maybe it can't be adapted to a large amount of image data.
(Please don't answer with "use C++", "use Python", "C is deprecated", etc.; I know there are many easier ways to do this with higher-level code.)
Thanks to all.
OK, I solved it.
This is how you can access the value of a single pixel (if the image is loaded as RGB; remember that OpenCV stores it as BGR):
IplImage* img = cvCreateImage(cvSize(640, 480), IPL_DEPTH_32F, 3);
double tmpb, tmpg, tmpr;
for (int i = 0; i < img->height; i++) {
    for (int j = 0; j < img->width; j++) {
        tmpb = cvGet2D(img, i, j).val[0];   // blue  (OpenCV stores BGR)
        tmpg = cvGet2D(img, i, j).val[1];   // green
        tmpr = cvGet2D(img, i, j).val[2];   // red
    }
}
If the image is single-channel (for example, if we convert it to black and white), it is:
IplImage* img = cvCreateImage(cvSize(640, 480), IPL_DEPTH_32F, 1);
int tmp;
for (int i = 0; i < img->height; i++) {
    for (int j = 0; j < img->width; j++) {
        tmp = (int)cvGet2D(img, i, j).val[0];
        printf("Value: %d\n", tmp);
    }
}
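Building on that, a sketch of the pixel-by-pixel comparison itself could look like this (countDifferences, imgA, and imgB are placeholder names; both images are assumed to be single-channel and the same size):
// Sketch: count the pixels that differ between two single-channel images of equal size.
int countDifferences(IplImage* imgA, IplImage* imgB)
{
    int differences = 0;
    for (int i = 0; i < imgA->height; i++) {
        for (int j = 0; j < imgA->width; j++) {
            int a = (int)cvGet2D(imgA, i, j).val[0];
            int b = (int)cvGet2D(imgB, i, j).val[0];
            if (a != b)
                differences++;
        }
    }
    return differences;
}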

glPolygonOffset() does not work for object outline

I have recently been playing with glPolygonOffset(factor, units) and found something interesting.
I used GL_POLYGON_OFFSET_FILL and set factor and units to negative values so the filled object is pulled out towards the viewer. This pulled object is supposed to cover the wireframe, which is drawn right after it.
This works correctly for pixels inside the object. However, for those on the object outline, it seems the filled object is not pulled and there are still lines there.
Before pulling the filled object:
  
After pulling the filled object:
  
glEnable(GL_POLYGON_OFFSET_FILL);
float line_offset_slope = -1.f;
float line_offset_unit = 0.f;
// I also tried slope = 0.f and unit = -1.f, no changes
glPolygonOffset( line_offset_slope, line_offset_unit );
DrawGeo();
glDisable( GL_POLYGON_OFFSET_FILL );
glPolygonMode( GL_FRONT_AND_BACK, GL_LINE );
DrawGeo();
I read THIS POST about the meaning and usage of glPolygonOffset(), but I still don't understand why the pulling doesn't happen to the pixels on the border.
To do this properly, you definitely do not want a unit of 0.0f. You absolutely want the pass that is supposed to be drawn overtop the wireframe to have a depth value that is at least 1 unit closer than the wireframe no matter the slope of the primitive being drawn. There is a far simpler approach that I will discuss below though.
One other thing to note is that line primitives have different coverage rules during rasterization than polygons. Lines use a diamond pattern for coverage testing and triangles use a square. You will sometimes see software apply a sub-pixel offset like (0.375, 0.375) to everything drawn, this is done as a hack to make the coverage tests for triangle edges and lines consistent. However, the depth value generated by line primitives is also different from planar polygons, so lines and triangles do not often jive for multi-pass rendering.
glPolygonMode (...) does not change the actual primitive type (it only changes how polygons are filled), so that will not be an issue if this is your actual code. However, if you try doing this with GL_LINES in one pass and GL_TRIANGLES in another you might get different results if you do not consider pixel coverage.
As for doing this simpler, you should be able to use a depth test of GL_LEQUAL (the default is GL_LESS) and avoid a depth offset altogether assuming you draw the same sphere on both passes. You will want to swap the order you draw your wireframe and filled sphere, however -- the thing that should be on top needs to be drawn last.
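A rough sketch of that simpler approach, reusing the question's DrawGeo() call: no polygon offset at all, draw the wireframe first, then draw the filled pass last with a GL_LEQUAL depth test so it wins the tie at equal depth values.
glDepthFunc(GL_LEQUAL);                      // equal depths pass, so the later pass wins
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);   // wireframe pass first
DrawGeo();
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);   // filled pass last, drawn over the lines
DrawGeo();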

Stuck in a lab Task (image processing)

Hi everyone, I am new to programming and I am a first-year CS student at university.
I am writing a program that screens simple images looking for anomalies (indicated by excessive patterns of red). The program should load a file and then print out whether or not the image contains more than a certain percentage of intensely red pixels.
So far I have the following code:
#include <stdio.h>
#include "scc110img.h"
int main()
{
    unsigned char* imageArray = LoadImage("red.bmp");
    int imageSize = GetSizeOfImage();

    int image;
    for (image = 0; image < imageSize; image++)
        printf("%d\n", imageArray[image]);

    return 0;
}
My question is: how can I modify the program so that it prints out the amount of blue, green and red?
Something like:
blue value is 0, green value is 0, red value is 0.
You have a byte array (unsigned char) that represents the bytes of your image. Currently you are printing them out one byte at a time.
So to get the individual RGB values, you need to know how they were stored.
It's as easy as that, but don't expect someone here to do it for you.
This code is really incomplete. We don't know what your LoadImage() or GetSizeOfImage() functions do, but one thing is sure: the way you are representing the image in your C program is not the way it is actually laid out. A 'bmp' image has several parts, and you should find out the correct way to represent it as a struct. Then you can traverse it pixel by pixel.
I would suggest using a pre-written library such as 'libbmp' to make your task easier.
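If LoadImage() happens to return just the raw 24-bit pixel data with no header or row padding (an assumption; it depends entirely on what the scc110img library does), a sketch of totalling the channels could look like this, keeping in mind that BMP stores pixels in BGR order:
/* Assumes imageArray holds packed 24-bit BGR pixels with no header or padding. */
long blueTotal = 0, greenTotal = 0, redTotal = 0;
int byteIndex;
for (byteIndex = 0; byteIndex + 2 < imageSize; byteIndex += 3)
{
    blueTotal  += imageArray[byteIndex];
    greenTotal += imageArray[byteIndex + 1];
    redTotal   += imageArray[byteIndex + 2];
}
printf("blue value is %ld, green value is %ld, red value is %ld\n",
       blueTotal, greenTotal, redTotal);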

[EDITED] Implementing Difference of Gaussians

I'm not sure if I'm doing this the right way.
IplImage *dog_1 = cvCreateImage(cvGetSize(oriImg), oriImg->depth, oriImg->nChannels);
IplImage *dog_2 = cvCreateImage(cvGetSize(oriImg), oriImg->depth, oriImg->nChannels);
int kernel2 = 1;
int kernel1 = 5;
cvSmooth(oriImg, dog_2, CV_GAUSSIAN, kernel2, kernel2);
cvSmooth(oriImg, dog_1, CV_GAUSSIAN, kernel1, kernel1);
cvSub(dog_2, dog_1, dst, 0);
Am I doing it the right way? Is the above the correct way of doing DoG? I just tried it from the explanation on the wiki, but I could not get the desired image like the one on the wiki page http://en.wikipedia.org/wiki/Difference_of_Gaussians
[Edited]
I quote this from wiki page
"Difference of Gaussians is a grayscale image enhancement algorithm that involves the subtraction of one blurred version of an original grayscale image from another, less blurred version of the original. The blurred images are obtained by convolving the original grayscale image with Gaussian kernels having different standard deviations."
While reading a paper, I saw the DoG image being computed as follows:
Original image, I(x,y) -> Blurs -> I1(x,y)
I1(x,y) -> Blurs -> I2(x,y)
output = I2(x,y) - I1(x,y)
As you can see, this is slightly different from what I'm doing, where I get I1 and I2 by applying different kernels to the original image.
Which one is correct, or did I misinterpret the meaning in the wiki?
If the image you've attached is your sample output, it doesn't necessarily look wrong. The DoG operation is quite simple: blur with two Gaussians of different sizes and compute the difference image. That appears to be what your code is doing, so I'd say you have it right.
If your worries stem from looking at the Wikipedia article (where the image is predominantly white, rather than black), it is just the inversion of the image that you have. I would not worry about that.
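For completeness, the cascaded variant described in the paper (blurring the already-blurred image instead of going back to the original) would only change the input of one cvSmooth call in the question's snippet; roughly (the kernel sizes are placeholders, not recommendations):
cvSmooth(oriImg, dog_1, CV_GAUSSIAN, kernel1, kernel1);   // I1 = blur(I)
cvSmooth(dog_1, dog_2, CV_GAUSSIAN, kernel2, kernel2);    // I2 = blur(I1)
cvSub(dog_2, dog_1, dst, 0);                              // output = I2 - I1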

OpenCV: How to merge two static images into one and emboss text on it?

I have completed an image processing algorithm where I extract certain features from two similar images.
I'm using OpenCV 2.1 and I wish to showcase a comparison between these two similar images. I wish to combine both images into one, where the final image will have both images next to one another, like in the figure below.
Also, the black dots are the similarities my algorithm has found; now I want to mark them with digits, where point 1 on the right is the corresponding matching point on the left.
What OpenCV functions are useful for this work?
If you really want them in the same window, and assuming they have the same width and height (if they are similar, they should), you could create an image whose width is twice the width of your two similar images and then use ROIs to copy them into it.
You can write a new function that encapsulates these (useful) calls in one place in order to keep the code tidy.
Mat img1,img2; //They are previously declared and of the same width & height
Mat imgResult(img1.rows,2*img1.cols,img1.type()); // Your final image
Mat roiImgResult_Left = imgResult(Rect(0,0,img1.cols,img1.rows)); //Img1 will be on the left part
Mat roiImgResult_Right = imgResult(Rect(img1.cols,0,img2.cols,img2.rows)); //Img2 will be on the right part, we shift the roi of img1.cols on the right
Mat roiImg1 = img1(Rect(0,0,img1.cols,img1.rows));
Mat roiImg2 = img2(Rect(0,0,img2.cols,img2.rows));
roiImg1.copyTo(roiImgResult_Left); //Img1 will be on the left of imgResult
roiImg2.copyTo(roiImgResult_Right); //Img2 will be on the right of imgResult
Julien,
The easiest way I can think of right now would be to create two windows instead of one. You can do that with cvNamedWindow(), and then position them side by side with cvMoveWindow().
After that, if you know the positions of the similarities in the images, you can draw your text near them. Take a look at cvInitFont() and cvPutText().
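For the numbering itself, a rough sketch with those two functions could look like this (img, x, and y are placeholders: img would be the IplImage you draw on and (x, y) the coordinates of one matched point):
CvFont font;
cvInitFont(&font, CV_FONT_HERSHEY_SIMPLEX, 0.5, 0.5, 0, 1, CV_AA);
int x = 100, y = 100;   // placeholder coordinates of a matched point
cvPutText(img, "1", cvPoint(x + 5, y - 5), &font, cvScalar(0, 0, 255, 0));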
