I have two PNG images: the first with width1 = 2247 and height1 = 190, and the second with width2 = 155 and height2 = 36. I want the second image (src) to be placed in the center of the first image (dest). I created pixbufs for both and used gdk_pixbuf_composite as follows:
gdk_pixbuf_composite(srcpixbuf, dstpixbuf,
                     1000, 100, width2, height2, /* destination rectangle */
                     0, 0,                       /* offsets               */
                     1, 1,                       /* scale factors         */
                     GDK_INTERP_BILINEAR, 255);
I get a hazy window of width2 x height2 on the first image.
If I replace width2 and height2 with 1.0, then I don't get the src image on the dst image at all. Where am I going wrong?
gdk_pixbuf_composite(srcpixbuf, dstpixbuf,
                     1000, 100, width2, height2, /* destination rectangle     */
                     1000, 100,                  /* offsets now match dest x/y */
                     1, 1,
                     GDK_INTERP_BILINEAR, 255);
This solved it. I had misunderstood the offset parameters. Basically, an intermediate scaled image is created, and only the part covered by the destination rectangle (dest x/y, width, height) is composited. So in my case the entire unscaled image needs to be moved to the destination position, which is what the offset parameters do.
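For reference, a minimal sketch of actually centering the small pixbuf on the big one (assuming width1/height1 hold the destination size from the question; the destination rectangle and the offsets have to move together):

/* center srcpixbuf (155x36) on dstpixbuf (2247x190) */
int dest_x = (width1 - width2) / 2;   /* (2247 - 155) / 2 = 1046 */
int dest_y = (height1 - height2) / 2; /* (190 - 36) / 2 = 77     */

gdk_pixbuf_composite(srcpixbuf, dstpixbuf,
                     dest_x, dest_y,  /* destination rectangle: where the copy lands   */
                     width2, height2,
                     dest_x, dest_y,  /* offsets: shift the source to that same place  */
                     1.0, 1.0,        /* no scaling                                    */
                     GDK_INTERP_BILINEAR, 255);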
I am trying to rotate a texture extracted from a video frame (provided by FFmpeg). I have tried the following code:
glTexSubImage2D(GL_TEXTURE_2D,
                0,                            /* level            */
                0, 0,                         /* xoffset, yoffset */
                textureWidth, textureHeight,
                GL_RGBA, GL_UNSIGNED_BYTE,
                //s_pixels);
                pFrameConverted->data[0]);
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glTranslatef(0.5,0.5,0.0);
glRotatef(90,0.0,0.0,1.0);
glTranslatef(-0.5,-0.5,0.0);
glMatrixMode(GL_MODELVIEW);
//glDrawTexiOES(-dPaddingX, -dPaddingY, 0, drawWidth + 2 * dPaddingX, drawHeight + 2 * dPaddingY);
glDrawTexiOES(0, 0, 0, drawWidth, drawHeight);
The image is not rotated. Do you see the problem?
From the GL_OES_draw_texture extension specification:
Note also that s, t, r, and q are computed for each fragment as part of DrawTex rendering. This implies that the texture matrix is ignored and has no effect on the rendered result.
You are trying to transform the texture coordinates using the fixed-function texture matrix, but as with point sprites, those coordinates are generated per-fragment rather than per-vertex. That means nothing you do to the texture matrix will ever affect the output of glDrawTexiOES (...).
Consider using a textured quad instead; that will pass through the traditional vertex processing pipeline.
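A minimal sketch of that approach (OpenGL ES 1.x; it assumes the texture is already bound and enabled, an orthographic projection mapping units to pixels, and that drawWidth/drawHeight are the desired on-screen size). Because these vertices go through the normal pipeline, the texture matrix set above now actually applies to the texture coordinates:

static const GLfloat texCoords[] = {
    0.0f, 0.0f,
    1.0f, 0.0f,
    0.0f, 1.0f,
    1.0f, 1.0f,
};
GLfloat vertices[] = {
    0.0f,               0.0f,
    (GLfloat)drawWidth, 0.0f,
    0.0f,               (GLfloat)drawHeight,
    (GLfloat)drawWidth, (GLfloat)drawHeight,
};

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, vertices);
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);  /* two triangles covering the quad */
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);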
The input image is image A. I want to copy the middle 1/3 of its data (image B) into an OpenCL buffer.
I use the clEnqueueWriteBuffer function. (I use a buffer, NOT an Image.)
clEnqueueWriteBuffer(queue,
                     cl_buffer_input, // opencl buffer
                     CL_TRUE,         // blocking write
                     0,               // NOW offset is 0
                     WIDTH_IMAGE*(HEIGHT_IMAGE/3)*COLOR_IMAGE_CHANNEL*sizeof(cl_uchar), // 1/3 image height
                     (void*)(image_input.data), // input data
                     0, NULL, NULL);
After that, what ends up in the buffer is image C's data.
So I want to use the offset to copy image B.
The code I used is:
clEnqueueWriteBuffer(queue,
                     cl_buffer_input, // opencl buffer
                     CL_TRUE,         // blocking write
                     (HEIGHT_IMAGE/3)*COLOR_IMAGE_CHANNEL*sizeof(cl_uchar), // NOW offset is the offset of data
                     WIDTH_IMAGE*(HEIGHT_IMAGE/3)*COLOR_IMAGE_CHANNEL*sizeof(cl_uchar), // 1/3 image height
                     (void*)(image_input.data), // input data
                     0, NULL, NULL);
But the result is never updated! Even if I change the offset to 1, the result stays the same. (New frame data from the video is never uploaded; the result shows only the first frame, positioned like image C.)
So I changed the start pointer of the image data and set the offset back to 0.
The new code is like this:
clEnqueueWriteBuffer(queue,
                     cl_buffer_input, // opencl buffer
                     CL_TRUE,         // blocking write
                     0,               // NOW offset is 0 again
                     WIDTH_IMAGE*(HEIGHT_IMAGE/3)*COLOR_IMAGE_CHANNEL*sizeof(cl_uchar), // 1/3 image height
                     (void*)(image_input.data + (HEIGHT_IMAGE/3)*COLOR_IMAGE_CHANNEL), // input data's pointer changed
                     0, NULL, NULL);
And... the new result is like image D!
It is only offset by a few lines.
So... what can I do to copy just the middle 1/3 of the image into an OpenCL buffer?
Thank you~
You are not getting anything weird; the results for the code you ran are correct.
However, if you want to copy part B, you need this piece of code:
clEnqueueWriteBuffer(queue,
                     cl_buffer_input, // opencl buffer
                     CL_TRUE,         // blocking?
                     0,               // no offset inside the buffer (the image will start at 0 inside cl_buffer_input)
                     WIDTH_IMAGE*(HEIGHT_IMAGE/3)*COLOR_IMAGE_CHANNEL*sizeof(cl_uchar), // copy only 1/3 of the image size
                     (void*)(image_input.data + WIDTH_IMAGE*(HEIGHT_IMAGE/3)*COLOR_IMAGE_CHANNEL), // offset the input data by 1/3 as well (the first byte to copy is 1/3 into the array)
                     0, NULL, NULL);
In detail: what you need is to copy 1/3 of the image, so the size is 1/3 of the image size. The buffer offset is 0, because you don't want to write the image at the end of the buffer, but at the beginning. And the host pointer the data is copied from has to be offset by 1/3 of the image, so that you copy the portion of the source from 1/3 to 2/3 into the buffer. (The buffer holds 1/3 of the original image size.)
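A sketch that spells out the byte arithmetic (same macros and variables as above, assuming an 8-bit interleaved image). Checking the return value also catches the CL_INVALID_VALUE error that an out-of-bounds offset + size produces, which would explain why the second snippet in the question never updated the buffer:

size_t row_bytes   = (size_t)WIDTH_IMAGE * COLOR_IMAGE_CHANNEL * sizeof(cl_uchar); // bytes per image row
size_t third_bytes = row_bytes * (HEIGHT_IMAGE / 3); // bytes in one third of the image

// copy rows HEIGHT_IMAGE/3 .. 2*HEIGHT_IMAGE/3 (part B) to the start of the buffer
cl_int err = clEnqueueWriteBuffer(queue, cl_buffer_input, CL_TRUE,
                                  0,                              // destination offset inside the cl buffer
                                  third_bytes,                    // number of bytes to copy
                                  image_input.data + third_bytes, // host pointer, advanced past the top third
                                  0, NULL, NULL);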
I would like to optimize my code.
It contains the following steps:
//img size 1024 X 720
cvSmooth(img, inputImg, CV_GAUSSIAN, 5, 0, 0);
FilterOperations::sobelSchaar(inputImg, sobImg);
cvResize(sobImg, sob2Img);
//sob2Img size 512 X 360
cvSmooth(sob2Img, sob2Img, CV_GAUSSIAN, 3, 3);
imgProceObj.normalise(sob2Img);
cvAdaptiveThreshold(sob2Img, sob2Img, 255,
CV_ADAPTIVE_THRESH_GAUSSIAN_C, CV_THRESH_BINARY, 31, 50);
How can I optimize the above code without changing the final result?
pyrDown() might do both required steps in one go.
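A sketch in the question's C API (assuming sob2Img is exactly half the size of sobImg, as the size comments suggest; cvPyrDown blurs with a fixed 5x5 Gaussian before halving, so the result is close to, but not necessarily bit-identical with, the cvResize + cvSmooth pair):

// replaces: cvResize(sobImg, sob2Img); cvSmooth(sob2Img, sob2Img, CV_GAUSSIAN, 3, 3);
cvPyrDown(sobImg, sob2Img, CV_GAUSSIAN_5x5); // smooth and downsample by 2 in one pass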
I think I've thoroughly searched the forums, unless I left out certain keywords in my search string, so forgive me if I've missed a post. I am currently using OpenCV 2.4.0 and I have what I think is just a simple problem:
I am trying to take in an unsigned character array (8 bit, 3 channel) that I get from another API and put that into an OpenCV matrix to then view it. However, all that displays is an image of the correct size but a completely uniform gray. This is the same color you see when you specify the incorrect Mat name to be displayed.
Have consulted:
Convert a string of bytes to cv::Mat (uses a string instead of an array) and
opencv create mat from camera data (what I thought was a BINGO!, but I can't seem to get it to display the image properly).
I took a step back and just tried making a sample array (to eliminate the other part that supplies this array):
#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main() {
    bool isCamera = true;
    unsigned char image_data[] = {255,0,0,255,0,0,255,0,0,255,0,0,255,0,0,255,0,0,0,255,0,0,255,0,0,255,0,0,255,0,0,255,0,0,255,0,0,0,255,0,0,255,0,0,255,0,0,255,0,0,255,0,0,255};
    cv::Mat image_as_mat(Size(6,3), CV_8UC3, image_data);
    namedWindow("DisplayVector2", CV_WINDOW_AUTOSIZE);
    imshow("DisplayVector2", image_as_mat);
    cout << image_as_mat << endl;
    getchar();
    return 0;
}
So I am just creating a 6x3 matrix, with the first row being red pixels, the second row green pixels, and the third row blue. However, this still results in the same blank gray image, at the correct size.
The output of the matrix is (note the semicolons, i.e. it is formatted correctly):
[255, 0, 0, 255, 0, 0, 255, 0, 0, 255, 0, 0, 255, 0, 0, 255, 0, 0; 0, 255, 0, 0, 255, 0, 0, 255, 0, 0, 255, 0, 0, 255, 0, 0, 255, 0; 0, 0, 255, 0, 0, 255, 0, 0, 255, 0, 0, 255, 0, 0, 255, 0, 0, 255]
I might be crazy or missing something obvious here. Do I need to initialize something in the Mat to allow it to display properly? As always, much appreciated for all your help, everyone!
all the voodoo here boils down to calling getchar() instead of (the required) waitKey()
let me explain: waitKey might be a misnomer here, but you actually need it, as the code inside wraps the window's message loop, which triggers the actual blitting (besides waiting for keypresses).
if you don't call it, your window will never get updated, and will just stay grey (that's what you observe here).
indeed, you should have trusted the result from cout: your Mat got properly constructed, it just did not show up in the namedWindow.
(btw, getchar() waits for a keypress from the console window, not your img-window)
hope it helps, happy hacking further on ;)
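concretely, the tail of your program becomes:

imshow("DisplayVector2", image_as_mat);
cout << image_as_mat << endl;
waitKey(0); // pumps the window's message loop until a key is pressed in the image window, so the Mat actually gets painted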
I am trying to find the distance transform for each pixel of a binary image, using the OpenCV library for C. According to the rule of the DT, the value of each zero (black) pixel should be 0, and that of each 255 (white) pixel should be the shortest distance to a zero (black) pixel after applying the distance transform.
I post the code here:
IplImage *im = cvLoadImage("black_white.jpg", CV_LOAD_IMAGE_GRAYSCALE);
IplImage *tmp = cvCreateImage(cvGetSize(im), 32, 1); // 32-bit float destination for the DT
uchar *d, *da;
int i, j;
FILE *f = fopen("dt_output.txt", "w"); // output file (name assumed; not shown in the original)

cvThreshold(im, im, 128, 255, CV_THRESH_BINARY_INV);
//cvSaveImage("out.jpg", im);
cvDistTransform(im, tmp, CV_DIST_L1, 3, 0, 0);

d = (uchar*)tmp->imageData;
da = (uchar*)im->imageData;
for (i = 0; i < tmp->height; i++)
    for (j = 0; j < tmp->width; j++)
    {
        //if((int)da[i*im->widthStep + j] == 255)
        fprintf(f, "pixel value = %d DT = %d\n", (int)da[i*im->widthStep + j], (int)d[i*tmp->widthStep + j]);
    }

cvShowImage("H", tmp);
cvWaitKey(0);
cvDestroyWindow("H");
fclose(f);
I write the pixel values along with their DT values to a file. As it turns out, some of the 0 pixels have DT values like 65, 128, etc., i.e. they are not 0. Moreover, I also have some white pixels whose DT value is 0 (which, I guess, shouldn't happen, as it should be at least 1).
Any kind of help will be appreciated.
Thanks in advance.
I guess it is because of CV_THRESH_BINARY_INV, which inverts your image. So the areas you expect to be white are in fact black for the DT.
Of course, inverting the image may be your intention. Display the image im and compare it with tmp for verification.
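One more thing to check: tmp was created as a 32-bit float image, and cvDistTransform writes float distances into it, so reading its data through a uchar pointer prints the raw bytes of each float (values like 0, 63, 128), which would also explain the odd numbers in the log. A minimal sketch of a corrected readout loop, using the same variables as the question's code:

for (i = 0; i < tmp->height; i++)
{
    float *drow = (float*)(tmp->imageData + i * tmp->widthStep); // DT row, read as float
    uchar *darow = (uchar*)(im->imageData + i * im->widthStep);  // thresholded image row
    for (j = 0; j < tmp->width; j++)
        fprintf(f, "pixel value = %d DT = %f\n", (int)darow[j], drow[j]);
}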