Read vertex color from screen space - c

For some reason, I need to know the color of each vertex of an object. The only way I can think of is to render each vertex to the screen and then call glReadPixels to fetch its color from screen space.
My program is implemented as follows:
Render the ith vertex:
glPointSize(8.0);
glDrawArrays(GL_POINTS, i, 1);
Compute the screen coordinates of this vertex:
// assume the object-space coordinates of the vertex are oPos, with oPos[3] = 1.0.
multiply oPos by the model-view-projection matrix, then divide the result by its w component to get the normalized device coordinates of this vertex, denoted as ndcPos:
ndcPos[0~2] /= ndcPos[3]
finally, multiply ndcPos with the viewport matrix to get the screen coordinates, denoted as screenPos. The viewport matrix is defined as:
GLfloat viewportMat[] = {
    screen_width/2,       0,                     0, 0,
    0,                    screen_height/2,       0, 0,
    0,                    0,                     1, 0,
    (screen_width-1)/2.0, (screen_height-1)/2.0, 0, 1};  // column-major
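In scalar form, this transform amounts to the following (assuming the viewport is glViewport(0, 0, screen_width, screen_height)):
screenPos[0] = ndcPos[0] * screen_width/2.0  + (screen_width-1)/2.0;
screenPos[1] = ndcPos[1] * screen_height/2.0 + (screen_height-1)/2.0;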
Finally, call glReadPixels as:
glReadPixels(int(screenPos[0]+0.5), int(screenPos[1]+0.5),
             1, 1, GL_RGB, GL_FLOAT, currentColor);
The resulting color is then stored in currentColor, a vector of length three.
My questions are:
Is there any better way to get the vertex colors than querying them from screen space?
Any thoughts on the correctness of my second and third steps?

OpenGL transform feedback allows a shader to write arbitrary values into a buffer object, which can then be read by the host program. You can use this to make your vertex shader pass the computed vertex colors back to the host program.
The nice thing about transform feedback is that you can use it while drawing to the screen, so you can draw your geometry and capture the vertex colors in a single pass. Or, if you prefer, you can draw the geometry with rasterization turned off to capture the feedback data without touching the screen.
Since the data produced by transform feedback is stored in a buffer object, you can use it as input for other drawing operations, too. Depending on what you plan to do with the computed vertex colors, you may be able to avoid transferring them back to the host program at all.
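A rough host-side sketch (names like prog, vColor, and vertexCount are placeholders; this assumes a GL 3.0+ context and a vertex shader that writes its computed color to an out variable called vColor):
// Tell the linker to capture the "vColor" varying; this requires re-linking.
const char* varyings[] = { "vColor" };
glTransformFeedbackVaryings(prog, 1, varyings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(prog);

// Buffer that will receive one vec4 color per vertex.
GLuint tbo;
glGenBuffers(1, &tbo);
glBindBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, tbo);
glBufferData(GL_TRANSFORM_FEEDBACK_BUFFER, vertexCount * 4 * sizeof(GLfloat),
             NULL, GL_STATIC_READ);
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, tbo);

glUseProgram(prog);
glEnable(GL_RASTERIZER_DISCARD);   // optional: capture without drawing to screen
glBeginTransformFeedback(GL_POINTS);
glDrawArrays(GL_POINTS, 0, vertexCount);
glEndTransformFeedback();
glDisable(GL_RASTERIZER_DISCARD);

// Read the captured colors back to the host.
std::vector<GLfloat> colors(vertexCount * 4);
glGetBufferSubData(GL_TRANSFORM_FEEDBACK_BUFFER, 0,
                   colors.size() * sizeof(GLfloat), colors.data());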

Related

How to modify specific attribute of a vertex in compute shader

I have a buffer containing 6 values per vertex: the first three are the position and the other three are the color. I want to modify those values in a compute shader, but only the positions, not the colors. I managed to do this using image load/store, but I had to treat all of the vertex data the same, not one attribute at a time. So basically I don't know how to read just one attribute in the compute shader, modify it, and write it back to the buffer.
This is the code that worked for one (and only one) attribute.
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(0);
glGenTextures(1, &position_tbo);
glBindTexture(GL_TEXTURE_BUFFER, position_tbo);
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, position_buffer);
GLSL:
layout (local_size_x = 128) in;
layout (rgba32f, binding = 0) uniform imageBuffer position_buffer;

uniform mat4 translation;  // the shader uses this, so it must be declared; assumed to be a mat4 uniform

void main(void)
{
    vec4 pos = imageLoad(position_buffer, int(gl_GlobalInvocationID.x));
    pos.xyz = (translation * vec4(pos.xyz, 1.0)).xyz;
    imageStore(position_buffer, int(gl_GlobalInvocationID.x), pos);
}
So how do I load only part of the vertex data into pos, not all of the attributes? Where do I specify which attribute goes into pos? And if I imageStore a specific attribute back to the buffer, can I be sure that only that part of the buffer (the attribute I want to modify) will be changed, and that the other attributes will remain the same?
Vertex Array State defined by functions like glVertexAttribPointer() is only relevant when drawing with the graphics pipeline. It defines the mapping from buffer elements to input vertex attributes. It does not have any effect in compute mode.
It is you yourself who defines the layout of your vertex buffer(s) and sets up the Vertex Array accordingly. Thus, you necessarily know where in your buffer to find which component of which attribute. If you didn't, you could never use the buffer to draw anything either.
I'm not sure why exactly you chose to use image load/store to access your vertex data in your compute shader. I would suggest simply binding it as a shader storage buffer. Assuming your buffers just contain a bunch of floats, probably the simplest approach is to interpret them as an array of floats in your compute shader:
layout(std430, binding = 0) buffer VertexBuffer
{
    float vertex_data[];
};
You can then just access the i-th float in your buffer as vertex_data[i] in your compute shader via a normal index, just like any other array…
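As a concrete sketch (this assumes the interleaved layout from the question — 6 floats per vertex, position first, then color — and a hypothetical mat4 uniform called translation), the compute shader could update only the positions like this:
layout(local_size_x = 128) in;

layout(std430, binding = 0) buffer VertexBuffer
{
    float vertex_data[];
};

uniform mat4 translation;

void main(void)
{
    // 6 floats per vertex: [0..2] position, [3..5] color.
    uint base = gl_GlobalInvocationID.x * 6u;
    vec4 pos = vec4(vertex_data[base + 0u],
                    vertex_data[base + 1u],
                    vertex_data[base + 2u], 1.0);
    pos = translation * pos;
    vertex_data[base + 0u] = pos.x;
    vertex_data[base + 1u] = pos.y;
    vertex_data[base + 2u] = pos.z;
    // Elements base + 3u .. base + 5u (the color) are never written.
}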
Apart from all that, I should point out that the glVertexAttribPointer() call in your code above sets up the buffer for only one vertex attribute consisting of 4 floats rather than two attributes of 3 floats each.

openGL 2D pixel rotation

I'm trying to rotate a 2D pixel matrix, but nothing actually happens. My source is a stored bitmap[w x h x 3]. Why isn't the displayed image being rotated?
Here's the display function:
void display()
{
    uint32_t i = 0, j = 0, k = 0;
    unsigned char pixels[WINDOW_WIDTH * WINDOW_HEIGHT * 3];
    memset(pixels, 0, sizeof(pixels));
    for(j = bitmap_h - 1; j > 0; j--) {
        for(i = 0; i < bitmap_w; i++) {
            pixels[k++] = bitmap[j][i].r;
            pixels[k++] = bitmap[j][i].g;
            pixels[k++] = bitmap[j][i].b;
        }
    }
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glRotatef(90, 0, 0, 1);
    glDrawPixels(g_img.descriptor.size_w, g_img.descriptor.size_h, GL_RGB, GL_UNSIGNED_BYTE, &pixels);
    glutSwapBuffers();
}
First and foremost, glDrawPixels should not be used. The problem you have is one of the reasons why. The convoluted rules by which glDrawPixels operates are too vast to outline here; let's just say that there's a so-called "raster position" in your window, at which glDrawPixels will place the lower-left corner of the image it draws. No transformation whatsoever is applied to the image itself.
However, transformations are applied when the raster position is set. And should the raster position, for whatever reason, lie outside the visible window, nothing gets drawn at all.
Solution: Don't use glDrawPixels. Don't use glDrawPixels. DON'T USE glDrawPixels. I repeat DON'T USE glDrawPixels. It's best you completely forget that this function actually exists in legacy OpenGL.
Use a textured quad instead. That will also transform properly.
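A minimal legacy-GL sketch (assuming a valid context, an RGB byte buffer pixels of size w*h*3, and illustrative quad coordinates):
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);  // rows of RGB bytes are tightly packed
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);

glEnable(GL_TEXTURE_2D);
glRotatef(90, 0, 0, 1);  // the quad's vertices are transformed, so this now works
glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
glEnd();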
I did something similar. I'm creating a 3D space shooter game using OpenGL/C++. For one of my levels, I have a bunch of asteroids/rocks in the background, each rotating and moving at a random speed.
I did this by taking the asteroid bitmap image and creating a texture from it. Then I applied the texture to a square (glBegin(GL_QUADS)). Each time I draw the square, I multiply each of the vertex coordinates (glVertex3f(x, y, z)) by a rotation matrix:
| cos θ  -sin θ |
| sin θ   cos θ |
θ is the rotation angle. I store this angle as part of my Asteroid class; each iteration I increment it by a value, depending on how fast I want the asteroid to spin. It works great.
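A sketch of that idea (corner, texcoord, theta, and z are illustrative placeholders):
// Rotate each corner of the quad by theta around the Z axis before submitting it.
float c = cosf(theta), s = sinf(theta);
glBegin(GL_QUADS);
for (int i = 0; i < 4; ++i) {
    float x = corner[i][0], y = corner[i][1];
    glTexCoord2f(texcoord[i][0], texcoord[i][1]);
    glVertex3f(c * x - s * y, s * x + c * y, z);  // [x', y'] = R(theta) * [x, y]
}
glEnd();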

Learning OpenCV: Use the pointer element to access cvPtr2D to point to the middle 'green' channel. Draw the rectangle between (20,5) and (40,20)

In the book learning opencv there's a question in chapter 3 :
Create a two-dimensional matrix with three channels of type byte with data size 100-by-100 and initialize all the values to 0.
Use the pointer element access function cvPtr2D to point to the middle 'green' channel. Draw the rectangle between (20,5) and (40,20).
I've managed to do the first part, but I can't get my head around second part. Here's what I've done so far :
/*
Create a two-dimensional matrix with three channels of type byte with data size 100-by-100 and initialize all the values to 0.
Use the pointer element access function cvPtr2D to point to the middle 'green' channel. Draw the rectangle between (20,5) and (40,20).
*/
void ex10_question3(){
    CvMat* m = cvCreateMat(100, 100, CV_8UC3);
    cvSetZero(m); // initialize to 0.
    uchar* ptr = cvPtr2D(m, 0, 1); // if RGB, then start from first RGB pair, Green.
    CvRect r = cvRect(20, 5, 20, 15);
    cvAdd(m, r); // doesn't work: a rect isn't an array.
    // cvPtr2D returns a pointer to a particular row element.
}
I was considering adding both the rect and the matrix, but obviously that won't work, because a rect is just coordinates plus a width and height. I'm unfamiliar with cvPtr2D(). How can I visualise what the exercise wants me to do, and can anyone give me a hint in the right direction? The solution must be in C.
From my understanding, with interleaved RGB channels the 2nd channel will always be the channel of interest (array indices 1, 4, 7, ...).
So that's the direction the wind blows from...
First of all, the problem is the C API. This API is still present for legacy reasons, but it will soon become obsolete. If you are serious about OpenCV, please use the C++ API. The official tutorials are a great source of information.
To address your question directly, this would be an implementation in C++:
cv::Mat canvas = cv::Mat::zeros(100, 100, CV_8UC3); // create matrix of bytes, filled with 0
std::vector<cv::Mat> channels(3);                   // prepare storage for splitting
split(canvas, channels);                            // split matrix into single channels
rectangle(channels[1], ...);                        // draw rectangle [I don't remember exact params]
merge(channels, canvas);                            // merge the channels together
If you only need to draw a green rectangle, it's actually much easier:
cv::Mat canvas = cv::Mat::zeros(100, 100, CV_8UC3); // create matrix of bytes, filled with 0
rectangle(canvas, ..., Scalar(0,255,0));            // draw green rectangle
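For what it's worth, the elided parameters are just the two corner points, the color, and the thickness; a sketch with the exercise's coordinates filled in (a negative thickness such as CV_FILLED fills the rectangle, and green is (0,255,0) in BGR order):
cv::Mat canvas = cv::Mat::zeros(100, 100, CV_8UC3);
cv::rectangle(canvas, cv::Point(20, 5), cv::Point(40, 20),
              cv::Scalar(0, 255, 0), CV_FILLED);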
Edit:
To find out how to access single pixels in image using C++ API please refer to this answer:
https://stackoverflow.com/a/8139210/892914
Try this code:
cout<<"Chapter 3. Task 3."<<'\n';
CvMat *Mat=cvCreateMat(100, 100, CV_8UC3);
cvZero(Mat);
for(int J=5; J<=20; J++)
for(int I=20; I<40; I++)
(*(cvPtr2D(Mat, J, I)+1))=(uchar)(255);
cvNamedWindow("Chapter 3. Task 3", CV_WINDOW_FREERATIO);
cvShowImage("Chapter 3. Task 3", Mat);
cvWaitKey(0);
cvReleaseMat(&Mat);
cvDestroyAllWindows;

How to get separate contours (and fill them) in OpenCV?

I'm trying to separate the contours of an image (in order to find uniform regions), so I applied cvCanny and then cvFindContours. Then I use the following code to draw one contour each time I press a key:
for( ; contours2 != 0; contours2 = contours2->h_next ){
    cvSet(img6, cvScalar(0,0,0));
    CvScalar color = CV_RGB( rand()&255, rand()&255, rand()&255 );
    cvDrawContours(img6, contours2, color, cvScalarAll(255), 100);
    //cvFillConvexPoly(img6, (CvPoint *)contours2, sizeof(contours2), color);
    area = cvContourArea(contours2);
    cvShowImage("3", img6);
    printf(" %f", area); // cvContourArea returns a double
    cvWaitKey();
}
But in the first iteration it draws ALL the contours, in the second it draws ALL but one, the third draws all but two, and so on.
And if I use the cvFillConvexPoly function, it fills most of the screen (although as I wrote this I realized a convex polygon won't work for me; I need to fill just the inside of the contour).
So, how can I take just 1 contour on each iteration of the for, instead of all the remaining contours?
Thanks.
You need to change the last parameter you are passing to the function, which is currently 100, to either 0 or a negative value, depending on whether you want to draw the children.
According to the documentation (http://opencv.willowgarage.com/documentation/drawing_functions.html#drawcontours),
the function has the following signature:
void cvDrawContours(CvArr *img, CvSeq* contour, CvScalar external_color,
                    CvScalar hole_color, int max_level, int thickness=1,
                    int lineType=8)
From the same docs, max_level has the following purpose (the most relevant parts are the descriptions of 0 and of negative values):
max_level – Maximal level for drawn contours. If 0, only contour is
drawn. If 1, the contour and all contours following it on the same
level are drawn. If 2, all contours following and all contours one
level below the contours are drawn, and so forth. If the value is
negative, the function does not draw the contours following after
contour but draws the child contours of contour up to the
|max_level|-1 level.
Edit:
To fill the contour, use a negative value for the thickness parameter:
thickness – Thickness of the lines the contours are drawn with. If it is
negative (for example, CV_FILLED), the contour interiors are drawn.
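Putting both together, a sketch of the corrected loop from the question (max_level = 0 draws only the current contour; CV_FILLED fills its interior):
for( ; contours2 != 0; contours2 = contours2->h_next ){
    cvSet(img6, cvScalar(0,0,0));
    CvScalar color = CV_RGB( rand()&255, rand()&255, rand()&255 );
    cvDrawContours(img6, contours2, color, cvScalarAll(255), 0, CV_FILLED);
    cvShowImage("3", img6);
    cvWaitKey();
}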

HSV color space in OpenCV

I am trying to detect a yellow object through my system camera using OpenCV. I got some help from the tutorial Object Recognition in OpenCV, but I am not clear about what this line of code does. Please explain the line below, which I am using:
cvInRangeS(imgHSV, cvScalar(20, 100, 100), cvScalar(30, 255, 255), imgThreshed);
The other part of the program:
CvMoments *moments = (CvMoments*)malloc(sizeof(CvMoments));
cvMoments(imgYellowThresh, moments, 1);
// The actual moment values
double moment10 = cvGetSpatialMoment(moments, 1, 0);
double moment01 = cvGetSpatialMoment(moments, 0, 1);
double area = cvGetCentralMoment(moments, 0, 0);
What about reading documentation?
inRange:
Checks if array elements lie between the elements of two other arrays.
And actually that article contains a clear explanation:
And the two cvScalars represent the lower and upper bound of values
that are yellowish in colour.
About the second snippet: from those calculations the author finds the center of the object and its area. Quote from the article:
You first allocate memory to the moments structure, and then you
calculate the various moments. And then using the moments structure,
you calculate the two first order moments (moment10 and moment01) and
the zeroth order moment (area).
Dividing moment10 by area gives the X coordinate of the yellow ball,
and similarly, dividing moment01 by area gives the Y coordinate.
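In code, that last step is just two divisions (using the variable names from the snippet above):
double posX = moment10 / area;  // X coordinate of the object's centroid
double posY = moment01 / area;  // Y coordinate of the object's centroid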
