How to modify a specific attribute of a vertex in a compute shader

I have a buffer containing six values per vertex: the first three are the position and the other three are the color of the vertex. I want to modify only the positions in the compute shader, not the colors. I achieved something close using Image Load/Store, but it operates on all the vertex data, not just one attribute. So basically I don't know how to read only one attribute in the compute shader, modify it, and write it back to the buffer.
This is the code that worked for one (and only) attribute.
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(0);
glGenTextures(1, &position_tbo);
glBindTexture(GL_TEXTURE_BUFFER, position_tbo);
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, position_buffer);
GLSL:
layout (local_size_x = 128) in;
layout (rgba32f, binding = 0) uniform imageBuffer position_buffer;
void main(void)
{
    vec4 pos = imageLoad(position_buffer, int(gl_GlobalInvocationID.x));
    pos.xyz = (translation * vec4(pos.xyz, 1.0)).xyz;
    imageStore(position_buffer, int(gl_GlobalInvocationID.x), pos);
}
So how do I load only part of the vertex data into pos rather than all attributes? Where do I specify which attribute goes into pos? And if I imageStore a specific attribute back to the buffer, can I be sure that only that part of the buffer (the attribute I want to modify) is changed and the other attributes remain the same?

Vertex Array State defined by functions like glVertexAttribPointer() is only relevant when drawing with the graphics pipeline. It defines the mapping from buffer elements to input vertex attributes. It does not have any effect in compute mode.
You yourself define the layout of your vertex buffer(s) and set up the Vertex Array accordingly. Thus, you necessarily know where in your buffer to find which component of which attribute. If you didn't, you couldn't ever use the buffer to draw anything either.
I'm not sure why exactly you chose image load/store to access your vertex data in your compute shader. I would suggest simply binding the buffer as a shader storage buffer. Assuming your buffers just contain a bunch of floats, the simplest approach is probably to interpret them as an array of floats in your compute shader:
layout(std430, binding = 0) buffer VertexBuffer
{
    float vertex_data[];
};
You can then access the i-th float in your buffer as vertex_data[i] in your compute shader, just like any other array. With the layout described in the question, vertex i's position starts at vertex_data[6 * i] and its color at vertex_data[6 * i + 3].
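For example, here is a minimal sketch of such a compute shader that transforms only the positions and leaves the colors untouched, assuming the tightly packed [x, y, z, r, g, b] per-vertex layout from the question and a translation uniform like the one in your original shader:

#version 430
layout (local_size_x = 128) in;
layout (std430, binding = 0) buffer VertexBuffer
{
    float vertex_data[];
};
uniform mat4 translation;

void main(void)
{
    uint base = gl_GlobalInvocationID.x * 6u; // 6 floats per vertex: 3 position + 3 color
    vec3 pos = vec3(vertex_data[base], vertex_data[base + 1u], vertex_data[base + 2u]);
    pos = (translation * vec4(pos, 1.0)).xyz;
    vertex_data[base]      = pos.x;
    vertex_data[base + 1u] = pos.y;
    vertex_data[base + 2u] = pos.z;
    // vertex_data[base + 3u .. base + 5u] (the color) is never written
}

On the host side, bind the vertex buffer to binding point 0 with glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, position_buffer) before calling glDispatchCompute(). Since only the position elements are written, the colors are guaranteed to stay unchanged.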
Apart from all that, I should point out that the glVertexAttribPointer() call in your code above sets up the buffer for only one vertex attribute consisting of 4 floats rather than two attributes of 3 floats each.
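For reference, a two-attribute setup matching the layout described in the question could look like this (a sketch; the stride and offsets assume tightly packed [x, y, z, r, g, b] floats):

GLsizei stride = 6 * sizeof(GLfloat);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void *)0);                     /* position */
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, stride, (void *)(3 * sizeof(GLfloat))); /* color */
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);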

Related

Convert 4 uint8 (byte) to float32 in GLSL

I have a vertex buffer object containing vertex data for a model. However, the layout is a bit weird. Each vertex uses 4 x uint_8 for the position and 4 x int_8 for the normal data. The texture coordinate data is appended at the end, with 4 x uint_8 representing one float value, which I can access with an offset. Using 8 bytes gives me 2 float values that I can use in a vec2 for texture coordinates.
The layout is basically [ [4 x uint_8 (vertex pos)] | [ 4 x int_8 (vertex_normal) ] | ... (alternating pos and norm) | [ 4x uint_8 ] (byte data for float value)].
In my hit shader I read the buffer as an array of int_8, and I am able to read the vertex data without problems. However, I can't seem to find a way to construct a float value out of the 4 bytes used to represent it.
I can of course change the structure of the data, but I have legacy code that relies on this structure, and changing it would break the rest of the program. I could also create a new vertex buffer, but since I already have the data and can read it without problems, that would only take up more space and be redundant in my opinion.
There is probably a way to declare the buffer up front so that its contents have the right format in the shader. I know that you can set a format for the vertex input in a pipeline, but since this is a ray-tracing pipeline I presumably cannot use that feature. But maybe I am wrong.
So the final question is: Is it possible to construct a float value out of 4 uint_8 values in a glsl shader, or should I consider changing the vertex buffer? Or is there maybe another way to define the data?
I have found a solution that works for me.
Basically I use two buffer layouts with the same set and binding: in the first, as before, the buffer is read as an array of int8 values (for the positions and normals); in the second, the same buffer is read as an array of vec2s (for the texture coordinates). Since the second layout reads the original byte data, it packs the bytes into a vec2 correctly.
So for example the byte data
[31, 133, 27, 63, 84, 224, 75, 63]
would give me a vec2 of
(0.6075, 0.7964)
which is what I wanted.
Of course this solution is not perfect, but for now it is enough. If you know any prettier solutions, feel free to share them!
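One such alternative: the float can be reassembled directly in GLSL with uintBitsToFloat. A minimal sketch, assuming little-endian byte order (which matches the sample bytes above) and that the four bytes are already available as uints in the 0-255 range:

// hypothetical helper: assemble an IEEE-754 float from four little-endian bytes
float floatFromBytes(uint b0, uint b1, uint b2, uint b3)
{
    uint bits = b0 | (b1 << 8) | (b2 << 16) | (b3 << 24);
    return uintBitsToFloat(bits);
}

With the first four sample bytes, floatFromBytes(31u, 133u, 27u, 63u) assembles the bit pattern 0x3F1B851F, which is roughly 0.6075, matching the expected vec2.x above.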

Passing data to a GLSL Vertex Shader

I'm trying to convert a program written in C using legacy OpenGL Fixed Pipeline commands.
I'm stuck trying to pass some data into a Vertex Shader. I'm trying to use the latest 4.5 commands, and I have managed to pass my array of vertex coordinates "vertices[]" to the Vertex Shader using
glCreateVertexArrays(1, &vertex_array_object);
glBindVertexArray(vertex_array_object);
glCreateBuffers(1, &vertex_buffer);
glNamedBufferStorage(vertex_buffer, sizeof(vertices), vertices, GL_DYNAMIC_STORAGE_BIT);
glVertexArrayVertexBuffer(vertex_array_object, 0, vertex_buffer, 0, sizeof(float) * 3);
glVertexArrayAttribFormat(vertex_array_object, 0, 3, GL_FLOAT, GL_FALSE, 0);
glVertexArrayAttribBinding(vertex_array_object, 0, 0);
glEnableVertexArrayAttrib(vertex_array_object, 0);
This all works fine and I can render the vertices as points.
As well as passing the vertices I also need to pass an additional block of 4 values for each vertex which I then want to pass from the Vertex Shader to a Geometry Shader. The values I need to pass are in an array of structures (1 per vertex) where the structure is defined as
typedef struct {     /* Vertex vector data structure */
    unsigned char v; /* Scaled magnitude value */
    char x;          /* X component of direction cosine */
    char y;          /* Y component of direction cosine */
    char z;          /* Z component of direction cosine */
} NVD;
I can't easily change this structure as it's used in lots of other places in the code.
Inside the vertex shader I need the 4 values as integers in the ranges:
v: 0 to 255
x, y, z: -127 to 127
"I can't easily change this structure as it's used in lots of other places in the code."
Well, you're going to have to, because you can't use structs as interface variables between shader stages. You can pass the data along as a single ivec4, with each component storing the value you want. Though really, you should just pass the floating-point values you compute in the shader; it's going to be 128 bits per vertex either way, so there is no point in taking the time to quantize the data.
If the size of this data is shown through profiling to be an actual problem (and using a GS at all is far more likely to be the performance problem), you can encode the data in a single uint for passing to the GS and unpack it on the other end.
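As an illustration of the ivec4 route, the vertex shader could take the four bytes as a single integer attribute and sign-extend the direction components itself. This is a sketch under a few assumptions: the NVD array is bound as attribute 1 via glVertexArrayAttribIFormat(vertex_array_object, 1, 4, GL_UNSIGNED_BYTE, 0) (the I variant keeps the values as integers instead of normalizing them to floats, but delivers all four bytes unsigned), and the x/y/z bytes are two's complement:

layout(location = 1) in uvec4 aNVD; // raw bytes: v, x, y, z
flat out ivec4 vNVD;                // forwarded to the geometry shader

// reinterpret unsigned bytes as signed two's-complement values
ivec3 signExtend(uvec3 b)
{
    return ivec3(b) - 256 * ivec3(greaterThan(b, uvec3(127u)));
}

void main()
{
    vNVD = ivec4(int(aNVD.x), signExtend(aNVD.yzw));
    // ... position output as before ...
}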

Read vertex color from screen space

For some reason, I need to know the color of the vertices of an object. The only way I can think of is to render the vertices to the screen and then call glReadPixels to fetch their colors from screen space.
My program is implemented as follows:
Render the ith vertex:
glPointSize(8.0);
glDrawArrays(GL_POINTS, i, 1);
Compute the screen coordinates of this vertex:
// assume oPos holds the object-space coordinates of the vertex, with oPos[3] = 1.0
multiply oPos by the model-view-projection matrix to get the clip-space position of this vertex, denoted as ndcPos;
ndcPos.xyz /= ndcPos.w  // the perspective divide yields normalized device coordinates
finally, multiply ndcPos with the viewport matrix to get the screen coordinates, denoted as screenPos. The viewport matrix is defined as:
GLfloat viewportMat[] = {
    screen_width / 2, 0, 0, 0,
    0, screen_height / 2, 0, 0,
    0, 0, 1, 0,
    (screen_width - 1) / 2.0, (screen_height - 1) / 2.0, 0, 1
};
Finally, call glReadPixels as:
glReadPixels(int(screenPos[0]+0.5), int(screenPos[1]+0.5),
1, 1, GL_RGB, GL_FLOAT, currentColor);
The resulting color will then be stored at currentColor, which is a vector of length three.
My questions are:
Are there any better ways to get the vertex colors than reading them back from screen space?
Any thoughts on the correctness of my second and third steps?
OpenGL transform feedback allows a shader to write arbitrary values into a buffer object, which can then be read by the host program. You can use this to make your vertex shader pass the computed vertex colors back to the host program.
The nice thing about transform feedback is that you can use it while drawing to the screen, so you can draw your geometry and capture the vertex colors in a single pass. Or, if you prefer, you can draw the geometry with rasterization turned off to capture the feedback data without touching the screen.
Since the data produced by transform feedback is stored in a buffer object, you can use it as input for other drawing operations, too. Depending on what you plan to do with the computed vertex colors, you may be able to avoid transferring them back to the host program at all.
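A minimal sketch of that transform-feedback path, assuming the vertex shader declares an output named vColor and that vertexCount points are drawn (both names are placeholders):

/* before linking: tell GL which shader output to capture */
const GLchar *varyings[] = { "vColor" };
glTransformFeedbackVaryings(program, 1, varyings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(program);

/* create a buffer to receive one vec3 per vertex */
GLuint tfBuffer;
glGenBuffers(1, &tfBuffer);
glBindBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, tfBuffer);
glBufferData(GL_TRANSFORM_FEEDBACK_BUFFER, vertexCount * 3 * sizeof(GLfloat), NULL, GL_STATIC_READ);
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, tfBuffer);

/* optionally skip rasterization entirely: glEnable(GL_RASTERIZER_DISCARD); */
glBeginTransformFeedback(GL_POINTS);
glDrawArrays(GL_POINTS, 0, vertexCount);
glEndTransformFeedback();

/* read the captured colors back to the host */
GLfloat *colors = malloc(vertexCount * 3 * sizeof(GLfloat));
glGetBufferSubData(GL_TRANSFORM_FEEDBACK_BUFFER, 0, vertexCount * 3 * sizeof(GLfloat), colors);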

Learning OpenCV: Use the pointer element access function cvPtr2D() to point to the middle 'green' channel. Draw the rectangle between (20,5) and (40,20)

In the book Learning OpenCV there's an exercise in chapter 3:
Create a two-dimensional matrix with three channels of type byte with data size 100-by-100, and initialize all the values to 0.
Use the pointer element access function cvPtr2D() to point to the middle, 'green' channel. Draw a rectangle between (20, 5) and (40, 20).
I've managed to do the first part, but I can't get my head around second part. Here's what I've done so far :
/*
Create a two-dimensional matrix with three channels of type byte with data size 100-by-100 and initialize all the values to 0.
Use the pointer element access function cvPtr2D() to point to the middle, 'green' channel. Draw a rectangle between (20, 5) and (40, 20).
*/
void ex10_question3() {
    CvMat* m = cvCreateMat(100, 100, CV_8UC3);
    cvSetZero(m);                  // initialize to 0.
    uchar* ptr = cvPtr2D(m, 0, 1); // if RGB, then start from the first RGB pair, Green.
    cvAdd(m, r);
    CvRect r = cvRect(20, 5, 20, 15);
    // cvPtr2D returns a pointer to a particular row element.
}
I was considering adding both the rect and the matrix, but obviously that won't work, because a rect is just coordinates plus width/height. I'm unfamiliar with cvPtr2D(). How can I visualise what the exercise wants me to do, and can anyone give me a hint in the right direction? The solution must be in C.
From my understanding, with interleaved RGB channels the 2nd channel will always be the channel of interest (array indices 1, 4, 7, ...).
So that's the direction the wind blows from...
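Concretely, for an interleaved 8-bit, 3-channel matrix the green byte of the pixel at (row, col) sits at byte offset row * step + col * 3 + 1. As a sketch of that index arithmetic with the CvMat m from the question:

uchar *green = m->data.ptr + row * m->step + col * 3 + 1; /* +1 selects the middle (green) channel */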
First of all, the problem is the C API. This API is still present for legacy reasons but will soon become obsolete. If you are serious about OpenCV, please use the C++ API. The official tutorials are a great source of information.
To further target your question, this would be implementation of your question in C++.
cv::Mat canvas = cv::Mat::zeros(100, 100, CV_8UC3); // create matrix of bytes, filled with 0
std::vector<cv::Mat> channels(3); // prepare storage for splitting
split(canvas, channels); // split the matrix into single channels
rectangle(channels[1], ...); // draw rectangle [I don't remember exact params]
merge(channels, canvas); // merge the channels together
If you only need to draw a green rectangle, it's actually much easier:
cv::Mat canvas = cv::Mat::zeros(100, 100, CV_8UC3); // create matrix of bytes, filled with 0
rectangle(canvas, ..., Scalar(0, 255, 0)); // draw a green rectangle
Edit:
To find out how to access single pixels in image using C++ API please refer to this answer:
https://stackoverflow.com/a/8139210/892914
Try this code:
cout << "Chapter 3. Task 3." << '\n';
CvMat *Mat = cvCreateMat(100, 100, CV_8UC3);
cvZero(Mat);
for (int J = 5; J <= 20; J++)
    for (int I = 20; I < 40; I++)
        *(cvPtr2D(Mat, J, I) + 1) = (uchar)255;
cvNamedWindow("Chapter 3. Task 3", CV_WINDOW_FREERATIO);
cvShowImage("Chapter 3. Task 3", Mat);
cvWaitKey(0);
cvReleaseMat(&Mat);
cvDestroyAllWindows();

Pass an array to a WebGL shader

I'm new to WebGL and I'm facing some problems with shaders. I want to have multiple light sources in the scene. I searched online and read that in WebGL you can't pass an array into the fragment shader, so the only way is to use a texture. Here is the problem I can't figure out.
First, I create a 32x32 texture using the following code:
var pix = [];
for (var i = 0; i < 32; i++)
{
    for (var j = 0; j < 32; j++)
        pix.push(0.8, 0.8, 0.1);
}
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, lightMap);
gl.pixelStorei(gl.UNPACK_ALIGNMENT,1);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGB, 32,32,0, gl.RGB, gl.UNSIGNED_BYTE,new Float32Array(pix));
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.uniform1i(g_loader.program.set_uniform["u_texture2"],0);
However, when I try to access the texture in the shader:
[Fragment Shader]
varying vec2 v_texcoord;
uniform sampler2D u_texture2;

void main(void)
{
    vec3 lightLoc = texture2D(u_texture2, v_texcoord).rgb;
    gl_FragData[0] = vec4(lightLoc, 1.0);
}
The result is totally black. Does anyone know how to access or create the texture correctly?
Intuitively, one would be tempted to implement multiple light sources doing something like this:
uniform int NUM_LIGHTS;
uniform vec3 uLa[NUM_LIGHTS];
But WebGL gives you an error like this:
ERROR: 0:12: 'constant expression required'
ERROR: 0:12: 'array size must be a constant integer expression'
Nonetheless, you actually can pass uniform arrays to the Fragment Shader to represent multiple light sources. The only caveat is that you need to know beforehand the size that these arrays will have. For example:
const int NUM_LIGHTS = 5;
uniform vec3 uLa[NUM_LIGHTS]; //ambient
uniform vec3 uLd[NUM_LIGHTS]; //diffuse
uniform vec3 uLs[NUM_LIGHTS]; //specular
This is correct. You also need to make sure that you build a flat array on the JavaScript side. So instead of doing this:
var ambientLightArray = [[0.1,0.1,0.1],[0.1,0.1,0.1],...]
do this:
var ambientLightArray = [0.1,0.1,0.1,0.1,0.1,0.1,..]
Then you do:
var location = gl.getUniformLocation(prg,"uLa");
gl.uniform3fv(location, ambientLightArray);
Once you set up your arrays with a predefined size you can do things like this:
//Example: Calculate diffuse contribution from all lights to the current fragment
//vLightRay[] and vNormal are varyings calculated in the Vertex Shader
//uKa and uKd are material properties (ambient and diffuse)
vec3 COLOR = vec3(0.0, 0.0, 0.0);
vec3 N = normalize(vNormal);
for (int i = 0; i < NUM_LIGHTS; i++) {
    vec3 L = normalize(vLightRay[i]);
    COLOR += (uLa[i] * uKa) + (uLd[i] * uKd * clamp(dot(N, -L), 0.0, 1.0));
}
gl_FragColor = vec4(COLOR, 1.0);
I hope this helps.
You are calling glTexImage2D with a type of GL_UNSIGNED_BYTE, but then you give it an array of floats (Float32Array). According to the specification, this causes a GL_INVALID_OPERATION error.
You should instead transform your positions from [0,1] floats to integers in the [0,255] range and use a Uint8Array. Unfortunately this loses you some precision, and all your positions need to be in the [0,1] range (or at least some fixed range into which you later transform the [0,1] values you read from the texture). But I seem to remember that WebGL doesn't support floating-point textures at the moment.
EDIT: According to the link in your comment, WebGL does indeed seem to support floating-point textures. So using a type of GL_FLOAT and a Float32Array should work, too. But in that case you have to make sure your hardware also supports floating-point textures (roughly GeForce 6 and later) and that your WebGL implementation supports the OES_texture_float extension.
You may also try setting the filter modes to GL_NEAREST, as older hardware may not support linearly filtered floating-point textures. And since you want to use the texture as a simple array anyway, you shouldn't need any interpolation.
Note that in WebGL, contrary to OpenGL, you have to explicitly call getExtension before you can use an extension, like OES_texture_float. And then you want to pass gl.FLOAT as the type parameter to texImage2D.
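Putting those pieces together, the upload could look like this (a sketch, assuming a WebGL 1 context gl and the pix array from the question):

var ext = gl.getExtension("OES_texture_float"); // must be requested explicitly in WebGL
if (ext) {
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGB, 32, 32, 0, gl.RGB, gl.FLOAT, new Float32Array(pix));
} else {
    // fall back to bytes: scale the [0,1] floats into [0,255]
    var bytes = new Uint8Array(pix.map(function (v) { return Math.round(v * 255); }));
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGB, 32, 32, 0, gl.RGB, gl.UNSIGNED_BYTE, bytes);
}
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);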
The texImage2D function expects an image as its parameter, not an array. You could write your texture data to a canvas and then use the canvas as the texImage2D parameter.
Check out this link:
http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#pixel-manipulation
