I am trying to crop an image loaded with the SOIL library before using it as a texture.
So first, how can I load an image and then convert it to a texture?
And second, how can I modify (crop, etc.) the loaded image?
This is what I would like to do:
unsigned char * img = SOIL_load_image("img.png", &w, &h, &ch, SOIL_LOAD_RGBA);
// crop img ...
// cast it into GLuint texture ...
You can upload just a portion of your image by using the pixel-unpack state set with glPixelStorei:
// the location and size of the region to crop, in pixels:
int cropx = ..., cropy = ..., cropw = ..., croph = ...;
// tell OpenGL where to start reading the data:
glPixelStorei(GL_UNPACK_SKIP_PIXELS, cropx);
glPixelStorei(GL_UNPACK_SKIP_ROWS, cropy);
// tell OpenGL how many pixels are in a row of the full image:
glPixelStorei(GL_UNPACK_ROW_LENGTH, w);
// load the data to a previously created texture
glTextureSubImage2D(texture, 0, 0, 0, cropw, croph, GL_RGBA, GL_UNSIGNED_BYTE, img);
Here's a diagram from the OpenGL spec that might help:
EDIT: If you're using older OpenGL (older than 4.5) then replace the glTextureSubImage2D call with:
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, cropw, croph, 0, GL_RGBA, GL_UNSIGNED_BYTE, img);
Make sure to create and bind the texture prior to this call (same way you create textures normally).
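Putting it together, here is a minimal sketch of the whole flow, assuming an OpenGL 4.5 context (DSA functions) and that cropx/cropy/cropw/croph describe your region; the unpack state is reset afterwards so later uploads are not affected:
int w, h, ch;
unsigned char *img = SOIL_load_image("img.png", &w, &h, &ch, SOIL_LOAD_RGBA);
// Create the texture and allocate storage for just the cropped region
GLuint texture;
glCreateTextures(GL_TEXTURE_2D, 1, &texture);
glTextureStorage2D(texture, 1, GL_SRGB8_ALPHA8, cropw, croph);
// Describe the layout of the full source image so only the crop is read
glPixelStorei(GL_UNPACK_ROW_LENGTH, w);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, cropx);
glPixelStorei(GL_UNPACK_SKIP_ROWS, cropy);
glTextureSubImage2D(texture, 0, 0, 0, cropw, croph, GL_RGBA, GL_UNSIGNED_BYTE, img);
// Restore the default unpack state and free the CPU-side copy
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);
SOIL_free_image_data(img);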
After applying a rotation or a translation matrix to the vertex array, the vertex buffer is not updated.
So how can I get the positions of the vertices after applying the matrix?
Here's the onDrawFrame() function:
public void onDrawFrame(GL10 gl) {
    PositionHandle = GLES20.glGetAttribLocation(Program, "vPosition");
    MatrixHandle = GLES20.glGetUniformLocation(Program, "uMVPMatrix");
    ColorHandle = GLES20.glGetUniformLocation(Program, "vColor");

    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);

    Matrix.rotateM(RotationMatrix, 0, -90f, 1, 0, 0);
    Matrix.multiplyMM(vPMatrix, 0, projectionMatrix, 0, viewMatrix, 0);
    Matrix.multiplyMM(vPMatrix, 0, vPMatrix, 0, RotationMatrix, 0);
    GLES20.glUniformMatrix4fv(MatrixHandle, 1, false, vPMatrix, 0);

    GLES20.glUseProgram(Program);
    GLES20.glEnableVertexAttribArray(PositionHandle);
    GLES20.glVertexAttribPointer(PositionHandle, 3, GLES20.GL_FLOAT, false, 0, vertexbuffer);
    GLES20.glUniform4fv(ColorHandle, 1, color, 1);
    GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 6);
    GLES20.glDisableVertexAttribArray(PositionHandle);
}
The GPU doesn't normally write the transformed results back anywhere the application can read them. It is possible in ES 3.0 with transform feedback, but that's very expensive.
For touch-event "hit" testing you generally don't want to use the raw geometry anyway. Use some simple proxy geometry instead, which can be transformed in software on the CPU, as sketched below.
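If it helps to see the CPU-side transform spelled out, here is a small sketch in C (the same math Matrix.multiplyMV() does on Android); it assumes OpenGL's column-major 4x4 layout, which android.opengl.Matrix also uses:
/* Transform a proxy point (e.g. the centre of a bounding box) by a 4x4
   column-major matrix -- the same transform the vertex shader applies. */
void transform_point(const float m[16], const float p[4], float out[4])
{
    for (int row = 0; row < 4; ++row) {
        out[row] = m[row]      * p[0]
                 + m[row + 4]  * p[1]
                 + m[row + 8]  * p[2]
                 + m[row + 12] * p[3];
    }
}
On Android you can call Matrix.multiplyMV() directly instead of writing this yourself.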
Maybe you should try this:
private float[] modelViewMatrix = new float[16];
...
Matrix.rotateM(RotationMatrix, 0, -90f, 1, 0, 0);
Matrix.multiplyMM(modelViewMatrix, 0, viewMatrix, 0, RotationMatrix, 0);
Matrix.multiplyMM(vpMatrix, 0, projectionMatrix, 0, modelViewMatrix, 0);
You can do the vertex movement calculations on the CPU, and then use the GLU.gluProject() function to convert the object's vertex coordinates to screen pixels. That data can then be used when handling touch events.
private var view: IntArray = intArrayOf(0, 0, widthScreen, heightScreen)
...
GLU.gluProject(modelX, modelY, modelZ, mvMatrix, 0,
projectionMatrix, 0, view, 0,
coordinatesWindow, 0)
...
// coordinates in pixels of the screen
val x = coordinatesWindow[0]
val y = coordinatesWindow[1]
For example, in the fragment shader:
FragColor = vec4(TexCoords, MicrifiedCurrentPixelLevel, 0.5); // note: the alpha value is 0.5
I wanted to read back FragColor's value, including R, G, B and A, into CPU memory, so I used:
float* Pixel = new float[4 * SCR_WIDTH * SCR_HEIGHT];
glReadPixels(0, 0, SCR_WIDTH, SCR_HEIGHT, GL_RGBA, GL_FLOAT, &Pixel[0]);
The alpha value I obtain is always 1, not 0.5.
Why?
Thanks very much.
You can't store alpha values there. Check the RGBA format of the texture (or framebuffer) you are rendering into: if its internal format has no alpha channel (e.g. GL_RGB), the 0.5 written by the shader is discarded and glReadPixels returns 1.0 for A.
https://stackoverflow.com/a/7196109/9806560
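If you are rendering into an FBO, here is a minimal sketch of a setup that does keep alpha (fboColorTex is a placeholder name; the key point is the alpha-capable internal format):
// Color attachment with an alpha channel; with GL_RGB instead, the 0.5
// written by the shader would be dropped and glReadPixels would return 1.0
GLuint fboColorTex;
glGenTextures(1, &fboColorTex);
glBindTexture(GL_TEXTURE_2D, fboColorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, SCR_WIDTH, SCR_HEIGHT, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, fboColorTex, 0);
// ... render, then read back; with an RGBA attachment the alpha survives
float *pixels = malloc(sizeof(float) * 4 * SCR_WIDTH * SCR_HEIGHT);
glReadPixels(0, 0, SCR_WIDTH, SCR_HEIGHT, GL_RGBA, GL_FLOAT, pixels);
If you are reading from the default framebuffer instead, make sure the window's pixel format was requested with alpha bits.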
Currently I'm working on a project that renders applications into a framebuffer object (FBO) first, and then renders them back to the screen using the FBO's color and depth texture attachments, in OpenGL ES 2.0.
Multiple applications render fine with the color buffers, but when I try to use the depth information from the depth texture attachment, it doesn't seem to work.
I tried to visualize the depth texture by sampling it with the texture coordinates, and it came out all white. People say the grayscale may differ only slightly, i.e. even in the darker parts it can be close to 1.0, so I modified my fragment shader to the following:
vec4 depth;
depth = texture2D(s_depth0, v_texCoord);
if(depth.r == 1.0)
gl_FragColor = vec4(1.0,0.0,0.0,1.0);
And without surprise, it's all red.
The application code:
void Draw ( ESContext *esContext )
{
    UserData *userData = esContext->userData;

    // Clear the color buffer
    glClear ( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

    // Draw a triangle
    GLfloat vVertices[] = {  0.0f,  0.5f,  0.5f,
                            -0.5f, -0.5f, -0.5f,
                             0.5f, -0.5f, -0.5f };

    // Set the viewport
    glViewport ( 0, 0, esContext->width, esContext->height );

    // Use the program object
    glUseProgram ( userData->programObject );

    // Load the vertex position
    glVertexAttribPointer ( 0, 3, GL_FLOAT, GL_FALSE, 0, vVertices );
    glEnableVertexAttribArray ( 0 );

    glDrawArrays ( GL_TRIANGLES, 0, 3 );

    eglSwapBuffers ( esContext->eglDisplay, esContext->eglSurface );
}
So what could the problem be, if the color buffer works fine while the depth buffer doesn't?
I finally solved it. The cause turned out to be that the client applications never enabled GL_DEPTH_TEST.
So if you hit the same problem, make sure to enable it by calling glEnable(GL_DEPTH_TEST) during OpenGL ES initialization. It is disabled by default, presumably for performance reasons.
Thanks for all the advice and answers.
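For reference, a rough sketch of the initialization that was missing (fbo, colorTex, depthTex, width and height are placeholder names, and a depth texture in ES 2.0 needs the OES_depth_texture extension); the decisive line is the glEnable(GL_DEPTH_TEST):
GLuint fbo, colorTex, depthTex;

// Depth texture attachment (requires OES_depth_texture on ES 2.0)
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);

// colorTex is created the usual way (omitted here)
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);

// Disabled by default: without this, nothing is ever written to the depth attachment
glEnable(GL_DEPTH_TEST);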
The depth texture needs to be linearized to be visualized in the viewport, because depth values are stored non-linearly (most of the precision sits near the near plane). Try this in the fragment shader:
uniform float farClip, nearClip;
float depth = texture2D(s_depth0, v_texCoord).x;
float depthLinear = (2.0 * nearClip) / (farClip + nearClip - depth * (farClip - nearClip));
I am using CUDA to generate an ABGR output image, stored in a uchar4 array where each element represents the color of one pixel. The output is logically a 2D image, but it is allocated in CUDA as linear memory of interleaved bytes.
I know that CUDA can easily map this array to an OpenGL Vertex Buffer Object. My question is, assuming that I have the RGB value of every pixel in an image, along with the width and height of the image, how can I draw this image to screen using OpenGL?
I know that some kind of shader must be involved but since my knowledge is very little, I have no idea how a shader can use the color of each pixel, but map it to correct screen pixels.
I know I should increase my knowledge in OpenGL, but this seems like a trivial task.
If there is an easy way for me to draw this image, I'd rather not spend much time learning OpenGL.
I finally figured out an easy way to do what I wanted. Unfortunately, I did not know about the existence of the sample that Robert was talking about on NVIDIA's website.
Long story short, the easiest way to draw the image was to define a Pixel Buffer Object in OpenGL, register the buffer with CUDA and pass it as an output array of uchar4 to the CUDA kernel. Here is a quick pseudo-code based on JOGL and JCUDA that shows the steps involved. Most of the code was obtained from the sample on NVIDIA's website:
1) Creating the OpenGL buffers
GL2 gl = drawable.getGL().getGL2();
int[] buffer = new int[1];
// Generate buffer
gl.glGenBuffers(1, IntBuffer.wrap(buffer));
glBuffer = buffer[0];
// Bind the generated buffer
gl.glBindBuffer(GL2.GL_ARRAY_BUFFER, glBuffer);
// Specify the size of the buffer (no data is pre-loaded in this buffer)
gl.glBufferData(GL2.GL_ARRAY_BUFFER, imageWidth * imageHeight * 4, (Buffer)null, GL2.GL_DYNAMIC_DRAW);
gl.glBindBuffer(GL2.GL_ARRAY_BUFFER, 0);
// The bufferResource is of type CUgraphicsResource and is defined as a class field
this.bufferResource = new CUgraphicsResource();
// Register buffer in CUDA
cuGraphicsGLRegisterBuffer(bufferResource, glBuffer, CUgraphicsMapResourceFlags.CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE);
2) Initialize the texture and set texture parameters
GL2 gl = drawable.getGL().getGL2();
int[] texture = new int[1];
gl.glGenTextures(1, IntBuffer.wrap(texture));
this.glTexture = texture[0];
gl.glBindTexture(GL2.GL_TEXTURE_2D, glTexture);
gl.glTexParameteri(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_MIN_FILTER, GL2.GL_LINEAR);
gl.glTexParameteri(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_MAG_FILTER, GL2.GL_LINEAR);
gl.glTexImage2D(GL2.GL_TEXTURE_2D, 0, GL2.GL_RGBA8, imageWidth, imageHeight, 0, GL2.GL_BGRA, GL2.GL_UNSIGNED_BYTE, (Buffer)null);
gl.glBindTexture(GL2.GL_TEXTURE_2D, 0);
3) Run the CUDA kernel and display the results in OpenGL's display loop.
this.runCUDA();
GL2 gl = drawable.getGL().getGL2();
gl.glBindBuffer(GL2.GL_PIXEL_UNPACK_BUFFER, glBuffer);
gl.glBindTexture(GL2.GL_TEXTURE_2D, glTexture);
gl.glTexSubImage2D(GL2.GL_TEXTURE_2D, 0, 0, 0,
imageWidth, imageHeight,
GL2.GL_RGBA, GL2.GL_UNSIGNED_BYTE, 0); //The last argument must be ZERO! NOT NULL! :-)
gl.glBindBuffer(GL2.GL_PIXEL_PACK_BUFFER, 0);
gl.glBindBuffer(GL2.GL_PIXEL_UNPACK_BUFFER, 0);
gl.glBindTexture(GL2.GL_TEXTURE_2D, glTexture);
gl.glEnable(GL2.GL_TEXTURE_2D);
gl.glDisable(GL2.GL_DEPTH_TEST);
gl.glDisable(GL2.GL_LIGHTING);
gl.glTexEnvf(GL2.GL_TEXTURE_ENV, GL2.GL_TEXTURE_ENV_MODE, GL2.GL_REPLACE);
gl.glMatrixMode(GL2.GL_PROJECTION);
gl.glPushMatrix();
gl.glLoadIdentity();
gl.glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0);
gl.glMatrixMode(GL2.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glViewport(0, 0, imageWidth, imageHeight);
gl.glBegin(GL2.GL_QUADS);
gl.glTexCoord2f(0.0f, 1.0f);
gl.glVertex2f(-1.0f, -1.0f);
gl.glTexCoord2f(1.0f, 1.0f);
gl.glVertex2f(1.0f, -1.0f);
gl.glTexCoord2f(1.0f, 0.0f);
gl.glVertex2f(1.0f, 1.0f);
gl.glTexCoord2f(0.0f, 0.0f);
gl.glVertex2f(-1.0f, 1.0f);
gl.glEnd();
gl.glMatrixMode(GL2.GL_PROJECTION);
gl.glPopMatrix();
gl.glDisable(GL2.GL_TEXTURE_2D);
3.5) The CUDA call:
public void runCuda(GLAutoDrawable drawable) {
devOutput = new CUdeviceptr();
// Map the OpenGL buffer to a resource and then obtain a CUDA pointer to that resource
cuGraphicsMapResources(1, new CUgraphicsResource[]{bufferResource}, null);
cuGraphicsResourceGetMappedPointer(devOutput, new long[1], bufferResource);
// Setup the kernel parameters making sure that the devOutput pointer is passed to the kernel
Pointer kernelParams =
.
.
.
.
int gridSize = (int) Math.ceil(imageWidth * imageHeight / (double)DESC_BLOCK_SIZE);
cuLaunchKernel(function,
gridSize, 1, 1,
DESC_BLOCK_SIZE, 1, 1,
0, null,
kernelParams, null);
cuCtxSynchronize();
// Unmap the buffer so that it can be used in OpenGL
cuGraphicsUnmapResources(1, new CUgraphicsResource[]{bufferResource}, null);
}
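If you are working in native CUDA C rather than JCUDA, the equivalent flow with the CUDA runtime API looks roughly like this (a sketch; the kernel launch, gridSize and blockSize are placeholders):
#include <cuda_gl_interop.h>

cudaGraphicsResource_t bufferResource;

// Register the PBO once, after glBufferData
cudaGraphicsGLRegisterBuffer(&bufferResource, glBuffer, cudaGraphicsMapFlagsWriteDiscard);

// Each frame: map, fetch a device pointer, run the kernel, unmap
uchar4 *devOutput;
size_t size;
cudaGraphicsMapResources(1, &bufferResource, 0);
cudaGraphicsResourceGetMappedPointer((void **)&devOutput, &size, bufferResource);
// launch your kernel here, writing one uchar4 per pixel into devOutput
cudaDeviceSynchronize();
cudaGraphicsUnmapResources(1, &bufferResource, 0);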
PS: I thank Robert for providing the link to the sample. I also thank the people who downvoted my question without any useful feedback!
I've written a small tiling game engine with OpenGL and C, and I can't seem to figure out what the problem is. My main loop looks like this:
void main_game_loop()
{
    (poll for events and respond to them)
    glClear(GL_COLOR_BUFFER_BIT);
    glPushMatrix();
    draw_block(WALL, 10, 10);
}
draw_block:
void draw_block(block b, int x, int y)
{
    (load b's texture from a hash and store it in GLuint tex)
    glPushMatrix();
    glTranslatef(x, y, 0);
    glBindTexture(GL_TEXTURE_2D, tex);
    glBegin(GL_QUADS);
    // BLOCK_DIM is 32, the width and height of the texture
    glTexCoord2i(0, 0); glVertex3f(0, 0, 0);
    glTexCoord2i(1, 0); glVertex3f(BLOCK_DIM, 0, 0);
    glTexCoord2i(1, 1); glVertex3f(BLOCK_DIM, BLOCK_DIM, 0);
    glTexCoord2i(0, 1); glVertex3f(0, BLOCK_DIM, 0);
    glEnd();
    glPopMatrix();
}
Initialization function (called before main_game_loop):
void init_gl()
{
    glViewport(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0, SCREEN_WIDTH, SCREEN_HEIGHT, 0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glClearColor(0, 0, 0, 0);
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);
    glDisable(GL_DEPTH_TEST);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
}
When run, this displays a black screen. However, if I remove the glViewport call, it seemingly displays the texture, but huge and in the corner of the window. Screenshot:
The texture IS being drawn correctly, because if I scale out by a huge factor, I can see the entire image. The y-axis also seems to be flipped from what I specified in the gluOrtho2D call (discovered by making events add or subtract from the image's x/y coordinates; subtracting from the y coordinate moves the image downward). I'm starting to get frustrated, because this is the simplest possible example I can think of. I'm using SDL, and am passing SDL_OPENGL to SDL_SetVideoMode. What is going on here?
Looks like a problem with glViewport, but just to be sure, did you try clearing the color buffer to purple?
I've always thought of glViewport as a video/windowing function rather than part of OpenGL proper, because it is the intermediary between the window manager and the OpenGL subsystem and it works in window coordinates. As such, you should probably look at it together with the other SDL video calls. I suggest updating the question with the full code, or at least the parts relevant to the video/window subsystem.
Or is it that you omitted to call glViewport after a resize?
You should also try your code without SDL_FULLSCREEN and/or with a smaller window. I usually start with a 512x512 or 640x480 window until I get the viewport and some basic controls right.
The first two parameters of glViewport specify the lower-left corner of the viewport:
http://www.opengl.org/sdk/docs/man/xhtml/glViewport.xml
You can try
glViewport(0, SCREEN_HEIGHT, SCREEN_WIDTH, SCREEN_HEIGHT);
For gluOrtho2D, the parameters are left, right, bottom, top,
so I would probably use
gluOrtho2D(0, SCREEN_WIDTH, 0, SCREEN_HEIGHT);
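For reference, a sketch that keeps the two calls consistent, assuming SCREEN_WIDTH/SCREEN_HEIGHT match the actual window size and the GL context already exists:
// x, y are the lower-left corner of the viewport in window coordinates
glViewport(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT);

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
// gluOrtho2D(left, right, bottom, top):
//   bottom = 0,             top = SCREEN_HEIGHT  -> y grows upward
//   bottom = SCREEN_HEIGHT, top = 0              -> y grows downward (common for 2D tile maps)
gluOrtho2D(0, SCREEN_WIDTH, 0, SCREEN_HEIGHT);

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
Pick whichever vertical orientation matches how your tile coordinates are stored, and make sure glViewport is only called once a GL context exists (i.e. after SDL_SetVideoMode).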