Fill an area with a texture in cocos2d-x from an array of points

I've been trying to write an "Obstacle" class that builds a box2d body from an array of points and draws the area the body covers. The body part works fine: I take the array of points, build a b2PolygonShape, and so on. But I really don't know how to fill the area defined by those points with a color or texture. Here's my draw() method:
void Obstacle::draw(cocos2d::Renderer *renderer, const cocos2d::Mat4 &transform, uint32_t flags)
{
    CC_NODE_DRAW_SETUP();
    glBlendFunc(CC_BLEND_SRC, CC_BLEND_DST);
    GL::bindTexture2D(obstacleTexture->getName());
    //DrawPrimitives::setDrawColor4F(1.0, 1.0, 0.0, 1.0);
    glVertexAttribPointer(GLProgram::VERTEX_ATTRIB_POSITION, 2, GL_FLOAT, GL_FALSE, 0, vertices);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, (GLsizei)shapePoints.size());
}
vertices is the same array of points I use when creating the b2Body.

You need to triangulate the polygon shape you built in order to draw it; a GL_TRIANGLE_STRIP over the raw outline points won't fill an arbitrary polygon correctly.
poly2tri is a good option for triangulating shapes: https://code.google.com/p/poly2tri/
After triangulating your shape, map texture coordinates onto the resulting triangles or set the vertex colors.
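For illustration, a minimal sketch of that approach, assuming shapePoints holds the polygon outline in order; texSize (texels per world unit) is a made-up parameter for the UV mapping, not part of the question:

#include <poly2tri/poly2tri.h>
#include <vector>

// Triangulate the outline with poly2tri (the caller owns the p2t::Point memory).
std::vector<p2t::Point*> outline;
for (const auto& p : shapePoints)
    outline.push_back(new p2t::Point(p.x, p.y));

p2t::CDT cdt(outline); // constrained Delaunay triangulation
cdt.Triangulate();

// Flatten the triangles into GL_TRIANGLES position/UV arrays.
std::vector<GLfloat> verts, uvs;
for (p2t::Triangle* tri : cdt.GetTriangles()) {
    for (int i = 0; i < 3; ++i) {
        p2t::Point* pt = tri->GetPoint(i);
        verts.push_back((GLfloat)pt->x);
        verts.push_back((GLfloat)pt->y);
        // Tile the texture in world units; requires the GL_REPEAT wrap mode.
        uvs.push_back((GLfloat)(pt->x / texSize));
        uvs.push_back((GLfloat)(pt->y / texSize));
    }
}

In draw(), bind both arrays and switch to GL_TRIANGLES: add a second glVertexAttribPointer for GLProgram::VERTEX_ATTRIB_TEX_COORD with uvs.data(), then call glDrawArrays(GL_TRIANGLES, 0, (GLsizei)verts.size() / 2).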

Related

Optimising the rendition of quads using GLUT?

I'm making a 3D voxel engine, similar to Minecraft. I have some world generation and chunk logic working; however, with a render distance of 12 chunks (which seems fairly typical for Minecraft), i.e. 2*12*12*16*16*2, there is the potential of upwards of 150,000 faces needing to be rendered. Before trying to optimize the engine as a whole, I ran a test where I just rendered one face 150,000 times. In theory, since the points don't have to be recalculated in 3D space each time, this should be about the least computationally expensive rendering the engine would ever have to do.
Nevertheless, running the following
glBegin(GL_QUADS);
glColor3f(1, 0, 0);
for (int i = 0; i < 150000; i++) {
    glVertex3fv(renderp1);
    glVertex3fv(renderp2);
    glVertex3fv(renderp3);
    glVertex3fv(renderp4);
}
glEnd();
Even when there's no texture and the points are all the same, I still get a very shabby FPS, which makes the engine unusable.
I know modern games have meshes with upwards of 100,000 polygons and run fantastically, which makes me wonder how this code can be so slow. Is rendering with this technique a horrible way to go about it? How could I achieve such a render?
The first thing you should do (before any of the shader stuff) is to stop using glBegin/glEnd and start using glDrawArrays or glDrawElements.
e.g.
// define data structures
struct vec3 { GLfloat x, y, z; };

struct vertex_t {
    vec3 position, color;
};

// define data (just a single triangle with RGB colors)
static const vertex_t vertices[] = {
    { {  0.0f,  0.5f, 0.0f }, { 1, 0, 0 } },
    { {  0.5f, -0.5f, 0.0f }, { 0, 1, 0 } },
    { { -0.5f, -0.5f, 0.0f }, { 0, 0, 1 } }
};
...
// set up the arrays
glVertexPointer(3, GL_FLOAT, sizeof(vertex_t), (char*)vertices + offsetof(vertex_t, position));
glColorPointer(3, GL_FLOAT, sizeof(vertex_t), (char*)vertices + offsetof(vertex_t, color));
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
...
// draw
glDrawArrays(GL_TRIANGLES, 0, 3);
This is OpenGL 1.1 stuff. It can be further improved with a VBO (Vertex Buffer Object), which requires OpenGL 1.5:
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
// now replace the "vertices" base pointer with a null pointer in the calls to
// glVertexPointer and glColorPointer: with a VBO bound, the pointer argument
// becomes a byte offset into the buffer
and we haven't even touched shaders yet.
For OpenGL 2.x style shaders, the above code can stay as is, or you can further "modernize" it by replacing glVertexPointer/glColorPointer with glVertexAttribPointer, and glEnableClientState with glEnableVertexAttribArray, to make "modern OpenGL" people happy.
But just using glDrawArrays, OpenGL 1.1 style, should be enough to resolve the performance problem.
This way you don't call glVertex/glColor 100,000 times; a single call to glDrawArrays can draw 100,000 vertices at once (and if you're using a VBO, they're already in GPU memory).
And oh, quads have been deprecated since OpenGL 3.0; we're supposed to build everything from triangles.
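For completeness, a minimal sketch of the glVertexAttribPointer variant mentioned above; the attribute locations 0 and 1 are an assumed shader interface, not part of the original answer:

// Assumes a shader declaring: layout(location = 0) in vec3 aPos;
//                             layout(location = 1) in vec3 aColor;
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(vertex_t),
                      (void*)offsetof(vertex_t, position));
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(vertex_t),
                      (void*)offsetof(vertex_t, color));
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glDrawArrays(GL_TRIANGLES, 0, 3);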

glReadPixels() doesn't return the alpha value output by the fragment shader

For example, in the fragment shader:
FragColor = vec4(TexCoords, MicrifiedCurrentPixelLevel, 0.5); // note: the alpha value is 0.5
I want to obtain FragColor's value, including R, G, B, and A, in CPU memory. However, when I use
float* Pixel = new float[4 * SCR_WIDTH * SCR_HEIGHT];
glReadPixels(0, 0, SCR_WIDTH, SCR_HEIGHT, GL_RGBA, GL_FLOAT, &Pixel[0]);
the alpha value I obtain is always 1, not 0.5.
Why? Thanks very much.
You can't store alpha values in a render target that has no alpha channel. Check the RGBA internal format of your (empty) texture: if it lacks an alpha channel, alpha always reads back as 1.
https://stackoverflow.com/a/7196109/9806560
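A minimal sketch of what that answer implies, assuming you render into a texture-backed FBO (the FBO setup here is an assumption for illustration; SCR_WIDTH/SCR_HEIGHT come from the question):

// The color attachment needs an alpha-bearing internal format (GL_RGBA8);
// with GL_RGB8 the 0.5 alpha written by the shader would be discarded.
GLuint fbo, tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, SCR_WIDTH, SCR_HEIGHT, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);

// ... render the scene, then read back while this FBO is still bound:
float* Pixel = new float[4 * SCR_WIDTH * SCR_HEIGHT];
glReadPixels(0, 0, SCR_WIDTH, SCR_HEIGHT, GL_RGBA, GL_FLOAT, Pixel);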

Texturing a sphere in OpenGL with glTexGen

I want to get an earth texture onto a sphere. My sphere is an icosphere built from many triangles (100+), and I found it confusing to set UV coordinates for the whole sphere by hand, so I tried glTexGen. The result is quite close, but the texture is repeated 8 times across the sphere (see image). I cannot find a way to make it wrap the whole object just once. Here is the code where the sphere is drawn and the texturing is set up.
glEnable(GL_TEXTURE_2D);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
glBegin(GL_TRIANGLES);
for (int i = 0; i < new_sphere->NumOfTrians; i++)
{
    Triangle *draw_Trian = new_sphere->Trians + i;
    glVertex3f(draw_Trian->pnts[0].coords[0], draw_Trian->pnts[0].coords[1], draw_Trian->pnts[0].coords[2]);
    glVertex3f(draw_Trian->pnts[1].coords[0], draw_Trian->pnts[1].coords[1], draw_Trian->pnts[1].coords[2]);
    glVertex3f(draw_Trian->pnts[2].coords[0], draw_Trian->pnts[2].coords[1], draw_Trian->pnts[2].coords[2]);
}
glDisable(GL_TEXTURE_2D);
free(new_sphere->Trians);
free(new_sphere);
glEnd();
You need to define how your texture is supposed to map to your triangles. This depends on the texture you're using. There are a multitude of ways to map the surface of a sphere with a texture (since no one mapping is free of singularities). It looks like you have a cylindrical projection texture there. So we will emit cylindrical UV coordinates.
I've tried to give you some code here, but it's assuming that
Your mesh is a unit sphere (i.e., centered at 0 and has radius 1)
pnts.coords is an array of floats
You want to use the second coordinate (coord[1]) as the 'up' direction (or the height in a cylindrical mapping)
Your code would look something like this. I've defined a new function for emitting cylindrical UVs, so you can put that wherever you like.
/* Map [(-1, -1, -1), (1, 1, 1)] into [(0, 0), (1, 1)] cylindrically */
inline void uvCylinder(float* coord) {
    float angle  = 0.5f * atan2(coord[2], coord[0]) / 3.14159f + 0.5f;
    float height = 0.5f * coord[1] + 0.5f;
    glTexCoord2f(angle, height);
}
glEnable(GL_TEXTURE_2D);
glBegin(GL_TRIANGLES);
for (int i = 0; i < new_sphere->NumOfTrians; i++) {
    Triangle *t = new_sphere->Trians + i;
    uvCylinder(t->pnts[0].coords);
    glVertex3f(t->pnts[0].coords[0], t->pnts[0].coords[1], t->pnts[0].coords[2]);
    uvCylinder(t->pnts[1].coords);
    glVertex3f(t->pnts[1].coords[0], t->pnts[1].coords[1], t->pnts[1].coords[2]);
    uvCylinder(t->pnts[2].coords);
    glVertex3f(t->pnts[2].coords[0], t->pnts[2].coords[1], t->pnts[2].coords[2]);
}
glEnd();
glDisable(GL_TEXTURE_2D);
free(new_sphere->Trians);
free(new_sphere);
Note on Projections
The reason it's confusing to build UV coordinates for the whole sphere is that there isn't one 'correct' way to do it. Mathematically speaking, there's no such thing as a perfect 2D mapping of a sphere, which is why we have so many different types of projections. When you have a 2D image that serves as a texture for a spherical object, you need to know which type of projection the image was built for, so that you can emit the correct UV coordinates for that texture.
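If the texture were instead an equirectangular projection (the most common layout for earth textures; an assumption here, not something established above), only the height term changes: it follows latitude rather than raw height. A sketch under the same unit-sphere assumptions:

/* Equirectangular variant of uvCylinder: v follows latitude. */
inline void uvEquirect(float* coord) {
    float u = 0.5f * atan2f(coord[2], coord[0]) / 3.14159f + 0.5f;
    float v = asinf(coord[1]) / 3.14159f + 0.5f; /* asin(y) is the latitude in [-pi/2, pi/2] */
    glTexCoord2f(u, v);
}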

Draw image from vertex buffer object generated with CUDA using OpenGL

I am using CUDA to generate this ABGR output image. The image in question is stored in a uchar4 array; each element of the array represents the color of one pixel. The output array is logically a 2D image, but it is allocated in CUDA as linear memory of interleaved bytes.
I know that CUDA can easily map this array to an OpenGL Vertex Buffer Object. My question is, assuming that I have the RGB value of every pixel in an image, along with the width and height of the image, how can I draw this image to screen using OpenGL?
I know that some kind of shader must be involved, but since my OpenGL knowledge is very limited, I have no idea how a shader could take the color of each pixel and map it to the correct screen pixel.
I know I should deepen my OpenGL knowledge, but this seems like such a trivial task.
If there is an easy way to draw this image, I'd rather not spend much time learning OpenGL.
I finally figured out an easy way to do what I wanted. Unfortunately, I did not know about the sample Robert mentioned on NVIDIA's website.
Long story short, the easiest way to draw the image was to define a Pixel Buffer Object in OpenGL, register the buffer with CUDA, and pass it as the uchar4 output array to the CUDA kernel. Here is quick pseudo-code based on JOGL and JCUDA showing the steps involved; most of it was taken from the sample on NVIDIA's website:
1) Creating the OpenGL buffers
GL2 gl = drawable.getGL().getGL2();
int[] buffer = new int[1];
// Generate buffer
gl.glGenBuffers(1, IntBuffer.wrap(buffer));
glBuffer = buffer[0];
// Bind the generated buffer
gl.glBindBuffer(GL2.GL_ARRAY_BUFFER, glBuffer);
// Specify the size of the buffer (no data is pre-loaded in this buffer)
gl.glBufferData(GL2.GL_ARRAY_BUFFER, imageWidth * imageHeight * 4, (Buffer)null, GL2.GL_DYNAMIC_DRAW);
gl.glBindBuffer(GL2.GL_ARRAY_BUFFER, 0);
// The bufferResource is of type CUgraphicsResource and is defined as a class field
this.bufferResource = new CUgraphicsResource();
// Register buffer in CUDA
cuGraphicsGLRegisterBuffer(bufferResource, glBuffer, CUgraphicsMapResourceFlags.CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE);
2) Initialize the texture and set texture parameters
GL2 gl = drawable.getGL().getGL2();
int[] texture = new int[1];
gl.glGenTextures(1, IntBuffer.wrap(texture));
this.glTexture = texture[0];
gl.glBindTexture(GL2.GL_TEXTURE_2D, glTexture);
gl.glTexParameteri(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_MIN_FILTER, GL2.GL_LINEAR);
gl.glTexParameteri(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_MAG_FILTER, GL2.GL_LINEAR);
gl.glTexImage2D(GL2.GL_TEXTURE_2D, 0, GL2.GL_RGBA8, imageWidth, imageHeight, 0, GL2.GL_BGRA, GL2.GL_UNSIGNED_BYTE, (Buffer)null);
gl.glBindTexture(GL2.GL_TEXTURE_2D, 0);
3) Run the CUDA kernel and display the results in OpenGL's display loop.
this.runCUDA();
GL2 gl = drawable.getGL().getGL2();
gl.glBindBuffer(GL2.GL_PIXEL_UNPACK_BUFFER, glBuffer);
gl.glBindTexture(GL2.GL_TEXTURE_2D, glTexture);
gl.glTexSubImage2D(GL2.GL_TEXTURE_2D, 0, 0, 0,
                   imageWidth, imageHeight,
                   GL2.GL_RGBA, GL2.GL_UNSIGNED_BYTE, 0); // The last argument must be ZERO (a PBO offset), NOT NULL! :-)
gl.glBindBuffer(GL2.GL_PIXEL_PACK_BUFFER, 0);
gl.glBindBuffer(GL2.GL_PIXEL_UNPACK_BUFFER, 0);
gl.glBindTexture(GL2.GL_TEXTURE_2D, glTexture);
gl.glEnable(GL2.GL_TEXTURE_2D);
gl.glDisable(GL2.GL_DEPTH_TEST);
gl.glDisable(GL2.GL_LIGHTING);
gl.glTexEnvf(GL2.GL_TEXTURE_ENV, GL2.GL_TEXTURE_ENV_MODE, GL2.GL_REPLACE);
gl.glMatrixMode(GL2.GL_PROJECTION);
gl.glPushMatrix();
gl.glLoadIdentity();
gl.glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0);
gl.glMatrixMode(GL2.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glViewport(0, 0, imageWidth, imageHeight);
gl.glBegin(GL2.GL_QUADS);
gl.glTexCoord2f(0.0f, 1.0f);
gl.glVertex2f(-1.0f, -1.0f);
gl.glTexCoord2f(1.0f, 1.0f);
gl.glVertex2f(1.0f, -1.0f);
gl.glTexCoord2f(1.0f, 0.0f);
gl.glVertex2f(1.0f, 1.0f);
gl.glTexCoord2f(0.0f, 0.0f);
gl.glVertex2f(-1.0f, 1.0f);
gl.glEnd();
gl.glMatrixMode(GL2.GL_PROJECTION);
gl.glPopMatrix();
gl.glDisable(GL2.GL_TEXTURE_2D);
3.5) The CUDA call:
public void runCuda(GLAutoDrawable drawable) {
    devOutput = new CUdeviceptr();

    // Map the OpenGL buffer to a resource and then obtain a CUDA pointer to that resource
    cuGraphicsMapResources(1, new CUgraphicsResource[]{bufferResource}, null);
    cuGraphicsResourceGetMappedPointer(devOutput, new long[1], bufferResource);

    // Set up the kernel parameters, making sure that the devOutput pointer is passed to the kernel
    Pointer kernelParams =
        .
        .
        .
        .
    int gridSize = (int) Math.ceil(imageWidth * imageHeight / (double) DESC_BLOCK_SIZE);
    cuLaunchKernel(function,
                   gridSize, 1, 1,
                   DESC_BLOCK_SIZE, 1, 1,
                   0, null,
                   kernelParams, null);
    cuCtxSynchronize();

    // Unmap the buffer so that it can be used by OpenGL again
    cuGraphicsUnmapResources(1, new CUgraphicsResource[]{bufferResource}, null);
}
PS: I thank Robert for providing the link to the sample. I also thank the people who downvoted my question without any useful feedback!

OpenGL 2D texture rendering too large, glViewport broken

I've written a small tiling game engine with OpenGL and C, and I can't seem to figure out what the problem is. My main loop looks like this:
void main_game_loop()
{
    (poll for events and respond to them)
    glClear(GL_COLOR_BUFFER_BIT);
    glPushMatrix();
    draw_block(WALL, 10, 10);
}
draw_block:
void draw_block(block b, int x, int y)
{
    (load b's texture from a hash and store it in GLuint tex)
    glPushMatrix();
    glTranslatef(x, y, 0);
    glBindTexture(GL_TEXTURE_2D, tex);
    glBegin(GL_QUADS);
    // BLOCK_DIM is 32, the width and height of the texture
    glTexCoord2i(0, 0); glVertex3f(0, 0, 0);
    glTexCoord2i(1, 0); glVertex3f(BLOCK_DIM, 0, 0);
    glTexCoord2i(1, 1); glVertex3f(BLOCK_DIM, BLOCK_DIM, 0);
    glTexCoord2i(0, 1); glVertex3f(0, BLOCK_DIM, 0);
    glEnd();
    glPopMatrix();
}
initialization function: (called before main_game_loop)
void init_gl()
{
    glViewport(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0, SCREEN_WIDTH, SCREEN_HEIGHT, 0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glClearColor(0, 0, 0, 0);
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);
    glDisable(GL_DEPTH_TEST);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
}
When run, this displays a black screen. However, if I remove the glViewport call, it seemingly displays the texture, but huge and in the corner of the window (see screenshot).
The texture IS being drawn correctly, because if I scale out by a huge factor, I can see the entire image. The y-axis also seems to be flipped from what I used in the gluOrtho2D call (discovered by making events add or subtract from the image's x/y coordinates; subtracting from the y coordinate moves the image downward). I'm starting to get frustrated, because this is the simplest possible example I can think of. I'm using SDL, and am passing SDL_OPENGL to SDL_SetVideoMode. What is going on here?
Looks like a problem with glViewport, but just to be sure, did you try clearing the color buffer to purple?
I've always thought of glViewport as a video/windowing function rather than part of OpenGL proper, because it mediates between the window manager and the OpenGL subsystem and works in window coordinates. As such, you should probably look at it alongside the other SDL video calls. I suggest updating the question with the full code, or at least the parts relevant to the video/window subsystem.
Or did you perhaps omit the glViewport call after a resize?
You should also try your code without SDL_FULLSCREEN and/or with a smaller window. I usually start with a 512x512 or 640x480 window until I get the viewport and some basic controls right.
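As a quick debugging sketch for the "clear to purple" suggestion above (not a fix, just a diagnostic):

glClearColor(1.0f, 0.0f, 1.0f, 1.0f); /* purple */
glClear(GL_COLOR_BUFFER_BIT);
/* If the window turns purple, the context and buffer swap work and the
   problem is in the projection/viewport; if it stays black, look at the
   SDL video setup instead. */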
The first two parameters of glViewport specify the lower-left corner of the viewport:
http://www.opengl.org/sdk/docs/man/xhtml/glViewport.xml
You can try
glViewport(0, SCREEN_HEIGHT, SCREEN_WIDTH, SCREEN_HEIGHT);
For gluOrtho2D, the parameters are left, right, bottom, top,
so I would probably use
gluOrtho2D(0, SCREEN_WIDTH, 0, SCREEN_HEIGHT);
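To make the two conventions concrete, here is a small sketch (using the question's parameter names; which gluOrtho2D form you want depends on whether the y-axis should grow up or down):

/* glViewport maps normalized device coordinates to window pixels;
   gluOrtho2D(left, right, bottom, top) decides which way y points. */
glViewport(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
/* y grows upward (OpenGL's default convention): */
gluOrtho2D(0, SCREEN_WIDTH, 0, SCREEN_HEIGHT);
/* ...or y grows downward (typical for 2D tile engines, and what the asker intended): */
/* gluOrtho2D(0, SCREEN_WIDTH, SCREEN_HEIGHT, 0); */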
