I'm trying to create a mesh of z = y^2 - x^2 for a uni assessment. I've created a matrix array that holds the squares I want to draw as GL_LINE_STRIPs, called squareMatrix[100]. What I want to know is how I can send that to a vertex shader and display it.
I'll put some code below that shows how I've set things up so far.
assessment.cpp
mat4 squareMatrix[100];
// this is in general how i fill the matrix
mat4 pseudo = mat4
(
vec4(1,1,1,1),
vec4(1,1,1,1),
vec4(1,1,1,1),
vec4(1,1,1,1)
);
// loop through and actually add to the squarematrix like
squareMatrix[0] = pseudo;
vshader.glsl
uniform mat4 mMatrix;
void
main()
{
for (int i = 0; i < 100; i++)
{
gl_Position = mMatrix[i];
}
}
Well, you get the gist of it. The matrix is set up fine; I thought I would just add it to clarify some things.
Well, you already have the array. So you point OpenGL to it by passing it as a Vertex Array (or you copy it into a Vertex Buffer Object, which is then referenced as a Vertex Array). Google those terms to understand what they do; they're part of every modern OpenGL tutorial. Then you make OpenGL draw from that data.
As it happens, I only recently added a minimal VBO example to my codesamples GitHub repository, in response to another StackOverflow question. It matches your question as well.
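To make that concrete, here is a minimal sketch (not the repository code; the grid size and names like gridVerts, vPosition, setup_mesh are made up for illustration) of filling a plain float array with the z = y^2 - x^2 surface, uploading it to a VBO, and drawing one GL_LINE_STRIP per grid row. It assumes a GL 2.0+ context and GLEW are already initialised and that the vertex shader declares "attribute vec4 vPosition;".

#include <GL/glew.h>

#define GRID_N 10                          /* 10 x 10 grid of strips (hypothetical size) */
#define VERTS_PER_STRIP (GRID_N + 1)

static GLuint vbo;
static GLfloat gridVerts[GRID_N * VERTS_PER_STRIP * 4];   /* x, y, z, w per vertex */

void setup_mesh(GLuint program)
{
    int i, j, k = 0;
    for (j = 0; j < GRID_N; j++) {
        for (i = 0; i <= GRID_N; i++) {
            GLfloat x = -1.0f + 2.0f * i / GRID_N;
            GLfloat y = -1.0f + 2.0f * j / GRID_N;
            gridVerts[k++] = x;
            gridVerts[k++] = y;
            gridVerts[k++] = y * y - x * x;                /* z = y^2 - x^2 */
            gridVerts[k++] = 1.0f;
        }
    }
    glGenBuffers(1, &vbo);                                 /* create and fill the VBO */
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(gridVerts), gridVerts, GL_STATIC_DRAW);

    GLint loc = glGetAttribLocation(program, "vPosition"); /* hook the data up to the shader */
    glEnableVertexAttribArray(loc);
    glVertexAttribPointer(loc, 4, GL_FLOAT, GL_FALSE, 0, 0);
}

void draw_mesh(void)
{
    int j;
    for (j = 0; j < GRID_N; j++)                           /* one line strip per grid row */
        glDrawArrays(GL_LINE_STRIP, j * VERTS_PER_STRIP, VERTS_PER_STRIP);
}

With the vertices coming in as an attribute, the vertex shader then only needs something like gl_Position = mMatrix * vPosition; instead of looping over an array.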
I'm trying to play around with image manipulation in C and I want to be able to read and write pixels on an SDL_Surface. (I'm loading a BMP onto a surface to get the pixel data.) I'm having some trouble figuring out how to use the following functions properly:
SDL_CreateRGBSurfaceFrom();
SDL_GetRGB();
SDL_MapRGB();
I have only found examples of these in C++ and I'm having a hard time implementing them in C because I don't fully understand how they work.
So my questions are:
How do you properly retrieve pixel data using SDL_GetRGB, and how is a pixel addressed with x, y coordinates?
What kind of array would I use to store the pixel data?
How do you use SDL_CreateRGBSurfaceFrom() to draw the new pixel data back to a surface?
Also, I want to access the pixels individually in a nested for loop over y and x, like so:
for(int y = 0; y < h; y++)
{
for (int x = 0; x < w; x++)
{
// get/put the pixel data
}
}
First have a look at SDL_Surface.
The parts you're interested in:
SDL_PixelFormat *format
int w, h
int pitch
void *pixels
What else you should know:
At position x, y (where x and y must be greater than or equal to 0 and less than w and h respectively) the surface contains a pixel.
The pixel is described by the format field, which tells us how the pixel is organized in memory.
The Remarks section of SDL_PixelFormat gives more information on the datatype used.
The pitch field is the length of one row of pixels in bytes: essentially the width of the surface multiplied by the size of a pixel (BytesPerPixel), possibly plus padding.
With the function SDL_GetRGB, one can easily convert a pixel of any format to an RGB(A) triple/quadruple.
SDL_MapRGB is the reverse of SDL_GetRGB, where one can specify a pixel as RGB(A) triple/quadruple to map it to the closest color specified by the format parameter.
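Putting these pieces together, here is a minimal sketch in C; the helpers get_pixel/put_pixel and the assumption of a 4-bytes-per-pixel surface are mine, not part of SDL, so adjust them to your surface format.

#include <SDL.h>

/* Assumes a surface with 4 bytes per pixel (e.g. a BMP converted with
 * SDL_ConvertSurface); for other BytesPerPixel values the cast changes. */
Uint32 get_pixel(SDL_Surface *s, int x, int y)
{
    /* pitch is the length of one row in bytes, so this addresses pixel (x, y) */
    Uint8 *p = (Uint8 *)s->pixels + y * s->pitch + x * s->format->BytesPerPixel;
    return *(Uint32 *)p;
}

void put_pixel(SDL_Surface *s, int x, int y, Uint32 pixel)
{
    Uint8 *p = (Uint8 *)s->pixels + y * s->pitch + x * s->format->BytesPerPixel;
    *(Uint32 *)p = pixel;
}

/* Example: walk every pixel in the requested nested loop and invert it */
void invert_surface(SDL_Surface *s)
{
    int x, y;
    Uint8 r, g, b;
    SDL_LockSurface(s);                       /* required for some surface types */
    for (y = 0; y < s->h; y++) {
        for (x = 0; x < s->w; x++) {
            SDL_GetRGB(get_pixel(s, x, y), s->format, &r, &g, &b);
            put_pixel(s, x, y, SDL_MapRGB(s->format, 255 - r, 255 - g, 255 - b));
        }
    }
    SDL_UnlockSurface(s);
}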
The SDL wiki provides many examples for these specific functions; I think you will find what you need there to solve your problem.
It's very complicated for me to explain the problem, but I will try my best.
I am making a game. There is an area of game objects and a canvas that draws every object in that area using a "draw_from" function, void draw_from(const char *obj, int x, int y, double scale), so that it looks as if a copy of that area is made on-screen.
This gives the advantage of scaling that area using the scale parameter of the draw_from() function.
However, a problem occurs when doing so. For simplicity, imagine there are just two actors in that area, one right above the other.
When they are scaled down, they appear at different vertical positions, further from each other.
I need to calculate the new correct positions for each of the objects and pass them to draw_from, but I just can't seem to figure out how. What is the correct way to recalculate the new positions if each of those objects is scaled down by the same value?
The draw_from function draws the object centered on the x/y coordinates, so to draw an object at 0:0 (the top-left corner) you must call draw_from(obj, obj->width/2, obj->height/2, 1.0);. I'm not sure exactly how the scaling is implemented, but I created a function to obtain the new width and height of the scaled object:
void character_draw_get_scaled_dimensions (Actor* srcActor, double scale, double* sWidth, double* sHeight)
{
double sCharacterWidth = 0;
double sCharacterHeight = 0;
if(srcActor->width >= srcActor->height)
{
sCharacterWidth = (double)srcActor->width * scale / 1.0;
sCharacterHeight = sCharacterWidth * (double)srcActor->height / (double)srcActor->width;
}
else
{
sCharacterHeight = (double)srcActor->height * scale / 1.0;
sCharacterWidth = sCharacterHeight * (double)srcActor->width / (double)srcActor->height;
}
if(sWidth)
(*sWidth) = sCharacterWidth;
if(sHeight)
(*sHeight) = sCharacterHeight;
}
In other words, I need to maintain the distances between those objects across down-scales; above I explained how draw_from works and, roughly, how its scaling works.
I need the correct parameters to pass to the draw_from's x and y arguments.
From that point, I think it will get just too broad if I continue elaborating further.
Not the solution I hoped for, but it is still a solution.
The more hacky and less practical (including performance-wise) solution is to draw every object on an offscreen canvas with a scale of 1.0, then draw from that canvas to the main canvas at whatever scale is desired.
That way only the canvas has to be repositioned, not every object, and it gets really easy from there. I would still prefer the proper mathematical solution.
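For the mathematical route, the usual approach is to scale every object's position about one common pivot point (for example the centre of the area) by the same factor used for the sprites, which keeps the relative distances consistent. A rough sketch: the Actor fields x, y and name are assumptions (only width and height appear in the question), and the pivot is whatever point the area should shrink towards.

/* Hypothetical helper: scale an actor's centre position about a pivot point
 * and hand the result to draw_from. Scaling position and sprite by the same
 * factor preserves the layout at every scale. */
void draw_actor_scaled(Actor *a, double pivotX, double pivotY, double scale)
{
    double sx = pivotX + (a->x - pivotX) * scale;   /* new centre x */
    double sy = pivotY + (a->y - pivotY) * scale;   /* new centre y */
    draw_from(a->name, (int)sx, (int)sy, scale);
}

With scale = 1.0 this reduces to the original positions; as the scale shrinks, every object converges towards the pivot instead of drifting apart.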
I'm trying to rotate a 2D pixel matrix, but nothing actually happens.
My source image is stored in bitmap[w x h x 3].
Why isn't the displayed image being rotated?
Here's the display function:
void display()
{
uint32_t i = 0,j = 0,k = 0;
unsigned char pixels[WINDOW_WIDTH * WINDOW_HEIGHT * 3];
memset(pixels, 0, sizeof(pixels));
for(j = bitmap_h -1; j > 0; j--) {
for(i = 0; i < bitmap_w; i++) {
pixels[k++]=bitmap[j][i].r;
pixels[k++]=bitmap[j][i].g;
pixels[k++]=bitmap[j][i].b;
}
}
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glRotatef(90,0,0,1);
glDrawPixels(g_img.descriptor.size_w, g_img.descriptor.size_h, GL_RGB, GL_UNSIGNED_BYTE, &pixels);
glutSwapBuffers();
}
First and foremost: glDrawPixels should not be used, and the problem you have is one of the reasons why. The convoluted rules by which glDrawPixels operates are too vast to outline here; let's just say that there is a so-called "raster position" in your window, at which glDrawPixels will place the lower-left corner of the image it draws. No transformation whatsoever is applied to the image itself.
However, transformations are applied when the raster position is set. And should the raster position, for whatever reason, lie outside the visible window, nothing will get drawn at all.
Solution: Don't use glDrawPixels. I repeat: don't use glDrawPixels. It's best you completely forget that this function even exists in legacy OpenGL.
Use a textured quad instead. That will also transform properly.
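A minimal sketch of that replacement in old-style (fixed-function) OpenGL, assuming the same pixels buffer and window size as in the posted display(); the function names are illustrative, and texture creation would normally happen once at init rather than every frame:

#include <GL/gl.h>

static GLuint tex;

/* Upload the raw RGB bytes as a texture instead of glDrawPixels-ing them */
void upload_texture(const unsigned char *pixels, int w, int h)
{
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);
}

/* Draw a quad covering the view; the modelview rotation now transforms it */
void draw_rotated_quad(float angle_deg)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex);
    glLoadIdentity();
    glRotatef(angle_deg, 0, 0, 1);            /* this actually rotates the image */
    glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2f(-1, -1);
        glTexCoord2f(1, 0); glVertex2f( 1, -1);
        glTexCoord2f(1, 1); glVertex2f( 1,  1);
        glTexCoord2f(0, 1); glVertex2f(-1,  1);
    glEnd();
    glDisable(GL_TEXTURE_2D);
}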
I did something similar. I'm creating a 3D space shooter game using OpenGL/C++. For one of my levels, I have a bunch of asteroids/rocks in the background each rotating and moving at a random speed.
I did this by taking the asteroid bitmap image and creating a texture. Then I applied the texture to a square (glBegin(GL_QUADS)). Each time I draw the square, I multiply each of the vertex coordinates (glVertex3f(x, y, z)) with a rotation matrix.
| cos θ   -sin θ |
| sin θ    cos θ |
θ is the rotation angle. I store this angle as part of my Asteroid class. Each iteration I increment it by a value, depending on how fast I want the asteroid to spin. It works great.
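A small sketch of that per-vertex rotation in C; the function name and the unit-quad corners are illustrative, not taken from the answer above:

#include <math.h>

/* Multiply a corner (x, y) of the quad by the 2x2 rotation matrix above
 * before handing it to glVertex3f. */
void rotate_point(float x, float y, float theta, float *rx, float *ry)
{
    *rx = x * cosf(theta) - y * sinf(theta);
    *ry = x * sinf(theta) + y * cosf(theta);
}

/* Usage when drawing a unit quad centred on the origin:
 *   float rx, ry;
 *   rotate_point(-0.5f, -0.5f, asteroid_theta, &rx, &ry);
 *   glVertex3f(rx, ry, 0.0f);
 *   ...and likewise for the other three corners. */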
In the book Learning OpenCV there's a question in chapter 3:
Create a two dimensional matrix with three channels of type byte with data size 100-by-100 and initialize all the values to 0.
Use the pointer element access function cvPtr2D to point to the middle 'green' channel. Draw the rectangle between (20,5) and (40,20).
I've managed to do the first part, but I can't get my head around the second part. Here's what I've done so far:
/*
Create a two dimensional matrix with three channels of type byte with data size 100- by-100 and initialize all the values to 0.
Use the pointer element access function cvPtr2D to point to the middle 'green' channel. Draw the rectangle between (20,5) and (40,20).
*/
void ex10_question3(){
CvMat* m = cvCreateMat(100,100,CV_8UC3);
CvSetZero(m); // initialize to 0.
uchar* ptr = cvPtr2D(m,0,1); // if RGB, then start from first RGB pair, Green.
cvAdd(m,r);
cvRect r(20,5,20,15);
//cvptr2d returns a pointer to a particular row element.
}
I was considering adding both the rect and the matrix, but obviously that won't work, because a rect is just coordinates plus a width/height. I'm unfamiliar with cvPtr2D(). How can I visualise what the exercise wants me to do, and can anyone give me a hint in the right direction? The solution must be in C.
From my understanding, with interleaved RGB channels the 2nd channel will always be the channel of interest (array indices 1, 4, 7, ...).
So that's the direction the wind blows from...
First of all, the problem is the C API. This API is still present for legacy reasons, but will soon become obsolete. If you are serious about OpenCV, please refer to the C++ API. The official tutorials are a great source of information.
To address your question more directly, this would be an implementation of it in C++:
cv::Mat canvas = cv::Mat::zeros(100, 100, CV_8UC3); // create matrix of bytes, filled with 0
std::vector<cv::Mat> channels(3);                   // prepare storage for splitting
cv::split(canvas, channels);                        // split matrix into single channels
cv::rectangle(channels[1], ...);                    // draw rectangle [I don't remember exact params]
cv::merge(channels, canvas);                        // merge the channels together
If you only need to draw green rectangle, it's actually much easier:
cv::Mat canvas = cv::Mat::zeros(100, 100, CV_8UC3);   // create matrix of bytes, filled with 0
cv::rectangle(canvas, ..., cv::Scalar(0, 255, 0));    // draw green rectangle
Edit:
To find out how to access single pixels in image using C++ API please refer to this answer:
https://stackoverflow.com/a/8139210/892914
Try this code:
cout<<"Chapter 3. Task 3."<<'\n';
CvMat *Mat=cvCreateMat(100, 100, CV_8UC3);
cvZero(Mat);
for(int J=5; J<=20; J++)        /* rows y = 5..20 */
    for(int I=20; I<=40; I++)   /* columns x = 20..40 */
        (*(cvPtr2D(Mat, J, I)+1))=(uchar)(255);   /* +1 selects the green channel */
cvNamedWindow("Chapter 3. Task 3", CV_WINDOW_FREERATIO);
cvShowImage("Chapter 3. Task 3", Mat);
cvWaitKey(0);
cvReleaseMat(&Mat);
cvDestroyAllWindows();
I'm new to WebGL and I'm facing some problems with the shaders. I want to have multiple light sources in the scene. I read online that in WebGL you can't pass an array into the fragment shader, so the only way is to use a texture. Here is the problem I can't figure out.
First, I create a 32x32 texture using the following code:
var pix = [];
for(var i=0;i<32;i++)
{
for(var j=0;j<32;j++)
pix.push(0.8,0.8,0.1);
}
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, lightMap);
gl.pixelStorei(gl.UNPACK_ALIGNMENT,1);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGB, 32,32,0, gl.RGB, gl.UNSIGNED_BYTE,new Float32Array(pix));
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.uniform1i(g_loader.program.set_uniform["u_texture2"],0);
However, when I try to access the texture in the shader:
[Fragment Shader]
varying vec2 v_texcoord;
uniform sampler2D u_texture2;
void main(void)
{
vec3 lightLoc = texture2D(u_texture, v_texcoord).rgb;
gl_FragData[0] = vec4(lightLoc,1.0);
}
The result is totally black. Does anyone know how to access or create the texture correctly?
Intuitively, one would be tempted to implement multiple light sources doing something like this:
uniform int NUM_LIGHTS;
uniform vec3 uLa[NUM_LIGHTS];
But WebGL gives you an error like this:
ERROR: 0:12: "constant expression required"
ERROR: 0:12: "array size must be a constant integer expression"
Nonetheless, you actually can pass uniform arrays to the Fragment Shader to represent multiple light sources. The only caveat is that you need to know beforehand the size that these arrays will have. For example:
const int NUM_LIGHTS = 5;
uniform vec3 uLa[NUM_LIGHTS]; //ambient
uniform vec3 uLd[NUM_LIGHTS]; //diffuse
uniform vec3 uLs[NUM_LIGHTS]; //specular
is correct. Also, you need to make sure that you pass a flat array on the JavaScript side. So instead of doing this:
var ambientLightArray = [[0.1,0.1,0.1],[0.1,0.1,0.1],...]
do this:
var ambientLightArray = [0.1,0.1,0.1,0.1,0.1,0.1,..]
Then you do:
var location = gl.getUniformLocation(prg,"uLa");
gl.uniform3fv(location, ambientLightArray);
Once you set up your arrays with a predefined size you can do things like this:
//Example: Calculate the ambient and diffuse contributions from all lights to the current fragment
//vLightRay[] and vNormal are varyings calculated in the Vertex Shader
//uKa and uKd are material properties (ambient and diffuse)
vec3 COLOR = vec3(0.0,0.0,0.0);
vec3 N = normalize(vNormal);
for(int i = 0; i < NUM_LIGHTS; i++){
vec3 L = normalize(vLightRay[i]);
COLOR += (uLa[i] * uKa) + (uLd[i] * uKd * clamp(dot(N, -L),0.0,1.0));
}
gl_FragColor = vec4(COLOR,1.0);
I hope this is helpful.
You are calling glTexImage2D with a type of GL_UNSIGNED_BYTE, but then you give it an array of floats (Float32Array). According to the specification, this causes a GL_INVALID_OPERATION error.
You should rather transform your positions from [0,1] floats to integers in the [0,255] range and use a Uint8Array. Unfortunately this loses some precision, and all your positions need to be in the [0,1] range (or at least some fixed range into which you later transform the [0,1] values you read back from the texture). But I seem to remember that WebGL doesn't support floating-point textures at the moment.
EDIT: According to the link in your comment, WebGL does indeed seem to support floating-point textures. So using a type of GL_FLOAT and a Float32Array should work, too. But in this case you have to make sure your hardware also supports floating-point textures (roughly GeForce 6 and later) and that your WebGL implementation supports the OES_texture_float extension.
You may also try to set the filter modes to GL_NEAREST, as older hardware may not support linearly filtered floating point textures. And as you want to use the texture as a simple array anyway, you shouldn't need any interpolation.
Note that in WebGL, contrary to OpenGL, you have to explicitly call getExtension before you can use an extension, like OES_texture_float. And then you want to pass gl.FLOAT as the type parameter to texImage2D.
The texImage2D function expects an image as its parameter, not an array. You should write your texture to a Canvas and then use the canvas image as the texImage2D parameter.
Check out this link:
http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#pixel-manipulation