Give positions for a sphere drawn using gluSphere() - C

Here's my code, but my sphere always stays at the origin; my glTranslatef() call doesn't change the position of the sphere. Please explain what's going wrong.
glColor3f(1,0,0);
GLUquadric *quad;
quad = gluNewQuadric();
gluSphere(quad,25,100,20);
glTranslatef(2,2,2);

You're drawing the sphere before doing the translation, so of course the translation has no effect.
Move your glTranslatef() call above gluSphere():
glColor3f(1,0,0);
GLUquadric *quad;
quad = gluNewQuadric();
glTranslatef(2,2,2);
gluSphere(quad,25,100,20);
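If you draw more than one object, the usual pattern is to wrap each object's transform in glPushMatrix()/glPopMatrix() so the translations don't accumulate. A minimal sketch (the positions are arbitrary, chosen only for illustration):
glColor3f(1, 0, 0);
GLUquadric *quad = gluNewQuadric();
glPushMatrix();               /* save the current model-view matrix */
glTranslatef(2, 2, 2);        /* position the first sphere */
gluSphere(quad, 25, 100, 20);
glPopMatrix();                /* restore, so the next object starts fresh */
glPushMatrix();
glTranslatef(-2, 0, -5);      /* a second sphere somewhere else */
gluSphere(quad, 25, 100, 20);
glPopMatrix();
gluDeleteQuadric(quad);       /* free the quadric when done */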
(Also note that the GLU library is quite old and deprecated; you should probably avoid it in new code.)

Related

Cube only displays if I call glPopMatrix after glEnd (not the other way around)

I am encountering a very weird issue in OpenGL. The following code produces a yellow cube as expected:
glPushMatrix();
glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, (GLfloat[]){ 1, 1, 0, 1 });
glBegin(GL_LINE_LOOP);
glutSolidCube(1);
glPopMatrix();
glEnd();
However, when I put glPopMatrix() after glEnd(), I just get a black screen with no cube.
glPushMatrix();
glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, (GLfloat[]){ 1, 1, 0, 1 });
glBegin(GL_LINE_LOOP);
glutSolidCube(1);
glEnd();
glPopMatrix();
To me the second approach makes more sense (push, begin, end, then pop), and I really have no idea why it does not work. Any help is appreciated, thanks!
The problem is not the placement of the glPopMatrix() call. The problem is with glBegin() and glEnd(). Remove them; glutSolidCube() already submits its own geometry.
If you're using FreeGLUT, glutSolidCube() won't even be using glBegin() and glEnd(); it will be using vertex arrays under the hood. So, to put it simply, you're probably just confusing your driver, and that's why you're getting the weird result.
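In other words, the corrected sequence is simply the original code with the glBegin()/glEnd() pair dropped:
glPushMatrix();
glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, (GLfloat[]){ 1, 1, 0, 1 });
glutSolidCube(1);
glPopMatrix();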

OpenGL model, view, projection matrices

I am trying to understand matrix-based cameras in OpenGL.
I've written a simple shader that looks like this:
#version 330 core
layout (location = 0) in vec3 a_pos;
layout (location = 1) in vec4 a_col;
uniform mat4 u_mvp_mat;
uniform mat4 u_mod_mat;
uniform mat4 u_view_mat;
uniform mat4 u_proj_mat;
out vec4 f_color;
void main()
{
vec4 v = u_mvp_mat * vec4(0.0, 0.0, 1.0, 1.0);
gl_Position = u_mvp_mat * vec4(a_pos, 1.0);
//gl_Position = u_proj_mat * u_view_mat * u_mod_mat * vec4(a_pos, 1.0);
f_color = a_col;
}
It's a bit verbose, but that's because I am testing both passing in the model, view and projection matrices separately and multiplying them on the GPU, and multiplying them on the CPU, passing in the MVP matrix, and just doing the mvp * position multiplication in the shader.
I understand that the latter can offer a performance increase, but drawing one quad I don't really see any performance issues at this point.
Right now I use this code to get the locations from my shader and create the model view and projection matrices.
pos_loc = get_attrib_location(ce_get_default_shader(), "a_pos");
col_loc = get_attrib_location(ce_get_default_shader(), "a_col");
mvp_matrix_loc = get_uniform_location(ce_get_default_shader(), "u_mvp_mat");
model_mat_loc = get_uniform_location(ce_get_default_shader(), "u_mod_mat");
view_mat_loc = get_uniform_location(ce_get_default_shader(), "u_view_mat");
proj_matrix_loc = get_uniform_location(ce_get_default_shader(), "u_proj_mat");
float h_w = (float)ce_get_width() * 0.5f; //width = 320
float h_h = (float)ce_get_height() * 0.5f; //height = 480
model_mat = mat4_identity();
view_mat = mat4_identity();
proj_mat = mat4_identity();
point3* eye = point3_new(0, 0, 0);
point3* center = point3_new(0, 0, -1);
vec3* up = vec3_new(0, 1, 0);
mat4_look_at(view_mat, eye, center, up);
mat4_translate(view_mat, h_w, h_h, -20);
mat4_ortho(proj_mat, 0, ce_get_width(), 0, ce_get_height(), 1, 100);
mat4_scale(model_mat, 30, 30, 1);
mvp_mat = mat4_identity();
After this I set up my VAO and VBOs, then get ready to render.
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glUseProgram(ce_get_default_shader()->shader_program);
glBindVertexArray(vao);
mvp_mat = mat4_multi(mvp_mat, view_mat, model_mat);
mvp_mat = mat4_multi(mvp_mat, proj_mat, mvp_mat);
glUniformMatrix4fv(mvp_matrix_loc, 1, GL_FALSE, mat4_get_data(mvp_mat));
glUniformMatrix4fv(model_mat_loc, 1, GL_FALSE, mat4_get_data(model_mat));
glUniformMatrix4fv(view_mat_loc, 1, GL_FALSE, mat4_get_data(view_mat));
glUniformMatrix4fv(proj_matrix_loc, 1, GL_FALSE, mat4_get_data(proj_mat));
glDrawElements(GL_TRIANGLES, quad->vertex_count, GL_UNSIGNED_SHORT, 0);
glBindVertexArray(0);
Assuming that all the matrix math is correct, I would like to abstract the view and projection matrices out into a camera struct, and the model matrix into a sprite struct, so that I can avoid all this matrix math and make things easier to use.
The matrix multiplication order is:
Projection * View * Model * Vector
so the camera would hold the projection and view matrices while the sprite holds the model matrix.
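For illustration, here is a minimal sketch of what those structs could look like in C. The struct names, the mat4 type and the accessors are assumptions modeled on the question's math library, not code from the question; the accessors match the pseudo code below:
typedef struct {
    mat4 *proj_mat; /* projection matrix owned by the camera */
    mat4 *view_mat; /* view matrix owned by the camera */
} camera;
typedef struct {
    mat4 *model_mat; /* the sprite's local-to-world transform */
} sprite;
mat4 *camera_get_proj_mat(camera *c) { return c->proj_mat; }
mat4 *camera_get_view_mat(camera *c) { return c->view_mat; }
mat4 *sprite_get_transform_mat(sprite *s) { return s->model_mat; }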
Do all your camera transformations and all your sprite transformations first; then, right before you send the data to the GPU, do the matrix multiplications.
Matrix multiplication isn't commutative, so multiplying in the order view * projection * model would produce the wrong matrix.
pseudo code
glClearxxx(....);
glUseProgram(..);
glBindVertexArray(..);
mvp_mat = mat4_identity();
proj_mat = camera_get_proj_mat();
view_mat = camera_get_view_mat();
mod_mat = sprite_get_transform_mat();
mat4_multi(mvp_mat, view_mat, mod_mat); //mvp holds view * model
mat4_multi(mvp_mat, proj_mat, mvp_mat); //mvp holds proj * view * model
glUniformMatrix4fv(mvp_matrix_loc, 1, GL_FALSE, mat4_get_data(mvp_mat));
glDrawElements(...);
glBindVertexArray(0);
Is that a performant way to go about doing this that is scalable?
Is that a performant way to go about doing this that is scalable?
Yes, unless you have a very exotic use case of some sort which is very unlike the norm.
The last thing you should typically worry about is the performance of retrieving the modelview and projection matrices from a camera.
Those matrices typically only need to be fetched once per frame per viewport. There are millions of iterations' worth of other work that can occur in a frame while rasterizing primitives, and pulling matrices out of a camera is just a simple constant-time operation.
So typically you want to just make it as convenient as you like. In my case, I go all the way through an abstract interface of function pointers in a central SDK, at which point the functions then compute the proj/mv/ti_mv matrix on the fly out of user-defined properties associated with the camera. In spite of this, it never shows up as a hotspot -- it doesn't even show up in the profiler at all.
There are far more expensive things to worry about. Scalability implies scale -- and the complexity of retrieving matrices from a camera doesn't scale. The number of triangles, quads, lines or other primitives you render can scale; the number of fragments processed in a fragment shader can scale. Cameras typically don't scale except with respect to the number of viewports, and no one should ever need a million viewports.
I haven't checked it bit by bit, but what you're doing generally looks OK.
I would like to abstract view and projection matrix out into a camera struct
That's a most appropriate idea; I can hardly imagine a serious GL application without such an abstraction.
Is that a performant way to go about doing this that is scalable?
General constraints on scalability are:
diffuse and specular BRDFs, which also require a light uniform, a normal attribute and the calculation of a normal matrix (if the scaling of the model is non-uniform), and need per-pixel illumination for quality rendering
the same with multiple lights (e.g. the sun and a close spotlight)
shadow maps! (one for each light source?)
transparency
reflections (mirrors, glass, water)
textures
As you can see from the list, you will not get very far with just an MVP uniform and a vertex coordinate attribute.
But the mere number of uniforms is by far not the most crucial point for performance -- seeing your code, I'm confident that you will not recompile your shaders unnecessarily, will update your uniforms only when needed, will use Uniform Buffer Objects, etc.
The issue is the data that is plugged into those uniforms and VBOs. Or not.
Consider humanoid mesh "Alice" running (that's a mesh morph + translation) across a city square on a windy (water will have ripples) evening (more than one relevant light source), passing a fountain.
Let's assume we compute it all on the CPU, old-school, and only plug ready-to-render data into the shaders:
Alice's mesh is morphed, thus her VBOs need an update
Alice's mesh will move; thus all affected shadow maps will need an update (OK, granted, they are generated by shadow passes on the GPU, but if you do it the wrong way you will shove a lot of data around)
Alice's reflection in the fountain will come and go
Alice's hair will be swirled - the CPU may have quite a busy time, to say the least
(in fact the latter is so difficult that you will hardly ever see halfway-realistic real-time animation of long, open hair, but amazingly (no, not really) many ponytails and short haircuts)
And we've not yet talked about Alice's attire; let's just hope she's wearing a t-shirt and jeans (not a wide shirt and a skirt, which would require fall-of-the-folds and collision calculations).
As you may have guessed, that old-school approach doesn't take us far, and thus a fit has to be found between CPU and GPU operations.
In addition, one should think about parallelizing calculations at an early stage. It is advantageous to keep the data as flat as possible, in chunks as large as reasonable, so one just puts a pointer and a size into a GL call and bids that data farewell without any copying, re-arranging, looping or further ado.
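As a concrete illustration of the pointer-and-size idea, here is a minimal sketch in C; the interleaved vertex layout is an assumption made for the example, not something from the question:
/* One flat, interleaved chunk: position (x, y, z) and color (r, g, b, a) per vertex. */
typedef struct { float pos[3]; float col[4]; } vertex;
vertex quad_verts[4] = {
    {{-1, -1, 0}, {1, 0, 0, 1}}, {{ 1, -1, 0}, {0, 1, 0, 1}},
    {{ 1,  1, 0}, {0, 0, 1, 1}}, {{-1,  1, 0}, {1, 1, 1, 1}},
};
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
/* Hand the whole chunk to the driver in a single call -- pointer + size,
   with no per-vertex copying, re-arranging or looping on our side. */
glBufferData(GL_ARRAY_BUFFER, sizeof(quad_verts), quad_verts, GL_STATIC_DRAW);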
That's my 2 cents of wisdom for today about GL performance and scalability.

Openlayers 3 Circle radius in meters

How to get Circle radius in meters
Maybe this is an existing question, but I am not getting the proper result. I am trying to create a Polygon in PostGIS with the same radius and center that I get from the OpenLayers circle.
To get radius in meters I followed this.
Running example link.
var radiusInMeters = circleRadius * ol.proj.METERS_PER_UNIT['m'];
After getting the center and radius (in meters), I am trying to generate a Polygon (WKT) with PostGIS (a server-side job) and drawing that feature on the map like this:
select st_astext(st_buffer('POINT(79.25887485937808 17.036647682474722 0)'::geography, 365.70644956827164));
But the two do not cover the same area. Can anybody please let me know where I am going wrong?
Basically my input/output to/from the Circle will be in meters only.
ol.geom.Circle might not represent a circle
OpenLayers Circle geometries are defined on the projected plane. This means that they are always circular on the map, but the area covered might not represent an actual circle on earth. The actual shape and size of the area covered by the circle will depend on the projection used.
This can be visualized by Tissot's indicatrix, which shows how circular areas on the globe are transformed when projected onto a plane. OpenLayers 3's Tissot example shows this for the projection EPSG:3857, displaying areas that all have a radius of 800,000 meters. If these circles were drawn as ol.geom.Circle with a radius of 800000 (using EPSG:3857), they would all be the same size on the map, but the ones closer to the poles would represent a much smaller area of the globe.
This is true for most things with OpenLayers geometries. The radius, length or area of a geometry are all reported in the projected plane.
So if you have an ol.geom.Circle, getting the actual surface radius will depend on the projection and the feature's location. For some projections (such as EPSG:4326), there is no accurate answer, since the geometry might not even represent a circular area.
However, assuming you are using EPSG:3857 and not drawing extremely big circles or very close to the poles, the Circle will be a good representation of a circular area.
ol.proj.METERS_PER_UNIT
ol.proj.METERS_PER_UNIT is just a conversion table between meters and some other units. ol.proj.METERS_PER_UNIT['m'] will always return 1, since the unit 'm' is meters. EPSG:3857 uses meters as units, but as noted they are distorted towards the poles.
Solution (use after reading and understanding the above)
To get the actual on-the-ground radius of an ol.geom.Circle, you must find the distance between the center of the circle and a point on its edge. This can be done using ol.Sphere:
var center = geometry.getCenter();
var radius = geometry.getRadius();
var edgeCoordinate = [center[0] + radius, center[1]];
var wgs84Sphere = new ol.Sphere(6378137);
var groundRadius = wgs84Sphere.haversineDistance(
ol.proj.transform(center, 'EPSG:3857', 'EPSG:4326'),
ol.proj.transform(edgeCoordinate, 'EPSG:3857', 'EPSG:4326')
);
More options
If you wish to add a geometry representing a circular area on the globe, you should consider using the method used in the Tissot example above. That is, defining a regular polygon with enough points to appear smooth. That would make it transferable between projections, and appears to be what you are doing server side. OpenLayers 3 enables this by ol.geom.Polygon.circular:
var circularPolygon = ol.geom.Polygon.circular(wgs84Sphere, center, radius, 64);
There is also ol.geom.Polygon.fromCircle, which takes an ol.geom.Circle and transforms it into a Polygon representing the same area.
My answer is a complement to the great answer by Alvin.
Imagine you want to draw a circle of a given radius (in meters) around a point feature. In my particular case, a 200m circle around a moving vehicle.
If this circle has a small diameter (less than a few kilometers), you can ignore the earth's curvature. Then you can use the "Circle" image style in the style function of your point feature.
Here is my style function:
private pointStyle(feature: Feature, resolution: number): Array<Style> {
const viewProjection = map.getView().getProjection();
const coordsInViewProjection = (<Point>(feature.getGeometry())).getCoordinates();
const longLat = toLonLat(coordsInViewProjection, viewProjection);
const latitude_rad = longLat[1] * Math.PI / 180.;
const circle = new Style({
image: new CircleStyle({
stroke: new Stroke({color: '#7c8692'}),
radius: this._circleRadius_m / (resolution / viewProjection.getMetersPerUnit() * Math.cos(latitude_rad)),
}),
});
return [circle];
}
The trick is to scale the radius by the cosine of the latitude. In EPSG:3857, distances on the map are stretched by a factor of 1/cos(latitude), so the on-the-ground meters per pixel are resolution * cos(latitude); dividing the radius in meters by that gives the radius in pixels. This "locally" cancels the distortion effect seen in the Tissot example.

OpenGL - Nothing appearing on screen

I'm using Windows 7 with VC++ 2010
I'm trying to draw a simple point to a screen but it's not showing.
The screen is clearing to black, so I know that I have a valid OpenGL context, etc.
Basically my OpenGL code boils down to this (I don't have a depth buffer at this point):
glClear( GL_COLOR_BUFFER_BIT );
glMatrixMode( GL_PROJECTION );
glLoadIdentity();
gluPerspective( 45.0, 1018.0 / 743.0, 5.0, 999.0 );
glMatrixMode( GL_MODELVIEW );
glLoadIdentity();
glColor4f( 1, 1, 1, 1 );
glPointSize( 100 );
glBegin( GL_POINTS );
glVertex2i( 0, 0 );
glEnd();
SwapBuffers( hdc );
The initialization code for OpenGL is this:
glClearColor( 0, 0, 0, 1 );
glShadeModel( GL_SMOOTH );
glHint( GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST );
The problem is that nothing appears on the screen, the only thing that happens is the screen gets cleared.
Go through the following checklist (the general OpenGL checklist from delphigl.com (de_DE), which we usually give people to work through when they don't see anything):
Is your object accidentally painted in black? Try changing the glClearColor.
Do you have texturing enabled accidentally? Disable it before drawing with glDisable(GL_TEXTURE_2D).
Try disabling the following tests:
GL_DEPTH_TEST
GL_CULL_FACE
GL_ALPHA_TEST
Check whether your glViewport is set up correctly.
Try translating your model-view matrix out of the near clipping plane (5.0 in your case) with glTranslatef(0, 0, -6.0).
There are several potential issues. The main problem will be how you are using the gluPerspective projection. gluPerspective sets up a perspective view, and as such it won't display anything at (0, 0, 0) in view coordinates. With your setup, nothing closer than 5 units in front of the camera (the near clipping plane) is displayed, and the default view looks down the negative z axis. I suggest setting your point to glVertex3f(0.0f, 0.0f, -10.0f) and trying again. Another solution is to use glTranslatef to move your view coordinates back by more than 5 units.
Also, glPointSize will probably not accept your value of 100, as common implementations limit the point size to 64.
For a good start with OpenGL, I'd also recommend reading NeHe's tutorials. They might not be state of the art, but they cover everything you're facing right now.
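Putting those suggestions together, a minimal sketch of a corrected draw path (assuming the rest of the setup is unchanged):
glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0, 1018.0 / 743.0, 5.0, 999.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -10.0f); /* move the point past the near plane at 5.0 */
glColor4f(1, 1, 1, 1);
glPointSize(32);                  /* stay well inside common implementation limits */
glBegin(GL_POINTS);
glVertex2i(0, 0);
glEnd();
SwapBuffers(hdc);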
The problem was that I had called glDepthRange while misunderstanding what it actually does: I was calling it as glDepthRange(nearPlane, farPlane) (which was 5.0f and 999.0f), but glDepthRange expects normalized window-space depths in [0, 1], not eye-space distances. When I removed this call, everything drew correctly. Thank you very much for your help. :)

R - Contour map

I have plotted a contour map, but I need to make some improvements. This is the structure of the data used:
str(lon_sst)
# num [1:360(1d)] -179.5 -178.5 -177.5 -176.5 -175.5 ...
str(lat_sst)
# num [1:180(1d)] -89.5 -88.5 -87.5 -86.5 -85.5 -84.5 -83.5 -82.5 -81.5 -80.5 ...
dim(cor_Houlgrave_SF_SST_JJA_try)
# [1] 360 180
require(maps)
maps::map(database="world", fill=TRUE, col="light blue")
maps::map.axes()
contour(x=lon_sst, y=lat_sst, z=cor_Houlgrave_SF_SST_JJA_try[c(181:360, 1:180),],
zlim=c(-1,1), add=TRUE)
par(ask=TRUE)
filled.contour(x = lon_sst, y=lat_sst,
z=cor_Houlgrave_SF_SST_JJA_try[c(181:360, 1:180),],
zlim=c(-1,1), color.palette=heat.colors)
Because most of the correlations are close to 0, it is very hard to see the big ones.
1. Can I make them easier to see, or can I change the resolution so I can zoom in? At the moment the contours are too tightly spaced, so I can't read the contour levels.
2. Where can I see the increment? I set my range to (-1,1), but I don't know how to set the interval manually.
3. Can someone tell me how to plot a specific region of the map, like longitude from 100 to 160 and latitude from -50 to -80? I have tried replacing lon_sst and lat_sst, but I get a dimension error. Thanks.
To answer 1 and 3, which appear to be the same request, try:
maps::map(database="world", fill=TRUE, col="light blue",
ylim=c(-80, -50), xlim=c(100,160) )
To address 2: you have a much smaller range than [-1,1]. The labels on those contour lines are numbers like 0.06, -0.02 and 0.02. The contour function accepts either an 'nlevels' or a 'levels' argument. Once you have a blown-up section, you can use those to adjust the z-resolution of the contours.
contourplot in the lattice package can also produce these types of contour plots, and it makes it easy to combine contour lines and fill colours. This may or may not suit your needs, but by filling contour intervals you can do away with the text labels, which can get a little crowded if you want high-resolution contours.
I don't have your sea surface temperature data, so the following figure uses dummy data, but you should get something similar. See ?contourplot and ?panel.levelplot for possible arguments.
For your desired small scale plot, overlaying the world map plot is probably inappropriate, especially considering that the area of interest is in the ocean.
library(lattice)
contourplot(cor_Houlgrave_SF_SST_JJA_try, region=TRUE, at=seq(-1, 1, 0.25),
labels=FALSE, row.values=lon_sst, column.values=lat_sst,
xlim=c(100, 160), ylim=c(-80, -50), xlab='longitude', ylab='latitude')
Here, the at argument controls the values at which contour lines are calculated and plotted (and hence the number of breaks in the colour ramp). In my example, contour lines are drawn at -0.75, -0.5, -0.25, 0, 0.25, 0.5, 0.75 and 1 (with -1 being the background). Changing to at=seq(-1, 1, 0.5), for example, would produce contour lines at -0.5, 0, 0.5, and 1.
