SceneKit: shader modifier to overlay color on a node based on world coordinates

I have a simple shader modifier that recolors a node based on its local axes ((x > 0) → green), but how do I make it work based on world coordinates?
(Preferably shader-only, without converting points on the CPU side and passing them into the shader.)
Shader modifier demo:
vec4 pos = u_inverseModelTransform * u_inverseViewTransform * vec4(_surface.position, 1.0);
if (pos.x > 0.0) {
    _output.color.rgb = vec3(0.0, 0.8, 0.0);
}

You don't want to multiply by u_inverseModelTransform, which moves you from world space back into object space. _surface.position is in view space, so the inverse view transform alone gives you world coordinates:
vec4 pos = u_inverseViewTransform * vec4(_surface.position, 1.0);

As a corollary to mnuages' correct answer: I finally figured out that if you're writing in the Metal Shading Language rather than GLSL, you should use scn_frame.inverseViewTransform, not u_inverseViewTransform.
This is tricky because SceneKit will automatically try to cross-compile GLSL shader modifiers to the Metal Shading Language, so it's sometimes hard to know which one you're using. (E.g., your SceneKit view can be Metal-backed instead of OpenGL- or GLES-backed while your shader modifiers are still written in GLSL, and SceneKit will still work.)

Related

shadertoy GLSL - creating a large matrix and displaying it on the screen

I have a palette of 64 colors. I need to create a 512×512 table, write palette color indexes into it, and then display everything on the screen. The problem is that GLSL does not support two-dimensional arrays here, and a shader alone cannot preserve a table between frames.
The closest thing you can do is create a separate buffer pass and use only part of it.
Here's an example Buffer A:
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    if (any(greaterThan(fragCoord, vec2(512)))) return;
    fragCoord -= .5;
    fragColor = vec4(mod(fragCoord.x, 2.), 0, 0, 1); // generate a color at point.
}
Then in the main shader, you can access a pixel with:
// vec2 p; // p.x and p.y in range(0, 512)
texture(iChannel0, p/iResolution.xy);
If you are using OpenGL instead of Shadertoy, you can use a 2D texture instead.

Texturing a sphere in OpenGL with glTexGen

I want to put an earth texture on a sphere. My sphere is an icosphere built from many triangles (100+), and I found it confusing to set UV coordinates for the whole sphere. I tried glTexGen and the result is quite close, but my texture is repeated 8 times (see image). I cannot find a way to make it wrap the whole object just once. Here is the code where the sphere and texture are created:
glEnable(GL_TEXTURE_2D);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
glBegin(GL_TRIANGLES);
for (int i = 0; i < new_sphere->NumOfTrians; i++)
{
    Triangle *draw_Trian = new_sphere->Trians+i;
    glVertex3f(draw_Trian->pnts[0].coords[0], draw_Trian->pnts[0].coords[1], draw_Trian->pnts[0].coords[2]);
    glVertex3f(draw_Trian->pnts[1].coords[0], draw_Trian->pnts[1].coords[1], draw_Trian->pnts[1].coords[2]);
    glVertex3f(draw_Trian->pnts[2].coords[0], draw_Trian->pnts[2].coords[1], draw_Trian->pnts[2].coords[2]);
}
glDisable(GL_TEXTURE_2D);
free(new_sphere->Trians);
free(new_sphere);
glEnd();
You need to define how your texture is supposed to map to your triangles. This depends on the texture you're using. There are a multitude of ways to map the surface of a sphere with a texture (since no one mapping is free of singularities). It looks like you have a cylindrical projection texture there. So we will emit cylindrical UV coordinates.
I've tried to give you some code here, but it assumes that:
Your mesh is a unit sphere (i.e., centered at 0 and has radius 1)
pnts.coords is an array of floats
You want to use the second coordinate (coord[1]) as the 'up' direction (or the height in a cylindrical mapping)
Your code would look something like this. I've defined a new function for emitting cylindrical UVs, so you can put that wherever you like.
/* Map [(-1, -1, -1), (1, 1, 1)] into [(0, 0), (1, 1)] cylindrically */
inline void uvCylinder(float* coord) {
    float angle = 0.5f * atan2(coord[2], coord[0]) / 3.14159f + 0.5f;
    float height = 0.5f * coord[1] + 0.5f;
    glTexCoord2f(angle, height);
}
glEnable(GL_TEXTURE_2D);
glBegin(GL_TRIANGLES);
for (int i = 0; i < new_sphere->NumOfTrians; i++) {
    Triangle *t = new_sphere->Trians+i;
    uvCylinder(t->pnts[0].coords);
    glVertex3f(t->pnts[0].coords[0], t->pnts[0].coords[1], t->pnts[0].coords[2]);
    uvCylinder(t->pnts[1].coords);
    glVertex3f(t->pnts[1].coords[0], t->pnts[1].coords[1], t->pnts[1].coords[2]);
    uvCylinder(t->pnts[2].coords);
    glVertex3f(t->pnts[2].coords[0], t->pnts[2].coords[1], t->pnts[2].coords[2]);
}
glEnd();
glDisable(GL_TEXTURE_2D);
free(new_sphere->Trians);
free(new_sphere);
Note on Projections
The reason it's confusing to build UV coordinates for the whole sphere is that there isn't one 'correct' way to do it. Mathematically speaking, there is no perfect 2D mapping of a sphere, which is why we have so many different projections. When you have a 2D image that serves as a texture for a spherical object, you need to know which projection that image was built for, so that you can emit the correct UV coordinates for that texture.

Why is pango_cairo_show_layout drawing text at a slightly wrong location?

I have a Gtk app written in C running on Ubuntu Linux.
I'm confused about some behavior I'm seeing with the pango_cairo_show_layout function: I get the exact "ink" (not "logical") pixel size of a pango layout and draw the layout using pango_cairo_show_layout on a GtkDrawingArea widget. Right before drawing the layout, I draw a rectangle that should perfectly encompass the text that I'm about to draw, but the text always shows up a little below the bottom edge of the rectangle.
Here is my full code:
// The drawing area widget's "expose-event" callback handler
gboolean OnTestWindowExposeEvent(GtkWidget *pWidget, GdkEventExpose *pEvent, gpointer data)
{
    // Note that this window is 365 x 449 pixels
    double dEntireWindowWidth = pEvent->area.width; // This is 365.0
    double dEntireWindowHeight = pEvent->area.height; // This is 449.0
    // Create a cairo context with which to draw
    cairo_t *cr = gdk_cairo_create(pWidget->window);
    // Draw a red background
    cairo_set_source_rgb(cr, 1.0, 0.0, 0.0);
    cairo_rectangle(cr, 0.0, 0.0, dEntireWindowWidth, dEntireWindowHeight);
    cairo_fill(cr);
    // Calculate the padding inside the window which defines the text rectangle
    double dPadding = 0.05 * ((dEntireWindowWidth < dEntireWindowHeight) ? dEntireWindowWidth : dEntireWindowHeight);
    dPadding = round(dPadding); // This is 18.0
    // The size of the text box in which to draw text
    double dTextBoxSizeW = dEntireWindowWidth - (2.0 * dPadding);
    double dTextBoxSizeH = dEntireWindowHeight - (2.0 * dPadding);
    dTextBoxSizeW = round(dTextBoxSizeW); // This is 329.0
    dTextBoxSizeH = round(dTextBoxSizeH); // This is 413.0
    // Draw a black rectangle that defines the area in which text may be drawn
    cairo_set_line_width(cr, 1.0);
    cairo_set_antialias(cr, CAIRO_ANTIALIAS_NONE);
    cairo_set_source_rgb(cr, 0.0, 0.0, 0.0);
    cairo_rectangle(cr, dPadding, dPadding, dTextBoxSizeW, dTextBoxSizeH);
    cairo_stroke(cr);
    // The text to draw
    std::string szText("Erik");
    // The font name to use
    std::string szFontName("FreeSans");
    // The font size to use
    double dFontSize = 153.0;
    // The font description string
    char szFontDescription[64];
    memset(&(szFontDescription[0]), 0, sizeof(szFontDescription));
    snprintf(szFontDescription, sizeof(szFontDescription) - 1, "%s %.02f", szFontName.c_str(), dFontSize);
    // Create a font description
    PangoFontDescription *pFontDescription = pango_font_description_from_string(szFontDescription);
    // Set up the font description
    pango_font_description_set_weight(pFontDescription, PANGO_WEIGHT_NORMAL);
    pango_font_description_set_style(pFontDescription, PANGO_STYLE_NORMAL);
    pango_font_description_set_variant(pFontDescription, PANGO_VARIANT_NORMAL);
    pango_font_description_set_stretch(pFontDescription, PANGO_STRETCH_NORMAL);
    // Create a pango layout
    PangoLayout *pLayout = gtk_widget_create_pango_layout(pWidget, szText.c_str());
    // Set up the pango layout
    pango_layout_set_alignment(pLayout, PANGO_ALIGN_LEFT);
    pango_layout_set_width(pLayout, -1);
    pango_layout_set_font_description(pLayout, pFontDescription);
    pango_layout_set_auto_dir(pLayout, TRUE);
    // Get the "ink" pixel size of the layout
    PangoRectangle tRectangle;
    pango_layout_get_pixel_extents(pLayout, &tRectangle, NULL);
    double dRealTextSizeW = static_cast<double>(tRectangle.width);
    double dRealTextSizeH = static_cast<double>(tRectangle.height);
    // Calculate the top left corner coordinate at which to draw the text
    double dTextLocX = dPadding + ((dTextBoxSizeW - dRealTextSizeW) / 2.0);
    double dTextLocY = dPadding + ((dTextBoxSizeH - dRealTextSizeH) / 2.0);
    // Draw a blue rectangle which should perfectly encompass the text we're about to draw
    cairo_set_antialias(cr, CAIRO_ANTIALIAS_NONE);
    cairo_set_source_rgb(cr, 0.0, 0.0, 1.0);
    cairo_rectangle(cr, dTextLocX, dTextLocY, dRealTextSizeW, dRealTextSizeH);
    cairo_stroke(cr);
    // Set up the cairo context for drawing the text
    cairo_set_source_rgb(cr, 1.0, 1.0, 1.0);
    cairo_set_antialias(cr, CAIRO_ANTIALIAS_BEST);
    // Move to the top left coordinate before drawing the text
    cairo_move_to(cr, dTextLocX, dTextLocY);
    // Draw the layout text
    pango_cairo_show_layout(cr, pLayout);
    // Clean up
    cairo_destroy(cr);
    g_object_unref(pLayout);
    pango_font_description_free(pFontDescription);
    return TRUE;
}
So, why is the text not being drawn exactly where I tell it to be drawn?
Thanks in advance for any help!
Look at the documentation for pango_layout_get_extents() (this is not mentioned in the docs for pango_layout_get_pixel_extents()):
Note that both extents may have non-zero x and y. You may want to use
those to offset where you render the layout.
https://developer.gnome.org/pango/stable/pango-Layout-Objects.html#pango-layout-get-extents
This is because the position you render the layout at is (as far as I remember) the position of the baseline (something logically related to the text) rather than the top-left corner of the ink (which would be some "arbitrary thing" not related to the actual text).
In the case of your code, I would suggest adding tRectangle.x to dTextLocX (or subtracting? I'm not completely sure about the sign). The same should be done with the y coordinate.
TL;DR: Your PangoRectangle has a non-zero x/y position that you need to handle.
Edit: I am not completely sure, but I think Pango handles this just like cairo. For cairo, there is a nice description at http://cairographics.org/tutorial/#L1understandingtext. The reference point is the point you give to cairo. You want to look at the description of bearing.

Orthographic Projection with OpenGL and how to implement camera or object movement in space

I have made a cube in a display list using GL_POLYGON. I have initialised it at the origin of the coordinate system, that is, at (0,0,0). In my display function, which is registered with glutDisplayFunc, I use this code:
glLoadIdentity();
glOrtho(0,0,0,0,1,1);
glMatrixMode(GL_MODELVIEW);
I want to use an orthographic projection via glOrtho. My question is: is it normal that I can still see my cube, considering that my window size is 600x600?
Also, I would like some guidelines on how to move my cube or my camera with the relevant OpenGL functions. Say I want to move my camera back (toward the +z axis) or my cube forward (toward the -z axis). How can I do that?
First of all, you also need to set glMatrixMode() to GL_PROJECTION before you call glOrtho(), so it would look like this instead:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(...); // Replace ... with your values
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
To move the scene you can simply call one or more of the following functions:
glTranslate*()
glRotate*()
glScale*()
You can click the above links to read how and what each function does. But basically:
glTranslate*() translates/moves the current selected matrix.
glRotate*() rotates the current selected matrix.
glScale*() scales the current selected matrix.
You can also use glPushMatrix() and glPopMatrix() to push and pop the current matrix stack.
Extra
Also be aware that you're using old, deprecated functions. You shouldn't use them; instead you're now supposed to calculate and maintain your own matrix stack.
Edit
Camera & Objects
Basically you do that by combining the above functions. It might sound harder than it actually is.
I'll give an example with one camera and two objects, to show the idea of how it works.
void render()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();

    // The camera rotations & translation
    glRotatef(camera_pitch, -1.0f, 0.0f, 0.0f);
    glRotatef(camera_yaw, 0.0f, 1.0f, 0.0f);
    glTranslatef(-camera_x, -camera_y, -camera_z);

    // Object 1
    glPushMatrix();
    glRotatef(...);
    glTranslatef(...);
    // Render object 1
    glPopMatrix();

    // Object 2
    glPushMatrix();
    glRotatef(...);
    glTranslatef(...);
    // Render object 2
    glPopMatrix();
}
Again replace the ... with your own values.
The reason we translate by the negated camera coordinates is that we aren't actually moving a camera; we are "pushing" (translating, etc.) everything away from the camera/center, so the camera effectively stays at the center at all times.
Important: the order in which you rotate and translate matters. For the camera transformation you always need to rotate first, then translate.
Edit
gluLookAt ?
gluLookAt does exactly the same thing as my example.
Example:
// The Camera Rotations & Translation
glRotatef(camera_pitch, -1f, 0f, 0f);
glRotatef(camera_yaw, 0f, 1f, 0f);
glTranslate(-camera_x, -camera_y, -camera_z);
This is my own function, which does the same thing as gluLookAt. How do I know? Because I looked at the original gluLookAt implementation and then wrote the following function.
void lookAt(float eyex, float eyey, float eyez, float centerx, float centery, float centerz)
{
    float dx = eyex - centerx;
    float dy = eyey - centery;
    float dz = eyez - centerz;

    float pitch = (float) Math.atan2(dy, Math.sqrt(dx * dx + dz * dz));
    float yaw = (float) Math.atan2(dz, dx);

    pitch = -pitch;
    yaw = yaw - 1.57079633f;

    // Here you could call glLoadIdentity() if you want to reset the matrix
    // glLoadIdentity();
    glRotatef(Math.toDegrees(pitch), -1f, 0f, 0f);
    glRotatef(Math.toDegrees(yaw), 0f, 1f, 0f);
    glTranslatef(-eyex, -eyey, -eyez);
}
You might need to change the Math.* calls, since the above code isn't written in C.

Light position coordinate in phong shading

I'm learning Phong shading and have some confusion:
In which coordinate space is the light position in Phong shading (model space, view space, or something else)?
According to this: http://www.ozone3d.net/tutorials/glsl_lighting_phong_p2.php:
Vertex shader is:
varying vec3 normal, lightDir, eyeVec;

void main()
{
    normal = gl_NormalMatrix * gl_Normal;
    vec3 vVertex = vec3(gl_ModelViewMatrix * gl_Vertex);
    lightDir = vec3(gl_LightSource[0].position.xyz - vVertex);
    eyeVec = -vVertex;
    gl_Position = ftransform();
}
Why eyeVec = -vVertex?
The coordinate frame is not relevant to the kind of shading. You could do Phong shading in model space, world space, view space, or any made-up space you want. The only important thing is to make sure that all the vectors in the formula are transformed into the same space.
In this case it looks like the shading is being done in view space. In view space, the vertex coordinates are defined relative to the eye. So the vector from a vertex to the eye (eyeVec) is the negation of the vector from the eye to the vertex (vVertex).
