vectormath library and matrix operations - c

I found this copy on GitHub to link to, but I am using the one downloaded from SourceForge.
My question is about the way they designed their matrix operations, which seems really strange to me.
For example, say I create a 4×4 matrix and set its scale, and then I want to rotate that matrix. The vectormath library seems to reset the matrix back to an identity matrix and then apply the rotation, and I don't understand why it would work this way.
Take a look at this function that makes a rotation matrix:
static inline void vmathM4MakeRotationY( VmathMatrix4 *result, float radians )
{
    float s, c;
    s = sinf( radians );
    c = cosf( radians );
    vmathV4MakeFromElems( &result->col0, c, 0.0f, -s, 0.0f );
    vmathV4MakeYAxis( &result->col1 );
    vmathV4MakeFromElems( &result->col2, s, 0.0f, c, 0.0f );
    vmathV4MakeWAxis( &result->col3 );
}
Does this library expect you to keep multiple matrices around, using one to build rotations and then multiplying?
Edit:
This is some previous matrix math code that I used to rotate a matrix; it looks like this:
mat4_s mat4_rotateX(mat4_s* out, float angle, mat4_s* inMat)
{
    float s = sinf(angle),
          c = cosf(angle),
          a10 = inMat->m[4],
          a11 = inMat->m[5],
          a12 = inMat->m[6],
          a13 = inMat->m[7],
          a20 = inMat->m[8],
          a21 = inMat->m[9],
          a22 = inMat->m[10],
          a23 = inMat->m[11];
    if (!out->m) {
        for(size_t i = 0; i < 16; i++)
        {
            out->m[i] = inMat->m[i];
        }
    } else if (inMat->m != out->m) { // If the source and destination differ, copy the unchanged rows
        out->m[0] = inMat->m[0];
        out->m[1] = inMat->m[1];
        out->m[2] = inMat->m[2];
        out->m[3] = inMat->m[3];
        out->m[12] = inMat->m[12];
        out->m[13] = inMat->m[13];
        out->m[14] = inMat->m[14];
        out->m[15] = inMat->m[15];
    }
    out->m[4] = a10 * c + a20 * s;
    out->m[5] = a11 * c + a21 * s;
    out->m[6] = a12 * c + a22 * s;
    out->m[7] = a13 * c + a23 * s;
    out->m[8] = a10 * -s + a20 * c;
    out->m[9] = a11 * -s + a21 * c;
    out->m[10] = a12 * -s + a22 * c;
    out->m[11] = a13 * -s + a23 * c;
    return *out;
}
This is the process that I have to go through to get vectormath to do the same thing:
mat4* mat4_rotate_y(mat4* out, const float angle){
    mat4 m;
    mat4_identity_v(&m);
    vmathM4MakeRotationY(&m, angle);
    mat4_multi(out, out, m);
    return out;
}
The multiplication code is fairly standard, and vmathM4MakeRotationZ (the Y variant is shown above) looks like this:
static inline void vmathM4MakeRotationZ( VmathMatrix4 *result, float radians )
{
    float s, c;
    s = sinf( radians );
    c = cosf( radians );
    vmathV4MakeFromElems( &result->col0, c, s, 0.0f, 0.0f );
    vmathV4MakeFromElems( &result->col1, -s, c, 0.0f, 0.0f );
    vmathV4MakeZAxis( &result->col2 );
    vmathV4MakeWAxis( &result->col3 );
}
Just for completeness, the vmathV4Make*Axis helpers look like this (Z axis shown):
static inline void vmathV4MakeZAxis(VmathVector4 *result) {
    vmathV4MakeFromElems(result, 0.0f, 0.0f, 1.0f, 0.0f);
}
and vmathV4MakeFromElems looks like this:
static inline void vmathV4MakeFromElems(VmathVector4 *result, float _x,
                                        float _y, float _z, float _w) {
    result->x = _x;
    result->y = _y;
    result->z = _z;
    result->w = _w;
}

This function seems to do "Initialize the matrix to be a rotation transform" instead of what you are expecting, "Add a rotation transform to the current transform".
As you say, you can work around this by storing the rotation in a separate temporary matrix and then multiplying:
VmathMatrix4 temp;
vmathM4MakeRotationY(&temp, 1.23);
vmathM4Mul(&mytransform, &mytransform, &temp);
That is pretty much what a hypothetical vmathM4ApplyRotationY() would have to do anyway.
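A hypothetical helper along those lines could be built on the two existing calls (vmathM4ApplyRotationY is not part of the library, just a name for this sketch; it relies on vmathM4Mul accepting the same matrix as result and operand, which the snippet above already assumes):
static inline void vmathM4ApplyRotationY( VmathMatrix4 *result,
                                          VmathMatrix4 *mat,
                                          float radians )
{
    VmathMatrix4 rot;
    vmathM4MakeRotationY( &rot, radians );   /* build the pure Y rotation */
    vmathM4Mul( result, mat, &rot );         /* result = mat * rot */
}
Calling vmathM4ApplyRotationY(&mytransform, &mytransform, 1.23f) then behaves like the mat4_rotate_y wrapper from the question.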

Related

How to use a lookAt matrix to compute ray in raytracing?

As I understand it, the 'lookat' method is one of the simplest ways to place/rotate the camera in a scene. So I implemented the matrix available at (https://www.scratchapixel.com/lessons/mathematics-physics-for-computer-graphics/lookat-function) in my ray tracer, but I have no idea how to use it to compute rays.
Basically what I do is place the camera at negative Z, send a ray towards positive Z, and select the pixel by iterating over the X and Y of my view plane.
It is easy because the view plane is in front of the camera, and I simply have to assign the X and Y of my iterations to the ray destination's X and Y.
However, I would like to be able to send rays into any part of the space.
Could you please help me understand how to do that?
Thank you!
What I do basically:
{
    double deg = 50.;
    double rad = deg / (180.0 / M_PI);
    double distance = (WIDTH / 2) * (cotan(rad / 2));
    ray.orig.x = HEIGH / 2.0;
    ray.orig.y = WIDTH / 2.0;
    ray.orig.z = -distance;
    y = -1;
    while (++y <= HEIGH)
    {
        x = -1;
        while (++x <= WIDTH)
        {
            ray.dest.x = x - ray.orig.x;
            ray.dest.y = y - ray.orig.y;
            ray.dest.z = 0. - ray.orig.z;
            ray.dest = ve_normalize(&ray.dest);
            check_objects(c, &ray, 0);
            add_diffuse_light(c);
            put_pixel(c, &x, &y);
        }
    }
}
The functions to handle the lookat matrix:
t_lookat lookati(t_vector *from, t_vector *to)
{
    t_lookat lookat;
    t_vector fo;
    t_vector ri;
    t_vector up;
    t_vector tmp;
    tmp.x = 0; tmp.y = 1; tmp.z = 0;
    fo = ve_subtraction(from, to);
    fo = ve_normalize(&fo);
    ri = ve_cross(&tmp, &fo);
    ri = ve_normalize(&ri);
    up = ve_cross(&fo, &ri);
    up = ve_normalize(&up);
    lookat.ri.x = ri.x;
    lookat.ri.y = ri.y;
    lookat.ri.z = ri.z;
    lookat.up.x = up.x;
    lookat.up.y = up.y;
    lookat.up.z = up.z;
    lookat.fo.x = fo.x;
    lookat.fo.y = fo.y;
    lookat.fo.z = fo.z;
    lookat.fr.x = from->x;
    lookat.fr.y = from->y;
    lookat.fr.z = from->z;
    return(lookat);
}
t_vector orientate(t_vector *a, t_vector *from, t_vector *to)
{
    t_lookat k;
    k = lookati(from, to);
    t_vector orientate;
    orientate.x = a->x * k.ri.x + a->y * k.up.x + a->z * k.fo.x + a->x * k.fr.x;
    orientate.y = a->x * k.ri.y + a->y * k.up.y + a->z * k.fo.y + a->x * k.fr.y;
    orientate.z = a->x * k.ri.z + a->y * k.up.z + a->z * k.fo.z + a->x * k.fr.z;
    return(orientate);
}
Thank you guys, I finally solved the problem by reading this guide (https://steveharveynz.wordpress.com/2012/12/20/ray-tracer-part-two-creating-the-camera), which suggests normalizing the pixel coordinates (like the pixel range suggested by the user "Spektre") without using a matrix.
P.S.
typedef struct s_vector
{
    double x;
    double y;
    double z;
} t_vector;
typedef struct s_lookat
{
    t_vector ri; // right vector
    t_vector up; // up
    t_vector fo; // forward
    t_vector fr; // eye position
} t_lookat;
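For reference, the usual way to use a lookat (camera-to-world) matrix in a ray tracer is to generate the ray in camera space and then rotate only its direction by the right/up/forward basis, taking the origin from the eye position. A rough sketch with the types above (generate_ray is a made-up name; the fov value and the WIDTH/HEIGH macros are carried over from the question's snippet, and ve_normalize is the helper already used there):
void generate_ray(t_lookat cam, int px, int py, t_vector *orig, t_vector *dir)
{
    double fov = 50.;
    double scale = tan((fov / 2.) * M_PI / 180.);
    double aspect = (double)WIDTH / (double)HEIGH;
    /* pixel -> camera-space direction on the plane z = -1 */
    double cx = (2. * ((px + 0.5) / WIDTH) - 1.) * scale * aspect;
    double cy = (1. - 2. * ((py + 0.5) / HEIGH)) * scale;
    t_vector d;
    /* rotate (cx, cy, -1) into world space; fo points from 'to' towards
       'from', so the camera looks along -fo */
    d.x = cx * cam.ri.x + cy * cam.up.x - cam.fo.x;
    d.y = cx * cam.ri.y + cy * cam.up.y - cam.fo.y;
    d.z = cx * cam.ri.z + cy * cam.up.z - cam.fo.z;
    *dir = ve_normalize(&d);
    *orig = cam.fr;   /* the ray starts at the eye position */
}
The important difference from orientate() above is that the eye position (fr) is applied only to the ray origin; directions are rotated, never translated.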

STM32 usart interrupt cannot translate correct data to other functions

I have a problem when I use one STM32 Discovery board to send data to another: the receiving board gets the correct data and can print it in the callback function, but the value does not print correctly in other functions.
void UART7_IRQHandler()
{
    HAL_UART_IRQHandler(&huart7);
    HAL_UART_Receive_IT(&huart7, (uint8_t *)UART7RxBuffer, 16);
}
void HAL_UART_RxCpltCallback(UART_HandleTypeDef* huart)
{
    if(huart->Instance == UART7) {
        X = (UART7RxBuffer[1]-48) + (UART7RxBuffer[3]-48)*0.1 + (UART7RxBuffer[4]-48)*0.01 + (UART7RxBuffer[5]-48)*0.001;
    }
}
But I receive wrong data in this function:
void controller(){
    printf("%.3f\t\n", X);
}
It should be 0.012, and it is correct in HAL_UART_RxCpltCallback(), but in controller() I receive -38.02, -0.009, 0.512, 0.012, -1.948 and so on. What should I do to prevent this situation?
Without knowing if your MCU actually supports floating point etc, I would probably do something like this in order to diagnose/debug what is happening:
void UART7_IRQHandler()
{
    HAL_UART_IRQHandler(&huart7);
    HAL_UART_Receive_IT(&huart7, (uint8_t *)UART7RxBuffer, 16);
}

char d1;
char d2;
char d3;
char d4;
float X1;
float X2;
float X3;
float X4;
float X;

void HAL_UART_RxCpltCallback(UART_HandleTypeDef* huart)
{
    if(huart->Instance == UART7) {
        d1 = UART7RxBuffer[1]-48; /* add breakpoint here, and single step from here while inspecting variables */
        d2 = UART7RxBuffer[3]-48;
        d3 = UART7RxBuffer[4]-48;
        d4 = UART7RxBuffer[5]-48;
        X1 = d1;
        X2 = d2 * 0.1;
        X3 = d3 * 0.01;
        X4 = d4 * 0.001;
        X = X1 + X2 + X3 + X4;
    }
}
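One more thing worth ruling out (this is an assumption on my part, since the question does not show how X is declared): X is written from the UART interrupt and read from the main loop, so it should at least be volatile, and taking a snapshot with interrupts briefly masked guarantees controller() never reads a half-updated value:
volatile float X;   /* shared between the ISR and the main loop */

void controller(void)
{
    float x_copy;

    __disable_irq();   /* CMSIS intrinsic: briefly mask interrupts */
    x_copy = X;
    __enable_irq();

    printf("%.3f\t\n", x_copy);
}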

Reverse the Fish-eye Distortion (I've used OpenCV with VC++)

I've made a simulation of fish-eye distortion.
I want to develop a reverse program that can convert the distorted image back to a normal image.
I've tried to use the undistortPoints() function but couldn't understand the input (dist_coeffs).
cv.UndistortPoints(distorted, undistorted, intrinsics, dist_coeffs)
My code for fish eye distortion:
#include "stdio.h"
#include <cv.h>
#include <highgui.h>
#include <math.h>
#include <iostream>
void sampleImage(const IplImage* arr, float idx0, float idx1, CvScalar& res)
{
if(idx0<0 || idx1<0 || idx0>(cvGetSize(arr).height-1) || idx1>(cvGetSize(arr).width-1))
{
res.val[0]=0;
res.val[1]=0;
res.val[2]=0;
res.val[3]=0;
return;
}
float idx0_fl=floor(idx0);
float idx0_cl=ceil(idx0);
float idx1_fl=floor(idx1);
float idx1_cl=ceil(idx1);
CvScalar s1=cvGet2D(arr,(int)idx0_fl,(int)idx1_fl);
CvScalar s2=cvGet2D(arr,(int)idx0_fl,(int)idx1_cl);
CvScalar s3=cvGet2D(arr,(int)idx0_cl,(int)idx1_cl);
CvScalar s4=cvGet2D(arr,(int)idx0_cl,(int)idx1_fl);
float x = idx0 - idx0_fl;
float y = idx1 - idx1_fl;
res.val[0]= s1.val[0]*(1-x)*(1-y) + s2.val[0]*(1-x)*y + s3.val[0]*x*y + s4.val[0]*x*(1-y);
res.val[1]= s1.val[1]*(1-x)*(1-y) + s2.val[1]*(1-x)*y + s3.val[1]*x*y + s4.val[1]*x*(1-y);
res.val[2]= s1.val[2]*(1-x)*(1-y) + s2.val[2]*(1-x)*y + s3.val[2]*x*y + s4.val[2]*x*(1-y);
res.val[3]= s1.val[3]*(1-x)*(1-y) + s2.val[3]*(1-x)*y + s3.val[3]*x*y + s4.val[3]*x*(1-y);
}
float xscale;
float yscale;
float xshift;
float yshift;
float getRadialX(float x,float y,float cx,float cy,float k)
{
x = (x*xscale+xshift);
y = (y*yscale+yshift);
float res = x+((x-cx)*k*((x-cx)*(x-cx)+(y-cy)*(y-cy)));
return res;
}
float getRadialY(float x,float y,float cx,float cy,float k)
{
x = (x*xscale+xshift);
y = (y*yscale+yshift);
float res = y+((y-cy)*k*((x-cx)*(x-cx)+(y-cy)*(y-cy)));
return res;
}
float thresh = 1;
float calc_shift(float x1,float x2,float cx,float k)
{
float x3 = x1+(x2-x1)*0.5;
float res1 = x1+((x1-cx)*k*((x1-cx)*(x1-cx)));
float res3 = x3+((x3-cx)*k*((x3-cx)*(x3-cx)));
// std::cerr<<"x1: "<<x1<<" - "<<res1<<" x3: "<<x3<<" - "<<res3<<std::endl;
if(res1>-thresh && res1 < thresh)
return x1;
if(res3<0)
{
return calc_shift(x3,x2,cx,k);
}
else
{
return calc_shift(x1,x3,cx,k);
}
}
int main(int argc, char** argv)
{
IplImage* src = cvLoadImage( "D:\\2012 Projects\\FishEye\\Debug\\images\\grid1.bmp", 1 );
IplImage* dst = cvCreateImage(cvGetSize(src),src->depth,src->nChannels);
IplImage* dst2 = cvCreateImage(cvGetSize(src),src->depth,src->nChannels);
float K=0.002;
float centerX=(float)(src->width/2);
float centerY=(float)(src->height/2);
int width = cvGetSize(src).width;
int height = cvGetSize(src).height;
xshift = calc_shift(0,centerX-1,centerX,K);
float newcenterX = width-centerX;
float xshift_2 = calc_shift(0,newcenterX-1,newcenterX,K);
yshift = calc_shift(0,centerY-1,centerY,K);
float newcenterY = height-centerY;
float yshift_2 = calc_shift(0,newcenterY-1,newcenterY,K);
// scale = (centerX-xshift)/centerX;
xscale = (width-xshift-xshift_2)/width;
yscale = (height-yshift-yshift_2)/height;
std::cerr<<xshift<<" "<<yshift<<" "<<xscale<<" "<<yscale<<std::endl;
std::cerr<<cvGetSize(src).height<<std::endl;
std::cerr<<cvGetSize(src).width<<std::endl;
for(int j=0;j<cvGetSize(dst).height;j++)
{
for(int i=0;i<cvGetSize(dst).width;i++)
{
CvScalar s;
float x = getRadialX((float)i,(float)j,centerX,centerY,K);
float y = getRadialY((float)i,(float)j,centerX,centerY,K);
sampleImage(src,y,x,s);
cvSet2D(dst,j,i,s);
}
}
#if 0
cvNamedWindow( "Source1", 1 );
cvShowImage( "Source1", dst);
cvWaitKey(0);
#endif
cvSaveImage("D:\\2012 Projects\\FishEye\\Debug\\images\\grid3.bmp",dst,0);
cvNamedWindow( "Source1", 1 );
cvShowImage( "Source1", src);
cvWaitKey(0);
cvNamedWindow( "Distortion", 2 );
cvShowImage( "Distortion", dst);
cvWaitKey(0);
#if 0
for(int j=0;j<cvGetSize(src).height;j++)
{
for(int i=0;i<cvGetSize(src).width;i++)
{
CvScalar s;
sampleImage(src,j+0.25,i+0.25,s);
cvSet2D(dst,j,i,s);
}
}
cvNamedWindow( "Source1", 1 );
cvShowImage( "Source1", src);
cvWaitKey(0);
#endif
}
Actually, my original answer was about the undistortion algorithm for individual points. If you want to undistort a complete image, there is a much simpler technique, as explained in this other thread:
Understanding of openCV undistortion
The outline of the algorithm (which is the one used in the OpenCV function undistort()) is as follows; a plain-C sketch of these steps is given after the list. For each pixel of the destination lens-corrected image do:
Convert the pixel coordinates (u_dst, v_dst) to normalized coordinates (x', y') using the inverse of the calibration matrix K,
Apply your lens-distortion model, to obtain the distorted normalized coordinates (x'', y''),
Convert (x'', y'') to distorted pixel coordinates (u_src, v_src) using the calibration matrix K,
Use the interpolation method of your choice to find the intensity/depth associated with the pixel coordinates (u_src, v_src) in the source image, and assign this intensity/depth to the current destination pixel (u_dst, v_dst).
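A rough sketch of those four steps in plain C, without the OpenCV types (an illustration only: fx, fy, cx, cy stand for the entries of K, and k1, k2, p1, p2, k3 are the usual distortion coefficients; nearest-neighbour sampling in step 4 keeps it short, bilinear interpolation would look better):
void undistort_image(const unsigned char *src, unsigned char *dst,
                     int width, int height,
                     double fx, double fy, double cx, double cy,
                     double k1, double k2, double p1, double p2, double k3)
{
    for (int v = 0; v < height; ++v) {
        for (int u = 0; u < width; ++u) {
            /* 1. pixel -> normalized coordinates with the inverse of K */
            double x = (u - cx) / fx;
            double y = (v - cy) / fy;
            /* 2. apply the forward lens-distortion model */
            double r2 = x * x + y * y;
            double radial = 1.0 + r2 * (k1 + r2 * (k2 + r2 * k3));
            double xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x);
            double yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y;
            /* 3. distorted normalized coordinates -> source pixel with K */
            int us = (int)(fx * xd + cx + 0.5);
            int vs = (int)(fy * yd + cy + 0.5);
            /* 4. sample the source image and write the destination pixel */
            dst[v * width + u] =
                (us >= 0 && us < width && vs >= 0 && vs < height)
                    ? src[vs * width + us] : 0;
        }
    }
}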
Original answer:
Here is the undistortion algorithm extracted from the OpenCV function undistortPoints():
void dist2norm(const cv::Point2d &pt_dist, cv::Point2d &pt_norm) const {
    pt_norm.x = (pt_dist.x-Kcx)/Kfx;
    pt_norm.y = (pt_dist.y-Kcy)/Kfy;
    int niters = (Dk1!=0. ? 5 : 0);
    double x0 = pt_norm.x, y0 = pt_norm.y;
    for(int i=0; i<niters; ++i) {
        double x2 = pt_norm.x*pt_norm.x,
               y2 = pt_norm.y*pt_norm.y,
               xy = pt_norm.x*pt_norm.y,
               r2 = x2+y2;
        double icdist = 1./(1 + ((Dk3*r2 + Dk2)*r2 + Dk1)*r2);
        double deltaX = 2*Dp1*xy + Dp2*(r2 + 2*x2);
        double deltaY = Dp1*(r2 + 2*y2) + 2*Dp2*xy;
        pt_norm.x = (x0-deltaX)*icdist;
        pt_norm.y = (y0-deltaY)*icdist;
    }
}
If you provide the coordinates of a point in the distorted image in argument pt_dist, it will calculate the normalized coordinates of the associated point and return them in pt_norm. Then, you can obtain the coordinates of the associated point in the undistorted image as
pt_undist = K . [pt_norm.x; pt_norm.y; 1]
where K is the camera matrix.
The standard lens distortion model used by OpenCV is explained at the beginning of its camera calibration documentation. In normalized coordinates it maps (x', y') to
x'' = x' * (1 + k1*r^2 + k2*r^4 + k3*r^6) / (1 + k4*r^2 + k5*r^4 + k6*r^6) + 2*p1*x'*y' + p2*(r^2 + 2*x'^2)
y'' = y' * (1 + k1*r^2 + k2*r^4 + k3*r^6) / (1 + k4*r^2 + k5*r^4 + k6*r^6) + p1*(r^2 + 2*y'^2) + 2*p2*x'*y'
with r^2 = x'^2 + y'^2, where the distortion coefficients are (k1, k2, p1, p2, k3, k4, k5, k6) (most often we use k4 = k5 = k6 = 0).
I don't know what your model for fisheye distortion is, but you can surely adapt the above algorithm to your case. Otherwise, you may use a non-linear optimization algorithm (e.g. Levenberg-Marquardt or any other) to recover the undistorted coordinates from the distorted ones.

Grainy looking sphere in my ray tracer

I am trying to write a simple ray tracer. The final image should look like this: I have read stuff about it and below is what I am doing:
create an empty image (to fill each pixel, via ray tracing)
for each pixel [for each row, each column]
    create the equation of the ray emanating from our pixel
    trace() ray:
        if ray intersects SPHERE
            compute local shading (including shadow determination)
            return color;
Now, the scene data is as follows: it sets a gray sphere of radius 1 at (0, 0, -3) and a white light source at the origin.
2
amb: 0.3 0.3 0.3
sphere
pos: 0.0 0.0 -3.0
rad: 1
dif: 0.3 0.3 0.3
spe: 0.5 0.5 0.5
shi: 1
light
pos: 0 0 0
col: 1 1 1
Mine looks very weird:
//check ray intersection with the sphere
boolean intersectsWithSphere(struct point rayPosition, struct point rayDirection, Sphere sp, float* t){
    //float a = (rayDirection.x * rayDirection.x) + (rayDirection.y * rayDirection.y) + (rayDirection.z * rayDirection.z);
    // value for a is 1 since rayDirection vector is normalized
    double radius = sp.radius;
    double xc = sp.position[0];
    double yc = sp.position[1];
    double zc = sp.position[2];
    double xo = rayPosition.x;
    double yo = rayPosition.y;
    double zo = rayPosition.z;
    double xd = rayDirection.x;
    double yd = rayDirection.y;
    double zd = rayDirection.z;
    double b = 2 * ((xd*(xo-xc))+(yd*(yo-yc))+(zd*(zo-zc)));
    double c = (xo-xc)*(xo-xc) + (yo-yc)*(yo-yc) + (zo-zc)*(zo-zc) - (radius * radius);
    float D = b*b + (-4.0f)*c;
    //ray does not intersect the sphere
    if(D < 0 ){
        return false;
    }
    D = sqrt(D);
    float t0 = (-b - D)/2;
    float t1 = (-b + D)/2;
    //printf("D=%f",D);
    //printf(" t0=%f",t0);
    //printf(" t1=%f\n",t1);
    if((t0 > 0) && (t1 > 0)){
        *t = min(t0,t1);
        return true;
    }
    else {
        *t = 0;
        return false;
    }
}
Below is the trace() function:
unsigned char* trace(struct point rayPosition, struct point rayDirection, Sphere * totalspheres) {
struct point tempRayPosition = rayPosition;
struct point tempRayDirection = rayDirection;
float f=0;
float tnear = INFINITY;
boolean sphereIntersectionFound = false;
int sphereIndex = -1;
for(int i=0; i < num_spheres ; i++){
float t = INFINITY;
if(intersectsWithSphere(tempRayPosition,tempRayDirection,totalspheres[i],&t)){
if(t < tnear){
tnear = t;
sphereIntersectionFound = true;
sphereIndex = i;
}
}
}
if(sphereIndex < 0){
//printf("No interesection found\n");
mycolor[0] = 1;
mycolor[1] = 1;
mycolor[2] = 1;
return mycolor;
}
else {
Sphere sp = totalspheres[sphereIndex];
//intersection point
hitPoint[0].x = tempRayPosition.x + tempRayDirection.x * tnear;
hitPoint[0].y = tempRayPosition.y + tempRayDirection.y * tnear;
hitPoint[0].z = tempRayPosition.z + tempRayDirection.z * tnear;
//normal at the intersection point
normalAtHitPoint[0].x = (hitPoint[0].x - totalspheres[sphereIndex].position[0])/ totalspheres[sphereIndex].radius;
normalAtHitPoint[0].y = (hitPoint[0].y - totalspheres[sphereIndex].position[1])/ totalspheres[sphereIndex].radius;
normalAtHitPoint[0].z = (hitPoint[0].z - totalspheres[sphereIndex].position[2])/ totalspheres[sphereIndex].radius;
normalizedNormalAtHitPoint[0] = normalize(normalAtHitPoint[0]);
for(int j=0; j < num_lights ; j++) {
for(int k=0; k < num_spheres ; k++){
shadowRay[0].x = lights[j].position[0] - hitPoint[0].x;
shadowRay[0].y = lights[j].position[1] - hitPoint[0].y;
shadowRay[0].z = lights[j].position[2] - hitPoint[0].z;
normalizedShadowRay[0] = normalize(shadowRay[0]);
//R = 2 * ( N dot L) * N - L
reflectionRay[0].x = - 2 * dot(normalizedShadowRay[0],normalizedNormalAtHitPoint[0]) * normalizedNormalAtHitPoint[0].x +normalizedShadowRay[0].x;
reflectionRay[0].y = - 2 * dot(normalizedShadowRay[0],normalizedNormalAtHitPoint[0]) * normalizedNormalAtHitPoint[0].y +normalizedShadowRay[0].y;
reflectionRay[0].z = - 2 * dot(normalizedShadowRay[0],normalizedNormalAtHitPoint[0]) * normalizedNormalAtHitPoint[0].z +normalizedShadowRay[0].z;
normalizeReflectionRay[0] = normalize(reflectionRay[0]);
struct point temp;
temp.x = hitPoint[0].x + (shadowRay[0].x * 0.0001 );
temp.y = hitPoint[0].y + (shadowRay[0].y * 0.0001);
temp.z = hitPoint[0].z + (shadowRay[0].z * 0.0001);
struct point ntemp = normalize(temp);
float f=0;
struct point tempHitPoint;
tempHitPoint.x = hitPoint[0].x + 0.001;
tempHitPoint.y = hitPoint[0].y + 0.001;
tempHitPoint.z = hitPoint[0].z + 0.001;
if(intersectsWithSphere(hitPoint[0],ntemp,totalspheres[k],&f)){
// if(intersectsWithSphere(tempHitPoint,ntemp,totalspheres[k],&f)){
printf("In shadow\n");
float r = lights[j].color[0];
float g = lights[j].color[1];
float b = lights[j].color[2];
mycolor[0] = ambient_light[0] + r;
mycolor[1] = ambient_light[1] + g;
mycolor[2] = ambient_light[2] + b;
return mycolor;
} else {
// point is not is shadow , use Phong shading to determine the color of the point.
//I = lightColor * (kd * (L dot N) + ks * (R dot V) ^ sh)
//(for each color channel separately; note that if L dot N < 0, you should clamp L dot N to zero; same for R dot V)
float x = dot(normalizedShadowRay[0],normalizedNormalAtHitPoint[0]);
if(x < 0)
x = 0;
V[0].x = - rayDirection.x;
V[0].x = - rayDirection.y;
V[0].x = - rayDirection.z;
normalizedV[0] = normalize(V[0]);
float y = dot(normalizeReflectionRay[0],normalizedV[0]);
if(y < 0)
y = 0;
float ar = totalspheres[sphereIndex].color_diffuse[0] * x;
float br = totalspheres[sphereIndex].color_specular[0] * pow(y,totalspheres[sphereIndex].shininess);
float r = lights[j].color[0] * (ar+br);
//----------------------------------------------------------------------------------
float bg = totalspheres[sphereIndex].color_specular[1] * pow(y,totalspheres[sphereIndex].shininess);
float ag = totalspheres[sphereIndex].color_diffuse[1] * x;
float g = lights[j].color[1] * (ag+bg);
//----------------------------------------------------------------------------------
float bb = totalspheres[sphereIndex].color_specular[2] * pow(y,totalspheres[sphereIndex].shininess);
float ab = totalspheres[sphereIndex].color_diffuse[2] * x;
float b = lights[j].color[2] * (ab+bb);
mycolor[0] = r + ambient_light[0];
mycolor[1] = g + ambient_light[1];
mycolor[2] = b+ ambient_light[2];
return mycolor;
}
}
}
}
}
The code calling trace() looks like:
void draw_scene()
{
    //Aspect Ratio
    double a = WIDTH / HEIGHT;
    double angel = tan(M_PI * 0.5 * fov / 180);
    ray[0].x = 0.0;
    ray[0].y = 0.0;
    ray[0].z = 0.0;
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    unsigned int x,y;
    float sx, sy;
    for(x=0; x < WIDTH; x++)
    {
        glPointSize(2.0);
        glBegin(GL_POINTS);
        for(y=0; y < HEIGHT; y++)
        {
            sx = (((x + 0.5) / WIDTH) * 2.0 ) - 1;
            sy = (((y + 0.5) / HEIGHT) * 2.0 ) - 1;
            sx = sx * angel * a;
            sy = sy * angel;
            //set ray direction
            ray[1].x = sx;
            ray[1].y = sy;
            ray[1].z = -1;
            normalizedRayDirection[0] = normalize(ray[1]);
            unsigned char* color = trace(ray[0],normalizedRayDirection[0],spheres);
            unsigned char x1 = color[0] * 255;
            unsigned char y1 = color[1] * 255;
            unsigned char z1 = color[2] * 255;
            plot_pixel(x,y,x1 %256,y1%256,z1%256);
        }
        glEnd();
        glFlush();
    }
}
There could be many, many problems with the code/understanding.
I haven't taken the time to understand all your code, and I'm definitely not a graphics expert, but I believe the problem you have is called "surface acne". In this case it's probably happening because your shadow rays are intersecting with the object itself. What I did in my code to fix this is add epsilon * hitPoint.normal to the shadow ray origin. This effectively moves the ray away from your object a bit, so they don't intersect.
The value I'm using for epsilon is the square root of 1.19209290 * 10^-7, since 1.19209290 * 10^-7 is a constant called EPSILON that is defined in the particular language I'm using.
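In the code from the question that would look something like the following (EPS and offset_along_normal are names made up for this sketch; any small value such as 1e-4f is a reasonable starting point):
#define EPS 1e-4f

/* push a point slightly off the surface along the (normalized) normal */
static struct point offset_along_normal(struct point p, struct point n)
{
    struct point q;
    q.x = p.x + EPS * n.x;
    q.y = p.y + EPS * n.y;
    q.z = p.z + EPS * n.z;
    return q;
}
The shadow test then becomes intersectsWithSphere(offset_along_normal(hitPoint[0], normalizedNormalAtHitPoint[0]), normalizedShadowRay[0], totalspheres[k], &f), with the normalized light direction passed as the ray direction.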
What possible reason do you have for doing this (in the non-shadow branch of trace (...)):
V[0].x = - rayDirection.x;
V[0].x = - rayDirection.y;
V[0].x = - rayDirection.z;
You might as well comment out the first two computations since you write the results of each to the same component. I think you probably meant to do this instead:
V[0].x = - rayDirection.x;
V[0].y = - rayDirection.y;
V[0].z = - rayDirection.z;
That said, you should also avoid using GL_POINT primitives to cover a 2x2 pixel quad. Point primitives are not guaranteed to be square, and OpenGL implementations are not required to support any size other than 1.0. In practice, most support 1.0 - ~64.0 but glDrawPixels (...) is a much better way of writing 2x2 pixels, since it skips primitive assembly and the above mentioned limitations. You are using immediate mode in this example anyway, so glRasterPos (...) and glDrawPixels (...) are still a valid approach.
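For illustration, writing that 2x2 block could look roughly like this (a sketch that reuses the x1/y1/z1 values from the loop above and assumes the projection maps raster positions to window pixels, as plot_pixel appears to rely on):
/* one RGB value repeated for the four pixels of the 2x2 block */
GLubyte block[2 * 2 * 3] = {
    x1, y1, z1,  x1, y1, z1,
    x1, y1, z1,  x1, y1, z1
};
glRasterPos2i(x, y);                                  /* lower-left corner of the block */
glDrawPixels(2, 2, GL_RGB, GL_UNSIGNED_BYTE, block);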
It seems you are implementing the formula here, but you deviate at the end from the direction the article takes.
First, the article warns that b and D can be very close in value, so computing -b + D can lose most of its precision (catastrophic cancellation). They suggest an alternative.
Also, you are testing that both t0 and t1 are > 0. This doesn't have to be true for you to hit the sphere; you could be inside of it (though you obviously should not be in your test scene).
Finally, I would add a test at the beginning to confirm that the direction vector is normalized. I've messed that up more than once in my renderers.
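That alternative is usually written along these lines (a sketch for a == 1, i.e. a normalized direction, where disc is the discriminant before the square root; a real implementation should also guard against q being zero):
float disc = b * b - 4.0f * c;
if (disc >= 0.0f) {
    /* choose the sign so b and the square root never nearly cancel */
    float q  = -0.5f * (b + copysignf(sqrtf(disc), b));
    float t0 = q;       /* one root (a == 1) */
    float t1 = c / q;   /* the other root, from t0 * t1 == c */
}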

Rotation Matrix Shrinks Objects?

Is my math wrong? The user is supposed to be able to input an angle in degrees and have it rotate the matrix accordingly. Instead, it shrinks the object and flips it... calling
glmxRotate(&modelview, 0.0f, 0.0f, 1.0f, 90.0f);
(with modelview being an identity matrix) yields:
Regular: http://i.imgur.com/eX7Td.png
Rotated: http://i.imgur.com/YnMEn.png
Here's glmxRotate:
glmxvoid glmxRotate(glmxMatrix* matrix, glmxfloat x, glmxfloat y, glmxfloat z,
                    glmxfloat angle)
{
    if(matrix -> mx_size != 4){GLMX_ERROR = GLMX_NOT_4X4; return;}
    //convert to rads
    angle *= 180.0f / 3.14159;
    const glmxfloat len = sqrtf((x * x) + (y * y) + (z * z)),
                    c = cosf(angle),
                    c1 = 1.0f - c,
                    s = sinf(angle);
    //normalize vector
    x /= len;
    y /= len;
    z /= len;
    glmxfloat rot_mx[] = {x * x * c1 + c,
                          x * y * c1 + z * s,
                          x * z * c1 - y * s,
                          0.0f,
                          x * y * c1 - z * s,
                          y * y * c1 + c,
                          y * z * c1 + x * s,
                          0.0f,
                          x * z * c1 + y * s,
                          y * z * c1 - x * s,
                          z * z * c1 + c,
                          0.0f,
                          0.0f,
                          0.0f,
                          0.0f,
                          1.0f,};
    _glmxMultiMatrixArray(matrix, rot_mx, 4);
}
Also, if a translation matrix is defined with the translation in the last column, how would one go about translating an identity matrix, since the outcome would always seem to yield 0s?
Your matrix looks correct to me, though are you aware that your angle-to-rads conversion is actually a radians-to-degrees conversion?
//convert to rads
angle *= 180.0f / 3.14159;
Should be Pi/180.f.
Because a float has only a limited amount of space to store its value, sin and cos are not calculated exactly. This means small errors creep in every time the object is rotated, so over time the object will get smaller.
If you want to avoid this, use quaternions for rotations.
https://www.npmjs.com/package/quaternion
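For illustration, a minimal axis-angle quaternion in C could look like the sketch below (it is independent of the linked JavaScript package and of glmx): keep the orientation as a quaternion, renormalize it after every incremental rotation (which is cheap and removes the drift), and only convert to a 4x4 matrix when one is needed.
#include <math.h>

typedef struct { float w, x, y, z; } quat;

/* quaternion for a rotation of 'angle' radians about the unit axis (ax, ay, az) */
static quat quat_from_axis_angle(float ax, float ay, float az, float angle)
{
    float s = sinf(angle * 0.5f);
    quat q = { cosf(angle * 0.5f), ax * s, ay * s, az * s };
    return q;
}

/* compose rotations: the result applies b first, then a */
static quat quat_mul(quat a, quat b)
{
    quat r;
    r.w = a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z;
    r.x = a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y;
    r.y = a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x;
    r.z = a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w;
    return r;
}

static quat quat_normalize(quat q)
{
    float n = sqrtf(q.w*q.w + q.x*q.x + q.y*q.y + q.z*q.z);
    q.w /= n; q.x /= n; q.y /= n; q.z /= n;
    return q;
}

/* column-major 4x4 rotation matrix from a unit quaternion */
static void quat_to_mat4(quat q, float m[16])
{
    float xx = q.x*q.x, yy = q.y*q.y, zz = q.z*q.z;
    float xy = q.x*q.y, xz = q.x*q.z, yz = q.y*q.z;
    float wx = q.w*q.x, wy = q.w*q.y, wz = q.w*q.z;
    m[0] = 1 - 2*(yy + zz); m[4] = 2*(xy - wz);     m[8]  = 2*(xz + wy);     m[12] = 0;
    m[1] = 2*(xy + wz);     m[5] = 1 - 2*(xx + zz); m[9]  = 2*(yz - wx);     m[13] = 0;
    m[2] = 2*(xz - wy);     m[6] = 2*(yz + wx);     m[10] = 1 - 2*(xx + yy); m[14] = 0;
    m[3] = 0;               m[7] = 0;               m[11] = 0;               m[15] = 1;
}
Each frame you would update with orientation = quat_normalize(quat_mul(delta, orientation)); and call quat_to_mat4(orientation, rot_mx); only when a matrix is needed, instead of multiplying small rotation matrices into the modelview over and over.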
