MRPT SLAM: mrpt::slam::CMetricMapBuilderICP warning "Pose Extrapolation failed"

I am using the MRPT library's CMetricMapBuilderICP to implement 2D SLAM in C++, using a SICK LMS151 to obtain CObservation2DRangeScan observations.
Whenever I provide a 2D range scan to the map builder, it generates a warning that the pose extrapolation has failed. How do I find out where the fault in my code is?
mrpt::slam::CMetricMapBuilderICP mapBuilder;
double RANGE_MAX = 20.0;
double RANGE_MIN = 0.05;

py::list processObservation(double timestamp, py::list scanRanges, py::tuple pose) {
    /* Takes an observation and a pose and returns the pose predicted by SLAM */
    mrpt::obs::CObservation2DRangeScan *rangescan = new mrpt::obs::CObservation2DRangeScan();
    // Set intensities to false, as our lidar does not send them
    rangescan->setScanHasIntensity(false);
    // Check that the scan is planar within +-0.08 rad in pitch and roll
    // (isPlanarScan() only performs a check; its return value is unused here)
    rangescan->isPlanarScan(0.08);
    rangescan->timestamp = mrpt::system::time_tToTimestamp(timestamp);
    rangescan->aperture = M_PI * 1.5;
    rangescan->maxRange = RANGE_MAX;
    // Sensor pose for the observation
    mrpt::poses::CPose3D Pose;
    Pose.setFromValues(pose[0].cast<double>() + base_to_lidar, pose[1].cast<double>(), 0, 0, 0, pose[2].cast<double>());
    rangescan->setSensorPose(Pose);
    std::vector<float> scanranges;
    std::vector<char> valid;  // filled in lockstep with scanranges below
    for (auto i : scanRanges) {
        float range = i.cast<float>();
        valid.push_back(range <= RANGE_MAX && range >= RANGE_MIN);
        scanranges.push_back(range);
    }
    rangescan->loadFromVectors(scanranges.size(), &scanranges[0], &valid[0]);
    mrpt::obs::CObservation2DRangeScan::Ptr obs_ptr(rangescan);
    try {
        mapBuilder.processObservation(obs_ptr);
    }
    catch (...) {
        std::cerr << "Cannot Process Observation. The old pose will be returned\n";
    }
    mrpt::poses::CPose3DPDF::Ptr predicted_pose = mapBuilder.getCurrentPoseEstimation();
    mrpt::math::CMatrixDouble cov;
    mrpt::poses::CPose3D mean;
    predicted_pose->getCovarianceDynAndMean(cov, mean);
    std::vector<double> pos_vector;
    pos_vector.push_back(mean.x());
    pos_vector.push_back(mean.y());
    pos_vector.push_back(mean.yaw());
    pos_vector.insert(pos_vector.end(), cov.begin(), cov.end());
    py::list list_pose = py::cast(pos_vector);
    return list_pose;
}
The expected output was the 2D pose as predicted by the ICP-SLAM algorithm, but that is not what I get. The actual output is as follows:
[10:36:40.1430|WARN |CMetricMapBuilderICP] processObservation(): new pose extrapolation failed, using last pose as is.
Cannot Process Observation. The old pose will be returned
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
[x, y, yaw, covariances]

As long as this does not happen all the time, it is just a warning, not an error, so you should not worry too much.
It means that the initial pose used by ICP will be the last pose as-is, without the additional refining step that extrapolates (guesses) the robot pose at the timestamp of the LiDAR scan from its velocity vector.
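One thing worth checking (my suggestion, not part of the original answer) is what actually ends up in rangescan->timestamp: the extrapolation needs valid, strictly increasing timestamps to estimate a velocity. A minimal sanity check, assuming MRPT's INVALID_TIMESTAMP constant; the helper name is my own:
#include <mrpt/system/datetime.h>
#include <iostream>

// Hypothetical helper: warn if a scan timestamp cannot support pose
// extrapolation (invalid, or not strictly increasing across scans).
void checkScanTimestamp(const mrpt::system::TTimeStamp stamp)
{
    static mrpt::system::TTimeStamp lastStamp = INVALID_TIMESTAMP;
    if (stamp == INVALID_TIMESTAMP)
        std::cerr << "Scan has an invalid timestamp\n";
    else if (lastStamp != INVALID_TIMESTAMP && stamp <= lastStamp)
        std::cerr << "Scan timestamps are not strictly increasing\n";
    lastStamp = stamp;
}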

Related

GLSL layout attribute number

I was trying out a shader example to draw a triangle with the RGB interpolated across the vertices, and assumed that using
layout (location = 0)in vec4 vertex;
layout (location = 1) in vec4 VertexColor;
in the vertex shader would work, since the 4 float colors immediately follow 4 float vertices in the array. However, it always drew a solid red triangle. So I tried location = 4 only to get a black screen. Experimenting further gave a blue triangle for location = 2, and finally got the interpolated result with location = 3.
My question is, how was I expected to know to enter 3 as the location? The vertex array looks like this:
GLfloat vertices[] = {
    -1.5, -1.5, 0.0, 1.0, // first 3D vertex
     1.0,  0.0, 0.0, 1.0, // first color
     0.0,  1.5, 0.0, 1.0, // second vertex
     0.0,  1.0, 0.0, 1.0, // second color
     1.5, -1.5, 0.0, 1.0, // third vertex
     0.0,  0.0, 1.0, 1.0  // third color
};
Note: the first code block was edited from layout = 3 back to the original layout = 1.
Each location can hold 4 floats (a single vec4), so a valid option would also be:
layout (location = 0) in vec4 vertex;
layout (location = 1) in vec4 VertexColor;
What dictates where each attribute comes from is the set of glVertexAttribPointer calls.
These are the ones I would expect for the GLSL declaration above (assuming you use a VBO):
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, sizeof(float) * 4 * 2, (const void*)0);
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, sizeof(float) * 4 * 2, (const void*)(sizeof(float) * 4));
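For completeness, a minimal sketch of the surrounding setup under the same assumptions (the interleaved vertices array from the question, uploaded into a VBO; the name vbo is my placeholder):
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

// Attribute 0 = position, attribute 1 = color; stride is one full vertex (8 floats).
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, sizeof(float) * 8, (const void*)0);
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, sizeof(float) * 8, (const void*)(sizeof(float) * 4));

// The arrays must also be enabled, or the shader sees a constant attribute.
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);

glDrawArrays(GL_TRIANGLES, 0, 3);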

Trouble getting view (lookat) and projection (perspective) matrices to work properly

I've been following the open.gl tutorials without using the GLM library, for reasons (stubbornness and C).
I can't get the view and projection matrices to work properly.
Here's the relevant vertex shader code:
#version 150 core

in vec3 size;
in vec3 color;
in vec2 texcoord;
out vec3 Color;
out vec2 Texcoord;
uniform vec3 pos;
uniform float angle;
uniform vec3 camPos;
uniform vec3 camTarget;
const float fov = 90, ratio = 4.0/3.0, near = 1.0, far = 10.0;

mat4 projection()
{
    float t = tan(radians(fov)),
          l = ratio * t;
    return mat4(
        vec4(near/l, 0.0, 0.0, 0.0),
        vec4(0.0, near/t, 0.0, 0.0),
        vec4(0.0, 0.0, -(far+near)/(far-near), -(2*far*near)/(far-near)),
        vec4(0.0, 0.0, -1.0, 0.0)
    );
}

mat4 rotZ(float theta)
{
    return mat4(
        vec4(cos(theta), -sin(theta), 0.0, 0.0),
        vec4(sin(theta), cos(theta), 0.0, 0.0),
        vec4(0.0, 0.0, 1.0, 0.0),
        vec4(0.0, 0.0, 0.0, 1.0)
    );
}

mat4 translate(vec3 translation)
{
    return mat4(
        vec4(1.0, 0.0, 0.0, 0.0),
        vec4(0.0, 1.0, 0.0, 0.0),
        vec4(0.0, 0.0, 1.0, 0.0),
        vec4(translation.x, translation.y, translation.z, 1.0)
    );
}

mat4 lookAtRH(vec3 eye, vec3 target)
{
    vec3 zaxis = normalize(target - eye);                      // The "forward" vector.
    vec3 xaxis = normalize(cross(vec3(0.0, 0.0, 1.0), zaxis)); // The "right" vector.
    vec3 yaxis = normalize(cross(zaxis, xaxis));               // The "up" vector.
    mat4 axis = {
        vec4(xaxis.x, yaxis.x, zaxis.x, 0),
        vec4(xaxis.y, yaxis.y, zaxis.y, 0),
        vec4(xaxis.z, yaxis.z, zaxis.z, 0),
        vec4(dot(xaxis, -eye), dot(yaxis, -eye), dot(zaxis, -eye), 1)
    };
    return axis;
}

void main()
{
    Color = color;
    Texcoord = texcoord;
    mat4 model = translate(pos) * rotZ(angle);
    mat4 view = lookAtRH(camPos, camTarget);
    gl_Position = projection() * view * model * vec4(size, 1.0);
}
From tweaking things around it seems as if the view matrix is correct, but the projection matrix is causing the dodginess.
First I must remark that it is a very bad idea to do this directly in the shaders.
However, if you really want to, you can do this. You should be aware that the GLSL matrix constructors work with column vectors. Your projection matrix is thus specified transposed (however, your translation matrix is correct).
EDIT: If you want pure C, here is a nice math lib (plus you can check the code :) ): https://github.com/datenwolf/linmath.h
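To illustrate the column-vector point on the C side: each mat4 constructor argument above is a column, so the -(2*far*near)/(far-near) term must land in the fourth column. A sketch of a standard column-major perspective matrix in the layout glUniformMatrix4fv expects (my own helper, not from the tutorial or linmath.h; note that the standard formula uses the tangent of half the field of view):
#include <math.h>

/* Right-handed perspective matrix, stored column-major (m[column][row]),
 * which is what glUniformMatrix4fv expects with transpose = GL_FALSE.
 * fovDeg is the vertical field of view in degrees. */
void perspective(float m[4][4], float fovDeg, float ratio, float n, float f)
{
    const float t = n * tanf(fovDeg * 3.14159265f / 360.0f); /* half-height of near plane */
    const float l = ratio * t;                               /* half-width of near plane */
    for (int c = 0; c < 4; ++c)
        for (int r = 0; r < 4; ++r)
            m[c][r] = 0.0f;
    m[0][0] = n / l;
    m[1][1] = n / t;
    m[2][2] = -(f + n) / (f - n);
    m[2][3] = -1.0f;                      /* column 2, row 3: perspective divide */
    m[3][2] = -(2.0f * f * n) / (f - n);  /* column 3, row 2: depth translation */
}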
Never do something like that :) Creating matrices in a shader is a very bad idea...
The vertex shader is executed for each vertex, so if you pass a thousand vertices to the shader, you calculate the same matrices a thousand times. I think there's nothing more to explain :)
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

... // somewhere
glm::mat4 projection = glm::perspective(45.0f, float(window_width) / window_height, 0.1f, 100.0f);
glm::mat4 world(1.0f); // world/model matrix
glm::mat4 view(1.0f);  // view/camera matrix
glm::mat4 mvp = projection * view * world;

... // inside the main loop
glUniformMatrix4fv(glGetUniformLocation(program, "mvpMatrix"), 1, GL_FALSE, &mvp[0][0]);
draw_mesh();
It's really cool and optimised :)

3D pyramid appears scattered, with mixed up sides

First of all I defined a structure to express the coordinates of a pyramid:
typedef struct
{
    GLfloat xUp;
    GLfloat yUp;
    GLfloat zUp;
    GLfloat base;
    GLfloat height;
} pyramid;
Pretty self-explanatory: I store the coordinates of the topmost point, the base, and the height.
Then I wrote a function to draw a pyramid:
void drawPyramid(pyramid pyr)
{
    GLfloat p1[] = {pyr.xUp+pyr.base/2.0, pyr.yUp-pyr.height, pyr.zUp-pyr.base/2.0};
    GLfloat p2[] = {pyr.xUp+pyr.base/2.0, pyr.yUp-pyr.height, pyr.zUp+pyr.base/2.0};
    GLfloat p3[] = {pyr.xUp-pyr.base/2.0, pyr.yUp-pyr.height, pyr.zUp+pyr.base/2.0};
    GLfloat p4[] = {pyr.xUp-pyr.base/2.0, pyr.yUp-pyr.height, pyr.zUp-pyr.base/2.0};
    GLfloat up[] = {pyr.xUp, pyr.yUp, pyr.zUp};

    glBegin(GL_TRIANGLES);
    glColor4f(1.0, 0.0, 0.0, 0.0);
    glVertex3fv(up);
    glVertex3fv(p1);
    glVertex3fv(p2);

    glColor4f(0.0, 1.0, 0.0, 0.0);
    glVertex3fv(up);
    glVertex3fv(p2);
    glVertex3fv(p3);

    glColor4f(0.0, 0.0, 1.0, 0.0);
    glVertex3fv(up);
    glVertex3fv(p3);
    glVertex3fv(p4);

    glColor4f(1.0, 1.0, 0.0, 0.0);
    glVertex3fv(up);
    glVertex3fv(p4);
    glVertex3fv(p1);
    glEnd();

    glColor4f(0.0, 1.0, 1.0, 0.0);
    glBegin(GL_QUADS);
    glVertex3fv(p1);
    glVertex3fv(p2);
    glVertex3fv(p3);
    glVertex3fv(p4);
    glEnd();
}
I struggled to draw all the vertices in anti-clockwise order, but probably I messed up something.
This is how I display the pyramid in my rendering function:
void display()
{
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glTranslatef(0.0, -25.0, 50.0);
    glRotatef(-angle, 0.0, 1.0, 0.0);
    glTranslatef(0.0, 25.0, -50.0);
    pyramid pyr;
    pyr.xUp = 0.0;
    pyr.yUp = 10.0;
    pyr.zUp = 50.0;
    pyr.base = 10.0;
    pyr.height = 18.0;
    glColor4f(1.0, 0.0, 0.0, 0.0);
    drawPyramid(pyr);
    glutSwapBuffers();
}
I also use an init method called before the glut main loop:
void init()
{
    glEnable(GL_DEPTH);
    glViewport(-1.0, 1.0, -1.0, 1.0);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(35.0, 1.0, 1.0, 100.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0,1.0,0.0, 0.0,1.0,30.0, 0.0,1.0,0.0);
}
angle is just a double that I use to rotate the pyramid, changeable by pressing 'r', but this is not relevant. It appears that the real problem is how I draw the vertices.
The problem is that the faces of the pyramid appear scattered and messed up. I would better describe this situation with an image:
There's a face that is too small that gets displayed, and I don't know why.
If I rotate the pyramid it appears messed up; I even recorded a video to show this.
I could upload it later if the problem is not totally clear.
PS: Many people have noticed that I am using outdated techniques.But unfortunately this is what my university offers.
EDIT
I forgot to include the main function:
int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitWindowPosition(100, 100);
    glutInitWindowSize(500, 500);
    glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
    glutCreateWindow("Sierpinsky Pyramid");
    glutDisplayFunc(display);
    glutKeyboardFunc(keyboard);
    init();
    glutMainLoop();
    return 0;
}
It looks like the depth buffer isn't initialized.
Calling glEnable(GL_DEPTH_TEST) is not enough. You must correctly initialize GLUT and specify that you want depth buffer support, otherwise you won't get a depth buffer. If I remember correctly, this is done using glutInitDisplayMode(GLUT_DEPTH | ...). See the GLUT documentation; additional info can be found using Google.
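For reference, the posted main() already requests a depth buffer this way:
glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
so the remaining depth issue is the glEnable parameter addressed in the edit below.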
--EDIT--
You're passing an invalid parameter to glEnable. Call glEnable(GL_DEPTH_TEST) instead of glEnable(GL_DEPTH).
Also:
The matrix code in the display function isn't protected by glPushMatrix/glPopMatrix, which means that every call applies the rotation on top of the previous transform, i.e. merely calling the display function will rotate the pyramid.
glViewport is called with invalid parameters. glViewport takes 4 integer arguments, but you're trying to pass floats. Also, what's a "width of -1.0" supposed to mean?
You have not checked any error codes (glGetError). If you called glGetError after the glEnable call, you'd see that it returns GL_INVALID_ENUM.
OpenGL has documentation, available on opengl.org. Use it and read it. I'd also recommend reading the "OpenGL Red Book".
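Putting those points together, here is a minimal corrected sketch of init() and display() (my own assembly of the fixes above, not the answerer's code; it reuses angle, pyramid, and drawPyramid from the question, and the 500x500 viewport matches glutInitWindowSize in main()):
void init()
{
    glEnable(GL_DEPTH_TEST);      /* GL_DEPTH is not a valid glEnable() capability */
    glViewport(0, 0, 500, 500);   /* integer x, y, width, height */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(35.0, 1.0, 1.0, 100.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0,1.0,0.0, 0.0,1.0,30.0, 0.0,1.0,0.0);
}

void display()
{
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glPushMatrix();               /* keep the rotation from accumulating across frames */
    glTranslatef(0.0, -25.0, 50.0);
    glRotatef(-angle, 0.0, 1.0, 0.0);
    glTranslatef(0.0, 25.0, -50.0);
    pyramid pyr = {0.0, 10.0, 50.0, 10.0, 18.0};
    drawPyramid(pyr);
    glPopMatrix();                /* restore the modelview set up in init() */
    glutSwapBuffers();
}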

Texturing a quad (triangle strip) OpenGL ES 2.0 [closed]

I have a 256*256 texture and everything is fine, except the part where I try to map the whole texture onto the whole quad, which is ruined.
My quad is textured only about 1/5 of the way from left to right; the rest is black, which means no texturing there.
Texturing in general is working like it should.
glFrontFace is untouched (default).
Texture created with:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
GLfloat quad_pos[] =
{
     0.5,  0.5, 0.0,
    -0.5,  0.5, 0.0,
     0.5, -0.5, 0.0,
    -0.5, -0.5, 0.0
};

// Maybe I need 5th and 6th vertices' tex coords?
GLfloat quad_tex[] =
{
    0.0, 1.0,
    0.0, 0.0,
    1.0, 1.0,
    1.0, 0.0
};

GLfloat quad_col[] =
{
    0.0, 1.0, 1.0, 1.0,
    1.0, 0.0, 1.0, 1.0,
    1.0, 1.0, 0.0, 1.0,
    0.0, 0.0, 1.0, 1.0
};

// In draw_quad() method
// Binding texture and setting uniforms/attributes skipped
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Vertex shader:
attribute vec4 av_col;
attribute vec4 av_pos;
attribute vec2 av_tex;
varying vec4 vv_col;
varying vec2 vv_tex;
uniform mat4 um_mvp;

void main()
{
    vv_col = av_col;
    vv_tex = av_tex;
    gl_Position = um_mvp * av_pos;
}
Fragment shader:
precision lowp float;
uniform sampler2D us_tex;
varying vec4 vv_col;
varying vec2 vv_tex;

void main()
{
    gl_FragColor = vv_col * texture2D(us_tex, vv_tex);
}
If I change the tex coords to
GLfloat quad_tex[] =
{
    0.0, 0.2,
    0.0, 0.0,
    0.2, 0.2,
    0.2, 0.0
};
the quad will be fully textured and colored, but the texture will be overscaled (Minecraft pixel style).
The quad will be fully textured and colored, but the texture will be overscaled (Minecraft pixel style)
It sounds as though you have the default GL_NEAREST filtering enabled. Try using GL_LINEAR instead:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
http://www.khronos.org/opengles/sdk/docs/man/xhtml/glTexParameter.xml
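One more thing worth checking (my addition, not part of the answer above): in ES 2.0 the default minification filter is GL_NEAREST_MIPMAP_LINEAR, so a texture without a complete mipmap chain is incomplete and samples as black, which would also explain the black region. Setting both filters explicitly avoids that:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // no mipmaps required
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// or keep mipmapped filtering and generate the chain:
// glGenerateMipmap(GL_TEXTURE_2D);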

Drawing using Vertex Buffer Objects in OpenGL ES 1.1 vs ES 2.0

I am new to OpenGL. I am using the Apple documentation as my main reference:
http://developer.apple.com/library/ios/#documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/TechniquesforWorkingwithVertexData/TechniquesforWorkingwithVertexData.html#//apple_ref/doc/uid/TP40008793-CH107-SW6
My problem is that I am using OpenGL ES 1.1 and not 2.0, thus the functions used in Listing 9-3, such as glVertexAttribPointer and glEnableVertexAttribArray, are not recognized ... :)
I am trying to make the optimizations which are described in this documentation:
to hold indices and vertices as a struct with all of their data: position and color (Listing 9-1)
typedef struct _vertexStruct
{
    GLfloat position[3];
    GLubyte color[4];
} VertexStruct;

const VertexStruct vertices[] = {...};
const GLushort indices[] = {...};
and to use VBOs as in Listings 9-2 and 9-3.
As I mentioned, some of the functions used there don't exist in OpenGL ES 1.1. I am wondering if there is a way to do the same in ES 1.1, maybe with some other code?
Thanks,
Alex
Edit: following Christian's answer, I tried to use glVertexPointer and glColorPointer.
Here is the code; it draws the cube but with no colors ... :(. Is it possible to use VBOs in such a manner using ES 1.1?
typedef struct {
    GLubyte red;
    GLubyte green;
    GLubyte blue;
    GLubyte alpha;
} Color3D;

typedef struct {
    GLfloat x;
    GLfloat y;
    GLfloat z;
} Vertex3D;

typedef struct {
    Vertex3D position;
    Color3D color;
} MeshVertex;
Cube Data:
static const MeshVertex meshVertices [] =
{
{ { 0.0, 1.0, 0.0 } , { 1.0, 0.0, 0.0 ,1.0 } },
{ { 0.0, 1.0, 1.0 } , { 0.0, 1.0, 0.0 ,1.0 } },
{ { 0.0, 0.0, 0.0 } , { 0.0, 0.0, 1.0 ,1.0 } },
{ { 0.0, 0.0, 1.0 } , { 1.0, 0.0, 0.0, 1.0 } },
{ { 1.0, 0.0, 0.0 } , { 0.0, 1.0, 0.0, 1.0 } },
{ { 1.0, 0.0, 1.0 } , { 0.0, 0.0, 1.0, 1.0 } },
{ { 1.0, 1.0, 0.0 } , { 1.0, 0.0, 0.0, 1.0 } },
{ { 1.0, 1.0, 1.0 } , { 0.0, 1.0, 0.0, 1.0 } }
};
static const GLushort meshIndices[] =
{
    0, 1, 2,
    2, 1, 3,
    2, 3, 4,
    3, 5, 4,
    0, 2, 6,
    6, 2, 4,
    1, 7, 3,
    7, 5, 3,
    0, 6, 1,
    1, 6, 7,
    6, 4, 7,
    4, 5, 7
};
Function
GLuint vertexBuffer;
GLuint indexBuffer;

- (void) CreateVertexBuffers
{
    glGenBuffers(1, &vertexBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(meshVertices), meshVertices, GL_STATIC_DRAW);

    glGenBuffers(1, &indexBuffer);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(meshIndices), meshIndices, GL_STATIC_DRAW);
}

- (void) DrawModelUsingVertexBuffers
{
    glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
    glVertexPointer(3, GL_FLOAT, sizeof(MeshVertex), (void*)offsetof(MeshVertex, position));
    glEnableClientState(GL_VERTEX_ARRAY);
    glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(MeshVertex), (void*)offsetof(MeshVertex, color));
    glEnableClientState(GL_COLOR_ARRAY);

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
    glDrawElements(GL_TRIANGLE_STRIP, sizeof(meshIndices)/sizeof(GLushort), GL_UNSIGNED_SHORT, (void*)0);

    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_COLOR_ARRAY);
}
Functions like glVertexAttribPointer and glEnableVertexAttribArray are used for generic custom vertex attributes (which are the only supported method for submitting vertex data in OpenGL ES 2.0).
When using the fixed-function pipeline (as you have to in OpenGL ES 1.1) you just use the built-in attributes (think of the glVertex and glColor calls you might have used before switching to vertex arrays). There are functions for each attribute, named similarly to their immediate-mode counterparts, like glVertexPointer or glColorPointer (instead of glVertexAttribPointer). These arrays are enabled/disabled by calling gl(En/Dis)ableClientState with constants like GL_VERTEX_ARRAY or GL_COLOR_ARRAY (instead of gl(En/Dis)ableVertexAttribArray).
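To make that mapping concrete, here is a short sketch assuming an interleaved position+color vertex like the asker's MeshVertex (the attribute indices 0 and 1 are illustrative; in ES 2.0 they come from your shader program):
const GLsizei stride = sizeof(MeshVertex);
const void* posPtr = (const void*)offsetof(MeshVertex, position);
const void* colPtr = (const void*)offsetof(MeshVertex, color);

// OpenGL ES 2.0: generic attributes.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, posPtr);
glEnableVertexAttribArray(0);
glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, stride, colPtr);
glEnableVertexAttribArray(1);

// OpenGL ES 1.1: fixed-function built-in arrays.
glVertexPointer(3, GL_FLOAT, stride, posPtr);
glEnableClientState(GL_VERTEX_ARRAY);
glColorPointer(4, GL_UNSIGNED_BYTE, stride, colPtr);
glEnableClientState(GL_COLOR_ARRAY);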
But as a general rule you should not learn OpenGL ES 1.1 programming from 2.0 resources, as much of the information won't be of use to you (at least if you are new to OpenGL). For example, some methods described on your linked site may not be supported in 1.1, like VBOs or even VAOs. But I also have to admit that I have no ES experience at all, so I am not perfectly sure about that.
EDIT: Regarding your updated code: I assume "no colors" means the cube is a single color, probably white. In your first code example you used GLubyte color[4], and now it's some Color3D type; maybe this doesn't fit the glColorPointer(4, GL_UNSIGNED_BYTE, ...) call (where the first argument is the number of components and the second one the type)?
If your Color3D type only contains 3 components or floating-point colors, I would suggest you use 4-ubyte colors anyway, because together with your 3 floats for the position you get a perfectly 16-byte-aligned vertex, which is also an optimization they suggest in the link you provided.
And by the way, the repetition of the index buffer creation in your CreateVertexBuffers function is rather a typo, isn't it?
EDIT: Your colors contain ubytes (which range from 0 (black) to 255 (full color)) and you initialize them with floats. So your float value 1.0 (which should surely mean full color) is converted to ubyte and you get 1, which compared to the whole [0,255] range is still very small, so everything is black. When you use ubytes, then you should also initialize them with ubytes, so just replace every 0.0 with 0 and every 1.0 with 255 in the color data.
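Applying that fix to the cube data from the question (same vertices; only the color initializers change from floats to ubytes):
// Colors now initialized as GLubytes in [0, 255] instead of floats.
static const MeshVertex meshVertices[] =
{
    { { 0.0, 1.0, 0.0 }, { 255,   0,   0, 255 } },
    { { 0.0, 1.0, 1.0 }, {   0, 255,   0, 255 } },
    { { 0.0, 0.0, 0.0 }, {   0,   0, 255, 255 } },
    { { 0.0, 0.0, 1.0 }, { 255,   0,   0, 255 } },
    { { 1.0, 0.0, 0.0 }, {   0, 255,   0, 255 } },
    { { 1.0, 0.0, 1.0 }, {   0,   0, 255, 255 } },
    { { 1.0, 1.0, 0.0 }, { 255,   0,   0, 255 } },
    { { 1.0, 1.0, 1.0 }, {   0, 255,   0, 255 } }
};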
And by the way, since you're using VBOs in ES 1.1 and at least something is drawn, ES 1.1 evidently supports VBOs. I didn't know that. But I'm not sure if it also supports VAOs.
And by the way, you should call glBindBuffer(GL_ARRAY_BUFFER, 0), and likewise for the element array buffer, after you're finished using them at the end of these two functions. Otherwise you may get problems in other functions which assume no buffers are bound while these are still bound. Always remember that OpenGL is a state machine, and every state you set stays until it's changed again or the context is destroyed.
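In code, that unbinding advice is just two extra calls at the end of CreateVertexBuffers and DrawModelUsingVertexBuffers (binding buffer object 0 restores the "no buffer bound" state):
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);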
