How to get heading direction from raw IMU data? - C

I have raw IMU data with acceleration and rotation rate for each axis (x, y, z), but I don't know which axis points along gravity. The IMU is mounted differently on each object, so I can't tell its installation orientation in advance: sometimes the x-axis is the gravity direction, sometimes the y-axis, sometimes the z-axis, and sometimes none of them exactly.
I need to detect when the object (with the IMU mounted on it) is accelerating at 1 m/s^2 or more in its heading direction (the direction of travel).
If the z-axis is the gravity direction and the x-axis is the direction of motion, I only need to check whether Ax is 1 m/s^2 or more (assuming the IMU is mounted as in the image below).
[Figure 1: IMU mounted with the z-axis along gravity and the x-axis along the direction of motion]
But I don't know which axis is the direction of motion and which is the direction of gravity, so I want to work out the direction of motion from the three acceleration signals and the three gyro signals.
Even if the sensor is mounted at an angle, as shown in Figure 2, how can I tell that it is accelerating at 1 m/s^2 in the direction of motion? I need to code this in C, and since there is very little computing headroom in my embedded environment, the implementation should be as simple as possible. Is there a good solution?
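One common low-cost approach, sketched below with made-up names and constants (units assumed to be m/s^2), is to estimate the gravity vector with a simple low-pass filter and then check how much acceleration remains once the gravity component is removed. This only gives the magnitude of the horizontal acceleration, not which horizontal direction is "forward", but detecting the 1 m/s^2 threshold does not require knowing that direction.

```c
#include <math.h>

/* Hypothetical names; adapt to your driver's output units (m/s^2). */
typedef struct { float x, y, z; } vec3;

static vec3        g_est = {0.0f, 0.0f, 0.0f};  /* low-pass estimate of gravity */
static const float ALPHA = 0.02f;               /* filter constant, depends on sample rate */

/* Call once per accelerometer sample. Returns the magnitude of the
 * acceleration component perpendicular to the estimated gravity axis. */
float horizontal_accel(vec3 a)
{
    /* 1. Track gravity with a simple IIR low-pass filter: the slowly
     *    varying part of the signal is gravity, whichever axis it sits on. */
    g_est.x += ALPHA * (a.x - g_est.x);
    g_est.y += ALPHA * (a.y - g_est.y);
    g_est.z += ALPHA * (a.z - g_est.z);

    float gmag = sqrtf(g_est.x*g_est.x + g_est.y*g_est.y + g_est.z*g_est.z);
    if (gmag < 1e-3f)
        return 0.0f;                            /* avoid dividing by zero before settling */

    /* 2. Remove the component of the current sample that lies along gravity. */
    float dot = (a.x*g_est.x + a.y*g_est.y + a.z*g_est.z) / (gmag*gmag);
    vec3  h   = { a.x - dot*g_est.x, a.y - dot*g_est.y, a.z - dot*g_est.z };

    /* 3. What is left is the acceleration in the horizontal plane;
     *    compare its magnitude against the 1 m/s^2 threshold. */
    return sqrtf(h.x*h.x + h.y*h.y + h.z*h.z);
}
```

A caveat: the gravity estimate is only valid if the object spends most of its time not accelerating, and the filter constant has to be tuned to your sample rate; the gyro signals can additionally be used to reject samples taken while the object is turning.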

Related

HLSL circular/spiral transition shader

I am developing some shaders for WPF, and so far I have managed to get a fade-out and a swipe shader working, but for this one I have no idea where to start.
Could someone please give me a few tips on how to approach this problem?
What I am trying to achieve is the following:
Thank you
In my opinion the easiest way to build any complicated effect is to decompose the original effect into small parts and add them together. In your case the effect consists of 3 parts:
5 rings filled after each other
each ring is filled counterclockwise from the left
the filled part of each ring ends in a circle (a rounded cap) at both ends
This in mind you can build a solution for each part separately and add the results together.
I assume that there will be a variable float progress; running from 0 to 1 which determines the progress of the transition.
In the following are some starting points for each part:
For 1. you check the distance from the fragment's texture coordinate to the center of the screen and divide the maximum distance into 5 parts. While 0 <= progress < 0.2 the first ring is visible, while 0.2 <= progress < 0.4 the second, and so on.
For 2. you check the angle between the vector from the center to the fragment and the left vector, e.g. using atan2. Within each part (such as 0.0-0.2) you compare the local progress of that stage against the angle to determine visibility, so that fragments appear in an angle-dependent way.
Part 3. might be the trickiest, as you have to construct the centers of the ends of the progress ring and compute the distance from the fragment to them. If it is within the current ring thickness, it is visible.
Hopefully these quick thoughts give you a rough starting point for your effect! A sketch of the first two parts follows below.
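To make parts 1 and 2 concrete, here is a minimal CPU-side sketch of the per-fragment decision in C (all names are made up, the ring caps of part 3 are left out, and the arithmetic maps directly onto the HLSL length and atan2 intrinsics):

```c
#include <math.h>

#define NUM_RINGS 5
#define PI_F 3.14159265f

/* u, v: fragment texture coordinate in [0,1]; progress: 0..1.
 * Returns 1 if this fragment should already show the "new" image
 * (parts 1 and 2 of the decomposition only; ring caps are left out). */
int spiral_filled(float u, float v, float progress)
{
    float dx = u - 0.5f, dy = v - 0.5f;

    /* Part 1: split the maximum center distance into NUM_RINGS bands and
     * find which band (ring) this fragment belongs to. */
    float max_r = sqrtf(0.5f);                 /* center-to-corner distance */
    float r     = sqrtf(dx * dx + dy * dy);
    int   ring  = (int)(r / max_r * NUM_RINGS);
    if (ring >= NUM_RINGS) ring = NUM_RINGS - 1;

    /* Each ring owns a 1/NUM_RINGS slice of the overall progress. */
    float local = progress * NUM_RINGS - (float)ring;  /* 0..1 within this ring's stage */
    if (local <= 0.0f) return 0;               /* this ring's stage has not started   */
    if (local >= 1.0f) return 1;               /* this ring is already fully filled   */

    /* Part 2: within the active ring, fill counterclockwise starting from
     * the left direction (-x). Depending on whether v grows up or down on
     * screen this may appear clockwise; flip the sign of dy if so. */
    float angle     = atan2f(dy, dx);                   /* (-pi, pi], 0 = +x direction */
    float from_left = (angle + PI_F) / (2.0f * PI_F);   /* 0 at left, 1 after a full turn */
    return from_left <= local;
}
```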

Make aSCNParticle's orientation match surface/vertex release

Is it possible to make particles released from the surface of a geometry object (or from its vertices) come out at an angle that reflects/represents their direction of travel?
e.g. if the emitter object is a cube and particles are moving out from each of the cube's 6 faces, each particle faces the same way as the face it is coming off.
I've only been able to get them to move out correctly from the faces/vertices, but the particles are always aligned to the camera, the screen, or "free"; in every case they essentially face only one direction, not the six they could/should if each took on the orientation of its origin and direction of travel from the faces/vertices of the cube.
What I want, is something like this behaviour from the particles emitting from the object (a cube in this example, but the principles the same for any kind of object).
EDIT: the above is just an example.
Imagine this on a much grander scale; it's not exactly like the below, but it should give you some idea of the goal, only even MORE so:
You can use 6 emitters with the orientation mode set to "SCNParticleOrientationModeFree" and local=YES, then control the orientation with the node that owns each emitter.

shadow and shading

I have read lots of ray tracer algorithms on the web, but I have no clear understanding of shading and shadows. Is the pseudocode below, written according to my understanding, correct?
for each primitive
    check for intersection
    if there is one
        do color be half of the background color
        Ishadow = true
        break
for each ambient light in environment
    calculate light contribution to the color
if ( Ishadow == false )
    for each point light
        calculate diffuse shading
        calculate reflection direction
        calculate specular light
        trace for reflection ray // (i)
        add color returned from i after multiplied by some coefficient
        trace for refraction ray // (ii)
        add color returned from ii after multiplied by some coefficient
return color value calculated until this point
You should integrate your shadows with the normal ray-tracing path:
For every screen pixel you send a ray through the scene and eventually determine the closest object intersection. At that point you first read the surface color (the object's texture at that point) and compute the reflection vector etc. using the normal vector. In addition, you now cast a ray from that intersection point to each of the light sources in the scene: if such a ray intersects another object before reaching the light source, then the intersection point is in shadow with respect to that light and you adapt its final color accordingly.
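As a hedged C sketch of that shadow test at the closest hit point (the vec3 type and scene_intersect are placeholders for whatever your tracer already provides):

```c
#include <math.h>

typedef struct { float x, y, z; } vec3;

/* Placeholder for the tracer's existing intersection routine: returns 1 on a
 * hit and writes the hit distance along the ray into *t. */
int scene_intersect(vec3 origin, vec3 dir, float *t);

/* Returns 1 if the hit point can see the light, 0 if it is in shadow. */
int light_visible(vec3 hit_point, vec3 light_pos)
{
    vec3 dir = { light_pos.x - hit_point.x,
                 light_pos.y - hit_point.y,
                 light_pos.z - hit_point.z };
    float dist_to_light = sqrtf(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);
    dir.x /= dist_to_light;
    dir.y /= dist_to_light;
    dir.z /= dist_to_light;

    float t;
    /* Something sits between the point and the light -> the point is in shadow. */
    if (scene_intersect(hit_point, dir, &t) && t < dist_to_light)
        return 0;
    return 1;   /* unblocked: add this light's diffuse/specular contribution */
}
```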
The trouble with pseudocode is that it is easy to get "pseudo" enough that it becomes the same well of ambiguity that we are trying to avoid by getting away from natural languages. "Color be half of the background color?" The fact that this line appears before you iterate through your light sources is confusing. How can you be setting Ishadow before you iterate over light sources?
Maybe a better description would be:
given a ray in space
    find nearest object with which ray intersects
    for each point light
        if normal at surface of intersected object points toward light (use dot product for this)
            cast a ray into space from the surface toward the light
            if ray intersection is closer than light*
                light is shadowed at this point
*If you're seeing strange artifacts in your shadows, there is a mistake that is made by every single programmer when they write their first ray tracer. Floating point (or double-precision) math is imprecise and you will frequently (about half the time) re-intersect yourself when doing a shadow trace. The explanation is a bit hard to describe without diagrams, but let me see what I can do.
If you have an intersection point on the surface of a sphere, under most circumstances, that point's representation in a floating point register is not mathematically exact. It is either slightly inside or slightly outside the sphere. If it is inside the sphere and you try to run an intersection test to a light source, the nearest intersection will be the sphere itself. The intersection distance will be very small, so you can simply reject any shadow ray intersection that is closer than, say .000001 units. If your geometry is all convex and incapable of legitimately shadowing itself, then you can simply skip testing the sphere when doing shadow tests.
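In code, the fix described above usually looks like a minimum accepted hit distance on the shadow ray. A sketch, reusing the vec3 type and the scene_intersect placeholder from the earlier snippet:

```c
#define SHADOW_EPSILON 1e-4f   /* tune to the scale of your scene */

/* Same shadow test as before, but hits closer than SHADOW_EPSILON are ignored
 * so the surface cannot re-shadow itself due to floating-point error. */
int light_visible_eps(vec3 hit_point, vec3 light_pos)
{
    vec3 dir = { light_pos.x - hit_point.x,
                 light_pos.y - hit_point.y,
                 light_pos.z - hit_point.z };
    float dist = sqrtf(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);
    dir.x /= dist; dir.y /= dist; dir.z /= dist;

    float t;
    if (scene_intersect(hit_point, dir, &t) && t > SHADOW_EPSILON && t < dist)
        return 0;   /* genuinely blocked by a surface farther than epsilon */
    return 1;       /* visible; any near self-hit was rejected */
}
```

An equivalent alternative is to nudge the shadow ray's origin slightly along the surface normal before tracing it.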

Calculating distance using a single camera

I would like to calculate the distance to certain objects in the scene. I know that I can only calculate relative distance when using a single camera, but I know the coordinates of some objects in the scene, so in theory it should be possible to calculate the actual distance. According to the OpenCV mailing list archives,
http://tech.groups.yahoo.com/group/OpenCV/message/73541
cvFindExtrinsicCameraParams2 is the function to use, but I can't find any information on how to use it.
PS: assume the camera is properly calibrated.
My guess would be: you know the size of an object, e.g. a ball that is 6 inches across and 6 inches tall, and you can see that it appears 20 pixels tall and 25 pixels wide in the image. You also know the ball is 10 feet away. That would be your starting point.
Extrinsic parameters wouldn't help you, I don't think, because they describe the camera's location and rotation in space relative to another camera or to an origin. For a one-camera system, the camera is the origin.
Intrinsic parameters might help. I'm not sure, I've only done it using two cameras.
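For what it's worth, the similar-triangles arithmetic that the first paragraph hints at looks roughly like this in C (the numbers are the example values above; normally the focal length in pixels would come from your calibration data rather than a known-distance measurement):

```c
#include <stdio.h>

/* Pinhole camera model: pixel_size / focal_px = real_size / distance,
 * so distance = focal_px * real_size / pixel_size. */
double distance_from_size(double focal_px, double real_size, double pixel_size)
{
    return focal_px * real_size / pixel_size;
}

int main(void)
{
    /* Calibrate the focal length once from a known case: a 6 in ball,
     * 25 px wide, at a known 120 in (10 ft) distance. */
    double focal_px = 25.0 * 120.0 / 6.0;   /* = 500 px */

    /* Later, the same 6 in ball measured at 20 px wide: */
    printf("estimated distance: %.1f in\n", distance_from_size(focal_px, 6.0, 20.0));
    return 0;
}
```

Once the focal length in pixels is known, any object of known real size gives a distance estimate from its apparent pixel size.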

Pathfinding Algorithm for Robot

I have a robot that uses an optical mouse as a position tracker. Basically, as the robot moves, it can track changes in the X and Y directions using the mouse. The mouse also tracks which direction you are moving in, i.e. negative X or positive X. These values are summed into separate X and Y registers.
Now, the robot rotates in place and moves forward only, so its movement is ideally in straight lines (although the mouse tracking can pick up deviations if it veers off) at particular angles. A particular set of movements of the robot would be like:
A: Rotate 45 degrees, move 3 inches
B: Rotate 90 degrees, move 10 inches
C: Rotate -110 degrees, move 5 inches
D: Rotate 10 degrees, move 1 inch
But each time, the mouse X and mouse Y registers give the real distances moved in each direction.
Now, if I want to repeat the movement set going from A to D only, how can I do this using the information I have already gathered? I know I could basically sum all the angles and distances I fed into it, but this would be inaccurate if there were large errors in the individual movements. How can I use the raw information from my mouse? A friend suggested that I could continuously take the sine and cosine of the mouse values and calculate the final vector, but I'm not really sure how that would work.
The problem is that the mouse only gives relative readings, so by rotating or moving backwards you are potentially erasing information. So what I am wondering is how to implement the algorithm so that it continually tracks changes and can give you the shortest path back, even if you originally moved there in zigzags.
I think the basic algorithm you need to do is this:
currentX = currentY = 0;
heading = 0; // radians
while (true)
{
    deltas = SampleMouseDeltas();
    heading += deltas.Heading;
    currentX += Math.Cos(heading) * deltas.Distance;
    currentY += Math.Sin(heading) * deltas.Distance;
}
You are right in your idea that this won't be precise. It is called "dead reckoning" for a reason.
You can get your deltas.Heading from the "X" coordinate: the formula is (delta X in inches) / (distance in inches from the mouse sensor to the center of rotation). The deltas.Distance would come from the "Y" sensor, after you convert it from mouse counts (pixels) to inches.
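Spelled out in plain C, with the sensor geometry as assumed constants (the names and values below are hypothetical and would need measuring on the actual robot):

```c
#include <math.h>

/* Assumed sensor geometry: the mouse sits SENSOR_RADIUS_IN inches from the
 * robot's center of rotation, with its X axis tangential and its Y axis forward. */
#define SENSOR_RADIUS_IN  2.0     /* hypothetical value, measure on your robot */
#define INCHES_PER_COUNT  0.01    /* hypothetical mouse resolution conversion  */

static double pos_x = 0.0, pos_y = 0.0, heading = 0.0;   /* heading in radians */

/* Call with the raw mouse counts accumulated since the last call. */
void dead_reckon_update(long mouse_dx_counts, long mouse_dy_counts)
{
    double dx_in = mouse_dx_counts * INCHES_PER_COUNT;   /* sideways slip = rotation */
    double dy_in = mouse_dy_counts * INCHES_PER_COUNT;   /* forward travel           */

    heading += dx_in / SENSOR_RADIUS_IN;   /* arc length / radius = angle change */
    pos_x   += cos(heading) * dy_in;
    pos_y   += sin(heading) * dy_in;
}
```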
Then to perform the steps, you could do something like:
robot.RotateLeft();
heading = 0;
while (heading < 45 degrees)
    heading += SampleMouseDeltas.Heading;
robot.StopRotateLeft();
... etc ...
Not an answer to your question, but perhaps a cautionary tale...
I did exactly this kind of robot as a school project a year back. It was an utter failure, though I learnt quite a bit while doing it.
As for using the mouse to track how far you have driven: it did not work well for us at all, nor for any of the other groups, probably because the camera in the mouse was out of focus since we needed to keep the mouse a few mm above the floor. The following year, no group doing the same project used this method; they instead put markings on the wheels and used a simple IR sensor to count how many revolutions the wheels made.
I know I'm somewhat necroing this thread, but if you want more accurate angle tracking, two optical mice would be ideal. Basically, if you cancel out the motion that is common to both, you are left with the motion the mice made relative to each other. From there it is just some simple math to accurately determine how far the 'bot has turned.
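A rough sketch of that two-mouse idea in C, under the assumption that the mice are mounted on the robot's left and right sides a known distance apart, both with their Y axes pointing forward (all names and constants below are made up):

```c
#include <math.h>

/* Two mice mounted level with the floor, one on the left and one on the right
 * of the center of rotation, MOUSE_BASELINE_IN inches apart, Y axes forward. */
#define MOUSE_BASELINE_IN 6.0     /* hypothetical spacing, measure on your robot */
#define INCHES_PER_COUNT  0.01    /* hypothetical resolution conversion          */

static double pos_x = 0.0, pos_y = 0.0, heading = 0.0;   /* heading in radians */

void dual_mouse_update(long left_dy_counts, long right_dy_counts)
{
    double left_in  = left_dy_counts  * INCHES_PER_COUNT;
    double right_in = right_dy_counts * INCHES_PER_COUNT;

    /* Common motion (translation) is the average; the difference is rotation. */
    double forward = 0.5 * (left_in + right_in);
    heading += (right_in - left_in) / MOUSE_BASELINE_IN;

    pos_x += cos(heading) * forward;
    pos_y += sin(heading) * forward;
}
```

During a pure rotation the two forward readings differ by roughly baseline times the turn angle, while during straight driving they are equal, which is why the difference isolates the turn.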
