Face Position augmentation - dataset

I'd appreciate any suggestions for increasing my face dataset by pose augmentation.
I mean taking one frontal image and generating +30 degree face rotation, -30 degree face rotation, right side face, left side face, down-tilted face, and up-tilted face.
Thanks.
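One cheap way to approximate the small yaw rotations without a 3D face model is a perspective warp that foreshortens one side of the image (and a horizontal cv2.flip gives the mirrored poses for free); true profile views really need a 3D-model-based method. A minimal OpenCV sketch, where the file name, the 0.25 strength factor, and the angle mapping are all illustrative assumptions:

import cv2
import numpy as np

def approximate_yaw(img, degrees):
    """Approximate a small head-yaw rotation with a perspective warp.

    This is only a 2D approximation; real pose synthesis needs a 3D
    face model. Positive degrees foreshortens the right edge.
    """
    h, w = img.shape[:2]
    # Foreshorten one vertical edge in proportion to the angle;
    # the 0.25 factor is an arbitrary strength knob.
    shift = w * abs(np.sin(np.radians(degrees))) * 0.25
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    if degrees > 0:
        dst = np.float32([[0, 0], [w, shift], [w, h - shift], [0, h]])
    else:
        dst = np.float32([[0, shift], [w, 0], [w, h], [0, h - shift]])
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(img, M, (w, h))

img = cv2.imread("face.jpg")  # illustrative path
for angle in (-30, 30):
    cv2.imwrite("face_yaw_%d.jpg" % angle, approximate_yaw(img, angle))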

Related

How to get heading direction from raw IMU data?

I have raw acceleration and rotation data for each axis (x, y, z), but I don't know which axis points along gravity. The mounting differs from object to object, so I can't tell which way the IMU is installed: sometimes the x-axis is the gravity direction, sometimes the y-axis, sometimes the z-axis, and sometimes none of them.
I need to detect when the object (with the IMU mounted on it) is moving at 1 m/s^2 in the heading direction.
If the z-axis is the gravity direction and the x-axis is the direction of motion, the IMU just needs to look for an Ax value of 1 m/s^2 or more (if the IMU is installed oriented as in the image below).
[img1: IMU mounted with the z-axis along gravity and the x-axis along the direction of motion]
But I don't know which direction is the direction of motion and which is the direction of gravity, so I want to work out the moving direction from the three acceleration signals and the three gyro signals.
Even if the sensor is installed at an angle, as shown in Figure 2, how can I find out that the sensor is accelerating at 1 m/s^2 in the moving direction? I need to code this in C, and since my embedded environment has little computing headroom, the implementation should be as simple as possible. Is there a good solution?
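One lightweight approach (a suggestion, not guaranteed for every motion profile): low-pass filter the accelerometer to track the gravity vector, subtract it, and project the remainder onto the horizontal plane; the magnitude of that horizontal component approximates acceleration along the heading, assuming the dominant horizontal acceleration is forward. Sketched in Python below; the per-sample arithmetic is a few multiply-adds per axis, so it ports directly to C:

import numpy as np

def forward_accel(acc_samples, alpha=0.02):
    """Estimate acceleration along the direction of travel when the
    mounting orientation is unknown.

    1. Low-pass filter the accelerometer to track the gravity vector.
    2. Subtract gravity to get linear acceleration.
    3. Project the remainder onto the horizontal plane; its magnitude
       approximates forward acceleration, assuming the dominant
       horizontal acceleration is along the heading.
    """
    gravity = acc_samples[0].astype(float)
    out = []
    for a in acc_samples:
        gravity = (1.0 - alpha) * gravity + alpha * a  # low-pass filter
        linear = a - gravity                           # gravity removed
        g_hat = gravity / np.linalg.norm(gravity)
        horizontal = linear - np.dot(linear, g_hat) * g_hat
        out.append(np.linalg.norm(horizontal))
    return np.array(out)

# Trigger when horizontal acceleration exceeds 1 m/s^2 (made-up data).
acc = np.array([[0.3, 0.1, 9.8]] * 50 + [[1.5, 0.1, 9.8]] * 50)
print(np.any(forward_accel(acc) > 1.0))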

react-three-fiber: setting the rotation axis to the centre of the model

I have a model which rotates on the X axis, but the center of the rotation is not on the axis itself. The rotation code is pretty simple:
model.current.rotation.x += 0.016; (axis and speed)
but there seems to be no way to define the actual axis of rotation to ensure the model just rotates around its own center. At the moment it rotates in a big circle!
Any suggestions appreciated.
Your mesh most likely is not centred, meaning the vertices sit far from the model's centre of mass. You can fix this in Blender, but three.js also has methods on the geometry that recalculate the vertices (e.g. geometry.center()). A cheaper solution would be to render the mesh, create a Box3, set it from the object, then get the min/max and use them to shift the object by half of that extent.
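The math behind that Box3 min/max trick, sketched in Python/numpy since the exact three.js calls depend on your setup:

import numpy as np

def recenter(vertices):
    """Shift vertices so the bounding-box midpoint sits at the origin,
    mirroring the Box3 min/max trick described above. After this,
    rotation about the object's local axes spins it in place."""
    vmin = vertices.min(axis=0)
    vmax = vertices.max(axis=0)
    return vertices - (vmin + vmax) / 2.0

verts = np.array([[2.0, 1.0, 0.0], [4.0, 3.0, 1.0], [3.0, 2.0, 0.5]])
print(recenter(verts))  # bounding box now symmetric about the origin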

I have a dot bouncing around an image. Need to calculate angles of reflection off of groups of pixels (surface of objects)

Suppose we have an image (pixel buffer) that is black and white, so each pixel is either black or white (not grayscale).
Now, somewhere in the middle of that image, place a green dot. It may have a radius of n for rendering purposes, but it is really just a point. Give the dot a randomly selected direction and speed, and start it moving. If the image is all white pixels, the dot will bounce off the edges of the image, wandering around the picture forever. This is quite easy... just reverse either the rise or run of the dot's vector.
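That all-white border case, as a minimal sketch (the function and variable names are illustrative):

# Bounce off the image borders by negating one velocity component.
def step(x, y, vx, vy, width, height):
    if not 0 <= x + vx < width:
        vx = -vx  # reverse the "run"
    if not 0 <= y + vy < height:
        vy = -vy  # reverse the "rise"
    return x + vx, y + vy, vx, vy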
Next, suppose the image has some globs of black pixels. As the dot encounters these globs, the angle of reflection needs to be calculated. This is also quite easy if the black pixels form a fixed slope, as in my sketch (the blue Xs represent black pixels). You can find the slope of the blue Xs and easily calculate the new vector.
But how about the case where the black pixels form really unfriendly surfaces? What are some approaches to figuring out this angle?
This is the subject that I am interested in.
There must be some algorithms that exist for this kind of purpose, but I never ran across any in school. I am not asking how to code this, rather approaches to writing the algorithm to do this. I have a few ideas that I'll try, but if there are some standard ways to do this that exist, I'd like to learn about them.
Obviously I'd like to start with Black and White then move into RGBA.
I am looking for any reference material on just this sort of subject. Websites, books, or other references are very very welcome.
Also, if there are different StackOverflow tags that could be good, let me know.
Thanks much!
Edit: More pics and information
Maybe I wasn't clear about what I meant by "unfriendly surfaces". In the previous picture, our blue Xs happened to form a straight line. Picture a case where it is not a line but some weird shape.
We start with our green pixel traveling at a slope of 2. Suppose its vector is 12 pixels per frame. It would have a projected path like this:
But suppose that instead of a nice friendly line, we have this:
In my mind I can kind of see what is likely to happen if this were a ball and some walls.
Look for edge detection algorithms used in image processing. Some edge detectors also approximate the direction of edges.
You can think of the pixel neighborhood of the green dot, maybe somewhere between 3x3 and 7x7, as a small edge direction detection problem. One approach would take two passes at the pixels:
In the first pass, smooth the sharp black/white pixels using a Gaussian filter.
In the second pass, apply an edge detection operator, such as Sobel, Prewitt or Roberts to produce the X and Y derivatives of the pixels' intensity. You can then approximate the direction as:
angle = atan2(dy, dx)
(atan2 avoids the division by zero that arctan(dx/dy) hits on horizontal gradients; this angle is the gradient direction, and the edge itself runs perpendicular to it)
The motivation for the smoothing pass is to give the edge detection operator information from farther-away pixels.
The Wikipedia page on the Canny edge detector has a good discussion on obtaining the direction (the "gradient") of an edge, including an example of a particular Gaussian filter you can use for smoothing.
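A sketch of the two-pass idea in Python with OpenCV; the neighborhood size, blur parameters, bounds handling (none here), and the final reflection step are my assumptions on top of the answer:

import cv2
import numpy as np

def reflect_off_blob(img, x, y, velocity, k=3):
    """Reflect a velocity vector off black pixels near (x, y).

    Pass 1: Gaussian-smooth a small neighborhood. Pass 2: Sobel
    derivatives give the intensity gradient, which points from black
    toward white, i.e. along the surface normal. Then apply the
    standard reflection formula v' = v - 2 (v . n) n.
    Assumes (x, y) is at least k pixels from the image border.
    """
    patch = img[y - k:y + k + 1, x - k:x + k + 1].astype(np.float32)
    patch = cv2.GaussianBlur(patch, (5, 5), 1.0)
    dx = cv2.Sobel(patch, cv2.CV_32F, 1, 0, ksize=3)
    dy = cv2.Sobel(patch, cv2.CV_32F, 0, 1, ksize=3)
    n = np.array([dx[k, k], dy[k, k]])      # gradient at the dot
    if np.linalg.norm(n) < 1e-6:
        return np.asarray(velocity, float)  # no edge here, keep going
    n /= np.linalg.norm(n)
    v = np.asarray(velocity, dtype=float)
    return v - 2.0 * np.dot(v, n) * n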
I'm doing something similar with a ball and randomly generated backgrounds.
The filtering and edge detection are highly technical, but the other approaches using a 5x5 or 3x3 grid seem similarly difficult.
However, I think I may have a cheap way around this. Assuming a ball travelling in any direction, scan all leading edges of the ball: a semicircle. The further toward the edge of the ball the collision occurs, the closer to vertical the collision is. Again, I think this should let you easily infer the background normal, and from there the answer is fairly simple.
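A sketch of that semicircle scan, assuming the normal at each contacted rim point is the vector back to the ball's centre (the sample count and averaging are my choices):

import numpy as np

def contact_normal(img, cx, cy, radius, velocity, samples=32):
    """Estimate a surface normal by scanning the ball's leading edge.

    Sample points along the half of the rim facing the direction of
    travel; for each sample landing on a black pixel, the vector from
    that rim point back to the ball's centre approximates the surface
    normal there. Averaging the contacts gives a usable normal.
    """
    heading = np.arctan2(velocity[1], velocity[0])
    normals = []
    for t in np.linspace(-np.pi / 2, np.pi / 2, samples):
        a = heading + t
        px = int(round(cx + radius * np.cos(a)))
        py = int(round(cy + radius * np.sin(a)))
        inside = 0 <= py < img.shape[0] and 0 <= px < img.shape[1]
        if inside and img[py, px] == 0:    # black pixel: contact
            n = np.array([cx - px, cy - py], dtype=float)
            normals.append(n / np.linalg.norm(n))
    if not normals:
        return None                        # no contact on leading edge
    n = np.mean(normals, axis=0)
    return n / np.linalg.norm(n)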

Find the orientation of an image

I'm doing pattern matching with OpenCV. I have a model and I compare targets against it with the function cvMatchShapes.
It works but I want to know the orientation of the target. How can I do it?
Would a rotated bounding rectangle be suitable, for example, in the case where the contour orientation differs by 180 degrees?
Another way to solve your problem is to compute the contour moments (I suppose you are using contours in cvMatchShapes; you can compute image moments in a similar way too, see OpenCV Contours Moments), then calculate the principal-axis angle from the formula:
atan2((float)(-2)*Ixy,Ix - Iy)/2
This angle describes the rotation. More theory on this topic: http://farside.ph.utexas.edu/teaching/336k/newton/node67.html
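A sketch using the modern cv2 API; note that in terms of OpenCV's central moments the same angle is 0.5 * atan2(2*mu11, mu20 - mu02), the sign difference from the Ixy form coming from the inertia-tensor convention used on the linked page:

import cv2
import numpy as np

def contour_orientation(contour):
    """Principal-axis angle of a contour from its central moments."""
    m = cv2.moments(contour)
    return 0.5 * np.arctan2(2.0 * m["mu11"], m["mu20"] - m["mu02"])

# Example: a filled rectangle rotated by 30 degrees (made-up numbers).
img = np.zeros((200, 200), np.uint8)
box = cv2.boxPoints(((100, 100), (120, 30), 30.0)).astype(np.int32)
cv2.fillConvexPoly(img, box, 255)
contours, _ = cv2.findContours(img, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
print(np.degrees(contour_orientation(contours[0])))  # ~30 (y-down axes)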

Calculating distance using a single camera

I would like to calculate the distance to certain objects in the scene. I know that with a single camera I can normally only recover relative distance, but I also know the real-world coordinates of some objects in the scene, so in theory it should be possible to calculate actual distance. According to the OpenCV mailing list archives,
http://tech.groups.yahoo.com/group/OpenCV/message/73541
cvFindExtrinsicCameraParams2 is the function to use, but I can't find information on how to use it.
P.S. Assume the camera is properly calibrated.
My guess would be: you know the size of an object, such as a ball that is 6 inches across and 6 inches tall; you can also see that it is 20 pixels tall and 25 pixels wide; and you know the ball is 10 feet away. That would be your starting point.
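That known-size idea as a pinhole-model sketch, where distance = focal_length_px * real_size / size_in_pixels; every number below is illustrative:

# Pinhole model: similar triangles relate real size to apparent size.
focal_px = 800.0            # focal length in pixels, from calibration
ball_diameter_m = 0.1524    # 6 inches in metres
ball_width_px = 25.0        # apparent width in the image
distance_m = focal_px * ball_diameter_m / ball_width_px
print(distance_m)           # ~4.88 m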
Extrinsic parameters wouldn't help you, I don't think, because they describe the camera's location and rotation in space relative to another camera or an origin. For a one-camera system, the camera is the origin.
Intrinsic parameters might help. I'm not sure; I've only done this using two cameras.
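For the function the question asks about: cvFindExtrinsicCameraParams2 lives on in the modern API as cv2.solvePnP, which recovers the object's pose from known 3D points and their image projections; the norm of the returned translation is the distance to the object's origin. A sketch with made-up numbers:

import cv2
import numpy as np

# >= 4 known 3D points on the object, in the object's own frame (m).
object_pts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]],
                      dtype=np.float32)
# Their observed projections in the image (pixels).
image_pts = np.array([[320, 240], [400, 242], [398, 320], [318, 318]],
                     dtype=np.float32)
# Intrinsics from calibration (assumed here).
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)
dist_coeffs = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist_coeffs)
if ok:
    print("distance to object origin (m):", float(np.linalg.norm(tvec)))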
