Direction, Orientation object - c

How do I move an object according to its orientation? I mean, I have a cube in one position; I want to rotate it about the Y axis and move it according to its orientation, then move and rotate again to change its direction. Something like this:

In JS you can try something like this:
var previousPosition = [x, y]; // replace x and y with the object's current coordinates
var nextPosition = [x, y]; // replace x and y with the object's new coordinates
var dx = nextPosition[0] - previousPosition[0];
var dy = nextPosition[1] - previousPosition[1];
var rad = Math.atan2(dy, dx); // atan2 handles all quadrants and avoids division by zero
var deg = rad * 180 / Math.PI;
The variable deg now holds the angle in degrees by which to rotate your cube.
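The answer above derives the heading angle from two positions. To go the other way, i.e. move an object forward along a given orientation, the usual approach is to step by the cosine and sine of that angle. Below is a minimal sketch of the idea, written in Python for brevity; move_forward, position, heading_deg and step are illustrative names, not part of the original answer:
import math

def move_forward(position, heading_deg, step):
    # Advance a 2D position by `step` units along the heading given in degrees.
    rad = math.radians(heading_deg)
    return (position[0] + step * math.cos(rad),
            position[1] + step * math.sin(rad))

# Example: from the origin, face 90 degrees and move 5 units "forward".
print(move_forward((0.0, 0.0), 90.0, 5.0))  # -> (~0.0, 5.0)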

Related

How to expand 2d array (30x20) to 36x60, which is contained in 620x480 2d array?

The starting data: a 2D array (620x480) containing an image that shows a human face, and a 2D array (30x20) containing an eye image. The face image includes the eye image.
How can I expand the eye image to 36x60 so that it includes pixels from the face image? Are there ready-made solutions?
Another similar task: the eye image has size 37x27. How can I expand the eye image to a target size close to 36x60 in aspect ratio, e.g. 39x65, i.e. first maintain the required aspect ratio and then resize to 36x60.
Code for testing (project is available by reference):
import dlib
import cv2 as cv
from imutils.face_utils import shape_to_np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('res/model.dat')

frame = cv.imread('photo.jpg')
gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
img = frame.copy()

dets = detector(gray, 0)
for i, det in enumerate(dets):
    shape = shape_to_np(predictor(gray, det))
    shape_left_eye = shape[36:42]
    x, y, w, h = cv.boundingRect(shape_left_eye)
    cv.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 1)
    cv.imwrite('file.png', frame[y:y + h, x:x + w])
The resulting image is 42x13:
For the first part you can use cv2.matchTemplate to find the eye region in the face and then enlarge it to the size you want. You can read more about it here.
FACE IMAGE USED
EYE IMAGE USED
The size of the eye image I have is (12, 32).
import cv2

# Load the face and the eye crop in grayscale
face = cv2.imread('face.jpg', 0)
eye = cv2.imread('eye.jpg', 0)
w, h = eye.shape[::-1]

# Locate the eye template inside the face image
res = cv2.matchTemplate(face, eye, cv2.TM_CCOEFF)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
top_left = max_loc
bottom_right = (top_left[0] + w, top_left[1] + h)

cv2.rectangle(face, top_left, bottom_right, 255, 2)
cv2.imshow('image', face)
cv2.waitKey(0)
cv2.destroyAllWindows()
The result with this code is:
Now I have the top left and bottom right co-ordinates of the matched eye, where top_left = (112, 108) and bottom_right = (144, 120). To expand this to dimensions of 36x60, I simply subtract the required values from top_left and add the required values to bottom_right.
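For concreteness, here is a minimal sketch of that symmetric expansion using the numbers quoted above. The 36x60 target and the subtract-from-top_left/add-to-bottom_right idea come from this answer; the helper variable names (target_w, dx and so on) are just illustrative:
import cv2

face = cv2.imread('face.jpg', 0)         # same grayscale face image as above

top_left = (112, 108)                    # matchTemplate result quoted above
bottom_right = (144, 120)                # (112 + 32, 108 + 12) for the 32x12 eye

target_w, target_h = 36, 60
w = bottom_right[0] - top_left[0]        # 32
h = bottom_right[1] - top_left[1]        # 12

dx = (target_w - w) // 2                 # 2 extra pixels on each side horizontally
dy = (target_h - h) // 2                 # 24 extra pixels on each side vertically

top_left = (top_left[0] - dx, top_left[1] - dy)
bottom_right = (bottom_right[0] + dx, bottom_right[1] + dy)

eye_region = face[top_left[1]:bottom_right[1], top_left[0]:bottom_right[0]]
print(eye_region.shape)                  # (60, 36)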
EDIT 1
The question has been edited, which suggests that dlib has been used along with a model trained to perform left-eye detection. Using the same code I obtained:
After that, as proposed above, I find top_left = (x, y) and bottom_right = (x + w, y + h).
Now if the eye size is smaller than 36x60, we just have to take the area around it to expand it to 36x60; otherwise we have to expand it so that the aspect ratio is not disturbed and then resize, so the amounts cannot be hard-coded. The full code used is:
import cv2
import dlib
from imutils.face_utils import shape_to_np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('res/model.dat')

face = cv2.imread('face.jpg', 0)
img = face.copy()

dets = detector(img, 0)
for i, det in enumerate(dets):
    shape = shape_to_np(predictor(img, det))
    shape_left_eye = shape[36:42]
    x, y, w, h = cv2.boundingRect(shape_left_eye)
    cv2.rectangle(face, (x, y), (x + w, y + h), (255, 255, 255), 1)

top_left = (x, y)
bottom_right = (x + w, y + h)
if w <= 36 and h <= 60:
    # Eye smaller than the target: pad equally on both sides
    x = int((36 - w) / 2)
    y = int((60 - h) / 2)
else:
    # Eye larger than the target: grow the box towards a 3:5 aspect ratio first
    x1 = w - 36
    y1 = h - 60
    if x1 > y1:
        x = int((w % 3) / 2)
        req = (w + x) * 5 / 3
        y = int((req - h) / 2)
    else:
        y = int((h % 5) / 2)
        req = (y + h) * 3 / 5
        x = int((req - w) / 2)
top_left = (top_left[0] - x, top_left[1] - y)
bottom_right = (bottom_right[0] + x, bottom_right[1] + y)

extracted = face[top_left[1]:bottom_right[1], top_left[0]:bottom_right[0]]
result = cv2.resize(extracted, (36, 60), interpolation=cv2.INTER_LINEAR)
cv2.imshow('image', face)
cv2.imshow('imag', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Which gives us a 36x60 region of the eye:
This takes care of the case when the size of the eye is smaller than 36x60. For the second case, when the size of the eye is larger than the 36x60 region, I used face = cv2.resize(face, None, fx=4, fy=4, interpolation = cv2.INTER_CUBIC). The result was:
The size of the eye detected is (95, 33) and the extracted region is (97, 159), which is very close to the 3:5 aspect ratio before resizing and also satisfies the second task.

How to get x and y coordinate of mouseClick

I am trying to paint a rectangle around the centre of the mouse-click position. For that I feel I need to obtain the x and y coordinates as ints.
(This is the edited code; e.X and e.Y are the solution to this question)
let mouseClick (e: MouseEventArgs) =
    let x = e.X
    let y = e.Y
    let coords = [| System.Drawing.Point((x-10),(y-10));
                    System.Drawing.Point((x-10),(y+10));
                    System.Drawing.Point((x+10),(y+10));
                    System.Drawing.Point((x+10),(y-10));
                    System.Drawing.Point((x-10),(y-10)) |]
    window.Paint.Add(fun e -> e.Graphics.DrawLines(pen, coords))

window.MouseClick.Add mouseClick
I tried using the e.Location property, but that doesn't work, which makes sense to some extent since when I print it, it prints "x=(some number) y=(some number)".
Can anyone help me obtain the x and y coordinates as ints?
As stated in the comment, to get the mouse position from a MouseEventArgs you simply need to access its X or Y properties,
which just reflect the Location.X and Location.Y properties also available on e.
Regarding your edit and your additional comment, I think you've gone wrong by adding a new Paint handler with each click; you just need to draw (which probably still requires a Refresh at some point, though):
let mouseClick (e: MouseEventArgs) =
    let x = e.X
    let y = e.Y
    let coords = [| System.Drawing.Point(x - 10, y - 10)
                    System.Drawing.Point(x - 10, y + 10)
                    System.Drawing.Point(x + 10, y + 10)
                    System.Drawing.Point(x + 10, y - 10)
                    System.Drawing.Point(x - 10, y - 10) |]
    // maybe use 'use' instead of 'let' so the Graphics object gets disposed?
    let g = window.CreateGraphics()
    g.DrawLines(pen, coords)

window.MouseClick.Add mouseClick

Backspin effect in pool game with SceneKit

I would like to create a realistic pool game and to implement at least some basic ball effects. I started from scratch with SceneKit, and at this point I'm just studying the proper technology to go with; SceneKit would be the ideal choice.
I managed to achieve an acceptable ball effect for sidespin and some sort of forward spin. The one I'm struggling with is backspin. I'm playing around with the position parameter of the applyForce method, but it seems that alone will not give me the result I'm looking for. Either I'm missing something (I've got limited knowledge of physics) or SceneKit's physics simulation is just not enough for what I want. Basically I have a sphere of radius 1.5, and I went from -1.5 to 1.5 on the Y component of the position vector; the result is that either the white ball or the ball I'm hitting jumps when the collision occurs.
The first screenshot shows the moment of impact, whilst the second shows how the ball jumps after the collision.
The two spheres are configured like this:
let sphereGeometry = SCNSphere(radius: 1.5)
sphere1 = SCNNode(geometry: sphereGeometry)
sphere1.position = SCNVector3(x: -15, y: 0, z: 0)
sphere2 = SCNNode(geometry: sphereGeometry)
sphere2.position = SCNVector3(x: 15, y: 0, z: 0)
And the code that gives me that effect is the following:
sphere1.physicsBody?.applyForce(SCNVector3Make(350, 0, 0), atPosition:SCNVector3Make(1.5, -0.25, 0), impulse: true)
What I'm trying to do in that code is to hit the ball slightly below the centre. To get -0.25, I took an angle of 10 degrees and calculated its sine, then multiplied it by the sphere radius so the point lies right on the sphere's surface (sin 10° ≈ 0.17, and 0.17 × 1.5 ≈ 0.26, rounded to 0.25).
So I've been reading several papers/chapters about pool physics and I think I found something that at least proves I can do it with SceneKit. What I was missing was (i) the right formulae and (ii) angular velocity. The physics still needs a lot of polish, but it at least produces roughly the trajectory one would expect when applying these effects. Here's the code in case anyone's interested:
//Cue strength
let strength : Float = 1000
//Cue mass expressed in terms of ball's mass
let cueMass : Float = self.balls[0].mass * 1.25
//White ball
let whiteBall = self.balls[0]
//The ball we are trying to hit
let targetBall = self.balls[1]
//White ball radius
let ballRadius = whiteBall.radius
//This should be in the range of {-R, R} where R is the ball radius. It determines how much off the center we would like to hit the ball along the z-axis. Produces left/right spin
let a : Float = 0
//This should be in the range of {-R, R} where R is the ball radius. It determines how much off the center we would like to hit the ball along the y-axis. Produces top/back spin
let b : Float = -ballRadius * 0.7
//This is calculated based off a and b and it is the position that we will be hitting the ball along the x-axis.
let c : Float = sqrt(ballRadius * ballRadius - a * a - b * b)
//This is the angle of the cue expressed in degrees. Values greater than zero will produce jump shots
let cueAngle : Float = 0
//Cue angle in radians for math functions
let cueAngleInRadians : Float = (cueAngle * 3.14) / 180
let cosAngle = cos(cueAngleInRadians)
let sinAngle = sin(cueAngleInRadians)
//Values to calculate the magnitude to be applied given the above variables
let m0 = a * a
let m1 = b * b * cosAngle * cosAngle
let m2 = c * c * sinAngle * sinAngle
let m3 = 2 * b * c * cosAngle * sinAngle
let w = (5 / (2 * ballRadius * ballRadius)) * (m0 + m1 + m2 + m3)
let n = 2 * whiteBall.mass * strength
let magnitude = n / (1 + whiteBall.mass / cueMass + w)
//We would like to point to the target ball
let targetVector = targetBall.position
//Get the unit vector of our target
var target = (targetVector - whiteBall.position).normal
//Multiply our direction by the force's magnitude. Y-axis component reflects the angle of the cue
target.x *= magnitude
target.y = (magnitude / whiteBall.mass) * sinAngle
target.z *= magnitude
//Apply the impulse at the given position by c, b, a
whiteBall.physicsBody?.applyForce(target, atPosition: SCNVector3Make(c, b, a), impulse: true)
//Values to calculate angular force
let i = ((2 / 5) * whiteBall.mass * ballRadius * ballRadius)
let wx = a * magnitude * sinAngle
let wy = -a * magnitude * cosAngle
let wz = -c * magnitude * sinAngle + b * magnitude * cosAngle
let wv = SCNVector3Make(wx, wy, wz) * (1 / i)
//Apply a torque
whiteBall.physicsBody?.applyTorque(SCNVector4Make(wv.x, wv.y, wv.z, 0.4), impulse: true)
Note that values of a, b, c should take into account the target vector's direction.

Polyline() - change colour with an array value

I'm trying to create a very simple example of a for steps in [] loop using a Polyline() inside an IronPython WPF application. Each iteration of the loop should draw a different colour; however, Brushes exposes a set of predefined System.Windows.Media.SolidColorBrush objects, and I can't work out how to swap Red for my steps variable.
def polylineShape(self):
    x = self.myCanvas.Width/2
    y = self.myCanvas.Height/2
    polyline = Polyline()
    polyline.StrokeThickness = 5
    for steps in ['Red','Blue','Green','Black']:
        x = x
        y = x
        polyline.Points.Add(Point(x,y))
        x = x + 40
        polyline.Points.Add(Point(x,y))
        polyline.Stroke = Brushes.Red #change colour on iteration
    self.myCanvas.Children.Add(polyline)
I created a solution by trial and error; I couldn't work out how to pass colours directly to the Brushes type.
def polylineShape(self):
    x = 0
    y = 0
    for steps in [Brushes.SteelBlue, Brushes.DarkOrange, Brushes.DarkSeaGreen, Brushes.Honeydew]:
        polyline = Polyline()
        polyline.StrokeThickness = self.myCanvas.Height/4
        x = 0
        y = y + self.myCanvas.Height/4
        polyline.Points.Add(Point(x,y))
        x = self.myCanvas.Width
        polyline.Points.Add(Point(x,y))
        polyline.Stroke = steps
        self.myCanvas.Children.Add(polyline)
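On the part that remained unclear (passing a colour name directly to the Brushes type): the predefined brushes are static properties of System.Windows.Media.Brushes, so they can also be looked up by name with getattr. The sketch below assumes the WPF assemblies are already referenced, as they are inside the application above; brush_from_name is an illustrative helper, not an existing API:
from System.Windows.Media import Brushes

def brush_from_name(name):
    # Brushes exposes SolidColorBrush objects (Red, Blue, ...) as static properties,
    # so getattr can turn a colour-name string into the corresponding brush.
    return getattr(Brushes, name)

for steps in ['Red', 'Blue', 'Green', 'Black']:
    brush = brush_from_name(steps)
    print(steps, brush)
    # polyline.Stroke = brush   # instead of hard-coding Brushes.Red in the loop above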

Plotting single X-Axis with two different sets of values - Charting - WPF Codeplex

I am plotting an XY graph with two different sets of X and Y values. This is how my dataset looks -> [ X1 = {1,3,5,...}, Y1 = {104, 98, 36,....} and X2 = {2,4,6..}, Y2 = { 76, 65, 110..}].
This is the code I am using:
series1.DependentValueBinding = new System.Windows.Data.Binding("Y1");
series1.IndependentValueBinding = new System.Windows.Data.Binding("X1");
series1.DependentRangeAxis = YAxis;
series1.IndependentAxis = XAxis;
series2.DependentValueBinding = new System.Windows.Data.Binding("Y2");
series2.IndependentValueBinding = new System.Windows.Data.Binding("X2");
series2.DependentRangeAxis = YAxis;
series2.IndependentAxis = XAxis;
This code works fine for assigning two series to a single Y-axis, but when two series with different X and Y values are assigned to the X-axis it messes up the first series. It plots both Y1 = {104, 98, 36,....} and Y2 = { 76, 65, 110..} against X2 = {2,4,6..}, instead of plotting Y1 against X1 and Y2 against X2 while having only one X and one Y axis.
Please advise me on what needs to be done to assign two different sets of values to a single X-axis.
Thank you in advance!
-Anna
The problem is solved; this code is correct. It had something to do with my value assignment (I noticed that I was clearing the X1 values before using X2, but to make the code work all the values X1, Y1, X2, Y2 must be preserved until the chart is created). Thank you!
series1.DependentValueBinding = new System.Windows.Data.Binding("Y1");
series1.IndependentValueBinding = new System.Windows.Data.Binding("X1");
series1.DependentRangeAxis = YAxis;
series1.IndependentAxis = XAxis;
series2.DependentValueBinding = new System.Windows.Data.Binding("Y2");
series2.IndependentValueBinding = new System.Windows.Data.Binding("X2");
series2.DependentRangeAxis = YAxis;
series2.IndependentAxis = XAxis;
