The goal is to increase the length of a SCNBox such that it only grows in the positive-z direction.
This answer suggests playing with the pivot property.
However, the documentation for the pivot property is sparse on the SCNNode page, and there is nothing on the SCNBox page.
Can someone explain how the pivot property works?
Changing a node's pivot is conceptually the same as inserting an intermediate node between this node and its parent. This can be useful in different cases. One example is when the center of the node's geometry isn't where you expect it to be.
For instance, if you have an SCNBox, its bounding box is
min: (-0.5 * width, -0.5 * height, -0.5 * length)
max: (+0.5 * width, +0.5 * height, +0.5 * length)
center: (0.0, 0.0, 0.0)
If you want the length of the SCNBox to only increase in the positive Z axis, then what you want is
min: (-0.5 * width, -0.5 * height, 0.0)
max: (+0.5 * width, +0.5 * height, length)
center: (0.0, 0.0, +0.5 * length)
A geometry's bounding box itself never changes, but there are ways to arrange nodes so that a node's bounding box (in its parent's space) is what you need.
Solution 1: Intermediate node
One common solution when dealing with transforms is to use intermediate nodes to get a better understanding of how the transforms are applied.
In your case you will want to change the node hierarchy from
- parentNode
| - node
| * geometry
| * transform = SCNMatrix4MakeScale(...)
to
- parentNode
| - intermediateNode
| * transform = SCNMatrix4MakeScale(...)
| | - node
| | * geometry
| | * transform = SCNMatrix4MakeTranslation(0, 0, +0.5 * length)
With this new hierarchy, the center of the bounding box of node is still (0.0, 0.0, 0.0), but the center of the bounding box of intermediateNode is (0.0, 0.0, +0.5 * length).
By scaling intermediateNode instead of node, you'll obtain the desired result.
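A minimal sketch of this hierarchy in Swift (hedged; `box`, `length`, and `parentNode` are assumed from the discussion above):

let geometryNode = SCNNode(geometry: box)
// Shift the geometry so its bounding box spans 0...length along Z
// in the intermediate node's space
geometryNode.position = SCNVector3(0, 0, 0.5 * length)

let intermediateNode = SCNNode()
intermediateNode.addChildNode(geometryNode)
parentNode.addChildNode(intermediateNode)

// Scaling the intermediate node now stretches the box toward +Z only
intermediateNode.scale = SCNVector3(1, 1, 2)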
Solution 2: Pivot
It turns out that's exactly what the pivot property does:
node.pivot = SCNMatrix4MakeTranslation(0, 0, -0.5 * length);
Once you have mentally figured out the transform of the intermediate node, simply set its inverse to the pivot property.
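For a pure translation the inverse just flips the sign; in general the recipe looks like this (a sketch, with `length` as above):

// invert the intermediate node's transform and use it as the pivot
node.pivot = SCNMatrix4Invert(SCNMatrix4MakeTranslation(0, 0, 0.5 * length))
// which for this translation reduces to:
node.pivot = SCNMatrix4MakeTranslation(0, 0, -0.5 * length)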
You can find more information about the pivot property here: https://developer.apple.com/reference/scenekit/scnnode/1408044-pivot
This is very similar to Core Animation's anchorPoint property on CALayer, except that in Core Animation the anchor point is specified relative to the layer's bounding box (it goes from 0 to 1, as a fraction of the layer's width and height), while in SceneKit it's absolute.
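For illustration (a hedged sketch; `layer`, `node`, and the content height `h` are made-up names):

// Core Animation: relative units, 0...1 across the layer's bounds
layer.anchorPoint = CGPoint(x: 0.5, y: 0.0)
// SceneKit: absolute units in the node's local space
node.pivot = SCNMatrix4MakeTranslation(0, -0.5 * h, 0)

Both move the point around which position, rotation, and scale are applied to the middle of one horizontal edge of the content.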
Say you have a box created like this:
SCNBox(width: 1, height: 1, length: 1, chamferRadius: 0)
The pivot point will be in the center of that box; now say you want to move it to one of the faces. This can be done by translating the pivot by 0.5, which is half the width of the box, i.e. the distance between the center and the face.
boxNode.pivot = SCNMatrix4MakeTranslation(0, 0, -0.5)
The pivot point will now be located at the center in X and Y, and at the face of the box where z = 0 in the node's space. If you now scale the box, it will only grow in the positive Z direction.
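For example (a sketch continuing from boxNode above):

boxNode.pivot = SCNMatrix4MakeTranslation(0, 0, -0.5)
boxNode.scale = SCNVector3(1, 1, 3)
// the 1x1x1 box now spans z = 0...3 in the parent's space,
// instead of growing symmetrically to z = -1.5...+1.5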
Sounds like you want to increase the length of the SCNBox (a geometry), so you can simply increase its length property. The answer you mentioned is about the pivot property. As you can see from the docs:
The pivot point for the node’s position, rotation, and scale.
For example, by setting the pivot to a translation transform you can position a node containing a sphere geometry relative to where the sphere would rest on a floor instead of relative to its center.
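That sphere example from the docs, sketched in Swift (a unit-radius sphere assumed):

let sphereNode = SCNNode(geometry: SCNSphere(radius: 1))
// The node's position now refers to the bottom of the sphere,
// i.e. the point where it would rest on a floor.
sphereNode.pivot = SCNMatrix4MakeTranslation(0, -1, 0)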
I've created an OpenGL application where I define a viewing volume with glOrtho(). Then I use gluLookAt() to transform all the points.
The problem is - as soon as I do this all the points fall out of the clipping plane because it is "left behind". And I end up with a black screen.
How do I change the viewing volume so that it is still tight to my objects once they have been transformed by gluLookAt()?
Here's some code to help better illustrate my problem:
Vector3d eyePos = new Vector3d(0.25, 0.25, 0.6);
Vector3d point = new Vector3d(0, 0, 0.001);
Vector3d up = new Vector3d(0, 1,0);
Matrix4d mat = Matrix4d.LookAt(eyePos, point, up);
GL.Viewport(0, 0, glControl1.Width, glControl1.Height);
GL.MatrixMode(MatrixMode.Projection);
GL.LoadIdentity();
GL.Ortho(0, 1, 0, 1, 0, 1);
GL.MatrixMode(MatrixMode.Modelview);
GL.LoadIdentity();
GL.LoadMatrix(ref mat);
GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
GL.Enable(EnableCap.DepthTest);
GL.DepthMask(true);
GL.ClearDepth(1.0);
GL.Color3(Color.Yellow);
GL.Begin(PrimitiveType.Triangles);
GL.Vertex3(0.2, 0.3, 0.01);
GL.Vertex3(0, 0.600, 0.01);
GL.Vertex3(0.600, 0.600, 0.01);
GL.Color3(Color.Blue);
GL.Vertex3(0, 0.6, 0.01);
GL.Vertex3(0.2, 0.3, 0.01);
GL.Vertex3(0.7, 0.5, 0.03);
GL.Color3(Color.Red);
GL.Vertex3(0.100, 0.000, 0.01);
GL.Vertex3(0, 0.0, 0.01);
GL.Vertex3(0.600, 0.600, 0.03);
GL.End();
glControl1.SwapBuffers();
I suppose a more succinct version of my question is: how do I transform two points by the lookAt matrix ("mat" in my code)?
Okay, I got your problem.
You set the camera position to (0.25, 0.25, 0.6), and with the point you look at, your view direction is (-0.25, -0.25, -0.599) (which has to be normalised). As I mentioned in my comment below your question, you are drawing a cube in the upper right corner of your view direction.
So the two points (0.25, 0.25, 0.6) (the camera position) and the point defined by the camera position together with your view direction, your up vector, and your right vector (have a look at this) define the cube of what is shown on the screen (the two points form a diagonal of that cube). Unfortunately, none of your points lie within this cube.
Could it be that you mixed up the camera position with the point you want to look at? Swapping the two values should solve your problem. The rest of your code seems to be correct. The only thing left is to check whether your points are within the clip space of your projection. Another way to see where your points end up is to move your camera dynamically, e.g. via keyboard input.
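To answer the succinct version (how to transform points by the lookAt matrix): multiply each point by that matrix and divide by w, then rebuild the ortho volume from the transformed corners. The question's code is C# (OpenTK); below is the same math sketched in Swift with simd, so treat it as hedged pseudocode. `viewMatrix` stands in for "mat", and the unit cube stands in for your scene's bounds:

import simd

// Transform a point by a 4x4 matrix (e.g. the lookAt/view matrix)
// and divide by w to get back to 3D.
func transformPoint(_ m: simd_double4x4, _ p: SIMD3<Double>) -> SIMD3<Double> {
    let v = m * SIMD4<Double>(p.x, p.y, p.z, 1)
    return SIMD3<Double>(v.x, v.y, v.z) / v.w
}

// Stand-in for the "mat" built by Matrix4d.LookAt in the question.
let viewMatrix = matrix_identity_double4x4

// Transform the corners of the volume you care about into eye space,
// then take per-axis min/max to build a tight GL.Ortho(...) box.
let corners: [SIMD3<Double>] = [
    SIMD3(0, 0, 0), SIMD3(1, 0, 0), SIMD3(0, 1, 0), SIMD3(0, 0, 1),
    SIMD3(1, 1, 0), SIMD3(1, 0, 1), SIMD3(0, 1, 1), SIMD3(1, 1, 1),
]
let eyeSpace = corners.map { transformPoint(viewMatrix, $0) }
let lo = eyeSpace.reduce(eyeSpace[0], simd_min)
let hi = eyeSpace.reduce(eyeSpace[0], simd_max)
// GL.Ortho(lo.x, hi.x, lo.y, hi.y, -hi.z, -lo.z)  // eye space looks down -Z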
This is an interview question.
We are given the dimensions of various rectangles, and we have to find the minimum area of a rectangle that can enclose all of them. The rectangles can also be rotated.
Test case:
input:
3 //number of rectangles
8 8
4 3
3 4
output:
88
11x8:
+ - - - - - - + + - +
| | | |
| | | |
| | + - +
| | + - +
| | | |
| | | |
+ - - - - - - + + - +
I looked at a similar question asked before: fitting rectangles in the smallest possible area.
The approach there looks at all possibilities and rotations, and determines the minimum over all such layout cases.
Can't we base an algorithm on first computing the sum of the rectangles' areas, and then looking for the maximum length and width?
There is no efficient exact solution to this problem (it is NP-hard), but there are several approximation approaches; you can read about some of them in the papers quoted below.
Optimal Rectangle Packing on Non-Square Benchmarks:
Given a set of rectangles, our problem is to find all enclosing
rectangles of minimum area that will contain them without overlap. We
refer to an enclosing rectangle as a bounding box. The optimization
problem is NP-hard, while the problem of deciding whether a set of
rectangles can be packed in a given bounding box is NP-complete, via a
reduction from bin-packing (Korf 2003).
New Improvements in Optimal Rectangle Packing:
Korf [2003] divided the rectangle packing problem into two
subproblems: the minimal bounding box problem and the containment
problem. The former finds a bounding box of least area that can
contain a given set of rectangles, while the latter tries to pack the
given rectangles in a given bounding box. The algorithm that solves
the minimal bounding box problem calls the algorithm that solves the
containment problem as a subroutine.
Minimal Bounding Box Problem
A simple way to solve the minimal bounding box problem is to find the
minimum and maximum areas that describe the set of feasible and
potentially optimal bounding boxes. Bounding boxes of all dimensions
can be generated with areas within this range, and then tested in
non-decreasing order of area until all feasible solutions of smallest
area are found. The minimum area is the sum of the areas of the given
rectangles. The maximum area is determined by the bounding box of a
greedy solution found by setting the bounding box height to that of
the tallest rectangle, and then placing the rectangles in the first
available position when scanning from left to right, and for each
column scanning from bottom to top.
See also Optimal Rectangle Packing: New Results.
First of all, you should check whether the enclosing rectangle itself may be rotated or not.
Either way, you can ignore the "rectangles" condition and solve the task in terms of points.
You have an array of points (the rectangles' vertices), and your task is to find the enclosing rectangle of minimum area.
If the enclosing rectangle cannot be rotated, the solution is straightforward, with complexity O(n).
Take the array of rectangles and build the array of their vertices.
The rest is simple:
long n;           // number of rectangles (so 4 * n vertices)
point arr[SIZE];  // array of vertices
long minX = MAXNUMBER, minY = MAXNUMBER, maxX = -MAXNUMBER, maxY = -MAXNUMBER;
for (long i = 0; i < 4 * n; i++)
{
    minX = MIN(minX, arr[i].x);
    minY = MIN(minY, arr[i].y);
    maxX = MAX(maxX, arr[i].x);
    maxY = MAX(maxY, arr[i].y);
}
long width = maxX - minX, height = maxY - minY;
printf("%ldx%ld", width, height);
It is a different task if the rectangle can be rotated. Then you should first:
Build the convex hull of all the points. You can use any of the existing algorithms; the complexity is O(n log n). For example, Graham's scan: http://en.wikipedia.org/wiki/Graham%27s_scan
Then run the minimum-area enclosing rectangle algorithm for convex polygons (a sketch follows below). Link: http://cgm.cs.mcgill.ca/~orm/maer.html
Link for your task in wiki: http://en.wikipedia.org/wiki/Minimum_bounding_rectangle
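A hedged sketch of that second step in Swift: it assumes `hull` already holds the convex-hull vertices in order (e.g. from Graham's scan) and uses the fact that one side of the optimal rectangle lies flush with a hull edge. It tries every edge in O(n^2); proper rotating calipers does the same in O(n):

import Foundation

struct Pt { var x: Double; var y: Double }

// Minimum-area enclosing rectangle of a convex hull, trying every
// hull edge as a candidate side of the rectangle.
func minEnclosingRectArea(hull: [Pt]) -> Double {
    let n = hull.count
    var best = Double.infinity
    for i in 0..<n {
        let a = hull[i], b = hull[(i + 1) % n]
        let theta = atan2(b.y - a.y, b.x - a.x)
        let (c, s) = (cos(theta), sin(theta))
        // Rotate all points by -theta so this edge is axis-aligned,
        // then measure the axis-aligned bounding box.
        var minX = Double.infinity, maxX = -Double.infinity
        var minY = Double.infinity, maxY = -Double.infinity
        for p in hull {
            let rx =  c * p.x + s * p.y
            let ry = -s * p.x + c * p.y
            minX = min(minX, rx); maxX = max(maxX, rx)
            minY = min(minY, ry); maxY = max(maxY, ry)
        }
        best = min(best, (maxX - minX) * (maxY - minY))
    }
    return best
}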
I want to get Screen Position from World Position in DirectX.
I can get my ball's world position, but I don't know how to convert it to a screen position.
You have a view/eye transform V (the one that "places" your "camera") and a projection transform P.
Clip space coordinates are reached by
clip_position = P * V * world_space_position
From clip space you reach NDC space by dividing the clip space coordinates by the 4th clip space coordinate w, i.e.
ndc_x = clip_x / clip_w
ndc_y = clip_y / clip_w
ndc_z = clip_z / clip_w
ndc_w = clip_w / clip_w = 1
The viewport XY coordinates are then reached by mapping the range [-1, 1] to the viewport dimensions. The difference between OpenGL and DirectX is in depth: OpenGL's NDC depth range is [-1, 1], while DirectX's is [0, 1]; in both cases that range is then mapped onto the depth buffer's value range.
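A minimal sketch of the whole chain in Swift with simd (treat it as pseudocode for the math; `viewProj` = P * V is an assumption, and DirectX window coordinates have Y growing downward):

import simd

func worldToScreen(_ world: SIMD3<Float>,
                   viewProj: simd_float4x4,   // P * V, column-major
                   viewportW: Float, viewportH: Float) -> SIMD2<Float>? {
    let clip = viewProj * SIMD4<Float>(world.x, world.y, world.z, 1)
    guard clip.w > 0 else { return nil }      // point is behind the camera
    let ndc = SIMD3<Float>(clip.x, clip.y, clip.z) / clip.w
    let x = (ndc.x * 0.5 + 0.5) * viewportW
    let y = (1.0 - (ndc.y * 0.5 + 0.5)) * viewportH  // flip Y for screen space
    return SIMD2<Float>(x, y)
}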
I need to resize and crop an image to a specific width and height. I was able to construct a method that will create a square thumbnail, but I'm unsure on how to apply this, when the desired thumbnail is not square.
def rescale(data, width, height):
    """Rescale the given image, optionally cropping it to make sure
    the result has the specified width and height."""
    from google.appengine.api import images

    img = images.Image(data)
    org_width, org_height = img.width, img.height

    # We must determine if the image is landscape or portrait.
    if org_width > org_height:
        # Landscape: the crop should be centered. The height-to-width
        # ratio (with the denominator converted to float so the division
        # is fractional) is the fraction of the image that will remain.
        ratio = org_height / float(org_width)
        # The fraction to be removed is 1 - ratio; dividing it by 2
        # gives the fraction to trim from each side, which is also the
        # left_x coordinate.
        left_x = (1 - ratio) / 2
        # Subtracting left_x from 1 gives the right_x coordinate.
        right_x = 1 - left_x
        # crop(left_x, top_y, right_x, bottom_y) -- all values are
        # fractions of the image dimensions.
        img.crop(left_x, 0.0, right_x, 1.0)
        img.resize(height=height)
    # Portrait
    elif org_width < org_height:
        ratio = org_width / float(org_height)
        img.crop(0.0, 0.0, 1.0, ratio)
        img.resize(width=width)
    thumbnail = img.execute_transforms()
    return thumbnail
If there is a better way to do this please let me know. Any help would be greatly appreciated.
Here's a diagram explaining the desired process.
Thanks,
Kyle
I had a similar problem (your screenshot was very useful). This is my solution:
from google.appengine.api import images

def rescale(img_data, width, height, halign='middle', valign='middle'):
    """Resize then optionally crop a given image.

    Attributes:
      img_data: The image data
      width: The desired width
      height: The desired height
      halign: Acts like photoshop's 'Canvas Size' function, horizontally
        aligning the crop to left, middle or right
      valign: Vertically aligns the crop to top, middle or bottom
    """
    image = images.Image(img_data)
    desired_wh_ratio = float(width) / float(height)
    wh_ratio = float(image.width) / float(image.height)
    if desired_wh_ratio > wh_ratio:
        # resize to width, then crop to height
        image.resize(width=width)
        image.execute_transforms()
        trim_y = (float(image.height - height) / 2) / image.height
        if valign == 'top':
            image.crop(0.0, 0.0, 1.0, 1 - (2 * trim_y))
        elif valign == 'bottom':
            image.crop(0.0, (2 * trim_y), 1.0, 1.0)
        else:
            image.crop(0.0, trim_y, 1.0, 1 - trim_y)
    else:
        # resize to height, then crop to width
        image.resize(height=height)
        image.execute_transforms()
        trim_x = (float(image.width - width) / 2) / image.width
        if halign == 'left':
            image.crop(0.0, 0.0, 1 - (2 * trim_x), 1.0)
        elif halign == 'right':
            image.crop((2 * trim_x), 0.0, 1.0, 1.0)
        else:
            image.crop(trim_x, 0.0, 1 - trim_x, 1.0)
    return image.execute_transforms()
You can specify both height and width parameters to resize -- it will not change the aspect ratio (you cannot do that with GAE's images module), but it will ensure that each of the two dimensions is <= the corresponding value you specify (in fact, one will be exactly equal to the value you specify, the other one will be <=).
I'm not sure why you're cropping first and resizing later; it seems like you should do it the other way around: resize so that as much of the original image "fits" as feasible, then crop to the exact resulting dimensions. (So you wouldn't use the originally provided values of height and width for the resize; you'd scale them up so that none of the resulting image is "wasted", aka "blank", if I understand your requirements correctly.) So maybe I'm not understanding exactly what you require -- could you provide an example (URLs to an image as it looks before the processing, how it should look after the processing, and the parameters you'd be passing)?
With reference to this programming game I am currently building.
(image: http://img12.imageshack.us/img12/2089/shapetransformationf.jpg)
To translate a Canvas in WPF, I am using two transforms: a TranslateTransform (to move it) and a RotateTransform (to rotate it), both children of the same TransformGroup.
I can easily get the top-left x,y coordinates of the canvas when it's not rotated (or rotated at 90deg, since it will be the same), but the problem I am facing is getting the coordinates of the top-left (and the other 3 points) after an arbitrary rotation.
This is because when a RotateTransform is applied, the TranslateTransform's X and Y properties are not changed, and thus still indicate that the top-left of the square is where the dotted square is in the image.
The Canvas is being rotated from its center, so that is its origin.
So how can I get the "new" x and y coordinates of the 4 points after a rotation?
[UPDATE]
(updated image: http://img25.imageshack.us/img25/8676/shaperotationaltransfor.jpg)
I have found a way to find the top-left coordinates after a rotation (as you can see from the new image) by adding the OffsetX and OffsetY from the rotation to the starting X and Y coordinates.
But I'm now having trouble figuring out the rest of the coordinates (the other 3).
With this rotated shape, how can I figure out the x and y coordinates of the remaining 3 corners?
[EDIT]
The points in the 2nd image ARE NOT ACCURATE AND EXACT POINTS. I made the points up with estimates in my head.
[UPDATE] Solution:
First of all, I would like to thank Jason S for that lengthy and Very informative post in which he describes the mathematics behind the whole process; I certainly learned a lot by reading your post and trying out the values.
But I have now found a code snippet (thanks to EugeneZ's mention of TransformBounds) that does exactly what I want:
public Rect GetBounds(FrameworkElement of, FrameworkElement from)
{
    // Might throw an exception if "of" and "from" are not in the same visual tree
    GeneralTransform transform = of.TransformToVisual(from);
    return transform.TransformBounds(new Rect(0, 0, of.ActualWidth, of.ActualHeight));
}
Reference: http://social.msdn.microsoft.com/Forums/en-US/wpf/thread/86350f19-6457-470e-bde9-66e8970f7059/
If I understand your question right:
given:
shape has corner (x1,y1), center (xc,yc)
rotated shape has corner (x1',y1') after being rotated about center
desired:
how to map any point of the shape (x,y) -> (x',y') by that same rotation
Here are the relevant equations:
(x'-xc) = Kc*(x-xc) - Ks*(y-yc)
(y'-yc) = Ks*(x-xc) + Kc*(y-yc)
where Kc=cos(theta) and Ks=sin(theta) and theta is the angle of counterclockwise rotation. (to verify: if theta=0 this leaves the coordinates unchanged, otherwise if xc=yc=0, it maps (1,0) to (cos(theta),sin(theta)) and (0,1) to (-sin(theta), cos(theta)) . Caveat: this is for coordinate systems where (x,y)=(1,1) is in the upper right quadrant. For yours where it's in the lower right quadrant, theta would be the angle of clockwise rotation rather than counterclockwise rotation.)
If you know the coordinates of your rectangle aligned with the x-y axes, xc would just be the average of the two x-coordinates and yc would just be the average of the two y-coordinates. (in your situation, it's xc=75,yc=85.)
If you know theta, you now have enough information to calculate the new coordinates.
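Those two equations in code (a Swift sketch; the corner and center values below match the example numbers in this answer, and theta is the counterclockwise rotation in radians):

import Foundation

struct Point2D { var x: Double; var y: Double }

// (x'-xc) = Kc*(x-xc) - Ks*(y-yc)
// (y'-yc) = Ks*(x-xc) + Kc*(y-yc)
func rotate(_ p: Point2D, about c: Point2D, by theta: Double) -> Point2D {
    let kc = cos(theta), ks = sin(theta)
    let dx = p.x - c.x, dy = p.y - c.y
    return Point2D(x: c.x + kc * dx - ks * dy,
                   y: c.y + ks * dx + kc * dy)
}

// Map all four corners of the rectangle with the same rotation:
let corners = [Point2D(x: 50, y: 50), Point2D(x: 100, y: 50),
               Point2D(x: 100, y: 120), Point2D(x: 50, y: 120)]
let center = Point2D(x: 75, y: 85)
let rotated = corners.map { rotate($0, about: center, by: .pi / 6) }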
If you don't know theta, you can solve for Kc, Ks. Here's the relevant calculations for your example:
(62-75) = Kc*(50-75) - Ks*(50-85)
(40-85) = Ks*(50-75) + Kc*(50-85)
-13 = -25*Kc + 35*Ks = -25*Kc + 35*Ks
-45 = -25*Ks - 35*Kc = -35*Kc - 25*Ks
which is a system of linear equations that can be solved (exercise for the reader: in MATLAB it's:
[-25 35;-35 -25]\[-13;-45]
to yield, in this case, Kc=1.027, Ks=0.3622, which does NOT make sense (K² = Kc² + Ks² is supposed to equal 1 for a pure rotation; in this case K = 1.089), so it's not a pure rotation about the rectangle center, which is what your drawing indicates. Nor does it seem to be a pure rotation about the origin. To check, compare distances from the center of rotation before and after the rotation using the Pythagorean theorem, d² = Δx² + Δy². (For rotation about xc=75, yc=85, the distance before is 43.01 and after is 46.84, a ratio of K=1.089; for rotation about the origin, the distance before is 70.71 and after is 73.78, a ratio of 1.043. I could believe ratios of 1.01 or less arising from rounding coordinates to integers, but this is clearly larger than roundoff error.)
So there's some missing information here. How did you get the numbers (62,40)?
That's the basic gist of the math behind rotations, however.
edit: aha, I didn't realize they were estimates. (pretty close to being realistic, though!)
I use this method:
Point newPoint = rotateTransform.Transform(new Point(oldX, oldY));
where rotateTransform is the instance on which I set Angle, etc.
Look at GeneralTransform.TransformBounds() method.
I'm not sure, but is this what you're looking for - rotation of a point in Cartesian coordinate system:
link
You can use Transform.Transform() method on your Point with the same transformations to get a new point to which these transformations were applied.