I have 100 .png images loaded as StreamTextures into an array of objects (named splash_planets) in Godot 3.5.1. I need to know how to reference them so that I can scale them. I am able to draw them using the following:
for splash_planet in splash_planets:
    center = Vector2(splash_planet._x, splash_planet._y)
    draw_texture(splash_planet._texture, center)
Now I want to do something like:
draw_texture(splash_planet._texture, center, (scale_x, scale_y))
Thanks in advance for any advice.
First of all, in this line:
draw_texture(splash_planet._texture, center)
You would be drawing the texture with its upper-left corner at the coordinates of center. If you want to draw the texture with its center at center, you do this:
var texture := splash_planet._texture
var size := texture.get_size()
draw_texture(texture, center - size * 0.5)
Since you are using draw_texture and you want to scale, you can use draw_set_transform:
draw_set_transform(Vector2.ZERO, 0.0, Vector2(scale_x, scale_y))
As you can see, the third parameter is the scale. The first two parameters are translation and rotation. Once you have set the transform, you can do the draw call as usual:
draw_set_transform(Vector2.ZERO, 0.0, Vector2(scale_x, scale_y))
var texture := splash_planet._texture
var size := texture.get_size()
draw_texture(texture, center - size * 0.5)
Calling draw_set_transform again (or calling draw_set_transform_matrix) will overwrite the previous transform.
Alternatively you can use draw_texture_rect:
var texture := splash_planet._texture
var size := texture.get_size() * Vector2(scale_x, scale_y)
var rect := Rect2(center - size * 0.5, size)
draw_texture_rect(texture, rect, false)
In this case we compute the rectangle in which to draw the texture. We also specify that we don't want to tile it (the false third argument), so Godot will stretch it.
You might also be interested in draw_texture_rect_region, which allows you to draw a rectangular region of the source texture.
So the idea is quite simple: given the sun's position (azimuth and elevation) I want my app to be able to display a shape using augmented reality when the camera is pointing at the sun.
So there are a few steps:
1. Convert azimuth and elevation into radians, then into Cartesian coordinates, to get a simple vector {x, y, z}.
2. Get the phone's gyroscope data to get its orientation in space as a 3D vector {x, y, z}.
3. Calculate new coordinates for the sun relative to the phone's orientation.
4. Display a random shape using Three.js at these coordinates.
Steps 1 and 2 are quite easy. There are a lot of APIs out there that give the sun's position for a given location. Then I used a formula to convert the sun's spherical coordinates into Cartesian ones:
x = R * cos(ϕ) * sin(θ)
y = R * cos(ϕ) * cos(θ)
z = R * sin(ϕ)
where R is the distance of the point from the origin, θ is the azimuth, and ϕ is the elevation.
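Just to make step 1 concrete, here is a minimal sketch of that conversion (written in Python rather than the JavaScript the app uses; the function name and the unit radius R = 1 are my own choices):

import math

def sun_direction(azimuth_deg, elevation_deg, R=1.0):
    # Convert azimuth/elevation (degrees) to radians, then apply the formula above.
    theta = math.radians(azimuth_deg)   # azimuth
    phi = math.radians(elevation_deg)   # elevation
    x = R * math.cos(phi) * math.sin(theta)
    y = R * math.cos(phi) * math.cos(theta)
    z = R * math.sin(phi)
    return (x, y, z)

# Example: azimuth 180 degrees, elevation 45 degrees -> roughly (0.0, -0.707, 0.707)
print(sun_direction(180, 45))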
I got the device's orientation in space with Expo.io, using their Device Motion API. Documentation here
I'm really struggling with the third step. I don't know how to combine sun and device coordinates in space, and project the whole thing through Three.js perspective camera.
I found this post the other day: Compare device 3D orientation with the sun position, but I found the explanations a bit confusing.
Let's say I want to display a cube with Three:
const geometry = new THREE.BoxGeometry(0.07, 0.07, 0.07);
const material = new THREE.MeshBasicMaterial({ color: 0x00ff00 });
const cube = new THREE.Mesh(geometry, material);
cube.position.x = 0;
cube.position.y = 0;
cube.position.z = -1;
The final goal here is to find the correct {x, y, z} so the cube is displayed at the sun's location. This vector will of course be updated every time the user moves the phone in space.
Hi, would it be possible to add text or a label to a Shape object in Silverlight, such as an arc? I have created a chart composed of multiple arcs, and I need to set a label on top of each arc to identify whose data it is.
You can use a PathListBox if you want to place the text on an arc. See text along curvature path like circular or arc in silverlight
Alternatively, you can position your own TextBlock object. Use the Polar to Rectangular conversion http://www.teacherschoice.com.au/maths_library/coordinates/polar_-_rectangular_conversion.htm
For instance, if the center of your circle is (10, 20), the radius is 30, and the angle at which you want to place the TextBlock is 45 degrees, then:
double DegreeToRadian(double degree) { return Math.PI / 180 * degree; }

double x = 30 * Math.Cos(DegreeToRadian(45)) + 10;
double y = 30 * Math.Sin(DegreeToRadian(45)) + 20;
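The same idea extends to several arcs; here is a rough sketch (in Python rather than C#, purely to show the math; the arc angles are made up, and each label is placed at the angular midpoint of its arc):

import math

center_x, center_y, radius = 10, 20, 30
arcs = [(0, 90), (90, 200), (200, 360)]   # hypothetical (start, end) angles in degrees

for start, end in arcs:
    mid = math.radians((start + end) / 2.0)
    label_x = center_x + radius * math.cos(mid)
    label_y = center_y + radius * math.sin(mid)
    # position the label's TextBlock at (label_x, label_y)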
I have an attributed string that I want to draw bottom-aligned into a rectangular path, using Core Text. Is there a way to get CTFrameSetter / CTFrame to do this, or do I need to do it manually? The manual way being:
Figure out the height of the frame using CTFramesetterSuggestFrameSizeWithConstraints
Adjust the height of the path.
You'll have to do it manually.
CGRect boundingBox = CTFontGetBoundingBox(font);
//Get the position on the y axis
float midHeight = self.frame.size.height / 2;
midHeight -= boundingBox.size.height / 2;
CGPathAddRect(path, NULL, CGRectMake(0, midHeight, self.frame.size.width, boundingBox.size.height));
Reference: Vertical align with Core Text?
I need to resize and crop an image to a specific width and height. I was able to construct a method that will create a square thumbnail, but I'm unsure how to apply this when the desired thumbnail is not square.
def rescale(data, width, height):
    """Rescale the given image, optionally cropping it to make sure the result image has the specified width and height."""
    from google.appengine.api import images

    new_width = width
    new_height = height

    img = images.Image(data)
    org_width, org_height = img.width, img.height

    # We must determine if the image is portrait or landscape.
    # Landscape
    if org_width > org_height:
        # With the landscape image we want the crop to be centered. We must find the
        # height-to-width ratio of the image and convert the denominator to a float
        # so that the ratio will be a decimal. The ratio is the percentage of the image
        # that will remain.
        ratio = org_height / float(org_width)
        # To find the percentage of the image that will be removed, we subtract the ratio
        # from 1. Dividing this number by 2 gives the percentage that should be
        # removed from each side; this is also our left_x coordinate.
        left_x = (1 - ratio) / 2
        # Subtracting left_x from 1 gives the right_x coordinate.
        right_x = 1 - left_x
        # crop(image_data, left_x, top_y, right_x, bottom_y, output_encoding=images.PNG)
        img.crop(left_x, 0.0, right_x, 1.0)
        # resize(image_data, width=0, height=0, output_encoding=images.PNG)
        img.resize(height=height)
    # Portrait
    elif org_width < org_height:
        ratio = org_width / float(org_height)
        # crop(image_data, left_x, top_y, right_x, bottom_y, output_encoding=images.PNG)
        img.crop(0.0, 0.0, 1.0, ratio)
        # resize(image_data, width=0, height=0, output_encoding=images.PNG)
        img.resize(width=width)

    thumbnail = img.execute_transforms()
    return thumbnail
If there is a better way to do this please let me know. Any help would be greatly appreciated.
Here's a diagram explaining the desired process.
Thanks,
Kyle
I had a similar problem (your screenshot was very useful). This is my solution:
from google.appengine.api import images

def rescale(img_data, width, height, halign='middle', valign='middle'):
    """Resize then optionally crop a given image.

    Attributes:
        img_data: The image data
        width: The desired width
        height: The desired height
        halign: Acts like Photoshop's 'Canvas Size' function, horizontally
            aligning the crop to left, middle or right
        valign: Vertically aligns the crop to top, middle or bottom
    """
    image = images.Image(img_data)
    desired_wh_ratio = float(width) / float(height)
    wh_ratio = float(image.width) / float(image.height)
    if desired_wh_ratio > wh_ratio:
        # Resize to width, then crop to height
        image.resize(width=width)
        image.execute_transforms()
        trim_y = (float(image.height - height) / 2) / image.height
        if valign == 'top':
            image.crop(0.0, 0.0, 1.0, 1 - (2 * trim_y))
        elif valign == 'bottom':
            image.crop(0.0, (2 * trim_y), 1.0, 1.0)
        else:
            image.crop(0.0, trim_y, 1.0, 1 - trim_y)
    else:
        # Resize to height, then crop to width
        image.resize(height=height)
        image.execute_transforms()
        trim_x = (float(image.width - width) / 2) / image.width
        if halign == 'left':
            image.crop(0.0, 0.0, 1 - (2 * trim_x), 1.0)
        elif halign == 'right':
            image.crop((2 * trim_x), 0.0, 1.0, 1.0)
        else:
            image.crop(trim_x, 0.0, 1 - trim_x, 1.0)
    return image.execute_transforms()
You can specify both height and width parameters to resize -- it will not change the aspect ratio (you cannot do that with GAE's images module), but it will ensure that each of the two dimensions is <= the corresponding value you specify (in fact, one will be exactly equal to the value you specify, the other one will be <=).
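For instance, a minimal sketch of that behavior (hypothetical dimensions, using the same GAE images API as the code above):

from google.appengine.api import images

# img_data: raw bytes of a hypothetical 800x600 source image
img = images.Image(img_data)
img.resize(width=200, height=200)  # aspect ratio preserved, result fits within 200x200
thumb = img.execute_transforms()   # result is 200x150: width == 200, height <= 200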
I'm not sure why you're cropping first and resizing later -- it seems like you should do things the other way around... resize so that as much of the original image "fits" as is feasible, then crop to ensure exact resulting dimension. (So you wouldn't use the original provided values of height and width for the resize -- you'd scale them up so that none of the resulting image is "wasted" aka "blank", if I understand your requirements correctly). So maybe I'm not understanding exactly what you require -- could you provide an example (URLs to an image as it looks before the processing, to how it should look after the processing, and details of the parameters you'd be passing)?
With reference to this programming game I am currently building.
[diagram of the shape transformation: http://img12.imageshack.us/img12/2089/shapetransformationf.jpg]
To transform a Canvas in WPF, I am using two Transforms: a TranslateTransform (to move it) and a RotateTransform (to rotate it), both children of the same TransformGroup.
I can easily get the top-left x,y coordinates of a canvas when it's not rotated (or rotated at 90deg, since they will be the same), but the problem I am facing is getting the coordinates of the top-left (and the other 3 points) after a rotation.
This is because when a RotateTransform is applied, the TranslateTransform's X and Y properties are not changed, and thus still indicate that the top-left of the square is where the dotted square is (from the image).
The Canvas is being rotated from its center, so that is its origin.
So how can I get the "new" x and y coordinates of the 4 points after a rotation?
[UPDATE]
[updated diagram of the rotated shape: http://img25.imageshack.us/img25/8676/shaperotationaltransfor.jpg]
I have found a way to find the top-left coordinates after a rotation (as you can see from the new image) by adding the OffsetX and OffsetY from the rotation to the starting X and Y coordinates.
But I'm now having trouble figuring out the rest of the coordinates (the other 3).
With this rotated shape, how can I figure out the x and y coordinates of the remaining 3 corners?
[EDIT]
The points in the 2nd image are NOT accurate, exact points; I made them up from estimates in my head.
[UPDATE] Solution:
First of all, I would like to thank Jason S for that lengthy and very informative post, in which he describes the mathematics behind the whole process; I certainly learned a lot by reading it and trying out the values.
But I have now found a code snippet (thanks to EugeneZ's mention of TransformBounds) that does exactly what I want:
public Rect GetBounds(FrameworkElement of, FrameworkElement from)
{
    // Might throw an exception if of and from are not in the same visual tree
    GeneralTransform transform = of.TransformToVisual(from);
    return transform.TransformBounds(new Rect(0, 0, of.ActualWidth, of.ActualHeight));
}
Reference: http://social.msdn.microsoft.com/Forums/en-US/wpf/thread/86350f19-6457-470e-bde9-66e8970f7059/
If I understand your question right:
given:
shape has corner (x1,y1), center (xc,yc)
rotated shape has corner (x1',y1') after being rotated about center
desired:
how to map any point of the shape (x,y) -> (x',y') by that same rotation
Here are the relevant equations:
(x'-xc) = Kc*(x-xc) - Ks*(y-yc)
(y'-yc) = Ks*(x-xc) + Kc*(y-yc)
where Kc=cos(theta) and Ks=sin(theta) and theta is the angle of counterclockwise rotation. (to verify: if theta=0 this leaves the coordinates unchanged, otherwise if xc=yc=0, it maps (1,0) to (cos(theta),sin(theta)) and (0,1) to (-sin(theta), cos(theta)) . Caveat: this is for coordinate systems where (x,y)=(1,1) is in the upper right quadrant. For yours where it's in the lower right quadrant, theta would be the angle of clockwise rotation rather than counterclockwise rotation.)
If you know the coordinates of your rectangle aligned with the x-y axes, xc would just be the average of the two x-coordinates and yc would just be the average of the two y-coordinates. (in your situation, it's xc=75,yc=85.)
If you know theta, you now have enough information to calculate the new coordinates.
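For example, a small sketch of that calculation (in Python; the 30-degree angle is made up, and the rectangle corners are inferred from the given corner (50,50) and center (75,85)):

import math

def rotate_point(x, y, xc, yc, theta):
    # (x'-xc) = Kc*(x-xc) - Ks*(y-yc)
    # (y'-yc) = Ks*(x-xc) + Kc*(y-yc)
    Kc, Ks = math.cos(theta), math.sin(theta)
    return (xc + Kc * (x - xc) - Ks * (y - yc),
            yc + Ks * (x - xc) + Kc * (y - yc))

xc, yc = 75, 85                                         # center of rotation
corners = [(50, 50), (100, 50), (100, 120), (50, 120)]  # axis-aligned rectangle
theta = math.radians(30)                                # example rotation angle
rotated = [rotate_point(x, y, xc, yc, theta) for x, y in corners]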
If you don't know theta, you can solve for Kc and Ks. Here are the relevant calculations for your example:
(62-75) = Kc*(50-75) - Ks*(50-85)
(40-85) = Ks*(50-75) + Kc*(50-85)
-13 = -25*Kc - (-35)*Ks = -25*Kc + 35*Ks
-45 = -25*Ks - 35*Kc = -35*Kc - 25*Ks
which is a system of linear equations that can be solved (exercise for the reader: in MATLAB it's:
[-25 35;-35 -25]\[-13;-45]
) to yield, in this case, Kc=1.027, Ks=0.3622, which does NOT make sense (K² = Kc² + Ks² is supposed to equal 1 for a pure rotation; in this case it's K = 1.089), so it's not a pure rotation about the rectangle center, which is what your drawing indicates. Nor does it seem to be a pure rotation about the origin. To check, compare distances from the center of rotation before and after the rotation using the Pythagorean theorem, d² = deltax² + deltay². (for rotation about xc=75,yc=85, distance before is 43.01, distance after is 46.84, the ratio is K=1.089; for rotation about the origin, distance before is 70.71, distance after is 73.78, ratio is 1.043. I could believe ratios of 1.01 or less would arise from coordinate rounding to integers, but this is clearly larger than a roundoff error)
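If it helps, here is a quick sketch that reproduces those numbers (Python/NumPy standing in for the MATLAB one-liner):

import math
import numpy as np

# Solve the 2x2 linear system above for Kc and Ks
Kc, Ks = np.linalg.solve([[-25, 35], [-35, -25]], [-13, -45])
print(Kc, Ks)              # ~1.027, ~0.3622
print(math.hypot(Kc, Ks))  # ~1.089; a pure rotation would give exactly 1

# Distance check about the rectangle center (75, 85)
print(math.hypot(50 - 75, 50 - 85))  # before: ~43.01
print(math.hypot(62 - 75, 40 - 85))  # after:  ~46.84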
So there's some missing information here. How did you get the numbers (62,40)?
That's the basic gist of the math behind rotations, however.
edit: aha, I didn't realize they were estimates. (pretty close to being realistic, though!)
I use this method:
Point newPoint = rotateTransform.Transform(new Point(oldX, oldY));
where rotateTransform is the instance on which I work and set Angle...etc.
Look at GeneralTransform.TransformBounds() method.
I'm not sure, but is this what you're looking for - rotation of a point in a Cartesian coordinate system:
link
You can use Transform.Transform() method on your Point with the same transformations to get a new point to which these transformations were applied.