I have to rotate an image on touch around a fixed axis. Currently I am able to set the image to a fixed position at a certain angle, but when I try to rotate it by some other number of degrees, the position of the image changes.
I have set transform-origin: 0 0;. Since my image covers a larger area, the rotation is not working the way I want.
My screenshot goes here:
I have an axis with an arrow that needs to rotate about the center of the X-Y axis image.
Any suggestions will be very helpful.
Based on this Stack Overflow answer:
https://stackoverflow.com/a/55385972
I'm trying to find a way to move the output of a fragment shader within screen coordinates. In that example the output must be the same size as the screen resolution, otherwise you will see only a portion of the result. Furthermore, the result is always aligned with the lower-left corner.
How can I resize the final frame and draw it centered in the viewport? E.g., screen 1920x1080, viewport 1920x1080, final distorted frame 640x480 centered, so the frame position is x = 640, y = 300. I can't find a way to move the destination of the result.
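One approach that seems to fit here, sketched under the assumption that you render the shader with a full-screen quad in desktop OpenGL (the program and VAO names are placeholders): restrict the viewport to the target rectangle before the draw call, so the shader output is mapped into that sub-rectangle of the window instead of the whole screen.

// Hypothetical names: 'program' is the shader from the linked answer,
// 'fullscreenQuadVao' holds the usual two-triangle quad in NDC.
const int winW = 1920, winH = 1080;
const int frameW = 640, frameH = 480;
const int x = (winW - frameW) / 2;   // 640
const int y = (winH - frameH) / 2;   // 300

glViewport(x, y, frameW, frameH);    // NDC (-1..1) now maps to this sub-rectangle
glUseProgram(program);
glBindVertexArray(fullscreenQuadVao);
glDrawArrays(GL_TRIANGLES, 0, 6);
glViewport(0, 0, winW, winH);        // restore for any later full-screen passes

If the shader builds its UVs from gl_FragCoord and a resolution uniform, note that gl_FragCoord stays in window coordinates, so subtract the viewport origin and divide by the 640x480 size.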
I am creating the "perfect" sprite packer. This is a sprite packer that makes sure the output sprite is compatible with most if not all game engines and animation software. It is a program that merges images into a horizontal sprite sheet.
It converts (if needed) the source frames to BMP in memory
It treats the color of the top-left pixel as fully transparent for the entire image (this can be configured)
It parses each frame individually to find the real coordinate rect (where the actual frame starts and ends, and its width and height), since images sometimes have a lot of extra transparent pixels.
It determines the frame box, which has the width and height of the frame with the largest width/height, so that it is big enough to contain every frame. (For extra compatibility, every frame must have the same dimensions.)
It creates the output sprite with a width of nFrames * wFrameBox (see the sketch after this list).
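A minimal sketch of steps 4 and 5, assuming a Frame struct that already carries the realCoordinates rect found in step 3 (Rect, Frame and the helper name are otherwise hypothetical):

#include <algorithm>
#include <vector>

struct Rect  { int x, y, w, h; };
struct Frame { Rect realCoordinates; /* pixel data omitted */ };

// Step 4: the frame box is the smallest box large enough to hold every trimmed frame.
Rect frameBox(const std::vector<Frame>& frames)
{
    Rect box{0, 0, 0, 0};
    for (const Frame& f : frames) {
        box.w = std::max(box.w, f.realCoordinates.w);
        box.h = std::max(box.h, f.realCoordinates.h);
    }
    return box;
}

// Step 5: the output sheet is frames.size() * box.w wide and box.h tall;
// allocating and blitting the actual pixel buffer is left out here.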
The problem is anchor alignment. Currently, it tries to align each frame so that its center lands on the frame box center:
// Center the trimmed frame horizontally inside the frame box.
// Note: wBox is the widest frame's width, so wBox >= frame->realCoordinates.w
// and the else branch should never run; this is equivalent to
// xpos = xBoxOffset + (wBox - frame->realCoordinates.w) / 2.
if ((wBox / 2) > (frame->realCoordinates.w / 2))
{
    xpos = xBoxOffset + ((wBox / 2) - (frame->realCoordinates.w / 2));
}
else
{
    xpos = xBoxOffset + ((frame->realCoordinates.w / 2) - (wBox / 2));
}
When animated, it looks better this way, but the horizontal frame position is still inconsistent, so a walking animation looks like walking and shaking.
I also tried the following:
store the real x pixel position of the widest frame and use it as a reference point:
xpos = xBoxOffset + (frame->realCoordinates.x - xRef);
This also gives slightly better results, but it is still not the correct algorithm.
Honestly, I don't know what I am doing.
What is the correct way to align sprite frames (i.e., to obtain the appropriate x position for drawing each frame), given that the output sprite sheet has a width of the number of frames multiplied by the width of the widest frame?
Your problem is that you first calculate the center and then calculate the size of the required bounding box. That is why your image 'shakes': in each frame that center is different from the original center.
You should use the center of the original bounding box as your origin, then find out the extent of each sprite, keeping track of the leftmost, rightmost, topmost and bottommost non-transparent pixels. That gives you the bounding box you need to use to avoid the shaking.
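A sketch of that idea, using the same hypothetical Rect/Frame types and includes as the sketch in the question, and assuming every source frame shares the same canvas size so the trimmed rects live in a common coordinate space: take the union of all trimmed rects once, then give every frame the same offset from that shared origin instead of re-centering each frame.

// Union of all non-transparent rects (assumes frames is non-empty).
Rect unionBox = frames[0].realCoordinates;
for (const Frame& f : frames) {
    int left   = std::min(unionBox.x, f.realCoordinates.x);
    int top    = std::min(unionBox.y, f.realCoordinates.y);
    int right  = std::max(unionBox.x + unionBox.w, f.realCoordinates.x + f.realCoordinates.w);
    int bottom = std::max(unionBox.y + unionBox.h, f.realCoordinates.y + f.realCoordinates.h);
    unionBox = { left, top, right - left, bottom - top };
}

// Use the union box as the frame box; every frame keeps its original offset
// from the union box origin, so the anchor never moves between frames.
for (size_t i = 0; i < frames.size(); ++i) {
    int xBoxOffset = static_cast<int>(i) * unionBox.w;
    int xpos = xBoxOffset + (frames[i].realCoordinates.x - unionBox.x);
    int ypos = frames[i].realCoordinates.y - unionBox.y;
    // blit the trimmed pixels of frames[i] at (xpos, ypos) in the sheet
}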
The catch is that you will find most sprites are already done that way, so the original bounding box is already defined as the minimum space needed to paint the whole sprite sequence, covering those non-transparent pixels.
The only way to remove unused sprite space is to store the first frame complete, and then the origin and dimensions of each of the other frames, as is done in animated GIF and APNG (Animated PNG, https://en.wikipedia.org/wiki/APNG).
In my React app I have a Babylon.js scene component created according to Babylon.js's example code, and I am also using the axis code from this example, where onSceneMounted in the React example is basically the createScene from the axis example. Instead of placing the axes at the scene origin in scene space, I would like to show them in the upper-right corner, similar to how the Unity editor does it.
The axes should always be pinned to the upper-right corner and rotate according to the camera rotation.
I have a property RelativePosition in the class MapItem; it is the relative position of the working point within the cell that contains it. The two components of the vector are always in the range [0,1]. In the example image below, the coordinates would be something like (0.25, 0.05).
Each item has a property RelativePosition which is a Vector that defines the working point position relative to the item. For example (0,0) is the top-left corner and (1,1) is the bottom-right.
How can I draw a rectangle in a cell at the relative position? Thanks for your help.
Have you thought about positioning the image by using containers? Maybe a UniformGrid could work for you. If you want the middle of the image at 0.25 from the left, center it in the left cell of a UniformGrid with two columns.
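If you do end up computing the pixel position directly instead, the arithmetic behind RelativePosition is just a linear interpolation over the cell bounds; a rough sketch (written in C++ only for illustration, all names hypothetical):

struct Vec2  { double x, y; };
struct RectD { double x, y, w, h; };   // cell bounds in pixels

// (0,0) maps to the cell's top-left corner, (1,1) to its bottom-right.
Vec2 workingPoint(const RectD& cell, const Vec2& rel)
{
    return { cell.x + rel.x * cell.w,
             cell.y + rel.y * cell.h };
}

// To draw a w-by-h rectangle whose center sits on the working point, use
// left = point.x - w / 2 and top = point.y - h / 2.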
I have a Viewport3D with a 3D model composed of multiple smaller components centered at the origin. I'm animating the PerspectiveCamera to rotate about the Y-axis using an AnimationClock created from a DoubleAnimation, to create a rotating effect on the model. In addition, I have another RotateTransform3D assigned to the camera's transform group to let the user orbit around and zoom in and out of the model with the mouse.
I want to be able to translate each component, as it is selected, to move in front of the rotating camera. However, I don't know how to get the position of the camera relative to the 3D model, because the camera's coordinate system is being animated and transformed by the user's input.
Is there a way to get the offset between the two coordinate systems?
Any help or suggestions would be appreciated.
Thanks,