I have translated an object by applying a transform to the 3D object. The translation works correctly, but the rotation gets disturbed: the object turns in the opposite direction. I want to rotate the 3D object about its own center, not about the center of the Viewport3D.
Make sure you apply the translation and rotation in the correct order.
Move the Object so that its center is at the origin (0,0,0)
Rotate the Object
Translate the Object anywhere you want
If you do this using matrices, multiply the matrices in reverse order!
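As a minimal sketch of that recipe (plain TypeScript with no particular 3D API; the Y axis is chosen just for illustration): translate so the center sits at the origin, rotate, then translate back. With matrices under the usual column-vector convention this composes as M = T(center) · R · T(-center), i.e. the factor written last is applied first, which is the "reverse order" mentioned above.

// Rotate a point about an arbitrary center instead of the world origin.
// Illustrative only: plain math, rotation about the Y axis, angle in radians.
type Vec3 = { x: number; y: number; z: number };

function rotateAboutCenter(p: Vec3, center: Vec3, angle: number): Vec3 {
  // 1. Move the object so its center is at the origin.
  const x = p.x - center.x;
  const z = p.z - center.z;

  // 2. Rotate about the Y axis.
  const cos = Math.cos(angle);
  const sin = Math.sin(angle);
  const rx = x * cos + z * sin;
  const rz = -x * sin + z * cos;

  // 3. Translate back to wherever the object should end up.
  return { x: rx + center.x, y: p.y, z: rz + center.z };
}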
Briefing: I'm attempting to parent a "drei Text" element to a point on the outside of a sphere, near the pins, in a react-three-fiber scene, so that when the sphere is rotated, or the camera rotates around the sphere, the Text's position stays centered on that point on the outside of the sphere.
An example: three.js text alignment
Questions:
How do I find the local and world space positions of an object or parts/points of an object?
How do I parent a child object to that position, such that when the parent moves, the child moves with it?
Does a scene have a world position that is relative to the axis at [0,0,0] and a local position that is relative to an object?
My Code Sandbox: earth with locations
Check out Object3D, the base class for many objects in three.js, including the scene itself. It has properties and functions that can help you, like .position and .getWorldPosition().
Note that the position of any object in the scene is relative to its parent. You can add children to an object with .add(child), but a better way to group objects might be with a Group.
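A minimal sketch of both ideas in plain three.js (the same calls work on the object refs you get from react-three-fiber); the earth/pin names are placeholders, not taken from the sandbox:

import * as THREE from "three";

const scene = new THREE.Scene();

// Group the sphere and its pins so they move and rotate together.
const earth = new THREE.Group();
scene.add(earth);

const sphere = new THREE.Mesh(
  new THREE.SphereGeometry(1, 32, 32),
  new THREE.MeshStandardMaterial()
);
earth.add(sphere);

// A child "pin" on the surface: its .position is local, i.e. relative to the parent group.
const pin = new THREE.Object3D();
pin.position.set(0, 0, 1);
earth.add(pin);

// Rotating the parent carries every child with it.
earth.rotation.y = Math.PI / 4;

// World position = the local position transformed by every ancestor.
const worldPosition = new THREE.Vector3();
pin.getWorldPosition(worldPosition); // where the pin actually sits in scene space
console.log(pin.position, worldPosition);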
I have a model which rotates on the X axis, but the center of the rotation is not on the axis itself. The rotation code is pretty simple:
model.current.rotation.x += 0.016; (axis and speed)
but there seems no way to define the actual axis of rotation to ensure the model just rotates around its own center. At the moment it rotates in a big circle!
Any suggestions appreciated.
:-0
Your mesh most likely is not centred, meaning the vertices sit far away from the model's center of mass. You can fix this in Blender, but three.js also has methods (on the geometry) that recalculate the vertices. A cheaper solution is to take a Box3, set it from the object, get its min/max, and use that to shift the object by half of it.
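A sketch of both suggestions, assuming mesh stands in for the loaded model (model.current in the question):

import * as THREE from "three";

// Placeholder for the loaded model from the question (model.current).
const mesh = new THREE.Mesh(new THREE.BoxGeometry(2, 1, 1), new THREE.MeshNormalMaterial());

// Option 1: recalculate the vertices so the geometry is centred on its own origin
// (this is the method on the geometry mentioned above).
mesh.geometry.center();

// Option 2, the cheaper shift: measure the object with a Box3 and offset it by the
// centre of its bounding box (assumes the mesh sits directly in the scene).
const box = new THREE.Box3().setFromObject(mesh);
const center = box.getCenter(new THREE.Vector3());
mesh.position.sub(center);

// After either fix, this spins the model around its own centre:
mesh.rotation.x += 0.016;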
I am using WPF 3D, but I think this question should apply to any 3d texture mapping.
Suppose I have a model of a cow, and I want to draw a circular spot on the cow (and I want to do this dynamically -- suppose I don't know the location of the spot until run-time). I could do this by coloring the vertexes (vertexes are assigned a color based on their distance from the center of the spot), but if the model is fairly low-poly, that will give a pretty jagged-edged circle.
I could do it using a pixel shader, where the shader colors each pixel based on its distance from the center of the spot. But suppose I don't have access to pixel shaders (since I don't in WPF).
So, it seems that what I want to do is dynamically create a texture with the circle pattern on it, and texture the cow with it.
The question is: As I'm drawing that texture, how can I know what 3d coordinate in model space a given xy coordinate on the texture image corresponds to?
That is, suppose I have already textured my model with a plain white texture -- I've set up texture coordinates, done texture mapping, but don't have the texture image yet. So I have this 1000x1000 (or whatever) pixel image that gets draped nicely over the cow according to some nice texture coordinates that have been set up on the model beforehand. I understand that when the 3D hardware goes to draw a given triangle, it uses the texture coordinates of the vertexes of the triangle to find the corresponding triangular region of the image, and then interpolates across the surface of the triangle to fill displayed model pixels with colors from that triangular region of the image.
How do I go the other way? How do I say, for this given xy point on my texture image, and given the texture coordinates that have already been set up on the model, what's the 3d coordinate in model space that this image pixel is going to correspond to once texture mapping happens?
If I had such a function, I could color my texture map image such that all the points (in 3d space) within a certain distance of the circle center point on the cow would get one color, and all points outside that distance would get another color, and I'd end up with a nice, crisp circular spot on the cow, even with a relatively low-poly model. Does that sound right?
I do understand that given the texture coordinates for the vertexes of each triangle, I can step through the triangles in my model, find the corresponding triangle on the texture image, and do my own interpolation, across the texture pixels in that triangle, by interpolating across the 3d plane determined by the vertex points. And that doesn't sound too hard. But I'm just trying to understand if there is some standard 3d concept/function where I can just call a ready-made function to give me the model space coordinates for a given texture xy.
I did end up getting this working. I walk every point on the texture (1024 x 1024 points). Using the model's texture coordinates, I determine which polygon face, if any, the given u,v point is inside of. If it's inside of a face, I get the model coordinates for each point on that face. I then do a barycentric interpolation as described here: http://paulbourke.net/texture_colour/interpolation/
That is, for each u,v point on the texture, I use an inside-polygon check to determine which quad it's in on the 2D texture sheet and then I use an interpolation on that same 2D geometry as described in the link above, but instead of interpolating colors or normals I'm interpolating 3D coordinates.
I can then use the 3D coordinate to color the point on the texture (e.g., to color a circular spot on the cow based on how far in model space the given texture point is from the spot center point). And then I can apply the texture to the model, and it works.
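For reference, a sketch of that per-texel step in TypeScript (Vec2/Vec3 are ad-hoc helper types): compute barycentric weights for the texel within the triangle's UVs, then blend the triangle's 3D vertex positions with those same weights. A quad is just two such triangles.

type Vec2 = { x: number; y: number };
type Vec3 = { x: number; y: number; z: number };

// Barycentric weights of point p with respect to the UV triangle (a, b, c).
// Returns null if the triangle is degenerate or p lies outside it.
function barycentric(p: Vec2, a: Vec2, b: Vec2, c: Vec2): [number, number, number] | null {
  const v0 = { x: b.x - a.x, y: b.y - a.y };
  const v1 = { x: c.x - a.x, y: c.y - a.y };
  const v2 = { x: p.x - a.x, y: p.y - a.y };
  const denom = v0.x * v1.y - v1.x * v0.y;
  if (denom === 0) return null;
  const wb = (v2.x * v1.y - v1.x * v2.y) / denom;
  const wc = (v0.x * v2.y - v2.x * v0.y) / denom;
  const wa = 1 - wb - wc;
  return wa < 0 || wb < 0 || wc < 0 ? null : [wa, wb, wc];
}

// Same weights, but interpolating 3D positions instead of colors or normals.
function texelToModelSpace(uv: Vec2, triUV: [Vec2, Vec2, Vec2], triPos: [Vec3, Vec3, Vec3]): Vec3 | null {
  const w = barycentric(uv, triUV[0], triUV[1], triUV[2]);
  if (w === null) return null;
  return {
    x: w[0] * triPos[0].x + w[1] * triPos[1].x + w[2] * triPos[2].x,
    y: w[0] * triPos[0].y + w[1] * triPos[1].y + w[2] * triPos[2].y,
    z: w[0] * triPos[0].z + w[1] * triPos[1].z + w[2] * triPos[2].z,
  };
}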
Again, it seems like this must be a standard procedure with a name...
One issue is that the result is very sensitive to the quality of the texturing as set up by the modeler. For instance, if a relatively large quad on the cow corresponds to a small quad on the texture image, there just aren't enough pixels to work with to get a smooth curve within that model quad once the texture is applied. You can of course use a higher-res texture, such as 2048x2048, but then your loop time is 4x.
It's actually a rasterization process, if I didn't misunderstand your question. In lightmapping, one may also need to find the corresponding positions and normals in world space for each texel in the lightmap and then bake irradiance, which seems similar to your goal.
You can use a standard graphics API to do this task instead of writing your own implementation. Map:
Size of texture -> Size of G-buffers
UVs of each mesh triangle -> Vertex positions vec3(u, v, 0) of the input stage
Indices of each mesh triangle -> Indices of the input stage
Positions (and normals, etc.) of each mesh triangle -> Attributes of the input stage
After the rasterizer stage of the graphics pipeline, all fragments that lie within the UV triangle are generated, and the attributes that have been supplied are interpolated automatically. You can then do whatever you want in the pixel shader!
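A sketch of that setup using three.js as a stand-in (my assumption; WPF itself doesn't expose programmable shaders, but any API with vertex and pixel shaders works the same way). The vertex shader places each vertex at its UV coordinate, and the rasterizer interpolates the model-space position across every covered texel:

import * as THREE from "three";

// Material that "unwraps" the mesh: each triangle is drawn at its UV location,
// and the interpolated model-space position is written out per texel.
const positionMapMaterial = new THREE.ShaderMaterial({
  vertexShader: `
    varying vec3 vPosition;
    void main() {
      vPosition = position;                         // model-space position attribute
      gl_Position = vec4(uv * 2.0 - 1.0, 0.0, 1.0); // place the vertex at its UV in clip space
    }
  `,
  fragmentShader: `
    varying vec3 vPosition;
    void main() {
      gl_FragColor = vec4(vPosition, 1.0);          // this texel now stores a model-space position
    }
  `,
});

// Render the mesh with this material into a float render target the size of the
// texture, then read it back (positions can be negative, hence FloatType).
const positionMap = new THREE.WebGLRenderTarget(1024, 1024, { type: THREE.FloatType });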
Is it possible to convert a 3D object from a Viewport3D and show it on a Canvas, where the conversion MUST NOT depend on the camera position and its viewpoint?
In other words, using WPF I would like to make four views like in 3ds Max: Perspective (for 3D objects) and Front, Top, and Left views (for 2D).
The Perspective view is a Viewport3D, but how do I show all the 3D objects from that viewport in the other views - Top, Front and Left?
Mathematically speaking, no, it's not possible.
However, you should be able to simulate that by specifying a Camera Position that is top, front, and left. Can't you calculate approximately where that is based on the bounds of the 3D object?
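If you end up projecting the geometry onto the Canvas yourself, the math for axis-aligned orthographic views is just dropping one coordinate per view. A sketch (TypeScript; the axis conventions and any scaling/centering on the Canvas are assumptions):

type Vec3 = { x: number; y: number; z: number };
type Vec2 = { x: number; y: number };

// Axis-aligned orthographic projections: each view simply ignores the axis it looks along.
const frontView = (p: Vec3): Vec2 => ({ x: p.x, y: p.y }); // looking along -Z
const topView = (p: Vec3): Vec2 => ({ x: p.x, y: p.z });   // looking along -Y
const leftView = (p: Vec3): Vec2 => ({ x: p.z, y: p.y });  // looking along +X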
http://en.wikipedia.org/wiki/Homogeneous_coordinates#Use_in_computer_graphics
I have a Canvas with several custom controls inherited from the Panel class, dynamically added to it at runtime with RenderTransform=(.5,.5). But when I apply a TranslateTransform of (50,50) and rotate it by 100 degrees, it does not rotate in place; it rotates with a radius of 50. Why?
Am I doing something wrong?
Transformations are not commutative; you should apply the rotation before applying the translation.
Often you have a TransformGroup, in which case you can just change the order of its children. If that is somehow not an option because some transform is "inherited" from a parent, you can nullify the prior transforms using their inverses (in the case of a translation, one that moves the target back to the origin), then rotate it in place, and apply the original transform again.
The documentation is your friend; here is what can be found for TransformGroups:
In a composite transformation, the order of individual transformations is important. For example, if you first rotate, then scale, then translate, you get a different result than if you first translate, then rotate, then scale. One reason order is significant is that transformations like rotation and scaling are done with respect to the origin of the coordinate system. Scaling an object that is centered at the origin produces a different result than scaling an object that has been moved away from the origin. Similarly, rotating an object that is centered at the origin produces a different result than rotating an object that has been moved away from the origin.
If it rotates with a radius of 50, it's because your origin is wrong.
You just need to change the origin of your RotateTransform by setting the CenterX and CenterY properties both to 50 in this case.
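As a sketch of what that does mathematically (plain TypeScript, 2D, degrees converted to radians): setting the rotation center to (50, 50) is equivalent to translating by (-50, -50), rotating, and translating back.

type Vec2 = { x: number; y: number };

// Rotate a point about a chosen center (what CenterX/CenterY specify) rather than (0, 0).
function rotateAbout(p: Vec2, cx: number, cy: number, degrees: number): Vec2 {
  const a = (degrees * Math.PI) / 180;
  const x = p.x - cx; // translate the center to the origin
  const y = p.y - cy;
  return {
    x: x * Math.cos(a) - y * Math.sin(a) + cx, // rotate, then translate back
    y: x * Math.sin(a) + y * Math.cos(a) + cy,
  };
}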