Convert 3D mesh to 2D and place it on WPF Canvas

Is it possible to convert a 3D object from a Viewport3D and show it on a Canvas, where the conversion MUST NOT depend on the camera position and its viewpoint?
In other words, using WPF I would like to make 4 views like in 3ds Max: Perspective (for 3D objects) and Front, Top, and Left views (for 2D).
The Perspective view is a Viewport3D, but how do I show all the 3D objects from that viewport in the other views (Top, Front, and Left)?

Mathematically speaking, no, it's not possible.
However, you should be able to simulate it by giving each view its own fixed camera positioned at the top, front, and left. Can't you calculate approximately where that is based on the bounds of the 3D object?
http://en.wikipedia.org/wiki/Homogeneous_coordinates#Use_in_computer_graphics
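Building on that suggestion: for the Front, Top, and Left views specifically, an OrthographicCamera (rather than a PerspectiveCamera) gives the flat, distance-independent look of the 3ds Max side views. Below is a minimal sketch, assuming three extra Viewport3D controls that show the same scene (a frozen Model3DGroup can be reused as the Content of a ModelVisual3D in each viewport); the fixed distance, view width, and axis conventions are illustrative placeholders:

```csharp
using System.Windows.Media.Media3D;

static class AxisAlignedViews
{
    // Builds an axis-aligned OrthographicCamera; an orthographic projection
    // ignores perspective, so every object keeps its true proportions.
    public static OrthographicCamera MakeCamera(Vector3D lookDirection, Vector3D up, double viewWidth)
    {
        Vector3D back = -lookDirection;
        back.Normalize();
        // Placeholder distance; in practice, derive it from the scene bounds.
        Point3D position = new Point3D(back.X * 100, back.Y * 100, back.Z * 100);
        return new OrthographicCamera(position, lookDirection, up, viewWidth);
    }
}

// Usage (each Viewport3D shows the same model):
// frontView.Camera = AxisAlignedViews.MakeCamera(new Vector3D(0, 0, -1), new Vector3D(0, 1, 0), 10);
// topView.Camera   = AxisAlignedViews.MakeCamera(new Vector3D(0, -1, 0), new Vector3D(0, 0, -1), 10);
// leftView.Camera  = AxisAlignedViews.MakeCamera(new Vector3D(1, 0, 0),  new Vector3D(0, 1, 0), 10);
```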

Related

how do I do "reverse" texture mapping from texture image x,y to 3d space?

I am using WPF 3D, but I think this question should apply to any 3d texture mapping.
Suppose I have a model of a cow, and I want to draw a circular spot on the cow (and I want to do this dynamically -- suppose I don't know the location of the spot until run-time). I could do this by coloring the vertexes (vertexes are assigned a color based on their distance from the center of the spot), but if the model is fairly low-poly, that will give a pretty jagged-edged circle.
I could do it using a pixel shader, where the shader colors each pixel based on its distance from the center of the spot. But suppose I don't have access to pixel shaders (since I don't in WPF).
So, it seems that what I want to do is dynamically create a texture with the circle pattern on it, and texture the cow with it.
The question is: As I'm drawing that texture, how can I know what 3d coordinate in model space a given xy coordinate on the texture image corresponds to?
That is, suppose I have already textured my model with a plain white texture -- I've set up texture coordinates, done texture mapping, but don't have the texture image yet. So I have this 1000x1000 (or whatever) pixel image that gets draped nicely over the cow according to some nice texture coordinates that have been set up on the model beforehand. I understand that when the 3D hardware goes to draw a given triangle, it uses the texture coordinates of the vertexes of the triangle to find the corresponding triangular region of the image, and then interpolates across the surface of the triangle to fill displayed model pixels with colors from that triangular region of the image.
How do I go the other way? How do I say, for this given xy point on my texture image, and given the texture coordinates that have already been set up on the model, what's the 3d coordinate in model space that this image pixel is going to correspond to once texture mapping happens?
If I had such a function, I could color my texture map image such that all the points (in 3d space) within a certain distance of the circle center point on the cow would get one color, and all points outside that distance would get another color, and I'd end up with a nice, crisp circular spot on the cow, even with a relatively low-poly model. Does that sound right?
I do understand that given the texture coordinates for the vertexes of each triangle, I can step through the triangles in my model, find the corresponding triangle on the texture image, and do my own interpolation, across the texture pixels in that triangle, by interpolating across the 3d plane determined by the vertex points. And that doesn't sound too hard. But I'm just trying to understand if there is some standard 3d concept/function where I can just call a ready-made function to give me the model space coordinates for a given texture xy.
I did end up getting this working. I walk every point on the texture (1024 x 1024 points). Using the model's texture coordinates, I determine which polygon face, if any, the given u,v point is inside of. If it's inside of a face, I get the model coordinates for each point on that face. I then do a barycentric interpolation as described here: http://paulbourke.net/texture_colour/interpolation/
That is, for each u,v point on the texture, I use an inside-polygon check to determine which quad it's in on the 2D texture sheet and then I use an interpolation on that same 2D geometry as described in the link above, but instead of interpolating colors or normals I'm interpolating 3D coordinates.
I can then use the 3D coordinate to color the point on the texture (e.g., to color a circular spot on the cow based on how far in model space the given texture point is from the spot center point). And then I can apply the texture to the model, and it works.
Again, it seems like this must be a standard procedure with a name...
One issue is that the result is very sensitive to the quality of the texturing as set up by the modeler. For instance, if a relatively large quad on the cow corresponds to a small quad on the texture image, there just aren't enough pixels to work with to get a smooth curve within that model quad once the texture is applied. You can of course use a higher-res texture, such as 2048x2048, but then your loop time is 4x.
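For reference, here is a condensed sketch of the per-triangle step of the texel walk described above (quads would first be split into two triangles; the names are illustrative):

```csharp
using System;
using System.Windows;
using System.Windows.Media.Media3D;

static class TexelMapper
{
    // Sketch: map a texture-space point (u, v) to model space for one
    // triangle, given its UVs (t0..t2) and model-space positions (p0..p2).
    // Returns null when the texel falls outside the triangle.
    public static Point3D? TexelToModel(Point uv, Point t0, Point t1, Point t2,
                                        Point3D p0, Point3D p1, Point3D p2)
    {
        // Barycentric coordinates of uv with respect to the UV triangle.
        double denom = (t1.Y - t2.Y) * (t0.X - t2.X) + (t2.X - t1.X) * (t0.Y - t2.Y);
        if (Math.Abs(denom) < 1e-12) return null; // degenerate UV triangle

        double a = ((t1.Y - t2.Y) * (uv.X - t2.X) + (t2.X - t1.X) * (uv.Y - t2.Y)) / denom;
        double b = ((t2.Y - t0.Y) * (uv.X - t2.X) + (t0.X - t2.X) * (uv.Y - t2.Y)) / denom;
        double c = 1 - a - b;
        if (a < 0 || b < 0 || c < 0) return null; // outside this triangle

        // Interpolate the 3D positions with the same barycentric weights.
        return new Point3D(a * p0.X + b * p1.X + c * p2.X,
                           a * p0.Y + b * p1.Y + c * p2.Y,
                           a * p0.Z + b * p1.Z + c * p2.Z);
    }
}
```

The outer loop would test each texel against the model's UV triangles (a spatial index over the UV layout avoids a brute-force scan) and color the texel based on the distance of the returned model-space point from the spot center.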
If I didn't misunderstand your question, this is actually a rasterization process. In lightmapping, one may also need to find the corresponding positions and normals in world space for each texel in lightmap space and then bake irradiance (which seems similar to your goal).
You can use a standard graphics API to do this task instead of writing your own implementation. Let:
Size of texture -> Size of G-buffers
UVs of each mesh triangle -> Vertex positions vec3(u, v, 0) of the input stage
Indices of each mesh triangle -> Indices of the input stage
Positions (and normals, etc.) of each mesh triangle -> Attributes of the input stage
After the rasterizer stage of the graphics pipeline, all fragments that lie within the UV triangle are generated, and the attributes that have been supplied are interpolated automatically. You can now do whatever you want in the pixel shader!

How to apply several side images to WPF 3D model?

I already have a 3D model of a planet and 4 pictures of the front (0 degrees), left (270), right (90), and back (180) sides of the planet. Is there any known way to apply these 4 photos as the texture of the 3D model?
I thought about combining these pictures into one, like a panoramic view, and then applying it to the model. But that might be overkill. Maybe there is a way to apply a 2D texture to the 2D view of the 3D model, WYSIWYG-style. Any hint would be appreciated.
Your best bet would be to combine the images into a single rectangular projection and then map that to the sphere using a spherical mapping.
Assuming your images are circular, the first step is to create square images from them. A Mercator projection will probably work best.
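Once you have the combined rectangular image, a sketch of the spherical mapping might look like the following, assuming a MeshGeometry3D whose positions lie on a sphere centered at the origin:

```csharp
using System;
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Media3D;

static class SphereMapping
{
    // Sketch: assign spherical texture coordinates so one rectangular image
    // wraps once around the sphere.
    public static void Apply(MeshGeometry3D mesh)
    {
        var coords = new PointCollection();
        foreach (Point3D p in mesh.Positions)
        {
            Vector3D n = new Vector3D(p.X, p.Y, p.Z);
            n.Normalize();
            double u = 0.5 + Math.Atan2(n.Z, n.X) / (2 * Math.PI); // longitude -> u
            double v = 0.5 - Math.Asin(n.Y) / Math.PI;             // latitude  -> v
            coords.Add(new Point(u, v));
        }
        mesh.TextureCoordinates = coords;
    }
}
```

Note that vertices sitting on the longitude seam (where u wraps from 1 back to 0) need to be duplicated, or the texture will smear across one column of triangles.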

direct access to pixel data in wpf 3d

Is there a 3D equivalent of 2D's LockBits in Windows Presentation Foundation to get direct pixel access?
I understand you can paint a triangle at a time ("3D for the rest of us"). Wouldn't it be easier to paint in cubes instead of triangles? (I need to paint a stack of images, such as an MRI sequence.)
The WriteableBitmap class allows you to access the pixels. I'm a bit unsure what you want with regard to 3D, but you should be able to use a WriteableBitmap as the texture for each item and position them in 3D as required. For creating a stack of images, 3D Panel and FluidKit's ElementFlow might be of interest to you.
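A sketch of that approach, assuming one textured quad per image slice (the pixel buffer, dimensions, and helper name are placeholders):

```csharp
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Media.Media3D;

static class SliceFactory
{
    // Sketch: write raw BGRA pixels into a WriteableBitmap and use it as the
    // texture of a quad placed at depth z in the 3D stack.
    public static GeometryModel3D MakeSlice(byte[] bgraPixels, int width, int height, double z)
    {
        var bmp = new WriteableBitmap(width, height, 96, 96, PixelFormats.Bgra32, null);
        bmp.WritePixels(new Int32Rect(0, 0, width, height), bgraPixels, width * 4, 0);

        var mesh = new MeshGeometry3D
        {
            Positions = new Point3DCollection
            {
                new Point3D(-1, -1, z), new Point3D(1, -1, z),
                new Point3D(1, 1, z),   new Point3D(-1, 1, z)
            },
            TextureCoordinates = new PointCollection
            {
                new Point(0, 1), new Point(1, 1), new Point(1, 0), new Point(0, 0)
            },
            TriangleIndices = new Int32Collection { 0, 1, 2, 0, 2, 3 }
        };

        // DiffuseMaterial needs a light (e.g., an AmbientLight) in the scene.
        return new GeometryModel3D(mesh, new DiffuseMaterial(new ImageBrush(bmp)));
    }
}
```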
Triangles are used because three points always define a flat surface, which makes shading simpler and more predictable; plus, you can make any shape if you use enough triangles.
If by painting in cubes you mean tiny cubes, similar to how you use pixels in 2D, those are known as voxels. They have their use cases, but most hardware and software are designed with polygons in mind.

WPF 3d rotation animations

I have a few 3D rectangles on my screen that I want to pivot around the Y axis.
I want to press down with the mouse and rotate the 3D object to a maximum rotation, but when the user moves their mouse, I want to rotate it slightly so that it looks like a see-saw (rotating in a range of -13 to 13 degrees on the Y axis).
At the moment I can do this, but my frame rate really suffers when I move the mouse quickly. So, for example, when I click the left side of the rectangle, I generate a storyboard and animation objects, then rotate the 3D object to -13 degrees. Then when I slightly move the mouse to the right, I want to rotate it to -12.5, and so on...
Again, I can do all of this; it's just that the performance suffers greatly! It goes down to 5 FPS in some cases... which is not acceptable.
My question is: am I doing this the best way? How else could you animate a rotation based on the user's position on the screen?
Thanks for any help you can provide!
Mark
I assume you are doing the following:
Using a separate Model3D for the object you are rotating & including it in a Model3DGroup
Giving it a RotateTransform3D containing an AxisAngleRotation3D
Animating the AxisAngleRotation3D's Angle property in the Storyboard
If my assumptions are correct, I think we can conclude the problem is efficiency in rendering, since the CPU required to update a single Angle value, recompute the Transform, and update MILCore is negligible.
I can think of several ways that could improve the rendering performance:
If your Model3D being rotated is a GeometryModel3D backed by a MeshGeometry3D, do as much as you can to simplify the mesh and the materials used. It can also help to substitute a different mesh for closeups.
If the Model3D being rotated is a GeometryModel3D that uses VisualBrush brushes, during animation temporarily replace the VisualBrush with an ImageBrush containing a BitmapImage that is a snapshot of the Visual underlying the VisualBrush as of the instant the animation starts. When the animation ends, put back the VisualBrush. The user probably won't notice that the contents of the object freeze temporarily while it rotates. (Note that this same technique can be used if your Visual3D is a Viewport2DVisual3D.)
If the Model3D being rotated is combined into a Model3DGroup with other objects but lies entirely in front of or behind the other groups, separate the model into its own separate Viewport3DVisual, appropriately layered to get the effect you want.
Simplify lighting or Material types
Monitor the actual frame rate and if it is going too low when using the storyboard, rotate the object immediately to where the mouse indicates without using an animation.
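For reference, here is a minimal sketch of the setup assumed above, with that last suggestion applied ("model" and "targetAngle" are placeholders):

```csharp
using System;
using System.Windows.Media.Animation;
using System.Windows.Media.Media3D;

// One AxisAngleRotation3D shared by the model, pivoting around Y.
var rotation = new AxisAngleRotation3D(new Vector3D(0, 1, 0), 0);
model.Transform = new RotateTransform3D(rotation);

// Option 1: one short animation towards the target angle, rather than
// building a new Storyboard on every mouse move.
rotation.BeginAnimation(AxisAngleRotation3D.AngleProperty,
    new DoubleAnimation(targetAngle, TimeSpan.FromMilliseconds(100)));

// Option 2: if the frame rate still drops, set the angle directly
// (first clear any running animation by passing null).
// rotation.BeginAnimation(AxisAngleRotation3D.AngleProperty, null);
// rotation.Angle = targetAngle;
```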
MSDN presents some tips on what impacts WPF 3D performance. If you haven't stumbled upon it yet, check the items on the "Performance Impact: High" list.
Edit: In March 2009, Josh Smith published an article on CodeProject that involves rotating 3D objects. Maybe you want to check his solution.

WPF 3D Billboards

In a 3D scene we often need to apply labels (little text elements or icons) next to a 3D object that is moving around (rotation, translation) in the scene. These labels should always face the camera but still move with the object. This technique, I believe, is called billboarding.
An additional cool feature would be if the label always stayed the same size, no matter how far away the associated object is. So the label would seem to live in 2D screen space rather than in the 3D scene graph.
Has anyone figured out a clever way to do this in WPF?
For billboarding you need to make sure that the face normal is pointing towards the camera. The algorithm is that the dot product between the face normal and the view direction should be -1 (minus one).
I have some old C code that does this, but it's probably not particularly useful.
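In WPF terms, a sketch of that idea (this is not the original C code) could build the orientation matrix directly, assuming a label quad whose untransformed face normal points along +Z:

```csharp
using System.Windows.Media.Media3D;

static class Billboard
{
    // Sketch: orientation + translation matrix for a label quad whose
    // untransformed face normal is +Z, so the quad ends up facing the
    // camera (normal-dot-view-direction = -1, as described above).
    public static Matrix3D Compute(Point3D objectPosition, Point3D cameraPosition, Vector3D cameraUp)
    {
        Vector3D look = cameraPosition - objectPosition; // normal should point at the camera
        look.Normalize();
        Vector3D right = Vector3D.CrossProduct(cameraUp, look);
        right.Normalize();
        Vector3D up = Vector3D.CrossProduct(look, right);

        // Rows are the rotated basis vectors, then the translation
        // (WPF's row-vector convention).
        return new Matrix3D(
            right.X, right.Y, right.Z, 0,
            up.X,    up.Y,    up.Z,    0,
            look.X,  look.Y,  look.Z,  0,
            objectPosition.X, objectPosition.Y, objectPosition.Z, 1);
    }
}
```

Applying the result via a MatrixTransform3D whenever the camera or object moves keeps the label facing the viewer.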
For keeping the object the same size, you'd need to work out its screen size and then apply a transform to keep it at the constant size you desire.
However, if you want the object to appear as though it's in 2D space, why not draw it in a 2D overlay? This will solve both the billboarding and scaling problem at the same time. You work out the screen location of your label and then use the 2D drawing functions.
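A sketch of the projection step for that overlay approach, assuming a PerspectiveCamera with no additional Transform applied to it (WPF's FieldOfView is the horizontal field of view, in degrees):

```csharp
using System;
using System.Windows;
using System.Windows.Media.Media3D;

static class Overlay
{
    // Sketch: project a world-space point to viewport pixel coordinates.
    public static Point? WorldToScreen(Point3D point, PerspectiveCamera camera,
                                       double viewportWidth, double viewportHeight)
    {
        // View matrix from the camera's position, look direction, and up
        // vector (row-vector convention, as WPF uses).
        Vector3D zAxis = -camera.LookDirection;
        zAxis.Normalize();
        Vector3D xAxis = Vector3D.CrossProduct(camera.UpDirection, zAxis);
        xAxis.Normalize();
        Vector3D yAxis = Vector3D.CrossProduct(zAxis, xAxis);
        Vector3D pos = (Vector3D)camera.Position;
        Matrix3D view = new Matrix3D(
            xAxis.X, yAxis.X, zAxis.X, 0,
            xAxis.Y, yAxis.Y, zAxis.Y, 0,
            xAxis.Z, yAxis.Z, zAxis.Z, 0,
            -Vector3D.DotProduct(xAxis, pos),
            -Vector3D.DotProduct(yAxis, pos),
            -Vector3D.DotProduct(zAxis, pos), 1);

        Point3D p = view.Transform(point);
        if (p.Z >= 0) return null; // behind the camera

        // Perspective divide.
        double xScale = 1.0 / Math.Tan(camera.FieldOfView * Math.PI / 360.0);
        double yScale = xScale * (viewportWidth / viewportHeight);
        double xNdc = p.X * xScale / -p.Z;
        double yNdc = p.Y * yScale / -p.Z;

        // Normalized device coordinates (-1..1) to pixels (origin top-left).
        return new Point((xNdc + 1) * 0.5 * viewportWidth,
                         (1 - yNdc) * 0.5 * viewportHeight);
    }
}
```

You would then place the label (for example, a TextBlock on a Canvas layered over the Viewport3D) at the returned point.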
