Get Point3D relative to Mesh in WPF

I have a pretty simple 3D scene that is roughly built up like this:
<Viewport3D>
<ModelVisual3D>
<ModelVisual3D.Content>
<Model3DGroup x:Name="Group1">
<GeometryModel3D x:Name="m1">...</GeometryModel3D>
</Model3DGroup>
<Model3DGroup x:Name="Group2">
<GeometryModel3D x:Name="m2">...</GeometryModel3D>
</Model3DGroup>
<Model3DGroup x:Name="Group3">
<GeometryModel3D x:Name="m3">...</GeometryModel3D>
</Model3DGroup>
</ModelVisual3D.Content>
</ModelVisual3D>
</Viewport3D>
And basically each of the 3 GeometryModel3D's has its own Mesh, Transforms, etc...
When I do hit testing on the Viewport3D, all I ever get is the point in 3D space that I clicked when I hit something. What I'm really after is the Point3D ON the mesh that I hit, expressed in that mesh's own coordinate space.
The goal for me is to add a new Material to the clicked mesh at the exact point that the user clicks, and I cannot do that if I only have access to the Point3D relative to the Viewport3D (because of all the transforms on the meshes, etc.).
An example might be that the m1 geometry is a rectangle-like shape, with some slight rotations, transformed up the Y axis so it's at the top of the screen.
When I click on the VERY bottom of the mesh, I would like to get the X-Y point (Z not needed here) relative to the local coordinate space of that mesh...
I hope that all makes sense, and thanks for any help you can give
Mark

I think what you're describing is actually two things:
Ray projection: given a hit-point on the 2D representation (i.e., the screen), cast a Ray from that point into your scene. This is alternatively called "Unproject", since it kind of relies on inverting the original screen transformation matrix and choosing some minimum and maximum "depth" values.
Ray/Mesh hit intersection: given a Ray and a Mesh, find where they meet. A slightly generalized variant is Ray/Sphere intersection.
Check out this guy's blog, he has all the info I think you'll need:
Daniel Lehenbauer's Blog
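That said, WPF's built-in hit testing may already hand you the mesh-local point directly, without unprojecting by hand. A minimal sketch (untested; myViewport is an assumed Viewport3D name): the RayMeshGeometry3DHitTestResult returned from VisualTreeHelper.HitTest exposes PointHit, which should be in the coordinate space of the hit mesh, plus the vertex indices and barycentric weights of the hit triangle.

```csharp
// Sketch (untested): hit testing a Viewport3D and reading the mesh-local
// hit point. "myViewport" is an assumed name for the Viewport3D.
private void OnViewportMouseDown(object sender, MouseButtonEventArgs e)
{
    Point mousePos = e.GetPosition(myViewport);

    VisualTreeHelper.HitTest(myViewport, null,
        result =>
        {
            if (result is RayMeshGeometry3DHitTestResult meshHit)
            {
                // PointHit is reported in the coordinate space of the
                // mesh that was hit, i.e. before its model transforms.
                Point3D localPoint = meshHit.PointHit;

                // The hit triangle and its barycentric weights, useful
                // for placing a material exactly at the click:
                int i1 = meshHit.VertexIndex1;
                double w1 = meshHit.VertexWeight1;

                return HitTestResultBehavior.Stop;
            }
            return HitTestResultBehavior.Continue;
        },
        new PointHitTestParameters(mousePos));
}
```

If PointHit turns out to be what you need, you can skip the manual unproject entirely and only fall back to the ray math for cases WPF's hit testing doesn't cover.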

Related

WPF draw text in ModelVisual3D

I'm creating a WPF application in which I'm drawing a point cloud. Points are made as little cubes. I'd like to label each "cube-point" with its index (e.g. 1, 2, 3, ...), so I want to add text to my 3D view.
This is my xaml part:
<ModelVisual3D x:Name="model">
<ModelVisual3D.Content>
<Model3DGroup x:Name="group">
<AmbientLight Color="DarkGray" />
<DirectionalLight Color="White" Direction="-5,-5,-7" />
</Model3DGroup>
</ModelVisual3D.Content>
</ModelVisual3D>
In code-behind I'm adding GeometryModel3D (built with mesh cube-points) to Model3DGroup (named 'group').
I tried to use this code: http://www.ericsink.com/wpf3d/4_Text.html
but this is a very inefficient way, and everything slows down when I generate and display about 7000 such TextBlocks (one per cube-point).
Do you have any idea how to add some text in a more efficient way?
Of course!
The best resources covering the majority of WPF 3D features are:
1. Charles Petzold, 3D Programming for Windows
2. Jack Xu, Practical WPF Graphics Programming
The simplest way to get WPF 3D text is to work with the companion code from the first book:
http://www.microsoft.com/mspress/companion/9780735623941/
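If the books aren't handy, one common alternative (a sketch under assumptions, not the books' exact technique) is to rasterize each number once into a frozen bitmap brush and map it onto a tiny two-triangle quad, so 7000 labels cost one cheap textured quad each instead of one glyph outline mesh each. Distinct numbers can share a cached brush.

```csharp
// Sketch: render a label string to a frozen ImageBrush once, then reuse
// it as the material on a small quad next to each cube-point.
ImageBrush MakeTextBrush(string text)
{
    var tb = new TextBlock { Text = text, Foreground = Brushes.Black };
    tb.Measure(new Size(double.PositiveInfinity, double.PositiveInfinity));
    tb.Arrange(new Rect(tb.DesiredSize));

    var bmp = new RenderTargetBitmap(
        Math.Max(1, (int)Math.Ceiling(tb.ActualWidth)),
        Math.Max(1, (int)Math.Ceiling(tb.ActualHeight)),
        96, 96, PixelFormats.Pbgra32);
    bmp.Render(tb);
    bmp.Freeze();

    var brush = new ImageBrush(bmp);
    brush.Freeze();  // frozen brushes render much faster in bulk
    return brush;
}

GeometryModel3D CreateLabel(string text, Point3D origin)
{
    // A flat quad (two triangles) placed at 'origin'; the 0.5 x 0.25
    // size is an arbitrary assumption to scale for your scene.
    var mesh = new MeshGeometry3D
    {
        Positions = new Point3DCollection
        {
            origin,
            new Point3D(origin.X + 0.5, origin.Y, origin.Z),
            new Point3D(origin.X + 0.5, origin.Y + 0.25, origin.Z),
            new Point3D(origin.X, origin.Y + 0.25, origin.Z)
        },
        TextureCoordinates = new PointCollection
        {
            new Point(0, 1), new Point(1, 1), new Point(1, 0), new Point(0, 0)
        },
        TriangleIndices = new Int32Collection { 0, 1, 2, 0, 2, 3 }
    };
    return new GeometryModel3D(mesh, new DiffuseMaterial(MakeTextBrush(text)));
}
```

The quads don't rotate to face the camera in this sketch; if you need billboarding you'd also update their transforms when the camera moves.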

How to draw shape element faster and make them scale in wpf?

I have this problem.
So I have a bunch of data that must be visualized on a canvas (say more than 5000 items). So I draw them as a bunch of vertical rectangles over a horizontal line, some thing like this:
---|--|||||---|---|||---||----|||||||--------
Now, because the canvas is small, I only draw a subset of the rectangles at each zoom level. The more I zoom in, the longer the line gets, and the more rectangles I can see.
The problem is that every time I zoom in, I have to clear the whole canvas and redraw everything at the new zoom scale. And that really sucks: the drawing is slow and the scaling isn't smooth.
So I'm wondering, is there a way I can achieve faster drawing and good zooming (like vector graphics, where you can zoom in without limit)?
Have you tried the ScaleTransform class?
<Canvas.RenderTransform>
<ScaleTransform ScaleX="2" ScaleY="2" />
</Canvas.RenderTransform>
See How to: Scale an Element too. For performance reasons:
Freeze your Freezables.
Update Rather than Replace a RenderTransform
You may be able to update a Transform rather than replacing it as the
value of a RenderTransform property. This is particularly true in
scenarios that involve animation. By updating an existing Transform,
you avoid initiating an unnecessary layout calculation.
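In this scenario that tip could look something like the following sketch (myCanvas is an assumed Canvas name; note this particular transform must not be frozen, since it gets mutated):

```csharp
// Sketch: create the ScaleTransform once, then mutate it on each zoom
// change instead of assigning a brand-new RenderTransform every time.
var zoom = new ScaleTransform(1.0, 1.0);
myCanvas.RenderTransform = zoom;

void OnZoomChanged(double factor)
{
    zoom.ScaleX = factor;  // updates the existing transform in place
    zoom.ScaleY = factor;  // (a frozen transform could not be changed)
}
```

Freeze the brushes and geometry that never change; leave only the transform you animate unfrozen.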
Have you looked at the ZoomableCanvas? I haven't used it, but it looks like it's designed to do exactly what you want.

Horizontal Flip Transform in Silverlight?

I was wondering what the best way is to flip an element horizontally in Silverlight.
What I've tried so far:
1- Scale transform: the problem with this approach is that I need to hardcode the width of the element in order to translate it after setting ScaleX=-1; this makes it hard to implement (for many elements)
<ScaleTransform CenterX="240" ScaleX="-1" />
2- Plane projection: the problem with this one is that even mouse gestures are reversed! This makes it essentially impossible to use.
<PlaneProjection RotationY="-180" />
[NOTE] By reversed mouse gesture I mean: when applying plane projection, then dragging mouse to left is interpreted as dragging to right and vice versa.
Any suggestions? Or is there any way in (1) to say CenterX="50%"?
All you need to do is set <uiElement RenderTransformOrigin="0.5,0.5"/>. Your scale transform will not need a translation after that.
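In code, the same idea might look like this sketch (element stands in for whatever UIElement you're flipping):

```csharp
// Sketch: flip any element about its own horizontal center.
// RenderTransformOrigin is relative (0..1), so no hardcoded width is
// needed and the same two lines work for elements of any size.
element.RenderTransformOrigin = new Point(0.5, 0.5);
element.RenderTransform = new ScaleTransform { ScaleX = -1, ScaleY = 1 };
```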

Viewport2DVisual3D blurry text on WPF controls

I'm attempting to host a WPF form on a Viewport2DVisual3D surface. I've set up the camera so that the controls fit the width of the window. The default geometry maps the entire form onto a square face, so some sort of transformation is necessary to make the surface look like a regular 2D form and not appear stretched vertically. The form looks okay overall, but the text doesn't scale well: it is blurry and blocky, and looks bad in different ways from line to line. Here's what I've tried to set the aspect ratio:
A ScaleTransform3D
Setting the mesh Positions to the proper aspect ratio
Setting the TextureCoordinates to the proper aspect ratio
The first two get me the results that I want, except for the blocky/blurry text. My conclusion at this point is that the font rendering is occurring before the form image is projected onto the 3d surface and then scaling occurs, so it will look bad no matter what. Does anyone know a way to work around this or to set it up right from the beginning? I don't know much about 3d graphics, just enough basic math to get the camera angles right, etc.
Have tested on Win 7 and XP.
Some of the resources I've used:
http://www.codeproject.com/KB/WPF/ContentControl3D.aspx
http://pavanpodila.spaces.live.com/blog/cns!9C9E888164859398!151.entry
A few snippets of the code:
<Viewport2DVisual3D.Geometry>
<MeshGeometry3D x:Name="FrontFaceGeometry"
Positions="-1,1,0 -1,-1,0 1,-1,0 1,1,0"
TextureCoordinates="0,0 0,1 1,1 1,0"
TriangleIndices="0 1 2 0 2 3"/>
</Viewport2DVisual3D.Geometry>
...
<Grid Width="500" x:Name="FrontFaceGrid">
Then in the Window_Loaded routine, e.g.
var aRatio = FrontFaceGrid.ActualHeight / FrontFaceGrid.ActualWidth;
FrontFaceGeometry.Positions[0] = new System.Windows.Media.Media3D.Point3D(-1, aRatio, 0);
FrontFaceGeometry.Positions[1] = new System.Windows.Media.Media3D.Point3D(-1, -aRatio, 0);
FrontFaceGeometry.Positions[2] = new System.Windows.Media.Media3D.Point3D(1, -aRatio, 0);
FrontFaceGeometry.Positions[3] = new System.Windows.Media.Media3D.Point3D(1, aRatio, 0);
To avoid the blurred text and other visual distortions, make the 3D X-Y aspect ratio equal to the 2D control's aspect ratio. This is achieved by setting the X and Y values of MeshGeometry3D.Positions. For example, a 2D control sized 500x700 could be mapped onto a rectangular 3D mesh without distortion by assigning these positions:
<Viewport2DVisual3D.Geometry>
<MeshGeometry3D x:Name="FrontFaceGeometry"
Positions="-2.5,3.5,0 -2.5,-3.5,0 2.5,-3.5,0 2.5,3.5,0"
TextureCoordinates="0,0 0,1 1,1 1,0"
TriangleIndices="0 1 2 0 2 3"/>
</Viewport2DVisual3D.Geometry>
The image of the 2D control displayed within the 3D environment is always "stretched" to the mesh's dimensions.
You will be rendering the WPF form onto a texture on the square and then displaying the square using the GPU's texture engine. Depending on the mode the texture engine is using this could cause blockiness or blurriness (since the texture engine will try to interpolate the texture by default).
Why do you want to render it using a 3D visual and not normally if it is intended to fill the screen?

WPF 3D triangle overlap problem

I'm rendering a scene with WPF 3D by making a MeshGeometry3D and adding vertices and normals to it. Everything looks good in the rendered scene (lights are there, material looks fine), but where mesh triangles overlap, the triangle closer to the camera is not necessarily rendered on top. It looks like they're being drawn in a random order. Is there any way I can ensure that the mesh triangles are rendered in the "correct" order?
Just in case it helps, here's my XAML:
<Viewport3D>
<ModelVisual3D>
<ModelVisual3D.Content>
<Model3DGroup>
<AmbientLight Color="#FF5A5A5A" />
<GeometryModel3D x:Name="geometryModel">
<GeometryModel3D.Material>
<DiffuseMaterial Brush="DarkRed"/>
</GeometryModel3D.Material>
</GeometryModel3D>
</Model3DGroup>
</ModelVisual3D.Content>
</ModelVisual3D>
</Viewport3D>
and in code, I'm generating the mesh something like this:
var mesh = new MeshGeometry3D();
foreach (var item in objectsToTriangulate) {
var triangles = item.Triangulate();
foreach (var triangle in triangles) {
mesh.Positions.Add(triangle.Vertices[0]);
mesh.Positions.Add(triangle.Vertices[1]);
mesh.Positions.Add(triangle.Vertices[2]);
mesh.Normals.Add(triangle.Normal);
mesh.Normals.Add(triangle.Normal);
mesh.Normals.Add(triangle.Normal);
}
}
geometryModel.Geometry = mesh;
EDIT: None of the triangles intersect (except at the edges), and sometimes the triangle that appears on top is actually WAY behind the other one, so I don't think it's an ambiguity with the 3D sorting of the triangles, as Ray Burns has suggested.
The other behavior that I've noticed is that the order the triangles are rendered does not seem to change as I move around the scene. I.e. if I view a problem area from the other side, the triangles are rendered in the same, now "correct", order.
In WPF, as in most 3D systems, a simplifying assumption is made that any given triangle lies entirely in front of or entirely behind any other given triangle. In fact this is not always the case. Specifically, if two triangles intersect along their interiors (not just at their edges) and are not viewed along their intersection line, an accurate rendering would paint each triangle in front for part of the viewport.
Because of this assumption, 3D engines sort triangles by considering the entire triangle to be a certain distance from the camera. An engine may choose the triangle's nearest corner, its furthest corner, the average of the corners, or some other algorithm, but in the end it selects a representative point for computing Z order.
My guess is that your triangles are structured in a way that the representative point used for computing Z order is causing them to display in an unexpected order.
Edit
From the information you provide in your edit, I can see that my first guess was wrong. I'll leave it in case it is useful to someone else, and give a few more ideas I've had. Hopefully someone else can chime in here. I've never had this kind of depth problem myself so these are all just educated guesses.
Here are my ideas:
It may be that your BackMaterial is not set or transparent, causing you to see only triangles whose winding order is clockwise from your perspective. Depending on your actual mesh, the missing invisible triangles could make it appear that the ones in the rear are overlapping them when in actual fact they are simply visible through them. This could also happen if your Material was not set or was transparent.
Something is clearly determining the order the triangles are displayed in. Could it be the order from your TriangleIndices array? If you randomly reorder the TriangleIndices array (in sets of three of course) without making any other changes, does it change the display? If so, you've learned something about the problem and perhaps found a workaround (if it is using TriangleIndices order, you could do the sorting yourself).
If you are using ProjectionCamera or OrthographicCamera, are the NearPlaneDistance and FarPlaneDistance set appropriately? A wrong NearPlaneDistance especially could make triangles closer to you invisible, making it appear that triangles further away are actually being drawn on top. Wrong distances could also affect the granularity of the depth buffer, which could give you the effect you are experiencing.
Is your model extremely large or extremely small? If you scale the model and the camera position, does it make a difference? Depth buffers are generally 32 bit integers, so it is possible in extremely tiny models to have two triangles round off to the same depth buffer value. This would also cause the effect you describe.
It may be that you are encountering a bug. You can try some changes to see if they affect the problem, for example you might try software-only rendering, different lighting types (diffuse vs specular, etc), different camera types, a graphics card from a different vendor, etc.
I hope some of these ideas help.
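If idea 2 pans out and the display order really is submission order, a manual back-to-front sort could look roughly like this sketch (cameraPos is an assumed Point3D holding your camera's position, and the triangle centroid is used as the representative point):

```csharp
// Sketch: rebuild the mesh with triangles sorted back-to-front from the
// camera, so farther triangles are added (and drawn) first.
var sorted = objectsToTriangulate
    .SelectMany(item => item.Triangulate())
    .OrderByDescending(t =>
    {
        var c = new Point3D(  // triangle centroid as representative point
            (t.Vertices[0].X + t.Vertices[1].X + t.Vertices[2].X) / 3,
            (t.Vertices[0].Y + t.Vertices[1].Y + t.Vertices[2].Y) / 3,
            (t.Vertices[0].Z + t.Vertices[1].Z + t.Vertices[2].Z) / 3);
        return (c - cameraPos).LengthSquared;
    });

var mesh = new MeshGeometry3D();
foreach (var triangle in sorted)
{
    // same per-triangle Positions/Normals code as in the question
    mesh.Positions.Add(triangle.Vertices[0]);
    mesh.Positions.Add(triangle.Vertices[1]);
    mesh.Positions.Add(triangle.Vertices[2]);
    mesh.Normals.Add(triangle.Normal);
    mesh.Normals.Add(triangle.Normal);
    mesh.Normals.Add(triangle.Normal);
}
geometryModel.Geometry = mesh;
```

Note this only works while the camera stays put; it's a workaround for a depth-sorting problem, not a fix, and would need re-sorting whenever the camera moves.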
