Using WPF 3D, I define my geometry as follows:
<MeshGeometry3D Positions="0 0 0 1 1 0 1 0 0" />
My camera is defined as
<PerspectiveCamera FarPlaneDistance="20"
LookDirection="0,0,1"
UpDirection="0,1,0"
NearPlaneDistance="0"
Position="0,0,-10"
FieldOfView="45" />
However, when I look at the resulting picture, I get this:
It would appear that the X co-ordinate is backwards, i.e., the X axis is facing left. Curiously, when I try to flip the sign, i.e., when I write
<MeshGeometry3D Positions="0 0 0 -1 1 0 -1 0 0" />
the image disappears entirely. What's going on here?
I'm going to answer your second question first: 3D engines like WPF usually employ backface culling, where faces that point away from the camera aren't rendered, in order to improve performance. You can fix this by either changing the order of the points in your MeshGeometry3D or by assigning a BackMaterial to your GeometryModel3D (as opposed to the regular Material).
With respect to your first question, it's not the X that's flipped around, it's the Z. If you look at the WPF 3D Graphics Overview on the Microsoft site you'll see that Z is negative going into the screen. You've set your camera at [0,0,-10] and set the look direction to [0,0,1], so the camera is effectively "behind" the object and looking backwards. Change these values to [0,0,10] and [0,0,-1], and add a BackMaterial like I mentioned above, and all will be good in the world again.
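If it helps, here is a minimal C# sketch of that corrected setup (the same two changes can of course be made directly in your XAML). The DiffuseMaterial brushes are placeholders, since your post doesn't show which material you are using:

using System.Windows.Media;
using System.Windows.Media.Media3D;

// Camera moved to the +Z side of the mesh, looking back toward the origin.
var camera = new PerspectiveCamera
{
    Position = new Point3D(0, 0, 10),
    LookDirection = new Vector3D(0, 0, -1),
    UpDirection = new Vector3D(0, 1, 0),
    FieldOfView = 45,
    NearPlaneDistance = 0,
    FarPlaneDistance = 20
};

var model = new GeometryModel3D
{
    Geometry = new MeshGeometry3D
    {
        Positions = new Point3DCollection
        {
            new Point3D(0, 0, 0), new Point3D(1, 1, 0), new Point3D(1, 0, 0)
        }
    },
    Material = new DiffuseMaterial(Brushes.DarkRed),
    // BackMaterial makes the triangle visible even when its winding order
    // faces away from the camera.
    BackMaterial = new DiffuseMaterial(Brushes.DarkRed)
};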
Related
I get some odd behaviour in certain conditions when I have two geometries one above the other and at least the first one has an alpha value != 0.
I am using a program for the SCNMaterial; the program is in Metal and does simple stuff. The output color is of the form
float4(color.x * alpha, color.y * alpha, color.z * alpha, alpha)
The geometry doesn't really influence the behaviour, but for the sake of explanation I have an SCNPlane at the back and an SCNCylinder in front of it.
It all looks good while the camera is within roughly +/- 30 degrees of perpendicular; see below.
As you can see, there is a bluish background and then a cylinder a few centimeters in front of it; the cylinder is semi-transparent and we can clearly see the bluish background behind it.
Whenever the camera gets past +/- 30 degrees from perpendicular, we can see through the background, because it is no longer rendered where the cylinder overlaps it; see the one on the left.
Basically, when the camera is roughly perpendicular the two layers are composited together and give the correct result; once it leaves that cone of angles only the top layer is rendered.
What am I missing here to make it render as it should?
I am not using any rendering priority, as far as I am aware.
Thank you.
I am developing some shaders for WPF, and so far I have managed to get a fadeout and a swipe shader working, but for this one I have no idea where to start.
Could someone please give me a few tips on how to approach this problem?
What I am trying to achieve is the following:
Thank you
In my opinion the easiest way to build any complicated effect is to decompose the original effect into small parts and add them together. In your case the effect consists of 3 parts:
5 rings filled one after another
each ring is filled counterclockwise from the left
the filling of a ring has a rounded (circular) end at each end
With this in mind you can build a solution for each part separately and add the results together.
I assume that there will be a variable float progress; running from 0 to 1 which determines the progress of the transition.
Below are some starting points for each part:
For 1., you check the distance from the fragment's texture coordinate to the center of the screen and divide the maximum distance into 5 parts. While 0 <= progress < 0.2 the first ring is visible, while 0.2 <= progress < 0.4 the second, and so on.
For 2., you check the angle between the difference vector from the center to the fragment and the left vector, e.g. using atan2. Within each part (such as 0.0-0.2) you compare the stage's local progress to the angle to determine visibility, making the fragments appear in an angle-dependent way.
Part 3 might be the trickiest, as you will have to construct the center point of the progressing ring end and compute its distance to the fragment. If it is within the current ring thickness, it is visible.
Hopefully these quick thoughts give you a rough starting point for your effect!
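If a concrete starting point helps, here is a rough sketch of the per-fragment logic in plain C# (WPF effects are ultimately HLSL pixel shaders, but the math ports over almost directly). The ring count, the 0.5 maximum radius in texture space and the way the rounded ends would be handled are assumptions made for illustration:

using System;
using System.Windows;

static bool IsFragmentVisible(Point uv, double progress)
{
    const int ringCount = 5;                              // part 1: five rings
    Vector d = uv - new Point(0.5, 0.5);                  // offset from the center

    // Part 1: pick the ring this fragment lies in and the stage it belongs to.
    double ringWidth = 0.5 / ringCount;                   // assumes a maximum radius of ~0.5
    int ring = Math.Min((int)(d.Length / ringWidth), ringCount - 1);
    double localProgress = progress * ringCount - ring;   // 0..1 within this ring's stage
    if (localProgress <= 0) return false;                 // this ring's stage has not started yet
    if (localProgress >= 1) return true;                  // this ring is already completely filled

    // Part 2: fill the ring in an angle-dependent way, starting from the left
    // vector (-1, 0). Flip the sign of d.Y if the sweep runs the wrong way round.
    double angle = Math.Atan2(d.Y, d.X) - Math.PI;
    double sweep = (((angle % (2 * Math.PI)) + 2 * Math.PI) % (2 * Math.PI)) / (2 * Math.PI);
    return sweep <= localProgress;

    // Part 3 (omitted): construct the point at the leading edge of the sweep on the
    // ring's center radius and also accept fragments within half the ring thickness
    // of it, which rounds off both ends of the fill.
}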
Suppose I have a PathGeometry consisting of lines, like this
(the rectangle is the panel, for example a Grid):
I want to fill them to the bottom of the panel, like this:
The quick and not very good solution I see is to create an additional curve with 2 extra points at the bottom and use it for filling.
Is there some better way to solve the task?
Something like this (pseudocode):
<Path Data=... FillStyle = "ToTheBottom" Fill="Blue"/>
There is no standard way of doing this; there is no Fill like this defined in WPF.
You could put two path geometries on top of each other. The bottom one would have a stroke thickness of 0 and 2 extra points (those on the lower edge of the rectangle).
The second one, on top, would simply be the geometry you have now.
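For example, here is a small sketch of building that bottom fill-only geometry in code; BuildFillGeometry, linePoints and panelHeight are hypothetical names, and it assumes your path is a simple polyline:

using System.Collections.Generic;
using System.Linq;
using System.Windows;
using System.Windows.Media;

// Builds the fill-only geometry: the original polyline plus two extra points
// on the lower edge of the panel, closed so the shape can be filled.
static PathGeometry BuildFillGeometry(IList<Point> linePoints, double panelHeight)
{
    var figure = new PathFigure
    {
        StartPoint = linePoints[0],
        IsClosed = true,
        IsFilled = true
    };
    figure.Segments.Add(new PolyLineSegment(linePoints.Skip(1), false));
    figure.Segments.Add(new LineSegment(new Point(linePoints.Last().X, panelHeight), false));
    figure.Segments.Add(new LineSegment(new Point(linePoints[0].X, panelHeight), false));

    var geometry = new PathGeometry();
    geometry.Figures.Add(figure);
    return geometry;
}

You would then stack two Path elements: one using this geometry with StrokeThickness 0 and Fill set to Blue, and the original geometry on top with only a Stroke.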
If you need to draw a lot of these you might create a custom control that does this for you.
I'm attempting to host a WPF form on a Viewport2DVisual3D surface. I've set up the camera so that the controls fit the width of the window. The default geometry maps the entire form onto a square face, so it is necessary to do some sort of transformation to get the surface to look like a regular 2D form and not appear stretched vertically. The form looks okay overall, but the text doesn't scale well: it is blurry and blocky and looks bad in different ways from line to line. Here's what I've tried to set the aspect ratio:
A ScaleTransform3D
Setting the mesh Positions to the proper aspect ratio
Setting the TextureCoordinates to the proper aspect ratio
The first two get me the results that I want, except for the blocky/blurry text. My conclusion at this point is that the font rendering occurs before the form image is projected onto the 3D surface, and the scaling happens afterwards, so it will look bad no matter what. Does anyone know a way to work around this or to set it up right from the beginning? I don't know much about 3D graphics, just enough basic math to get the camera angles right, etc.
Have tested on Win 7 and XP.
Some of the resources I've used:
http://www.codeproject.com/KB/WPF/ContentControl3D.aspx
http://pavanpodila.spaces.live.com/blog/cns!9C9E888164859398!151.entry
A few snippets of the code:
<Viewport2DVisual3D.Geometry>
<MeshGeometry3D x:Name="FrontFaceGeometry"
Positions="-1,1,0 -1,-1,0 1,-1,0 1,1,0"
TextureCoordinates="0,0 0,1 1,1 1,0"
TriangleIndices="0 1 2 0 2 3"/>
</Viewport2DVisual3D.Geometry>
...
<Grid Width="500" x:Name="FrontFaceGrid">
Then in the Window_Loaded routine, e.g.
var aRatio = FrontFaceGrid.ActualHeight / FrontFaceGrid.ActualWidth;
FrontFaceGeometry.Positions[0] = new System.Windows.Media.Media3D.Point3D(-1, aRatio, 0);
FrontFaceGeometry.Positions[1] = new System.Windows.Media.Media3D.Point3D(-1, -aRatio, 0);
FrontFaceGeometry.Positions[2] = new System.Windows.Media.Media3D.Point3D(1, -aRatio, 0);
FrontFaceGeometry.Positions[3] = new System.Windows.Media.Media3D.Point3D(1, aRatio, 0);
To avoid the blurred text and other visual distortions, make the 3D XY aspect ratio equal to the 2D control aspect ratio. This is achieved by setting the X and Y MeshGeometry3D.Positions. For example, a 2D control sized at 500x700 could be mapped to a rectangular 3D mesh without distortion by assigning these positions:
<Viewport2DVisual3D.Geometry>
<MeshGeometry3D x:Name="FrontFaceGeometry"
Positions="-2.5,3.5,0 -2.5,-3.5,0 2.5,-3.5,0 2.5,3.5,0"
TextureCoordinates="0,0 0,1 1,1 1,0"
TriangleIndices="0 1 2 0 2 3"/>
</Viewport2DVisual3D.Geometry>
The image of the 2D control displayed within the 3D environment is always "stretched" to the mesh's dimensions.
You will be rendering the WPF form onto a texture on the square and then displaying the square using the GPU's texture engine. Depending on the mode the texture engine is using, this could cause blockiness or blurriness (since the texture engine will try to interpolate the texture by default).
Why do you want to render it using a 3D visual and not normally if it is intended to fill the screen?
I'm rendering a scene with WPF 3D by making a MeshGeometry3D and adding vertices and normals to it. Everything looks good in the rendered scene (lights are there, material looks fine), but where mesh triangles overlap, the triangle closer to the camera is not necessarily rendered on top. It looks like they're being drawn in a random order. Is there any way I can ensure that the mesh triangles are rendered in the "correct" order?
Just in case it helps, here's my XAML:
<Viewport3D>
<ModelVisual3D>
<ModelVisual3D.Content>
<Model3DGroup>
<AmbientLight Color="#FF5A5A5A" />
<GeometryModel3D x:Name="geometryModel">
<GeometryModel3D.Material>
<DiffuseMaterial Brush="DarkRed"/>
</GeometryModel3D.Material>
</GeometryModel3D>
</Model3DGroup>
</ModelVisual3D.Content>
</ModelVisual3D>
</Viewport3D>
and in code, I'm generating the mesh something like this:
var mesh = new MeshGeometry3D();
foreach (var item in objectsToTriangulate) {
var triangles = item.Triangulate();
foreach (var triangle in triangles) {
mesh.Positions.Add(triangle.Vertices[0]);
mesh.Positions.Add(triangle.Vertices[1]);
mesh.Positions.Add(triangle.Vertices[2]);
mesh.Normals.Add(triangle.Normal);
mesh.Normals.Add(triangle.Normal);
mesh.Normals.Add(triangle.Normal);
}
}
geometryModel.Geometry = mesh;
EDIT: None of the triangles intersect (except at the edges), and sometimes the triangle that appears on top is actually WAY behind the other one, so I don't think it's an ambiguity with the 3D sorting of the triangles, as Ray Burns has suggested.
The other behavior that I've noticed is that the order the triangles are rendered does not seem to change as I move around the scene. I.e. if I view a problem area from the other side, the triangles are rendered in the same, now "correct", order.
In WPF, as in most 3D systems, a simplifying assumption is made: any given triangle is assumed to lie entirely in front of or entirely behind any other given triangle. In fact this is not always the case. Specifically, if two triangles intersect along their interiors (not just at their edges) and are not viewed along their intersection line, an accurate rendering would paint each triangle in front for part of the viewport.
Because of this assumption, 3D engines sort triangles by considering the entire triangle to be a certain distance from the camera. It may choose the triangle's nearest corner, the furthest corner, the average of the corners, or use some other algorithm, but in the end it selects a representative point for computing Z order.
My guess is that your triangles are structured in such a way that the representative point used for computing Z order is causing them to display in an unexpected order.
Edit
From the information you provide in your edit, I can see that my first guess was wrong. I'll leave it in case it is useful to someone else, and give a few more ideas I've had. Hopefully someone else can chime in here. I've never had this kind of depth problem myself so these are all just educated guesses.
Here are my ideas:
It may be that your BackMaterial is not set or is transparent, causing you to see only triangles whose winding order is counterclockwise (front-facing) from your perspective. Depending on your actual mesh, the missing invisible triangles could make it appear that the ones in the rear are overlapping them, when in actual fact they are simply visible through them. This could also happen if your Material was not set or was transparent.
Something is clearly determining the order the triangles are displayed in. Could it be the order from your TriangleIndices array? If you randomly reorder the TriangleIndices array (in sets of three, of course) without making any other changes, does it change the display? If so, you've learned something about the problem and perhaps found a workaround (if it is using TriangleIndices order, you could do the sorting yourself; see the sketch after this list).
If you are using a ProjectionCamera or an OrthographicCamera, are the NearPlaneDistance and FarPlaneDistance set appropriately? A wrong NearPlaneDistance especially could make triangles closer to you invisible, making it appear that triangles further away are actually being drawn on top. Wrong distances could also affect the granularity of the depth buffer, which could give you the effect you are experiencing.
Is your model extremely large or extremely small? If you scale the model and the camera position, does it make a difference? Depth buffers are generally 32 bit integers, so it is possible in extremely tiny models to have two triangles round off to the same depth buffer value. This would also cause the effect you describe.
It may be that you are encountering a bug. You can try some changes to see if they affect the problem, for example you might try software-only rendering, different lighting types (diffuse vs specular, etc), different camera types, a graphics card from a different vendor, etc.
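If it does turn out that the draw order is what matters, here is a rough sketch of doing the sort yourself, reusing the names from your mesh-generation code: order the triangles back-to-front by the distance of a representative point (here the centroid) from the camera before filling the mesh. The camera position is a placeholder you would replace with your own:

using System.Linq;
using System.Windows.Media.Media3D;

var cameraPosition = new Point3D(0, 0, 10);   // placeholder: use your camera's actual position

// Sort back-to-front: farthest centroid first, so nearer triangles are added last.
var sortedTriangles = objectsToTriangulate
    .SelectMany(item => item.Triangulate())
    .OrderByDescending(t =>
    {
        var centroid = new Point3D(
            (t.Vertices[0].X + t.Vertices[1].X + t.Vertices[2].X) / 3,
            (t.Vertices[0].Y + t.Vertices[1].Y + t.Vertices[2].Y) / 3,
            (t.Vertices[0].Z + t.Vertices[1].Z + t.Vertices[2].Z) / 3);
        return (centroid - cameraPosition).Length;
    });

var mesh = new MeshGeometry3D();
foreach (var triangle in sortedTriangles)
{
    mesh.Positions.Add(triangle.Vertices[0]);
    mesh.Positions.Add(triangle.Vertices[1]);
    mesh.Positions.Add(triangle.Vertices[2]);
    mesh.Normals.Add(triangle.Normal);
    mesh.Normals.Add(triangle.Normal);
    mesh.Normals.Add(triangle.Normal);
}
geometryModel.Geometry = mesh;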
I hope some of these ideas help.