SnapsToDevicePixels issue - WPF

In my WPF app, I have created a few Line elements and added them to a StackPanel. The thickness for all lines is set to 0.5, but when I render them, some of the lines appear blurry. I tried setting SnapsToDevicePixels on the StackPanel, but this makes the lines completely invisible. If I increase the line thickness to 1 or greater, SnapsToDevicePixels works properly.
I am creating Line as shown below:
private void CreateLine(double y1, double y2, double x1, double x2, double width, double height)
{
    Line line = new Line { Y1 = y1, Y2 = y2, X1 = x1, X2 = x2, Width = width, Height = height };
    line.Stroke = Brushes.Black;      // stroke brush (assumed; not shown in the original)
    line.StrokeThickness = 0.5;       // the LineThickness described below
    myStackPanel.Children.Add(line);  // the StackPanel mentioned above (name assumed)
}
Here, if LineThickness is set to 0.5, the x1 and x2 values will be 0.25 (LineThickness / 2) and the width will be 0.5 (LineThickness).
Is there any minimum pixel value required to make SnapsToDevicePixels work in WPF?

I solved many of my SnapsToDevicePixels issues by using UseLayoutRounding instead:
In your case:
<StackPanel UseLayoutRounding="True">
...
</StackPanel>
I don't know if this will solve your issue, but from my experience, it's worth a try!

No, there isn't a minimum per se. The blurriness you are experiencing is due to how WPF handles drawing in general. In my experience you can't do much about it; snapping to device pixels may give some reprieve, but it can still be unpredictable.
Also, there is a difference between a physical pixel and a WPF (device-independent) unit that makes things more complicated, though techniques exist to translate between them.
A common approach to converting from physical pixels to WPF units is:
// TransformToDevice maps WPF units to physical device pixels; M11 is the horizontal scale.
// Note: PresentationSource.FromVisual returns null until the visual is connected to a window.
Matrix m = PresentationSource.FromVisual(this).CompositionTarget.TransformToDevice;
double dpiFactor = 1 / m.M11;
double lineThickness = dpiFactor * 1; // Replace '1' with the desired pixel size.
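For example, to keep a line exactly one physical pixel thick you can apply that factor to its StrokeThickness (a small sketch; the Line instance and brush are my own example, not from the question):
Line line = new Line { X1 = 0, Y1 = 10, X2 = 200, Y2 = 10, Stroke = Brushes.Black };
line.StrokeThickness = dpiFactor * 1; // one physical pixel wide, regardless of DPI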
Here is a useful article on the topic:
http://www.wpftutorial.net/DrawOnPhysicalDevicePixels.html

It is not recommended to use fractional positions.
What does half a WPF unit mean? WPF treats 1 unit as 1/96 inch, so the same value maps to a different number of physical pixels on different monitors (96 DPI vs. 300 DPI, for example).
On a typical 96 DPI monitor, 1 WPF unit equals 1 pixel, and UIElement.SnapsToDevicePixels works well: it tries to snap a 0.5-unit line to the monitor's pixel grid, and the result is either enlargement to one pixel or shrinkage to zero pixels (the line disappears).
If for some reason you need exact 1-pixel (not 1-unit) positioning, use a GuidelineSet.
With .NET 4 or higher it is better to use layout rounding (UseLayoutRounding). It rounds offsets at the layout/measure level, while SnapsToDevicePixels works at the render level. The downside of layout rounding is that it behaves poorly with dynamic movement (for example, animation).
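For reference, both properties can also be set from code (a minimal sketch; "panel" is assumed to be the StackPanel that holds the lines):
panel.UseLayoutRounding = true;   // rounds positions and sizes during layout (.NET 4+)
panel.SnapsToDevicePixels = true; // snaps rendered edges to device pixels at render time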

Related

Pseudo 3D walls (top-down raycasting, sort of)

See, I'm not posting much code because what I need is logic, math, and algorithms. Well:
I'm trying to achieve a 3D-looking visual for a top-down tile map using layers and parallax scrolling. The thing is, at the moment I simply set a different "speed" for each layer, but that only works for some very specific camera positions; it also makes the blocks have virtually infinite height (they keep "increasing in height" until they are out of the camera's FOV).
Is there a better way (there should be) to achieve the effect? Oh, and I'm using C with Allegro 5.
I thought about limiting each layer's offset, but I have no idea how.
My current method:
That's my current code for the layer "speed" (it repeats for up, down, left and right, changing coordinates):
if (key[ALLEGRO_KEY_UP]) {
    camera_y[0] -= 1; /* farthest (slowest) layer */
    camera_y[1] -= 2;
    camera_y[2] -= 3; /* nearest (fastest) layer */
}
Then I run a loop to draw the map with the tiles relative to the current layer's offset.
By the way, that's the desired effect (example with 3 layers):
For parallax scrolling, layers that scroll faster must be correspondingly larger:
You can use unscaled tiles stacked on top of each other, offset by a fixed fraction of the distance from the center of the tile to the center of the viewport,
but the tops will not be continuous (unless the bottoms overlap). If all layer tiles are hand-drawn or rendered images, this is not an issue.
If the walls are box-shaped, and you have images of the top and each of the four sides, you can draw them in almost 3D,
where at most two sides of each box wall are drawn, skewed.
In all cases:
If the center of the viewport is at world coordinates (xc, yc), point (x, y, z) maps to coordinates (x', y') relative to the center of the viewport:
x' = (x - xc) × (z + z0) / z0
y' = (y - yc) × (z + z0) / z0
where z0 is a constant that determines the "size" of the parallax or depth effect.
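A minimal sketch of that mapping (written in C# for brevity; the function and parameter names are mine, and translating it to C is direct):
// Projects a world-space point (x, y, z) to coordinates relative to the
// center of the viewport, given the viewport center (xc, yc) and depth constant z0.
static (double X, double Y) ProjectParallax(double x, double y, double z,
                                            double xc, double yc, double z0)
{
    double scale = (z + z0) / z0; // points higher up (larger z) spread out more
    return ((x - xc) * scale, (y - yc) * scale);
}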
I think you're on the right lines, but the "infinite height" issue can be solved by simply giving the camera an "altitude" property and adjusting the "speed" of each layer by calculating:
layer.speed = (layer.altitude / camera.altitude) * ZOOM_FACTOR; //gives a float value.
Can't really suggest anything more until you show us some of your math code.

How can I create beveled corners on a border in WPF?

I'm trying to do simple drawing in a subclass of Decorator, similar to what they're doing here...
How can I draw a border with squared corners in wpf?
...except with a single-pixel border thickness instead of the two pixels they're using there. However, no matter what I do, WPF insists on doing its 'smoothing' (e.g. instead of rendering a single-pixel line, it renders a two-pixel line with each 'half' at about 50% opacity). In other words, it's anti-aliasing the drawing. I do not want anti-aliased drawing. I want to draw a line from 0,0 to 10,0 and get a single-pixel-wide line that's exactly 10 pixels long, without smoothing.
Now I know WPF does that, but I thought that's specifically why they introduced SnapsToDevicePixels and UseLayoutRounding, both of which I've set to 'True' in the XAML. I'm also making sure that the numbers I'm using are actual integers and not fractional numbers, but still I'm not getting the nice, crisp, one-pixel-wide lines I'm hoping for.
Help!!!
Mark
Aaaaah... got it! WPF considers a line from 0,0 to 10,0 to literally be on that logical line, not on the row of pixels as it is in GDI. To better explain, think of WPF coordinates as the lines drawn on a piece of graph paper, whereas the pixels are the squares those lines make up (assuming 96 DPI, that is; you'd need to adjust accordingly if it's different).
So, to get the drawing to refer to the pixel locations, we need to shift the drawing from the grid lines themselves to the centers of the pixels (the squares on the graph paper), which means offsetting all drawing by 0.5, 0.5 (again, assuming 96 DPI).
So at 96 DPI, simply adding this in the OnRender method worked like a charm:
drawingContext.PushTransform(new TranslateTransform(.5, .5));
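For completeness, here's roughly how that looks in context (a minimal sketch; the example line and the Pop call are my additions):
protected override void OnRender(DrawingContext drawingContext)
{
    base.OnRender(drawingContext);

    // Shift by half a device-independent unit so integer coordinates land on
    // pixel centers (assumes 96 DPI).
    drawingContext.PushTransform(new TranslateTransform(0.5, 0.5));

    Pen pen = new Pen(Brushes.Black, 1.0);
    drawingContext.DrawLine(pen, new Point(0, 0), new Point(10, 0));

    drawingContext.Pop(); // undo the translation when done
}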
Hope this helps others!
M
Have a look at this article: Draw lines exactly on physical device pixels
Update:
Some valuable quotes from the link:
The reason why the lines appear blurry is that our points are the center points of the lines, not the edges. With a pen width of 1, the edges are drawn exactly between two pixels.
A first approach is to round each point to an integer value (snap to a logical pixel) and give it an offset of half the pen width. This ensures that the edges of the line align with logical pixels.
Fortunately, the developers of the milcore (MIL stands for media integration layer, WPF's rendering engine) give us a way to guide the rendering engine to align a logical coordinate exactly on physical device pixels. To achieve this, we need to create a GuidelineSet:
protected override void OnRender(DrawingContext drawingContext)
{
    Pen pen = new Pen(Brushes.Black, 1);
    Rect rect = new Rect(20, 20, 50, 60);
    double halfPenWidth = pen.Thickness / 2;

    // Create a guidelines set
    GuidelineSet guidelines = new GuidelineSet();
    guidelines.GuidelinesX.Add(rect.Left + halfPenWidth);
    guidelines.GuidelinesX.Add(rect.Right + halfPenWidth);
    guidelines.GuidelinesY.Add(rect.Top + halfPenWidth);
    guidelines.GuidelinesY.Add(rect.Bottom + halfPenWidth);

    drawingContext.PushGuidelineSet(guidelines);
    drawingContext.DrawRectangle(null, pen, rect);
    drawingContext.Pop();
}

simple plot algorithm with autoscale

I need to implement a simple plotting component in C# (WPF, to be more precise). What I have is a collection of data samples containing a time (the X axis) and a value (both of type double).
I have a drawing canvas of a fixed size (Width x Height) and a DrawLine method that can draw on it. The problem I am facing is how to draw the plot so that it is autoscaled. In other words, how do I map the samples I have to actual pixels on my Width x Height canvas?
One hacky method that may work is to use a Viewbox control. This control will scale the rendering of its content to fit the size available. However, this might lead to your lines and labels looking too thick or thin.
The more sensible method that you're probably after, though, is to work out what scale to draw your graph at in the first place. To do that, work out the range of values on a given axis (for example, your Y-axis values might range from 0 to 100) and the available drawing space on that axis (for example, your canvas might have 400 pixels of height). Your Y-axis "scale factor" when drawing the graph is then <available space> / <data range> - or, in this case, 4.
Your canvas's coordinates start from zero at the top-left, so to calculate the Y position for a given data point you would do this:
double availableSpace = 400.0; // the size of your canvas
double dataRange = 100.0; // the range of your values
double scaleFactor = availableSpace / dataRange;
double currentValue = 42.0; // the value we're trying to plot
double plottableY = availableSpace - (currentValue * scaleFactor); // the position on screen to draw at
The value of plottableY is the y-coordinate that you would use to draw this point on the canvas.
(Obviously this code would need to be spread out across your drawing method so you're not recalculating all of the values for each point, but it demonstrates the math).
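If your values don't start at zero, the same idea generalizes by working with the minimum and maximum of the data on both axes. A sketch of that mapping (method and parameter names are mine; Point is System.Windows.Point):
// Maps a sample (t, v) onto a canvasWidth x canvasHeight surface,
// given the data ranges of both axes.
static Point ToCanvas(double t, double v,
                      double tMin, double tMax,
                      double vMin, double vMax,
                      double canvasWidth, double canvasHeight)
{
    double xScale = canvasWidth / (tMax - tMin);
    double yScale = canvasHeight / (vMax - vMin);

    double x = (t - tMin) * xScale;
    double y = canvasHeight - (v - vMin) * yScale; // flip: canvas Y grows downward
    return new Point(x, y);
}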

OpenGL: How do I avoid rounding errors when specifying UV co-ordinates

I'm writing a 2D game using OpenGL. When I want to blit part of a texture as a sprite I use glTexCoord2f(u, v) to specify the UV co-ordinates, with u and v calculated like this:
GLfloat u = (GLfloat)xpos_in_texture/(GLfloat)width_of_texture;
GLfloat v = (GLfloat)ypos_in_texture/(GLfloat)height_of_texture;
This works perfectly most of the time, except when I use glScalef to zoom the game in or out. Then floating-point rounding errors cause some pixels to be drawn one to the right of or one below the intended rectangle within the texture.
What can be done about this? At the moment I'm subtracting an 'epsilon' value from the right and bottom edges of the rectangle, and it seems to work but this seems like a horrible kludge. Are there any better solutions?
Your issue is most likely not coming from rounding errors, but from a misunderstanding of how OpenGL maps texels to pixels. If you notice off-by-one errors, it's probably because your UVs, your vertex positions, or your projection matrix/viewport pair are not aligned to where they ought to be.
To simplify, I'll just talk about 1D and assume you use a projection and a viewport that map X,Y coordinates to the equivalent pixel locations (i.e. glOrtho(0, width, 0, height, zmin, zmax) and glViewport(0, 0, width, height)).
Say you want to draw 5 texels of your 64-wide texture (texels 10 through 14) onto the 10 pixels (a scale of 2) of your screen starting at pixel 20.
To get there, draw the triangle with X coordinates 20 and 30, and U (of the UV pair) of 10/64 and 15/64. The rasterization of OpenGL will generate 10 pixels to shade, with X coordinates 20.5, 21.5, ... 29.5. Note that the positions are not full integers. OpenGL rasterizes in the middle of the pixel.
Likewise, it will generate U coordinates of 10.25/64, 10.75/64, 11.25/64, 11.75/64 ... 14.25/64, 14.75/64. Note again that texel coordinates, brought back to texel positions in the texture space, are not full integers. OpenGL samples from the middle of texel locations, so this is fine.
How the samplers use these UVs to generate texel values depend on filtering modes, but be it nearest or linear, the pixels should be contained solely inside the texels of interest (0.25 with a size of 0.5 should only use color from 0 to 0.5, which is all inside the first texel).
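As a quick sanity check of those numbers (plain arithmetic written in C#; no OpenGL calls involved):
// 10 pixels from x = 20 to 30, with U interpolated linearly from 10/64 to 15/64.
for (int i = 0; i < 10; i++)
{
    double pixelCenter = 20 + i + 0.5;      // 20.5, 21.5, ..., 29.5
    double t = (pixelCenter - 20.0) / 10.0; // interpolation factor along the edge
    double u = (10.0 + 5.0 * t) / 64.0;     // 10.25/64, 10.75/64, ..., 14.75/64
    System.Console.WriteLine($"x = {pixelCenter}, u = {u * 64}/64");
}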
In general, if you follow the general principles I laid out, you should never see artifacts.
Use Ortho and Viewport of exactly your frame buffer size
Use positions of X, X+width exactly
Use UVs that correspond to exactly the texels you want (if you want the 10 texels starting from texel 0 of the 64-wide texture, use U = 0/64 to U = 10/64).
If you ever have a -1 somewhere in your math, it's likely not correct (for position or UVs).
To get back to your example, it's unclear how you link the UVs you compute to positions (since you don't show the position computation).
It's also unclear how you got xpos_in_texture (you should explain how you computed it for the corners of your sprite). My guess is that you computed it incorrectly.
A bit late, but for posterity: I was having the same problem, with pixels from adjacent regions of a texture atlas bleeding into sprites/tiles when scaling or zooming the view. I had my glOrtho, glViewport, etc. dimensions all set correctly, and then I realized the problem was that I was scaling the view before translating the camera. Even though I was snapping to integer pixels pre-zoom, after the zoom the view would align to a fraction of a pixel and introduce the texel problem.
So if your code looks something like this, where camera.zoom is a non-integer (i.e. 0.75):
glScalef(camera.zoom, camera.zoom, 1.0f);
glTranslatef(camera.x, camera.y, 0.0f);
You'll want to make sure the result of the translation after scaling aligns to whole pixels on the screen, so you can do something like:
glScalef(camera.zoom, camera.zoom, 1.0f);
glTranslatef(
    floor(camera.x * camera.zoom) / camera.zoom,
    floor(camera.y * camera.zoom) / camera.zoom,
    0.0f);
Do the division as a double, round the result down yourself to the desired level of precision, then cast it to GLfloat.
Your xpos/ypos must be based on 0 to (width or height) - 1 and then:
GLfloat u = (GLfloat)xpos_in_texture/(GLfloat)(width_of_texture - 1);
GLfloat v = (GLfloat)ypos_in_texture/(GLfloat)(height_of_texture - 1);

How can I prevent deformation when rotating about the line-of-sight in OpenGL?

I've drawn an ellipse in the XZ plane, and set my viewpoint slightly up on the Y axis and back on the Z, looking at the center of the ellipse from a 45-degree angle, using gluPerspective() to set my viewing frustum.
Unrotated, the major axis of the ellipse spans the width of my viewport. When I rotate 90 degrees about my line of sight, the major axis of the ellipse spans the height of my viewport instead, deforming the ellipse (in this case, making it appear less eccentric).
What do I need to do to prevent this deformation (or at least account for it), so rotation about the line-of-sight preserves the perceived major axis of the ellipse (in this case, causing it to go beyond the viewport)?
It looks like you're using 1.0 as the aspect when you call gluPerspective(). You should use width/height. For example, if your viewport is 640x480, you would use 1.33333 as the aspect argument.
According to the GLU documentation:
void gluPerspective(GLdouble fovy,
                    GLdouble aspect,
                    GLdouble zNear,
                    GLdouble zFar);
Aspect should be a function of your window width and height: specifically, width divided by height (but watch out for division by zero).
Perhaps you are using 1 as the aspect, which is only accurate if your window is square.
It looks like the aspect parameter of your gluPerspective call needs tweaking. See The Man Page. If your window were physically square, the aspect ratio would be 1 and your problem would go away. However, your window is rectangular, so the viewing frustum needs to be non-square.
Set the aspect ratio to window_width / window_height, and your ellipse should look correct. Note that you'll need to update this whenever the window resizes; if you're using GLUT, set a glutReshapeFunc and recalculate the projection matrix in there.
