WPF shape geometry scaling without affecting stroke

I am wondering if someone has managed to override the default behaviour of WPF shape rendering when applying a ScaleTransform to a shape. The default behaviour transforms the entire shape drawing, including strokes, but I would like to scale only the geometry.
The difficulty is that my shapes reside in a visual hierarchy with render transforms applied at different levels (sort of like a 2D scene graph, but as a WPF visual tree), and this I cannot change(!). I have read in different places that it might be possible to create a custom shape that cancels out the render transform and puts it on the geometry instead. At the moment I have something like:
public sealed class MyPath : Shape
{
    // This class has a Data property of type Geometry, just like the normal Path class.
    protected override Geometry DefiningGeometry
    {
        get
        {
            Geometry data = Data;
            if (data == null)
            {
                data = Geometry.Empty;
            }
            return data;
        }
    }

    protected override void OnRender(DrawingContext drawingContext)
    {
        Transform tr = RenderedGeometry.Transform;
        Geometry geomToDraw = RenderedGeometry.Clone();
        geomToDraw.Transform = new MatrixTransform(tr.Value * tr.Value);

        Matrix trInv = tr.Value;
        trInv.Invert();

        drawingContext.PushTransform(new MatrixTransform(trInv));
        drawingContext.DrawGeometry(Brushes.Transparent, new Pen() { Brush = Brushes.Black, Thickness = 1 }, geomToDraw);
        drawingContext.Pop(); // balance the PushTransform above
    }
}
As is clearly evident, I am quite new to this and the above code is probably completely messed up. I was trying to transfer the matrix to the geometry without changing the final resulting geometry transform, hence the tr.Value * tr.Value and trInv. But it does not work as I want it to. I know this transfer-transform technique works in theory because I tried it out with constant transforms: setting Geometry.Transform to scale x by 4 and pushing a transform to scale x by 0.25 worked fine, although the resulting shape drawing did not seem to apply Stretch=Fill, which I rely upon. So there must be something I am missing with the render transforms.
The test scenario that is not working is this:
I apply a render ScaleTransform with ScaleX=4 and ScaleY=1 in XAML.
The built-in Path class scales the entire drawing, so strokes are 4 times wider in the x direction than in the y direction.
I want MyPath to scale the geometry only, not the strokes. <- THIS IS NOT WORKING!
What happens is: the geometry gets scaled correctly, but the strokes get scaled by 4 in the x direction and by slightly less than 4 in the y direction. What is wrong? I have a feeling that I should not be working solely with RenderedGeometry.Transform, but what should I use instead? I need to incorporate both the render transform and Stretch=Fill on the shape. My render transform hierarchy may contain a mix of scales, rotations and translations, so the solution must be general enough to handle any transform, not just axis-aligned scaling.
Note: I know it is bad to create the geometry in OnRender but I want to get it working before spending time cleaning it up.
By the way, I read this post:
Invariant stroke thickness of Path regardless of the scale
The problem, as stated before, is that I do have to take render transforms into consideration, and I am not sure how to adapt that solution to work with them.

If I understand the question correctly, you want to cancel out the effect of the render transformation on the pen but not on the geometry.
This can be accomplished by getting the transformation of the control relative to the ancestor from which you want to cancel the transform, and using its inverse to cancel out the effect on the pen. (For example, if you have the hierarchy P1/P2/P3/UrShape, and P1, P2 and P3 all have transforms on them that should not affect your pen, you need the transform of UrShape relative to P1, which TransformToAncestor gives you.) Then you reapply that transform to just your geometry.
var brush = new SolidColorBrush(Colors.Red);
var pen = new Pen(brush, 5);

// Instead of the direct parent you could walk up the visual tree to the root
// from which you want to cancel the transform.
var rootTransform = (MatrixTransform)this.TransformToAncestor((Visual)this.Parent);
var inverseRootTransform = (MatrixTransform)rootTransform.Inverse;

// We cancel out the transformation coming from the parent.
drawingContext.PushTransform(inverseRootTransform);

var renderGeometry = this.Geometry.Clone();

// We apply the parent transform to the geometry only, and group it with the original
// transform that was present on the geometry, to honor any transformation set on it.
renderGeometry.Transform = new TransformGroup()
{
    Children =
    {
        rootTransform,
        this.Geometry.Transform
    }
};

// We draw the shape; the pen keeps the correct size since we canceled out the
// transform from the parent, but the geometry now carries that transformation directly.
drawingContext.DrawGeometry(brush, pen, renderGeometry);
drawingContext.Pop();
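For completeness, here is roughly how that snippet could sit inside a Shape subclass similar to your MyPath. This is only a sketch: the class name, the exposed Geometry property and the hard-coded pen are placeholders, the Geometry.Transform null guard is my addition, and the ancestor is simply the direct parent.

public sealed class NonScalingStrokePath : Shape // hypothetical name
{
    // Geometry to draw; plays the role of this.Geometry in the snippet above.
    public Geometry Geometry { get; set; } = Geometry.Empty;

    protected override Geometry DefiningGeometry => Geometry;

    protected override void OnRender(DrawingContext drawingContext)
    {
        var pen = new Pen(Brushes.Black, 1);

        // Transform of this shape relative to the ancestor whose transforms
        // should not affect the stroke (here simply the direct parent).
        var rootTransform = (MatrixTransform)TransformToAncestor((Visual)Parent);
        var inverseRootTransform = (MatrixTransform)rootTransform.Inverse;

        var renderGeometry = Geometry.Clone();
        var originalTransform = Geometry.Transform ?? Transform.Identity; // guard in case no transform was set

        // Move the ancestor transform onto the geometry itself.
        renderGeometry.Transform = new TransformGroup
        {
            Children = { rootTransform, originalTransform }
        };

        // Cancel the ancestor transform for the drawing, so the pen is not scaled.
        drawingContext.PushTransform(inverseRootTransform);
        drawingContext.DrawGeometry(Brushes.Transparent, pen, renderGeometry);
        drawingContext.Pop();
    }
}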

Related

Cropping an arbitrary WPF geometry

The background to my problem is that I have a bunch of geometries (a huge amount; think of a map over a larger area) split across multiple WPF Geometry instances (originally they were PathGeometry, but to reduce memory usage I pre-process them and create StreamGeometry instances during load). Now what I want to do is to generate tiles from these geometries.
Basically I would like to take a larger geometry object and "cut out" a rectangle of it (my tile) so I get several smaller geometries. Something like the image below:
Notice that I want the result to be a new geometry, not a rendering. I know I can achieve the visual result by applying a clip to a UIElement or by pushing a clip to a DrawingVisual.
I've tried using Geometry.Combine with one of the arguments being the clip rectangle, but I can't get it to do what I want (I typically only get the clip rect back, or an empty geometry, depending on which combine mode I use).
Alternatively, if this cannot be done using WPF, is there any other (third party is OK) general purpose geometry API for .NET that can do these kinds of operations? Or maybe this can be implemented using other parts of the WPF geometry API?
The code below produces the bottom-right rectangle, as in your "smaller tiles" visualisation:
var geometry = MyOriginalPath.Data.Clone();
var bounds = geometry.Bounds;
var rectangleGeometry = new RectangleGeometry(bounds);

var halfWidth = bounds.Width * 0.5;
var halfHeight = bounds.Height * 0.5;
var bottomQuarter = new RectangleGeometry(
    new Rect(bounds.X + halfWidth, bounds.Y + halfHeight,
             halfWidth, halfHeight));

// Everything in the bounds except the bottom-right quarter...
var combinedGeometry = new CombinedGeometry(GeometryCombineMode.Exclude,
    rectangleGeometry, bottomQuarter);
// ...excluded from the original geometry leaves only the bottom-right quarter.
combinedGeometry = new CombinedGeometry(GeometryCombineMode.Exclude,
    geometry, combinedGeometry);

MyBottomQuarterPath.Data = combinedGeometry;
Regards Dave
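If you need a whole grid of tiles rather than a single quarter, the same idea generalises. A rough sketch (tileSize is whatever tile dimensions you want; Geometry.Combine with Intersect should be equivalent to the double Exclude above):

// Sketch: cut a source geometry into rectangular tiles of a chosen size.
private IEnumerable<Geometry> CreateTiles(Geometry source, double tileSize)
{
    Rect bounds = source.Bounds;
    for (double y = bounds.Top; y < bounds.Bottom; y += tileSize)
    {
        for (double x = bounds.Left; x < bounds.Right; x += tileSize)
        {
            var tileRect = new RectangleGeometry(new Rect(x, y, tileSize, tileSize));

            // Intersect keeps only the part of the source inside the tile rectangle.
            Geometry tile = Geometry.Combine(source, tileRect, GeometryCombineMode.Intersect, null);

            if (!tile.IsEmpty())
                yield return tile;
        }
    }
}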

WPF: Get 1:1 pixel rendering in Image whose size is modified with a LayoutTransform

Let me start by saying I have searched extensively on this and have found partial answers, but nothing that works all the way.
I need to display bitmap images in my WPF application that are not scaled. I want to map 1 pixel of the bitmap to 1 pixel of the display. I do intend to support multiple resolutions by shipping multiple versions of my bitmaps. But I want to know that, when a particular bitmap has been chosen, it will be rendered EXACTLY as it has been designed.
My strategy for overcoming the automatic scaling that happens in WPF is to look at what is being applied automatically (by virtue of the OS DPI setting), and then apply a LayoutTransform that is the inverse, to the outermost container of my window.
This ensures that, no matter what the user's DPI settings are, the app renders the contents of the window at a 1:1 ratio of WPF pixels to hardware pixels. So far, so good.
That code looks like this. (Presume this is called with an argument of 1.0).
private void SetScale(double factor)
{
    // First note the current window transform factor.
    // This is the factor being applied to the entire window due to OS DPI settings.
    Matrix m = PresentationSource.FromVisual(this).CompositionTarget.TransformToDevice;
    double currentWindowTransformFactorX = m.M11;
    double currentWindowTransformFactorY = m.M22;

    // Now calculate the inverse.
    double currentWindowTransformInverseX = (1 / m.M11);
    double currentWindowTransformInverseY = (1 / m.M22);

    // This factor will put us "back to 1.0" in terms of a device-independent-pixel
    // to physical-pixel mapping. On top of this, we can apply our caller-specified factor.
    double transformFactorX = currentWindowTransformInverseX * factor;
    double transformFactorY = currentWindowTransformInverseY * factor;

    // Apply the transform to the registered target container.
    ScaleTransform dpiTransform = new ScaleTransform(transformFactorX, transformFactorY);
    if (dpiTransform.CanFreeze)
        dpiTransform.Freeze();
    this.pnlOutermost.LayoutTransform = dpiTransform;
}
Up to here, everything works great. No matter what I set my Windows DPI to, the contents of that main container are always exactly the same size, and the bitmaps are rendered precisely.
Now comes the fun part. I want to support different screen resolutions by providing resolution-specific artwork, and scaling my entire UI as appropriate.
It turns out that LayoutTransform works really well for this. So if I call the above method with 1.25 or 1.5 or whatever, the entire UI scales and everything looks perfect...except my images, which are back to looking stretched and crappy, even when I change the source to be an image that is exactly the right size for the new, scaled dimensions.
For example, suppose I have an image that is 100x100 in the XAML. My artwork comes in three flavors: 100x100, 125x125, and 150x150. When I scale the container that houses the image, I also change the source of that image to the appropriate one.
Interestingly, if the image object is sitting at a position that, when scaled by the factor, yields integral results, then the scaled image looks fine. That is to say, suppose the image has the following properties:
Canvas.Left = 12
Canvas.Top = 100
When we apply a factor of 1.25, this yields 15 and 125, and the image looks great. But if the image is moved by one pixel, to say:
Canvas.Left = 13
Canvas.Top = 100
Now when we apply a factor of 1.25, we get 15.25 and 125, and the result looks crappy.
Clearly, this looks like some kind of rounding issue or something like that. So I've tried:
UseLayoutRounding="True"
SnapsToDevicePixels="True"
RenderOptions.EdgeMode="Aliased"
RenderOptions.BitmapScalingMode="NearestNeighbor"
I've tried these in the window, in the container being scaled, and in the image object. And nothing works. And the BitmapScalingMode doesn't really make sense anyway, because the image should not be being scaled at all.
Eternal thanks to anyone who can shed some light on this.
I had the exact same problem, so it looks like this has not been fixed in the framework as of 2019.
I managed to solve the issue using a three-step approach.
1. Enable layout rounding on my top-level UI element:
<UserControl ... UseLayoutRounding="True">
2. Apply the inverse LayoutTransform to my Image objects (the LayoutTransform was applied to the parent ListBox):
<Image ... LayoutTransform="{Binding Path=LayoutTransform.Inverse,
Mode=OneTime,
RelativeSource={RelativeSource FindAncestor,
AncestorType={x:Type ListBox}}}">
3. Subclass Image and add a custom override for OnRender:
internal class CustomImage : Image {
    private PresentationSource presentationSource;

    public CustomImage() => Loaded += OnLoaded;

    protected override void OnRender(DrawingContext dc) {
        if (this.Source == null) {
            return;
        }
        var offset = GetOffset();
        dc.DrawImage(this.Source, new Rect(offset, this.RenderSize));
    }

    private Point GetOffset() {
        var offset = new Point(0, 0);
        var root = this.presentationSource?.RootVisual;
        var compositionTarget = this.presentationSource?.CompositionTarget;
        if (root == null || compositionTarget == null) {
            return offset;
        }
        // Transform origin to device (pixel) coordinates.
        offset = TransformToAncestor(root).Transform(offset);
        offset = compositionTarget.TransformToDevice.Transform(offset);
        // Round to nearest integer value.
        offset.X = Math.Round(offset.X);
        offset.Y = Math.Round(offset.Y);
        // Transform back to the local coordinate system.
        offset = compositionTarget.TransformFromDevice.Transform(offset);
        offset = root.TransformToDescendant(this).Transform(offset);
        return offset;
    }

    private void OnLoaded(object sender, RoutedEventArgs e) {
        this.presentationSource = PresentationSource.FromVisual(this);
        InvalidateVisual();
    }
}
The code from step 3 is based on this blog post.
By using the CustomImage class in my XAML instead of Image and binding to a BitmapSource that will return a properly sized image based on the current scale factor, I managed to achieve great looking images without any unwanted scaling.
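For illustration, the source selection could be something like this. This is a hypothetical helper, not part of the original code; the file names and the 100/125/150 variants simply follow the example in the question.

// Hypothetical helper: pick the bitmap variant that matches the current UI scale
// factor so that CustomImage can draw it without any resampling.
private static BitmapSource SelectBitmapForScale(double scaleFactor)
{
    string suffix = scaleFactor >= 1.5 ? "150"
                  : scaleFactor >= 1.25 ? "125"
                  : "100";
    return new BitmapImage(new Uri($"Images/icon_{suffix}.png", UriKind.Relative));
}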
Note that you might need to call InvalidateVisual on your images when they need to be re-rendered.

Can you render a StreamGeometry object in multiple places during an OnRender override?

We have a StreamGeometry object which we would like to render in about 400 different locations during the OnRender call. The problem is, of course, that a geometry object uses absolute coordinates.
While we could of course apply transforms before the render call, that means we'd in essence be creating 400 transforms as well, which just seems like overkill.
We just want to say "render this in that location", like this (note: DrawGeometryAtPoint is fictitious):
protected override void OnRender(System.Windows.Media.DrawingContext dc)
{
    base.OnRender(dc);
    var myGeometry = new StreamGeometry();
    // Code to init the geometry goes here

    // Render the same geometry, but at four different locations
    dc.DrawGeometryAtPoint(Brush1, Pen1, myGeometry, Origin1);
    dc.DrawGeometryAtPoint(Brush2, Pen2, myGeometry, Origin2);
    dc.DrawGeometryAtPoint(Brush3, Pen3, myGeometry, Origin3);
    dc.DrawGeometryAtPoint(Brush4, Pen4, myGeometry, Origin4);
}
So can it be done?
This is essentially the same question as your previous one.
Either you push a separate transform before each rendering.
var transform = new TranslateTransform(origin.X, origin.Y);
transform.Freeze();

dc.PushTransform(transform);
dc.DrawGeometry(brush, pen, geometry);
dc.Pop();
This is essentially the same as putting a GeometryDrawing in a DrawingGroup and setting the DrawingGroup.Transform property.
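That variant would look roughly like this (a sketch using the same placeholder names as above):

// Sketch: the same translation expressed via DrawingGroup.Transform instead of a pushed transform.
var drawing = new GeometryDrawing(brush, pen, geometry);
var drawingGroup = new DrawingGroup
{
    Transform = new TranslateTransform(origin.X, origin.Y)
};
drawingGroup.Children.Add(drawing);
dc.DrawDrawing(drawingGroup);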
Or you put the StreamGeometry into a GeometryGroup and set the Transform there.
var transform = new TranslateTransform(origin.X, origin.Y);
transform.Freeze();

var group = new GeometryGroup { Transform = transform };
group.Children.Add(geometry);
dc.DrawGeometry(brush, pen, group);
As I told you in my comment on the other question, there is no way to get around having a separate Transform object for every rendering of the same Geometry at a different location.
EDIT: You should consider a different design. Instead of running a complete OnRender pass each time your objects move a bit, you should render once and afterwards only change the Transform objects (which must then of course not be frozen). Therefore you would not override OnRender in some control, but instead provide a special control that hosts a DrawingVisual.
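A rough sketch of that design (the names are made up; the essential point is that moving an item only updates its non-frozen TranslateTransform):

public class GeometryVisualHost : FrameworkElement
{
    private readonly DrawingVisual visual = new DrawingVisual();
    private readonly TranslateTransform offset = new TranslateTransform();

    public GeometryVisualHost(Geometry geometry, Brush brush, Pen pen)
    {
        // Render the geometry once into the DrawingVisual.
        using (DrawingContext dc = visual.RenderOpen())
        {
            dc.DrawGeometry(brush, pen, geometry);
        }
        // The transform is deliberately not frozen so it can be changed later.
        visual.Transform = offset;
        AddVisualChild(visual);
    }

    // Moving the hosted drawing only updates the transform; nothing is redrawn.
    public void MoveTo(double x, double y)
    {
        offset.X = x;
        offset.Y = y;
    }

    protected override int VisualChildrenCount => 1;
    protected override Visual GetVisualChild(int index) => visual;
}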

How to smooth WPF animation?

I am struggling to smooth a WPF animation.
My animation code is as follows:
private void AnimateX(FrameworkElement element, double XMoveStart, double XMoveEnd, int secondX)
{
    SineEase eEase = new SineEase();
    eEase.EasingMode = EasingMode.EaseInOut;

    Storyboard sb = new Storyboard();
    DoubleAnimation daX = new DoubleAnimation(XMoveStart, XMoveEnd, new Duration(new TimeSpan(0, 0, 0, secondX, 0)));
    daX.EasingFunction = eEase;
    Storyboard.SetTargetProperty(daX, new PropertyPath("(Canvas.Left)"));
    sb.Children.Add(daX);

    element.BeginStoryboard(sb);
}
The above code is a method to move an object horizontally with a sine ease. When only one object is moving, it is OK. However, whenever two or more objects move together (i.e. AnimateX is called on another object before the previous animation has completed), the animation starts to become jittery. By jittery I mean the objects are kind of shaking during the course of the animation.
I have faced the same problem many times. I found out that, depending on the objects you add to your canvas, WPF will often have to regenerate representations of these objects on every frame (which I believe might be your case, depending on the type of UI elements you are manipulating). You can solve the jitter issue by telling WPF to cache a representation of your canvas in a bitmap. This is done very simply, as follows, in your XAML definition of the canvas:
<Canvas ...Your canvas properties...>
<Canvas.CacheMode>
<BitmapCache />
</Canvas.CacheMode>
...Your objects...
</Canvas>
This reduces the load on your WPF application: it simply stores the representation of your objects as a bitmap image, so your application does not have to redraw them on every frame. This solution only works if your animation is applied externally to the canvas and there are no ongoing local animations applied to the individual objects drawn in your canvas. You'll want to create separate canvases with their own caching if other animations in your code move the two objects with respect to each other.
Note that some UI elements will not benefit from this strategy. However, I've seen it work efficiently for many elements, including TextBoxes and the like, as well as geometric shapes. In any case, it's always worth a try.
Secondly, if caching local representations does not suffice, you might want to look at the performance of your code and see whether any process could be responsible for momentarily blocking the UI. There is no uniform solution here; it depends on what else is putting strain on your application's UI. Cleaning up the code and using asynchronous processing where relevant can help.
Finally, if after all these checks the overall demand on your application remains too high, you can remove some strain by reducing the general frame rate, the default being 60. You can try 30 or 40 and see if this improves the jittering, by including the following code in your initialization:
Timeline.DesiredFrameRateProperty.OverrideMetadata(typeof(Timeline), new FrameworkPropertyMetadata { DefaultValue = 40 });
Just a guess, but what happens if you animate the property directly, without using a Storyboard?
private void AnimateX(FrameworkElement element, double xMoveStart, double xMoveEnd, double durationSeconds)
{
    DoubleAnimation animation = new DoubleAnimation
    {
        From = xMoveStart,
        To = xMoveEnd,
        Duration = TimeSpan.FromSeconds(durationSeconds),
        EasingFunction = new SineEase { EasingMode = EasingMode.EaseInOut }
    };
    element.BeginAnimation(Canvas.LeftProperty, animation);
}

Vector graphics in Silverlight

I am new to Silverlight. I have just created my first application, which shows Deep Zoom images.
I am looking for some pointers on how to display vector graphics in Silverlight. The graphics are all 2D and consist of a series of lines (x1 y1, x2 y2), points (x y), and basic shapes. The data is available in ASCII text files.
What is the best way to read the data from the files and draw it in Silverlight? Do I need to convert/translate the vector objects into images (XAML) first? Where should I start?
The ideal case is that all vector objects should be selectable, either programmatically or by user actions.
Thanks,
Val
There is no direct drawing API to my knowledge, but you can draw the data yourself by adding the various shapes to the visual tree.
The code you are looking for will likely involve the Path class and, in turn, PathFigure and PolyLineSegment (or possibly LineSegment).
Below is some code that draws a square:
PolyLineSegment segment = new PolyLineSegment();
segment.Points.Add(new Point(0, 50));
segment.Points.Add(new Point(50, 50));
segment.Points.Add(new Point(50, 0));
segment.Points.Add(new Point(0, 0));

PathFigure figure = new PathFigure()
{
    StartPoint = new Point(0, 0)
};
figure.Segments.Add(segment);

PathGeometry geometry = new PathGeometry();
geometry.Figures.Add(figure);

Path path = new Path()
{
    Stroke = new SolidColorBrush(Colors.Black),
    StrokeThickness = 2,
    Data = geometry
};

// To render, the Path needs to be added to the visual tree
LayoutRoot.Children.Add(path);
Edit: If the data in the ASCII text files cannot change at runtime, it might be worth writing a script that transforms the files into XAML so the result can be compiled.
First of all, you have the issue of actually getting access to the files.
Getting the file content
If you have these files held somewhere server-side, you would use WebClient to fetch the file using DownloadStringAsync.
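For example, something along these lines (the relative URI and the ParseVectorData routine are placeholders for your own code):

// Sketch: download the ASCII file asynchronously and hand the text to your parser.
var client = new WebClient();
client.DownloadStringCompleted += (s, e) =>
{
    if (e.Error == null)
    {
        ParseVectorData(e.Result); // ParseVectorData is your own parsing routine
    }
};
client.DownloadStringAsync(new Uri("data/map.txt", UriKind.Relative));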
On the other hand, if the user is to open a file locally, you need to use the OpenFileDialog class to ask them to open the file, and then use OpenText on the FileInfo object that OpenFileDialog provides to read the string data.
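Roughly like this (again, ParseVectorData stands in for your own parsing code):

// Sketch: let the user pick a local file and read its text content.
var dialog = new OpenFileDialog { Filter = "Text files (*.txt)|*.txt" };
if (dialog.ShowDialog() == true)
{
    using (StreamReader reader = dialog.File.OpenText())
    {
        ParseVectorData(reader.ReadToEnd());
    }
}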
Parsing
Well, it's your format, so you'll have to code that yourself.
Generating UI elements
You will not have to convert it to XAML. Since you want these vector items to be individually selectable elements, you probably want to use the Shape types found in System.Windows.Shapes, namely Ellipse, Line, Path, Polygon, Polyline and Rectangle.
No doubt the format in question has some way to define the position of these elements relative to a fixed 0,0 point. Hence the best panel to use to display them is a Canvas.
You would read through each vector item, create an instance of the appropriate shape, and set its properties based on the data in the item. You would then determine its correct location within the Canvas using the Canvas.Left and Canvas.Top attached properties, and finally add the shape to the Children collection of the Canvas. A rough sketch of that loop is shown below.
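A minimal sketch, assuming a hypothetical record format such as "LINE x1 y1 x2 y2" and "POINT x y" (your real format will differ):

// Sketch: turn each text record into a selectable Shape on the canvas.
private void AddVectorItem(Canvas canvas, string record)
{
    string[] parts = record.Split(' ');
    switch (parts[0])
    {
        case "LINE":
            var line = new Line
            {
                X1 = double.Parse(parts[1]),
                Y1 = double.Parse(parts[2]),
                X2 = double.Parse(parts[3]),
                Y2 = double.Parse(parts[4]),
                Stroke = new SolidColorBrush(Colors.Black),
                StrokeThickness = 1
            };
            line.MouseLeftButtonDown += (s, e) => { /* selection logic here */ };
            canvas.Children.Add(line);
            break;

        case "POINT":
            var dot = new Ellipse
            {
                Width = 4,
                Height = 4,
                Fill = new SolidColorBrush(Colors.Black)
            };
            // Position the shape via the Canvas attached properties.
            Canvas.SetLeft(dot, double.Parse(parts[1]) - 2);
            Canvas.SetTop(dot, double.Parse(parts[2]) - 2);
            dot.MouseLeftButtonDown += (s, e) => { /* selection logic here */ };
            canvas.Children.Add(dot);
            break;
    }
}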
