I have a WPF Canvas that I want to make a bitmap of.
Specifically, I want to render it actual size on a 300dpi bitmap.
The "actual size" of the objects on the canvas is 10 device independent pixels = 1" in real life.
Theoretically, WPF device independent pixels are 96dpi.
I've spent days trying to get this to work and am coming up flummoxed.
My understanding is that the general procedure is roughly:
var masterBitmap = new RenderTargetBitmap((int)(canvas.ActualWidth * ?SomeFactor?),
(int)(canvas.ActualHeight * ?SomeFactor?),
BitmapDpi, BitmapDpi, PixelFormats.Default);
masterBitmap.Render(canvas);
and that I need to set the canvas's LayoutTransform to a ScaleTransform of ?SomeOtherFactor? and then do a measure and arrange of the canvas to ?SomeDesiredSize?
What I am stuck on is what to use for the values of ?SomeFactor?, ?SomeOtherFactor? and ?SomeDesiredSize? to make this work. MSDN documentation gives no indication of what factors to use.
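For reference, my current best guess at the DPI part of it (canvas and BitmapDpi are the same as in the snippet above; the factor would be the ratio of the target DPI to WPF's 96 DPI baseline) -- but this still doesn't account for my 10 DIP = 1" scale, which is the part I can't work out:
// Best guess only: scale the pixel dimensions by BitmapDpi / 96 so the
// bitmap keeps the same physical size as the 96 DPI layout.
double scale = BitmapDpi / 96.0;          // e.g. 300 / 96 = 3.125
var masterBitmap = new RenderTargetBitmap(
    (int)Math.Ceiling(canvas.ActualWidth * scale),
    (int)Math.Ceiling(canvas.ActualHeight * scale),
    BitmapDpi, BitmapDpi, PixelFormats.Default);
masterBitmap.Render(canvas);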
I use this code to display images with 1:1 pixel accuracy.
double dpiXFactor, dpiYFactor;
Matrix m = PresentationSource.FromVisual(Application.Current.MainWindow).CompositionTarget.TransformToDevice;
if (m.M11 > 0 && m.M22 > 0)
{
dpiXFactor = m.M11;
dpiYFactor = m.M22;
}
else
{
// Sometimes this can return a matrix with 0s.
// Fall back to assuming normal DPI in this case.
dpiXFactor = 1;
dpiYFactor = 1;
}
double width = widthPixels / dpiXFactor;
double height = heightPixels / dpiYFactor;
Don't forget to enable UseLayoutRounding on the control as well.
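In other words (a minimal usage sketch; bmp and img are placeholder names for your BitmapSource and the Image control that shows it):
// Size the Image in DIPs so each image pixel maps to exactly one device pixel.
img.Source = bmp;
img.UseLayoutRounding = true;
img.Width  = bmp.PixelWidth  / dpiXFactor;   // device pixels -> DIPs
img.Height = bmp.PixelHeight / dpiYFactor;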
My problem effectively boils down to accurate mouse movement detection.
I need to create my own implementation of an InkCanvas and have succeeded for the most part, except for drawing strokes accurately.
void OnMouseMove(object sender, MouseEventArgs e)
{
var position = e.GetPosition(this);
if (!Rect.Contains(position))
return;
var ratio = new Point(Width / PixelDisplay.Size.X, Height / PixelDisplay.Size.Y);
var intPosition = new IntVector(Math2.FloorToInt(position.X / ratio.X), Math2.FloorToInt(position.Y / ratio.Y));
DrawBrush.Draw(intPosition, PixelDisplay);
UpdateStroke(intPosition); // calls CaptureMouse
}
This works. The Bitmap (PixelDisplay) is updated and all is well. However, any kind of quick mouse movement causes large skips in the drawing. I've narrowed down the problem to e.GetPosition(this), which blocks the event long enough to be inaccurate.
There's this question, which is long beyond revival, and its answers are unclear or simply don't make a noticeable difference.
After some more testing, the stated solution and similar ideas fail specifically because of e.GetPosition.
After looking through the source, I know InkCanvas uses similar methods: detect the device; if it's a mouse, get its position and capture it. I see no reason why the same process shouldn't work identically here.
I ended up being able to partially solve this.
var position = e.GetPosition(this);
if (!Rect.Contains(position))
return;
if (DrawBrush == null)
return;
var ratio = new Point(Width / PixelDisplay.Size.X, Height / PixelDisplay.Size.Y);
var intPosition = new IntVector(Math2.FloorToInt(position.X / ratio.X), Math2.FloorToInt(position.Y / ratio.Y));
// Calculate pixel coordinates based on the control height
var lastPoint = CurrentStroke?.Points.LastOrDefault(new IntVector(-1, -1));
// Uses System.Linq to grab the last stroke, if it exists
PixelDisplay.Lock();
// My special locking mechanism, effectively wraps Bitmap.Lock
if (lastPoint != new IntVector(-1, -1)) // Determine if we're in the middle of a stroke
{
var alphaAdd = 1d / new IntVector(intPosition.X - lastPoint.Value.X, intPosition.Y - lastPoint.Value.Y).Magnitude;
// For some interpolation, calculate 1 / distance (magnitude) of the two points.
// Magnitude formula: Math.Sqrt(Math.Pow(X, 2) + Math.Pow(Y, 2));
var alpha = 0d;
var xDiff = intPosition.X - lastPoint.Value.X;
var yDiff = intPosition.Y - lastPoint.Value.Y;
while (alpha < 1d)
{
alpha += alphaAdd;
var adjusted = new IntVector(
Math2.FloorToInt((position.X + (xDiff * alpha)) / ratio.X),
Math2.FloorToInt((position.Y + (yDiff * alpha)) / ratio.Y));
// Inch our way towards the current intPosition
DrawBrush.Draw(adjusted, PixelDisplay); // Draw to the bitmap
UpdateStroke(intPosition);
}
}
DrawBrush.Draw(intPosition, PixelDisplay); // Draw the original point
UpdateStroke(intPosition);
PixelDisplay.Unlock();
This implementation interpolates between the last point and the current one to fill in any gaps. It's not perfect, for example when using a very small brush size, but it's a solution nonetheless.
Some remarks
IntVector is my own lazily implemented Vector2, just using integers instead.
Math2 is a helper class. FloorToInt is short for (int)MathF.Round(...).
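If anyone wants the gap-filling step on its own, here is a plainer sketch of the same idea -- walking every integer point between the previous and current pixel positions (plot stands in for whatever actually writes a pixel; IntVector is the little struct mentioned above):
// Sketch: visit every integer point on the segment from "from" to "to".
static void DrawLine(IntVector from, IntVector to, Action<IntVector> plot)
{
    int dx = to.X - from.X;
    int dy = to.Y - from.Y;
    int steps = Math.Max(Math.Abs(dx), Math.Abs(dy));   // one step per pixel of travel
    for (int i = 0; i <= steps; i++)
    {
        double t = steps == 0 ? 0.0 : (double)i / steps;
        plot(new IntVector(
            from.X + (int)Math.Round(dx * t),
            from.Y + (int)Math.Round(dy * t)));
    }
}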
I have the strict requirement to have a texture with a resolution of (let's say) 512x512, always (even if the window is bigger; SDL basically scales the texture for me on rendering). This is because it's an emulator of a classic old computer that assumes a fixed texture; I can't rewrite the code to support multiple texture sizes and/or aspect ratios dynamically.
I use SDL_RenderSetLogicalSize() for the purpose I've described above.
Surely, when this is rendered into a window, I can get the mouse coordinates (window relative), and I can "scale" back to the texture position by getting the real window size (since the window can be resized).
However, there is a big problem here. As soon as the window width:height ratio is not the same as the texture's ratio (for example in full screen mode, since the ratio of modern displays won't match the ratio I want to use), there are "black bars" at the sides or top/bottom. Which is fine, since I always want the same fixed texture ratio, and SDL does it for me, etc. However, I cannot find a way to ask SDL where exactly my texture is rendered inside the window, given the fixed ratio I forced. I only need the position within the texture, but the exact texture origin is placed by SDL itself, not by me.
Surely I can write some code to figure out how those "black bars" change the origin of the texture, but I hope there is a simpler and more elegant way to "ask" SDL about this, since it surely has the code that positions my texture somewhere, so I could re-use that information.
My very ugly solution (it can be optimized, and I think the floating-point math can be avoided, but as a first try ...):
static void get_mouse_texture_coords ( int x, int y )
{
int win_x_size, win_y_size;
SDL_GetWindowSize(sdl_win, &win_x_size, &win_y_size);
// I don't know if there is a saner way to do this ...
// But we must figure out where the texture sits within the window, since the
// fixed texture ratio can differ from the window ratio (especially in full screen mode)
double aspect_tex = (double)SCREEN_W / (double)SCREEN_H;
double aspect_win = (double)win_x_size / (double)win_y_size;
if (aspect_win >= aspect_tex) {
// side ratio correction bars must be taken account
double zoom_factor = (double)win_y_size / (double)SCREEN_H;
int bar_size = win_x_size - (int)((double)SCREEN_W * zoom_factor);
mouse_x = (x - bar_size / 2) / zoom_factor;
mouse_y = y / zoom_factor;
} else {
// top-bottom ratio correction bars must be taken account
double zoom_factor = (double)win_x_size / (double)SCREEN_W;
int bar_size = win_y_size - (int)((double)SCREEN_H * zoom_factor);
mouse_x = x / zoom_factor;
mouse_y = (y - bar_size / 2) / zoom_factor;
}
}
Where SCREEN_W and SCREEN_H are the dimensions of my texture (the names are somewhat misleading, but anyway). Input parameters x and y are the window-relative mouse position (reported by SDL). mouse_x and mouse_y are the result, the texture-based coordinates. This seems to work nicely. However, is there a saner or better solution?
The code which calls the function above is in my event handler loop (which I call regularly, of course), something like this:
void handle_sdl_events ( void ) {
SDL_Event event;
while (SDL_PollEvent(&event)) {
switch (event.type) {
case SDL_MOUSEMOTION:
get_mouse_texture_coords(event.motion.x, event.motion.y);
break;
[...]
My linear algebra is weak. WPF is a great system for rendering different transformations upon an image. However, the standard ScaleTransform will only scale the image along the x-y axes. When the edges have first been rotated, applying the ScaleTransform produces a skewed result (as shown below), since the edges are no longer axis-aligned.
So, if I have an image that has undergone several different transforms with the result being shown by the WPF rendering system, how do I calculate the correct matrix transform to take the (final rotated image) and scale it along the axes of the rendered image?
Any help or suggestions will be most appreciated.
TIA
(For the complete code, please see my previous question.)
Edit #1: To see the above effect:
1. Drop an image onto the InkCanvas. -- no skewing seen.
2. Rotate the image counterclockwise (to about 45 deg). -- no skewing seen.
3. Make the image larger (to about twice its prescaled size). -- no skewing seen.
4. Rotate the image clockwise (back to about where it started). -- skewing is immediately seen during and after the rotation.
If step 3 is skipped, simple rotation -- no matter how many times it is done -- will not cause the skewing effect. Actually, this makes sense. The ScaleTransform scales the distance from the center to the edges of the image along the x-y axes. If the image is at an angle, the x-y distance from the edges is no longer constant along the width and length of the rendered image. So the edges get appropriately scaled, but the angles are changed.
Here is the most relevant code:
private ImageResizing(Image image)
{
if (image == null)
throw new ArgumentNullException("image");
_image = image;
TransformGroup tg = new TransformGroup();
image.RenderTransformOrigin = new Point(0.5, 0.5); // All transforms will be based on the center of the rendered element.
tg.Children.Add(image.RenderTransform); // Keeps whatever transforms have already been applied.
image.RenderTransform = tg;
_adorner = new MyImageAdorner(image); // Create the adorner.
InstallAdorner(); // Get the Adorner Layer and add the Adorner.
}
Note: The image.RenderTransformOrigin = new Point(0.5, 0.5) is set to the center
of the rendered image. All transforms will be based on the center of the image at the time it is seen by the transform.
public MyImageAdorner(UIElement adornedElement)
: base(adornedElement)
{
visualChildren = new VisualCollection(this);
// Initialize the Movement and Rotation thumbs.
BuildAdornerRotate(ref moveHandle, Cursors.SizeAll);
BuildAdornerRotate(ref rotateHandle, Cursors.Hand);
// Add handlers for move and rotate.
moveHandle.DragDelta += new DragDeltaEventHandler(moveHandle_DragDelta);
moveHandle.DragCompleted += new DragCompletedEventHandler(moveHandle_DragCompleted);
rotateHandle.DragDelta += new DragDeltaEventHandler(rotateHandle_DragDelta);
rotateHandle.DragCompleted += new DragCompletedEventHandler(rotateHandle_DragCompleted);
// Initialize the Resizing (i.e., corner) thumbs with specialized cursors.
BuildAdornerCorner(ref topLeft, Cursors.SizeNWSE);
// Add handlers for resizing.
topLeft.DragDelta += new DragDeltaEventHandler(TopLeft_DragDelta);
topLeft.DragCompleted += TopLeft_DragCompleted;
// Put the outline border around the image. The outline will be moved by the DragDeltas.
BorderTheImage();
}
#region [Rotate]
/// <summary>
/// Rotate the Adorner Outline about its center point. The Outline rotation will be applied to the image
/// in the DragCompleted event.
/// </summary>
/// <param name="sender"></param>
/// <param name="e"></param>
void rotateHandle_DragDelta(object sender, DragDeltaEventArgs e)
{
// Get the position of the mouse relative to the Thumb. (All coordinates are in render space.)
Point pos = Mouse.GetPosition(this);
// Render origin is set at center of the adorned element. (all coordinates are in rendering space).
double CenterX = AdornedElement.RenderSize.Width / 2;
double CenterY = AdornedElement.RenderSize.Height / 2;
double deltaX = pos.X - CenterX;
double deltaY = pos.Y - CenterY;
double angle;
if (deltaY.Equals(0))
{
if (!deltaX.Equals(0))
angle = 90;
else
return;
}
else
{
double tan = deltaX / deltaY;
angle = Math.Atan(tan) * 180 / Math.PI;   // radians -> degrees
}
// If the mouse crosses the vertical center,
// find the complementary angle.
if (deltaY > 0)
angle = 180 - Math.Abs(angle);
// Rotate left if the mouse moves left and right
// if the mouse moves right.
if (deltaX < 0)
angle = -Math.Abs(angle);
else
angle = Math.Abs(angle);
if (double.IsNaN(angle))
return;
// Apply the rotation to the outline. All Transforms are set to Render Center.
rotation.Angle = angle;
rotation.CenterX = CenterX;
rotation.CenterY = CenterY;
outline.RenderTransform = rotation;
}
/// Rotates the image to the same angle as the outline around the render origin.
void rotateHandle_DragCompleted(object sender, DragCompletedEventArgs e)
{
// Get Rotation Angle from outline. All element rendering is set to rendering center.
RotateTransform _rt = outline.RenderTransform as RotateTransform;
// Add RotateTransform to the adorned element.
TransformGroup gT = AdornedElement.RenderTransform as TransformGroup;
RotateTransform rT = new RotateTransform(_rt.Angle);
gT.Children.Insert(0, rT);
AdornedElement.RenderTransform = gT;
outline.RenderTransform = Transform.Identity; // clear transform from outline.
}
#endregion //Rotate
#region [TopLeft Corner]
// Top Left Corner is being dragged. Anchor is Bottom Right.
void TopLeft_DragDelta(object sender, DragDeltaEventArgs e)
{
ScaleTransform sT = new ScaleTransform(1 - e.HorizontalChange / outline.ActualWidth, 1 - e.VerticalChange / outline.ActualHeight,
outline.ActualWidth, outline.ActualHeight);
outline.RenderTransform = sT; // This will immediately show the new outline without changing the Image.
}
/// The resizing outline for the TopLeft is based on the bottom-right corner. The resizing transform for the
/// element, however, is based on the render origin being in the center. Therefore, the scale transform
/// received from the outline must be recalculated to have the same effect--only from the rendering center.
///
/// TopLeft_DragCompleted resize the element rendering.
private void TopLeft_DragCompleted(object sender, DragCompletedEventArgs e)
{
// Get new scaling from the Outline.
ScaleTransform _sT = outline.RenderTransform as ScaleTransform;
scale.ScaleX *= _sT.ScaleX; scale.ScaleY *= _sT.ScaleY;
Point Center = new Point(AdornedElement.RenderSize.Width/2, AdornedElement.RenderSize.Height/2);
TransformGroup gT = AdornedElement.RenderTransform as TransformGroup;
ScaleTransform sT = new ScaleTransform( _sT.ScaleX, _sT.ScaleY, Center.X, Center.Y);
gT.Children.Insert(0, sT);
AdornedElement.RenderTransform = gT;
outline.RenderTransform = Transform.Identity; // Clear outline transforms. (Same as null).
}
#endregion
Note: I am adding each new transform at the front of the Children list. This makes calculations on the image easier.
I could not find, with Google or in texts, all the elements needed to answer this question completely. So, for all the other newbies like myself, I will post this (very long) answer. (Editors and gurus, please feel free to correct.)
A word about setup. I have an InkCanvas onto which an image is dropped and added as a child of the InkCanvas. At the time of the drop, an adorner is added for final positioning of the image; it contains a Thumb on each corner for resizing, a top-middle thumb for rotating, and a middle thumb for translation. Along with an "outline" built as a Path element, the Thumbs and outline complete the Adorner and create a kind of wireframe around the adorned element.
There are multiple key points:
WPF first uses a layout pass to position elements within their parent container, followed by a rendering pass to draw the element. Transforms can be applied to either or both of these passes. However, it needs to be noted that the layout pass uses an x-y coordinate system whose origin is at the top left of the parent, whereas the rendering system inherently references the top left of the child element. If the layout position of the dropped element is not specifically defined, it will by default be added at the "origin" of the parent container.
The RenderTransform is by default a MatrixTransform but can be replaced by a TransformGroup. Using either or both of these allows for Matrices (in the MatrixTransform) or Transforms (in the TransformGroup) to be applied in any order. My preference was to use the MatrixTransforms to better see the relationship between scaling, rotation, and translation.
The rendering of the adorner follows the element it adorns. That is, the element's rendering will also be applied to the Adorner. This behavior can be overridden by use of
public override GeneralTransform GetDesiredTransform(GeneralTransform transform)
As noted in the initial question, I had avoided using SetTop() and SetLeft() because they messed up my other matrices. In hindsight, the reason my matrices failed was that SetTop() and SetLeft() apparently act during the layout phase--so all my coordinates for rendering were off. (I was using a TranslateTransform to position the image upon drag-drop.) Since SetTop() and SetLeft() act during the layout phase, using them greatly simplified the calculations needed for the rendering phase, because all coordinates could then refer to the image without regard to its position on the canvas.
private void IC_Drop(object sender, DragEventArgs e)
{
InkCanvas ic = sender as InkCanvas;
// Setting InkCanvasEditingMode.None is necessary to capture DrawingLayer_MouseDown.
ic.EditingMode = InkCanvasEditingMode.None;
ImageInfo image_Info = e.Data.GetData(typeof(ImageInfo)) as ImageInfo;
if (image_Info != null)
{
// Display enlarged image on ImageLayer
// This is the expected format for the Uri:
// ImageLayer.Source = new BitmapImage(new Uri("/Images/Female - Front.png", UriKind.Relative));
// Source = new BitmapImage(image_Info.Uri);
Image image = new Image();
image.Width = image_Info.Width * 4;
// Stretch.Uniform keeps the Aspect Ratio but totally screws up resizing the image.
// Stretch.Fill allows for resizing the Image without keeping the Aspect Ratio.
image.Stretch = Stretch.Fill;
image.Source = new BitmapImage(image_Info.Uri);
// Position the drop. Note that SetLeft and SetTop are active during the Layout phase of the image drop and will
// be applied before the Image hits its Rendering stage.
Point position = e.GetPosition(ic);
InkCanvas.SetLeft(image, position.X);
InkCanvas.SetTop(image, position.Y);
ic.Children.Add(image);
ImageResizing imgResize = ImageResizing.Create(image);
}
}
Since I want to be able to resize the image from any direction, the image is set with Stretch.Fill. When Stretch.Uniform was used, the image appeared to first be resized then jump back to its initial size.
Since I am using MatrixTransforms, the order of the matrices is important. So, for my use, when applying the matrices:
// Make new render transform. The Matrix order of multiplication is extremely important.
// Scaling should be done first, followed by (skewing), rotation and translation -- in
// that order.
MatrixTransform gT = new MatrixTransform
{
Matrix = sM * rM * tM
};
ele.RenderTransform = gT;
Scaling (sM) is performed before rotation (rM). Translation is applied last. (WPF's Matrix composes from left to right: the left-hand matrix is applied first.)
In reviewing the matrices, it is apparent that the rotation matrix also involves skewing terms. (Which makes sense, since the RotateTransform is intended to keep the angles at the edges constant.) Thus, the rotation matrix depends on the size of the image.
In my case, the reason scaling after rotation was causing skewing is that the scaling transform multiplies the distances between the points of the image and the x-y axes. So if an edge of the image is not at a constant distance from the x-y axes, scaling will distort (i.e., skew) the image.
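A quick numeric check of that claim, done directly with WPF's Matrix struct (this is my own sanity check, not code from the adorner; WPF uses row vectors, so in A * B the transform A is applied first):
var r = new Matrix(); r.Rotate(45);               // rotate by 45 degrees
var s = new Matrix(2, 0, 0, 1, 0, 0);             // non-uniform scale: x times 2

Matrix rotateThenScale = r * s;                   // rotate first, then scale along the screen's x-y axes
Vector xAxis = rotateThenScale.Transform(new Vector(1, 0));
Vector yAxis = rotateThenScale.Transform(new Vector(0, 1));
double dot = xAxis.X * yAxis.X + xAxis.Y * yAxis.Y;   // -1.5: the image's axes are no longer perpendicular -> skew

Matrix scaleThenRotate = s * r;                   // scale in the image's own frame, then rotate
xAxis = scaleThenRotate.Transform(new Vector(1, 0));
yAxis = scaleThenRotate.Transform(new Vector(0, 1));
dot = xAxis.X * yAxis.X + xAxis.Y * yAxis.Y;          // 0: still a rectangle, just stretched and rotated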
Putting this together results in the following method to resize the image:
Action<Matrix, Vector> DragCompleted = (growthMatrix, v) =>
{
var ele = AdornedElement;
// Get the change vector. Transform (i.e, Rotate) change vector into x-y axes.
// The Horizontal and Vertical changes give the distance between the the current cursor position
// and the Thumb.
Matrix m = new Matrix();
m.Rotate(-AngleDeg);
Vector v1 = v * m;
// Calculate Growth Vector.
var gv = v1 * growthMatrix;
// Apply new scaling along the x-y axes to obtain the rendered size.
// Use the current Image size as the reference to calculate the new scaling factors.
var scaleX = sM.M11; var scaleY = sM.M22;
var W = ele.RenderSize.Width * scaleX; var H = ele.RenderSize.Height * scaleY;
var sx = 1 + gv.X/ W; var sy = 1 + gv.Y / H;
// Change ScalingTransform by applying the new scaling factors to the existing scaling transform.
// Do not add offsets to the scaling transform matrix as they will be included in future scalings.
// With RenderTransformOrigin set to the image center (0.5, 0.5), scaling occurs from the center out.
// Move the new center of the new resized image to its correct position such that the image's thumb stays
// underneath the cursor.
sM.Scale(sx, sy);
tM.Translate(v.X / 2, v.Y / 2);
// New render transform. The order of the transform's is extremely important.
MatrixTransform gT = new MatrixTransform
{
Matrix = sM * rM * tM
};
ele.RenderTransform = gT;
outline.RenderTransform = Transform.Identity; // clear this transform from the outline.
};
Just to be clear, my "Growth Matrix" is defined in such a manner as to result in "Positive" growth as the cursor is moved away from the center of the image. For Example, the TopLeft corner will "grow" the image when moved to the left and up. Hence
growth matrix = new Matrix(-1, 0, 0, -1, 0, 0) for top-left corner.
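Presumably (this is just my reading of the pattern, not code from the project) the other corners flip the signs so that dragging away from the center always counts as positive growth:
// Hypothetical growth matrices per corner: map the drag vector so that
// moving away from the image center produces positive growth.
var topLeft     = new Matrix(-1, 0, 0, -1, 0, 0);  // drag left/up    -> grow
var topRight    = new Matrix( 1, 0, 0, -1, 0, 0);  // drag right/up   -> grow
var bottomLeft  = new Matrix(-1, 0, 0,  1, 0, 0);  // drag left/down  -> grow
var bottomRight = new Matrix( 1, 0, 0,  1, 0, 0);  // drag right/down -> grow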
The last problem is that of correctly calculating the rotation center (i.e., I want to spin, not orbit). This becomes greatly simplified by using
// All transforms will be based on the center of the rendered element.
AdornedElement.RenderTransformOrigin = new Point(0.5, 0.5);
Lastly, since I am scaling from a corner, the center of the image needs to be translated to keep the corner underneath the cursor.
Sorry for the length of this answer, but there is much to cover (and learn :) ). Hope this helps somebody.
In a WPF control with zoom functionality, I calculate from the MouseWheelEventArgs how to scale the drawing canvas to implement the zoom effect.
Point mouse = e.GetPosition(myCanvas);
Matrix m = myCanvas.RenderTransform.Value;
double f;
if (e.Delta > 0)
{
f = 1.1;
}
else
{
f = 1.0 / 1.1;
}
m.ScaleAtPrepend(f, f, mouse.X, mouse.Y);
myCanvas.RenderTransform = new MatrixTransform(m);
I would like to know the actual size of one of the circles on the canvas. However Width, ActualWidth and such stay the same while zooming in and out (16.0). How would you determine (calculate?) the size of the circle that a user actually sees on his or her screen?
MatrixTransform has a method TransformBounds for this. You feed it the original bounding box of the element (I think in your case 0, 0, width, height will do) and it returns the resulting bounding box.
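A minimal sketch of that (assuming the circle element is 16 x 16 DIPs and m is the canvas's render matrix from the question):
// Transform the circle's original bounds through the canvas's render transform.
Rect zoomedBounds = new MatrixTransform(m).TransformBounds(new Rect(0, 0, 16, 16));
double visibleWidth  = zoomedBounds.Width;   // the size the user actually sees
double visibleHeight = zoomedBounds.Height;  // (before any system DPI scaling)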
According to MSDN, in Silverlight images are hit testable over their image/media display areas, basically their Height and Width. Transparent / full-alpha pixels in the image file are still hit testable.
My question is: what is the best way to make only the non-transparent pixels hit testable in images in Silverlight?
This is not going to be possible using the normal hit testing capability, as you found out with the MSDN reference.
The only idea I had was to convert your image to the WriteableBitmap class and use the Pixels property to do alpha-channel hit testing. I have not actually tried this and I can't imagine it's trivial to do, but it should work in theory.
The pixels are one large int[], with the 4 bytes of each integer corresponding to ARGB. It uses the premultiplied ARGB32 format, so if the alpha is anything besides full 255 the other RGB values are scaled accordingly. I am assuming you want any pixel that is NOT fully transparent to be considered a "hit", so you can simply check whether the alpha byte is non-zero.
You would access the row/col pixel you are looking to check by array index like this:
int pixel = myBitmap.Pixels[row * myBitmap.PixelWidth + col];
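And, just to spell out the layout of that int (my own illustration; the value is 0xAARRGGBB):
byte a = (byte)((pixel >> 24) & 0xFF);   // alpha
byte r = (byte)((pixel >> 16) & 0xFF);   // red, premultiplied by alpha
byte g = (byte)((pixel >>  8) & 0xFF);   // green, premultiplied by alpha
byte b = (byte)( pixel        & 0xFF);   // blue, premultiplied by alpha
bool isHit = a != 0;                     // anything not fully transparent counts as a hit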
Check out this post for some more ideas.
EDIT:
I threw together a quick test; it works and it's pretty straightforward:
public MainPage()
{
InitializeComponent();
this.image = new BitmapImage(new Uri("my_tranny_image.png", UriKind.Relative));
this.MyImage.Source = image;
this.LayoutRoot.MouseMove += (sender, e) =>
{
bool isHit = ImageHitTest(image, e.GetPosition(this.MyImage));
this.Result.Text = string.Format("Hit Test Result: {0}", isHit);
};
}
bool ImageHitTest(BitmapSource image, Point point)
{
var writableBitmap = new WriteableBitmap(image);
// check bounds
if (point.X < 0.0 || point.X > writableBitmap.PixelWidth - 1 ||
point.Y < 0.0 || point.Y > writableBitmap.PixelHeight - 1)
return false;
int row = (int)Math.Floor(point.Y);
int col = (int)Math.Floor(point.X);
int pixel = writableBitmap.Pixels[row * writableBitmap.PixelWidth + col];
// Pixels is premultiplied ARGB32, so the alpha channel is the most significant byte.
byte alpha = (byte)((pixel >> 24) & 0xFF);
return alpha != 0x00;
}
You would probably want to make some optimizations, like not creating the WriteableBitmap on every MouseMove event, but this is just a proof of concept to show that it works.
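For example (an untested sketch of that optimization), build the WriteableBitmap once when the source is set and reuse it in the hit test:
WriteableBitmap hitTestBitmap;   // built once, reused for every MouseMove

void SetImageSource(BitmapSource source)
{
    this.MyImage.Source = source;
    this.hitTestBitmap = new WriteableBitmap(source);   // copy the pixels a single time
}

bool ImageHitTest(Point point)
{
    if (hitTestBitmap == null ||
        point.X < 0.0 || point.X > hitTestBitmap.PixelWidth - 1 ||
        point.Y < 0.0 || point.Y > hitTestBitmap.PixelHeight - 1)
        return false;

    int pixel = hitTestBitmap.Pixels[(int)point.Y * hitTestBitmap.PixelWidth + (int)point.X];
    return (byte)(pixel >> 24) != 0;     // alpha != 0 -> hit
}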