What is the best way to have only non-transparent pixels hit testable in images in Silverlight?

According to MSDN, images in Silverlight are hit testable over their image/media display areas, basically their Height and Width. Transparent / full-alpha pixels in the image file are still hit testable.
My question is: what is the best way to have only non-transparent pixels hit testable in images in Silverlight?

This is not going to be possible using the normal hit testing capability, as you found out with the MSDN reference.
The only idea I had was to convert your image to the WriteableBitmap class and use the Pixels property to do alpha-channel hit testing. I have not actually tried this and I can't imagine it's trivial to do, but it should work in theory.
The pixels are one large int[], with the 4 bytes of each integer corresponding to ARGB. It uses the premultiplied ARGB32 format, so if the alpha is anything besides full 255 the other RGB values are scaled accordingly. I am assuming you want any pixel that is not fully transparent to be considered a "hit", so you only need to check whether the alpha byte is non-zero.
You would access the row/col pixel you are looking to check by array index like this:
int pixel = myBitmap.Pixels[row * myBitmap.PixelWidth + col];
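Since the format is premultiplied ARGB32, the alpha channel sits in the highest byte of that integer, so one way to test it (just a sketch reusing the hypothetical myBitmap, row and col names from above) is a simple shift:

int pixel = myBitmap.Pixels[row * myBitmap.PixelWidth + col];
byte alpha = (byte)(pixel >> 24); // high byte of the premultiplied ARGB32 value
bool isHit = alpha != 0x00;       // any pixel that is not fully transparent counts as a hit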
Check out this post for some more ideas.
EDIT:
I threw together a quick test, it works and it's pretty straightforward:
public MainPage()
{
    InitializeComponent();

    this.image = new BitmapImage(new Uri("my_tranny_image.png", UriKind.Relative));
    this.MyImage.Source = image;

    this.LayoutRoot.MouseMove += (sender, e) =>
    {
        bool isHit = ImageHitTest(image, e.GetPosition(this.MyImage));
        this.Result.Text = string.Format("Hit Test Result: {0}", isHit);
    };
}
bool ImageHitTest(BitmapSource image, Point point)
{
    var writeableBitmap = new WriteableBitmap(image);

    // check bounds
    if (point.X < 0.0 || point.X > writeableBitmap.PixelWidth - 1 ||
        point.Y < 0.0 || point.Y > writeableBitmap.PixelHeight - 1)
        return false;

    int row = (int)Math.Floor(point.Y);
    int col = (int)Math.Floor(point.X);
    int pixel = writeableBitmap.Pixels[row * writeableBitmap.PixelWidth + col];

    // The alpha channel is the high byte of the premultiplied ARGB32 value.
    byte alpha = (byte)(pixel >> 24);
    return alpha != 0x00;
}
You would probably want to make some optimizations, like not creating the WriteableBitmap on every MouseMove event, but this is just a proof of concept to show that it works.
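A minimal sketch of that optimization, assuming the same MainPage as above (hooking ImageOpened is just one way to make sure the pixel data exists before converting):

private WriteableBitmap hitTestBitmap; // cached conversion, built once

public MainPage()
{
    InitializeComponent();

    var image = new BitmapImage(new Uri("my_tranny_image.png", UriKind.Relative));
    this.MyImage.Source = image;

    // Convert to a WriteableBitmap once, after the image data is available,
    // instead of on every MouseMove.
    image.ImageOpened += (s, args) => hitTestBitmap = new WriteableBitmap(image);

    this.LayoutRoot.MouseMove += (sender, e) =>
    {
        if (hitTestBitmap == null)
            return; // image not decoded yet

        Point p = e.GetPosition(this.MyImage);
        bool inBounds = p.X >= 0 && p.X <= hitTestBitmap.PixelWidth - 1 &&
                        p.Y >= 0 && p.Y <= hitTestBitmap.PixelHeight - 1;

        bool isHit = false;
        if (inBounds)
        {
            int pixel = hitTestBitmap.Pixels[(int)p.Y * hitTestBitmap.PixelWidth + (int)p.X];
            isHit = (byte)(pixel >> 24) != 0x00; // alpha lives in the high byte
        }

        this.Result.Text = string.Format("Hit Test Result: {0}", isHit);
    };
}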

Related

Implementing stroke drawing similar to InkCanvas

My problem effectively boils down to accurate mouse movement detection.
I need to create my own implementation of an InkCanvas and have succeeded for the most part, except for drawing strokes accurately.
void OnMouseMove(object sender, MouseEventArgs e)
{
    var position = e.GetPosition(this);
    if (!Rect.Contains(position))
        return;

    var ratio = new Point(Width / PixelDisplay.Size.X, Height / PixelDisplay.Size.Y);
    var intPosition = new IntVector(Math2.FloorToInt(position.X / ratio.X), Math2.FloorToInt(position.Y / ratio.Y));

    DrawBrush.Draw(intPosition, PixelDisplay);
    UpdateStroke(intPosition); // calls CaptureMouse
}
This works. The Bitmap (PixelDisplay) is updated and all is well. However, any kind of quick mouse movement causes large skips in the drawing. I've narrowed down the problem to e.GetPosition(this), which blocks the event long enough to be inaccurate.
There's this question, which is long beyond revival, and its answers are unclear or simply don't make a noticeable difference.
After some more testing, the stated solution and similar ideas fail specifically because of e.GetPosition.
I know InkCanvas uses similar methods after looking through the source; detect the device, if it's a mouse, get its position and capture. I see no reason for the same process to not work identically here.
I ended up being able to partially solve this.
var position = e.GetPosition(this);
if (!Rect.Contains(position))
    return;
if (DrawBrush == null)
    return;

var ratio = new Point(Width / PixelDisplay.Size.X, Height / PixelDisplay.Size.Y);
var intPosition = new IntVector(Math2.FloorToInt(position.X / ratio.X), Math2.FloorToInt(position.Y / ratio.Y));
// Calculate pixel coordinates based on the control height

var lastPoint = CurrentStroke?.Points.LastOrDefault(new IntVector(-1, -1));
// Uses System.Linq to grab the last stroke, if it exists

PixelDisplay.Lock();
// My special locking mechanism, effectively wraps Bitmap.Lock

if (lastPoint != new IntVector(-1, -1)) // Determine if we're in the middle of a stroke
{
    var alphaAdd = 1d / new IntVector(intPosition.X - lastPoint.Value.X, intPosition.Y - lastPoint.Value.Y).Magnitude;
    // For some interpolation, calculate 1 / distance (magnitude) of the two points.
    // Magnitude formula: Math.Sqrt(Math.Pow(X, 2) + Math.Pow(Y, 2));
    var alpha = 0d;
    var xDiff = intPosition.X - lastPoint.Value.X;
    var yDiff = intPosition.Y - lastPoint.Value.Y;

    while (alpha < 1d)
    {
        alpha += alphaAdd;
        var adjusted = new IntVector(
            Math2.FloorToInt((position.X + (xDiff * alpha)) / ratio.X),
            Math2.FloorToInt((position.Y + (yDiff * alpha)) / ratio.Y));
        // Inch our way towards the current intPosition
        DrawBrush.Draw(adjusted, PixelDisplay); // Draw to the bitmap
        UpdateStroke(intPosition);
    }
}

DrawBrush.Draw(intPosition, PixelDisplay); // Draw the original point
UpdateStroke(intPosition);
PixelDisplay.Unlock();
This implementation interpolates between the last point and the current one to fill in any gaps. It's not perfect when using a very small brush size for example, but is a solution nonetheless.
Some remarks
IntVector is a lazily implemented Vector2 by me, just using integers instead.
Math2 is a helper class. FloorToInt is short for (int)MathF.Round(...).
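For reference, the two helpers might look roughly like this (a hypothetical reconstruction based on the remarks above; the real implementation is not shown in the answer):

using System;

public readonly struct IntVector
{
    public readonly int X;
    public readonly int Y;

    public IntVector(int x, int y) { X = x; Y = y; }

    // Length of the vector, used above to derive the interpolation step size.
    public double Magnitude => Math.Sqrt(Math.Pow(X, 2) + Math.Pow(Y, 2));

    public static bool operator ==(IntVector a, IntVector b) => a.X == b.X && a.Y == b.Y;
    public static bool operator !=(IntVector a, IntVector b) => !(a == b);
    public override bool Equals(object obj) => obj is IntVector v && this == v;
    public override int GetHashCode() => (X, Y).GetHashCode();
}

public static class Math2
{
    // As described in the remark: round, then cast to int.
    public static int FloorToInt(double value) => (int)MathF.Round((float)value);
}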

Generalizing the ScaleTransform (WPF) when image is not aligned to x-y axes

My linear algebra is weak. WPF is a great system for rendering different transformations upon an image. However, the standard ScaleTransform will only scale the image along the x-y axes. When the edges have first been rotated, applying the ScaleTransform results in a skewed transformation (as shown below), since the edges are no longer aligned with those axes.
So, if I have an image that has undergone several different transforms with the result being shown by the WPF rendering system, how do I calculate the correct matrix transform to take the (final rotated image) and scale it along the axes of the rendered image?
Any help or suggestions will be most appreciated.
TIA
(For the complete code, please see my previous question.)
Edit #1: To see the above effect:
1. Drop image onto InkCanvas -- no skewing seen.
2. Rotate image counterclockwise (to about 45deg) -- no skewing seen.
3. Make the image larger (about twice its prescaled size) -- no skewing seen.
4. Rotate the image clockwise (about back to where it started) -- skewing is immediately seen during and after the rotation.
If step 3 is skipped, simple rotation -- no matter how many times done -- will not cause the skewing effect. Actually, this makes sense. The ScaleTransform preserves the distance from the center to the edges of the image. If the image is at an angle, the x-y distances from the edges are no longer constant along the width and height of the rendered image. So the edges get appropriately scaled, but the angles are changed.
Here is the most relevant code:
private ImageResizing(Image image)
{
    if (image == null)
        throw new ArgumentNullException("image");

    _image = image;

    TransformGroup tg = new TransformGroup();
    image.RenderTransformOrigin = new Point(0.5, 0.5); // All transforms will be based on the center of the rendered element.
    tg.Children.Add(image.RenderTransform);            // Keeps whatever transforms have already been applied.
    image.RenderTransform = tg;

    _adorner = new MyImageAdorner(image); // Create the adorner.
    InstallAdorner();                     // Get the Adorner Layer and add the Adorner.
}
Note: image.RenderTransformOrigin = new Point(0.5, 0.5) sets the origin to the center of the rendered image. All transforms will be based on the center of the image at the time it is seen by the transform.
public MyImageAdorner(UIElement adornedElement)
    : base(adornedElement)
{
    visualChildren = new VisualCollection(this);

    // Initialize the Movement and Rotation thumbs.
    BuildAdornerRotate(ref moveHandle, Cursors.SizeAll);
    BuildAdornerRotate(ref rotateHandle, Cursors.Hand);

    // Add handlers for move and rotate.
    moveHandle.DragDelta += new DragDeltaEventHandler(moveHandle_DragDelta);
    moveHandle.DragCompleted += new DragCompletedEventHandler(moveHandle_DragCompleted);
    rotateHandle.DragDelta += new DragDeltaEventHandler(rotateHandle_DragDelta);
    rotateHandle.DragCompleted += new DragCompletedEventHandler(rotateHandle_DragCompleted);

    // Initialize the Resizing (i.e., corner) thumbs with specialized cursors.
    BuildAdornerCorner(ref topLeft, Cursors.SizeNWSE);

    // Add handlers for resizing.
    topLeft.DragDelta += new DragDeltaEventHandler(TopLeft_DragDelta);
    topLeft.DragCompleted += TopLeft_DragCompleted;

    // Put the outline border around the image. The outline will be moved by the DragDeltas.
    BorderTheImage();
}
#region [Rotate]

/// <summary>
/// Rotate the Adorner Outline about its center point. The Outline rotation will be applied to the image
/// in the DragCompleted event.
/// </summary>
/// <param name="sender"></param>
/// <param name="e"></param>
void rotateHandle_DragDelta(object sender, DragDeltaEventArgs e)
{
    // Get the position of the mouse relative to the Thumb. (All coordinates are in render space.)
    Point pos = Mouse.GetPosition(this);

    // Render origin is set at the center of the adorned element.
    double CenterX = AdornedElement.RenderSize.Width / 2;
    double CenterY = AdornedElement.RenderSize.Height / 2;

    double deltaX = pos.X - CenterX;
    double deltaY = pos.Y - CenterY;

    double angle;
    if (deltaY.Equals(0))
    {
        if (!deltaX.Equals(0))
            angle = 90;
        else
            return;
    }
    else
    {
        double tan = deltaX / deltaY;
        angle = Math.Atan(tan);
        angle = angle * 180 / Math.PI;
    }

    // If the mouse crosses the vertical center,
    // find the complementary angle.
    if (deltaY > 0)
        angle = 180 - Math.Abs(angle);

    // Rotate left if the mouse moves left and right
    // if the mouse moves right.
    if (deltaX < 0)
        angle = -Math.Abs(angle);
    else
        angle = Math.Abs(angle);

    if (double.IsNaN(angle))
        return;

    // Apply the rotation to the outline. All Transforms are set to Render Center.
    rotation.Angle = angle;
    rotation.CenterX = CenterX;
    rotation.CenterY = CenterY;
    outline.RenderTransform = rotation;
}
/// Rotates the image to the same angle as the outline around the render origin.
void rotateHandle_DragCompleted(object sender, DragCompletedEventArgs e)
{
    // Get the Rotation Angle from the outline. All element rendering is set to the rendering center.
    RotateTransform _rt = outline.RenderTransform as RotateTransform;

    // Add a RotateTransform to the adorned element.
    TransformGroup gT = AdornedElement.RenderTransform as TransformGroup;
    RotateTransform rT = new RotateTransform(_rt.Angle);
    gT.Children.Insert(0, rT);
    AdornedElement.RenderTransform = gT;

    outline.RenderTransform = Transform.Identity; // Clear the transform from the outline.
}

#endregion // Rotate
#region [TopLeft Corner]

// Top Left Corner is being dragged. Anchor is Bottom Right.
void TopLeft_DragDelta(object sender, DragDeltaEventArgs e)
{
    ScaleTransform sT = new ScaleTransform(
        1 - e.HorizontalChange / outline.ActualWidth,
        1 - e.VerticalChange / outline.ActualHeight,
        outline.ActualWidth,
        outline.ActualHeight);

    outline.RenderTransform = sT; // This will immediately show the new outline without changing the Image.
}

/// The resizing outline for the TopLeft is based on the bottom-right corner. The resizing transform for the
/// element, however, is based on the render origin being in the center. Therefore, the Scale transform
/// received from the outline must be recalculated to have the same effect--only from the rendering center.
///
/// TopLeft_DragCompleted resizes the element rendering.
private void TopLeft_DragCompleted(object sender, DragCompletedEventArgs e)
{
    // Get the new scaling from the Outline.
    ScaleTransform _sT = outline.RenderTransform as ScaleTransform;
    scale.ScaleX *= _sT.ScaleX;
    scale.ScaleY *= _sT.ScaleY;

    Point Center = new Point(AdornedElement.RenderSize.Width / 2, AdornedElement.RenderSize.Height / 2);

    TransformGroup gT = AdornedElement.RenderTransform as TransformGroup;
    ScaleTransform sT = new ScaleTransform(_sT.ScaleX, _sT.ScaleY, Center.X, Center.Y);
    gT.Children.Insert(0, sT);
    AdornedElement.RenderTransform = gT;

    outline.RenderTransform = Transform.Identity; // Clear outline transforms. (Same as null.)
}

#endregion
Note: I am adding each new transform at the front of the Children list. This makes calculations on the image easier.
I could not find with Google, or in any text, all the elements needed to answer this question completely. So, for all other newbies like myself, I will post this (very long) answer. (Editors and gurus, please feel free to correct.)
A word about setup. I have an InkCanvas onto which an image is dropped and added as a child of the InkCanvas. At the time of the drop, an adorner containing a Thumb on each corner for resizing, a top-middle thumb for rotating, and a middle thumb for translation is added for final positioning of the image. Along with an "outline" designed as a Path element, the Thumbs and outline complete the Adorner and create a kind of wireframe around the adorned element.
There are multiple key points:
WPF first uses a layout pass to position elements within their parent container, followed by a rendering pass to arrange the element. Transforms can be applied to either or both the layout and rendering passes. However, it needs to be noted that the layout pass uses an x-y coordinate system with the origin at the top left of the parent, whereas the rendering system inherently references the top left of the child element. If the layout position of the dropped element is not specifically defined, it will by default be added at the origin of the parent container.
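As a quick illustration of the two passes (a hypothetical snippet, not from the project code):

// LayoutTransform participates in the layout pass: the element is measured and
// arranged at its transformed size, so surrounding elements are repositioned.
image.LayoutTransform = new ScaleTransform(2.0, 2.0);

// RenderTransform is applied after layout, relative to the element's own coordinate
// space (top-left by default, or RenderTransformOrigin), so the layout slot is unchanged.
image.RenderTransformOrigin = new Point(0.5, 0.5);
image.RenderTransform = new RotateTransform(45);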
The RenderTransform is by default a MatrixTransform but can be replaced by a TransformGroup. Using either or both of these allows for Matrices (in the MatrixTransform) or Transforms (in the TransformGroup) to be applied in any order. My preference was to use the MatrixTransforms to better see the relationship between scaling, rotation, and translation.
The rendering of the adorner follows the element it adorns. That is, the element's rendering will also be applied to the Adorner. This behavior can be overridden by use of
public override GeneralTransform GetDesiredTransform(GeneralTransform transform)
As noted in the initial question, I had avoided using SetTop() and SetLeft() because they messed up my other matrices. In hindsight, the reason my matrices failed was that SetTop() and SetLeft() act during the layout phase--so all my coordinates for rendering were off. (I was using a TranslateTransform to position the image upon drag-drop.) Using SetTop() and SetLeft() during the layout phase instead greatly simplified the calculations needed for the rendering phase, since all coordinates could then refer to the image without regard to its position on the canvas.
private void IC_Drop(object sender, DragEventArgs e)
{
    InkCanvas ic = sender as InkCanvas;

    // Setting InkCanvasEditingMode.None is necessary to capture DrawingLayer_MouseDown.
    ic.EditingMode = InkCanvasEditingMode.None;

    ImageInfo image_Info = e.Data.GetData(typeof(ImageInfo)) as ImageInfo;
    if (image_Info != null)
    {
        // Display enlarged image on ImageLayer
        // This is the expected format for the Uri:
        //   ImageLayer.Source = new BitmapImage(new Uri("/Images/Female - Front.png", UriKind.Relative));
        //   Source = new BitmapImage(image_Info.Uri);
        Image image = new Image();
        image.Width = image_Info.Width * 4;

        // Stretch.Uniform keeps the Aspect Ratio but totally screws up resizing the image.
        // Stretch.Fill allows for resizing the Image without keeping the Aspect Ratio.
        image.Stretch = Stretch.Fill;
        image.Source = new BitmapImage(image_Info.Uri);

        // Position the drop. Note that SetLeft and SetTop are active during the Layout phase of the image drop
        // and will be applied before the Image hits its Rendering stage.
        Point position = e.GetPosition(ic);
        InkCanvas.SetLeft(image, position.X);
        InkCanvas.SetTop(image, position.Y);

        ic.Children.Add(image);
        ImageResizing imgResize = ImageResizing.Create(image);
    }
}
Since I want to be able to resize the image from any direction, the image is set with Stretch.Fill. When Stretch.Uniform was used, the image appeared to first be resized then jump back to its initial size.
Since I am using MatrixTransform, the order of the Matrices is important. So when applying the Matrices, for my use
// Make new render transform. The Matrix order of multiplication is extremely important.
// Scaling should be done first, followed by (skewing), rotation and translation -- in
// that order.
MatrixTransform gT = new MatrixTransform
{
    Matrix = sM * rM * tM
};
ele.RenderTransform = gT;
Scaling (sM) is performed before rotation (rM); translation is applied last. (WPF composes matrices left to right, so the left-most matrix in the product is applied to the point first.)
In reviewing the matrices, it is apparent that the rotation matrix also involves skewing elements (which makes sense, since the RotateTransform is intended to keep the angles at the edges constant). Thus, the rotation matrix depends on the size of the image.
In my case, the reason scaling after rotation caused skewing is that the scaling transform multiplies each point's distance to the x-y axes. So if an edge of the image is not at a constant distance from the x-y axes, scaling will distort (i.e., skew) the image.
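To see why the order matters, here is a small console sketch using System.Windows.Media.Matrix (hypothetical numbers, not part of the project code): rotating first and then applying a non-uniform scale shears the rotated edges, while scaling first and rotating afterwards does not.

using System;
using System.Windows;
using System.Windows.Media;

class MatrixOrderDemo
{
    static void Main()
    {
        // Rotate first, then apply a non-uniform scale along the x-y axes:
        // the rotated edges are no longer axis-aligned, so the scale shears them.
        Matrix rotateThenScale = Matrix.Identity;
        rotateThenScale.Rotate(30);
        rotateThenScale.Scale(2.0, 1.0);

        // Scale first, then rotate: a larger, rotated image with no shear.
        Matrix scaleThenRotate = Matrix.Identity;
        scaleThenRotate.Scale(2.0, 1.0);
        scaleThenRotate.Rotate(30);

        // The transformed x and y axes stay perpendicular only in the second case.
        Vector x = new Vector(1, 0), y = new Vector(0, 1);
        Console.WriteLine((x * rotateThenScale) * (y * rotateThenScale)); // non-zero dot product => skew
        Console.WriteLine((x * scaleThenRotate) * (y * scaleThenRotate)); // 0 => no skew
    }
}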
Putting this together results in the following method to resize the image:
Action<Matrix, Vector> DragCompleted = (growthMatrix, v) =>
{
    var ele = AdornedElement;

    // Get the change vector. Transform (i.e., rotate) the change vector into the x-y axes.
    // The Horizontal and Vertical changes give the distance between the current cursor position
    // and the Thumb.
    Matrix m = new Matrix();
    m.Rotate(-AngleDeg);
    Vector v1 = v * m;

    // Calculate the Growth Vector.
    var gv = v1 * growthMatrix;

    // Apply new scaling along the x-y axes to obtain the rendered size.
    // Use the current Image size as the reference to calculate the new scaling factors.
    var scaleX = sM.M11;
    var scaleY = sM.M22;
    var W = ele.RenderSize.Width * scaleX;
    var H = ele.RenderSize.Height * scaleY;
    var sx = 1 + gv.X / W;
    var sy = 1 + gv.Y / H;

    // Change the ScaleTransform by applying the new scaling factors to the existing scaling transform.
    // Do not add offsets to the scaling transform matrix as they will be included in future scalings.
    // With RenderTransformOrigin set to the image center (0.5, 0.5), scaling occurs from the center out.
    // Move the center of the resized image to its correct position such that the image's thumb stays
    // underneath the cursor.
    sM.Scale(sx, sy);
    tM.Translate(v.X / 2, v.Y / 2);

    // New render transform. The order of the transforms is extremely important.
    MatrixTransform gT = new MatrixTransform
    {
        Matrix = sM * rM * tM
    };
    ele.RenderTransform = gT;

    outline.RenderTransform = Transform.Identity; // Clear this transform from the outline.
};
Just to be clear, my "Growth Matrix" is defined in such a manner as to result in "positive" growth as the cursor is moved away from the center of the image. For example, the TopLeft corner will "grow" the image when moved to the left and up. Hence:
growth matrix = new Matrix(-1, 0, 0, -1, 0, 0) for top-left corner.
The last problem is that of correctly calculating the rotation center (i.e., I want to spin, not orbit). This becomes greatly simplified by using
// All transforms will be based on the center of the rendered element.
AdornedElement.RenderTransformOrigin = new Point(0.5, 0.5);
Lastly, since I am scaling from a corner, the center of the image needs to be translated to keep the corner underneath the cursor.
Sorry for the length of this answer, but there is much to cover (and learn :) ). Hope this helps somebody.

Drawing multiple lines with DrawLines and DrawLine produces different result

I am trying to draw multiple lines on a WinForms panel using its Graphics object in the Paint event. I am actually drawing a number of lines joining given points. So, first of all I did this:
private void panel1_Paint(object sender, PaintEventArgs e)
{
    e.Graphics.DrawLines(new Pen(new SolidBrush(Color.Crimson), 3), PointFs.ToArray());

    float width = 10;
    float height = 10;
    var circleBrush = new SolidBrush(Color.Crimson);

    foreach (var point in PointFs)
    {
        float rectangleX = point.X - width / 2;
        float rectangleY = point.Y - height / 2;
        var r = new RectangleF(rectangleX, rectangleY, width, height);
        e.Graphics.FillEllipse(circleBrush, r);
    }
}
Which produces a result like the image below,
As you can see, the lines are drawn with a bit of overshoot at the sharp turns, which is not expected. So I changed the DrawLines code to this:
var pen = new Pen(new SolidBrush(Color.Crimson), 3);
for (int i = 1; i < PointFs.Count; i++)
{
    e.Graphics.DrawLine(pen, PointFs[i - 1], PointFs[i]);
}
And now the drawing works fine.
Can anyone tell the difference between the two approaches?
I have just had the same problem (stumbled upon this question during my research), but I have now found the solution.
The problem is caused by the LineJoin property on the Pen used. This DevX page explains the different LineJoin types (see Figure 1 for illustrations). It seems that Miter is the default type, and that causes the "overshoot" when you have sharp angles.
I solved my problem by setting the LineJoin property to Bevel:
var pen = new Pen(new SolidBrush(Color.Crimson), 3);
pen.LineJoin = Drawing2D.LineJoin.Bevel;
Now DrawLines no longer overshoot the points.

An efficient way to paint gradient rectangles

I'm generating a bunch of RectangleF objects having different sizes and positions. What would be the best way to fill them with a gradient Brush in GDI+?
In WPF I could create a LinearGradientBrush, set Start and End relative points and WPF would take care of the rest.
In GDI+, however, the gradient brush constructor requires the position in absolute coordinates, which means I would have to create a Brush for each of the rectangles, which would be a very complex operation.
Am I missing something or that's indeed the only way?
You can specify a transform at the moment just before the gradient is applied if you would like to declare the brush only once. Note that using transformations will override many of the constructor arguments that can be specified on a LinearGradientBrush.
LinearGradientBrush.Transform Property (System.Drawing.Drawing2D)
To modify the transformation, call the methods on the brush object corresponding to the desired matrix operations. Note that matrix operations are not commutative, so order is important. For your purposes, you'll probably want to do them in this order for each rendition of your rectangles: Scale, Rotate, Offset/Translate.
LinearGradientBrush.ResetTransform Method # MSDN
LinearGradientBrush.ScaleTransform Method (Single, Single, MatrixOrder) # MSDN
LinearGradientBrush.RotateTransform Method (Single, MatrixOrder) # MSDN
LinearGradientBrush.TranslateTransform Method (Single, Single, MatrixOrder) # MSDN
Note that the system-level drawing tools don't actually contain a stock definition for gradient brush, so if you have performance concerns about making multiple brushes, creating a multitude of gradient brushes shouldn't cost any more than the overhead of GDI+/System.Drawing maintaining the data required to define the gradient and styling. You may be just as well off to create a Brush per rectangle as needed, without having to dive into the math required to customize the brush via transform.
Brush Functions (Windows) # MSDN
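If you do go the brush-per-rectangle route instead, a minimal sketch might look like this (names such as rects are placeholders, not from the question):

using System.Drawing;
using System.Drawing.Drawing2D;

// Hypothetical helper: one short-lived brush per rectangle.
static void FillWithGradients(Graphics g, RectangleF[] rects)
{
    foreach (RectangleF r in rects)
    {
        if (r.Width <= 0 || r.Height <= 0)
            continue; // LinearGradientBrush rejects empty rectangles

        using (var brush = new LinearGradientBrush(r, Color.DarkRed, Color.White,
                                                   LinearGradientMode.Horizontal))
        {
            g.FillRectangle(brush, r);
        }
    }
}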
Here is a code example you can test in a WinForms app. It paints tiles with a gradient brush using a 45-degree gradient, scaled to the largest dimension of the tile (naively calculated). If you fiddle with the values and transformations, you may find that it isn't worth using the technique of setting a transform for all of your rectangles if you have non-trivial gradient definitions. Otherwise, remember that your transformations are applied at the world level, and in the GDI world the y-axis is upside down, whereas in the Cartesian math world it is ordered bottom-to-top. This also causes the angle to be applied clockwise, whereas in trigonometry the angle progresses counter-clockwise with increasing value for a y-axis pointing up.
using System;
using System.Drawing;
using System.Drawing.Drawing2D;
using System.Windows.Forms;

namespace TestMapTransform
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        private void Form1_Paint(object sender, PaintEventArgs e)
        {
            Rectangle rBrush = new Rectangle(0, 0, 1, 1);
            Color startColor = Color.DarkRed;
            Color endColor = Color.White;
            LinearGradientBrush br = new LinearGradientBrush(rBrush, startColor, endColor, LinearGradientMode.Horizontal);

            int wPartitions = 5;
            int hPartitions = 5;
            int w = this.ClientSize.Width;
            w = w - (w % wPartitions) + wPartitions;
            int h = this.ClientSize.Height;
            h = h - (h % hPartitions) + hPartitions;

            for (int hStep = 0; hStep < hPartitions; hStep++)
            {
                int hUnit = h / hPartitions;
                for (int wStep = 0; wStep < wPartitions; wStep++)
                {
                    int wUnit = w / wPartitions;
                    Rectangle rTile = new Rectangle(wUnit * wStep, hUnit * hStep, wUnit, hUnit);
                    if (e.ClipRectangle.IntersectsWith(rTile))
                    {
                        int maxUnit = wUnit > hUnit ? wUnit : hUnit;
                        br.ResetTransform();
                        br.ScaleTransform((float)maxUnit * (float)Math.Sqrt(2d), (float)maxUnit * (float)Math.Sqrt(2d), MatrixOrder.Append);
                        br.RotateTransform(45f, MatrixOrder.Append);
                        br.TranslateTransform(wUnit * wStep, hUnit * hStep, MatrixOrder.Append);
                        e.Graphics.FillRectangle(br, rTile);
                        br.ResetTransform();
                    }
                }
            }
        }

        private void Form1_Resize(object sender, EventArgs e)
        {
            this.Invalidate();
        }
    }
}
Here's a snapshot of the output:
I recommend creating a generic method like this:
public void Paint_rectangle(object sender, PaintEventArgs e)
{
    RectangleF r = new RectangleF(0, 0, e.ClipRectangle.Width, e.ClipRectangle.Height);
    if (r.Width > 0 && r.Height > 0)
    {
        Color c1 = Color.LightBlue;
        Color c2 = Color.White;
        Color c3 = Color.LightBlue;

        LinearGradientBrush br = new LinearGradientBrush(r, c1, c3, 90, true);
        ColorBlend cb = new ColorBlend();
        cb.Positions = new[] { 0, (float)0.5, 1 };
        cb.Colors = new[] { c1, c2, c3 };
        br.InterpolationColors = cb;

        // paint
        e.Graphics.FillRectangle(br, r);
    }
}
then, for every rectangle just call:
yourrectangleF.Paint += new PaintEventHandler(Paint_rectangle);
If the gradient colors are all the same, you can make that method shorter. Hope that helped.

Rendering a WPF canvas as a specificly sized bitmap

I have a WPF Canvas that I want to make a bitmap of.
Specifically, I want to render it actual size on a 300dpi bitmap.
The "actual size" of the objects on the canvas is 10 device independent pixels = 1" in real life.
Theoretically, WPF device independent pixels are 96dpi.
I've spent days trying to get this to work and am coming up flummoxed.
My understanding is that the general procedure is roughly:
var masterBitmap = new RenderTargetBitmap((int)(canvas.ActualWidth * ?SomeFactor?),
(int)(canvas.ActualHeight * ?SomeFactor?),
BitmapDpi, BitmapDpi, PixelFormats.Default);
masterBitmap.Render(canvas);
and that I need to set the canvas's LayoutTransform to a ScaleTransform of ?SomeOtherFactor? and then do a measure and arrange of the canvas to ?SomeDesiredSize?
What I am stuck on is what to use for the values of ?SomeFactor?, ?SomeOtherFactor? and ?SomeDesiredSize? to make this work. MSDN documentation gives no indication of what factors to use.
I use this code to display images with 1:1 pixel accuracy.
double dpiXFactor, dpiYFactor;

Matrix m = PresentationSource.FromVisual(Application.Current.MainWindow).CompositionTarget.TransformToDevice;
if (m.M11 > 0 && m.M22 > 0)
{
    dpiXFactor = m.M11;
    dpiYFactor = m.M22;
}
else
{
    // Sometimes this can return a matrix with 0s.
    // Fall back to assuming normal DPI in this case.
    dpiXFactor = 1;
    dpiYFactor = 1;
}

double width = widthPixels / dpiXFactor;
double height = heightPixels / dpiYFactor;
Don't forget to enable UseLayoutRounding on the control as well.
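As for the factors in the original question, the usual relationship is pixels = device-independent units * dpi / 96. A sketch of rendering the canvas at actual size to a 300 dpi bitmap might look like this (a common approach rather than part of the answer above, and it assumes the canvas has already been measured and arranged):

const double targetDpi = 300.0;
double scale = targetDpi / 96.0; // WPF layout units are 96 per inch

var bitmap = new RenderTargetBitmap(
    (int)Math.Ceiling(canvas.ActualWidth * scale),
    (int)Math.Ceiling(canvas.ActualHeight * scale),
    targetDpi, targetDpi,
    PixelFormats.Pbgra32);

bitmap.Render(canvas); // the bitmap's dpi maps layout units to pixels, so no extra transform is needed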
