I want to draw like in old QBASIC, where with five lines of code and PSET(x, y) you could plot any graph or Lissajous figure.
Question: what is the best way to do this in WPF? And in XNA?
Any samples?
For WPF and Silverlight
WriteableBitmap
http://msdn.microsoft.com/en-us/library/system.windows.media.imaging.writeablebitmap.aspx
The WriteableBitmapEx library compensates for the sparse built-in API with extension methods that are as easy to use as the built-in ones and offer GDI+-like functionality:
http://writeablebitmapex.codeplex.com/
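For example, a rough sketch (assuming an Image element named image in your XAML and the WriteableBitmapEx SetPixel/BitmapFactory helpers) that plots a Lissajous figure:

// Sketch only: 'image' is a hypothetical <Image x:Name="image"/> in the XAML.
var bmp = BitmapFactory.New(400, 400);          // WriteableBitmapEx factory helper
image.Source = bmp;
using (bmp.GetBitmapContext())
{
    for (double t = 0; t < 2 * Math.PI; t += 0.001)
    {
        int x = 200 + (int)(180 * Math.Sin(3 * t));
        int y = 200 + (int)(180 * Math.Sin(4 * t));
        bmp.SetPixel(x, y, Colors.White);       // WriteableBitmapEx extension method
    }
}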
In XNA this isn't the most efficient thing in general, but I think your best bet is probably to create a texture, set each pixel using SetData, and render it to the screen with SpriteBatch.
SpriteBatch spriteBatch;
Texture2D t;
Color[] blankScreen;

protected override void LoadContent()
{
    spriteBatch = new SpriteBatch(GraphicsDevice);

    //initialize texture
    t = new Texture2D(GraphicsDevice, GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height, false, SurfaceFormat.Color);

    //clear screen initially
    blankScreen = new Color[GraphicsDevice.Viewport.Width * GraphicsDevice.Viewport.Height];
    for (int i = 0; i < blankScreen.Length; i++)
    {
        blankScreen[i] = Color.Black;
    }
    ClearScreen();
}

private void Set(int x, int y, Color c)
{
    Color[] cArray = { c };

    //unset texture from device
    GraphicsDevice.Textures[0] = null;
    t.SetData<Color>(0, new Rectangle(x, y, 1, 1), cArray, 0, 1);

    //reset
    GraphicsDevice.Textures[0] = t;
}

private void ClearScreen()
{
    //unset texture from device
    GraphicsDevice.Textures[0] = null;
    t.SetData<Color>(blankScreen);

    //reset
    GraphicsDevice.Textures[0] = t;
}

protected override void Draw(GameTime gameTime)
{
    spriteBatch.Begin();
    spriteBatch.Draw(t, Vector2.Zero, Color.White);
    spriteBatch.End();

    base.Draw(gameTime);
}
With this you can call either Set or ClearScreen at will in your Update or Draw. You may have to play with the texture index (I just used 0 for this example; it might be different for you), and you only need to unset / reset once per frame, so you can optimize that depending on how you use them.
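For example, a minimal sketch (reusing the Set method above; the amplitudes and frequencies are arbitrary) that plots a Lissajous figure point by point from Update:

double t;

protected override void Update(GameTime gameTime)
{
    t += gameTime.ElapsedGameTime.TotalSeconds;

    int cx = GraphicsDevice.Viewport.Width / 2;
    int cy = GraphicsDevice.Viewport.Height / 2;

    // Plot one new point of the Lissajous curve x = A*sin(3t), y = B*sin(4t) each frame.
    int x = cx + (int)(cx * 0.8 * Math.Sin(3 * t));
    int y = cy + (int)(cy * 0.8 * Math.Sin(4 * t));
    Set(x, y, Color.White);

    base.Update(gameTime);
}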
I'm trying to set the camera to move around the rendered object and I'm following this guide:
MSDN Guide on Rotating and Moving The Camera
Here's the code I've got:
public partial class MainPage : UserControl
{
    float _cameraRotationRadians = 0.0f;
    Vector3 _cameraPosition = new Vector3(5.0f, 5.0f, 5.0f);
    List<TexturedMesh> _meshes;
    BasicEffect _effect;

    public MainPage()
    {
        InitializeComponent();
    }

    private void DrawingSurface_Loaded(object sender, RoutedEventArgs e)
    {
        GraphicsDevice device = GraphicsDeviceManager.Current.GraphicsDevice;
        RenderModeReason reason = GraphicsDeviceManager.Current.RenderModeReason;

        _meshes = StreamHelper.ToMesh(device, "capsule.obj");

        _effect = new BasicEffect(GraphicsDeviceManager.Current.GraphicsDevice);
        _effect.TextureEnabled = false;
        _effect.World = Matrix.Identity;
        _effect.View = Matrix.CreateLookAt(_cameraPosition, new Vector3(0.0f, 0.0f, 0.0f), Vector3.Up);
        _effect.Projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, 1.667f, 1.0f, 100.0f);
    }

    private void DrawingSurface_Draw(object sender, DrawEventArgs e)
    {
        GraphicsDevice device = GraphicsDeviceManager.Current.GraphicsDevice;
        device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, new Microsoft.Xna.Framework.Color(0, 0, 0, 0), 10.0f, 0);
        device.RasterizerState = new RasterizerState()
        {
            CullMode = CullMode.None
        };

        foreach (TexturedMesh mesh in _meshes)
        {
            // Load current vertex buffer
            device.SetVertexBuffer(mesh.VertexBuffer);

            // Apply texture
            if (mesh.Texture != null)
            {
                _effect.Texture = mesh.Texture;
                _effect.TextureEnabled = true;
            }
            else
                _effect.TextureEnabled = false;

            // Draw the mesh
            foreach (EffectPass pass in _effect.CurrentTechnique.Passes)
            {
                pass.Apply();
                device.SamplerStates[0] = SamplerState.LinearClamp;
                device.DrawPrimitives(PrimitiveType.TriangleList, 0, mesh.VertexBuffer.VertexCount / 3);
            }

            // updates camera position
            UpdateCameraPosition();
        }

        e.InvalidateSurface();
    }

    private void UpdateCameraPosition()
    {
        float nearClip = 1.0f;
        float farClip = 2000.0f;

        // set object origin
        Vector3 objectPosition = new Vector3(0, 0, 0);

        // set angle to rotate to
        float cameraDegree = _cameraRotationRadians;
        Matrix rotationMatrix = Matrix.CreateRotationX(MathHelper.ToRadians(cameraDegree));

        // set where camera is looking at
        Vector3 transformedReference = Vector3.Transform(objectPosition, rotationMatrix);
        Vector3 cameraLookAt = _cameraPosition + transformedReference;

        // set view matrix and projection matrix
        _effect.View = Matrix.CreateLookAt(_cameraPosition, objectPosition, Vector3.Up);
        _effect.Projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, 1.667f, nearClip, farClip);
    }

    private void slider_ValueChanged(object sender, RoutedPropertyChangedEventArgs<double> e)
    {
        // slider.Value;
        _cameraRotationRadians += (float)slider.Value;
    }
}
I have a slider control that gives the radians value. When the slider value changes, I then set a new angle for the camera. The UpdateCameraPosition() is called every time the DrawingSurface refreshes after the objects are loaded.
Now when I try to move the slider, the position of the camera never changes. Can someone help and tell me what to fix to make it revolve around the whole group of objects?
You are basically trying to rotate the vector (0,0,0). Rotating a vector that sits at the origin has no effect.
You should instead rotate the camera position.
That guide is how to rotate the camera in place, but it sounds like you are trying to have your camera orbit your object. In that case, both transformedReference and cameraLookAt are not needed.
Simply remove those two lines and rotate the _cameraPosition around the objectPosition to have the camera orbit the object.
//add this line in place of those two lines:
_cameraPosition = Vector3.Transform(_cameraPosition, rotationMatrix);
//now when you plug _cameraPosition into Matrix.CreateLookAt(), it will be fed the new camera position each frame
Technically my snippet causes _cameraPosition to orbit the world origin 0,0,0. It works because your objectPosition happens to be at the world origin so the camera is orbiting objectPosition too.
But if objectPosition ever wanders away from the origin, you can still make the camera orbit the object by translating the camera/object system to the origin, performing the rotation, and then translating them back. It is easier than it sounds:
_cameraPosition = Vector3.Transform(_cameraPosition - objectPosition, rotationMatrix) + objectPosition;
//now the camera will orbit the object no matter where in the world the object is.
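Putting it together, UpdateCameraPosition from the question might look roughly like this with the change applied (a sketch, keeping the question's field names and rotation axis):

private void UpdateCameraPosition()
{
    float nearClip = 1.0f;
    float farClip = 2000.0f;
    Vector3 objectPosition = new Vector3(0, 0, 0);

    Matrix rotationMatrix = Matrix.CreateRotationX(MathHelper.ToRadians(_cameraRotationRadians));

    // Orbit the camera around the object instead of rotating the look-at target.
    // Note: this rotates by the accumulated angle on every draw call, so the camera
    // keeps orbiting for as long as the value is non-zero.
    _cameraPosition = Vector3.Transform(_cameraPosition - objectPosition, rotationMatrix) + objectPosition;

    _effect.View = Matrix.CreateLookAt(_cameraPosition, objectPosition, Vector3.Up);
    _effect.Projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, 1.667f, nearClip, farClip);
}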
I'm generating a bunch of RectangleF objects having different sizes and positions. What would be the best way to fill them with a gradient Brush in GDI+?
In WPF I could create a LinearGradientBrush, set Start and End relative points and WPF would take care of the rest.
In GDI+, however, the gradient brush constructor requires the position in absolute coordinates, which means I have to create a Brush for each of the rectangles, which would be a very complex operation.
Am I missing something or that's indeed the only way?
You can specify a transform at the moment just before the gradient is applied if you would like to declare the brush only once. Note that using transformations will override many of the constructor arguments that can be specified on a LinearGradientBrush.
LinearGradientBrush.Transform Property (System.Drawing.Drawing2D)
To modify the transformation, call the methods on the brush object corresponding to the desired matrix operations. Note that matrix operations are not commutative, so order is important. For your purposes, you'll probably want to do them in this order for each rendition of your rectangles: Scale, Rotate, Offset/Translate.
LinearGradientBrush.ResetTransform Method # MSDN
LinearGradientBrush.ScaleTransform Method (Single, Single, MatrixOrder) # MSDN
LinearGradientBrush.RotateTransform Method (Single, MatrixOrder) # MSDN
LinearGradientBrush.TranslateTransform Method (Single, Single, MatrixOrder) # MSDN
Note that the system-level drawing tools don't actually contain a stock definition for a gradient brush, so if you have performance concerns about making multiple brushes: creating a multitude of gradient brushes shouldn't cost any more than the overhead of GDI+/System.Drawing maintaining the data required to define the gradient and styling. You may be just as well off creating a brush per rectangle as needed, without having to dive into the math required to customize the brush via a transform (a quick sketch of that simpler route follows the link below).
Brush Functions (Windows) # MSDN
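If you do take the brush-per-rectangle route, a minimal sketch looks like this (the rectangles collection and the colors are illustrative; e is the PaintEventArgs of whatever Paint handler you draw from):

// One LinearGradientBrush per rectangle, disposed right after use.
foreach (RectangleF r in rectangles)
{
    using (var brush = new LinearGradientBrush(r, Color.DarkRed, Color.White, 45f))
    {
        e.Graphics.FillRectangle(brush, r);
    }
}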
Here is a code example of the transform approach that you can test in a WinForms app. It paints tiles with a gradient brush using a 45-degree gradient, scaled to the largest dimension of the tile (naively calculated). If you fiddle with the values and transformations, you may find that it isn't worth using the technique of setting a transform for all of your rectangles if you have non-trivial gradient definitions. Otherwise, remember that your transformations are applied at the world level, and in the GDI world the y-axis points down, whereas in the Cartesian math world it runs bottom-to-top. This also causes the angle to be applied clockwise, whereas in trigonometry the angle progresses counter-clockwise with increasing value for a y-axis pointing up.
using System;
using System.Drawing;
using System.Drawing.Drawing2D;
using System.Windows.Forms;

namespace TestMapTransform
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        private void Form1_Paint(object sender, PaintEventArgs e)
        {
            Rectangle rBrush = new Rectangle(0, 0, 1, 1);
            Color startColor = Color.DarkRed;
            Color endColor = Color.White;
            LinearGradientBrush br = new LinearGradientBrush(rBrush, startColor, endColor, LinearGradientMode.Horizontal);

            int wPartitions = 5;
            int hPartitions = 5;
            int w = this.ClientSize.Width;
            w = w - (w % wPartitions) + wPartitions;
            int h = this.ClientSize.Height;
            h = h - (h % hPartitions) + hPartitions;

            for (int hStep = 0; hStep < hPartitions; hStep++)
            {
                int hUnit = h / hPartitions;
                for (int wStep = 0; wStep < wPartitions; wStep++)
                {
                    int wUnit = w / wPartitions;
                    Rectangle rTile = new Rectangle(wUnit * wStep, hUnit * hStep, wUnit, hUnit);
                    if (e.ClipRectangle.IntersectsWith(rTile))
                    {
                        int maxUnit = wUnit > hUnit ? wUnit : hUnit;
                        br.ResetTransform();
                        br.ScaleTransform((float)maxUnit * (float)Math.Sqrt(2d), (float)maxUnit * (float)Math.Sqrt(2d), MatrixOrder.Append);
                        br.RotateTransform(45f, MatrixOrder.Append);
                        br.TranslateTransform(wUnit * wStep, hUnit * hStep, MatrixOrder.Append);
                        e.Graphics.FillRectangle(br, rTile);
                        br.ResetTransform();
                    }
                }
            }
        }

        private void Form1_Resize(object sender, EventArgs e)
        {
            this.Invalidate();
        }
    }
}
I recommend creating a generic paint handler like this:
public void Paint_rectangle(object sender, PaintEventArgs e)
{
    RectangleF r = new RectangleF(0, 0, e.ClipRectangle.Width, e.ClipRectangle.Height);
    if (r.Width > 0 && r.Height > 0)
    {
        Color c1 = Color.LightBlue;
        Color c2 = Color.White;
        Color c3 = Color.LightBlue;
        LinearGradientBrush br = new LinearGradientBrush(r, c1, c3, 90, true);
        ColorBlend cb = new ColorBlend();
        cb.Positions = new[] { 0, (float)0.5, 1 };
        cb.Colors = new[] { c1, c2, c3 };
        br.InterpolationColors = cb;
        // paint
        e.Graphics.FillRectangle(br, r);
    }
}
then, for every control you want painted this way, just hook up its Paint event:
yourControl.Paint += new PaintEventHandler(Paint_rectangle);
If the gradient colors are all the same, you can make that method shorter. Hope that helps.
I was trying out different strategies for drawing a graph from the left edge of a control to the right edge. Until now we were using a Canvas with a Polyline, which performs OK but could still use some improvement.
When I tried out DrawingContext.DrawLine I experienced incredibly bad performance, and I can't figure out why. This is the most condensed code I can come up with that demonstrates the problem:
using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;

public class TestControl : Control {
    static Pen pen = new Pen(Brushes.Gray, 1.0);
    static Random rnd = new Random();

    protected override void OnRender(DrawingContext drawingContext) {
        var previousPoint = new Point(0, 0);
        for (int x = 4; x < this.ActualWidth; x += 4) {
            var newPoint = new Point(x, rnd.Next((int)this.ActualHeight));
            drawingContext.DrawLine(pen, previousPoint, newPoint);
            previousPoint = newPoint;
        }
    }
}
And MainWindow.xaml just contains this:
<StackPanel>
    <l:TestControl Height="16"/>
    <!-- copy+paste the above line a few times -->
</StackPanel>
Now resize the window: depending on the number of TestControls in the StackPanel I experience a noticeable delay (10 controls) or a 30-second total standstill (100 controls) during which I can't even hit the "Stop Debugger" button in VS...
I'm quite confused about this, obviously I am doing something wrong but since the code is so simple I don't see what that could be...
I am using .NET 4, in case it matters.
You can gain performance by freezing the pen.
static TestControl()
{
    pen.Freeze();
}
The most efficient way to draw a graph in WPF is to use DrawingVisual.
Charles Petzold wrote an excellent article explaining how to do it in MSDN Magazine:
Foundations: Writing More Efficient ItemsControls
The techniques work for displaying thousands of data points.
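The core of that approach is hosting retained DrawingVisuals in a custom element instead of redrawing everything in OnRender. A minimal sketch of the pattern (not the article's exact code; the class and member names are illustrative) looks like this:

using System.Windows;
using System.Windows.Media;

public class GraphHost : FrameworkElement
{
    private readonly DrawingVisual _visual = new DrawingVisual();

    public GraphHost()
    {
        // Draw once into the retained visual; WPF reuses it without re-running any OnRender code.
        using (DrawingContext dc = _visual.RenderOpen())
        {
            var pen = new Pen(Brushes.Gray, 1.0);
            pen.Freeze();
            dc.DrawLine(pen, new Point(0, 0), new Point(100, 50));
        }
        AddVisualChild(_visual);
    }

    protected override int VisualChildrenCount { get { return 1; } }

    protected override Visual GetVisualChild(int index) { return _visual; }
}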
Ok, playing around with it a bit more, I found that freezing the pen had a huge impact. Now I create the pen in the constructor like this:
public TestControl() {
    if (pen == null) {
        pen = new Pen(Brushes.Gray, 1.0);
        pen.Freeze();
    }
}
The performance is now as I would expect it to be. I knew it had to be something simple...
Drawing in WPF becomes extremely slow if you use a pen with a dash style other than Solid (the default). This affects every draw method of DrawingContext that accepts a pen (DrawLine, DrawGeometry, etc.)
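For illustration, a sketch of the two cases:

// Solid, frozen pen: the fast path.
var solidPen = new Pen(Brushes.Gray, 1.0);
solidPen.Freeze();

// Dashed pen: every DrawLine/DrawGeometry call that uses it hits the slow path described above.
var dashedPen = new Pen(Brushes.Gray, 1.0) { DashStyle = DashStyles.Dash };
dashedPen.Freeze();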
This question is really old, but I found a way that improved the execution of my code, which used DrawingContext.DrawLine as well.
This was my code to draw a curve one hour ago:
DrawingVisual dv = new DrawingVisual();
DrawingContext dc = dv.RenderOpen();

foreach (SerieVM serieVm in _curve.Series) {
    Pen seriePen = new Pen(serieVm.Stroke, 1.0);
    Point lastDrawnPoint = new Point();
    bool firstPoint = true;

    foreach (CurveValuePointVM pointVm in serieVm.Points.Cast<CurveValuePointVM>()) {
        if (pointVm.XValue < xMin || pointVm.XValue > xMax) continue;

        double x = basePoint.X + (pointVm.XValue - xMin) * xSizePerValue;
        double y = basePoint.Y - (pointVm.Value - yMin) * ySizePerValue;
        Point coord = new Point(x, y);

        if (firstPoint) {
            firstPoint = false;
        } else {
            dc.DrawLine(seriePen, lastDrawnPoint, coord);
        }
        lastDrawnPoint = coord;
    }
}

dc.Close();
Here is the code now:
DrawingVisual dv = new DrawingVisual();
DrawingContext dc = dv.RenderOpen();

foreach (SerieVM serieVm in _curve.Series) {
    StreamGeometry g = new StreamGeometry();
    StreamGeometryContext sgc = g.Open();
    Pen seriePen = new Pen(serieVm.Stroke, 1.0);
    bool firstPoint = true;

    foreach (CurveValuePointVM pointVm in serieVm.Points.Cast<CurveValuePointVM>()) {
        if (pointVm.XValue < xMin || pointVm.XValue > xMax) continue;

        double x = basePoint.X + (pointVm.XValue - xMin) * xSizePerValue;
        double y = basePoint.Y - (pointVm.Value - yMin) * ySizePerValue;
        Point coord = new Point(x, y);

        if (firstPoint) {
            firstPoint = false;
            sgc.BeginFigure(coord, false, false);
        } else {
            sgc.LineTo(coord, true, false);
        }
    }

    sgc.Close();
    dc.DrawGeometry(null, seriePen, g);
}

dc.Close();
The old code would take ~140 ms to plot two curves of 3000 points; the new one takes about 5 ms. Using StreamGeometry seems to be much more efficient than DrawingContext.DrawLine.
Edit: I'm using .NET Framework 3.5.
My guess is that the call to rnd.Next(...) is causing a lot of overhead each render. You can test it by providing a constant and then compare the speeds.
Do you really need to generate new coordinates each render?
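For instance, a sketch of caching the coordinates (reusing the pen and rnd fields from the question's TestControl) so they are only regenerated when the control is resized:

private Point[] _points;

protected override void OnRenderSizeChanged(SizeChangedInfo sizeInfo)
{
    base.OnRenderSizeChanged(sizeInfo);

    // Regenerate the data only when the control is resized, not on every render.
    int count = Math.Max(1, (int)(sizeInfo.NewSize.Width / 4));
    _points = new Point[count];
    for (int i = 0; i < count; i++)
        _points[i] = new Point(i * 4, rnd.Next(Math.Max(1, (int)sizeInfo.NewSize.Height)));

    InvalidateVisual();
}

protected override void OnRender(DrawingContext drawingContext)
{
    if (_points == null) return;
    for (int i = 1; i < _points.Length; i++)
        drawingContext.DrawLine(pen, _points[i - 1], _points[i]);
}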
Using OpenCV on Linux, I would like to create a fun-house mirror effect (short and squat, tall and thin) using a live webcam. My daughter loves those things and I would like to build one with a camera. I am not quite sure which transforms are necessary for these effects. Any help would be appreciated. I have much of the framework running, live video playing and such, just not the transforms.
Thanks.
I think you need to use a 'radial' transform and 'pincushion', which is the inverse radial transform.
To break the symmetry of the transforms you can stretch the image before and after. Suppose your image is 300x300 pixels:
1. Stretch it to 300x600 or 600x300 using cvResize().
2. Apply the transform: radial, pincushion or sinusoidal.
3. Stretch back to 300x300.
I have never used radial or sinusoidal transforms in OpenCV so I don't have a piece of code to attach, but you can use cvUndistort2() and see if it is OK.
Create a window with trackbars in the range 0..100. Each trackbar controls a parameter of the distortion:
static IplImage* srcImage;
static IplImage* dstImage;
static double _camera[9];
static double _dist4Coeff[4];   // Distortion coefficients
static int _r  = 50;            // Radial coefficient. 50 in range 0..100
static int _tX = 50;            // Tangential coefficient in X direction
static int _tY = 50;            // Tangential coefficient in Y direction
static int allRange = 50;

// Open window
cvNamedWindow(winName, 1);

// Add track bars.
cvShowImage(winName, srcImage);
cvCreateTrackbar("Radial", winName, &_r,  2*allRange, callBackFun);
cvCreateTrackbar("Tang X", winName, &_tX, 2*allRange, callBackFun);
cvCreateTrackbar("Tang Y", winName, &_tY, 2*allRange, callBackFun);
callBackFun(0);

// The distortion callback
void callBackFun(int arg){
    CvMat intrCamParamsMat = cvMat( 3, 3, CV_64F, _camera );
    CvMat dist4Coeff       = cvMat( 1, 4, CV_64F, _dist4Coeff );

    // Build distortion coefficients matrix.
    dist4Coeff.data.db[0] = (_r-allRange*1.0)/allRange*1.0;
    dist4Coeff.data.db[1] = (_r-allRange*1.0)/allRange*1.0;
    dist4Coeff.data.db[2] = (_tY-allRange*1.0)/allRange*1.0;
    dist4Coeff.data.db[3] = (_tX-allRange*1.0)/allRange*1.0;

    // Build intrinsic camera parameters matrix.
    intrCamParamsMat.data.db[0] = 587.1769751432448200/2.0;
    intrCamParamsMat.data.db[1] = 0.;
    intrCamParamsMat.data.db[2] = 319.5000000000000000/2.0+0;
    intrCamParamsMat.data.db[3] = 0.;
    intrCamParamsMat.data.db[4] = 591.3189722549362800/2.0;
    intrCamParamsMat.data.db[5] = 239.5000000000000000/2.0+0;
    intrCamParamsMat.data.db[6] = 0.;
    intrCamParamsMat.data.db[7] = 0.;
    intrCamParamsMat.data.db[8] = 1.;

    // Apply transformation
    cvUndistort2( srcImage, dstImage, &intrCamParamsMat, &dist4Coeff );
    cvShowImage( winName, dstImage );
}
I have a WPF BitmapImage which I loaded from a .JPG file, as follows:
this.m_image1.Source = new BitmapImage(new Uri(path));
I want to query as to what the colour is at specific points. For example, what is the RGB value at pixel (65,32)?
How do I go about this? I was taking this approach:
ImageSource ims = m_image1.Source;
BitmapImage bitmapImage = (BitmapImage)ims;
int height = bitmapImage.PixelHeight;
int width = bitmapImage.PixelWidth;
int nStride = (bitmapImage.PixelWidth * bitmapImage.Format.BitsPerPixel + 7) / 8;
byte[] pixelByteArray = new byte[bitmapImage.PixelHeight * nStride];
bitmapImage.CopyPixels(pixelByteArray, nStride, 0);
Though I will confess there's a bit of monkey-see, monkey do going on with this code.
Anyway, is there a straightforward way to process this array of bytes to convert to RGB values?
Here is how I would manipulate pixels in C# using multidimensional arrays:
[StructLayout(LayoutKind.Sequential)]
public struct PixelColor
{
    public byte Blue;
    public byte Green;
    public byte Red;
    public byte Alpha;
}

public PixelColor[,] GetPixels(BitmapSource source)
{
    if (source.Format != PixelFormats.Bgra32)
        source = new FormatConvertedBitmap(source, PixelFormats.Bgra32, null, 0);

    int width = source.PixelWidth;
    int height = source.PixelHeight;
    PixelColor[,] result = new PixelColor[width, height];

    source.CopyPixels(result, width * 4, 0);
    return result;
}
usage:
var pixels = GetPixels(image);
if(pixels[7, 3].Red > 4)
{
...
}
If you want to update pixels, very similar code works except you will create a WriteableBitmap, and use this:
public void PutPixels(WriteableBitmap bitmap, PixelColor[,] pixels, int x, int y)
{
    int width = pixels.GetLength(0);
    int height = pixels.GetLength(1);
    bitmap.WritePixels(new Int32Rect(0, 0, width, height), pixels, width * 4, x, y);
}
thusly:
var pixels = new PixelColor[4, 3];
pixels[2,2] = new PixelColor { Red=128, Blue=0, Green=255, Alpha=255 };
PutPixels(bitmap, pixels, 7, 7);
Note that this code converts bitmaps to Bgra32 if they arrive in a different format. This is generally fast, but in some cases may be a performance bottleneck, in which case this technique would be modified to match the underlying input format more closely.
Update
Since BitmapSource.CopyPixels doesn't accept a two-dimensional array it is necessary to convert the array between one-dimensional and two-dimensional. The following extension method should do the trick:
public static class BitmapSourceHelper
{
#if UNSAFE
    public unsafe static void CopyPixels(this BitmapSource source, PixelColor[,] pixels, int stride, int offset)
    {
        fixed (PixelColor* buffer = &pixels[0, 0])
            source.CopyPixels(
                new Int32Rect(0, 0, source.PixelWidth, source.PixelHeight),
                (IntPtr)(buffer + offset),
                pixels.GetLength(0) * pixels.GetLength(1) * sizeof(PixelColor),
                stride);
    }
#else
    public static void CopyPixels(this BitmapSource source, PixelColor[,] pixels, int stride, int offset)
    {
        var height = source.PixelHeight;
        var width = source.PixelWidth;
        var pixelBytes = new byte[height * width * 4];
        source.CopyPixels(pixelBytes, stride, 0);

        int y0 = offset / width;
        int x0 = offset - width * y0;
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
                pixels[x + x0, y + y0] = new PixelColor
                {
                    Blue  = pixelBytes[(y * width + x) * 4 + 0],
                    Green = pixelBytes[(y * width + x) * 4 + 1],
                    Red   = pixelBytes[(y * width + x) * 4 + 2],
                    Alpha = pixelBytes[(y * width + x) * 4 + 3],
                };
    }
#endif
}
There are two implementations here: The first one is fast but uses unsafe code to get an IntPtr to an array (must compile with /unsafe option). The second one is slower but does not require unsafe code. I use the unsafe version in my code.
WritePixels accepts two-dimensional arrays, so no extension method is required.
Edit: As Jerry pointed out in the comments, because of the memory layout the two-dimensional array has the vertical coordinate first; in other words it must be dimensioned as Pixels[Height, Width], not Pixels[Width, Height], and addressed as Pixels[y, x].
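A quick sketch applying that correction together with the extension method above:

// Allocate [height, width] and index as [y, x].
PixelColor[,] pixels = new PixelColor[source.PixelHeight, source.PixelWidth];
source.CopyPixels(pixels, source.PixelWidth * 4, 0);   // the CopyPixels extension from the update above
byte red = pixels[3, 7].Red;                           // row (y) = 3, column (x) = 7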
I'd like to add to Ray's answer that you can also declare the PixelColor struct as a union:
[StructLayout(LayoutKind.Explicit)]
public struct PixelColor
{
    // 32 bit BGRA
    [FieldOffset(0)] public UInt32 ColorBGRA;

    // 8 bit components
    [FieldOffset(0)] public byte Blue;
    [FieldOffset(1)] public byte Green;
    [FieldOffset(2)] public byte Red;
    [FieldOffset(3)] public byte Alpha;
}
That way you'll also have access to the UInt32 BGRA value (for fast pixel access or copying), besides the individual byte components.
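For example (on the little-endian machines WPF runs on, the packed value reads as 0xAARRGGBB):

// Sketch: set the whole pixel via the packed value, read individual channels without shifts or masks.
var p = new PixelColor { ColorBGRA = 0xFF336699 };   // Alpha = 0xFF, Red = 0x33, Green = 0x66, Blue = 0x99
byte green = p.Green;                                // 0x66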
The interpretation of the resulting byte array depends on the pixel format of the source bitmap, but in the simplest case of a 32-bit ARGB image, each pixel is composed of four bytes in the byte array. The first pixel would be interpreted like this:
alpha = pixelByteArray[0];
red = pixelByteArray[1];
green = pixelByteArray[2];
blue = pixelByteArray[3];
To process each pixel in the image, you would probably want to create nested loops to walk the rows and the columns, incrementing an index variable by the number of bytes in each pixel.
Some bitmap types combine multiple pixels into a single byte. For instance, a monochrome image packs eight pixels into each byte. If you need to deal with images other than 24/32 bits per pixel (the simple ones), then I would suggest finding a good book that covers the underlying binary structure of bitmaps.
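For the common 32-bits-per-pixel case, the nested loop described above boils down to this index math (a sketch reusing width, height, nStride and pixelByteArray from the question):

for (int y = 0; y < height; y++)
{
    for (int x = 0; x < width; x++)
    {
        int index = y * nStride + x * 4;          // 4 bytes per pixel, rows are nStride bytes apart
        byte c0 = pixelByteArray[index];
        byte c1 = pixelByteArray[index + 1];
        byte c2 = pixelByteArray[index + 2];
        byte c3 = pixelByteArray[index + 3];      // channel order depends on the pixel format
        // ... use the channel values ...
    }
}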
I'd like to improve upon Ray's answer - not enough rep to comment. >:( This version has the best of both the safe/managed approach and the efficiency of the unsafe version. Also, I've done away with passing in the stride, as the .NET documentation for CopyPixels says it's the stride of the bitmap, not of the buffer. It's misleading, and it can be computed inside the function anyway. Since the PixelColor array must have the same stride as the bitmap (to be able to do it as a single copy call), it makes sense to just make a new array in the function as well. Easy as pie.
public static PixelColor[,] CopyPixels(this BitmapSource source)
{
    if (source.Format != PixelFormats.Bgra32)
        source = new FormatConvertedBitmap(source, PixelFormats.Bgra32, null, 0);

    PixelColor[,] pixels = new PixelColor[source.PixelWidth, source.PixelHeight];
    int stride = source.PixelWidth * ((source.Format.BitsPerPixel + 7) / 8);

    GCHandle pinnedPixels = GCHandle.Alloc(pixels, GCHandleType.Pinned);
    source.CopyPixels(
        new Int32Rect(0, 0, source.PixelWidth, source.PixelHeight),
        pinnedPixels.AddrOfPinnedObject(),
        pixels.GetLength(0) * pixels.GetLength(1) * 4,
        stride);
    pinnedPixels.Free();

    return pixels;
}
I took all the examples and created a slightly better one, and tested it too (the only flaw was that magic 96 as the DPI, which really bugged me).
I also compared this WPF tactic against:
GDI, using Graphics (System.Drawing)
Interop, directly invoking GetPixel from gdi32.dll
To my surprise, this works about 10x faster than GDI and around 15x faster than Interop. So if you're using WPF, it's much better to get your pixel color this way.
public static class GraphicsHelpers
{
    public static readonly float DpiX;
    public static readonly float DpiY;

    static GraphicsHelpers()
    {
        using (var g = Graphics.FromHwnd(IntPtr.Zero))
        {
            DpiX = g.DpiX;
            DpiY = g.DpiY;
        }
    }

    public static Color WpfGetPixel(double x, double y, FrameworkElement AssociatedObject)
    {
        var renderTargetBitmap = new RenderTargetBitmap(
            (int)AssociatedObject.ActualWidth,
            (int)AssociatedObject.ActualHeight,
            DpiX, DpiY, PixelFormats.Default);
        renderTargetBitmap.Render(AssociatedObject);

        if (x < renderTargetBitmap.PixelWidth && y < renderTargetBitmap.PixelHeight)
        {
            var croppedBitmap = new CroppedBitmap(
                renderTargetBitmap, new Int32Rect((int)x, (int)y, 1, 1));
            var pixels = new byte[4];
            croppedBitmap.CopyPixels(pixels, 4, 0);
            return Color.FromArgb(pixels[3], pixels[2], pixels[1], pixels[0]);
        }
        return Colors.Transparent;
    }
}
A little remark:
If you are trying to use this code (Edit: provided by Ray Burns) but get an error about the array's rank, try editing the extension methods as follows:
public static void CopyPixels(this BitmapSource source, PixelColor[,] pixels, int stride, int offset, bool dummy)
and then call the CopyPixels method like this:
source.CopyPixels(result, width * 4, 0, false);
The problem is that when the extension method's signature doesn't differ from the original, the original one is called instead. I guess this is because PixelColor[,] matches Array as well.
I hope this helps you if you got the same problem.
If you want just one pixel's color:
using System.Windows.Media;
using System.Windows.Media.Imaging;
...

public static Color GetPixelColor(BitmapSource source, int x, int y)
{
    Color c = Colors.White;
    if (source != null)
    {
        try
        {
            CroppedBitmap cb = new CroppedBitmap(source, new Int32Rect(x, y, 1, 1));
            var pixels = new byte[4];
            cb.CopyPixels(pixels, 4, 0);
            c = Color.FromRgb(pixels[2], pixels[1], pixels[0]);
        }
        catch (Exception) { }
    }
    return c;
}
Much simpler. There's no need to copy the data around, you can get it directly. But this comes at a price: pointers and unsafe. In a specific situation, decide whether it's worth the speed and ease for you (but you can simply put the image manipulation into its own separate unsafe class and the rest of the program won't be affected).
var bitmap = new WriteableBitmap(image);
bitmap.Lock();                                  // lock before touching the back buffer

Pixel* data = (Pixel*)bitmap.BackBuffer;        // requires an unsafe context
int stride = bitmap.BackBufferStride / 4;       // stride in pixels (4 bytes per pixel)

// getting a pixel value
Pixel pixel = *(data + y * stride + x);

bitmap.Unlock();
where
[StructLayout(LayoutKind.Explicit)]
protected struct Pixel {
    [FieldOffset(0)]
    public byte B;
    [FieldOffset(1)]
    public byte G;
    [FieldOffset(2)]
    public byte R;
    [FieldOffset(3)]
    public byte A;
}
The error checking (whether the format is indeed BGRA and handling the case if not) will be left to the reader.
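For completeness, that check might look something like this (a sketch; here the fallback is simply to convert the source first):

// Make sure the back buffer really is laid out as BGRA before casting to Pixel*.
BitmapSource source = image;
if (source.Format != PixelFormats.Bgra32 && source.Format != PixelFormats.Pbgra32)
    source = new FormatConvertedBitmap(source, PixelFormats.Bgra32, null, 0);
var bitmap = new WriteableBitmap(source);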
You can get the color components into a byte array. First copy the pixels as 32-bit integers to an array, then convert that to a byte array four times as long:
int stride = source.PixelWidth * 4;   // bytes per row for a 32 bpp source
int[] pixelArray = new int[source.PixelWidth * source.PixelHeight];
source.CopyPixels(pixelArray, stride, 0);

byte[] colorArray = new byte[pixelArray.Length * 4];
for (int i = 0; i < colorArray.Length; i += 4)
{
    int pixel = pixelArray[i / 4];
    colorArray[i]     = (byte)(pixel >> 24);  // alpha
    colorArray[i + 1] = (byte)(pixel >> 16);  // red
    colorArray[i + 2] = (byte)(pixel >> 8);   // green
    colorArray[i + 3] = (byte)(pixel);        // blue
}
// colorArray is four times as long as the number of pixels,
// in the order [(ALPHA, RED, GREEN, BLUE), (ALPHA, RED, ...]
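To read one particular pixel, say the (65, 32) from the question, out of colorArray afterwards (a sketch, assuming the A, R, G, B ordering produced above):

int x = 65, y = 32;
int i = (y * source.PixelWidth + x) * 4;
byte alpha = colorArray[i];
byte red   = colorArray[i + 1];
byte green = colorArray[i + 2];
byte blue  = colorArray[i + 3];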