Where did PixelFormats and WriteableBitmap.Lock go in Silverlight 3?

A few months ago I built some online samples like this one from Jeff Prosise that use the WriteableBitmap class in Silverlight.
Revisiting them today with the latest Silverlight 3 installer (3.0.40624.0), the API seems to have changed.
I figured out some of the changes. For example, the WriteableBitmap array accessor has disappeared, but I found it in the new Pixels property, so instead of writing:
bmp[x]
I can write
bmp.Pixels[x]
Are there similar simple replacements for these calls, or has the use pattern itself changed?
bmp = new WriteableBitmap(width, height, PixelFormats.Bgr32);
bmp.Lock();
bmp.Unlock();
Can anybody point me to a working example using the updated API?

Another important detail about switching to the new WriteableBitmap is given in this answer ... because the pixel format is now always pbgra32, you must set an alpha value for each pixel, otherwise you just get an all-white picture. In other words, code that formerly generated pixel values like this:
byte[] components = new byte[4];
components[0] = (byte)(blue % 256); // blue
components[1] = (byte)(grn % 256); // green
components[2] = (byte)(red % 256); // red
components[3] = 0; // unused
should be changed to read:
byte[] components = new byte[4];
components[0] = (byte)(blue % 256); // blue
components[1] = (byte)(grn % 256); // green
components[2] = (byte)(red % 256); // red
components[3] = 255; // alpha
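Putting those pieces together, here is a minimal sketch of the updated usage pattern as I understand it (width, height, and myImage are stand-ins; the constructor no longer takes a pixel format, and Invalidate() replaces the Lock/Unlock cycle):
var bmp = new WriteableBitmap(width, height); // format is implicitly premultiplied pBGRA32
for (int i = 0; i < bmp.Pixels.Length; i++)
{
    byte blue = 0x40, grn = 0x80, red = 0xC0; // example color
    // Pack alpha into the high byte, then red, green, blue.
    // Alpha must be 255, or the bitmap renders all white.
    bmp.Pixels[i] = (255 << 24) | (red << 16) | (grn << 8) | blue;
}
bmp.Invalidate(); // replaces the old Lock()/Unlock() cycle
myImage.Source = bmp;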

What happens if you don't use Lock and Unlock and just use the WriteableBitmap(int, int) constructor? Do things break?
It would seem that this API changed between the SL3 Beta and the release. See the Breaking Changes Document Errata (Silverlight 3).

Related

Implementing stroke drawing similar to InkCanvas

My problem effectively boils down to accurate mouse movement detection.
I need to create my own implementation of an InkCanvas and have succeeded for the most part, except for drawing strokes accurately.
void OnMouseMove(object sender, MouseEventArgs e)
{
    var position = e.GetPosition(this);
    if (!Rect.Contains(position))
        return;
    var ratio = new Point(Width / PixelDisplay.Size.X, Height / PixelDisplay.Size.Y);
    var intPosition = new IntVector(Math2.FloorToInt(position.X / ratio.X), Math2.FloorToInt(position.Y / ratio.Y));
    DrawBrush.Draw(intPosition, PixelDisplay);
    UpdateStroke(intPosition); // calls CaptureMouse
}
This works. The Bitmap (PixelDisplay) is updated and all is well. However, any kind of quick mouse movement causes large skips in the drawing. I've narrowed down the problem to e.GetPosition(this), which blocks the event long enough to be inaccurate.
There's this question, which is long beyond revival, and its answers are unclear or simply don't make a noticeable difference.
After some more testing, the stated solution and similar ideas fail specifically because of e.GetPosition.
I know InkCanvas uses similar methods after looking through the source; detect the device, if it's a mouse, get its position and capture. I see no reason for the same process to not work identically here.
I ended up being able to partially solve this.
var position = e.GetPosition(this);
if (!Rect.Contains(position))
    return;
if (DrawBrush == null)
    return;
var ratio = new Point(Width / PixelDisplay.Size.X, Height / PixelDisplay.Size.Y);
var intPosition = new IntVector(Math2.FloorToInt(position.X / ratio.X), Math2.FloorToInt(position.Y / ratio.Y));
// Convert to pixel coordinates based on the control size
var lastPoint = CurrentStroke?.Points.LastOrDefault(new IntVector(-1, -1));
// Uses System.Linq to grab the last point of the current stroke, if it exists
PixelDisplay.Lock();
// My special locking mechanism, effectively wraps Bitmap.Lock
if (lastPoint.HasValue && lastPoint.Value != new IntVector(-1, -1)) // Are we in the middle of a stroke? (HasValue guards against a null CurrentStroke)
{
    var alphaAdd = 1d / new IntVector(intPosition.X - lastPoint.Value.X, intPosition.Y - lastPoint.Value.Y).Magnitude;
    // For interpolation, step by 1 / distance (magnitude) between the two points.
    // Magnitude formula: Math.Sqrt(Math.Pow(X, 2) + Math.Pow(Y, 2));
    var alpha = 0d;
    var xDiff = intPosition.X - lastPoint.Value.X;
    var yDiff = intPosition.Y - lastPoint.Value.Y;
    while (alpha < 1d)
    {
        alpha += alphaAdd;
        var adjusted = new IntVector(
            Math2.FloorToInt((position.X + (xDiff * alpha)) / ratio.X),
            Math2.FloorToInt((position.Y + (yDiff * alpha)) / ratio.Y));
        // Inch our way towards the current intPosition
        DrawBrush.Draw(adjusted, PixelDisplay); // Draw to the bitmap
        UpdateStroke(intPosition);
    }
}
DrawBrush.Draw(intPosition, PixelDisplay); // Draw the original point
UpdateStroke(intPosition);
PixelDisplay.Unlock();
This implementation interpolates between the last point and the current one to fill in any gaps. It's not perfect when using a very small brush size, for example, but it is a solution nonetheless.
Some remarks
IntVector is my own lazily implemented Vector2, just using integers instead.
Math2 is a helper class; FloorToInt is short for (int)Math.Floor(...).
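For reference, here is a minimal sketch of what those two helpers might look like; this is my reconstruction from the description above, not the original source:
struct IntVector
{
    public int X, Y;
    public IntVector(int x, int y) { X = x; Y = y; }
    // Magnitude as described above: Math.Sqrt(Math.Pow(X, 2) + Math.Pow(Y, 2))
    public double Magnitude => Math.Sqrt(X * X + Y * Y);
    public static bool operator ==(IntVector a, IntVector b) => a.X == b.X && a.Y == b.Y;
    public static bool operator !=(IntVector a, IntVector b) => !(a == b);
    public override bool Equals(object obj) => obj is IntVector v && this == v;
    public override int GetHashCode() => (X, Y).GetHashCode();
}

static class Math2
{
    public static int FloorToInt(double value) => (int)Math.Floor(value);
}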

Saving Kinect v2 frames in an array

I am trying to develop software based on Kinect v2 and I need to keep the captured frames in an array. I have a problem that I have no idea how to solve, as follows.
The captured frames are processed by my processing class, and the processed WriteableBitmap is set as the source of the image box in my UI window. This works perfectly and I get real-time frames in my UI.
for example:
/// Color
_ProcessingInstance.ProcessColor(colorFrame);
ImageBoxRGB.Source = _ProcessingInstance.colorBitmap;
But when I want to assign this to an element of an array, all of the elements in the array end up identical to the first frame! I should mention that this assignment happens in the same frame-reading event as the code above.
The code:
ColorFrames_Array[CapturingFrameCounter] = _ProcessingInstance.colorBitmap;
The equality check in the Immediate Window:
ColorFrames_Array[0].Equals(ColorFrames_Array[1])
true
ColorFrames_Array[0].Equals(ColorFrames_Array[2])
true
Please give me some hints about this problem. Any ideas?
Thanks, Yar
You are right: when I create a new instance, frames are saved correctly.
But my code was based on the Microsoft example, and the problem is that creating new instances causes a memory leak, because WriteableBitmap is not disposable.
A similar problem is discussed at the following link, where the frames are frozen to the first frame due to the intrinsic properties of WriteableBitmap:
http://www.wintellect.com/devcenter/jprosise/silverlight-s-big-image-problem-and-what-you-can-do-about-it
Therefore I used a strategy similar to the above solution and tried to get a copy instead of the original bitmap frame. In this scenario, I create a new WriteableBitmap for each element of ColorFrames_Array[] at the initialization step:
ColorFrames_Array = new WriteableBitmap[MaximumFramesNumbers_Capturing];
for (int i = 0; i < MaximumFramesNumbers_Capturing; ++i)
{
    ColorFrames_Array[i] = new WriteableBitmap(color_width, color_height, 96.0, 96.0, PixelFormats.Bgr32, null);
}
And finally, I use the Clone method to copy the bitmap frames to the array elements:
ColorFrames_ArrayBuffer[CapturingFrameCounter] = _ProcessingInstance.colorBitmap.Clone();
While the above solution works, it has a huge memory leak!
Therefore I instead use plain arrays and the CopyPixels method of WriteableBitmap to copy the pixels of each frame into an array and hold it, while the corresponding WriteableBitmap is released correctly without leaking:
public Array[] ColorPixels_Array;

ColorPixels_Array = new Array[MaximumFramesNumbers_Capturing]; // allocate the outer array first
for (int i = 0; i < MaximumFramesNumbers_Capturing; ++i)
{
    ColorPixels_Array[i] = new int[color_Width * color_Height];
}

colorBitmap.CopyPixels(ColorPixels_Array[Counter_CapturingFrame], color_Width * 4, 0);
Finally, when we want to save the arrays of pixels, we need to convert them to new WriteableBitmap instances and write them to disk:
wb = new WriteableBitmap(color_Width, color_Height, 96.0, 96.0, PixelFormats.Bgr32, null);
wb.WritePixels(new Int32Rect(0, 0, color_Width, color_Height),
    Ar_Px,
    color_Width * 4, 0);
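To actually write a frame to disk, one option is a standard WPF bitmap encoder. A minimal sketch (the PNG format and file name are my choices, not from the original code):
// Encode the reconstructed WriteableBitmap as a PNG and save it.
using (var stream = new FileStream("frame_0001.png", FileMode.Create))
{
    var encoder = new PngBitmapEncoder();
    encoder.Frames.Add(BitmapFrame.Create(wb));
    encoder.Save(stream);
}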

SharpDX.WPF Increasing memory usage during rendering

I have started working with DirectX in a WPF app. My first step was to use a simple library:
SharpDX.WPF. Based on the samples, I've implemented a WPF control drawing a simple line. SharpDX.WPF uses D3DImage to render images in WPF.
Unfortunately, the application's memory usage keeps increasing the whole time.
I implemented a class TestControlRenderer : D3D10.
The vertex buffer is initialized like this:
var sizeInBytes = dataLength * sizeof(int) * 3;
var bufferDescription = new BufferDescription(
    sizeInBytes,
    ResourceUsage.Dynamic,
    BindFlags.VertexBuffer,
    CpuAccessFlags.Write,
    ResourceOptionFlags.None);
using (var stream = new DataStream(sizeInBytes, true, true))
{
    stream.Position = 0;
    _graphDataVertexBuffer = new SharpDX.Direct3D10.Buffer(Device, stream, bufferDescription);
}
Device.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(_graphDataVertexBuffer, sizeof(int) * 3, 0));
Device.InputAssembler.PrimitiveTopology = PrimitiveTopology.LineStrip;
Then the constant buffer with the parameters used in the shader:
_controlInfoConstantBuffer = new ConstantBuffer<ControlParamsShaderData>(Device);
Device.VertexShader.SetConstantBuffer(0, _controlInfoConstantBuffer.Buffer);
To initialize the animation, the Reset method was overridden like this:
base.Reset(args);
if (args.RenderSize.Width == 0) return;
_drawArgs = args;
InitVertexBuffer(dataLength);
_controlInfoConstantBuffer.Value = new ControlParamsShaderData
{
    SamplesInControl = dataLength,
    MinSignalDataY = -1500,
    MaxSignalDataY = 1500
};
Device.VertexShader.SetConstantBuffer(0, _controlInfoConstantBuffer.Buffer);
The last step is the RenderScene method:
public override void RenderScene(DrawEventArgs args)
{
    if (args.RenderSize.Width == 0) return;
    Device.ClearRenderTargetView(RenderTargetView, Color.Transparent);
    using (var stream = _graphDataVertexBuffer.Map(MapMode.WriteDiscard, SharpDX.Direct3D10.MapFlags.None))
    {
        for (int i = 0; i < Data.Length; i++)
        {
            stream.Write(new Vector3(i, Data[i], 0));
        }
    }
    _graphDataVertexBuffer.Unmap();
    Device.Draw(Data.Length, 0);
}
Rendering is controlled by a DispatcherTimer whose tick handler updates the array of point coordinates and then invokes the Render() method.
My question is simply: is that a memory leak, or is something created on each render iteration?
I don't change the backbuffer or create other objects. I only change the Data array, upload it to the GPU, and the shaders process it for display.
My goal is to display about 30 WPF controls with DirectX on one screen, each with a simple but real-time animation. Is that possible this way?
Most likely you are leaking resources. You can see this by setting the static configuration property
SharpDX.Configuration.EnableObjectTracking = true;
then calling
SharpDX.Diagnostics.ObjectTracker.ReportActiveObjects()
at various points in your application lifetime to see if anything is leaking (at least on the SharpDX side). You can then edit your code to make sure you dispose of those objects. Only enable object tracking while debugging - it hurts performance.
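For example, a debug-only usage sketch (the placement is my suggestion; the two members are the ones named above):
#if DEBUG
SharpDX.Configuration.EnableObjectTracking = true; // debug builds only; it hurts performance
#endif

// ... later, e.g. on shutdown or after tearing down a rendering control:
System.Diagnostics.Debug.WriteLine(
    SharpDX.Diagnostics.ObjectTracker.ReportActiveObjects()); // lists SharpDX COM objects still alive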
SharpDX used to release COM objects when the finalizer ran if the object had not already been Disposed (at least as of version 2.4.2), but later disabled that (they detail why in one of their changelogs, I forget which one).
Additionally, DirectX requires that you release objects in the reverse order they were created; getting this wrong can create hard-to-debug memory leaks. So when your code is
_device = new Device(...);
_effect = new Effect(_device, byteCode);
_technique = _effect.GetTechniqueByName(techniqueName);
_inputLayout = new InputLayout(_device, _technique.GetPassByIndex(0).Description.Signature, ...);
then your dispose code has to be
_inputLayout.Dispose();
_technique.Dispose();
_effect.Dispose();
_device.Dispose();

Rendering on a WPF Control with DirectX 11

I am trying to create a map editor based on WPF. Currently I'm using a hack to render DirectX content: I created a WindowsFormsHost and render onto a WinForms panel.
This is all because DirectX (I'm using DirectX 11 with feature level 10) wants a handle (alias IntPtr) to render to. I don't know how I can initialize and use the DX device without a handle.
But a WPF control has no handle. So I just found out there is an interop class called "D3DImage". But I don't understand how to use it.
My current system works like this:
The inner loop goes through a list of "IGameloopElement"s. For each, it renders its content by calling "Draw()". After that, it calls "Present()" on the swap chain to show the changes. Then it resets the device to switch the handle to the next element (mostly there is only one element).
Now, because D3DImage doesn't have a handle, how do I render onto it? I just know I have to use "Lock()", then "SetBackBuffer()", "AddDirtyRect()" and then "Unlock()".
But how do I render onto a DirectX11.Texture2D object without specifying a handle for the device?
I'm really lost... I just found the "DirectX 4 WPF" sample on CodePlex, but it implements all versions of DirectX, manages the device itself, and has a huge overhead.
I want to stay with my current system. I'm managing the device myself; I don't want the WPF control to handle it.
The loop should just call "Render()" and then pass the backbuffer texture to the WPF control.
Could anyone tell me how to do this? I'm totally stuck...
Thanks a lot :)
R
WPF's D3DImage only supports Direct3D9/Direct3D9Ex; it does not support Direct3D 11. You need to use DXGI surface sharing to make it work.
Another answer wrote, "D3DImage only supports Direct3D9/Direct3D9Ex"... which is perhaps not entirely true, at least for the last few years. As I summarized in a comment here, the key appears to be that Direct3D 11 with DXGI has a very specific interop compatibility mode (the D3D11_SHARED_WITHOUT_MUTEX flag) which makes the ID3D11Texture2D1 directly usable as a D3DResourceType.IDirect3DSurface9 without copying any bits, which just so happens to be exactly (and only) what WPF's D3DImage is willing to accept.
This is a rough sketch of what worked for me, to create a D3D11 SampleAllocator that produces ID3D11Texture2D1 textures directly compatible with WPF's Direct3D 9. Because all the .NET interop shown here is of my own design, this will not be totally ready-to-run code to drop into your project, but the method, intent, and procedures should be clear for easy adaptation.
1. preliminary helper
static D3D_FEATURE_LEVEL[] levels =
{
    D3D_FEATURE_LEVEL._11_1,
    D3D_FEATURE_LEVEL._11_0,
};

static IMFAttributes GetSampleAllocatorAttribs()
{
    MF.CreateAttributes(out IMFAttributes attr, 6);
    attr.SetUINT32(in MF_SA_D3D11_AWARE, 1U);
    attr.SetUINT32(in MF_SA_D3D11_BINDFLAGS, (uint)D3D11_BIND.RENDER_TARGET);
    attr.SetUINT32(in MF_SA_D3D11_USAGE, (uint)D3D11_USAGE.DEFAULT);
    attr.SetUINT32(in MF_SA_D3D11_SHARED_WITHOUT_MUTEX, (uint)BOOL.TRUE);
    attr.SetUINT32(in MF_SA_BUFFERS_PER_SAMPLE, 1U);
    return attr;
}

static IMFMediaType GetMediaType()
{
    MF.CreateMediaType(out IMFMediaType mt);
    mt.SetUINT64(in MF_MT_FRAME_SIZE, new SIZEU(1920, 1080).ToSwap64());
    mt.SetGUID(in MF_MT_MAJOR_TYPE, in WMMEDIATYPE.Video);
    mt.SetUINT32(in MF_MT_INTERLACE_MODE, (uint)MFVideoInterlaceMode.Progressive);
    mt.SetGUID(in MF_MT_SUBTYPE, in MF_VideoFormat.RGB32);
    return mt;
}
2. the D3D11 device and context instances go somewhere
ID3D11Device4 m_d3D11_device;
ID3D11DeviceContext2 m_d3D11_context;
3. initialization code is next
void InitialSetup()
{
    D3D11.CreateDevice(
        null,
        D3D_DRIVER_TYPE.HARDWARE,
        IntPtr.Zero,
        D3D11_CREATE_DEVICE.BGRA_SUPPORT,
        levels,
        levels.Length,
        D3D11.SDK_VERSION,
        out m_d3D11_device,
        out D3D_FEATURE_LEVEL _,
        out m_d3D11_context);

    MF.CreateDXGIDeviceManager(out uint tok, out IMFDXGIDeviceManager m_dxgi);
    m_dxgi.ResetDevice(m_d3D11_device, tok);

    MF.CreateVideoSampleAllocatorEx(
        ref REFGUID<IMFVideoSampleAllocatorEx>.GUID,
        out IMFVideoSampleAllocatorEx sa);
    sa.SetDirectXManager(m_dxgi);
    sa.InitializeSampleAllocatorEx(
        PrerollSampleSink.QueueMax,
        PrerollSampleSink.QueueMax * 2,
        GetSampleAllocatorAttribs(),
        GetMediaType());
}
4. use sample allocator to repeatedly generate textures, as needed
ID3D11Texture2D1 CreateTexture2D(SIZEU sz)
{
    var vp = new D3D11_VIEWPORT
    {
        TopLeftX = 0f,
        TopLeftY = 0f,
        Width = sz.Width,
        Height = sz.Height,
        MinDepth = 0f,
        MaxDepth = 1f,
    };
    m_d3D11_context.RSSetViewports(1, ref vp);

    var desc = new D3D11_TEXTURE2D_DESC1
    {
        SIZEU = sz,
        MipLevels = 1,
        ArraySize = 1,
        Format = DXGI_FORMAT.B8G8R8X8_UNORM,
        SampleDesc = new DXGI_SAMPLE_DESC { Count = 1, Quality = 0 },
        Usage = D3D11_USAGE.DEFAULT,
        BindFlags = D3D11_BIND.RENDER_TARGET | D3D11_BIND.SHADER_RESOURCE,
        CPUAccessFlags = D3D11_CPU_ACCESS.NOT_REQUESTED,
        MiscFlags = D3D11_RESOURCE_MISC.SHARED,
        TextureLayout = D3D11_TEXTURE_LAYOUT.UNDEFINED,
    };
    m_d3D11_device.CreateTexture2D1(ref desc, IntPtr.Zero, out ID3D11Texture2D1 tex2D);
    return tex2D;
}
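For context, the WPF side then presents the shared surface using the Lock/SetBackBuffer/AddDirtyRect/Unlock sequence the question mentions. A minimal sketch, where surfacePtr is assumed to be the IDirect3DSurface9 pointer obtained for the shared texture:
// d3dImage is a System.Windows.Interop.D3DImage assigned to an Image.Source.
d3dImage.Lock();
try
{
    // Hand WPF the Direct3D 9 surface that wraps the shared D3D11 texture.
    d3dImage.SetBackBuffer(D3DResourceType.IDirect3DSurface9, surfacePtr);
    // Mark the whole surface dirty so WPF re-composites it this frame.
    d3dImage.AddDirtyRect(new Int32Rect(0, 0, d3dImage.PixelWidth, d3dImage.PixelHeight));
}
finally
{
    d3dImage.Unlock();
}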

What is the best way to have only non-transparent pixels hit testable in images in Silverlight?

According to MSDN, in Silverlight images are hit-testable over their entire image/media display areas, basically their Height and Width. Transparent / full-alpha pixels in the image file are still hit-testable.
My question now is: what is the best way to have only non-transparent pixels hit-testable in images in Silverlight?
This is not going to be possible using the normal hit testing capability, as you found out with the MSDN reference.
The only idea I had was to convert your image to the WriteableBitmap class and use the Pixels property to do alpha-channel hit testing. I have not actually tried this and I can't imagine it's trivial to do, but it should work in theory.
The pixels are one large int[], with the 4 bytes of each integer corresponding to ARGB. It uses the premultiplied ARGB32 format, so if there is any alpha transparency besides full 255, the other RGB values are scaled accordingly. I am assuming you want anything that is NOT fully transparent to be considered a "hit", so you could just check the alpha byte.
You would access the row/col pixel you are looking to check by array index like this:
int pixel = myBitmap.Pixels[row * myBitmap.PixelWidth + col];
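From there, the alpha channel is the high byte of the integer; a quick way to pull it out (a sketch, not from the original post):
byte alpha = (byte)((pixel >> 24) & 0xFF); // 0 = fully transparent, 255 = fully opaque
bool isHit = alpha != 0;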
Check out this post for some more ideas.
EDIT:
I threw together a quick test; it works and it's pretty straightforward:
public MainPage()
{
    InitializeComponent();
    this.image = new BitmapImage(new Uri("my_tranny_image.png", UriKind.Relative));
    this.MyImage.Source = image;
    this.LayoutRoot.MouseMove += (sender, e) =>
    {
        bool isHit = ImageHitTest(image, e.GetPosition(this.MyImage));
        this.Result.Text = string.Format("Hit Test Result: {0}", isHit);
    };
}
bool ImageHitTest(BitmapSource image, Point point)
{
    var writeableBitmap = new WriteableBitmap(image);
    // Check bounds
    if (point.X < 0.0 || point.X > writeableBitmap.PixelWidth - 1 ||
        point.Y < 0.0 || point.Y > writeableBitmap.PixelHeight - 1)
        return false;
    int row = (int)Math.Floor(point.Y);
    int col = (int)Math.Floor(point.X);
    int pixel = writeableBitmap.Pixels[row * writeableBitmap.PixelWidth + col];
    // On a little-endian platform the bytes come out as B, G, R, A,
    // so the alpha channel is index 3 (checking index 0 would test blue).
    byte[] pixelBytes = BitConverter.GetBytes(pixel);
    return pixelBytes[3] != 0x00;
}
You would probably want to make some optimizations, like not creating the WriteableBitmap on every MouseMove event, but this is just a proof of concept to show that it works.
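For instance, one sketch of that optimization (the ImageOpened wiring and field name are my assumptions): build the pixel copy once and reuse it, and have ImageHitTest take the cached bitmap instead of constructing one.
private WriteableBitmap _hitTestBitmap;

// BitmapImage loads asynchronously, so copy the pixels once the image is ready.
this.image.ImageOpened += (sender, e) =>
{
    _hitTestBitmap = new WriteableBitmap(image);
};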
