I am developing software based on Kinect v2 and I need to keep the captured frames in an array. I have run into a problem I can't explain, described below.
The captured frames are processed by my processing class, and the resulting WriteableBitmap is set as the source of the image box in my UI window. This works perfectly and I get real-time frames in my UI.
for example:
/// Color
_ProcessingInstance.ProcessColor(colorFrame);
ImageBoxRGB.Source = _ProcessingInstance.colorBitmap;
but when I assign this bitmap to an element of an array, all of the elements in the array end up identical to the first frame! I should mention that this assignment happens inside the same frame-reading event as the code above.
the code:
ColorFrames_Array[CapturingFrameCounter] = _ProcessingInstance.colorBitmap;
the equality check in the Immediate Window:
ColorFrames_Array[0].Equals(ColorFrames_Array[1])
true
ColorFrames_Array[0].Equals(ColorFrames_Array[2])
true
Please give me some hints about this problem. Any ideas?
Thanks, Yar
You are right: when I create a new instance, frames are saved correctly.
But my code was based on the Microsoft example, and the problem is that creating new instances causes a memory leak, because WriteableBitmap is not disposable.
A similar problem is discussed at the following link, where the frames are frozen to the first frame; this comes from the intrinsic behavior of WriteableBitmap:
http://www.wintellect.com/devcenter/jprosise/silverlight-s-big-image-problem-and-what-you-can-do-about-it
Therefore I used a strategy similar to the solution above and tried to get a copy instead of the original bitmap frame. In this scenario, I created a new WriteableBitmap for each element of ColorFrames_Array[] at the initialization step.
ColorFrames_Array = new WriteableBitmap[MaximumFramesNumbers_Capturing];
for (int i = 0; i < MaximumFramesNumbers_Capturing; ++i)
{
    ColorFrames_Array[i] = new WriteableBitmap(color_width, color_height, 96.0, 96.0, PixelFormats.Bgr32, null);
}
and finally, use the Clone method to copy the bitmap frames to the array elements.
ColorFrames_ArrayBuffer[CapturingFrameCounter] = _ProcessingInstance.colorBitmap.Clone();
While the above solution works, it has a huge memory leak.
Therefore I now use plain arrays and the CopyPixels method of WriteableBitmap to copy the frame's pixels into an array and hold them there (while the corresponding WriteableBitmap is reclaimed correctly, without leaking).
public int[][] ColorPixels_Array;

// Allocate one pixel buffer per frame slot at initialization.
ColorPixels_Array = new int[MaximumFramesNumbers_Capturing][];
for (int i = 0; i < MaximumFramesNumbers_Capturing; ++i)
{
    ColorPixels_Array[i] = new int[color_Width * color_Height];
}

// In the frame-reading event: copy the current frame's pixels into the array.
colorBitmap.CopyPixels(ColorPixels_Array[Counter_CapturingFrame], color_Width * 4, 0);
Finally, when we want to save the arrays of pixels, we need to convert them back into new WriteableBitmap instances and write those to disk.
wb = new WriteableBitmap(color_Width, color_Height, 96.0, 96.0, PixelFormats.Bgr32, null);
wb.WritePixels(new Int32Rect(0, 0, color_Width, color_Height),
    Ar_Px,
    color_Width * 4, 0);
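For completeness, here is a minimal sketch of writing such a rebuilt bitmap to disk; the PNG encoder choice, the frameIndex variable, and the file name are my assumptions, not part of the original code.

// Sketch: encode the rebuilt WriteableBitmap to a PNG file on disk.
// Assumes wb has been filled via WritePixels as above; frameIndex is hypothetical.
using (var fs = new FileStream("frame_" + frameIndex + ".png", FileMode.Create))
{
    var encoder = new PngBitmapEncoder();
    encoder.Frames.Add(BitmapFrame.Create(wb));
    encoder.Save(fs);
}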
I want to simulate a TV screen in my game. A 'no signal' image is displayed the whole time; eventually it will be replaced by a scene of one man shooting another, that's all. So I wrote this, which loads my image every time:
void Display_nosignal(SDL_Texture* Scene, SDL_Renderer* Rendu)
{
    SDL_Rect dest = {416, 0, 416, 416};
    SDL_Rect dest2 = {SCENE};
    SDL_SetRenderTarget(Rendu, Scene);
    SDL_Rect src = {(1200-500)/2, 0, 500, 545};
    /*Scene = IMG_LoadTexture(Rendu, "mages/No_Signal.jpg");
    if (Scene==NULL){printf("Erreur no signal : %s\n", SDL_GetError());}*/
    if (Scene == NULL)
    {
        Scene = IMG_LoadTexture(Rendu, "mages/No_Signal.jpg");
        if (Scene == NULL){printf("Erreur no signal : %s\n", SDL_GetError());}
    }
    SDL_RenderCopy(Rendu, Scene, NULL, &src);
    SDL_SetRenderTarget(Rendu, NULL);
    //SDL_DestroyTexture(Scene);
    //500, 545
}
and it causes memory leaks. I've tried destroying the texture in the loop, etc., but nothing changes. So, can you advise me on a way to load the image at the very beginning, keep it, and display it only when needed?
I agree with the commenters that a dedicated texture loader is the correct solution, but if you only want this behavior for one particular texture it may be overkill. In that case you can write a separate function which loads this particular texture, and make sure it is only called once.
Alternatively, you can use static variables. If a variable declared in a function is marked as static it will retain its value across calls to that function. You can find a simple example here (it's a tutorial-grade source but it shows basic usage) or here (SO source).
By modifying your code ever so slightly you should be able to make sure that the texture is loaded only once. By marking a pointer to it as static you ensure that its value (the address of the loaded texture) is not lost between calls to the function. Afterwards the pointer will live in memory until the program terminates. Thanks to this, we do not have to free the texture's memory (unless you explicitly want to free it at some point, but then the texture manager is probably a better idea). A memory leak will not occur, since we are never going to lose the reference to the texture.
void Display_nosignal(SDL_Texture* Scene, SDL_Renderer* Rendu)
{
    static SDL_Texture* Scene_cache = NULL; // Remembers the texture across function calls
    SDL_Rect dest = {416, 0, 416, 416};
    SDL_Rect dest2 = {SCENE};
    SDL_SetRenderTarget(Rendu, Scene);
    SDL_Rect src = {(1200-500)/2, 0, 500, 545};
    if (Scene_cache == NULL) // First call: Scene_cache is NULL; on later calls it points to the loaded texture
    {
        Scene_cache = IMG_LoadTexture(Rendu, "mages/No_Signal.jpg"); // If Scene_cache is NULL we load the texture
        if (Scene_cache == NULL){printf("Erreur no signal : %s\n", SDL_GetError());}
    }
    Scene = Scene_cache; // Set Scene to point to the loaded texture
    SDL_RenderCopy(Rendu, Scene, NULL, &src);
    SDL_SetRenderTarget(Rendu, NULL);
}
If you care about performance, memory usage, etc., you should read about the consequences of static variables, for instance their impact on caching and how they work internally. This might be considered a "dirty hack", but it might be just enough for a small project that does not need a bigger solution.
I have started working with DirectX in a WPF app. My first step was to use a simple library, SharpDX.WPF. Based on the samples I've implemented a WPF control that draws a simple line. SharpDX.WPF uses D3DImage to render images in WPF.
Unfortunately, the application's memory usage increases all the time.
I implemented a class TestControlRenderer : D3D10.
The vertex buffer is initialized like this:
var sizeInBytes = dataLength * sizeof(int) * 3;
var bufferDescription = new BufferDescription(
    sizeInBytes,
    ResourceUsage.Dynamic,
    BindFlags.VertexBuffer,
    CpuAccessFlags.Write,
    ResourceOptionFlags.None);

using (var stream = new DataStream(sizeInBytes, true, true))
{
    stream.Position = 0;
    _graphDataVertexBuffer = new SharpDX.Direct3D10.Buffer(Device, stream, bufferDescription);
}

Device.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(_graphDataVertexBuffer, sizeof(int) * 3, 0));
Device.InputAssembler.PrimitiveTopology = PrimitiveTopology.LineStrip;
Then the constant buffer with the parameters used in the shader:
_controlInfoConstantBuffer = new ConstantBuffer<ControlParamsShaderData>(Device);
Device.VertexShader.SetConstantBuffer(0, _controlInfoConstantBuffer.Buffer);
To initialize the animation, the Reset method was overridden like this:
base.Reset(args);
if (args.RenderSize.Width == 0) return;

_drawArgs = args;
InitVertexBuffer(dataLength);

_controlInfoConstantBuffer.Value = new ControlParamsShaderData
{
    SamplesInControl = dataLength,
    MinSignalDataY = -1500,
    MaxSignalDataY = 1500
};
Device.VertexShader.SetConstantBuffer(0, _controlInfoConstantBuffer.Buffer);
The last step is the RenderScene method:
public override void RenderScene(DrawEventArgs args)
{
    if (args.RenderSize.Width == 0) return;

    Device.ClearRenderTargetView(RenderTargetView, Color.Transparent);
    using (var stream = _graphDataVertexBuffer.Map(MapMode.WriteDiscard, SharpDX.Direct3D10.MapFlags.None))
    {
        for (int i = 0; i < Data.Length; i++)
        {
            stream.Write(new Vector3(i, Data[i], 0));
        }
    }
    _graphDataVertexBuffer.Unmap();

    Device.Draw(Data.Length, 0);
}
Rendering is controlled by a DispatcherTimer whose tick handler updates the array of point coordinates and then invokes the Render() method.
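In sketch form, that driving code looks roughly like the following; UpdateData and control.Render are stand-in names for the question's update and render steps, not SharpDX.WPF API.

// Sketch: a DispatcherTimer driving per-frame updates at roughly 60 Hz.
var timer = new DispatcherTimer { Interval = TimeSpan.FromMilliseconds(16) };
timer.Tick += (s, e) =>
{
    UpdateData();     // hypothetical: refresh the Data array with new points
    control.Render(); // hypothetical: trigger the control's render pass
};
timer.Start();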
My question is simply: is this a memory leak, or is something being created on each render iteration?
I don't change the backbuffer or create other objects. I only change the Data array, upload it to the GPU, and the shaders process it for display.
My goal is to display about 30 WPF controls with DirectX on one screen, each with a simple but real-time animation. Is that possible this way?
Most likely you are leaking resources. You can see this by setting the static configuration property
SharpDX.Configuration.EnableObjectTracking = true;
then calling
SharpDX.Diagnostics.ObjectTracker.ReportActiveObjects()
at various points in your application's lifetime to see if anything is leaking (at least on the SharpDX side). You can then edit your code to make sure those objects are disposed. Only enable object tracking while debugging; it hurts performance.
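As a minimal usage sketch (where and how you log the report is up to you; Console output here is an assumption):

// Sketch: enable SharpDX object tracking in debug builds only.
#if DEBUG
SharpDX.Configuration.EnableObjectTracking = true;
#endif

// ... later, at shutdown or a breakpoint, dump anything still alive:
Console.WriteLine(SharpDX.Diagnostics.ObjectTracker.ReportActiveObjects());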
SharpDX used to release COM objects when the finalizer ran if the object had not already been Disposed (at least as of version 2.4.2), but later disabled that behavior (they explain why in one of their changelogs; I forget which one).
Additionally, DirectX requires that you release objects in the reverse of the order in which they were created; this can create hard-to-debug memory leaks. So when your code is
_device = new Device(...);
_effect = new Effect(_device, byteCode);
_technique = _effect.GetTechniqueByName(techniqueName);
_inputLayout = new InputLayout(_device, _technique.GetPassByIndex(0).Description.Signature, ...);
then your dispose code has to be
_inputLayout.Dispose();
_technique.Dispose();
_effect.Dispose();
_device.Dispose();
I am trying to create a map editor based on WPF. Currently I'm using a hack to render DirectX content: I created a WindowsFormsHost and render onto a WinForms panel.
This is all because DirectX (I'm using DirectX 11 with feature level 10) wants a handle (an IntPtr) to render to. I don't know how to initialize and use the DX device without a handle.
But a WPF control has no handle. I just found out there is an interop class called "D3DImage", but I don't understand how to use it.
My current system works like this:
The inner loop goes through a list of "IGameloopElement"s. For each, it renders its content by calling "Draw()". After that, it calls "Present()" on the swap chain to show the changes. Then it resets the device to switch the handle to the next element (usually there is only one element).
Now, because D3DImage doesn't have a handle, how do I render onto it? I only know that I have to use "Lock()", then "SetBackBuffer()", "AddDirtyRect()", and then "Unlock()".
But how do I render onto a DirectX11.Texture2D object without specifying a handle for the device?
I'm really lost... I just found the "DirectX 4 WPF" sample on CodePlex, but it implements all versions of DirectX, manages the device itself, and has a huge amount of overhead.
I want to stay with my current system. I'm managing the device myself; I don't want the WPF control to handle it.
The loop should just call "Render()" and then pass the backbuffer texture to the WPF control.
Could anyone tell me how to do this? I'm totally stuck...
Thanks a lot :)
R
WPF's D3DImage only supports Direct3D9/Direct3D9Ex, it does not support Direct3D 11. You need to use DXGI Surface Sharing to make it work.
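For reference, the D3DImage side of that pattern looks like the following minimal sketch. Here sharedSurfacePtr stands in for a pointer to an IDirect3DSurface9 obtained from a Direct3D9Ex device via surface sharing; obtaining it is the part DXGI surface sharing covers and is assumed here.

// Sketch: presenting a shared surface through D3DImage.
// d3dImage is the D3DImage assigned to a WPF Image.Source;
// sharedSurfacePtr is an IDirect3DSurface9* from the shared render target (assumption).
d3dImage.Lock();
d3dImage.SetBackBuffer(D3DResourceType.IDirect3DSurface9, sharedSurfacePtr);
// ... render into the shared surface on the D3D11 side ...
d3dImage.AddDirtyRect(new Int32Rect(0, 0, d3dImage.PixelWidth, d3dImage.PixelHeight));
d3dImage.Unlock();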
Another answer wrote, "D3DImage only supports Direct3D9/Direct3D9Ex"... which has perhaps not been entirely true for the last few years. As I summarized in a comment here, the key appears to be that Direct3D 11 with DXGI has a very specific interop compatibility mode (the D3D11_SHARED_WITHOUT_MUTEX flag) which makes the ID3D11Texture2D1 directly usable as a D3DResourceType.IDirect3DSurface9, without copying any bits, and that happens to be exactly (and only) what WPF's D3DImage is willing to accept.
This is a rough sketch of what worked for me to create a D3D11 SampleAllocator that produces ID3D11Texture2D1 textures directly compatible with WPF's Direct3D9. Because all the .NET interop shown here is of my own design, this will not be totally ready-to-run code to drop into your project, but the method, intent, and procedure should be clear enough for easy adaptation.
1. preliminary helper
static D3D_FEATURE_LEVEL[] levels =
{
    D3D_FEATURE_LEVEL._11_1,
    D3D_FEATURE_LEVEL._11_0,
};

static IMFAttributes GetSampleAllocatorAttribs()
{
    MF.CreateAttributes(out IMFAttributes attr, 6);
    attr.SetUINT32(in MF_SA_D3D11_AWARE, 1U);
    attr.SetUINT32(in MF_SA_D3D11_BINDFLAGS, (uint)D3D11_BIND.RENDER_TARGET);
    attr.SetUINT32(in MF_SA_D3D11_USAGE, (uint)D3D11_USAGE.DEFAULT);
    attr.SetUINT32(in MF_SA_D3D11_SHARED_WITHOUT_MUTEX, (uint)BOOL.TRUE);
    attr.SetUINT32(in MF_SA_BUFFERS_PER_SAMPLE, 1U);
    return attr;
}

static IMFMediaType GetMediaType()
{
    MF.CreateMediaType(out IMFMediaType mt);
    mt.SetUINT64(in MF_MT_FRAME_SIZE, new SIZEU(1920, 1080).ToSwap64());
    mt.SetGUID(in MF_MT_MAJOR_TYPE, in WMMEDIATYPE.Video);
    mt.SetUINT32(in MF_MT_INTERLACE_MODE, (uint)MFVideoInterlaceMode.Progressive);
    mt.SetGUID(in MF_MT_SUBTYPE, in MF_VideoFormat.RGB32);
    return mt;
}
2. the D3D11 device and context instances go somewhere
ID3D11Device4 m_d3D11_device;
ID3D11DeviceContext2 m_d3D11_context;
3. initialization code is next
void InitialSetup()
{
    D3D11.CreateDevice(
        null,
        D3D_DRIVER_TYPE.HARDWARE,
        IntPtr.Zero,
        D3D11_CREATE_DEVICE.BGRA_SUPPORT,
        levels,
        levels.Length,
        D3D11.SDK_VERSION,
        out m_d3D11_device,
        out D3D_FEATURE_LEVEL _,
        out m_d3D11_context);

    MF.CreateDXGIDeviceManager(out uint tok, out IMFDXGIDeviceManager m_dxgi);
    m_dxgi.ResetDevice(m_d3D11_device, tok);

    MF.CreateVideoSampleAllocatorEx(
        ref REFGUID<IMFVideoSampleAllocatorEx>.GUID,
        out IMFVideoSampleAllocatorEx sa);
    sa.SetDirectXManager(m_dxgi);
    sa.InitializeSampleAllocatorEx(
        PrerollSampleSink.QueueMax,
        PrerollSampleSink.QueueMax * 2,
        GetSampleAllocatorAttribs(),
        GetMediaType());
}
4. use sample allocator to repeatedly generate textures, as needed
ID3D11Texture2D1 CreateTexture2D(SIZEU sz)
{
    var vp = new D3D11_VIEWPORT
    {
        TopLeftX = 0f,
        TopLeftY = 0f,
        Width = sz.Width,
        Height = sz.Height,
        MinDepth = 0f,
        MaxDepth = 1f,
    };
    m_d3D11_context.RSSetViewports(1, ref vp);

    var desc = new D3D11_TEXTURE2D_DESC1
    {
        SIZEU = sz,
        MipLevels = 1,
        ArraySize = 1,
        Format = DXGI_FORMAT.B8G8R8X8_UNORM,
        SampleDesc = new DXGI_SAMPLE_DESC { Count = 1, Quality = 0 },
        Usage = D3D11_USAGE.DEFAULT,
        BindFlags = D3D11_BIND.RENDER_TARGET | D3D11_BIND.SHADER_RESOURCE,
        CPUAccessFlags = D3D11_CPU_ACCESS.NOT_REQUESTED,
        MiscFlags = D3D11_RESOURCE_MISC.SHARED,
        TextureLayout = D3D11_TEXTURE_LAYOUT.UNDEFINED,
    };
    m_d3D11_device.CreateTexture2D1(ref desc, IntPtr.Zero, out ID3D11Texture2D1 tex2D);
    return tex2D;
}
A few months ago I built some online samples, like this one from Jeff Prosise, that use the WriteableBitmap class in Silverlight.
Revisiting them today with the latest Silverlight 3 installer (3.0.40624.0), the API seems to have changed.
I figured out some of the changes. For example, the WriteableBitmap array accessor has disappeared, but I found it in the new Pixels property, so instead of writing:
bmp[x]
I can write
bmp.Pixels[x]
Are there similar simple replacements for these calls, or has the use pattern itself changed?
bmp = new WriteableBitmap(width, height, PixelFormats.Bgr32);
bmp.Lock();
bmp.Unlock();
Can anybody point me to a working example using the updated API?
Another important detail about switching to the new WriteableBitmap is given in this answer: because the pixel format is now always Pbgra32, you must set an alpha value for each pixel, otherwise you just get an all-white picture. In other words, code that formerly generated pixel values like this:
byte[] components = new byte[4];
components[0] = (byte)(blue % 256); // blue
components[1] = (byte)(grn % 256); // green
components[2] = (byte)(red % 256); // red
components[3] = 0; // unused
should be changed to read:
byte[] components = new byte[4];
components[0] = (byte)(blue % 256); // blue
components[1] = (byte)(grn % 256); // green
components[2] = (byte)(red % 256); // red
components[3] = 255; // alpha
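A minimal sketch of how those components end up in the bitmap, assuming bmp is the WriteableBitmap and x an index into its Pixels array (the packing order shown is Pbgra32: blue in the low byte, alpha in the high byte):

// Pack the four components into one Pbgra32 pixel and store it.
int pixel = (255 << 24) | ((red % 256) << 16) | ((grn % 256) << 8) | (blue % 256);
bmp.Pixels[x] = pixel;
bmp.Invalidate(); // request a redraw after the pixel writes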
What happens if you don't use Lock and Unlock and just use the WriteableBitmap(int, int) constructor? Do things break?
It would seem that between the SL3 beta and the release this API changed. See the Breaking Changes Document Errata (Silverlight 3).
I am investigating a GDI resource leak in a large application. In order to further my understanding of how these problems occur, I have created a very small application which I have deliberately made 'leaky'. Here is a simple user control which should result in the creation of 100 Pen objects:
public partial class TestControl : UserControl
{
    private List<Pen> pens = new List<Pen>();

    public TestControl()
    {
        InitializeComponent();
        for (int i = 0; i < 100; i++)
        {
            pens.Add(new Pen(new SolidBrush(Color.FromArgb(255, i * 2, i * 2, 255 - i * 2))));
        }
        this.Paint += new PaintEventHandler(TestControl_Paint);
    }

    void TestControl_Paint(object sender, PaintEventArgs e)
    {
        for (int i = 0; i < 100; i++)
        {
            e.Graphics.DrawLine(pens[i], 0, i, Width, i);
        }
    }
}
However, when I create an instance of my control and add it to a form, Task Manager shows my application using ~37 GDI objects. If I repeatedly add new TestControl user controls to my form, I still only see ~37 GDI objects.
What is going on here? I thought that the constructor for System.Drawing.Pen would use the GDI+ API to create a new pen, thus consuming a new GDI object.
I must be going nuts. If I cannot write a simple test application that creates GDI objects, how can I create one that leaks them?
Any help would be much appreciated.
Best Regards, Colin E.
Does GDI+ use GDI handles? I'm not sure, though I have read somewhere that there is a .NET System.Drawing implementation that relies on bare GDI.
However, maybe you can try to find your leaks with a profiler like AQTime instead.
How are you sure your large app is leaking GDI handles? Is the count in Task Manager large? If so, are you always using GDI+, or also GDI? Does your test app's GDI handle count increase if you create your control multiple times?
You are not really leaking resources in your sample. Remove this code from your constructor:
for (int i = 0; i < 100; i++)
{
pens.Add(new Pen(new SolidBrush(Color.FromArgb(255, i * 2, i * 2, 255 - i * 2))));
}
Your Paint event handler should look like this:
void TestControl_Paint(object sender, PaintEventArgs e)
{
    for (int i = 0; i < 100; i++)
    {
        // A new Pen and SolidBrush are created on every paint and never disposed.
        e.Graphics.DrawLine(new Pen(new SolidBrush(Color.FromArgb(255, i * 2, i * 2, 255 - i * 2))), 0, i, Width, i);
    }
}
Now you will be leaking in every paint call. Start minimizing and restoring your form and watch the GDI object count skyrocket...
Hope this helps.
If you want to leak a GDI object from .NET, then just create a GDI object and not release it:
[DllImport("gdi32.dll", EntryPoint="CreatePen", CharSet=CharSet.Auto, SetLastError=true, ExactSpelling=true)]
private static extern IntPtr CreatePen(int fnStyle, int nWidth, int crColor);
CreatePen(0, 0, 0); //(PS_SOLID, 0=1px wide, 0=black)
Blingo blango, you're leaking GDI pens.
I don't know why you want to create GDI leaks, but your question asked how to create them from a WinForms app, so there it is.
I think the compiler only uses one handle.
If I create a lot of fonts in Delphi, I just consume memory,
but if I use the WinAPI CreateFont(), I consume GDI objects.
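For comparison, a hedged C# sketch of that CreateFont route; the parameter values are arbitrary and the point is only that the returned handle is never passed to DeleteObject:

// Sketch: allocate a GDI font handle directly and leak it.
[DllImport("gdi32.dll", CharSet = CharSet.Auto)]
private static extern IntPtr CreateFont(
    int cHeight, int cWidth, int cEscapement, int cOrientation, int cWeight,
    uint bItalic, uint bUnderline, uint bStrikeOut, uint iCharSet,
    uint iOutPrecision, uint iClipPrecision, uint iQuality,
    uint iPitchAndFamily, string pszFaceName);

// Each call creates a GDI font object; never calling DeleteObject leaks it.
CreateFont(16, 0, 0, 0, 400, 0, 0, 0, 1, 0, 0, 0, 0, "Arial");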
Create two buttons on a form. Inside each button, add the following code. In one button, comment out the Dispose method.
Form _test = null;
for (int i = 0; i < 20; i++)
{
    _test = new Form();
    _test.Visible = false;
    _test.Show();
    _test.Hide();
    _test.Dispose();
}
The button with the Dispose commented out shows you the leak. The other shows that Dispose causes the User and GDI handles to stay the same.
This is probably the best page I've found that explains it.
I think the following blog may have answered this question:
Using GDI Objects the Right Way
The GDI objects that aren't explicitly disposed should be implicitly disposed by their finalizers.
(Bob Powell has also mentioned this issue in GDI+ FAQ )
But I doubt the CLR garbage collector can reclaim GDI resources so quickly that we can't even see the usage change in Task Manager. Maybe the current GDI+ implementation doesn't use GDI handles.
I've tried the following piece of code to generate more GDI objects, but I still couldn't see any change in the number of GDI handles.
void Form1_Paint(object sender, PaintEventArgs e)
{
    Random r = new Random();
    while (true) // note: this never returns, so the UI thread stays inside Paint
    {
        for (int i = 0; i < 100; i++)
        {
            e.Graphics.DrawLine(
                new Pen(new SolidBrush(Color.FromArgb(r.Next()))), 0, i, Width, i);
        }
    }
}
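One way to check this numerically, instead of watching Task Manager, is the Win32 GetGuiResources API, which reports a process's GDI object count; a minimal sketch (the loop size is arbitrary):

// Sketch: measure the process GDI object count around an allocation burst.
[DllImport("user32.dll")]
static extern uint GetGuiResources(IntPtr hProcess, uint uiFlags); // uiFlags 0 = GR_GDIOBJECTS

IntPtr me = System.Diagnostics.Process.GetCurrentProcess().Handle;
uint before = GetGuiResources(me, 0);

var pens = new List<Pen>();
for (int i = 0; i < 1000; i++)
{
    pens.Add(new Pen(Color.Red)); // kept alive, never disposed
}

uint after = GetGuiResources(me, 0);
// Per the discussion above, GDI+ pens may not consume classic GDI handles,
// so 'after' may barely differ from 'before'.
Console.WriteLine("GDI objects: " + before + " -> " + after);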