I have a WPF application that is intended for overlaying a HUD on a live stream. The original idea was to create a plugin for XSplit (a popular live-streaming application) to display the content of the WPF application. The problem with this approach is that rendering a bitmap to XSplit's COM interface is far too damaging to CPU performance to release the application (I believe there are issues in XSplit's COM interface, and using RenderTargetBitmap also taxes the CPU).
I've been looking at directly rendering the overlay into the game (the target DirectX application) because it provides a number of benefits. Chiefly, it circumvents the performance problems in XSplit, but it also opens the application up to a variety of streaming and capture applications.
I'm not very experienced with DirectX, but I think this is the outline of the solution:
1. Initialize the WPF application and capture WPF's Direct3D device (via this method)
2. Find and hook the target DirectX application's EndScene call (using EasyHook + SlimDX)
3. Render the contents of the WPF device's surface on top of the hooked DirectX application
The main question I have is how to accomplish step 3 using SlimDX. I'd hope a solution could somehow reuse the surface rather than rely on copying, as the goal is to avoid impacting the performance of the hooked application. I'd also like to be able to limit the region and support transparency. I am also wondering whether using WPF's Direct3D device in the hooked DirectX application's device might cause any instabilities.
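For context, here is a rough, untested sketch of how I imagine steps 2 and 3 fitting together with EasyHook and SlimDX, running inside the target process after injection. The vtable read (EndScene is slot 42 of the IDirect3DDevice9 vtable) and the throwaway device are the usual hooking approach; RenderOverlay is just a hypothetical placeholder for the surface rendering I'm asking about:

using System;
using System.Runtime.InteropServices;
using EasyHook;
using SlimDX.Direct3D9;

[UnmanagedFunctionPointer(CallingConvention.StdCall)]
delegate int EndSceneDelegate(IntPtr devicePointer);

class EndSceneHook
{
    LocalHook _hook;
    EndSceneDelegate _original;

    public void Install()
    {
        // Create a throwaway device purely to read the IDirect3DDevice9 vtable;
        // the hooked game supplies the real device at call time.
        using (var form = new System.Windows.Forms.Form())
        using (var d3d = new Direct3D())
        using (var device = new Device(d3d, 0, DeviceType.Hardware, form.Handle,
            CreateFlags.HardwareVertexProcessing,
            new PresentParameters { BackBufferWidth = 1, BackBufferHeight = 1, DeviceWindowHandle = form.Handle }))
        {
            IntPtr vtable = Marshal.ReadIntPtr(device.ComPointer);
            IntPtr endScene = Marshal.ReadIntPtr(vtable, 42 * IntPtr.Size); // slot 42 = EndScene
            _original = (EndSceneDelegate)Marshal.GetDelegateForFunctionPointer(endScene, typeof(EndSceneDelegate));
            _hook = LocalHook.Create(endScene, new EndSceneDelegate(EndSceneDetour), this);
            _hook.ThreadACL.SetExclusiveACL(new int[] { 0 }); // intercept all threads except this one
        }
    }

    int EndSceneDetour(IntPtr devicePointer)
    {
        Device device = Device.FromPointer(devicePointer); // wraps and AddRefs the game's device
        RenderOverlay(device);  // hypothetical: draw the captured WPF content here (step 3)
        device.Dispose();       // release only our extra reference
        return _original(devicePointer);
    }

    void RenderOverlay(Device device) { /* sprite/texture drawing would go here */ }
}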
Any insight would be appreciated, thank you.
I'm trying to do the same. What I've found so far is that you can render your WPF visual tree to a bitmap and afterwards write this bitmap to the D3D device captured in point 2.
void Render(SlimDX.Direct3D9.Device device)
{
    wpfRenderTargetBitmap.Render(WpfVisualTree);
    // CopyPixels can't take a device directly; it needs a destination pointer
    // (e.g. a locked surface's bits), plus the source rect, buffer size and stride.
    var rect = new Int32Rect(0, 0, wpfRenderTargetBitmap.PixelWidth, wpfRenderTargetBitmap.PixelHeight);
    int stride = wpfRenderTargetBitmap.PixelWidth * 4; // 32-bit BGRA
    wpfRenderTargetBitmap.CopyPixels(rect, lockedSurfacePointer, stride * wpfRenderTargetBitmap.PixelHeight, stride);
}
I didn't test this yet, but I think I'm on the right track. The only problem I have now is that I lose all interactivity from my window. Button clicks and so on will no longer be captured...
Any help on that would be nice.
I have a considerably challenging subject I'm trying to figure out.
Within WPF/WinForms I can create a WebBrowser component, which has significant limitations. These would be drastically resolved/reduced if I could bounce the DirectX surface from the web browser control to a DirectX surface I have set up.
A few things to note:
The WebBrowser component hosts the IWebBrowser2 OLE/ActiveX component in a "floating" window above existing content, as a child window of the WPF window.
I know and can get (without hacks) the HWND of the floating window, and the subclassed HWNDs of the children which have the actual Internet Explorer component running.
I can confirm that the window is rendering with DirectX, but I do not have handles to anything beyond an HWND. I do not know the surface it's rendering to, the device, or anything else.
What I've found as potential solutions:
BitBlt the child window to a WPF surface. This is an option of last resort, as it would require a timer to capture and update the bitmap; that seems wasteful and not all that great (a rough sketch follows this list).
The option to "redirect" an HWND's DirectX surface has been described as "trivial" by Microsoft blog writers, but they never actually explain how. So there may be a non-DirectX, non-GDI method that's more high-level, if anyone knows of one.
Use a swap chain to bounce the DirectX surface from the source WebBrowser HWND to the new target. This is my optimal choice, but it's difficult to even begin since I do not have anything other than the HWND of the target. (I'm not limited to WPF; other tech such as OLE/COM/MFC/ATL/WinForms solutions are great!)
Is there a way to access the DirectX device from an OLE/COM object? Kind of a Hail Mary; using reflection/debuggers I can't seem to find any reference to it. But this is also somewhat hacky, as I'm digging deep into the internals of the implementation.
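For the first option, a rough sketch of what the timer-driven capture could look like, with browserHwnd being the child HWND I already have; the GetDC/BitBlt P/Invoke route is plain GDI, and the GDI+ Bitmap is just one possible intermediate destination:

using System;
using System.Drawing;
using System.Runtime.InteropServices;

static class WindowCapture
{
    [DllImport("user32.dll")] static extern IntPtr GetDC(IntPtr hWnd);
    [DllImport("user32.dll")] static extern int ReleaseDC(IntPtr hWnd, IntPtr hDC);
    [DllImport("gdi32.dll")] static extern bool BitBlt(IntPtr hdcDest, int nXDest, int nYDest,
        int nWidth, int nHeight, IntPtr hdcSrc, int nXSrc, int nYSrc, uint dwRop);

    const uint SRCCOPY = 0x00CC0020;

    // Copy the current contents of the browser child window into a GDI+ bitmap.
    public static Bitmap Capture(IntPtr browserHwnd, int width, int height)
    {
        var bitmap = new Bitmap(width, height);
        using (var g = Graphics.FromImage(bitmap))
        {
            IntPtr srcDc = GetDC(browserHwnd);
            IntPtr destDc = g.GetHdc();
            try { BitBlt(destDc, 0, 0, width, height, srcDc, 0, 0, SRCCOPY); }
            finally { g.ReleaseHdc(destDc); ReleaseDC(browserHwnd, srcDc); }
        }
        return bitmap; // upload/copy this to the DirectX surface on each timer tick
    }
}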
Are there any pointers, hints, or directions anyone can provide on how I might best accomplish this with minimal "hackery" and maximum performance?
To begin with, I really don't think you can force WebBrowser to render directly onto your custom DirectDraw surface. However, you might be able to provide a DD surface's HDC to draw onto.
If you want to play with this, the WebBrowser ActiveX control implements the Windowless ActiveX Controls interfaces. In theory, you could implement a windowless ActiveX host and use IViewObject::Draw to draw onto the DD surface's HDC. I cannot predict the performance of this, but I doubt it would come anywhere close to the native DirectDraw performance of the Trident rendering engine.
I also posted somewhat related code which uses OleDraw (which indirectly calls IViewObject::Draw).
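Here is a minimal, untested sketch of the IViewObject::Draw route, without the windowless-host part. It assumes a WinForms WebBrowser (so its ActiveXInstance is reachable) and that you can obtain an HDC for your target surface, e.g. via IDirect3DSurface9::GetDC. Since Draw is the first vtable slot of IViewObject after IUnknown, the remaining methods can be omitted from the COM import:

using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

[StructLayout(LayoutKind.Sequential)]
struct RECT { public int Left, Top, Right, Bottom; }

[ComImport, Guid("0000010d-0000-0000-C000-000000000046"),
 InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
interface IViewObject
{
    // Draw is the first method after IUnknown, so later methods may be omitted here.
    void Draw(uint dwDrawAspect, int lindex, IntPtr pvAspect, IntPtr ptd,
              IntPtr hdcTargetDev, IntPtr hdcDraw, ref RECT lprcBounds,
              IntPtr lprcWBounds, IntPtr pfnContinue, IntPtr dwContinue);
}

static class BrowserPainter
{
    const uint DVASPECT_CONTENT = 1;

    // Render the browser's current content onto an arbitrary HDC.
    public static void DrawTo(WebBrowser browser, IntPtr hdc, int width, int height)
    {
        var view = (IViewObject)browser.ActiveXInstance;
        var bounds = new RECT { Left = 0, Top = 0, Right = width, Bottom = height };
        view.Draw(DVASPECT_CONTENT, -1, IntPtr.Zero, IntPtr.Zero,
                  IntPtr.Zero, hdc, ref bounds, IntPtr.Zero, IntPtr.Zero, IntPtr.Zero);
    }
}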
In our WPF application, we need to display about 64 real-time level meters for an audio application. The tests we've thrown at WPF, even when rendering basic primitives as efficiently as we can, still show it to be nowhere near where our application needs to be, often bogging down the main thread so much that it becomes non-responsive to input.
As such, we have to go with something more optimized for graphics performance, such as DirectX (via SlimDX or SharpDX) or OpenGL ES (via ANGLE, which converts it to DirectX calls).
My question is whether it's possible to create multiple small DirectX-based areas, each representing an individual meter, or, for that matter, whether that's even the right approach. I was under the impression that you have to run it as, at minimum, the entire window, not a portion thereof.
The issue I see with the latter is airspace: you can't have WPF content in front of DirectX content in the same window, and we really don't want to redo all of our controls in DirectX, since for the other (non-meter) 95% of our UI, WPF is great!
I have read that you can render DirectX to a brush and then use that inside WPF, or use the WriteableBitmap class, which gives you direct access to the buffers WPF then uses in its render thread. Neither seems to suffer from the airspace issues, but it seems we'd be right back in the same place, with WPF being the bottleneck since it still has to do the rendering.
We are of course going to dedicate a few weeks to sample applications testing all of the above, but I'm wondering if I'm even headed in the right direction, and/or whether there are common pitfalls we could avoid by talking to people with experience doing something like this. As such, any comments will be appreciated.
I'm hoping we can perhaps even start a wiki somewhere to discuss this topic, as it seems to be a popular one, albeit spread all over the place, making it hard for new entrants to get the information they seek.
With WPF/D3D interop, you should always try to make the smallest number of interop calls. So you should prefer rendering all 64 level meters into a single render target (this also allows you to batch your primitive rendering and draw everything in the smallest number of GPU calls).
You should try to use the D3DImage API, which allows you to share your own D3D texture with the WPF renderer.
If WPF really can't handle these 64 moving bars, you could go with a single D3DImage and use Direct3D9 to render all bars at once directly into it. For your specific scenario, you shouldn't have any performance problem.
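A minimal sketch of the usual D3DImage update pattern under those assumptions: surfacePtr is the IDirect3DSurface9 pointer of your own D3D9 render target, imageControl is an Image element in your window, and RenderAllMetersToSurface stands in for your batched D3D9 drawing:

using System;
using System.Windows;
using System.Windows.Interop;
using System.Windows.Media;

// Setup (e.g. in the window's constructor):
var d3dImage = new D3DImage();
imageControl.Source = d3dImage;
CompositionTarget.Rendering += OnRendering;

void OnRendering(object sender, EventArgs e)
{
    if (!d3dImage.IsFrontBufferAvailable)
        return; // device lost (e.g. locked workstation); skip this frame

    d3dImage.Lock();
    d3dImage.SetBackBuffer(D3DResourceType.IDirect3DSurface9, surfacePtr); // no-op if unchanged
    RenderAllMetersToSurface(); // one batched D3D9 pass drawing all 64 bars
    d3dImage.AddDirtyRect(new Int32Rect(0, 0, d3dImage.PixelWidth, d3dImage.PixelHeight));
    d3dImage.Unlock();
}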
I'm writing a video application utilizing D3DImage. Frames come from memory and are rendered as textures in native code with DirectX9, finally exposed by D3DImage to the WPF GUI. I have some overlays on top, created with WPF's drawing framework (text, shapes, etc.). Up to this point, it works like a charm.
Now I would like to encode the composited image from my underlying native C++ code. Video is 640x480 BGR at 25 FPS and has to be rendered and encoded in parallel, also on older hardware with Windows versions down to XP/SP3.
Problem is, I cannot find any documentation describing the composition process between WPF and D3DImage. They 'blend' in some sense, but what does that actually mean? And is it possible to get a handle to WPF's part of the drawing, or even the composited image, in my native C++ code?
P.S.: I'm also open to managed solutions, but haven't found anything performant so far.
There is a static event called CompositionTarget.Rendering. Add a handler to it, and every time WPF renders, that handler will be called before WPF presents (the FPS can vary, though). So just update your render target accordingly.
There might be a better way, but I'm not aware of it.
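A minimal sketch of that hookup, where UpdateRenderTarget is a hypothetical stand-in for whatever re-renders your D3D content and invalidates the D3DImage:

// Called once, e.g. in the window's constructor.
CompositionTarget.Rendering += (sender, e) =>
{
    // Runs on the UI thread once per WPF frame, before WPF presents.
    UpdateRenderTarget();
};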
NOTE: For D3DImage on Windows XP you use a D3D9 device with a lockable render target, while on Vista/7 you use a D3D9Ex device with a non-lockable render target. Just a note.
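In SlimDX terms, that note might translate to something like this sketch; the window handle and present parameters are placeholders for whatever your renderer uses:

using System;
using SlimDX.Direct3D9;

static Device CreateDeviceForD3DImage(IntPtr windowHandle, PresentParameters pp)
{
    bool isVistaOrLater = Environment.OSVersion.Version.Major >= 6;
    if (isVistaOrLater)
    {
        // Vista/7: D3D9Ex device; the shared render target may be non-lockable.
        var d3dEx = new Direct3DEx();
        return new DeviceEx(d3dEx, 0, DeviceType.Hardware, windowHandle,
                            CreateFlags.HardwareVertexProcessing, pp);
    }
    // XP: plain D3D9 device.
    var d3d = new Direct3D();
    return new Device(d3d, 0, DeviceType.Hardware, windowHandle,
                      CreateFlags.HardwareVertexProcessing, pp);
}

// On XP, the render target surface itself must be created lockable:
// Surface.CreateRenderTarget(device, width, height, Format.A8R8G8B8,
//                            MultisampleType.None, 0, true /* lockable */);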
Greetings
I've read that WPF utilizes DirectX, so I'm wondering if it is possible to create a game overlay with WPF. I have tried WinForms and WPF by themselves, and transparent forms or windows always cause problems for streaming software, so I'm wondering: is it possible to do the following?
Create a WPF application which shows a window on the desktop with all the options needed for the overlay. Once all the options are filled in, you press Update and the overlay is created in the game with all the information on it. The WPF app itself won't be visible on the stream, which means viewers won't be bothered by it when the broadcaster changes settings.
More about the overlay
The overlay will be a scoreboard so it will need a set amount of info. For example:
So, to sum up my question(s):
1. Can I make a WPF application which dynamically creates a DirectX overlay in-game?
2. Since it needs to work in DirectX9, is this project possible for a single dev (me) who has little to no experience with DirectX?
3. If it is possible, where should I start?
Thanks in advance for all your possible insights and replies!
What you want would be possible using D3DImage. It allows you to host any Direct3D content within WPF and also allows you to have an overlay with transparency. Here is a simple example.
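As a sketch of the shape this can take (not the linked example): a borderless, transparent, topmost WPF window whose only content is an Image backed by a D3DImage:

using System.Windows;
using System.Windows.Controls;
using System.Windows.Interop;
using System.Windows.Media;

var d3dImage = new D3DImage();
var overlay = new Window
{
    WindowStyle = WindowStyle.None,    // no chrome
    AllowsTransparency = true,         // per-pixel transparency
    Background = Brushes.Transparent,
    Topmost = true,                    // stay above the game window
    ShowInTaskbar = false,
    Content = new Image { Source = d3dImage }
};
overlay.Show();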
From your comment above, it sounds like you're really trying to inject your overlay (at least from the user's perspective) into StarCraft II. You would almost have to host a copy of the DirectX buffer.
Also, besides WPF, you might want to look at XNA.
After reading the Wikipedia article on WPF architecture, I am a bit confused about the benefits that WPF will offer me (Wikipedia is not a good research reference, but I found it useful). I have some questions:
1) WPF uses D3D surfaces to render. However, the scene graph is rendered into the D3D surface by the media integration layer, which runs on the CPU. Is this true?
2) I just found out by asking a question here that bitmaps don't use native resources. Does this mean that if I use a lot of images, the MIL will copy each one when rendering, rather than storing the bitmaps on the video card as textures?
3) The article mentions that WPF uses the painter's algorithm, which is back-to-front. That's painfully slow. Is there any rationale for why WPF omits Z-buffering and front-to-back rendering? I'm guessing it's because that is the simplest way to handle transparency, but it seems weak.
The reason I ask is that I'm thinking it won't be wise for me to put hundreds of buttons on a screen, even though my colleagues are saying it's DirectX-accelerated. I don't quite believe the whole DirectX-accelerated bit about WPF. I used to work on video games, and my memory of writing D3D and OpenGL code tells me to be cautious.
For questions #1 and #3, you might want to check out this section of the SDK that discusses the Visual class and how its rendering instructions are exchanged between the higher-level framework and the media integration layer (MIL). It also discusses why the painter's algorithm is used.
For #2, no, that is most definitely not the case. The bitmap data will be moved to the hardware and cached there.
I tested that: I wrote two programs that show 1,000 buttons on screen, one in WinForms and one in WPF; both worked just fine.
I then pushed that up to 10,000 buttons. At that point the WPF app took a few seconds to start but ran just fine, while the WinForms app didn't start at all.
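For reference, a sketch of the kind of stress test I mean, assuming the buttons are simply added to a scrollable panel:

using System.Windows;
using System.Windows.Controls;

var panel = new WrapPanel();
for (int i = 0; i < 10000; i++)
    panel.Children.Add(new Button { Content = i, Width = 60, Height = 24 });

var window = new Window { Content = new ScrollViewer { Content = panel } };
window.Show();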
Win32 itself (and WinForms) isn't built for applications with hundreds of controls (believe me, I wrote such an app); at some point it just stops working. WPF, on the other hand, keeps working, even if it slows down a bit at some point.
So, if you do need to put a lot of controls on screen, WPF is your best bet (unless you want to roll your own UI framework, and you think you can do better than the entire MS perf team).
Also, WPF has many advantages other than graphics acceleration: richer graphics, a drawing model that is easier to work with, animations, 3D, and my personal favorite, amazing data binding.
This will let you develop richer UIs faster - and I think that will make a much bigger difference than the painting algorithm used.
BTW, if you need to put hundreds of buttons on the screen, that is likely to be a bad user experience, and you may want to reconsider your UI design.