I'm looking for a way to overlay the graphical output of a third-party application with some lines, arcs, etc. The application accepts a handle of a window in which it will then display its output.
Using VC++, I put together a Windows Forms app in Visual Studio that draws (non-static) stuff in a form's OnPaint method. Passing this form's handle to the other app works, but of course the other app overwrites my graphics every time it redraws.
Can I somehow hook into this redrawing process to add my graphics after the other app redraws? Overlaying the form with a transparent panel onto which I draw could be an alternative, but real transparency for controls seems to be a problem of its own in Windows ...
You can't do this easily without getting notifications from the app. If it doesn't provide them, you would have to set a global hook with SetWindowsHookEx() so you can see the WM_ERASEBKGND and WM_PAINT messages. That's hard to get right, and you cannot write such a hook in managed code, since it requires injecting a DLL into the target process.
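For illustration, here is a rough sketch of what such a hook DLL might look like (untested; InstallHook/RemoveHook and the shared-segment setup are hypothetical names, just one conventional way to do it). The hook procedure runs inside the target process, which is why the watched-window handle has to live in a shared data section:

#include <windows.h>

// Globals shared across processes -- a global hook DLL gets loaded into the
// target process, where plain globals would otherwise start out empty.
#pragma data_seg(".shared")
HHOOK g_hHook   = NULL;
HWND  g_hTarget = NULL;
#pragma data_seg()
#pragma comment(linker, "/SECTION:.shared,RWS")

// Called after the target window procedure has processed each message.
static LRESULT CALLBACK CallWndRetProc(int nCode, WPARAM wParam, LPARAM lParam)
{
    if (nCode == HC_ACTION)
    {
        const CWPRETSTRUCT* msg = (const CWPRETSTRUCT*)lParam;
        if (msg->hwnd == g_hTarget && msg->message == WM_PAINT)
        {
            // The app just finished repainting; notify our own process
            // (e.g. PostMessage to a window we own) so it can draw on top.
        }
    }
    return CallNextHookEx(g_hHook, nCode, wParam, lParam);
}

extern "C" __declspec(dllexport) void InstallHook(HWND target, HINSTANCE hDll)
{
    g_hTarget = target;
    g_hHook   = SetWindowsHookEx(WH_CALLWNDPROCRET, CallWndRetProc, hDll, 0);
}

extern "C" __declspec(dllexport) void RemoveHook()
{
    if (g_hHook) UnhookWindowsHookEx(g_hHook);
}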
The only other option is to put a transparent overlay on top of your form: another form that has its TransparencyKey property set. The basic code you need to get that right is available in my answer in this thread. You just need to tweak it so it is permanent.
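If you go the native route instead, roughly the same effect can be had with a layered window and a color key; this is the Win32 counterpart of a TransparencyKey form (a sketch; hwndOverlay is assumed to be a borderless popup you have already created and positioned over the target):

#include <windows.h>

void MakeOverlay(HWND hwndOverlay)
{
    const COLORREF keyColor = RGB(255, 0, 255);  // treated as fully transparent

    // WS_EX_LAYERED enables the color key; WS_EX_TRANSPARENT lets mouse
    // input fall through to the window underneath.
    LONG ex = GetWindowLong(hwndOverlay, GWL_EXSTYLE);
    SetWindowLong(hwndOverlay, GWL_EXSTYLE, ex | WS_EX_LAYERED | WS_EX_TRANSPARENT);
    SetLayeredWindowAttributes(hwndOverlay, keyColor, 0, LWA_COLORKEY);

    // Keep the overlay above the target without stealing focus.
    SetWindowPos(hwndOverlay, HWND_TOPMOST, 0, 0, 0, 0,
                 SWP_NOMOVE | SWP_NOSIZE | SWP_NOACTIVATE);
}

In the overlay's WM_PAINT handler you would fill the background with keyColor and then draw your lines and arcs; everything left in the key color shows the application underneath.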
I recently started playing Second Life and I would like to know if there is a way to write a program outside of the SL viewer that can click the SL menu's buttons automatically.
Not unless you really feel like writing your own third-party viewer. The only time I've seen this done is through SmartBots, but even that is using a custom-coded viewer to host the bot.
You can always use external programs such as AutoHotkey to do clicks at certain locations on-screen.
Do note that all UI elements in the SL viewer are not drawn using the OS's GUI component system, but by the SL viewer itself using OpenGL calls, so you'll have to do the coordinate calculations yourself and click at coordinates relative to the viewer's window.
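As a rough native-code illustration of that AutoHotkey-style approach (the window title and coordinates here are assumptions; find the real ones with a tool like Spy++):

#include <windows.h>

bool ClickInViewer(int clientX, int clientY)
{
    HWND viewer = FindWindowW(NULL, L"Second Life");  // assumed window title
    if (!viewer) return false;

    POINT pt = { clientX, clientY };
    ClientToScreen(viewer, &pt);      // viewer-relative -> screen coordinates
    SetCursorPos(pt.x, pt.y);

    // Synthesize a left click at the cursor position.
    INPUT in[2] = {};
    in[0].type = INPUT_MOUSE;  in[0].mi.dwFlags = MOUSEEVENTF_LEFTDOWN;
    in[1].type = INPUT_MOUSE;  in[1].mi.dwFlags = MOUSEEVENTF_LEFTUP;
    return SendInput(2, in, sizeof(INPUT)) == 2;
}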
From what I understand, a GUI will have its windows and window classes, and will use these for the main window and all the buttons, tabs, etc.
These would all have handles and be rendered either with the Windows GDI or another backend such as OpenGL. When a user interacts, say by clicking on a widget, there will be a callback function/event handler and it'll do its job. But what is happening when the user clicks a button that switches the "form"? (I'm not sure what to call this, so I'll call it a "form" - by this I mean the visible set of all menus, widgets and things - like in Google Chrome, I have this tab open right now and I could move to another one that displays a different website and GUI.)
How does the GUI framework change all the windows on the screen? I can understand it could change what's being rendered with the API of choice, like OpenGL, but how does it get rid of all the old windows and load the new ones? Does it disable all the child windows through their handles, and just leave them there on the screen, but unseen and not accepting input? Does it delete everything and create new windows? How does it actually perform this change (efficiently too)? I may be making a mountain out of a molehill here - if I'm overthinking this please let me know!
I once made a very bad game using C, Win32, GDI and Direct2D, and when you pressed "play" it'd go to the game, but I just had to hide the buttons in a very glitchy fashion - I had no clue how to perform the "switch."
I have never used a "proper" GUI framework like Qt, nor have I ever built one myself, so apologies for any errors in the question - please correct me. I ask because I want to make my own GUI framework as a long-term project (nothing special, just something I can say that I've achieved) and I am at a loss as to how to implement this from a low-level perspective, or rather how industry standards such as Qt implement this at the lowest possible level.
Any answers would preferably not refer to managed code, scripting languages, or external libraries - I want to know how to do this in C and Win32 plus any arbitrary graphics API. Thanks in advance.
This is accomplished by altering the z-order (the idea being that the windows form a stack from closest to the user to furthest away) of children at the appropriate level. The direct children of every window are in some z-order even if they are arranged such that they don't actually overlap.
For example, in the case of a tab control there will likely be a single child associated with each tab, that child representing the view for that tab. When a tab's button is clicked, the child for that tab is moved in the z-order so that it is above all of its siblings (the views for the other tabs). Those tab children will all be the same size (the empty area of the tab control's client window), so bringing one to the top of its parent's z-order covers all the other views.
In the case of the Windows API you alter z-order placement via SetWindowPos; if you are going to roll your own (as WPF does) then you will need to re-implement this idea in some manner.
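A minimal sketch of that idea in Win32 (names are illustrative): each tab owns one child "view" window, all sized to the same rectangle, and switching tabs just raises the chosen view above its siblings:

#include <windows.h>

void SwitchToTab(HWND views[], int activeIndex)
{
    // All views occupy the same rectangle, so bringing one to the top of
    // its parent's z-order covers the others -- nothing is destroyed or
    // recreated, and the covered views simply stop receiving input.
    SetWindowPos(views[activeIndex], HWND_TOP, 0, 0, 0, 0,
                 SWP_NOMOVE | SWP_NOSIZE | SWP_SHOWWINDOW);
}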
We've been working on an application for the last few months that's aimed at Windows 7 tablet PCs. So we've used the Surface 2 SDK for most controls and it's all touch-happy.
I have noticed recently, though, that one of our custom controls isn't working as it should. This control provides popout menus, and these are achieved through the Popup control. On a developer's laptop, this works fine and the menus vanish when you click away from them. I've noticed, though, that on our test tablet they have a tendency to stay open.
I found that there was a SurfacePopup in the first Surface SDK, but I can't find one in the Surface 2 SDK. Did they get rid of it? Is there a 'best practice' approach?
If there's no simple solution, I may have to go old-school and add a window-sized hidden SurfaceButton below the menu when it appears, that hides itself and the menu when clicked or touched.
Beyond that I've noticed that sometimes the SurfaceScrollViewer within the popups won't work. I'm guessing this is because it's not picking up touch events properly. I tried calling this extension method on the window:
this.EnableSurfaceInput();
...but I get a NullReferenceException on System.Windows.Input.Mouse.get_LeftButton(), which bizarrely suggests that it can only enable surface input for controls when there's a mouse plugged in.
Any ideas? They'll all be welcomed with open arms!
There's no SurfacePopup in the Surface SDK 2.0; however, you can use a normal WPF Popup. Then you need to make sure that it receives touch events by using the extension method you suggested above on the popup, not the window:
((HwndSource)HwndSource.FromVisual(popup)).EnableSurfaceInput();
Edit: As I just found out, this only works when the popup is initially open. To get it to work when the popup is opened later on, you need to call it not on the popup itself, but on the parent of its child (see this question).
For the benefit of Daniel, and anyone else who needs a solution to this, I'll try to cast my mind back two years and explain how we got this working.
As far as I can remember, the answer was to use an adorner layer instead of a popup. Basically, every WPF control has an adorner layer, which sits above the control's UI stack. By default it contains nothing, but you can add whatever you like to it.
I got this all working by writing a custom control that allows you to place that control, with content, in the XAML and then show and hide it whenever you need to. When it's shown, it moves its contents into the adorner layer of the containing window, and when it's hidden it moves the contents back into the control itself, which is hidden from the user.
Afraid I can't go into any more detail than that, but as far as I can remember this was the ultimate solution; replacing popups (which never quite worked very well) with a custom control that uses the adorner layer.
Hope that helps!
I have a WPF application that uses an HwndHost to display DirectX 9 content. I'm trying to use the PIX for Windows graphics profiler (which comes bundled with the DirectX SDK) to see render states and debug shaders in this DirectX window.
The problem I'm running into is that when I try to do a single-frame capture by pressing F12 in PIX, I sometimes don't get all the draw information for an entire frame. Also, when I try clicking on a DrawPrimitive call in PIX I get an error message saying "A call that previously succeeded failed during playback."
I think the reason for these problems is that WPF also uses DirectX to render all the WPF widgets. It does this on the WPF render thread which is hidden from the user in WPF. It looks like PIX uses calls to IDirect3DDevice9::Present to determine when a frame has ended. The WPF render thread is making calls to IDirect3DDevice9::Present in the middle of a frame of my render, causing PIX to get confused and truncate that frame (and errors when I try to look at DrawPrimitive calls).
Here's a little more information about how my application is set up: I have a WPF application with an HwndHost inside the main window that I use to show some 3D content. In the HwndHost::BuildWindowCore function, I create a window using CreateWindowEx which gives me an hwnd. I then initialize the DirectX 9 device using this hwnd. I then return the hwnd to the HwndHost. I hook up to the CompositionTarget.Render event which calls my unmanaged render function to draw everything in the scene.
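For context, the native side of that setup looks roughly like the following (a sketch with the usual windowed-mode defaults, not necessarily the exact parameters used here):

#include <d3d9.h>
#pragma comment(lib, "d3d9.lib")

IDirect3DDevice9* CreateDeviceOn(HWND hwnd)
{
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);

    D3DPRESENT_PARAMETERS pp = {};
    pp.Windowed         = TRUE;
    pp.SwapEffect       = D3DSWAPEFFECT_DISCARD;
    pp.BackBufferFormat = D3DFMT_UNKNOWN;   // match the current display mode
    pp.hDeviceWindow    = hwnd;

    IDirect3DDevice9* device = NULL;
    d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,
                      D3DCREATE_HARDWARE_VERTEXPROCESSING, &pp, &device);
    return device;   // each device->Present() is what PIX treats as a frame boundary
}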
I've tried getting this working using the D3DImage control to display my DirectX surface, but I run into the same problem.
It seems to me like the only way to solve this problem is to make sure that the WPF render thread has completely finished before doing a frame capture in PIX. That way, PIX won't be thrown off by any WPF DirectX calls. However, I don't see that there is any way to determine that WPF has finished rendering everything. There is a good post here that explains how to use the Dispatcher.Invoke function to determine when a single WPF element has completed rendering, but I need to know when everything has finished rendering.
I'm wondering if anybody has successfully set up a WPF application with their own DirectX window and was able to get PIX to work properly. Any help would be appreciated.
I have a set of forms which are visualized as MDI tab children of a main form (through an Infragistics UltraTabbedMDIManager, but the exact API is not so important).
I use GetDC(), CreateCompatibleDC(), CreateCompatibleBitmap(), SelectObject(), BitBlt(), etc. to blit the contents of these forms' device contexts into a memory bitmap.
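Spelled out, that chain looks roughly like this (a sketch; error handling trimmed):

#include <windows.h>

HBITMAP CaptureForm(HWND hwndForm, int width, int height)
{
    HDC     hdcForm = GetDC(hwndForm);                 // the form's DC
    HDC     hdcMem  = CreateCompatibleDC(hdcForm);     // memory DC
    HBITMAP hbmp    = CreateCompatibleBitmap(hdcForm, width, height);
    HGDIOBJ old     = SelectObject(hdcMem, hbmp);

    // Copy the form's pixels into the memory bitmap. Note this copies what
    // is actually painted on screen -- hence the problems with inactive or
    // hidden tabs described below.
    BitBlt(hdcMem, 0, 0, width, height, hdcForm, 0, 0, SRCCOPY);

    SelectObject(hdcMem, old);
    DeleteDC(hdcMem);
    ReleaseDC(hwndForm, hdcForm);
    return hbmp;   // caller owns the bitmap
}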
This works, but only for the active MDI child form, the one that is visible to the user.
If I do it for forms that are not active (any tabs that are not currently shown), I get a black image in the memory bitmap, or even a "copy" of whatever window is above it on screen.
If I do it for forms that are no longer visible, I also get a black image.
What should I do to get a bitmap of these hidden forms? Do I have to resort to caching or is there some other trickery I can use?
I cannot use the WinForms DrawToBitmap() function, because the forms contain some low-level graphical things that cannot be retrieved with it.
How can I use the Win32 API to retrieve a bitmap of these "hidden" forms' DCs?
I managed to do it using the PrintWindow API in user32.dll.
It solves the MDI tabs problem; however, it didn't solve the problem for hidden forms.
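A minimal sketch of the PrintWindow approach (error handling trimmed): it asks the window to render itself into a DC you supply, which is why it works for covered MDI tabs even though they are not on screen:

#include <windows.h>

HBITMAP CaptureWithPrintWindow(HWND hwnd)
{
    RECT rc;
    GetWindowRect(hwnd, &rc);
    const int w = rc.right - rc.left;
    const int h = rc.bottom - rc.top;

    HDC     hdcScreen = GetDC(NULL);
    HDC     hdcMem    = CreateCompatibleDC(hdcScreen);
    HBITMAP hbmp      = CreateCompatibleBitmap(hdcScreen, w, h);
    HGDIOBJ old       = SelectObject(hdcMem, hbmp);

    // 0 = capture the whole window; pass PW_CLIENTONLY for just the client area.
    PrintWindow(hwnd, hdcMem, 0);

    SelectObject(hdcMem, old);
    DeleteDC(hdcMem);
    ReleaseDC(NULL, hdcScreen);
    return hbmp;
}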
I solved that problem by briefly showing the forms in some off-screen location.
It seems the "ultimate" way is to use the (undocumented) dwm.dll, but this is not really advisable because its interfaces differ between versions of Windows.