Parent window not receiving window's messages (Key Events) - winforms

I have a GUI application written using the Win API, and we need to launch a new GUI application when the user selects certain menu items.
We decided to write the new application in PyQt and launch it using the Python C API.
Everything works fine except that the parent window, through which we launch the PyQt application, stops responding to some key events while the PyQt application is open. Once we close the PyQt application, it starts responding to those key events again.
My guess is that once the PyQt GUI application is launched, the messages are somehow no longer passed to the parent window.
Inspecting with Spy++, I found the following:
Receives messages for:
- ALT key
- F1, F2 keys
- Mouse events
Does NOT receive messages for:
- CTRL key
- All other Fn keys
- All letter keys
- SHIFT, CAPS keys
Any thoughts on how to solve this problem would be appreciated.

I believe what you are trying to do -- operate two separate GUIs within a single process -- is not supported by any major operating system. A while back, I searched for a long time for ways to do this and never came up with any advice except "don't".
I'm surprised that missing keys are the only problem you have. I recommend finding a different solution before you discover more trouble (unless you can find some good evidence that this is at least supposed to work).
Could you perhaps spawn a new process to run the Qt event loop instead? Since you already have Python embedded in the main process, this should be fairly easy: use Python's built-in IPC to handle the communication between the processes.
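If it helps, here is a rough sketch of that idea from the Win32 host's side, assuming the host is C/C++: launch the PyQt GUI as a completely separate process with CreateProcess and talk to it over an anonymous pipe attached to the child's stdin. The script name pyqt_gui.py and the line-based "protocol" are made up; swap in your own entry point and message format.

```cpp
#include <windows.h>
#include <string>

struct ChildGui {
    PROCESS_INFORMATION pi{};
    HANDLE stdinWrite = nullptr;   // the host writes commands here
};

// Spawn the PyQt GUI as its own process so it owns its own message loop.
bool LaunchPyQtGui(ChildGui &gui)
{
    SECURITY_ATTRIBUTES sa{ sizeof(sa), nullptr, TRUE };   // make handles inheritable
    HANDLE stdinRead = nullptr;
    if (!CreatePipe(&stdinRead, &gui.stdinWrite, &sa, 0))
        return false;
    // The child must not inherit our write end of the pipe.
    SetHandleInformation(gui.stdinWrite, HANDLE_FLAG_INHERIT, 0);

    STARTUPINFOA si{};
    si.cb = sizeof(si);
    si.dwFlags = STARTF_USESTDHANDLES;
    si.hStdInput = stdinRead;                              // child reads commands from stdin

    std::string cmd = "python pyqt_gui.py";                // hypothetical entry point
    BOOL ok = CreateProcessA(nullptr, &cmd[0], nullptr, nullptr,
                             TRUE, 0, nullptr, nullptr, &si, &gui.pi);
    CloseHandle(stdinRead);                                // the child holds its own copy now
    return ok == TRUE;
}

// Send one line of "protocol" to the child; the PyQt side just reads sys.stdin.
void SendCommand(ChildGui &gui, const std::string &line)
{
    std::string msg = line + "\n";
    DWORD written = 0;
    WriteFile(gui.stdinWrite, msg.data(), static_cast<DWORD>(msg.size()), &written, nullptr);
}
```

On the Python side the PyQt process can simply read lines from sys.stdin, or you can use Python-level IPC (multiprocessing.connection, a named pipe) as suggested above; the key point is that each GUI gets its own process and its own message loop.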

One solution is to build the QtWinMigrate module, whose QWinHost supports parenting to a native HWND, but unfortunately it is not part of the PyQt distribution.
You can find some sources here: https://github.com/glennra/PyQtWinMigrate.
This is what had to be done for the Python integration in 3ds Max by Blur Studio. I am currently studying the C++ source code of QWinWidget to see if I can work out an alternative solution using Win32 calls.
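For reference, the bare Win32-call route boils down to reparenting the Qt window's native handle. Below is a minimal, untested sketch of that idea only; QWinHost/QWinWidget do considerably more (focus handling and keyboard-message forwarding in particular, which is exactly where the original problem lives), so treat this as a starting point, not a solution.

```cpp
#include <windows.h>
#include <QWidget>

// Untested sketch: embed a Qt top-level widget inside an existing native HWND.
// This skips everything QWinWidget does for focus and message forwarding.
void EmbedQtWidget(QWidget *qtWidget, HWND nativeParent)
{
    // winId() forces creation of the native window behind the Qt widget.
    HWND child = reinterpret_cast<HWND>(qtWidget->winId());

    SetParent(child, nativeParent);

    // Turn the top-level window into a child window.
    LONG style = GetWindowLong(child, GWL_STYLE);
    style = (style & ~(WS_POPUP | WS_CAPTION)) | WS_CHILD;
    SetWindowLong(child, GWL_STYLE, style);

    RECT rc;
    GetClientRect(nativeParent, &rc);
    MoveWindow(child, 0, 0, rc.right, rc.bottom, TRUE);
    qtWidget->show();
}
```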

Related

Is it possible to communicate between electron browser and form application window?

I'm a newbie to the "Electron" framework.
I need to create a POC showing communication between an "Electron" browser window and a local Windows Forms application. I'm aware that "Electron" has abilities for "investigating" the machine and maybe even finding out which apps (the forms application, in my case) are running. I'm also aware of the inter-process communication (IPC) it has, but it doesn't seem to help me.
I would like to know whether I can click a button in the "Electron" browser window (BrowserWindow) and trigger some response (writing to a text box, for example) in an already running forms application.
Thanks
IPC is only for communication between Electron's main process and its renderer processes. It can't be used to let Electron talk to other applications (Firefox, for example). Electron is basically a Chromium browser that uses Node.js to interact with local OS resources.
I am not sure it is possible to do what you want to do unless you create custom Addons in C/C++ to sit between the OS and all running applications. If that isn't your thing, check NPM for something that might exist already.
Out of the box, the closest things you will get are Node's Child Process and command line options, but they won't do what you want them to do based on my understanding of your question.

Embedding CEF3 with existing application

I have a running Win32 application. There is a window in this application where I want to show web content using CEF3, but I am running into problems: the window just turns white and never shows any web page content. So I have the following questions:
Is it possible to use CEF3 with the application's existing message loop? I don't want to call the CEF message loop, as it may impact other things in my application.
Is it absolutely necessary to use a message window as in the sample application? I am not able to understand its purpose.
When CEF3 launches multiple processes, how does it show in the task manager? If my application name is A.exe, does it show A.exe multiple times in task manager?
Any help is much appreciated.
On Windows it is possible to use a multi-threaded message loop (see CefSettings); that lets CEF maintain the browser windows via its own message loop. But good practice is to use a single-threaded message loop: you can call CefDoMessageLoopWork() periodically, on idle or on certain events. That works even with an existing message loop.
I'm not sure what you mean.
CefSettings.BrowserSubprocessPath specifies which executable will be used for the child processes. Since you are integrating CEF into another process, that looks like one possible solution, and in Task Manager you will see the child processes under whatever name you gave that executable.
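A minimal sketch of the single-threaded variant, assuming CEF3's C++ API: keep your existing Win32 loop and pump CEF with CefDoMessageLoopWork() instead of handing control to CefRunMessageLoop().

```cpp
#include <windows.h>
#include "include/cef_app.h"

// Host-owned message loop that also drives CEF.
int RunHostMessageLoop()
{
    MSG msg{};
    for (;;) {
        // Drain any pending Win32 messages first.
        while (PeekMessage(&msg, nullptr, 0, 0, PM_REMOVE)) {
            if (msg.message == WM_QUIT)
                return static_cast<int>(msg.wParam);
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        // Give CEF a slice of work. This must be called regularly
        // (on idle, on a timer, or when CEF asks for it).
        CefDoMessageLoopWork();

        // Sleep briefly so an idle loop does not spin the CPU.
        MsgWaitForMultipleObjects(0, nullptr, FALSE, 10, QS_ALLINPUT);
    }
}
```

If your CEF version supports it, CefSettings.external_message_pump together with CefBrowserProcessHandler::OnScheduleMessagePumpWork() can tell you exactly when to call CefDoMessageLoopWork(), which avoids the fixed-interval polling in this sketch.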
About question number 2:
every Windows application has its own "main window" and WndProc that receives all the messages sent by its children.
The Win32 cefclient sample shows how to integrate the CEF message loop into the application's message loop.
If you don't handle and dispatch CEF's messages properly, the browser window stays white.

WPF application calls an API that needs a message pump; Dispatcher.Run() causes problems

I have a WPF app that uses a non-WPF vendor library. My app does not receive any events that the library fires. I've been told that this is because I need a message pump.
In another (very similar) question, the accepted answer suggested using System.Windows.Threading.Dispatcher.Run().
When I add that call, however, my window never appears; the app is effectively backgrounded and I have to shut it down with Task Manager.
I'm really stumped here, and I'm not even sure how to investigate it. Any help would be greatly appreciated.
You already have one if you use WPF; there's no other way it could receive any Windows notifications. Every WPF app starts life with a call to Application.Run() on the main thread. It is usually well hidden, auto-generated in the bin\debug\app.g.cs source file. Application.Run() in turn calls Dispatcher.Run().
Your vendor is correct: without a message loop, many COM components go catatonic. But since you already have one, you need to look for the problem elsewhere. Don't use the component on worker threads.

WPF Application that Listens for Commands Even When not the Active Application

I would like to configure a WPF application to function in a similar way to SlickRun. I would like to be able to minimize the application to the taskbar, then while in any other program, press a key command (ex: ALT + X) and have my application appear to the user.
Can someone point me in the right direction?
Your best bet is to use RegisterHotKey(). It works by sending the WM_HOTKEY message to the HWND you pass in. Since WPF doesn't expose its windows' message loop to developers, you'll probably need to get your hands dirty with some interop and create a message-only window to receive the hot-key messages.
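The plumbing underneath looks roughly like this in plain Win32 terms. This is only a sketch of the native mechanism; in WPF you would reach the same calls through P/Invoke (or hook an HwndSource) rather than registering a raw window class.

```cpp
#include <windows.h>

constexpr int HOTKEY_ID = 1;

// Window procedure for the hidden, message-only window.
LRESULT CALLBACK HotkeyWndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    if (msg == WM_HOTKEY && wp == HOTKEY_ID) {
        // Restore / activate your main application window here.
        MessageBeep(MB_OK);
        return 0;
    }
    return DefWindowProcW(hwnd, msg, wp, lp);
}

// Creates a message-only window (HWND_MESSAGE) and registers ALT+X globally.
HWND CreateHotkeyWindow(HINSTANCE hInst)
{
    WNDCLASSW wc{};
    wc.lpfnWndProc   = HotkeyWndProc;
    wc.hInstance     = hInst;
    wc.lpszClassName = L"HotkeyListener";
    RegisterClassW(&wc);

    // HWND_MESSAGE parent = message-only window: no UI, just a WndProc.
    HWND hwnd = CreateWindowExW(0, L"HotkeyListener", L"", 0,
                                0, 0, 0, 0, HWND_MESSAGE, nullptr, hInst, nullptr);
    RegisterHotKey(hwnd, HOTKEY_ID, MOD_ALT, 'X');   // ALT + X, system-wide
    return hwnd;
}
```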

Is there a way for my binary to react to some global hotkeys in Linux?

Is it possible to listen for a certain hotkey (e.g:Ctrl-I) and then perform a specific action? My application is written in C, will only run on Linux, and it doesn't have a GUI. Are there any libraries that help with this kind of task?
EDIT: as an example, Amarok has global shortcuts, so if you map a combination of keys to an action (say Ctrl-+, i.e. Ctrl and +), you can execute that action whenever you press those keys. If I mapped Ctrl-+ to the volume-increase action, each time I pressed Ctrl-+ the volume would increase by a certain amount.
Thanks
How global do your hotkeys need to be? Is it enough for them to be global within an X session? In that case you should be able to open an Xlib connection and listen for the events you need.
Ordinarily, keyboard events in X are delivered to the window that currently has focus and propagated up the hierarchy until they are handled. Clearly that is not what we want; we need to process the event before any other window can get to it. To accomplish this, we call XGrabKey on the root window with the keycode and modifiers of our hotkey.
I found a good example here.
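A minimal Xlib sketch of that approach, grabbing Ctrl-I on the root window (build with -lX11; a real program would also grab the NumLock/CapsLock modifier variants, which this omits):

```cpp
#include <X11/Xlib.h>
#include <X11/keysym.h>
#include <cstdio>

int main()
{
    Display *dpy = XOpenDisplay(nullptr);
    if (!dpy)
        return 1;
    Window root = DefaultRootWindow(dpy);

    KeyCode keycode = XKeysymToKeycode(dpy, XK_i);
    unsigned int modifiers = ControlMask;

    // Grab Ctrl-I on the root window so we see it regardless of which window has focus.
    XGrabKey(dpy, keycode, modifiers, root, True, GrabModeAsync, GrabModeAsync);
    XSelectInput(dpy, root, KeyPressMask);

    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);
        if (ev.type == KeyPress)
            std::printf("Ctrl-I pressed\n");
    }
}
```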
I think smoofra is on the right track here; you're looking to register a global hotkey with X so that you can intercept keypresses and take the appropriate action. Xlib is probably what you want, and XGrabKey is the function, I think.
It's not easy to learn, I'm afraid; I did locate this example that seems useful: TinyWM. I also found an example using Java/JNI (accessing the same underlying Xlib function).
You should look at the source code of xbindkeys.
Xlib programming is pretty arcane, documentation is hard to find, and there are subtle portability issues. You'll be better off copying some battle-hardened code.
One way to do it is to have your application listen on a certain port, or socket file, for incoming requests.
Then you can write a small client application that connects to that port or socket file and sends commands to the running application.
Then you can configure your window manager to bind certain key combinations to launch your small client app.
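A rough POSIX sketch of the daemon side of that layout, with an assumed socket path of /tmp/myapp.sock; the "client" can be as small as a shell one-liner (e.g. `echo volume-up | nc -U /tmp/myapp.sock`) bound to a hotkey in your window manager.

```cpp
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main()
{
    const char *path = "/tmp/myapp.sock";   // assumed socket path
    unlink(path);                           // remove a stale socket from a previous run

    int srv = socket(AF_UNIX, SOCK_STREAM, 0);
    sockaddr_un addr{};
    addr.sun_family = AF_UNIX;
    std::strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
    bind(srv, reinterpret_cast<sockaddr *>(&addr), sizeof(addr));
    listen(srv, 4);

    for (;;) {
        int client = accept(srv, nullptr, nullptr);
        char buf[256] = {};
        ssize_t n = read(client, buf, sizeof(buf) - 1);
        if (n > 0)
            std::printf("received command: %s\n", buf);   // dispatch to your action here
        close(client);
    }
}
```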
In UNIX, your access to a commandline shell is via a terminal. This harks back to the days when folks accessed their big shared computers literally via terminals connected directly to the machines (e.g. by a serial cable).
In fact, the 'xterm' program or whatever derivative you use on your UNIX box is properly referred to as a terminal emulator - it behaves (from both your point of view and that of the operating system) much like one of those old-fashioned terminal machines.
This makes it slightly complicated to handle input in interesting ways, since there are lots of different kinds of terminals, and your UNIX system has to know about the capabilities of each kind. These capabilities were traditionally stored in a termcap file, and I think more modern systems use terminfo instead. Try
man 5 terminfo
on a Linux system for more information.
Now, the good news is that you don't need to do too much messing about with terminal capabilities, etc. to have a commandline application that does interesting things with input or windowing features. There's a library, curses, that will help. Look up
man 3 ncurses
on your Linux system for more information. You will probably be able to find a decent tutorial on using curses online.
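As a tiny taste of curses (assuming ncurses is installed; build with -lncurses), this reads single keypresses without waiting for Enter. Note that inside a terminal, Ctrl-I arrives as the control code 0x09, which is the same byte as Tab.

```cpp
#include <ncurses.h>

int main()
{
    initscr();              // start curses mode
    raw();                  // pass Ctrl-<key> through instead of generating signals
    keypad(stdscr, TRUE);   // enable function keys and arrows
    noecho();               // don't echo typed characters

    printw("Press Ctrl-I (or q to quit)\n");
    int ch;
    while ((ch = getch()) != 'q') {
        if (ch == ('i' & 0x1f))          // Ctrl-I == 0x09 (same as Tab)
            printw("Ctrl-I pressed\n");
        refresh();
    }
    endwin();               // restore the terminal
    return 0;
}
```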
