I'm writing a cross-platform application that, under certain conditions, takes control of all user input for a period of time.
On GNU/Linux I've used GTK+, which lets me retrieve mouse and keyboard events such as movement or presses. That's something I need, as my application responds to them. It also has a small graphical interface created with GTK+.
I've been trying to grab mouse input on Windows without success: GTK works well graphically, but it does not grab user input. I've tried using BlockInput(), but it does not work as expected because:
I need administrator privileges to run the application
I can't read mouse or keyboard input
Is there a way to grab mouse and keyboard input on Windows, and still be able to read that input, without administrative rights?
I finally found a solution that fits my requirements. One of Marc's links guided me to the use of hooks on Windows which I had already tried with no success, but I ended up implementing them for both keyboard and mouse grabbing.
My Windows code uses the Windows API, and when I need to block input I create a thread which calls a function:
DWORD dwThread;
CreateThread(NULL, 0, (LPTHREAD_START_ROUTINE)MouseHooker, NULL, 0, &dwThread);
Then I install the hook:
DWORD WINAPI MouseHooker(LPVOID lpParameter) {
    HINSTANCE hExe = GetModuleHandle(NULL);
    //The thread's parameter is the first command line argument, which is the path to our executable.
    if (!hExe) //If it fails we will try to actually load ourselves as a library.
        hExe = LoadLibrary((LPCSTR) lpParameter);
    if (!hExe)
        return 1;
    //Install the hook as a low-level mouse hook that calls MouseEvent
    hMouseHook = SetWindowsHookEx(WH_MOUSE_LL, (HOOKPROC)MouseEvent, hExe, 0);
    ...
    UnhookWindowsHookEx(hMouseHook);
    return 0;
}
And on each mouse event this code gets called:
if (nCode == HC_ACTION && ...) { //HC_ACTION means we may process this event; we can also check for specific mouse messages here
    //We block mouse input here and do our thing
}
//return CallNextHookEx(hMouseHook, nCode, wParam, lParam);
return 1;
Since we do not continue with the hook chain, the input never gets processed and the workstation stays blocked.
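For reference, here is a rough sketch of the pieces not shown above; the bBlocking flag and the message loop in the trailing comment are illustrative additions, not my exact code. A low-level hook needs a message loop on the installing thread, and the hook procedure decides whether to swallow each event:
static HHOOK hMouseHook;
static volatile BOOL bBlocking = TRUE;

LRESULT CALLBACK MouseEvent(int nCode, WPARAM wParam, LPARAM lParam)
{
    if (nCode == HC_ACTION && bBlocking) {
        //The mouse position is available in ((MSLLHOOKSTRUCT *)lParam)->pt if we need it.
        //Not calling CallNextHookEx() swallows the event.
        return 1;
    }
    return CallNextHookEx(hMouseHook, nCode, wParam, lParam);
}

//Inside MouseHooker, between SetWindowsHookEx and UnhookWindowsHookEx,
//the thread has to pump messages or the low-level hook never fires:
//MSG msg;
//while (GetMessage(&msg, NULL, 0, 0) > 0) {
//    TranslateMessage(&msg);
//    DispatchMessage(&msg);
//}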
The code runs on Windows 7 as expected. I kept using GTK+ on Windows since I can still build my GUI and retrieve mouse input with GDK.
On GNU/Linux the code uses only the GTK+ libraries, as I had no issues grabbing input there.
I am trying to create a mouse upper-level filter driver using Visual Studio 2015 in an empty Kernel-Mode Driver Framework (KMDF) project in C. I am also using Microsoft's sample driver moufiltr from GitHub as a head start, whose .inf file I modified to work with USB mice.
One behaviour I intend to implement in the filter driver is that even when the mouse is idle and not moving, the cursor should keep moving in the direction it was previously moving. Originally I planned to use moufiltr's callback function to implement this:
VOID
MouFilter_ServiceCallback(
    IN PDEVICE_OBJECT DeviceObject,
    IN PMOUSE_INPUT_DATA InputDataStart,
    IN PMOUSE_INPUT_DATA InputDataEnd,
    IN OUT PULONG InputDataConsumed
    )
/*++
Routine Description:
    Called when there are mouse packets to report to the RIT. You can do
    anything you like to the packets.
Arguments:
    DeviceObject - Context passed during the connect IOCTL
    InputDataStart - First packet to be reported
    InputDataEnd - One past the last packet to be reported. Total number of
                   packets is equal to InputDataEnd - InputDataStart
    InputDataConsumed - Set to the total number of packets consumed by the RIT
                        (via the function pointer we replaced in the connect
                        IOCTL)
Return Value:
    Status is returned.
--*/
{
    PDEVICE_EXTENSION devExt;
    WDFDEVICE hDevice;

    hDevice = WdfWdmDeviceGetWdfDeviceHandle(DeviceObject);
    devExt = FilterGetData(hDevice);

    /*********
    My Code to modify mouse packets goes here
    *********/

    //
    // UpperConnectData must be called at DISPATCH
    //
    (*(PSERVICE_CALLBACK_ROUTINE)devExt->UpperConnectData.ClassService)(
        devExt->UpperConnectData.ClassDeviceObject,
        InputDataStart,
        InputDataEnd,
        InputDataConsumed
        );
}
By modifying the coordinates in the packets from InputDataStart up to InputDataEnd, I could simulate the movement I intended. However, while testing I realized that this function is only called when the mouse changes position or moves, rather than every time the mouse is polled, as I had assumed.
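For illustration, this is roughly what I had in mind for the region marked "My Code to modify mouse packets goes here"; the doubling factor is just an example, and only the relative-movement case is sketched:
PMOUSE_INPUT_DATA pData;
for (pData = InputDataStart; pData < InputDataEnd; pData++) {
    if (pData->Flags & MOUSE_MOVE_ABSOLUTE) {
        continue; //leave absolute packets alone in this sketch
    }
    //LastX and LastY hold the relative deltas for this packet
    pData->LastX *= 2;
    pData->LastY *= 2;
}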
To work around the callback only firing on movement, I started searching for ways to register a function, either through the kernel or the Windows API, that would be called every time the mouse is polled, similar to how the function above is called when there are packets to process. But I haven't found much documentation on how I could do that. Is there a way to do this? Or should I find an alternative way to implement this functionality?
I'm using the SetWindowPos function for an automation task to show a window. I know that there are two ways that Windows provides to do this:
Synchronously: SetWindowPos or ShowWindow.
Asynchronously: SetWindowPos with SWP_ASYNCWINDOWPOS or ShowWindowAsync.
Now, I'd like to get the best of both worlds: I want to be able to show the window synchronously, because I'd like it to be done when the function returns. But I don't want the call to hang my process - if it takes too long, I want to be able to abort the call.
Now, while looking for an answer, the only thing I could come up with is using a separate thread and SendMessageTimeout, but even then, if the thread hangs, there's not much I can do to end it except TerminateProcess, which is not a clean solution.
I have also seen this answer, but as far as I understand, it has no native WinAPI equivalent.
The answer in the question you linked to simply loops until either the desired condition occurs or the timeout expires. It uses Sleep() every iteration to avoid hogging the processor. So a version for WinAPI can be written quite simply, as follows:
bool ShowWindowAndWait(HWND hWnd, DWORD dwTimeout) {
    if (IsWindowVisible(hWnd)) return true;
    if (!ShowWindowAsync(hWnd, SW_SHOW)) return false;
    DWORD dwTick = GetTickCount();
    do {
        if (IsWindowVisible(hWnd)) return true;
        Sleep(15);
    } while (dwTimeout != 0 && GetTickCount() - dwTick < dwTimeout);
    return false;
}
Unfortunately I think this is the best you're going to get. SendMessageTimeout can't actually be used for this purpose because (as far as I know anyway) there's no actual message you could send with it that would cause the target window to be shown. ShowWindowAsync and SWP_ASYNCWINDOWPOS both work by scheduling internal window events, and this API isn't publicly exposed.
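For completeness, a call site might look like this (hTarget is assumed to be a valid top-level window handle):
// Give the window up to two seconds to become visible.
if (!ShowWindowAndWait(hTarget, 2000)) {
    // Timed out or ShowWindowAsync failed; decide whether to retry or abort here.
}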
I'm trying to use a multimedia timer (mmTimer) with a callback function, which is a static CALLBACK function.
I know that a static function cannot call a non-static member function (thanks to you all, guys), except when the static function gets a pointer to an object as an argument.
The weird thing is that my timer works fine in release mode, but when I try to run it in debug mode there is an unhandled exception that pops up and crashes the program.
void CMMTimerDlg::TimerProc(UINT uID, UINT uMsg, DWORD dwUser, DWORD dw1, DWORD dw2)
{
    CMMTimerDlg* p = (CMMTimerDlg*)dwUser;
    if(p)
    {
        p->m_MMTimer += p->m_TimeDelay;
        p->UpdateData(FALSE);
    }
}
My questions are: is there any way to resolve this problem? And if this error occurs in debug mode, what guarantees that it won't happen once I release the program?
Here is where the program stops:
#ifdef _DEBUG
void CWnd::AssertValid() const
{
    if (m_hWnd == NULL)
        return; // null (unattached) windows are valid

    // check for special wnd??? values
    ASSERT(HWND_TOP == NULL); // same as desktop
    if (m_hWnd == HWND_BOTTOM)
        ASSERT(this == &CWnd::wndBottom);
    else if (m_hWnd == HWND_TOPMOST)
        ASSERT(this == &CWnd::wndTopMost);
    else if (m_hWnd == HWND_NOTOPMOST)
        ASSERT(this == &CWnd::wndNoTopMost);
    else
    {
        // should be a normal window
        ASSERT(::IsWindow(m_hWnd));

        // should also be in the permanent or temporary handle map
        CHandleMap* pMap = afxMapHWND();
        ASSERT(pMap != NULL);
When it gets to pMap, it stops at that assertion.
Here is the static CALLBACK function:
static void CALLBACK TimerProc(UINT uID, UINT uMsg, DWORD dwUser, DWORD dw1, DWORD dw2);
Here is how I set the timer:
UINT unTimerID = timeSetEvent(m_TimeDelay,1,(LPTIMECALLBACK)TimerProc,(DWORD)this,TIME_PERIODIC);
The problem here is that the multimedia timer API, unlike many others, has restrictions on what you are allowed to do inside the callback. You are basically not allowed to do much: you may update internal structures, do some debug output, and set a synchronization event.
Remarks
Applications should not call any system-defined functions from inside
a callback function, except for PostMessage, timeGetSystemTime,
timeGetTime, timeSetEvent, timeKillEvent, midiOutShortMsg,
midiOutLongMsg, and OutputDebugString.
Assertion failures try to display message boxes, which are not allowed and can eventually crash the process. Additionally, windowing APIs such as IsWindow and friends are not allowed either, and calling them is what leads to the assertion failures in the first place.
The best option here is to avoid multimedia timers altogether. In most cases you have less restrictive alternatives.
It only looks like your code works in the Release build; it simply won't assert() to tell you whether you are doing it right. And you are not doing it right.
The callback from a multi-media timer runs on an arbitrary thread-pool thread. You have to be very careful about what you do in the callback. For one, you cannot directly touch the UI, that code is fundamentally thread-unsafe. So you most certainly cannot call UpdateData(). At best, you update a variable and let the UI thread know that it needs to refresh the window. Use PostMessage(). In general you need a critical section to ensure that your callback doesn't update that variable while the UI thread is using it to update the window.
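A minimal sketch of that pattern, reusing the names from the question; the WM_APP + 1 message and the OnTimerTick handler are assumptions, not existing code:
void CALLBACK CMMTimerDlg::TimerProc(UINT uID, UINT uMsg, DWORD dwUser, DWORD dw1, DWORD dw2)
{
    CMMTimerDlg* p = (CMMTimerDlg*)dwUser;
    if (p)
    {
        // Touch plain data only; guard it with a critical section if the UI thread also writes it.
        p->m_MMTimer += p->m_TimeDelay;
        // PostMessage is one of the few calls allowed inside the callback.
        ::PostMessage(p->GetSafeHwnd(), WM_APP + 1, 0, 0);
    }
}

// UI thread side, wired up with ON_MESSAGE(WM_APP + 1, &CMMTimerDlg::OnTimerTick):
LRESULT CMMTimerDlg::OnTimerTick(WPARAM, LPARAM)
{
    UpdateData(FALSE); // safe here: this runs on the UI thread
    return 0;
}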
The assert you get in the Debug build suggests more trouble. It looks like you are not making sure that the timer can no longer call back when the user closes the window. That's pretty hard to solve cleanly, it is a fundamental threading race. PostMessage() will already keep you out of the worst trouble. To do it perfectly cleanly, you must prevent the window from closing until you know that the timer cannot call back anymore. Which requires setting an event when you get WM_CLOSE and not calling DestroyWindow. The timer's callback needs to check that event, call timeKillEvent() and post another message. Which the UI thread can then use to really close the window.
Threading is hard, do make sure that SetTimer() isn't already good enough to get the job done. It certainly will be if the UI update is the only side-effect. You only need timeSetEvent() when you require an accurate timer that needs to do something that is not UI related. Human eyes just don't have that requirement. Only our ears do.
I've got an embedded device running Linux/X11 that is connected to a device that provides touch events over a USB connection. This device is not recognized as any form of standard pointer/mouse input. What I'm trying to do is find a way to "inject" mouse events into X11 when the external device reports an event.
Doing so would remove the need for my application (written in C using GTK+) to fake mouse presses with GTK+ calls.
If this can be done my Gtk+ application would not need to know or care about the device generating the touch events. It would just appear to the application as standard mouse events.
Anybody know how to go about inserting synthetic mouse events into X11?
Right now I'm doing the following which works, but isn't optimal.
GtkWidget *btnSpin; /* sample button */
gboolean buttonPress_cb( void *btn );
gboolean buttonDePress_cb( void *btn );
/* Make this call after the device library calls the TouchEvent_cb() callback
   and the application has determined which, if any, button was touched.
   In this example we are assuming btnSpin was touched.
   This function will, in 5ms, begin the process of causing the button to do its
   normal animation (button in, button out effects) and then send the actual
   button_clicked event to the button.
*/
g_timeout_add(5, (GSourceFunc) buttonPress_cb, (void *)btnSpin);
/* This callback fires 5ms after the g_timeout_add() call above.
   It first sets the button state to ACTIVE to begin the animation cycle (pressed look),
   and then 250ms later calls buttonDePress_cb, which will make the button look un-pressed
   and then send the button_clicked event.
*/
gboolean buttonPress_cb( void *btn )
{
    gtk_widget_set_state((GtkWidget *)btn, GTK_STATE_ACTIVE);
    g_timeout_add(250, (GSourceFunc) buttonDePress_cb, btn);
    return( FALSE );
}

/* Sets the button state back to NORMAL (not-pressed look)
   and sends the button_clicked event so that the registered signal handler for the
   button can be activated
*/
gboolean buttonDePress_cb( void *btn )
{
    gtk_widget_set_state( (GtkWidget *)btn, GTK_STATE_NORMAL);
    gtk_button_clicked( GTK_BUTTON( btn ));
    return( FALSE );
}
The Linux input system has a facility for user-space implementation of input devices called uinput. You can write a background program that uses your device's callback library to send input events to the kernel. The X server (assuming it is using the evdev input module) would then process these just as any other mouse event.
There's a library called libsuinput that makes this fairly easy to do. It even includes an example mouse input program that you can probably use as a model. However, since your device is a touch-based device it will probably use absolute axes (ABS_X, ABS_Y) instead of relative (REL_X, REL_Y).
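In case it helps, here is a rough one-shot sketch of what such a background program could look like using the raw /dev/uinput interface instead of libsuinput; the device name, axis ranges and coordinates are made up, and error checking is omitted:
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/uinput.h>

static void emit(int fd, int type, int code, int value)
{
    struct input_event ev;
    memset(&ev, 0, sizeof(ev));
    ev.type = type;
    ev.code = code;
    ev.value = value;
    write(fd, &ev, sizeof(ev));
}

int main(void)
{
    int fd = open("/dev/uinput", O_WRONLY | O_NONBLOCK);

    /* Advertise the capabilities: one button plus absolute X/Y axes. */
    ioctl(fd, UI_SET_EVBIT, EV_KEY);
    ioctl(fd, UI_SET_KEYBIT, BTN_LEFT);
    ioctl(fd, UI_SET_EVBIT, EV_ABS);
    ioctl(fd, UI_SET_ABSBIT, ABS_X);
    ioctl(fd, UI_SET_ABSBIT, ABS_Y);

    struct uinput_user_dev dev;
    memset(&dev, 0, sizeof(dev));
    strcpy(dev.name, "fake-touch");
    dev.id.bustype = BUS_USB;
    dev.absmin[ABS_X] = 0; dev.absmax[ABS_X] = 1023;
    dev.absmin[ABS_Y] = 0; dev.absmax[ABS_Y] = 1023;
    write(fd, &dev, sizeof(dev));
    ioctl(fd, UI_DEV_CREATE);

    sleep(1); /* give the X server time to pick up the new device */

    /* Report one touch at (512, 384): move, press, release. */
    emit(fd, EV_ABS, ABS_X, 512);
    emit(fd, EV_ABS, ABS_Y, 384);
    emit(fd, EV_SYN, SYN_REPORT, 0);
    emit(fd, EV_KEY, BTN_LEFT, 1);
    emit(fd, EV_SYN, SYN_REPORT, 0);
    emit(fd, EV_KEY, BTN_LEFT, 0);
    emit(fd, EV_SYN, SYN_REPORT, 0);

    ioctl(fd, UI_DEV_DESTROY);
    close(fd);
    return 0;
}
A real program would of course loop, translating each callback from your device's library into the corresponding events.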
There are several methods.
Use XSendEvent. Caveat: some application frameworks ignore events sent with XSendEvent. I think Gtk+ doesn't, but I have not checked.
Use XTestFakeMotionEvent and XTestFakeButtonEvent (see the sketch after this list). You need the XTest extension on your X server.
Write a kernel driver for your device so that it will appear as a mouse/touchpad.
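A minimal sketch of option 2 (compile with -lX11 -lXtst; the coordinates are arbitrary):
#include <X11/Xlib.h>
#include <X11/extensions/XTest.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy)
        return 1;

    /* Move the pointer to (100, 200) on the current screen, then click button 1. */
    XTestFakeMotionEvent(dpy, -1, 100, 200, CurrentTime);
    XTestFakeButtonEvent(dpy, 1, True, CurrentTime);
    XTestFakeButtonEvent(dpy, 1, False, CurrentTime);
    XFlush(dpy);

    XCloseDisplay(dpy);
    return 0;
}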
The coolest thing would be to implement a device driver inside the kernel that creates a /dev/input/eventX file which speaks the evdev protocol. I recommend reading the book Linux Device Drivers if you want to do this. The book is freely available on the web.
If you want to do this in user space, I suggest using Xlib (or XCB). With plain Xlib (C language), you can use the X Test Extension or XSendEvent().
There's also a binary called xte from the xautomation package (on Debian, sudo apt-get install xautomation and then man xte). xte is very easy to use, and you can also look at its source code to learn how to use the X Test Extension.
Pointers:
http://lwn.net/Kernel/LDD3/
http://cgit.freedesktop.org/xorg/lib/libXtst/
http://cgit.freedesktop.org/xcb/xpyb/
http://hoopajoo.net/projects/xautomation.html
http://linux.die.net/man/1/xte
It seems that, after a bit more research, GTK+'s underlying GDK library can do what I want without my having to delve deeply into X11 coding or write a kernel driver. Although, if I had the time, I would prefer to write a Linux kernel mouse driver.
Using the GDK 2 Reference Manual I found I can do the following:
Use gdk_event_put() to append a GdkEvent of type GdkEventButton
The structure for a GdkEventButton is:
struct GdkEventButton {
    GdkEventType type;
    GdkWindow *window;
    gint8 send_event;
    guint32 time;
    gdouble x;
    gdouble y;
    gdouble *axes;
    guint state;
    guint button;
    GdkDevice *device;
    gdouble x_root, y_root;
};
Most of these fields will be trivial to fill in with the exception of:
gdouble x;
the x coordinate of the pointer relative to the window.
gdouble y;
the y coordinate of the pointer relative to the window.
GdkDevice *device;
the device where the event originated.
gdouble x_root;
the x coordinate of the pointer relative to the root of the screen.
gdouble y_root;
the y coordinate of the pointer relative to the root of the screen.
I will need to research how to convert the screen-root coordinates to window-relative coordinates.
*device - I'm not sure if I need to use this field (or can set it to NULL), because it is for an extended input device. However, if I do need a valid device here, I should be able to use gdk_devices_list() (see the sketch below).
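To make this concrete, here is an untested sketch of filling in such an event for a widget and queueing it with gdk_event_put(); inject_button_press() is just an illustrative helper name, and the widget must already be realized:
#include <gtk/gtk.h>

static void inject_button_press(GtkWidget *widget, gdouble x, gdouble y)
{
    GdkEvent *event = gdk_event_new(GDK_BUTTON_PRESS);
    GdkEventButton *button = (GdkEventButton *)event;
    gint ox = 0, oy = 0;

    gdk_window_get_origin(widget->window, &ox, &oy);

    button->window = g_object_ref(widget->window); /* gdk_event_free() drops this reference */
    button->send_event = TRUE;
    button->time = GDK_CURRENT_TIME;
    button->x = x;              /* window-relative coordinates */
    button->y = y;
    button->x_root = ox + x;    /* screen-relative coordinates */
    button->y_root = oy + y;
    button->state = 0;
    button->button = 1;         /* left button */
    button->device = gdk_device_get_core_pointer();

    gdk_event_put(event);       /* copies the event onto the GDK event queue */
    gdk_event_free(event);
}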
I have an application that uses GTK+ to display a nice GUI, but I am using SDL to display a small RGB frame buffer inside GTK+.
I have used the following code to get SDL into GTK+:
char SDL_windowhack[32];
sprintf(SDL_windowhack, "SDL_WINDOWID=%ld", GDK_WINDOW_XWINDOW(deviceWindow->window));
putenv(SDL_windowhack);
Unfortunately, I also use SDL for keyboard and mouse events. The main thread that uses SDL to update the image spawns the following thread:
void *SDLEvent(void *arg)
{
    SDL_Event event;
    while (1) {
        fprintf(stderr, "Test\n");
        SDL_WaitEvent(&event);
        switch (event.type) {
            /* ... */
        }
    }
}
I see that the print statement is executed twice, then never again. As soon as I terminate the thread that SDL uses to update the screen (display), the loop in SDLEvent starts executing very fast again.
This code used to work fine before I integrated SDL into GTK+, so I am thinking GTK+ may be blocking SDL in some way?
Does anyone have any suggestions please?
Thank you very much!
Although I have not used SDL, since you are looking for events it appears that you are running two event loops. GTK runs its own event loop, which handles events like the ones from the mouse and keyboard. I think you need to find a way to integrate the two. Some googling turned up the following link, where the section "Double event loop issue" addresses your problem (I think). Try adding the SDLEvent function as an idle function using g_idle_add, as suggested in the link, and see if it works.
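A minimal sketch of that approach would be to poll SDL from the GTK+ main loop instead of blocking in a separate thread (poll_sdl_events is just an illustrative name):
static gboolean poll_sdl_events(gpointer data)
{
    SDL_Event event;

    /* SDL_PollEvent() returns immediately, so the GTK+ main loop is never blocked. */
    while (SDL_PollEvent(&event)) {
        switch (event.type) {
            /* ... handle mouse/keyboard events ... */
            default:
                break;
        }
    }
    return TRUE; /* keep this handler installed */
}

/* During initialization, on the GTK+ main thread: */
g_idle_add(poll_sdl_events, NULL);
If the idle handler turns out to burn too much CPU, g_timeout_add() with a small interval is a drop-in alternative.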
Hope this helps!