I'm trying to write a WinForms program that will capture mouse coordinates upon pressing and (more importantly) releasing the middle mouse button.
My form has TopMost set to true (so that the text in it is always visible, even when it doesn't have focus).
What I'm aiming for is being able to hover the mouse over a game window after my program starts, hit the middle mouse button, and have it record the mouse position for later use.
I can get it to detect when the middle mouse button is clicked inside the form using the MouseUp event (bound to the form itself), but I have no clue what I need to do to have it detect when the middle mouse button is clicked outside my form.
Thanks for any help guys.
I believe what you are after are called low level hooks. A quick google brings up this: Erroneous Mouse Coordinates Returned from Low Level Mouse Hook C#
A Microsoft example of how to do this can be found here: http://support.microsoft.com/kb/318804
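A minimal sketch of such a hook in plain Win32 C (the same WH_MOUSE_LL mechanism the linked C# examples wrap), assuming you only care about the middle-button release; this is illustrative, not a complete program:

```c
#include <windows.h>
#include <stdio.h>

static HHOOK g_hook;

/* Called for every mouse event system-wide, regardless of which
   window is under the cursor. */
static LRESULT CALLBACK MouseProc(int code, WPARAM wParam, LPARAM lParam)
{
    if (code == HC_ACTION && wParam == WM_MBUTTONUP) {
        MSLLHOOKSTRUCT *info = (MSLLHOOKSTRUCT *)lParam;
        printf("middle button released at (%ld, %ld)\n",
               info->pt.x, info->pt.y);   /* screen coordinates */
    }
    return CallNextHookEx(g_hook, code, wParam, lParam);
}

int main(void)
{
    g_hook = SetWindowsHookEx(WH_MOUSE_LL, MouseProc,
                              GetModuleHandle(NULL), 0);
    /* A low-level hook needs a message loop on the installing thread. */
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    UnhookWindowsHookEx(g_hook);
    return 0;
}
```

In a WinForms app the same hook can be installed via P/Invoke, as the KB article shows.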
Related
I have a WSAWaitForMultipleEvents based loop, that (in addition to other stuff), triggers on keyboard and mouse events. Firstly setting:
SetConsoleMode(GetStdHandle(STD_INPUT_HANDLE), ENABLE_EXTENDED_FLAGS | ENABLE_PROCESSED_INPUT | ENABLE_MOUSE_INPUT);
And then using:
GetStdHandle(STD_INPUT_HANDLE);
as one of the event handles passed to WSAWaitForMultipleEvents.
Now this is working fine but I'm really only interested in the mouse position when clicked. But naturally it is triggering for mouse movement as well. Is there any way of excluding mouse movement from the event such that WSAWaitForMultipleEvents will only awaken on mouse click and will ignore mouse movement?
Is there any way of excluding mouse movement from the event such that WSAWaitForMultipleEvents will only awaken on mouse click and will ignore mouse movement?
No. Using ENABLE_MOUSE_INPUT means that the console handle satisfies the wait on any mouse activity within the console window while the window is focused. That includes both mouse movements and mouse clicks. The documentation even says as much. So, you will just have to receive and discard the console events that you are not interested in.
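A sketch of that receive-and-discard approach (Win32-only, illustrative): after the wait wakes on the console handle, drain the input buffer and act only on button events; the dwEventFlags check is what separates clicks from movement.

```c
#include <windows.h>
#include <stdio.h>

/* Drain pending console input, printing only mouse clicks and
   silently discarding movement and other records. */
static void drain_console_clicks(HANDLE hIn)
{
    INPUT_RECORD records[32];
    DWORD pending = 0;
    while (GetNumberOfConsoleInputEvents(hIn, &pending) && pending > 0) {
        DWORD read = 0;
        if (!ReadConsoleInput(hIn, records, 32, &read))
            break;
        for (DWORD i = 0; i < read; i++) {
            if (records[i].EventType != MOUSE_EVENT)
                continue;
            const MOUSE_EVENT_RECORD *m = &records[i].Event.MouseEvent;
            /* dwEventFlags == 0 means a button press/release;
               MOUSE_MOVED records are skipped here. */
            if (m->dwEventFlags == 0 && m->dwButtonState != 0) {
                printf("click at (%d, %d)\n",
                       m->dwMousePosition.X, m->dwMousePosition.Y);
            }
        }
    }
}
```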
I have a ListView with which I'd like to use a context menu that changes depending on selection. I'm making sure that I'm first able to display a menu when the right mouse button has been released (as per normal context menu behaviour).
In my ListView WNDPROC I'm using WM_CONTEXTMENU to display the context menu. The menu however is displayed at the location the cursor began the selection, not at the end.
From the MS documentation:
DefWindowProc generates the WM_CONTEXTMENU message when it processes the WM_RBUTTONUP or WM_NCRBUTTONUP message or when the user types SHIFT+F10. The WM_CONTEXTMENU message is also generated when the user presses and releases the VK_APPS key.
When I inspect the call stack, with a breakpoint in WM_CONTEXTMENU, I see that the message sent prior to WM_CONTEXTMENU was 0x0204 or WM_RBUTTONDOWN containing the coordinates of the cursor at this time. This probably explains the menu location issue, but why would this be happening?
When I hold the RMB down outside of the ListView and release it inside, the context menu still appears and I can see from the call stack that the last message was 0x0205 or WM_RBUTTONUP.
Not sure whether I have something wrong in my code, or I'm not understanding something. Any help on this issue would be greatly appreciated, thanks.
Rather than relying on the WM_RBUTTON(DOWN|UP) messages to determine the mouse coordinates, the lParam of WM_CONTEXTMENU itself carries the screen coordinates of the mouse from the message that generated it. If those coordinates are not what you are expecting, you can use GetMessagePos() instead, which reports the screen coordinates at the time WM_CONTEXTMENU was generated. Either way, you can then convert the screen coordinates into ListView client coordinates using ScreenToClient() or MapWindowPoints().
Just be sure you also handle the case where the popup menu is being invoked by the user via keyboard input rather than mouse click. In that case, the lParam of WM_CONTEXTMENU will carry the screen coordinates [x=-1,y=-1], and you can query the ListView for the position of its selected item(s) using LVM_GETITEMPOSITION or LVM_GETITEMRECT as needed, then convert that position to screen coordinates using ClientToScreen() or MapWindowPoints(), and then display the popup menu at that screen location.
I am trying to implement the following feature using C/GTK3/Cairo:
-Left-click on a GtkDrawingArea widget and printf the coordinates Xo and Yo.
-While keeping the left button down, move the mouse and draw a line connecting (Xo,Yo) to the current mouse position.
-Release the left mouse button and printf("something")
How do I do this? Does anyone know of a good tutorial showing how to handle mouse click-move events?
So far, the best I found was this ZetCode lines example (which shows how to handle mouse click events, but not button-down/move/button-up), and this, which explains how to change the mouse cursor when hovering over a widget.
Thanks
Did you see this GtkDrawingArea demo from the GTK people? This one is written in C, but there is a Python version of the same program (links updated - thanks @kyuuhachi).
Anyway, in the constructor (__init__), calls are connected to the motion_notify_event.
You also need to connect to the button_press_event and the button_release_event.
Then, on button press, you save the coordinates of the start point (and copy them to the end point too, which is the same for now).
On each motion_notify_event, you delete the previous line (by overwriting), and redraw it to the new end point.
Finally, when the button is released, the line is final.
It's much easier if you use a canvas widget, for example GooCanvas, which takes care of most of the updating. You can just update the coordinates of the line object and it will move itself; you can also easily remove lines. The 'algorithm' is similar to the above:
Connect button_press_event, button_release_event, and motion_notify_event to the canvas,
When a button press occurs, create a GooCanvas.polyline object and set its begin and end points,
Update the end point on each motion_notify_event,
Finalize on button_release_event.
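For the plain GtkDrawingArea route, the press/motion/release wiring might look like this in GTK3 C (a sketch, not a complete program; the widget creation and main loop in main() are assumed):

```c
#include <gtk/gtk.h>

/* Drag state: start point (x0_,y0_) and current end point (x1_,y1_). */
static gdouble x0_, y0_, x1_, y1_;
static gboolean dragging = FALSE;

static gboolean on_press(GtkWidget *w, GdkEventButton *e, gpointer data) {
    if (e->button == 1) {                 /* left button */
        x0_ = x1_ = e->x;
        y0_ = y1_ = e->y;
        dragging = TRUE;
        g_print("start: (%.0f, %.0f)\n", x0_, y0_);
    }
    return TRUE;
}

static gboolean on_motion(GtkWidget *w, GdkEventMotion *e, gpointer data) {
    if (dragging) {
        x1_ = e->x;
        y1_ = e->y;
        gtk_widget_queue_draw(w);         /* triggers a full redraw */
    }
    return TRUE;
}

static gboolean on_release(GtkWidget *w, GdkEventButton *e, gpointer data) {
    dragging = FALSE;
    g_print("something\n");
    return TRUE;
}

/* The whole area is repainted on each draw, so the previous line
   disappears automatically -- no explicit erasing needed. */
static gboolean on_draw(GtkWidget *w, cairo_t *cr, gpointer data) {
    if (dragging) {
        cairo_move_to(cr, x0_, y0_);
        cairo_line_to(cr, x1_, y1_);
        cairo_stroke(cr);
    }
    return FALSE;
}

/* In main(), enable the events and connect the handlers, e.g.:
   gtk_widget_add_events(area, GDK_BUTTON_PRESS_MASK |
       GDK_BUTTON_RELEASE_MASK | GDK_POINTER_MOTION_MASK);
   g_signal_connect(area, "button-press-event",   G_CALLBACK(on_press),   NULL);
   g_signal_connect(area, "motion-notify-event",  G_CALLBACK(on_motion),  NULL);
   g_signal_connect(area, "button-release-event", G_CALLBACK(on_release), NULL);
   g_signal_connect(area, "draw",                 G_CALLBACK(on_draw),    NULL); */
```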
In WinAPI is there a mouse move notification for the full desktop (full screen) and not for a window only?
I would like to receive mouse screen coordinates in my main window procedure.
Edit:
What I'm trying to do is get the coordinates from the mouse when dragging from a button in my window to a point outside that window.
Not as such, no. If you wanted to do something anywhere on the desktop from within your program, e.g. point somewhere or draw something anywhere, you could capture the mouse and then follow the movement until the mouse button is released. See SetCapture for this.
For an example, see this article on MSDN: Drawing Lines with the Mouse
Otherwise you can always use Windows hooks to follow mouse movements anywhere.
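A sketch of the SetCapture approach inside a window procedure (Win32 C, illustrative only): capture on button-down so WM_MOUSEMOVE keeps arriving after the cursor leaves the window, translate client to screen coordinates while dragging, and release on button-up.

```c
#include <windows.h>
#include <stdio.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg) {
    case WM_LBUTTONDOWN:
        SetCapture(hwnd);   /* mouse messages now follow this window
                               even outside its client area */
        return 0;
    case WM_MOUSEMOVE:
        if (GetCapture() == hwnd) {
            /* Cast to short: client coordinates can be negative
               while the mouse is captured outside the window. */
            POINT pt = { (short)LOWORD(lParam), (short)HIWORD(lParam) };
            ClientToScreen(hwnd, &pt);
            printf("dragging at screen (%ld, %ld)\n", pt.x, pt.y);
        }
        return 0;
    case WM_LBUTTONUP:
        if (GetCapture() == hwnd)
            ReleaseCapture();
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}
```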
You can set a mouse hook to be notified about all mouse events.
You can use GetCursorPos, or GetMessagePos, which returns the coordinates of the last retrieved message.
I have a slider control in my application and an image control. When the slider value goes up, the image is zoomed in, and vice versa.
In addition I have a zoom-in and a zoom-out button. The zoom-in button increases the slider value by one SmallChange and the zoom-out button decreases it.
What I want: while the zoom-in (and out) button is pressed, I want the slider to keep increasing (performing a zoom in), not just once.
In other words, as long as the left button is pressed, I want to keep performing the button's pressed action until the mouse button is released.
Oh, by the way, I'm using Silverlight 3.
Thanks.
You can use the RepeatButton class to do this. It raises its Click event repeatedly while held down, much like the thumb of a scrollbar or an up/down spinner.
http://msdn.microsoft.com/en-us/library/system.windows.controls.primitives.repeatbutton(VS.95).aspx