Is the hotspot of the WPF Cross cursor in the middle of the crosshair?

I'm having considerable difficulty with a high-precision, pixel-oriented image program in WPF, and I'm starting to suspect that the Cursors.Cross cursor's hotspot is not at its centre, as you would expect.
I'm debugging using Magnifier at 16x and mouse set to the lowest acceleration. The code is based on DrawTools from CodeProject.
Is this the same cursor as you get in WinForms? If so, I can look at that cursor's hotspot there - the Cursor class in System.Windows.Input doesn't expose a HotSpot property.
UPDATE
In case anyone is looking for a workaround: in my case I already have a delegate being invoked to filter the points so I can implement snap-to-grid behaviour, and it was trivial to offset the point by 1 to compensate. This was a lot easier than creating a custom cursor, and it has the advantage that I'm still using the stock cursor, so I pick up any future changes to its appearance automatically.

I have an empirical answer that yes, the hotspot is offset.
I modified the program to be able to trigger graphics modes by pressing keys, so you don't need to move the mouse.
Using the same Magnifier view as the snapshot above, just pressing a key to change the mode toggles the cursor between arrow and cross.
When I switch, the cross is drawn so that its black lines centre on the top-left point of the normal arrow cursor.
The arrow cursor hotspot is at the pixel it points to (not a black pixel) so yes, the cross cursor hotspot is NOT at the centre of the crosshairs!
sigh

Related

Clearing X11 window with desktop background pixels, and putting XImage with transparent pixels on it?

I am trying to make an application that graphically repeats the mouse pointer, so I can ultimately make a mouse-trail program for Ubuntu 18.04 - and it seems the way to do it is via X11/Xlib - although these days I'm not even sure, as my machine also reports a Wayland session:
$ loginctl | while IFS= read line; do echo "$line"; if [[ $line == *"tty"* ]]; then sessnum=$(echo "$line" | awk '{print $1;}'); echo sessnum: $sessnum\; $(loginctl show-session $sessnum -p Type); fi; done
SESSION UID USER SEAT TTY
c1 121 gdm seat0 tty1
sessnum: c1; Type=wayland
2 1000 administrator seat0 tty2
sessnum: 2; Type=x11
2 sessions listed.
Regardless, I managed to put together an unholy assemblage of:
https://keithp.com/blogs/Cursor_tracking/ - which sets up the program for capturing raw mouse events, so the mouse pointer position can be extracted (and a redraw triggered) whenever the mouse pointer position changes
xosd.c (via https://github.com/AndreRenaud/XOSD) - I thought at first that On-Screen Display would have a special method to draw on top - but this sets up a topmost window, child of the root, where all drawing happens; it also sets up an event thread and a timer thread
.... plus a ton of other code snippets (mostly from SO), which sort of does what I want (even if I don't really fully understand all of the layers and compositing that goes on in it). I posted this as a gist: xosd_track_cursor.c since it's 700+ lines (but can post it here if needed).
Here is how the application behaves (also see full-res imgur .mp4 video)
Basically, at start, the "OSD" topmost window is set up, and it's quite a bit smaller than the desktop - which helps us see the window border decorations around it (ultimately, I'd make this window the same size as the desktop).
At start, the desktop pixels at the location of this window have seemingly been copied as the window background.
Once the mouse pointer enters the OSD window, a circle is drawn, which becomes the mask for the OSD window (which again can be seen via the window border decorations) - and this circular window follows the mouse. Then, inside it, I use XFillRectangle to draw a lime rectangle, and then XPutImage to draw the pixels captured from the latest mouse pointer (the video doesn't show it, but the copied cursor also changes when the normal one does, say from left_ptr to bottom_side or xterm cursor bitmaps).
So far so good - but these are the problems, and questions:
All of the draws - both the lime rectangle and the mouse pointer copy - remain on the OSD window, and are not cleared upon redraw (which is quite obvious when the mouse pointer leaves the OSD window, so there is no masking). How can I erase these previous draws each time a new state is rendered?
When I click on a window to change the focus, it is obvious (especially when the mouse pointer leaves the OSD window, so there is no masking) that the desktop "background" shown in the OSD window shows the state from when the program started. How can I capture the current state of the desktop background (that is, behind the OSD window), so I can use that for clearing the OSD window in the previous step?
(I thought I could hide the OSD window, then capture the desktop at the same location with XGetImage, maybe (?) - then show the window again; but showing the window always sends an Expose event, which runs the expose function that does the redraw, and so I get a bunch of recursive calls hogging the application)
The mouse pointer copy is rendered with a black background - how can I make the drawing of mouse pointer copy transparent, where it is black now?
And, a sort of a bonus question (just curious here - obviously I'd rather not have the leftovers to begin with):
I first do XFillRectangle to draw a lime rectangle, then XPutImage to draw the pixels of the mouse pointer copy. I'd expect this to show the mouse cursor copy on top of the green pixels - and it is indeed so, while the OSD window is masked with the circle. But when the OSD window is shown in full, the leftovers make it seem as if the green pixels were drawn on top of the mouse cursor copy pixels. Why is this so?
Well, I think I got somewhere - the result is in the same gist, just different revision: gist: xosd_track_cursor.c (a31e9dff5); and it looks like this:
And so, to answer my questions:
How can I erase these previous draws each time a new state is rendered?
Cannot - not in the way the previous code was set up. The window was created as an override_redirect window, meaning it stays out of the management of any window manager. Furthermore, the default bit depth was 24, so transparency was not supported, which meant that to grab the desktop "behind" it (to use as a "clear" background), we'd have had to hide and then show the window - and that used to cause recursion due to the reaction to Expose events.
However, I saw in How to make an OpenGL rendering context with transparent background? that using glXCreateContext might help - and it did. It turns out, though, that it was not necessary: as soon as XMatchVisualInfo successfully returned a match for 32-bit depth for the OSD window (alpha transparency supported), it was possible to define a "fully transparent" colour via XSetForeground as 0x00000000 (as far as I can see, that is 0xAARRGGBB format) - and use that to draw directly on the window with XFillRectangle, which clears the entire OSD window transparently.
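To make that concrete, here is a minimal, untested sketch of the relevant Xlib calls (simplified from the description above; error handling, GC creation and the rest of the OSD setup are omitted, and the names are only illustrative):

#include <X11/Xlib.h>
#include <X11/Xutil.h>

/* Create a 32-bit ARGB, override-redirect window so that alpha is supported. */
Window create_argb_window(Display *dpy, int w, int h)
{
    XVisualInfo vinfo;
    /* Ask for a 32-bit TrueColor visual - this is what enables alpha transparency. */
    XMatchVisualInfo(dpy, DefaultScreen(dpy), 32, TrueColor, &vinfo);

    XSetWindowAttributes attrs;
    attrs.override_redirect = True;   /* stay out of the window manager's management */
    attrs.colormap = XCreateColormap(dpy, DefaultRootWindow(dpy), vinfo.visual, AllocNone);
    attrs.border_pixel = 0;
    attrs.background_pixel = 0;       /* 0x00000000 - fully transparent background */

    return XCreateWindow(dpy, DefaultRootWindow(dpy), 0, 0, w, h, 0,
                         vinfo.depth, InputOutput, vinfo.visual,
                         CWOverrideRedirect | CWColormap | CWBorderPixel | CWBackPixel,
                         &attrs);
}

/* Clear the whole window to the "fully transparent" colour (0xAARRGGBB with AA = 0x00). */
void clear_osd(Display *dpy, Window win, GC gc, int w, int h)
{
    XSetForeground(dpy, gc, 0x00000000);
    XFillRectangle(dpy, win, gc, 0, 0, w, h);
}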
The mouse pointer copy is rendered with a black background - how can I make the drawing of mouse pointer copy transparent, where it is black now?
It turns out this also started working as soon as the window was created with XCreateWindow using the settings from XMatchVisualInfo for 32-bit depth. By that, I mean that the result of XPutImage was such that the transparent points in the cursor image were now "see-through"/transparent - whereas previously, XPutImage left black pixels at those locations.
But when the OSD window is shown in full, the leftovers make it seem as if the green pixels were drawn on top of the mouse cursor copy pixels. Why is this so?
Apparently, I didn't remember correctly what order the pixels were drawn in; when that demo capture was taken, the mouse cursor pixels were indeed copied first, and then the green pixels were copied on top. (Which now changes the question - how come the mouse cursor was visible in that capture at all?! But now that the overall problem is solved, I'm not that curious :) )
Otherwise, a few more notes on gist: xosd_track_cursor.c (a31e9dff5): since X11 has a client/server architecture, the user program can only queue requests to the server, so all of the drawing calls are asynchronous/non-blocking - when, say, XFillRectangle returns, it does not mean the pixels have been drawn, just that the request has been added to the queue that is eventually sent to the server. Furthermore, in spite of calls like XFlush and XSync, there is no guarantee that we can wait for a drawing operation to finish, and no guarantee that the server will honor any given request.
However, the less you try to do, the better the chance that the X server will honor the requests. So this version of the code makes a smallish window, 60x60 pixels, and sets it up so that it is dragged (centre-aligned) by the motion of the mouse pointer. Then, the (main) mouse pointer is simply copied into this window at the same relative location.
Finally, there is a primitive attempt at a mouse trail, rendering two "ghost" copies of the mouse pointer and displacing them by a history of mouse-motion delta vectors - the effect, as is visible in the gif, is not really amazing, but at least it's there as a "proof of concept" of sorts. Also, the window is set up at start as "click-through" using XShapeCombineRectangles - meaning the OSD window doesn't pick up/handle any mouse events (clicks) directly on it; instead, everything is automatically passed to the windows below it, so the interaction remains the same as if the program was not running at all.
(Note that to get the behavior of gist: xosd_track_cursor.c (a31e9dff5) shown in the gif, you should look up the defines DEBUGPRINT and MOUSE_TRAIL, and have them uncommented when you build)
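For reference, the click-through setup mentioned above amounts to giving the OSD window an empty input shape, so every pointer event falls through to whatever is underneath; a minimal sketch (assuming the XShape extension and its header are available):

#include <X11/Xlib.h>
#include <X11/extensions/shape.h>

/* Give the window an empty input region: it no longer receives clicks,
 * which instead go to the windows below it. */
void make_click_through(Display *dpy, Window win)
{
    XShapeCombineRectangles(dpy, win, ShapeInput, 0, 0,
                            NULL, 0, ShapeSet, Unsorted);
}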

How to handle mouse motion events in GTK3?

I am trying to implement the following feature using C/GTK3/Cairo:
-Left click on a GtkDrawingArea widget and printf the coordinates Xo and Yo.
-While keeping the left button down, move the mouse and draw a line connecting (Xo,Yo) to the current mouse position.
-Release the left mouse button and printf("something")
How do I do this? Does anyone know of a good tutorial showing how to handle mouse click-move events?
So far, the best I found was this ZetCode lines example (which shows how to handle mouse click events, but not button-down/move/button-up) and this, which explains how to change the mouse cursor when hovering over a widget.
Thanks
Did you see this GtkDrawingArea demo from the Gtk people? This one is written in C, but there is a Python version of the same program (links updated - thanks #kyuuhachi).
Anyway, in the constructor (__init__), the callbacks are connected to the motion_notify_event.
You also need to connect to the button_press_event and the button_release_event.
Then, on button press, you save the coordinates of the start point (and copy them to the end point too, which is the same for now).
On each motion_notify_event, you delete the previous line (by overwriting), and redraw it to the new end point.
Finally, when the button is released, the line is final.
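A rough C/GTK3 sketch of that press/move/release flow (the signal names are standard GTK3; the widget set-up and variable names here are only illustrative, so treat this as an untested outline):

#include <gtk/gtk.h>

static gdouble start_x, start_y, end_x, end_y;
static gboolean dragging = FALSE;

static gboolean on_press(GtkWidget *area, GdkEventButton *ev, gpointer data)
{
    if (ev->button == 1) {
        start_x = end_x = ev->x;          /* save start point (and end point, for now) */
        start_y = end_y = ev->y;
        dragging = TRUE;
        g_print("pressed at (%.0f, %.0f)\n", ev->x, ev->y);
    }
    return TRUE;
}

static gboolean on_motion(GtkWidget *area, GdkEventMotion *ev, gpointer data)
{
    if (dragging) {
        end_x = ev->x;
        end_y = ev->y;
        gtk_widget_queue_draw(area);      /* triggers the "draw" handler below */
    }
    return TRUE;
}

static gboolean on_release(GtkWidget *area, GdkEventButton *ev, gpointer data)
{
    dragging = FALSE;
    g_print("released\n");
    return TRUE;
}

static gboolean on_draw(GtkWidget *area, cairo_t *cr, gpointer data)
{
    /* The area is redrawn from scratch, so the previous line is erased automatically. */
    cairo_move_to(cr, start_x, start_y);
    cairo_line_to(cr, end_x, end_y);
    cairo_stroke(cr);
    return FALSE;
}

/* Somewhere in the set-up code: enable the events and connect the handlers. */
void hook_up(GtkWidget *area)
{
    gtk_widget_add_events(area, GDK_BUTTON_PRESS_MASK | GDK_BUTTON_RELEASE_MASK |
                                GDK_POINTER_MOTION_MASK);
    g_signal_connect(area, "button-press-event",   G_CALLBACK(on_press), NULL);
    g_signal_connect(area, "motion-notify-event",  G_CALLBACK(on_motion), NULL);
    g_signal_connect(area, "button-release-event", G_CALLBACK(on_release), NULL);
    g_signal_connect(area, "draw",                 G_CALLBACK(on_draw), NULL);
}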
It's much easier if you use a canvas widget, for example GooCanvas, which takes care of most of the updating. You can just update the coordinates of the line object, and it will move itself. Also, you can easily remove lines. The 'algorithm' is similar to the one above:
Connect button_press_event, button_release_event, and motion_notify_event to the canvas,
When a button press occurs, create a GooCanvas.polyline object, and set its begin and end points,
Update the endpoint on each motion_notify_event
Finalize with a button_release_event.

Mouse move on WPF image control using Kinect

I am trying to move the mouse using the Kinect. I achieved this using InteropServices in C#.
Now I want to move the mouse only inside the image control, so the mouse should not move over other layouts. Is there any way to achieve the mouse movement without using InteropServices?
Cursor.Position = new Point()
will let you move the cursor. You can restrict where it can go in code, too.
Is that what you're looking for, or am I missing something? There really isn't anything specific to the Kinect that I can see.
EDIT:
You can find the tracking function I use in the following post:
how to use skeletal joint to act as cursor using bounds (No gestures)
In it, I assign the position of the hand to a "RightHandX" and "RightHandY" parameter. These are basically the mouse position -- you could replace them with a call to Cursor.Position.
If you only want to move the mouse around the image, you can get the bounds of the image and then just add another 'if' statement that does or doesn't set Cursor.Position, based on those bounds and the calculated position of the hand.

Does anyone have code to make the mouse cursor a cross/plus sign in silverlight?

Does anyone have code to make the mouse cursor a cross/plus sign in silverlight?
When I click on a draw button, I want the cursor to become a cross/plus sign. How can I implement this in Silverlight?
It's not actually possible to set the cursor image to anything other than the set of images specified by the Cursors class. It's quite a limited set.
Silverlight Tip of the Day #28: How to Implement a Custom Mouse Cursor
http://blogs.silverlight.net/blogs/msnow/archive/2010/03/16/81607.aspx

Converting mouse position to world position OpenGL

Hey, I'm working on a map editor for my game, and I'm trying to convert the mouse position to a position in the game world; the view is set up using gluPerspective.
A good place to start would be the function gluUnProject, which takes mouse coordinates and calculates object space coordinates. Take a look at http://nehe.gamedev.net/data/articles/article.asp?article=13 for a basic tutorial.
UPDATE:
You must enable depth buffering for the code in that article to work. The Z value for mouse coordinates is determined based on the value in the depth buffer at that point.
In your initialization code, make sure you do the following:
glEnable(GL_DEPTH_TEST);
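As a rough illustration of the gluUnProject approach (not code from the linked article; the function and variable names here are made up), the usual pattern looks something like this:

#include <GL/glu.h>

/* Convert a window-space mouse position into world/object coordinates. */
void mouse_to_world(int mouse_x, int mouse_y, double *wx, double *wy, double *wz)
{
    GLint viewport[4];
    GLdouble modelview[16], projection[16];
    GLfloat depth;

    glGetIntegerv(GL_VIEWPORT, viewport);
    glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
    glGetDoublev(GL_PROJECTION_MATRIX, projection);

    /* OpenGL window coordinates have their origin at the bottom-left, so flip Y. */
    GLdouble win_x = (GLdouble)mouse_x;
    GLdouble win_y = (GLdouble)viewport[3] - (GLdouble)mouse_y;

    /* Read the depth buffer at the click point to get the window-space Z value. */
    glReadPixels(mouse_x, (int)win_y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);

    gluUnProject(win_x, win_y, (GLdouble)depth,
                 modelview, projection, viewport, wx, wy, wz);
}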
A point on the screen represents an entire line (an infinite set of points) in 3D space.
Most people with questions similar to yours are really trying to select an object by clicking on it. If that's what you're after, OpenGL offers a selection mode that's generally more effective than trying to convert the screen coordinate into real-world coordinates.
Using selection mode is (usually) pretty simple: you start with gluPickMatrix, which you use to specify a small box around the click point. You then draw your scene in selection mode. When you're done, instead of actually drawing anything, it gives you back records of what would have been drawn within the box you specified. Each hit record carries minimum and maximum depth values, so you can tell which one would have displayed front-most (i.e., the one you usually want).
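For illustration, a bare-bones outline of that selection-mode flow might look like the following (the 5x5 pick box, buffer size, and draw_scene() callback are assumptions, and the normal projection setup is elided):

#include <GL/glu.h>

#define PICK_BUF_SIZE 64

/* draw_scene() should call glLoadName(id) before drawing each pickable object. */
GLuint pick_at(int mouse_x, int mouse_y, void (*draw_scene)(void))
{
    GLuint buf[PICK_BUF_SIZE];
    GLint viewport[4];

    glGetIntegerv(GL_VIEWPORT, viewport);
    glSelectBuffer(PICK_BUF_SIZE, buf);
    glRenderMode(GL_SELECT);

    glInitNames();
    glPushName(0);

    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    /* Restrict rendering to a 5x5 pixel box around the click point (window Y is flipped). */
    gluPickMatrix((GLdouble)mouse_x, (GLdouble)(viewport[3] - mouse_y), 5.0, 5.0, viewport);
    /* ... then re-apply the normal projection here, e.g. gluPerspective(...). */

    glMatrixMode(GL_MODELVIEW);
    draw_scene();                           /* nothing is actually drawn in GL_SELECT mode */

    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);

    /* Each hit record is: name count, min depth, max depth, then the names themselves. */
    GLint hits = glRenderMode(GL_RENDER);
    return hits > 0 ? buf[3] : 0;           /* first name of the first hit record */
}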
