WPF get active touch points

Is there a way in WPF to get the active touch points? I need to determine whether the user is touching the screen, similar to checking the Mouse class's button state for Pressed.
I just need to know whether any touch is present on the screen - I don't care which UIElement it is touching.

Here are two options, but they may not be the most correct way to do it:
1) You could subscribe to MainWindow.PreviewTouchDown and MainWindow.PreviewTouchUp and maintain a list of all the current touch devices. It would be easy to implement but could make your code messy.
2) Subscribe to Touch.FrameReported, from which you can get a collection of touch points via TouchFrameEventArgs.GetTouchPoints(null). This event fires on every touch frame, so it may be raised more often than you need, but it lets you handle touch input from any class.
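For example, here is a minimal sketch of option 2, assuming it lives in the main window's code-behind (the AnyTouchActive property name is illustrative):

using System.Linq;
using System.Windows;
using System.Windows.Input;

public partial class MainWindow : Window
{
    // True while at least one finger is in contact with the screen.
    public bool AnyTouchActive { get; private set; }

    public MainWindow()
    {
        InitializeComponent();
        Touch.FrameReported += OnTouchFrameReported;
    }

    private void OnTouchFrameReported(object sender, TouchFrameEventArgs e)
    {
        // GetTouchPoints(null) returns every current touch point; a point whose
        // Action is Up is being lifted, so it no longer counts as an active touch.
        AnyTouchActive = e.GetTouchPoints(null).Any(p => p.Action != TouchAction.Up);
    }
}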

You can subscribe to your main window's ManipulationStarting event (raised when the first finger makes contact with the screen), ManipulationInertiaStarting event (raised when the last finger lifts off the screen) and/or ManipulationDelta event (raised when any finger moves).
Within your event handlers you can get a list of all current touch points via ManipulationDeltaEventArgs.Manipulators.
Don't forget to set your main window's IsManipulationEnabled to true.
This way you just have to remember whether a manipulation is currently in progress or not. You don't have to keep track of all the individual touch points yourself.
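Here is a minimal sketch of that approach, again assuming the main window's code-behind (the ManipulationInProgress flag name is illustrative):

using System.Windows;
using System.Windows.Input;

public partial class MainWindow : Window
{
    // True from the first finger touching down until the last finger lifts off.
    public bool ManipulationInProgress { get; private set; }

    public MainWindow()
    {
        InitializeComponent();
        IsManipulationEnabled = true;   // manipulation events are not raised without this

        ManipulationStarting += (s, e) => ManipulationInProgress = true;
        ManipulationInertiaStarting += (s, e) => ManipulationInProgress = false;
    }
}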

Related

When does each event occur in WinForms

As I am trying to make my own custom "WinForms", I am left confused about when each mouse event occurs. I have made my own custom classes, but now I have to rework the events, as they won't work right.
I have a custom class for controls. Objects of that class can contain other controls, which can contain other controls, and so on. There is a main control which gets its input from a picture box. That input consists of where the mouse is and which event has been raised in the picture box.
So far I have figured out that the MouseMove, MouseHover and MouseDown events are the simplest to write, as they occur under simple conditions. But the rest require additional data about the mouse's location, state and history. MouseDoubleClick seems to activate after a specific sequence of events (strictly down-up-down-up, down-up-down-move-up, or down-up-down-move-leave-enter-move-up, with the movement events not activating). With that in mind, I am even more confused.
In what conditions and sequences does each mouse event occur?
EDIT
Further testing made things even more confusing. For one, now I want to know at what rate MouseMove is registered, and testing shows a different interval between events (or so my use of a Stopwatch says). This matters because it raises the question of when Hover is triggered.
Click is down-up, where moving is allowed between the two.
DoubleClick proved to be simple enough - down-up-down-up, where moving is allowed explicitly only between the second down-up.
Hover activates only once after each Enter, when the mouse remains stationary; if you want to trigger Hover again, the mouse has to leave and then re-enter.
So the question now is how the system tracks the mouse's activity - how it detects the mouse moving, being held down and being released. Hopefully that will help me get the full answer.
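One way to observe these sequences directly is to override the event raisers on a control and log their order and timing. A minimal sketch, assuming standard WinForms (the LoggingPanel name is illustrative):

using System;
using System.Diagnostics;
using System.Windows.Forms;

public class LoggingPanel : Panel
{
    private readonly Stopwatch _watch = Stopwatch.StartNew();

    // Print the elapsed time and event name so the order and rate of events can be inspected.
    private void Log(string name) =>
        Console.WriteLine($"{_watch.ElapsedMilliseconds,6} ms  {name}");

    protected override void OnMouseEnter(EventArgs e)      { Log("MouseEnter");  base.OnMouseEnter(e); }
    protected override void OnMouseMove(MouseEventArgs e)  { Log("MouseMove");   base.OnMouseMove(e); }
    protected override void OnMouseHover(EventArgs e)      { Log("MouseHover");  base.OnMouseHover(e); }
    protected override void OnMouseDown(MouseEventArgs e)  { Log("MouseDown");   base.OnMouseDown(e); }
    protected override void OnMouseUp(MouseEventArgs e)    { Log("MouseUp");     base.OnMouseUp(e); }
    protected override void OnClick(EventArgs e)           { Log("Click");       base.OnClick(e); }
    protected override void OnDoubleClick(EventArgs e)     { Log("DoubleClick"); base.OnDoubleClick(e); }
    protected override void OnMouseLeave(EventArgs e)      { Log("MouseLeave");  base.OnMouseLeave(e); }
}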

Detecting mouseout of a GtkTreeView row

How can I detect when the mouse cursor leaves a GtkTreeView row associated with a GtkListStore model?
Note that the "cursor-changed" signal is not what I am looking for, as it is emitted like a mouse enter (mouseover) event, while I need something triggered when the mouse has just left the row. However, within the "cursor-changed" handler a call to gtk_tree_view_get_cursor() lets me obtain "the latest mouseovered row", so I know which row the mouse cursor previously entered. What I still need is a way to detect when the mouse cursor leaves a row.
Mouseout events normally require widgets to own a window in the underlying implementation (the X server normally sends them when the pointer leaves a window; Windows has another means of signalling them), so in environments where widgets don't use a window of their own (which is quite normal where the underlying implementation doesn't provide enough support), they must be simulated. Normally you'll have to check the widget class hierarchy up to the root to see where such events are being emulated, and how, to get an idea of how to deal with them. There is probably some registration mechanism in the superclasses that allows a callback to be invoked on behalf of such events.

Strategy for differentiating TouchUp from TouchLeave, and TouchDown from TouchEnter?

For the basic scenario described in the MSDN overview (under Touch and Manipulation), TouchEnter and TouchLeave are fired for every corresponding TouchDown and TouchUp respectively. Unlike the mouse, touch and stylus devices are not constrained to maintain contact with the screen.
Is there a way to use TouchEnter and TouchLeave to capture only the case where a finger is dragged into the UIElement? Since these events are fired for every TouchUp and TouchDown, what is the best way to differentiate them?
One strategy that works for the single-finger case is to set a flag on TouchDown and check it on TouchUp, which allows some condition checks on TouchUp. However, it isn't feasible for multiple fingers.
There are no PreviewTouchEnter and PreviewTouchLeave events fired, only PreviewTouchDown and PreviewTouchUp. The sequence of events for a finger lowered on to a UIElement and then raised over it is as follows:
TouchEnter
PreviewTouchDown
TouchDown
PreviewTouchUp
TouchUp
TouchLeave
This sequence doesn't help differentiate a TouchEnter that has happened due to a finger dragged across the screen into the UIElement, from a finger that is lowered onto the UIElement directly. Am I missing something, or does the framework not support such differentiation itself?
You can use the TouchDevice class to keep track of where touches are generated. New touches are given a new Id, so you could distinguish between existing touches and new ones, and see which elements are capturing the device. I guess that circumvents the Manipulation events and the normal processes, but I hope that helps.
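A rough sketch of that idea, assuming (as the event sequence listed in the question suggests) that an element's TouchEnter is raised before the corresponding PreviewTouchDown reaches the window when a finger is lowered directly onto the element; the field and handler names are illustrative:

using System.Collections.Generic;
using System.Windows;
using System.Windows.Input;

public partial class MainWindow : Window
{
    // Ids of touches that are currently in contact with the screen.
    private readonly HashSet<int> _activeTouchIds = new HashSet<int>();

    public MainWindow()
    {
        InitializeComponent();
        PreviewTouchDown += (s, e) => _activeTouchIds.Add(e.TouchDevice.Id);
        PreviewTouchUp += (s, e) => _activeTouchIds.Remove(e.TouchDevice.Id);
    }

    // Hooked up as the TouchEnter handler of the element of interest.
    private void Element_TouchEnter(object sender, TouchEventArgs e)
    {
        if (_activeTouchIds.Contains(e.TouchDevice.Id))
        {
            // The device was already down elsewhere: the finger was dragged in.
        }
        else
        {
            // No Down has been recorded yet for this device: the finger is landing here.
        }
    }
}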
If you retrieve a TouchPoint for the event, there is a property on it named Action which tells you whether it is a Down, a Move or an Up event.
void m_element_TouchEnter(object sender, System.Windows.Input.TouchEventArgs e)
{
    var touchPoint = e.GetTouchPoint(m_someElement);
    if (touchPoint.Action == System.Windows.Input.TouchAction.Move)
    {
        // This is a "true" TouchEnter event
    }
    else if (touchPoint.Action == System.Windows.Input.TouchAction.Down)
    {
        // This is a "true" TouchDown event.
    }
}
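The same check should presumably work in the opposite direction for TouchLeave, distinguishing a finger dragged out of the element from one lifted off it. A sketch, reusing the same (illustrative) element names:

void m_element_TouchLeave(object sender, System.Windows.Input.TouchEventArgs e)
{
    var touchPoint = e.GetTouchPoint(m_someElement);
    if (touchPoint.Action == System.Windows.Input.TouchAction.Move)
    {
        // The finger is still down and was dragged out of the element.
    }
    else if (touchPoint.Action == System.Windows.Input.TouchAction.Up)
    {
        // The finger was lifted while over the element: a "true" TouchUp/TouchLeave.
    }
}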

No ManipulationCompleted event in Surface Toolkit for Windows Touch Beta

I am using the Surface Toolkit for Windows Touch Beta. I have a UserControl within a ScatterViewItem on a ScatterView. I want to receive the ManipulationCompleted event on the UserControl, but it never seems to be raised even though IsManipulationEnabled="True" is set. The same thing works perfectly in a non-Surface WPF4 app.
It appears various WPF touch events play well with Surface, but it seems like a lot of work to recreate a tap event and the NSWE events that I can easily interpret from the ManipulationCompleted event.
I am looking for ways to either receive the ManipulationCompleted event on my UserControl or to simulate it by handling existing touch events.
Any pointers?
Does the ScatterViewItem move when your UserControl is touched? Only one element at a time can be tracking manipulations for a given touch, so if the ScatterViewItem is getting the manipulation events, your UserControl will not.
If you only want your UserControl to handle the input, have it listen to TouchDown and call usercontrol.CaptureTouch(touch). If you want the SVI to do its thing but also handle the completed event yourself, you will have to register your event handler manually: usercontrol.AddHandler(ManipulationCompletedEvent, yourHandler, true). The last parameter says you want to handle the event even if the SVI has already handled it. A sketch of that second approach follows.
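Here is a minimal sketch of the AddHandler approach in the UserControl's code-behind; the control and handler names are illustrative:

using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Input;

public partial class MyUserControl : UserControl
{
    public MyUserControl()
    {
        InitializeComponent();

        // handledEventsToo = true: run the handler even if the ScatterViewItem
        // has already marked the manipulation events as handled.
        AddHandler(ManipulationCompletedEvent,
                   new EventHandler<ManipulationCompletedEventArgs>(OnManipulationCompleted),
                   true);
    }

    private void OnManipulationCompleted(object sender, ManipulationCompletedEventArgs e)
    {
        // e.TotalManipulation and e.FinalVelocities describe the finished gesture,
        // which is enough to interpret taps or directional flicks.
    }
}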

Responding to a WPF Click Event in a Data-bound User Control

I hope this makes sense.
I have created several WPF user controls. The lowest-level item is 'PostItNote.xaml'. Next, I have a 'NotesGroup.xaml' file that has an ItemsControl bound to a List of PostItNotes. Above that, I have a 'ProgrammerControl.xaml' file. Each ProgrammerControl has a grid with four different NotesGroup user controls on it (and each NotesGroup contains zero to many PostItNotes).
Then, I have my main window. It also has an ItemsControl, bound to a list of Programmers.
So, you end up with a high level visual view of a list of programmers, each programmer has four groups of tickets, each group of tickets has many PostItNotes.
The trouble I'm having, is that I want to respond to a mouse click event in my mainWindow's code behind file.
I can add a MouseClick event handler in my PostItNote.xaml.vb file, and it gets called when the user clicks a PostItNote; I can re-raise the event, but I can't seem to get the NotesGroup to listen for it. I'm not sure that's even the correct approach.
When the user clicks the PostItNote, I'm going to do a bunch of business-logic type stuff that the PostItNote control doesn't have a reference to and doesn't know about.
Can anyone point me in the right direction?
You have a couple choices:
Use the PreviewXXX events, which are fired during the "tunneling" phase of WPF event routing. Parent controls can always preview the events passing down through them to their children.
Use the more advanced approach of hooking up events with the AddHandler method, to which you can pass a parameter called "handledEventsToo"; this basically means you want to know when the event happened "within" you even if some descendant element handled it. (See the sketch below.)
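A minimal sketch of the second choice, assuming the main window registers for a bubbling mouse event from any descendant (the choice of MouseUpEvent and the handler name are illustrative):

using System.Windows;
using System.Windows.Input;

public partial class MainWindow : Window
{
    public MainWindow()
    {
        InitializeComponent();

        // handledEventsToo = true: the handler runs even if a descendant
        // (e.g. a PostItNote) marked the event as handled.
        AddHandler(UIElement.MouseUpEvent,
                   new MouseButtonEventHandler(OnAnyMouseUp),
                   true);
    }

    private void OnAnyMouseUp(object sender, MouseButtonEventArgs e)
    {
        // e.OriginalSource is the element that was actually clicked; walk up the
        // visual tree from it to find the PostItNote (and its data) of interest.
    }
}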
I am going to take a flyer here. You probably don't want to be handling the event that high up; not really anyway. You are catching the event at the lower levels, which is unavoidable. Consider invoking a routed command from the PostItNote click event handler.
The routed commands bubble up and tunnel down through the tree. You can have an architecture where a high-level handler listens for a logical event (opening a post-it note, perhaps?). The handler for this doesn't need to care where the command originates from. It might come from you clicking something, or from clicking a toolbar button. Both are valid scenarios.
It sounds like you are creating some kind of custom UI, am I right? You want the application to respond to the user's interactions. That is what RoutedCommands are for. A sketch of this approach follows.
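A minimal sketch of the routed-command idea; the NoteCommands class and OpenNote command are illustrative names, and the snippets belong in the indicated files:

using System.Windows;
using System.Windows.Input;

// Shared between the controls and the main window.
public static class NoteCommands
{
    public static readonly RoutedCommand OpenNote =
        new RoutedCommand("OpenNote", typeof(NoteCommands));
}

// In the PostItNote code-behind: invoke the command when the note is clicked,
// passing the note's data object as the command parameter.
private void PostItNote_MouseLeftButtonUp(object sender, MouseButtonEventArgs e)
{
    NoteCommands.OpenNote.Execute(DataContext, this);
}

// In the main window: bind the command once and handle it no matter which
// PostItNote (or toolbar button, later on) invoked it.
public MainWindow()
{
    InitializeComponent();
    CommandBindings.Add(new CommandBinding(NoteCommands.OpenNote, OnOpenNote));
}

private void OnOpenNote(object sender, ExecutedRoutedEventArgs e)
{
    // e.Parameter is the clicked note's data object; run the business logic here.
}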
