Question: Should we check at runtime for a touch device (using Modernizr.touch?) and then use the Angular $swipe service when it is a touch device and plain mouse events when it isn't?
Objective: I am writing a Slider Directive and I want to move the slider button.
I want to handle mousedown, mousemove, and mouseup events on the desktop, and I want similar behavior on a touch screen. I had this working well with mouse events on the desktop, but I've modified the code to use Angular's $swipe service instead. Now the touch screen functionality works well, but the mouseup event in desktop browsers is lost if it occurs outside of the target element.
Here is a jsBin that demonstrates the $swipe utility:
http://jsbin.com/berome/1/edit?html,js,console,output
On a touch screen device, the demo functions correctly: the start, move, and end events fire as expected when dragging a finger across the target element.
On the desktop, the start and move events fire, but the end event will only fire if the mouse button is released while the cursor is still within the target div. This is expected behavior, because the mouseup is bound to the target element. However, if the cursor is taken outside of the div during a mousemove and the mouse button is then released, no end event occurs. Moreover, the mousemove event will continue to fire when the cursor returns within the target! (Try it on the demo: it gets stuck in the mousemove state until another mousedown-mouseup sequence within the target.)
For a desktop implementation, mousedown would be bound to the target element, and mousemove and mouseup would be bound to the $window (or some parent element) from within the mousedown handler. The mouseup would then always fire when the mouse button is released, even if the cursor is outside the target element. The mousemove and mouseup handlers would be unbound in the mouseup handler.
There doesn't seem to be much documentation for $swipe yet, nor many examples. The ngBook has a paragraph or two but no working example. Any suggestions or comments? I'm leaning towards two implementations: one for touch and one for the desktop.
Thanks!
You are correct, there is very little documentation for ngTouch's $swipe. Having just started digging into this, I am having a hard time finding anything, so thank you for your example.
The only solution to your problem that I can see, from my limited understanding, is to treat it the way $swipe treats a scroll on a touch device and cancel. These sentences are from the documentation:
If the vertical distance is greater, this is a scroll, and we let the
browser take over. A cancel event is sent.
cancel is called either on a touchcancel from the browser, or when we
begin scrolling as described above.
I haven't tried any of this, but the thought occurred to me and I figured it might be useful to you and to me (as I'm sure I will have to deal with it soon).
Related
I am having trouble using the tap event. I made the space shooter (with a different theme) and I want to port it to mobile, but when I hold down on the object I created for the tap event, the image angle changes only once. I want it to keep changing continuously while I hold it down. Can you help?
Here is the code I wrote inside the tap event:
with (obj_tuna)
{
    image_angle -= 5;
}
A tap is a quick touch where you press and release quickly (just like a mouse click), so it only recognises your action as a single quick press rather than a continuous hold.
I'm not familiar with mobile platforms, but a quick search shows that mouse functions also respond to touch input. So perhaps, if you replace your tap event with a mouse event (or check a mouse function in the Step Event instead of using a Tap Event), it may work.
Source: https://forum.yoyogames.com/index.php?threads/how-to-detect-when-a-player-holds-down-on-the-screen.63997/post-383553
As I am trying to make my own custom "WinForms", I am left confused about when each mouse event occurs. I have made my own custom classes, but now the events are something I have to rework, as they don't work right.
I have a custom class for controls. Objects of that class can contain other controls, which can contain other controls, and so on. There is a main control, which gets input from a picture box. That input consists of where the mouse is and which event has been activated in the picture box.
So far I have figured out that the MouseMove, MouseHover, and MouseDown events are the simplest to write, as they occur under simple conditions. But the rest require additional data about the mouse's location, state, and history. MouseDoubleClick seems to activate after a specific sequence of events (strictly down-up-down-up, down-up-down-move-up, or down-up-down-move-leave-enter-move-up, with the movement events not firing). With that in mind, I am even more confused.
In what conditions and sequences does each mouse event occur?
EDIT
Further testing made things even more confusing. For one, I now want to know at what rate MouseMove is registered, and testing shows a different interval between each event (or so my use of a Stopwatch suggests). This is important because it raises the question of when Hover is triggered.
Click is down-up, where moving is allowed between the two.
DoubleClick proved to be simple enough - down-up-down-up, where moving is allowed explicitly only between the second down-up.
Hover activates only once after each Enter, when the mouse remains stationary; if you want to trigger Hover again, the mouse has to leave and then re-enter.
So the question now is how the system tracks the mouse's activity - how it detects the mouse moving, being held down, and being released. Hopefully that would help me get the full answer.
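To make the sequences above concrete, here is a minimal sketch of how a custom control could synthesize Click and DoubleClick from raw down/up input. The class and member names are my own invention, and it reuses the system's double-click thresholds rather than hard-coding them:

using System;
using System.Drawing;
using System.Windows.Forms;

// Hypothetical helper: feed it the raw mouse-ups your picture box forwards,
// and it decides whether they amount to a Click or a DoubleClick.
class ClickSynthesizer
{
    private DateTime _lastClickTime = DateTime.MinValue;
    private Point _lastClickPos;

    public event EventHandler Click;
    public event EventHandler DoubleClick;

    // Assumes a matching raw MouseDown occurred before each call.
    public void OnRawMouseUp(Point pos)
    {
        DateTime now = DateTime.Now;
        bool closeInTime = (now - _lastClickTime).TotalMilliseconds
                           <= SystemInformation.DoubleClickTime;
        Size box = SystemInformation.DoubleClickSize;
        bool closeInSpace = Math.Abs(pos.X - _lastClickPos.X) <= box.Width
                         && Math.Abs(pos.Y - _lastClickPos.Y) <= box.Height;

        if (closeInTime && closeInSpace)
        {
            DoubleClick?.Invoke(this, EventArgs.Empty);
            _lastClickTime = DateTime.MinValue; // so a triple click starts over
        }
        else
        {
            Click?.Invoke(this, EventArgs.Empty);
            _lastClickTime = now;
            _lastClickPos = pos;
        }
    }
}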
Is there a way in WPF to get the active touch points? I need to determine whether the user is touching the screen, similar to the Mouse class's button-state properties.
I just need to know if any touch is present on the screen - I don't care which UIElement it's touching.
Here are two options, but they may not be the most correct way to do it:
1) You could subscribe to the MainWindow.PreviewTouchDown and MainWindow.PreviewTouchUp and maintain a list of all the current touch devices. It would be easy to implement but could make your code messy.
2) Subscribe to Touch.FrameReported, from which you can get a collection of touch points via TouchFrameEventArgs.GetTouchPoints(null). This fires on every touch event, so it may be more often than you need, but it allows you to handle the event from any class.
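A minimal sketch of both options (the field and property names here are my own invention):

using System.Collections.Generic;
using System.Windows;
using System.Windows.Input;

public partial class MainWindow : Window
{
    // Option 1: track the IDs of the touch devices currently in contact.
    private readonly HashSet<int> _activeTouches = new HashSet<int>();

    public bool AnyTouchActive => _activeTouches.Count > 0;

    public MainWindow()
    {
        InitializeComponent();
        PreviewTouchDown += (s, e) => _activeTouches.Add(e.TouchDevice.Id);
        PreviewTouchUp += (s, e) => _activeTouches.Remove(e.TouchDevice.Id);

        // Option 2: Touch.FrameReported is static, so any class can use it.
        Touch.FrameReported += (s, e) =>
        {
            bool anyTouch = e.GetTouchPoints(null).Count > 0;
        };
    }
}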
You can subscribe to your main window's ManipulationStarting event (fired when the first finger makes contact with the screen), ManipulationInertiaStarting event (fired when the last finger lifts off the screen), and/or ManipulationDelta event (fired when any finger moves).
Within your event handlers you can get a list of all current touch points via ManipulationDeltaEventArgs.Manipulators.
Don't forget to set your main window's IsManipulationEnabled to true.
This way you just have to remember whether a manipulation is currently in progress or not. You don't have to keep track of all the individual touch points yourself.
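A minimal sketch of this approach, assuming the handlers live in the main window's code-behind (the _manipulationActive field name is mine):

using System.Linq;
using System.Windows;
using System.Windows.Input;

public partial class MainWindow : Window
{
    private bool _manipulationActive;

    public MainWindow()
    {
        InitializeComponent();
        IsManipulationEnabled = true;

        // First finger makes contact with the screen.
        ManipulationStarting += (s, e) => _manipulationActive = true;

        // Last finger lifts off (the point where inertia would begin).
        ManipulationInertiaStarting += (s, e) => _manipulationActive = false;

        // While fingers move, e.Manipulators lists the current touch points.
        ManipulationDelta += (s, e) =>
        {
            int currentTouchPoints = e.Manipulators.Count();
        };
    }
}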
For the basic scenario described in the MSDN overview (under Touch and Manipulation), TouchEnter and TouchLeave are fired for every corresponding TouchDown and TouchUp respectively. Unlike the mouse, the touch and stylus are not constrained to maintain contact with the screen.
Is there a way to use TouchEnter and TouchLeave to capture only the case where a finger is dragged into the UIElement? As these events are fired for every TouchUp and TouchDown, what is the best way to differentiate them?
One strategy that would work for the single-finger case is to set a flag on TouchDown and check whether the flag is set on TouchUp. This allows some condition checks on TouchUp. However, for multiple fingers it isn't feasible.
There are no PreviewTouchEnter and PreviewTouchLeave events fired, only PreviewTouchDown and PreviewTouchUp. The sequence of events for a finger lowered on to a UIElement and then raised over it is as follows:
TouchEnter
PreviewTouchDown
TouchDown
PreviewTouchUp
TouchUp
TouchLeave
This sequence doesn't help differentiate a TouchEnter that has happened due to a finger dragged across the screen into the UIElement, from a finger that is lowered onto the UIElement directly. Am I missing something, or does the framework not support such differentiation itself?
You could use the TouchDevice class to keep track of where touches are generated. New touches are given a new ID, so you can distinguish existing touches from new ones, and see which elements are capturing the device. I guess that circumvents the Manipulation events and the normal processes, but I hope that helps.
If you retrieve a TouchPoint for the event, there is a property on it named Action which tells you whether it is a Down, a Move, or an Up event.
void m_element_TouchEnter(object sender, System.Windows.Input.TouchEventArgs e)
{
    var touchPoint = e.GetTouchPoint(m_someElement);

    if (touchPoint.Action == System.Windows.Input.TouchAction.Move)
    {
        // This is a "true" TouchEnter event.
    }
    else if (touchPoint.Action == System.Windows.Input.TouchAction.Down)
    {
        // This is a "true" TouchDown event.
    }
}
I am using the Surface Toolkit for Windows Touch Beta. I have a UserControl within a ScatterViewItem on a ScatterView. I want to receive the ManipulationCompleted event on the UserControl, but it never seems to be raised, even though IsManipulationEnabled="True" is set. The same thing works perfectly in a non-Surface WPF4 app.
It appears various WPF touch events play well with Surface, but it seems like a lot of work to recreate a tap event and NSWE events that I can easily interpret from the ManipulationCompleted event.
I am looking for ways to either receive the ManipulationCompleted event on my UserControl or to simulate it by handling existing touch events.
Any pointers?
Does the ScatterViewItem move when your UserControl is touched? Only one element at a time can track manipulations for a given touch. If the ScatterViewItem is getting the manipulation events, your UserControl will not.
If you only want your UserControl to handle the input, have it listen to TouchDown and call usercontrol.Capture(touch). If you want the SVI to do its thing but also handle the completed event on your own, you will have to register your event handler manually: usercontrol.AddHandler(ManipulationCompletedEvent, yourHandler, true). The last parameter says you want to handle the event even if the SVI already has.
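A sketch of both approaches, assuming userControl is the UserControl hosted inside the ScatterViewItem:

using System;
using System.Windows;
using System.Windows.Input;

void WireUpTouchHandling(FrameworkElement userControl)
{
    // (a) Sole recipient: capture the touch so the ScatterViewItem never
    // starts tracking a manipulation for it.
    userControl.TouchDown += (s, e) => e.TouchDevice.Capture(userControl);

    // (b) Observe alongside the SVI: the final 'true' asks WPF to invoke
    // the handler even for events the SVI has already marked as handled.
    userControl.AddHandler(
        UIElement.ManipulationCompletedEvent,
        new EventHandler<ManipulationCompletedEventArgs>((s, e) =>
        {
            // Interpret e.TotalManipulation.Translation here to derive
            // tap or swipe-direction (NSWE) gestures.
        }),
        true);
}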