I've got a listbox with images. I'm capturing MouseMove. In FF (Win7 & OSX) & also in IE8, this fires whenever the mouse is moved over the images. In Chrome (on OSX), however, it only fires while the mouse button is pressed down. This Chrome behaviour would actually be quite useful, but only if I could control it, rather than have it just randomly happen on certain browsers. So does anyone know if there's some overall setting somewhere that's making it behave this way in Chrome, or is it just an inconsistency in SL implementations?
I suspect the latter, since I've never been able to find a way in SL of testing whether the mouse button is down.
Thanks for any help.
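For what it's worth, Silverlight has no built-in property for querying whether the mouse button is down, but you can track it yourself with a flag. A minimal sketch, assuming handlers wired to the listbox or its container (all names here are illustrative, not from the original code):

```csharp
// Sketch: track the left button state manually, since Silverlight
// offers no direct "is the button down" query.
private bool isLeftButtonDown;

private void Images_MouseLeftButtonDown(object sender, MouseButtonEventArgs e)
{
    isLeftButtonDown = true;
}

private void Images_MouseLeftButtonUp(object sender, MouseButtonEventArgs e)
{
    isLeftButtonDown = false;
}

private void Images_MouseMove(object sender, MouseEventArgs e)
{
    if (isLeftButtonDown)
    {
        // React only while the button is held, regardless of browser quirks.
    }
}
```

With this in place you get the "Chrome behaviour" deliberately on every browser, instead of depending on the plugin implementation.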
As far as I understand, the problem is that the MouseMove, MouseEnter and MouseLeave events are not fired while the left mouse button is down.
I had the same problem with a TextBox. It actually happened in IE too, and my googling told me this is by design. Further googling revealed the reason: the element captures the mouse using the CaptureMouse() method. So I just derived from TextBox and overrode the OnMouseMove(...) method:
protected override void OnMouseMove(MouseEventArgs e)
{
    base.OnMouseMove(e);
    // The base implementation captures the mouse, which suppresses further
    // mouse events; releasing the capture restores them.
    ReleaseMouseCapture();
}
Note: I'm not sure whether capturing the mouse is the only thing the base implementation of this method does, so I added ReleaseMouseCapture() after calling it; but commenting out the base call instead also works, of course.
Related
From a production application, we notice that our WPF buttons fire the ICommand.Execute method twice on fast double click.
Now, on every Command, the application is covered with a full-screen spinner animation, preventing any further interaction with the application.
This github repo contains a minimalistic repro of the issue. Note that:
when the Button's Command fires, the "IsBusy" flag is set to true
as a consequence, the BusyIndicator overlay will be shown
as a consequence, the Button cannot be pressed again until after 300ms
However, especially on slow computers, when double-clicking really fast (gaming-fast, that is), it is possible to fire the command twice without the BusyIndicator blocking the second call (you can see this when the output shows two 'click' lines right after one another).
This is unexpected behavior to me, as the IsBusy flag is set to true right away on the UI thread.
How come a second click is able to pass through?
I would have expected the IsBusy binding to show the overlay on the UI thread and block any further interaction. Shouldn't it?
The github sample also contains 2 workarounds:
using the ICommand.CanExecute to block the Execute handler
using the PreviewMouseDown to prevent double clicks
I'm trying to understand what the issue is.
What work-around would you prefer?
Diagnosis
This is only my guess and not a solid and confirmed info, but it seems that when you click the mouse button, the hit-testing is done immediately, but all the mouse related events are only scheduled to be raised (using the Dispatcher I presume). The important thing is that the control that is clicked is determined at the time the click occurred, and not after the previous click has been completely handled (including all UI changes that potentially follow).
So in your case, even if the first click results in showing the BusyIndicator covering (and thus blocking) the Button, if you manage to click for the second time before the BusyIndicator is actually shown (and that does not happen immediately), the click event on the Button will be scheduled to be raised (which will happen after the BusyIndicator is shown), causing the command to be executed again even though at that point the BusyIndicator will possibly be blocking the Button.
Solution
If your goal is to prevent command execution while the previous one is still executing the obvious choice is to make the Command.CanExecute result depend on the state of the IsBusy flag. Moreover, I wouldn't even call it a workaround, but a proper design.
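A minimal sketch of that design, assuming a typical RelayCommand-style implementation (the command, property and method names are illustrative, not taken from the repro):

```csharp
// Sketch: gate the command on IsBusy so a queued second click is rejected
// by CanExecute even if it slips in before the overlay renders.
public ICommand DoWorkCommand => new RelayCommand(
    execute: async () =>
    {
        IsBusy = true;                  // setter raises CanExecuteChanged
        try { await DoWorkAsync(); }
        finally { IsBusy = false; }
    },
    canExecute: () => !IsBusy);         // re-checked before Execute runs
```

The key point is that CanExecute is evaluated again when the second click is actually dispatched, so it does not matter whether the overlay was rendered in time.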
What you're facing here is a clear-cut example of why you shouldn't make your business logic rely on the UI. Firstly, rendering depends heavily on the machine's processing power; secondly, covering a button with another control in no way guarantees the button cannot be "clicked" (for example via the UI Automation framework).
Question: Should we check at runtime for a touch device (using Modernizr.touch?) and then use the angular swipe service when it is a touch device and use mouse events when it isn't?
Objective: I am writing a Slider Directive and I want to move the slider button.
I want to handle mousedown, mousemove, and mouseup events on the desktop and I want similar behavior on a touch screen. I had this working well with mouse events on the desktop but I've modified the code to use angular's $swipe service instead. Now the touch screen functionality works well, but the mouseup event on the desktop browsers is lost if it occurs outside of the target element.
Here is a jsBin that demonstrates the $swipe utility:
http://jsbin.com/berome/1/edit?html,js,console,output
On a touch screen device, the demo functions correctly. The start event, move events, and end event fire respectively when dragging the finger across the target element.
On the desktop, the start and move events fire, but the end event will only fire if the mousedown is released whilst still within the target div. This is expected behavior because the mouseup is bound to the target element. However, if the cursor is taken outside of the div during a mousemove event and then the mouse button is released, no end event occurs. Moreover the mousemove event will continue to fire when the cursor returns within the target! (Try it on the demo. It gets stuck in the mousemove state until another mousedown-mouseup sequence within the target)
For the desktop implementation, mousedown would be bound to the target element, and mousemove and mouseup would be bound to the $window (or some parent element) from within the mousedown event. The mouseup would then always fire even when the mouse down is released - even if the cursor is outside the target element! The mousemove and mouseup would be unbound in the mouseup event.
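In plain DOM terms, the window-binding pattern described above looks roughly like this (the selector and handler names are illustrative; in the directive you would use the element and $window that Angular hands you):

```javascript
// Sketch: bind mousedown on the slider handle, but mousemove/mouseup on
// window, so the drag keeps tracking and "end" fires even when the
// cursor leaves the element before the button is released.
var handle = document.querySelector('.slider-handle');

handle.addEventListener('mousedown', function (startEvent) {
  function onMove(moveEvent) {
    // update the slider position from moveEvent.clientX here
  }
  function onUp(endEvent) {
    // drag finished, even if the cursor is outside the handle
    window.removeEventListener('mousemove', onMove);
    window.removeEventListener('mouseup', onUp);
  }
  window.addEventListener('mousemove', onMove);
  window.addEventListener('mouseup', onUp);
});
```

Unbinding both handlers in onUp is what prevents the "stuck in mousemove" state seen in the $swipe demo.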
There doesn't seem to be much documentation or examples for using $swipe yet. The ngBook has a paragraph or two but no working example. Any suggestions or comments? I'm leaning towards two implementations: one for touch and one for the desktop.
Thanks!
You are correct, there is very little documentation for ngTouch's $swipe. Having just started digging into this myself, I am having a hard time finding anything, so thank you for your example.
The only solution I can see, from my limited understanding, is to treat it like a scroll on a touch device and cancel. These sentences are from the documentation:
If the vertical distance is greater, this is a scroll, and we let the
browser take over. A cancel event is sent.
cancel is called either on a touchcancel from the browser, or when we
begin scrolling as described above.
I haven't tried any of this, but the thought occurred to me and I figured it might be useful to you and to me (as I'm sure I will have to deal with it soon).
I have a rectangle which should trigger an event on TouchEnter.
But when I touch the rectangle, nothing happens until the long touch is over. It looks like it waits to make sure there is no long touch before TouchEnter kicks in.
If I touch the rectangle and move slightly (so it can no longer be a long touch), the event triggers.
How can I get rid of this:
"I'd better wait for the long touch to finish before I trigger TouchEnter"?
The same happens with the TouchDown event...
Set the attached properties Stylus.IsPressAndHoldEnabled and Stylus.IsFlicksEnabled to false as high as possible in the visual tree, in XAML or in code.
This disables most of the delay experienced on touch enter and touch down; the delay exists because the system needs it to figure out what the user wants to do.
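In XAML, setting both properties on a top-level container could look like this (the Grid is just a placeholder for whatever sits at the root of your tree):

```xml
<!-- Disabling press-and-hold and flicks high in the tree removes most
     of the recognition delay for touch events on everything below it. -->
<Grid Stylus.IsPressAndHoldEnabled="False"
      Stylus.IsFlicksEnabled="False">
    <!-- touch-sensitive content, e.g. the rectangle, goes here -->
</Grid>
```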
It happened only on my Touch-Monitor.
Using a Tablet it worked fine.
So I think it depends on the Hardware.
There is a custom UserControl which contains a TextBox. How do I give focus to the TextBox when the UserControl is shown? I've tried calling TextBox.Focus from the UserControl.IsVisibleChanged and TextBox.IsVisibleChanged events, but that didn't help. What else can I try?
It seems that something causes the TextBox to lose focus. The approach I've mentioned normally works. How can I find out what causes the TextBox to lose focus? I've tried listening to the TextBox.LostFocus event, but its parameters don't contain much valuable information and I also don't see the previous methods in the stack trace.
The code:
void TextBox1_IsVisibleChanged(object sender, DependencyPropertyChangedEventArgs e)
{
if (this.TextBox1.IsVisible)
this.TextBox1.Focus();
}
As I've said before, the same code works in a similar scenario in a test project, but it doesn't work in my application (the application is big and I am basically fixing bugs in it, this one amongst them). So I think the problem isn't that focus is set improperly (as I thought when opening this question); I think something else resets the focus. I am trying to find out what it is here: Find out why textbox has lost focus .
I'm not exactly sure why it doesn't come back on visibility changed; however, when I called TextBox1.Focus() in IsEnabledChanged it worked like a charm.
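If neither event does it on its own, a widely used pattern (my suggestion, not from the question) is to defer the Focus() call through the Dispatcher, so it runs after whatever else in the same pass might be stealing focus back:

```csharp
// Sketch: defer focusing until pending input/layout work has completed,
// so a later step in the same visibility change cannot steal focus back.
void TextBox1_IsVisibleChanged(object sender, DependencyPropertyChangedEventArgs e)
{
    if (this.TextBox1.IsVisible)
    {
        Dispatcher.BeginInvoke(
            new Action(() => this.TextBox1.Focus()),
            System.Windows.Threading.DispatcherPriority.Input);
    }
}
```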
Use the UserControl as a bridge: when the UserControl is shown, have it call a method that focuses the TextBox.
I am working on a touch screen application running on Windows XP Standard. With the current hardware, to invoke a right click the user has to click and hold for a couple of seconds, but this might interfere with other actions, like holding a repeat button in the ScrollViewer, so I have decided to disable right click.
Ideally I would want to disable right click at the application level, but if that is not possible, disabling right click at the Windows level would also work for me.
The OnPreviewMouseRightButtonDown/Up approach did not work for me.
There is a property Stylus.IsPressAndHoldEnabled on UIElement however. Set that to false to get rid of the press and hold right click behavior. I tested this on Windows 7 with .NET 4.0, but the property is available from .NET 3.0.
<RepeatButton Stylus.IsPressAndHoldEnabled="false" ... />
There is also a blogpost here that provides a code sample for a similar disabling of press and hold at window level. But with this in place, the PreviewTouchDown and TouchDown events will not be raised as soon as the finger touches the screen (which would be necessary for a RepeatButton I guess), only later. Read the 'Remarks' on this msdn page.
You can override the OnPreviewMouseRightButtonDown on the Window and set Handled to true. You also need to handle OnPreviewMouseRightButtonUp (thanks to Vitalij for pointing this out)
That should do the trick.
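A minimal sketch of those overrides on the Window (this mirrors the approach described above, not code from the question):

```csharp
// Sketch: swallow right-click at the Window level so press-and-hold
// cannot surface as a right click anywhere in the application.
protected override void OnPreviewMouseRightButtonDown(MouseButtonEventArgs e)
{
    e.Handled = true;   // stop the tunneling event before any child sees it
    base.OnPreviewMouseRightButtonDown(e);
}

protected override void OnPreviewMouseRightButtonUp(MouseButtonEventArgs e)
{
    e.Handled = true;
    base.OnPreviewMouseRightButtonUp(e);
}
```

Because Preview events tunnel down from the Window, marking them handled there suppresses the right click for every control in the tree.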