As I am trying to make my own custom "WinForms", I am confused about when each mouse event occurs. I have made my own custom classes, but now I have to rework the events, as they don't work correctly.
I have a custom class for controls. Objects of that class can contain other controls, which can contain other controls, and so on. There is a main control, which gets its input from a picture box. That input consists of where the mouse is and which event has been raised in the picture box.
So far I have figured that MouseMove, MouseHover and MouseDown are the simplest events to write, as they occur under simple conditions. The rest require additional data about the mouse's location, state and history. MouseDoubleClick seems to activate after a specific sequence of events (strictly down-up-down-up, down-up-down-move-up or down-up-down-move-leave-enter-move-up, with the movement events not being raised). With that in mind, I am even more confused.
In what conditions and sequences does each mouse event occur?
EDIT
Further testing made things even more confusing. For one, I now want to know at what rate MouseMove is registered; timing it shows a different interval between each event (or so my use of a Stopwatch says). This is important because it raises the question of when Hover is triggered.
Click is down-up, where moving is allowed between the two.
DoubleClick proved to be simple enough - down-up-down-up, where moving is allowed only between the second down and up.
Hover activates only once after each Enter, when the mouse remains stationary; if you want to trigger Hover again, the mouse has to leave and then re-enter.
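To illustrate, here is a minimal sketch of the kind of state tracking I mean (all names are simplified placeholders; the real thresholds would come from SystemInformation.DoubleClickTime and SystemInformation.MouseHoverTime):

class MouseTracker
{
    private DateTime lastClickTime = DateTime.MinValue;
    private bool downSeen;    // a down followed by an up completes a Click
    private bool hoverFired;  // Hover fires only once per Enter

    public event EventHandler Click;
    public event EventHandler DoubleClick;
    public event EventHandler Hover;

    public void OnDown() { downSeen = true; }

    public void OnUp()
    {
        if (!downSeen) return;
        downSeen = false;
        if ((DateTime.Now - lastClickTime).TotalMilliseconds <= 500) // assumed interval
        {
            DoubleClick?.Invoke(this, EventArgs.Empty);
            lastClickTime = DateTime.MinValue; // reset so a third click starts a new sequence
        }
        else
        {
            Click?.Invoke(this, EventArgs.Empty);
            lastClickTime = DateTime.Now;
        }
    }

    public void OnEnter() { hoverFired = false; }

    // Called when the mouse has sat still long enough (the hover delay).
    public void OnStationary()
    {
        if (hoverFired) return;
        hoverFired = true;
        Hover?.Invoke(this, EventArgs.Empty);
    }
}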
So the question now is how the system tracks the mouse's activity - how it detects the mouse moving, being held down and being released. Hopefully that will help me get the full answer.
Related
I have an "odd" situation.
I've got a form with a binding source and a binding navigator.
In this instance, I've got 161 records (via EF6) to display.
The databinding to the controls works nicely.
But what I find is that the expected events for the binding navigator don't happen consistently. Then they settle down.
I've got event handlers (in addition to the default ones, but the same thing happens when I remove the default ones as well).
I set the binding source on the navigator, and the "Position Changed" event is raised (as I'd expect).
Clicking on any of the "Move" buttons, or editing the position field will result in:
No event being raised (not the item click events, not the binding source Position Changed) roughly 3 out of 4 times.
Then the event raises, all the expected navigation occurs, and repeat.
But it doesn't seem to be permanent, because after moving through almost all the records the navigation starts working properly.
This happens with and without the debugger connected.
The other thing I notice is that when it fails, the icon in the taskbar flashes once.
It's not something in any of my handler code, because it never gets to my code.
It might be a property setting.
It's not an exception, because even with "break on all exceptions", no exception is reported.
When you talk about "move" buttons, I take it you mean the next/previous record navigation buttons on the BindingNavigator. Those buttons are not full-fledged Windows controls; rather, they are "lightweight" controls. I've seen issues in the past because of this.
While I do not have all the details fresh in memory, it had to do with the fact that they don't steal the focus away from other controls as regular Windows controls do, and this caused some events not to be raised.
I suggest you create your own navigation buttons, which is what I ended up doing in all my Windows Forms projects. Those regular buttons can then call the BindingSource methods such as MoveNext and so on.
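For example, a minimal sketch (the control and BindingSource names are placeholders):

// Plain Buttons wired straight to the BindingSource - no BindingNavigator involved.
btnFirst.Click    += (s, e) => bindingSource1.MoveFirst();
btnPrevious.Click += (s, e) => bindingSource1.MovePrevious();
btnNext.Click     += (s, e) => bindingSource1.MoveNext();
btnLast.Click     += (s, e) => bindingSource1.MoveLast();

// PositionChanged should now fire reliably on every move.
bindingSource1.PositionChanged += (s, e) =>
    lblPosition.Text = string.Format("{0} of {1}",
        bindingSource1.Position + 1, bindingSource1.Count);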
From a production application, we notice that our WPF buttons fire the ICommand.Execute method twice on fast double click.
Now, on every Command, the application is covered with a full-screen spinner animation, preventing any further interaction with the application.
This GitHub repo contains a minimal repro of the issue. Note that:
when the Button's Command fires, the "IsBusy" flag is set to true
as a consequence, the BusyIndicator overlay will be shown
as a consequence, the Button cannot be pressed again until after 300ms
However, especially on slow computers, when double-clicking really fast (gaming fast, that is), it is possible to fire the command twice without the BusyIndicator blocking the second call (this can be seen when the output shows two 'click' lines right after one another).
This is unexpected behavior to me, as the IsBusy flag is set to true right away on the UI thread.
How come a second click is able to pass through?
I would expect the IsBusy binding to show the overlay on the UI thread, blocking any further interaction.
The GitHub sample also contains two workarounds:
using the ICommand.CanExecute to block the Execute handler
using the PreviewMouseDown to prevent double clicks
I'm trying to understand what the issue is.
What work-around would you prefer?
Diagnosis
This is only my guess and not solid, confirmed info, but it seems that when you click the mouse button, the hit-testing is done immediately, while the mouse-related events are only scheduled to be raised (via the Dispatcher, I presume). The important thing is that the control that was clicked is determined at the time the click occurred, not after the previous click has been completely handled (including all UI changes that may follow).
So in your case, even though the first click results in showing the BusyIndicator covering (and thus blocking) the Button, you can manage to click a second time before the BusyIndicator is actually shown (and that does not happen immediately). The click event on the Button is then scheduled to be raised (which happens after the BusyIndicator is shown), causing the command to execute again even though by that point the BusyIndicator is already blocking the Button.
Solution
If your goal is to prevent command execution while the previous one is still executing, the obvious choice is to make the Command.CanExecute result depend on the state of the IsBusy flag. Moreover, I wouldn't even call it a workaround, but rather a proper design.
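A minimal sketch of that design, assuming a typical hand-rolled RelayCommand (not part of WPF itself; uses System.Windows.Input) and a view model exposing the IsBusy flag:

public class RelayCommand : ICommand
{
    private readonly Action execute;
    private readonly Func<bool> canExecute;

    public RelayCommand(Action execute, Func<bool> canExecute)
    {
        this.execute = execute;
        this.canExecute = canExecute;
    }

    public bool CanExecute(object parameter) { return canExecute(); }
    public void Execute(object parameter) { execute(); }

    // Let the CommandManager requery CanExecute on input events.
    public event EventHandler CanExecuteChanged
    {
        add { CommandManager.RequerySuggested += value; }
        remove { CommandManager.RequerySuggested -= value; }
    }
}

// In the view model: the Button checks CanExecute at the moment the click
// is processed, so a second click that sneaks in before the overlay
// renders becomes a no-op once IsBusy is true.
DoWorkCommand = new RelayCommand(
    () => { IsBusy = true; StartWork(); },  // StartWork is a placeholder
    () => !IsBusy);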
What you're facing here is a clear-cut example of why you shouldn't make your business logic rely on the UI. Firstly, because rendering strongly depends on the machine's processing power, and secondly because covering a button with another control by no means guarantees the button cannot be "clicked" (via the UI Automation framework, for example).
Is there a way in WPF to get the active touch points? I need to determine if the user is touching the screen, similar to the Mouse class's Pressed property.
I just need to know if any touch is present on the screen - don't mind what UIElement it's touching.
Here are two options, but they may not be the most correct way to do it:
1) You could subscribe to MainWindow.PreviewTouchDown and MainWindow.PreviewTouchUp and maintain a list of all the current touch devices (see the sketch after this list). It would be easy to implement but could make your code messy.
2) Subscribe to Touch.FrameReported, from which you can get a collection of touch points via TouchFrameEventArgs.GetTouchPoints(null). This fires on every touch event, so it may be too often, but it would allow you to handle the event from any class.
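A minimal sketch of option 1, tracking devices by their Id in the window's code-behind (names are placeholders):

public partial class MainWindow : Window
{
    // Ids of the touch devices currently in contact with the screen.
    private readonly HashSet<int> activeTouches = new HashSet<int>();

    public bool IsScreenTouched { get { return activeTouches.Count > 0; } }

    public MainWindow()
    {
        InitializeComponent();
        PreviewTouchDown += (s, e) => activeTouches.Add(e.TouchDevice.Id);
        PreviewTouchUp   += (s, e) => activeTouches.Remove(e.TouchDevice.Id);
    }
}

One caveat (the "messy" part): a touch that ends outside the window may never raise PreviewTouchUp, so you might also want to clear stale entries, for example in a LostTouchCapture handler.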
You can subscribe to your main window's ManipulationStarting event (when the first finger makes contact with the screen), ManipulationInertiaStarting event (when the last finger lifts off the screen) and/or ManipulationDelta event (when any finger moves).
Within your event handlers you can get a list of all current touch points via ManipulationDeltaEventArgs.Manipulators.
Don't forget to set your main window's IsManipulationEnabled to true.
This way you just have to remember whether a manipulation is currently in progress or not. You don't have to keep track of all the individual touch points yourself.
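A minimal sketch of this approach (handler wiring shown in the window's constructor):

public partial class MainWindow : Window
{
    // The single flag you have to remember: is a manipulation in progress?
    private bool manipulationInProgress;

    public MainWindow()
    {
        InitializeComponent();
        IsManipulationEnabled = true;

        ManipulationStarting += (s, e) => manipulationInProgress = true;
        ManipulationInertiaStarting += (s, e) => manipulationInProgress = false;
        ManipulationDelta += (s, e) =>
        {
            // e.Manipulators enumerates the current touch points.
            foreach (var manipulator in e.Manipulators)
            {
                Point position = manipulator.GetPosition(this);
            }
        };
    }
}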
I hope this makes sense.
I have created several WPF User Controls. The lowest level item is 'PostItNote.xaml'. Next, I have a 'NotesGroup.xaml' file that has an ItemsControl bound to a List of PostItNotes. Above that, I have a 'ProgrammerControl.xaml' file. Each ProgrammerControl has a grid with four different NotesGroup user controls on it (and each NotesGroup contains 0-many PostItNotes).
Then, I have my main window. It also has an ItemsControl, bound to a list of Programmers.
So, you end up with a high level visual view of a list of programmers, each programmer has four groups of tickets, each group of tickets has many PostItNotes.
The trouble I'm having is that I want to respond to a mouse click event in my main window's code-behind file.
I can add a MouseClick event to my PostItNote.xaml.vb file, and it gets called when the user clicks a PostItNote, and I can re-raise the event; but I can't seem to get the NotesGroup to listen for that event. I'm not sure if that's even the correct approach.
When the user clicks the PostItNote, I'm going to do a bunch of business-logic type stuff that the PostItNote control doesn't have a reference to/doesn't know about it.
Can anyone point me in the right direction?
You have a couple choices:
Use the PreviewXXX events, which are fired during the "tunneling" phase of WPF event routing. Parent controls can always preview the events going down through them to their children.
Use the more advanced approach of hooking up events with the AddHandler method, to which you can pass a parameter called handledEventsToo; this basically means you want to know when the event happened "within" you, even if some descendant element handled the event itself (see the sketch after this list).
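A minimal sketch of the second option, placed for example in the NotesGroup constructor (the handler name is hypothetical):

// Listen for MouseUp anywhere inside this control, even when a child
// (such as a PostItNote) has already marked the event as handled.
AddHandler(Mouse.MouseUpEvent,
           new MouseButtonEventHandler(OnAnyMouseUp),
           handledEventsToo: true);

private void OnAnyMouseUp(object sender, MouseButtonEventArgs e)
{
    // e.OriginalSource identifies the element that was actually clicked.
}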
I am going to take a flyer here. You probably don't want to be handling the event that high up; not really anyway. You are catching the event at the lower levels, which is unavoidable. Consider invoking a routed command from the PostItNote click event handler.
The routed commands bubble up and tunnel down through the tree. You can have an architecture where a high-level handler can listen to a logical event (Opening a postit note perhaps?). The handler for this doesn't need to care where the command originates from. It might be from you clicking something, it might be from clicking on a toolbar button. Both are valid scenarios.
It sounds like you are creating some kind of custom UI, am I right? You want the application to respond to the user's interactions, and that is what RoutedCommands are for.
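A minimal sketch of that idea, in C# for brevity (the command and handler names are made up for illustration):

// A shared command definition, e.g. in a static class both ends can see.
public static class NoteCommands
{
    public static readonly RoutedCommand OpenNote =
        new RoutedCommand("OpenNote", typeof(NoteCommands));
}

// In PostItNote: translate the low-level click into the logical command.
private void PostItNote_MouseUp(object sender, MouseButtonEventArgs e)
{
    NoteCommands.OpenNote.Execute(DataContext, this); // routes up the tree
}

// In the main window's constructor: one high-level handler, regardless of
// where in the tree the command originated.
CommandBindings.Add(new CommandBinding(NoteCommands.OpenNote,
    (s, e) => { /* business logic for opening a note goes here */ }));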
Is there a simple way to tell what triggered a Button's Click event, apart from setting multiple flags in Mouse/Key Up/Down event handlers? I'm currently only interested in distinguishing the mouse from everything else, but it would be nice to handle Stylus and other input types if possible. Do I have to create my own button control to achieve this?
Edit: To clarify why I care: in this particular case I'm trying to implement "next" and "previous" buttons for a sort of picture viewer. The pictures may be of different sizes, and the buttons' positions will change (so that they are always centered below the picture). It's quite annoying to chase such buttons with the mouse when you need to scroll through several pictures, so I want to keep the mouse position constant relative to the clicked button - but only if it was clicked by the mouse, not the keyboard.
Edit2: It does not matter whether the buttons are at the top or down at the bottom, since the center can change anyway. "Picture viewer" here is just an abstraction; in this particular case it's important to me that the top left corner of the picture retains its position, but going into the details is out of the scope of the question. Scaling the picture is not trivial in this sort of application either, so I do want an answer to the question I asked, not a UI implementation discussion.
if (InputManager.Current.MostRecentInputDevice is KeyboardDevice)
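For example, in the Click handler (a sketch; what you do in the mouse branch is up to you):

private void NextButton_Click(object sender, RoutedEventArgs e)
{
    if (InputManager.Current.MostRecentInputDevice is KeyboardDevice)
    {
        // Triggered via Enter/Space or an access key - leave the cursor alone.
        return;
    }

    // Triggered by the mouse (or a stylus promoted to mouse input):
    // reposition the window/content so the cursor stays over the button.
}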
You should instead handle the MouseXXX, StylusXXX, and KeyboardXXX events specifically.
Could you elaborate on why you would care?
Having written many custom controls myself over the years, I cannot recall one instance where I cared how a click event was triggered. (Except for that pre-VB6 control lifecycle glitch that fired got focus/click/lost focus in a different order depending on whether you clicked a button, used an accelerator key, or pressed ENTER as the default.)
Personally I find it annoying when people place buttons at the bottom of Windows forms and web pages. Read some of the literature on UI and you will find that most people don't even get that far if they don't find something interesting on the page/form. I like to be able to click next as soon as I know the content is of no interest to me, so keep the nav buttons prominent at the top.
I would put the prev/next buttons at the top of the picture, where you can control their position. Dancing those buttons around goes against most opinions on UI consistency. Furthermore, creating a different experience for a mouse user versus a keyboard user also goes against most current wisdom on good UI design.
The alternative is to choose a constant maximum size a picture can occupy in the UI: if it exceeds that, scale it to fit; otherwise allow it to change freely within a frame. This keeps your buttons in the same place if you absolutely must have them at the bottom.
You could create an enumeration of the different devices, have a global property that you set every time mouse/keyboard/etc. input occurs, and just refer to it when needed.
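A minimal sketch of that idea (all names are hypothetical; note that stylus input also raises promoted mouse events, so the order in which these handlers fire matters):

public enum InputKind { None, Mouse, Keyboard, Stylus }

public static class LastInput
{
    public static InputKind Kind { get; set; }
}

// Set the flag in low-level preview handlers on the window...
PreviewMouseDown  += (s, e) => LastInput.Kind = InputKind.Mouse;
PreviewKeyDown    += (s, e) => LastInput.Kind = InputKind.Keyboard;
PreviewStylusDown += (s, e) => LastInput.Kind = InputKind.Stylus;

// ...and consult it wherever the Click is handled.
private void NextButton_Click(object sender, RoutedEventArgs e)
{
    if (LastInput.Kind == InputKind.Mouse)
    {
        // keep the cursor position constant relative to the button
    }
}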