The whole GUI loads fine. After clicking a certain button, the GUI becomes unresponsive: none of the buttons react. If I switch to another application and switch back, the GUI works again immediately.
Behind the clicked button is a handler that sends requests to the back end. I debugged the app, and the data is always returned.
Since the app is huge and I'm not familiar with it, I cannot extract a simple model of how requests are sent and data is processed. What are the possible causes? I have no idea at this point.
Best regards,
----- Update -----
The back-end request is sent on a thread-pool thread; when the data arrives, no UI controls are updated directly. Only the presenters (view models) are updated, and those are bound to the UI controls.
As sa-ddam213 has suggested, and as I also believe, you are executing a block of code on a background Thread or Task.
I think the problem is this:
- On button click, execution starts in the background.
- You keep a flag that indicates whether the background process is running, and the Button's Command checks it in CanExecute().
- The UI checks CanExecute() for the Button's Command for a while and then stops, as mentioned in another question here - Is Josh Smith's implementation of the RelayCommand flawed?
- The process completes in the background.
- The UI does not know the background process has completed, and since it has stopped checking CanExecute() for the Button's Command, it will not find out on its own (you need to tell it).
- Therefore the UI appears unresponsive, but when the user clicks somewhere in the application or switches out and back in, as you said, it comes back again.
Solution
- By calling the InvalidateRequerySuggested method you can trigger the command requery on the UI thread, so the UI will recheck CanExecute() for each control and/or button command.
// Forcing the CommandManager to raise the RequerySuggested event
CommandManager.InvalidateRequerySuggested();
The InvalidateRequerySuggested method forces the CommandManager to raise the RequerySuggested event. The RequerySuggested event informs a command source to query the command it is associated with to determine whether or not the command can execute.
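For illustration, here is a minimal sketch of where such a call could go; OnRefreshExecuted, LoadData and the IsBusy flag are placeholders for whatever your handler, back-end call and CanExecute flag actually are:

// Hypothetical handler: run the back-end call on a thread-pool thread,
// then clear the busy flag and force a CanExecute requery on the UI thread.
private void OnRefreshExecuted(object parameter)
{
    IsBusy = true;                                  // assumed flag read by CanExecute()
    Task.Run(() => LoadData())                      // LoadData() stands in for the real back-end call
        .ContinueWith(_ =>
        {
            IsBusy = false;
            // Forces the CommandManager to raise RequerySuggested; this must run on
            // the UI thread, which the scheduler below guarantees here.
            CommandManager.InvalidateRequerySuggested();
        }, TaskScheduler.FromCurrentSynchronizationContext());
}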
For more, refer to:
- How does CommandManager.RequerySuggested work?
- MSDN - CommandManager
- MSDN - CommandManager.InvalidateRequerySuggested Method
It's just a wild guess, but try dropping the priority of the logic inside your click event handler.
Example
private void Button_Click(object sender, RoutedEventArgs e)
{
    Dispatcher.BeginInvoke(DispatcherPriority.Background, (Action)delegate
    {
        // all your code here.
    });
}
Related
From a production application, we notice that our WPF buttons fire the ICommand.Execute method twice on a fast double-click.
Now, on every Command, the application is covered with a full-screen spinner animation, preventing any further interaction with the application.
This GitHub repo contains a minimalistic repro of the issue. Note that:
- when the Button's Command fires, the "IsBusy" flag is set to true
- as a consequence, the BusyIndicator overlay will be shown
- as a consequence, the Button cannot be pressed again until after 300 ms
However, especially on slow computers, a really fast double-click (gaming-fast, that is) can fire the command twice without the BusyIndicator blocking the second call (this can be seen when the output shows two 'click' lines right after one another).
This is unexpected behavior to me, as the IsBusy flag is set to true right away on the UI thread.
How come a second click is able to pass through?
I would expect the IsBusy binding to show the overlay on the UI thread, blocking any further interaction.
The GitHub sample also contains two workarounds:
- using ICommand.CanExecute to block the Execute handler
- using PreviewMouseDown to prevent double clicks
I'm trying to understand what the issue is.
What work-around would you prefer?
Diagnosis
This is only my guess, not solid and confirmed information, but it seems that when you click the mouse button, the hit-testing is done immediately, while the mouse-related events are only scheduled to be raised (using the Dispatcher, I presume). The important thing is that the control that was clicked is determined at the time the click occurs, not after the previous click has been completely handled (including all UI changes that potentially follow).
So in your case, even if the first click results in showing the BusyIndicator covering (and thus blocking) the Button, if you manage to click a second time before the BusyIndicator is actually shown (and that does not happen immediately), the click event on the Button will be scheduled to be raised. That happens after the BusyIndicator is shown, so the command is executed again even though at that point the BusyIndicator may well be blocking the Button.
Solution
If your goal is to prevent command execution while the previous one is still running, the obvious choice is to make the Command.CanExecute result depend on the state of the IsBusy flag. Moreover, I wouldn't even call it a workaround, but rather proper design.
What you're facing here is a clear-cut example of why you shouldn't make your business logic rely on the UI: firstly, because rendering strongly depends on the machine's processing power, and secondly, because covering a button with another control by no means guarantees the button cannot be "clicked" (for example via the UI Automation framework).
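A minimal sketch of that design, assuming an IsBusy property on the view model and a DoWorkAsync placeholder for the real command body (neither name is taken from the linked repro):

// CanExecute is driven purely by view-model state, so the second click of a
// fast double-click is rejected even if the BusyIndicator has not rendered yet.
private bool CanRun(object parameter)
{
    return !IsBusy;
}

private async void Run(object parameter)
{
    IsBusy = true;                                   // flips CanExecute to false immediately
    try
    {
        await DoWorkAsync();                         // placeholder for the real work
    }
    finally
    {
        IsBusy = false;
        CommandManager.InvalidateRequerySuggested(); // let command sources re-query CanExecute
    }
}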
I am experimenting with a simple ViewModel-first based WPF application and some primitive navigation logic. The application consists of two views (screens). One screen contains a button "Go forward" and the other a button "Go backward". By pressing one of the buttons a delegate command is invoked, which in turn causes the shell view-model to switch the active screen. Screen 1 switches to Screen 2, whereas Screen 2 switches to Screen 1.
The problem with this approach is that it introduces a race condition. When clicking fast enough, there is a chance that the corresponding action (go forward/go backward) is executed twice, causing the application to fail. The interesting thing is that the screen has already been changed, but the UI doesn't reflect the state change instantaneously. Until now I had never experienced this kind of gap, and I set up this experiment precisely to prove that a single-threaded (dispatched) WPF application is automatically thread-safe.
Does somebody have an explanation for this odd behavior? Is the WPF binding mechanism so slow that the button can be pressed a second time before the UI has updated itself to reflect the new screen state?
I have no idea how to fix this while following the recommendations for developing MVVM applications. There is no way to synchronize the code, because there is only one thread. I hope you can help me, because right now I feel very insecure about relying on the WPF data binding and templating system.
Zip archive containing the project files
MainWindow.xaml:
<Window x:Class="MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        xmlns:local="clr-namespace:WpfApplication1"
        mc:Ignorable="d"
        Title="MainWindow" Height="350" Width="525">
    <Window.Resources>
        <DataTemplate DataType="{x:Type local:Screen1}">
            <local:View1/>
        </DataTemplate>
        <DataTemplate DataType="{x:Type local:Screen2}">
            <local:View2/>
        </DataTemplate>
    </Window.Resources>
    <Window.DataContext>
        <local:ShellViewModel/>
    </Window.DataContext>
    <Grid>
        <ContentControl Content="{Binding CurrentScreen}"/>
    </Grid>
</Window>
The ShellViewModel containing a "go forward" and "go backward" method:
Public Class ShellViewModel
    Inherits PropertyChangedBase

    Private _currentScreen As Object

    Public Property Screens As Stack(Of Object) = New Stack(Of Object)()

    Public Sub New()
        Me.Screens.Push(New Screen1(Me))
        Me.GoForward()
    End Sub

    Property CurrentScreen As Object
        Get
            Return _currentScreen
        End Get
        Set(value)
            _currentScreen = value
            RaisePropertyChanged()
        End Set
    End Property

    Public Sub GoBack()
        Log("Going backward.")
        If (Me.Screens.Count > 2) Then
            Throw New InvalidOperationException("Race condition detected.")
        End If
        Log("Switching to Screen 1")
        Me.Screens.Pop()
        Me.CurrentScreen = Me.Screens.Peek()
    End Sub

    Public Sub GoForward()
        Log("Going forward.")
        If (Me.Screens.Count > 1) Then
            Throw New InvalidOperationException("Race condition detected.")
        End If
        Log("Switching to Screen 2.")
        Me.Screens.Push(New Screen2(Me))
        Me.CurrentScreen = Me.Screens.Peek()
    End Sub
End Class
The Screen class containing just a delegate command to start the action:
Public Class Screen1
    Inherits PropertyChangedBase

    Private _clickCommand As ICommand
    Private _shellViewModel As ShellViewModel

    Public Sub New(parent As ShellViewModel)
        _shellViewModel = parent
    End Sub

    Public ReadOnly Property ClickCommand As ICommand
        Get
            If _clickCommand Is Nothing Then
                _clickCommand = New RelayCommand(AddressOf ExecuteClick, AddressOf CanExecute)
            End If
            Return _clickCommand
        End Get
    End Property

    Private Function CanExecute(arg As Object) As Boolean
        Return True
    End Function

    Private Sub ExecuteClick(obj As Object)
        Threading.Thread.SpinWait(100000000)
        _shellViewModel.GoForward()
    End Sub
End Class
There is no weird race condition
I've run your code. There is one thread. The main one.
One thread = no race condition.
Why do you want to prove the following?
I made this experiment just to prove, that a single-threaded
(dispatched) WPF application is automatically thread-safe.
It's a bulletproof fact. One thread = thread-safe (as long as you don't share data process-wide, but then it's not about thread safety anymore).
Binding and methods not supporting successive calls
In fact, your methods GoBack and GoForward do not support successive calls. They are only meant to be called in alternation, one after the other.
Thread safety here doesn't imply that your methods cannot be called twice in a row. If there is any task queue in the process, a method can be called twice.
What you may actually be trying to prove is the following:
Clicks are captured and processed in line, without any task queuing between
the click, the property changed event raised, the dispatcher
invocation, the binding / display refresh.
That is clearly wrong!
When you call Dispatcher.BeginInvoke or Invoke, it internally uses a queue of work items, and nothing prevents the same kind of item from being queued twice, for example by two similar clicks.
To be frank, I was unable to reproduce your bug. I think it's the same thread that captures clicks, dispatches them to your code, and then updates the screen. However, since the work items for click events and display refreshes live in the same queue, it is theoretically possible to enqueue two clicks before the screen changes. However:
- I cannot click fast enough to beat my CPU.
- I don't think the SpinWait is needed.
- Something may be missing in my configuration.
Why not make your methods support successive calls?
GoBack and GoForward could check a state and do nothing if the current state is not valid (see the sketch after the list below).
You could have used:
1. Two screens, both instantiated from the start.
2. A bool to indicate the current state (Forward or Back).
3. An enum, to be clearer in the code.
4. A state machine... no! I'm kidding.
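For example, a guarded version of the navigation methods could look roughly like this (C# for brevity; the _isForward, _screen1 and _screen2 fields are assumptions, not part of the original sample):

// Each method checks the current state first and ignores a duplicate call,
// so a second queued click becomes a no-op instead of corrupting the state.
public void GoForward()
{
    if (_isForward)            // already showing screen 2: ignore the duplicate call
        return;

    _isForward = true;
    CurrentScreen = _screen2;
}

public void GoBack()
{
    if (!_isForward)           // already showing screen 1: ignore the duplicate call
        return;

    _isForward = false;
    CurrentScreen = _screen1;
}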
Note: why use a Stack to push and pop screens (there is only one on it at a time, by the way)? And in case you add another thread:
- Stack pop/push are not thread-safe.
- Use ConcurrentStack<T> instead.
Simulation
Even when the UI thread is frozen or busy doing something, further inputs are still being collected. Try this out (sorry for C#, but you get the point):
private void ButtonClick(object sender, EventArgs args)
{
    Debug.WriteLine("start");
    Thread.Sleep(6000);
    Debug.WriteLine("End");
}
Click the button, place a breakpoint on the "start" line, and click the button again before the UI thread unfreezes; you will see that exactly 6 seconds after your first click the breakpoint gets hit again.
Explanation
The UI thread can obviously only perform one action at a time, but it has to be able to accept work from many sources, meaning it queues its actions. Therefore any PropertyChanged handler (or any handler at all, including OnClick) only queues an action for the UI thread. Execution doesn't jump out of your code to update the UI elements in the middle of your setter. If you call Thread.Sleep after the setter, you will see that nothing changes on screen, because the UI thread hasn't gotten to invoke the update yet.
Why is this important
In your code you first push a screen and then set it as current, raising PropertyChanged. That does not change the screens immediately; it just queues the update. There is no guarantee that another click doesn't get scheduled before that.
You could also freeze your UI thread by raising PropertyChanged a million times, causing it to stall during the update itself. Yet the clicks arriving in the meantime would still be collected.
So your "safe point" - the point after which no other click can still be scheduled - is not after the setter has finished, but after the Loaded event has been raised on the new window.
How to fix
Read Fab's answer :)
Don't think that just because the UI thread is blocked at the moment, nothing gets in. If you want to disable inputs while something is being calculated, you need to disable the inputs manually.
Possible solutions
- Set IsEnabled, Visibility, or IsHitTestVisible on the inputs (a sketch of this option follows below)
- Show some overlay or similar
- Use a boolean flag that globally allows or disallows all incoming actions (basically a lock)
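A rough sketch of the first option, with hypothetical names (GoForwardButton and LoadNextScreen are not from the original code):

// Disable the input before starting the work; a click that is already queued
// will find the button disabled when it is processed, so no Click is raised.
private async void GoForwardButton_Click(object sender, RoutedEventArgs e)
{
    GoForwardButton.IsEnabled = false;
    try
    {
        await Task.Run(() => LoadNextScreen());   // placeholder for the long-running work
    }
    finally
    {
        GoForwardButton.IsEnabled = true;
    }
}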
I cannot reproduce the described behavior - double-clicking causes the app to first "go backward" and then "go forward" on my side. Nevertheless, I think that expecting the button to disappear before the user can click it a second time is not a good design (especially in the case of devices that, for instance, have a dedicated "double-click" button), and I would personally never rely on that.
What I think is the best way to proceed in this situation is to properly implement the CanExecute method, so that it does not simply return true (which, by the way, makes it most likely redundant), but instead queries the _shellViewModel to determine whether it is in a state that allows invoking the method called by ExecuteClick (in the case of GoForward it should return true only if there is exactly one element on the stack). I cannot test that (because I cannot reproduce the behavior in question), but I'm pretty sure that even if the user clicks the same button twice, the second call to CanExecute will occur after the first call to ExecuteClick, so the view model is guaranteed to be up to date (the result will be false and GoForward will not be called again).
@Pavel Kalandra:
While it is possible for a simple click event handler to be queued multiple times even if the UI thread is blocked, I can't reproduce this behavior with a delegate command. Hence I assume that the WPF framework handles the invocation of a command a little differently from a simple click event handler. Moreover, in your example the click event is already queued before the execution of the event handler has finished; in my situation this is not the case.
To test this assumption I made a further experiment: using a command that blocks the UI thread for a few seconds and then shows a message, you can see that it is not possible to invoke it multiple times during its execution. I believe the WPF framework is somehow preventing this from happening. Therefore the two scenarios aren't really comparable.
But I think that your explanation is still correct. Pushing the screen causes the PropertyChanged event to be fired, but the screen is not updated immediately. Instead, the associated job is pushed onto the dispatcher queue and scheduled. As a consequence, there is a short time span during which it is possible to invoke the command a second time.
@Fab:
If you rely strictly on the accepted definition of a race condition, then there shouldn't be one in my sample application. But for simplicity I would still like to call it a race condition, because the scheduling of jobs makes the execution nondeterministic.
Nevertheless, the assumption I intended to prove is wrong. I wanted to prove it because of threading problems we are currently facing. Our application simulates multiple modal methods at the same time and therefore relies on multiple threads. Because the interactions a user is allowed to perform aren't properly synchronized, there is a high chance of race conditions (synchronizing them is not an option for other reasons).
At the moment I am working on a prototype which doesn't use threads as heavily. My hope was that by executing everything on the dispatcher thread, race conditions (or similar problems) wouldn't be possible. At least for a ViewModel-first approach this seems to be wrong, because of the way WPF schedules binding updates.
I have used a simple scenario where it is easy to provide a fix for the potential "race condition". But in general it won't be easy to write a bulletproof WPF application. A flag indicating the direction (forward/backward) won't be enough when dealing with multiple screens, but the command delegate could check whether it is being invoked from the active screen.
PS: As long as I rely exclusively on the dispatcher thread to execute actions, I see no need for a ConcurrentStack ;-)
I have come across another, similar issue that proves that UI scheduling can in fact introduce race conditions even if the application is single-threaded.
In the example, a piece of code is called that is supposed to be atomic. Because dispatcher scheduling with different priorities is involved, the code may be interrupted in the middle of its execution.
This is an example that I found in a similar form in our production code. Users had reported an issue that occurred sporadically. I then found out that a SelectionChanged event was interrupting a piece of code that was supposed to execute as a block.
public partial class MainWindow : Window
{
    private readonly DispatcherTimer timer;
    private bool inBetweenMethod;

    public MainWindow()
    {
        InitializeComponent();
        this.timer = new DispatcherTimer(DispatcherPriority.Loaded);
        this.timer.Interval = TimeSpan.FromMilliseconds(10);
        this.timer.Tick += Timer_Tick;
        this.timer.Start();
        this.MethodThatIsSupposedToBeAtomic();
    }

    private void Timer_Tick(object sender, EventArgs e)
    {
        if (inBetweenMethod)
        {
            throw new Exception("Method was interrupted in the middle of execution.");
        }
    }

    private void MethodThatIsSupposedToBeAtomic()
    {
        inBetweenMethod = true;
        // ContextIdle is a low priority, so higher-priority work already in the
        // dispatcher queue (such as the timer tick) can run before this delegate,
        // breaking the intended atomicity of the block.
        Dispatcher.Invoke(new Action(() =>
        {
            for (int i = 0; i < 100; i++)
            {
                Console.WriteLine("iterating");
            }
        }), DispatcherPriority.ContextIdle);
        inBetweenMethod = false;
    }
}
I've noticed that I have a problem with several users who tend to double-click buttons.
I have several buttons bound to commands that launch many actions.
For example, there are two windows that communicate with each other through a mediator, so when I click "close the other window", the bound command sends a CloseTheOtherWindowMessage. The problem is that when a user double-clicks, it tries to close the window a second time and, as expected, it crashes.
I've tried setting the window's BusyIndicator to IsBusy when I press the button, but my finger is quicker than MVVM and it still lets me double-click before the BusyIndicator starts showing.
I've found many examples of how to accept only double clicks in MVVM using Interaction.Behaviors, but I want just the opposite. Is there an example, or another good and general solution to this problem?
Why is it "as expected" when it crashes? A crash should never be "as expected".
Your finger shouldn't be "quicker than MVVM". The Dispatcher thread always acts deterministically and sequentially. Do you use a multi-threaded approach?
In the command's Execute method or handler, raise its CanExecuteChanged event, and the binding engine will immediately call CanExecute(...). Make that method return false the second time. Maybe use a timer, or, better yet, determine from your view-model state alone that the action is not possible right now (e.g. because IsOtherStuffAvailable is currently false).
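A sketch of that idea, with assumed names (IsOtherWindowOpen, CloseOtherWindowCommand and the mediator call are placeholders, not taken from the question):

// Execute flips the state and re-raises CanExecuteChanged right away, so the
// second click of a double-click fails the CanExecute check and is ignored.
private bool CanCloseOtherWindow(object parameter)
{
    return IsOtherWindowOpen;
}

private void CloseOtherWindow(object parameter)
{
    if (!IsOtherWindowOpen)
        return;                                         // extra guard against a duplicate call

    IsOtherWindowOpen = false;
    CloseOtherWindowCommand.RaiseCanExecuteChanged();   // bindings re-query CanExecute now
    _mediator.Send(new CloseTheOtherWindowMessage());   // assumed mediator API
}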
I recently acquired some source code for a console wrapper for a server. The program was originally in WPF, and part of the code was:
private void ServerProc_ErrorDataReceived(object sender, DataReceivedEventArgs e)
{
    Dispatcher.Invoke(new Action(() =>
    {
        ConsoleTextBlock.Text += e.Data + "\r\n";
        ConsoleScroll.ScrollToEnd();
    }));
}

private void ServerProc_OutputDataReceived(object sender, DataReceivedEventArgs e)
{
    Dispatcher.Invoke(new Action(() =>
    {
        ConsoleTextBlock.Text += e.Data + "\r\n";
        ConsoleScroll.ScrollToEnd();
        ParseServerInput(e.Data);
    }));
}
It also had this comment in both methods:
// You have to do this through the Dispatcher because this method is
// called by a different Thread
However, in WinForms there is no such thing - is there a way to change this to a BackgroundWorker or something? (I've barely done any multi-threading.)
Both methods are event handlers. Chances are they come from some kind of listening code, and I would expect that they are called from a non-UI thread (normally a thread-pool thread that is doing the listening). You can check that by setting a breakpoint and looking at the Threads window in the debugger.
So you will need to apply the WinForms way of updating the UI from a non-UI thread.
If you search SO you should find quite a lot on how to do that, e.g.:
- Updating UI from a different thread
- How to update GUI from another thread in C#?
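For reference, a minimal sketch of that WinForms pattern; consoleTextBox and the AppendLine helper are assumptions, not names from your code:

// Marshal the update onto the UI thread when the event arrives on a worker thread.
private void ServerProc_OutputDataReceived(object sender, DataReceivedEventArgs e)
{
    if (consoleTextBox.InvokeRequired)
    {
        consoleTextBox.BeginInvoke(new Action(() => AppendLine(e.Data)));
    }
    else
    {
        AppendLine(e.Data);
    }
}

private void AppendLine(string line)
{
    consoleTextBox.AppendText(line + Environment.NewLine);
}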
Some background: a process that is running on a thread other than the UI thread is not allowed to access any UI controls directly. Your WPF ServerProc is running on a different thread than your UI, which is why you need the Dispatcher to communicate from the ServerProc thread back to the UI controls on your UI thread.
If your ServerProc -- in WPF or WinForms -- were running in the UI thread, you would not need to surround it with the Dispatcher.Invoke call.
For you, you can put your ServerProc in a BackgroundWorker (MSDN example). Your DoWork method would contain the meat of the code doing the work, and then, depending on how the ServerProc does its work, you might be able to use ProgressChanged to do what both of your sample WPF methods are doing. ProgressChanged receives a parameter in which you can indicate whether an error occurred or data has been received, and inside the handler you can display the appropriate info. Take a look at the MSDN docs, because they have a good example.
What's important to point out is that ProgressChanged runs on the UI thread, so you do NOT need to surround your calls to your UI controls with Invoke; just call them normally. The same goes for RunWorkerCompleted, which may be the other option for displaying data when your ServerProc has finished its job.
Finally, if you actually had to access a UI control from within your thread, you would do something very similar to your WPF code sample. Look at MethodInvoker. Instead of the Dispatcher, you're really just calling Invoke on your main Form.
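And a rough sketch of the BackgroundWorker variant described above; ReadServerOutput and consoleTextBox are assumptions standing in for however your ServerProc actually produces output:

// DoWork runs on a thread-pool thread; ProgressChanged runs back on the UI thread,
// so the text box can be updated there without any Invoke call.
var worker = new BackgroundWorker { WorkerReportsProgress = true };

worker.DoWork += (s, e) =>
{
    foreach (string line in ReadServerOutput())      // assumed blocking reader of the server output
        worker.ReportProgress(0, line);
};

worker.ProgressChanged += (s, e) =>
{
    consoleTextBox.AppendText((string)e.UserState + Environment.NewLine);
};

worker.RunWorkerAsync();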
I have a WPF form with Next and Prev buttons for navigating to the next and previous months of a custom-made calendar. I want to disable these buttons as soon as the user clicks one and enable them again once the next/prev month's data has loaded. That would prevent the user from clicking the buttons repeatedly and the events queuing up and firing slowly one by one.
So far, I have tried putting the calendar data-loading work on a Dispatcher thread and maintaining a flag to indicate whether the page is busy. That doesn't seem to work, and the events still pile up.
I also tried using a command for the button clicks, having the Execute handler load the data and CanExecute decide whether data can be loaded, based on an IsBusy flag. That doesn't work either.
Any pointers?
I suppose you have an event that fires when the month has loaded.
To me the best solution would be to manually disable the buttons in their Click event handlers, and enable them again when the month has loaded.
I see no problem with adding some code-behind in the view if needed.
What type of Command are you using?
A RelayCommand will re-run the CanExecute method when one of the properties changes, whereas a DelegateCommand will not.
You need to manually raise the CanExecuteChanged event on a DelegateCommand when your CanExecute parameters change.
void MyViewModel_PropertyChanged(object sender, PropertyChangedEventArgs e)
{
    if (e.PropertyName == "IsBusy")
    {
        // Might need a cast here if your command is of type ICommand
        LoadDataCommand.RaiseCanExecuteChanged();
    }
}
I fixed this by using a BackgroundWorker to load the view and checking BackgroundWorker.IsBusy in the Next/Prev event handlers.
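A sketch of what that check might look like (calendarWorker, currentMonth and the handler name are assumptions):

// Ignore clicks while the previous load is still running on the BackgroundWorker.
private void NextButton_Click(object sender, RoutedEventArgs e)
{
    if (calendarWorker.IsBusy)
        return;                                       // a month load is already in progress

    calendarWorker.RunWorkerAsync(currentMonth.AddMonths(1));   // assumed: DoWork loads the given month
}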