Approach to loading forms and busy indicator - Silverlight

I am "slowly" moving into Silverlight from ASP.NET and have a question about how to deal with the situation where some code needs to be executed only after several web service calls have completed. For example, when the user clicks on a row in the data grid, a dialog box is shown that allows editing of the record. It contains numerous combo boxes, check boxes, etc. So I need to first load the data for each of the combo boxes, and then, when all of them have finished loading, set the bound entity. Since I am new to this async thing, I was thinking of having some kind of counter that keeps track of how many calls have been dispatched, decrementing it as each one finishes; when it reaches zero I could raise an event that the load has finished and proceed with whatever depends on it. But this seems like a very clunky way of doing it. I am sure many have faced this issue, so how do you do it? If it helps, we use Prism with an MVVM approach, and RIA Services with DTOs.

What you've described is pretty much the way to go. There may be more elegant things you can do with locks and mutexes, but your counter will work. It has the bonus that you can see how many operations are still "in progress" at any one time.
You could dispatch your events sequentially but that would defeat the whole purpose of asynchronous operations.
If you analysed what each part of your UI needs, you might be able to start some operations before all of your async calls have finished. Making sure you start the longest-running operations first might help - but there's no guarantee that the other, shorter operations will finish first. It all depends on what resources are available on both the client and the server at the time each call is made.
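For what it's worth, a minimal sketch of that counter in C# might look like this (the class and member names are my own invention, not anything from Prism or RIA Services; Interlocked keeps the count safe even if callbacks complete on different threads):

```csharp
using System;
using System.Threading;

// Tracks how many async calls are still outstanding and raises an
// event when the last one completes.
public class PendingOperations
{
    private int _count;

    // Raised (on whichever thread completes last) when the count hits zero.
    public event EventHandler AllCompleted;

    public void Register()
    {
        Interlocked.Increment(ref _count);
    }

    public void Complete()
    {
        if (Interlocked.Decrement(ref _count) == 0)
        {
            var handler = AllCompleted;
            if (handler != null) handler(this, EventArgs.Empty);
        }
    }
}
```

You would call Register() once per combo-box load before dispatching it, call Complete() in each load callback, and set the bound entity in the AllCompleted handler (marshalling back to the UI thread if necessary).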

Related

Async/Await in WPF LOB application: is it worth the added complexity?

Consider a basic WPF line-of-business application where the server and clients both run on the local network. The server simply exposes a Web API which the clients (running on desktop computers) call into. The UI would consist of CRUD-style screens with buttons to trigger calls to the server.
In my original version of the app none of these UI operations were asynchronous; the UI would freeze for the duration of the call. But nobody complained about the UI becoming unresponsive, nor did anyone even notice; the calls typically took less than a quarter of a second. On the rare occasions when the network connection was down, the UI would freeze for as long as it took the operation to time out, which was the only time that eyebrows were raised.
Now that I’ve begun implementing async/await for all server calls, it has quickly become apparent that I have a new issue on my hands: the complexities of dealing with re-entrancy and cancellation. Theoretically, now the user can click on any button while a call is already in progress. They can initiate operations that conflict with the pending one. They can inadvertently create invalid application states. They can navigate to a different screen or log out. Now all these previously impossible scenarios have to be accounted for.
It seems like I’ve opened up a Pandora’s Box.
I contrast this with my old non-async design, where the UI would lock up for the duration of the server call and the user simply could not click on anything. This guaranteed that they couldn't foul anything up, and thus allowed the application code to remain at least 10x simpler.
So what is really gained by this modern async-everywhere approach? I bet that if the user compared the sync and async versions side by side, they wouldn't even notice any benefit from the async version; the calls are so quick that the busy indicator doesn't even have time to render.
It just seems like a whole tonne of extra work, complexity, and harder-to-maintain code, for very little benefit. I hear the KISS principle calling…
So what am I missing? In an LOB application scenario, do the benefits of async warrant the extra work?
So what is really gained by this modern async-everywhere approach?
You already know the answer: the primary benefit of async for UI apps is responsiveness. (The primary benefit of async on the server side is scalability, but that doesn't come into play here).
If you don't need responsiveness, then you don't need async. In your scenario, it sounds like you may get away with that approach, and that's fine.
For software that is sold, though - and in particular, mobile applications - the standard is higher. Apps that freeze are so '90s. And mobile platforms really dislike apps that freeze, since you're freezing the entire screen instead of just a window - at least one platform I know of will execute your application and drop network access, and if it freezes, it's automatically rejected from the app store. Freezing simply isn't acceptable for modern applications.
But like I said, for your specific scenario, you may get away without going async.
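As an aside, the re-entrancy part is often tamed with nothing more than a busy flag in the code-behind or view model. A minimal sketch of that idea, where _service and SaveAsync are hypothetical stand-ins for your Web API call:

```csharp
private bool _busy;  // only touched on the UI thread, so no locking needed

private async void SaveButton_Click(object sender, RoutedEventArgs e)
{
    if (_busy) return;            // ignore clicks while a call is in flight
    _busy = true;
    SaveButton.IsEnabled = false; // visual feedback while the call runs
    try
    {
        await _service.SaveAsync();  // hypothetical server call
    }
    finally
    {
        _busy = false;
        SaveButton.IsEnabled = true;
    }
}
```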

How to find out if an application is running too slowly?

I'm currently developing some sort of I/O pipeline system. Simply put: you can run simultaneous workers that do some work, either importing or exporting data. I don't want to limit how many workers the user can run simultaneously, since performance logically depends on what they are doing.
If someone wants to import 30 small images simultaneously, then they should be able to. But I want to equip my application with a monitor that notices when the application, and especially the main thread, is running so slowly that it lags visibly. If that happens, the pipeline should reduce the number of workers, and maybe pause some of them, in order to stabilize the application.
Is there any way to do this? How can I effectively monitor the speed, so I can say it is definitely too slow?
EDIT:
Okay, I may have been a bit unclear; sorry for that. The problem is really the status invokes. The user should be able to see what's going on, so every worker invokes status updates in real time. As you can imagine, this causes massive lag, since there are a few hundred invokes per second. I tackled this by adding a lock that filters reports, so that only one report actually gets invoked every 50 ms. The problem is that it still causes lag when there are about 20-30 workers active. I thought the solution might be to adjust the time lock according to the current CPU load.
But I want to equip my application with a monitor that notices when the application, and especially the main thread, is running so slowly that it lags visibly.
I don't quite understand this. I believe there are two common cases of UI lag:
Running significant amounts of CPU-bound code on the UI thread, or performing synchronous I/O operations there.
Showing way too many items in your UI, like a data grid with many thousand rows. Such UI is not useful to the user, so you should figure out a better way of presenting all those items.
In both cases the lag can be avoided, and you don't need any monitor for that; you need to redesign your application.
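Regarding the status invokes in your edit: rather than each worker throttling its own reports, a common redesign is to have the workers write their latest status into a shared snapshot and let a single UI timer poll it a few times per second. A minimal sketch, assuming .NET 4's ConcurrentDictionary and names of my own invention:

```csharp
// ConcurrentDictionary lives in System.Collections.Concurrent (.NET 4).
private readonly ConcurrentDictionary<int, string> _statuses =
    new ConcurrentDictionary<int, string>();

// Called from any worker thread, as often as it likes -- no Invoke, cheap.
public void ReportStatus(int workerId, string status)
{
    _statuses[workerId] = status;
}

// A UI timer ticking every ~100 ms on the UI thread renders the snapshot;
// the invoke rate is now constant no matter how many workers are running.
private void OnUiTimerTick(object sender, EventArgs e)
{
    foreach (var pair in _statuses)
        UpdateStatusRow(pair.Key, pair.Value);  // hypothetical UI update
}
```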

CQRS Design Pattern Updates

I was looking to implement the CQRS pattern. For the process of updating the read database, is it best to use a Windows service, or to update the view at the time a new record is created in the write database? Is it best to use triggers, or some other process? I've seen a couple of approaches and haven't made up my mind which is the best way to achieve this.
Thanks.
Personally, I love to use messaging to solve these kinds of problems.
Your commands result in events when they are processed, and if you use messaging to publish those events, one or more downstream read services can subscribe to them and process them to update the read models.
The reason messaging is nice in this case is that it allows you to decouple the write and read sides from each other. It also allows you to easily have several subscribers if you find a need for that. Additionally, messaging with a persistent queuing system like MSMQ enables retrying of failed messages. It also means you can take a read model offline (for updates, etc.) and, when it comes back up, it can process all the events waiting in the queue.
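As a rough illustration of the shape this takes (every type and interface below is invented for the example, not a real library API):

```csharp
using System;

// Stand-ins for your queuing system and your read-side storage.
public interface IBus { void Subscribe<T>(Action<T> handler); }
public interface IReadStore { void Upsert(string key, object row); }

// An event published by the write side when a command is processed.
public class OrderPlacedEvent
{
    public Guid OrderId;
    public string CustomerName;
}

// A downstream read service: subscribes to events and denormalizes
// them into rows shaped for the UI's queries.
public class OrderReadModelUpdater
{
    private readonly IReadStore _readStore;

    public OrderReadModelUpdater(IBus bus, IReadStore readStore)
    {
        _readStore = readStore;
        bus.Subscribe<OrderPlacedEvent>(Handle);
    }

    private void Handle(OrderPlacedEvent e)
    {
        _readStore.Upsert(e.OrderId.ToString(),
            new { e.OrderId, Customer = e.CustomerName });
    }
}
```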
I'm no friend of triggers in relational databases, but I imagine they must be pretty hard to test. Triggers would also introduce routing logic where it doesn't belong. Could it also be that if the trigger action fails, the entire write transaction rolls back? Triggers are probably the least beneficial solution.
It depends on how tolerant your application must be with regards to eventual consistency.
If your app has no problem with read data being 5 minutes old, there's no need to denormalize upon every write data change. In that case, a background service that kicks in every n minutes or that kicks in only when the CPU consumption is below a certain threshold, for instance, can be a good solution.
If, on the other hand, your app is time-sensitive, such as in the case of frequently changing statuses, machine monitoring, stock exchange data etc., then you will want to keep the lag as low as possible and denormalize on the spot -- that is, in-process or at least in real-time. So in this case you may choose to run the denormalizers in a constantly-running process or to add them to the chain of event handlers straight in your code.
Your call.

How to make a project started without debugging run like it does in debugging mode?

I'm using managed C++ (Visual Studio 2010) to design a GUI in a form.h file. The GUI acts as a master querying data streamed from a slave card.
When a button is pressed, a function (in the ApplicationIO.cpp file) is called in which two threads are created using the Win32 API (CreateThread(...)): the former handles the data streaming, and the latter parses the data and monitors it on a real-time graph in the GUI.
The project behaves in two different ways: if it starts in debugging mode it is able to update GUI controls such as a textbox (using Invoke) and the graph during data streaming, whereas when it starts without debugging no data appears in the textbox, and data is drawn very slowly on the chart.
Has anyone ever addressed a similar problem? Any suggestions, please?
A pretty classic mistake is to use Control::Begin/Invoke() too often; you'll flood the UI thread with delegate invoke requests. UI updates tend to be expensive, and you can easily get into a state where the message loop doesn't get around to doing its low-priority duties, like painting. This happens easily; invoking more than a thousand times per second is the danger zone, depending on how much time is spent by the delegate targets.
You solve this by sending updates at a realistic rate, one that takes account of what the human eye can actually distinguish. At 25 times per second the updates turn into a blur, so updating any faster is just a waste of CPU cycles. That leaves lots of time for the UI thread to do what it needs to do.
This might still not be sufficiently slow when the updates are expensive. At which point you'll need to skip updates or throttle the worker thread. Note that Invoke() automatically throttles, BeginInvoke() doesn't.
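A minimal sketch of that rate-limited scheme in C# (the same pattern works from C++/CLI; textBox1 and chart1 stand in for your controls): the worker merely records the latest value, and a forms timer repaints at 25 Hz on the UI thread.

```csharp
private volatile string _latestSample;  // overwritten by the worker thread

// System.Windows.Forms.Timer ticks on the UI thread, so no Invoke is needed.
private readonly System.Windows.Forms.Timer _uiTimer =
    new System.Windows.Forms.Timer();

private void InitUiTimer()
{
    _uiTimer.Interval = 40;  // 40 ms ~= 25 updates per second
    _uiTimer.Tick += (s, e) =>
    {
        textBox1.Text = _latestSample ?? "";
        chart1.Invalidate();  // request a repaint of the graph
    };
    _uiTimer.Start();
}

// Worker thread: just overwrite the value -- no marshalling, no flooding.
private void OnSample(string sample)
{
    _latestSample = sample;
}
```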

Asynchronously populate datagridview in Windows Forms application

howzit!
I'm a web developer who has recently been asked to develop a Windows Forms application, so please bear with me (or don't laugh!) if my question is a bit elementary.
After many sessions with my client, we eventually decided on an interface that contains a tab control with 5 tabs. Each tab has a DataGridView that may eventually hold up to 25,000 rows of data (with about 6 columns each). I have successfully managed to bind the grids when the tab page is loaded, and it works fine for a few records, but the UI freezes when I bind the grid to 20,000 dummy records. The "freeze" occurs when I click on the tab itself, and the UI only frees up (and the tab page is rendered) once the bind is complete.
I communicated this to the client and mentioned the option of paging for each grid, but she is adamant about NOT wanting this. My only option then is to look for some asynchronous way of doing this in the background. I don't know much about threading in Windows Forms, but I know that I can use the BackgroundWorker component to achieve this. My only concern, after reading up a bit on it, is that it is ideally used for "long-running" tasks and I/O operations.
My questions:
How does one determine a long-running task?
How does one NOT MISUSE the BackgroundWorker control, ie. is there a general guideline to follow when using this? (I understand that opening/spawning multiple threads may be undesirable in certain instances)
Most importantly: how can I asynchronously bind the DataGridView after the tab page - and all its child controls - have loaded?
Thank you for reading this (ahem) lengthy query, and I highly appreciate any responses/thoughts/directions on this matter!
Cheers!
There's no hard and fast rule for determining what a long-running task is; it's something you have to know as a developer. You have to understand the nature of your data and your architecture. For example, if you expect to fetch some info from a single-user desktop database, from a table that contains a couple dozen rows, you might not even bother showing a wait cursor. But if you're fetching hundreds of rows of data across a network to a shared database server, then you'd better expect it to be a potentially long-running task, handled not simply with a wait cursor but with a thread that frees up your UI for the duration of the fetch. (You're definitely on the right track here.)
BackgroundWorker is a quick and dirty way of handling threading in forms. In your case, it will very much tie the fetching of data to the user interface. It is doable and works fine, but it is certainly not considered "best practice" for threading, OOP, separation of concerns, etc. And if you're worried about abusing the allocation of threads, you might want to read up on the ThreadPool.
A nice alternative is asynchronous threading with the thread pool. To do the data binding, you fetch your data on the thread-pool thread and, when you get your callback, simply assign the result set to the grid view's DataSource property.
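A minimal sketch of that thread-pool approach, assuming the usual System.Threading and System.Data usings (FetchRows is a hypothetical data-access call returning a DataTable; dataGridView1 is your grid):

```csharp
private void tabPage1_Enter(object sender, EventArgs e)
{
    ThreadPool.QueueUserWorkItem(_ =>
    {
        // The long-running fetch runs off the UI thread, so the tab
        // stays responsive while the 20,000+ rows are loaded.
        DataTable rows = FetchRows();  // hypothetical data-access call

        // Marshal back to the UI thread before touching the grid.
        dataGridView1.BeginInvoke((Action)(() =>
        {
            dataGridView1.DataSource = rows;
        }));
    });
}
```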
