I'm using managed C++ (Visual Studio 2010) to design a GUI in a Form.h file. The GUI acts as a master querying data streamed from a slave card.
When a button is pressed, a function (in the ApplicationIO.cpp file) is called in which two threads are created with the Win32 API (CreateThread(...)): the former handles the data streaming, and the latter parses the data and monitors it on a real-time graph in the GUI.
The project behaves in two different ways: when it starts in debugging mode it is able to update GUI controls such as the textbox (using Invoke) and the graph during data streaming; when it starts without debugging, no data appears in the textbox and data are drawn very slowly on the chart.
Has anyone ever addressed a similar problem? Any suggestions, please?
A pretty classic mistake is to use Control::Invoke() or Control::BeginInvoke() too often. You'll flood the UI thread with delegate invoke requests. UI updates tend to be expensive, and you can easily get into a state where the message loop doesn't get around to doing its low-priority duties. Like painting. This happens easily; invoking more than a thousand times per second is the danger zone, depending on how much time is spent by the delegate targets.
You solve this by sending updates at a realistic rate, one that takes advantage of the limitations of the human eye. At 25 updates per second, the updates turn into a blur; updating any faster is just a waste of CPU cycles. That leaves plenty of time for the UI thread to do what it needs to do.
This might still not be slow enough when the updates are expensive. At that point you'll need to skip updates or throttle the worker thread. Note that Invoke() throttles automatically, because it waits for the delegate target to complete; BeginInvoke() doesn't.
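As a rough illustration of the fixed-rate approach (a minimal C# sketch; the same idea carries straight over to C++/CLI), the worker publishes only the latest value into a shared field, and a forms timer on the UI thread picks it up about 25 times per second. The textBox1 field and the dummy worker loop are placeholders:

```csharp
using System;
using System.Threading;
using System.Windows.Forms;

public class StreamForm : Form
{
    private readonly TextBox textBox1 = new TextBox();  // stands in for your real control
    private string latest;   // most recent value published by the worker

    public StreamForm()
    {
        Controls.Add(textBox1);

        // UI-thread timer: ~25 updates per second is all the eye can use.
        var uiTimer = new System.Windows.Forms.Timer { Interval = 40 };
        uiTimer.Tick += (s, e) =>
        {
            string value = Interlocked.Exchange(ref latest, null);
            if (value != null)
                textBox1.Text = value;   // one cheap update per tick, no Invoke flood
        };
        uiTimer.Start();

        // Worker thread: publish as fast as data arrives, never touch the UI directly.
        var worker = new Thread(() =>
        {
            int sample = 0;
            while (true)
            {
                Interlocked.Exchange(ref latest, (++sample).ToString());
                Thread.Sleep(1);   // stands in for the real streaming loop
            }
        });
        worker.IsBackground = true;
        worker.Start();
    }
}
```

Because only the latest value is kept, a burst of thousands of samples per second still costs the UI exactly one Text assignment per tick.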
Related
I'm currently developing a sort of I/O pipeline system. Simply put: you can run simultaneous workers which do some work, either importing or exporting data. I don't want to limit how many workers run simultaneously, since performance logically depends on what they are doing.
If someone wants to import 30 small images simultaneously, they should be able to. But I want to equip my application with a monitor which notices when the application, especially the main thread, runs so slowly that it lags visibly. If this happens, the pipeline should reduce the number of workers and maybe pause some of them in order to stabilize the application.
Is there any way to do this? How can I effectively monitor the speed, so that I can say it is definitely too slow?
EDIT:
Okay, I may have been a bit unclear, sorry for that. The problem is really the status invokes. The user should be able to see what's going on, so every worker invokes status updates, and this happens in real time. As you can imagine, this causes massive lag, as there are a few hundred invokes per second. I tackled this problem by adding a lock which filters reports so that only one report every 50 ms actually gets invoked (see the sketch below). The problem is that it still lags when about 20-30 workers are active. My next thought was to adjust the time lock based on the current CPU load.
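For reference, the 50 ms time lock described above can be as simple as a timestamp check behind a lock (a sketch with illustrative names; ReportStatus would be called by every worker):

```csharp
using System;
using System.Windows.Forms;

// Forwards at most one status report per 50 ms; the rest are dropped.
class StatusReporter
{
    private static readonly long MinInterval = TimeSpan.FromMilliseconds(50).Ticks;
    private readonly Control target;   // the control whose thread we invoke on
    private readonly object gate = new object();
    private long lastTicks;

    public StatusReporter(Control target) { this.target = target; }

    // Called by every worker, from any thread.
    public void ReportStatus(string status)
    {
        lock (gate)
        {
            long now = DateTime.UtcNow.Ticks;
            if (now - lastTicks < MinInterval)
                return;                // filtered out: no invoke for this report
            lastTicks = now;
        }
        target.BeginInvoke((Action)(() => target.Text = status));
    }
}
```

One caveat of pure dropping is that the final report before a worker goes quiet can be lost, which is worth keeping in mind.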
But I want to equip my application with a monitor which notices when the application, especially the main thread, runs so slowly that it lags visibly.
I don't quite understand this. I believe there are two common causes of UI lag:
An application that runs significant amounts of CPU-bound code on the UI thread, or that performs synchronous I/O operations there.
Showing far too many items in the UI, like a data grid with many thousands of rows. Such a UI is not useful to the user anyway, so you should figure out a better way of presenting all those items.
In both cases the lag can be avoided, and you don't need any monitor for that; you need to redesign your application.
I am creating a Silverlight Pivot collection with 31K items (and images); however, when I use the DeepZoomTools library to create the Deep Zoom images, it takes hours and hours (and hasn't actually completed even once).
Is there a multi-threaded way or distributed way in which collections could be created?
It is a time-intensive process, to be sure. Do your individual data points change often? What we have found in nearly all of our projects is that the image for an individual item almost never changes. This allows you to streamline the process a little bit.
What I do in a case like this is to process the entire dataset once, initially. Then, the next time I run the process, I only update the images that have been added or modified. As I said, in almost all of my cases this solved the problem you are running into. In fact, when it works, I plug my card generation into whatever business applications are running and generate or modify a card whenever data is added or changed in the system. This removes the need for batch processing altogether after your initial build.
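A rough sketch of that incremental pass might look like this (the directory layout and the CreateCardImage placeholder are assumptions; substitute your own DeepZoom generation step):

```csharp
using System;
using System.IO;

class IncrementalBuilder
{
    // Regenerate only items whose source is newer than the generated output.
    public static void Build(string sourceDir, string outputDir)
    {
        foreach (string source in Directory.GetFiles(sourceDir, "*.jpg"))
        {
            string output = Path.Combine(outputDir, Path.GetFileName(source));
            if (File.Exists(output) &&
                File.GetLastWriteTimeUtc(output) >= File.GetLastWriteTimeUtc(source))
                continue;                     // unchanged since the last run: skip it

            CreateCardImage(source, output);  // placeholder for the expensive DeepZoom step
        }
    }

    static void CreateCardImage(string source, string output)
    {
        // ... image generation via DeepZoomTools would go here ...
    }
}
```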
If that will not work for you, take a look at the code for PAuthor. It is using DeepZoomTools and does so in a multi-threaded way. You should be able to find the code you are looking for there. PAuthor - CodePlex
Let me know if you have more specifics about your needs and we can see if we can come up with something.
I am "slowly" moving into Silverlight from asp.net and have a question about how to deal with situation where some code needs to be executed after web service calls have completed. For example, when user clicks on the row in the data grid a dialog box is shown that allows editing of the record. It contains numerous combo boxes, check boxes etc. So I need to first load data for each of the combo boxes, and than when all finished loading, I need to set the bound entity. Since I am new to this async thing, I was thinking to have some kind of counter that will keep track on how many calls have been dispatched, and as they finish reduce them by one, until it is zero, at which point I could raise an event that load has finished, and I could proceed with what ever is dependent on this. But this seems very clunky way of doing it. I am sure many have faced this issue, so how do you do this. If it helps, we use Prism with MVVM approach and Ria Services with Dtos.
What you've described is pretty much the way to go. There may be more elegant things you can do with locks and mutexes, but your counter will work. It has the bonus that you can see how many operations are still "in progress" at any one time.
You could dispatch your events sequentially but that would defeat the whole purpose of asynchronous operations.
If you analysed what each part of your UI needs, you might be able to do some operations before all of your async events have finished. Making sure you start the longest-running operations first might help, but there's no guarantee that the shorter operations will finish first; it all depends on what resources are available on both the client and the server at the time each call is made.
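For what it's worth, a minimal version of the counter (illustrative names; Interlocked is used so a callback from any thread can decrement safely) could look like this:

```csharp
using System;
using System.Threading;

// Tracks outstanding async calls; raises AllLoaded when the count reaches zero.
class LoadCoordinator
{
    private int pending;
    public event Action AllLoaded;

    // Call once for each web service call you dispatch.
    public void Begin() { Interlocked.Increment(ref pending); }

    // Call from each call's Completed callback.
    public void End()
    {
        if (Interlocked.Decrement(ref pending) == 0)
        {
            var handler = AllLoaded;
            if (handler != null) handler();   // everything finished loading
        }
    }
}
```

Call Begin() before dispatching each combo box load and End() in each completion callback; when AllLoaded fires, set the bound entity.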
I am building a real-time multi-threaded application in WPF, but I am having difficulty updating the UI.
I have a background worker thread that contains the logic which determines what trades to send into the market. When a valid trade is sent to the market, I receive status updates on these trades via events in my main application window. I receive real-time price updates through other events.
Through these events I update the UI. It appears that I receive events so rapidly throughout the application that the UI can't keep up with the speed at which they arrive, causing the UI to update slowly or not at all. Essentially the UI freezes. After all events have fired, the UI slowly becomes responsive again; once it is fully responsive, it shows the data I am expecting.
My question is: how do I get the UI to update in real time, as fast as I receive events? I have been struggling with this for a while now, so any help would be appreciated.
Thanks in advance!
Instead of having the worker thread push the updates to the UI thread via events, consider having the UI thread pull (or poll) them periodically. The push method is fine in a lot of situations, but it has two major disadvantages that are working against you.
There is an expensive marshaling operation somewhere that transfers execution of a method to the UI thread so the updates are performed safely (at least there should be).
The worker thread gets to dictate how often the UI should update and, by implication, how much work it should perform. It can easily overwhelm the message pump.
I propose using a shared queue in which the worker thread will enqueue a data structure containing the update and the UI thread will dequeue and process it. You can have the UI thread poll the queue at a strategically chosen interval so that it never gets bogged down. The queue acts as the buffer instead of the UI message pump, shrinking and growing as the amount of updates ebbs and flows. Here is a simple diagram of what I am talking about.
[Worker-Thread] -> [Queue] -> [UI-Thread]
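A minimal WPF sketch of this (the PriceUpdate type and the applyUpdate callback are placeholders for whatever your events carry) could look like this:

```csharp
using System;
using System.Collections.Concurrent;
using System.Windows.Threading;

// Assumed update type; substitute whatever your events deliver.
public class PriceUpdate
{
    public string Symbol;
    public double Price;
}

// The worker enqueues; the UI thread drains on a timer.
public class UpdatePump
{
    private readonly ConcurrentQueue<PriceUpdate> queue = new ConcurrentQueue<PriceUpdate>();

    // Called from your event handlers on the worker threads; cheap and lock-free.
    public void Post(PriceUpdate update)
    {
        queue.Enqueue(update);
    }

    // Call once from the UI thread. Drains in bounded batches so painting never starves.
    public void Start(Action<PriceUpdate> applyUpdate)
    {
        var timer = new DispatcherTimer { Interval = TimeSpan.FromMilliseconds(50) };
        timer.Tick += (s, e) =>
        {
            PriceUpdate update;
            int drained = 0;
            while (drained++ < 500 && queue.TryDequeue(out update))
                applyUpdate(update);   // runs on the UI thread, no marshaling needed
        };
        timer.Start();
    }
}
```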
I would start with the simple queue approach first, but you could take this to the next logical step: a pipeline in which three threads participate in the flow of updates. The worker thread enqueues updates and the UI thread dequeues them as before, but a new thread is added to the mix that manages the number of updates waiting in the queue and keeps it at a manageable size. It does this by forwarding all updates while the queue stays small, but switching into a safe mode once it grows, discarding the updates you can live without or combining many into one if a reasonable merge operation can be defined. Here is a simple diagram of how this pattern might work.
[Worker-Thread] -> [Queue-1] -> [Pipeline-Thread] -> [Queue-2] -> [UI-Thread]
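If you do go this far, the merge step depends entirely on your data; one reasonable merge for market data is to keep only the latest tick per symbol. A sketch of such a pipeline thread (reusing the hypothetical PriceUpdate type from the sketch above) might look like this:

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;

// Pipeline thread: drains Queue-1, keeps only the latest tick per symbol,
// and forwards the survivors to Queue-2 for the UI thread to poll.
public static class Conflater
{
    public static void Run(BlockingCollection<PriceUpdate> queue1,
                           ConcurrentQueue<PriceUpdate> queue2)
    {
        var thread = new Thread(() =>
        {
            foreach (var first in queue1.GetConsumingEnumerable())
            {
                var latest = new Dictionary<string, PriceUpdate>();
                latest[first.Symbol] = first;

                PriceUpdate next;
                while (queue1.TryTake(out next))     // grab everything already waiting
                    latest[next.Symbol] = next;      // a newer tick replaces an older one

                foreach (var survivor in latest.Values)
                    queue2.Enqueue(survivor);
            }
        });
        thread.IsBackground = true;
        thread.Start();
    }
}
```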
Again, start with the simple one-queue approach. If you need more control, then move to the pipeline pattern. I have used both successfully.
You probably need to coalesce received events such that not every tick results in a GUI update. Batch them up if your GUI is already updating, and have the GUI process the next batch only when it's ready. If the feed is high-volume (frequently the case with active trade data updates) you will not be able to create a GUI that reflects every individual tick as its own self-contained refresh trigger.
howzit!
I'm a web developer who has recently been asked to develop a Windows Forms application, so please bear with me (or don't laugh!) if my question is a bit elementary.
After many sessions with my client, we eventually decided on an interface that contains a tab control with 5 tabs. Each tab has a DataGridView that may eventually hold up to 25,000 rows of data (with about 6 columns each). I have successfully managed to bind the grids when the tab page is loaded, and it works fine for a few records, but the UI froze when I bound a grid to 20,000 dummy records. The freeze occurs when I click on the tab itself, and the UI only frees up (and the tab page is rendered) once the bind is complete.
I communicated this to the client and mentioned the option of paging each grid, but she is adamant that she does NOT want this. My only option then is to look for some asynchronous way of doing the binding in the background. I don't know much about threading in Windows Forms, but I know that I can use the BackgroundWorker control to achieve this. My only concern, after reading up a bit on it, is that it is ideally meant for "long-running" tasks and I/O operations.
My questions:
How does one determine a long-running task?
How does one NOT MISUSE the BackgroundWorker control, i.e. is there a general guideline to follow when using it? (I understand that spawning multiple threads may be undesirable in certain instances.)
Most importantly: how can I achieve (asynchronously) binding the DataGridView after the tab page - and all its child controls - has loaded?
Thank you for reading this (ahem) lengthy query, and I highly appreciate any responses/thoughts/directions on this matter!
Cheers!
There's no hard and fast rule for determining what a long-running task is. It's something you have to know as a developer; you have to understand the nature of your data and your architecture. For example, if you expect to fetch some info from a single-user desktop database, from a table that contains a couple of dozen rows, you might not even bother showing a wait cursor. But if you're fetching hundreds of rows across a network from a shared database server, then you'd better expect it to be a potentially long-running task, handled not simply with a wait cursor but with a thread that frees up your UI for the duration of the fetch. (You're definitely on the right track here.)
BackgroundWorker is a quick and dirty way of handling threading in forms. In your case it will tie the fetching of data rather tightly to the user interface. It is doable and works fine, but it certainly is not considered "best practice" for threading, OOP, separation of concerns, etc. And if you're worried about abusing the allocation of threads, you might want to read up on the ThreadPool.
Here's a nice example of using asynchronous threading with the thread pool. To do the data binding, you fetch your data in the thread, and when you get your callback, simply assign the result set to the grid view's DataSource property.
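A hedged sketch of that last step (fetchRows stands in for your real data access code) would be a thread-pool fetch followed by a single BeginInvoke to bind:

```csharp
using System;
using System.Data;
using System.Threading;
using System.Windows.Forms;

static class GridLoader
{
    // Fetch off the UI thread, then marshal a single call back to bind the result.
    public static void LoadAsync(DataGridView grid, Func<DataTable> fetchRows)
    {
        ThreadPool.QueueUserWorkItem(state =>
        {
            DataTable rows = fetchRows();   // the long-running part, off the UI thread
            grid.BeginInvoke((Action)(() =>
            {
                grid.DataSource = rows;     // one cheap assignment on the UI thread
            }));
        });
    }
}
```

Calling GridLoader.LoadAsync(dataGridView1, FetchTabData) from each tab page's Enter event would let the tab render immediately and fill in once the fetch completes.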