My question is simple: what is the correct point to load data from a service/database in a page or view?
I am using the OnAppearing method:

protected override void OnAppearing()
{
    base.OnAppearing();

    _ = Task.Run(async () =>
    {
        await Task.Delay(500);
        await LoadData();
    });
}

private async Task LoadData()
{
    this.Items = await this.LoadDataFromDatabase();
}
Definition of items:
[ObservableProperty]
private ObservableCollection<Product>? _items = null;
The database returns a maximum of 20 items.
I use a ScrollView with a HorizontalWrapLayout and a DataTemplate for the items.
DataTemplate:
Border
-- Grid with rows
--- Label
--- Label
--- Label
My problem is that the application is not smooth; the flyout menu stutters. It seems like the UI waits for the data to load. Loading the data from the database takes about 100 ms, but displaying the data (20 items) on the page takes another 3-4 seconds. The user experience is not smooth even in Release mode. The result is the same on phones, tablets, virtual devices, and physical devices.
I also measured the speed of HorizontalWrapLayout:
Measure: ~10ms
ArrangeChildren: ~15ms
I see many of these messages in the Android log:
Skipped 363 frames! The application may be doing too much work on its main thread.
What can be wrong? Is it possible that this is a MAUI problem? I load the data, and the display happens automatically via the bindable Items property. What is the correct place to load data (the view model's constructor, OnAppearing, or somewhere else)?
I use .NET 7.0 and the latest version of MAUI (7.0.52)
Visual Studio 2022 version 17.5 Preview 2
Android API 33, Android v13 (virtual and physical devices)
PC: Intel(R) Core(TM) i9-10850K CPU @ 3.60 GHz, 32 GB RAM
I am providing this detail because my assumption is that the problem is not caused by weak hardware.
Thank you for any advice, recommendations, suggestions, etc.
Edit 1:
Now I have noticed a very interesting thing. HorizontalWrapLayoutManager itself is very fast (I listed the times above). However, Measure and ArrangeChildren are called 128 times to display the data: (~10 ms + ~15 ms) * 128 = 3200 ms. Also interesting: the widthConstraint parameter passed to the manager is always the same, but the heightConstraint alternates between 1493 and Infinity. Is there any logic to why the manager is called 128 times to display 8 items?
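For reference, the counting can be done with a small decorator around the manager that counts and times each call. This is an illustrative sketch rather than my exact code; it assumes only the standard Microsoft.Maui.Layouts.ILayoutManager interface:

using Microsoft.Maui.Graphics;
using Microsoft.Maui.Layouts;

// Illustrative decorator: wraps any ILayoutManager and logs how often
// (and how long) Measure/ArrangeChildren are called.
public class CountingLayoutManager : ILayoutManager
{
    private readonly ILayoutManager _inner;
    private int _measureCalls;
    private int _arrangeCalls;

    public CountingLayoutManager(ILayoutManager inner) => _inner = inner;

    public Size Measure(double widthConstraint, double heightConstraint)
    {
        var sw = System.Diagnostics.Stopwatch.StartNew();
        var result = _inner.Measure(widthConstraint, heightConstraint);
        System.Diagnostics.Debug.WriteLine(
            $"Measure #{++_measureCalls} took {sw.ElapsedMilliseconds} ms " +
            $"(w={widthConstraint}, h={heightConstraint})");
        return result;
    }

    public Size ArrangeChildren(Rect bounds)
    {
        var sw = System.Diagnostics.Stopwatch.StartNew();
        var result = _inner.ArrangeChildren(bounds);
        System.Diagnostics.Debug.WriteLine(
            $"ArrangeChildren #{++_arrangeCalls} took {sw.ElapsedMilliseconds} ms ({bounds})");
        return result;
    }
}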
Edit 2:
I reduced my XAML step by step. After removing each parent view, the manager's recalculation is called fewer times. With almost everything removed, the recalculation in the manager is called only 4 times. It also seems that a parent Grid with an Auto-sized Column or Row increases the number of recalculations. I understand the logic of the recalculation, but in the end it makes displaying the data very slow. According to my logs, the repeated calls to Measure and Arrange on the views inside the manager cause the final delay. Wouldn't the solution be to cache the results and call these methods only once?
First, I want to point out that I see this:
await Task.Delay(500);
in a question related to performance and speed. I do not know what you plan to do with this or why you are doing it, but whatever the reason, please remove that line.
Second, if you are targeting Android, please make sure you test your performance in Release; it may differ by orders of magnitude from the performance you see in Debug.
If this does not help, try a CollectionView or a DataGrid and see if it gets better.
Edit: Just to add, you are doing it correctly. I often use OnAppearing for loading and displaying data (as a command in my ViewModels).
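For illustration, a minimal sketch of that pattern with CommunityToolkit.Mvvm (ProductsViewModel and LoadDataFromDatabase are placeholders standing in for your own types):

using System.Collections.ObjectModel;
using System.Threading.Tasks;
using CommunityToolkit.Mvvm.ComponentModel;
using CommunityToolkit.Mvvm.Input;

public partial class ProductsViewModel : ObservableObject
{
    [ObservableProperty]
    private ObservableCollection<Product>? _items;

    // [RelayCommand] generates LoadDataCommand from this method.
    [RelayCommand]
    private async Task LoadDataAsync()
    {
        var products = await LoadDataFromDatabase(); // your existing query
        Items = new ObservableCollection<Product>(products);
    }
}

// In the page, kick off the command when the page appears:
protected override void OnAppearing()
{
    base.OnAppearing();

    if (BindingContext is ProductsViewModel vm)
        vm.LoadDataCommand.Execute(null);
}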
My summary:
I use HorizontalWrapLayout with a custom implementation of HorizontalWrapLayoutManager.
I found out that the manager itself is very fast; however, ArrangeChildren and Measure are called many times. The number of calls depends on the number of parent views and the layout of the screen, and Auto sizing in parent Grids also increases it. The repeated recalculation and calls to Measure/ArrangeChildren in HorizontalWrapLayoutManager cause a large delay when displaying data in HorizontalWrapLayout.
I solved the problem by updating HorizontalWrapLayoutManager so that it caches the data returned from Measure and Arrange for each child. The first call performs a full recalculation. On subsequent calls, if the items, padding, spacing, widthConstraint, and other essential inputs have not changed, the manager no longer calls Measure and Arrange for each view but returns the data from the cache.
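A simplified sketch of the idea (it ignores Padding and Spacing and keys the cache only on widthConstraint and the child count; the real manager has to track every input it depends on):

using System;
using System.Collections.Generic;
using Microsoft.Maui;
using Microsoft.Maui.Graphics;
using Microsoft.Maui.Layouts;

public class HorizontalWrapLayoutManager : ILayoutManager
{
    private readonly ILayout _layout;

    // Cached results of the last full pass, plus the inputs they were computed for.
    private double _cachedWidthConstraint = double.NaN;
    private int _cachedChildCount = -1;
    private Size _cachedSize;
    private readonly Dictionary<IView, Size> _childSizes = new();

    public HorizontalWrapLayoutManager(ILayout layout) => _layout = layout;

    public Size Measure(double widthConstraint, double heightConstraint)
    {
        // Reuse the cached result when nothing relevant has changed,
        // instead of re-measuring every child on each repeated call.
        if (widthConstraint == _cachedWidthConstraint && _layout.Count == _cachedChildCount)
            return _cachedSize;

        _childSizes.Clear();
        double x = 0, rowHeight = 0, totalHeight = 0, maxRowWidth = 0;

        foreach (var child in _layout)
        {
            var size = child.Measure(double.PositiveInfinity, double.PositiveInfinity);
            _childSizes[child] = size;

            if (x + size.Width > widthConstraint) // wrap to the next row
            {
                totalHeight += rowHeight;
                x = 0;
                rowHeight = 0;
            }

            x += size.Width;
            rowHeight = Math.Max(rowHeight, size.Height);
            maxRowWidth = Math.Max(maxRowWidth, x);
        }

        totalHeight += rowHeight;

        _cachedWidthConstraint = widthConstraint;
        _cachedChildCount = _layout.Count;
        _cachedSize = new Size(maxRowWidth, totalHeight);
        return _cachedSize;
    }

    public Size ArrangeChildren(Rect bounds)
    {
        double x = bounds.X, y = bounds.Y, rowHeight = 0;

        foreach (var child in _layout)
        {
            // Use the size cached during Measure instead of measuring again.
            if (!_childSizes.TryGetValue(child, out var size))
                size = child.Measure(double.PositiveInfinity, double.PositiveInfinity);

            if (x + size.Width > bounds.Right) // wrap to the next row
            {
                x = bounds.X;
                y += rowHeight;
                rowHeight = 0;
            }

            child.Arrange(new Rect(x, y, size.Width, size.Height));
            x += size.Width;
            rowHeight = Math.Max(rowHeight, size.Height);
        }

        return new Size(bounds.Width, y + rowHeight - bounds.Y);
    }
}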
Small test:
HorizontalWrapLayoutManager without cache = 128 calls => ~3100ms
HorizontalWrapLayoutManager with cache = 128 calls => ~100ms
So far, I have not found any problems caused by the cache. In the future, however, the cache may also need to be invalidated when other properties of HorizontalWrapLayout change.
Related
I've been in the process of rewriting an old AngularJS app in React (actually it's using preact, chosen by the developer who started this project initially).
This app handles large, deeply nested objects that get displayed via Material UI accordions and tables. The data is more WIDE than deep, but at any rate, React has trouble rendering it all without this RangeError.
I've been dancing with this issue for a while now and have avoided it by strategically managing accordions and not rendering data for accordions that are not open.
I've commonly seen this reported as a recursion issue, and I've carefully reviewed the code to confirm there is no recursion involved. Plenty of iteration, but no recursion.
Please note the stack trace: it's hitting this in the flush() function, which is not in our application code but in the Chrome debugger VM. I've set breakpoints, and it appears to be something related to DOM operations, as the objects being flushed are React elements. Here's a code snippet from the point where the error is hit:
function flush(commit) {
  const {
    rootId,
    unmountIds,
    operations,
    strings,
    stats
  } = commit;

  if (unmountIds.length === 0 && operations.length === 0) return;
  const msg = [rootId, ...flushTable(strings)];
  if (unmountIds.length > 0) {
    msg.push(MsgTypes.REMOVE_VNODE, unmountIds.length, ...unmountIds);
  }
  msg.push(...operations); // <--- error occurs here when operations.length is too large
And the stack trace logged when the error occurs:
VM12639:1240 Uncaught (in promise) RangeError: Maximum call stack size exceeded
at flush (<anonymous>:1240:8)
at Object.onCommit (<anonymous>:3409:19)
at o._commit.o.__c (<anonymous>:3678:15)
at QRet.Y.options.__c (index.js:76:17)
at Y (index.js:265:23)
at component.js:141:3
at Array.some (<anonymous>)
at m (component.js:220:9)
The error occurs when operations is too large. Normally it will be anywhere from a dozen or so in length up to maybe 3000, depending on what's going on, but when I try to load our page displaying the wide/deep nested object this number is more like 150000, which apparently chokes the spread operator.
My sense is that this type of app is a challenge for React. I cannot think of another example of a React app that displays data the way we do with this. If anyone here has experience with this sort of dataset and can offer suggestions as to how to make this work, please share.
My guess is I'm going to need to somehow break this object up into smaller chunks that represent smaller updates, but I'm posting here in case there's something I can learn.
It looks similar to this open issue on the React repo, only it happens in a different place (also in dev tools). It might be worth reporting your issue there too. So React is probably otherwise "fine" rendering this number of elements, though you'll inevitably get slow performance.
Likely the app is just displaying too much data, or doing it inefficiently.
but when I try to load our page displaying the wide/deep nested object this number is more like 150000, ...
150000 DOM operations is a really high number. Either your app really does display a whole lot of elements, or the old AngularJS app had too many wrapper elements and these were preserved. Since you mention it concerns data tables, it's probably the former. In any case, complex applications always need some platform-specific optimization.
If you can give an idea about the intended use case, or even better, share (parts of) the code, that would help others to give more targeted advice. Are the 150k operations close to what would happen in real world usage, or is it just a very inflated number for stress testing? Do you see any other performance regressions, compared to the Angular app, with very complex objects? How many tables are on the screen at a time?
A few hundred visible elements on the screen already get quite cramped. So where are all these extra operations coming from? Either you're loading a super long page of which a user can only see a few percent at a time, or the HTML structure is unnecessarily deeply nested.
Suggested performance improvements
I wouldn't say React isn't suitable for really large amounts of data, but you do need to watch out for some things yourself. React is only your vehicle to apply changes to the DOM. Putting a large amount of elements in the DOM is always going to lead to decreased performance, and is something you usually want to avoid.
In this case you could consider whether it's necessary to display all the table's data, which is probably the bulk of the operations. Using pagination would resolve the problem, and might even make it more user-friendly.
If that's not an option, you can maybe use a library like react-lazyload to show/hide the items as they enter/exit the visible part of the table. To achieve this, use its unmountIfInvisible prop. You can then replace a complex data row with a single element of the same height. The latter is important to preserve the scroll height.
<LazyLoad
  height={100}
  offset={100}
  unmountIfInvisible
  placeholder={<tr height={100} />}
>
  <MyComplexDataRow />
</LazyLoad>
This way your data table never consists of many more complex elements than can be seen in the viewport. You probably need to tune the offset a bit so that the content is always ready in time as it's being scrolled.
I have a function that imports data from a database and populates a DataGridView (WinForms) within a WPF application.
Originally I had the application call the function on its main thread. Performance was around 10 seconds per 1,000 rows, which is significantly longer than I would like. However, because of how the application is used, it doesn't really matter how long it takes, so I wasn't too worried about improving the speed.
What I was trying to do was make the rows populate as they come in: use a BackgroundWorker to retrieve the rows, then invoke from the worker to add the rows as they arrive, plus provide a progress bar.
I was fiddling around and decided to just invoke the entire method, and now the time it takes to import the datarows is more like 1 second per 1,000 rows.
My code looks something like this:
private void Window_Loaded(object sender, RoutedEventArgs e)
{
    dataPopulater = new BackgroundWorker(); // field declared in the class
    dataPopulater.WorkerReportsProgress = true;
    dataPopulater.DoWork += new DoWorkEventHandler(dataPopulater_DoWorkReadSavedRecords);
    dataPopulater.ProgressChanged += new ProgressChangedEventHandler(dataPopulater_ProgressChanged);
    dataPopulater.RunWorkerCompleted += dataPopulater_RunWorkerCompleted;
    dataPopulater.RunWorkerAsync(startUpRead);
}

private void dataPopulater_DoWorkReadSavedRecords(object sender, DoWorkEventArgs e)
{
    this.Dispatcher.BeginInvoke((Action)delegate()
    {
        // Import method...
    });
}
Any ideas on why I would see such a spike in performance? It was my understanding that this.Dispatcher.BeginInvoke((Action)delegate() {}); runs whatever follows on the main thread, which is what I was previously doing with the 10 sec/1,000 rows performance. Is creating a BackgroundWorker allocating more processing power/cores or something of that sort?
I just have no idea why this would happen.
Based on your comments, the previous version of the code was adding rows to the data grid inside the update loop. Adding a row to a grid control has a lot of overhead, mostly from the control repainting itself.
Calling BeginInvoke on the Dispatcher doesn't actually do the work immediately; it just queues the work on the UI thread and returns right away. This small change allows your update logic to run at full speed on a different thread from the UI updates. You essentially separated the logic from the presentation, allowing each to run asynchronously with respect to the other.
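For illustration, a sketch of that separation (ReadRowsInBatches is a hypothetical helper standing in for your import query, and dataGridView/progressBar stand in for your controls): the slow reads stay in DoWork on the worker thread, and only the cheap grid additions are marshalled back through ProgressChanged:

private void dataPopulater_DoWorkReadSavedRecords(object sender, DoWorkEventArgs e)
{
    // Runs on the worker thread: keep the slow database reads here.
    var worker = (BackgroundWorker)sender;
    int rowsDone = 0;

    foreach (List<object[]> batch in ReadRowsInBatches(100)) // hypothetical helper
    {
        rowsDone += batch.Count;
        worker.ReportProgress(rowsDone, batch); // marshals to the UI thread
    }
}

private void dataPopulater_ProgressChanged(object sender, ProgressChangedEventArgs e)
{
    // Runs on the UI thread: only the cheap grid updates happen here.
    var batch = (List<object[]>)e.UserState;

    foreach (object[] row in batch)
        dataGridView.Rows.Add(row);

    progressBar.Value = Math.Min(e.ProgressPercentage, progressBar.Maximum);
}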
I am working on a project where we were asked to "patch" (they don't want a lot of time spent on development as they soon will replace the system) a system implemented under ExtJS 4.1.0.
That system is used over a very slow and unstable network connection, so sometimes the stores don't get the expected data.
First two things that come to my mind as patches are:
1. Every time a store is loaded for the first time, wait 5 seconds and try again. Most times, a page refresh fixes the problem of stores not loading.
2. Somehow detect that no data was received after loading a store and try to get it again.
These patches should be executed only once, to avoid infinite loops or unnecessary recursion, given that it's OK if a store sometimes gets no data back.
I don't like this kind of solution, but it was requested by the client.
This link should help with your question.
One of the posters suggests adding the code below in an overrides.js file, which is loaded in between the ExtJS source code and your application's code.
Ext.util.Observable.observe(Ext.data.Connection);

Ext.data.Connection.on('requestexception', function (dataconn, response, options) {
    if (response.responseText != null) {
        window.document.body.innerHTML = response.responseText;
    }
});
Using this example, on any error, instead of echoing the error as shown, you could log the error details for later debugging and try the load again. I would suggest adding some additional logic so that it only retries a certain number of times; otherwise it could run indefinitely while the browser window is open, more than likely crash the browser, and put additional load on your server.
Obviously the root cause of the issue is not the code itself but your slow connection. I'd try to address that rather than anything else.
So yes, apparently it is possible to build a long grid with lots of rows in Angular. But then a problem comes with data updates.
You see, if I just get all the rows (let's say 10,000) and render them in my grid, that works. It just takes a few seconds initially. But:
a) I don't have all the data up front
b) I need the grid to be responsive immediately.
I can deal with that by throwing in only, say, 100 rows at the beginning and then slowly adding data as it becomes available. And that turns out to be the problem: every time you push new rows into $scope.data, it blocks the UI. So I need to be smart about these updates.
Maybe I should set an interval and update the data only every few seconds? That doesn't seem to work.
Maybe I should somehow watch for mouse movements and, once the mouse stops moving, start/resume adding rows, then cease adding rows once movement is detected and wait for another chance? But what if the user never stops moving the mouse (say, some sort of psycho)?
Experimenting with _.throttle and _.debounce didn't get me anywhere.
You guys have any ideas?
UPD: here's a crazy one: what if, instead of waiting for Angular to update the DOM, I create the entire DOM structure in memory right before the digest cycle (with no data) and then insert that HTML chunk (that would be faster, right?), and after that let Angular do its magic so the data appears. Would that work?
You are going to run into performance issues when something changes, even if you can get all those rows rendered to the DOM. And your user probably isn't going to scroll through 10,000 rows anyway. I would just use pagination, e.g.:
<div ng-repeat="item in items | startFrom:currentPage*itemsPerPage | limitTo:itemsPerPage"></div>
If you really want to have everything on one page, you could load the rows as the user scrolls. If you are interested in that solution, check out http://binarymuse.github.io/ngInfiniteScroll/
One of the things I've noticed, which I stupidly used to ignore: the container has to have a fixed height. That makes updates significantly faster, although technically it doesn't solve the problem entirely.
I'm writing a Silverlight app that queries a web service to populate a tree control. Each element will have at least 2 levels of children, so something like this:
a
+-b
  +-c
d
+-g
  +-h
e
+-i
  +-j
f
+-k
  +-l
The web service API is such that I can only get one level of child nodes at a time, so the first trip, I can get a,d,e,f. To get b,g,i,k, I have to make 4 trips. Similarly, I have to make 4 more trips to get c,h,j,l. (The service does actually allow me to get all the nodes in one trip, but it doesn't give me parent-child relationships along with it :-()
My question is this: should I make the user wait for a while up front while I get all the nodes for the tree view, or should I just get the top few nodes, and get the other nodes on-demand, or in a background task? Also, the nodes can change asynchronously, so if I get all the nodes up front, I'll need a "refresh" button for the treeview, and if I do it on demand, I'll have to have a caching strategy.
Which is best for the user?
A compromise: load the first level up front, then load the remaining items in the background, overridden by on-demand loading as required. If you load the nodes breadth-first (e.g. a, d, e, f then b, g, i, k) rather than depth-first (e.g. a, d, e, f followed by b, c), you can redirect the loading to focus on the most recently expanded node.
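A sketch of how that breadth-first queue might look (TreeNodeModel and the fetchChildren delegate are placeholders for your own node type and web-service call; written with modern async for brevity, where Silverlight-era code would use callbacks):

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public class TreeNodeModel
{
    public string Name = "";
    public readonly List<TreeNodeModel> Children = new List<TreeNodeModel>();
}

// Fills the tree level by level in the background, but lets an
// expanded node jump to the front of the queue.
public class BreadthFirstTreeLoader
{
    private readonly LinkedList<TreeNodeModel> _pending = new LinkedList<TreeNodeModel>();
    private readonly Func<TreeNodeModel, Task<List<TreeNodeModel>>> _fetchChildren;

    public BreadthFirstTreeLoader(Func<TreeNodeModel, Task<List<TreeNodeModel>>> fetchChildren)
    {
        _fetchChildren = fetchChildren;
    }

    public void Enqueue(TreeNodeModel node) => _pending.AddLast(node);

    // Call from the tree's node-expanded event: load this node's children next.
    public void Prioritize(TreeNodeModel node)
    {
        if (_pending.Remove(node))
            _pending.AddFirst(node);
    }

    public async Task RunAsync()
    {
        while (_pending.Count > 0)
        {
            var node = _pending.First.Value;
            _pending.RemoveFirst();

            var children = await _fetchChildren(node); // one service trip per node
            node.Children.AddRange(children);

            foreach (var child in children) // breadth-first: queue the next level
                _pending.AddLast(child);
        }
    }
}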
Personally, as a user, I would prefer all the data to be loaded up front, so that once the application finishes loading I can trust that I won't have to wait anymore (or at least very little).
But, I suppose it depends on several traits of your application / data:
How dynamic is the data? Does it update more often than the rate at which the user explores the nodes? If it does, then you will have to read the data as the user explores it; otherwise you can probably get away with only updating it occasionally and checking for the freshest data before performing important operations.
How much of the data will the user explore during normal use? If they are constantly exploring the entire tree, then having the entire tree loaded is important. On the other hand, if most users will usually only expand a small portion of the tree, then maybe loading on demand is better so you don't waste their time loading data they will never see anyway.
How much effect will this have on performance? Does it really take a long time to load all the data? If there is not too much data, maybe the whole thing can be loaded in a matter of seconds, in which case the optimization will not be significant to the end user and in turn will not have a good return on investment.
Most likely you don't have clear-cut answers to these questions, but they're probably good to consider when you're attacking this interesting problem.
Short answer is to make the user wait for as little as possible. They will curse your name if they have to wait 10-20 seconds on application load, but not notice 0.1-0.2 seconds for a tree node to expand.
I have an app in production with a similar structure. I cannot load up-front because it'd be effectively loading the entire database. Here's my strategy:
The tree control starts with 1 level expanded below the root.
Each unexpanded node has a dummy child node in order to get the [+] expansion icon to show
When a node is expanded, it fires an event which is trapped by the app. If the only child node is the dummy one, the dummy is deleted and the children are loaded from the database.
Changes in the data are not reflected automatically by visible nodes, however the context menu for the tree has a Refresh item that can be used to refresh a node.
I have considered showing updates asynchronously, but have tended to avoid it because large amounts of data can be shown in the tree and I'm wary of DB load if I'm checking them all for changes.
The app is WinForms, written in C# using .NET 2.0.
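A sketch of the dummy-node part (MyRecord and LoadChildrenFromDatabase are placeholders; written in the C# 2.0 style the app uses):

private const string DummyTag = "dummy"; // marks the placeholder child

private TreeNode CreateNode(MyRecord record)
{
    TreeNode node = new TreeNode(record.Name);
    node.Tag = record;

    // Dummy child so the [+] expansion icon shows before children are loaded.
    TreeNode dummy = new TreeNode("Loading...");
    dummy.Tag = DummyTag;
    node.Nodes.Add(dummy);

    return node;
}

private void treeView_BeforeExpand(object sender, TreeViewCancelEventArgs e)
{
    TreeNode node = e.Node;

    // Only hit the database if the sole child is still the dummy placeholder.
    if (node.Nodes.Count == 1 && DummyTag.Equals(node.Nodes[0].Tag))
    {
        node.Nodes.Clear();
        foreach (MyRecord child in LoadChildrenFromDatabase((MyRecord)node.Tag)) // hypothetical
            node.Nodes.Add(CreateNode(child));
    }
}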