Our utility has a single table with 10 to 50 million rows. There may be a case where we need to show 50 million rows on a single HTML client page. To render the rows in the browser we use jQuery in the UI.
To retrieve rows we use Hibernate, and we use Spring for MVC. I am looking for the best practice for retrieving the rows and showing them in the UI. Should I retrieve a batch of one or two thousand rows in Hibernate and buffer them to the web client, or is there a better practice?
The best practice is not to do this. It will blow up the browser's memory and rendering engine, and it will take far too long to load.
Add a search form to your webapp, make the end user search for what they're interested in, and only display the first N search results, just like Google does.
Nobody is able to do anything meaningful with 50 million rows without searching anyway.
I think you should use scroll pagination (when the user gets near the bottom of the page, make an AJAX call and load more data).
A quick Google search will turn up examples and demos of this.
And if your data is tabular, you can use jQGrid.
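For instance, a minimal jQuery sketch of the idea (the /api/rows endpoint, its parameters, and the row fields are made up for illustration):

```javascript
// Minimal scroll-pagination sketch; endpoint and fields are hypothetical.
var page = 0, loading = false;

function loadNextPage() {
    if (loading) return;
    loading = true;
    $.getJSON('/api/rows', { page: page, size: 200 }, function (rows) {
        $.each(rows, function (i, row) {
            $('#grid tbody').append('<tr><td>' + row.name + '</td></tr>');
        });
        page++;
        loading = false;
    });
}

$(window).on('scroll', function () {
    // Fetch the next page when the user is within 200px of the bottom.
    if ($(window).scrollTop() + $(window).height() >= $(document).height() - 200) {
        loadNextPage();
    }
});

loadNextPage(); // initial page
```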
Handling larger quantities of data in an application must be done via virtualization. While it's true that the user will be overwhelmed by millions of records, it's not exactly true that they can't do anything with them, nor that such quantities of data are unfathomable.
In practice, and depending on what you're doing, you'll find that this limit creeps up on you with just thousands of records, which frankly is very little data. Data-centric apps simply need a different approach altogether if they are going to work in a browser and perform well.
The way we do this is quite simple but not all that straightforward.
It helps to decide on a fixed height, because you will need to know the max height of a scrollable container. You then render into this container the subset of records that can be visible at any given moment and position them accordingly (based on scroll events). There are more or less efficient ways of doing this.
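For illustration, a bare-bones sketch of that windowing technique in plain JavaScript (the element ids, row height, and rows array are all assumptions):

```javascript
// Windowing: a fixed-height viewport, a spacer that fakes the full scroll
// height, and only the currently visible rows actually present in the DOM.
// #viewport is fixed-height with overflow-y: auto; #spacer sits inside it
// with position: relative so the rows can be positioned against it.
var ROW_HEIGHT = 24;
var viewport = document.getElementById('viewport');
var spacer = document.getElementById('spacer');
var rows = window.allRows || []; // your records, however you load them

spacer.style.height = (rows.length * ROW_HEIGHT) + 'px';

function render() {
    var first = Math.floor(viewport.scrollTop / ROW_HEIGHT);
    var count = Math.ceil(viewport.clientHeight / ROW_HEIGHT) + 1;
    var html = '';
    for (var i = first; i < Math.min(first + count, rows.length); i++) {
        html += '<div style="position:absolute;top:' + (i * ROW_HEIGHT) +
                'px;height:' + ROW_HEIGHT + 'px;">' + rows[i].name + '</div>';
    }
    spacer.innerHTML = html; // everything off-screen is culled
}

viewport.addEventListener('scroll', render);
render();
```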
The end goal remains the same. You basically need to cull everything which isn't directly visible on screen in such a way that the browser isn't paying the cost of memory and layout logic for the app to be responsive. This is common practice in game development, only the world that is visible right now on screen is ever present at any given moment. So that's what you need to do to get large quantities of stuff to behave well.
In the context of a browser, anything which contributes to memory use and layout/render cost needs to go away if it isn't absolutely vital.
You can also stagger and smear recalculations so that you don't incur the cost of whatever is causing the app to degrade on every small update. The user can afford to wait a second, as long as the app remains responsive.
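A trivial sketch of smearing that cost (recalculate() stands in for whatever heavy work is degrading the app):

```javascript
// Coalesce rapid updates into a single delayed recalculation.
var pending = null;

function scheduleRecalc() {
    clearTimeout(pending);
    pending = setTimeout(function () {
        recalculate(); // stand-in for the expensive work
    }, 250); // a ~250ms delay is fine as long as the UI stays responsive
}
```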
I have a table that I fill from a REST API using keyset-based pagination. My pagination control has 5 buttons representing the pages.
I want to use all the available space to display the maximum number of rows in the table, but at the same time making the page smaller or larger should not lead to scrolling inside the table. So I do not mean the usual responsive behaviour.
I tried a solution that caused a series of follow-on problems. I think it would be better to go back and look for a better idea from the beginning. Do you have any solutions based on your experience?
I am sorry, I do not fully understand your problem, but maybe this could help you.
There is a Web API called the Intersection Observer API. With this observer, you can react to elements entering the viewport of your device. Maybe this can help you.
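For example, a sentinel-based sketch (loadMoreRows() stands in for whatever fetches your next page):

```javascript
// Observe a sentinel element placed after the last rendered row; when it
// scrolls into view, load the next page of data.
var sentinel = document.getElementById('sentinel');

var observer = new IntersectionObserver(function (entries) {
    if (entries[0].isIntersecting) {
        loadMoreRows(); // hypothetical fetch-next-page function
    }
});

observer.observe(sentinel);
```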
I have a requirement to consume a huge volume of data (more than 100,000 rows) by calling an API endpoint; the data format is JSON, and I need to display it on a React page. I am developing the logic using React-Table, but I would like to hear expert opinions: is this possible in React? Is React-Table the right option? Will there be performance issues?
Yes, this is certainly possible, but it involves the use of virtualized views, for example react-virtualized.
The problem with 100k rows is that the first render takes a long time, scrolling can be sluggish, and every re-render takes a significant amount of time too.
With virtualized views, data is rendered only in the active viewport, and elements are added/removed on scroll, reducing the rendering/reconciliation payload.
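A minimal sketch with react-virtualized (the rows array and its name field are placeholders):

```jsx
import React from 'react';
import { List } from 'react-virtualized';

// Only the slice of rows inside the 600px viewport is ever rendered.
function VirtualRows({ rows }) {
  const rowRenderer = ({ key, index, style }) => (
    // style positions the row absolutely inside the scroll container
    <div key={key} style={style}>{rows[index].name}</div>
  );

  return (
    <List
      width={800}
      height={600}
      rowCount={rows.length}
      rowHeight={30}
      rowRenderer={rowRenderer}
    />
  );
}
```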
Requesting the page (over HTTP) is very slow, or even crashes, unless I load my JSON with less data. I really need to solve this since sooner or later I will be using large amounts of data frequently. Here is my JSON data:
Notes:
1. The JSON contains only strings and integers.
2. I view my JSON as a tree using the JSONView plugin for Google Chrome.
I am using Angular and Node.js. Thanks.
A quick summary of all the things that come to my mind:
I had a similar issue once. My solutions may require changes to the UI.
Pagination
I doubt you can display that much data at one time, so the strategy should be dividing your data into small amounts and then only loading more when the client asks for it.
This way, the whole dataset is no longer stored in RAM as it is currently. This is how forums work (only 20 topics at a time).
Just imagine if StackOverflow made you load the whole history of questions on the main page; how many GB would your browser need just for that?
You can use pagination in the classic way (buttons with page numbers, like Google) or with infinite scroll, as you prefer.
For that you need to adapt your API and, in your front end, keep track at every moment of the index of the pages you have already loaded. There are plenty of examples for AngularJS.
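For illustration, a sketch of such a front-end service in AngularJS (the module name, the /api/items endpoint, and the page/size parameters are all assumptions):

```javascript
// Keeps track of which page has been loaded and appends new results.
angular.module('app').factory('pagedItems', ['$http', function ($http) {
    var items = [], page = 0, loading = false, exhausted = false;
    return {
        items: items,
        loadMore: function () {
            if (loading || exhausted) return;
            loading = true;
            return $http.get('/api/items', { params: { page: page, size: 20 } })
                .then(function (res) {
                    if (res.data.length < 20) exhausted = true; // last page reached
                    Array.prototype.push.apply(items, res.data);
                    page += 1;
                })
                .finally(function () { loading = false; });
        }
    };
}]);
```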
Only show the beginning of the data
When you look at Facebook comments, you may see a "show more" button. In their case it may be there to avoid breaking the UI, but it can also be used to avoid loading too much data.
You can display only the main lines of your data (titles or similar) and add a button so the user can load more details if they want.
In your data model, the cost seems to be in the second level of "C". Just load data up to this second level, and download the remaining part (for this object) only if the user asks for it.
Once again, no need to overload: your client's RAM will be thankful, and your client's mobile 3G too.
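In Angular terms, that on-demand loading could look roughly like this (the details endpoint is made up):

```javascript
// Inside a controller: load only summaries up front, then fetch the heavy
// second level for one object when the user asks for it.
$scope.showMore = function (item) {
    $http.get('/api/items/' + item.id + '/details').then(function (res) {
        item.details = res.data; // the template can now reveal this part
    });
};
```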
Optimize your data structure
If this is still not enough:
As StefanArya said in a comment, remove the "I" attribute, which is redundant with the JSON key; you can use Object.keys() to get the key names instead.
You also may not need that much precision on your floats.
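For instance, the trimming could look like this on the client (the V field is a placeholder, since the original payload isn't reproduced here); ideally you would do it on the server before sending:

```javascript
// Strip the redundant "I" attribute and reduce float precision.
Object.keys(data).forEach(function (key) {
    var item = data[key];
    delete item.I;                           // the key already carries the id
    item.V = Math.round(item.V * 100) / 100; // two decimals instead of full precision
});
```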
If I see any other ideas, I'll edit this post later.
Many examples on the net show you how to use ng-repeat with in-memory data, but in my case I have a long table with infinite scroll that gets data by sending requests to a REST API (scroll down: fetch some data; scroll down again: fetch some more; and so on). It does work, but I'm wondering how I can integrate that with filters.
Right now I have to call a specific method of the API service that makes a request based on the text in the "search" input box, and then the controller updates $scope.data.
Is it possible to build a custom filter that would do that? Then my view would be utterly decoupled from the service, and I could declaratively tell it how to group, order, and filter data, regardless of whether it's in memory or comes from a remote server, a server that can serve only a limited number of records at a time.
I'm also going to need grouping and ordering later. I'm so tempted to download the entire dataset and lock the parts of the app responsible for grouping, searching, and ordering (until all data is on the client), but:
a) that dataset is huge (hundreds of thousands of records)
b) nobody wants to deal with cache invalidation headaches
c) doing so feels so damn wrong; you don't really expect me to keep all that data in memory, right?
Can you guys point me to maybe some open-source examples where I can steal some ideas from?
Basically I need to build a service and filters that let me work with my "pageable" data that comes from the API as if it were in-memory data.
Regardless of how you choose to solve it (there are many ways to do infinite scroll with Angular; here is one: http://binarymuse.github.io/ngInfiniteScroll/), as of its latest beta version ng-repeat performs really badly with large amounts of data, and so do filters. The reason is obvious: watching so much data for changes is a tough job. Moreover, ng-repeat by default will re-draw your complete list every time something changes.
There are many solutions you can explore in this area; here are the ones I found productive:
http://kamilkp.github.io/angular-vs-repeat/#?tab=8
http://www.williambrownstreet.net/blog/2013/07/angularjs-my-solution-to-the-ng-repeat-performance-problem/
https://github.com/allaud/quick-ng-repeat
You should also consider the following, which really helps with large amounts of data.
https://github.com/Pasvaz/bindonce
Updated
I guess you can't really control your server output, because filtering and ordering large amounts of data are better done on the server side anyway.
I was pointing out the links above because, even if you write your own filters (and order-bys), which is quite simple to do (see http://jsfiddle.net/gdefpfqL/ : filter by some company name, then click the "Add More" button to add more items), ordering is virtually impossible if you can't control the data coming from the server; the only option is getting it all, ordering it, and then lazy-loading from the client's memory. So if each of your list items doesn't have many bindings by itself (as in the example I've added), that is, the list item is a fairly simple one (for instance, you simply present the results as plain text in <li>{{item.name}}</li>), then Angular's ng-repeat might work for you. In that case, filters will work as expected; say you filter by searched text:
<li ng-repeat="item in items | filter:searchedText"></li>
Even for new items added after the user has searched for a text, it will still work, thanks to the magic of binding.
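For reference, rolling such a filter yourself is just as simple; here is a sketch that matches on item.name (the module name is an assumption). Keep in mind it still only sees the records already loaded on the client:

```javascript
// A hand-rolled equivalent of filter:searchedText for this list.
angular.module('app').filter('searchedBy', function () {
    return function (items, text) {
        if (!text) return items;
        text = text.toLowerCase();
        return (items || []).filter(function (item) {
            return item.name.toLowerCase().indexOf(text) !== -1;
        });
    };
});
```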
I have been looking at multiple posts addressing the same issue. Before moving to remote filtering of data from the store and feeding it into the grid panel, is there any solution yet:
To display about 10000 records in one go in the grid panel view.
To use column filtering in the grid to filter all of these records.
The two approaches are at cross purposes: when looking at a dataset of this size, you need to think about using a buffered grid/store. The whole point of buffered grids is that the underlying dataset is too large to load into the grid all at once without introducing substantial overheads (typically in DOM complexity, secondarily in JS processing). As such, by its very nature the data is buffered and loaded in chunks. Introducing local filtering/sorting as a layer on top of this is therefore not appropriate, because it would be applied only to the current chunk; when the next chunk of records was fetched, it would not respect the filter/sort indicated in the GUI and expected by the user. You thus end up with a disjointed UX.
You therefore have a few approaches:
Use a paginated grid, with local filtering/sorting on each page. This still produces a less-than-ideal UX, but at least it is in some way semi-logical.
Use no pagination/buffering and simply load all your records, then do local filtering/sorting. This isn't really a scalable solution and is more an attempt to avoid the work of putting solid foundations into your app.
Use a buffered grid/store and implement server-side filtering/sorting. Given your requirements, this is clearly the recommendation, and it shouldn't be too hard to implement.
So the short answer is NO; the recommendation is to use a buffered grid/store with remote filtering/sorting.
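To make that concrete, in Ext JS 5+ a buffered store with remote filtering/sorting looks roughly like this (the endpoint, field list, and reader properties are placeholders):

```javascript
// The grid only ever holds a window of records; filter/sort changes are
// sent to the server, which returns the matching chunk plus a total count.
var store = Ext.create('Ext.data.BufferedStore', {
    fields: ['name'],   // placeholder field list
    pageSize: 100,
    remoteFilter: true,
    remoteSort: true,
    autoLoad: true,
    proxy: {
        type: 'ajax',
        url: '/api/records',
        reader: {
            type: 'json',
            rootProperty: 'rows',
            totalProperty: 'total'
        }
    }
});
```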