I'm implementing pagination for my Backgrid-powered tables, like so:
footer: Backgrid.Extension.Paginator
And in my Collection:
state: {
pageSize: 15
}
Now, I haven't actually implemented any server-side handling of pagination (which is what I feel might be producing the odd results), but it put this HTML in:
Any ideas anyone?
This is sort of a bug and sort of a mismatched expectation.
The reason it starts paging backwards is that your server-mode Backbone.PageableCollection didn't have totalRecords set in state, which is required under server mode according to the documentation.
In addition, Backgrid.Extension.Paginator has a default sliding window of 10 pages, but version 0.1 apparently assumed the collection wouldn't be empty by the time you summon up a grid (it basically assumed you'd follow the recommended best practice here), so this case was never checked. You can follow the progress of this issue here.
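For reference, a minimal sketch of a server-mode collection with totalRecords set as the documentation requires (the URL and totalRecords value are illustrative):
var Books = Backbone.PageableCollection.extend({
  url: "/api/books", // illustrative endpoint
  mode: "server",    // the default mode; shown here for clarity
  state: {
    pageSize: 15,
    totalRecords: 200 // required in server mode; typically parsed from the server response
  }
});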
I've been in the process of rewriting an old AngularJS app in React (actually it uses Preact, chosen by the developer who originally started this project).
This app handles large, deeply nested objects that get displayed via Material UI accordions and tables. The data is more WIDE than deep, but at any rate, React has trouble rendering it all without hitting this RangeError.
I've been dancing with this issue for a while now and have avoided it by strategically managing accordions and not rendering data for accordions that are not open.
I've commonly seen this reported as a recursion issue, and I've carefully reviewed the code to confirm there is no recursion involved. Plenty of iteration, but no recursion.
Please note the stack trace: it's hitting this in the flush() function, which is not in our application code but in the Chrome debugger VM. I've set breakpoints, and it appears to be related to DOM operations, as the objects being flushed are React elements. Here's a code snippet from the point where this error is hit:
function flush(commit) {
const {
rootId,
unmountIds,
operations,
strings,
stats
} = commit;
if (unmountIds.length === 0 && operations.length === 0) return;
const msg = [rootId, ...flushTable(strings)];
if (unmountIds.length > 0) {
msg.push(MsgTypes.REMOVE_VNODE, unmountIds.length, ...unmountIds);
}
msg.push(...operations); // <-- error occurs here when operations is too long
And the stack trace logged when error occurs:
VM12639:1240 Uncaught (in promise) RangeError: Maximum call stack size exceeded
at flush (<anonymous>:1240:8)
at Object.onCommit (<anonymous>:3409:19)
at o._commit.o.__c (<anonymous>:3678:15)
at QRet.Y.options.__c (index.js:76:17)
at Y (index.js:265:23)
at component.js:141:3
at Array.some (<anonymous>)
at m (component.js:220:9)
The error occurs when operations is too large. Normally it will be anywhere from a dozen or so in length up to maybe 3000, depending on what's going on, but when I try to load our page displaying the wide/deep nested object this number is more like 150000, which apparently is choking the spread operator.
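For illustration (this snippet is not from the app): spreading a huge array into push() passes every element as a separate function argument, and JavaScript engines cap how many arguments a single call can take, so a large enough array triggers this same RangeError. Appending in chunks avoids it:
// Rough illustration of the failure mode and a common workaround.
const operations = new Array(150000).fill(0);
const msg = [];
// msg.push(...operations); // can throw "RangeError: Maximum call stack size exceeded"

// Appending in chunks keeps each call's argument count small:
const CHUNK = 10000;
for (let i = 0; i < operations.length; i += CHUNK) {
  msg.push(...operations.slice(i, i + CHUNK));
}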
My sense is that this type of app is a challenge for React. I cannot think of another example of a React app that displays data the way we do with this. If anyone here has experience with this sort of dataset and can offer suggestions as to how to make this work, please share.
My guess is I'm going to need to somehow break this object up into smaller chunks that represent smaller updates, but I'm posting here in case there's something I can learn.
It looks similar to this open issue on the React repo, only it happens in a different place (also in dev tools). It might be worth reporting your issue there too. So React is probably otherwise "fine" rendering this number of elements, though you'll inevitably get slow performance.
Likely the app is just displaying too much data, or doing it inefficiently.
but when I try to load our page displaying the wide/deep nested object this number is more like 150000, ...
150000 DOM operations is a really high number. Either your app really does display a whole lot of elements, or the old AngularJS app had too many wrapper elements and these were preserved. Since you mention it concerns data tables, it's probably the former. In any case, complex applications always need some platform-specific optimization.
If you can give an idea of the intended use case, or even better, share (parts of) the code, that would help others give more targeted advice. Are the 150k operations close to what would happen in real-world usage, or is it just a very inflated number for stress testing? Do you see any other performance regressions compared to the Angular app with very complex objects? How many tables are on the screen at a time?
A few hundred visible elements on the screen already gets quite cramped. So where would all these extra operations be coming from? Either you're loading a super long page of which a user can only see a few percent at a time, or the HTML structure is unnecessarily deeply nested.
Suggested performance improvements
I wouldn't say React isn't suitable for really large amounts of data, but you do need to watch out for some things yourself. React is only your vehicle for applying changes to the DOM. Putting a large number of elements in the DOM is always going to lead to decreased performance and is something you usually want to avoid.
In this case you could consider whether it's necessary to display all the table's data, which is probably the bulk of the operations. Using pagination would resolve the problem, and might even make it more user friendly.
If that's not an option, you could use a library like react-lazyload to show/hide the items as they enter/exit the visible part of the table. To achieve this, use its unmountIfInvisible prop. You can then replace a complex data row with a single element that has the same height. The latter is important to preserve the scroll height.
<LazyLoad
height={100}
offset={100}
unmountIfInvisible
placeholder={<tr height={100}/>}
>
<MyComplexDataRow />
</LazyLoad>
This way your data table never contains many more complex elements than can be seen in the viewport. You'll probably need to tune the offset a bit so that a row is always ready in time as the table is being scrolled.
We have a need for "blending of hits from different sources"; per your documentation, it is recommended to write a custom Searcher in Java. Is there a demo of this written somewhere on GitHub? I wouldn't even know where to start :( I understand I can create search "chains", preferably asynchronous, and then blend results in Java before returning them... but then how would I handle pagination, limits, etc.? This all seems very complicated for someone who doesn't even know Java that well. So, I am hoping someone has already written a demo for this? Please? Anyone?
Thank you so much
EDIT to make my question clearer:
We are writing a search engine that fetches data from various websites. Some websites have 10 million indexable items, others only 100,000. When we present the results to the end user, we want to include results from all our sources (when a match applies). Let's say 10 results from each of the websites we crawl, so that they all get an equal amount of attention on the page. If we don't do custom blending, what happens is that the largest website, with the most items, wins all our traffic.
I understand that we can send 10 separate queries to Vespa and blend the results in our front end, but that seems very inefficient. Hence the question about a "custom Searcher". Thank you so much!
That documentation covers some very advanced use cases which you do not have. Are your sources different Vespa schemas or content clusters? If so, Vespa will by default blend the hits returned from each according to their relevance scores, so there's nothing you need to do.
The two other most common use cases are:
Some (or all) the data sources are external, so you need to write a Searcher component to fetch the external data and turn it into a Result.
You want the data to be blended in some custom way (rather than by relevance score). If so you need to exclude the default blending Searcher (com.yahoo.prelude.searcher.BlendingSearcher) and write your own.
If you provide some more information about your use cases I can give you some code examples.
EDIT: Use grouping to solve the need explained under "EDIT" in the question:
Create a "siteid" field when feeding (e.g. in document processing).
Use the grouping expression all(group(siteid) each(max(10) output(summary()))) (a query sketch follows after this list).
See http://docs.vespa.ai/documentation/grouping.html
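A minimal sketch of how such a grouped query could be issued over Vespa's HTTP search API (the endpoint, field name, and result handling are assumptions based on the answer above):
// Assumes a "siteid" field in the schema and Vespa listening on this endpoint.
const yql =
  'select * from sources * where userQuery() ' +
  '| all(group(siteid) each(max(10) output(summary())))';

const params = new URLSearchParams({ yql, query: 'user search terms' });
fetch('http://localhost:8080/search/?' + params)
  .then(res => res.json())
  .then(result => {
    // Each group should hold up to 10 hits for one site.
    console.log(result.root.children);
  });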
I am developing a Sitecore project that has several data import jobs running on a daily basis. Every time a job is executed, it may update a large number of Sitecore items (thousands), and I've noticed that all these edits trigger Solr index updates.
My concern is, I'm not really sure whether indexing as you go is better, or updating everything at the end of the job is. So I would love to try both options. Could anyone tell me how I can use code to temporarily disable Lucene/Solr indexing and enable it later, when I finish editing all the items?
This is a common requirement, and you're right to have such concerns. In general it's considered good practice to disable indexing during big import jobs, then rebuild afterwards.
Assuming you're using Sitecore 7 or above, this is pretty much what you need:
IndexCustodian.PauseIndexing();
IndexCustodian.ResumeIndexing();
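In practice you'd typically wrap the import in a try/finally so indexing resumes even if the job throws. A minimal sketch (RunImportJob is a placeholder for your import logic):
using Sitecore.ContentSearch.Maintenance;

IndexCustodian.PauseIndexing();
try
{
    RunImportJob(); // placeholder for your import logic
}
finally
{
    // Resume even on failure so indexing isn't left paused application-wide.
    IndexCustodian.ResumeIndexing();
}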
Here's a comprehensive article discussing this:
http://blog.krusen.dk/disable-indexing-temporarily-in-sitecore-7/
In addition to Martin's answer, you can pass silent = true when you finish editing the item, something like:
item.Editing.BeginEdit();
// Change field values here
item.Editing.EndEdit(true, true); // (updateStatistics, silent)
The second parameter of the EndEdit() method forces a silent update of the item, which means no events/indexing will be triggered on item save.
I feel this is safer than pausing indexing at the whole-application level during the import process: you just skip indexing for the items you are updating.
EDIT:
In case you need to rebuild the index for the updated items after the import process is done, you can use the following code. It will index the content tree starting from RootItemInTree and below:
var index = Sitecore.ContentSearch.ContentSearchManager.GetIndex("Your_Index_Name");
index.Refresh(new SitecoreIndexableItem(RootItemInTree));
To disable indexing during large import/update tasks, you should wrap your logic inside a BulkUpdateContext block. You can also use other wrappers, like EventDisabler, to stop events from being fired if that is appropriate in your context. Alternatively, you could wrap your code in an EditContext and set it to silent. So your code could end up something like this:
using (new BulkUpdateContext())
using (new EditContext(targetItem, false, true))
{
// insert update logic here...
}
Here is an older question that discusses this topic: Optimisation tips when migrating data into Sitecore CMS
When I started working on my current project, I was given quite an arduous task: to build something that is, in essence, supposed to replace a big spreadsheet people use internally at my company.
That's why I thought a paginated table would never work, and quite honestly I think pagination is stupid. Displaying dynamically changing data in a paginated table is lame. Say an item on page #2 can, with the next data update, land on page whatever.
So we needed to build a grid with nice infinite scroll. Don't get me wrong, I've tried many different solutions. First, I built a vanilla ng-repeat thing and tried using ng-infinite-scroll, and then ng-scroll from UI.Utils. That quickly got me to the point where scrolling became painfully slow, and I hadn't even used any crazy stuff like complicated cell templates, ng-ifs, or filters. Very soon performance became my biggest pain. When I started adding stuff like resizable columns and custom cell templates, no browser could handle all those bindings anymore.
Then I tried ng-grid, and at first I kinda liked it: easy to use, with a few nice features I needed. But soon I realized ng-grid is awful. The current version is stuffed with bugs, all the contributors stopped fixing those and switched to working on the next version, and only God knows when that will be ready to use. ng-grid turned out to be pretty much worse than even vanilla ng-repeat.
I kept trying to find something better. trNgGrid looked good, but it's way too simplistic and doesn't offer the features I was looking for out of the box.
ng-table didn't look much different from ng-grid; it probably would've caused me the same performance issues.
And of course I needed to find a way to optimize bindings. I tried bindonce but wasn't satisfied; the grid was still laggy. (Update: Angular 1.3 offers the {{::foo}} syntax for one-time binding.)
Then I tried React. The initial experiment looked promising, but in order to build something more complicated I'd need to learn React's specifics; besides, the whole thing feels kinda non-Angularesque, and who knows how to test directives built with Angular+React. All my efforts to build nice automated testing failed: I couldn't find a way to make React and PhantomJS like each other (which is probably more Phantom's problem; is there a better headless browser?). Also, React doesn't solve the "appending to DOM" problem: when you push new elements into the data array, the browser blocks the UI thread for a few milliseconds. That, of course, is a completely different type of problem.
My colleague (who's working on the server side of things), after seeing my struggles, grumbled that I'd already spent too much time trying to solve performance problems. He made me try SlickGrid, telling me stories about how it's freakin' zee best grid widget. I honestly tried it, and quickly wanted to burn my computer. That thing completely depends on jQuery and a bunch of jQuery UI plugins, and I refuse to suddenly drop back to the medieval times of web development and lose all the Angular goodness. No, thank you.
Then I came across ux-angularjs-datagrid, and I really, really, really liked it. It uses some smart bad-ass algorithms to keep things very responsive. The project is young, yet looks very promising. I was able to build a basic grid with lots of rows (I mean a huge number of rows) without straying too far from the way of Angular zen, and scrolling stayed smooth. Unfortunately, it's not a complete grid-widget solution: you won't have resizable columns and other things out of the box, the documentation is somewhat lacking, etc.
I also found this article and had mixed feelings about it: these guys applied a few undocumented hacks to Angular, and most probably those will break with future versions of Angular.
Of course there are at least a couple of paid options, like Wijmo and Kendo UI. They are compatible with Angular; however, the examples shown are quite simple paginated tables, and I'm not sure it's worth even trying them. I might end up having the same performance issues. Also, you can't selectively pay just for the grid widget; you have to buy the entire suite, full of stuff I'll probably never use.
So, finally, to my question: is there a good, guaranteed, less painful way to get a nice grid with infinite scrolling? Can someone point me to good examples, projects, or web pages? Is it safe to use ux-angularjs-datagrid, or is it better to build my own thing using Angular and React? Has anybody ever tried the Kendo or Wijmo grids?
Please! Don't vote to close this question; I know there are a lot of similar questions on Stack Overflow, and I've read through almost every single one of them, yet the question remains open.
Maybe the problem is not with the existing widgets but with the way you use them.
You have to understand that with over 2000 bindings, Angular's digest cycle can take too long for the UI to render smoothly. In the same vein, the more HTML nodes you have on your page, the more memory you use, and you might reach the browser's capacity to render that many nodes smoothly. This is one of the reasons why people use this "lame" pagination.
In the end, what you need to achieve to get something "smooth" is to limit the amount of data displayed on the page. To make it transparent, you can do pagination on scroll.
This plunker shows the idea, with smart-table. When scrolling down, the next page is loaded (you will have to implement the previous page when scrolling up), and at any time the maximum number of rows is 40.
function getData(tableState) {
  // lastStart, maxNodes and getAPage() are defined elsewhere in the plunker
  // here you could create a query string from tableState
  // fake ajax call
  $scope.isLoading = true;
  $timeout(function () {
    // if we reset (like after a search or an order)
    if (tableState.pagination.start === 0) {
      $scope.rowCollection = getAPage();
    } else {
      // we load more
      $scope.rowCollection = $scope.rowCollection.concat(getAPage());
      // remove the first nodes if needed
      if (lastStart < tableState.pagination.start && $scope.rowCollection.length > maxNodes) {
        // drop one page's worth of rows from the top
        $scope.rowCollection.splice(0, 20);
      }
    }
    lastStart = tableState.pagination.start;
    $scope.isLoading = false;
  }, 1000);
}
This function is called whenever the user scrolls down and reaches a threshold (throttled, of course, for performance reasons). The important part is where you remove the first entries from the model once you have loaded more than a given amount of data.
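A rough sketch of what that throttled trigger could look like (scrollContainer and the numbers are illustrative; the plunker wires this up through smart-table's directives instead):
var throttled = false;
scrollContainer.addEventListener('scroll', function () {
  if (throttled) return;
  throttled = true;
  setTimeout(function () { throttled = false; }, 250); // throttle window
  var nearBottom = scrollContainer.scrollTop + scrollContainer.clientHeight
    >= scrollContainer.scrollHeight - 100; // threshold in px
  if (nearBottom) {
    tableState.pagination.start += 20; // advance one page
    getData(tableState);               // append the next page, trim the top
  }
});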
I'd like to bring your attention to Angular Grid. I had exactly the same problems as you describe, so I ended up writing (and sharing) my own grid widget. It can handle very large datasets, and it has excellent scrolling.
Many examples on the net show you how to use ng-repeat with in-memory data, but in my case I have a long table with infinite scroll that gets data by sending requests to a REST API (scroll down: fetch some data; scroll down again: fetch some more; etc.). It does work, but I'm wondering how I can integrate that with filters.
Right now I have to call a specific method of the API service that makes a request based on the text in the "search" input box, and then the controller updates $scope.data.
Is it possible to build a custom filter that would do that? Then my view would be utterly decoupled from the service, and I could declaratively tell it how to group, order, and filter data, regardless of whether it's in memory or comes from a remote server that can only serve a limited number of records at a time.
Also, later I'm going to need grouping and ordering as well. I'm so tempted to download the entire dataset and lock the parts of the app responsible for grouping, searching, and ordering (until all the data is on the client), but:
a) that dataset is huge (hundreds of thousands of records)
b) nobody wants to deal with cache invalidation headaches
c) doing so feels so damn wrong; you don't really expect me to 'keep' all that data in memory, right?
Can you guys point me to some open-source examples I could steal some ideas from?
Basically, I need to build a service and filters that let me work with my "pageable" data that comes from the API as if it were in-memory data.
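For reference, a rough sketch of the debounced, service-backed search described above (app, dataService, and the parameter names are illustrative):
app.controller('TableCtrl', function ($scope, $timeout, dataService) {
  // dataService.fetch is an assumed wrapper around $http that returns a promise
  var pending;
  $scope.data = [];
  $scope.$watch('search', function (text) {
    if (pending) $timeout.cancel(pending);
    pending = $timeout(function () {
      dataService.fetch({ q: text || '', offset: 0, limit: 50 })
        .then(function (rows) { $scope.data = rows; });
    }, 300); // debounce so we don't hit the API on every keystroke
  });
});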
Regardless of how you choose to solve it (there are many ways to do infinite scroll with Angular; here is one: http://binarymuse.github.io/ngInfiniteScroll/), as of its latest beta version ng-repeat works really badly with large amounts of data, and so do filters. The reason is obvious: checking so much information for changes is a tough job. Moreover, by default ng-repeat will redraw your complete list every time something changes.
There are many solutions you can explore in this area, here are the ones I found productive:
http://kamilkp.github.io/angular-vs-repeat/#?tab=8
http://www.williambrownstreet.net/blog/2013/07/angularjs-my-solution-to-the-ng-repeat-performance-problem/
https://github.com/allaud/quick-ng-repeat
You should also consider the following, which really helps with large amounts of data.
https://github.com/Pasvaz/bindonce
Updated
I guess you can't really control your server's output; filtering and ordering large amounts of data are better off done on the server side anyway.
I pointed out the links above because, even though writing your own filters (and order-bys) is quite simple to do - http://jsfiddle.net/gdefpfqL/ - (filter by some company name, then click the "Add More" button to add more items), ordering is virtually impossible if you can't control the data coming from the server - the only option is getting it all, ordering, and then lazy-loading from the client's memory. So if each of your list items doesn't have many bindings by itself (as in the example I've added) - that is, the list item is a fairly simple one (for instance, you simply present the result as plain text in a <li>{{item.name}}</li>) - then Angular's ng-repeat might work for you. In this case, filters will work as expected - say you filter by searched text:
<li ng-repeat="item in items | filter:searchedText"></li>
Even for new items added after the user has searched for a text, it will still work, thanks to the magic of binding.