How to do grid paging by offset? - extjs

I want to implement a paging grid of people, similar to Gmail contacts, where the grid loads a maximum of N people's names, the paging bar says something like 'Allen - Baxter', and you can page through the alphabetical list. The main differences from the stock ExtJS pager are:
(a) This custom pager doesn't use a page number counted from the beginning; rather, it uses an offset to begin the query (e.g. Baxter). On the server side, it actually queries names > Baxter.
(b) The pager won't know the total count of people or pages, because the server doesn't check this. It just queries users > Baxter, up to N people.
I guess you can sort of call it "infinite paging".
Is there a simpler approach to this problem other than writing a totally custom pager class (and possibly making changes to grid, store, and/or proxy classes)?

I came up with a working solution by creating my own toolbar class, which is basically a copy-paste of Ext.toolbar.Paging with almost every method changed. The two client parameters I use to specify the type of request are: (1) direction, and (2) cursor. Together, those two parameters can identify every type of paging request: (a) first, (b) next, (c) previous, (d) last. Other than the limit parameter, the class ignores the existing paging behavior built into ExtJS; it's an entirely different paradigm. The server also includes a response parameter called hasMore, which tells the client whether there are more items in the direction it is paging. To determine this, the server always queries one more item than the limit.
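For illustration, a minimal sketch of the request/response shape this approach implies (the parameter and field names are the ones described above; the toolbar class itself is omitted):

// Client request issued by the custom paging toolbar.
// direction: 'first' | 'next' | 'previous' | 'last'
// cursor: the boundary value from the current page (e.g. the last surname shown)
store.load({
    params: {
        direction: 'next',
        cursor: 'Baxter',  // server will query names > 'Baxter'
        limit: 25
    }
});

// Server response: the server queried limit + 1 rows, so it can set hasMore
// without ever computing a total count, e.g.:
// { "items": [ /* up to limit people */ ], "hasMore": true }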

Related

How to design a system backend where users can customize some configuration

I need to model a system where clients can apply configuration to separate entities.
Let me explain with an example:
We have users that have a config tab in their dashboards.
We have a feature that sends notifications to their browsers, and a feature that sends them an email.
We also have a pop-up feature.
The user should be able to modify our default notification message, our default email template, and our default texts on emails or other elements.
For the pop-up, the user should be able to modify the width and height, change the default texts, modify the background color, and change the button location on the pop-up.
When I want to send an email to the user, I should apply these settings to the template and then send the email. Likewise, when the front-end wants to show those pop-ups, it gets these configs from my API and applies them.
There will be more and more of these settings in the future, so I can't specify a settings table with a fixed set of fields; I don't think that's a good idea.
What can I do? How to design and model this scenario? What are the best practices?
Can I use a NoSQL database like MongoDB instead of a relational database?
Thanks a lot.
PS:
I am using Django to develop this system.
I have built similar sub-systems before, by hand.
I don't know much about Django, but do some research to see if it has any 'out of the box' or community-developed / open-source add-ons that do what you want.
If you have to do it yourself...
A key-value pair is not going to be enough, but it's close. You only need a simple data structure:
ID (how your code recognizes this property), e.g. UserPopupBackgroundColor.
Property name (what the user sees / how they recognize this property in the UI), e.g. "Popup Background Color".
Optional - Data type. This is essential if you want to do any sensible input validation. E.g. pop-up height should probably expect an integer, with a sensible min/max value on it, whereas an email address is totally different.
Optional - some kind of flag to identify valid properties.
That last flag is a bit of an edge case, but it's useful if you use the subsystem to hold more properties than you want users to have access to. E.g. imagine you want to get a list of all properties and display the list to the user - are there any 'special' ones you need to filter out that they should not see?
You then need somewhere to put the values, and link them to the user:
Row ID / GUID. You can use a unique constraint across the UserID and PropertyID instead if you want, but personally I find a unique row ID is a reliable and flexible approach for most scenarios.
UserID.
PropertyID - refers to ID mentioned above.
PropertyValue
Depending on how serious you need to get, you can dump all the values into the one PropertyValue column (assuming you're persisting this in a database) - which means that column needs to be a string - or you can add a column per data type.
If you want to add a column per data type, don't kill yourself. The most I have ever done is:
PropertyValue_text (text/varchar)
PropertyValue_int (or double)
PropertyValue_DateTime (date/time - surprise!!)
So when I say 'column per data type' I mean per data type your stack needs/wants to handle - not the 'optional' data types you define in the logic - since that data type is partially just about input validation.
Obviously if you use different logical data types, you can map those to data type columns in the database. The reasons for doing this (using different data types in the database) are:
To reduce the amount of casting you need to do (code to database, and vice versa).
To leverage database-level query features, which can be useful. E.g. find email values and verify them; find expired date values; etc.
It takes a bit of work to build all this, but it's powerful once you're set up, because you can add any number of properties. If you are using the 'full' solution with explicit data types, then adding new logical data types isn't too painful once you already have a few set up.
Before you design and build this, think about future reuse, and any way you can package it up for later - or community - use. Remember it impacts all layers (UI, logic and data).
Final tip - when coming up with the property IDs (that the code uses), make them human readable, and use some sort of naming convention so that adding new ones later is easy and follows a predictable path.
Update - Defining Property and PropertyValue in database tables is an obvious way to go. Depending on the situation, you can also define Property in code - especially if you don't add new ones or change existing ones very frequently. Another bonus is that if you're in an MVP situation, you can effectively use the code as a stub, and build out the database/persistence part later.
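To make the shape concrete, here is a minimal sketch of that code-defined approach (all names here are illustrative, not prescriptive): the Property definitions live in code, and each stored value row carries one typed column per stack-level data type:

// Property definitions held in code (per the update above).
// The key is the ID the code uses; name is what the user sees; type drives validation.
var properties = {
    UserPopupBackgroundColor: { name: 'Popup Background Color', type: 'text', userVisible: true },
    UserPopupHeight: { name: 'Popup Height', type: 'int', userVisible: true, min: 100, max: 1200 },
    InternalFeatureFlag: { name: 'Internal Flag', type: 'text', userVisible: false } // filtered out of user-facing lists
};

// A stored value row, linking a user to a property, with one column per data type.
var valueRow = {
    rowId: 'row-00042',            // unique row ID
    userId: 42,
    propertyId: 'UserPopupHeight',
    propertyValue_text: null,
    propertyValue_int: 350,        // the typed column avoids casting and enables DB-level queries
    propertyValue_dateTime: null
};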

How to make JSON loads faster with large data (on HTTP or WebPage)

Requesting the page (over HTTP or as a web page) is very slow, or even crashes, unless I load my JSON with less data. I really need to solve this since sooner or later I will be using large amounts of data frequently. Here is my JSON data:
Notes:
1. The JSON contains only strings and integers.
2. I view my JSON as a tree in JSONView, a plugin for Google Chrome.
I am using Angular and Node.js. Thanks.
A quick summary of all the things that come to my mind:
I had a similar issue once. My solutions may require changes to the UI.
Pagination
I doubt you can display that much data at one time, so the strategy should be dividing your data into small amounts and then only loading more when the client asks for it.
This way, the whole data set is no longer stored in RAM, as it is currently. This is how forums work (only 20 topics at a time).
Just imagine if StackOverflow made you load the whole history of questions on the main page - how many GB would your browser need just for that?
You can use pagination in a classic way (buttons with page numbers, like Google), or in an infinite-scroll way, as you prefer.
For that you need to adapt your API and keep track, at every moment, of the index of the pages you have already loaded in your front end. There are plenty of examples in AngularJS.
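A minimal AngularJS sketch of that idea (the endpoint and the page/limit parameter names are assumptions):

// Controller keeps track of the last page loaded and appends a new page on demand.
app.controller('ListCtrl', function ($scope, $http) {
    $scope.items = [];
    var page = 0, limit = 20, loading = false;
    $scope.loadMore = function () {
        if (loading) { return; }
        loading = true;
        $http.get('/api/items', { params: { page: ++page, limit: limit } })
            .then(function (res) {
                $scope.items = $scope.items.concat(res.data.items); // response shape assumed
                loading = false;
            });
    };
    $scope.loadMore(); // initial page
});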
Only show the beginning of the data
When you look at Facebook comments, you may see a "show more" button. In their case, maybe it's there to avoid breaking the UI, but it can also be used to avoid loading too much data.
You can display only the main lines of your data (titles or similar) and add a button so the user can load more details if they want.
In your data model, the cost seems to be in the second level of "C". Just load data until this second level, and download the remaining part (for that object) only if the user asks for it.
Once again, no need to overload: your client's RAM will be thankful, and your client's mobile 3G too.
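A sketch of that "show more" idea, loading the heavy detail for one object only on request (the endpoint is an assumption):

// Inside a controller with $http injected: fetch the second level on demand.
$scope.showDetails = function (item) {
    if (item.details) { return; } // already loaded
    $http.get('/api/items/' + item.id + '/details').then(function (res) {
        item.details = res.data; // the template can reveal it with ng-if="item.details"
    });
};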
Optimize your data structure
If this is still not enough :
As StefanArya said in a comment, remove the "I" attribute, which is redundant with the JSON key - you can use Object.keys() to get the key names instead.
You also may not need that much precision on your floats.
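For example, instead of repeating the key inside each object as an "I" attribute, the key alone is enough, and the floats can be rounded (a hypothetical before/after):

// Redundant: the key 'C1' is repeated in the 'I' attribute.
var before = { C1: { I: 'C1', V: 1.2345678 } };

// Leaner: drop 'I' (recover the names with Object.keys()) and round the floats.
var after = { C1: { V: 1.23 } };
Object.keys(after); // ['C1']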
If I see any other ideas, I'll edit this post later.

More on ui-grid row filtering

Long version of the question
I have a complex filtering operation that I'm trying to implement for a ui-grid application. Essentially, I have a big grid with lots of columns, each having the typical filter fields at the top of the columns. That works great.
Then I have an extra analysis step that the user can turn on (which involves looking for sets of rows that meet a certain criterion, and then marking rows visible or not based on the results) that MUST be applied logically after all the other filters (i.e. it does not share the 'commutative property' that all the column-top filters have). This extra analysis/filter step is intended to take the row set produced by the column-top filters and then apply this one final, mother-of-all-complex filtration steps.
I am able to get that filtration logic to produce initially correct results - when the user first clicks into the special mode, I perform the analysis and save the necessary info in a hidden column of the grid; and then a RowsProcessor sets the row.visible attribute accordingly. (perhaps I didn't need the RowsProcessor, and maybe I could have just set the visibility in the analysis subroutine.) But whatever - the point is that the rows are marked visible or not. The problem occurs when the user subsequently adds/removes/changes a filter to one of the column top filters. That extra analysis step by necessity needs to be based upon the rows that are visible according to the column-top-filters. And the first time into the special filtering routine, a call to gridApi.core.getVisibleRows() returns exactly that rowset. But after that, the visible rowset is now reduced by the prior execution of the special filtering. But I need to get back to the rowset (i.e. complete recalculation of the row.visible attributes) of just the column-top-filters, without any special final filtration. Is there a way to do that - to effectively undo the filtration effects of the RowsProcessor?
Short version of the question
Is there some way to force recalculation of the visible row set based on the column-top filters, and to do so in a way that gives control back so additional filtration steps can be executed?
I've looked at various things in the APIs but cannot tell which, if any, might help me. For example:
In the ui.grid (Grid) portion of the API, I see many different flavors of refresh methods that may help, but the distinctions between them aren't clear to me. I hope the one I need is not refreshRows(), which is documented as "not functional at present".
Also, the GridRow 'class' seems to have various methods that speak of visibility "overrides" - that sounds like it might be what I need (my final visibility result possibly being an override to the values calculated by the column-top filters). But I tried using those methods instead of directly setting row.visible and I did not see any difference.
Can anyone suggest a direction for me to try?
And even better, is there any written description that provides a high-level overview of ui-grid functionality? I love the package, but using it for the first time, I'm having a hard time with what are probably basic concepts, and possibly I'm thinking about this problem all wrong.
Once again, thanks for any assistance.
Whenever the rowsProcessors run they start by setting all rows to visible, then each rowsProcessor runs in turn with the results from the previous rowsProcessor being passed to the next one. RowsProcessors have a priority, so you can set your processor to run at the appropriate place in the sequence.
It sounds like your problem is that you're using getVisibleRows to calculate what to do, rather than looking at the rows that are passed in to your rows processor, and evaluating based on which rows are visible in that input.
My guess is that you would be better off setting your rowsProcessor to a high (late) priority, and then doing all your calculations within that processor rather than attempting to cache them on the data set itself. If you need to extract the visible rows from the set of renderableRows that are passed to your processor, you could do it with:
var visibleRows = renderableRows.filter( function(row) { return row.visible; });
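Putting that together, a sketch of registering such a processor with a late priority (the priority value is illustrative):

// In gridOptions: register the analysis as a late-running rows processor. The rows
// it receives already reflect the column-top filters, so getVisibleRows() isn't needed.
$scope.gridOptions.onRegisterApi = function (gridApi) {
    gridApi.grid.registerRowsProcessor(function (renderableRows) {
        var visibleRows = renderableRows.filter(function (row) { return row.visible; });
        // ...run the special analysis over visibleRows, setting row.visible = false
        // on rows that fail the criterion...
        return renderableRows;
    }, 200); // higher numbers run later, after the built-in filtering
};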

Seamless Integration with REST API

Many examples on the net show you how to use ng-repeat with in-memory data, but in my case I have a long table with infinite scroll that gets data by sending requests to a REST API (scroll down - fetch some data, scroll down again - fetch some more data, etc.). It works, but I'm wondering how I can integrate that with filters.
Right now I have to call a specific method of the API service that makes a request based on the text in the "search" input box, and then the controller updates $scope.data.
Is it possible to build a custom filter that would do that? Then my view would be utterly decoupled from the service, and I could declaratively tell it how to group, order, and filter data, regardless of whether it's in memory or comes from a remote server - a server that can only serve a limited number of records at a time.
Later I'm also going to need grouping and ordering, and I'm so tempted to download the entire dataset and lock the parts of the app responsible for grouping, searching and ordering (until all data is on the client), but:
a) that dataset is huge (hundreds of thousands of records)
b) nobody wants to deal with cache invalidation headaches
c) doing so feels so damn wrong - you don't really expect me to 'keep' all that data in memory, right?
Can you guys point me to some open-source examples I can steal ideas from?
Basically I need to build a service and filters that let me work with my "pageable" data that comes from the API as if it were in-memory data.
Regardless of how you choose to solve it (there are many ways to infinite-scroll with Angular; here is one: http://binarymuse.github.io/ngInfiniteScroll/), at its current beta version ng-repeat works really badly with large amounts of data - and so do filters. The reason is obvious: watching so much information for changes is a tough job. Moreover, ng-repeat by default will re-draw your complete list every time something changes.
There are many solutions you can explore in this area, here are the ones I found productive:
http://kamilkp.github.io/angular-vs-repeat/#?tab=8
http://www.williambrownstreet.net/blog/2013/07/angularjs-my-solution-to-the-ng-repeat-performance-problem/
https://github.com/allaud/quick-ng-repeat
You should also consider the following, which really helps with large amounts of data.
https://github.com/Pasvaz/bindonce
Updated
I guess you can't really control your server output; filtering and ordering large amounts of data are better off done on the server side.
I was pointing out the links above because, even if you write your own filters (and order-bys), which is quite simple to do - http://jsfiddle.net/gdefpfqL/ - (filter by some company name and then click the "Add More" button to add more items), ordering is virtually impossible if you can't control the data coming from the server - the only option is getting it all, ordering it, and then lazy-loading from the client's memory. So if each of your list items doesn't have many bindings by itself (as in the example I've added) - i.e. the list item is a fairly simple one (for instance, you simply present the results as plain text in <li>{{item.name}}</li>) - then Angular's ng-repeat might work for you. In this case, filters will work as expected - say you filter by searched text:
<li ng-repeat="item in items | filter:searchedText"></li>
Even for new items added after the user has searched for a text, it will still work, because of the magic of binding.
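If the data has to stay on the server, one pragmatic compromise (a sketch; the service endpoint and parameter names are assumptions) is to keep the search input in the view but have the controller translate it into API requests instead of a client-side filter:

// Debounce the search box and turn it into API requests that replace $scope.data.
app.controller('GridCtrl', function ($scope, $http, $timeout) {
    var pending;
    $scope.data = [];
    $scope.$watch('searchedText', function (text) {
        $timeout.cancel(pending); // safe even when pending is undefined
        pending = $timeout(function () {
            $http.get('/api/records', { params: { q: text, offset: 0, limit: 50 } })
                .then(function (res) { $scope.data = res.data; });
        }, 300); // debounce keystrokes
    });
});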

How to 'get' a big list of items from the server?

I have a WCF server and a Silverlight client. The client calls the server to get a list of items.
In some cases the item list is very big, and I want the ability to get the items in more than one call:
Call 1 => get items 1-100
Call 2 (if the user clicks the 'more' button) => get items 101-200
...
Call N => get items 100*(N-1)+1 - 100*N.
How can I do it?
Is there some 'easy' pattern for this?
Thanks.
If you have a standard page size of 100, then have the client pass the page they want to the service. Or get the client to tell the service how big their pages are and which page they want.
You could hold in memory, on the service, which page the client has, and then have them say "Next" - but holding in-memory state in the service on behalf of the client reduces scalability and increases fragility (if that state is lost, the client has to start paging again).
Making the client explicitly say what they want is a more robust and scalable solution, and it has an easy LINQ implementation with Skip and Take.
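The arithmetic behind that is the usual skip/take pairing, sketched here in JavaScript (on the WCF side this maps directly onto LINQ's Skip and Take):

// Compute the paging parameters the service expects for a given page.
function pageParams(page, pageSize) {
    return { skip: (page - 1) * pageSize, take: pageSize };
}
pageParams(1, 100); // { skip: 0, take: 100 }   -> items 1-100
pageParams(2, 100); // { skip: 100, take: 100 } -> items 101-200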
As Richard mentions, paging is a common option. Also, returning the results as a stream (not a buffered byte[] array but an actual stream - WCF has some caveats around the use of streams) is generally going to be the most efficient. And as marc_s noted, Silverlight local storage isn't huge, so keep that handicap in mind.
The chance of a user 'consuming' more than 100 items in one go is very small, even if the items have very little detail. Maybe add navigation (categories etc.) as filters to the data, so the user only gets the 20 or so items they are actually interested in. Tree views can be quite handy for breaking lists up into smaller lists that are more relevant to users, but there are many ways of doing this...
