Achieve good paging using objectify - google-app-engine

I'm using Objectify cursors to achieve basic paging, essentially creating a "More" button. How do you best achieve paging with Objectify when building links that let users go forward and backward? Something more like a page list:
1, 2, 3, 4, more

Your best bet is probably to fetch the keys for the entire result set and stash them in a session or in JavaScript. Each next/previous click can then load the corresponding items by id. Loading by id is very cheap. You can also cache the full query results in memcache as long as they're not too large, but that will depend on what kind of objects you're fetching.
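A minimal sketch of that approach with Objectify (the Item entity, its "createdAt" ordering, and the page size are illustrative assumptions, not part of the answer):

```java
import static com.googlecode.objectify.ObjectifyService.ofy;

import com.googlecode.objectify.Key;
import java.util.Collection;
import java.util.List;

public class KeyPager {

    // One keys-only query for the whole result set; keys-only fetches are
    // far cheaper than fetching full entities. Stash the result in the
    // user's session (or ship it to the client).
    public static List<Key<Item>> loadAllKeys() {
        return ofy().load().type(Item.class).order("createdAt").keys().list();
    }

    // Page N is just a slice of the cached key list, loaded by id
    // (a cheap batch get).
    public static Collection<Item> loadPage(List<Key<Item>> allKeys,
                                            int page, int pageSize) {
        int from = Math.min(page * pageSize, allKeys.size());
        int to = Math.min(from + pageSize, allKeys.size());
        return ofy().load().keys(allKeys.subList(from, to)).values();
    }
}
```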

You can use cursors to page one page forward and backward, via FetchOptions.startCursor(..) and FetchOptions.endCursor(..).
To create more direct paging links you will have to use FetchOptions.limit(..) and FetchOptions.offset(..).
Note that offset(..) can be very costly, as the datastore still fetches all entities that come before the given page. So, depending on usage and the size of the whole result set, you might be better off preloading and caching a set of keys. Or better, replace paging with search.
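For the one-page-forward flavor, a hedged sketch with Objectify (Item is again a hypothetical registered entity; Objectify's startAt()/limit() wrap the low-level FetchOptions mentioned above):

```java
import static com.googlecode.objectify.ObjectifyService.ofy;

import com.google.appengine.api.datastore.Cursor;
import com.google.appengine.api.datastore.QueryResultIterator;
import com.googlecode.objectify.cmd.Query;
import java.util.List;

public class CursorPager {

    // Loads one page into 'out' and returns a web-safe cursor the client
    // can send back to get the next page. Pass null for the first page.
    public static String loadPage(String webSafeCursor, List<Item> out) {
        Query<Item> query = ofy().load().type(Item.class).limit(10);
        if (webSafeCursor != null) {
            query = query.startAt(Cursor.fromWebSafeString(webSafeCursor));
        }
        QueryResultIterator<Item> it = query.iterator();
        while (it.hasNext()) {
            out.add(it.next());
        }
        return it.getCursor().toWebSafeString();
    }
}
```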

Related

Dynamic table pagination based on available space

I have a table that I fill from a REST API using keyset-based pagination. My pagination control contains 5 buttons representing pages.
I want to use all the available space to display the maximum number of rows in the table, while making the page smaller or larger should not lead to scrolling inside the table. So I do not mean the usual responsive behavior.
I tried a solution, but it caused a series of follow-on problems. I think it would be better to go back and look for a better idea from the beginning. Do you have any solutions based on your experience?
I am sorry, I do not fully understand your problem, but maybe this could help you.
There is a Web API called the Intersection Observer API. With this observer, you can detect which elements are currently visible in your device's viewport. Maybe this can help you.

What is the best way (Least Read Operations) to do autocomplete on Google App Engine Using Objectify

I am currently using AJAX to autocomplete email addresses and would like to find out the best way to do this without too many read operations. Thanks!
The best way to do this kind of operation is the following approach:
Use full text search:
https://cloud.google.com/appengine/docs/java/search/
When creating a document to search on, you could tokenize the email id. For example, if you have foobar@baz.com, you could tokenize it into the prefixes f, fo, foo, ..., foobar and save them into a text field.
Then use index.search to query for the results.
Every successful lookup can then be cached for, say, 2 hours (you can change this per your requirements).
Any time you update the model (add/update/remove entries), delete the relevant memcache entries or flush memcache, preferably using the datastore callbacks.
https://cloud.google.com/appengine/docs/java/datastore/callbacks
Please note that the tokenizing and adding of the document could be processed in a task queue to fit the "GAE way of doing things".
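A hedged sketch of the tokenize-and-index steps with the Search API (the index name "email-autocomplete" and field name "prefixes" are illustrative choices, not part of the answer):

```java
import com.google.appengine.api.search.Document;
import com.google.appengine.api.search.Field;
import com.google.appengine.api.search.Index;
import com.google.appengine.api.search.IndexSpec;
import com.google.appengine.api.search.Results;
import com.google.appengine.api.search.ScoredDocument;
import com.google.appengine.api.search.SearchServiceFactory;

public class EmailAutocomplete {

    private static final Index INDEX =
            SearchServiceFactory.getSearchService().getIndex(
                    IndexSpec.newBuilder().setName("email-autocomplete").build());

    // "foobar@baz.com" -> "f fo foo foob fooba foobar ..."
    static String prefixTokens(String email) {
        StringBuilder tokens = new StringBuilder();
        for (int i = 1; i <= email.length(); i++) {
            tokens.append(email, 0, i).append(' ');
        }
        return tokens.toString().trim();
    }

    // Index one email address; ideally invoked from a task queue,
    // e.g. triggered by a datastore callback on your model.
    public static void indexEmail(String email) {
        Document doc = Document.newBuilder()
                .setId(email)
                .addField(Field.newBuilder()
                        .setName("prefixes")
                        .setText(prefixTokens(email))
                        .build())
                .build();
        INDEX.put(doc);
    }

    // Matches every document whose prefix list contains what was typed.
    public static Results<ScoredDocument> lookup(String typedSoFar) {
        return INDEX.search("prefixes:" + typedSoFar);
    }
}
```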
Also, as a footnote, you could try implementing a client-side caching mechanism using HTTP cache control + ETags. I have not implemented such a solution myself, so others could pitch in with their experience implementing it.
https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/http-caching?hl=en

Seamless Integration with REST API

Many examples on the net show you how to use ng-repeat with in-memory data, but in my case I have a long table with infinite scroll that gets its data by sending requests to a REST API (scroll down - fetch some data, scroll down again - fetch some more data, and so on). It works, but I'm wondering how I can integrate that with filters.
Right now I have to call a specific method of the API service that makes a request based on the text in the "search" input box, and then the controller updates $scope.data.
Is it possible to build a custom filter that would do that? Then my view would be utterly decoupled from the service, and I could declaratively tell it how to group, order and filter data, regardless of whether it's in memory or comes from a remote server that can serve only a limited number of records at a time.
Later I'm going to need grouping and ordering as well. I'm tempted to download the entire dataset and lock the parts of the app responsible for grouping, searching and ordering until all the data is on the client, but:
a) that dataset is huge (hundreds of thousands of records)
b) nobody wants to deal with cache invalidation headaches
c) doing so feels so damn wrong; you don't really expect me to keep all that data in memory, right?
Can you guys point me to some open-source examples where I can steal ideas from?
Basically, I need to build a service and filters that let me work with my "pageable" data from the API as if it were in-memory data.
Regardless of how you choose to solve it (there are many ways to infinite-scroll with Angular; here is one: http://binarymuse.github.io/ngInfiniteScroll/), as of its latest beta version ng-repeat performs really badly with large amounts of data - and so do filters. The reason is obvious: polling so much information for changes is a tough job. Moreover, ng-repeat by default will re-draw your complete list every time something changes.
There are many solutions you can explore in this area, here are the ones I found productive:
http://kamilkp.github.io/angular-vs-repeat/#?tab=8
http://www.williambrownstreet.net/blog/2013/07/angularjs-my-solution-to-the-ng-repeat-performance-problem/
https://github.com/allaud/quick-ng-repeat
You should also consider the following, which really helps with large amounts of data.
https://github.com/Pasvaz/bindonce
Updated
I guess you can't really control your server's output, since filtering and ordering large amounts of data are better done on the server side anyway.
I pointed out the links above because, even though writing your own filters (and order-bys) is quite simple - http://jsfiddle.net/gdefpfqL/ - (filter by some company name and then click the "Add More" button to add more items), ordering is virtually impossible if you can't control the data coming from the server: the only option is getting it all, ordering it, and then lazy-loading from the client's memory. So if each of your list items doesn't have many bindings by itself (as in the example I've added) and the list item is a fairly simple one (for instance, you simply present the results as plain text in <li>{{item.name}}</li>), then Angular's ng-repeat might work for you. In this case, filters will work as expected - say you filter by the searched text:
<li ng-repeat="item in items | filter:searchedText"></li>
Even for new items added after the user has searched for a text, it will still work, thanks to the magic of binding.

Showing 1 million rows in a browser

Our utility has one single table with 10 to 50 million rows. There may be a case where we need to show 50 million rows in a single HTML client page. To show the rows in the browser we use jQuery in the UI.
To retrieve the rows we use Hibernate, with Spring for MVC. I am looking for the best practice for retrieving the rows and showing them in the UI. Should I retrieve rows in bulk, a thousand or two thousand at a time, in Hibernate and buffer them to the web client, or is there a better practice?
The best practice is not to do this. It will explode the browser memory and rendering engine, and will take too much time to load.
Add a search form to your webapp, make the end user search for what he's interested in, and only display the first N search results, just like Google does.
Nobody is able to do anything meaningful with 50 million rows without searching anyway.
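If you go that route, here is a sketch with Spring MVC and Hibernate (the Row entity, the query, and the /search mapping are hypothetical; setFirstResult/setMaxResults keep each response bounded):

```java
import java.util.List;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;

@Controller
public class RowSearchController {

    private static final int PAGE_SIZE = 100; // never ship more than one page

    @Autowired
    private SessionFactory sessionFactory;

    @RequestMapping("/search")
    public String search(@RequestParam("q") String q,
                         @RequestParam(value = "page", defaultValue = "0") int page,
                         Model model) {
        Session session = sessionFactory.getCurrentSession();
        List<Row> rows = session
                .createQuery("from Row r where r.name like :q order by r.id", Row.class)
                .setParameter("q", q + "%")
                .setFirstResult(page * PAGE_SIZE) // skip earlier pages
                .setMaxResults(PAGE_SIZE)         // cap the result size
                .list();
        model.addAttribute("rows", rows);
        return "rows"; // a view that renders just this one page
    }
}
```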
I think you should use scroll pagination (when the user gets near the bottom of the page, make an AJAX call and load more data).
Just as an example, a quick Google search will turn up an example & demo.
And if your data is tabular, you can use jQGrid.
Handling larger quantities of data in an application must be done via virtualization. While it's true that the user will be overwhelmed by millions of records, it's not exactly true that they can't do anything with them, nor that such quantities of data are unfathomable.
In practice, and depending on what you're doing, you'll note that this limit crops up on you with just thousands of records, which frankly is very little data. Data-centric apps just need a different approach altogether if they are going to work in a browser and perform well.
The way we do this is quite simple but not all that straightforward.
It helps to decide on a fixed height, because you will need to know the max height of a scrollable container. You then render into this container the subset of records that can be visible at any given moment and position them accordingly (based on scroll events). There are more or less efficient ways of doing this.
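The windowing arithmetic behind this is language-agnostic; here is a sketch (in Java, to match the rest of this page) assuming a fixed row height and an illustrative overscan parameter:

```java
// Sketch of the arithmetic behind a virtualized list: given a scroll
// offset, compute which slice of rows should exist in the DOM at all.
public final class VirtualWindow {

    private final int rowHeightPx;      // fixed height per row (the key assumption)
    private final int viewportHeightPx; // height of the scrollable container
    private final int totalRows;
    private final int overscan;         // extra rows rendered above/below the viewport

    public VirtualWindow(int rowHeightPx, int viewportHeightPx,
                         int totalRows, int overscan) {
        this.rowHeightPx = rowHeightPx;
        this.viewportHeightPx = viewportHeightPx;
        this.totalRows = totalRows;
        this.overscan = overscan;
    }

    // Total height the scroll container must pretend to have, even though
    // most rows never exist in the DOM.
    public long scrollHeightPx() {
        return (long) totalRows * rowHeightPx;
    }

    // First row to actually render for a given scroll offset.
    public int firstRow(long scrollTopPx) {
        return (int) Math.max(0, scrollTopPx / rowHeightPx - overscan);
    }

    // One past the last row to render; everything outside [firstRow, lastRow)
    // is culled, which is what keeps memory and layout costs flat.
    public int lastRow(long scrollTopPx) {
        long firstVisible = scrollTopPx / rowHeightPx;
        int visibleCount = (int) Math.ceil(viewportHeightPx / (double) rowHeightPx);
        return (int) Math.min(totalRows, firstVisible + visibleCount + overscan);
    }
}
```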
The end goal remains the same. You basically need to cull everything which isn't directly visible on screen in such a way that the browser isn't paying the cost of memory and layout logic for the app to be responsive. This is common practice in game development, only the world that is visible right now on screen is ever present at any given moment. So that's what you need to do to get large quantities of stuff to behave well.
In the context of a browser, anything which contributes to memory use and layout/render cost needs to go away if it isn't absolutely vital.
You can also stagger and smear recalculations so that you don't incur the cost of whatever is causing the app to degrade on every small update. The user can afford to wait one second if the app remains responsive.

CakePHP JSON API containable question

Right now I have controllers/actions that do standard retrieval of model/associated model data. My actions currently just pass the variables to the views, which pick and choose the values to display to the user via HTML.
I want to extend and reuse these functions for the case where a mobile device makes a call to grab the JSON version of the data. I am using Router::parseExtensions("json") and that all works fine.
My main question is how to handle data size. Right now even a User model has many, many associated models and recursive relationships. As of now I am not using contain to cut out the unnecessary data before I pass it to the view, because the view takes only the elements it wants, so it doesn't affect the HTML size.
But for my JSON views, I just format the data and return the whole thing, which makes it extremely large. My current thought is that I just need to special-case it to use Containable for JSON, but I was hoping there was a more elegant solution. Or is this the Cake-y way to do it?
Thanks!
Actually, using Containable and fine-tuning your query is a very elegant solution. Even if your view does not use the actual data, you put unnecessary load on your database by fetching data / joins you don't need.
Try to limit your query and relations by using both Containable and fine-tuning the relationships in your models and paginator calls.
It is also recommended that you move most of your find calls to the model layer. They will be reusable, easier to test, and overall more Cake-ish.
