What are the best practices for pre-fetching in backbone.js?

What's the best way to pre-fetch the next page in backbone.js?
Is there a built-in mechanism to do that, or do I have to take care of it myself by making Ajax calls and storing the results?
Also, is there a way to preload an entire page, as in jQuery Mobile (http://jquerymobile.com/demos/1.2.0/docs/pages/page-cache.html)?

There is no built-in support for such a thing. It depends on your use case, but you could do a number of things:
1) Use setTimeout() to wait a short time before fetching the data you might be needing shortly. (Probably not a good solution.)
2) Set up an event handler to fetch the data on a specific event, or something similar:
$('#my-button').on('mouseenter', function() {
    // fetch data
});
To fetch the data you can use the fetch() function on a Backbone model or collection, which returns a jqXHR (or you can use a plain $.ajax() call). You can then wait and see whether it succeeded or failed:
var fetch = myModel.fetch();
fetch.done(function(data) {
    // Do something with the data, e.g. store it in a variable that you can use later
})
.fail(function(jqXHR) {
    // Handle the failed Ajax call
    // Use the jqXHR to get the response text and/or response status to do something useful
});
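Building on the second option, a minimal sketch of a prefetch cache keyed by URL (the prefetch/getPrefetched helpers and nextPageUrl are assumptions for illustration, not Backbone APIs):
var prefetchCache = {};

// Kick off a fetch for a URL we expect to need soon and remember the promise.
function prefetch(url) {
    if (!prefetchCache[url]) {
        prefetchCache[url] = $.ajax({ url: url, dataType: 'json' });
    }
    return prefetchCache[url];
}

// When the page is actually needed, reuse the pending or settled promise.
function getPrefetched(url) {
    return prefetchCache[url] || prefetch(url);
}

$('#next-page').on('mouseenter', function() {
    prefetch(nextPageUrl); // warm the cache while the user hovers
});

$('#next-page').on('click', function() {
    getPrefetched(nextPageUrl).done(function(data) {
        // render the next page from the (possibly prefetched) data
    });
});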

There is no built-in support, but it is actually easy to add. Please refer to the concept of a view manager; it is able to handle both "view-keeping" tasks and transitions.
In short, the concept is: the view manager is a component responsible for switching from one application view to another. It disposes of the existing view, which prevents zombie views and memory leaks, and it can also handle transitions between views.
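For illustration, a minimal sketch of such a view manager (the show() method, the #app-container element, and OfferListView are assumptions; Backbone ships none of this):
var ViewManager = {
    currentView: null,

    // Swap the view mounted in the container, disposing of the old one
    // so its event bindings don't linger as "zombies".
    show: function(view) {
        if (this.currentView) {
            this.currentView.remove(); // unbinds events and removes the element
        }
        this.currentView = view;
        $('#app-container').html(view.render().el);
        return view;
    }
};

// Usage: ViewManager.show(new OfferListView());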

Here is how I handle the loading of pages into an "endless scrolling" list.
Make your backend pagination-aware
First of all you need a DB backend which is capable of handling page load requests.
As an example, refer to my modelloader project on GitHub, which provides a small CoffeeScript-based framework integrated into a Node.js/Mongoose environment; the repository contains additional information and samples.
Here are the most important points:
Your backend should support the following pagination features
Each request returns a partial response only, limited to, for example, 20 records (the size of a page).
By default the last JSON record entry returned by a request contains additional technical and meta information about the request, necessary for allowing consumers to implement paging:
{
    _maxRec: "3",
    _limit: "20",
    _offset: "0"
}
_maxRec lists the total number of records in the collection
_limit lists the maximum number of records given back per request
_offset tells you which set of records was passed back, i.e. an _offset of 200 means the result list skipped the first 200 records and presents records 201-220
The backend should support the following pagination control parameters:
http(s)://<yourDomainName>/<versionId>/<collection>?offset=<number>
Use offset to skip a number of records; for example, with a limit of 20 you would send a first request with offset=0, then offset=20, then offset=40, and so on until you reach _maxRec.
In order to reduce DB activity you should provide a possibility to skip the _maxRec calculation on subsequent requests:
http(s)://<yourDomainName>/<versionId>/<collection>?maxRec=<number>
By passing in a maxRec parameter (normally the one obtained from an earlier paging request), the request handler bypasses the database's count-objects statement, which saves one DB operation (a performance optimization). The passed-in number is handed back via the _maxRec entry. Normally a consumer fetches _maxRec in the first request and passes it back in subsequent requests, resulting in faster data access.
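For illustration, a minimal sketch of such a request handler in Express/Mongoose terms (the route, the Item model, and the meta-record shape are assumptions based on the description above, not the actual modelloader code):
var express = require('express');
var mongoose = require('mongoose');

var app = express();

// A stand-in model; the real project would define its own schema.
var Item = mongoose.model('Item', new mongoose.Schema({ name: String }));

var PAGE_SIZE = 20; // corresponds to _limit

app.get('/v1/items', function(req, res) {
    var offset = parseInt(req.query.offset, 10) || 0;

    // Reuse the caller-supplied total if present; otherwise count once.
    var total = req.query.maxRec
        ? Promise.resolve(parseInt(req.query.maxRec, 10))
        : Item.countDocuments().exec();

    Promise.resolve(total).then(function(maxRec) {
        return Item.find().skip(offset).limit(PAGE_SIZE).exec().then(function(records) {
            // Append the technical/meta record as the last entry, as described above.
            res.json(records.concat([{
                _maxRec: String(maxRec),
                _limit: String(PAGE_SIZE),
                _offset: String(offset)
            }]));
        });
    });
});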
Fire off Backbone model requests when necessary
So now you have to implement, on the Backbone side, the firing of page-loading requests when necessary.
In the example below we assume a Backbone.View which has a list loaded into a jquery.tinyscrollbar-based HTML element. The list contains the first 20 records, loaded via the following URL when it is built up initially:
http(s)://<yourDomainName>/<versionId>/<collection>?offset=0
The view would in this case listen to the following scrolling events:
events:
    'mousewheel': 'checkScroll'
    'mousemove': 'checkScroll'
The goal is that as soon as the user has scrolled down to the bottom of the scrollable list (e.g. reaches a point 30px above the scrollable list's end), a request is fired to load the next 20 entries. The following code sample describes the necessary steps:
checkScroll: () =>
    # We calculate the actual scroll point within the list
    ttop = $(".thumb").css("top")
    ttop = ttop.substring(0, ttop.length - 2)
    theight = $(".thumb").css("height")
    theight = theight.substring(0, theight.length - 2)
    triggerPoint = 30 # 30px from the bottom
    fh = parseFloat(ttop) + parseFloat(theight) + triggerPoint
    # The if will turn true if the end of the scrollable list
    # is within 30 pixels, and no other loading request is
    # currently ongoing
    if fh > @scrollbarheight && !@isLoading && @endLessScrolling
        # Make sure that during the fetch phase no other request intercepts
        @isLoading = true
        log.debug "BaseListView.checkscroll " + ttop + "/" + theight + "/" + fh
        # So let's increase the offset by the limit
        # BTW these two variables (limit, offset) will be retrieved
        # and updated by the render method when it gets back
        # the response of the request (not shown here)
        skipRec = @offset + @limit
        # Now set the model URL and trigger the fetch
        @model.url = @baseURL + "?offset=" + skipRec + "&maxRec=" + @maxRec
        @model.fetch
            update: true
            remove: false
            merge: false
            add: true
            success: (collection, response, options) =>
                # Set isLoading back to false, as well as
                # the URL back to the original one
                @isLoading = false
                @model.url = @baseURL
            error: (collection, xhr, options) =>
                @isLoading = false
                @model.url = @baseURL
The render method of the view gets the response back and updates the scrollable list, which grows in size and lets the user continue scrolling down through the newly loaded entries.
This loads all the data nicely, in a paged manner.
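The render/update side is not shown above; a rough sketch of how the trailing meta record might be consumed to keep offset, limit, and maxRec current (plain JavaScript, all names assumed):
// Intended to live on the view (so `this` is the view); called with the
// parsed response array, whose last entry is the meta record.
function updatePagingState(records) {
    var meta = records[records.length - 1];
    this.offset = parseInt(meta._offset, 10);
    this.limit = parseInt(meta._limit, 10);
    this.maxRec = parseInt(meta._maxRec, 10);
    // Allow further load requests only while records remain on the server.
    this.endLessScrolling = (this.offset + this.limit) < this.maxRec;
    return records.slice(0, records.length - 1); // the actual data rows
}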

Related

Shrine clear cached images on persistence success

Background
I am using file system storage with the Shrine::Attachment module in a model (my_model) with ActiveRecord (Rails). I am also using it in a direct upload scenario, so I need the response from the file upload (saved to cache).
my_model.rb
class MyModel < ApplicationRecord
  include ImageUploader::Attachment(:image) # adds an `image` virtual attribute
  # omitted relations & code...
end
my_controller.rb
def create
  @my_model = MyModel.new(my_model_params)
  # currently creating derivatives & persisting all in one go
  @my_model.image_derivatives! if @my_model.image
  if @my_model.save
    render json: { success: "MyModel created successfully!" }
  else
    @errors = @my_model.errors.messages
    render 'errors', status: :unprocessable_entity
  end
end
Goal
Ideally I want to clear only the cached file(s) I currently have hold of in my create controller (the derivatives and the original file) as soon as they have been persisted to permanent storage.
What is the best way to do this for scenario A (synchronous) and scenario B (asynchronous)?
What I have considered/tried
After reading through the docs I have noticed 3 possible ways of clearing cached images:
1. Run a rake task to clear cached images.
I really don't like this, as I believe the cached files should be cleaned once the file has been persisted, and not left to an admin task (cron job) that can't be tested with an image persistence spec.
# FileSystem storage
file_system = Shrine.storages[:cache]
file_system.clear! { |path| path.mtime < Time.now - 7*24*60*60 } # delete files older than 1 week
2. Run Shrine.storages[:cache] in an after block
Is this only for background jobs?
attacher.atomic_persist do |reloaded_attacher|
  # run code after attachment change check but before persistence
end
3. Move the cache file to permanent storage
I don't think I can use this, as my direct upload occurs in two distinct parts: 1) immediately upload the attached file to the cache store, then 2) save it to the newly created record.
plugin :upload_options, cache: { move: true }, store: { move: true }
Are there better ways of clearing promoted images from cache for my needs?
Synchronous solution for the single image upload case:
def create
  @my_model = MyModel.new(my_model_params)
  image_attacher = @my_model.image_attacher
  image_attacher.create_derivatives # create the different sized images
  image_cache_id = image_attacher.file.id # save the cache file id, as it will be lost in the next step
  image_attacher.record.save(validate: true) # promote the original file to permanent storage
  # Only clear the cached image that was used to create the derivatives (if other
  # images are being processed and cached, we don't want to blow them away)
  Shrine.storages[:cache].delete(image_cache_id)
end

NSURLCache cachedresponseforrequest no data

Why is responseCache nil? I ran this POST and really do get the responseObject from the cache. How can I get the responseCache?
manager.requestSerializer = [AFJSONRequestSerializer serializer];
manager.responseSerializer = [AFJSONResponseSerializer serializer];
manager.requestSerializer.cachePolicy = NSURLRequestReturnCacheDataElseLoad;
[manager POST:URL parameters:paramdic progress:^(NSProgress * _Nonnull uploadProgress) {
} success:^(NSURLSessionDataTask * _Nonnull task, id _Nullable responseObject) {
    NSData *data = [NSJSONSerialization dataWithJSONObject:responseObject options:NSJSONWritingPrettyPrinted error:nil];
    NSURLCache *cache = [NSURLCache sharedURLCache];
    NSCachedURLResponse *responseCache = [cache cachedResponseForRequest:task.originalRequest];
    NSCachedURLResponse *response = [[NSCachedURLResponse alloc] initWithResponse:task.response data:data userInfo:nil storagePolicy:NSURLCacheStorageAllowed];
    [cache storeCachedResponse:response forRequest:task.originalRequest];
} failure:^(NSURLSessionDataTask * _Nullable task, NSError * _Nonnull error) {
    NSLog(@"%@", error);
}];
There are three reasons it is nil at that point:
POST requests are not cached by any iOS/OS X networking code because they are not guaranteed to be idempotent (i.e. they can have side effects, such as storing data on the server). The only way a POST request will ever get stored in an NSURLCache is if you explicitly add it.
POST requests are not cached because NSURLCache uses the URL as the lookup key. Because the URL does not (cannot) include the POST body, the response stored for one POST to a URL would be returned for any other POST to that same URL, which is almost certainly not what you want. So if you do add it, you'll have to add custom rewriting of the URL on its way into the cache, plus custom lookup code, to make the URLs unique enough based on specific POST body fields or whatever.
The cache is highly asynchronous, so cached data would not necessarily be available when the request's completion handler runs even if this were a GET request.
This is not necessarily a complete set of reasons. :-)
The cache is intended to reduce network traffic. You shouldn't generally consult it yourself. The normal lookup path used by NSURLSession et al performs checks for certain protocol caching policies (e.g. response expiration) that would not be performed by merely asking the cache if it has a response for a particular key.
If you need a general mechanism for storing a single response for later use by your app (rather than keeping it in memory), you should do so in your own internal dictionary (or, if the response is large, by using a download task and moving the file into a temporary folder in your app's sandbox that you purge on every launch).

My flux store gets re-instantiated on reload

Okay. I'm kind of new to React and I'm having a #1 major issue. I can't really find any solution out there.
I've built an app that renders a list of objects. The list comes from my mock API for now. The list of objects is stored inside a store, and the store action that fetches the objects is triggered by the components.
My issue is with showing these objects. When a user clicks show, it renders a page with details on the object. Store-wise this means firing a getSpecific function that retrieves the object from the store, based on an ID.
This is all fine; the store still has the objects. Until I reload the page. That is when the store gets wiped and a new instance is created (this is my guess). The store is now empty, and getting that specific object is now impossible (in my current implementation).
So, I read somewhere that this is by design. Is the solution to:
Save the store in local storage, to keep the data?
Make the API call again and get all the objects once again?
And in case 2, when/where is this supposed to happen?
How should a store make sure it always has the expected data?
Any hints?
Some of the implementation:
//List.js
componentDidMount() {
    // The fetchOffers action will trigger a change event,
    // which will trigger the listener added in componentWillMount
    OfferActions.fetchOffers();
}
componentWillMount() {
    // Listen for changes in the store
    offerStore.addChangeListener(this.retrieveOffers);
}
retrieveOffers() {
    this.setState({
        offers: offerStore.getAll()
    });
}
.
//OfferActions.js
fetchOffers() {
    let url = 'http://localhost:3001/offers';
    axios.get(url).then(function (data) {
        dispatch({
            actionType: OfferConstants.RECIVE_OFFERS,
            payload: data.data
        });
    });
}
.
//OfferStore.js
var _offers = [];
receiveOffers(payload) {
    _offers = payload || [];
    this.emitChange();
}
handleActions(action) {
    switch (action.actionType) {
        case OfferConstants.RECIVE_OFFERS:
            this.receiveOffers(action.payload);
            break;
    }
}
getAll() {
    return _offers;
}
getOffer(requested_id) {
    var result = this.getAll().filter(function (offer) {
        return offer.id == requested_id;
    });
    return result[0]; // without this return, getOffer always yields undefined
}
.
//Show.js
componentWillMount() {
    this.state = {
        offer: offerStore.getOffer(this.props.params.id)
    };
}
That is correct: flux/redux stores, like any other JavaScript objects, do not survive a refresh. During a refresh you are resetting the memory of the browser window.
Both of your approaches would work, however I would suggest the following:
Save to local storage only information that is semi persistent such as authentication token, user first name/last name, ui settings, etc.
During app start (or component load), load any auxiliary information such as sales figures, message feeds, and offers. This information generally changes quickly and it makes little sense to cache it in local storage.
For 1. you can utilize the redux-persist middleware. It lets you save to and retrieve from your browser's local storage during app start. (This is just one of many ways to accomplish this.)
For 2. your approach makes sense: load the required data asynchronously in componentWillMount.
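For illustration, a minimal redux-persist setup might look like this (a sketch assuming redux-persist v6; rootReducer and the whitelisted slice names are placeholders):
import { createStore } from 'redux';
import { persistStore, persistReducer } from 'redux-persist';
import storage from 'redux-persist/lib/storage'; // defaults to localStorage

import rootReducer from './reducers'; // your combined reducers

const persistConfig = {
    key: 'root',
    storage,
    whitelist: ['auth', 'uiSettings'] // persist only semi-permanent state
};

const store = createStore(persistReducer(persistConfig, rootReducer));
const persistor = persistStore(store); // rehydrates the store on app start

export { store, persistor };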
Furthermore, regarding staying "up-to-date" with data: this entirely depends on your application's needs. A few ideas to help you get started exploring your problem domain:
With each request to get offers, also save a timestamp. Have the application decide when a timestamp is "too old" and request again (see the sketch after this list).
Implement real-time communication, for example socket.io, which pushes the data to the client instead of the client requesting it.
Request the data at an interval suitable to your application. You could pass along the last time you requested the information, and the server could decide whether there is new data available, or return an empty response, in which case you display the existing data.
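A minimal sketch of the timestamp idea from the first bullet (the _lastFetched, MAX_AGE_MS, and offersAreStale names are assumptions layered onto the store shown in the question):
//OfferStore.js additions
var _lastFetched = null;
var MAX_AGE_MS = 5 * 60 * 1000; // treat data older than 5 minutes as stale

receiveOffers(payload) {
    _offers = payload || [];
    _lastFetched = Date.now(); // remember when we last got fresh data
    this.emitChange();
}

offersAreStale() {
    return _lastFetched === null || (Date.now() - _lastFetched) > MAX_AGE_MS;
}

//List.js: only hit the API when the cached data is missing or too old
componentDidMount() {
    if (offerStore.offersAreStale()) {
        OfferActions.fetchOffers();
    }
}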

Implement "load more" using threads.list() combined with 'q' = older/newer

On first sign-up I do a full sync for the last 50 threads in the label with id INBOX.
How should I go about implementing a "load more" feature, where the user asks to fetch the next 50 threads? As far as I can see there are 2 possible ways to go about it:
Cache nextPageToken from the initial full sync and use that to load the next 50 (maxResults = 50)
Use the q parameter with older and newer - this works well for dates, however I could not find out whether it works for absolute points in time.
Neither of them works for my use case, in which I specifically would like to get the next 50 older threads, or all threads newer than a given point in time.
I would like to do this because if I fetch threads per label, and in my data model labels and threads have a many-to-many relationship, I will have date gaps in the different labels.
Here is an example: I go into a label that has messages from 2009, I fetch them. They are also in Inbox so if I go there I will see emails from October 2014 and then suddenly September 2009. My solution would be to fetch threads from All Mail newer than the oldest thread whenever I do load more or initial full sync to make sure there are no date gaps.
Also to save bandwidth, is it possible to include in the request the thread ids I already have, to not be returned in the response?
I don't think you need to overcomplicate things. If you don't do any newer/older or specific sorting, the messages are ordered by date descending. For the pages I created a simple array to hold all the page tokens. Quite easy and it works well (AngularJS example):
/*
 * Get next page
 */
$scope.fetchNextPage = function() {
    if ($scope.nextPageToken) {
        $scope.page++;
        $scope.pageTokenArray[$scope.page] = $scope.nextPageToken;
        $scope.targetPage = $scope.pageTokenArray[$scope.page];
        $scope.fetch(false);
        // As we have a next page, always allow
        // going back
        $scope.lastBtnDisabled = false;
    }
};

/*
 * Get previous page
 */
$scope.fetchLastPage = function() {
    if ($scope.page > 0) {
        $scope.page--;
        $scope.targetPage = $scope.pageTokenArray[$scope.page];
        $scope.fetch(false);
        // When page is 0 now, disable the
        // previous-page button
        if ($scope.page == 0) {
            $scope.lastBtnDisabled = true;
        } else {
            $scope.lastBtnDisabled = false;
        }
    }
};
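The $scope.fetch function itself is not shown above; a rough sketch of what it might look like with the Gmail JavaScript client (assuming gapi is loaded and authorized, and that $scope.targetPage holds the token chosen above):
$scope.fetch = function(reset) {
    var params = {
        userId: 'me',
        labelIds: 'INBOX',
        maxResults: 50
    };
    if (!reset && $scope.targetPage) {
        params.pageToken = $scope.targetPage; // resume at the selected page
    }
    gapi.client.gmail.users.threads.list(params).then(function(response) {
        $scope.$apply(function() {
            $scope.threads = response.result.threads || [];
            // Remember the token for the page after this one (if any).
            $scope.nextPageToken = response.result.nextPageToken || null;
        });
    });
};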

Asynchronously pull data from datastore and draw map

I have a Spring MVC 3 app (that uses JSP) running on Google App Engine and saving information in the Datastore. I'm using the Google Maps API v3 to project some of the data on maps by drawing shapes, colouring, etc. My database will potentially hold millions of entries.
I was wondering what the best way is to keep pulling data from the datastore and projecting it on the map until there are no more database entries left to project. I need to do this to avoid hitting the 30 second limit (and getting a DeadlineExceededException), but also for good user experience.
Is it worth using GWT?
Any advice would be great.
Thanks!
You could use a cursor similar to the pagination technique described here:
Pagination in Google App Engine with Java
When your page with the map loads, have it make an AJAX request with a blank cursor parameter. The request handler would fetch a small number of entities, then return a response containing them and a cursor (if there are entities remaining).
From the client javascript, after displaying the items on the map, if there is a cursor in the response start a new request with the cursor as an argument. In the request handler if a cursor is provided, use it when making the query.
This will set up a continuous loop of AJAX requests until all items have been fetched and displayed on the map.
Update:
You could write a service which returns JSON something like this:
{
    items:
    [
        { lat: 1.23, lon: 3.45, abc: 'def' },
        { lat: 2.34, lon: 4.56, abc: 'ghi' }
    ],
    cursor: '1234abcd'
}
So, it contains an array of items (with lat/lon and whatever other info you need per item), as well as a cursor (which would be null when the last entity has been fetched).
Then, on the client side I would recommend using jQuery's ajax function to make the ajax calls, something like this:
$(document).ready(function()
{
    // first you may need to initialise the map - then start fetching items
    fetchItems(null);
});

function fetchItems(cursor)
{
    // build the url to request the items - include the cursor as an argument
    // if one is specified
    var url = "/path/getitems";
    if (cursor != null)
        url += "?cursor=" + cursor;

    // start the ajax request
    $.ajax({
        url: url,
        dataType: 'json',
        success: function(response)
        {
            // now handle the response - first loop over the items
            for (i in response.items)
            {
                var item = response.items[i];
                // add something to the map using item.lat, item.lon, etc
            }
            // if there is a cursor in the response then there are more items,
            // so start fetching them
            if (response.cursor != null)
                fetchItems(response.cursor);
        }
    });
}
