Is it possible to use Backbone.sync to connect to cometd? - backbone.js

Is it possible to talk to a CometD service when using Backbone.sync?
Thanks in advance
EDIT
After some reading, it seems you can override Backbone.sync().
Backbone.sync is the function that Backbone calls every time it attempts to read or save a model to the server. By default, it uses (jQuery/Zepto).ajax to make a RESTful JSON request. You can override it in order to use a different persistence strategy, such as WebSockets, XML transport, or Local Storage.
I can't find any more information on this though.

Indeed, all you need to do is override sync.
A good example to follow in order to see how it is done is the Backbone localStorage adapter.
In brief, you define a method that replaces sync on your models/collections:
mySync = function(method, model, options)
The method argument can be one of create, read, update or delete, and model can be either a model or a collection. Essentially you only need to cover those four methods and everything will work like a charm. Bear in mind that while the localStorage example is useful, it is also simplistic in some ways, so having a look at Backbone's own source never hurts.
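For a rough picture of what that looks like in practice, here is a minimal sketch of a custom sync, assuming a hypothetical transport object standing in for your CometD client. The channel name, payload shape and transport API below are all made up, not part of Backbone or CometD:

// Hypothetical transport stub -- swap in real CometD publish/subscribe calls.
var transport = {
  send: function (channel, payload, callback) {
    callback({ successful: true, data: payload.data });
  }
};

var cometdSync = function (method, model, options) {
  options = options || {};

  // Map Backbone's verbs ("create", "read", "update", "delete") onto
  // whatever message format your service expects.
  var payload = {
    action: method,
    data: method === 'read' ? null : model.toJSON()
  };

  transport.send('/service/models', payload, function (response) {
    if (response && response.successful) {
      if (options.success) options.success(response.data);
    } else {
      if (options.error) options.error(response);
    }
  });
};

// Use it per model/collection, or assign it to Backbone.sync to replace
// the default Ajax behaviour everywhere.
var MyModel = Backbone.Model.extend({ sync: cometdSync });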

Related

DTOs and object graphs

I'm making an Angular2 SPA with a webAPI REST service that exposes an EntityFramework model.
The quickest way to get going is to load up a big object graph in a single controller action and pass out a big blob of JSON to the client, which can then split it up as it walks the object graph.
Is it considered better practice to build API actions for each object in the object graph and have the JS client pull the graph piecemeal as required?
The piecemeal approach requires many more controllers and actions and, correspondingly, angular services, i.e., more work! Do I need to just grasp the nettle and get on with it?
It actually depends on whether you are using Entity Framework in connected or disconnected scenarios. In your case you are using Entity Framework in a disconnected scenario, which means the DbContext is not attached to the object graph the whole time: you get the data from the database, send it to the client, and then close the context. I would recommend dividing your controllers and actions per POCO or DTO, because this helps you maintain and attach each object individually rather than managing the whole object graph at once. The problems start to appear when you begin editing or manipulating your entities, because in a disconnected scenario you never know which object in a big graph has been edited, deleted, or added. Instead, you should track each change on the client side and send it directly to the server to apply that update.
I don't know if this answers your question, but if you need any further explanation or a code sample, please let me know.
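As a rough, framework-agnostic sketch of what the piecemeal approach looks like from the client (the endpoint URLs and entity names below are assumptions, not your actual API):

// Load the root entity first, then pull related entities only when needed.
function loadOrder(orderId) {
  return fetch('/api/orders/' + orderId)
    .then(function (res) { return res.json(); })
    .then(function (order) {
      return fetch('/api/orders/' + orderId + '/lines')
        .then(function (res) { return res.json(); })
        .then(function (lines) {
          order.lines = lines;   // attach the child collection client-side
          return order;
        });
    });
}

// Updates go back the same way: send each changed entity to its own endpoint
// instead of re-submitting the whole object graph.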
I think you should make one backend action per Angular2 page-level component. The user shouldn't wait for extra data to load, only the data needed on that page.

The best way to pre-populate remote data in AngularJS app

In my AngularJS app, I need to retrieve multiple collections of static data from remote REST endpoints. Those data collections will be used throughout the entire application life cycle as static lookup lists. I would like for all those lists to be populated upon the initial application startup, and to be retained and made available to multiple controllers. I would prefer not to load any additional data dynamically, as one of the assumptions for this particular app, is that, once loaded, any further network connections may not be available for a while.
It is OK to take an initial hit, as the users will be preoccupied by reading a static content of the first page anyway.
I was thinking of making this mass loading part of the initial application run block, and sticking all this static data into various collections attached to $rootScope (which would make it available everywhere else).
What is the best way to handle this requirement?
Interestingly enough, I just wrote a blog post about extending the script directive to handle this very scenario.
The concept is simple. You embed JSON data in your page when it loads from the server like so:
<script type="text/context-info">
{
  "name": "foo-view",
  "id": 34,
  "tags": [
    "angular",
    "javascript",
    "directives"
  ]
}
</script>
Then you extend the script directive so it parses the data out for you and makes it available to other parts of your application via a service.
if (attr.type === 'text/context-info') {
  var contextInfo = JSON.parse(element[0].text);
  // Custom service that can be injected into the decorator
  contextInfoService.addContextInfo(contextInfo);
}
You can see a live demo of it here: http://embed.plnkr.co/mSFgaO/preview
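For reference, here is a rough sketch of what the decorator and the backing service might look like; the module name, service API and details are assumptions, and the linked post and demo are the authoritative versions:

// A simple service that collects the embedded context info.
angular.module('app').factory('contextInfoService', function () {
  var items = [];
  return {
    addContextInfo: function (info) { items.push(info); },
    getContextInfo: function () { return items; }
  };
});

// Decorate the built-in script directive so it also handles text/context-info.
angular.module('app').config(['$provide', function ($provide) {
  $provide.decorator('scriptDirective', ['$delegate', 'contextInfoService',
    function ($delegate, contextInfoService) {
      var directive = $delegate[0];
      var originalCompile = directive.compile;
      directive.compile = function (element, attr) {
        if (attr.type === 'text/context-info') {
          contextInfoService.addContextInfo(JSON.parse(element[0].text));
        }
        return originalCompile && originalCompile.apply(this, arguments);
      };
      return $delegate;
    }]);
}]);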
The way I approach this is to use a service (or a collection of services I nest) and set cache: true in the $http GET calls. This way the service can be injected into any controller you like, with cached results available to you without the need for additional HTTP requests.
I can try to put this into an example if this is unclear to you; let me know.
Edit: you can either wait for the first call to the service to populate the cache, or do it on app load; either way works.
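A minimal sketch of that caching-service idea, assuming a single lookup endpoint at /api/lookups (the URL and service name are made up):

angular.module('app').factory('lookupService', ['$http', function ($http) {
  return {
    getLookups: function () {
      // cache: true stores the result in $http's default cache, so any
      // controller that injects this service reuses the first response.
      return $http.get('/api/lookups', { cache: true })
        .then(function (response) { return response.data; });
    }
  };
}]);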

backbone.js automatic PUT after POST

Our server saves the model and returns the JSON as specified in the docs. The problem is that backbone.js issues a PUT as soon as it receives the response. Can it be because the model is sent without an _id property, and the server appends that to the model?
If you believe that Backbone automatically issues a PUT based on the response to a previous request, you are confused. Backbone does no such thing. If you see a PUT going over the wire, something in your code base (an event binding or otherwise) is calling either save on a model or a manual sync.
Otherwise, you'll need to post code in order for us to help you debug, but I can assure you backbone itself will not ever issue a network request that is not triggered by external code through one of a very small set of methods such as fetch, save, or sync.
As to IDs on the server, that should be perfectly fine. In fact, if backbone were to get confused and think that an existing model was a new model, it would issue a POST instead of a PUT, which is not what you are seeing.
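As a simplified illustration of how Backbone decides between POST and PUT (the attribute and URL names here are examples only): save() goes through sync with "create" when the model is new, i.e. has no id, and "update" otherwise.

var Item = Backbone.Model.extend({
  urlRoot: '/items',
  // If your server uses "_id" (e.g. MongoDB), tell Backbone about it;
  // otherwise the model still looks "new" after the POST response comes back.
  idAttribute: '_id'
});

var item = new Item({ name: 'example' });
item.isNew();    // true  -> item.save() issues POST /items
// Once the response containing {"_id": "..."} is set on the model,
// isNew() returns false -> subsequent saves issue PUT /items/<_id>.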

Why does Backbone have a sync method in models as well as collections?

Anyway, having a sync method only on the collection or only on the model would seem to suffice, so why does sync exist in both places?
Actually both sync() methods are just proxies to a common Backbone.sync() method:
Model.sync()
Collection.sync()
Backbone.sync()
A Collection always delegates to Model.sync() for operations on its individual models, such as create, remove and so on. But a Collection uses its own sync() for the fetch() operation, because fetching a Model and fetching a Collection are very different: the URL follows another pattern and the backend responds differently.
On the other hand, I see Backbone.sync() as a private method and I try not to call it directly; it doesn't feel right when I do. I think sync() is a hook that lets you completely override the backend synchronization, a method you can overwrite to implement different persistence layers, for example using localStorage, but not one to be called directly.
As @JMM has said in the comments, Model.sync() and Collection.sync() are also good points to override so that your code "does something custom and then calls Backbone.sync() to carry on as usual".
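A small sketch of that pattern, with the custom step (logging) purely as an example:

var AuditedModel = Backbone.Model.extend({
  urlRoot: '/audited',
  sync: function (method, model, options) {
    console.log('sync:', method, model.toJSON());   // the custom part
    return Backbone.sync(method, model, options);   // carry on as usual
  }
});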
Backbone doesn't define a sync method on models and collections by default, but both models and collections have methods (fetch for both, plus save and destroy for models) that use Backbone.sync to make Ajax calls. Docs, annotated source.
The methods that use Backbone.sync first check whether the individual collection or model has its own sync method, so the default behaviour can be overridden globally by replacing Backbone.sync, or for specific parts by giving a model or collection that needs custom syncing its own sync function.
As to why both models and collections have the capability to synchronize with the server: flexibility. If only collections had syncing capability, you couldn't have standalone models, and if only models had it, how would you fetch large batches of models from the server initially? There is no downside to having syncing capabilities on both models and collections, so why not?
My counter-question for you: How would having sync on only the other suffice?

mvc programming question

I am using a view file, a controller and a helper, and I am accessing data values through a webservice.
Steps:
The controller gets a value from the webservice and sets it in the view. The view uses the helper to display the data in some format. But my helper again calls the webservice method to get the inner values. Is it correct for the helper to access a webservice method? Is that the correct way of programming in MVC?
Thanks,
IMO, a webservice is just another datasource and should be accessed via the model. If it were me, I would handle it by creating a new model for the service call (or, if the call is in support of an existing entity, it may make more sense to make the call in that entity's model itself). My controller calls the model method and sends the data to my view, which in turn forwards that data on to the helper.
This maintains the MVC separation, but still allows the data you need to make its way into the helper where you need it.
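To make the flow concrete, here is a rough sketch (in JavaScript, purely for illustration; the names and endpoint are made up) of keeping the webservice call in the model while the helper only formats data it is handed:

// Model: the only place that talks to the webservice.
var OrderModel = {
  fetchOrder: function (id) {
    return fetch('/service/orders/' + id).then(function (r) { return r.json(); });
  }
};

// Controller: asks the model for data and hands it to the view.
var orderController = {
  show: function (id, view) {
    OrderModel.fetchOrder(id).then(function (order) {
      view.render(order);
    });
  }
};

// Helper: formatting only -- it never calls the webservice itself.
var formatHelper = {
  money: function (amount) { return '$' + Number(amount).toFixed(2); }
};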
I will tell you what is written in a Ruby on Rails book; I cannot remember the title right now, but...
Helpers are usually used for view rendering, not for server calls.
Hope it helps.