What is the best way in Backbone.JS to determine if a model was deleted on the server in the meantime?
I need this for a simple web app where multiple users can update or delete items; even if a model existed when the page was loaded, it might already have been deleted by the time another user interacts with it.
Include a revision number (or something similar) in your model. On the server side, when a client attempts to modify a resource, first verify that the revision number included matches what the server has. If it does, update the resource and respond with the resource and the new revision number. If it doesn't, respond with a 409 status code. If the client receives a 409 response, it should pull the latest version of the resource from the server and then attempt to push its changes again with the updated revision number.
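A minimal client-side sketch of that flow in Backbone (the urlRoot, endpoint, and attribute names are illustrative assumptions, not from the question):

var Item = Backbone.Model.extend({ urlRoot: '/api/items' });

var item = new Item({ id: 42, revision: 7 });
item.save({ name: 'new name' }, {
    error: function (model, response) {
        if (response.status === 409) {
            // Stale revision: pull the latest server state, which also
            // refreshes the revision attribute, then retry the change.
            model.fetch({
                success: function (fresh) {
                    fresh.save({ name: 'new name' });
                }
            });
        }
    }
});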
I want to add an ICS link to my website. Users would be able to add this URL to their favorite calendar app and see their upcoming events.
My users use my website for a few months and then leave (it's an educational website). So my question is:
Is there a way (in the iCalendar protocol, maybe?) to automatically unsubscribe my users from my ICS URL to avoid unnecessary requests "for life"?
For example, iCal on Mac will make a request to the URL every hour to get new data. But once a user leaves, there will never be new data, so the requests are useless.
Thanks for your help!
You can either ask people to unsubscribe, making it desirable perhaps with a dummy daily event that says 'No longer updated, please unsubscribe',
OR
force an unsubscribe by returning an appropriate HTTP status code to the requesting system, probably the 410 (Gone) rather than the 404. The 410, as per its description, is the most appropriate: the URL is no longer there and the condition is likely to be permanent. https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/410
A URL could be offered per user. Ensure that it then returns a 410 at end of life (not just an empty file).
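For example, a per-user feed endpoint could look like this (a Node/Express sketch; the route, token lookup, and feed builder are all hypothetical):

app.get('/calendar/:userToken.ics', function (req, res) {
    var user = findUserByToken(req.params.userToken); // hypothetical lookup
    if (!user || user.hasLeft) {
        // 410 Gone: tell well-behaved clients the feed is gone for good.
        return res.status(410).send('This calendar feed is no longer available.');
    }
    res.type('text/calendar').send(buildIcsFor(user)); // hypothetical builder
});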
The receiving devices don't just quietly unsubscribe from the URL; usually they show the error. Ideally the human should unsubscribe. Perhaps an email with quick tips on how to unsubscribe may be best for your situation? At least then you've told them.
I find even for myself I have a lot of garbage calendar URLs in my calendar app. If I started getting errors I would unsubscribe them (or if there were garbage events, I might unsubscribe or 'hide' the calendar).
Other ways of conveying information to the requesting app that may reduce load on your server:
Last-Modified in the header: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Last-Modified
Retry-After: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Retry-After (there is no 'never' value!)
There are also unofficial (non-RFC 5545) extensions that you could include in the ICS file, e.g. X-PUBLISHED-TTL, the recommended update interval for subscriptions to the calendar. One could make that a really long interval, as in the fragment below the link.
See https://en.wikipedia.org/wiki/ICalendar#Calendar_extensions
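For illustration, a stripped-down feed carrying that extension might look like this (the PRODID and the one-week TTL are arbitrary; events are omitted for brevity):

BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//example.com//events//EN
X-PUBLISHED-TTL:P1W
END:VCALENDAR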
According to the docs here: https://cloud.google.com/pubsub/docs/reference/rest/v1/PubsubMessage
the timestamp field should not be populated when a publisher sends a message to the topic. This leads me to think Pub/Sub attaches a timestamp automatically whenever it gets a message from a publisher.
Is this correct?
Yes, you got that right. This is what is implied by the following sentence (from the docs you linked):
publishTime: The time at which the message was published, populated by
the server when it receives the topics.publish call.
You can test that yourself: if you go to the APIs Explorer and publish to a topic using the pubsub.projects.topics.publish method without giving a publishTime, and then pull from a subscription on that same topic (pubsub.projects.subscriptions.pull), the pulled message will have a publishTime.
Now, there is also a sentence in the docs regarding publishTime which seems a bit unclear to me:
It must not be populated by the publisher in a topics.publish call.
If you actually try to add a (correctly formatted) publishTime in your publish call, you will not get an error. Still, the actual publishTime attached to the message that you later pull is the one provided by the Pub/Sub service (i.e. the publishTime you gave is simply ignored).
Pub/Sub does generate the publishTime itself. However, in cases where there are multiple publishers publishing to Pub/Sub, you could attach your own timestamp to every event in the publisher (see the sketch below the link).
https://cloud.google.com/pubsub/docs/ordering
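A minimal Node.js publisher sketch along those lines (assumes the @google-cloud/pubsub client; the topic name and the eventTime attribute are arbitrary choices):

const { PubSub } = require('@google-cloud/pubsub');

async function publishWithTimestamp(payload) {
    const pubsub = new PubSub();
    // Attach our own timestamp as a message attribute; Pub/Sub will still
    // set publishTime itself when it accepts the message.
    await pubsub.topic('my-topic').publishMessage({
        data: Buffer.from(JSON.stringify(payload)),
        attributes: { eventTime: new Date().toISOString() },
    });
}

Subscribers can then order or de-duplicate on the eventTime attribute regardless of which publisher sent the message.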
We have integrated WebSphere Commerce with the Camel integration framework and set up the DOM inventory model. When the user reaches the product page and selects an item, an external call is made through Camel and it updates the INVAVL table of Commerce.
However, on the page the inventory status is still shown as unavailable. But if the same product is chosen again, it shows as available, because the second request goes directly to the database instead of through Camel.
Any solution for this?
When Commerce calls the external system, it returns the object as returned by the external system, rather than what it stored in its caches.
This causes some discrepancies in behaviour if the external system does not provide the full information set that Commerce itself applies to an inventory response.
Why it does this is not really clear to me, but I have observed it to be true. It also affects physical store inventory display, as the external system will populate the external identifier of the inventory record, while the cache-only version won't. And the JSPs expect it to be there.
So the first run will look OK, but subsequent executions will show no inventory.
The easiest way to debug this is to use SoapUI to call GetInventoryAvailability on WCS both when:
a) no existing cache entry is present, and
b) an existing cache entry is present.
Then fiddle with the Camel response until it matches the response returned in b).
Typically, it's fields you weren't thinking were important, such as StoreIdentifier and AvailabilityTime.
I'm new to Angular and I just can't get my head wrapped around this idea, any help would be greatly appreciated.
A lot of conversations state the model should come from the server via RESTful web services. I've been using $http in a factory. This makes sense to me "if" there is data present. If you load a screen and the user (or whatever) is new, then you get a blank JSON value. For complex data (relationships), you get some items with a value but other properties are left off.
So what am I missing here, how can the model come from the server consistently?
It's useful to think of your model as both a server model and a client model. The server model should be your true model or "source of truth", and the client model is a working model or "mimic" that should behave as a local copy of the server model.
For the model to "come from the server consistently", you have to ensure that any changes to the client model get validated by the server side. That means any change requests to the model, such as create, update, or delete operations (the write side of CRUD), get sent as requests to the server, and the resulting changed data model gets returned to the client model so it can be updated.
You could take advantage of standard HTTP status codes as a mechanism to provide results to the client: for example, your service could return an HTTP code of 204 to indicate that the server successfully processed the request but is not returning any content.
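A minimal AngularJS factory sketch along these lines (the /api/items endpoint and the service name are hypothetical):

angular.module('app', []).factory('ItemService', ['$http', function ($http) {
    var base = '/api/items';
    return {
        // Read the server model (the source of truth).
        get: function (id) {
            return $http.get(base + '/' + id).then(function (res) {
                return res.data;
            });
        },
        // Push a change and refresh the local copy from the server's reply.
        save: function (item) {
            return $http.put(base + '/' + item.id, item).then(function (res) {
                // 204 means success with no body: keep the local copy as-is.
                return res.status === 204 ? item : res.data;
            });
        }
    };
}]);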
After reading about how CodeIgniter handles sessions, it has me concerned about the performance impact when sessions are configured to be stored and retrieved from the database.
This is from the CI documentation: "When session data is available in a database, every time a valid session is found in the user's cookie, a database query is performed to match it."
So every AJAX call, every HTML fragment I request, is going to have this overhead? That is potentially a huge issue for systems that are trying to scale!
I would have guessed that CI would have implemented it better: include an MD5 hash covering both the session ID and timestamp when encoding them in the session record, then only check the database for the session record every X minutes, whenever the session ID gets regenerated. Am I missing something?
You can make your AJAX requests use a different controller, for example Ajax_Controller instead of MY_Controller. MY_Controller would load the Session class but Ajax_Controller wouldn't. That way, your AJAX calls never touch session data and therefore don't make unnecessary database queries. A sketch of that split follows.
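Something like this (a CodeIgniter 2.x-style sketch; only the split itself matters, the class bodies are minimal):

// Base controller for normal pages: loads the session (one DB query per hit).
class MY_Controller extends CI_Controller {
    public function __construct() {
        parent::__construct();
        $this->load->library('session');
    }
}

// Base controller for AJAX/HTML-fragment requests: no session, no DB hit.
class Ajax_Controller extends CI_Controller {
    public function __construct() {
        parent::__construct();
    }
}

Page controllers extend MY_Controller as before; anything serving AJAX or HTML fragments extends Ajax_Controller instead.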
If you are autoloading the Session class, maybe you can try unloading it for the AJAX requests? I've never tried it, but it's talked about here: http://codeigniter.com/forums/viewthread/65191/#320552. You'd then do something like this:
if ($this->input->is_ajax_request()) {
    // unload session class code goes here
}