I want to add an ICS link to my website. Users would be able to add this URL to their favorite calendar app and see their upcoming events.
My users use my website for a few months and then leave (it's an educational website). So my question is:
Is there a way (in the ICS protocol, maybe?) to automatically unsubscribe my users from my ICS URL, to avoid unnecessary requests "for life"?
For example, iCal on Mac will make a request to the URL every hour to get new data. But once a user leaves, there will never be new data, so the requests are useless.
Thanks for your help!
You can either ask people to unsubscribe, perhaps making it desirable by adding a dummy daily event that says 'No longer updated, please unsubscribe',
OR
force an unsubscribe by returning an appropriate HTTP status code to the requesting system, probably the 410 (Gone) rather than the 404. The 410, as per its description, is the most appropriate: "The URL is no longer there and the condition is likely to be permanent." https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/410
A URL could be offered per user. Ensure that it then returns a 410 at end of life (not just an empty file).
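A minimal sketch of such a per-user feed, assuming a Node.js/Express server; findUser and buildIcsFor are hypothetical helpers:

// Serve /calendar/<userId>/feed.ics while the user is active; once they
// have left, answer 410 Gone so well-behaved clients know it is permanent.
const express = require('express');
const app = express();

app.get('/calendar/:userId/feed.ics', (req, res) => {
  const user = findUser(req.params.userId); // hypothetical lookup
  if (!user || !user.isActive) {
    return res.status(410).send('Gone');    // end of life: stop polling
  }
  res.type('text/calendar').send(buildIcsFor(user)); // hypothetical builder
});

app.listen(3000);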
The receiving devices don't just quietly unsubscribe the URL; usually they show the error. Ideally the human should unsubscribe. Perhaps an email with quick tips on how to unsubscribe may be best for your situation? At least then you've told them.
I find even for myself that I have a lot of garbage calendar URLs in my calendar app. If I started getting errors I would unsubscribe them (or if there were garbage events, I might unsubscribe or 'hide' the calendar).
Other ways of conveying information to the requesting app that may reduce the load on your server:
Last-Modified in the response headers: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Last-Modified
Retry-After: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Retry-After (there is no 'never', though!)
There are also unofficial (non-RFC 5545) extensions that you could include in the .ics file, e.g. X-PUBLISHED-TTL, the recommended update interval for a subscription to the calendar. One could make that a really long interval.
See https://en.wikipedia.org/wiki/ICalendar#Calendar_extensions
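A rough sketch of what a feed response using these hints could look like (the date, PRODID, and the one-week TTL are placeholders; note that Retry-After normally accompanies 429/503 responses rather than a 200):

HTTP/1.1 200 OK
Content-Type: text/calendar; charset=utf-8
Last-Modified: Tue, 02 Jan 2024 10:00:00 GMT

BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//example.com//events//EN
X-PUBLISHED-TTL:P1W
END:VCALENDAR

A client that honors X-PUBLISHED-TTL:P1W would poll weekly instead of hourly, and a client that honors Last-Modified can at least be answered with a cheap 304 Not Modified.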
I am attempting to use the Gmail API to synchronize all the emails from a user's Gmail inbox. I am using the Partial Synchronization technique described in Gmail's "Synchronizing Clients" [1] documentation. One of the listed limitations of this is that in rare cases the historyId of certain emails is unavailable. Under these circumstances, it is advised that the client fall back on "Full Synchronization", which states that the client should "retrieve and store as many of the most recent messages or threads as are necessary for your purpose".
This all makes sense. When I have issues with Partial Synchronization, I attempt to look through an inbox's messages by time range. To do this, I store a record of the (emailAddress, historyId, internalDate) of each email I sync, and when falling back on Full Synchronization I attempt to sync all email since the most recent internalDate that I have already synced.
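In code, the fallback looks roughly like this (a sketch using the Node.js googleapis client; auth, the error shape, and the stored values are assumptions):

// Partial sync via users.history.list, falling back to a full sync via
// users.messages.list when the stored historyId is no longer available.
const {google} = require('googleapis');
const gmail = google.gmail({version: 'v1', auth}); // `auth` is assumed

async function syncInbox(emailAddress, lastHistoryId, lastInternalDate) {
  try {
    // Partial sync: everything since the last stored historyId.
    const res = await gmail.users.history.list({
      userId: emailAddress,
      startHistoryId: lastHistoryId,
    });
    return res.data.history || [];
  } catch (err) {
    // Only fall back when the stored historyId is no longer available.
    if (!err.response || err.response.status !== 404) throw err;
    // Full sync: everything newer than the last internalDate we stored.
    // If that internalDate lies in the future, `hours` is <= 0 and the
    // query below is meaningless: exactly the failure described next.
    const hours = Math.ceil((Date.now() - lastInternalDate) / 3600000);
    const full = await gmail.users.messages.list({
      userId: emailAddress,
      q: 'newer_than:' + hours + 'h',
    });
    return full.data.messages || [];
  }
}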
My issue is that the cases that seem to cause Partial Synchronization to fail also seem to cause Full Synchronization to fail, and many of these cases are caused by emails with internalDates in the future (I can't share these examples for privacy reasons). The failure case seems to be something like the following:
1. I sync email E with historyId H and an internalDate I some time in the future.
2. Some time passes.
3. I receive a push notification from Google indicating that there are new emails to sync.
4. I look up the most recent message that I have synced for this inbox, finding email E.
5. I attempt a partial sync using the listHistory [2] endpoint with historyId H.
6. The listHistory request fails with a 404.
7. I attempt a full sync using the listMessages [3] endpoint with the query newer_than:{hours_since_internalDate_I}, but this request doesn't make any sense, since the internalDate of this message is in the future.
I can imagine a few different solutions to this problem. Perhaps I should simply ignore these emails as spam, or perhaps I should store a timestamp of when I synced each email and then perform a Full Synchronization based on the timestamp I have stored, as sketched below.
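A sketch of that second option (the store interface is an assumption):

// Record our own clock at sync time, so the full-sync window can always be
// computed from a timestamp that is guaranteed not to lie in the future.
function recordSync(store, message) {
  store.save({
    id: message.id,
    internalDate: Number(message.internalDate), // may be bogus/future
    syncedAt: Date.now(),                       // our own trusted clock
  });
}

function fullSyncStart(store) {
  const last = store.mostRecent();
  // Never start the window at a point in the future.
  return Math.min(last.internalDate, last.syncedAt, Date.now());
}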
Either way, this seems like a bug in the Gmail API, as the internalDate should really be when Gmail received the email. I initially suspected that this might be caused by Gmail's new scheduled-send feature and that the internalDate might be when the email was scheduled in the future, but I confirmed that some of the examples I have are definitely emails that the user's inbox received, not sent. I'm really not sure what to make of this edge case in the internalDate API.
So my question is: what is the advised way to handle bogus future internalDates? And is it a bug?
[1] https://developers.google.com/gmail/api/guides/sync
[2] https://developers.google.com/gmail/api/v1/reference/users/history/list
[3] https://developers.google.com/gmail/api/v1/reference/users/messages/list
If you're sure this is a bug, you can head to Google's Issue Tracker (template here) and report it so their engineering team can take a look and see what is causing this error. Alternatively, if this persists with other mails or users, you can open a support ticket directly with Google by going to your admin dashboard and selecting 'Contact Support' in the ? menu in the top right. That way Google can look into the erroneous internalDates without you needing to post any potentially sensitive data in a public forum.
In the meantime you can work around this dynamically by making sure that you don't fetch mails with a time in the future (pseudo-code):
// Gmail's search operators take epoch timestamps in *seconds*, while
// Date.getTime() returns *milliseconds*, hence the division by 1000.
var now = Math.floor(new Date().getTime() / 1000)
var q = "newer_than:1h before:" + now
GmailServiceConnect.Users.messages.list(userId = "user@domain.ext", q = q).execute()
But remember that Gmail uses milliseconds for Unix time, not seconds (e.g. in internalDate), so values taken from the API have to be adjusted accordingly, as above.
I have a PHP-generated .ics calendar file on my server.
Several clients are subscribed to this calendar, e.g. using Google Calendar and Apple iCal/Calendar.
I want to delete the calendar and all events in it, in a way that it is also removed from the clients.
It seems that if I delete the .ics file, the events will still exist in the clients.
Should I keep an empty .ics file? Or is there some syntax I should use to instruct the clients that the calendar is no longer to be used?
In HTTP, the way to tell clients that a resource no longer exists is to emit a 404 Not Found or a 410 Gone status code.
However, even though this is the 'correct way', in practice most clients won't automatically do something with this information.
I do think that this is the 'most correct' though, because calendar clients do tend to add a 'warning' or 'error' icon to the calendar, signaling the user that something is wrong (so they can manually clean it up).
However, if you just want the events to disappear automatically, your only option is to publish a calendar with 0 events, like the one sketched below.
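Such a feed is just the VCALENDAR wrapper with no VEVENT components inside, along these lines (the PRODID is a placeholder):

BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//example.com//calendar//EN
END:VCALENDAR

Clients that keep polling will then replace whatever events they cached with this empty set.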
I have implemented Google App Engine's Channel API feature in my application. Everything runs smoothly. I create new channels every hour for every user. I have managed to maintain one channel per session (the same channel for different tabs in a browser). I have implemented the onerror and onclose methods in such a way that every time they are invoked, a call is made to the server requesting a valid token.
Sometimes, after the channel's been alive for a while, it gets disconnected. I can see failed HTTP calls to talkgadget.google.com on the JavaScript console. The URLs are something like this:
https://129.talkgadget.google.com/talkgadget/dch/bind?VER=8&clid=.....
These calls have responses like "401 (Token timed out)" or "401 (Token invalid)".
Which is indeed true: the token used by the client is invalid. It should get updated with the new token, but the onerror and onclose methods aren't invoked. How am I supposed to figure out when this will happen, or how to handle it? There is no real way to tell whether a client is disconnected except for the onerror and onclose methods. The issue is resolved if I refresh the page (I fetch the valid token from the database every time the user refreshes).
I checked the socket object's "readyState" property and it had the value 1. There are many who face this issue and, as of this date, there seems to be no valid solution offered by the folks at GAE.
Edit: I'm a premium account holder and this issue is holding back our deployments.
Edit 2: Having one channel per tab reduces the frequency of this happening. But it doesn't solve the problem completely.
It has been six days since I posted the question and there has been no response from the AppEngine team or any other users.
The workaround I applied was to have a button on the site that fetches the (valid) token from the database, closes the channel, and then opens it again with the token received, as sketched below.
Sometimes it's a new token that should have been received before; sometimes it's the same token that had been valid all along.
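The reconnect logic is roughly the following (a sketch against the Channel API JavaScript client; fetchTokenFromServer and handleMessage are assumptions):

// Force-reconnect: fetch the token currently stored for this session,
// drop the stale socket, and open a fresh channel with the new token.
var socket = null;

function reconnect() {
  fetchTokenFromServer(function (token) {  // assumed AJAX helper
    if (socket) socket.close();
    var channel = new goog.appengine.Channel(token);
    socket = channel.open();
    socket.onmessage = handleMessage;      // app-specific handler
    socket.onerror = reconnect;            // retry with a fresh token
    socket.onclose = reconnect;
  });
}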
I agree that this issue cannot be replicated often, but when it happens, it causes a lot of damage. I hope I find a solution soon.
This is more of a request for patterns and discussion than a simple one-off question. I have a Backbone app where a user can be part of different roles. The routes are defined as usual:
routes:
  "": "showHomePage"
  "import": "showImportPage"
I would like the import page to be accessible only to certain user roles. I imagine I can do something like this:
showImportPage: ->
  if not MyApp.CurrentUser.can_import
    return
Which indeed works. Of course, as you can imagine, this is easily exploited by just using the Chrome console, and even if I don't show the link anywhere, it's quite simple to just go to the address bar and type it.
Even though the above should be enough to stop a normal user, my question is: how could I secure that route from being accessed?
The opinion I have formed so far is that the only way is to refer back to the server before serving that route, either by checking a special URL or by simply re-fetching the User model before access. I have a hitch, though: this basically defeats the purpose of the whole idea behind a "single-page app", if every URL must be authenticated by the server and I need to show the usual AJAX spinner before allowing the user to navigate. I know the amount of data going back and forth is minimal (only the JSON user info, or even less), but still...
What are your opinion or solutions if you ever had to face this problem?
I think your question is a great one.
I made a PhoneGap app using BackboneJS and jQuery Mobile, so I faced the same problems you are facing now.
I think authorization can't live solely on the client side, since that is inherently wrong: what lives at the client is fully controlled by the client, and that's something no one can change.
Sending a request to the server does not break the single-page-app paradigm as long as the request gets the minimal data needed and all logic/view components are located on the client.
Keep in mind that if that page contains sensitive data that you don't want regular users to see, that data must also be sent from the server only after verifying the authorization of the request. So it is not only the JSON of the user info that must be guarded; it is the data itself as well.
I wish someone else would prove me wrong here, but as far as it goes for me that's the deal.
I have the following tiny dilemma: I have a Backbone app which is almost entirely route-based, i.e. if I go to nameoftheapp/photos/1/edit I should reach the edit page for a given photo. The problem is, since my view logic happens almost 100% on the client side (I use a thin, service-based server for storage and validation), how do I avoid issues such as an unauthorized user reaching that page? Of course, I can make the router check whether the user is authorized, but this already leads to duplication of effort in terms of validation. And of course I cannot leave the server side without validation, because then the API would be exposed to access of any sort.
I don't see any other way for now. Unless someone comes up with a clever idea, I guess I will have to duplicate validation both client and server-side.
The fundamental rule should be "never trust the client". Never deliver to the client what they're not allowed to have.
So, if the user goes to nameoftheapp/photos/1/edit, presumably you try to fetch the image from the server.
The server should respond with an HTTP 401 (Unauthorized) response.
Your view should have an error handler for this and inform the user they're not authorized for that - in whatever way you're interested in - an error message on the edit view, or a "history.back()" to return to the previous "page".
So, you don't really have to duplicate the validation logic - you simply need your views to be able to respond meaningfully to the validation responses from the server.
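For instance, something along these lines (a sketch; the Photo model and the message wording are assumptions):

// Let the view react to the server's authorization decision.
var photo = new Photo({ id: 1 });
photo.fetch({
  error: function (model, response) {
    if (response.status === 401) {
      // Not authorized: inform the user and return to the previous "page".
      alert('You are not authorized to edit this photo.');
      window.history.back();
    }
  }
});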
You might say, "That isn't efficient: you end up making more API calls." But those unauthorized calls are not going to be a normal occurrence for a user using the app in any regular fashion; they're going to be the result of probing. And anyone can find out all the API calls anyway by watching the network tab and hitting the API directly with whatever tools they want. So there really will be no more API traffic than if you DID have validation in the client.
I encountered the same issue a while ago, and it seems the best practice is to use server-side validation. My suggestion: use a templating engine like Underscore's (a dependency of Backbone) and design the templates. For the routes that only authenticated users, or those with the right permissions, can access, you ask the server for the missing data (usually small pieces of JSON) based on a CSRF token, a session_id, or both (or any other server-side validation method you choose), and you render the template; otherwise you render a predefined error with the same template. The logic is simple enough...
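A rough sketch of that flow (the endpoint, template id, and token source are assumptions):

// Ask the server for the page's data; render the template on success,
// and render an error state when the server refuses the request.
var template = _.template($('#import-page-template').html());

$.ajax({
  url: '/api/import-data',                      // hypothetical endpoint
  headers: { 'X-CSRF-Token': MyApp.csrfToken }, // assumed token source
  success: function (data) {
    $('#content').html(template({ data: data, error: null }));
  },
  error: function (xhr) {
    if (xhr.status === 401 || xhr.status === 403) {
      $('#content').html(template({ data: null, error: 'Not authorized' }));
    }
  }
});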