How long does the GAE "best effort" memcache usually last? - google-app-engine

Google is extremely vague about when memcache entries might expire in the shared memcache. This is completely understandable as it is free and shared but does anybody have any practical guidance for how long my entries will probably exist before they are removed from the shared cache? 1 hour? 1 day? 1 week? 1 month?
I plan to store some session tokens and am trying to figure out how long the sessions can be revisited.

Memcache can discard all of your data at any time; you can't really trust the expiration time.
According to Google, your application must work correctly with or without memcache.
Memcache is just a bonus that helps you avoid making a lot of datastore queries and keeps your quotas under control.
In your case, memcache is not the place to store session tokens.
If you are using Python, you can use webapp2 sessions. They work wonderfully with App Engine.
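If that route is new to you, a minimal sketch of webapp2's session support might look like the following (the handler, route, and secret key are placeholders; the `webapp2_extras.sessions` module is the documented way to configure sessions):

```python
import webapp2
from webapp2_extras import sessions


class BaseHandler(webapp2.RequestHandler):
    """Handler base class that wires up webapp2_extras session support."""

    def dispatch(self):
        # Get a session store for this request, save sessions afterwards.
        self.session_store = sessions.get_store(request=self.request)
        try:
            super(BaseHandler, self).dispatch()
        finally:
            self.session_store.save_sessions(self.response)

    @webapp2.cached_property
    def session(self):
        # Returns a session using the default (secure cookie) backend.
        return self.session_store.get_session()


class MainHandler(BaseHandler):
    def get(self):
        self.session['visits'] = self.session.get('visits', 0) + 1
        self.response.write('Visits this session: %d' % self.session['visits'])


config = {
    'webapp2_extras.sessions': {
        # Placeholder secret; use your own value in production.
        'secret_key': 'replace-with-a-secret-key',
    },
}

app = webapp2.WSGIApplication([('/', MainHandler)], config=config, debug=True)
```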

I've honestly seen memcache entries last for a day, and for as little as an hour. And since I never actively monitored memcache, I wouldn't be surprised if your timeframes are much bigger or smaller.
It's really hard to say, since (I guess) it depends on the internal load of the machine serving your instance and how many resources the other machines around it need.
Unfortunately there is no real answer to this question besides: code defensively. Always check memcache first, and if you don't find your data there, grab it from your permanent storage (the datastore I guess? or Cloud Storage) and then push it back into memcache for your next use.
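A minimal sketch of that defensive read-through pattern, assuming a Python app with a hypothetical ndb model named SessionToken (the model, key names, and one-hour expiry are placeholders):

```python
from google.appengine.api import memcache
from google.appengine.ext import ndb


class SessionToken(ndb.Model):
    """Placeholder model: the permanent copy of the session data."""
    user_id = ndb.StringProperty()
    payload = ndb.JsonProperty()


def get_session(token):
    cache_key = 'session:%s' % token
    # 1. Try memcache first; entries can be evicted at any time.
    data = memcache.get(cache_key)
    if data is not None:
        return data
    # 2. Fall back to the datastore (the source of truth).
    entity = SessionToken.get_by_id(token)
    if entity is None:
        return None
    data = {'user_id': entity.user_id, 'payload': entity.payload}
    # 3. Repopulate memcache for the next request. The expiry is only an
    #    upper bound; the entry can still be evicted earlier.
    memcache.set(cache_key, data, time=3600)
    return data
```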

Related

AppEngine: Memcache without any instance

My problem is my memcache resets every night. What could the be problem?
I have a very simple low-traffic web site on Google App Engine in Java. I do not have any instances left at night because of lack of traffic. If there is no instance, does it mean that memcache clears all its data?
Memcache is not persistent and entries can be evicted under memory pressure. Shared memcache is also shared with other apps; you can buy dedicated memcache to get a reserved amount of capacity. What you describe does not seem to be a problem at all.
One important question is: what expiration time did you set?

Passing on google-app-engine costs to the client

I have a Google App Engine application that uses Cloud Datastore. My application usage is increasing, and I cannot keep it "free" since I am paying for
the read/write operations to the datastore
the storage of data in the cloud
instance hours
I plan to charge "by the drink." In other words, as I am charged for application usage, I will pass that charge on to my clients. Before developing my own solution, I realize that there must be countless others who have solved this problem. If so, what technique(s) have you employed?
Thank you in advance for your suggestions.
Unfortunately there is no silver bullet for this situation. Be advised that unless you really think this through, keeping track of your users' consumption might end up costing as much as the actual usage.
Without knowing anything else about your app it's hard to help, but my advice would be to use Appstats to figure out the actual cost of a given service and charge the user per access.
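For reference, enabling Appstats on the Python runtime is a small WSGI hook in appengine_config.py, roughly as sketched below (following the documented recording middleware):

```python
# appengine_config.py -- picked up automatically by the Python runtime.
from google.appengine.ext.appstats import recording


def webapp_add_wsgi_middleware(app):
    # Wrap every WSGI app so Appstats records RPC calls, their timings
    # and estimated costs, viewable in the /_ah/stats console.
    return recording.appstats_wsgi_middleware(app)
```

You would typically also enable the appstats builtin in app.yaml so the /_ah/stats console is served.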
Most users do not like (actually, hate) to be charged for something they (a) do not understand, and (b) have no control over. If a phone company tells its customers to go ahead and use their service without telling them how much - exactly - they are going to pay, it will lose all customers in no time. How are you going to explain read/write datastore costs to your users, with indexed properties and all? How about query costs and instance hours? It's a difficult task even if all of your customers are software engineers.
I recommend charging users for something they understand, like creating a free and a premium version of your app with additional features, or charging per game, or per document processed (you did not tell us what your app is doing), etc.

Dedicated memcache vs shared memcache (python, google appengine)

So basically, I am trying to decide if I should go for dedicated memcache.
My scenario is as follows:
I am working on an application that provides real time analysis for some public data.
I am going to maintain a total of about 15 KB of key/value data in memcache (20 keys, variable values).
The values are constantly changing (around 20 key/value updates every 3 seconds).
Hits to the website request these keys (also around one request every 3 seconds).
Assuming 10000 users hit the website at once, this will produce about 20 * 10000 key requests every 3 seconds.
Considering the relatively small size of the cache, but also the number of memcache key/value accesses produced (about 7000/second), would dedicated memcache be the more risk-averse choice for this situation?
Thanks,
The cached data seems crucial to the correct operation of your web application. If you lost data it might be a disservice to thousands of users! Hopefully your app also periodically persists the cached data and automatically recovers from cache erasure.
Although the data is small, shared memcache carries a greater risk than dedicated memcache of evicting some or all of it at unpredictable times: memory pressure from other applications, as well as your own, plus general cloud operational factors, makes it more likely that App Engine discards entries from the shared cache. The design must also deal correctly with partial data loss.
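As an illustration of handling partial loss, a minimal sketch using the Python memcache API could batch all 20 keys in one get_multi call and repopulate only what is missing (the key names and the load_missing_values() fallback are placeholders):

```python
from google.appengine.api import memcache

KEYS = ['metric_%02d' % i for i in range(20)]  # placeholder key names


def load_missing_values(missing_keys):
    """Placeholder fallback: recompute or re-read the evicted values."""
    return dict((key, 0) for key in missing_keys)  # stand-in values


def get_metrics():
    # get_multi returns only the keys that are still cached; anything
    # evicted (possibly everything, on shared memcache) is simply absent.
    cached = memcache.get_multi(KEYS)
    missing = [key for key in KEYS if key not in cached]
    if missing:
        recovered = load_missing_values(missing)
        # Best effort: set_multi may itself fail to store some items.
        memcache.set_multi(recovered)
        cached.update(recovered)
    return cached
```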
With this data size you will not get any benefit from using dedicated memcache.
The rate at which you access memcache is not relevant to this decision.

Memcache System - Keys getting dropped frequently

We currently host our application on Google App Engine, with billing enabled. The application is still in beta and we are using it for testing purposes. Our logic is to serve data from memcache if it is present; if not, we fetch the data from the datastore, update memcache, and serve it. We are encountering strange behaviour related to memcache: the data for some keys is getting dropped a few minutes after being set. We tried setting an expiration time on the keys, but even that does not seem to work. Since the data is getting dropped from memcache, it has to be fetched from the datastore again, which is increasing the billing for our application.
Currently nearly 80% of the billing is for datastore reads. The datastore reads are high because memcache is not working as efficiently as it should. Any insight into why we are facing this issue would be really helpful.
Just an FYI, we have around 75,000 keys in memcache, totalling about 100 MB of data. Our structure demands keeping such a large number of keys in memcache, which I think should not be an issue.
Our application is being used by 10 users and the billing comes to around $40 per day.
Thanks,
Krish
Unfortunately memcache will evict keys as and when it needs to. Setting an expiration time only means the item will stay in memcache for at most that long.
Take a look at the docs regarding eviction.
Also, take a look at this for some more insight into ways around memcache issues.
Regarding your data structure, perhaps you could post a new question and we can see if others have advice for you.
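As a rough way to see how hard eviction is hitting you, the Python memcache API exposes aggregate statistics; the sketch below (the key name and 10-minute expiry are placeholders) also illustrates that the time argument is only an upper bound:

```python
from google.appengine.api import memcache

# The 10-minute expiry is an upper bound only: the item can still be
# evicted at any point before then, especially on shared memcache.
memcache.set('some_key', 'some_value', time=600)

# get_stats() reports aggregate hit/miss counts and the age of the
# oldest item, a rough proxy for how quickly entries are being evicted.
stats = memcache.get_stats()
if stats:
    print('hits: %(hits)d, misses: %(misses)d, items: %(items)d, '
          'oldest item age: %(oldest_item_age)ds' % stats)
```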

using memcached for very small data, good idea?

I have a very small amount of data (~200 bytes) that I retrieve from the database very often. The write rate is insignificant.
I would like to get away from all the unnecessary database calls to fetch this almost-static data. Is memcached a good fit for this? Something else?
If it's of any relevance I'm running this on GAE using python. The data in question could be easily (de)serialized as json.
Memcache is well-suited for this - reading from the datastore is much more expensive than reading from memcache. This is especially true for small amounts of data for which the cost to retrieve is dominated by latency to the datastore.
If your app receives enough requests that instances typically stay alive for a little while, then you could go one step further and use app caching to largely avoid memcache too. (Basically, cache the value in a module-level global variable, and also cache the time the value was last updated. Provide an accessor for the value which refreshes it from memcache/the db if it hasn't been updated in X minutes.) Memcache is pretty cheap, though, so this extra work might only make sense if you access this value rather frequently.
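A minimal sketch of that instance-level caching, assuming a hypothetical fetch_config_from_db() that reads the ~200-byte record (the names and the 5-minute window are placeholders):

```python
import json
import time

from google.appengine.api import memcache

_CACHE_SECONDS = 300   # placeholder refresh window ("X minutes")
_cached_value = None   # module-level globals live as long as the instance
_cached_at = 0.0


def fetch_config_from_db():
    """Hypothetical datastore read for the ~200-byte record."""
    return {'feature_flags': {'new_ui': True}}  # placeholder payload


def get_config():
    """Return the small config blob: instance memory, then memcache, then the db."""
    global _cached_value, _cached_at
    now = time.time()
    if _cached_value is not None and now - _cached_at < _CACHE_SECONDS:
        return _cached_value

    value = memcache.get('config_json')
    if value is None:
        value = json.dumps(fetch_config_from_db())
        memcache.set('config_json', value, time=_CACHE_SECONDS)

    _cached_value = json.loads(value)
    _cached_at = now
    return _cached_value
```

Note that the global only lives as long as the instance, so a cold start simply falls through to memcache and then the db.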
If it changes less often than once per day, you could just hardcode it in webapp code, and reupload the file each time it changes.
