Caching cloud storage files on app engine - google-app-engine

I am using App Engine to serve a bunch of sklearn models. These models are around 100 MB in size, and there are around 25 of them.
Downloading one can take up to 15s at times, despite the files being in the designated App Engine bucket, and this often dominates request times.
I currently use a FIFO cache layer wrapped around the GCS storage client, but cache hit rates aren't great, as the different models are used in an interleaved fashion and App Engine memory is limited.
Memcache seems too small for this, and /tmp is also stored in RAM.
Is there a better solution for caching such files?
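For reference, here is a minimal sketch of the kind of FIFO cache layer described above, assuming a recent google-cloud-storage client; the bucket name and size cap are placeholders:

```python
from collections import OrderedDict
from google.cloud import storage

class FifoBlobCache:
    """FIFO cache of GCS blobs, bounded by total size in bytes."""

    def __init__(self, bucket_name, max_bytes=400 * 1024 * 1024):
        self._bucket = storage.Client().bucket(bucket_name)
        self._cache = OrderedDict()  # blob name -> bytes, oldest first
        self._max_bytes = max_bytes
        self._size = 0

    def get(self, blob_name):
        if blob_name in self._cache:
            return self._cache[blob_name]  # cache hit: no download
        data = self._bucket.blob(blob_name).download_as_bytes()
        # Evict oldest entries (FIFO) until the new blob fits.
        while self._cache and self._size + len(data) > self._max_bytes:
            _, evicted = self._cache.popitem(last=False)
            self._size -= len(evicted)
        self._cache[blob_name] = data
        self._size += len(data)
        return data
```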

You can imagine different solutions to solve your issue.
You can embed your models in your deployment. That way, the models are already there with the service. When a new model version is released, you deploy a new App Engine service version.
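A minimal sketch of that approach, assuming joblib-serialized models shipped in a models/ directory inside the deployment (the model names are hypothetical):

```python
import joblib

# Loaded once per instance at import time, not once per request.
MODELS = {
    name: joblib.load("models/%s.joblib" % name)
    for name in ("model_a", "model_b")  # hypothetical model names
}

def predict(model_name, features):
    return MODELS[model_name].predict([features])[0]
```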
The problem with the previous solution is the deployment frequency: when one of the models is updated, you need to repackage and redeploy your App Engine service. The solution is microservices: you can have one model per App Engine service and therefore deploy only the one that has been updated. If you want a single entry point, you can have a 26th App Engine service which is your entry point and routes each request to the correct model service.
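A minimal sketch of such an entry-point service, assuming each model service is reachable at the standard SERVICE-dot-PROJECT.appspot.com host (the project ID and route are hypothetical):

```python
import requests
from flask import Flask, request

app = Flask(__name__)
PROJECT = "my-project"  # hypothetical project ID

@app.route("/predict/<model_name>", methods=["POST"])
def route_to_model(model_name):
    # Each model lives in its own App Engine service named after it.
    url = "https://%s-dot-%s.appspot.com/predict" % (model_name, PROJECT)
    resp = requests.post(url, json=request.get_json(), timeout=30)
    return resp.content, resp.status_code
```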
You can also do the same thing with Cloud Run, where you manage the container packaging and its details if you need anything special. You also have more flexibility in the number of CPUs and the memory size.
Last point: after solving the download issue, you could still have a cold-start issue, i.e. the time your server takes to start and to load your model into memory (at the first request, when the instance starts). Cloud Run offers a min-instances feature to keep a certain number of instances warm and thereby eliminate the cold-start issue.

Related

Caching after GAE standard migration to Go 1.11/1.12

I've almost completed migrating based on Google's instructions.
It's very nice to not have to call into the app-engine libraries whatsoever.
However, now I must replace my calls to app-engine-standard memcached.
Here's what the guide says: "To use a memcache service on App Engine, use Redis Labs Memcached Cloud instead of App Engine Memcache."
So is a third party really my only option? They don't even list pricing on their page if GCE is selected.
I also see in the standard environment how-to guides there is a guide on Connecting to internal resources in a VPC network.
That link mentions Cloud Memorystore. I can't find any examples of whether this is advisable or possible on GAE standard. Of course it wasn't previously possible, but now that GAE standard has become much more "standard", I think it should be possible?
Thanks for any advice on the best way forward.
Memorystore appears to be Google's replacement:
https://cloud.google.com/memorystore/
You connect to it using this guide:
https://cloud.google.com/appengine/docs/standard/go/using-memorystore
Alas it costs about $1.20/GB per day with no free quota.
Thus, if your data doesn't change and requires less than 100MB of cache at a time, the first answer might be better (free). Also, your data won't blow up the instance's memory, since you can control the max size of the cache.
However, if your data changes or you need more cache, Memorystore is a more direct replacement for Memcache - it just costs money.
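For illustration, connecting from application code mostly amounts to pointing a standard Redis client at the Memorystore instance's internal IP over the VPC; a sketch in Python (the linked guide shows the Go equivalent), with host and port as placeholders:

```python
import redis

# Memorystore is reached over the VPC; use the instance's internal IP
# and port from the Cloud Console (placeholder values here).
cache = redis.Redis(host="10.0.0.3", port=6379)

cache.set("greeting", "hello", ex=3600)  # expire after one hour
value = cache.get("greeting")            # b"hello"
```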
I've been thinking about this. 2nd-gen instances have twice the RAM, so if a global cache isn't required (as in, items don't change once created; name items using their SHA-256), you can run your own local thread-safe memcache (such as https://github.com/dgraph-io/ristretto) and allocate some of the extra RAM to it. It'll be faster than Memcache was, so requests can be serviced even faster, keeping the number of instances low.
You could make it global for data that does change, by using pub/sub between instances, but I think that's significantly more work.
To ease the migration to 1.12, I have been thinking of using this solution:
create a dedicated app using the 1.11 runtime.
set up twirp endpoints to act as a proxy for all the deprecated App Engine services (memcache, mail, search...)

Fastest Open Source Content Management System for Cloud/Cluster deployment

Clouds are currently mushrooming like crazy, and people have started deploying everything to the cloud, including CMS systems. But so far I have not seen anyone succeed in deploying a popular CMS system to a load-balanced cluster in the cloud. Some performance hurdles seem to prevent standard open-source CMS systems from being deployed to the cloud like this.
CLOUD: A cloud, or more precisely a load-balanced cluster, has at least one frontend server, one network-connected(!) database server, and one cloud-storage server. This fits Amazon Beanstalk and Google App Engine well. (This specifically excludes a CMS on a single computer or a Linux server with MySQL on the same machine.)
Deploying a standard CMS in such a load-balanced cluster requires a cloud-ready CMS with the following characteristics:
The CMS must deal with query latency so that it stays responsive and renders pages in less than a second so they can be cached (or it must use a precaching strategy)
The filesystem probably must be connected to a remote storage (Amazon S3, Google cloudstorage, etc.)
Currently I know that python/django and WordPress have middleware modules or plugins that can connect to cloud storage instead of a filesystem, but there might be other cloud-ready CMS implementations (Java, PHP, ?) and systems.
I myself have failed to deploy django-CMS to the cloud, ultimately due to the query latency of the remote DB. So here is my question:
Did you deploy an open-source CMS that still performs well in rendering pages and backend admin? Please post your average page-rendering stats in milliseconds for uncached pages.
IMPORTANT: Please describe your configuration, the problems you have encountered, and which modules had to be optimized in the CMS to make it work; don't post a simple "this works", contribute your experience and knowledge.
Such a CMS probably has to make fewer than 10 queries per page (if more, the queries must be made in parallel) and deal with filesystem access times of 100ms for a stat and query delays of 40ms.
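To illustrate the parallelism point: at 40ms per round trip, ten sequential queries already cost 400ms, while issuing them concurrently costs roughly one round trip. A generic sketch with a thread pool, where fetch_one is a stand-in for a real query function:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_one(query):
    """Stand-in for a real DB/storage call taking ~40ms."""
    ...

queries = ["q1", "q2", "q3"]  # hypothetical per-page queries

# Sequential: ~40ms * len(queries). Concurrent: ~40ms total.
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(fetch_one, queries))
```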
Related:
Slow MySQL Remote Connection
Have you tried Umbraco?
It relies on a database, but it keeps layers of cache, so you aren't doing selects on every request.
http://umbraco.com/azure
It works great on azure too!
I have found an excellent performance test of WordPress on App Engine. It appears that Google has spent some time optimizing this system for load-balanced cluster and remote-DB deployment:
http://www.syseleven.de/blog/4118/google-app-engine-php/
Scaling test from the report:

Parallel hits    GAE     1&1     Sys11
1                1.5     2.6     8.5
10               9.8     8.5     69.4
100              14.9    -       146.1
Conclusion from the report: the system is slower than on traditional hosting but scales much better.
http://developers.google.com/appengine/articles/wordpress
We have managed to deploy python django-CMS (www.django-cms.org) on Google App Engine with Cloud SQL as the DB and Cloud Storage as the filesystem. Cloud Storage was attached by forking and fixing a django.storage module by Christos Kopanos (http://github.com/locandy/django-google-cloud-storage).
After that, a second set of problems came up when we discovered access times of up to 17s for a single page. We investigated and found that easy-thumbnails 1.4 accessed the normal filesystem for mod_time requests while writing results to the store (rendering all thumbnail images on every request). We switched to the development version, where that was already fixed.
Then we worked with SmileyChris to fix the unnecessary reading of mod_times (stat-ing the file) on every request for every image, by tracing the problem and posting issues to http://github.com/SmileyChris/easy-thumbnails
This reduced access times from 12-17s to 4-6s per public page on the CMS, basically eliminating all storage/"filesystem" access. Once that was fixed, easy-thumbnails replaced (by design) filesystem accesses with DB queries to check, on every request, whether a thumbnail's source image had changed.
One thing for the web designer: using an image.width statement in the template forces an ugly, slow read on the "filesystem", because image widths are not cached.
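One way around this (a sketch of the idea, not something easy-thumbnails provides) is to memoize dimensions in Django's cache framework, keyed by the storage name, so the "filesystem" is hit at most once per image:

```python
from django.core.cache import cache

def cached_width(image_field):
    """Return the image width, hitting remote storage at most once."""
    key = "imgwidth:" + image_field.name
    width = cache.get(key)
    if width is None:
        width = image_field.width  # forces the slow storage read
        cache.set(key, width, 24 * 3600)
    return width
```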
Further investigation led to the conclusion that DB accesses are very costly too, taking about 40ms per round trip.
So far the deployment has been unsuccessful, mostly due to DB access times in the cloud leading to 4-5s delays in rendering a page before it is cached.

GAE Go - "This request caused a new process to be started for your application..."

I've encountered this problem for a second time now, and I'm wondering if there is any solution to this. I'm running an application on Google App Engine that relies on frequent communication with a website through HTTP JSON RPC. It appears that GAE has a tendency to randomly display a message like this in the logs:
"This request caused a new process to be started for your application,
and thus caused your application code to be loaded for the first time.
This request may thus take longer and use more CPU than a typical
request for your application."
And to reset all variables stored in RAM without warning. The same thing happens over and over, no matter how many times I set the variables again or upload newer code to GAE, although incrementing the app version number seems to solve the problem.
How can I get more information on this behaviour, and how can I avoid it and prevent data loss in my Go applications on Google App Engine?
EDIT:
The variables stored in RAM are small classes of strings, bytes, bools and pointers. Nothing too complicated or big.
Google App Engine seems to "start a new process" within seconds of heavier use, which shouldn't be long enough for the application to be shut down for being idle. The time span between the application being uploaded to GAE, having its variables set, and a new process being created is less than a minute.
Do you realize that GAE is a cloud hosting solution that automatically manages instances based on the load? This is its main feature and the reason people use it.
When load increases, GAE creates a new instance, which, of course, has all RAM variables empty.
The solution is not to expect variables to be available: store them to permanent storage (session, memcache, datastore) at the end of a request, and load them at the beginning of a request if they are not present.
You can read about GAE instances in their documentation here, check out the performance section:
http://code.google.com/appengine/kb/java.html
In your case of having a small amount of data: if it's static, you can load it into memory on startup of a new instance. If it's dynamic, you should be saving it to the database using their API.
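The question is about Go, but the pattern is the same on any runtime: check the cache at the start of the request, fall back to durable storage, and write back on a miss. A sketch using the first-generation Python APIs (the Go runtime has equivalent memcache and datastore packages):

```python
from google.appengine.api import memcache
from google.appengine.ext import ndb

class Counter(ndb.Model):
    value = ndb.IntegerProperty(default=0)

def load_state(key="counter"):
    state = memcache.get(key)                # fast path: shared cache
    if state is None:
        entity = Counter.get_or_insert(key)  # durable fallback
        state = entity.value
        memcache.set(key, state)             # repopulate for later requests
    return state
```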
My recommendation for keeping a GAE instance alive: either pay for the Always-On service or follow my recommendations for using a cron here:
http://rwyland.blogspot.com/2012/02/keeping-google-app-engine-gae-instances.html
I use what I call a "prime schedule" of 3-, 7-, and 11-minute cron jobs.
You should consider using Backends if you want long-running instances with resident memory.

Latency accessing Google App Engine overseas

I am about to begin development of a web app in New Zealand for a NZ market, for which scalability is a key requirement. I am contemplating using Google App Engine, which I have used in the past for smaller projects where latency was not a big issue, because half the app is client-side JavaScript.
However, the new project requires fast AJAX response times. The local web-app companies charge about $175/month for a dedicated server (much more than in the US, I would imagine).
Is there likely to be a significant difference in latency for AJAX requests if I use Google App Engine (hosted in the US, I presume?) vs. a local hosting company here in New Zealand? If so, how big?
A service which may interest you in this context is CloudSleuth. They measure page load times from multiple locations; select Asia/Oceania as the location, then drill down to GAE to see page load times from various places. Unfortunately the closest is Sydney, where the page load for GAE is currently almost 20s.
From your explanation, you would like to use App Engine as your backend; there should not be any latency problems other than the time your app takes to load and serve a request. But as they say, there is no better test than the one you do yourself, so go ahead, play with App Engine and see for yourself!
Happy coding!
It's unavoidably the case that the latency for a request within New Zealand is going to be lower than the latency for a request to the US and back, all else being equal. There are several mitigating factors to consider, though:
The speed-of-light delay may not be significant for your application. The round trip time to the US and back is under 100 milliseconds; the latency generated by your app serving the request may be large enough that this is not a significant factor on the end-user latency.
Although your app is only in a single location at any one time, Google has caching frontends all around the world. Requests typically get routed to the closest one, and if your app generates cacheable responses, the frontend may be able to return a response from its cache immediately, without having to ever send the request to your app.
Some ISPs, particularly in places like NZ where international bandwidth is expensive, run transparent proxies. Likewise, so do organisations, and your browser itself has a cache. Any of these can satisfy the request in less time than a roundtrip, if the response is cacheable.
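To benefit from any of those caches, the response must carry explicit caching headers; a minimal sketch in Python (Flask, used purely for illustration):

```python
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/report")
def cacheable():
    resp = make_response("rarely-changing content")
    # Lets Google's frontend caches, ISP proxies and browsers serve
    # this for an hour without a round trip to the instance.
    resp.headers["Cache-Control"] = "public, max-age=3600"
    return resp
```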
In the end, the question is whether or not the extra 100 milliseconds or so is acceptable. More often than not, the answer is yes, and it's worth the tradeoff of not having to handle machine provisioning, maintenance, etc etc yourself.
App Engine is not globally distributed.
The whole application is hosted around North America by default.
If you pay for the service you may request hosting within Europe instead, but there is no option to select any other region (from https://developers.google.com/appengine/docs/python/gettingstartedpython27/uploading).

Relative advantages of storage using Amazon Web Services S3 vs Google Application Engine

What do you see as the advantages and disadvantages of Amazon Web Services S3 compared with Google Application Engine? The cost per gigabyte for the two is, at the time I ask, roughly similar; I have not seen any widespread complaints about the quality of service; so I think the decision of which one to use may depend on the API (of all things).
Google's API breaks your content into what they call static content, such as your CSS files, favicons, images, etc., and non-static, dynamically generated HTTP responses. Requests for static content are served to whoever requests them until your bandwidth limit is reached; non-static requests are fulfilled until your bandwidth or CPU limit is reached. For your non-static requests, you can provide any logic you are able to express in Python, so you can be choosy about whom you serve.
Amazon's API treats all your content as blobs in a bucket, and provides an access protocol that lets you distinguish between a variety of fulfillable requests, ranging from world-readable to owner-only. If you want something that's not in the kit, though, I don't know what you do beyond being thoughtful about distributing your URIs.
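(For illustration with current tooling, which postdates this question: with boto3, the world-readable versus owner-only distinction is a one-argument difference. Bucket and key names below are hypothetical.)

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-bucket"  # hypothetical bucket name

# World-readable: anyone with the URL can fetch it.
s3.put_object(Bucket=BUCKET, Key="public/logo.png",
              Body=b"...", ACL="public-read")

# Owner-only (the default): requests must be signed.
s3.put_object(Bucket=BUCKET, Key="private/report.csv",
              Body=b"...", ACL="private")
```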
What differences do you see between the two? Are there other cloud storage services you like? Zetta had a press release today, but they're looking for a minimum of ten terabytes on the beta application, and none of my clients are there (yet); and Joyent will probably do something in the near future.
The way I see it, Google App Engine basically provides a sandbox for you to deploy your app, as long as it is written to their requirements (Python etc.). Amazon gives you a virtual machine, with a lot more flexibility in what can be done but probably more work needed on your side. MS's new Azure seems to be going down the GAE route, but with .NET replacing Python.
GAE has a limit of 10MB per static file uploaded through appcfg.py (look right at the bottom of http://code.google.com/appengine/docs/python/tools/uploadinganapp.html). Obviously you can write code to slice large files into pieces and reassemble them at download time, but it suggests to me that Google doesn't expect App Engine to be used as a simple CDN, and that if you want to use it as one you'll have to do some work. S3 does the job out of the box; all you have to do is grab a third-party interface app.
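The slicing itself is straightforward; the real work is the bookkeeping and reassembly at download time. A sketch of the split step, staying under the 10MB limit:

```python
CHUNK = 9 * 1024 * 1024  # stay safely under the 10MB static-file limit

def split_file(path):
    """Write path as path.000, path.001, ... each below the limit."""
    with open(path, "rb") as src:
        for i, data in enumerate(iter(lambda: src.read(CHUNK), b"")):
            with open("%s.%03d" % (path, i), "wb") as part:
                part.write(data)
```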
If you want to do something non-standard with file access on S3, then probably Amazon expects you to spring for a server instance on EC2. Once this is done, you have much more flexibility than GAE's environment, but you pay more (in cash and probably in maintenance).
The plus point for GAE is that it has "cheap" on its side for small apps (up to 1GB storage, 1GB bandwidth and 1.3 million hits a day are free: http://code.google.com/appengine/docs/quotas.html). Depending on your use, this might be significant, or it might be irrelevant on the scale of your total bandwidth costs.
Coincidentally, I have just spent the last couple of days looking at GAE for the first time. I took an old Perl CGI script and turned it into a GAE app, which is up and running. About 10 hours total, including reading the GAE introductory docs and remembering enough of how Python is supposed to work to write a couple of hundred lines. I'd speculate that's more effort than loading a bunch of files onto S3, but less effort than maintaining EC2 server(s). However, I haven't used Amazon.
[Edited to add: this sounds as though the advantages are all with Amazon for commercial purposes. That may well be true, but GAE is not yet mature and will presumably improve fairly rapidly from here. They only let people start paying in December or so; before that it was free-quota-only except by special arrangement with Google. While Google sometimes takes flak for its claims of "perpetual beta", I think GAE genuinely is still starting up. If your app is a good fit for the BigTable data paradigm, it might scale better on GAE than on EC2. For storage I assume that S3 is already good enough for all reasonable purposes, and Google's clever architecture gives GAE no compensating advantage when all you're doing is serving files.]
* Except that Google has just offered me a preview of GAE's Java support.
** Just noticed that you can set up cron jobs, but they're limited by the same rules as any other request (30-second runtime, can't modify files, etc.).
