Is there any way to view app engine endpoints latency? - google-app-engine

I need to view the average response time of different endpoints on my app engine application. Is there any way to do so? Setting a response header maybe? Generating a log?
I need to know which endpoints are slow to see if they need optimization.
I've seen that you can get the average response time across all endpoints, but I don't know if you can be more specific.

Related

Can Google App Engine Memcache Standard be accessed from an external server

I am trying to figure out how to access Google App Engine Memcache service from outside Google App Engine. Any help on how this can be done would be greatly appreciated.
Thanks in advance!
I don't think this is currently possible. I don't know if there is a technical reason for this or if the decision was made simply for billing purposes, but it seems that memcache is intended to be an integral part of App Engine. The only relevant discussion I could find is this feature request. It calls for the possibility of accessing the memcache data of one App Engine project from another App Engine project. It seems to me that Google didn't consider such functionality to be beneficial. You could try filing your own feature request to make memcache a standalone service. In case you do not succeed (and I am afraid you won't), here is a simple workaround.
A simple workaround:
Create a simple App Engine project which would serve as a facade over the memcache service. This dummy App Engine project would simply translate your HTTP requests into memcache API calls and return the obtained data in the body of an HTTP response. For example, to retrieve a memcache record you could send a GET request such as:
https://<your-project-id>.appspot.com/get?key=<some-particular-key>
This call would get "translated" into:
memcache.get(<some-particular-key>);
The obtained data would then be returned in the body of the HTTP response.
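For illustration, here is a minimal sketch of such a facade handler, assuming the Python 2 standard environment with webapp2 and the built-in memcache API. The /get route, the key parameter, and the JSON response shape are illustrative choices of mine, not part of any official API, and the cached values are assumed to be JSON-serializable:

    import json
    import webapp2
    from google.appengine.api import memcache

    class GetHandler(webapp2.RequestHandler):
        def get(self):
            key = self.request.get('key')
            value = memcache.get(key)  # None if the key is absent or expired
            self.response.headers['Content-Type'] = 'application/json'
            self.response.write(json.dumps({'key': key, 'value': value}))

    app = webapp2.WSGIApplication([('/get', GetHandler)])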
Since accessing memcache is free, you would only have to pay for instance time. I don't know what throughput you are expecting, but I can imagine scenarios where you could even fit into the free daily quota (currently 28 instance hours per day). All in all, the intermediate App Engine project should not come with a significant cost in either performance or price.
Before using this workaround:
The above snippet is intended for illustration purposes only. Some issues remain to be dealt with before using this approach in production. For example, as pointed out by Suken, anyone would be able to access your memcache if they knew what requests to send. Here are four additional things I would personally do:
Address the security issues by sending some authentication token with each request. An obvious necessity would be to make the calls over HTTPS to prevent man-in-the-middle attackers from obtaining this token. Note that App Engine's appspot.com subdomains are accessible via HTTPS by default.
Prefer batch API calls such as getAll() over their single record alternatives such as get(). Retrieving multiple records in one batch call is much faster than making multiple separate API calls.
Use POST requests (instead of GET) to access the facade application, so you won't have to worry about your batch requests being too large. I only used a GET request in the example above because it was easier to write; a POST-based batch variant is sketched after this list.
Check whether such usage of App Engine violates the Terms of Service. Personally, I don't believe it does, and I don't see why Google should mind. After all, you will be paying for instance hours.
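To make points 2 and 3 concrete, a batched, POST-based facade endpoint might look roughly like the following sketch (again assuming the Python standard environment; the /get_batch route and the JSON body format are illustrative assumptions, and the authentication check from point 1 is omitted). get_multi is the Python counterpart of the Java getAll() mentioned above:

    import json
    import webapp2
    from google.appengine.api import memcache

    class BatchGetHandler(webapp2.RequestHandler):
        def post(self):
            payload = json.loads(self.request.body)  # e.g. {"keys": ["a", "b"]}
            values = memcache.get_multi(payload.get('keys', []))
            self.response.headers['Content-Type'] = 'application/json'
            self.response.write(json.dumps(values))  # assumes JSON-serializable values

    app = webapp2.WSGIApplication([('/get_batch', BatchGetHandler)])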
EDIT: After giving this some more thought, I believe that the suggested workaround is actually what Google expects you to do. Given that Google's objective is to earn money, it would be unreasonable to provide a free service unless it was part of a paid one. Of course, other billing schemes could be created, for example allowing direct access only for developers who are willing to pay for dedicated memcache. The question is whether your use case is broad enough to convince Google to take action.
No, AFAIK the Memcache service is not available outside GAE. To be more specific, it is only available inside the GAE standard environment; it is unavailable in the GAE flexible environment.
But some of the alternate solutions suggested for GAE flexible users might be useable for you as well. From Memcache:
The Memcache service is currently not available for the App Engine flexible environment. An alpha version of the memcache service will be available shortly. If you would like to be notified when the service is available, fill out this early access form.
If you need access to a memcache service immediately, you can use the third party memcache service from Redis Labs. To access this service, see Caching Application Data Using Redis Labs Memcache.
You can also use Redis Labs Redis Cloud, a third party fully-managed service. To access this service, see Caching Application Data Using Redis Labs Redis.
As stated by other users, Memcache is not offered as a service outside GAE (Google App Engine). I would like to point out that implementing a GAE facade over the Memcache service has security ramifications. Please note that the facade GAE Memcache app will be exposed on the public internet like any other GAE service. I am assuming that you want to use Memcache for internal use only. Another aspect to think about is writing into memcache. If you intend to write to memcache from outside GAE, then definitely avoid the facade implementation. If compromised, anyone would be able to use your facade implementation as their own cache without paying for it ;)
My suggestion is to spin up a stack using GCP Cloud Launcher. There are various stack templates available for both Redis and Memcache stacks. Further you can configure the template to use preemptible burstable instances to reduce the cost of your Memcache.

Avoid DoS attacks on App Engine

I have a small question with possibly a complex answer. I have tried to research around, but I think I may not know the keywords.
I want to build a web service that will send a JSON response, which would be used for another application. My goal is having the App Engine server crawl a set of webpages and store the relevant values so the second application (client) would not need to query everything. It will only go to my server with the already condensed information.
I know, it's pretty common, but how can I defend from attackers who wish to exhaust my App Engine resources/quota?
I have been thinking on limiting the amount of requests by IP (say.. 200 requests / 5 minutes), but is that feasible? Or is there a better, and more clever way of doing it?
First, you need to cache the JSON. Don't hit the datastore for every request. Use memcache or possibly, depending on your requirements, cache the JSON in a static file in Cloud Storage. This alone is the best defense against DoS, since every request then adds minimal overhead.
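A minimal sketch of that pattern, assuming the Python standard environment; build_json_from_datastore() is a hypothetical helper standing in for your expensive datastore queries, and the 5-minute lifetime is an arbitrary choice:

    import json
    import webapp2
    from google.appengine.api import memcache

    CACHE_KEY = 'crawled-summary'

    class SummaryHandler(webapp2.RequestHandler):
        def get(self):
            payload = memcache.get(CACHE_KEY)
            if payload is None:
                payload = json.dumps(build_json_from_datastore())  # hypothetical helper
                memcache.set(CACHE_KEY, payload, time=300)  # rebuild at most every 5 minutes
            self.response.headers['Content-Type'] = 'application/json'
            self.response.write(payload)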
Also, take a look at the DoS protection service offered by App Engine:
https://developers.google.com/appengine/docs/java/config/dos
You could require users to log in, then generate and send an auth key to the client app that must accompany any requests to the App Engine service.
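One hedged sketch of such a check, assuming the Python standard environment with webapp2; the X-Auth-Token header name and the idea of keeping issued keys in memcache are illustrative assumptions, not a prescribed App Engine mechanism:

    import webapp2
    from google.appengine.api import memcache

    class ProtectedJsonHandler(webapp2.RequestHandler):
        def get(self):
            token = self.request.headers.get('X-Auth-Token')
            if not token or memcache.get('token:%s' % token) is None:
                self.abort(401)  # unknown or expired auth key
            self.response.headers['Content-Type'] = 'application/json'
            self.response.write('{"status": "ok"}')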

Strategy: How do I exchange data directly between JavaScript and Google App Engine DataStore

I am somewhat new to Web development - specifically Google App Engine and JavaScript/HTML development, but I have an app deployed and working on Google App Engine and it is working ok.
I would like a user of my App to be able to store and retrieve a serialization of the app state in JSON using the GAE Datastore. (Note - This is only a user-initiated action - so channels seem to be overkill)
The examples provided by Google demonstrate one approach that allows the server-side Python implementation to do this. Specifically https://developers.google.com/appengine/docs/python/gettingstartedpython27/usingdatastore. I have this working ok.
But this approach seems rather inelegant, especially if, as an "app", I want to store and retrieve serialized chunks of data asynchronously without reloading the page/app each time (again, this is only ever user-initiated).
I have not been able to find any high-level guidance on an approach to do that (assuming it is possible).
Any suggestions/links/examples would be greatly appreciated.
Thank you!
Jeff
As with many things, this depends on your specific needs. If you just want direct access to datastore storage, the datastore is exposed as an independent service with an API.
If you instead want to assert logic over the usage and interact with your app in some fashion, you may also want to look at Google Cloud Endpoints. With an Endpoints API, you gain a more structured API that you can call directly from JavaScript, or generate client libraries to be consumed by other languages/platforms.
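As a rough sketch of the Endpoints route, assuming Cloud Endpoints Frameworks on the Python standard environment; the AppState model, the StateMessage fields, and the API/method names are illustrative, not taken from the question:

    import endpoints
    from google.appengine.ext import ndb
    from protorpc import messages, remote

    class AppState(ndb.Model):
        payload = ndb.TextProperty()  # serialized app state (a JSON string)

    class StateMessage(messages.Message):
        user_id = messages.StringField(1, required=True)
        state_json = messages.StringField(2)

    @endpoints.api(name='appstate', version='v1')
    class AppStateApi(remote.Service):

        @endpoints.method(StateMessage, StateMessage, path='state/save',
                          http_method='POST', name='state.save')
        def save(self, request):
            AppState(id=request.user_id, payload=request.state_json).put()
            return request

        @endpoints.method(StateMessage, StateMessage, path='state/load',
                          http_method='POST', name='state.load')
        def load(self, request):
            entity = AppState.get_by_id(request.user_id)
            return StateMessage(user_id=request.user_id,
                                state_json=entity.payload if entity else None)

    api = endpoints.api_server([AppStateApi])

The generated JavaScript client library (or a plain XMLHttpRequest against the REST path) could then call state.save and state.load whenever the user explicitly saves or loads.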

Stack Exchange API compliant request throttle implementation on Google App Engine Cloud infrastructure

I have been writing a Google Chrome extension for Stack Exchange. It's a simple extension that allows you to keep track of your reputation and get notified of comments on Stack Exchange sites.
Currently I've encountered some issues that I can't handle myself.
My extension uses Google App Engine as its back-end to make external requests to the Stack Exchange API. A single client request from the extension for new comments on a single site can cause plenty of requests to the API endpoint to prepare a response, even for a non-skeetish user. The average user has accounts on at least 3 sites in the Stack Exchange network, and some have more than 10!
Stack Exchange API has request limits:
A single IP address can only make a certain number of API requests per day (10,000).
The API will cut my requests off if I make more than 30 requests over 5 seconds from single IP address.
It's clear that all requests should be throttled to 30 per 5 seconds, and currently I've implemented request-throttle logic based on a distributed lock with memcache. I'm using memcache as a simple lock manager to coordinate the activity of GAE instances and throttle UrlFetch requests.
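For reference, one simple fixed-window variant of such a memcache-based throttle might look like this sketch (Python standard environment assumed; the bucket naming and the conservative handling of eviction are my own choices, and the 30-per-5-seconds numbers come from the limits quoted above):

    import time
    from google.appengine.api import memcache

    WINDOW_SECONDS = 5
    MAX_REQUESTS = 30

    def try_acquire_slot():
        """Return True if another outgoing API call is allowed in the current window."""
        bucket = 'se-api-window-%d' % (int(time.time()) // WINDOW_SECONDS)
        # add() is atomic: only the first caller creates the counter for this window.
        memcache.add(bucket, 0, time=WINDOW_SECONDS * 2)
        count = memcache.incr(bucket)
        # If the counter was evicted between add() and incr(), be conservative and refuse.
        return count is not None and count <= MAX_REQUESTS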
But I think it's a big waste to limit such powerful infrastructure to no more than 30 requests per 5 seconds. Such an API request rate does not allow me to continue developing new, interesting, and useful features, and one day the extension will stop working properly altogether.
Now my app has 90 users and growing, and I need to come up with a solution to maximize the request rate.
As is known, App Engine makes external UrlFetch requests via a shared pool of different IPs.
My goal is to write request-throttling functionality that ensures compliance with the API terms of usage and utilizes GAE's distributed capabilities.
So my question is: how can I provide the maximum practical API throughput while complying with the API terms of usage and utilizing GAE's distributed capabilities?
Advice to use another platform/host/proxy is simply not useful to me.
If you are searching for a way to programmatically manage Google App Engine's shared pool of IPs, I firmly believe that you are out of luck.
Anyway, quoting this advice from the FAQ, I think you have more than a chance to keep your awesome app running:
What should I do if I need more requests per day?
Certain types of applications - services and websites to name two - can legitimately have much higher per-day request requirements than typical applications. If you can demonstrate a need for a higher request quota, contact us.
EDIT:
I was wrong, actually you don't have any chance.
Google App Engine [app]s are doomed.
First off: I'm using your extension and it rocks!
Have you considered using memcache and caching the results?
Instead of taking the results from the API directly, first try to find them in the cache; if they are there, use them, and if they are not, retrieve them, cache them, and let them expire after X minutes.
Second, try to batch up user requests: instead of asking for the reputation of a single user, ask for the reputation of several users together.
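A hedged sketch combining both suggestions, assuming the Python standard environment; the 300-second lifetime, the cache-key scheme, and the batched /2.2/users/{ids} call (semicolon-separated IDs) are illustrative choices, and urlfetch is assumed to handle the API's gzip encoding transparently:

    import json
    import urllib
    from google.appengine.api import memcache, urlfetch

    def get_reputations(site, user_ids, ttl=300):
        """Return {user_id: reputation}, serving from memcache when possible."""
        cache_key = 'rep:%s:%s' % (site, ';'.join(map(str, sorted(user_ids))))
        cached = memcache.get(cache_key)
        if cached is not None:
            return cached
        url = 'https://api.stackexchange.com/2.2/users/%s?%s' % (
            ';'.join(map(str, user_ids)), urllib.urlencode({'site': site}))
        result = urlfetch.fetch(url)
        items = json.loads(result.content).get('items', [])
        reps = dict((item['user_id'], item['reputation']) for item in items)
        memcache.set(cache_key, reps, time=ttl)
        return reps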

Is there any way for the Google App Engine's urlfetch to open and keep open a Twitter Streaming API connection?

The Twitter streaming API says that we should open an HTTP request and parse updates as they come in. I was under the impression that Google's urlfetch cannot keep an HTTP request open past 10 seconds.
I considered having a cron job poll my Twitter account every few seconds, but I think Google App Engine only allows cron jobs once a minute. However, my application needs near-real-time access to my Twitter @replies (preferably with a lag of 10 seconds or less).
Is there any method for receiving real-time updates from Twitter?
Thanks!
Unfortunately, you can't use the urlfetch API for 'hanging GETs'. All the data is returned when the request terminates, so even if you could hold it open arbitrarily long, it wouldn't do you much good.
Have you considered using Gnip? They provide a push-based 'web hooks' notification system for many public feeds, including Twitter's public timeline.
I'm curious.
Wouldn't you want this to be polling Twitter on the client side? Are you polling your public feed? If so, I would decentralize the work to the clients rather than the server...
It may be possible to use Google Compute Engine https://developers.google.com/compute/ to maintain unrestricted hanging GET connections, then call a webhook in your AppEngine app to deliver the data from your compute engine VM to where it needs to be in AppEngine.
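A hedged sketch of that relay idea in Python (running on the Compute Engine VM, not on App Engine): the webhook URL is a hypothetical placeholder, the requests library is an assumed dependency, and real Twitter streaming calls would also need OAuth signing, which is omitted here:

    import requests

    STREAM_URL = 'https://stream.twitter.com/1.1/statuses/filter.json'
    WEBHOOK_URL = 'https://your-app-id.appspot.com/twitter-webhook'  # hypothetical endpoint

    def relay(auth, track_terms):
        """Hold a streaming connection open and forward each update to App Engine."""
        with requests.post(STREAM_URL, auth=auth, data={'track': track_terms},
                           stream=True) as resp:
            for line in resp.iter_lines():
                if line:  # skip keep-alive newlines
                    requests.post(WEBHOOK_URL, data=line,
                                  headers={'Content-Type': 'application/json'})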
