How many async calls can we make in one request in Google App Engine?

I'm writing an app on top of Google App Engine, using Java as the language.
I'm new to GAE. I want to use the URL Fetch service for async calls in my application.
I have a few doubts:
1. How many async calls can I make in one App Engine request?
2. What happens if I make many async calls in one request, let's say 50?
3. What happens if these async calls take more than 1 minute (the App Engine request timeout)?
4. What is the default timeout for each async call? (I'm assuming it is 60 seconds. Is that correct?)

There is no limit on the number of calls that App Engine can process in general. There are limits (quotas) set for different APIs: e.g. number of Mail API, Task API, URLFetch, etc. calls. Often quotas are different for paying customers, and in most cases you can request to increase a quota if your app requires it. In other words, these quotas do not reflect a technological limitation - they are in place to prevent abuse.
The number of requests that your app can process depends on how many instances you have running, and how fast each call can be processed. You can enable multithreading, so each instance can process multiple calls in parallel.
As for the timeouts, they depend on the instance type and the type of calls that you are making (Datastore API, Cloud Storage API, URLFetch, etc.); for URLFetch in particular, you can set an explicit deadline on each request.
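To make this concrete, here is a minimal Java sketch of issuing several URLFetch calls in parallel from one request handler and waiting for all of them before the handler returns. The class name, URL list, and the 10-second deadline are illustrative choices, not App Engine defaults:

import com.google.appengine.api.urlfetch.FetchOptions;
import com.google.appengine.api.urlfetch.HTTPMethod;
import com.google.appengine.api.urlfetch.HTTPRequest;
import com.google.appengine.api.urlfetch.HTTPResponse;
import com.google.appengine.api.urlfetch.URLFetchService;
import com.google.appengine.api.urlfetch.URLFetchServiceFactory;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Future;

public class ParallelFetcher {
    // Fires off all fetches without blocking, then waits for the results.
    public static List<HTTPResponse> fetchAll(List<String> urls) throws Exception {
        URLFetchService fetcher = URLFetchServiceFactory.getURLFetchService();
        FetchOptions options = FetchOptions.Builder.withDeadline(10.0); // per-call deadline in seconds

        List<Future<HTTPResponse>> pending = new ArrayList<>();
        for (String target : urls) {
            HTTPRequest request = new HTTPRequest(new URL(target), HTTPMethod.GET, options);
            pending.add(fetcher.fetchAsync(request)); // returns immediately with a Future
        }

        List<HTTPResponse> responses = new ArrayList<>();
        for (Future<HTTPResponse> future : pending) {
            responses.add(future.get()); // blocks until that fetch finishes or its deadline hits
        }
        return responses;
    }
}

Each fetchAsync call returns immediately, so the fetches overlap; the handler as a whole still has to finish within the App Engine request deadline, which is why setting a per-call deadline well below that limit is useful.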

Related

Publishing messages to GCP Pub/Sub using the API is time-consuming

I have a Node.js app on the MongoDB cloud platform which will be used for posting 1 million messages to a topic in GCP Pub/Sub. Since the platform does not support the npm package @google-cloud/pubsub, we implemented it using the API reference for Pub/Sub. Upon load testing the app, I can see each message takes 50 seconds to post to the topic. Ideally it should take less than 5 seconds. It takes around 30 seconds for the access_token API call and 20 seconds for the message-posting API call. Since each message posting is an independent event, we cannot maintain a session to store the access_token and reuse it, and API_KEY authentication is not available for GCP Pub/Sub. Is the API method for GCP Pub/Sub much slower than using the library @google-cloud/pubsub?
Can anyone suggest a solution to improve the performance of GCP Pub/Sub using the APIs?
The Pub/Sub client libraries are greatly optimized in several ways. The first is the use of the gRPC protocol instead of the REST API. Then, there is message aggregation before a push to Pub/Sub (a 500 ms wait by default). Then, there are various async mechanisms to parallelize the processing.
So, a huge amount of great work was done by the client library teams, and it is hard (or expensive) to reproduce on your side. But you can: the sources are public, so you can have a look at the client libraries!
The 30 seconds for the access_token retrieval is far too long. Are you sure you don't have a network issue? In any case, this token is valid for 1 hour. If you can reuse it in your subsequent calls, you will save a lot of time!
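A minimal sketch of that token reuse, written in Java for consistency with the rest of this page (the original app is Node.js, where the same pattern applies). It assumes Application Default Credentials and the google-auth-library credentials classes are available; the class name is a placeholder:

import com.google.auth.oauth2.AccessToken;
import com.google.auth.oauth2.GoogleCredentials;

public class TokenCache {
    // Load the credentials object once; it caches the token it fetched and only
    // refreshes when the current one is close to expiry (tokens last about 1 hour).
    private static final GoogleCredentials credentials = loadCredentials();

    private static GoogleCredentials loadCredentials() {
        try {
            return GoogleCredentials.getApplicationDefault()
                    .createScoped("https://www.googleapis.com/auth/pubsub");
        } catch (java.io.IOException e) {
            throw new IllegalStateException("Could not load application default credentials", e);
        }
    }

    // Returns a bearer token to put in the Authorization header of the REST call.
    public static String bearerToken() throws java.io.IOException {
        credentials.refreshIfExpired(); // no-op while the cached token is still valid
        AccessToken token = credentials.getAccessToken();
        return token.getTokenValue();
    }
}

With the token cached this way, most publishes skip the token endpoint entirely and only the Pub/Sub publish call remains on the critical path.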

How to warm up an App Engine endpoint

I have an App Engine endpoint and am trying to reduce latency on the first few calls to a newly created endpoint instance. The application is written in Java and the endpoints are auto-scaled.
To address this issue I configured an idle instance, but even when an instance has already been created, the first few calls routed to it take some extra time. Following the documentation, I've implemented a custom servlet handling warmup requests and marked the EndpointsServlet as load-on-startup.
Inside the warmup servlet I've put code that initializes some commonly used services, loads some data, etc. The effect was almost impossible to notice.
After that, I implemented calls to methods exposed by the endpoint, like this:
call("/_ah/api/teamly/v1/test/dummy")
It works in some cases (even most of them), and after calling a few key methods the instance is really ready to serve. The problem I'm facing now is that if I'm using auto scaling for a module, I can't route a request to a specific instance.
So the question is:
How should I properly warm up an endpoint instance so that the loading isn't triggered by requests coming from the frontend?
You need to add a listener on /_ah/warmup and then make calls to any resources you want warmed up. You can find detailed information at:
Configuring Warmup Requests to Improve Performance
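A minimal sketch of such a warmup handler, assuming warmup requests are enabled for the service in appengine-web.xml; the servlet name is a placeholder, and the log call stands in for whatever initialization you actually need:

import java.io.IOException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// App Engine sends GET /_ah/warmup to a new instance when warmup requests are enabled.
@WebServlet(name = "WarmupServlet", urlPatterns = "/_ah/warmup")
public class WarmupServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // Touch whatever the first real request would otherwise pay for, e.g.
        // open Datastore/Memcache connections, load reference data, prime caches,
        // or call a few key endpoint methods. Replace this log line with that code.
        getServletContext().log("Warmup request handled, instance initialized");
        resp.setStatus(HttpServletResponse.SC_OK);
    }
}

Warmup requests are sent to a new instance before it normally receives user traffic, but they are best-effort, so an occasional cold user request can still slip through.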

Long-running script on Google App Engine

I'm attempting to create a microservice on Google App Engine that is not intended to handle HTTP requests.
Instead, I was hoping to have a continuously running Python script that monitors a remote queue (RabbitMQ, to be precise) and sends out an API call to another service as tasks are pushed to the queue.
I was wondering, firstly, is it possible to run a script upon deployment, one that does not originate with a user action/request?
Secondly, how would I accomplish this?
Thanks in advance for your time!
You can deploy your "script" as a manually scaled module (see https://cloud.google.com/appengine/docs/python/modules/) with exactly one instance. As the docs say, "When you start a manual scaling instance, App Engine immediately sends a /_ah/start request to each instance"; so, just set that module's handler for /_ah/start to the handler you want to run (in the module's yaml file and in the WSGI app in the Python code, using whatever lightweight framework you like: webapp2, falcon, flask, bottle, or whatever else; the framework won't be doing much for you in this case except the one-off routing).
Note that the number of free machine hours for manual scaling modules is limited to 8 hours per day (for the smaller, B1 instance class; proportionally fewer for larger instance classes), so you may need to upgrade to paid-app status if you need to run for more than 8 hours.
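For illustration, a minimal sketch of what that module's configuration might look like; the module name, instance class, and script path are placeholders, not anything the platform mandates:

# worker.yaml -- hypothetical manually scaled module with a single instance
module: worker
runtime: python27
api_version: 1
threadsafe: true
instance_class: B1

manual_scaling:
  instances: 1

handlers:
- url: /_ah/start          # App Engine calls this once when the instance starts
  script: worker.app       # WSGI app whose /_ah/start handler kicks off the work loop

The WSGI app referenced by worker.app would route /_ah/start to a handler that starts consuming the RabbitMQ queue; manually scaled instances can serve much longer-running requests than auto-scaled ones, which is what makes this pattern workable.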
Like @brant said, App Engine is designed to handle HTTP requests. It's not a perfect fit for background jobs, unless you wrap your logic into one HTTP request.
Further, App Engine will return an error when the response exceeds the request timeout, which depends on your scaling settings. If you want to try it, consider basic or manual scaling.
For this type of workload, I would suggest you use a VM.
I think there are a few problems with this design.
First, App Engine is designed to be an HTTP request processor, not a RabbitMQ message processor. GAE is intended for many small requests, not one long-running process.
Second, "RabbitMQ should not be exposed to the public internet, it wasn't created for such use case."
I would recommend that you keep the RabbitMQ clients on the same internal network as the RabbitMQ broker, and have the clients send HTTP requests to App Engine.
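To illustrate that shape (shown in Java for consistency with the rest of this page), here is a rough sketch of a consumer that lives next to the broker and forwards each queue message as an HTTP request to an App Engine handler; the broker host, queue name, and App Engine URL are placeholders:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class QueueForwarder {
    public static void main(String[] args) throws Exception {
        // Runs on the same internal network as the broker; only outbound HTTPS leaves it.
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("rabbitmq.internal");                 // placeholder broker host
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        HttpClient http = HttpClient.newHttpClient();
        String appEngineUrl = "https://your-app.appspot.com/tasks/process"; // placeholder handler URL

        DeliverCallback onMessage = (consumerTag, delivery) -> {
            String body = new String(delivery.getBody(), StandardCharsets.UTF_8);
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(appEngineUrl))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();
            try {
                // Each queue message becomes one short App Engine request.
                http.send(request, HttpResponse.BodyHandlers.ofString());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };
        channel.basicConsume("work-queue", true, onMessage, consumerTag -> { });
    }
}

This keeps the broker off the public internet while each message still turns into a small, short-lived App Engine request, which is the workload shape GAE is built for.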

GAE: How to find out which requests take a long time to complete?

Say that in a Google App Engine application (Java) some requests take a very long time to complete; perhaps some even time out after 30 seconds. Does the GAE Console (Dashboard, Monitoring or similar) provide any way to list the URLs (or any other request properties, such as API method calls) associated with the long-running requests?
https://cloud.google.com/appengine/docs/python/tools/appstats
The Python SDK includes the Appstats library, used for profiling the RPC (Remote Procedure Call) performance of your application. An App Engine RPC is a roundtrip network call between your application and an App Engine Service API. For example, all of these API calls are RPC calls:
- Datastore calls such as ndb.get_multi(), ndb.put_multi(), or ndb.gql()
- Memcache calls such as memcache.get() or memcache.get_multi()
- URL Fetch calls such as urlfetch.fetch()
- Mail calls such as mail.send()
Actually, the old Dashboard (https://appengine.google.com/dashboard) provides the info I wanted in the Current Load box (bottom left), in the Avg Latency (last hr) column.

Stackoverflow quota for a day

I'm using the Stack Overflow API in my Meteor application.
I'm calling the API directly from my code, like below:
HTTP.call("GET", urlString,{params:{site:"stackoverflow"}},function(error,result)
{
console.log(result.data);
});
I'm not using OAuth or a client ID / secret ID in my calls.
In the response, I'm getting a variable called quota, with maximum pings of 300.
Does that mean I can only call the API 300 times? I want more than that; I'm even ready to pay for it.
Is there a way to increase that number?
Thanks
You are throttled because you have not registered your application. If you register your application, you will receive a quota increase to 10,000 hits per day. You will want to read the authentication documentation on how to utilize the keys you receive from registering your application.
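As a rough sketch of what a keyed, unauthenticated call looks like (written in Java here to match the rest of this page; the endpoint path and the key query parameter follow the Stack Exchange API, and the environment variable name is a placeholder):

import java.io.ByteArrayInputStream;
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;

public class QuotaCheck {
    public static void main(String[] args) throws Exception {
        String key = System.getenv("STACKAPPS_KEY"); // the key issued when you register the app
        String url = "https://api.stackexchange.com/2.3/search"
                + "?site=stackoverflow"
                + "&intitle=" + URLEncoder.encode("app engine", StandardCharsets.UTF_8)
                + "&key=" + key; // identifies the registered application and raises the daily quota

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder().uri(URI.create(url)).GET().build();
        HttpResponse<byte[]> response = client.send(request, HttpResponse.BodyHandlers.ofByteArray());

        // The Stack Exchange API gzip-compresses its responses, so decompress before reading.
        String json = new String(
                new GZIPInputStream(new ByteArrayInputStream(response.body())).readAllBytes(),
                StandardCharsets.UTF_8);
        System.out.println(json); // the JSON body includes quota_max and quota_remaining
    }
}

Passing the key does not authenticate a user; it just identifies the registered application, which is what lifts the daily quota from 300 to 10,000. A user access_token on top of that is only needed for methods that act on a user's behalf.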
