Based on 100 requests.
Region: southamerica-east1
When executing a GET request against xxx.appspot.com/api/v1/ping, the average response time is about 50 ms.
Example: Load time: 83 ms
When activating dispatch.yaml (gcloud app deploy dispatch.yaml) and executing the request against the new URL, xxx.mydomain.com/api/v1/ping, the average response time is 750 ms.
Example: Load time: 589 ms
dispatch.yaml
dispatch:
- url: "*/api/*"
  service: my-service
I'm using Spring Boot on the server. Here is my app.yaml:
service: my-service
runtime: java
env: flex
threadsafe: true

runtime_config: # Optional
  jdk: openjdk8

handlers:
- url: /api/*
  script: this field is required, but ignored

manual_scaling:
  instances: 1

resources:
  cpu: 2
  memory_gb: 2.3
How do I improve the response time?
Am I using the dispatch correctly to associate my requests with my domain?
curl -w "#curl-format.txt" -o ./ -s http://my.domnai.com/
time_namelookup: 0,253
time_connect: 0,328
time_appconnect: 0,000
time_pretransfer: 0,328
time_redirect: 0,000
time_starttransfer: 1,713
----------
time_total: 1,714
curl -w "#curl-format.txt" -o ./ -s http://my-app.appspot.com/
time_namelookup: 0,253
time_connect: 0,277
time_appconnect: 0,000
time_pretransfer: 0,277
time_redirect: 0,000
time_starttransfer: 0,554
----------
time_total: 0,554
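For reference, these timings come from a write-out format file passed to curl via -w "@curl-format.txt"; a format file along these lines (standard curl write-out variables) produces the output shown above:
time_namelookup:  %{time_namelookup}\n
time_connect:  %{time_connect}\n
time_appconnect:  %{time_appconnect}\n
time_pretransfer:  %{time_pretransfer}\n
time_redirect:  %{time_redirect}\n
time_starttransfer:  %{time_starttransfer}\n
----------\n
time_total:  %{time_total}\n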
Using a custom domain is rather orthogonal to using a dispatch file.
When App Engine receives a request it first needs to determine which application the request is destined for. By default it does that using exclusively the request's domain name, be it appspot.com or a custom domain. From Requests and domains:
App Engine determines that an incoming request is intended for your
app by using the domain name of the request.
While making this decision it also determines a particular version of a service in the application to send the request to, based on the rules described in Routing via URL.
Requests using a custom domain might require some additional processing compared to using appspot.com (I'm unsure about this), which could explain some increase in the response time. This can be confirmed by measurements. But if so, I don't think there's anything you can do about it.
Note that a dispatch file is not required to make the above-mentioned routing decisions, even if you use a custom domain. In fact there is no reference to the dispatch file anywhere in Adding a custom domain for your application. But if you want to alter these decisions then you need to use a dispatch file.
The dispatch file allows you to also take the request path into account (in addition to the request domain name) when making the routing decisions.
Using a dispatch file will increase the response time as the request domain and path must be sequentially compared against each and every rule in the dispatch file, until a match is found. If no match is found the request will be sent to the version of the app's default service configured to receive traffic. You can slightly reduce the processing time for particular services by placing their rules earlier in the dispatch file, but that's about all you can do.
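As a sketch (the service names below are placeholders, not from the question), putting the rule for the busiest path first means most requests match on the very first comparison:
dispatch:
- url: "*/api/*"
  service: my-api-service      # most traffic, matched on the first comparison
- url: "*/admin/*"
  service: my-admin-service    # rarer traffic, only checked if the rule above didn't match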
I'm using Cloud Tasks from GAE now.
Also, with GAE set as the backend of a load balancer, I am testing the following flow.
batch-service is a service I created.
1. Request to /job/test_cron from the local machine
2. Go to the Load Balancer
3. Go to GAE's service (batch-servise) from the Load Balancer
4. Create a Cloud Task and request /job/test_task from GAE
5. Go to GAE's service (batch-servise)
6. Process and complete
I made each setting assuming the above flow, but the request made when creating a task in GAE does not go to batch-servise; it goes to the default service instead.
Therefore, the actual processing is as follows.
1. Request to /job/test_cron from the local machine
2. Go to the Load Balancer
3. Go to GAE's service (batch-servise) from the Load Balancer
4. Create a Cloud Task and request /job/test_task from GAE
5. Go to GAE's service (default service)
6. Process and complete
GAE uses dispatch.yaml to direct all requests like /job/~ to batch-servise.
Therefore, requesting /job/test_cron directly to GAE works as expected.
When using a load balancer, I think that dispatch.yaml cannot be used because the IP of GAE is not used. Is this correct?
Also, if anyone else knows how to configure GAE dispatch, it would be very helpful if you could tell me.
To override the default service you can define AppEngineRouting, which defines routing characteristics specific to App Engine: service, version, and instance.
You can refer to this sample, which routes to the default service's /log_payload endpoint, and update it to this:
const task = {
  appEngineHttpRequest: {
    httpMethod: 'POST',
    relativeUri: '/log_payload',
    appEngineRouting: {
      service: 'batch-servise'
    }
  },
};
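If you happen to create the task from Python rather than Node.js, a roughly equivalent sketch with the google-cloud-tasks client would be the following (the project, location, and queue names are placeholders, and the exact enum and argument style depends on your client library version):
from google.cloud import tasks_v2

client = tasks_v2.CloudTasksClient()
# Placeholders: adjust project, location, and queue to your setup.
parent = client.queue_path('my-project', 'asia-northeast1', 'my-queue')

task = {
    'app_engine_http_request': {
        'http_method': tasks_v2.HttpMethod.POST,
        'relative_uri': '/job/test_task',
        # Route the task to batch-servise instead of the default service.
        'app_engine_routing': {'service': 'batch-servise'},
    }
}

response = client.create_task(request={'parent': parent, 'task': task})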
When using a load balancer, I think that dispatch.yaml cannot be used because the IP of GAE is not used. Is this correct?
The load balancer does not interfere or interact with routing rules in your dispatch.yaml file. The dispatch.yaml rules are not evaluated until a serverless NEG directs traffic to App Engine.
Configuring dispatch.yaml:
The root element in the dispatch.yaml file is dispatch: and contains a list of routing definitions that are specified by the following subelements.
Dispatch rules are order dependent, and only the first rule that matches a URL will be applied.
You may have a look at these Examples
For more information, see How Requests are Routed.
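For instance, a rule like the one described in the question (all /job/ paths routed to the batch service, with the service name taken from the question) would look roughly like:
dispatch:
- url: "*/job/*"
  service: batch-servise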
I have 2 services. One is hosted in Google App Engine and one is hosted in Cloud Run.
I use urlfetch (Python 2) imported from google.appengine.api in GAE to call APIs provided by the Cloud Run.
Occasionally there are a few (like <10 per week) DeadlineExceededError shown up like this:
Deadline exceeded while waiting for HTTP response from URL
But these few days such error suddenly occurs frequently (like ~40 per day). Not sure if it is due to Christmas peak hour or what.
I've checked the Load Balancer logs of Cloud Run and it turned out the request never reached the Load Balancer.
Has anyone encountered similar issue before? Is anything wrong with GAE urlfetch?
I found a conversation which is similar, but the suggestion was to handle the error...
Wonder what can I do to mitigate the issue. Many thanks.
Update 1
I checked again and found that some requests from App Engine did show up in the Cloud Run Load Balancer logs, but the timing is weird:
e.g.
Logs from GAE project
10:36:24.706 send request
10:36:29.648 deadline exceeded
Logs from Cloud Run project
10:36:35.742 reached load balancer
10:36:49.289 finished processing
Not sure why it took so long for the request to reach the Load Balancer...
Update 2
I am using GAE Standard located in US with the following settings:
runtime: python27
api_version: 1
threadsafe: true
automatic_scaling:
  max_pending_latency: 5s
inbound_services:
- warmup
- channel_presence
builtins:
- appstats: on
- remote_api: on
- deferred: on
...
The Cloud Run hosted API gateway I was trying to call is located in Asia. In front of it there is a Google Load Balancer whose type is HTTP(S) (classic).
Update 3
I wrote a simple script to directly call the Cloud Run endpoint using axios (whose timeout is set to 5s) periodically. After a while, some requests timed out. I checked the logs in my Cloud Run project, and 2 different phenomena were found:
For request A, pretty much like what I mentioned in Update 1, logs were found for both Load Balancer and Cloud Run revision.
Time of CR revision log - Time of LB log > 5s so I think this is an acceptable time out.
But for request B, no logs were found at all.
So I guess the problem is not about urlfetch nor GAE?
Deadline exceeded while waiting for HTTP response from URL is actually a DeadlineExceededError. The URL was not fetched because the deadline was exceeded. This can occur with either the client-supplied deadline (which you would need to change), or the system default if the client does not supply a deadline parameter.
When you make an HTTP request, App Engine maps this request to URLFetch. URLFetch has its own deadline that is configurable. See the URLFetch documentation.
You can set a deadline for each URLFetch request. By default, the deadline for a fetch is 5 seconds. You can change this default by:
For Java runtimes, including the following appengine.api.urlfetch.defaultDeadline setting in your appengine-web.xml configuration file; specify the timeout in seconds:
<system-properties>
  <property name="appengine.api.urlfetch.defaultDeadline" value="10"/>
</system-properties>
For Python runtimes, you can also adjust the default deadline by using the urlfetch.set_default_fetch_deadline() function. This function stores the new default deadline on a thread-local variable, so it must be set for each request, for example, in a custom middleware.
from google.appengine.api import urlfetch
urlfetch.set_default_fetch_deadline(45)
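You can also pass a deadline on an individual call instead of changing the default; a small sketch (the URL is a placeholder):
from google.appengine.api import urlfetch

# Per-request deadline of 30 seconds for this fetch only.
result = urlfetch.fetch('https://my-cloud-run-endpoint.example.com/api', deadline=30)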
If your Cloud Run service is processing long requests, you can increase the request timeout. If your service doesn't return a response within the time specified, the request ends and the service returns an HTTP 504 error.
Update the timeoutSeconds attribute in the service's YAML file:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: SERVICE
spec:
  template:
    spec:
      containers:
      - image: IMAGE
      timeoutSeconds: VALUE
OR
You can update the request timeout for a given revision at any time by using the following command:
gcloud run services update [SERVICE] --timeout=[TIMEOUT]
If requests are terminating earlier with error code 503, you might need to update the request timeout setting for your language framework:
Node.js developers might need to update the server.timeout property via server.setTimeout (use server.setTimeout(0) to achieve an unlimited timeout), depending on the version you are using.
Python developers need to update Gunicorn's default timeout.
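For example, if the service is started with Gunicorn, the timeout can be raised or disabled with 0 on the command line; the entrypoint below is only a sketch with placeholder module and worker settings:
gunicorn --bind :$PORT --workers 1 --threads 8 --timeout 0 main:app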
I have been struggling with this problem for a couple of days now. I hope someone can point out where I made the mistake.
I have a domain mydomain.com (not purchased through Google).
I have one AppEngine Standard Service (default) with my website.
Then I have 30 additional AppEngine Standard Services for my different APIs (service1-service30).
When I browse to mydomain.com or www.mydomain.com I should be redirected to the "default" service.
When I browse to e.g. service25.mydomain.com I want to be routed to the corresponding service, in this example the service "service25".
What I did:
AppEngine Settings
DomainRegistrar
Dispatch.yaml:
dispatch:
- url: "mydomain.com/*"
service: default
- url: "www.mydomain.com/*"
service: default
Calling mydomain.com and www.mydomain.com works as expected with a valid SSL certificate.
Calling service25.mydomain.com redirects to the "default" service
Calling service26.mydomain.com returns ERR_CONNECTION_CLOSED
Calling service27.mydomain.com returns DNS_PROBE_FINISHED_NXDOMAIN
According to Mapping Custom Domains and How Requests are Routed, at least one of my previous methods should work and the default mapping/routing to the corresponding service name should happen.
If I add
- url: "service25.mydomain.com/*"
service: service25
it works, but due to the limit of a maximum of 20 routes in the dispatch.yaml file this is not a workable solution for me.
What did I do wrong?
Thank you very much in advance.
I currently use app engine standard environment with django. I want to have automatic scaling and always have at least one instance running.
Consulting the documentation, it says that to use min_instances it is recommended to have warmup requests enabled.
My question is: is this mandatory? Is there no way to always have an active instance without using warm up requests?
This is probably more of a question for Google engineers. But I think that they are required. The docs don't say "recommended"; they say "must".
Imagine if your instances shut down because of a server reboot. The warmup request gets them running again. A start request would also do the trick, but after some delay. It could be that Google depends on sending warmup requests after a reboot, and not start requests.
UPDATE
You just need a simple URL handler that returns a 200 response. It could be something as simple as this in your app.yaml:
- url: /_ah/warmup # just serve something simple and quick
  static_files: static/img/favicon.ico
  upload: static/img/favicon.ico
Or better, in your urls.py, point the url handler to a view like this:
(r'^_ah/warmup$', 'warmup'),
in views.py:
from django.http import HttpResponse

def warmup(request):
    return HttpResponse('hello', content_type='text/plain')
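Note that warmup requests also need to be enabled in app.yaml via the inbound_services directive (as described in the App Engine docs), otherwise /_ah/warmup is never called:
inbound_services:
- warmup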
I am developing a GAE application. Using localhost for development is a nuisance because there are some interacting components that require the system to be on the internet. However, I feel weird about having a pre-release version of the app live, so I enable it when I'm troubleshooting and then disable it. It would be better to require admin login so I can have it online and keep it private. When I make the (very simple) necessary changes to app.yaml and update the app, nothing changes. I can still access it without being logged in (I checked that I was logged out of Google). Any ideas? My app.yaml text is below. Incidentally, the only other handler that requires a login, remote_api, is also misbehaving. It returns the error 'This request did not contain a necessary header'.
application: (removed for privacy)
version: 1
runtime: python
api_version: 1
handlers:
- url: /remote_api
  script: $PYTHON_LIB/google/appengine/ext/remote_api/handler.py
  login: admin
- url: /stylesheets
  static_dir: stylesheets
- url: /javascript
  static_dir: javascript
- url: /images
  static_dir: images
- url: /.*
  script: example.py
  login: admin
My best guess is that you weren't actually logged out. This can happen because there's a delay when you use the logout feature on other Google apps: to avoid having to check back with the Google Accounts service for every request, App Engine uses a short-lived cookie that allows access regardless of what the Google Accounts service thinks, until it times out (I think it's 5 minutes).
If you really want to check whether you can access this while logged out, use Chrome's Incognito Window. (Or wait 5 minutes. :-)
The remote_api behavior can also be explained: for security reasons (to thwart certain Javascript-based attacks) the remote_api handler doesn't let web browsers access the handler. It only accepts requests from the dedicated remote_api client library, which passes an extra header that Javascript code cannot set.
By the way, it's probably better to use the standard remote_api handler location and use the builtins clause to enable it:
builtins:
- remote_api: on
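With the builtin enabled at the standard location, you can then connect using the Python SDK's remote_api shell, roughly like this (the app id is a placeholder):
remote_api_shell.py -s your-app-id.appspot.com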