Start frontend Deferred task from backend on App Engine/Java - google-app-engine

Is it possible to start a Deferred task on the frontend from a Deferred task running on a backend on App Engine/Java? Deferred tasks are started on a backend using a specific host, with code like:
queue.add(withPayload(new MyDeferredTask()).header("Host",
    BackendServiceFactory.getBackendService().getBackendAddress("backend1", 1)));
And this works well. If a Deferred task is started from this backend then the task also runs on the backend. Is there a specific host to be used, or another means of explicitly starting a Deferred task on the frontend?
Update
I missed out a bit of important info in the original question: I'm talking about Deferred tasks, where a payload is passed in. Starting a Deferred task from a backend starts the new Deferred task on the same backend. What I want to know is if it's possible to explicitly start a Deferred task on the frontend when it's started from a backend. The original question above has been modified to reflect this.

To answer my old question, specifically for deferred tasks - if you have a task running on a backend and want to start a task on the frontend, you should explicitly specify the host of the frontend instance i.e. myapp.appspot.com. If you don't specify a host then the task will run on the same instance as the starting code.
To explicitly start on the frontend, regardless of the instance the caller is running on, do something like:
Queue queue = QueueFactory.getQueue("my-queue");
TaskOptions taskOptions = TaskOptions.Builder.withPayload(new MyDeferredTask());
taskOptions.header("Host", "myappid.appspot.com");
queue.add(taskOptions);

Just call the URL of the front-end servlet you want to run with the URL Fetch service, or add a task to a queue with that servlet's URL.
Front-ends handle all the HTTP calls to your application and dispatch them to the servlet configured in the web.xml file.

Related

How to develop locally with service worker?

Instead of generating a build every time I make a change, I want to use the service worker while developing. I've already managed to use the https protocol with a valid certificate, but the service worker doesn't install. I imagine it is related to the following error: "No matching service worker detected. You may need to reload the page, or check that the service worker for the current page also controls the start of the URL from the manifest."

Cloud Tasks client ignores retry configuration

Basically what the title says. The API and client docs state that a retry can be passed to create_task:
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will
be retried using a default configuration.
But this simply doesn't work. Passing a Retry instance does nothing and the queue-level settings are still used. For example:
from google.api_core.retry import Retry
from google.cloud.tasks_v2 import CloudTasksClient
client = CloudTasksClient()
retry = Retry(predicate=lambda _: False)
client.create_task('/foo', retry=retry)
This should create a task that is not retried. I've tried all sorts of different configurations and every time it just uses whatever settings are set on the queue.
You can pass a custom predicate to retry on different exceptions. There is no formal indication that this parameter prevents retrying. You may check the Retry page for details.
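For context, a retry predicate decides whether a raised exception should trigger another attempt. Here is a simplified stdlib sketch of that mechanism (not the actual google.api_core implementation; `retry_call` and its signature are illustrative):

```python
import time

def retry_call(func, predicate, max_attempts=3, delay=0.0):
    """Call func, retrying only when predicate(exc) returns True."""
    for attempt in range(1, max_attempts + 1):
        try:
            return func()
        except Exception as exc:
            if attempt == max_attempts or not predicate(exc):
                raise
            time.sleep(delay)

# A predicate of `lambda _: False` rejects every exception, which
# disables client-side retrying entirely (the intent in the question).
calls = []
def flaky():
    calls.append(1)
    raise ValueError("boom")

try:
    retry_call(flaky, predicate=lambda _: False)
except ValueError:
    pass
assert len(calls) == 1  # the failing call was not retried
```

Note that this kind of retry happens client-side, on the RPC that *creates* the task; it is distinct from the queue-level retry of task *execution* that the question is actually about.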
Google Cloud Support has confirmed that task-level retries are not currently supported. The documentation for this client library is incorrect. A feature request exists here: https://issuetracker.google.com/issues/141314105.
Task-level retry parameters are available in the Google App Engine bundled service for task queuing, Task Queues. If your app is on GAE, which I'm guessing it is since your question is tagged with google-app-engine, you could switch from Cloud Tasks to GAE Task Queues.
Of course, if your app relies on something that is exclusive to Cloud Tasks like the beta HTTP endpoints, the bundled service won't work (see the list of new features, and don't worry about the "List Queues command" since you can always see that in the configuration you would use in the bundled service). Barring that, here are some things to consider before switching to Task Queues.
Considerations
Supplier preference - Google seems to prefer Cloud Tasks. From the push queues migration guide intro: "Cloud Tasks is now the preferred way of working with App Engine push queues"
Lock in - even if your app is on GAE, moving your queue solution to the GAE bundled one increases your "lock in" to GAE hosting (i.e. it makes it even harder for you to leave GAE if you ever want to change where you run your app, because you'll lose your task queue solution and have to deal with that in addition to dealing with new hosting)
Queues by retry - the GAE Task Queues to Cloud Tasks migration guide section Retrying failed tasks suggests creating a dedicated queue for each set of retry parameters, and then enqueuing tasks accordingly. This might be a suitable way to continue using Cloud Tasks
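The "queue per retry policy" idea from the migration guide amounts to a small routing table: each queue is configured with one set of retry parameters, and the enqueuing code picks the queue that matches the policy it wants. A sketch (the queue names and policy labels are made up):

```python
# Map each retry policy to a dedicated Cloud Tasks queue whose
# queue-level configuration implements that policy.
# Queue names here are hypothetical.
RETRY_QUEUES = {
    "no-retry":   "tasks-no-retry",    # queue configured with max_attempts = 1
    "aggressive": "tasks-retry-fast",  # short min_backoff, many attempts
    "default":    "tasks-default",
}

def queue_for_policy(policy: str) -> str:
    """Pick the queue whose configuration matches the desired retry policy."""
    return RETRY_QUEUES.get(policy, RETRY_QUEUES["default"])
```

The task payload itself stays the same; only the queue it is added to changes, so per-task retry behavior is expressed through queue choice rather than a per-task parameter.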

Testing multiple AppEngine services with task queues

I have an AppEngine app with 2 services, where Service A is queueing tasks for Service B using the task (push) queue. How do I test this using the development server? When running multiple services with the development server, each service gets a unique port number, and the task queue can't resolve the URL because the target URL is actually running on another port, i.e. Service A is on port 8080 and Service B is on port 8081. This all works great in production where everything is on the same port, but how do I go about testing this locally?
The push queue configuration allows for specifying the target service by name, which the development server understands. From Syntax:
target (push queues)
Optional. A string naming a service/version, a frontend version, or a
backend, on which to execute all of the tasks enqueued onto this
queue.
The string is prepended to the domain name of your app when
constructing the HTTP request for a task. For example, if your app ID
is my-app and you set the target to my-version.my-service, the
URL hostname will be set to
my-version.my-service.my-app.appspot.com.
If target is unspecified, then tasks are invoked on the same version
of the application where they were enqueued. So, if you enqueued a
task from the default application version without specifying a target
on the queue, the task is invoked in the default application version.
Note that if the default application version changes between the time
that the task is enqueued and the time that it executes, then the task
will run in the new default version.
If you are using services along with a dispatch file, your task's
HTTP request might be intercepted and re-routed to another service.
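The hostname construction described in that quote can be sketched as a one-liner (the app id and target names below are illustrative):

```python
def task_hostname(target: str, app_id: str) -> str:
    """Prepend the queue's target to the app's domain, as the docs describe."""
    return f"{target}.{app_id}.appspot.com"

# e.g. target "my-version.my-service" for app id "my-app":
assert task_hostname("my-version.my-service", "my-app") == \
    "my-version.my-service.my-app.appspot.com"
```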
For example a basic queue.yaml would be along these lines:
queue:
- name: service_a
  target: service_a
- name: service_b
  target: service_b
I'm not 100% certain if this alone is sufficient; personally I'm also using a dispatch.yaml file, as I need to route requests other than tasks. For that you need a well-defined pattern in the URLs, since host-name-based patterns aren't supported in the development server. For example, if Service A requests use /service_a/... paths and Service B requests use /service_b/... paths, then these would do the trick:
dispatch:
- url: "*/service_a/*"
  service: service_a
- url: "*/service_b/*"
  service: service_b
In your case it might be possible to achieve what you want with just a dispatch file - i.e. still using the default queue. Give it a try.

Programmatically listing and sending requests to dynamic App Engine instances

I want to send a particular HTTP request (or otherwise communicate a message) to every (dynamic/autoscaled) instance which is currently running for a particular App Engine application.
My goal is to trigger each instance to discard some locally cached data (because I have just modified the underlying data and want them to reload it).
One possible solution is to store a value in Memcache, and have instances check this each time they handle a request to see if they should flush their cache. But this adds latency to every request.
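The memcache approach usually stores a generation number rather than a flag: each instance remembers the generation its local cache was built against and flushes when the shared number changes. A sketch, with a plain dict standing in for memcache (on App Engine you would use the memcache client's get/incr with the same pattern; class and key names are illustrative):

```python
# A dict stands in for memcache here.
shared = {"cache_generation": 1}

class InstanceCache:
    """Per-instance cache that self-invalidates on a generation bump."""
    def __init__(self):
        self.generation = shared["cache_generation"]
        self.data = {}

    def get(self, key, loader):
        # Flush local data if another instance bumped the generation.
        current = shared["cache_generation"]
        if current != self.generation:
            self.data.clear()
            self.generation = current
        if key not in self.data:
            self.data[key] = loader()
        return self.data[key]

def invalidate_all():
    shared["cache_generation"] += 1  # memcache.incr() in production

cache = InstanceCache()
assert cache.get("k", lambda: "v1") == "v1"
invalidate_all()
assert cache.get("k", lambda: "v2") == "v2"  # reloaded after the flush
```

The cost is one memcache read per request (to compare generations), which is the latency the question is trying to avoid; the benefit is that no instance serves stale data for longer than one request.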
Another possible solution would be to somehow stop all running instances. No fixed overhead, but some impact while instances are restarted.
An even less desirable solution would be to redeploy the application code in order to cause all instances to be stopped. This now adds additional delay on my end as a deployment takes some time.
You could use the management API to list instances for a given version, but I'd suggest that you'd probably want to use something like the Pub/Sub API to create a subscription on each of your App Engine instances. Since each instance has its own subscription, any message published to the monitored topic will be received by all instances.
You can create the subscription at startup (the /_ah/start endpoint may be useful), and then delete it at shutdown (using the /_ah/stop endpoint).
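For the per-instance subscription to work, each instance needs a unique subscription name. One hypothetical naming scheme derives it from the runtime's environment variables (variable names here are those of newer App Engine runtimes and may differ by generation; the scheme itself is made up):

```python
import os

def instance_subscription_name(topic: str) -> str:
    """Build a per-instance Pub/Sub subscription id from runtime env vars."""
    service = os.environ.get("GAE_SERVICE", "default")
    instance = os.environ.get("GAE_INSTANCE", "local")
    return f"{topic}--{service}--{instance}"
```

Creating this subscription in the start handler and deleting it in the stop handler keeps the topic from accumulating dead subscriptions as instances come and go.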

How to auto-start an App-Engine backend when a pull-queue has tasks?

It looks like I can create a push-queue that will start backends to process tasks and that I can limit the number of workers to 1. However, is there a way to do this with a pull-queue?
Can App-Engine auto-start a named backend when a pull-queue has tasks and then let it expire when idle and the queue is empty?
It looks like I just need some way to call an arbitrary URL to "notify" it that there are tasks to process but I'm unable to find any documentation on how this can be done.
Use a cron task or a push queue to periodically start the backend. The backend can loop through the tasks (if any) in the pull queue, and then expire.
There isn't a notification system for pull queues, only inspection through queue statistics and empty/non-empty lease results.
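When the backend does wake up, it can lease tasks in a loop and return as soon as a lease comes back empty, letting the instance idle out. A sketch with a stub queue (in the real Java API this is Queue.leaseTasks/deleteTask; the helper below is illustrative):

```python
def drain_pull_queue(lease_batch, process, delete):
    """Lease tasks in batches until the queue is empty, then return."""
    drained = 0
    while True:
        tasks = lease_batch()      # e.g. queue.leaseTasks(...) in Java
        if not tasks:
            return drained         # empty lease result: stop and let the backend expire
        for task in tasks:
            process(task)
            delete(task)           # delete only after successful processing
            drained += 1

# Stub queue for illustration: two non-empty lease batches, then empty.
pending = [["a", "b"], ["c"], []]
seen, deleted = [], []
n = drain_pull_queue(lambda: pending.pop(0), seen.append, deleted.append)
assert n == 3 and seen == ["a", "b", "c"]
```

Deleting each task only after it has been processed means a crash mid-batch just lets the lease expire and the task become available again.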
First of all, you need to decide which scaling type you want to use for your module. I think you should take a look at Basic Scaling (https://developers.google.com/appengine/docs/java/modules/).
Next, to process tasks from the pull queue, you can use cron to check the queues every few minutes. It's important that the cron job targets the frontend module, not the basic scaling module, because cron requests will start instances. The problem is that you will still need to pay for at least one frontend instance, because the cron job will never allow it to shut down.
So the solution could be the following:
Start cron every 1 or 5 minutes and call the frontend
Check the queue in the frontend and issue a URLFetch request to the basic scaling module if there are tasks in the pull queue
Process the tasks in the queue using the basic scaling module
If you use F1 instances for the frontend and B2 or greater for the other modules, this could save you some money.
