I'm trying to set up two Google App Engine modules where one of the modules is configured with Basic scaling so it can handle long-running computation. The front-end module interacts with the user and enqueues tasks.
I need the front-end module to be able to enqueue a task that the back-end module picks up and executes. I've gotten it mostly to work, except that when I enqueue the task, it gets assigned to run in the front-end module rather than the back-end module.
The problem is in the development server environment. On production App Engine it seems clear how to do it, by simply setting the module name in the "Host" header:
import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;

Queue queue = QueueFactory.getDefaultQueue();
TaskOptions taskOptions = TaskOptions.Builder.withUrl("/longtest").param("content", content).header("Host", "nbsocialmetrics-backend");
log.info("SignGuestbookServlet taskOption " + taskOptions);
queue.add(taskOptions);
But in the development server, modules are addressed by port number rather than by module name. I don't think using the <target> parameter will work either, because it also addresses the module by name rather than by port number.
I've found that the approach with the Host header DOES work on the development server with App Engine SDK version 1.9.15.
Note: this was also posted as an issue on code.google.com: Task queues don't work with multiple modules on development server.
Problem
It is not clear to me how to configure a Kotlin MPP (multiplatform project) using Gradle (Kotlin DSL) to use Vert.x Web on the Kotlin/JVM target with Kotlin React on the Kotlin/JS target.
Update
You can check out the updated minimal example for a working solution, inspired by the approach of Alexey Soshin.
What I've tried
Have a look at my minimal example on GitHub of a Kotlin MPP with the Vert.x web server on the JVM target and Kotlin React on the JS target.
You can make it work if you:
1. first run the Gradle task browserDevelopmentRun (I don't understand the magic behind it), and after the browser opens and renders the React SPA (single-page application),
2. stop that task, and then
3. start the Vert.x backend with the run task.
After that, without refreshing the SPA remaining in the browser, you can confirm that it communicates with the backend: pressing the button alerts the received data.
Question
What are the possible ways/approaches to glue these two targets together so that, when I run my application, the JS target is assembled and conveniently served via the JVM backend?
I am thinking that perhaps Gradle should trigger some of the Kotlin browser tasks and then make their output available in some way to the Vert.x backend.
If you'd like to run a single task, though, you need your server task to depend on your JS compilation. Add the following to your build.gradle.kts:
tasks.withType<org.jetbrains.kotlin.gradle.tasks.KotlinCompile> {
    dependsOn(tasks.getByName<org.jetbrains.kotlin.gradle.targets.js.webpack.KotlinWebpack>("jsBrowserProductionWebpack"))
}
Now invoking run will also invoke webpack.
Next you want to serve your files. There are different ways of doing it. One is to copy them into the Vert.x resources directory using Gradle (sketched below, after the next snippet). Another is to point Vert.x at the directory where webpack puts them by default:
route().handler(StaticHandler.create("../../../distributions"))
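For the copy approach, a minimal Gradle (Kotlin DSL) sketch could look like the following. It assumes a JVM target named "jvm" and that the Kotlin plugin writes the bundle to build/distributions (the output directory varies across plugin versions); the "webroot" folder name is my own choice, not something from the project:

// Copy the webpack bundle into the JVM target's processed resources so that
// Vert.x can later serve it from the classpath.
tasks.named<Copy>("jvmProcessResources") {
    dependsOn("jsBrowserProductionWebpack")
    from(layout.buildDirectory.dir("distributions")) {
        into("webroot") // lands as a "webroot" folder on the classpath
    }
}

With this in place, StaticHandler.create("webroot") serves the bundle without any relative-path tricks.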
There are a bunch of different things going on here.
First, both your Vert.x server and the webpack dev server run on the same port. The easiest way to fix that is to start Vert.x on some other port, like 18080:
.listen(18080, "localhost") { result ->
And then change your index.kt file to use that port:
val result: SomeData = get("http://localhost:18080/data")
Because we run on different ports now, we also need to register a CORS handler:
router.apply {
    route().handler(CorsHandler.create("*"))
}
Last is the fact that you cannot run two never-ending Gradle tasks from the same process (OK, you can, but that's complicated). So what I suggest is that you open two terminals and run:
./gradlew run
in one, and
./gradlew jsBrowserDevelopmentRun
in the other.
Having done all that, you should see the React SPA in the browser, successfully fetching data from the Vert.x backend.
Now, this is for development mode. For production mode, you probably don't want to run jsBrowserDevelopmentRun, but instead tie jsBrowserProductionWebpack to your run task and serve spa.js from your Vert.x app using a StaticHandler. But this answer is already too long.
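Still, here is a minimal sketch of that production wiring (Kotlin; it assumes the bundle was copied into a classpath folder named "webroot" as in the Gradle sketch above, and the /data route is illustrative):

import io.vertx.core.Vertx
import io.vertx.ext.web.Router
import io.vertx.ext.web.handler.StaticHandler

fun main() {
    val vertx = Vertx.vertx()
    val router = Router.router(vertx)
    // Register the API route first so it isn't shadowed by the static handler.
    router.get("/data").handler { ctx ->
        ctx.response().putHeader("content-type", "application/json").end("""{"content":"hello"}""")
    }
    // Serves index.html and spa.js from the classpath; frontend and backend now
    // share one origin, so the CORS handler from development mode isn't needed.
    router.route("/*").handler(StaticHandler.create("webroot"))
    vertx.createHttpServer().requestHandler(router).listen(8080)
}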
Basically what the title says. The API and client docs state that a retry can be passed to create_task:
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will
be retried using a default configuration.
But this simply doesn't work. Passing a Retry instance does nothing and the queue-level settings are still used. For example:
from google.api_core.retry import Retry
from google.cloud.tasks_v2 import CloudTasksClient

client = CloudTasksClient()
# placeholders for project, location and queue name
parent = client.queue_path('my-project', 'us-central1', 'my-queue')
task = {'app_engine_http_request': {'relative_uri': '/foo'}}
retry = Retry(predicate=lambda _: False)
client.create_task(parent=parent, task=task, retry=retry)
This should create a task that is not retried. I've tried all sorts of different configurations, and every time it just uses whatever settings are set on the queue.
You can pass a custom predicate to retry on different exceptions, but note that this Retry object governs retries of the create_task request itself (the RPC), not retries of the created task's execution. There is no formal indication that this parameter prevents task-level retrying. You may check the Retry page for details.
Google Cloud Support has confirmed that task-level retries are not currently supported. The documentation for this client library is incorrect. A feature request exists here https://issuetracker.google.com/issues/141314105.
Task-level retry parameters are available in the Google App Engine bundled service for task queuing, Task Queues. If your app is on GAE, which I'm guessing it is since your question is tagged with google-app-engine, you could switch from Cloud Tasks to GAE Task Queues.
Of course, if your app relies on something that is exclusive to Cloud Tasks, like the beta HTTP endpoints, the bundled service won't work (see the list of new features; don't worry about the "List Queues command" since you can always see that in the configuration you would use in the bundled service). Barring that, here are some things to consider before switching to Task Queues.
Considerations
Supplier preference - Google seems to be preferring Cloud Tasks. From the push queues migration guide intro: "Cloud Tasks is now the preferred way of working with App Engine push queues"
Lock in - even if your app is on GAE, moving your queue solution to the GAE bundled one increases your "lock in" to GAE hosting (i.e. it makes it even harder for you to leave GAE if you ever want to change where you run your app, because you'll lose your task queue solution and have to deal with that in addition to dealing with new hosting)
Queues by retry - the GAE Task Queues to Cloud Tasks migration guide section Retrying failed tasks suggests creating a dedicated queue for each set of retry parameters, and then enqueuing tasks onto the matching queue. This might be a suitable way to continue using Cloud Tasks (see the sketch after this list)
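For illustration, a hypothetical queue.yaml along these lines would define one queue per retry profile (the queue names and limits here are made up), and the app would then pick a queue at enqueue time according to the retry behaviour it wants:

queue:
- name: no-retries
  retry_parameters:
    task_retry_limit: 0
- name: patient-retries
  retry_parameters:
    task_retry_limit: 7
    task_age_limit: 2d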
I have an AppEngine app with 2 services, where Service A is queueing tasks for Service B using the task (push) queue. How do I test this using the development server? When running multiple services with the development server, each service gets a unique port number, and the task queue can't resolve the URL because the target URL is actually running on another port, i.e. Service A is on port 8080 and Service B is on port 8081. This all works great in production where everything is on the same port, but how do I go about testing this locally?
The push queue configuration allows for specifying the target service by name, which the development server understands. From Syntax:
target (push queues)
Optional. A string naming a service/version, a frontend version, or a
backend, on which to execute all of the tasks enqueued onto this
queue.
The string is prepended to the domain name of your app when
constructing the HTTP request for a task. For example, if your app ID
is my-app and you set the target to my-version.my-service, the
URL hostname will be set to
my-version.my-service.my-app.appspot.com.
If target is unspecified, then tasks are invoked on the same version
of the application where they were enqueued. So, if you enqueued a
task from the default application version without specifying a target
on the queue, the task is invoked in the default application version.
Note that if the default application version changes between the time
that the task is enqueued and the time that it executes, then the task
will run in the new default version.
If you are using services along with a dispatch file, your task's
HTTP request might be intercepted and re-routed to another service.
For example a basic queue.yaml would be along these lines:
queue:
- name: service_a
  target: service_a
- name: service_b
  target: service_b
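With queues defined like that, the application enqueues onto the named queue rather than the default one. A minimal sketch (Kotlin here for brevity against the GAE Java task queue API; the task URL is illustrative):

import com.google.appengine.api.taskqueue.QueueFactory
import com.google.appengine.api.taskqueue.TaskOptions

// Tasks added to the "service_b" queue are routed to Service B via the
// queue's target, which the dev server resolves to the right port.
val queue = QueueFactory.getQueue("service_b")
queue.add(TaskOptions.Builder.withUrl("/service_b/work"))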
I'm not 100% certain if this alone is sufficient, personally I'm also using a dispatch.yaml file as I need to route requests other than tasks. But for that you need to have a well-defined pattern in the URLs as host-name based patterns aren't supported in the development server. For example if the Service A requests use /service_a/... paths and Service B use /service_b/... paths then these would do the trick:
dispatch:
- url: "*/service_a/*"
  service: service_a
- url: "*/service_b/*"
  service: service_b
In your case it might be possible to achieve what you want with just a dispatch file - i.e. still using the default queue. Give it a try.
We have been using push queues for a very long time and have had no problems consuming the tasks from a dev server.
However, while implementing a new service with a pull queue, it became difficult to figure out how to do the same thing on the dev server.
Basically, from the docs, what we can see is that you should use a REST API (we can't use the direct queue API as the tasks are consumed by an external app) to lease/delete a task with the endpoint
https://www.googleapis.com/taskqueue/v1beta1/projects/taskqueues
But obviously this will not work on the local dev server, and it appears that no documentation covers this.
Just wondering if anyone has ever run into the same issue and can shed some light?
With pull queues, the task consumer can be internal or external.
If you need it to work on the dev server, then just create a handler (a servlet) and use the internal API to add, lease and delete tasks.
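A minimal sketch of such a handler (Kotlin; the queue name, lease duration and processing logic are assumptions, not from the original setup):

import com.google.appengine.api.taskqueue.QueueFactory
import java.util.concurrent.TimeUnit
import javax.servlet.http.HttpServlet
import javax.servlet.http.HttpServletRequest
import javax.servlet.http.HttpServletResponse

class PullWorkerServlet : HttpServlet() {
    override fun doGet(req: HttpServletRequest, resp: HttpServletResponse) {
        val queue = QueueFactory.getQueue("my-pull-queue") // assumed queue name
        // Lease up to 100 tasks for 60 seconds, process them, then delete them.
        val tasks = queue.leaseTasks(60, TimeUnit.SECONDS, 100)
        for (task in tasks) {
            log("processing payload: " + String(task.payload))
            queue.deleteTask(task) // delete only after successful processing
        }
        resp.status = 200
    }
}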
I have a Silverlight 4 client running on a Facebook page hosted on Google App Engine. It's using gminifb to communicate with the Facebook API. The Silverlight client uses POST calls to the URIs for each method and passes the session information from Facebook with each call.
The project's growing and it would be super-useful if I could set up a unit-testing system to make a variety of the server calls so that when I make changes I can ensure everything else still works. I've worked with nUnit before and I like what I've read of PEX but I'm not sure how to apply them to this situation.
What are the choices for creating a test system for this? What are the pros/cons of each?
How do I get started setting something like this up?
Solved. I did it as follows:
Created a special user account to be used for testing on the server that bypassed the authentication. Only did this in the test environment, by checking a debug flag in that environment's settings. This avoided creating any security hole in the live site (since the same debug flag is false there).
Created a C#.NET solution to test each API call. The host project is a console app (no need for a GUI) with three reusable synchronous methods:
SendFormRequest(WebRequest request, Dictionary<string,string> pairs),
GetJsonFromResponse(HttpWebResponse response),
and ResetAccount().
These three methods allow the host project to make HTTP requests on the server and to read the JSON responses.
Wrapped each server API call inside a method call in the host project.
Created an nUnit test project in the solution. Then simply created tests that call each wrapper method in the host project, using different parameters and changing values on the server.
Created a series of tests to verify correct error handling for invalid parameters and data.
It's working perfectly and has already identified a few minor issues. The result is immensely useful and will catch breaking changes on new deployments.