How to manually set a task to run in a GAE queue for the second time - google-app-engine

I have a task that runs in a GAE queue.
Based on my own logic, I want to determine whether the task will run again or not.
I don't want it to be executed normally by the queue and then re-enqueue it myself,
because I want the ability to check the "X-AppEngine-TaskRetryCount" header
and quit after several attempts.
To my understanding, the only cases in which a task is re-executed are when an internal GAE error happens, or when my code takes too long and raises a DeadlineExceededException (and I don't want to hold the code "hostage" for that long :) )
How can I re-enter a task into the queue in a manner that makes GAE increment X-AppEngine-TaskRetryCount?

You can programmatically retry/restart a task by calling self.error() in Python.
From the docs: App Engine retries a task when the handler returns any HTTP status code outside of the range 200–299.
At the beginning of the task you can check the number of retries:
retries = int(self.request.headers['X-AppEngine-TaskRetryCount'])
if retries < 10:
    self.error(409)
    return
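As a sketch of this pattern, the retry decision can be factored into a plain function so it can be exercised without the App Engine runtime; the limit of 10 and the bare header dictionary are assumptions taken from the snippet above, not part of any App Engine API.

```python
# Sketch only: the retry decision from the answer above, pulled out of the
# handler so it is testable. The limit of 10 is an assumed example value.
RETRY_LIMIT = 10

def status_for(headers):
    """Return the HTTP status the task handler should send: a 409 (any
    non-2xx code) makes the queue retry the task and increment
    X-AppEngine-TaskRetryCount; a 200 tells the queue the task is done."""
    retries = int(headers.get('X-AppEngine-TaskRetryCount', 0))
    if retries < RETRY_LIMIT:
        return 409  # non-2xx: App Engine will re-run the task
    return 200      # give up after RETRY_LIMIT attempts

print(status_for({'X-AppEngine-TaskRetryCount': '3'}))   # 409
print(status_for({'X-AppEngine-TaskRetryCount': '10'}))  # 200
```

Inside a real handler the same logic would end with `self.error(409)` and `return` instead of returning the status code.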


How to set a max retry value in App Engine Task Queue?

I have the following retry parameters:
<retry-parameters>
    <task-retry-limit>7</task-retry-limit>
    <task-age-limit>1d</task-age-limit>
    <min-backoff-seconds>1</min-backoff-seconds>
    <max-backoff-seconds>30</max-backoff-seconds>
</retry-parameters>
But when I check the queue, I see retry counts like 45, even though I set task-retry-limit to 7. So why is it going beyond that? How do I set a max retry value? I am using App Engine standard with a push-based task queue and the Java 8 environment. Thanks.
private Queue fsQueue = QueueFactory.getQueue(FS_QUEUE_NAME);
// ...
Product fp = new Product();
fp.setId("someid");
// ...
TaskOptions opts = TaskOptions.Builder.withUrl("/api/task/fs/product")
        .method(TaskOptions.Method.POST)
        .payload(utils.toJSONBytes(fp), "application/json");
fsQueue.add(opts);
I think your issue is related to the fact that queue.xml is deprecated; you should be using queue.yaml instead.
You should also bear in mind that if you are using the Cloud Tasks API to manage the same queue, this might cause collisions. The documentation has information on how to handle the most common problems.
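The same retry settings can be expressed in queue.yaml. A sketch, assuming a queue named fs-queue (the real name is whatever FS_QUEUE_NAME refers to) and an illustrative rate:

```yaml
queue:
- name: fs-queue          # assumed; use the queue behind FS_QUEUE_NAME
  rate: 1/s               # assumed example rate
  retry_parameters:
    task_retry_limit: 7
    task_age_limit: 1d
    min_backoff_seconds: 1
    max_backoff_seconds: 30
```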

Prometheus gauge with custom collector - go

I have a problem "resetting" the values for gauges, because "life stops" as soon as the HTTP service is started; and when I start looping my runJob instead, the server is never started.
The way I'm trying to establish this:
I load all the Jobs from a YAML array.
I generate gauges from that, then I run a loop to get some values for them.
Then I register them.
And after that I start the HTTP service for Prometheus.
Everything works perfectly until the next cycle: the next cycle just never starts.
I tried moving functions inside functions, etc.
So that's my main function:
// gets poll time from yaml (60s)
timerCh := time.Tick(time.Duration(appConf.PollTimeSec) * time.Second)
// loop after given time
for range timerCh {
    runJobs()
}
// start new mux server
server := http.NewServeMux()
log.Println("DEBUG: Starting server")
server.Handle(appConf.HostPath, promhttp.Handler())
http.ListenAndServe(":"+appConf.ListenerPort, server)
My runJobs function basically gets HTTP response codes and adds them to Prometheus gauge values.
Everything is OK with that part and it works very well at startup, but after I try to run it with a sleep (as shown in main), it just gets stuck:
the server is up and the values do not change.
So, in my opinion, there are two possible explanations:
My runJobs is an infinite loop that runs every minute,
which is why the server is never started.
But when I add an if statement so that the server is started on the first cycle, it still gets stuck once the server is started (the next loop cycle just won't start).
And the other way around, when I start the server first, it never gets to the part where it starts runJobs().
The preferred outcome would be:
the server is started with the first values, and after every minute it runs runJobs again.
The way it is written, your program will never get past that for loop. Try this instead:
go func() {
    // loop after given time
    for range timerCh {
        runJobs()
    }
}()
This runs the for loop in its own goroutine, and runJobs executes every so often.

Infinite AMQP Consumer with Alpakka

I'm trying to implement a very simple service connected to an AMQP broker with Alpakka. I just want it to consume messages from its queue as a stream at the moment they are pushed on a given exchange/topic.
Everything seemed to work fine in my tests, but when I started my service, I realized that my stream consumed my messages only once and then exited.
Basically I'm using the code from the Alpakka documentation:
def consume() = {
  val amqpSource = AmqpSource.committableSource(
    TemporaryQueueSourceSettings(connectionProvider, exchangeName)
      .withDeclaration(exchangeDeclaration)
      .withRoutingKey(topic),
    bufferSize = prefetchCount
  )
  val amqpSink = AmqpSink.replyTo(AmqpReplyToSinkSettings(connectionProvider))
  amqpSource.mapAsync(4)(msg => onMessage(msg)).runWith(amqpSink)
}
I tried scheduling the consume() execution every second, but I ran into OutOfMemoryError issues.
Is there a proper way to make this code run as an infinite loop?
If you want a Source to be restarted when it fails or is cancelled, wrap it with RestartSource.withBackoff.
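RestartSource.withBackoff is Alpakka's own mechanism; purely for illustration, the restart-with-backoff idea behind it can be sketched in plain Python (all names here are made up, and the backoff delays are recorded rather than actually slept):

```python
# Illustrative sketch of restart-with-backoff: re-run a failing job with
# exponentially increasing delays, capped at max_backoff. Delays are
# collected in a list instead of being slept, to keep the sketch testable.
def run_with_backoff(job, min_backoff=1.0, max_backoff=30.0, max_restarts=5):
    """Return (delays, result) once the job succeeds; each entry in delays
    is the backoff that would precede the corresponding restart."""
    delays = []
    backoff = min_backoff
    for _ in range(max_restarts + 1):
        try:
            return delays, job()
        except Exception:
            delays.append(backoff)
            backoff = min(backoff * 2, max_backoff)
    raise RuntimeError("gave up after %d restarts" % max_restarts)

attempts = {'n': 0}
def flaky():
    # made-up job that fails twice, then succeeds
    attempts['n'] += 1
    if attempts['n'] < 3:
        raise IOError("connection lost")
    return "consumed"

print(run_with_backoff(flaky))  # ([1.0, 2.0], 'consumed')
```

In Alpakka the wrapper additionally resets the backoff after a period of stable operation; this sketch omits that detail.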

Task Queue completion callback

I am using a Google Cloud task queue to do some long-running tasks.
Once all the tasks have completed, I want to send a notification.
I am using the code below to get the number of pending tasks in my thread:
QueueStatistics stats= taskQueue.fetchStatistics();
stats.getNumTasks();
but this means I am continuously polling the value returned by getNumTasks();
when it reaches zero, I notify others.
Is there any callback available that could notify me once all the tasks in my queue have completed?
If concurrently running tasks are not a must for your application, you can set up the queue with max-concurrent-requests set to 1, so that tasks run one by one:
<queue-entries>
    <queue>
        <name>my-queue</name>
        <rate>1/s</rate>
        <max-concurrent-requests>1</max-concurrent-requests>
    </queue>
</queue-entries>
Then, after pushing all your tasks onto the queue, push a notification task to the same queue. The notification task will be the last one in the queue and will be executed after all the other tasks are completed.
Note: be careful with automatic retries when one of your tasks fails, as that makes the notification task no longer the last one in the queue. Maybe you can purge the queue on failure and then retry.
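For illustration only, here is the sentinel-task idea as a plain Python simulation of a queue with max-concurrent-requests set to 1 (the task bodies and names are made up, not a Task Queue API):

```python
# Sketch: a stand-in for a push queue limited to one concurrent request.
# Tasks run strictly in FIFO order, so a task enqueued last runs last.
from collections import deque

def run_queue_serially(tasks):
    """Execute callables one at a time, in FIFO order, and return their
    results in execution order."""
    queue = deque(tasks)
    log = []
    while queue:
        task = queue.popleft()
        log.append(task())
    return log

completed = []
work_tasks = [lambda i=i: completed.append(i) or "task-%d" % i
              for i in range(3)]
# The notification task is enqueued last, so it observes all finished work.
notify = lambda: "notify: %d tasks done" % len(completed)
print(run_queue_serially(work_tasks + [notify]))
# ['task-0', 'task-1', 'task-2', 'notify: 3 tasks done']
```

The caveat from the answer shows up directly in this model: if a failed task were re-enqueued, it would land behind the notification task.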

How to implement automatic retries with GAE tasks?

Here is my code:
class PublishPhotosHandler(webapp.RequestHandler):
    for argument in files_arguments:
        taskqueue.add(url='/upload', params={'key': key})

class UploadWorker(webapp.RequestHandler):
    def post(self):
        key = self.request.get('key')
        result = urlfetch.fetch(...)
        # how can I return an error here, so the task will be retried?
If a task fails to execute (by returning any HTTP status code outside of the range 200–299), App Engine retries the task until it succeeds. By default, the system gradually reduces the retry rate to avoid flooding your application with too many requests, but schedules retry attempts to recur at a maximum of once per hour until the task succeeds.
Raising any exception produces a non-2xx status code, so raising any exception will cause the task to be queued up again and retried.
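For illustration, here is a plain Python simulation (not the App Engine runtime) of how the push queue reacts to an exception in the handler; flaky_fetch and the attempt counter are made-up names:

```python
# Sketch: simulate a push queue that re-runs a handler whenever it raises
# or returns a non-2xx status, up to a fixed number of attempts.
def simulate_push_queue(handler, max_attempts=5):
    """Call handler(attempt) until it returns a 2xx status; an uncaught
    exception counts as a non-2xx response and triggers another attempt.
    Returns how many attempts were needed."""
    for attempt in range(max_attempts):
        try:
            status = handler(attempt)
        except Exception:
            status = 500  # any uncaught exception -> non-2xx -> retry
        if 200 <= status <= 299:
            return attempt + 1
    return max_attempts

def flaky_fetch(attempt):
    # made-up worker: the fetch fails on the first two attempts
    if attempt < 2:
        raise RuntimeError("urlfetch failed")  # forces a retry
    return 200

print(simulate_push_queue(flaky_fetch))  # 3
```

So in UploadWorker, simply raising (or calling self.error() with a non-2xx code) when urlfetch.fetch fails is enough to get the automatic retry.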
