I'm using the Java push queue API. I see there is a Queue.add() method which puts a task at the end of the queue:
https://cloud.google.com/appengine/docs/java/javadoc/com/google/appengine/api/taskqueue/Queue
is there a way to put a task at the front of the queue?
Thanks
There is no such feature, by the design and nature of queues (not just in App Engine, but in general).
As a partial workaround, you can use a separate "fast-lane" queue for tasks you want to prioritize. It can have a higher rate than the regular queue.
Also, in some cases it can be beneficial to use the "delete task" functionality.
I shared some thoughts on a similar problem recently: https://stackoverflow.com/a/38580017/1836506
There are a couple other solutions on that question that may give you ideas, too.
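Since each queue is strictly FIFO, the prioritization has to come from having two queues and always serving the fast lane first. A minimal plain-Java sketch of that idea (the FastLaneDemo class and String tasks are made up for illustration; on App Engine you'd define a second queue in queue.xml with a higher rate and pick the queue at Queue.add() time):

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class FastLaneDemo {
    // Two FIFO queues: tasks are never reordered within a queue,
    // but the consumer always drains the fast lane first.
    private final Queue<String> fastLane = new ArrayDeque<>();
    private final Queue<String> regular = new ArrayDeque<>();

    void add(String task, boolean urgent) {
        (urgent ? fastLane : regular).add(task);
    }

    // Poll the fast lane first; fall back to the regular queue.
    String next() {
        String task = fastLane.poll();
        return (task != null) ? task : regular.poll();
    }

    public static void main(String[] args) {
        FastLaneDemo q = new FastLaneDemo();
        q.add("t1", false);
        q.add("t2", false);
        q.add("urgent", true);
        System.out.println(q.next()); // urgent jumps ahead of t1 and t2
        System.out.println(q.next());
    }
}
```

The urgent task effectively "cuts the line" without any queue ever being reordered, which is all a higher-rate second queue buys you in practice.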
I'm using a KafkaSink to sink results to Kafka with .sinkTo(kafkaSink). I'm trying to write an end-to-end integration test and want to use a simple sink for it. I came across CollectSink, where I can add results to a list and run matchers on them. But since CollectSink is a SinkFunction, I can't use it with .sinkTo(); it can only be used with addSink().
I have tried PrintSink, but I want to read the saved data back to run some matchers on it.
Can anyone help me on how I can add a test sink so that it can be used along with .sinkTo?
Thank you in advance
You could look at how the integration tests for the Immerok Cookbook are organized. https://www.docs.immerok.cloud/docs/cookbook/creating-dead-letter-queues-from-and-to-apache-kafka-with-apache-flink/ might be a good place to start, since it illustrates how to approach testing a job with multiple sinks.
Disclaimer: I work for Immerok
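Whatever sink interface you end up implementing, the usual trick for a test sink is to collect records into a static thread-safe collection: sink instances get serialized and shipped to task slots, so instance fields won't survive, but a static field is shared by every copy in the same JVM. A minimal plain-Java sketch of that pattern (the TestSinkDemo class and write() method are stand-ins; with Flink you would implement the Sink interface from the newer sink API so it works with .sinkTo(), and put this collection behind its writer):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class TestSinkDemo {
    // Static, so every deserialized copy of the sink writes to the
    // same collection, which the test then asserts against.
    static final List<String> VALUES = new CopyOnWriteArrayList<>();

    // Stand-in for a sink writer's write() method.
    static void write(String record) {
        VALUES.add(record);
    }

    public static void main(String[] args) {
        write("a");
        write("b");
        System.out.println(VALUES);
    }
}
```

Remember to clear the static collection between tests, otherwise results leak from one test into the next.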
I am doing a project in Apache Flink where I need to call multiple APIs to achieve my goal. The result of each API call is required for the next API to work. Also, since I am doing this on a KeyedStream, the same flow applies to multiple records at once.
The diagram below illustrates the scenario:
/------API1---API2----
KeyedStream ----|------API1---API2----
\------API1---API2----
While doing all this, I am getting an exception saying "Buffer pool destroyed" after the job runs for some time. Is this related to the API calls? Do I need to use an asynchronous function? Please suggest. Thanks in advance.
A few things are typically needed to help answer questions about Flink...
What version are you running?
How are you running it (from IDE, YARN cluster, stand-alone, etc)?
What's the complete stack trace for the exception?
(often) Can you share your code?
But at a high level, the "buffer pool destroyed" message you mentioned is not the root cause of the failover; it's just a byproduct of Flink killing off the workflow after an error has happened. So you need to dig deeper into the logs (typically the Task Manager logs are where you'd look first).
I'm working with Go on App Engine and I'm trying to build an API that needs to perform a long-running background task - in this case it needs to parse and chunk a big file out to task queues. I'd like it to return a 200 and close the user connection immediately and let the process keep running in the background until it's complete (this could take 5-10 minutes). Task queues alone don't really work for my use case because parsing the initial file can take more than the time limit for an API request.
At first I tried a goroutine as a solution to this problem. This failed because my App Engine context expired as soon as the parent function closed the user connection. (I suppose I could try writing a goroutine that doesn't require a context, but then I'd lose logging, and I'd need to fetch the entire remote file and pass it to the goroutine.)
Looking through the docs, it looks like App Engine used to have functionality to support exactly what I want to do: [runtime.RunInBackground], but that functionality is now deprecated and the replacement isn't obvious.
Is there a "right" or recommended way to do background processing now?
I suppose I could put a link to my big file into a task queue, but if I understand correctly, even functions called through task queues have to complete execution within a specified amount of time (is it 90 seconds?) I need to be able to run longer than that.
Thanks for any help.
Try using:
appengine.BackgroundContext()
It should be long-lived, but it will only work on GAE Flex.
I've been looking into creating a new queue strategy for my Asterisk installation; my first project is to combine the features of leastrecent and roundrobin in one queue.
I've found a lot of third-party call-center solutions, but I haven't been able to determine whether any of them use strategies other than the standard ones.
So far, my thinking is that I have to create my own module that adds the functionality. The documentation on creating modules is scarce, aside from a well-written guide by Russell Bryant.
Is it possible to make some sort of extension to an existing module, or would I have to replace it completely?
Is there documentation of any sort about creating your own queue strategy ?
I'm running Asterisk 11.
Sure, you can change the queue behavior.
Read apps/app_queue.c and extend it as needed. If you have enough skill to extend and test the queue (it's a multithreaded app), then you should have no trouble reading app_queue.c.
Another solution is to use AMI with an AsyncAGI call.
http://www.moythreads.com/wordpress/2007/12/24/asterisk-asynchronous-agi/
P.S. If you have to ask questions like this, building a call center yourself is highly inadvisable. Read more books about Asterisk and hire a highly skilled expert to help you; otherwise, it's very likely the call center won't work well under load.
I have installed PostgreSQL 8.4. What I want to do is call a web service from a C function, fired by an insert/update trigger, and pass the NEW values to the web service. How can I do that? I searched the web and couldn't find an example.
Thanks in advance.
Please don't do this. If you do, you will run into wonderful questions like how to handle the web service being down. You will also have to address what happens when your transaction rolls back: you can't un-call the web service. And if the connection times out, your procedure will hang for quite a while (retaining all locks, etc.) waiting for a response that never comes.
A better approach is to use a queuing solution like pgq or pg_message_queue: queue up the data at trigger time, and run it against the web service asynchronously.
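The decoupling described above looks like this in miniature: the trigger path only enqueues and returns, and a separate worker makes the network call outside the transaction. A plain-Java sketch, with a BlockingQueue standing in for pgq/pg_message_queue and callWebService as a made-up stub for the real HTTP call:

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;

public class TriggerQueueDemo {
    static final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    static final List<String> delivered = new CopyOnWriteArrayList<>();

    // The "trigger": just enqueue the NEW row and return immediately,
    // so the transaction never waits on the network.
    static void onInsert(String newRow) {
        queue.add(newRow);
    }

    // Stand-in for the real web service call.
    static void callWebService(String payload) {
        delivered.add(payload);
    }

    public static void main(String[] args) throws InterruptedException {
        onInsert("row1");
        onInsert("row2");

        // The worker drains the queue asynchronously; retries and
        // error handling live here, outside any database transaction.
        Thread worker = new Thread(() -> {
            String row;
            while ((row = queue.poll()) != null) {
                callWebService(row);
            }
        });
        worker.start();
        worker.join();
        System.out.println(delivered);
    }
}
```

If the transaction rolls back before commit, a transactional queue like pgq simply never exposes the row to the worker, which is exactly the property a direct in-trigger call can't give you.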