Is there an elegant way to post messages to AWS SQS with a visibility delay longer than 15 minutes?

In Amazon Web Services, SQS queues let you post messages with a delivery delay of up to 15 minutes. What if I don't want messages visible for 6 months?
I'm trying to come up with an elegant solution to the poll/push problem. I can write code that polls SQS (or a database) every few seconds, checks for messages that are ready to become visible, and moves them to a "visible" queue, or something like that. I wish there were a simpler, more reliable way to have messages become visible in a queue far into the future without my having to worry about a polling application working perfectly all the time.
I'm not married to AWS, SQS or any of that, but I'd prefer a cloud-friendly solution that is stable and reliable and will trigger an event far into the future without me having to check on its status every day.
Any thoughts, or alternate trees to bark up, are welcome.
Thanks!

It sounds like you might be misunderstanding the visibility timeout. Its purpose is to make sure that the polling application doesn't pull the same item off the queue more than once.
In other words, when an item is pulled off the queue it becomes invisible for a predetermined period of time (the default is 30 seconds; the maximum is 12 hours) in case the polling system has a cluster of machines reading from the queue all at once. The 15-minute limit you're running into is a different setting, the delivery delay (DelaySeconds), which controls how long a newly posted message stays hidden before its first delivery.
Here's the relevant documentation:
http://docs.amazonwebservices.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/IntroductionArticle.html#AboutVT
...and the sentence in particular that relates to my comment is:
"Immediately after the component receives the message, the message is still in the queue. However, you don't want other components in the system receiving and processing the message again. Therefore, Amazon SQS blocks them with a visibility timeout, which is a period of time during which Amazon SQS prevents other consuming components from receiving and processing that message."
You should be able to use SQS for your purpose, since a message that nobody receives simply stays in the queue; note, though, that the message retention period is capped at 14 days, so holding a message for 6 months would still require re-enqueueing it periodically.
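To make the distinction concrete, here's a small sketch using the AWS SDK for Java v2 (the queue URL is a placeholder). DelaySeconds hides a new message from every consumer before its first delivery and is capped at 900 seconds, while the visibility timeout only hides a message after a consumer has received it and can be set as high as 12 hours:

    import software.amazon.awssdk.services.sqs.SqsClient;
    import software.amazon.awssdk.services.sqs.model.ReceiveMessageRequest;
    import software.amazon.awssdk.services.sqs.model.SendMessageRequest;

    public class DelayVsVisibility {
        public static void main(String[] args) {
            SqsClient sqs = SqsClient.create();
            String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"; // placeholder

            // DelaySeconds: the message is hidden from ALL consumers until the
            // delay elapses. Hard-capped at 900 seconds (15 minutes).
            sqs.sendMessage(SendMessageRequest.builder()
                    .queueUrl(queueUrl)
                    .messageBody("hello")
                    .delaySeconds(900)
                    .build());

            // Visibility timeout: kicks in only once a consumer RECEIVES the
            // message, hiding it from other consumers while this one works.
            // Can be set per receive call, up to 12 hours (43200 seconds).
            sqs.receiveMessage(ReceiveMessageRequest.builder()
                    .queueUrl(queueUrl)
                    .visibilityTimeout(30)
                    .build());
        }
    }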

7 years later, and Amazon still doesn't support the feature you need!
The two ways you can sort of get it to work are:
have messages carry a delivery-target datetime in their message_attributes, and have the workers that consume the queue's messages delete and recreate any message that is consumed before its target, with delay = max(0, min(secs_until_target_datetime, 900)); that would let you effectively schedule a message for any arbitrary future time (see the sketch after this list);
or,
(slightly less frequent and costly:) similarly, if a message isn't due to be handled yet, recreate it and change its visibility timeout to timeout = max(0, min(secs_until_target_datetime, 43200)).
The disadvantage of the visibility-timeout variant is that any read resets it.
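A minimal sketch of the first approach, again with the AWS SDK for Java v2 (the queue URL and the deliver_at attribute name are illustrative, and the message must be received with messageAttributeNames set so the attribute is populated):

    import java.time.Instant;
    import software.amazon.awssdk.services.sqs.SqsClient;
    import software.amazon.awssdk.services.sqs.model.DeleteMessageRequest;
    import software.amazon.awssdk.services.sqs.model.Message;
    import software.amazon.awssdk.services.sqs.model.SendMessageRequest;

    public class RequeueUntilDue {
        // Placeholder queue URL; "deliver_at" holds the target time in epoch seconds.
        static final String QUEUE_URL =
                "https://sqs.us-east-1.amazonaws.com/123456789012/scheduled";

        static void handle(SqsClient sqs, Message msg) {
            long deliverAt = Long.parseLong(
                    msg.messageAttributes().get("deliver_at").stringValue());
            long secsUntilTarget = deliverAt - Instant.now().getEpochSecond();

            if (secsUntilTarget > 0) {
                // Not due yet: re-send with the largest delay SQS allows, then
                // delete the original. Each hop moves the message up to 15
                // minutes closer to its target.
                sqs.sendMessage(SendMessageRequest.builder()
                        .queueUrl(QUEUE_URL)
                        .messageBody(msg.body())
                        .messageAttributes(msg.messageAttributes())
                        .delaySeconds((int) Math.min(secsUntilTarget, 900))
                        .build());
            } else {
                process(msg);   // due now: do the real work
            }
            sqs.deleteMessage(DeleteMessageRequest.builder()
                    .queueUrl(QUEUE_URL)
                    .receiptHandle(msg.receiptHandle())
                    .build());
        }

        static void process(Message msg) { /* hypothetical real handler */ }
    }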

A direct AWS solution has been possible since 2016-12-01: AWS Step Functions.
Each execution can last/idle up to one year, persists the state between transitions, and doesn't cost you any money while it waits.
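For example, a minimal state machine definition (in Amazon States Language) that sleeps until a timestamp supplied in the execution input and then invokes a worker; the Lambda ARN is a placeholder:

    {
      "Comment": "Deliver a payload at an arbitrary future time (up to a year out)",
      "StartAt": "WaitUntilDue",
      "States": {
        "WaitUntilDue": {
          "Type": "Wait",
          "TimestampPath": "$.deliverAt",
          "Next": "Deliver"
        },
        "Deliver": {
          "Type": "Task",
          "Resource": "arn:aws:lambda:us-east-1:123456789012:function:deliver",
          "End": true
        }
      }
    }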

Extended windows

I have an always-on application listening to a Kafka stream and processing events. Events are part of a session, and I need to do calculations based on a session's data. I am running into a problem running my calculations correctly because of the length of my sessions: 90% of my sessions are done after 5 minutes, and 99% are done after 1 hour, but sessions may last more than a day, and because this is a real-time system there is no predetermined end. Sessions are unique and should never collide.
I am looking for a way to process a window multiple times, either with an initial wait period and processing of any later events after that, or a pure process-per-event structure. I will need to keep all previous events around (ListState), as well as previously processed values (ValueState).
I previously thought allowedLateness would let me do this, but it seems lateness only covers events that arrive after they should have been processed; it does not extend the actual window. GlobalWindows may also work, but I am unsure whether there is a way to process a window multiple times. I believe I could use an evictor with GlobalWindows to purge the windows after a period of inactivity (although admittedly I have not researched this yet, because I was unsure how to trigger a GlobalWindow multiple times).
Any suggestions on how to achieve what I am looking to do would be greatly appreciated; I would also be happy to clarify any points.
If SessionWindows won't do the job, then you can use GlobalWindows with a custom Trigger and Evictor. The Trigger interface has onElement and timer-based callbacks that can fire whenever and as often as you like. If you go down this route, then yes, you'll also need to implement an Evictor to dispose of elements when they are no longer needed.
The documentation and the source code are helpful when trying to understand how this all fits together.
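A rough sketch of how the pieces fit together (everything named Session* here is a hypothetical stand-in for your own types; the Trigger and Evictor bodies are the part you would write):

    events
        .keyBy(SessionEvent::getSessionId)
        .window(GlobalWindows.create())
        // a custom Trigger may return FIRE on any element or timer callback,
        // so the same window can be evaluated as many times as you like
        .trigger(new SessionTrigger())
        // a custom Evictor drops a session's buffered elements once the
        // session has been idle long enough
        .evictor(new SessionEvictor())
        .apply(new SessionCalculation());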

Is there a way to define a Flink count window, which evicts all messages after a given time, if the count is not reached?

I am currently working on a streaming program which aggregates data from a number of messages (8); the aggregation requires all 8 messages, so I am using a count window. All 8 messages share the same unique key. However, there is no guarantee that all 8 messages will arrive. So my question is two-fold:
First, what happens to a Flink count window that never closes? I am assuming the windows simply accumulate over time, consuming more and more RAM.
Secondly, can I close a count window if it does not receive all of its messages within a given time? I am looking for a solution that is as real-time as possible; I already tried using a time window, but the time-of-flight of the messages varies between a few milliseconds and 40 seconds.
So, essentially: is there a way to define a window that triggers at 8 messages and evicts all messages from the window after a given time (in this case, after 60 seconds)?
Regarding never-closing windows: the part of the state reserved for them will never be freed.
The behaviour you describe could be implemented with a custom trigger and evictor on a GlobalWindow (a sketch follows). The trigger could wait for either the expected time or the expected number of elements before emitting the window, while the evictor would evict all messages if there are fewer than 8. For reference implementations, have a look at CountTrigger (emits on count) and EventTimeTrigger (emits on time); for the evictor, have a look at CountEvictor.
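Here is a sketch of such a trigger against the Flink 1.x DataStream API: it fires and purges when the 8th element for a key arrives, and purges a stale window 60 seconds after its first element. Because it purges on both paths, no separate evictor is strictly needed in this variant:

    import org.apache.flink.api.common.functions.ReduceFunction;
    import org.apache.flink.api.common.state.ReducingState;
    import org.apache.flink.api.common.state.ReducingStateDescriptor;
    import org.apache.flink.streaming.api.windowing.triggers.Trigger;
    import org.apache.flink.streaming.api.windowing.triggers.TriggerResult;
    import org.apache.flink.streaming.api.windowing.windows.GlobalWindow;

    // Fires once maxCount elements have arrived for a key; if they don't all
    // arrive within timeoutMs of the first element, the partial window is purged.
    public class CountWithTimeoutTrigger<T> extends Trigger<T, GlobalWindow> {

        private final long maxCount;
        private final long timeoutMs;

        private final ReducingStateDescriptor<Long> countDesc =
                new ReducingStateDescriptor<>("count", new Sum(), Long.class);

        public CountWithTimeoutTrigger(long maxCount, long timeoutMs) {
            this.maxCount = maxCount;
            this.timeoutMs = timeoutMs;
        }

        @Override
        public TriggerResult onElement(T element, long timestamp,
                                       GlobalWindow window, TriggerContext ctx) throws Exception {
            ReducingState<Long> count = ctx.getPartitionedState(countDesc);
            if (count.get() == null) {
                // First element for this key: start the give-up clock.
                ctx.registerProcessingTimeTimer(ctx.getCurrentProcessingTime() + timeoutMs);
            }
            count.add(1L);
            if (count.get() >= maxCount) {
                count.clear();
                return TriggerResult.FIRE_AND_PURGE;   // all elements arrived: emit
            }
            return TriggerResult.CONTINUE;
        }

        @Override
        public TriggerResult onProcessingTime(long time, GlobalWindow window, TriggerContext ctx) {
            // Timeout reached before the count: drop the partial window.
            return TriggerResult.PURGE;
        }

        @Override
        public TriggerResult onEventTime(long time, GlobalWindow window, TriggerContext ctx) {
            return TriggerResult.CONTINUE;
        }

        @Override
        public void clear(GlobalWindow window, TriggerContext ctx) throws Exception {
            ctx.getPartitionedState(countDesc).clear();
            // A production version would also record the timer's timestamp in
            // state and delete it here via ctx.deleteProcessingTimeTimer(...),
            // so a stale timer can't purge a later window for the same key.
        }

        private static class Sum implements ReduceFunction<Long> {
            @Override
            public Long reduce(Long a, Long b) {
                return a + b;
            }
        }
    }

Attach it with .window(GlobalWindows.create()).trigger(new CountWithTimeoutTrigger<>(8, 60_000L)) followed by your window function.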
For cases like this where you need to combine stateful stream processing with timers, ProcessFunction can be a good choice. See https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/stream/process_function.html.
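A sketch of that alternative (the Class<T> parameter keeps the example self-contained; apply it on a keyed stream with events.keyBy(...).process(new CollectN<>(8, 60_000L, Event.class)) on recent Flink versions): it buffers events in ListState, emits when the 8th arrives, and lets a processing-time timer discard stragglers after 60 seconds:

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.flink.api.common.state.ListState;
    import org.apache.flink.api.common.state.ListStateDescriptor;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.functions.ProcessFunction;
    import org.apache.flink.util.Collector;

    // Collects n elements per key; emits the complete group, or drops a
    // partial group timeoutMs after its first element arrived.
    public class CollectN<T> extends ProcessFunction<T, List<T>> {

        private final int n;
        private final long timeoutMs;
        private final Class<T> type;
        private transient ListState<T> buffer;

        public CollectN(int n, long timeoutMs, Class<T> type) {
            this.n = n;
            this.timeoutMs = timeoutMs;
            this.type = type;
        }

        @Override
        public void open(Configuration parameters) {
            buffer = getRuntimeContext().getListState(
                    new ListStateDescriptor<>("buffer", type));
        }

        @Override
        public void processElement(T value, Context ctx, Collector<List<T>> out) throws Exception {
            buffer.add(value);
            List<T> elements = new ArrayList<>();
            for (T e : buffer.get()) {
                elements.add(e);
            }
            if (elements.size() == 1) {
                // First element for this key: schedule the give-up timer.
                ctx.timerService().registerProcessingTimeTimer(
                        ctx.timerService().currentProcessingTime() + timeoutMs);
            }
            if (elements.size() >= n) {
                out.collect(elements);   // complete group: emit downstream
                buffer.clear();
            }
        }

        @Override
        public void onTimer(long timestamp, OnTimerContext ctx, Collector<List<T>> out) throws Exception {
            // Timed out with fewer than n elements: drop the partial group.
            // (As with the trigger above, a robust version would check that this
            // timer still belongs to the current group before clearing.)
            buffer.clear();
        }
    }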

How to mark a message as "in progress" so other workers don't work on it

I'm attempting to use a pull queue to create a queue of image-processing tasks that could take longer than the ack deadline limit of 10 minutes. I'm using the node.js API, and I'm wondering how a worker could grab a message off the pull queue, mark it as in progress so no other workers attempt to grab it, do its work, and acknowledge the message after the processing is done. This processing could take up to an hour per worker. If an exception occurs, I'd like to remove the "in progress" status and allow other workers to pick up the message and attempt to work on it.
I was hoping there was something in Pub/Sub that would allow me to do this. My alternative is, before processing, to store an entity (inProgressMessage) with the message id, ack id, status=pending, and timestamp=now() in Datastore; have the worker ack immediately after receiving the message (allowing other workers to move on to other messages); and then let the worker work on the lengthy task. If it succeeds, mark the entity complete; if it fails in a non-permanent way, requeue the task into Pub/Sub; and if it fails in a permanent way that won't allow requeueing, a cron job that checks Datastore for pending tasks older than several hours can delete or requeue them.
My alternative feels like I'm re-implementing a lot of what Pub/Sub is supposed to help with.
Let me know if you can think of a better way.
To take longer than the ack deadline to process a message, you'll want to use modifyAckDeadline. You can extend the deadline as many times as you need up to 10 minutes per call. Your workflow would be as follows:
Pull the message.
Start to process the message.
While you are not done with the message, if you are close to the 10 minute ack deadline, call modifyAckDeadline to extend the deadline.
Once done processing the message, ack it.
Please note that calling modifyAckDeadline does not guarantee that the message won't be delivered to another worker. In certain circumstances, such as server restarts, the message may end up being delivered to another of your subscribers. However, in most normal circumstances, as long as you call modifyAckDeadline before the current ack deadline expires, you can prevent a message's redelivery for as long as necessary. A sketch of this loop follows.
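The question mentions the node.js API; purely for illustration, here is the same loop with Google's Java client and the low-level synchronous-pull stub (the project and subscription names are placeholders, and the two work-related helpers are stubs):

    import com.google.cloud.pubsub.v1.stub.GrpcSubscriberStub;
    import com.google.cloud.pubsub.v1.stub.SubscriberStub;
    import com.google.cloud.pubsub.v1.stub.SubscriberStubSettings;
    import com.google.pubsub.v1.AcknowledgeRequest;
    import com.google.pubsub.v1.ModifyAckDeadlineRequest;
    import com.google.pubsub.v1.PullRequest;
    import com.google.pubsub.v1.PullResponse;
    import com.google.pubsub.v1.ReceivedMessage;

    public class LongTaskWorker {
        public static void main(String[] args) throws Exception {
            String subscription = "projects/my-project/subscriptions/image-tasks"; // placeholder
            try (SubscriberStub stub = GrpcSubscriberStub.create(
                    SubscriberStubSettings.newBuilder().build())) {

                PullResponse resp = stub.pullCallable().call(PullRequest.newBuilder()
                        .setSubscription(subscription)
                        .setMaxMessages(1)
                        .build());

                for (ReceivedMessage msg : resp.getReceivedMessagesList()) {
                    while (!workFinished(msg)) {
                        // Push the deadline out another 600 s BEFORE the current
                        // one expires, so no other worker receives the message.
                        stub.modifyAckDeadlineCallable().call(ModifyAckDeadlineRequest.newBuilder()
                                .setSubscription(subscription)
                                .addAckIds(msg.getAckId())
                                .setAckDeadlineSeconds(600)
                                .build());
                        doSomeWork(msg);
                    }
                    // Done: ack so the message is never redelivered.
                    stub.acknowledgeCallable().call(AcknowledgeRequest.newBuilder()
                            .setSubscription(subscription)
                            .addAckIds(msg.getAckId())
                            .build());
                }
            }
        }

        private static boolean workFinished(ReceivedMessage msg) { return true; } // stub
        private static void doSomeWork(ReceivedMessage msg) {}                    // stub
    }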
When creating a subscription (not the topic), you can configure the ack deadline to be anything up to 10 minutes (https://cloud.google.com/pubsub/subscriber). Once a message has been pulled from the queue, no other worker on the same subscription will be able to take it for processing unless the ack deadline is reached, at which point the message is automatically returned to the queue.
Since you need a longer period, you will have to implement something on your own or seek another queuing solution. I think the design you suggested is fairly simple to implement and is not really a re-implementation of what Pub/Sub does.

DeadlineExceededException and DataStore/Task Queue Operations

I'm doing some operations that should complete in under 60 seconds, but there may be some rare cases where they take longer (though never longer than 10 minutes). The App Engine docs say that if you catch a DeadlineExceededException you have less than a second to do operations before the request permanently fails. Would this be enough time to add a task to a queue and/or do a Datastore write? I assume the safest way would be to add a task async/write a Datastore entity (async) at the beginning of the operation and remove it from the queue if the operation completes. The latter method uses up twice as many API calls, but is it worth it?
I would suggest using the queue as the default for all operations, so you won't have to implement the fallback for when you catch a DeadlineExceededException. It is cleaner and easier to maintain, and the user doesn't have to wait for the operation to complete. To achieve this you can trigger your queue with an AJAX call and get the result in the background, so the user does not wait for the operation to finish. Yes, it's worth it, since it can "guarantee" the window of time you might need.
The runtime environment gives the request handler a little bit more time (less than a second) after raising the exception to prepare a custom response, so it should be sufficient to add the item to a task queue.
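A sketch of that fallback with the App Engine Java task queue API (the handler URL and the payload parameter are placeholders):

    import com.google.appengine.api.taskqueue.Queue;
    import com.google.appengine.api.taskqueue.QueueFactory;
    import com.google.appengine.api.taskqueue.TaskOptions;
    import com.google.apphosting.api.DeadlineExceededException;

    public class OperationRunner {
        public void run() {
            try {
                doLongOperation();   // the work that usually finishes in under 60 s
            } catch (DeadlineExceededException e) {
                // Less than a second left: enqueueing a task is a quick RPC, so
                // hand the remaining work to a push queue and finish there.
                Queue queue = QueueFactory.getDefaultQueue();
                queue.add(TaskOptions.Builder.withUrl("/tasks/resume")   // placeholder handler
                        .param("operationId", "1234"));                  // placeholder payload
            }
        }

        private void doLongOperation() { /* ... */ }
    }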
If you do not want the client to keep polling for a task queue result, I suggest you have a look at the Channel API, which will let you push notifications to the client.
At the end of your task, you'll just have to send a notification to the client to let it know that its task has been processed.

libspotify playlist update latency

We're using libspotify to update playlists that we have generated against a single account and that need to be kept up to date over time. We're using a fork of spotify-api-server to do this: https://github.com/tom-martin/spotify-api-server
After sending an update to a playlist's tracks using libspotify, we generally wait for the callback we passed to sp_playlist_add_callbacks to be called before reporting success to the user. Often this callback arrives within a suitable time frame, but increasingly we're seeing unacceptable delays: sometimes 30 seconds, sometimes minutes, sometimes hours. It seems these delays are generally caused by libspotify pausing for a period and not calling any callbacks until it "unfreezes" and delivers all the backed-up callbacks in quick succession.
Is it reasonable to use this callback as an indicator of a successful playlist update? Is there any obvious reason for these long delays?
Are you correctly handling the notify_main_thread callback to keep libspotify running? It signals that sp_session_process_events needs to be called from your main thread; if events stop being processed, callbacks will back up exactly as you describe.
Also, sometimes the playlist system gets backed up, goes down, or otherwise takes a while to respond to requests. Our own clients keep a local cache of what the playlist tree should look like once pending transactions succeed, in order to keep the UI snappy.
