Performance of new Amazon SNS Mobile Push Service

Does anyone have performance data for Amazon's new Mobile Push service?
We are looking at using it, but want to understand performance for:
How many requests per second it can handle
Latency, in seconds, for delivering a notification to a device
How long it takes to send an identical notification to a million users (using topics)
Since Amazon doesn't publish performance numbers, and because creating synthetic tests for mobile push is difficult, I was wondering if anyone had real-world data.

We've sent messages to around 300,000 devices, and they are delivered almost instantaneously. Obviously we do not have access to each of those devices, but judging by a sampling of devices subscribed to various topics at different times, all receive the message within 10 seconds of the actual send.
A single publish to a device from the AWS console is startlingly fast. It appears on your device at almost the same instant that you release the "Publish" button on the AWS console.
While the delay in the AWS delivery infrastructure is nominal, and will surely be driven toward zero as they improve and add to their infrastructure, the time between the user action that generates the message in your system and the moment AWS actually receives the "send this notification" request will likely be the larger portion of the end-to-end delay. The limit per topic is 10,000 devices, so if you are sending to a million users, you'll have 100 (or more) topics to publish to. The time it takes your software to publish to all of these topics depends on how much parallelism you support in the operation. It takes somewhere around 50-100 ms to publish to a topic, so if you do this serially, it may be up to 10 seconds before you even publish your message to the 100th topic.
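For the fan-out-to-many-topics case, a rough sketch of parallel publishing with boto3 and a thread pool might look like the following; the region, topic ARNs, and pool size are illustrative assumptions, not anything AWS prescribes.

    # Sketch: publish one payload to many SNS topics in parallel.
    # Region, topic ARNs and worker count below are assumptions.
    from concurrent.futures import ThreadPoolExecutor

    import boto3

    sns = boto3.client("sns", region_name="us-east-1")

    def publish_to_topic(topic_arn, message):
        # Each publish round-trip takes roughly 50-100 ms, so run them concurrently.
        return sns.publish(TopicArn=topic_arn, Message=message)

    def broadcast(topic_arns, message, workers=20):
        with ThreadPoolExecutor(max_workers=workers) as pool:
            futures = [pool.submit(publish_to_topic, arn, message) for arn in topic_arns]
            return [f.result() for f in futures]

    # Example: 100 topics of 10,000 subscribers each for a 1M-user audience.
    # broadcast(["arn:aws:sns:us-east-1:123456789012:segment-%03d" % i
    #            for i in range(100)], "New content is available")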
UPDATE: As of August 19, 2014, the limit on the number of subscribers you can have per topic has been raised to 10,000,000:
https://aws.amazon.com/blogs/aws/sns-large-topics-and-mpns-auth-mode/

Related

Performance is *way* too slow with pubsub push -> app engine

I have a requirement to process ~20k calls per second. My system processes lists of ~1M entries and performs multiple jobs for each item. It is very “bursty” in nature as it isn’t always processing a list. I added an App Engine flex env (with Rails), using automatic scaling, with a test endpoint that waits 5 seconds and returns. I push to the pubsub topic and a push subscription sends to App Engine. Running this hits a steady state of 20-30 requests per second.
I guessed that the problem was the pubsub push volume algorithm interacting with App Engine, but then I ran a second test where I just blasted curl requests in a loop from multiple processes. This also ran at 20-30 rps.
I’m stuck at this point and wondering how to proceed. How can I configure the system for higher performance? I need roughly three orders of magnitude more throughput than I'm seeing.
Thanks so much for helping!
My experience with Pub/Sub is that it is really slow for single-message processing. I'd guess there's overhead from it being HTTP-based. The average time I see is around 40 ms from outside Google and 20 ms if your code is running on Google's servers.
What worked for me was to batch messages instead: I was able to get up to 100k msgs/sec when batching them at 1k messages per publish.
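A minimal sketch of that batching approach with the google-cloud-pubsub Python client is below; the project/topic names and the exact batch limits are assumptions.

    # Batch publishing sketch; project, topic and batch limits are assumptions.
    from google.cloud import pubsub_v1

    batch_settings = pubsub_v1.types.BatchSettings(
        max_messages=1000,      # flush once 1,000 messages are buffered...
        max_bytes=1024 * 1024,  # ...or 1 MB of payload...
        max_latency=0.05,       # ...or after 50 ms, whichever comes first
    )
    publisher = pubsub_v1.PublisherClient(batch_settings=batch_settings)
    topic_path = publisher.topic_path("my-project", "my-topic")

    # publish() returns a future immediately; the client sends full batches
    # in the background instead of one HTTP request per message.
    futures = [publisher.publish(topic_path, b"payload-%d" % i) for i in range(100000)]
    for f in futures:
        f.result()  # block until every batch has been accepted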

Preparing for a flash crowd on Google App Engine

I recently experienced a sharp, short-lived increase in the load on my service on Google App Engine. The load went from ~1-2 req/second to about 10 req/second for a couple of hours. My dynamic instances scaled up pretty quickly, but in the process I did get a number of "Request waited too long" timeout messages.
So the next time around, I would like to be prepared with enough idle instances to handle my load. But now the question is, how do I determine how many is adequate. I expect a much larger burst in load this time - from practically nothing to an average of 500 requests/second, possibly with a peak of 3000. This is to last between 15 minutes and 1 hour.
My main goal is to ensure that the information passed via HTTP Post is saved to the datastore by means of a single write.
Here are the steps I have taken to prepare for the burst:
I have pruned the fast path to disable analytics and other reporting, which typically generate 2 urlfetch requests.
The datastore write is deferred to a task queue via the deferred library (see the sketch below).
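A minimal sketch of that deferred write, assuming a Python GAE app; the model and function names are made up for illustration.

    # Deferring the datastore write to a task queue with the deferred library.
    # Submission and write_record are hypothetical names.
    from google.appengine.ext import deferred, ndb

    class Submission(ndb.Model):
        payload = ndb.TextProperty()

    def write_record(payload):
        Submission(payload=payload).put()

    # In the POST handler: enqueue the write and return to the client immediately.
    # deferred.defer(write_record, self.request.body)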
What I would like to know is:
1. Tips/insights into calculating how many idle instances one would need per N requests/second.
2. It seems that the maximum throughput of a task queue is 500 tasks/second. Is this also the rate at which you can enqueue tasks, and if not, is there a cap on that? I'm guessing not, since enqueues are probably just datastore writes, but I would like to be sure.
My fallback plan if I am not confident of saving all of the information for this flash mob is to set up a beefy Amazon EC2 instance, run a web server on it and make my clients send a backup request to this server.
You must understand that Idle Instances are only used while new frontend instances are being spun up. This means that they are only used during traffic increases; when traffic is steady, they are not used.
Now if your instance needs 20 sec to spin up and can handle 10 req/sec of steady traffic, and your traffic INCREASE is 5 req/sec, then you'll need 20 * 5 / 10 = 10 idle instances if you don't want any requests dropped.
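That rule of thumb written out as a tiny helper; the numbers are just the example figures above, not GAE-published limits.

    # Idle instances needed ~= spin-up time * traffic increase / per-instance throughput.
    def idle_instances_needed(spinup_seconds, increase_rps, per_instance_rps):
        return spinup_seconds * increase_rps / per_instance_rps

    print(idle_instances_needed(20, 5, 10))  # -> 10 idle instances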
What you should do is:
Maximize instance throughput (number of requests it can handle): optimize code, use async db operations and enable Concurrent Requests.
Minimize your instance startup time. This is important because idle instances are used while new instances are spinning up, and the time it takes to spin up a new instance directly relates to how many idle instances you need. If you use Java, this means getting rid of any heavy frameworks that do classpath scanning (Spring, etc.).
Finally, the number of frontend instances needed is VERY application-specific. But since you have already had a traffic increase, you should know how many requests per second a single frontend instance can handle.
Edit: There is one more obvious thing you should do: HTTP caching. GAE has a transparent HTTP cache which can be simply controlled via Cache-Control headers.
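For example, a sketch of a handler that opts into that cache, assuming a Python/webapp2 app; the path and the 60-second TTL are arbitrary.

    # Responses marked public with a max-age can be served by App Engine's
    # front-end cache without consuming an instance. Path and TTL are assumptions.
    import webapp2

    class StatusHandler(webapp2.RequestHandler):
        def get(self):
            self.response.headers["Cache-Control"] = "public, max-age=60"
            self.response.write("ok")

    app = webapp2.WSGIApplication([("/status", StatusHandler)])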
Also, if analytics has a big performance impact on your server, consider using client side analytics services (like Google Analytics). They also work for devices.

Google Email Migrator API is too slow

We know from the documentation there is a theoretical limit of 1 message per user per second, but we aren't coming anywhere close to that while running email migrations on a high-end server. What should we do? Should we increase the number of threads per user to more than one (even though the documentation suggests only 1 thread per user)? I've used their GAMME tool and it blows the Email Migration API away in terms of speed, even on lower-end servers.
Does anyone have any suggestions? It's not super-slow, but it's slow enough to be a pain.
The GAMME tool itself utilizes the Email Migration API, it's not doing anything special so there are likely other factors slowing your migration. Are you actually hitting the migration API from AppEngine? If so, you should be able to utilize appstats to profile your application and see if there are other bottlenecks. Where are you pulling messages from?
Do not attempt to use more than 1 thread per user migration; it won't work and you'll get performance issues. DO make sure that you are properly implementing exponential backoff: if your app doesn't respond to 503 error codes by backing off exponentially (1 second the first time, then 2 seconds, 4, 8, etc.), Google will respond by further throttling your API calls.
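A generic sketch of that backoff loop; the exception class is a placeholder for however your HTTP client surfaces a 503, not part of the Email Migration API itself.

    # Exponential backoff with jitter for 503 responses; ServiceUnavailableError
    # is a stand-in for whatever your client library raises.
    import random
    import time

    class ServiceUnavailableError(Exception):
        """Placeholder for an HTTP 503 from the API."""

    def call_with_backoff(request_fn, max_attempts=6):
        for attempt in range(max_attempts):
            try:
                return request_fn()
            except ServiceUnavailableError:
                # Sleep 1 s, 2 s, 4 s, 8 s ... plus a little jitter.
                time.sleep((2 ** attempt) + random.random())
        raise RuntimeError("gave up after repeated 503s")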

Massive multi-user realtime application with Google App Engine

I'm building a multiuser realtime application with Google App Engine (Python) that would look like the Facebook livestream plugin: https://developers.facebook.com/docs/reference/plugins/live-stream/
Which means: 1 to 1,000,000 users on the same webpage can perform actions that everyone else is instantly notified of. It's like a group chat but with a lot of people...
My questions:
- Is App Engine able to scale to that kind of number?
- If yes, how would you design it?
- If no, what would be your suggestions?
Right now, this is my design:
- I'm using the App Engine Channel API
- I store every connected user in memcache
- Every time an action is performed, a notification task is added to a task queue
- The task consists of retrieving all users from memcache and sending each of them a notification.
I know my bottleneck is in the task. Everybody is notified through the same task/request. Right now, with 30 users connected, it takes about 1 second, so for 100,000 users you can imagine how long it could take.
How would you correct this?
Thanks a lot
How many updates per user do you expect per second? If each of 1,000,000 users updates just once an hour, you'll be sending 10^12 messages per hour, because every incoming message results in 1,000,000 outgoing sends. That is roughly 277 million messages per second. Put another way: one message per user per hour works out to about 277 incoming messages per second, and therefore about 277 million outgoing messages per second.
So I think your basic design is flawed. But the underlying question: "how do I broadcast the same message to lots of users" is still valid, and I'll address it.
As you have discovered, the Channel API isn't great at broadcast because each call takes about 50ms. You could work around this with multiple tasks executing in parallel.
For cases like this, where lots of clients need the exact same stateless data, I would encourage you to use polling rather than the Channel API, since every client is going to receive the exact same information; there's no need to send individualized messages to each client. Decide on an acceptable average latency (e.g. 1 second) and have clients poll at an interval of twice that (e.g. every 2 seconds). Write a very lightweight, memcache-backed servlet that just returns the most recent block of data and let the clients de-dupe.
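A sketch of that memcache-backed endpoint, assuming a Python/webapp2 app; the cache key and JSON shape are invented for illustration.

    # Every client polls the same cached block, so one memcache read serves
    # all of them; key name and payload shape are assumptions.
    import json

    import webapp2
    from google.appengine.api import memcache

    LATEST_KEY = "latest-broadcast"

    class LatestHandler(webapp2.RequestHandler):
        def get(self):
            latest = memcache.get(LATEST_KEY) or {"seq": 0, "events": []}
            self.response.headers["Content-Type"] = "application/json"
            self.response.write(json.dumps(latest))

    # The writer side just replaces the cached block when something happens:
    # memcache.set(LATEST_KEY, {"seq": 1234, "events": [...]})

    app = webapp2.WSGIApplication([("/latest", LatestHandler)])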

Google App Engine Channels API and sending heartbeat signals from client

Working on a GAE project, and one requirement we have is that we want to be able to determine, in a timely manner, whether a user has left the application. Currently we have this working, but it is unreliable, so I am researching alternatives.
The way we do this now is we have a function set up to run in JS on an interval that sends a heartbeat signal to the GAE app using an AJAX call. This works relatively well, but it generates a lot of traffic and CPU usage. If we don't hear a heartbeat from a client for several minutes, we determine they have left the application. We also have the unload function wired up to send a "part" message, again through an AJAX call. This works less than well; most of the time it doesn't work at all.
We are also making use of the Channels API. One thing I have noticed is that when our app has an open channel, the client also seems to send a heartbeat signal in the form of a call to http://talkgadget.google.com/talkgadget/dch/bind. I believe this is happening from the iframe and/or JS that gets loaded when opening the channel in the client.
My question is, can my app, on the server side, somehow hook into these calls to http://talkgadget.google.com/talkgadget/dch/bind and use them as the heartbeat signal? Is there a better way to detect if a client is still connected even if they aren't actively doing anything in the client?
Google has added this feature:
See https://developers.google.com/appengine/docs/java/channel/overview
Tracking Client Connections and Disconnections
Applications may register to be notified when a client connects to or disconnects from a channel.
You can enable this inbound service in appengine-web.xml:
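For a Java app, the configuration the docs describe is roughly the following (the Python equivalent goes in app.yaml as an inbound_services entry):

    <inbound-services>
        <service>channel_presence</service>
    </inbound-services>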
Currently the channel API bills you up-front for all the CPU time the channel will consume for two hours, so it's probably cheaper to send messages to a dead channel than to send a bunch of heartbeat messages to the server.
https://groups.google.com/d/msg/google-appengine/sfPTgfbLR0M/yctHe4uU824J
What I would try is attaching a "please acknowledge" parameter to every Nth message (staggered to avoid every client acknowledging the same message). If two of these are ignored, mute the channel until you hear from that client.
You can't currently use the Channel API to determine if a user is still online or not. Your best option for now depends on how important it is to know as soon as a user goes offline.
If you simply want to know they're offline so you can stop sending messages, or it's otherwise not vital you know immediately, you can simply piggyback pings on regular interactions. Whenever you send the client an update and you haven't heard anything from them in a while, tag the message with a 'ping request', and have the client send an HTTP ping whenever it gets such a tagged message. This way, you'll know they're gone shortly after you send them a message. You're also not imposing a lot of extra overhead, as they only need to send explicit pings if you're not hearing anything else from them.
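A sketch of that piggybacking on the server side, assuming a Python GAE app; the memcache keys, 5-minute threshold, and /ping endpoint are all invented for illustration.

    # Tag outgoing channel messages with a ping request only when the client
    # has been quiet for a while; names and threshold are assumptions.
    import json
    import time

    from google.appengine.api import channel, memcache

    PING_AFTER_SECONDS = 300

    def send_update(client_id, payload):
        last_seen = memcache.get("seen:" + client_id) or 0
        if time.time() - last_seen > PING_AFTER_SECONDS:
            payload["ping_requested"] = True  # client answers with an HTTP ping
        channel.send_message(client_id, json.dumps(payload))

    def record_ping(client_id):
        # Called by the hypothetical /ping handler the client hits in response.
        memcache.set("seen:" + client_id, time.time())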
If you expect long periods of inactivity and it's important to know promptly when they go offline, you'll have to have them send pings on a schedule, as you suggested. You can still use the trick of piggybacking pings on other requests to minimize them, and you should set the interval between pings as long as you can manage, to reduce load.
I do not have a good solution to your core problem of "hooking" the client to server. But I do have an interesting thought on your current problem of "traffic and CPU usage" for periodic pings.
I assume you have a predefined heartbeat interval, say 1 minute. So if there are 120 clients, your server would process heartbeats at an average rate of 2 per second. Not good if half of them are "idle clients".
Let's assume a client has already been idle for 15 minutes. Does this client's browser still need to send heartbeats at the constant predefined interval of 1 minute? Why not make it variable?
My proposal is simple: vary the heartbeat rate depending on the client's activity level.
When the client is "active", heartbeats are sent once per minute. When the client has been "inactive" for more than 5 minutes, the heartbeat rate slows down by 50% (one every 2 minutes). After another 10 minutes, it drops by another 50% (one every 4 minutes)... At some threshold point, consider the client "unhooked".
In this method, "idle clients" would not be troubling the server with frequent heartbeats, allowing your app server to focus on "active clients".
It's a lot of JavaScript to write, but probably worth it if you are having trouble with traffic and CPU usage :-)
