Massive multi-user realtime application with Google App Engine - google-app-engine

I'm building a multiuser realtime application with Google App Engine (Python) that would look like the Facebook livestream plugin: https://developers.facebook.com/docs/reference/plugins/live-stream/
Which means: anywhere from 1 to 1,000,000 users on the same web page can perform actions that are instantly broadcast to everyone else. It's like a group chat, but with a lot of people...
My questions:
- Is App Engine able to scale to that kind of number?
- If yes, how would you design it?
- If no, what would be your suggestions?
Right now, this is my design:
- I'm using the App Engine Channel API
- I store every connected user in memcache
- Every time an action is performed, a notification task is added to a task queue
- The task consists of retrieving all users from memcache and sending each of them a notification.
I know my bottleneck is the task: everybody is notified through the same task/request. Right now, with 30 users connected, it takes about 1 second, so for 100,000 users you can imagine how long it could take.
How would you correct this?
Thanks a lot

How many updates per user do you expect per second? Even if each user updates just once every hour, you'll be sending 10^12 messages per hour, since every incoming message results in 1,000,000 outgoing sends. Put another way: one message per user per hour works out to 277 incoming messages per second, but 277 million outgoing messages per second.
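The arithmetic above, spelled out as a quick sanity check:

```python
# Fan-out arithmetic for 1,000,000 users each sending one message per hour.
users = 10**6
incoming_per_hour = users * 1                  # one message per user per hour
outgoing_per_hour = incoming_per_hour * users  # each message fans out to all users

incoming_per_sec = incoming_per_hour / 3600.0  # ~277 incoming messages/sec
outgoing_per_sec = outgoing_per_hour / 3600.0  # ~277 million outgoing messages/sec
```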
So I think your basic design is flawed. But the underlying question: "how do I broadcast the same message to lots of users" is still valid, and I'll address it.
As you have discovered, the Channel API isn't great for broadcast, because each call takes about 50ms. You could work around this with multiple tasks executing in parallel.
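One way to parallelize, as a minimal sketch: split the subscriber list into fixed-size chunks and enqueue one notification task per chunk, so the ~50ms-per-send cost is paid across many tasks at once. `enqueue_notify_task` is a hypothetical stand-in for whatever enqueues the task (e.g. `taskqueue.add` on App Engine):

```python
# Fan the broadcast out across many tasks instead of one.
def chunk(seq, size):
    """Split seq into lists of at most `size` items."""
    return [seq[i:i + size] for i in range(0, len(seq), size)]

def fan_out(client_ids, message, enqueue_notify_task, chunk_size=100):
    """Enqueue one notification task per chunk of clients.

    Returns the number of tasks enqueued. Each task then only has to
    push `message` to `chunk_size` channels, so the tasks run in parallel.
    """
    batches = chunk(client_ids, chunk_size)
    for batch in batches:
        enqueue_notify_task(batch, message)
    return len(batches)
```

With `chunk_size=100`, notifying 100,000 users becomes 1,000 parallel tasks of ~5 seconds each rather than one task of ~83 minutes.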
For cases like this -- lots of clients who need the exact same stateless data -- I would encourage you to use polling rather than the Channel API, since every client is going to receive the exact same information; there's no need to send individualized messages to each client. Decide on an acceptable average latency (e.g. 1 second) and poll at twice that interval (e.g. every 2 seconds). Write a very lightweight, memcache-backed handler that just returns the most recent block of data, and let the clients de-dupe.
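A minimal sketch of that polling scheme, assuming a module-level dict as a stand-in for memcache (on App Engine you would use `google.appengine.api.memcache` with the same get/set pattern):

```python
# One shared, cached block of recent messages that every client fetches;
# clients de-dupe by monotonically increasing message id.
_BLOCK_SIZE = 50  # how many recent messages to keep in the cached block
_cache = {"messages": [], "next_id": 1}

def publish(text):
    """Record a new broadcast message in the shared cache."""
    msg = {"id": _cache["next_id"], "text": text}
    _cache["next_id"] += 1
    _cache["messages"] = (_cache["messages"] + [msg])[-_BLOCK_SIZE:]

def poll(last_seen_id):
    """What a client's 2-second poll returns: the cached block,
    filtered to messages the client has not seen yet."""
    return [m for m in _cache["messages"] if m["id"] > last_seen_id]
```

Every client hits the same cached block, so the per-request cost stays constant no matter how many users are connected.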

Related

Using gmail api suspends an user account due to "Request limits"

Our application processes emails for clients. One client very often became "suspended". In the G Suite Admin console he saw that he had hit the "Requests limit". But while the Gmail web interface is suspended, Gmail mobile keeps working, and our application also does not see any interruption: we see neither '429 Too Many Requests "Too many concurrent requests for user"' nor '429 Too Many Requests "User-rate limit exceeded"' nor any other kind of error.
So although the user becomes suspended from time to time due to the "request limit", we do not see any requests rejected due to limits/quotas. We do not make concurrent requests per user -- different users are processed concurrently, but processing for one user is sequential. We do use batching, though; the biggest batch size is 20.
On a page about "Request limits" I found only that "the number of server requests a G Suite account can make at one time is limited". What does "one time" mean? Does batching make the issue worse, or does it help? Are other APIs (e.g. the Calendar API) counted here? Is Gmail mobile counted here? Is there any way to get support for such a question?
Any help is appreciated.
PS. Most calls to Gmail are history.list, messages.get (max batch size 5), and threads.get (max batch size 20). Calls to messages.attachments.get (max batch size 2) are less frequent. history.list and messages.get are called as new messages arrive in the user's mailbox. Sometimes messages.get and threads.get burst, but we have code to prevent exceeding quotas. Attachments are processed in batches of 2, one batch every 5 seconds, though we are going to slow that down to 1 attachment per 5 seconds. In my opinion, the rate (per user) is low. The app also calls other APIs: the Drive API, the Calendar API, and the old Contacts API.
We see the 429 code very rarely, yet one of our clients suffers from suspensions 1-2 times a week. Sometimes a suspension lasts about an hour; sometimes it is a sequence of several short periods.
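The pacing described in the PS (one attachment fetch every 5 seconds) can be sketched as a minimal per-user throttle. `now` and `sleep` are injected here so the sketch is testable; in production they would simply be `time.time` and `time.sleep`:

```python
# Minimal per-user throttle: enforce a minimum spacing between
# successive API calls (e.g. one messages.attachments.get per 5 s).
class Throttle:
    def __init__(self, min_interval, now, sleep):
        self.min_interval = min_interval  # seconds between calls
        self.now = now
        self.sleep = sleep
        self.last = None  # timestamp of the previous call, if any

    def wait(self):
        """Block until at least min_interval has passed since the last call."""
        t = self.now()
        if self.last is not None and t - self.last < self.min_interval:
            self.sleep(self.min_interval - (t - self.last))
            t = self.now()
        self.last = t
```

Calling `throttle.wait()` before each fetch keeps one user's processing sequential and paced, while different users (each with their own `Throttle`) still run concurrently.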

Performance of new Amazon SNS Mobile Push Service

Does anyone have performance data for Amazon's new Mobile Push service?
We are looking at using it, but want to understand performance for:
How many requests per second it can handle
Latency for delivering a notification to a device, in seconds
How long it takes to send an identical notification to a million users (using topics)
Since Amazon doesn't publish performance numbers, and because creating synthetic tests for mobile push is difficult, I was wondering if anyone had real-world data.
We've sent a message to around 300,000 devices, and they were delivered almost instantaneously. Obviously we do not have access to each of those devices, but judging by a sampling of devices subscribed to various topics at different times, all received the message within 10 seconds of the actual send.
A single publish to a device from the AWS console is startlingly fast. It appears on your device at almost the same instant that you release the "Publish" button on the AWS console.
While the delay in the AWS delivery infrastructure is nominal, and will surely be driven to near zero as they improve and add to their infrastructure, the time between the user action that generates the message in your system and the moment AWS actually receives the "send this notification" request will likely be the larger portion of the end-to-end delay. The limit per topic is 10,000 devices, so if you are sending to a million users, you'll have 100 (or more) topics to publish to. The time it takes your software to publish to all of these topics depends on how much parallelism you support in the operation. It takes somewhere around 50-100ms to publish to a topic, so if you do this serially, it may be up to 10 seconds before you even publish your message to the 100th topic.
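A sketch of getting that parallelism with a thread pool. `publish` here is a stand-in for the real SNS publish call (e.g. via boto), injected so the sketch is self-contained:

```python
from concurrent.futures import ThreadPoolExecutor

# Publish one message to many topics concurrently. With 100 topics at
# ~50-100 ms per publish, a serial loop takes 5-10 s; a thread pool
# brings the wall-clock time close to a single publish.
def broadcast(topic_arns, message, publish, workers=20):
    """Publish `message` to every topic in parallel; return the results
    in the same order as `topic_arns`."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(publish, arn, message) for arn in topic_arns]
        return [f.result() for f in futures]
```

Since SNS publishes are independent HTTP calls, the worker count is bounded mostly by your outbound connection limits, not by any ordering requirement.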
UPDATE: As of August 19, 2014, the limit on the number of subscribers you can have per topic has been raised to 10,000,000:
https://aws.amazon.com/blogs/aws/sns-large-topics-and-mpns-auth-mode/

Best practices to limit the number of calls to Mirror API

I, like everyone else I imagine, have a courtesy limit of 1000 Mirror API calls per day.
I see there's a batching facility that looks promising, but it appears to be able to batch only requests for a single credential. So even one customer pushing to the API every 60 seconds will use 1440 requests/day. Ideally, 30 seconds is where I'd like to be, which means 2880 requests/day, multiplied by the number of customers. It will get really big really fast.
I might be missing something, but I don't see a way around that.
If it were available, I could glom all updates across all clients in each 30-second period into one giant message...
Is there a better design pattern to keep cards up-to-date with telemetry that's changing in real-time?
You can send requests to multiple users with a single batch request: instead of setting the Authorization header on the batch request itself, set the Authorization header on each sub-request.
Our Python and Java Quick Start projects have an example of using batch requests to send an update to up to 10 users. This is also mentioned in the "Building Glass Services with the Google Mirror API" I/O session.
Otherwise, you can check the protocol documentation in our reference guide.
As Scarygami mentioned, each sub-request still consumes quota, so the only savings are in bandwidth and HTTP overhead, especially if using gzip encoding.
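A rough sketch of what such a multi-user batch body looks like: a multipart/mixed payload where each sub-request carries its *own* Authorization header. The endpoint path and boundary string here are illustrative, not authoritative:

```python
# Build a multipart/mixed batch body with one sub-request per user token,
# each carrying its own Authorization header (rather than one header on
# the outer batch request).
def build_batch_body(tokens, payload, boundary="batch_boundary"):
    """Return the batch request body for POSTing the same timeline
    payload on behalf of several users."""
    parts = []
    for token in tokens:
        parts.append(
            "--%s\r\n"
            "Content-Type: application/http\r\n\r\n"
            "POST /mirror/v1/timeline HTTP/1.1\r\n"
            "Authorization: Bearer %s\r\n"
            "Content-Type: application/json\r\n\r\n"
            "%s\r\n" % (boundary, token, payload)
        )
    return "".join(parts) + "--%s--" % boundary
```

The whole body is then POSTed once with `Content-Type: multipart/mixed; boundary=batch_boundary`, saving the per-request HTTP overhead even though each sub-request is still billed against quota.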

When is it better to use polling instead of the channel api?

I have an application where users can collaborate on photo albums. I currently use polling on the client to check for new content every 30 seconds. There can be any number of people uploading and viewing an album at any given time.
On the server side, I cache the data to return (so the query for new content is cheap). I assume that polling every 30 seconds from multiple clients will cause more instances to stay active (and thus increase costs).
Would it be overkill to use the channel api for the above use case instead of polling?
Does the channel api keep instances alive too?
Are there any use cases where polling is preferable instead of using the channel api?
I'm using channels, but I'm finding they're not great. If a channel times out after a network disconnect, it somehow screws up the history in my browser. I filed a bug a little over a week ago, but it hasn't been acknowledged; another bug filed over a month ago hasn't been acknowledged either -- so don't expect quick support on channel issues.
Channels are nice to have - you can notify users in less than a second if status of some sort changes, but they're not reliable. Sometimes the disconnect event doesn't occur, but the channel just stops working. My current system uses channels, but also polls every 5-10 seconds. Because of the unreliability, I wouldn't use channels as a replacement for polling, just a way to give faster response.
Even then, you'll have to work out whether it'll save you money. If you expect users to leave your app open for 15 minutes without hitting the server, then maybe you'll save some instance time. However, if your users are hitting the server anyway, your instances probably won't get a chance to shut down. And keeping your instances up actually helps reduce cold starts a bit, too.

Google App Engine Channels API and sending heartbeat signals from client

I'm working on a GAE project, and one requirement we have is to be able to determine, in a timely manner, whether a user has left the application. We currently have this working, but it is unreliable, so I am researching alternatives.
The way we do this now: a JS function runs on an interval and sends a heartbeat signal to the GAE app via an AJAX call. This works relatively well, but generates a lot of traffic and CPU usage. If we don't hear a heartbeat from a client for several minutes, we conclude they have left the application. We also have the unload handler wired up to send a "part" (leave) message, again through an AJAX call. This works less than well -- most of the time, not at all.
We are also making use of the Channel API. One thing I have noticed is that when our app has an open channel, the client also seems to send a heartbeat signal of its own, in the form of a call to http://talkgadget.google.com/talkgadget/dch/bind. I believe this comes from the iframe and/or JS that gets loaded when opening a channel in the client.
My question is: can my app, on the server side, somehow hook into these calls to http://talkgadget.google.com/talkgadget/dch/bind and use them as the heartbeat signal? Is there a better way to detect whether a client is still connected, even if they aren't actively doing anything in the client?
Google has added this feature:
See https://developers.google.com/appengine/docs/java/channel/overview
Tracking Client Connections and Disconnections
Applications may register to be notified when a client connects to or disconnects from a channel.
You can enable this inbound service in appengine-web.xml:
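The snippet itself is missing above; going by the Channel API docs, enabling the presence service in appengine-web.xml (Java runtime; the Python runtime uses an equivalent `inbound_services` entry in app.yaml) would look like:

```xml
<inbound-services>
  <service>channel_presence</service>
</inbound-services>
```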
Currently the Channel API bills you up-front for all the CPU time the channel will consume over its two-hour lifetime, so it's probably cheaper to send messages to a dead channel than to send a bunch of heartbeat messages to the server.
https://groups.google.com/d/msg/google-appengine/sfPTgfbLR0M/yctHe4uU824J
What I would try is attaching a "please acknowledge" parameter to every Nth message (staggered, to avoid every client acknowledging the same message). If two of these are ignored, mute the channel until you hear from that client.
You can't currently use the Channel API to determine if a user is still online or not. Your best option for now depends on how important it is to know as soon as a user goes offline.
If you simply want to know they're offline so you can stop sending messages, or it's otherwise not vital you know immediately, you can simply piggyback pings on regular interactions. Whenever you send the client an update and you haven't heard anything from them in a while, tag the message with a 'ping request', and have the client send an HTTP ping whenever it gets such a tagged message. This way, you'll know they're gone shortly after you send them a message. You're also not imposing a lot of extra overhead, as they only need to send explicit pings if you're not hearing anything else from them.
If you expect long periods of inactivity and it's important to know promptly when they go offline, you'll have to have them send pings on a schedule, as you suggested. You can still use the trick of piggybacking pings on other requests to minimize them, and you should set the interval between pings as long as you can manage, to reduce load.
I do not have a good solution to your core problem of "hooking" the client to server. But I do have an interesting thought on your current problem of "traffic and CPU usage" for periodic pings.
I assume you have a predefined heartbeat interval, say 1 minute. So if there are 120 clients, your server processes heartbeats at an average rate of 2 per second. Not good if half of them are idle clients.
Let's assume a client has already been idle for 15 minutes. Does this client's browser still need to send heartbeats at the constant predefined interval of 1 minute? Why not make it variable?
My proposal is simple: Vary the heart-beats depending on activity levels of client.
When the client is "active", heartbeats arrive at 1 per minute. When the client has been "inactive" for more than 5 minutes, the heartbeat rate slows to 50% (one every 2 minutes). After another 10 minutes, the rate drops by another 50% (one every 4 minutes)... At some threshold point, consider the client "unhooked".
In this method, "idle clients" would not be troubling the server with frequent heartbeats, allowing your app server to focus on "active clients".
It's a lot of JavaScript to write, but probably worth it if you are having trouble with traffic and CPU usage :-)
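The adaptive schedule above, expressed as a small function (shown in Python for clarity; the thresholds are the sample values from the answer and would live client-side in practice):

```python
# Adaptive heartbeat schedule: the longer a client has been idle, the
# less often it heartbeats, until it is considered "unhooked".
UNHOOKED = None  # sentinel: stop heartbeating entirely

def heartbeat_interval(idle_minutes, base=1, unhook_after=30):
    """Return the minutes to wait before the next heartbeat,
    or UNHOOKED once the client has been idle past the threshold."""
    if idle_minutes >= unhook_after:
        return UNHOOKED
    if idle_minutes >= 15:   # 5 minutes idle + another 10
        return base * 4      # one heartbeat every 4 minutes
    if idle_minutes >= 5:
        return base * 2      # one heartbeat every 2 minutes
    return base              # active: one heartbeat per minute
```

The client re-evaluates this after every heartbeat (resetting `idle_minutes` to 0 on any user activity), so active clients stay responsive while idle ones taper off.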