GAE Golang implementing some unique request queue? - google-app-engine

I have a Google App Engine Go application that is handling real-time notifications from a third-party server. Those notifications need to be logged and processed more or less on the spot. However, the third-party server has a nasty habit of sending two requests at the same time, sometimes 1 millisecond apart from one another - too fast to even complete a datastore / memcache write that could act as a semaphore.
Is there a way to handle such concurrent requests neatly? Ideally I would want to put them on some queue that is guaranteed to process items one at a time. Is something like this possible in GAE Golang?

Do a memcache add for the unique identifier of the message, with a short timeout (the exact value doesn't really matter). If the add succeeds, process the message; if it fails, another request has already claimed it.
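A minimal Go sketch of that approach, assuming the notification carries some unique identifier (the parameter name and key prefix below are made up):

    import (
        "net/http"
        "time"

        "google.golang.org/appengine"
        "google.golang.org/appengine/memcache"
    )

    // handleNotification processes a third-party notification at most once.
    // memcache.Add is atomic: only the first request for a given key succeeds,
    // every concurrent duplicate gets ErrNotStored.
    func handleNotification(w http.ResponseWriter, r *http.Request) {
        ctx := appengine.NewContext(r)

        msgID := r.FormValue("id") // hypothetical: unique identifier of the message

        item := &memcache.Item{
            Key:        "notif:" + msgID,
            Value:      []byte("1"),
            Expiration: 10 * time.Minute, // short timeout; the exact value is not critical
        }

        switch err := memcache.Add(ctx, item); err {
        case nil:
            // We won the race: log and process the notification here.
        case memcache.ErrNotStored:
            // A concurrent (or earlier) request already claimed this ID; do nothing.
        default:
            // Memcache itself failed; return 500 so the sender retries.
            http.Error(w, err.Error(), http.StatusInternalServerError)
        }
    }

Keep in mind that memcache entries can be evicted, so this only guards against near-simultaneous duplicates; for a durable guarantee you would record the ID in the datastore inside a transaction instead.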

Related

GAE Datastore / Task queues – Saving only one item per user at a time

I've developed a python app that registers information from incoming emails and saves this information to the GAE Datastore. Registering the emails works just fine. As part of the registration, emails with the same subject and recipients get a conversation ID. However, sometimes emails enter the system so fast after each other, that emails from the same conversation don't get the same ID. This happens because two emails from the same conversation are being processed at the same time and GAE doesn't see the other entry yet when running a query for this conversation.
I've been thinking of a way to prevent this, and I think it would be best if the system processes only one email per user at a time (each sender has his own account). This could be done with a push task queue that first checks whether an email is currently being processed for this user, and if so, puts the new task in a pull queue from which it can be retrieved as soon as the previous task has finished.
The big disadvantage of this is that (I think) I can't run the push queue asynchronously, which obviously is a big performance disadvantage. Any ideas on what would be a better way to set up such a process?
Apparently this was a typical race condition. I made use of the transactions functionality to prevent multiple processes from writing at the same time. Documentation can be found here: https://cloud.google.com/appengine/docs/python/datastore/transactions
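The question is about Python, but for the Go-flavoured thread above the same idea looks roughly like this: derive a deterministic key from the subject and participants, then do the get-or-create inside a transaction, so two emails from the same conversation cannot both decide the conversation is new. Entity and field names here are invented.

    import (
        "context"
        "crypto/sha1"
        "fmt"

        "google.golang.org/appengine/datastore"
    )

    // Conversation is a hypothetical entity keyed by a digest of subject and
    // participants, so every email in a conversation maps to the same key.
    type Conversation struct {
        Subject string
        Emails  int
    }

    // getOrCreateConversation does the lookup and the write in one transaction,
    // so two emails arriving at the same time cannot both conclude that the
    // conversation does not exist yet.
    func getOrCreateConversation(ctx context.Context, subject, participants string) (*datastore.Key, error) {
        digest := fmt.Sprintf("%x", sha1.Sum([]byte(subject+"|"+participants)))
        key := datastore.NewKey(ctx, "Conversation", digest, 0, nil)

        err := datastore.RunInTransaction(ctx, func(tc context.Context) error {
            var conv Conversation
            err := datastore.Get(tc, key, &conv)
            if err == datastore.ErrNoSuchEntity {
                conv = Conversation{Subject: subject}
            } else if err != nil {
                return err
            }
            conv.Emails++
            _, err = datastore.Put(tc, key, &conv)
            return err
        }, nil)
        return key, err
    }

Using a deterministic key instead of a query keeps the transaction simple, since non-ancestor queries are not allowed inside datastore transactions.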

Best implementation of turn-based access on App Engine?

I am trying to implement a 2-player turn-based game with a GAE backend. The first thing this game requires is a very simple match making system that operates like this:
User A asks the backend for a match. The backend tells him to come back later.
User B asks the backend for a match. He will be matched with A.
User C asks the backend for a match. The backend tells him to come back later.
User D asks the backend for a match. He will be matched with C.
and so on...
(edit: my assumption is that if I can figure this one out, most other operations in a turn-based game can use the same implementation)
This can be done quite easily in Apple Gamecenter and Xbox Live, however I would rather implement this on an open and platform independent backend like GAE. After some research, I have found the following options for a GAE implementation:
use memcache. However, there is no guarantee that the memcache is synchronized across different instances. I did some tests and could actually see match requests disappearing due to memcache mis-synchronization.
Harden memcache with Sharding Counters. This does not always solve the multiple-instance problem and may result in high memcache quota usage.
Use memcache with Compare and Set. Does not solve the multiple instance problem when used as a mutex.
task queues. I have no idea how to use these, but someone mentioned them as a possible solution. However, I am afraid that queues will eat my GAE quota very quickly.
push queues. Same as above.
transaction. Same as above. Also probably very expensive.
channels. Same as above. Also probably very expensive.
Given that the match making is a very basic operation in online games, I cannot be the first one encountering this. Hence my questions:
Do you know of any safe mechanism for match making?
If multiple solutions exist, which is the cheapest (in terms of GAE quota usage) solution?
You could accomplish this using cron tasks in a scheme like this:
    from google.appengine.ext import db

    class MatchRequest(db.Model):
        requestor = db.StringProperty()
        opponent = db.StringProperty(default='')
User A asks for a match, a MatchRequest entity is created with A as the requestor and the opponent blank.
User A polls to see when the opponent field has been filled.
User B asks for a match, a MatchRequest entity is created with B as the requestor.
User B polls to see when the opponent field has been filled.
A cron job that runs every 20 seconds or so does the following:
Grab all MatchRequest where opponent == ''
Make all appropriate matches
Put all the MatchRequests in a transaction
Now when A and B poll next they will see that they have an opponent.
According to the GAE docs on cron, free apps can have up to 20 cron tasks. The computation required by these crons for a small number of users should be small.
This would be a safe way but I'm not sure if it is the cheapest way. It's also pretty easy to implement.
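The model above is Python; purely to illustrate the pairing step, here is a rough Go sketch of such a cron handler, assuming an equivalent MatchRequest kind with Requestor and Opponent string fields (handler path and error handling are simplified):

    import (
        "context"
        "net/http"

        "google.golang.org/appengine"
        "google.golang.org/appengine/datastore"
    )

    // MatchRequest mirrors the Python model above (hypothetical Go equivalent).
    type MatchRequest struct {
        Requestor string
        Opponent  string
    }

    // matchCron pairs up outstanding requests. Wire it to the URL named in cron.yaml.
    func matchCron(w http.ResponseWriter, r *http.Request) {
        ctx := appengine.NewContext(r)

        // Grab all MatchRequest entities that still have no opponent.
        var reqs []MatchRequest
        keys, err := datastore.NewQuery("MatchRequest").
            Filter("Opponent =", "").
            GetAll(ctx, &reqs)
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }

        // Pair neighbours: (0,1), (2,3), ... Any leftover request waits for the next run.
        for i := 0; i+1 < len(reqs); i += 2 {
            a, b := i, i+1
            err := datastore.RunInTransaction(ctx, func(tc context.Context) error {
                var ra, rb MatchRequest
                if err := datastore.Get(tc, keys[a], &ra); err != nil {
                    return err
                }
                if err := datastore.Get(tc, keys[b], &rb); err != nil {
                    return err
                }
                if ra.Opponent != "" || rb.Opponent != "" {
                    return nil // already matched by someone else; skip
                }
                ra.Opponent = rb.Requestor
                rb.Opponent = ra.Requestor
                _, err := datastore.PutMulti(tc,
                    []*datastore.Key{keys[a], keys[b]},
                    []MatchRequest{ra, rb})
                return err
            }, &datastore.TransactionOptions{XG: true}) // two entity groups in one transaction
            if err != nil {
                continue // leave this pair for the next cron run
            }
        }
    }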

how many users in a GAE instance?

I'm using the Python 2.5 runtime on Google App Engine. Needless to say I'm a bit worried about the new costs so I want to get a better idea of what kind of traffic volume I will experience.
If 10 users simultaneously access my application at myapplication.appspot.com, will that spawn 10 instances?
If no, how many users in an instance? Is it even measured that way?
I've already looked at http://code.google.com/appengine/docs/adminconsole/instances.html but I just wanted to make sure that my interpretation is correct.
"Users" is a fairly meaningless term from an HTTP point of view. What's important is how many requests you can serve in a given time interval. This depends primarily on how long your app takes to serve a given request. Obviously, if it takes 200 milliseconds for you to serve a request, then one instance can serve at most 5 requests per second.
When a request is handled by App Engine, it is added to a queue. Any time an instance is available to do work, it takes the oldest item from the queue and serves that request. If the time that a request has been waiting in the queue ('pending latency') is more than the threshold you set in your admin console, the scheduler will start up another instance and start sending requests to it.
This is grossly simplified, obviously, but gives you a broad idea how the scheduler works.
First, no.
An instance per user is unreasonable and doesn't happen.
So what you're really asking is: how does my app scale to more instances? That depends on the load.
If you get many requests per second, another instance is started automatically so the load is distributed.
That's the core idea behind App Engine.

Is a real-time multiplayer game using Google App Engine feasible?

I am currently developing a real-time multiplayer game, and have been evaluating various cloud-based hosting solutions. I am unsure whether App Engine fits my needs, and would be grateful for any feedback.
In essence, I want the system to work like this: Player A calculates round n and generates a hash of the game state at the end of that round. He then sends his commands for that round, together with the hash, as an HTTP POST to the server. Player B does the same thing in parallel.
The server, while handling the POST from a player, first writes the received hash code to memcache. If the hash from the other player is not yet in memcache, it waits and periodically checks memcache for the other player's hash. As soon as both hashes are in memcache, it compares them for equality. If they are equal, the server sends each player's commands to the other one as the HTTP response.
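A rough Go sketch of the handler just described, only to make the flow concrete (parameter names, key format and polling interval are all made up):

    import (
        "encoding/json"
        "net/http"
        "strings"
        "time"

        "google.golang.org/appengine"
        "google.golang.org/appengine/memcache"
    )

    // handleRound stores this player's hash and commands, waits for the opponent's,
    // compares the hashes and returns the opponent's commands.
    func handleRound(w http.ResponseWriter, r *http.Request) {
        ctx := appengine.NewContext(r)

        game := r.FormValue("game") // hypothetical parameters; adapt to your protocol
        round := r.FormValue("round")
        player := r.FormValue("player")
        opponent := r.FormValue("opponent")
        hash := r.FormValue("hash")
        commands := r.FormValue("commands")

        // Publish my hash and commands for this round.
        err := memcache.Set(ctx, &memcache.Item{
            Key:        "round:" + game + ":" + round + ":" + player,
            Value:      []byte(hash + "|" + commands),
            Expiration: time.Minute,
        })
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }

        // Poll for the opponent's entry, giving up well before the request deadline.
        var theirs *memcache.Item
        for i := 0; i < 20; i++ {
            if item, err := memcache.Get(ctx, "round:"+game+":"+round+":"+opponent); err == nil {
                theirs = item
                break
            }
            time.Sleep(100 * time.Millisecond)
        }
        if theirs == nil {
            http.Error(w, "opponent not ready", http.StatusRequestTimeout)
            return
        }

        parts := strings.SplitN(string(theirs.Value), "|", 2)
        if len(parts) != 2 {
            http.Error(w, "malformed opponent entry", http.StatusInternalServerError)
            return
        }
        json.NewEncoder(w).Encode(map[string]interface{}{
            "hashesMatch": parts[0] == hash,
            "commands":    parts[1],
        })
    }

Note that busy-waiting like this keeps an instance occupied for the whole round, which is part of why the answer below points at the Channel API instead.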
A round like that should last around half a second, meaning two requests per player per second.
Of course, this way of doing it will only work if there are at least two instances of the application running, as two requests must be dealt with in parallel. Also, the memory cache must be consistent over all instances, be fairly reliable, and update immediately.
I cannot use XMPP because I want my game to be able to run within restricted networks, so it has to be limited to http on port 80.
Is there a way to enforce that two instances of the app are always running? Are there glaringly obvious flaws in my design? Do you think an architecture like this might work on App Engine? If not, what cloud based solution would you suggest?
I believe this could work. The key API for you to learn about / test would probably be the Channel API. That is what would allow back and forth communication between the client and server.
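If it helps, the Go surface of that API is small; something along these lines, where the client IDs are whatever identifiers you hand to the browser:

    import (
        "context"

        "google.golang.org/appengine/channel"
    )

    // openChannel creates a channel for one client. The returned token is handed
    // to the browser, which opens the socket with the JavaScript channel client.
    func openChannel(ctx context.Context, clientID string) (string, error) {
        return channel.Create(ctx, clientID)
    }

    // pushMove sends data (e.g. the opponent's commands) to a connected client
    // without the client having to poll.
    func pushMove(ctx context.Context, clientID string, move interface{}) error {
        return channel.SendJSON(ctx, clientID, move)
    }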
The next issue to worry about would be memcache. In general, it is reliable, but in the strictest sense we are supposed to assume that memcached data could disappear at any time.
If you decide that you can't risk losing the data like that, then you need to persist it in the datastore, which means you will have to experiment to make sure you can sustain 2 moves per turn. I think this is possible, but not trivially so. If you had said 1 move every 3 seconds I would say "no problem." But multiple updates to one entity per second start to bump up against the practical limit on writes per second, especially if they are transactional.
Having multiple instances running will not be a problem - you can pay to keep instances warm if necessary.

google app engine - is task queue a solution for QuotaExceededException?

I have Google App Engine code that tries to send a mail with an attachment of size 379 KB. The mail has two recipients - one on the "To" list and myself on the "BCC" list. Apparently, GAE treats this as two different mails, which makes it an attempt to send 758 KB (379 * 2) of attachments and results in a QuotaExceededException, as it exceeds the per-minute quota of roughly 500 KB/minute. While the mail reaches the recipient on the "To" list, the one on the BCC (myself) does not receive the mail.
Can the task queue service be considered as a solution to this problem? Will the task queue framework retry transmission of the mail to recipients who did not get it whenever a QuotaExceededException occurs?
Further, I plan to extend the aforementioned code so that it sends the same mail (with attachment) to several users. This would obviously result in a QuotaExceededException if transmission to all recipients is attempted without any time gap. Can the task queue service help me in this case in any way?
I think that Task Queues would cover this use case nicely. In fact, the example that Google uses in its documentation of Task Queues is one in which emails are sent through them.
Two things to think about:
1. Google lists Task Queues as an experimental feature that may be subject to change in future releases, so if you are using this for production code, be prepared for your application's behavior to change suddenly and without warning.
2. You'll need to configure your queue so that it does not process emails faster than they can be sent without violating your quotas. Check out the Queue Concepts section in the documentation.
Finally, have you considered hosting this large attachment as a URL and having the email contain a link to it? That'd make sending the emails much easier, and it'd be kinder to your overall bandwidth consumption, as only the recipients who really wanted it would get it.
Almost. The task queue will retry an action until it succeeds, but it will retry the whole task. AFAIK it doesn't know or remember anything about partial success. So if you just do your current action (sending to two recipients) as a task, I suspect that bad things will happen to the recipient in the To: field, as the task keeps sending them an email but failing overall, once a minute, forever...
So, you'll want to use two tasks (on the same queue): one task for each recipient.
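A sketch of that split in Go, with one task per recipient (queue name, handler path and parameters are made up; the queue's rate setting in queue.yaml is what keeps it under the per-minute mail quota):

    import (
        "net/http"
        "net/url"

        "google.golang.org/appengine"
        "google.golang.org/appengine/mail"
        "google.golang.org/appengine/taskqueue"
    )

    // enqueueMail adds one task per recipient, so a quota failure for one
    // recipient only retries that recipient.
    func enqueueMail(r *http.Request, recipients []string) error {
        ctx := appengine.NewContext(r)
        for _, rcpt := range recipients {
            t := taskqueue.NewPOSTTask("/tasks/send-mail", url.Values{
                "to": {rcpt},
            })
            // "mail-queue" is a hypothetical queue defined in queue.yaml with a
            // low rate (e.g. rate: 1/m) so tasks never outrun the mail quota.
            if _, err := taskqueue.Add(ctx, t, "mail-queue"); err != nil {
                return err
            }
        }
        return nil
    }

    // sendMailTask is the task handler; returning a non-2xx status makes the
    // queue retry just this one recipient later.
    func sendMailTask(w http.ResponseWriter, r *http.Request) {
        ctx := appengine.NewContext(r)
        msg := &mail.Message{
            Sender:  "app@example.com", // must be an authorized sender for the app
            To:      []string{r.FormValue("to")},
            Subject: "Report",
            Body:    "See attachment.",
            // Attachments: []mail.Attachment{{Name: "report.pdf", Data: data}},
        }
        if err := mail.Send(ctx, msg); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
        }
    }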
