I'm working on a very interesting project where we want to keep our game state in memory. When the server starts up, it loads the game state from the database, and that's the only time it ever reads from the database. Whenever the server changes its in-memory game state, it will issue a corresponding write to the database.
Every game is written to by only one server; if a client sends an update to the wrong server, it is told the correct server's URL and can try again.
So I'm looking for a way to have this game state persist in memory on App Engine, but I'm having a hard time. Everything I've read says that one should not keep state in memory like this, but it's a hard requirement for our system.
How do I have state in an App Engine server?
PS. Please don't tell me to change my design so that I don't have state in my server, that is a hard requirement.
Every instance has its own memory; how much depends on the instance class you configure in your settings.
The problems you are going to face:
Routing requests to the "correct" instances.
Instances may be restarted at any time.
You can keep your game states in Memcache, which acts as "memory" shared by all of your instances. This way your instances may become "stateless" - each instance finds the correct "game" in Memcache and updates it (and then the Datastore).
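A minimal sketch of that stateless pattern, with plain dicts standing in for Memcache and the Datastore (on App Engine you would use `google.appengine.api.memcache` and Datastore entities instead; `GameState`, `load_game`, and `update_game` are made-up names for illustration):

```python
# Stand-ins for Memcache and the Datastore: anything with dict-style
# get/set works, so the handler logic stays the same on App Engine.
cache = {}
datastore = {}

def load_game(game_id):
    """Fetch the game from the cache, falling back to the datastore."""
    state = cache.get(game_id)
    if state is None:
        state = datastore.get(game_id, {"score": 0})
        cache[game_id] = state  # memcache.set(game_id, state) on GAE
    return state

def update_game(game_id, points):
    """Mutate the game, then write through to cache and datastore."""
    state = load_game(game_id)
    state["score"] += points
    cache[game_id] = state      # keep Memcache fresh for other instances
    datastore[game_id] = state  # e.g. an ndb put() on GAE
    return state

print(update_game("game-42", 10))  # {'score': 10}
```

Any instance can serve any request this way, because the authoritative copy lives in the shared cache and the durable copy in the datastore.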
I have noticed that apps like Instagram keep some data persistent through app closures. Even if all internet connectivity is removed (say, via airplane mode) and the app is closed, reopening it still shows the last loaded data, despite the fact that the app cannot call any loading functions against the database. I am curious how this is achieved, and I would like to implement a similar process in my app (Xcode and Swift 4), but I do not know which method is best. I know that NSUserDefaults can persist app data, but I have seen that it is meant for small and uncomplicated data, which mine is not. I know that I can store some of the data in a local SQL database via FMDB, but some of the data I would like to persist is image data, which I am not sure how to save into SQL. I also know of Core Data, but after reading through some of the documentation I am confused about whether it fits my purpose. Which of these (or others?) would be best?
As an additional question: regardless of which persistence method I choose, every time the data is actually loaded from the DB (when an internet connection is available), which happens in viewDidLoad, I would need to update the data in persistent storage in case the connection drops. I am concerned that doubling my write procedures will slow the app down. Is there any validity to this concern, or is it unavoidable anyway?
I am making a server for a simple realtime multiplayer game on Google App Engine, Python SDK.
Requests are very simple and are processed in 1 ms at most.
I hold all game data in static variables of the instance.
I set 'Min Pending Latency' to 15 sec to prevent a second instance from being spawned.
But sometimes a second instance gets created anyway.
How can I disable or kill the second instance if it has been spawned, and process all requests in a single instance only?
If you're fighting against the system, that's an indication that you're doing something wrong.
You should not try to manage all your requests inside a single instance. That defeats the whole purpose of using GAE. The problem of course is that you shouldn't be storing your data as static variables inside your instance. Even apart from the issues with other instances being started, every instance is stopped and restarted every so often: so your data would be lost.
You should keep your data in the places meant for that: in memcache, and in the datastore.
Bear in mind that only 8 instance-hours per day are free, but you can do this with Modules + Manual Scaling:
https://developers.google.com/appengine/docs/java/modules/
https://developers.google.com/appengine/docs/python/modules/
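A sketch of what the module configuration might look like (module and script names are placeholders; manual scaling pins the module to a fixed number of resident instances):

```yaml
# game-module.yaml -- legacy Modules configuration (illustrative)
module: game
version: v1
runtime: python27
api_version: 1

manual_scaling:
  instances: 1   # exactly one resident instance holds the game state

handlers:
- url: /.*
  script: main.app
```

With manual scaling the instance is resident (it is not spun down on idle), which is what makes single-instance state workable at all, at the cost of paying for it around the clock.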
I am having a problem and I need your help.
I am working with Play Framework v1.2.4 in Java, and my server is deployed on Heroku.
Everything works fine and I can access my databases, but I run into trouble when I do several saves to the database in a row.
I have a method that stores data in the database several times and then returns a notification to a mobile phone. My problem is that the notification arrives before the database has finished saving the data: when it arrives, I request the updated data from the server, and it returns the data without the last update. If I try again a few seconds later, the data shows correctly, so I think there is a timing problem.
The idea would be that the server sends the notification only after the database has finished saving the data.
I don't know if this is caused by my using the free tier of Heroku, but I want to be sure before purchasing a paid plan.
In general, all requests to cloud databases are slower than the same queries running on your local machine. Even a simple query that takes 0.0001 s on your computer can take 0.5 s in the cloud. The reason is simple: cloud providers use shared databases plus (geo) replication, which just cannot be compared to a database accessed by a single program on the same machine.
Also keep in mind that the free Heroku DB plans don't offer ANY database cache, which means that every query is fetched from the cloud directly.
As we don't know your application, it's hard to say where the bottleneck is, but you have at least three ways to attack the problem. They are not alternatives; you will probably need to use (or at least evaluate) all of them.
Risk a basic paid plan and see how things change; maybe it will be good enough for you, maybe not.
Redesign your application to make fewer queries. For example, instead of sending 10 queries to select 10 different rows, send one query that selects all 10 records at once.
Use Play's cache API to avoid selecting the same set of data again and again. For example, if you have categories that change rarely but you need the category tree for every article, you don't need to fetch the categories from the DB on every request; instead you can keep the list of categories in the cache, so you need only one request to fetch the article's content (which can be cached for a short time as well...)
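The batching idea can be sketched with sqlite3 standing in for the remote database (table and column names are made up for illustration; against a cloud DB each query costs a network round trip, so the difference is much larger than locally):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rows (id INTEGER PRIMARY KEY, value TEXT)")
conn.executemany("INSERT INTO rows VALUES (?, ?)",
                 [(i, "v%d" % i) for i in range(1, 11)])

ids = list(range(1, 11))

# Slow: one round trip per row -- 10 network hops against a cloud DB.
slow = [conn.execute("SELECT value FROM rows WHERE id = ?", (i,)).fetchone()[0]
        for i in ids]

# Fast: a single query fetching all 10 rows in one round trip.
placeholders = ",".join("?" * len(ids))
fast = [r[0] for r in conn.execute(
    "SELECT value FROM rows WHERE id IN (%s) ORDER BY id" % placeholders, ids)]

assert slow == fast
print(fast[:3])  # ['v1', 'v2', 'v3']
```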
I'm building a GAE app that requires a cryptographic key to operate. I would like to avoid storing the key in code or in a persistent datastore, and instead upload the key whenever I start my app so that it will only reside in memory for the duration of the app's lifetime (from the time I upload the key until no instances are running.)
I understand that this is possible to do with a resident backend, but that seems too expensive (the cheapest backend is currently $58/month) just to keep one value in memory and serve it to other instances on demand.
Note that I'm not looking for a general robust shared-memory solution, just one value that is basically written once and read many times. Thanks.
I don't think that this can work the way you hope. The sources of data in GAE:
1. Files deployed with your app (war or whatever).
2. Per-instance memory (front-end or back-end).
3. Memcache.
4. Datastore (or Cloud SQL now, I suppose).
5. Blobstore.
6. Information retrieved via HTTP requests (i.e. store it somewhere else).
1 and 4 are out, as per your question. 2 doesn't work by itself because the starting and stopping of instances is out of your control (it wouldn't scale otherwise), and persistent instances are expensive. 3 doesn't work by itself because Memcache can be cleared at any time. 5 is really no different from the datastore, as it is permanently stored on Google's servers. Maybe you could try 6 (store it somewhere else) and retrieve it into per-instance memory during instance startup. But I suspect that is no better security-wise (and, for that matter, doesn't match what you said you wanted).
It seems that a Memcache plus local-memory solution might work if you:
have your server instances clear the memcached key on exit, and
have existing server instances write/refresh the key regularly (for example, on every request).
That way the key will likely be there as long as an instance is operational and most likely not be there when starting up cold.
The same mechanism could also be used to propagate a new key and/or cycle server instances in the event of a key change.
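A rough sketch of that refresh mechanism, with a dict plus expiry timestamps standing in for Memcache (on App Engine the `memcache` API's `time=` expiry would play this role; all names here are made up):

```python
import time

cache = {}  # stand-in for Memcache: key -> (value, expiry_timestamp)
TTL = 60.0  # key lifetime; each request pushes the expiry forward

def refresh_key(name, value):
    """Called on every request: re-write the key with a fresh expiry."""
    cache[name] = (value, time.time() + TTL)

def read_key(name):
    """Return the key only if a live instance refreshed it recently."""
    entry = cache.get(name)
    if entry is None or entry[1] < time.time():
        return None  # cold start: the operator must re-upload the key
    return entry[0]

refresh_key("crypto-key", b"s3cret")
assert read_key("crypto-key") == b"s3cret"
```

As long as at least one instance is serving traffic the key keeps getting rewritten; after all instances stop, it expires (or is cleared on exit), which matches the requirement that the key live only for the app's lifetime.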
I realize that all my data is gone when I log in... KEYS * shows nothing.
Luckily, I'm doing this in dev server.
What am I supposed to do if this happens in the future on production?
Am I supposed to back it up every second?
You can find a number of answers/options here:
http://redis.io/topics/persistence
From what I could gather, you should:
Configure your server instance to persist its data to a file (an RDB snapshot) every 5 minutes or so. That way you lose at most a few minutes of data if the server goes down.
Configure your server instance to write an AOF (append-only file) redo log. You have various fsync options to favor either durability or performance.
Add at least one additional server and use that for replication. That way you will only ever lose any data if both/all of the servers go down simultaneously.
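In redis.conf those three options look roughly like this (the values and the replica address are illustrative, not recommendations):

```conf
# RDB snapshots: dump to disk if at least 1 key changed in 300 s
save 300 1

# AOF redo log: replayed on restart; fsync once per second trades
# a little durability for performance
appendonly yes
appendfsync everysec

# On a second server, replicate from the primary
# (older Redis versions use "slaveof" instead of "replicaof")
replicaof 192.0.2.10 6379
```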
Redis is not the most durable way to store your data. Even with journaling (AOF) your data is written to disk, but you could still lose some of it in the event of a crash.
Are you sure you have picked the right solution for your service? It sounds like you may need something other than Redis.
See also: Is redis a durable datastore?