I would like to implement a chat system as part of a game I am developing on App Engine. To implement this, I would like to use WebSockets and have clients connect to each other through a hub, in this case an instance on GCE. Assuming this game needed to scale to multiple instances on GCE, how would this work? If the load balancer directed client 1's request to instance A, and another client (2) came in and was directed to instance B, but those clients wanted to chat with each other, they would each be connected to different hubs and would be unable to reach each other. How would this be set up to work at scale? Would I implement it using queues, where each instance listens on that queue, and if so, how would I do that?
Google Play Game Services offers exactly the functionality that you want, but only for Android and iOS clients, so this option may not be compatible with your game's tech design.
In general you're reasoning correctly. Messages from clients who want to talk to each other will, most of the time, hit different server instances. What you want is for the instances to handle the communication between users. Pub/sub (the publish-subscribe pattern) is a very suitable pattern in this scenario. Roughly:
whenever there's a message directed to client X, a message is published on channel X;
whenever client X creates a session, the instance handling it subscribes to channel X.
You can use one of many existing solutions to get started. It's very easy to set this up using Redis (a rough sketch follows below). If you need something more low-level and more flexible, check out ZeroMQ.
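To give a rough idea, here's a minimal sketch of that pattern in Python with redis-py. It assumes each hub instance keeps its own clients' WebSocket connections in a local dict; the names (local_sockets, the chat:<id> channels, the send() call on the socket object) are placeholders I made up, not a particular framework's API.

    # Minimal sketch of routing chat messages between hub instances via Redis pub/sub.
    import redis

    r = redis.Redis(host="localhost", port=6379)
    local_sockets = {}  # client_id -> WebSocket connection held by THIS instance

    def on_client_connect(client_id, websocket):
        """Called when a client opens a WebSocket to this instance."""
        local_sockets[client_id] = websocket
        # Subscribe to this client's personal channel so messages published
        # by any other instance reach us.
        pubsub = r.pubsub()
        pubsub.subscribe(f"chat:{client_id}")
        return pubsub

    def send_message(recipient_id, payload):
        """Called by whichever instance received the sender's message."""
        # Publish to the recipient's channel; the instance holding the
        # recipient's connection will pick it up.
        r.publish(f"chat:{recipient_id}", payload)

    def pump(pubsub, client_id):
        """Run per subscription: forward published messages to the local socket."""
        for item in pubsub.listen():
            if item["type"] == "message":
                local_sockets[client_id].send(item["data"])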
You can expect a single instance of either solution to be able to handle thousands of QPS.
Unfortunately I don't have any experience with scaling either of these solutions, so I can't offer you any practical advice as to the limits of their scalability.
PS. There are also other topics you may want to explore, such as message persistence and failure recovery, which I didn't address here at all.
I haven't tried to implement this yet, but I'll probably have to soon. I think it should be fairly simple to handle it yourself.
You have server 1 with a list of clients and server 2 with another list of clients, so if a client wants to send data to another client that might be on server 2, you have to:
Look up whether the receiver is on the current server; if it is, you just send the data (the standard case).
Otherwise, send the same data to all the other servers you have, so they can check their lists for that particular client (or clients) and send the data to them (rough sketch below).
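A rough sketch of that idea in Python, under the assumption that each server keeps a local_clients dict of its own connections and knows its peers' addresses; the /forward endpoint, the plain-HTTP transport, and the connection objects' send() method are made up for the example.

    # Check locally first, otherwise fan the data out to the other servers.
    import requests  # transport between servers is an assumption (plain HTTP here)

    local_clients = {}                        # client_id -> connection on THIS server
    peer_servers = ["http://10.0.0.2:8000", "http://10.0.0.3:8000"]

    def send(client_id, data):
        conn = local_clients.get(client_id)
        if conn is not None:
            conn.send(data)                   # receiver is on the current server
            return
        for peer in peer_servers:             # otherwise ask every other server
            requests.post(f"{peer}/forward", json={"to": client_id, "data": data})

    def on_forward(client_id, data):
        # Each peer runs this for forwarded messages and repeats the local lookup.
        conn = local_clients.get(client_id)
        if conn is not None:
            conn.send(data)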
I'm starting an app, and am facing some big doubts.
Relevant info on the app:
Users can chat (p2p, or via server)
Users queue before chatting (e.g. Omegle, Chatroulette, Wakie, etc.)
Basically, these are the client-server operations of the app. I was searching for ways to implement this without reinventing the wheel, and so I found Pusher and QuickBlox.
Pusher: This is where I have doubts. I need one server to send events to clients, another server to listen to client events via webhooks, and yet another server to handle authentication. Though I suppose everything could be on the same server; I haven't tried this.
QuickBlox: to use for chatting. It looks good enough; no doubts here for now.
Aside from chatting, the only network operation is the queueing, which should be very simple, so I am left wondering whether this is the proper course.
Since these decisions have a major impact on the project (I shiver at the thought of having to roll back), I thought I would ask for some opinions here.
This is my opinion about Pusher, since Pusher is the only one of these I know.
I'm in the midst of writing my app using Pusher for 1-to-1 chat. (And you can have everything on the same server.)
Implementation is easy, including client events, authentication, etc., and you do not need to bother with maintaining the infrastructure.
The problem I've encountered over the course of my project is cost. For just sending messages between two people, every message uses up at least two message credits (one to the channel, one to the subscriber). This is fine, but if you want to build features like read status, delivery status, and a 'user is typing' indicator, the number of messages adds up very quickly when every such simple client event costs two credits.
Hence, if you have cost considerations like I do, what I did was use Pusher for the more critical features, like sending messages in a 1-to-1 chat and checking whether users are online. On the other side, I am planning to use Slanger or a similar self-hosted Pusher-compatible solution to implement other features like delivery status, read status, and 'user is typing' status, which I feel are good to have but not as mission-critical as sending/receiving the messages themselves.
I've read a lot on pusher.com, and their pricing is quite reasonable compared to building and managing the infrastructure myself, and their service has been reliable so far. So it depends on how mission-critical your app is.
https://ringpop.readthedocs.org/en/latest/
To my understanding, the sharding can be implemented in some library routines, and the application programs are just linked with the library. If the library is an RPC client, the sharding can be queried from the server side in real time. So even if there is a new partition, it is transparent to the applications.
Ringpop is an application-layer sharding strategy based on the SWIM membership protocol. I wonder what the major advantage is at the application layer?
What would the alternative be, say, sharding at the system layer?
Thanks!
Maybe this reply is a bit late, but someone may still need this information.
Ringpop introduces the idea of sharding the application rather than the data. It works more or less like application-level middleware, but with the advantage that it offers an easy way to build scalable and fault-tolerant applications.
What Ringpop shards are the requests coming from clients to a specific service. This is one of its major advantages (there are more, keep reading).
In a traditional SOA architecture, all requests for a specific service go to a single system that dispatches them among the workers for load balancing. These workers do not know each other; they are independent entities and cannot communicate with one another. They do their job and send back a reply.
Ringpop is the opposite: the workers know each other and can discover new ones, regularly talk among themselves to check each other's health, and spread this information to the other workers.
How does Ringpop shard the requests?
It uses the concept of keyspaces. A keyspace is just a range of numbers; you are free to choose whatever range you like, but the obvious choice is to hash the IDs of the objects in the application and use the hash function's codomain as the range.
A keyspace can be imagined as a hash "ring", but in practice it is just a range of 4- or 8-byte integers.
A worker, i.e. a node that can serve a request for a specific service, is 'virtually' placed on this ring, meaning it owns a contiguous portion of the ring. In practice, it is assigned a sub-range. A worker is in charge of handling all the requests belonging to its sub-range. Handling a request means one of two things:
- process the request and provide a response, or
- forward the request to the worker that actually owns it
Every application is built with this behaviour embedded: it contains the logic to handle a request, or to just forward it to the worker that can handle it. The forwarding mechanism is nothing more than a remote procedure call, which is actually made using TChannel, Uber's high-performance protocol for general RPC. A toy sketch of the ownership and forwarding logic follows below.
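To make the keyspace/ownership idea concrete, here is a toy sketch in Python (not Ringpop's actual code): object IDs are hashed onto a 32-bit ring, each worker owns the sub-range up to its position, and a request is either processed locally or handed to the owner. The process_locally() and forward() functions are stand-ins; in Ringpop the forwarding hop happens over TChannel.

    import bisect
    import hashlib

    RING_SIZE = 2 ** 32                      # the keyspace: a 4-byte integer range

    def key_for(object_id):
        digest = hashlib.md5(object_id.encode()).digest()
        return int.from_bytes(digest[:4], "big") % RING_SIZE

    class Ring:
        def __init__(self, workers):
            # Each worker sits at a point on the ring and owns the
            # contiguous sub-range that ends at its point.
            self.points = sorted((key_for(w), w) for w in workers)

        def owner(self, object_id):
            k = key_for(object_id)
            keys = [p for p, _ in self.points]
            i = bisect.bisect_left(keys, k) % len(self.points)
            return self.points[i][1]

    ring = Ring(["worker-a:3000", "worker-b:3000", "worker-c:3000"])

    def process_locally(request):
        return "handled locally: " + request

    def forward(target, request):
        # Stand-in for the remote procedure call (TChannel in Ringpop).
        return "forwarded to " + target + ": " + request

    def handle_request(me, object_id, request):
        # e.g. handle_request("worker-a:3000", "user-42", "get profile")
        target = ring.owner(object_id)
        if target == me:
            return process_locally(request)   # this worker owns the key
        return forward(target, request)       # hand it to the real owner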
If you think about this, you can see that Ringpop actually offers a very nice property that traditional SOA architectures do not have: clients don't need to know or care about the correct instance that can serve their request. They can just send a request to any node in Ringpop, and the receiving worker will either serve it or forward it to the right owner.
Ringpop has another interesting feature: new workers can dynamically enter the ring, and old workers can leave the ring (e.g. because of a crash or just a shutdown) without any service interruption.
Ringpop implements a membership protocol based on SWIM.
It enables workers to discover one another and to exclude a broken worker from the ring using a TCP-based gossip protocol. When a new worker is discovered by another worker, a new connection is established between them. Every worker tracks the status of the other workers by sending ping requests at regular intervals, and spreads the status information to the other workers when a ping does not get a reply (membership updates are piggybacked on pings, gossip style). A simplified sketch of this loop follows below.
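As a very rough illustration of that failure-detection loop (direct pings at a regular interval plus gossiping the result), here is a heavily simplified sketch in Python. Real SWIM also adds indirect probes through other members before declaring a node faulty, and the ping() and gossip() functions here are just stand-ins.

    import random
    import time

    members = {"10.0.0.2": "alive", "10.0.0.3": "alive", "10.0.0.4": "alive"}

    def ping(addr):
        # Stand-in for a real network ping to another worker.
        return random.random() > 0.1

    def gossip(update):
        # Stand-in: piggyback this membership update on subsequent pings.
        print("gossip:", update)

    def protocol_period():
        target = random.choice(list(members))
        if ping(target):
            members[target] = "alive"
        else:
            members[target] = "suspect"        # real SWIM tries indirect probes first
            gossip({target: "suspect"})        # spread the news to the other workers

    if __name__ == "__main__":
        while True:
            protocol_period()
            time.sleep(1)                      # the regular interval mentioned above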
These three elements (consistent hashing, request forwarding, and a membership protocol) make Ringpop an interesting solution for achieving scalability and fault tolerance at the application layer while keeping complexity and operational overhead to a minimum.
We want to build an application (C#/.NET) for the following scenario:
internal "alert System". Users should be informed about it-system outage, planned downtime for Services and so on.
only one-way: a central service will push messages to users
we also need the ability to enable/disable a message; for example:
the message "there are problems with the mail system" should be removed from every computer after the problem is solved
we want to schedule messages for planned maintenance
about 1000 Windows clients; we also want to "group" these clients, so we can control which clients will get a message
My first thought was writing a small client application that queries a central database every X seconds for new and existing messages.
Maybe somebody has already worked on a similar project?
Is a client that polls the database the way to go? Or would it be better to use another technology, like a WCF service?
Thanks for your help
Marc
Sounds like you need an enhanced version of push notifications.
I'd suggest using push for all the messaging; it's delivered faster and I find it more reliable. Simply have the client connect to a message server and keep the connection open. Whenever a message is supposed to be displayed on a client, have the server push it through the connection (that's where the name comes from).
To group and manage the clients you could use a database; it's probably the best way to go. But the server needs to handle all the open connections, and databases can only store data, not live objects representing a connection, so the server software needs to manage those in a different way (in memory).
My suggestion: whenever the server receives an incoming client connection, it accepts it and queries the client computer for an ID number that will also be used to find that client's information in the database.
Then it adds an entry to a dictionary, using that ID as the key and the connection as the value.
This way, when it's time to send a message to a given group, you can do it in two ways:
1) You can load from the database the IDs that belong to that group, and then send the message to them. You will have to check whether each ID exists among the dictionary's keys, because it is possible that a given client is not currently connected.
2) You can iterate over the dictionary's keys, check which group each ID belongs to, and if it is the desired group, send the message.
If you're dealing with a large number of clients, I suggest you use method 1.
To disable/remove a message from the clients' computers, simply have the server send a special command message that the client software interprets as "remove that message". To make this possible, every non-command message must have a unique ID, so that a command message can tell the client software which message it applies to. A rough sketch of this registry and command approach is below.
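Since your app is C#, a Dictionary keyed by client ID (holding the TCP connection or WCF callback channel) would play the role of the registry; here is a quick language-neutral sketch in Python of the registry, the group send (method 1), and the remove command. The connection objects, the message format, and the load_group_ids() query are assumptions, not a specific framework's API.

    import json

    connections = {}                 # client_id -> open connection for that client

    def on_client_connected(client_id, conn):
        # Called after the client identifies itself with its ID number.
        connections[client_id] = conn

    def load_group_ids(group_id):
        # Stand-in for the database query returning the client IDs in a group.
        return []

    def send_to_group(group_id, message_id, text):
        # Method 1: ask the database for the group's IDs, then push to
        # whichever of them currently has an open connection.
        payload = json.dumps({"type": "message", "id": message_id, "text": text})
        for client_id in load_group_ids(group_id):
            conn = connections.get(client_id)   # the client may be offline
            if conn is not None:
                conn.send(payload)

    def remove_message(group_id, message_id):
        # The special command message: clients interpret it as
        # "remove the message with this unique ID".
        payload = json.dumps({"type": "remove", "id": message_id})
        for client_id in load_group_ids(group_id):
            conn = connections.get(client_id)
            if conn is not None:
                conn.send(payload)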
Your project sounds very interesting.
I would be glad to help you by writing a library you could use, or just help you figure it out on your own if you prefer. (Free of charge, just for the experience).
I'm trying to figure out if PollingDuplex is the right way to go for my problem.
Here is my scenario:
1. 3rd party application sends a UDP packet with a client's IP address to a server app.
2. The server app needs to then notify the specified client and send along some data.
The client is a Silverlight application.
I've been looking at some guides and sample code (http://petermcg.wordpress.com/2008/09/03/silverlight-polling-duplex-part-1-architecture/) but I don't understand how clients are identified on the server when using PollingDuplex. I understand that the clients register with the server and continually poll for messages. How would I make sure that only the right client gets the message designated for it? In other words, messages on the server should not be broadcast to all polling clients but sent only to one specific client.
Any help is much appreciated.
Whether you're using Net.TCP or HttpDuplexBinding, clients can be identified using OperationContext.Current.Channel.SessionId. And more specifically, you can grab the actual channel that WCF uses to talk to them using OperationContext.Current.GetCallbackChannel<IMyCustomServiceInterface>(). You can store those in memory, perhaps associated with some other identifier passed up from the client, and when you need to communicate with the client in question (e.g., to pass them the data from the UDP packet), you call the appropriate method on that specific stored channel; and the client will get notified.
I should note that while I don't particularly recommend HttpDuplexBinding (it has quirks and stability and performance issues), it should work for what you're doing, and in exactly the same way as Net.TCP. Although the clients technically do "poll" the server, that's hidden from you. All you know on the server is that you're calling a method on a particular channel; the underlying binding code takes care of making sure that the right client gets notified.
Polling duplex is actually an entirely client-side implementation that exists only for Silverlight (there's no regular .NET Framework version of it, except a project on CodePlex that Microsoft's own internal consulting services developed for a high-profile client of theirs). There's nothing at all special about it on the server side.
It's not really meant to be used in production, by Microsoft's own admission (we have a Microsoft contact at our company who admitted this to us candidly). It's not very robust or well implemented and can and will DoS your server under any kind of volume:
http://forums.silverlight.net/p/89970/239380.aspx
You're better off rolling your own client-side polling mechanism, or (better and more scalable) using Net.TCP with sessions in Silverlight 4, which provides true duplex support (because the connection is not stateless and thus supports true push notifications):
http://www.silverlightshow.net/items/WCF-NET.TCP-Protocol-in-Silverlight-4.aspx.
This is more out of curiosity and "for future reference" than anything, but how is Comet implemented on the database-side? I know most implementations use long-lived HTTP requests to "wait" until data is available, but how is this done on the server-side? How does the web server know when new data is available? Does it constantly poll the database?
What DB are you using? If it supports triggers, which many RDBMSs do in some shape or form, then you could have the trigger fire an event that actually tells the HTTP request to send out the appropriate response.
Triggers remove the need to poll... polling is generally not the best idea.
PostgreSQL seems to have pretty good support (even PL/Python).
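To make the trigger idea concrete: PostgreSQL's LISTEN/NOTIFY lets the web server block until the database signals it instead of polling. Here is a rough sketch using psycopg2, where the channel name new_messages and the connection string are assumptions; on the database side a trigger (or the code doing the INSERT) would issue NOTIFY new_messages.

    import select
    import psycopg2
    import psycopg2.extensions

    conn = psycopg2.connect("dbname=app user=app")
    conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
    cur = conn.cursor()
    cur.execute("LISTEN new_messages;")        # the trigger issues NOTIFY new_messages

    while True:
        # Block (up to 60 s) until the database has something to say.
        if select.select([conn], [], [], 60) == ([], [], []):
            continue                           # timeout, wait again
        conn.poll()
        while conn.notifies:
            notify = conn.notifies.pop(0)
            # Hand the payload to whatever is holding the long-lived
            # HTTP request so it can send the response.
            print("notified:", notify.payload)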
This is very much application-dependent. The most likely implementation is some sort of messaging system.
Most likely, your server-side code will consist of quite a few parts:
a few app servers that handle incoming requests,
a (separate) comet server that deals with all the open connections to clients,
the database, and
some sort of messaging infrastructure
The last one, the messaging infrastructure, is really the key. It provides a way for the app servers to talk to the comet server: when a request comes in, the app server puts a message into the message queue telling the comet server to notify the correct client(s).
How messaging is implemented is, again, very much application-dependent. A very simple implementation would just use a database table called messages and poll that (a bare-bones sketch follows below).
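A bare-bones version of that messages-table approach might look like the sketch below; it uses sqlite3 only to keep the example self-contained, and the table layout (id, client_id, body, delivered) and the push() callback are assumptions.

    import sqlite3
    import time

    db = sqlite3.connect("comet.db")
    db.execute("""CREATE TABLE IF NOT EXISTS messages
                  (id INTEGER PRIMARY KEY, client_id TEXT, body TEXT,
                   delivered INTEGER DEFAULT 0)""")

    def notify(client_id, body):
        # App server side: queue a message for the comet server.
        db.execute("INSERT INTO messages (client_id, body) VALUES (?, ?)",
                   (client_id, body))
        db.commit()

    def comet_loop(push):
        # Comet server side: push(client_id, body) writes to the open connection.
        while True:
            rows = db.execute(
                "SELECT id, client_id, body FROM messages WHERE delivered = 0"
            ).fetchall()
            for msg_id, client_id, body in rows:
                push(client_id, body)
                db.execute("UPDATE messages SET delivered = 1 WHERE id = ?", (msg_id,))
            db.commit()
            time.sleep(1)                      # the polling interval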
But depending on the stack you plan on using, there should be more sophisticated tools available.
In Rails I'm using Juggernaut, which simply listens on a network port. Whenever there is data to send, the Rails application server opens a connection to this Juggernaut push server and tells it what to send to the clients.