Mobile app client-server operations: Pusher or just code it?

I'm starting an app, and am facing a big decision.
Relevant info on the app:
Users can chat (p2p, or via the server)
Users queue before chatting (e.g. Omegle, Chatroulette, Wakie, etc.)
Basically, those are the client-server operations of the app. I was
searching for ways to implement this without reinventing the wheel, and
so I found Pusher and Quickblox.
Pusher: This is where I have doubts. I need one server to send events to clients, another server to listen to client events via webhook, and yet another server to handle authentication. Though I suppose everything can be on the same server; I haven't tried this.
Quickblox: to use for chatting; it looks good enough, so no doubts here for now.
Aside from chatting, the only network operation is the queueing, which should be very simple, so I am left wondering whether this is the proper course.
Since these decisions have a major impact on the project (I shiver at the thought of having to roll back), I thought I would ask for some opinions here.

This is my opinion about Pusher, since Pusher is the only one I know.
I'm in the midst of writing my app using Pusher now for 1-to-1 chat. (And you can have everything on the same server.)
Implementation is easy, including client events, authentication, etc., and you do not need to bother with maintaining the infrastructure.
The problem I've encountered over the course of my project is cost. For just sending messages between 2 people, every message you send uses a minimum of 2 message credits (one to the channel, one to the subscriber). This is fine, but if you want to build features like read status, delivery status, and 'user is typing' status, the number of messages adds up very quickly when each of these simple client events costs 2 message credits.
Hence, if you have cost considerations like I do: what I did was to use Pusher for the more critical features, like sending messages in a 1-1 chat and checking whether users are online. On the other side, I am planning to use Slanger or other similar self-hosted Pusher-compatible solutions to implement other features like delivery status, read status, and 'user is typing' status, which I feel are good to have but not as mission-critical as sending/receiving the messages themselves.
I've read a lot on pusher.com, and their pricing is quite reasonable compared to building and managing the infrastructure myself, and their service has been reliable so far. So it depends on how mission-critical your app is.
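To get a feel for how the credits add up, here's a rough back-of-envelope estimate. This is a sketch: the per-feature event counts are assumptions, and real Pusher billing may differ in the details.

```python
# Rough estimate of message credits for a 1-to-1 chat session.
# Assumption: every client event costs 2 credits (one publish, one delivery).
CREDITS_PER_EVENT = 2

def session_credits(messages, typing_events_per_message=3, with_receipts=True):
    """Estimate credits for one chat session.

    messages: chat messages actually sent
    typing_events_per_message: assumed 'user is typing' events per message
    with_receipts: whether delivery + read receipts are sent per message
    """
    events = messages                              # the messages themselves
    events += messages * typing_events_per_message # typing indicators
    if with_receipts:
        events += messages * 2                     # one delivery + one read event each
    return events * CREDITS_PER_EVENT

# 100 messages with no extras vs. with typing/receipt features:
print(session_credits(100, typing_events_per_message=0, with_receipts=False))  # 200
print(session_credits(100))  # 1200 -- six times the credits
```

The point: the "nice to have" status events, not the messages themselves, dominate the bill, which is why splitting them onto a self-hosted server can pay off.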

Related

Sharing data to client whenever API has results available?

I have the following scenario; can anyone advise what the best approach is:
Front End —> Rest API —> SOAP API (Legacy applications)
The Legacy applications behave unpredictably; sometimes it's very slow, sometimes fast.
The following is what needs to be achieved:
- As and when data is available to Rest API, the results should be made available to the client
- Whatever info is available, show the intermediate results.
Can anyone share insights into how to design this system?
You have several options to do that:
polling from the UI - will require some changes to the API: the initial call returns a URL where results will be available, and the UI checks it periodically
websockets - will also require changing the API
server-sent events (SSE) - essentially keeping the HTTP connection open and pushing new results as they become available - this sounds closest to what you want
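As an illustration of the SSE option: the wire format is just text frames over a kept-open HTTP response. A minimal sketch of the framing (the `event:` and `data:` field names come from the SSE spec; the payloads and event names here are made up):

```python
import json

def sse_frame(data, event=None):
    """Format one server-sent event frame as the browser's EventSource expects."""
    lines = []
    if event:
        lines.append(f"event: {event}")
    # Each line of the payload gets its own "data:" field.
    for line in json.dumps(data).splitlines():
        lines.append(f"data: {line}")
    return "\n".join(lines) + "\n\n"   # a blank line terminates the frame

# As partial results trickle in from the slow SOAP backend, stream them out:
print(sse_frame({"rows": 10, "done": False}, event="partial"), end="")
print(sse_frame({"rows": 42, "done": True}, event="complete"), end="")
```

The REST layer would write these frames to the response as each chunk of legacy data arrives, which gives the client the intermediate results you described.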
You want some sort of event-based API that the API consumers can subscribe to.
Event-driven architectures come in many forms - from event notification ('hey, I have new data, come and get it') to message/payload delivery; from full-on publish/subscribe solutions that allow consumers to subscribe to one or more "topics", with event back-up and replay functionality, down to relatively basic ones.
If you don't want a full-on eventing platform, you could look at WebHooks.
A great way to get started is to familiarize yourself with some event-based architecture patterns. Chris Richardson's website has a lot of great info on such architectures and would be well worth a look.
In terms of the defining the event API, if you're familiar with OpenAPI, there's AsyncAPI which is the async equivalent.
In terms of solutions, there are a few well-known platforms, including open source ones. The big cloud providers (Azure, GCP and AWS) also have async / event-based services you can use.
For more background there's this Wikipedia page (which I have not read - so I can't speak for its quality, but it does look detailed).
Update: Webhooks
Webhooks are a bit like an iceberg: there's more to them than might appear at first glance. A full-on eventing solution will have a very steep learning curve, but it will solve problems that you'll otherwise have to address separately (writing your own code, etc.). Two big areas to think about:
Consumer management. How will you onboard new consumers? Is it a small handful of internal systems / URLs that you can manage through some basic config, manually? Or is it external facing for public third parties? If it's the latter, will you need to provide auto-provisioning through a secure developer portal or get them to email/submit details for manual set-up at your end?
Error handling & missed events. Let's say you have an event, you call the subscribing webhook - but there's no response (or an error). What do you do? Do you retry? If so, how often, for how long? Once the consumer is back up what do you do - did you save the old events to replay? How many events? How do you track who has received what?
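To make those error-handling questions concrete, here's a sketch of a retry-with-backoff delivery loop. The `deliver` callable and the retry parameters are hypothetical; a real system would also need to persist unsent events and track per-consumer state, as the questions above suggest.

```python
import time

def deliver_with_retry(deliver, event, max_attempts=5, base_delay=1.0):
    """Try to POST an event to a subscriber's webhook, backing off on failure.

    deliver: callable that raises on HTTP error / no response
    Returns True on success; False means the event must be saved for replay.
    """
    for attempt in range(max_attempts):
        try:
            deliver(event)
            return True
        except Exception:
            # Exponential backoff: 1s, 2s, 4s, ... between attempts.
            time.sleep(base_delay * 2 ** attempt)
    return False  # caller should persist the event and flag the consumer

# Example with a flaky fake endpoint that succeeds on the third call:
calls = {"n": 0}
def flaky(event):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("subscriber down")

print(deliver_with_retry(flaky, {"id": 1}, base_delay=0.01))  # True
```

Even this small sketch leaves open the hard parts named above: how long to keep retrying, where failed events are stored, and how replay is triggered once the consumer recovers.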
Polling
#Arnon is right to mention polling as an approach, but I'd only do it if you have no other choice, or if you have a very small number of internal systems doing the polling, i.e. it incurs low load and you control both "ends" of the polling; in such a scenario it's a valid approach.
But if it's for an external API, you'll need to implement throttling to protect your systems, as you'll have limited control over who's calling you and how much. Caching will be another obvious topic to explore in a polling approach.

Storing Chat Messages w/ Redis + Another Database

I've recently begun working on a realtime chat application as a side project, mainly for experience. The clients will initially only be iOS/Android applications.
For the backend, I'm utilizing NodeJs for two reasons:
I want to use websockets
And I've never worked on a project with Node before, so it's good experience
My main question, as the question indicates is regarding the architecture of how I will be storing chat messages on the server.
By reading other posts regarding this topic and researching the internetz I've come up with the following:
Client sends message to server to send to another user through sockets
Message gets put into Redis(stores last 50 messages for each chat)
Message also gets added to a permanent store database(Relational or NoSQL) asynchronously
At a later point in time if the user needs to for any reason retrieve the messages for a chat again, he/she gets the 50 messages from Redis(fast) and if he/she scrolls up to see older messages, query the permanent store for next batch of older messages.
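The Redis side of that flow maps onto LPUSH + LTRIM (cap the list at the last 50) and LRANGE (read the recent window). Below is a sketch of the logic using an in-memory stand-in so it's self-contained; in production the `RecentCache` calls would be `LPUSH`/`LTRIM`/`LRANGE` against Redis, and `archive` would be the asynchronous write to the permanent store. The class and method names are made up for illustration.

```python
from collections import defaultdict, deque

RECENT_LIMIT = 50

class RecentCache:
    """In-memory stand-in for Redis lists capped at the last 50 messages."""
    def __init__(self):
        self.chats = defaultdict(lambda: deque(maxlen=RECENT_LIMIT))
        self.archive = defaultdict(list)   # stand-in for the permanent store

    def send(self, chat_id, message):
        self.chats[chat_id].appendleft(message)   # LPUSH + LTRIM in Redis
        self.archive[chat_id].append(message)     # async write in production

    def recent(self, chat_id):
        return list(self.chats[chat_id])          # LRANGE chat_id 0 49

    def older(self, chat_id, already_loaded, batch=50):
        # Scrolling up past the cached window falls back to the permanent store.
        history = self.archive[chat_id]
        end = max(0, len(history) - already_loaded)
        return history[max(0, end - batch):end]

cache = RecentCache()
for i in range(120):
    cache.send("room1", f"msg {i}")
print(len(cache.recent("room1")))   # 50 -- only the newest messages are cached
print(cache.recent("room1")[0])     # msg 119
```

The design choice worth noting is that the cache is purely a read optimization: every message still lands in the permanent store, so losing the Redis window costs you latency, not data.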
I'm in no way an expert on databases but this seemed to be a reasonable/scalable approach to me.
I'm looking for any advice you can give regarding reliability/scalability/anything else for this architecture. I'm happy to provide more info if needed.
Thanks

Scaling WebSockets on Google Compute Engine

I would like to implement a chat system as part of a game I am developing on App Engine. To implement this, I would like to use WebSockets, and have clients connect to each other through a hub, in this case an instance of GCE. Assuming this game needed to scale to multiple instances on GCE, how would this work? If I had a client 1, and the load balancer directed that request of client 1 to instance A, and another client (2) came in and was directed to instance B, but those clients wanted to chat with each other, they would each be connected to different hubs, and would be unable to reach each other. How would this be set up to work at scale? Would I implement it using queues, where each instance listens on that queue, and if so, how would I do that?
Google Play Game Services offers exactly the functionality that you want, but only for Android and iOS clients, so this option may not be compatible with your game's tech design.
In general, your reasoning is correct. Messages from clients who want to talk to each other will most of the time hit different server instances. What you want is for the instances to handle the communication between users. Pub/sub (the publish-subscribe pattern) is very well suited to this scenario. Roughly:
whenever there's a message directed to client X a message is published on the channel X,
whenever client X creates a session, instance handling it subscribes to channel X.
You can use one of many existing solutions for starters. It's very easy to set this up using Redis. If you need something more low-level and flexible, check out ZeroMQ.
You can expect single instance of either solution to be able to handle thousands of QPS.
Unfortunately, I don't have any experience with scaling either of these solutions, so I can't offer you any practical advice as to the limits of their scalability.
PS. There are also other topics you may want to explore, such as message persistence and failure recovery, which I didn't address here at all.
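A sketch of that per-client-channel routing, with an in-memory broker standing in for the shared pub/sub backend. In production, `subscribe`/`publish` would be `SUBSCRIBE`/`PUBLISH` against a Redis shared by all instances, so a publish on any instance reaches the subscriber wherever it is; the class and method names here are made up for illustration.

```python
from collections import defaultdict

class Broker:
    """In-memory stand-in for a shared pub/sub backend (e.g. Redis)."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, channel, callback):
        self.subscribers[channel].append(callback)

    def publish(self, channel, message):
        for cb in self.subscribers[channel]:
            cb(message)

class Instance:
    """One chat server instance behind the load balancer."""
    def __init__(self, broker):
        self.broker = broker
        self.inbox = defaultdict(list)   # stand-in for open websocket connections

    def on_connect(self, client_id):
        # Whenever client X opens a session here, subscribe to channel X.
        self.broker.subscribe(client_id,
                              lambda m: self.inbox[client_id].append(m))

    def send(self, to_client, message):
        # Whenever a message is directed to X, publish it on channel X --
        # whichever instance holds X's connection delivers it.
        self.broker.publish(to_client, message)

broker = Broker()
a, b = Instance(broker), Instance(broker)
a.on_connect("client1")         # client 1 landed on instance A
b.on_connect("client2")         # client 2 landed on instance B
a.send("client2", "hi from 1")  # crosses instances via the broker
print(b.inbox["client2"])       # ['hi from 1']
```

Note that neither instance needs to know where the other client is connected; the channel-per-client convention does the routing.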
I haven't tried to implement this yet, but I'll probably have to soon; I think it should be fairly simple to handle it yourself.
You have server 1 with a list of clients, and you have server 2 with another list of clients,
so if a client wants to send data to another client, which might be on server 2, you have to:
Look up whether the receiver is on the current server - if it is, you just send it (the standard case)
Otherwise, send the same data to all the other servers, so they can check their lists for that particular client (or clients) and deliver the data.

working with new channel creation limits

Google App Engine seems to have recently made a huge decrease in the free quota for channel creation, from 8640 to 100 per day. I would appreciate some suggestions for optimizing channel creation, for a hobby project where I am unwilling to use the paid plans.
It is specifically mentioned in the docs that there can be only one client per channel ID. It would help if there were a way around this, even if it were only for multiple clients on one computer (such as multiple tabs)
It occurred to me I might be able to simulate channel functionality by repeatedly sending XHR requests to the server to check for new messages, therefore bypassing limits. However, I fear this method might be too slow. Are there any existing libraries that work on this principle?
One Client per Channel
There's not an easy way around the one client per channel ID limitation, unfortunately. We actually allow two, but this is to handle the case where a user refreshes his page, not for actual fan-out.
That said, you could certainly implement your own workaround for this. One trick I've seen is to use cookies to communicate between browser tabs. Then you can elect one tab the "owner" of the channel and fan out data via cookies. See this question for info on how to implement the inter-tab communication: Javascript communication between browser tabs/windows
Polling vs. Channel
You could poll instead of using the Channel API if you're willing to accept some performance trade-offs. Channel API delivery speed is on the order of 100-200 ms; if you could accept a 500 ms average, then you could poll every second. Depending on the type of data you're sending, and how much you can fit in memcache, this might be a workable solution. My guess is your biggest problem is going to be instance-hours.
For example, if you have, say, 100 clients you'll be looking at 100 qps. You should experiment and see if you can serve 100 requests in a second for the data you need to serve without spinning up a second instance. If not, keep increasing your latency (i.e., decreasing your polling frequency) until one instance is able to serve your requests.
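The trade-off in that paragraph can be written down directly. This is just the arithmetic; the single-instance capacity is whatever your instance can actually serve, which you'd have to measure.

```python
def polling_load(clients, poll_interval_s):
    """Steady-state queries per second generated by polling clients."""
    return clients / poll_interval_s

def min_interval(clients, instance_capacity_qps):
    """Slowest polling interval that keeps you on a single instance."""
    return clients / instance_capacity_qps

print(polling_load(100, 1.0))  # 100 clients polling every second -> 100.0 qps
print(min_interval(100, 50))   # one instance serving 50 qps -> poll every 2.0 s
```

So each doubling of the polling interval halves the load, at the cost of roughly half the interval in added average latency.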
Hope that helps.

.NET CF mobile device application - best methodology to handle potential offline-ness?

I'm building a mobile application in VB.NET (compact framework), and I'm wondering what the best way to approach the potential offline interactions on the device. Basically, the devices have cellular and 802.11, but may still be offline (where there's poor reception, etc). A driver will scan boxes as they leave his truck, and I want to update the new location - immediately if there's network signal, or queued if it's offline and handled later. It made me think, though, about how to handle offline-ness in general.
Do I cache as much data to the device as I can so that I use it if it's offline - Essentially, each device would have a copy of the (relevant) production data on it? Or is it better to disable certain functionality when it's offline, so as to avoid the headache of synchronization later? I know this is a pretty specific question that depends on my app, but I'm curious to see if others have taken this route.
Do I build the application itself to act as though it's always offline, submitting everything to a local queue of sorts that's owned by a local class (essentially abstracting away the online/offline thing), and then have the class submit things to the server as it can? What about data lookups - how can those be handled in a "Semi-live" fashion?
Or should I have the application attempt to submit requests to the server directly, in real time, and handle it if the request itself fails? I can see a potential problem of making the user wait for the timeout, but is this the most reliable way to do it?
I'm not looking for a specific solution, but really just stories of how developers accomplish this with the smoothest user experience possible, with a link to a how-to or heres-what-to-consider or something like that. Thanks for your pointers on this!
We can't give you a definitive answer because there is no "right" answer that fits all usage scenarios. For example if you're using SQL Server on the back end and SQL CE locally, you could always set up merge replication and have the data engine handle all of this for you. That's pretty clean. Using the offline application block might solve it. Using store and forward might be an option.
You could store locally and then roll your own synchronization with a direct connection, web service, or WCF service used when a network is detected. You could use MSMQ for delivery.
What you have to think about is not what the "right" way is, but how your implementation will affect application usability. If you disable features due to lack of connectivity, is the app still usable? If you have stale data, is that a problem? Maybe some critical data needs to be transferred when you have GSM/GPRS (which typically isn't free) and more would be done when you have 802.11. Maybe you can run all day with lookup tables pulled down in the morning and upload only transactions, with the device tracking what changes it's made.
Basically it really depends on how it's used, the nature of the data, the importance of data transactions between fielded devices, the effect of data latency, and probably other factors I can't think of offhand.
So the first step is to determine how the app needs to be used, then determine the infrastructure and architecture to provide the connectivity and data access required.
I haven't used it myself, but have you looked into the "store and forward" capabilities of the CF? It may suit your needs. I believe it uses an Exchange mailbox as a message queue to send SOAP packets to and from the device.
The best way to approach this is to always work offline, then use message queues to handle sending changes to and from the device. When the driver marks something as delivered, for example, update the item as delivered in your local store and also place a message in an outgoing queue to tell the server it's been delivered. When the connection is up, send any queued items back to the server and get any messages that have been queued up from the server.
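That store-and-forward flow can be sketched as follows. This is an illustration of the pattern, not the Compact Framework's store-and-forward API; the `Outbox` name and the fake transport are made up.

```python
from collections import deque

class Outbox:
    """Work offline first: apply changes locally, queue them for the server."""
    def __init__(self, transport):
        self.transport = transport       # callable; raises when offline
        self.local = {}                  # local data store
        self.pending = deque()           # messages not yet acknowledged

    def mark_delivered(self, item_id):
        self.local[item_id] = "delivered"            # update local store first
        self.pending.append(("delivered", item_id))  # then queue for the server
        self.flush()                                 # opportunistic send

    def flush(self):
        # Drain the queue while the connection holds; stop on the first
        # failure so ordering is preserved and nothing is lost.
        while self.pending:
            try:
                self.transport(self.pending[0])
            except ConnectionError:
                return
            self.pending.popleft()

# Simulate a truck leaving coverage and coming back:
online = {"up": False}
sent = []
def transport(msg):
    if not online["up"]:
        raise ConnectionError("no signal")
    sent.append(msg)

box = Outbox(transport)
box.mark_delivered("box-17")   # offline: local store updated, message queued
print(box.local["box-17"], len(sent))
online["up"] = True
box.flush()                    # back in coverage: the queue drains
print(sent)
```

The key property is that the UI never blocks on the network: the local update succeeds immediately, and delivery to the server is a background concern.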
