I'm building a multiplayer card game mainly in ReactJS and want to share some state between users (players). I need each user to be able to know when it's their turn to play and see some things about other users like score and current number of cards in hand, etc. For example, when one player's turn ends, I need the next player's app to recognize it's their turn and provide the applicable options.
It seems WebSockets or something based on them would be a good option. I'm still learning React and the other pieces, so I'm trying to keep the number of technologies down for now. I'm considering rapid polling (every 200 ms, perhaps), either against an API or maybe by checking a file on the server or database entries.
Are there especially concerning performance issues or better options to get it working? I'm also learning about AWS and Docker and intend to host on AWS, if possible.
I recommend you use Firebase, a hosted database for sending and receiving data online, because React state is not shared between devices.
Instead of sharing state between players, you should explore having the players share a single instance of a game object on the server.
For example:
const game = {
  status: 'running',
  players: 2,
  p1: { state: 'waiting' },
  p2: { state: 'active' }
};
Keeping a common state on the server is O(n) in the number of players.
Exchanging or polling state directly between every pair of players is expected to be O(n^2) or more.
If you want to try a serverless approach, you can look at AWS Lambda and DynamoDB.
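To make the idea concrete, here's a minimal sketch (plain JavaScript; the field names and player count are illustrative, not prescribed) of a server-held game object with turn rotation and a per-player view:

```javascript
// Minimal sketch of a server-held game object. The server owns the single
// source of truth; clients only ever read the view built for them.
const game = {
  status: 'running',
  players: ['p1', 'p2', 'p3'],
  active: 0, // index into players: whose turn it is
  hands: { p1: 7, p2: 7, p3: 7 },
  scores: { p1: 0, p2: 0, p3: 0 },
};

// Advance the turn to the next player; called when the active player ends
// their turn. Returns the id of the newly active player.
function endTurn(g) {
  g.active = (g.active + 1) % g.players.length;
  return g.players[g.active];
}

// What one player is allowed to see: their own turn flag, plus public
// info (card count, score) about the other players.
function viewFor(g, playerId) {
  return {
    yourTurn: g.players[g.active] === playerId,
    others: g.players
      .filter((p) => p !== playerId)
      .map((p) => ({ id: p, cards: g.hands[p], score: g.scores[p] })),
  };
}
```

Whether clients poll this view or get it pushed over a socket, the turn logic itself never leaves the server, so no client can disagree about whose turn it is.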
WebSockets alone won't connect players directly to each other, as one party in a WebSocket connection always has to be a fixed server.
You could share the state peer-to-peer with a WebRTC connection, using the DataChannel it provides, but that's not an easy task.
(see for example: https://www.html5rocks.com/en/tutorials/webrtc/basics/#toc-rtcdatachannel)
The much more common option is to use a central data storage and either sync your state with that or just store it there in the first place.
You could use Firebase (see answer by David) or any other kind of database.
Firebase is particularly easy to set up, though.
If you have, need, or want an API backend anyway, you can put pretty much any kind of database layer behind it and talk REST or GraphQL with the API to share your state between players.
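If you go the REST route, the client side can be a small poll loop. A sketch, with the response shape and 200 ms interval as assumptions, and the fetch function injected so the logic stays framework-agnostic and testable without a network:

```javascript
// Poll a game-state endpoint until it reports that it's this player's turn.
// fetchState is injected, e.g. () => fetch('/api/game').then(r => r.json()),
// and is assumed to resolve to an object with an activePlayer field.
function waitForMyTurn(fetchState, playerId, intervalMs = 200) {
  return new Promise((resolve, reject) => {
    const tick = async () => {
      try {
        const state = await fetchState();
        if (state.activePlayer === playerId) {
          resolve(state); // our turn: the UI can now show the options
        } else {
          setTimeout(tick, intervalMs); // not yet: check again shortly
        }
      } catch (err) {
        reject(err); // surface network/API errors to the caller
      }
    };
    tick();
  });
}
```

At 200 ms per client this is chatty but workable for a handful of players; it's the kind of thing you'd later swap for a WebSocket push without touching the rest of the app.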
Recently I was asked to implement client-side failover for an app I wrote. Personally, all the failover/load-balancing schemes I can recall working with have been implemented server-side, through a middleman proxying server. The head of the server team, though, wants me to do something different, which has led me to a number of questions, and some frustration in not finding much in the way of discussion on the related techniques and frameworks. Having spent a little time thinking about the problem, I have a few concerns, and I was hoping folks here could help educate me :)...
So, a basic scheme where you might start with multiple server addresses in some form, which I would rotate through, tracking failures before giving up, etc., seems simple enough. He wants something a bit different. His desire is to set up our DNS servers to return multiple A records for our various web services/servers. He then wants my client to intelligently fail over between the various IP addresses when one fails. Similar in conception to how a lot of big web farms work. Again, simple in conception, but in the details I have concerns.
At this point, I think I should describe the environment I'm working in. The app in question is a mobile app running on Android/iOS. Internally (not that it's wildly important, other than for leveraging frameworks), the app is written as an Angular SPA built on top of Cordova (and thus is a hybrid mobile app). At the moment, I use ngResource for all my web API needs (which uses $http underneath).
A) So then to my first concern. To my knowledge, a DNS entry with multiple A records has no guarantee of what IP address(es) get delivered to any given client. Resolved DNS records are cached on intermediary DNS servers, usually only saving 1 record unless the DNS server is "smart". While some DNS servers might weight ip addresses by response times, tracerts, etc. most do not, even if they have more than one ip address. If I am correct, it would seem this scheme will not likely work well in practice.
B) My next conceptual hurdle is much more implementation specific. A priori, it seems the only way I can get a full set of IP addresses for a domain name is via a native plugin (e.g. InetAddress.getAllByName in Java on Android). This would seem preferable to the second obvious answer: an external web service that would do the lookup for me. I'm not aware of any way to do this in JavaScript, let alone any Angular/$http/$resource module with this capability rolled into it. Any suggestions?
C) Having gotten multiple IP addresses (somehow), I can set up a promise chain with timeouts and failover. This is not wildly desirable to me, however. Currently I've built my app to use Angular-factory-produced $resource objects with the appropriate paths defined (e.g. Accounts), which I access in various controllers as needed through the standard dependency injection mechanism. I'd rather not reinvent the wheel nor create a huge code footprint or maintenance issues for myself or anyone else, and thus I'd like to come up with the most "common practice" method for handling this that won't make some other Angular/Cordova coder lose his mind and start sacrificing Swedish code fish to his cephalopodal altar erected in some perpetually dark corner of his (or her!) mind. Therein, assuming I've had to get a collection of IP addresses for a DNS name from an external-to-Angular source at some point after the app has bootstrapped itself (since these will be async calls at possibly arbitrary points in the app's lifecycle), (catches breath) how would folks recommend structuring the creation of my ngResource objects to reference them? I can imagine a couple of different techniques, but I'm rather hoping this isn't as untrodden territory as my googling has led me to believe, and folks will have some recommendations for me 8).
Thanks in advance!
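The rotation scheme described in the question can be sketched independently of Angular. Here the request function is injected (so the failover logic has no network dependency of its own) and the addresses are placeholders; a real version would add per-attempt timeouts and failure counters:

```javascript
// Try each base address in turn until one responds. doRequest is injected,
// e.g. (addr) => $http.get('http://' + addr + '/accounts') in Angular,
// so the rotation logic itself stays framework- and network-agnostic.
async function requestWithFailover(addresses, doRequest) {
  let lastError;
  for (const addr of addresses) {
    try {
      return await doRequest(addr); // first success wins
    } catch (err) {
      lastError = err; // remember the failure, rotate to the next address
    }
  }
  throw lastError; // every address failed: surface the last error
}
```

Wrapping $resource calls in something like this (or an $http interceptor that does the same rotation) keeps the failover in one place rather than scattered through controllers.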
I am building a mobile app using AngularJS and PhoneGap. The app allows the user to access a large number of data-items. These data-items come with the app in the form of a number of .json files.
One use-case is that a user can favorite any of those data-items.
Currently, I store the (ids of) the items that have been favorited in localStorage. It works and it's great and very simple.
But now I would like to create an online backend for the app. By this I mean that the (ids of) the items that have been favorited should also be stored on a server somewhere, in some form of database.
Now my question is:
How do I best do this?
How do I keep the localStorage data and online-backend data in synch?
In particular, the user might not have an internet connection at the time he favorites a data-item. Additionally, if the user favorites x data-items in a row, I would need to make x update calls to the server DB, which clearly isn't great.
So, how do people do it?
Does Angular have anything build-in for this?
Is there any plugin?
Any other framework?
This very much seems like a common problem that must have a well-known solution?
I think you've almost got the entire solution. On app start, load the data from the service if it's available; otherwise use the current localStorage. Then, perhaps on a timer and on app close, send the JSON out to a service (I generally prefer PHP, but Python, Java, Ruby, Perl, whatever floats your boat) that puts it in a database, provided you're connected at the time. If you're concerned with merging synchronization changes, you'll need timestamps on the data in localStorage and in the database to make the right call on what should be inserted vs. what should be updated.
I don't think there's a one-size-fits-all solution to the problem. Someone may well have crafted a library that handles the different potential scenarios, but its configuration may be as complicated as just writing the logic yourself.
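The timestamp-based reconciliation step can be sketched like this. The record shape (id, favorited flag, updatedAt) is an assumption for illustration, not something Angular or any backend prescribes:

```javascript
// Merge local (localStorage) and server favorite records by id, keeping
// whichever copy of each record has the newer updatedAt timestamp.
// Works regardless of which side went offline and fell behind.
function mergeFavorites(local, server) {
  const byId = new Map();
  for (const rec of [...server, ...local]) {
    const existing = byId.get(rec.id);
    if (!existing || rec.updatedAt > existing.updatedAt) {
      byId.set(rec.id, rec); // newer copy wins
    }
  }
  return [...byId.values()];
}
```

Running this on app start (and before pushing to the server) also answers the "x favorites in a row" concern: you batch the merged result into one upload instead of x calls.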
I'm building a web app - primarily in php - but we need to pull down messages from twitter and various other services (email, sms). I'm writing a small service in node.js to handle the twitter connection etc. but am just trying to work out what is best to do with the content that is pulled down.
Right now I'm leaning towards a combination of MySQL for all our standard info with the main PHP app and Redis with the node.js service to store each message against a key that will probably be the username and some sort of unique identifier.
I've used redis before but this data needs to persist rather than being something that can expire like sessions. Redis' in memory nature makes me a bit nervous about this as, over time, with this being our main message store the dataset will quickly become unruly in RAM, will it not?
This blog post gives a good and concise overview for NoSQL-type databases. Perhaps you can find confirmation for or an alternative to Redis there. Since you have not given any numbers on how many and how often you need to pull data from the sources, it's hard to answer from my side.
Also, Redis supports two methods of persistence: timed snapshots and an append-only journal file to which changes to the db are written. The second is the safer alternative.
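For reference, enabling the append-only file in redis.conf looks like this (the fsync policy is a durability/throughput tradeoff you'd tune for your workload):

```conf
# Turn on the append-only journal so writes survive a restart
appendonly yes
# fsync once per second: a common balance between durability and throughput
# (alternatives: "always" for max safety, "no" to let the OS decide)
appendfsync everysec
```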
I am going to write a database application for the camp I work for. I am thinking about writing it in C# with a Windows GUI, but using a browser as the application is seeming more and more appealing for various reasons. What I am wondering is why someone would not choose to write an application as a web application. E.g., the back button can cause you some trouble. Are there other things that anyone can think of?
There are plenty of cons:
Speed and responsiveness tend to be significantly worse
Complicated UI widgets (such as tree controls) are harder to do
Rendering graphics of any kind is pretty tricky, 3D graphics is even harder
You have to mess around with logins
A centralised server means clients always need network access
Security restrictions may cause you trouble
Browser incompatibilities can cause a lot of extra work
UI conventions are less well-defined on the web - users may find it harder to use
Client-side storage is limited
The question is.. do enough of those apply to your project to make web the wrong choice?
One thing that was not mentioned here is the level of complexity and knowledge required to build a good web application. The problem is that, unless you are doing something very simple, there is no single skill or technology that covers these applications.
For example, if you were to write an application for some client-server platform, you might develop in Java or C++. For a complex web application, you may need expertise in Java, JavaScript, HTML, Flash, CSS, Ajax, SQL, J2EE, etc. The components of a web-based application are also more numerous: a web application server, HTTP server, database, and browser are typical components, but there could be more. A client-server app is typically just what it says: a client application and a server application. My experience and personal preference is not web-based; web-based is great for many things. But even though I am an IT architect for a leading company that is completely immersed in web apps as the solution for everything, the cons are still many. I do think the technology will evolve and the cons will go away over time, though.
Essentially, the real limitations come only from the platform, namely the browser. If you have to account for all browsers in current use, that can be a pain due to the varying degrees of standards support in each of them.
If you have control over which browser is used (that is, everyone is on computers that you control on site), and you install, say, Firefox on all of them, you can then leverage the latest JavaScript and CSS standards to their fullest in your content delivery.
[edit] You could also look into options like the Adobe Integrated Runtime, or "AIR", which lets you code the front-end with traditional browser-based options like XHTML/CSS/JavaScript or Flash/Flex, hook the backend up to your database online, and still provide the functionality of a traditional desktop app.
The biggest difference and drawback I see with web applications is state management. Since the web is, by nature, stateless, everything you want to maintain has to be sent back and forth from the server with every request and response. Storing and retrieving it efficiently, with respect to page size and performance, is hard to do at times. Also, the fact that there is no real standard (at least not one that everyone adheres to) for browsers makes consistency really... fun.
You need network access to the server that will host the web application (if there are going to be multiple users for the application, which is typically the case).
Actually, there are more pros than cons - if you can give some details about your application, we could help a little more...
It completely depends on the requirements of your project. For the most part, there isn't much web applications cannot do these days. Admittedly, certain applications do belong on the desktop, as browsers (while advancing, and rapidly) are not quite there yet. The advent of applications such as Google Docs and Gmail shows how far the web has come.
There isn't much you *cannot* do on the web. If you're creating a World of Warcraft competitor, however, the web is most certainly not the optimal solution. Again, unfortunately, we'd need more insight into the application you're building for the camp. The best part about the web is that anyone with a browser can use your application.
Web applications delegate processing to a remote machine. Depending on the amount of processing, this can be a con. Consider a photo editor that's a web app.
Web applications also can't deal with a whole lot of data going back and forth between the client and the server. You can watch video online when it's compressed, but it will be a while before we see any web-based video editing software.
Browser compatibility is also a hassle. You can't control the look-and-feel of the application 100%.
Vaibhav has a good point. What's your application?
A major one is downtime for migrations: users will not expect the application to be down, ever, but realistically it will have to be down for major upgrades. With a desktop application, the user (or the end user's systems admin) is in control of when upgrades happen; with an online app, they're not.
For applications which have large data, performance can be a major problem as you're storing a large number of users' data centrally, which means the IO performance will not be as good as it would be if you gave them all a laptop.
In general, scalability is a problem for a server-based app. Desktop applications scale really well, since each user brings their own hardware.
You can do an awful lot with a web-based app, but it is a lot easier to do certain things with a thick client:
Performance: You get simple access to the full power of the client's CPU.
Responsiveness: Interactivity is fast and easy.
Graphics: You can easily use graphics libraries such as DirectX and OpenGL to create fast impressive graphics.
Work with local files
Peer-to-peer
Deciding whether a web application is a good approach depends on what you are trying to achieve. However here are some more general cons of web applications:
Real integration with desktop apps (e.g. Outlook) is impossible
Drag and drop between your app and the desktop / other running apps
With a web application, there are more privacy concerns when you are storing user data on your servers. You have to make sure that you don't lose or disclose it, and your users have to be comfortable with the idea of storing that data on your servers.
Apart from that, there are many security problems, like Man-in-the-middle attacks, XSS or SQL injections.
You also need to make sure that you have enough computing power and bandwidth at hand.
"Ex. The back button can cause you some trouble."
You'll have to be specific on this. A lot of people make fundamental mistakes in their web applications and introduce bugs in how they handle transactions. If you do not use "Redirect after Post" (also known as Post-Redirect-Get, PRG design), then you've created a bug which appears as a problem with the back button.
A blanket statement that the back button causes trouble is unlikely to be true. A specific example would clarify your specific question on this.
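A minimal sketch of Redirect-after-Post with plain Node-style (req, res) handlers; the route names and the saveOrder callback are hypothetical, but the status code and header are exactly what PRG requires:

```javascript
// Post-Redirect-Get: after handling a POST, answer with a redirect so that
// the browser's back button or a refresh re-issues a harmless GET of the
// result page instead of resubmitting the form (the classic back-button bug).
function handlePost(req, res, saveOrder) {
  const id = saveOrder(req.body); // perform the side effect exactly once
  res.statusCode = 303; // 303 "See Other": redirect that forces a GET
  res.setHeader('Location', '/orders/' + id);
  res.end(); // no body needed; the browser follows Location
}
```

A plain 302 often works too, but 303 is the status explicitly defined to mean "fetch the result with GET", which is the behavior PRG relies on.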
The back button really is not that much of an issue if you design your application correctly. You can use AJAX to manipulate parts of the current page without adding items to the browser history (since the page itself won't change).
The biggest issue with designing web applications has to do with state, and the challenges that need to be programmed around. With a desktop application, state is easy to handle, you can leave a database connection opened, lock the record and wait for the user to make the changes and commit. With a web application, you could lock the record...but then what if the user closes the browser? These things must be overcome in the design of your application.
When designing a web application, make sure that each trip to the server "stands alone" and provides a complete answer. Always re-initialize your variables before performing any work, and never assume anything. One of the challenges I ran into once was pulling "pages" of grid data back to the user. In a really busy system, with record additions/modifications happening in real time, the user's navigation from page to page would vary greatly, sometimes even resulting in viewing the same set of a few records as new additions were added in front of the query.
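One way to keep paging stable under concurrent inserts is keyset (a.k.a. seek) pagination, where the client sends the last id it saw instead of a page number. A sketch over an in-memory array standing in for a sorted query result (the data shape is illustrative):

```javascript
// Keyset pagination: each request carries the last id the client saw, so
// rows inserted at the front of the result set don't shift the pages the
// user has already read (unlike OFFSET/page-number paging).
function nextPage(rows, afterId, pageSize) {
  // rows are assumed sorted by ascending id, as the query would return them
  const start = afterId == null
    ? 0
    : rows.findIndex((r) => r.id > afterId);
  if (start === -1) return []; // nothing newer than afterId: last page
  return rows.slice(start, start + pageSize);
}
```

In SQL this corresponds to WHERE id > :afterId ORDER BY id LIMIT :pageSize, which also tends to be faster than large OFFSETs.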