Event-based mobile server interaction - Synchronization / Conflict Resolution - mobile

As the title states, I'm planning to use events (that's a requirement) for user actions made on items (add/delete/update). Here are some questions I'm trying to answer, and I have a feeling someone has already stumbled on the same problem set:
1) Object ID generation: server vs. client? While the mobile client is offline, it needs to generate some ID for a newly created object so that subsequent actions can reference it. What would be the best approach? One option is to generate a local ID and then assign a global ID generated on the server once the object is pushed there. Our architecture team thinks this is the right approach, but personally I don't see its benefits, especially when the user logs on from another mobile device and downloads all events: which IDs should he receive, global ones, or global plus local? How is this approach better than just generating something similar to a UUID (time + hardware ID + user ID + incrementing/random ID) on the client side?
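For example, a rough sketch of the client-generated ID I have in mind (Python just for illustration; device_id and user_id stand in for whatever identifiers the platform actually exposes):

    import time
    import uuid

    def make_client_id(device_id, user_id, counter):
        # time + hardware ID + user ID + incrementing counter, as described above
        return "%x-%s-%s-%x" % (int(time.time() * 1000), device_id, user_id, counter)

    # ...or simply a random UUID, whose collision probability is negligible in practice:
    object_id = str(uuid.uuid4())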
2) What is the proper way of dealing with several mobile/web clients (the user can be logged on from one or several clients)? The first question here is conflict resolution: we have versions on our objects to support optimistic locking, so whoever manages to commit first is the winner. That seems clear. The other question is synchronization. Ideally we would send a client only the information it doesn't have yet, i.e. only the events not yet known to it. For this we obviously need to know which event came from which device, which is one more point in favor of having a sourceId on each event sent to the server. But how can we deal with two or more web sessions? I'm a novice at web programming, but we would like to stay stateless. Does that mean some script in the browser should generate a sort of session ID to be used as the sourceId for all events generated on the web client and sent to the server immediately?
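To make question 2 concrete, this is roughly the event envelope I have in mind (a sketch only, field names invented):

    from dataclasses import dataclass

    @dataclass
    class ItemEvent:
        event_id: str      # client-generated ID (see question 1)
        source_id: str     # device ID, or a generated per-session ID on the web
        item_id: str       # the object this action refers to
        action: str        # "add" | "update" | "delete"
        base_version: int  # version the client last saw; server rejects the commit if stale
        payload: dict      # the actual field changes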
Has anybody had similar tasks? What solutions did you use, and what problems did you encounter?

You don't mention what type of backend you're using. If you're on an Oracle backend, you should take a look at Database Mobile Server (DMS):
http://www.oracle.com/technetwork/products/database-mobile-server/overview/index.html
It handles the data synchronization for you, including conflict resolution. Your mobile app reads and writes the local database; DMS handles the rest. It also handles device/application management on supported client platforms.
As for the problem described in 1), you seem to be asking if the unique identifier should be created on the client or server side. Without a more complete understanding of your architecture, I don't see a clear advantage either way. In any case, DMS can propagate the unique identifier from one side to the other, regardless of which way you decide to implement.
Hope that helps, good luck with your project.
-- Eric Jensen, Oracle PM

Related

Why do we use REST to connect to a database on a mobile app?

I am currently studying how to make cross-platform mobile apps (with Xamarin Forms), and I have heard that the "correct" way to connect to a database on a non-local server (in my case located in Azure) is by using REST services (or REST APIs, or whatever they are called), instead of connecting directly to the database through the Server Explorer option of VS like you would do in Windows Forms, for example (using SqlConnection, DataSet, etc., which I think are not necessary in the first case, though I'm not sure).
The only answer I have received about this is that in mobile apps "they are not permanent connections: it connects, gives you data and disconnects; they are asynchronous connections", and that this is done "for optimization of connection resources: the mobile is suspended or the user sends the app to the background".
But I still don't know if this is the actual reason, and if it is, I don't understand how it optimizes connection resources. So if someone has time to explain this, I would appreciate it.
Thank you for your time, I hope I have explained myself correctly, and that you all have a great day.
As Jason said, there are security issues: with proper authorization, having a mediator is definitely much more secure than giving a user direct access to the database, because you restrict him to endpoints which run only the queries you want. From the platform-independence and maintenance angle, if the apps are developed in different languages and on different platforms, there is a benefit to creating a common REST interface to allow sharing of the data model, caching, etc. For performance and scalability, the HTTP layer of your REST API provides another valuable caching mechanism: your REST API servers can put caching headers on their responses, and those responses can be cached at the network layer, which scales exceptionally well.
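To illustrate the mediation and caching points, here is a minimal sketch using Flask and SQLite purely as stand-ins for whatever stack you actually run; the client never touches the database, only endpoints that run the queries we chose:

    import sqlite3
    from flask import Flask, jsonify, make_response

    app = Flask(__name__)

    @app.route("/products/<int:product_id>")
    def get_product(product_id):
        conn = sqlite3.connect("shop.db")
        row = conn.execute(
            "SELECT id, name, price FROM products WHERE id = ?",  # parameterized, fixed query
            (product_id,),
        ).fetchone()
        conn.close()
        if row is None:
            return jsonify(error="not found"), 404
        resp = make_response(jsonify(id=row[0], name=row[1], price=row[2]))
        # Cache headers let proxies/CDNs absorb repeat reads; this is the
        # scalability benefit mentioned above.
        resp.headers["Cache-Control"] = "public, max-age=300"
        return resp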
You could also read this link: Why do people do REST API's instead of DBAL's? I think the answers there are pretty good.

Listening for List updates using the SharePoint Client Object Model

I am looking for a decently efficient way to listen for List changes on a SharePoint site using only the Client Object Model. I understand how backwards this idea is, but I am trying to keep from having to push any libraries to the SharePoint servers on install. Everything is supposed to be drop and go on a local machine.
I've thought about a class that just loops on a timer and keeps querying the ClientContext from the date of the last successful query onward, but that seems horribly inefficient.
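Roughly this loop, in generic terms (fetch_changes_since is hypothetical, standing in for a CSOM query filtered on the Modified field):

    import time
    from datetime import datetime, timezone

    def poll_forever(fetch_changes_since, handle_change, interval_seconds=60):
        last_sync = datetime.now(timezone.utc)
        while True:
            time.sleep(interval_seconds)
            query_started = datetime.now(timezone.utc)
            for change in fetch_changes_since(last_sync):
                handle_change(change)
            last_sync = query_started  # advance only after a successful query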
I know this is a client object model, but is there any way to get notifications from the server on changes from the client only?
I am afraid this is not possible using the client object model. If you would need to poll so often that the user experience suffers too much from the slow performance, you would need to catch the list changes on the server side: deploy a solution with a feature registering an SPItemEventReceiver on your list.
I understand your reluctance to push server-side code to the SP farm; without it, you save yourself discussions and explanations with the customer's administrators. However, some tasks are more efficient, or even only feasible, when run on the server. You can consider Sandbox Solutions for such functionality. They are deployed not to SP by the farm administrator, but to a site collection by a site collection administrator, through a friendly web UI. This requires fewer privileges and more relaxed company policies to comply with, and can be better accepted by your customers. You can develop, test and even use your solution in your own site collection without affecting the entire farm. Microsoft even recommends that farm-wide solutions be designed with as much functionality as possible in sandboxed solutions, putting only the necessary minimum into a farm solution.
If deploying the entire application as a sandboxed solution is not possible, you could combine a sandboxed solution gathering the changes with an external web site requesting the gathered data from the site collection, or, in your case, with a client-only application, as you describe. (Sandboxed solutions have one big limitation: you cannot make a web request from within the site collection to the outside; you can only access the site collection from outside.)
--- Ferda

What is the best way to maintain and update offline changes between an IOS client and a server?

I have implemented an iOS app that uses RestKit with Core Data and a Grails backend with SQL. I got the "get" and "post" parts working as long as there is network connectivity and the server is up and running.
I am using an ID assigned by the server as the primary key in Core Data to sync the objects. If I allow users to create or update objects when they are offline, or when the server has issues, what would be the best way to maintain the state and update the server at a later point in time? I would like to know the best practices in this regard.
Thanks in advance!!
I've tackled this in a couple ways, though it's not a simple task.
In the first, I added an NSAttribute to all of my Core Data models to track their state (i.e. whether they should be sent to the server with a POST or a PUT request). Then I monitored the reachability notifications (RestKit exposes these via RKReachabilityObserver) and sent the modified objects to the server with the appropriate method. The code I used for this can be found here (but it is outdated and shouldn't be used directly).
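In platform-neutral terms (a Python sketch with illustrative names, not actual RestKit or Core Data API), the idea boils down to this:

    import requests

    SERVER = "https://example.com/api"  # placeholder base URL

    def flush_pending(records):
        # Called from a reachability callback once the network is back.
        for rec in records:
            if rec["sync_state"] == "created":
                resp = requests.post("%s/items" % SERVER, json=rec["fields"])
            elif rec["sync_state"] == "updated":
                resp = requests.put("%s/items/%s" % (SERVER, rec["id"]), json=rec["fields"])
            else:
                continue  # already synced
            if resp.ok:
                rec["sync_state"] = "synced"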
The second, very recent, method I've devised is to create a Core Data entity that is essentially a queue: changes in the model are registered in the queue and synced with the server when the network is available. It is open for discussion on GitHub as of this writing, and is NOT very thoroughly tested at this point, but here it is.
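In the same spirit, here is the queue idea sketched with SQLite standing in for the Core Data queue entity:

    import json
    import sqlite3

    db = sqlite3.connect("sync_queue.db")
    db.execute("""CREATE TABLE IF NOT EXISTS sync_queue (
                      seq INTEGER PRIMARY KEY AUTOINCREMENT,
                      action TEXT NOT NULL,    -- 'create' | 'update' | 'delete'
                      entity TEXT NOT NULL,
                      payload TEXT NOT NULL)""")

    def enqueue(action, entity, payload):
        # Every local change appends a row, whether or not the server is reachable.
        db.execute("INSERT INTO sync_queue (action, entity, payload) VALUES (?, ?, ?)",
                   (action, entity, json.dumps(payload)))
        db.commit()

    def drain(send):
        # send(action, entity, payload) should raise on failure so draining stops.
        for seq, action, entity, payload in db.execute(
                "SELECT seq, action, entity, payload FROM sync_queue ORDER BY seq").fetchall():
            send(action, entity, json.loads(payload))
            db.execute("DELETE FROM sync_queue WHERE seq = ?", (seq,))
            db.commit()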

Is this the right architecture for our MMORPG mobile game?

These days I am trying to design the architecture of a new MMORPG mobile game for my company. The game is similar to Mafia Wars, iMobsters, or RISK. The basic idea is to prepare an army to battle your opponents (online users).
Although I have previously worked on multiple mobile apps, this is something new to me. After a lot of struggle, I have come up with an architecture, illustrated here with the help of a high-level flow diagram:
We have decided to go with a client-server model. There will be a centralized database on the server. Each client will have its own local database, kept in sync with the server; it acts as a cache for things that do not change frequently, e.g. maps, products, inventory.
With this model in place, I am not sure how to tackle the following issues:
What would be the best way of synchronizing server and client databases?
Should an event get saved to local DB before updating it to server? What if app terminates for some reason before saving changes to centralized DB?
Will simple HTTP requests serve the purpose of synchronization?
How do we know which users are currently logged in? (One way could be to have the client send a request to the server every x minutes to signal that it is active, and otherwise consider it inactive; see the sketch after this list.)
Are client-side validations enough? If not, how do we revert an action if the server fails to validate something?
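The heartbeat idea from the list above, sketched server-side (Python for illustration; the window is a tunable):

    import time

    HEARTBEAT_WINDOW = 5 * 60  # seconds; tune to your "x minutes"
    last_seen = {}             # user_id -> timestamp of the latest ping

    def on_heartbeat(user_id):
        last_seen[user_id] = time.time()

    def online_users():
        cutoff = time.time() - HEARTBEAT_WINDOW
        return [uid for uid, ts in last_seen.items() if ts >= cutoff]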
I am not sure if this is an efficient solution or how it will scale. I would really appreciate it if people who have already worked on such apps could share their experiences, which might help me come up with something better. Thanks in advance.
Additional Info:
The client side is implemented in a C++ game engine called Marmalade. This is a cross-platform game engine, which means you can run your app on all major mobile OSes. We can certainly use threading, which is also illustrated in my flow diagram. I am planning to use MySQL for the server and SQLite for the client.
This is not a turn-based game, so there is not much interaction with other players. The server will provide a list of online players, and you can battle one of them by clicking the battle button; after some animation, the result is announced.
For database synchronization I have two solutions in mind:
1) Store a timestamp for each record, and also keep track of when the local DB was last updated. When synchronizing, select only the rows with a greater timestamp and send them to the local DB. Keep an isDeleted flag for deleted rows so every deletion simply behaves as an update. But I have serious doubts about performance, as for every sync request we would have to scan the complete DB looking for updated rows.
2) Another technique might be to keep a log of each insertion or update that takes place against a user. When the client app asks for a sync, go to this table and find out which rows of which table have been updated or inserted. Once these rows are successfully transferred to the client, remove the log entries. But then I think of what happens if a user uses another device: according to the log table, all updates have been transferred for that user, but actually that was done on another device. So we might have to track the device as well. Implementing this technique is more time-consuming, but I am not sure whether it outperforms the first one.
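A sketch of option 1 against SQLite (MySQL would look the same in spirit). One note on the performance doubt: an index on the timestamp column turns the full-table scan into a cheap range lookup.

    import sqlite3

    def changes_since(conn, table, last_sync):
        # Rows touched after the client's last sync; relies on an index on updated_at.
        return conn.execute(
            "SELECT * FROM %s WHERE updated_at > ? ORDER BY updated_at" % table,
            (last_sync,)).fetchall()

    # Deletion behaves as an update, so it propagates like any other change:
    # UPDATE items SET is_deleted = 1, updated_at = CURRENT_TIMESTAMP WHERE id = ?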
I've actually worked on some of the titles you mentioned.
I do not recommend using MySQL; it doesn't scale up well even if you shard, and if you shard you are losing most of the benefits of a relational database anyway.
You are probably better off using a NoSQL database. It is faster to develop against, easy to scale, and it is simple to change the document structure, which is a given for a game.
If your game data is simple you might want to try CouchDB; if you need advanced querying you are probably better off with MongoDB.
Take care of security from the start. People will try to hack the game for sure, and once you have a number of clients released it is hard to make security changes backward compatible. SSL won't do much, as the end user is the problem, not an eavesdropper. Signing or encrypting your data will make it harder for a user to add items and gold to their account.
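For instance, signing request bodies might look like this minimal sketch (the shared key is a placeholder; anything shipped in the client can eventually be extracted, so this raises the bar rather than giving absolute protection):

    import hashlib
    import hmac
    import json

    SECRET_KEY = b"replace-me"  # placeholder; obfuscate in the client, keep safe on the server

    def sign(payload):
        body = json.dumps(payload, sort_keys=True).encode()
        return hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()

    def verify(payload, signature):
        # Server-side check; reject the request if the signature does not match.
        return hmac.compare_digest(sign(payload), signature)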
You should also define your architecture to support multiple clients without having a bunch of ifs and case statements. Read the client version and dispatch that client to the appropriate codebase.
Have a maintenance mode with flags for upgrading, maintenance, etc. It will cut you some slack if you need to re-shard your DB or any other change that might require downtime.
Client-side validations are not enough, especially if you are using in-app purchases. I agree with the post above: the server should control the game logic.
As for DB sync, it's best to memcache read-only data. Typical examples are buyable items, maps, news, etc. User data is harder, as you might not be able to afford losing any modified data. The easiest setup is to cache user data for a couple of hours and write directly to the DB every time. If you are using NoSQL it will probably withstand a high load without the need for a persistence queue.
I see two potential problems hidden in the fact that you store all the state on the client, and then update the state on the server using a background thread.
How can the server validate the data being posted? If someone hacked your application, they could modify the code so that whenever they swing their sword (or whatever they do in your game), it is always a hit. Doing that in a single-player game is not a big deal, but in an MMORPG it can ruin the experience for everyone else. So the server should validate every update of data, or even better, the server should be in charge of every business rule. When you swing your sword against an opponent, that should be a server call, and the server returns whether or not it is a hit and how many hit points the opponent lost.
What about interaction with other players? (Since you say it is an MMORPG, there will be interaction with other players.) Because you update the server and get updates in a background thread, interaction will be sluggish: when you communicate with another character you first have to wait for your own background thread to sync data, and then also for the other player's background thread to sync theirs.
Looks nice. But what is the client side made of? Web? Can you use threading to synchronize both DBs? I would make the game interact immediately with the local DB and let some background mechanism do the sync (something like a snapshot). This leads me to think of MySQL replication. I think it is worth trying, though I never have. It would also answer some of your other questions. But what about the load (how many customers are connected at once)?
http://dev.mysql.com/doc/refman/5.0/en/replication.html
Make your client issue commands to the server ("hit player"), and have the server send (relevant) events to the client ("player was killed"). I wouldn't advise going with data synchronization; the server should be responsible for all important game decisions.
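A sketch of that command/event split (names and the combat rule are invented for illustration; the point is that the server rolls the dice, not the client):

    import random

    def handle_command(state, command):
        # Apply a client command to the authoritative server state; return events to broadcast.
        if command["type"] == "attack":
            attacker = state["players"][command["player_id"]]
            target = state["players"][command["target_id"]]
            hit = random.random() < attacker["accuracy"]  # the server rolls, not the client
            events = [{"type": "attack_resolved", "hit": hit}]
            if hit:
                target["hp"] -= attacker["damage"]
                if target["hp"] <= 0:
                    events.append({"type": "player_killed", "player_id": command["target_id"]})
            return events
        return [{"type": "rejected", "reason": "unknown command"}]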

How do you keep two related, but separate, systems in sync with each other?

My current development project has two aspects to it. First, there is a public website where external users can submit and update information for various purposes. This information is then saved to a local SQL Server at the colo facility.
The second aspect is an internal application which employees use to manage those same records (conceptually) and provide status updates, approvals, etc. This application is hosted within the corporate firewall with its own local SQL Server database.
The two networks are connected by a hardware VPN solution, which is decent, but obviously not the speediest thing in the world.
The two databases are similar, and share many of the same tables, but they are not 100% the same. Many of the tables on both sides are very specific to either the internal or external application.
So the question is: when a user updates their information or submits a record on the public website, how do you transfer that data to the internal application's database so it can be managed by the internal staff? And vice versa... how do you push updates made by the staff back out to the website?
It is worth mentioning that the more "real time" these updates occur, the better. Not that it has to be instant, just reasonably quick.
So far, I have thought about using the following types of approaches:
Bi-directional replication
Web service interfaces on both sides with code to sync the changes as they are made (in real time).
Web service interfaces on both sides with code to asynchronously sync the changes (using a queueing mechanism).
Any advice? Has anyone run into this problem before? Did you come up with a solution that worked well for you?
This is a pretty common integration scenario, I believe. Personally, I think an asynchronous messaging solution using a queue is ideal.
You should be able to achieve near real time synchronization without the overhead or complexity of something like replication.
Synchronous web services are not ideal because your code will have to be very sophisticated to handle failure scenarios. What happens when one system is restarted while the other continues to publish changes? Does the sending system get timeouts? What does it do with them? Unless you are prepared to lose data, you'll want some sort of transactional queue (like MSMQ) to receive the change notices and make sure they get to the other system. If either system is down, the changes (passed as messages) simply accumulate, and as soon as a connection can be established the restarting server processes all the queued messages and catches up, making system integrity much, much easier to achieve.
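The consumer side of that pattern, sketched with a hypothetical durable_queue exposing peek/pop (MSMQ or a service bus would play this role in practice):

    import time

    def pump(durable_queue, send_to_internal, backoff_seconds=5):
        while True:
            message = durable_queue.peek()   # leave the message queued until delivered
            if message is None:
                time.sleep(1)                # nothing pending; idle briefly
                continue
            try:
                send_to_internal(message)    # e.g. a web service call to the other system
                durable_queue.pop()          # remove only after confirmed delivery
            except ConnectionError:
                time.sleep(backoff_seconds)  # other side is down; message stays queued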
There are some open source tools that can really make this easy for you if you are using .NET (especially if you want to use MSMQ).
nServiceBus by Udi Dahan
Mass Transit by Dru Sellers and Chris Patterson
There are commercial products as well; if you are considering a commercial option, see here for a list of options on .NET. Of course, WCF can do async messaging using MSMQ bindings, but a tool like nServiceBus or MassTransit will give you a very simple Send/Receive or Pub/Sub API that makes your requirement a very straightforward job.
If you're using Java, there are any number of open source service bus implementations that will make this kind of bi-directional, asynchronous messaging a snap, like Mule or maybe just ActiveMQ.
You may also want to consider reading Udi Dahan's blog and listening to some of his podcasts. Here are some more good resources to get you started.
I'm mid-way through a similar project, except I have multiple sites that need to keep in sync over slow connections (dial-up in some cases).
Firstly, you need to track changes. If you can use SQL 2008 (even the Express version is enough if the 2 GB limit isn't a problem), this will ease the pain greatly: just turn on Change Tracking on the database and on each table. We're using SQL Server 2008 at the head office with the extended schema, and SQL Express 2008 at each site with a subset of the data and a limited schema.
Secondly, you need to synchronize your changes. Sync Services does the trick nicely and supports using a WCF gateway into the main database. In this scenario you will need to use the "Sync using SQL Express Client" sample as a starting point; note that it's based on SQL 2005, so you'll need to update it to take advantage of the Change Tracking features in 2008. By default Sync Services uses SQL CE on the clients, which I'm sure isn't enough in your case. You'll need a service that runs on your web server and periodically (as often as every 10 seconds, if you want) calls the Synchronize() method. This will tell your main database about changes made locally and then ask the server for all changes made there. You can set up the get and apply SQL code to call stored procedures, and you can add event handlers to handle conflicts (e.g. client update vs. server update) and resolve them accordingly at each end.
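Schematically, that periodic service and its conflict handler look something like this (a Python sketch; session is a hypothetical wrapper around the two change-tracked databases, and last-writer-wins is just one example policy):

    import time

    def resolve(client_row, server_row):
        # Example policy only: the most recent writer wins.
        return client_row if client_row["updated_at"] >= server_row["updated_at"] else server_row

    def sync_service(session, interval_seconds=10):
        while True:
            for conflict in session.synchronize():  # push local changes, pull remote ones
                session.apply(resolve(conflict.client_row, conflict.server_row))
            time.sleep(interval_seconds)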
We have a shop as a client, with three stores connected to the same VPN.
Two of the stores have a computer running as a "server" for that store, and the third one has the "master database".
To synchronize everything to the master we don't have the best solution, but it works: a dedicated PC runs an application that checks the timestamp of every record in every table of the two stores and, if it is different from the last time it synchronized, copies the changes over.
Note that this works both ways, i.e. if you update a product in the master database, the change will propagate to the other two stores, and if you have a new order in one of the stores, it will be transmitted to the "master".
With some optimizations you can have all the stores synchronize in around 20 minutes.
Recently I have had a lot of success with SQL Server Service Broker, which offers reliable, persisted asynchronous messaging out of the box with very little implementation pain.
It is quick to set up, and as you learn more you can use some of the more advanced features.
Unknown to most, it is also part of the desktop editions, so it can be used as a workstation messaging system.
If you have existing T-SQL skills, they can be leveraged, as all the code to read and write messages is done in SQL.
It is blindingly fast.
It is a vastly under-hyped part of SQL Server and well worth a look.
I'd say just have a job that copies the data from the public database's input table into a pending table in the private database. Then, once you update the data on the private side, have it replicated back to the public side. As long as none of the replicated data on the public side is updated directly, it should be a fairly easy transactional replication solution.
