How to handle multiple versions of an application?

I've built an iOS and Android app with a Node.js backend server that is now in production, but as I'm new to this I have no idea how to handle updates of the server and apps.
First of all, how am I supposed to update the Node.js server without downtime?
Second, let's suppose I have a chat in my app and for some reason I have to change it, but the change is not compatible with previous versions. How am I supposed to act?
I think the question is not entirely clear, but I have no idea what to search on Google to point me in the right direction; anything would be helpful.

Updating server without downtime
The answer really depends upon how your infrastructure is configured.
One way would be to have a second server, configured with the new software, ready to go, and then you switch over to the new server. If you are going to be doing this a lot, then having a mechanism/tooling to do this will certainly simplify things. And if things go wildly wrong, you just switch back.
We use AWS. As part of launching an update, we provision a number of instances to match the current number (so we don't suddenly need several hundred more instances launching). When all the instances are ready to go, our load balancer switches from the current configuration to the new configuration. No one sees anything other than a slight delay as the caches start getting populated.
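If the backend is a Node.js server, as in the question, one piece that makes this kind of switch safe is graceful shutdown, so old instances can drain in-flight requests while the load balancer moves traffic to the new ones. A minimal sketch (the port and timeout are illustrative, not from the answer):

```typescript
import http from "node:http";

// A Node server that shuts down gracefully on SIGTERM, so a load balancer
// can drain it during a blue/green or rolling switch.
const server = http.createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify({ ok: true }));
});

server.listen(3000);

process.on("SIGTERM", () => {
  // Stop accepting new connections; in-flight requests finish normally.
  server.close(() => process.exit(0));
  // Safety net: force-exit if draining takes longer than 30 seconds.
  setTimeout(() => process.exit(1), 30_000).unref();
});
```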
Handling incompatible data
This is where versioning comes in.
Versioning the API - The API to our application has several versions. Each of them is just a proxy to the latest form. So, when we upgrade the API to a new version, we update the mappers for the supported versions so that the input/output for the client doesn't change, but internally, the main library of code is operating only on the latest code. The mappers massage data between the user and the main libraries.
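As a rough illustration of the mapper idea (a sketch, not the poster's actual code; Express, the routes, and the field names are all assumptions):

```typescript
import express from "express";

// v1 and v2 routes both delegate to the latest internal logic;
// mappers massage requests and responses at the edges.
interface UserV2 { fullName: string }            // latest internal shape
interface UserV1 { first: string; last: string } // legacy wire shape

const toV2 = (u: UserV1): UserV2 => ({ fullName: `${u.first} ${u.last}` });
const toV1 = (u: UserV2): UserV1 => {
  const [first, ...rest] = u.fullName.split(" ");
  return { first, last: rest.join(" ") };
};

// The core library only ever sees the latest shape.
async function saveUser(user: UserV2): Promise<UserV2> { return user; }

const app = express();
app.use(express.json());

app.post("/api/v1/users", async (req, res) => {
  const result = await saveUser(toV2(req.body as UserV1));
  res.json(toV1(result)); // map back down so v1 clients see no change
});
app.post("/api/v2/users", async (req, res) => {
  res.json(await saveUser(req.body as UserV2));
});

app.listen(3000);
```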
Versioning the data being messaged - As this is an app, the data coming in should be versioned, so an app sending v1 data (or unversioned data, if you've not got a version in there already) has to have it upgraded on the server to the v2 format. From then on, it is v2. On the way out, the v2 result needs to be mapped down to v1. It is important to understand that the mapping may not always be possible. If you've consolidated/split attributes from v1 to v2, you're going to have to work out how the data should look from the v1 and v2 perspectives.
Versioning the data being stored - Different techniques exist depending upon how the data is being stored. If you are using an RDBMS, then migrations and repeatables are very commonly used to upgrade data ready for a new app to operate on it. Things get interesting when you need to upgrade the software to temporarily support both patterns. If not using an RDBMS, a technique I've seen is to upgrade the data on read. So, say you have some sort of document store: when you read the document, check the version. If old, upgrade and save it. Now you can treat it as the latest version. The big advantage here is that there is no long-running data migration taking place. Over time, the data is upgraded. A downside is that every read needs to do a version check. So maybe mix and match: have the check/upgrade/save happen on every read, and create a data migration tool whose sole job is to trawl through the data. When all the data is migrated, drop the checks (as all the data is either new, and therefore matches the latest version, or has been migrated to the latest version) and the migrator.
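A small sketch of the upgrade-on-read technique, assuming a hypothetical document store interface and a v1-to-v2 field split:

```typescript
// Sketch of "upgrade on read" for a document store. The store interface,
// version numbers, and field changes are illustrative assumptions.
interface DocV1 { version: 1; name: string }
interface DocV2 { version: 2; firstName: string; lastName: string }
type AnyDoc = DocV1 | DocV2;

interface Store {
  get(id: string): Promise<AnyDoc>;
  put(id: string, doc: DocV2): Promise<void>;
}

function upgrade(doc: AnyDoc): DocV2 {
  if (doc.version === 2) return doc;
  // v1 -> v2: split the consolidated field. A lossy or ambiguous split is
  // exactly the kind of judgment call the answer warns about.
  const [firstName, ...rest] = doc.name.split(" ");
  return { version: 2, firstName, lastName: rest.join(" ") };
}

async function read(store: Store, id: string): Promise<DocV2> {
  const doc = await store.get(id);
  const latest = upgrade(doc);
  if (doc.version !== latest.version) {
    await store.put(id, latest); // persist so the next read skips the upgrade
  }
  return latest;
}
```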
I work in the PHP world and I use Phinx to handle DML (data) migrations and our own repeatables code to handle DDL (schema changes).

Updating your backend server is a pain indeed; you can't really do that without downtime at all. What you can do, though, assuming your clients access your server via a domain rather than a plain IP address, is prepare another server with data that is as up to date as possible and do a DNS record update to redirect traffic to it. Keep in mind that DNS has a long update time during which some clients reach the old server and some the new one (which means a big headache if data consistency is important to you).
Changing the API is another pain. Often you need to support older versions of your application in parallel with the newer ones. Most app stores will show you statistics on your app versions and when it's safe to drop support for an old version.
A common practice, though, is to have the API endpoints versioned, so that version 1 of your app accesses URL/API/v1/... and version 2 accesses URL/API/v2/..., which lets you send different replies based on the client version. You increase the version every time you make a breaking change to the protocol. This makes the protocol "future compatible".
In some cases you initially add a mechanism that lets the server send a message to an old version of the client saying that their version is obsolete and they need to update...
Most big apps already have such a mechanism, while most small apps just take the risk of some downtime and drop support for a few non-updated clients...
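Such a mechanism can be as small as a middleware check on a version header. A hedged sketch (the header name, minimum version, and status code are assumptions, not from the answer):

```typescript
import express from "express";

// Reject clients older than the minimum supported version so they
// prompt the user to update before talking to the newer API.
const MIN_SUPPORTED = 2;

const app = express();

app.use((req, res, next) => {
  const clientVersion = Number(req.header("X-App-Version") ?? "1");
  if (clientVersion < MIN_SUPPORTED) {
    // 426 Upgrade Required: the client must update before continuing.
    res.status(426).json({ error: "obsolete_version", minSupported: MIN_SUPPORTED });
    return;
  }
  next();
});

app.get("/api/v2/chat", (_req, res) => res.json({ messages: [] }));
app.listen(3000);
```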

Related

How do you handle versioning of Spotfire dashboards?

A natural thing about software is that you enhance it, thus creating new versions of it. How do you handle that with Spotfire?
There are at least two ways I can think of.
First, in 7.5 and above you can spin up a test node, copy down any dxp you want from live, and develop on it in test. Once the "upgrade" or changes are complete, you would then back up the live version to disk somewhere... anywhere you do other backups, and deploy the new version to live.
For pre-7.5 the idea is the same but you would have to create a test folder in live with restricted access to test your upgrade on a web player.
Strictly speaking, "what version are you on" doesn't mean the same thing for analytics as it does for software, in my opinion. There should only be one version of the truth. If you were to run multiple versions, you'd have to manage their updates separately for caching, which is cumbersome in my opinion. Also, since the analytic has a GUID which relates to its information sources, running them in parallel in the same environment will cause duplication.
If this isn't what you were shooting for I'd love for you to elaborate on the original post and clarify anything I assumed. Cheers mate.
EDIT
Regarding the changes in 7.5, see this article from Tibco starting on p.42, which explains that Spotfire has a new topology with a service-oriented architecture. From 7.5 onward, IIS is no longer used, and to access the web player you don't even go to the "web server" anymore. The application server handles all access and is the central point for authentication and management.

Strategies for syncing data with server in PhoneGap [closed]

I'm starting my first PhoneGap project, using AngularJS. It's a database-driven app, using a REST API as the backend. To start with, I'm not going to store data locally at all, so it won't do much without Internet.
However, I would eventually like to have it store data locally and sync when Internet is available, since I know I personally disable the Internet connections on my phone at times (airplanes, low battery), or have no bars. I was wondering if you could point me toward some good resources for this type of syncing. Some recommended libraries? Or perhaps some discussions of the pitfalls and how to circumvent them. I've Googled a bit, but I think right now I don't know the questions to ask.
Also, my intent is to build it Internet-dependent first, and then add syncing.... Is that a good idea, or am I shooting myself in the foot? Do I need to build it with syncing from the start?
I had someone suggest building the app as local-only first, rather than the Internet-only part first, which has a certain logic to it. The remote storage is kind of important to me. I know the decision there has a lot to do with my goals for the app, but from the standpoint of building this, with the eventual goal being local storage + Internet storage and two-way syncing, what's going to be easier? Or does it even make a difference?
To start with, I'm thinking of using UUIDs, rather than sequential integer primary keys. I've also thought about assigning each device an ID that is prefixed on any keys it generates, but that seems delicate. Anyone used either technique? Thoughts?
I guess I need a good system to tell what data's been synced. On the client side, any records that get created/edited can be flagged for syncing. But on the server side, you have multiple clients, so that wouldn't work. I guess you could have a last_updated timestamp and sync everything updated since the last successful sync.
What about records edited in multiple places? If two clients edit, and then want to sync, you have some ambiguity about merging, like when merging branches in git or other version control systems. How do you handle that? I guess git does it by storing diffs of every commit. I guess you could store diffs? The more I think about this, the more complicated it sounds. Am I over-thinking it or under-thinking it?
What about client side storage? I've thought about SQLite, or the PhoneGap local storage thing (http://docs.phonegap.com/en/1.2.0/phonegap_storage_storage.md.html). Recommendations? The syncing will be over a REST API, exchanging JSON, so I was thinking something that actually stores the data as JSON, or something JSON-like that's easy to convert, would be nice. On the other hand, if I'm going to have to exchange some sort of data diff format, maybe that's what I need to be storing?
Let me answer your question based on my experience with the sync part; as I don't have enough experience with PhoneGap, I'll skip the question about PhoneGap local storage vs. SQLite.
I was wondering if you could point me toward some good resources for this type of syncing. Some recommended libraries?
There are a number of open-source projects for syncing a PhoneGap app with a remote server, but you will probably have to adjust them for your own needs or implement your own sync functionality. Below I listed some of the open-source projects. You must already be aware of them if you've searched the net.
PhoneGap sync plugin
Simple Offline Data Synchronization for Mobile Web and PhoneGap Applications
Synchronize a local WebSQL Db to a server
Couchbase Lite PhoneGap plugin
Additionally, you might consider the other options but that depends on your server side:
Microsoft Sync Framework Toolkit (Html5 sample is available)
OpenSync Framework - platform independent, general purpose synchronization engine
Also, my intent is to build it Internet-dependent first, and then add syncing.... Is that a good idea, or am I shooting myself in the foot? Do I need to build it with syncing from the start?
I believe the sync functionality is more like an additional module and shouldn’t be tightly coupled with the rest of your business logic. Once you start thinking about testing strategy for your sync you’ll realise it will be easier to test that if your sync facility is decoupled from the main code.
I think you can launch your app as soon as possible with the minimum required functionality without sync. But you’d better think about your architecture and the way you add the sync facility in advance.
To start with, I'm thinking of using UUIDs, rather than sequential integer primary keys. I've also thought about assigning each device an ID that is prefixed on any keys it generates, but that seems delicate. Anyone used either technique? Thoughts?
That depends on your project specifications, and specifically your server side. For example, Azure mobile services allow only the integer type for primary keys. Unique identifiers as primary keys are pretty handy in distributed systems, though they have some disadvantages as well.
Related to assigning a device ID – I am not sure I understand the point although I don’t know your project specifics. Have a look at the sync algorithm that is used in our system (bidirectional sync using REST between multiple Android clients and central SQL Server).
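On the UUID question, one simple approach is to generate keys on the client so devices never need to coordinate sequences with the server. A tiny sketch using Node's built-in crypto module (the record shape is illustrative):

```typescript
import { randomUUID } from "node:crypto";

// Client-generated UUID keys: no collisions between devices, and no round
// trip to the server to reserve a sequential ID before the record can sync.
const record = {
  id: randomUUID(),          // e.g. "3b2e1f0a-..."; unique across devices
  createdAt: Date.now(),
  body: "created offline, synced later",
};
```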
What about records edited in multiple places? If two clients edit, and then want to sync, you have some ambiguity about merging, like when merging branches in git or other version control systems. How do you handle that? I guess git does it by storing diffs of every commit. I guess you could store diffs? The more I think about this, the more complicated it sounds. Am I over-thinking it or under-thinking it?
This is where you need to think about how to handle the conflict resolution in your system.
If the probability of conflicts in your system is high, e.g. users will be changing the same records quite often, then you'd better track which fields (columns) of the records have been modified in your sync, and then, once a conflict is detected:
Iterate through each modified field of the server side record in conflict
Compare each modified field of the server record with the relevant field of the client.
If the client field was not modified then there is no conflict so just overwrite it with the server one.
Else there is a conflict, so save both fields' contents into a temporary place for the report
At the end of sync produce the report of records in conflict.
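A sketch of that field-level merge in code (the record shape, the change-tracking sets, and the conflict report are illustrative assumptions, not the poster's actual system):

```typescript
// Field-level merge following the steps above: server-modified fields win
// when the client didn't touch them; fields changed on both sides go into
// the conflict report produced at the end of the sync.
type Rec = Record<string, unknown>;

interface Conflict { field: string; server: unknown; client: unknown }

function merge(
  server: Rec,
  client: Rec,
  serverModified: Set<string>, // fields changed on the server since last sync
  clientModified: Set<string>, // fields changed on the client since last sync
): { merged: Rec; conflicts: Conflict[] } {
  const merged: Rec = { ...client };
  const conflicts: Conflict[] = [];
  for (const field of serverModified) {
    if (!clientModified.has(field)) {
      merged[field] = server[field]; // no conflict: take the server value
    } else {
      // Both sides changed it: keep both values for the conflict report.
      conflicts.push({ field, server: server[field], client: client[field] });
    }
  }
  return { merged, conflicts };
}
```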

How to setup deployments in Azure so that they use different databases depending on the environment?

You can easily swap two deployments between the staging and production environments in the Azure Management Portal by swapping their VIPs. When working on a staging version of the services we want to use a staging database as well, so we don't risk clobbering actual customer data. However, after swapping staging and production services, the now-production (and formerly staging) deployment should obviously work on the production database.
So essentially the database to use would depend on whether the instance runs in the Staging or Production environment. Is there a good way of achieving that? Relying on the VIP and hard-coding the database switching based on that is probably not the best idea, I guess.
My recommendation would be to stop using the "staging slot" of a service for the function you used a traditional "staging environment" for. When I'm speaking to folks about Windows Azure, I strongly recommend they use the staging slots only to smoke-test a new deployment before it goes live. If they want a more protracted sort of testing, the kind many of us are used to having on-premises, then use a separate service and possibly even a separate subscription (the latter is great if you want cost transparency).
All this said, your only real options are to have a second service configuration, specific to production, that you update to before you execute the VIP swap, or to write some code that allows the service to detect which slot it's in and pull the appropriate one of two configuration settings.
However, as I outlined in the first paragraph, I think there's a better way to do things. :)
In a recent release of Azure Websites, the story here has changed. You may now specify that any app setting or connection string is a "slot setting", pinning it to the particular slot. To solve your issue, you would simply set the connection string(s) in each slot and take care to check 'Slot Setting'.
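At runtime the app then just reads its connection string from configuration; whichever slot it runs in determines the value. A minimal sketch, assuming a Node app on Azure App Service, where connection strings surface as prefixed environment variables (the setting name "Default" is an assumption):

```typescript
// Slot settings surface as environment variables, so the code is identical
// in both slots; only the pinned value differs. SQLAZURECONNSTR_ and
// CUSTOMCONNSTR_ are App Service connection-string prefixes.
const connectionString =
  process.env.SQLAZURECONNSTR_Default ?? process.env.CUSTOMCONNSTR_Default;

if (!connectionString) {
  throw new Error("No connection string configured for this slot");
}
```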
I'm less clear whether this is an advisable approach now. Database schema migration and rollback aren't baked in, and I'm unsure how to handle that correctly. Also, only app settings and connection strings work this way, so, for example, system.net.mail settings cannot be pinned to a slot. For that, you'd need to change code to get mail server info, etc. from app settings, or else use some other approach.
Re: "When working on a staging version of the services we want to use a staging database as well so we don't risk clobbering actual customer data." There is not a built-in way to do this.
If you wish to test without risk to production data, consider doing this testing in another Azure account - one that doesn't even have access to the production database. Then, when you think the system is tested and ready to go live, only then bring it up into the staging slot next to your production instance for a final smoke test.
I can imagine scenarios where you'd also want to run through a few scenarios on the staging instance before doing a VIP swap, but don't want to pollute production data. For this, many companies use special accounts - data associated with these accounts is known (or marked somehow) to be not from real customers, so it can be skipped in reporting and billing and such.
Re: "Relying on the VIP and hard-coding the database switching based on that is probably not the best idea, I guess." If by hard-coding, you mean reading it from a config file, that is probably not a bad idea, if you use an approach as mentioned above. I have heard of some folks going with a "figure out if we are in a staging slot and do something different in the code" approach, but I rather recommend what I described above.

1 cakePHP core, multiple applications on different servers

I wondered if this would be possible. I'd like to centralize my cakePHP core files in one location and have my several applications use the same core. One reason is that when updating, I just need to update one core. Nowadays I always upload the whole cakePHP package with each application.
But my applications are not all on the same server.
Unfortunately I'm not sure web servers can access files across physical servers; and even if they could via network shares, this would be an incredible performance hit.
Rather, try to automate the core deploy using SVN or rsync tools.
So, while it may technically be possible, I wouldn't advise it.
If your apps are on different servers than your cake core, you'll need at least all servers to be on the same network so you can mount one server's disk from another. Otherwise, you'll need to upload the core to each app.
Assuming you can mount the disks, you can use the same cake core just by replacing the paths in app/webroot/index.php
I wouldn't suggest doing either, but I would also add that updating the core is good, but not always. Especially if your applications are on live servers (users are working on them).
I have had bad experiences in the past with such upgrades: there were several places in the application where some parts of the code were deprecated and the application stopped working (I am speaking of 1.2 to 1.3 migrations). So my philosophy is: if you start with one version of the framework, keep it the same, unless there is something critical which an upper version will improve or fix.
I am not saying it's bad to upgrade, but be careful.
I'd always advise to keep the core up-to-date if possible, but each update needs to be well tested before being deployed. Even 1.3.x point updates can cause things to break in edge cases. As such, I'll keep an app updated if it's still under active development, but not necessarily if it's frozen in production.
Tying several different apps to the same core means you need to test all apps together when upgrading just one core. Work quickly multiplies this way. This is especially annoying if you depend on a bug fix in a newer core release, but this new release introduces some obscure problem in another app.
In the end each app is specifically written for a specific version of the Cake core. In theory the API should not change between point releases and things should just keep humming along when upgrading the core, but in practice that's not always how it works. As such, keep each app bundled with a core that it's tested and proven to work with. The extra hard disk space this requires is really no problem these days.

How do you keep two related, but separate, systems in sync with each other?

My current development project has two aspects to it. First, there is a public website where external users can submit and update information for various purposes. This information is then saved to a local SQL Server at the colo facility.
The second aspect is an internal application which employees use to manage those same records (conceptually) and provide status updates, approvals, etc. This application is hosted within the corporate firewall with its own local SQL Server database.
The two networks are connected by a hardware VPN solution, which is decent, but obviously not the speediest thing in the world.
The two databases are similar, and share many of the same tables, but they are not 100% the same. Many of the tables on both sides are very specific to either the internal or external application.
So the question is: when a user updates their information or submits a record on the public website, how do you transfer that data to the internal application's database so it can be managed by the internal staff? And vice versa... how do you push updates made by the staff back out to the website?
It is worth mentioning that the more "real time" these updates occur, the better. Not that it has to be instant, just reasonably quick.
So far, I have thought about using the following types of approaches:
Bi-directional replication
Web service interfaces on both sides with code to sync the changes as they are made (in real time).
Web service interfaces on both sides with code to asynchronously sync the changes (using a queueing mechanism).
Any advice? Has anyone run into this problem before? Did you come up with a solution that worked well for you?
This is a pretty common integration scenario, I believe. Personally, I think an asynchronous messaging solution using a queue is ideal.
You should be able to achieve near real time synchronization without the overhead or complexity of something like replication.
Synchronous web services are not ideal because your code will have to be very sophisticated to handle failure scenarios. What happens when one system is restarted while the other continues to publish changes? Does the sending system get timeouts? What does it do with those? Unless you are prepared to lose data, you'll want some sort of transactional queue (like MSMQ) to receive the change notices and take care of making sure they get to the other system. If either system is down, the changes (passed as messages) will just accumulate and as soon as a connection can be established the re-starting server will process all the queued messages and catch up, making system integrity much, much easier to achieve.
There are some open source tools that can really make this easy for you if you are using .NET (especially if you want to use MSMQ).
nServiceBus by Udi Dahan
Mass Transit by Dru Sellers and Chris Patterson
There are commercial products also, and if you are considering a commercial option, see here for a list of options on .NET. Of course, WCF can do async messaging using MSMQ bindings, but a tool like nServiceBus or MassTransit will give you a very simple Send/Receive or Pub/Sub API that will make your requirement a very straightforward job.
If you're using Java, there are any number of open source service bus implementations that will make this kind of bi-directional, asynchronous messaging a snap, like Mule or maybe just ActiveMQ.
You may also want to consider reading Udi Dahan's blog and listening to some of his podcasts. Here are some more good resources to get you started.
I'm mid-way through a similar project except I have multiple sites that need to keep in sync over slow connections (dial-up in some cases).
Firstly you need to track changes. If you can use SQL 2008 (even the Express version is enough if the 2GB limit isn't a problem), this will ease the pain greatly: just turn on Change Tracking on the database and each table. We're using SQL Server 2008 at the head office with the extended schema and SQL Express 2008 at each site with a sub-set of data and a limited schema.
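Turning on Change Tracking is a couple of T-SQL statements. A sketch, run here from Node via the mssql package (the database name, table name, and connection string are assumptions):

```typescript
import sql from "mssql";

// Enable SQL Server 2008+ Change Tracking on a database and one table.
// "HeadOffice" and "dbo.Orders" are illustrative names.
async function enableChangeTracking(): Promise<void> {
  const pool = await sql.connect(process.env.DB_CONNECTION_STRING ?? "");
  await pool.request().query(`
    ALTER DATABASE HeadOffice
      SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
  `);
  await pool.request().query(`
    ALTER TABLE dbo.Orders
      ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON);
  `);
  await pool.close();
}
```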
Secondly you need to sync your changes; Sync Services does the trick nicely and supports using a WCF gateway into the main database. In this example you will need to use the Sync using SQL Express Client sample as a starting point; note that it's based on SQL 2005, so you'll need to update it to take advantage of the Change Tracking features in 2008. By default Sync Services uses SQL CE on the clients, which I'm sure isn't enough in your case. You'll need a service that runs on your web server and periodically (as often as every 10 seconds if you want) runs the Synchronize() method. This will tell your main database about changes made locally and then ask the server for all changes made there. You can set up the get and apply SQL code to call stored procedures, and you can add event handlers to handle conflicts (e.g. Client Update vs. Server Update) and resolve them accordingly at each end.
We have a shop as a client, with three stores connected to the same VPN.
Two of the shops have a computer running as a "server" for that shop, and the third one has the "master database".
To synchronize everything to the master we don't have the best solution, but it works: there is a dedicated PC running an application that checks the timestamp of every record in every table of the two stores, and if it is different from the last time you synchronized, it copies the results.
Note that this works both ways. I.e. if you update a product in the master database, this change will propagate to the other two shops. If you have a new order in one of the shops, it will be transmitted to the "master".
With some optimizations you can have all the shops synchronized in around 20 minutes.
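A sketch of that timestamp-comparison loop (the store interfaces, table names, and polling interval are illustrative assumptions, not the client's actual application):

```typescript
// Poll each table and copy rows whose timestamp is newer than the last
// successful sync; run it in both directions so changes propagate both ways.
interface Row { id: string; updatedAt: number; data: unknown }

interface Store {
  changedSince(table: string, since: number): Promise<Row[]>;
  upsert(table: string, rows: Row[]): Promise<void>;
}

const TABLES = ["products", "orders"];

async function syncOnce(from: Store, to: Store, since: number): Promise<void> {
  for (const table of TABLES) {
    const rows = await from.changedSince(table, since);
    await to.upsert(table, rows);
  }
}

async function syncLoop(shop: Store, master: Store): Promise<void> {
  let lastSync = 0;
  for (;;) {
    const started = Date.now();
    await syncOnce(shop, master, lastSync); // shop -> master (new orders)
    await syncOnce(master, shop, lastSync); // master -> shop (product updates)
    lastSync = started;
    await new Promise((resolve) => setTimeout(resolve, 60_000));
  }
}
```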
Recently I have had a lot of success with SQL Server Service Broker, which offers reliable, persisted asynchronous messaging out of the box with very little implementation pain.
It is quick to set up, and as you learn more you can use some of the more advanced features.
Unknown to most, it is also part of the desktop editions, so it can be used as a workstation messaging system.
If you have existing T-SQL skills, they can be leveraged, as all the code to read and write messages is done in SQL.
It is blindingly fast.
It is a vastly under-hyped part of SQL Server and well worth a look.
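For a feel of what the SQL looks like, here is a hedged sketch of a minimal Service Broker setup plus a send and a receive, driven from Node via the mssql package (the object names are assumptions, and the broker must already be enabled on the database):

```typescript
import sql from "mssql";

// Standard Service Broker pattern: message type, contract, queue, service;
// then SEND on a conversation and RECEIVE from the queue.
async function demo(): Promise<void> {
  const pool = await sql.connect(process.env.DB_CONNECTION_STRING ?? "");

  await pool.request().batch(`
    CREATE MESSAGE TYPE SyncMessage VALIDATION = WELL_FORMED_XML;
    CREATE CONTRACT SyncContract (SyncMessage SENT BY INITIATOR);
    CREATE QUEUE SyncQueue;
    CREATE SERVICE SyncService ON QUEUE SyncQueue (SyncContract);
  `);

  // Send a message on a new conversation (self-addressed for the demo).
  await pool.request().batch(`
    DECLARE @h UNIQUEIDENTIFIER;
    BEGIN DIALOG CONVERSATION @h
      FROM SERVICE SyncService TO SERVICE 'SyncService'
      ON CONTRACT SyncContract WITH ENCRYPTION = OFF;
    SEND ON CONVERSATION @h MESSAGE TYPE SyncMessage (N'<change id="42"/>');
  `);

  // Receive: waits up to 5 seconds for a message to arrive on the queue.
  const result = await pool.request().query(`
    WAITFOR (
      RECEIVE TOP (1) conversation_handle, message_type_name,
        CAST(message_body AS NVARCHAR(MAX)) AS body
      FROM SyncQueue
    ), TIMEOUT 5000;
  `);
  console.log(result.recordset);

  await pool.close();
}
```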
I'd say just have a job that copies the data in the public database's input table into a private database pending table. Then, once you update the data on the private side, have it replicated to the public side. If none of the replicated data on the public side gets updated, it should be a fairly easy transactional replication solution.
