I wondered if this would be possible. I'd like to centralize my CakePHP core files in one location and have my several applications use the same core. One reason is that when updating, I would only need to update one core. At the moment I upload the whole CakePHP package with each application.
But my applications are not all on the same server.
Unfortunately I'm not sure web servers can access files across physical servers; and even if they could via network shares, it would be a significant performance hit.
Rather, try to automate the core deployment using SVN or rsync tools.
So, while it may be technically possible, I wouldn't advise it.
If your apps are on different servers than your Cake core, you'll need at least all servers to be on the same network so you can mount one server's disk from the others. Otherwise, you'll need to upload the core with each app.
Assuming you can mount the disks, you can use the same Cake core by just replacing the paths in app/webroot/index.php.
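For illustration, in CakePHP 1.x that means changing the CAKE_CORE_INCLUDE_PATH constant near the top of app/webroot/index.php so it points at the shared core. A minimal sketch, assuming the shared core is mounted at /mnt/shared/cakephp (an example path, not anything Cake prescribes):

    <?php
    // app/webroot/index.php (excerpt) -- CakePHP 1.x style sketch
    if (!defined('DS')) {
        define('DS', DIRECTORY_SEPARATOR);
    }
    if (!defined('ROOT')) {
        define('ROOT', dirname(dirname(dirname(__FILE__))));
    }
    if (!defined('APP_DIR')) {
        define('APP_DIR', basename(dirname(dirname(__FILE__))));
    }
    if (!defined('CAKE_CORE_INCLUDE_PATH')) {
        // Point this at the shared (mounted) core instead of a bundled copy.
        // '/mnt/shared/cakephp' is only an assumed example location.
        define('CAKE_CORE_INCLUDE_PATH', DS . 'mnt' . DS . 'shared' . DS . 'cakephp');
    }

Each app keeps its own app/ directory; only the core lookup path changes.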
I wouldn't suggest doing either, but I would also add that while updating the core is good, it's not always good, especially if your applications are on live servers (users are working on them).
I have had bad experiences with such upgrades in the past: in several places in the application some parts of the code had been deprecated and the application stopped working (I am speaking of 1.2 to 1.3 migrations). So my philosophy is: if you start with one version of the framework, keep it the same, unless there is something critical that a newer version will improve or fix.
I am not saying it's bad to upgrade, but be careful.
I'd always advise to keep the core up-to-date if possible, but each update needs to be well tested before being deployed. Even 1.3.x point updates can cause things to break in edge cases. As such, I'll keep an app updated if it's still under active development, but not necessarily if it's frozen in production.
Tying several different apps to the same core means you need to test all apps together when upgrading just one core. Work quickly multiplies this way. This is especially annoying if you depend on a bug fix in a newer core release, but this new release introduces some obscure problem in another app.
In the end each app is specifically written for a specific version of the Cake core. In theory the API should not change between point releases and things should just keep humming along when upgrading the core, but in practice that's not always how it works. As such, keep each app bundled with a core that it's tested and proven to work with. The extra hard disk space this requires is really no problem these days.
We are working on an old project that consists of multiple applications which all use the same database and strongly depend on each other. Because of the size of the project, we can't refactor the code so that they all use the API as the single source of database access. The platform contains the following applications:
Website
Admin / CMS
API
Cronjobs
Right now we want to start implementing a CI/CD pipeline using GitLab. We are currently experiencing problems because we can't update the database for the deployment of one application without breaking all the other applications (unless we deploy all applications).
I was thinking about a solution where one pipeline triggers all the other pipelines. Every pipeline would execute all newly added database migrations and test whether its application still works as it should. If all pipelines succeed, the deployment of all applications would be started.
I doubt this is a good solution, because it would only increase the already high coupling between our applications. Does anybody know a better way to implement CI/CD for our platform?
You have to stop thinking about these as separate applications. You have a monolith with multiple modules, but until they can be decoupled, they are all one application and will have to be deployed as such.
Fighting this by pretending they aren't is likely a waste of time; your efforts would be better spent actually decoupling these systems.
There are likely a lot of solutions, but one that I've used in the past is to create a separate repository for the CI/CD of the entire system.
Each individual repo builds that component, and then you can create tags as they are released or ready for CI at a system level.
The separate CI/CD repo pulls in the appropriate tag for each item and runs CI/CD against all of them as one unit. This lets you choose which tag you want for each repo, which should prevent this pipeline from failing when changes are made to the individual components.
Ask yourself why these "distinct applications" are using "one and the same database". Is that because every single one of those "distinct applications" deals with "one and the same business semantics"? If so, as Rob already stated, then you simply have one single application (and on top of that, there will be no decoupling, precisely because your business semantics are singular/atomic/...).
Or are there discernible portions in the DB structure such that a highly accurate mapping could be identified, saying "this component uses that portion" and so on? In that case, what is it that causes you to say things like "can't update the database for the deployment of ..."? (BTW, "update the database" is not the same thing as "restructure the database". Please, please, please be precise.) The answer to that will identify what you've got to tackle.
I've built an iOS and Android app with a Node.js backend server that is now in production, but as I'm new to this I have no idea how to handle updates of the server and the apps.
First of all, how am I supposed to update the Node.js server without downtime?
Second, let's suppose I have a chat in my app and for some reason I have to change it, but the change is not compatible with previous versions. How am I supposed to act?
I know the question is not entirely clear, but I have no idea what to search on Google to point me in the right direction. Anything would be helpful.
Updating server without downtime
The answer really depends upon how your infrastructure is configured.
One way would be to have a second server, configured with the new software, ready to go, and then switch over to the new server. If you are going to be doing this a lot, then having a mechanism or tooling to do it will certainly simplify things. If things go wildly wrong, you just switch back.
We use AWS. As part of launching an update, we provision a number of instances to match the current number (so we don't suddenly need several hundred more instances launching). When all the instances are ready to go, our load balancer switches from the current configuration to the new configuration. No one sees anything other than a slight delay as the caches start getting populated.
Handling incompatible data
This is where versioning comes in.
Versioning the API - The API to our application has several versions. Each of them is just a proxy to the latest form. So, when we upgrade the API to a new version, we update the mappers for the supported versions so that the input/output for the client doesn't change, but internally, the main library of code is operating only on the latest code. The mappers massage data between the user and the main libraries.
Versioning the data being messaged - As this is an app, the data coming in should be versioned, so an app sending v1 data (or unversioned data, if you haven't got a version in there already) has to have it upgraded on the server to the v2 format. From then on, it is v2. On the way out, the v2 result needs to be mapped down to v1. It is important to understand that the mapping may not always be possible. If you've consolidated or split attributes from v1 to v2, you're going to have to work out how the data should look from the v1 and v2 perspectives.
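As a rough sketch of that idea (the payload fields and the upgradeToV2()/downgradeToV1() helpers below are invented purely for illustration; assume v1 sent a single 'name' field and v2 splits it), the server normalises whatever arrives up to the latest version before the core code sees it, and maps results back down on the way out:

    <?php
    // Hypothetical v1 <-> v2 mapping for a chat message payload.

    function upgradeToV2(array $payload): array
    {
        $version = $payload['version'] ?? 1;      // unversioned input is treated as v1
        if ($version >= 2) {
            return $payload;
        }
        $parts = explode(' ', $payload['name'], 2);
        return [
            'version'    => 2,
            'first_name' => $parts[0],
            'last_name'  => $parts[1] ?? '',
            'text'       => $payload['text'],
        ];
    }

    function downgradeToV1(array $payload): array
    {
        // Mapping down can lose information; here the split name is simply re-joined.
        return [
            'version' => 1,
            'name'    => trim($payload['first_name'] . ' ' . $payload['last_name']),
            'text'    => $payload['text'],
        ];
    }

    $incoming = ['name' => 'Ada Lovelace', 'text' => 'hello'];  // old, unversioned client
    $internal = upgradeToV2($incoming);       // the core libraries only ever see v2
    $response = downgradeToV1($internal);     // mapped back down for the v1 client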
Versioning the data being stored - Different techniques exist depending upon how the data is being stored. If you are using an RDBMS, then migrations and repeatables are very commonly used to upgrade data ready for a new app to operate on it. Things get interesting when you need the software to temporarily support both patterns. If you're not using an RDBMS, a technique I've seen is to upgrade the data on read. Say you have some sort of document store: when you read a document, check its version. If it's old, upgrade it and save it. Now you can treat it as the latest version. The big advantage here is that there is no long-running data migration taking place; over time, the data is upgraded. A downside is that every read needs to do a version check. So maybe mix and match: have the check/upgrade/save happen on every read, and create a data migration tool whose sole job is to trawl through the data. When all the data is migrated, drop the checks (as all the data is either new, and therefore matches the latest version, or has been migrated to the latest version) and the migrator.
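A bare-bones sketch of the upgrade-on-read idea (the $store object with get()/put() and the document fields are hypothetical):

    <?php
    // Upgrade-on-read for a document store: documents are migrated lazily
    // the first time they are read after a format change.

    const LATEST_VERSION = 2;

    function readDocument($store, string $id): array
    {
        $doc = $store->get($id);
        $version = $doc['version'] ?? 1;

        if ($version < LATEST_VERSION) {
            $doc = upgradeDocument($doc, $version);  // bring it up to date...
            $store->put($id, $doc);                  // ...and persist the upgraded form
        }
        return $doc;  // callers can always assume the latest version
    }

    function upgradeDocument(array $doc, int $fromVersion): array
    {
        // Apply each step in order so very old documents still come out right.
        if ($fromVersion < 2) {
            $doc['tags']    = [];   // assumed field added in v2
            $doc['version'] = 2;
        }
        return $doc;
    }

A background migrator can then simply call readDocument() for every id; once nothing old is left, the version check (and the migrator) can be dropped, as described above.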
I work in the PHP world and I use Phinx to handle DML (data) migrations and our own repeatables code to handle DDL (schema changes).
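For the DML side, a Phinx data migration might look roughly like this (the users table and its columns are invented for the example):

    <?php
    // Hypothetical Phinx data migration: backfill a new column from existing data.
    use Phinx\Migration\AbstractMigration;

    class BackfillDisplayName extends AbstractMigration
    {
        public function up()
        {
            // Pure DML: derive display_name for rows written before the change.
            $this->execute(
                "UPDATE users
                    SET display_name = CONCAT(first_name, ' ', last_name)
                  WHERE display_name IS NULL"
            );
        }

        public function down()
        {
            $this->execute("UPDATE users SET display_name = NULL");
        }
    }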
Updating your backend server is a pain indeed; you can't really do that without any downtime at all. What you can do, though, assuming your clients access your server via a domain rather than a plain IP address, is prepare another server with data that is as up to date as possible and do a DNS record update to redirect traffic to it. Keep in mind that DNS has a long propagation time, during which some clients reach the old server and some the new one (which means a big headache if data consistency is important to you).
Changing the API is another pain. Oftentimes you need to support older versions of your application in parallel with the newer ones. Most app stores will show you statistics on your app versions and when it's safe to drop support for an old version.
A common practice, though, is to have the API endpoints versioned so that version 1 of your app accesses URL/API/v1/... and version 2 accesses URL/API/v2/..., which enables you to send different replies based on the client version. You increase the version every time you make a breaking change to the protocol. This makes for a "future compatible" protocol.
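A minimal sketch of that kind of version-prefix routing, written in PHP here just to show the shape (the chat handlers are made up; a Node.js server would follow the same pattern with its own router):

    <?php
    // Dispatch on the /api/v1/... vs /api/v2/... prefix of the request path.
    function handleChatV1(): string { return json_encode(['name' => 'Ada Lovelace']); }
    function handleChatV2(): string { return json_encode(['first_name' => 'Ada', 'last_name' => 'Lovelace']); }

    $path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

    if (preg_match('#^/api/v(\d+)/chat$#', $path, $m)) {
        header('Content-Type: application/json');
        echo ((int)$m[1] >= 2) ? handleChatV2() : handleChatV1();
    } else {
        http_response_code(404);
    }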
In some cases you initially add a mechanism that lets the server send a message to an old version of the client saying that its version is obsolete and it needs to update...
Most big apps already have such a mechanism, while most small apps just take the risk of some downtime and drop support for a few non-updated clients...
If you go to my Heroku-hosted to-do list program, you can put test data in, but it's gone pretty soon. This is because, I learned, Heroku has an "ephemeral" filesystem and disposes of any data that users write to it via POST. I don't know how to set up a PostgreSQL database or any other kind of database (although maybe I soon will, as I'm working through Hartl's Rails tutorial). I'm just using a humble YAML file. It works fine in my local environment.
Any suggestions for beginners to work around this problem, short of just learning how to host a database? Is there another free service I might use that would work without further setup? Any advice greatly welcome.
I fully understand that I can't do what I'm trying to do with Heroku (see e.g. questions like this one). I just want to understand my options better.
UPDATE: Looks like this and this might have some ideas about using Dropbox to host (read/write) flat files.
The answer is no. But I'll take a minute to explain why.
I realize that you aren't yet familiar with building web applications, databases, and all that stuff. And that's OK! This is an excellent question.
What you need to know, however, is that doing what you're asking is a really bad idea when you're trying to build scalable websites. And Heroku is a platform company that SPECIFICALLY tries to help developers building scalable websites. That's really what the platform excels at.
While Heroku is really easy to learn and use, it isn't targeted at beginners. It's meant for experienced developers. This is really clear if you take a look at what Heroku's principles are, and what policies they enforce on their platform.
Heroku goes out of their way to make building scalable websites really easy, and makes it VERY difficult to do things that would make building scalable websites harder.
So, let's talk for a second about why Heroku has an ephemeral file system in the first place!
This design decision forces you (the developer of the application) to store files that your application needs in a safer, faster, dedicated file storage service (like Amazon S3). This practice results in a lot of scalability benefits:
If your webservers don't need to write to disk, they can be deployed many many times without worrying about storage constraints.
No disks need to be shared across webservers. Sharing disks typically causes IO contention and can adversely affect performance.
It makes it easy to scale your web application horizontally across commodity servers, since disk resources aren't required.
So, the reason why you cannot store flat files on Heroku is because doing this causes scalability and performance problems, and would make it nearly impossible for Heroku to help you scale your application easily (which is their main goal).
That is why it is recommended to use a file storage service to store files (like Amazon S3), or a database for storing data (like Postgres).
What I'd recommend doing (personally) is using Heroku Postgres. You mentioned you're using Rails, and Rails has excellent Postgres support built in. It has what's called an ORM that lets you talk to the database using some very simple Ruby objects, and removes almost all of the prerequisite database background needed to get things going. It's really fun and easy once you give it a try!
Finally: Heroku Postgres also has a great free plan, which means you can store the data for your todo app in it for no cost at all.
Hope this helps!
I have a few Heroku apps (all based in the EU data centre) that are using the same database and queue.
I'm sharing them by adding the add-on to one of the apps and then setting the same environment variable on the rest of the apps, and it all works fine.
Does it matter to which application I'm adding the add-ons?
Does this affect performance of anything else?
Does it matter to which application I'm adding the add-ons?
It should not, assuming the applications are all in the same region.
Does this affect performance of anything else?
It should not, assuming the applications are all in the same region.
That said, this is a fragile thing to do. For example, Heroku Postgres (and probably other third-party add-on providers) may change your DATABASE_URL in order to maintain high availability in the event of something unforeseen (sudden hardware failure, etc.).
In that situation, the application that has the add-on attached will be restarted and receive the current DATABASE_URL; your other applications will not, and will likely crash all over the place.
I was reading many articles about version control systems like SVN and Git and various branching models (feature-based, release-based and others), but none of them seemed to fit our project requirements.
We (the team) are going to develop a framework which will be used as the core for different applications. So there will be one framework and more than one application built on that framework. Each application will have the usual project cycle: builds, releases... The framework itself won't be released, but may have different tagged versions. During the development of an application, we want to commit some common features back to the framework (if we see that a feature is great and future applications should have it).
So each application is like a separate branch of the framework, but it will never be fully merged back (because it's a separate application), and there is a need to make some commits to the framework (trunk). Some online articles give such commits (without merging the whole branch back to trunk) as negative examples, so we are confused.
What version control system and branching model do you recommend for such development cycle?
So each application is like a separate branch of the framework, but it will never be fully merged back (because it's a separate application), and there is a need to make some commits to the framework (trunk). Some online articles give such commits (without merging the whole branch back to trunk) as negative examples, so we are confused.
This part scares me a bit. If you are going to have a framework, then you need to take care of it like any other lump of code, and you don't want multiple versions running around for any reason except maintenance of existing releases or work on future releases. So each of your "application" projects can have a branch where they modify the framework as required for the application, but I recommend the framework trunk be updated often so that it evolves in a way that best serves the needs of all of your applications. In general, when branching for code going forward, you want to sync up with the master and put code back into the master as quickly as possible to avoid lots of work handling merges and also give others the benefit of the work.
You should put your framework in a separate area (or repository if you are using a DVCS like git or hg) so that it's distinct and may have its own release cycle if necessary.
The DVCSs are all the rage these days, git and hg being the most popular, so you should look into them. They have different ways of handling branching. Their power lies in the fact that there is no centralized repository so it's more flexible and reliable for larger teams.