Migrating from CakePHP 1.3 to 2.0 and beyond - migrate existing projects, or only use it for new ones?

I'm nearing completion of my first CakePHP-driven website and just saw that they're already working on CakePHP 2.0 (not a stable release yet).
My questions:
Is it incredibly time-consuming to move to a new version of CakePHP (when it becomes the "stable" release, that is)? I know they have migration guides, but I've never used a framework before, so I've never had to migrate anything.
Do you migrate your code for existing projects, or leave it as is and use the new stable version for future projects only?
Where can I find what version of CakePHP I currently have installed? I've looked at the LICENCE and VERSION files, but cannot find the installed/current version listed in them.
These seem like simple questions, but I greatly appreciate any thoughts/advice - searching this on Google just brings up how-to-migrate pages, not pros/cons...etc.

I've migrated a few sites from CakePHP 1.2 to 1.3. In my experience, it takes 2-3 hours on sites that have 5-10 controllers and no custom plugins, etc. I find I typically only have to change the syntax on a handful of function calls, and when I figure out which ones, it is just a matter of doing a find / replace across the site. Of course it could be more of an issue going from 1.3 to 2.0, but I don't get the sense that it will be an especially drastic API change.
UPDATE: I'm now in the process of migrating to CakePHP 2.0 beta, and thought I should update this, as I'm finding the updates are more extensive and far-reaching than I had assumed when I wrote this. Migration guide here: https://github.com/cakephp/docs/blob/master/en/appendices/2-0-migration-guide.rst
ANOTHER UPDATE: Since people seem to be finding this useful, I just thought I'd point out that Cake now helpfully provides an upgrade shell that does some of the work for you. Note that although the documentation says it will do "most" of the work, I have found there are still quite a few function calls, etc. that will need to be updated manually (see migration guide).
http://book2.cakephp.org/en/console-and-shells/upgrade-shell.html
As dhofstet said, it will all depend on the size and complexity of your site.
Whether you upgrade at all is usually a judgment call, but sometimes you have to (e.g. Cake 1.2 has some code that will break if your host upgrades to PHP 5.3). You certainly wouldn't have the kind of security issues that an old WordPress, Drupal, etc. install would have. I have seen some noticeable speed increases with Cake upgrades, so depending on the situation it could be worth the trouble just for that (Cake 2.0 finally drops PHP 4 support). Look at the release notes and see if there are things that appeal to you in the new version.
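To make the PHP 5.3 point concrete: 5.3 deprecated several PHP 4-era functions, so old framework or application code that still calls them starts emitting deprecation notices (or failing under strict error settings). Whether a given Cake 1.2 install hits these exact calls depends on the release, so treat this as an illustrative sketch only:

    <?php
    // Illustrative only: the kind of PHP 4-era call that PHP 5.3 flags.
    // Check your own codebase / error log to see what actually applies.
    $parts = split(',', 'a,b,c');                 // ereg-based split(): E_DEPRECATED under PHP 5.3

    // Drop-in replacements that keep working:
    $parts = explode(',', 'a,b,c');               // for plain string delimiters
    $parts = preg_split('/\s*,\s*/', 'a, b, c');  // for regex-based splitting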
To see your version, in the cake/VERSION.txt file, look at the very last line. It's easy to miss, but it should just be a number, e.g. 1.3.8.
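If you'd rather check from code than hunt through files, the Configure class also exposes the framework version; a minimal sketch (Configure::version() exists in 1.2/1.3 and later, but verify against your install):

    <?php
    // From anywhere inside a Cake request (controller action, view, etc.):
    // Configure::version() returns the framework version string, e.g. "1.3.8".
    echo Configure::version();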

This question is difficult to answer, as it depends on the size and complexity of your project(s). The "big" releases (1.1 -> 1.2, 1.2 -> 1.3, 1.3 -> 2.0) usually break stuff, so you have to budget some migration work. Migration between "smaller" releases (for example from 1.3.9 to 1.3.10), on the other hand, is usually easy; often it just means replacing the cake folder. In both cases it is useful to have tests.
I migrate the projects which are actively maintained.
You can find the CakePHP version in cake/config/config.php

I'm migrating an app from 1.3 to 2.0 RC1 right now and haven't run into any big trouble.
1. I had to change folder and file names, e.g. app_controller.php » Controller/AppController.php (see the sketch after this list).
2. Follow the migration guide (temporary link): http://book2.cakephp.org/en/appendices/2-0-migration-guide.html
3. Plugins/components from various sources won't work (at least because of point 1).
4. To update the code (which in my case wasn't needed, as the app worked fine), I shell-baked a dummy table and looked at the differences in the generated code. It's a good starting point.
5. Authentication/authorization configuration changed somewhat, but requires only a few changes.
6. Trees still work.
7. ACL doesn't, but I'm fairly sure that's my fault.
That's all for now, good luck!
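To make point 1 concrete, here is roughly what the rename looks like for the application controller; the paths follow the 2.0 migration guide, but treat the file contents as an illustrative sketch for your own app:

    <?php
    // CakePHP 1.3 location: app/app_controller.php
    // CakePHP 2.0 location: app/Controller/AppController.php
    //
    // App::uses() is 2.0's lazy class loader; it replaces most App::import() calls.
    App::uses('Controller', 'Controller');

    class AppController extends Controller {
        // Shared components, helpers and callbacks for all controllers go here, as before.
    }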

Related

AngularJS 1.4 full scale upgrade to Angular 8. Should I migrate to 1.5 then upgrade or just rewrite? [duplicate]

I have done a ton of research and haven't found anything that has helped me decide what will be the best route (vertical slicing, horizontal slicing, or just a complete rewrite). I am working on a very LARGE program that is very ugly with no comments, and I need to migrate it over to Angular 8 if possible, or at least up to Angular 7. A lot of people seem to recommend https://angular.io/guide/upgrade; however, it doesn't help much with migrating to 1.5 first. Does anybody have experience with a large-scale migration? Currently, the program is not being used, so downtime is no issue.
It doesn't seem like it at first, but a re-write is usually a more cost-effective solution than an upgrade. An upgrade seems like the fastest route back to deployment, but in my experience, if you did the two side by side you might find the times similar, except that the migration deployment will have to be all or nothing, whereas a re-write means you can deploy with reduced functionality and build the feature set back up.
More importantly, the ongoing maintenance of an upgraded site becomes exponentially harder and more time-consuming. You're really applying band-aids over the top of previous patches and fixes.
There are new concepts, better native support for directives and controls that we used to rely on 3rd parties for or roll our own, and it's an entirely new language to understand. Take this opportunity to wipe the slate clean of your solution's technical debt.
Rewrite - Hybrid
Do you need to deploy everything in one go? Are you interested in re-branding?
One thing Microsoft has done well in the past is the hybrid roll-out of their Preview portals.
The best case study IMO is the Azure Portal.
A few years ago we had a pretty fully featured portal interface for managing Azure assets. This would later become known as the Classic Portal when they started work on an entirely new user experience.
At first release the menu system was largely complete; we could navigate to most assets in the new portal, but when you came to features that had not yet been redesigned, the link took you back to the Classic Portal.
So you could do the same: have the two user interfaces deployed to different URLs, and start by making sure the authentication and navigation are largely complete but make all links take the user back to the original interface. Then, feature by feature, implement the new interface; and because you can't control everything, keep a button or link on each page that takes the user back to the original implementation until your regression testing confirms that you have reached feature parity.
That is another key take-away from the MS hybrid approach: significant change like this WILL annoy your users. So while you are in transition, allow users to choose when they themselves migrate over. Initially MS achieved this at login: the user could log in at either of the main URLs, and based on your profile you would be redirected to the portal of your choice.
The last step is to restrict access to features in the old interface, by making the navigation and links in the old portal navigate directly into the new interface.
Or, less intrusively, add an 'end of life' banner to each page in the old site for which you have completed the re-write.
Do not confuse this with the Preview Mode in Office 365; the Azure Preview Portal was a ground-up re-write and is still in progress. There are many licensing operations for which I still use the Classic Portal, as I still manage some Classic-only Azure assets that have not yet been re-deployed.
I would consider the following issues when choosing between a migration/upgrade and a straight-up re-write:
Migrate to AngularJS 1.5 first
This operation is only marginally easier than the upgrade to Angular 2+. All of the arguments below apply equally to this process and to the next upgrade.
One of the reasons to go to 1.5 first is to escape the legacy dependencies that do not have a simple direct upgrade to 2+.
During the upgrade to 1.5 you should consider implementing component-based architecture (if the current code does not already do so).
A key element of components is less configuration and simplified design, so read this as less to upgrade and less that can go wrong.
Components are of course more closely aligned with current Angular implementations; if you do not yet use AngularJS components, that might be a good interim step to understand before learning Angular 2+.
No Comments
This is a bigger red flag than you think. If the code base is not documented, then any sort of maintenance becomes incrementally harder, as each time the code must be re-read and re-interpreted before you can effect change.
So if a migration is on the cards, we're talking about every line of code at some point needing to be re-read and understood to make sure that it works correctly in the new framework.
AngularJS 1+ => Angular2+
While some of the core frameworks and 3rd-party libraries can be migrated, most controller JavaScript files cannot simply be migrated to TypeScript without a fair amount of effort. It's pretty common in JavaScript to cut a lot of corners in terms of type definitions and the locations of definitions, which means that after migration you will spend a lot of time going back through most JavaScript files one method at a time.
Very large
This is a strong candidate for automation or migration, but ultimately it means that the total surface area to test, debug, and re-design is also very large. If the initial migration does not compile, it could be a long road of tweaking before you can get the user interface up so you can start interface testing.

Is the Meanstack suitable for production?

I have been looking at the various MEAN stack frameworks out on the net, and whilst impressed with what they achieve, I have one serious concern: the number of files used in a typical stack. meanstack.js uses over 15,000 files, whilst the bmean example has a modest 1,900 in comparison.
The question I am asking myself is whether I would be happy to put my trust in such a system from a production viewpoint. What happens when something goes wrong, and how easy is it going to be to find the answer? You can almost bet that when your most important customer logs on, it is going to go haywire. Also, what happens when Angular version 2 comes along? It could require a complete rewrite, but by then the stack you're using has been customised and is difficult to change.
Am I getting over-concerned about the technology? My intended approach is to strip the client-side code out of the bmean example and rewrite it with my own - at least that way I know (and control) what goes on in the client. Do you think this is the correct way to proceed?
With most systems there is a bit of preparation required before going to production. The same is true with mean.io (using multiple CPUs, improved aggregation, caching, etc.).
The large number of files is essentially a product of the way npm handles dependencies. Each module is able to define independent versions of the same dependencies, thus creating a bit of bloat but at the same time allowing a lot of flexibility in Node.js code.
We currently have a number of mean.io projects in production phase and have been very happy with performance and the overall experience.
New releases of the project are scheduled every couple of months, upgrading should not be too much of a problem if you use the package system correctly.
Issues with the project are handled and managed through GitHub issues; additional support can be found on our IRC channel (freenode #mean_io) as well as on Facebook.
For commercial support, have a look at the support page.

Cake PHP v0.2.6 from 2006 - site questions

Today I was asked to work on a site using Cake. This site appears to be using Cake version 0.2.6 from 2006.
This site was built by a college student. It's convoluted to me.
I would like your pro tips on whether we should attempt to upgrade, or start fresh.
My real issue is that I have NEVER worked with Cake. EVER!
I am afraid that if I start this work I would not be able to finish it.
It also uses an extreme amount of SOAP.
Please advise.
If the Cake version is as old as that, I recommend starting over. The newer versions have made a lot of improvements, and upgrading from that to, say, version 2.3 just isn't worth the trouble, really.
Besides, if you've never worked with Cake, I think you'll get confused trying to take v0.2.6 to 2.3 because of all the changes. It'll be a testament to your courage not to throw in the towel.
If you decide to start over, you should consider whether Cake is the right thing for you (you'll have to revamp everything anyway, so you might as well consider other framework options). The great thing I found when I started working with Cake was the documentation, so you'll be fine if you stay on that path.
And SOAP... should the site consume SOAP web services, or make them available? I've worked with Cake and SOAP (see the sketch below), so it's definitely doable, but you also have to take that into consideration and see whether another framework would make that work easier.
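For the "consume a SOAP web service" case, plain PHP's built-in SoapClient works fine from inside a Cake model, component, or datasource; here is a minimal sketch (the WSDL URL and the GetQuote operation are made-up placeholders):

    <?php
    // Minimal SOAP consumption sketch using PHP's built-in SoapClient.
    // The WSDL URL and GetQuote() operation below are placeholders.
    try {
        $client = new SoapClient('http://example.com/service?wsdl', array(
            'trace'      => true,   // keep last request/response for debugging
            'exceptions' => true,   // turn SOAP faults into SoapFault exceptions
        ));
        $result = $client->GetQuote(array('symbol' => 'ABC'));
        var_dump($result);
    } catch (SoapFault $e) {
        // Handle/log transport errors and server faults here.
        echo 'SOAP fault: ' . $e->getMessage();
    }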
So, conclusion: start fresh, almost always good :)
And look around a bit to see whether the new version of Cake is the right alternative for you based on your needs (no one can really decide that except you).

migrator.net vs fluentmigrator vs migsharp

I am currently investigating possible options of a migration framework/tool. I like the idea of ruby migrations on which the above frameworks are based.
So I am asking for your experience, opinions and maybe a comparison between them. Are you using them in production?
Thanks for the responses. The goal of this question was to get a feeling for which tools are used most in the developer community, but it seems that migrations are not a hot topic here.
Anyway, I have decided to go with MigSharp, as the codebase seems to be pretty clean, it is quite easy to handle, and it has built-in support for MS SQL CE. The runner-up would have been FluentMigrator, for which I was not able to produce a working example for Compact Edition.
Cheers
I use FluentMigrator in production, and am a longtime contributor to FM. I think your question is too general; be more specific. Also, FM has a Google group which is fairly active if you want FM information.
FM is derived from migrator.net, as I recall. It uses a fluent syntax and supports multiple databases. We have taken some inspiration from Rails migrations, but it's definitely not a port. Worth checking out.
One thing I've learned is not to put your migrations in the same assembly as your app code. Separate them into a migration assembly, and use that for migrating your databases.
Also, you should always work on multiple environments to avoid problems with migrations run straight against production. I always have at least a development and production environment, and most of the time there is a testing environment as well.
I use mig#.
It works well, but you will need to have some guidelines for usage - as migrations can get complicated.
We use a sequence number on the end of our migrations rather than a date-time stamp. This is because we don't know when the timestamp was set (when they began the source-code change-set, just before committing, or some time in between); different developers could use different approaches.
Names such as Migration_0000034.cs give you plenty of space.
At this point, I would stick with migrator.net. I like the promise of FluentMigrator, but it seems to not have any better active development than migrator.net (see the issues and pull requests that have languished on their github site).
There is also no easy way to do an ExecuteScalar(). I'd add it, but I don't want to create my own fork, and I see no reason that a pull request would actually land in the master. (Execute.WithConnection is an Action so it will fire on demand rather than when I need it to fire)
So for me, I'm heading back to migrator.net.

How do you track versions in Bugzilla?

We are in the process of migrating our bug tracking to Bugzilla from a really old version of track and I am running out of Advil.
We have a legacy application that has been around for a long time. Mix in the fact that our version management has been through a few iterations, and it has generated a lot of different versions in the wild. To make matters worse, because of contractual limitations it is not always possible to upgrade clients to the latest and greatest, so we must branch, fix, test, and release on the version they currently have, yielding yet another version number.
The end result is that the version combo box is ludicrously long. Lastly, for various reasons, we want to track three different pieces of version information:
the version in which the bug was found (version), the version in which we plan to fix the bug (milestone), and the version in which it has ultimately been fixed (open to suggestions). Here is my problem, in fact: the last one can actually be multiple numbers where we did a retroactive fix for some of these customers (this happens VERY often).
This is where I need your collective wisdom :
How do you keep track of these versions (found, planned and multiple fixed) in Bugzilla?
What are the best practices around linking versions and bug tracking ?
Answers
It seems that cloning the bug for each version is a good way to track this: the target version is always tracked in the milestone, as is the fixed version, and the buggy version is always the native version.
Also, having each clone block the original bug makes it a good way to trace the history back to the original submission.
Although I have accepted the answer I still welcome your input.
Often, if we need to fix something in multiple released versions (generally branches in the source code repository), the bug will be cloned for each branch so that all the commits and release status can be tracked separately. I think the only time we don't do this is when the change is not directly related to the codebase itself and cannot be fixed simply by updating our libraries.
As for version tracking in general, this has struck me as a reasonable way to do things, given that we generally only need to support 2-3 major versions (plus the trunk) at any time. If you have multiple disjoint versions that need supporting, e.g. customer-specific deployments, then things are going to be harder to track. (Arguably this is going to cause headaches in general and it would be better to unify things to a more central version theme).
I use Bugzilla to keep track not only of bugs, but also of new features, enhancements, and vague ideas. For each planned and released version, I have a Tracking Bug (something that I saw on the original Mozilla bugzilla, and found to be useful).
So if you have a bug report, you enter the bug with the version number that it was reported against. Create additional bugs (one for each version you plan to fix it in) which all depend on (block) the original bug and block the version-specific Tracking Bugs.
If all bugs blocking the original bug are closed/verified (whatever your QA implements), you can also close the original bug.
I was looking for a similar feature in TFS, and while doing some investigation, I found that there is an enhancement request to manage "sightings" in Bugzilla:
"Bug 55970 - (bz-branch) Bugzilla needs to deal better with branches (implement sightings)":
https://bugzilla.mozilla.org/show_bug.cgi?id=55970
There is also a proposed design:
https://bug55970.bugzilla.mozilla.org/attachment.cgi?id=546912
For information, we are going to implement something similar in TFS 2010, with a "Bug Parent" or "Bug Master" to hold the information about the bug itself (repro steps, severity, technical info, impacted components...), which can have children of type "Bug Child" or "Sighting" that hold the information specific to a given branch (target milestone, priority, specific information for that branch...).
We are using JIRA and still have this problem. I think it is a question of requirements and how versions are used, rather than of any one tool.
Who uses versions and how do they use them?
How are versions related to milestones in a project plan?
We use a four-part version number (major.minor.patch.buildNo). buildNo is the SVN head revision number at the time of the build. Each version is stored in JIRA, and issues have "affects version" and "fixed-in version" fields that are multi-selects.
After a short while we have many versions. Jira does allow us to control the list in two ways
1. Archive versions (greyed out from pick list)
2. Merge versions (rolls several versions together into a new version - no undo)
We have used Archive, but have avoided Merge due to the lack of undo. So we still have a list of many, many versions.
I'm sure you could probably accomplish a merge action in Bugzilla with some scripting and time; the question is, when is it OK to merge several older versions together?
If I have released, do I need to know that I have 17 builds between start and release? Do I need to keep the knowledge of a bug being found in build 1, fixed in 2, found again in 7, fixed again 9? Or is Found in release 1.0.0 fixed in release 1.0.1 good enough?
I'm going to ask a larger question on this topic later today, but I know the basic answer already:
- It depends on how your team wants to track things.
Implementation is fun, but it all comes down to requirements, goals, and working back from user experience to solution. Which is rough when people don't necessarily know how they want to use something that doesn't quite exist in the form they'd like to use.
I have created a custom field (string) to list the version(s) (as V.M.P.B) in which a bug has been fixed.
I have also created another custom field (string) to list the version(s) affected by a bug.
Doing that, you are able to perform a quick search on a specific version.
