Our team recently switched from WSED 5.2 to RAD 7.5 for our code development (we're a little behind the curve in development software). This, together with a move to a WAS 7 server, is mandatory for us, so moving to a different IDE or server is not an option.
Since moving to this new IDE, I've been having problems making changes to the app and testing them on my local server. Refreshing the server doesn't help, nor does a Clean or a Publish to the server. Even clearing and rebuilding the projects doesn't do the trick. The only thing that lets me view and test code changes with 100% reliability is a complete system restart followed by a clean and rebuild of the project - which obviously is not what we were hoping for from the new developer tool.
It occurs to me that it may have something to do with my code view - we're using ClearCase for code sharing, and we chose to try a Dynamic View for my code because it seemed great, fast, and dynamic at the time. But now, looking at the problems I'm having just testing the code, I'm wondering if using a Dynamic View was a mistake.
Are there any reported problems with using a Dynamic View on a file-shared coding project with RAD 7.5 that we should be aware of? And are there any fixes? Please note that while it wouldn't be ideal, 'switch to a static view' would be an acceptable solution.
If your usage involves not just occasional read access but massive read/write operations, I would recommend using a snapshot view and seeing if the issue persists.
You can customize the load rules so that the snapshot view loads only what you need.
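For illustration, a snapshot view is created with cleartool, and its config spec decides what gets loaded locally (the view tag and VOB paths below are made up):

    cleartool mkview -snapshot -tag my_snapshot_view my_snapshot_view.vws

    # config spec (edit with "cleartool edcs") - load only what you work on
    element * CHECKEDOUT
    element * /main/LATEST
    load /vobs/myapp/src
    load /vobs/myapp/web

After that, "cleartool update" refreshes the loaded files on demand instead of resolving every file dynamically over the network, which is usually what you want for heavy local build/test cycles.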
I have done a ton of research and haven't found anything that has helped me decide what will be the best route (vertical slicing, horizontal slicing, or a complete rewrite). I am working on a very LARGE program that is very ugly, with no comments, and I need to migrate it to Angular 8 if possible, or at least up to Angular 7. A lot of people recommend https://angular.io/guide/upgrade; however, it doesn't help much with migrating to 1.5 first. Does anybody have experience with a large-scale migration? Currently the program is not being used, so downtime is no issue.
It doesn't seem like it at first, but a re-write is usually a more cost-effective solution than an upgrade. An upgrade seems like the fastest route to re-deployment, but in my experience, if you did the two side-by-side you might find the times similar - except that the migration deployment has to be all or nothing, whereas a re-write means you can deploy with reduced functionality and build the feature set back up.
More importantly, the ongoing maintenance of an upgraded site becomes exponentially harder and more time-consuming. You're really applying band-aids over the top of previous patches and fixes.
There are new concepts, better native support for directives and controls that we used to rely on 3rd parties for or roll our own, and it's an entirely new language to understand. Take this opportunity to wipe the slate clean of your solution's technical debt.
Rewrite - Hybrid
Do you need to deploy everything in one go? Are you interested in re-branding?
One thing Microsoft has done well in the past is the hybrid roll-out of their Preview portals.
The best case study IMO is the Azure Portal.
A few years ago we had a pretty fully featured portal interface for managing Azure assets. This would later become known as the Classic Portal when they started work on an entirely new user experience.
At first release the menu system was largely complete and we could navigate to most assets in the new portal, but for features that had not yet been redesigned, the links took you back to the Classic Portal.
So you could do the same: have the two user interfaces deployed to different URLs, and start by making sure authentication and navigation are largely complete, with all links taking the user back to the original interface. Then implement the new interface feature by feature. Because you can't control everything, keep a button or link on each page that takes the user back to the original implementation until your regression testing confirms you have reached feature parity.
That is another key take-away from the MS hybrid approach: significant change like this WILL annoy your users. So while you are in transition, allow users to choose when they themselves migrate over. Initially MS achieved this at login: the user could log in at either of the main URLs, and based on your profile you would be redirected to the portal of your choice.
The last step is to restrict access to features in the old interface, by making the navigation and links in the old portal navigate directly into the new interface.
- or, less intrusively, add an 'end of life' banner to each page in the old site where you have completed the re-write.
Do not confuse this with the Preview Mode in Office 365; the Azure Preview Portal was a ground-up re-write and is still in progress. There are many licensing operations for which I still use the Classic Portal, as I still manage some Classic-only Azure assets that have not yet been re-deployed.
I would consider the following issues when choosing between a migration/upgrade and a straight-up re-write:
Migrate to AngularJS 1.5 first
This step is only marginally easier than the upgrade to Angular 2+. All of the arguments below apply as much to this process as they do to the next upgrade.
One of the reasons to go to 1.5 first is to escape the legacy dependencies that do not have a simple, direct upgrade to 2+.
During the upgrade to 1.5 you should consider implementing component-based architecture (if the current code does not already do so).
A key element of components is less configuration and simplified design, so read this as less to upgrade and less that can go wrong.
Components are of course more closely aligned with current Angular implementations; if you do not yet use AngularJS components, they might be a good interim step to understand before learning Angular 2+.
No Comments
This is a bigger red flag than you might think. If the code base is not documented, then any sort of maintenance becomes incrementally harder, as each time the code must be re-read and re-interpreted before you can effect change.
So if a migration is on the cards, we're talking about every line of code at some point needing to be re-read and understood to make sure that it works correctly in the new framework.
AngularJS 1+ => Angular2+
While some of the core frameworks and 3rd-party libraries can be migrated, most controller JavaScript files cannot simply be converted to TypeScript without a fair amount of effort. It's pretty common in JavaScript to cut a lot of corners in terms of type definitions and the locations of definitions, which means that after migration you will spend a lot of time going back through most JavaScript files one method at a time.
Very large
This is a strong candidate for automation or migration, but ultimately it means that the total surface area to test, debug, and re-design is also very large. If the initial migration does not compile, it could be a long road of tweaking before you can get the user interface up so you can start interface testing.
I have been looking at the various MEAN-stack frameworks out on the net - and whilst impressed with what they achieve, I have one serious concern: the number of files used in a typical stack. meanstack.js uses over 15,000 files, whilst the bmean example has a modest 1,900 in comparison.
The question I am asking myself is: would I be happy to put my trust in such a system from a production viewpoint? When something goes wrong, how easy is it going to be to find the answer? You can almost bet that when your most important customer logs on, it is going to go haywire. Also, what happens when Angular version 2 comes along? It could require a complete rewrite, but by then the stack you're using has been customised and is difficult to change.
Am I getting over-concerned about the technology? My intended approach is to strip the client-side code out of the bmean example and rewrite it with my own - at least that way I know (and control) what goes on in the client. Do you think this is the correct way to proceed?
With most systems there is a bit of preparation required before going to production. The same is true with mean.io (using multiple CPUs, improved aggregation, caching, etc.).
The large number of files is essentially a product of the way npm handles dependencies. Each module can define independent versions of the same dependencies, creating a bit of bloat but at the same time allowing a lot of flexibility in Node.js code.
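For example (the module names here are purely illustrative), two top-level packages can each carry their own nested copy of the same dependency, which multiplies the file count quickly:

    node_modules/
      express/
        node_modules/
          lodash/          <- one version, private to express
      mongoose/
        node_modules/
          lodash/          <- another version, private to mongoose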
We currently have a number of mean.io projects in production phase and have been very happy with performance and the overall experience.
New releases of the project are scheduled every couple of months, upgrading should not be too much of a problem if you use the package system correctly.
Issues with the project are handled and managed through GitHub issues; additional support can be found on our IRC channel (freenode #mean_io) as well as on Facebook.
For commercial support, have a look at the support page.
I am currently investigating possible options for a migration framework/tool. I like the idea of Ruby migrations, on which the above frameworks are based.
So I am asking for your experience, opinions and maybe a comparison between them. Are you using them in production?
Thanks for the responses. The goal of this question was to get a feeling for which tool is used most in the developer community, but it seems that migrations are not a hot topic here.
Anyway, I have decided to go with MigSharp, as the codebase seems to be pretty clean, it is quite easy to handle, and it has built-in support for MS SQL CE. The runner-up would have been FluentMigrator, for which I was not able to produce a working example for Compact Edition.
Cheers
I use FluentMigrator in production and am a longtime contributor to FM. I think your question is too general; be more specific. Also, FM has a Google group which is fairly active if you want FM information.
FM is derived from migrator.net, as I recall. It uses a fluent syntax and supports multiple databases. We have taken some inspiration from Rails migrations, but it's definitely not a port. Worth checking out.
One thing I've learned is not to put your migrations in the same assembly as your app code. Separate them into a migration assembly, and use that for migrating your databases.
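As a rough sketch (the table, columns, and version number here are invented), a migration in that separate assembly looks something like this:

    using FluentMigrator;

    // Lives in its own Migrations assembly, not alongside the app code.
    [Migration(34)]
    public class AddUserTable : Migration
    {
        public override void Up()
        {
            Create.Table("Users")
                .WithColumn("Id").AsInt32().PrimaryKey().Identity()
                .WithColumn("Email").AsString(255).NotNullable();
        }

        public override void Down()
        {
            Delete.Table("Users");
        }
    }

You then point FM's runner at that assembly for each environment you want to migrate.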
Also, you should always work on multiple environments to avoid problems with migrations run straight against production. I always have at least a development and production environment, and most of the time there is a testing environment as well.
I use mig#.
It works well, but you will need to have some guidelines for usage - as migrations can get complicated.
We use a sequence number at the end of our migrations rather than a date-time stamp. This is because we don't know when the date-time stamp would be set (when they began the source-code change-set, just before committing, or some time in between) - different developers could use different approaches.
Names such as Migration_0000034.cs give you plenty of space.
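For what it's worth, a minimal mig# migration in that naming style might look roughly like this (a sketch from memory - the table, columns, and exact fluent calls are invented, so check the MigSharp docs for the precise signatures):

    using System.Data;
    using MigSharp;

    [MigrationExport]
    internal class Migration_0000034 : IMigration
    {
        public void Up(IDatabase db)
        {
            // Hypothetical table; MigSharp exposes a fluent IDatabase API.
            db.CreateTable("Orders")
              .WithPrimaryKeyColumn("Id", DbType.Int32)
              .WithNotNullableColumn("CustomerName", DbType.String).OfSize(100);
        }
    }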
At this point, I would stick with migrator.net. I like the promise of FluentMigrator, but it doesn't seem to have any more active development than migrator.net (see the issues and pull requests that have languished on their GitHub site).
There is also no easy way to do an ExecuteScalar(). I'd add it, but I don't want to create my own fork, and I see no reason to believe a pull request would actually land in master. (Execute.WithConnection is an Action, so it fires on demand rather than when I need it to fire.)
So for me, I'm heading back to migrator.net.
I'm in the process of starting up a web site project. My plan is to roll out the site in a somewhat rudimentary form first and then add to the site functionality along the way.
I'm using Subsonic 3 for my DAL, and I'm expecting the database will go through multiple versions as the sites evolve. This means I'll need some kind of versioning and migration tools. I understand that Subsonic has built in migration possibilities, but I'm having difficulties grasping how to use these tools, in my scenario.
First there's the SimpleRepository model, where SubSonic "automagically" handles the migrations as I develop my site. I can see how this works on my dev machine, but I'm not sure how to handle deployments with this.
Would Subsonic run the necessary migrations on my live site as the appropriate methods are called?
Is there some way I can force all necessary migrations on a site while taking the site offline, when using the SimpleRepository model? (Otherwise I would expect random users to experience severe performance hits as the migration routines kick in.)
Would I be better off using the ActiveRecord model, and then handling migrations with the Subsonic.Schema.Migrator? (I suspect so)
Do you know of any good resources explaining how to handle this situation with the migrator? (I read the doc, but I can't piece together how I would use this in practice)
Thanks for listening/replying.
Regards
Jesper Hauge
I would advise against ever running migrations against a live site. SubSonic's migrations are really there to make development simpler and should never be used against a live environment. To be honest even using SubSonic.Schema.Migrator you're still going to bump into the fact that refactoring databases is an incredibly hard problem. For example renaming a column in a table using management studio is trivial, but what happens in the background involves creating an entirely new table and migrating all the constraints, data etc. before renaming the new table.
The most effective way I've found for dealing with this is:
Script all database changes as you make them in your development environment (SQL Server Management Studio will do this for you) and add these scripts to your source control.
As part of deployment (obviously backup first) run the migration scripts and then deploy the updated application on success.
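If it's useful, here is an illustrative sketch of that deployment step (not SubSonic-specific; the SchemaVersion table and the command-line layout are my own invention). It applies any *.sql scripts that haven't been recorded yet, in name order:

    using System;
    using System.Data.SqlClient;
    using System.IO;
    using System.Linq;

    class MigrationScriptRunner
    {
        // Usage: MigrationScriptRunner <connectionString> <scriptFolder>
        // Assumes scripts are plain T-SQL without GO batch separators.
        static void Main(string[] args)
        {
            using (var conn = new SqlConnection(args[0]))
            {
                conn.Open();
                foreach (var file in Directory.GetFiles(args[1], "*.sql").OrderBy(f => f))
                {
                    var name = Path.GetFileName(file);

                    // Skip scripts that have already been applied.
                    var check = new SqlCommand(
                        "SELECT COUNT(*) FROM SchemaVersion WHERE ScriptName = @n", conn);
                    check.Parameters.AddWithValue("@n", name);
                    if ((int)check.ExecuteScalar() > 0) continue;

                    // Apply the script, then record it so it only runs once.
                    new SqlCommand(File.ReadAllText(file), conn).ExecuteNonQuery();
                    var record = new SqlCommand(
                        "INSERT INTO SchemaVersion (ScriptName) VALUES (@n)", conn);
                    record.Parameters.AddWithValue("@n", name);
                    record.ExecuteNonQuery();
                }
            }
        }
    }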
Whether you use ActiveRecord or SimpleRepository is then down to whether you want the extra features/complexity of ActiveRecord.
Hope this helps
I would use ActiveRecord - it's easy to use, and for any changes you just re-run the TT files, then build or publish your solution and you're done. SVN will keep multiple versions of your build stages, so if you make a tit of it you just drop back a revision.
When creating an auto updating feature for a .NET WinForms application, how does it update the DLLs and not affect the currently running application?
Since the application is running during the update process, won't there be a lock on the DLLs (because those DLLs will have to be overwritten during the update).
Usually you would download the new files into a separate area. Then shut down and restart, and at startup look for and use the new files if found. Always keep a last-known-working version on the side so that the user can revert to something that definitely works if the download causes problems.
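A bare-bones sketch of that pattern (the folder layout and file names are hypothetical): a tiny launcher runs before the main application, so none of the app's assemblies are locked yet and the swap is safe.

    using System.Diagnostics;
    using System.IO;

    static class Launcher
    {
        static void Main()
        {
            // Hypothetical layout: the launcher lives above these folders.
            string appDir  = @"C:\MyApp\current";   // running version
            string pending = @"C:\MyApp\pending";   // downloaded update
            string backup  = @"C:\MyApp\backup";    // last known good

            if (Directory.Exists(pending))
            {
                // Keep the working version around so the user can revert.
                if (Directory.Exists(backup)) Directory.Delete(backup, true);
                Directory.Move(appDir, backup);
                Directory.Move(pending, appDir);
            }

            Process.Start(Path.Combine(appDir, "MyApp.exe"));
        }
    }

The main application only ever writes downloads into the pending folder; the launcher promotes them on the next start.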
ClickOnce is a good technology from Microsoft that does this for you and you can use it directly from Visual Studio 2008.
You'll have to shutdown your application and restart it, as other people have already commented.
I wrote open-source code to do just that in a transparent mode - including an external update application to do the actual cold update. See http://www.code972.com/blog/2010/08/nappupdate-application-auto-update-framework-for-dotnet/
The code is at http://github.com/synhershko/NAppUpdate (Licensed under the Apache 2.0 license)
I have a separate 'launcher' application that checks for updates via a web service. If there are updates, it downloads them and then executes my application, which is in a separate assembly.
The other alternatives are using things like ClickOnce, or downloading the files to a separate area and restarting the app, as someone else mentioned.
Be warned about ClickOnce, though - it's not as flexible as it sounds. And if you deploy to a system that requires elevating your program to a higher security level to run, you might run into problems if you don't have a certificate for your app installed. I found it very difficult to get straight answers on the Internet to things like certificate management when it comes to ClickOnce. If you have a complex app, you may want to just roll your own updater, which is what I ended up having to do.
If you publish via ClickOnce, all of that tends to be handled for you. It has its own pros and cons, but it's usually easier than trying to code it all yourself.
Both Wikipedia and 15seconds have decent info on using ClickOnce, how it works, etc.
As others have stated, ClickOnce isn't as flexible as rolling your own solution but it is a LOT less complicated. It has a small learning curve at first, but with pretty much everything bundled into Visual Studio and the use of Wizards, it usually doesn't take long to stumble onto a working solution.
As deployments get more complex (i.e. beyond just having prerequisites or application code that needs updating) and you need to do a lot of post-install or pre-install tasks, there are things like WiX, which give you somewhat of a hybrid solution between Windows Installer and ClickOnce, with the cost of that flexibility being a much steeper learning curve.
The only reason I try to avoid custom installers is that you end up spending way too much time trying to get it just right to handle a bunch of different "What If" scenarios...
These days Windows can do such updates automatically for you with AppInstaller if your app is packaged as an MSIX package.
It downloads the new version of the app in another folder inside ProgramFiles\WindowsApps, then when a user runs the app via the start menu, the system knows what folder it should use. The previous version gets deleted when not in use.
If you want to know how to package your app this way I collected my findings in this answer.