A recent post in the Codename One discussion forum raised a question I often face when I'm waiting for a fix:
Sometimes the Codename One team indicates a fix will be coming in a couple of weeks, and other times they indicate that it's already fixed. Some of that opacity obviously relates to the update of the cloud servers, but it's unclear to me whether it's just the cloud server & the plugin or whether there is something I'm missing. Why isn't there a single update process?
I'd really like a more definitive answer to this, something along the lines of "How does Codename One work?"
Codename One is composed of several different pieces, and an update usually means we update only one of them. At a high level there are really just two major types of updates: libraries & servers.
We update libraries once every 3-5 weeks; we update servers all the time (sometimes more than once per day, sometimes every 3-4 days).
Here is a slightly more accurate overview of what it means to update Codename One:
Plugin & related tools - the plugin itself provides the project properties, server connectivity and designer/GUI builder tools. It updates as part of the native IDE update process once every 3-5 weeks. You need to explicitly accept an update prompt from the IDE in order to get this update. Bugs in the plugin itself or features for the designer/GUI builder need to go through that process...
Build.xml - this is technically a part of the plugin update but you need to actually accept changes that we make to the build.xml to get some functionality. On occasion a new feature (e.g. the new GUI builder) needs to update the build.xml code but this will only happen if you go into project properties, click OK and accept the prompt to update the build.xml (if such an update exists).
Client libraries - these are the APIs you use when writing Codename One code (typically CodenameOne.jar and related ports). We usually issue an update to these once every 3-5 weeks, together with the plugin update. The plugin ships with these, but they are only applied to new projects... When you send a build we implicitly update your libraries to the latest version using a separate update process; you can also use "Update Client Libs" within the Codename One preferences to update these manually without sending a build.
Device libraries - when you send a build to the servers, we use the latest version of the client libraries, which might be newer than what you see in the client libs but might not be the latest git master. This allows us to rapidly deploy & test device fixes. It also allows you to work with the code and use newer features that weren't yet pushed to the client libraries. The process of updating the servers is a bit ad hoc, so there is some opacity around it; we are looking at making this more transparent.
VM & builders - the builder code and VM relate to the server-side scripts that generate the code. When you hit a compilation error on the servers or need an enhancement there, we need to deploy a fix in a process similar to the device libraries deployment.
Certificate wizard update - this tool is updated in a completely separate update process despite shipping in the plugin. We had a lot of concerns about Apple changing things suddenly when we initially created it, so we decided to allow it to update instantly.
Related
I am working on Adobe CQ. I created 2-3 versions (1.2, 1.2, 1.3) of a particular page in my author instance. I then tried to package my content page and installed it on another instance, but I couldn't see the versions of the page on the instance where I installed it.
Can anyone help me out with this? I want to migrate my content pages along with their versions from one CQ instance to another.
We are in the same situation. You can extract prior version details using the packaging approach, but you will be precluded from loading them back in by the new Oak security model. The next issue is that you would need to extract and transform the data and then reinsert it, because the node IDs can differ, especially if you are extracting partial data sets.
Where we have gotten to, and are proving now, is using the new migration tool to move content from instance to instance, which purportedly has a version extract capability. I will update the details here when we get our results back.
UPDATE:
We have tested the CRX2OAK migration tool, and it indeed does move versions across. Using the tool, you can specify filters to only migrate a subset of content, which will then drag the version details across as well.
It seems this approach works quite well for both single-tenancy and multi-tenancy setups, much as using a package for content did.
Unfortunately, it can't be used as a portable backup system, as it is an instance to instance solution. It does, however, work well for blue/green deployment strategies.
Versions are stored under the path '/jcr:system/jcr:versionStorage' in AEM.
To transfer pages with their versions, just create a package with filters for the content you want to move plus the version storage path, download the package, and install it on the other AEM instance.
If anyone comes across this question like me, here is the summarised answer:
You can use the crx2oak utility, available from the link below, to migrate pages and page versions across instances:
https://repo.adobe.com/nexus/content/groups/public/com/adobe/granite/crx2oak/
This is a powerful utility with multiple uses (especially in upgrades), as documented in the links below:
https://docs.adobe.com/docs/en/aem/6-2/deploy/upgrade/using-crx2oak.html
https://jackrabbit.apache.org/oak/docs/migration.html
The source and destination repositories need to be offline while running this utility, so it's best to plan ahead for this type of migration.
HTH
Twist to the standard “SQL database change workflow best practices”
Background
ASP.NET/C# Web App
MS SQL
Environments
Production
UAT
Test
Dev
We create patch scripts (XML and SQL) that are source controlled in Mercurial. We have a command-line utility that installs patches to the DB (utility.exe install –patch) from a Release folder that the build packages. Patches have metadata that helps determine when a patch should run, and we log installed patches in a table in the target DB. All of this was covered in the 3-year-old question:
SQL Server database change workflow best practices
Our Problem/Twist
I think this works well for tables, views, functions and stored procedures. We struggle with application configuration data. Here are some touch points on application configurations:
New client. A BA performs a system study and fit analysis. Out of this comes a configuration Word document of what application configurations need to be set up. Note that some of these may also come in phases over time. We need to get these new configurations into the system for the developer and for client UAT.
A developer works on a feature request or bug fix, and a new configuration change comes out of that work. The configuration needs to make it into the system for testing and promotion to UAT and up.
QA finds that the developer missed an associated configuration change. That configuration needs to make it into the system for promotion to UAT and up.
The build goes to UAT. The client performs acceptance testing but finds they really want to change another, unassociated configuration and have it promoted with the changes. In other words, they found they want to change a business process via a configuration. The configuration needs to make it into the system for promotion to PRD.
As the client operates in PRD they may tweak application settings. These configurations need to make it into the system for future development and testing.
The general issue is making sure we account for all the configurations and don't accidentally miss any during promotions, which causes grief.
Our Attempts At A Process
a. We had a member of the QA team write patches (XML and SQL) and check those in. This requires a build to make sure they get into the package. This approach really only took care of item 1 above, and we fell apart on the other items. The nice thing is that, for the items that made it into the patches, installation was just a run of the utility.
b. A developer threw together a Config page in the application. All the configurations could be uploaded and downloaded via an XML document, but it requires the app to be running. For item 1, a member of the QA team would manually set up configurations in the application and then download the Config.xml file. This XML file would be used to upload configurations in other environments. We would use a text diff tool to look at differences between config.xml files from different environments. This addressed item 1 and the other items, but it had problems: not all configurations made it into the XML document (which just needs to be fixed by a developer); some of the configurations didn't have a UI in the application, so you still had to go to the database manually for some; comparing the XML documents with a text diff was difficult at times (mostly due to sorting, but I'm sure there are other issues); the XML was not very human readable; and finally the XML document did not allow for deleting existing incorrect or outdated configs.
c. Recently we went with option b, but over time, for a new client, we just started manually tracking configs and promoting them by hand (UI and DB) through the promotions. Needless to say, lots of human errors.
So we have been looking at solutions. Eventually it would be great to get as much automation in as possible. I'm looking at going with the scripting approach and focusing on process and documentation, and at using Redgate data compare in addition to what we had been doing with comparing config.xml files. With Redgate we have to create views, though, and there is no way to create update scripts from that approach except to update the scripts manually; it does at least allow a comparison without the app running. I'm also looking at pulling the configs out of our normal patches and making them a system independent of the build (utility.exe –patch –config). When I say focus on process, I mean things like: if we compare and find a config change, whether reported by the client or not, we still script it; it just means we have to have a process in place to quickly revalidate the config install before promoting to the next level. As for documentation, I'm looking at making the original QA document a living document instead of just an upfront document. The goal is to enhance clarity and reduce missing configurations during promotion. Unfortunately it doesn't improve speed of delivery.
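To make the scripting direction concrete, here is a minimal sketch of what I have in mind for an idempotent config patch step. The app_config table, its columns, and the key name are hypothetical stand-ins for our real schema; the point is that the same patch can be re-run safely in Dev, Test, UAT and Production.

// Minimal sketch of an idempotent config "patch" step, assuming a simple
// app_config(config_key, config_value) table; names are hypothetical.
using System;
using System.Data.SqlClient;

class ConfigPatcher
{
    // Upsert a single configuration value so the patch can be re-run
    // safely in every environment.
    static void ApplyConfig(SqlConnection conn, string key, string value)
    {
        const string sql = @"
            MERGE app_config AS target
            USING (SELECT @key AS config_key, @value AS config_value) AS source
            ON target.config_key = source.config_key
            WHEN MATCHED THEN UPDATE SET config_value = source.config_value
            WHEN NOT MATCHED THEN INSERT (config_key, config_value)
                VALUES (source.config_key, source.config_value);";

        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@key", key);
            cmd.Parameters.AddWithValue("@value", value);
            cmd.ExecuteNonQuery();
        }
    }

    static void Main()
    {
        using (var conn = new SqlConnection("Server=.;Database=AppDb;Integrated Security=true"))
        {
            conn.Open();
            ApplyConfig(conn, "Billing.InvoiceGracePeriodDays", "14");
            // The utility would also log the patch id into the patch-log table here.
        }
    }
}

The utility (utility.exe –patch –config) would run a batch of these and record them in the same patch-log table we already use for schema patches.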
Does anyone have any recommendations or best practices to pass along? Thanks.
Can I ask exactly what you mean by application configuration? I'm interpreting that as both:
Config files in the web application
Static reference data inside the database
Full disclosure: I work for Red Gate. You might be interested in taking a look at Deployment Manager; it's a deployment tool that deploys applications, databases and configuration. It's free for up to 5 projects and target servers.
The approach it uses is to package application code and the database state into packages. These packages can be deployed into dev, test, staging and production environments. The same package is deployed to each environment.
Any application configuration that needs to change between environments is handled in one of the ways below:
Variable substitution in web.config. The tool allows you to specify override values for variables in these files, and to set these per environment/server.
Substituting the web.config file per environment.
Custom PowerShell scripts that are run pre/post deploy. You could use these to execute custom SQL based on the environment or server.
Static data within the database, using SQL Source Control's static data feature. I've written a blog post about how to supply different sets of static data to different environments/customers.
This allows you to source control the application configurations and deploy them to different environments.
I'm trying to divide my solution into three configurations:
Development
Testing
Release
All of the above will have different publishing locations, so users can work with the release version, do their tests in testing, and see what is new in the development release. All three versions will be built with different name postfixes and icons and installed on each user's workstation.
For now I get:
"Unable to install this application because an application with the same identity is already installed. To install this application, either modify the manifest version for this application or uninstall the preexisting application."
I can't even install it more than once on one workstation.
So what can I do to achieve this?
You cannot install the same application multiple times unless you change the deployment. The easiest way to do this is by changing the assembly name. This article explains this.
As time passed, I can now see that the solution was quite close; it just required me to be able to specify my requirements first.
So, now I can tell that it mostly depends on the number of such configurations:
If the number is limited and low, i.e. live/test/dev, you can have each as a separate project in the solution, like AppLive, AppTest, AppDev. This requires refactoring to move everything common into separate projects, but it makes the code and releases clearer and easier to manage.
If those configurations are unlimited, or the number is high, then the way to go is to load configurations from a file and pick one from the pool based on custom logic.
Currently I'm using a mix of both, as I want to be able to release test versions earlier than live versions, but my application is also used by multiple branches, each of which has some unique styling, logos and such, so this is applied from an embedded XML file, and the proper set is identified based on Active Directory entries; a sketch of that idea follows.
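As a rough sketch of the "pick from a pool" approach: the resource name, the XML shape, and how the branch name is resolved (for me, from Active Directory) are all hypothetical stand-ins for the real setup.

// Minimal sketch of picking a branch-specific configuration set from an
// embedded XML resource; names and XML shape are assumptions.
using System;
using System.Linq;
using System.Reflection;
using System.Xml.Linq;

static class BranchConfig
{
    public static XElement Load(string branchName)
    {
        var assembly = Assembly.GetExecutingAssembly();
        // "MyApp.Branches.xml" is an assumed embedded resource name.
        using (var stream = assembly.GetManifestResourceStream("MyApp.Branches.xml"))
        {
            var doc = XDocument.Load(stream);
            // Pick the <branch> element whose name matches, e.g. one
            // resolved from the user's Active Directory entries elsewhere.
            return doc.Root.Elements("branch")
                .First(b => (string)b.Attribute("name") == branchName);
        }
    }
}

// Usage: var cfg = BranchConfig.Load(currentAdBranch);
//        var logoPath = (string)cfg.Element("logo");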
Is there any utility that will copy the "official" build of a Windows Forms app from a central network share and launch it (from a client desktop)? I want to make sure users get the latest version whenever I update the binaries on the central network share.
ClickOnce is user-unfriendly, so I'm looking for something else...
Is it possible you could revise your question to describe what it is you find unfriendly about ClickOnce? In my office we have found ClickOnce to be the most efficient and user-friendly way of updating and distributing desktop business applications that we have ever had. I'm wondering if the best way to resolve your question might be to address the issues you have with ClickOnce, rather than integrating/rolling another solution.
I've done this before by the following method:
1 - Keep the "official" build at a specific network location.
2 - The user launches the program from their local machine.
3 - At launch, the program compares its own file version number to the one on the server.
4 - If the two versions are different, copy the new version down from the server and relaunch.
Pretty simple, and it works as long as you are in an intranet environment.
Step 4 is the only tricky part. You can't replace a file while it's in use, so you have to either:
1 - First rename the current (in-use) file and then copy down the new one. Since you will be updating many times, you'll also want to delete any renamed copies that are hanging around (a sketch of this approach follows these options).
or
2 - Have the user launch a "helper" application that does the version check, updates if necessary, and then launches the real app. Of course then you have to deal with updating the helper app.
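To illustrate option 1, here is a minimal sketch of the rename-and-replace trick; the network path and the ".old" suffix are hypothetical.

// Minimal sketch of the rename-and-replace update (option 1 above).
// The share path and file names are assumptions for illustration.
using System;
using System.Diagnostics;
using System.IO;

static class SelfUpdater
{
    const string ServerExe = @"\\server\share\MyApp.exe";

    public static void UpdateIfNewer()
    {
        string localExe = Process.GetCurrentProcess().MainModule.FileName;
        string oldCopy = localExe + ".old";

        // Clean up the renamed copy left behind by a previous update.
        if (File.Exists(oldCopy))
            File.Delete(oldCopy);

        var localVer = FileVersionInfo.GetVersionInfo(localExe).FileVersion;
        var serverVer = FileVersionInfo.GetVersionInfo(ServerExe).FileVersion;
        if (localVer == serverVer)
            return;

        // Windows lets you rename a running executable even though you
        // can't overwrite it, so move it aside and copy the new build in.
        File.Move(localExe, oldCopy);
        File.Copy(ServerExe, localExe);

        Process.Start(localExe);
        Environment.Exit(0);
    }
}

The key observation is the rename: the lock on an in-use executable prevents overwriting and deleting, but not moving, which is why the stale ".old" copy has to be cleaned up on the next run instead.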
We have a tool that does that, which was in use before there was such a thing as Windows Update (or any other updater).
The problem with any sort of update of this fashion is the security level of the user. Many times you need to be an administrator to perform certain functions.
Our solution is two parts in one executable: 1. a service mode that runs as local system or admin to perform such operations; 2. an executable which can be called by an app to fetch the updates for an application via UNC, HTTP, or FTP and apply them.
The basic process is this:
1. The application checks its version number; we use a central database to list all applications and their version numbers.
2. If the update is a minor revision, we give the user an opt-out on the install; if it is a major revision, we require the install (sketched below).
3. Once the update is confirmed, we call the updater executable which, in concert with its service-mode counterpart, retrieves the updates, installs them, and relaunches the application.
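As a rough illustration of the policy in step 2 (not our actual code), assuming the current and latest versions have already been fetched, e.g. from the central database:

// Sketch of the minor/major update policy in step 2; how the versions
// are obtained is assumed to happen elsewhere.
using System;
using System.Windows.Forms;

static class UpdatePolicy
{
    public static bool ShouldInstall(Version current, Version latest)
    {
        if (latest <= current)
            return false; // already up to date

        if (latest.Major > current.Major)
            return true; // major revision: install is mandatory

        // Minor revision: let the user opt out this time.
        return MessageBox.Show("A new version is available. Install now?",
                               "Update", MessageBoxButtons.YesNo) == DialogResult.Yes;
    }
}

// Usage: if (UpdatePolicy.ShouldInstall(new Version("2.3"), new Version("2.4"))) { ... }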
If you are interested, go to the website listed in my profile and send us a support request addressed to me, and I will give you more details and the codebase if desired.
Check out this one:
.NET Client Applications: .NET Application Updater Component
It is a white paper which discusses in detail what it takes to make an application auto-updatable.
When creating an auto-updating feature for a .NET WinForms application, how does it update the DLLs without affecting the currently running application?
Since the application is running during the update process, won't there be a lock on the DLLs (because those DLLs will have to be overwritten during the update)?
Usually you would download the new files into a separate area. Then you shut down and restart, and at startup you look for and use the new files if found, always keeping the last known working version on the side so that the user can revert to something that definitely works if the download causes problems (see the sketch below).
ClickOnce is a good technology from Microsoft that does this for you and you can use it directly from Visual Studio 2008.
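For illustration, a minimal sketch of the manual pattern described above, assuming each downloaded build lives in its own version-named folder; the layout and the MyApp.exe name are hypothetical.

// Sketch of "use the new files if found": each build sits in a folder
// named after its version (e.g. app\1.0.3\MyApp.exe), the newest one is
// launched, and older folders are kept as known-good fallbacks.
using System;
using System.Diagnostics;
using System.IO;
using System.Linq;

static class VersionedLauncher
{
    public static void LaunchNewest(string root)
    {
        var newest = Directory.GetDirectories(root)
            .Select(d => new { Dir = d, Ver = ParseOrNull(Path.GetFileName(d)) })
            .Where(x => x.Ver != null)
            .OrderByDescending(x => x.Ver)
            .FirstOrDefault();

        if (newest == null)
            throw new InvalidOperationException("No installed version found.");

        // Older version folders are deliberately left in place so the
        // user can revert to one that definitely works.
        Process.Start(Path.Combine(newest.Dir, "MyApp.exe"));
    }

    static Version ParseOrNull(string s)
    {
        Version v;
        return Version.TryParse(s, out v) ? v : null;
    }
}

Because the download lands in a folder the running application never loads from, there is no file lock to fight with; the swap is just "start from the newest folder next time".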
You'll have to shutdown your application and restart it, as other people have already commented.
I wrote an open-source code to do just that in a transparent mode - including an external update application to do the actual cold update. See http://www.code972.com/blog/2010/08/nappupdate-application-auto-update-framework-for-dotnet/
The code is at http://github.com/synhershko/NAppUpdate (Licensed under the Apache 2.0 license)
I have a separate 'launcher' application that checks for updates via a web service. If there are updates, it downloads them and then executes my application, which is in a separate assembly (see the sketch after this answer).
The other alternatives are using something like ClickOnce, or downloading the files to a separate area and restarting the app, as someone else mentioned.
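A rough sketch of such a launcher; the URLs, the plain-text version response, and the file names are all hypothetical.

// Rough sketch of a launcher that asks a web service for the latest
// version, downloads the main app if needed, then starts it. Because the
// launcher is a separate assembly, MyApp.exe isn't locked while updating.
using System;
using System.Diagnostics;
using System.IO;
using System.Net;

static class Launcher
{
    const string VersionUrl = "https://example.com/myapp/latest-version";
    const string DownloadUrl = "https://example.com/myapp/MyApp.exe";

    static void Main()
    {
        string appExe = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "MyApp.exe");

        using (var web = new WebClient())
        {
            // The service is assumed to return the latest version as plain text, e.g. "1.4.2".
            var latest = new Version(web.DownloadString(VersionUrl).Trim());
            var current = File.Exists(appExe)
                ? new Version(FileVersionInfo.GetVersionInfo(appExe).FileVersion)
                : new Version(0, 0);

            if (latest > current)
                web.DownloadFile(DownloadUrl, appExe);
        }

        Process.Start(appExe);
    }
}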
Be warned about ClickOnce, though - it's not as flexible as it sounds. And if you deploy to a system that requires elevating your program to a higher security level to run, you might run into problems if you don't have a certificate for your app installed. I found it very difficult to get straight answers on the Internet about things like certificate management when it comes to ClickOnce. If you have a complex app, you may want to just roll your own updater, which is what I ended up having to do.
If you publish via ClickOnce, all of that tends to be handled for you. It has its own pros and cons, but it is usually easier than trying to code it all yourself.
Both Wikipedia and 15seconds have decent info on using ClickOnce, how it works, etc.
As others have stated, ClickOnce isn't as flexible as rolling your own solution but it is a LOT less complicated. It has a small learning curve at first, but with pretty much everything bundled into Visual Studio and the use of Wizards, it usually doesn't take long to stumble onto a working solution.
As deployments get more complex (i.e. beyond just having prerequisites or application code that needs updating) and you need to do a lot of post-install or pre-install tasks, there are tools like WiX which give you somewhat of a hybrid solution between Windows Installer and ClickOnce, with the cost of that flexibility being a much steeper learning curve.
The only reason I try to avoid custom installers is that you end up spending way too much time trying to get it just right to handle a bunch of different "What If" scenarios...
These days Windows can do such updates automatically for you with AppInstaller if your app is packaged as an MSIX package.
It downloads the new version of the app into another folder inside ProgramFiles\WindowsApps; then, when the user runs the app via the Start menu, the system knows which folder it should use. The previous version gets deleted when it is no longer in use.
If you want to know how to package your app this way I collected my findings in this answer.