I am working on a project where we'll be using Tomcat 6.0.20 for development and production.
I came across some issues related to hot deployment which requires one to set Context.antiResourceLocking to false in server.xml. I had some questions on antiResourceLocking and antiJARLocking.
I have gone through the reference at http://tomcat.apache.org/tomcat-6.0-doc/config/context.html.
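For reference, the attributes in question sit on the Context element, something like this (the path and docBase values are just placeholders):

    <!-- in server.xml, or in a per-application context file -->
    <Context path="/myapp" docBase="myapp"
             antiResourceLocking="false" antiJARLocking="false"/>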
What I can't understand is what exactly is meant by a JAR or a resource getting locked. From what I have read so far, the locking problem usually shows up when you undeploy an application and the undeploy fails because some process still holds a lock on a file/JAR. Can someone please point me to anything where I can read more on this issue?
My questions are:
1) If I set antiJARLocking and/or antiResourceLocking to false, what problems can I run into? Can someone please provide an example?
2) Is it a bad practice to set these attributes to false in a production environment?
3) Is it true that locking won't occur on a Linux box as frequently as it can happen on a Windows box?
Appreciate your help.
Thank you.
Govind N.
Here are my answers to these:
1) From what I can tell, setting antiJARLocking and/or antiResourceLocking to false could only cause problems on Windows (though I vaguely remember a Tomcat developer claiming that it also affects Linux -- I'm disregarding that because I have seen zero evidence of it, and no examples / detailed explanations proving it).
2) It is only bad practice to set these to false when Tomcat is running on Windows.
3) I have been running Tomcat on multiple Linux distributions and versions for more than ten years. I have never once seen a JAR locking or resource locking problem due to not setting one of those attributes to true. As far as I know, it doesn't happen, but it might depend on the filesystem implementation you're using; I have always used ext2, ext3, or ext4.
If you still have questions about this, ask about it on the Tomcat-user mailing list.
Cheers!
Jason Brittain
Co-author, Tomcat: The Definitive Guide (O'Reilly)
1) I had built a system around SVN to automatically build and deploy a webapp. The deployment was done via the Tomcat Ant tasks (see the sketch after point 2 below), and with antiJARLocking and/or antiResourceLocking set to false the application was not undeployed properly: Tomcat could not delete some JARs and the log4j.properties config file, so the deploy failed. I therefore had to set these attributes to true, and Tomcat then made a copy of the webapp in the temp dir. This makes deployment slower, and with nearly every redeploy the temp dir grew in size, so I had to set up a procedure to delete older deployments of my app from the temp dir. It is safe to delete deployments from the temp dir at any time, because Tomcat will simply redeploy the app there.
2) From the Tomcat docs I understood that the problems with JAR locking or resource locking occur only on Windows. I wouldn't set these attributes to true in a production environment: there's no need to redeploy so often, and with Java it's always a good idea to restart the server after a redeploy in production (an OutOfMemoryError is always lurking in the dark, even if your own code doesn't leak). Another minor issue is that, since the app is deployed to the temp dir, if you modify a JSP or another file in the webapps dir, the change won't take effect unless you copy it to the temp dir as well.
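For illustration, the Ant deployment I had in mind looks roughly like this (a sketch rather than my exact build file; the manager URL, credentials, and paths are placeholders):

    <!-- build.xml snippet: needs catalina-ant.jar from the Tomcat lib dir on Ant's classpath -->
    <taskdef name="deploy"   classname="org.apache.catalina.ant.DeployTask"/>
    <taskdef name="undeploy" classname="org.apache.catalina.ant.UndeployTask"/>

    <target name="redeploy">
      <undeploy url="http://localhost:8080/manager" username="admin"
                password="secret" path="/myapp" failonerror="false"/>
      <deploy url="http://localhost:8080/manager" username="admin"
              password="secret" path="/myapp"
              war="file:${basedir}/dist/myapp.war"/>
    </target>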
I have a python27 appengine application. My application generates a 500 error early in the code initialization, and I can inspect the stack trace in the StackDriver debugger in the GCP console.
I've since patched the code and re-deployed under the same service name and version name (i.e. gcloud app deploy --version=SAME). Unfortunately, the old error still comes up, and the line numbers in the stack traces reflect the files in the buggy deployment. If I use the code viewer to debug the error, however, I am brought to the updated, patched code in the online viewer, so there is a mismatch. It behaves as if the app instance is holding on to a previous snapshot of the code.
I'm fuzzy on the freshness and eventual consistency guarantees of GAE. Do I have to wait to get everything to serve the latest deployed version? Can I force it to use the newer code right away?
Things I've tried:
I initially assumed the problem had to do with versioning, i.e. maybe requests were being load-balanced between instances with the same version but each with slightly different code. I'm a bit fuzzy on the actual rules that govern which GAE instance gets chosen for a new request (especially whether GAE tries to reuse previous instances based on a source IP). I'm also fuzzy on whether or not active instances get destroyed right away when different code is redeployed under the same version name.
To take that possibility out of the equation, I tried pushing to a new version name and then deleting all previous versions (using gcloud app versions list to get the list). But it doesn't help -- I still get stack traces from the old code, despite the source being up to date in the GCP console debugger. Waiting a couple of hours doesn't do anything either.
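For reference, that cleanup amounted to commands along these lines (the version ids are placeholders):

    # list every deployed version of the service
    gcloud app versions list
    # delete the stale versions by id
    gcloud app versions delete 20180101t120000 20180102t090000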
I've tried two things:
1) Disabling and re-enabling the application in GAE->Settings.
2) Removing the .pyc files that I noticed had been uploaded in the snapshot, and re-deploying.
I discovered that (1) is a very effective way to stop all running appengine instances. When you deploy a new version of a project, a traffic split is created (i.e. 0% for the old version and 100% for the new), but in my experience old instances might still be running if they've been used recently (despite being configured to receive 0% of traffic). Toggling kills them all immediately. Unfortunately, I found that my stale code was still being used after re-enabling.
(2) did the trick. It wasn't obvious that .pyc files were being uploaded; I discovered it by looking at GCP->StackDriver->Debug, where I saw .pyc files in the tree snapshot.
I had recently updated my .gitignore to ignore locally installed pip runtime dependencies for the project (the output of pip install -t lib -r requirements.txt). I don't want those in git, but they do need to ship as part of my appengine project, so I had removed the #!include:.gitignore special include line from .gcloudignore. However, I forgot to add *.pyc directly to my .gcloudignore.
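For anyone hitting the same thing, my .gcloudignore now looks roughly like this (simplified; your layout may differ):

    # .gcloudignore -- lib/ must still be uploaded, so the .gitignore
    # include line stays removed and *.pyc is listed explicitly
    .gcloudignore
    .git
    .gitignore
    *.pyc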
Another way to see the complete set of files included in an app deployment is to increase the verbosity to info on the gcloud app deploy command -- you get a giant JSON manifest with checksums. I don't typically leave that on because it's hard to visually inspect, but I would have spotted the .pyc files in there.
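Concretely, that's the global --verbosity flag on the deploy command:

    gcloud app deploy --verbosity=info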
Our internal Nexus repository has an artifact that we wish we had never published, but unfortunately we did. And various environments installed it.
I'd like to delete it from the repository so that nobody downloads the erroneous release again, but for the people who already downloaded & installed that artifact, it seems a bit deceptive to pretend the release never happened. Is there a way to "archive" or "disable" an artifact so that it still is preserved somewhere for analysis or archeological purposes, but won't be installed by someone pulling artifacts for installation?
There isn't a feature in NXRM for disabling access to individual artifacts. This is a bit hacky, but you could achieve it by:
1) Creating a new hosted repository (either raw format or the same format that you are currently using)
2) Marking that repository as "offline"
3) Moving the artifact to the archived repository.
Step 3 is the problematic part: if you are an NXRM PRO user on a recent version, there are REST calls you can use for moving components. See here for more details: https://help.sonatype.com/repomanager3/staging#Staging-RESTEndpoints
If you are an OSS user you will likely have to republish the same artifact to the archive repository and then delete it from the original repository.
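If you go the republish route, the NXRM3 REST API can handle both halves. A rough curl sketch (host, credentials, coordinates, and the component id are all placeholders, and you would want to do the upload before marking the archive repository offline):

    # upload the artifact into the "archive" hosted repository (maven2 format assumed)
    curl -u admin:admin123 -X POST \
      "https://nexus.example.com/service/rest/v1/components?repository=archive-repo" \
      -F maven2.groupId=com.example \
      -F maven2.artifactId=bad-artifact \
      -F maven2.version=1.2.3 \
      -F maven2.asset1=@bad-artifact-1.2.3.jar \
      -F maven2.asset1.extension=jar

    # look up the component id in the original repository, then delete it there
    curl -u admin:admin123 \
      "https://nexus.example.com/service/rest/v1/search?repository=releases&name=bad-artifact"
    curl -u admin:admin123 -X DELETE \
      "https://nexus.example.com/service/rest/v1/components/<component-id>"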
(Note the above assumes you are using NXRM3 - if you are using NXRM2 take a look at https://blog.sonatype.com/2010/04/nexus-tip-moving-artifacts-between-nexus-repositories/)
This is a subject of common discussion, but through all my research I have not actually found a sound answer to it.
I develop my websites offline, and then launch them live through my hosting account.
I use CodeIgniter, and on that basis there are some fundamental differences between my offline and online copies, namely base URLs and database configurations. As such, I cannot simply develop and test my websites offline and then upload them, as that requires small configuration changes which are easy to overlook and could lead to a non-working live website.
The other factor is that when I am developing offline, I might add a database table or a column whilst creating some functionality. When I upload my local developments to my host, they often do not work because I have forgotten to upload the new database structure. Obviously this cannot be allowed to happen - there cannot be any opportunity for a damaged or broken live website.
Further to this, I'd like to have logs of my development - version control of sorts - such that if I develop a feature and then something else stops working, I can easily look backwards to at least see the code changes which could have caused the breakage.
My fourth requirement is as follows: if I go away on holiday for a week without my development laptop and then get a bug report, I have no way of fixing it. If I fix it on the live copy, not only is that dangerous, but I'll inevitably forget to update my local copy, so when I next update the live copy that change will be lost. Is there a way that, from any computer, I can access my development setup, edit and test, launch to the live site, and also commit the change so that my laptop's local copy stays up to date?
So yes, in general I'm looking for a solution to make my development process more efficient/suitable. Any ideas?
Thanks
Don't deploy by simply copying. Deploy by using a script (I use Apache Ant) that automates the copying of environment-specific files, the replacement of some values, etc.
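For example, with Ant you can copy per-environment config over the defaults and substitute values during the copy (a sketch; the directory names and tokens are placeholders):

    <target name="prepare-config">
      <!-- copy config templates, replacing @TOKENS@ with environment-specific values -->
      <copy todir="${build.dir}/application/config" overwrite="true">
        <fileset dir="config-templates"/>
        <filterset>
          <filter token="BASE_URL" value="${deploy.base.url}"/>
          <filter token="DB_HOST"  value="${deploy.db.host}"/>
        </filterset>
      </copy>
    </target>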
This just needs rigor. Make a todo list while developing, and check that every modification on the server is done. You might also test the deploy procedure on a pre-production server which has a similar configuration to the production server; make sure everything is OK there, and then apply the same, tested procedure on the production server.
Just use a version control system. SVN or Git are two free candidates.
Make your version control server available from anywhere. If it's an open-source project, free hosting solutions exist. Of course, if you don't have a development computer available, you'll have to check out the whole project and probably install some tools to be able to develop, test, and deploy. Just try to make it as easy as possible, or always have your laptop available. If you plan to work, have your toolbox with you. If you don't plan to work, then don't work. When you have finished some development, commit it to the server. When you go back to your laptop, update your working copy from the server.
Small additions and clarifications to JB's answer:
Use any VCS which can work (in a good way) with branches - your local and prod systems are good candidates for separate branches, where you share common code but have branch-specific config. It'll require some changes to your everyday workflow (code in "test", merge finished work into "prod", deploy /by tools, not by hand/ only after the merge...), but it's a fair price.
A change of workflow, again. As JB noted - don't deploy by hand, don't deploy the wrong branch, don't deploy "prod" before the merge is finished. Build tools are rather smart now; you can check such preconditions inside the builder.
Just use a VCS; maybe a DVCS will be somewhat better. I say a strong "no-no" to Git as a first VCS, but you have a wide choice even without it - SVN (poor branching/merging compared to a DVCS), Bazaar (not the tool of my dreams, but, who knows), Mercurial, Fossil SCM, Monotone.
Don't work on live; never do anything outside your SCM. One source of changes is the rule of a happy developer. Either don't work at all in your free time, or have your codebase always reachable (free code hosting /Google Code, SourceForge, BitBucket, GitHub, Assembla, LaunchPad/ or your own server): get it as needed, change, save, deploy.
We have a .NET 4.0 WinForms application that we publish with ClickOnce to the client PCs. The installation is about 80 MB. The application is available offline, and the update occurs at startup of the app using
ApplicationDeployment.CurrentDeployment.Update
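Our startup check is essentially the following (a simplified sketch; the CheckForUpdate and Restart calls are the standard System.Deployment.Application pattern, and the rest of our code is omitted):

    // simplified startup update check (requires references to
    // System.Deployment and System.Windows.Forms)
    if (ApplicationDeployment.IsNetworkDeployed)
    {
        ApplicationDeployment deployment = ApplicationDeployment.CurrentDeployment;
        if (deployment.CheckForUpdate())
        {
            deployment.Update();    // download and apply the new version
            Application.Restart();  // restart so the new version is loaded
            return;
        }
    }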
Each time we do an update of the application everything works fine and each client gets updated. However, the application cache keeps growing in size... We noticed that more than two versions are kept in the LocalAppData folder. The size of the ClickOnce installation folder is more than 1 GB.
ClearOnlineAppCache works only for online applications, and we can't find any information on cleaning up LocalAppData for an offline application.
Is there any way to manage previous versions of our application in the LocalAppData folder from our client pc's?
Update:
We removed our custom update code and used the update mechanism of the ClickOnce framework instead. Now old versions are removed properly and only two versions are kept in LocalAppData. I still have no idea why all versions were kept when we updated through the custom update code.
I've seen this issue before, but I clarified with the ClickOnce lead at Microsoft before answering.
It keeps two versions of the deployment plus there are extra folders for each assembly. When processing an update, ClickOnce figures out which files have changed by comparing against the assembly it has already cached, and it only downloads the ones that have changed. The deployment folders have hard links to the assemblies in the separate folders. So you might see additional files, but it's not actually the file, it's a link to the files in the assembly-only folders. Explorer will show it as a file, but it's not. So unless you're running out of disk space and are just concerned about the folder size, be aware that the information reported by Windows Explorer may not be accurate.
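If you want to check this on a client machine, fsutil can list the hard links that point at the same file data (the path below is just a made-up example):

    fsutil hardlink list "%LOCALAPPDATA%\Apps\2.0\A1B2C3\SomeAssembly.dll"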
There is an answer to this problem here
I wrote a function to clean old ClickOnce versions in the client side.
On my machine I freed 6 GB of space. I don't even want to know the total space used by old versions org-wide...
When creating an auto updating feature for a .NET WinForms application, how does it update the DLLs and not affect the currently running application?
Since the application is running during the update process, won't there be a lock on the DLLs (because those DLLs will have to be overwritten during the update)?
Usually you would download the new files into a separate area. Then shut down and restart, and at startup look for and use the new files if found, always keeping a last-known-good version on the side so that the user can revert to something that definitely works if the download causes problems.
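A hypothetical sketch of that pattern, done from a separate launcher exe so the application's own files are never locked while they're replaced (all folder and file names here are placeholders):

    // Launcher.cs -- applies a staged update, keeps the previous
    // version as a fallback, then starts the real application.
    using System;
    using System.Diagnostics;
    using System.IO;

    static class Launcher
    {
        static void Main()
        {
            string root    = AppDomain.CurrentDomain.BaseDirectory;
            string appDir  = Path.Combine(root, "app");
            string staging = Path.Combine(root, "staging");        // the updater downloads here
            string backup  = Path.Combine(root, "last-known-good");

            if (Directory.Exists(staging))
            {
                if (Directory.Exists(backup))
                    Directory.Delete(backup, recursive: true);
                Directory.Move(appDir, backup);   // keep a version the user can revert to
                Directory.Move(staging, appDir);  // promote the downloaded files
            }

            Process.Start(Path.Combine(appDir, "MyApp.exe"));
        }
    }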
ClickOnce is a good technology from Microsoft that does this for you and you can use it directly from Visual Studio 2008.
You'll have to shutdown your application and restart it, as other people have already commented.
I wrote an open-source framework to do just that transparently - including an external update application to do the actual cold update. See http://www.code972.com/blog/2010/08/nappupdate-application-auto-update-framework-for-dotnet/
The code is at http://github.com/synhershko/NAppUpdate (Licensed under the Apache 2.0 license)
I have a separate 'launcher' application that checks for updates via a web service. If there are updates, it downloads them and then executes my application, which is in a separate assembly.
The other alternatives are using things like ClickOnce, or downloading the files to a separate area and restarting the app, as someone else mentioned.
Be warned about ClickOnce, though - it's not as flexible as it sounds. And if you deploy to a system that requires elevating your program to a higher security level to run, you might run into problems if you don't have a certificate for your app installed. I found it very difficult to get straight answers on the Internet about things like certificate management when it comes to ClickOnce. If you have a complex app, you may want to just roll your own updater, which is what I ended up having to do.
If you publish via ClickOnce, all of that tends to be handled for you. It has its own pros and cons, but it's usually easier than trying to code it all yourself.
Both Wikipedia and 15seconds have decent info on using ClickOnce, how it works, etc.
As others have stated, ClickOnce isn't as flexible as rolling your own solution but it is a LOT less complicated. It has a small learning curve at first, but with pretty much everything bundled into Visual Studio and the use of Wizards, it usually doesn't take long to stumble onto a working solution.
As deployments get more complex (i.e. beyond just having prerequisites or application code that needs updating) and you need to do a lot of post-install or pre-install tasks, there are things like WiX which give you somewhat of a hybrid solution between Windows Installer and ClickOnce, with the cost of that flexibility being a much steeper learning curve.
The only reason I try to avoid custom installers is that you end up spending way too much time trying to get it just right to handle a bunch of different "What If" scenarios...
These days Windows can do such updates automatically for you with App Installer if your app is packaged as an MSIX package.
It downloads the new version of the app into another folder inside ProgramFiles\WindowsApps, then when a user runs the app via the Start menu, the system knows which folder it should use. The previous version gets deleted once it is no longer in use.
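If you're curious what drives that, a minimal .appinstaller file looks roughly like this (names, URLs, and versions are placeholders; check the current schema before relying on it):

    <?xml version="1.0" encoding="utf-8"?>
    <AppInstaller xmlns="http://schemas.microsoft.com/appx/appinstaller/2018"
                  Uri="https://example.com/MyApp.appinstaller"
                  Version="1.0.0.0">
      <MainPackage Name="MyCompany.MyApp"
                   Publisher="CN=MyCompany"
                   Version="1.0.0.0"
                   ProcessorArchitecture="x64"
                   Uri="https://example.com/MyApp.msix" />
      <UpdateSettings>
        <!-- check for a new version every time the app launches -->
        <OnLaunch HoursBetweenUpdateChecks="0" />
      </UpdateSettings>
    </AppInstaller>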
If you want to know how to package your app this way I collected my findings in this answer.