Build not set to be retained indefinitely by RM

With a XAML build definition triggering a release, Release Management was setting the build to be retained indefinitely.
With the new non-XAML build system, the build doesn't get set to be retained indefinitely by TFS RM releases.
Am I missing something, or do I have to set it through code?
On-prem TFS.

You're not missing anything. The behavior is different for the new build system, and I'm glad that it is. It tended to be a problem more often than it was a benefit.
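If you do want the old behavior back for specific builds, a script or release step can set the flag itself. A rough sketch using the Build REST API, assuming TFS 2015 or later; the collection URL, project, credentials, and build ID are all placeholders for your environment:

```
# Mark build 1234 as "retain indefinitely" via the Build REST API.
# URL, project and credentials are placeholders for an on-prem TFS.
curl --ntlm -u "DOMAIN\\user:password" \
  -X PATCH \
  -H "Content-Type: application/json" \
  -d '{"keepForever": true}' \
  "http://tfs:8080/tfs/DefaultCollection/MyProject/_apis/build/builds/1234?api-version=2.0"
```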

Related

Do I need to stop my localhost server when pushing, pulling or merging code to/from a GitHub repository?

I have two terminals open in VS Code for React development: one for running the server and one for running bash commands. I've been told this is bad practice because it creates problems in the build of the project; if you want to commit, pull, push, or merge, you should stop the server, perform those actions, and then restart it. Stopping and restarting the server takes time and seems like a hassle to me. Is it really necessary? Thanks.
Largely speaking, this should be fine. Assuming you are using some kind of development server (which is monitoring files for changes), any changes that are applied because of a git pull/merge/etc will cause your app to reload (usually in some optimised way).
An exception to this is when files that configure the development server change: the server has probably read its configuration at startup, so those changes might not take effect until you restart it.
Another exception is adding or changing dependencies. If new or different packages were added to package.json, you'd need to rerun npm install before starting the development server again (a small git hook can automate this; see the sketch below).
Pushing code to git while the server is running is not problematic either.
So in summary, it's usually fine to do this and it isn't destructive in any way. If something unpredictable happens, try restarting the dev server (and possibly rerunning npm install).
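To take the dependency case off your mind, a git hook can rerun npm install whenever a pull or merge touches package.json. A rough sketch, assuming the default npm setup (save as .git/hooks/post-merge and make it executable):

```
#!/bin/sh
# .git/hooks/post-merge -- runs after 'git pull' or 'git merge'.
# If package.json (or the lockfile) changed in the merge, reinstall dependencies.
changed=$(git diff-tree -r --name-only --no-commit-id ORIG_HEAD HEAD)
if echo "$changed" | grep -qE '^(package\.json|package-lock\.json)$'; then
  echo "package.json changed -- running npm install"
  npm install
fi
```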

Appengine runs a stale version of the code -- and stack traces don't match source code

I have a python27 appengine application. My application generates a 500 error early in the code initialization, and I can inspect the stack trace in the StackDriver debugger in the GCP console.
I've since patched the code and re-deployed under the same service name and version name (i.e. gcloud app deploy --version=SAME). Unfortunately, the old error still comes up, and the line numbers in the stack traces reflect the files from the buggy deployment. If I use the code viewer to debug the error, however, I am brought to the updated, patched code in the online viewer -- so there is a mismatch. It behaves as if the app instance is holding on to a previous snapshot of the code.
I'm fuzzy on the freshness and eventual consistency guarantees of GAE. Do I have to wait to get everything to serve the latest deployed version? Can I force it to use the newer code right away?
Things I've tried:
I initially assumed the problem had to do with versioning, i.e. maybe requests being load-balanced between instances with the same version but each with slightly different code. I'm a bit fuzzy on the actual rules that govern which GAE instance gets chosen for a new request (especially whether GAE tries to reuse previous instances based on source IP). I'm also fuzzy on whether active instances get destroyed right away when different code is redeployed under the same version name.
To take that possibility out of the equation, I tried pushing to a new version name, and then deleting all previous versions (using gcloud app versions list to get the list). But it doesn't help -- I still get stack traces from the old code, despite the source being up to date in the GCP console debugger. Waiting a couple hours doesn't do anything either.
I've tried two things:
1) disabling and re-enabling the application in GAE -> Settings;
2) removing the .pyc files that I'd noticed were uploaded in the snapshot, and re-deploying.
I discovered that (1) is a very effective way to stop all running appengine instances. When you deploy a new version of a project, a traffic split is created (0% for the old version and 100% for the new), but in my experience old instances might still be running if they've been used recently, despite being configured to receive 0% of traffic. Toggling the setting kills them all immediately. Unfortunately, I found that my stale code was still being used after re-enabling.
(2) did the trick. It wasn't obvious that .pyc files were being uploaded; I discovered it by looking at GCP -> StackDriver -> Debug, where I saw .pyc files in the tree snapshot.
I had recently updated my .gitignore to ignore the locally installed pip runtime dependencies for the project (the output of pip install -t lib -r requirements.txt). I don't want those in git, but they do need to ship as part of my appengine project, so I had removed the #!include:.gitignore special include line from .gcloudignore. However, I forgot to re-add *.pyc to my .gcloudignore.
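For anyone else hitting this, here is roughly what the relevant part of my .gcloudignore ended up looking like (entries are illustrative; the point is that once the #!include:.gitignore line is gone, you have to list things like *.pyc yourself):

```
# .gcloudignore -- controls what "gcloud app deploy" uploads.
# The "#!include:.gitignore" line is deliberately absent, because .gitignore
# excludes lib/ (the vendored pip dependencies), which must be uploaded.
.gcloudignore
.git
.gitignore
# Compiled Python files -- the entry I had forgotten to re-add:
*.pyc
```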
Another way to see the complete set of files included in an app deployment is to increase the verbosity to info on the gcloud app deploy command; you then see a giant JSON manifest with checksums. I don't typically leave that on because it's hard to inspect visually, but I would have spotted the .pyc files in there.
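For reference, that looks something like this (piping through tee/grep is just a convenient way to scan the manifest; the log filename is arbitrary):

```
# Deploy with verbose logging; the upload manifest (with checksums) is printed.
gcloud app deploy --verbosity=info 2>&1 | tee deploy.log

# Then look for anything that should not have been uploaded:
grep "\.pyc" deploy.log
```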

How am I supposed to manage db revisions alongside codebase revisions?

We have a Rails app with a PostgreSQL database. We use git for version control.
We're only two developers on the project, so we both have to do a little of everything, and when emergencies arise we often have to drop everything to address them.
We have a main branch (called staging just to be difficult 🌚) which we only use directly for quick fixes, minor copy changes, etc. For bigger features, we work on independent feature branches.
When I work on a feature that requires changes to the database, I naturally have to create migrations that alter the schema. Let's say I'm working on feature-emoji, and I create a migration 20150706101741_add_emoji_to_users.rb. I run rake db:migrate to get on with my work.
Later, I'm informed of some bug I need to address. I switch to staging to start work on it; however, now my app will misbehave because the db schema does not match what the app expects. So before doing git checkout staging, I have to remember to do rake db:rollback. And then later when I switch back to feature-emoji, I have to run rake db:migrate again.
This whole flow is sort of okay-ish when dealing with just two branches, but when the git rebases and git merges happen, it gets complicated.
Is there no better way to handle versioning of code and db in parallel? Or am I doomed to have to run annoying rake tasks every time I want to change branches?
There is no easy answer to this. You could perhaps set up something like a git hook (or a wrapper around git checkout) to check for changes to schema.rb and warn you, but there are lots of edge cases to cover in such a setup.
Ultimately, the responsibility lies with the human developer to restore untracked parts of their environment — e.g. the database — to a clean state before switching branches.
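To make the hook idea concrete: git has no hook that can abort a checkout, but a post-checkout hook can at least warn you when the schema differs between the commit you left and the one you landed on. A rough sketch, assuming the standard Rails layout (save as .git/hooks/post-checkout and make it executable):

```
#!/bin/sh
# .git/hooks/post-checkout -- called with <old-ref> <new-ref> <branch-flag>.
old_ref=$1
new_ref=$2
branch_checkout=$3

# Only care about branch switches, not single-file checkouts.
[ "$branch_checkout" = "1" ] || exit 0

# Warn if db/schema.rb differs between the commit we left and the one we landed on.
if ! git diff --quiet "$old_ref" "$new_ref" -- db/schema.rb; then
  echo "WARNING: db/schema.rb differs between these branches."
  echo "You probably need 'rake db:rollback' on the old branch or 'rake db:migrate' here."
fi
```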

MS Release Management clean-up of drop folders

We have four stages in our release path (DEV, TEST, UAT and PROD), and I had been planning on using the validation step on DEV as a release-to-TEST gateway. If the developers think a build is worth the test team looking at, they approve it; otherwise they reject it (with different approvers on the other stages).
I'm looking to get some sort of clean-up running on the build drop folder for builds that have been rejected (or abandoned), either by deleting them or by clearing the keep-indefinitely flag on the TFS build.
Is there any way to do this manually (or, better yet, automatically)?
I suspect it could be done by querying the RM database and calling the TFS API, but I'd like to save the effort of doing this myself.
I should add that this is partially covered here (with a "no"):
How do we delete a release in TFS 2013 Release Management?
But it's only really the drop folder I care about not the release.
The answer is still basically "no". It's clear you already get how all of the pieces work -- the retain indefinitely flag is set when a release starts, and it's up to you to manually clear it if you don't want the build to be retained.
That said, it really should be a configurable option. It just isn't.
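If you do end up scripting it yourself, the core of it is small. A rough sketch for a single rejected build: the PATCH call assumes the newer (non-XAML) builds and the TFS 2015+ Build REST API, and every URL, credential, build ID, and path below is a placeholder; for XAML builds you would have to go through the client object model instead.

```
# 1) Clear the retain-indefinitely flag on the rejected build.
curl --ntlm -u "DOMAIN\\user:password" \
  -X PATCH -H "Content-Type: application/json" \
  -d '{"keepForever": false}' \
  "http://tfs:8080/tfs/DefaultCollection/MyProject/_apis/build/builds/4321?api-version=2.0"

# 2) Delete the build's drop folder (run wherever the drop share is mounted).
rm -rf "/mnt/drops/MyBuildDefinition/MyBuild_20150706.2"
```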

Tomcat 6 | What's the significance of antiResourceLocking & antiJARLocking?

I am working on a project where we'll be using Tomcat 6.0.20 for Development and production.
I came across some issues related to hot deployment which require one to set Context.antiResourceLocking to false in server.xml. I have some questions about antiResourceLocking and antiJARLocking.
I have gone through the reference at http://tomcat.apache.org/tomcat-6.0-doc/config/context.html.
What I can't understand is what exactly is meant by a JAR getting locked or a resource getting locked. What I have read so far is that the locking problem usually shows up when you undeploy an application and the undeploy fails because a process holds a lock on a file or JAR. Can someone please point me to anything where I can read more about this issue?
My questions are:
1) If I set antiJARLocking and/or antiResourceLocking to false, what problems can I get? Can someone please provide an example?
2) Is it bad practice to set these attributes to false in a production environment?
3) Is it true that locking won't occur on a Linux box as frequently as it can on a Windows box?
Appreciate your help.
Thank you.
Govind N.
Here are my answers to these:
1) From what I can tell, setting antiJARLocking and/or antiResourceLocking to false could only cause problems on Windows (though I vaguely remember a Tomcat developer claiming that it also affects Linux -- I'm disregarding that because I have seen zero evidence of it, and no examples / detailed explanations proving it).
2) It is only bad practice to set these to false when Tomcat is running on Windows.
3) I have been running Tomcat on multiple Linux distributions and versions for more than ten years, and I have never once seen a JAR-locking or resource-locking problem due to not setting one of those attributes to true. As far as I know it doesn't happen, though it might depend on the filesystem implementation; I have always used ext2, ext3, or ext4.
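For reference, here is roughly where those attributes go; a minimal Context element (the values shown are only illustrative):

```
<!-- META-INF/context.xml, or the Context element in conf/server.xml or
     conf/Catalina/localhost/myapp.xml. Both attributes default to false;
     they are mainly worth enabling when Tomcat runs on Windows. -->
<Context antiResourceLocking="true" antiJARLocking="true">
    <!-- the rest of the context configuration goes here -->
</Context>
```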
If you still have questions about this, ask about it on the Tomcat-user mailing list.
Cheers!
Jason Brittain
Co-author, Tomcat: The Definitive Guide (O'Reilly)
1) I had built a system around svn to automatically build and deploy a webapp; the deployment was done via the Tomcat Ant tasks. With antiJARLocking and/or antiResourceLocking set to false, the application was not undeployed properly, because Tomcat could not delete some JARs and the log4j.properties config file, so the deploy failed. I therefore had to set these attributes to true, and Tomcat then made a copy of the webapp in the temp dir. This makes deployment slower, and with nearly every redeploy the temp dir grew in size, so I had to set up a procedure to delete older deployments of my app from the temp dir (see the sketch below). It is safe to delete deployments from the temp dir at any time, because Tomcat will redeploy the app to the temp dir when it needs it.
2) From the Tomcat docs, I understood that the problems with JAR locking or resource locking occur only on Windows. I wouldn't set these attributes to true in a production environment, because there's no need to redeploy so often, and with Java it's always a good idea to restart the server after a redeploy in production (an OutOfMemoryError is always lurking in the dark, even if your own code doesn't leak). Another minor issue is that, because the app is deployed to the temp dir, if you modify a JSP or another file in the webapps dir, the change won't be picked up unless you also copy it to the temp dir.
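The temp-dir cleanup procedure mentioned in 1) doesn't need to be fancy; something like this from cron is enough (the path is a placeholder for wherever your Tomcat keeps its temp copies):

```
# Remove expanded webapp copies in Tomcat's temp dir that are older than 7 days.
# Safe to run at any time: Tomcat re-copies the app to the temp dir when needed.
find /opt/tomcat/temp -mindepth 1 -maxdepth 1 -type d -mtime +7 -exec rm -rf {} +
```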
