TFS: List changesets that have not been merged

Environment
TFS 2010. Three branches: Main, Development and Release.
Question
I would like to easily retrieve a list of changesets that have not been fully merged into all three branches.
For Example
Let's say I have a changeset, 100, that was a bug fix and was checked in directly to Release. I can use the Tracking feature to visualize that it exists only in Release.
But that requires me to know to look at that changeset. I'm looking for a generic list that would show me any changeset that exists in one branch, but not in all three.
What I know
I know I can compare Release to Main to see the differences. Is that my only option?
I try to associate changesets with work items, so I could query a list of non-closed work items and then, as a 'rule', verify that a changeset has been fully merged before closing the work item, performing a code compare to double-check.

From the Developer Command Prompt, you can also use the tf.exe merge command.
tf merge /candidate /recursive Release Main
will show you all the changesets that were made to Release but haven't been merged into Main.
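If your current directory is not inside a workspace where those branch names resolve as relative paths, the same query can be written with full server paths (the team project name below is just a placeholder):
tf merge /candidate /recursive $/MyTeamProject/Release $/MyTeamProject/Main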

You can get a simple list of changesets through the IDE by choosing the "Selected changesets" option in the merge wizard when merging one branch into another.
Another option is to use the API. VersionControlServer has a GetMergeCandidates method that returns an array of MergeCandidate objects; each one exposes the changeset and whether it has already been partially merged.

Related

Merging Development Branch to Main: There were no changes to merge

My Main branch has some files whose contents differ from the same files in the Development branch. The Development branch has the correct version of these files, but when I try to merge it into the Main branch (the target), I get a message saying
There were no changes to merge
How can I resolve that problem so that the Main branch ends up with the correct version of those files?
When merging files, TFS doesn't just look at the differences between the two branches; it also keeps track of whether you've ignored these changes in a previous merge attempt. When there are conflicts, TFS offers you three options:
Merge
Keep Source
Keep Target
When you pick Keep target or when you manually merge and deselect certain changes, TFS will mark these changes as "resolved" and will not offer them again when you try to merge in the future. This is called a "merge credit".
You can also create these merge credits from the command line by using tf merge /discard, which tells TFS to ignore the changes in those files/changesets when considering future merges.
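For example, a command along these lines (the paths and changeset number are placeholders) records changeset 1234 as merged from Development to Main without actually taking its content, which creates exactly this kind of merge credit:
tf merge /discard /recursive /version:C1234~C1234 $/Project/Development $/Project/Main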
There are two ways to force TFS to reconsider these changes:
Use a force merge. From the command line you can initiate a merge in which TFS will temporarily ignore its records and will offer you every differing file for merging. This can be a lot of work, but once done your merge history will be back in shape. To issue a force merge, run
tf merge $/Source/Folder/File $/Target/Folder/File /force /version:T
This will almost certainly raise a merge conflict, which you can resolve to get the right changes into the target branch.
Undo the previous merge using Rollback. If you've recently done the merge in which changesets were discarded, find it in the history, right-click the changeset, pick Rollback, and check in the code that has been undone. This will actually remove all of the changes in that changeset and will reset the "merge credits". Once this has been done you can redo the merge and do it right this time. This can also be done from the command line using tf rollback.
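For reference, rolling back a single changeset from the command line looks roughly like this (the changeset number and path are placeholders):
tf rollback /changeset:1256 /recursive $/Project/Main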

How to undo a merge of file in ClearCase?

I was experimenting in the ClearCase merge manager and mistakenly merged one file into another file (which turned out to be the parent of the file).
I want to revert it back now.
How to cancel that merge for that specific file?
All you need to do, if you haven't yet completed the merge is to:
undo the checkout of the merged, checked-out version in the destination view (the view used to make the merge)
relaunch the merge manager.
You will see that, during the second execution of the merge manager:
any version already merged will directly be displayed as merged
the version whose checkout you undid will again be listed as to be merged.
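If you prefer the command line to the GUI, the undo-checkout step is one command per merged element (the file name is a placeholder); -keep preserves the merged content in a .keep file in case you want to look at it later:
cleartool uncheckout -keep a.c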
I know this post is old, but perhaps adding an additional detail will be useful to future users as it was to me.
To add one detail to VonC's answer: I saved my merge before re-launching the merge manager, and hit Resume upon re-launch instead of New.
This approach worked as VonC said.

How to exclude binaries from being delivered?

Recently I was asked to perform an inter-project delivery.
During the delivery, a lot of merge requests popped up, even for binaries.
I skipped them and drew the merge arrows one by one. It was nerve-racking work.
Is there any way to exclude binaries from being merged, or some command-line option to draw the merge arrows for all the binaries?
(I am using ClearCase UCM.)
Yes, you can change their type manager to a type with the "copy on merge" option (or "never merge", if you really don't want them merged at all).
Such a type inherits from compressed_file, but resolves any merge by copying the source version over the checked-out destination version.
See "Clearcase UCM is trying to merge pdf files" for an example.
You also have the IBM technote: "Handling binary files in ClearCase".
So you have the choice between a type that is copied on merge, or one that is never merged at all.
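As a rough sketch (the type names and comments are placeholders, and you would need the rights to create element types in the relevant VOB), creating such types and moving a binary element onto one of them could look like:
cleartool mkeltype -supertype compressed_file -mergetype copy -c "binary, copy on merge" BIN_COPY_MERGE
cleartool mkeltype -supertype compressed_file -mergetype never -c "binary, never merge" BIN_NEVER_MERGE
cleartool chtype BIN_COPY_MERGE big_report.pdf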

Reverse the changeset of an activity in ClearCase

I have a requirement posted by the development team to reverse all changes in a given UCM activity. The constraint is that we do not have delete rights. I know I can do an lsactivity to list all elements in an activity with their respective versions, and in an ideal world I would then be able to delete those versions.
But the SCM policy does not permit us to delete/rmver anything, so I am left with merging back one version. Say I have version 5 of a.java checked in under an activity. One way I think I could achieve this is to find version 4 (using -predecessor) and blindly copy that version 4 in as version 6. Assume for now that each file has only one version in the activity; if a file had more than one version checked in through an activity this would be more complex, so let's ignore that case.
Any other ideas, or would my approach work or not?
One more robust way would be to:
list all files in an activity
for each file, find the oldest version
make a negative merge, or subtractive merge.
A subtractive merge can be performed to exclude or bypass bad versions on a branch without actually removing the bad versions.
cleartool merge with the -delete option allows a user to merge from the last known good version to a new version on the same branch, excluding the work done in the versions identified as bad.
That would be compliant with your SCM policies in place.
That is in essence what the cset.pl script mentioned by Tamir does, as I explain in "Clearcase: how to rollback all changes on specific branch?"
ccperl cset.pl -undo myActivity
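For a single element, a subtractive merge done by hand would look roughly like this (the branch and version are placeholders, and the element must be checked out first):
cleartool checkout -nc a.java
cleartool merge -to a.java -delete -version \main\dev\5
cleartool checkin -nc a.java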

What are the best practices for database scripts under code control

We are currently reviewing how we store our database scripts (tables, procs, functions, views, data fixes) in subversion and I was wondering if there is any consensus as to what is the best approach?
Some of the factors we'd need to consider include:
Should we check in 'Create' scripts, or check in incremental changes as 'Alter' scripts
How do we keep track of the state of the database for a given release
It should be easy to build a database from scratch for any given release version
Should a table exist in the database listing the scripts that have run against it, or the version of the database etc.
Obviously it's a pretty open ended question, so I'm keen to hear what people's experience has taught them.
After a few iterations, the approach we took was roughly like this:
One file per table and per stored procedure. Also separate files for other things like setting up database users, populating look-up tables with their data.
The file for a table starts with the CREATE command and a succession of ALTER commands added as the schema evolves. Each of these commands is bracketed in tests for whether the table or column already exists. This means each script can be run in an up-to-date database and won't change anything. It also means that for any old database, the script updates it to the latest schema. And for an empty database the CREATE script creates the table and the ALTER scripts are all skipped.
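A minimal sketch of such a guarded script, in T-SQL syntax for illustration (the table and column names are invented):
IF OBJECT_ID('dbo.Customer', 'U') IS NULL
    CREATE TABLE dbo.Customer (Id INT NOT NULL PRIMARY KEY, Name NVARCHAR(100) NOT NULL);
GO
-- Later schema change, appended to the same file; skipped if the column already exists.
IF COL_LENGTH('dbo.Customer', 'Email') IS NULL
    ALTER TABLE dbo.Customer ADD Email NVARCHAR(256) NULL;
GO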
We also have a program (written in Python) that scans the directory full of scripts and assembles them in to one big script. It parses the SQL just enough to deduce dependencies between tables (based on foreign-key references) and order them appropriately. The result is a monster SQL script that gets the database up to spec in one go. The script-assembling program also calculates the MD5 hash of the input files, and uses that to update a version number that is written in to a special table in the last script in the list.
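A stripped-down sketch of that assembler idea in Python (the dependency ordering and per-file hashing described above are simplified away; the file layout and version table are assumptions):
import glob, hashlib

scripts = sorted(glob.glob('schema/*.sql'))               # one file per table / proc / etc.
body = '\n'.join(open(path).read() for path in scripts)
version = hashlib.md5(body.encode('utf-8')).hexdigest()   # changes whenever any input file changes

with open('build_database.sql', 'w') as out:
    out.write(body)
    # Record the computed version in a (hypothetical) SchemaVersion table as the last step.
    out.write("\nUPDATE SchemaVersion SET Version = '%s';\n" % version)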
Barring accidents, the result is that the database script for a given version of the source code creates the schema that this code was designed to interoperate with. It also means that there is a single (somewhat large) SQL script to give to the customer to build new databases or update existing ones. (This was important in this case because there would be many instances of the database, one for each of their customers.)
There is an interesting article at this link:
https://blog.codinghorror.com/get-your-database-under-version-control/
It advocates a baseline 'create' script followed by checking in 'alter' scripts and keeping a version table in the database.
The upgrade script option
Store each change to the database as a separate SQL script. Store each group of changes in a numbered folder. Use a script to apply changes a folder at a time and record in the database which folders have been applied (a minimal sketch of such a runner follows the pros/cons list below).
Pros:
Fully automated, testable upgrade path
Cons:
Hard to see full history of each individual element
Have to build a new database from scratch, going through all the versions
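A minimal sketch of such a runner in Python (sqlite3 stands in for the real database driver; the folder layout and tracking-table name are assumptions):
import glob, os, sqlite3

conn = sqlite3.connect('app.db')
conn.execute('CREATE TABLE IF NOT EXISTS applied_folder (name TEXT PRIMARY KEY)')

for folder in sorted(glob.glob('migrations/[0-9]*')):        # numbered folders: 001, 002, ...
    name = os.path.basename(folder)
    if conn.execute('SELECT 1 FROM applied_folder WHERE name = ?', (name,)).fetchone():
        continue                                              # this folder was already applied
    for script in sorted(glob.glob(os.path.join(folder, '*.sql'))):
        conn.executescript(open(script).read())               # run every script in the folder
    conn.execute('INSERT INTO applied_folder (name) VALUES (?)', (name,))
    conn.commit()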
I tend to check in the initial create script. I then have a DbVersion table in my database and my code uses that to upgrade the database on initial connection if necessary. For example, if my database is at version 1 and my code is at version 3, my code will apply the ALTER statements to bring it to version 2, then to version 3. I use a simple fallthrough switch statement for this.
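In place of the fallthrough switch (a C#-style construct), a loop over numbered upgrade steps gives the same effect; a sketch in Python, with the database access and the actual schema changes left as hypothetical placeholders:
CODE_VERSION = 3

def upgrade_1_to_2(db):
    db.execute('ALTER TABLE Customer ADD COLUMN Email TEXT')   # hypothetical version 2 change

def upgrade_2_to_3(db):
    db.execute('ALTER TABLE Customer ADD COLUMN Phone TEXT')   # hypothetical version 3 change

UPGRADES = {1: upgrade_1_to_2, 2: upgrade_2_to_3}

def upgrade_database(db):
    version = db.execute('SELECT Version FROM DbVersion').fetchone()[0]
    while version < CODE_VERSION:            # apply each step until the schema matches the code
        UPGRADES[version](db)
        version += 1
        db.execute('UPDATE DbVersion SET Version = ?', (version,))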
This has the advantage that when you deploy a new version of your application, it will automatically upgrade old databases and you never have to worry about the database being out of sync with the software. It also maintains a very visible change history.
This isn't a good idea for all software, but variations can be applied.
You could get some hints by reading how this is done with Ruby On Rails' migrations.
The best way to understand this is probably to just try it out yourself, and then inspecting the database manually.
Answers to each of your factors:
Store CREATE scripts. If you want to check out version x.y.z, then it'd be nice to simply run your create script to set up the database immediately. You could add ALTER scripts as well to go from the previous version to the next (e.g., you commit version 3, which contains a version 3 CREATE script and a version 2 → 3 ALTER script).
See the Rails migration solution. Basically they keep the table version number in the database, so you always know.
Use CREATE scripts.
Using version numbers would probably be the most generic solution — script names and paths can change over time.
My two cents!
We create a branch in Subversion and all of the database changes for the next release are scripted out and checked in. All scripts are repeatable so you can run them multiple times without error.
We also link the change scripts to issue items or bug ids so we can hold back a change set if needed. We then have an automated build process that looks at the issue items we are releasing and pulls the change scripts from Subversion and creates a single SQL script file with all of the changes sorted appropriately.
This single file is then used to promote the changes to the Test, QA and Production environments. The automated build process also creates database entries documenting the version (branch plus build id). We think this is the best approach for enterprise developers.
The create script option:
Use create scripts that will build the latest version of the database from scratch, empty except for the default lookup data.
Use standard version control techniques to store, branch, and tag versions and to view the history of your objects.
When upgrading a live database (where you don't want to lose data), create a blank second copy of the database at the new version and use a schema-comparison tool (such as Red Gate's) to generate the upgrade script.
Pros:
Changes to files are tracked in a standard source-code like manner
Cons:
Reliance on manual use of a 3rd party tool to do actual upgrades (no/little automation)
Our company checks them in simply because someone decided to put it in some SOX document of ours. It makes no sense to me at all, except possibly as a reference document. I can't see a time we'd pull them out and try to use them again, and if we did we'd have to know which one ran first and which one to run after which. Backing up the database is much more important than keeping the Alter scripts.
For every release we need to provide one update.sql file which contains all the new table scripts, ALTER statements, new/modified packages, roles, etc. This file is used to upgrade the database from one version to the next.
Whatever we include in that update.sql file also needs to go into the individual files for the respective objects: an ALTER statement adding a column has to be folded into the table's own script (the table script is modified, rather than the ALTER statement being appended after the CREATE TABLE in the file), and the same applies to new tables, roles, etc.
So whenever a user wants to upgrade, he uses the update.sql file.
If he wants to build from scratch, he uses build.sql, which already contains all of the above statements; this keeps the database in sync.
In my case, I built a shell script for this: https://github.com/reduardo7/db-version-updater
How to do this is an open question.
In my case I am trying to create something simple that is easy for developers to use, and I do it under the following scheme.
Things I tested:
File-based script handling in Git using GitLab CI
It does not work: collisions are created, the administration part has to be done by hand in case of disaster, and the development part is too complicated.
Use of permissions and access via MySQL clients
There is no traceability of changes to the database, and the transition to production is manual.
Use of programs mentioned here
They require uploading the structures and many adaptations, and you usually end up with change control much as before.
Repository usage
Could not control the DRP part.
I could not properly control the backups.
I don't think it is a good idea to have the backups on the same server, and it generates high lag for the process.
This is what worked best for me:
Manage permissions per user and generate traceability of everything that is sent to the database
Multi-platform
Use of development, QA and production databases
Always back up before each modification
Manage an open repository for change control
Multi-server
Deactivate/activate access to the web page or app through endpoints
The initial project is at:
https://hub.docker.com/r/arelis/gitdb
The article mentioned above has a new URL: https://blog.codinghorror.com/get-your-database-under-version-control/
It's a bit old, but the concepts are still there. Good read!
