I have an integration stream with a production baseline and several development streams as child streams. When there are independent changes in different streams, it works fine. But suppose a change to a file in stream A is delivered to the int stream. Stream B is not aware of this, and its developer makes his changes; when B is delivered to the int stream, things don't work, because B was not aware of the changes done in A and did not take them into account while writing his code. Both dev streams use hijacked files and snapshot views.
I see two possible solutions here, but I am not sure either would really work:
1. Merge changes from int to B as soon as A is delivered to int. Here there could be an issue when a hijacked file has changes on the same lines.
2. Merge changes from all dev streams to B, which does not look good, as B may not need all these changes.
Could you please advise how best to tackle this?
Ideally, you would rebase B with a baseline from int (or, if that is inconvenient, deliver from int to B) in order to get all changes from int into B, and resolve potential conflicts locally (in the B UCM view).
Then, and only then, you put a new baseline on B, and deliver that to int.
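For reference, a minimal cleartool sketch of that sequence (the view tag, baseline, stream, and PVOB names are hypothetical, and exact flags can vary by ClearCase version):

cleartool rebase -baseline baseline_int_1@\myPVob -view B_view
# resolve any merge conflicts in the B view, build and test, then:
cleartool rebase -complete -view B_view
cleartool mkbl -view B_view BL_B_ready
cleartool deliver -stream B@\myPVob -target int@\myPVob -complete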
Theory says that three specific problems can occur when concurrent database transactions fail to be serializable:
Lost update, where you write over another uncommitted write
Dirty read, where you read a write before it is committed
Non-repeatable read, where you read the same item twice without getting the same information
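For illustration, a minimal lost-update schedule, with hypothetical transactions T1 and T2 on a hypothetical item x (time runs left to right):

T1: R(x)         W(x)  C
T2:       R(x)               W(x)  C

T2 reads x before T1 writes it; when T2 later writes x, it overwrites T1's update with a value computed from the stale read, so T1's update is lost. No serial execution (T1 then T2, or T2 then T1) produces this result, which is also why an R in one transaction followed by a W in another can already break serializability.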
However, I am confused because in most of the exercises I have done online, the R before W also seems to be an issue.
In this specific exercise, taken from here, the R before W is highlighted as a problem.
My best guess is that R before W is a problem because serializability means that I could change the order of those instructions without changing the outcome in both transactions.
The problem is that the solution provided (to push W(y) to the end) makes no sense with that explanation, as pushing it doesn't prevent the order from changing after that.
Another exercise going in the same direction:
(Taken from here)
Suppose you make a baseline in a child stream (e.g. release), name it baseline_rel_X, and then at the same time deliver the changes to the parent stream (e.g. integration) and also make a baseline there called baseline_int_Y. At that point, the baselines baseline_rel_X and baseline_int_Y are effectively the same (notwithstanding the different streams, each element will be the same if compared).
Is there a way to relate (establish equality) between a baseline in the parent stream and its corresponding baseline in a child stream, in this example relate baseline_rel_X to baseline_int_Y, given that their names are different?
The reason we want to do this is to help us list all the files to be deployed to PROD, which corresponds to the parent (integration) stream above. We make many baselines in our child (release) stream, which corresponds to our TEST/UAT environment, until we collect enough changes to make one in the integration stream, which goes into PROD. You could say that there is a one-to-many relationship between baselines in PROD and TEST/UAT. So we want to take the integration baseline that is currently in PROD, relate it to its original baseline in the release stream (that unfortunately has a different name), and then do a diff between that and the most recent baseline to list all the changes we've made since we last released to PROD.
I know this sounds confusing but I am hoping it makes some sense.
Rather than trying to:
do some complex or long cleartool diffbl
rely on a naming convention you cannot change
I would record, in an attribute on the object baseline:baseline_int_Y@\pvob, the name of the baseline from which you did the deliver (baseline_rel_X).
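A possible cleartool sketch (the attribute type name SOURCE_BASELINE is hypothetical; string attribute values need embedded double quotes):

cleartool mkattype -nc -vtype string SOURCE_BASELINE@\pvob
cleartool mkattr SOURCE_BASELINE \"baseline_rel_X\" baseline:baseline_int_Y@\pvob
cleartool describe -aattr -all baseline:baseline_int_Y@\pvob

The describe readback gives you a scriptable way to map the PROD baseline back to its TEST/UAT origin later.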
Other ways would be:
seek a baseline in rel which would be close enough to the deliver activity name (named after the stream and the deliver date): that is quite imprecise.
look for the hyperlink which should exist between the source and the destination baseline. Again, a bit complex.
What is the difference between branches and streams in ClearCase?
A branch is a classic versioning way to parallelize the history of versions for a given file: see "When should you branch".
A Stream is not a branch: it is just metadata able to memorize what baselines any view referencing that Stream will see.
When you create a Stream, nothing happens (no branch is created).
But a Stream name will be used when a file is checked out: any view will set its config spec so as to create a branch named after the Stream, in order to isolate the development effort in that branch.
(See "How do I create a snapshot view of some project or stream in ClearCase?")
This is why it is important to adequately name a Stream: If I create a Stream named "VonC", you will eventually see (in the version tree for any modified file) a branch named "VonC": what is the purpose of a branch "VonC"?
If I create a Stream named "REL2.2_FIX", you will see branches named "REL2.2_FIX" and will infer that any view referencing that Stream is there to produce fixes on the release 2.2: a much more useful name. (This is why I don't like the "one stream per developer model")
So if you have any writable component, a Stream can be considered a template for branches:
You declare what you need in a stream (what baseline you want to see)
You create a view on that stream
Any checkout will create a branch named after the Stream.
(And that is why so many UCM users mix or equate "Stream" with "branch")
But if you have only non-writable components in your project, then a Stream is just the list of baselines (labels on components) that you want to see in any view you will create on said Stream.
That becomes a visualization mechanism, useful for a testing environment where you only need to access precise versions of a set of components in order to test your system.
In that case, no branches will ever be created, since no checkout will ever be made on any file: the components are declared non-writable in the UCM project.
The other major difference between a Stream and a branch is the organization of Streams in a hierarchy (parent Stream / sub-Streams).
That hierarchy simply doesn't exist for branches: when you have 3 branches A, B, and C:
you don't know where to merge from branch A once you have finished your work on it.
any merge you do has the same meaning: A->B, or C->A, or B->C, or ...
With Stream, you would have:
MyProject_Int
 |
 --MyProject_Dev
     |
     --MyProject_Feature1
The hierarchy of Streams is there to:
introduce a possible workflow of merges (you know where you should merge from one Stream to another: namely, its parent). It is not mandatory, but at least you have a visual way of knowing that:
Feature1, once fully developed, will get back to (be merged into) MyProject_Dev (its parent Stream), and that:
MyProject_Dev, once a stable state is reached, can be merged into its parent Stream MyProject_Int, where integration tests can be conducted while development goes on uninterrupted in MyProject_Dev.
add a meaning to those merges:
merging from a sub-stream to its parent or any other ancestor stream (for instance, you can merge directly from MyProject_Feature1 to MyProject_Int if you have to) is called a deliver.
merging from a parent Stream (like MyProject_Dev) to an immediate sub-Stream (like MyProject_Feature1) is called a rebase.
Its purpose is to ensure that Feature1 is developed with the latest changes from Dev, in order to make the final deliver as painless as possible: with regular rebases, the common set of code will not have diverged too much between the two parallel histories of the two branches derived from those two Streams.
Keep in mind that those two UCM operations deliver and rebase are, at their core, no more than simple merges between two branches A and B.
However, because of their names, you know that you don't merge just between any two branches, but between a sub-Stream and a parent Stream (deliver), or between a parent Stream and a sub-Stream (rebase).
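As a sketch, each operation maps to a single cleartool command (the baseline name, view tag, and PVOB name below are hypothetical, reusing the stream hierarchy above):

cleartool deliver -stream MyProject_Feature1@\myPVob -target MyProject_Dev@\myPVob -complete
cleartool rebase -baseline MyProject_Dev_BL2@\myPVob -view feature1_view -complete

Under the hood, each is still a set of per-file merges between the branches named after the two Streams.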
I have a project where I need to perform a number of operations on a dynamic view. If any of those operations fails, or some error comes up in the program, I need to be able to back out the commits.
The straightforward way seems to be to simply put the commands into a queue and then, when my program finishes processing, execute the queue. However, I am concerned about some exceptional event interrupting the commits and causing an inconsistent dataset on the server.
Or, in other words, I'm looking for a way to create an svn-style 'changeset' in ClearCase dynamic views. The scripting language I'm using is Perl, if that matters.
Ideas?
Since the atomicity of operations in ClearCase is at the file level, there is no strict equivalent of an svn changeset (i.e. a "revision").
The closest thing to a changeset in ClearCase is the notion of an activity (in UCM), or a label set on a collection of files (a UCM baseline is actually closer, since it represents labels you cannot move, on a pre-defined set of files: a UCM component).
Now, UCM or not, I would recommend:
locking the branch on which you will make checkins
(that way, the vob is still accessible, and nobody is trying to add other versions on that particular branch during your "atomic" operation)
do your checkins
unlock the branch
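A minimal sketch of that lock/unlock bracket (the branch type, VOB tag, and login below are hypothetical):

cleartool lock -nusers yourLogin brtype:myBranch@\myVob
# perform your checkins, recording each new version in case a rmver rollback is needed
cleartool unlock brtype:myBranch@\myVob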
In case of trouble, while the branch is still locked, you can 'ct rmver' the versions added. (Note: use with care: a rmver cannot be undone.)
Note1: if you are not working in UCM, you will have to record all checked-in versions in order to be able to rmver them
Note2: when I said "lock the branch", I meant of course "lock for everyone except you" (-nusers yourLogin). That way, only you can make checkins; that applies to all files at LATEST on the branch on which you are working (main or another).
The problem with this approach is what the clients (the other users with their dynamic views selecting LATEST on the branch) will see during your atomic transaction.
Since those are dynamic views, they will see the files as they are checked in, one by one. That may not be good, especially if there are 200 files and the whole process takes more than a minute.
One solution would be to have those client views set their config spec to the following:
element * .../myBranch/FREEZED_LATEST
element * .../myBranch/LATEST
If you are not doing an atomic changeset commit, the label FREEZED_LATEST does not exist, and all the client views display LATEST, as they should. Any checkin is immediately seen by all.
But during your atomic commit, you could:
first set a label FREEZED_LATEST on all the current files (currently in LATEST, that is)
That means, all the clients will only see those specific versions during the atomic commit
do your process (all the way, or roll back: either way, the branch is locked, and the config spec of the clients still shows the same "freezed" content)
delete the label FREEZED_LATEST (all the clients go on seeing the new LATEST resulting from your atomic operation, and can make new versions with some checkouts of their own)
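A possible sketch of that label bracket (the VOB tag and view path are hypothetical):

cleartool mklbtype -nc FREEZED_LATEST@\myVob
cleartool mklabel -recurse FREEZED_LATEST M:\myView\myVob
# ... perform (or roll back) the checkins while the branch is locked ...
cleartool rmtype -rmall lbtype:FREEZED_LATEST@\myVob

rmtype -rmall removes the label type together with every label made from it, which flips the client config specs back to LATEST in one step.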
With v7.1.1, ClearCase supports atomic commits. You will be able to treat a set of files as one unit and check them in or roll them back based on given criteria. For more info, see
https://publib.boulder.ibm.com/infocenter/cchelp/v7r1m0/index.jsp?topic=/com.ibm.rational.clearcase.relnotes.doc/topics/c_cc_relnotes_features.htm
Lock out all other users.
Do a backup of your server.
Do your commits.
If something goes horribly wrong, restore ClearCase from backup.
I haven't used ClearCase in years, so here are a few stray and naive thoughts.
Look ahead and determine if files are out of sync.
I would lock all the files you're about to check in before checking them in, and if you fail to lock one, abort the whole mess, with a useful message.
Can you "delete" a check in? Or revert, so HEAD looks at a previous version? Define your undo of a check in.
Can you make a temporary branch, check in, then merge/rebase (my terminology is loose here)?
That way your rollback is to kill the branch. Though I remember coworkers cursing ClearCase because of its branching.
In general, queuing actions is great, but use the queue to identify potential problems before they occur. In addition, define your actions and their UNDO criteria, so if they want to do something that isn't pseudo-atomic, you can warn them, "This might get messy".
I have 3 projects A, B based on A, C based on A.
Changes in A should first be merged to B and then from B to C.
There are also changes in B not affecting A, but some of these changes need to be merged into C.
There are some changes from A which have been incorrectly merged directly from A to C, bypassing B.
(I'm using the word "merged" because we needed to merge those manually; automatic delivery would include a bunch of activities we don't need to deliver to B and C.)
To fix the problem, I now need to merge into B the changes which were merged into C but not into B. So I'm looking for a way to list all the versions in C which were created by merging from A, so that I can merge the changes for those files into B.
Thanks
list all the versions in C which have been created by merging from A
Those versions should be listed in the merge activity you had to create when you merged directly from A to C (using findmerge, I presume).
The only problem is, did you create a special "merge" activity during that findmerge?
You may just have reused the current activity on C, meaning that activity would contain versions from the current work on C, plus the versions merged from A.
The other approach would be to merge the same activities (the ones concerned by the findmerge from A to C) from A to B.
The next "normal" merge from B to C would:
do nothing for files already merged from A (since they have also been merged to B according to this "other approach")
merge evolutions from B to C for any other modified files.
I didn't use findmerge for these merges; I did it from the GUI version tree tool, creating identical activities in C for the corresponding activities in A, and merged file by file.
Unless you have only one or two files to merge, findmerge is the command to use, because:
it can take into account one or several activities
and it is not bound by the same "activity dependencies" as the ones enforced with a deliver or a rebase UCM operation.
In short, findmerge is your classical merge, able to read versions within UCM activities, but it does a non-UCM merge (no hyperlink between UCM baselines).
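A sketch of such an activity-driven findmerge (the activity and PVOB names are hypothetical; check the exact options against your ClearCase release):

cleartool findmerge . -fcsets activity:fix_from_A@\myPVob -print
cleartool findmerge . -fcsets activity:fix_from_A@\myPVob -merge -gmerge

The first call is a dry run listing the versions that would be merged; the second performs the merge, opening the graphical merge tool for any conflicting file.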