I did a rebase yesterday from the parent stream. When I did a folder compare of all the files in both the parent and child streams, I saw 32 changes, including the changes I had delivered to the parent stream from a different child stream.
This is what I did:
Delivered code to "Int" Stream from Stream B
Baselined "Int" Stream and then recommened that baseline
rebased Stream A
Below is my stream structure:
Int -> A
    -> B
I am not sure how to pull the changes from Int again.
Your stream A should have everything it needs from Int.
If you need B to get those changes from Int (delivered "from a different child stream"), you can also rebase B, like you did A.
If the issue is:
changes exist in Int
A doesn't reflect those changes after rebase
Then display a version tree of the parent folder of those changes (both in the source and in the destination view): you might see some clue as to why the versions of the elements in that folder weren't merged.
For instance, check that the parent folder doesn't have an evil twin in both branches.
If that were the case, you would need to restore the proper content of that folder in Int before rebasing A again (and even B).
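For instance (the paths are placeholders), from a view on each stream:

cleartool lsvtree -graphical /vobs/myvob/path/to/parentFolder
cleartool describe -fmt "%On\n" /vobs/myvob/path/to/parentFolder/someElement@@

The trailing @@ makes describe look at the element itself, not at one of its versions: if the OID printed in the Int view differs from the one printed in the A view, the same name points to two distinct elements, which is an evil twin.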
If the issue is:
changes exist in Int
B doesn't reflect those changes after rebase
You cannot rebase from Int to B
You need to:
make a dummy change on Int
put a new baseline on Int
rebase that baseline on B
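A minimal sketch of that sequence, assuming a view int_view on Int, a view b_view on B, and a pvob \mypvob (all placeholder names):

cleartool mkbl -view int_view -incremental -c "carry Int changes to B" BL_FOR_B
cleartool rebase -baseline BL_FOR_B@\mypvob -view b_view
cleartool rebase -complete -view b_view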
Related
I have a read-only ClearCase branch A.
I want to create an exact copy/replica of branch A as another branch B.
Can we make such an exact replica of branch A at byte level?
Is this possible?
Update:
As suggested by @VonC, I tried to make a change in branch A (let's call it "A_read"), and it gave me an error at checkout. Sorry, I had to redact some info as per company policy.
You simply create a viewB which starts at a label A you have set in viewA.
Or you start from A/LATEST:
That viewB will have a selection rule like:
element * .../A/LATEST -mkbranch B
# or
element * A -mkbranch B
element * /main/LATEST
That means:
it selects the latest version in branch A (.../A because A can derive from /main, or /main/someOtherBranch, or /main/X/Y/...)
it will create branch B only if there is a checkout.
A branch in ClearCase has a starting point, and will have versions of its own only once changes are versioned (checkout/checkin).
Right now, with that selection rule, branch B is identical to A, in that it starts from the versions selected by A.
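For instance (the branch type and file name are placeholders), remembering that the branch type must exist before the -mkbranch clause can fire:

cleartool mkbrtype -nc B@/vobs/myvob
cleartool checkout -nc foo.c

Run in viewB, that checkout creates foo.c@@/.../A/B/0 (identical to the A version the view selected); only the checkin that follows, creating .../B/1, makes B diverge from A.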
Can someone guide me on the right and best way to deal with this? I have two active dev branches in which the same code base is being modified, and one integration branch, in a base ClearCase environment. I want to prevent code promotion from branch 2 to the integration branch and allow merges only from branch 1 to the integration branch. Please advise.
If there are different users delivering from the dev streams to the integration stream, you could (using cleartool lock -nusers ... stream:aStream@\vobs\apvob):
lock devstream1 for all except dev1 (that way you are sure dev1 can only work on devstream1),
lock devstream2 for all except dev2 (that way you are sure dev2 can only work on devstream2),
lock intstream for all except you and dev1 (that way only dev1 can deliver to intstream)
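For example (the pvob path is the one above, and yourLogin stands for your own account):

cleartool lock -nusers dev1 stream:devstream1@\vobs\apvob
cleartool lock -nusers dev2 stream:devstream2@\vobs\apvob
cleartool lock -nusers yourLogin,dev1 stream:intstream@\vobs\apvob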
What if I or dev1 mistakenly promoted code from devstream2 to intstream?
Then you would need a preop deliver_start trigger (with mktrtype).
That trigger would check the OIDs of the streams (hard-code the OIDs in the trigger, since they are immutable): cleartool describe -fmt %On <stream-name>
If one of them is the one for devstream2, the trigger would exit in error, denying the deliver.
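A sketch of that setup (the trigger name and script path are placeholders, and you should check in the mktrtype man page which CLEARCASE_* environment variable exposes the source stream to the script):

cleartool describe -fmt "%On\n" stream:devstream2@\vobs\apvob
cleartool mktrtype -ucmobject -all -preop deliver_start -exec "ccperl \\server\triggers\deny_devstream2.pl" deny_devstream2_deliver

The first command prints the OID to hard-code in the script; at deliver time, the script would compare it with the OID of the stream the deliver comes from and exit with a non-zero status on a match.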
Since it sounds like you're using base ClearCase, you can use a preop 'checkin' trigger. The script the trigger executes would look to see whether the checked-out version about to be checked in has any incoming Merge hyperlink(s). If it does, the script can verify that the "from" end of the hyperlink comes from branch1 and exit with a 0 status if so. If it comes from any other branch, the script prints a descriptive error message and exits with a non-zero status (thus preventing the checkin).
When creating the trigger type, you can limit the scope of the trigger to the integration branch (which I'll call 'my_int_branch' in the example below) which helps with performance. The command line might look something like this:
% cleartool mktrtype -element -all -preop checkin -brtype my_int_branch -exec path_to_allow_branch1_merge_script allow_branch1_merge
In the script, you can get the Merge hyperlink(s) attached to the checked out version with something like:
cleartool describe -fmt '%[hlink:Merge]p\n' $CLEARCASE_PN
If there are any incoming Merge hyperlinks, you'll get one line per hyperlink looking something like this:
"Merge#2877#/vobs/myvob" <- "/vobs/myvob/mydir/file.c##/main/branch1/3"
The script then just has to verify that the outer branch of the "from" version is "branch1".
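A minimal sketch of such a script, assuming a Unix shell and the hyperlink format shown above (the script name matches the mktrtype example; untested, to be adapted):

#!/bin/sh
# allow_branch1_merge: preop checkin trigger script.
# CLEARCASE_PN is set by ClearCase to the pathname being checked in.
bad=$(cleartool describe -fmt '%[hlink:Merge]p\n' "$CLEARCASE_PN" |
      grep '<-' | grep -v '/branch1/[0-9]*"$')
if [ -n "$bad" ]; then
  echo "Checkin denied: merges into my_int_branch must come from branch1:" >&2
  echo "$bad" >&2
  exit 1
fi
exit 0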
Running CC 8.0.0.3 w/UCM and ClearQuest enabled.
We have a build system which is supposed to run mkbl -view after a successful build in the integration stream. Normally this completes in a few seconds after each build. That works fine, but it turns out one build job omitted the mkbl command.
I am trying to retroactively apply the mkbl command for those. I need to do this 4x to come up to date.
BL label    # activities    # element versions
1.2.6       57              513
1.2.7       16              107
1.3.0       26              159
1.4.0       60              460
I have attempted the command:
cleartool mkbl -view my_view -act ${ACT_LIST} -inc -c "${LABEL}" ${LABEL}
where ACT_LIST is the comma-separated list of activities (activity@/mypvob) since the prior baseline, and LABEL is my label.
It has been running for over 12 hours and still has not even come back to indicate it is creating the baseline.
Am I doing this wrong? Is it just slow? Is it possible to retroactively apply a baseline?
Thanks.
First, it would be quicker if your first non-missing baseline were promoted to "full" instead of incremental: the delta (of files or activities) between two baselines would then be quicker to compute.
You can keep creating incremental baselines on top of that full one.
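For instance (the baseline name and pvob are placeholders):

cleartool chbl -full baseline:LAST_GOOD_BL@/mypvob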
Then you can start testing by making a baseline with just one activity and see if that completes: making multiple baselines with the same label will lead to multiple baselines with different IDs (but the same title).
Is it possible to retroactively apply a baseline?
Baselines are created sequentially on the stream: they cannot be "inserted".
If you have already started to put new baselines on that stream (after missing a couple), you cannot go back and create the missing ones.
But the OP mentions:
No, there were no subsequent baselines and the first baseline's list of activities goes back to the last actual baseline.
Since mkbl fails (too long), I would:
change the config spec of the UCM view in order to select files before a certain date (add a time-based selection rule at the start of its config spec)
put a label on all files of the component (mklbtype+mklabel)
import that non-UCM label as a baseline (mkbl -import)
See if you can create your baselines that way.
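A sketch of those three steps (the label name, vob paths and the date of the time rule are all placeholders; check the exact selector syntax in the mklabel and mkbl man pages):

# 1. at the top of the UCM view's config spec, select versions as of the build time:
#      time 25-Mar-2014.14:30
# 2. label every file of the component (the label type lives in the component vob):
cleartool mklbtype -nc BL_1.2.6_EQUIV@/vobs/myvob
cleartool mklabel -recurse BL_1.2.6_EQUIV@/vobs/myvob /vobs/myvob/mycomponent
# 3. import that non-UCM label as a (full) baseline:
cleartool mkbl -import BL_1.2.6_EQUIV@/vobs/myvob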
The OP Ian W concludes in the comments:
neither incremental nor full worked.
I tried mkbl -import, but then I could not recommend and use as a future baseline, so I gave up.
I had the 4 baseline equivalents all labelled, then made a full baseline of the latest, which worked and should keep me moving forward.
In the GFS paper, Section 4.1 describes how GFS is able to make concurrent mutations within a directory while only requiring a read lock on the directory for each client - there's no actual inode in GFS, so clients are free to create, remove, or mutate /x/y/somefile while only requiring a read lock on /x/ and /x/y/.
If there are no inodes, then is it still possible to maintain an explicit tree structure? The only way I can see this working is if the master maintains a flattened, 1-dimensional mapping from directory or file names to their metadata, allowing for fast file creation and manipulation.
Suppose that some client of GFS wanted to scan the names of all files in a directory - for instance, ls. Without an iteration over all metadata nodes, how is this possible?
It might be possible for a client to maintain their own version of what they think the directory tree looks like in the GFS, but this will only work if each client keeps to their own directory.
A master lookup table offers access to a single conceptual tree of nodes. It does this by listing all paths of names to nodes. Some nodes are directories. The only data is owned by non-directory leaf nodes. Eg these paths:
/a/b/foo
/a/b/c/bar
/a/baz/
describe this tree:
\
 a/--b/--foo
     |   \
     |    c/--bar
     baz/
Every path identifies a node. The nodes that are the children of a node are the ones whose paths are one name longer in the lookup table. To list a node's children is to list all the paths in the lookup table that are one name longer than its path. What the paper means by metadata is info like whether and how a node is locked and, for a non-directory leaf node, where its (unshared) data is.
One doesn't navigate by visiting directory nodes that own data that gives child and parent node names and whether they are directories, as in Unix/Linux. Copying a leaf means copying its data to another leaf's, like Unix/Linux cat, not cp. I presume one can copy a subtree, which would add new paths in the lookup table and copy data for non-directory leaves.
One cannot use technical terms like "file" or "directory" as if they mean the same thing in both systems. What one can do is consider GFS and Unix/Linux to both manage the same kind of tree of paths of names through directory nodes to directory leaves and non-directory data-owning leaves. But after that, the other parts of the file system state (metadata and data) and their operators differ. In your mind, put "GFS-" and "Unix/Linux-" in front of every technical term other than those referring to trees of named nodes.
EDIT: Examples.
1.
Suppose that some client of GFS wanted to scan the names of all files
in a directory - for instance, ls. Without an iteration over all
metadata nodes, how is this possible?
A directory listing would return the paths in the lookup table that extend the given directory's path. GFS will offer file server commands to do such things, or that support doing such things, hiding its implementation. It would be enough (but slow) to be able to iterate through the lookup table. Eg ls /a/b:
/a/b/foo
/a/b/c/bar
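A minimal sketch of that idea, assuming the lookup table were dumped as one absolute path per line into a flat file (lookup_table.txt is an invented name):

# every path that extends /a/b (what the listing above shows):
grep '^/a/b/' lookup_table.txt

# immediate child names only: strip the prefix, keep the first component:
grep '^/a/b/' lookup_table.txt | sed -e 's|^/a/b/||' -e 's|/.*||' | sort -u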
2.
To copy source node children to be target node children: for each path that extends the source's path, add to the lookup table the path obtained by replacing that prefix with the target path. Presumably the copy command creating the new nodes copies the associated data for non-directories. Eg copy children of /a/ to /a/b/c/ adds:
/a/b/c/b/foo
/a/b/c/b/c/bar
/a/b/c/baz/
giving:
\
 a/--b/--foo
     |   \
     |    c/--bar
     |       |--b/--foo
     |       |      \
     |       |       c/--bar
     |       baz/
     baz/
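In the same flat-file picture (lookup_table.txt is still an invented name), that copy is a prefix rewrite; the data of non-directory leaves would be copied separately:

grep '^/a/' lookup_table.txt | sed 's|^/a/|/a/b/c/|' > new_paths.txt
cat new_paths.txt >> lookup_table.txt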
Is there a way I can create a view that will give me a snapshot of all the files modified in a specific ClearCase branch?
For example, say I have two branches:
product_1.0_dev
product_migration_1.0_dev
The second branch is conceived as a testing ground for upgrading our core framework dependencies. I know that if I modify a file in product_migration_1.0_dev, then I will have a /1 version under this branch, so there has to be a way to write a load rule to get this info easily into a snapshot.
Any ideas?
That would be a selection rule (not a load rule):
element * .../product_migration_1.0_dev/LATEST
element * .../product_1.0_dev/LATEST
element * /main/LATEST
Note the '...' notation (see version selector): an ellipsis wildcard which allows you to select a branch at any branch level.
Note that this config spec would select all files, not just the ones you want.
If you want to see only the files for a particular branch, you still need to select their parent directories, and those might not have a version in the product_migration_1.0_dev branch.
So the following config spec (which I invite you to test in a dynamic view first, since it is quicker; you can then reuse it in a snapshot view, with its own load rules) would be more precise:
element * .../product_migration_1.0_dev/LATEST
element -directory * .../product_1.0_dev/LATEST
element -directory * /main/LATEST
So you would select files and directories having a LATEST version in the product_migration_1.0_dev branch.
Otherwise, you select directories only, in the product_1.0_dev branch or in the main branch.
That way, you are sure to select the parent directory of an element which might have a version in the product_migration_1.0_dev branch.
If you don't do that, your view won't ever be able to select those files, because their parent directories would not be accessible (none of the directory versions from which a product_migration_1.0_dev branch starts would be selected).
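Once that config spec is validated in the dynamic view, the snapshot view version only adds load rules, e.g. (the vob name is a placeholder):

element * .../product_migration_1.0_dev/LATEST
element -directory * .../product_1.0_dev/LATEST
element -directory * /main/LATEST
load /vobs/myproduct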