I administer an SCM environment with ClearCase that has a lot of VOBs.
Many of these VOBs have not been used for a long time. I would like to know whether it is possible to determine the last modification time of these VOBs.
Another question: if I only unregister these VOBs, will CPU and memory consumption decrease on the VOB server?
In theory, to put these VOBs online again, I will only have to run a register command, right?
Is there any other approach you could recommend for managing this scenario (VOBs not being used for a long time)?
Many of these VOBs have not been used for a long time. I would like to know whether it is possible to determine the last modification time of these VOBs.
You can try using cleartool lshistory -all on a VOB tag.
I had a script which filtered the last events with:
cleartool lshistory -fmt "%Xn\t%Sd\t%e\t%h\t%u\n" -since 01-Oct-2010 -all <vobname> | grep -v lock | head -1 | grep -o '20[0-9][0-9]-[0-9][0-9]-[0-9][0-9]'
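As a sketch, you could loop over every registered VOB and print the date of its most recent event (this assumes a Unix region and that each VOB is mounted; lshistory lists events newest first, so head -1 picks the last modification):
cleartool lsvob -short | while read vobtag; do
  # suppress errors for unmounted/unreachable VOBs
  last=$(cleartool lshistory -fmt '%Sd\n' -all "$vobtag" 2>/dev/null | head -1)
  printf '%s\t%s\n' "$vobtag" "${last:-unknown}"
done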
Another question: if I only unregister these VOBs, will CPU and memory consumption decrease on the VOB server?
Yes, because there would no longer be a vob_server process associated with that VOB.
In theory, to put these VOBs online again, I will only have to run a register command, right?
Yes, although I prefer unregister/rmtag (as in "Removing ClearCase vobs") before registering and mktagging.
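For example (the storage and tag paths below are illustrative):
# take the VOB offline
cleartool umount /vobs/myvob
cleartool unregister -vob /net/vobsrv/vobstore/myvob.vbs
cleartool rmtag -vob /vobs/myvob
# bring it back online later
cleartool register -vob /net/vobsrv/vobstore/myvob.vbs
cleartool mktag -vob -tag /vobs/myvob /net/vobsrv/vobstore/myvob.vbs
cleartool mount /vobs/myvob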
Can someone point me to a way to lock a file on only a specific branch in ClearCase? Note that I want the same file to remain modifiable on all the other branches that other teams are working on...
Locking the branches as appropriate might help, but it does not sound like a good idea. Please share your thoughts.
You can lock a specific branch instance:
cleartool lock co.exe@@/main/foo
This locks that branch instance and blocks anyone from modifying that branch, while allowing all other instances -- like ci.exe@@/main/foo/2 -- to be checked out and used.
Depending on your view setup, you may have to use lsvtree or cleartool find to find all the branch instances.
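For example (the branch type name foo is just an illustration):
# list every element that has an instance of branch type foo
cleartool find . -branch 'brtype(foo)' -print
# or lock each instance in one pass; cleartool find sets CLEARCASE_PN for -exec
cleartool find . -branch 'brtype(foo)' -exec 'cleartool lock "$CLEARCASE_PN@@/main/foo"'
The second command assumes the branch hangs directly off /main; adjust the extended path to your own branching structure.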
This wouldn't be a simple cleartool lock, as that would lock the element for all branches.
A simple approach would be a cleartool checkout -reserved, but that would prevent checkins on other branches as well.
That leaves you with a preop checkout trigger, using trigger environment variables such as CLEARCASE_BRTYPE:
cleartool mktrtype -c "Prevent checkout on a branch" -element -all -preop checkout -execwin "ccperl \\shared\path\to\triggers\lock_on_branch.bat" LOCK_ON_BRANCH
The script will use:
CLEARCASE_XPN
(All operations; element triggers only) Same as CLEARCASE_ID_STR, but prepended with CLEARCASE_PN and CLEARCASE_XN_SFX values, to form a complete VOB-extended path name of the object involved in the operation.
CLEARCASE_BRTYPE
(All operations that can be restricted by branch type) Branch type involved in the operation that caused the trigger to fire. In a rename operation, the old name of the renamed branch type object.
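A minimal sketch of what lock_on_branch.bat could look like, run through ccperl (the protected branch name locked_branch is an assumption; a preop trigger script that exits with a non-zero status cancels the operation):
# lock_on_branch.bat -- veto checkouts on the protected branch
my $brtype = $ENV{'CLEARCASE_BRTYPE'} || '';
my $xpn    = $ENV{'CLEARCASE_XPN'}    || '';
if ($brtype eq 'locked_branch') {
    print "Checkout of $xpn is forbidden on branch $brtype.\n";
    exit 1;    # non-zero exit status cancels the checkout
}
exit 0;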
I have a file test.cpp. Somebody added a few lines of code; I don't know when, but I'm assuming it was within a specific range of time that I know. I want to find the activity that was used to deliver these changes. I found a lot of versions of this element in its version tree, but all the activities I was able to see were the result of a rebase. I need to find the source activity that was in charge of adding these few lines of code.
Is there any way to do that?
For each deliver activity (that you can see in the version tree), you can list the contributing activities with:
cleartool lsact -contrib activity:anact@\apvob    # on Unix: activity:anact@/vobs/apvob
See "Finding which developer activities were delivered in a specific delivery"
Then you need to describe each activity found, to see if your file is in it:
cleartool descr -l activity:anact@\apvob
Obviously, you can also use cleartool annotate in order to see the versions in that file: see "How to use ClearCase Annotate".
If you see a line which interests you, check its version number 'x' and use cleartool descr -l file@@/main/.../x to find its corresponding activity.
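For example (the version-extended path below is illustrative):
# print the annotated listing (date and author of each line) to stdout
cleartool annotate -out - test.cpp
# then ask which activity produced a suspect version
cleartool descr -fmt '%[activity]p\n' test.cpp@@/main/int/7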
Quite simply I want to get a count of files in a directory (and all nested sub-directories) as quickly as possible.
I know how to do this using find paired with wc -l and similar methods, however these are extremely slow and they pass through each file entry in each directory and count them that way.
Is this the fastest method, or are there alternatives? For example, I don't need to find specific types of files, so I'm fine with grabbing symbolic links, hidden files, etc., if I can get the file count more quickly by counting everything with no further processing involved.
The fastest method is to use locate plus wc or similar; it can hardly be beaten. The main disadvantage of this method is that it counts not the actual files, but the entries in locate's database, and that database can already be a day old.
So it depends on your task: if it tolerates the delay, I would prefer locate.
On my superfast SSD-based machine:
$ time find /usr | wc -l
156610
real 0m0.158s
user 0m0.076s
sys 0m0.072s
$ time locate /usr | wc -l
156612
real 0m0.079s
user 0m0.068s
sys 0m0.004s
On a normal machine the difference will be much much bigger.
How often the locate database is updated depends on the configuration of the host.
By default, it is updated once a day (via a cron job), but you can configure the system so that the script runs every hour or even more frequently. Of course, you can also run it on demand rather than periodically (thanks to William Pursell for the hint).
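For instance, on many Linux distributions (the exact command, and whether it needs root, varies by system):
sudo updatedb        # refresh the locate database on demand
locate /usr | wc -l  # then count against an up-to-date database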
Try this script as an alternative:
find . -type d -exec bash -c 'arr=("$1"/*); echo "$1:${#arr[@]}"' _ {} \;
In my quick, basic testing it came out faster than find | wc -l.
I currently have a jacked-up delivery from a child stream to a parent in ClearCase. If I try to undo the delivery it tells me I can't because the "integration activity has checkins" or "checked in versions".
If I try to resume the delivery it says it encountered an error attempting to checkout or merge an element, but doesn't specifically tell me which one.
So I'm looking for a way to either:
1/ Manually stop the delivery (undo all checkouts in the parent stream?)
2/ Find out which element is causing the delivery problem (is it the same as the one causing the undo problem?)
3/ Find out which element is causing the undo problem and find a way to undo the checkin (I don't know how to do this; I tried to delete a version in the version tree, but I don't have permission).
For 3/ "Find out what element is causing the undo problem", this is easy (but not recommended): you need to remove all the checked-in versions done during the complete phase of the deliver.
And that is by far the most dangerous solution, especially if any kind of activity (other checkins, baselines, ...) has taken place on the destination stream (the stream to which you are delivering files, i.e., the stream with the view you are using to deliver to).
You can see those checked-in files by describing the deliver activity (whose name always starts with deliverbl.):
cleartool descr -l activity:deliverbl.xxx@\myPVob
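To narrow the output to just the recorded versions, and then remove one (the version path below is illustrative, and rmver is destructive and irreversible, so use it with extreme care):
# list only the versions attached to the deliver activity
cleartool descr -fmt "%[versions]p\n" activity:deliverbl.xxx@\myPVob
# remove one of those versions; this cannot be undone
cleartool rmver -force file.c@@\main\integration\3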
1/ and 2/ are linked.
A good solution to easily detect the issue is to resume the deliver graphically: open the ClearCase project Explorer (clearprojexp), right-click on the source Stream and select deliver (Baseline or Activities, to default or alternate target: it doesn't matter).
ClearCase will detect that a deliver is in progress and will propose to resume.
All you need to do is check all the files with a red circle and white cross (not the files with a yellow warning sign; those are not blocking the deliver).
Once you have one of those files, right-click on it, and select "display element merge": you will have a more precise error message that you can copy-paste.
If those files are in the lost+found directory, all you need to do is edit the config spec of the view used for the deliver, and add a non-selection rule to avoid selecting anything from lost+found:
cd /path/to/your/view
cleartool edcs
# add this at the start of the config spec
element /myVob/lost+found/... -none
Then resume your deliver again, and you will see that those lost+found files are now ignored (with a non-blocking warning status attached to them).
If those files aren't in lost+found and are failing the deliver because of "Not a vob object <directory name>", the first check to do is to go to the parent directory of said files in a shell session and type cleartool ls: you will see their status.
In this case, the OP Ian reports them as hijacked, so it was simply about undoing their hijacked status.
He also reports having had to delete (rmname) some binary files, although my answer to the question "Clearcase UCM is trying to merge pdf files" points to an alternative solution (copy merge).
My recommendation: in that particular state (deliver with checkins already there), try hard to complete the deliver, not to cancel it.
I have several files that I rsync'd over to the vob and they all have times 40 minutes in the future.
I tried touch, and all that does is maintain the time 40 minutes into the future from when I touch.
I guess that ClearCase is in charge of setting the modification time and is overriding touch.
Is there another way? Is there a way to tell ClearCase to stop messing up the file time?
What option did you use when adding those files to source control?
As explained in this help page:
To preserve the modification time of the file being checked in, use the -ptime option.
If you omit the -ptime option, the modification time of the new version is set to the checkin time.
The mkelem man page adds:
On some UNIX and Linux platforms, it is important that the modification time be preserved for archive files (libraries) created by ar(1) (and perhaps updated with ranlib(1)).
The link editor, ld(1), generates an error message if the modification time does not match a time recorded in the archive itself. Be sure to use this option, or (more reliably) store archive files as elements of a user-defined type, created with the mkeltype -ptime command. This causes -ptime to be invoked when the element is checked in.
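A sketch of both approaches (the element type name file_ptime and the file names are illustrative):
# define an element type that always preserves mtime at checkin
cleartool mkeltype -supertype file -ptime -c "archives, mtime preserved" file_ptime
cleartool mkelem -eltype file_ptime -nc libfoo.a
# or preserve the time on a single checkin
cleartool checkin -ptime -nc libfoo.a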
Unless you remove those files and re-create them, I don't think you can change the "Created on" time.