File churn in CVS

I'm looking to find the number of times that each file has changed on a particular branch in our cvs repository. I'm particularly looking for all the files which have changed the most. A "top 40" list would be good enough.

This was added as an edit by the original asker; I have converted it to a community wiki answer because it should be an answer, not an edit.
In this case, the branch has been in use for about 6 months. If I check out the latest on that branch ("cvs -z9 co -r r80m-1 ..."), it looks like the last number of the revision is the number of changes on the current branch, and if a file has been changed in the past 180 days it is on this branch. I'm using Linux, so I eventually did it this way:
for file in `find . \! \( -name CVS -prune \) -type f -mtime -180`
do
    cvs status "$file" | grep Working.revision |
        gawk -v FNAME="$file" '{ print FNAME gensub(/(\.)([0-9]*)$/, "\\1\\2 churn:\\2 ", 1) }' >> cvs_churn.txt
done
sort -k3 -t: -n cvs_churn.txt | uniq
So, for each line in "cvs status" output like:
Working revision: 1.2.34
The gawk command changes it to:
./path/file.c Working revision: 1.2.34 churn:34
and I can then sort on ":34".
This works, but it's pretty crude. I'm hoping others may be able to answer with better approaches.
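A possibly less crude variant (an untested sketch; it assumes the branch tag is r80m-1 as above and that cvs log -r<branchtag> restricts the log to revisions on that branch) would be to let cvs log do the counting instead of inferring churn from the revision number:
for file in `find . \! \( -name CVS -prune \) -type f`
do
    n=`cvs log -rr80m-1 "$file" 2>/dev/null | grep -c '^revision '`
    printf '%6d %s\n' "$n" "$file"
done | sort -rn | head -40
This prints a revision count per file and keeps the 40 highest, which is exactly the "top 40" list I was after.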
I've seen in some other questions (e.g. Free CVS reporting tools) that people have mentioned StatCVS. It sounds interesting (more than I need, but some of the other info might also be useful). However, it says it only works on the "default" branch. The documentation was a little unclear: can I check out the branch of interest and use it for this?


Vlookup-like function using awk in ksh

Disclaimers:
1) English is my second language, so please forgive any grammatical horrors you may find. I am pretty confident you will be able to understand what I need despite them.
2) I have found several examples on this site that address questions/problems similar to mine, though I was unfortunately not able to figure out the modifications that would need to be introduced to fit my needs.
3) You will find some text in capital letters here and there. It is of course not me "shouting" at you, but only a way to make portions of text stand out. Please do not consider this an act of impoliteness.
4) For those of you who get to the bottom of this novella alive, THANKS IN ADVANCE for your patience, even if you are not able to help or don't feel like it. My disclaimer here is that, after surfing the site for a while, I noticed that the most common "complaint" from people willing to help seems to be the lack (and/or poor quality) of information provided by those seeking help. I therefore preferred to risk being accused of overwording if need be... It would, at least, not be a common offense...
The "Problem":
I have 2 files (a and b for simplification). File a has 7 columns separated by commas. File b has 2 columns separated by commas.
What I need: Whenever the data in the 7th column of file a matches -EXACT MATCHES ONLY- the data in the 1st column of file b, a new line, containing the whole line of file a plus column 2 of file b, is to be appended to a new file "c".
--- MORE INFO IN THE NOTES AT THE BOTTOM ---
file a:
Server Name,File System,Path,File,Date,Type,ID
horror,/tmp,foldera/folder/b/folderc,binaryfile.bin,2014-01-21 22:21:59.000000,typet,aaaaaaaa
host1,/,somefolder,test1.txt,2016-08-18 00:00:20.000000,typez,11111111
host20,/,somefolder/somesubfolder,usr.cfg,2015-12-288 05:00:20.000000,typen,22222222
hoster,/lol,foolie,anotherfile.sad,2014-01-21 22:21:59.000000,typelol,66666666
hostie,/,someotherfolder,somefile.txt,2016-06-17 18:43:12.000000,typea,33333333
hostile,/sad,folder22,higefile.hug,2016-06-17 18:43:12.000000,typeasd,77777777
hostin,/var,folder30,someotherfile.cfg,2014-01-21 22:21:59.000000,typo,44444444
hostn,/usr,foldie,tinyfile.lol,2016-08-18 00:00:20.000000,typewhatever,55555555
server10,/usr,foldern,tempfile.tmp,2016-06-17 18:43:12.000000,tipesad,99999999
file b:
ID,Size
11111111,215915
22222222,1716
33333333,212856
44444444,1729
55555555,215927
66666666,1728
88888888,1729
99999999,213876
bbbbbbbb,26669080
Expected file c:
Server Name,File System,Path,File,Date,Type,ID,Size
host1,/,somefolder,test1.txt,2016-08-18 00:00:20.000000,typez,11111111,215915
host20,/,somefolder/somesubfolder,usr.cfg,2015-12-288 05:00:20.000000,typen,22222222,1716
hoster,/lol,foolie,anotherfile.sad,2014-01-21 22:21:59.000000,typelol,66666666,1728
hostie,/,someotherfolder,somefile.txt,2016-06-17 18:43:12.000000,typea,33333333,212856
hostin,/var,folder30,someotherfile.cfg,2014-01-21 22:21:59.000000,typo,44444444,1729
hostn,/usr,foldie,tinyfile.lol,2016-08-18 00:00:20.000000,typewhatever,55555555,215927
server10,/usr,foldern,tempfile.tmp,2016-06-17 18:43:12.000000,tipesad,99999999,213876
Additional notes:
0) Notice how the line with ID "aaaaaaaa" in file a does not make it into file c, since ID "aaaaaaaa" is not present in file b. Likewise, the line with ID "bbbbbbbb" in file b does not make it into file c, since ID "bbbbbbbb" is not present in file a and is therefore never looked for in the first place.
1) Data is clearly completely made up due to confidentiality issues, though the examples provided fairly resemble what the real files look like.
2) I added headers just to give a better idea of the nature of the data. The real files don't have them, so there is no need to skip them in the source files nor create them in the destination file.
3) Both files come sorted by default, meaning that IDs will be properly sorted in file b, while they will most likely be scrambled in file a. File c should preferably follow the order of file a (though I can manipulate it later to fit my needs anyway, so no worries there, as long as the code does what I need and doesn't mess up the data by combining the wrong lines).
4) VERY VERY VERY IMPORTANT:
4.a) I already have "working" ksh code (attached below) that uses "cat", "grep", "while" and "if" to do the job. It worked like a charm (well, acceptably) with 160K-line sample files (it was able to output roughly 60K lines an hour which, projected, would yield an acceptable "20 days" to produce 30 million lines [KEEP ON READING]), but somehow (I have plenty of processor and memory capacity) cat and/or grep seem to be struggling to process a real-life 5-million-line file (both file a and file b can have up to 30 million lines each, so that is the maximum probable number of lines in the resulting file, even assuming 100% of lines in file a find their match in file b), and file c is now only being fed a couple hundred lines every 24 hours.
4.b) I was told that awk, being stronger, should succeed where the weaker commands I worked with seem to fail. I was also told that working with arrays might be the solution to my performance problem, since all data is loaded into memory at once and worked on from there, instead of having to cat | grep file b as many times as there are lines in file a, as I am currently doing.
4.c) I am working on AIX, so I only have sh and ksh, no bash; therefore I cannot use the array tools provided by the latter, which is why I thought of awk; that, and the fact that I think awk is probably "stronger", though I might be (probably?) wrong.
Now, I present to you the magnificent piece of ksh code (obvious sarcasm here, though I like the idea of you picturing for a brief moment in your mind the image of the monkey holding up and showing all the other jungle-crawlers their future lion king) I have managed to develop (feel free to laugh as hard as you need while reading it; I will not be able to hear you anyway, so no feelings harmed :P ):
cat "${file_a}" | while read -r line_file_a; do
server_name_file_a=`echo "${line_file_a}" | awk -F"," '{print $1}'`
filespace_name_file_a=`echo "${line_file_a}" | awk -F"," '{print $2}'`
folder_name_file_a=`echo "${line_file_a}" | awk -F"," '{print $3}'`
file_name_file_a=`echo "${line_file_a}" | awk -F"," '{print $4}'`
file_date_file_a=`echo "${line_file_a}" | awk -F"," '{print $5}'`
file_type_file_a=`echo "${line_file_a}" | awk -F"," '{print $6}'`
file_id_file_a=`echo "${line_file_a}" | awk -F"," '{print $7}'`
cat "${file_b}" | grep ${object_id_file_a} | while read -r line_file_b; do
file_id_file_b=`echo "${line_file_b}" | awk -F"," '{print $1}'`
file_size_file_b=`echo "${line_file_b}" | awk -F"," '{print $2}'`
if [ "${file_id_file_a}" = "${file_id_file_b}" ]; then
echo "${server_name_file_a},${filespace_name_file_a},${folder_name_file_a},${file_name_file_a},${file_date_file_a},${file_type_file_a},${file_id_file_a},${file_size_file_b}" >> ${file_c}.csv
fi
done
done
One last additional note, just in case you wonder:
The "if" section was not only built as a mean to articulate the output line, but it servers a double purpose, while safe-proofing any false positives that may derive from grep, IE 100 matching 1000 (Bear in mind that, as I mentioned earlier, I am working on AIX, so my grep does not have the -m switch the GNU one has, and I need matches to be exact/absolute).
You have reached the end. CONGRATULATIONS! You've been awarded the medal to patience.
$ cat stuff.awk
BEGIN { FS = OFS = "," }         # input and output fields are comma-separated
NR == FNR { a[$1] = $2; next }   # first file (b): remember Size keyed by ID
$7 in a { print $0, a[$7] }      # second file (a): if the ID is known, append its Size
Note the order for providing the files to the awk command, b first, followed by a:
$ awk -f stuff.awk b.txt a.txt
host1,/,somefolder,test1.txt,2016-08-18 00:00:20.000000,typez,11111111,215915
host20,/,somefolder/somesubfolder,usr.cfg,2015-12-288 05:00:20.000000,typen,22222222,1716
hoster,/lol,foolie,anotherfile.sad,2014-01-21 22:21:59.000000,typelol,66666666,1728
hostie,/,someotherfolder,somefile.txt,2016-06-17 18:43:12.000000,typea,33333333,212856
hostin,/var,folder30,someotherfile.cfg,2014-01-21 22:21:59.000000,typo,44444444,1729
hostn,/usr,foldie,tinyfile.lol,2016-08-18 00:00:20.000000,typewhatever,55555555,215927
server10,/usr,foldern,tempfile.tmp,2016-06-17 18:43:12.000000,tipesad,99999999,213876
EDIT: Updated calculation
You can try to predict how often you are calling another program:
For each line in file a: at least 7 awks + 1 cat + 1 grep, i.e. 9 extra processes, giving 9 * 160,000.
For file b: 2 awks, plus one file open and one file close, for each hit. With 60K output lines, that would be another 4 * 60,000.
A small change in the code brings this down to "only" 160,000 calls of grep:
cat "${file_a}" | while IFS=, read -r server_name_file_a \
filespace_name_file_a folder_name_file_a file_name_file_a \
file_date_file_a file_type_file_a file_id_file_a; do
grep "${object_id_file_a}" "${file_b}" | while IFS="," read -r line_file_b; do
if [ "${file_id_file_a}" = "${file_id_file_b}" ]; then
echo "${server_name_file_a},${filespace_name_file_a},${folder_name_file_a},${file_name_file_a},${file_date_file_a},${file_type_file_a},${file_id_file_a},${file_size_file_b}"
fi
done
done >> ${file_c}.csv
Well, try this with your 160K files and see how much faster it is.
Before I explain why this still is the wrong way, I will make another small improvement: I will replace the cat feeding the while loop with an input redirection at the end (after the done).
while IFS=, read -r server_name_file_a \
    filespace_name_file_a folder_name_file_a file_name_file_a \
    file_date_file_a file_type_file_a file_id_file_a; do
    grep "${file_id_file_a}" "${file_b}" | while IFS=, read -r file_id_file_b file_size_file_b; do
        if [ "${file_id_file_a}" = "${file_id_file_b}" ]; then
            echo "${server_name_file_a},${filespace_name_file_a},${folder_name_file_a},${file_name_file_a},${file_date_file_a},${file_type_file_a},${file_id_file_a},${file_size_file_b}"
        fi
    done
done < "${file_a}" >> "${file_c}.csv"
The main drawback of these solutions is that grep reads the complete file_b again and again, once for each line in file a.
The last version is a nice performance improvement, but there is still a lot of overhead from grep. Another huge improvement can be had with awk.
The best solution is using awk, as explained in What is "NR==FNR" in awk? and found in the answer of @jas.
It is only one awk invocation, and both files are read only once.
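As a side note: if the AIX machine happens to have ksh93 (an assumption; it is often installed as /usr/bin/ksh93, while the stock /usr/bin/ksh on AIX is ksh88 and has no associative arrays), the same single-pass lookup can be sketched in pure ksh, no awk needed. Untested sketch:
#!/usr/bin/ksh93
# Load file b into an associative array keyed by ID, then stream file a once.
typeset -A size
while IFS=, read -r id sz; do
    size[$id]=$sz
done < "${file_b}"
while IFS= read -r line; do
    id=${line##*,}                      # the 7th (last) field of file a is the ID
    [[ -n "${size[$id]}" ]] && print -r -- "${line},${size[$id]}"
done < "${file_a}" > "${file_c}.csv"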

ClearCase UCM: Branch created of file that is not part of an activity. What happened?

I have somehow created a branch of a file in ClearCase UCM that is not part of an activity. I have no idea how to reproduce this, but my stream is showing many files with this symptom. How can I find these files, remove them, and prevent this from happening again in the future?
Here is an example of one such file, names redacted to protect the innocent:
xxxxxxxxxxx.cpp##/main/xxx-integration/xxxxxx-xxxxxxxx/0 Rule: .../xxx-xxxxxxx/LATEST
A ct lsact -long | grep <filename> returns no results.
Update:
I used a find command to track down all the files that are on the branch given (and redacted) above, though I still do not understand the issue.
Per VonC's answer, here is what I ended up doing:
cleartool find . -type f -version "version(.../xxx/LATEST)&&version(.../xxx/0)" -print | tee ~/tmp/files2
I then read through the list of files generated to make sure they made sense, then I verified they were not attached to an activity and removed the versions:
cat ~/tmp/files2 | while read -r
do
    if [ -z "$(ct describe -fmt "%[activity]p" "$REPLY")" ]
    then
        ct rmbranch -f "${REPLY%/0}"
    fi
done
That can happen if those files were checked out in a base ClearCase view, i.e. a non-UCM view, with a simple config spec:
element * .../xxx-integration/LATEST -mkbranch xxxxxx-xxxxxxxx
You can use a find command similar to "How can I find all elements on a branch with version LATEST that has no label applied?".
The difference is: for each version found, you need to describe it in order to check if there is an activity attached to it or not (with a fmt_ccase):
cleartool describe -fmt "%[activity]p" "$CLEARCASE_XPN"
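A hedged sketch combining that find with the describe check (xxx is the redacted branch name from the question):
cleartool find . -type f -version 'version(.../xxx/LATEST) && version(.../xxx/0)' -print |
while read -r ver
do
    # print only the versions that have no activity attached
    if [ -z "$(cleartool describe -fmt '%[activity]p' "$ver")" ]
    then
        printf '%s\n' "$ver"
    fi
done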

How to get the contributing activity details on integration stream?

I need help in getting the details of when a particular contributing activity was delivered to the integration stream.
I used to use diffbl -activity baseline1 baseline2 in cleartool to get the list of activities made from one baseline to another baseline.
Now the new need is that I need to get the date and time of when some of the activities listed in the output of diffbl were delivered.
I tried using lsact and describe, but I am getting the "Activity not found" error.
Probably because the activity I am querying is a contributing activity.
Does somebody know how to get the date and time of when a contributing activity was delivered, or how to customize the output of "diffbl -activity baseline1 baseline2" to get the activity date-time details as well?
When I look at the cleartool diffbl man page, I don't see any formatting option.
That means you need to parse the result of that command, feeding each activity to a cleartool describe -fmt, using one of the fmt_ccase option to display what you want.
This thread gives you an idea of the process to follow, but it is in bash (unix), to be adapted for windows if you need it:
for act in $(ct diffbl -act bl1#/vobs/apvob bl2#/vobs/apvob | grep ">>" | grep -v "deliver." | cut -f2 -d " "); do echo "Activity: $act"; cleartool desc -fmt "%d\n" activity:$act; echo; done
In multiple lines for readability:
for act in $(ct diffbl -act bl1#/vobs/apvob bl2#/vobs/apvob |
             grep ">>" |
             grep -v "deliver." |
             cut -f2 -d " ")
do
    echo "Activity: $act"
    cleartool desc -fmt "%d\n" activity:$act
    echo
done
Note that by excluding "deliver." activities, we are focusing only on contributing activities, as explained in "How to find files associated with a ClearCase UCM activity?".
The OP Lax reports having successfully managed to extract the names of the activities, with a:
desc -fmt "%Nd\n" "activity:myActivityId"
(#\pvob is already part of the result of the diffbl command; Lax is just parsing the activity id from the diffbl results and feeding it to the desc command.)
He adds:
I need this in the context of C#, so parsing is just like parsing any other string: I am using a regex to separate out the activities I am interested in from the output, e.g.:
Regex.Matches(diffBlOutput, "myInterestedPattern");
And for each match in regex result, I get the activity with
RegexMatch.Groups["activity"].ToString()
The activity id is actually a substring of this string, as the result is always "activityid activityName", so substring(0, result.indexOf(' ')) gets me the activity id.
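On the Unix side, the same extraction can be scripted end to end. A hedged sketch (same placeholder baselines as above) that prints each contributing activity together with its delivery date-time via %Nd:
ct diffbl -act bl1#/vobs/apvob bl2#/vobs/apvob |
grep ">>" | grep -v "deliver." | cut -f2 -d " " |
while read -r act
do
    printf '%s ' "$act"
    cleartool desc -fmt "%Nd\n" "activity:$act"
done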

Moving things in terminal based on their name

Edit: I think this has been answered successfully, but I can't check 'til later. I've reformatted it as suggested though.
The question: I have a series of files, each with a name of the form XXXXNAME, where XXXX is some number. I want to move them all to separate folders called XXXX and have them called NAME. I can do this manually, but I was hoping that by naming them XXXXNAME there'd be some way I could tell Terminal (I think that's the right name, but not really sure) to move them there. Something like
mv *NAME */NAME
but where it takes whatever * was in the first case and regurgitates it to the path.
This is on some form of Linux, with a bash shell.
In the real life case, the files are 0000GNUmakefile, with sequential numbering. I'm having to make lots of similar-but-slightly-altered versions of a program to compile and run on a cluster as part of my research. It would probably have been quicker to write a program to edit all the files and put them in the right place to begin with, but I didn't.
This is probably extremely simple, and I should be able to find an answer myself, if I knew the right words. Thing is, I have no formal training in programming, so I don't know what to call things to search for them. So hopefully this will result in me getting an answer, and maybe knowing how to find out the answer for similar things myself next time. With the basic programming I've picked up, I'm sure I could write a program to do this for me, but I'm hoping there's a simple way to do it just using functionality already in Terminal. I probably shouldn't be allowed to play with these things.
Thanks for any help! I can actually program in C and Python a fair amount, but that's through trial and error largely, and I still don't know what I can do and can't do in Terminal.
SO many ways to achieve this.
I find that the old standbys sed and awk are often the most powerful.
ls | sed -rne 's:^([0-9]{4})(NAME)$:mv -iv & \1/\2:p'
If you're satisfied that the commands look right, pipe the command line through a shell:
ls | sed -rne 's:^([0-9]{4})(NAME)$:mv -iv & \1/\2:p' | sh
I put NAME in parentheses and used \2 so that, if it varies more than your example indicates, you can come up with a regular expression to handle your filenames better.
To do the same thing in gawk (GNU awk, the variant found in most GNU/Linux distros):
ls | gawk '/^[0-9]{4}NAME$/ {printf("mv -iv %s %s/%s\n", $0, substr($0,1,4), substr($0,5))}'
As with the first sample, this produces commands which, if they make sense to you, can be piped through a shell by appending | sh to the end of the line.
Note that with all these mv commands, I've added the -i and -v options. This is for your protection. Read the man page for mv (by typing man mv in your Linux terminal) to see if you should be comfortable leaving them out.
Also, I'm assuming with these lines that all your directories already exist. You didn't mention if they do. If they don't, here's a one-liner to create the directories.
ls | sed -rne 's:^([0-9]{4})(NAME)$:mkdir -p \1:p' | sort -u
As with the others, append | sh to run the commands.
I should mention that it is generally recommended to use constructs like for (in Tim's answer) or find instead of parsing the output of ls. That said, when your filename format is as simple as /[0-9]{4}word/, I find the quick sed one-liner to be the way to go.
Lastly, if by NAME you actually mean "any string of characters" rather than the literal string "NAME", then in all my examples above, replace NAME with .*.
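For the concrete 0000GNUmakefile case from the question, here is a minimal pure-bash sketch of the same idea that avoids parsing ls (it assumes exactly four leading digits, and creates the directories as it goes):
for f in [0-9][0-9][0-9][0-9]GNUmakefile; do
    d=${f:0:4}                 # the first four digits name the target directory
    mkdir -p "$d"
    mv -iv "$f" "$d/${f:4}"    # the rest of the name becomes the new file name
done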
The following script will do this for you. Copy the script into a file on the remote machine (we'll call it sortfiles.sh).
#!/bin/bash
# Get all files in the current directory having names XXXXsomename, where each X is a digit
files=$(find . -name '[0-9][0-9][0-9][0-9]*')
# Build a list of the XXXX prefixes found in the list of files
# (characters 3-6 skip the leading "./" that find puts on each path)
dirs=
for name in ${files}; do
    dirs="${dirs} $(echo ${name} | cut -c 3-6)"
done
# Remove redundant entries from the list of XXXX prefixes
dirs=$(echo ${dirs} | tr ' ' '\n' | sort -u)
# Create any XXXX directories that are not already present
for name in ${dirs}; do
    if [[ ! -d ${name} ]]; then
        mkdir ${name}
    fi
done
# Move each of the XXXXsomename files to the appropriate directory
for name in ${files}; do
    mv ${name} $(echo ${name} | cut -c 3-6)
done
# Return from script with normal status
exit 0
From the command line, do chmod +x sortfiles.sh
Execute the script with ./sortfiles.sh
Just open the Terminal application, cd into the directory that contains the files you want moved/renamed, and copy and paste these commands into the command line.
shopt -s extglob    # enable the extended *(...) glob patterns used below
for file in [0-9][0-9][0-9][0-9]*; do
    dirName="${file%%*([^0-9])}"              # strip trailing non-digits: the leading digits remain
    mkdir -p "$dirName"
    mv "$file" "$dirName/${file##*([0-9])}"   # strip the leading digits for the new name
done
This assumes all the files that you want to rename and move are in the same directory. The file globbing also assumes that there are at least four digits at the start of the filename. If there are more than four digits, the file will still be caught, but not if there are fewer. If there are fewer than four, take the appropriate number of [0-9]s off the first line.
It does not handle the case where "NAME" (i.e. the name of the new file you want) starts with a number.
See this site for more information about string manipulation in bash.

In ClearCase, how can I view old version of a file in a static view, from the command line?

In a static view, how can I view an old version of a file?
Given an empty file (called empty in this example) I can subvert diff to show me the old version:
% cleartool diff -ser empty File##/main/28
This feels like a pretty ugly hack. Have I missed a more basic command? Is there a neater way to do this?
(I don't want to edit the config spec - that's pretty tedious, and I'm trying to look at a bunch of old versions.)
Clarification: I want to send the version of the file to stdout, so I can use it with the rest of Unix (grep, sed, and so on.) If you found this question because you're looking for a way to save a version of an element to a file, see Brian's answer.
I'm trying to look at a bunch of old versions
I am not sure if you are speaking about "a bunch of old versions" of one file, or "a bunch of old versions" from several files.
To visualize several old versions of one file, the simplest way is to display its version tree (ct lsvtree -graph File), then select a version, right-click on it, and 'Send To' an editor which accepts multiple files (like Notepad++). In a few clicks you will have a view of those old versions.
Note: you must have CC 6.0 or 7.0.1 IFix01 (7.0.0 and 7.0.1 fail to 'Send To' a file, with the following error message: "Access to unnamed file was denied")
But to visualize several old versions of different files, I would recommend a dynamic view and editing the config spec of that view (and not of the snapshot view you are currently working with), in order to quickly select all those old files (hopefully through a simple select rule like 'element * aLabel')
[From the comments:]
what's the idiomatic way to "cat" an earlier revision of a file?
The idiomatic way is through a dynamic view (configured with the exact same config spec as your existing snapshot view).
You can then browse (as in 'change directory to') the various extended paths of a file.
If you want to cat all versions of a branch of a file, you go in:
cd /view/MyView/vobs/myVobs/myPath/myFile##/main/[...]/myBranch
cat 1
cat 2
...
cat x
'1', '2', ... 'x' being the version 1, 2, ... x of your file within that branch.
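Or in a loop, a small sketch under the same dynamic-view assumptions (substitute any intermediate branches for the [...] above):
# list and print every numbered version in the branch directory
cd /view/MyView/vobs/myVobs/myPath/myFile##/main/myBranch
for v in [0-9]*
do
    echo "=== version $v ==="
    cat "$v"
done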
For a snapshot view, the extended path is not accessible, so your "hack" is the way to go.
However, 2 remarks here:
to quickly display all previous revisions of a snapshot file in a given branch, you can type:
(one line version for copy-paste, Unix syntax:)
cleartool find addon.xml -ver 'brtype(aBranch) && !version(.../aBranch/LATEST) && ! version(.../aBranch/0)' -exec 'cleartool diff -ser empty "$CLEARCASE_XPN"'
(multi-line version for readability:)
cleartool find addon.xml -ver 'brtype(aBranch) &&
    !version(.../aBranch/LATEST) &&
    ! version(.../aBranch/0)' \
    -exec 'cleartool diff -ser empty "$CLEARCASE_XPN"'
you can quickly get a slightly nicer output with
(one line version for copy-paste, Unix syntax:)
cleartool find addon.xml -ver 'brtype(aBranch) && !version(.../aBranch/LATEST) && ! version(.../aBranch/0)' -exec 'cleartool diff -ser empty "$CLEARCASE_XPN"' | ccperl -nle '$a=$_; $b = $a; $b =~ s/^>+\s(?:file\s+\d+:\s+)?//g;print $b if $a =~/^>/'
(multi-line version for readability:)
cleartool find addon.xml -ver 'brtype(aBranch) &&
    !version(.../aBranch/LATEST) &&
    ! version(.../aBranch/0)' \
    -exec 'cleartool diff -ser empty "$CLEARCASE_XPN"' |
    ccperl -nle '$a=$_; $b = $a;
        $b =~ s/^>+\s(?:file\s+\d+:\s+)?//g;
        print $b if $a =~/^>/'
That way, the output is nicer.
The "cleartool get" command (man page) mentioned below by Brian don't do stdout:
The get command copies only file elements into a view.
On a UNIX or Linux system, copy /dev/hello_world/foo.c##/main/2 into the current directory.
cmd-context get –to foo.c.temp /dev/hello_world/foo.c##/main/2
On a Windows system, copy \dev\hello_world\foo.c##\main\2 into the C:\build directory.
cmd-context get –to C:\build\foo.c.temp \dev\hello_world\foo.c##\main\2
So maybe than, by piping the result to a cat (or type in windows), you can then do something with the output of said cat (type) command.
cmd-context get –to C:\build\foo.c.temp \dev\hello_world\foo.c##\main\2 | type C:\build\foo.c.temp
I know this is an old thread...but I couldn't let this thrashing go by unresolved....
Static views have a "ct get" command that does exactly what you are looking for.
cleartool get -to ~/foo File##/main/28
will save this version of the file in ~/foo.
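If the goal (as clarified in the question) is stdout for use with grep, sed and so on, a small wrapper around get can bridge the gap. A hedged sketch; catver is a hypothetical helper name, and it goes through a temp directory because get only writes to files:
# Hypothetical helper: print a given version of an element to stdout.
catver() {
    d=$(mktemp -d) || return 1
    cleartool get -to "$d/v" "$1" && cat "$d/v"
    rm -rf "$d"
}
catver 'File##/main/28' | grep 'some pattern'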
[ Rewritten based on the first comment ]
All files in ClearCase, including versions, are available in the virtual directory structure. I don't have a lot of familiarity with static views, but I believe they still go through a virtual fs; they just get updated differently.
In that case, you can just do:
cat File##/main/28
It can get ugly if you also have to find the right version of a directory that contained that file element. We have a Perl script at work that uses this approach to analyze historical changes made to files, and we quickly ran out of command-line space on Windows to actually run the commands!
If File is a Clearcase element, and cat File works, and the view is set correctly, then try:
cat File##/main/28
(note: without the ct shell; you shouldn't need this if you're already in the view.)
Try typing:
ct ls -l File
If it shows the file with an extended name similar to the above, then you should be able to cat the file using an extended name.
ct shell cat File##version
