I'm a ClearCase newbie and up until now I have been used to SVN. Therefore, I'm a bit confused about the steps I need to take to add a new directory structure containing multiple files to ClearCase.
So, say for example there is an existing directory structure within ClearCase as follows:
\ParentDirectory
    \ChildDirectory1
        \File1
        \File2
    \ChildDirectory2
    \ChildDirectory3
        \File1
    \ChildDirectory4
If I want to add a new subdirectory to this structure, ChildDirectory5, which will contain a number of other files, how do I go about this? From what I have been reading, I will need to first of all check out the parent directory and then use the mkelem command to make each subdirectory and file.
However, I have already created the necessary files and directories on my local machine so I just need to check them into ClearCase somehow. With SVN, all I would've needed to do was copy the parent folder into a checked out repo and do an add & commit command sequence on it.
As explained in "How can I use ClearCase to “add to source control …” recursively?", you have to use clearfsimport, which does what you are describing (checks out the parent directories, runs mkelem for the elements):
clearfsimport -preview -rec -nset c:\sourceDir\ChildDirectory5 m:\MyView\MyVob\ParentDirectory
Note the:
-preview option: it allows you to check what would happen without actually doing anything;
'*' wildcard, used only in a Windows environment, to import the contents of a directory rather than the directory element itself (not needed here, since ChildDirectory5 itself is being imported);
-nset option (see my previous answer about -nset).
I would recommend a dynamic view for those initialization phases where you need to import a lot of data: you can quickly see what your view looks like without doing any update (i.e. "without updating your workspace").
ClearCase allows you to access the data in two ways:
snapshot view (like an SVN workspace, except all the .svn-like metadata is externalized in a view storage outside the workspace)
dynamic view: all your files are visible through the network (instant access/update)
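For reference, a minimal sketch of creating and starting a dynamic view (the view tag, the -stgloc server storage location, and the VOB tag /vobs/MyVob are assumptions for illustration, not part of the answer above):
cleartool mkview -tag import_view -stgloc -auto   # create a dynamic view in an existing server storage location
cleartool startview import_view                   # start the view
cleartool mount /vobs/MyVob                       # mount the VOB (Unix; on Windows the view shows up under the MVFS drive, e.g. M:\)
cd /view/import_view/vobs/MyVob                   # browse the VOB content through the dynamic view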
I use a variant of this script (I call it "ctadd"):
#!/usr/bin/perl
use strict;
use Getopt::Attribute;

(our $nodo        : Getopt(nodo));
(our $exclude_pat : Getopt(exclude_pat=s));

for my $i (@ARGV) {
    if ($i =~ /\s/) {
        warn "skipping file with spaces ($i)\n";
        next;
    }
    chomp(my @files = `find $i -type f`);
    @files = grep !/~$/,  @files;  # emacs backup files
    @files = grep !/^\#/, @files;  # emacs autosave files
    if (defined($exclude_pat)) {
        @files = grep !/$exclude_pat/, @files;
    }
    foreach (@files) {
        warn "skipping file with spaces ($_)\n" if /\s/;
    }
    @files = grep !/\s/, @files;
    foreach (@files) {
        my $cmd = "cleartool mkelem -nc -mkpath \"$_\"";
        print STDERR "$cmd\n";
        system($cmd) unless $nodo;
    }
}
The -mkpath option of cleartool mkelem will automatically create and/or check out any needed directories.
For this script, -nodo makes it simply print the commands without running them, and -exclude_pat lets you specify a pattern; any file that matches it will be excluded.
Note that Getopt::Attribute is not part of the standard Perl distribution, but it is available on a CPAN mirror near you.
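For example, usage might look like this (assuming the script above is saved as ctadd on your PATH and is run from within a view; the directory name is just a placeholder):
ctadd -nodo ChildDirectory5                   # preview: only print the cleartool mkelem commands
ctadd -exclude_pat '\.o$' ChildDirectory5     # import, excluding object files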
You have to import your local directory structure. The command is clearfsimport.
I have a machine that is misbehaving (DNS, and thus ClearCase, isn't working at the moment). I was hoping to access the checked-out files I had in that view (and a few other view-private files) and start over my work on another machine while I wait for the IT admin guys to come back to work tomorrow.
Is it possible to get at my checked-out files from just the view storage directory (i.e. ~/views/peeterj_gcc6.vws/...)?
i.e. running find in the view storage dir shows lots of paths that are surely my view-private files:
./.s/00019/8000149553ab76a5fontconfig.Turbo.bfc
./.s/00019/80003d3353ac5afftestinc_Subpool.compilecmd
./.s/00019/8000445a53ac65b3sqlnlscnvtbls6-LE.u
./.s/00019/8000045e53ab62eccdeSystemPageInterface.hpp
./.s/00019/8000556053ac934ftestinc_sqlhhid.C
but I'm not sure how to map from these to the original file names within the view.
EDIT:
I was able to brute force this task, where ~/tmp/f2 contained a list of the files of interest:
cd ~/views/peeterj_gcc6.vws/
for i in `cat ~/tmp/f2` ; do echo $i `find . -name "*$i"` ; done | grep ' ' | f.pl
where f.pl is the following perl filter:
#!/usr/bin/perl
use strict;
use warnings;

my $vsdir = "$ENV{HOME}/views/peeterj_gcc6.vws";

while (<>)
{
    chomp;
    my ($f, @rest) = split( / /, $_ );
    my @match = ();

    foreach my $p (@rest)
    {
        if ( $p =~ m,/[0-9a-f]+$f$, )
        {
            push( @match, $p );
            goto DONE; # hack. Just pick the first.
        }
    }

    if ( scalar(@match) )
    {
DONE:
        print "cp $vsdir/@match $f\n";
    }
}
So, I'll re-pose the question: Is there a way to systematically map the names of the files in the view storage directory to the paths that they would be in in the view when clearcase is functional?
Is there a way to systematically map the names of the files in the view storage directory to the paths that they would be in in the view when ClearCase is functional?
Not really, not in any consistent way, not even for their names.
If you look at the IBM technote "Locating view private files in the storage directory", their advice is:
Go into the .s sub-directory
Located under this directory are many numbered directories.
Browse through the numbered directories, searching for the view-private file.
All the files that are listed in these directories are view-private files. The file names of the files will be preceded by an ID number.
Example:
The view private file, help.txt, in the directory under .s, is named
241ae3df.000c.help.txt
Note: View private files that have been renamed in the view are not renamed in the view storage directory.
For instance, if you create a view private file called help.txt, and then you rename it to new.txt, the physical file in the view storage directory would still be named 241ae3df.000c.help.txt
So if you had another working view, you could try and copy the files you find in the old view storage one in a similar path in the new view storage, and see if that works.
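Since the original name is kept (only prefixed with an ID), a minimal sketch for hunting down one known file by name in the storage, along the lines of what the question already does (the storage path is the one from the question, the file name is just an example):
find ~/views/peeterj_gcc6.vws/.s -type f -name '*help.txt'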
I don't understand how the code below is supposed to behave. All I want is to compare the files shown below, but when I run this script nothing happens. I assume this code can be executed from anywhere (for example from /root) and it will still run. Please check this out.
#!/bin/bash
for file in /var/files/sub/old/*
do
# Strip path from file name
file="${file##*/}"
# Strip everything after the first hyphen
prefix="${file%%-*}-"
# Strip everything before the second-to-last dot
suffix="$(echo $file | awk -F. '{ print "."$(NF-1)"."$NF }')"
# Create new file name from $prefix and $suffix, and any version number
new=$(echo "/var/files/new/${prefix}"*"${suffix}")
# If file exists in the 'new' folder:
if test -f "${new}"
then
# Do string comparison to see if new file is lexicographically "greater than" old
if [[ "${new##*/}" > "${file}" ]]
then
# If so, delete the old version.
rm /var/files/sub/old/"${file}"
else
# 'new' file is NOT newer, delete it instead.
rm "${new}"
fi
fi
done
# Move all new files into the old folder.
mv /var/files/new/* /var/files/sub/old/
Example files inside each of the directories:
/var/files/sub/old/
firefox-24.5.0-1.el5_10.i386.rpm
firefox-24.5.0-1.el5_10.x86_64.rpm
google-1.6.0-openjdk-1.6.0.0-5.1.13.3.el5_10.x86_64.rpm
google-1.6.0-openjdk-demo-1.6.0.0-5.1.13.3.el5_10.x86_64.rpm
/var/files/new/
firefox-25.5.0-1.el5_10.i386.rpm
firefox-25.5.0-1.el5_10.x86_64.rpm
ie-1.6.0-openjdk-devel-1.6.0.0-5.1.13.3.el5_10.x86_64.rpm
ie-1.6.0-openjdk-javadoc-1.6.0.0-5.1.13.3.el5_10.x86_64.rpm
ie-1.6.0-openjdk-src-1.6.0.0-5.1.13.3.el5_10.x86_64.rpm
google-2.6.0-openjdk-demo-1.6.0.0-5.1.13.3.el5_10.x86_64.rpm
In this instance, I want to find the files that match. The matching files in the given example are:
firefox-24.5.0-1.el5_10.i386.rpm
firefox-24.5.0-1.el5_10.x86_64.rpm
google-1.6.0-openjdk-demo-1.6.0.0-5.1.13.3.el5_10.x86_64.rpm
in the old/ directory, and their equivalents in the new/ directory are:
firefox-25.5.0-1.el5_10.i386.rpm
firefox-25.5.0-1.el5_10.x86_64.rpm
google-2.6.0-openjdk-demo-1.6.0.0-5.1.13.3.el5_10.x86_64.rpm
The files match on their first characters (the package name). The matches should be displayed in the terminal. After that, each pair of matching files is compared again to see which one is more up to date, based on the version number after the name: for example, firefox-24.5.0-1.el5_10.i386.rpm compared with firefox-25.5.0-1.el5_10.i386.rpm. In that instance firefox-24.5.0-1.el5_10.i386.rpm should be replaced by firefox-25.5.0-1.el5_10.i386.rpm, because the latter has the greater version number and is more up to date; the same applies to the other matching files. The old file is removed and the new one takes its place.
So after the script has been executed, the result should look like this:
/var/files/sub/old/
google-1.6.0-openjdk-1.6.0.0-5.1.13.3.el5_10.x86_64.rpm
firefox-25.5.0-1.el5_10.i386.rpm
firefox-25.5.0-1.el5_10.x86_64.rpm
ie-1.6.0-openjdk-devel-1.6.0.0-5.1.13.3.el5_10.x86_64.rpm
ie-1.6.0-openjdk-javadoc-1.6.0.0-5.1.13.3.el5_10.x86_64.rpm
ie-1.6.0-openjdk-src-1.6.0.0-5.1.13.3.el5_10.x86_64.rpm
google-2.6.0-openjdk-demo-1.6.0.0-5.1.13.3.el5_10.x86_64.rpm
/var/files/new/
<< empty: all the files here must have been moved to the other directory as replacements >>
Can anyone help me make a script for this? The above is just an example; assume there are many files to be matched, removed, and moved.
You can use rpm to get the name of the package without version or architecture strings:
rpm -qi -p /firefox-25.5.0-1.el5_10.i386.rpm
Gives:
Name : firefox
Version : 25.5.0
Release : 1.el5_10
Architecture: i386
....
So you can compare the Names to find related packages.
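For scripting, a hedged sketch that prints just the package name for each RPM, using rpm's standard --queryformat NAME tag (the directory is the one from the question):
# print "<package name> <file>" for every RPM in the new directory
for f in /var/files/new/*.rpm; do
    echo "$(rpm -qp --queryformat '%{NAME}\n' "$f") $f"
done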
If the goal here is to have the newrpms directory contain only the newest version of each RPM from a combination of sources, then you most likely want to simply combine all the files in a single directory and then use the repomanage tool (from the yum-utils package, at least on CentOS) to tell you which of the RPMs are old, and remove them.
Something like:
repomanage --old combined_rpms_directory | xargs -r rm
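An end-to-end sketch of that approach, assuming the directory layout from the question (the combined directory is a placeholder I introduced):
mkdir -p /var/files/combined
cp /var/files/sub/old/*.rpm /var/files/new/*.rpm /var/files/combined/
repomanage --old /var/files/combined | xargs -r rm -v   # remove the outdated RPMs that repomanage reports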
As to your initial script
for i in $(\ls -d ./new/*);
do
diff ${i} newrpms/;
rm ${i}
done
You generally don't want to "parse" the output from ls, especially when a glob will do what you want just as easily (for i in ./new/* in this case).
diff ${i} newrpms/ is attempting to diff a file and a directory (or two directories, if your ls/glob happened to catch a directory), but in neither case will diff do what you want. That being said, what diff does doesn't really matter because, as Barmar said in his comment,
your script is removing them without testing the result of diff
A bash script that does the checking. Here's how it works:
1. Traverse over each file in the old files directory. Get the prefix (package name with no version, architecture, etc.), e.g. firefox-; get the suffix (architecture.rpm), e.g. .i386.rpm.
2. Attempt to match prefix and suffix with any version number within the new files directory, i.e. firefox-*.i386.rpm. If there is a match, $new will contain the file name, e.g. firefox-25.5.0-1.el5_10.i386.rpm; if there is no match, $new will equal the literal string firefox-*.i386.rpm, which is not a file.
3. Check the new files directory for the existence of $new.
4. If it exists, check that $new is indeed newer than the old version. This is done by lexicographical string comparison, i.e. firefox-24.5.0-1.el5_10.i386.rpm is less than firefox-25.5.0-1.el5_10.i386.rpm because it comes earlier in the alphabet. Conveniently, sane versioning schemes also happen to be alphabetical. NB: this may fail, for example, when comparing version 2 to version 10.
5. A newer version of a file in the old files directory has been found! In this case, get rid of the old file with rm. If the file in the new directory is not newer, delete it instead.
6. Done removing old versions. The old files directory now contains only files without newer versions.
7. Move all new files into the old directory, leaving the newest files in the old directory and the new directory empty.
#!/bin/bash
for file in /var/files/sub/old/*
do
# Strip path from file name
file="${file##*/}"
# Strip everything after the first hyphen
prefix="${file%%-*}-"
# Strip everything before the second-to-last dot
suffix="$(echo $file | awk -F. '{ print "."$(NF-1)"."$NF }')"
# Create new file name from $prefix and $suffix, and any version number
new=$(echo "/var/files/new/${prefix}"*"${suffix}")
# If file exists in the 'new' folder:
if test -f "${new}"
then
# Do string comparison to see if new file is lexicographically "greater than" old
if [[ "${new##*/}" > "${file}" ]]
then
# If so, delete the old version.
rm /var/files/sub/old/"${file}"
else
# 'new' file is NOT newer, delete it instead.
rm "${new}"
fi
fi
done
# Move all new files into the old folder.
mv /var/files/new/* /var/files/sub/old/
I am looking for a script to rename files and directories that have special characters in them.
My files:
?rip?ev <- Directory
- Juhendid ?rip?evaks.doc <- Document
- ?rip?ev 2 <- Subdirectory
-- t?ts?.xml <- Subdirectory file
They need to be like this:
ripev <- Directory
- Juhendid ripevaks.doc <- Document
- ripev 2 <- Subdirectory
-- tts.xml <- Subdirectory file
I need to rename the files and folders so that the file type stays the same, i.e. the .doc and .xml extensions are not lost. Last time I tried it with rename, every extension was lost and the files were moved into the parent directory (in this case ?rip?ev), leaving the subdirectories empty. Everything is located under the parent directory /home/samba/.
So in this case I just need to remove the question marks from the file and directory names, without moving anything anywhere else or losing any other character or the file extension. I have been looking around Google for an answer but haven't found one. I know it can be done with find and rename, but I haven't been able to overcome the complexity of the script. Can anyone help me, please?
You can just do something like this
find -name '*\?*' -exec bash -c 'echo mv -iv "$0" "${0//\?/}"' {} \;
Note the echo before the mv, so you can see what it would do before actually changing anything. Other than that, the command above:
searches for ? in the name (? is equivalent to a single char version of * so needs to be escaped)
executes a bash command passing the {} as the first argument (since there is no script name it's $0 instead of $1)
${0//\?/} performs parameter expansion in bash replacing all occurrences of ? with nothing.
Note also that file types do not depend on the file name in Linux, but this will not change any file extension unless it contains ?.
Also, this will rename symlinks as well if they contain ? (it is not clear from the question whether that is expected).
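One possible refinement (my assumption, not part of the answer above): with GNU find, adding -depth processes a directory's contents before the directory itself, so renaming a directory cannot interfere with descending into it. Once the preview looks right, drop the echo:
find . -depth -name '*\?*' -exec bash -c 'mv -iv "$0" "${0//\?/}"' {} \;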
I usually do this kind of thing in Perl:
#!/usr/bin/perl
sub procdir {
    chdir $_[0];
    for (<*>) {
        my $oldname = $_;
        # strip all '?' characters; rename only if the name actually changed
        rename($oldname, $_) if s/\?//g;
        # recurse into (possibly renamed) subdirectories
        procdir($_) if -d;
    }
    chdir "..";
}
procdir("top_directory");
I want to check in a directory and all its sub-directories into ClearCase.
Is there a specific command to achieve it?
Currently I am going into each directory and manually checking in each file.
I would recommend this question:
Now the problem is to check in everything that has changed.
That is problematic since often not everything has changed, and ClearCase will raise an error when you try to check in an identical file. That means you will need two commands:
ct lsco -r -cvi -fmt "ci -nc \"%n\"\n" | ct
ct lsco -r -cvi -fmt "unco -rm %n\n" | ct
(with 'ct' being 'cleartool': type 'doskey ct=cleartool $*' on Windows to set that alias)
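The pipe into ct works because cleartool, when run without arguments, reads subcommands from standard input. On Unix shells the equivalent alias would simply be (my assumption, not part of the original answer):
alias ct=cleartool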
But if by "checkin" you mean:
"enter into source control for the first time"
"updating a large number of files which may have changed on an existing versionned directory"
I would recommend creating a dynamic view and clearfsimport your snapshot tree (with the new files) in the dynamic view.
See this question or this question.
The clearfsimport tool is better equipped to import the same set of files multiple times, and will automatically:
add new files,
make new versions of existing files previously imported (but modified in the re-imported source set),
remove files already imported but no longer present in the source set of files,
make a clear log of all operations made during the import process.
For example:
clearfsimport -preview -rec -nset c:\sourceDir\* m:\MyView\MyVob\MyDestinationDirectory
Did you use the -recurse option in the clearfsimport command?
Example: clearfsimport -recurse source_dir .
This should help.
If you're using the Windows client, right-click on the parent folder, select Search, leave the file name field empty, click Search, select all the files in the result window (ctrl-A), right-click on them and select ClearCase -> Add to Source Control
If you are on Windows, you may try:
for /f "usebackq" %i in (`cleartool lsco -cview -me -r -s`) do cleartool ci -nc %i
In a static view, how can I view an old version of a file?
Given an empty file (called empty in this example) I can subvert diff to show me the old version:
% cleartool diff -ser empty File@@/main/28
This feels like a pretty ugly hack. Have I missed a more basic command? Is there a neater way to do this?
(I don't want to edit the config spec - that's pretty tedious, and I'm trying to look at a bunch of old versions.)
Clarification: I want to send the version of the file to stdout, so I can use it with the rest of Unix (grep, sed, and so on.) If you found this question because you're looking for a way to save a version of an element to a file, see Brian's answer.
I'm trying to look at a bunch of old versions
I am not sure if you are speaking about "a bunch of old versions" of one file, or "a bunch of old versions" from several files.
To visualize several old versions of one file, the simplest way is to display its version tree (ct lsvtree -graph File), then select a version, right-click on it and 'Send To' an editor which accepts multiple files (like Notepad++). In a few clicks you will have a view of those old versions.
Note: you must have CC 6.0 or 7.0.1 IFix01 (7.0.0 and 7.0.1 fail to 'Send To' a file, with the error message "Access to unnamed file was denied")
But to visualize several old versions of different files, I would recommend a dynamic view and editing the config spec of that view (and not the snapshot view you are currently working with), in order to quickly select all those old files (hopefully through a simple select rule like 'element * aLabel')
[From the comments:]
what's the idiomatic way to "cat" an earlier revision of a file?
The idiomatic way is through a dynamic view (that you configure with the exact same config spec as your existing snapshot view).
You can then browse (as in 'change directory to') the various extended paths of a file.
If you want to cat all versions of a branch of a file, you go in:
cd /view/MyView/vobs/myVobs/myPath/myFile@@/main/[...]/maBranch
cat 1
cat 2
...
cat x
'1', '2', ... 'x' being versions 1, 2, ... x of your file within that branch.
For a snapshot view, the extended path is not accessible, so your "hack" is the way to go.
However, 2 remarks here:
to quickly display all previous revisions of a snapshot file in a given branch, you can type:
(one line version for copy-paste, Unix syntax:)
cleartool find addon.xml -ver 'brtype(aBranch) && !version(.../aBranch/LATEST) && ! version(.../aBranch/0)' -exec 'cleartool diff -ser empty "$CLEARCASE_XPN"'
(multi-line version for readability:)
cleartool find addon.xml -ver 'brtype(aBranch) &&
!version(.../aBranch/LATEST) &&
! version(.../aBranch/0)'
-exec 'cleartool diff -ser empty "$CLEARCASE_XPN"'
you can get a slightly nicer output with:
(one line version for copy-paste, Unix syntax:)
cleartool find addon.xml -ver 'brtype(aBranch) && !version(.../aBranch/LATEST) && ! version(.../aBranch/0)' -exec 'cleartool diff -ser empty "$CLEARCASE_XPN"' | ccperl -nle '$a=$_; $b = $a; $b =~ s/^>+\s(?:file\s+\d+:\s+)?//g;print $b if $a =~/^>/'
(multi-line version for readability:)
cleartool find addon.xml -ver 'brtype(aBranch) &&
!version(.../aBranch/LATEST) &&
! version(.../aBranch/0)'
-exec 'cleartool diff -ser empty "$CLEARCASE_XPN"'
| ccperl -nle '$a=$_; $b = $a;
$b =~ s/^>+\s(?:file\s+\d+:\s+)?//g;
print $b if $a =~/^>/'
That way, the output is nicer.
The "cleartool get" command (man page) mentioned below by Brian don't do stdout:
The get command copies only file elements into a view.
On a UNIX or Linux system, copy /dev/hello_world/foo.c@@/main/2 into the current directory:
cmd-context get -to foo.c.temp /dev/hello_world/foo.c@@/main/2
On a Windows system, copy \dev\hello_world\foo.c@@\main\2 into the C:\build directory:
cmd-context get -to C:\build\foo.c.temp \dev\hello_world\foo.c@@\main\2
So perhaps, by then running cat (or type on Windows) on the resulting file, you can do something with the output of that cat (type) command.
cmd-context get -to C:\build\foo.c.temp \dev\hello_world\foo.c@@\main\2 && type C:\build\foo.c.temp
I know this is an old thread...but I couldn't let this thrashing go by unresolved....
Static views have a "ct get" command that does exactly what you are looking for.
cleartool get -to ~/foo File@@/main/28
will save this version of the file in ~/foo.
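If, as in the original question, you want the content on stdout so you can feed it to grep/sed, a small sketch built on the same command (the temporary-file handling and the grep are my additions):
tmp=$(mktemp -u)                        # name only; cleartool get creates the file itself
cleartool get -to "$tmp" File@@/main/28
grep -n 'pattern' "$tmp"                # or sed, diff, etc.
rm -f "$tmp"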
[ Rewritten based on the first comment ]
All files in ClearCase, including versions, are available in the virtual directory structure. I don't have a lot of familiarity with static views, but I believe they still go through a virtual fs; they just get updated differently.
In that case, you can just do:
cat File@@/main/28
It can get ugly if you also have to find the right version of a directory that contained that file element. We have a Perl script at work that uses this approach to analyze historical changes made to files, and we quickly ran out of command-line space on Windows to actually run the commands!
If File is a ClearCase element, and cat File works, and the view is set correctly, then try:
cat File@@/main/28
(note: without the ct shell-- you shouldn't need this if you're already in the view.)
Try typing:
ct ls -l File
If it shows the file with an extended name similar to the above, then you should be able to cat the file using an extended name.
ct shell cat File@@version