How to clear the content of multiple files in UNIX

I often have to clear the content of all files in a directory (e.g. Apache Tomcat logs). I am using UNIX's redirection operator this way:
> catalina.out
How can I apply it to all files in the current directory (to clear all logs)?
I tried > *.* but that doesn't work. Thanks for the help.

A little loop can do it:
for file in * .*    # "*" matches normal files, ".*" matches hidden ones
do
    echo "clearing $file"
    > "$file"
done
Note that .* also expands to . and .. ; the redirection simply fails for those two with an error and they are left alone.
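If you prefer a one-liner, find can do the same without a loop; a minimal sketch, assuming the truncate utility from GNU coreutils is available:
# Empty every regular file in the current directory (including hidden ones)
# without descending into subdirectories
find . -maxdepth 1 -type f -exec truncate -s 0 {} +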

Related

SOLR POST files with no extension

I am using SOLR 5 and I want to scan documents that have no extensions. Unfortunately, changing the files to have extensions is not an option in my case.
The command I am using is simply:
bin/post -c mycore ../foldertobescaned -type application/pdf
The command works fine for documents that do have an extension, but for the rest I am getting:
Entering auto mode. File endings considered are xml,json,csv,pdf,doc,docx,ppt,pptx,xls,xlsx,odt,odp,ods,ott,otp,ots,rtf,htm,html,txt,log
If renaming the files is not an option, you can use the following script as a workaround until Solr improves its post method. It is a simple bash for loop that submits each file individually, and it works regardless of the file extension. Note that this script will be slower than using post on the whole folder, because each individual file transfer needs to be initialized.
Save the script below as postFolderToSolr.sh inside your Solr folder (so that Solr's bin/ folder is a subdirectory), make it executable with chmod +x postFolderToSolr.sh, and then use it as follows: ./postFolderToSolr.sh mycore /home/user1/foldertobescaned/ application/pdf
Using no arguments or the wrong number of arguments prints a short usage message as help.
#!/bin/bash
set -o nounset

if [ "$#" -ne 3 ]
then
    echo "Post contents of a folder to Solr."
    echo
    echo "Usage: postFolderToSolr.sh <collectionName> </path/to/folder> <MIME>"
    echo
    exit 1
fi

collection=$1
inputPath=${2%/}   # remove trailing / if it exists
mime=$3

for element in "$inputPath"/*; do
    bin/post -c "$collection" -type "$mime" "$element"
done
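If the per-file overhead becomes painful on large folders, the submissions can run in parallel instead of one by one; a sketch, assuming GNU xargs with the -P option:
# Post up to 4 files concurrently, using the same variables as the script above
find "$inputPath" -maxdepth 1 -type f -print0 |
    xargs -0 -P 4 -I{} bin/post -c "$collection" -type "$mime" {}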

Script for renaming files and directories with special characters

I am looking for a script to rename files and directories that have special characters in them.
My files:
?rip?ev <- Directory
- Juhendid ?rip?evaks.doc <- Document
- ?rip?ev 2 <- Subdirectory
-- t?ts?.xml <- Subdirectory file
They need to be like this:
ripev <- Directory
- Juhendid ripevaks.doc <- Document
- ripev 2 <- Subdirectory
-- tts.xml <- Subdirectory file
I need to change the files and the folders so that the file type stays the same, i.e. extensions like .doc and .xml won't be lost. Last time I did it with rename, it lost every file extension, the files were moved to the mother directory (in this case the ?rip?ev directory), and the subdirectories were left empty. Everything was located under the mother directory /home/samba/.
So in this case I just need to rename away the question marks in the file and directory names, not move anything anywhere else or lose any other character or the file extension. I have been looking around Google for an answer but haven't found one. I know it can be done with find and rename, but I haven't been able to overcome the complexity of the script. Can anyone help me, please?
You can just do something like this:
find -name '*\?*' -exec bash -c 'echo mv -iv "$0" "${0//\?/}"' {} \;
Note the echo before the mv so you can see what the command would do before actually changing anything; remove the echo to perform the renames. The command above:
searches for ? in the name (? is the single-character equivalent of *, so it needs to be escaped)
executes a bash command, passing {} as the first argument (since there is no script name, it is $0 instead of $1)
${0//\?/} performs parameter expansion in bash, replacing all occurrences of ? with nothing
Note also that file types do not depend on the name in Linux, but this will not change any file extension unless it contains ?.
Also, this will rename symlinks as well if they contain ? (it is not clear from the question whether that was expected).
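For reference, here is that parameter expansion on its own, using a sample name from the question:
# ${var//\?/} deletes every '?' from the value of var
name='?rip?ev'
echo "${name//\?/}"    # prints: ripev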
I usually do this kind of thing in Perl:
#!/usr/bin/perl
sub procdir {
    chdir $_[0];
    for (<*>) {
        my $oldname = $_;
        rename($oldname, $_) if s/\?//g;
        procdir($_) if -d;
    }
    chdir "..";
}
procdir("top_directory");

Replace all files of a certain type in all directories and subdirectories?

I have played around with the find command and everything else I can think of, but nothing works.
I would like my bash script to be able to find all files of a given type in a given directory and all of its subdirectories, and replace each file with another.
For example, let's say:
/home/test1/randomfolder/index.html
/home/test1/randomfolder/stuff.html
/home/different/stuff/index.html
/home/different/stuff/another.html
Each of those .html files needs to be found when the program is given /home/ as the directory to search in, and then replaced by echoing the other file into it.
Is this possible in bash?
This should more or less get you going in the right direction:
for file in `find . -type f -name \*.html`; do echo "new content" > "$file"; done
Quoting "$file" protects the redirection, but the for loop still word-splits the find output, so this is only safe for paths without whitespace.
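If, as in the question, each file should be overwritten with the contents of another file rather than a literal string, here is a whitespace-safe variant; template.html is a hypothetical placeholder for your replacement file:
# Overwrite every .html file under /home with the contents of template.html
find /home -type f -name '*.html' -exec sh -c 'cat /path/to/template.html > "$1"' _ {} \;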

cleartool question

Let's say I have a directory at \testfolder, and the latest version is currently at /main/10. I know that the operation resulting in testfolder@@/main/6 was the removal of a file named test.txt.
What sequence of cleartool operations, done in a script, will take "testfolder@@/main/6" and "test.txt" as input and cat out the contents of test.txt as of that time?
One way I can think of is to get the time of the /main/6 operation, create a view with its config spec's -time set to that time, and then cat test.txt in that directory. But I'm wondering if I can do this in an easier way that doesn't involve manipulating config specs, perhaps through "cleartool find" and extended path names.
If you are using a dynamic view, you can directly explore the extended pathnames of testfolder in order to access the content of test.txt:
cd m:\myview\myVob\path\to\testfolder
# In version 5 of testfolder, test.txt was still there
cd @@/main/5
# Note: test.txt is a directory here! Only LATEST is a file.
type test.txt@@/main/LATEST
The OP adds:
how about if test.txt was moved from testFolder to testFolder2, and then a new version of test.txt is checked in? In this case, when I go into testfolder@@/main/5, test.txt@@/main/LATEST is incorrect...
Technically, this is a case of evil twins: two objects with the same name exist (one in testfolder@@/main/5, one in testfolder@@/main/10) with different histories.
To get back the former test.txt (akin to rolling back a file), you need to remove your current test.txt and bring back the old one, currently moved to testFolder2 (cleartool move):
cd testFolder2
cleartool checkout -c "move test.txt back to testFolder" .
cd ../testFolder
cleartool checkout -c "get back test.txt from testFolder2" .
cleartool rmname test.txt
cleartool move ../testFolder2/test.txt .
cleartool ci -nc .
cleartool ci -nc ../testFolder2
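As for the scripted route mentioned in the question (using the version's timestamp), the time of the /main/6 operation can be read without editing any config spec; a sketch based on cleartool describe and its -fmt option:
# Print the creation date of version /main/6 of the directory
cleartool describe -fmt "%d\n" testfolder@@/main/6
# That timestamp can then be used in a config spec time rule, or the old
# content can be read directly through the extended namespace as shown above.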

Creating a new subdirectory structure in ClearCase?

I'm a ClearCase newbie and up until now have been used to SVN. Therefore, I'm a bit confused about the steps I need to take to add a new directory structure containing multiple files to ClearCase.
So, say for example there is an existing directory structure within ClearCase as follows:
\ParentDirectory
    \ChildDirectory1
        \File1
        \File2
    \ChildDirectory2
    \ChildDirectory3
        \File1
    \ChildDirectory4
If I want to add a new subdirectory to this structure, ChildDirectory5, which will contain a number of other files, how do I go about this? From what I have been reading, I will need to first of all check out the parent directory and then use the mkelem command to make each subdirectory and file.
However, I have already created the necessary files and directories on my local machine so I just need to check them into ClearCase somehow. With SVN, all I would've needed to do was copy the parent folder into a checked out repo and do an add & commit command sequence on it.
As explained in How can I use ClearCase to "add to source control …" recursively?, you have to use clearfsimport, which does what you describe (it checks out the parent directories and runs mkelem for the elements):
clearfsimport -preview -rec -nset c:\sourceDir\ChildDirectory5 m:\MyView\MyVob\ParentDirectory
Note:
the -preview option allows you to check what would happen without actually doing anything
a '*' wildcard is used only in a Windows environment, in order to import the contents of a directory rather than the directory itself
the -nset option (see my previous answer about nset)
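Once the preview output looks right, the same command without -preview performs the actual import:
clearfsimport -rec -nset c:\sourceDir\ChildDirectory5 m:\MyView\MyVob\ParentDirectory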
I would recommend a dynamic view for those initialization phases where you need to import a lot of data: you can quickly see what your view looks like without making any update (without "updating your workspace").
ClearCase gives access to the data in two ways:
snapshot view (like an SVN workspace, except all the .svn metadata is externalized in a view storage outside the workspace)
dynamic view: all your files are visible through the network (instant access/update)
I use a variant of this script (I call it "ctadd"):
#!/usr/bin/perl
use strict;
use Getopt::Attribute;

(our $nodo : Getopt(nodo));
(our $exclude_pat : Getopt(exclude_pat=s));

for my $i (@ARGV) {
    if ($i =~ /\s/) {
        warn "skipping file with spaces ($i)\n";
        next;
    }
    chomp(my @files = `find $i -type f`);
    @files = grep !/~$/, @files;    # emacs backup files
    @files = grep !/^\#/, @files;   # emacs autosave files
    if (defined($exclude_pat)) {
        @files = grep !/$exclude_pat/, @files;
    }
    foreach (@files) {
        warn "skipping file with spaces ($_)\n" if /\s/;
    }
    @files = grep !/\s/, @files;
    foreach (@files) {
        my $cmd = "cleartool mkelem -nc -mkp \"$_\"";
        print STDERR "$cmd\n";
        system($cmd) unless $nodo;
    }
}
The -mkpath option of cleartool mkelem (abbreviated -mkp in the script) will automatically create and/or check out any needed directories.
For this script, -nodo makes it simply print the commands without running them, and -exclude_pat lets you specify a pattern; any file that matches it will be excluded.
Note that Getopt::Attribute is not part of the standard Perl distribution, but it is available on a CPAN mirror near you.
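A typical invocation would then look like this; a sketch, with the flag names taken from the script above (Getopt::Attribute maps them to command-line options):
# Dry run: print the cleartool commands without executing them,
# skipping anything that matches the exclude pattern
./ctadd -nodo -exclude_pat '\.log$' ChildDirectory5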
You have to import your local directory structure. The command is clearfsimport.
