What's the simplest way on Linux to "copy" a directory hierarchy so that a new hierarchy of directories is created while all "files" are just symlinks pointing back to the actual files in the source hierarchy?
cp -s does not work recursively.
I just did a quick test on a Linux box, and cp -sR /orig /dest does exactly what you described: it creates a directory hierarchy with symlinks for non-directories back to the original.
cp -as /root/absolute/path/name dest_dir
will do what you want. Note that the source name must be an absolute path; it cannot be relative. Otherwise you'll get this error: "xyz-file: can make relative symbolic links only in current directory."
Also, be careful about what you're copying: if dest_dir already exists, you'll have to copy the directory's contents instead, with something like:
cp -as /root/absolute/path/name/* dest_dir/
cp -as /root/absolute/path/name/.* dest_dir/
(Watch out: in bash the .* glob also matches . and .., so the second command will complain about those two entries; enabling dotglob and using a single * avoids the issue, as in the sketch below.)
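A minimal end-to-end sketch, with hypothetical paths and assuming GNU cp and bash:
#!/bin/bash
# Hypothetical paths, for illustration only.
src=/data/photos            # must be absolute, or cp -s refuses
dest=/data/photos-mirror
mkdir -p "$dest"
shopt -s dotglob nullglob   # make * match dotfiles too, sidestepping the .* pitfall
cp -as "$src"/* "$dest"/
ls -l "$dest"               # regular files now show "-> /data/photos/..."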
Starting from above the original & new directories, I think this pair of find(1) commands will do what you need:
find original -type d -exec mkdir new/{} \;
find original -type f -exec ln -s {} new/{} \;
The first command sets up the directory structure by finding only directories in the original tree and recreating them in the new tree. The second creates the symlinks to the original files. One caveat: ln -s records the target string verbatim, so each link ends up pointing at original/... relative to its own directory inside new/, and will dangle.
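A minimal fix, assuming GNU find (which substitutes {} even when embedded in a longer argument), is to record absolute link targets:
mkdir -p new
find original -type d -exec mkdir -p "new/{}" \;
find original -type f -exec ln -s "$PWD/{}" "new/{}" \;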
There's also the lndir utility (from X11), which does exactly this; I found it mentioned in Debian bug report #301030 ("can we move lndir to coreutils or debianutils?"), and I'm now happily using it.
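Its synopsis is simply lndir fromdir [todir]; giving an absolute fromdir keeps the links valid no matter where the shadow tree lives:
mkdir shadow
lndir /abs/path/to/original shadow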
I googled around a little bit and found a command called lns, available from here.
If you feel like getting your hands dirty
Here is a trick that will automatically create the destination folder and subfolders, and symlink all the files recursively.
In the folder that contains the files and subfolders you want to symlink:
create a file shell.sh:
nano shell.sh
copy and paste this charmer:
#!/bin/bash
# DESTINATION is where the mirrored tree is created; TARGET must be the
# absolute path of the current tree, since it is recorded verbatim as the
# symlink target. Keep the trailing slashes. Both are exported so the
# child bash started by xargs can see them.
export DESTINATION=/your/destination/folder/
export TARGET=/your/target/folder/
# Recreate the directory structure...
find . -type d -print0 | xargs -0 bash -c 'for DIR in "$@";
do
echo "${DESTINATION}${DIR}"
mkdir -p "${DESTINATION}${DIR}"
done' -
# ...then symlink every file into it.
find . -type f -print0 | xargs -0 bash -c 'for file in "$@";
do
ln -s "${TARGET}${file}" "${DESTINATION}${file}"
done' -
save the file: Ctrl+O
exit nano: Ctrl+X
Make your script executable: chmod +x shell.sh
Run your script: ./shell.sh
Happy hacking!
I know the question was regarding shell, but since you can call perl from shell, I wrote a tool to do something very similar to this, and posted it on perlmonks a few years ago. In my case, I generally wanted directories to remain links until I decide otherwise. It'd be a fairly trivial change to do this automatically and recursively.
Related
I am trying to get the following done. I have circa 40 directories of different species, each with hundreds of sequence files that contain orthologous sequences. The sequence files are named identically across the species directories. I want to concatenate the identically named files of the 40 species directories into a single sequence file with the same name.
My data looks as follows, e.g.:
directories: Species1 Species2 Species3
Within directory (similar for all): sequenceA.fasta sequenceB.fasta sequenceC.fasta
I want to get single files named: sequenceA.fasta sequenceB.fasta sequenceC.fasta
where the content of the different files from the different species is concatenated.
I tried to solve this with a loop (but this never ends well with me!):
ls . | while read FILE; do cat ./*/"$FILE" >> ./final/"$FILE"; done
This resulted in empty files and errors. I did try to find a solution elsewhere, e.g.: (https://www.unix.com/unix-for-dummies-questions-and-answers/249952-cat-multiple-files-according-file-name.html, https://unix.stackexchange.com/questions/424204/how-to-combine-multiple-files-with-similar-names-in-different-folders-by-using-u) but I have been unable to adapt them to my case.
Could anyone give me some help here? Thanks!
From the root directory where your species directories reside, you should run the following:
$ mkdir output
$ find Species* -type f -name "*.fasta" -exec sh -c 'cat {} >> output/`basename {}`' \;
It traverses all the files recursively and appends the contents of files with identical basenames into a single file per name under the output directory.
EDIT: even though this was an accepted answer, in a comment the OP mentioned that the real directories don't match a common pattern Species* as shown in the original question. In this case you can use this:
$ find . -type f -not -path "./output/*" -name "*.fasta" -exec sh -c 'cat {} >> output/`basename {}`' \;
This way, we don't specify the search pattern but rather explicitly omit output directory to avoid duplicates of already processed data.
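If you'd rather not splice {} into the shell command (which misbehaves on unusual filenames), a safer variant, assuming GNU find, passes each path as a positional parameter and prunes the output directory in the same breath:
mkdir -p output
find . -path ./output -prune -o -type f -name '*.fasta' \
  -exec sh -c 'cat "$1" >> "output/$(basename "$1")"' _ {} \;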
Background
I am following some instructions from a teammate. These instructions include commands to check out, and then copy, .a files produced by a make command from one VOB to another. The commands were given to me as such:
ct co -nc -unr /vobs/sbov/ABC/libs/qwert/*.a
find . -name '*.a' | grep -v ABCDE | xargs -I {} cp {} /vobs/sbov/ABC/libs/quert
This should normally work without a problem...except that recently numerous .a files in that directory have changed from regular files to symlinks. Symlinks are not ClearCase elements, so the commands attempted to check out, and copy, various non-ClearCase entities rather than the actual files. Hence my question...
Question
How do I modify the commands above to manipulate the actual files the symlinks point to, as opposed to the symlinks themselves?
Try first a cp with the dereference option, -L:
find . -name '*.a' | grep -v ABCDE | xargs -I {} cp -L {} /vobs/sbov/ABC/libs/quert
That should help you get the actual files instead of the symlinks.
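For the checkout half, one hedged option (assuming GNU readlink is available) is to resolve each link to its target before handing it to cleartool:
for f in /vobs/sbov/ABC/libs/qwert/*.a; do
  ct co -nc -unr "$(readlink -f "$f")"
done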
Finding a way to rename multiple files across multiple folders
i.e. I have a file called "jobsforms.html.bak" in multiple folders under:
/home/sites/juk/jobsforms.html.bak
/home/sites/juan/jobsforms.html.bak
/home/sites/pedro/jobsforms.html.bak
/home/sites/luois/jobsforms.html.bak
I want to rename all the files found to "jobsforms.html". How can I do that?
I was trying this approach:
find /home/sites -name "jobsform.html.bak" -exec bash -c 'mv "$1" "${1%/*}"/jobsform.html' -- {} \;
Can anyone help me figure out how to do this?
Thank you,
David
You could pipe the output of find to awk, using the sub function to strip the .bak suffix from the filename (both paths are quoted so whitespace survives the mv):
find /home/sites -name "jobsforms.html.bak" | awk '{ori=$0; sub(/\.bak$/,"",$0); system("mv \"" ori "\" \"" $0 "\"")}'
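For what it's worth, David's own find approach is sound once the -name pattern matches the actual filenames (the attempt above searches for jobsform.html.bak, missing the s); stripping the suffix also avoids hard-coding the name twice:
find /home/sites -name 'jobsforms.html.bak' \
  -exec bash -c 'mv "$1" "${1%.bak}"' -- {} \;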
Using Terminal and Shell/Bash commands, is there a way to retrieve specific files from a web directory? I.e.
Directory: www.site.com/samples/
copy all files ending in ".h" into a folder
The folder contains text files, and other associated files that are of no use.
Thanks :)
There are multiple ways of achieving this recursively:
1. using find
1.1 recreating the directory tree (mkdir -p creates nested folders without errors)
cd path
mkdir backup
find www.site.com/samples/ -type d -exec mkdir -p backup/{} \;
1.2 finding the specific files and copying them into the backup folder (cp -p preserves permissions)
find www.site.com/samples/ -name \*.h -exec cp -p {} backup/{} \;
2. using tar, which actually suits the reverse kind of job, i.e. excluding specific files; that matches the part of the question about the unwanted text files better:
You can add as many --exclude patterns as you like (quoted, so the shell doesn't expand them first):
tar --exclude='*.txt' --exclude='*.filetype2' --exclude='*.filetype3' -cvzf site-backup.tar.gz www.site.com
mv www.site.com www.site.com.1
tar -xvzf site-backup.tar.gz
You can use wget for that, but only if the files are linked to: if they exist on the server but are not referenced from any HTML page, then brute force is the only option.
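If the server does publish an auto-generated index page, a minimal wget sketch (URL hypothetical, flags standard GNU wget) would be:
# -r recurse, -np never ascend to the parent directory,
# -nd don't recreate the directory tree locally, -A keep only matching names
wget -r -np -nd -A '*.h' http://www.site.com/samples/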
cp -aiv /www.site.com/samples/*.h /somefolder/
http://linux.die.net/man/1/cp
I have a folder with multiple sub-folders and each sub-folder contains 10-15 files. I want to perform a certain operation only on the text files in these folders. The folders contain other types of files as well. For now, I am just trying to write a simple for loop to access every file.
for /r in *.txt; do "need to perform this on every file"; done
This gives me an error: -bash: `/r': not a valid identifier
Thanks for the help.
P.S. I am using Cygwin on Win 7.
Your /r is the problem; that's not a valid identifier (as bash said, you need to drop the /). Also, this won't recurse into subdirectories. If your operation is simple, you can directly use the -exec action of find; {} is a placeholder for the filename.
find . -name "*.txt" -exec ls -l {} \;
Otherwise, try something like the loop below (note that this form breaks on filenames that contain whitespace):
for r in $( find . -name "*.txt" ) ; do
echo "$r"
#more actions...
done
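A whitespace-safe alternative, assuming GNU find and bash, streams NUL-delimited names instead:
find . -name '*.txt' -print0 | while IFS= read -r -d '' r; do
  echo "$r"
  # more actions...
done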
With bash (4.0 or later, where the globstar option lets ** match recursively):
shopt -s globstar
for file in **/*.txt; do ...; done
I would use find for your use case.
Something like:
find . -name "*.txt" -exec doSomeThing {} \;