I'm trying to convert a hierarchy of TIFF image files into JPG and, out of boredom, I want to combine find and ffmpeg in a single file.
So I set find to invoke sh with the -s flag, like this:
#!/bin/sh
export IFS=""
find "$#" -iname 'PROC????.tif' -exec sh -s {} + << \EOF
for t ; do
ffmpeg -y -v quiet -i $t -c:v mjpeg ${t%.*}.jpg
rm $t
done
EOF
However, there are just too many files in the directory hierarchy, so find chopped the filename list into several smaller chunks, and sh -s was only successfully invoked for the first chunk of arguments.
The question being: how could one feed such an in-body command to every sh invocation in the find command?
Update
The tag "heredoc" on the question is intended for receiving answers that do not rely on external file or self-referencing through $0. It is also intended that no filename would go through string-array processing such as padding with NUL-terminator or newline, and can be directly passed as arguments.
The heredoc is being used as the input to find. I think your best bet is to not use a heredoc at all, but just use a string:
#!/bin/sh
find "$#" -iname 'PROC????.tif' -exec sh -c '
for t ; do
ffmpeg -y -v quiet -i "$t" -c:v mjpeg "${t%.*}.jpg" &&
rm "$t"
done
' sh {} +
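If you do want to keep the here-document, as the update asks, one possibility (a sketch along the same lines, not tested at scale) is to capture the here-doc body into a variable once and hand that string to every sh invocation, so no external file and no $0 self-reference are involved:
#!/bin/sh
# capture the here-doc body once into a variable; the quoted \EOF keeps it verbatim
script=$(cat << \EOF
for t ; do
    ffmpeg -y -v quiet -i "$t" -c:v mjpeg "${t%.*}.jpg" &&
    rm "$t"
done
EOF
)
# every -exec batch runs the same inline script; the filenames arrive as "$@"
find "$@" -iname 'PROC????.tif' -exec sh -c "$script" sh {} +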
I am rewriting your code below:
#!/bin/bash
# note: pass an absolute path as "$1", since the loop changes directories
find "$1" -name "PROC????.tif" > /tmp/imagefile_list.txt
while IFS= read -r filepath
do
    path=${filepath%/*}
    imgfile=${filepath##*/}
    jpgFile=${imgfile%.*}.jpg
    cd "$path" || continue
    ffmpeg -y -v quiet -i "$imgfile" -c:v mjpeg "$jpgFile"
    rm -f "$imgfile"
done < /tmp/imagefile_list.txt
If you don't want to change the current directory, you can do it as below:
#!/bin/bash
find "$1" -name "PROC????.tif" > /tmp/imagefile_list.txt
# To list all .tif files instead, use the command below
# find "$1" -name "*.tif" > /tmp/imagefile_list.txt
while IFS= read -r filepath
do
    path=${filepath%/*}
    imgfile=${filepath##*/}
    jpgFile=$path/${imgfile%.*}.jpg
    ffmpeg -y -v quiet -i "$filepath" -c:v mjpeg "$jpgFile"
    rm -f "$filepath"
done < /tmp/imagefile_list.txt
rm -f /tmp/imagefile_list.txt
Related
I am trying to write a shell script that reads a file line by line and executes a command with its arguments taken from the space-delimited fields of each line.
To be more precise, I need to download a file from a URL given in the second column to the path given in the first column using wget, but I don't know how to load this file and get the values in the script.
File.txt
file-18.log https://example.com/temp/file-1.log
file-19.log https://example.com/temp/file-2.log
file-20.log https://example.com/temp/file-3.log
file-21.log https://example.com/temp/file-4.log
file-22.log https://example.com/temp/file-5.log
file-23.pdf https://example.com/temp/file-6.pdf
Desired output is
wget url[1] -o url[0]
wget https://example.com/temp/file-1.log -o file-18.log
wget https://example.com/temp/file-2.log -o file-19.log
...
...
wget https://example.com/temp/file-6.pdf -o file-23.pdf
Use read and a while loop in bash to iterate over the file line-by-line and call wget on each iteration:
while read -r NAME URL; do wget "$URL" -o "$NAME"; done < File.txt
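One side note: wget's lowercase -o only redirects wget's log output to that file; if the intent is to save the downloaded document under the name from the first column, the capital -O option does that:
while read -r NAME URL; do wget "$URL" -O "$NAME"; done < File.txt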
Turning a file into arguments to a command is a job for xargs:
xargs -a File.txt -L1 wget -o
xargs -a File.txt: Extract arguments from the File.txt file.
-L1: Pass all arguments from 1 line to the command.
wget -o: the command (and its first option) that xargs runs; the two fields from each line are appended as further arguments, giving e.g. wget -o file-18.log https://example.com/temp/file-1.log.
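To preview the commands before downloading anything, you can put echo in front, so xargs prints each constructed command instead of running it:
xargs -a File.txt -L1 echo wget -o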
You can count, using a for loop and the output of seq like so:
In bash, you can add numbers using $((C+3)).
This will get you:
COUNT=6
OFFSET=18
for C in $(seq "$((COUNT-1))"); do
    wget https://example.com/temp/file-${C}.log -o file-$((C+OFFSET-1)).log
done
wget https://example.com/temp/file-${COUNT}.pdf -o file-$((COUNT+OFFSET-1)).pdf
Edit: Sorry, I misread your question. So if you have a file with the file mappings, you can use awk to get the URL and the FILE and then do the download:
cat File.txt | while read -r L; do
    URL="$(echo "${L}" | awk '{print $2}')"
    FILE="$(echo "${L}" | awk '{print $1}')"
    wget "${URL}" -o "${FILE}"
done
This question already has answers here:
While loop stops reading after the first line in Bash
I thought my problem was trivial, but I cannot figure out why my script only runs for the first element of the array.
I have a Jenkins job (a bash script). This job gathers hostnames and sends SSH commands, through a second script, using the gathered info:
rm /tmp/hosts
docker exec -t tmgnt_consul_1 consul members -status=alive | grep -v Node | awk '{print $1}' | cut -d : -f1 >> /tmp/hosts
sed -i '/someunnecessaryinfo/d' /tmp/hosts
echo >> /tmp/hosts
shopt -s lastpipe
while IFS= read -r line; do
    echo "host is >>$line<<"
    url="http://111.111.111.111:8500/v1/catalog/nodes"
    term_IP=$(curl -s "$url" | jq -r --arg Node "$line" '.[] | select(.Node == $Node) | .Address')
    echo "$term_IP"
    sudo bash -x /home/rtm/t_mgnt/check_fw "$term_IP"
done < /tmp/hosts
Second script:
#!/bin/bash
term_IP=$1
sudo sshpass -p 'some.pass' ssh -o StrictHostKeyChecking=no user@$term_IP "sudo test -d /root/nv9"
if [ $? != 0 ]; then
    sudo sshpass -p 'some.pass' \
        scp -n -o StrictHostKeyChecking=no -r /home/rtm/t_mgnt/nv9 user@$term_IP:
    sudo sshpass -p 'some.pass' \
        ssh -n -o StrictHostKeyChecking=no user@$term_IP "sudo mv nv9 /root/"
    sudo sshpass -p 'some.pass' \
        ssh -n -o StrictHostKeyChecking=no user@$term_IP "sudo dpkg -i /root/nv9/libudev0_175-0ubuntu9_amd64.deb"
    sudo sshpass -p 'some.pass' \
        ssh -n -o StrictHostKeyChecking=no user@$term_IP "sudo /root/nv9/DetectValidator"
else
    sudo sshpass -p 'some.pass' \
        ssh -n -o StrictHostKeyChecking=no user@$term_IP "sudo /root/nv9/DetectValidator"
fi
The job works fine and returns correct values, but only for the first element of the array.
PS: I already searched this and other sites, and the following answer didn't help me: Shell script while read line loop stops after the first line (I already use "ssh -n -o").
Perhaps you can point out what I missed.
Possibly this ssh call eats your input:
sudo sshpass -p 'some.pass' ssh -o StrictHostKeyChecking=no user@$term_IP "sudo test -d /root/nv9"
                            ^^^
Try adding -n.
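A minimal sketch of the fix, shown on the first ssh call of the second script (either form prevents ssh from swallowing the loop's remaining stdin):
# option 1: -n makes ssh read its stdin from /dev/null
sudo sshpass -p 'some.pass' ssh -n -o StrictHostKeyChecking=no user@$term_IP "sudo test -d /root/nv9"
# option 2: explicitly give the command an empty stdin
sudo sshpass -p 'some.pass' ssh -o StrictHostKeyChecking=no user@$term_IP "sudo test -d /root/nv9" < /dev/null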
I want to run a script that does some file alteration on .php files.
There are hundreds of EmailController.php files in different sites that should be modified based on the site name, depending on which folder they are located in.
#!/bin/bash
source /root/sitenames.txt
sed -i 's#'"/var/vmail/skeleton.com/"'#'"/var/vmail/$sitename/"'#g' /var/www/$sitename/web/EmailController.php
The easiest way would be to read a sitenames.txt file containing the domain names, one per line, and substitute each domain for $sitename in the bash script.
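For illustration, a minimal sketch of that loop (assuming /root/sitenames.txt holds one domain per line and the paths in the script above are right):
#!/bin/bash
# substitute the skeleton domain with each site's own domain in that site's EmailController.php
while IFS= read -r sitename; do
    sed -i "s#/var/vmail/skeleton.com/#/var/vmail/${sitename}/#g" "/var/www/${sitename}/web/EmailController.php"
done < /root/sitenames.txt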
@tom-fenech is right in saying this should be in a config file rather than hardcoded into your .php files. Regardless, you need to change what you have, and you'll need to do something like this to move to a config file anyway.
Short Answer
skeldir="/tmp/skeleton"
skelsite="skeleton.com"
sitename="example.com"
# note: sed -i "" is the BSD/macOS form of in-place editing; with GNU sed, drop the empty "" argument
fgrep -lr --null "/var/vmail/${skelsite}/" "${skeldir}" \
    | xargs -0 sed -i "" "s#/var/vmail/${skelsite}/#/var/vmail/${sitename}/#g"
Which is mostly equivalent to:
find "${skeldir}" -type f -print0 \
| xargs -0 sed -i "" "s#/var/vmail/${skelsite}/#/var/vmail/${sitename}/#g"
I like the fgrep version better because it runs sed on a smaller set of files than find (assuming your pattern isn't in every file).
Long Answer
Putting this together:
$ cat /tmp/x.sh
#!/bin/sh
skeldir="/tmp/skeleton"
skelsite="skeleton.com"
sitename="example.com"
[ -d "${skeldir}" ] && rm -rf "${skeldir}"
mkdir -p "${skeldir}/subdir"
echo 'ignore this line' \
| tee "${skeldir}/file1.php" "${skeldir}/subdir/file2.php" "${skeldir}/file3.php" \
> "${skeldir}/subdir/file4.php"
echo "foo /var/vmail/${skelsite}/ bar" \
| tee -a "${skeldir}/file1.php" >> "${skeldir}/subdir/file2.php"
echo "BEFORE:"
echo " Files that have \"${skelsite}\": $(fgrep -lr "/var/vmail/${skelsite}/" "${skeldir}" | wc -l)"
echo " Files that have \"${sitename}\": $(fgrep -lr "/var/vmail/${sitename}/" "${skeldir}" | wc -l)"
# make changes (--null/-0 ensures you can have spaces, etc, in filenames)
fgrep -lr --null "/var/vmail/${skelsite}/" "${skeldir}" \
| xargs -0 sed -i "" "s#/var/vmail/${skelsite}/#/var/vmail/${sitename}/#g"
# Alternate:
# find "${skeldir}" -type f -print0 \
# | xargs -0 sed -i "" "s#/var/vmail/${skelsite}/#/var/vmail/${sitename}/#g"
echo "AFTER:"
echo " Files that have \"${skelsite}\": $(fgrep -lr "/var/vmail/${skelsite}/" "${skeldir}" | wc -l)"
echo " Files that have \"${sitename}\": $(fgrep -lr "/var/vmail/${sitename}/" "${skeldir}" | wc -l)"
And see what happens:
$ /tmp/x.sh
BEFORE:
Files that have "skeleton.com": 2
Files that have "example.com": 0
AFTER:
Files that have "skeleton.com": 0
Files that have "example.com": 2
You may consider running a backup before doing this! Something like:
$ rsync -avP --delete /var/www/$sitename/ /var/www.backup/$sitename/
I need a shell script to remove files without an extension (like .txt or any other extension). For example, I found a file named imeino1 (without .txt or anything else) and I want to delete such files via a shell script, so if any developer knows about this, please explain how to do it.
No finds, no pipes, just plain old shell:
#!/bin/sh
for file in "$#"; do
case $file in
(*.*) ;; # do nothing
(*) rm -- "$file";;
esac
done
Run it with a list of files as arguments.
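For example, assuming the script is saved as remove-noext.sh (the name is just illustrative) and made executable:
./remove-noext.sh *                          # files in the current directory only
find . -type f -exec ./remove-noext.sh {} +  # or recurse into subdirectories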
Assuming you mean a UNIX(-like) shell, you can use the rm command:
rm imeino1
rm -rvf $(ls | grep -v '\.txt$')
The ls | grep -v '\.txt$' part should be inside back-quotes `…` (or, better, inside $(…)), and plain ls should be used rather than ls -lrth, so that only the names are passed to rm.
If the other filenames contain no "." at all, then instead of matching .txt you can give grep -v an escaped dot:
rm -rvf $(ls | grep -v '\.')
This will remove all the directories and files in the current directory whose names contain no extension.
rm -vf $(ls | grep -v '\.') won't remove directories, but will remove all the files without an extension (provided the filename does not contain the character ".").
for file in $(find . -type f | grep -v '\....$') ; do rm "$file" 2>/dev/null; done
Removes all files not ending in .??? in or below the current directory.
To remove all files in or below the current directory that contain no dot in the name, regardless of whether the names contain blanks or newlines or any other awkward characters, you can use a POSIX 2008-compliant version of find (such as found with GNU find, or BSD find):
find . -type f '!' -name '*.*' -exec rm {} +
This looks for files (not directories, block devices, …) with a name that does not match *.* (so does not contain a .) and executes the rm command on conveniently large groups of such file names.
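To preview what would be removed before committing, the same expression can be run with -print in place of -exec rm:
find . -type f '!' -name '*.*' -print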
I have multiple (more than 100) .c files and I want to change a particular piece of text in all the files in which it exists. I am using Ubuntu!
How can I do it? (I would prefer the command line rather than installing any application.)
Thanks a lot!
OLD=searchtext
NEW=replacedtext
YOURFILE=/path/to/your/file
TMPFILE=$(mktemp)
sed "s/$OLD/$NEW/g" "$YOURFILE" > "$TMPFILE" && mv "$TMPFILE" "$YOURFILE"
rm -f "$TMPFILE"
you can also use find to locate your files:
find /path/to/parent/dir -name "*.c" -exec sh -c \
    'sed "s/$1/$2/g" "$3" > "$3.tmp" && mv "$3.tmp" "$3"' sh "$OLD" "$NEW" {} \;
find /path/to/parent/dir -name "*.c" finds all files named *.c under /path/to/parent/dir. -exec command \; executes the command once per found file; {} stands for the found file (here it is handed to sh -c as a positional argument, together with $OLD and $NEW).
You should check out sed, which lets you replace some text with other text (among other things).
Example:
sed 's/day/night/g' oldfile > newfile
will change all occurrences of "day" to "night" in oldfile and store the new, changed version in newfile.
To run it on many files, there are a few things you could do:
use foreach in your favorite shell
use find, like this:
find . -name "namepattern" -exec sed -i "sed-expr" "{}" \;
use file patterns, like this: sed -i "sed-expr" *pattern?.cpp
where *pattern?.cpp is just a name pattern for all files that start with some string, then have "pattern" in them, followed by any single character and a ".cpp" suffix.
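Before editing in place it can help to check what a pattern will actually match; the file names below are only illustrative:
# expands the glob without changing anything; matches e.g. "mypattern1.cpp" but not "pattern.cpp"
echo *pattern?.cpp
# lists the files that find would hand to sed
find . -name "*.c" -print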