For the life of me, I cannot figure out why I can't store the output of the mediainfo --Inform command in an array. I've done for loops in Bash before without issue; perhaps I'm missing something really obvious here, or perhaps I'm going about it the completely wrong way.
#!/bin/bash
for file in /mnt/sda1/*.mp4
do vidtime=($(mediainfo --Inform="Video;%Duration%" $file))
done
echo ${vidtime[@]}
The output is always the time of the last file processed in the loop and the rest of the elements of the array are null.
I'm working on a script to endlessly play videos on a Raspberry Pi, but I'm finding that omxplayer isn't always exiting at the end of a video. It's really hard to reproduce, so I've given up on troubleshooting the root cause. Instead, I'm trying to build some logic to kill off any omxplayer processes that are running longer than they should be.
Give this a shot. Note the += operator. You might also want to add quotes around $file if your filenames contain spaces:
#!/bin/bash
for file in /mnt/sda1/*.mp4
do vidtime+=($(mediainfo --Inform="Video;%Duration%" "$file"))
done
echo "${vidtime[@]}"
It's more efficient to do it this way:
read -ra vidtime < <(exec mediainfo --Inform='Video;%Duration% ' -- /mnt/sda1/*.mp4)
No need to use a for loop and repeatedly call mediainfo.
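Since the end goal is to kill omxplayer processes that outlive their video, one way to use those durations is with timeout from coreutils. A minimal sketch, assuming mediainfo reports the duration in whole milliseconds; the 10-second grace period is an arbitrary placeholder:
#!/bin/bash
for file in /mnt/sda1/*.mp4; do
    ms=$(mediainfo --Inform="Video;%Duration%" "$file")
    limit=$(( ms / 1000 + 10 ))          # video length in seconds plus a grace period
    timeout "$limit" omxplayer "$file"   # kill omxplayer if it fails to exit on its own
done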
Related
Command ffmpeg -i file-1.mp4 -vf ass=file-1a.ass burned-1.mp4
works to burn file-1a.ass subtitles on file-1.mp4 video.
But each time I have to reiterate the same command on over 40 different videos and subtitles and each time I have to wait for rendering the output.
So perhaps there is a way to automatically reiterate the same command on all the files.
Looking for a reply found the loop command
for f in *; do ffmpeg $f;
But I am confused how to use it with 2 files, the .mp4 and the .ass file, and also the output file which should have the same number
I imagine should put the same name on each couple of files, such as:
1.mp4 1.ass
2.mp4 2.ass
3.mp4 3.ass
etc
and then
for f in *; do ffmpeg -i $f.mp4 -vf ass=$f.ass $f-output.mp4
But I have no clear idea
You have the right idea, but it won't work as written: with for f in *, the loop executes once with f == 1.mp4, then again with f == 1.ass, and so on.
So you want to modify the loop to only iterate over .mp4 files. Then you want to strip the .mp4 extension from the value of f, that is, strip the last 4 characters, using ${f:0: -4} (this means “get a substring of f, starting at character 0 and stopping 4 characters before the end”).
You obviously want to terminate the loop with done. I also suggest wrapping the parameters in quotes, to prevent word splitting (that is, if the filenames contain certain characters, they might be split into multiple arguments to ffmpeg).
Putting it all together:
for f in *.mp4; do f=${f%.*}; ffmpeg -i "$f.mp4" -vf ass="$f.ass" "$f-output.mp4"; done
Of course, once you have run this, you need to get rid of all the output files before you can run it again. Or you can just put the output files in a different directory to begin with.
Edit: Another user posted an answer, which seems to have been deleted. It was a good answer but lacked explanation. It was basically the same as my answer, except that it used ${f%.mp4} to strip the .mp4 extension. My answer is probably slightly more complex but slightly more efficient, so it’s basically a matter of personal preference.
Edit 2: Based on the link provided by llogan’s comment, I have made these changes:
Remove the quotes in the assignment, as assignments are not subject to word splitting (this is also stated in the bash man page).
Use ${f%.*} to strip the extension. This strips a dot followed by any sequence of characters from the end. It looks for the shortest possible match, so it’s really looking for a dot followed by any sequence of non-dot characters at the end.
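To see what these expansions actually do, here is a quick comparison at the prompt (the filename is made up):
f='movie.part1.mp4'
echo "${f:0: -4}"   # movie.part1  (drop the last 4 characters)
echo "${f%.*}"      # movie.part1  (drop the shortest trailing .suffix)
echo "${f%%.*}"     # movie        (drop the longest trailing .suffix)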
I am trying to run TreSpex analysis on a series of trees, which are saved in newick format as .fasta.txt files in a folder.
I have a list of Taxa names saved in a .txt file
I enter:
perl TreSpEx.v1.pl -fun e -ipt *fasta.txt -tf Taxa_List.txt
But it won't run. I tried writing a loop for each file within the folder but am not very good with them and my line of
for i in treefile/; do perl TreSpEx.v1.1.pl -fun e -ipt *.fasta.txt -tf Taxa_List.txt; done
won't work because -ipt apparently needs a name that starts with a letter or number
In your second example you are actually doing the same thing as in the first (but possibly several times).
I'm not familiar with TreSpEx, and I don't know Bash very well for that matter (which is what you seem to be using), but you might try something like the below.
for i in treefile/*.fasta.txt ; do
perl TreSpEx.v1.1.pl -fun e -ipt $i -tf Taxa_List.txt;
done
Basically, you need to use the variable from the for loop (i) to pass the name of each file to the command.
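If the filenames might contain spaces, quote the variable as well. A slightly hardened version of the same loop (the existence test is just a guard in case the glob matches nothing):
for i in treefile/*.fasta.txt; do
    [ -e "$i" ] || continue   # skip the literal pattern if no files matched
    perl TreSpEx.v1.1.pl -fun e -ipt "$i" -tf Taxa_List.txt
done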
I'm running Ubuntu 14.04 and am trying to fill an array in a shell script so that I can loop over it and utilize its contents to fill a text file. However, there's a snag: it doesn't seem to be filling.
I've simplified the larger script that I'm working with down to the essential issue, reprinted below:
WL_START=1
WL_END=5
WL_INC=1
wl_range=$(seq $WL_START $WL_INC $WL_END)
declare -a WL
for i in $wl_range # loop through sequence and fill array
do
WL[$i]=${wl_range[$i]}
done
echo $wl_range
echo ${wl_range[1]}
echo $WL
echo ${WL[1]}
However, my output looks like this:
1 2 3 4 5
empty line
empty line
empty line
Any ideas? I know that people say to just use seq to fill the array, but I had the same problem there as well.
Too much work.
WL=($(seq $WL_START $WL_INC $WL_END))
wl_range is a string consisting of space-delimited numbers, not an array. Your for loop should simply look like
for i in $wl_range; do
WL[i]=$i
done
That said, don't use the for loop; use @IgnacioVazquez-Abrams' answer.
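To check that the array really is filled, declare -p prints every index and value. Note that bash arrays are zero-indexed, so element 1 is the second number:
WL=($(seq "$WL_START" "$WL_INC" "$WL_END"))
declare -p WL      # shows all indices and values of WL
echo "${WL[0]}"    # 1
echo "${WL[1]}"    # 2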
while read FILE;
do
echo "$FILE"
done
Pretty trivial code... but I have no idea what could possibly be messing it up... I've looked everywhere and this seems to be correct...
I did add quotations, but no luck.
I'm trying to read every file in the directory
- tried adding " in $*;" to the end of the first line with no luck
So is there a way to iterate through all the files and pipe each one to read?
Ok and is there a way for it to iterate through ONLY files and not directories?
Well, it doesn't freeze up. It simply waits for input. That's what read FILE is supposed to do: read a line from standard input (=terminal unless a redirection is present) and store it in the FILE variable.
BTW, there's an extra semicolon you might want to remove; or did you perhaps mean to write
while read FILE; do
echo $FILE
done
If you meant to iterate over every file in a directory, use
for file in *; do
echo "<$file>"
done
If you meant to iterate over the arguments given to your script, use
for arg in "$@"; do
echo "<$arg>"
done
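As for the follow-up about iterating over only files and not directories, a test inside the loop handles it:
for file in *; do
    [ -f "$file" ] || continue   # skip directories and anything that isn't a regular file
    echo "<$file>"
done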
You should likely put echo "$FILE" instead of just echo $FILE. Remember, the contents of $FILE replace the variable before the command is executed, and an unquoted expansion then undergoes word splitting and glob expansion.
For example, if you have just:
echo $FILE
and the value of FILE contains runs of spaces or a glob character like *, the whitespace gets collapsed and the * expands to every filename in the current directory. You could be in for a world of hurt. :)
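A quick way to see the difference:
FILE='hello   *'
echo $FILE     # word splitting collapses the spaces, * expands to filenames
echo "$FILE"   # prints the value literally: hello   *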
I was wondering how bad the performance impact would be for a program migrated from C to shell script.
I have intensive I/O operations.
For example, in C I have a loop that reads from one file and writes into another, taking parts of each line without any consistent relation between them, using pointers. A really simple program.
In the shell script, to move through a line, I'm using ${var:(char):(num_bytes)}. After I finish processing each line, I just append it to another file:
echo "$out" >> "$outFileName"
The program does something like:
while read line; do
out="${line:10:16}.${line:45:2}"
out="$out${line:106:61}"
out="$out${line:189:3}"
out="$out${line:215:15}"
...
echo "$out" >> "$outFileName"
done < "$fileName"
The problem is, C takes like half a minute to process a 400MB file and the shell script takes 15 minutes.
I don't know if I'm doing something wrong or not using the right operator in the shell script.
Edit: I cannot use awk since there is no pattern to process the line with.
I tried commenting out the echo "$out" >> "$outFileName" line, but it doesn't get much better. I think the problem is the ${line:106:61} operation. Any suggestions?
Thanks for your help.
I suspect, based on your description, that you're spawning off new processes in your shell script. If that's the case, then that's where your time is going. It takes a lot of OS resource to fork/exec a new process.
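A rough way to see that cost is to time a shell builtin against the equivalent external command; the builtin stays in-process while the external one forks and execs a thousand times (the path to /bin/echo may differ on your system):
time for i in {1..1000}; do echo hi; done >/dev/null         # builtin: no new processes
time for i in {1..1000}; do /bin/echo hi; done >/dev/null    # fork/exec per iteration: much slower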
As donitor and Dietrich suggested, I did a little research about the AWK language and, again, as they said, it was a total success. Here is a little example of the AWK program:
#!/bin/awk -f
{
    option = substr($0, 5, 9)
    if (option == "SOMETHING") {
        type = substr($0, 80, 1)
        if (type == "A") {
            type = "01"
        } else if (type == "B") {
            type = "02"
        } else if (type == "C") {
            type = "03"
        }
        print substr($0, 7, 3) substr($0, 49, 8) substr($0, 86, 8) type \
            substr($0, 568, 30) >> ARGV[2]
    }
}
And it works like a charm. It takes barely 1 minute to process a 500 MB file.
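One caveat with the ARGV[2] trick: awk treats every ARGV entry as an input file, so after finishing the first file it will try to read the output file as input too. Blanking the entry in a BEGIN block avoids that (a minimal sketch, not from the original script):
#!/bin/awk -f
BEGIN { outfile = ARGV[2]; ARGV[2] = "" }   # remember the target; awk skips empty ARGV entries
{ print substr($0, 7, 3) >> outfile }       # then write to the saved name instead of ARGV[2]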
What's wrong with the C program? Is it broken? Too hard to maintain? Too inflexible? You are more of a Shell than a C expert?
If it ain't broke, don't fix it.
A look at Perl might be an option, too. Easier than C to modify and still speedy I/O; and it's much harder to create useless forks in Perl than in the shell.
If you told us exactly what the C program does, maybe there's a simple and faster-than-light solution with sed, grep, awk or other gizmos in the Unix tool box. In other words, tell us what you actually want to achieve, don't ask us to solve some random problem you ran into while pursuing what you think is a step towards your actual goal.
Alright, one problem with your shell script is the repeated open in echo "$out" >> "$outFileName". Use this instead:
while read line; do
echo "${line:10:16}.${line:45:2}${line:106:61}${line:189:3}${line:215:15}..."
done < "$fileName" > "$outFileName"
As an alternative, simply use the cut utility (but note that it doesn't insert the dot after the first part, and remember that cut counts characters starting at 1 while bash substring offsets start at 0):
cut -c 11-26,46-47,107-167 "$fileName" > "$outFileName"
You get the idea?