Unix script stalls/freezes on loop for some odd reason?

while read FILE;
do
echo "$FILE"
done
Pretty trivial code... but I have no idea what could possibly be messing it up... I've looked everywhere and this seems to be correct...
I did add quotations, but no luck.
I'm trying to read every file in the directory
- tried adding " in $*;" to the end of the first line with no luck
So is there a way to iterate through all the files and pipe each one to read?
Ok and is there a way for it to iterate through ONLY files and not directories?

Well, it doesn't freeze up. It simply waits for input. That's what read FILE is supposed to do: read a line from standard input (=terminal unless a redirection is present) and store it in the FILE variable.
BTW, there's an extra semicolon you might want to remove; or did you perhaps mean to write
while read FILE; do
echo $FILE
done
If you meant to iterate over every file in a directory, use
for file in *; do
echo "<$file>"
done
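If you additionally want to skip directories and feed each name into read, as asked above, here is one hedged sketch (the find command with -maxdepth 1 and -type f is my addition, not from the question; it lists only regular files in the current directory):
find . -maxdepth 1 -type f | while read -r FILE; do
echo "$FILE"
done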
If you meant to iterate over the arguments given to your script, use
for arg in "$@"; do
echo "<$arg>"
done

You should likely write echo "$FILE" instead of just echo $FILE. Remember that the contents of $FILE replace the variable and are then subject to word splitting and filename (glob) expansion before the command runs.
For example, if you have just:
echo $FILE
but the value of FILE is hello; shutdown, or a name containing spaces or glob characters, you could be in for a world of hurt. :)
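A small illustration of why the quotes matter, using a hypothetical value containing a glob character:
FILE='backup *.txt'
echo $FILE      # unquoted: the * may be glob-expanded into every .txt name in the directory
echo "$FILE"    # quoted: prints the literal string backup *.txt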

Related

Trouble storing the output of mediainfo video times into an array

For the life of me, I cannot figure out why I can't store the output of the mediainfo --Inform command into an array. I've done for loops in Bash before without issue, perhaps I'm missing something really obvious here. Or, perhaps I'm going about it the completely wrong way.
#!/bin/bash
for file in /mnt/sda1/*.mp4
do vidtime=($(mediainfo --Inform="Video;%Duration%" $file))
done
echo ${vidtime[@]}
The output is always the time of the last file processed in the loop and the rest of the elements of the array are null.
I'm working on a script to endlessly play videos on a Raspberry Pi, but I'm finding that omxplayer isn't always exiting at the end of a video, it's really hard to reproduce so I've given up on troubleshooting the root cause. I'm trying to build some logic to kill off any omxplayer processes that are running longer than they should be.
Give this a shot. Note the += operator. You might also want to add quotes around $file if your filenames contain spaces:
#!/bin/bash
for file in /mnt/sda1/*.mp4
do vidtime+=($(mediainfo --Inform="Video;%Duration%" "$file"))
done
echo ${vidtime[@]}
It's more efficient to do it this way:
read -ra vidtime < <(exec mediainfo --Inform='Video;%Duration% ' -- /mnt/sda1/*.mp4)
No need to use a for loop and repeatedly call mediainfo.
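Whichever way the array is filled, it can then be walked element by element; a minimal usage sketch:
for t in "${vidtime[@]}"; do
echo "duration: $t"
done
echo "files processed: ${#vidtime[@]}"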

trying to use while read loop to save result to an array

Quick question, can I do this?:
while IFS=: read menu script
do
echo "$x. $menu"
command[x]="$script"
let x++
done < file.txt
Read two strings per line from a file, print one and save the other to an array.
file.txt looks like this:
File Operations:~/scripts/project/File_Operations.sh
Directory Operations:~/scripts/project/Directory_Operations.sh
Process Management:~/scripts/project/Process_Management.sh
Search Operations:~/scripts/project/Search_Operations.sh
Looks right. Two things:
You need to initialise x. x=0
When you use x as a subscript it needs the $. i.e. command[$x]="$script"
And don't forget the {}s when referencing the command array, e.g. ${command[0]}.
What shell are you using? Works for me in bash, I just prepended the following two lines to the script:
#!/bin/bash
x=0
Without setting $x to 0, the user is presented with
. File Operations
1. Directory Operations
2. Process Management
3. Search Operations
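Putting the fixes together, a minimal corrected sketch (same file.txt layout as above; the final echo is just a hypothetical way to show that the array was filled):
#!/bin/bash
x=0
while IFS=: read menu script
do
echo "$x. $menu"
command[$x]="$script"
let x++
done < file.txt
echo "first saved script: ${command[0]}"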

batch file echo line with 0 does not write to file

I have this in a batch file...
ECHO ActionNumber=0>> %wkdir%\some.ini
...problem is, it never gets written to the file but rather displayed on the console like this...
ActionNumber=
If I had...
ECHO ActionNumber=20>> %wkdir%\some.ini
...it gets written just fine
How can I write a line to this file that is simply "ActionNumber=0"
(without quotes, I'm just showing that it needs to be all one line with no spaces, no trailing space either)
>>%wkdir%\some.ini ECHO ActionNumber=0
Unfortunately, the space-after-digit solution echoes the space into the file, so you have trailing spaces.
(That's a gotcha with any digit immediately preceding a redirector)
The 0 before the >> makes the parser think you are attempting to redirect stdin. Probably the simplest solution is to move the redirection, as Peter Wright has done. Another option is to enclose the command in parentheses.
(ECHO ActionNumber=0)>> %wkdir%\some.ini
Use a space after the 0:
ECHO ActionNumber=0 >> %wkdir%\some.ini
Edit: Thanks to @Christian K
0>> redirects stream 0, i.e. the console's stdin (stdout is 1 and stderr is 2)

Deleting contents of a file in Tcl

I'm having some trouble deleting the contents of a text file. From what I can tell, I cannot seem to rename or delete this file and create a new one with the same name, due to permission issues with the PLM software we use. Unfortunately, I am on my own here, since no one seems to know what exactly is wrong.
I can read and write to this file, however. So I've been looking at the seek command and doing something like this:
set f [open "C:/John/myFile.txt" "a+"]
seek $f 0
set fp [tell $f]
seek $f 0 end
set end [tell $f]
# Restore current file pointer
seek $f $fp
while { $fp < $end } {
puts -nonewline $f " "
incr fp
}
close $f
This seems to replace all the lines with spaces, but I'm not sure this is the correct way to approach this. Can someone give me some pointers? I'm still relatively new to Tcl.
Thanks!
If you've got at least Tcl 8.5, open the file in r+ or w+ mode (experimentation may be required) and then use chan truncate:
chan truncate $f 0
If you're using 8.4 or before, you instead have to do this (and it only works for truncating to empty):
close [open $thefilename "w"]
(The w mode creates the file if it doesn't exist, and truncates it to empty on open if it does. The rest of the program might or might not like this!)
Note however that this does not reset where other channels open on the file think they are. This can lead to strange effects (such as writing at a large offset, with the OS filling out the preceding bytes with zeroes) even without locking.
close [open $path w]
And voila, an empty file. If this file does not yet exist, it will be created.
A really easy way to do this is to just overwrite your file with an empty file. For example, create an empty file (you can do this manually or with the following Tcl code):
set blank_file [open "C:/tmp/blank.txt" "w"]
close $blank_file
Then just over-write your original file with the blank file as follows:
file rename -force "C:/tmp/blank.txt" "C:/John/myFile.txt"
Of course, you may have permissions problems if something else has grabbed the file.
You say the file is opened exclusively by another process, but you can write to it? I think you have permission problems. Are you using Linux or Unix? (It seems to be a Windows system, but permission problems usually occur on Linux/Unix systems; it is weird, isn't it?)
The file is not exclusively opened if you are able to read and write to it, so you may simply lack the permission to delete the file.
Also, it is better to test the code on a file you know you have full permissions on. If the code works there, you can focus on your target file. You can also Google for 'file operations in Tcl'; read Manipulating Files With Tcl.

How do I add an operator to Bash in Linux?

I'd like to add an operator ( e.g. ^> ) to handle prepend instead append (>>). Do I need to modify Bash source or is there an easier way (plugin, etc)?
First of all, you'd need to modify bash sources, and quite heavily, because your ^> would be really hard to implement.
Note that bash redirection operators usually do very simple writes, and work on a single file (or program, in the case of pipes) only. Excluding very specific solutions, you usually can't write to the beginning of a file, for the very simple reason that you'd need to move all remaining contents forward after each write. You could try doing that, but it will be hard, very inefficient (since every write would require re-writing the whole file) and very unsafe (since with any error you will end up with a random mix of the old and new versions).
That said, you are indeed probably better off with a function or any other solution which would use a temporary file, like others suggested.
For completeness, my own implementation of that:
prepend() {
local tmp=$(tempfile)
if cat - "${1}" > "${tmp}"; then
mv "${tmp}" "${1}"
else
rm -f "${tmp}"
# some error reporting
fi
}
Note that, unlike @jpa suggested, you should be writing the concatenated data to a temporary file, as that operation can fail; if it does, you don't want to lose your original file. Afterwards, you just replace the old file with the new one, or delete the temporary file and handle the failure any way you like.
The synopsis is the same as with the other solution:
echo test | prepend file.txt
And a slightly modified version that retains permissions and plays safe with symlinks (if that is necessary), like >> does:
prepend() {
local tmp=$(tempfile)
if cat - "${1}" > "${tmp}"; then
cat "${tmp}" > "${1}"
rm -f "${tmp}"
else
rm -f "${tmp}"
# some error reporting
fi
}
Just note that this version is actually less safe, since if something else writes to the disk and fills it up during the second cat, you'll end up with an incomplete file.
To be honest, I wouldn't personally use it; I'd handle symlinks and resetting permissions externally, if necessary.
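As a hedged sketch of what handling permissions "externally" could look like (assumes GNU stat and chmod, plus the mv-based prepend function above; file.txt is just an example):
# remember the original mode, prepend via the temporary-file version, then restore it
mode=$(stat -c '%a' file.txt)
echo test | prepend file.txt
chmod "$mode" file.txt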
^ is a poor choice of character, as it is already used in history substitution.
To add a new redirection type to the shell grammar, start in parse.y. Declare it as a new %token so that it may be used, add it to STRING_INT_ALIST other_token_alist[] so that it may appear in output (such as error messages), update the redirection rule in the parser, and update the lexer to emit this token upon encountering the appropriate characters.
command.h contains enum r_instruction of redirection types, which will need to be extended. There's a giant switch statement in make_redirection in make_cmd.c processing redirection instructions, and the actual redirection is performed by functions throughout redir.c. Scattered throughout the rest of source code are various functions for printing, copying, and destroying pipelines, which may also need to be updated.
That's all! Bash isn't really that complex.
This doesn't discuss how to implement a prepending redirection, which will be difficult as the UNIX file API only provides for appending and overwriting. The only way to prepend to a file is to rewrite it entirely, which (as other answers mention) is significantly more complex than any existing shell redirections.
Might be quite difficult to add an operator, but perhaps a function could be enough?
function prepend { tmp=`tempfile`; cp $1 $tmp; cat - $tmp > $1; rm $tmp; }
Example use:
echo foobar | prepend file.txt
prepends the text "foobar" to file.txt.
I think bash's plugin architecture (loading shared objects via the 'enable' built-in command) is limited to providing additional built-in commands. The redirection operators are part of the syntax for running simple commands, so I think you would need to modify the parser to recognize and handle your new ^> operator.
Most Linux filesystems do not support prepending. In fact, I don't know of any one that has a stable userspace interface for it. So, as stated by others already, you can only rely on overwriting, either just the initial parts, or the entire file, depending on your needs.
You can easily (partially) overwrite initial file contents in Bash, without truncating the file:
exec {fd}<>"$filename"
printf 'New initial contents' >&$fd
exec {fd}>&-
Above, $fd is the file descriptor automatically allocated by Bash, and $filename is the name of the target file. Bash opens a new read-write file descriptor to the target file on the first line; this does not truncate the file. The second line overwrites the initial part of the file. The position in the file advances, so you can use multiple commands to overwrite consecutive parts in the file. The third line closes the descriptor; since there is only a limited number available to each process, you want to close them after you no longer need them, or a long-running script might run out.
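A concrete, purely hypothetical usage of the same technique, patching the first bytes of a sample file in place:
filename=data.txt
printf 'old old old\n' > "$filename"   # sample file to patch
exec {fd}<>"$filename"
printf 'NEW' >&$fd                     # overwrites only the first three bytes
exec {fd}>&-
cat "$filename"                        # now prints: NEW old old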
Please note that > does less than you expected:
Remove the > and the following word from the commandline, remembering the redirection.
When the command line is processed and the command can be launched, call fork(2) (or clone(2)) to create a new process.
Modify the new process according to the command. That includes things like modified environment variables (SOMEVAR=foo yourcommand), but also changed file descriptors. At this point, a > yourfile from the command line will have the effect that the file is open(2)'ed at the stdout file descriptor (that is, fd 1) in write-only mode, truncating the file to zero bytes. A >> yourfile would have the effect that the file is opened at stdout in write-only mode and append mode.
Only now launch the program, like execv(yourprogram, yourargs).
The redirections could, for a simple example, be implemented like
open(yourfile, O_WRONLY|O_TRUNC);
or
open(yourfile, O_WRONLY|O_APPEND);
respectively.
The program then launched will have the correct environment set up, and can happily write to fd1. From here, the shell is not involved. The real work is not done by the shell, but by the operating system. As Unix doesn't have a prepend mode (and it would be impossible to integrate that feature correctly), everything you could try would end up in a very lousy hack.
Try to re-think your requirements, there's always a simpler way around.
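In shell terms, the fork-then-redirect-then-exec sequence described above can be spelled out explicitly with a subshell and exec; a hedged sketch using the same placeholder names (yourprogram, yourfile):
( exec > yourfile; yourprogram )    # equivalent in effect to: yourprogram > yourfile
( exec >> yourfile; yourprogram )   # equivalent in effect to: yourprogram >> yourfile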

Resources