How do I do this simple batch rename in the bash shell?

I'm trying to rename all files of the form *_[m|f].txt to *.txt.
If an instance of _[m|f] appears in the file name, we can assume it always
appears right before the .txt extension.
Examples:
+-----------------------+---------------------+
| input                 | output              |
+-----------------------+---------------------+
| hello_hi_wefwef_f.txt | hello_hi_wefwef.txt |
+-----------------------+---------------------+
| ya_yo_sup_m.txt       | ya_yo_sup.txt       |
+-----------------------+---------------------+
I am currently trying this:
for i in *_[m|f].txt ; do
mv "$i" "${i/-_[m|f].txt/.txt}"
done
but it's complaining there is no | operator for regex. Is there a simple way to do what I want?

Modify your script like this:
for i in *_[mf].txt; do
    mv "$i" "${i/_[mf].txt/.txt}"
done
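If you want to sanity-check the renames first, a dry run along these lines can help (a sketch that only prints the mv commands; ${i%_[mf].txt} strips the suffix before re-adding .txt):

for i in *_[mf].txt; do
    echo mv "$i" "${i%_[mf].txt}.txt"   # remove the echo once the output looks right
done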

Related

Windows PowerShell: Display File Path along with Name and LastWriteTime

I'm using a Windows PowerShell command that works well, but with one caveat. See below:
PS F:\Bizfi> dir -r | Out-GridView | Select FullName, LastWriteTime
The caveat is that it does not display the file path. Is there a way to include the file path within the name or as a separate property?
Thanks
If you want to mimic normal Get-ChildItem output:
dir -r | Select Mode,LastWriteTime,Length,Name,Fullname | Out-GridView

using command variable in column output not working

I'm using a script that uses curl to obtain specific array values from a configuration. I'd like to place the output into columns separating the values (the values are unknown to the script). Here's my code:
# get overlay networks and their details
get_overlay=`curl -H "X-Person-Token: $auth_token" -H "X-Person-Email: $auth_email" -k "$api_host/api/v1/networks"`
# array of overlay names with uuid
overlay_name=`echo $get_overlay | jq '.[] | .name'`
overlay_uuid=`echo $get_overlay | jq '.[] | .uuid'`
echo ""
echo -e "Overlay UUID\n$overlay_name $overlay_uuid" | column -t
exit 0
Here's the output:
Overlay UUID
"TESTOVERLAY"
"Auto_API_Overlay"
"ANOTHEROVERLAYTEST" "ea178905-6ab0-4154-ab05-412dc4b39151"
"e5be9dbe-b0fc-4e30-aaf5-ac4bdcd863a7"
"850ebf6b-3651-4cf1-aae1-5a6c03fad61b"
What I was expecting was:
Overlay UUID
"TESTOVERLAY" "ea178905-6ab0-4154-ab05-412dc4b39151"
"Auto_API_Overlay" "e5be9dbe-b0fc-4e30-aaf5-ac4bdcd863a7"
"ANOTHEROVERLAYTEST" "850ebf6b-3651-4cf1-aae1-5a6c03fad61b"
I'm an absolute beginner at this; any insight is very much appreciated.
Thanks!
I would suggest using paste to combine your two variables line by line:
paste <(printf 'Overlay\n%s\n' "$overlay_name") <(printf 'UUID\n%s\n' "$overlay_uuid") | column -t
Two process substitutions are used to pass the contents of each variable along with their titles.
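Applied to the script above, that would look something like this (a sketch, assuming $get_overlay still holds the API response from curl):

overlay_name=$(echo "$get_overlay" | jq '.[] | .name')
overlay_uuid=$(echo "$get_overlay" | jq '.[] | .uuid')

# each process substitution prints a header followed by one value per line;
# paste joins the two streams line by line, and column -t aligns the result
paste <(printf 'Overlay\n%s\n' "$overlay_name") <(printf 'UUID\n%s\n' "$overlay_uuid") | column -t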

Bash executing a subset of lines in script

I have a file which contains commands similar to:
cat /home/ptay89/test/01.out
cat /home/ptay89/testing/02.out
...
But I only want a few of them to run. For example, if I only want to see the output files ending in 1.out, I can do this:
cat commands | grep 1.out | sh
However, I get the following output for each of the lines in the commands file:
: cannot be loaded - no such file or directoryst/01.out
When I copy and paste the commands I want from the file directly, it works fine. Are there better ways of doing this?
You probably have spurious carriage returns in your file (created under Windows?). Use tr instead of cat to remove them:
tr -d '\015' <commands | grep 1.out | sh
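If you want to confirm that carriage returns are the problem before changing anything, something like this should make them visible (assuming GNU cat and sed; DOS line endings show up as ^M):

cat -A commands | head          # each line ends in ^M$ if it carries a carriage return
sed -i 's/\r$//' commands       # optionally strip them from the file in place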
Try doing a
grep -e '^cat.*out' commands | grep 1.out | sh
That should ignore any weird characters and take only the ones you need.

Sorting a bash array

I'm trying to sort the output of this code by file size. Currently I have:
IFS=!
FILEARRAY=(`find * -printf %f!`)
to get all of the file names out of the directory. I've tried piping it all sorts of ways and nothing works. Is it even possible to do it like this, or do I need to go about getting the file names into my array a different way?
Thanks
Try something like this instead:
FILEARRAY=$(find * -printf '%s~%f\n' | sort -n | awk -F"~" '{print $2}')
This should give you a list of file names sorted by size.
Not sure what you are trying to achieve here, but to extract the size of the files you might want to use sed. To pass it to sort or some other sorting utility, check out xargs, which gives you some extra features when piping and might be of some use.
Edit:
If you are trying to sort all of the files in the current directory by size,
something like this:
find ./ -name "*" | xargs ls -s | sort -n
should work.
This does not use bash arrays, and it does not parse ls either:
find . -type f -printf '%s:%f\n' | sort -t: -n -k1 | cut -d: -f2-
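If you do want the result in a real bash array, one option (a sketch, assuming bash 4+ for mapfile and file names that contain no newlines) is to read the sorted names into it:

mapfile -t FILEARRAY < <(find . -type f -printf '%s:%f\n' | sort -t: -n -k1 | cut -d: -f2-)
printf '%s\n' "${FILEARRAY[@]}"    # one file name per line, smallest file first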

How do I prevent vim from using the wrong viewport after the :make command?

I often have multiple viewports opened in vim, using the :vsp and :sp commands. After I've been editing for a while, I'll often run the :make command from within vim. When I get errors, vim will then show me the lines that gcc says caused my errors. However, vim will often open the file with errors in another viewport, even if that file is already open. An Example:
Before Make
-------------------
|        |        |
| file 1 | file 2 |
|        |        |
|        |        |
-------------------
Ok, assume there are errors in file 2
-------------------
|        |        |
| file 2 | file 2 |
|        |        |
|        |        |
-------------------
vim now jumps to the error line in the left viewport, even though the right viewport already had that file open.
Is there some way to tell vim not to take over the file 1 viewport when the file the error is in is already open in another viewport?
Try setting the option switchbuf=useopen.
