I am writing a bash script to print a long listing of the files in the login directory for a specific month. The user is prompted to enter the first 3 letters of a month name, starting with a capital letter, and the program displays a long listing of all files that were last modified in that month.
For example, if user entered “Jul”, all the files that were last modified in July will be listed.
Is it possible to sort files by date and then filter them, or can it be done differently?
Take a look at this answer: https://stackoverflow.com/a/5289636/851273
It covers both month and year, though you can remove the match against the year.
read mon
ls -la | grep "$mon"
You can use grep -i for a case-insensitive match, so the user's input does not have to be capitalized exactly.
Note: this is crude because it matches the month name anywhere in the line. For example, it will also return files that are named after a month. To refine this, you have to look only at the date column.
Here is the script that should do it
Month=Dec
ls -ltr | awk '$6 ~ /'$Month'/ {print $9}'
This has awk look at the date field of the ls output ($6); ls -ltr sorts the listing by modification time. The shell expands the variable $Month, awk uses it to match against field $6, and prints the file name (the 9th field, $9).
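Putting the pieces together, the prompt from the question and the awk filter can be combined into one short script. This is a sketch: the prompt wording and variable name are my own, and passing the month with awk -v avoids splicing $Month into the quoted awk program.

```shell
#!/bin/bash
# Ask for the first three letters of a month, e.g. "Jul"
read -r -p "Enter a month (first 3 letters, capitalized): " month

# ls -ltr sorts by modification time; field 6 of the long listing is
# the month abbreviation (in the C locale), so keep only matching lines.
ls -ltr | awk -v m="$month" '$6 == m'
```

Note that the month abbreviation in field 6 depends on the locale, so you may want to run the listing under LC_ALL=C.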
I would like to get a list (or array) of all files in my current directory which is sorted by modification date. In the terminal, something like ls -lt works, but that should not be used in a bash script (http://mywiki.wooledge.org/BashPitfalls#for_i_in_.24.28ls_.2A.mp3.29)...
I tried to use the -nt operator (https://tips.tutorialhorizon.com/2017/11/18/nt-file-test-operator-in-bash/), but I am hoping that there is a simpler and more elegant solution.
This might help you:
In bash with GNU extensions:
Creating an array
mapfile -d '' a < <(find . -maxdepth 1 -type f -printf '%T@ %p\0' | sort -z -k1,1g | cut -z -d ' ' -f2-)
or looping over the files:
while read -r -d '' _ file; do
echo "${file}"
done < <(find . -maxdepth 1 -type f -printf '%T@ %p\0' | sort -z -k1,1g)
Here we build a list of files with the NUL character as the delimiter. Each record consists of the modification time as a Unix epoch timestamp, followed by a space and the file name. We use sort to order the list by modification time. The output is passed to a while loop that reads one zero-terminated record at a time: the first field is the modification time, which we read into _, and the remainder goes into file.
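To get the newest files first instead, it should be enough to reverse the numeric sort. A variant of the pipeline above, with GNU find's -printf spelled out (the trailing tr is only for display and would mangle file names containing newlines):

```shell
# Same pipeline, but sort -k1,1gr reverses the numeric order,
# so the most recently modified file comes out first.
find . -maxdepth 1 -type f -printf '%T@ %p\0' |
  sort -z -k1,1gr |
  cut -z -d ' ' -f2- |
  tr '\0' '\n'
```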
In ZSH:
If you want to use another shell like zsh, you can just do something like:
a=( *(Om) )
or
for file in *(Om); do echo "${file}"; done
Here Om is a glob qualifier that tells zsh to sort the matches by modification time.
I have a number of project folders that all got their date modified set to the current date & time somehow, despite not having touched anything in the folders. I'm looking for a way to use either a batch applet or some other utility that will allow me to drop a folder/folders on it and have their date modified set to the date modified of the most recently modified file in the folder. Can anyone please tell me how I can do this?
In case it matters, I'm on OS X Mavericks 10.9.5. Thanks!
If you start a Terminal, and use stat you can get the modification times of all the files and their corresponding names, separated by a colon as follows:
stat -f "%m:%N" *
Sample Output
1476985161:1.png
1476985168:2.png
1476985178:3.png
1476985188:4.png
...
1476728459:Alpha.png
1476728459:AlphaEdges.png
You can now sort that and take the first line, and remove the timestamp so you have the name of the newest file:
stat -f "%m:%N" *png | sort -rn | head -1 | cut -f2 -d:
Sample Output
result.png
Now, you can put that in a variable, and use touch to set the modification times of all the other files to match its modification time:
newest=$(stat -f "%m:%N" *png | sort -rn | head -1 | cut -f2 -d:)
touch -r "$newest" *
So, if you wanted to be able to do that for any given directory name, you could make a little script in your HOME directory called setMod like this:
#!/bin/bash
# Check that exactly one parameter has been specified - the directory
if [ $# -eq 1 ]; then
# Go to that directory or give up and die
cd "$1" || exit 1
# Get name of newest file
newest=$(stat -f "%m:%N" * | sort -rn | head -1 | cut -f2 -d:)
# Set modification times of all other files to match
touch -r "$newest" *
fi
Then make it executable (only necessary once) with:
chmod +x $HOME/setMod
Now, you can set the modification times of all files in /tmp/freddyFrog like this:
$HOME/setMod /tmp/freddyFrog
Or, if you prefer, you can call it from AppleScript with:
do shell script "$HOME/setMod " & nameOfDirectory
The nameOfDirectory will need to look Unix-y (like /Users/mark/tmp) rather than Apple-y (like Macintosh HD:Users:mark:tmp).
Using pdksh
The stat command is unavailable on the system.
I need to loop through the amount of files found and store their dates in an array. $COMMAND stores the number of files found in $location as can be seen below.
Can someone help me please?
COMMAND=`find $location -type f | wc -l`
CMD_getDate=$(find $location -type f | xargs ls -lrt | awk '{print $6} {print $7}')
Well, first, you don't need to do the wc. The size of the array will tell you how many there are. Here's a simple way to build an array of dates and names (designed for pdksh; there are better ways to do it in AT&T ksh or bash):
set -A files
set -A dates
find "$location" -type f -ls |&
while read -p inum blocks symode links owner group size rest; do
set -A files "${files[@]}" "${rest##* }"
set -A dates "${dates[@]}" "${rest% *}"
done
Here's one way to examine the results:
print "Found ${#files[@]} files:"
let i=0
while (( i < ${#files[@]} )); do
print "The file '${files[i]}' was modified on ${dates[i]}."
let i+=1
done
This gives you the full date string, not just the month and day. That might be what you want - the date output of ls -l (or find -ls) is variable depending on how long ago the file was modified. Given these files, how does your original format distinguish between the modification times of a and b?
$ ls -l
total 0
-rw-rw-r--+ 1 mjreed staff 0 Feb 3 2014 a
-rw-rw-r--+ 1 mjreed staff 0 Feb 3 04:05 b
As written, the code above would yield this for the directory above with location=.:
Found 2 files:
The file './a' was modified on Feb 3 2014.
The file './b' was modified on Feb 3 00:00.
It would help if you indicated what the actual end goal is.
OK, sedAwkPerl-fu-gurus. Here's one similar to these (Extract specific strings...) and (Using awk to...), except that I need to use the number extracted from columns 4-10 in each line of File A (a PO number from a sales order line item) and use it to locate all related lines from File B and print them to a new file.
File A (purchase order details) lines look like this:
xxx01234560000000000000000000 yyy zzzz000000
File B (vendor codes associated with POs) lines look like this:
00xxxxx01234567890123456789001234567890
Columns 4-10 in File A have a 7-digit PO number, which is found in columns 7-13 of file B. What I need to do is parse File A to get a PO number, and then create a new sub-file from File B containing only those lines in File B which have the POs found in File A. The sub-file created is essentially the sub-set of vendors from File B who have orders found in File A.
I have tried a couple of things, but I'm really spinning my wheels on trying to make a one-liner for this. I could work it out in a script by defining variables, etc., but I'm curious whether someone knows a slick one-liner to do a task like this. The two referenced methods put together ought to do it, but I'm not quite getting it.
Here's a one-liner:
egrep -f <(cut -c4-10 A | sed -e 's/^/^.{6}/') B
It looks like the POs in file B actually start at column 8, not 7, but I made my regex start at column 7 as you asked in the question.
And in case there's the possibility of duplicates in A, you could increase efficiency by weeding those out before scanning file B:
egrep -f <(cut -c4-10 A | sort -u | sed -e 's/^/^.{6}/') B
sed 's_^...\([0-9]\{7\}\).*_/^.\{6\}\1/p_' FIRSTFILE > FILTERLIST
sed -n -f FILTERLIST SECONDFILE > FILTEREDFILE
The first line generates a sed script from FIRSTFILE, then the second line uses that script to filter SECONDFILE. The two steps can be combined into one line, too.
If the files are not that big, you can read all the FIRSTFILE PO numbers into an array first, then filter SECONDFILE against it:
awk 'NR == FNR { po[substr($0, 4, 7)]; next }
substr($0, 7, 7) in po' FIRSTFILE SECONDFILE > FILTERED
You can also do it like this (but it will find the PO numbers anywhere on a line):
fgrep -f <(cut -b 4-10 FIRSTFILE) SECONDFILE
Another way using only grep:
grep -f <(grep -Po '^.{3}\K.{7}' fileA) fileB
Explanation:
-P for perl regex
-o to select only the match
\K keeps the text matched so far out of the result (it acts like a positive lookbehind)
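As a quick sanity check with made-up two-line files (the data is mine, following the column layout from the question):

```shell
# File A: PO number "0123456" in columns 4-10
printf 'xxx0123456 yyy zzzz000000\n' > fileA
# File B: one line whose columns 7-13 contain that PO, one that does not
printf 'abcdef0123456rest\nabcdef9999999rest\n' > fileB

# Extract the PO numbers from fileA and keep the matching lines of fileB
grep -f <(grep -Po '^.{3}\K.{7}' fileA) fileB
```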
I am new to shell scripting and what I need is to read from a file that contains a 2d array. Assume there is a file named test.dat which contains values as:
- Paris London Lisbon
- Manchester Nurnberg Istanbul
- Stockholm Kopenhag Berlin
What is the easiest way to select an element from this table in linux bash scripts? For example, the user inputs -r 2 -c 2 test.dat that implies to selecting the element at row[2] and column[2] (Nurnberg).
I have seen the read command and googled but most of the examples were about 1d array.
This one looks familiar, but I could not understand it exactly.
awk is great for this:
$ awk 'NR==row{print $col}' row=2 col=2 file
Nurnberg
NR==row{} means: on record number row, do {}. The record number is normally the line number.
{print $col} means: print the field number col.
row=2 col=2 passes both parameters to awk.
Update
One more little question: How can I transform this into a sh file so
that when I enter -r 2 -c 2 test.dat into prompt, I get to run the
script so that it reads from the file and echoes the output? –
iso_9001_.
For example:
#!/bin/bash
file=$1
row=$2
col=$3
awk 'NR==row{print $col}' row="$row" col="$col" "$file"
And you execute like:
./script a 3 2
Kopenhag
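If you really want the -r 2 -c 2 test.dat syntax from the question, a small getopts wrapper around the same awk call could look like this (a sketch; the option handling and usage message are my own additions):

```shell
#!/bin/bash
# Parse -r <row> and -c <col>; the remaining argument is the data file
while getopts 'r:c:' opt; do
  case "$opt" in
    r) row=$OPTARG ;;
    c) col=$OPTARG ;;
    *) echo "usage: $0 -r row -c col file" >&2; exit 1 ;;
  esac
done
shift $((OPTIND - 1))
file=$1

# Same awk one-liner as above: print field col of line row
awk -v row="$row" -v col="$col" 'NR == row { print $col }' "$file"
```

Invoked as ./script -r 2 -c 2 test.dat, it prints the second field of the second line.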