change folder modified date based on most recent file modified date in folder

I have a number of project folders that all got their date modified set to the current date & time somehow, despite not having touched anything in the folders. I'm looking for a way to use either a batch applet or some other utility that will allow me to drop a folder/folders on it and have their date modified set to the date modified of the most recently modified file in the folder. Can anyone please tell me how I can do this?
In case it matters, I'm on OS X Mavericks 10.9.5. Thanks!

If you open a Terminal and use stat, you can get the modification times of all the files and their corresponding names, separated by a colon, as follows:
stat -f "%m:%N" *
Sample Output
1476985161:1.png
1476985168:2.png
1476985178:3.png
1476985188:4.png
...
1476728459:Alpha.png
1476728459:AlphaEdges.png
You can now sort that numerically in reverse, take the first line, and remove the timestamp so you are left with the name of the newest file:
stat -f "%m:%N" *png | sort -rn | head -1 | cut -f2 -d:
Sample Output
result.png
Now, you can put that in a variable, and use touch to set the modification times of all the other files to match its modification time:
newest=$(stat -f "%m:%N" *png | sort -rn | head -1 | cut -f2 -d:)
touch -r "$newest" *
So, if you wanted to be able to do that for any given directory name, you could make a little script in your HOME directory called setMod like this:
#!/bin/bash
# Check that exactly one parameter has been specified - the directory
if [ $# -eq 1 ]; then
# Go to that directory or give up and die
cd "$1" || exit 1
# Get name of newest file
newest=$(stat -f "%m:%N" * | sort -rn | head -1 | cut -f2 -d:)
# Set modification times of all other files to match
touch -r "$newest" *
fi
Then make it executable (only necessary once) with:
chmod +x $HOME/setMod
Now, you can set the modification times of all files in /tmp/freddyFrog like this:
$HOME/setMod /tmp/freddyFrog
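Note that touching the files does not change the folder's own modified date (a directory's mtime only changes when entries are added, removed or renamed). If you also want the folder itself stamped with the newest file's date, which is what the question asks about, one extra line at the end of the script should do it, for example:
touch -r "$newest" .
Here . is the dropped folder, since the script has already changed into it with cd.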
Or, if you prefer, you can call it from AppleScript with:
do shell script "$HOME/setMod " & nameOfDirectory
The nameOfDirectory will need to look Unix-y (like /Users/mark/tmp) rather than Apple-y (like Macintosh HD:Users:mark:tmp).
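If you are starting from an AppleScript folder reference, POSIX path will convert it for you; a rough sketch (assuming you want to prompt the user for the folder):
set theFolder to POSIX path of (choose folder)
do shell script "$HOME/setMod " & quoted form of theFolder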

Related

How to run a command on all .cs files in directory and store file path as a variable to be used as command on windows

I'm trying to run the following command on each file of a directory.
svn blame FILEPATH | gawk '{print $2}' | sort | uniq -c
It works well; however, it only works on individual files. For whatever reason, it won't run on the directory as a whole. I was hoping to create some form of batch script that would iterate through the directory, grab each file path, and store it in a variable to be used in the command. However, I've never written a batch script, nor do I know the first thing about them. I tried this loop but couldn't get it to work:
set codedirectory=%C:\Repo\Pineapple% for %codedirectory% %%i in (*.cs) do
but I'm not really sure what to do next. Unfortunately, this all has to be run on Windows. Any help would be greatly appreciated. Thanks!
Use for and find, similar to the example at
https://tldp.org/HOWTO/Bash-Prog-Intro-HOWTO-7.html
for i in $(find . -name "*.cs"); do
svn blame "$i" | gawk '{print $2}' | sort | uniq -c
done
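If any of the .cs paths contain spaces, the for loop above will split them into pieces. A sketch that avoids this, using find -print0 and a null-delimited read (this assumes a bash-style shell such as Git Bash or WSL is available on the Windows machine):
find . -name "*.cs" -print0 | while IFS= read -r -d '' f; do
    svn blame "$f" | gawk '{print $2}' | sort | uniq -c
done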

passing values of text file to array in shell scripting

My script fetches the names of directories in a path and stores in a text file.
#!/bin/bash
MYDIR="/bamboo/artifacts"
DIRS=`ls -d /bamboo/artifacts/* | cut -d'/' -f4 > plan_list.txt`
plan_list.txt:
**************
PLAN1
PLAN2
PLAN3
Now I am trying to pass each of these directory names to a URL to get output like this.
http://bamboo1.test.com:8080/browse/PLAN1
http://bamboo1.test.com:8080/browse/PLAN2
http://bamboo1.test.com:8080/browse/PLAN3
The script to do that doesn't seem to work
bambooServer="http://bamboo1.test.com:8080/browse/"
for DIR in $DIRS
do
echo `$bambooServer+$DIR`
done
Could someone please tell me what I am missing here? Instead of storing the ls command output in plan_list.txt, I tried passing it to an array, but that didn't work well either.
DIRS=`ls -d /bamboo/artifacts/* | cut -d'/' -f4 > plan_list.txt`
DIRS ends up as an empty variable because your command produces no output on stdout; it redirects everything into plan_list.txt instead.
You can rewrite your script like this:
#!/bin/bash
mydir="/bamboo/artifacts"
cd "$mydir"
bambooServer="http://bamboo1.test.com:8080/browse/"
for dir in */
do
echo "$bambooServer$dir"
done
*/ is the glob pattern that matches all the directories in the current path; each match keeps its trailing slash, which is why ${dir%/} strips it before building the URL.
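If you would rather end up with the names in an array, as you tried with DIRS, a minimal sketch along the same lines:
#!/bin/bash
cd /bamboo/artifacts || exit 1
bambooServer="http://bamboo1.test.com:8080/browse/"
plans=( */ )                        # array of directory names, each with a trailing slash
for dir in "${plans[@]}"; do
    echo "${bambooServer}${dir%/}"
done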

importing data from a CSV in Bash

I have a CSV file that I need to use in a bash script. The CSV is formatted like so.
server1,file.name
server1,otherfile.name
server2,file.name
server3,file.name
I need to be able to pull this information into either an array or in some other way so that I can then filter the information and only pull out data for a single server that I can then pass to another command within the script.
I need it to go something like this.
Import workfile.csv
check hostname | return only lines from workfile.csv that have the hostname as column one and store column 2 as a variable.
find / -xdev -type f -perm -002 | compare to stored info | chmod o-w all files not in listing
I'm stuck using bash because of the environment that I'm working in.
The CSV can be too big to pass all the filenames on the find command line.
You also do not want to call find in a loop, once for every line in the CSV.
Solution:
First, make a complete list of files in a tmp file.
Second, parse the CSV and filter those files out of the list.
Third, chmod -w whatever remains.
The following solution stores the files in a tmp file.
Make a script that gets the servername as a parameter.
See the comments in the code:
# Before EDIT:
# Hostname by parameter 1
# Check that you have a hostname
if [ $# -ne 1 ]; then
echo "Usage: $0 hostname"
# Exit script, failure
exit 1
fi
hostname=$1
# Edit, get hostname by system call
hostname=$(hostname)
# Or: hostname=$(hostname -s)
# Additional check
if [ ! -f workfile.csv ]; then
echo "inputfile missing"
exit 1
fi
# After edits, ${hostname} is now filled.
# First: the complete list of world-writable files (${file} is not defined yet, so no -name filter here)
find / -xdev -type f -perm -002 > /tmp/allfiles.tmp
# Do not use cat workfile.csv | grep ..., you do not need to call cat
# grep with ^ for beginning of line, add a , for a complete first field
# grep "^${hostname}," workfile.csv
# cut for selecting second field with delimiter ','
# cut -d"," -f2
# while read file => can be improved with xargs but lets start with this.
grep "^${hostname}," workfile.csv | cut -d"," -f2 | while read file; do
# Using sed with # as delimiter, not /, since the search string contains a /
# (a non-default delimiter in a sed address must be introduced with a backslash: \#...#)
# Variables in sed must be outside the single quotes and inside double quotes
# Add $ after the file name for end-of-line
# Delete the line containing the file (\#searchstring#d)
sed -i '\#/'"${file}"'$#d' /tmp/allfiles.tmp
done
echo "Review /tmp/allfiles.tmp before chmodding all these files"
echo "Delete the echo and exit when you are happy"
# Just an exit for testing
exit
# Using < is for avoiding a call to cat
</tmp/allfiles.tmp xargs chmod -w
It might be easier to chmod -w all the files and then chmod +w the files in the CSV. That is a little different from what you asked, since every file listed in the CSV would be writable after this process, and maybe you do not want that.
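If you do want the CSV pulled into an array first, as the question mentions, a minimal sketch (assuming the two-column format shown above and bash as the shell):
host=$(hostname)
files=()
while IFS=, read -r server name; do
    if [ "$server" = "$host" ]; then
        files+=("$name")            # collect column 2 for this host
    fi
done < workfile.csv
printf '%s\n' "${files[@]}"         # the filenames for this server, one per line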

ls not working well with nohup

I want to list the number of files in a directory in shell-script.
This command works well:
let number_of_files=`ls $direc -l| wc -l`
My problem is that when I use this command with nohup, it doesn't work well.
The same happens when trying to get a file:
file_name=` ls -1 $direc | head -$file_number | tail -1`
Do you know any other option to do it?
I know that in c there is a function:
num_of_files=scandir(directory,&namelist,NULL,NULL);
I also include the full command-line:
nohup sh script_name.sh > log.txt &
Do you know any other way in shell-script that works well with nohup?
Thanks.
Try something like this,
NUMBER_OF_FILES=$(find . -maxdepth 1 -type f | wc -l)
echo $NUMBER_OF_FILES
That is, find (starting from the current directory) to a maximum depth of 1 (i.e. the current directory only) everything of type "file", then count the resulting lines. Finally, assign that count to NUMBER_OF_FILES.
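To stay closer to the original script, which uses a $direc variable and also picks out the Nth file, the same approach might look like this (a sketch, assuming $direc and $file_number are set as in the question):
number_of_files=$(find "$direc" -maxdepth 1 -type f | wc -l)
file_name=$(find "$direc" -maxdepth 1 -type f | sort | sed -n "${file_number}p")   # Nth file, sorted by name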

Need bash to separate cat'ed string to separate variables and do a for loop

I need to get a list of files added to a master folder and copy only the new files to the respective backup folders; the paths to each folder contain multiple folders, all named by numbers and only one level deep.
ie /tester/a/100
/tester/a/101 ...
diff -r returns typically "Only in /testing/a/101: 2093_thumb.png" per line in the diff.txt file generated.
NOTE: there is a space after the colon
I need to get the 101 from the path and filename into separate variables and copy them to the backup folders.
I need to get the lesserfolder var to get 101 without the colon
and mainfile var to get 2093_thumb.png from each line of the diff.txt and do the for loop but I can't seem to get the $file to behave. Each time I try testing to echo the variables I get all the wrong results.
#!/bin/bash
diff_file=/tester/diff.txt
mainfolder=/testing/a
bacfolder= /testing/b
diff -r $mainfolder $bacfolder > $diff_file
LIST=`cat $diff_file`
for file in $LIST
do
maindir=$file[3]
lesserfolder=
mainfile=$file[4]
# cp $mainfolder/$lesserFolder/$mainfile $bacfolder/$lesserFolder/$mainfile
echo $maindir $mainfile $lesserfolder
done
If I could just get the echo statement working the cp would work then too.
I believe this is what you want:
#!/bin/bash
diff_file=/tester/diff.txt
mainfolder=/testing/a
bacfolder=/testing/b
diff -r -q $mainfolder $bacfolder | egrep "^Only in ${mainfolder}" | awk '{print $3,$4}' > $diff_file
while read foldercolon mainfile ; do
folderpath=${foldercolon%:}
lesserFolder=${folderpath#${mainfolder}/}
cp "$mainfolder/$lesserFolder/$mainfile" "$bacfolder/$lesserFolder/$mainfile"
done < "${diff_file}"
But it is much more reliable (and much easier!) to use rsync for this kind of backup. For example:
rsync -a /testing/a/* /testing/b/
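Note that the * glob in that rsync call skips hidden files; the more common form uses a trailing slash on the source to copy the whole tree, e.g.:
rsync -a /testing/a/ /testing/b/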
You could try a while read loop
diff -r $mainfolder $bacfolder | while read dummy dummy dir file; do
echo $dir $file
done
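If you also need the 101-style folder number on its own, without the colon, a small extension of that loop might be (a sketch, assuming diff only emits "Only in ..." lines for the files you care about):
diff -r "$mainfolder" "$bacfolder" | while read -r dummy dummy dir file; do
    dir=${dir%:}                    # strip the trailing colon
    lesserfolder=${dir##*/}         # e.g. 101
    echo "$lesserfolder" "$file"
done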
