How to find multiple files present in a directory in ksh (on AIX)
This is what I am trying:
if [ $# -lt 1 ];then
echo "Please enter the path"
exit
fi
path=$1
if [ [ ! f $path/cc*.csv ] && [ ! f $path/cc*.rpt ] && [ ! f $path/*.xls ] ];then
echo "All required files are not present\n"
fi
I am getting an error like: check[6]: !: unknown test operator (check is my file name).
What is wrong in my script? Could someone help me with this?
My simplest idea:
N=$(ls -1 *mask* 2>/dev/null | wc -l)
echo $N
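As for the original ksh test itself, it has three problems: [ [ with a space in between is not the [[ compound command, the file test operator is -f (not a bare f), and -f cannot take a glob that may expand to several names. A minimal sketch that checks each required pattern in turn, relying on ksh leaving an unmatched glob unexpanded:
path=$1
for pat in "$path"/cc*.csv "$path"/cc*.rpt "$path"/*.xls
do
    # an unmatched glob stays literal, and -e on the literal pattern fails
    if [ ! -e "$pat" ]; then
        echo "All required files are not present"
        exit 1
    fi
done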
I have a script that removes invalid characters from file names due to OneDrive restrictions. It works except for files that have * { } in them. I have included the * { } characters, but that is not working (those files are ignored). The script is below. Not sure what I am doing wrong here.
#Renames FOLDERS with space at the end
IFS=$'\n'
for file in $(find -d . -name "* ")
do
target_name=$(echo "$file" | sed 's/^[[:space:]]*//;s/[[:space:]]*$//')
if [ "$file" != "$target_name" ]; then
if [ -e $target_name ]; then
echo "WARNING: $target_name already exists, file not renamed"
else
echo "Move $file to $target_name"
mv "$file" "$target_name"
fi
fi
done
#end Folder rename
#Renames FILES
declare -a arrayGrep=(\? \* \, \# \; \: \& \# \+ \< \> \% \$ \~ \% \: \< \> )
echo "array: ${arrayGrep[@]}"
for i in "${arrayGrep[@]}"
do
for file in $(find . | grep $i )
do
target_name=$(echo "$file" | sed 's/\'$i'/-/g' )
if [ "$file" != "$target_name" ]; then
if [ -e $target_name ]; then
echo "WARNING: $target_name already exists, file not renamed"
else
echo "Move $file to $target_name"
mv "$file" "$target_name"
fi
fi
done
done
There are some issues with your code:
You've got some duplicates in arrayGrep
You have some quoting issues: $i in the grep and sed commands must be protected from the shell
Some of the disallowed characters, even when quoted to protect them from the shell, could be mis-parsed by grep as meta-characters
You loop rather a lot, when all the substitutions could happen simultaneously
for file in $(find ...) has to read the entire list into memory, which might be problematic for large lists; it also breaks on some filenames due to word-splitting. Piping into read is better
find can do path filtering without grep (at least for this set of disallowed characters)
sed is fine to use, but tr is a little neater
A possible rewrite is:
badchars='?*,#;:&#+<>%$~'
find . -name "*[$badchars]*" | while read -r file
do
target_name=$(echo "$file" | tr "$badchars" - )
if [ "$file" != "$target_name" ]; then
if [ -e "$target_name" ]; then
echo "WARNING: $target_name already exists, file not renamed"
else
echo "Move $file to $target_name"
mv "$file" "$target_name"
fi
fi
done
If using bash, you can even do parameter expansion directly,
although you have to embed the list:
target_name="${file//[?*,#;:&#+<>%$~]/-}"
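For example (the file name here is illustrative):
file='report?final*.xls'
echo "${file//[?*,#;:&#+<>%$~]/-}"    # prints: report-final-.xls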
An enhancement idea
If you choose a suitable character (e.g. =), you can rename filenames reversibly (but only if the length of the new name wouldn't exceed the maximum allowed length for a filename). One possible algorithm:
replace all = in the filename with =3D
replace all the other disallowed characters with =hh where hh is the appropriate ASCII code in hex
You can reverse the renaming by:
replace all =hh (except =3D) with the character corresponding to ASCII code hh
replace all =3D with =
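A minimal bash sketch of that scheme (the function names are mine, and it assumes the =hh codes use two uppercase hex digits):
badchars='?*,#;:&+<>%$~'

encode_name() {
    local name=$1 out='' c i
    name=${name//=/=3D}                 # step 1: escape the escape character
    for (( i = 0; i < ${#name}; i++ )); do
        c=${name:i:1}
        if [[ $badchars == *"$c"* ]]; then
            printf -v c '=%02X' "'$c"   # "'x" yields the ASCII code of x
        fi
        out=$out$c
    done
    printf '%s\n' "$out"
}

decode_name() {
    local name=$1 out='' c
    while [[ $name == *=* ]]; do
        out=$out${name%%=*}             # copy the text before the next '='
        name=${name#*=}
        printf -v c "\\x${name:0:2}"    # two hex digits back to one character
        name=${name:2}
        out=$out$c
    done
    printf '%s\n' "$out$name"
}

encode_name 'cc?report*.csv'        # -> cc=3Freport=2A.csv
decode_name 'cc=3Freport=2A.csv'    # -> cc?report*.csv
A single left-to-right decoding pass handles =3D correctly, because decoded output is never rescanned.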
I'm trying to write this script.
It reads variables from a CSV file, makes SSH connections using those variables, and then runs an if condition inside the SSH session.
#!/bin/bash
# ------------------------------------------
INPUT=Mnt.csv
OLDIFS=$IFS
IFS=','
[ ! -f $INPUT ] && { echo "$INPUT file not found"; exit 99; }
while read rom alias port ver
do
ssh -n $alias "cd /opt/app;
echo ========================================================
find . -maxdepth 4 -type d -name '$rom' -print;
echo $port;
if [ ${ver} -eq 'v3' ]
then
cat /opt/app/v3
elif
cat /opt/app/v2
fi
;
exit"
done < $INPUT
IFS=$OLDIFS
It gives this error:
bash: -c: line 9: syntax error near unexpected token `fi'
bash: -c: line 9: ` fi'
Please can you help? Thanks.
I've just corrected it like this and now it works:
"cd /opt/app;
echo ========================================================
find . -maxdepth 4 -type d -name '$rom' -print
echo $port
if [ ${ver} = "v3" ]
then
cat /opt/app/v3
else
cat /opt/app/v2
fi
exit"
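For reference, the key fixes were using = for string comparison (the numeric -eq fails on 'v3') and replacing the bare elif, which needs its own condition and then, with else. A condensed sketch of the whole loop; note that $rom, $port and $ver are expanded by the local shell before the command string reaches the remote one:
#!/bin/bash
INPUT=Mnt.csv
[ ! -f "$INPUT" ] && { echo "$INPUT file not found"; exit 99; }
while IFS=',' read -r rom alias port ver
do
    # the variables are substituted locally into the remote script
    ssh -n "$alias" "cd /opt/app
        find . -maxdepth 4 -type d -name '$rom' -print
        echo '$port'
        if [ '$ver' = 'v3' ]; then
            cat /opt/app/v3
        else
            cat /opt/app/v2
        fi"
done < "$INPUT"
Setting IFS only for the read also spares saving and restoring OLDIFS.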
Given:
a C file with several included header files
a bunch of include file search folders
Is there a way to generate some kind of include map for the C file?
IDEs can sometimes help locate the definition/declaration of a symbol in a header file, but I think an include map can give me better insight into how these files are related when a project gets complicated, and help identify issues such as circular includes.
ADD 1
A similar thread, but not much help:
It only generates an include hierarchy as text in the Output window when building.
And it only works for native VC++ projects, not for NMAKE C projects.
Displaying the #include hierarchy for a C++ file in Visual Studio
ADD 2
I just tried the Include Manager mentioned in the above thread. Though not free, it's not expensive and fits my scenario perfectly.
Not sure if this is quite what you're after, but I was curious what a graph of this would look like, so I threw this together. It's a bit of a mess, but workable for a throw-away script:
#!/bin/bash
INCLUDE_DIRS=()
# Add any include dirs here
# INCLUDE_DIRS+=("/usr/include")
# If you want to add flags for some pkg-config modules (space
# separated)
PKG_CONFIG_PKGS=""
FOLLOW_SYS_INCLUDES=y
while read dir; do
dir="$(readlink -f "${dir}")"
for d in "${INCLUDE_DIRS[@]}"; do
if [ "${dir}" = "${d}" ]; then
continue 2  # duplicate entry: skip to the next dir in the outer loop
fi
done
INCLUDE_DIRS+=("${dir}")
done < <(echo | cc -Wp,-v -x c - -fsyntax-only 2>&1 | grep '^ ' | cut -b2-)
PROCESSED=()
while read flag; do
if [ -n "${flag}" ]; then
INCLUDE_DIRS+=("${flag}")
fi
done < <(pkg-config --cflags-only-I "${PKG_CONFIG_PKGS}" | sed -E 's/-I([^ ]*)/\1\n/g')
function not_found {
echo " \"$1\" [style=filled,color=lightgrey];"
echo " \"$2\" -> \"$1\""
}
function handle_file {
filename="${1}"
for f in "${PROCESSED[@]}"; do
if [ "${f}" = "${1}" ]; then
echo " \"$2\" -> \"$1\""
return
fi
done
PROCESSED+=("$1")
if [ -n "${2}" ]; then
echo " \"${2}\" -> \"${1}\";"
fi
if [ ! "${FOLLOW_SYS_INCLUDES}" = "y" ]; then
for d in "${INCLUDE_DIRS[@]}"; do
case "${1}" in
"${d}"/*)
return
;;
esac
done
fi
parse_file "$1"
}
function handle_include {
case "${1}" in
/*)
handle_file "${1}" "$2"
return
;;
esac
for dir in "${INCLUDE_DIRS[@]}"; do
if [ -f "${dir}/${1}" ]; then
handle_file "${dir}/${1}" "$2"
return
fi
done
not_found "${1}" "${2}"
}
function handle_include_2 {
case "${1}" in
/*)
handle_file "${1}" "$2"
return
;;
esac
FILE="$(readlink -f "$(dirname "${2}")/${1}")"
if [ -f "${FILE}" ]; then
handle_file "${FILE}" "$2"
fi
}
function parse_file {
while read name; do
handle_include "$name" "$1";
done < <(grep '^[ \t]*#[ \t]*include <' "$1" | sed -E 's/[ \t]*#[ \t]*include ?<([^>]+)>.*/\1/')
while read name; do
handle_include_2 "$name" "$1" "$PWD";
done < <(grep '^[ \t]*#[ \t]*include "' "$1" | sed -E 's/[ \t]*#[ \t]*include \"([^"]+)\"/\1/')
}
echo "digraph G {"
echo "graph [rankdir = \"LR\"];"
parse_file "$(readlink -f "${1}")" "" "$PWD"
echo "}"
Pass it a file and it will generate a Graphviz file. Pipe it to dot:
$ ./include-map.sh /usr/include/stdint.h | dot -Tx11
And you have something nice to look at.
These days almost any full-featured IDE has this built in. Visual Studio or JetBrains' CLion are suitable for it. You didn't mention your platform or environment, but it may be worth a try, even if it takes some effort to set up the project so that it compiles.
Back in the day I also found it useful to generate documentation with Doxygen; as I remember, it will create such maps/links as well.
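If you try the Doxygen route, the include graphs are switched on by a few Doxyfile options (excerpt; it assumes Graphviz's dot is installed):
# Doxyfile excerpt: enable include-dependency graphs
EXTRACT_ALL       = YES
HAVE_DOT          = YES
INCLUDE_GRAPH     = YES    # graph of what each file includes
INCLUDED_BY_GRAPH = YES    # graph of which files include this one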
My code does not work. In the if-statement it says "too many arguments". I know there are alternative ways to do this but I want to find out what is wrong with my code.
for fname in "*"
do
h=$fname
if [ -f $h ]
then
echo $fname
fi
done
Remove the double quotes around the *; see:
for fname in *
do
h=$fname
if [ -f "$h" ]    # quote the variable so unusual names stay a single argument
then
echo "$fname"
fi
done
Hope this will work.
When * is enclosed in double quotes, the shell does not expand it, so $fname holds the literal string *. The unquoted $h in the test then glob-expands to every file in the directory, which is why your code throws the error "too many arguments".
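You can see the difference directly:
for fname in "*"; do echo "$fname"; done    # one iteration: prints the literal *
for fname in *;   do echo "$fname"; done    # one iteration per name in the directory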
read LOCATION
for filename in $LOCATION
do
if [[ -d $filename ]]; then
echo "$filename /"
elif [[ -f $filename ]]; then
echo "$filename *"
else
echo "$filename is not valid"
exit 1
fi
done
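A quick usage sketch (the script name and the paths are illustrative); typing a glob on stdin works because the unquoted $LOCATION lets it expand:
$ ./check-location.sh
/tmp/*
/tmp/logs /
/tmp/notes.txt *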
I have done it like this, but I am having trouble with the shell script I have written. I am confused about the tail command's functionality, and when I view the output of error.log on the terminal it shows lines with 'e' deleted from words.
Please guide me on how to solve this. I want to read this error.log file line by line and, while reading, split a fixed number of lines into small files with suffixes, i.e. log-aa, log-ab, ... I did this using the split command. After splitting, I want to filter the lines containing GET or POST using a regex and store the filtered lines in a new file. Once this is complete, I need to delete all the log-* files.
I have written this:
processLine(){
line="$#"
echo $line
tail -f $FILE
}
FILE="/var/log/apache2/error.log"
if [ "$1" == "/var/log/apache2/error.log" ]; then
FILE="/dev/stdin"
else
FILE="$1"
if [ ! -f $FILE ]; then
echo "$FILE : does not exists"
exit 1
elif [ ! -r $FILE ]; then
echo "$FILE: can not read"
exit 2
fi
fi
#BAKIFS=$IFS
#IFS=$(echo -en "\n\b")
exec 3<&0
exec 0<"$FILE"
#sed -e 's/\[debug\].*\(data\-HEAP\)\:\/-->/g' error.log > /var/log/apache2/error.log.1
while read -r line
do
processLine $line
done
exec 0<&3
IFS=$BAKIFS
logfile="/var/log/apache2/error.log"
pattern="bytes"
# read each new line as it gets written
# to the log file
#tail -1 $logfile
tail -fn0 $logfile | while read line ; do
# check each line against our pattern
echo "$line" | grep -i "$pattern"
#sed -e 's/\[debug\].*\(data\-HEAP\)\:/-->/g' error.log >/var/log/apache2/error.log
split -l 1000 error.log log-
FILE2="/var/log/apache2/log-*"
if [ "$1" == "/var/log/apache2/log-*" ]; then
FILE2="/dev/stdin"
else
FILE2="$1"
if [ ! -f $FILE2 ]; then
echo "$FILE : does not exists"
exit 1
elif [ ! -r $FILE2 ]; then
echo "$FILE: can not read"
exit 2
fi
fi
BAKIFS=$IFS
IFS=$(echo -en "\n\b")
exec 3<&0
exec 0<"$FILE2"
while read -r line
do
processLine $line
echo $line >>/var/log/apache2/url.txt
done
exec 0<&3
IFS=$BAKIFS
find . -name "var/log/apache2/logs/log-*.*" -delete
done
exit 0
The code below deletes the files after reading and splitting error.log, but when I put tail -f $FILE in, it stops deleting the files. I want to delete the log-* files once the last line of error.log has been reached:
processLine(){
line="$#"
echo $line
}
FILE=""
if [ "$1" == "" ]; then
FILE="/dev/stdin"
else
FILE="$1"
# make sure file exist and readable
if [ ! -f $FILE ]; then
echo "$FILE : does not exists"
exit 1
elif [ ! -r $FILE ]; then
echo "$FILE: can not read"
exit 2
fi
fi
#BAKIFS=$IFS
#IFS=$(echo -en "\n\b")
exec 3<&0
exec 0<"$FILE"
while read -r line
do
processLine $line
split -l 1000 error.log log-
cat log-?? | grep "GET\|POST" > storefile
#tail -f $FILE
done
rm log-??
exec 0<&3
#IFS=$BAKIFS
exit 0
Your code seems unnecessarily long and complex, and the logic is unclear. It should not have been allowed to grow this big without working correctly.
Consider this:
split -l 1000 error.log log-
cat log-?? | grep "GET\|POST" > storefile
rm log-??
Experiment with this. If these three commands do what you expect, you can add more functionality (e.g. using paths, checking for the existence of error.log), but don't add code until you have this part working.
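Once those three commands behave, a lightly hardened sketch (the readability check is the sort of addition meant above):
#!/bin/sh
logfile=/var/log/apache2/error.log
# confirm the source log exists and is readable before splitting
[ -r "$logfile" ] || { echo "$logfile: cannot read" >&2; exit 1; }
split -l 1000 "$logfile" log-                # chunks named log-aa, log-ab, ...
cat log-?? | grep "GET\|POST" > storefile    # keep only the GET/POST lines
rm log-??                                    # remove the temporary chunks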