What does 'Modify profile to enable bash completion?' mean? - google-app-engine

I'm trying to install google app engine. The instructions said to use this command:
$ curl https://sdk.cloud.google.com/ | bash
Now, the installer is asking me this question:
Modify profile to enable bash completion? (Y/n)?
What does that mean?
Edit:
I answered yes, then I was presented with this question:
The Google Cloud SDK installer will now prompt you to update an rc
file to bring the Google Cloud CLIs into your environment.
Enter path to an rc file to update, or leave blank to use
[/Users/7stud/.bash_profile]: /Users/7stud/.bashrc
Backing up [/Users/7stud/.bashrc] to [/Users/7stud/.bashrc.backup].
[/Users/7stud/.bashrc] has been updated. Start a new shell for the
changes to take effect.
The installer added the following to my .bashrc file (Mac OS X 10.6.8):
# The next line updates PATH for the Google Cloud SDK.
source '/Users/7stud/google-cloud-sdk/path.bash.inc'
# The next line enables bash completion for gcloud.
source '/Users/7stud/google-cloud-sdk/completion.bash.inc'
The first script is this:
script_link="$( readlink "$BASH_SOURCE" )" || script_link="$BASH_SOURCE"
apparent_sdk_dir="${script_link%/*}"
if [ "$apparent_sdk_dir" == "$script_link" ]; then
apparent_sdk_dir=.
fi
sdk_dir="$( command cd -P "$apparent_sdk_dir" && pwd -P )"
bin_path="$sdk_dir/bin"
export PATH=$bin_path:$PATH
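As an aside, the `${script_link%/*}` expansion above is a parameter-expansion stand-in for `dirname`: it deletes the shortest suffix matching `/*`, i.e. the last path component. A tiny standalone illustration (the path is just the one from this install):

```shell
# ${var%/*} strips the last path component -- dirname without a subprocess.
p="/Users/7stud/google-cloud-sdk/path.bash.inc"
echo "${p%/*}"
```

This prints `/Users/7stud/google-cloud-sdk`. When the value contains no slash at all, the expansion returns it unchanged, which is why the script compares the result against the original and falls back to `.`.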
And the next script is this:
_python_argcomplete() {
local IFS=''
COMPREPLY=( $(IFS="$IFS" COMP_LINE="$COMP_LINE" COMP_POINT="$COMP_POINT" _ARGCOMPLETE_COMP_WORDBREAKS="$COMP_WORDBREAKS" _ARGCOMPLETE=1 "$1" 8>&1 9>&2 1>/dev/null 2>/dev/null) )
if [[ $? != 0 ]]; then
unset COMPREPLY
fi
}
complete -o nospace -o default -F _python_argcomplete "gcloud"
_completer() {
command=$1
name=$2
eval '[[ "$'"${name}"'_COMMANDS" ]] || '"${name}"'_COMMANDS="$('"${command}"')"'
set -- $COMP_LINE
shift
while [[ $1 == -* ]]; do
shift
done
[[ $2 ]] && return
grep -q "${name}\s*$" <<< $COMP_LINE &&
eval 'COMPREPLY=($'"${name}"'_COMMANDS)' &&
return
[[ "$COMP_LINE" == *" " ]] && return
[[ $1 ]] &&
eval 'COMPREPLY=($(echo "$'"${name}"'_COMMANDS" | grep ^'"$1"'))'
}
unset bq_COMMANDS
_bq_completer() {
_completer "CLOUDSDK_COMPONENT_MANAGER_DISABLE_UPDATE_CHECK=1 bq help | grep '^[^ ][^ ]* ' | sed 's/ .*//'" bq
}
unset gcutil_COMMANDS
_gcutil_completer() {
_completer "CLOUDSDK_COMPONENT_MANAGER_DISABLE_UPDATE_CHECK=1 gcutil help | grep -v '^information' | grep '^[a-z]' | sed -e 's/ .*//' -e '/^$/d'" gcutil
}
complete -o default -F _bq_completer bq
complete -o nospace -F _python_argcomplete gsutil
complete -o default -F _gcutil_completer gcutil

I did a little searching, and what I understand so far is that bash completion is bash's support for command auto-completion: press Tab and the shell fills in command names and arguments for you.
So what the Google Cloud SDK is asking for here is permission to enable that for its command-line tools (gcloud, gsutil, bq, gcutil).
git ships the same feature as well.
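As a minimal sketch of the mechanism (the command name `mygaetool` and its subcommands are made up for illustration): `complete` registers candidate words for a command, and `compgen` previews what pressing Tab would offer for a given prefix:

```shell
#!/bin/bash
# Register static completion candidates for a hypothetical command.
complete -W "deploy logs versions" mygaetool
# Preview what <Tab> would offer after typing "mygaetool de".
compgen -W "deploy logs versions" -- de
```

This prints `deploy`. The gcloud snippet does the same thing, except the word list is generated dynamically by the Python argcomplete machinery instead of a fixed `-W` list.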

Related

cmake cannot resolve local path when using remote toolchain on CLion 2020.3

/usr/bin/cmake -DCMAKE_BUILD_TYPE=Debug -DCMAKE_MAKE_PROGRAM=/usr/bin/make -DCMAKE_C_COMPILER=/usr/lib/llvm/11/bin/clang -DCMAKE_CXX_COMPILER=/usr/lib/llvm/11/bin/clang++ -G "CodeBlocks - Unix Makefiles" /home/a_user_name/CLion_Programmes/VM_D
-- Configuring done
-- Generating done
-- Build files have been written to: /home/a_user_name/CLion_Programmes/VM_D/cmake-build-debug
Cannot resolve path: D:\MyProgrammes\CL\VM_D\cmake-build-debug
[Failed to reload]
Client : Windows 10 20H2
Host : Gentoo Linux on Hyper-V
connect via openssh
When I set up my environment I used this:
https://www.jetbrains.com/help/clion/remote-projects-support.html
Thanks for the help :)
It seems that there is a problem with the tar file creation on the server. Are there any error messages in the FileTransfer window? Can you check if the tar files were created in your /tmp folder?
I was using a tar wrapper as described in https://youtrack.jetbrains.com/issue/CPP-17421#focus=Comments-27-4040675.0-0 and it was not working with the current version, so no tar file was created which resulted in that error message.
I fixed the tar wrapper as follows:
#!/bin/bash
# Uncomment this line to get details about files being transferred
#TAR_LOGFILE=~/.clion_tar_calls.txt
redirect_cmd() {
    if [[ -n ${TAR_LOGFILE} ]]; then
        echo "Executing tar: $*" >> ${TAR_LOGFILE}
        "$@" >> ${TAR_LOGFILE}
    else
        "$@" > /dev/null
    fi
}
if [[ -n ${TAR_LOGFILE} ]]; then
    echo "`date` Called tar at $PWD with parameters: $*" >> ${TAR_LOGFILE}
    if [[ "$*" == *--files-from* ]]; then
        files=$(echo "$@" | sed 's/.*--files-from=\([^[:space:]]*\).*/\1/')
        cat ${files} >> ${TAR_LOGFILE}
    fi
fi
if [[ $PWD =~ cmake-build- ]]; then
    excludes=('--exclude=*.o' '--exclude=*.gcno' '--exclude=*.gcda' '--exclude=*.a' '--exclude=bin' '--exclude=lib')
    first="$1"
    shift
    file="$1"
    shift
    redirect_cmd exec /bin/tar "$first" "$file" "${excludes[@]}" "$@" --verbose
else
    exec /bin/tar "$@"
fi
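For what it's worth, the `redirect_cmd` helper is just a log-or-discard wrapper around an arbitrary command; isolated with a throwaway log path (the path here is made up), it behaves like this:

```shell
#!/bin/bash
TAR_LOGFILE=/tmp/wrapper_demo.log
: > "$TAR_LOGFILE"                               # start with an empty log
redirect_cmd() {
    if [[ -n ${TAR_LOGFILE} ]]; then
        echo "Executing: $*" >> "${TAR_LOGFILE}" # record the command line
        "$@" >> "${TAR_LOGFILE}"                 # and capture its stdout
    else
        "$@" > /dev/null                         # logging off: discard stdout
    fi
}
redirect_cmd echo hello
cat "$TAR_LOGFILE"
```

The log ends up containing the recorded command line followed by its output.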

Problem with executing only first element into array [duplicate]

This question already has answers here:
While loop stops reading after the first line in Bash
(5 answers)
Closed 1 year ago.
I thought my problem was trivial, but I cannot figure out why my script only runs for the first element of the array.
I have a Jenkins job (bash script). The job gathers hostnames and sends ssh commands, through a second script, using the gathered info:
rm /tmp/hosts
docker exec -t tmgnt_consul_1 consul members -status=alive | grep -v Node | awk '{print $1}' | cut -d : -f1 >> /tmp/hosts
sed -i '/someunnecessaryinfo/d' /tmp/hosts
echo >> /tmp/hosts
shopt -s lastpipe
while IFS= read -r line; do
echo "host is >>$line<<";
url="http://111.111.111.111:8500/v1/catalog/nodes"
term_IP=`curl -s $url | jq -r --arg Node "${line}" '.[] | select(.Node == "'${line}'" )|.Address' --raw-output`
echo $term_IP
sudo bash -x /home/rtm/t_mgnt/check_fw $term_IP
done < /tmp/hosts
Second script:
#!/bin/bash
term_IP=$1
sudo sshpass -p 'some.pass' ssh -o StrictHostKeyChecking=no user@$term_IP "sudo test -d /root/nv9"
if [ $? != 0 ]; then
sudo sshpass -p 'some.pass' \
scp -n -o StrictHostKeyChecking=no -r /home/rtm/t_mgnt/nv9 user@$term_IP:
sudo sshpass -p 'some.pass' \
ssh -n -o StrictHostKeyChecking=no user@$term_IP "sudo mv nv9 /root/"
sudo sshpass -p 'some.pass' \
ssh -n -o StrictHostKeyChecking=no user@$term_IP "sudo dpkg -i /root/nv9/libudev0_175-0ubuntu9_amd64.deb"
sudo sshpass -p 'some.pass' \
ssh -n -o StrictHostKeyChecking=no user@$term_IP "sudo /root/nv9/DetectValidator"
else
sudo sshpass -p 'some.pass' \
ssh -n -o StrictHostKeyChecking=no user@$term_IP "sudo /root/nv9/DetectValidator"
fi
The job works fine and returns correct values, but only for the first element of the array.
PS - I already searched through this and other sites, and the following answer didn't help me: Shell script while read line loop stops after the first line (I am already using "ssh -n -o").
Perhaps you can point out what I missed.
Possibly this ssh call eats your input:
sudo sshpass -p 'some.pass' ssh -o StrictHostKeyChecking=no user@$term_IP "sudo test -d /root/nv9"
^^^
Try adding -n.
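You can watch the effect without ssh at all; here `cat` stands in for any command that drains stdin (all names and paths below are throwaway):

```shell
#!/bin/sh
printf 'host1\nhost2\nhost3\n' > /tmp/hosts_demo

# Without protection: cat (like ssh) reads the loop's stdin dry.
n=0
while IFS= read -r line; do
    n=$((n+1))
    cat > /dev/null                 # swallows host2 and host3
done < /tmp/hosts_demo
echo "unprotected iterations: $n"   # 1

# With stdin redirected (which is what ssh -n does): every line survives.
n=0
while IFS= read -r line; do
    n=$((n+1))
    cat > /dev/null < /dev/null
done < /tmp/hosts_demo
echo "protected iterations: $n"     # 3
```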

Changing site-names in a bash script

I want to run a script that does some file alteration on .php files.
There are hundreds of EmailController.php files in different sites that should be modified based on the site name, depending on what folder they are located in.
#!/bin/bash
source /root/sitenames.txt
sed -i 's#'"/var/vmail/skeleton.com/"'#'"/var/vmail/$sitename/"'#g' /var/www/$sitename/web/EmailController.php
The easiest way would be to read sitenames.txt file that would contain the domain-names one per line and substitute that domain with $sitename in the bash script.
@tom-fenech is right in saying this should be in a config file rather than hardcoded into your .php files. Regardless, you need to change what you have, and you'll need to do something like this to move to a config file anyway.
Short Answer
skeldir="/tmp/skeleton"
skelsite="skeleton.com"
sitename="example.com"
fgrep -lr --null "/var/vmail/${skelsite}/" "${skeldir}" \
| xargs -0 sed -i "" "s#/var/vmail/${skelsite}/#/var/vmail/${sitename}/#g"
Which is mostly equivalent to:
find "${skeldir}" -type f -print0 \
| xargs -0 sed -i "" "s#/var/vmail/${skelsite}/#/var/vmail/${sitename}/#g"
I like the fgrep version better because it runs sed on a smaller set of files than find (assuming your pattern isn't in every file).
Long answer
Putting this together:
$ cat /tmp/x.sh
#!/bin/sh
skeldir="/tmp/skeleton"
skelsite="skeleton.com"
sitename="example.com"
[ -d "${skeldir}" ] && rm -rf "${skeldir}"
mkdir -p "${skeldir}/subdir"
echo 'ignore this line' \
| tee "${skeldir}/file1.php" "${skeldir}/subdir/file2.php" "${skeldir}/file3.php" \
> "${skeldir}/subdir/file4.php"
echo "foo /var/vmail/${skelsite}/ bar" \
| tee -a "${skeldir}/file1.php" >> "${skeldir}/subdir/file2.php"
echo "BEFORE:"
echo " Files that have \"${skelsite}\": $(fgrep -lr "/var/vmail/${skelsite}/" "${skeldir}" | wc -l)"
echo " Files that have \"${sitename}\": $(fgrep -lr "/var/vmail/${sitename}/" "${skeldir}" | wc -l)"
# make changes (--null/-0 ensures you can have spaces, etc, in filenames)
fgrep -lr --null "/var/vmail/${skelsite}/" "${skeldir}" \
| xargs -0 sed -i "" "s#/var/vmail/${skelsite}/#/var/vmail/${sitename}/#g"
# Alternate:
# find "${skeldir}" -type f -print0 \
# | xargs -0 sed -i "" "s#/var/vmail/${skelsite}/#/var/vmail/${sitename}/#g"
echo "AFTER:"
echo " Files that have \"${skelsite}\": $(fgrep -lr "/var/vmail/${skelsite}/" "${skeldir}" | wc -l)"
echo " Files that have \"${sitename}\": $(fgrep -lr "/var/vmail/${sitename}/" "${skeldir}" | wc -l)"
And see what happens:
$ /tmp/x.sh
BEFORE:
Files that have "skeleton.com": 2
Files that have "example.com": 0
AFTER:
Files that have "skeleton.com": 0
Files that have "example.com": 2
You may consider running a backup before doing this! Something like:
$ rsync -avP --delete /var/www/$sitename/ /var/www.backup/$sitename/
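To loop over a sitenames.txt as the question describes, here is a sketch under made-up /tmp paths so it's safe to run (GNU `sed -i` syntax assumed; BSD sed wants `sed -i ""`):

```shell
#!/bin/sh
# Build a disposable copy of the layout: one EmailController.php per site.
demo=/tmp/sitename_demo
rm -rf "$demo"
mkdir -p "$demo/example.com/web" "$demo/other.org/web"
printf 'example.com\nother.org\n' > "$demo/sitenames.txt"
echo 'path: /var/vmail/skeleton.com/mail' \
    | tee "$demo/example.com/web/EmailController.php" \
    > "$demo/other.org/web/EmailController.php"

# One sed per site, reading the domain list line by line.
while IFS= read -r sitename; do
    [ -z "$sitename" ] && continue
    sed -i "s#/var/vmail/skeleton.com/#/var/vmail/${sitename}/#g" \
        "$demo/${sitename}/web/EmailController.php"
done < "$demo/sitenames.txt"

grep -h vmail "$demo"/*/web/EmailController.php
```

For the real layout you would read /root/sitenames.txt and point sed at /var/www/$sitename/web/EmailController.php, as in the question.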

Heredoc commands for find . -exec sh {} +

I'm trying to convert a hierarchy of TIFF image files into JPG, and out of boredom, I want to do find and ffmpeg in a single file.
So I set find to invoke sh with the -s flag, like this:
#!/bin/sh
export IFS=""
find "$#" -iname 'PROC????.tif' -exec sh -s {} + << \EOF
for t ; do
ffmpeg -y -v quiet -i $t -c:v mjpeg ${t%.*}.jpg
rm $t
done
EOF
However, there are just too many files in the directory hierarchy: find chopped the filename list into several smaller chunks, and sh -s was only successfully invoked for the first chunk.
The question being: how could one feed such in-body command to every sh invocation in the find command?
Update
The tag "heredoc" on the question is intended for receiving answers that do not rely on an external file or on self-reference through $0. It is also intended that no filename go through string processing, such as NUL- or newline-delimiting, and that filenames be passed directly as arguments.
The heredoc is being used as the input to find. I think your best bet is to not use a heredoc at all, but just use a string:
#!/bin/sh
find "$#" -iname 'PROC????.tif' -exec sh -c '
for t ; do
ffmpeg -y -v quiet -i "$t" -c:v mjpeg "${t%.*}.jpg" &&
rm "$t"
done
' sh {} +
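To see the batching behaviour directly (throwaway files under /tmp): the extra `sh` after the quoted script becomes `$0`, so the positional parameters inside the inline script are exactly the batch of names find collected for that invocation.

```shell
#!/bin/sh
d=/tmp/exec_batch_demo
rm -rf "$d" && mkdir -p "$d"
touch "$d/f1.tif" "$d/f2.tif" "$d/f3.tif"
# Each -exec ... + invocation receives one batch of filenames; $# is the
# batch size inside the inline script.
find "$d" -name '*.tif' -exec sh -c 'echo "batch of $# file(s)"' sh {} +
```

With only three files this prints a single line, `batch of 3 file(s)`; with enough files to exceed the argument-length limit, find would run the inline script once per batch.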
I am rewriting your code below:
#!/bin/bash
find "$1" -name "PROC????.tif" > /tmp/imagefile_list.txt
while read filepath
do
path=${filepath%/*}
imgfile=${filepath##*/}
jpgFile=${imgfile%.*}.jpg
cd "$path"
ffmpeg -y -v quiet -i "$imgfile" -c:v mjpeg "$jpgFile"
rm -f "$imgfile"
done < /tmp/imagefile_list.txt
IF you don't want to change the current directory you can do it like below:-
#!/bin/bash
find "$1" -name "PROC????.tif" > /tmp/imagefile_list.txt
# If you list down all the .tif file use below command
# find "$1" -name "*.tif" > /tmp/imagefile_list.txt
while read filepath
do
path=${filepath%/*}
imgfile=${filepath##*/}
jpgFile=$path/${imgfile%.*}.jpg
ffmpeg -y -v quiet -i "$filepath" -c:v mjpeg "$jpgFile"
rm -f "$filepath"
done < /tmp/imagefile_list.txt
rm -rf /tmp/imagefile_list.txt

Need Shell script to monitor files on remote SUSE dir ApmPerfMonitor

We would need monitoring on the folder below, for the respective directories & subdirectories, to see if there are more than 100 files in the directory. Also, no file should sit there for more than 4 hrs.
If there are more than 100 files in the directory we would need an alert. Not sure why this script isn't working. Could you please confirm?
Path – /export/ftpaccounts/image-processor/working/
The Script:
#!/bin/bash
LOCKFILE=/tmp/findimages.lock
if [ -f ${LOCKFILE} ]; then
exit 0
fi
touch ${LOCKFILE}
NUM=`find /mftstaging/vim/inbound/active \
-ignore_readdir_race -depth -type f -m min +60 -print |
xargs wc -l`
if [[ ${NUM:0:1} -ne 0 ]]; then
echo "${NUM:0:1} files older than 60minutes" |
mail -s "batch import is slow" ${MAILTO}
fi
rm -rf ${LOCKFILE}
The format of your original post made it difficult to tell what you were trying to accomplish. If I understand correctly, you just want to find the number of files in the remote directory that are more than 60 minutes old; with a couple of changes your script should work fine. Try:
#!/bin/bash
LOCKFILE=/tmp/findimages.lock
ACTIVE=/mftstaging/vim/inbound/active
[ -f ${LOCKFILE} ] && exit 0
touch ${LOCKFILE}
# NUM=`find /mftstaging/vim/inbound/active \
# -ignore_readdir_race -depth -type f -m min +60 -print |
# xargs wc -l`
NUM=$(find $ACTIVE -type f -mmin +60 | wc -l)
## if [ $NUM -gt 100 ]; then # if you are test for more than 100
if [ $NUM -gt 0 ]; then
echo "$NUM files older than 60minutes" |
mail -s "batch import is slow" ${MAILTO}
fi
rm -rf ${LOCKFILE}
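You can sanity-check the `-mmin +60` count against disposable files (GNU `touch -d` assumed for backdating the timestamp):

```shell
#!/bin/sh
d=/tmp/mmin_demo
rm -rf "$d" && mkdir -p "$d"
touch "$d/new.txt"                       # modified just now
touch -d '2 hours ago' "$d/old.txt"      # backdated mtime (GNU coreutils)
find "$d" -type f -mmin +60 | wc -l      # only old.txt matches
```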
Note: you will want to implement some logic that deals with a stale lock file, and perhaps use trap to insure the lock is removed regardless of how the script terminates. e.g.:
trap 'rm -rf ${LOCKFILE}' SIGTERM SIGINT EXIT
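For example (lock path made up, work elided), the trap fires on any exit path, so even an early failure removes the lock:

```shell
#!/bin/bash
LOCKFILE=/tmp/findimages_demo.lock
(
    touch "$LOCKFILE"
    trap 'rm -f "$LOCKFILE"' EXIT   # cleanup runs however this shell ends
    # ... the find/mail work would go here ...
    exit 1                          # even this failure path releases the lock
)
ls "$LOCKFILE" 2>/dev/null || echo "lock removed"
```

This prints `lock removed`: the EXIT trap ran despite the `exit 1`.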
