Compile and run code in nested folders - c

I am running Ubuntu. I have a folder "Project", and in that folder I have a bunch of sub-folders. In each of the sub-folders I have either a .c file, a .jar file or a .py file. I want to iterate over all the files and, for each file, compile it and run it 5 times with different inputs, using the "time" command to measure execution time.
I want to create a shell script for this, but I can't seem to find a good way to recurse over all the files in the sub-folders.

If you are using Bash 4 you can set globstar to recurse into all subdirectories without caring about depth:
#!/bin/bash
shopt -s globstar
for file in /path/to/your/files/**; do
    case "${file##*.}" in
        c)
            gcc "$file" -o "${file%.c}"   # compile to a runnable binary
            ;;
        jar)
            java -jar "$file"
            ;;
        py)
            python "$file"
            ;;
    esac
done
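The question also asks to run each program 5 times under time; a minimal sketch of that piece, assuming input files named input1.txt through input5.txt sit in the current directory (the helper name and input naming are made up for illustration):
# run_timed: run the given command 5 times, once per input file, timing each run
run_timed() {
    for i in 1 2 3 4 5; do
        time "$@" < "input$i.txt"
    done
}
run_timed ./myprog            # a compiled C binary
run_timed python myscript.py  # or an interpreted program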

If all of the subfolders are at the same depth you can use for i in ./*/*/*/*.py with the appropriate number of *'s. Use one loop for each format, since the actions will be different anyway.
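For example, a fixed-depth sketch (assuming every .py file sits exactly two levels below the current directory; add *'s to match your tree):
for i in ./*/*.py; do
    [ -f "$i" ] || continue   # skip the literal pattern if nothing matched
    time python "$i"
done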

Something like:
for folder in *
do
    if [ -d "$folder" ]; then
        cd "$folder"
        for file in *.py
        do
            if [ -f "$file" ]; then
                : # do your stuff ...
            fi
        done
        for file in *.c
        do
            if [ -f "$file" ]; then
                : # compile and run it here ...
            fi
        done
        # ... and so on for the other formats
        cd ..
    fi
done

Related

Shell script to remove files with no extension

I need a shell script to remove files without an extension (like .txt or any other extension). For example, I found a file named imeino1 (without .txt or anything else) and I want to delete such files via a shell script. If any developer knows how to do this, please explain.
No finds, no pipes, just plain old shell:
#!/bin/sh
for file in "$@"; do
    case $file in
        (*.*) ;;              # has an extension: do nothing
        (*) rm -- "$file";;   # no dot in the name: remove it
    esac
done
Run with a list of files as arguments.
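For instance, to check every name in the current directory (the script name rm-noext.sh is hypothetical):
sh rm-noext.sh *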
Assuming you mean a UNIX(-like) shell, you can use the rm command:
rm imeino1
rm -rvf `ls | grep -v "\.txt$"`
The ls | grep -v "\.txt$" part must be inside back-quotes `…` (or, better, inside $(…)). Note that plain ls is used here rather than ls -lrth: the long-format output would hand permissions, sizes and dates to rm as if they were filenames, and the dot is escaped because an unescaped "." in a grep pattern matches any character.
If the other filenames contain no "." at all, then instead of matching .txt you can match any dot:
rm -rvf `ls | grep -v "\."`
This will remove all the directories and files in the current directory whose names have no extension.
rm -vf `ls | grep -v "\."` won't remove directories, but will remove all the files without an extension (provided the filename does not contain the character ".").
for file in $(find . -type f | grep -v '\....$') ; do rm "$file" 2>/dev/null; done
Removes all files whose names do not end in a dot plus three characters (.???), in and below the current directory. (Note that this word-splits the filenames, so it misbehaves on names containing spaces.)
To remove all files in or below the current directory that contain no dot in the name, regardless of whether the names contain blanks or newlines or any other awkward characters, you can use a POSIX 2008-compliant version of find (such as found with GNU find, or BSD find):
find . -type f '!' -name '*.*' -exec rm {} +
This looks for files (not directories, block devices, …) with a name that does not match *.* (so does not contain a .) and executes the rm command on conveniently large groups of such file names.
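To preview the matches before deleting anything, the same expression can print instead of executing rm:
find . -type f '!' -name '*.*' -print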

Edit multiple files in Ubuntu

I have multiple (more than 100) .c files and I want to change a particular text in every file in which that text exists. I am using Ubuntu!
How can I do it? (I would prefer the command line rather than installing any application.)
Thanks a lot!
OLD=searchtext
NEW=replacedtext
YOURFILE=/path/to/your/file
TMPFILE=$(mktemp)
sed "s/$OLD/$NEW/g" "$YOURFILE" > "$TMPFILE" && mv "$TMPFILE" "$YOURFILE"
rm -f "$TMPFILE"   # clean up the temp file if the mv did not run
you can also use find to locate your files and apply the replacement to each one in place (GNU sed):
find /path/to/parent/dir -name "*.c" -exec sed -i "s/$OLD/$NEW/g" {} \;
find /path/to/parent/dir -name "*.c" finds all files named *.c under /path/to/parent/dir. -exec command {} \; executes the command once per found file; {} stands for the found file. Note the double quotes around the sed expression: single quotes would prevent $OLD and $NEW from being expanded.
You should check out sed, which lets you replace some text with other text (among other things)
example
sed s/day/night/g oldfile > newfile
will change all occurrences of "day" to "night" in oldfile and store the new, changed version in newfile
to run on many files, there are a few things you could do:
use a loop in your favorite shell (see the sketch below)
use find like this
find . -name "namepattern" -exec sed -i "sed-expr" "{}" \;
use file patterns like this: sed -i "sed-expr" *pattern?.cpp
where *pattern?.cpp is just a name pattern for all files that start with some string, contain "pattern", then have any single character and a ".cpp" suffix
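A minimal sketch of the shell-loop option (assuming Bash, GNU sed, and that the glob and the search/replace strings are placeholders):
for f in *.c; do
    sed -i "s/searchtext/replacedtext/g" "$f"   # edit each file in place
done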

Build cscope.out files in a separate directory

I have the source code of a huge project in one directory (/my/src/) and I want the cscope files to be built in some other directory (/my/db/). How can I do that?
Try the following steps:
1. Generate cscope.files:
find /my/src -name '*.c' -o -name '*.h' > /my/db/cscope.files
2. Run cscope on the generated list, writing the database into /my/db:
cscope -b -f /my/db/cscope.out -i /my/db/cscope.files
3. Export the environment variable:
CSCOPE_DB=/my/db/cscope.out; export CSCOPE_DB
If you have a large number of files that are not part of the git repo and they are unnecessarily making the cscope command slow, you can use the commands below to create the cscope files (this will work for Java/JavaScript/Python/C/Go/C++ projects).
git ls-files | grep '\.js$\|\.java$\|\.py$\|\.go$\|\.c$\|\.cpp$\|\.cc$\|\.hpp$' > /my/db/cscope.files
cscope -b -f /my/db/cscope.out -i /my/db/cscope.files
CSCOPE_DB=/my/db/cscope.out; export CSCOPE_DB
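With CSCOPE_DB exported, you can then browse from any directory without rebuilding; the -d flag tells cscope to use the existing cross-reference instead of regenerating it:
cscope -d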

Batch processing pandoc conversions

I've searched high and low to try to work out how to batch process pandoc.
How do I convert a folder (and nested folders) containing HTML files to Markdown?
I'm using OS X 10.6.8.
You can apply any command across the files in a directory tree using find:
find . -name \*.md -type f -exec pandoc -o {}.txt {} \;
would run pandoc on all files with a .md suffix, creating a file with a .md.txt suffix. (You will need a wrapper script if you want to get a .txt suffix without the .md, or do ugly things with subshell invocations.) {} in any word from -exec to the terminating \; will be replaced by the filename.
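For the asker's HTML-to-Markdown case, the same pattern would be (a sketch; the output files end up with a .html.md suffix):
find . -name '*.html' -type f -exec pandoc -f html -t markdown -o {}.md {} \;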
I made a bash script that would not work recursively; perhaps you could adapt it to your needs:
#!/bin/bash
newFileSuffix=md                     # we will make all files into .md
for file in ~/Sites/filesToMd/*.html; do
    filename=${file%.html}           # remove suffix
    newname=$filename.$newFileSuffix # make the new filename
    # echo "$newname"                # uncomment this line to test for your directory, before you break things
    pandoc "$file" -o "$newname"     # perform pandoc operation on the file, --output to newname
done
# pandoc Catharsis.html -o test
This builds upon the answer by geekosaur to avoid the .old.new extension and use just .new instead. Note that it runs silently, displaying no progress.
find -type f -name '*.docx' -exec bash -c 'pandoc -f docx -t gfm "$1" -o "${1%.docx}".md' - '{}' \;
After the conversion, when you're ready to delete the original format:
find -type f -name '*.docx' -delete

How do I easily package libraries needed to analyze a core dump (i.e. packcore)

The version of GDB that is available on HPUX has a command called "packcore", which creates a tarball containing the core dump, the executable and all libraries. I've found this extremely useful when trying to debug core dumps on a different machine.
Is there a similar command in the standard version of GDB that I might find on a Linux machine?
I'm looking for an easy command that someone that isn't necessarily a developer can run when things go bad on a production machine.
The core file includes the command from which it was generated. Ideally this will include the full path to the appropriate executable. For example:
$ file core.29529
core.29529: ELF 64-bit LSB core file x86-64, version 1 (SYSV), SVR4-style, from '/bin/sleep 60'
Running ldd on an ELF binary will show what libraries it depends on:
$ ldd /bin/sleep
linux-vdso.so.1 => (0x00007fff1d3ff000)
libc.so.6 => /lib64/libc.so.6 (0x0000003d3ce00000)
/lib64/ld-linux-x86-64.so.2 (0x0000003d3ca00000)
So now I know the executable and the libraries needed to analyze the core dump.
The tricky part here is extracting the executable path from the core file. There doesn't appear to be a good tool for reading this directly. The data is encoded in a prpsinfo structure (from /usr/include/sys/procfs.h), and you can find the location and size of the data using readelf:
$ readelf -n core.29529

Notes at offset 0x00000468 with length 0x00000558:
  Owner                 Data size       Description
  CORE                  0x00000150      NT_PRSTATUS (prstatus structure)
  CORE                  0x00000088      NT_PRPSINFO (prpsinfo structure)
  CORE                  0x00000130      NT_AUXV (auxiliary vector)
  CORE                  0x00000200      NT_FPREGSET (floating point registers)
...so one could in theory write a code snippet to extract the command line from this structure and print it out in a way that would make this whole process easier to automate. You could, of course, just parse the output of file:
$ file core.29529 | sed "s/.*from '\([^']*\)'/\1/"
/bin/sleep 60
So that's all the parts. Here's a starting point for putting it all together:
#!/bin/sh
core=$1
exe=$(file "$core" | sed "s/.*from '\([^']*\)'/\1/" | awk '{print $1}')
libs=$(
    ldd "$exe" |
    awk '
        /=> \// { print $3 }   # library resolved to a path
        ! /=>/  { print $1 }   # bare path entries such as the loader
    '
)
cat <<EOF | tar -cah -T - -f "$core-all.tar.xz"
$libs
$exe
EOF
For my example, if I name this script packcore and run it on the core file from the sleep command, I get this:
$ packcore core.29529
tar: Removing leading `/' from member names
$ tar -tf core.29529-all.tar.xz
core.29529
lib64/libc.so.6
lib64/ld-linux-x86-64.so.2
bin/sleep
As it stands this script is pretty fragile; I've made lots of assumptions about the output from ldd based on only this sample output.
Here's a script that does the necessary steps (tested only on RHEL5, but might work elsewhere too):
#!/bin/sh
#
# Take a core dump and create a tarball of all of the binaries and libraries
# that are needed to debug it.
#
include_core=1
keep_workdir=0

usage()
{
    argv0="$1"
    retval="$2"
    errmsg="$3"
    if [ ! -z "$errmsg" ] ; then
        echo "ERROR: $errmsg" 1>&2
    fi
    cat <<EOF
Usage: $argv0 [-k] [-x] <corefile>
Parse a core dump and create a tarball with all binaries and libraries
needed to be able to debug the core dump.
Creates <corefile>.tgz
    -k - Keep temporary working directory
    -x - Exclude the core dump from the generated tarball
EOF
    exit $retval
}

while [ $# -gt 0 ] ; do
    case "$1" in
        -k)
            keep_workdir=1
            ;;
        -x)
            include_core=0
            ;;
        -h|--help)
            usage "$0" 0
            ;;
        -*)
            usage "$0" 1 "Unknown command line arguments: $*"
            ;;
        *)
            break
            ;;
    esac
    shift
done

COREFILE="$1"
if [ ! -e "$COREFILE" ] ; then
    usage "$0" 1 "core dump '$COREFILE' doesn't exist."
fi

case "$(file "$COREFILE")" in
    *"core file"*)
        : # looks like a core dump; carry on
        ;;
    *)
        usage "$0" 1 "per the 'file' command, core dump '$COREFILE' is not a core dump."
        ;;
esac

cmdname=$(file "$COREFILE" | sed -e"s/.*from '\(.*\)'/\1/")
echo "Command name from core file: $cmdname"

fullpath=$(which "$cmdname")
if [ ! -x "$fullpath" ] ; then
    usage "$0" 1 "unable to find command '$cmdname'"
fi
echo "Full path to executable: $fullpath"

mkdir "${COREFILE}.pack"

# Let gdb report which binaries and libraries it reads symbols from,
# then pack those files into the working directory.
gdb --eval-command="quit" "${fullpath}" "${COREFILE}" 2>&1 | \
    grep "Reading symbols" | \
    sed -e's/Reading symbols from //' -e's/\.\.\..*//' | \
    tar --files-from=- -cf - | (cd "${COREFILE}.pack" && tar xf -)

if [ $include_core -eq 1 ] ; then
    cp "${COREFILE}" "${COREFILE}.pack"
fi

tar czf "${COREFILE}.pack.tgz" "${COREFILE}.pack"

if [ $keep_workdir -eq 0 ] ; then
    rm -r "${COREFILE}.pack"
fi

echo "Done, created ${COREFILE}.pack.tgz"
I've written a shell script for this. It uses ideas from the answers above and adds some usage information and additional commands. In the future I may add a command for quick debugging in a Docker container with gdb.
