Adding a header file to all files in a directory - C

I have a directory with lots of C files. For each of these C files, I need to add the line #include "config.h". Is there a way to avoid doing this manually?
I'm thinking that maybe there is a neat way to tell the preprocessor to do that, or maybe pass a well-constructed flag to the linker. How can I get all files in a directory to "include" the same header file?

Either use the -include switch:
$ cat foo.c
int main(void) {
    printf("%s\n", "ohai there o/");
    return 0;
}
$ gcc -include stdio.h foo.c
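Applied to the question's scenario, the same switch can inject config.h into every translation unit without touching the sources (a sketch, assuming the sources live in one directory and config.h sits next to them):
$ gcc -include config.h -I/path/to/project/dir -c /path/to/project/dir/*.c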
or write a script that will insert that include for you automatically. The examples below edit the .c files in place, adding the include as the very first line of each file.
Assuming a bash shell and GNU sed:
while read -r; do
    sed -i '1i #include "config.h"' "$REPLY"
done < <(find /path/to/project/dir -type f -name "*.c")
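A hedged refinement of the same loop: skip files that already have the include, so rerunning the script doesn't insert duplicates:
while read -r; do
    grep -q '#include "config.h"' "$REPLY" ||
        sed -i '1i #include "config.h"' "$REPLY"
done < <(find /path/to/project/dir -type f -name "*.c")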
or with something POSIX:
find /path/to/project/dir -type f -name "*.c" | while read -r file; do
    ed "$file" <<EOF
0a
#include "config.h"
.
wq
EOF
done
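Both loops read filenames line by line, so names containing newlines would break them; letting find invoke GNU sed directly sidesteps that (with the same in-place caveats as above):
find /path/to/project/dir -type f -name "*.c" -exec sed -i '1i #include "config.h"' {} +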

Related

Can I find all .c files, compile them and create a static library with them all at once?

I want to find all my C source files in the parent directory, compile them, and create a static library from them all at once. I tried this:
ar -rc libmylib.a < gcc -c < find ../ -type f -name '*.c'
but it throws:
bash: gcc: No such file or directory.
gcc doesn't print the object file names, so there is nothing to feed to ar directly; you'll have to divide it up into two command lines. Example:
find .. -type f -name '*.c' -exec gcc -c {} \;
ar -rc libmylib.a *.o
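As an aside, the < in the attempted one-liner is input redirection, not command substitution, which is why bash went looking for a file literally named gcc. With $(…) the two steps can still be written compactly (a sketch, assuming no source filename contains whitespace):
gcc -c $(find .. -type f -name '*.c')
ar -rcs libmylib.a *.o
The s in -rcs also writes the symbol index, saving a separate ranlib step.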

How to find the whole path to a library using the C preprocessor?

I'm looking for a simple bash script which, when given the name of a system header, will return its full path from which it would be read in a #include <header> statement. I already have an analogous thing for looking up the library archive used by linker.
ld -verbose -lz -L/some/other/dir | grep succeeded | sed -e 's/^\s*attempt to open //' -e 's/ succeeded\s*$//'
For example, this will return the path of libz archive (/lib/x86_64-linux-gnu/libz.so on my system).
For the requested script I know that I could take a list of include directories used by gcc and search them for the file myself, but I'm looking for a more accurate simulation of what's happening inside the preprocessor (unless it's that simple).
Pipe the input to the preprocessor and then process the output. GCC's preprocessor output inserts # linemarker lines with file information and flags that you can parse.
$ f=stdlib.h
$ echo "#include <$f>" | gcc -xc -E - | sed '\~# [0-9]* "\([^"]*/'"$f"'\)" 1 .*~!d; s//\1/'
/usr/include/stdlib.h
It can output multiple files, because gcc has #include_next, and in some complicated cases where several headers share the same name (like f=limits.h) the match can pick up more than one of them. So you could also filter exactly the second line, knowing that the first matched file is always going to be stdc-predef.h:
$ f=limits.h; echo "#include <$f>" | gcc -xc -E - | sed '/# [0-9]* "\([^"]*\)" 1 .*/!d;s//\1/' | sed '2!d'
/usr/lib/gcc/x86_64-pc-linux-gnu/10.1.0/include-fixed/limits.h
But really, searching the include paths yourself is not that hard:
$ f=limits.h; echo | gcc -E -Wp,-v - 2>&1 | sed '\~^ /~!d; s/ //' | while IFS= read -r path; do if [[ -e "$path/$f" ]]; then echo "$path/$f"; break; fi; done
/usr/lib/gcc/x86_64-pc-linux-gnu/10.1.0/include-fixed/limits.h
You can use the preprocessor to do the work:
user@host:~$ echo "#include <stdio.h>" > testx.c && gcc -M testx.c | grep 'stdio.h'
testx.o: testx.c /usr/include/stdc-predef.h /usr/include/stdio.h \
You can add a bit of bash-fu to cut out the part you are interested in.
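For instance, a small sketch that splits the -M output into one token per line and keeps the first match (assuming GNU grep for -m1; the resulting path will vary by system):
$ echo "#include <stdio.h>" > testx.c && gcc -M testx.c | tr -s ' \\' '\n' | grep -m1 '/stdio\.h$'
/usr/include/stdio.h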

How to count header files with a Linux command?

I would like to know how I can count how many specific header files some files include, and how many distinct occurrences of a specific call (for example, "#include" calls) some files contain.
Thank you very much in advance.
If you want to count only the header files in a folder and its subfolders, use this:
find . -iname "*.h" -type f | wc -l
If you want to count only the source files in a folder and its subfolders, use this:
find . -iname "*.c" -type f | wc -l
If you want to count only the library files in a folder and its subfolders, use this:
find . -iname "*.a" -type f | wc -l
Use grep with the -c flag to count.
For a list of files to search in, like file1 and file2:
grep -c '#include' file1 file2 ...
(the pattern is quoted so the shell doesn't treat # as the start of a comment)
Or recursively in a directory:
grep -c -r "#include" ~/MyCPrograms/
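Note that -c reports matching lines per file rather than a grand total; to count every occurrence across the tree as one number, a small sketch using -o:
grep -ro '#include' ~/MyCPrograms/ | wc -l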

Shell script to remove files with no extension

I need a shell script to remove files without an extension (like .txt or any other extension). For example, I found a file named imeino1 (without .txt or any other suffix) and I want to delete such files via a shell script. If any developer knows how to do this, please explain.
No finds, no pipes, just plain old shell:
#!/bin/sh
for file in "$@"; do
    case $file in
        (*.*) ;;              # name contains a dot: keep it
        (*) rm -- "$file" ;;  # no dot: remove it
    esac
done
Run with a list of files as arguments.
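For example (rmnoext.sh is a hypothetical name for the script above; any directory without a dot in its name just triggers a harmless rm refusal, since rm without -r won't touch directories):
sh rmnoext.sh ./*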
Assuming you mean a UNIX(-like) shell, you can use the rm command:
rm imeino1
rm -rvf `ls | grep -v "\.txt$"`
The ls | grep -v "\.txt$" should be inside back-quotes `…` (or, better, inside $(…)). Note plain ls, not ls -lrth: long-format listing lines are not filenames. The dot is escaped because an unescaped . in grep matches any character.
If the other filenames contain no "." at all, then instead of .txt you can exclude any name containing a dot:
rm -rvf `ls | grep -v "\."`
This will remove all the directories and files in the current directory whose names contain no dot.
rm -vf `ls | grep -v "\."` won't remove directories, but will remove all the files without an extension (provided no filename contains whitespace, which would break the word splitting).
for file in $(find . -type f | grep -v '\....$'); do rm "$file" 2>/dev/null; done
Removes all files whose names don't end in a dot followed by three characters, in the current directory and below (the unquoted command substitution still makes this fragile for names containing whitespace).
To remove all files in or below the current directory that contain no dot in the name, regardless of whether the names contain blanks or newlines or any other awkward characters, you can use a POSIX 2008-compliant version of find (such as found with GNU find, or BSD find):
find . -type f '!' -name '*.*' -exec rm {} +
This looks for files (not directories, block devices, …) with a name that does not match *.* (so does not contain a .) and executes the rm command on conveniently large groups of such file names.
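To preview what would be removed before committing, swap the -exec clause for -print:
find . -type f '!' -name '*.*' -print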

Batch processing pandoc conversions

I've searched high and low to try and work out how to batch process pandoc.
How do I convert a folder and nested folders containing html files to markdown?
I'm using OS X 10.6.8.
You can apply any command across the files in a directory tree using find:
find . -name \*.md -type f -exec pandoc -o {}.txt {} \;
would run pandoc on all files with a .md suffix, creating a file with a .md.txt suffix. (You will need a wrapper script if you want to get a .txt suffix without the .md, or do ugly things with subshell invocations.) {} in any word from -exec to the terminating \; will be replaced by the filename.
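One way to get the plain .txt suffix without a separate wrapper script is an inline bash -c invocation (a sketch; ${1%.md} strips the old suffix before the new one is appended):
find . -name '*.md' -type f -exec bash -c 'pandoc -o "${1%.md}.txt" "$1"' - '{}' \;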
I made a bash script that does not work recursively; perhaps you could adapt it to your needs:
#!/bin/bash
newFileSuffix=md  # we will turn all files into .md
for file in $(ls ~/Sites/filesToMd); do
    filename=${file%.html}            # strip the .html suffix
    newname=$filename.$newFileSuffix  # build the new filename
    # echo "$newname"  # uncomment this line to test for your directory, before you break things
    pandoc ~/Sites/filesToMd/"$file" -o "$newname"  # run pandoc on the file, writing output to newname
done
# pandoc Catharsis.html -o test
This builds upon the answer by geekosaur to avoid the .old.new extension and use just .new instead. Note that it runs silently, displaying no progress.
find . -type f -name '*.docx' -exec bash -c 'pandoc -f docx -t gfm "$1" -o "${1%.docx}".md' - '{}' \;
After the conversion, when you're ready to delete the original format:
find . -type f -name '*.docx' -delete
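Adapted to the original question, HTML files in nested folders converted to Markdown, the same pattern becomes (a sketch; adjust the -t target if you want a specific Markdown flavour):
find . -type f -name '*.html' -exec bash -c 'pandoc -f html -t markdown "$1" -o "${1%.html}.md"' - '{}' \;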
