Doxygen: Outputting Version Numbers - versioning

I would like to have Doxygen display the source code version number as part of the main page or the title header.
Presently, our code has the version defined as a text literal:
/*!
* \brief Text literal containing the build number portion of the
* ESG Application Version.
*/
static const char build_version_text[] = "105";
I have searched the internet for a method to get the 105 from the above statement into the Doxygen main page (or header) with no luck.
Background
We have a build server that updates the text string as part of a nightly build operation. The file is updated, then checked into the Software Configuration Management system. The build server is also capable of generating the documentation. We would also like the developers to be able to check out the code and then build the Doxygen documentation at their workstations.
We are using Doxygen version 1.8.11.

What you're looking for is to set the PROJECT_NUMBER config option based on the value in your source. I don't think this can be done directly, but the way I would go about achieving the same result is as follows.
Since the project version is updated when a build script runs, have the build script generate an extra file, for example Doxyversion. Have the content of the file be:
PROJECT_NUMBER = "<versiontext>"
Update your main Doxyfile and replace
PROJECT_NUMBER =
with
@INCLUDE = "<pathToDoxyversion>"
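To keep workstation builds working without extra manual steps, the build script (or a small wrapper the developers run) can generate that fragment on the fly. A minimal sketch, assuming the literal lives in a file called version.c (the file name and the sed pattern are assumptions based on the snippet above):
#!/bin/sh
# Pull "105" out of build_version_text and emit a Doxyfile fragment.
# version.c is a placeholder for whatever file defines build_version_text.
VERSION=$(sed -n 's/.*build_version_text\[\] *= *"\([^"]*\)".*/\1/p' version.c)
echo "PROJECT_NUMBER = \"$VERSION\"" > Doxyversion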
Edit:
A solution that does not require duplicating the version string is to parse the version out of the file into an environment variable. Then you can set PROJECT_NUMBER to
PROJECT_NUMBER=$(ENV_VAR)
Another option is you can call doxygen with
( cat Doxyfile ; echo "PROJECT_NUMBER=$ENV_VAR" ) | doxygen -
Both solutions require the developers to know to do this when generating the documentation, or the entire doxygen call to be wrapped in a script. There are also potential portability issues.

Full solution below, from a real example.
Main page
In the documentation for the main page (or anywhere, really), use special markers for the text to substitute dynamically.
Main page source:
https://github.com/mysql/mysql-server/blob/8.0/sql/mysqld.cc#L22
See the special ${DOXYGEN_GENERATION_DATE} markers
Doxygen input filters
In the doxygen configuration file, define an input filter for the file containing the special markers. For example,
FILTER_PATTERNS = "*/sql/mysqld.cc=./doxygen-filter-mysqld"
Implement the doxygen-filter-mysqld script to:
Find the dynamic value to substitute (in your case, parse the value of build_version_text)
Replace (sed) the special marker with the value
Output the result to stdout
For example:
CMD1="s/\\\${DOXYGEN_GENERATION_DATE}/"`date -I`"/g"
...
sed -e ${CMD1} -e ${CMD2} -e ${CMD3} $1
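Adapted to the build-number case above, a hypothetical filter script could look like the sketch below (the file name version.c and the ${PROJECT_BUILD_VERSION} marker are assumptions, not part of the MySQL example):
#!/bin/sh
# Parse the build number out of build_version_text, then substitute the
# special marker in the file being filtered ($1) and print the result to stdout.
VERSION=$(sed -n 's/.*build_version_text\[\] *= *"\([^"]*\)".*/\1/p' version.c)
sed -e "s/\\\${PROJECT_BUILD_VERSION}/$VERSION/g" "$1"
Note that the filter only changes what Doxygen sees; the file on disk is left untouched.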
Results
Result is at
http://devdocs.no.oracle.com/mysql-server/8.0.0/
See Also
All this is a workaround for what I think would be a good Doxygen feature.
See bug #769679 (Feature Request: doxygen command to expand an environment variable), which was entered for this.
https://bugzilla.gnome.org/show_bug.cgi?id=769679

Related

Frama-c: save plugin analysis results in c file

I'm new to frama-c, so I apologize in advance for my question.
I would like to make a plugin that modifies the source code: clone some functions, insert some function calls, and have the plugin generate a second file containing the modified version of the input file.
I would like to know if it is possible to generate a new C file with frama-c. For example, the results of the Sparecode and Semantic constant folding plugins are displayed directly on the terminal rather than written to a file, so I would like to know whether Frama-c has a way to write to a file instead of sending the result of the analysis to standard output.
Of course we can redirect the output of frama-c to a file.c, but in that case, for the scf plugin for example, the value analysis results are included as well, and I found that frama-c replaces, for example, for loops with while loops.
What I would like is for frama-c to generate a file containing my original code plus the modifications my plugin would have inserted.
I looked in the directory src/kernel_services/ast_printing but I have not really found functions that can guide me.
Thanks.
On the command line, option -ocode <file> indicates that any subsequent -print will be done in <file> instead of the standard output (use -ocode "" after that if you want to print on stdout again). Note that -print prints the code corresponding to the current project. You can use -then-on <prj> to change the project you're interested in. More information is of course available in the user manual.
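As a rough sketch of that command line (assuming -scf is the option that runs the semantic constant folding plugin, and using -then-last as shorthand for -then-on with the project the plugin just created):
frama-c input.c -scf -then-last -ocode modified.c -print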
All of this is of course available programmatically. In particular, File.pretty_ast by default pretty-prints (i.e. outputs a C program) the AST of the current project on stdout, but it takes two optional arguments for changing the project or the formatter to which the output should be done.

How to add a new filter to ffmpeg library

I am trying to add functionality to the FFmpeg library. The issue is that the developer guide has only general instructions on how to do it. I know that when I want to add something to ffmpeg I need to register the new functionality and rebuild the library, so I can then call it somehow like so:
ffmpeg -i input.avi -vf "myfilter" out.avi
I do not want to officially contribute; I would just like to create the extra functionality and test it. The question is: is there any skeleton file where the basic structure is already in place and you simply get a pointer to a new frame and process it? Some directions or anything, because the source files are rather hard to read without understanding the functions they call internally.
The document in the repo is worth a read: ffmpeg\doc\writing_filters.txt
The steps are:
Add an appropriate line to ffmpeg\libavfilter\Makefile:
OBJS-$(CONFIG_MCSCALE_CUDA_FILTER) += vf_mcscale_cuda.o vf_mcscale_cuda.ptx.o scale_eval.o
Add an appropriate line to ffmpeg\libavfilter\allfilters.c:
extern AVFilter ff_vf_mcscale_cuda;
The change in (2) is not recognized until ./configure scans the files again to set up the build, so re-run ./configure; when you next run make, the filter should be built. Happy days.
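There is no ready-made skeleton file in the tree, but a pass-through video filter is small enough to serve as one. The sketch below is modeled on the filter_frame-style filters of the FFmpeg 4.x era; field names and macros change between FFmpeg versions, so compare it against writing_filters.txt and an existing filter such as vf_null.c before relying on it (the filter name "myfilter" is a placeholder):
#include "libavutil/internal.h"
#include "avfilter.h"
#include "internal.h"

/* Called once per incoming frame; forward it unchanged to the output. */
static int filter_frame(AVFilterLink *inlink, AVFrame *frame)
{
    AVFilterContext *ctx = inlink->dst;

    /* frame->data[] / frame->linesize[] would be processed here */

    return ff_filter_frame(ctx->outputs[0], frame);
}

static const AVFilterPad myfilter_inputs[] = {
    {
        .name         = "default",
        .type         = AVMEDIA_TYPE_VIDEO,
        .filter_frame = filter_frame,
    },
    { NULL }
};

static const AVFilterPad myfilter_outputs[] = {
    {
        .name = "default",
        .type = AVMEDIA_TYPE_VIDEO,
    },
    { NULL }
};

AVFilter ff_vf_myfilter = {
    .name        = "myfilter",
    .description = NULL_IF_CONFIG_SMALL("Pass frames through unchanged."),
    .inputs      = myfilter_inputs,
    .outputs     = myfilter_outputs,
};
Save it as libavfilter/vf_myfilter.c, add vf_myfilter.o to the Makefile and an extern AVFilter ff_vf_myfilter; line to allfilters.c as described in the steps above, then re-run ./configure and make.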
I was faced with the problem of adding the transform_v1 filter (see details on the transform 360 filters at https://www.diycode.cc/projects/facebook/transform360 ) to ffmpeg version N-91732-g1124df0. I did exactly as writing_filters.txt describes, but vf_transform_v1.o is not linked.
I added the object file (vf_transform_v1.o) in the Makefile of libavfilter:
OBJS-$(CONFIG_TRANSFORM_V1_FILTER)+= vf_transform_v1.o
I checked that the define CONFIG_TRANSFORM_V1_FILTER=1 is present in config.h.
However, after compilation transform_v1 is still not recognized.
I resolved this issue in an awkward way: I added vf_transform_v1.o explicitly to the OBJS list, without conditioning it on the global define CONFIG_TRANSFORM_V1_FILTER:
OBJS+= vf_transform_v1.o

GCC -D equivalent for iarbuild.exe

I maintain a build server that uses a Makefile-based infrastructure.
As part of that infrastructure, I'm passing a few arguments to the Makefile from the build machine (for example: user, build-server name, and various build variables known only when compiling a specific project).
Some of these variables are passed into the code using gcc's -D directive:
-DSOME_VAR=VAL
I've now been asked to migrate an IAR project into my build system. That is not a problem in itself, except that I can't find any way to introduce preprocessor defines using the iarbuild.exe command-line tool.
I guess I could use an existing H file and edit it before compiling (using sed for example), but that's an ugly hack I would rather avoid if I can.
How do I properly achieve this with IAR?
I recently solved this using a combination of option #2 and the -varfile argvarfile option to iarbuild.exe. In my case I am controlling the output of CppUTest: I need easy-to-read output for IDE builds but JUnit-formatted output for build-server builds. Here's my setup as an example.
Create a global variable in the IDE. Tools->Configure Custom Argument Variables...
Select global tab. Create group JUNIT. Create variable USE_JUNIT. Set the value to 0.
In the Project->Options->C/C++ Compiler->Preprocessor section add an entry for
JUNIT_OUTPUT=$USE_JUNIT$
In the code use
#if JUNIT_OUTPUT == 1
#define FLAGS "-ojunit"
#else
#define FLAGS "-v"
#endif
Create a file called jUnitOut.txt and put the following into it.
<?xml version="1.0" encoding="iso-8859-1"?>
<iarUserArgVars>
  <group active="true" name="JUNIT">
    <variable>
      <name>USE_JUNIT</name>
      <value>1</value>
    </variable>
  </group>
</iarUserArgVars>
Call iarbuild.exe with the normal options plus -varfile jUnitOut.txt
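For reference, the full call might then look like this (the project and configuration names are placeholders):
iarbuild MyProject.ewp -build Debug -varfile jUnitOut.txt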
Some observations
Regarding #1, you don't actually need to create a global variable, but when you do, IAR creates ...\AppData\Roaming\IAR Embedded Workbench\global.custom_argvars. This file must be present for iarbuild.exe to use the -varfile you provide. You can also create workspace variables; these are stored in a file in the local project directory, which can be added to source control so that global variables can be avoided. IDE builds use the global and workspace variables, while iarbuild uses the -varfile.
Regarding #4, I didn't find any documentation on how to format the argvar file, so I created a workspace variable in the IDE, found the file it created to store the variable, and then copied from that file into my jUnitOut.txt.
To my understanding iarbuild does not support passing such parameters directly.
There are two possibilities that were suggested by IAR support and that both work for me (using 7.40.2):
1) Use a preinclude file
Go to Project->Options->C/C++ Compiler->Preprocessor
Add a preinclude file (e.g. preinclude.h)
Now have your build script generate that preinclude file before starting iarbuild
2) Use "Defined symbols"
Go to Project->Options->C/C++ Compiler->Preprocessor
Add an option to "Defined symbols" and use environment variable, e.g. "SOMEVAR=$_SOMEVAL_$"
On the cmd line, set the environment variable, e.g. "set SOMEVAR=myvalue"
Run iarbuild
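A hypothetical build-server step for this method (the project and configuration names are placeholders) would then be:
set SOMEVAR=myvalue
iarbuild MyProject.ewp -build Release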
The 2nd method is a little more elegant, but the build will fail if the environment variable is not set, so I'll probably go with the 1st method.
This may answer your question:
To see the command line parameters, enable the option IAR Embedded Workbench IDE > Tools > Options... > IDE Options > Messages > Show build messages > select 'All'.
which is part of the web page at:
http://supp.iar.com/Support/?Note=47884

Prevent CEDET semantic from parsing certain file types

I have to work with a C/C++ build environment that drops intermediate files all over the place:
.i files containing the output of the C-preprocessor (roughly raw C)
.s files containing the input of the C-assembler
CEDET (I assume the semantic analyzer) eventually finds these files and attempts to index them. This results in jumping to .i files containing raw C for definitions and generally slowing down parsing and loading of the .semanticdb.
I never open these files in emacs, so they must be being loaded by the background analyser.
Is it possible to prevent the analyser from loading these files? I can't find any configuration options that define the file-types that are parsed by the background analyser.
If you never need C mode for these files, here's a quick fix:
(add-to-list 'auto-mode-alist '("\\.i\\'" . fundamental-mode))
(add-to-list 'auto-mode-alist '("\\.s\\'" . fundamental-mode))
The answer from abo-abo gave me the clues I needed. The grep implementation (used by EDE) of semantic-symref-perform-search uses auto-mode-alist to find the files that match a given semantic mode (based on the current buffer's mode, e.g. c-mode) when trying to resolve symbols.
The final fix I used is to specifically eliminate the default entries in the auto-mode-alist using:
(delete '("\\.i\\'" . c-mode) auto-mode-alist)
(delete '("\\.ii\\'" . c++-mode) auto-mode-alist)
Adding fundamental-mode entries as suggested by abo-abo also seems to work; however, I was concerned that, since the c-mode entries were still in the list, a change in implementation could result in them being reactivated.

how to get doxygen to produce call & caller graphs for c functions

I've spent some time reviewing the docs and going through my doxygen config file from end to end. When I run doxygen with it, it produces documentation and indices for structs and C++ classes, but I don't see call or caller graphs for the multitude of C functions in my source tree.
Can anybody tell me how to configure doxygen to produce these call and caller trees? I do have Graphviz installed.
You have to set HAVE_DOT, CALL_GRAPH and CALLER_GRAPH to YES.
Also make sure the path to dot is in your PATH variable.
If that still doesn't work, you might have to set EXTRACT_ALL and/or EXTRACT_STATIC, depending on your functions.
For MacOS users:
Install Doxygen and Graphviz as:
brew install doxygen
brew install graphviz
Go to your project folder and, with the Terminal set to this path, run
doxygen -g
A configuration file named Doxyfile will be generated. Open this file in any editor, find these parameters, and set their values to YES:
HAVE_DOT = YES
EXTRACT_ALL = YES
EXTRACT_PRIVATE = YES
EXTRACT_STATIC = YES
CALL_GRAPH = YES
CALLER_GRAPH = YES
DISABLE_INDEX = YES
GENERATE_TREEVIEW = YES
RECURSIVE = YES
You can also set the name of your project in this Doxyfile. Save the file and then run this command in the terminal:
doxygen Doxyfile
This will generate two folders named html and latex. Go to the html folder and open annotated.html to view the details of your project. You will also see PNG images of the call graphs embedded in the HTML where relevant (for some functions/classes, for example).
Setting the path to "dot" (/usr/local/bin/) via the "Expert" tab controls in the GUI did the trick!
doxywizard is also useful. It gives you all the options in a GUI. Selecting any option shows quick help about that option.
You might also be interested in COLLABORATION_GRAPH or GRAPHICAL_HIERARCHY.
Quite convenient.
I had the same problem for my C global functions. Enabling CLANG_ASSISTED_PARSING did help display callgraphs for some functions, yet not all of them.
