GCC -D equivalent for iarbuild.exe

I have a build machine server I am maintaining which is using Makefiles infrastructure.
As part of that infrastructure, I'm passing a few arguments to the Makefile from the build machine (example: user, build-server name, and various build variables known only when compiling for a specific project).
Some of these variables are passed into the code using the gcc -D directive:
-DSOME_VAR=VAL
I've now been asked to migrate an IAR project into my build system. That is not a problem in itself, except that I can't find any way to introduce preprocessor defines using the iarbuild.exe command-line tool.
I guess I could use an existing header file and edit it before compiling (using sed, for example), but that's an ugly hack I would rather avoid if I can.
How do I properly achieve this with IAR?

I recently solved this using a combination of option #2 from the answer below ("Defined symbols") and the -varfile argvarfile option to iarbuild.exe. In my case I am controlling the output of CppUTest: I need easy-to-read output for IDE builds but JUnit-formatted output for build-server builds. Here's my setup as an example.
1. Create a global variable in the IDE: Tools->Configure Custom Argument Variables...
Select global tab. Create group JUNIT. Create variable USE_JUNIT. Set the value to 0.
2. In Project->Options->C/C++ Compiler->Preprocessor, add an entry for
JUNIT_OUTPUT=$USE_JUNIT$
3. In the code, use
#if JUNIT_OUTPUT == 1
#define FLAGS "-ojunit"
#else
#define FLAGS "-v"
#endif
4. Create a file called jUnitOut.txt and put the following into it:
<?xml version="1.0" encoding="iso-8859-1"?>
<iarUserArgVars>
<group active="true" name="JUNIT">
<variable>
<name>USE_JUNIT</name>
<value>1</value>
</variable>
</group>
</iarUserArgVars>
5. Call iarbuild.exe with the normal options plus -varfile jUnitOut.txt.
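A full build-server invocation then looks something like this (the project file name and configuration are placeholders for your own):
iarbuild MyProject.ewp -build Debug -varfile jUnitOut.txt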
Some observations:
Regarding #1: you don't actually need to create a global variable, but when you do, IAR creates ...\AppData\Roaming\IAR Embedded Workbench\global.custom_argvars. This file must be present for iarbuild.exe to use the -varfile you provide. You can also create workspace variables; these are stored in a file in the local project directory, which can be added to source control so that global variables can be avoided. IDE builds use the global and workspace variables, while iarbuild uses the -varfile.
Regarding #4: I didn't find any documentation on how to format the argvarfile, so I created a workspace variable in the IDE, found the file it created to store the variable, and then cut and pasted from that file into my jUnitOut.txt.

To my understanding iarbuild does not support passing such parameters directly.
There are two possibilities that were suggested by IAR support and that both work for me (using 7.40.2):
1) Use a preinclude file
Go to Project->Options->C/C++ Compiler->Preprocessor
Add a preinclude file (e.g. preinclude.h)
Now have your build script generate that preinclude file before starting iarbuild (see the sketch after these two options)
2) Use "Defined symbols"
Go to Project->Options->C/C++ Compiler->Preprocessor
Add an option to "Defined symbols" that references an environment variable, e.g. "SOMEVAR=$_SOMEVAL_$"
On the command line, set that environment variable, e.g. "set SOMEVAL=myvalue"
Run iarbuild
The 2nd method is a little more elegant, but the build will fail if the environment variable is not set, so I'll probably go with the 1st method.
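For reference, a minimal sketch of the 1st method as it could appear in a build script (sh syntax; the symbol, value, file name, and project/configuration names are all assumptions):
# generate the preinclude file the project options point at, then build
echo "#define SOME_VAR 42" > preinclude.h
iarbuild MyProject.ewp -build Release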

This may answer your question:
To see the command line parameters, enable the option IAR Embedded Workbench IDE > Tools > Options... > IDE Options > Messages > Show build messages > select 'All'.
which is part of the web page at:
http://supp.iar.com/Support/?Note=47884

Related

Can the meson project version be assigned dynamically?

I am new to Meson so please forgive me if this is a stupid question.
Simple Version of the Question:
I want to be able to assign a dynamic version number to the meson project version at build time. Essentially meson.project_version() = my_dynamic_var, or project('my_cool_project', 'c', version : my_dynamic_var) (which of course won't work).
I would rather not pre-process the file if I don't have to.
Some background if anybody cares:
My build system dynamically comes up with a version number for the project. In my case, it is using a bash script. I have no problem getting that version into my top-level meson.build file using run_command and scraping stdout from there. I have read that doing it this way is bad form, so if there is another way to do this, I am all ears.
I am also able to create and pass the correct -DPRODUCT_VERSION="<my_dynamic_var>" via add_global_arguments, so I COULD just settle for that, but I would like the meson project itself to carry the same version for the logs, and so I can use meson.project_version() to get the version in subprojects for languages other than C/C++.
The short answer, as noted in comments to the question, appears to be no. There is no direct way to set the version dynamically in the project call.
However, there are some workarounds, and the first looks promising for the simple case:
(1) use meson rewriting capability
$ meson rewrite kwargs set project / version 1.0.0
Then use an environment variable instead of 1.0.0 (see the sketch after this list).
(2) write a wrapper script which reads the version from the environment and substitutes it into your meson.build file in the project call.
(3) adopt conan.io and have your meson files generated.
(4) use build options. This option, while not as good as (1), might work for other workflows.
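For example, option (1) could be driven from a build script roughly like this, assuming the dynamically generated version is exported in an environment variable named BUILD_VERSION:
$ meson rewrite kwargs set project / version "$BUILD_VERSION"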
Here's how option (4) works.
create a meson_options.txt file in your meson root directory
add the following line:
option('version', type : 'string', value : '0.0.0', description : 'project version')
then create a meson.build file that reads this option.
project('my_proj', 'cpp')
version = get_option('version')
message(version)
conf_data = configuration_data()
conf_data.set('version', version)
When you go to generate your project, you have an extra step of setting options.
$ meson build && cd build
$ meson configure -Dversion=$BUILD_VERSION
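With a reasonably recent Meson the option can equally be passed when the build directory is set up, which avoids the separate configure step (BUILD_VERSION being an assumed environment variable):
$ meson setup build -Dversion="$BUILD_VERSION"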
Now the version is available as a build option. We then use a configuration_data object to make it available for substitution into header/source files (which you might want in order to get it into shared libraries and the like).
configure_file(
input : 'config.hpp.in',
output : 'config.hpp',
configuration : conf_data
)
And config.hpp.in looks something like this:
#pragma once
#include <string>
const static std::string VERSION = "@version@";
When we do the configure_file call, @version@ will be replaced with the version string we set in the meson configure step.
So this way is pretty convoluted, but like I said, you may still end up doing some of it, e.g. to print copyright info and what not.
As of 0.60.3 you may directly assign the version from run_command, which means the following will work without any meson_options.txt:
project('randomName', 'cpp',
version : run_command('git', 'rev-parse', '--short', 'HEAD').stdout().strip(),
default_options : [])
In particular, it is also possible to assign the result of a bash script; simply invoke it instead of git.

Append version number to output file for C application in Eclipse

I have a version.h header file where I have the version of my application defined:
#define VERSION 0x0100
I would like to add it as a suffix to the output file. So instead of having myapp.elf I would like to have myapp_0100.elf. Is there a way to use symbols in the compilation options?
You can do the opposite. Define a variable in Eclipse and use it when compiling.
Go to Project Properties -> C/C++ Build -> Build Variables.
Define a new variable blah with the value 0100. Then, in the build settings (depending on your project type), you can pass -DVERSION=${blah} to the compiler. It will define a symbol called VERSION with the given value.
Now, in Project Properties -> C/C++ Build -> Settings, choose the Build Artifact tab. In the artifact name you can set myapp_${blah}.elf. Again, if your project is not CDT-managed, you can instead pass this variable to the makefile for it to process.
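For a non-managed (makefile-based) project, that just means passing the variable on the make command line; a sketch with illustrative names:
# the hand-written makefile would use something like TARGET = myapp_$(VERSION).elf
make VERSION=0100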

Doxygen: Outputting Version Numbers

I would like to have Doxygen display the source code version number as part of the main page or the title header.
Presently, our code has the version defined as a text literal:
/*!
* \brief Text literal containing the build number portion of the
* ESG Application Version.
*/
static const char build_version_text[] = "105";
I have searched the internet for a method to get the 105 from the above statement into the Doxygen main page (or header) with no luck.
Background
We have a build server that updates the text string as part of a nightly build operation. The file is updated, then checked into the Software Configuration Management system. The build server is also capable of generating the documentation. We would also like the developers to be able to check out the code, then build the Doxygen documentation at their workstations.
We are using Doxygen version 1.8.11.
What you're looking for is to set the PROJECT_NUMBER config option based on the value in your source. I don't think this can be done, but the way I would go about achieving the same result is as follows.
Since the project version is updated when a build script runs, have the build script generate an extra file, for example Doxyversion. Have the content of the file be:
PROJECT_NUMBER = "<versiontext>"
Update your main Doxyfile and replace
PROJECT_NUMBER =
with
@INCLUDE = "<pathToDoxyversion>"
Edit:
A solution that does not require duplicating the version string is to parse the version out of the source file into an environment variable. Then you can set PROJECT_NUMBER to
PROJECT_NUMBER=$(ENV_VAR)
Another option is to call doxygen with
( cat Doxyfile ; echo "PROJECT_NUMBER=$ENV_VAR" ) | doxygen -
Both solutions require the developers to know to do this when generating the documentation, or require wrapping the entire doxygen call in a script. There are also potential portability issues.
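As a rough sketch, a wrapper script combining the two ideas could look like this (the source file path and the exact sed pattern are assumptions based on the declaration shown in the question):
#!/bin/sh
# pull the build number out of the file containing build_version_text
BUILD_VERSION=$(sed -n 's/.*build_version_text\[\] = "\([^"]*\)".*/\1/p' src/version.c)
# feed the Doxyfile plus the PROJECT_NUMBER override to doxygen on stdin
( cat Doxyfile ; echo "PROJECT_NUMBER=$BUILD_VERSION" ) | doxygen -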
Full solution below, from a real example.
Main page
In the documentation for the main page (or anywhere, really), use special markers for the text to substitute dynamically.
Main page source:
https://github.com/mysql/mysql-server/blob/8.0/sql/mysqld.cc#L22
See the special ${DOXYGEN_GENERATION_DATE} markers
Doxygen input filters
In the doxygen configuration file, define an input filter for the file containing the special markers. For example,
FILTER_PATTERNS = "*/sql/mysqld.cc=./doxygen-filter-mysqld"
Implement the doxygen-filter-mysqld script to:
Find the dynamic value to substitute (in your case, parse the value of build_version_text)
Replace (sed) the special marker with the value
Output the result to stdout
For example:
CMD1="s/\\\${DOXYGEN_GENERATION_DATE}/"`date -I`"/g"
...
sed -e ${CMD1} -e ${CMD2} -e ${CMD3} $1
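Adapted to the build-number case from the question, the whole filter could be as small as this (the marker name, file locations, and sed patterns are assumptions, not taken from the MySQL sources):
#!/bin/sh
# doxygen passes the file to be filtered as $1; write the filtered text to stdout
VERSION=$(sed -n 's/.*build_version_text\[\] = "\([^"]*\)".*/\1/p' src/version.c)
sed -e "s/\${DOXYGEN_BUILD_VERSION}/${VERSION}/g" "$1"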
Results
Result is at
http://devdocs.no.oracle.com/mysql-server/8.0.0/
See Also
All this is a workaround for what I think should be a good Doxygen feature.
See bug #769679 (Feature Request: doxygen command to expand an environment variable), which was entered for this.
https://bugzilla.gnome.org/show_bug.cgi?id=769679

How to get Doxygen to produce call & caller graphs for C functions

I've spent some time reviewing the docs and going through my Doxygen config file from end to end. I cut Doxygen loose on my config file and it produces documentation and indices for structs and C++ classes, but I don't see call or caller graphs for the multitude of C functions in my source tree.
Can anybody tell me how to configure Doxygen to produce these call and caller trees? I do have Graphviz installed.
You have to set HAVE_DOT, CALL_GRAPH and CALLER_GRAPH to YES.
Also make sure the path to dot is in your PATH variable.
If that still doesn't work, you might have to set EXTRACT_ALL and/or EXTRACT_STATIC, depending on your functions.
For MacOS users:
Install Doxygen and Graphviz as:
brew install doxygen
brew install graphviz
Go to your project folder and, with the Terminal set to this path, run
doxygen -g
A configuration file named Doxyfile will be generated. Open this file in any editor, find these parameters, and change their values to YES:
HAVE_DOT = YES
EXTRACT_ALL = YES
EXTRACT_PRIVATE = YES
EXTRACT_STATIC = YES
CALL_GRAPH = YES
CALLER_GRAPH = YES
DISABLE_INDEX = YES
GENERATE_TREEVIEW = YES
RECURSIVE = YES
You can also set the name of your project in this Doxyfile. Save the file and then run this command in the terminal:
doxygen Doxyfile
This will generate two more folders, named html and latex. Go to the html folder and open annotated.html to view the details of your project. You will also see PNG images of the call graphs embedded in the HTML where relevant (for some functions/classes, for example).
Setting the path to "dot" (/usr/local/bin/) via the "Expert" tab controls in the GUI did the trick!
doxywizard is also useful. It gives you all the options in a GUI. Selecting any option shows quick help about that option.
You might also be interested in COLLABORATION_GRAPH or GRAPHICAL_HIERARCHY.
Quite convenient.
I had the same problem with my global C functions. Enabling CLANG_ASSISTED_PARSING did help display call graphs for some functions, yet not all of them.

Locating data files in C program built with Autotools

I have a C program built using Autotools. In src/Makefile.am, I define a macro with the path to installed data files:
AM_CPPFLAGS = -DAM_INSTALLDIR='"$(pkgdatadir)"'
The problem is that I need to run make install before I can test the binary (since it needs to be able to find the data files).
I can define another macro with the path of the source tree so the data files can be located without installing:
AM_CPPFLAGS = -DAM_INSTALLDIR='"$(pkgdatadir)"' -DAM_TOPDIR='"$(abs_top_srcdir)"'
Now, I would like the following behavior:
If the binary was installed via make install, use AM_INSTALLDIR to fetch data files.
If the binary was not installed, use AM_TOPDIR to fetch data files.
Is this possible? Is there a better approach to this problem?
What I do (in https://rhdunn.github.com/cainteoir/) is:
const char *basedir = getenv("CAINTEOIR_DATADIR");
if (!basedir)
basedir = DATADIR "/" PACKAGE; // e.g. /usr/share/cainteoir-engine
and then run it (in tests/harness.py) as:
CAINTEOIR_DATADIR=`pwd`/data src/apps/metadata/metadata test_file.epub
This then allows the user to change where the data is loaded from if they wish.
Making the program able to use a run-time configuration, as proposed by reece, is a good solution. If for some reason you do not want it to be configurable at run time, a common alternative is to build a test binary differently from the installed binary (there are other problems associated with this, in particular ensuring that the program you are testing behaves consistently with the program that is installed). An easy way to do that is something like:
bin_PROGRAMS = foo
check_PROGRAMS = test-foo
test_foo_SOURCES = $(foo_SOURCES)
AM_CPPFLAGS = -DINSTALLDIR='"$(pkgdatadir)"'
test_foo_CPPFLAGS = -DINSTALLDIR='"$(abs_top_srcdir)"'
Rather than using a binary with a different name, you might want to have a dedicated tests directory and build the program using the same name as the original.
Note that I've changed the name from AM_INSTALLDIR to INSTALLDIR. Automake reserves names beginning with "AM_" for its own use, and by using that name you are stomping on Automake's namespace.
A bit of additional information first: The data files are under active development, and I have various scripts that need to call binaries using local data files, whereas installed binaries should use stable, installed data files.
My original solution made use of an environment variable, as proposed by reece. But I didn't want to manage setting up environment variables in various places, and I didn't want any risk of the wrong data files being picked up due to a mistake.
So the solution I ended up with was to define macros for both locations at build time, and add a flag (-local) to the binaries to force local data files to be used.
