Predefined cpu target macro for Cortex-M0+

I am currently using predefined CPU target macros to make software run on multiple CPU targets. Example:
#ifdef __TARGET_CPU_CORTEX_M0
/* do something here */
#elif defined(__TARGET_CPU_CORTEX_M3)
/* do something here */
#else
#error Unsupported compiler platform
#endif
This works for Cortex-M0 and Cortex-M3, but I can't figure out what macro to use for Cortex-M0+. Does anyone know which macro I can use?
I use the armcc compiler.

This is documented, albeit rather obliquely. The relevant macro name is derived from the command-line option, thus --cpu=Cortex-M0plus defines __TARGET_CPU_CORTEX_M0PLUS.
Annoyingly, whilst it doesn't show up in the --cpu=list output, the compiler (I tried armcc version 5.04) does also recognise the option --cpu=Cortex-M0+, for which it defines the macro __TARGET_CPU_CORTEX_M0_ (note the trailing underscore).
In general, invoking armcc --cpu=xx --list_macros /dev/null will show what macros are defined for cpu option xx (or an error if it isn't supported).
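Putting that together, the original check can be extended along these lines (a sketch; __TARGET_CPU_CORTEX_M0PLUS is the documented name corresponding to --cpu=Cortex-M0plus):
#ifdef __TARGET_CPU_CORTEX_M0
/* Cortex-M0 specific code */
#elif defined(__TARGET_CPU_CORTEX_M0PLUS)
/* Cortex-M0+ specific code */
#elif defined(__TARGET_CPU_CORTEX_M3)
/* Cortex-M3 specific code */
#else
#error Unsupported compiler platform
#endif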

Related

Passing custom compiler flags to BSD's gcc

So, depending on how I decide to compile a program, I want to be able to execute a set of functions. This could normally be done with just a few variables and comparisons, but since I will be distributing it to systems that only have the ELF, what to run needs to be known at compile time. Is it possible to pass a custom gcc flag, say -flagset, that then sets a macro in my code if that flag is given? I have seen How to specify custom compiler flags for Visual Studio Compiler, but that is a bit vague and not appropriate for my needs.
From the gcc manual:
3.13 Options Controlling the Preprocessor
-D name
Predefine name as a macro, with definition 1.
-D name=definition
The contents of definition are tokenized and processed as if they appeared during translation phase three in a ‘#define’ directive.
That is, you can set any macro value via the -D option and that will be seen by the code. Example:
gcc -DSOME_FLAG test.c
Then in the code it can be checked as such:
#ifdef SOME_FLAG
/* do code for SOME_FLAG enabled case */
#endif
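A complete minimal sketch (the file name test.c matches the command above; the messages are only illustrative):
/* test.c */
#include <stdio.h>

int main(void)
{
#ifdef SOME_FLAG
    /* built with: gcc -DSOME_FLAG test.c */
    printf("SOME_FLAG was set at compile time\n");
#else
    printf("SOME_FLAG was not set\n");
#endif
    return 0;
}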

What is the compiler-defined macro for WASM?

What is the macro that clang and/or gcc would define when compiling for a WASM backend?
To clarify, one can write platform-specific code using macros the compiler defines like so:
#if _WIN32
// Windows-specific code
#elif __linux__
// Linux-specific code
#elif __APPLE__
// macOS-specific code
#else
#error Unsupported platform
#endif
I would like to do the same thing specifying WebAssembly as one of the potential backends.
As per @Jonathan Leffler's comment, there does not appear to be a standard macro that is defined across compilers.
My current solution for working with different compilers is to create a separate build job for WASM that defines a macro. For gcc and clang, it passes the flag -D__WASM__ to define a __WASM__ macro.
In my setup, I just change an environment variable and my build script selects the appropriate build flags.
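With that convention, the check from the question can gain a WebAssembly branch (a sketch; __WASM__ is the macro supplied by the build script via -D__WASM__, not one the compiler provides):
#if defined(__WASM__)
// WebAssembly-specific code
#elif defined(_WIN32)
// Windows-specific code
#elif defined(__linux__)
// Linux-specific code
#elif defined(__APPLE__)
// macOS-specific code
#else
#error Unsupported platform
#endif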

`__noinline__` macro conflict between GLib and CUDA

I'm working on an application using both GLib and CUDA in C. It seems that there's a conflict when including both glib.h and cuda_runtime.h in a .cu file.
7 months ago GLib made a change to avoid a conflict with pixman's macro. They added __ before and after the token noinline in gmacros.h: https://gitlab.gnome.org/GNOME/glib/-/merge_requests/2059
That should have worked, given that gcc claims:
You may optionally specify attribute names with __ preceding and following the name. This allows you to use them in header files without being concerned about a possible macro of the same name. For example, you may use the attribute name __noreturn__ instead of noreturn.
However, CUDA does use __ in its macros, and __noinline__ is one of them. They acknowledge the possible conflict and add some compiler checks to ensure it won't conflict in regular C files, but it seems that in .cu files it still applies:
#if defined(__CUDACC__) || defined(__CUDA_ARCH__) || defined(__CUDA_LIBDEVICE__)
/* gcc allows users to define attributes with underscores,
e.g., __attribute__((__noinline__)).
Consider a non-CUDA source file (e.g. .cpp) that has the
above attribute specification, and includes this header file. In that case,
defining __noinline__ as below would cause a gcc compilation error.
Hence, only define __noinline__ when the code is being processed
by a CUDA compiler component.
*/
#define __noinline__ \
__attribute__((noinline))
I'm pretty new to CUDA development, and this is clearly a possible issue that they and gcc are aware of, so am I just missing a compiler flag or something? Or is this a genuine conflict that GLib would be left to solve?
Environment: glib 2.70.2, cuda 10.2.89, gcc 9.4.0
Edit: I've raised a GLib issue here
It might not be GLib's fault, but given the difference of opinion in the answers so far, I'll leave it to the devs there to decide whether to raise it with NVidia or not.
I've used nemequ's workaround for now and it compiles without complaint.
GCC's documentation states:
You may optionally specify attribute names with __ preceding and following the name. This allows you to use them in header files without being concerned about a possible macro of the same name. For example, you may use the attribute name __noreturn__ instead of noreturn.
Now, that only holds as long as you avoid the double-underscored names the compiler and its libraries use; and they may use such names. So, if you're using NVCC, NVIDIA could declare "we use __noinline__ and you can't use it".
... and indeed, this is basically the case: The macro is protected as follows:
#if defined(__CUDACC__) || defined(__CUDA_ARCH__) || defined(__CUDA_LIBDEVICE__)
#define __noinline__ __attribute__((noinline))
#endif /* __CUDACC__ || __CUDA_ARCH__ || __CUDA_LIBDEVICE__ */
__CUDA_ARCH__ - only defined for device-side code, where NVCC is the compiler (ignoring clang CUDA support here).
__CUDA_LIBDEVICE__ - Don't know where this is used, but you're certainly not building it, so you don't care about that.
__CUDACC__ - defined when NVCC is compiling the code.
So in regular host-side code, including this header will not conflict with GLib's definitions.
Bottom line: NVIDIA is (basically) doing the right thing here and it shouldn't be a real problem.
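As a quick sanity check of that claim, a host-only translation unit can verify that the macro stays undefined (a sketch; host_only.c is an illustrative name, and the file is compiled by gcc with the CUDA include path, not by nvcc):
/* host_only.c */
#include <cuda_runtime.h>

#ifdef __noinline__
#error "__noinline__ is defined, so this is being processed by a CUDA compiler component"
#endif

#include <glib.h>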
GLib is clearly in the right here. They check for __GNUC__ (which is what compilers use to indicate compatibility with GNU C, AKA the GNU extensions to C and C++) prior to using __noinline__ exactly as the GNU documentation indicates it should be used: __attribute__((__noinline__)).
GNU C is clearly doing the right thing here, too. Compilers offering the GNU extensions (including GCC, clang, and many many others) are, well, compilers, so they are allowed to use the double-underscore prefixed identifiers. In fact, that's the whole idea behind them; it's a way for compilers to provide extensions without having to worry about conflicts to user code (which is not allowed to declare double-underscore prefixed identifiers).
At first glance, NVidia seems to be doing the right thing, too, but they're not. Assuming you consider them to be the compiler (which I think is correct), they are allowed to define double-underscore prefixed macros such as __noinline__. However, the problem is that NVidia also defines __GNUC__ (quite intentionally since they want to advertise support for GNU extensions), then proceeds to define __noinline__ in an incompatible way, breaking an API provided by GNU C.
Bottom line: NVidia is in the wrong here.
As for what to do about it, well that's a less interesting question but there are a few options. You could (and should) file an issue with NVidia to fix their compiler. In my experience they're pretty good about responding quickly but unlikely to get around to fixing the problem in a reasonable amount of time.
You could also send a patch to GLib to work around the problem by doing something like
#if defined(__CUDACC__)
__attribute__((noinline))
#elif defined(__GNUC__)
__attribute__((__noinline__))
#else
...
#endif
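Fleshed out as a named macro, that guard might look like the following (a sketch; MY_NO_INLINE is a hypothetical name, not GLib's actual macro):
#if defined(__CUDACC__)
#define MY_NO_INLINE __attribute__((noinline))
#elif defined(__GNUC__)
#define MY_NO_INLINE __attribute__((__noinline__))
#else
#define MY_NO_INLINE
#endif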
If you're in control of the code which includes GLib, another option would be to do something like
#undef __noinline__
#include <glib.h> /* or whichever header pulls in GLib */
#define __noinline__ __attribute__((noinline))
My advice would be to do all three, but especially the first one (file an issue with NVidia) and find a way to work around it in your code until NVidia fixes the problem.
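For a .cu file specifically, the ordering of that last workaround is the important part, since nvcc's implicitly included headers define __noinline__ before any of your own lines are seen (a sketch; the file name is illustrative, and the #undef is harmless if the macro happens not to be defined):
/* mykernel.cu */
#undef __noinline__                             /* hide CUDA's macro while GLib's headers are parsed */
#include <glib.h>
#define __noinline__ __attribute__((noinline))  /* restore CUDA's definition for the rest of the file */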

How to check that microprocessor is Altera Nios?

I am writing some C code for an Altera Nios II microprocessor (uP). This code will be different for the Altera Arm 9 microprocessor, so I need to write two different pieces of code for the different uPs. How can I check at execution time which uP is present? Or, more simply, whether the current uP is a Nios or not.
As the two processors are from different architectures, the same binary cannot run on both, so there is no run-time check you can make. You can do it at compile time instead, as your toolchain will set a specific predefined macro (see https://sourceforge.net/p/predef/wiki/Architectures/). For Arm it should be __arm__ or similar, depending on the toolchain you are using for the HPS.
#ifdef __arm__
<specific code for HPS>
#else
<specific code for NIOS>
#endif /* __arm__ */
You can also look at the toolchain's defines using the C preprocessor command (cpp):
<toolchain>-cpp -dM /dev/null
Note: on an Arm processor, the MIDR register could be used at runtime to find out which core type you are running on. But that code would not compile when building for Nios II, so you still need the preprocessor to guard the Arm-specific register access and remove it when building for Nios II.
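A sketch of that kind of guard (assumptions: a GCC-style toolchain on the Arm side, and privileged execution, since reading MIDR through CP15 is not permitted from unprivileged user code):
#if defined(__arm__)
/* Read the Main ID Register (MIDR); compiled out entirely for Nios II builds */
static inline unsigned int read_midr(void)
{
    unsigned int midr;
    __asm__ volatile("mrc p15, 0, %0, c0, c0, 0" : "=r"(midr));
    return midr;
}
#endif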
Presumably it will be compiled with a different compiler? These compilers will (very likely) have a #define of some sort which you can use to build different code for each one.
You can make the compiler dump all its default preprocessor defines using:
echo | ./nios2-elf-gcc.exe -dM -E -
This will in particular emit:
#define nios2 1
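Combining the two answers, a compile-time switch might look like this (a sketch; nios2 is the macro shown in the dump above, and a strict ISO mode may only provide underscored variants such as __nios2__, hence the double check):
#if defined(__arm__)
/* code for the Arm (HPS) build */
#elif defined(__nios2__) || defined(nios2)
/* code for the Nios II build */
#else
#error Unknown target processor
#endif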

Detecting users OS in terminal application, in C

How do I determine a user's OS in a terminal application, in C?
For example, in the code below, what should I replace windows and linux with?
/* pseudo code */
if(windows)
{system(cls)}
else if(linux)
{system(clear)}
else{...}
I should mention that I am a beginner at C, and need something like this so my code can work on Windows and/or Linux without maintaining separate source for each.
Typically, this is done with macros in the build system (since you have to BUILD the code for each system anyway).
e.g. gcc -DLINUX myfile.c
and then in myfile.c
#ifdef LINUX
... do stuff for linux ...
#elif defined(WINDOWS)
... do something for windows ...
#elif ... and so on.
...
#endif
(Most of the time, you can find some way that doesn't actually require the addition of a -D<something> on the command line, by using predefined macros for the tools you are using to compile for that architecture).
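For example, a sketch along those lines using the common predefined macros _WIN32 and __linux__ instead of a hand-rolled -D flag (the function name clear_screen is just illustrative):
#include <stdlib.h>

void clear_screen(void)
{
#if defined(_WIN32)
    system("cls");      /* Windows console */
#elif defined(__linux__)
    system("clear");    /* Linux/Unix terminal */
#else
    /* no known clear-screen command; do nothing */
#endif
}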
Alternatively, you can do the same thing much quicker and better (but not 100% portably) by printing the ANSI escape sequence for "clear screen":
fputs("\033" "[2J", stdout);
Note the '[' after the escape character: the full sequence is ESC [ 2 J. Writing the escape in its own string literal (adjacent literals are concatenated) makes it obvious where the octal escape \033 ends and the rest of the sequence begins.
I believe you can avoid a runtime check by specializing things at compile time. So, how about this:
#ifdef _WIN32
#define CLEAR "cls"
#elif defined(__linux__)
#define CLEAR "clear"
#endif
and then call system(CLEAR).
Predefs vary from compiler to compiler, so here's a good list to have: http://sourceforge.net/p/predef/wiki/OperatingSystems/
It is probably better to detect the environment at compile time rather than at runtime. With a compiled language like C you aren't going to have the same compiler output running on different platforms as you would with a language such as Java, so you don't need to do this kind of check at runtime.
This is the header I use to work out what platform my code is being compiled on. It will define different macros depending on the OS (as well as other things).
Something like this in use:
#if defined(UTIL_PLATFORM_WINDOWS)
printf("windows\n");
#elif defined(UTIL_PLATFORM_UNIXLIKE)
printf("Unix\n");
#endif
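The linked header itself isn't reproduced here, but a minimal sketch of how such a header might map compiler-defined macros onto those names could look like this (the mapping below is an assumption, not the answerer's actual file):
/* util_platform.h (hypothetical sketch) */
#if defined(_WIN32)
#define UTIL_PLATFORM_WINDOWS 1
#elif defined(__unix__) || defined(__APPLE__)
#define UTIL_PLATFORM_UNIXLIKE 1
#endif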
