I would like to ask about assigning a specific memory address to a variable in the C language.
I need to set up the Understand SciTools software, and I have some issues with it.
Please have a look:
#define dPU1 0xF0031
__SFR_EXTERN__ __near __no_init volatile union
{
TByte ioPU1;
TBitfieldByte ioPU1_Bits;
} #dPU1;
dPU1 is a register address (Renesas RL78).
Understand SciTools can't process it. I received these messages:
[E] pasting formed '#dSMR02', an invalid preprocessing token;
[E] expected ';' after union
[E] expected identifier or '('
I can't find any information about this use of "#" in the C language.
Any idea?
Thanks!
Many compilers for embedded control accept certain extensions to place objects at absolute addresses.
Apparently your compiler allows you to specify such an address via this notation.
In contrast, code analyzers are generic tools. They rarely know such extensions, and so you receive these error messages.
This is a good reason to wrap such an extension in a macro. This macro will be differently defined depending on the tool that parses the source. If your compiler reads the source, it provides the absolute address. If the analyzer reads the source, it expands to nothing.
This suggestion is untested; beware in particular that in a function-like macro body, standard C treats # as the stringizing operator, so AT(dPU1) would normally expand to the string "dPU1" - this only works if your compiler handles the notation specially:
#if defined(/* some macro that is automatically set by your compiler */)
#define AT(x) #x
#else
#define AT(x)
#endif
#define dPU1 0xF0031
__SFR_EXTERN__ __near __no_init volatile union
{
TByte ioPU1;
TBitfieldByte ioPU1_Bits;
} AT(dPU1);
The # operator is not standard C; you should find it documented as an extension in the manual for whatever compiler you are using.
The problem here is that static analysis tools need not be aware of such extensions.
Your compiler may offer an alternative method of locating objects that will not trouble the static analysis parser. For example the IAR compiler supports this extension, but has an alternative #pragma location directive:
#pragma location = 0xF0031
__SFR_EXTERN__ __near __no_init volatile union
{
TByte ioPU1;
TBitfieldByte ioPU1_Bits;
} dPU1;
Since the required behaviour when encountering an unrecognised pragma is to ignore it, your analyser should accept this code.
#define P2VAR(ptrtype, memclass, ptrclass) ptrclass ptrtype * memclass
can anybody explain this declaration?
The C preprocessor is just a simple search-and-replace machine when it comes to macros. (Actually, it is not that simple.)
So if you write for example (shamelessly copied from the URL Raymond found):
P2VAR( uint8, SPI_VAR_FAST, SPI_APPL_DATA ) Spi_FastPointerToApplData;
It will be replaced by (this process is commonly called "it will expand to"):
SPI_APPL_DATA uint8 * SPI_VAR_FAST Spi_FastPointerToApplData;
Now you will need to know how SPI_APPL_DATA and SPI_VAR_FAST are defined. These seem to be macros, too, to enable the usage of different compilers and/or target systems.
Since this first example from the linked page is obviously just that, an example for some microcontroller, let's assume that you would like to use another compiler and target system. This could be a standard C compiler with your PC as the target because, let's say, you want to simulate your program. Then you would provide this macro definition:
#define P2VAR(ptrtype, memclass, ptrclass) ptrtype *
It ignores the parameters memclass and ptrclass and expands to:
uint8 * Spi_FastPointerToApplData;
So this macro is a way to leave the source code alone, even if you change compilers or target systems. That's why the page is titled "Compiler Abstraction".
I know that macros in C such as:
#define VARNULL (u8)0
don't store this VARNULL in RAM, though of course they can increase the code size in the FLASH.
But what if I have a multi-line macro such as:
#define CALL_FUNCS(x) \
do { \
func1(x); \
func2(x); \
func3(x); \
} while (0)
Given that func1, func2, and func3 are functions from different .c files, does this mean that these functions will be stored in RAM? And of course in the FLASH (the code)?
Kindly correct me if I'm wrong.
You keep saying that "of course" the macros will be "stored" in flash memory on your target device, but that is not true.
The macros exist in the source code only; they are replaced with their defined values during compilation. The program in flash memory will not "contain" them in any meaningful way.
Macros, and any other directive prefixed with a # are processed before C compilation by the pre-processor; they do not generate any code, but rather generate source code that is then processed by the compiler as if you had typed in the code directly. So in your example the code:
int main()
{
CALL_FUNCS(2) ;
}
Results in the following generated source code:
int main()
{
do {
func1(2);
func2(2);
func3(2);
} while (0) ;
}
Simple as that. If you never invoke the macro, it will generate exactly no code. If you invoke it multiple times, it will generate code multiple times. There is nothing clever going on: the macro is merely a textual replacement performed before compilation; what the compiler does with that depends entirely on what the macro expands to, not on the fact that it is a macro - the compiler sees only the generated code, not the macro definition.
With respect to const vs #define, a literal constant macro is also just a textual replacement and will be placed in the code as a literal constant. A const, on the other hand, is a variable. The compiler may simply insert a literal constant where that generates less code than fetching the constant from memory; in C++ that is guaranteed for simple types, and it would be unusual for a C compiler not to behave in the same way. However, because it is a variable, you can take its address - if your code does take the address of a const, then the const will necessarily have storage. Whether that storage is in RAM or ROM depends on your compiler and linker configuration - you should consult the toolchain documentation to see how it handles const storage.
One benefit of using a const is that const variables have strong typing and scope unlike macros.
I recently got a snippet of code in Linux kernel:
static int
fb_mmap(struct file *file, struct vm_area_struct * vma)
__acquires(&info->lock)
__releases(&info->lock)
{
...
}
What confused me are the two __functions following static int fb_mmap(), right before the "{".
a). What is the purpose of the two __functions?
b).Why in that position?
c).Why do they have the prefix "__"?
d).Are there other examples similar to this?
Not everything ending with a pair of parentheses is a function (call). In this case they are parameterized macro expansions. The macros are defined as
#define __acquires(x) __attribute__((context(x,0,1)))
#define __releases(x) __attribute__((context(x,1,0)))
in file include/linux/compiler.h in the kernel build tree.
The purpose of those macros expanding into attribute definitions is to annotate the function symbols with information about which locking structures the function will acquire (i.e. lock) and release (i.e. unlock). The purpose of those in particular is debugging locking mechanisms (the Linux kernel contains some code that allows it to detect potential deadlock situations and report on this).
https://en.wikipedia.org/wiki/Sparse
__attribute__ is a keyword specific to the GCC compiler that allows you to assign, well, attributes to a given symbol:
http://gcc.gnu.org/onlinedocs/gcc/Function-Attributes.html#Function-Attributes
Since macros are expanded at the text level, before the compiler even looks at the code, the result for your particular snippet, as the actual compiler sees it, would be
static int
fb_mmap(struct file *file, struct vm_area_struct * vma)
__attribute__((context(&info->lock,0,1)))
__attribute__((context(&info->lock,1,0)))
{
…
}
Those macros start with a double underscore __ to indicate that they are part of the compiler environment. All identifiers starting with one or two underscores are reserved for the implementation of the compiler environment. Because Linux is an operating system kernel that does not use the standard library (it simply is not available there), it is natural for it to define its own compiler environment definitions, private to it. Hence the two underscores, to indicate that this is compiler environment/implementation specific stuff.
They're probably macros defined with #define. You should look for the definition of such macros and see what they expand to. They might expand to some pragma giving hints to the compiler; they might expand to nothing, serving as hints to developers or to some analysis tool. The meaning might vary from case to case.
The __attribute__ these macros evaluate to are compiler-specific features. man gcc explains some of the uses.
The prefix __ typically is used to avoid name clashes; double underscore as prefix and postfix mark an identifier as being used by the compiler itself.
More on gcc attributes can be found here.
More on the kernel use of these can be found here.
Those are macros defined as
# define __acquires(x) __attribute__((context(x,0,1)))
# define __releases(x) __attribute__((context(x,1,0)))
in Linux/include/linux/compiler.h
I have existing C code (3rd-party source, I can't change it) which is not accepted by PC-Lint (version 9.0). The code runs in an embedded environment; the Green Hills compiler is used.
Does anyone have know-how on configuring PC-Lint to accept this code?
I have attached only the error message for the first member in the struct.
Here are the defines from header file:
typedef struct
{
uint32_t PINSEL0; // see ERROR message from PCLint, line 153 in LPC23.h
uint32_t PINSEL1;
uint32_t PINSEL2;
} LPC_PINCON_TypeDef;
#define LPC_PINCON_BASE (0xE002C000)
#define LPC_PINCON ((LPC_PINCON_TypeDef *) LPC_PINCON_BASE)
#define PINSEL_BASE_ADDR 0xE002C000
#define PINSEL0 (*(volatile unsigned long *)(PINSEL_BASE_ADDR + 0x00))
/**************************/
/* function in c-file */
void Port_Init()
{
LPC_PINCON->PINSEL0 &= ~(3 << 4); //p0.2
LPC_PINCON->PINSEL0 |= (1 << 4); //
LPC_PINCON->PINSEL0 &= ~(3 << 6); //p0.3
LPC_PINCON->PINSEL0 |= (1 << 6); //
// etc................
}
/*******************************************/
// ERRORS from PC-Lint
// **********ERROR MESSAGES**************
#... (volatile unsigned long *)(PINSEL_BASE_ADDR + 0x00))
uint32_t PINSEL0;
LPC23.h 153 Error 10: Expecting identifier
#... BASE_ADDR + 0x00))
uint32_t PINSEL0;
LPC23.h 153 Error 102: Illegal parameter specification
#... BASE_ADDR + 0x00))
uint32_t PINSEL0;
LPC23.h 153 Error 10: Expecting ';'
An excerpt from the PC-lint FAQ:
How do I tell lint not to complain about my compiler headers?
Lint uses the label of "library" header to designate those headers over
which a programmer has no control (such as compiler headers). By default
all #includes from a foreign directory, or enclosed within < > , are
considered "library." This can be modified through the use of the +libclass
option, and further fine-tuned with the +/-libdir and +/-libh options.
You can then use the -wlib , -elib and -elibsym options to control just those
messages being emitted from library headers. Compiler options files distributed
with PC-lint usually contain a -wlib(1) option which limits lint output from
library headers to errors only (suppressing warning and informational messages).
I guess that should fit your needs. If not, a minimal example that reproduces your warnings would be nice; the stuff above puzzles me, since the #define of PINSEL0 comes after its use as an identifier in the struct.
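Put together as a Lint options file, the FAQ's advice might look like this (untested sketch; the file name and include path are assumptions, adjust to your installation):

```
// options.lnt (sketch)
+libclass(angle, foreign)  // headers in < > or foreign directories are "library"
+libdir(/opt/ghs/include)  // additionally mark the compiler's include directory
-wlib(1)                   // emit only errors (no warnings) from library headers
```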
If the #define PINSEL0 ... macro definition is active when the Port_Init() function is compiled, I can't understand how you aren't getting compiler errors. It seems that there must be something (an #ifdef or whatever) that's disabling the PINSEL0 macro during compilation - it's not necessary (and is harmful) if you're using the LPC_PINCON_TypeDef struct to access the registers.
You'll need to make sure that same controlling option/macro/whatever is set when you run the lint step.
Can you show the actual LPC23.h file (or point to it on the web somewhere)? A similar file I've found (http://www.keil.com/dd/docs/arm/philips/lpc23xx.h) uses only the 'direct macro' technique, and doesn't provide the LPC_PINCON_TypeDef struct member access technique.
I assume that LPC_PINCON_TypeDef and the macro PINSEL0 are from or for different situations. I hope you are allowed to change one or the other, since the definitions are in immediate conflict.
If I assume that the code itself compiles correctly, then both definitions are never used simultaneously within one translation unit, and PC-Lint is probably using incorrect settings.
I think you may not have presented the implicit macro definitions for the compiler to Lint. At least the __ghs__ macro has to be defined, use the option -d__ghs__. And check the manual for further options.
You may want to check the exact files and their inclusion order using the option -vf (or for completeness you may use -vaif to inspect what search locations Lint uses for locating include files); but careful, the output is quite large and scrolls off the window and even its buffer easily. It's probably best to pipe the output into a file and inspect it afterward.
And though I hesitate to point to my own website: if you want, take a look at my PDF "How to wield PC Lint"; you'll find simple steps from zero to properly linting your code using PC-Lint, with all the options to be set.
If all doesn't help, you'd have to elaborate on the setup you're using and the options for both the compiler and for PC Lint.
I am looking for a strange macro definition, on purpose: I need a macro defined in such a way, that in the event the macro is effectively used in compiled code, the compiler will unfailingly produce an error.
The background: Since C11 introduced several new keywords, and the new C++11 standard also added a few, I would like to introduce a header file in my projects (mostly using C89/C95 compilers with a few additions) to force developers to refrain from using these new keywords as identifier names unless, of course, they are recognized as keywords in the intended fashion.
In the ancient past, I did this for new like this:
#define new *** /* C++ keyword, do not use */
And yes, it worked. Until it didn't, when a programmer forgot the underscore in a parameter name:
void myfunction(uint16_t new parameter);
I used variants since, but I've never been challenged again.
Now I intend to create a file with all keywords not supported by various compilers, and I'm looking for a dependable solution, at best with a not-too-confusing error message. "Syntax error" would be OK, but "parameter missing" would be confusing already. I'm thinking along the lines of
#define atomic +*=*+ /* C11 derived keyword; do not use */
and aside from my usual hesitation, I'm quite sure that any use (but not the definition) of the macro will produce an error.
EDIT: To make it even more difficult, MISRA will only allow the use of the basic source and execution character set, so # or $ are not allowed.
But I'd like to ask the community: Do you have a better macro value? As effective, but shorter? Or even longer but more dependable in some strange situation? Or a completely different method to generate an error (only using the compiler, please, not external tools!) when a "discouraged" identifier is used for any purpose?
Disclaimer:
And, yes, I know I can use grep or a parser in a nightly build and report the warnings it finds. But dropping an immediate error on the developer's desk is quicker, and certain to be fixed before check-in.
If the sport is for the shortest token sequence that always produces an error, use any combination of two one-character operators that can't legally occur together, but:
don't use ({ or }) because gcc has a special meaning for that
don't use any sort of unbalanced parentheses because they can lead you far away until the error is recognized
don't use < or > because they could match template parameters for C++
don't use prefix operators as second character
don't use postfix operators as first character
This leaves some possibilities:
.., .| and other combinations with . since . expects a following identifier
&|, &/, &^, &,, &;
!|, !/, !^, !,, !;
But actually, to be more user friendly, I'd first place a _Pragma in it so the compiler would also spit out a warning.
#define atomic _Pragma("message \"some instructive text that you should read\"") ..
I think you can just use an illegal symbol:
#define bad_name #
Another one that would work would be this:
static const char *const illegal_keyword = "";
#define bad_name (illegal_keyword = "bad_name")
It would give an error about modifying a constant (the *const matters: without it, assigning to the pointer itself would be perfectly legal). The error message will usually be quite good:
error: assignment of read-only variable 'illegal_keyword'
And the final one that is perhaps the shortest and will always work is this:
#define bad_name #
Because the preprocessor will never replace twice, and # is illegal outside of the preprocessor, this will always give an error.
#define atomic do not use atomic
The expansion is not recursive so it stops. The only way to stop it from being a compilation error is:
#define do
#define not
#define use
but that's verboten because do and not are keywords.
The error message might even include 'atomic'. You can increase the probability of that by rephrasing the message:
#define atomic atomic cannot be used
(Now you are not playing with keywords in the message, though.)
I think [[]] isn't a valid sequence of tokens anywhere in C, so you could use that:
#define keyword [[]]
The error will be a syntax error, complaining about [ or ]. (Beware in C++11 and later, though: [[...]] is attribute syntax, and an empty attribute list can be accepted in some positions.)
My attempt:
#define new new[-1]
#define atomic atomic[-1]
Since expansion is not recursive, a use such as int new; becomes int new[-1];, and the negative array size produces an error that still names the offending identifier.