Why does a MISRA-C checker give errors when checking the STM32 HAL?

I've started a project with the HAL library using STM32CubeMX, but there seems to be a HAL and MISRA-C compliance problem.
I'm using Keil for my software development and I've added PC-lint (a MISRA-C checker) to check C standards. When I run PC-lint to check the MISRA-C rules, I receive lots of MISRA-C 2012 rule violations that arise from HAL code.
Note that I have not added any source files other than those generated by STM32Cube.
For example, the generated files contain these two lines:
HAL_GPIO_Init(GPIOA, &GPIO_InitStruct);
__attribute__((section(".revsh_text"))) __STATIC_INLINE __ASM int16_t __REVSH(int16_t value)
but I receive these errors in response:
Note 934: Taking address of near auto variable 'GPIO_InitStruct' (arg. no. 2) [MISRA 2012 Rule 1.3, required]
__attribute__((section(".revsh_text"))) __STATIC_INLINE __ASM int16_t __REVSH(int16_t value)
I also receive lots of other errors. According to this, HAL is in compliance with MISRA C 2012. What is wrong?

Most board support packages result in violations of MISRA C rules.
This is because, generally, these BSPs are doing things that (most of the time) are not good ideas. Sometimes they are necessary (e.g. memory-mapped registers require a conversion of an integer to a pointer, although this can be done in the linker).
They often also include lots of compiler-specific magic (so Rule 1.1, Rule 1.2 and Directive 1.1 are worth proper review!)
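For instance, here is a minimal sketch of the sort of construct almost every BSP contains (the address and register name are hypothetical); the integer-to-pointer conversion violates Rule 11.4, yet is unavoidable unless the mapping is pushed into the linker script:

#include <stdint.h>

/* Hypothetical peripheral base address - vendor headers are full of these. */
#define UART0_BASE 0x40001000u

/* Integer-to-pointer conversion: necessary here, but a Rule 11.4 violation. */
#define UART0_DR (*(volatile uint32_t *)UART0_BASE)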
So you need to separate your violations into three groups.
Correct violations that need fixing
Necessary violations that need a deviation
Advisory violations that you can accept (and document your reasoning)
Remember: Deviations are an acceptable aspect of MISRA Compliance! Any clip-board monitor that says you are not allowed Deviations needs some re-education.
For group 2, MISRA has published some Deviation Permits to help you...
Disclaimer: See profile
PS: and to save @Lundin making his comment, and notwithstanding my professional affiliation, PC-Lint is not the best choice of analyser.
-- Edited --
According to this, HAL is in compliance with MISRA C 2012.
According to the (now added) link:
HAL and LL APIs are production-ready, developed in compliance with MISRA-C®:2012 guidelines and checked with the CodeSonar static analysis tool. Reports are available on demand.
You need to ask for those reports and see what they say. I'm tempted to ask for them myself...
Since they are claiming Compliance with MISRA C 2012, there will be a Guideline Compliance Summary (as well as a few other things)... if not, they are not compliant.

There are two different issues here: the static analyser tool and the ST library:
Rule 1.3 just says you should have no undefined or critically unspecified behavior in your program. A very broad and vague rule. Returning a pointer to a local variable from a function would have been a violation of that rule. Passing a pointer to a local variable to a function is not. HAL_GPIO_Init doesn't do anything with that pointer other than reading it.
This is what we call a "false positive", also known as a tool bug. Ask your tool vendor why their tool gives incorrect diagnostic messages.
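To illustrate the distinction with a minimal sketch (hypothetical functions, not HAL code): the first function below is a genuine Rule 1.3 problem, while the second shows why the flagged pattern is harmless.

#include <stdint.h>

static int32_t *bad(void)
{
    int32_t local = 42;
    return &local;            /* dangling pointer: a real Rule 1.3 issue */
}

static int32_t read_only(const int32_t *p)
{
    return *p;                /* the callee merely reads through the pointer */
}

static int32_t good(void)
{
    int32_t local = 42;
    return read_only(&local); /* fine: local outlives the call */
}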
However, since HAL_GPIO_Init doesn't modify the passed pointer, it should have been declared as:
void HAL_GPIO_Init(GPIO_TypeDef *GPIOx, const GPIO_InitTypeDef *GPIO_Init)
Since they did not, it is a MISRA-C violation of Rule 8.13. The function is not MISRA-compliant unless there are documented deviations. The Doxygen comments for the implementation of this function (which I admittedly found in some random GitHub repository) don't mention MISRA at all, so it is pretty safe to assume that the library is not MISRA-compliant.
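For comparison, a minimal sketch (not HAL code) of what Rule 8.13 asks for - a pointer parameter that is only read through gets const-qualified:

#include <stddef.h>
#include <stdint.h>

/* Compliant with Rule 8.13: buf is never written through, so it is const. */
static uint32_t sum_buffer(const uint32_t *buf, size_t n)
{
    uint32_t total = 0u;
    for (size_t i = 0u; i < n; i++)
    {
        total += buf[i];
    }
    return total;
}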

It is not a HAL library problem, only a problem with your code (even if it was generated by the Cube).
What is wrong here?
You pass a reference to a local variable. Everything is fine if the code is single-threaded or the called function does not use the data after it returns.
But (even if you do not multithread) many functions work in the background, for example the *_IT and *_DMA functions. If you pass a reference to a buffer having automatic storage duration, you will be in big trouble.
So Lint is correct: you perform a potentially dangerous operation.
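A sketch of that hazard, assuming the STM32 HAL UART API (the function below and the exact header name are hypothetical and device-dependent): the interrupt-driven call returns immediately, while the peripheral keeps reading the buffer afterwards.

#include "stm32f4xx_hal.h" /* device-specific HAL header; adjust to your part */

void send_greeting(UART_HandleTypeDef *huart)
{
    uint8_t msg[] = "hello";                    /* automatic storage duration */
    (void)HAL_UART_Transmit_IT(huart, msg, 5u); /* returns before the transfer ends */
}   /* msg ceases to exist here, but the interrupt may still be reading it */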


Rule 2.3: A project should not contain unused type declarations

"Rule 2.3: A project shall not contains unused typedef declarations: If a type is declared but not used, then it is unclear to a reviewer if the type is redundant or it has been left unused by mistake."
After generating code in matlab simulink via AUTOSAR, I use Misra rule check and Polyspace code prover to check the generated code. Is there a way for Misra to remove the RTE .h file from its list and check the rest of the libraries and code? (I do not want to MISRA check the RTE.h file. but how ??)
This is typically a tool-specific activity...
Depending on the tool, you can specify that file to be not checked, or include appropriate annotations in the file to switch off specific guideline checks.
Also, as Rule 2.3 is Advisory, in accordance with MISRA Compliance, you can Disapply it (with justification) either for a single file or more generally. Again, that would be implemented within your tool.
I can give guidance on one particular tool (see profile), but you're better off talking with your vendor if the manual is not helpful.
As an aside:
MISRA is due to release a Guideline Reclassification Plan for automatically generated code - this proposes that users Disapply Rule 2.3 for auto-code.
If you want to claim MISRA Compliance, the analysis needs to include all files (including your RTE.H) although some relaxation is permitted for adopted code (including auto-generated code).
See profile for affiliation.

Dependency of a MISRA-C coding rules checker on the compiler

I have started using a tool for checking compliance with MISRA-C 2012. The tool is Helix QAC. During configuration it requests that you select a compiler. My understanding is that MISRA-C (and coding rules in general) is not linked to a compiler toolchain, since one of its objectives is portability. Moreover, one rule of MISRA-C is to not use language extensions (obviously this rule may be disabled, or there may be exceptions to it). Helix documentation and support are rather vague about this (I'm still trying to get more info from them) and just mention the need to know the integer type lengths and the path of the standard includes. But the rule analysis should be independent of the int size, and the interface of the standard includes is standard, so the actual files should not be needed.
What are the dependencies between a MISRA-C rules checker and the compiler ?
Some of the Guidelines depend on knowing what the implementation is doing - this is particularly the case with the implementation-defined aspects, including (but not limited to) integer sizes, maximum/minimum values, the method of implementing booleans, etc.
MISRA C even has a section 4.2 Understanding the compiler, which, coupled with 4.3 Understanding the static analysis tool, addresses these issues.
There is one thing every MISRA-C checker needs to know and that's what type you use as bool. This is necessary since MISRA-C:2012 still supports C90, which didn't have standard support for a boolean type. (C99 applications should use _Bool/bool, period.) It also needs to know which constants false and true correspond to, in case stdbool.h with false and true is unavailable. This could be the reason why it asks which compiler is used. Check Appendix D - Essential types for details.
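As a minimal sketch (the names are hypothetical, taken from no particular compiler), this is the sort of C90-era boolean a checker must be configured for:

/* C90 has no _Bool, so projects roll their own. The checker needs to be
   told that bool_t is the boolean type and FALSE/TRUE are its constants. */
typedef unsigned char bool_t;
#define FALSE 0u
#define TRUE  1u

static bool_t device_ready = FALSE;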
The type sizes of int etc. aren't relevant for the MISRA checker to know. Though it might be nice with some awareness of non-standard extensions. We aren't allowed to use non-standard extensions or implementation-defined behavior without documenting them. The usual suspects are inline assembler, interrupts, memory allocation at specific places and so on. But once we have documented them in our deviation to Dir 1.1/Rule 1.1, we might want to disable warnings about those specific, allowed deviations. If the MISRA checker is completely unaware of a certain feature, then how can you disable the warning caused by it?

Should I use ANSI C (C89)?

It's 2012. I'm writing some code in C. Should I still be using C89? Are there still compilers that do not support C99?
I don't mind using /* */ instead of //.
I'm not sure about C89 forbidding mixing declarations and code. I'm kind of leaning towards the idea that it's actually more readable to have all the declarations in one place, and if it isn't, the function is too long.
VLAs look useful but I haven't needed them yet.
Should I stick with C89 if I don't have a compelling reason not to? Are there other things I haven't considered?
Unless you know that you cannot use a C99-compatible compiler (the Visual Studio C compiler is the most prominent candidate), there is no good reason for not using the nice things C99 gives you.
However, even if you need to support that compiler you can use some C99 features - just not all of them.
One feature of C99 that is incredibly handy is being able to do for (int i = ...) instead of having to declare your loop variable at the top of the function - especially since C actually has block scope. That's the kind of declaration where having it at the top really doesn't improve readability.
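A small self-contained illustration of the difference:

#include <stdio.h>

int main(void)
{
    /* C99: the counter is scoped to the loop itself */
    for (int i = 0; i < 3; i++)
    {
        printf("%d\n", i);
    }
    /* i no longer exists here; C89 would have required "int i;"
       at the top of the enclosing block instead */
    return 0;
}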
There is a reason (or many) why C89 was superseded by C99. If you know for sure that no C99 compiler is available for your particular piece of work (unlikely, unless you are stuck with Visual Studio, which never supported C officially anyway), then you need to stay with C89, but otherwise you should certainly put yourself in a position where you can benefit from the last 20+ years of improvement. There is nothing inherently slower about C99.
Perhaps you should even consider looking at the newest C11 standard. There have been some important fixes for dealing with Unicode that any C programmer could benefit from (other changes in the standard are absolutely minimal)...
Good code is a mixture of performance, scalability, readability, and maintainability.
In my opinion, C99 makes code easier to read and maintain. Very, very few compilers don't support C99, so I say go with it. Use the tools you have available, unless you are certain you will need to compile your project with a compiler that requires the earlier standard.
Check out these links to learn more about the advantages to C99:
http://www.kuro5hin.org/?op=displaystory;sid=2001/2/23/194544/139
http://en.wikipedia.org/wiki/C99#Design
Note that C99 also supports library functions such as snprintf, which are very useful, and has better floating-point support. Also, I find macros to be extremely helpful, especially when working with math-intensive applications (like cryptographic algorithms).
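For example, snprintf (standardized in C99) gives you bounded formatting that C89's sprintf lacks:

#include <stdio.h>

int main(void)
{
    char buf[16];
    /* Writes at most sizeof buf - 1 characters plus the terminator, and
       returns the length the full string would have required. */
    int needed = snprintf(buf, sizeof buf, "value = %d", 42);
    printf("%s (needed %d chars)\n", buf, needed);
    return 0;
}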
I disagree with Paul R's "bottom line" comment. There are multiple cases where C89 is advantageous for portability.
Targeting embedded systems, which may or may not have compilers supporting C99:
https://groups.google.com/forum/#!topic/comp.arch.embedded/WNvhw3T_9pI%5B1-25%5D
Targeting the TinyCC compiler, as might be required in a restricted environment where installing a gigantic toolchain is either impractical or not allowed. (TCC is no longer being developed, and Bellard's last statement as to ISOC99 support was that it was "heading towards" full compliance.)
Supporting dynamic compilation via libtcc (see above).
Targeting MSVC, as others have noted.
For source-compatibility with projects that may be required by their company to use the C89 standard. This is especially relevant if you're writing an open source library, and want to maximize its application in some industry.
As cegfault noted, some of the C99 features as listed on Wikipedia can be very useful, but none I would consider indispensable if your priority is portability, or any of the above reasons apply.
It appears Microsoft hasn't budged on C99 compliance. SimonRev from Beijer Electronics commented on a related MSDN thread in November 2016:
In broad strokes, the only parts of the C99 compiler that were implemented are those parts that they needed to keep the C++ compiler up to date.
Microsoft has done basically nothing to the C compiler since VC6, and they haven't made much secret that C++ is their vision of the future of native code, not C.
In conclusion, if you want portability for embedded or restricted systems, dynamic compilation, MSVC, or compatibility with proprietary source code, I would say C89 is advantageous.

Make tolower static

I need to make the standard library function tolower static instead of "public" scope.
I am compiling with MISRA C:2004 checks, using the IAR Embedded Workbench compiler. The compiler is declaring tolower as inline:
inline
int tolower(int _C)
{
    return isupper(_C) ? (_C + ('a' - 'A')) : _C;
}
I am getting the following error from the compiler:
Error[Li013]: the symbol "tolower" in RS232_Server.o is public but is only needed by code in the same module - all declarations at file scope should be static where possible (MISRA C 2004 Rule 8.10)
Here are my suggested solutions:
Use tolower in another module, in a dummy circumstance, so that it is needed by more than one module.
Implement the functionality without using tolower. This is an embedded system.
Add a "STATIC" macro, defined as empty by default, but which can be defined to static before the ctype.h header file is included.
I'm looking for a solution to the MISRA linker error. I would prefer to make the tolower function static only for the RS232_Server translation unit (If I make tolower static in the standard header file, it may affect other future projects.)
Edit 1:
Compiler is IAR Embedded Workbench 6.30 for ARM processor.
I'm using an ARM7TDMI processor in 32-bit mode (not Thumb mode).
The tolower function is used with a debug port.
Edit 2:
I'm also getting the error for _LocaleC_isupper and _LocaleC_tolower
Solution:
I notified the vendor of the issue, as recommended by Michael Burr.
I decided not to rewrite the library routine because of localization issues.
I implemented a function pointer in the main.c file as suggested by gbulmer; however, this will be heavily commented because it should be removed after IAR resolves the issue.
I'd suggest that you disable this particular MISRA check (you should be able to do that just for the RS232_Server translation unit) rather than use the one of the workarounds you suggest. In my opinion the utility of rule 8.10 is pretty minimal, and jumping through the kinds of hoops in the suggested workarounds seems more likely to introduce a higher risk than just disabling the rule. Keep in mind that the point of MISRA is to make C code less likely to have bugs.
Note that MISRA recognizes that "in some instances it may be necessary to deviate from the rules" and there is a documented "Deviation procedure" (section 4.3.2 in MISRA-C 2004).
If you won't or can't disable the rule for whatever reason, in my opinion you should probably just reimplement tolower()'s functionality in your own function, especially if you don't have to deal with locale support. It may also be worthwhile to open a support incident with IAR. You can prod them with rule 3.6, which says that "All libraries used in production code shall be written to comply with [MISRA-C]".
Who sells the MISRA linker? It seems to have an insane bug.
Can you work around it by taking its address - int (*foo)(int) = tolower; - at file scope?
Edit: My rationale is:
a. that is the stupidest thing I've seen this decade, so it may be a bug, but
b. pushing it towards an edge case (a symbol having its name exported via a global) might shut it up.
For that to be correct behaviour, i.e. not a bug, it would have to be a MISRA error to include any library function once, like initialise_system_timer, initialise_watchdog_timer, ... , which just hurts my head.
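A minimal sketch of that proposed workaround (the variable name is arbitrary, and whether it actually silences the IAR linker is untested):

#include <ctype.h>

/* File-scope reference to tolower, purely to convince the linker that
   the symbol is needed by more than one call site. */
static int (*const tolower_ref)(int) = tolower;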
Edit: Another thought. Again based on an assumption that this is an edge-case error.
Theory: Maybe the compiler is doing both the inline, and generating an implementation of the function. Then the function is (of course) not being called. So the linker rules are seeing that un-called function.
GNU has options to prevent in-lining.
Can you do the same for that use of tolower? Does that change the error? You could do a test by
#define inline /* nothing */
before including ctype.h, then undef the macro.
The other test is to define:
#define inline static inline
before including ctype.h, which is the version of inline I expect.
EDIT2:
I think there is a bug which should be reported. I would imagine IAR have a workaround. I'd take their advice.
After a night's sleep, I strongly suspect the problem is inline int tolower() rather than static inline int tolower or int tolower, because that makes most sense. Having a public function which is not called does seem to be the symptom of a bug in a program.
Even with documentation, all the coding approaches have downsides.
I strongly support the OP, I would not change a standard header file. I think there are several reasons. For example, a future upgrade of the tool chain (which comes with a new set of headers) breaks an old app if it ever gets maintained. Or simply building the application on a different machine gives an error, merging two apparently correct applications may give an error, ... . Nothing good is likely to come of that. -100
Use #define ... to make the error go away. I do not like using macros to change standard libraries. This seems like the seeds of a long-term bad idea. If in future another part of the program uses another function with a similar problem, the application of the 'fix' gets worse. Some inexperienced developer might get the job of maintaining the code, and 'learns' wrapping strange pieces of #define trickery around #include is 'normal' practice. It becomes company 'practice' to wrap #include <ctype.h> in weird macro workarounds, which remains years after it was fixed. -20
Use a command line compiler option to switch off inlining. If this works, and the semantics are correct, i.e. including the header and using the function in two source files does not lead to a multiple defined function, then okay. I would expect it leads to an error, but it is worth confirming as part of the bug report. It lays a frustrating trap for someone else, who comes along in the future. If a different inline standard library function is used, and for some reason a person has to look at the generated code it won't be included either. They might go a bit crazy wondering why inline is not honoured. I conjecture the reason they are looking at generated code is because performance is critical. In my experience, people would spend a lot of time looking at code, baffled before looking at the build script for a program which works. If suppressing inline is used as a fix, I do feel it is better to do it everywhere than for one file. At least the semantics are consistent and the high-level comment or documentation might get noticed. Anyone using the build scripts as a 'quick check' will get consistent behaviour which might cause them to look at the documentation. -1 (no-inline everywhere) -3 (no-inline on one file)
Use tolower in a bogus way in a second source file. This has the small benefit that the build system is not 'infected' with extra compiler options. Also it is a small file, which gives the opportunity to explain the error being worked around. I do not like this much, but I like it more than fiddling with the standard headers. My current concern is that it might not work; it might include two copies which the linker can't resolve. I do like that it is better than the weird 'take its address and see if the linker shuts up' approach (which I do think is an interesting way to test the edge cases). -2
Code your own tolower. I don't understand the wider context of the application. My reaction is not to code replacements for library functions because I am concerned about testing (and the unit tests which introduce even more code) and long term maintenance. I am even more nervous with character I/O because the application might need to become capable of handling a wider character set, for example UTF-8 encoding, and a user-defined tolower will tend to get out of synch. It does sound like a very specific application (debugging to a port), so I could cope. I don't like writing extra code to work around things which look like bugs, especially if it is commercial software. -5
Can you convert the error to a warning without losing all of the other checking? I still feel it is a bug in the toolchain, so I'd prefer it to be a warning, rather than silence so that there is a hook for the incident and some documentation, and there is less chance of another error creeping in. Otherwise go with switching off the error. +1 (Warning) 0 (Error)
Switching the error off seems to lose the 'corporate awareness' that (IMHO) IAR owes you an explanation, and longer term fix, but it seems much better than messing with the build system, writing macro-nastiness to futz with standard libraries, or writing your own code which increases your costs.
It may just be me, but I dislike writing code to work around a deficiency in a commercial product. It feels like this is where the vendor should be pleased to have the opportunity to justify its license cost. I remember Microsoft charged us for incidents, but if the problem was proven to be theirs, the incident and fix were free. The right thing to do seems to be give them a chance to earn the money. Products will have bugs, so silently working around it without giving them a chance to fix it seems less helpful too.
First of all, MISRA-C:2004 allows neither inline nor C99. The upcoming MISRA 2012 will allow them. If you try to run C99 or non-standard code through a MISRA-C:2004 static analyser, all bets are off. The MISRA checker should give you an error for the inline keyword.
I believe a MISRA-C compliant version of the code would look like:
static uint8_t tolower(uint8_t ch)
{
    return (uint8_t)(isupper(ch) ? (uint8_t)((uint8_t)(ch - 'A') + 'a') : ch);
}
Some comments on this: MISRA encourages the char type for character literals, but at the same time warns against using the plain char type, as it has implementation-defined signedness. Therefore I use uint8_t instead. I believe it is plain dumb to assume that there exist ASCII tables with negative indices.
(_C + ('a' - 'A')) is most certainly not MISRA-compliant: as MISRA regards it, it contains two implicit type promotions. MISRA regards character literals as char, rather than int like the C standard does.
Also, you have to typecast to the underlying type after each expression. And because the ?: operator contains implicit type promotions, you must typecast its result to the underlying type as well.
Since this turned out to be quite an unreadable mess, the best idea is to forget all about ?: entirely and rewrite the function. At the same time we can get rid of the unnecessary reliance on signed calculations.
static uint8_t tolower(uint8_t ch)
{
    if (isupper(ch))
    {
        ch = (uint8_t)(ch - 'A');
        ch = (uint8_t)(ch + 'a');
    }
    return ch;
}

Is C99 backward compatible with C89?

I'm used to old-style C and have just recently started to explore C99 features. I've just one question: will my program compile successfully if I use C99 in my program, the C99 flag with gcc, and link it with pre-C99 libraries?
So, should I stick to old C89 or evolve?
I believe that they are compatible in that respect. That is, as long as the stuff that you are compiling against doesn't step on any of the new goodies. For instance, if the old code contains enum bool { false, true }; then you are in trouble. As a similar dinosaur, I am slowly embracing the wonderful new world of C99. After all, it has only been out there lurking for about 10 years now ;)
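The clash is easy to reproduce. The declaration below is valid C89 (and even compiles as C99, since bool is not a keyword there), but including C99's stdbool.h turns false and true into macros and mangles it:

/* Fine in C89; breaks the moment <stdbool.h> is included, because
   false and true then expand to 0 and 1. */
enum bool { false, true };

int main(void)
{
    enum bool b = true;
    return (int)b;
}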
You should evolve. Thanks for listening :-)
Actually, I'll expand on that.
You're right that C99 has been around for quite a while. You should (in my opinion) be using that standard for anything other than legacy code (where you just fix bugs rather than add new features). It's probably not worth it for legacy code but you should (as with all business decisions) do your own cost/benefit analysis.
I'm already ensuring my new code is compatible with C1x - while I'm not using any of the new features yet, I try to make sure it won't break.
As to what code to look out for, the authors of the standards take backward compatibility very seriously. Their job was never to design a new language; it was to codify existing practices.
The phase they're in at the moment allows them some more latitude in updating the language but they still follow the Hippocratic oath in terms of their output: "first of all, do no harm".
Generally, if your code is broken with a new standard, the compiler is forced to tell you. So simply compiling your code base will be an excellent start. However, if you read the C99 rationale document, you'll see the phrase "quiet change" appear - this is what you need to watch out for.
These are behavioral changes in the compiler that you don't need to be informed about and that may be the source of much angst and gnashing of teeth if your application starts acting strangely. Don't worry about the "quiet change in C89" bits - if they were a problem, you would have already been bitten by them.
That document, by the way, is an excellent read to understand why the actual standard says what it says.
Some C89 features are not valid C99
Arguably, those features exist only for historical reasons, and should not be used in modern C89 code, but they do exist.
The C99 N1256 standard draft foreword paragraph 5 compares C99 to older revisions, and is a good place to start searching for those incompatibilities, even though it has by far more extensions than restrictions.
Implicit int return and variable types
Mentioned by Lutz in a comment, e.g. the following are valid C89:
static i;
f() { return 1; }
but not C99, in which you have to write:
static int i;
int f() { return 1; }
This also precludes calling functions without prototypes in C99: Are prototypes required for all functions in C89, C90 or C99?
n1256 says:
remove implicit int
Return without expression for non void function
Valid C89, invalid C99:
int f() { return; }
I think in C89 it returns an implementation defined value. n1256 says:
return without expression not permitted in function that returns a value
Integer division with negative operand
C89: rounds in an implementation-defined direction
C99: rounds toward 0
So if your compiler rounded toward -inf, and you relied on that implementation-defined behavior, your compiler is now forced to break your code on C99.
https://stackoverflow.com/a/3604984/895245
n1256 says:
reliable integer division
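A short self-contained example of the change:

#include <stdio.h>

int main(void)
{
    /* C99 guarantees truncation toward zero, so this prints "-3 -1".
       A C89 compiler rounding toward -infinity could legitimately have
       produced -4 and 1 instead. */
    printf("%d %d\n", -7 / 2, -7 % 2);
    return 0;
}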
Windows compatibility
One major practical concern is being able to compile on Windows, since Microsoft does not intend to implement C99 fully any time soon.
This is for example why libgit2 limits allowed C99 features.
Respectfully: Try it and find out. :-)
Though keep in mind that even if you need to fix a few minor compilation differences, moving up is probably worth it.
If you don't violate the explicit C99 features, C90 code will work fine with the C99 flag and with pre-C99 libraries.
But there are some DOS-based C89 libraries that will certainly not work.
C99 is much more flexible, so feel free to migrate :-)
The calling conventions between C libraries haven't changed in ages and, in fact, I'm not sure they ever have.
Operating systems at this point rely heavily on the C calling conventions since the C APIs tend to be the glue between the pieces of the OS.
So, basically the answer is "Yes, the binaries will be backwards compatible. No, naturally, code using C99 features can't later be compiled with a non-C99 compiler."
It's intended to be backwards compatible. It formalizes extensions that many vendors have already implemented. It's possible, maybe even probable, that a well written program won't have any issues when compiling with C99.
In my experience, recompiling some modules and not others to save time... wastes a lot of time. Usually there is some easily overlooked detail that needs the new compiler to make it all compatible.
There are a few parts of the C89 Standard which are ambiguously written, and depending upon how one interprets the rule about types of pointers and the objects they're accessing, the Standard may be viewed as describing one of two very different languages--one of which is semantically much more powerful and consequently usable in a wider range of fields, and one of which allows more opportunities for compiler-based optimization. The C99 Standard "clarified" the rule to make clear that it makes no effort to mandate compatibility with the former language, even though it was overwhelmingly favored in many fields; it also treats as undefined some things that were defined in C89 but only because the C89 rules weren't written precisely enough to forbid them (e.g. the use of memcpy for type punning in cases where the destination has heap duration).
C99 may thus be compatible with the language that its authors thought was described by C89, but is not compatible with the language that was processed by most C89 compilers throughout the 1990s.
