Suppose I have
typedef struct {
    unsigned short bar : 1;
} foo_bf;

typedef union {
    unsigned short val;
    foo_bf bf;
} foo_t;
How do I correctly assign a value to this bit-field from a type such as uint16_t?
uint16_t myValue = 1;
foo_t foo;
foo.bf.bar = myValue;
Running PC-Lint, this turns into a MISRA error:
Expression assigned to a narrower or different essential type.
I tried to limit the number of used bits, without any success:
foo.bf.bar = (myValue & 0x1U);
Is there any chance to make this MISRA compliant if I have to use a uint16_t value as the source?
MISRA-C's essential type model isn't really applicable to bit-fields. The terms narrower and wider refer to the size in bytes (see 8.10.2). So it isn't obvious if a static analyser should warn here or not, since the rules for essential type do not address bit-fields.
EDIT: I was wrong here, see the answer by Andrew. Appendix D.4 tells how to translate a bit-field type to the matching essential type category.
However, using bit-fields in a MISRA-C application is a bad idea. Bit-fields are very poorly specified by the standard, and therefore non-deterministic and unreliable. Also, MISRA-C 6.1 requires that you document how your compiler supports bit-fields with uint16_t, as that is not one of the standard integer types allowed for bit-fields.
But the real deal-breaker here is Directive 1.1, which requires that all implementation-defined behavior is documented and understood. For a MISRA-C implementation, I once actually tried to document all implementation-defined aspects of bit-fields. Soon I found myself writing a whole essay, because there are so many problems with them. See this for the tip of the iceberg.
The work-around for not having to write such a "bit-field behavior book" is to unconditionally ban the use of bit-fields entirely in your coding standard. They are a 100% superfluous feature anyway. Use bit-wise operators instead.
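As a sketch of the bit-wise operator alternative, assuming a 16-bit register and hypothetical mask/shift names for the former bar field:

```c
#include <stdint.h>

/* Hypothetical position and mask for the former 'bar' bit-field. */
#define FOO_BAR_POS  0U
#define FOO_BAR_MASK ((uint16_t)((uint16_t)1U << FOO_BAR_POS))

/* Insert 'value' into the bar field of 'reg' using bit-wise operators only;
   every intermediate result is cast back to uint16_t. */
static uint16_t foo_set_bar(uint16_t reg, uint16_t value)
{
    reg = (uint16_t)(reg & (uint16_t)~FOO_BAR_MASK);   /* clear the field */
    reg = (uint16_t)(reg |
          (uint16_t)((uint16_t)(value << FOO_BAR_POS) & FOO_BAR_MASK));
    return reg;
}
```

The behavior of this code is fully defined by the standard, unlike the layout of a bit-field.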
Appendix D.4 of MISRA C:2012 is usefully titled "The essential types of bit fields"
For a bit-field which is implemented with an essentially Boolean type, it is essentially Boolean
For a bit-field which is implemented with a signed type, it is the Signed Type of Lowest Rank which is able to represent the bit field
For a bit-field which is implemented with an unsigned type, it is the Unsigned Type of Lowest Rank which is able to represent the bit field
The Unsigned Type of Lowest Rank of a single-bit unsigned integer would be uint8_t (aka unsigned char) - assuming that the tool does not interpret a single-bit as being boolean...
Beyond observing that this looks like a mis-diagnosis by PC-Lint, a workaround that avoids any possibility of doubt would be to cast:
foo.bf.bar = (uint8_t)myValue;
As an aside MISRA C:2012 Rule 6.1 gives guidance on the use of types other than signed/unsigned int for bit-fields...
Related
I have this MISRA C:2004 violation: "typedefs that indicate size and signedness should be used in place of the basic types".
For example, I have this piece of code, where I did not understand the right solution to avoid this violation:
static int handlerCalled = 0;

int llvm_test_diagnostic_handler(void) {
    LLVMContextRef C = LLVMGetGlobalContext();
    LLVMContextSetDiagnosticHandler(C, &diagnosticHandler, &handlerCalled);
The MISRA rule is aimed at the fact that C does not define the exact size, range, or representation of its standard integer types. The stdint.h header mitigates this issue by providing several families of typedefs expressing the implementation-supported integer types that provide specific combinations of signedness, size, and representation. Each C implementation provides a stdint.h header appropriate for that implementation.
You should comply with the MISRA rule by using the types defined in your implementation's stdint.h header, choosing the types that meet your needs from among those it actually supports (or those you expect it to support). For example, if you want a signed integer type exactly 32 bits wide, with no padding bits, and expressed in two's complement representation, then that is int32_t -- if your implementation provides that at all (it would be surprising, but not impossible, for such a type not to be available).
For example,
#include <stdint.h>
// relies on the 'int32_t' definition from the above header:
static int32_t handlerCalled = 0;
The point I was raising in my comment was that you seemed to say that you not only included the header, but also defined your own typedef for uint32_t. You must not define your own typedef for this or other types in the scope of stdint.h. At best it is redundant to do so, but at worst it satisfies the MISRA checker yet breaks your code.
Some background:
The header stdint.h has been part of the C standard since C99. It provides typedefs that are guaranteed to be 8-, 16-, 32-, and 64-bit integers, both signed and unsigned. This header is not part of the C89 standard, though, and I haven't yet found any straightforward way to ensure that my data types have a known length.
Getting to the actual topic
The following code is how SQLite (written in C89) defines 64-bit integers, but I don't find it convincing. That is, I don't think it will work everywhere. Worst of all, it could fail silently:
/*
** CAPI3REF: 64-Bit Integer Types
** KEYWORDS: sqlite_int64 sqlite_uint64
**
** Because there is no cross-platform way to specify 64-bit integer types
** SQLite includes typedefs for 64-bit signed and unsigned integers.
*/
#ifdef SQLITE_INT64_TYPE
typedef SQLITE_INT64_TYPE sqlite_int64;
typedef unsigned SQLITE_INT64_TYPE sqlite_uint64;
#elif defined(_MSC_VER) || defined(__BORLANDC__)
typedef __int64 sqlite_int64;
typedef unsigned __int64 sqlite_uint64;
#else
typedef long long int sqlite_int64;
typedef unsigned long long int sqlite_uint64;
#endif
typedef sqlite_int64 sqlite3_int64;
typedef sqlite_uint64 sqlite3_uint64;
So, this is what I've been doing so far:
Checking that the char data type is 8 bits long, since that is not guaranteed. If the preprocessor macro CHAR_BIT is not equal to 8, compilation fails.
Now that char is guaranteed to be 8 bits long, I create a struct containing an array of several unsigned chars, which correspond to the bytes of the integer.
I write "operator" functions for my datatypes. Addition, multiplication, division, modulo, conversion from/to string, etc.
I have abstracted this process in a header file, which is the best I can do with what I know, but I wonder if there is a more straightforward way to achieve this.
I'm asking because I want to write a portable C library.
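For reference, the compile-time check in step 1 can be sketched like this, using only C89 facilities (my_uint64 is a hypothetical name for the struct in step 2):

```c
#include <limits.h>

/* Step 1: refuse to compile on platforms where char is not 8 bits. */
#if CHAR_BIT != 8
#error "This library requires 8-bit chars"
#endif

/* Step 2: with 8-bit chars guaranteed, a 64-bit integer can be modelled
   as an array of 8 unsigned chars; the operator functions of step 3
   would work on this representation. */
typedef struct {
    unsigned char byte[8];
} my_uint64;
```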
First, you should ask yourself whether you really need to support implementations that don't provide <stdint.h>. It was standardized in 1999, and even many pre-C99 implementations are likely to provide it as an extension.
Assuming you really need this, Doug Gwyn, a member of the ISO C standard committee, created an implementation of several of the new headers for C9x (as C99 was then known), compatible with C89/C90. The headers are in the public domain and should be reasonably portable.
http://www.lysator.liu.se/(nobg)/c/q8/index.html
(As I understand it, the name "q8" has no particular meaning; he just chose it as a reasonably short and unique search term.)
One rather nasty quirk of integer types in C stems from the fact that many "modern" implementations will have, for at least one size of integer, two incompatible signed types of that size with the same bit representation and likewise two incompatible unsigned types. Most typically the types will be 32-bit "int" and "long", or 64-bit "long" and "long long". The "fixed-sized" types will typically alias to one of the standard types, though implementations are not consistent about which one.
Although compilers used to assume that accesses to one type of a given size might affect objects of the other, the authors of the Standard didn't mandate that they do so, probably because there would have been no point in ordering people to do what they would do anyway, and they couldn't imagine any sane compiler writer doing otherwise; once compilers started exploiting that omission, it was politically difficult to revoke the "permission". Consequently, if one has a library which stores data in a 32-bit "int" and another which reads data from a 32-bit "long", the only way to be assured of correct behavior is either to disable aliasing analysis altogether (probably the sanest choice while using gcc) or to add gratuitous copy operations (being careful that gcc doesn't optimize them out and then use their absence as an excuse to break the code, something it sometimes does as of 6.2).
Hi, I have a requirement where the first bit of a register should be made 0 (reset) and the 4 MSB bits need to be made 1, every time.
I can do this in two steps:
int val = 0x0f;
val |= 0xf0;
val &= ~(1 << 0);
printf("val is %d\n", val);
I would like the two lines of code to be combined into one expression. I am trying, but I'm looking for some good logic from the experts.
I came up with val = (val |= 0xf0) & ~(1 << 0); but I can't use this kind of expression, since the coding standard in use is MISRA.
Please can anyone suggest better logic.
Thanks in advance.
It is entirely unclear what you mean, so here's some MISRA code review:
int is not MISRA compliant. Use a fixed-width integer type, preferably from stdint.h.
0x0f literals like these are not MISRA compliant. All literals need to have an u or U suffix, even hexadecimal ones.
val |=0xf0; is not MISRA compliant. You aren't allowed to use bit-wise operators on signed types.
val &=~(1<<0) is not MISRA compliant. There is an important MISRA rule stating that the result of the ~ operator must always be cast to the intended type (called underlying type or effective type depending on MISRA version). Furthermore, MISRA does not allow you to use shift operators on signed types.
printf is not MISRA compliant. MISRA doesn't allow stdio.h in production code.
Otherwise, the code is readable and clear, apart from the fact that int is not 8 bits wide, so your talk about setting the MSBs doesn't make much sense.
Naturally, if you try to merge this non-compliant readable code into an unreadable one-liner, it will remain non-compliant. You have to start by making the original code MISRA compliant. Might be wise to actually run it through your static analyser.
unsigned val = whatever();
val = (val | 0xF0u) & ~1u;
The nested assignment is unnecessary. I don't have a MISRA copy handy, but I think this should pass. The use of unsigned literals and types is because MISRA disallows bitwise operations on signed integers.
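Putting the review points above together, a MISRA-flavoured sketch (assuming the register is really 8 bits wide and modelled as uint8_t; the function name is made up) might look like:

```c
#include <stdint.h>

/* Set the 4 MSBs and clear bit 0 of an 8-bit register value.
   The result of ~ is cast back to the intended type, fixed-width
   types are used, and all literals carry a U suffix. */
static uint8_t update_reg(uint8_t val)
{
    return (uint8_t)((uint8_t)(val | 0xF0U) & (uint8_t)(~(uint32_t)0x01U));
}
```

For example, update_reg(0x0FU) yields 0xFEU: the OR sets the top four bits and the AND clears bit 0.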
I am in the early stages of framing stuff out on a new project.
I defined a function with a return type of "bool"
I got this output from PC-Lint
Including file sockets.h (hdr)
bool sock_close(uint8_t socket_id);
^
"LINT: sockets.h (52, 1) Note 970: Use of modifier or type '_Bool' outside of a typedef [MISRA 2012 Directive 4.6, advisory]"
I went ahead and defined this in another header to shut lint up:
typedef bool bool_t;
Then I started wondering why I had to do that and why it changed anything. I turned to MISRA 2012 Dir 4.6. It is concerned mostly with the basic types like short, int, and long: their width and their signedness.
The standard does not give any amplification, rational, exception, or example for bool.
bool is explicitly defined as _Bool in stdbool.h in C99. So does this criterion really apply to bool?
I thought _Bool was explicitly always the "smallest standard unsigned integer type large enough to store the values 0 and 1", according to section 6.2.5 of C99. So we know bool is unsigned. Is it then just a matter of the fact that _Bool is not fixed-width and is subject to promotion somehow? Because the rationale would seem to contradict that notion:
Adherence to this guideline does not guarantee portability because the size of the int type may determine whether or not an expression is subject to integer promotion.
How does just putting typedef bool bool_t; change anything, since I do nothing to indicate the width or the signedness in doing so? The width of bool_t will just be platform-dependent too. Is there a better way to redefine bool?
A type must not be defined with a specific length unless the implemented type is actually of that length
so typedef bool bool8_t; should be totally illegal.
Is Gimpel wrong in their interpretation of Directive 4.6 or are they spot on?
Use of modifier or type '_Bool' outside of a typedef [MISRA 2012 Directive 4.6, advisory]
That's nonsense; directive 4.6 is only concerned with using the types from stdint.h rather than int, short, etc. The directive is about the basic numerical types. bool has nothing to do with that directive whatsoever, as it is not a numerical type.
For reasons unknown, MISRA-C:2012 examples use a weird type called bool_t, which isn't standard. But MISRA does by no means enforce this type to be used anywhere, particularly they do not enforce it in directive 4.6, which doesn't even mention booleans. MISRA does not discourage the use of bool or _Bool anywhere.
Is Gimpel wrong in their interpretation of Directive 4.6
Yes, their tool is giving incorrect diagnostics.
In addition, you may have to configure the tool (if possible) to tell it which bool type is used. 5.3.2 mentions that you might have to do so if not using _Bool, implying that all static analysers must understand _Bool. But even if the bool type is correctly configured, Dir 4.6 has nothing to do with it.
A potential concern with Boolean types is that a lot of code prior to C99 used a single-byte type to hold true/false values, and a fair amount of it may have used the name "bool". Attempting to store any multiple of 256 into most such types would be regarded as storing zero, while storing a non-zero multiple of 256 into a C99 bool would yield 1. If a piece of code which uses a C99 bool is ported into a piece of code that uses a typedef'ed byte, the resulting code could very easily malfunction. (It's somewhat less likely that code written for a typedef'ed byte would rely upon any particular behavior when storing a value other than 0 or 1.)
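The difference can be demonstrated with a short sketch (legacy_bool and styles_disagree are made-up names for illustration):

```c
#include <stdbool.h>

typedef unsigned char legacy_bool;  /* pre-C99 style byte-sized "bool" */

/* Returns 1 when the two styles disagree on the truth value of v. */
static int styles_disagree(int v)
{
    legacy_bool a = (legacy_bool)v; /* conversion wraps modulo 256 */
    bool b = v;                     /* any non-zero value becomes 1 */
    return (a != 0) != (b != 0);
}
```

For v = 256, the legacy byte stores 0 ("false") while the C99 bool stores 1 ("true"), which is exactly the porting hazard described above.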
I am working with an embedded device with 32K of memory, writing in plain C using IAR EWARM v6.30.
To make code more readable I would like to define some enum types, for example, something like
{RIGHT_BUTTON, CENTER_BUTTON, LEFT_BUTTON}
instead of using 0, 1, 2 values, but I am afraid it will take additional memory that is already scarce.
So I have 2 questions:
1) Can I force an enum to be of short or byte type instead of int?
2) What is the exact memory footprint of defining an enum type?
In fully compliant ISO C the size and type of an enum constant is that of signed int. Some embedded systems compilers deliberately do not comply with that as an optimisation or extension.
In ISO C++ "The underlying type of an enumeration is an integral type that can represent all the enumerator values defined in the enumeration.", so a compiler is free to use the smallest possible type, and most do, but are not obliged to do so.
In your case (IAR EWARM), the manual covers this: the smallest possible type is used by default, so no option is required. In fact, you would need to use --enum_is_int to force compliant behaviour. Other compilers may behave differently or have different extensions, pragmas or options to control this. Such things will normally be defined in the documentation.
If you really need to keep the data size down to a char, then you can always use a set of #define constant values to represent the enum states, and only ever use those values in your assignments and tests.
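That suggestion can be sketched like this (the button names follow the example in the question):

```c
/* #define constants in place of an enum; the object holding the
   state is a single byte regardless of how the compiler sizes enums. */
#define RIGHT_BUTTON  0U
#define CENTER_BUTTON 1U
#define LEFT_BUTTON   2U

static unsigned char button = LEFT_BUTTON;
```

The trade-off is that you lose the (weak) type association an enum gives you, but the storage size is now under your control.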
For a conforming compiler, an enumerated constant is always of type int (equivalently, signed int). But such constants aren't typically stored in memory, so their type probably won't have much effect on memory requirements.
A declared object of the enumerated type is of the enumerated type itself, which is compatible with char or with some signed or unsigned integer type. The choice of type is implementation-defined (i.e., the compiler gets to choose, but it must document how it makes the choice); the only requirement is that the type has to be capable of storing the values of all the constants.
It's admittedly odd that the constants are of type int rather than the enumerated type, but that's how the language is defined (the reasons are historical, and C++ has different rules).
For example, given:
enum foo { x, y, z };
enum foo obj;
obj = z;
the expression z is of type int and has the value 2 (just like the decimal constant 2), but the object obj is of type enum foo and may be as small as one byte, depending on the compiler. The assignment obj = z; involves an implicit conversion from int to enum foo (that conversion may or may not require additional code).
Some compilers may provide some non-standard way to specify the type to be chosen for an enumerated type. Some may even violate the standard in some way. Consult your compiler's documentation, print out the value of sizeof (enum foo), and, if necessary, examine the generated code.
It's likely that your compiler will make reasonable decisions within the constraints imposed by the language. For a compiler targeted at memory-poor embedded systems, it's particularly likely that the compiler will either choose a small type, or will let you specify one. Consult your compiler's documentation.
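A quick check along those lines: sizeof z is guaranteed to equal sizeof(int), because the constant has type int, while the size of an object of the enum type is implementation-defined and can be printed or inspected.

```c
/* The constants x, y, z have type int with values 0, 1, 2. */
enum foo { x, y, z };

/* The object's type is compatible with char or some integer type;
   its size is the compiler's choice. */
static enum foo obj = z;
```

Compiling this and examining sizeof obj (versus sizeof z) on your target shows which choice your compiler made.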
As Ian's answer suggests, if you want to control memory usage yourself, you can use char or unsigned char objects. You can still use an enum definition to define the constants, though. For example:
enum { x, y, z }; // No tag, so you can't declare objects of this type
typedef unsigned char foo; // a foo object is guaranteed to be 1 byte
foo obj = z;
Reference: section 6.7.2.2 of the C standard. The link is to a 1.7-megabyte PDF of a recent draft of the 2011 ISO C standard; this particular section hasn't changed significantly since 1989.
An ANSI C compiler will always use int to represent variables of an enum type.
http://en.wikipedia.org/wiki/Enumerated_type#C_and_syntactically_similar_languages
One option would be to use the enum to define the values in your program, but cast to char when the value is actually stored:
char value = (char)RIGHT_BUTTON;