structure double output - c

struct node
{
    double a : 23;
    int b;
} s;

int main()
{
    printf("%d\n", sizeof(s));
}
Why does this produce a compile error? I want to know why we cannot use bit-fields with the double data type.

My answer is for C. I have no idea if it applies to C++.
I suggest you do not try to write multi-language source files. It is hard work.
no prototype in scope for printf
type of sizeof(s) and type required by "%d" do not match
missing return 0; in main (for C89)
What compiler error do you get?
I want to know why we cannot use bit-fields with the double data type.
Because the C99 Standard says so, e.g. (emphasis mine):
6.7.2.1/9
A bit-field is interpreted as a signed or unsigned integer type consisting of the specified number of bits.

C provides a special type of structure member known as a bit field, which is an integer with an explicitly specified number of bits.
Non-integral types cannot be used as base types for bit fields.

Quoted from Wikipedia:
C also provides a special type of structure member known as a bit field, which is an integer with an explicitly specified number of bits. A bit field is declared as a structure member of type int, signed int, unsigned int, or _Bool, following the member name by a colon (:) and the number of bits it should occupy. The total number of bits in a single bit field must not exceed the total number of bits in its declared type.
In the statement double a : 23; you are using a bit-field with double, which is an error. You should use int instead.
Edit:
The behavior is implementation-defined if you use anything other than these types. char may work on your system, but it may fail on other platforms, as it is not part of the standard.

Yes, you cannot apply bit-fields to double; that is why it gives a compilation error.
Bit-fields are allowed only for the signed int, unsigned int, and _Bool data types.
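For reference, a minimal sketch of the question's struct rewritten with a permitted base type; the double becomes an ordinary member, since it cannot be a bit-field:

#include <stdio.h>

struct node
{
    unsigned int a : 23; /* bit-fields require an integer type */
    int b;
    double d;            /* a double can only be a plain member */
} s;

int main(void)
{
    printf("%zu\n", sizeof(s)); /* %zu matches the size_t result of sizeof */
    return 0;
}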

Related

Can I treat an `enum` variable as an `int` in C17?

TL;DR: Is it right to assume, given enum NAME {...};, that enum NAME n is the same as int n during execution? Can n be operated on as if it were a signed int, even though it is declared as enum NAME? The reason: I really want to use enum types for return flags, as a type 'closed' with respect to bit-operations.
For example: Let typedef enum FLAGS { F1 = 0x00000001, F2 = 0x00000002, F3 = 0x00000004 } FLAGS;
Then, FLAGS f = F1 | F2; assigns 3 to f, throwing no related errors or warnings. This and numerous other compiler-permitted usage scenarios, such as f++, makes me think I could legit treat f as if it were a signed int. Compiler used: MSVC'19, 16.9.1, with setting "C17 (2018) Standard (/std:c17)";
I searched the standard (the sketch here) and looked at other related questions, but found no mention of what I suspected (and wished) to be a "silent promotion" of enum NAME x to signed int x, even though the identifiers have that type. This leads me to believe that the way an enum behaves when assigned a value that isn't a member is implementation-dependent. I'm asking, in part, to confirm or deny this claim.
C 2018 6.7.2.2 4 says:
Each enumerated type shall be compatible with char, a signed integer type, or an unsigned integer type. The choice of type is implementation-defined, but shall be capable of representing the values of all the members of the enumeration…
So the answer to “Can I treat an enum variable as an int in C17?” is no, as an object with enumerated type might be effectively a char or other integer type different from int.
However, it is effectively an integer type, so FLAGS f = F1 | F2; will work: The FLAGS type must be capable of representing its values F1 and F2, so whatever type is used for FLAGS must contain all the bits of F1 and of F2, so it contains all the bits of F1 | F2.
Technically, you could construct a trap representation by manipulating bits, so it is not guaranteed that the type is closed under bit operations. For example, if a C implementation used two's complement for 32-bit int but reserved the bit pattern 1000…0000 as a trap representation, then INT_MIN & -2 would be a trap representation. (INT_MIN would have the bit pattern 1000…0001, for −(2^31−1), and -2 would have the pattern 1111…1110.) This does not occur in C implementations without trap representations in their integer types.
We might question whether the fact that two types (an enumeration and its implementation-defined integer type) are compatible means we can use one as the other. Two types are compatible if they are the same (6.2.7 1), and the only things that can make types compatible but not the same involve qualifiers (like const) that are not an issue for this or involve other properties (such as array dimensions) that are not relevant to simple integer types.
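To make this concrete, a small sketch of the flag pattern from the question; the cast to int for printing sidesteps the implementation-defined compatible type:

#include <stdio.h>

typedef enum FLAGS { F1 = 0x00000001, F2 = 0x00000002, F3 = 0x00000004 } FLAGS;

int main(void)
{
    FLAGS f = F1 | F2; /* stores 3; the compatible type holds all bits of F1 and F2 */
    if (f & F2)
        printf("F2 is set, f = %d\n", (int)f);
    return 0;
}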
This is in chapter 6.4.4.3 of the PDF you linked:
An identifier declared as an enumeration constant has type int.
Your thought of a promotion of enum NAME x to signed int x is not quite right: it is the enumeration constants (the identifiers listed between the braces) that have type int. The variable x is of the implementation-defined type chosen for the enumeration.
Additionally, integer promotion takes place in integer operations.
EDIT
Some compilers are quite serious about the difference between enum and int, especially if they have an option to reduce the bit width to the smallest possible. For example, the one I'm using in a project at work automatically inserts checks on each use of an enum value against the defined values. Additionally, IIRC, it rejects all implicit conversions; we need to cast explicitly, similar to:
FLAGS f = (FLAGS)((int)F1 | (int)F2);
But this is an extension of that special beast, enabled by specific safety options...

MISRA warning 12.4: integer conversion resulted in truncation (negation operation)

In a huge macro I have in a program aimed for a 16-bit processor, the following code (simplified) appears several times:
typedef unsigned short int uint16_t;
uint16_t var;
var = ~0xFFFF;
MISRA complains with the warning 12.4: integer conversion resulted in truncation. The tool used to get this is Coverity.
I have checked the forum, but I really need a solution (rather than replacing the negation with the actual value), as this line is inside a macro with varying parameters.
I have tried many things and here is the final attempt which fails also:
var = (uint16_t)((~(uint16_t)(0xFFFFu))&(uint16_t)0xFFFFu);
(The value 0xFFFF is just an example. In the actual code, the value is a variable which can take any 16-bit value.)
Do you have any other idea please? Thanks.
EDIT:
I have then tried to use a 32-bit value, and the result is the same with the following code:
typedef unsigned int uint32_t;
uint32_t var;
var = (uint32_t)(~(uint32_t)(0xFFFF0000u));
Summary:
Assuming you are using a static analyser for MISRA-C:2012, you should have gotten warnings for violations of rules 10.3 and 7.2.
Rule 12.4 is only concerned with wrap-around of unsigned integer constants, which can only occur with the binary + and - operators. It seems irrelevant here.
The warning text doesn't seem to make sense for either MISRA-C:2004 12.4 or MISRA-C:2012 12.4. Possibly, the tool is displaying the wrong warning.
There is, however, a MISRA-C:2012 rule 10.3 that forbids assigning a value to an object of a narrower essential type than that of the expression.
To use MISRA terms, the essential type of ~0xFFFF is unsigned, because the hex literal is of type unsigned int. On your system, unsigned int is apparently larger than uint16_t (int is a "greater ranked" integer type than short in the standard 6.3.1.1, even if they are of the same size). That is, uint16_t is of a narrower essential type than unsigned int, so your code does not conform to rule 10.3. This is what your tool should have reported.
The actual technical issue, which is hidden behind the MISRA terms, is that the ~ operator is dangerous because it comes with an implicit integer promotion. Which in turn causes code like for example
uint8_t x=0xFF;
~x << n; // BAD, always a bug
to invoke undefined behavior when the value 0xFFFFFF00 is left shifted.
It is therefore always good practice to cast the result of the ~ operator to the correct, intended type. There was even an explicit rule about this in MISRA 2004, which has now merged into the "essential type" rules.
In addition, MISRA (7.2) states that all integer constants should have a u or U suffix.
MISRA-C:2012 compliant code would look like this:
uint16_t var;
var = (uint16_t)~0xFFFFu;
or overly pedantic:
var = (uint16_t)~(uint16_t)0xFFFFu;
When the compiler looks at the right-hand side, it first sees the literal 0xFFFF. It is automatically promoted to an int, which (as the warning shows) is 32-bit on your system. We can picture that value as 0x0000FFFF (all 32 bits). When the compiler applies the ~ operation to it, it becomes 0xFFFF0000 (all 32 bits). So when you write var = ~0xFFFF;, the compiler in fact sees var = 0xFFFF0000; just before the assignment, and of course a truncation happens during that assignment...
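To see the promotion and the truncation in action, a minimal sketch, assuming a 32-bit int as in the question:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t var;

    /* 0xFFFFu has type unsigned int (32 bits here); ~ yields 0xFFFF0000u,
       which does not fit in 16 bits, and the cast makes the truncation explicit */
    var = (uint16_t)~0xFFFFu;

    printf("%u\n", (unsigned)var); /* prints 0 */
    return 0;
}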

Initializing bit-fields

When you write
struct {
    unsigned a:3, b:2;
} x = {10, 11};
is x.b guaranteed to be 3 by ANSI C (C89)? I have read and reread the standard, but can't seem to find exactly that case.
For example, "result that cannot be represented by the
resulting unsigned integer type is reduced modulo the number that is
one greater than the largest value that can be represented by the
resulting unsigned integer type." speaks about computation, not about initialization. And moreover, bit-field is not really a type.
Also, (when speaking about unsigned t:4) the standard says that it "contains values in the range [0,15]", but that doesn't necessarily mean an initializer must be reduced modulo 16 to be mapped to [0,15].
Struct initialization is described in painstaking detail, but I really can't seem to find exactly this behavior. (Of course compilers do exactly that, and IBM's documentation says "when you assign a value that is out of range to a bit field, the low-order bit pattern is preserved and the appropriate bits are assigned", but I'd like to know whether ANSI C standardizes that.)
"ANSI C"/C89 has been obsolete for 25 years. Therefore, my answer cites the current C standard ISO 9899:2011, also known as C11.
Pretty much everything related to bit-fields in the C standard is poorly defined. Typically, you will not find anything explicitly addressing the behavior of bit fields, but their behavior is rather specified implicitly, "between the lines". This is why you should avoid using bit fields.
However, I believe that this specific case is well-defined: it should work like any other integer initialization.
The detailed struct initialization rules you mention (6.7.9) show how the literal 11 in the initializer list is related to the variable b. Nothing strange with that. What then applies is "simple assignment", the same thing that would happen as if you wrote x.b = 11;.
When doing any kind of assignment or initialization in C, the right operand is converted to the type of the left operand. This is specified by C11 6.5.16:
In simple assignment (=), the value of the right operand is converted to the type of the assignment expression and replaces the value stored in the object designated by the left operand.
In your case, the literal 11 of type int is converted to a bit field of unsigned int:2.
Therefore, the rule you are looking for should be found in the chapter dealing with conversions (C11 6.3). What applies is what you already cited in your question, C11 6.3.1.3:
...if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.
The maximum value of an unsigned int:2 is 3. One more than the maximum value is 3+1=4. The compiler should repeatedly subtract this from the value 11:
11 - (3+1) = 7, which does not fit; subtract once more:
7 - (3+1) = 3, which fits, so the value 3 is stored.
But then of course, this is the very same thing as taking the 2 least significant bits of the decimal value 11 and storing them in the bit field.
WRT "speaks about computation, not about initialization", the C89 standard explicitly applies the rules of assignment and conversion to initialization. It also says:
A bit-field is interpreted as an integral type consisting of the specified number of bits.
Given those, while a compiler warning would clearly be in order, it seems that discarding the high-order bits is guaranteed by the standard.
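As a quick check of the reduction described above, a sketch that a conforming compiler should print 2 and 3 for (possibly with the warning mentioned):

#include <stdio.h>

struct {
    unsigned a:3, b:2;
} x = {10, 11};

int main(void)
{
    /* 10 reduced modulo 2^3 = 8 gives 2; 11 reduced modulo 2^2 = 4 gives 3 */
    printf("x.a = %u, x.b = %u\n", (unsigned)x.a, (unsigned)x.b);
    return 0;
}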

C compiler flag to ignore sign

I am currently dealing with code purchased from a third party contractor. One struct has an unsigned char field while the function that they are passing that field to requires a signed char. The compiler does not like this, as it considers them to be mismatched types. However, it apparently compiles for that contractor. Some Googling has told me that "[i]t is implementation-defined whether a char object can hold negative values". Could the contractor's compiler basically ignore the signed/unsigned type and treat them the same? Or is there a compiler flag that will treat them the same?
C is not my strongest language--just look at my tags on my user page--so any help would be much appreciated.
Actually char, signed char and unsigned char are three different types. From the standard (ISO/IEC 9899:1990):
6.1.2.5 Types
...
The three types char, signed char and unsigned char are collectively called the character types.
(and in C++, for instance, you have to (or at least should) write three overloads if you have a char argument)
Plain char might be treated signed or unsigned by the compiler, but the standard says (also in 6.1.2.5):
An object declared as type char is large enough to store any member of the basic execution character set. If a member of the required source character set in 5.2.1 is stored in a char object, its value is guaranteed to be positive. If other quantities are stored in a char object, the behavior is implementation-defined: the values are treated as either signed or nonnegative integers.
and
An object declared as type signed char occupies the same amount of storage as a ''plain'' char object.
The characters referred to in 5.2.1 are A-Z, a-z, 0-9, space, tab, newline and the following 29 graphic characters:
! " # % & ' ( ) * + , - . / :
; < = > ? [ \ ] ^ _ { | } ~
Answer
All of that I interpret to basically mean that ASCII characters with a value less than 128 are guaranteed to be positive. So if the values stored are always less than 128, it should be safe (from a value-preserving perspective), although not good practice.
This is compiler-dependent. For example, VC++ has a compiler option (/J) that makes plain char unsigned by default, and a corresponding _CHAR_UNSIGNED macro that is defined when that option is used.
I take it that you're talking about fields of type signed char and unsigned char, so the mismatch is explicit. If one of them were simply char, it might match in whatever compiler the contractor is using (IIRC, it's implementation-defined whether char is signed or unsigned), but not in yours. In that case, you might be able to get by with a command-line option or something to change yours.
Alternatively, the contractor might be using a compiler, or compiler options, that allow him to compile while ignoring errors or warnings. Do you know what sort of compilation environment he has?
In any case, this is not good C. If one of the types is just char, it relies on implementation-defined behavior, and therefore isn't portable. If not, it's flat wrong. I'd take this up with the contractor.
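If changing the third-party declarations is not an option, an explicit cast at each call site at least makes the conversion visible. A minimal sketch, with the hypothetical names consume and packet standing in for the contractor's function and struct:

#include <stdio.h>

struct packet {
    unsigned char field; /* the struct member in question */
};

static void consume(signed char c) /* the function expecting signed char */
{
    printf("%d\n", c);
}

int main(void)
{
    struct packet p = { 'A' };

    /* value-preserving as long as field stays below 128, per the answer above */
    consume((signed char)p.field);
    return 0;
}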

What is the size of an enum in C?

I'm creating a set of enum values, but I need each enum value to be 64 bits wide. If I recall correctly, an enum is generally the same size as an int; but I thought I read somewhere that (at least in GCC) the compiler can make the enum any width they need to be to hold their values. So, is it possible to have an enum that is 64 bits wide?
Taken from the current C Standard (C99): http://www.open-std.org/JTC1/SC22/WG14/www/docs/n1256.pdf
6.7.2.2 Enumeration specifiers
[...]
Constraints
The expression that defines the value of an enumeration constant shall be an integer
constant expression that has a value representable as an int.
[...]
Each enumerated type shall be compatible with char, a signed integer type, or an
unsigned integer type. The choice of type is implementation-defined, but shall be
capable of representing the values of all the members of the enumeration.
Not that compilers are always good at following the standard, but essentially: if your enum holds anything other than an int, you're in deep "unsupported behavior that may come back to bite you in a year or two" territory.
Update: The latest publicly available draft of the C Standard (C11): http://www.open-std.org/JTC1/SC22/WG14/www/docs/n1570.pdf contains the same clauses. Hence, this answer still holds for C11.
An enum is only guaranteed to be large enough to hold int values. The compiler is free to choose the actual type used based on the enumeration constants defined so it can choose a smaller type if it can represent the values you define. If you need enumeration constants that don't fit into an int you will need to use compiler-specific extensions to do so.
While the previous answers are correct, some compilers have options to break the standard and use the smallest type that will contain all values.
Example with GCC (documentation in the GCC Manual):
enum ord {
    FIRST = 1,
    SECOND,
    THIRD
} __attribute__ ((__packed__));
STATIC_ASSERT( sizeof(enum ord) == 1 )
Just set the last value of the enum to a value large enough to make it the size you would like the enum to be; it should then be that size:
enum value { a = 0, b, c, d, e, f, g, h, i, j, l, m, n, last = 0xFFFFFFFFFFFFFFFF };
(Note that this relies on a compiler extension: as quoted above, the standard requires each enumeration constant to be representable as an int.)
We have no control over the size of an enum variable; it depends entirely on the implementation. The compiler merely gives us the option to name integer values with an enum, so an enum follows the size of an integer.
In the C language, an enum is guaranteed to be the size of an int. There is a compile-time option (-fshort-enums) to make it as short as possible (mainly useful when the values are not more than 64K). There is no compile-time option to increase its size to 64 bits.
Consider this code:
enum value { a, b, c, d, e, f, g, h, i, j, l, m, n };
enum value s;
printf("%zu\n", sizeof(s));
It will give 4 as output on a typical platform where int is 4 bytes. So no matter how many elements an enum contains, its size is always fixed.
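Putting the answers together, a GCC-only sketch contrasting the default width with the packed attribute; the printed sizes assume a typical platform with a 4-byte int:

#include <stdio.h>

enum wide   { W1 = 1, W2 };
enum narrow { N1 = 1, N2 } __attribute__ ((__packed__)); /* GCC extension */

int main(void)
{
    printf("wide:   %zu\n", sizeof(enum wide));   /* typically 4 */
    printf("narrow: %zu\n", sizeof(enum narrow)); /* 1 with GCC */
    return 0;
}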

Resources