I have an odd problem with gcc 4.3 and I wanted to know if it is a specific problem with the compiler or if it is a general C problem.
Granted, I use a really odd construct, but I like it, because it allows me to enforce some rules that otherwise would not be possible.
The project is split into several modules, and each module has a structure that is opaque.
There's a typedef struct <tag> <type> declaration in the header, and in one .c file there is a struct tag { ... };, and all functions refer to an element via a <type> *.
Each module knows its own structure; the structures of the other modules are not visible. In one module, I do not work with one element but with a fixed array of elements. This means that some functions of that module work with a pointer to array.
Let's call that module wdi. So I have for example
void write_all(wdi_type (*wdis)[MAX_WDI]);
and, for allocation (I know, very unusual syntax), a function that directly returns the right pointer-to-array type:
wdi_type (*wdi_alloc(void))[MAX_WDI];
This works well under GNU C 3.4.6 (Solaris SPARC); under cc, the Sun compiler v12, it compiles too (I couldn't test it, though, because another part of the app breaks).
On gcc 4.3.3 (also tested on 4.4.6 x86-64 and 4.6.2 ARM), however, it doesn't: I get the compilation error "array type has incomplete element type".
I don't understand why the compiler would need that information at that stage. It doesn't need the size of other opaque structures either.
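For reference, here is a minimal sketch of the setup (the header layout, the struct tag and the MAX_WDI value are my assumptions, following the naming above):

/* wdi.h */
#define MAX_WDI 8                           /* value invented for the sketch */
typedef struct wdi_struct wdi_type;         /* opaque: defined only in wdi.c */

void write_all(wdi_type (*wdis)[MAX_WDI]);  /* pointer to array of MAX_WDI elements */
wdi_type (*wdi_alloc(void))[MAX_WDI];       /* returns such a pointer */

/* Every .c file that includes wdi.h sees wdi_type as an incomplete type,
   and these are the declarations that gcc 4.x rejects with
   "array type has incomplete element type" according to the report above. */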
Is it a gcc bug?
What does the standard say?
I wasn't able to find anything about it. Should I file a bug report with GNU?
The standard (well, the N1570 draft of the C2011 standard) says, in 6.2.5 (20)
An array type describes a contiguously allocated nonempty set of objects with a particular member object type, called the element type. The element type shall be complete whenever the array type is specified.
(emphasis by me)
The corresponding passage in the C99 standard was less forceful:
An array type describes a contiguously allocated nonempty set of objects with a particular member object type, called the element type.36)
36) Since object types do not include incomplete types, an array of incomplete type cannot be constructed.
It didn't explicitly forbid specifying an array type with an incomplete element type, only constructing such an array.
I haven't been able to find out when and why footnote 36 was replaced with the emphasised sentence, but it was before November 2010.
It would seem that gcc-4.x rejects the code based on the new version, while gcc-3.4.6 accepted it based on the older version, so I don't think it is a bug, and the code is explicitly invalid according to the current standard.
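As an aside, one way to keep the array hidden while satisfying that sentence is to wrap the fixed-size array in a struct that is itself opaque. A sketch under the question's naming (wdi_block and its contents are invented here):

/* wdi.h */
typedef struct wdi_struct wdi_type;    /* still opaque */
typedef struct wdi_block  wdi_block;   /* opaque wrapper around the fixed array */

wdi_block *wdi_alloc(void);
void write_all(wdi_block *wdis);

/* wdi.c */
#define MAX_WDI 8
struct wdi_struct { int data; /* ... real members ... */ };
struct wdi_block  { wdi_type items[MAX_WDI]; };

The header then never specifies an array type with an incomplete element type, so the quoted rule is never triggered.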
Related
The C11 spec on enums¹ states that enumeration constants must have type int (1440-1441):
1440 The expression that defines the value of an enumeration constant shall be an integer constant expression that has a value representable as an int.
1441 The identifiers in an enumerator list are declared as constants that have type int and may appear wherever such are permitted.107)
However, it indicates that the backing type of the enum can be either a signed int, an unsigned int, or a char, so long as it fits the range of constants in the enum (1447-1448):
1447 Each enumerated type shall be compatible with char, a signed integer type, or an unsigned integer type.
1448 The choice of type is implementation-defined,108) but shall be capable of representing the values of all the members of the enumeration.
This seems to indicate that only the compiler can know the width of an enum type, which is fine until you consider an array of enum types as part of a dynamically linked library.
Say you had a function:
enum my_enum return_fifth(enum my_enum lst[]) {
    return lst[5];
}
This would be fine when linked to statically, because the compiler knows the size of a my_enum, but any other C code linking to it may not.
So, how is it possible for one C library to dynamically link to another C library and know how the compiler decided to implement the enums? (Or do most modern compilers just stick with int/uint and forgo using chars altogether?)
¹ Okay, I know this website is not quite the C11 standard, whereas this one is a bit closer: http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf
The C standard doesn't say anything about dynamic libraries, or even static libraries; these concepts don't exist in the standard. They belong to the domain of implementation-defined behavior.
But, as you said, nothing prevents a compiler from using a different type for an enumeration, which means one compiler can use one type and another compiler a different one.
This would be fine when linked to statically
In fact, no. Let's say that A is a compiler that uses char and B is a compiler that uses int, and let's say these types are not the same size. You compile a static library with compiler A, and you statically link this library to a program compiled by B. This is static, and still B can't know that A doesn't use B's type for the enum.
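A hedged sketch of that scenario (the enum, the header and the assumed sizes are all invented for illustration):

/* shared.h, included by both sides */
enum my_enum { FIRST, SECOND, THIRD };
enum my_enum return_fifth(enum my_enum lst[]);

/* library, built with compiler A: assume sizeof(enum my_enum) == 1 here,
   so return_fifth() reads the element at byte offset 5 of lst */

/* program, built with compiler B: assume sizeof(enum my_enum) == 4 here,
   so the caller lays lst out with 4-byte elements and expects lst[5] at
   byte offset 20; the two sides disagree about where each element lives */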
So, how is it possible for one C library to dynamically link to another C library
Well, as I said, this is not possible for a static library, and for the same reason it is not possible for a dynamic library.
Or do most modern compilers just stick with int/uint and forgo using chars altogether?
Big compilers generally talk to each other and use the same rules in the same environment, yes. But nothing in C guarantees this behavior. (On the compatibility problem: a lot of people refer to "the C ABI", despite the fact that it doesn't exist in the standard.)
So the best advice is to compile your dynamic library with the same compiler and options as your main program; checking the documentation of your compiler is a big plus as well.
The complete definition of the enum needs to be visible at the time it is used. Then the compiler will know what the size will be.
Forward declarations of enum types are not allowed.
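A minimal illustration of that point, using a hypothetical enum color:

/* not valid standard C: an enum cannot be forward-declared
   (some compilers accept this only as an extension)        */
enum color;

/* the complete definition must be visible before the type is used,
   so the compiler knows what sizeof(enum color) will be            */
enum color { RED, GREEN, BLUE };
enum color c = RED;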
Consider the following code on a platform where the ABI does not insert padding into unions:
union { int xi; } x;
x.xi = 1;
I believe that the second line exhibits undefined behaviour as it violates the strict aliasing rule:
The object referred to by x.xi is the same object as the object referred to by x. Both are the same region of storage and the term object is defined in ISO 9899:2011 §3.15 as:
object
1 region of data storage in the execution environment, the contents of which can represent values
2 NOTE When referenced, an object may be interpreted as having a particular type; see 6.3.2.1.
As an object is not more than a region of storage, I conclude that as x and x.xi occupy the same storage, they are the same object.
The effective type of x is union { int xi; } as that's the type it has been declared with. See §6.5 ¶6:
6 The effective type of an object for an access to its stored value is the declared type of the object, if any.87) If a value is stored into an object having no declared type through an lvalue having a type that is not a character type, then the type of the lvalue becomes the effective type of the object for that access and for subsequent accesses that do not modify the stored value. If a value is copied into an object having no declared type using memcpy or memmove, or is copied as an array of character type, then the effective type of the modified object for that access and for subsequent accesses that do not modify the value is the effective type of the object from which the value is copied, if it has one. For all other accesses to an object having no declared type, the effective type of the object is simply the type of the lvalue used for the access.
87) Allocated objects have no declared type.
By the wording of ¶6 it is also clear that each object can only have one effective type.
In the statement x.xi = 1, I access x through the lvalue x.xi, which has type int. This is not one of the types listed in §6.5 ¶7:
7 An object shall have its stored value accessed only by an lvalue expression that has one of the following types:88)
- a type compatible with the effective type of the object,
- a qualified version of a type compatible with the effective type of the object,
- a type that is the signed or unsigned type corresponding to the effective type of the object,
- a type that is the signed or unsigned type corresponding to a qualified version of the effective type of the object,
- an aggregate or union type that includes one of the aforementioned types among its members (including, recursively, a member of a subaggregate or contained union), or
- a character type.
88) The intent of this list is to specify those circumstances in which an object may or may not be aliased.
Therefore, the second line exhibits undefined behaviour.
As this interpretation is clearly wrong, where lies my misreading of the standard?
The error is thinking that x and x.xi are the same object.
The union is an object and it contains member objects¹. They are distinct objects, each with its own type.
¹ Quoted from ISO/IEC 9899:201x, 6.2.5 Types, ¶20:
A union type describes an overlapping nonempty set of member objects, each of which has an optionally specified name and possibly distinct type.
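In other words (a commented restatement of the snippet from the question, not new code):

union { int xi; } x;  /* x is an object of union type                          */
x.xi = 1;             /* x.xi designates the member object, a distinct object
                         of type int; storing through an int lvalue therefore
                         matches that object's own type, so 6.5 ¶7 is satisfied */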
Outside of the rules which forbid the use of pointers to access things of other types, the term "object" refers to a contiguous allocation of storage. Each individual variable of automatic or static duration is an independent object (since an implementation could arbitrarily scatter them throughout memory) but any region of memory created by malloc would be a single object--effectively of type "char[]", no matter how many different ways the contents therein were indexed and accessed.
The C89 rules regarding pointer type access could be made workable if, in addition to the special rule for character-pointer types, there were a corresponding rule for suitably-aligned objects of character-array types, or for objects with no declared type that were effectively "char[]". Interpreting the rules in such a fashion would limit their application to objects that had declared types. This would have allowed most of the optimizations that would have been practical in 1989, but as compilers got more sophisticated it became more desirable to be able to apply similar optimizations to allocated storage; since the rules weren't set up for that, there was little clarity as to what was or was not permissible.
By 1999, there was a substantial overlap between the kinds of pointer-based accesses some programs needed to do, and the kinds of pointer-based accesses that compilers were assuming programs wouldn't do, so any single C99 standard would have either required that some C99 implementations be made less efficient than they had been, or else allowed C99 compilers to behave arbitrarily with a large corpus of code that relies upon techniques that some compilers didn't support.
The authors of C99, rather than resolving the situation by defining directives to specify different aliasing modes, attempted to "clarify" it by adding language that either requires applying a different definition of "object" from the one used elsewhere, or else requires that each allocated region hold either one array of a single type or a single structure which may contain a flexible array member. The latter restriction might be usable in a language being designed from scratch, but would effectively invalidate a huge amount of C code. Fortunately or unfortunately, however, the authors of the Standard were able to get away with such sloppy drafting because compiler writers were, at least until recently, more interested in doing what was necessary to make a compiler useful than in doing the minimum necessary to comply with the poorly-written Standard.
If one wants to write code that will work with a quality compiler, one should ensure that any aliasing is done in ways that a compiler would have to be obtuse to ignore (e.g. if a function receives a parameter of type T*, casts it to U*, and then accesses the object as a U*, a compiler that's not being obtuse should have no trouble recognizing that the function might really be accessing a T*). If one wants to write code that will work with the most obtuse compiler imaginable... that's impossible, since the Standard doesn't require that an implementation be capable of processing anything other than a possibly-contrived and useless program. If one wants to write code that will work on gcc, the authors' willingness to support constructs will be far more relevant than what the Standard has to say about them.
This is mainly a followup to Should definition and declaration match?
Question
Is it legal in C to have (for example) int a[10]; in one compilation unit and extern int a[4]; in another one?
(You can find a working example in my answer to ref'd question)
Disclaimers:
I know it is dangerous and would not do it in production code.
I know that if you have both in the same compilation unit (typically through inclusion of a .h in the file containing the definition), compilers detect an error.
I have already read Jonathan Leffler's excellent answer to How do I use extern to share variables between source files? but could not find the answer to this specific point there - even if Jonathan showed even worse usages ...
Even though several comments on the referenced post flagged this as UB, I could not find any authoritative reference for it. So I would say that there is no UB here and that the second compilation unit will simply have access to the beginning of the array, but I would really like a confirmation, or instead a reference explaining why it is UB.
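A minimal sketch of the situation (file names invented; this is essentially the working example referred to above):

/* file1.c */
int a[10] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };  /* the defining declaration */

/* file2.c */
extern int a[4];   /* same identifier, smaller size specifier */

int first(void)
{
    return a[0];   /* links and appears to work, but is it well defined? */
}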
It is undefined behavior.
Section 6.2.7.2 of C99 states:
All declarations that refer to the same object or function shall have
compatible type; otherwise, the behavior is undefined.
NOTE: As mentioned in the comments below, the important part here is [...] that refer to the same object [...], which is further defined in 6.2.2:
In the set of translation units and libraries that constitutes an
entire program, each declaration of a particular identifier with
external linkage denotes the same object or function.
About the type compatibility rules for array types, section 6.7.5.2.4 of C99 clarifies what it means for two array types to be compatible:
For two array types to be compatible, both shall have compatible
element types, and if both size specifiers are present, and are
integer constant expressions, then both size specifiers shall have the
same constant value. If the two array types are used in a context
which requires them to be compatible, it is undefined behavior if the
two size specifiers evaluate to unequal values.
(Emphasis mine)
In the real world, as long as you stick to 1D arrays, it is probably harmless, because there is no bounds checking and the address of the first element remains the same regardless of the size specifier, but note that the sizeof operator will return different values in each source file (opening a wonderful opportunity to write buggy code).
Things start to get really ugly if you decide to extrapolate on this example and declare multidimensional arrays with different dimension sizes, because the offset of each element in the array will not match with the real dimensions any more.
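To illustrate both points (sizes invented):

/* file1.c */
int grid[4][8];          /* sizeof grid == 4 * 8 * sizeof(int) here */

/* file2.c */
extern int grid[4][4];   /* sizeof grid == 4 * 4 * sizeof(int) here, and
                            grid[r][c] is computed as r * 4 + c instead of
                            the real r * 8 + c, so most elements are read
                            from the wrong place */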
Yes, it is legal. The language allows it.
In your specific case there will be no undefined behavior as the extern declared array is smaller than the actually allocated array.
It can be used in a case where the declaring module uses the "unpublished" array elements for e.g. housekeeping of its algorithms (abstraction hiding).
The Python documentation claims that the following does not work on "some platforms or compilers":
int foo(int); // Defined in another translation unit.
struct X { int (*fptr)(int); } x = {&foo};
Specifically, the Python docs say:
We’d like to just assign this to the tp_new slot, but we can’t, for portability sake. On some platforms or compilers, we can’t statically initialize a structure member with a function defined in another C module, so, instead, we’ll assign the tp_new slot in the module initialization function just before calling PyType_Ready().
-- http://docs.python.org/extending/newtypes.html
Is the above standard C89 and/or C99? What compilers specifically cannot handle the above?
That kind of initialization has been permitted since at least C90.
From C90 6.5.7 "Initialization"
All the expressions in an initializer for an object that has static storage duration or in an initializer list for an object that has aggregate or union type shall be constant expressions.
And 6.4 "Constant expressions":
An address constant is a pointer to an lvalue designating an object of static storage duration, or to a function designator; it shall be created explicitly, using the unary & operator...
But it's certainly possible that some implementations might have trouble with the construct - I'd guess that wouldn't be true for modern implementations.
According to N1570 6.6 paragraph 9, the address of a function is an address constant; according to 6.7.9, this means that it can be used to initialize global variables. I am almost certain this is also valid C89.
However,
On sane platforms, the value of a function pointer (or any pointer, other than NULL) is only known at runtime. This means that the initialization of your structure can't take place until runtime. This doesn't always apply to executables but it almost always applies to shared objects such as Python extensions. I recommend reading Ulrich Drepper's essay on the subject (link).
I am not aware of which platforms this is broken on, but if the Python developers mention it, it's almost certainly because one of them got bitten by it. If you're really curious, try looking at an old Python extension and seeing if there's an appropriate message in the commit logs.
Edit: It looks like most Python modules just do the normal thing and initialize type structures statically, e.g., static type obj = { function_ptr ... };. For example, look at the mmap module, which is loaded dynamically.
The example is definitively conforming to C99, and AFAIR also C89.
If some particular (oldish) compiler has a problem with it, I don't think that the proposed solution is the way to go. Don't impose dynamic initialization on platforms that behave well. Instead, special-case the weirdos that need special treatment, and try to phase them out as quickly as you can.
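A sketch of that approach (the feature macro BROKEN_STATIC_FPTR_INIT is hypothetical; you would define it only for the platforms that actually misbehave):

int foo(int);                         /* defined in another translation unit */

struct X { int (*fptr)(int); };

#if defined(BROKEN_STATIC_FPTR_INIT)  /* the odd platforms get runtime init  */
static struct X x = { 0 };
static void init_x(void) { x.fptr = &foo; }
#else                                 /* everyone else keeps static init     */
static struct X x = { &foo };
static void init_x(void) { }
#endif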
I have a structure with no members (for the moment) and I would like to know if it is possible to suppress the warning I get:
warning: struct has no members
Is it possible to add a member and keep the sizeof the struct zero? Any other solution?
In C the behaviour of an empty structure is compiler-dependent, versus C++, where it is part of the spec (explanations here).
C++
A class with an empty sequence of members and base class objects is an empty class. Complete objects and member subobjects of an empty class type shall have nonzero size.
In C it is rather more murky, since the C99 standard has some language which implies that truly empty structures aren't allowed (see TrayMan's answer), but many compilers do allow it (e.g. gcc).
Since this is compiler-dependent, it is unlikely that you will get truly portable code in this case. As such, non-portable ways to suppress the warning may be your best bet.
In VS you would use #pragma warning.
In GCC from 4.2.1 you have Diagnostic Pragmas.
If you just need the struct symbol for casting and function arguments, then just:
typedef struct _Interface Interface;
This will create the symbol for an opaque type.
Technically this isn't even valid C.
TrayMan was a little off in his analysis; yes, 6.2.6.1 says:
Except for bit-fields, objects are composed of contiguous sequences of one or more bytes, the number, order, and encoding of which are either explicitly specified or implementation-defined.
but tie that together with 6.2.5 ¶20, which says:
— A structure type describes a sequentially allocated nonempty set of member objects (and, in certain circumstances, an incomplete array), each of which has an optionally specified name and possibly distinct type.
and now you can conclude that structures are going to be one or more bytes because they can't be empty. Your code is giving you a warning, while the same code will actually fail to compile on Microsoft's Visual Studio with an error:
error C2016: C requires that a struct or union has at least one member
So the short answer is no, there isn't a portable way to avoid this warning, because it's telling you you're violating the C standards. You'll have to use a compiler specific extension to suppress it.
The C99 standard is somewhat ambiguous on this, but seems to say that an empty struct should have non-zero size.
6.2.6.1
Except for bit-fields, objects are composed of contiguous sequences of one or more bytes,
the number, order, and encoding of which are either explicitly specified or
implementation-defined.
struct zero_information { int:0; };
The above code snippet will yield a non-zero value from sizeof(struct zero_information), but it might help you get what you're looking for, as 100% of the storage allocated for it is padding (only accessible through hacks, although I don't remember off the top of my head whether accessing the padding is undefined behavior).
Is it possible to add a member and keep the sizeof the struct zero?
Nope. FWIW, C++ allows empty structs but the sizeof() is always non-zero for an empty struct.
Any other solution?
Not any easy ones. It's worth noting that empty structs are only somewhat supported in C and disallowed in C99.
Empty structs are supported in C++ but different compilers implement them with varying results (for sizeof and struct offsets), especially once you start throwing inheritance into the mix.
If you're not requiring "too strict" adherence, you might get away with this:
struct empty {
    char nothing[0];
};
This is a GCC extension, though.
I was kind of hoping I'd be able to use the C99 feature called "flexible arrays", declared like this:
struct empty99
{
    char nothing[]; // This is a C99 "flexible array".
};
but that doesn't work; they require that there is at least one normal struct member first, so they can't be the only member.
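For reference, the smallest form C99 actually accepts looks like this (and is, of course, no longer empty):

struct almost_empty {
    char dummy;    /* at least one ordinary member is required...             */
    char rest[];   /* ...before the flexible array member, which must be last */
};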