I have an array of const struct MyType, declared in a C header and defined in the accompanying C file, whose signature cannot be changed easily. I want to provide a constexpr C++ interface to this database and am looking for a way to convert/adapt/use the const array inside a constexpr context.
Template magic, maybe? But the indirection would still be present, and the initialization time of the underlying array is not well-defined...
One proof-of-concept I had was providing the properties of the database via a number of std::integral_constant<T, value> instances. But this relied on a generated C++ duplicate of the C database. Not efficient, cumbersome, and fragile as hell!
Any tips, ideas, or devastating citations from the standard?
A variable marked constexpr must be fully computable at compile time, not link time. So if your array is defined in a .cpp file, separately from the declaration, it cannot be constexpr. No need to quote the standard here; it's very simple.
(I'm ignoring your mention of using .c files, that's not really part of the question I assume.)
I'm trying to use rlutil.h, but every time these functions are used in more than one header I get compiler errors about multiple definitions of 20-30 variables. rlutil is a simple header for coloring the terminal on Linux and Windows, in C and C++.
The variables look something like this:
const RLUTIL_STRING_T ANSI_CONSOLE_TITLE_POST = "\007";
and the typedef looks something like this:
typedef const char* RLUTIL_STRING_T;
I tried to add my own include guard, but it didn't work.
I tried layering the .h with my own .h/.c to make new functions using the rlutil.h functions, but the problem is still there.
I tried to make the variables extern, but it's worse.
I'm building it with gcc on Ubuntu.
I'm going to try this at home with MSVC 2017, but I think the behavior will be the same.
Any ideas?
I can provide more information.
Sorry for my English, I'm not a native speaker.
Thanks a lot.
The problem is that the header is only set up so that it works with C++, where the const values defined in the header rlutil.h are private to each translation unit (TU) — think source file plus headers — that includes the header. By contrast, in C, they are normal global variables defined in each TU that includes rlutil.h, leading to the multiple definitions problem.
There isn't a trivial fix — unless switching from C to C++ is deemed trivial. The header attempts to be language-neutral between C and C++, but it fails on this count. Once again, proof that C and C++ are different languages.
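To see the difference concretely, take the single line the header contains (a minimal sketch of what each language does with it):
const RLUTIL_STRING_T ANSI_CONSOLE_TITLE_POST = "\007";
/* C++: a const object at namespace scope has internal linkage, so each
   TU that includes the header gets its own private copy - no conflict. */
/* C: const does not affect linkage, so every TU that includes the
   header defines the same external symbol - multiple definition error. */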
In C, you would need to have code like:
extern const RLUTIL_STRING_T ANSI_CONSOLE_TITLE_POST;
in the header and then one source file would define the values:
const RLUTIL_STRING_T ANSI_CONSOLE_TITLE_POST = "\007"; // James Bond!
Alternatively, you could consider using static in the header:
static const RLUTIL_STRING_T ANSI_CONSOLE_TITLE_POST = "\007";
Each C file that includes this header would have its own collection of defined variables. In C, you'd be subject to compiler warnings about unused variables, which is not desirable. In C++, you might get warnings about using static instead of an anonymous namespace. So it isn't clear that this is a good solution.
If you're brave, you could read the tail end of my answer to How do I use extern to share variables between files, but the header is probably not in your control and you really need to report the trouble to the maintainers of the code. (If you are the maintainer, then think about whether a scheme such as that outlined in the answer to the other question will help.)
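For reference, the C scheme outlined in that answer looks roughly like this (a minimal sketch; the COLORS_* macro names are mine, not rlutil's, and the RLUTIL_STRING_T typedef is assumed to be in scope):
/* colors.h - one header serves as both declaration and definition */
#ifdef COLORS_MAIN              /* define this in exactly one .c file  */
#define COLORS_EXTERN           /* that TU provides the definitions... */
#define COLORS_INIT(x) = x      /* ...complete with initializers       */
#else
#define COLORS_EXTERN extern    /* all other TUs see declarations only */
#define COLORS_INIT(x)
#endif

COLORS_EXTERN const RLUTIL_STRING_T ANSI_CONSOLE_TITLE_POST COLORS_INIT("\007");
One source file does #define COLORS_MAIN before including the header; every other file includes it normally, so there is exactly one definition in the whole program.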
I know that it's poor practice to not include function prototypes, but if you don't, then the compiler will infer a prototype based on what you pass into the function when you call it (according to this answer). My question is why does the compiler infer the prototype from what you pass into the function rather than the definition of the function itself? I can imagine some kind of preprocessing step where all declared functions are identified and checked to see if a prototype exists for each one. If one doesn't have a prototype, the first line of the function is copied and stuck under the existing prototypes. Why isn't this done?
Because the C compiler was designed as a single-pass compiler, where any given file does not know about the other source files that make up the project.
Although compilers have gotten more sophisticated, and may do multiple passes, the general outline of the compilation process framework remains as it was in K&R's day:
Pre-process each source file (textual transformation only: macro replacement, #include handling, conditional compilation).
Compile the processed source into an object file.
Link the objects into an executable or library.
Inferring prototypes would have to happen in the first step, but the compiler does not know about the existence of any other objects which may contain the function definition at that time.
It might be possible to make a compiler which did what you suggest, but not without breaking the existing rules for how to infer prototypes. A change with such big consequences would make the language no longer C.
The major use for prototypes is to declare a function and inform the compiler about the number and types of its arguments in cases where the definition is not visible. Since C was originally compiled single-pass, the definition is not visible when it occurs later in the translation unit. The more important case from a modern perspective, though, is when the definition is not visible at all because it lies in a separate translation unit, possibly even in a library file that exists only in compiled form, where no information about the function's type is recorded.
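A two-file sketch of that failure mode (the names are made up; this relies on pre-C99 implicit-declaration rules, and modern compilers will at least warn):
/* main.c - compiled without ever seeing square's definition */
#include <stdio.h>

int main(void)
{
    /* No prototype in scope: under C89 rules the compiler silently
       assumes "int square(int)" from this call site. */
    printf("%d\n", square(3));
    return 0;
}

/* square.c - the real definition, in a separate translation unit */
double square(double x) { return x * x; }
The program links, but the int-vs-double mismatch is undefined behavior at runtime: exactly the class of bug prototypes were added to catch.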
My project incorporates a stack, which has a number of user-defined types (typedef). The problem is that many of these type definitions conflict with our in-house type definitions. That is, the same symbol name is being used. Is there any way to protect against this?
The root of the problem is that to use the stack in our application, or wrapper code, as the case may be, a certain header file must be included. This stack header file in turn includes the stack provider's types definition file. That's the problem. They should have included their type definition file via a non-public include path, but they didn't. Now, there are all sorts of user-defined type conflicts for very common names, such as BYTE, WORD, DWORD, and so forth.
Since you probably can't easily change the program stack you are using, you will have to start with your own code.
The first thing to do is (obviously) to limit the number of names in the global namespace, as far as possible. Don't use global variables, just use static ones, as an example.
The next step is to adopt a naming convention for your code modules. Suppose you have an "input module" in the project. You could then, for example, prefix everything in the input module with inp:
void inp_init (void);
void inp_get (int input);

#define INP_SOMECONSTANT 4

typedef enum
{
    INP_THIS,
    INP_THAT,
} inp_something_t;
And so on. Whenever these items are used elsewhere in the code, they will not only have a unique identifier, it will also be obvious to the reader which module they belong to, and therefore what purpose they have. So while fixing the namespace conflicts, you gain readability at the same time.
Something like the above could be the first steps to implementing a formal coding standard, something you need to do sooner or later anyway as a professional programmer.
I suggest you define a wrapping header that redefines all of the functions and structures exported by the stack in terms of your own types. This header is then included in your system files but not in the stack files (where it would conflict). You can then compile and link, but there is a weak point at the interface. If you select your types correctly in your redefinitions, it should work correctly, leaving only a maintenance problem on each update from the stack supplier...
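A minimal sketch of what such a wrapping header might look like (every name here is hypothetical, not the vendor's):
/* stack_wrap.h - our-side view of the vendor stack's interface */
#ifndef STACK_WRAP_H
#define STACK_WRAP_H

#include <stdint.h>

/* Our own type names, chosen to match the stack's ABI exactly. */
typedef uint8_t  sw_byte;    /* stands in for the vendor's BYTE */
typedef uint16_t sw_word;    /* stands in for the vendor's WORD */

/* Re-declare the exported functions in terms of our types; the
   signatures must stay binary-compatible with the vendor's. */
int stack_send(const sw_byte *buf, sw_word len);

#endif /* STACK_WRAP_H */
System files include stack_wrap.h; only the glue code includes the vendor header, so the conflicting typedefs never meet in one translation unit.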
I think that I've come up with a reasonable workaround, for the time being, but as Lundin stated, a formal coding standard is needed for a long-term solution.
Basically, what I did was move the inclusion of the required stack header file to before the inclusion of our in-house type-definitions file. Then, between those two includes, I added a compiler macro that sets a defined constant depending on whether the stack header's single-include protection definition has been defined. That conditional constant is then used as a conditional-compilation switch in our in-house type-definitions file to prevent the conflicting data types from being re-defined. It's a little sloppy, but progress can only be made in incremental steps.
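In code, that arrangement would look something like this (the guard and macro names are invented for illustration):
#include "vendor_stack.h"       /* the stack header must come first */

#ifdef VENDOR_STACK_H           /* the stack header's include guard */
#define SKIP_BASIC_TYPES 1      /* stack already defined BYTE/WORD/DWORD */
#endif

#include "inhouse_types.h"

/* ...and inside inhouse_types.h: */
#ifndef SKIP_BASIC_TYPES
typedef unsigned char  BYTE;
typedef unsigned short WORD;
typedef unsigned long  DWORD;
#endif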
I'm following a guide to learn curses, and all of the C code within it prototypes functions before main(), then defines them afterward. In my C++ learning, I had heard about function prototyping but never done it, and as far as I know it doesn't make much difference to how the code is compiled. Is it more a programmer's personal choice than anything else? If so, why was it included in C at all?
Function prototyping originally wasn't included in C. When you called a function, the compiler just took your word for it that it would exist and took the type of arguments you provided. If you got the argument order, number, or type wrong, too bad – your code would fail, possibly in mysterious ways, at runtime.
Later versions of C added function prototyping to address these problems. Arguments are implicitly converted to the declared types under some circumstances, or flagged as incompatible with the prototype, and the compiler can flag the wrong number or order of arguments as an error. This had the side effect of enabling varargs functions and the special argument handling they require.
Note that, in C (and unlike in C++), a function declared foo_t func() is not the same as a function declared as foo_t func(void). The latter is prototyped to have no arguments. The former declares a function without a prototype.
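A quick illustration of that distinction (note that C23 changes this and makes an empty parameter list mean "no arguments", as in C++):
int f();        /* no prototype: argument number and types unchecked */
int g(void);    /* prototyped: takes no arguments                    */

void demo(void)
{
    f(1, 2, 3);  /* accepted: the compiler has nothing to check against */
    g(1);        /* rejected: the prototype says g takes no arguments   */
}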
In C, prototyping is needed so that the compiler knows you have a function called x() before it has reached the definition; that way y() knows that an x() exists. C compiles top-down, so the short answer is that a function needs to be declared before it is used:
int x(void);   /* prototypes: x and y are known before they are used */
int y(void);

int main(void)
{
    y();
    return 0;
}

int y(void)
{
    x();   /* fine: x was declared above, even though it's defined below */
    return 0;
}

int x(void)
{
    /* ... more code ...        */
    /* maybe even a call y();   */
    return 0;
}
I was under the impression that it was so customers could have access to the .h file for libraries and see what functions were available to them, without having to see the implementation (which would be in another file).
It's useful for seeing what a function returns and what parameters it takes.
Function prototyping is a remnant from the olden days of compiler writing. It used to be considered horribly inefficient for a compiler to have to make multiple passes over a source file to compile it.
In C, in certain contexts, referring to a function in one manner is syntactically equivalent to referring to a variable: consider taking a pointer to a function versus taking a pointer to a variable. In the compiler's intermediate representation, the two are semantically distinct, but syntactically, whether an identifier is a variable, a function name, or an invalid identifier cannot be determined from the context.
Since it's not determinable from the context, without function prototypes, the compiler would need to make an extra pass over each one of your source files each time one of them compiles. This would add an extra O(n) factor for any compilation (that is, if compilation were O(m), it would now be O(m*n)), where n is the number of files in your project. In large projects, where compilation is already on the order of hours, having a two-pass compiler is highly undesirable.
Forward declaring all your functions would allow the compiler to build a table of functions as it scanned the file, and be able to determine when it encountered an identifier whether it referred to a function or a variable.
As a result of this, C (and by extension, C++) compilers can be extremely efficient in compilation.
It allows you to have a situation in which, say, you have an iterator class defined in a separate .h file which includes the parent container's header. Since the iterator's header includes the parent's, the parent can't have a method like getIterator(): the return type would have to be the iterator class, so the parent header would have to include the iterator header, creating a cyclic loop of inclusions (one includes the other, which includes the first, which includes the other again, etc.).
If you put a prototype (forward declaration) of the iterator class inside the parent container's header, you can have such a method without including the iterator header. It works because you're simply saying that such a type exists and will be defined elsewhere.
There are ways of getting around it, like having a precompiled header, but in my opinion it's less elegant and comes with a slew of disadvantages. Of course, this is C++, not C. However, in practice you might have a situation in which you'd like to arrange code in this fashion, classes aside.
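The same trick works in plain C with incomplete struct types; a sketch with made-up names:
/* container.h - does NOT include iterator.h */
struct iterator;                 /* forward declaration only */

struct container {
    int size;
};

/* Returning a pointer to the incomplete type is fine here; only
   code that dereferences it needs the full struct definition. */
struct iterator *container_get_iterator(struct container *c);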
I have a C project where all code is organized in *.c/*.h file pairs, and I need to define a constant value in one file, which will be however also be used in other files. How should I declare and define this value?
Should it be as static const ... in the *.h file? As extern const ... in the *.h file and defined in the *.c file? In what way does it matter if the value is not a primitive datatype (int, double, etc), but a char * or a struct? (Though in my case it is a double.)
Defining things in *.h files doesn't seem like a good idea generally; one should declare things in the *.h file but define them in the *.c file. However, the extern const ... approach seems inefficient, as the compiler wouldn't be able to inline the value; it would instead have to be accessed via its address every time.
I guess the essence of this question is: should one define static const ... values in *.h files in C, in order to use them in more than one place?
The rule I follow is to only declare things in H files and define them in C files. You can declare and define in a single C file, assuming it will only be used in that file.
By declaration, I mean notify the compiler of its existence but don't allocate space for it. This includes #define, typedef, extern int x, and so on.
Definitions assign values to declarations and allocate space for them, such as int x and const int x. This includes function definitions; including those in header files frequently leads to wasted code space.
I've seen too many junior programmers get confused when they put const int x = 7; in a header file and then wonder why they get a link error for x being defined more than once. I think at a bare minimum, you would need static const int x so as to avoid this problem.
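A minimal sketch of that pitfall and the fix (pick one of the two lines for the header):
/* common.h, included by both a.c and b.c */
const int x = 7;          /* C: external linkage - a.o and b.o both
                             define x, and the link fails            */
static const int x = 7;   /* internal linkage - each TU gets its own
                             private copy, and the linker is happy   */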
I wouldn't be too worried about the speed of the code. The main issue with computers (in terms of speed and cost) long ago shifted from execution speed to ease of development.
If you need constants (real, compile-time constants), you can do that three ways, putting them into header files (there is nothing wrong with that):
/* 1. Enumeration constants */
enum {
    FOO_SIZE = 1234,
    BAR_SIZE = 5678
};

/* 2. Macros */
#define FOO_SIZE 1234
#define BAR_SIZE 5678

/* 3. static const objects */
static const int FOO_SIZE = 1234;
static const int BAR_SIZE = 5678;
In C++, I tend to use the enum way, since it can be scoped into a namespace. For C, I use the macro. This basically comes down to a matter of taste, though. If you need floating-point constants, you can't use the enumeration anymore. In C++ I use the last way, the static const double, in that case (note that in C++ static would be redundant there; the constants become internal automatically, since they are const). In C, I would keep using the macros.
It's a myth that the third method will slow down your program in any way. I just prefer the enumeration because the values you get are rvalues: you can't take their address, which I regard as added safety. In addition, there is much less boilerplate code to write, and the eye stays concentrated on the constants.
Do you really need to worry about the advantage of inlining? Unless you're writing embedded code, stick to readability. If it's really a magic number or something, I'd use a define; I think const is better for things like constant version strings and qualifying function-call arguments. That said, the define-in-.c, declare-in-.h rule is a fairly universally accepted convention, and I wouldn't break it just because you might save a memory lookup.
As a general rule, you do not define things as static in a header. If you do define static variables in a header, each file that uses the header gets its own private copy of whatever is declared static, which is the antithesis of the DRY principle: don't repeat yourself.
So, you should use an alternative. For integer types, using enum (defined in a header) is very powerful; it works well with debuggers too (though the better debuggers may be able to help with #define macro values too). For non-integer types, an extern declaration (optionally qualified with const) in the header and a single definition in one C file is usually the best way to go.
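For the double in the question, that pattern would look like this (a sketch; the name and value are invented):
/* constants.h */
extern const double FILTER_GAIN;    /* declaration: no storage allocated */

/* constants.c */
const double FILTER_GAIN = 0.75;    /* the single definition */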
I'd like to see more context for your question. The type of the value is critical, but you've left it out. The meaning of the const keyword in C is quite subtle; for example
const char *p;
does not mean that pointer p is a constant; you can write p all you like. What you cannot write is the memory that p points to, and this stays true even as p's value changes. This is about the only case I really understand; in general, the meaning of the subtle placement of const eludes me. But this special case is extremely useful for function parameters because it extracts a promise from the function that the memory the argument points to will not be mutated.
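For reference, the placement variants read right to left:
const char *p;         /* pointer to const char: *p can't be assigned, p can   */
char *const q;         /* const pointer to char: q can't be reassigned, *q can */
const char *const r;   /* const pointer to const char: neither can change      */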
There is one other special case everyone should know about: integers. Almost always, constant named integers should be defined in a .h file as enumeration literals. enum types not only let you group related constants together in a natural way, but also allow the names of those constants to be seen in the debugger, which is a huge advantage.
I've written tens of thousands of lines of C; probably hundreds of thousands if I try to track it all down. (wc ~/src/c/*.c says 85 thousand, but some of that is generated, and of course there's a lot of C code lurking elsewhere.) Aside from the two cases above, I've never found much use for const. I would be pleased to learn a new, useful example.
I can give you an indirect answer. In C++ (as opposed to C), const implies static. That is to say, in C++ static const is the same thing as const. So that tells you how the C++ standards body feels about the issue, i.e. all consts should be static.
For an autoconf environment: you can always define constants in the configure file as well. AC_DEFINE() is, I guess, the macro for defining them across the entire build.
To answer the essence of your question:
You generally do NOT want to define a static variable in a header file.
This would give you duplicate variables in each translation unit (C file) that includes the header.
Variables in a header should really be declared extern, since that is the implied visibility.
See this question for a good explanation.
Actually, the situation might not be so dire, as the compiler would probably convert a const type to a literal value. But you might not want to rely on that behavior, especially if optimizations are turned off.
In C++, you should always use
const int SOME_CONST = 17;
for constants and never
#define SOME_CONST 17
Defines will almost always come back and bite you later. Consts are in the language and are strongly typed, so you won't get weird errors because of some hidden interaction. I would put the const in the appropriate header file. As long as it's guarded by #pragma once (or #ifndef x / #define x / #endif), you won't ever get any compile errors.
In vanilla C, you might have compatibility problems where you must use #defines.