Why use <stdbool.h> instead of _Bool?

Any time I had the need of a Boolean type I was told to either create one, or better yet, use stdbool.h.
Since stdbool.h just defines bool in terms of _Bool (#define bool _Bool), is there a reason to use the header instead of just using the type _Bool? Is it just for the additional macros (#define true 1 and #define false 0)?

The obvious type to add to the language was bool. But unfortunately, plenty of existing code had already defined bool in other shapes and forms. Recall that support for a boolean type was only added in C99.
So the C language committee had no choice but to pull out a reserved identifier for it (_Bool). But since the obvious choice of type name is still the same, stdbool.h was added to give users the obvious name. That way, if your code didn't have a home-brewed bool, you could use the built-in one.
So do indeed use stdbool.h if you aren't bound to some existing home-brewed bool. It will be the standard type, with all the benefits that the type brings.
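For example, a minimal sketch (the function is my own illustration, nothing from the standard):

#include <stdbool.h>
#include <stdio.h>

/* bool, true and false come from <stdbool.h>; bool expands to _Bool */
static bool is_even(int n) {
    return n % 2 == 0;   /* the result converts to bool as 0 or 1 */
}

int main(void) {
    bool b = is_even(4);
    printf("%d\n", b);   /* prints 1 */
    return 0;
}

This compiles with any C99-or-later compiler (e.g. gcc -std=c99).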

The common practice has always been to use bool, but when the type was officially introduced into the standard in C99, the committee didn't want to break the "roll-your-own" implementations. So they made the type _Bool as a kind of hack around the unofficial bools; now there's no type-name collision. Anyway, the point is: use bool, unless you have a legacy codebase it would break.

They are the same. bool is an alias for _Bool.
Before C99 we didn't have this type. (Earlier, use was limited to an integer type, with 0 as false and 1 as true.)
You don't have to use it. You can even #undef bool (though it is recommended not to do so). But including stdbool.h (and using bool, the alias of _Bool) is good, because then if bool someday becomes reserved, your code already complies with that.1
1. You can use bool in other ways, but it is better not to. stdbool.h was introduced with the plan of gradually making bool standard, at which point an even stricter rule would apply: bool would be reserved as a keyword and could not be used for anything else.
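A sketch of that escape hatch, for the curious. C99 7.16p4 explicitly permits a program to #undef (and even redefine) bool, true and false; the legacy typedef below is hypothetical:

#include <stdbool.h>
#include <stdio.h>

#undef bool                  /* permitted by C99 7.16p4, but best avoided */
typedef unsigned char bool;  /* hypothetical home-brewed legacy definition */

int main(void) {
    bool flag = true;        /* true is still the macro 1 from <stdbool.h> */
    printf("%d\n", flag);    /* prints 1 */
    return 0;
}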

Index into an array of values of some `enum` type in C

The C11 spec on enums1 states that the enumeration constants must have type int (1440-1441):
1440 The expression that defines the value of an enumeration constant shall be an integer constant expression that has a value representable as an int.
1441 The identifiers in an enumerator list are declared as constants that have type int and may appear wherever such are permitted.107)
However, it indicates that the backing type of the enum can be a signed integer type, an unsigned integer type, or char, so long as it fits the range of constants in the enum (1447-1448):
1447 Each enumerated type shall be compatible with char, a signed integer type, or an unsigned integer type.
1448 The choice of type is implementation-defined,108) but shall be capable of representing the values of all the members of the enumeration.
This seems to indicate that only the compiler can know the width of an enum type, which is fine until you consider an array of enum types as part of a dynamically linked library.
Say you had a function:
enum my_enum return_fifth(enum my_enum lst[]) {
    return lst[5];
}
This would be fine when linked statically, because the compiler knows the size of a my_enum, but any other C code linking to it may not.
So, how is it possible for one C library to dynamically link to another C library and know how the compiler decided to implement the enums? (Or do most modern compilers just stick with int/uint and forgo using chars altogether?)
1. Okay, I know this website is not quite the C11 standard, whereas this one is a bit closer: http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf
The C standard doesn't say anything about dynamic libraries, or even static libraries; these concepts don't exist in the standard. They live in implementation-defined-behavior territory.
But, as you said, nothing prevents a compiler from using a different type for an enumeration, which means one compiler can use one type and another compiler a different one.
This would be fine when linked statically
In fact, no. Say A is a compiler that uses char and B is a compiler that uses int, and say these types are not the same size. You compile a static library with compiler A and statically link it into a program compiled by B. The linking is static, and still B can't know that A didn't use B's type for the enum.
So, how is it possible for one C library to dynamically link to another C library
Well, as I said, this is not possible for a static library, and for the same reason it is not possible for a dynamic library.
Or do most modern compilers just stick with int/uint and forgo using chars altogether?
Big compilers generally coordinate with one another to use the same rules in the same environment, yes. But nothing in C guarantees this behavior. (On the compatibility problem: a lot of people talk about "the C ABI", despite the fact that it doesn't exist in the standard.)
So the best advice is to compile your dynamic library with the same compiler and options as your main program; checking your compiler's documentation is a big plus, too.
The complete definition of the enum needs to be visible at the point where it is used; then the compiler will know what its size will be.
Forward declarations of enum types are not allowed.
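If a library boundary forces you to rely on a particular representation anyway, one mitigation (my suggestion, not something the standard or either answer mandates) is to turn the assumption into a build-time check on both sides, using C11's _Static_assert; the enum here is hypothetical:

enum my_enum { FIRST, SECOND, THIRD };

/* Fail the build if this compiler picked a representation other than the
   int-sized one the library's ABI is (hypothetically) assumed to use. */
_Static_assert(sizeof(enum my_enum) == sizeof(int),
               "enum my_enum is not int-sized; ABI mismatch");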

Does MISRA C 2012 say not to use bool

I am in the early stages of framing stuff out on a new project.
I defined a function with a return type of "bool"
I got this output from PC-Lint
Including file sockets.h (hdr)
bool sock_close(uint8_t socket_id);
^
"LINT: sockets.h (52, 1) Note 970: Use of modifier or type '_Bool' outside of a typedef [MISRA 2012 Directive 4.6, advisory]"
I went ahead and defined this in another header to shut lint up:
typedef bool bool_t;
Then I started wondering why I had to do that and why it changed anything. I turned to MISRA 2012 Dir 4.6. It is concerned mostly with primitive types like short, int, and long: their width and whether they are signed.
The standard does not give any amplification, rationale, exception, or example for bool.
bool is explicitly defined as _Bool in stdbool.h in C99. So does this criterion really apply to bool?
I thought _Bool was explicitly always the "smallest standard unsigned integer type large enough to store the values 0 and 1" according to section 6.2.5 of C99. So we know bool is unsigned. Is the issue then just that _Bool is not fixed-width and is subject to being promoted somehow? Because the rationale would seem to contradict that notion.
Adherence to this guideline does not guarantee portability because the size of the int type may determine whether or not an expression is subject to integer promotion.
How does just putting typedef bool bool_t; change anything, given that I do nothing to indicate the width or the signedness in doing so? The width of bool_t will just be platform-dependent too. Is there a better way to redefine bool?
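As an aside, the promotion in question is easy to observe; here is a little experiment of my own (not from MISRA or PC-Lint):

#include <stdbool.h>
#include <stdio.h>

int main(void) {
    bool b = true;
    printf("%zu\n", sizeof b);         /* implementation-defined, often 1 */
    printf("%zu\n", sizeof (b + 0));   /* sizeof(int): b promotes to int */
    return 0;
}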
A type must not be defined with a specific length unless the implemented type is actually of that length
so typedef bool bool8_t; should be totally illegal.
Is Gimpel wrong in their interpretation of Directive 4.6 or are they spot on?
Use of modifier or type '_Bool' outside of a typedef [MISRA 2012 Directive 4.6, advisory]
That's nonsense; directive 4.6 is only concerned with using the types from stdint.h rather than int, short, etc. The directive is about the basic numerical types. bool has nothing to do with that directive whatsoever, as it is not a numerical type.
For reasons unknown, the MISRA-C:2012 examples use a weird type called bool_t, which isn't standard. But MISRA by no means enforces the use of this type anywhere; in particular it is not enforced by directive 4.6, which doesn't even mention booleans. MISRA does not discourage the use of bool or _Bool anywhere.
Is Gimpel wrong in their interpretation of Directive 4.6
Yes, their tool is giving incorrect diagnostics.
In addition, you may have to configure the tool (if possible) to tell it which bool type is used. Section 5.3.2 mentions that you might have to do so if not using _Bool, implying that all static analysers must understand _Bool. But even if the bool type is correctly configured, dir 4.6 has nothing to do with it.
A potential concern with Boolean types is that a lot of code prior to C99 used a single-byte type to hold true/false values, and a fair amount of it may have used the name "bool". Attempting to store any multiple of 256 into most such types would be regarded as storing zero, while storing a non-zero multiple of 256 into a C99 "bool" would yield 1. If a piece of code which uses a C99 "bool" is ported into code that uses a typedef'd byte, the result could very easily malfunction (it's somewhat less likely that code written for a typedef'd byte would rely on any particular behavior when storing a value other than 0 or 1).
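A small demonstration of that difference (assuming the typical 8-bit char; the typedef name is made up):

#include <stdio.h>

typedef unsigned char oldbool;     /* hypothetical pre-C99 home-brewed bool */

int main(void) {
    oldbool a = 256;               /* wraps to 0 in an 8-bit char: "false" */
    _Bool   b = 256;               /* any non-zero value converts to 1: true */
    printf("a=%d b=%d\n", a, b);   /* prints a=0 b=1 */
    return 0;
}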

Why is C standard bool not bool_t?

C's stdbool.h provides a #define that makes bool expand to the type _Bool.
I know a #define was used instead of a typedef to allow #undef-ing around legacy-code clashes. But why wasn't bool_t (to match the other standard types) used in place of bool? Is the _t suffix for typedef'd types only?
I ask because the platform I am working on adds a typedef of _Bool named bool_t.
EDIT: It's been correctly pointed out that the following is probably opinion based so consider it as only my reason for the primary question above.
So I'm wondering should I use that or continue to use bool? I kind of like the way the former makes the boolean standard type form match the other standard types.
C introduced the bool macro (which expands to _Bool) in the C99 standard. They probably chose the name bool instead of bool_t because C++ had named its boolean type bool in 1998. In C++98, none of the integral types use a _t suffix (except wchar_t, which was already in C90).
I think you just found that the standard is inconsistent. bool is not the only type whose name does not have the _t suffix. Consider, for example, <setjmp.h>. It defines jmp_buf which is clearly a typedef yet does not end in _t.
I think the main reason why it's not bool_t is to make it look like it belongs to the family of standard types, int etc. However, they couldn't name the type bool due to compatibility purposes so hence the _Bool.
As for whether you should use bool or bool_t I think the question is rather opinion-based and cannot be answered just by using facts. However, I recommend you select one or the other and document that decision well in your coding guidelines.

Is using char as a bool in C bad practice?

Is there any downside to using
typedef char bool;
enum boolean { false, true };
in C to provide a semantic boolean type?
In C99, you should be using stdbool.h, which defines bool, true, and false.
Otherwise what you have is fine. Using just the enum may be a bit simpler, but if you really want to save space what you have works.
The short answer is: it's fine. It's particularly good if you needed to make large arrays of them, although I would be tempted to just use the C99 built-in1.
Since you asked "is there any downside..." I suppose I could remark that there have been important machines that did not actually have a character load instruction. (The Cray and initial DEC Alpha come to mind.) Machines in the future may suddenly go all minimal once again.
It will always be fast to load a standard integral type.
It will probably always be fast to load a single character.
1. See C99 6.2.5. There is a built-in type _Bool. Then, if you include <stdbool.h> (see C99 7.16) you get an alias, the more gracefully named bool, and defines for true and false. If you use this it will clash with your typedef but I'm sure it would be an easy thing to fix.
The downside of typedef char bool; is that if you compile with a C99 implementation and happen to include <stdbool.h>, this ends up as typedef char _Bool;, which is wrong. Also, if you ever tried to compile the code as C++, you'd have issues (that's not necessarily a problem, but it could be).
It would probably better either to use <stdbool.h> if your implementation provides one or to use a different name for the type, like BOOL.
I would suggest using a single bit to represent true or false, rather than a whole char. A char uses 8 bits; we can store 1 for true and 0 for false in just 1 bit. That is more memory-efficient and still serves the purpose, e.g. a bit-field member inside a struct: char flag:1;
Reference : http://en.wikipedia.org/wiki/Bit_field
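For instance, a minimal sketch of that suggestion. Note that bit-fields must live inside a struct, and standard C only guarantees bit-fields of type _Bool, signed int, and unsigned int (a char bit-field like the one above is implementation-defined), so unsigned int is used here:

#include <stdio.h>

struct flags {
    unsigned int is_open  : 1;     /* one bit each: 0 = false, 1 = true */
    unsigned int is_dirty : 1;     /* several flags pack into one unit */
};

int main(void) {
    struct flags f = { .is_open = 1, .is_dirty = 0 };
    printf("open=%u dirty=%u\n", f.is_open, f.is_dirty);
    return 0;
}

The saving only materializes when several flags share a struct; a lone one-bit field still occupies at least one addressable unit.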

difference between #define and enum{} in C [duplicate]

What's the difference between using a define statement and an enum statement in C/C++ (and is there any difference when using them with either C or C++)?
For example, when should one use
enum {BUFFER = 1234};
over
#define BUFFER 1234
enum defines a syntactical element.
#define is a preprocessor directive, executed before the compiler sees the code, and is therefore not a language element of C itself.
Generally enums are preferred as they are type-safe and more easily discoverable. Defines are harder to locate and can have complex behavior; for example, one piece of code can redefine a #define made by another. This can be hard to track down.
#define statements are handled by the pre-processor before the compiler gets to see the code so it's basically a text substitution (it's actually a little more intelligent with the use of parameters and such).
Enumerations are part of the C language itself and have the following advantages.
1/ They have a type, and the compiler can type-check them.
2/ Since they are available to the compiler, symbol information on them can be passed through to the debugger, making debugging easier.
Enums are generally preferred over #define wherever it makes sense to use an enum:
Debuggers can show you the symbolic name of an enum's value ("openType: OpenExisting" rather than "openType: 2").
You get a bit more protection from name clashes, but this isn't as bad as it used to be (most compilers warn about redefinition).
The biggest difference is that you can use enums as types:
// Yeah, dumb example
enum OpenType {
    OpenExisting,
    OpenOrCreate,
    Truncate
};

void OpenFile(const char* filename, enum OpenType openType, int bufferSize);
This gives you type-checking of parameters (you can't mix up openType and bufferSize as easily), and makes it easy to find what values are valid, making your interfaces much easier to use. Some IDEs can even give you intellisense code completion!
Define is a preprocessor command; it's just like doing "replace all" in your editor. It replaces one string with another, and then the result is compiled.
Enum is a special case of type, for example, if you write:
enum ERROR_TYPES
{
    REGULAR_ERR = 1,
    OK = 0
};
there exists a new type called ERROR_TYPES.
It is true that REGULAR_ERR equals 1, but a cast from this type to int should produce a warning (if you configure your compiler to high verbosity).
Summary:
they are alike, but when using enum you benefit from type checking, while with defines you simply replace strings in your code.
It's always better to use an enum if possible. Using an enum gives the compiler more information about your source code, a preprocessor define is never seen by the compiler and thus carries less information.
For implementing e.g. a bunch of modes, using an enum makes it possible for the compiler to catch missing case-statements in a switch, for instance.
enum can group multiple elements in one category:
enum fruits{ apple=1234, orange=12345};
while #define can only create unrelated constants:
#define apple 1234
#define orange 12345
#define is a preprocessor command, enum is in the C or C++ language.
It is always better to use enums over #define for this kind of case. One thing is type safety. Another is that when you have a sequence of values, you only have to give the beginning of the sequence in the enum; the other values then get consecutive values automatically.
enum {
ONE = 1,
TWO,
THREE,
FOUR
};
instead of
#define ONE 1
#define TWO 2
#define THREE 3
#define FOUR 4
As a side-note, there are still some cases where you may have to use #define (typically for some kind of macro, if you need to be able to construct an identifier that contains the constant), but that's kind of macro black magic and very, very rarely the way to go. If you go to these extremities you should probably use a C++ template (but if you're stuck with C...).
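For the record, a sketch of that black magic (the macro is made up): token pasting with ## can construct identifiers that embed the constant, which no enum can do.

#include <stdio.h>

/* hypothetical macro: declares a variable whose *name* contains n */
#define DECLARE_COUNTER(n) static int counter_##n = n;

DECLARE_COUNTER(1)   /* expands to: static int counter_1 = 1; */
DECLARE_COUNTER(2)   /* expands to: static int counter_2 = 2; */

int main(void) {
    printf("%d %d\n", counter_1, counter_2);
    return 0;
}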
If you only want this single constant (say, for a buffer size), I would not use an enum but a define. I would use enums for things like return values (that mean different error conditions) and wherever we need to distinguish different "types" or "cases". In that case we can use the enum to create a new type usable in function prototypes etc., and then the compiler can sanity-check the code better.
Besides all the things already written, there is one point that has been stated but not shown, and it is interesting. E.g.
enum action { DO_JUMP, DO_TURNL, DO_TURNR, DO_STOP };
//...
void do_action( enum action anAction, info_t x );
Considering action as a type makes things clearer. Using define, you would have written
void do_action(int anAction, info_t x);
For integral constant values I've come to prefer enum over #define. There seem to be no disadvantages to using enum (discounting the minuscule disadvantage of a bit more typing), and you have the advantage that enum can be scoped, while #define identifiers have global scope that tramples everything.
Using #define isn't usually a problem, but since there are no drawbacks to enum, I go with that.
In C++ I also generally prefer enum to const int, even though in C++ a const int can be used in place of a literal integer value (unlike in C), because enum is portable to C (which I still work in a lot).
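To make that C/C++ difference concrete, a sketch under the usual C rules (identifiers are mine): in C a const int is not an integer constant expression, so it cannot size a file-scope array, while an enumerator can.

#include <stdio.h>

enum { BUF_SIZE = 128 };       /* an integer constant expression in C */
static char buffer[BUF_SIZE];  /* OK in both C and C++ */

/* In C the following would be rejected, because a const-qualified object
   is not a constant expression there (in C++ it is fine):

   const int SIZE = 128;
   static char buffer2[SIZE];
*/

int main(void) {
    printf("%zu\n", sizeof buffer);   /* prints 128 */
    return 0;
}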
If you have a group of constants (like "Days of the Week") enums would be preferable, because it shows that they are grouped; and, as Jason said, they are type-safe. If it's a global constant (like version number), that's more what you'd use a #define for; although this is the subject of a lot of debate.
In addition to the good points listed above, you can limit the scope of enums to a class, struct, or namespace. Personally, I like to have the minimum number of relevant symbols in scope at any one time, which is another reason for using enums rather than #defines.
Another advantage of an enum over a list of defines is that compilers (gcc at least) can generate a warning when not all values are checked in a switch statement. For example:
enum {
    STATE_ONE,
    STATE_TWO,
    STATE_THREE
};
...
switch (state) {
case STATE_ONE:
    handle_state_one();
    break;
case STATE_TWO:
    handle_state_two();
    break;
}
In the previous code, the compiler is able to generate a warning that not all values of the enum are handled in the switch. If the states were done as #define's, this would not be the case.
enums are more for enumerating some kind of set, like the days of the week. If you need just one constant number, const int (or double, etc.) would definitely be better than an enum. I personally do not like #define (at least not for the definition of constants) because it does not give me type safety, but you can of course use it if it suits you better.
Creating an enum creates not only literals but also the type that groups these literals: this adds semantics to your code that the compiler is able to check.
Moreover, when using a debugger, you have access to the values of enum literals. This is not always the case with #define.
While several answers above recommend to use enum for various reasons, I'd like to point out that using defines has an actual advantage when developing interfaces. You can introduce new options and you can let software use them conditionally.
For example:
#define OPT_X1 1 /* introduced in version 1 */
#define OPT_X2 2 /* introduced in version 2 */
Then software which can be compiled against either version can do
#ifdef OPT_X2
int flags = OPT_X2;
#else
int flags = 0;
#endif
Whereas with an enumeration this isn't possible without a run-time feature-detection mechanism.
Enum:
1. Generally used for multiple related values.
2. An enumerator has a name and a value; the names must be distinct, but the values may repeat. If no value is given, the first enumerator is 0, the second is 1, and so on, unless values are specified explicitly.
3. Enums have a type, and the compiler can type-check them.
4. They make debugging easier.
5. Their scope can be limited (e.g. to a class).
Define:
1. Generally used when only one value has to be defined.
2. It simply replaces one string with another.
3. Its scope is global; we cannot limit it.
Overall, we should prefer enum.
There is little difference. The C Standard says that enumerations have integral type and that enumeration constants are of type int, so both may be freely intermixed with other integral types, without errors. (If, on the other hand, such intermixing were disallowed without explicit casts, judicious use of enumerations could catch certain programming errors.)
Some advantages of enumerations are that the numeric values are automatically assigned, that a debugger may be able to display the symbolic values when enumeration variables are examined, and that they obey block scope. (A compiler may also generate nonfatal warnings when enumerations are indiscriminately mixed, since doing so can still be considered bad style even though it is not strictly illegal.) A disadvantage is that the programmer has little control over those nonfatal warnings; some programmers also resent not having control over the sizes of enumeration variables.
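To illustrate the block-scope point, a tiny sketch of my own (names are made up):

#include <stdio.h>

#define MODE_RAW 1               /* visible from here to the end of the file */

static void demo(void) {
    enum { MODE_COOKED = 2 };    /* visible only inside demo() */
    printf("%d %d\n", MODE_RAW, MODE_COOKED);
}

int main(void) {
    demo();
    /* MODE_COOKED is out of scope here; MODE_RAW still expands. */
    return 0;
}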
