Objective-C: BOOL vs bool

I saw the "new type" BOOL (YES, NO).
I read that this type is almost like a char.
For testing I did:
NSLog(@"Size of BOOL %zu", sizeof(BOOL));
NSLog(@"Size of bool %zu", sizeof(bool));
Good to see that both logs display "1" (in some C++ implementations bool is the size of an int, so its sizeof is 4).
So I was just wondering if there are some issues with the bool type or something?
Can I just use bool (that seems to work) without losing speed?

From the definition in objc.h:
#if (TARGET_OS_IPHONE && __LP64__) || TARGET_OS_WATCH
typedef bool BOOL;
#else
typedef signed char BOOL;
// BOOL is explicitly signed so @encode(BOOL) == "c" rather than "C"
// even if -funsigned-char is used.
#endif
#define YES ((BOOL)1)
#define NO ((BOOL)0)
So, yes, you can assume that BOOL is a char. You can use the (C99) bool type, but all of Apple's Objective-C frameworks and most Objective-C/Cocoa code use BOOL, so you'll save yourself a headache if the typedef ever changes by just using BOOL.

As mentioned above, BOOL is a signed char (on most platforms). bool - the boolean type from the C99 standard (_Bool).
BOOL - YES/NO. bool - true/false.
See examples:
bool b1 = 2;
if (b1) printf("REAL b1 \n");
if (b1 != true) printf("NOT REAL b1 \n");
BOOL b2 = 2;
if (b2) printf("REAL b2 \n");
if (b2 != YES) printf("NOT REAL b2 \n");
And result is
REAL b1
REAL b2
NOT REAL b2
Note that bool != BOOL. The result of the code below is only "ONCE AGAIN - REAL b2":
b2 = b1;
if (b2) printf("ONCE AGAIN - REAL b2 \n");
if (b2 != true) printf("ONCE AGAIN - NOT REAL b2 \n");
If you want to convert bool to BOOL you should use the following code:
BOOL b22 = b1 ? YES : NO; // and back: bool b11 = b2 ? true : false;
So, in our case:
BOOL b22 = b1 ? 2 : NO;
if (b22) printf("ONCE AGAIN MORE - REAL b22 \n");
if (b22 != YES) printf("ONCE AGAIN MORE- NOT REAL b22 \n");
And so... what do we get now? :-)

At the time of writing this is the most recent version of objc.h:
/// Type to represent a boolean value.
#if (TARGET_OS_IPHONE && __LP64__) || TARGET_OS_WATCH
#define OBJC_BOOL_IS_BOOL 1
typedef bool BOOL;
#else
#define OBJC_BOOL_IS_CHAR 1
typedef signed char BOOL;
// BOOL is explicitly signed so @encode(BOOL) == "c" rather than "C"
// even if -funsigned-char is used.
#endif
It means that on 64-bit iOS devices and on watchOS, BOOL is exactly the same thing as bool, while on all other targets (OS X, 32-bit iOS) it is a signed char and cannot even be overridden by the compiler flag -funsigned-char.
It also means that this example code will run differently on different platforms (tested it myself):
int myValue = 256;
BOOL myBool = myValue;
if (myBool) {
    printf("i'm 64-bit iOS");
} else {
    printf("i'm 32-bit iOS");
}
BTW, never assign things like array.count to a BOOL variable, because about 0.4% of the possible values (any count that is a multiple of 256) will truncate to 0 and read as NO. See the sketch below.
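Here is a minimal plain-C sketch of that truncation pitfall; BOOL and NO are modeled with a local typedef standing in for the Objective-C definitions on a 32-bit target, so treat it as an illustration rather than the real headers:
#include <stdio.h>

typedef signed char BOOL;   /* stand-in for the Objective-C typedef on 32-bit targets */
#define NO  ((BOOL)0)

int main(void) {
    unsigned long count = 256;      /* imagine this is an array count */
    BOOL hasItems = (BOOL)count;    /* only the low byte survives, so hasItems == 0 (NO) */
    printf("hasItems = %d\n", hasItems);        /* prints 0 even though count is non-zero */
    printf("count != 0 -> %d\n", count != 0);   /* prints 1: the safe way to test */
    return 0;
}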

The Objective-C type you should use is BOOL. There is nothing like a native boolean datatype, therefore to be sure that the code compiles on all compilers use BOOL. (It's defined in the Apple frameworks.)

Yup, BOOL is a typedef for a signed char according to objc.h.
I don't know about bool, though. That's a C++ thing, right? If it's defined as a signed char where 1 is YES/true and 0 is NO/false, then I imagine it doesn't matter which one you use.
Since BOOL is part of Objective-C, though, it probably makes more sense to use a BOOL for clarity (other Objective-C developers might be puzzled if they see a bool in use).

Another difference between bool and BOOL is that they do not convert exactly to the same kind of objects, when you do key-value observing, or when you use methods like -[NSObject valueForKey:].
As everybody has said here, BOOL is char. As such, it is converted to an NSNumber holding a char. This object is indistinguishable from an NSNumber created from a regular char like 'A' or '\0'. You have totally lost the information that you originally had a BOOL.
However, bool is converted to a CFBoolean, which behaves the same as NSNumber, but which retains the boolean origin of the object.
I do not think that this is an argument in a BOOL vs. bool debate, but this may bite you one day.
Generally speaking, you should go with BOOL, since this is the type used everywhere in the Cocoa/iOS APIs (designed before C99 and its native bool type).

The accepted answer has been edited and its explanation has become a bit incorrect. The code sample has been refreshed, but the text below it stays the same. You cannot assume that BOOL is a char anymore, since it depends on the architecture and platform.
Thus, if you run your code on a 32-bit platform (for example, an iPhone 5) and print @encode(BOOL), you will see "c". It corresponds to the char type.
But if you run your code on an iPhone 5s (64-bit), you will see "B". It corresponds to the bool type.

As mentioned above, BOOL may be a signed char type depending on your architecture, while bool only ever holds the values 0 or 1. A simple experiment will show why BOOL and bool can behave differently:
bool ansicBool = 64;
if (ansicBool != true) printf("This will not print\n");
printf("Any value other than 0 assigned to ansicBool is evaluated to %i\n", ansicBool);
BOOL objcBOOL = 64;
if (objcBOOL != YES) printf("This might print depending on your architecture\n");
printf("BOOL will keep whatever value you assign it: %i\n", objcBOOL);
if (!objcBOOL) printf("This will not print\n");
printf("! operator will zero objcBOOL: %i\n", !objcBOOL);
if (!!objcBOOL) printf("!! will evaluate objcBOOL value to %i\n", !!objcBOOL);
To your surprise, if(objcBOOL != YES) is evaluated as true by the compiler, since YES is simply the value 1, and in the eyes of the compiler the value 64 is of course not equal to 1; thus the if statement evaluates to YES/true/1 and the following line runs.
However, since a non-zero bool value always evaluates to the integer value 1, the above issue will not affect your code. Below are some good tips if you want to use the Objective-C BOOL type versus the ANSI C bool type:
Always assign the YES or NO value and nothing else.
Convert to BOOL by using the double-negation !! operator to avoid unexpected results.
When checking for NO, use if(!myBool) instead of if(myBool != YES); it is much cleaner to use the not (!) operator and it gives the expected result (see the sketch below).
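A tiny plain-C sketch of the last two tips; again BOOL, YES and NO are local stand-ins for the Objective-C definitions, so treat it as an illustration only:
#include <stdio.h>

typedef signed char BOOL;   /* stand-in for the Objective-C typedef */
#define YES ((BOOL)1)
#define NO  ((BOOL)0)

int main(void) {
    BOOL flag = 64;          /* a non-canonical "true" value sneaked in from somewhere */
    BOOL clean = !!flag;     /* double negation normalizes it to 1 (YES) */
    printf("%d %d\n", flag == YES, clean == YES);   /* prints "0 1" */
    if (flag) printf("truth-testing works for any non-zero value\n");
    if (!flag) printf("this will not print\n");
    return 0;
}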

I go against convention here. I don't like typedefs to base types. I think it's a useless indirection that removes value.
When I see the base type in your source I will instantly understand it. If it's a typedef I have to look it up to see what I'm really dealing with.
When porting to another compiler or adding another library their set of typedefs may conflict and cause issues that are difficult to debug. I just got done dealing with this in fact. In one library boolean was typedef'ed to int, and in mingw/gcc it's typedef'ed to a char.

Also, be aware of differences in casting, especially when working with bitmasks, due to casting to signed char:
bool a = 0x0100;
a == true; // expression true
BOOL b = 0x0100;
b == false; // expression true on !((TARGET_OS_IPHONE && __LP64__) || TARGET_OS_WATCH), e.g. MacOS
b == true; // expression true on (TARGET_OS_IPHONE && __LP64__) || TARGET_OS_WATCH
If BOOL is a signed char instead of a bool, the cast of 0x0100 to BOOL simply drops the set bit, and the resulting value is 0.

Related

C Language program keeps getting bool and true and false as errors [duplicate]


C Compare enumerate with invalid value

I would like to understand how the compiler works when we compare an enum with an invalid value, and what the program does during execution.
I found strange source code during my work and did not understand the behaviour of the program, which was not giving me the expected result.
I wrote the following little program to summarize my problem.
I create an enum E_Number and instantiate a variable a with the value -1.
Then I perform comparisons on a to check whether it belongs to the range of the enum.
(I know, this is really strange, but this is exactly what I found in the source code!)
I expected the result to tell me "Not in Range" because of the failure of the first condition (a >= FIRST_ENUM).
But it was the failure of the second condition (a < NB_MAX_NUMBER) which gave me the right result (see the printf())...
If I cast a to (int) in the if conditions, I get the expected results.
So what is happening during execution? Is the program considering -1 as another possible enum value positioned after NB_MAX_NUMBER? What is the rule for the > and < operators on an enum?
#include <stdio.h>
#define FIRST_ENUM 0
typedef enum {
    NUM_1 = FIRST_ENUM,
    NUM_2,
    NUM_3,
    NB_MAX_NUMBER
} E_Number;

int main()
{
    E_Number a = -1;

    if ((a >= FIRST_ENUM) && (a < NB_MAX_NUMBER))
    {
        printf("In Range\n");
    }
    else
    {
        printf("Not in Range\n");
    }

    printf("1st condition = %s\n", (a >= FIRST_ENUM) ? "TRUE" : "FALSE");
    printf("2nd condition = %s\n", (a < NB_MAX_NUMBER) ? "TRUE" : "FALSE");

    return 0;
}
gcc program.c
.\a.exe
Not in Range
1st condition = TRUE
2nd condition = FALSE
I am working with the MinGW compiler (gcc (x86_64-win32-seh-rev1, Built by MinGW-W64 project) 4.9.2).
In your case the compiler considers E_Number to be unsigned int because all the legal values are non-negative, so -1 is converted to ~0u, which is >= FIRST_ENUM but not < NB_MAX_NUMBER.
I have the same behavior with gcc version 6.3.0 20170516 (Raspbian 6.3.0-18+rpi1+deb9u1)
pi@raspberrypi:~ $ ./a.out
Not in Range
1st condition = TRUE
2nd condition = FALSE
But, if I change your definitions like this:
#include <stdio.h>
#define FIRST_ENUM -1
typedef enum {
    NUM_1 = FIRST_ENUM,
    NUM_2,
    NUM_3,
    NB_MAX_NUMBER
} E_Number;

int main()
{
    E_Number a = -2;

    if ((a >= FIRST_ENUM) && (a < NB_MAX_NUMBER))
    {
        printf("In Range\n");
    }
    else
    {
        printf("Not in Range\n");
    }

    printf("1st condition = %s\n", (a >= FIRST_ENUM) ? "TRUE" : "FALSE");
    printf("2nd condition = %s\n", (a < NB_MAX_NUMBER) ? "TRUE" : "FALSE");

    return 0;
}
the behavior changes, the enum is considered to be an int, and I get:
pi@raspberrypi:~ $ ./a.out
Not in Range
1st condition = FALSE
2nd condition = TRUE
Enumerator constants are of type int. The enumerated type itself is an implementation-defined integer type capable of representing all the enumerator constants.
6.7.2.2p4:
Each enumerated type shall be compatible with char, a signed integer
type, or an unsigned integer type. The choice of type is
implementation-defined,128) but shall be capable of representing the
values of all the members of the enumeration. The enumerated type is
incomplete until immediately after the } that terminates the list of
enumerator declarations, and complete thereafter.
Since you haven't enumerated any negative values, that type may well be an unsigned type. If it is, then (E_Number)some_integer will always be greater than or equal to zero (0==FIRST_ENUM).
If you expand the enum list to:
typedef enum {
    NUM_NOPE = -1,
    NUM_1 = FIRST_ENUM,
    NUM_2,
    NUM_3,
    NB_MAX_NUMBER
} E_Number;
you'll force the compiler to use a signed type and the results will reverse.
Quote from ISO/IEC 9899:1999, 6.7.2.2p3
Each enumerated type shall be compatible with char, a signed integer
type, or an unsigned integer type. The choice of type is
implementation-defined, 108) but shall be capable of representing the
values of all the members of the enumeration.
So, when you declare an enumeration, you cannot be sure a priori what kind of type the C implementation will choose to store that variable. For optimisation reasons, the compiler may not choose a 4-byte integer type if you only store enumeration constants in [-128, +127]; the implementation may choose char to store an enumerated variable, but you cannot be sure. Any integer type can be chosen as long as it can store all possible values.
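As a rough illustration of that implementation-defined choice, here is a small probe built around the question's E_Number; the output depends on the compiler, and the comments describe what the gcc builds above do:
#include <stdio.h>

typedef enum {
    NUM_1,
    NUM_2,
    NUM_3,
    NB_MAX_NUMBER
} E_Number;

int main(void)
{
    E_Number a = (E_Number)-1;
    printf("sizeof(E_Number) = %zu\n", sizeof(E_Number));
    /* 1 if the implementation chose an unsigned type (as gcc does here),
       0 if it chose a signed type */
    printf("(a >= 0) = %d\n", a >= 0);
    return 0;
}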

How can bool variable be not equal to both True and False?

According to the accepted answer of this question:
What is the benefit of terminating if … else if constructs with an else clause?
There is a corruption case (in an embedded system) that can cause a bool variable (which is 1 bit) to differ from both True and False, which means the else path in this code could be reached instead of being dead code.
if (g_str.bool_variable == True) {
    ...
}
else if (g_str.bool_variable == False) {
    ...
}
else {
    //handle error
}
I tried to find out, but there's still no clue.
Is it possible?
and
How?
Edit: To be clearer, here is the declaration of the bool variable:
struct {
    unsigned char bool_variable : 1;
} g_str;
And also define:
#define True 1
#define False 0
unsigned char bool_variable : 1 is not a boolean variable. It is a 1-bit integer bit-field. _Bool bool_variable is a boolean variable.
A bit-field shall have a type that is a qualified or unqualified version of _Bool, signed int, unsigned int, or some other implementation-defined type. It is implementation-defined whether atomic types are permitted. (C11 §6.7.2.1)
So right away, with unsigned char bool_variable : 1, it is implementation-defined whether it is even allowed.
If such an implementation treats unsigned char bit-fields like int bit-fields (as the unsigned char range fits in the int range), then trouble occurs with 1-bit int bit-fields. It is implementation-defined whether a 1-bit int bit-field takes on the values 0, 1 or 0, -1. This leads to the //handle error clause of this if() block.
if (g_str.bool_variable == True) {        // same as if (g_str.bool_variable == 1)
    ...
}
else if (g_str.bool_variable == False) {  // same as if (g_str.bool_variable == 0)
    ...
}
else {
    //handle error
}
The solution is to simplify the if() test:
if (g_str.bool_variable) {
    ...
}
else {
    ...
}
Bit-fields are a corner of C where unsigned int and signed int behave differently: an int bit-field narrower than the full width of an int may be treated as either signed int or unsigned int. With bit-fields it is best to be explicit and use _Bool, signed int, or unsigned int. Note: plain unsigned is synonymous with unsigned int.
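Here is a short sketch of that bit-field signedness trap; the struct and field names are invented for illustration, the plain int field's signedness is implementation-defined, and the comments describe what gcc typically does:
#include <stdio.h>

struct flags {
    int      plain : 1;   /* implementation-defined signedness; gcc treats it as signed, so it holds 0 and -1 */
    unsigned uns   : 1;   /* always holds 0 and 1 */
    _Bool    b     : 1;   /* always holds 0 and 1 */
};

int main(void)
{
    struct flags f = { .plain = 1, .uns = 1, .b = 1 };
    /* With gcc the signed 1-bit field stores -1 when assigned 1: */
    printf("plain = %d, uns = %u, b = %d\n", f.plain, f.uns, f.b);
    printf("plain == 1 ? %d\n", f.plain == 1);   /* may print 0: the field compares unequal to True even though it was set "true" */
    return 0;
}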
This code may have a race condition. The magnitude of the problem will depend on exactly what the compiler emits when it compiles this code.
Here's what might be happening. Your code first checks bool_variable == True, which evaluates to false. Execution skips the first block and jumps to the else if. Your code then checks bool_variable == False, which also evaluates to false, so you fall into the final else. You are doing two discrete tests on bool_variable. Something else (such as another thread or an ISR) may be altering the value of bool_variable during the brief window of time after the first test has run and before the second test.
You can avoid the problem completely by using if (bool_variable == True) {} else {} instead of re-testing for false. That version would only check the value once, eliminating the window where corruption can happen. The separate False check doesn't really buy you anything in the first place, since by definition a one-bit-wide field can only take on two possible values, so !True must be the same as False. Even if you were using a larger boolean type that could technically take on more than two discrete values, you should be using it as if it could only have two (such as 0=false, everything else=True).
This hints at a much larger problem, though. Even with only one variable check instead of two, you have one thread reading the variable and another altering it at practically the same time. The corruption occurring immediately before the True check would possibly still give you erroneous results but be even harder to detect. You need some sort of locking mechanism (mutex, spinlock, etc) to ensure that only one thread is accessing that field at a time.
The only way to prove any of this for certain, though, is to step through it with a debugger or hardware probe and watch the value change between the two tests. If that's not an option, you may be able to de-couple the blocks by changing the else if to if and storing the value of bool_variable before each of the two tests. Any time the two differ, then something external has corrupted your value.
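If a lock is the route taken, here is a minimal pthreads sketch of the idea; it assumes a hosted environment with pthreads, whereas interrupt-driven embedded code would instead briefly disable interrupts or use the platform's own primitive:
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t g_lock = PTHREAD_MUTEX_INITIALIZER;
static bool g_flag = false;

/* Every reader and writer goes through these helpers, so the flag can
   never be observed halfway through an update by another thread. */
bool read_flag(void)
{
    pthread_mutex_lock(&g_lock);
    bool value = g_flag;
    pthread_mutex_unlock(&g_lock);
    return value;
}

void write_flag(bool value)
{
    pthread_mutex_lock(&g_lock);
    g_flag = value;
    pthread_mutex_unlock(&g_lock);
}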
The way you've defined things, this wouldn't happen on an x86. But it could happen with some compiler/cpu combination.
Consider the following hypothetical assembly code for the if-else-else construct in question.
mv SP(0), A # load 4 bytes from stack to register A
and A, 0x1 # isolate bit 1 i.e. bool_variable
cmp A, 0x1 # compare to 1 i.e. True
jmp if equal L1
cmp A, 0x0 # compare to 0 i.e. False
jmp if equal L2
<second else block>
jmp L3
L1:
<if block>
jmp L3
L2:
<first else block>
L3:
<code>
Now consider the hypothetical machine code for some of these instructions.
instruction    opcode-register-value    machine-code    corrupted-code
and A, 0x1     01 03 01                 010301          010303
cmp A, 0x1     02 03 01                 020301          020302
cmp A, 0x0     02 03 00                 020300          020304
One or more of the bit corruptions shown above will cause the code to execute the second else block.
The reason I wrote that example the way I did, using "mybool", FALSE and TRUE, was to indicate that this is a non-standard/pre-standard boolean type.
Before C got language support for boolean types, you would invent your own boolean type like this:
typedef enum { FALSE, TRUE } BOOL;
or possibly:
#define FALSE 0
#define TRUE 1
typedef unsigned char BOOL;
In either situation you get a BOOL type which is larger than 1 bit, and can therefore either be 0, 1 or something else.
Had I written the same example using stdbool bool/_Bool, false and true, it wouldn't have made any sense, because then the compiler might implement the variable as a bit-field, and a single bit can only have the values 1 or 0.
In retrospect, a better example of the use of defensive programming might have been something like this:
typedef enum
{
    APPLES,
    ORANGES
} fruit_t;

fruit_t fruit;

if (fruit == APPLES)
{
    // ...
}
else if (fruit == ORANGES)
{
    // ...
}
else
{
    // error
}

Using boolean values in C

C doesn't have any built-in boolean types. What's the best way to use them in C?
From best to worse:
Option 1 (C99 and newer)
#include <stdbool.h>
Option 2
typedef enum { false, true } bool;
Option 3
typedef int bool;
enum { false, true };
Option 4
typedef int bool;
#define true 1
#define false 0
Explanation
Option 1 will work only if you use C99 (or newer) and it's the "standard way" to do it. Choose this if possible.
Options 2, 3 and 4 will in practice have identical behavior. #2 and #3 don't use #defines, though, which in my opinion is better.
If you are undecided, go with #1!
A few thoughts on booleans in C:
I'm old enough that I just use plain ints as my boolean type without any typedefs or special defines or enums for true/false values. If you follow my suggestion below on never comparing against boolean constants, then you only need to use 0/1 to initialize the flags anyway. However, such an approach may be deemed too reactionary in these modern times. In that case, one should definitely use <stdbool.h> since it at least has the benefit of being standardized.
Whatever the boolean constants are called, use them only for initialization. Never ever write something like
if (ready == TRUE) ...
while (empty == FALSE) ...
These can always be replaced by the clearer
if (ready) ...
while (!empty) ...
Note that these can actually reasonably and understandably be read out loud.
Give your boolean variables positive names, ie full instead of notfull. The latter leads to code that is difficult to read easily. Compare
if (full) ...
if (!full) ...
with
if (!notfull) ...
if (notfull) ...
Both of the former pair read naturally, while !notfull is awkward to read even as it is, and becomes much worse in more complex boolean expressions.
Boolean arguments should generally be avoided. Consider a function defined like this
void foo(bool option) { ... }
Within the body of the function, it is very clear what the argument means since it has a convenient, and hopefully meaningful, name. But, the call sites look like
foo(TRUE);
foo(FALSE);
Here, it's essentially impossible to tell what the parameter meant without always looking at the function definition or declaration, and it gets much worse as soon as you add more boolean parameters. I suggest either
typedef enum { OPT_ON, OPT_OFF } foo_option;
void foo(foo_option option);
or
#define OPT_ON true
#define OPT_OFF false
void foo(bool option) { ... }
In either case, the call site now looks like
foo(OPT_ON);
foo(OPT_OFF);
which the reader has at least a chance of understanding without dredging up the definition of foo.
A boolean in C is an integer: zero for false and non-zero for true.
See also Boolean data type, section C, C++, Objective-C, AWK.
Here is the version that I used:
typedef enum { false = 0, true = !false } bool;
Because false has only one value, but a logical true could have many values, this technique sets true to be what the compiler will use for the opposite of false.
This takes care of the problem of someone coding something that would come down to this:
if (true == !false)
I think we would all agree that that is not a good practice, but for the one time cost of doing "true = !false" we eliminate that problem.
[EDIT] In the end I used:
typedef enum { myfalse = 0, mytrue = !myfalse } mybool;
to avoid name collision with other schemes that were defining true and false. But the concept remains the same.
[EDIT] To show conversion of integer to boolean:
mybool somebool;
int someint = 5;
somebool = !!someint;
The first (right-most) ! converts the non-zero integer to a 0, then the second (left-most) ! converts the 0 to a mytrue value. I will leave it as an exercise for the reader to convert a zero integer.
[EDIT]
It is my style to use the explicit setting of a value in an enum when the specific value is required even if the default value would be the same. Example: Because false needs to be zero I use false = 0, rather than false,
[EDIT]
Show how to limit the size of enum when compiling with gcc:
typedef __attribute__((__packed__)) enum { myfalse = 0, mytrue = !myfalse } mybool;
That is, if someone does:
struct mystruct {
    mybool somebool1;
    mybool somebool2;
    mybool somebool3;
    mybool somebool4;
};
the size of the structure will be 4 bytes rather than 16 bytes.
If you are using a C99 compiler it has built-in support for bool types:
#include <stdbool.h>
int main()
{
    bool b = false;
    b = true;
}
http://en.wikipedia.org/wiki/Boolean_data_type
First things first: C, i.e. ISO/IEC 9899, has had a boolean type for 19 years now. That is longer than the expected length of a C programming career, amateur, academic and professional parts combined, for most people visiting this question; mine surpasses it by perhaps a mere 1-2 years. It means that during the entire time the average reader has known anything at all about C, C has actually had a boolean data type.
For the datatype, #include <stdbool.h>, and use true, false and bool. Or do not include it, and use _Bool, 1 and 0 instead.
There are various dangerous practices promoted in the other answers to this thread. I will address them:
typedef int bool;
#define true 1
#define false 0
This is a no-no, because a casual reader - who did learn C within those 19 years - would expect that bool refers to the actual bool data type and would behave similarly, but it doesn't! For example:
double a = ...;
bool b = a;
With C99 bool/_Bool, b would be set to false iff a was zero, and true otherwise. C11 6.3.1.2p1:
When any scalar value is converted to _Bool, the result is 0 if the value compares equal to 0; otherwise, the result is 1. 59)
Footnotes
59) NaNs do not compare equal to 0 and thus convert to 1.
With the typedef in place, the double would be coerced to an int - if the value of the double isn't in the range for int, the behaviour is undefined.
Naturally the same applies if true and false were declared in an enum.
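A small contrast sketch of that difference; fake_bool is a made-up name for the discouraged typedef, used here only so it does not collide with the real bool:
#include <stdio.h>
#include <stdbool.h>

typedef int fake_bool;   /* the discouraged "typedef int bool" pattern, renamed */

int main(void)
{
    double a = 0.5;
    bool b = a;          /* scalar-to-_Bool conversion: non-zero, so b == 1 (true) */
    fake_bool fb = a;    /* ordinary double-to-int conversion: truncates to 0 (looks false) */
    printf("b = %d, fb = %d\n", b, fb);   /* prints "b = 1, fb = 0" */
    return 0;
}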
What is even more dangerous is declaring
typedef enum bool {
false, true
} bool;
because now all values besides 1 and 0 are invalid, and should such a value be assigned to a variable of that type, the behaviour would be wholly undefined.
Therefore iff you cannot use C99 for some inexplicable reason, for boolean variables you should use:
type int and values 0 and 1 as-is; and carefully do domain conversions from any other values to these with double negation !!
or if you insist you don't remember that 0 is falsy and non-zero truish, at least use upper case so that they don't get confused with the C99 concepts: BOOL, TRUE and FALSE!
typedef enum {
false = 0,
true
} t_bool;
C has a boolean type: bool (at least for the last 10(!) years)
Include stdbool.h and true/false will work as expected.
Anything nonzero is evaluated to true in boolean operations, so you could just
#define TRUE 1
#define FALSE 0
and use the constants.
Just a complement to other answers and some clarification, if you are allowed to use C99.
+-------+----------------+-------------------------+--------------------+
| Name | Characteristic | Dependence in stdbool.h | Value |
+-------+----------------+-------------------------+--------------------+
| _Bool | Native type | Don't need header | |
+-------+----------------+-------------------------+--------------------+
| bool | Macro | Yes | Translate to _Bool |
+-------+----------------+-------------------------+--------------------+
| true | Macro | Yes | Translate to 1 |
+-------+----------------+-------------------------+--------------------+
| false | Macro | Yes | Translate to 0 |
+-------+----------------+-------------------------+--------------------+
Some of my preferences:
_Bool or bool? Both are fine, but bool looks better than the keyword _Bool.
Accepted values for bool and _Bool are false or true. Assigning 0 or 1 instead of false or true is valid, but makes it harder to read and understand the logic flow.
Some info from the standard:
_Bool is NOT unsigned int, but is part of the group unsigned integer types. It is large enough to hold the values 0 or 1.
DO NOT do it, but yes, you are able to redefine bool, true and false; it is surely not a good idea, though. This ability is considered obsolescent and will be removed in the future.
When you assign a scalar type (arithmetic types and pointer types) to _Bool or bool, if the scalar value is equal to 0 or compares equal to 0, the result is 0; otherwise the result is 1: _Bool x = 9; the 9 is converted to 1 when assigned to x.
_Bool is 1 byte (8 bits); the programmer is often tempted to try to use the other bits, but that is not recommended, because the only guarantee given is that only one bit is used to store data, unlike type char, which has 8 bits available (see the tiny check below).
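To make those last two points concrete, a tiny check (the printed values are typical; the standard only guarantees what is stated above):
#include <stdio.h>

int main(void)
{
    _Bool x = 9;                        /* any non-zero scalar converts to 1 */
    printf("%d %zu\n", x, sizeof x);    /* typically prints "1 1" */
    return 0;
}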
Nowadays C99 supports boolean types but you need to #include <stdbool.h>.
Example:
#include <stdio.h>
#include <stdbool.h>

int main()
{
    bool arr[2] = {true, false};
    printf("%d\n", arr[0] && arr[1]);
    printf("%d\n", arr[0] || arr[1]);
    return 0;
}
Output:
0
1
It is this:
#define TRUE 1
#define FALSE 0
You can use a char, or another small number container for it.
Pseudo-code
#define TRUE 1
#define FALSE 0
char bValue = TRUE;
You could use _Bool, but the return value must be an integer (1 for true, 0 for false).
However, it's recommended to include <stdbool.h> and use bool as in C++, as said in
this reply from the DaniWeb forum, as well as this answer from another Stack Overflow question:
_Bool: C99's boolean type. Using _Bool directly is only recommended if you're maintaining legacy code that already defines macros for bool, true, or false. Otherwise, those macros are standardized in the header. Include that header and you can use bool just like you would in C++.
Conditional expressions are considered to be true if they are non-zero, but the C standard requires that logical operators themselves return either 0 or 1.
@Tom: #define TRUE !FALSE is bad and is completely pointless. If the header file makes its way into compiled C++ code, then it can lead to problems:
void foo(bool flag);
...
int flag = TRUE;
foo(flag);
Some compilers will generate a warning about the int => bool conversion. Sometimes people avoid this by doing:
foo(flag == TRUE);
to force the expression to be a C++ bool. But if you #define TRUE !FALSE, you end up with:
foo(flag == !0);
which ends up doing an int-to-bool comparison that can trigger the warning anyway.
If you are using C99 then you can use the _Bool type. No #includes are necessary. You do need to treat it like an integer, though, where 1 is true and 0 is false.
You can then define TRUE and FALSE.
_Bool this_is_a_Boolean_var = 1;
//or using it with true and false
#define TRUE 1
#define FALSE 0
_Bool var = TRUE;
This is what I use:
enum {false, true};
typedef _Bool bool;
_Bool is a built in type in C. It's intended for boolean values.
I would use a C version test to use the builtin C99 boolean type if available or fallback on an ad hoc implementation otherwise.
#include <stdint.h>
#if __STDC_VERSION__ < 199901L
# define bool uint_fast8_t
# define true 1
# define false 0
#else
# include <stdbool.h>
#endif /* __STDC_VERSION__ < 199901L */
You can simply use the #define directive as follows:
#define TRUE 1
#define FALSE 0
#define NOT(arg) (((arg) == TRUE) ? FALSE : TRUE)
typedef int bool;
And use as follows:
bool isVisible = FALSE;
bool isWorking = TRUE;
isVisible = NOT(isVisible);
and so on

How to neatly avoid C casts losing truth

I'm quite happy that, in C, things like this are bad code:
(var_a == var_b) ? TRUE : FALSE
However, what's the best way of dealing with this:
/* Header stuff */
#define INTERESTING_FLAG 0x80000000
typedef short int BOOL;

void func(BOOL);

/* Code */
int main(int argc, char *argv[])
{
    unsigned long int flags = 0x00000000;

    ... /* Various bits of flag processing */

    func(flags & INTERESTING_FLAG); /* func never receives a non-zero value
                                     * as the top bits are cut off when the
                                     * argument is cast down to a short
                                     * int
                                     */
}
Is it acceptable (for whatever value of acceptable you're using) to have (flags & FLAG_CONST) ? TRUE : FALSE?
I would in either case call func with (flags & INTERESTING_FLAG) != 0 as an argument, to indicate that a boolean parameter is required and not the arithmetic result of flags & INTERESTING_FLAG.
I'd prefer (flags & CONST_FLAG) != 0. Better still, use the _Bool type if you have it (though it's often disguised as bool).
Set your compiler flags as anally as possible, to warn you of any cast that loses bits, and treat warnings as errors.
Some people don't like it, but I use !!.
ie
!!(flags & CONST_FLAG)
(not as a to_bool macro as someone else suggested, just straight in the code).
If more people used it, it wouldn't be seen as unusual so start using it!!
This may not be a popular solution, but sometimes macros are useful.
#define to_bool(x) (!!(x))
Now we can safely have anything we want without fear of overflowing our type:
func(to_bool(flags & INTERESTING_FLAG));
Another alternative might be to define your boolean type to be an intmax_t (from stdint.h) so that it's impossible for a value to be truncated into falseness.
While I'm here, I want to say that you should be using a typedef for defining a new type, not a #define:
typedef short Bool; // or whatever type you end up choosing
Some might argue that you should use a const variable instead of a macro for numeric constants:
const unsigned long INTERESTING_FLAG = 0x80000000;
Overall there are better things you can spend your time on. But macros for typedefs are a bit silly.
You could avoid this a couple different ways:
First off
void func(unsigned long int);
would take care of it...
Or
if (flags & INTERESTING_FLAG)
{
    func(true);
}
else
{
    func(false);
}
would also do it.
EDIT: (flags & INTERESTING_FLAG) != 0 is also good. Probably better.
This is partially off topic:
I'd also create a helper function that makes it obvious to the reader what the purpose of the check is, so you don't fill your code with this explicit flag checking all over the place. Typedefing the flag type would make it easier to change the flag type and implementation later.
Modern compilers support the inline keyword, which can get rid of the performance overhead of a function call.
typedef unsigned long int flagtype;
...

inline bool hasInterestingFlag(flagtype flags) {
    return ((flags & INTERESTING_FLAG) != 0);
}
Do you have anything against
flags & INTERESTING_FLAG ? TRUE : FALSE
?
This is why you should only use values in a "boolean" way when those values have explicitly boolean semantics. Your value does not satisfy that rule, since it has pronounced integer semantics (or, more precisely, bit-array semantics). In order to convert such a value to a boolean, compare it to 0:
func((flags & INTERESTING_FLAG) != 0);
