C doesn't have any built-in boolean types. What's the best way to use them in C?
From best to worst:
Option 1 (C99 and newer)
#include <stdbool.h>
Option 2
typedef enum { false, true } bool;
Option 3
typedef int bool;
enum { false, true };
Option 4
typedef int bool;
#define true 1
#define false 0
Explanation
Option 1 will work only if you use C99 (or newer) and it's the "standard way" to do it. Choose this if possible.
Options 2, 3 and 4 will in practice have identical behavior. #2 and #3 don't use #defines, though, which in my opinion is better.
If you are undecided, go with #1!
A few thoughts on booleans in C:
I'm old enough that I just use plain ints as my boolean type without any typedefs or special defines or enums for true/false values. If you follow my suggestion below on never comparing against boolean constants, then you only need to use 0/1 to initialize the flags anyway. However, such an approach may be deemed too reactionary in these modern times. In that case, one should definitely use <stdbool.h> since it at least has the benefit of being standardized.
Whatever the boolean constants are called, use them only for initialization. Never ever write something like
if (ready == TRUE) ...
while (empty == FALSE) ...
These can always be replaced by the clearer
if (ready) ...
while (!empty) ...
Note that these can actually reasonably and understandably be read out loud.
Give your boolean variables positive names, i.e. full instead of notfull. The latter leads to code that is difficult to read. Compare
if (full) ...
if (!full) ...
with
if (!notfull) ...
if (notfull) ...
Both of the former pair read naturally, while !notfull is awkward to read even as it is, and becomes much worse in more complex boolean expressions.
Boolean arguments should generally be avoided. Consider a function defined like this
void foo(bool option) { ... }
Within the body of the function, it is very clear what the argument means since it has a convenient, and hopefully meaningful, name. But, the call sites look like
foo(TRUE);
foo(FALSE);
Here, it's essentially impossible to tell what the parameter meant without always looking at the function definition or declaration, and it gets much worse as soon as you add even more boolean parameters. I suggest either
typedef enum { OPT_ON, OPT_OFF } foo_option;
void foo(foo_option option);
or
#define OPT_ON true
#define OPT_OFF false
void foo(bool option) { ... }
In either case, the call site now looks like
foo(OPT_ON);
foo(OPT_OFF);
which the reader has at least a chance of understanding without dredging up the definition of foo.
A boolean in C is an integer: zero for false and non-zero for true.
See also Boolean data type, section C, C++, Objective-C, AWK.
Here is the version that I used:
typedef enum { false = 0, true = !false } bool;
Because false has only one value, but a logical true could have many values, this technique sets true to whatever the compiler will use as the opposite of false.
This takes care of the problem of someone coding something that would come down to this:
if (true == !false)
I think we would all agree that that is not a good practice, but for the one-time cost of doing "true = !false" we eliminate that problem.
[EDIT] In the end I used:
typedef enum { myfalse = 0, mytrue = !myfalse } mybool;
to avoid name collision with other schemes that were defining true and false. But the concept remains the same.
[EDIT] To show conversion of integer to boolean:
mybool somebool;
int someint = 5;
somebool = !!someint;
The first (rightmost) ! converts the non-zero integer to 0, then the second (leftmost) ! converts that 0 to 1, i.e. a mytrue value. I will leave it as an exercise for the reader to convert a zero integer.
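For completeness, here is a sketch of the zero case, reusing the mybool/myfalse/mytrue names defined above:

mybool somebool;
int zeroint = 0;
somebool = !!zeroint;   /* the rightmost ! turns 0 into 1, the leftmost ! turns 1 back into 0, i.e. myfalse */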
[EDIT]
It is my style to use the explicit setting of a value in an enum when the specific value is required, even if the default value would be the same. Example: because false needs to be zero, I use false = 0 rather than just false.
[EDIT]
Show how to limit the size of enum when compiling with gcc:
typedef enum __attribute__((__packed__)) { myfalse = 0, mytrue = !myfalse } mybool;
That is, if someone does:
struct mystruct {
mybool somebool1;
mybool somebool2;
mybool somebool3;
mybool somebool4;
};
the size of the structure will be 4 bytes rather than 16 bytes.
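A quick way to verify this on a given compiler (a minimal sketch, assuming GCC, reusing the packed typedef and struct from above):

#include <stdio.h>

/* GCC accepts the attribute immediately after the enum keyword */
typedef enum __attribute__((__packed__)) { myfalse = 0, mytrue = !myfalse } mybool;

struct mystruct {
    mybool somebool1;
    mybool somebool2;
    mybool somebool3;
    mybool somebool4;
};

int main(void)
{
    /* with __packed__ each mybool is 1 byte, so the struct is 4 bytes instead of 16 */
    printf("%zu %zu\n", sizeof(mybool), sizeof(struct mystruct));
    return 0;
}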
If you are using a C99 compiler it has built-in support for bool types:
#include <stdbool.h>
int main()
{
bool b = false;
b = true;
}
http://en.wikipedia.org/wiki/Boolean_data_type
First things first: C, i.e. ISO/IEC 9899, has had a boolean type for 19 years now. That is longer than the expected length of a C programming career - amateur, academic, and professional parts combined - for most people visiting this question; mine surpasses it by perhaps a mere 1-2 years. It means that during the time the average reader has known anything at all about C, C has actually had a boolean data type.
For the datatype, #include <stdbool.h>, and use true, false and bool. Or do not include it, and use _Bool, 1 and 0 instead.
There are various dangerous practices promoted in the other answers to this thread. I will address them:
typedef int bool;
#define true 1
#define false 0
This is a no-no, because a casual reader - who did learn C within those 19 years - would expect that bool refers to the actual bool data type and would behave similarly, but it doesn't! For example
double a = ...;
bool b = a;
With C99 bool/_Bool, b would be set to false iff a was zero, and to true otherwise. C11 6.3.1.2p1:
When any scalar value is converted to _Bool, the result is 0 if the value compares equal to 0; otherwise, the result is 1. 59)
Footnotes
59) NaNs do not compare equal to 0 and thus convert to 1.
With the typedef in place, the double would be coerced to an int - if the value of the double isn't in the range for int, the behaviour is undefined.
Naturally, the same applies if true and false were declared in an enum.
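A minimal sketch of the difference (the name fakebool is mine, chosen to avoid clashing with the real bool):

#include <stdbool.h>
#include <stdio.h>

typedef int fakebool;       /* stand-in for the "typedef int bool" pattern */

int main(void)
{
    double a = 0.5;

    bool     b1 = a;        /* real _Bool: 0.5 compares unequal to 0, so b1 == 1 */
    fakebool b2 = a;        /* plain int: 0.5 is truncated toward zero, so b2 == 0 */

    printf("%d %d\n", b1, b2);   /* prints: 1 0 */
    return 0;
}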
What is even more dangerous is declaring
typedef enum bool {
false, true
} bool;
because now all values besides 1 and 0 are invalid, and should such a value be assigned to a variable of that type, the behaviour would be wholly undefined.
Therefore, if you cannot use C99 for some inexplicable reason, for boolean variables you should use:
type int and values 0 and 1 as-is; and carefully do domain conversions from any other values to these with double negation !!
or if you insist you don't remember that 0 is falsy and non-zero truish, at least use upper case so that they don't get confused with the C99 concepts: BOOL, TRUE and FALSE!
typedef enum {
false = 0,
true
} t_bool;
C has a boolean type: bool (at least for the last 10(!) years)
Include stdbool.h and true/false will work as expected.
Anything nonzero is evaluated to true in boolean operations, so you could just
#define TRUE 1
#define FALSE 0
and use the constants.
Just a complement to other answers and some clarification, if you are allowed to use C99.
+-------+----------------+------------------------+------------------+
| Name  | Characteristic | Depends on stdbool.h?  | Value            |
+-------+----------------+------------------------+------------------+
| _Bool | Native type    | No                     |                  |
+-------+----------------+------------------------+------------------+
| bool  | Macro          | Yes                    | Expands to _Bool |
+-------+----------------+------------------------+------------------+
| true  | Macro          | Yes                    | Expands to 1     |
+-------+----------------+------------------------+------------------+
| false | Macro          | Yes                    | Expands to 0     |
+-------+----------------+------------------------+------------------+
Some of my preferences:
_Bool or bool? Both are fine, but bool looks better than the keyword _Bool.
Accepted values for bool and _Bool are false or true. Assigning 0 or 1 instead of false or true is valid, but makes the logic flow harder to read and understand.
Some info from the standard:
_Bool is NOT an unsigned int, but it is part of the group of unsigned integer types. It is large enough to hold the values 0 and 1.
DO NOT redefine bool, true and false - yes, you are able to, but it is definitely not a good idea. This ability is considered obsolescent and may be removed in the future.
Assigning a scalar type (arithmetic types and pointer types) to _Bool or bool gives 0 if the scalar value compares equal to 0, otherwise the result is 1: in _Bool x = 9;, 9 is converted to 1 when assigned to x (see the sketch after this list).
_Bool is 1 byte (8 bits). Programmers are often tempted to use the other bits, but that is not recommended, because the only guarantee given is that one bit is used to store the value - unlike type char, whose 8 bits are all available.
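A small sketch of that conversion rule:

#include <stdbool.h>
#include <stdio.h>

int main(void)
{
    _Bool x = 9;        /* 9 compares unequal to 0, so x becomes 1 */
    bool  y = 0.0;      /* compares equal to 0, so y becomes 0 */
    void *p = &x;
    bool  q = p;        /* a non-null pointer also converts to 1 */

    printf("%d %d %d\n", x, y, q);   /* prints: 1 0 1 */
    return 0;
}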
Nowadays C99 supports boolean types but you need to #include <stdbool.h>.
Example:
#include <stdio.h>
#include <stdbool.h>
int main()
{
bool arr[2] = {true, false};
printf("%d\n", arr[0] && arr[1]);
printf("%d\n", arr[0] || arr[1]);
return 0;
}
Output:
0
1
It is this:
#define TRUE 1
#define FALSE 0
You can use a char, or another small numeric type, to hold it.
Pseudo-code
#define TRUE 1
#define FALSE 0
char bValue = TRUE;
You could use _Bool, but its value is still just an integer (1 for true, 0 for false).
However, it's recommended to include <stdbool.h> and use bool as in C++, as said in
this reply from the DaniWeb forum, as well as this answer from another Stack Overflow question:
_Bool: C99's boolean type. Using _Bool directly is only recommended if you're maintaining legacy code that already defines macros for bool, true, or false. Otherwise, those macros are standardized in the header. Include that header and you can use bool just like you would in C++.
Conditional expressions are considered to be true if they are non-zero, but the C standard requires that logical operators themselves return either 0 or 1.
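A short illustration of that rule:

#include <stdio.h>

int main(void)
{
    int x = 5;

    if (x) {
        /* taken: any non-zero value counts as true in a condition */
    }

    /* the logical operators themselves always yield exactly 0 or 1 */
    printf("%d %d %d\n", x && 7, x || 0, !x);   /* prints: 1 1 0 */
    return 0;
}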
@Tom: #define TRUE !FALSE is bad and is completely pointless. If the header file makes its way into compiled C++ code, then it can lead to problems:
void foo(bool flag);
...
int flag = TRUE;
foo(flag);
Some compilers will generate a warning about the int => bool conversion. Sometimes people avoid this by doing:
foo(flag == TRUE);
to force the expression to be a C++ bool. But if you #define TRUE !FALSE, you end up with:
foo(flag == !0);
which ends up doing an int-to-bool comparison that can trigger the warning anyway.
If you are using C99 then you can use the _Bool type. No #includes are necessary. You do need to treat it like an integer, though, where 1 is true and 0 is false.
You can then define TRUE and FALSE.
_Bool this_is_a_Boolean_var = 1;
//or using it with true and false
#define TRUE 1
#define FALSE 0
_Bool var = TRUE;
This is what I use:
enum {false, true};
typedef _Bool bool;
_Bool is a built in type in C. It's intended for boolean values.
I would use a C version test to use the builtin C99 boolean type if available or fallback on an ad hoc implementation otherwise.
#include <stdint.h>
#if __STDC_VERSION__ < 199901L
# define bool uint_fast8_t
# define true 1
# define false 0
#else
# include <stdbool.h>
#endif /* __STDC_VERSION__ < 199901L */
You can simply use the #define directive as follows:
#define TRUE 1
#define FALSE 0
#define NOT(arg) (((arg) == TRUE) ? FALSE : TRUE)
typedef int bool;
And use as follows:
bool isVisible = FALSE;
bool isWorking = TRUE;
isVisible = NOT(isVisible);
and so on
Related
If the _Bool type acts like an integer and doesn't enforce that a value is true/false or 1/0, for example:
_Bool bools[] = {0,3,'c',0x17};
printf("%d", bools[2]);
> 1
What is the advantage of having that there? Is it just a simple way to coerce things to see how they would evaluate for 'truth-ness', for example:
printf("%d\n", (_Bool) 3);
> 1
Or how is this helpful or useful in the C language?
What advantage does _Bool give?
The value of a _Bool is either 0 or 1. Nothing else, unlike an int.
Conversion to a _Bool always converts non-zero to 1 and only 0 to 0.
When any scalar value is converted to _Bool, the result is 0 if the value compares equal to 0; otherwise, the result is 1.
Examples:
#include <math.h>
#include <stdlib.h>
_Bool all_false[] = { 0, 0.0, -0.0, NULL };
_Bool all_true[] = { 13, 0.1, 42.0, "Hello", NAN };
Notice the difference of conversion/casting to int vs. _Bool: (int) 0.1 --> 0, yet (_Bool) 0.1 --> 1.
Notice the difference of conversion/casting to unsigned vs. _Bool: (unsigned) 0x100000000 --> 0, yet (_Bool) 0x100000000 --> 1.
_Bool adds clarity to boolean operations.
_Bool is a distinct type from int, char, etc. when used with _Generic (see the sketch at the end of this answer).
Prior to C99, C lacked _Bool. Much early code defined its own types: bool, Bool, boolean, bool8, bool_t, .... Creating a new type _Bool brought uniformity to this common, yet non-uniform practice. <stdbool.h> is available to use bool, true, false. This allows older code, which does not include <stdbool.h>, to keep working, while newer code can use the cleaner names.
OP's example with "doesn't enforce that a value is true/false or 1/0" does enforce that bools[2] had a value of 1. It did not enforce that the initializer of 'c', an int, had to be in the range of [0...1] nor of type _Bool, much like int x = 12.345; is allowed. In both cases, a conversion occurred. Although the 2nd often generates a warning.
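For the _Generic point above, a minimal C11 sketch showing that _Bool is selected as its own type (the TYPE_NAME macro is mine, just for illustration):

#include <stdbool.h>
#include <stdio.h>

#define TYPE_NAME(x) _Generic((x), _Bool: "_Bool", int: "int", char: "char", default: "other")

int main(void)
{
    bool b = true;
    int  i = 1;

    printf("%s %s\n", TYPE_NAME(b), TYPE_NAME(i));   /* prints: _Bool int */
    return 0;
}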
The advantage is legibility, nothing more. For example:
bool rb() {
if (cond && f(y)) {
return true;
}
return false;
}
Versus:
int rb() {
if (cond && f(y)) {
return 1;
}
return 0;
}
There's really no other benefit to it. For those that are used to working in C code without bool, it's largely cosmetic, but for those used to C++ and its bool it may make coding feel more consistent.
As always, an easy way to "cast to a boolean value" is just double negation, like:
!!3
Where that will reduce it to a 0 or 1 value.
Consider this:
(bool) 0.5 -> 1
( int) 0.5 -> 0
As you can see, _Bool does not act like an integer.
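A runnable form of that comparison, for reference:

#include <stdbool.h>
#include <stdio.h>

int main(void)
{
    printf("%d %d\n", (bool)0.5, (int)0.5);   /* prints: 1 0 */
    return 0;
}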
According to the accepted answer of this question:
What is the benefit of terminating if … else if constructs with an else clause?
There is a corruption case (in an embedded system) that can cause a bool variable (which is 1 bit) to differ from both True and False, meaning the else path in this code could be reached instead of being dead code.
if (g_str.bool_variable == True) {
...
}
else if (g_str.bool_variable == False) {
...
}
else {
//handle error
}
I tried to find out, but I still have no clue.
Is it possible, and how?
Edit: To be more clear, here is the declaration of the bool variable:
struct {
unsigned char bool_variable : 1;
} g_str;
And also define:
#define True 1
#define False 0
unsigned char bool_variable : 1 is not a boolean variable. It is a 1-bit integer bit-field. _Bool bool_variable is a boolean variable.
A bit-field shall have a type that is a qualified or unqualified version of _Bool, signed int, unsigned int, or some other implementation-defined type. It is implementation-defined whether atomic types are permitted. > C11dr §6.7.2.1
So right away, whether unsigned char bool_variable : 1 is even allowed is implementation-defined.
If an implementation treats unsigned char bit-fields like int bit-fields (the unsigned char range fits in the int range), then trouble occurs with 1-bit int bit-fields: it is implementation-defined whether a 1-bit int bit-field takes on the values 0, 1 or 0, -1. This is what leads to the //handle error clause of the if() block.
if (g_str.bool_variable == True) { // same as if (g_str.bool_variable == 1)
...
}
else if (g_str.bool_variable == False) { // same as if (g_str.bool_variable == 0)
...
}
else {
//handle error
}
The solution is to simplify the if() test:
if (g_str.bool_variable) {
...
}
else {
...
}
Bit-fields are a corner of C where int bit-fields narrower than a full int may be treated as signed int or unsigned int. With bit-fields, it is best to be explicit and use _Bool, signed int, or unsigned int. Note: using unsigned is synonymous with unsigned int.
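A small sketch of being explicit with bit-field types, as recommended above (the field names are just for illustration):

struct flags {
    _Bool        ready  : 1;   /* boolean semantics: only 0 or 1 */
    unsigned int mode   : 2;   /* explicitly unsigned: holds 0..3 */
    signed int   offset : 4;   /* explicitly signed: holds -8..7 */
};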
This code may have a race condition. The magnitude of the problem will depend on exactly what the compiler emits when it compiles this code.
Here's what might be happening. Your code first checks bool_variable == True, which evaluates to false. Execution skips the first block and jumps to the else if. Your code then checks bool_variable == False, which also evaluates to false, so you fall into the final else. You are doing two discrete tests on bool_variable. Something else (such as another thread or an ISR) may be altering the value of bool_variable during the brief window of time after the first test has run and before the second test.
You can avoid the problem completely by using if (bool == True) {} else {} instead of re-testing for false. That version would only check the value once, eliminating the window where corruption can happen. The separate False check doesn't really buy you anything in the first place since by definition a one-bit-wide field can only take on two possible values, so !True must be the same as False. Even if you were using a larger boolean type that could technically take on more than two discrete values, you should be using it as if it could only have two (such as 0=false, everything else=True).
This hints at a much larger problem, though. Even with only one variable check instead of two, you have one thread reading the variable and another altering it at practically the same time. The corruption occurring immediately before the True check would possibly still give you erroneous results but be even harder to detect. You need some sort of locking mechanism (mutex, spinlock, etc) to ensure that only one thread is accessing that field at a time.
The only way to prove any of this for certain, though, is to step through it with a debugger or hardware probe and watch the value change between the two tests. If that's not an option, you may be able to de-couple the blocks by changing the else if to if and storing the value of bool_variable before each of the two tests. Any time the two differ, then something external has corrupted your value.
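A minimal sketch of the locking idea, assuming POSIX threads are available (the function name is mine; g_str is the shared state from the question):

#include <pthread.h>

struct {
    unsigned char bool_variable : 1;   /* as declared in the question */
} g_str;

static pthread_mutex_t g_str_lock = PTHREAD_MUTEX_INITIALIZER;

void handle_flag(void)
{
    /* only one thread may inspect or update the flag at a time */
    pthread_mutex_lock(&g_str_lock);
    if (g_str.bool_variable) {
        /* ... True path ... */
    } else {
        /* ... False path ... */
    }
    pthread_mutex_unlock(&g_str_lock);
}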
The way you've defined things, this wouldn't happen on an x86. But it could happen with some compiler/cpu combination.
Consider the following hypothetical assembly code for the if-else-else construct in question.
mv SP(0), A # load 4 bytes from stack to register A
and A, 0x1 # isolate the low bit, i.e. bool_variable
cmp A, 0x1 # compare to 1 i.e. True
jmp if equal L1
cmp A, 0x0 # compare to 0 i.e. False
jmp if equal L2
<second else block>
jmp L3
L1:
<if block>
jmp L3
L2:
<first else block>
L3:
<code>
Now consider the hypothetical machine code for some of these instructions.
opcode-register-value machine-code corrupted-code
and A, 0x1 01 03 01 010301 010303
cmp A, 0x1 02 03 01 020301 020302
cmp A, 0x0 02 03 00 020300 020304
One or more of bit corruptions shown above will cause the code to execute the second else block.
The reason I wrote that example like I did, using "mybool", FALSE and TRUE, was to indicate that this is a non-standard/pre-standard boolean type.
Before C got language support for boolean types, you would invent your own boolean type like this:
typedef enum { FALSE, TRUE } BOOL;
or possibly:
#define FALSE 0
#define TRUE 1
typedef unsigned char BOOL;
In either situation you get a BOOL type which is larger than 1 bit, and can therefore either be 0, 1 or something else.
Had I written the same example using stdbool's bool/_Bool, false and true, it wouldn't have made any sense, because then the compiler might implement the code as a bit-field, and a single bit can only have the values 1 or 0.
In retrospect, a better example of the use of defensive programming might have been something like this:
typedef enum
{
APPLES,
ORANGES
} fruit_t;
fruit_t fruit;
if(fruit == APPLES)
{
// ...
}
else if(fruit == ORANGES)
{
// ...
}
else
{
// error
}
TRUE/FALSE are usually defined in C as below. Are these definitions part of the C standard, supported by various compiler implementations?
#define TRUE 1
#define FALSE 0
No for TRUE or FALSE. Yes for true and false in C99 or later if you include <stdbool.h>.
C99 and C11 define an integral type with Boolean semantics called _Bool, but no actual true/false keywords exist.
The _Bool type is only capable of storing the values 1 and 0. Any value that doesn't compare equal to 0 is converted to 1, and any value comparing equal to 0 is converted to 0.
By including <stdbool.h>, the _Bool type is allowed to be written as bool as with some other languages, and true and false C preprocessor macros are defined to be 1 and 0 respectively.
Before that, it was somewhat of a convenience for someone to define constants like
#define TRUE 1
#define FALSE 0
typedef char BOOL; /* or #define BOOL char */
or sometimes
typedef enum {False, True} BOOL;
Unlike C99, however, both have at least one flaw:
BOOL bClicked = False;
++bClicked, ++bClicked;
if (bClicked == False)
printf ("False\n");
else if (bClicked == True)
printf ("True\n");
else
printf ("Unknown: %d\n", bClicked);
That will print "Unknown: 2" because the defined BOOL type isn't a true Boolean type.
The C99 version:
_Bool bClicked = 0;
++bClicked, ++bClicked;
if (bClicked == 0)
printf ("False\n");
else if (bClicked == 1)
printf ("True\n");
else
printf ("Unknown: %d\n", bClicked);
That will print "True" because _Bool can only store 0 and 1, so incrementing 1 to get 2, which compares as not equal to 0, results in 1.
Of course, most people just use the language to their advantage rather than actually comparing against True/False constants:
if (bClicked)
{
/* True if bClicked does not compare equal to 0 */
}
else
{
/* False */
}
Because of that behavior, there isn't any real need for a Boolean type or true/false constants; they exist purely for indication of intent.
I vaguely remember someone ranting that some Windows API functions return a value of type BOOL, but TRUE and FALSE weren't the only possible return values, so despite returning a value that should have been a simple comparison as in that last bit of code, more comparisons were needed to handle all possible cases. Had there been an actual Boolean type back then, most likely those functions, whatever they are/were would have returned a value of type int instead. A BOOL return type suggests only two values can be returned, but apparently that wasn't the case with those functions, perhaps because there was a third (error) return value.
I saw the "new type" BOOL (YES, NO).
I read that this type is almost like a char.
For testing I did:
NSLog(@"Size of BOOL %d", sizeof(BOOL));
NSLog(@"Size of bool %d", sizeof(bool));
Good to see that both logs display "1" (sometimes in C++ bool is an int and its sizeof is 4)
So I was just wondering if there were some issues with the bool type or something?
Can I just use bool (that seems to work) without losing speed?
From the definition in objc.h:
#if (TARGET_OS_IPHONE && __LP64__) || TARGET_OS_WATCH
typedef bool BOOL;
#else
typedef signed char BOOL;
// BOOL is explicitly signed so @encode(BOOL) == "c" rather than "C"
// even if -funsigned-char is used.
#endif
#define YES ((BOOL)1)
#define NO ((BOOL)0)
So, yes, you can assume that BOOL is a char. You can use the (C99) bool type, but all of Apple's Objective-C frameworks and most Objective-C/Cocoa code uses BOOL, so you'll save yourself headache if the typedef ever changes by just using BOOL.
As mentioned above, BOOL is a signed char. bool is the type from the C99 standard (_Bool).
BOOL - YES/NO. bool - true/false.
See examples:
bool b1 = 2;
if (b1) printf("REAL b1 \n");
if (b1 != true) printf("NOT REAL b1 \n");
BOOL b2 = 2;
if (b2) printf("REAL b2 \n");
if (b2 != YES) printf("NOT REAL b2 \n");
And result is
REAL b1
REAL b2
NOT REAL b2
Note that bool != BOOL. Result below is only ONCE AGAIN - REAL b2
b2 = b1;
if (b2) printf("ONCE AGAIN - REAL b2 \n");
if (b2 != true) printf("ONCE AGAIN - NOT REAL b2 \n");
If you want to convert bool to BOOL you should use next code
BOOL b22 = b1 ? YES : NO; //and back - bool b11 = b2 ? true : false;
So, in our case:
BOOL b22 = b1 ? 2 : NO;
if (b22) printf("ONCE AGAIN MORE - REAL b22 \n");
if (b22 != YES) printf("ONCE AGAIN MORE- NOT REAL b22 \n");
And so... what do we get now? :-)
At the time of writing this is the most recent version of objc.h:
/// Type to represent a boolean value.
#if (TARGET_OS_IPHONE && __LP64__) || TARGET_OS_WATCH
#define OBJC_BOOL_IS_BOOL 1
typedef bool BOOL;
#else
#define OBJC_BOOL_IS_CHAR 1
typedef signed char BOOL;
// BOOL is explicitly signed so @encode(BOOL) == "c" rather than "C"
// even if -funsigned-char is used.
#endif
It means that on 64-bit iOS devices and on WatchOS, BOOL is exactly the same thing as bool, while on all other devices (OS X, 32-bit iOS) it is signed char and cannot even be overridden by the compiler flag -funsigned-char.
It also means that this example code will run differently on different platforms (tested it myself):
int myValue = 256;
BOOL myBool = myValue;
if (myBool) {
printf("i'm 64-bit iOS");
} else {
printf("i'm 32-bit iOS");
}
BTW, never assign things like array.count to a BOOL variable, because values whose low byte is zero (about 0.4% of the possible values) will truncate to NO.
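A sketch of that pitfall in plain C, assuming the signed char definition of BOOL shown earlier (for illustration only; don't redefine BOOL in real Objective-C code):

#include <stdio.h>

typedef signed char BOOL;   /* as in 32-bit objc.h */
#define YES ((BOOL)1)
#define NO  ((BOOL)0)

int main(void)
{
    unsigned long count = 256;          /* e.g. a collection count */
    BOOL hasItems     = count;          /* low byte of 256 is 0, so hasItems == NO */
    BOOL hasItemsSafe = (count > 0);    /* the comparison yields 0 or 1: always correct */

    printf("%d %d\n", hasItems, hasItemsSafe);   /* prints: 0 1 */
    return 0;
}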
The Objective-C type you should use is BOOL. There is nothing like a native boolean datatype, therefore to be sure that the code compiles on all compilers, use BOOL. (It's defined in the Apple frameworks.)
Yup, BOOL is a typedef for a signed char according to objc.h.
I don't know about bool, though. That's a C++ thing, right? If it's defined as a signed char where 1 is YES/true and 0 is NO/false, then I imagine it doesn't matter which one you use.
Since BOOL is part of Objective-C, though, it probably makes more sense to use a BOOL for clarity (other Objective-C developers might be puzzled if they see a bool in use).
Another difference between bool and BOOL is that they do not convert exactly to the same kind of objects, when you do key-value observing, or when you use methods like -[NSObject valueForKey:].
As everybody has said here, BOOL is a char. As such, it is converted to an NSNumber holding a char. This object is indistinguishable from an NSNumber created from a regular char like 'A' or '\0'. You have totally lost the information that you originally had a BOOL.
However, bool is converted to a CFBoolean, which behaves the same as NSNumber, but which retains the boolean origin of the object.
I do not think that this is an argument in a BOOL vs. bool debate, but this may bite you one day.
Generally speaking, you should go with BOOL, since this is the type used everywhere in the Cocoa/iOS APIs (designed before C99 and its native bool type).
The accepted answer has been edited, and its explanation has become a bit incorrect. The code sample has been refreshed, but the text below stays the same. You can no longer assume that BOOL is a char, since it depends on the architecture and platform.
Thus, if you run your code on a 32-bit platform (for example an iPhone 5) and print @encode(BOOL), you will see "c". It corresponds to a char type.
But if you run your code on an iPhone 5s (64-bit), you will see "B". It corresponds to a bool type.
As mentioned above, BOOL could be a signed char type depending on your architecture, while bool is the C99 boolean type. A simple experiment will show why BOOL and bool can behave differently:
bool ansicBool = 64;
if(ansicBool != true) printf("This will not print\n");
printf("Any given vlaue other than 0 to ansicBool is evaluated to %i\n", ansicBool);
BOOL objcBOOL = 64;
if(objcBOOL != YES) printf("This might print depnding on your architecture\n");
printf("BOOL will keep whatever value you assign it: %i\n", objcBOOL);
if(!objcBOOL) printf("This will not print\n");
printf("! operator will zero objcBOOL %i\n", !objcBOOL);
if(!!objcBOOL) printf("!! will evaluate objcBOOL value to %i\n", !!objcBOOL);
To your surprise, if(objcBOOL != YES) will be evaluated to 1 by the compiler, since YES is actually the character code 1, and in the eyes of the compiler, character code 64 is of course not equal to character code 1; thus the if statement will evaluate to YES/true/1 and the following line will run.
However, since a non-zero bool type always evaluates to the integer value of 1, the above issue will not affect your code. Below are some good tips if you want to use the Objective-C BOOL type vs the ANSI C bool type:
Always assign the YES or NO value and nothing else.
Convert to BOOL by using the double negation operator !! to avoid unexpected results.
When checking for NO, use if(!myBool) instead of if(myBool != YES); it is much cleaner to use the negation operator ! and it gives the expected result (see the short sketch after this list).
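A short sketch of those tips as a code fragment (the variable names are mine; BOOL/YES/NO are the Objective-C definitions discussed above):

BOOL isLoaded = NO;            /* tip 1: assign only YES or NO */

int flags = 64;
isLoaded = !!flags;            /* tip 2: !! collapses any non-zero value to 1 (YES) */

if (!isLoaded) {
    /* tip 3: prefer !isLoaded over isLoaded != YES when testing for NO */
}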
I go against convention here. I don't like typedef's to base types. I think it's a useless indirection that removes value.
When I see the base type in your source I will instantly understand it. If it's a typedef I have to look it up to see what I'm really dealing with.
When porting to another compiler or adding another library their set of typedefs may conflict and cause issues that are difficult to debug. I just got done dealing with this in fact. In one library boolean was typedef'ed to int, and in mingw/gcc it's typedef'ed to a char.
Also, be aware of differences in casting, especially when working with bitmasks, due to casting to signed char:
bool a = 0x0100;
a == true; // expression true
BOOL b = 0x0100;
b == false; // expression true on !((TARGET_OS_IPHONE && __LP64__) || TARGET_OS_WATCH), e.g. MacOS
b == true; // expression true on (TARGET_OS_IPHONE && __LP64__) || TARGET_OS_WATCH
If BOOL is a signed char instead of a bool, the cast of 0x0100 to BOOL simply drops the set bit, and the resulting value is 0.