How to neatly avoid C casts losing truth

I'm quite happy that, in C, things like this are bad code:
(var_a == var_b) ? TRUE : FALSE
However, what's the best way of dealing with this:
/* Header stuff */
#define INTERESTING_FLAG 0x80000000
typedef short int BOOL;
void func(BOOL);
/* Code */
int main(int argc, char *argv[])
{
    unsigned long int flags = 0x00000000;

    ... /* Various bits of flag processing */

    func(flags & INTERESTING_FLAG); /* func never receives a non-zero value
                                     * as the top bits are cut off when the
                                     * argument is cast down to a short int
                                     */
}
Is it acceptable (for whatever value of acceptable you're using) to have (flags & FLAG_CONST) ? TRUE : FALSE?

In either case I would call func with (flags & INTERESTING_FLAG) != 0 as the argument, to indicate that a boolean parameter is required and not the arithmetic result of flags & INTERESTING_FLAG.

I'd prefer (flags & CONST_FLAG) != 0. Better still, use the _Bool type if you have it (though it's often disguised as bool).
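For illustration, here is a minimal sketch of the question's code reworked this way, assuming a C99 compiler with <stdbool.h> (func's body is invented here just to make the example self-contained):
#include <stdbool.h>
#include <stdio.h>

#define INTERESTING_FLAG 0x80000000UL

static void func(bool b)
{
    printf("flag set: %d\n", b);   /* b is exactly 0 or 1 */
}

int main(void)
{
    unsigned long flags = INTERESTING_FLAG;
    /* ... various bits of flag processing ... */
    func((flags & INTERESTING_FLAG) != 0);   /* the comparison yields exactly 0 or 1 */
    return 0;
}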

Set your compiler flags as anally as possible, to warn you of any cast that loses bits, and treat warnings as errors.

Some people don't like it, but I use !!, i.e.
!!(flags & CONST_FLAG)
(not as a to_bool macro as someone else suggested, just straight in the code).
If more people used it, it wouldn't be seen as unusual, so start using it!!
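Applied to the code from the question, the call would simply become:
func(!!(flags & INTERESTING_FLAG)); /* !! collapses any non-zero value to 1 before the narrowing conversion */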

This may not be a popular solution, but sometimes macros are useful.
#define to_bool(x) (!!(x))
Now we can safely have anything we want without fear of overflowing our type:
func(to_bool(flags & INTERESTING_FLAG));
Another alternative might be to define your boolean type to be an intmax_t (from stdint.h) so that it's impossible for a value to be truncated into falseness.
While I'm here, I want to say that you should be using a typedef for defining a new type, not a #define:
typedef short Bool; // or whatever type you end up choosing
Some might argue that you should use a const variable instead of a macro for numeric constants:
const unsigned long INTERESTING_FLAG = 0x80000000;
Overall there are better things you can spend your time on. But using a macro where a typedef belongs is a bit silly.

You could avoid this a couple different ways:
First off
void func(unsigned long int);
would take care of it...
Or
if (flags & INTERESTING_FLAG)
{
    func(true);
}
else
{
    func(false);
}
would also do it.
EDIT: (flags & INTERESTING_FLAG) != 0 is also good. Probably better.

This is partially off topic:
I'd also create a helper function that makes it obvious to the reader what the purpose of the check is, so you don't fill your code with this explicit flag checking all over the place. Typedefing the flag type would make it easier to change the flag type and implementation later.
Modern compilers support the inline keyword, which can get rid of the performance overhead of a function call.
typedef unsigned long int flagtype;
...
inline bool hasInterestingFlag(flagtype flags) {
    return ((flags & INTERESTING_FLAG) != 0);
}

Do you have anything against
flags & INTERESTING_FLAG ? TRUE : FALSE
?

This is why you should only use values in a "boolean" way when those values have explicitly boolean semantics. Your value does not satisfy that rule, since it has pronounced integer semantics (or, more precisely, bit-array semantics). To convert such a value to boolean, compare it to 0:
func((flags & INTERESTING_FLAG) != 0);


C Language program keeps getting bool and true and false as errors [duplicate]

C doesn't have any built-in boolean types. What's the best way to use them in C?
From best to worse:
Option 1 (C99 and newer)
#include <stdbool.h>
Option 2
typedef enum { false, true } bool;
Option 3
typedef int bool;
enum { false, true };
Option 4
typedef int bool;
#define true 1
#define false 0
Explanation
Option 1 will work only if you use C99 (or newer) and it's the "standard way" to do it. Choose this if possible.
Options 2, 3 and 4 will in practice have identical behavior. #2 and #3 don't use #defines though, which in my opinion is better.
If you are undecided, go with #1!
A few thoughts on booleans in C:
I'm old enough that I just use plain ints as my boolean type without any typedefs or special defines or enums for true/false values. If you follow my suggestion below on never comparing against boolean constants, then you only need to use 0/1 to initialize the flags anyway. However, such an approach may be deemed too reactionary in these modern times. In that case, one should definitely use <stdbool.h> since it at least has the benefit of being standardized.
Whatever the boolean constants are called, use them only for initialization. Never ever write something like
if (ready == TRUE) ...
while (empty == FALSE) ...
These can always be replaced by the clearer
if (ready) ...
while (!empty) ...
Note that these can actually reasonably and understandably be read out loud.
Give your boolean variables positive names, ie full instead of notfull. The latter leads to code that is difficult to read easily. Compare
if (full) ...
if (!full) ...
with
if (!notfull) ...
if (notfull) ...
Both of the former pair read naturally, while !notfull is awkward to read even as it is, and becomes much worse in more complex boolean expressions.
Boolean arguments should generally be avoided. Consider a function defined like this
void foo(bool option) { ... }
Within the body of the function, it is very clear what the argument means since it has a convenient, and hopefully meaningful, name. But, the call sites look like
foo(TRUE);
foo(FALSE);
Here, it's essentially impossible to tell what the parameter means without always looking at the function definition or declaration, and it gets much worse as soon as you add even more boolean parameters. I suggest either
typedef enum { OPT_ON, OPT_OFF } foo_option;
void foo(foo_option option);
or
#define OPT_ON true
#define OPT_OFF false
void foo(bool option) { ... }
In either case, the call site now looks like
foo(OPT_ON);
foo(OPT_OFF);
which the reader has at least a chance of understanding without dredging up the definition of foo.
A boolean in C is an integer: zero for false and non-zero for true.
See also Boolean data type, section C, C++, Objective-C, AWK.
Here is the version that I used:
typedef enum { false = 0, true = !false } bool;
Because false has only one value, but a logical true could have many values, this technique sets true to be whatever the compiler will use for the opposite of false.
This takes care of the problem of someone coding something that would come down to this:
if (true == !false)
I think we would all agree that that is not a good practice, but for the one time cost of doing "true = !false" we eliminate that problem.
[EDIT] In the end I used:
typedef enum { myfalse = 0, mytrue = !myfalse } mybool;
to avoid name collision with other schemes that were defining true and false. But the concept remains the same.
[EDIT] To show conversion of integer to boolean:
mybool somebool;
int someint = 5;
somebool = !!someint;
The first (rightmost) ! converts the non-zero integer to 0, then the second (leftmost) ! converts the 0 to a mytrue value. I will leave it as an exercise for the reader to convert a zero integer.
[EDIT]
It is my style to use the explicit setting of a value in an enum when the specific value is required, even if the default value would be the same. Example: because false needs to be zero, I use false = 0 rather than just false.
[EDIT]
Show how to limit the size of enum when compiling with gcc:
typedef enum __attribute__((__packed__)) { myfalse = 0, mytrue = !myfalse } mybool;
That is, if someone does:
struct mystruct {
    mybool somebool1;
    mybool somebool2;
    mybool somebool3;
    mybool somebool4;
};
the size of the structure will be 4 bytes rather than 16 bytes.
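A quick way to check the size claim (a sketch assuming GCC or Clang, since the packed attribute is a compiler extension):
#include <stdio.h>

typedef enum __attribute__((__packed__)) { myfalse = 0, mytrue = !myfalse } mybool;

struct mystruct {
    mybool somebool1;
    mybool somebool2;
    mybool somebool3;
    mybool somebool4;
};

int main(void)
{
    /* with the attribute each mybool is 1 byte, so the struct is 4 bytes;
       without it each enum is int-sized and the struct is typically 16 bytes */
    printf("sizeof(mybool) = %zu, sizeof(struct mystruct) = %zu\n",
           sizeof(mybool), sizeof(struct mystruct));
    return 0;
}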
If you are using a C99 compiler it has built-in support for bool types:
#include <stdbool.h>
int main()
{
bool b = false;
b = true;
}
http://en.wikipedia.org/wiki/Boolean_data_type
First things first. C, i.e. ISO/IEC 9899, has had a boolean type for 19 years now. That is way longer than the expected length of a C programming career, amateur/academic/professional parts combined, of most people visiting this question; mine surpasses it by perhaps only 1-2 years. It means that during the time the average reader has learnt anything at all about C, C has actually had the boolean data type.
For the datatype, #include <stdbool.h>, and use true, false and bool. Or do not include it, and use _Bool, 1 and 0 instead.
There are various dangerous practices promoted in the other answers to this thread. I will address them:
typedef int bool;
#define true 1
#define false 0
This is a no-no, because a casual reader - who did learn C within those 19 years - would expect that bool refers to the actual bool data type and would behave similarly, but it doesn't! For example
double a = ...;
bool b = a;
With C99 bool/ _Bool, b would be set to false iff a was zero, and true otherwise. C11 6.3.1.2p1
When any scalar value is converted to _Bool, the result is 0 if the value compares equal to 0; otherwise, the result is 1. 59)
Footnotes
59) NaNs do not compare equal to 0 and thus convert to 1.
With the typedef in place, the double would be coerced to an int - if the value of the double isn't in the range for int, the behaviour is undefined.
Naturally the same applies if true and false are declared in an enum.
What is even more dangerous is declaring
typedef enum bool {
false, true
} bool;
because now all values besides 1 and 0 are invalid, and should such a value be assigned to a variable of that type, the behaviour would be wholly undefined.
Therefore iff you cannot use C99 for some inexplicable reason, for boolean variables you should use:
type int and values 0 and 1 as-is; and carefully do domain conversions from any other values to these with double negation !!
or if you insist you don't remember that 0 is falsy and non-zero truish, at least use upper case so that they don't get confused with the C99 concepts: BOOL, TRUE and FALSE!
typedef enum {
false = 0,
true
} t_bool;
C has a boolean type: bool (at least for the last 10(!) years)
Include stdbool.h and true/false will work as expected.
Anything nonzero is evaluated to true in boolean operations, so you could just
#define TRUE 1
#define FALSE 0
and use the constants.
Just a complement to other answers and some clarification, if you are allowed to use C99.
+-------+----------------+-------------------------+--------------------+
| Name  | Characteristic | Requires stdbool.h      | Value              |
+-------+----------------+-------------------------+--------------------+
| _Bool | Native type    | No                      |                    |
+-------+----------------+-------------------------+--------------------+
| bool  | Macro          | Yes                     | Expands to _Bool   |
+-------+----------------+-------------------------+--------------------+
| true  | Macro          | Yes                     | Expands to 1       |
+-------+----------------+-------------------------+--------------------+
| false | Macro          | Yes                     | Expands to 0       |
+-------+----------------+-------------------------+--------------------+
Some of my preferences:
_Bool or bool? Both are fine, but bool looks better than the keyword _Bool.
Accepted values for bool and _Bool are false or true. Assigning 0 or 1 instead of false or true is valid, but makes the logic flow harder to read and understand.
Some info from the standard:
_Bool is NOT unsigned int, but is part of the group unsigned integer types. It is large enough to hold the values 0 or 1.
You are able to redefine bool, true and false, but do not; it is not a good idea. This ability is considered obsolescent and will be removed in the future.
Assigning a scalar value (arithmetic types and pointer types) to _Bool or bool: if the scalar value is equal to 0 or compares equal to 0, the result is 0, otherwise the result is 1. For example, in _Bool x = 9; the 9 is converted to 1 when assigned to x (see the short demo after this list).
_Bool is typically 1 byte (8 bits), and the programmer is often tempted to try to use the other bits, but that is not recommended, because the only guarantee given is that one bit is used to store data, unlike type char which has 8 bits available.
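A short demo of the conversion rule described above (a sketch; nothing here beyond standard C99):
#include <stdbool.h>
#include <stdio.h>

int main(void)
{
    _Bool x = 9;            /* any non-zero scalar converts to 1 */
    bool  y = 0.0;          /* zero converts to 0 */
    int  *p = (int *)0;
    bool  z = p;            /* a null pointer also converts to 0 */

    printf("%d %d %d\n", x, y, z);   /* prints: 1 0 0 */
    return 0;
}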
Nowadays C99 supports boolean types but you need to #include <stdbool.h>.
Example:
#include <stdbool.h>
#include <stdio.h>
int main()
{
bool arr[2] = {true, false};
printf("%d\n", arr[0] && arr[1]);
printf("%d\n", arr[0] || arr[1]);
return 0;
}
Output:
0
1
It is this:
#define TRUE 1
#define FALSE 0
You can use a char, or another small number container for it.
Pseudo-code
#define TRUE 1
#define FALSE 0
char bValue = TRUE;
You could use _Bool, but the return value must be an integer (1 for true, 0 for false).
However, it's recommended to include and use bool as in C++, as said in
this reply from daniweb forum, as well as this answer, from this other stackoverflow question:
_Bool: C99's boolean type. Using _Bool directly is only recommended if you're maintaining legacy code that already defines macros for bool, true, or false. Otherwise, those macros are standardized in the header. Include that header and you can use bool just like you would in C++.
Conditional expressions are considered to be true if they are non-zero, but the C standard requires that logical operators themselves return either 0 or 1.
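A small illustration of that distinction (a sketch using plain standard C):
#include <stdio.h>

int main(void)
{
    int flags = 0x80;

    if (flags)                      /* any non-zero value counts as true in a condition */
        printf("condition taken\n");

    printf("%d\n", flags && 1);     /* logical operators yield exactly 0 or 1: prints 1 */
    printf("%d\n", !flags);         /* prints 0 */
    return 0;
}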
@Tom: #define TRUE !FALSE is bad and is completely pointless. If the header file makes its way into compiled C++ code, then it can lead to problems:
void foo(bool flag);
...
int flag = TRUE;
foo(flag);
Some compilers will generate a warning about the int => bool conversion. Sometimes people avoid this by doing:
foo(flag == TRUE);
to force the expression to be a C++ bool. But if you #define TRUE !FALSE, you end up with:
foo(flag == !0);
which ends up doing an int-to-bool comparison that can trigger the warning anyway.
If you are using C99 then you can use the _Bool type. No #includes are necessary. You do need to treat it like an integer, though, where 1 is true and 0 is false.
You can then define TRUE and FALSE.
_Bool this_is_a_Boolean_var = 1;
//or using it with true and false
#define TRUE 1
#define FALSE 0
_Bool var = TRUE;
This is what I use:
enum {false, true};
typedef _Bool bool;
_Bool is a built in type in C. It's intended for boolean values.
You can simply use the #define directive as follows:
#define TRUE 1
#define FALSE 0
#define NOT(arg) (((arg) == TRUE) ? FALSE : TRUE)
typedef int bool;
And use as follows:
bool isVisible = FALSE;
bool isWorking = TRUE;
isVisible = NOT(isVisible);
and so on

C Macro: get smallest type for an integer constant

Why I need to figure out the smallest type of a literal (Backstory)
I've written a set of macros to create and use fifos. Macros allow for a generic, yet still very fast implementation on all systems with static memory allocation, such as in small embedded systems. The guys over at codereview did not have any major concerns with my implementation either.
The data is put into anonymous structs; all data is accessed by the identifier of that struct. Currently the function-like macros to create these structs look like this
#define _fff_create(_type, _depth, _id) \
struct {uint8_t read; uint8_t write; _type data[_depth];} _id = {0,0,{}}
#define _fff_create_deep(_type, _depth, _id) \
struct {uint16_t read; uint16_t write; _type data[_depth];} _id = {0,0,{}}
What I'm looking for
Now I'd like to merge both of these into one macro. To do this I have to figure out the minimum required size of read and write to index _depth elements at compile time. Parameter names starting with _ indicate that only a literal or a #define value may be passed; both are known at compile time.
Thus I hope to find a macro typeof_literal(arg) which returns uint8_t if arg < 256 and uint16_t otherwise.
What I've tried
GCC 4.9.2 offers a keyword called typeof(). However, when used with any literal it returns an int type, which is two bytes on my system.
Another feature of GCC 4.9.2 is a compound statement. typeof(({uint8_t u8 = 1; u8;})) will correctly return uint8_t. However I could not figure out a way to put a condition for the type in that block:
typeof(({uint8_t u8 = 1; uint16_t u16 = 1; input ? u8 : u16;})) always returns uint16_t because of the type promotion of the ?: operator
if(...) can't be used either, as any command will happen in "lower" blocks
Macros can't contain #if, which makes them unusable for this comparison as well.
Can't you just leave it like that?
I realize there might not be a solution to this problem. That's OK too; the current code is just a minor inconvenience. Yet I'd like to know if there's a tricky way around this. A solution could open up new possibilities for macros in general. If you are sure that this can't be done, please explain why.
I think the building block you are looking for is __builtin_choose_expr, which is a lot like the ternary operator, but does not convert its result to a common type. With
#define CHOICE(x) __builtin_choose_expr (x, (int) 1, (short) 2)
this
printf ("%zu %zu\n", sizeof (CHOICE (0)), sizeof (CHOICE (1)));
will print
2 4
as expected.
However, as Greg Hewgill points out, C++ has better facilities for that (but they are still difficult to use).
The macro I was looking for can indeed be written with __builtin_choose_expr as Florian suggested. My solution is attached below, it has been tested and is confirmed working. Use it as you wish!
#define typeof_literal(_literal) \
typeof(__builtin_choose_expr((_literal)>0, \
__builtin_choose_expr((_literal)<=UINT8_MAX, (uint8_t) 0, \
__builtin_choose_expr((_literal)<=UINT16_MAX, (uint16_t) 0, \
__builtin_choose_expr((_literal)<=UINT32_MAX, (uint32_t) 0, (uint64_t) 0))), \
__builtin_choose_expr((_literal)>=INT8_MIN, (int8_t) 0, \
__builtin_choose_expr((_literal)>=INT16_MIN, (int16_t) 0, \
__builtin_choose_expr((_literal)>=INT32_MIN, (int32_t) 0, (int64_t) 0)))))
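With that in place, the two creation macros from the question could presumably be merged along these lines (an untested sketch; whether the index type should be derived from _depth or _depth - 1 depends on how the fifo indices are used):
#include <stdint.h>

/* sketch: single creation macro, index type chosen from the depth literal */
#define _fff_create(_type, _depth, _id) \
    struct {typeof_literal(_depth) read; typeof_literal(_depth) write; _type data[_depth];} _id = {0, 0, {}}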

Use of ampersands in C if statement criteria

I'm new to C, and am trying to make sense of some code from NREL available here so that I may program a similar function in R. Here's the part of the code I cannot seem to figure out:
long S_solpos (struct posdata *pdat)
{
if ( pdat->function & L_DOY )
doy2dom( pdat );
}
In particular, what is the evaluation criteria asking in:
if ( pdat->function & L_DOY )
I understand that pdat is a pointer to the posdata structure, and from the header file I know that "function" is a variable in the posdata structure which contains various integer codes:
struct posdata
{
int function;
and that L_DOY can be one such function:
/*Define the function codes*/
#define L_DOY 0x0001
#define L_GEOM 0x0002
#define L_ZENETR 0x0004
I would assume that the if statement is checking whether the function variable within pdat corresponds to the code for L_DOY. However, I am still very new to C, and have been unable to find any examples or explanations that utilize the ampersand in an if statement like this.
Thanks in advance for any help.
It means bitwise-and. The value it's testing is a set of bit flags that can have one or more set. It's checking whether the L_DOY flag specifically is set, because bitwise-and keeps bits that appear in both operands, so 0b0101 & 0b0011 would produce 0b0001 (the only bit set in both operands). Since L_DOY is only a single bit, the low bit, it's checking if that bit is set in function; it doesn't care if other bits are set, or not.
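In other words, function is used as a set of bit flags and each L_* constant selects one bit. A small standalone illustration (a sketch; the constants are copied from the header quoted above):
#include <stdio.h>

#define L_DOY    0x0001
#define L_GEOM   0x0002
#define L_ZENETR 0x0004

int main(void)
{
    int function = L_DOY | L_ZENETR;   /* set two flags */

    if (function & L_DOY)
        printf("L_DOY is set\n");      /* printed */
    if (function & L_GEOM)
        printf("L_GEOM is set\n");     /* not printed */
    return 0;
}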

How do I tell if a C integer variable is signed?

As an exercise, I'd like to write a macro which tells me if an integer variable is signed. This is what I have so far and I get the results I expect if I try this on a char variable with gcc -fsigned-char or -funsigned-char.
#define ISVARSIGNED(V) (V = -1, (V < 0) ? 1 : 0)
Is this portable? Is there a way to do this without destroying the value of the variable?
#define ISVARSIGNED(V) ((V)<0 || (-V)<0 || (V-1)<0)
doesn't change the value of V. The third test handles the case where V == 0.
On my compiler (gcc/cygwin) this works for int and long but not for char or short.
#define ISVARSIGNED(V) ((V)-1<0 || -(V)-1<0)
also does the job in two tests.
If you're using GCC you can use the typeof keyword to not overwrite the value:
#define ISVARSIGNED(V) ({ typeof (V) _V = -1; _V < 0 ? 1 : 0; })
This creates a temporary variable, _V, that has the same type as V.
As for portability, I don't know. It will work on a two's complement machine (a.k.a. everything your code will ever run on in all probability), and I believe it will work on one's complement and sign-and-magnitude machines as well. As a side note, if you use typeof, you may want to cast -1 to typeof (V) to make it safer (i.e. less likely to trigger warnings).
#define ISVARSIGNED(V) ((-(V) < 0) != ((V) < 0))
Without destroying the variable's value. But doesn't work for 0 values.
What about:
#define ISVARSIGNED(V) (((V)-(V)-1) < 0)
This simple solution has no side effects, including the benefit of only referring to v once (which is important in a macro). We use the gcc extension "typeof" to get the type of v, and then cast -1 to this type:
#define IS_SIGNED_TYPE(v) ((typeof(v))-1 <= 0)
It's <= rather than just < to avoid compiler warnings for some cases (when enabled).
A different approach to all the "make it negative" answers:
#define ISVARSIGNED(V) (~(V^V)<0)
That way there's no need to have special cases for different values of V, since ∀ V ∈ ℤ, V^V = 0.
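A quick demonstration of why this works (a sketch): V^V is always 0, and ~0 is -1 for a signed type but the maximum value for an unsigned one. Note that operands narrower than int are promoted to int first, so like some of the other macros here it only distinguishes types of int width and wider:
#include <stdio.h>

#define ISVARSIGNED(V) (~(V ^ V) < 0)

int main(void)
{
    int si = 42;
    unsigned int ui = 42;

    printf("int:          %d\n", ISVARSIGNED(si));   /* prints 1 */
    printf("unsigned int: %d\n", ISVARSIGNED(ui));   /* prints 0 */
    return 0;
}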
A distinguishing characteristic of signed/unsigned math is that when you right shift a signed number, the most significant bit is copied. When you shift an unsigned number, the new bits are 0.
#define HIGH_BIT(n) ((n) & (1 << (sizeof(n) * CHAR_BIT - 1)))   /* CHAR_BIT is in <limits.h> */
#define IS_SIGNED(n) (HIGH_BIT(n) ? HIGH_BIT((n) >> 1) != 0 : HIGH_BIT(~(n) >> 1) != 0)
So basically, this macro uses a conditional expression to determine whether the high bit of a number is set. If it's not, the macro sets it by bitwise negating the number. We can't do an arithmetic negation because -0 == 0. We then shift right by 1 bit and test whether sign extension occurred.
This assumes 2's complement arithmetic, but that's usually a safe assumption.
Why on earth do you need it to be a macro? Templates are great for this:
template <typename T>
bool is_signed(T) {
static_assert(std::numeric_limits<T>::is_specialized, "Specialize std::numeric_limits<T>");
return std::numeric_limits<T>::is_signed;
}
Which will work out-of-the-box for all fundamental integral types. It will also fail at compile-time on pointers, which the version using only subtraction and comparison probably won't.
EDIT: Oops, the question requires C. Still, templates are the nice way :P

Objective-C : BOOL vs bool

I saw the "new type" BOOL (YES, NO).
I read that this type is almost like a char.
For testing I did :
NSLog(#"Size of BOOL %d", sizeof(BOOL));
NSLog(#"Size of bool %d", sizeof(bool));
Good to see that both logs display "1" (sometimes in C++ bool is an int and its sizeof is 4)
So I was just wondering if there were some issues with the bool type or something ?
Can I just use bool (that seems to work) without losing speed?
From the definition in objc.h:
#if (TARGET_OS_IPHONE && __LP64__) || TARGET_OS_WATCH
typedef bool BOOL;
#else
typedef signed char BOOL;
// BOOL is explicitly signed so @encode(BOOL) == "c" rather than "C"
// even if -funsigned-char is used.
#endif
#define YES ((BOOL)1)
#define NO ((BOOL)0)
So, yes, you can assume that BOOL is a char. You can use the (C99) bool type, but all of Apple's Objective-C frameworks and most Objective-C/Cocoa code uses BOOL, so you'll save yourself headache if the typedef ever changes by just using BOOL.
As mentioned above, BOOL is a signed char. bool is the type from the C99 standard (_Bool, via <stdbool.h>).
BOOL - YES/NO. bool - true/false.
See examples:
bool b1 = 2;
if (b1) printf("REAL b1 \n");
if (b1 != true) printf("NOT REAL b1 \n");
BOOL b2 = 2;
if (b2) printf("REAL b2 \n");
if (b2 != YES) printf("NOT REAL b2 \n");
And result is
REAL b1
REAL b2
NOT REAL b2
Note that bool != BOOL. Result below is only ONCE AGAIN - REAL b2
b2 = b1;
if (b2) printf("ONCE AGAIN - REAL b2 \n");
if (b2 != true) printf("ONCE AGAIN - NOT REAL b2 \n");
If you want to convert bool to BOOL you should use the following code
BOOL b22 = b1 ? YES : NO; //and back - bool b11 = b2 ? true : false;
So, in our case:
BOOL b22 = b1 ? 2 : NO;
if (b22) printf("ONCE AGAIN MORE - REAL b22 \n");
if (b22 != YES) printf("ONCE AGAIN MORE- NOT REAL b22 \n");
And so.. what we get now? :-)
At the time of writing this is the most recent version of objc.h:
/// Type to represent a boolean value.
#if (TARGET_OS_IPHONE && __LP64__) || TARGET_OS_WATCH
#define OBJC_BOOL_IS_BOOL 1
typedef bool BOOL;
#else
#define OBJC_BOOL_IS_CHAR 1
typedef signed char BOOL;
// BOOL is explicitly signed so @encode(BOOL) == "c" rather than "C"
// even if -funsigned-char is used.
#endif
It means that on 64-bit iOS devices and on watchOS, BOOL is exactly the same thing as bool, while on all other devices (OS X, 32-bit iOS) it is signed char and cannot even be overridden by the compiler flag -funsigned-char.
It also means that this example code will run differently on different platforms (tested it myself):
int myValue = 256;
BOOL myBool = myValue;
if (myBool) {
printf("i'm 64-bit iOS");
} else {
printf("i'm 32-bit iOS");
}
BTW, never assign things like array.count to a BOOL variable, because about 0.4% of the possible values (every multiple of 256) truncate to zero and will read as NO.
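To see why, here is a small plain-C sketch of the truncation, with BOOL modelled as signed char as in the non-64-bit definition quoted above (the count value is arbitrary; the exact result of the narrowing conversion is implementation-defined, but this is what typical two's complement targets do):
#include <stdio.h>

typedef signed char BOOL;      /* the 32-bit / OS X definition quoted above */

int main(void)
{
    unsigned long count = 256;     /* e.g. a collection count */
    BOOL nonEmpty = count;         /* low byte is 0x00, so nonEmpty becomes 0 */

    if (!nonEmpty)
        printf("256 items, but the BOOL says NO\n");

    BOOL b = 640;                  /* low byte is 0x80: typically -128 */
    printf("%d\n", b);
    return 0;
}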
The Objective-C type you should use is BOOL. There is nothing like a native boolean datatype, so to be sure that the code compiles on all compilers, use BOOL. (It's defined in the Apple frameworks.)
Yup, BOOL is a typedef for a signed char according to objc.h.
I don't know about bool, though. That's a C++ thing, right? If it's defined as a signed char where 1 is YES/true and 0 is NO/false, then I imagine it doesn't matter which one you use.
Since BOOL is part of Objective-C, though, it probably makes more sense to use a BOOL for clarity (other Objective-C developers might be puzzled if they see a bool in use).
Another difference between bool and BOOL is that they do not convert exactly to the same kind of objects, when you do key-value observing, or when you use methods like -[NSObject valueForKey:].
As everybody has said here, BOOL is char. As such, it is converted to an NSNumber holding a char. This object is indistinguishable from an NSNumber created from a regular char like 'A' or '\0'. You have totally lost the information that you originally had a BOOL.
However, bool is converted to a CFBoolean, which behaves the same as NSNumber, but which retains the boolean origin of the object.
I do not think that this is an argument in a BOOL vs. bool debate, but this may bite you one day.
Generally speaking, you should go with BOOL, since this is the type used everywhere in the Cocoa/iOS APIs (designed before C99 and its native bool type).
The accepted answer has been edited and its explanation has become a bit incorrect. The code sample has been refreshed, but the text below it stays the same. You cannot assume that BOOL is a char any more, since it depends on the architecture and platform.
Thus, if you run your code on a 32-bit platform (for example an iPhone 5) and print @encode(BOOL), you will see "c". It corresponds to the char type.
But if you run your code on an iPhone 5s (64-bit), you will see "B". It corresponds to the bool type.
As mentioned above, BOOL can be a signed char type depending on your architecture, while bool is the C99 boolean type. A simple experiment will show the difference and why BOOL and bool can behave differently:
bool ansicBool = 64;
if(ansicBool != true) printf("This will not print\n");
printf("Any given vlaue other than 0 to ansicBool is evaluated to %i\n", ansicBool);
BOOL objcBOOL = 64;
if(objcBOOL != YES) printf("This might print depending on your architecture\n");
printf("BOOL will keep whatever value you assign it: %i\n", objcBOOL);
if(!objcBOOL) printf("This will not print\n");
printf("! operator will zero objcBOOL %i\n", !objcBOOL);
if(!!objcBOOL) printf("!! will evaluate objcBOOL value to %i\n", !!objcBOOL);
To your surprise, if(objcBOOL != YES) evaluates to true here, since YES is actually the character code 1, and in the eyes of the compiler character code 64 is of course not equal to character code 1; thus the if condition evaluates to YES/true/1 and the following line will run.
However, since a non-zero bool type always evaluates to the integer value of 1, the above issue will not affect your code. Below are some good tips if you want to use the Objective-C BOOL type vs the ANSI C bool type:
Always assign the YES or NO value and nothing else.
Convert to BOOL by using the double-not (!!) operator to avoid unexpected results.
When checking for NO, use if(!myBool) instead of if(myBool != YES); it is much cleaner to use the not (!) operator and it gives the expected result.
I go against convention here. I don't like typedefs to base types. I think it's a useless indirection that removes value.
When I see the base type in your source I will instantly understand it. If it's a typedef I have to look it up to see what I'm really dealing with.
When porting to another compiler or adding another library their set of typedefs may conflict and cause issues that are difficult to debug. I just got done dealing with this in fact. In one library boolean was typedef'ed to int, and in mingw/gcc it's typedef'ed to a char.
Also, be aware of differences in casting, especially when working with bitmasks, due to casting to signed char:
bool a = 0x0100;
a == true; // expression true
BOOL b = 0x0100;
b == false; // expression true on !((TARGET_OS_IPHONE && __LP64__) || TARGET_OS_WATCH), e.g. MacOS
b == true; // expression true on (TARGET_OS_IPHONE && __LP64__) || TARGET_OS_WATCH
If BOOL is a signed char instead of a bool, the cast of 0x0100 to BOOL simply drops the set bit, and the resulting value is 0.
