How do I byte-swap a signed number in C?

I understand that casting from an unsigned type to a signed type of equal rank produces an implementation-defined value:
C99 6.3.1.3:
Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.
This means I don't know how to byte-swap a signed number. For instance, suppose I am receiving two-byte, two's complement signed values in little-endian order from a peripheral device, and processing them on a big-endian CPU. The byte-swapping primitives in the C library (like ntohs) are defined to work on unsigned values. If I convert my data to unsigned so I can byte-swap it, how do I reliably recover a signed value afterward?

As you say in your question, the result is implementation-defined or an implementation-defined signal is raised; i.e., what happens depends on the platform and compiler.

To byte-swap a signed number while avoiding as much implementation-defined behavior as possible, you can use a wider signed intermediate: one that can represent the entire range of the unsigned type having the same width as the signed value you want to byte-swap. Taking your example of little-endian, 16-bit numbers:
// Code below assumes CHAR_BIT == 8, INT_MAX is at least 65535, and
// signed numbers are two's complement.
#include <stdint.h>

int16_t
sl16_to_host(unsigned char b[2])
{
    unsigned int n = ((unsigned int)b[0]) | (((unsigned int)b[1]) << 8);
    int v = n;
    if (n & 0x8000) {
        v -= 0x10000;
    }
    return (int16_t)v;
}
Here's what this does. First, it converts the little-endian value in b to a host-endian unsigned value (regardless of which endianness the host actually is). Then it stores that value in a wider, signed variable. Its value is still in the range [0, 65535], but it is now a signed quantity. Because int can represent all the values in that range, the conversion is fully defined by the standard.
Now comes the key step. We test the high bit of the unsigned value, which is the sign bit, and if it's set we subtract 65536 (0x10000) from the signed value. That maps the range [32768, 65535] to [-32768, -1], which is precisely how a two's complement signed number is encoded. This is still happening in the wider type, so we are guaranteed that all the values in the range are representable.
Finally, we truncate the wider type to int16_t. This step involves unavoidable implementation-defined behavior, but with probability one, your implementation defines it to behave as you would expect. In the vanishingly unlikely event that your implementation uses sign-and-magnitude or ones' complement representation for signed numbers, the value -32768 will be mangled by the truncation, and may cause the program to crash. I wouldn't bother worrying about it.
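As a quick sanity check (a hypothetical test harness, not part of the original answer), feeding sl16_to_host the little-endian bytes 0xFE 0xFF should yield -2:

#include <stdio.h>

/* assumes sl16_to_host from above is in scope */
int main(void)
{
    unsigned char raw[2] = { 0xFE, 0xFF };  /* -2 in little-endian two's complement */
    printf("%d\n", sl16_to_host(raw));      /* prints -2 */
    return 0;
}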
Another approach, which may be useful for byteswapping 32-bit numbers when you don't have a 64-bit type available, is to mask out the sign bit and handle it separately:
int32_t
sl32_to_host(unsigned char b[4])
{
    uint32_t mag = ((((uint32_t)b[0]) & 0xFF) <<  0) |
                   ((((uint32_t)b[1]) & 0xFF) <<  8) |
                   ((((uint32_t)b[2]) & 0xFF) << 16) |
                   ((((uint32_t)b[3]) & 0x7F) << 24);
    int32_t val = mag;
    if (b[3] & 0x80) {
        val = (val - 0x7fffffff) - 1;
    }
    return val;
}
I've written (val - 0x7fffffff) - 1 here, instead of just val - 0x80000000, to ensure that the subtraction happens in a signed type.
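For the opposite direction, no trick is needed at all: signed-to-unsigned conversion is fully defined by the standard (C99 6.3.1.3p2). A minimal sketch (the helper name host_to_sl16 is mine, not from the answer above):

#include <stdint.h>

void
host_to_sl16(int16_t v, unsigned char b[2])
{
    uint16_t n = (uint16_t)v;   /* fully defined: value reduced modulo 65536 */
    b[0] = n & 0xFF;            /* low byte first: little-endian order */
    b[1] = (n >> 8) & 0xFF;
}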

I understand that casting from an unsigned type to a signed type of equal rank produces an implementation-defined value.
It will be implementation-defined only because the signedness format in C is implementation-defined. For example, two's complement is one such implementation-defined format.
So the only issue here is if either side of the transmission were not two's complement, which is not likely to happen in the real world. I would not bother to design programs to be portable to obscure, extinct one's complement computers from the dark ages.
This means I don't know how to byte-swap a signed number. For instance, suppose I am receiving two-byte, two's complement signed values in little-endian order from a peripheral device, and processing them on a big-endian CPU.
I suspect a source of confusion here is that you think a generic two's complement number will be transmitted from a sender that is either big- or little-endian and received by one which is either big- or little-endian. Data transmission protocols don't work like that though: they explicitly specify endianness and signedness format. So both sides have to adapt to the protocol.
And once that's specified, there's really no rocket science here: you are receiving 2 raw bytes. Store them in an array of raw data. Then assign them to your two's complement variable. Suppose the protocol specified little endian:
int16_t val;
uint8_t little[2];

val = (little[1] << 8) | little[0];
Bit shifting has the advantage of being endian-independent. So the above code will work no matter if your CPU is big or little. So although this code contains plenty of ugly implicit promotions, it is 100% portable. C is guaranteed to treat the above as this:
val = (int16_t)( ((int)((int)little[1]<<8)) | (int)little[0] );
The result type of the shift operator is that of its promoted left operand. The result type of | is the balanced type (usual arithmetic conversions).
Shifting signed negative numbers would give undefined behavior, but we get away with the shift because the individual bytes are unsigned. When they get implicitly promoted, the numbers are still treated as positive.
And since int is guaranteed to be at least 16 bits, the code will work on all CPUs.
Alternatively, you could use pedantic style that completely excludes all implicit promotions/conversions:
val = (int16_t) ( ((uint32_t)little[1] << 8) | (uint32_t)little[0] );
But this comes at the cost of readability.
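Putting it together, a complete program using the pedantic form might look like this (a sketch; the final conversion to int16_t still assumes the host's implementation-defined conversion behaves as on a two's complement machine):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t little[2] = { 0xFE, 0xFF };  /* -2 in little-endian two's complement */
    int16_t val = (int16_t)(((uint32_t)little[1] << 8) | (uint32_t)little[0]);
    printf("%d\n", val);                 /* prints -2 on a two's complement host */
    return 0;
}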

Related

Is conversion of some data type to a larger data type the same as arithmetic right shift?

This is a two-fold question.
I have been reading up on the intricacies of how compilers process code, and this point confuses me. Both processes seem to follow the same logic of sign extension for signed integers. So is conversion simply implemented as an arithmetic right shift?
One of the examples states a function as
Int Fun1(unsigned word ) {
    Return (int) ((word << 24) >> 24 );
}
The argument passed is 0x87654321.
Since this would be signed when converted to binary, how would the shift happen? My logic was that the left shift should extract the last 8 bits, leaving 0 as the MSB, and this would then be extended while right shifting. Is this logic correct?
Edit: I understand that the downvote is probably due to unspecified info. Assume a 32 bit big endian machine with two's complement for signed integers.
Given OP's "Assume a 32 bit ... machine with two's complement for signed integers." (This implies 32-bit unsigned)
0x87654321 Since this would be signed when converted to binary
No. 0x87654321 is a hexadecimal constant. It has the type unsigned. It is not signed.
// Int Fun1(unsigned word ) {
int Fun1(unsigned word) {
    return (int) ((word << 24) >> 24);
}
Fun1(0x87654321) results in unsigned word having the value of 0x87654321. No type nor value conversion occurred.
word << 24 has the value of 0x21000000 and still the type of unsigned.
(word << 24) >> 24 has the value of 0x21 and still the type of unsigned.
Casting to int retains the same value of 0x21, but now type int.
So is conversion simply implemented as an arithmetic right shift?
Doubtful, as no signed shifting is coded. C does not specify how the compiler realizes the C code. A pair of shifts, a mask, or a multiply/divide may have been used.
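To see the distinction in code, here is a hedged illustration (assuming a 32-bit, two's complement int, as in the question's edit): the unsigned shift pair never sign-extends, while a widening conversion of a signed value does:

#include <stdio.h>

int main(void)
{
    unsigned word = 0x87654321u;
    unsigned low = (word << 24) >> 24;    /* logical shifts only: 0x21 */

    signed char byte = (signed char)0xA5; /* implementation-defined; commonly -91 */
    int wide = byte;                      /* widening conversion sign-extends: 0xffffffa5 */

    printf("%#x %#x\n", low, (unsigned)wide);  /* prints 0x21 0xffffffa5 */
    return 0;
}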

How to do a bit representation in a C-standard way?

As per the C standard, the value representation of an integer type is implementation-defined. So 5 might not be represented as 00000000000000000000000000000101, or -1 as 11111111111111111111111111111111, as we usually assume in 32-bit two's complement. So even though the operators ~, << and >> are well defined, the bit patterns they will work on are implementation-defined. The only defined bit pattern I could find was "§5.2.1/3 A byte with all bits set to 0, called the null character, shall exist in the basic execution character set; it is used to terminate a character string.".
So my question is: is there an implementation-independent way of converting integer types to a bit pattern?
We can always start with a null character and do enough bit operations on it to get it to a desired value, but I find that too cumbersome. I also realise that practically all implementations will use a two's complement representation, but I want to know how to do it in a pure C-standard way. Personally I find this topic quite intriguing because of device-driver programming, where all code written to date assumes a particular implementation.
In general, it's not that hard to accommodate unusual platforms in most cases (if you don't want to simply assume 8-bit char, two's complement, no padding, no trap, and truncating unsigned-to-signed conversion): the standard mostly gives enough guarantees (a few macros to inspect certain implementation details would be helpful, though).
As far as a strictly conforming program can observe (outside bit-fields), 5 is always encoded as 00...0101. This is not necessarily the physical representation (whatever this should mean), but what is observable by portable code. A machine using Gray code internally, for example, would have to emulate a "pure binary notation" for bitwise operators and shifts.
For negative values of signed types, different encodings are allowed, which leads to different (but well-defined for every case) results when re-interpreting as the corresponding unsigned type. For example, strictly conforming code must distinguish between (unsigned)n and *(unsigned *)&n for a signed integer n: They are equal for two's complement without padding bits, but different for the other encodings if n is negative.
Further, padding bits may exist, and signed integer types may have more padding bits than their corresponding unsigned counterparts (but not the other way round, type-punning from signed to unsigned is always valid). sizeof cannot be used to get the number of non-padding bits, so e.g. to get an unsigned value where only the sign-bit (of the corresponding signed type) is set, something like this must be used:
#define TYPE_PUN(to, from, x) ( *(to *)&(from){(x)} )

unsigned sign_bit = TYPE_PUN(unsigned, int, INT_MIN) &
                    TYPE_PUN(unsigned, int, -1) & ~1u;
(there are probably nicer ways) instead of
unsigned sign_bit = 1u << sizeof sign_bit * CHAR_BIT - 1;
as this may shift by more than the width. (I don't know of a constant expression giving the width, but sign_bit from above can be right-shifted until it's 0 to determine it, Gcc can constant-fold that.) Padding bits can be inspected by memcpying into an unsigned char array, though they may appear to "wobble": Reading the same padding bit twice may give different results.
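For example, the width can be determined from sign_bit as described (a sketch following the remark above; it needs the sign_bit definition from earlier):

/* Count how many right shifts it takes to clear sign_bit; that count is
 * the width of int (value bits plus the sign bit), excluding padding. */
int width = 0;
for (unsigned s = sign_bit; s; s >>= 1)
    width++;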
If you want the bit pattern (without padding bits) of a signed integer (little endian):
#include <limits.h>
#include <stdio.h>

int print_bits_u(unsigned n) {
    for(; n; n >>= 1) {
        putchar(n & 1 ? '1' : '0'); // n&1 never traps
    }
    return 0;
}

int print_bits(int n) {
    return print_bits_u(*(unsigned *)&n & INT_MAX);
    /* This masks padding bits if int has more of them than unsigned int.
     * Note that INT_MAX is promoted to unsigned int here. */
}

int print_bits_2scomp(int n) {
    return print_bits_u(n);
}
print_bits gives different results for negative numbers depending on the representation used (it gives the raw bit pattern), print_bits_2scomp gives the two's complement representation (possibly with a greater width than a signed int has, if unsigned int has fewer padding bits).
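For instance (a hypothetical usage, assuming the three functions above are in scope):

int main(void)
{
    print_bits_2scomp(-5);  /* LSB first: 1101 then all 1s, on 32-bit two's complement */
    putchar('\n');
    print_bits(5);          /* prints 101 (LSB first; 5 happens to be a palindrome) */
    putchar('\n');
    return 0;
}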
Care must be taken not to generate trap representations when using bitwise operators and when type-punning from unsigned to signed, see below how these can potentially be generated (as an example, *(int *)&sign_bit can trap with two's complement, and -1 | 1 can trap with ones' complement).
Unsigned-to-signed integer conversion (if the converted value isn't representable in the target type) is always implementation-defined, I would expect non-2's complement machines to differ from the common definition more likely, though technically, it could also become an issue on 2's complement implementations.
From C11 (n1570) 6.2.6.2:
(1) For unsigned integer types other than unsigned char, the bits of the object representation shall be divided into two groups: value bits and padding bits (there need not be any of the latter). If there are N value bits, each bit shall represent a different power of 2 between 1 and 2^(N-1), so that objects of that type shall be capable of representing values from 0 to 2^N - 1 using a pure binary representation; this shall be known as the value representation. The values of any padding bits are unspecified.
(2) For signed integer types, the bits of the object representation shall be divided into three groups: value bits, padding bits, and the sign bit. There need not be any padding bits; signed char shall not have any padding bits. There shall be exactly one sign bit. Each bit that is a value bit shall have the same value as the same bit in the object representation of the corresponding unsigned type (if there are M value bits in the signed type and N in the unsigned type, then M ≤ N). If the sign bit is zero, it shall not affect the resulting value. If the sign bit is one, the value shall be modified in one of the following ways:
the corresponding value with sign bit 0 is negated (sign and magnitude);
the sign bit has the value -(2^M) (two's complement);
the sign bit has the value -(2^M - 1) (ones' complement).
Which of these applies is implementation-defined, as is whether the value with sign bit 1 and all value bits zero (for the first two), or with sign bit and all value bits 1 (for ones' complement), is a trap representation or a normal value. In the case of sign and magnitude and ones' complement, if this representation is a normal value it is called a negative zero.
To add to mafso's excellent answer, there's a part of the ANSI C rationale which talks about this:
The Committee has explicitly restricted the C language to binary architectures, on the grounds that this stricture was implicit in any case:
Bit-fields are specified by a number of bits, with no mention of “invalid integer” representation. The only reasonable encoding for such bit-fields is binary.
The integer formats for printf suggest no provision for “invalid integer” values, implying that any result of bitwise manipulation produces an integer result which can be printed by printf.
All methods of specifying integer constants — decimal, hex, and octal — specify an integer value. No method independent of integers is defined for specifying “bit-string constants.” Only a binary encoding provides a complete one-to-one mapping between bit strings and integer values.
The restriction to binary numeration systems rules out such curiosities as Gray code and makes possible arithmetic definitions of the bitwise operators on unsigned types.
The relevant part of the standard might be this quote:
3.1.2.5 Types
[...]
The type char, the signed and unsigned integer types, and the
enumerated types are collectively called integral types. The
representations of integral types shall define values by use of a pure
binary numeration system.
If you want to get the bit pattern of a given int, then bitwise operators are your friends. If you want to convert an int to its two's complement representation, then arithmetic operators are your friends. The two representations can be different, as it is implementation-defined:
Std Draft 2011. 6.5/4. Some operators (the unary operator ~, and the
binary operators <<, >>, &, ^, and |, collectively described as
bitwise operators) are required to have operands that have integer
type. These operators yield values that depend on the internal
representations of integers, and have implementation-defined and
undefined aspects for signed types.
So it means that i<<1 will effectively shift the bit pattern by one position to the left, but the value produced can be different from i*2 (even for small values of i).

Maximum value of typedefed signed type

I was reading John Regehr's blog on how he gives his students an assignment about saturating arithmetic. The interesting part is that the code has to compile as-is while using typedefs to specify different integer types, see the following excerpt of the full header:
typedef signed int mysint;
//typedef signed long int mysint;
mysint sat_signed_add (mysint, mysint);
mysint sat_signed_sub (mysint, mysint);
The corresponding unsigned version is simple to implement (although I'm actually not sure if padding bits wouldn't make that problematic too), but I actually don't see how I can get the maximum (or minimum) value of an unknown signed type in C, without using macros for MAX_ and MIN_ or causing undefined behavior.
Am I missing something here or is the assignment just flawed (or more likely I'm missing some crucial information he gave his students)?
I don't see any way to do this without making assumptions or invoking implementation-defined (not necessarily undefined) behavior. If you assume that there are no padding bits in the representation of mysint or of uintmax_t, however, then you can compute the maximum value like this:
mysint mysint_max = (mysint)
    ((~(uintmax_t)0) >> (1 + CHAR_BIT * (sizeof(uintmax_t) - sizeof(mysint))));
The minimum value is then either -mysint_max (sign/magnitude or ones' complement) or -mysint_max - 1 (two's complement), but it is a bit tricky to determine which. You don't know a priori which bit is the sign bit, and there are possible trap representations that differ for different representation styles. You also must be careful about evaluating expressions, because of the possibility of "the usual arithmetic conversions" converting values to a type whose representation has different properties than those of the one you are trying to probe.
Nevertheless, you can distinguish the type of negative-value representation by computing the bitwise negation of the mysint representation of -1. For two's complement the mysint value of the result is 0, for ones' complement it is 1, and for sign/magnitude it is mysint_max - 1.
If you add the assumption that all signed integer types have the same kind of negative-value representation then you can simply perform such a test using an ordinary expression on default int literals. You don't need to make that assumption, however. Instead, you can perform the operation directly on the type representation's bit pattern, via a union:
union mysint_bits {
    mysint i;
    unsigned char bits[sizeof(mysint)];
} msib;

int counter = 0;
for (msib.i = -1; counter < sizeof(mysint); counter += 1) {
    msib.bits[counter] = ~msib.bits[counter];
}
As long as the initial assumption holds (that there are no padding bits in the representation of type mysint) msib.i must then be a valid representation of the desired result.
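The result can then be interpreted as described earlier (a sketch, assuming mysint_max was computed as above):

mysint min_mysint;
if (msib.i == 0)          /* two's complement */
    min_mysint = -mysint_max - 1;
else                      /* 1 => ones' complement, mysint_max - 1 => sign/magnitude */
    min_mysint = -mysint_max;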
I don't see a way to determine the largest and smallest representable values for an unknown signed integer type in C, without knowing something more. (In C++, you have std::numeric_limits available, so it is trivial.)
The largest representable value for an unsigned integer type is (myuint)(-1). That is guaranteed to work independent of padding bits, because (§ 6.3.1.3/1-2):
When a value with integer type is converted to another integer type… if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.
So to convert -1 to an unsigned type, you add one more than the maximum representable value to it, and that result must be the maximum representable value. (The standard makes it clear that the meaning of "repeatedly adding or subtracting" is mathematical.)
Now, if you knew that the number of padding bits in the signed type was the same as the number of padding bits in the unsigned type [but see below], you could compute the largest representable signed value from the largest representable unsigned value:
(mysint)( (myuint)(-1) / (myuint)2 )
Unfortunately, that's not enough to compute the minimum representable signed value, because the standard permits the minimum to be either one less than the negative of the maximum (2's-complement representation) or exactly the negative of the maximum (1's-complement or sign/magnitude representations).
Moreover, the standard does not actually guarantee that the number of padding bits in the signed type is the same as the number of padding bits in the unsigned type. All it guarantees is that the number of value bits in the signed type be no greater than the number of value bits in the unsigned type. In particular, it would be legal for the unsigned type to have one more padding bit than the corresponding signed type, in which case they would have the same number of value bits and the maximum representable values would be the same. [Note: a value bit is neither a padding bit nor the sign bit.]
In short, if you knew (for example by being told) that the architecture were 2's-complement and that corresponding signed and unsigned types had the same number of padding bits, then you could certainly compute both signed min and max:
myuint max_myuint = (myuint)(-1);
mysint max_mysint = (mysint)(max_myuint / (myuint)2);
mysint min_mysint = (-max_mysint) - (mysint)1;
Finally, casting an out-of-range unsigned integer to a signed integer is not undefined behaviour, although most other signed overflows are. The conversion, as indicated by §6.3.1.3/3, is implementation-defined behaviour:
Otherwise, the new type is signed and the value cannot be represented in it; either the
result is implementation-defined or an implementation-defined signal is raised.
Implementation-defined behaviour is required to be documented by the implementation. So, suppose we knew that the implementation was gcc. Then we could examine the gcc documentation, where we would read the following, in the section "C Implementation-defined behaviour":
Whether signed integer types are represented using sign and
magnitude, two's complement, or one's complement, and whether the
extraordinary value is a trap representation or an ordinary value
(C99 6.2.6.2).
GCC supports only two's complement integer types, and all bit
patterns are ordinary values.
The result of, or the signal raised by, converting an integer to a
signed integer type when the value cannot be represented in an
object of that type (C90 6.2.1.2, C99 6.3.1.3).
For conversion to a type of width N, the value is reduced modulo
2^N to be within range of the type; no signal is raised.
Knowing that signed integers are two's complement and that unsigned-to-signed conversions will not trap, but will produce the expected pattern of low-order bits, we can find the maximum and minimum values for any signed type, starting with the maximum representable value for the widest unsigned type, uintmax_t:
uintmax_t umax = (uintmax_t)(-1);
while ( (mysint)(umax) < 0 ) umax >>= 1;
mysint max_mysint = (mysint)(umax);
mysint min_mysint = (-max_mysint) - (mysint)1;
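For instance, with typedef signed int mysint on a typical 32-bit gcc target, a self-contained check (a hedged sketch relying on the gcc semantics quoted above) would print max = 2147483647, min = -2147483648:

#include <stdint.h>
#include <stdio.h>

typedef signed int mysint;   /* the typedef under test */

int main(void)
{
    uintmax_t umax = (uintmax_t)(-1);
    while ((mysint)(umax) < 0) umax >>= 1;  /* relies on gcc's documented conversion */
    mysint max_mysint = (mysint)(umax);
    mysint min_mysint = (-max_mysint) - (mysint)1;
    printf("max = %jd, min = %jd\n", (intmax_t)max_mysint, (intmax_t)min_mysint);
    return 0;
}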
This is a suggestion for getting the MAX value of a specific type set with typedef, without using any library (it assumes an 8-bit char and no padding bits):
typedef signed int mysint;

mysint size;
size = sizeof(mysint) * (mysint)8 - (mysint)1; // number of value bits; one bit
                                               // is reserved for the sign
mysint max = 1;                                // start with the lowest bit set
while (--size)
{
    mysint temp;
    temp = (max << (mysint)1) | (mysint)1;     // shift in another 1 bit
    max = temp;
}
// max now contains the max value of the type mysint
If you assume eight-bit chars and a two's complement representation (both reasonable on all modern hardware, with the exception of some embedded DSP stuff), then you just need to form an unsigned integer (use uintmax_t to make sure it's big enough) with sizeof(mysint)*8 - 1 1's in the bottom bits, then cast it to mysint. For the minimum value, negate the maximum value and subtract one.
If you don't want to assume those things, then it's still possible, but you'll need to do some more digging through limits.h to compensate for the size of chars and the sign representation.
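In code, that first approach might look like this (a sketch under the stated assumptions: eight-bit chars, two's complement, and no padding bits):

#include <stdint.h>
#include <stdio.h>

typedef signed int mysint;   /* the type under test */

int main(void)
{
    /* sizeof(mysint)*8 - 1 one-bits in the bottom of an unsigned value */
    uintmax_t ones = ((uintmax_t)1 << (sizeof(mysint) * 8 - 1)) - 1;
    mysint max = (mysint)ones;    /* all value bits set */
    mysint min = -max - 1;        /* two's complement minimum */
    printf("%jd %jd\n", (intmax_t)max, (intmax_t)min);
    return 0;
}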
I guess this should work irrespective of negative number representation:
// MSB set and the rest zero is the minimum number in both 2's and 1's
// complement representations. The shift is done in an unsigned type to
// avoid undefined behavior.
mysint min = (mysint)((uintmax_t)1 << (sizeof(mysint) * 8 - 1));
mysint max = ~min;

Bitwise AND on signed chars

I have a file that I've read into an array of data type signed char. I cannot change this fact.
I would now like to do this: !((c[i] & 0xc0) & 0x80) where c[i] is one of the signed characters.
Now, I know from section 6.5.10 of the C99 standard that "Each of the operands [of the bitwise AND] shall have integral type."
And Section 6.5 of the C99 specification tells me:
Some operators (the unary operator ~, and the binary operators <<, >>, &, ^, and |, collectively described as bitwise operators) shall have operands that have integral type. These operators return values that depend on the internal representations of integers, and thus have implementation-defined aspects for signed types.
My question is two-fold:
Since I want to work with the original bit patterns from the file, how can I convert/cast my signed char to unsigned char so that the bit patterns remain unchanged?
Is there a list of these "implementation-defined aspects" anywhere (say for MVSC and GCC)?
Or you could take a different route and argue that this produces the same result for both signed and unsigned chars for any value of c[i].
Naturally, I will reward references to relevant standards or authoritative texts and discourage "informed" speculation.
As others point out, in all likelihood your implementation is based on two's complement, and will give exactly the result you expect.
However, if you're worried about the results of an operation involving a signed value, and all you care about is the bit pattern, simply cast directly to an equivalent unsigned type. The results are defined under the standard:
6.3.1.3 Signed and unsigned integers
...
Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or
subtracting one more than the maximum value that can be represented in the new type
until the value is in the range of the new type.
This is essentially specifying that the result will be the two's complement representation of the value.
Fundamental to this is that in two's complement maths the result of a calculation is modulo some power of two (i.e. the number of bits in the type), which in turn is exactly equivalent to masking off the relevant number of bits. And the complement of a number is the number subtracted from the power of two.
Thus adding a negative value is the same as adding any value which differs from the value by a multiple of that power of two.
i.e:
(0 + signed_value) mod (2^N)
==
(2^N + signed_value) mod (2^N)
==
(7 * 2^N + signed_value) mod (2^N)
etc. (if you know modulo, that should be pretty self-evidently true)
So if you have a negative number, adding a power of two will make it positive (-5 + 256 = 251), but the bottom 'N' bits will be exactly the same (0b11111011) and it will not affect the outcome of a mathematical operation. As values are then truncated to fit the type, the result is exactly the binary value you expected, even if the result 'overflows' (i.e. what you might think happens if the number was positive to start with - this wrapping is also well-defined behaviour).
So in 8-bit two's complement:
-5 is the same as 251 (i.e 256 - 5) - 0b11111011
If you add 30, and 251, you get 281. But that's larger than 256, and 281 mod 256 equals 25. Exactly the same as 30 - 5.
251 * 2 = 502. 502 mod 256 = 246. 246 and -10 are both 0b11110110.
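A small C demonstration of that arithmetic (a sketch; unsigned char and an explicit % 256 make the modulo behavior visible):

#include <stdio.h>

int main(void)
{
    unsigned char a = (unsigned char)-5;  /* 251: signed-to-unsigned is well-defined */
    printf("%d\n", (a + 30) % 256);       /* 281 mod 256 = 25, same as 30 - 5 */
    printf("%d\n", (a * 2) % 256);        /* 502 mod 256 = 246, the bits of -10 */
    return 0;
}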
Likewise if you have:
unsigned int a;
int b;
a - b == a + (unsigned int) -b;
Under the hood, this cast is unlikely to be implemented with arithmetic and will certainly be a straight assignment from one register/value to another, or just optimised out altogether, as the maths does not make a distinction between signed and unsigned (interpretation of CPU flags is another matter, but that's an implementation detail). The standard exists to ensure that an implementation doesn't take it upon itself to do something strange instead, or, I suppose, for some weird architecture which isn't using two's complement...
unsigned char UC = *(unsigned char*)&C - this is how you can convert signed C to unsigned keeping the "bit pattern". Thus you could change your code to something like this:
!(( (*(unsigned char*)(c+i)) & 0xc0) & 0x80)
Explanation(with references):
761 When a pointer to an object is converted to a pointer to a character type, the result points to the lowest addressed byte of the object.
1124 When applied to an operand that has type char, unsigned char, or signed char, (or a qualified version thereof) the result is 1.
These two imply that the unsigned char pointer points to the same byte as the original signed char pointer.
You appear to have something similar to:
signed char c[] = "\x7F\x80\xBF\xC0\xC1\xFF";
for (int i = 0; c[i] != '\0'; i++)
{
    if (!((c[i] & 0xC0) & 0x80))
        ...
}
You are (correctly) concerned about sign extension of the signed char type. In practice, however, (c[i] & 0xC0) will convert the signed character to a (signed) int, but the & 0xC0 will discard any set bits in the more significant bytes; the result of the expression will be in the range 0x00 .. 0xFF. This will, I believe, apply whether you use sign-and-magnitude, one's complement or two's complement binary values. The detailed bit pattern you get for a specific signed character value varies depending on the underlying representation; but the overall conclusion that the result will be in the range 0x00 .. 0xFF is valid.
There is an easy resolution for that concern — cast the value of c[i] to an unsigned char before using it:
if (!(((unsigned char)c[i] & 0xC0) & 0x80))
The value c[i] is converted to an unsigned char before it is promoted to an int (or, the compiler might promote to int, then coerce to unsigned char, then promote the unsigned char back to int), and the unsigned value is used in the & operations.
Of course, part of the test is now redundant: using & 0xC0 followed by & 0x80 is entirely equivalent to just & 0x80.
If you're processing UTF-8 data and looking for continuation bytes, the correct test is:
if (((unsigned char)c[i] & 0xC0) == 0x80)
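For example, a hypothetical scan counting UTF-8 continuation bytes with that test:

#include <stdio.h>

int main(void)
{
    signed char c[] = "\x41\xC3\xA9\x42";  /* "AéB": 0xC3 0xA9 encodes U+00E9 */
    int continuations = 0;
    for (int i = 0; c[i] != '\0'; i++) {
        if (((unsigned char)c[i] & 0xC0) == 0x80)
            continuations++;
    }
    printf("%d\n", continuations);         /* prints 1: only 0xA9 qualifies */
    return 0;
}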
"Since I want to work with the original bit patterns from the file,
how can I convert/cast my signed char to unsigned char so that the bit
patterns remain unchanged?"
As someone already explained in a previous answer to your question on the same topic, any small integer type, be it signed or unsigned, will get promoted to the type int whenever used in an expression.
C11 6.3.1.1
"If an int can represent all values of the original type (as
restricted by the width, for a bit-field), the value is converted to
an int; otherwise, it is converted to an unsigned int. These are
called the integer promotions."
Also, as explained in the same answer, integer literals are always of the type int.
Therefore, your expression will boil down to the pseudo code (int) & (int) & (int). The operations will be performed on three temporary int variables and the result will be of type int.
Now, if the original data contained bits that may be interpreted as sign bits for the specific signedness representation (in practice this will be two's complement on all systems), you will get problems. Because these bits will be preserved upon promotion from signed char to int.
And then the bitwise & operator performs an AND on every single bit, regardless of the contents of its integer operand (C11 6.5.10/3), be it signed or not. If you had data in the sign bits of your original signed char, it can now be lost, because the integer literals (0xC0 or 0x80) have no bits set that correspond to the sign bits.
The solution is to prevent the sign bits from getting transferred to the "temporary int". One solution is to cast c[i] to unsigned char, which is completely well-defined (C11 6.3.1.3). This will tell the compiler that "the whole contents of this variable is an integer, there are no sign bits to be concerned about".
Better yet, make a habit of always using unsigned data in every form of bit manipulations. The purist, 100% safe, MISRA-C compliant way of re-writing your expression is this:
if ( (((uint8_t)c[i] & 0xc0u) & 0x80u) > 0u)
The u suffix actually enforces the expression to be of unsigned int, but it is good practice to always cast to the intended type. It tells the reader of the code "I actually know what I am doing and I also understand all weird implicit promotion rules in C".
And then if we know our hex, (0xc0 & 0x80) is pointless, it is always true. And x & 0xC0 & 0x80 is always the same as x & 0x80. Therefore simplify the expression to:
if ( ((uint8_t)c[i] & 0x80u) > 0u)
"Is there a list of these "implementation-defined aspects" anywhere"
Yes, the C standard conveniently lists them in Appendix J.3. The only implementation-defined aspect you encounter in this case though, is the signedness implementation of integers. Which in practice is always two's complement.
EDIT:
The quoted text in the question is concerned with the fact that the various bitwise operators will produce implementation-defined results. This is just briefly mentioned as implementation-defined even in the appendix, with no exact references. The actual chapter 6.5 doesn't say much regarding implementation-defined behavior of & and | etc. The only operators for which it is explicitly mentioned are << and >>, where left-shifting a negative number is even undefined behavior, but right-shifting it is implementation-defined.

Why does (1 >> 0x80000000) == 1?

The number 1, right shifted by anything greater than 0, should be 0, correct? Yet I can type in this very simple program which prints 1.
#include <stdio.h>

int main()
{
    int b = 0x80000000;
    int a = 1 >> b;
    printf("%d\n", a);
}
Tested with gcc on linux.
6.5.7 Bitwise shift operators:
If the value of the right operand is negative or is greater than or equal to the width of the promoted left operand, the behavior is undefined.
The compiler is at license to do anything, obviously, but the most common behaviors are to optimize the expression (and anything that depends on it) away entirely, or simply let the underlying hardware do whatever it does for out-of-range shifts. Many hardware platforms (including x86 and ARM) mask some number of low-order bits to use as a shift-amount. The actual hardware instruction will give the result you are observing on either of those platforms, because the shift amount is masked to zero. So in your case the compiler might have optimized away the shift, or it might be simply letting the hardware do whatever it does. Inspect the assembly if you want to know which.
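A hedged illustration of that hardware masking (the masking is not a C guarantee; writing the mask explicitly keeps the shift well-defined):

#include <stdio.h>

int main(void)
{
    volatile unsigned amount = 0x80000000u; /* volatile defeats constant folding */
    printf("%d\n", 1 >> (amount & 31u));    /* 0x80000000 & 31 == 0, so this prints 1 */
    return 0;
}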
According to the standard, shifting by more bits than actually exist in the type results in undefined behavior. So we cannot blame the compiler for that.
The motivation probably resides in the "border meaning" of 0x80000000, which sits on the boundary between the maximum positive and the negative values (that is, it is "negative", having the highmost bit set), and in a check that could be done but that the compiled program doesn't do, to avoid wasting time verifying "impossible" things (do you really want the processor to shift bits 3 billion times?).
It's very probably not attempting to shift by some large number of bits.
INT_MAX on your system is probably 2**31-1, or 0x7fffffff (I'm using ** to denote exponentiation). If that's the case, then in the declaration:
int b = 0x80000000;
(which was missing a semicolon in the question; please copy-and-paste your exact code) the constant 0x80000000 is of type unsigned int, not int. The value is implicitly converted to int. Since the result is outside the bounds of int, the result is implementation-defined (or, in C99, may raise an implementation-defined signal, but I don't know of any implementation that does that).
The most common way this is done is to reinterpret the bits of the unsigned value as a 2's-complement signed value. The result in this case is -2**31, or -2147483648.
So the behavior isn't undefined because you're shifting by value that equals or exceeds the width of type int, it's undefined because you're shifting by a (very large) negative value.
Not that it matters, of course; undefined is undefined.
NOTE: The above assumes that int is 32 bits on your system. If int is wider than 32 bits, then most of it doesn't apply (but the behavior is still undefined).
If you really wanted to attempt to shift by 0x80000000 bits, you could do it like this:
unsigned long b = 0x80000000;
unsigned long a = 1 >> b; // *still* undefined
unsigned long is guaranteed to be big enough to hold the value 0x80000000, so you avoid part of the problem.
Of course, the behavior of the shift is just as undefined as it was in your original code, since 0x80000000 is greater than or equal to the width of unsigned long. (Unless your compiler has a really big unsigned long type, but no real-world compiler does that.)
The only way to avoid undefined behavior is not to do what you're trying to do.
It's possible, but vanishingly unlikely, that your original code's behavior is not undefined. That can only happen if the implementation-defined conversion of 0x80000000 from unsigned int to int yields a value in the range 0 .. 31. If int is smaller than 32 bits, the conversion is likely to yield 0.
Reading this may help you:
expression1 >> expression2
On some platforms, the >> operator masks expression2 to avoid shifting expression1 by too much. That's because if the shift amount exceeded the number of bits in the data type of expression1, all the original bits would be shifted away, giving a trivial result.
To ensure that each shift leaves at least one of the original bits, such shift operators use the following formula to calculate the actual shift amount: mask expression2 (using the bitwise AND operator) with one less than the number of bits in expression1. (Note that this describes the behavior of some languages and of much hardware; C itself leaves out-of-range shifts undefined.)
Example
var x : byte = 15;
// A byte stores 8 bits.
// The bits stored in x are 00001111
var y : byte = x >> 10;
// Actual shift is 10 & (8-1) = 2
// The bits stored in y are 00000011
// The value of y is 3
print(y); // Prints 3
That "8-1" is because x is 8 bytes so the operacion will be with 7 bits. that void remove last bit of original chain bits
