Bitwise AND on signed chars - c

I have a file that I've read into an array of data type signed char. I cannot change this fact.
I would now like to do this: !((c[i] & 0xc0) & 0x80) where c[i] is one of the signed characters.
Now, I know from section 6.5.10 of the C99 standard that "Each of the operands [of the bitwise AND] shall have integral type."
And Section 6.5 of the C99 specification tells me:
Some operators (the unary operator ~, and the binary operators <<, >>, &, ^, and |, collectively described as bitwise operators) shall have operands that have integral type. These operators return values that depend on the internal representations of integers, and thus have implementation-defined aspects for signed types.
My question is two-fold:
Since I want to work with the original bit patterns from the file, how can I convert/cast my signed char to unsigned char so that the bit patterns remain unchanged?
Is there a list of these "implementation-defined aspects" anywhere (say for MVSC and GCC)?
Or you could take a different route and argue that this produces the same result for both signed and unsigned chars for any value of c[i].
Naturally, I will reward references to relevant standards or authoritative texts and discourage "informed" speculation.

As others point out, in all likelihood your implementation is based on two's complement, and will give exactly the result you expect.
However, if you're worried about the results of an operation involving a signed value, and all you care about is the bit pattern, simply cast directly to an equivalent unsigned type. The results are defined under the standard:
6.3.1.3 Signed and unsigned integers
...
Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or
subtracting one more than the maximum value that can be represented in the new type
until the value is in the range of the new type.
This is essentially specifying that the result will be the two's complement representation of the value.
Fundamental to this is that in two's complement maths the result of a calculation is modulo some power of two (i.e. the number of bits in the type), which in turn is exactly equivalent to masking off the relevant number of bits. And the complement of a number is the number subtracted from the power of two.
Thus adding a negative value is the same as adding any value which differs from the value by a multiple of that power of two.
i.e:
(0 + signed_value) mod (2^N)
==
(2^N + signed_value) mod (2^N)
==
(7 * 2^N + signed_value) mod (2^N)
etc. (if you know modulo, that should be pretty self-evidently true)
So if you have a negative number, adding a power of two will make it positive (-5 + 256 = 251), but the bottom 'N' bits will be exactly the same (0b11111011) and it will not affect the outcome of a mathematical operation. As values are then truncated to fit the type, the result is exactly the binary value you expected, even if the result 'overflows' (i.e. what you might think happens if the number was positive to start with - this wrapping is also well defined behaviour).
So in 8-bit two's complement:
-5 is the same as 251 (i.e 256 - 5) - 0b11111011
If you add 30 and 251, you get 281. But that's larger than 256, and 281 mod 256 equals 25 - exactly the same as 30 - 5.
251 * 2 = 502. 502 mod 256 = 246. 246 and -10 are both 0b11110110.
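As a quick check, here is a minimal sketch (assuming CHAR_BIT == 8, so unsigned char wraps modulo 256) that reproduces the arithmetic above:
#include <stdio.h>

int main(void)
{
    unsigned char a = (unsigned char)-5;      /* conversion is well defined: 256 - 5 = 251 */
    printf("%d\n", a == 251);                 /* prints 1 */
    printf("%d\n", (unsigned char)(a + 30));  /* 281 mod 256 = 25, same as 30 - 5 */
    printf("%d\n", (unsigned char)(a * 2));   /* 502 mod 256 = 246, bit pattern 0b11110110 */
    return 0;
}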
Likewise if you have:
unsigned int a;
int b;
a - b == a + (unsigned int) -b;
Under the hood, this cast is unlikely to be implemented with arithmetic and will certainly be a straight assignment from one register/value to another, or just optimised out altogether, as the maths does not make a distinction between signed and unsigned (interpretation of CPU flags is another matter, but that's an implementation detail). The standard exists to ensure that an implementation doesn't take it upon itself to do something strange instead, or, I suppose, for some weird architecture which isn't using two's complement...
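A tiny hedged sketch of the a - b == a + (unsigned int)-b identity above (values chosen arbitrarily):
#include <stdio.h>

int main(void)
{
    unsigned int a = 10u;
    int b = 3;
    /* b is converted to unsigned in a - b; the second form makes that explicit */
    printf("%d\n", (a - b) == (a + (unsigned int)-b));  /* prints 1; both sides are 7u */
    return 0;
}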

unsigned char UC = *(unsigned char*)&C - this is how you can convert signed C to unsigned keeping the "bit pattern". Thus you could change your code to something like this:
!(( (*(unsigned char*)(c+i)) & 0xc0) & 0x80)
Explanation(with references):
761 When a pointer to an object is converted to a pointer to a character type, the result points to the lowest addressed byte of the object.
1124 When applied to an operand that has type char, unsigned char, or signed char, (or a qualified version thereof) the result is 1.
These two imply that the unsigned char pointer points to the same byte as the original signed char pointer.
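A short usage sketch (the value -64 is just an example; on two's complement its byte is 0xC0):
signed char C = -64;                      /* object representation 0xC0 on two's complement */
unsigned char UC = *(unsigned char *)&C;  /* same byte, reinterpreted: 192 == 0xC0 */

if (!((UC & 0xC0) & 0x80)) {
    /* not reached for this value, since bit 7 is set */
}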

You appear to have something similar to:
signed char c[] = "\x7F\x80\xBF\xC0\xC1\xFF";
for (int i = 0; c[i] != '\0'; i++)
{
    if (!((c[i] & 0xC0) & 0x80))
        ...
}
You are (correctly) concerned about sign extension of the signed char type. In practice, however, (c[i] & 0xC0) will convert the signed character to a (signed) int, but the & 0xC0 will discard any set bits in the more significant bytes; the result of the expression will be in the range 0x00 .. 0xFF. This will, I believe, apply whether you use sign-and-magnitude, one's complement or two's complement binary values. The detailed bit pattern you get for a specific signed character value varies depending on the underlying representation; but the overall conclusion that the result will be in the range 0x00 .. 0xFF is valid.
There is an easy resolution for that concern — cast the value of c[i] to an unsigned char before using it:
if (!(((unsigned char)c[i] & 0xC0) & 0x80))
The value c[i] is converted to an unsigned char before it is promoted to an int (or, the compiler might promote to int, then coerce to unsigned char, then promote the unsigned char back to int), and the unsigned value is used in the & operations.
Of course, part of the code is now redundant: using & 0xC0 followed by & 0x80 is entirely equivalent to just & 0x80.
If you're processing UTF-8 data and looking for continuation bytes, the correct test is:
if (((unsigned char)c[i] & 0xC0) == 0x80)
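For instance, a sketch of that test applied over a buffer (the function name and parameters are made up for illustration):
#include <stddef.h>

/* Count UTF-8 continuation bytes (10xxxxxx) in a buffer of signed char. */
size_t count_continuation_bytes(const signed char *c, size_t len)
{
    size_t count = 0;
    for (size_t i = 0; i < len; i++) {
        if (((unsigned char)c[i] & 0xC0) == 0x80) {
            count++;
        }
    }
    return count;
}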

"Since I want to work with the original bit patterns from the file,
how can I convert/cast my signed char to unsigned char so that the bit
patterns remain unchanged?"
As someone already explained in a previous answer to your question on the same topic, any small integer type, be it signed or unsigned, will get promoted to the type int whenever used in an expression.
C11 6.3.1.1
"If an int can represent all values of the original type (as
restricted by the width, for a bit-field), the value is converted to
an int; otherwise, it is converted to an unsigned int. These are
called the integer promotions."
Also, as explained in the same answer, integer literals are always of the type int.
Therefore, your expression will boil down to the pseudo code (int) & (int) & (int). The operations will be performed on three temporary int variables and the result will be of type int.
Now, if the original data contained bits that may be interpreted as sign bits for the specific signedness representation (in practice this will be two's complement on all systems), you will get problems. Because these bits will be preserved upon promotion from signed char to int.
And then the bit-wise & operator performs an AND on every single bit, regardless of the contents of its integer operand (C11 6.5.10/3), be it signed or not. If you had data in the sign bits of your original signed char, it will now be lost, because the integer literals (0xC0 or 0x80) have no bits set that correspond to the sign bits.
The solution is to prevent the sign bits from getting transferred to the "temporary int". One solution is to cast c[i] to unsigned char, which is completely well-defined (C11 6.3.1.3). This will tell the compiler that "the whole contents of this variable is an integer, there are no sign bits to be concerned about".
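A small sketch of the difference (assuming two's complement and 32-bit int):
signed char sc = -64;                /* bit pattern 0xC0 */
int promoted   = sc;                 /* sign-extended: 0xFFFFFFC0, value -64 */
int cast_first = (unsigned char)sc;  /* cast before promotion: 0x000000C0, value 192 */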
Better yet, make a habit of always using unsigned data in every form of bit manipulations. The purist, 100% safe, MISRA-C compliant way of re-writing your expression is this:
if ( (((uint8_t)c[i] & 0xc0u) & 0x80u) > 0u )
The u suffix actually enforces the expression to be of unsigned int, but it is good practice to always cast to the intended type. It tells the reader of the code "I actually know what I am doing and I also understand all weird implicit promotion rules in C".
And then, if we know our hex, (0xc0 & 0x80) is pointless: it is always non-zero. And x & 0xC0 & 0x80 is always the same as x & 0x80. Therefore simplify the expression to:
if ( ((uint8_t)c[i] & 0x80u) > 0u)
"Is there a list of these "implementation-defined aspects" anywhere"
Yes, the C standard conveniently lists them in Appendix J.3. The only implementation-defined aspect you encounter in this case though, is the signedness implementation of integers. Which in practice is always two's complement.
EDIT:
The quoted text in the question is concerned with the fact that the various bit-wise operators will produce implementation-defined results. This is just briefly mentioned as implementation-defined even in the appendix, with no exact references. The actual chapter 6.5 doesn't say much regarding implementation-defined behavior of & | etc. The only operators where it is explicitly mentioned are << and >>, where left shifting a negative number is even undefined behavior, but right shifting it is implementation-defined.

Related

How to express hexadecimal as signed and perform operation? MISRA 10.1

Example:
int32 Temp;
Temp= (Temp & 0xFFFF);
How can I tell that 0xFFFF is signed, not unsigned? Usually we add a "u" to the hexadecimal number (0xFFFFu) and then perform the operation. But what happens when we need a signed result?
How can I tell that 0xFFFF is signed not unsigned.
You need to know the size of an int on the given system:
In case it is 16 bits, then 0xFFFF is of type unsigned int.
In case it is 32 bits, then 0xFFFF is of type (signed) int.
See the table at C17 6.4.4.1 §5 for details. As you can tell, this isn't portable nor reliable, which is why we should always use u suffix on hex constants. (See Why is 0 < -0x80000000? for an example of a subtle bug caused by this.)
In the rare event where you actually need signed numbers when doing bitwise operations, use explicit casts. For example MISRA-C compliant code for masking out some part of a signed integer would be:
int32_t Temp;
Temp = (int32_t) ((uint32_t)Temp & 0xFFFFu);
The u makes the 0xFFFFu "essentially unsigned". We aren't allowed to mix essentially signed and unsigned operands where implicit promotions might be present, hence the cast of Temp to unsigned type. When everything is done, we have to make an explicit cast back to signed type, because it isn't allowed to implicitly go from unsigned to signed during assignment either.
How to express hexadecimal as signed and perform operation?
When int is wider than 16 bits, 0xFFFF is signed and no changes are needed.
int32 Temp;
Temp= (Temp & 0xFFFF);
To handle arbitrary int bit width, use a cast to quiet that warning.
Temp = Temp & (int32)0xFFFF;
Alternatively, use a decimal constant: unsuffixed decimal constants, since C99, always have a signed type.
Temp = Temp & 65535; // 0xFFFF
The alternative idea goes against "express hexadecimal as signed", but good code avoids naked magic numbers anyway, so the use of a hex constant matters less when the concept of the mask is carried in its name.
#define IMASK16 65535
...
Temp = Temp & IMASK16;

What is the safest cross-platform way to get the low byte or the high byte of a 16-bit integer?

While looking at various SDKs, it seems LOBYTE and HIBYTE are rarely consistent, as shown below.
Windows
#define LOBYTE(w) ((BYTE)(((DWORD_PTR)(w)) & 0xff))
#define HIBYTE(w) ((BYTE)((((DWORD_PTR)(w)) >> 8) & 0xff))
Various Linux Headers
#define HIBYTE(w) ((u8)(((u16)(w) >> 8) & 0xff))
#define LOBYTE(w) ((u8)(w))
Why is & 0xff needed if it's cast to a u8? Why wouldn't the following be the way to go? (assuming uint8_t and uint16_t are defined)
#define HIBYTE(w) ((uint8_t)(((uint16_t)(w) >> 8)))
#define LOBYTE(w) ((uint8_t)(w))
From ISO/IEC 9899:TC3, 6.3.1.3 Signed and unsigned integers (under 6.3 Conversions):
When a value with integer type is converted to another integer type other than _Bool, if the value can be represented by the new type, it
is unchanged.
Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that
can be represented in the new type until the value is in the range of
the new type.
While that sounds a little convoluted, it answers the following question.
Why is & 0xff needed if it's cast to a u8?
It is not needed, because the cast does the masking automatically.
When it comes to the question in the topic, the OP's last suggestion is:
#define HIBYTE(w) ((uint8_t)(((uint16_t)(w) >> 8)))
#define LOBYTE(w) ((uint8_t)(w))
That will work as expected for all unsigned values. Signed values will always be converted to unsigned values by the macros, which in the case of two's complement will not change the representation, so the results of the calculations are well defined. Assuming two's complement, however, is not portable, so the solution is not strictly portable for signed integers.
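A quick usage sketch of those macros (assuming uint8_t and uint16_t come from <stdint.h>):
#include <stdint.h>
#include <stdio.h>

#define HIBYTE(w) ((uint8_t)(((uint16_t)(w) >> 8)))
#define LOBYTE(w) ((uint8_t)(w))

int main(void)
{
    uint16_t w = 0xABCD;
    printf("%02X %02X\n", (unsigned)HIBYTE(w), (unsigned)LOBYTE(w));  /* prints AB CD */
    return 0;
}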
Implementing a portable solution for signed integers would be quite difficult, and one could even question the meaning of such an implementation:
Is the result supposed to be signed or unsigned?
If the result is supposed to be unsigned, it does not really qualify as the high/low byte of the initial number, since a change of representation might be necessary to obtain it.
If the result is supposed to be signed, it would have to be further specified. The result of >> for negative values, for instance, is implementation-defined, so getting a portable well-defined "high byte" sounds challenging. One should really question the purpose of such a calculation.
And since we are playing language lawyer, we might want to wonder about the signedness of the left operand of (uint16_t)(w) >> 8. Unsigned could seem as the obvious answer, but it is not, because of the integer promotion rules.
Integer promotion applies, among others, to objects or expressions specified as follows.
An object or expression with an integer type whose integer conversion rank is less than or equal to the rank of int and unsigned int.
The integer promotion rule in such a case is specified as:
If an int can represent all values of the original type, the value is converted to an int;
That will be the case for the left operand on a typical 32-bit or 64-bit machine.
Fortunately in such a case, the left operand after conversion will still be nonnegative, which makes the result of >> well defined:
The result of E1 >> E2 is E1 right-shifted E2 bit positions. If E1 has an unsigned type or if E1 has a signed type and a nonnegative value, the value of the result is the integral part of the quotient of E1 / 2^E2.

Clarifying How Bitwise Operators, Non-Negative Operands, and Type Conversion Interact

Question
As a fledgling C language-lawyer, I have run into a situation where I am uncertain if I understand what C specifications logically guarantee correctly.
As I understand it, "bitwise operators" (&, |, and ^) will work as intuitively expected on non-negative values with any of C's integer types (char/short/int/long/etc, whether signed or unsigned) - regardless of underlying object representation.
Is this correct understanding of what is/isn't strictly well-defined behavior in C?
Key Point
In many ways, this question boils down to whether a conforming implementation is allowed to take two non-trap, non-negative values as operands to the bitwise operators, and produce a trap representation result (from the operation itself, not from assigning/interpreting the result into/as an inappropriate type).
Example
Consider the following code:
#include <limits.h>
#define MOST_SIGNIFICANT_BIT ((unsigned char)((UCHAR_MAX >> 1) + 1))
/* ... in some function: */
unsigned char byte;
/* Using broad meaning of "byte", not necessarily "octet" */
int val;
/* val is assigned an arbitrary _non-negative_ value at runtime */
byte = val | MOST_SIGNIFICANT_BIT;
Do note the above comment that val receives a non-negative value at runtime (which can be represented by val's type).
My expectation is that byte has the most significant bit set, and the lower bits are the pure binary representation (no padding bits or trap representations) of the bottom CHAR_BIT - 1 bits of the numerical value of val.
I expect this to remain true even if the type of val is changed to any other integer type, but I expect this guarantee to disappear as soon as the value of val becomes negative (no one result guaranteed for all implementations), or the type of val is changed to any non-integer type (violates a constraint of C's definition of the bitwise operators).
Self-Answering
I am posting my explanation of my current understanding as an answer because I'm fairly confident in it, but I am looking for corrections to any of my misconceptions and will accept any better/correcting answer instead of mine.
Why (I Think) This Is (Probably) Correct
The bitwise operators &, |, and ^ are defined as operating on the actual binary representation of the converted operands: the operands are said to undergo the "usual arithmetic conversions".
As I understand it, it logically follows that when you use two integer type expressions with non-negative values as operands to one of those operators, regardless of padding bits or trap representations, their value bits will "line up": therefore, the result's value bits will have the numerical value which matches with what you'd expect if you just assumed a "pure binary representation".
The kicker is that so long as you start with valid (non-trap), non-negative values as the operands, the operands should always promote to an integer type which can represent both of their values, and thus can logically represent the result value for either of those three operations. You also never run into possible issues with e.g. "signed zero", because limiting yourself to non-negative values avoids such problems. And so long as the result is used as a type which can hold the result value (or as an unsigned integer type), you won't introduce other similar/related issues.
Can These Operations Produce A Trap Representation From Non-Negative Non-Trap Operands?
Footnotes 44/53 and 45/54 of the last C99/C11 drafts, respectively, seem to suggest that the answer to this uncertainty depends on whether foo | bar, foo & bar, and foo ^ bar are considered "arithmetic operations". If they are, then they are not allowed to produce a trap representation result given non-trap values.
The Index of C99 and C11 standard drafts lists the bitwise operators as a subset of "arithmetic operators", suggesting yes. Though C89 doesn't organize its index that way, and my C Programming Language (2nd Edition) has a section called "arithmetic operators" which includes just +, -, *, /, and %, leaving the bitwise operators to a separate section. In other words, there is no clear answer on this point.
In practice, I don't know of any systems where this would happen (given the constraint of non-negative values for both operands), for what that's worth.
And one may consider the following: the type unsigned char is expected (and essentially blessed by C99 and C11) to be capable of accessing all bits of the underlying object representation of a type - it seems likely that the intent is that bitwise operators would work correctly with unsigned char - which would be integer-promoted to int on most modern systems, an unsigned int on the rest: therefore it seems very unlikely that foo | bar, foo & bar, or foo ^ bar would be allowed to produce trap representations - at least if foo and bar are both values that can be held in an unsigned char, and if the result is assigned into an unsigned char.
It is very tempting to generalize from the prior two points that this is a non-issue, although I wouldn't call it a rigorous proof.
Applied to the Example
Here's why I think this is correct and will work as expected:
UCHAR_MAX >> 1 subjects UCHAR_MAX to "usual arithmetic conversions": By definition, UCHAR_MAX will fit into either an int or unsigned int, because on most systems int can represent all values of unsigned char, and on the few that don't, an unsigned int has to be able to represent all values of unsigned char, so that's just an "integer promotion" in this case.
Because bit shifts are defined in terms of values and not bitwise representations, UCHAR_MAX >> 1 is the quotient of UCHAR_MAX being divided by 2. (Let's call this result UCHAR_MAX_DIV_2).
UCHAR_MAX_DIV_2 + 1 subjects both arguments to usual arithmetic conversion: If UCHAR_MAX fit into an int, then the result is an int, otherwise, it is an unsigned int. Either way the conversions stop at integer promotion.
The result of UCHAR_MAX_DIV_2 + 1 is a positive value, which, when converted into an unsigned char, will have the most significant bit of the unsigned char set, and all other bits cleared (because the conversion would preserve the numerical value, and unsigned char is very strictly defined to have a pure binary representation without any padding bits or trap representations - but even without such an explicit requirement, the resulting value would have the most significant value bit set).
The (unsigned char) cast of MOST_SIGNIFICANT_BIT is actually redundant in this context - cast or no cast, it's going to be subject to the "usual arithmetic conversions" when bitwise-OR'ed. (but it might be useful in other contexts).
The above five steps will be constant-folded on pretty much every compiler out there - but a proper compiler should not constant-fold in a way which differs from the semantics of the code if it hadn't, so all of the above applies.
val | MOST_SIGNIFICANT_BIT is where it gets interesting: unlike << and >>, | and the other binary operators are defined in terms of manipulating the binary representations. Both val and MOST_SIGNIFICANT_BIT are subject to usual arithmetic conversions: details like the layout of bits or trap representations might mean a different binary representation, but should preserve the value: Given two variables of the same integer-type, holding non-negative, non-trap values, the value bits should "line up" correctly, so I actually expect that val | MOST_SIGNIFICANT_BIT produces the correct value (let's call this result VAL_WITH_MSB_SET). I don't see an explicit guarantee that this step couldn't produce a trap representation, but I don't believe there's an implementation of C where it would.
byte = VAL_WITH_MSB_SET forces a conversion: conversion of an integer type (so long as the value is a non-trap value) into a smaller, unsigned integer type is well defined: In this case, the value is reduced modulo UCHAR_MAX + 1. Since val is stated to be positive, the end result is that byte has the value of the remainder of VAL_WITH_MSB_SET divided by UCHAR_MAX + 1.
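A concrete sketch of the example with a specific value (assuming CHAR_BIT == 8, so UCHAR_MAX == 255 and MOST_SIGNIFICANT_BIT == 128):
#include <limits.h>
#include <stdio.h>

#define MOST_SIGNIFICANT_BIT ((unsigned char)((UCHAR_MAX >> 1) + 1))

int main(void)
{
    int val = 5;                                      /* arbitrary non-negative value */
    unsigned char byte = val | MOST_SIGNIFICANT_BIT;
    printf("%u\n", (unsigned)byte);                   /* prints 133: bit 7 set, low bits hold 5 */
    return 0;
}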
Explaining Where It Doesn't Work
If val were to be a negative value or non-integer type, we'd be out of luck because there is no longer a logical certainty that the bits that get binary-OR'ed will have the same "meaning":
If val is a signed integer type but has a negative value, then even though MOST_SIGNIFICANT_BIT is promoted to a compatible type, even though the value bits "line up", the result doesn't have any guaranteed meaning (because C makes no guarantee about how negative numbers are encoded), nor is the result (for any encoding) going to have the same meaning, especially once assigned into the unsigned char at the last step.
If val has a non-integer type, it's already violating the C standard, which constrains |, &, and ^ operators to operating on the "integer types". But if your compiler allowed it (or you did some tricks using unions, etc), then you have no guarantees about what each bit means, and thus the bit that you set are meaningless.
In many ways, this question boils down to whether a conforming implementation is allowed to take two non-trap, non-negative values as operands to the bitwise operators, and produce a trap representation result
This is covered by C11 section 6.2.6.2 (C99 is similar). There is a footnote that clarifies the intent of the more technical text:
Regardless, no arithmetic operation on valid values can generate a trap representation other than as part of an exceptional condition such as an overflow, and this cannot occur with unsigned types.
The bitwise operators are arithmetic operations as discussed here.
In this footnote, "trap representation" excludes the special case "negative zero". The negative zero may or may not cause UB, but it has its own text (in 6.2.6.2 also) separate from the trap representation text.
So your question can actually be answered for both signed and unsigned values; the only dangerous case is the "negative zero" possibility. (Which can't occur from non-negative input).

How to do a bit representation in a C-standard way?

As per the C standard, the value representation of an integer type is implementation-defined. So 5 might not be represented as 00000000000000000000000000000101 or -1 as 11111111111111111111111111111111 as we usually assume in 32-bit two's complement. So even though the operators ~, << and >> are well defined, the bit patterns they work on are implementation-defined. The only defined bit pattern I could find was "§5.2.1/3 A byte with all bits set to 0, called the null character, shall exist in the basic execution character set; it is used to terminate a character string.".
So my question is: is there an implementation-independent way of converting integer types to a bit pattern?
We can always start with a null character and do enough bit operations on it to get it to a desired value, but I find that too cumbersome. I also realise that practically all implementations use a two's complement representation, but I want to know how to do it in a pure C-standard way. Personally I find this topic quite intriguing because of device-driver programming, where all the code written to date assumes a particular implementation.
In general, it's not that hard to accommodate unusual platforms in most cases (if you don't want to simply assume 8-bit char, 2's complement, no padding, no trap, and truncating unsigned-to-signed conversion); the standard mostly gives enough guarantees (a few macros to inspect certain implementation details would be helpful, though).
As far as a strictly conforming program can observe (outside bit-fields), 5 is always encoded as 00...0101. This is not necessarily the physical representation (whatever this should mean), but what is observable by portable code. A machine using Gray code internally, for example, would have to emulate a "pure binary notation" for bitwise operators and shifts.
For negative values of signed types, different encodings are allowed, which leads to different (but well-defined for every case) results when re-interpreting as the corresponding unsigned type. For example, strictly conforming code must distinguish between (unsigned)n and *(unsigned *)&n for a signed integer n: They are equal for two's complement without padding bits, but different for the other encodings if n is negative.
Further, padding bits may exist, and signed integer types may have more padding bits than their corresponding unsigned counterparts (but not the other way round, type-punning from signed to unsigned is always valid). sizeof cannot be used to get the number of non-padding bits, so e.g. to get an unsigned value where only the sign-bit (of the corresponding signed type) is set, something like this must be used:
#define TYPE_PUN(to, from, x) ( *(to *)&(from){(x)} )
unsigned sign_bit = TYPE_PUN(unsigned, int, INT_MIN) &
                    TYPE_PUN(unsigned, int, -1) & ~1u;
(there are probably nicer ways) instead of
unsigned sign_bit = 1u << sizeof sign_bit * CHAR_BIT - 1;
as this may shift by more than the width. (I don't know of a constant expression giving the width, but sign_bit from above can be right-shifted until it's 0 to determine it, Gcc can constant-fold that.) Padding bits can be inspected by memcpying into an unsigned char array, though they may appear to "wobble": Reading the same padding bit twice may give different results.
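As a sketch of that width-probing idea (sign_bit is the value computed above; the helper name is made up):
/* Count how many right shifts it takes to clear the isolated sign bit:
 * that is the position of the sign bit plus one, i.e. the number of
 * value bits plus the sign bit of int (padding bits excluded). */
static unsigned int_width_from_sign_bit(unsigned sign_bit)
{
    unsigned width = 0;
    for (; sign_bit; sign_bit >>= 1) {
        width++;
    }
    return width;
}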
If you want the bit pattern (without padding bits) of a signed integer (little endian):
#include <stdio.h>   /* putchar */
#include <limits.h>  /* INT_MAX */

int print_bits_u(unsigned n) {
    for(; n; n>>=1) {
        putchar(n&1 ? '1' : '0'); // n&1 never traps
    }
    return 0;
}

int print_bits(int n) {
    return print_bits_u(*(unsigned *)&n & INT_MAX);
    /* This masks padding bits if int has more of them than unsigned int.
     * Note that INT_MAX is promoted to unsigned int here. */
}

int print_bits_2scomp(int n) {
    return print_bits_u(n);
}
print_bits gives different results for negative numbers depending on the representation used (it gives the raw bit pattern), print_bits_2scomp gives the two's complement representation (possibly with a greater width than a signed int has, if unsigned int has less padding bits).
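A usage sketch (the comments describe the output on a two's complement machine with 32-bit int and no padding bits):
print_bits_u(6u);        /* prints 011 - least significant bit first */
putchar('\n');
print_bits(-1);          /* prints 31 ones: the raw pattern with the sign bit masked off by INT_MAX */
putchar('\n');
print_bits_2scomp(-1);   /* prints 32 ones: the full two's complement pattern */
putchar('\n');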
Care must be taken not to generate trap representations when using bitwise operators and when type-punning from unsigned to signed, see below how these can potentially be generated (as an example, *(int *)&sign_bit can trap with two's complement, and -1 | 1 can trap with ones' complement).
Unsigned-to-signed integer conversion (if the converted value isn't representable in the target type) is always implementation-defined, I would expect non-2's complement machines to differ from the common definition more likely, though technically, it could also become an issue on 2's complement implementations.
From C11 (n1570) 6.2.6.2:
(1) For unsigned integer types other than unsigned char, the bits of the object representation shall be divided into two groups: value bits and padding bits (there need not be any of the latter). If there are N value bits, each bit shall represent a different power of 2 between 1 and 2^(N-1), so that objects of that type shall be capable of representing values from 0 to 2^N - 1 using a pure binary representation; this shall be known as the value representation. The values of any padding bits are unspecified.
(2) For signed integer types, the bits of the object representation shall be divided into three groups: value bits, padding bits, and the sign bit. There need not be any padding bits; signed char shall not have any padding bits. There shall be exactly one sign bit. Each bit that is a value bit shall have the same value as the same bit in the object representation of the corresponding unsigned type (if there are M value bits in the signed type and N in the unsigned type, then M <= N). If the sign bit is zero, it shall not affect the resulting value. If the sign bit is one, the value shall be modified in one of the following ways:
the corresponding value with sign bit 0 is negated (sign and magnitude);
the sign bit has the value -(2^M) (two's complement);
the sign bit has the value -(2^M - 1) (ones' complement).
Which of these applies is implementation-defined, as is whether the value with sign bit 1 and all value bits zero (for the first two), or with sign bit and all value bits 1 (for ones' complement), is a trap representation or a normal value. In the case of sign and magnitude and ones' complement, if this representation is a normal value it is called a negative zero.
To add to mafso's excellent answer, there's a part of the ANSI C rationale which talks about this:
The Committee has explicitly restricted the C language to binary architectures, on the grounds that this stricture was implicit in any case:
Bit-fields are specified by a number of bits, with no mention of “invalid integer” representation. The only reasonable encoding for such bit-fields is binary.
The integer formats for printf suggest no provision for “invalid integer” values, implying that any result of bitwise manipulation produces an integer result which can be printed by printf.
All methods of specifying integer constants — decimal, hex, and octal — specify an integer value. No method independent of integers is defined for specifying “bit-string constants.” Only a binary encoding provides a complete one-to-one mapping between bit strings and integer values.
The restriction to binary numeration systems rules out such curiosities as Gray code and makes
possible arithmetic definitions of the bitwise operators on unsigned types.
The relevant part of the standard might be this quote:
3.1.2.5 Types
[...]
The type char, the signed and unsigned integer types, and the
enumerated types are collectively called integral types. The
representations of integral types shall define values by use of a pure
binary numeration system.
If you want to get the bit pattern of a given int, then bit-wise operators are your friends. If you want to convert an int to its two's complement representation, then arithmetic operators are your friends. The two representations can be different, as it is implementation defined:
Std Draft 2011. 6.5/4. Some operators (the unary operator ~, and the
binary operators <<, >>, &, ^, and |, collectively described as
bitwise operators) are required to have operands that have integer
type. These operators yield values that depend on the internal
representations of integers, and have implementation-defined and
undefined aspects for signed types.
So it means that i<<1 will effectively shift the bit pattern by one position to the left, but the value produced can be different from i*2 (even for small values of i).

How do I byte-swap a signed number in C?

I understand that casting from an unsigned type to a signed type of equal rank produces an implementation-defined value:
C99 6.3.1.3:
Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.
This means I don't know how to byte-swap a signed number. For instance, suppose I am receiving two-byte, twos-complement signed values in little-endian order from a peripheral device, and processing them on a big-endian CPU. The byte-swapping primitives in the C library (like ntohs) are defined to work on unsigned values. If I convert my data to unsigned so I can byte-swap it, how do I reliably recover a signed value afterward?
As you say in your question, the result is implementation-defined or an implementation-defined signal is raised - i.e. what happens depends on the platform/compiler.
To byte-swap a signed number while avoiding as much implementation-defined behavior as possible, you can make use of a wider signed intermediate, one that can represent the entire range of the unsigned type with the same width as the signed value you wanted to byte-swap. Taking your example of little-endian, 16-bit numbers:
// Code below assumes CHAR_BIT == 8, INT_MAX is at least 65536, and
// signed numbers are twos complement.
#include <stdint.h>

int16_t
sl16_to_host(unsigned char b[2])
{
    unsigned int n = ((unsigned int)b[0]) | (((unsigned int)b[1]) << 8);
    int v = n;
    if (n & 0x8000) {
        v -= 0x10000;
    }
    return (int16_t)v;
}
Here's what this does. First, it converts the little-endian value in b to a host-endian unsigned value (regardless of which endianness the host actually is). Then it stores that value in a wider, signed variable. Its value is still in the range [0, 65535], but it is now a signed quantity. Because int can represent all the values in that range, the conversion is fully defined by the standard.
Now comes the key step. We test the high bit of the unsigned value, which is the sign bit, and if it's true we subtract 65536 (0x10000) from the signed value. That maps the range [32768, 65535] to [-32768, -1], which is precisely how a twos-complement signed number is encoded. This is still happening in the wider type and therefore we are guaranteed that all the values in the range are representable.
Finally, we truncate the wider type to int16_t. This step involves unavoidable implementation-defined behavior, but with probability one, your implementation defines it to behave as you would expect. In the vanishingly unlikely event that your implementation uses sign-and-magnitude or ones-complement representation for signed numbers, the value -32768 will be mangled by the truncation, and may cause the program to crash. I wouldn't bother worrying about it.
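A short usage sketch (the byte values are chosen arbitrarily):
unsigned char raw[2] = { 0xFE, 0xFF };  /* little-endian two's complement -2 */
int16_t v = sl16_to_host(raw);          /* n = 0xFFFE = 65534; 65534 - 65536 = -2 */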
Another approach, which may be useful for byteswapping 32-bit numbers when you don't have a 64-bit type available, is to mask out the sign bit and handle it separately:
int32_t
sl32_to_host(unsigned char b[4])
{
    uint32_t mag = ((((uint32_t)b[0]) & 0xFF) <<  0) |
                   ((((uint32_t)b[1]) & 0xFF) <<  8) |
                   ((((uint32_t)b[2]) & 0xFF) << 16) |
                   ((((uint32_t)b[3]) & 0x7F) << 24);
    int32_t val = mag;
    if (b[3] & 0x80) {
        val = (val - 0x7fffffff) - 1;
    }
    return val;
}
I've written (val - 0x7fffffff) - 1 here, instead of just val - 0x80000000, to ensure that the subtraction happens in a signed type.
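And a matching usage sketch for the 32-bit variant (again with arbitrary bytes):
unsigned char raw32[4] = { 0xFF, 0xFF, 0xFF, 0xFF };  /* little-endian two's complement -1 */
int32_t v32 = sl32_to_host(raw32);  /* mag = 0x7FFFFFFF, sign bit set: (0x7FFFFFFF - 0x7FFFFFFF) - 1 = -1 */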
I understand that casting from an unsigned type to a signed type of equal rank produces an implementation-defined value.
It will be implementation-defined only because the signedness format in C is implementation-defined. For example, two's complement is one such implementation-defined format.
So the only issue here is if either side of the transmission would not be two's complement, which is not likely going to happen in the real world. I would not bother to design programs to be portable to obscure, extinct one's complement computers from the dark ages.
This means I don't know how to byte-swap a signed number. For instance, suppose I am receiving two-byte, twos-complement signed values in little-endian order from a peripheral device, and processing them on a big-endian CPU
I suspect a source of confusion here is that you think a generic two's complement number will be transmitted from a sender that is either big or little endian and received by one which is either big or little endian. Data transmission protocols don't work like that though: they explicitly specify endianness and signedness format. So both sides have to adapt to the protocol.
And once that's specified, there's really no rocket science here: you are receiving 2 raw bytes. Store them in an array of raw data. Then assign them to your two's complement variable. Suppose the protocol specified little endian:
int16_t val;
uint8_t little[2];
val = (little[1]<<8) | little[0];
Bit shifting has the advantage of being endian-independent. So the above code will work no matter if your CPU is big or little. So although this code contains plenty of ugly implicit promotions, it is 100% portable. C is guaranteed to treat the above as this:
val = (int16_t)( ((int)((int)little[1]<<8)) | (int)little[0] );
The result type of the shift operator is that of its promoted left operand. The result type of | is the balanced type (usual arithmetic conversions).
Shifting signed negative numbers would give undefined behavior, but we get away with the shift because the individual bytes are unsigned. When they get implicitly promoted, the numbers are still treated as positive.
And since int is guaranteed to be at least 16 bits, the code will work on all CPUs.
Alternatively, you could use pedantic style that completely excludes all implicit promotions/conversions:
val = (int16_t) ( ((uint32_t)little[1] << 8) | (uint32_t)little[0] );
But this comes at the cost of readability.
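Pulled together as a self-contained sketch (the function name is illustrative; it follows the pedantic form above and assumes two's complement on the host):
#include <stdint.h>

/* Decode a little-endian, two's complement 16-bit value from raw bytes. */
int16_t le16_to_host(const uint8_t little[2])
{
    return (int16_t)( ((uint32_t)little[1] << 8) | (uint32_t)little[0] );
}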
