Is bitwise & equivalent to modulo operation - c

I came across the following snippet, which I believe is meant to convert an integer into its binary representation.
Can anyone tell me why & 1 is used instead of % 2? Many thanks.
for (i = 0; i <= nBits; ++i) {
bits[i] = ((unsigned long) x & 1) ? 1 : 0;
x = x / 2;
}

The representation of unsigned integers is specified by the Standard: an unsigned integer with n value bits represents numbers in the range [0, 2^n), with the usual binary semantics. Therefore, the least significant bit is the remainder of the value of the integer after division by 2.
It is debatable whether it's useful to replace readable mathematics with low-level bit operations; this kind of style was popular in the 70s when compilers weren't very smart. Nowadays I think you can assume that a compiler will know that dividing by two can be realized as bit shift etc., so you can just Write What You Mean.
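For instance, here is a minimal sketch of the same extraction loop written with arithmetic rather than bit operations (to_bits and the variable names are hypothetical; a reasonable optimizing compiler should generate the same code as the & 1 / >> 1 version for an unsigned operand):
#include <stdio.h>

/* Spread the bits of x into bits[0..nBits-1], least significant first,
   written with "readable mathematics" rather than bit operations. */
void to_bits(unsigned long x, int bits[], int nBits)
{
    for (int i = 0; i < nBits; ++i) {
        bits[i] = x % 2;   /* same as x & 1 for an unsigned x */
        x = x / 2;         /* same as x >> 1 for an unsigned x */
    }
}

int main(void)
{
    int bits[8];
    to_bits(5, bits, 8);
    for (int i = 7; i >= 0; --i)
        printf("%d", bits[i]);   /* prints 00000101 */
    putchar('\n');
    return 0;
}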

What the code snippet does is not convert an unsigned int into a binary number (its internal representation is already binary). It creates a bit array holding the values of the unsigned int's bits; it spreads the number out over an array, if you will.
e.g. x=3 => bits[2]=0 bits[1]=1 bits[0]=1
To do this, it:
selects the last bit of the number and places it in the bits array (the & 1 operation),
then shifts the number to the right by one position (/ 2 is equivalent to >> 1),
and repeats the above operations for all the bits.
You could have used % 2 instead of & 1; the generated code should be the same. But I guess it's just a matter of programming style and preference. For most programmers, & 1 is a lot clearer than % 2.

In your example, % 2 and & 1 are the same. Which one to use is probably simply a matter of taste. While % 2 is probably easier to read for people with a strong mathematics background, & 1 is easier to understand for people with a strong technical background.

They are equivalent only in this particular case (a non-negative operand). It's an old Fortran-influenced style.
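To see why the equivalence is limited to this case (a non-negative or unsigned operand), note that C's truncating division makes % 2 yield 0 or -1 for negative signed values, while & 1 still yields 0 or 1 on two's complement machines. A tiny illustration:
#include <stdio.h>

int main(void)
{
    int x = -3;
    printf("%d %% 2 = %d\n", x, x % 2);  /* -3 % 2 = -1: the remainder takes the sign of the dividend */
    printf("%d & 1 = %d\n", x, x & 1);   /* -3 & 1 = 1 on two's complement machines */
    return 0;
}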


Bitwise operation results in unexpected variable size

Context
We are porting C code that was originally compiled using an 8-bit C compiler for the PIC microcontroller. A common idiom that was used in order to prevent unsigned global variables (for example, error counters) from rolling over back to zero is the following:
if(~counter) counter++;
The bitwise operator here inverts all the bits and the statement is only true if counter is less than the maximum value. Importantly, this works regardless of the variable size.
Problem
We are now targeting a 32-bit ARM processor using GCC. We've noticed that the same code produces different results. So far as we can tell, it looks like the bitwise complement operation returns a value that is a different size than we would expect. To reproduce this, we compile, in GCC:
uint8_t i = 0;
int sz;
sz = sizeof(i);
printf("Size of variable: %d\n", sz); // Size of variable: 1
sz = sizeof(~i);
printf("Size of result: %d\n", sz); // Size of result: 4
In the first line of output, we get what we would expect: i is 1 byte. However, the bitwise complement of i is actually four bytes which causes a problem because comparisons with this now will not give the expected results. For example, if doing (where i is a properly-initialized uint8_t):
if(~i) i++;
we will see i "wrap around" from 0xFF back to 0x00. This behaviour under GCC differs from the previous compiler and 8-bit PIC microcontroller, where the code worked as we intended.
We are aware that we can resolve this by casting like so:
if((uint8_t)~i) i++;
or, by
if(i < 0xFF) i++;
however, in both of these workarounds the size of the variable must be known, which is error-prone for the software developer. These kinds of upper-bound checks occur throughout the codebase. There are multiple sizes of variables (e.g., uint16_t and unsigned char, etc.), and changing these in an otherwise working codebase is not something we're looking forward to.
Question
Is our understanding of the problem correct, and are there options for resolving this that do not require revisiting each case where we've used this idiom? Is our assumption correct that an operation like bitwise complement should return a result that is the same size as the operand? It seems like this would break depending on processor architectures. I feel like I'm taking crazy pills and that C should be a bit more portable than this. Again, our understanding of this could be wrong.
On the surface this might not seem like a huge issue but this previously-working idiom is used in hundreds of locations and we're eager to understand this before proceeding with expensive changes.
Note: There is a seemingly similar but not exact duplicate question here: Bitwise operation on char gives 32 bit result
I didn't see the actual crux of the issue discussed there, namely, the result size of a bitwise complement being different than what's passed into the operator.
What you are seeing is the result of integer promotions. In most cases where an integer value is used in an expression, if the type of the value is smaller than int the value is promoted to int. This is documented in section 6.3.1.1p2 of the C standard:
The following may be used in an expression wherever an int or unsigned int may be used:
— An object or expression with an integer type (other than int or unsigned int) whose integer conversion rank is less than or equal to the rank of int and unsigned int.
— A bit-field of type _Bool, int, signed int, or unsigned int.
If an int can represent all values of the original type (as restricted by the width, for a bit-field), the value is converted to an int; otherwise, it is converted to an unsigned int. These are called the integer promotions. All other types are unchanged by the integer promotions.
So if a variable has type uint8_t and the value 255, using any operator other than a cast or assignment on it will first convert it to type int with the value 255 before performing the operation. This is why sizeof(~i) gives you 4 instead of 1.
Section 6.5.3.3 describes that integer promotions apply to the ~ operator:
The result of the ~ operator is the bitwise complement of its
(promoted) operand (that is, each bit in the result is set if and only
if the corresponding bit in the converted operand is not set). The
integer promotions are performed on the operand, and the
result has the promoted type. If the promoted type is an unsigned
type, the expression ~E is equivalent to the maximum value
representable in that type minus E.
So assuming a 32 bit int, if counter has the 8 bit value 0xff it is converted to the 32 bit value 0x000000ff, and applying ~ to it gives you 0xffffff00.
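A minimal illustration of that promotion, assuming a 32-bit int as above (the variable name is taken from the question):
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t counter = 0xff;
    printf("~counter          = 0x%08x\n", (unsigned)~counter);          /* 0xffffff00 */
    printf("(uint8_t)~counter = 0x%02x\n", (unsigned)(uint8_t)~counter); /* 0x00 */
    return 0;
}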
Probably the simplest way to handle this without having to know the type is to check if the value is 0 after incrementing, and if so decrement it.
if (!++counter) counter--;
The wraparound of unsigned integers works in both directions, so decrementing a value of 0 gives you the largest representable value.
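A quick sanity check of this idiom, as a sketch with hypothetical variables; it behaves the same for any unsigned width because unsigned arithmetic wraps modulo 2^N:
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t  c8  = 0xff;
    uint16_t c16 = 0xffff;

    if (!++c8)  c8--;   /* wraps to 0, then is put back to 0xff */
    if (!++c16) c16--;  /* wraps to 0, then is put back to 0xffff */

    printf("c8 = 0x%02x, c16 = 0x%04x\n", (unsigned)c8, (unsigned)c16);
    return 0;
}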
In sizeof(i) you request the size of the variable i, so 1.
In sizeof(~i) you request the size of the type of the expression ~i, which is int, in your case 4.
Using
if(~i)
to test whether i does not equal 255 (in your case, with a uint8_t) is not very readable; just do
if (i != 255)
and you will have portable and readable code.
There are multiple sizes of variables (eg., uint16_t and unsigned char etc.)
To manage any size of unsigned :
if (i != (((uintmax_t) 2 << (sizeof(i)*CHAR_BIT-1)) - 1))
The expression is constant, so computed at compile time.
#include <limits.h> for CHAR_BIT and #include <stdint.h> for uintmax_t
Here are several options for implementing “Add 1 to x but clamp at the maximum representable value,” given that x is some unsigned integer type:
Add one if and only if x is less than the maximum value representable in its type:
x += x < Maximum(x);
See the following item for the definition of Maximum. This method
stands a good chance of being optimized by a compiler to efficient
instructions such as a compare, some form of conditional set or move,
and an add.
Compare to the largest value of the type:
if (x < ((uintmax_t) 2u << sizeof x * CHAR_BIT - 1) - 1) ++x;
(This calculates 2^N, where N is the number of bits in x, by shifting 2 by N−1 bits. We do this instead of shifting 1 by N bits because a shift by the number of bits in a type is not defined by the C standard. The CHAR_BIT macro may be unfamiliar to some; it is the number of bits in a byte, so sizeof x * CHAR_BIT is the number of bits in the type of x.)
This can be wrapped in a macro as desired for aesthetics and clarity:
#define Maximum(x) (((uintmax_t) 2u << sizeof (x) * CHAR_BIT - 1) - 1)
if (x < Maximum(x)) ++x;
Increment x and correct if it wraps to zero, using an if:
if (!++x) --x; // !++x is true if ++x wraps to zero.
Increment x and correct if it wraps to zero, using an expression:
++x; x -= !x;
This is nominally branchless (sometimes beneficial for performance), but a compiler may implement it the same as above, using a branch if needed, or possibly with unconditional instructions if the target architecture has suitable instructions.
A branchless option, using the above macro, is:
x += 1 - x/Maximum(x);
If x is the maximum of its type, this evaluates to x += 1-1. Otherwise, it is x += 1-0. However, division is somewhat slow on many architectures. A compiler may optimize this to instructions without division, depending on the compiler and the target architecture.
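Putting a couple of these options together into a compilable sketch (it reuses the Maximum macro from above and assumes the unsigned types have no padding bits; the asserts just illustrate that a value already at its maximum stays put):
#include <assert.h>
#include <limits.h>
#include <stdint.h>

#define Maximum(x) (((uintmax_t) 2u << sizeof (x) * CHAR_BIT - 1) - 1)

int main(void)
{
    unsigned char a = UCHAR_MAX;
    unsigned int  b = UINT_MAX;

    a += a < Maximum(a);      /* already at the maximum: stays UCHAR_MAX */
    b += 1 - b / Maximum(b);  /* branchless form: stays UINT_MAX */

    assert(a == UCHAR_MAX);
    assert(b == UINT_MAX);
    return 0;
}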
Before stdint.h, variable sizes could vary from compiler to compiler; the actual variable types in C are still int, long, etc., and their sizes are defined by the compiler author, not by some standard nor by target-specific assumptions. The author(s) then need to create stdint.h to map the two worlds; that is the purpose of stdint.h, to map the uint_this and uint_that onto int, long, short.
If you are porting code from another compiler and it uses char, short, int, long, then you have to go through each type and do the port yourself; there is no way around it. Either the variable ends up with the right size as declared, or the declaration changes but the code as written still works...
if(~counter) counter++;
or... supply the mask or typecast directly:
if((~counter)&0xFF) counter++;
if((uint8_t)(~counter)) counter++;
At the end of the day, if you want this code to work you have to port it to the new platform. Your choice as to how. Yes, you have to spend the time to hit each case and do it right; otherwise you are going to keep coming back to this code, which is even more expensive.
If you isolate the variable types in the code before porting, and note what sizes those types are, then isolate the variables that use this idiom (it should be easy to grep for them) and change their declarations to the stdint.h definitions, which hopefully won't change in the future. And you would be surprised, but the wrong headers are used sometimes, so even put checks in so you can sleep better at night:
if(sizeof(uint8_t)!=1) return(FAIL);
And while that style of coding works (if(~counter) counter++;), for portability now and in the future it is best to use a mask to specifically limit the size (and not rely on the declaration). Do this when the code is written in the first place, or just finish the port now and then you won't have to re-port it again some other day. Or, to make the code more readable, write x < 0xFF or x != 0xFF or something like that; the compiler can optimize it into the same code it would for any of these solutions, it just makes the code more readable and less risky...
It depends on how important the product is, or how many times you want to send out patches/updates or roll a truck or walk to the lab to fix the thing, as to whether you try to find a quick solution or just touch the affected lines of code. If it is only a hundred or a few, that is not that big of a port.
6.5.3.3 Unary arithmetic operators
...
4 The result of the ~ operator is the bitwise complement of its (promoted) operand (that is,
each bit in the result is set if and only if the corresponding bit in the converted operand is
not set). The integer promotions are performed on the operand, and the result has the
promoted type. If the promoted type is an unsigned type, the expression ~E is equivalent
to the maximum value representable in that type minus E.
C 2011 Online Draft
The issue is that the operand of ~ is being promoted to int before the operator is applied.
Unfortunately, I don't think there's an easy way out of this. Writing
if ( counter + 1 ) counter++;
won't help because promotions apply there as well. The only thing I can suggest is creating some symbolic constants for the maximum value you want that object to represent and testing against that:
#define MAX_COUNTER 255
...
if ( counter < MAX_COUNTER ) counter++;

In C, How do I calculate the signed difference between two 48-bit unsigned integers?

I've got two values from an unsigned 48bit nanosecond counter, which may wrap.
I need the difference, in nanoseconds, of the two times.
I think I can assume that the readings were taken at roughly the same time, so of the two possible answers I think I'm safe taking the smallest.
They're both stored as uint64_t, because I don't think I can have 48-bit types.
I'd like to calculate the difference between them, as a signed integer (presumably int64_t), accounting for the wrapping.
so e.g. if I start out with
x=5
y=3
then the result of x-y is 2, and will stay so if I increment both x and y, even as they wrap over the top of the max value 0xffffffffffff
Similarly if x=3, y=5, then x-y is -2, and will stay so whenever x and y are incremented simultaneously.
If I could declare x,y as uint48_t, and the difference as int48_t, then I think
int48_t diff = x - y;
would just work.
How do I simulate this behaviour with the 64-bit arithmetic I've got available?
(I think any computer this is likely to run on will use 2's complement arithmetic)
P.S. I can probably hack this out, but I wonder if there's a nice neat standard way to do this sort of thing, which the next person to read my code will be able to understand.
P.P.S Also, this code is going to end up in the tightest of tight loops, so something that will compile efficiently would be nice, so that if there has to be a choice, speed trumps readability.
You can simulate a 48-bit unsigned integer type by just masking off the top 16 bits of a uint64_t after any arithmetic operation. So, for example, to take the difference between those two times, you could do:
uint64_t diff = (after - before) & 0xffffffffffff;
You will get the right value even if the counter wrapped around during the procedure. If the counter didn't wrap around, the masking is not needed but not harmful either.
Now if you want this difference to be recognized as a signed integer by your compiler, you have to sign extend the 48th bit. That means that if the 48th bit is set, the number is negative, and you want to set the 49th through the 64th bit of your 64-bit integer. I think a simple way to do that is:
int64_t diff_signed = (int64_t)(diff << 16) >> 16;
Warning: You should probably test this to make sure it works, and also beware there is implementation-defined behavior when I cast the uint64_t to an int64_t, and I think there is implementation-defined behavior when I shift a signed negative number to the right. I'm sure a C language lawyer could come up with something more robust.
Update: The OP points out that if you combine the operation of taking the difference and doing the sign extension, there is no need for masking. That would look like this:
int64_t diff = (int64_t)(x - y) << 16 >> 16;
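Wrapped up as a small helper, purely as a sketch: diff48 is a made-up name, and it relies on the implementation-defined (but near-universal) conversion and arithmetic right shift discussed in the warning above:
#include <stdint.h>
#include <stdio.h>

/* Signed difference x - y of two 48-bit counters stored in uint64_t,
   assuming two's complement and an arithmetic right shift of signed values. */
static int64_t diff48(uint64_t x, uint64_t y)
{
    return (int64_t)((x - y) << 16) >> 16;
}

int main(void)
{
    printf("%lld\n", (long long)diff48(3, 5));                  /* -2 */
    /* Same answer when the counter has wrapped past 2^48. */
    printf("%lld\n", (long long)diff48(1, 0xffffffffffffULL));  /*  2 */
    return 0;
}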
struct Nanosecond48 {
    unsigned long long u48 : 48;
    // int res : 12; // just for clarity, don't need this one really
};
Here we just declare the explicit width of the bit-field to be 48 bits, and with that (admittedly somewhat awkward) type you leave it up to your compiler to properly handle different architectures/platforms/whatnot.
Like the following:
struct Nanosecond48 u1, u2, overflow;
overflow.u48 = -1L;
u1.u48 = 3;
u2.u48 = 5;
const unsigned long long diff = (u2.u48 + (overflow.u48 + 1) - u1.u48) & 0x0000FFFFFFFFFFFF;
Of course in the last statement you can just do the remainder operation with % (overflow.u48 + 1) if you prefer.
Do you know which was the earlier reading and which was later? If so:
diff = (earlier <= later) ? later - earlier : WRAPVAL - earlier + later;
where WRAPVAL is (1ULL << 48), is pretty easy to read.

How to copy MSB to rest of the byte?

In an interrupt subroutine (called every 5 µs), I need to check the MSB of a byte and copy it to the rest of the byte.
I need to do something like:
if(MSB == 1){byte = 0b11111111}
else{byte = 0b00000000}
I need it to make it fast as it is on an interrupt subroutine, and there is some more code on it, so efficiency is calling.
Therefore, I don't want to use any if, switch, select, nor >> operators, as I have the feeling that it would slow down the process. If I'm wrong, then I'll go the "easy" way.
What I've tried:
byte = byte & 0b10000000
This gives me 0b10000000 or 0b00000000.
But I need the first to be 0b11111111.
I think I'm missing an OR somewhere (plus other gates). I don't know; my gut is telling me that this should be easy, but it isn't for me at this moment.
The trick is to use a signed type, such as int8_t, for your byte variable, and take advantage of the sign-extension feature of the shift-right operation:
byte = byte >> 7;
Shifting right is very fast - a single instruction on most modern (and even not so modern) CPUs.
The reason this works is that >> on signed operands inserts the sign bit on the left to preserve the sign of its operand. This is called sign extension.
Note: Technically, this behavior is implementation-defined, and therefore is not universally portable. Thanks, Eugene Sh., for a comment and a reference.
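A minimal demonstration of the sign extension, with the same caveat as the note above about implementation-defined right shifts of negative values:
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int8_t byte = INT8_MIN;                        /* 0x80: MSB set */
    byte = byte >> 7;
    printf("0x%02x\n", (unsigned)(uint8_t)byte);   /* 0xff */

    byte = 0x40;                                   /* MSB clear */
    byte = byte >> 7;
    printf("0x%02x\n", (unsigned)(uint8_t)byte);   /* 0x00 */
    return 0;
}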
EDIT: My answer has confused people because I did not specify unsigned bytes. Here my assumption is that B is of type unsigned char. As one comment notes below, I can omit the &1. This is not as fast as the signed byte solution that the other poster put up, but this code should be portable (once it is understood that B is unsigned type).
B = -((B >> 7)&1);
Negative numbers are our friends. Shifting bits should be fast, by the way.
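As a compilable sketch of that portable form, assuming CHAR_BIT is 8 so that bit 7 is the MSB (the values are made up; the negation relies only on unsigned wraparound, which is well defined):
#include <stdio.h>

int main(void)
{
    unsigned char B = 0x93;                /* MSB set */
    B = (unsigned char)-((B >> 7) & 1);
    printf("0x%02x\n", (unsigned)B);       /* 0xff */

    B = 0x42;                              /* MSB clear */
    B = (unsigned char)-((B >> 7) & 1);
    printf("0x%02x\n", (unsigned)B);       /* 0x00 */
    return 0;
}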
The MSB is actually the sign bit of the number. It is 1 for a negative number and 0 for a positive number.
so the simplest way to do that is
if(byte < 0)
{
    byte = -1;
}
else
{
    byte = 0;
}
because -1 is 11111111 in binary (two's complement).
If it is the case for an unsigned integer, then just simply type cast it into a signed value and then compare it again as mentioned above.

Programmatically determining max value of a signed integer type

This related question is about determining the max value of a signed type at compile-time:
C question: off_t (and other signed integer types) minimum and maximum values
However, I've since realized that determining the max value of a signed type (e.g. time_t or off_t) at runtime seems to be a very difficult task.
The closest thing to a solution I can think of is:
uintmax_t x = (uintmax_t)1<<CHAR_BIT*sizeof(type)-2;
while ((type)x<=0) x>>=1;
This avoids any looping as long as type has no padding bits, but if type does have padding bits, the cast invokes implementation-defined behavior, which could be a signal or a nonsensical implementation-defined conversion (e.g. stripping the sign bit).
I'm beginning to think the problem is unsolvable, which is a bit unsettling and would be a defect in the C standard, in my opinion. Any ideas for proving me wrong?
Let's first see how C defines "integer types". Taken from ISO/IEC 9899, §6.2.6.2:
6.2.6.2 Integer types
1 For unsigned integer types other than unsigned char, the bits of the object
representation shall be divided into two groups: value bits and padding bits (there need
not be any of the latter). If there are N value bits, each bit shall represent a different
power of 2 between 1 and 2^(N−1), so that objects of that type shall be capable of
representing values from 0 to 2^N − 1 using a pure binary representation; this shall be
known as the value representation. The values of any padding bits are unspecified.
2 For signed integer types, the bits of the object representation shall be divided into three
groups: value bits, padding bits, and the sign bit. There need not be any padding bits;
there shall be exactly one sign bit. Each bit that is a value bit shall have the same value as the same bit in the object representation of the corresponding unsigned type (if there are M value bits in the signed type and N in the unsigned type, then M ≤ N). If the sign bit
is zero, it shall not affect the resulting value. If the sign bit is one, the value shall be
modified in one of the following ways:
— the corresponding value with sign bit 0 is negated (sign and magnitude);
— the sign bit has the value −(2^M) (two’s complement);
— the sign bit has the value −(2^M − 1) (ones’ complement).
Which of these applies is implementation-defined, as is whether the value with sign bit 1
and all value bits zero (for the first two), or with sign bit and all value bits 1 (for ones’
complement), is a trap representation or a normal value. In the case of sign and
magnitude and ones’ complement, if this representation is a normal value it is called a
negative zero.
Hence we can conclude the following:
~(int)0 may be a trap representation, i.e. setting all bits to 1 is a bad idea
There might be padding bits in an int that have no effect on its value
The order of the bits actually representing powers of two is undefined; so is the position of the sign bit, if it exists.
The good news is that:
there's only a single sign bit
there's only a single bit that represents the value 1
With that in mind, there's a simple technique to find the maximum value of an int. Find the sign bit, then set it to 0 and set all other bits to 1.
How do we find the sign bit? Consider int n = 1;, which is strictly positive and guaranteed to have only the one-bit and maybe some padding bits set to 1. Then for all other bits i, if i==0 holds true, set it to 1 and see if the resulting value is negative. If it's not, revert it back to 0. Otherwise, we've found the sign bit.
Now that we know the position of the sign bit, we take our int n, set the sign bit to zero and all other bits to 1, and tadaa, we have the maximum possible int value.
Determining the int minimum is slightly more complicated and left as an exercise to the reader.
Note that the C standard humorously doesn't require two different ints to behave the same. If I'm not mistaken, there may be two distinct int objects that have e.g. their respective sign bits at different positions.
EDIT: while discussing this approach with R.. (see comments below), I have become convinced that it is flawed in several ways and, more generally, that there is no solution at all. I can't see a way to fix this posting (except deleting it), so I let it unchanged for the comments below to make sense.
Mathematically, if you have a finite set X of size n (n a positive integer) and a comparison operator (x, y, z in X; x <= y and y <= z implies x <= z), it's a very simple problem to find the maximum value. (Also, it exists.)
The easiest way to solve this problem, but the most computationally expensive, is to generate an array with all possible values, then find the max.
Part 1. For any type with a finite member set, there's a finite number of bits (m) which can be used to uniquely represent any given member of that type. We just make an array which contains all possible bit patterns, where any given bit pattern represents a given value of the specific type.
Part 2. Next we'd need to convert each binary number into the given type. This task is where my programming inexperience makes me unable to speak to how this may be accomplished. I've read some about casting, maybe that would do the trick? Or some other conversion method?
Part 3. Assuming that the previous step was finished, we now have a finite set of values in the desired type and a comparison operator on that set. Find the max.
But what if...
...we don't know the exact number of members of the given type? Then we over-estimate. If we can't produce a reasonable over-estimate, then there should be physical bounds on the number. Once we have an over-estimate, we check all of those possible bit patterns to confirm which bit patterns represent members of the type. After discarding those which aren't used, we now have a set of all possible bit patterns which represent some member of the given type. This most recently generated set is what we'd use now at part 1.
...we don't have a comparison operator in that type? Then the specific problem is not only impossible, but logically irrelevant. That is, if our program can't give a meaningful result when comparing two values from our given type, then our given type has no ordering in the context of our program. Without an ordering, there's no such thing as a maximum value.
...we can't convert a given binary number into a given type? Then the method breaks. But, similar to the previous exception, if you can't convert types, then our tool-set seems logically very limited.
Technically, you may not need to convert between binary representations and a given type. The entire point of the conversion is to ensure the generated list is exhaustive.
...we want to optimize the problem? Then we need some information about how the given type maps from binary numbers. For example, unsigned int, signed int (2's complement), and signed int (1's complement) each map from bits into numbers in a very documented and simple way. Thus, if we wanted the highest possible value for unsigned int and we knew we were working with m bits, then we could simply fill each bit with a 1, convert the bit pattern to decimal, then output the number.
This relates to optimization because the most expensive part of this solution is the listing of all possible answers. If we have some previous knowledge of how the given type maps from bit patterns, we can generate a subset of all possibilities by listing only the potential candidates instead.
Good luck.
Update: Thankfully, my previous answer below was wrong, and there seems to be a solution to this question.
intmax_t x;
for (x=INTMAX_MAX; (T)x!=x; x/=2);
This program either yields x containing the max possible value of type T, or generates an implementation-defined signal.
Working around the signal case may be possible but difficult and computationally infeasible (as in having to install a signal handler for every possible signal number), so I don't think this answer is fully satisfactory. POSIX signal semantics may give enough additional properties to make it feasible; I'm not sure.
The interesting part, especially if you're comfortable assuming you're not on an implementation that will generate a signal, is what happens when (T)x results in an implementation-defined conversion. The trick of the above loop is that it does not rely at all on the implementation's choice of value for the conversion. All it relies upon is that (T)x==x is possible if and only if x fits in type T, since otherwise the value of x is outside the range of possible values of any expression of type T.
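As a concrete sketch of that loop, instantiated for one hypothetical choice of T; on implementations where the out-of-range conversion merely produces an implementation-defined value rather than raising a signal, it terminates with x holding the maximum of T:
#include <stdint.h>
#include <stdio.h>

#define T short   /* hypothetical choice of the probed type */

int main(void)
{
    intmax_t x;
    for (x = INTMAX_MAX; (T)x != x; x /= 2)
        ;                                  /* halve until the value fits in T */
    printf("max of T: %jd\n", x);          /* 32767 for a typical 16-bit short */
    return 0;
}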
Old idea, wrong because it does not account for the above (T)x==x property:
I think I have a sketch of a proof that what I'm looking for is impossible:
Let X be a conforming C implementation and assume INT_MAX>32767.
Define a new C implementation Y identical to X, but where the values of INT_MAX and INT_MIN are each divided by 2.
Prove that Y is a conforming C implementation.
The essential idea of this outline is that, due to the fact that everything related to out-of-bound values with signed types is implementation-defined or undefined behavior, an arbitrary number of the high value bits of a signed integer type can be considered as padding bits without actually making any changes to the implementation except the limit macros in limits.h.
Any thoughts on if this sounds correct or bogus? If it's correct, I'd be happy to award the bounty to whoever can do the best job of making it more rigorous.
I might just be writing stupid things here, since I'm relatively new to C, but wouldn't this work for getting the max of a signed?
unsigned x = ~0;
signed y=x/2;
This might be a dumb way to do it, but as far as I've seen unsigned max values are signed max*2+1. Won't it work backwards?
Sorry for the time wasted if this proves to be completely inadequate and incorrect.
Shouldn't something like the following pseudo code do the job?
signed_type_of_max_size test_values =
[(1<<7)-1, (1<<15)-1, (1<<31)-1, (1<<63)-1];
for test_value in test_values:
signed_foo_t a = test_value;
signed_foo_t b = a + 1;
if (b < a):
print "Max positive value of signed_foo_t is ", a
Or much simpler, why shouldn't the following work?
signed_foo_t signed_foo_max = (1<<(sizeof(signed_foo_t)*8-1))-1;
For my own code, I would definitely go for a build-time check defining a preprocessor macro, though.
Assuming modifying padding bits won't create trap representations, you could use an unsigned char * to loop over and flip individual bits until you hit the sign bit. If your initial value was ~(type)0, this should get you the maximum:
type value = ~(type)0;
assert(value < 0);
unsigned char *bytes = (void *)&value;
size_t i = 0;
for(; i < sizeof value * CHAR_BIT; ++i)
{
    bytes[i / CHAR_BIT] ^= 1 << (i % CHAR_BIT);
    if(value > 0) break;
    bytes[i / CHAR_BIT] ^= 1 << (i % CHAR_BIT);
}
assert(value != ~(type)0);
// value == TYPE_MAX
Since you allow this to be at runtime, you could write a function that de facto does an iterative left shift of (type)3. If you stop once the value has fallen below 0, this will never give you a trap representation. And the number of iterations − 1 will tell you the position of the sign bit.
There remains the problem of the left shift. Since just using the operator << would lead to an overflow, this would be undefined behavior, so we can't use the operator directly.
The simplest solution to that is not to use a shifted 3 as above, but to iterate over the bit positions and to always set the least significant bit as well.
type x;
unsigned char *B = (unsigned char *)&x;
size_t signbit = 7;
for (;; ++signbit) {
    size_t bpos = signbit / CHAR_BIT;
    size_t apos = signbit % CHAR_BIT;
    x = 1;
    B[bpos] |= (1 << apos);
    if (x < 0) break;
}
(The start value 7 is the minimum width that a signed type must have, I think).
Why would this present a problem? The size of the type is fixed at compile time, so the problem of determining the runtime size of the type reduces to the problem of determining the compile-time size of the type. For any given target platform, a declaration such as off_t offset will be compiled to use some fixed size, and that size will then always be used when running the resulting executable on the target platform.
ETA: You can get the size of the type via sizeof(type). You could then compare against common integer sizes and use the corresponding MAX/MIN preprocessor define. You might find it simpler to just use:
uintmax_t bitWidth = sizeof(type) * CHAR_BIT;
/* note: ^ is XOR in C, not exponentiation, so build the power of two with a shift */
intmax_t sizeMax = (intmax_t)(((uintmax_t)1 << (bitWidth - 1)) - 1);
intmax_t sizeMin = -sizeMax - 1; /* assuming two's complement */
Just because a value is representable by the underlying "physical" type does not mean that value is valid for a value of the "logical" type. I imagine the reason max and min constants are not provided is that these are "semi-opaque" types whose use is restricted to particular domains. Where less opacity is desirable, you will often find ways of getting the information you want, such as the constants you can use to figure out how big an off_t is that are mentioned by the SUSv2 in its description of <unistd.h>.
For an opaque signed type for which you don't have a name of the associated unsigned type, this is unsolvable in a portable way, because any attempt to detect whether there is a padding bit will yield implementation-defined behavior or undefined behavior. The best thing you can deduce by testing (without additional knowledge) is that there are at least K padding bits.
BTW, this doesn't really answer the question, but can still be useful in practice: If one assumes that the signed integer type T has no padding bits, one can use the following macro:
#define MAXVAL(T) (((((T) 1 << (sizeof(T) * CHAR_BIT - 2)) - 1) * 2) + 1)
This is probably the best that one can do. It is simple and does not need to assume anything else about the C implementation.
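For example, a quick compile-time sanity check of that macro under the no-padding assumption (using C11 _Static_assert, as in the larger example further below):
#include <limits.h>

#define MAXVAL(T) (((((T) 1 << (sizeof(T) * CHAR_BIT - 2)) - 1) * 2) + 1)

_Static_assert(MAXVAL(signed char) == SCHAR_MAX, "padding bits?");
_Static_assert(MAXVAL(int)         == INT_MAX,   "padding bits?");
_Static_assert(MAXVAL(long long)   == LLONG_MAX, "padding bits?");

int main(void) { return 0; }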
Maybe I'm not getting the question right, but since C gives you 3 possible representations for signed integers (http://port70.net/~nsz/c/c11/n1570.html#6.2.6.2):
sign and magnitude
ones' complement
two's complement
and the max in any of these should be 2^(N-1)-1, you should be able to get it by taking the max of the corresponding unsigned, >>1-shifting it and casting the result to the proper type (which it should fit).
I don't know how to get the corresponding minimum if trap representations get in the way, but if they don't, the min should be either (Tp)((Tp)-1|(Tp)TP_MAX(Tp)) (all bits set) or (Tp)~TP_MAX(Tp), and which one it is should be simple to find out.
Example:
#include <limits.h>
#define UNSIGNED(Tp,Val) \
_Generic((Tp)0, \
_Bool: (_Bool)(Val), \
char: (unsigned char)(Val), \
signed char: (unsigned char)(Val), \
unsigned char: (unsigned char)(Val), \
short: (unsigned short)(Val), \
unsigned short: (unsigned short)(Val), \
int: (unsigned int)(Val), \
unsigned int: (unsigned int)(Val), \
long: (unsigned long)(Val), \
unsigned long: (unsigned long)(Val), \
long long: (unsigned long long)(Val), \
unsigned long long: (unsigned long long)(Val) \
)
#define MIN2__(X,Y) ((X)<(Y)?(X):(Y))
#define UMAX__(Tp) ((Tp)(~((Tp)0)))
#define SMAX__(Tp) ((Tp)( UNSIGNED(Tp,~UNSIGNED(Tp,0))>>1 ))
#define SMIN__(Tp) ((Tp)MIN2__( \
(Tp)(((Tp)-1)|SMAX__(Tp)), \
(Tp)(~SMAX__(Tp)) ))
#define TP_MAX(Tp) ((((Tp)-1)>0)?UMAX__(Tp):SMAX__(Tp))
#define TP_MIN(Tp) ((((Tp)-1)>0)?((Tp)0): SMIN__(Tp))
int main()
{
#define STC_ASSERT(X) _Static_assert(X,"")
STC_ASSERT(TP_MAX(int)==INT_MAX);
STC_ASSERT(TP_MAX(unsigned int)==UINT_MAX);
STC_ASSERT(TP_MAX(long)==LONG_MAX);
STC_ASSERT(TP_MAX(unsigned long)==ULONG_MAX);
STC_ASSERT(TP_MAX(long long)==LLONG_MAX);
STC_ASSERT(TP_MAX(unsigned long long)==ULLONG_MAX);
/*STC_ASSERT(TP_MIN(unsigned short)==USHRT_MIN);*/
STC_ASSERT(TP_MIN(int)==INT_MIN);
/*STC_ASSERT(TP_MIN(unsigned int)==UINT_MIN);*/
STC_ASSERT(TP_MIN(long)==LONG_MIN);
/*STC_ASSERT(TP_MIN(unsigned long)==ULONG_MIN);*/
STC_ASSERT(TP_MIN(long long)==LLONG_MIN);
/*STC_ASSERT(TP_MIN(unsigned long long)==ULLONG_MIN);*/
STC_ASSERT(TP_MAX(char)==CHAR_MAX);
STC_ASSERT(TP_MAX(signed char)==SCHAR_MAX);
STC_ASSERT(TP_MAX(short)==SHRT_MAX);
STC_ASSERT(TP_MAX(unsigned short)==USHRT_MAX);
STC_ASSERT(TP_MIN(char)==CHAR_MIN);
STC_ASSERT(TP_MIN(signed char)==SCHAR_MIN);
STC_ASSERT(TP_MIN(short)==SHRT_MIN);
}
For all real machines (two's complement and no padding):
type tmp = ((type)1)<< (CHAR_BIT*sizeof(type)-2);
max = tmp + (tmp-1);
With C++, you can calculate it at compile time.
template <class T>
struct signed_max
{
    static const T max_tmp = T(T(1) << (sizeof(T) * CHAR_BIT - 2u));
    static const T value = max_tmp + T(max_tmp - 1u);
};

Finding out no bits set in a variable in faster manner [duplicate]

Possible Duplicate:
Best algorithm to count the number of set bits in a 32-bit integer?
Finding out the number of bits set in a variable is easy. But how can we perform the same operation in the fastest way?
This page on Bit Twiddling Hacks covers several techniques to count the number of bits set, and discusses the performance of each.
The bit twiddling hacks page has a variety of suggestions.
I highly recommend reading Hacker's Delight for all questions regarding various forms of bit-twiddling. For counting bits, in particular, it analyzes several algorithms depending on the instructions you might have available to you.
If you're asking the question, then chances are __builtin_popcount on gcc is at least as fast as what you're currently doing. __builtin_popcount can generally be beaten on x86, so presumably on other CPUs too, but you don't say what your CPU is other than "embedded". It affects the answer.
If you're not using gcc, then you need to look up how to do a fast popcount on your actual compiler and/or CPU. For obvious reasons, there is no such thing as "the fastest way to count set bits in C".
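For reference, a minimal sketch of the GCC/Clang builtin route; the __builtin_popcount family is a compiler extension, not standard C:
#include <stdio.h>

int main(void)
{
    unsigned int       x = 0xF0F0u;
    unsigned long long y = 0xFFFFFFFFFFULL;

    printf("%d\n", __builtin_popcount(x));    /* 8  */
    printf("%d\n", __builtin_popcountll(y));  /* 40 */
    return 0;
}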
int i, size, set = 0;
for (i = 0, size = sizeof(int) * 8; i < size; i++)
{
    if (value & (1u << i)) set++;
}
Counting the set bits in a variable is termed the "population count", shortened to "popcount".
A very good micro-benchmark of different software algorithms is given at: http://www.dalkescientific.com/writings/diary/archive/2008/07/05/bitslice_and_popcount.html
AMD "Barcelona" processors onwards have a fast fixed-cost instruction, which in GCC you can get using __builtin_popcount
On Intel boxes I've found that __builtin_ffs in a loop works best for sparse bit sets.
It's something you can't rely upon; you must micro-benchmark if this is important to you.
If variable is an integer, you can count bits using
public static int BitCount(int x)
{ return ((x == 0) ? 0 : ((x < 0) ? 1 : 0) + BitCount(x <<= 1)); }
Explanation:
Recursive, if number is zero, no bits are set, and function returns a zero
else, it checks the sign bit and if set stores 1 else stores a 0,
then shifts the entire number one bit to the left
eliminating the sign bit just examined,
and putting a zero in rightmost bit,
and calls itself again with new left-Shifted value.
The overall result is to examine each bit from leftmost to rightmost, and for each one set, store on the stack whether that bit was set (as 1/0), left-shift the next bit into the sign-bit position, and recurse. When it finally gets past the last bit set, the value will be zero and the recursion will stop. The function then returns up the call stack, adding up all the temp values it stored on the way down, and returns the total.
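For completeness, here is a C adaptation of the same idea, offered only as a sketch: to avoid the undefined behaviour of left-shifting a negative signed value in C, it works on an unsigned value, inspecting the top bit and shifting left until nothing remains (bit_count is a made-up name):
#include <limits.h>
#include <stdio.h>

/* Count set bits by repeatedly inspecting the top bit and shifting left,
   using unsigned arithmetic so the shift is well defined. */
static int bit_count(unsigned int x)
{
    if (x == 0)
        return 0;
    return (int)(x >> (sizeof x * CHAR_BIT - 1)) + bit_count(x << 1);
}

int main(void)
{
    printf("%d\n", bit_count(0xF0F0u));  /* 8 */
    return 0;
}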
