Fast Range Detection Algorithm - arrays

I have an Array of 8 elements: Bin[8].
Bin represents a range container: I receive a number N such that 0 <= N <= 255.
If N < 32 ==> Bin[0] += 1
Else If 32 <= N < 64 ==> Bin[1] += 1
... etc.
I want a fast solution that does not require an if-else chain, as I have multiple Bins to handle.
I am using Java, but a solution in any programming language is accepted.
Thank you.

Ensure your number N is indeed 0 <= N <= 255, then simply:
Bin[N/32]++;
Edit: Another poster mentioned shifting right by 5 bits. That works too; however, I feel dividing by 32 shows the intent more clearly, and any modern compiler will optimize the division into a bit shift anyway if that is more efficient on the platform you're targeting.

We can use some bitwise operators for this:
binIndex = N >> 5;
Then
Bin[binIndex]++;
This just ignores the low 5 bits of the number, using the top three bits (if N <= 255) as an index into the array of bins.

Just use integer division (truncation):
Bin[N / 32] += 1;
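Since any language is accepted, here is a minimal, self-contained C sketch of the whole binning loop (the array and variable names are mine); the same arithmetic works verbatim on a Java int[]:

#include <stdio.h>

int main(void)
{
    int bin[8] = {0};
    int samples[] = {0, 31, 32, 100, 200, 255};
    int count = sizeof samples / sizeof samples[0];

    for (int i = 0; i < count; i++) {
        int n = samples[i];      /* assumed to satisfy 0 <= n <= 255 */
        bin[n / 32]++;           /* equivalently: bin[n >> 5]++      */
    }

    for (int i = 0; i < 8; i++)
        printf("bin[%d] = %d\n", i, bin[i]);
    return 0;
}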


How to use bit manipulation to find 5th bit, and to return number of 1 bits in an integer

Assume Z is an unsigned integer. Using ~, <<, >>, &, |, +, and -, provide statements that return the desired result.
I am allowed to introduce new binary values if needed.
I have these problems:
1. Extract the 5th bit from the left of Z.
For this I was thinking about doing something like
x x x x x x x x
& 0 0 0 0 1 0 0 0
___________________
0 0 0 0 1 0 0 0
Does this make sense for extracting the fifth bit? I am not totally sure how I would make this work by using just Z when I do not know its values. (I am relatively new to all of this). Would this type of idea work though?
2.Return the number of 1 bits in Z
Here I kind of have no idea how to work this out. What I really need to know is how to work on just Z with the operators, but I'm not sure exactly how to.
Like I said I am new to this, so any help is appreciated.
Problem 1
You’re right on the money. I’d do an & and a >> so that you get either a nice 0 or 1.
result = (z & 0x08) >> 3;
However, this may not be strictly necessary. For example, if you’re trying to check whether the bit is set as part of an if conditional, you can exploit C’s definition of anything nonzero as true.
if (z & 0x08)
do_stuff();
Problem 2
There are a whole variety of ways to do this. The following methodology dates from 1960, though it wasn’t published in C until 1988.
for (result = 0; z; result++)
z &= z - 1;
Exactly why this works might not be obvious at first, but if you work through a few examples, you’ll quickly see why it does.
It’s worth noting that this operation – determining the number of 1 bits in a number – is sufficiently important to have a name (population count or Hamming weight) and, on recent Intel and AMD processors, a dedicated instruction. If you’re using GCC, you can use the __builtin_popcount intrinsic.
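For illustration, here is a small test sketch (the helper name popcount_loop is mine, and the intrinsic check assumes GCC or Clang) that verifies the loop above against __builtin_popcount:

#include <assert.h>
#include <stdio.h>

/* Kernighan-style loop: each iteration clears the lowest set bit,
   so the loop body runs once per 1 bit. */
static int popcount_loop(unsigned int z)
{
    int result;
    for (result = 0; z; result++)
        z &= z - 1;
    return result;
}

int main(void)
{
    unsigned int samples[] = {0u, 1u, 0xFFu, 0xDEADBEEFu};
    for (int i = 0; i < 4; i++) {
        unsigned int z = samples[i];
        assert(popcount_loop(z) == __builtin_popcount(z));  /* GCC/Clang builtin */
        printf("%u has %d one bits\n", z, popcount_loop(z));
    }
    return 0;
}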
Problem 1 looks right as an approach; you just need to finish it by shifting the result right so the masked bit ends up in the lowest position.
To implement the mask, you need to know what integer has only the 5th bit set. Counting bit positions from 1 at the least significant end, that number is 2^4 = 16, so you can just AND z with 16 and shift the result right by 4.
Problem 2:
int answer = 0;
while (z != 0) {  // stop when there are no more 1 bits in z
    // mask off the lowest bit of z and add it into answer:
    // if z ends with a 0, nothing is added, otherwise 1 is added
    answer += (z & 1);
    // shift z right by 1 to bring the next higher bit into position
    z >>= 1;
}
return answer;
To find out the value of the fifth bit, you don't care about the bottom bits so you can get rid of them:
unsigned int answer = z >> 4;
The fifth bit becomes the bottom bit, so you can strip it off with a bitwise-AND:
answer = answer & 1;
To find the number of 1 bits in a number you can apply stakSmashr's solution. If you need to count the bits in a lot of integers, you can optimise this further: precompute the number of bits in every possible 8-bit value and store the counts in a table. The table has only 256 entries, so it won't use much memory. You can then loop over your data one byte at a time and look the answer up in the table; that lookup is quicker than looping over each bit again.
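A minimal C sketch of that table idea (the names and the table-filling recurrence are mine, not from the answer above):

#include <stdio.h>

static unsigned char bit_count_table[256];

/* Fill the 256-entry table once, using bits(i) = bits(i/2) + (i & 1). */
static void init_table(void)
{
    for (int i = 1; i < 256; i++)
        bit_count_table[i] = bit_count_table[i >> 1] + (i & 1);
}

/* Count the 1 bits of a 32-bit value one byte at a time. */
static int popcount32(unsigned int z)
{
    return bit_count_table[z & 0xFF]
         + bit_count_table[(z >> 8) & 0xFF]
         + bit_count_table[(z >> 16) & 0xFF]
         + bit_count_table[(z >> 24) & 0xFF];
}

int main(void)
{
    init_table();
    printf("%d\n", popcount32(0xF0F01234u));  /* prints 13 */
    return 0;
}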

Bitwise modulo with numbers bigger than 2³²

I have to calculate the remainder of two big numbers in C. One is 938 or 256 bits in size and the other is 85 bits. Neither is a 2^n value!
My basic idea is to store every bit as one element of a short array and calculate the remainder with basic bit operations. But I have no good idea how to do this, so I hope somebody here can help me.
For those who are interested: I'm programming an ETCS encoder according to UNISIG-SUBSET 036, http://www.era.europa.eu/Document-Register/Documents/Set-2-Index009-SUBSET-036%20v300.pdf, pages 36-39, and I'm trying to calculate the check bits.
Assuming you are working in base 10 and you want to calculate X mod Y, you can do something like this:
1. mod = 0;
2. mod = ((mod * 10) + mostSignificantDigit) % Y;
3. remove the mostSignificantDigit from your number and go back to step 2.
In other words, say you have the number in digit array A:
mod = 0;
for (int index = 0; index < A.size(); ++index)
    mod = ((mod * 10) + A[index]) % Y;
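A minimal C sketch of the same scheme in base 256 instead of base 10: walk the big number one byte at a time, most significant byte first. This sketch assumes the divisor fits comfortably in 64 bits (at most about 56 bits, so that mod * 256 + byte cannot overflow); for the 85-bit divisor in the question, the multiply-add in each step itself needs multi-word arithmetic, but the structure of the loop stays the same. The names are mine.

#include <stdint.h>
#include <stdio.h>

uint64_t mod_big_endian_bytes(const uint8_t *digits, size_t len, uint64_t divisor)
{
    uint64_t mod = 0;
    for (size_t i = 0; i < len; i++)
        mod = (mod * 256u + digits[i]) % divisor;  /* same step as the decimal version */
    return mod;
}

int main(void)
{
    /* 0x0123456789ABCDEF as a big-endian byte array */
    uint8_t num[] = {0x01, 0x23, 0x45, 0x67, 0x89, 0xAB, 0xCD, 0xEF};
    printf("%llu\n", (unsigned long long)mod_big_endian_bytes(num, sizeof num, 1000003u));
    return 0;
}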

How to avoid branching in C for this operation

Is there a way to remove the following if-statement to check if the value is below 0?
int a = 100;
int b = 200;
int c = a - b;
if (c < 0)
{
c += 3600;
}
The value of c should lie between 0 and 3600. Both a and b are signed. The value of a should also lie between 0 and 3600 (yes, it is a counting value in 0.1 degrees). The value gets reset to 3600 by an interrupt, but if that interrupt comes too late it underflows, which is not a problem, but the software should still be able to handle it. Which it does.
We do this if (c < 0) check at quite some places where we are calculating positions. (Calculating a new position etc.)
I am used to Python's modulo operator, which takes the sign of the divisor, whereas our compiler (C89) uses the sign of the dividend.
Is there some way to do this calculation differently?
example results:
a - b = c
100 - 200 = 3500
200 - 100 = 100
Good question! How about this?
c += 3600 * (c < 0);
This is one way we preserve branch predictor slots.
What about this (assuming 32-bit ints):
c += 3600 & (c >> 31);
c >> 31 sets all bits to the original MSB, which is 1 for negative numbers and 0 for the others in two's complement.
Right-shifting a negative number is formally implementation-defined according to the C standard documents; however, it is almost always implemented with MSB copying (common processors can do it in a single instruction).
This will surely result in no branches, unlike (c < 0), which might be implemented with a branch in some cases.
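As a quick sanity check, here is a small sketch (assuming a 32-bit two's complement int with an arithmetic right shift; the wrap_* names are mine) comparing the mask trick against the original branching code:

#include <assert.h>

int wrap_branch(int c) { if (c < 0) c += 3600; return c; }
int wrap_mask(int c)   { return c + (3600 & (c >> 31)); }

int main(void)
{
    for (int a = 0; a <= 3600; a += 100)
        for (int b = 0; b <= 3600; b += 100)
            assert(wrap_branch(a - b) == wrap_mask(a - b));
    return 0;
}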
Why are you worried about the branch? [Reason explained in comments to the question.]
The alternative is something like:
((a - b) + 3600) % 3600
This assumes a and b are in the range 0..3600 already; if they're not under control, the more general solution is the one Drew McGowen suggests:
((a - b) % 3600 + 3600) % 3600
The branch miss has to be very expensive to make that much calculation worthwhile.
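For concreteness, the two modulo forms as C helpers (the function names are mine); the first assumes a and b are already in 0..3600, the second tolerates arbitrary int differences:

int wrap_simple(int a, int b)  { return ((a - b) + 3600) % 3600; }
int wrap_general(int a, int b) { return ((a - b) % 3600 + 3600) % 3600; }

/* wrap_simple(100, 200) == 3500 and wrap_simple(200, 100) == 100,
   matching the example results in the question. */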
#skjaidev showed how to do it without branching. Here's how to automatically avoid the multiplication as well when ints are two's complement:
#if ((3600 & -0) == 0) && ((3600 & -1) == 3600)
c += 3600 & -(c < 0);
#else
c += 3600 * (c < 0);
#endif
What you want to do is modular arithmetic. Your 2's complement machine already does this with integer math. So, by mapping your values into 2's complement arithmetic, you can get the modulo operation for free.
The trick is to represent your angle as a fraction of 360 degrees between 0 and 1-epsilon. Of course, your constant angles would then have to be represented similarly, but that shouldn't be hard; it's just a bit of math we can hide in a conversion function (er, macro).
The value in this idea is that if you add or subtract angles, you'll get a value whose fraction part you want and whose integer part you want to throw away. If we represent the fraction as a 32 bit fixed point number with the binary point at 2^32 (i.e., to the left of what is normally considered to be the sign bit), any overflow of the fraction simply falls off the top of the 32 bit value for free. So you do all integer math, and "overflow" removal happens for free.
So I'd rewrite your code (preserving the idea of degrees times 10):
#include <stdint.h>
#include <stdio.h>
typedef uint32_t angle; // angle*3600/(2^32) represents degrees (times 10)
#define angle_scale_factor 1193046.47111111 // = 2^32/3600
#define make_angle(degrees) ((angle)(((degrees) % 3600) * angle_scale_factor))
#define make_degrees(ang) ((ang) / (angle_scale_factor * 10)) // produces a float number
...
angle a = make_angle(100); // compiler presumably does compile-time math to compute 119304647
angle b = make_angle(200); // = 238609294
angle c = a - b; // unsigned subtract wraps modulo 2^32, which computes 4175662649
#if 0 // no need for this at all; other solutions execute real code to do something here
if (c < 0) // this can't happen
{ c += 3600; } // this is the wrong representation for our variant
#endif
// speed doesn't matter here, we're doing output:
printf("final angle %4.2f\n", make_degrees(c)); // should print 350.00
I have not compiled and run this code.
Changes to make this degrees times 100 or times 1 are pretty easy; modify the angle_scale_factor. If you have a 16 bit machine, switching to 16 bits is similarly easy; if you have 32 bits, and you still want to only do 16 bit math, you will need to mask the value to be printed to 16 bits.
This solution has one other nice property: you've documented which variables are angles (and have funny representations). OP's original code just called them ints, but that's not what they represent; a future maintainer will be surprised by the original code, especially if he finds the subtraction isolated from the variables.

how to calculate modulus division

I am stuck in a program while finding modulus of division.
Say for example I have:
((a*b*c)/(d*e)) % n
Now, I cannot simply calculate the expression and then modulo it to n as the multiplication and division are going in a loop and the value is large enough to not fit even in long long.
As clarified in comments, n can be considered prime.
I found that, for multiplication, I can easily calculate it as:
((a%n*b%n)%n*c%n)%n
but couldn't understand how to calculate the division part then.
The problem I am facing is say for a simple example:
((7*3*5)/(5*3)) % 11
The value of above expression would be 7
but if I calculate the multiplication, modulo, it would be like:
((7%11)*(3%11))%11 = 10
((10%11)*(5%11))%11 = 6
now I am left with 6/15 and have no way to generate the correct answer.
Could someone help me? Please help me understand the logic using the above example.
Since 11 is prime, Z_11 is a field. Since 15 % 11 is 4, 1/15 equals 3 (since 3 * 4 % 11 is 1). Therefore, 6/15 is 6 * 3, which is 7 mod 11.
In your comments below the question, you clarify that the modulus will always be a prime.
To efficiently generate a table of multiplicative inverses, you can raise 2 to successive powers and see which values it generates. Note that in a field Z_p, where p is an odd prime, 2^(p-1) = 1. (This works for Z_11 because 2 happens to generate every nonzero element; for a general prime you would use a primitive root.) So, for Z_11:
2^1 = 2
2^2 = 4
2^3 = 8
2^4 = 5
2^5 = 10
2^6 = 9
2^7 = 7
2^8 = 3
2^9 = 6
So the multiplicative inverse of 5 (which is 2^4) is 2^6 (which is 9).
So, you can generate the above table like this:
power_of_2[0] = 1;
for (int i = 1; i < n; ++i) {
    power_of_2[i] = (2 * power_of_2[i-1]) % n;
}
And the multiplicative inverse table can be computed like this:
mult_inverse[1] = 1;
for (int i = 1; i < n; ++i) {
    mult_inverse[power_of_2[i]] = power_of_2[n-1-i];
}
In your example, since 15 = 4 mod 11, you actually end up with having to evaluate (6/4) mod 11.
In order to find an exact solution to this, rearrange it as 6 = ((x * 4) mod 11), which makes it clearer how the modulo division works.
If nothing else, if the modulus is always small, you can iterate from 0 to modulus-1 to get the solution.
Note that when the modulus is not prime, there may be multiple solutions to the reduced problem. For instance, there are two solutions to 4 = ( ( x * 2) mod 8): 2 and 6. This will happen for a reduced problem of form:
a = ( (x * b) mod c)
whenever b and c are NOT relatively prime (i.e. whenever they DO share a common divisor greater than 1).
Similarly, when b and c are NOT relatively prime, there may be no solution to the reduced problem. For instance, 3 = ((x * 2) mod 8) has no solution. This happens whenever the greatest common divisor of b and c does not also divide a.
These latter two circumstances are consequences of the integers from 0 to n-1 not forming a group under multiplication (or equivalently, a field under + and *) when n is not prime, but rather forming simply the less useful structure of a ring.
I think the way the question is asked, it should be assumed that the numerator is divisible by the denominator. In that case the finite-field solution for prime n, and the speculation about possible extensions and caveats for non-prime n, are basically overkill. If you have all the numerator terms and denominator terms stored in arrays, you can iterate over pairs of (numerator term, denominator term), compute their greatest common divisor (gcd), and divide both terms by that gcd. (Finding the gcd is a classical problem and you can easily find a simple solution online.) In the worst case you will have to iterate over all possible pairs, but if the denominator really does divide the numerator, you will eventually be left with reduced numerator terms and all denominator terms equal to 1. Then you're ready to apply the multiplication (avoiding overflow) the way you described.
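Here is a rough C sketch of that cancellation idea applied to the question's example (the array names and fixed sizes are illustrative, and the code assumes the full numerator really is divisible by the full denominator):

#include <stdio.h>

static unsigned long long gcd(unsigned long long a, unsigned long long b)
{
    while (b) { unsigned long long t = a % b; a = b; b = t; }
    return a;
}

int main(void)
{
    unsigned long long num[] = {7, 3, 5};  /* numerator terms   */
    unsigned long long den[] = {5, 3};     /* denominator terms */
    unsigned long long n = 11;

    /* Cancel common factors between every (numerator, denominator) pair. */
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 2; j++) {
            unsigned long long g = gcd(num[i], den[j]);
            num[i] /= g;
            den[j] /= g;
        }

    /* If the division was exact, every den[j] is now 1; multiply mod n. */
    unsigned long long result = 1;
    for (int i = 0; i < 3; i++)
        result = result * num[i] % n;
    printf("%llu\n", result);              /* prints 7 for this example */
    return 0;
}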
As n is prime, dividing by an integer b is simply multiplying by b's inverse. That is:
(a / b) mod n = (a * inv(b)) mod n
where
inv(b) = (b ^ (n - 2)) mod n
Calculating inv(b) can be done in O(log(n)) time using the Exponentiation by squaring algorithm. Here is the code:
int inv(int b, int n)
{
    int r = 1, m = n - 2;
    while (m)
    {
        if (m & 1) r = (long long)r * b % n;
        b = (long long)b * b % n;
        m >>= 1;
    }
    return r;
}
Why does it work? According to Fermat's little theorem, if n is prime, then b ^ (n - 1) mod n = 1 for any positive integer b not divisible by n. Therefore we have inv(b) * b mod n = 1.
Another solution for finding inv(b) is the Extended Euclidean algorithm, which needs a bit more code to implement.
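For completeness, here is a hedged sketch of that extended Euclidean alternative (the function name inv_euclid is mine); unlike the Fermat version, it only requires gcd(b, n) == 1 rather than a prime n:

#include <stdio.h>

/* Returns x with (b * x) % n == 1, assuming gcd(b, n) == 1.
   Iterative extended Euclid; only the coefficient of b is tracked. */
int inv_euclid(int b, int n)
{
    int r0 = n, r1 = b % n;
    int t0 = 0, t1 = 1;
    while (r1 != 0) {
        int q = r0 / r1;
        int tmp;
        tmp = r0 - q * r1; r0 = r1; r1 = tmp;
        tmp = t0 - q * t1; t0 = t1; t1 = tmp;
    }
    return t0 < 0 ? t0 + n : t0;
}

int main(void)
{
    /* The worked example from the question: 6/15 mod 11. */
    printf("%d\n", 6 * inv_euclid(15, 11) % 11);  /* prints 7 */
    return 0;
}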
I think you can distribute the division like
z = d*e/3
(a/z)*(b/z)*(c/z) % n
That leaves only the integer division problem.
I think the problem is that you picked an example that was too simple. In that case the answer was 7, but what if a*b*c were not evenly divisible by d*e? You should probably look up how to do division with modulo first; then it should be clear to you :)
Instead of dividing, think in terms of multiplicative inverses. For each number in a mod-n system, there ought to be an inverse, if certain conditions are met. For d and e, find those inverses, and then it's all just multiplying. Finding the inverses is not done by dividing! There's plenty of info out there...

Most optimized way to calculate modulus in C

I have to minimize the cost of calculating a modulus in C.
say I have a number x and n is the number which will divide x
when n == 65536 (which happens to be 2^16):
mod = x % n (11 assembly instructions as produced by GCC)
or
mod = x & 0xffff which is equal to mod = x & 65535 (4 assembly instructions)
so, GCC doesn't optimize it to this extent.
In my case n is not a power of two but is the largest prime less than 2^16, which is 65521.
As I showed for n == 2^16, bitwise operations can optimize the computation. What bitwise operations can I perform when n == 65521 to calculate the modulus?
First, make sure you're looking at optimized code before drawing conclusions about what GCC is producing (and make sure this particular expression really needs to be optimized). Finally, don't count instructions to draw your conclusions; an 11-instruction sequence might be expected to perform better than a shorter sequence that includes a div instruction.
Also, you can't conclude that because x mod 65536 can be calculated with a simple bit mask that any mod operation can be implemented that way. Consider how easy dividing by 10 in decimal is as opposed to dividing by an arbitrary number.
With all that out of the way, you may be able to use some of the 'magic number' techniques from Henry Warren's Hacker's Delight book:
Archive of http://www.hackersdelight.org/
Archive of http://www.hackersdelight.org/magic.htm
There was an added chapter on the website that contained "two methods of computing the remainder of division without computing the quotient!", which you may find of some use. The 1st technique applies only to a limited set of divisors, so it won't work for your particular instance. I haven't actually read the online chapter, so I don't know exactly how applicable the other technique might be for you.
x mod 65536 is only equivalent to x & 0xffff if x is unsigned - for signed x, it gives the wrong result for negative numbers. For unsigned x, gcc does indeed optimise x % 65536 to a bitwise and with 65535 (even on -O0, in my tests).
Because 65521 is not a power of 2, x mod 65521 can't be calculated so simply. gcc 4.3.2 on -O3 calculates it using x - (x / 65521) * 65521; the integer division by a constant is done using integer multiplication by a related constant.
If you don't have to fully reduce your integers modulo 65521, then you can use the fact that 65521 is close to 2**16. I.e. if x is an unsigned int you want to reduce, then you can do the following:
unsigned int low = x & 0xffff;
unsigned int hi = (x >> 16);
x = low + 15 * hi;
This uses that 2**16 % 65521 == 15. Note that this is not a full reduction. I.e. starting with a 32-bit input, you only are guaranteed that the result is at most 20 bits and that it is of course congruent to the input modulo 65521.
This trick can be used in applications where there are many operations that have to be reduced modulo the same constant, and where intermediary results do not have to be the smallest element in its residue class.
E.g. one application is the implementation of Adler-32, which uses the modulus 65521. This hash function does a lot of operations modulo 65521. To implement it efficiently one would only do modular reductions after a carefully computed number of additions. A reduction shown as above is enough and only the computation of the hash will need a full modulo operation.
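As a small illustration of how such a folding step might be used (the function and variable names are mine), fold twice and then finish with one ordinary modulo:

#include <stdint.h>
#include <stdio.h>

/* One folding step: the result is congruent to x mod 65521 and fits in
   about 20 bits, because 2**16 % 65521 == 15. */
static uint32_t fold65521(uint32_t x)
{
    return (x & 0xFFFFu) + 15u * (x >> 16);
}

int main(void)
{
    uint32_t x = 0xDEADBEEFu;
    uint32_t folded = fold65521(fold65521(x));  /* two folds: now a small value   */
    uint32_t full   = folded % 65521u;          /* one cheap final full reduction */
    printf("%u %u\n", (unsigned)full, (unsigned)(x % 65521u));  /* the two match  */
    return 0;
}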
The bitwise operation only works well if the divisor is of the form 2^n. In the general case, there is no such bit-wise operation.
If the constant with which you want to take the modulo is known at compile time and you have a decent compiler (e.g. gcc), it's usually best to let the compiler work its magic. Just declare the modulus const.
If you don't know the constant at compile time, but you are going to take, say, a billion modulos with the same number, then use this: http://libdivide.com/
When dealing with powers of 2, the following approach can be considered (mostly C flavored):
.
.
#define THE_DIVISOR 0x8U /* The modulo value (POWER OF 2). */
.
.
uint8 CheckIfModulo(const sint32 TheDividend)
{
    uint8 RetVal = 1; /* TheDividend is NOT a multiple of THE_DIVISOR. */
    if (0 == (TheDividend & (THE_DIVISOR - 1)))
    {
        /* code if modulo is satisfied */
        RetVal = 0; /* TheDividend IS a multiple of THE_DIVISOR. */
    }
    else
    {
        /* code if modulo is NOT satisfied */
    }
    return RetVal;
}
If x is an increasing index, and the increment i is known to be less than n (e.g. when iterating over a circular array of length n), avoid the modulus completely.
A loop going
x += i; if (x >= n) x -= n;
is way faster than
x = (x + i) % n;
which you unfortunately find in many text books...
If you really need an expression (e.g. because you are using it in a for statement), you can use the ugly but efficient
x = x + (x+i < n ? i : i-n)
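For illustration, a tiny usage sketch of the wrap-by-subtraction idiom on a circular buffer (the buffer size, step, and names are mine):

#include <stdio.h>

#define N 360

int main(void)
{
    int data[N];
    for (int i = 0; i < N; i++)
        data[i] = i;

    int x = 0;
    int step = 7;                 /* must be less than N for this to work */
    long sum = 0;
    for (int k = 0; k < 1000; k++) {
        sum += data[x];
        x += step;
        if (x >= N)               /* cheaper than x = (x + step) % N */
            x -= N;
    }
    printf("%ld\n", sum);
    return 0;
}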
idiv — Integer Division
The idiv instruction divides the contents of the 64 bit integer EDX:EAX (constructed by viewing EDX as the most significant four bytes and EAX as the least significant four bytes) by the specified operand value. The quotient result of the division is stored into EAX, while the remainder is placed in EDX.
source: http://www.cs.virginia.edu/~evans/cs216/guides/x86.html
