I have integer values ranging from 32 to 8191 which I want to map to a roughly logarithmic scale. If I were using base 2, I could just count the leading zero bits and map them into 8 slots, but this is too coarse-grained; I need 32 slots (and more would be better, but I need them to map to bits in a 32-bit value), which comes out to a base of roughly 1.18-1.20 for the logarithm. Anyone have some tricks for computing this value, or a reasonable approximation, very fast?
My intuition is to break the range down into 2 or 3 subranges with conditionals and use a small lookup table for each, but I wonder if there's some trick I could do with count-leading-zeros and then refine the result, especially since the results don't have to be exact, just roughly logarithmic.
Why not use the next two bits in addition to the leading bit? You can first partition the numbers into 8 bins by the position of the leading bit, then use the next two bits to further divide each bin into four. In this case, you can use a simple shift operation, which is very fast.
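For concreteness, here's a minimal sketch of that idea in C, assuming a GCC-style __builtin_clz and inputs confined to 32-8191 (the function name is mine):

#include <stdint.h>

/* 8 bins from the position of the leading 1-bit, each split into 4 by the
   next two bits: 32 roughly logarithmic slots for v in [32, 8191]. */
static unsigned log_slot(uint32_t v)
{
    unsigned nbits = 32 - __builtin_clz(v);   /* bit count of v: 6..13 here */
    unsigned bin   = nbits - 6;               /* 0..7: which power-of-two range */
    unsigned sub   = (v >> (nbits - 3)) & 3;  /* the two bits below the leading bit */
    return bin * 4 + sub;                     /* 0..31 */
}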
Edit: If you think using the logarithm is a viable solution, here is the general algorithm:
Let a be the base of the logarithm, and let the range be (b_min, b_max) = (32, 8191). You can find the base using the formula:
log(b_max / b_min) / log(a) = 32 (the number of bins)
which gives a ≈ 1.1892026. If you use this a as the base of the logarithm, you can map the range (b_min, b_max) into (log_a(b_min), log_a(b_max)) = (20.0004, 52.0004).
Now you only need to subtract 20.0004 from each element to get the range (0, 32). This guarantees all elements are logarithmically uniform. Done.
Note: an element may fall slightly out of range because of numerical error; you should check the exact values yourself.
Note2: log_a(b) = log(b)/log(a)
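A direct floating-point rendering of this recipe might look like the following sketch; the clamp guards against the numerical error mentioned in the note:

#include <math.h>

/* Map v in [32, 8191] to a slot in [0, 31] with base a = 1.1892026;
   log_a(v) - log_a(32) == log(v / 32) / log(a). */
static int log_slot_float(unsigned v)
{
    const double inv_log_a = 1.0 / log(1.1892026);
    int slot = (int)(log(v / 32.0) * inv_log_a);
    return slot > 31 ? 31 : slot;   /* clamp rounding error at the top end */
}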
Table lookup is one option; that table isn't that big. If an 8K table is too big and you have a count-leading-zeros instruction, you can use a table lookup on the top few bits.
nbits = 32 - count_leading_zeros(v) # number of bits in number
highbits = v >> (nbits - 4) # top 4 bits. Top bit is always a 1.
log_base_2 = nbits + table[highbits & 0x7]
You populate the table with some approximation of log_2:
table[i] = approx(log_2(1 + i/8.0))
If you want to stay in integer arithmetic, multiply the last line by a convenient factor.
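Putting those pieces together, here is a sketch of the scaled integer version (the factor of 4 and the table values, (int)(4 * log_2(1 + i/8)), are my choices; a GCC-style __builtin_clz is assumed):

#include <stdint.h>

static const int frac4[8] = { 0, 0, 1, 1, 2, 2, 3, 3 }; /* (int)(4*log_2(1+i/8)) */

/* Returns a value that grows like 4 * log_2(v) plus a constant offset,
   i.e. 4 slots per octave. Requires v >= 8 so that nbits - 4 >= 0. */
static int log2_x4(uint32_t v)
{
    int nbits = 32 - __builtin_clz(v);
    int high  = (v >> (nbits - 4)) & 0x7;   /* the 3 bits below the leading 1 */
    return nbits * 4 + frac4[high];
}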
Answer I just came up with, based on IEEE 754 floating point:
((union { float v; uint32_t r; }){ x }.r >> 21 & 127) - 16
It maps 32-8192 onto 0-31 roughly logarithmically (same as hwlau's answer).
Improved version (cut out useless bitwise and):
((union { float v; uint32_t r; }){ x }.r >> 21) - 528
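For reference, the same trick written with memcpy instead of a union (also well-defined type punning; the function name is mine):

#include <stdint.h>
#include <string.h>

/* The float's exponent and top two mantissa bits approximate 4*log2(x)
   piecewise-linearly; 528 == (127 + 5) * 4 recenters x == 32 at slot 0. */
static unsigned log_slot_ieee(uint32_t x)
{
    float f = (float)x;
    uint32_t r;
    memcpy(&r, &f, sizeof r);   /* reinterpret the float's bit pattern */
    return (r >> 21) - 528;     /* 32..8191 -> roughly 0..31 */
}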
Related
I am trying to generate a logarithmically spaced array in C.
For example, starting at 100 and ending at 500, with 40 logarithmic spaced points.
Can anyone help me? Are there any logspace() functions available?
With no further constraints, simply divide the linear interval [ln(100)..ln(500)] into as many equidistant subintervals as you need. Then take the exp() of each point.
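There is no logspace() in the C standard library, but a minimal sketch along these lines might look like this (the signature is an assumption):

#include <math.h>

/* Fill out[0..n-1] with n points from lo to hi, equidistant in log space;
   requires n >= 2 and lo, hi > 0. */
static void logspace(double lo, double hi, int n, double *out)
{
    double llo  = log(lo);
    double step = (log(hi) - llo) / (n - 1);
    for (int i = 0; i < n; i++)
        out[i] = exp(llo + i * step);
}

For the example above, logspace(100.0, 500.0, 40, arr) fills arr with 40 points from 100 to 500.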
Arrays always use linear, integer, n+1 stepping, so you have to map the logarithmic scale to the linear index. This can be done either by simply taking log(log_index), or by a table of ranges and a linear search in it. For log(), there might be approximations which suit your needs better and are faster than a full-grown (float) logarithm function.
You might for instance take the number of the uppermost 1-bit in the log-index and use the next n lower bits as range-index:
// all vars are size_t (unsigned at least!)
base_index = get_number_of_uppermost_bit(log_index);
shift = (base_index > 3U) ? (base_index - 3U) : 0;
lin_index = base_index * 8U + ((log_index >> shift) & (8U - 1U));
The values of 8 and 3 (ld(8)) are the number of entries per log-range. Note these entries are linearly spaced within each range (sometimes an acceptable approximation). You can also apply the algorithm to the lower bits as well, which yields an integer log function; but the above is faster and might be sufficient. Alternatively, you can use a lookup table for the lower 3 bits.
A decimal stepping would be more difficult that way and pretty inefficient.
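For completeness, one way the get_number_of_uppermost_bit helper used above might be implemented (the answer leaves it unspecified, so this is an assumption):

#include <stddef.h>

/* Returns the 0-based index of the highest set bit; log_index must be nonzero. */
static size_t get_number_of_uppermost_bit(size_t log_index)
{
    size_t n = 0;
    while (log_index >>= 1)
        n++;
    return n;
}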
Given an N-dimensional vector of small integers, is there any simple way to map it, with one-to-one correspondence, to a large integer number?
Say, we have N=3 vector space. Can we represent a vector X=[(int16)x1,(int16)x2,(int16)x3] using an integer (int48)y? The obvious answer is "Yes, we can". But the question is: "What is the fastest way to do this and its inverse operation?"
Will this new 1-dimensional space possess some very special useful properties?
For the above example you have 3 * 32 = 96 bits of information, so without any a priori knowledge you need 96 bits for the equivalent long integer.
However, if you know that your x1, x2, x3, values will always fit within, say, 16 bits each, then you can pack them all into a 48 bit integer.
In either case the technique is very simple: you just use shift, mask, and bitwise-or operations to pack/unpack the values.
Just to make this concrete, if you have a 3-dimensional vector of 8-bit numbers, like this:
uint8_t vector[3] = { 1, 2, 3 };
then you can join them into a single 24-bit number like so:
uint32_t all = (vector[0] << 16) | (vector[1] << 8) | vector[2];
This number would, if printed using this statement:
printf("the vector was packed into %06x", (unsigned int) all);
produce the output
the vector was packed into 010203
The reverse operation would look like this:
uint8_t v2[3];
v2[0] = (all >> 16) & 0xff;
v2[1] = (all >> 8) & 0xff;
v2[2] = all & 0xff;
Of course this all depends on the size of the individual numbers in the vector and the length of the vector together not exceeding the size of an available integer type; otherwise you can't represent the "packed" vector as a single number.
If you have sets Si, i=1..n of size Ci = |Si|, then the cartesian product set S = S1 x S2 x ... x Sn has size C = C1 * C2 * ... * Cn.
This motivates an obvious way to do the packing one-to-one. If you have elements e1,...,en from each set, each in the range 0 to Ci-1, then you give the element e=(e1,...,en) the value e1+C1*(e2 + C2*(e3 + C3*(...Cn*en...))).
You can do any permutation of this packing if you feel like it, but unless the values are perfectly correlated, the size of the full set must be the product of the sizes of the component sets.
In the particular case of three 32 bit integers, if they can take on any value, you should treat them as one 96 bit integer.
If you particularly want to, you can map small values to small values through any number of means (e.g. filling out spheres with the L1 norm), but you have to specify what properties you want to have.
(For example, one can map (n,m) to (max(n,m)-1)^2 + k where k=n if n<=m and k=n+m if n>m--you can draw this as a picture of filling in a square like so:
1 2 5 | draw along the edge of the square this way
4 3 6 v
8 9 7
if you start counting from 1 and only worry about positive values; for integers, you can spiral around the origin.)
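A small C sketch of that square-filling map, for positive n and m (the function name is mine):

/* (n, m) -> (max(n, m) - 1)^2 + k, with k = n if n <= m, else n + m. */
static unsigned long pack_pair(unsigned n, unsigned m)
{
    unsigned long mx = (n > m) ? n : m;
    unsigned long k  = (n <= m) ? n : (unsigned long)n + m;
    return (mx - 1) * (mx - 1) + k;
}

For instance, pack_pair(3, 2) == 9, matching the diagram.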
I'm writing this without having time to check details, but I suspect the best way is to represent your long integer via modular arithmetic, using k different integers which are mutually prime. The original integer can then be reconstructed using the Chinese remainder theorem. Sorry this is a bit sketchy, but I hope it helps.
To expand on Rex Kerr's generalised form, in C you can pack the numbers like so:
X = e[n];
X *= MAX_E[n-1] + 1;
X += e[n-1];
/* ... */
X *= MAX_E[0] + 1;
X += e[0];
And unpack them with:
e[0] = X % (MAX_E[0] + 1);
X /= (MAX_E[0] + 1);
e[1] = X % (MAX_E[1] + 1);
X /= (MAX_E[1] + 1);
/* ... */
e[n] = X;
(Where MAX_E[n] is the greatest value that e[n] can have). Note that these maximum values are likely to be constants, and may be the same for every e, which will simplify things a little.
The shifting / masking implementations given in the other answers are a generalisation of this, for cases where the MAX_E + 1 values are powers of 2 (and thus the multiplication and division can be done with a shift, the addition with a bitwise-or and the modulus with a bitwise-and).
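As a self-contained illustration for a fixed number of fields (the MAX_E values here are made up for the example):

#include <stdint.h>

static const uint64_t MAX_E[3] = { 999, 99, 9999 };   /* illustrative maxima */

/* Mixed-radix packing: e[0] becomes the least significant "digit". */
static uint64_t pack3(const uint64_t e[3])
{
    uint64_t X = e[2];
    X = X * (MAX_E[1] + 1) + e[1];
    X = X * (MAX_E[0] + 1) + e[0];
    return X;
}

static void unpack3(uint64_t X, uint64_t e[3])
{
    e[0] = X % (MAX_E[0] + 1);  X /= MAX_E[0] + 1;
    e[1] = X % (MAX_E[1] + 1);  X /= MAX_E[1] + 1;
    e[2] = X;
}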
There are some totally non-portable ways to make this really fast using packed unions and direct memory accesses. Whether you really need this kind of speed is questionable, though; methods using shifts and masks should be fast enough for most purposes. If not, consider using specialized processors like GPUs, for which vector support is optimized (parallel).
This naive storage does not possess any useful property that I can foresee, except that you can perform some computations (add, sub, logical bitwise operators) on the three coordinates at once, as long as you use positive integers only and you don't overflow for add and sub.
You'd better be quite sure you won't overflow (or won't go negative for sub), or the vector will become garbage.
#include <stdint.h> // for uint8_t
long x;
uint8_t *p = (uint8_t *)&x;
or
union X {
    long L;
    uint8_t A[sizeof(long)/sizeof(uint8_t)];
};
works if you don't care about endianness. In my experience compilers generate better code with the union because it doesn't set off their "you took the address of this, so I must keep it in RAM" rules as quickly. Those rules will get set off if you try to index the array with something the compiler can't optimize away.
If you do care about the endian then you need to mask and shift.
I think what you want can be solved using multi-dimensional space filling curves. The link gives a lot of references on this, which in turn give different methods and insights. Here's a specific example of an invertible mapping. It works for any dimension N.
As for useful properties, these mappings are related to Gray codes.
Hard to say whether this was what you were looking for, or whether the "pack 3 16-bit ints into a 48-bit int" does the trick for you.
I have to minimize the cost of calculating the modulus in C.
Say I have a number x, and n is the number which will divide x.
when n == 65536 (which happens to be 2^16):
mod = x % n (11 assembly instructions as produced by GCC)
or
mod = x & 0xffff which is equal to mod = x & 65535 (4 assembly instructions)
So GCC doesn't optimize it to this extent.
In my case n is not a power of 2; it is the largest prime less than 2^16, which is 65521.
As I showed for n == 2^16, bitwise operations can optimize the computation. What bitwise operations can I perform when n == 65521 to calculate the modulus?
First, make sure you're looking at optimized code before drawing conclusions about what GCC is producing (and make sure this particular expression really needs to be optimized). Finally, don't count instructions to draw your conclusions; it may be that an 11-instruction sequence performs better than a shorter sequence that includes a div instruction.
Also, you can't conclude that because x mod 65536 can be calculated with a simple bit mask, any mod operation can be implemented that way. Consider how easy dividing by 10 in decimal is, as opposed to dividing by an arbitrary number.
With all that out of the way, you may be able to use some of the 'magic number' techniques from Henry Warren's Hacker's Delight book:
Archive of http://www.hackersdelight.org/
Archive of http://www.hackersdelight.org/magic.htm
There was an added chapter on the website that contained "two methods of computing the remainder of division without computing the quotient!", which you may find of some use. The first technique applies only to a limited set of divisors, so it won't work for your particular instance. I haven't actually read the online chapter, so I don't know exactly how applicable the other technique might be for you.
x mod 65536 is only equivalent to x & 0xffff if x is unsigned - for signed x, it gives the wrong result for negative numbers. For unsigned x, gcc does indeed optimise x % 65536 to a bitwise and with 65535 (even on -O0, in my tests).
Because 65521 is not a power of 2, x mod 65521 can't be calculated so simply. gcc 4.3.2 on -O3 calculates it using x - (x / 65521) * 65521; the integer division by a constant is done using integer multiplication by a related constant.
If you don't have to fully reduce your integers modulo 65521, then you can use the fact that 65521 is close to 2**16. I.e., if x is an unsigned int you want to reduce, then you can do the following:
unsigned int low = x & 0xffff;
unsigned int hi = (x >> 16);
x = low + 15 * hi;
This uses the fact that 2**16 % 65521 == 15. Note that this is not a full reduction: starting with a 32-bit input, you are only guaranteed that the result is at most 20 bits and that it is congruent to the input modulo 65521.
This trick can be used in applications where there are many operations that have to be reduced modulo the same constant, and where intermediate results do not have to be the smallest element in their residue class.
E.g. one application is the implementation of Adler-32, which uses the modulus 65521. This hash function does a lot of operations modulo 65521. To implement it efficiently, one only does modular reductions after a carefully computed number of additions. A reduction as shown above is enough, and only the final computation of the hash needs a full modulo operation.
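As a concrete sketch, a full reduction can be built from two such folds plus one conditional subtract (the function name is mine; the bounds are worked out in the comments):

#include <stdint.h>

/* x mod 65521 without a division, using 2^16 % 65521 == 15. */
static uint32_t mod65521(uint32_t x)
{
    x = (x & 0xffff) + 15 * (x >> 16);   /* now x < 2^20 + 2^16 */
    x = (x & 0xffff) + 15 * (x >> 16);   /* now x <= 65535 + 15*15 */
    if (x >= 65521)
        x -= 65521;                      /* one subtract completes the reduction */
    return x;
}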
The bitwise operation only works well if the divisor is of the form 2^n. In the general case, there is no such bit-wise operation.
If the constant with which you want to take the modulo is known at compile time and you have a decent compiler (e.g. gcc), it's usually best to let the compiler work its magic. Just declare the modulo value const.
If you don't know the constant at compile time, but you are going to take, say, a billion modulos with the same number, then use this: http://libdivide.com/
When dealing with powers of 2, the following approach (mostly C-flavored) can be considered:
.
.
#define THE_DIVISOR 0x8U /* The modulo value (POWER OF 2). */
.
.
uint8_t CheckIfModulo(const int32_t TheDividend)
{
    uint8_t RetVal = 1; /* assume TheDividend is NOT a multiple of THE_DIVISOR */
    if (0 == (TheDividend & (THE_DIVISOR - 1)))
    {
        /* code if modulo is satisfied */
        RetVal = 0; /* TheDividend IS a multiple of THE_DIVISOR */
    }
    else
    {
        /* code if modulo is NOT satisfied */
    }
    return RetVal;
}
If x is an increasing index, and the increment i is known to be less than n (e.g. when iterating over a circular array of length n), avoid the modulus completely.
A loop going
x += i; if (x >= n) x -= n;
is way faster than
x = (x + i) % n;
which you unfortunately find in many textbooks...
If you really need an expression (e.g. because you are using it in a for statement), you can use the ugly but efficient
x = x + (x+i < n ? i : i-n)
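A tiny runnable demonstration of the compare-and-subtract idiom (the buffer contents and sizes are arbitrary):

#include <stdio.h>

int main(void)
{
    int buf[5] = { 10, 20, 30, 40, 50 };
    size_t n = 5, i = 3, x = 0;

    for (int step = 0; step < 10; step++) {
        printf("%d ", buf[x]);
        x += i;
        if (x >= n)   /* cheaper than x = (x + i) % n, valid since i < n */
            x -= n;
    }
    printf("\n");
    return 0;
}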
idiv — Integer Division
The idiv instruction divides the contents of the 64 bit integer EDX:EAX (constructed by viewing EDX as the most significant four bytes and EAX as the least significant four bytes) by the specified operand value. The quotient result of the division is stored into EAX, while the remainder is placed in EDX.
source: http://www.cs.virginia.edu/~evans/cs216/guides/x86.html
#define ROUND_UP(N, S) ((((N) + (S) - 1) / (S)) * (S))
With the above macro, could someone please help me understand the "(S) - 1" part? Why is that there?
and also macros like:
#define PAGE_ROUND_DOWN(x) (((ULONG_PTR)(x)) & (~(PAGE_SIZE-1)))
#define PAGE_ROUND_UP(x) ( (((ULONG_PTR)(x)) + PAGE_SIZE-1) & (~(PAGE_SIZE-1)) )
I know the "~(PAGE_SIZE-1)" part will zero out the last five bits, but other than that I'm clueless, especially about the role the '&' operator plays.
Thanks,
The ROUND_UP macro relies on integer division to get the job done. It will only work if both parameters are integers. I'm assuming that N is the number to be rounded and S is the interval on which it should be rounded. That is, ROUND_UP(12, 5) should return 15, since 15 is the first multiple of 5 greater than 12.
Imagine we were rounding down instead of up. In that case, the macro would simply be:
#define ROUND_DOWN(N,S) ((N / S) * S)
ROUND_DOWN(12,5) would return 10, because (12/5) in integer division is 2, and 2*5 is 10. But we're not doing ROUND_DOWN, we're doing ROUND_UP. So before we do the integer division, we want to add as much as we can without losing accuracy. If we added S, it would work in almost every case; ROUND_UP(11,5) would become (((11+5) / 5) * 5), and since 16/5 in integer division is 3, we'd get 15.
The problem comes when we pass a number that's already rounded to the multiple specified. ROUND_UP(10, 5) would return 15, and that's wrong. So instead of adding S, we add S-1. This guarantees that we'll never push something up to the next "bucket" unnecessarily.
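Before moving on to the PAGE_ macros, a quick sanity check of the ROUND_UP behavior just described (a sketch; expected output in the comments):

#include <stdio.h>

#define ROUND_UP(N, S) ((((N) + (S) - 1) / (S)) * (S))

int main(void)
{
    printf("%d\n", ROUND_UP(12, 5));   /* 15 */
    printf("%d\n", ROUND_UP(11, 5));   /* 15 */
    printf("%d\n", ROUND_UP(10, 5));   /* 10: already a multiple, stays put */
    return 0;
}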
The PAGE_ macros have to do with binary math. We'll pretend we're dealing with 8-bit values for simplicity's sake. Let's assume that PAGE_SIZE is 0b00100000. PAGE_SIZE-1 is thus 0b00011111. ~(PAGE_SIZE-1) is then 0b11100000.
A binary & will line up two binary numbers and leave a 1 anywhere that both numbers had a 1. Thus, if x was 0b01100111, the operation would go like this:
0b01100111 (x)
& 0b11100000 (~(PAGE_SIZE-1))
------------
0b01100000
You'll note that the operation really only zeroed-out the last 5 bits. That's all. But that was exactly the operation needed to round down to the nearest interval of PAGE_SIZE. Note that this only worked because PAGE_SIZE was exactly a power of 2. It's a bit like saying that for any arbitrary decimal number, you can round down to the nearest 100 simply by zeroing-out the last two digits. It works perfectly, and is really easy to do, but wouldn't work at all if you were trying to round to the nearest multiple of 76.
PAGE_ROUND_UP does the same thing, but it adds as much as it can to the page before cutting it off. It's kinda like how I can round up to the nearest multiple of 100 by adding 99 to any number and then zeroing-out the last two digits. (We add PAGE_SIZE-1 for the same reason we added S-1 above.)
Good luck with your virtual memory!
Using integer arithmetic, dividing always rounds down. To fix that, you add the largest possible number that won't affect the result if the original number was evenly divisible. For the number S, that largest possible number is S-1.
Rounding to a power of 2 is special, because you can do it with bit operations. A multiple of 2 will aways have a zero in the bottom bit, a multiple of 4 will always have zero in the bottom two bits, etc. The binary representation of a power of 2 is a single bit followed by a bunch of zeros; subtracting 1 will clear that bit, and set all the bits to the right. Inverting that value creates a bit mask with zeros in the places that need to be cleared. The & operator will clear those bits in your value, thus rounding the value down. The same trick of adding (PAGE_SIZE-1) to the original value causes it to round up instead of down.
The page rounding macros assume that PAGE_SIZE is a power of two, such as:
0x0400 -- 1 KiB
0x0800 -- 2 KiB
0x1000 -- 4 KiB
The value of PAGE_SIZE - 1, therefore, is all one bits:
0x03FF
0x07FF
0x0FFF
Therefore, if integers were 16 bits (instead of 32 or 64 - it saves me some typing), then the value of ~(PAGE_SIZE-1) is:
0xFC00
0xF800
0xF000
When the value of x (assuming, implausibly for real life but sufficient for the purposes of exposition, that ULONG_PTR is an unsigned 16-bit integer) is 0xBFAB, then:

PAGE_SIZE    PAGE_ROUND_DOWN(0xBFAB)    PAGE_ROUND_UP(0xBFAB)
0x0400       0xBC00                     0xC000
0x0800       0xB800                     0xC000
0x1000       0xB000                     0xC000
The macros round down and up to the nearest multiple of a page size. The last five bits would only be zeroed out if PAGE_SIZE == 0x20 (or 32).
Based on the current draft standard (C99), this macro is not entirely correct; note that for negative values of N the result will almost certainly be incorrect.
The formula:
#define ROUND_UP(N, S) ((((N) + (S) - 1) / (S)) * (S))
makes use of the fact that integer division rounds down for non-negative integers, and uses the S - 1 part to force it to round up instead.
However, integer division rounds towards zero (C99, Section 6.5.5. Multiplicative operators, item 6). For negative N, the correct way to 'round up' is: 'N / S', nothing more, nothing less.
It gets even more involved if S is also allowed to be a negative value, but let's not even go there... (see: How can I ensure that a division of integers is always rounded up? for a more detailed discussion of various wrong and one or two right solutions)
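One possible sign-aware variant, assuming S > 0 (my sketch, not from the question): since truncation toward zero already rounds negative quotients up, only the non-negative branch needs the S - 1 adjustment.

/* Round N up (toward +infinity) to a multiple of S; assumes S > 0. */
#define ROUND_UP_SIGNED(N, S) \
    ((N) >= 0 ? ((((N) + (S) - 1) / (S)) * (S)) \
              : (((N) / (S)) * (S)))

For example, ROUND_UP_SIGNED(-12, 5) == -10.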
The & makes it so... well, OK, let's take some binary numbers (with 1000b being the page size):
PAGE_ROUND_UP(01101b)=
01101b+1000b-1b & ~(1000b-1b) =
01101b+111b & ~(111b) =
01101b+111b & ...11000b = (the ... means 1's continuing for size of ULONG)
10100b & 11000b=
10000b
So, as you can see (hopefully), this rounds up by adding PAGE_SIZE - 1 to x and then ANDing with ~(PAGE_SIZE - 1), which clears the low bits so the result is a multiple of PAGE_SIZE.
This is what I use:
#define SIGN(x) ((x)<0?-1:1)
#define ROUND(num, place) ((int)(((float)(num) / (float)(place)) + (SIGN(num)*0.5)) * (place))
float A = 456.456789f;
float B = ROUND(A, 50.0f);   /* 450.0 */
float C = ROUND(A, 0.001f);  /* 456.457 */
I am currently writing a fast 32.32 fixed-point math library. I succeeded at making adding, subtraction and multiplication work correctly, but I am quite stuck at division.
A little reminder for those who can't remember: a 32.32 fixed-point number is a number having 32 bits of integer part and 32 bits of fractional part.
The best algorithm I came up with needs 96-bit integer division, which is something compilers usually don't have built-ins for.
Anyway, here it goes:
G = 2^32
notation: x is the 64-bit fixed-point number, x1 is its low 32-bit half and x2 is its high half
G*(a/b) = ((a1 + a2*G) / (b1 + b2*G))*G // decompose this
G*(a/b) = (a1*G) / (b1 + b2*G) + (a2*G*G) / (b1 + b2*G)
As you can see, the (a2*G*G) term is guaranteed to be larger than a regular 64-bit integer can hold. If uint128_t were actually supported by my compiler, I would simply do the following:
((uint128_t)x << 32) / y
Well, it isn't, and I need a solution. Thank you for your help.
You can decompose a larger division into multiple chunks that do division with fewer bits. As another poster already mentioned, the algorithm can be found in TAOCP by Knuth.
However, no need to buy the book!
There is code on the Hacker's Delight website that implements the algorithm in C. It's written to do 64-bit unsigned divisions using 32-bit arithmetic only, so you can't directly cut'n'paste the code. To get from 64 to 128 bits you have to widen all types, masks, and constants by a factor of two, e.g. a short becomes an int, 0xffff becomes 0xffffffff, etc.
After this easy change you should be able to do 128-bit divisions.
The code is mirrored on GitHub, but was originally posted on Hackersdelight.org (original link no longer accessible).
Since your largest values only need 96 bits, one of the 64-bit divisions will always return zero, so you can even simplify the code a bit.
Oh - and before I forget this: The code only works with unsigned values. To convert from signed to unsigned divide you can do something like this (pseudo-code style):
fixpoint Divide (fixpoint a, fixpoint b)
{
    // check if the integers are of different sign:
    fixpoint sign_difference = a ^ b;

    // do unsigned division:
    fixpoint x = unsigned_divide (abs(a), abs(b));

    // if the signs have been different: negate the result.
    if (sign_difference < 0)
    {
        x = -x;
    }
    return x;
}
The website itself is worth checking out as well: http://www.hackersdelight.org/
By the way - nice task that you're working on.. Do you mind telling us for what you need the fixed-point library?
By the way, the ordinary shift-and-subtract algorithm for division would work as well.
If you target x86 you can implement it using MMX or SSE intrinsics. The algorithm relies only on primitive operations, so it could perform quite fast as well.
Better self-adjusting answer:
Forgive the C#-ism of the answer, but the following should work in all cases. There is likely a solution possible that finds the right shifts to use quicker, but I'd have to think much deeper than I can right now. This should be reasonably efficient though:
int upshift = 32;
ulong mask = 0xFFFFFFFF00000000;
ulong mod = x % y;
while ((mod & mask) != 0)
{
    // Current upshift of the remainder would overflow... so adjust
    y >>= 1;
    mask <<= 1;
    upshift--;
    mod = x % y;
}
ulong div = ((x / y) << upshift) + (mod << upshift) / y;
Simple but unsafe answer:
This calculation can cause an overflow in the upshift of the x % y remainder if this remainder has any bits set in the high 32 bits, causing an incorrect answer.
((x / y) << 32) + ((x % y) << 32) / y
The first part uses integer division and gives you the high bits of the answer (shift them back up).
The second part calculates the low bits from the remainder of the high-bit division (the bit that could not be divided any further), shifted up and then divided.
I like Nils' answer, which is probably the best. It's just long division, like we all learned in grade school, except the digits are base 2^32 instead of base 10.
However, you might also consider using Newton's approximation method for division:
x := x * (2 - D * x)
where D is the denominator; x converges to the reciprocal 1/D, and the final quotient is then N * x, with N the numerator.
This just uses multiplies and adds, which you already have, and it converges very quickly to about 1 ULP of precision. On the other hand, you won't be able to achieve the exact 0.5-ULP answer in all cases.
In any case, the tricky bit is detecting and handling the overflows.
Quick -n- dirty.
Do the A/B divide with double-precision floating point.
This gives you C ≈ A/B. It's only approximate because of floating-point precision and the 53-bit mantissa.
Round off C to a representable number in your fixed point system.
Now compute (again with your fixed point) D=A-C*B. This should have significantly lower magnitude than A.
Repeat, now computing D/B with floating point. Again, round the answer to an integer. Add each division result together as you go. You can stop when your remainder is so small that your floating-point divide returns 0 after rounding.
You're still not done. Now you're very close to the answer, but the divisions weren't exact.
To finalize, you'll have to do a binary search. Using the (very good) starting estimate, see if increasing it improves the error; you basically want to bracket the proper answer and keep dividing the range in half with new tests.
Yes, you could do Newton iteration here, but binary search will likely be easier since you need only simple multiplies and adds using your existing 32.32 precision toolkit.
This is not the most efficient method, but it's by far the easiest to code.