optimized byte array shifter - c

I'm sure this has been asked before, but I need to implement a shift operator on a byte array of variable length. I've looked around a bit but have not found any standard way of doing it. I came up with an implementation that works, but I'm not sure how efficient it is. Does anyone know of a standard way to shift an array, or at least have any recommendation on how to boost the performance of my implementation?
char* baLeftShift(const char* array, size_t size, signed int displacement, char* result)
{
    memcpy(result, array, size);
    short shiftBuffer = 0;
    char carryFlag = 0;
    char* byte;

    if (displacement > 0)
    {
        for (; displacement--;)
        {
            carryFlag = 0;   /* zeros shift in from the right */
            for (byte = &result[size - 1]; byte >= result; byte--)
            {
                shiftBuffer = (unsigned char)*byte;
                shiftBuffer <<= 1;
                *byte = (char)(carryFlag | (char)shiftBuffer);
                carryFlag = (char)(shiftBuffer >> 8);   /* bit that fell out of this byte */
            }
        }
    }
    else
    {
        char* end = result + size;
        displacement = -displacement;
        for (; displacement--;)
        {
            carryFlag = 0;   /* zeros shift in from the left */
            for (byte = result; byte < end; byte++)
            {
                shiftBuffer = (unsigned char)*byte;
                shiftBuffer <<= 7;
                *byte = (char)(carryFlag | (char)(shiftBuffer >> 8));
                carryFlag = (char)shiftBuffer;
            }
        }
    }
    return result;
}

If I can just add to what @dwelch is saying, you could try this.
Just move the bytes to their final locations. Then you are left with a shift count such as 3, for example, if each byte still needs to be left-shifted 3 bits into the next higher byte. (This assumes in your mind's eye the bytes are laid out in ascending order from right to left.)
Then rotate each byte to the left by 3. A lookup table might be faster than individually doing an actual rotate. Then, in each byte, the 3 bits to be shifted are now in the right-hand end of the byte.
Now make a mask M, which is (1<<3)-1, which is simply the low order 3 bits turned on.
Now, in order, from high order byte to low order byte, do this:
c[i] ^= M & (c[i] ^ c[i-1])
That will copy bits to c[i] from c[i-1] under the mask M.
For the last byte, just use a 0 in place of c[i-1].
For right shifts, same idea.
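Here is a minimal sketch of that idea for the left shift, assuming c[0] is the least-significant byte and a bit count k with 0 < k < 8 (the whole-byte move is assumed to have already happened; the function name is mine, and the per-byte rotate loop could be swapped for a 256-entry lookup table as suggested):

#include <stddef.h>

void shiftLeftBits(unsigned char *c, size_t n, unsigned k)
{
    unsigned char M = (unsigned char)((1u << k) - 1);   /* low k bits turned on */
    size_t i;

    /* rotate each byte left by k */
    for (i = 0; i < n; i++)
        c[i] = (unsigned char)((c[i] << k) | (c[i] >> (8 - k)));

    /* from the high-order byte down, copy bits from c[i-1] under the mask */
    for (i = n - 1; i > 0; i--)
        c[i] ^= M & (c[i] ^ c[i - 1]);

    c[0] &= (unsigned char)~M;   /* lowest byte: use 0 in place of c[i-1] */
}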

My first suggestion would be to eliminate the for loops around the displacement. You should be able to do the necessary shifts without the for(;displacement--;) loops. For displacements of magnitude greater than 7, things get a little trickier because your inner loop bounds will change and your source offset is no longer 1. i.e. your input buffer offset becomes magnitude / 8 and your shift becomes magnitude % 8.
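As a rough illustration of collapsing the loops (a sketch, not a drop-in replacement: it assumes index 0 is the most-significant byte, as in the question's code, and a non-negative displacement):

#include <stddef.h>

void baLeftShiftOnce(const unsigned char *src, size_t size,
                     unsigned displacement, unsigned char *dst)
{
    size_t byteOff = displacement / 8;    /* whole-byte part of the shift */
    unsigned bitOff = displacement % 8;   /* remaining bit part           */
    size_t i;

    for (i = 0; i < size; i++) {
        unsigned hi = 0, lo = 0;
        if (i + byteOff < size)
            hi = (unsigned)src[i + byteOff] << bitOff;
        if (bitOff != 0 && i + byteOff + 1 < size)
            lo = src[i + byteOff + 1] >> (8 - bitOff);
        dst[i] = (unsigned char)(hi | lo);
    }
}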

It does look inefficient and perhaps this is what Nathan was referring to.
Assuming a char is 8 bits where this code is running, there are two things to do. First move the whole bytes: for example if your input array is 0x00,0x00,0x12,0x34 and you shift left 8 bits then you get 0x00,0x12,0x34,0x00, and there is no reason to do that in a loop 8 times one bit at a time. So start by shifting the whole chars in the array by (displacement>>3) locations and pad the holes created with zeros, some sort of for(ra=(displacement>>3);ra<size;ra++) result[ra-(displacement>>3)] = array[ra]; followed by a loop that zeros the last (displacement>>3) bytes. Then finish off the remaining (displacement&7) bits in a single pass, along the lines of result[ra] = (result[ra]<<(displacement&7)) | ((result[ra+1]>>1)>>(7-(displacement&7))). A good compiler will precompute (displacement>>3), displacement&7, 7-(displacement&7), and a good processor will have enough registers to keep all of those values. You might help the compiler by making separate variables for each of those items, but depending on the compiler and how you are using it, it could make it worse too.
The bottom line though is to time the code: perform a thousand 1-bit shifts, then a thousand 2-bit shifts, etc., and time the whole thing; then try a different algorithm, time it the same way, and see if the optimizations make a difference, for better or worse. If you know ahead of time this code will only ever be used for single-bit or less-than-8-bit shifts, adjust the timing test accordingly.
Your use of the carry flag implies that you are aware that many processors have instructions specifically for chaining arbitrarily long shifts using the standard register length (rotate through carry, basically, one bit at a time), which the C language does not support directly. For chaining single-bit shifts you could consider assembler and likely outperform the C code; at the very least, single-bit shifts are faster than what C code can do. A hybrid would move the bytes first, then if the number of bits to shift (displacement&7) is, say, less than 4, use the assembler, else use a C loop. Again, the timing tests will tell you where the optimizations are.
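A rough timing harness along those lines might look like this (a sketch only: baLeftShift is the function from the question, and the buffer size and iteration count are arbitrary choices for the test):

#include <stddef.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

char* baLeftShift(const char* array, size_t size, signed int displacement, char* result);

int main(void)
{
    char src[64], dst[64];
    int bits, i;
    memset(src, 0xA5, sizeof src);

    for (bits = 1; bits <= 8; bits++) {
        clock_t start = clock();
        for (i = 0; i < 1000; i++)
            baLeftShift(src, sizeof src, bits, dst);
        printf("%d-bit shifts: %.6f s\n", bits,
               (double)(clock() - start) / CLOCKS_PER_SEC);
    }
    return 0;
}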

Related

String to very long sequence of length less than 1 byte

I can't figure out how to solve the following problem. Assume I have a string or an array of integer-type variables (uchar, char, integer, whatever). Each of these data types is 1 byte long or more.
I would like to read from such an array, but read pieces that are smaller than 1 byte, e.g. 3 bits (values 0-7). I tried to do a loop like
cout << ( (tab[index] >> lshift & lmask) | (tab[index+offset] >> rshift & rmask) );
but guessing how to set these variables is out of my reach. What is the methodology to solve such a problem?
Sorry if this question has been asked before, but searching gives no answer.
I am sure this is not the best solution, as there are some inefficiencies in the code that could be eliminated, but I think the idea is workable. I only tested it briefly:
#include <stdio.h>
#include <stdint.h>

void bits(uint8_t * src, int arrayLength, int nBitCount) {
    int idxByte = 0;        // byte index
    int idxBitsShift = 7;   // bit index: start at the high bit

    // walk through the array, computing bit sets
    while (idxByte < arrayLength) {
        // compute a single bit set
        int nValue = 0;
        for (int i = nBitCount - 1; i >= 0; i--) {
            // take the bit at idxBitsShift and place it at position i
            nValue |= ((src[idxByte] >> idxBitsShift) & 1) << i;
            if ((--idxBitsShift) < 0) {
                idxBitsShift = 7;
                if (++idxByte >= arrayLength)
                    break;
            }
        }
        // print it
        printf("%d ", nValue);
    }
}

int main() {
    uint8_t a[] = {0xFF, 0x80, 0x04};
    bits(a, 3, 3);
}
The thing with collecting bits across byte boundaries is a bit of a PITA, so I avoided all that by doing this a bit at a time and then collecting the bits together in nValue. You could have smarter code that does this three (or however many) bits at a time, but as far as I am concerned, with problems like this it is usually best to start with a simple solution (unless you already know how to do a better one) and only then do something more complicated.
In short, the way the data is arranged in memory strictly depends on:
the endianness
the standard used for computation/representation (usually IEEE 754)
the type of the given variable
Now, you can't "disassemble" a data structure this way without destroying its own meaning; simply put, if you subdivide your variable into "bitfields" you are just picturing an undefined value.
In computer science there are data structures and pieces of information organized in blocks, like many hashing algorithms/hash results, but a numerical value is not stored like that, and you are supposed to know what you are doing to prevent any data loss.
Another thing to note is that your definition of "pieces that are smaller than 1 byte" doesn't make much sense; it's also highly intrusive, you are losing abstraction here and you can also do something bad.
Here's the best method I could come up with for setting individual bits of a variable:
Assume we need to set the first four bits of variable1 (a char or other byte-long variable) to 1010:
variable1 &= 0b00001111; //Zero the first four bits
variable1 |= 0b10100000; //Set them to 1010; it's important that any unaffected bits be zero
This could be extended to whatever bits are desired by placing zeros in the first number at the bits which you wish to set (the first four in the example's case), and placing zeros in the second number at the bits which you wish to leave untouched (the last four in the example's case). The second number could also be derived by bit-shifting your desired value by the appropriate number of places (which would have been four in the example's case).
In response to your comment this can be modified as follows to accommodate increased variability:
For this operation we will need two shifts, assuming you wish to be able to modify non-starting and non-ending bits. There are two sets of unaffected bits in this case: the first (from the left) and the second. If you wish to modify four bits while skipping the first bit from the left (1xxxx111 for a single byte, where the x's are the bits to modify), the first shift would be 7 and the second shift would be 5.
variable1 &= ( ( 0b11111111 << shift1 ) | 0b11111111 >> shift2 );
Next the value we wish to assign needs to be shifted and or'ed in.
However, we will need a third shift to account for how many bits we want to set.
This shift (we'll call it shift3) is shift1 minus the number of bits we wish to modify (as previously mentioned 4).
variable1 |= ( value << shift3 );
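Putting the three shifts together, a small hypothetical helper (the name set_field and the demo values are mine, not from the answer) might look like this:

#include <stdio.h>
#include <stdint.h>

/* Clear `len` bits starting `skip` bits from the left of a byte,
   then OR in `value` at that position. */
uint8_t set_field(uint8_t byte, unsigned skip, unsigned len, uint8_t value)
{
    unsigned shift1 = 8 - skip;          /* keeps the `skip` leftmost bits       */
    unsigned shift2 = skip + len;        /* keeps the bits to the right of field */
    uint8_t keep = (uint8_t)((0xFFu << shift1) | (0xFFu >> shift2));
    unsigned shift3 = 8 - skip - len;    /* where the new value lands            */
    return (uint8_t)((byte & keep) | ((value & ((1u << len) - 1)) << shift3));
}

int main(void)
{
    /* Set four bits, skipping the first bit from the left, to 1010. */
    printf("%02X\n", set_field(0xFF, 1, 4, 0xA));   /* prints D7 = 1101 0111 */
    return 0;
}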

Need help understanding bitmaps, bitwise operations, and C

Disclaimer: I am asking these questions in relation to an assignment. The assignment itself calls for implementing a bitmap and doing some operations with that, but that is not what I am asking about. I just want to understand the concepts so I can try the implementation for myself.
I need help understanding bitmaps/bit arrays and bitwise operations. I understand the basics of binary and how left/right shift work, but I don't know exactly how that use is beneficial.
Basically, I need to implement a bitmap to store the results of a prime sieve (of Eratosthenes.) This is a small part of a larger assignment focused on different IPC methods, but to get to that part I need to get the sieve completed first. I've never had to use bitwise operations nor have I ever learned about bitmaps, so I'm kind of on my own to learn this.
From what I can tell, bitmaps are arrays of bits of a certain size, right? By that I mean you could have an 8-bit array or a 32-bit array (in my case, I need to find the primes for a 32-bit unsigned int, so I'd need the 32-bit array). So if this is an array of bits, 32 of them to be specific, then we're basically talking about a string of 32 1s and 0s. How does this translate into a list of primes? I figure that one method would evaluate the binary number and save it to a new array as decimal, so all the decimal primes exist in one array, but that seems like you're using too much data.
Do I have the gist of bitmaps? Or is there something I'm missing? I've tried reading about this around the internet but I can't find a source that makes it clear enough for me...
Suppose you have a list of primes: {3, 5, 7}. You can store these numbers as a character array: char c[] = {3, 5, 7} and this requires 3 bytes.
Instead let's use a single byte such that each set bit indicates that the number is in the set. For example, 01010100. If we can set the bit we want and later test it, we can use this to store the same information in a single byte. To set it:
char b = 0;
// want to set `3` so shift 1 twice to the left
b = b | (1 << 2);
// also set `5`
b = b | (1 << 4);
// and 7
b = b | (1 << 6);
And to test these numbers:
// is 3 in the map:
if (b & (1 << 2)) {
    // it is in...
}
You are going to need a lot more than 32 bits.
You want a sieve for up to 2^32 numbers, so you will need a bit for each one of those. Each bit will represent one number, and will be 0 if the number is prime and 1 if it is composite. (You could save a bit by noting that the first bit you actually need is the one for 2, since 1 is neither prime nor composite, but it is easier to just waste that bit.)
2^32 = 4,294,967,296
Divide by 8
536,870,912 bytes, or 1/2 GB.
So you will want an array of 2^29 bytes, or 2^27 4-byte words, or whatever you decide is best, and also a method for manipulating the individual bits stored in the chars (ints) in the array.
It sounds like eventually you are going to have several threads or processes operating on this shared memory. You may need to store it all in a file if you can't allocate all that memory to yourself.
Say you want to find the bit for x. Then let a = x / 8 and b = x - 8 * a. Then the bit is at arr[a] & (1 << b). (Avoid the modulus operator % wherever possible.)
//mark composite
a = x / 8;
b = x - 8 * a;
arr[a] |= 1 << b;
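To make that concrete, here is a minimal sketch of the whole sieve at a toy scale (LIMIT and the array size are assumptions for the demo; the real assignment would use 2^32 and, per the above, probably shared memory):

#include <stdio.h>
#include <stdint.h>

#define LIMIT 100u   /* toy upper bound; the assignment needs 2^32 */

int main(void)
{
    uint8_t arr[(LIMIT + 7) / 8] = {0};   /* bit x set => x is composite */
    uint32_t x, m;

    for (x = 2; x * x < LIMIT; x++) {
        if (arr[x >> 3] & (1u << (x & 7)))   /* already marked composite */
            continue;
        for (m = x * x; m < LIMIT; m += x)
            arr[m >> 3] |= 1u << (m & 7);    /* mark composite */
    }

    for (x = 2; x < LIMIT; x++)
        if (!(arr[x >> 3] & (1u << (x & 7))))
            printf("%u ", x);                /* x is prime */
    printf("\n");
    return 0;
}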
This sounds like a fun assignment!
A bitmap allows you to construct a large predicate function over the range of numbers you're interested in. If you just have a single 8-bit char, you can store Boolean values for each of the eight values. If you have 2 chars, it doubles your range.
So, say you have a bitmap that already has this information stored, your test function could look something like this:
#include <stdbool.h>
#include <stddef.h>

bool num_in_bitmap (int num, char *bitmap, size_t sz) {
    if (num < 0 || (size_t)(num / 8) >= sz) return 0;
    return (bitmap[num / 8] >> (num % 8)) & 1;
}

Hash function for 64 bit to 10 bits

I want a hash function that takes a long number (64 bits) and produces a result of 10 bits. What is the best hash function for this purpose? The inputs are basically addresses of variables (addresses are 64 bits, or 8 bytes, on Linux), so my hash function should be optimized for that case.
I would say something like this:
#include <stdint.h>

uint32_t hash(uint64_t x)
{
    x >>= 3;
    return (x ^ (x>>10) ^ (x>>20)) & 0x3FF;
}
The least significant 3 bits are not very useful, as most variables are 4-byte or 8-byte aligned, so we remove them.
Then we take the next 30 bits and mix them together (XOR) in blocks of 10 bits each.
Naturally, you could also take the (x>>30)^(x>>40)^(x>>50) but I'm not sure if they'll make any difference in practice.
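For example, feeding it the address of a local variable might look like this (just a usage illustration; the cast through uintptr_t is my assumption about how the address gets into a uint64_t):

#include <stdio.h>
#include <stdint.h>

uint32_t hash(uint64_t x);   /* the function defined above */

int main(void)
{
    int v = 42;
    printf("bucket = %u\n", (unsigned)hash((uint64_t)(uintptr_t)&v));
    return 0;
}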
I wrote a toy program to see some real addresses on the stack, data area, and heap. Basically I declared 4 globals, 4 locals and did 2 mallocs. I dropped the last two bits when printing the addresses. Here is an output from one of the runs:
20125e8
20125e6
20125e7
20125e4
3fef2131
3fef2130
3fef212f
3fef212c
25e4802
25e4806
What this tells me:
The LSB in this output (3rd bit of the address) is frequently 'on' and 'off'. So I wouldn't drop it when calculating the hash. Dropping 2 LSBs seems enough.
We also see that there is more entropy in the lower 8-10 bits. We must use that when calculating the hash.
We know that on a 64 bit machine, virtual addresses are never more than 48 bits wide.
What I would do next:
/* Drop two LSBs. */
a >>= 2;
/* Get rid of the MSBs. Keep 46 bits. */
a &= 0x3fffffffffff;
/* Get the 14 MSBs and fold them in to get a 32 bit integer.
The MSBs are mostly 0s anyway, so we don't lose much entropy. */
msbs = (a >> 32) << 18;
a ^= msbs;
Now we pass this through a decent 'half avalanche' hash function, instead of rolling our own. 'Half avalanche' means each bit of the input gets a chance to affect bits at the same position and higher:
uint32_t half_avalanche( uint32_t a)
{
    a = (a+0x479ab41d) + (a<<8);
    a = (a^0xe4aa10ce) ^ (a>>5);
    a = (a+0x9942f0a6) - (a<<14);
    a = (a^0x5aedd67d) ^ (a>>3);
    a = (a+0x17bea992) + (a<<7);
    return a;
}
For a 10-bit hash, use the 10 MSBs of the uint32_t returned. The hash function continues to work fine if you pick N MSBs for an N-bit hash, effectively doubling the bucket count with each additional bit.
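In other words (a one-liner, assuming a holds the folded 32-bit value from the earlier step):

uint32_t hash10 = half_avalanche(a) >> 22;   /* keep the 10 MSBs: 2^10 buckets */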
I was a little bored, so I wrote a toy benchmark for this. Nothing fancy, it allocates a bunch of memory on the heap and tries out the hash I described above. The source can be had from here. An example result:
1024 buckets, 256 values generated, 29 collisions
1024 buckets, 512 values generated, 103 collisions
1024 buckets, 1024 values generated, 370 collisions
Next: I tried out the other two hashes answered here. They both have similar performance. Looks like: Just pick the fastest one ;)
Best for most distributions is mod by a prime; 1021 is the largest 10-bit prime. There's no need to strip low bits.
#include <stdint.h>

static inline int hashaddress(void *v)
{
    return (uintptr_t)v % 1021;
}
If you think performance might be a concern, have a few alternates on hand and race them in your actual program. Microbenchmarks are a waste; a difference of a few cycles is almost certain to be swamped by cache effects, and size matters.

A Perfect Hashing Function for an 8 by 8 board?

I'm implementing a board with only 2 types of pieces, and was looking for a function to map from that board to a Long Integer (64 bits). I was thinking this should not be so hard, since a long integer contains more available information than an 8 by 8 array (call it grid[x][y]) with only 3 possible elements in each spot including the empty element. I tried the following:
(1) Zobrist hashing with Longs rather than ints (Just to test - I didn't actually expect that to work perfectly)
(2) Translated the grid into a 64 character string of a base 3 number, and then took that number and parsed it into a long. I think this should work, but it took a very very long time.
Is there some simpler solution to (2) involving bit operations of shifting or something like that?
Edit: Please don't give me actual code, as this is for a class project, and that would probably be considered unethical in our department (or at least not in Java).
Edit2: Basically, there are only 10 whites and 10 blacks on the board at any given time, of which no two pieces of the same color can be neighbors, either in the horizontal, vertical, or diagonal direction. Also, there are 12 spaces for each color where only that color may place pieces.
If each tile in the game can be in 1 of 3 states at any point in the game, then the minimum amount of storage required for a "perfect hash" of every possible state of the game board is
= power(3, 8*8) individual states
= log2(3^64) bits
= approx. 101.4 bits, so you will need at least 102 bits to store this info
At this point, you may as well just say there are 4 states for each tile, which brings you to 128 bits.
Once you do this, it's rather easy to make a fast hashing algorithm for the board.
E.g. (written as C++; you may need to alter the code if the platform doesn't support 128-bit numbers)
uint_128 CreateGameBoardHash(int (&game_board)[8][8])
{
    uint_128 board_hash = 0;
    for(int i = 0; i < 8; ++i)
    {
        for(int j = 0; j < 8; ++j)
        {
            // cast before shifting so the shift happens at 128-bit width
            board_hash |= (uint_128)game_board[i][j] << ((i * 8 + j) * 2);
        }
    }
    return board_hash;
}
This method will only waste 26 bits (little more than 3 bytes) over the optimal solution of 102 bits, but you will save a LOT of processing time that would be otherwise spent doing base 3 math.
Edit: Here's a version that doesn't require 128 bits and should work on any 16-bit (or better) processor.
struct GameBoardHash
{
    uint16 row[8];
};

GameBoardHash CreateGameBoardHash(int (&game_board)[8][8])
{
    GameBoardHash board_hash;
    for(int i = 0; i < 8; ++i)
    {
        board_hash.row[i] = 0;
        for(int j = 0; j < 8; ++j)
        {
            board_hash.row[i] |= game_board[i][j] << (j*2);
        }
    }
    return board_hash;
}
It won't fit into a 64 bit integer. You have 64 squares and you need more than 1 bit to record each square. Why do you need it to fit into a 64 bit int? Are you targeting the ZX81?
How about a 16-byte array containing the bits? Each 2 bits represent a position's value, so that given a position in the 8x8 board (pos = 0-63), you can figure out the array index by dividing pos by 4, and you can get the value by doing bit manipulation to extract two bits (bit0 = (pos mod 4) * 2 and bit1 = bit0 + 1). The two bits can be either 00, 01, or 10.
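As an illustration of that layout only (a sketch; the helper names are mine), assuming values 0, 1, or 2 per square:

#include <stdint.h>

int get_square(const uint8_t board[16], int pos)
{
    return (board[pos / 4] >> ((pos % 4) * 2)) & 3;   /* extract the 2-bit value */
}

void set_square(uint8_t board[16], int pos, int val)
{
    int shift = (pos % 4) * 2;
    board[pos / 4] = (uint8_t)((board[pos / 4] & ~(3 << shift)) | (val << shift));
}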
Reading your comments to David, it doesn't seem like you really need a perfect hash value. You just need a hashable object.
Make it simple for yourself... make some hash for your position in the override of GetHashCode(), and then do the rest of the work in the Equals function.
If you REALLY need it to be perfect, then you have to use a GUID to encode your data and make your own hash that can use 128-bit keys. But that is just a huge investment of time for little benefit.

Large bit arrays in C

Our OS professor mentioned that for assigning a process id to a new process, the kernel incrementally searches for the first zero bit in an array of size equivalent to the maximum number of processes (~32,768 by default), where an allocated process id has 1 stored in it.
As far as I know, there is no bit data type in C. Obviously, there's something I'm missing here.
Is there any such special construct from which we can build up a bit array? How is this done exactly?
More importantly, what are the operations that can be performed on such an array?
Bit arrays are simply byte arrays where you use bitwise operators to read the individual bits.
Suppose you have a 1-byte char variable. This contains 8 bits. You can test if the lowest bit is true by performing a bitwise AND operation with the value 1, e.g.
char a = /*something*/;
if (a & 1) {
    /* lowest bit is true */
}
Notice that this is a single ampersand. It is completely different from the logical AND operator &&. This works because a & 1 will "mask out" all bits except the first, and so a & 1 will be nonzero if and only if the lowest bit of a is 1. Similarly, you can check if the second lowest bit is true by ANDing it with 2, and the third by ANDing with 4, etc, for continuing powers of two.
So a 32,768-element bit array would be represented as a 4096-element byte array, where the first byte holds bits 0-7, the second byte holds bits 8-15, etc. To perform the check, the code would select the byte from the array containing the bit that it wanted to check, and then use a bitwise operation to read the bit value from the byte.
As far as what the operations are, like any other data type, you can read values and write values. I explained how to read values above, and I'll explain how to write values below, but if you're really interested in understanding bitwise operations, read the link I provided in the first sentence.
How you write a bit depends on if you want to write a 0 or a 1. To write a 1-bit into a byte a, you perform the opposite of an AND operation: an OR operation, e.g.
char a = /*something*/;
a = a | 1; /* or a |= 1 */
After this, the lowest bit of a will be set to 1 whether it was set before or not. Again, you could write this into the second position by replacing 1 with 2, or into the third with 4, and so on for powers of two.
Finally, to write a zero bit, you AND with the inverse of the position you want to write to, e.g.
char a = /*something*/;
a = a & ~1; /* or a &= ~1 */
Now, the lowest bit of a is set to 0, regardless of its previous value. This works because ~1 will have all bits other than the lowest set to 1, and the lowest set to zero. This "masks out" the lowest bit to zero, and leaves the remaining bits of a alone.
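Tying that back to the PID example (a hypothetical sketch; the helper names are mine and the 32,768 limit follows the question):

#include <stdint.h>

#define MAX_PID 32768
static uint8_t pid_map[MAX_PID / 8];   /* 4096 bytes, one bit per PID */

int pid_used(int pid)   { return (pid_map[pid / 8] >> (pid % 8)) & 1; }
void pid_mark(int pid)  { pid_map[pid / 8] |=  (uint8_t)(1u << (pid % 8)); }
void pid_clear(int pid) { pid_map[pid / 8] &= (uint8_t)~(1u << (pid % 8)); }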
A struct can assign members bit-sizes, but that's the extent of a "bit-type" in 'C'.
struct int_sized_struct {
    int foo:4;
    int bar:4;
    int baz:24;
};
The rest of it is done with bitwise operations. For example, searching that PID bitmap can be done with:
extern uint32_t *process_bitmap;
uint32_t *p = process_bitmap;
uint32_t bit_offset = 0;
uint32_t bit_test;
uint32_t pid;

/* Scan pid bitmap 32 entries per cycle. */
while ((*p & 0xffffffff) == 0xffffffff) {
    p++;
}

/* Scan the 32-bit int block that has an open slot for the open PID */
bit_test = 0x80000000;
while ((*p & bit_test) == bit_test) {
    bit_test >>= 1;
    bit_offset++;
}

/* each word holds 32 PIDs, not 8 */
pid = (p - process_bitmap) * 32 + bit_offset;
This is roughly 32x faster than doing a simple for loop scanning an array with one byte per PID. (Actually, more than 32x, since more of the bitmap will stay in the CPU cache.)
see http://graphics.stanford.edu/~seander/bithacks.html
There is no bit type in C, but bit manipulation is fairly straightforward. Some processors have bit-specific instructions which the code below would optimize into nicely; even without that it should be pretty fast. It may or may not be faster to use an array of 32-bit words instead of bytes. Inlining instead of functions would also help performance.
If you have the memory to burn, you can just use a whole byte to store one bit (or a whole 32-bit number, etc.) and greatly improve performance at the cost of the memory used.
#define SIZE 4096   /* bytes: enough for 32,768 one-bit entries */

unsigned char data[SIZE];

unsigned char get_bit ( unsigned int offset )
{
    //TODO: limit check offset
    if(data[offset>>3]&(1<<(offset&7))) return(1);
    else return(0);
}

void set_bit ( unsigned int offset, unsigned char bit )
{
    //TODO: limit check offset
    if(bit) data[offset>>3]|=1<<(offset&7);
    else data[offset>>3]&=~(1<<(offset&7));
}
