I want to shift the contents of an array of bytes by 12 bits to the left.
For example, starting with this array of type uint8_t shift[10]:
{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0A, 0xBC}
I'd like to shift it to the left by 12 bits, resulting in:
{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xAB, 0xC0, 0x00}
Hurray for pointers!
This code works by looking ahead 12 bits for each byte and copying the proper bits forward. Twelve bits ahead means the bottom half (nybble) of the next byte and the top half of the byte two positions away.
unsigned char length = 10;
unsigned char data[10] = {0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0A,0xBC};
unsigned char *shift = data;
while (shift < data+(length-2)) {
*shift = (*(shift+1)&0x0F)<<4 | (*(shift+2)&0xF0)>>4;
shift++;
}
*(data+length-2) = (*(data+length-1)&0x0F)<<4;
*(data+length-1) = 0x00;
Justin wrote:
#Mike, your solution works, but does not carry.
Well, I'd say a normal shift operation does just that (called overflow), and just lets the extra bits fall off the right or left. It's simple enough to carry if you wanted to - just save the 12 bits before you start to shift. Maybe you want a circular shift, to put the overflowed bits back at the bottom? Maybe you want to realloc the array and make it larger? Return the overflow to the caller? Return a boolean if non-zero data was overflowed? You'd have to define what carry means to you.
unsigned char overflow[2];
*overflow = (*data&0xF0)>>4;
*(overflow+1) = (*data&0x0F)<<4 | (*(data+1)&0xF0)>>4;
while (shift < data+(length-2)) {
/* normal shifting */
}
/* now would be the time to copy it back if you want to carry it somewhere */
*(data+length-2) = (*(data+length-1)&0x0F)<<4 | (*(overflow)&0x0F);
*(data+length-1) = *(overflow+1);
/* You could return a 16-bit carry int,
* but endian-ness makes that look weird
* if you care about the physical layout */
unsigned short carry = *(overflow+1)<<8 | *overflow;
Here's my solution, but even more importantly my approach to solving the problem.
I approached the problem by:
drawing the memory cells and drawing arrows from the destination to the source,
making a table showing the above drawing,
labeling each row in the table with the relative byte address.
This showed me the pattern:
let iL be the low nybble (half byte) of a[i]
let iH be the high nybble of a[i]
iH = (i+1)L
iL = (i+2)H
This pattern holds for all bytes.
Translating into C, this means:
a[i] = (iH << 4) OR iL
a[i] = ((a[i+1] & 0x0f) << 4) | ((a[i+2] & 0xf0) >> 4)
We now make three more observations:
since we carry out the assignments left to right, we don't need to store any values in temporary variables.
we will have a special case for the tail: all 12 bits at the end will be zero.
we must avoid reading undefined memory past the array. since we never read more than a[i+2], this only affects the last two bytes
So, we
handle the general case by looping for N-2 bytes and performing the general calculation above
handle the next-to-last byte by setting iH = (i+1)L
handle the last byte by setting it to 0
given a with length N, we get:
for (i = 0; i < N - 2; ++i) {
a[i] = ((a[i+1] & 0x0f) << 4) | ((a[i+2] & 0xf0) >> 4);
}
a[N-2] = (a[N-1] & 0x0f) << 4;
a[N-1] = 0;
And there you have it... the array is shifted left by 12 bits. It could easily be generalized to shifting N bits, noting that there will be ceil(N/8) special-cased assignment statements at the tail, I believe.
The loop could be made more efficient on some machines by translating to pointers
for (p = a, p2=a+N-2; p != p2; ++p) {
*p = ((*(p+1) & 0x0f) << 4) | ((*(p+2) & 0xf0) >> 4);
}
and by using the largest integer data type supported by the CPU.
(I've just typed this in, so now would be a good time for somebody to review the code, especially since bit twiddling is notoriously easy to get wrong.)
Let's make it generic: the best way to shift N bits in an array of 8-bit integers.
N - Total number of bits to shift
F = (N / 8) - Full 8 bit integers shifted
R = (N % 8) - Remaining bits that need to be shifted
I guess from here you would have to find the best way to use this data to move the bytes around. A generic algorithm would be: apply the full-byte shifts first, starting from the left of the array and moving each byte F indexes toward the front (so you never overwrite a byte you still need), zero-fill the newly emptied spaces at the end, then finally perform an R-bit shift on all of the indexes, again working from the left (see the sketch at the end of this answer).
In the case of shifting 0xAB left by R = 4 bits, you can calculate the overflow with a right shift plus a bitwise AND, and the shifted value using the bitshift operator:
// 0xAB shifted left 4 bits:
(0xAB >> 4) & 0x0F // is the overflow (0x0A)
(0xAB << 4) & 0xFF // is the shifted value (0xB0)
Keep in mind that the 4-bit mask is just 0x0F, or 0b00001111. This is easy to calculate, dynamically build, or you can even use a simple static lookup table.
I hope that is generic enough. I'm not good with C/C++ at all so maybe someone can clean up my syntax or be more specific.
Bonus: If you're crafty with your C you might be able to fudge multiple array indexes into a single 16, 32, or even 64 bit integer and perform the shifts. But that is probably not very portable and I would recommend against it. Just a possible optimization.
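Following that outline, here is a rough C sketch of the F/R approach (the function name and details are mine; it treats index 0 as the leftmost, most significant byte, as in the question):

#include <stdint.h>
#include <string.h>

/* Hypothetical sketch of the two-phase approach described above:
 * first move whole bytes, then shift the remaining R bits. */
void shift_left_bits(uint8_t *a, size_t size, unsigned n)
{
    size_t f = n / 8;     /* F: full bytes to move  */
    unsigned r = n % 8;   /* R: remaining bit shift */

    if (f >= size) {      /* everything is shifted out */
        memset(a, 0, size);
        return;
    }

    /* Phase 1: move whole bytes left by F positions, zero-fill the tail. */
    memmove(a, a + f, size - f);
    memset(a + size - f, 0, f);

    /* Phase 2: shift each byte left by R bits, pulling the top R bits
     * of the following byte into the bottom of the current one. */
    if (r != 0) {
        for (size_t i = 0; i + 1 < size; i++)
            a[i] = (uint8_t)((a[i] << r) | (a[i + 1] >> (8 - r)));
        a[size - 1] = (uint8_t)(a[size - 1] << r);
    }
}

For the 12-bit case in the question, F = 1 and R = 4, and this reproduces the expected {0x00, ..., 0xAB, 0xC0, 0x00} result.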
Here is a working solution, using temporary variables:
void shift_4bits_left(uint8_t* array, uint16_t size)
{
int i;
uint8_t shifted = 0x00;
uint8_t overflow = (0xF0 & array[0]) >> 4;
for (i = (size - 1); i >= 0; i--)
{
shifted = (array[i] << 4) | overflow;
overflow = (0xF0 & array[i]) >> 4;
array[i] = shifted;
}
}
Call this function 3 times for a 12-bit shift.
Mike's solution may be faster, due to the use of temporary variables.
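For what it's worth, a minimal usage sketch for the 12-bit shift from the question (assuming shift_4bits_left from above is in scope):

#include <stdint.h>
#include <stdio.h>

void shift_4bits_left(uint8_t* array, uint16_t size); /* defined above */

int main(void)
{
    uint8_t data[10] = {0x00, 0x00, 0x00, 0x00, 0x00,
                        0x00, 0x00, 0x00, 0x0A, 0xBC};

    /* three 4-bit shifts add up to a 12-bit shift */
    shift_4bits_left(data, sizeof data);
    shift_4bits_left(data, sizeof data);
    shift_4bits_left(data, sizeof data);

    for (int i = 0; i < 10; i++)
        printf("0x%02X ", data[i]);   /* ... 0xAB 0xC0 0x00 */
    printf("\n");
    return 0;
}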
The 32 bit version... :-) Handles 1 <= count <= num_words
#include <stdio.h>
unsigned int array[] = {0x12345678,0x9abcdef0,0x12345678,0x9abcdef0,0x66666666};
int main(void) {
int count;
unsigned int *from, *to;
from = &array[0];
to = &array[0];
count = 5;
while (count-- > 1) {
*to++ = (*from << 12) | ((from[1] >> 20) & 0xfff);
from++;
}
*to = (*from<<12);
printf("%x\n", array[0]);
printf("%x\n", array[1]);
printf("%x\n", array[2]);
printf("%x\n", array[3]);
printf("%x\n", array[4]);
return 0;
}
#Joseph, notice that the variables are 8 bits wide, while the shift is 12 bits wide. Your solution works only for N <= variable size.
If you can assume your array length is a multiple of 8 you can reinterpret the array as an array of uint64_t and work on that (mind the alignment and strict-aliasing rules). If it isn't a multiple of 8, you can work in 64-bit chunks on as much as you can and handle the remainder byte by byte.
This may be a bit more coding, but I think it's more elegant in the end.
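As a rough illustration of that chunked approach (my own sketch, not a drop-in routine): rather than casting directly, which raises alignment and strict-aliasing questions, you can assemble each 8-byte chunk into a uint64_t with explicit big-endian loads so that "left" in the byte array lines up with the most significant bits, shift across chunk boundaries, and store the result back. It assumes the length is a multiple of 8; a remainder would fall back to the byte-wise code elsewhere on this page.

#include <stddef.h>
#include <stdint.h>

/* Load/store 8 bytes as a big-endian 64-bit value, so byte 0 is the
 * most significant byte, regardless of the host's endianness. */
static uint64_t load_be64(const uint8_t *p) {
    uint64_t v = 0;
    for (int i = 0; i < 8; i++)
        v = (v << 8) | p[i];
    return v;
}

static void store_be64(uint8_t *p, uint64_t v) {
    for (int i = 7; i >= 0; i--) {
        p[i] = (uint8_t)(v & 0xFF);
        v >>= 8;
    }
}

/* Shift an array whose length is a multiple of 8 left by 12 bits,
 * working one 64-bit chunk at a time. */
void shl12_chunks(uint8_t *a, size_t len) {
    size_t words = len / 8;
    for (size_t i = 0; i < words; i++) {
        uint64_t cur  = load_be64(a + 8 * i);
        uint64_t next = (i + 1 < words) ? load_be64(a + 8 * (i + 1)) : 0;
        store_be64(a + 8 * i, (cur << 12) | (next >> 52));
    }
}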
There are a couple of edge-cases which make this a neat problem:
the input array might be empty
the last and next-to-last bytes need to be treated specially, because they have zero bits shifted into them
Here's a simple solution which loops over the array copying the low-order nibble of the next byte into its high-order nibble, and the high-order nibble of the next-next (+2) byte into its low-order nibble. To save dereferencing the look-ahead pointer twice, it maintains a two-element buffer with the "last" and "next" bytes:
void shl12(uint8_t *v, size_t length) {
if (length == 0) {
return; // nothing to do
}
if (length > 1) {
uint8_t last_byte, next_byte;
next_byte = *(v + 1);
for (size_t i = 0; i + 2 < length; i++, v++) {
last_byte = next_byte;
next_byte = *(v + 2);
*v = ((last_byte & 0x0f) << 4) | (((next_byte) & 0xf0) >> 4);
}
// the next-to-last byte is half-empty
*(v++) = (next_byte & 0x0f) << 4;
}
// the last byte is always empty
*v = 0;
}
Consider the boundary cases, which activate successively more parts of the function:
When length is zero, we bail out without touching memory.
When length is one, we set the one and only element to zero.
When length is two, we set the high-order nibble of the first byte to the low-order nibble of the second byte (that is, bits 12-15), and the second byte to zero. We don't activate the loop.
When length is greater than two we hit the loop, shuffling the bytes across the two-element buffer.
If efficiency is your goal, the answer probably depends largely on your machine's architecture. Typically you should maintain the two-element buffer, but handle a machine word (32/64 bit unsigned integer) at a time. If you're shifting a lot of data it will be worthwhile treating the first few bytes as a special case so that you can get your machine word pointers word-aligned. Most CPUs access memory more efficiently if the accesses fall on machine word boundaries. Of course, the trailing bytes have to be handled specially too so you don't touch memory past the end of the array.
Related
I have encountered the following C function while working on legacy code and I am completely baffled by the way the code is organized. I can see that the function is trying to set bits at a given position in a bit stream, but I can't get my head around the individual statements and expressions. Can somebody please explain why the developer used division by 8 (/8) and modulus 8 (%8) expressions here and there? Is there an easy way to read these kinds of bit manipulation functions in C?
static void setBits(U8 *input, U16 *bPos, U8 len, U8 val)
{
U16 pos;
if (bPos==0)
{
pos=0;
}
else
{
pos = *bPos;
*bPos += len;
}
input[pos/8] = (input[pos/8]&(0xFF-((0xFF>>(pos%8))&(0xFF<<(pos%8+len>=8?0:8-(pos+len)%8)))))
|((((0xFF>>(8-len)) & val)<<(8-len))>>(pos%8));
if ((pos/8 == (pos+len)/8)|(!((pos+len)%8)))
return;
input[(pos+len)/8] = (input[(pos+len)/8]
&(0xFF-(0xFF<<(8-(pos+len)%8))))
|((0xFF>>(8-len)) & val)<<(8-(pos+len)%8);
}
please explain why the developer used division by 8 (/8) and modulus 8 (%8) expressions here and there
First of all, note that the individual bits of a byte are numbered 0 to 7, where bit 0 is the least significant one. There are 8 bits in a byte, hence the "magic number" 8.
Generally speaking: if you have any raw data, it consists of n bytes and can therefore always be treated as an array of bytes uint8_t data[n]. To access bit x in that byte array, you can for example do like this:
Given x = 17, bit x is then found in byte number 17/8 = 2. Note that integer division "floors" the value, instead of 2.125 you get 2.
The remainder of the integer division gives you the bit position in that byte, 17%8 = 1.
So bit number 17 is located in byte 2, bit 1. data[2] gives the byte.
To mask out a bit from a byte in C, the bitwise AND operator & is used. And in order to use that, a bit mask is needed. Such bit masks are best obtained by shifting the value 1 by the desired number of bits. Bit masks are perhaps most clearly expressed in hex, and the possible bit masks for a byte are (1<<0) == 0x01, (1<<1) == 0x02, (1<<2) == 0x04, (1<<3) == 0x08 and so on.
In this case (1<<1) == 0x02.
C code:
uint8_t data[n];
...
size_t byte_index = x / 8;
size_t bit_index = x % 8;
bool is_bit_set;
is_bit_set = ( data[byte_index] & (1<<bit_index) ) != 0;
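Setting or clearing a bit uses the same byte_index / bit_index pair, just with OR and AND instead of the read; continuing the fragment above (this is only the building block, not the setBits function from the question):

data[byte_index] |=  (1u << bit_index);   /* set bit x   */
data[byte_index] &= ~(1u << bit_index);   /* clear bit x */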
I'm writing an algorithm that compresses data (LZSS) and it requires me to have two 13-bit values which I'll have to later merge together.
In some cases, however, I don't need 13 bits; 8 are enough.
For this purpose I have a structure like this:
typedef struct pattern
{
char is_compressed:1; //flag
short index :13; //first value
short length :13; //second value
unsigned char c; //if 8 bits are enough, use this instead
} Pattern;
I therefore have an array of these structures, and each structure can either contain the two 13-bit values or an 8-bit value.
I am now looping over this array, and my objective is to merge all these bits together.
I easily calculated the total number of bits used and the number of unsigned chars (8 bits each) needed in order to store all the values:
int compressed = 0, plain = 0;
//count is the amount of patterns i have and p is the array of patterns (the structures)
for (int i = 0; i < count; i++)
{
if (p[i]->is_compressed)
compressed++;
else
plain++;
}
//this stores the number of bits used in the pattern (13 for length and 13 for the index or 8 for the plain uchar)
int tot_bits = compressed * 26 + plain * 8;
//since we can only write whole bytes, we calculate how many bytes are needed to store all the bits
int nr_of_arrays = (tot_bits % 8 == 0) ? tot_bits / 8 : (tot_bits / 8) + 1;
//we allocate the needed memory for the array of unsigned chars that will contain, concatenated, all the bits
unsigned char* uc = (unsigned char*) malloc(nr_of_arrays * sizeof(unsigned char));
After allocating the memory for the array I'm going to fill, I simply loop through the array of structures and recognize whether the structure I'm looking at contains the two 13-bit values or just the 8-bit one
for (int i = 0; i < count; i++)
{
if (p->is_compressed)
{
//The structure contains the two 13 bits value
}
else
{
//The structure only contains the 8 bits value
}
}
Here I'm stuck and can't seem to figure out a proper way of getting the job done.
Does anybody of you know how to implement that part there?
A practical example would be:
pattern 1 contains the 2 13-bit values:
1111 1111 1111 1
0000 0000 0000 0
pattern 2 contains the 8-bit value
1010 1010
total bits: 34
number of bytes required: 5 (that will waste 6 bits)
resulting array is:
[0] 1111 1111
[1] 1111 1000
[2] 0000 0000
[3] 0010 1010
[4] 1000 0000 (the remaining 6 bits are set to 0)
One way to do that is to write bytes one by one and keep track of partial bytes as you write.
You need a pointer to your char array, and an integer to keep track of how many bits you wrote to the last byte. Every time you write bits, you check how many bits you can still write to the last byte, and you write those bits accordingly (ex: if there are 5 bits free, you shift your next 8-bit value right by 3 and add it to the last byte). Every time a byte is complete, you increment your array pointer and reset your bit tracker.
A clean way to implement this would be to write functions like :
void BitWriter_init( char *myArray );
void BitWriter_write( int theBitsToWrite, int howManyBits );
Now you just have to figure out how to implement these functions, or use any other method of your choice.
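As a rough sketch of what those two functions could look like (the write position is kept in file-scope variables just to keep the example short, and I've used uint8_t/unsigned instead of plain char/int; adapt as needed):

#include <stdint.h>

static uint8_t *out;         /* byte currently being filled     */
static int bits_filled;      /* how many bits of *out are used  */

void BitWriter_init(uint8_t *myArray)
{
    out = myArray;
    bits_filled = 0;
}

/* Append the 'howManyBits' least significant bits of 'theBitsToWrite',
 * most significant of those bits first. */
void BitWriter_write(unsigned theBitsToWrite, int howManyBits)
{
    while (howManyBits > 0) {
        if (bits_filled == 0)
            *out = 0;                         /* start a fresh byte */
        int space = 8 - bits_filled;          /* free bits in current byte */
        int n = (howManyBits < space) ? howManyBits : space;

        /* take the top n of the remaining bits and place them just
         * below the bits already written to the current byte */
        unsigned chunk = (theBitsToWrite >> (howManyBits - n)) & ((1u << n) - 1);
        *out |= (uint8_t)(chunk << (space - n));

        bits_filled += n;
        howManyBits -= n;
        if (bits_filled == 8) {               /* byte complete: move on */
            out++;
            bits_filled = 0;
        }
    }
}

The loop over the patterns would then call BitWriter_write(p[i]->index, 13) and BitWriter_write(p[i]->length, 13) when is_compressed is set, or BitWriter_write(p[i]->c, 8) otherwise.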
The problem intrigued me. Here's a possible implementation of "by using a lot of bitwise operations":
/* A writable bit string, with an indicator of the next available bit */
struct bitbuffer {
uint8_t *bytes;
size_t next_bit;
};
/*
* writes the bits represented by the given pattern to the next available
* positions in the specified bit buffer
*/
void write_bits(struct bitbuffer *buffer, Pattern *pattern) {
/* The index of the byte containing the next available bit */
size_t next_byte = buffer->next_bit / 8;
/* the number of bits already used in the next available byte */
unsigned bits_used = buffer->next_bit % 8;
if (pattern->is_compressed) {
/* assemble the bits to write in a 32-bit block */
uint32_t bits = ((uint32_t)(pattern->index & 0x1FFF) << 13) | (uint32_t)(pattern->length & 0x1FFF);
if (bits_used == 7) {
/* special case: the bits to write will span 5 bytes */
/* the first bit written will be the last in the current byte */
uint8_t first_bit = bits >> 25;
buffer->bytes[next_byte] |= first_bit;
/* write the next 8 bits to the next byte */
buffer->bytes[++next_byte] = (bits >> 17) & 0xFF;
/* align the tail of the bit block with the buffer*/
bits <<= 7;
} else {
/* the first bits written will fill out the current byte */
uint8_t first_bits = (bits >> (18 + bits_used)) & 0xFF;
buffer->bytes[next_byte] |= first_bits;
/* align the tail of the bit block with the buffer*/
bits <<= (6 - bits_used);
}
/*
* Write the remainder of the bit block to the buffer,
* most-significant bits first. Three (more) bytes will be modified.
*/
buffer->bytes[++next_byte] = (bits >> 16) & 0xFF;
buffer->bytes[++next_byte] = (bits >> 8) & 0xFF;
buffer->bytes[++next_byte] = bits & 0xFF;
/* update the buffer's index of the next available bit */
buffer->next_bit += 26;
} else { /* the pattern is not compressed */
if (bits_used) {
/* the bits to write will span two bytes in the buffer */
buffer->bytes[next_byte] |= (pattern->c >> bits_used);
buffer->bytes[++next_byte] = (uint8_t)(pattern->c << (8 - bits_used));
} else {
/* the bits to write exactly fill the next buffer byte */
buffer->bytes[next_byte] = pattern->c;
}
/* update the buffer's index of the next available bit */
buffer->next_bit += 8;
}
}
In our company's latest project, we want to shift a char array left by half a byte, e.g.
char buf[] = {0x12, 0x34, 0x56, 0x78, 0x21}
we want to make the buf like
0x23, 0x45, 0x67, 0x82, 0x10
how do I make the process more efficient, can you make the time complexity less than O(N) if there are N bytes to be processed?
SOS...
Without more context, I would go even as far as questioning the need for an actual array. If you have 4 bytes, that can easily be represented using a uint32_t, and then you can perform an O(1) shift operation:
uint32_t x = 0x12345678;
uint32_t offByHalf = x << 4;
This way, you would replace array access with bit masking, like this:
array[i]
would be equivalent with
(x >> 8 * (3 - i)) & 0xff
And who knows, arithmetic may even be faster than memory access. But don't take my word for it, benchmark it.
No, if you want to actually shift the array, you'll need to hit every element at least once so it'll be O(n). There's no getting around that. You can do it with something like the following:
#include <stdio.h>
void shiftNybbleLeft (unsigned char *arr, size_t sz) {
for (int i = 1; i < sz; i++)
arr[i-1] = ((arr[i-1] & 0x0f) << 4) | (arr[i] >> 4);
arr[sz-1] = (arr[sz-1] & 0x0f) << 4;
}
int main (int argc, char *argv[]) {
unsigned char buf[] = {0x12, 0x34, 0x56, 0x78};
shiftNybbleLeft (buf, sizeof (buf));
for (int i = 0; i < sizeof (buf); i++)
printf ("0x%02x ", buf[i]);
putchar ('\n');
return 0;
}
which gives you:
0x23 0x45 0x67 0x80
That's not to say you can't make it more efficient (a). If you instead modify your extraction code so that it behaves differently, you can avoid the shifting operation.
In other words, don't shift the array, simply set an offset variable and use that to modify the extraction process. Examine the following code:
#include <stdio.h>
unsigned char getByte (unsigned char *arr, size_t index, size_t shiftSz) {
if ((shiftSz % 2) == 0)
return arr[index + shiftSz / 2];
return ((arr[index + shiftSz / 2] & 0x0f) << 4)
| (arr[index + shiftSz / 2 + 1] >> 4);
}
int main (int argc, char *argv[]) {
unsigned char buf[] = {0x12, 0x34, 0x56, 0x78};
//shiftNybbleLeft (buf, sizeof (buf));
for (int i = 0; i < 4; i++)
printf ("buf[1] with left shift %d nybbles -> 0x%02x\n",
i, getByte (buf, 1, i));
return 0;
}
With shiftSz set to 0, it's as if the array isn't shifted. By setting shiftSz to non-zero, an O(1) operation, getByte() will actually return the element as if you had shifted it by that amount. The output is as you would expect:
Index 1 with left shift 0 nybbles -> 0x34
Index 1 with left shift 1 nybbles -> 0x45
Index 1 with left shift 2 nybbles -> 0x56
Index 1 with left shift 3 nybbles -> 0x67
Now that may seem a contrived example (because it is) but there's ample precedent in using tricks like that to avoid potentially costly operations. You'd probably also want to add some bounds checking to catch problems with referencing outside the array.
Keep in mind that there's a trade-off. What you gain by not having to shift the array may be offset to some degree by the calculations done during extraction. Whether it's actually worth it depends on how you use the data. If the arrays is large but you don't extract that many values from it, this trick may be worth it.
As another example of using "tricks" to prevent costly operations, I've seen text editors that don't bother shifting the contents of lines either (when deleting a character for example). Instead they simply set the character to a 0 code point and take care of it when displaying the line (ignoring the 0 code points).
They'll generally clean up eventually but often in the background where it won't interfere with your editing speed.
(a) Though you may want to actually make sure this is necessary.
One of your comments stated that your arrays are about 500 entries in length and I can tell you that my not-supremely-grunty development box can shift that array one nybble to the left at the rate of about half a million times every single second.
So, even if your profiler states that a large proportion of time is being spent in there, that doesn't necessarily mean it's a large amount of time.
You should only look into optimising code if there's a specific, identified bottleneck.
I'll tackle the only objectively answerable part of the question, which is:
can you make the time complexity less than O(N) if there are N bytes to be processed?
If you need the entire output array then no, you cannot do better than O(N).
If you only need certain elements of the output array, then you can compute just those.
It may not compile well due to alignment, but you can try using a bitfield offset in a struct.
struct __attribute__((packed)) shifted{
char offset:4; // dump data
char data[N]; // rest of data
};
or on some systems
struct __attribute__((packed)) shifted{
char offset:4; // dump data
char data[N]; // rest of data
char last:4; // to make an even byte
};
struct shifted *shifted_buf = (struct shifted *)buf;
//now operate on shifted_buf->data
Or you can try making it a union
union __attribute__((packed)) {
char old[N];
struct{
char offset:4;
char buf[N];
char last:4; // to make an even byte
}shifted;
}data;
The alternative would be to cast to an array of int and <<4 for each int, reducing it to N/4, but this is dependent on endianness.
I have some code below that is supposed to be converting a C (Arduino) 8-bit byte array to a 16-bit int array, but it only seems to partially work. I'm not sure what I'm doing wrong.
The byte array is in little-endian byte order. How do I convert it to an int (two bytes per entry) array?
In layman's terms, I want to merge every two bytes.
Currently, for an input BYTE ARRAY of {0x10, 0x00, 0x00, 0x00, 0x30, 0x00}, the output INT ARRAY is: {1,0,0}. The output should be an INT ARRAY of: {1,0,3}.
The code below is what I currently have:
I wrote this function based on a solution in Stack Overflow question Convert bytes in a C array as longs.
I also have this solution based off the same code which works fine for byte array to long (32-bits) array http://pastebin.com/TQzyTU2j.
/**
* Convert the retrieved bytes into a set of 16 bit ints
**/
int * byteA2IntA(byte * byte_slice, int sizeOfB, int * ret_array){
//Variable that stores the addressed int to be stored in SRAM
int currentInt;
int sizeOfI = sizeOfB / 2;
if(sizeOfB % 2 != 0) ++sizeOfI;
for(int i = 0; i < sizeOfB; i+=2){
currentInt = 0;
if(byte_slice[i]=='\0') {
break;
}
if(i + 1 < sizeOfB)
currentInt = (currentInt << 8) + byte_slice[i+1];
currentInt = (currentInt << 8) + byte_slice[i+0];
*ret_array = currentInt;
ret_array++;
}
//Pointer to the return array in the parent scope.
return ret_array;
}
What is the meaning of this line of code?
if(i + 1 < sizeOfB) currentInt = (currentInt << 8) + byte_slice[i+1];
Here currentInt is always 0 and 0 << 8 = 0.
Also what you do is, for each couple of bytes (let me call them uint8_t from now on), you pack an int (let me call it uint16_t from now on) by doing the following:
You take the rightmost uint8_t
You shift it 8 positions to the left
You add the leftmost uint8_t
Is this really what you want?
Supposing you have byte_slice[] = {1, 2}, you pack a 16 bit integer with the value 513 (2<<8 + 1)!
Also, you don't need to return the pointer to the array of uint16_t as the caller has already provided it to the function.
If you use the return of your function, as Joachim said, you get a pointer starting from a position of the uint16_t array which is not position [0].
Vincenzo has a point (or two); you need to be clear about what you're trying to do:
Combine two bytes to one 16-bit int, one byte being the MSB and one byte being the LSB
int16_t result = (byteMSB << 8) | byteLSB;
Convert an array of bytes into 16-bit
for(i = 0; i < num_of_bytes; i++)
{
myint16array[i] = mybytearray[i];
}
Copy an array of data into another one
memcpy(dest, src, num_bytes);
That will (probably, platform/compiler dependent) have the same effect as my 1st example.
Also, beware of using ints, as that suggests signed values; use unsigned types, which are safer and probably faster.
The problem is most likely that you increase ret_array and then return it. When you return it, it will point to one place beyond the destination array.
Save the pointer at the start of the function, and use that pointer instead.
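Putting those points together, a corrected sketch of the function might look like this (it keeps the little-endian byte pairing, drops the '\0' test that cut the loop short on any zero byte, and returns a count instead of the advanced pointer; uint8_t/uint16_t stand in for Arduino's byte/int):

#include <stdint.h>

/* Combine each little-endian byte pair into one 16-bit value.
 * Returns the number of 16-bit values written. */
int byteA2IntA(const uint8_t *byte_slice, int sizeOfB, uint16_t *ret_array)
{
    int sizeOfI = sizeOfB / 2;
    if (sizeOfB % 2 != 0)
        ++sizeOfI;

    for (int i = 0; i < sizeOfI; i++) {
        uint16_t lo = byte_slice[2 * i];                                  /* low byte first */
        uint16_t hi = (2 * i + 1 < sizeOfB) ? byte_slice[2 * i + 1] : 0;  /* guard odd length */
        ret_array[i] = (uint16_t)((hi << 8) | lo);
    }
    return sizeOfI;
}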
Consider using a struct. This is kind of a hack, though.
Off the top of my head it would look like this.
struct customINT16 {
byte ByteHigh;
byte ByteLow;
};
So in your case you would write:
struct customINT16 myINT16;
myINT16.ByteHigh = BYTEARRAY[0];
myINT16.ByteLow = BYTEARRAY[1];
You'll have to go through a pointer to cast it, though:
intpointer = (int*)(&myINT16);
INTARRAY[0] = *intpointer;
I want to create a very large array on which I write '0's and '1's. I'm trying to simulate a physical process called random sequential adsorption, where units of length 2, dimers, are deposited onto an n-dimensional lattice at a random location, without overlapping each other. The process stops when there is no more room left on the lattice for depositing more dimers (lattice is jammed).
Initially I start with a lattice of zeroes, and the dimers are represented by a pair of '1's. As each dimer is deposited, the site on the left of the dimer is blocked, due to the fact that the dimers cannot overlap. So I simulate this process by depositing a triple of '1's on the lattice. I need to repeat the entire simulation a large number of times and then work out the average coverage %.
I've already done this using an array of chars for 1D and 2D lattices. At the moment I'm trying to make the code as efficient as possible, before working on the 3D problem and more complicated generalisations.
This is basically what the code looks like in 1D, simplified:
int main()
{
/* Define lattice */
array = (char*)malloc(N * sizeof(char));
total_c = 0;
/* Carry out RSA multiple times */
for (i = 0; i < 1000; i++)
rand_seq_ads();
/* Calculate average coverage efficiency at jamming */
printf("coverage efficiency = %lf", total_c/1000);
return 0;
}
void rand_seq_ads()
{
/* Initialise array, initial conditions */
memset(array, 0, N * sizeof(char));
available_sites = N;
count = 0;
/* While the lattice still has enough room... */
while(available_sites != 0)
{
/* Generate random site location */
x = rand();
/* Deposit dimer (if site is available) */
if(array[x] == 0)
{
array[x] = 1;
array[x+1] = 1;
count += 1;
available_sites += -2;
}
/* Mark site left of dimer as unavailable (if its empty) */
if(array[x-1] == 0)
{
array[x-1] = 1;
available_sites += -1;
}
}
/* Calculate coverage %, and add to total */
c = (double)count / N;
total_c += c;
}
For the actual project I'm doing, it involves not just dimers but trimers, quadrimers, and all sorts of shapes and sizes (for 2D and 3D).
I was hoping that I would be able to work with individual bits instead of bytes, but I've been reading around and as far as I can tell you can only change 1 byte at a time, so either I need to do some complicated indexing or there is a simpler way to do it?
Thanks for your answers
If I am not too late, this page gives an awesome explanation with examples.
An array of int can be used to deal with an array of bits. Assuming the size of int to be 4 bytes, when we talk about one int we are dealing with 32 bits. Say we have int A[10]; that means we are working with 10*4*8 = 320 bits: picture each element of the array as 4 big blocks, each representing a byte, and each byte as 8 smaller blocks, the bits.
So, to set the kth bit in array A:
// NOTE: if using "uint8_t A[]" instead of "int A[]" then divide by 8, not 32
void SetBit( int A[], int k )
{
int i = k/32; //gives the corresponding index in the array A
int pos = k%32; //gives the corresponding bit position in A[i]
unsigned int flag = 1; // flag = 0000.....00001
flag = flag << pos; // flag = 0000...010...000 (shifted k positions)
A[i] = A[i] | flag; // Set the bit at the k-th position in A[i]
}
or in the shortened version
void SetBit( int A[], int k )
{
A[k/32] |= 1 << (k%32); // Set the bit at the k-th position in A[i]
}
similarly to clear kth bit:
void ClearBit( int A[], int k )
{
A[k/32] &= ~(1 << (k%32));
}
and to test whether the kth bit is set:
int TestBit( int A[], int k )
{
return ( (A[k/32] & (1 << (k%32) )) != 0 ) ;
}
As said above, these manipulations can be written as macros too:
// Due to operator precedence, wrap 'k' in parentheses in case it
// is passed as an expression, e.g. i + 1; otherwise the first
// macro would expand to "A[i + (1/32)]" instead of "A[(i + 1)/32]"
#define SetBit(A,k) ( A[(k)/32] |= (1 << ((k)%32)) )
#define ClearBit(A,k) ( A[(k)/32] &= ~(1 << ((k)%32)) )
#define TestBit(A,k) ( A[(k)/32] & (1 << ((k)%32)) )
typedef unsigned long bfield_t[ (size_needed + 8*sizeof(long) - 1) / (8*sizeof(long)) ];
// long because that's probably what your cpu is best at
// size_needed is in bits; the + 8*sizeof(long) - 1 part rounds up, so it
// doesn't have to be an exact multiple of the number of bits in a long
Now, each long in a bfield_t can hold sizeof(long)*8 bits.
You can calculate the index of the long holding a needed bit by:
bindex = index / (8 * sizeof(long) );
and your bit number by
b = index % (8 * sizeof(long) );
You can then look up the long you need and then mask out the bit you need from it.
result = my_field[bindex] & (1<<b);
or
result = 1 & (my_field[bindex]>>b); // if you prefer them to be in bit0
The first one may be faster on some CPUs, or may save you shifting back up if you need to perform operations between the same bit in multiple bit arrays. It also mirrors the setting and clearing of a bit in the field more closely than the second implementation.
set:
my_field[bindex] |= 1<<b;
clear:
my_field[bindex] &= ~(1<<b);
You should remember that you can use bitwise operations on the longs that hold the fields
and that's the same as the operations on the individual bits.
You'll probably also want to look into the ffs, fls, ffc, and flc functions if available. ffs should always be available in strings.h. It's there just for this purpose -- a string of bits.
Anyway, ffs is "find first set" and is essentially:
int ffs(int x) {
int c = 0;
while (!(x&1) ) {
c++;
x>>=1;
}
return c; // except that it handles x = 0 differently
}
This is a common operation for processors to have an instruction for and your compiler will probably generate that instruction rather than calling a function like the one I wrote. x86 has an instruction for this, by the way. Oh, and ffsl and ffsll are the same function except take long and long long, respectively.
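For example, with GCC or Clang you can call the builtin directly rather than writing the loop above; note that __builtin_ffsl returns the 1-based position of the lowest set bit and 0 when its argument is 0:

#include <stdio.h>

int main(void) {
    unsigned long field = 0x48;             /* bits 3 and 6 are set */
    int pos = __builtin_ffsl((long)field);  /* returns 4 (1-based)  */

    if (pos != 0)
        printf("lowest set bit is bit %d\n", pos - 1);
    else
        printf("no bits set\n");
    return 0;
}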
You can use | (bitwise or) and << (left shift).
For example, (1 << 3) results in "00001000" in binary. So your code could look like:
char eightBits = 0;
//Set the 5th and 6th bits from the right to 1
eightBits |= (1 << 4);
eightBits |= (1 << 5);
//eightBits now looks like "00110000".
Then just scale it up with an array of chars and figure out the appropriate byte to modify first.
For more efficiency, you could define a list of bitfields in advance and put them in an array:
#define BIT8 0x01
#define BIT7 0x02
#define BIT6 0x04
#define BIT5 0x08
#define BIT4 0x10
#define BIT3 0x20
#define BIT2 0x40
#define BIT1 0x80
char bits[8] = {BIT1, BIT2, BIT3, BIT4, BIT5, BIT6, BIT7, BIT8};
Then you avoid the overhead of the bit shifting and you can index your bits, turning the previous code into:
eightBits |= (bits[2] | bits[3]);
Alternatively, if you can use C++, you could just use an std::vector<bool> which is internally defined as a vector of bits, complete with direct indexing.
bitarray.h:
#include <inttypes.h> // defines uint32_t
//typedef unsigned int bitarray_t; // if you know that int is 32 bits
typedef uint32_t bitarray_t;
#define RESERVE_BITS(n) (((n)+0x1f)>>5)
#define DW_INDEX(x) ((x)>>5)
#define BIT_INDEX(x) ((x)&0x1f)
#define getbit(array,index) (((array)[DW_INDEX(index)]>>BIT_INDEX(index))&1)
#define putbit(array, index, bit) \
((bit)&1 ? ((array)[DW_INDEX(index)] |= 1<<BIT_INDEX(index)) \
: ((array)[DW_INDEX(index)] &= ~(1<<BIT_INDEX(index))) \
, 0 \
)
Use:
bitarray_t arr[RESERVE_BITS(130)] = {0, 0x12345678,0xabcdef0,0xffff0000,0};
int i = getbit(arr,5);
putbit(arr,6,1);
int x=2; // the least significant bit is 0
putbit(arr,6,x); // sets bit 6 to 0 because 2&1 is 0
putbit(arr,6,!!x); // sets bit 6 to 1 because !!2 is 1
EDIT the docs:
"dword" = "double word" = 32-bit value (unsigned, but that's not really important)
RESERVE_BITS: number_of_bits --> number_of_dwords
RESERVE_BITS(n) is the number of 32-bit integers enough to store n bits
DW_INDEX: bit_index_in_array --> dword_index_in_array
DW_INDEX(i) is the index of dword where the i-th bit is stored.
Both bit and dword indexes start from 0.
BIT_INDEX: bit_index_in_array --> bit_index_in_dword
If i is the number of some bit in the array, BIT_INDEX(i) is the number
of that bit in the dword where the bit is stored.
And the dword is known via DW_INDEX().
getbit: bit_array, bit_index_in_array --> bit_value
putbit: bit_array, bit_index_in_array, bit_value --> 0
getbit(array,i) fetches the dword containing the bit i and shifts the dword right, so that the bit i becomes the least significant bit. Then, a bitwise and with 1 clears all other bits.
putbit(array, i, v) first of all checks the least significant bit of v; if it is 0, we have to clear the bit, and if it is 1, we have to set it.
To set the bit, we do a bitwise or of the dword that contains the bit and the value of 1 shifted left by bit_index_in_dword: that bit is set, and other bits do not change.
To clear the bit, we do a bitwise and of the dword that contains the bit and the bitwise complement of 1 shifted left by bit_index_in_dword: that value has all bits set to one except the only zero bit in the position that we want to clear.
The macro ends with , 0 because otherwise it would return the value of dword where the bit i is stored, and that value is not meaningful. One could also use ((void)0).
It's a trade-off:
(1) use 1 byte for each 2 bit value - simple, fast, but uses 4x memory
(2) pack bits into bytes - more complex, some performance overhead, uses minimum memory
If you have enough memory available then go for (1), otherwise consider (2).