For the project I'm working on, a "word" is defined as 10 bits long, and according to what my program does I need to update specific groups of bits in this word with binary numbers (up to the limit of each group's length). My problem is that I don't know how to set these bits, and afterwards how to read them back.
For example, "word" is set like this:
bits 0-1 - representing something A - can get values between 0-3.
bits 2-3 - representing something B - can get values between 0-3.
bits 4-5 - C - values 0-3.
bits 6-9 - D - values 0-15.
As my program runs, I need to decide what to fill into each group of bits. After that, when my word is completely full, I need to analyze the results, meaning to go over the full word and understand from bits 0-1 what A represents, from bits 2-3 what B represents, and so on.
Another problem is that bit number 9 is the most significant bit, which means the word fills up from bits 6-9, then 4-5, then 2-3, then 0-1, and is later printed from bit 9 down to bit 0, not like a regular array.
I tried to do it with a struct of bit-fields, but the problem is that while a "word" is always 10 bits long, the subdivision mentioned above is only one example of a "word". It can also be that bits 0-1 represent something and bits 2-9 something else.
I'm a bit lost and don't know how to do it, and I'll be glad if someone can help me with that.
Thanks!
Just model a "word" as a uint16_t and set the appropriate bits.
Something like this:
#include <stdint.h>

typedef uint16_t word;

word word_set_A(word w, uint8_t a)
{
    w &= ~0x3;                    /* clear bits 0-1 */
    return w | (a & 0x3);         /* store A in bits 0-1 */
}

uint8_t word_get_A(word w)
{
    return w & 0x3;               /* read bits 0-1 */
}

word word_set_B(word w, uint8_t b)
{
    w &= ~0xc;                    /* clear bits 2-3 */
    return w | ((b & 0x3) << 2);  /* store B in bits 2-3 */
}
... and so on.
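Since the subdivision of the word can vary, a more general pair of helpers parameterized by bit offset and field width may be more convenient. This is just a sketch (set_field and get_field are names I made up for illustration):

#include <stdint.h>

typedef uint16_t word;

/* Write 'value' into the 'width'-bit field starting at bit 'offset' (bit 0 = LSB). */
static word set_field(word w, unsigned offset, unsigned width, unsigned value)
{
    word mask = (word)(((1u << width) - 1u) << offset);
    return (word)((w & ~mask) | ((value << offset) & mask));
}

/* Read the 'width'-bit field starting at bit 'offset'. */
static unsigned get_field(word w, unsigned offset, unsigned width)
{
    return (w >> offset) & ((1u << width) - 1u);
}

With the layout from the question, D in bits 6-9 would be written as w = set_field(w, 6, 4, d); and read back as d = get_field(w, 6, 4); a different layout only changes the offset/width arguments.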
It has come to my attention that there is no builtin structure for a single bit in C. There is (unsigned) char and int, which are 8 bits (one byte), and long which is 64+ bits, and so on (uint64_t, bool...)
I came across this while coding up a huffman tree, and the encodings for certain characters were not necessarily exactly 8 bits long (like 00101), so there was no efficient way to store the encodings. I had to find makeshift solutions such as strings or boolean arrays, but this takes far more memory.
But anyways, my question is more general: is there a good way to store an array of bits, or some sort of user-defined struct? I scoured the web for one but the smallest structure seems to be 8 bits (one byte). I tried things such as int a : 1 but it didn't work. I read about bit fields but they do not simply achieve exactly what I want to do. I know questions have already been asked about this in C++ and if there is a struct for a single bit, but mostly I want to know specifically what would be the most memory-efficient way to store an encoding such as 00101 in C.
If you're mainly interested in accessing a single bit at a time, you can take an array of unsigned char and treat it as a bit array. For example:
unsigned char array[125];
Assuming 8 bits per byte, this can be treated as an array of 1000 bits. The first 16 logically look like this:
---------------------------------------------------------------------------------
byte | 0 | 1 |
---------------------------------------------------------------------------------
bit | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
---------------------------------------------------------------------------------
Let's say you want to work with bit b. You can then do the following:
Read bit b:
value = (array[b/8] & (1 << (b%8))) != 0;
Set bit b:
array[b/8] |= (1 << (b%8));
Clear bit b:
array[b/8] &= ~(1 << (b%8));
Dividing the bit number by 8 gets you the relevant byte. Similarly, taking the bit number mod 8 gives you the relevant bit position inside that byte. You then left-shift the value 1 by that bit position to build the necessary bit mask.
While there is integer division and modulus at work here, the divisor is a power of 2, so any decent compiler should replace them with bit shifting/masking.
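For illustration only, here is roughly the shift/mask form such a compiler would reduce those expressions to (b >> 3 replaces b / 8, and b & 7 replaces b % 8):

value = (array[b >> 3] >> (b & 7)) & 1;  /* read bit b */
array[b >> 3] |=  (1u << (b & 7));       /* set bit b */
array[b >> 3] &= ~(1u << (b & 7));       /* clear bit b */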
It has come to my attention that there is no builtin structure for a single bit in C.
That is true, and it makes sense because essentially no machines have bit-addressable memory.
But anyways, my question is more general: is there a good way to store an array of bits, or some sort of user-defined struct?
One generally uses an unsigned char or another unsigned integer type, or an array of such. Along with that you need some masking and shifting to set or read the values of individual bits.
I scoured the web for one but the smallest structure seems to be 8 bits (one byte).
Technically, the smallest addressable storage unit ([[un]signed] char) could be larger than 8 bits, though you're unlikely ever to see that.
I tried things such as int a : 1 but it didn't work. I read about bit fields but they do not simply achieve exactly what I want to do.
Bit fields can appear only as structure members. A structure object containing such a bitfield will still have a size that is a multiple of the size of a char, so that doesn't map very well onto a bit array or any part of one.
I know questions have already been asked about this in C++ and if there is a struct for a single bit, but mostly I want to know specifically what would be the most memory-efficient way to store an encoding such as 00101 in C.
If you need a bit pattern and a separate bit count -- such as if some of the bits available in the bit-storage object are not actually significant -- then you need a separate datum for the significant-bit count. If you want a data structure for a small but variable number of bits, then you might go with something along these lines:
struct bit_array_small {
    unsigned char bits;      /* the bit pattern itself */
    unsigned char num_bits;  /* how many of those bits are significant */
};
Of course, you can make that larger by choosing a different data type for the bits member and, maybe, the num_bits member. I'm sure you can see how you might extend the concept to handling arbitrary-length bit arrays if you should happen to need that.
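As a rough sketch of how such a structure might be filled one bit at a time (push_bit is my own illustrative helper, not part of the answer above):

/* Append one bit (0 or 1) to the small bit array; assumes num_bits is currently < 8. */
static void push_bit(struct bit_array_small *a, int bit)
{
    if (bit)
        a->bits |= (unsigned char)(1u << a->num_bits);
    a->num_bits++;
}

Pushing the bits of 00101 in order (0, 0, 1, 0, 1) stores them at positions 0 through 4 with num_bits equal to 5; whether position 0 counts as the first or last bit of the code is just a convention to pick and stick with.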
If you really want the most memory efficiency, you can encode the Huffman tree itself as a stream of bits. See, for example:
https://www.siggraph.org/education/materials/HyperGraph/video/mpeg/mpegfaq/huffman_tutorial.html
Then just encode those bits as an array of bytes, with a possible waste of 7 bits.
But that would be a horrible idea. For the structure in memory to be useful, it must be easy to access. You can still do that very efficiently. Let's say you want to encode up to 12-bit codes. Use a 16-bit integer and bitfields:
struct huffcode {
    uint16_t length: 4,
             value: 12;
};
C will store this as a single 16-bit value, and allow you to access the length and value fields separately. The complete Huffman node would also contain the input code value, and tree pointers (which, if you want further compactness, can be integer indices into an array).
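A minimal sketch of how that packed representation behaves in practice (assuming the compiler packs both bit-fields into a single 16-bit unit, which is typical but not guaranteed by the standard):

#include <stdint.h>
#include <stdio.h>

struct huffcode {
    uint16_t length: 4,
             value: 12;
};

int main(void)
{
    struct huffcode h = { .length = 5, .value = 0x005 };  /* the 5-bit code 00101 */
    printf("sizeof(struct huffcode) = %zu\n", sizeof h);  /* typically prints 2 */
    printf("length = %u, value = 0x%03x\n", (unsigned)h.length, (unsigned)h.value);
    return 0;
}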
You can make your own bit array in no time.
#include <string.h>   /* for memset */

#define ba_set(ptr, bit)    { (ptr)[(bit) >> 3] |=  (char)(1 << ((bit) & 7)); }
#define ba_clear(ptr, bit)  { (ptr)[(bit) >> 3] &= (char)(~(1 << ((bit) & 7))); }
#define ba_get(ptr, bit)    ( ((ptr)[(bit) >> 3] & (char)(1 << ((bit) & 7))) ? 1 : 0 )
#define ba_setbit(ptr, bit, value) { if (value) { ba_set((ptr), (bit)) } else { ba_clear((ptr), (bit)); } }
#define BITARRAY_BITS (120)
int main()
{
    char mybits[(BITARRAY_BITS + 7) / 8];
    memset(mybits, 0, sizeof(mybits));
    ba_setbit(mybits, 33, 1);
    if (!ba_get(mybits, 33))
        return 1;
    return 0;
}
Question:
A device is connected to a computer that can return various temperatures related to the weather. The GetTemps function returns the daily high temperature in bits 20–29, the daily low temperature in bits 10–19, and the current temperature in bits 0–9, all as 10-bit integers. In the following program fragment, lines 8 and 9 are incomplete. They should store the high temperature in highTemp and the current temperature in currTemp, so that these temperatures can be printed in line 10. Please complete lines 8 and 9, and implement the code efficiently.
#include <stdio.h>                                           // Line 1
                                                             // Line 2
int GetTemps(void);                                          // Line 3
                                                             // Line 4
int main( ) {                                                // Line 5
    int w, highTemp, currTemp;                               // Line 6
    w = GetTemps( );                                         // Line 7
    highTemp = <QUESTION 1>                                  // Line 8
    currTemp = <QUESTION 2>                                  // Line 9
    printf( "High: %d\nCurrent: %d\n", highTemp, currTemp ); // Line 10
    return 0;
}
The answers are
highTemp = w>>20
currtemp = w<<20
The correct right shift (for highTemp) and bitmask (for lowTemp) operations.
As given by my class TA
Can someone explain this answer to me? I think I understand how highTemp is w>>20, but if w is a 30-bit value [29...0], then wouldn't a shift to the left push bits 0-9 upward and effectively multiply them by 2^20? That seems too large to me.
Your TA's answer is wrong. Here's how to solve the problem.
When several fields are packed into a single value, it's safest to isolate the bits you want first (by masking). For example, given that the high temp is in bits 20-29, we need a mask to isolate those bits.
const int high_mask = 0x3FF00000; // 10-bit integer in bits 20-29
const int high_bits = w & high_mask; // select the bits we care about
To convert that to a temperature, we need to shift the result so that bit 20 lands in bit 0.
const int high_temp = high_bits >> 20; // shift them "down"
But this is not entirely right! We haven't accounted for negative temperatures. An arithmetic right-shift will preserve the sign of the value, but we've zeroed out the high bits (of the 32-bit integer). Even if we hadn't masked those bits out, the problem doesn't say what values are in those top bits, so we shouldn't make assumptions.
The easiest way to account for the sign is to first shift left, so that our top bit is in the top position. Then, when we shift right, the processor will do the appropriate sign extension. Assuming ints are 32 bits ...
const int high_temp = (high_bits << 2) >> 22; // shift down, preserving the sign
Note that the right shift value must account for the left shift we did first.
(Technically, if we shift to lop off the top bits and then shift the other way to lop off the lowest bits, the masking is no longer necessary, but conceptually, it can help with comprehension.)
Also note that precedence with bit-wise operators can be surprising. So if you try to combine these steps into a single expression, you'll likely have to add some parentheses.
A similar process can extract the current temperature (and any int value embedded in a larger integral type). You just have to tweak the constants.
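For instance, a sketch of the same mask-and-sign-extend steps for the current temperature in bits 0-9 (again assuming 32-bit ints) could be:

const int curr_mask = 0x000003FF;              // 10-bit integer in bits 0-9
const int curr_bits = w & curr_mask;           // select the bits we care about
const int curr_temp = (curr_bits << 22) >> 22; // sign-extend the 10-bit value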
I can't figure out how to solve the following problem. Assume I have a string or an array of integer-type variables (uchar, char, int, whatever). Each of these data types is 1 byte long or more.
I would like to read from such an array, but in pieces that are smaller than 1 byte, e.g. 3 bits (values 0-7). I tried a loop like
cout << ( (tab[index] >> lshift & lmask) | (tab[index+offset] >> rshift & rmask) );
but figuring out how to set these variables is beyond me. What is the methodology for solving such a problem?
Sorry if this question has been asked before, but searching gave no answer.
I am sure this is not the best solution, as there are some inefficiencies in the code that could be eliminated, but I think the idea is workable. I only tested it briefly:
#include <stdio.h>
#include <stdint.h>

void bits(uint8_t *src, int arrayLength, int nBitCount) {
    int idxByte = 0;       // byte index
    int idxBitsShift = 7;  // bit index: start at the high bit

    // walk through the array, computing bit sets
    while (idxByte < arrayLength) {
        // compute a single bit set
        int nValue = 0;
        for (int i = 0; i < nBitCount; i++) {
            // take the current bit and append it to the value
            nValue = (nValue << 1) | ((src[idxByte] >> idxBitsShift) & 1);
            if ((--idxBitsShift) < 0) {
                idxBitsShift = 7;  // restart at the high bit of the next byte
                if (++idxByte >= arrayLength)
                    break;
            }
        }
        // print it
        printf("%d ", nValue);
    }
}

int main() {
    uint8_t a[] = {0xFF, 0x80, 0x04};
    bits(a, 3, 3);
}
The thing with collecting bits across byte boundaries is a bit of a PITA, so I avoided all that by doing this a bit at a time, and then collecting the bits together in the nValue. You could have smarter code that does this three (or however many) bits at a time, but as far as I am concerned, with problems like this it is usually best to start with a simple solution (unless you already know how to do a better one) and then do something more complicated.
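For comparison, here is a sketch of the "several bits at a time" variant mentioned above (my own illustration, assuming each group is at most 8 bits wide; for simplicity it always reads one byte past the group's starting byte, so the caller must guard the last group or pad the array by one byte):

#include <stdint.h>

/* Read the n-bit group starting at absolute bit position bitPos,
   with bit 0 being the most significant bit of src[0]. */
static unsigned read_group(const uint8_t *src, int bitPos, int n)
{
    int byteIdx = bitPos >> 3;  /* which byte the group starts in */
    int offset  = bitPos & 7;   /* how far into that byte it starts */

    /* Load a 16-bit window so the group may straddle a byte boundary. */
    unsigned window = ((unsigned)src[byteIdx] << 8) | src[byteIdx + 1];
    return (window >> (16 - offset - n)) & ((1u << n) - 1u);
}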
In short, the way the data is arranged in memory strictly depends on:
the endianness
the standard used for computation/representation (usually IEEE 754 for floating point)
the type of the given variable
Now, you can't "disassemble" a data structure this way without destroying its meaning; simply put, if you are going to subdivide your variable into "bitfields" you are just picturing an undefined value.
In computer science there are data structures, or information structured in blocks, like many hashing algorithms and hash results, but a numerical value is not stored like that, and you are supposed to know what you are doing to prevent data loss.
Another thing to note is that your notion of "pieces that are smaller than 1 byte" doesn't make much sense; it's also highly intrusive, you are losing abstraction here, and you can easily end up doing something bad.
Here's the best method I could come up with for setting individual bits of a variable:
Assume we need to set the first four bits of variable1 (a char or other byte long variable) to 1010
variable1 &= 0b00001111; //Zero the first four bits
variable1 |= 0b10100000; //Set them to 1010; it's important that any unaffected bits be zero
This could be extended to whatever bits are desired by placing zeros in the first number corresponding to the bits you wish to set (the first four in the example's case), and placing zeros in the second number corresponding to the bits you wish to leave untouched (the last four in the example's case). The second number could also be derived by bit-shifting your desired value by the appropriate number of places (which would have been four in the example's case).
In response to your comment this can be modified as follows to accommodate for increased variability:
For this operation we will need two shifts, assuming you wish to be able to modify non-starting and non-ending bits. There are two sets of unaffected bits in this case: the first set (from the left) and the second set. If you wish to modify four bits while skipping the first bit from the left (so a single byte looks like 1 xxxx 111, where xxxx are the bits being modified), the first shift would be 7 and the second shift would be 5.
variable1 &= ( ( 0b11111111 << shift1 ) | 0b11111111 >> shift2 );
Next the value we wish to assign needs to be shifted and or'ed in.
However, we will need a third shift to account for how many bits we want to set.
This shift (we'll call it shift3) is shift1 minus the number of bits we wish to modify (as previously mentioned 4).
variable1 |= ( value << shift3 );
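Putting the description together for that example (shift1 = 7, shift2 = 5, shift3 = 3; the variable names follow the text above, and this is just a sketch):

#include <stdio.h>

int main(void)
{
    unsigned char variable1 = 0xFF;  /* 11111111 */
    unsigned char value = 0xA;       /* the four bits 1010 we want to write */
    int shift1 = 7, shift2 = 5, shift3 = 3;

    variable1 &= (unsigned char)((0xFF << shift1) | (0xFF >> shift2)); /* keep bit 7 and bits 0-2 */
    variable1 |= (unsigned char)(value << shift3);                     /* write 1010 into bits 3-6 */

    printf("%02X\n", variable1);     /* prints D7, i.e. 1 1010 111 */
    return 0;
}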
Disclaimer: I am asking these questions in relation to an assignment. The assignment itself calls for implementing a bitmap and doing some operations with that, but that is not what I am asking about. I just want to understand the concepts so I can try the implementation for myself.
I need help understanding bitmaps/bit arrays and bitwise operations. I understand the basics of binary and how left/right shift work, but I don't know exactly how that use is beneficial.
Basically, I need to implement a bitmap to store the results of a prime sieve (of Eratosthenes.) This is a small part of a larger assignment focused on different IPC methods, but to get to that part I need to get the sieve completed first. I've never had to use bitwise operations nor have I ever learned about bitmaps, so I'm kind of on my own to learn this.
From what I can tell, bitmaps are arrays of bits of a certain size, right? By that I mean you could have an 8-bit array or a 32-bit array (in my case, I need to find the primes for a 32-bit unsigned int, so I'd need the 32-bit array.) So if this is an array of bits, 32 of them to be specific, then we're basically talking about a string of 32 1s and 0s. How does this translate into a list of primes? I figure that one method would evaluate the binary number and save it to a new array as decimal, so all the decimal primes exist in one array, but that seems like you're using too much data.
Do I have the gist of bitmaps? Or is there something I'm missing? I've tried reading about this around the internet but I can't find a source that makes it clear enough for me...
Suppose you have a list of primes: {3, 5, 7}. You can store these numbers as a character array: char c[] = {3, 5, 7} and this requires 3 bytes.
Instead let's use a single byte such that each set bit indicates that the number is in the set. For example, 01010100. If we can set the bit we want and later test it, we can store the same information in a single byte. To set it:
char b = 0;
// want to set `3` so shift 1 twice to the left
b = b | (1 << 2);
// also set `5`
b = b | (1 << 4);
// and 7
b = b | (1 << 6);
And to test these numbers:
// is 3 in the map:
if (b & (1 << 2)) {
    // it is in...
}
You are going to need a lot more than 32 bits.
You want a sieve for up to 2^32 numbers, so you will need a bit for each one of those. Each bit will represent one number, and will be 0 if the number is prime and 1 if it is composite. (You could save a bit or two by noting that the first bit worth representing is 2, since 0 and 1 are neither prime nor composite, but it is easier to waste those bits.)
2^32 = 4,294,967,296
Divide by 8
536,870,912 bytes, or 1/2 GB.
So you will want an array of 2^29 bytes, or 2^27 4-byte words, or whatever you decide is best, and also a method for manipulating the individual bits stored in the chars (ints) in the array.
It sounds like, eventually, you are going to have several threads or processes operating on this shared memory. You may need to store it all in a file if you can't allocate all that memory to yourself.
Say you want to find the bit for x. Then let a = x / 8 and b = x - 8 * a. Then the bit is at arr[a] & (1 << b). (Avoid the modulus operator % wherever possible.)
//mark composite
a = x / 8;
b = x - 8 * a;
arr[a] |= 1 << b;
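A rough sketch of how the sieve itself could use that marking scheme (scaled down from 2^32, and using shifts instead of / and % as suggested; the LIMIT constant and the calloc-based allocation are my own choices for the example):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define LIMIT 1000000u   /* scaled-down range; the real assignment needs 2^32 bits */

int main(void)
{
    unsigned char *arr = calloc(LIMIT / 8 + 1, 1);   /* bit = 0: prime, bit = 1: composite */
    if (!arr) return 1;

    for (uint64_t x = 2; x * x < LIMIT; x++) {
        if (arr[x >> 3] & (1u << (x & 7)))           /* already marked composite */
            continue;
        for (uint64_t m = x * x; m < LIMIT; m += x)  /* mark all multiples composite */
            arr[m >> 3] |= 1u << (m & 7);
    }

    unsigned count = 0;
    for (uint64_t x = 2; x < LIMIT; x++)
        if (!(arr[x >> 3] & (1u << (x & 7))))
            count++;
    printf("%u primes below %u\n", count, LIMIT);    /* 78498 primes below 1000000 */

    free(arr);
    return 0;
}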
This sounds like a fun assignment!
A bitmap allows you to construct a large predicate function over the range of numbers you're interested in. If you just have a single 8-bit char, you can store Boolean values for each of the eight values. If you have 2 chars, it doubles your range.
So, say you have a bitmap that already has this information stored, your test function could look something like this:
#include <stdbool.h>
#include <stddef.h>

bool num_in_bitmap (int num, char *bitmap, size_t sz) {
    if ((size_t)(num/8) >= sz) return 0;
    return (bitmap[num/8] >> (num%8)) & 1;
}
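The snippet above only tests membership; a matching setter might look like this (num_set_in_bitmap is my own name for the sketch):

#include <stddef.h>

/* Mark 'num' as present in the bitmap; out-of-range values are ignored. */
void num_set_in_bitmap (int num, char *bitmap, size_t sz) {
    if ((size_t)(num/8) >= sz) return;
    bitmap[num/8] |= 1 << (num%8);
}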
I want a hash function that takes a long number (64 bits) and produces a result of 10 bits. What is the best hash function for this purpose? Inputs are basically addresses of variables (addresses are 64 bits, or 8 bytes, on Linux), so my hash function should be optimized for that.
I would say something like this:
#include <stdint.h>

uint32_t hash(uint64_t x)
{
    x >>= 3;
    return (x ^ (x>>10) ^ (x>>20)) & 0x3FF;
}
The least significant 3 bits are not very useful, as most variables are 4-byte or 8-byte aligned, so we remove them.
Then we take the next 30 bits and mix them together (XOR) in blocks of 10 bits each.
Naturally, you could also mix in (x>>30)^(x>>40)^(x>>50), but I'm not sure whether that will make any difference in practice.
I wrote a toy program to see some real addresses on the stack, data area, and heap. Basically I declared 4 globals, 4 locals and did 2 mallocs. I dropped the last two bits when printing the addresses. Here is an output from one of the runs:
20125e8
20125e6
20125e7
20125e4
3fef2131
3fef2130
3fef212f
3fef212c
25e4802
25e4806
What this tells me:
The LSB in this output (3rd bit of the address) is frequently 'on' and 'off'. So I wouldn't drop it when calculating the hash. Dropping 2 LSBs seems enough.
We also see that there is more entropy in the lower 8-10 bits. We must use that when calculating the hash.
We know that on a 64 bit machine, virtual addresses are never more than 48 bits wide.
What I would do next:
/* Drop two LSBs. */
a >>= 2;
/* Get rid of the MSBs. Keep 46 bits. */
a &= 0x3fffffffffff;
/* Get the 14 MSBs and fold them in to get a 32 bit integer.
The MSBs are mostly 0s anyway, so we don't lose much entropy. */
msbs = (a >> 32) << 18;
a ^= msbs;
Now we pass this through a decent 'half avalanche' hash function, instead of rolling our own. 'Half avalanche' means each bit of the input gets a chance to affect bits at the same position and higher:
uint32_t half_avalanche( uint32_t a)
{
    a = (a+0x479ab41d) + (a<<8);
    a = (a^0xe4aa10ce) ^ (a>>5);
    a = (a+0x9942f0a6) - (a<<14);
    a = (a^0x5aedd67d) ^ (a>>3);
    a = (a+0x17bea992) + (a<<7);
    return a;
}
For a 10-bit hash, use the 10 MSBs of the uint32_t returned. The hash function continues to work fine if you pick N MSBs for an N-bit hash, effectively doubling the bucket count with each additional bit.
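So, putting it together under the assumptions above, the 10-bit bucket index would be something like:

uint32_t h = half_avalanche((uint32_t)a);
int bucket = h >> 22;   /* keep the 10 MSBs: values 0..1023 */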
I was a little bored, so I wrote a toy benchmark for this. Nothing fancy, it allocates a bunch of memory on the heap and tries out the hash I described above. The source can be had from here. An example result:
1024 buckets, 256 values generated, 29 collisions
1024 buckets, 512 values generated, 103 collisions
1024 buckets, 1024 values generated, 370 collisions
Next: I tried out the other two hashes answered here. They both have similar performance. Looks like: Just pick the fastest one ;)
Best for most distributions is mod by a prime; 1021 is the largest 10-bit prime. There's no need to strip low bits.
#include <stdint.h>

static inline int hashaddress(void *v)
{
    return (uintptr_t)v % 1021;
}
If you think performance might be a concern, have a few alternates on hand and race them in your actual program. Microbenchmarks are a waste; a difference of a few cycles is almost certain to be swamped by cache effects, and size matters.