I was going through the source of strlen for glibc. They have used magic bits to find the length of a string. Can someone please explain how it works?
Thank you
Let's say this function is looking through a string -- 4 bytes at a time, as explained by the comments (we assume long ints are 4 bytes) -- and the current "chunk" looks like this:
  '\3'     '\3'     '\0'     '\3'
00000011 00000011 00000000 00000011    (as a string: "\x03\x03\x00\x03")
The strlen function is just looking for the first zero byte in this string. It first determines, for each 4-byte chunk, whether there is any zero byte in it at all, using the magic_bits shortcut: it adds the chunk to this value:
01111110 11111110 11111110 11111111
Adding any non-zero bytes to this value will cause the 1's to overflow into the holes marked by zeroes, by propagating the carries. For our chunk, it'd look like this:
  11111111 111111          1 1111111    Carries
  00000011 00000011 00000000 00000011   Chunk
  01111110 11111110 11111110 11111111   Magic bits
+ -----------------------------------
  10000010 00000001 11111111 00000010
  ^      ^        ^        ^
(The hole bits are marked by ^'s.)
And, from the comments:
/* Look at only the hole bits. If any of the hole bits
are unchanged, most likely one of the bytes was a
zero. */
If there are no zeros in the chunk, the propagating carries change all of the hole bits (this is a heuristic, hence the "most likely" in the comment). A zero byte, however, stops the carry chain, so one hole bit is left unchanged; the actual test XORs the sum with the complement of the chunk and masks it with ~magic_bits to spot exactly that, and we can then go check which byte was the zero.
Essentially, it speeds up the strlen calculation by applying some bit addition magic to 4-byte chunks to scan for zeroes, before narrowing down the search to single byte comparisons.
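If it helps to see the shape of the whole loop, here is a rough C sketch of the idea, not the actual glibc source: it assumes the pointer is already 4-byte aligned and uses a 32-bit word (uint32_t), and the name strlen_sketch is made up for the example; the real code also has an alignment prologue and an 8-byte variant of the constant.
#include <stddef.h>
#include <stdint.h>

/* A rough sketch of the idea, NOT the actual glibc source: scan the string
   four bytes at a time with the 0x7efefeff trick. Assumes str is already
   4-byte aligned. */
size_t strlen_sketch(const char *str)
{
    const uint32_t magic_bits = 0x7efefeffu;
    const uint32_t *longword_ptr = (const uint32_t *)str;

    for (;;)
    {
        uint32_t longword = *longword_ptr++;

        /* Hole-bit test: if any hole bit is unchanged by the addition,
           one of the four bytes is probably zero. */
        if (((longword + magic_bits) ^ ~longword) & ~magic_bits)
        {
            /* Possible zero byte: test the four bytes one by one. */
            const char *cp = (const char *)(longword_ptr - 1);
            if (cp[0] == 0) return (size_t)(cp - str);
            if (cp[1] == 0) return (size_t)(cp - str) + 1;
            if (cp[2] == 0) return (size_t)(cp - str) + 2;
            if (cp[3] == 0) return (size_t)(cp - str) + 3;
            /* False positive from the heuristic: keep scanning. */
        }
    }
}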
The idea is, instead of comparing one byte at a time against zero, to check one unsigned long object at a time and ask whether any of its bytes is zero. This means checking 8 bytes at a time when sizeof (unsigned long) is 8.
With bit hacks, there is a well-known expression that can quickly determine whether one of the bytes of a word compares equal to zero. If one of the bytes is equal to zero, the bytes of the object are then tested individually to find the first one that is zero. The advantage of using bitwise operations is that it reduces the number of branch instructions.
The bit hack expression to check whether one of the bytes of a multi-byte object is equal to zero is explained on the famous Stanford Bit Twiddling Hacks page, under
Determine if a word has a zero byte
http://graphics.stanford.edu/~seander/bithacks.html#ZeroInWord
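For a 32-bit word, the expression from that page looks like this (the comments are mine):
#include <stdint.h>

/* For a 32-bit value v: the result is nonzero when at least one of the
   four bytes of v is zero. */
#define haszero(v) (((v) - 0x01010101UL) & ~(v) & 0x80808080UL)

/* e.g. haszero(0x01020304u) == 0, but haszero(0x01020004u) != 0 */
glibc's old C implementation uses the 0x7efefeff addition trick shown above instead, but both approaches boil down to a handful of arithmetic and bitwise operations per word plus a single branch.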
Related
So I want to store random bits of length 1 to 8 (a BYTE) in memory. I know that computers can't address individual bits in memory and that we must store at least a BYTE of data on most modern machines. I have been doing some research on this but haven't come across any useful material. I need to find a way to store these bits so that, for example, when reading the bits back from memory, 0 must NOT be evaluated as 00, 0000 or 00000000. To further explain, 010, for example, must NOT be read back or evaluated as 00000010. Numbers should be unique based on their value as well as their bit count.
Some more examples;
1 ≠ 00000001
10 ≠ 00000010
0010 ≠ 00000010
10001 ≠ 00010001
And so on...
Also, one thing I want to point out again is that the bit size is always between 1 and 8 (inclusive) and is NOT a fixed number. I'm using C for this problem.
So you want to store bits in memory and read them back without knowing how long they are. This is not possible. (It's not possible with bytes either)
Imagine if you could do this. Then we could compress a file by, for example, saying that "0" compresses to "0" and "1" compresses to "00". After this "compression" (which would actually make the file bigger) we have a file with only 0's in it. Then, we compress the file with only 0's in it by writing down how many 0's there are. Amazing! Any 2GB file compresses to only 4 bytes. But we know it's impossible to compress every 2GB file into 4 bytes. So something is wrong with this idea.
You can read several bits from memory but you need to know how many you are reading. You can also do it if you don't know how many bits you are reading, but the combinations don't "overlap". So if "01" is a valid combination, then you can't have "010" because that would overlap "01". But you could have "001". This is called a prefix code and it is used in Huffman coding, a type of compression.
Of course, you could also save the length before each number. So you could save "0" as "0010" where the "001" means how many bits long the number is. With 3-digit lengths, you could only have up to 7-bit numbers. Or 8-bit numbers if you subtract 1 from the length, in which case you can't have zero-bit numbers. (so "0" becomes "0000", "101" becomes "010101", etc)
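As a rough illustration of that length-prefix scheme (all names and the exact bit layout here are made up for the example, not a standard API), a small C sketch that packs a 3-bit length-minus-one followed by the value's bits into a byte buffer, and reads it back, might look like this:
#include <stdint.h>
#include <stdio.h>

/* A tiny MSB-first bit writer/reader over a byte buffer. The encoding
   matches the scheme above: 3 bits holding (length - 1), then the value's
   bits, so lengths 1..8 are representable. */
static void put_bit(uint8_t *buf, size_t *pos, int bit)
{
    if (bit)
        buf[*pos / 8] |= (uint8_t)(0x80u >> (*pos % 8));
    (*pos)++;
}

static int get_bit(const uint8_t *buf, size_t *pos)
{
    int bit = (buf[*pos / 8] >> (7 - *pos % 8)) & 1;
    (*pos)++;
    return bit;
}

static void put_bits(uint8_t *buf, size_t *pos, unsigned value, int nbits)
{
    for (int i = nbits - 1; i >= 0; i--)
        put_bit(buf, pos, (value >> i) & 1);
}

static unsigned get_bits(const uint8_t *buf, size_t *pos, int nbits)
{
    unsigned value = 0;
    for (int i = 0; i < nbits; i++)
        value = (value << 1) | (unsigned)get_bit(buf, pos);
    return value;
}

int main(void)
{
    uint8_t buf[16] = {0};
    size_t w = 0, r = 0;

    /* Write "10" (2 bits) and "0010" (4 bits): length-1 first, then the bits. */
    put_bits(buf, &w, 2 - 1, 3);  put_bits(buf, &w, 0x2u, 2);
    put_bits(buf, &w, 4 - 1, 3);  put_bits(buf, &w, 0x2u, 4);

    /* Read them back: the stored lengths keep "10" and "0010" distinct. */
    for (int n = 0; n < 2; n++) {
        int len = (int)get_bits(buf, &r, 3) + 1;
        unsigned v = get_bits(buf, &r, len);
        printf("length %d, value %u\n", len, v);
    }
    return 0;
}
With this layout each value costs its own length plus three bits, and values such as 10 and 0010 stay distinct because their stored lengths differ.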
You can manipulate bits using the bit-shift operators or bit-fields.
Make sure you understand the concept of endianness, which is machine dependent, and keep in mind that bit-fields have to live inside a struct, and the struct may take up more bytes than the bits you declare because of padding and alignment.
And bit fields can be very tricky.
Good luck!
If you just need to make sure a given binary number is evaluated properly, then you have two choices I can think of. You could store the bit count of each number alongside the number itself, which wouldn't be very efficient.
But you could also store all the binary numbers as 8-bit values and then, when processing each individual number, walk through its bits to find its length. That way you only keep the length of a single number at a time.
Here is some quick code (uint8_t comes from <stdint.h>), hopefully it's clear:
uint8_t rightNumber = 2; // which is 10 in binary, or 00000010
int rightLength = 2;     // since it is 2 bits long
uint8_t bn = mySuperbBinaryValueIWantToTest;
int i;
for (i = 7; i > 0; i--)
{
    if ((bn & (1 << i)) != 0) break; // stop at the highest set bit
}
int length = i + 1;
if (bn == rightNumber && length == rightLength) printf("Correct number\n");
else printf("Incorrect number\n");
Keep in mind you can also use the same technique to compute the number of bits in the right value instead of precomputing it, and the same approach works if you are comparing against arbitrary values.
Hope this helped; if not, feel free to criticize or re-explain your problem.
I stumbled upon this answer regarding the use of the magic number 0x7EFEFEFF in strlen's optimization, and here is what the top answer says:
Look at the magic bits. Bits number 16, 24 and 31 are 1. 8th bit is 0.
8th bit represents the first byte. If the first byte is not zero, 8th bit becomes 1 at this point. Otherwise it's 0.
16th bit represents the second byte. Same logic.
24th bit represents the third byte.
31st bit represents the fourth byte.
However, if I calculate result = ((a + magic) ^ ~a) & ~magic with a = 0x100, I find that result = 0x81010100, meaning that according to the top answerer, the second byte of a equals 0, which is obviously false.
What am I missing?
Thanks!
The bits only tell you if a byte is zero if the lower bytes are non-zero -- so it can only tell you the FIRST 0 byte, but not about bytes after the first 0.
bit8=1 means first byte is zero. Other bytes, unknown
bit8=0 means first byte is non-zero
bit8=0 & bit16=1 means second byte is zero, higher bytes unknown
bit8=0 & bit16=0 means first two bytes are non-zero.
Also, the last bit (bit31) only tells you about 7 bits of the last byte (and only if the first 3 bytes are non-zero) -- if it is the only bit set then the last byte is 0 or 128 (and the rest are non-zero).
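To see that in code, here is a small hedged sketch (my own naming, not glibc's) that evaluates the same result expression and then trusts only the lowest set hole bit, exactly as described above:
#include <stdint.h>
#include <stdio.h>

/* Evaluate the result expression and trust only the lowest set hole bit;
   bit 31 is only a partial check on the top byte, as noted above.
   Returns the index of the first zero byte (0 = low-order byte), or -1. */
static int first_zero_byte(uint32_t a)
{
    const uint32_t magic = 0x7efefeffu;
    uint32_t result = ((a + magic) ^ ~a) & ~magic;

    if (result & 0x00000100u) return 0;  /* bit 8: first byte is zero       */
    if (result & 0x00010000u) return 1;  /* bit 16: second byte is zero     */
    if (result & 0x01000000u) return 2;  /* bit 24: third byte is zero      */
    if (result & 0x80000000u) return 3;  /* bit 31: fourth byte is 0 or 128 */
    return -1;
}

int main(void)
{
    /* a = 0x100: result is 0x81010100; bit 8 is the lowest set hole bit, so
       only "first byte is zero" may be concluded. The higher hole bits that
       are also set say nothing about the higher bytes. */
    printf("%d\n", first_zero_byte(0x100u));      /* prints 0 */
    printf("%d\n", first_zero_byte(0x01020304u)); /* prints -1: no zero byte */
    return 0;
}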
I am trying to understand how the MD5 hashing algorithm work and have been reading the Wikipedia article about it.
After one pads the message so that the length of the message (in bits) is congruent to 448 mod 512, one is supposed to
append length mod 2^64 to message
From what I can understand this means to append the message with 64 bits representing the length of the message. I am a bit confused about how this is done.
My first questions is: is this the length of the original unappended message or the length that one gets after having appended it with the 1 followed by zeros?
My second question is: Is the length the length in bytes? That is, if my message is one byte, would I append the message with 63 0's and then a 1. Or if the message is 10 bytes, then I would append the message with 60 0's and 1010.
The length of the unpadded message. From the MD5 RFC, 3.2:
A 64-bit representation of b (the length of the message before the
padding bits were added) is appended to the result of the previous
step. In the unlikely event that b is greater than 2^64, then only
the low-order 64 bits of b are used. (These bits are appended as two
32-bit words and appended low-order word first in accordance with the
previous conventions.)
The length is in bits. See MD5 RFC, 3.1:
The message is "padded" (extended) so that its length (in bits) is
congruent to 448, modulo 512. That is, the message is extended so
that it is just 64 bits shy of being a multiple of 512 bits long.
Padding is always performed, even if the length of the message is
already congruent to 448, modulo 512.
The MD5 spec is far more precise than the Wikipedia article. I always suggest reading the spec over the Wiki page if you want implementation-level detail.
if my message is one byte, would I append the message with 63 0's and then a 1. Or if the message is 10 bytes, then I would append the message with 60 0's and 1010.
Not quite. Don't forget the obligatory bit value "1" that is always appended at the start of the padding. From the spec:
Padding is performed as follows: a single "1" bit is appended to the
message, and then "0" bits are appended so that the length in bits of
the padded message becomes congruent to 448, modulo 512. In all, at
least one bit and at most 512 bits are appended.
This reference C implementation (disclaimer: my own) of MD5 may be of help, it's written so that hopefully it's easy to follow.
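For what it's worth, here is a rough C sketch of just the padding step as the two quoted paragraphs describe it (my own helper name and buffer convention, not taken from the RFC's reference code):
#include <stddef.h>
#include <stdint.h>

/* A sketch of just the padding step. buf must already contain the msg_len
   message bytes and have room for up to 72 more. Returns the padded length,
   always a multiple of 64 bytes (512 bits). */
static size_t md5_pad(uint8_t *buf, size_t msg_len)
{
    uint64_t bit_len = (uint64_t)msg_len * 8;  /* length of the ORIGINAL message, in bits */
    size_t len = msg_len;

    buf[len++] = 0x80;           /* the single "1" bit, followed by seven "0" bits */
    while (len % 64 != 56)       /* zero bits until the length is 448 mod 512 bits */
        buf[len++] = 0x00;

    for (int i = 0; i < 8; i++)  /* the 64-bit length, low-order byte first */
        buf[len++] = (uint8_t)(bit_len >> (8 * i));

    return len;
}
So for a one-byte message the padding is the 0x80 byte, 54 zero bytes, and then eight length bytes encoding the value 8 (the original length in bits), giving a 64-byte block in total.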
Came across this question in one of the interview samples. A 16-byte aligned allocation has already been answered in How to allocate aligned memory only using the standard library?
But I have a specific question about the mask used to zero out the last 4 bits. The mask ~0x0F is used so that the resulting address is divisible by 16. What should be done to achieve the same for 32-byte alignment/divisibility?
First, the question you referred to is 16-byte alignment, not 16-bit alignment.
Regarding your actual question, you just want to mask off 5 bits instead of 4 to make the result 32-byte aligned. So it would be ~0x1F.
To clarify a bit:
To align a pointer to a 32 byte boundary, you want the last 5 bits of the address to be 0. (Since 100000 is 32 in binary, any multiple of 32 will end in 00000.)
0x1F is 11111 in binary. Since it's being applied to a pointer-sized value, it's actually some number of 0's followed by 11111; for example, with 64-bit pointers, it would be 59 0's and 5 1's. The ~ inverts those bits, so ~0x1F is 59 1's followed by 5 0's.
When you take ptr & ~0x1F, the bitwise & causes all bits that are &'ed with 1 to stay the same, and all bits that are &'ed with 0 to be set to 0. So you end up with the original value of ptr, except that the last 5 bits have been set to 0. What this means is that we've subtracted some number between 0 and 31 in order to make ptr a multiple of 32, which was the goal.
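As a concrete (and hedged) example of putting that mask to work, the usual over-allocate-and-round-up approach for 32-byte alignment might look like this sketch; the helper names are mine and error handling is minimal:
#include <stdint.h>
#include <stdlib.h>

/* Allocate size bytes whose address is a multiple of 32, by over-allocating
   and rounding up with & ~0x1F. The original malloc pointer is stashed just
   before the aligned block so aligned_free32 can recover it. */
static void *aligned_alloc32(size_t size)
{
    void *raw = malloc(size + 31 + sizeof(void *));
    if (raw == NULL)
        return NULL;

    uintptr_t start = (uintptr_t)raw + sizeof(void *);
    uintptr_t aligned = (start + 31) & ~(uintptr_t)0x1F;  /* round UP to 32 */

    ((void **)aligned)[-1] = raw;   /* remember what to free later */
    return (void *)aligned;
}

static void aligned_free32(void *p)
{
    if (p != NULL)
        free(((void **)p)[-1]);
}
Note that an allocator has to round up rather than just mask down, since applying ptr & ~0x1F on its own could move the pointer below the start of the allocation.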
I want to get the designated byte from a 32 bit integer. I am getting wrong values but I don't know why.
The restrictions to this problem are:
Must use signed bits, and I can't use multiplication.
I specifically need to know what is wrong with the function as it's below.
Here is the function:
int retrieveByteFromWord(int word, int byte)
{
    return (word >> (byte << 3)) & 0xFF;
}
ex:
           (3)      (2)      (1)      (0)   ------ byte number
In word: 10010011 11001100 00110011 10101000
I want to return byte 2 (1100 1100).
retrieveByteFromWord(word, 2) ---- gives: 1100 1100
But for some cases it's wrong and it won't tell me what case.
Any ideas?
Here is the problem:
You just started working for a company that is implementing a set of procedures to operate on a data structure where 4 signed bytes are packed into a 32 bit unsigned. Bytes within the word are numbered from 0(LSB) to 3(MSB). You have been assigned the task of implementing a function for a machine using 2's complement arithmetic and arithmetic right shifts with the following prototype:
typedef unsigned packed_t;
int xbyte(packed_t word, int bytenum);
This is the previous employee's attempt, which got him fired for being wrong:
int xbyte(packed_t word, int bytenum)
{
    return (word >> (bytenum << 3)) & 0xFF;
}
A) What is wrong with the code?
B) Write a correct implementation using only left and right shifts and one subtraction.
I have done B but still don't know why A is wrong. Is it because the decimal numbers going in, like 12, 15, 19, 55, get packed into a word, and then when I extract them they aren't the same numbers anymore? It might be, so I am going to run some tests real fast...
As this is homework I won't give you a full answer, but I'll point you in the right direction. Your problem statement says that:
4 signed bytes are packed into a 32 bit unsigned.
When you bitwise-AND a 32-bit signed integer with 0xFF, the most significant bit (i.e. the sign bit) of the result is always 0, so the original function never returns a negative value regardless of the input.
By way of example...
When you say "retrieveByteFromWord(word, 2) ---- gives: 11001100" you're wrong.
Your return type is a 32-bit integer, not an 8-bit integer. You're not returning 11001100; you're returning 00000000 00000000 00000000 11001100.
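A quick hedged demo of that point (the packed value and names are made up for illustration):
#include <stdio.h>

typedef unsigned packed_t;

/* The original (flawed) extractor, reproduced here only to show the bug. */
static int xbyte_flawed(packed_t word, int bytenum)
{
    return (word >> (bytenum << 3)) & 0xFF;
}

int main(void)
{
    packed_t w = 0xFF00007Fu;   /* byte 3 holds -1, byte 0 holds 127 */

    printf("%d\n", xbyte_flawed(w, 0));  /* 127: positive bytes come out fine */
    printf("%d\n", xbyte_flawed(w, 3));  /* 255: should be -1, the sign is lost */
    return 0;
}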
To work with numbers, use signed integer types such as int.
To work with bits, use unsigned integer types such as unsigned. I.e. let the word argument be of type unsigned. That is what the unsigned types are for.
To multiply by 8, write just *8 (this does not mean that that part of the code is technically wrong, just that it is artificially contrived and needlessly unreadable).
Even better, create a self-describing name for that magic number 8, e.g. *bitsPerByte (the standard library calls it CHAR_BIT, which is not particularly self-describing nor readable).
Finally, at the design level, think about designing your functions so that the code that uses a function of yours – each call – becomes clear and readable. E.g. like int const b = byteAt( 2, x );. That can prevent bugs by e.g. preventing wrong actual argument order, and since designing for readability makes the code easier to read, it reduces time spent on that. :-)
Cheers & hth.,
Works fine for positive numbers. You may want to cast word to unsigned to make it work for integers with the MSB set.
int retrieveByteFromWord(int word, int byte)
{
    return ((unsigned)word >> (byte << 3)) & 0xFF;
}