I found this piece of code online and it works as part of my project, but I'm not sure why. I don't want to just use it without understanding what it does.
type = (packet_data[12] << 8) | packet_data[13];
If I use it I get the proper type (0x0800 for IPv4) and can use it in a comparison to print out whether the packet is IPv4 or IPv6. If I don't use it and try something like:
if(packet_data[12] == 08 && packet_data[13] == 00)
print out IPv4
it doesn't work (compilation errors).
Also if I just print out the values like
printf"%02X", packet_data[12];
printf"%02X", packet_data[13];
it prints out the proper value in the form 0800, but I need to print out that it's an IPv4 type, which is why I need the comparison in the first place. Any advice or explanation of what this does would be much appreciated. Thanks!
if(packet_data[12] == 08 && packet_data[13] == 00)
the right-hand literal operands are read as octal literals by the compiler, since a leading 0 introduces an octal constant in C.
Fortunately for you, 8 is not a valid octal digit, so you get a compilation error instead of a silently wrong comparison.
You mean hexadecimal literals:
if (packet_data[12] == 0x8 && packet_data[13] == 0x0)
this line:
(packet_data[12] << 8) | packet_data[13]
reassembles the big-endian value (the network byte order convention) from the data located at offsets 12 and 13. Both approaches are equivalent in your case, although the latter is more convenient for comparing the value as a whole.
packet_data[12] << 8 takes the first EtherType octet and shifts it 8 bits to the left, into the upper 8 bits of a 16-bit word.
| packet_data[13] takes the second EtherType octet and bitwise-ORs it into the lower 8 bits of that word.
You can then compare it to 0x0800 for IPv4 or 0x86DD for IPv6; see a more complete list on https://en.wikipedia.org/wiki/EtherType#Examples
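To make that concrete, here's a minimal sketch of the whole idea; the packet_data name and the offsets come from your snippet, while everything else (the function name, the uint8_t type) is just illustrative:
#include <stdio.h>
#include <stdint.h>

/* A sketch only: assumes packet_data points at an Ethernet frame,
   so the EtherType sits at offsets 12 and 13 in network byte order. */
void print_ethertype(const uint8_t *packet_data)
{
    unsigned type = (packet_data[12] << 8) | packet_data[13];

    if (type == 0x0800)
        printf("IPv4\n");
    else if (type == 0x86DD)
        printf("IPv6\n");
    else
        printf("other EtherType: 0x%04X\n", type);
}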
As has already been pointed out, 08 doesn't work because numerals starting with 0 are octal constants, and the digit 8 doesn't exist in octal.
type = (packet_data[12] << 8) | packet_data[13];
The << is the bitwise left shift. It takes the binary representation of the variable and shifts its bits to the left, 8 positions in this case.
0x0800 looks like 100000000000 in binary. So in order for 0x0800 to be the type, it has to end up looking like that after | packet_data[13]. That last part is a bitwise OR: it writes a 1 if either the left side or the right side has a 1 in that place, and a 0 otherwise.
So after shifting the value in packet_data[12], the only way for the result to be type 0x0800 (100000000000) is if packet_data[12] was 0x08, so the shifted word is 100000000000, and packet_data[13] is 0x00, so the OR changes nothing:
type = (0x800) <==> ( 100000000000 | 000000000000 )
Also, to get the 0x out of printf() you need to add the # flag. But to get 0x0800 you also need to specify a precision of .04, which means at least 4 digits, padded with leading zeros. However, # won't output the 0x if the value is 0; for that you'd need to hardcode the literal 0x into the format string.
printf("%#02x\n", data);
printf("%#.04x\n", data);
printf("0x%.04x\n", data=0);
Output
0x800
0x0800
0x0000
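For completeness, a compilable version of those three calls might look like this (a sketch; the data variable and its value are assumptions):
#include <stdio.h>

int main(void)
{
    unsigned data = 0x0800;      /* hypothetical EtherType value */

    printf("%#02x\n", data);     /* 0x800  - # adds the 0x prefix */
    printf("%#.04x\n", data);    /* 0x0800 - precision pads to 4 digits */
    printf("0x%.04x\n", 0);      /* 0x0000 - hardcoded 0x survives a zero value */
    return 0;
}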
While reading SystemC code, I found a function that returns an int, like this:
static inline int rp_get_busaccess_response(struct rp_pkt *pkt)
{
    return (pkt->busaccess_ext_base.attributes & RP_BUS_RESP_MASK) >>
           RP_BUS_RESP_SHIFT;
}
pkt->busaccess_ext_base.attributes is defined as uint64_t.
RP_BUS_RESP_MASK and RP_BUS_RESP_SHIFT are defined as:
enum {
RP_RESP_OK = 0x0,
RP_RESP_BUS_GENERIC_ERROR = 0x1,
RP_RESP_ADDR_ERROR = 0x2,
RP_RESP_MAX = 0xF,
};
enum {
RP_BUS_RESP_SHIFT = 8,
RP_BUS_RESP_MASK = (RP_RESP_MAX << RP_BUS_RESP_SHIFT),
};
What is the meaning of this function's return value?
Thanks!
a & b is a bitwise operation: it performs a logical AND on each pair of bits. Say you have 262 & 261; in binary that is 100000110 & 100000101, and the result is 100000100 (260). The logic is that 1 AND 1 results in 1, whereas 1 AND 0 and 0 AND 0 result in 0; these are the normal logical operations, just performed at the bit level:
100000110
& 100000101
-----------
100000100
In (a & b) >> c, the >> shifts the bits of the resulting value of a & b to the right by c positions. For example, taking the previous result 100000100 and a c value of 8, all bits shift right by 8 and the result is 000000001: the leftmost 1 in the original value becomes the rightmost bit, while the 1 that was third from the right is shifted away entirely.
With this knowledge in mind and looking at the function, we can see that RP_BUS_RESP_MASK is a mask covering the bits in positions 9 through 12 counting from the right, i.e. the low four bits of the second byte: RP_RESP_MAX << RP_BUS_RESP_SHIFT translates to 1111 << 8, which results in 111100000000. The bitwise & against this mask preserves the bit values in that range and sets every other bit of pkt->busaccess_ext_base.attributes to 0. Finally, the function shifts this field right by RP_BUS_RESP_SHIFT (8).
It basically extracts the low four bits of the second byte of pkt->busaccess_ext_base.attributes and returns the result as an integer.
As for what it's for specifically: you'll have to consult the documentation, if any exists, or work it out from the global context. From what I can see, this belongs to LibSystemCTLM-SoC (in case you didn't know).
The function extracts the low 4 bits of the second byte of the 8-byte (64-bit) attributes field. For example, from the attribute value 0xFFFFFFFFFFFFFAFF it extracts 0x0A:
First it creates the mask, which is RP_BUS_RESP_MASK = 0x0F00
Next it applies the mask to the attribute: pkt->busaccess_ext_base.attributes & 0x0F00, which gives 0x0A00 in the example.
Finally it shifts that right by 8 bits, leaving 0x0A.
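Putting both answers together in a standalone sketch (the constants mirror the question; the sample attributes value is made up):
#include <stdio.h>
#include <stdint.h>

enum {
    RP_RESP_MAX       = 0xF,
    RP_BUS_RESP_SHIFT = 8,
    RP_BUS_RESP_MASK  = (RP_RESP_MAX << RP_BUS_RESP_SHIFT),  /* 0x0F00 */
};

int main(void)
{
    uint64_t attributes = 0xFFFFFFFFFFFFFAFFull;  /* made-up example value */

    /* Keep only bits 8..11, then move them down to bits 0..3. */
    unsigned resp = (unsigned)((attributes & RP_BUS_RESP_MASK) >> RP_BUS_RESP_SHIFT);

    printf("response = 0x%X\n", resp);  /* prints: response = 0xA */
    return 0;
}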
I'll start with the code right away:
#include <stdio.h>
int main()
{
unsigned char value = 0xAF;
printf("%02x\n", value);
value = (value << 4) | (value >> 4);
printf("%02x\n", value);
return 0;
}
At first I thought you can't store numbers in chars and that you would need to make that an int. Apparently not. Then I did the bit-shifting maths:
value << 4 = 101011110
value >> 4 = 1010111
101011110
| 1010111
=101011111
and that would be 0x15f.
If I compile that code it prints
af
fa
Can anyone explain to me where I'm thinking wrong?
Bit shifting by 4 shifts 4 binary digits, not 1 as you seem to be showing. It also shifts exactly 1 hex digit. So if you have 0xAF, shifting left by 4 gives you 0xF0: because it is a char, it only has 8 bits, and the A is cut off. Shifting right by 4 similarly yields 0x0A. And 0x0A | 0xF0 == 0xFA.
Start with the baseline: 0xaf is 1010-1111 in binary (and we're assuming an eight-bit char here based on the code, though that's not mandated by the standard).
The expression value << 4 will left-shift that by four bits (not one as you seem to think), giving binary 1010-1111-0000 and, yes, it's more than an eight-bit char because of integer promotions (both operands of a << expression are promoted to int as per ISO C11 6.5.7, and also in earlier iterations of the standard).
The expression value >> 4 will right-shift it by four bits, giving binary 1010.
When you bitwise-or those together, you get:
1010-1111-0000
          1010
==============
1010-1111-1010
and when you finally try to shoe-horn that back into the eight-bit value, it lops off the upper bits, giving binary 1111-1010, which is 0xFA.
You might have messed up the bit representations in your calculation.
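One quick way to see those promotions in action is to print the intermediate result before it's squeezed back into the char; a sketch based on the program above:
#include <stdio.h>

int main(void)
{
    unsigned char value = 0xAF;

    /* Both shifts happen on int, so nothing is lost yet:
       0xAF0 | 0x00A = 0xAFA. */
    unsigned wide = (value << 4) | (value >> 4);
    printf("%x\n", wide);      /* prints: afa */

    /* Storing it back into the 8-bit char keeps only the low byte. */
    value = (unsigned char)wide;
    printf("%02x\n", value);   /* prints: fa */
    return 0;
}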
OK, I will try to explain using the code you have provided.
value 0xAF = 10101111
value << 4 = 11110000
value >> 4 = 00001010

  11110000
| 00001010 = 11111010, and hence the 0xFA
Explanation:
1. Representation is in binary 8 bit.
2. When you left/right shift by a number, I think you are considering it in terms of multiplication and division, but in the 8-bit binary representation the value just gets shifted by that many places (4 here), and the vacated bits are filled with 0.
Hope this helps.
Because sizeof(unsigned char) is equal to 1, it is 8-bit data.
The range of value is 0x0 to 0xFF; that is, the valid bits are bit 0 through bit 7.
So if 0x15F were assigned to value after the bit shifting, only the data in bits 0 through 7 would be kept, and bit 8 would be cut off:
0x15f ---binarization---> 0001 0101 1111
Since the variable value is 8-bit data, only 0101 1111 would be assigned to it:
value ---binarization---> 0101 1111
(In your program the actual intermediate value is 0xAFA rather than 0x15F, because the shifts move 4 bits, but the same truncation applies: only the low 8 bits, 1111 1010, survive, giving 0xFA.)
I am trying to understand and implement a simple file system based on FAT12. I am currently looking at the following snippet of code and it's driving me crazy:
int getTotalSize(char * mmap)
{
int *tmp1 = malloc(sizeof(int));
int *tmp2 = malloc(sizeof(int));
int retVal;
* tmp1 = mmap[19];
* tmp2 = mmap[20];
printf("%d and %d read\n",*tmp1,*tmp2);
retVal = *tmp1+((*tmp2)<<8);
free(tmp1);
free(tmp2);
return retVal;
};
From what I've read so far, the FAT12 format stores integers in little-endian format,
and the code above is getting the size of the file system, which is stored in the 19th and 20th bytes of the boot sector.
However, I don't understand why retVal = *tmp1+((*tmp2)<<8); works. Is the bitwise <<8 converting the second byte to decimal, or to big-endian format?
Why is it only done to the second byte and not the first one?
The bytes in question are (in little-endian format):
40 0B
and I tried converting them manually by switching the order first to
0B 40
and then converting from hex to decimal, and I get the right output. I just don't understand how adding the first byte to the bitwise shift of the second byte does the same thing?
Thanks
The use of malloc() here is seriously facepalm-inducing. Utterly unnecessary, and a serious "code smell" (makes me doubt the overall quality of the code). Also, mmap clearly should be unsigned char (or, even better, uint8_t).
That said, the code you're asking about is pretty straight-forward.
Given two byte-sized values a and b, there are two ways of combining them into a 16-bit value (which is what the code is doing): you can either consider a to be the least-significant byte, or b.
Using boxes, the 16-bit value can look either like this:
+---+---+
| a | b |
+---+---+
or like this, if you instead consider b to be the most significant byte:
+---+---+
| b | a |
+---+---+
The way to combine the lsb and the msb into a 16-bit value is simply:
result = (msb * 256) + lsb;
UPDATE: The 256 comes from the fact that 256 is the "worth" of each successively more significant byte in a multibyte number. Compare it to the role of 10 in a decimal number: to combine two single-digit decimal numbers c and d, you would use result = 10 * c + d.
Consider msb = 0x01 and lsb = 0x00, then the above would be:
result = 0x1 * 256 + 0 = 256 = 0x0100
You can see that the msb byte ended up in the upper part of the 16-bit value, just as expected.
Your code is using << 8 to do a bitwise shift to the left, which is the same as multiplying by 2^8, i.e. 256.
Note that result above is a value, i.e. not a byte buffer in memory, so its endianness doesn't matter.
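For what it's worth, a malloc-free rewrite of that function might look like this (a sketch; I've renamed the parameter since it isn't really an mmap, and uint8_t for the buffer type is an assumption):
#include <stdint.h>

/* Reads the little-endian 16-bit value stored at
   offsets 19 and 20 of the boot sector. */
int getTotalSize(const uint8_t *boot_sector)
{
    return boot_sector[19] + (boot_sector[20] << 8);
}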
I see no problem combining individual digits or bytes into larger integers.
Let's do decimal with 2 digits: 1 (least significant) and 2 (most significant):
1 + 2 * 10 = 21 (10 is the system base)
Let's now do base-256 with 2 digits: 0x40 (least significant) and 0x0B (most significant):
0x40 + 0x0B * 0x100 = 0x0B40 (0x100=256 is the system base)
The problem, however, is likely lying somewhere else, in how 12-bit integers are stored in FAT12.
A 12-bit integer occupies 1.5 8-bit bytes. And in 3 bytes you have 2 12-bit integers.
Suppose, you have 0x12, 0x34, 0x56 as those 3 bytes.
In order to extract the first integer you only need take the first byte (0x12) and the 4 least significant bits of the second (0x04) and combine them like this:
0x12 + ((0x34 & 0x0F) << 8) == 0x412
In order to extract the second integer you need to take the 4 most significant bits of the second byte (0x03) and the third byte (0x56) and combine them like this:
(0x56 << 4) + (0x34 >> 4) == 0x563
If you read the official Microsoft's document on FAT (look up fatgen103 online), you'll find all the FAT relevant formulas/pseudo code.
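Expressed as code, fetching the n-th 12-bit FAT entry could look like this (a sketch of the packing described above; fat12_entry is a made-up helper name):
#include <stdint.h>

/* Returns the n-th 12-bit entry of a FAT12 table:
   two entries are packed into every group of 3 bytes. */
uint16_t fat12_entry(const uint8_t *fat, unsigned n)
{
    unsigned base = (n / 2) * 3;   /* start of the 3-byte group */

    if (n % 2 == 0)   /* even entry: byte 0 plus low nibble of byte 1 */
        return fat[base] | ((fat[base + 1] & 0x0F) << 8);
    else              /* odd entry: high nibble of byte 1 plus byte 2 */
        return (fat[base + 1] >> 4) | (fat[base + 2] << 4);
}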
The << operator is the left shift operator. It takes the value to the left of the operator and shifts it by the number given on the right side of the operator.
So in your case, it shifts the value of *tmp2 eight bits to the left, and combines it with the value of *tmp1 to generate a 16 bit value from two eight bit values.
For example, let's say you have the integer 1. This is, in 16-bit binary, 0000000000000001. If you shift it left by eight bits, you end up with the binary value 0000000100000000, i.e. 256 in decimal.
The presentation (i.e. binary, decimal or hexadecimal) has nothing to do with it. All integers are stored the same way on the computer.
I'm trying to understand the following function which decides whether a bit is on:
int isBitISet( char ch, int i )
{
char mask = 1 << i ;
return mask & ch ;
}
First, why do I get a char? For ch=abcdefgh and i=5 the function is supposed to return the fifth bit from the right (?), d. So mask = 00000001 << 5 = 00100000, and 00100000 & abcdefgh = 00c00000.
Can you please explain to me how come we get a char and can do all these shifts without any casting? How come we didn't get the fifth bit, and why is the returned value really an indication of whether the bit is on or not?
Edit: the 'abcdefgh' are just symbols for the bits; I didn't mean to represent a string in a char type.
I used to think of a char as 'a' and not as an actual 8 bits, so probably this is the answer to my first question.
It won't give you the fifth bit. Bit numbering starts at 2^0, so the first bit is actually indexed with 0, not 1. It will return the sixth bit instead.
Examples:
ch & (1 << 0); // first bit
ch & (1 << 1); // second bit
ch & ((1 << 3) | (1 << 2)); // third and fourth bit.
Also, a char is only an interpretation of a number. On most machines it has a size of 8 bits, which you can interpret either as an unsigned value (0 to 255) or a signed value (-128 to 127). So basically it's an integer with a very limited range, and thus you can apply bit shifting without casting.
Also, your function will return an integer value that equals zero if and only if the given bit isn't set. Otherwise it's a non-zero value.
The function may return a char, because the input it works on is also just a char. You certainly cannot pass in ch=abcdefgh, because that would be a string of 8 chars.
You can do shifts on chars because C allows it. char is just an 8-bit integer type, so there's no need to disallow it.
You are right that isBitISet(abcdefgh, 5) returns 00c00000, if the letters a, b, etc. are bits in the binary representation of numbers.
The return value is not the fifth bit from the right, it is the same number as in the input, but with all the bits but the fifth bit zeroed.
You also have to remember that numbering of bits goes from zero, so the fifth bit being c is correct, just as that the zeroth bit is h.
This example uses an integer type to represent a boolean value. This is common in C code prior to C99, as C didn't have the bool type.
If you treat your return value as a boolean value, remember that everything non-zero is true, and zero is false. Hence, the output of isBitISet is true for C if bit i is set, and false otherwise.
You should know by now that in computers, everything starts with 0. That is, bit number 5 is in fact the sixth bit (not the fifth).
Your analysis is actually correct, if you give it abcdefgh and 5, you get 00c00000.
When you do the "and":
return mask & ch;
both mask and ch are automatically promoted to int (the usual integer promotions, the same as with many other operators). That's why you don't need explicit casting.
Finally, the result of this function has the form 0..0z0..0. If z, the bit you are checking for, is 0, this value is 0, which is false as far as an if is concerned. If it is not zero, then it is true for an if.
Do:
return 0 != (mask & ch) ;
if you want a boolean-style (0x00000000 or 0x00000001) return value. mask & ch alone will give you the bit you're asking about, in its original position.
(Others have said more than enough about i=5 being the sixth bit.)
First of all, this function does not return the i-th bit, but tells you if that bit is on or off.
The use of char for mask here is just an implementation choice: it defines an 8-bit mask, since the value the mask is applied to is a char.
No cast is needed: 1 is an int literal, the operands are promoted anyway, and i is only the shift count for the << operator.
ch=abcdefgh makes no sense as an input. ch is char, so ch can only be one character.
The working is as follows: first you construct a mask to zero all the bits you don't need. For example, if the input is ch = 204 (ch = 11001100 in binary) and we want to know if the 6th bit is on, then i = 5, so mask = 1 << 5 = 00100000. This mask is then applied to the value with an AND operation, which zeroes everything except the bit in question: 11001100 & 00100000 = 00000000 = 0. As 0 is false in C, the 6th bit is not set. Another example with the same ch and i = 6: mask = 1 << 6 = 01000000; 11001100 & 01000000 = 01000000 = 64, which is not 0 and thus true, so the 7th bit is set.
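A short test driver makes this concrete (a sketch reusing the 204 example; note that plain char may be signed, so the assumed bit pattern is spelled out in the comments):
#include <stdio.h>

int isBitISet( char ch, int i )
{
    char mask = 1 << i ;
    return mask & ch ;
}

int main(void)
{
    char ch = (char)204;   /* bit pattern 11001100, assuming an 8-bit char */

    printf("bit 5 set? %d\n", isBitISet(ch, 5) != 0);  /* 0: bit 5 is off */
    printf("bit 6 set? %d\n", isBitISet(ch, 6) != 0);  /* 1: bit 6 is on */
    return 0;
}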
I want to convert an unsigned 32-bit integer as follows:
Input = 0xdeadbeef
Output = 0xfeebdaed
Thank you.
That's not an endianness conversion. The output should be 0xEFBEADDE, not 0xFEEBDAED. (Only the bytes are swapped, and each byte is 2 hexadecimal digits.)
For converting between little- and big-endian, take a look at _byteswap_ulong.
The general process for nibble reversal is:
((i & 0xF0000000) >> 28) | ((i & 0x0F000000) >> 20) | ((i & 0x00F00000) >> 12) | ... | ((i & 0xF) << 28)
Mask, shift, or (I hope I got the numbers right):
Extract the portion of the number you're interested in by ANDing (&) with a mask.
Shift it to its target location with the >> and << operations.
Construct the new value by ORing (|) the pieces together.
If you want to reorder bytes instead, you would mask with 0xFF. As everyone is saying, that's probably what you actually want; if you're looking for a canned version, follow the other suggestions.
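And if you do want the nibble reversal spelled out as a function, here's one way (a sketch; reverse_nibbles is a made-up name):
#include <stdio.h>
#include <stdint.h>

/* Reverses the eight 4-bit nibbles of a 32-bit value. */
uint32_t reverse_nibbles(uint32_t i)
{
    uint32_t out = 0;
    int n;
    for (n = 0; n < 8; n++) {
        out = (out << 4) | (i & 0xF);  /* peel off the low nibble, push it in */
        i >>= 4;
    }
    return out;
}

int main(void)
{
    printf("0x%08X\n", reverse_nibbles(0xDEADBEEFu));  /* prints: 0xFEEBDAED */
    return 0;
}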