Converting little endian hex to big endian decimal in C

I am trying to understand and implement a simple file system based on FAT12. I am currently looking at the following snippet of code and it's driving me crazy:
int getTotalSize(char * mmap)
{
int *tmp1 = malloc(sizeof(int));
int *tmp2 = malloc(sizeof(int));
int retVal;
* tmp1 = mmap[19];
* tmp2 = mmap[20];
printf("%d and %d read\n",*tmp1,*tmp2);
retVal = *tmp1+((*tmp2)<<8);
free(tmp1);
free(tmp2);
return retVal;
};
From what I've read so far, the FAT12 format stores integers in little endian format, and the code above is getting the size of the file system, which is stored in the 19th and 20th bytes of the boot sector.
However, I don't understand why retVal = *tmp1+((*tmp2)<<8); works. Is the bitwise <<8 converting the second byte to decimal? Or to big endian format?
Why is it only doing it to the second byte and not the first one?
The bytes in question are (in little endian format):
40 0B
and I tried converting them manually by switching the order first to
0B 40
and then converting from hex to decimal, and I get the right output. I just don't understand how adding the first byte to the shifted second byte gives the same result.
Thanks

The use of malloc() here is seriously facepalm-inducing. Utterly unnecessary, and a serious "code smell" (makes me doubt the overall quality of the code). Also, mmap clearly should be unsigned char (or, even better, uint8_t).
That said, the code you're asking about is pretty straightforward.
Given two byte-sized values a and b, there are two ways of combining them into a 16-bit value (which is what the code is doing): you can either consider a to be the least-significant byte, or b.
Using boxes, the 16-bit value can look either like this:
+---+---+
| a | b |
+---+---+
or like this, if you instead consider b to be the most significant byte:
+---+---+
| b | a |
+---+---+
The way to combine the lsb and the msb into a 16-bit value is simply:
result = (msb * 256) + lsb;
UPDATE: The 256 comes from the fact that that's the "worth" of each successively more significant byte in a multibyte number. Compare it to the role of 10 in a decimal number (to combine two single-digit decimal numbers c and d you would use result = 10 * c + d).
Consider msb = 0x01 and lsb = 0x00, then the above would be:
result = 0x1 * 256 + 0 = 256 = 0x0100
You can see that the msb byte ended up in the upper part of the 16-bit value, just as expected.
Your code is using << 8 to do bitwise shifting to the left, which is the same as multiplying by 2^8, i.e. 256.
Note that result above is a value, i.e. not a byte buffer in memory, so its endianness doesn't matter.
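To make that concrete, here is a minimal sketch of the same routine with the malloc() calls removed and the buffer treated as unsigned bytes, as suggested above; it still assumes mmap points at the first byte of the boot sector:
#include <stdint.h>

/* Bytes 19 and 20 hold the 16-bit total-sector count, least significant byte first. */
int getTotalSize(const uint8_t *mmap)
{
    uint8_t lsb = mmap[19];      /* e.g. 0x40 */
    uint8_t msb = mmap[20];      /* e.g. 0x0B */
    return lsb + (msb << 8);     /* 0x40 + 0x0B00 = 0x0B40 = 2880 */
}
With the bytes 40 0B from the question this returns 2880, the sector count of a standard 1.44 MB floppy.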

I see no problem combining individual digits or bytes into larger integers.
Let's do decimal with 2 digits: 1 (least significant) and 2 (most significant):
1 + 2 * 10 = 21 (10 is the system base)
Let's now do base-256 with 2 digits: 0x40 (least significant) and 0x0B (most significant):
0x40 + 0x0B * 0x100 = 0x0B40 (0x100=256 is the system base)
The problem, however, likely lies somewhere else, in how 12-bit integers are stored in FAT12.
A 12-bit integer occupies 1.5 8-bit bytes. And in 3 bytes you have 2 12-bit integers.
Suppose, you have 0x12, 0x34, 0x56 as those 3 bytes.
In order to extract the first integer you only need to take the first byte (0x12) and the 4 least significant bits of the second (0x04) and combine them like this:
0x12 + ((0x34 & 0x0F) << 8) == 0x412
In order to extract the second integer you need to take the 4 most significant bits of the second byte (0x03) and the third byte (0x56) and combine them like this:
(0x56 << 4) + (0x34 >> 4) == 0x563
If you read the official Microsoft document on FAT (look up fatgen103 online), you'll find all the relevant FAT formulas/pseudo code.
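For illustration, here is a small self-contained sketch of those two extractions; the function name and parameter layout are mine, not anything from the specification:
#include <stdint.h>
#include <stdio.h>

/* Unpack the two 12-bit FAT12 entries packed into three consecutive bytes.
   b0, b1, b2 correspond to 0x12, 0x34, 0x56 in the example above. */
static void unpack_fat12(uint8_t b0, uint8_t b1, uint8_t b2,
                         uint16_t *first, uint16_t *second)
{
    *first  = b0 | ((b1 & 0x0F) << 8);   /* 0x12 | 0x400 = 0x412 */
    *second = (b2 << 4) | (b1 >> 4);     /* 0x560 | 0x3  = 0x563 */
}

int main(void)
{
    uint16_t a, b;
    unpack_fat12(0x12, 0x34, 0x56, &a, &b);
    printf("%03X %03X\n", (unsigned)a, (unsigned)b);   /* prints 412 563 */
    return 0;
}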

The << operator is the left shift operator. It takes the value to the left of the operator and shifts it left by the number of bits given on the right side of the operator.
So in your case, it shifts the value of *tmp2 eight bits to the left, and combines it with the value of *tmp1 to generate a 16 bit value from two eight bit values.
For example, let's say you have the integer 1. This is, in 16-bit binary, 0000000000000001. If you shift it left by eight bits, you end up with the binary value 0000000100000000, i.e. 256 in decimal.
The presentation (i.e. binary, decimal or hexadecimal) has nothing to do with it. All integers are stored the same way on the computer.

Related

Converting 32 bit number to four 8bit numbers

I am trying to convert the input from a device (always integer between 1 and 600000) to four 8-bit integers.
For example,
If the input is 32700, I want 188 127 00 00.
I achieved this by using:
32700 % 256
32700 / 256
The above works till 32700. From 32800 onward, I start getting incorrect conversions.
I am totally new to this and would like some help to understand how this can be done properly.
Major edit following clarifications:
Given that someone has already mentioned the shift-and-mask approach (which is undeniably the right one), I'll give another approach, which, to be pedantic, is not portable, machine-dependent, and possibly exhibits undefined behavior. It is nevertheless a good learning exercise, IMO.
For various reasons, your computer represents integers as groups of 8-bit values (called bytes); note that, although extremely common, this is not always the case (see CHAR_BIT). For this reason, values that are represented using more than 8 bits use multiple bytes (hence those using a number of bits which is a multiple of 8). For a 32-bit value, you use 4 bytes and, in memory, those bytes always follow each other.
We call a pointer a value containing the address in memory of another value. In that context, a byte is defined as the smallest (in terms of bit count) value that can be referred to by a pointer. For example, your 32-bit value, covering 4 bytes, will have 4 "addressable" cells (one per byte) and its address is defined as the first of those addresses:
|==================|
| MEMORY | ADDRESS |
|========|=========|
| ... | x-1 | <== Pointer to byte before
|--------|---------|
| BYTE 0 | x | <== Pointer to first byte (also pointer to 32-bit value)
|--------|---------|
| BYTE 1 | x+1 | <== Pointer to second byte
|--------|---------|
| BYTE 2 | x+2 | <== Pointer to third byte
|--------|---------|
| BYTE 3 | x+3 | <== Pointer to fourth byte
|--------|---------|
| ... | x+4 | <== Pointer to byte after
|==================|
So what you want to do (split the 32-bit word into 8-bit words) has already been done by your computer, as it is imposed onto it by its processor and/or memory architecture. To reap the benefits of this almost-coincidence, we are going to find where your 32-bit value is stored and read its memory byte-by-byte (instead of 32 bits at a time).
As all serious SO answers seem to do, let me cite the Standard (ISO/IEC 9899:2018, 6.2.5-20) to define the last thing I need:
Any number of derived types can be constructed from the object and function types, as follows:
An array type describes a contiguously allocated nonempty set of objects with a particular member object type, called the element type. [...] Array types are characterized by their element type and by the number of elements in the array. [...]
[...]
So, as elements in an array are defined to be contiguous, a 32-bit value in memory, on a machine with 8-bit bytes, really is nothing more, in its machine representation, than an array of 4 bytes!
Given a 32-bit signed value:
int32_t value;
its address is given by &value. Meanwhile, an array of 4 8-bit bytes may be represented by:
uint8_t arr[4];
notice that I use the unsigned variant because those bytes don't really represent a number per se so interpreting them as "signed" would not make sense. Now, a pointer-to-array-of-4-uint8_t is defined as:
uint8_t (*ptr)[4];
and if I assign the address of our 32-bit value to such a pointer, I will be able to index each byte individually, which means that I will be reading the bytes directly, avoiding any pesky shifting-and-masking operations!
uint8_t (*bytes)[4] = (void *) &value;
I need to cast the pointer ("(void *)") to quiet the compiler: &value's type is "pointer-to-int32_t" while I'm assigning it to a "pointer-to-array-of-4-uint8_t", and this type mismatch is caught by the compiler and pedantically warned against by the Standard; this is a first warning that what we're doing is not ideal!
Finally, we can access each byte individually by reading it directly from memory through indexing: (*bytes)[n] reads the n-th byte of value!
To put it all together, given a send_can(uint8_t) function:
for (size_t i = 0; i < sizeof(*bytes); i++)
send_can((*bytes)[i]);
and, for testing purpose, we define:
void send_can(uint8_t b)
{
printf("%hhu\n", b);
}
which prints, on my machine, when value is 32700:
188
127
0
0
Lastly, this shows yet another reason why this method is platform-dependent: the order in which the bytes of the 32-bit word are stored isn't always what you would expect from a theoretical discussion of binary representation, i.e.:
byte 0 contains bits 31-24
byte 1 contains bits 23-16
byte 2 contains bits 15-8
byte 3 contains bits 7-0
Actually, AFAIK, the C language permits any of the 24 possibilities for ordering those 4 bytes (this is called endianness). Meanwhile, shifting and masking will always get you the n-th "logical" byte.
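For completeness, here is everything above assembled into one compilable program; send_can() is just the test stub defined earlier, not a real CAN API:
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

void send_can(uint8_t b)
{
    printf("%hhu\n", b);
}

int main(void)
{
    int32_t value = 32700;
    uint8_t (*bytes)[4] = (void *) &value;   /* view the int32_t as 4 raw bytes */

    for (size_t i = 0; i < sizeof(*bytes); i++)
        send_can((*bytes)[i]);               /* byte order depends on the machine */

    return 0;
}
On a little-endian machine this prints 188, 127, 0, 0, matching the output shown above.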
It really depends on how your architecture stores an int. For example
8 or 16 bit system short=16, int=16, long=32
32 bit system, short=16, int=32, long=32
64 bit system, short=16, int=32, long=64
This is not a hard and fast rule - you need to check your architecture first. There is also a long long but some compilers do not recognize it and the size varies according to architecture.
Some compilers have uint8_t etc defined so you can actually specify how many bits your number is instead of worrying about ints and longs.
Having said that, you wish to convert a number into four 8-bit ints. You could have something like:
unsigned long x = 600000UL; // you need UL to indicate it is unsigned long
unsigned int b1 = (unsigned int)(x & 0xff);
unsigned int b2 = (unsigned int)(x >> 8) & 0xff;
unsigned int b3 = (unsigned int)(x >> 16) & 0xff;
unsigned int b4 = (unsigned int)(x >> 24);
Using shifts is a lot faster than multiplication, division or mod. The order depends on the endianness you wish to achieve: you could reverse the assignments, using b1 with the formula for b4, etc.
You could do some bit masking.
600000 is 0x927C0
600000 / (256 * 256) gets you the 9, no masking yet.
(600000 & (255 * 256)) >> 8 gets you the 0x27 == 39. This uses an 8-bit-shifted mask of 8 set bits (255 * 256) and a right shift by 8 bits, the >> 8; equivalently, (600000 / 256) & 255 yields the same byte.
600000 % 256 gets you the 0xC0 == 192 as you did it. Masking would be 600000 & 255.
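A quick self-contained check of those three extractions:
#include <stdio.h>

int main(void)
{
    unsigned long x = 600000UL;                     /* 0x0927C0 */

    printf("%lu\n", x / (256UL * 256UL));           /* 9   (0x09) */
    printf("%lu\n", (x & (255UL * 256UL)) >> 8);    /* 39  (0x27) */
    printf("%lu\n", x % 256UL);                     /* 192 (0xC0) */
    return 0;
}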
I ended up doing this:
unsigned char bytes[4];
unsigned long n;
n = (unsigned long) sensore1 * 100;
bytes[0] = n & 0xFF;
bytes[1] = (n >> 8) & 0xFF;
bytes[2] = (n >> 16) & 0xFF;
bytes[3] = (n >> 24) & 0xFF;
CAN_WRITE(0x7FD,8,01,sizeof(n),bytes[0],bytes[1],bytes[2],bytes[3],07,255);
I have been in a similar situation while packing and unpacking huge custom packets of data to be transmitted/received. I suggest you try the approach below:
typedef union
{
uint32_t u4_input;
uint8_t u1_byte_arr[4];
} UN_COMMON_32BIT_TO_4X8BIT_CONVERTER;
UN_COMMON_32BIT_TO_4X8BIT_CONVERTER un_t_mode_reg;
un_t_mode_reg.u4_input = input;/*your 32 bit input*/
// 1st byte = un_t_mode_reg.u1_byte_arr[0];
// 2nd byte = un_t_mode_reg.u1_byte_arr[1];
// 3rd byte = un_t_mode_reg.u1_byte_arr[2];
// 4th byte = un_t_mode_reg.u1_byte_arr[3];
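For reference, a self-contained sketch of that union approach; note that the order in which the bytes land in u1_byte_arr still depends on the machine's endianness:
#include <stdint.h>
#include <stdio.h>

typedef union
{
    uint32_t u4_input;
    uint8_t  u1_byte_arr[4];
} UN_COMMON_32BIT_TO_4X8BIT_CONVERTER;

int main(void)
{
    UN_COMMON_32BIT_TO_4X8BIT_CONVERTER un_t_mode_reg;
    un_t_mode_reg.u4_input = 32700;

    for (int i = 0; i < 4; i++)
        printf("%hhu ", un_t_mode_reg.u1_byte_arr[i]);   /* 188 127 0 0 on a little-endian machine */
    printf("\n");
    return 0;
}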
The largest positive value you can store in a 16-bit signed int is 32767. If you force a number bigger than that, you'll get a negative number as a result, hence unexpected values returned by % and /.
Use either unsigned 16-bit int for a range up to 65535 or a 32-bit integer type.
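A short illustration of the difference, assuming a typical two's complement machine where the out-of-range assignment wraps around:
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int16_t  s = (int16_t)32800;   /* 32800 does not fit; typically wraps to -32736 */
    uint16_t u = 32800;            /* fits comfortably in an unsigned 16-bit type   */

    printf("signed:   %d %d\n", s % 256, s / 256);   /* e.g. -224 -127  (wrong)   */
    printf("unsigned: %d %d\n", u % 256, u / 256);   /* 32 128  (32800 == 0x8020) */
    return 0;
}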

Flipping bytes, doing arithmetic and flipping them back again

I have a programming/math related question regarding converting between big endian and little endian and doing arithmetic.
Assume we have two integers in little endian mode:
int a = 5;
int b = 6;
//a+b = 11
Let's flip the bytes and add them again:
int a = 1280;
int b = 1536;
//a+b = 2816
Now if we flip the byte order of 2816 we get 11. So essentially we can do arithmetic computation between little endian and big endian and once converted they represent the same number?
Does this have a theory/name behind it in the computer science world?
It doesn't work if the addition involves carrying since carrying propagates right-to-left. Swapping digits doesn't mean carrying switches direction, so any bytes that overflow into the next byte will be different.
Let's look at an example in hex, pretending that endianness means each 4-bit nibble is swapped:
int a = 0x68;
int b = 0x0B;
//a+b: 0x73
int a = 0x86;
int b = 0xB0;
//a+b: 0x136
0x8 + 0xB is 0x13. That 1 is carried and adds on to the 6 in the first sum. But in the second sum it's not carried right and added to the 6, it's carried left and overflows into the third hex digit.
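The same thing can be seen in C with a small byte-swap helper; swap16() below is my own helper, written with shifts so the result doesn't depend on the host's byte order:
#include <stdint.h>
#include <stdio.h>

/* Swap the two bytes of a 16-bit value. */
static uint16_t swap16(uint16_t x)
{
    return (uint16_t)((x << 8) | (x >> 8));
}

int main(void)
{
    uint16_t a = 0x0005, b = 0x0006;   /* no carry between bytes                   */
    uint16_t c = 0x00C8, d = 0x0049;   /* 0xC8 + 0x49 = 0x111: the low bytes carry */

    /* Works: 5 + 6 = 11, and swapping, adding, then swapping back also gives 11.  */
    printf("%d %d\n", a + b, swap16((uint16_t)(swap16(a) + swap16(b))));

    /* Breaks: 0xC8 + 0x49 = 0x111, but the swapped sum comes back as 0x11,
       because the carry went the "wrong" way. */
    printf("0x%X 0x%X\n", (unsigned)(c + d),
           (unsigned)swap16((uint16_t)(swap16(c) + swap16(d))));
    return 0;
}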
First, it should be noted that your assumption that int in C has 16 bits is wrong. In most modern systems int is a 32-bit type, so if we reverse (not flip, which typically means taking the complement) the bytes of 5 we'll get 83886080 (0x05000000), not 1280 (0x0500).
Is the size of C "int" 2 bytes or 4 bytes?
What does the C++ standard state the size of int, long type to be?
Also note that you should write in hex to make it easier to understand because computers don't work in decimal:
int16_t a = 0x0005;
int16_t b = 0x0006;
// a+b = 0x000B
int16_t a = 0x0500; // 1280
int16_t b = 0x0600; // 1536
//a+b = 0x0B00
OK, now, as others said, ntohl(htonl(5) + htonl(6)) happens to be the same as 5 + 6 just because you have small numbers whose reversed sums don't overflow into the next byte. Choose larger numbers and you'll see the difference right away.
However, that property does hold in ones' complement arithmetic for systems where values are stored in 2 smaller parts, as in this case.
In ones' complement one does arithmetic with end-around carry by propagating the carry out back to the carry in. That makes ones' complement arithmetic endian independent if one has only one internal "carry break" (i.e. the stored value is broken into two separate chunks), because of the "circular carry".
Suppose we have xxyy and zztt then xxyy + zztt is done like this
carry
xx yy
+ zz <───── tt
──────────────
carry aa bb
│ ↑
└─────────────┘
When we reverse the chunks, yyxx + ttzz is carried the same way. Because xx, yy, zz, tt are chunks of bits of any length, it works for PDP's mixed endian, or when you store a 32-bit number in two 16-bit parts, a 64-bit number in two 32-bit parts...
For example:
0x7896 + 0x6987 = 0xE21D
0x9678 + 0x8769 = 0x11DE1 → 0x1DE1 + 1 = 0x1DE2
0x2345 + 0x9ABC = 0xBE01
0x4523 + 0xBC9A = 0x101BD → 0x01BD + 1 = 0x01BE
0xABCD + 0xBCDE = 0x168AB → 0x68AB + 1 = 0x68AC
0xCDAB + 0xDEBC = 0x1AC67 → 0xAC67 + 1 = 0xAC68
Or John Kugelman's example above: 0x68 + 0x0B = 0x73; 0x86 + 0xB0 = 0x136 → 0x36 + 1 = 0x37
The end-around carry is one of the reasons why ones' complement was chosen for the TCP checksum: you can easily calculate the sum at a higher precision. 16-bit CPUs can work in 16-bit units as normal, but 32- and 64-bit CPUs can add 32- and 64-bit chunks at a time without worrying about the carry, in SWAR fashion, even when SIMD isn't available.
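As a sketch of that, here is a 16-bit end-around-carry (ones' complement) sum of the kind used in the Internet checksum; because the carry wraps around, summing the byte-swapped operands and swapping the result gives the same answer. add1c16() and swap16() are my own helper names:
#include <stdint.h>
#include <stdio.h>

/* Ones' complement addition: any carry out of bit 15 is added back into bit 0. */
static uint16_t add1c16(uint16_t a, uint16_t b)
{
    uint32_t s = (uint32_t)a + b;
    s = (s & 0xFFFF) + (s >> 16);   /* fold the carry back in           */
    s = (s & 0xFFFF) + (s >> 16);   /* second fold covers the edge case */
    return (uint16_t)s;
}

/* Swap the two bytes of a 16-bit value. */
static uint16_t swap16(uint16_t x)
{
    return (uint16_t)((x << 8) | (x >> 8));
}

int main(void)
{
    uint16_t a = 0xABCD, b = 0xBCDE;

    uint16_t direct  = add1c16(a, b);                         /* 0x68AC      */
    uint16_t swapped = swap16(add1c16(swap16(a), swap16(b))); /* also 0x68AC */

    printf("0x%X 0x%X\n", (unsigned)direct, (unsigned)swapped);
    return 0;
}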
This only appears to work because you happened to pick numbers that are small enough so that they as well as their sum fit into one byte. As long as everything going on in your number stays within its respective byte, you can obviously shuffle and deshuffle your bytes however you want, it won't make a difference. If you pick larger numbers, e.g., 1234 and 4321, you will notice that it won't work anymore. In fact, you will most likely end up invoking undefined behavior because your int will overflow…
Apart from all that, you will almost certainly want to read this: https://commandcenter.blogspot.com/2012/04/byte-order-fallacy.html

Understanding Byte swapping

I'm trying to figure out a way to create a little endian/big endian conversion for 64 bit integers (uint64_t) and while I find a lot of answers online as to how to do it, none of them explain what exactly is going on. For example, to get the nth byte of the integer I found this response:
int x = (number >> (8*n)) & 0xff;
Even though I understand the bit shifting component (shifting 8*n bits to the right), I don't see where the & and 0xff come in, or what they mean, aside from the fact that & is a bitwise AND operator.
So, how would this sort of logic apply to a big-endian/little-endian byte swapping method for 64 bit integers?
Thank you in advance.
It might be easiest to think of an analogy with decimal numbers:
Take the number 308. It has three digits, '3', '0', and '8'. By convention, digits to the left are more significant than digits to the right. But the convention could just as easily have been the other way...the digits could've been written in reverse order (e.g., 803).
Why is this relevant? Consider a hexadecimal representation of a number on a computer: 0xabcd0123. In a mathematically rigorous sense, one can view this number as 4 radix-256 digits. (i.e., 0xab, 0xcd, 0x01, 0x23). So, endianness is about the convention by which these radix-256 digits are ordered when written into memory.
Little-endian means "write the least significant digit to the lowest address";
Big-endian means "write the most significant digit to the lowest address".
So, on to the mechanics of processing endianness:
If you think of the decimal example above, how would you get each digit? The least significant digit is given by taking the number modulo 10 (i.e., 308 % 10 = 8). The second digit can be found by dividing the number by 10, then taking it modulo 10 (i.e., 308 / 10 = 30; 30 % 10 = 0) and so forth.
The process is exactly the same for binary data on a computer, except that it's treated as radix-256 instead of radix-10 like decimal digits. This is where a few tricks come in.
When doing modulo with a modulus that is a power of 2, you can do it via AND. Let m = 256 be our modulus. Because m is a power of 2, x % m is equivalent to x & (m-1). This is a numerical fact that is out of scope for this answer.
When doing division by a power of 2, you can do it via right-shift. That is, let m = 256 be our divisor. Because m = 2^8, x / m is equivalent to x >> 8.
Thus binary endian-specific serialization uses exactly the process above:
uint32_t val = 0xabcd0123;
(val & 0xff) is equivalent to (val % 256), and yields 0x23.
((val >> 8) & 0xff) is equivalent to ((val / 256) % 256), and yields 0x01.
((val >> 16) & 0xff) is equivalent to (((val / 256)/256) % 256), and yields 0xcd.
and so on. So now that you have access to the digits/bytes, you simply have to choose the order in which to store them. Per above, "big endian = most-significant at lowest address", "little endian = least-significant at lowest address".
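Coming back to the 64-bit case from the question, the same digit-extraction idea gives a byte-order reversal written purely with shifts and masks. This is a generic sketch, not any particular library's implementation:
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Reverse the byte order of a 64-bit value by pulling out each radix-256
   "digit" and writing it back at the mirrored position. */
static uint64_t swap64(uint64_t x)
{
    uint64_t out = 0;
    for (int i = 0; i < 8; i++) {
        uint64_t byte = (x >> (8 * i)) & 0xff;   /* i-th least significant byte  */
        out |= byte << (8 * (7 - i));            /* place it at the mirrored end */
    }
    return out;
}

int main(void)
{
    printf("%016" PRIx64 "\n", swap64(0x0123456789abcdefULL));   /* efcdab8967452301 */
    return 0;
}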

Understanding shifting and logical operations

I am trying to read the 'size' of an SD card. The sample example which I am having has following lines of code:
unsigned char xdata *pchar; // Pointer to external mem space for FLASH Read function;
pchar += 9; // Size indicator is in the 9th byte of CSD (Card specific data) register;
// Extract size indicator bits;
size = (unsigned int)((((*pchar) & 0x03) << 1) | (((*(pchar+1)) & 0x80) >> 7));
I am not able to understand what is actually being done in the above line where indicator bit is being extracted. Can somebody help me in understanding this?
The size is made up of bits from two bytes. One byte is at pchar, the other at pchar + 1.
((*pchar) & 0x03) takes the 2 least significant bits (chopping off the 6 most significant ones).
This result is shifted one bit to the left using << 1. For example:
11011010 (& 0x03/00000011)==> 00000010 (<< 1)==> 00000100 (-----10-)
Something similar is done with pchar + 1. For example:
11110110 (& 0x80/10000000)==> 10000000 (>> 7)==> 00000001 (-------1)
Then these two values are OR-ed together with |. So in this example you'd get:
00000100 | 00000001 = 00000101 (-----101)
But note that the 5 most significant bits will always be 0 (indicated above with -) because they were &-ed away.
To summarize, the first byte holds two bits of size, while the second byte only one bit.
It seems the size indicator, say SI, consists of 3 bits, where *pchar contains the two most significant bits of SI in its lowest two bits (0x03) and *(pchar+1) contains the least significant bit of SI in its highest bit (0x80).
The first and second line figure out how to point to the data that you want.
Let's now go through the steps involved, from left to right.
The first portion of the operations takes the byte pointed to by pchar, performs a logical AND on the byte and 0x03 and shifts over that result by one bit.
That result is then logically ORed with a value built from the next byte, *(pchar+1), which in turn is ANDed with 0x80 and then right-shifted by seven bits. Essentially, this portion just keeps the most significant bit of that byte and shifts it down by seven bits.
What the result is essentially this:
Imagine pchar points to the byte where bits are represented by letters: ABCDEFGH.
The first part ANDs with 0x03, so we are left with 000000GH. This is then left shifted by one bit, so we are left with 00000GH0.
Same thing for the right portion. *(pchar+1) is represented by IJKLMNOP. With the first logical AND, we are left with I0000000. This is then right shifted seven times, so we have 0000000I. This is combined with the left hand portion using the OR, so we have 00000GHI, which is then cast to an unsigned int, which holds your size.
Basically, there are three bits that hold the size, but they are not byte aligned. As a result, some manipulation is necessary.
size = (unsigned int)((((*pchar) & 0x03) << 1) | (((*(pchar+1)) & 0x80) >> 7));
Can somebody help me in understanding this?
We have byte *pchar and byte *(pchar+1). Each byte consists of 8 bits.
Let's label each bit of *pchar by its position, 76543210, and each bit of *(pchar+1) by its position with a prime, 7'6'5'4'3'2'1'0'.
1. ((*pchar) & 0x03) << 1 means "zero all bits of *pchar except bits 0 and 1, then shift the result to the left by 1 bit":
76543210 --> xxxxxx10 --> xxxxx10x
2. (((*(pchar+1)) & 0x80) >> 7) means "zero all bits of *(pchar+1) except bit 7, then shift the result to the right by 7 bits":
7'6'5'4'3'2'1'0' --> 7'xxxxxxx --> xxxxxxx7'
3. ((((*pchar) & 0x03) << 1) | (((*(pchar+1)) & 0x80) >> 7)) means "combine all non-zero bits of the left and right operands into one byte":
xxxxx10x | xxxxxxx7' --> xxxxx107'
So, in the result we have the two low bits of *pchar followed by the high bit of *(pchar+1).
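As a quick self-contained check, plugging the example bytes from above (11011010 and 11110110) into that expression:
#include <stdio.h>

int main(void)
{
    unsigned char csd[2] = { 0xDA, 0xF6 };   /* 11011010 and 11110110 */
    unsigned char *pchar = csd;

    unsigned int size = (unsigned int)((((*pchar) & 0x03) << 1) | (((*(pchar + 1)) & 0x80) >> 7));
    printf("%u\n", size);                    /* prints 5 (binary 101) */
    return 0;
}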

How to create mask with least significant bits set to 1 in C

Can someone please explain this function to me?
A mask with the least significant n bits set to 1.
Ex:
n = 6 --> 0x2F, n = 17 --> 0x1FFFF // I don't get these at all, especially how n = 6 --> 0x2F
Also, what is a mask?
The usual way is to take a 1, and shift it left n bits. That will give you something like: 00100000. Then subtract one from that, which will clear the bit that's set, and set all the less significant bits, so in this case we'd get: 00011111.
A mask is normally used with bitwise operations, especially and. You'd use the mask above to get the 5 least significant bits by themselves, isolated from anything else that might be present. This is especially common when dealing with hardware that will often have a single hardware register containing bits representing a number of entirely separate, unrelated quantities and/or flags.
A mask is a common term for an integer value that is bit-wise ANDed, ORed, XORed, etc with another integer value.
For example, if you want to extract the 8 least significant bits of an int variable, you do variable & 0xFF. 0xFF is a mask.
Likewise if you want to set bits 0 and 8, you do variable | 0x101, where 0x101 is a mask.
Or if you want to invert the same bits, you do variable ^ 0x101, where 0x101 is a mask.
To generate a mask for your case you should exploit the simple mathematical fact that if you add 1 to your mask (the mask having all its least significant bits set to 1 and the rest to 0), you get a value that is a power of 2.
So, if you generate the closest power of 2, then you can subtract 1 from it to get the mask.
Positive powers of 2 are easily generated with the left shift << operator in C.
Hence, 1 << n yields 2^n. In binary it's 10...0 with n 0s.
(1 << n) - 1 will produce a mask with n lowest bits set to 1.
Now, you need to watch out for overflows in left shifts. In C (and in C++) you can't legally shift a value left by as many bit positions as its type is wide, so if ints are 32-bit, 1<<32 results in undefined behavior. Signed integer overflows should also be avoided, so you should use unsigned values, e.g. 1u << 31.
For both correctness and performance, the best way to accomplish this has changed since this question was asked back in 2012 due to the advent of BMI instructions in modern x86 processors, specifically BLSMSK.
Here's a good way of approaching this problem, while retaining backwards compatibility with older processors.
This method is correct, whereas the current top answers produce undefined behavior in edge cases.
Clang and GCC, when allowed to optimize using BMI instructions, will condense gen_mask() to just two ops. With supporting hardware, be sure to add compiler flags for BMI instructions:
-mbmi -mbmi2
#include <inttypes.h>
#include <stdio.h>
uint64_t gen_mask(const uint_fast8_t msb) {
const uint64_t src = (uint64_t)1 << msb;
return (src - 1) ^ src;
}
int main() {
uint_fast8_t msb;
for (msb = 0; msb < 64; ++msb) {
printf("%016" PRIx64 "\n", gen_mask(msb));
}
return 0;
}
First, for those who only want the code to create the mask:
uint64_t bits = 6;
uint64_t mask = ((uint64_t)1 << bits) - 1;
// Results in 0b111111 (or 0x3F)
Thanks to #Benni who asked about using bits = 64. If you need the code to support this value as well, you can use:
uint64_t bits = 6;
uint64_t mask = (bits < 64)
? ((uint64_t)1 << bits) - 1
: (uint64_t)0 - 1;
For those who want to know what a mask is:
A mask is usually a name for a value that we use to manipulate other values using bitwise operations such as AND, OR, XOR, etc.
Short masks are usually represented in binary, where we can explicitly see all the bits that are set to 1.
Longer masks are usually represented in hexadecimal, which is really easy to read once you get the hang of it.
You can read more about bitwise operations in C here.
I believe your first example should be 0x3f.
0x3f is hexadecimal notation for the number 63 which is 111111 in binary, so that last 6 bits (the least significant 6 bits) are set to 1.
The following little C program will calculate the correct mask:
#include <stdarg.h>
#include <stdio.h>
int mask_for_n_bits(int n)
{
int mask = 0;
for (int i = 0; i < n; ++i)
mask |= 1 << i;
return mask;
}
int main (int argc, char const *argv[])
{
printf("6: 0x%x\n17: 0x%x\n", mask_for_n_bits(6), mask_for_n_bits(17));
return 0;
}
0x2F is 0010 1111 in binary - this should be 0x3f, which is 0011 1111 in binary and which has the 6 least-significant bits set.
Similarly, 0x1FFFF is 0001 1111 1111 1111 1111 in binary, which has the 17 least-significant bits set.
A "mask" is a value that is intended to be combined with another value using a bitwise operator like &, | or ^ to individually set, unset, flip or leave unchanged the bits in that other value.
For example, if you combine the mask 0x3F with some value n using the & operator, the result will have zeroes in all but the 6 least significant bits, and those 6 bits will be copied unchanged from the value n.
In the case of an & mask, a binary 0 in the mask means "unconditionally set the result bit to 0" and a 1 means "set the result bit to the input value bit". For an | mask, a 0 in the mask sets the result bit to the input bit and a 1 unconditionally sets the result bit to 1, and for an ^ mask, a 0 sets the result bit to the input bit and a 1 sets the result bit to the complement of the input bit.
