How to flip and reverse an int in C?

For example:
Input: 01011111
Output: 00000101
I know I can use ~ to flip a number, but I don't know good ways to reverse it. And I'm not sure whether they can be done together.
Does anyone have any ideas?

For this sort of thing I'd advise you to go to the fantastic bit twiddling hacks webpage. Here's one of the solutions from that page:
Reverse the bits in a byte with 3 operations (64-bit multiply and modulus division):
unsigned char b; // reverse this (8-bit) byte
b = (b * 0x0202020202ULL & 0x010884422010ULL) % 1023;
The multiply operation creates five separate copies of the 8-bit byte pattern, fanning it out into a 64-bit value. The AND operation selects the bits that are in the correct (reversed) positions, relative to each 10-bit group of bits. The multiply and the AND operations copy the bits from the original byte so they each appear in only one of the 10-bit sets. The reversed positions of the bits from the original byte coincide with their relative positions within any 10-bit set. The last step, which involves modulus division by 2^10 - 1, has the effect of merging together each set of 10 bits (from positions 0-9, 10-19, 20-29, ...) in the 64-bit value. They do not overlap, so the addition steps underlying the modulus division behave like OR operations.
This method was attributed to Rich Schroeppel in the Programming Hacks section of Beeler, M., Gosper, R. W., and Schroeppel, R. HAKMEM. MIT AI Memo 239, Feb. 29, 1972.
And here's a different solution that doesn't use 64-bit integers:
Reverse the bits in a byte with 7 operations (no 64-bit):
b = ((b * 0x0802LU & 0x22110LU) | (b * 0x8020LU & 0x88440LU)) * 0x10101LU >> 16;
Make sure you assign or cast the result to an unsigned char to remove garbage in the higher bits. Devised by Sean Anderson, July 13, 2001. Typo spotted and correction supplied by Mike Keith, January 3, 2002.
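To put the pieces together for the original question (flip, then reverse), here is a minimal sketch; the reversal line is the 7-operation trick quoted above, while the helper name flip_and_reverse and the main driver are just illustrative additions:

#include <stdio.h>

/* Sketch: complement an 8-bit value, then reverse its bit order.
   The reversal is the 7-operation trick quoted above (no 64-bit needed). */
static unsigned char flip_and_reverse(unsigned char b)
{
    b = (unsigned char)~b;                             /* flip:    01011111 -> 10100000 */
    b = ((b * 0x0802LU & 0x22110LU) |                  /* reverse: 10100000 -> 00000101 */
         (b * 0x8020LU & 0x88440LU)) * 0x10101LU >> 16;
    return b;   /* assigning back to unsigned char drops the garbage in the higher bits */
}

int main(void)
{
    printf("%02x\n", flip_and_reverse(0x5F));   /* 0x5F = 01011111, prints 05 = 00000101 */
    return 0;
}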

Related

Understanding Byte swapping

I'm trying to figure out a way to create a little endian/big endian conversion for 64 bit integers (uint64_t) and while I find a lot of answers online as to how to do it, none of them explain what exactly is going on. For example, to get the nth byte of the integer I found this response:
int x = (number >> (8*n)) & 0xff;
Even though I understand the bit shifting component (shifting 8n digits to the right), I don't see where the & and 0xff come in, or what they mean, aside from the fact that & is a bitwise AND operator.
So, how would this sort of logic apply to a big-endian/little-endian byte swapping method for 64 bit integers?
Thank you in advance.
It might be easiest to think of an analogy with decimal numbers:
Take the number 308. It has three digits, '3', '0', and '8'. By convention, digits to the left are more significant than digits to the right. But the convention could just as easily have been the other way...the digits could've been written in reverse order (e.g., 803).
Why is this relevant? Consider a hexadecimal representation of a number on a computer: 0xabcd0123. In a mathematically rigorous sense, one can view this number as 4 radix-256 digits. (i.e., 0xab, 0xcd, 0x01, 0x23). So, endianness is about the convention by which these radix-256 digits are ordered when written into memory.
Little-endian means "write the least significant digit to the lowest address";
Big-endian means "write the most significant digit to the lowest address".
So, on to the mechanics of processing endianness:
If you think of the decimal example above, how would you get each digit? The least significant digit is given by taking the number modulo 10 (i.e., 308 % 10 = 8). The second digit can be found by dividing the number by 10, then taking it modulo 10 (i.e., 308 / 10 = 30; 30 % 10 = 0) and so forth.
The process is exactly the same for binary data on a computer, except that it's treated as radix-256 instead of radix-10 like decimal digits. This is where a few tricks come in.
When taking a value modulo a modulus that is a power of 2, you can do it with an AND. Let m = 256 be our modulus. Because m is a power of 2, x % m is equivalent to x & (m - 1). (The proof of this identity is out of scope for this answer.)
When doing division by a power of 2, you can do it via right-shift. That is, let m=256 be our divisor. Because m = 2 to the 8th power, x / m is equivalent to x >> 8.
Thus binary endian-specific serialization uses exactly the process above:
uint32_t val = 0xabcd0123;
(val & 0xff) is equivalent to (val % 256), and yields 0x23.
((val >> 8) & 0xff) is equivalent to ((val / 256) % 256), and yields 0x01.
((val >> 16) & 0xff) is equivalent to (((val / 256)/256) % 256), and yields 0xcd.
and so on. So now that you have access to the digits/bytes, you simply have to choose the order in which to store them. Per above, "big endian = most-significant at lowest address", "little endian = least-significant at lowest address".
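As a minimal sketch of how this applies to the asker's uint64_t case (the function names store_le64 and store_be64 are illustrative, not from any particular library):

#include <stdint.h>
#include <stddef.h>

/* Write a 64-bit value into a byte buffer, least significant radix-256 digit first. */
void store_le64(uint8_t *buf, uint64_t v)
{
    for (size_t i = 0; i < 8; i++)
        buf[i] = (uint8_t)((v >> (8 * i)) & 0xff);        /* i-th byte, per the shift-and-mask above */
}

/* Write a 64-bit value into a byte buffer, most significant radix-256 digit first. */
void store_be64(uint8_t *buf, uint64_t v)
{
    for (size_t i = 0; i < 8; i++)
        buf[i] = (uint8_t)((v >> (8 * (7 - i))) & 0xff);
}

Converting between the two conventions then amounts to reading the bytes with one function's ordering and writing them back with the other's.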

converting little endian hex to big endian decimal in C

I am trying to understand and implement a simple file system based on FAT12. I am currently looking at the following snippet of code and its driving me crazy:
int getTotalSize(char * mmap)
{
    int *tmp1 = malloc(sizeof(int));
    int *tmp2 = malloc(sizeof(int));
    int retVal;
    *tmp1 = mmap[19];
    *tmp2 = mmap[20];
    printf("%d and %d read\n", *tmp1, *tmp2);
    retVal = *tmp1 + ((*tmp2) << 8);
    free(tmp1);
    free(tmp2);
    return retVal;
};
From what I've read so far, the FAT12 format stores integers in little-endian format,
and the code above is getting the size of the file system, which is stored in the 19th and 20th bytes of the boot sector.
However, I don't understand why retVal = *tmp1+((*tmp2)<<8); works. Is the bitwise <<8 converting the second byte to decimal, or to big-endian format?
Why is it only doing it to the second byte and not the first one?
the bytes in question are [in little endian format] :
40 0B
and I tried converting them manually by switching the order first to
0B 40
and then converting from hex to decimal, and I get the right output. I just don't understand how adding the first byte to the bitwise shift of the second byte does the same thing?
Thanks
The use of malloc() here is seriously facepalm-inducing. Utterly unnecessary, and a serious "code smell" (makes me doubt the overall quality of the code). Also, mmap clearly should be unsigned char (or, even better, uint8_t).
That said, the code you're asking about is pretty straight-forward.
Given two byte-sized values a and b, there are two ways of combining them into a 16-bit value (which is what the code is doing): you can either consider a to be the least-significant byte, or b.
Using boxes, the 16-bit value can look either like this:
+---+---+
| a | b |
+---+---+
or like this, if you instead consider b to be the most significant byte:
+---+---+
| b | a |
+---+---+
The way to combine the lsb and the msb into 16-bit value is simply:
result = (msb * 256) + lsb;
UPDATE: The 256 comes from the fact that that's the "worth" of each successively more significant byte in a multibyte number. Compare it to the role of 10 in a decimal number (to combine two single-digit decimal numbers c and d you would use result = 10 * c + d).
Consider msb = 0x01 and lsb = 0x00, then the above would be:
result = 0x1 * 256 + 0 = 256 = 0x0100
You can see that the msb byte ended up in the upper part of the 16-bit value, just as expected.
Your code is using << 8 to do a bitwise shift to the left, which is the same as multiplying by 2^8, i.e. 256.
Note that result above is a value, i.e. not a byte buffer in memory, so its endianness doesn't matter.
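For reference, a minimal malloc-free version of the function under discussion might look like this (a sketch, keeping the asker's byte offsets; the unsigned char casts avoid sign-extension surprises):

/* Byte 19 is the least significant byte, byte 20 the most significant. */
int getTotalSize(const char *mmap)
{
    unsigned char lsb = (unsigned char)mmap[19];
    unsigned char msb = (unsigned char)mmap[20];
    return lsb + (msb << 8);   /* equivalently lsb | (msb << 8) for byte-sized values */
}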
I see no problem combining individual digits or bytes into larger integers.
Let's do decimal with 2 digits: 1 (least significant) and 2 (most significant):
1 + 2 * 10 = 21 (10 is the system base)
Let's now do base-256 with 2 digits: 0x40 (least significant) and 0x0B (most significant):
0x40 + 0x0B * 0x100 = 0x0B40 (0x100=256 is the system base)
The problem, however, is likely lying somewhere else, in how 12-bit integers are stored in FAT12.
A 12-bit integer occupies 1.5 8-bit bytes. And in 3 bytes you have 2 12-bit integers.
Suppose, you have 0x12, 0x34, 0x56 as those 3 bytes.
In order to extract the first integer you only need to take the first byte (0x12) and the 4 least significant bits of the second byte (0x04) and combine them like this:
0x12 + ((0x34 & 0x0F) << 8) == 0x412
In order to extract the second integer you need to take the 4 most significant bits of the second byte (0x03) and the third byte (0x56) and combine them like this:
(0x56 << 4) + (0x34 >> 4) == 0x563
If you read the official Microsoft's document on FAT (look up fatgen103 online), you'll find all the FAT relevant formulas/pseudo code.
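A minimal sketch of that extraction as code (the function name fat12_entry and the even/odd indexing convention follow the explanation above, not the official document verbatim):

#include <stdint.h>

/* Return the n-th 12-bit FAT entry; every pair of entries occupies 3 bytes. */
uint16_t fat12_entry(const uint8_t *fat, uint32_t n)
{
    uint32_t i = n + n / 2;                            /* byte offset: n * 1.5, rounded down */
    if (n % 2 == 0)
        return fat[i] | ((fat[i + 1] & 0x0F) << 8);    /* low byte + low nibble of the next byte */
    else
        return (fat[i] >> 4) | (fat[i + 1] << 4);      /* high nibble + the next byte */
}

With the bytes 0x12, 0x34, 0x56 from the example, fat12_entry(fat, 0) yields 0x412 and fat12_entry(fat, 1) yields 0x563.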
The << operator is the left shift operator. It takes the value on the left of the operator and shifts it by the number of bits given on the right side of the operator.
So in your case, it shifts the value of *tmp2 eight bits to the left, and combines it with the value of *tmp1 to generate a 16 bit value from two eight bit values.
For example, let's say you have the integer 1. This is, in 16-bit binary, 0000000000000001. If you shift it left by eight bits, you end up with the binary value 0000000100000000, i.e. 256 in decimal.
The presentation (i.e. binary, decimal or hexadecimal) has nothing to do with it. All integers are stored the same way on the computer.

Need help understanding bitmaps, bitwise operations, and C

Disclaimer: I am asking these questions in relation to an assignment. The assignment itself calls for implementing a bitmap and doing some operations with that, but that is not what I am asking about. I just want to understand the concepts so I can try the implementation for myself.
I need help understanding bitmaps/bit arrays and bitwise operations. I understand the basics of binary and how left/right shift work, but I don't know exactly how that use is beneficial.
Basically, I need to implement a bitmap to store the results of a prime sieve (of Eratosthenes.) This is a small part of a larger assignment focused on different IPC methods, but to get to that part I need to get the sieve completed first. I've never had to use bitwise operations nor have I ever learned about bitmaps, so I'm kind of on my own to learn this.
From what I can tell, bitmaps are arrays of a bit of a certain size, right? By that I mean you could have an 8-bit array or a 32-bit array (in my case, I need to find the primes for a 32-bit unsigned int, so I'd need the 32-bit array.) So if this is an array of bits, 32 of them to be specific, then we're basically talking about a string of 32 1s and 0s. How does this translate into a list of primes? I figure that one method would evaluate the binary number and save it to a new array as decimal, so all the decimal primes exist in one array, but that seems like you're using too much data.
Do I have the gist of bitmaps? Or is there something I'm missing? I've tried reading about this around the internet but I can't find a source that makes it clear enough for me...
Suppose you have a list of primes: {3, 5, 7}. You can store these numbers as a character array: char c[] = {3, 5, 7} and this requires 3 bytes.
Instead, let's use a single byte such that each set bit indicates that the number is in the set. For example, 01010100 (bits 2, 4, and 6 set, standing for 3, 5, and 7). If we can set the bit we want and later test it, we can use this to store the same information in a single byte. To set it:
char b = 0;
// want to set `3` so shift 1 twice to the left
b = b | (1 << 2);
// also set `5`
b = b | (1 << 4);
// and 7
b = b | (1 << 6);
And to test these numbers:
// is 3 in the map:
if (b & (1 << 2)) {
    // it is in...
}
You are going to need a lot more than 32 bits.
You want a sieve for up to 2^32 numbers, so you will need a bit for each one of those. Each bit will represent one number, and will be 0 if the number is prime and 1 if it is composite. (You could save a bit by noting that the smallest prime is 2, since 1 is neither prime nor composite, but it is easier to just waste that bit.)
2^32 = 4,294,967,296
Divide by 8
536,870,912 bytes, or 1/2 GB.
So you will want an array of 2^29 bytes, or 2^27 4-byte words, or whatever you decide is best, and also a method for manipulating the individual bits stored in the chars (ints) in the array.
It sounds like eventually you are going to have several threads or processes operating on this shared memory. You may need to store it all in a file if you can't allocate all that memory to yourself.
Say you want to find the bit for x. Then let a = x / 8 and b = x - 8 * a. Then the bit is at arr[a] & (1 << b). (Avoid the modulus operator % wherever possible.)
//mark composite
a = x / 8;
b = x - 8 * a;
arr[a] |= 1 << b;
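A minimal sketch of the matching helpers (the names set_composite and is_composite are just illustrative, and uint8_t is one possible choice for the array element type):

#include <stdint.h>

/* Bit x of arr is 1 if x is composite, 0 if (still assumed) prime. */
static void set_composite(uint8_t *arr, uint32_t x)
{
    uint32_t a = x / 8;
    uint32_t b = x - 8 * a;          /* same as x % 8, without the modulus operator */
    arr[a] |= (uint8_t)(1u << b);
}

static int is_composite(const uint8_t *arr, uint32_t x)
{
    uint32_t a = x / 8;
    uint32_t b = x - 8 * a;
    return (arr[a] >> b) & 1;
}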
This sounds like a fun assignment!
A bitmap allows you to construct a large predicate function over the range of numbers you're interested in. If you just have a single 8-bit char, you can store Boolean values for each of the eight values. If you have 2 chars, it doubles your range.
So, say you have a bitmap that already has this information stored, your test function could look something like this:
bool num_in_bitmap (int num, char *bitmap, size_t sz) {
    if (num/8 >= sz) return 0;
    return (bitmap[num/8] >> (num%8)) & 1;
}

Homework - C bit puzzle - Perform % using C bit operations (no looping, conditionals, function calls, etc)

I'm completely stuck on how to do this homework problem and looking for a hint or two to keep me going. I'm limited to 20 operations (= doesn't count in this 20).
I'm supposed to fill in a function that looks like this:
/* Supposed to do x%(2^n).
For example: for x = 15 and n = 2, the result would be 3.
Additionally, if positive overflow occurs, the result should be the
maximum positive number, and if negative overflow occurs, the result
should be the most negative number.
*/
int remainder_power_of_2(int x, int n){
    int twoToN = 1 << n;
    /* Magic...? How can I do this without looping? We are assuming it is a
       32 bit machine, and we can't use constants bigger than 8 bits
       (0xFF is valid for example).
       However, I can make a 32 bit number by ORing together a bunch of stuff.
       Valid operations are: << >> + ~ ! | & ^
    */
    return theAnswer;
}
I was thinking maybe I could shift the twoToN over left... until I somehow check (without if/else) that it is bigger than x, and then shift back to the right once... then xor it with x... and repeat? But I only have 20 operations!
Hint: In the decimal system, to take a number modulo a power of 10 you just keep the last few digits and zero out the others. E.g. 12345 % 100 = 00045 = 45. Well, in a computer numbers are binary, so you have to zero out the binary digits (bits). Look at the various bit manipulation operators (&, |, ^) to do so.
Since binary is base 2, remainders mod 2^N are exactly represented by the rightmost bits of a value. For example, consider the following 32 bit integer:
00000000001101001101000110010101
This has the two's complement value of 3461525. The remainder mod 2 is exactly the last bit (1). The remainder mod 4 (2^2) is exactly the last 2 bits (01). The remainder mod 8 (2^3) is exactly the last 3 bits (101). Generally, the remainder mod 2^N is exactly the last N bits.
In short, you need to be able to take your input number, and mask it somehow to get only the last few bits.
A tip: say you're using mod 64. The value of 64 in binary is:
00000000000000000000000001000000
The modulus you're interested in is the last 6 bits. I'll provide you a sequence of operations that can transform that number into a mask (but I'm not going to tell you what they are, you can figure them out yourself :D)
00000000000000000000000001000000 // starting value
11111111111111111111111110111111 // ???
11111111111111111111111111000000 // ???
00000000000000000000000000111111 // the mask you need
Each of those steps equates to exactly one operation that can be performed on an int type. Can you figure them out? Can you see how to simplify my steps? :D
Another hint:
00000000000000000000000001000000 // 64
11111111111111111111111111000000 // -64
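If you want to see the unsigned core of that hint as code, here is a minimal sketch using only the allowed operators; note it ignores the overflow/negative-x requirements of the actual assignment, so it is not the full solution:

/* Sketch only (assumes x >= 0): mask off everything but the low n bits.
   ~twoToN + 1 is -twoToN in two's complement; inverting that gives twoToN - 1,
   so no '-' operator is needed. Operators used: << ~ + &  */
int remainder_nonnegative(int x, int n)
{
    int twoToN = 1 << n;           /* e.g. n = 6: ...01000000 */
    int mask = ~(~twoToN + 1);     /* ...10111111 -> ...11000000 -> ...00111111 */
    return x & mask;               /* x % 2^n for non-negative x */
}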
Since your divisor is always power of two, it's easy.
uint32_t remainder(uint32_t number, uint32_t power)
{
    power = 1 << power;
    return (number & (power - 1));
}
Suppose you input number as 5 and divisor as 2 (i.e. you call remainder(5, 1)):
`00000000000000000000000000000101` number
AND
`00000000000000000000000000000001` divisor - 1
=
`00000000000000000000000000000001` remainder (what we expected)
Suppose you input number as 7 and divisor as 4 (i.e. you call remainder(7, 2)):
`00000000000000000000000000000111` number
AND
`00000000000000000000000000000011` divisor - 1
=
`00000000000000000000000000000011` remainder (what we expected)
This only works as long as the divisor is a power of two (including divisor = 1, where the remainder is always 0), so use it carefully.

What does >> and 0xfffffff8 mean?

I was told that (i >> 3) is faster than (i/8) but I can't find any information on what >> is. Can anyone point me to a link that explains it?
The same person told me "int k = i/8, followed by k*8 is better accomplished by (i&0xfffffff8);" but again Google didn't help m...
Thanks for any links!
As explained here the >> operator is simply a bitwise shift of the bits of i. So shifting i 1 bit to the right results in an integer-division by 2 and shifting by 3 bits results in a division by 2^3=8.
But nowadays this optimization for division by a power of two should not really be done anymore, as compilers should be smart enough to do this themselves.
Similarly a bitwise AND with 0xFFFFFFF8 (1...1000, last 3 bits 0) is equal to rounding down i to the nearest multiple of 8 (like (i/8)*8 does), as it will zero the last 3 bits of i.
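As a quick sanity check of both identities (a sketch; for negative i, >> and / round differently, so the equivalences below only hold for non-negative values):

#include <assert.h>

int main(void)
{
    for (int i = 0; i < 1000; i++) {
        assert((i >> 3) == i / 8);                  /* shift right by 3 == divide by 2^3 */
        assert((i & 0xfffffff8) == (i / 8) * 8);    /* mask == round down to a multiple of 8 */
    }
    return 0;
}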
Bitwise shift right.
i >> 3 moves the bits of the integer i 3 places to the right - i.e., it divides by 2^3.
For int x = i / 8 * 8:
1) i / 8 can be replaced with i >> 3, a bitwise shift to the right by 3 bits (8 = 2^3);
2) the whole expression can be replaced with i & 0xfffffff8, a bitwise AND with a mask.
For example, if you have:
i = 11111111
k (i/8) would be: 00011111
x (k * 8) would be: 11111000
The operation therefore just resets the last 3 bits, so the comparatively costly division and multiplication can be rewritten simply as
i & 0xfffffff8, an AND with the mask (...11111000).
They are Bitwise Operations
The >> operator is the bit shift operator. It takes the bits of the value and shifts them over a set number of slots to the right.
Regarding the first half:
>> is a bit-wise shift to the right.
So shifting a numeric value 3 bits to the right is the same as dividing by 8 and truncating the result to an integer.
Here's a good reference for operators and their precedence: http://web.cs.mun.ca/~michael/c/op.html
The second part of your question involves the & operator, which is a bit-wise AND. The example is ANDing i and a number that leaves all bits set except for the 3 least significant ones. That is essentially the same thing happening when you have a number, divide it by 8, store the result as an integer, then multiply that result by 8.
The reason this is so is that dividing by 8 and storing as an integer is the same as bit-shifting to the right 3 places, and multiplying by 8 and storing the result in an int is the same as bit-shifting to the left 3 places.
So, if you're multiplying or dividing by a power of 2, such as 8, and you're going to accept the truncation that happens when you store the result in an int, bit-shifting is faster, operationally. This is because the processor can skip the multiply/divide algorithm and go straight to shifting bits, which involves fewer steps.
Bitwise shifting.
Suppose I have an 8-bit integer, in binary
01000000
If I right shift it by 1 (the >> operator) the result is
00100000
If I then left shift it by 1 (the << operator), I clearly get back to where I started
01000000
This is because the first binary integer is equivalent to
0*2^7 + 1*2^6 + 0*2^5 + 0*2^4 + 0*2^3 + 0*2^2 + 0*2^1 + 0*2^0
or simply 2^6 or 64
When we right shift by 1 we get the following
0*2^7 + 0*2^6 + 1*2^5 + 0*2^4 + 0*2^3 + 0*2^2 + 0*2^1 + 0*2^0
or simply 2^5 or 32
Which means
i >> 1
is the same as
i / 2
If we shift once more (i >> 2), we effectively divide by 2 once again and get
i / 2 / 2
Which is really
i / 4
Not quite a mathematical proof, but for non-negative i you can see the following holds true
i >> n == i / (2^n)
That's called bit shifting; it's an operation on bits. For example, if you have a number written in binary, let's say 8, it will be 1000, so
x = 1000;
y = x >> 1; //y = 100 = 4
z = x >> 3; //z = 1
You're shifting the bits in binary, so for example:
1000 == 8
0100 == 4 (>> 1)
0010 == 2 (>> 2)
0001 == 1 (>> 3)
Since you're using a base-two system, you can take advantage of certain divisors (powers of two, integer math only!) and just bit-shift. On top of that, I believe most compilers know this and will do it for you.
As for the second part:
(i&0xfffffff8);
Say i = 16
0x00000010 & 0xfffffff8 == 16
(16 / 8) * 8 == 16
Again, this takes advantage of bitwise operators on binary values. Investigate how bitwise operators work on binary a bit more for a really clear understanding of what is going on at the bit level (and how to read hex).
>> is the right shift operation.
If you have a number 8, which is represented in binary as 00001000, shifting bits to the right by 3 positions will give you 00000001, which is decimal 1. This is equivalent to dividing by 2 three times.
Dividing and then multiplying by the same power of two means that you set some bits at the right to zero. The same can be done by applying a mask. Say, 0xF8 is 11111000 in binary; if you AND it with a number, it sets that number's last three bits to zero and leaves the other bits as they are. E.g., 10 & 0xF8 would be 00001010 & 11111000, which equals 00001000, or 8 in decimal.
Of course, if you use 32-bit variables, you need a mask of matching size, with all bits set to 1 except the three bits at the right - that is, 0xFFFFFFF8.
>> shifts the number to the right. Consider the binary number 00001000, which represents 8 in decimal notation. Shifting it 3 bits to the right gives 00000001, which is the number 1 in decimal notation. Thus you see that every 1-bit shift to the right is in fact a division by 2. Note that the operator discards the fractional part of the result.
Therefore i >> n implies i/2^n.
This might be fast depending on the implementation of the compiler. But it generally takes place right in the registers and therefore is very fast as compared to traditional divide and multiply.
The answer to the second part is contained in the first. Since division also truncates the fractional part of the result, dividing by 8 in effect shifts your number 3 bits to the right, losing all information about the rightmost 3 bits. When you then multiply by 8 again (which in effect means shifting left by 3 bits), the rightmost 3 bits are padded with zeros. The complete operation can therefore be considered a single & operation with 0xfffffff8, a mask with all bits set to 1 except the rightmost 3 bits, which are 0.
