How can I swap every 2 bits in a binary number? - C

I'm working on a programming project, and part of it is to write a function, using only bitwise operators, that swaps every two bits. I've come up with a comb-style algorithm that accomplishes this, but it only works for unsigned numbers. Any ideas how I can get it to work with signed numbers as well? I'm completely stumped on this one. Here's what I have so far:
// Mask 1 - For odd bits
int a1 = 0xAA; a1 <<= 24;
int a2 = 0xAA; a2 <<= 16;
int a3 = 0xAA; a3 <<= 8;
int a4 = 0xAA;
int mask1 = a1 | a2 | a3 | a4;
// Mask 2 - For even bits
int b1 = 0x55; b1 <<= 24;
int b2 = 0x55; b2 <<= 16;
int b3 = 0x55; b3 <<= 8;
int b4 = 0x55;
int mask2 = b1 | b2 | b3 | b4;
// Mask Results
int odd = x & mask1;
int even = x & mask2;
int newNum = (odd >> 1) | (even << 1);
return newNum;
The manual creation of the masks by OR'ing variables together is because the only constants allowed are between 0x00 and 0xFF.

The problem is that odd >> 1 will sign-extend with negative numbers. Simply do another AND to eliminate the duplicated sign bit.
int newNum = ((odd >> 1) & mask2) | (even << 1);
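Here's that fix as a self-contained sketch you can compile and test (the function name and mask-building layout are mine):

```c
#include <assert.h>

/* Swap every pair of adjacent bits. The masks are built up from byte-sized
   constants, and the result of the (possibly arithmetic) right shift is
   re-masked so negative inputs work too. */
int swapPairs(int x) {
    int mask1 = 0xAA; mask1 |= mask1 << 8; mask1 |= mask1 << 16; /* 0xAAAAAAAA */
    int mask2 = 0x55; mask2 |= mask2 << 8; mask2 |= mask2 << 16; /* 0x55555555 */
    int odd  = x & mask1;
    int even = x & mask2;
    return ((odd >> 1) & mask2) | (even << 1);
}
```

For example, swapPairs(0x44) gives 0x88, and swapPairs(-1) stays -1 since all bits are set either way.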

Minimizing the operators and noticing the sign extension problem gives:
int odd = 0x55;
odd |= odd << 8;
odd |= odd << 16;
int newnum = ((x & odd) << 1)   // this is (sort of) well defined
           | ((x >> 1) & odd);  // this handles the sign extension without
                                // additional AND operations
One remark though: bit twiddling should generally be applied to unsigned integers only.

When you right-shift a signed number, the sign bit is copied into the vacated high bits; this is known as sign extension. Typically when you are dealing with bit shifting, you want to use unsigned numbers.

Minimizing use of constants by working one byte at a time:
unsigned char* byte_p;
unsigned char byte;
int ii;
byte_p = (unsigned char *)&x;
for(ii=0; ii<4; ii++) {
byte = *byte_p;
*byte_p = ((byte & 0xAA)>>1) | ((byte & 0x55) << 1);
byte_p++;
}
Minimizing operations and keeping constants between 0x00 and 0xFF:
unsigned int comb = (0xAA << 8) + 0xAA;
comb += comb<<16;
newNum = ((x & comb) >> 1) | ((x & (comb >> 1)) << 1);
10 operations.
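As a runnable sketch, wrapped in a function (the name is mine):

```c
#include <assert.h>

/* Pair swap with one byte-sized constant expanded into a full-width comb. */
unsigned swapPairsComb(unsigned x) {
    unsigned comb = (0xAA << 8) + 0xAA; /* 0xAAAA     */
    comb += comb << 16;                 /* 0xAAAAAAAA */
    return ((x & comb) >> 1) | ((x & (comb >> 1)) << 1);
}
```

swapPairsComb(0x12345678u) gives 0x2138A9B4: each nibble has its two bit pairs swapped internally.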
Just saw the comments above and realized this implements (more or less) some of the suggestions that @akisuihkonen made. So consider this a tip of the hat!

Related

Sign extension, addition and subtraction binary in C

How would I go about implementing a sign extend from 16 bits to 32 bits in C code?
I am supposed to be using bitwise operators. I also need to add and subtract; can anyone point me in the right direction? I did the first 4 but am confused on the rest. I have to incorporate a for loop somewhere as well for 1 of the cases.
I am not allowed to use any arithmetic operators (+, -, /, *) and no if statements.
Here is the code for the switch statement I am currently editing:
unsigned int csc333ALU(const unsigned int opcode,
const unsigned int argument1,
const unsigned int argument2) {
unsigned int result;
switch(opcode) {
case(0x01): // result = NOT argument1
result = ~(argument1);
break;
case(0x02): // result = argument 1 OR argument 2
result = argument1 | argument2;
break;
case(0x03): // result = argument 1 AND argument 2
result = argument1 & argument2;
break;
case(0x04): // result = argument 1 XOR argument 2
result = argument1 ^ argument2;
break;
case(0x05): // result = 16 bit argument 1 sign extended to 32 bits
result = 0x00000000;
break;
case(0x06): // result = argument1 + argument2
result = 0x00000000;
break;
case(0x07): // result = -argument1. In two's complement, negate and add 1.
result = 0x00000000;
break;
default:
printf("Invalid opcode: %X\n", opcode);
result = 0xFFFFFFFF;
}
return result;
}
A partial answer, for sign extension:
result = (argument1 & 0x8000) == 0x8000 ? 0xFFFF0000 | argument1 : argument1;
To sign-extend a 16 bit number to 32 bit, you need to copy bit 15 to the upper bits. The naive way to do this is with 16 instructions, copying bit 15 to bit 16, then 17, then 18, and so on. But you can do it more efficiently by using previously copied bits and doubling the number of bits you've copied each time like this:
unsigned int ext = (argument1 & 0x8000U) << 1;
ext |= ext << 1;
ext |= ext << 2;
ext |= ext << 4;
ext |= ext << 8;
result = (argument1 & 0xffffU) | ext;
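Packaged for testing (the function name is mine), with each step doubling the run of copied sign bits:

```c
#include <assert.h>

/* Sign-extend the low 16 bits of x to 32 bits using only bitwise operators. */
unsigned signExtend16(unsigned x) {
    unsigned ext = (x & 0x8000U) << 1; /* copy bit 15 into bit 16 */
    ext |= ext << 1;                   /* bits 16..17 */
    ext |= ext << 2;                   /* bits 16..19 */
    ext |= ext << 4;                   /* bits 16..23 */
    ext |= ext << 8;                   /* bits 16..31 */
    return (x & 0xFFFFU) | ext;
}
```

signExtend16(0x8000U) gives 0xFFFF8000, while a positive input like 0x7FFF passes through unchanged.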
To add two 32 bit numbers "manually" then you can simply do it bit by bit.
unsigned carry = 0;
result = 0;
for (int i = 0; i < 32; i++) {
// Extract the ith bit from argument1 and argument 2.
unsigned a1 = (argument1 >> i) & 1;
unsigned a2 = (argument2 >> i) & 1;
// The ith bit of result is set if 1 or 3 of a1, a2, carry is set.
unsigned v = a1 ^ a2 ^ carry;
result |= v << i;
// The new carry is 1 if at least two of a1, a2, carry is set.
carry = (a1 & a2) | (a1 & carry) | (a2 & carry);
}
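The loop above, as a function you can check (name mine):

```c
#include <assert.h>

/* Ripple-carry addition, one bit at a time, with only bitwise operators. */
unsigned addBitwise(unsigned a, unsigned b) {
    unsigned result = 0, carry = 0;
    for (int i = 0; i < 32; i++) {
        unsigned a1 = (a >> i) & 1;
        unsigned a2 = (b >> i) & 1;
        result |= (a1 ^ a2 ^ carry) << i;                /* sum bit      */
        carry = (a1 & a2) | (a1 & carry) | (a2 & carry); /* majority bit */
    }
    return result;
}
```

Note it wraps on overflow exactly like unsigned + does: addBitwise(0xFFFFFFFFu, 1) is 0.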
Subtraction works with almost exactly the same code: a - b is the same as a + (~b+1) in two's complement arithmetic. Because you aren't allowed to simply add 1, you can achieve the same by initialising carry to 1 instead of 0.
unsigned carry = 1;
result = 0;
for (int i = 0; i < 32; i++) {
unsigned a1 = (argument1 >> i) & 1;
unsigned a2 = (~argument2 >> i) & 1;
unsigned v = a1 ^ a2 ^ carry;
result |= v << i;
carry = (a1 & a2) | (a1 & carry) | (a2 & carry);
}
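Again as a testable function (name mine); the only changes from addition are the complemented operand and the initial carry:

```c
#include <assert.h>

/* a - b computed as a + ~b + 1, seeding the carry with 1. */
unsigned subBitwise(unsigned a, unsigned b) {
    unsigned result = 0, carry = 1;
    for (int i = 0; i < 32; i++) {
        unsigned a1 = (a >> i) & 1;
        unsigned a2 = (~b >> i) & 1;
        result |= (a1 ^ a2 ^ carry) << i;
        carry = (a1 & a2) | (a1 & carry) | (a2 & carry);
    }
    return result;
}
```

subBitwise(0, 1) gives 0xFFFFFFFF, i.e. -1 in two's complement.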
To find two's complement without doing the negation, similar ideas apply. Bitwise negate and then add 1. Adding 1 is simpler than adding argument2, so the code is correspondingly simpler.
result = ~argument1;
unsigned carry = 1;
for (int i = 0; i < 32 && carry; i++) {
carry &= (result >> i) & 1; // carry propagates only through 1-bits
result ^= (1 << i); // flip the bit: a 1 the carry passed through becomes 0
}
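Note the bit flip has to be an XOR, not an OR, so that a 1-bit the carry passes through is cleared to 0. A corrected, testable sketch (name mine):

```c
#include <assert.h>

/* Two's-complement negation: bitwise NOT, then propagate a +1 carry.
   The loop stops as soon as the carry is absorbed by a 0 bit. */
unsigned negateBitwise(unsigned x) {
    unsigned result = ~x;
    unsigned carry = 1;
    for (int i = 0; i < 32 && carry; i++) {
        carry &= (result >> i) & 1; /* carry survives only through 1-bits */
        result ^= 1u << i;          /* flip the bit the carry passed      */
    }
    return result;
}
```

For example, negateBitwise(2) gives 0xFFFFFFFE (-2), where an OR-based flip would incorrectly give 0xFFFFFFFF.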
To get sign extension from short int to int:
short int iShort = value;
int i = iShort; // compiler automatically creates code that performs sign extension
Note: going from i back to iShort will generate a compiler warning.
However, for other situations there is no need to make a comparison: the & isolates the sign bit, so the result is either 0 or 0x8000. Be sure to cast the parts of the calculation to int:
int i = (((int)argument & 0x8000) ? (int)(0xFFFF0000 | (int)argument) : (int)argument);

Zip two or more numbers together bitwise

What is the best way to zip two (or more) numbers' bit representations together in C/C++/Obj-C?
I have nums one to three. Their binary representations are [abc, ABC, xyz]. I would like to produce a num with the binary representation [aAxbBycCz]. I'm mainly working with numbers that are over 21 bits.
(Ignoring the limit on integers, endianness and whatnot.)
Thanks, happy holidays guys :)
A solution that should work for any number of bits:
const unsigned int BITS = 21;
unsigned int zipper(unsigned a0, unsigned a1, unsigned a2)
{
unsigned int result = 0;
for (unsigned int mask = 1<<BITS; mask != 0; mask >>= 1)
{
result |= a0 & mask;
result <<= 1;
result |= a1 & mask;
result <<= 1;
result |= a2 & mask;
}
return result;
}
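Note that three 21-bit inputs need a 63-bit result, so a sketch that exercises the full width has to widen the accumulator (the 64-bit result type is my change):

```c
#include <assert.h>

enum { BITS = 21 };

/* Interleave the low BITS bits of three numbers; a0 supplies the most
   significant bit of each output triple. */
unsigned long long zipper(unsigned a0, unsigned a1, unsigned a2) {
    unsigned long long result = 0;
    for (unsigned mask = 1u << BITS; mask != 0; mask >>= 1) {
        result |= a0 & mask;
        result <<= 1;
        result |= a1 & mask;
        result <<= 1;
        result |= a2 & mask;
    }
    return result;
}
```

The bits land correctly even though each masked bit is ORed in at its original position: the remaining shifts carry it up to its final slot. zipper(7, 0, 0) gives 0x124 (binary 100100100).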
If you need more speed, do some precalculation:
static unsigned explode[] = { 0, 0x1, 0x8, 0x9, 0x40, 0x41, 0x48, 0x49 }; // spread bit k to bit 3k
unsigned int zipper(unsigned a0, unsigned a1, unsigned a2)
{
return explode[a0] | ( explode[a1] << 1) | ( explode[a2] << 2 ) ;
}
With the usual caveats for out of bounds, etc.
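For the lookup to produce the contiguous interleave the question asks for, each table entry has to spread bit k of its 3-bit index to bit 3k. A self-contained sketch (table values and names are mine):

```c
#include <assert.h>

/* explode[n] spreads the 3 bits of n so that bit k lands at bit 3k. */
static const unsigned explode[8] = {
    0x00, 0x01, 0x08, 0x09, 0x40, 0x41, 0x48, 0x49
};

/* Zip three 3-bit numbers; a0 occupies the lowest bit of each triple. */
unsigned zip3(unsigned a0, unsigned a1, unsigned a2) {
    return explode[a0 & 7] | (explode[a1 & 7] << 1) | (explode[a2 & 7] << 2);
}
```

For wider inputs you would split each number into 3-bit groups, look each group up, and OR the exploded groups together at 9-bit offsets.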
I would just do it by brute force:
unsigned int binaryZip(unsigned int a0, unsigned int a1, unsigned int a2)
{
return ((a0 << 0) & 0x001) |
((a1 << 1) & 0x002) |
((a2 << 2) & 0x004) |
((a0 << 2) & 0x008) |
((a1 << 3) & 0x010) |
((a2 << 4) & 0x020) |
((a0 << 4) & 0x040) |
((a1 << 5) & 0x080) |
((a2 << 6) & 0x100);
}

Convert Little Endian to Big Endian

I just want to ask if my method is correct for converting from little endian to big endian, just to make sure I understand the difference.
I have a number which is stored in little-endian; here are the binary and hex representations of the number:
0001 0010 0011 0100 0101 0110 0111 1000
12345678
In big-endian format I believe the bytes should be swapped, like this:
1000 0111 0110 0101 0100 0011 0010 0001
87654321
Is this correct?
Also, the code below attempts to do this but fails. Is there anything obviously wrong or can I optimize something? If the code is bad for this conversion can you please explain why and show a better method of performing the same conversion?
uint32_t num = 0x12345678;
uint32_t b0,b1,b2,b3,b4,b5,b6,b7;
uint32_t res = 0;
b0 = (num & 0xf) << 28;
b1 = (num & 0xf0) << 24;
b2 = (num & 0xf00) << 20;
b3 = (num & 0xf000) << 16;
b4 = (num & 0xf0000) << 12;
b5 = (num & 0xf00000) << 8;
b6 = (num & 0xf000000) << 4;
b7 = (num & 0xf0000000) << 4;
res = b0 + b1 + b2 + b3 + b4 + b5 + b6 + b7;
printf("%d\n", res);
OP's sample code is incorrect.
Endian conversion works at the bit and 8-bit byte level. Most endian issues deal with the byte level. OP's code is doing an endian change at the 4-bit nibble level. Recommend instead:
// Swap endian (big to little) or (little to big)
uint32_t num = 9;
uint32_t b0,b1,b2,b3;
uint32_t res;
b0 = (num & 0x000000ff) << 24u;
b1 = (num & 0x0000ff00) << 8u;
b2 = (num & 0x00ff0000) >> 8u;
b3 = (num & 0xff000000) >> 24u;
res = b0 | b1 | b2 | b3;
printf("%" PRIX32 "\n", res);
If performance is truly important, the particular processor would need to be known. Otherwise, leave it to the compiler.
[Edit] OP added a comment that changes things.
"32bit numerical value represented by the hexadecimal representation (st uv wx yz) shall be recorded in a four-byte field as (st uv wx yz)."
It appears in this case the endian of the 32-bit number is unknown and the result needs to be stored in memory in little-endian order.
uint32_t num = 9;
uint8_t b[4];
b[0] = (uint8_t) (num >> 0u);
b[1] = (uint8_t) (num >> 8u);
b[2] = (uint8_t) (num >> 16u);
b[3] = (uint8_t) (num >> 24u);
[2016 Edit] Simplification
"...The type of the result is that of the promoted left operand..." (Bitwise shift operators, C11 §6.5.7 ¶3)
Using a u after the shift constants (right operands) results in the same as without it.
b3 = (num & 0xff000000) >> 24u;
b[3] = (uint8_t) (num >> 24u);
// same as
b3 = (num & 0xff000000) >> 24;
b[3] = (uint8_t) (num >> 24);
Sorry, my answer is a bit late, but it seems nobody mentioned the built-in functions for reversing byte order, which are very important in terms of performance.
Most modern processors are little-endian, while all network protocols are big-endian. That is history, and you can find more on it on Wikipedia. But it means our processors convert between little- and big-endian millions of times while we browse the Internet.
That is why most architectures have dedicated processor instructions to facilitate this task. For x86 architectures there is BSWAP, and for ARM there is REV. This is the most efficient way to reverse byte order.
To avoid assembly in our C code, we can use built-ins instead. For GCC there is the __builtin_bswap32() function, and for Visual C++ there is _byteswap_ulong(). Those functions generate just one processor instruction on most architectures.
Here is an example:
#include <stdio.h>
#include <inttypes.h>
int main()
{
uint32_t le = 0x12345678;
uint32_t be = __builtin_bswap32(le);
printf("Little-endian: 0x%" PRIx32 "\n", le);
printf("Big-endian: 0x%" PRIx32 "\n", be);
return 0;
}
Here is the output it produces:
Little-endian: 0x12345678
Big-endian: 0x78563412
And here is the disassembly (without optimization, i.e. -O0):
uint32_t be = __builtin_bswap32(le);
0x0000000000400535 <+15>: mov -0x8(%rbp),%eax
0x0000000000400538 <+18>: bswap %eax
0x000000000040053a <+20>: mov %eax,-0x4(%rbp)
There is just one BSWAP instruction indeed.
So, if we do care about the performance, we should use those built-in functions instead of any other method of byte reversing. Just my 2 cents.
I think you can use the function htonl(). Network byte order is big-endian.
"I swap each bytes right?" -> yes, to convert between little and big endian, you just give the bytes the opposite order.
But at first realize few things:
size of uint32_t is 32bits, which is 4 bytes, which is 8 HEX digits
mask 0xf retrieves the 4 least significant bits, to retrieve 8 bits, you need 0xff
so in case you want to swap the order of 4 bytes with that kind of masks, you could:
uint32_t res = 0;
b0 = (num & 0xff) << 24;       // least significant to most significant
b1 = (num & 0xff00) << 8;      // 2nd least sig. to 2nd most sig.
b2 = (num & 0xff0000) >> 8;    // 2nd most sig. to 2nd least sig.
b3 = (num & 0xff000000) >> 24; // most sig. to least sig.
res = b0 | b1 | b2 | b3 ;
You could do this:
int x = 0x12345678;
x = ( x >> 24 ) | (( x << 8) & 0x00ff0000 )| ((x >> 8) & 0x0000ff00) | ( x << 24) ;
printf("value = %x", x); // x will be printed as 0x78563412
One slightly different way of tackling this that can sometimes be useful is to have a union of the sixteen or thirty-two bit value and an array of chars. I've just been doing this when getting serial messages that come in with big endian order, yet am working on a little endian micro.
union MessageLengthUnion
{
uint16_t asInt;
uint8_t asChars[2];
};
Then when I get the messages in I put the first received uint8 in .asChars[1], the second in .asChars[0] then I access it as the .asInt part of the union in the rest of my program.
If you have a thirty-two bit value to store you can have the array four long.
I am assuming you are on Linux. Include <byteswap.h> and use bswap_32(). Logically it behaves like int32_t bswap_32(int32_t argument); for the actual implementation, see /usr/include/byteswap.h.
One more suggestion:
unsigned int a = 0xABCDEF23;
a = ((a&(0x0000FFFF)) << 16) | ((a&(0xFFFF0000)) >> 16);
a = ((a&(0x00FF00FF)) << 8) | ((a&(0xFF00FF00)) >>8);
printf("%0x\n",a);
A simple C program to convert from little endian to big endian:
#include <stdio.h>
int main() {
unsigned int little=0x1234ABCD,big=0;
unsigned char tmp=0,l;
printf(" Little endian little=%x\n",little);
for(l=0;l < 4;l++)
{
tmp=0;
tmp = little | tmp;
big = tmp | (big << 8);
little = little >> 8;
}
printf(" Big endian big=%x\n",big);
return 0;
}
OP's code is incorrect for the following reasons:
The swaps are being performed on a nibble (4-bit) boundary, instead of a byte (8-bit) boundary.
The shift-left << operations of the final four swaps are incorrect, they should be shift-right >> operations and their shift values would also need to be corrected.
The use of intermediary storage is unnecessary, and the code can therefore be rewritten to be more concise/recognizable. In doing so, some compilers will be able to better-optimize the code by recognizing the oft-used pattern.
Consider the following code, which efficiently converts an unsigned value:
// Swap endian (big to little) or (little to big)
uint32_t num = 0x12345678;
uint32_t res =
((num & 0x000000FF) << 24) |
((num & 0x0000FF00) << 8) |
((num & 0x00FF0000) >> 8) |
((num & 0xFF000000) >> 24);
printf("%0x\n", res);
The result is represented here in both binary and hex; notice how the bytes have swapped:
0111 1000 0101 0110 0011 0100 0001 0010
78563412
Optimizing
In terms of performance, leave it to the compiler to optimize your code when possible. Avoid unnecessary data structures like arrays for simple algorithms like this; using them will usually cause different instruction behavior, such as accessing RAM instead of using CPU registers.
#include <stdio.h>
#include <inttypes.h>
uint32_t le_to_be(uint32_t num) {
uint8_t b[4] = {0};
*(uint32_t*)b = num;
uint8_t tmp = 0;
tmp = b[0];
b[0] = b[3];
b[3] = tmp;
tmp = b[1];
b[1] = b[2];
b[2] = tmp;
return *(uint32_t*)b;
}
int main()
{
printf("big endian value is %x\n", le_to_be(0xabcdef98));
return 0;
}
You can use the lib functions. They boil down to assembly, but if you are open to alternate implementations in C, here they are (assuming int is 32 bits):
void byte_swap16(unsigned short int *pVal16) {
//#define method_one 1
// #define method_two 1
#define method_three 1
#ifdef method_one
unsigned char *pByte;
pByte = (unsigned char *) pVal16;
*pVal16 = (pByte[0] << 8) | pByte[1];
#endif
#ifdef method_two
unsigned char *pByte0;
unsigned char *pByte1;
pByte0 = (unsigned char *) pVal16;
pByte1 = pByte0 + 1;
*pByte0 = *pByte0 ^ *pByte1;
*pByte1 = *pByte0 ^ *pByte1;
*pByte0 = *pByte0 ^ *pByte1;
#endif
#ifdef method_three
unsigned char *pByte;
pByte = (unsigned char *) pVal16;
pByte[0] = pByte[0] ^ pByte[1];
pByte[1] = pByte[0] ^ pByte[1];
pByte[0] = pByte[0] ^ pByte[1];
#endif
}
void byte_swap32(unsigned int *pVal32) {
#ifdef method_one
unsigned char *pByte;
// 0x1234 5678 --> 0x7856 3412
pByte = (unsigned char *) pVal32;
*pVal32 = ( pByte[0] << 24 ) | (pByte[1] << 16) | (pByte[2] << 8) | ( pByte[3] );
#endif
#if defined(method_two) || defined (method_three)
unsigned char *pByte;
pByte = (unsigned char *) pVal32;
// move lsb to msb
pByte[0] = pByte[0] ^ pByte[3];
pByte[3] = pByte[0] ^ pByte[3];
pByte[0] = pByte[0] ^ pByte[3];
// move lsb to msb
pByte[1] = pByte[1] ^ pByte[2];
pByte[2] = pByte[1] ^ pByte[2];
pByte[1] = pByte[1] ^ pByte[2];
#endif
}
And the usage is performed like so:
unsigned short int u16Val = 0x1234;
byte_swap16(&u16Val);
unsigned int u32Val = 0x12345678;
byte_swap32(&u32Val);
Below is another approach that was useful for me (note that it reverses the bit order within each byte as well as the byte order):
void convertLittleEndianByteArrayToBigEndianByteArray (byte littlendianByte[], byte bigEndianByte[], int ArraySize){ // assumes a typedef such as: typedef unsigned char byte;
int i =0;
for(i =0;i<ArraySize;i++){
bigEndianByte[i] = (littlendianByte[ArraySize-i-1] << 7 & 0x80) | (littlendianByte[ArraySize-i-1] << 5 & 0x40) |
(littlendianByte[ArraySize-i-1] << 3 & 0x20) | (littlendianByte[ArraySize-i-1] << 1 & 0x10) |
(littlendianByte[ArraySize-i-1] >>1 & 0x08) | (littlendianByte[ArraySize-i-1] >> 3 & 0x04) |
(littlendianByte[ArraySize-i-1] >>5 & 0x02) | (littlendianByte[ArraySize-i-1] >> 7 & 0x01) ;
}
}
The program below produces the result as needed:
#include <stdio.h>
unsigned int Little_To_Big_Endian(unsigned int num);
int main( )
{
int num = 0x11223344 ;
printf("\n Little_Endian = 0x%X\n",num);
printf("\n Big_Endian = 0x%X\n",Little_To_Big_Endian(num));
}
unsigned int Little_To_Big_Endian(unsigned int num)
{
return (((num >> 24) & 0x000000ff) | ((num >> 8) & 0x0000ff00) | ((num << 8) & 0x00ff0000) | ((num << 24) & 0xff000000));
}
And also below function can be used:
unsigned int Little_To_Big_Endian(unsigned int num)
{
return (((num & 0x000000ff) << 24) | ((num & 0x0000ff00) << 8 ) | ((num & 0x00ff0000) >> 8) | ((num & 0xff000000) >> 24 ));
}
#include<stdio.h>
int main(){
int var = 0X12345678;
var = ((0X000000FF & var)<<24)|
((0X0000FF00 & var)<<8) |
((0X00FF0000 & var)>>8) |
((0XFF000000 & var)>>24);
printf("%x",var);
}
Here is a little function I wrote that works pretty well. It's probably not portable to every single machine or as fast as a single CPU instruction, but it should work for most. It can handle numbers up to 32 bytes (256 bits) and works for both big- and little-endian swaps. The nicest part about this function is that you can point it at a byte array coming off or going on the wire and swap the bytes in place before converting.
#include <stdio.h>
#include <string.h>
void byteSwap(char**,int);
int main() {
//32 bit
int test32 = 0x12345678;
printf("\n BigEndian = 0x%X\n",test32);
char* pTest32 = (char*) &test32;
//convert to little endian
byteSwap((char**)&pTest32, 4);
printf("\n LittleEndian = 0x%X\n", test32);
//64 bit
long long int test64 = 0x1234567891234567LL;
printf("\n BigEndian = 0x%llx\n",test64);
char* pTest64 = (char*) &test64;
//convert to little endian
byteSwap((char**)&pTest64,8);
printf("\n LittleEndian = 0x%llx\n",test64);
//back to big endian
byteSwap((char**)&pTest64,8);
printf("\n BigEndian = 0x%llx\n",test64);
return 0;
}
void byteSwap(char** src,int size) {
int x = 0;
char b[32];
while(size-- > 0) { b[x++] = (*src)[size]; } // note: > 0, not >= 0, which would read (*src)[-1]
memcpy(*src,b,x);
}
output:
$gcc -o main *.c -lm
$main
BigEndian = 0x12345678
LittleEndian = 0x78563412
BigEndian = 0x1234567891234567
LittleEndian = 0x6745239178563412
BigEndian = 0x1234567891234567

Swap byte 2 and 4 in a 32 bit integer

I had this interview question -
Swap byte 2 and byte 4 within an integer sequence.
Integer is a 4 byte wide i.e. 32 bits
My approach was to use char *pointer and a temp char to swap the bytes.
For clarity I have broken the steps otherwise an character array can be considered.
unsigned char *b2, *b4, tmpc;
int n = 0xABCD; ///expected output 0xADCB
b2 = (unsigned char *)&n; b2++;
b4 = (unsigned char *)&n; b4 += 3;
///swap the values;
tmpc = *b2;
*b2 = *b4;
*b4 = tmpc;
Any other methods?
int someInt = 0x12345678;
int byte2 = someInt & 0x00FF0000;
int byte4 = someInt & 0x000000FF;
int newInt = (someInt & 0xFF00FF00) | (byte2 >> 16) | (byte4 << 16);
To avoid any concerns about sign extension:
int someInt = 0x12345678;
int newInt = (someInt & 0xFF00FF00) | ((someInt >> 16) & 0x000000FF) | ((someInt << 16) & 0x00FF0000);
(Or, to really impress them, you could use the triple XOR technique.)
Just for fun (probably a tupo somewhere):
int newInt = someInt ^ ((someInt >> 16) & 0x000000FF);
newInt = newInt ^ ((newInt << 16) & 0x00FF0000);
newInt = newInt ^ ((newInt >> 16) & 0x000000FF);
(Actually, I just tested it and it works!)
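For the record, here's the triple-XOR variant as a self-contained function (the name is mine):

```c
#include <assert.h>

/* Swap byte 2 and byte 4 (counting from the most significant byte)
   using XOR, with no temporary variable. */
int swapB2B4(int someInt) {
    int newInt = someInt ^ ((someInt >> 16) & 0x000000FF);
    newInt = newInt ^ ((newInt << 16) & 0x00FF0000);
    newInt = newInt ^ ((newInt >> 16) & 0x000000FF);
    return newInt;
}
```

swapB2B4(0x12345678) gives 0x12785634, and applying it twice restores the original, as a swap should.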
You can mask out the bytes you want and shift them around. Something like this:
unsigned int swap(unsigned int n) {
unsigned int b2 = (0x0000FF00 & n);
unsigned int b4 = (0xFF000000 & n);
n ^= b2 | b4; // Clear the second and fourth bytes
n |= (b2 << 16) | (b4 >> 16); // Swap and write them.
return n;
}
This assumes that the "first" byte is the lowest order byte (even if in memory it may be stored big-endian).
Also it uses unsigned ints everywhere to avoid right shifting introducing extra 1s due to sign extension.
What about unions?
int main(void)
{
char tmp;
union {int n; char ary[4]; } un;
un.n = 0xABCDEF00;
tmp = un.ary[3];
un.ary[3] = un.ary[1];
un.ary[1] = tmp;
printf("0x%.2X\n", un.n);
}
in > 0xABCDEF00
out>0xEFCDAB00
Please don't forget to check endianness. This only works for little endian, but it should not be hard to make it portable.

swap a length of bits in 2 bytes

I would like to input 2 unsigned char variables, a and b. Using a(0) for bit 0 of a, I would like to swap a(6) through a(1) with b(6) through b(1). Finally I wish to get 2 new unsigned char variables, a1 and b1, with the required bits swapped. Is there a method to address this in the C language?
A further requirement is to add 2 variables, pa and pb, that decide the start position for the length. For example: if pa=6 and pb=7, I have to swap a(6) through a(1) with b(7) through b(2).
Any good solution?
I'd be inclined to use xor masking:
mask = 0x7e; // 0b01111110, bits 6 down to 1
diff = (a & mask) ^ (b & mask);
a1 = a ^ diff;
b1 = b ^ diff;
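A testable sketch of the masked-XOR swap; bits a(6) down to a(1) correspond to the mask 0x7E (0b01111110). The function wrapper is mine:

```c
#include <assert.h>

/* Swap the bits selected by mask between *a and *b, leaving other bits. */
void swapMaskedBits(unsigned char *a, unsigned char *b, unsigned char mask) {
    unsigned char diff = (*a & mask) ^ (*b & mask);
    *a ^= diff;
    *b ^= diff;
}
```

For the pa=6, pb=7 follow-up, one approach is to shift b right by (pb - pa) before computing diff, then XOR diff back into b shifted left by the same amount.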
Aha. I get it now
unsigned const char mask = 0x7e; // bits 6 down to 1
unsigned char a, b; // input somehow
unsigned char a2 = a, b2 = b;
a2 = (a2 & ~mask) | (b & mask);
b2 = (b2 & ~mask) | (a & mask);
Given integers, e.g.:
uint32_t a = 0xff00ff00;
uint32_t b = 0x00ff00ff;
This is how you swap entire values:
a ^= b;
b ^= a;
a ^= b;
If you want to swap only specific bits, add a mask there:
uint32_t mask = 0x0000ffff; // only swap the lower 16 bits
a ^= (b & mask);
b ^= (a & mask);
a ^= (b & mask);
The above requires 6 bitwise operations, whereas ecatmur's solution requires only 5.
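Here is that 6-operation masked swap packaged as a testable function (names mine):

```c
#include <assert.h>
#include <stdint.h>

/* XOR-swap only the bits selected by mask, leaving the rest untouched. */
void maskedSwap(uint32_t *a, uint32_t *b, uint32_t mask) {
    *a ^= (*b & mask);
    *b ^= (*a & mask);
    *a ^= (*b & mask);
}
```

With a = 0xFF00FF00, b = 0x00FF00FF and mask 0x0000FFFF, only the lower 16 bits are exchanged: a becomes 0xFF0000FF and b becomes 0x00FFFF00.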