I'm trying to reverse the bytes of a 64-bit address pointer for an assignment and have this code:
char swapPtr(char x){
x = (x & 0x00000000FFFFFFFF) << 32 | (x & 0xFFFFFFFF00000000) >> 32;
x = (x & 0x0000FFFF0000FFFF) << 16 | (x & 0xFFFF0000FFFF0000) >> 16;
x = (x & 0x00FF00FF00FF00FF) << 8 | (x & 0xFF00FF00FF00FF00) >> 8;
return x;
}
But it just messes everything up, while a similar function works perfectly for a 64-bit long. Is there something different that needs to be done for pointers?
Could the way I'm making the function call be an issue?
For a pointer:
*(char*)loc = swapPtr(*(char*)loc);
For a long:
*loc = swapLong(*loc);
You cannot use char x for a pointer!!!! A char is only a single byte long.
You need at the very least
unsigned long int swapPtr(unsigned long int x) {
Or better, use the type of the pointer
void* swapPtr(void* x) {
Quite likely your compiler will complain when you start bit shifting pointers; in that case you're better off explicitly casting your argument to an unsigned 64 bit integer:
#include <stdint.h>
uint64_t x;
Note also that you have to call with the address of a variable, so you call with
result = swapLong(&loc);
not *loc (which looks at the place where loc is pointing - the value, not the address).
Complete program:
#include <stdio.h>
#include <stdint.h>
uint64_t swapLong(void *X) {
    uint64_t x = (uint64_t) X;
    x = (x & 0x00000000FFFFFFFF) << 32 | (x & 0xFFFFFFFF00000000) >> 32;
    x = (x & 0x0000FFFF0000FFFF) << 16 | (x & 0xFFFF0000FFFF0000) >> 16;
    x = (x & 0x00FF00FF00FF00FF) << 8  | (x & 0xFF00FF00FF00FF00) >> 8;
    return x;
}

int main(void) {
    char a;
    printf("the address of a is 0x%016llx\n", (uint64_t)(&a));
    printf("swapping all the bytes gives 0x%016llx\n", (uint64_t)swapLong(&a));
}
Output:
the address of a is 0x00007fff6b133b1b
swapping all the bytes gives 0x1b3b136bff7f0000
EDIT: you could use something like
#include <inttypes.h>
printf("the address of a is 0x%016" PRIx64 "\n", (uint64_t)(&a));
where the macro PRIx64 expands into "the format string you need to print a 64 bit number in hex". It is a little cleaner than the above.
You may also use the _bswap64 intrinsic (latency of 2 and throughput of 0.5 on the Skylake architecture). It is a wrapper for the assembly instruction bswap r64, so it is probably the most efficient option:
Reverse the byte order of 64-bit integer a, and store the result in dst. This intrinsic is provided for conversion between little and big endian values.
#include <immintrin.h>

uint64_t swapLongIntrinsic(void *X) {
    return _bswap64((uint64_t) X);   // GCC/Clang also provide __builtin_bswap64; glibc has __bswap_64 in <byteswap.h>
}
NB: Don't forget the header
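If you would rather avoid the Intel header entirely, here is a minimal sketch using the GCC/Clang builtin (MSVC has _byteswap_uint64 instead); the name swapPtrBuiltin is just an illustrative choice:
#include <stdio.h>
#include <stdint.h>

// Sketch: byte-swap the numeric value of a pointer using the compiler builtin.
static uint64_t swapPtrBuiltin(void *p) {
    return __builtin_bswap64((uint64_t) p);   // GCC/Clang builtin
}

int main(void) {
    int a;
    printf("0x%016llx\n", (unsigned long long) swapPtrBuiltin(&a));
}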
Here is an alternative way for converting a 64-bit value from LE to BE or vice-versa.
You can basically apply this method to any type, by defining var_type:
typedef long long var_type;
Reverse by pointer:
void swapPtr(var_type* x)
{
char* px = (char*)x;
for (int i=0; i<sizeof(var_type)/2; i++)
{
char temp = px[i];
px[i] = px[sizeof(var_type)-1-i];
px[sizeof(var_type)-1-i] = temp;
}
}
Reverse by value:
var_type swapVal(var_type x)
{
var_type y;
char* px = (char*)&x;
char* py = (char*)&y;
for (int i=0; i<sizeof(var_type); i++)
py[i] = px[sizeof(var_type)-1-i];
return y;
}
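A minimal usage sketch, assuming the swapPtr and swapVal functions above are in scope:
#include <stdio.h>

typedef long long var_type;

// swapPtr() and swapVal() as defined above
void swapPtr(var_type* x);
var_type swapVal(var_type x);

int main(void) {
    var_type v = 0x1122334455667788LL;
    printf("by value: 0x%llx\n", (unsigned long long) swapVal(v));  // expect 0x8877665544332211
    swapPtr(&v);
    printf("in place: 0x%llx\n", (unsigned long long) v);           // expect 0x8877665544332211
    return 0;
}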
Related
In C programming, how do I combine (note: not add) two integers into one big integer? So if I have
int a = 8
int b = 6
in binary it would be
int a = 1000
int b = 0110
so combined it would be = 01101000
You would use a combination of the << shift operator and the bitwise | operator. If you are trying to build an 8-bit value from two 4-bit inputs, then:
int a = 8;
int b = 6;
int result = (b << 4) | a;
If you are trying to build a 32-bit value from two 16-bit inputs, then you would write
result = (b << 16) | a;
Example:
#include <stdio.h>
int main( void )
{
    int a = 8;
    int b = 6;
    printf( "a = %08x, b = %08x\n", (unsigned int) a, (unsigned int) b );

    int result = (b << 4) | a;
    printf( "result = %08x\n", (unsigned int) result );

    result = (b << 8) | a;
    printf( "result = %08x\n", (unsigned int) result );

    result = (b << 16) | a;
    printf( "result = %08x\n", (unsigned int) result );

    return 0;
}
$ ./bits
a = 00000008, b = 00000006
result = 00000068
result = 00000608
result = 00060008
You can do it as follows, using the binary mask & 0x0F and the bit shift <<:
int a = 0x08;
int b = 0x06;
int c = (a & 0x0F) + ((b & 0x0F) << 4);
I hope that it helped
Update 1:
As mentioned in the comments, addition + and bitwise OR | are both fine here.
What is important to highlight in this answer is the mask & 0x0F; I strongly recommend using this kind of mechanism so that stray high bits cannot spill into the other field.
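A small sketch (values chosen purely for illustration) of why the mask matters when an input is wider than 4 bits:
#include <stdio.h>

int main(void) {
    int a = 0x18;   // deliberately wider than 4 bits
    int b = 0x06;
    int without_mask = a + (b << 4);                 // 0x78: the high bit of a corrupts b's nibble
    int with_mask = (a & 0x0F) + ((b & 0x0F) << 4);  // 0x68: only the low nibbles are combined
    printf("0x%x 0x%x\n", without_mask, with_mask);
    return 0;
}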
You could use the OR operator.
int a = 8 ;
int b = 6 ;
int c = (a << 8) | b;
You can use the bit-shift operator << to move the bits into the correct position:
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
int main()
{
    uint8_t a = 8;
    uint8_t b = 6;
    uint16_t c = (b << 4) | a;
    printf( "The result is: 0x%" PRIX16 "\n", c );
}
This program will print the following:
The result is: 0x68
Note that this program uses fixed-width integer types, which are recommended in this situation, as you cannot rely on an int or unsigned int having a particular width.
However, there is no need for the result to be 16 bits if you are only shifting one value by 4 bits, as in your example. In that case, an 8-bit integer type would have been sufficient. I am only using 16 bits for the result because you explicitly asked for it.
The macro PRIX16 will probably expand to "hX" or "X" on most platforms. But it is still recommended to use this macro when using fixed-width integer types, as you cannot rely on %hX or %X being the correct format specifier for uint16_t on all platforms.
I'm currently working on a function which accepts two 4-byte unsigned integers and returns an 8-byte unsigned long. I've tried to base my work on the methods depicted in this research, but all my attempts have been unsuccessful. The specific inputs I am working with are: 0x12345678 and 0xdeadbeef, and the result I'm looking for is 0x12de34ad56be78ef. This is my work so far:
unsigned long interleave(uint32_t x, uint32_t y){
uint64_t result = 0;
int shift = 33;
for(int i = 64; i > 0; i-=16){
shift -= 8;
//printf("%d\n", i);
//printf("%d\n", shift);
result |= (x & i) << shift;
result |= (y & i) << (shift-1);
}
}
However, this function keeps returning 0xfffffffe which is incorrect. I am printing and verifying these values using:
printf("0x%x\n", z);
and the input is initialized like so:
uint32_t x = 0x12345678;
uint32_t y = 0xdeadbeef;
Any help on this topic would be greatly appreciated, C has been a very difficult language for me, and bitwise operations even more so.
This can be done based on interleaving bits, but skipping some steps so it only interleaves bytes. Same idea: first spread out the bytes in a couple of steps, then combine them.
Here is the plan (originally illustrated with a freehand drawing): spread the bytes of each input so they occupy every other byte of a 64-bit value, shift one of the spread values up by one byte, then OR the two together.
In C (not tested):
// step 1, moving the top two bytes
uint64_t a = (((uint64_t)x & 0xFFFF0000) << 16) | (x & 0xFFFF);
// step 2, moving bytes 1 and 5 up into bytes 2 and 6
a = ((a & 0x0000FF000000FF00) << 8) | (a & 0x000000FF000000FF);
// same thing with y
uint64_t b = (((uint64_t)y & 0xFFFF0000) << 16) | (y & 0xFFFF);
b = ((b & 0x0000FF000000FF00) << 8) | (b & 0x000000FF000000FF);
// merge them
uint64_t result = (a << 8) | b;
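Wrapped into a function for a quick check, this is a minimal sketch built directly from the steps above:
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

// Spread the bytes of each input across a 64-bit value, then merge.
uint64_t interleave(uint32_t x, uint32_t y) {
    uint64_t a = (((uint64_t)x & 0xFFFF0000) << 16) | (x & 0xFFFF);
    a = ((a & 0x0000FF000000FF00) << 8) | (a & 0x000000FF000000FF);
    uint64_t b = (((uint64_t)y & 0xFFFF0000) << 16) | (y & 0xFFFF);
    b = ((b & 0x0000FF000000FF00) << 8) | (b & 0x000000FF000000FF);
    return (a << 8) | b;
}

int main(void) {
    // expect 0x12de34ad56be78ef
    printf("0x%016" PRIx64 "\n", interleave(0x12345678, 0xdeadbeef));
}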
Using SSSE3 PSHUFB has been suggested; it'll work, but there is an instruction that can do a byte-wise interleave in one go: punpcklbw. So all we really need to do is get the values into and out of vector registers, and that single instruction will then take care of it.
Not tested:
#include <stdint.h>
#include <emmintrin.h>   // SSE2 intrinsics

uint64_t interleave(uint32_t x, uint32_t y) {
    __m128i xvec = _mm_cvtsi32_si128(x);
    __m128i yvec = _mm_cvtsi32_si128(y);
    __m128i interleaved = _mm_unpacklo_epi8(yvec, xvec);
    return _mm_cvtsi128_si64(interleaved);
}
With bit-shifting and bitwise operations (endianness independent):
uint64_t interleave(uint32_t x, uint32_t y){
uint64_t result = 0;
for(uint8_t i = 0; i < 4; i ++){
result |= ((x & (0xFFull << (8*i))) << (8*(i+1)));
result |= ((y & (0xFFull << (8*i))) << (8*i));
}
return result;
}
With pointers (endianness dependent):
uint64_t interleave(uint32_t x, uint32_t y){
uint64_t result = 0;
uint8_t * x_ptr = (uint8_t *)&x;
uint8_t * y_ptr = (uint8_t *)&y;
uint8_t * r_ptr = (uint8_t *)&result;
for(uint8_t i = 0; i < 4; i++){
*(r_ptr++) = y_ptr[i];
*(r_ptr++) = x_ptr[i];
}
return result;
}
Note: this solution assumes little-endian byte order
You could do it like this:
uint64_t interleave(uint32_t x, uint32_t y)
{
uint64_t z;
unsigned char *a = (unsigned char *)&x; // 1
unsigned char *b = (unsigned char *)&y; // 1
unsigned char *c = (unsigned char *)&z;
c[0] = a[0];
c[1] = b[0];
c[2] = a[1];
c[3] = b[1];
c[4] = a[2];
c[5] = b[2];
c[6] = a[3];
c[7] = b[3];
return z;
}
Interchange a and b on the lines marked 1 depending on ordering requirement.
A version with shifts, where the LSB of y is always the LSB of the output as in your example, is:
uint64_t interleave(uint32_t x, uint32_t y)
{
return
(y & 0xFFull)
| (x & 0xFFull) << 8
| (y & 0xFF00ull) << 8
| (x & 0xFF00ull) << 16
| (y & 0xFF0000ull) << 16
| (x & 0xFF0000ull) << 24
| (y & 0xFF000000ull) << 24
| (x & 0xFF000000ull) << 32;
}
The compilers I tried don't seem to do a good job of optimizing either version, so if this is a performance-critical situation, the inline assembly suggestion from the comments may be the way to go.
Use union punning. It is easy for the compiler to optimize.
#include <stdio.h>
#include <stdint.h>
#include <string.h>
// Anonymous struct/union members require C11.
typedef union
{
    uint64_t u64;
    struct
    {
        union
        {
            uint32_t a32;
            uint8_t a8[4];
        };
        union
        {
            uint32_t b32;
            uint8_t b8[4];
        };
    };
    uint8_t u8[8];
} data_64;

uint64_t interleave(uint32_t a, uint32_t b)
{
    data_64 in, out;

    in.a32 = a;
    in.b32 = b;

    for (size_t index = 0; index < sizeof(a); index++)
    {
        out.u8[index * 2 + 1] = in.a8[index];
        out.u8[index * 2]     = in.b8[index];
    }
    return out.u64;
}
int main(void)
{
printf("%llx\n", interleave(0x12345678U, 0xdeadbeefU)) ;
}
I have a 32-bit instruction that I wish to split into four bytes.
Let say the instruction looks like this:
yyyyyzzzzzzxxxxxx?????????
The instruction is a word that consists of four unsigned ints. y represents the operation code, and ??? are for the unused space. I am working on a big-endian machine.
What I would like to happen is to move the values from z + w to a.
I have never worked in C before but I have tried to do it like this.
Here is how I read the word, just so I can print out each byte:
unsigned int w, x, y, z;
w = instruction << 24;
z = instruction << 16;
x = instruction << 8;
y = instruction;
Here I print unsigned values, just to check what the result are.
printf("%u\n", w);
printf("%u\n", z);
printf("%u\n", x);
printf("%u\n", y);
printf("\n");
regs[x] = instruction + instruction << 8;
If I print out the value of regs[x] after this, I can see that it has a value now, but is this the correct way of doing it? When I do it like this, do I set the register to z + w?
EDIT
Maybe I should get the bits like this?
y = (inst >> 24) & 077;
x = (inst >> 16) & 0xffff;
z = (inst >> 8) & 0xffff;
w = (inst) & 0xffff;
and then do like this:
regs[y] = z + w;
If you want to use only bit positions and counts, you can build a bit mask of, say, 9 bits by setting the next higher bit and decrementing: (1<<9)-1. So your values are
#define MASK(n) ((1u << (n)) - 1)
unsigned int w = instruction & MASK(9);
unsigned int x = (instruction >> 9) & MASK(6);
unsigned int z = (instruction >> 15) & MASK(6);
unsigned int y = (instruction >> 21) & MASK(5);
All values are shifted down. So if you want to combine z and w, you will have to write
unsigned int zw = z<<9 | w;
because w contains 9 bits, or
unsigned int wz = w<<6 | z;
because z contains 6 bits.
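Here is a minimal decode sketch built on that macro; the field widths (5/6/6/9) come from the layout above, and the sample instruction value is just an arbitrary assumption:
#include <stdio.h>

#define MASK(n) ((1u << (n)) - 1)

int main(void) {
    unsigned int instruction = 0x0ABCDEF1;           // arbitrary sample value

    unsigned int w = instruction & MASK(9);          // bits 0-8
    unsigned int x = (instruction >> 9) & MASK(6);   // bits 9-14
    unsigned int z = (instruction >> 15) & MASK(6);  // bits 15-20
    unsigned int y = (instruction >> 21) & MASK(5);  // bits 21-25, the opcode

    unsigned int zw = (z << 9) | w;                  // z and w recombined

    printf("y=%u z=%u x=%u w=%u zw=%u\n", y, z, x, w, zw);
    return 0;
}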
I have following function which counts the number of binary digits in an unsigned 32-bit integer.
uint32_t L(uint32_t in)
{
uint32_t rc = 0;
while (in)
{
rc++;
in >>= 1;
}
return(rc);
}
Could anyone tell me which approach I should take for a signed 32-bit integer? Implementing two's complement is an option; if you have a better approach, please let me know.
What about:
uint32_t count_bits(int32_t in)
{
uint32_t unsigned_in = (uint32_t) in;
uint32_t rc = 0;
while (unsigned_in)
{
rc++;
unsigned_in >>= 1;
}
return(rc);
}
Just convert the signed int into an unsigned one and do the same thing as before.
BTW: I guess you know that - unless your processor has a special instruction for it and you have access to it - one of the fastest implementations of counting the set bits (the population count) is:
int count_bits(unsigned x) {
x = x - ((x >> 1) & 0x55555555);
x = (x & 0x33333333) + ((x >> 2) & 0x33333333);
x = (x + (x >> 4)) & 0x0f0f0f0f;
x = x + (x >> 8);
x = x + (x >> 16);
return x & 0x0000003f;
}
It's not the fastest though...
Just reuse the function you defined as is:
int32_t bla = /* ... */;
uint32_t count;
count = L(bla);
You can cast bla to uint32_t (i.e., L((uint32_t) bla);) to make the conversion explicit, but it's not required by C.
If you are using gcc, it already provides fast built-in functions to count set bits, and you can use them:
int __builtin_popcount (unsigned int x);
int __builtin_popcountl (unsigned long);
int __builtin_popcountll (unsigned long long);
http://gcc.gnu.org/onlinedocs/gcc/Other-Builtins.html
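A short usage sketch for the signed case: cast to unsigned first so the sign bit is treated as an ordinary bit (requires GCC or Clang):
#include <stdio.h>
#include <stdint.h>

int main(void) {
    int32_t v = -1;
    // All 32 bits are set in the two's-complement representation of -1.
    printf("%d\n", __builtin_popcount((uint32_t) v));   // prints 32
    return 0;
}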
Your negative number always shows 32 because the most significant bit of a negative signed integer is 1. A UInt4 of 1000 = 8, but an Int4 of 1000 = -8, an Int4 of 1001 = -7, an Int4 of 1010 = -6, etc.
Since the top bit in an Int32 is meaningful rather than just padding, you cannot really ignore it.
I need to write a function to convert big endian to little endian in C. I can not use any library function.
Assuming what you need is a simple byte swap, try something like
Unsigned 16 bit conversion:
swapped = (num>>8) | (num<<8);
Unsigned 32-bit conversion:
swapped = ((num>>24)&0xff) | // move byte 3 to byte 0
((num<<8)&0xff0000) | // move byte 1 to byte 2
((num>>8)&0xff00) | // move byte 2 to byte 1
((num<<24)&0xff000000); // byte 0 to byte 3
This swaps the byte order from positions 1234 to 4321. If your input was 0xdeadbeef, a 32-bit endian swap would output 0xefbeadde.
The code above should be cleaned up with macros, or at least constants instead of magic numbers, but hopefully it helps as is.
EDIT: as another answer pointed out, there are platform-, OS-, and instruction-set-specific alternatives which can be MUCH faster than the above. In the Linux kernel there are macros (cpu_to_be32, for example) which handle endianness pretty nicely. But these alternatives are specific to their environments. In practice endianness is best dealt with using a blend of the available approaches.
By including:
#include <byteswap.h>
you can get an optimized version of machine-dependent byte-swapping functions.
Then, you can easily use the following functions:
__bswap_32 (uint32_t input)
or
__bswap_16 (uint16_t input)
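A minimal usage sketch on a glibc system (byteswap.h is a glibc extension rather than standard C):
#include <stdio.h>
#include <stdint.h>
#include <byteswap.h>

int main(void) {
    uint32_t v = 0xdeadbeef;
    printf("0x%08x\n", (unsigned) __bswap_32(v));   // prints 0xefbeadde
    return 0;
}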
#include <stdint.h>
//! Byte swap unsigned short
uint16_t swap_uint16( uint16_t val )
{
return (val << 8) | (val >> 8 );
}
//! Byte swap short
int16_t swap_int16( int16_t val )
{
return (val << 8) | ((val >> 8) & 0xFF);
}
//! Byte swap unsigned int
uint32_t swap_uint32( uint32_t val )
{
val = ((val << 8) & 0xFF00FF00 ) | ((val >> 8) & 0xFF00FF );
return (val << 16) | (val >> 16);
}
//! Byte swap int
int32_t swap_int32( int32_t val )
{
val = ((val << 8) & 0xFF00FF00) | ((val >> 8) & 0xFF00FF );
return (val << 16) | ((val >> 16) & 0xFFFF);
}
Update : Added 64bit byte swapping
int64_t swap_int64( int64_t val )
{
val = ((val << 8) & 0xFF00FF00FF00FF00ULL ) | ((val >> 8) & 0x00FF00FF00FF00FFULL );
val = ((val << 16) & 0xFFFF0000FFFF0000ULL ) | ((val >> 16) & 0x0000FFFF0000FFFFULL );
return (val << 32) | ((val >> 32) & 0xFFFFFFFFULL);
}
uint64_t swap_uint64( uint64_t val )
{
val = ((val << 8) & 0xFF00FF00FF00FF00ULL ) | ((val >> 8) & 0x00FF00FF00FF00FFULL );
val = ((val << 16) & 0xFFFF0000FFFF0000ULL ) | ((val >> 16) & 0x0000FFFF0000FFFFULL );
return (val << 32) | (val >> 32);
}
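A quick usage sketch, assuming swap_uint64 from above is in scope:
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

uint64_t swap_uint64(uint64_t val);   // defined above

int main(void) {
    uint64_t v = 0x0123456789abcdefULL;
    printf("0x%016" PRIx64 "\n", swap_uint64(v));   // prints 0xefcdab8967452301
    return 0;
}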
Here's a fairly generic version; I haven't compiled it, so there are probably typos, but you should get the idea:
#include <assert.h>
#include <stddef.h>

void SwapBytes(void *pv, size_t n)
{
assert(n > 0);
char *p = pv;
size_t lo, hi;
for(lo=0, hi=n-1; hi>lo; lo++, hi--)
{
char tmp=p[lo];
p[lo] = p[hi];
p[hi] = tmp;
}
}
#define SWAP(x) SwapBytes(&x, sizeof(x));
NB: This is not optimised for speed or space. It is intended to be clear (easy to debug) and portable.
Update 2018-04-04
Added the assert() to trap the invalid case of n == 0, as spotted by commenter #chux.
If you need macros (e.g. embedded system):
#define SWAP_UINT16(x) (((x) >> 8) | ((x) << 8))
#define SWAP_UINT32(x) (((x) >> 24) | (((x) & 0x00FF0000) >> 8) | (((x) & 0x0000FF00) << 8) | ((x) << 24))
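A brief usage sketch (the macro definitions are repeated so the snippet is self-contained); note that, like any function-like macro, these evaluate their argument more than once, so avoid passing expressions with side effects such as SWAP_UINT16(*p++):
#include <stdio.h>
#include <stdint.h>

#define SWAP_UINT16(x) (((x) >> 8) | ((x) << 8))
#define SWAP_UINT32(x) (((x) >> 24) | (((x) & 0x00FF0000) >> 8) | (((x) & 0x0000FF00) << 8) | ((x) << 24))

int main(void) {
    uint16_t w = (uint16_t) SWAP_UINT16((uint16_t) 0xbeef);   // truncate back after integer promotion
    uint32_t v = SWAP_UINT32((uint32_t) 0xdeadbeef);
    printf("0x%04x 0x%08x\n", w, v);   // prints 0xefbe 0xefbeadde
    return 0;
}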
Edit: These are library functions. Following them is the manual way to do it.
I am absolutely stunned by the number of people unaware of _byteswap_ushort, _byteswap_ulong, and _byteswap_uint64. Sure they are Visual C++ specific, but they compile down to some delicious code on x86/IA-64 architectures. :)
Here's an explicit usage of the bswap instruction, pulled from this page. Note that the intrinsic form above will always be faster than this; I only added it to give an answer without a library routine.
uint32 cq_ntohl(uint32 a) {
    __asm{   // MSVC inline assembly; the value left in eax becomes the return value
        mov eax, a;
        bswap eax;
    }
}
As a joke:
#include <stdio.h>
int main (int argc, char *argv[])
{
size_t sizeofInt = sizeof (int);
int i;
union
{
int x;
char c[sizeof (int)];
} original, swapped;
original.x = 0x12345678;
for (i = 0; i < sizeofInt; i++)
swapped.c[sizeofInt - i - 1] = original.c[i];
fprintf (stderr, "%x\n", swapped.x);
return 0;
}
Here's a way using the SSSE3 instruction pshufb via its Intel intrinsic, assuming you have a multiple of 4 ints:
#include <tmmintrin.h>   // SSSE3 intrinsics

unsigned int *bswap(unsigned int *destination, unsigned int *source, int length) {
    int i;
    __m128i mask = _mm_set_epi8(12, 13, 14, 15, 8, 9, 10, 11, 4, 5, 6, 7, 0, 1, 2, 3);
    for (i = 0; i < length; i += 4) {
        _mm_storeu_si128((__m128i *)&destination[i],
                         _mm_shuffle_epi8(_mm_loadu_si128((__m128i *)&source[i]), mask));
    }
    return destination;
}
Will this work / be faster?
uint32_t swapped, result;
((byte*)&swapped)[0] = ((byte*)&result)[3];
((byte*)&swapped)[1] = ((byte*)&result)[2];
((byte*)&swapped)[2] = ((byte*)&result)[1];
((byte*)&swapped)[3] = ((byte*)&result)[0];
This code snippet can convert a 32-bit little-endian number to a big-endian number.
#include <stdio.h>

int main(void)
{
    unsigned int i = 0xfafbfcfd;
    unsigned int j;
    j = ((i & 0xff000000) >> 24) | ((i & 0xff0000) >> 8) | ((i & 0xff00) << 8) | ((i & 0xff) << 24);
    printf("unsigned int j = %x\n", j);
    return 0;
}
Here's a function I have been using - tested and works on any basic data type:
// SwapBytes.h
//
// Function to perform in-place endian conversion of basic types
//
// Usage:
//
// double d;
// SwapBytes(&d, sizeof(d));
//
inline void SwapBytes(void *source, int size)
{
typedef unsigned char TwoBytes[2];
typedef unsigned char FourBytes[4];
typedef unsigned char EightBytes[8];
unsigned char temp;
if(size == 2)
{
TwoBytes *src = (TwoBytes *)source;
temp = (*src)[0];
(*src)[0] = (*src)[1];
(*src)[1] = temp;
return;
}
if(size == 4)
{
FourBytes *src = (FourBytes *)source;
temp = (*src)[0];
(*src)[0] = (*src)[3];
(*src)[3] = temp;
temp = (*src)[1];
(*src)[1] = (*src)[2];
(*src)[2] = temp;
return;
}
if(size == 8)
{
EightBytes *src = (EightBytes *)source;
temp = (*src)[0];
(*src)[0] = (*src)[7];
(*src)[7] = temp;
temp = (*src)[1];
(*src)[1] = (*src)[6];
(*src)[6] = temp;
temp = (*src)[2];
(*src)[2] = (*src)[5];
(*src)[5] = temp;
temp = (*src)[3];
(*src)[3] = (*src)[4];
(*src)[4] = temp;
return;
}
}
EDIT: This function only swaps the endianness of aligned 16-bit words, a conversion often necessary for UTF-16/UCS-2 encodings.
EDIT END.
If you want to change the endianness of a memory block you can use my blazingly fast approach.
Your memory array should have a size that is a multiple of 8.
#include <stddef.h>
#include <limits.h>
#include <stdint.h>
void ChangeMemEndianness(uint64_t *mem, size_t size)
{
uint64_t m1 = 0xFF00FF00FF00FF00ULL, m2 = m1 >> CHAR_BIT;
size = (size + (sizeof (uint64_t) - 1)) / sizeof (uint64_t);
for(; size; size--, mem++)
*mem = ((*mem & m1) >> CHAR_BIT) | ((*mem & m2) << CHAR_BIT);
}
This kind of function is useful for changing the endianness of Unicode UCS-2/UTF-16 files.
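A usage sketch under the stated assumptions (ChangeMemEndianness from above in scope, buffer size a multiple of 8 bytes):
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

void ChangeMemEndianness(uint64_t *mem, size_t size);   // defined above

int main(void) {
    uint64_t buf[1] = { 0x0011223344556677ULL };   // four 16-bit units
    ChangeMemEndianness(buf, sizeof buf);
    printf("0x%016llx\n", (unsigned long long) buf[0]);   // prints 0x1100332255447766
    return 0;
}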
If you are running on an x86 or x86_64 processor, little endian is native, so big-endian values need to be byte-swapped.
for 16 bit values
unsigned short wBigE = value;
unsigned short wLittleE = ((wBigE & 0xFF) << 8) | (wBigE >> 8);
for 32 bit values
unsigned int iBigE = value;
unsigned int iLittleE = ((iBigE & 0xFF) << 24)
| ((iBigE & 0xFF00) << 8)
| ((iBigE >> 8) & 0xFF00)
| (iBigE >> 24);
This isn't the most efficient solution unless the compiler recognises that this is byte level manipulation and generates byte swapping code. But it doesn't depend on any memory layout tricks and can be turned into a macro pretty easily.