Difference between writing an integer in hex and writing a real - C

I was given a task at university: there is a number, and I need to display it in HEX as it is represented on the computer. I wrote a program for translating signed integers, and I also found a way to print a real number in HEX, but the result is different from the usual notation.
For integers I use: printf("%#X", d);
For reals I use: printf("%#lX", r);
If I input 12, the first prints: 0xC
If I input 12.0, the second prints: 0x4028000000000000
Can you explain what the difference is and how it is calculated?

Printing a double value r using the format %#lX actually has undefined behavior.
You have been lucky to get the 64 bits that represent the value 12.0 as a double, unless r has type unsigned long and was initialized from the value 12.0 this way:
unsigned long r;
double d = 12.0;
memcpy(&r, &d, sizeof r);
printf("%#lX", r);
But type unsigned long does not have 64 bits on all platforms; indeed it does not on the 32-bit Intel ABI. You should use the type uint64_t from <stdint.h> and the conversion macro from <inttypes.h>:
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
#include <string.h>

int main() {
    int x = 12;
    printf("int: %#X [", x);
    for (size_t i = 0; i < sizeof x; i++) {
        printf(" %02X", ((unsigned char *)&x)[i]);
    }
    printf(" ]\n");

    double d = 12.0;
    uint64_t r;
    memcpy(&r, &d, sizeof r);
    printf("double: %#"PRIX64" [", r);
    for (size_t i = 0; i < sizeof d; i++) {
        printf(" %02X", ((unsigned char *)&d)[i]);
    }
    printf(" ]\n");

    printf("sign bit: %d\n", (int)(r >> 63));
    printf("exponent: %d\n", (int)((r >> 52) & 2047));
    unsigned long long mantissa = r & ((1ULL << 52) - 1);
    printf("mantissa: %#llX, %.17f\n",
           mantissa, 1 + (double)mantissa / (1ULL << 52));
    return 0;
}
Output:
int: 0XC [ 0C 00 00 00 ]
double: 0X4028000000000000 [ 00 00 00 00 00 00 28 40 ]
sign bit: 0
exponent: 1026
mantissa: 0X8000000000000, 1.50000000000000000
As explained in the article Double-precision floating-point format, this representation corresponds to a positive number with value 1.5 * 2^(1026-1023), i.e.: 1.5 * 8 = 12.0.

The X format specifier expects an int or unsigned int argument. With the l modifier it expects a long or unsigned long argument. If you call it with anything else (such as a double) you get undefined behavior.
If you want to print a hex float (with uppercase letters), use the %A format, which for 12.0 will print 0X1.8P+3, i.e. 1.5 * 2^3.

Producing the encoding of a number in hex is a simple memory dump.
The process is not so different among types.
The code below passes the address of the object and its size to form a string for printing.
#include <stdio.h>
#include <assert.h>
#include <limits.h>

// .... compound literal ....................
#define VAR_TO_STR_HEX(x) obj_to_hex((char [(sizeof(x)*CHAR_BIT + 3)/4 + 1]){""}, &(x), sizeof (x))

char *obj_to_hex(char *dest, void *object, size_t osize) {
    const unsigned char *p = (const unsigned char *) object;
    p += osize;
    char *s = dest;
    while (osize-- > 0) {
        p--;
        unsigned i = (CHAR_BIT + 3)/4;
        while (i-- > 0) {
            unsigned digit = (*p >> (i*4)) & 15;
            *s++ = "0123456789ABCDEF"[digit];
        }
    }
    *s = '\0';
    return dest;
}

int main(void) {
    double d = 12.0;
    int i = 12;
    printf("double %s\tint %s\n", VAR_TO_STR_HEX(d), VAR_TO_STR_HEX(i));

    d = -d;
    i = -i;
    printf("double %s\tint %s\n", VAR_TO_STR_HEX(d), VAR_TO_STR_HEX(i));
    return 0;
}
Output
double 4028000000000000 int 0000000C
double C028000000000000 int FFFFFFF4
With more complex objects, the output may include padding bits/bytes, and the output is sensitive to endianness.

Related

Alternatives to register_printf_specifier ( to print numbers in binary format using printf() )

I understand that register_printf_specifier is now deprecated.
I can no longer run code using register_printf_specifier using the C99 compiler at www.onlinegdb.com.
e.g. I really wanted to run the following code that adds the %B format specifier to printf() in order to print out integers in binary (from Is there a printf converter to print in binary format?):
/*
 * File: main.c
 * Author: Techplex.Engineer
 *
 * Created on February 14, 2012, 9:16 PM
 */
#include <stdio.h>
#include <stdlib.h>
#include <printf.h>
#include <math.h>
#include <string.h>
#include <stdarg.h>

static int printf_arginfo_M(const struct printf_info *info, size_t n, int *argtypes)
{
    /* "%M" always takes one argument, a pointer to uint8_t[6]. */
    if (n > 0) {
        argtypes[0] = PA_POINTER;
    }
    return 1;
}

static int printf_output_M(FILE *stream, const struct printf_info *info, const void *const *args)
{
    int value = 0;
    int len;
    value = *(int *) (args[0]);

    // Beginning of my code ------------------------------------------------------------
    //char buffer [50] = ""; // Is this bad?
    char* buffer = (char*) malloc(sizeof(char) * 50);
    // char buffer2 [50] = ""; // Is this bad?
    char* buffer2 = (char*) malloc(sizeof(char) * 50);

    int bits = info->width;
    if (bits <= 0)
        bits = 8; // Default to 8 bits

    int mask = pow(2, bits - 1);
    while (mask > 0) {
        sprintf(buffer, "%s", ((value & mask) > 0 ? "1" : "0"));
        strcat(buffer2, buffer);
        mask >>= 1;
    }
    strcat(buffer2, "\n");
    // End of my code --------------------------------------------------------------

    len = fprintf(stream, "%s", buffer2);
    free(buffer);
    free(buffer2);
    return len;
}

int main(int argc, char** argv)
{
    register_printf_specifier('B', printf_output_M, printf_arginfo_M);
    printf("%4B\n", 65);
    return EXIT_SUCCESS;
}
When I do, I get:
main.c:65:53: warning: passing argument 3 of ‘register_printf_specifier’ from incompatible pointer type [-Wincompatible-pointer-types]
register_printf_specifier('B', printf_output_M, printf_arginfo_M);
^~~~~~~~~~~~~~~~
In file included from main.c:18:0:
/usr/include/printf.h:96:12: note: expected ‘int (*)(const struct printf_info *, size_t, int *, int *) {aka int (*)(const struct printf_info *, long unsigned int, int *, int *)}’ but argument is of type ‘int (*)(const struct printf_info *, size_t, int *) {aka int (*)(const struct printf_info *, long unsigned int, int *)}’
extern int register_printf_specifier (int __spec, printf_function __func,
^~~~~~~~~~~~~~~~~~~~~~~~~
main.c:67:15: warning: unknown conversion type character ‘B’ in format [-Wformat=]
printf("%4B\n", 65);
^
main.c:67:12: warning: too many arguments for format [-Wformat-extra-args]
printf("%4B\n", 65);
^~~~~~~
000
Since register_printf_specifier is now deprecated, what are programmers supposed to use instead? Create one's own variadic printf() like function?
P.S. Below is the updated code that corrects the errors pointed out to me. Make sure you use the right format specifier for the type of integer you want to display in binary (e.g. %hhB for chars and %hB for shorts); if you do not, the binary output will likely be wrong, especially for negative or unsigned integers. You can pad with spaces or zeros (e.g. %018hB will add 2 leading zeros to the binary, since shorts are 16 bits on the computer I am using).
/*
 * File: main.c
 * Author: Techplex.Engineer
 * Modified by: Robert Kennedy
 *
 * Created on February 14, 2012, 9:16 PM
 * Modified on August 28, 2021, 9:06 AM
 */
//
// The following #pragma's are the only way to suppress the compiler warnings,
// because there is no way of letting -Wformat know about the
// custom %B format specifier.
//
#pragma GCC diagnostic ignored "-Wformat="
#pragma GCC diagnostic ignored "-Wformat-extra-args"

#include <stdio.h>    // Needed for fprintf(), sprintf() and printf()
#include <stdlib.h>   // Needed for exit(), malloc(), calloc(), free() and
                      // the EXIT_SUCCESS macro constant
#include <printf.h>   // Needed for register_printf_specifier() and related
                      // data structures (like "const struct printf_info") and
                      // related macro constants (e.g. PA_POINTER)
#include <math.h>     // Needed for pow() and powl()
#include <string.h>   // Needed for strcat(), strncat(), memset()
#include <limits.h>   // Needed for min and max values of the various integer
                      // data types
#include <inttypes.h> // Needed for the int64_t data types and their min and
                      // max values

static int printf_arginfo_B(const struct printf_info *info, size_t n, int *argtypes, int *size)
{
    if (info->is_long_double)
        *size = sizeof(long long);  /* Optional to specify *size here */
    else if (info->is_long)
        *size = sizeof(long);       /* Optional to specify *size here */
    else
        *size = sizeof(int);        /* Optional to specify *size here */

    if (n > 0)  /* means there are arguments! */
    {
        argtypes[0] = PA_POINTER;   /* Specifies a void* pointer type */
    }
    return 1;
}

static int printf_output_B(FILE *stream, const struct printf_info *info, const void *const *args)
{
    const int sizeOfByte         = CHAR_BIT;
    const int charSizeInBits     = sizeof(char) * sizeOfByte;
    const int shortSizeInBits    = sizeof(short) * sizeOfByte;
    const int intSizeInBits      = sizeof(int) * sizeOfByte;
    const int longSizeInBits     = sizeof(long) * sizeOfByte;
    const int longlongSizeInBits = sizeof(long long) * sizeOfByte;

    unsigned int intValue            = 0;
    unsigned long longValue          = 0l;
    unsigned long long longlongValue = 0ll;

    int len;          // Length of the string (containing the binary
                      // number) that was printed.
                      // On error, a negative number will be returned.
    int displayBits;  // Number of bits to be displayed.
                      // If greater than calcBits, leading zeros
                      // will be displayed.
    int calcBits;     // The minimum number of bits needed for the
                      // decimal to binary conversion.

    displayBits = info->width;
    wchar_t padWithZero = info->pad;
    char padChar = ' ';

    if (info->is_long_double)
    {
        calcBits = longlongSizeInBits;
        if (displayBits < longlongSizeInBits)
            displayBits = longlongSizeInBits;
    }
    if (info->is_long)
    {
        calcBits = longSizeInBits;
        if (displayBits < longSizeInBits)
            displayBits = longSizeInBits;
    }
    if (!(info->is_long) && !(info->is_long_double) && !(info->is_short) && !(info->is_char))
    {
        calcBits = intSizeInBits;
        if (displayBits < intSizeInBits)
            displayBits = intSizeInBits;
    }
    if (info->is_short)
    {
        calcBits = shortSizeInBits;
        if (displayBits < shortSizeInBits)
            displayBits = shortSizeInBits;
    }
    if (info->is_char)
    {
        calcBits = charSizeInBits;
        if (displayBits < charSizeInBits)
            displayBits = charSizeInBits;
    }
    // printf("\ndisplayBits = %d and calcBits = %d\n", displayBits, calcBits);

    char *buffer  = (char *) malloc(sizeof(char) * (displayBits + 1));
    char *buffer2 = (char *) calloc(displayBits + 1, sizeof(char)); /* zeroed, so strncat
                                                                       starts from an empty string */
    if (info->is_long_double)
    {
        longlongValue = *((unsigned long long *) (args[0]));
        unsigned long long mask = powl(2, calcBits - 1);
        while (mask > 0)
        {
            sprintf(buffer, "%s", ((longlongValue & mask) > 0 ? "1" : "0"));
            strncat(buffer2, buffer, displayBits - ((int) strlen(buffer2)));
            mask >>= 1;
        }
    }
    else if (info->is_long)
    {
        longValue = *((unsigned long *) (args[0]));
        unsigned long mask = powl(2, calcBits - 1);
        while (mask > 0)
        {
            sprintf(buffer, "%s", ((longValue & mask) > 0 ? "1" : "0"));
            strncat(buffer2, buffer, displayBits - ((int) strlen(buffer2)));
            mask >>= 1;
        }
    }
    else
    {
        intValue = *((unsigned int *) (args[0]));
        unsigned long mask = pow(2, calcBits - 1);
        while (mask > 0)
        {
            sprintf(buffer, "%s", ((intValue & mask) > 0 ? "1" : "0"));
            strncat(buffer2, buffer, displayBits - ((int) strlen(buffer2)));
            mask >>= 1;
        }
    }

    if (displayBits > calcBits)
    {
        if ('0' == padWithZero)
            padChar = '0';
        else
            padChar = ' ';
        memset(buffer, '\0', displayBits);
        memset(buffer, padChar, (displayBits - calcBits));
        strncat(buffer, buffer2, displayBits - ((int) strlen(buffer)));
        len = fprintf(stream, "%s", buffer);
    }
    else
    {
        len = fprintf(stream, "%s", buffer2);
    }
    free(buffer);
    free(buffer2);
    return len;
}

int main(int argc, char **argv)
{
    const int sizeOfByte = 8;
    register_printf_specifier('B', printf_output_B, printf_arginfo_B);

    printf("Sizeof(char) is: %ld bits\n", sizeof(char) * sizeOfByte);
    printf("CHAR_MAX %hhd in binary is: %hhB\n", CHAR_MAX, CHAR_MAX);
    printf("CHAR_MIN %hhd in binary is: %hhB\n", CHAR_MIN, CHAR_MIN);
    printf("UCHAR_MAX %hhu (unsigned) in binary is: %hhB\n", UCHAR_MAX, UCHAR_MAX);
    printf("%hhd in binary is: %hhB\n", -5, -5);
    printf(" %hhd in binary is: %hhB\n\n", 0, 0);

    printf("Sizeof(short) is: %ld bits\n", sizeof(short) * sizeOfByte);
    printf("SHRT_MAX %hd in binary is: %hB\n", SHRT_MAX, SHRT_MAX);
    printf("SHRT_MIN %hd in binary is: %hB\n", SHRT_MIN, SHRT_MIN);
    printf("USHRT_MAX %hu (unsigned) in binary is: %hB\n", USHRT_MAX, USHRT_MAX);
    printf("USHRT_MAX %hu (unsigned) in binary with 2 leading zeros is: %018hB\n", USHRT_MAX, USHRT_MAX);
    printf("USHRT_MAX %hu (unsigned) in binary with 2 leading spaces is: %18hB\n\n", USHRT_MAX, USHRT_MAX);

    printf("Sizeof(int) is: %ld bits\n", sizeof(int) * sizeOfByte);
    printf("INT_MAX %d in binary is: %B\n", INT_MAX, INT_MAX);
    printf("INT_MIN %d in binary is: %B\n", INT_MIN, INT_MIN);
    printf("UINT_MAX %u (unsigned) in binary is: %B\n", UINT_MAX, UINT_MAX);
    printf("UINT_MAX %u (unsigned) in binary with 4 leading zeros is: %036B\n\n", UINT_MAX, UINT_MAX);

    printf("Sizeof(long) is: %ld bits\n", sizeof(long) * sizeOfByte);
    printf("LONG_MAX %ld in binary is: %lB\n", LONG_MAX, LONG_MAX);
    printf("LONG_MIN %ld in binary is: %lB\n", LONG_MIN, LONG_MIN);
    printf("ULONG_MAX %lu (unsigned) in binary is: %lB\n\n", ULONG_MAX, ULONG_MAX);

    printf("Sizeof(long long) is: %ld bits\n", sizeof(long long) * sizeOfByte);
    printf("LLONG_MAX %lld in binary is: %llB\n", LLONG_MAX, LLONG_MAX);
    printf("LLONG_MIN %lld in binary is: %llB\n", LLONG_MIN, LLONG_MIN);
    printf("ULLONG_MAX %llu (unsigned) in binary is: %llB\n\n", ULLONG_MAX, ULLONG_MAX);

    printf("Sizeof(int64_t) is: %ld bits\n", sizeof(int64_t) * sizeOfByte);
    printf("INT_64_MAX %lld in binary is: %LB\n", INT64_MAX, INT64_MAX);
    printf("INT_64_MIN %lld in binary is: %LB\n", INT64_MIN, INT64_MIN);
    printf("UINT64_MAX %llu in binary is: %LB\n", UINT64_MAX, UINT64_MAX);
    return EXIT_SUCCESS;
}
Below is the output:
Sizeof(char) is: 8 bits
CHAR_MAX 127 in binary is: 01111111
CHAR_MIN -128 in binary is: 10000000
UCHAR_MAX 255 (unsigned) in binary is: 11111111
-5 in binary is: 11111011
0 in binary is: 00000000
Sizeof(short) is: 16 bits
SHRT_MAX 32767 in binary is: 0111111111111111
SHRT_MIN -32768 in binary is: 1000000000000000
USHRT_MAX 65535 (unsigned) in binary is: 1111111111111111
USHRT_MAX 65535 (unsigned) in binary with 2 leading zeros is: 001111111111111111
USHRT_MAX 65535 (unsigned) in binary with 2 leading spaces is: 1111111111111111
Sizeof(int) is: 32 bits
INT_MAX 2147483647 in binary is: 01111111111111111111111111111111
INT_MIN -2147483648 in binary is: 10000000000000000000000000000000
UINT_MAX 4294967295 (unsigned) in binary is: 11111111111111111111111111111111
UINT_MAX 4294967295 (unsigned) in binary with 4 leading zeros is: 000011111111111111111111111111111111
Sizeof(long) is: 64 bits
LONG_MAX 9223372036854775807 in binary is: 0111111111111111111111111111111111111111111111111111111111111111
LONG_MIN -9223372036854775808 in binary is: 1000000000000000000000000000000000000000000000000000000000000000
ULONG_MAX 18446744073709551615 (unsigned) in binary is: 1111111111111111111111111111111111111111111111111111111111111111
Sizeof(long long) is: 64 bits
LLONG_MAX 9223372036854775807 in binary is: 0111111111111111111111111111111111111111111111111111111111111111
LLONG_MIN -9223372036854775808 in binary is: 1000000000000000000000000000000000000000000000000000000000000000
ULLONG_MAX 18446744073709551615 (unsigned) in binary is: 1111111111111111111111111111111111111111111111111111111111111111
Sizeof(int64_t) is: 64 bits
INT_64_MAX 9223372036854775807 in binary is: 0111111111111111111111111111111111111111111111111111111111111111
INT_64_MIN -9223372036854775808 in binary is: 1000000000000000000000000000000000000000000000000000000000000000
UINT64_MAX 18446744073709551615 in binary is: 1111111111111111111111111111111111111111111111111111111111111111
register_printf_specifier is not deprecated - register_printf_function is deprecated. Your code does compile and run - you can see that it prints binary at the end - you just have some compiler warnings. If you need proof that register_printf_specifier is valid, see in printf.h:
typedef int printf_arginfo_size_function (const struct printf_info *__info,
                                          size_t __n, int *__argtypes,
                                          int *__size);

/* Old version of 'printf_arginfo_function' without a SIZE parameter. */
typedef int printf_arginfo_function (const struct printf_info *__info,
                                     size_t __n, int *__argtypes);
...
/* Register FUNC to be called to format SPEC specifiers; ARGINFO must be
   specified to determine how many arguments a SPEC conversion requires and
   what their types are. */
extern int register_printf_specifier (int __spec, printf_function __func,
                                      printf_arginfo_size_function __arginfo)
  __THROW;

/* Obsolete interface similar to register_printf_specifier. It can only
   handle basic data types because the ARGINFO callback does not return
   information on the size of the user-defined type. */
extern int register_printf_function (int __spec, printf_function __func,
                                     printf_arginfo_function __arginfo)
  __THROW __attribute_deprecated__;
i.e. in modern code, instead of using register_printf_function you should use register_printf_specifier. You'll notice that their signatures are very similar, with the only difference being that the last parameter is printf_arginfo_size_function as opposed to printf_arginfo_function.
And this is your problem - you are passing a parameter of type printf_arginfo_function to a function that expects a parameter of type printf_arginfo_size_function. You need to add an int *size parameter to printf_arginfo_M and fill it with the size of your argument, i.e. *size = sizeof(int).
As an aside, you could have understood this from the compiler warning:
/usr/include/printf.h:96:12: note: expected ‘int (*)(const struct
printf_info *, size_t, int *, int *) {aka int (*)(const struct
printf_info *, long unsigned int, int *, int *)}’ but argument is of type
‘int (*)(const struct printf_info *, size_t, int *) {aka int
(*)(const struct printf_info *, long unsigned int, int *)}’
As for your modifier function, consider your code:
    int bits = info->width;
    if (bits <= 0)
        bits = 8; // Default to 8 bits
    int mask = pow(2, bits - 1);
Notice that you're missing quite a few bits here - for instance, 65 needs 7 bits, and you're only printing the bottom 4 when you print "%4B".
Also, if you want to print 64-bit numbers in binary, then in the same way that you print 64-bit integers using %lld, you should print your numbers using %llB. Then, inside printf_output_M you can write code like this:
    if (info->is_long_double) {
        long_value = *(uint64_t*)(args[0]);
        bits = 64;
    }
Notice that this requires a general redesign of your function - you must change maxBits to 64 (because you want to support printing up to 64 bits), etc.
As to the main.c:67:15: warning: unknown conversion type character ‘B’ in format [-Wformat=] warnings - these aren't avoidable without manually suppressing the -Wformat flag for your lines of code. As of now, there is no way of letting -Wformat know about your custom specifier.
Unrelated side note - Google's latest CTF had a very cool challenge involving reverse engineering a virtual machine written using register_printf_function (yes, the deprecated one) - you can see the source code here.

How to get specific bit segments of an integer in C?

You are given a getTemps() function that returns an integer composed of: the daily high temperature
in bits 20-29, the daily low temperature in bits 10-19, and the current temperature in bits 0-9, all
as 2's complement 10-bit integers.
Write a C program which extracts the high, low, and current temperature and prints the values.
I am given this situation. So my question is how do I get the specific segments of an integer.
So far I have:
#include <stdio.h>
unsigned createMask(unsigned a, unsigned b) {
    unsigned r = 0;
    unsigned i;
    for (i = a; i <= b; i++) {
        r |= 1 << i;
    }
    return r;
}

int main(int argc, char *argv[])
{
    unsigned r = createMask(29, 31);
    int i = 23415;
    unsigned result = r & i;
    printf("%d\n", i);
    printf("%u\n", result);
}
The integer 23415 for example would have the binary 00000000 00000000 01011011 01110111
Then bits 29 through 31, for example, should be 111, or integer -1, since it's 2's complement.
There are three basic approaches for extracting encoded information from a bitfield. The first two are related and differ only in the manner the bitfield struct is initialized. The first and shortest is to simply create a bitfield struct defining the bits associated with each member of the struct. The sum of the bits cannot exceed sizeof type * CHAR_BIT bits for the type used to create the bitfield. A simple example is:
#include <stdio.h>

typedef struct {
    unsigned cur  : 10,
             low  : 10,
             high : 10;
} temps;

int main (void) {
    unsigned n = 0;           /* encoded number of temps */

    n = 58;                   /* fill number (58, 46, 73) */
    n |= (46 << 10);
    n |= (73 << 20);

    temps t = *(temps *)&n;   /* initialize struct t */

    /* output value and temps */
    printf ("\n number entered : %u\n\n", n);
    printf (" %2hhu - %2hhu value : %u (deg. F)\n", 0, 9, t.cur);
    printf (" %2hhu - %2hhu value : %u (deg. F)\n", 10, 19, t.low);
    printf (" %2hhu - %2hhu value : %u (deg. F)\n\n", 20, 29, t.high);

    return 0;
}
Note: memcpy can also be used to initialize the value for the structure to avoid casting the address of n. (that was done intentionally here to avoid inclusion of string.h).
The next method involves creation of a union between the data type represented and the exact same struct discussed above. The benefit of using the union is that you avoid having to typecast, or memcpy a value to initialize the struct. You simply assign the encoded value to the numeric type within the union. The same example using this method is:
#include <stdio.h>

typedef struct {
    unsigned cur  : 10,
             low  : 10,
             high : 10;
} temps;

typedef union {
    temps t;
    unsigned n;
} utemps;

int main (void) {
    unsigned n = 0;     /* encoded number of temps */

    n = 58;             /* fill number (58, 46, 73) */
    n |= (46 << 10);
    n |= (73 << 20);

    utemps u;           /* declare/initialize union */
    u.n = n;

    /* output value and temps */
    printf ("\n number entered : %u\n\n", n);
    printf (" %2hhu - %2hhu value : %u (deg. F)\n", 0, 9, u.t.cur);
    printf (" %2hhu - %2hhu value : %u (deg. F)\n", 10, 19, u.t.low);
    printf (" %2hhu - %2hhu value : %u (deg. F)\n\n", 20, 29, u.t.high);

    return 0;
}
Finally, the third method uses neither a struct or union and simply relies on bit shift operations to accomplish the same purpose. A quick example is:
#include <stdio.h>
#include <limits.h> /* for CHAR_BIT */

int main (void) {
    unsigned n = 0;       /* encoded number of temps */
    unsigned char i = 0;  /* loop counter            */
    unsigned char r = 10; /* number of bits in temps */
    unsigned char s = 0;  /* shift accumulator       */
    unsigned v = 0;       /* extracted value         */

    n = 58;               /* fill number (58, 46, 73) */
    n |= (46 << 10);
    n |= (73 << 20);

    printf ("\n number entered : %u\n\n", n);

    /* extract and output temps from n */
    for (i = 0; i < (sizeof n * CHAR_BIT)/r; i++)
    {
        v = (n >> i * r) & 0x3ff;
        printf (" %2hhu - %2hhu value : %u (deg. F)\n", s, s + r - 1, v);
        s += r;
    }
    printf ("\n");

    return 0;
}
Note: you can automate the creation of the mask with the createMask function. While longer, the shift method is not computationally intensive, as shift operations take very little time. While negligible, the multiplication could also be replaced with a shift and addition to tweak performance further. The only costly instruction is the division in the loop test clause, but again it is negligible, and all of these cases are likely to be optimized by the compiler.
All of the examples above produce exactly the same output:
$ ./bin/bit_extract_shift
number entered : 76593210
0 - 9 value : 58 (deg. F)
10 - 19 value : 46 (deg. F)
20 - 29 value : 73 (deg. F)
You can use a union and bit-fields to do it. Try something like this:
struct TEM_BITS {
    unsigned int tem_high :10;
    unsigned int tem_low  :10;
    unsigned int tem_cur  :10;
};

union TEM_U {
    int tem_values;
    struct TEM_BITS bits;
};

union TEM_U t;
t.tem_values = 12345;
printf("tem_high : 0x%X\n", t.bits.tem_high);
printf("tem_low  : 0x%X\n", t.bits.tem_low);
printf("tem_cur  : 0x%X\n", t.bits.tem_cur);
I might have my bits backwards, but the following should be close.
int current=x&0x3ff;
int low = (x>>10)&0x3ff;
int high = (x>>20)&0x3ff;

Bit Manipulation on char array in c

If I am given a char array of size 8, where I know the first 3 bytes are the id, the next byte is the message, and the last 3 bytes are the values, how could I use bit manipulation to extract the message?
Example: a char array contains 9990111 (one digit per position), where 999 is the id, 0 is the message, and 111 is the value.
Any tips? Thanks!
Given:
the array contains {'9','9','9','0','1','1','1'}
Then you can convert with sscanf():
char buffer[8] = { '9', '9', '9', '0', '1', '1', '1', '\0' };
//char buffer[] = "9990111"; // More conventional but equivalent notation
int id;
int message;
int value;
if (sscanf(buffer, "%3d%1d%3d", &id, &message, &value) != 3)
…conversion failed…inexplicably in this context…
assert(id == 999);
assert(message == 0);
assert(value == 111);
But there's no bit manipulation needed there.
Well, if you want bit manipulation, no matter what, here it goes:
#include <stdio.h>
#include <stdint.h>    /* uint32_t */
#include <arpa/inet.h> /* ntohl */

int main(void) {
    char arr[8] = "9997111";
    int msg = 0;

    msg = ((ntohl(*(uint32_t *) arr)) & 0xff) - 48;
    printf("%d\n", msg);
    return 0;
}
Output:
7
Just remember one thing... this does not comply with strict aliasing rules. But you can use some memcpy() stuff to solve it.
Edit #1 (parsing it all, granting compliance with strict aliasing rules, and making you see that this does not make any sense):
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <arpa/inet.h>

int main(void) {
    char arr[8] = "9997111";
    uint32_t a[2];
    unsigned int id = 0, msg = 0, val = 0;

    memcpy(a, arr, 4);
    memcpy(&a[1], arr + 4, 4);
    a[0] = ntohl(a[0]);
    a[1] = ntohl(a[1]);

    id  = ((((a[0] & 0xff000000) >> 24) - 48) * 100) + ((((a[0] & 0xff0000) >> 16) - 48) * 10) + (((a[0] & 0xff00) >> 8) - 48);
    msg = (a[0] & 0xff) - 48;
    val = ((((a[1] & 0xff000000) >> 24) - 48) * 100) + ((((a[1] & 0xff0000) >> 16) - 48) * 10) + (((a[1] & 0xff00) >> 8) - 48);

    printf("%u\n", id);
    printf("%u\n", msg);
    printf("%u\n", val);
    return 0;
}
Output:
999
7
111
The usual way would be to define a structure with members which are bit-fields corresponding to the segmented information in your array. (Oh, re-reading your question: is the array filled with { '9', '9', ... }? Then you'd just sscanf the values with the proper offset into the array.)
You can use memcpy to extract the values. Here is an example:
char *info  = malloc(4); /* 3 chars + '\0' */
char *info2 = malloc(2); /* 1 char  + '\0' */
char *info3 = malloc(4); /* 3 chars + '\0' */

memcpy(info,  msgTest,     3);  info[3]  = '\0';
memcpy(info2, msgTest + 3, 1);  info2[1] = '\0';
memcpy(info3, msgTest + 4, 3);  info3[3] = '\0';

printf("%s\n", msgTest);
printf("ID is %s\n", info);
printf("Code is %s\n", info2);
printf("Val is %s\n", info3);
Let's say the string msgTest = "0098457".
The print statements will go as follows:
ID is 009
Code is 8
Val is 457
Hope this helps, good luck!
Here is an example in which I don't use malloc or memcpy, for a good implementation on embedded devices, where the stack is limited. Note there is no need to pack the struct because it is only 1 byte. This is a C11 implementation. If you have 4 bytes to be analyzed, for example, create another struct with 4 sets of char bits, and copy the address to the new struct instead. This is consistent with design-pattern concepts for embedded.
#include <stdio.h>

// start by creating a struct for the bits
typedef struct {
    unsigned int bit0:1; // this is the LSB
    unsigned int bit1:1; // bit 1
    unsigned int bit2:1;
    unsigned int bit3:1;
    unsigned int bit4:1;
    unsigned int bit5:1;
    unsigned int bit6:1;
    unsigned int bit7:1;
} charbits;

int main()
{
    // now assume we have a char to be converted into its bits
    char a = 'a';   // ASCII of 'a' is 97
    charbits *x;    // this is the character's bits to be converted to

    // first convert the char a to a void pointer
    void *p;        // this is a void pointer
    p = &a;         // put the address of a into p

    // now convert the void pointer to the struct pointer
    x = (charbits *) p;

    // now print the contents of the struct
    printf("b0 %d b1 %d b2 %d b3 %d b4 %d b5 %d b6 %d b7 %d",
           x->bit0, x->bit1, x->bit2, x->bit3, x->bit4, x->bit5, x->bit6, x->bit7);

    // 97 has bits like this: 01100001
    // b0 1 b1 0 b2 0 b3 0 b4 0 b5 1 b6 1 b7 0
    // now we see that bit 0 is the LSB, which is the first one in the struct
    return 0;
}
// thank you and i hope this helps

How can I print a C double bit-by-bit to see the low-level representation?

I want to learn how the computer represents the double type in bits, but the & and | bit operators can't be used with double. And memcpy(&d, &src, 8) also doesn't seem to work. Any suggestions?
Here:
#include <stdio.h>

int main ()
{
    double decker = 1.0;
    unsigned char * desmond = (unsigned char *) & decker;
    int i;

    for (i = 0; i < sizeof (double); i++) {
        printf ("%02X ", desmond[i]);
    }
    printf ("\n");
    return 0;
}
You can try it: http://codepad.org/onHnAcnC
union {
    double d;
    unsigned char c[sizeof(double)];
} d;

int main(int ac, char **av) {
    int i;
    char s1[80], s2[80];

    d.d = 1.0;
    for (i = 0; i < sizeof d; ++i) {
        sprintf(s1 + i * 3, " %02x", d.c[i]);
        sprintf(s2 + i * 3, " %02x", d.c[sizeof d - 1 - i]);
    }
    printf("%s\n%s\n", s1, s2);
    return 0;
}
$ ./a.out
00 00 00 00 00 00 f0 3f
3f f0 00 00 00 00 00 00
Or you could just read about the IEEE 754 standard, which specifies representation.
http://en.wikipedia.org/wiki/IEEE_754-1985
A particular bit layout by itself is meaningless. Suppose I have the following: 1101
Maybe I say that is unsigned and it represents the value 13.
Maybe it is signed (sign and magnitude), and that high bit signifies that the value is negative, which means it is now -5.
Consider further that I treat the high two bits as a base and the low two bits as an exponent; then I get the value 3.
You see, it isn't the storage, it's the interpretation. Read up on how floating point values are represented and interpreted; it will serve you much better than seeing how the bits are packed.
That isn't going to be very enlightening unless you also know a bit about typical IEEE FP representations.
Most likely the way your machine represents doubles is spelled out here.
This works for me
#include <stdio.h>
#include <string.h> /* memmove */

int main(void) {
    unsigned char dontdothis[sizeof (double)];
    double x = 62.42;

    printf("%f\n", x);
    memmove(&dontdothis, &x, sizeof dontdothis);
    /* examine/change the array dontdothis */
    dontdothis[sizeof x - 1] ^= 0x80;
    /* examine/change the array dontdothis */
    memmove(&x, &dontdothis, sizeof dontdothis);
    printf("%f\n", x);
    return 0;
}
The result is
62.420000
-62.420000
The key is to convert the double to a long long (assuming sizeof(double) == sizeof(long long)) without changing the binary representation. This can be achieved by one of the following methods:
cast: double a; long long b = *((long long *)&a);
union: union { double a; long long b; };
Another option is to use bitfields. Armed with such a structure and knowledge of how a double is supposed to be stored on your computer you can very easily print out the different parts of the internal representation of the double. A bit like they do here.

Is there a printf converter to print in binary format?

I can print with printf as a hex or octal number. Is there a format tag to print as binary, or arbitrary base?
I am running gcc.
printf("%d %x %o\n", 10, 10, 10); //prints "10 A 12\n"
printf("%b\n", 10); // prints "%b\n"
Hacky but works for me:
#define BYTE_TO_BINARY_PATTERN "%c%c%c%c%c%c%c%c"
#define BYTE_TO_BINARY(byte) \
(byte & 0x80 ? '1' : '0'), \
(byte & 0x40 ? '1' : '0'), \
(byte & 0x20 ? '1' : '0'), \
(byte & 0x10 ? '1' : '0'), \
(byte & 0x08 ? '1' : '0'), \
(byte & 0x04 ? '1' : '0'), \
(byte & 0x02 ? '1' : '0'), \
(byte & 0x01 ? '1' : '0')
printf("Leading text "BYTE_TO_BINARY_PATTERN, BYTE_TO_BINARY(byte));
For multi-byte types
printf("m: "BYTE_TO_BINARY_PATTERN" "BYTE_TO_BINARY_PATTERN"\n",
BYTE_TO_BINARY(m>>8), BYTE_TO_BINARY(m));
You need all the extra quotes unfortunately. This approach has the efficiency risks of macros (don't pass a function as the argument to BYTE_TO_BINARY) but avoids the memory issues and multiple invocations of strcat in some of the other proposals here.
Print Binary for Any Datatype
// Assumes little endian
void printBits(size_t const size, void const * const ptr)
{
unsigned char *b = (unsigned char*) ptr;
unsigned char byte;
int i, j;
for (i = size-1; i >= 0; i--) {
for (j = 7; j >= 0; j--) {
byte = (b[i] >> j) & 1;
printf("%u", byte);
}
}
puts("");
}
Test:
int main(int argc, char* argv[])
{
int i = 23;
unsigned int ui = UINT_MAX;
float f = 23.45f;
printBits(sizeof(i), &i);
printBits(sizeof(ui), &ui);
printBits(sizeof(f), &f);
return 0;
}
Here is a quick hack to demonstrate techniques to do what you want.
#include <stdio.h> /* printf */
#include <string.h> /* strcat */
#include <stdlib.h> /* strtol */
const char *byte_to_binary
(
int x
)
{
static char b[9];
b[0] = '\0';
int z;
for (z = 128; z > 0; z >>= 1)
{
strcat(b, ((x & z) == z) ? "1" : "0");
}
return b;
}
int main
(
void
)
{
{
/* binary string to int */
char *tmp;
char *b = "0101";
printf("%ld\n", strtol(b, &tmp, 2));
}
{
/* byte to binary string */
printf("%s\n", byte_to_binary(5));
}
return 0;
}
There isn't a binary conversion specifier in glibc normally.
It is possible to add custom conversion types to the printf() family of functions in glibc. See register_printf_function for details. You could add a custom %b conversion for your own use, if it simplifies the application code to have it available.
Here is an example of how to implement a custom printf formats in glibc.
You could use a small table to improve speed1. Similar techniques are useful in the embedded world, for example, to invert a byte:
const char *bit_rep[16] = {
[ 0] = "0000", [ 1] = "0001", [ 2] = "0010", [ 3] = "0011",
[ 4] = "0100", [ 5] = "0101", [ 6] = "0110", [ 7] = "0111",
[ 8] = "1000", [ 9] = "1001", [10] = "1010", [11] = "1011",
[12] = "1100", [13] = "1101", [14] = "1110", [15] = "1111",
};
void print_byte(uint8_t byte)
{
printf("%s%s", bit_rep[byte >> 4], bit_rep[byte & 0x0F]);
}
1 I'm mostly referring to embedded applications where optimizers are not so aggressive and the speed difference is visible.
Print the least significant bit and shift it out on the right. Doing this until the integer becomes zero prints the binary representation without leading zeros but in reversed order. Using recursion, the order can be corrected quite easily.
#include <stdio.h>
void print_binary(unsigned int number)
{
if (number >> 1) {
print_binary(number >> 1);
}
putc((number & 1) ? '1' : '0', stdout);
}
To me, this is one of the cleanest solutions to the problem. If you like 0b prefix and a trailing new line character, I suggest wrapping the function.
Online demo
Based on @William Whyte's answer, this is a macro that provides int8, int16, int32 and int64 versions, reusing the INT8 macro to avoid repetition.
/* --- PRINTF_BYTE_TO_BINARY macro's --- */
#define PRINTF_BINARY_PATTERN_INT8 "%c%c%c%c%c%c%c%c"
#define PRINTF_BYTE_TO_BINARY_INT8(i) \
(((i) & 0x80ll) ? '1' : '0'), \
(((i) & 0x40ll) ? '1' : '0'), \
(((i) & 0x20ll) ? '1' : '0'), \
(((i) & 0x10ll) ? '1' : '0'), \
(((i) & 0x08ll) ? '1' : '0'), \
(((i) & 0x04ll) ? '1' : '0'), \
(((i) & 0x02ll) ? '1' : '0'), \
(((i) & 0x01ll) ? '1' : '0')
#define PRINTF_BINARY_PATTERN_INT16 \
PRINTF_BINARY_PATTERN_INT8 PRINTF_BINARY_PATTERN_INT8
#define PRINTF_BYTE_TO_BINARY_INT16(i) \
PRINTF_BYTE_TO_BINARY_INT8((i) >> 8), PRINTF_BYTE_TO_BINARY_INT8(i)
#define PRINTF_BINARY_PATTERN_INT32 \
PRINTF_BINARY_PATTERN_INT16 PRINTF_BINARY_PATTERN_INT16
#define PRINTF_BYTE_TO_BINARY_INT32(i) \
PRINTF_BYTE_TO_BINARY_INT16((i) >> 16), PRINTF_BYTE_TO_BINARY_INT16(i)
#define PRINTF_BINARY_PATTERN_INT64 \
PRINTF_BINARY_PATTERN_INT32 PRINTF_BINARY_PATTERN_INT32
#define PRINTF_BYTE_TO_BINARY_INT64(i) \
PRINTF_BYTE_TO_BINARY_INT32((i) >> 32), PRINTF_BYTE_TO_BINARY_INT32(i)
/* --- end macros --- */
#include <stdio.h>
int main() {
long long int flag = 1648646756487983144ll;
printf("My Flag "
PRINTF_BINARY_PATTERN_INT64 "\n",
PRINTF_BYTE_TO_BINARY_INT64(flag));
return 0;
}
This outputs:
My Flag 0001011011100001001010110111110101111000100100001111000000101000
For readability you may want to add a separator for eg:
My Flag 00010110,11100001,00101011,01111101,01111000,10010000,11110000,00101000
As of February 3rd, 2022, the GNU C Library has been updated to version 2.35. As a result, %b is now supported for output in binary format.
printf-family functions now support the %b format for output of
integers in binary, as specified in draft ISO C2X, and the %B variant
of that format recommended by draft ISO C2X.
Here's a version of the function that does not suffer from reentrancy issues or limits on the size/type of the argument:
#define FMT_BUF_SIZE (CHAR_BIT*sizeof(uintmax_t)+1)
char *binary_fmt(uintmax_t x, char buf[static FMT_BUF_SIZE])
{
char *s = buf + FMT_BUF_SIZE;
*--s = 0;
if (!x) *--s = '0';
for (; x; x /= 2) *--s = '0' + x%2;
return s;
}
Note that this code would work just as well for any base between 2 and 10 if you just replace the 2's by the desired base. Usage is:
char tmp[FMT_BUF_SIZE];
printf("%s\n", binary_fmt(x, tmp));
Where x is any integral expression.
Quick and easy solution:
void printbits(my_integer_type x)
{
for(int i=sizeof(x)<<3; i; i--)
putchar('0'+((x>>(i-1))&1));
}
Works for any size type and for signed and unsigned ints. The '&1' is needed to handle signed ints as the shift may do sign extension.
There are so many ways of doing this. Here's a super simple one for printing 32 bits or n bits from a signed or unsigned 32-bit type (not printing a negative sign if signed, just the actual bits), with no trailing newline. Note that i is decremented before the bit shift:
#define printbits_n(x,n) for (int i=n;i;i--,putchar('0'|(x>>i)&1))
#define printbits_32(x) printbits_n(x,32)
What about returning a string with the bits to store or print later? You either can allocate the memory and return it and the user has to free it, or else you return a static string but it will get clobbered if it's called again, or by another thread. Both methods shown:
char *int_to_bitstring_alloc(int x, int count)
{
count = count<1 ? sizeof(x)*8 : count;
char *pstr = malloc(count+1);
for(int i = 0; i<count; i++)
pstr[i] = '0' | ((x>>(count-1-i))&1);
pstr[count]=0;
return pstr;
}
#define BITSIZEOF(x) (sizeof(x)*8)
char *int_to_bitstring_static(int x, int count)
{
static char bitbuf[BITSIZEOF(x)+1];
count = (count<1 || count>BITSIZEOF(x)) ? BITSIZEOF(x) : count;
for(int i = 0; i<count; i++)
bitbuf[i] = '0' | ((x>>(count-1-i))&1);
bitbuf[count]=0;
return bitbuf;
}
Call with:
// memory allocated string returned which needs to be freed
char *pstr = int_to_bitstring_alloc(0x97e50ae6, 17);
printf("bits = 0b%s\n", pstr);
free(pstr);
// no free needed but you need to copy the string to save it somewhere else
char *pstr2 = int_to_bitstring_static(0x97e50ae6, 17);
printf("bits = 0b%s\n", pstr2);
Is there a printf converter to print in binary format?
The printf() family is only able to print integers in base 8, 10, and 16 using the standard specifiers directly. I suggest creating a function that converts the number to a string per code's particular needs.
[Edit 2022] This is expected to change with the next version of C (C2X), which adds binary constants such as 0b10101010 and a %b conversion specifier for the printf() function family.
To print in any base [2-36]
All other answers so far have at least one of these limitations.
Use static memory for the return buffer. This limits the number of times the function may be used as an argument to printf().
Allocate memory requiring the calling code to free pointers.
Require the calling code to explicitly provide a suitable buffer.
Call printf() directly. This obliges a new function for each of fprintf(), sprintf(), vsprintf(), etc.
Use a reduced integer range.
The following has none of the above limitations. It does require C99 or later and use of "%s". It uses a compound literal to provide the buffer space. It has no trouble with multiple calls in a single printf().
#include <assert.h>
#include <limits.h>
#define TO_BASE_N (sizeof(unsigned)*CHAR_BIT + 1)
// v--compound literal--v
#define TO_BASE(x, b) my_to_base((char [TO_BASE_N]){""}, (x), (b))
// Tailor the details of the conversion function as needed
// This one does not display unneeded leading zeros
// Use return value, not `buf`
char *my_to_base(char buf[TO_BASE_N], unsigned i, int base) {
assert(base >= 2 && base <= 36);
char *s = &buf[TO_BASE_N - 1];
*s = '\0';
do {
s--;
*s = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"[i % base];
i /= base;
} while (i);
// Could employ memmove here to move the used buffer to the beginning
// size_t len = &buf[TO_BASE_N] - s;
// memmove(buf, s, len);
return s;
}
#include <stdio.h>
int main(void) {
int ip1 = 0x01020304;
int ip2 = 0x05060708;
printf("%s %s\n", TO_BASE(ip1, 16), TO_BASE(ip2, 16));
printf("%s %s\n", TO_BASE(ip1, 2), TO_BASE(ip2, 2));
puts(TO_BASE(ip1, 8));
puts(TO_BASE(ip1, 36));
return 0;
}
Output
1020304 5060708
1000000100000001100000100 101000001100000011100001000
100401404
A2F44
const char* byte_to_binary(int x)
{
static char b[sizeof(int)*8+1] = {0};
int y;
long long z;
for (z = 1LL << (sizeof(int)*8 - 1), y = 0; z > 0; z >>= 1, y++) {
b[y] = (((x & z) == z) ? '1' : '0');
}
b[y] = 0;
return b;
}
None of the previously posted answers are exactly what I was looking for, so I wrote one. It is super simple to use %B with printf!
/*
* File: main.c
* Author: Techplex.Engineer
*
* Created on February 14, 2012, 9:16 PM
*/
#include <stdio.h>
#include <stdlib.h>
#include <printf.h>
#include <math.h>
#include <string.h>
static int printf_arginfo_M(const struct printf_info *info, size_t n, int *argtypes)
{
/* "%B" always takes one argument, an int. */
if (n > 0) {
argtypes[0] = PA_INT;
}
return 1;
}
static int printf_output_M(FILE *stream, const struct printf_info *info, const void *const *args)
{
int value = 0;
int len;
value = *(const int *) (args[0]);
// Beginning of my code ------------------------------------------------------------
char buffer [50] = ""; // Is this bad?
char buffer2 [50] = ""; // Is this bad?
int bits = info->width;
if (bits <= 0)
bits = 8; // Default to 8 bits
int mask = 1 << (bits - 1);
while (mask > 0) {
sprintf(buffer, "%s", ((value & mask) > 0 ? "1" : "0"));
strcat(buffer2, buffer);
mask >>= 1;
}
strcat(buffer2, "\n");
// End of my code --------------------------------------------------------------
len = fprintf(stream, "%s", buffer2);
return len;
}
int main(int argc, char** argv)
{
register_printf_specifier('B', printf_output_M, printf_arginfo_M);
printf("%4B\n", 65);
return EXIT_SUCCESS;
}
This code should handle your needs up to 64 bits.
I created two functions: pBin and pBinFill. Both do the same thing, but pBinFill fills in the leading spaces with the fill character provided by its last argument.
The test function generates some test data, then prints it out using the pBinFill function.
#define kDisplayWidth 64
char* pBin(long int x,char *so)
{
char s[kDisplayWidth+1];
int i = kDisplayWidth;
s[i--] = 0x00; // terminate string
do { // fill in array from right to left
s[i--] = (x & 1) ? '1' : '0'; // determine bit
x >>= 1; // shift right 1 bit
} while (x > 0);
i++; // point to last valid character
sprintf(so, "%s", s+i); // stick it in the output string
return so;
}
char* pBinFill(long int x, char *so, char fillChar)
{
// fill in array from right to left
char s[kDisplayWidth+1];
int i = kDisplayWidth;
s[i--] = 0x00; // terminate string
do { // fill in array from right to left
s[i--] = (x & 1) ? '1' : '0';
x >>= 1; // shift right 1 bit
} while (x > 0);
while (i >= 0) s[i--] = fillChar; // fill with fillChar
sprintf(so, "%s", s);
return so;
}
void test()
{
char so[kDisplayWidth+1]; // working buffer for pBin
long int val = 1;
do {
printf("%ld =\t\t%#lx =\t\t0b%s\n", val, val, pBinFill(val, so, '0'));
val *= 11; // generate test data
} while (val < 100000000);
}
Output:
00000001 = 0x000001 = 0b00000000000000000000000000000001
00000011 = 0x00000b = 0b00000000000000000000000000001011
00000121 = 0x000079 = 0b00000000000000000000000001111001
00001331 = 0x000533 = 0b00000000000000000000010100110011
00014641 = 0x003931 = 0b00000000000000000011100100110001
00161051 = 0x02751b = 0b00000000000000100111010100011011
01771561 = 0x1b0829 = 0b00000000000110110000100000101001
19487171 = 0x12959c3 = 0b00000001001010010101100111000011
Some runtimes support "%b" although that is not a standard.
Also see here for an interesting discussion:
http://bytes.com/forum/thread591027.html
HTH
Maybe a bit OT, but if you need this only for debugging, to understand or retrace some binary operations you are doing, you might take a look at wcalc (a simple console calculator). With the -b option you get binary output.
e.g.
$ wcalc -b "(256 | 3) & 0xff"
= 0b11
There is no formatting function in the C standard library to output binary like that. All the format operations the printf family supports are aimed at human-readable text.
The following recursive function might be useful:
void bin(int n)
{
/* Step 1 */
if (n > 1)
bin(n/2);
/* Step 2 */
printf("%d", n % 2);
}
I optimized the top solution for size and C++-ness, and got to this solution:
inline std::string format_binary(unsigned int x)
{
static char b[33];
b[32] = '\0';
for (int z = 0; z < 32; z++) {
b[31-z] = ((x>>z) & 0x1) ? '1' : '0';
}
return b;
}
Use:
char buffer [33];
itoa(value, buffer, 2);
printf("\nbinary: %s\n", buffer);
For more ref., see How to print binary number via printf.
void
print_binary(unsigned int n)
{
unsigned int mask = 0;
/* this grotesque hack creates a bit pattern 1000... */
/* regardless of the size of an unsigned int */
mask = ~mask ^ (~mask >> 1);
for(; mask != 0; mask >>= 1) {
putchar((n & mask) ? '1' : '0');
}
}
Print bits from any type using less code and resources
This approach has as attributes:
Works with variables and literals.
Doesn't iterate all bits when not necessary.
Call printf only when complete a byte (not unnecessarily for all bits).
Works for any type.
Works with little and big endianness (uses GCC #defines for checking).
May work on hardware where char isn't a byte of eight bits (thanks @supercat).
Uses typeof(), which isn't standard C but is widely supported.
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <limits.h>
#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
#define for_endian(size) for (int i = 0; i < size; ++i)
#elif __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
#define for_endian(size) for (int i = size - 1; i >= 0; --i)
#else
#error "Endianness not detected"
#endif
#define printb(value) \
({ \
typeof(value) _v = value; \
__printb((typeof(_v) *) &_v, sizeof(_v)); \
})
#define MSB_MASK (1 << (CHAR_BIT - 1))
void __printb(void *value, size_t size)
{
unsigned char uc;
unsigned char bits[CHAR_BIT + 1];
bits[CHAR_BIT] = '\0';
for_endian(size) {
uc = ((unsigned char *) value)[i];
memset(bits, '0', CHAR_BIT);
for (int j = 0; uc && j < CHAR_BIT; ++j) {
if (uc & MSB_MASK)
bits[j] = '1';
uc <<= 1;
}
printf("%s ", bits);
}
printf("\n");
}
int main(void)
{
uint8_t c1 = 0xff, c2 = 0x44;
uint8_t c3 = c1 + c2;
printb(c1);
printb((char) 0xff);
printb((short) 0xff);
printb(0xff);
printb(c2);
printb(0x44);
printb(0x4411ff01);
printb((uint16_t) c3);
printb('A');
printf("\n");
return 0;
}
Output
$ ./printb
11111111
11111111
00000000 11111111
00000000 00000000 00000000 11111111
01000100
00000000 00000000 00000000 01000100
01000100 00010001 11111111 00000001
00000000 01000011
00000000 00000000 00000000 01000001
I have used another approach (bitprint.h) to fill a table with all bytes (as bit strings) and print them based on the input/index byte. It's worth taking a look.
Maybe someone will find this solution useful:
void print_binary(int number, int num_digits) {
int digit;
for(digit = num_digits - 1; digit >= 0; digit--) {
printf("%c", number & (1 << digit) ? '1' : '0');
}
}
void print_ulong_bin(const unsigned long * const var, int bits) {
int i;
#if defined(__LP64__) || defined(_LP64)
if( (bits > 64) || (bits <= 0) )
#else
if( (bits > 32) || (bits <= 0) )
#endif
return;
for(i = 0; i < bits; i++) {
printf("%lu", (*var >> (bits - 1 - i)) & 0x01);
}
}
should work - untested.
I liked the code by paniq, the static buffer is a good idea. However it fails if you want multiple binary formats in a single printf() because it always returns the same pointer and overwrites the array.
Here's a C style drop-in that rotates pointer on a split buffer.
char *
format_binary(unsigned int x)
{
#define MAXLEN 8 // width of output format
#define MAXCNT 4 // count per printf statement
static char fmtbuf[(MAXLEN+1)*MAXCNT];
static int count = 0;
char *b;
count = (count + 1) % MAXCNT;
b = &fmtbuf[(MAXLEN+1)*count];
b[MAXLEN] = '\0';
for (int z = 0; z < MAXLEN; z++) { b[MAXLEN-1-z] = ((x>>z) & 0x1) ? '1' : '0'; }
return b;
}
Here is a small variation of paniq's solution that uses templates to allow printing of 32 and 64 bit integers:
template<class T>
inline std::string format_binary(T x)
{
char b[sizeof(T)*8+1] = {0};
for (size_t z = 0; z < sizeof(T)*8; z++)
b[sizeof(T)*8-1-z] = ((x>>z) & 0x1) ? '1' : '0';
return std::string(b);
}
And can be used like:
unsigned int value32 = 0x1e127ad;
printf( " 0x%x: %s\n", value32, format_binary(value32).c_str() );
unsigned long long value64 = 0x2e0b04ce0;
printf( "0x%llx: %s\n", value64, format_binary(value64).c_str() );
Here is the result:
0x1e127ad: 00000001111000010010011110101101
0x2e0b04ce0: 0000000000000000000000000000001011100000101100000100110011100000
No standard and portable way.
Some implementations provide itoa(), but it's not going to be in most, and it has a somewhat crummy interface. But the code is behind the link and should let you implement your own formatter pretty easily.
I just want to post my solution. It's used to get the zeroes and ones of one byte, but calling this function a few times works for larger data blocks; I use it for 128-bit or larger structs. You can also modify it to take a size_t and a pointer to the data you want to print, making it size-independent. But it works for me quite well as it is.
void print_binary(unsigned char c)
{
unsigned char i1 = (1 << (sizeof(c)*8-1));
for(; i1; i1 >>= 1)
printf("%d",(c&i1)!=0);
}
void get_binary(unsigned char c, unsigned char bin[])
{
unsigned char i1 = (1 << (sizeof(c)*8-1)), i2=0;
for(; i1; i1>>=1, i2++)
bin[i2] = ((c&i1)!=0);
}
Here's how I did it for an unsigned int
void printb(unsigned int v) {
unsigned int i, s = 1<<((sizeof(v)<<3)-1); // s = only most significant bit at 1
for (i = s; i; i >>= 1) printf("%d", (v & i) != 0);
}
One-statement generic conversion of any integral type into its binary string representation, using the standard library:
#include <bitset>
MyIntegralType num = 10;
printf("%s\n",
std::bitset<sizeof(num) * 8>(num).to_string().insert(0, "0b").c_str()
); // prints "0b1010\n"
Or just: std::cout << std::bitset<sizeof(num) * 8>(num);

Resources