So I have a signed decimal number that can be represented in 16 bits, stored in a char*. I want a char* of that number in two's complement binary. So I want to go from "-42" to "1111111111010110" (note that all 16 bits are shown) in C. Is there a quick and dirty way to do this? Some library function, perhaps? Or do I have to crank out a large-ish function myself to do this?
I'm aware that strtol() may be of some use.
There isn't a standard library function that can generate binary strings as you describe.
However, it is not particularly difficult to do yourself.
#include <stdio.h>
#include <stdint.h>
#include <ctype.h>

int main(int argc, char **argv)
{
    while (--argc >= 0 && ++argv && *argv) {
        char const *input = *argv;
        uint32_t value = 0;
        char negative = 0;
        char error = 0;

        for (; *input; input++) {
            if (isdigit((unsigned char)*input))
                value = value * 10 + (*input - '0');
            else if (*input == '-' && value == 0 && !negative)
                negative = 1;
            else {
                printf("Error: unexpected character: %c at %d\n",
                       *input, (int)(input - *argv));
                error = 1; // this function doesn't handle floats or hex
                break;
            }
        }
        if (error)
            continue; // skip this argument entirely

        if (value > 0x7fffu + negative) {
            printf("Error: value too large for 16bit integer: %u %x\n",
                   value, value);
            continue; // can't be represented in 16 bits
        }

        int16_t result = value;
        if (negative)
            result = -value;

        for (int i = 1; i <= 16; i++)
            printf("%d", 0 != (result & 1 << (16 - i)));
        printf("\n");
    }
}
That program handles all valid 16-bit values and leverages the fact that the architecture stores integers as two's complement values. I'm not aware of an architecture that doesn't, so it's a fairly reasonable assumption.
Note that in two's complement, INT_MIN != -1 * INT_MAX; for 16 bits, the valid range is -32768 to 32767.
This is handled by adding the negative flag to the validity check before the conversion from unsigned 32-bit to signed 16-bit.
./foo 1 -1 2 -2 42 -42 32767 -32767 32768 -32768
0000000000000001
1111111111111111
0000000000000010
1111111111111110
0000000000101010
1111111111010110
0111111111111111
1000000000000001
Error: value too large for 16bit integer: 32768 8000
1000000000000000
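Since you mention strtol(): if you can assume the input is a valid decimal that fits in 16 bits, a quick-and-dirty sketch is much shorter. The print_bits16() helper below is a hypothetical name, and error handling is omitted:

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

// Minimal sketch, assuming the input is a valid in-range decimal string.
// Converting the parsed value to uint16_t wraps it to its two's
// complement bit pattern, which we then print from the top bit down.
void print_bits16(const char *s)
{
    uint16_t v = (uint16_t)strtol(s, NULL, 10);
    for (int i = 15; i >= 0; i--)
        putchar('0' + ((v >> i) & 1));
    putchar('\n');
}

int main(void)
{
    print_bits16("-42"); // prints 1111111111010110
}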
I tried left-shifting a 32-bit integer by 24:
char *int_to_bin(int num) {
    int i = 0;
    static char bin[64];

    while (num != 0) {
        bin[i] = num % 2 + 48;
        num /= 2;
        i++;
    }
    bin[i] = '\0';
    return (bin);
}

int main() {
    int number = 255;

    printf("number: %s\n", int_to_bin(number));
    printf("shifted number: %s\n", int_to_bin(number << 24));
    return 0;
}
OUTPUT:
number: 11111111
shifted number: 000000000000000000000000/
And when I left-shift by 23 bits instead, it yields this result:
0000000000000000000000011111111
Why is it like that, and what's the matter with the '/' at the end of the wrong result?
Two things:
If number has the value 255 then number << 24 has the numerical value 4278190080, which overflows a 32-bit signed integer whose largest possible value is 2147483647. Signed integer overflow is undefined behavior in C, so the result could be anything at all.
What probably happens in this case is that the result of the shift is negative. When num is negative then num % 2 may take the value -1, so you store character 47 in the string, which is /.
Bit shifting math is usually better done with unsigned types, where overflow is well-defined (it wraps around, and bits just shift off the left and vanish) and num % 2 can only be 0 or 1. (Or write num & 1 instead.) There's a sketch of this after the next point.
Your int_to_bin routine puts the least-significant bits at the beginning of the string (on the left), so the result is backwards from the way people usually write numbers (with the least-significant bits on the right). You may want to rewrite it.
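A minimal sketch of the unsigned approach from the first point (note the digits still come out reversed, per the second point):

#include <stdio.h>

int main(void)
{
    unsigned num = 255u << 24;  // well-defined wrap-around: 0xff000000
    while (num != 0) {
        putchar('0' + num % 2); // on an unsigned value, % 2 is only 0 or 1
        num /= 2;
    }
    putchar('\n'); // digits come out least-significant first
}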
The shift works fine; you are simply printing it from the wrong direction.
#include <stdio.h>
#include <limits.h>

char *int_to_bin(char *buff, int num)
{
    unsigned mask = 1U << (CHAR_BIT * sizeof(num) - 1);
    char *wrk = buff;

    for (; mask; mask >>= 1)
    {
        *wrk++ = '0' + !!((unsigned)num & mask);
    }
    *wrk = 0;
    return buff;
}

int main(void)
{
    char buff[CHAR_BIT * sizeof(int) + 1];
    int number = 255;

    printf("number: %s\n", int_to_bin(buff, number));
    printf("shifted number: %s\n", int_to_bin(buff, number << 24));
    return 0;
}
Shifting signed integers left is OK as long as the result is representable, but the right shift of a negative value is implementation-defined. Many systems use an arithmetic shift right, and the result is not the same as with a logical (bitwise) one:
https://godbolt.org/z/e7f3shxd4
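As a small illustration of the difference (assuming the common case of a 32-bit int with two's complement and arithmetic right shift):

#include <stdio.h>

int main(void)
{
    int si = -8;
    unsigned ui = 0xfffffff8u;

    // implementation-defined: most compilers copy the sign bit in,
    // so this prints -4
    printf("%d\n", si >> 1);
    // fully defined: a zero bit is shifted in from the left,
    // so this prints 7ffffffc
    printf("%x\n", ui >> 1);
}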
You are storing the digits backwards.
You are using a signed 32-bit int, and shifting 255 left by 24 bits needs more than the 31 value bits an int has; you should use a wider type such as unsigned long long int.
A signed integer can lead to wrong answers: 1 << 31 produces a negative value (formally undefined behavior), which results in bad characters in the string.
Finally, using unsigned long long int and storing the digits in the correct order will produce the correct string.
You should try to rewrite the code on your own before looking at this improved version of your code:
#include <stdio.h>
#include <stdlib.h>

char *int_to_bin(unsigned long long int num) {
    int i = 0;
    static char bin[65];

    while (i != 64) {
        bin[63 - i] = num % 2 + '0';
        num /= 2;
        i++;
    }
    bin[64] = '\0';
    return (bin);
}

int main() {
    unsigned long long int number = 255;

    printf("number 1: %s\n", int_to_bin(number));
    printf("number 2: %s\n", int_to_bin(number << 24));
    return 0;
}
How can I calculate a power of 2 in C, without the pow function?
For example, after keyboard input 4, the result should be 16.
I know that, for example, 2^5 can be written as 2^1 * 2^4 (I don't know if this idea can help).
To calculate 2^N in C, use 1 << N.
If this may exceed the value representable in an int, use (Type) 1 << N, where Type is the integer type you want to use, such as unsigned long or uint64_t.
<< is the left-shift operator. It moves bits "left" in the bits that represent a number. Since numbers are represented in binary, moving bits left increases the powers of 2 they represent. Thus, binary 1 represents 1, binary 10 represents 2, binary 100 represents 4, and so on, so 1 shifted left N positions represents 2^N.
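For example, a minimal sketch (<inttypes.h> is used only so the 64-bit value prints portably):

#include <stdio.h>
#include <inttypes.h>

int main(void)
{
    int n = 4;
    printf("2^%d = %d\n", n, 1 << n);                  // 2^4 = 16
    printf("2^40 = %" PRIu64 "\n", (uint64_t)1 << 40); // too big for a 32-bit int
}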
Numbers are represented in binary form. For example, if integers are stored using 32 bits, 1 is stored like this:
00000000 00000000 00000000 00000001
And the value is the result of 1 * 2^0.
If you do a left-shift operation, your value will be stored as this:
00000000 00000000 00000000 00000010
That means that now the result is 1 * 2^1.
The number of bits used to store a type is sizeof(type) * 8, because a byte is 8 bits (CHAR_BIT, strictly speaking).
So the best method is to use a shift:
The left-shift of 1 by exp is equivalent to 2 raised to exp.
Shift operators must not be used with negative exponents, unlike pow: the result is undefined behavior.
Another case of undefined behavior is shifting by a count equal to or greater than N when the type is stored in N bits.
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

int main() {
    int exp;

    printf("Please, insert exponent:\n");
    if (scanf("%d", &exp) != 1) {
        printf("ERROR: scanf\n");
        exit(EXIT_FAILURE);
    }
    if (exp < 0) {
        printf("ERROR: exponent must be >= 0\n");
        exit(EXIT_FAILURE);
    }
    // guard against the shift-count undefined behavior described above
    if (exp >= (int)(sizeof(int) * CHAR_BIT) - 1) {
        printf("ERROR: exponent too large for int\n");
        exit(EXIT_FAILURE);
    }

    printf("2^(%d) = %d\n", exp, 1 << exp);
    exit(EXIT_SUCCESS);
}
You can also do it by writing a recursive function int -> int:

int my_pow(int exp) {
    if (exp < 0) {
        return -1; // error marker: negative exponents are not supported
    }
    if (exp == 0) {
        return 1;
    }
    return 2 * my_pow(exp - 1);
}
Using it as main:

int main() {
    int exp;

    scanf("%d", &exp);
    int res = my_pow(exp);
    if (res == -1) {
        printf("ERROR: Exponent must be equal to or bigger than 0\n");
        exit(EXIT_FAILURE);
    }
    printf("2^(%d) = %d\n", exp, res);
    return 0;
}
Perhaps this task is a bit more complicated than what I've written below, but the code that follows is my take on decimal to BCD. The task is to take in a decimal number, convert it to BCD and then to ASCII so that it can be displayed on a microcontroller. As far as I'm aware the code works sufficiently for the basic operation of converting to BCD; however, I'm stuck when it comes to converting this into ASCII. The overall output must be ASCII so that an incremented value can be displayed on an LCD.
My code so far:
int dec2bin(int a){ // Decimal to binary function
    int bin;
    int i = 1;

    while (a != 0){
        bin += (a % 2) * i;
        i *= 10;
        a /= 2;
    }
    return bin;
}

unsigned int ConverttoBCD(int val){
    unsigned int unit = 0;
    unsigned int ten = 0;
    unsigned int hundred = 0;

    hundred = (val / 100);
    ten = ((val - hundred * 100) / 10);
    unit = (val - (hundred * 100 + ten * 10));

    uint8_t ret1 = dec2bin(unit);
    uint8_t ret2 = dec2bin((ten) << 4);
    uint8_t ret3 = dec2bin((hundred) << 8);
    return (ret3 + ret2 + ret1);
}
The idea of converting to BCD for an ASCII representation of a number is actually the correct one. Given BCD, you only need to add '0' to each digit to get the corresponding ASCII value.
But your code has several problems. The most important one is that you try to stuff a value shifted left by 8 bits into an 8-bit type. This can never work; those 8 bits will be zero, think about it! Beyond that, I absolutely do not understand what your dec2bin() function is supposed to do.
So I'll present you one possible correct solution to your problem. The key idea is to use a char for each individual BCD digit. Of course, a BCD digit only needs 4 bits and a char has at least 8 of them -- but you need char anyway for your ASCII representation, and when your BCD digits are already in individual chars, all you have to do is indeed add '0' to each.
While at it: converting to BCD by dividing and multiplying is a waste of resources. There's a nice algorithm called double dabble for converting to BCD using only bit shifts and additions. I'm using it in the following example code:
#include <stdio.h>
#include <string.h>

// for determining the number of value bits in an integer type,
// see https://stackoverflow.com/a/4589384/2371524 for this nice trick:
#define IMAX_BITS(m) ((m) /((m)%0x3fffffffL+1) /0x3fffffffL %0x3fffffffL *30 \
                      + (m)%0x3fffffffL /((m)%31+1)/31%31*5 + 4-12/((m)%31+3))

// number of bits in unsigned int:
#define UNSIGNEDINT_BITS IMAX_BITS((unsigned)-1)

// convert to ASCII using BCD, return the number of digits:
int toAscii(char *buf, int bufsize, unsigned val)
{
    // sanity check, a buffer smaller than one digit is pointless
    if (bufsize < 1) return -1;

    // initialize output buffer to zero
    // if you don't have memset, use a loop here
    memset(buf, 0, bufsize);

    int scanstart = bufsize - 1;
    int i;

    // mask for single bits in value, start at most significant bit
    unsigned mask = 1U << (UNSIGNEDINT_BITS - 1);

    while (mask)
    {
        // extract single bit
        int bit = !!(val & mask);

        for (i = scanstart; i < bufsize; ++i)
        {
            // this is the "double dabble" trick -- in each iteration,
            // add 3 to each element that is greater than 4. This will
            // generate the correct overflowing bits while shifting for
            // BCD
            if (buf[i] > 4) buf[i] += 3;
        }

        // if we have filled the output buffer from the right far enough,
        // we have to scan one position earlier in the next iteration
        if (buf[scanstart] > 7) --scanstart;

        // check for overflow of our buffer:
        if (scanstart < 0) return -1;

        // now just shift the bits in the BCD digits:
        for (i = scanstart; i < bufsize - 1; ++i)
        {
            buf[i] <<= 1;
            buf[i] &= 0xf;
            buf[i] |= (buf[i+1] > 7);
        }

        // shift in the new bit from our value:
        buf[bufsize-1] <<= 1;
        buf[bufsize-1] &= 0xf;
        buf[bufsize-1] |= bit;

        // next bit:
        mask >>= 1;
    }

    // find first non-zero digit:
    for (i = 0; i < bufsize - 1; ++i) if (buf[i]) break;
    int digits = bufsize - i;

    // eliminate leading zero digits
    // (again, use a loop if you don't have memmove)
    // (or, if you're converting to a fixed number of digits and *want*
    // the leading zeros, just skip this step entirely, including the
    // loop above)
    memmove(buf, buf + i, digits);

    // convert to ascii:
    for (i = 0; i < digits; ++i) buf[i] += '0';

    return digits;
}

int main(void)
{
    // some simple test code:
    char buf[10];

    int digits = toAscii(buf, 10, 471142);
    for (int i = 0; i < digits; ++i)
    {
        putchar(buf[i]);
    }
    puts("");
}
You won't need this IMAX_BITS() "magic macro" if you actually know your target platform and how many bits there are in the integer type you want to convert.
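For example, if you know unsigned int is 32 bits wide on your target, the mask initialization in toAscii() reduces to a constant; a sketch of just that simplification:

unsigned mask = 1u << 31; /* instead of 1U << (UNSIGNEDINT_BITS - 1) */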
I know how to convert a float into its binary representation using % 2 and / 2, but is there a shortcut or cleaner way of doing this? Is what I am doing even considered representing a float bitwise? I am supposed to be using bitwise comparison between two float numbers, but I'm not sure if that means using bitwise operations.
For example, to obtain the binary representation of a number, I'd store the remainder of a number like 10 % 2 into an array until the number reached 0 within a while loop; if the array were printed backwards, it would represent the number in binary.
array[] = num % 2;
num = num / 2;
What I did was use the method above for two float numbers, loaded them up into their own individual arrays, and compared them both through their arrays.
I have them set up in IEEE floating point format within their arrays as well.
EDIT: I have to compare two numbers of type float by using bitwise comparison and operations to see if one number is greater than, less than, or equal to the other, with the floats represented in biased exponent notation. Specifically, the test is whether a floating point number number1 is less than, equal to, or greater than another floating point number number2, by simply comparing their floating point representations bitwise from left to right, stopping as soon as the first differing bit is encountered.
No, it won't. Dividing a float by 2 will result in half of the number like this:
#include <stdio.h>

int main(void)
{
    float x = 5.0f;
    float y = x / 2;

    printf("%f\n", y);
}
Result:
2.500000
See? It has nothing to do with bits.
The binary representation of a floating point number consists of a mantissa, an exponent, and a sign bit, which means that, unlike for ordinary integers, the tricks you've mentioned won't apply here. You can learn more about this from the Wikipedia article on IEEE floating point.
To make sure two floats have exactly the same bit configurations, you could compare their content using memcmp which compares things byte-by-byte, with no additional casts/arithmetic/whatever:
#include <stdio.h>
#include <string.h>

int main(void)
{
    float x = 5.0f;
    float y = 4.99999999999999f; // gets rounded up to 5.0f
    float z = 4.9f;

    printf("%d\n", memcmp(&x, &y, sizeof(float)) == 0);
    printf("%d\n", memcmp(&x, &z, sizeof(float)) == 0);
}
...will print 1 and 0 respectively. You can also inspect the individual bits this way, e.g. by operating on *(char *)&x.
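For instance, here is a minimal sketch that copies the float's bytes into a uint32_t (memcpy sidesteps aliasing concerns) and extracts the IEEE 754 fields, assuming 32-bit floats:

#include <stdio.h>
#include <inttypes.h>
#include <string.h>

int main(void)
{
    float x = 5.0f;
    uint32_t bits;

    memcpy(&bits, &x, sizeof bits); // view the float's bit pattern

    printf("sign     %" PRIu32 "\n", bits >> 31);          // 0
    printf("exponent %" PRIu32 "\n", (bits >> 23) & 0xff); // 129 (biased)
    printf("fraction 0x%06" PRIx32 "\n", bits & 0x7fffff); // 0x200000
}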
This compares two IEEE 32-bit floats bit by bit, returning -1, 0, or 1, and also indicating the bit at which they differ. IEEE floats can be compared as sign-and-magnitude numbers: the function float_comp below compares them bit by bit as uint32_t values and negates the comparison if they differ in the sign bit (bit 31).
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

static int float_comp(float f1, float f2, int *bit)
{
    const uint32_t *a, *b;
    int comp = 0;

    a = (const uint32_t *)(const void *)&f1;
    b = (const uint32_t *)(const void *)&f2;

    for (*bit = 31; *bit >= 0; (*bit)--) {
        if ((*a & (UINT32_C(1) << *bit))
            && !(*b & (UINT32_C(1) << *bit))) {
            comp = 1;
            break;
        }
        if (!(*a & (UINT32_C(1) << *bit))
            && (*b & (UINT32_C(1) << *bit))) {
            comp = -1;
            break;
        }
    }
    if (*bit == 31)
        comp = -comp; /* sign and magnitude conversion */
    return comp;
}

int main(int argc, char **argv)
{
    float f1, f2;
    int comp, bit;

    if (argc != 3) {
        fprintf(stderr, "usage: %s: float1 float2\n", argv[0]);
        return 2;
    }
    f1 = strtof(argv[1], NULL);
    f2 = strtof(argv[2], NULL);

    comp = float_comp(f1, f2, &bit);
    if (comp == 0)
        printf("%.8g = %.8g\n", f1, f2);
    else if (comp < 0)
        printf("%.8g < %.8g (differ at bit %d)\n", f1, f2, bit);
    else
        printf("%.8g > %.8g (differ at bit %d)\n", f1, f2, bit);
    return 0;
}
Doing what you said will not give you the bits of the floating point representation. Instead, use a union to convert between the float and integer representations, and print the bits as usual:
#include <stdio.h>
#include <stdint.h>

typedef union {
    uint32_t i;
    float f;
} float_conv_t;

void
int_to_bin_print(uint32_t number)
{
    char binaryNumber[33];
    int i;

    for (i = 31; i >= 0; --i)
    {
        binaryNumber[i] = (number & 1) ? '1' : '0';
        number >>= 1;
    }
    binaryNumber[32] = '\0';
    fprintf(stdout, "Number %s\n", binaryNumber);
}

int main(void) {
    float_conv_t f;

    f.f = 10.34;
    int_to_bin_print(f.i);
    f.f = -10.34;
    int_to_bin_print(f.i);
    f.f = 0.1;
    int_to_bin_print(f.i);
    f.f = 0.2;
    int_to_bin_print(f.i);
    return 0;
}
Output:
Number 01000001001001010111000010100100
Number 11000001001001010111000010100100
Number 00111101110011001100110011001101
Number 00111110010011001100110011001101
My goal is to compare two floating point numbers by comparing their floating point representations bitwise.
Then you can compare raw memory using memcmp:
float f1 = 0.1;
float f2 = 0.2;
if (memcmp(&f1, &f2, sizeof(float)) == 0)
// equal
SYNOPSIS
#include <string.h>
int memcmp(const void *s1, const void *s2, size_t n);
DESCRIPTION
The memcmp() function compares the first n bytes (each interpreted as unsigned char) of the memory areas s1 and s2.
RETURN VALUE
The memcmp() function returns an integer less than, equal to, or greater than zero if the first n bytes of s1 is found, respectively, to be less than, to match, or be greater than the first n bytes of s2.
Can someone look over my program and tell me if I am doing it correctly?
I am accepting user input in the form of 8 hexadecimal digits. I want to interpret those 8 digits as an IEEE 754 32-bit floating point number and print out information about that number.
Here is my output:
IEEE 754 32-bit floating point
byte order: little-endian
>7fffffff
0x7FFFFFFF
signBit 0, expbits 255, fractbits 0x007FFFFF
normalized: exp = 128
SNaN
>40000000
0x40000000
signBit 0, expbits 128, fractbits 0x00000000
normalized: exp = 1
>0
0x00000000
signBit 0, expbits 0, fractbits 0x00000000
+zero
Here is the code:
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int HexNumber;
    int tru_exp = 0;
    int stored_exp;
    int negative;
    int exponent;
    int mantissa;

    printf("IEEE 754 32-bit floating point");

    int a = 0x12345678;
    unsigned char *c = (unsigned char *)(&a);
    if (*c == 0x78)
    {
        printf("\nbyte order: little-endian\n");
    }
    else
    {
        printf("\nbyte order: big-endian\n");
    }

    do {
        printf("\n>");
        scanf("%x", &HexNumber);
        printf("\n0x%08X", HexNumber);

        negative = !!(HexNumber & 0x80000000);
        exponent = (HexNumber & 0x7f800000) >> 23;
        mantissa = (HexNumber & 0x007FFFFF);

        printf("\nsignBit %d, ", negative);
        printf("expbits %d, ", exponent);
        printf("fractbits 0x%08X", mantissa);
        // "%#010x, ", mantissa);

        if (exponent == 0)
        {
            if (mantissa != 0)
            {
                printf("\ndenormalized ");
            }
        }
        else {
            printf("\nnormalized: ");
            tru_exp = exponent - 127;
            printf("exp = %d", tru_exp);
        }

        if (exponent == 0 && mantissa == 0 && negative == 1)
        {
            printf("\n-zero");
        }
        if (exponent == 0 && mantissa == 0 && negative == 0)
        {
            printf("\n+zero");
        }
        if (exponent == 255 && mantissa != 0 && negative == 1)
        {
            printf("\nQNaN");
        }
        if (exponent == 255 && mantissa != 0 && negative == 0)
        {
            printf("\nSNaN");
        }
        if (exponent == 0xff && mantissa == 0 && negative == 1)
        {
            printf("\n-infinity");
        }
        if (exponent == 0xff && mantissa == 0 && negative == 0)
        {
            printf("\n+infinity");
        }
        printf("\n");
    } while (HexNumber != 0);

    return 0;
}
I don't think the denormalized case is right?
Generally, you're pretty close. Some comments:
0x7fffffff is a quiet NaN, not a signaling NaN. The sign bit does not determine whether or not a NaN is quiet; rather, it is the leading bit of the significand (the preferred term for what you call "mantissa") field that does. 0xffbfffff is a signaling NaN, for example.
Edit: interjay correctly points out that this encoding isn't actually required by IEEE-754; a platform is free to use a different encoding for differentiating quiet and signaling NaNs. However, it is recommended by the standard:
A quiet NaN bit string should be encoded with the first bit of the trailing significand field T being 1. A signaling NaN bit string should be encoded with the first bit of the trailing significand field being 0.
Infinities and NaNs usually aren't called "normal numbers" in the IEEE-754 terminology.
Your condition for calling a number "denormal" is correct.
For normal numbers, it would be nice to add the implicit leading bit when you report the significand. I personally would probably print them out in the C99 hex notation: 0x40000000 has a significand (once you add the implicit bit) of 0x800000 and an exponent of 1, so becomes 0x1.000000p1.
I'm sure some aging PDP-11 hacker will give you a hard time about "big endian" and "little endian" not being the only two possibilities.
Edit: OK, an example of checking for a qNaN on platforms that use IEEE-754's recommended encoding:
if (exponent == 0xff && (mantissa & 0x00400000)) printf("\nqNaN");
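And the corresponding check for a signaling NaN under that recommended encoding (a sketch, with the same caveat that platforms may encode NaNs differently):
if (exponent == 0xff && mantissa != 0 && !(mantissa & 0x00400000))
    printf("\nsNaN");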