Strange problem with a function that gets the machine word in C

I wrote a function to get the machine word in C yesterday, but it seems that there is something wrong with it.
Here is the code:
unsigned machineword()
{
    int i = 1;
    unsigned temp;
    while (temp > 0)
    {
        i++;
        temp = (unsigned)(~0 >> i);
    }
    return i;
}

The simplest way to get the width of unsigned int is (sizeof(unsigned)*CHAR_BIT).
EDIT: as noted by pmg, you should be aware of the theoretical difference between the size an unsigned takes in memory and the number of bits available for computing with. Your original code tries to compute the latter, and so does the program below. The above trick computes the space occupied in memory.
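For instance, the one-line memory-width computation looks like this (CHAR_BIT comes from <limits.h>; on common platforms it prints 32):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* bits each unsigned occupies in memory */
    printf("%zu\n", sizeof(unsigned) * CHAR_BIT);
    return 0;
}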
It is not very convenient to compute this number with >>, because shifting by a count equal to or larger than the width in bits of the type you are shifting is undefined in C. You can work around this, if you know that long long is strictly wider than int, by computing with unsigned long long:
unsigned machineword()
{
    int i = 1;
    unsigned temp = 1;
    while (temp > 0)
    {
        i++;
        temp = (unsigned)(((unsigned long long)~(0U)) >> i);
    }
    return i;
}

The simplest way to avoid the UB when shifting by too large a count, while keeping your structure, is:
unsigned machineword()
{
    unsigned i = 0;
    unsigned temp = ~0U;
    while (temp > 0)
    {
        i++;
        temp >>= 1;
    }
    return i;
}

To calculate the number of bits, you can use CHAR_BIT or UINT_MAX.
The CHAR_BIT approach gives you the number of bits each value occupies in memory.
The UINT_MAX approach gives you the effective available bits.
Usually both values will be the same:
#include <limits.h>
#include <stdio.h>

int main(void) {
    unsigned tmp = UINT_MAX;
    int i = 0;
    while (tmp) {
        i++;
        tmp /= 2;
    }
    printf("value bits in an unsigned: %d\n", i);
    printf("memory bits in an unsigned: %d\n", CHAR_BIT * (int)sizeof(unsigned));
    return 0;
}

Related

Long Long Decimal Binary Representation using C

I've been trying to print out the binary representation of a long long integer in C. My code is:
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

int main()
{
    long long number, binaryRepresentation = 0, baseOfOne = 1, remainder;
    scanf("%lld", &number);
    while (number > 0) {
        remainder = number % 2;
        binaryRepresentation = binaryRepresentation + remainder * baseOfOne;
        baseOfOne *= 10;
        number = number / 2;
    }
    printf("%lld\n", binaryRepresentation);
}
The above code works fine when I provide an input of 5 and fails when the number is 9223372036854775807 (0x7FFFFFFFFFFFFFFF).
Test case 1:
5
101
Test case 2:
9223372036854775807
-1024819115206086201
Using a denary number to represent binary digits never ends particularly well: you'll be vulnerable to overflow for a surprisingly small input, and all subsequent arithmetic operations will be meaningless.
Another approach is to print the numbers out as you go, but using a recursive technique so you print the numbers in the reverse order to which they are processed:
#include <stdio.h>

unsigned long long output(unsigned long long n)
{
    unsigned long long m = n ? output(n / 2) : 0;
    printf("%d", (int)(n % 2));
    return m;
}

int main()
{
    unsigned long long number = 9223372036854775807;
    output(number);
    printf("\n");
}
Output:
0111111111111111111111111111111111111111111111111111111111111111
I've also changed the type to unsigned long long which has a better defined bit pattern, and % does strange things for negative numbers anyway.
Really though, all I'm doing here is abusing the stack as a way of storing what is really an array of zeros and ones.
As Bathsheba's answer states, you need more space than is available if you use a decimal number to represent a bit sequence like that.
Since you intend to print the result, it's best to do that one bit at a time. We can do this by creating a mask with only the highest bit set. The magic to create this for any type is to complement a zero of that type to get an "all ones" number; we then subtract half of that (i.e. 1111.... - 0111....) to get only a single bit. We can then shift it rightwards along the number to determine the state of each bit in turn.
Here's a re-worked version using that logic, with the following other changes:
I use a separate function, returning (like printf) the number of characters printed.
I accept an unsigned value, as we were ignoring negative values anyway.
I process arguments from the command line - I tend to find that more convenient than having to type stuff on stdin.
#include <stdio.h>
#include <stdlib.h>

int print_binary(unsigned long long n)
{
    int printed = 0;
    /* ~ZERO - ~ZERO/2 is the value 1000... of ZERO's type */
    for (unsigned long long mask = ~0ull - ~0ull/2; mask; mask /= 2) {
        if (putc(n & mask ? '1' : '0', stdout) < 0)
            return EOF;
        else
            ++printed;
    }
    return printed;
}

int main(int argc, char **argv)
{
    for (int i = 1; i < argc; ++i) {
        print_binary(strtoull(argv[i], 0, 10));
        puts("");
    }
}
Exercises for the reader:
Avoid printing leading zeros (hint: either keep a boolean flag that indicates you've seen the first 1, or have a separate loop to shift the mask before printing). Don't forget to check that print_binary(0) still produces output! A sketch of this one follows below.
Check for errors when using strtoull to convert the input values from decimal strings.
Adapt the function to write to a character array instead of stdout.
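For the first exercise, a minimal sketch using the boolean-flag hint (print_binary_trimmed is just an illustrative name; the mask > 1 test keeps print_binary_trimmed(0) printing a single 0):

#include <stdio.h>

int print_binary_trimmed(unsigned long long n)
{
    int printed = 0;
    int seen_one = 0;
    for (unsigned long long mask = ~0ull - ~0ull/2; mask; mask /= 2) {
        if (!(n & mask) && !seen_one && mask > 1)
            continue;                 /* skip leading zeros, keep the last digit */
        if (n & mask)
            seen_one = 1;
        if (putc(n & mask ? '1' : '0', stdout) < 0)
            return EOF;
        ++printed;
    }
    return printed;
}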
Just to spell out some of the comments, the simplest thing to do is use a char array to hold the binary digits. Also, when dealing with bits, the bit-wise operators are a little more clear. Otherwise, I've kept your basic code structure.
#include <stdio.h>

int main()
{
    char bits[64];
    int i = 0;
    unsigned long long number; // note the "unsigned" type here, which makes more sense
    scanf("%llu", &number);    // %llu is the matching conversion for unsigned long long
    while (number > 0) {
        bits[i++] = number & 1; // get the current bit
        number >>= 1;           // shift number right by 1 bit (divide by 2)
    }
    if (i == 0)                 // the original number was 0!
        printf("0");
    for ( ; i > 0; i--)
        printf("%d", bits[i - 1]); // or... putchar('0' + bits[i - 1])
    printf("\n");
}
I am not sure what you really want to achieve, but here is some code that prints the binary representation of a number (change the typedef to the integral type you want):
typedef int shift_t;

#define NBITS (sizeof(shift_t)*8)

void printnum(shift_t num, int nbits)
{
    int k = (num & (1LL << nbits)) ? 1 : 0;
    printf("%d", k);
    if (nbits) printnum(num, nbits - 1);
}

void test(void)
{
    shift_t l;
    l = -1;
    printnum(l, NBITS - 1);
    printf("\n");
    l = (1 << (NBITS - 2));
    printnum(l, NBITS - 1);
    printf("\n");
    l = 5;
    printnum(l, NBITS - 1);
    printf("\n");
}
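Note that the 8 in NBITS assumes 8-bit bytes; CHAR_BIT from <limits.h> is the portable spelling:

#include <limits.h>

#define NBITS (sizeof(shift_t) * CHAR_BIT)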
If you don't mind printing the digits separately, you could use the following approach:
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

void bindigit(long long num);

int main()
{
    long long number;
    scanf("%lld", &number);
    bindigit(number);
    printf("\n");
}

void bindigit(long long num) {
    int remainder;
    if (num < 2LL) {
        printf("%d", (int)num);
    } else {
        remainder = num % 2;
        bindigit(num / 2);
        printf("%d", remainder);
    }
}
Finally, I tried code myself, based on ideas from your answers, and it worked:
#include <stdio.h>
#include <stdlib.h>

int main() {
    unsigned long long number;
    int binaryRepresentation[70], remainder, counter, count = 0;
    scanf("%llu", &number);
    while (number > 0) {
        remainder = number % 2;
        binaryRepresentation[count++] = remainder;
        number = number / 2;
    }
    for (counter = count - 1; counter >= 0; counter--) {
        printf("%d", binaryRepresentation[counter]);
    }
}
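One remaining edge case: for an input of 0 the while loop never runs and nothing is printed. A guard like the one in the earlier answer fixes it:

if (count == 0)
    printf("0");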

Issue converting a number beyond int range to binary

I have a problem converting a number beyond the int range to binary, as below:
void intToBin(int digit) {
    int b;
    int k = 0;
    char *bits;
    int i;
    bits = (char *) malloc(sizeof(char));
    while (digit) {
        b = digit % 2;
        digit = digit / 2;
        bits[k] = b;
        k++;
    }
    for (i = k - 1; i >= 0; i--) {
        printf("%d", bits[i]);
    }
}
But as you can see, that function's input argument is an int. I came across an error when I tried intToBin(10329216702565230), because 10329216702565230 is outside the int range. How can I extend the function to handle numbers beyond the int range?
Update: I've updated the code as below:
void intToBin(uint64_t digit) {
    int b;
    int k = 0;
    char *bits;
    int i;
    bits = malloc(sizeof digit * 64);
    while (digit) {
        b = digit % 2;
        digit = digit / 2;
        bits[k] = b;
        k++;
    }
    for (i = k - 1; i >= 0; i--) {
        printf("%d", bits[i]);
    }
}
But I don't get it: what should I do to get the two's complement?
The solution is to use a type that supports that range of numbers: unsigned long long or uint64_t (assuming you are passing non-negative integers; otherwise use long long or int64_t). unsigned long long is at least 64 bits and can be even wider, so given that you want exactly 64 bits of output, (u)int64_t is the better choice. Then you call the function like this:
intToBin(10329216702565230U)
In case you want to use negative numbers, use long long and call it like this:
intToBin(10329216702565230LL)
You didn't allocate enough memory: you allocated a single char up front and never grew it, so the loop writes to memory you don't own, which is undefined behavior. You could fix this by reallocating inside the loop, one char at a time, but instead of calling realloc repeatedly, why not allocate memory for 64 chars up front and use that to store the result? Any leftover space can be released at the end with a final realloc call.
You don't need to cast the return value of malloc (the void * to char * conversion is done implicitly).
You didn't check the return value of malloc. malloc may return NULL, and in that case you have to handle the failure separately. For example:
#define NBITS 64
...
...
bits = malloc(NBITS);
if (bits == NULL) {
    perror("malloc failed");
    exit(EXIT_FAILURE);
}
Note: the magic number 64 comes from the assumption that unsigned long long is at least 64 bits wide. If the number of bits ever exceeded 64, we would have to reallocate. A better choice is what chux said: sizeof digit * CHAR_BIT.
Also, store the ASCII digit rather than the raw bit value:
bits[k] = b + '0';
and then print it like this:
printf("%c", bits[i]);
You forgot to free the allocated memory. Without freeing it (free(bits)), you have a memory leak.
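Putting the advice together, a minimal corrected sketch (assuming non-negative input, hence uint64_t; intToBin64 is just an illustrative name):

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <limits.h>

void intToBin64(uint64_t digit) {
    size_t nbits = sizeof digit * CHAR_BIT;  /* chux's suggestion: no magic 64 */
    char *bits = malloc(nbits);
    size_t k = 0;
    if (bits == NULL) {
        perror("malloc failed");
        exit(EXIT_FAILURE);
    }
    do {                                     /* do/while so 0 prints as "0" */
        bits[k++] = (digit % 2) + '0';       /* store the ASCII digit */
        digit /= 2;
    } while (digit);
    while (k--)
        printf("%c", bits[k]);               /* most significant digit first */
    free(bits);                              /* no leak */
}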
As David C. Rankin commented, the answer is simple: replace int with int64_t in the function above to use 64 bits instead of 32.
Please try it and let us know.

Malloc() to create a new size for integers for use with math - Guidance needed

My goal is to create an integer type with a bigger size than 4 bytes, or 8 if I use long. I tried using malloc to give more bytes in memory for a bigger integer, but it still broke on the 31st iteration (printed a negative number). Here's my code:
int main()
{
    int x = 31; // normally an int can do up to 30 doublings without going negative, so this is my test number
    int i;
    int *bigNum = NULL;
    bigNum = malloc(sizeof(int) * 2);
    *bigNum = 1;
    for (i = 0; i < x; i++) {
        *bigNum = *bigNum * 2;
        printf("%d \n", *bigNum);
    }
    free(bigNum);
}
Output:
2
4
...
1073741824
-2147483648
Although you have allocated more memory for your integer, no other part of the system knows this:
the compiler doesn't know this;
the CPU chip doesn't know this;
printf doesn't know this.
So all calculations are just carried out using the native int size.
Note that you can't tell the CPU chip you use larger integers; it is a physical/design limitation of the chip.
Dereferencing an int * gives you an int no matter how much extra memory you allocate for it.
If you want a data type able to hold more information, try a long (although the only guarantee is that it will be at least as big as an int).
If you want to handle integers beyond what your implementation provides, use a bignum library, like MPIR.
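For instance, with GMP (MPIR keeps the same mpz API), the doubling loop from the question runs far past 31 iterations. A quick sketch (compile with -lgmp):

#include <gmp.h>
#include <stdio.h>

int main(void)
{
    mpz_t big;
    mpz_init_set_ui(big, 1);
    for (int i = 0; i < 100; i++) {  /* well past where int overflowed */
        mpz_mul_ui(big, big, 2);     /* big *= 2, growing as needed */
        gmp_printf("%Zd\n", big);
    }
    mpz_clear(big);
    return 0;
}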
goal is to create a integer type with a bigger size
To handle multi-int integers, code also needs supporting functions for each basic operation:
int main(void) {
    int x = 31;
    RandBigNum *bigNum = RandBigNum_Init();
    RandBigNum_Assign_int(bigNum, 1);
    for (int i = 0; i < x; i++) {
        RandBigNum_Multiply_int(bigNum, 2);
        RandBigNum_Print(bigNum);
        printf(" \n");
    }
}
Now, how might one implement all this? There are many approaches. Below is a simple, incomplete, and untested one. It is not necessarily a good approach, but it presents an initial idea of the details needed for a big-number library.
// Numbers are all positive. The first array element is the number of limbs.
typedef unsigned RandBigNum;

#define RandBigNum_MAXP1 (UINT_MAX + 1ull)

RandBigNum *RandBigNum_Init(void) {
    return calloc(1, sizeof(RandBigNum)); // just the size header, set to 0
}

void RandBigNum_Multiply_int(RandBigNum *x, unsigned scale) {
    unsigned carry = 0;
    for (unsigned i = 1; i <= x[0]; i++) {
        unsigned long long product = 1ull * x[i] * scale + carry;
        x[i] = product % RandBigNum_MAXP1;
        carry = product / RandBigNum_MAXP1;
    }
    if (carry) {
        unsigned n = x[0] + 2;
        x = realloc(x, sizeof *x * n); // re-alloc check omitted; the new pointer
                                       // also needs to reach the caller somehow
        x[x[0] + 1] = carry;           // append the new most-significant limb
        x[0]++;
    }
}
// many other functions
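For completeness, a print helper in the same representation might look like the sketch below. RandBigNum_Print is referenced above but was not shown, so this body is an assumption; it prints the limbs in hex, most significant first, and the %08x padding assumes a 32-bit unsigned:

void RandBigNum_Print(const RandBigNum *x) {
    if (x[0] == 0) {              // no limbs stored: the value is zero
        printf("0x0");
        return;
    }
    printf("0x%x", x[x[0]]);      // most significant limb, unpadded
    for (unsigned i = x[0] - 1; i >= 1; i--)
        printf("%08x", x[i]);     // lower limbs, zero-padded to 32 bits
}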

Converting an int to uint8_t array HEX value

I'd like to take an int and convert it into a uint8_t array of hex values. The int is at most 8 bytes long in hex after conversion. I was able to use a method that converts an int (19604) into a uint8_t array like this:
00-00-00-00-00-00-00-00-04-0C-09-04
But I need it to look like this:
00-00-00-00-00-00-00-00-00-00-4C-94
The algorithm I used was this:
void convert_file_size_to_hex(long int size)
{
    size_t wr_len = 12;
    long int decimalNumber, quotient;
    int i = wr_len, temp;
    decimalNumber = size;
    quotient = decimalNumber;
    uint8_t hexNum[wr_len];
    memset(hexNum, 0, sizeof(hexNum));
    while (quotient != 0) {
        temp = quotient % 16;
        hexNum[--i] = temp;
        quotient /= 16;
    }
}
How can I go about doing this? Should I use a different algorithm or should I try to bit shift the result? I'm kinda new to bit shifting in C so some help would be great. Thank you!
Consider the following code:
#include <stdio.h>
#include <string.h>

int main()
{
    unsigned char hexBuffer[100] = {0};
    int n = 19604;
    int i;
    memcpy((char*)hexBuffer, (char*)&n, sizeof(int));
    for (i = 0; i < 4; i++)
        printf("%02X ", hexBuffer[i]);
    printf("\n");
    return 0;
}
Just a simple statement converts the int to a byte buffer:
memcpy((char*)hexBuffer, (char*)&n, sizeof(int));
You can use 8 instead of 4 in the print loop.
Since n % 16 has a range of 0..15, inclusive, you are making an array of single hex digits from your number. If you would like to make an array of bytes, use 256 instead:
while (quotient != 0) {
    temp = quotient % 256;
    hexNum[--i] = temp;
    quotient /= 256;
}
You can rewrite the same with bit shifts and bit masking:
while (quotient != 0) {
    temp = quotient & 0xFF;
    hexNum[--i] = temp;
    quotient >>= 8;
}
To know how many bytes you need regardless of the system, use sizeof(int):
size_t wr_len = sizeof(int);
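Putting it together, a minimal sketch based on the question's function (the 12-byte width and the function name are kept from the original; this is an illustration, not the only way):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define WR_LEN 12

void convert_file_size_to_hex(long int size)
{
    uint8_t hexNum[WR_LEN];
    int i = WR_LEN;
    memset(hexNum, 0, sizeof(hexNum));
    while (size != 0 && i > 0) {
        hexNum[--i] = size & 0xFF;   /* take the low byte */
        size >>= 8;                  /* move to the next byte */
    }
    for (int j = 0; j < WR_LEN; j++)
        printf("%02X%s", hexNum[j], j + 1 < WR_LEN ? "-" : "\n");
}

int main(void)
{
    convert_file_size_to_hex(19604); /* 00-00-00-00-00-00-00-00-00-00-4C-94 */
    return 0;
}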
Use a union for this:
union int_to_bytes {
    int i;
    uint8_t b[sizeof(int)];
};

union int_to_bytes test = { .i = 19604 };
int i = sizeof(test.b);
while (i--)
    printf("%hhx ", test.b[i]); // little-endian: prints 0 0 4c 94
putchar('\n');
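As given, the fragment needs a surrounding program; a minimal self-contained version might be (output shown for a little-endian machine):

#include <stdio.h>
#include <stdint.h>

union int_to_bytes {
    int i;
    uint8_t b[sizeof(int)];
};

int main(void)
{
    union int_to_bytes test = { .i = 19604 };
    for (int i = (int)sizeof(test.b); i-- > 0; )
        printf("%hhx ", test.b[i]);  /* prints: 0 0 4c 94 */
    putchar('\n');
    return 0;
}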

Two's complement and loss of information in C

I want to take the two's complement of a float.
unsigned long Temperature;
Temperature = (~(unsigned long)(564.48)) + 1;
But the problem is that the cast loses information: 564 instead of 564.48.
Can I take the two's complement without losing information?
That is a very weird thing to do; floating-point numbers are not stored as 2s complement, so it doesn't make a lot of sense.
Anyway, you can perhaps use the good old union trick:
union {
    float real;
    unsigned long integer;
} tmp = { 564.48 };

tmp.integer = ~tmp.integer + 1;
printf("I got %f\n", tmp.real);
When I tried it (on ideone) it printed:
I got -0.007412
Note that this relies on unspecified behavior, so it's possible it might break if your compiler does not implement the access in the most straightforward manner. This is distinct from undefined behavior (which would make the code invalid), but still not optimal. Someone did tell me that newer standards make it clearer, but I've not found an exact reference, so ... consider yourself warned.
You can't use ~ on floats (its operand must be an integer type):
#include <stdio.h>

void print_binary(size_t const size, void const * const ptr)
{
    unsigned char *b = (unsigned char *) ptr;
    unsigned char byte;
    int i, j;

    for (i = size - 1; i >= 0; i--) {
        for (j = 7; j >= 0; j--) {
            byte = b[i] & (1 << j);
            byte >>= j;
            printf("%u", byte);
        }
    }
    printf("\n");
}

int main(void)
{
    float f = 564.48f;
    char *p = (char *)&f;
    size_t i;

    print_binary(sizeof(f), &f);
    for (i = 0; i < sizeof(float); i++) {
        p[i] = ~p[i];
    }
    print_binary(sizeof(f), &f);
    f += 1.f;
    return 0;
}
Output:
01000100000011010001111010111000
10111011111100101110000101000111
Of course, print_binary is there to test the result; remove it in real code. Also, as pointed out by barakmanos, print_binary assumes little endian; the rest of the code is not affected by endianness:
#include <stdio.h>

int main(void)
{
    float f = 564.48f;
    char *p = (char *)&f;
    size_t i;

    for (i = 0; i < sizeof(float); i++) {
        p[i] = ~p[i];
    }
    f += 1.f;
    return 0;
}
Casting a floating-point value to an integer value changes the "bit contents" of that value.
In order to perform two's complement on the "bit contents" of a floating-point value:
float f = 564.48f;
unsigned long Temperature = ~*(unsigned long*)&f+1;
Make sure that sizeof(long) == sizeof(float), or use double instead of float.
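Both the union trick and the pointer cast above are shortcuts; the aliasing-safe way in standard C is to memcpy the bits into an integer of matching size. A sketch assuming a 32-bit float, with uint32_t from <stdint.h>:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    float f = 564.48f;
    uint32_t u;

    memcpy(&u, &f, sizeof u);   /* well-defined way to reinterpret the bits */
    u = ~u + 1;                 /* two's complement of the bit pattern */
    memcpy(&f, &u, sizeof f);
    printf("%f\n", f);          /* matches the union version on typical platforms */
    return 0;
}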
