Printing a long in binary 64-bit representation - c

I'm trying to print the binary representation of a long in order to practice bit manipulation and setting various bits in the long for a project I am working on. I can successfully print the bits of ints, but whenever I try to print the 64 bits of a long the output is screwy.
Here is my code:
#include <stdio.h>

void printbits(unsigned long n){
    unsigned long i;
    i = 1<<(sizeof(n)*4-1);
    while(i>0){
        if(n&1)
            printf("1");
        else
            printf("0");
        i >>= 1;
    }
int main(){
    unsigned long n=10;
    printbits(n);
    printf("\n");
}
My output is 0000000000000000000000000000111111111111111111111111111111111110.
Thanks for the help!

4 isn’t the right number of bits in a byte
Even though you’re assigning it to an unsigned long, 1 << … is an int, so you need 1UL
n&1 should be n&i
There’s a missing closing brace
Fixes only:
#include <limits.h>
#include <stdio.h>

void printbits(unsigned long n){
    unsigned long i;
    i = 1UL<<(sizeof(n)*CHAR_BIT-1);
    while(i>0){
        if(n&i)
            printf("1");
        else
            printf("0");
        i >>= 1;
    }
}

int main(){
    unsigned long n=10;
    printbits(n);
    printf("\n");
}
And if you want to print a 64-bit number specifically, I would hard-code 64 and use uint_least64_t.
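For example, a minimal sketch of that variant (the name printbits64 is just illustrative):

#include <stdint.h>
#include <stdio.h>

void printbits64(uint_least64_t n){
    uint_least64_t i = UINT64_C(1) << 63;   /* bit 63 of a type with at least 64 bits */
    while(i > 0){
        if(n & i)
            printf("1");
        else
            printf("0");
        i >>= 1;
    }
}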

The problem is that i = 1<<(sizeof(n)*4-1) is not correct for a number of reasons.
sizeof(n)*4 is 32, not 64. You probably want sizeof(n)*8
1<<63 may give you overflow because 1 may be 32-bits by default. You should use 1ULL<<(sizeof(n)*8-1)
unsigned long is not necessarily 64 bits. You should use unsigned long long
If you want to be extra thorough, use sizeof(n) * CHAR_BIT (defined in <limits.h>).
In general, you should use stdint defines (e.g. uint64_t) whenever possible.
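Putting those points together, one possible sketch looks like this (the function name is made up here):

#include <limits.h>
#include <stdio.h>

void printbits_ull(unsigned long long n){
    /* 1ULL avoids the int-sized shift; CHAR_BIT gives the bits per byte */
    unsigned long long mask = 1ULL << (sizeof(n) * CHAR_BIT - 1);
    while(mask > 0){
        putchar((n & mask) ? '1' : '0');
        mask >>= 1;
    }
}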

The following should do what you want:
#include <stdio.h>

void printbits(unsigned long number, unsigned int num_bits_to_print)
{
    if (number || num_bits_to_print > 0) {
        printbits(number >> 1, num_bits_to_print - 1);
        printf("%d", (int)(number & 1));   /* cast: number & 1 is an unsigned long, %d expects an int */
    }
}
We keep calling the function recursively until either we've printed enough bits, or we've printed the whole number, whichever takes more bits.
Wrapping this in another function gives exactly what you want:
void printbits64(unsigned long number) {
    printbits(number, 64);
}
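For example (reusing the includes above), calling it prints 10 padded to 64 binary digits:

int main(void){
    printbits64(10);   /* prints 60 zeros followed by 1010 */
    printf("\n");
}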

Related

How do I get 16 bit sections of a 64 bit unsigned long using an integer index?

I'm trying to return certain 16-bit sections from a 64-bit unsigned long and I'm stuck on how to accomplish this. Here is the function I am trying to implement:
// assume i is a valid index (0-3 inclusive)
unsigned short get(unsigned long* ex, int i) {
    // return the 16-bit section based on the index i
}
For example, if I have unsigned long ex = 0xFEDCBA9876543210;, then my function get(ex, 0) would return 0x3210, get(ex, 1) would return 0x7654, etc. I'm very new to C and I'm still trying to wrap my head around bit management and pointers. Any advice or feedback is appreciated in helping me understand C better.
You'll have to use a bit shift
#include <stdint.h>

uint16_t get(uint64_t ex, int i)
{
    return (uint16_t)(ex >> i*16);
}
You can just pass by value. There's no reason to pass a pointer. This will shift the bits to the right, meaning they become the low order value. When it gets converted to a 16 bit type, it loses the higher order bits.
I've included stdint.h because it defines types of exact size.
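For example, with the value from the question (a quick sketch; PRIx16 from <inttypes.h> just prints the 16-bit result in hex):

#include <stdio.h>
#include <inttypes.h>

int main(void)
{
    uint64_t ex = UINT64_C(0xFEDCBA9876543210);
    printf("%04" PRIx16 "\n", get(ex, 0));   /* 3210 */
    printf("%04" PRIx16 "\n", get(ex, 1));   /* 7654 */
    return 0;
}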
I would use a mask along with bit shifting
unsigned short get(int index, unsigned long n)
{
    if (index > 3 || index < 0)
        return 0xFFFF; // well you have to see if you have control over the inputs.
    return (n >> (index << 4)) & 0xFFFF; // will extract 2 bytes.
}
The way closest to what you're asking for is a union:
#include <stdio.h>
#include <stdint.h>

int main(const int argc, const char * const argv[]) {
    union {
        uint64_t i;
        uint16_t ia[4];
    } u;

    u.i = 0xFEDCBA9876543210;
    printf("u.ia[0] %x\n", u.ia[0]);
}
Output is:
u.ia[0] 3210
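Note that which 16-bit piece lands in ia[0] depends on the machine's byte order, so this is not portable: the 3210 above is what a little-endian machine gives you, while a big-endian machine would print fedc. Looping over all four elements (inside the same main) makes that visible:

for (int k = 0; k < 4; k++)
    printf("u.ia[%d] %x\n", k, u.ia[k]);
/* little-endian: 3210, 7654, ba98, fedc; big-endian: fedc, ba98, 7654, 3210 */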

I've got an incorrect output for 13 factorial, how do I fix this?

My output
13!=1932053504
Expected output
13!=6227020800
I tried using int and long int but the output remains the same.
long int fact(long int num);

int main(){
    long int n;
    printf("Enter a number to find factorial: ");
    scanf("%ld",&n);
    printf("%ld!= %ld",n,fact(n));
}

long int fact(long int n){
    if(n>=1)
        return n*fact(n-1);
    else
        return 1;
}
Output:
13!=1932053504
The expected value exceeds 32 bits; what you get is the actual result truncated to 32 bits:
1932053504 equals (6227020800 & 0xFFFFFFFF)
You'll have to check the capacity of int and long int in your environment, e.g. by printing their sizeof.
You should use long long int to do the calculation on 64 bits. If you break that barrier as well, you need more complicated arbitrary-precision arithmetic.
Note: writing long twice is not a mistake; long long int is a distinct type that is at least 64 bits wide.
If you are not interested in negative numbers, you can use unsigned long long int for some extra "comfort".
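A minimal sketch of the fix, keeping the same logic but switching to unsigned long long and the matching %llu format (this still overflows past 20!):

#include <stdio.h>

unsigned long long fact(unsigned long long n){
    if(n >= 1)
        return n * fact(n - 1);
    else
        return 1;
}

int main(){
    unsigned long long n;
    printf("Enter a number to find factorial: ");
    scanf("%llu", &n);
    printf("%llu!= %llu\n", n, fact(n));   /* 13!= 6227020800 */
}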

How can I copy a 4-letter ASCII word to a buffer in C?

I am trying to copy the word 0x0FF0 to a buffer but am unable to do so.
Here is my code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <math.h>
#include <time.h>
#include <linux/types.h>
#include <fcntl.h>
#include <unistd.h>
#include <errno.h>

void print_bits(unsigned int x);

int main(int argc, char *argv[])
{
    char buffer[512];
    unsigned int init = 0x0FF0;
    unsigned int * som = &init;

    printf("print bits of som now: \n");
    print_bits(init);
    printf("\n");

    memset(&buffer[0], 0, sizeof(buffer)); // reinitialize the buffer
    memcpy(buffer, som, 4); // copy word to the buffer

    printf("print bits of buffer[0] now: \n");
    print_bits(buffer[0]);
    printf("\n");
    return 0;
}

void print_bits(unsigned int x)
{
    int i;
    for (i = 8 * sizeof(x)-17; i >= 0; i--) {
        (x & (1 << i)) ? putchar('1') : putchar('0');
    }
    printf("\n");
}
This is the result I get in the console: the bits printed for buffer[0] are different from the bits printed for som. Why am I getting different values from the bit printing if I am using memcpy? I don't know whether it has something to do with big/little endian, but I am losing 4 bits of 1's here, which shouldn't happen with either method.
When you call
print_bits(buffer[0]);
you're taking just one byte out of the buffer, converting it to unsigned int, and passing that to the function. The other bytes in buffer are ignored.
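One way to see that the copy itself worked is to copy all the bytes back into an unsigned int before printing, instead of passing a single char. A small sketch, reusing the question's buffer and print_bits (and assuming a 4-byte unsigned int, as the original memcpy(buffer, som, 4) already does):

unsigned int check = 0;
memcpy(&check, buffer, 4);   /* read back the same 4 bytes that were written */
print_bits(check);           /* now prints the same bits as print_bits(init) */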
You are mixing up types and relying on specific settings of your architecture/platform. This already breaks your existing code, and it may get even more harmful once you compile with different settings.
Your buffer is of type char[512], while your init is of type unsigned int.
First, it depends on the settings whether char is signed or unsigned. This is actually relevant, since it influences how a char value is promoted to an unsigned int value. See the following code, which demonstrates the difference using explicitly signed and unsigned chars:
signed char c = 0xF0;
unsigned char uc = c;
unsigned int ui_from_c = c;
unsigned int ui_from_uc = uc;
printf("Singned char c:%hhd; Unsigned char uc:%hhu; ui_from_c:%u ui_from_uc:%u\n", c, uc, ui_from_c,ui_from_uc);
// output: Singned char c:-16; Unsigned char uc:240; ui_from_c:4294967280 ui_from_uc:240
Second, int may be represented by 4 or by 8 bytes (either of which can hold a "word"), yet char is a single byte and therefore cannot hold a 16-bit "word" on its own.
Third, architectures can be big endian or little endian, and this influences where a constant like 0x0FF0, which requires 2 bytes, would actually be located in a 4 or 8 byte integral representation.
So it is certain that buffer[0] selects just a portion of what you think it does, that portion might get promoted to unsigned int in the wrong way, and it might even be a portion that lies completely outside the 0x0FF0 literal.
I'd suggest using fixed-width integer types that represent exactly one word throughout:
#include <stdio.h>
#include <stdint.h>
#include <string.h>   // for memset/memcpy

void print_bits(uint16_t x);

int main(int argc, char *argv[])
{
    uint16_t buffer[512];
    uint16_t init = 0x0FF0;
    uint16_t * som = &init;

    printf("print bits of som now: \n");
    print_bits(init);
    printf("\n");

    memset(buffer, 0, sizeof(buffer)); // reinitialize the buffer
    memcpy(buffer, som, sizeof(*som)); // copy word to the buffer

    printf("print bits of buffer[0] now: \n");
    print_bits(buffer[0]);
    printf("\n");
    return 0;
}

void print_bits(uint16_t x)
{
    int i;
    for (i = 8 * sizeof(x) - 1; i >= 0; i--) { // highest bit index is 8*sizeof(x)-1
        (x & (1 << i)) ? putchar('1') : putchar('0');
    }
    printf("\n");
}
You are not writing the bytes "0F F0" to the buffer. You are writing whatever bytes your platform uses internally to store the number 0x0FF0. There is no reason these need to be the same.
When you write 0x0FF0 in C, that means, roughly, "whatever my implementation uses to encode the number four thousand eighty". That might be the byte string 0F, F0. But it might not be.
After all, unsigned int init = 0x0FF0; and unsigned int init = 4080; must do exactly the same thing on every platform, because 0x0FF0 and 4080 are the same number. But surely not all platforms store the number 4,080 using the byte string "0F F0".
For example, I might store the number ten as "10" or "ten" or any number of other ways. It's unreasonable for you to expect "ten", "10", or any other particular byte sequence to appear in memory just because you stored the number ten unless you do happen to specifically know how your platform stores the number ten. Given that you asked this question, you don't know that.
Also, you are only printing the value of buffer[0], which is a single character. So it couldn't possibly hold any version of 0x0FF0.
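If what you actually want is the byte sequence 0F F0 in the buffer regardless of platform, write the bytes out explicitly. A sketch, assuming most-significant-byte-first order is what you are after:

unsigned char buf[512] = {0};
unsigned int init = 0x0FF0;

buf[0] = (init >> 8) & 0xFF;   /* 0x0F: high-order byte first */
buf[1] = init & 0xFF;          /* 0xF0: low-order byte second */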

Need to convert decimal to base 32, what's wrong? (base 32 = 0-9, A-V)

This is my code in C; it needs to convert from decimal to base 32. I am getting strange symbols as output.
#include <stdio.h>

char * f(unsigned int num){
    static char base[3];
    unsigned int mask=1,base32=0;
    int i,j;
    for (j=0;j<3;j++)
        for (i=0;i<5;num<<1,mask<<1){
            if (mask&num){
                base32 = mask|base32;
                base32<<1;
            }
            if (base32<9)
                base[j]=base32+'0';
            else
                base[j]=(char)(64+base32-9);
        }
}
int main()
{
    unsigned int num =100;
    printf("%s\n",f(num));
    return 1;
}
For 100 I should get 34.
You shift both mask and num in your loop. This means you're always checking for the same bit, just moved around. Only shift one of the values, the mask.
Also you're skipping the number 9 with your comparison.
Using a debugger you can easily see what's happening and how it is going wrong.
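For comparison, a remainder-based version side-steps the masking entirely. A sketch (the name to_base32 is made up, and the buffer must be big enough); it prints 34 for 100:

#include <stdio.h>

/* Converts num to base 32 using digits 0-9 then A-V.
   Fills buf from the end and returns a pointer to the first digit. */
char *to_base32(unsigned int num, char *buf, int buflen){
    char *p = buf + buflen - 1;
    *p = '\0';
    do {
        unsigned int digit = num % 32;
        *--p = (digit < 10) ? (char)('0' + digit) : (char)('A' + digit - 10);
        num /= 32;
    } while (num > 0 && p > buf);
    return p;
}

int main(void){
    char buf[16];
    printf("%s\n", to_base32(100, buf, (int)sizeof buf));   /* prints 34 */
    return 0;
}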

Trying to populate an array with random 32-bit binary numbers

I have an assignment to program the game of life in standard C, and I am only allowed to use 1D arrays. My first issue is getting the array populated with random 32-bit binary numbers. I tried writing code to do this (with code supplied by my teacher, which is not commented at all, so to some extent I have no idea what I am doing). When I try displaying the contents of the array, the values at every index are all the same. Can you help me populate my array with random 32-bit numbers?
#include <stdio.h>
#include <stdlib.h>
#include <time.h>   // needed for time() in init32()

#define ARSZ 32

void displayBinary(unsigned long* , int);
unsigned long init32();

unsigned long Row[ARSZ];

int main(void){
    int i, j;
    for (i=0;i<ARSZ;i++){
        Row[i] = init32();
    }//populate array
    for (j=0;j<ARSZ;j++){
        displayBinary(Row, j);
        printf("\n");
    }//display array contents
}/** End Main **/

void displayBinary(unsigned long array[], int x){
    unsigned long MASK = 0x80000000;
    do {
        printf("%c", (array[x] & MASK) ? 'X' : 0x20);
    } while ((MASK >>= 1) != 0);
}

unsigned long init32(){
    unsigned long init32;
    srand(time(NULL));
    init32 = ((double)rand()/RAND_MAX)*0xFFFFFFFF;
    return init32;
}
You need to change how you seed. Calling srand(time(NULL)) before every rand() reseeds the generator with the same seed (the time hasn't changed between calls), so you keep getting the same value.
Take a cursory glance at how the pseudo-random number generator works in C and it'll make sense.
1) Repetitive calling of srand(time(NULL)). Rather than calling srand() in init32(), call it once in main() (as @WhozCraig and @Albert Myers noted).
2) Potentially too small RAND_MAX with init32 = ((double)rand()/RAND_MAX)*0xFFFFFFFF;. C only requires RAND_MAX to be at least 32767, which would give init32 at most 32768 different values. To reliably generate a 32-bit random number, use:
unsigned long init32;
init32 = rand();
init32 <<= 15;
init32 ^= rand();
init32 <<= 15;
init32 ^= rand();
// if pedantic
init32 &= 0xFFFFFFFFL;
Note:
A small bias in 32-bit number generation occurs if RAND_MAX + 1 is not a power-of-2.
Also, rand() itself is sometimes not uniformly random; it depends on the platform.
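Putting the two fixes together, a sketch of the reworked setup (seed once in main, build each 32-bit value from several rand() calls):

#include <stdlib.h>
#include <time.h>

unsigned long init32(void){
    unsigned long r = rand();
    r = (r << 15) ^ rand();
    r = (r << 15) ^ rand();
    return r & 0xFFFFFFFFUL;   /* keep exactly 32 bits */
}

int main(void){
    srand((unsigned)time(NULL));   /* seed once, not on every call */
    /* ... Row[i] = init32(); for each element, as in the question ... */
}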
