Need to convert decimal to base 32 (digits 0-9, A-V), what's wrong?

This is my code in C; I need to convert from decimal to base 32. I'm getting strange symbols as output.
#include <stdio.h>
char * f(unsigned int num){
static char base[3];
unsigned int mask=1,base32=0;
int i,j;
for (j=0;j<3;j++)
for (i=0;i<5;num<<1,mask<<1){
if (mask&num){
base32 = mask|base32;
base32<<1;
}
if (base32<9)
base[j]=base32+'0';
else
base[j]=(char)(64+base32-9);
}
}
int main()
{
unsigned int num =100;
printf("%s\n",f(num));
return 1;
}
I should get 34.

You shift both mask and num in your loop, which means you're always checking the same bit, just moved around. Only shift one of the values, the mask. Note also that num<<1, mask<<1 and base32<<1 discard their results (you need <<=), the inner loop never increments i, and f never returns base.
Also, your comparison base32<9 skips the number 9; it should be base32<=9.
Using a debugger you can easily see what's happening and how it is going wrong.

Test feedback: oddParitySet3 incorrectly returned

Purpose: Demonstrate the ability to manipulate bits using functions and to learn a little bit about parity bits.
Parity is a type of error detection where one of the bits in a bit string is used for this purpose. There are more complicated systems that can do more robust error detection as well as error correction. In this lab, we will use a simple version called odd parity. This reserves one bit as a parity bit. The other bits are examined, and the parity bit is set so that the number of 1 bits is odd. For example, if you have a 3-bit sequence, 110 and the rightmost bit is the parity bit, it would be set to 1 to make the number of 1s odd.
Notes: When referring to bit positions, bit 31 is the high-order bit (leftmost), and bit 0 is the low-order bit (rightmost). In order to work through these functions, you will likely have to map out bit patterns for testing to see how it all works. You may find using a converter that can convert between binary, hex, and decimal useful. Also, to assign bit patterns to integers, it might be easier to use hex notation. To assign a hex value in C, you can use the 0x????? where ????? are hex values. (There can be more or fewer than the number of ? here.) E.g.,
int i = 0x02A;
would assign i = 42 in decimal.
Program Specifications: Write the functions below:
unsigned int leftRotate(unsigned int intArg, unsigned int rotAmt);
Returns an unsigned int that is intArg rotated left by rotAmt. Note: Rotate left is similar to shift left. The difference is that the bits shifted out at the left come back in on the right. Rotate is a common operation and often is a single machine instruction. Do not convert intArg to a string and operate on that. Do not use an array of ints (or other numbers). Use only integers or unsigned integers.
Example: Assuming you have 5-bit numbers, rotating the binary number 11000 left by 3 yields 00110
char *bitString(int intArg)
Returns a pointer to a character string containing the 32-bit pattern for the integer argument. The first character, index 0, should be the high-order bit and on down from there. For this function, you will need malloc. Can be used for printing bit patterns. E.g., if intArg = 24 the return string would be 00000000000000000000000000011000
unsigned int oddParitySet3(unsigned int intArg, unsigned int startBit);
This function will determine the odd parity for a 3-bit segment of intArg starting at bit startBit and set the parity bit (low-order bit) appropriately.
E.g., suppose intArg=3 and startBit = 2. The 32 bit representation, from high to low, would be 29 zeros then 110. So, bits 2 - 0 are 011. To make the parity odd, you would set bit zero to 0.
The return value is the modified intArg, in this case it would be 29 zeros then 010 or a value of 2.
Do not convert intArg to a string and operate on that. Use only integers or unsigned integers.
Note: If the start bit is greater than 31 or less than 2, this would present a problem (do you see this?). If this is the case, return a zero.
The compile command used by this zyLab is:
gcc main.c -Wall -Werror -Wextra -Wuninitialized -pedantic-errors -o a.out -lm
The program does not pass all tests; the grader reports that oddParitySet3 incorrectly returned.
C code:
#include<stdio.h>
#include<string.h>
#include<stdlib.h>
char * bitString(int intArg);
unsigned int leftRotate(unsigned int n, unsigned int d);
unsigned int oddParitySet3(unsigned int intArg, unsigned int startBit);
int main() {
return 0;
}
char * bitString(int intArg)
{
char *bits = (char*)malloc(33 * sizeof(char));
bits[32] = '\0';
for(int i = 31; i >= 0; i--)
{
if(intArg & (1 << i))
bits[31 - i] = '1';
else
bits[31 - i] = '0';
}
return bits;
}
unsigned int leftRotate(unsigned int intArg, unsigned int rotAmt)
{
return (intArg << rotAmt) | (intArg >> (32 - rotAmt));
}
unsigned int oddParitySet3(unsigned int intArg, unsigned int startBit){
unsigned int mask = 0x00000007;
unsigned int shiftedMask = mask << startBit;
unsigned int temp = intArg & shiftedMask;
unsigned int result = intArg;
if(__builtin_popcount(temp) % 2 == 0)
result |= shiftedMask;
else
result &= ~shiftedMask;
return result;
}
I need help fixing the oddParitySet3 function so that it passes those tests.

Why does the itoa function return 32 bits if the size of the variable is 16 bits?

The size of short int is 2 bytes (16 bits) with my 64-bit processor and MinGW compiler, but when I convert a short int variable to a binary string using the itoa function, it returns a string of 32 bits.
#include <stdio.h>
#include <stdlib.h> /* itoa() is declared here in MinGW (it is nonstandard) */
int main(){
char buffer [50];
short int a=-2;
itoa(a,buffer,2); //converting a to binary
printf("%s %zu",buffer,sizeof(a));
}
Output
11111111111111111111111111111110 2
The answer lies in understanding C's promotion of short types (and chars, too!) to ints when those values are passed as function parameters, and in understanding the consequences of sign extension.
This may be more understandable with a very simple example:
#include <stdio.h>
int main() {
printf( "%08X %08X\n", (unsigned)(-2), (unsigned short)(-2));
// Both are cast to 'unsigned' to avoid UB
return 0;
}
/* Prints:
FFFFFFFE 0000FFFE
*/
Both parameters to printf() were, as usual, promoted to 32-bit ints. The left-hand value is -2 (decimal) in 32-bit notation. By casting the other parameter so that it is not subjected to sign extension, the printed value shows the 32-bit representation of the original 16-bit short.
itoa() is not available in my compiler for testing, but this should give the expected results
itoa( (unsigned short)a, buffer, 2 );
Your problem is simple: refer to the itoa() manual and you will notice its prototype, which is
char * itoa(int n, char * buffer, int radix);
So it takes an int to be converted, and you are passing a short int, so it is widened from 2-byte width to 4-byte width; that is why it prints 32 bits.
To solve this problem:
You can simply shift the array left by 16 positions with the following simple for loop:
for (int i = 0; i < 17; ++i) {
buffer[i] = buffer[i+16];
}
and it will give the same result. Here is an edited version of your code:
#include<stdio.h>
#include <stdlib.h>
int main(){
char buffer [50];
short int a= -2;
itoa(a,buffer,2);
for (int i = 0; i < 17; ++i) {
buffer[i] = buffer[i+16];
}
printf("%s %zu",buffer,sizeof(a));
}
and this is the output:
1111111111111110 2

I've got an incorrect output for 13 factorial. How do I fix this?

My output
13!=1932053504
Expected output
13!=6227020800
I tried using int and long int, but the output remains the same.
#include <stdio.h>
long int fact(long int num);
int main(){
long int n;
printf("Enter a number to find factorial: ");
scanf("%ld",&n);
printf("%ld!= %ld",n,fact(n));
}
long int fact(long int n){
if(n>=1)
return n*fact(n-1);
else
return 1;
}
Output:
13!=1932053504
The expected value exceeds 32 bits, what you get is the actual result trimmed to 32 bits:
1932053504 equals (6227020800 & 0xFFFFFFFF)
You'll have to verify the capacity of int and long int in your environment, e.g. by printing their sizeof.
You should use long long int to do the calculations on 64 bits. If you break that barrier too, you need arbitrary-precision arithmetic.
Note: writing long twice is not a mistake; long long is a distinct, at-least-64-bit type (C99 and later).
If you are not interested in negative numbers, you can use unsigned long long int for some extra "comfort".

Printing a long in binary 64-bit representation

I'm trying to print the binary representation of a long in order to practice bit manipulation and setting various bits in the long for a project I am working on. I can successfully print the bits of ints, but whenever I try to print the 64 bits of a long, the output is wrong.
Here is my code:
#include <stdio.h>
void printbits(unsigned long n){
unsigned long i;
i = 1<<(sizeof(n)*4-1);
while(i>0){
if(n&1)
printf("1");
else
printf("0");
i >>= 1;
}
int main(){
unsigned long n=10;
printbits(n);
printf("\n");
}
My output is 0000000000000000000000000000111111111111111111111111111111111110.
Thanks for help!
4 isn’t the right number of bits in a byte
Even though you’re assigning it to an unsigned long, 1 << … is an int, so you need 1UL
n&1 should be n&i
There’s a missing closing brace
Fixes only:
#include <limits.h>
#include <stdio.h>
void printbits(unsigned long n){
unsigned long i;
i = 1UL<<(sizeof(n)*CHAR_BIT-1);
while(i>0){
if(n&i)
printf("1");
else
printf("0");
i >>= 1;
}
}
int main(){
unsigned long n=10;
printbits(n);
printf("\n");
}
And if you want to print a 64-bit number specifically, I would hard-code 64 and use uint_least64_t.
The problem is that i = 1<<(sizeof(n)*4-1) is not correct for a number of reasons.
sizeof(n)*4 is 32, not 64. you probably want sizeof(n)*8
1<<63 may give you overflow because 1 may be 32-bits by default. You should use 1ULL<<(sizeof(n)*8-1)
unsigned long is not necessarily 64 bits. You should use unsigned long long
If you want to be extra thorough, use sizeof(n) * CHAR_BIT (defined in <limits.h>).
In general, you should use stdint defines (e.g. uint64_t) whenever possible.
The following should do what you want:
#include <stdio.h>
void printbits(unsigned long number, unsigned int num_bits_to_print)
{
if (number || num_bits_to_print > 0) {
printbits(number >> 1, num_bits_to_print - 1);
printf("%d", (int)(number & 1));
}
}
We keep calling the function recursively until either we've printed enough bits, or we've printed the whole number, whichever takes more bits.
Wrapping this in another function does exactly what you want:
void printbits64(unsigned long number) {
printbits(number, 64);
}

How to convert binary int array to hex char array?

Say I have an array of 32 binary digits representing a 32-bit number, and I want to output it in hex form. How would I do that? This is what I have right now, but it is too long and I don't know how to match each group of 4 binary digits to the corresponding hex digit.
Below I break up the 32-bit number into 4-bit groups and try to find the matching entry in binaryDigits:
char hexChars[16] ={'0','1','2','3','4','5','6','7','8','9','a','b','c','d','e','f'};
char * binaryDigits[16] = {"0000","0001","0010","0011","0100","0101","0110","0111","1000","1001","1010","1011","1100","1101","1110","1111"};
int binaryNum[32]= {'0','0','1','0','0','0','0','1','0','0','0','0','1','0','0','1','0','0','0','0','0','0','0','0','0','0','0','0','1','0','1','0'};
int currentBlock, hexDigit;
int a=0, b=1, i=0;
while (a<32)
{
for(a=i+3;a>=i;a--)
{
current=binaryNum[a];
temp=current*b;
currentBlock=currentBlock+temp;
b*=10;
}
i=a;
while(match==0)
{
if(currentBlock != binaryDigits[y])
y++;
else
{
match=1;
hexDigit=binaryDigits[y];
y=0;
printf("%d",hexDigit);
}
}
}
printf("\n%d",currentBlock);
I apologize if this isn't the crux of your issue, but you say
I have a 32 bit long array of 32 binary digits
However, int binaryNum[32] is an array of 32 ints (4 bytes per int × 8 bits per byte × 32 ints = 1024 bits). That is what is making things unclear.
Further, you are assigning the ASCII character values '0' (which is 0x30 hex or 48 decimal) and '1' (0x31, 49) to each location in binaryNum. You can do it, and do the gymnastics to compare each value to actually form a
32 bit long array of 32 binary digits
but if that is what you have, why not just write it that way, as a binary constant? (Note: 0b binary constants are a GCC/Clang extension, standardized only in C23.) That will give you your 32-bit binary value. For example:
#include <stdio.h>
int main (void) {
unsigned binaryNum = 0b00100001000010010000000000001010;
printf ("\n binaryNum : 0x%8x (%u decimal)\n\n", binaryNum, binaryNum);
return 0;
}
Output
$ ./bin/binum32
binaryNum : 0x2109000a (554237962 decimal)
If this is not where your difficulty lies, please explain further, or again, just what you are trying to accomplish.
