I'm doing circular right shift and left shift in C, and I'm going wrong somewhere. For right rotation, if I give the input as 123 and the number of rotations as 3, the output I get is wrong. Please help me find the mistake.
#include<stdio.h>
#include <stdlib.h>
void rotateLeft(unsigned long int num,int n);
void rotateRight(unsigned long int num,int n);
void bin_print(unsigned long int num);
int main()
{
    printf("\tThis program is to circular right & left shift the int number by n\n\n");
    printf("Possible operations\n1. circular right shift\n2. circular left shift\n");
    int choice,n;
    unsigned long int num;
    printf("Enter your choice: ");
    scanf("%d",&choice);
    printf("Enter a number: ");
    scanf("%lu", &num);
    bin_print(num);
    printf("Enter number of rotation: ");
    scanf("%d", &n);
    (choice==1) ? rotateRight(num,n) : rotateLeft(num,n);
}
void bin_print(unsigned long int num)
{
    for(int i = 31; i >= 0; i--)
    {
        if((num & (1 << i))) {
            printf("%d",1); // The ith digit is one
        }
        else {
            printf("%d",0); // The ith digit is zero
        }
        if(i%8==0) printf(" ");
    }
    printf("\n");
}
void rotateLeft(unsigned long int num, int n)
{
    unsigned long int val = (num << n) | (num >> (32 - n));
    bin_print(val);
    printf("%ld",val);
}
void rotateRight(unsigned long int num,int n)
{
    unsigned long int val = (num >> n) | (num << (32 - n));
    bin_print(val);
    printf("%ld",val);
}
Do not assume the width
Code assumes unsigned long is 32 bits. Its width must be at least 32 bits, but could be more, like 64.
int constant
1 << i is a shifted int, yet code needs a shifted unsigned long. Use 1UL << i.
Use a matching print specifier
This implies OP might not have enabled all warnings. Save time. Enable all compiler warnings.
// printf("%ld",val);
printf("%lu",val);
#include <limits.h>
#if ULONG_MAX == 0xFFFFFFFFu
#define ULONG_WIDTH 32
#elif ULONG_MAX == 0xFFFFFFFFFFFFFFFFu
#define ULONG_WIDTH 64
#else
#error TBD code
#endif
void bin_print(unsigned long int num) {
    // for(int i = 31; i >= 0; i--)
    for(int i = ULONG_WIDTH - 1; i >= 0; i--)
    ...
        // if((num & (1 << i))) {
        if((num & (1UL << i))) {
Advanced
A wonderful way to get the bit-width of an integer type's value bits:
// https://stackoverflow.com/a/4589384/2410359
/* Number of value bits in inttype_MAX, or in any (1<<k)-1 where 0 <= k < 2040 */
#define IMAX_BITS(m) ((m)/((m)%255+1) / 255%255*8 + 7-86/((m)%255+12))
#define ULONG_WIDTH IMAX_BITS(ULONG_MAX)
Shifting more than "32"
To handle n outside the [1 ... ULONG_WIDTH) range, reduce the shift.
void rotateLeft(unsigned long num, int n) {
    // Bring n into (-ULONG_WIDTH ... ULONG_WIDTH) range.
    n %= ULONG_WIDTH;
    // Handle negative n.
    if (n < 0) n += ULONG_WIDTH;
    // Cope with 0 as a special case as `num >> (ULONG_WIDTH - 0)` is bad.
    unsigned long val = n == 0 ? num : (num << n) | (num >> (ULONG_WIDTH - n));
    bin_print(val);
    printf("%lu\n", val);
}
Related
Code:
#include <stdio.h>
#include <stdlib.h>
int main()
{
    long int x;
    x = 1000000;
    printf("%ld\n", x);
    for(int i = 0; i < 32; i++)
    {
        printf("%c", (x & 0x80) ? '1' : '0');
        x <<= 1;
    }
    printf("\n");
    return 0;
}
This code is supposed to convert a decimal int to binary, but why doesn't it work correctly?
P.S. I solved this problem by replacing 0x80 with 0x80000000. But why was the wrong number displayed at 0x80?
EDIT2:
OP asks "P.S. I solved this problem by replacing 0x80 with 0x80000000. But why was the wrong number displayed at 0x80?"
What was wrong: 0x80 is equal to 0x00000080, so it will never test any bits above b7 (where bits, right to left, are numbered b0 to b31).
The corrected value, 0x80000000, sets the MSB high and can be used (kind of) to 'sample' each bit of the data as the data value is 'scrolled' to the left.
//end edit2
Two concerns:
1) Mucking with the sign bit of a signed integer can be problematic
2) "Knowing" there are 32 bits can be problematic.
The following makes fewer presumptions. It creates a bit mask (only the MSB is set in an unsigned int value) and shifts that mask toward the LSB.
int main() {
    long int x = 100000;
    printf("%ld\n", x);
    for( unsigned long int bit = ~(~0u >> 1); bit; bit >>= 1 )
        printf("%c", (x & bit) ? '1' : '0');
    printf("\n");
    return 0;
}
100000
00000000000000011000011010100000
Bonus: Here is a version of the print statement that doesn't involve branching:
printf( "%c", '0' + !!(x & bit) );
EDIT:
Having seen the answer by @Lundin, the suggestion to insert spaces to improve readability is an excellent idea! (Full credit to @Lundin.)
Below, not only is the long string of bits output divided into "hexadecimal" chunks, but the compile-time value is shown in a way that makes it easy to see it is 10 million. (1e7 would have done, too.)
A new-and-improved version:
#include <stdio.h>
#include <stdlib.h>
int main() {
    long int x = 10 * 1000 * 1000;
    printf("%ld\n", x);
    for( unsigned long int bit = ~(~0u >> 1); bit; bit >>= 1 ) {
        putchar( '0' + !!(x & bit) );
        if( bit & 0x11111111 ) putchar( ' ' );
    }
    putchar( '\n' );
    return 0;
}
10000000
0000 0000 1001 1000 1001 0110 1000 0000
1000000 dec = 11110100001001000000 bin.
80 hex = 10000000 bin.
And this doesn't make much sense at all:
  11110100001001000000
&             10000000
Instead fix the loop body to something like this:
#include <stdio.h>
#include <stdlib.h>
int main (void)
{
    long int x;
    x = 1000000;
    printf("%ld\n", x);
    for(int i = 0; i < 32; i++)
    {
        unsigned long mask = 1u << (31-i);
        printf("%c", (x & mask) ? '1' : '0');
        if((i+1) % 8 == 0) // to print a space after 8 digits
            printf(" ");
    }
    printf("\n");
    return 0;
}
Without using an integer counter to track which digit is at the ith position, you can instead use an unsigned variable which equals 2^i at the ith iteration. Since this variable is unsigned, it becomes zero when it overflows. Here is how the code would look. It displays the number in reversed order (the first position holds the coefficient of 2^0 in the polynomial decomposition of the number).
int main()
{
    int x;
    x = 1000000;
    printf("%d\n", x);
    for(unsigned b = 1; b; b<<=1)
        printf("%c", x & b ? '1':'0');
    printf("\n");
    return 0;
}
I would use functions:

#include <stdio.h>
#include <limits.h>

void printBin(long int x)
{
    unsigned long mask = 1UL << (sizeof(mask) * CHAR_BIT - 1);
    int digcount = 0;
    while(mask)
    {
        printf("%d%s", !!(x & mask), ++digcount % 4 ? "" : " ");
        mask >>= 1;
    }
}

int main(void)
{
    printBin(0); printf("\n");
    printBin(1); printf("\n");
    printBin(0xf0); printf("\n");
    printBin(-10); printf("\n");
}
The C program below displays the binary representation of an inputted decimal number:
#include <stdio.h>
#include <stdlib.h>

typedef union {
    int i;
    struct {
        unsigned int dgts: 31;
        unsigned int sign: 1;
    } bin;
} myint;

void printb(int n, int i) {
    int k;
    for (k = i - 1; k >= 0; k--)
        if ((n >> k) & 1)
            printf("1");
        else
            printf("0");
}

void display_binary(myint x) {
    printf("%d | ", x.bin.sign);
    printb(x.bin.dgts, 31);
    printf("\n");
}

int main() {
    myint decimal;
    printf("input decimal value : ");
    scanf("%d", &decimal.i);
    printf("Binary representation is:\n");
    display_binary(decimal);
    return 0;
}
The program works correctly. What I can't understand is the order of the dgts and sign members of the bin struct. Intuitively, the sign member should precede dgts, since the bits representing the data are ordered from left to right in memory (as far as I know). After swapping the order of these two members, the result became wrong. Why should dgts come before sign?
The order of bits in bitfields is implementation-defined, but most popular compilers allocate starting with the LSB.
Numbers are stored in binary and it does not matter how you enter them. Negative numbers are stored as two's complement on most modern systems. In this system, the sign bit does not exist "per se". No special types are needed.
I would implement it as
void printb(int n) {
    unsigned int mask = 1U << (sizeof(n) * CHAR_BIT - 1);
    for (; mask; mask >>= 1)
    {
        printf("%c", (n & mask) ? '1' : '0');
    }
}
In the following code, the scanf() in main() turns one of the input numbers from a non-zero number into zero, as shown by a debugging printf() in the while loop. I've tested it on several compilers but keep getting the same result. Please help me out by telling me why this happens. Thank you.
#include <stdio.h>
unsigned srl (unsigned x, int k)
{
    /* perform shift arithmetically */
    printf("x = %u, (int) x= %d\n", x, (int) x);
    unsigned xsra = (int) x >> k;
    printf("\nxsra before was: %u\n", xsra);
    unsigned test = 0xffffffff;
    test <<= ((sizeof (int) << 3) - k); // get e.g., 0xfff00...
    printf("test after shift is: %x, xsra & test = %x\n", test, xsra & test);
    if (xsra & test == 0) // if xsrl is positve
        return xsra;
    else
        xsra ^= test; // turn 1s into 0s
    return xsra;
}

int sra (int x, int k)
{
    /* perform shift logically */
    int xsrl = (unsigned) x >> k;
    unsigned test = 0xffffffff;
    test << ((sizeof (int) << 3) - k + 1); // get e.g., 0xffff00...
    if (xsrl & test == 0) // if xsrl is positve
        return xsrl;
    else
        xsrl |= test;
    return xsrl;
}

int main(void)
{
    int a;
    unsigned b;
    unsigned short n;
    puts("Enter an integer and a positive integer (q or negative second number to quit): ");
    while(scanf("%d%u", &a, &b) == 2 && b > 0)
    {
        printf("Enter the number of shifts (between 0 and %d): ", (sizeof (int) << 3) - 1);
        scanf("%d", &n);
        if (n < 0 || n >= ((sizeof (int)) << 3))
        {
            printf("The number of shifts should be between 0 and %d.\n", ((sizeof (int)) << 3) - 1);
            break;
        }
        printf("\nBefore shifting, int a = %d, unsigned b = %u\n", a, b);
        a = sra(a, n);
        b = srl(b, n);
        printf("\nAfter shifting, int a = %d, unsigned b = %u\n", a, b);
        puts("\nEnter an integer and a positive integer (q or negative second number to quit): ");
    }
    puts("Done!");
    return 0;
}
The problem is that n is an unsigned short, which is smaller than an int. When you call scanf("%d", &n);, scanf stores an int's worth of data into n, which can overwrite the adjacent b if b happens to sit right after n in memory.
All you have to do is to change that problematic line into:
scanf("%hu", &n);
The h is a length modifier meaning short: %hu reads an unsigned short int.
I have tried to implement CRC in C. My logic is not very good. What I have tried is to copy the message (msg) into a temp variable, and at the end I append a number of zeros one less than the number of bits in the CRC divisor div.
for ex:
msg=11010011101100
div=1011
then temp becomes:
temp=11010011101100000
div= 10110000000000000
Finding the XOR of temp and div and storing it in temp gives temp=01100011101100000. Then I count the number of zeros appearing before the first '1' of temp, shift the characters of div right by that number, and repeat the process until the decimal value of temp becomes less than that of div, which gives the remainder.
My problem is when I append zeros at the end of temp it stores 0's along with some special characters like this:
temp=11010011101100000$#UFI#->Jp#|
and when I debugged I got error
Floating point:Stack Underflow
here is my code:
#include<stdio.h>
#include<conio.h>
#include<math.h>
#include<string.h>
void main() {
    char msg[100],div[100],temp[100];
    int i,j=0,k=0,l=0,msglen,divlen,newdivlen,ct=0,divdec=0,tempdec=0;
    printf("Enter the message\n");
    gets(msg);
    printf("\nEnter the divisor\n");
    gets(div);
    msglen=strlen(msg);
    divlen=strlen(div);
    newdivlen=msglen+divlen-1;
    strcpy(temp,msg);
    for(i=msglen;i<newdivlen;i++)
        temp[i]='0';
    printf("\nModified Temp:");
    printf("%s",temp);
    for(i=divlen;i<newdivlen;i++)
        div[i]='0';
    printf("\nModified div:");
    printf("%s",div);
    for(i=newdivlen;i>0;i--)
        divdec=divdec+div[i]*pow(2,j++);
    for(i=newdivlen;i>0;i--)
        tempdec=tempdec+temp[i]*pow(2,k++);
    while(tempdec>divdec)
    {
        for(i=0;i<newdivlen;i++)
        {
            temp[i]=(temp[i]==div[i])?'0':'1';
            while(temp[i]!='1')
                ct++;
        }
        for(i=newdivlen+ct;i>ct;i--)
            div[i]=div[i-ct];
        for(i=0;i<ct;i++)
            div[i]='0';
        tempdec=0;
        for(i=newdivlen;i>0;i--)
            tempdec=tempdec+temp[i]*pow(2,l++);
    }
    printf("%s",temp);
    getch();
}
and this part of the code :
for(i=newdivlen;i>0;i--)
divdec=divdec+div[i]*pow(2,i);
gives error Floating Point:Stack Underflow
The problem is that you wrote a 0 over the NUL terminator, and didn't put another NUL terminator on the string. So printf gets confused and prints garbage. Which is to say that this code
for(i=msglen;i<newdivlen;i++)
    temp[i]='0';
printf("\nModified Temp:");
printf("%s",temp);

should be

for(i=msglen;i<newdivlen;i++)
    temp[i]='0';
temp[i] = '\0'; // <--- NUL terminate the string
printf("\nModified Temp:");
printf("%s",temp);
You have to do this with integers
int CRC(unsigned int n);
int CRC_fast(unsigned int n);
void printbinary(unsigned int n);
unsigned int msb(register unsigned int n);
int main()
{
    char buf[5];
    strcpy(buf, "ABCD");
    //convert string to number,
    //this is like 1234 = 1*1000 + 2*100 + 3*10 + 4, but with hexadecimal
    unsigned int n = buf[0] * 0x1000000 + buf[1] * 0x10000 + buf[2] * 0x100 + buf[3];
    /*
    - "ABCD" becomes just a number
    - Any string of text can become a sequence of numbers
    - you can work directly with numbers and bits
    - shift the bits left and right using '<<' and '>>' operator
    - use bitwise operators & | ^
    - use basic math with numbers
    */
    //finding CRC, from Wikipedia example:
    n = 13548; // 11010011101100 in binary (14 bits long), 13548 in decimal
    //padding by 3 bits: left shift by 3 bits:
    n <<= 3; //11010011101100000 (now it's 17 bits long)
    //17 is "sort of" the length of integer, can be obtained from 1 + most significant bit of n
    int m = msb(n) + 1;
    printf("len(%d) = %d\n", n, m);
    int divisor = 11; //1011 in binary (4 bits)
    divisor <<= (17 - 4);
    //lets see the bits:
    printbinary(n);
    printbinary(divisor);
    unsigned int result = n ^ divisor;// XOR operator
    printbinary(result);
    //put this in function:
    n = CRC(13548);
    n = CRC_fast(13548);
    return 0;
}
void printbinary(unsigned int n)
{
    char buf[33];
    memset(buf, 0, 33);
    unsigned int mask = 1u << 31;
    //result in binary: 1 followed by 31 zeros
    for (int i = 0; i < 32; i++)
    {
        buf[i] = (n & mask) ? '1' : '0';
        //shift the mask by 1 bit to the right
        mask >>= 1;
        /*
        mask will be shifted like this:
        100000... first
        010000... second
        001000... third
        */
    }
    printf("%s\n", buf);
}
//find most significant bit
unsigned int msb(register unsigned int n)
{
    unsigned i = 0;
    while (n >>= 1)
        i++;
    return i;
}

int CRC(unsigned int n)
{
    printf("\nCRC(%d)\n", n);
    unsigned int polynomial = 11;
    unsigned int plen = msb(polynomial);
    unsigned int divisor;
    n <<= 3;
    for (;;)
    {
        int shift = msb(n) - plen;
        if (shift < 0) break;
        divisor = polynomial << shift;
        printbinary(n);
        printbinary(divisor);
        printf("-------------------------------\n");
        n ^= divisor;
        printbinary(n);
        printf("\n");
    }
    printf("result: %d\n\n", n);
    return n;
}

int CRC_fast(unsigned int n)
{
    printf("\nCRC_fast(%d)\n", n);
    unsigned int polynomial = 11;
    unsigned int plen = msb(polynomial);
    unsigned int divisor;
    n <<= 3;
    for (;;)
    {
        int shift = msb(n) - plen;
        if (shift < 0) break;
        n ^= (polynomial << shift);
    }
    printf("result: %d\n\n", n);
    return n;
}
Previous problems with string method:
This is infinite loop:
while (temp[i] != '1')
{
    ct++;
}
This one is too confusing:
for (i = newdivlen + ct; i > ct; i--)
    div[i] = div[i - ct];
I don't know what ct is. The for loops all go backward; this sometimes makes the code faster (maybe a nanosecond), but it makes it very confusing.
There is another while loop,
while (tempdec > divdec)
{
    //...
}
This may go on forever if you don't get the expected result. It makes it very hard to debug the code.
#include <stdio.h>
int NumberOfSetBits(int);
int main(int argc, char *argv[]) {
    int size_of_int = sizeof(int);
    int total_bit_size = size_of_int * 8;
    // binary representation of 3 is 0000011
    // C standard doesn't support binary representation directly
    int n = 3;
    int count = NumberOfSetBits(n);
    printf("Number of set bits is: %d\n", count);
    printf("Number of unset bits is: %d", total_bit_size - count);
}

int NumberOfSetBits(int x)
{
    int count = 0;
    //printf("x is: %d\n", x);
    while (x != 0) {
        //printf("%d\n", x);
        count += (x & 1);
        x = x >> 1;
    }
    return count;
}
Number of set bits is: 2
Number of unset bits is: 30
int size_of_int = sizeof(int);
int total_bit_size = size_of_int * 8;
^ that gets the size of an int on the system and multiplies it by 8, the number of bits in each byte
EDITED: Without the use of the ~
/*
Calculate how many set bits and unset bits are in a binary number aka how many 1s and 0s in a binary number
*/
#include <stdio.h>
unsigned int NumberOfSetBits(unsigned int);
unsigned int NumberOfUnSetBits(unsigned int x);

int main() {
    // binary representation of 3 is 0000011
    // C standard doesn't support binary representation directly
    unsigned int n = 3;
    printf("Number of set bits is: %u\n", NumberOfSetBits(n));
    printf("Number of unset bits is: %u", NumberOfUnSetBits(n));
    return 0;
}

unsigned int NumberOfSetBits(unsigned int x) {
    // counts the number of 1s
    unsigned int count = 0;
    while (x != 0) {
        count += (x & 1);
        // moves to the next bit
        x = x >> 1;
    }
    return count;
}

unsigned int NumberOfUnSetBits(unsigned int x) {
    // counts the number of 0s
    unsigned int count = 0;
    while(x != 0) {
        if ((x & 1) == 0) {
            count++;
        }
        // moves to the next bit
        x = x >> 1;
    }
    return count;
}
returns for input 3
Number of set bits is: 2
Number of unset bits is: 0
unset bits is 0? Doesn't seem right?
if I use NumberOfSetBits(~n) it returns 30
You've got a problem on some systems because you right shift a signed integer in your bit-counting function, which may shift 1's into the MSB each time for negative integers.
Use unsigned int (or just unsigned) instead:
int NumberOfSetBits(unsigned x)
{
    int count = 0;
    //printf("x is: %d\n", x);
    while (x != 0) {
        //printf("%d\n", x);
        count += (x & 1);
        x >>= 1;
    }
    return count;
}
If you fix that part of the problem, you can solve the other with:
int nbits = NumberOfSetBits(~n);
where ~ bitwise inverts the value in n, and hence the 'set bit count' counts the bits that were zeros.
There are also faster algorithms for counting the number of bits set: see Bit Twiddling Hacks.
To solve the NumberOfSetBits(int x) version without assuming 2's complement nor the absence of padding bits is a challenge.
@Jonathan Leffler has the right approach: use unsigned. Just thought I'd try a generic int one.
For x > 0, OP's code works fine:
int NumberOfSetBits_Positive(int x) {
    int count = 0;
    while (x != 0) {
        count += (x & 1);
        x = x >> 1;
    }
    return count;
}
Use the following to find the bit width and not count padding bits.
BitWidth = NumberOfSetBits_Positive(INT_MAX) + 1;
With this, the count of 0 or 1 bits is trivial.
int NumberOfClearBits(int x) {
    return NumberOfSetBits_Positive(INT_MAX) + 1 - NumberOfSetBits(x);
}

int NumberOfSetBits_Negative(int x) {
    return NumberOfSetBits_Positive(INT_MAX) + 1 - NumberOfSetBits_Positive(~x);
}
All that is left is to find the number of bits set when x is 0. +0 is easy: the answer is 0. But -0 (one's complement or sign-magnitude) is BitWidth or 1.
int NumberOfSetBits(int x) {
    if (x > 0) return NumberOfSetBits_Positive(x);
    if (x < 0) return NumberOfSetBits_Negative(x);
    // Code's assumption: only 1 or 2 forms of 0.
    // There may be more because of padding.
    int zero = 0;
    // does x have the same bit pattern as +0?
    if (memcmp(&x, &zero, sizeof x) == 0) return 0; // memcmp from <string.h>
    // Assume -0
    return NumberOfSetBits_Positive(INT_MAX) + 1 - NumberOfSetBits_Positive(~x);
}
Here is a proper way to count the number of zeros in a binary number:
#include <stdio.h>
unsigned int binaryCount(unsigned int x)
{
    unsigned int nb=0; // will count the number of zeros
    if(x==0) // for the case zero we need to return 1
        return 1;
    while(x!=0)
    {
        if ((x & 1) == 0) // test the rightmost bit of the number
        {
            nb++;
        }
        x=x>>1; // move to the next bit
    }
    return nb;
}

int main(int argc, char *argv[])
{
    int x;
    printf("input the number x:");
    scanf("%d",&x);
    printf("the number of 0 in the binary number of %d is %u \n",x,binaryCount(x));
    return 0;
}