/* Print binary equivalent of characters using showbits( ) function */
#include <stdio.h>

void showbits(unsigned char);

int main() {
    unsigned char num;

    for (num = 0; num <= 5; num++) {
        printf("\nDecimal %d is same as binary ", num);
        showbits(num);
    }
    return 0;
}

void showbits(unsigned char n) {
    int i;
    unsigned char j, k, andmask;

    for (i = 7; i >= 0; i--) {
        j = i;
        andmask = 1 << j;
        k = n & andmask;
        k == 0 ? printf("0") : printf("1");
    }
}
Sample values assigned to num: 0, 1, 2, 3, 4, ...
Can someone explain in detail what is going on in
k = n & andmask? How can n, which is a number such as 2, be an operand of the same & operator as andmask, e.g. 10000000, since 2 is a one-digit value and 10000000 is a multi-digit value?
Also, why is char used for n and not int?
Let's walk through it.
Assume n is 2. The binary representation of 2 is 00000010.
The first time through the loop j is equal to 7. The statement
andmask = 1 << j;
takes the binary representation of 1, which is 00000001, and shifts it left seven places, giving us 10000000, which is assigned to andmask.
The statement
k = n & andmask;
performs a bitwise AND operation on n and andmask:
  00000010
& 10000000
----------
  00000000
and assigns the result to k. Then if k is 0 it prints a "0", otherwise it prints a "1".
So, each time through the loop, it's basically doing
j    andmask       n       result   output
-    --------   --------   -------- ------
7    10000000 & 00000010   00000000   "0"
6    01000000 & 00000010   00000000   "0"
5    00100000 & 00000010   00000000   "0"
4    00010000 & 00000010   00000000   "0"
3    00001000 & 00000010   00000000   "0"
2    00000100 & 00000010   00000000   "0"
1    00000010 & 00000010   00000010   "1"
0    00000001 & 00000010   00000000   "0"
Thus, the output is "00000010".
So the showbits function is printing out the binary representation of its input value. They're using unsigned char instead of int to keep the output easy to read (8 bits instead of 16 or 32).
Some issues with this code:
It assumes unsigned char is always 8 bits wide; while this is usually the case, it can be (and historically has been) wider than this. To be safe, it should use the CHAR_BIT macro defined in limits.h:
#include <limits.h>
...
for ( i = CHAR_BIT - 1; i >= 0; i-- )
{
    ...
}
?: is not a control structure and should not be used to replace an if-else - that would be more properly written as
printf( "%c", k ? '1' : '0' );
That tells printf to output a '1' if k is non-zero, '0' otherwise.
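Putting both fixes together, a corrected showbits() might look something like this sketch:
#include <limits.h>
#include <stdio.h>

/* Sketch of showbits() with both fixes applied: the loop bound comes from
   CHAR_BIT rather than a hard-coded 8, and the ternary lives inside a printf
   argument instead of standing in for an if-else. */
void showbits(unsigned char n)
{
    int i;
    for (i = CHAR_BIT - 1; i >= 0; i--)
        printf("%c", (n & (1u << i)) ? '1' : '0');
}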
Can someone explain in detail what is going on in k = n & andmask? How can n, which is a number such as 2, be an operand of the same & operator as andmask, e.g. 10000000, since 2 is a one-digit value and 10000000 is a multi-digit value?
The number of digits in a number is a characteristic of a particular representation of that number. In the context of the code presented, you actually appear to be using two different forms of representation yourself:
"2" seems to be expressed (i) in base 10, and (ii) without leading zeroes.
On the other hand, I take
"10000000" as expressed (i) in base 2, and (ii) without leading zeroes.
In this combination of representations, your claim about the numbers of digits is true, but not particularly interesting. Suppose we consider comparable representations. For example, what if we express both numbers in base 256? Both numbers have single-digit representations in that base.
Both numbers also have arbitrary-length multi-digit representations in base 256, formed by prepending any number of leading zeroes to the single-digit representations. And of course, the same is true in any base. Representations with leading zeroes are uncommon in human communication, but they are routine in computers because computers work most naturally with fixed-width numeric representations.
What matters for bitwise AND (&) are the base-2 representations of the operands at the width of one of C's built-in arithmetic types. According to the rules of C, the operands of any arithmetic operator are converted, if necessary, to a common numeric type. They then have the same number of binary digits (i.e. bits) as each other, some of which are often leading zeroes. As I infer you understand, the & operator combines corresponding bits from those base-2 representations to determine the bits of the result.
That is, the bits combined are
(leading zeroes)10000000 & (leading zeroes)00000010
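A tiny demonstration (not part of the original program) of how the usual conversions line the two operands up bit for bit:
#include <stdio.h>

int main(void)
{
    unsigned char n = 2;          /* ...00000010 once promoted to int */
    unsigned char andmask = 0x80; /* ...10000000 once promoted to int */

    /* Both operands are promoted to a common type (int) before &, so they
       have the same width; the "missing" digits of 2 are leading zeroes. */
    printf("%d\n", n & andmask);  /* prints 0: no set bit positions overlap */
    return 0;
}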
Also why is char is used for n and not int?
It is unsigned char, not char, and it is used for both n and andmask. That is a developer choice. n could be made an int instead, and the showbits() function would produce the same output for all inputs representable in the original data type (unsigned char).
At first glance this function seems to print an 8-bit value in binary representation (0s and 1s). It creates a mask that isolates each bit of the char value (setting all other bits to 0) and prints "0" if the masked value is 0 or "1" otherwise. char is used here because the function is designed to print the binary representation of an 8-bit value. If int were used here, only values in the range [0-255] would be printed correctly.
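To illustrate that choice, here is a hypothetical variant (not from the original posts) that prints a full unsigned int; its output is 16, 32 or more digits long, which is why the unsigned char version keeps the demonstration readable:
#include <limits.h>
#include <stdio.h>

/* Hypothetical showbits_uint(): the same masking idea applied to a full
   unsigned int, printing sizeof(unsigned int) * CHAR_BIT digits. */
void showbits_uint(unsigned int n)
{
    int bits = (int)(sizeof(unsigned int) * CHAR_BIT);
    int i;
    for (i = bits - 1; i >= 0; i--)
        printf("%c", (n & (1u << i)) ? '1' : '0');
}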
I don't understand your point with the 1-digit value and multiple-digit value.
I am new at programming and trying to understand the following program.
This program gets the minimum number of bits needed to store an integer as a number.
#include <stdio.h>

/* function declaration
 * name      : countBit
 * Desc      : to get bits to store an int number
 * Parameter : int
 * return    : int
 */
int countBit(int);

int main()
{
    int num;

    printf("Enter an integer number :");
    scanf("%d", &num);
    printf("Total number of bits required = %d\n", countBit(num));
    return 0;
}

int countBit(int n)
{
    int count = 0, i;

    if (n == 0) return 0;
    for (i = 0; i < 32; i++)
    {
        if ((1 << i) & n)
            count = i;
    }
    return ++count;
}
Can you please explain how the if( (1 << i) & n) condition works?
To begin you should read up on Bitwise Operators.
for (i = 0; i < 32; i++)
{
    // Check if the bit at position i is set to 1
    if ((1 << i) & n)
        count = i;
}
In plain English, this is checking what the highest position of any "set" bit is.
This program gets the minimum number of bits needed to store an integer as a number.
Getting the position of the largest "set" bit will tell us how many bits we need to store that number. If we used a lesser amount of bits, then we would be reducing our maximum possible number to below our desired integer.
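As a quick check, here is a minimal usage sketch (the sample values are chosen for illustration and are not from the original question); it assumes countBit is defined exactly as above:
#include <stdio.h>

int countBit(int);  /* defined as in the question */

int main(void)
{
    printf("%d\n", countBit(1));    /* 1: binary 1 */
    printf("%d\n", countBit(5));    /* 3: binary 101 */
    printf("%d\n", countBit(100));  /* 7: binary 1100100 */
    return 0;
}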
"<<" and "&" are bitwise operators, that manipulate a given (usually unsigned integer) variable's bits. You can read more about such operators here. In your case,
1<<i
is the number whose binary representation is a 1 followed by i zeroes (and preceded only by zeroes as well). Overall, the check
(1<<i)&n
evaluates to true if the i-th bit of the variable n is 1, and false otherwise; therefore the loop finds the leftmost bit that is 1 in the given number.
It's very simple if you understand bitwise operators.
Shift operator: << is the left shift operator, which shifts a value by the designated number of bits. In C, x << 1 will shift x by 1 bit.
Let's just consider 8-bit values for now, and let's say x is 100 in decimal, which is 0x64 in hexadecimal; its binary representation is 0110 0100.
Using the shift operator, let's shift this value left by 1 bit. So,
0 1 1 0 0 1 0 0
becomes
0 1 1 0 0 1 0 0 0
^               ^
Discarded       Padded
The leftmost bit is shifted out of the 8-bit value (discarded) and the rightmost bit is padded with a 0, leaving 1100 1000.
The number becomes 0xC8, which is 200 in decimal, double the previous value!
The same goes for the >> operator; try it yourself if you haven't. The result should be half the value, except when you try it on 0x01 :-)
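If you want to verify this, here is a small sketch (not from the original answer) printing the doubled and halved values, including the 0x01 edge case:
#include <stdio.h>

int main(void)
{
    unsigned char x = 100;                   /* 0110 0100 */

    printf("%u\n", (unsigned)(x << 1));      /* 200: 1100 1000, doubled */
    printf("%u\n", (unsigned)(x >> 1));      /* 50 : 0011 0010, halved  */
    printf("%u\n", (unsigned)(0x01 >> 1));   /* 0  : the single 1 bit is shifted out */
    return 0;
}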
As a side note, when you grow up and start looking at the way the shell/console is used by developers, you'll understand that > has a different purpose there.
The & operator: Firstly, && and & are different. The first is a logical operator and the latter is a bitwise operator.
Let's pick a number again, 100.
In a logical AND operation, the end result is always true or false. For example, 0x64 && 0x64 results in true; if either operand were 0, the result would be false.
The bitwise AND operation, on the other hand, is used here to ask: is the ith bit of 0x64 set? If yes, the result is non-zero (true); otherwise it is 0 (false).
The if statement:
if( (1 << i) & n)
is doing just that. On every iteration of the loop, it left-shifts 1 by i bits and then checks whether the ith bit of n is set; the condition is true if it is set, false otherwise.
Programmers usually use a macro for this, which makes it more readable.
#define CHECK_BIT(value, position) ((value) & (1 << (position)))
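For instance, a minimal sketch of how the macro might be used (100 reuses the example value from above):
#include <stdio.h>

#define CHECK_BIT(value, position) ((value) & (1 << (position)))

int main(void)
{
    int n = 100;  /* 0110 0100 */

    printf("%s\n", CHECK_BIT(n, 2) ? "set" : "clear");  /* set:   bit 2 of 100 is 1 */
    printf("%s\n", CHECK_BIT(n, 0) ? "set" : "clear");  /* clear: bit 0 of 100 is 0 */
    return 0;
}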
I saw the following code(part of a function):
if (end == start)
{
    *max = *min = *start;
    return 0x80000000;
}
I don't understand why it returns 0x80000000, which is 2^31; that value is out of int's range and has type unsigned int.
And what is it equal to?
Complete code:
int MaxDiffCore(int* start, int* end, int* max, int* min)
{
    if (end == start)
    {
        *max = *min = *start;
        return 0x80000000;
    }

    int* middle = start + (end - start) / 2;

    int maxLeft, minLeft;
    int leftDiff = MaxDiffCore(start, middle, &maxLeft, &minLeft);

    int maxRight, minRight;
    int rightDiff = MaxDiffCore(middle + 1, end, &maxRight, &minRight);

    int crossDiff = maxLeft - minRight;

    *max = (maxLeft > maxRight) ? maxLeft : maxRight;
    *min = (minLeft < minRight) ? minLeft : minRight;

    int maxDiff = (leftDiff > rightDiff) ? leftDiff : rightDiff;
    maxDiff = (maxDiff > crossDiff) ? maxDiff : crossDiff;
    return maxDiff;
}
The bit pattern 0x80000000 is not too wide for an int here. The size of int is platform dependent, and there are platforms where int is 32 bits wide. This constant is 32 bits wide, so it will do a "straight bit assignment" to the int.
Yes, the decimal representation of this number is 2^31, but that's only if you interpret the bits as unsigned. You really need to look at the L-value to know how the bits are going to be handled, and here that's a signed int.
Now, assuming this is a 32-bit platform, this is a fancy way to write INT_MIN, and by fancy I mean non-portable, requiring assumptions that aren't guaranteed, and confusing to those who don't want to do the binary math. It assumes 2's complement representation and opts to set the bits directly.
Basically, with 2's complement numbers, zero is still
0x00000000
but to get -1 + 1 = 0 you have to find something that, when you add 1 to it, yields 0:
  0x????????
+ 0x00000001
= 0x00000000
So you choose
  0xFFFFFFFF
+ 0x00000001
= 0x00000000
relying on the carried 1's to eventually walk off the end. You can then deduce that one lower is -2 = 0xFFFFFFFE, and so on. Since the first bit determines the "sign" of the number, the "biggest" negative number you could have is 0x80000000, and if you tried to subtract 1 from that, the borrow from the "negative" sign bit would yield the largest positive number, 0x7FFFFFFF.
If the constant has type unsigned int on your platform and the function is declared as returning int, then the unsigned int value will get implicitly converted to type int. If the original value does not fit into int range, the result of this conversion is implementation-defined (or it might raise a signal).
6.3.1.3 Signed and unsigned integers
3 Otherwise, the new type is signed and the value cannot be represented in it; either the result is
implementation-defined or an implementation-defined signal is raised.
Consult your compiler documentation to see what will happen in this case. Apparently the authors of the code came to the conclusion that their implementation does exactly what they wanted it to do. For example, a natural thing to expect is for the implementation to "reinterpret" this bit pattern as a signed integer value, with the highest-order bit becoming the sign bit. That converts 0x80000000 into the INT_MIN value on a 2's-complement platform with 32-bit ints.
This is bad practice, though, and the return value should be corrected.
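One possible correction, sketched below under the assumption that the sentinel is meant to be "the smallest possible int": include <limits.h> and return INT_MIN, which avoids the implementation-defined conversion entirely:
#include <limits.h>
...
    if (end == start)
    {
        *max = *min = *start;
        return INT_MIN;   /* portable smallest-int sentinel instead of 0x80000000 */
    }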
The code is returning the bit pattern 10000000 00000000 00000000 00000000.
On machines where sizeof(int) = 4 bytes:
Since the function's return type is int, that pattern 10000000 00000000 00000000 00000000 is treated as the integer -2147483648 (a negative value).
If the function's return type had been unsigned int, it would have treated 0x80000000, i.e. 10000000 00000000 00000000 00000000, as the integer 2147483648.
On machines where sizeof(int) = 2 bytes:
The result would be implementation-defined or an implementation-defined signal would be raised [see other answer].
I don't know why, but I have tried all the implementations I've found here on Stack Overflow, and every one of them has this strange behaviour.
My aim is: given a decimal number, convert it to binary and retrieve the bit at a given position.
Let's say I got this binary, which is decimal 255:
11111111
Well using this:
#define CHECK_BIT(var, pos) ((var) & (1 << (pos)))  // other test
#define NCHECK_BIT(int, pos) (!! int & (1 << pos))

unsigned char* REG_SW;

// from int to binary
unsigned int int_to_int(unsigned int k) {
    return (k == 0 || k == 1 ? k : ((k % 2) + 10 * int_to_int(k / 2)));
}

int checkBitPosition() {
    REG_SW = (unsigned char*) 0x300111;
    return NCHECK_BIT(int_to_int(REG_SW[0]), 6) == false ? 1 : 0;
}
it returns true when the bit is 0 at the sixth position, but also at the second (and some others, actually!!), so the results are unpredictable... I don't get the logic behind the results it gives me.
I assume that the int-to-binary translator works well, because I'm printing its output and it's always the correct binary for the int I give it.
Can anybody help me?
Thanks in advance!
You are converting the number into its binary-looking decimal form first and then checking that with your macros. This is wrong, since your macros will then check the bit at the given position in the resulting number, not in your original number. That is:
say your number is 10; its binary format is 1010.
To check the bits, you have to pass 10 as the argument to your macros, not 1010.
If you pass the binary representation 1010 to the macro, then the binary representation of the number 1010 (one thousand and ten) will be checked at pos, not that of 10.
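A minimal sketch of the intended usage, applying the CHECK_BIT macro from the question directly to the original value rather than to int_to_int's output (the sample value 10 is just for illustration):
#include <stdio.h>

#define CHECK_BIT(var, pos) ((var) & (1 << (pos)))

int main(void)
{
    unsigned char value = 10;  /* binary 1010: check this value directly */

    printf("bit 1: %s\n", CHECK_BIT(value, 1) ? "set" : "clear");  /* set   */
    printf("bit 2: %s\n", CHECK_BIT(value, 2) ? "set" : "clear");  /* clear */
    printf("bit 3: %s\n", CHECK_BIT(value, 3) ? "set" : "clear");  /* set   */
    return 0;
}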
When I do something like 0x01AE1 - 0x01AEA, I get fffffff7. I only want the last 3 digits, so I used the modulus trick to remove the extra digits. The displacement gets filled with hex values.
int extra_crap = 0;
int extra_crap1 = 0;
int displacement = 0;
int val1 = 0;
int val2 = 0;
displacement = val1 - val2;
extra_crap = displacement % 0x100;
extra_crap1 = displacement % 256;
printf(" extra_crap is %x \n", extra_crap);
printf(" extra_crap1 is %x \n", extra_crap1);
Unfortunately this is having no effect at all. Is there another way to remove all but the last 3 digits?
'Unfortunately this is having no effect at all.'
That's probably because you do your calculations on signed int. Try casting the value to unsigned, or simply forget the remainder operator % and use bitwise masking:
displacement & 0xFF;
displacement & 255;
for two hex digits or
displacement & 0xFFF;
displacement & 4095;
for three digits.
EDIT – some explanation
A detailed answer would be quite long... You need to learn about the data types used in C (esp. int and unsigned int, two of the most used integral types), the range of values that can be represented in those types, and their internal representation in Two's complement code. Also about Integer overflow and the Hexadecimal system.
Then you will easily get what happened to your data: subtracting 0x01AE1 - 0x01AEA, that is 6881 - 6890, gave the result of -9, which in a 32-bit signed integer encoded with 2's complement and printed in hexadecimal is FFFFFFF7. That MINUS NINE divided by 256 gave a quotient of ZERO and a remainder of MINUS NINE, so the remainder operator % gave you a precise and correct result. What you call 'no effect at all' is just a result of not understanding what the code was actually doing.
My answer above (variant 1) is not any kind of magic, but just a way to enforce calculation on positive numbers. Casting the value to an unsigned type makes the program interpret 0xFFFFFFF7 as 4294967287, which divided by 256 (0x100 in hex) gives a quotient of 16777215 (0xFFFFFF) and a remainder of 247 (0xF7). Variant 2 does no division at all and just 'masks' the necessary bits: the numbers 255 and 4095 have 8 and 12 low-order bits equal to 1 (0xFF and 0xFFF in hexadecimal, respectively), so bitwise AND does exactly what you want: it removes the higher part of the value, leaving just the required two or three low-order hex digits.
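A small sketch (assuming 32-bit int, as in the explanation above) showing the signed remainder next to the unsigned and masking approaches:
#include <stdio.h>

int main(void)
{
    int displacement = 0x01AE1 - 0x01AEA;            /* -9, i.e. 0xFFFFFFF7 on a 32-bit int */

    printf("%d\n", displacement % 0x100);            /* -9 : remainder of a negative value stays negative */
    printf("%x\n", displacement & 0xFF);             /* f7 : last two hex digits  */
    printf("%x\n", displacement & 0xFFF);            /* ff7: last three hex digits */
    printf("%x\n", (unsigned)displacement % 0x100u); /* f7 : remainder on the unsigned value */
    return 0;
}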
I am reading The C Programming Language by Brian Kernigan and Dennis Ritchie. Here is what it says about the bitwise AND operator:
The bitwise AND operator & is often used to mask off some set of bits, for example,
n = n & 0177
sets to zero all but the low order 7 bits of n.
I don't quite see how it is masking the lower seven order bits of n. Please can somebody clarify?
The number 0177 is an octal number representing the binary pattern below:
0000000001111111
When you AND it using the bitwise operation &, the result keeps the bits of the original only in the bits that are set to 1 in the "mask"; all other bits become zero. This is because "AND" follows this rule:
X & 0 -> 0 for any value of X
X & 1 -> X for any value of X
For example, if you AND 0177 and 052525, you get
0000000001111111 -- 0000177
0101010101010101 -- 0052525
---------------- -------
0000000001010101 -- 0000125
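The same masking can be verified in code; here is a minimal sketch using the numbers from the example above:
#include <stdio.h>

int main(void)
{
    int n = 052525;     /* octal literal: 0101010101010101 in binary */

    n = n & 0177;       /* keep only the low-order 7 bits */
    printf("%o\n", n);  /* prints 125 (octal), i.e. binary 1010101 */
    return 0;
}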
In C an integer literal prefixed with 0 is an octal number so 0177 is an octal number.
Each octal digit (of value 0 to 7) is represented with 3 bits and 7 is the greatest value for each digit. So a value of 7 in octal means 3 bits set.
Since 0177 is an octal literal and each octal digit is three bits, you have the following binary equivalents:
7 = 111
1 = 001
Which means 0177 is 001111111 in binary.
It has already been explained that a leading '0' marks the octal representation of a number in ANSI C. The number 0177 (octal) is the same as 127 (decimal), which is 128-1 and can also be written as 2^7-1; a value of 2^n-1 in binary is n consecutive 1's in the low-order (rightmost) positions.
0177 = 127 = 128-1
which is a bitmask;
00000000000000000000000001111111
You can check the code below:
Demo
#include <stdio.h>

int main()
{
    int n = 0177; // octal representation of 127
    printf("Decimal:[%d] : Octal:[%o]\n", n, n);

    n = 127;      // decimal representation of 127
    printf("Decimal:[%d] : Octal:[%o]\n", n, n);
    return 0;
}
Output
Decimal:[127] : Octal:[177]
Decimal:[127] : Octal:[177]
0177 is an octal value. Each octal digit is represented by 3 bits, covering the values 000 to 111, so 0177 translates to 001111111 in binary (i.e. 001|111|111). Considered as a 32-bit value (it could be 64-bit too; the remaining digits are filled according to the MSB, i.e. the sign bit, which is 0 in this case), that is 00000000000000000000000001111111. Performing a bitwise AND of it with a given number outputs the lower 7 bits of that number, turning the rest of the bits in the n-bit number to 0
(since x&0 = 0 and x&1 = x, e.g. 0&0=0, 1&0=0, 1&1=1, 0&1=0).