How to translate these hex numbers into bitwise values? - c

I'm looking through the source code of a project written in C. Here is a list of options that are defined (no these aren't the real defines...not very descriptive!)
...
#define OPTION_5 32768
#define OPTION_6 65536
#define OPTION_7 0x20000L
#define OPTION_8 0x40000L
#define OPTION_9 0x80000L
I'd like to add a new option OPTION_10 but before I do that, I'd like to understand what exactly the hex numbers represent?
Do these numbers convert to the expected decimal values of 131,072, 262,144, and 524,288? If so, why not keep the same format as the earlier options?

Do these numbers convert to the expected decimal values of 131,072
Yes. You can use Google for the conversion: search for "0x20000 in decimal".
If so, why not keep the same format as the earlier options?
I guess simply because programmers know their powers of two up to 65536 and prefer hexadecimal above that, where they are more recognizable.
The L suffix forces the literal constant to be typed at least as a long int, but the chosen type may still be larger if that's necessary to hold the constant. It's probably unnecessary in your program, and the programmer used it because s/he didn't understand the emphasized clause. The nitty-gritty details are in 6.4.4.1, page 56 of the C99 standard.

Just a further thought to add to the existing answers, I prefer to define such flags more like this:
enum {
    OPTION_5_SHIFT = 15,
    OPTION_6_SHIFT,
    OPTION_7_SHIFT,
    OPTION_8_SHIFT,
    OPTION_9_SHIFT,
    OPTION_10_SHIFT
};
enum {
    OPTION_5 = 1L << OPTION_5_SHIFT,
    OPTION_6 = 1L << OPTION_6_SHIFT,
    OPTION_7 = 1L << OPTION_7_SHIFT,
    OPTION_8 = 1L << OPTION_8_SHIFT,
    OPTION_9 = 1L << OPTION_9_SHIFT,
    OPTION_10 = 1L << OPTION_10_SHIFT
};
This avoids having explicitly calculated constants and makes it much easier to insert/delete values, etc.

They represent the same kind of numbers; they are all powers of two. Or, to see it in a different light, they are all binary numbers with exactly one 1 (no pun intended).
One possible reason why they are written the way they are (even though the reason isn't a good one) is that many programmers know the following sequence by heart:
1
2
4
8
16
32
64
128
256
512
1024
2048
4096
8192
16384
32768
65536
This sequence corresponds to the first 17 powers of two (2^0 through 2^16). Beyond that, things are not as easy any more, so they probably switched to hex (being too lazy to change all the earlier numbers).

The specific values represent bit flag options, which can be combined with the bitwise OR operator |:
flags = (OPTION_5 | OPTION_6);
You will see from the binary representation of these values, that each has one unique bit set, to allow combining them using bitwise OR:
0x8000L = 32768 = 00000000 00000000 10000000 00000000
0x10000L = 65536 = 00000000 00000001 00000000 00000000
0x20000L = 131072 = 00000000 00000010 00000000 00000000
0x40000L = 262144 = 00000000 00000100 00000000 00000000
0x80000L = 524288 = 00000000 00001000 00000000 00000000
0x100000L = 1048576 = 00000000 00010000 00000000 00000000
To find out if a flag has been set in the flags variable, you can use the bitwise AND operator &:
if (flags & OPTION_6)
{
    /* OPTION_6 is active */
}

Each digit of a number represents a multiplication factor of the number's numerical system's base to the power of the digit's position in the number, counted from right to left, beginning with zero.
So 32768 = 8 * 10^0 + 6 * 10^1 + 7 * 10^2 + 2 * 10^3 + 3 * 10^4.
(Hint for the sake of completeness: x^0 = 1, x^1 = x.)
Hexadecimal numbers have 16 digits (0 - 9, A (~10) - F (~15)) and hence a base of 16, so 0x20 = 0 * 16^0 + 2 * 16^1.
Binary numbers have 2 digits and a base of 2, so 100b = 1 * 2^2 + 0 * 2^1 + 0 * 2^0.
Knowing that, you should be able to figure out the rest yourself: handle binary and hexadecimal numbers, see that each number you listed is twice its predecessor, work out what decimal values the hex numbers have and what the next number in the row should be, and express OPTION_10 in any numerical system, particularly binary, decimal and hexadecimal.

Related

Find how many times a pattern occurs in a binary number using C

Load a 32-bit non-negative integer (unsigned int) and an 8-bit pattern (unsigned int). It is not necessary to check the loaded numbers. Load both numbers in the decimal number system.
Determine the number of times a given pattern appears in the binary notation of a loaded 32-bit number.
It is not allowed to use the string.h library and aggregate data types.
For example:
32-bit number 514 00000000 00000000 00000010 00000010
8-bit number 2 00000010
So it should print that the number 2 occurs 2 times.
I'm not sure how to tackle this problem. I've tried to keep a counter that counts streaks, but it became too complicated too quickly.
This is a solution for specifically 8-bit patterns and 32-bit numbers:
for (int i = 0; i < 4; i++)                  /* 32 bits = 4 bytes */
    if (((number & (0xFFu << i * 8)) >> i * 8) == pattern)
        count++;
The general idea is: for every 8-bit sequence in the number, mask it (set all other bits to 0 by ANDing the number with 0xFF shifted however many bits are required) and shift it right to bring the masked sequence to the least significant position.

can't understand showbits() function in bitwise operators in C [closed]

/* Print binary equivalent of characters using showbits() function */
#include <stdio.h>

void showbits(unsigned char);

int main() {
    unsigned char num;
    for (num = 0; num <= 5; num++) {
        printf("\nDecimal %d is same as binary ", num);
        showbits(num);
    }
    return 0;
}

void showbits(unsigned char n) {
    int i;
    unsigned char j, k, andmask;
    for (i = 7; i >= 0; i--) {
        j = i;
        andmask = 1 << j;
        k = n & andmask;
        k == 0 ? printf("0") : printf("1");
    }
}
Sample numbers assigned for num : 0,1,2,3,4 ...
Can someone explain in detail what is going on in
k = n & andmask? How can n, which is a number such as 2, be an operand of the same & operator as andmask, e.g. 10000000, since 2 is a 1-digit value and 10000000 is a multiple-digit value?
Also, why is char used for n and not int?
Let's walk through it.
Assume n is 2. The binary representation of 2 is 00000010.
The first time through the loop j is equal to 7. The statement
andmask = 1 << j;
takes the binary representation of 1, which is 00000001, and shifts it left seven places, giving us 10000000, assigning to andmask.
The statement
k = n & andmask;
performs a bitwise AND operation on n and andmask:
00000010
& 10000000
--------
00000000
and assigns the result to k. Then if k is 0 it prints a "0", otherwise it prints a "1".
So, each time through the loop, it's basically doing
j andmask n result output
- -------- -------- -------- ------
7 10000000 & 00000010 00000000 "0"
6 01000000 & 00000010 00000000 "0"
5 00100000 & 00000010 00000000 "0"
4 00010000 & 00000010 00000000 "0"
3 00001000 & 00000010 00000000 "0"
2 00000100 & 00000010 00000000 "0"
1 00000010 & 00000010 00000010 "1"
0 00000001 & 00000010 00000000 "0"
Thus, the output is "00000010".
So the showbits function is printing out the binary representation of its input value. They're using unsigned char instead of int to keep the output easy to read (8 bits instead of 16 or 32).
Some issues with this code:
It assumes unsigned char is always 8 bits wide; while this is usually the case, it can be (and historically has been) wider than that. To be safe, it should use the CHAR_BIT macro defined in limits.h:

#include <limits.h>
...
for (i = CHAR_BIT - 1; i >= 0; i--)
{
    ...
}
?: is not a control structure and should not be used to replace an if-else; that would be more properly written as

printf("%c", k ? '1' : '0');
That tells printf to output a '1' if k is non-zero, '0' otherwise.
Can someone explain in detail what is going on in k = n & andmask? How can n, which is a number such as 2, be an operand of the same & operator as andmask, e.g. 10000000, since 2 is a 1-digit value and 10000000 is a multiple-digit value?
The number of digits in a number is a characteristic of a particular representation of that number. In context of the code presented, you actually appear to be using two different forms of representation yourself:
"2" seems to be expressed (i) in base 10, and (ii) without leading zeroes.
On the other hand, I take
"10000000" as expressed (i) in base 2, and (ii) without leading zeroes.
In this combination of representations, your claim about the numbers of digits is true, but not particularly interesting. Suppose we consider comparable representations. For example, what if we express both numbers in base 256? Both numbers have single-digit representations in that base.
Both numbers also have arbitrary-length multi-digit representations in base 256, formed by prepending any number of leading zeroes to the single-digit representations. And of course, the same is true in any base. Representations with leading zeroes are uncommon in human communication, but they are routine in computers because computers work most naturally with fixed-width numeric representations.
What matters for bitwise and (&) are base-2 representations of the operands, the width of one of C's built-in arithmetic types. According to the rules of C, the operands of any arithmetic operators are converted, if necessary, to a common numeric type. These have the same number of binary digits (i.e. bits) as each other, some of which often being leading zeroes. As I infer you understand, the & operator computes a result by combining corresponding bits from those base-2 representations to determine the bits of the result.
That is, the bits combined are
(leading zeroes)10000000 & (leading zeroes)00000010
Also why is char is used for n and not int?
It is unsigned char, not char, and it is used for both n and andmask. That is a developer choice. n could be made an int instead, and the showbits() function would produce the same output for all inputs representable in the original data type (unsigned char).
At first glance this function seems to print an 8-bit value in binary representation (0s and 1s). It creates a mask that isolates each bit of the char value (setting all other bits to 0) and prints "0" if the masked value is 0, or "1" otherwise. unsigned char is used here because the function is designed to print the binary representation of an 8-bit value. If int were used instead, only values in the range [0-255] would be printed correctly.
I don't understand your point with the 1-digit value and multiple-digit value.

Bits of the primitive type in C

Well, I'm starting my C studies and I was left with the following question: how are the bits of the primitive types filled in? For example, the int type has 4 bytes, that is 32 bits, which can hold 2^32 = 4,294,967,296 distinct values. But if, for example, I use a value that takes only 1 byte, what happens to the other bits?
#include <stdio.h>

int main(void) {
    int x = 5; // 101 -- how are the rest of the bits,
               // which were not used, filled in?
    return 0;
}
All leading bits will be set to 0; otherwise the value wouldn't be 5. A bit, in today's computers, has only two states, so if it's not 0 then it's 1, which would change the value stored. So, assuming 32 bits, you have:
5 == 0b00000000 00000000 00000000 00000101
5 == 0x00000005
The remaining bits are stored with 0.
int a = 356;
Now let us convert it to binary.
101100100
Now you get a 9-bit number. Since int allocates 32 bits, fill the remaining 23 bits with 0.
So the value stored in memory is
00000000 00000000 00000001 01100100
The type you have picked determines how large the integer is, not the value you store inside the variable.
If we assume that int is 32 bits on your system, then the value 5 will be expressed as a 32-bit number, which is 00000000 00000000 00000000 00000101 in binary or 0x00000005 in hex. If the other bits had any other values, it would no longer be the number 5, 32 bits large.

Using |= for CLI argument parsing [duplicate]

I'm someone who writes code just for fun and haven't really delved into it in either an academic or professional setting, so stuff like these bitwise operators really escapes me.
I was reading an article about JavaScript, which apparently supports bitwise operations. I keep seeing this operation mentioned in places, and I've tried reading about it to figure out what exactly it is, but I just don't seem to get it at all. So what are they? Clear examples would be great! :D
Just a few more questions - what are some practical applications of bitwise operations? When might you use them?
Since nobody has broached the subject of why these are useful:
I use bitwise operations a lot when working with flags. For example, if you want to pass a series of flags to an operation (say, File.Open(), with Read mode and Write mode both enabled), you could pass them as a single value. This is accomplished by assigning each possible flag its own bit in a bit set (byte, short, int, or long). For example:
Read: 00000001
Write: 00000010
So if you want to pass read AND write, you would pass (READ | WRITE), which combines the two into
00000011
which can then be decoded on the other end like:
if ((flag & Read) != 0) { //...
which checks
00000011 &
00000001
which returns
00000001
which is not 0, so the flag does specify READ.
You can use XOR to toggle various bits. I've used this when using a flag to specify directional inputs (Up, Down, Left, Right). For example, if a sprite is moving horizontally, and I want it to turn around:
Up: 00000001
Down: 00000010
Left: 00000100
Right: 00001000
Current: 00000100
I simply XOR the current value with (LEFT | RIGHT) which will turn LEFT off and RIGHT on, in this case.
Bit Shifting is useful in several cases.
x << y
is the same as
x * 2^y
if you need to quickly multiply by a power of two, but watch out for shifting a 1-bit into the top bit - this makes the number negative unless it's unsigned. It's also useful when dealing with different sizes of data. For example, reading an integer from four bytes:
int val = (A << 24) | (B << 16) | (C << 8) | D;
Assuming that A is the most-significant byte and D the least. It would end up as:
A = 01000000
B = 00000101
C = 00101011
D = 11100011
val = 01000000 00000101 00101011 11100011
Colors are often stored this way (with the most significant byte either ignored or used as Alpha):
A = 255 = 11111111
R = 21 = 00010101
G = 255 = 11111111
B = 0 = 00000000
Color = 11111111 00010101 11111111 00000000
To find the values again, just shift the bits to the right until it's at the bottom, then mask off the remaining higher-order bits:
Int Alpha = Color >> 24
Int Red = Color >> 16 & 0xFF
Int Green = Color >> 8 & 0xFF
Int Blue = Color & 0xFF
0xFF is the same as 11111111. So essentially, for Red, you would be doing this:
Color >> 16 = 00000000 00000000 11111111 00010101 (the low two bytes, 11111111 00000000, are shifted out and zeros fill in from the left)
  00000000 00000000 11111111 00010101
& 00000000 00000000 00000000 11111111
= 00000000 00000000 00000000 00010101 (the original value)
It is worth noting that the single-bit truth tables listed as other answers work on only one or two input bits at a time. What happens when you use integers, such as:
int x = 5 & 6;
The answer lies in the binary expansion of each input:
5 = 0 0 0 0 0 1 0 1
& 6 = 0 0 0 0 0 1 1 0
---------------------
0 0 0 0 0 1 0 0
Each pair of bits in each column is run through the "AND" function to give the corresponding output bit on the bottom line. So the answer to the above expression is 4. The CPU has done (in this example) 8 separate "AND" operations in parallel, one for each column.
I mention this because I still remember having this "AHA!" moment when I learned about this many years ago.
Bitwise operators are operators that work on a bit at a time.
AND is 1 only if both of its inputs are 1.
OR is 1 if one or more of its inputs are 1.
XOR is 1 only if exactly one of its inputs are 1.
NOT is 1 only if its input is 0.
These can be best described as truth tables. Inputs possibilities are on the top and left, the resultant bit is one of the four (two in the case of NOT since it only has one input) values shown at the intersection of the two inputs.
AND | 0 1     OR | 0 1
----+----    ----+----
  0 | 0 0      0 | 0 1
  1 | 0 1      1 | 1 1

XOR | 0 1    NOT | 0 1
----+----    ----+----
  0 | 0 1        | 1 0
  1 | 1 0
One example is if you only want the lower 4 bits of an integer, you AND it with 15 (binary 1111) so:
203: 1100 1011
AND 15: 0000 1111
------------------
IS 11: 0000 1011
These are the bitwise operators, all supported in JavaScript:
op1 & op2 -- The AND operator compares two bits and generates a result of 1 if both bits are 1; otherwise, it returns 0.
op1 | op2 -- The OR operator compares two bits and generates a result of 1 if either bit is 1; otherwise, it returns 0.
op1 ^ op2 -- The EXCLUSIVE-OR operator compares two bits and returns 1 if exactly one of the bits is 1; it returns 0 if both bits are the same.
~op1 -- The COMPLEMENT operator is used to invert all of the bits of the operand.
op1 << op2 -- The SHIFT LEFT operator moves the bits to the left, discards the far left bit, and assigns the rightmost bit a value of 0. Each move to the left effectively multiplies op1 by 2.
op1 >> op2 -- The SHIFT RIGHT operator moves the bits to the right, discards the far right bit, and copies the sign bit into the leftmost position. Each move to the right effectively divides op1 in half. The left-most sign bit is preserved.
op1 >>> op2 -- The SHIFT RIGHT - ZERO FILL operator moves the bits to the right, discards the far right bit, and assigns the leftmost bit a value of 0. Each move to the right effectively divides op1 in half. The left-most sign bit is discarded.
In digital computer programming, a bitwise operation operates on one or more bit patterns or binary numerals at the level of their individual bits. It is a fast, primitive action directly supported by the processor, and is used to manipulate values for comparisons and calculations.
operations:
bitwise AND
bitwise OR
bitwise NOT
bitwise XOR
etc
AND | 0 1     OR | 0 1
----+----    ----+----
  0 | 0 0      0 | 0 1
  1 | 0 1      1 | 1 1

XOR | 0 1    NOT | 0 1
----+----    ----+----
  0 | 0 1        | 1 0
  1 | 1 0
Eg.
203: 1100 1011
AND 15: 0000 1111
------------------
= 11: 0000 1011
Uses of bitwise operator
The left-shift and right-shift operators are equivalent to multiplication and division by powers of two: x << y equals x * 2^y, and x >> y equals x / 2^y (for non-negative values).
Eg.
int main()
{
    int x = 19;
    printf("x << 1 = %d\n", x << 1);
    printf("x >> 1 = %d\n", x >> 1);
    return 0;
}
// Output: 38 9
The & operator can be used to quickly check if a number is odd or even
Eg.
int main()
{
    int x = 19;
    (x & 1) ? printf("Odd") : printf("Even");
    return 0;
}
// Output: Odd
Quick find minimum of x and y without if else statement
Eg.
int min(int x, int y)
{
    return y ^ ((x ^ y) & -(x < y));
}
Decimal to binary conversion
Eg.
#include <stdio.h>

int main()
{
    int n, c, k;
    printf("Enter an integer in decimal number system\n");
    scanf("%d", &n);
    printf("%d in binary number system is:\n", n);
    for (c = 31; c >= 0; c--)
    {
        k = n >> c;
        if (k & 1)
            printf("1");
        else
            printf("0");
    }
    printf("\n");
    return 0;
}
The XOR encryption technique is popular because of its simplicity and its rare use by programmers.
The bitwise XOR operator is the most useful operator from a technical interview perspective.
Bitwise shifting is only well-defined for non-negative values (in C, left-shifting a negative number is undefined behavior).
Also there is a wide range of use of bitwise logic
To break it down a bit more, it has a lot to do with the binary representation of the value in question.
For example (in decimal):
x = 8
y = 1
would come out to (in binary):
x = 1000
y = 0001
From there, you can do computational operations such as 'and' or 'or'; in this case:
x | y =
1000
0001 |
------
1001
or...9 in decimal
Hope this helps.
When the term "bitwise" is mentioned, it is sometimes to clarify that the operator is not a "logical" one.
For example in JavaScript, bitwise operators treat their operands as a sequence of 32 bits (zeros and ones); meanwhile, logical operators are typically used with Boolean (logical) values but can work with non-Boolean types.
Take expr1 && expr2 for example.
Returns expr1 if it can be converted
to false; otherwise, returns expr2.
Thus, when used with Boolean values,
&& returns true if both operands are
true; otherwise, returns false.
a = "Cat" && "Dog" // t && t returns Dog
a = 2 && 4 // t && t returns 4
As others have noted, 2 & 4 is a bitwise AND, so it will return 0.
You can copy the following to test.html or something and test:
<html>
<body>
<script>
alert("\"Cat\" && \"Dog\" = " + ("Cat" && "Dog") + "\n"
    + "2 && 4 = " + (2 && 4) + "\n"
    + "2 & 4 = " + (2 & 4));
</script>
</body>
</html>
It might help to think of it this way. This is how AND (&) works:
It basically asks: are both of these bits ones? So if you have the two numbers 5 and 3, they will be converted into binary and the computer will think
5: 00000101
3: 00000011
are both bits one: 00000001
0 is false, 1 is true
So the AND of 5 and 3 is one. The OR (|) operator does the same thing except only one of the numbers must be one to output 1, not both.
I kept hearing about how slow JavaScript bitwise operators were. I did some tests for my latest blog post and found out they were 40% to 80% faster than the arithmetic alternative in several tests. Perhaps they used to be slow. In modern browsers, I love them.
I have one case in my code that will be faster and easier to read because of this. I'll keep my eyes open for more.

Confused with Union in C

I could not understand how Union works..
#include <stdio.h>
#include <stdlib.h>
int main()
{
    union {
        int a : 4;
        char b[4];
    } abc;

    abc.a = 0xF;
    printf(" %zu, %d, %d, %d, %d, %d\n", sizeof(abc), abc.a, abc.b[0], abc.b[1], abc.b[2], abc.b[3]);
    return 0;
}
In the above program.
I made int a : 4; so a should be taking 4 bits.
Now I am storing a = 0xF; // i.e. a = 1111 (binary form)
So when I am accessing b[0], b[1], b[2] or b[3], why is the output not coming out as 1, 1, 1, 1?
Your union's total size will be at least 4 * sizeof(char).
Assuming the compiler you are using handles this as defined behavior, consider the following:
abc is never fully initialized, so it contains a random assortment of zeros and ones. Big problem. So, do this first: memset(&abc, 0, sizeof(abc));
The union should be the size of its largest member, so you should now have 4 zeroed-out bytes: 00000000 00000000 00000000 00000000
You are only setting 4 bits high, so your union will become something like this:
00000000 00000000 00000000 00001111 or 11110000 00000000 00000000 00000000. I'm not sure how your compiler handles this type of alignment, so this is the best I can do.
You might also consider doing a char-to-bits conversion so you can manually inspect the value of each and every bit in binary format:
Access individual bits in a char c++
Best of luck!
0xF is -1 if you look at it as a 4-bit signed value, so the output is normal. b is never fully assigned, so its value is indeterminate. It's a 4-byte entity but you only assign a 4-bit entity. So everything looks normal to me.
Because every char takes (on most platforms) 1 byte i.e. 8 bits, so all the 4 bits of a fall into a single element of b[].
And besides that, it is compiler-dependent how bit-fields are stored, so it is not defined which byte of b[] that maps into...
0xF is -1 if you defined it to be a 4 bit signed number. Check two-complement binary representation to understand why.
And you didn't initialize b, so it could be holding any random value.
