Logic of a bit masking XOR code - c

I have code that combines two hex values and stores the result in a new unsigned char. The code looks like the following:
unsigned char OldSw = 0x1D;
unsigned char NewSw = 0xF0;
unsigned char ChangedSw;
ChangedSw = (OldSw ^ ~NewSw) & ~OldSw;
So what I know is:
0x1D = 0001 1101
0xF0 = 1111 0000
I'm confused about what the ChangedSw line is doing. I know it will give the output 0x02, but I cannot figure out how it's doing it.

ChangedSw = (OldSw ^ ~NewSw) & ~OldSw;
It means "zero out one part of OldSw and invert the other part". NewSw indicates which bits of OldSw to zero and which to invert: 1's in NewSw mark bits to be zeroed, 0's mark bits to be inverted.
The operation is implemented in two steps.
Step 1. Invert bits.
(OldSw ^ ~NewSw):
  0001 1101
^ 0000 1111
-----------
  0001 0010
See, we inverted the bits that were 0's in the original NewSw.
Step 2. Zero bits which were not inverted in previous step.
& ~OldSw:
  0001 0010
& 1110 0010
-----------
  0000 0010
See, it doesn't change the inverted bits, but zeroes all the rest.
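To verify, here is a minimal compilable sketch of the same computation (the intermediate variable step1 is just for illustration):

#include <stdio.h>

int main(void)
{
    unsigned char OldSw = 0x1D;                               /* 0001 1101 */
    unsigned char NewSw = 0xF0;                               /* 1111 0000 */

    unsigned char step1 = OldSw ^ (unsigned char)~NewSw;      /* invert the bits NewSw marks with 0 */
    unsigned char ChangedSw = step1 & (unsigned char)~OldSw;  /* zero everything that was not inverted */

    printf("step1     = 0x%02X\n", step1);      /* 0x12 */
    printf("ChangedSw = 0x%02X\n", ChangedSw);  /* 0x02 */
    return 0;
}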

The first part, (OldSw ^ ~NewSw), evaluates to 0x12, i.e. 0001 0010. When that is ANDed with ~OldSw (1110 0010), the operation looks like this:
  0001 0010
& 1110 0010
-----------
  0000 0010
So the output will be 0x02. The tilde operator is the 1's complement: it flips every bit.

Related

Calculation of bitwise NOT

How do I calculate ~a manually? I see this type of question very often.
#include <stdio.h>

int main()
{
    unsigned int a = 10;
    a = ~a;
    printf("%d\n", a);
}
The result of the ~ operator is the bitwise complement of its (promoted) operand
C11dr §6.5.3.3
When used with an unsigned type, it is sufficient to mimic ~ with an exclusive-or against UINT_MAX, which has the same type and value as (unsigned) -1.
unsigned int a = 10;
// a = ~a;
a ^= -1;
You could XOR it with a bitmask of all 1's.
unsigned int a = 10, mask = 0xFFFFFFFF;
a = a ^ mask;
This is assuming of course that an int is 32 bits. That's why it makes more sense to just use ~.
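For comparison, a minimal sketch showing that all three spellings produce the same value for an unsigned int:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    unsigned int a = 10;

    unsigned int v1 = ~a;            /* bitwise complement */
    unsigned int v2 = a ^ UINT_MAX;  /* XOR with all 1 bits of the same width */
    unsigned int v3 = a ^ -1u;       /* -1u has the value UINT_MAX */

    printf("%u %u %u\n", v1, v2, v3);  /* all three print the same number */
    return 0;
}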
Just convert the number to binary form, and change every '1' to '0' and every '0' to '1'.
That is:
10 (decimal)
Converted to binary (32 bits as usual in an int) gives us:
0000 0000 0000 0000 0000 0000 0000 1010
Then apply the ~ operator:
1111 1111 1111 1111 1111 1111 1111 0101
Now you have a number that could be interpreted as an unsigned 32-bit number, or as a signed one. As you are using %d in your printf, it gets interpreted as signed.
To find the decimal value of a signed (two's-complement) number, do this:
If the most significant bit (the leftmost) is 0, then just convert the binary number back to decimal as usual.
If the most significant bit is 1 (our case here), then change every '1' to '0' and '0' to '1', add 1, convert to decimal, and prepend a minus sign to the result.
So it is:
1111 1111 1111 1111 1111 1111 1111 0101
Its most significant bit (the leftmost) is 1, so first we change the 0's and 1's:
0000 0000 0000 0000 0000 0000 0000 1010
And then, we add 1
  0000 0000 0000 0000 0000 0000 0000 1010
+ 0000 0000 0000 0000 0000 0000 0000 0001
-----------------------------------------
  0000 0000 0000 0000 0000 0000 0000 1011
Take this number, convert it back to decimal, and prepend a minus sign. The converted value is 11; with the minus sign, it is -11.
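A small sketch showing both interpretations of that bit pattern (the exact numbers assume a 32-bit, two's-complement int):

#include <stdio.h>

int main(void)
{
    unsigned int a = 10;
    a = ~a;                               /* bit pattern ...1111 0101 */

    printf("as signed:   %d\n", (int)a);  /* -11 on a two's-complement machine */
    printf("as unsigned: %u\n", a);       /* 4294967285 with a 32-bit int */
    return 0;
}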
This function prints the binary representation of an unsigned int with the 0's and 1's swapped (i.e. its bitwise complement):
void not(unsigned int x)
{
    int i;
    /* walk from the most significant bit down, printing the complement of each bit */
    for (i = (sizeof(int) * 8) - 1; i >= 0; i--)
        (x & (1u << i)) ? putchar('0') : putchar('1');
    printf("\n");
}
Source: https://en.wikipedia.org/wiki/Bitwise_operations_in_C#Right_shift_.3E.3E
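A small driver for the not() function above (this main is only illustrative and assumes the definition is in the same file):

void not(unsigned int x);  /* the function defined above */

int main(void)
{
    not(10);  /* with a 32-bit int this prints 11111111111111111111111111110101 */
    return 0;
}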

Bitwise operations in C explanation

I have the following code in C:
unsigned int a = 60; /* 60 = 0011 1100 */
int c = 0;
c = ~a; /*-61 = 1100 0011 */
printf("c = ~a = %d\n", c );
c = a << 2; /* 240 = 1111 0000 */
printf("c = a << 2 = %d\n", c );
The first output is -61 while the second one is 240. Why does the first printf compute the two's complement of 1100 0011, while the second one just converts 1111 0000 to its decimal equivalent?
You have assumed that an int is only 8 bits wide. This is probably not the case on your system, which is likely to use 16 or 32 bits for int.
In the first example, all the bits are inverted. This is actually a straight inversion, not two's complement:
1111 1111 1111 1111 1111 1111 1100 0011 (32-bit)
1111 1111 1100 0011 (16-bit)
In the second example, when you shift it left by 2, the highest-order bit is still zero. You have misled yourself by depicting the numbers as 8 bits in your comments.
0000 0000 0000 0000 0000 0000 1111 0000 (32-bit)
0000 0000 1111 0000 (16-bit)
Try to avoid doing bitwise operations with signed integers -- often it'll lead you into undefined behavior.
The situation here is that you're taking unsigned values and assigning them to a signed variable. For ~60 the value doesn't fit in an int, so the conversion is implementation-defined; you see it as -61 because the bit pattern of ~60 is also the two's-complement representation of -61. On the other hand, 60 << 2 comes out correct because 240 has the same representation both as a signed and as an unsigned integer.
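A minimal sketch of the same example with unsigned variables and %u, so neither the conversion nor the output involves a signed type (the values in the comments assume a 32-bit int):

#include <stdio.h>

int main(void)
{
    unsigned int a = 60;             /* ... 0011 1100 */
    unsigned int c;

    c = ~a;                          /* ... 1100 0011 */
    printf("c = ~a = %u\n", c);      /* 4294967235 */

    c = a << 2;                      /* ... 1111 0000 */
    printf("c = a << 2 = %u\n", c);  /* 240 */
    return 0;
}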

How to replace/overwrite an arbitrary number of bits in an arbitrary position in an array of short with arbitrary bits from an integer

I have a function I'm trying to write of the following form (and haven't found exactly what I'm looking for — if this is a dup please just point me at the right place — even if it's not ints and shorts but, say, chars and ints instead, that would be fine):
put_bits(short *array_of_short, int significant_bits, int bit_offset, int integer_to_append)
Where I overwrite the significant_bits of integer_to_append at bit_offset in array_of_short.
I'd like to accomplish this by just overwriting (or bitwise ORing, or overlaying, or replacing) bits at that position in the array; I don't want to add more elements to the array or allocate more memory. It would obviously be possible, though pretty inefficient, to work out which array element the offset lands in, whether the bits straddle a boundary between shorts, and then shift the bits of the integer into place and OR them onto the appropriate short(s). But that seems like a lot of overhead and more calculation than I need compared to just ORing the bits into the right spot, and I'm kind of at a loss...
So, for example, I have an integer which will contain an arbitrary number of "significant" bits — let's say for this example there are 6, so the values would range from 0 to 63:
0000 0000 0000 0000 0000 0000 0000 0000
to
0000 0000 0000 0000 0000 0000 0011 1111
and I want to overlay this (or bitwise OR it) onto an arbitrarily sized array of short at an arbitrary point. So if I had
Integer:
0000 0000 0000 0000 0000 0000 0010 0001
Array of short:
0100 1000 0100 1100 : 1100 0010 0110 0000 : 0000 0000 0000 0000 : 0000 0000 0000 0000
and I wanted to append at position 42 to get:
0100 1000 0100 1100 : 1100 0010 0110 0000 : 0000 0000 0000 1000 : 0100 0000 0000 0000
If I'm totally off or I don't make sense, let me know too.
If I understand your question correctly, you actually want to treat your array as an array of bits. There is no such structure in C as a bit array, of course, but you can implement one. Here is an example of a bit array with int as the base type. You can adapt this solution to use short as the base type, and then just set it bit by bit, something like this:
for (i = 0; i < sizeof(int) * 8; ++i)
{
    unsigned int flag = 1;
    flag = flag << i;                           /* mask with only bit i set */
    if (int_num & flag)
        SetBit(array_of_short, bit_offset + i);
}

void SetBit(short array_of_short[], int k)
{
    array_of_short[k / 16] |= 1 << (k % 16);    /* set the bit at the k-th position in the array */
}
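Building on that idea, here is a hedged sketch of the put_bits function from the question, reusing the SetBit above and adding a ClearBit helper so that existing bits are overwritten rather than just ORed. The helper name and the bit-numbering convention (bit 0 is the least significant bit of array_of_short[0]) are assumptions of mine, not something taken from the question:

void ClearBit(short array_of_short[], int k)  /* hypothetical counterpart to SetBit above */
{
    array_of_short[k / 16] &= ~(1 << (k % 16));
}

/* copy the significant_bits low-order bits of integer_to_append into the
   array starting at bit_offset; existing bits in that range are overwritten */
void put_bits(short *array_of_short, int significant_bits, int bit_offset,
              int integer_to_append)
{
    int i;
    for (i = 0; i < significant_bits; ++i)
    {
        if (integer_to_append & (1 << i))
            SetBit(array_of_short, bit_offset + i);
        else
            ClearBit(array_of_short, bit_offset + i);
    }
}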

Flags in C/ set-clear-toggle

I am confused as to what the following code does. I understand that Line 1 sets a flag, Line 2 clears a flag, and Line 3 toggles a flag:
#include <stdio.h>

#define SCC_150_A 0x01
#define SCC_150_B 0x02
#define SCC_150_C 0x04

unsigned int flags = 0;

int main() {
    flags |= SCC_150_A;             // Line 1
    flags &= ~SCC_150_B;            // Line 2
    flags ^= SCC_150_C;             // Line 3
    printf("Result: %d\n", flags);  // Line 4
}
What I don't understand is what the output of Line 4 would be. What is the effect of setting/clearing/toggling the flags 0x01, 0x02, and 0x04?
The macros define constants that each require a single bit to be represented:
macro       hex   binary
=========================
SCC_150_A   0x01  001
SCC_150_B   0x02  010
SCC_150_C   0x04  100
Initially flags is 0.
Then it has:
Bit 0 set by the bitwise OR.
Bit 1 cleared by the bitwise AND with the inverse of SCC_150_B.
Bit 2 toggled (turning it from 0 to 1).
The final result is thus binary 101, or 5 in decimal.
First of all, I'm going to use binary numbers, because it's easier to explain with them; in the end it's the same with hexadecimal numbers. Also note that I shortened the variable to unsigned char to have a shorter value to write down (8 bits vs. 32 bits). The end result is the same, just without the leading digits.
Let's start with the values:
0x01 = 0000 0001
0x02 = 0000 0010
0x04 = 0000 0100
So after replacing the constant/macro, the first line would essentially be this:
flags |= 0000 0001
This performs a bitwise OR operation: a bit in the result is 1 if either of the input values is 1 at that position. Because the initial value of flags is 0, this works just like an assignment or addition (which it won't in general; keep that in mind).
flags: 0000 0000
op:    0000 0001
----------------
or:    0000 0001
The result is flags being set to 0000 0001.
flags &= ~0000 0010
Here we've got two operations. First there's ~, the bitwise complement operator, which flips all bits of the value; therefore 0000 0010 becomes 1111 1101 (0xFD in hex). Then you're using the bitwise AND operator, where a result bit is set to 1 only if both input values are 1 at that position. As you can see, this causes the second bit from the right to be set to 0 without touching any other bit.
flags: 0000 0001
op:    1111 1101
----------------
and:   0000 0001
Due to this, the result of this operation is 0000 0001 (0x01 in hex).
flags ^= 0000 0100
The last operation is the bitwise exclusive or (xor), which will set a bit to 1 only if the input bits don't match (i.e. they're different). This leads to the simple behavior of toggling the bits set in the operands.
flags: 0000 0001
op:    0000 0100
----------------
xor:   0000 0101
In this case the result will be 0000 0101 (0x05 in hex).
For clarification on the last operation, because I think xor might be the hardest to understand here, let's toggle it back:
flags: 0000 0101
op:    0000 0100
----------------
xor:   0000 0001
As you can see, the third bit from the right is equal in both inputs, so the result will be 0 rather than 1.
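To round the example out, here is a short sketch of reading the flags back with &; the test pattern is standard practice, though the messages are just illustrative:

#include <stdio.h>

#define SCC_150_A 0x01
#define SCC_150_B 0x02
#define SCC_150_C 0x04

int main(void)
{
    unsigned int flags = 0;

    flags |= SCC_150_A;   /* set    -> 0000 0001 */
    flags &= ~SCC_150_B;  /* clear  -> 0000 0001 */
    flags ^= SCC_150_C;   /* toggle -> 0000 0101 */

    if (flags & SCC_150_A) printf("A is set\n");  /* printed */
    if (flags & SCC_150_B) printf("B is set\n");  /* not printed */
    if (flags & SCC_150_C) printf("C is set\n");  /* printed */

    printf("Result: %u\n", flags);  /* 5 */
    return 0;
}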

Bit Twiddling - Confused With This Program's Output

So I was messing around with Bit-Twiddling in C, and I came across an interesting output:
#include <stdio.h>

int main()
{
    int a = 0x00FF00FF;
    int b = 0xFFFF0000;
    int res = (~b & a);
    printf("%.8X\n", (res << 8) | (b >> 24));
}
And the output from this statement is:
FFFFFFFF
I expected the output to be
0000FFFF
But why wasn't it? Am I missing something with bit-shifting here?
TLDR: Your integer b is negative so when you shift it right the value of the uppermost bit (i.e. 1) remains the same. Therefore when you shift b right by 24 places you end up with 0xFFFFFFFF.
Longer explanation:
Assuming that on your platform int is 32 bits and a signed integer is represented in 2's complement, the value 0xFFFF0000 assigned to a signed integer variable ends up negative (the out-of-range conversion is implementation-defined, but typical implementations yield -65536). If int were wider than 32 bits, the constant would simply fit and b would be a large positive number instead.
Shifting a negative number right is implementation defined by the standard (C99 / N1256, section 6.5.7.5):
The result of E1 >> E2 is E1 right-shifted E2 bit positions. [...] If E1
has a signed type and a negative value, the resulting value is
implementation defined.
That means a particular compiler can choose what happens in a particular situation, but it should be noted in the compiler manual what the effect is.
There tend to be two sets of shift instructions in many processors, a logical shift and an arithmetic shift. The logical shift right will shift bits and fill the exposed bits with zeros. Arithmetic shifts right (assuming 2's complement again) will fill the exposed bits with the same bit value of the most significant bit so that it ends up with a result that is consistent with using shifts as a divide by 2. (For example, -4 >> 1 == 0xFFFFFFFC >> 1 == 0xFFFFFFFE == -2.)
In your case it appears that the compiler implementor has chosen to use arithmetic shifts when applied to signed integers and so the result of shifting a negative value to the right remains a negative value. In terms of bit patterns 0xFFFF0000 >> 24 gives 0xFFFFFFFF.
Unless you are absolutely sure of what you are doing, it is best to perform bitwise operations only on unsigned types, as their internal representation can safely be treated as a collection of bits. You probably also want to make sure any numeric values you use in that case are unsigned by appending the unsigned suffix to your number.
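A short sketch illustrating the difference; the signed result shown assumes an implementation that uses arithmetic shifts for signed values, as discussed above:

#include <stdio.h>

int main(void)
{
    int          sb = 0xFFFF0000;   /* negative with a 32-bit two's-complement int */
    unsigned int ub = 0xFFFF0000u;

    printf("%.8X\n", (unsigned int)(sb >> 24));  /* FFFFFFFF here: arithmetic shift copies the sign bit */
    printf("%.8X\n", ub >> 24);                  /* 000000FF: logical shift fills with zeros */
    return 0;
}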
Right-shifting negative values (like b) can be defined in two different ways: logical shift, which pads the value with zeroes on the left (which yields a positive number when shifting a nonzero amount), and arithmetic shift, which pads the value with ones (always yielding a negative number). Which definition is used in C is implementation-defined, and your compiler apparently uses arithmetic shift, so b >> 24 is 0xFFFFFFFF.
b >> 24 gives 0xFFFFFFFF: a signed right shift of a negative number pads with the sign bit here.
result = (res << 8) | (b >> 24)
a        = 0x00FF00FF = 0000 0000 1111 1111 0000 0000 1111 1111
b        = 0xFFFF0000 = 1111 1111 1111 1111 0000 0000 0000 0000
~b       = 0x0000FFFF = 0000 0000 0000 0000 1111 1111 1111 1111
~b & a   = 0x000000FF = 0000 0000 0000 0000 0000 0000 1111 1111   (= res)
res << 8 = 0x0000FF00 = 0000 0000 0000 0000 1111 1111 0000 0000
b >> 24  = 0xFFFFFFFF = 1111 1111 1111 1111 1111 1111 1111 1111
result   = 0xFFFFFFFF = 1111 1111 1111 1111 1111 1111 1111 1111
The golden rule: Never ever mix signed numbers with bitwise operators.
Change all ints to unsigned ints. Just as a precaution, change all literals to unsigned too.
#include <stdint.h>
uint32_t a = 0x00FF00FFu;
uint32_t b = 0xFFFF0000u;
uint32_t res = (~b & a);
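Putting it together, a sketch of the corrected program prints the expected 0000FFFF:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t a = 0x00FF00FFu;
    uint32_t b = 0xFFFF0000u;
    uint32_t res = ~b & a;                     /* 0x000000FF */

    printf("%.8X\n", (res << 8) | (b >> 24));  /* 0000FFFF: b >> 24 is now 0x000000FF */
    return 0;
}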
