Flags in C: set / clear / toggle

I am confused as to what the following code does. I understand that Line 1 sets a flag, Line 2 clears a flag, and Line 3 toggles a flag:
#include <stdio.h>

#define SCC_150_A 0x01
#define SCC_150_B 0x02
#define SCC_150_C 0x04

unsigned int flags = 0;

int main(void) {
    flags |= SCC_150_A;            // Line 1
    flags &= ~SCC_150_B;           // Line 2
    flags ^= SCC_150_C;            // Line 3
    printf("Result: %u\n", flags); // Line 4
    return 0;
}
What I don't understand is what the output of Line 4 would be. What is the effect of setting/clearing/toggling the flags 0x01, 0x02 and 0x04?

The macros define constants that each require a single bit to be represented:
macro hex binary
======================
SCC_150_A 0x01 001
SCC_150_B 0x02 010
SCC_150_C 0x04 100
Initially flags is 0.
Then it has:
Bit 0 set by the bitwise OR.
Bit 1 cleared by the bitwise AND with the inverse of SCC_150_B.
Bit 2 toggled (turning it from 0 to 1).
The final result is thus 101 in binary, or 5 in decimal.
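If you want to verify this, here is a minimal sketch (reusing the question's macros) that prints flags after each step:
#include <stdio.h>
#define SCC_150_A 0x01
#define SCC_150_B 0x02
#define SCC_150_C 0x04
int main(void) {
    unsigned int flags = 0;
    flags |= SCC_150_A;                   // set bit 0
    printf("after set:    %u\n", flags);  // 1
    flags &= ~SCC_150_B;                  // clear bit 1 (already 0)
    printf("after clear:  %u\n", flags);  // 1
    flags ^= SCC_150_C;                   // toggle bit 2
    printf("after toggle: %u\n", flags);  // 5
    return 0;
}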

First of all, I'm going to use binary numbers, because it's easier to explain with them; in the end it's the same with hexadecimal numbers. Also note that I shortened the variable to unsigned char to have a shorter value to write down (8 bits vs. 32 bits). The end result is the same, just without the extra leading zeroes.
Let's start with the values:
0x01 = 0000 0001
0x02 = 0000 0010
0x04 = 0000 0100
So after replacing the constant/macro, the first line would essentially be this:
flags |= 0000 0001
This performs a bitwise OR: a bit in the result is 1 if either of the input values is 1 at that position. Because the initial value of flags is 0, this works just like an assignment or addition here (which it won't in general; keep that in mind).
flags: 0000 0000
op: 0000 0001
----------------
or: 0000 0001
The result is flags being set to 0000 0001.
flags &= ~0000 0010
Here we've got two operations. First there's ~, the bitwise complement operator, which flips every bit of its operand, so 0000 0010 becomes 1111 1101 (0xFD in hex). Then comes the bitwise AND operator, where a result bit is 1 only if both input values are 1 at that position. As you can see, this sets the second bit from the right to 0 without touching any other bit.
flags: 0000 0001
op: 1111 1101
----------------
and: 0000 0001
Due to this, the result of this operation is 0000 0001 (0x01 in hex).
flags ^= 0000 0100
The last operation is the bitwise exclusive OR (XOR), which sets a result bit to 1 only if the corresponding input bits differ. This leads to the simple behavior of toggling the bits that are set in the operand.
flags: 0000 0001
op: 0000 0100
----------------
xor: 0000 0101
In this case the result will be 0000 0101 (0x05 in hex).
For clarification on the last operation, because I think xor might be the hardest to understand here, let's toggle it back:
flags: 0000 0101
op: 0000 0100
----------------
xor: 0000 0001
As you can see, the third bit from the right is equal in both inputs, so the result will be 0 rather than 1.
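The same three idioms are often wrapped in helper macros. A small sketch (the macro names here are my own, not from the question):
#define BIT_SET(x, mask)    ((x) |= (mask))       // force the mask bits to 1
#define BIT_CLEAR(x, mask)  ((x) &= ~(mask))      // force the mask bits to 0
#define BIT_TOGGLE(x, mask) ((x) ^= (mask))       // flip the mask bits
#define BIT_TEST(x, mask)   (((x) & (mask)) != 0) // 1 if any mask bit is set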

Related

How can I clear half a byte from a number

I know I can clear a single bit by using the bitwise operator &
number &= ~(1 << x);
But how would I be able to clear either the upper or lower half of a byte?
For example, number = 99
99 = 0110 0011 and clearing the upper half we get 0000 0011
Just AND the byte with the mask you need: to clear the upper four bits, use n & 0x0F; to clear the lower four bits, use n & 0xF0.
You could say number &= 15;. 15 is represented as 0000 1111 in binary which means the leading/upper 4 bits would be cleared away.
Likewise, to clear the trailing/lower 4 bits, you could then say number &= 240 because 240 is represented as 1111 0000 in binary.
For example, let's say number = 170 and we want to clear away the leading/upper four bits. 170 is represented as 1010 1010 in binary. So, if we do number &= 15, we get:
1010 1010
& 0000 1111
-------------
0000 1010
(0000 1010 is 10 in decimal)
Supposing, again, number = 170 and we want to clear the trailing/lower four bits, we can say: number &= 240. When we do this, we get:
1010 1010
& 1111 0000
-------------
1010 0000
(1010 0000 is 160 in decimal)
You want to do an and with the bit-pattern showing what you want to keep.
So for your case:
number &= 0xf;
As 0xf is 0000 1111.
This is sometimes referred to as a "mask".
Clear lower half:
number &= ~0xF;
Higher half:
number &= ~0xF0;
The above works for int types. For larger types, you need to extend the mask accordingly, for example:
number &= ~UINT64_C(0xF0);
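To tie the two masks together, a quick self-contained sketch using the question's number = 99:
#include <stdio.h>
int main(void) {
    unsigned char number = 99;      // 0110 0011
    printf("%d\n", number & 0x0F);  // upper half cleared: 0000 0011 = 3
    printf("%d\n", number & 0xF0);  // lower half cleared: 0110 0000 = 96
    return 0;
}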

How to replace/overwrite an arbitrary number of bits in an arbitrary position in an array of short with arbitrary bits from an integer

I have a function I'm trying to write of the following form (and haven't found exactly what I'm looking for — if this is a dup please just point me at the right place — even if it's not ints and shorts but, say, chars and ints instead, that would be fine):
put_bits(short *array_of_short, int significant_bits, int bit_offset, int integer_to_append)
where I overwrite the significant_bits of integer_to_append at bit_offset in array_of_short.
I'd like to accomplish this by just overwriting (or bitwise ORing, or overlaying, or replacing) bits at that position in the array (I don't want to add more elements to the array or allocate more memory). It should certainly be possible, if pretty inefficient, to keep track of how many elements into the array the offset translates to, work out whether it falls on a short boundary, shift the bits of the integer to the appropriate offset and OR them onto the appropriate short(s), but that seems like loads of overhead and more calculation than I need versus just ORing the bits into the right spot, and I'm kind of at a loss...
So, for example, I have an integer which will contain an arbitrary number of "significant" bits — let's say for this example there are 6. So the values would be from 0 to 63
0000 0000 0000 0000 0000 0000 0000 0000
to
0000 0000 0000 0000 0000 0000 0011 1111
and I want to overlay this (or bitwise OR it) onto an arbitrarily sized array of short at an arbitrary point. So if I had
Integer:
0000 0000 0000 0000 0000 0000 0010 0001
Array of short:
0100 1000 0100 1100 : 1100 0010 0110 0000 : 0000 0000 0000 0000 : 0000 0000 0000 0000
and I wanted to append at position 42 to get:
0100 1000 0100 1100 : 1100 0010 0110 0000 : 0000 0000 0000 1000 : 0100 0000 0000 0000
If I'm totally off or I don't make sense, let me know too.
If I understand your question correctly, you actually want to treat your array as an array of bits. There is no such structure in C as a bit array, of course, but you can implement one. Here is an example of a bit array with int as the base type; you can adapt this solution to short as the base type and then just set it bit by bit, something like this:
for (int i = 0; i < (int)(sizeof(int) * 8); ++i)
{
    unsigned int flag = 1u << i;  // 1u avoids undefined behaviour when i == 31
    if (int_num & flag)
        SetBit(array_of_short, bit_offset + i);
}
void SetBit(short array_of_short[], int k)
{
    array_of_short[k / 16] |= 1 << (k % 16);  // set the k-th bit of the array
}
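For a true overwrite (bits that are 0 in the source must clear the corresponding array bits, not just leave them), here is a minimal sketch under the same assumptions: 16-bit shorts and bits numbered LSB-first within each element, which is not quite the left-to-right layout drawn in the question:
/* Overwrite significant_bits bits of the array, starting at bit_offset,
   with the low bits of integer_to_append. */
void put_bits(short *array_of_short, int significant_bits,
              int bit_offset, int integer_to_append)
{
    for (int i = 0; i < significant_bits; ++i) {
        int k = bit_offset + i;
        unsigned short mask = (unsigned short)(1u << (k % 16));
        if ((integer_to_append >> i) & 1)
            array_of_short[k / 16] |= mask;  // set the bit
        else
            array_of_short[k / 16] &= ~mask; // clear the bit
    }
}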

Logic of a bit masking XOR code

I have some code that combines two hex values and stores the result in a new unsigned char. The code looks like the following:
unsigned char OldSw = 0x1D;
unsigned char NewSw = 0xF0;
unsigned char ChangedSw;
ChangedSw = (OldSw ^ ~NewSw) & ~OldSw;
So what I know is:
0x1D = 0001 1101
0xF0 = 1111 0000
I'm confused about what the ChangedSw line is doing. I know it will give the output 0x02, but I cannot figure out how it's doing it.
ChangedSw = (OldSw ^ ~NewSw) & ~OldSw;
It means "zero one part of OldSw and invert the other part". NewSw indicates which bits of OldSw to zero and which to invert: 1's in NewSw mark bits to be zeroed, 0's mark bits to be inverted.
This operation is implemented in two steps.
Step 1. Invert bits.
(OldSw ^ ~NewSw):
0001 1101
^ 0000 1111
---------
0001 0010
See, we inverted the bits which were 0's in the original NewSw.
Step 2. Zero the bits which were not inverted in the previous step.
& ~OldSw:
0001 0010
& 1110 0010
---------
0000 0010
See, it doesn't change the inverted bits, but zeroes all the rest.
The first part, (OldSw ^ ~NewSw), gives 0x12, i.e. 0001 0010. When ANDed with ~OldSw (1110 0010),
the operation will be something like this:
0001 0010
1110 0010
----------
0000 0010
So the output will be 2. The tilde operator is 1's complement.
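To check the arithmetic, a tiny sketch that prints the intermediate value as well as the result:
#include <stdio.h>
int main(void) {
    unsigned char OldSw = 0x1D;
    unsigned char NewSw = 0xF0;
    unsigned char step1 = OldSw ^ (unsigned char)~NewSw;     // 0x1D ^ 0x0F = 0x12
    unsigned char ChangedSw = step1 & (unsigned char)~OldSw; // 0x12 & 0xE2 = 0x02
    printf("step1: 0x%02X  ChangedSw: 0x%02X\n", step1, ChangedSw);
    return 0;
}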

How to change a 32bit registers specific bits without changing other bits?

I want to manipulate some bits of a register directly using its physical address, but I couldn't find a way to do this. I saw some posts about setting bit masks but I find them too confusing.
My register's physical address is: 0x4A10005C
I want to manipulate its bits 16-18; I want to set 0x3 inside those bits.
I will be really glad if you guys can provide an answer or a way to do it. Thanks.
You can just define a pointer to the register and then use normal C bitwise operations to manipulate the individual bits:
#include <stdint.h>

volatile uint32_t * const my_register = (volatile uint32_t *) 0x4A10005C;
// set up a pointer to the register

uint32_t val = *my_register;  // read register
val &= ~(0x7u << 16);         // clear bits 16..18
val |= (0x3u << 16);          // set bits 16..18 to 0x3 (i.e. set bits 16 and 17)
*my_register = val;           // write register
(The above assumes that you are talking about three bits within the register, bits 16, 17 and 18, and that you want to set bit 18 to zero and bits 16 and 17 to 1.)
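The read-modify-write pattern generalizes to any bit field. A hedged sketch (the helper name and parameters are mine, not from the answer):
#include <stdint.h>
// Write 'value' into the 'width'-bit field that starts at bit 'shift'.
static void write_field(volatile uint32_t *reg, unsigned shift,
                        unsigned width, uint32_t value)
{
    uint32_t mask = ((1u << width) - 1u) << shift;  // e.g. width 3, shift 16 -> 0x70000
    uint32_t val = *reg;             // read
    val &= ~mask;                    // clear the field
    val |= (value << shift) & mask;  // insert the new value
    *reg = val;                      // write back
}
// usage for this question: write_field(my_register, 16, 3, 0x3);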
Bit masks are pretty easy to understand, so let's run through that first.
Let's say your 32-bit register contains some value right now; I'll arbitrarily pick 0xF48C6219.
I assume you know how to convert hex to binary; if not, use a calculator or Google (rather than going into the nitty-gritty of that too). So our hex value can be represented in binary as:
+-- bit 31                            +-- bit 0
|                                     |
v                                     v
1111 0100 1000 1100 0110 0010 0001 1001
                ^^^
                 |
                 +-- bits you want to set, 16-18
Boolean logic tells us that:
1) anything OR'd (|) with 1 gives you a value of 1. Or "sets" the bit.
2) anything AND'd (&) with 0 gives you a value of 0. Or "clears" the bit.
So if we wanted to clear bits 16-18 you can AND it with a mask like:
base number: 1111 0100 1000 1100 0110 0010 0001 1001 (binary) == 0xF48C6219 (hex)
mask number: 1111 1111 1111 1000 1111 1111 1111 1111 (binary) == 0xFFF8FFFF (hex)
1111 0100 1000 1100 0110 0010 0001 1001
& 1111 1111 1111 1000 1111 1111 1111 1111
------------------------------------------
1111 0100 1000 1000 0110 0010 0001 1001
Now you can OR it with whatever you want to set there:
new mask number: 0000 0000 0000 0011 0000 0000 0000 0000 (binary) == 0x00030000 (hex)
1111 0100 1000 1000 0110 0010 0001 1001
| 0000 0000 0000 0011 0000 0000 0000 0000
-----------------------------------------
1111 0100 1000 1011 0110 0010 0001 1001
So in the code:
#include <stdint.h>

#define CLEAR_MASK 0x70000  // 0x70000 is shorter to write, so just define this and flip it
#define SET_3_MASK 0x30000

volatile uint32_t * const reg = (volatile uint32_t *) 0x4A10005C; // set a pointer to the register
*reg &= ~CLEAR_MASK; // ~ flips the bits, clearing bits 16-18
*reg |= SET_3_MASK;  // set bits 16 and 17
You can do tricks with shifting bits and so forth, but this is the basics of bit masks and how they work. Hope it helps.
struct r32 {
    unsigned int bit0  : 1;
    unsigned int bit1  : 1;
    unsigned int bit2  : 1;
    unsigned int bit3  : 1;
    unsigned int bit4  : 1;
    unsigned int bit5  : 1;
    .
    .
    .
    unsigned int bit31 : 1;
};
In your main:
volatile struct r32 *p;
volatile uint32_t * const my_register = (uint32_t *) 0x4A10005C;
p = (volatile struct r32 *) my_register;
and then, to access the fifth bit (bit4) for example:
p->bit4 = 0;
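Setting the field from this question would then look like the sketch below; be aware, though, that the order in which C packs bit-fields into a word is implementation-defined, so the mask-based approach above is the portable one:
p->bit16 = 1; // set bits 16 and 17...
p->bit17 = 1;
p->bit18 = 0; // ...and clear bit 18, i.e. write 0x3 into bits 16-18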

Why is ~0xF equal to 0xFFFFFFF0 on a 32-bit machine?

Why is ~0xF equal to 0xFFFFFFF0?
Also, how is ~0xF && 0x01 = 1? Maybe I don't get 0x01 either.
Question 1
Why is ~0xF equal to 0xFFFFFFF0?
First, this means you ran this on a 32-bit machine. That means 0xF is actually 0x0000000F in hexadecimal,
And that means 0xF is
0000 0000 0000 0000 0000 0000 0000 1111 in binary representation.
The ~ operator is the NOT operation: it changes every 0 to 1 and every 1 to 0 in the binary representation. That would make ~0xF:
1111 1111 1111 1111 1111 1111 1111 0000 in binary representation.
And that is actually 0xFFFFFFF0.
Note that if you do this on a 16-bit machine, the answer of ~0xF would be 0xFFF0.
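A short sketch of that width-dependence using fixed-width types (so it behaves the same on any machine):
#include <inttypes.h>
#include <stdio.h>
int main(void) {
    uint32_t a = ~UINT32_C(0xF);  // complement in 32 bits
    uint16_t b = (uint16_t)~0xF;  // result truncated to 16 bits
    printf("32-bit: 0x%08" PRIX32 "\n", a);        // 0xFFFFFFF0
    printf("16-bit: 0x%04X\n", (unsigned)b);       // 0xFFF0
    return 0;
}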
Question 2
You wrote the wrong statement; it should be 0xF & 0x1. Note that 0x1, 0x01, 0x001 and 0x0001 are all the same. So let's change these hexadecimal numbers to binary representation:
0xF would be:
0000 0000 0000 0000 0000 0000 0000 1111
and 0x1 would be:
0000 0000 0000 0000 0000 0000 0000 0001
The & operation follows the following rules:
0 & 0 = 0
0 & 1 = 0
1 & 0 = 0
1 & 1 = 1
So doing that to every bit, you get the result:
0000 0000 0000 0000 0000 0000 0000 0001
which is actually 0x1.
Additional
| means bitwise OR operation. It follows:
0 | 0 = 0
0 | 1 = 1
1 | 0 = 1
1 | 1 = 1
^ means bitwise XOR operation. It follows:
0 ^ 0 = 0
0 ^ 1 = 1
1 ^ 0 = 1
1 ^ 1 = 0
If you store 0xF in a 32-bit "int", all 32 bits flip, so ~0xF = 0xFFFFFFF0.
See this:
http://teaching.idallen.com/cst8214/08w/notes/bit_operations.txt
They give a good explanation
You're complementing 0xF, which flips all of the bits to their inverse. So for example, with 8 bits you have 0xF = 0000 1111; complement that and it turns into 1111 0000.
Since you're using 32 bits, the F's are just extended all the way out: 1111 ... 0000
For your second question, you're using a logical AND, not a bitwise AND. Those two behave entirely differently.
It sounds like your confusion is that you believe 0xF is the same as 0b1111111111111111. It is not, it is 0b0000000000001111.
~0xF inverts all its bits, going
from 0x0000000F = 00000000000000000000000000001111 (32 bits)
to 0xFFFFFFF0 = 11111111111111111111111111110000 (32 bits)
a && b is 1 if both a and b are non-zero, and ~0xF and 0x01 are both non-zero.
In C, ~0xF can never be equal to 0xFFFFFFF0. The former is a negative number (in any of the three signed representations C allows) and the latter is a positive number. However, if both are converted to a 32-bit unsigned type on a twos-complement implementation, the converted values will be equal.
As for ~0xF && 0x01, the && operator is logical and, not bitwise and.
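A two-line sketch makes the logical-vs-bitwise distinction concrete:
#include <stdio.h>
int main(void) {
    printf("%d\n", ~0xF && 0x01); // logical AND: both operands non-zero -> 1
    printf("%d\n", ~0xF & 0x01);  // bitwise AND: lowest bit of ~0xF is 0 -> 0
    return 0;
}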
