I am reading the book "Linux Kernel Development, Third Edition" by Robert Love.
In the softirq section it comments on the following piece of code:
u32 pending;

pending = local_softirq_pending();
if (pending) {
        struct softirq_action *h;

        /* reset the pending bitmask */
        set_softirq_pending(0);

        h = softirq_vec;
        do {
                if (pending & 1) /* STEP 4 */
                        h->action(h);
                h++;
                pending >>= 1;
        } while (pending);
}
He describes step by step what happens, and the most unclear step for me is step 4:
If the first bit in pending is set, h->action(h) is called
I have the following code to check whether a bit is set, like in the book:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

#define BIT_SET(n)   ((1) << (n))
#define BIT_CLEAR(n) ~((1) << (n))

int main(void)
{
    unsigned char bits = 0x0;

    bits |= BIT_SET(1);

    if (bits & (1 << 1))
        printf("TEST CHECK SET 1\n");

    if (bits & 1)
        printf("TEST CHECK SET 2\n");

    bits &= BIT_CLEAR(1);

    if (!(bits >> 1) & 1UL)
        printf("BITS UNSET\n");

    return 0;
}
compiled with:
gcc main.c -O0 -Wall -Wextra -Werror
I always check whether a bit is set with this one:
if (bits & (1 << n))
And my code produces this output:
TEST CHECK SET 1
BITS UNSET
Why does the if (bits & 1) statement not work?
So from these several options, which should I use, and what exactly does the last one check?
if (bit & (1 << n))
if ((bit >> n) & 1)
if (bit & n)
I always check whether a bit is set with this one: if (bits & (1 << n))
This checks the n-th bit in place; the code from the book, however, shifts the bit of interest into the least-significant-bit position before making the check with bits & 1. In other words, by the time the code reaches if (bits & 1), the value of bits has already been shifted so that the bit of interest is in the 1's position.
This is similar to your other check
if ((bit >> n) & 1)
except that the (bit >> n) part is done by performing the bit >>= 1 operation n times in a loop.
Note that in order for this code to work correctly, bit must be unsigned.
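For illustration, here is a minimal self-contained sketch (my own example, not the kernel code) that contrasts the two equivalent ways of testing bit n; the loop mirrors what the softirq code does with pending >>= 1:

#include <stdio.h>

int main(void)
{
    unsigned bits = 0x12;              /* example value: bits 1 and 4 set */
    unsigned n = 4;

    /* check in place: mask the n-th bit where it sits */
    if (bits & (1u << n))
        printf("bit %u is set (checked in place)\n", n);

    /* check by shifting: move the n-th bit down to the 1's position first */
    unsigned shifted = bits;
    for (unsigned i = 0; i < n; i++)
        shifted >>= 1;                 /* same idea as the book's pending >>= 1 loop */
    if (shifted & 1u)
        printf("bit %u is set (checked after shifting)\n", n);

    return 0;
}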
So from these several options, which should I use, and what exactly does the last one check?
You misinterpreted the last check: it's not checking bit n, it's checking bit against the entire bit pattern of n's binary representation.
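As a concrete illustration of that difference (my own example values): with n = 6, the expression bit & n tests whether any of bits 1 and 2 are set, because 6 is binary 110, whereas bit & (1 << n) tests bit 6 only:

#include <stdio.h>

int main(void)
{
    unsigned bit = 0x02;     /* only bit 1 is set */
    unsigned n = 6;          /* binary 110 */

    if (bit & n)             /* true: bit 1 overlaps the pattern 110 */
        printf("bit & n is non-zero\n");

    if (bit & (1u << n))     /* false: bit 6 of 0x02 is not set */
        printf("bit & (1 << n) is non-zero\n");

    return 0;
}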
I am reviewing the open source AMD GPU drivers for Linux. I noticed something I haven't seen before, and I would like to know the purpose. On line 1441 of the sid.h file, there is a series of defines where integers are bit-shifted left by 0. Wouldn't this just result in the original integer?
Here is an excerpt and a link to the header:
#define VGT_EVENT_INITIATOR 0xA2A4
#define SAMPLE_STREAMOUTSTATS1 (1 << 0)
#define SAMPLE_STREAMOUTSTATS2 (2 << 0)
#define SAMPLE_STREAMOUTSTATS3 (3 << 0)
https://github.com/torvalds/linux/blob/master/drivers/gpu/drm/amd/amdgpu/sid.h#L1441
Also, I am learning to access the performance counter registers of AMD GPUs in order to calculate the GPU load. Any tips on that would be appreciated as well.
Things like that could be done just for the sake of consistency (not necessarily applicable to your specific case). For example, I can describe a set of single-bit flags as
#define FLAG_1 0x01
#define FLAG_2 0x02
#define FLAG_3 0x04
#define FLAG_4 0x08
or as
#define FLAG_1 (1u << 0)
#define FLAG_2 (1u << 1)
#define FLAG_3 (1u << 2)
#define FLAG_4 (1u << 3)
In the first line of the latter approach I did not have to shift by 0, but it just looks more consistent that way and emphasizes the fact that FLAG_1 has the same nature as the rest of the flags. The 0 also acts as a placeholder for a different value, should I someday decide to change it.
You can actually see exactly that in the linked code with shift by 0 in the definitions of DYN_OR_EN and DYN_RR_EN macros.
The approach can be extended to multi-bit fields within a word, like in the following (contrived) example
// Bits 0-3 - lower counter, bits 4-7 - upper counter
#define LOWER_0 (0u << 0)
#define LOWER_1 (1u << 0)
#define LOWER_2 (2u << 0)
#define LOWER_3 (3u << 0)
#define UPPER_0 (0u << 4)
#define UPPER_1 (1u << 4)
#define UPPER_2 (2u << 4)
#define UPPER_3 (3u << 4)
unsigned packed_counters = LOWER_2 + UPPER_3; /* or `LOWER_2 | UPPER_3` */
Again, the shifts by 0 bits are present purely for visual consistency, as are the shifts of 0 values.
You can actually see exactly that in the linked code with shift by 0 in the definitions of LC_XMIT_N_FTS and LC_XMIT_N_FTS_MASK macros.
I'm working on a personal project to improve my knowledge of how a CPU works, so I'm writing an Intel 8080 emulator (an 8-bit microprocessor).
In the implementation of the RRC instruction, an example of which is this:
case 0x0f: {
    uint8_t x = state->a;
    state->a = ((x & 1) << 7) | (x >> 1);
    state->cc.cy = (1 == (x & 1));
}
I can't understand how this line is working.
state->a = ((x & 1) << 7) | (x >> 1);
I know it's supposed to move all the bits to the right by 1 position, but I can't figure out how.
I would appreciate if someone could provide me an example of what it's actually doing step by step.
state->a is a uint8_t which emulates the Intel 8080 register named A.
0x0f is the hex value for RRC.
The example has been provided by this page.
Let's study the steps in order:
uint8_t x = state->a; uses a temporary variable to hold the current value of the A register.
(x & 1) << 7 shifts the low order bit to the high order bit; (x & 1) is the value of the low order bit, as all other bits of x are masked off.
(x >> 1) shifts the other bits one place to the right (towards the lower bits).
state->a = ((x & 1) << 7) | (x >> 1); combines the bits from the previous 2 steps and stores the result as the new value of the A register.
state->cc.cy = (1 == (x & 1)); stores the low order bit from the original value into the carry bit (this is the bit that was rotated into the high order bit).
The effect of these steps is a rotation of the 8 bits one step to the right, with the low order bit wrapping around to the high order bit and also being copied into the carry flag. The 8080 reference card describes RRC as Rotate Accumulator Right (the variant that rotates through the carry bit is the separate RAR instruction).
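As a worked example (with a value I picked, not one from the original question), suppose the A register holds 0xB5:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t x = 0xB5;                                       /* 1011 0101 */
    uint8_t rotated = (uint8_t)(((x & 1) << 7) | (x >> 1));
    uint8_t carry = x & 1;

    printf("A before: 0x%02X\n", x);                        /* 0xB5 */
    printf("A after : 0x%02X\n", rotated);                  /* 0xDA = 1101 1010 */
    printf("carry   : %u\n", carry);                        /* 1, the old low bit */
    return 0;
}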
Note that the steps can be simplified:
state->a = ((x & 1) << 7) | (x >> 1); is the same as state->a = (x << 7) | (x >> 1); because state->a is a uint8_t.
state->cc.cy = (1 == (x&1)) is the same as state->cc.cy = x & 1;
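A quick way to convince yourself of the first simplification is to compare both expressions over every possible 8-bit value (a small standalone check of my own, not part of the emulator):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* verify that truncation to uint8_t makes the (x & 1) mask redundant */
    for (unsigned v = 0; v < 256; v++) {
        uint8_t x = (uint8_t)v;
        uint8_t original   = (uint8_t)(((x & 1) << 7) | (x >> 1));
        uint8_t simplified = (uint8_t)((x << 7) | (x >> 1));
        if (original != simplified) {
            printf("mismatch at 0x%02X\n", x);
            return 1;
        }
    }
    printf("identical for all 256 values\n");
    return 0;
}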
I want to read and write from/to an unsigned char according to the table below:
For example, I have the following variables:
unsigned char hsi_div = 0x01; /* HSI/2 */
unsigned char cpu_div = 0x05; /* Fmaster/32 */
I want to write hsi_div to bits 4,3 and cpu_div to bits 2,1,0 (imagine the whole char is named CLK_DIVR):
CLK_DIVR |= hsi_div << 4; //not correct!
CLK_DIVR |= cpu_div << 2; //not correct!
And let's say I want to read the register back to make sure I did it correctly:
if( ((CLK_DIVR << 4) - 1) & hsi_div) ) { /* SET OK */ }
if( ((CLK_DIVR << 2) - 1) & cpu_div) ) { /* SET OK */ }
Is there something wrong with my bitwise operations!? I do not get correct behaviour.
I assume CLK_DIVR is a hardware peripheral register which should be qualified volatile. Such registers should be set up with as few writes as possible. You change all write-able bits, so just
CLK_DIVR = (uint8_t)((hsi_div << 3) | (cpu_div << 0));
Note the use of a fixed-width type; that makes mentioning that it is an 8-bit register unnecessary. According to the excerpt, the upper bits are read-only, so they are not changed when writing. The cast keeps the compiler from issuing a truncation warning, which is one of the recommended warnings to always enable (included in -Wconversion for gcc).
The shift count is actually the bit at which the field starts (its LSbit). A shift count of 0 means "no shifting", so the shift operator is not required; I still use it to clarify that I meant the field starts at bit 0. Just let the compiler optimize, and concentrate on writing maintainable code.
Note: Your code bit-ORs into whatever is already in the register. Bit-OR can only set bits, not clear them. Additionally, the shift counts were wrong.
Not sure, but if the excerpt is for an ARM Cortex-M CPU (STM32Fxxxx?), reducing external bus-cycles becomes more relevant, as the ARM can take quite some cycles for an access.
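Here is a minimal sketch of that single write plus a read-back of the two fields, assuming the layout from the question (HSIDIV in bits 4:3, CPUDIV in bits 2:0); the plain variable below stands in for the real volatile hardware register:

#include <stdio.h>
#include <stdint.h>

/* stand-in for the real register; on the target this would be a volatile
   access to the peripheral's address */
static uint8_t CLK_DIVR;

#define HSIDIV_POS 3u
#define HSIDIV_MSK (0x3u << HSIDIV_POS)   /* bits 4:3 */
#define CPUDIV_POS 0u
#define CPUDIV_MSK (0x7u << CPUDIV_POS)   /* bits 2:0 */

int main(void)
{
    uint8_t hsi_div = 0x01;   /* HSI/2      */
    uint8_t cpu_div = 0x05;   /* Fmaster/32 */

    /* one write that places both fields */
    CLK_DIVR = (uint8_t)((hsi_div << HSIDIV_POS) | (cpu_div << CPUDIV_POS));

    /* read the fields back */
    uint8_t hsi_rd = (CLK_DIVR & HSIDIV_MSK) >> HSIDIV_POS;
    uint8_t cpu_rd = (CLK_DIVR & CPUDIV_MSK) >> CPUDIV_POS;

    printf("CLK_DIVR = 0x%02X, HSIDIV = %u, CPUDIV = %u\n",
           CLK_DIVR, hsi_rd, cpu_rd);
    return 0;
}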
For the HSIDIV bit fields you want:
hw_register = (hw_register & ~0x18) | ((hsi_value & 0x03) << 3);
This will mask the value to 2 bits wide, then shift it into bit positions 3 and 4, while leaving the other bits of the register unchanged.
The CPUDIV fields are:
hw_register = (hw_register & ~0x07) | (cpu_value & 0x07);
Reading the register:
hsi_value = (hw_register & 0x18) >> 3;
cpu_value = hw_register & 0x07;
Just
CLK_DIVR |= hsi_div << 3;
CLK_DIVR |= cpu_div << 0;
Since hsi_div is a 2-bit value, you have to move it up three positions to skip over the CPUDIV field, while cpu_div already sits at the bottom of the register, so it needs no shift.
How can I swap the 0th and 3rd bits of each nibble in an integer using only bit operations (no control structures)? What kind of masks do I need to create in order to solve this problem? Any help would be appreciated. For example, 8 (1000) becomes 1 (0001).
/*
* SwitchBits(0) = 0
* SwitchBits(8) = 1
* SwitchBits(0x812) = 0x182
* SwitchBits(0x12345678) = 0x82a4c6e1
* Legal Operations: ! ~ & ^ | + << >>
*/
int SwitchBits(int n) {
}
Code:
#include <stdio.h>
#include <inttypes.h>
static uint32_t SwitchBits(uint32_t n)
{
    uint32_t bit0_mask = 0x11111111;
    uint32_t bit3_mask = 0x88888888;
    uint32_t v_bit0 = n & bit0_mask;
    uint32_t v_bit3 = n & bit3_mask;

    n &= ~(bit0_mask | bit3_mask);
    n |= (v_bit0 << 3) | (v_bit3 >> 3);
    return n;
}

int main(void)
{
    uint32_t i_values[] = { 0, 8, 0x812, 0x12345678, 0x9ABCDEF0 };
    uint32_t o_values[] = { 0, 1, 0x182, 0x82A4C6E1, 0x93B5D7F0 };
    enum { N_VALUES = sizeof(o_values) / sizeof(o_values[0]) };

    for (int i = 0; i < N_VALUES; i++)
    {
        printf("0x%.8" PRIX32 " => 0x%.8" PRIX32 " (vs 0x%.8" PRIX32 ")\n",
               i_values[i], SwitchBits(i_values[i]), o_values[i]);
    }
    return 0;
}
Output:
0x00000000 => 0x00000000 (vs 0x00000000)
0x00000008 => 0x00000001 (vs 0x00000001)
0x00000812 => 0x00000182 (vs 0x00000182)
0x12345678 => 0x82A4C6E1 (vs 0x82A4C6E1)
0x9ABCDEF0 => 0x93B5D7F0 (vs 0x93B5D7F0)
Note the use of uint32_t to avoid undefined behaviour with sign bits in signed integers.
To obtain a bit, you can mask it out using AND. To get the lowest bit, for example:
x & 0x01
Think about how AND works: both bits must be set. Since we're ANDing with 1, all bits except the first must be 0, because they're 0 in 0x01. The lowest bit will be either 0 or 1, depending on what's in x; said differently, the lowest bit will be the lowest bit in x, which is what we want. Visually:
x = abcd
AND 1 = 0001
--------
000d
(where abcd represent the bits in those slots; we don't know what they are)
To move it to bit 3's position, just shift it:
(x & 0x01) << 3
Visually, again:
x & 0x01 = 000d
<< 3
-----------
d000
To add it in, first, we need to clear out that spot in x for our bit. We use AND again:
x & ~0x08
Here, we invert 0x08 (which is 1000 in binary): this means all bits except bit 3 are set, and when we AND that with x, we get x except for that bit.
Visually,
0x08 = 1000
(invert)
-----------
0111
AND x = abcd
------------
0bcd
Combine with OR:
(x & ~0x08) | ((x & 0x01) << 3)
Visually,
x & ~0x08 = 0bcd
| ((x & 0x01) << 3) = d000
--------------------------
dbcd
Now, this only moves bit 0 to bit 3, and just overwrites bit 3. We still need to do bit 3 → 0. That's simply another:
(x & 0x08) >> 3
And we need to clear out its spot:
x & ~0x01
We can combine the two clearing pieces:
x & ~0x09
And then:
(x & ~0x09) | ((x & 0x01) << 3) | ((x & 0x08) >> 3)
That of course handles only the lowest nibble. I'll leave the others as an exercise.
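To see the low-nibble formula in action, here is a small standalone check with a few sample values of my own choosing; it should print 1 for an input of 8 and 8 for an input of 1:

#include <stdio.h>

int main(void)
{
    unsigned samples[] = { 0x0, 0x1, 0x8, 0x9, 0xA };

    for (unsigned i = 0; i < sizeof samples / sizeof samples[0]; i++) {
        unsigned x = samples[i];
        /* swap bit 0 and bit 3 of the lowest nibble only */
        unsigned swapped = (x & ~0x09u) | ((x & 0x01u) << 3) | ((x & 0x08u) >> 3);
        printf("0x%X -> 0x%X\n", x, swapped);
    }
    return 0;
}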
Try the code below. Here you need to know the bitwise operators and where to place each bit; you also need to be aware of the basic masking, shifting and toggling properties.
#include <stdio.h>

#define BITS_SWAP(x) x = (((x & 0x88888888) >> 3) | ((x & 0x11111111) << 3)) | (x & ~(0x88888888 | 0x11111111))

int main()
{
    int data = 0;

    printf("enter the data in hex=0x");
    scanf("%x", &data);
    printf("bits=%x", BITS_SWAP(data));
    return 0;
}
Output:
vinay@vinay-VirtualBox:~/c_skill$ ./a.out
enter the data in hex=0x1
bits=8
vinay@vinay-VirtualBox:~/c_skill$ ./a.out
enter the data in hex=0x812
bits=182
vinay@vinay-VirtualBox:~/c_skill$ ./a.out
enter the data in hex=0x12345678
bits=82a4c6e1
vinay@vinay-VirtualBox:~/c_skill$
Try this variant of the XOR swap:
#include <stdint.h>

uint32_t switch_bits(uint32_t a)
{
    static const uint32_t mask = 0x11111111;

    a ^= (a & mask) << 3;   /* bit 3 of each nibble ^= bit 0 */
    a ^= (a >> 3) & mask;   /* bit 0 of each nibble ^= bit 3 */
    a ^= (a & mask) << 3;   /* bit 3 of each nibble ^= bit 0 */
    return a;
}
Move the low bits up to the high-bit positions and mask off everything except those moved bits.
Move the high bits down to the low-bit positions and mask off everything except those moved bits.
Mask out all the bits that have not been moved.
Combine the three results with ORs.
Code:
unsigned SwitchBits(unsigned n) {
    return ((n << 3) & 0x88888888) | ((n >> 3) & 0x11111111) | (n & 0x66666666);
}
Alternatively, if you would like to be very clever, it can be done with two fewer operations, though this may not actually be faster due to some of the dependencies between instructions.
Move the high bits down to align with the low bits.
XOR: this records a 0 in the low bit if the high and low bits are the same, and a 1 if they are different.
From this, mask out only the low bit of each nibble.
From this, multiply by 9; this keeps the low bit as is and also copies it to the high-bit position.
From this, XOR with the original value. In the case that the high and low bits are the same, no change occurs, which is correct. In the case they are different, they are effectively exchanged.
Code:
unsigned SwitchBits(unsigned n) {
    return ((((n >> 3) ^ n) & 0x11111111) * 0x9) ^ n;
}
I know how to set a bit, clear a bit, toggle a bit, and check whether a bit is set.
But how can I copy a bit, for example bit nr 7 of byte_1, to bit nr 7 in byte_2?
Is it possible without an if statement (without checking the value of the bit)?
#include <stdio.h>
#include <stdint.h>

int main(){
    int byte_1 = 0b00001111;
    int byte_2 = 0b01010101;

    byte_2 = // what's next ?

    return 0;
}
byte_2 = (byte_2 & 0b01111111) | (byte_1 & 0b10000000);
You first need to read the bit from byte1, clear that bit in byte2, and then OR in the bit you read earlier:
read_from = 3; // read bit 3
write_to = 5; // write to bit 5
the_bit = ((byte1 >> read_from) & 1) << write_to;
byte2 &= ~(1 << write_to);
byte2 |= the_bit;
Note that the formula in the other answer (if you extend it to using variables, instead of just bit 7) is for the case where read_from and write_to are the same value.
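For the same-position case, a generalized, branch-free version of that formula looks like this (a small sketch; the bit index and values are my own picks for illustration):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t byte_1 = 0x8F;   /* bit 7 set   */
    uint8_t byte_2 = 0x55;   /* bit 7 clear */
    unsigned n = 7;          /* copy bit n from byte_1 into byte_2 */

    /* clear bit n of byte_2, then OR in bit n of byte_1 -- no if needed */
    byte_2 = (uint8_t)((byte_2 & ~(1u << n)) | (byte_1 & (1u << n)));

    printf("byte_2 = 0x%02X\n", byte_2);   /* 0xD5: bit 7 is now set */
    return 0;
}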