I need to convert an 8-bit number (0 - 255, or 0x0 - 0xFF) to its 12-bit equivalent (0 - 4095, or 0x0 - 0xFFF).
I don't want to do just a straight conversion of the same number; I want to represent the same scale, but in 12 bits.
For example:
0xFF in 8 bits should convert to 0xFFF in 12 bits
0x0 in 8 bits should convert to 0x0 in 12 bits
0x7F in 8 bits should convert to 0x7FF in 12 bits
0x24 in 8 bits should convert to 0x242 in 12 bits
Are there any specific algorithms or techniques that I should be using?
I am coding in C.
Try x << 4 | x >> 4.
(This has been updated by the OP; it was changed from x << 4 + x >> 4, which actually parses as x << (4 + x) >> 4, because + binds more tightly than <<.)
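As a minimal sketch (the function name is mine, not from the answer), the trick replicates the high nibble into the low 4 bits, which maps 0x00 to 0x000 and 0xFF to 0xFFF:

#include <stdio.h>
#include <stdint.h>

/* Expand an 8-bit value to 12 bits by replicating its high nibble
   into the low 4 bits: (x << 4) | (x >> 4). */
static uint16_t expand_8_to_12(uint8_t x)
{
    return (uint16_t)((x << 4) | (x >> 4));
}

int main(void)
{
    printf("%03X\n", expand_8_to_12(0xFF)); /* FFF */
    printf("%03X\n", expand_8_to_12(0x00)); /* 000 */
    printf("%03X\n", expand_8_to_12(0x24)); /* 242 */
    return 0;
}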
If you can go through a wider intermediate type, then this may help:
b = a * ((1 << 12) - 1) / ((1 << 8) - 1)
It is ugly, but it preserves the scaling almost exactly as requested. Of course you can substitute the constants (4095 and 255) directly.
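For reference, a hedged sketch of that exact linear scaling (the function name and the uint32_t intermediate are my choices):

#include <stdint.h>

/* Scale [0, 255] to [0, 4095] exactly: b = a * 4095 / 255.
   The uint32_t intermediate keeps a * 4095 from overflowing
   a 16-bit int. */
static uint16_t scale_8_to_12(uint8_t a)
{
    return (uint16_t)(((uint32_t)a * 4095u) / 255u);
}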
What about:
x = x ? ((x + 1) << 4) - 1 : 0
I use the mathematical equation y = mx + c.
This assumes the low end of the range of values is zero.
You can scale your data by a factor of m (multiply to increase the range, divide to decrease it).
Ex.
My ADC data was 12-bit. Range in integer = 0 to 4095.
I want to shrink this data to the range 0 to 255.
m = (y2 - y1) / (x2 - x1)
m = (4095 - 0) / (255 - 0)
m = 16.05, which rounds to 16
So the data received in 12 bits is divided by 16 to convert it to 8 bits.
This conversion is linear in nature.
Hope this is also a good idea.
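As a sketch in C (names are illustrative), dividing by 16 is the same as shifting right by 4:

#include <stdint.h>

/* Shrink a 12-bit ADC sample (0..4095) to 8 bits (0..255).
   m = 4095 / 255 is approximately 16, so divide by 16 (>> 4). */
static uint8_t adc12_to_8(uint16_t sample)
{
    return (uint8_t)(sample >> 4);
}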
I'm reading the Multiboot2 specification. You can find it here. Compared to the previous version, it names all of its structures "tags". They're defined like this:
3.1.3 General tag structure
Tags constitute a buffer of structures following each other, padded to u_virt size. Every structure has the
following format:
        +-------------------+
u16     | type              |
u16     | flags             |
u32     | size              |
        +-------------------+
type is divided into 2 parts. The lower part contains an identifier of
the contents of the rest of the tag. size contains the size of the tag,
including the header fields. If bit 0 of flags (also known as
optional) is set, the bootloader may ignore this tag if it lacks
the relevant support. Tags are terminated by a tag of type 0 and size
8.
Then later in example code:
for (tag = (struct multiboot_tag *) (addr + 8);
     tag->type != MULTIBOOT_TAG_TYPE_END;
     tag = (struct multiboot_tag *) ((multiboot_uint8_t *) tag
                                     + ((tag->size + 7) & ~7)))
The last part confuses me. In Multiboot 1, the code was substantially simpler: you could just do multiboot_some_structure * mss = (multiboot_some_structure *) mbi->some_addr and get the members directly, without confusing code like this.
Can somebody explain what ((tag->size + 7) & ~7) means?
As mentioned by chux in his comment, this rounds tag->size up to the nearest multiple of 8.
Let's take a closer look at how that works.
Suppose size is 16:
00010000 // 16 in binary
+00000111 // add 7
--------
00010111 // results in 23
The expression ~7 takes the value 7 and inverts all bits. So:
00010111 // 23 (from previous step)
&11111000 // bitwise-AND ~7
--------
00010000 // results in 16
Now suppose size is 17:
00010001 // 17 in binary
+00000111 // add 7
--------
00011000 // results in 24
Then:
00011000 // 24 (from previous step)
&11111000 // bitwise-AND ~7
--------
00011000 // results in 24
So if the lower 3 bits of size are all zero, i.e. size is a multiple of 8, (size+7)&~7 sets those bits and then clears them, so there is no net effect. But if any one of those bits is 1, adding 7 carries into bit 3 (the 8s place), and the lower three bits are then cleared; i.e. the number is rounded up to the nearest multiple of 8.
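Here is the idiom as a small reusable sketch (ALIGN8 is my name for it, not something from the Multiboot2 code):

#include <stdio.h>

/* Round x up to the next multiple of 8: add 7, then clear the low 3 bits. */
#define ALIGN8(x) (((x) + 7u) & ~7u)

int main(void)
{
    printf("%u\n", ALIGN8(16u)); /* 16: already a multiple of 8 */
    printf("%u\n", ALIGN8(17u)); /* 24: rounded up */
    return 0;
}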
~ is a bitwise NOT. & is a bitwise AND.
assuming 16 bits are used:
7 is 0000 0000 0000 0111
~7 is 1111 1111 1111 1000
Anything ANDed with 0 is 0. Anything ANDed with 1 is itself. Thus
N & 0 = 0
N & 1 = N
So when you AND with ~7, you essentially clear the lowest three bits and all of the other bits remain unchanged.
Thanks to @chux for the answer. According to him, it rounds the size up to a multiple of 8, if needed. This is very similar to a technique used in 15bpp drawing code:
//+7/8 will cause this to round up...
uint32_t vbe_bytes_per_pixel = (vbe_bits_per_pixel + 7) / 8;
Here's the reasoning:
Things were pretty simple up to now but some confusion is introduced
by the 16bpp format. It's actually 15bpp since the default format is
actually RGB 5:5:5 with the top bit of each u_int16 being unused. In
this format, each of the red, green and blue colour components is
represented by a 5 bit number giving 32 different levels of each and
32768 possible different colours in total (true 16bpp would be RGB
5:6:5 where there are 65536 possible colours). No palette is used for
16bpp RGB images - the red, green and blue values in the pixel are
used to define the colours directly.
& ~7 sets the last three bits to 0
Please explain why doing the following gives me the second bit of the number stored in i, in its internal representation.
(i & 2) / 2;
Doing i & 2 masks out all but the second bit in i. [1]
That means the expression evaluates to either 0 or 2 (binary 00 and 10 respectively).
Dividing that by 2 gives either 0 or 1 which is effectively the value of the second bit in i.
For example, if i = 7 i.e. 0111 in binary:
i & 2 gives 0010.
0010 is 2 in decimal.
2/2 gives 1 i.e. 0001.
[1] & is the bitwise AND in C. See here for an explanation on how bitwise AND works.
i & 2 masks out all but the second bit.
Dividing it by 2 is the same as shifting right by 1 bit.
e.g.
i = 01100010
(i & 2) == (i & 00000010) = 00000010
(i & 2) / 2 == (i & 2) >> 1 = 00000001
The & operator is bitwise AND: for each bit, the result is 1 only if the corresponding bits of both arguments are 1. Since the only 1 bit in the number 2 is the second-lowest bit, a bitwise AND with 2 will force all the other bits to 0. The result of (i & 2) is either 2 if the second bit in i is set, or 0 otherwise.
Dividing by 2 just changes the result to 1 instead of 2 when the second bit of i is set. It isn't necessary if you're just concerned with whether the result is zero or nonzero.
2 is 10 in binary. & is a bitwise conjunction. So, i & 2 isolates the second-from-the-end bit of i. And dividing by 2 is the same as shifting right by 1 bit, which moves that bit into the last (least significant) position.
Actually, shifting to the right would be better here, as it clearly states your intent. So, this code would be normally written like this: (i & 0x02) >> 1
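A tiny sketch generalizing this to any bit position (the helper name is mine):

#include <stdio.h>

/* Extract bit n of i, where bit 0 is the least significant. */
static unsigned get_bit(unsigned i, unsigned n)
{
    return (i >> n) & 1u;
}

int main(void)
{
    unsigned i = 7;                /* 0111 in binary */
    printf("%u\n", get_bit(i, 1)); /* 1: same as (i & 2) / 2 */
    return 0;
}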
Could someone explain to me how this algorithm converts MSB to LSB or LSB to MSB on a 32-bit system?
unsigned char b = x;
b = ((b * 0x0802LU & 0x22110LU) | (b * 0x8020LU & 0x88440LU)) * 0x10101LU >> 16;
I've seen hex values end with LU or just U in code before; what do they mean?
Thanks!
Presumably, a char has eight bits, so unsigned char b = x takes the low eight bits of x.
The mask with 0x22110 extracts bits 4, 8, 13, and 17 (numbering from 0 for the least significant bit). So, in the multiplication by 0x0802, we only care about what it places at those bits. In 0x802, bits 1 and 11 are on, so this multiplication places a copy of the eight bits of b in bits 1 through 8 and another copy in bits 11 through 18. There is no overlap, so there are no effects from adding bits that overlap in more general multiplications.
From this product, we take these bits:
Bit 4, which is bit 3 of b. (Bit 4 from the copy starting at bit 1, so bit 4 - 1 = 3 of b.)
Bit 8, which is bit 7 of b. (8 - 1 = 7.)
Bit 13, which is bit 2 of b. (13 - 11 = 2.)
Bit 17, which is bit 6 of b. (17 - 11 = 6.)
Similarly, the mask by 0x88440 extracts bits 6, 10, 15, and 19. The multiplication by 0x8020 places a copy of b in bits 5 to 12 and another copy in bits 15 to 22. From this product, we take these bits:
Bit 6, which is bit 1 of b.
Bit 10, which is bit 5 of b.
Bit 15, which is bit 0 of b.
Bit 19, which is bit 4 of b.
Then we OR those together, producing:
Bit 4, which is bit 3 of b.
Bit 6, which is bit 1 of b.
Bit 8, which is bit 7 of b.
Bit 10, which is bit 5 of b.
Bit 13, which is bit 2 of b.
Bit 15, which is bit 0 of b.
Bit 17, which is bit 6 of b.
Bit 19, which is bit 4 of b.
Call this result t.
We are going to multiply that by 0x10101, shift right by 16, and assign to b. The assignment converts to unsigned char, so only the low eight bits are kept. The low eight bits after the shift are bits 24 to 31 before the shift. So we only care about bits 24 to 31 in the product.
The multiplier 0x10101 has bits 0, 8, and 16 set. Thus, bit 24 in the result is the sum of bits 24, 16, and 8 in t, plus any carry from elsewhere. However, there is no carry: observe that no two of the set bits in t are eight or sixteen apart, which are the spacings of the set bits in the multiplier. Therefore, no two of them can contribute to the same bit in the product; each bit in the product is the result of at most one bit in t. We just need to figure out which bit that is.
Bit 24 must come from bit 8, 16, or 24 in t. Only bit 8 can be set, and it is bit 7 from b. Deducing all the bits this way:
Bit 24 is bit 8 in t, which is bit 7 in b.
Bit 25 is bit 17 in t, which is bit 6 in b.
Bit 26 is bit 10 in t, which is bit 5 in b.
Bit 27 is bit 19 in t, which is bit 4 in b.
Bit 28 is bit 4 in t, which is bit 3 in b.
Bit 29 is bit 13 in t, which is bit 2 in b.
Bit 30 is bit 6 in t, which is bit 1 in b.
Bit 31 is bit 15 in t, which is bit 0 in b.
Thus, bits 24 to 31 in the product are bits 7 to 0 in b, so the eight bits finally produced are bits 7 to 0 in b.
View b as an 8 bit value abcdefgh where each of those letters is a single bit (0 or 1), with a the most significant bit and h the least significant. Then look at what each of the operations do to those bits:
b * 0x0802LU = 00000abcdefgh00abcdefgh0
b * 0x0802LU & 0x22110LU = 000000b000f0000a000e0000
b * 0x8020LU = 0abcdefgh00abcdefgh00000
b * 0x8020LU & 0x88440LU = 0000d000h0000c000g000000
((b * 0x0802LU & 0x22110LU) | (b * 0x8020LU & 0x88440LU))
= 0000d0b0h0f00c0a0g0e0000
so at this point, it has shuffled the bits and spread them out.
(....) * 0x10101LU =                 d0b0h0f00c0a0g0e0000
                   +         d0b0h0f00c0a0g0e000000000000
                   + d0b0h0f00c0a0g0e00000000000000000000
                   = d0b0h0f0dcbahgfedcbahgfe0c0a0g0e0000
(...) * 0x10101LU >> 16 = d0b0h0f0dcbahgfedcba
b = hgfedcba
The multiply is equivalent to shift/add/add (3 bits set in the constant), which lines up the bits where they should end up. Then the final shift and reduction to 8 bits gives you the final bit-reversed result.
To answer your second question: the U suffix means the hex constant is treated as unsigned (if there is a need to expand it to a wider type), and L means it is treated as a long.
As for your first question:
It's difficult to visualize what this algorithm is doing when you look at it as multiplications and hex values. It becomes clearer when you convert it to binary and replace each multiplication with an equivalent sum of shift operations. Essentially, it spreads out parts of the byte by shifting and masking, and then implements a parallel half-adder that reconstructs the parts in place, which happens to be the reverse of where they started.
For example,
b * 0x0802 = (b << 11) | (b << 1)
Plug in some values (in binary) for b and follow along.
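One way to follow that advice is to check the trick against a naive bit-by-bit reversal; this sketch (all names are mine) compares the two over every 8-bit value:

#include <stdio.h>
#include <stdint.h>

/* The multiply/mask trick from the question. The final multiply may wrap
   modulo 2^32 when unsigned long is 32 bits, but only bits 16..23 of the
   product are needed, so the wrap is harmless; the cast to uint8_t
   discards the garbage above the low eight bits. */
static uint8_t reverse_fast(uint8_t b)
{
    return (uint8_t)(((b * 0x0802LU & 0x22110LU) |
                      (b * 0x8020LU & 0x88440LU)) * 0x10101LU >> 16);
}

/* Naive reference: move each bit to its mirrored position. */
static uint8_t reverse_naive(uint8_t b)
{
    uint8_t r = 0;
    for (int i = 0; i < 8; i++)
        r |= (uint8_t)(((b >> i) & 1u) << (7 - i));
    return r;
}

int main(void)
{
    for (unsigned x = 0; x < 256; x++)
        if (reverse_fast((uint8_t)x) != reverse_naive((uint8_t)x))
            printf("mismatch at %u\n", x);
    return 0;
}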
Reading or writing C code, I often have difficulty translating numbers between their binary and hex representations. Masks like 0xAAAA5555 are used very often in low-level programming, but it's difficult to recognize the bit pattern they represent. Is there an easy-to-remember rule for doing this quickly in your head?
Each hex digit maps exactly onto 4 bits. I usually keep in mind the 8-4-2-1 weights of each of these bits, so it is very easy to do the conversion in your head, e.g.
A = 10 = 8+2 = 1010 ...
5 = 4+1 = 0101
Just keep the 8-4-2-1 weights in mind:
    A          5
 8+4+2+1    8+4+2+1
 1 0 1 0    0 1 0 1
I always find it easy to map HEX to BINARY numbers. Since each hex digit maps directly onto a four-digit binary number, you can think of:
> 0xA4
as
> b 1010 0100
>   ---- ----  (4 binary digits for each part)
>     A    4
The conversion is calculated by dividing the base-10 representation by 2 and stringing the remainders together in reverse order. I do this in my head; it seems to work.
So say you ask what 0xAAAA5555 looks like.
I just work out what A looks like and 5 looks like by doing
A = 10
10 / 2 = 5 r 0
5 / 2 = 2 r 1
2 / 2 = 1 r 0
1 / 2 = 0 r 1
so I know the A's look like 1010 (note that 4 fingers are a good way to keep track of the remainders!)
You can string blocks of 4 bits together, so A A is 1010 1010. To convert binary back to hex, I always go through base 10 again by summing up the powers of 2. You can do this by forming blocks of 4 bits (padding with leading 0s) and stringing the results together.
So 111011101 is 0001 1101 1101, which is (1) (1+4+8) (1+4+8) = 1 13 13, which is 0x1DD.
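If you want the machine to check your mental arithmetic, here is a small sketch (entirely my own helper) that prints a value in binary, grouped in nibbles so each group lines up with one hex digit:

#include <stdio.h>
#include <stdint.h>

/* Print a 32-bit value as binary, with a space between 4-bit groups. */
static void print_nibbles(uint32_t v)
{
    for (int i = 28; i >= 0; i -= 4) {
        for (int b = 3; b >= 0; b--)
            putchar(((v >> (i + b)) & 1u) ? '1' : '0');
        if (i > 0)
            putchar(' ');
    }
    putchar('\n');
}

int main(void)
{
    print_nibbles(0xAAAA5555u); /* 1010 1010 1010 1010 0101 0101 0101 0101 */
    return 0;
}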
I want to replace a bit, or several bits, in a 32/64-bit data field without affecting the other bits. Say, for example:
I have a 64-bit register where bits 5 and 6 can take values 0, 1, 2, and 3.
5:6
---
0 0
0 1
1 0
1 1
Now, when I read the register, I get, say, the value 0x146 (binary 0001 0 10 0 0110, with bits 6:5 shown as the middle group). I want to change the value at bit positions 5 and 6 to 01 (right now it is 10, which is 2 in decimal, and I want to replace it with 1, i.e. 01) without affecting the other bits, and write the register back with only bits 5 and 6 modified (so it becomes 0x126 after the change).
I tried doing this:
reg_data = 0x146;
reg_data |= 1 << shift; // In this case, 'shift' is 5
If I do this, the value at bit positions 5 and 6 becomes 11 (0x3), not 01 (0x1), which is what I wanted.
How do I go about doing the read, modify, and write?
How do I replace only certain bits in a 32/64-bit field without affecting the rest of the field, using C?
Setting a single bit is okay, but with more than one bit I am finding it a little difficult.
Use a bitmask. It goes something like this:
new_value = 0, 1, 2 or 3 // (this is the value you will set in)
bit_mask = (3 << 5)      // (mask of the bits you want to set)
reg_data = (reg_data & ~bit_mask) | (new_value << 5)
This preserves the old bits and ORs in the new ones.
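Spelled out as a hedged C sketch for the register in the question (the field/macro names are placeholders):

#include <stdint.h>

#define FIELD_SHIFT 5
#define FIELD_MASK  (3u << FIELD_SHIFT) /* bits 5 and 6 */

/* Read-modify-write: clear the two-bit field, then OR in the new value. */
static uint64_t set_field(uint64_t reg_data, unsigned new_value)
{
    return (reg_data & ~(uint64_t)FIELD_MASK)
         | ((uint64_t)new_value << FIELD_SHIFT);
}

/* Example from the question: set_field(0x146, 1) == 0x126. */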
reg_data &= ~( (1 << shift1) | (1 << shift2) );
reg_data |= ( (1 << shift1) | (1 << shift2) );
The first line clears the two bits at (shift1, shift2) and the second line sets them.
Here is a generic approach which acts on an array of bytes, treating it as one long bitfield, and addresses each bit position individually:
#define set_bit(arr,x) ((arr[(x)>>3]) |= (0x01 << ((x) & 0x07)))
#define clear_bit(arr,x) (arr[(x)>>3] &= ~(0x01 << ((x) & 0x07)))
#define get_bit(arr,x) (((arr[(x)>>3]) & (0x01 << ((x) & 0x07))) != 0)
It simply takes the index, uses the lower three bits of the index to identify one of eight bit positions inside each location of the char array, and uses the remaining upper bits to address the array location in which the bit denoted by x occurs.
To set a bit, you OR the target word with another word that has a 1 in that specific bit position and 0 in all the others. The 0's in the other positions ensure that the existing bits of the target are unchanged by the OR, and the 1 ensures that the target gets a 1 in that position. If we have mask = 0x02 = 00000010 (1 byte), then we can OR it into any word to set that bit position:
target  = 1 0 1 1 0 1 0 0
OR        + + + + + + + +
mask      0 0 0 0 0 0 1 0
          ---------------
answer    1 0 1 1 0 1 1 0
To clear a bit, you AND the target word with another word that has a 0 in that specific bit position and 1 in all the others. The 1's in all the other bit positions ensure that the AND preserves the target's 0's and 1's in those locations, and the 0 in the bit position to be cleared forces that bit of the target to 0. If we have the same mask = 0x02, then we can prepare it for clearing with ~mask:
mask    = 0 0 0 0 0 0 1 0
~mask   = 1 1 1 1 1 1 0 1
AND       . . . . . . . .
target    1 0 1 1 0 1 1 0
          ---------------
answer    1 0 1 1 0 1 0 0
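A short usage sketch of those macros (the buffer and bit index are mine):

#include <stdio.h>

#define set_bit(arr,x)   ((arr[(x)>>3]) |= (0x01 << ((x) & 0x07)))
#define clear_bit(arr,x) (arr[(x)>>3] &= ~(0x01 << ((x) & 0x07)))
#define get_bit(arr,x)   (((arr[(x)>>3]) & (0x01 << ((x) & 0x07))) != 0)

int main(void)
{
    unsigned char bits[8] = {0};       /* a 64-bit field as 8 bytes */
    set_bit(bits, 42);                 /* byte 5, bit 2 */
    printf("%d\n", get_bit(bits, 42)); /* 1 */
    clear_bit(bits, 42);
    printf("%d\n", get_bit(bits, 42)); /* 0 */
    return 0;
}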
Apply a mask against the bitfield to keep the bits that you do not want to change. This also clears out the bits that you will be changing.
Ensure that you have a bitfield that contains only the bits that you want to set/clear.
Either use the OR operator to combine the two bitfields, or, since the target bits are now clear, simply add them.
For instance, if you wanted to change only bits 2 through 5 based on an input of 0 through 15:
byte newVal = (byte)value & 0x0F; // keep only the low 4 bits of the input
newVal = newVal << 2;             // move them into bit positions 2..5
oldVal = oldVal & 0xC3;           // clear bits 2..5 (0xC3 = 1100 0011)
oldVal = oldVal + newVal;         // merge in the new bits (same as OR here)
The question asks about C, but since all searches for "replace bits" lead here, I will supply my implementation in VB.NET.
It has been unit tested. For those wondering what the ToBinaryString extension looks like: Convert.ToString(value, 2)
''' <summary>
''' Replace the bits in the enumValue with the bits in the bits parameter, starting from the position that corresponds to 2 to the power of the position parameter.
''' </summary>
''' <param name="enumValue">The integer value to place the bits in.</param>
''' <param name="bits">The bits to place. It must be smaller or equal to 2 to the power of the position parameter.</param>
'''<param name="length">The number of bits that the bits should replace.</param>
''' <param name="position">The exponent of 2 where the bits must be placed.</param>
''' <returns></returns>
''' <remarks></remarks>'
<Extension>
Public Function PlaceBits(enumValue As Integer, bits As Integer, length As Integer, position As Integer) As Integer
If position > 31 Then
Throw New ArgumentOutOfRangeException(String.Format("The position {0} is out of range for a 32 bit integer.",
position))
End If
Dim positionToPlace = 2 << position
If bits > positionToPlace Then
Throw New ArgumentOutOfRangeException(String.Format("The bits {0} must be smaler than or equal to {1}.",
bits, positionToPlace))
End If
'Create a bitmask (a series of ones for the bits to retain and a series of zeroes for bits to discard).'
Dim mask As Integer = (1 << length) - 1
'Use for debugging.'
'Dim maskAsBinaryString = mask.ToBinaryString'
'Shift the mask to left to the desired position'
Dim leftShift = position - length + 1
mask <<= leftShift
'Use for debugging.'
'Dim shiftedMaskAsBinaryString = mask.ToBinaryString'
'Shift the bits to left to the desired position.'
Dim shiftedBits = bits << leftShift
'Use for debugging.'
'Dim shiftedBitsAsBinaryString = shiftedBits.ToBinaryString'
'First clear (And Not) the bits to replace, then set (Or) them.'
Dim result = (enumValue And Not mask) Or shiftedBits
'Use for debugging.'
'Dim resultAsBinaryString = result.ToBinaryString'
Return result
End Function
You'll need to do that one bit at a time. Use OR, as you're currently doing, to set a bit to one, and use the following to set a bit to zero:
reg_data &= ~(1 << shift);
You can use this approach for any number of bits and any bit field.
Basically, you have three parts in the bit sequence of the number -
MSB_SIDE | CHANGED_PART | LSB_SIDE
The CHANGED_PART can extend up to either the MSB or the LSB end.
The steps to replace a number of bits are as follows -
Take only the MSB_SIDE part and replace all the remaining bits with 0.
Add your desired bit sequence, shifted to its particular position, to the new bit sequence.
Finally, merge in the LSB_SIDE of the original bit sequence.
org_no = 0x53513C;
upd_no = 0x333;
start_pos = 0x6, bit_len = 0xA;
temp_no = 0x0;
temp_no = org_no & (0xFFFFFFFF << (bit_len + start_pos)); // This is step 1
temp_no |= upd_no << start_pos;                           // This is step 2
org_no = temp_no | (org_no & ~(0xFFFFFFFF << start_pos)); // This is step 3
Note: the 0xFFFFFFFF mask assumes 32-bit values. You can adjust it to your requirements.
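Wrapped up as a reusable sketch (the function name and parameter types are mine; note that a shift by 32 or more is undefined in C, hence the assumption below):

#include <stdint.h>

/* Replace bit_len bits of org, starting at bit start_pos, with upd.
   Assumes bit_len < 32 and start_pos + bit_len <= 32. */
static uint32_t replace_bits(uint32_t org, uint32_t upd,
                             unsigned start_pos, unsigned bit_len)
{
    uint32_t mask = ((1u << bit_len) - 1u) << start_pos;
    return (org & ~mask) | ((upd << start_pos) & mask);
}

/* Example from the answer: replace_bits(0x53513C, 0x333, 6, 10) == 0x53CCFC. */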