I'm struggling to mask a uint64_t variable by N bytes. I do not know N in advance, but I know it is 8 or less. My current code looks like this:
// uint64_t n is given
uint64_t mask;
for( mask = 0x00; n; n--) {
mask = mask<<8 | 0xFF;
}
for building the mask. What am I doing wrong here?
Edit:
The question was answered. Anyway, for better understanding:
I want a mask like this:
0x000000FF // or:
0x0000FFFF // or:
0x00FFFFFF
to take 1, 2 or more bytes from the data. As the comments say, my code works! Maybe I had a bug somewhere else!
It should work, according to the operator precedence table.
Still, it's clearer to write it as:
mask <<= 8;
mask |= 0xff;
or:
mask = (mask << 8) | 0xff;
You can also do it with a look-up table, of course.
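For instance, a minimal sketch of the look-up table approach (assuming n is always in the range 0..8):
static const uint64_t byte_masks[9] = {
    0x0000000000000000ULL, 0x00000000000000FFULL, 0x000000000000FFFFULL,
    0x0000000000FFFFFFULL, 0x00000000FFFFFFFFULL, 0x000000FFFFFFFFFFULL,
    0x0000FFFFFFFFFFFFULL, 0x00FFFFFFFFFFFFFFULL, 0xFFFFFFFFFFFFFFFFULL
};
/* ... */
mask = byte_masks[n];   /* n bytes' worth of 0xFF */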
I am not sure if I get the question right, but your mask looks like
0x00000000000000ff
0x000000000000ffff
0x0000000000ffffff
...
I assume that you want something like the following, to mask individual bytes:
0x00000000000000ff
0x000000000000ff00
0x0000000000ff0000
...
For this, you can use e.g. the following code:
for( mask = 0xff; n; n--) {
// Use the mask HERE
...
mask = mask<<8;
}
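For illustration, a small self-contained demo of that loop, printing the low n bytes of a sample value (the sample value and variable names are made up):
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t value = 0x1122334455667788ULL;
    uint64_t mask;
    unsigned n = 3, byteno = 0;

    for (mask = 0xff; n; n--) {
        /* Use the mask here: extract and print one byte */
        printf("byte %u: 0x%02llX\n", byteno,
               (unsigned long long)((value & mask) >> (8 * byteno)));
        mask = mask << 8;
        byteno++;
    }
    return 0;
}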
You could use this code snippet to replace the byteno-th byte of src with the byte dest:
uint64_t replacemyByte(uint64_t src, uint64_t byteno, uint64_t dest)
{
    uint64_t shifted = dest << (8 * byteno);           /* new byte moved into position */
    uint64_t mask = (uint64_t)0xff << (8 * byteno);    /* selects the byte to replace */
    return (~mask & src) | shifted;
}
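For illustration, a hypothetical call to the function above, replacing byte 2 (counting from the least significant byte):
uint64_t r = replacemyByte(0x1122334455667788ULL, 2, 0xAB);
/* r == 0x1122334455AB7788 */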
Or did I get the question wrong?
I am using GCC struct bit fields in an attempt to interpret 8-byte CAN message data. I wrote a small program as an example of one possible message layout. The code and the comments should describe my problem. I assigned the 8 bytes so that all 5 signals should equal 1. As the output on an Intel PC shows, that is hardly the case. All CAN data that I deal with is big endian, and the fact that it is almost never packed 8-bit aligned makes htonl() and friends useless in this case. Does anyone know of a solution?
#include <stdio.h>
#include <netinet/in.h>
typedef union
{
unsigned char data[8];
struct {
unsigned int signal1 : 32;
unsigned int signal2 : 6;
unsigned int signal3 : 16;
unsigned int signal4 : 8;
unsigned int signal5 : 2;
} __attribute__((__packed__));
} _message1;
int main()
{
_message1 message1;
unsigned char incoming_data[8]; //This is how this message would come in from a CAN bus for all signals == 1
incoming_data[0] = 0x00;
incoming_data[1] = 0x00;
incoming_data[2] = 0x00;
incoming_data[3] = 0x01; //bit 1 of signal 1
incoming_data[4] = 0x04; //bit 1 of signal 2
incoming_data[5] = 0x00;
incoming_data[6] = 0x04; //bit 1 of signal 3
incoming_data[7] = 0x05; //bit 1 of signal 4 and signal 5
for(int i = 0; i < 8; ++i){
message1.data[i] = incoming_data[i];
}
printf("signal1 = %x\n", message1.signal1);
printf("signal2 = %x\n", message1.signal2);
printf("signal3 = %x\n", message1.signal3);
printf("signal4 = %x\n", message1.signal4);
printf("signal5 = %x\n", message1.signal5);
}
Because the packing order of bit fields within a struct varies between compilers and architectures, the best option is to use a helper function to pack/unpack the binary data instead.
For example:
static inline void message1_unpack(uint32_t *fields,
const unsigned char *buffer)
{
const uint64_t data = (((uint64_t)buffer[0]) << 56)
| (((uint64_t)buffer[1]) << 48)
| (((uint64_t)buffer[2]) << 40)
| (((uint64_t)buffer[3]) << 32)
| (((uint64_t)buffer[4]) << 24)
| (((uint64_t)buffer[5]) << 16)
| (((uint64_t)buffer[6]) << 8)
| ((uint64_t)buffer[7]);
fields[0] = data >> 32; /* Bits 32..63 */
fields[1] = (data >> 26) & 0x3F; /* Bits 26..31 */
fields[2] = (data >> 10) & 0xFFFF; /* Bits 10..25 */
fields[3] = (data >> 2) & 0xFF; /* Bits 2..9 */
fields[4] = data & 0x03; /* Bits 0..1 */
}
Note that because the consecutive bytes are interpreted as a single unsigned integer (in big-endian byte order), the above will be perfectly portable.
Instead of an array of fields, you could use a structure, of course; but it does not need to have any resemblance to the on-the-wire structure at all. However, if you have several different structures to unpack, an array of (maximum-width) fields usually turns out to be easier and more robust.
All sane compilers will optimize the above code just fine. In particular, GCC with -O2 does a very good job.
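For example, feeding the questioner's incoming_data through the helper (a minimal sketch; message1_unpack as defined above):
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const unsigned char incoming_data[8] = { 0x00, 0x00, 0x00, 0x01,
                                             0x04, 0x00, 0x04, 0x05 };
    uint32_t fields[5];
    message1_unpack(fields, incoming_data);
    for (int i = 0; i < 5; i++)
        printf("signal%d = %x\n", i + 1, (unsigned)fields[i]);   /* each prints 1 */
    return 0;
}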
The inverse, packing those same fields to a buffer, is very similar:
static inline void message1_pack(unsigned char *buffer,
const uint32_t *fields)
{
const uint64_t data = (((uint64_t)(fields[0] )) << 32)
| (((uint64_t)(fields[1] & 0x3F )) << 26)
| (((uint64_t)(fields[2] & 0xFFFF )) << 10)
| (((uint64_t)(fields[3] & 0xFF )) << 2)
| ( (uint64_t)(fields[4] & 0x03 ) );
buffer[0] = data >> 56;
buffer[1] = data >> 48;
buffer[2] = data >> 40;
buffer[3] = data >> 32;
buffer[4] = data >> 24;
buffer[5] = data >> 16;
buffer[6] = data >> 8;
buffer[7] = data;
}
Note that the masks define the field length (0x03 = 0b11 (2 bits), 0x3F = 0b111111 (6 bits), 0xFF = 0b11111111 (8 bits), 0xFFFF = 0b1111111111111111 (16 bits)); and the shift amount depends on the bit position of the least significant bit in each field.
To verify that such functions work, pack, unpack, repack, and re-unpack a buffer that should contain all zeros except one field set to all ones, and verify the data stays correct over the two roundtrips. This usually suffices to detect the typical bugs (wrong bit shift amounts, typos in masks).
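A rough self-test sketch of that roundtrip idea (message1_pack/message1_unpack as defined above; the field widths 32, 6, 16, 8, 2 are taken from the layout used here):
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int message1_selftest(void)
{
    const uint32_t widths[5] = { 32, 6, 16, 8, 2 };
    for (int f = 0; f < 5; f++) {
        uint32_t fields[5] = { 0 }, check[5];
        unsigned char buf[8];
        /* set one field to all ones, everything else stays zero */
        fields[f] = (widths[f] >= 32) ? 0xFFFFFFFFu : ((1u << widths[f]) - 1u);
        message1_pack(buf, fields);
        message1_unpack(check, buf);
        message1_pack(buf, check);       /* second roundtrip */
        message1_unpack(check, buf);
        if (memcmp(fields, check, sizeof fields) != 0) {
            fprintf(stderr, "field %d roundtrip failed\n", f);
            return 0;
        }
    }
    return 1;   /* every field survived two roundtrips */
}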
Note that documentation will be key to ensure the code remains maintainable. I'd personally add comment blocks before each of the above functions, similar to
/* message1_unpack(): Unpack 8-byte message to 5 fields:
field[0]: Foobar. Bits 32..63.
field[1]: Buzz. Bits 26..31.
field[2]: Wahwah. Bits 10..25.
field[3]: Cheez. Bits 2..9.
field[4]: Blop. Bits 0..1.
*/
with the field "names" reflecting their names in documentation.
I want to read and write from/to an unsigned char according to the table below:
For example, I have the following variables:
unsigned char hsi_div = 0x01; /* HSI/2 */
unsigned char cpu_div = 0x05; /* Fmaster/32 */
I want to write hsi_div to bits 4,3 and cpu_div to bits 2,1,0 (imagine the whole char is named CLK_DIVR):
CLK_DIVR |= hsi_div << 4; //not correct!
CLK_DIVR |= cpu_div << 2; //not correct!
And let's say I want to read the register back to make sure I did it correctly:
if( ((CLK_DIVR << 4) - 1) & hsi_div) ) { /* SET OK */ }
if( ((CLK_DIVR << 2) - 1) & cpu_div) ) { /* SET OK */ }
Is there something wrong with my bitwise operations!? I do not get correct behaviour.
I assume CLK_DIVR is a hardware peripheral register, which should be qualified volatile. Such registers should be set up with as few writes as possible. You change all writable bits, so just
CLK_DIVR = (uint8_t)((hsi_div << 3) | (cpu_div << 0));
Note the use of a fixed-width type; that makes mentioning that it is an 8-bit register unnecessary. According to the excerpt, the upper bits are read-only, so they are not changed by the write. The cast keeps the compiler from issuing a truncation warning, which is one of the recommended warnings to always enable (included in -Wconversion for gcc).
The shift count is actually the bit at which the field starts (its LSbit). A shift count of 0 means "no shifting", so the shift operator is not required; I still use it to clarify that I meant the field starts at bit 0. Just let the compiler optimize, and concentrate on writing maintainable code.
Note: Your code bit-ORs into whatever is already in the register. Bit-OR can only set bits, not clear them. Additionally, the shift counts were wrong.
Not sure, but if the excerpt is for an ARM Cortex-M CPU (STM32Fxxxx?), reducing external bus-cycles becomes more relevant, as the ARM can take quite some cycles for an access.
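A minimal sketch of the single write plus a read-back check, assuming HSIDIV sits in bits 4..3 and CPUDIV in bits 2..0 (the register itself is replaced by a stand-in so the snippet compiles on a host):
#include <stdint.h>

static volatile uint8_t clk_divr_stub;      /* stand-in for the real hardware register */
#define CLK_DIVR clk_divr_stub

int clk_div_set_and_check(uint8_t hsi_div, uint8_t cpu_div)
{
    CLK_DIVR = (uint8_t)((hsi_div << 3) | (cpu_div << 0));   /* single write */

    uint8_t readback = CLK_DIVR;                             /* read the register back */
    return ((readback >> 3) & 0x03u) == (hsi_div & 0x03u)
        && ((readback >> 0) & 0x07u) == (cpu_div & 0x07u);   /* nonzero means SET OK */
}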
For the HSIDIV bit fields you want:
hw_register = (hw_register & ~0x18) | ((hsi_value & 0x03) << 3);
This will mask the value to 2 bits wide, then shift it into bit positions 3 and 4, after clearing those bits in the register.
The CPUDIV field is:
hw_register = (hw_register & ~0x07) | (cpu_value & 0x07);
Reading the register:
hsi_value = (hw_register & 0x18) >> 3;
cpu_value = hw_register & 0x07;
Just
CLK_DIVR |= hsi_div << 3;
CLK_DIVR |= cpu_div << 0;
Since hsi_div is a 2-bit value, you have to move it up three positions to skip the CPUDIV field, and cpu_div already sits at the start of its field.
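For the example values in the question (hsi_div = 0x01, cpu_div = 0x05), and assuming HSIDIV occupies bits 4..3 and CPUDIV bits 2..0, the register value works out as:
(0x01 << 3) | (0x05 << 0) = 0x08 | 0x05 = 0x0D   /* binary 0000 1101: HSIDIV = 01, CPUDIV = 101 */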
I have a set of c-like snippets provided that describe a CRC algorithm, and this article that explains how to transform a serial implementation to parallel that I need to implement in Verilog.
I tried using multiple online code generators, both serial and parallel (although serial would not work in final solution), and also tried working with the article, but got no similar results to what these snippets generate.
I should say I'm more or less exclusively a hardware engineer and my understanding of C is rudimentary. I also never worked with CRC other than a straightforward shift-register implementation. I can see the polynomial and initial value from what I have, but that is more or less it.
Serial implementation uses augmented message. Should I also create parallel one for 6 bits wider message and append zeros to it?
I do not understand too well how the final value crc6 is generated. crcValue is produced by calling the CalcCrc function for the final zeros of the augmented message, then its top bit is written to its place in crc6 and removed before feeding it to the function again. Why is that? When working through the algorithm to get the matrices for the parallel implementation, should I take crc6 as my final result, rather than the last value of crcValue?
Regardless of how crc6 is obtained, the snippet for the CRC check only runs everything through the function. How does that work?
Here are the code snippets:
const unsigned crc6Polynom = 0x03;   // x**6 + x + 1
unsigned CalcCrc(unsigned crcValue, unsigned thisbit) {
    unsigned m = crcValue & crc6Polynom;
    while (m > 0) {
        thisbit ^= (m & 1);
        m >>= 1;
    }
    return (((thisbit << 6) | crcValue) >> 1);
}
// obtain CRC6 for sending (6 bit)
unsigned GetCrc(unsigned crcValue) {
unsigned crc6 = 0;
for (i = 0; i < 6; i++) {
crcValue = CalcCrc(crcValue, 0);
crc6 |= (crcValue & 0x20) | (crc6 >> 1);
crcValue &= 0x1F; // remove output bit
}
return (crc6);
}
// Calculate CRC6
unsigned crcValue = 0x3F;
for (i = 1; i < nDataBits; i++) { // Startbit excluded
unsigned thisBit = (unsigned)((telegram >> i) & 0x1);
crcValue = CalcCrc(crcValue, thisBit);
}
/* now send telegram + GetCrc(crcValue) */
// Check CRC6
unsigned crcValue = 0x3F;
for (i = 1; i < nDataBits+6; i++) { // No startbit, but with CRC
unsigned thisBit = (unsigned)((telegram >> i) & 0x1);
crcValue = CalcCrc(crcValue, thisBit);
}
if (crcValue != 0) { /* put error handler here */ }
Thanks in advance for any advice, I'm really stuck there.
XOR-ing bits of the data stream can be done in parallel because only the least significant bit is used for feedback (in this case), and the order of the data-stream bit XOR operations doesn't affect the result.
Whether the hardware would need a parallel version depends on how a data stream is handled. The hardware could calculate the CRC one bit at a time during transmission or reception. If the hardware is staged to work with 6 bit characters, then a parallel version would make sense.
Since the snippets use a right shift for the CRC, it would seem that data for each 6 bit character is transmitted and received least significant bit first, to allow for hardware that could calculate CRC 1 bit at a time as it's transmitted or received. After all 6 bit data characters are transmitted, then the 6 bit CRC is transmitted (also least significant bit first).
The snippets seem wrong. My guess at what they should be:
/* calculate crc6 1 bit at a time */
const unsigned crc6Polynom =0x43; /* x**6 + x + 1 */
unsigned CalcCrc(unsigned crcValue, unsigned thisbit) {
crcValue ^= thisbit;
if(crcValue&1)
crcValue ^= crc6Polynom;
crcValue >>= 1;
return crcValue;
}
Example for passing 6 bits at a time. A 64-entry table lookup could be used to replace the for loop (a sketch follows the function below).
/* calculate 6 bits at a time */
unsigned CalcCrc6(unsigned crcValue, unsigned sixbits) {
int i;
crcValue ^= sixbits;
for(i = 0; i < 6; i++){
if(crcValue&1)
crcValue ^= crc6Polynom;
crcValue >>= 1;
}
return crcValue;
}
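To illustrate the table-lookup idea mentioned above, a rough sketch (the table and helper names are made up). Because CalcCrc6 folds the data into crcValue with an XOR first, the whole update depends only on crcValue ^ sixbits, which is a 6-bit index:
static unsigned char crc6_table[64];

void BuildCrc6Table(void)
{
    for (unsigned i = 0; i < 64; i++)
        crc6_table[i] = (unsigned char)CalcCrc6(0, i);
}

unsigned CalcCrc6Lookup(unsigned crcValue, unsigned sixbits)
{
    return crc6_table[(crcValue ^ sixbits) & 0x3F];
}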
Assume that telegram contains 31 bits, 1 start bit + 30 data bits (five 6 bit characters):
/* code to calculate crc 6 bits at a time: */
unsigned crcValue = 0x3F;
int i;
telegram >>= 1; /* skip start bit */
for (i = 0; i < 5; i++) {
crcValue = CalcCrc6(crcValue, telegram & 0x3f);
telegram >>= 6;
}
How can I switch the 0th and 3rd bits of each nibble in an integer using only bit operations (no control structures)? What kind of masks do I need to create in order to solve this problem? Any help would be appreciated. For example, 8 (1000) becomes 1 (0001).
/*
* SwitchBits(0) = 0
* SwitchBits(8) = 1
* SwitchBits(0x812) = 0x182
* SwitchBits(0x12345678) = 0x82a4c6e1
* Legal Operations: ! ~ & ^ | + << >>
*/
int SwitchBits(int n) {
}
Code:
#include <stdio.h>
#include <inttypes.h>
static uint32_t SwitchBits(uint32_t n)
{
uint32_t bit0_mask = 0x11111111;
uint32_t bit3_mask = 0x88888888;
uint32_t v_bit0 = n & bit0_mask;
uint32_t v_bit3 = n & bit3_mask;
n &= ~(bit0_mask | bit3_mask);
n |= (v_bit0 << 3) | (v_bit3 >> 3);
return n;
}
int main(void)
{
uint32_t i_values[] = { 0, 8, 0x812, 0x12345678, 0x9ABCDEF0 };
uint32_t o_values[] = { 0, 1, 0x182, 0x82A4C6E1, 0x93B5D7F0 };
enum { N_VALUES = sizeof(o_values) / sizeof(o_values[0]) };
for (int i = 0; i < N_VALUES; i++)
{
printf("0x%.8" PRIX32 " => 0x%.8" PRIX32 " (vs 0x%.8" PRIX32 ")\n",
i_values[i], SwitchBits(i_values[i]), o_values[i]);
}
return 0;
}
Output:
0x00000000 => 0x00000000 (vs 0x00000000)
0x00000008 => 0x00000001 (vs 0x00000001)
0x00000812 => 0x00000182 (vs 0x00000182)
0x12345678 => 0x82A4C6E1 (vs 0x82A4C6E1)
0x9ABCDEF0 => 0x93B5D7F0 (vs 0x93B5D7F0)
Note the use of uint32_t to avoid undefined behaviour with sign bits in signed integers.
To obtain a bit, you can mask it out using AND. To get the lowest bit, for example:
x & 0x01
Think about how AND works: both bits must be set. Since we're ANDing with 1, all bits except the first must be 0, because they're 0 in 0x01. The lowest bit will be either 0 or 1, depending on what's in x; said differently, the lowest bit will be the lowest bit in x, which is what we want. Visually:
x = abcd
AND 1 = 0001
--------
000d
(where abcd represent the bits in those slots; we don't know what they are)
To move it to bit 3's position, just shift it:
(x & 0x01) << 3
Visually, again:
x & 0x01 = 000d
<< 3
-----------
d000
To add it in, first, we need to clear out that spot in x for our bit. We use AND again:
x & ~0x08
Here, we invert 0x08 (which is 1000 in binary): this means all bits except bit 3 are set, and when we AND that with x, we get x except for that bit.
Visually,
0x08 = 1000
(invert)
-----------
0111
AND x = abcd
------------
0bcd
Combine with OR:
(x & ~0x08) | ((x & 0x01) << 3)
Visually,
x & ~0x08 = 0bcd
| ((x & 0x01) << 3) = d000
--------------------------
dbcd
Now, this only moves bit 0 to bit 3, and just overwrites bit 3. We still need to do bit 3 → 0. That's simply another:
(x & 0x08) >> 3
And we need to clear out its spot:
x & ~0x01
We can combine the two clearing pieces:
x & ~0x09
And then:
(x & ~0x09) | ((x & 0x01) << 3) | ((x & 0x08) >> 3)
That of course handles only the lowest nibble. I'll leave the others as an exercise.
Try the code below. You need to know the bitwise operators and the correct positions to place the bits in, and be aware of the basic masking, shifting, and toggling properties.
#include<stdio.h>
#define BITS_SWAP(x) x=(((x & 0x88888888)>>3) | ((x & 0x11111111)<<3)) | ((x & ~ (0x88888888 | 0x11111111)))
int main()
{
int data=0;
printf("enter the data in hex=0x");
scanf("%x",&data);
printf("bits=%x",BITS_SWAP(data));
return 0;
}
Output:
vinay#vinay-VirtualBox:~/c_skill$ ./a.out
enter the data in hex=0x1
bits=8
vinay#vinay-VirtualBox:~/c_skill$ ./a.out
enter the data in hex=0x812
bits=182
vinay#vinay-VirtualBox:~/c_skill$ ./a.out
enter the data in hex=0x12345678
bits=82a4c6e1
vinay#vinay-VirtualBox:~/c_skill$
Try this variant of the xor swap:
uint32_t switch_bits(uint32_t a){
static const uint32_t mask = 0x11111111;
a ^= (a & mask) << 3;
a ^= (a >> 3) & mask;
a ^= (a & mask) << 3;
return a;
}
Move the low bits to the high-bit positions and mask to keep only those moved bits.
Move the high bits to the low-bit positions and mask to keep only those moved bits.
Mask to keep all bits that have not been moved.
Combine the results with ORs.
Code:
unsigned SwitchBits(unsigned n) {
return ((n << 3) & 0x88888888) | ((n >> 3) & 0x11111111) | (n & 0x66666666);
}
Alternatively, if you would like to be very clever, it can be done with two fewer operations, though this may not actually be faster due to some of the dependencies between instructions.
Move the high bits down to align with the low bits.
XOR, recording a 0 in the low bit if the high and low bits are the same, and a 1 if they are different.
From this, mask out only the low bit of each nibble.
From this, multiply by 9; this keeps the low bit as-is and also copies it to the high bit.
From this, XOR with the original value. In the case that the high and low bits are the same, no change correctly occurs. In the case they are different, they are effectively exchanged.
Code:
unsigned SwitchBits(unsigned n) {
return ((((n >> 3) ^ n) & 0x11111111) * 0x9) ^ n;
}
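A tiny test harness for the function above, checking it against the expected values listed in the question (this is only a sanity-check sketch):
#include <stdio.h>

unsigned SwitchBits(unsigned n) {
    return ((((n >> 3) ^ n) & 0x11111111) * 0x9) ^ n;
}

int main(void) {
    printf("%x %x %x %x\n",
           SwitchBits(0), SwitchBits(8), SwitchBits(0x812), SwitchBits(0x12345678));
    /* expected: 0 1 182 82a4c6e1 */
    return 0;
}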
I know how to set a bit, clear a bit, toggle a bit, and check if a bit is set.
But how can I copy a bit, for example bit 7 of byte_1 to bit 7 of byte_2?
Is it possible without an if statement (without checking the value of the bit)?
#include <stdio.h>
#include <stdint.h>
int main(){
int byte_1 = 0b00001111;
int byte_2 = 0b01010101;
byte_2 = // what's next ?
return 0;
}
byte_2 = (byte_2 & 0b01111111) | (byte_1 & 0b10000000);
You need to first read the bit from byte1, clear the bit in byte2, and OR in the bit you read earlier:
read_from = 3; // read bit 3
write_to = 5; // write to bit 5
the_bit = ((byte1 >> read_from) & 1) << write_to;
byte2 &= ~(1 << write_to);
byte2 |= the_bit;
Note that the formula in the other answer (if you extend it to using variables, instead of just bit 7) is for the case where read_from and write_to are the same value.
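A self-contained sketch wrapping those steps into a helper (read_from and write_to are arbitrary bit positions; no if statement is needed):
#include <stdio.h>
#include <stdint.h>

static uint8_t copy_bit(uint8_t dst, uint8_t src, unsigned read_from, unsigned write_to)
{
    uint8_t the_bit = (uint8_t)(((src >> read_from) & 1u) << write_to);
    dst = (uint8_t)(dst & ~(1u << write_to));   /* clear the target bit */
    return (uint8_t)(dst | the_bit);            /* OR in the copied bit */
}

int main(void)
{
    uint8_t byte_1 = 0x0F;                      /* 0b00001111 */
    uint8_t byte_2 = 0x55;                      /* 0b01010101 */
    byte_2 = copy_bit(byte_2, byte_1, 7, 7);    /* copy bit 7 of byte_1 into byte_2 */
    printf("0x%02X\n", byte_2);                 /* bit 7 of byte_1 is 0, so this prints 0x55 */
    return 0;
}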