How to handle multibyte numbers? - c

I'm trying to read binary data from a file. At bytes 10-13 there is a little-endian binary-encoded number, and I'm trying to parse it using only the information that the offset is 10 and the "size" is 4.
I've figured out that I will have to do some bit-shifting operations, but I'm not sure which byte goes where, how "far" it should be shifted, and in which direction.

If you know for certain the data is little endian, you can do something like:
uint32_t value = (uint32_t)data[10] | ((uint32_t)data[11] << 8)
               | ((uint32_t)data[12] << 16) | ((uint32_t)data[13] << 24);
This gives you a portable solution that works whether your code runs on a little-endian or a big-endian machine. (The casts to uint32_t avoid shifting a promoted int into its sign bit when data[13] is 0x80 or above.)
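A minimal, self-contained sketch of the whole read-and-assemble step, assuming a hypothetical file name "input.bin" and a file that is at least 14 bytes long:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t data[14];
    FILE *f = fopen("input.bin", "rb");   /* hypothetical file name */
    if (f == NULL || fread(data, 1, sizeof data, f) != sizeof data) {
        fprintf(stderr, "could not read 14 bytes\n");
        return 1;
    }
    fclose(f);

    /* bytes 10..13 hold the little-endian value, lowest byte first */
    uint32_t value = (uint32_t)data[10] | ((uint32_t)data[11] << 8)
                   | ((uint32_t)data[12] << 16) | ((uint32_t)data[13] << 24);
    printf("value = %u (0x%08X)\n", (unsigned)value, (unsigned)value);
    return 0;
}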

Related

ADC raw data forming

I would like to ask for an explanation of this part of my code; I am not sure what it really does. This is example code and I would like to understand it. The purpose of the original code is to acquire data from an ADC in streaming mode, and this part is about forming the raw data. Thank you.
#define CH_DATA_SIZE 6
uint8_t read_buf[CH_DATA_SIZE];
uint32_t adc_data;
TI_ADS1293_SPIStreamReadReg(read_buf, count);
adc_data = ((uint32_t) read_buf[0] << 16) | ((uint16_t) read_buf[1] << 8)
| read_buf[2];
I will skip the variable declarations for now and refer to them in the rest of the description.
The code begins at this line:
TI_ADS1293_SPIStreamReadReg(read_buf, count);
From a Google search, I assume you have this function from this file. If it is that function, it will read three registers from this module (see section 8.6, Register Maps: the data registers DATA_CHx_ECG are three bytes long in total, which is what should be in the count variable).
Once this function has executed, you have the ECG data in the first three bytes of the read_buf variable, but you want a single 24-bit value, since that is the width of the quantized sample.
Since C has no uint24_t (nor does any other language I know of), we take the next larger size, uint32_t, to declare the adc_data variable.
Now the following code rebuilds a single 24-bit value from the 3 bytes we read from the ADC:
adc_data = ((uint32_t) read_buf[0] << 16) | ((uint16_t) read_buf[1] << 8)
| read_buf[2];
From the datasheet and from TI_ADS1293_SPIStreamReadReg, we know that the function reads the registers in the order of their addresses, in this case high byte, middle byte and low byte (ending up in read_buf[0], read_buf[1] and read_buf[2] respectively).
To rebuild the 24-bit value, the code shifts each byte to its appropriate offset: read_buf[0] covers bits 23 to 16 and is therefore shifted left by 16, read_buf[1] covers bits 15 to 8 and is shifted left by 8, and read_buf[2] covers bits 7 to 0 and is shifted by 0 (that shift is not written out). We can represent this as follows (0xAA, 0xBB and 0xCC are example values to show what happens):
read_buf[0] = 0xAA => read_buf[0] << 16 = 0xAA0000
read_buf[1] = 0xBB => read_buf[1] << 8  = 0x00BB00
read_buf[2] = 0xCC => read_buf[2] << 0  = 0x0000CC
To combine the three shifted values, the code uses a bitwise-or | which results in this:
0xAA0000 | 0x00BB00 | 0x0000CC = 0xAABBCC
And you now have the 24-bit value of your ADC reading.
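Putting it together as a runnable snippet (a real TI_ADS1293_SPIStreamReadReg call is assumed elsewhere; here read_buf is simply filled with the example values above):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* stand-in for the bytes that TI_ADS1293_SPIStreamReadReg() would fill in */
    uint8_t read_buf[3] = { 0xAA, 0xBB, 0xCC };   /* high, middle, low byte */

    uint32_t adc_data = ((uint32_t)read_buf[0] << 16)
                      | ((uint32_t)read_buf[1] << 8)
                      |  read_buf[2];

    printf("adc_data = 0x%06X\n", (unsigned)adc_data);   /* prints adc_data = 0xAABBCC */
    return 0;
}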

Reading two 8 bit registers into 12 bit value of an ADXL362 in C

I'm querying an ADXL362 digital-output MEMS accelerometer for its axis data, which it holds as two 8-bit registers that combine to give a 12-bit value, and I'm trying to figure out how to combine those values. I've never been good at bitwise manipulation, so any help would be greatly appreciated. I would imagine it is something like this:
number = Z_data_H << 8 | Z_data_L;
number = (number & ~(1<<13)) | (0<<13);
number = (number & ~(1<<14)) | (0<<14);
number = (number & ~(1<<15)) | (0<<15);
number = (number & ~(1<<16)) | (0<<16);
ADXL362 data sheet (page 26)
Z axis data register
Your first line should be what you need:
int16_t number;
number = (Z_data_H << 8) | Z_data_L;
The sign-extension bits mean that you can read the value as if it was a 16-bit signed integer. The value will simply never be outside the range of a 12-bit integer. It's important that you leave those bits intact in order to handle negative values correctly.
You just have to do:
signed short number;
number = Z_data_H << 8 | Z_data_L;
The shift left by 8 bits combined with the OR of the lower byte, which you had already figured out, combines the 2 bytes correctly. Just use the appropriate data type so that the C code recognizes the sign of the 12-bit number correctly.
Note that short does not necessarily refer to a 16-bit value; that depends on your compiler and architecture, so you might want to pay attention to that (or use a fixed-width type such as int16_t).
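A small self-contained version of both answers above, using hypothetical register values in place of a real ADXL362 read:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* hypothetical values as read from the ZDATA_H and ZDATA_L registers */
    uint8_t Z_data_H = 0xFF;   /* upper bits already sign-extended by the device */
    uint8_t Z_data_L = 0x38;

    /* combine the two bytes; the fixed-width signed type handles negative values */
    int16_t number = (int16_t)(((uint16_t)Z_data_H << 8) | Z_data_L);
    printf("z = %d\n", number);   /* prints z = -200 */
    return 0;
}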

Reinterpreting memory/pointers

Just a quick question concerning the Rust programming language.
Assume you had the following in C:
uint8_t *someblockofdata; /* has a certain length of 4 */
uint32_t *anotherway = (uint32_t*) someblockofdata;
Regardless of the code not being all that useful and rather ugly, how would I go about doing that in Rust? Say you have a &[u8] with a length divisible by 4: how would you "convert" it to a &[u32] and back (preferably avoiding unsafe code as much as possible and retaining as much speed as possible)?
Just to be complete, the case where I would want to do that is an application which reads u8s from a file and then manipulates those.
Reinterpret-casting a pointer is only defined between pointers to objects of alignment-compatible types; it may happen to work on some implementations, but it's non-portable. For one thing, the result depends on the endianness (byte order) of your data, so you may lose performance anyway through byte swapping.
First rewrite your C as follows, verify that it does what you expect, and then translate it to Rust.
// If the bytes in the file are little endian (10 32 means 0x3210), do this:
uint32_t value = (uint32_t)someblockofdata[0] | ((uint32_t)someblockofdata[1] << 8)
    | ((uint32_t)someblockofdata[2] << 16) | ((uint32_t)someblockofdata[3] << 24);
// If the bytes in the file are big endian (32 10 means 0x3210), do this:
uint32_t value = (uint32_t)someblockofdata[3] | ((uint32_t)someblockofdata[2] << 8)
    | ((uint32_t)someblockofdata[1] << 16) | ((uint32_t)someblockofdata[0] << 24);
// Middle endian is left as an exercise for the reader.
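The question also asks how to go back; here is a sketch of the reverse direction in the same spirit (u32_to_le_bytes is just an illustrative helper name, not an existing API):

#include <stdint.h>

/* Split a 32-bit value back into 4 little-endian bytes (the reverse of the read above). */
static void u32_to_le_bytes(uint32_t value, uint8_t out[4])
{
    out[0] = (uint8_t)(value & 0xFF);         /* lowest byte first */
    out[1] = (uint8_t)((value >> 8) & 0xFF);
    out[2] = (uint8_t)((value >> 16) & 0xFF);
    out[3] = (uint8_t)((value >> 24) & 0xFF);
}

Once that round trip works in C, the Rust translation is the same arithmetic on u8 and u32 values.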

base 16 to base 2^64 conversion in GMP

I read in some hex numbers and would then like to convert them to base 2^64. Unfortunately, as such a number cannot be stored in an int, it seems there is no function in GMP that can help me solve this problem.
Is there another way to do this that I am missing completely?
(The program is in C)
10 in base 2^1 is 1010 which in binary is 1 0 1 0
10 in base 2^2 is 22 which in binary is 10 10
10 in base 2^3 is 12 which in binary is 001 010
10 in base 2^4 is A which in binary is 1010
The pattern I'm trying to show you (and that others have noted) is that they all have the same binary representation. In other words, if you convert your number to base 256 (chars) and write it to a file or memory, you can read it back in base 2^16 (reading 2 bytes at a time), or base 2^32 (4 bytes at a time), or in fact 2^anything. It will be the same binary representation (assuming you get the endianness right). So be careful with big- vs little-endian, and read the data as uint64_t for base 2^64.
To be clear, this only applies to bases that are powers of two. 10 in base 5 is 20, which written digit by digit in binary is 010 000; obviously different. But the same principle applies to bases 3^n if you work in ternary, and to bases 5^n if you work in quinary.
Update: How you could use this:
With some function
void convert( char *myBase16String, uint8_t *outputBase256 );
which we suppose takes a string encoded in base 16 and produces an array of unsigned chars, where each char is a unit in base 256, we do this (using base 2^32 units here to keep the example short; the same pattern extends to any power of two):
uint8_t base2_8[8];
convert( "0123456789ABCDEF", base2_8 );
uint32_t base2_32[2];
base2_32[0] = ((uint32_t)base2_8[0] << 24) | (base2_8[1] << 16) | (base2_8[2] << 8) | base2_8[3];
base2_32[1] = ((uint32_t)base2_8[4] << 24) | (base2_8[5] << 16) | (base2_8[6] << 8) | base2_8[7];
// etc. You can do this in a loop, but make sure you know how long it is.
Suppose your input wasn't a nice multiple-of-4 bytes:
uint8_t base2_8[6];
convert( "0123456789AB", base2_8 );
uint32_t base2_32[2];
base2_32[0] = ((uint32_t)base2_8[0] << 8) | base2_8[1];
base2_32[1] = ((uint32_t)base2_8[2] << 24) | (base2_8[3] << 16) | (base2_8[4] << 8) | base2_8[5];
Slightly more complex, but still pretty easy to automate.
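Since the question asks specifically about base 2^64, the same idea with 64-bit units might look like this (a sketch only; convert() is still the hypothetical function supposed above):

#include <stdint.h>

/* Assemble 8 base-256 digits (most significant first) into one base-2^64 digit. */
static uint64_t base256_to_base2_64(const uint8_t b[8])
{
    uint64_t value = 0;
    for (int i = 0; i < 8; i++)
        value = (value << 8) | b[i];   /* shift in one byte at a time */
    return value;
}

For the "0123456789ABCDEF" example above this yields the single base-2^64 digit 0x0123456789ABCDEF.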
GMP comes with an extension of stdio.h that works on large numbers; see the manual on Formatted Input Functions.
There are the usual flavors that work on either standard input (gmp_scanf), files (gmp_fscanf) or strings you have already read into memory (gmp_sscanf).
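For example, a minimal sketch of that route: parse the hex string with gmp_sscanf and then pull the base-2^64 digits out with mpz_export (the input string here is just an example):

#include <stdint.h>
#include <stdio.h>
#include <gmp.h>

int main(void)
{
    const char *hex = "0123456789ABCDEF0123456789ABCDEF";  /* example input */
    mpz_t n;
    mpz_init(n);
    gmp_sscanf(hex, "%Zx", n);   /* parse the base-16 string into an mpz_t */

    /* export as 8-byte words, most significant word first, native endianness, no nail bits */
    size_t count = 0;
    uint64_t *words = mpz_export(NULL, &count, 1, sizeof(uint64_t), 0, 0, n);

    for (size_t i = 0; i < count; i++)
        printf("base-2^64 digit %zu: 0x%016llX\n", i, (unsigned long long)words[i]);

    /* the exported block was allocated with GMP's allocator; free it accordingly */
    void (*free_fn)(void *, size_t);
    mp_get_memory_functions(NULL, NULL, &free_fn);
    free_fn(words, count * sizeof(uint64_t));
    mpz_clear(n);
    return 0;
}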

C code to convert endianness?

I want to convert an unsigned 32-bit integer as follows:
Input = 0xdeadbeef
Output = 0xfeebdaed
Thank you.
That's not an endianness conversion. The output should be 0xEFBEADDE, not 0xFEEBDAED. (Only the bytes are swapped, and each byte is 2 hexadecimal digits.)
For converting between little- and big-endian, take a look at _byteswap_ulong.
The general process for nibble reversal is:
((i & 0xF0000000) >> 28) | ((i & 0xF000000) >> 20) | ((i & 0xF00000) >> 12)
    | ..... | ((i & 0xF) << 28)
Mask, shift, or (I hope I got the numbers right):
Extract the portion of the number you're interested in by ANDing (&) with a mask.
Shift it to its target location with the >> and << operations.
Construct the new value by ORing (|) the pieces together.
If you want to reorder bytes instead, you would mask with 0xFF. As everyone is saying, that's probably what you actually want, and if you're looking for a canned version, follow the other suggestions.
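For completeness, a small self-contained sketch of both operations, the ordinary byte swap and the full nibble reversal the question literally asks for (the function names are just for illustration):

#include <stdint.h>
#include <stdio.h>

/* Classic byte swap: 0xDEADBEEF -> 0xEFBEADDE */
static uint32_t swap_bytes(uint32_t i)
{
    return ((i & 0x000000FFu) << 24) | ((i & 0x0000FF00u) << 8)
         | ((i & 0x00FF0000u) >> 8)  | ((i & 0xFF000000u) >> 24);
}

/* Full nibble reversal: 0xDEADBEEF -> 0xFEEBDAED */
static uint32_t reverse_nibbles(uint32_t i)
{
    uint32_t out = 0;
    for (int n = 0; n < 8; n++) {
        out = (out << 4) | (i & 0xFu);   /* take the lowest nibble, append it */
        i >>= 4;
    }
    return out;
}

int main(void)
{
    printf("0x%08X\n", swap_bytes(0xDEADBEEFu));      /* prints 0xEFBEADDE */
    printf("0x%08X\n", reverse_nibbles(0xDEADBEEFu)); /* prints 0xFEEBDAED */
    return 0;
}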
