Combine two bytes from gyroscope into signed angular rate - c

I've got two 8-bit chars. They're the product of some 16-bit signed float being broken up into MSB and LSB inside a gyroscope.
The standard method I know of combining two bytes is this:
(signed float) = (((MSB value) << 8) | (LSB value));
Just returns garbage.
How can I do this?

Okay, so, dear me from ~4 years ago:
First of all, the gyroscope you're working with is a MAX21000. The datasheet, as far as future you can see, doesn't actually describe the endianness of the I2C connection, which probably also tripped you up. However, the SPI section does state that the data is transmitted MSB-first, with the top 8 bits of the axis data in the first byte and the remaining 8 in the next.
To your credit, the datasheet doesn't really go into what type those 16 bits represent - however, that's because it's standardized across manufacturers.
The real reason why you got such meaningless values when converting to float is that the gyro isn't sending a float. Why'd you even think it would?
The gyro sends a plain ol' int16 (short). A simple search for "i2c gyro interface" would have made that clear. How do you get that into a decimal angular rate? You divide by 32,768 (2^15, so full scale maps to roughly ±1.0), then multiply by the full-scale range set on the gyro.
Simple! Here, want a code example?
/* note the float constant: dividing two ints would truncate to 0 or -1 */
float X_angular_rate = ((int16_t)(((uint16_t)byte_1 << 8) | byte_2) / 32768.0f) * GYRO_SCALE;
However, I think it's important to note that the data from these gyroscopes is not, by itself, as useful as you thought; to my current knowledge, due to their poor zero-rate drift characteristics, MEMS gyros are almost always used in a sensor-fusion setup with an accelerometer and a Kalman filter to make a proper IMU.
Any position and attitude derived from dead-reckoning without this added complexity is going to be hopelessly inaccurate after mere minutes, which is why you added an accelerometer to the next revision of the board.
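(For the record: a full Kalman filter is beyond a snippet, but the simpler complementary filter illustrates the fusion idea. Everything below - the names and the 0.98 blend factor - is illustrative, not from the original project.)

    /* Blend the integrated gyro rate (trustworthy short-term) with the
       accelerometer-derived angle (trustworthy long-term, drift-free). */
    #define ALPHA 0.98f

    float fuse_angle(float prev_angle_deg, float gyro_rate_dps,
                     float accel_angle_deg, float dt_s)
    {
        return ALPHA * (prev_angle_deg + gyro_rate_dps * dt_s)
             + (1.0f - ALPHA) * accel_angle_deg;
    }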

You have shown two bytes, and float is 4 bytes on most systems. What did you do with the other two bytes of the original float you deconstructed? You should preserve and re-construct all four original bytes if possible. If you can't, and you have to omit any bytes, set them to zero, and make them the least significant bits in the fractional part of the float and hopefully you'll get an answer with satisfactory precision.
Acting in accordance with the endianness of your system, you should be able to construct a valid float based on how you deconstructed the original; in IEEE-754 single precision the bit positions are 1 sign bit, 8 exponent bits, then 23 significand bits. It can really help to write a function that displays values as binary numbers, and to line up and display the initial, intermediate and final results to ensure that you're really accomplishing what you think (hope) you are.
To get a valid result you have to put something sensible into those bits.
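For illustration, a minimal sketch of the reconstruction, assuming a 32-bit IEEE-754 float and that the four bytes were captured least-significant first (swap the order for the other endianness):

    #include <stdint.h>
    #include <string.h>

    float bytes_to_float(uint8_t b0, uint8_t b1, uint8_t b2, uint8_t b3)
    {
        /* assemble the 32 bits, then copy them into the float;
           memcpy avoids strict-aliasing problems */
        uint32_t bits = ((uint32_t)b3 << 24) | ((uint32_t)b2 << 16)
                      | ((uint32_t)b1 << 8)  |  (uint32_t)b0;
        float f;
        memcpy(&f, &bits, sizeof f);
        return f;
    }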

Related

Packet framing for very short serial packets

We designed simple fixed-length protocol for embedded device. Every packet is just two bytes:
bits | 15..12 | 11..4 | 3..0 |
     | OpCode | DATA  | CRC4 |
We use "crc-based framing", i.e. receiver collects two bytes, compute CRC4 and if it matches frame is considered valid. As you can see, there is no start-of-frame or end-of-frame.
There is a catch: the recommended message length for CRC4 is 11 bits and here it is computed over 12 bits. As far as I understand, that means that the CRC's error-detection properties degrade (but I'm not sure by how much).
(By the way, if anybody needs code for CRC4 (or any other) and does not feel skilled enough to write it themselves, Boost has a very nice CRC library (boost::crc) that can compute any CRC.)
The problem is: this CRC-based framing doesn't work and we get framing errors, i.e. the second byte from one message and the first byte from the following message sometimes form a valid message.
My question is - is there any way to correct the framing without adding any more bytes? We spent quite some time squeezing everything into those two bytes and it would be kind of sad to just throw that away.
We do have a spare bit in opcode field though.
Time-based framing will not be very reliable because our radio channel likes to "spit" several packets at once.
Maybe there is some other error-detection method that will work better than CRC4?
If we have to append more bytes, what would be the best way to do it?
We can use start-of-frame byte and byte-stuffing (such as COBS) ( +2 bytes but I'm not sure what to do with corrupted messages )
We can use start-of-frame nibble and widen CRC to CRC8 ( +1 byte )
Something else?
A common way to do what you are asking is to "hunt for framing" at start up and require N consecutive good packets before accepting any packets. This can be implemented using a state machine with 3 states: HUNT, LOF (loss of frame), SYNC
It could be something like:
#define GOOD_PACKETS_REQUIRED_BEFORE_SYNC 8

enum { HUNT, LOF, SYNC };   /* LOF = loss of frame */

static int state = HUNT;
static int good_count = 0;

/* Packet, packet(), GetNextByteFromUART() and IsValidCRC() are assumed
   to be provided elsewhere by your application. */
Packet GetPacket(void)
{
    unsigned char fb = 0;   /* first byte of candidate frame  */
    unsigned char sb = 0;   /* second byte of candidate frame */

    while (1)
    {
        if (state == HUNT)
        {
            /* Slide the frame window one byte at a time until a
               pair passes the CRC check. */
            fb = sb;
            sb = GetNextByteFromUART();
            if (IsValidCRC(fb, sb))
            {
                state = LOF;
                good_count = 1;
            }
        }
        else if (state == LOF)
        {
            /* Candidate framing found: require N consecutive good
               packets before declaring sync. */
            fb = GetNextByteFromUART();
            sb = GetNextByteFromUART();
            if (IsValidCRC(fb, sb))
            {
                good_count++;
                if (good_count >= GOOD_PACKETS_REQUIRED_BEFORE_SYNC)
                {
                    state = SYNC;
                }
            }
            else
            {
                state = HUNT;
                good_count = 0;
            }
        }
        else /* state == SYNC */
        {
            fb = GetNextByteFromUART();
            sb = GetNextByteFromUART();
            if (IsValidCRC(fb, sb))
            {
                return packet(fb, sb);
            }
            /* SYNC lost! Start a new hunt for correct framing. */
            state = HUNT;
            good_count = 0;
        }
    }
}
You can find several standard communication protocols which use this (or similar) technique, e.g. ATM and E1 (https://en.wikipedia.org/wiki/E-carrier). There are different variants of the principle. For instance you may want to go from SYNC to LOF when receiving the first bad packet (decrementing good_count) and then go from LOF to HUNT on the second consecutive bad packet. That would cut down the time it takes to re-frame. The above just shows a very simple variant.
Notice: In real world code you probably can't accept a blocking function like the one above. The above code is only provided to describe the principle.
Whether you need a CRC or can do with a fixed frame-word (e.g. 0xB) depends on your media.
There is a catch: the recommended message length for CRC4 is 11 bits and here it is computed over 12 bits.
No, here it is computed over 16 bits.
As far as I understand, that means that the CRC's error-detection properties degrade (but I'm not sure by how much).
Recommendations about CRC message length likely refer to whether you have a 100% chance of detecting a single-bit error or not. All CRCs struggle with multi-bit errors and will not necessarily detect them.
When doing calculations about the CRC reliability over a UART, you also have to take the start and stop bits into account. Bit errors may just as well strike there, in which case the hardware may or may not assist in finding the error.
the second byte from one message and the first byte from the following message sometimes form a valid message
Of course. You have no synch mechanism - what do you expect? This has nothing to do with the CRC.
My question is - is there any way to correct the framing without adding any more bytes?
Either you have to sacrifice one bit per byte as a synch flag or increase the packet length. Alternatively you could use different delays between data bits. Maybe send the two bytes directly after each other, then use a delay.
Which method to pick depends on the nature of the data and your specification. Nobody on SO can tell you what your spec looks like.
Maybe there is some other error-detection method that will work better than CRC4?
Not likely. CRC is pretty much the only professional checksum algorithm. The polynomials are picked based on the expected nature of the noise - you pick a polynomial which resembles the expected noise as little as possible. However, this is mainly of academic interest, as no CRC guru can know what the noise looks like in your specific application.
Alternatives are sums, XOR, parity, counting the number of 1s, etc. - all of them are quite bad, probability-wise.
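For completeness, here is what a minimal bitwise CRC-4 might look like in C; the polynomial (0x3, i.e. x^4 + x + 1, as in CRC-4-ITU) and the MSB-first bit ordering are assumptions - use whatever your spec defines:

    #include <stdint.h>

    /* Bitwise CRC-4 over the top `nbits` bits of `data`, MSB first.
       Polynomial 0x3 assumed; no reflection, initial value 0. */
    static uint8_t crc4(uint16_t data, int nbits)
    {
        uint8_t crc = 0;
        int i;
        for (i = nbits - 1; i >= 0; i--) {
            uint8_t bit = (uint8_t)((data >> i) & 1u);
            uint8_t msb = (uint8_t)((crc >> 3) & 1u);
            crc = (uint8_t)((crc << 1) & 0x0Fu);
            if (bit ^ msb)
                crc ^= 0x3u;
        }
        return crc;
    }

    /* e.g. for the 16-bit packet above: CRC over the 12 opcode+data bits */
    /* uint8_t c = crc4(packet_word >> 4, 12); */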
If we have to append more bytes, what would be the best way to do it?
Nobody can answer that question without knowing the nature of the data.
If the CRC is mainly for paranoia (from the comments), you can give up some error-checking robustness and processor time for framing.
Since there is a free bit in the opcode, always set the most-significant bit of the first byte to zero. Then before transmission, but after calculating the CRC, set the most-significant bit of the second byte to one.
A frame is then two consecutive bytes where the first most significant bit is zero and the second is one. If the two bytes fail the CRC check, set the most significant bit of the second byte to zero and recalculate to see if the packet had the bit flipped before transmission.
The downside is that the CRC will be calculated twice about half of the time. Also, setting the bit for framing may cause invalid data to match the CRC.
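A sketch of that scheme in C (CheckCRC4() is an assumed helper, and the exact bit layout is illustrative):

    #include <stdint.h>

    extern int CheckCRC4(uint8_t b0, uint8_t b1);   /* assumed helper */

    /* Transmit side: keep the spare opcode bit (msb of byte 0) at zero,
       then force the msb of byte 1 to one after the CRC is computed. */
    void mark_frame(uint8_t bytes[2])
    {
        bytes[0] &= 0x7Fu;   /* spare opcode bit: always 0 */
        bytes[1] |= 0x80u;   /* framing marker: always 1   */
    }

    /* Receive side: a candidate frame is (msb0 == 0, msb1 == 1). Try the
       CRC with the marked bit as received, then with it cleared, since
       the original data bit could have been either value. */
    int check_candidate(uint8_t b0, uint8_t b1)
    {
        if ((b0 & 0x80u) != 0 || (b1 & 0x80u) == 0)
            return 0;                            /* framing bits wrong */
        if (CheckCRC4(b0, b1))
            return 1;
        return CheckCRC4(b0, (uint8_t)(b1 & 0x7Fu));
    }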

Fast conversion of int arrays to fp for audio processing

I am having to do floating-point maths on arrays of ints, fast and with low latency, for audio apps (multi-channel). My code works, but I wonder if there are more efficient ways of processing in places. I get buffers of around 120 frames of interleaved 32-bit audio integers for 16 or 24 channels, which I then have to convert to arrays of floats/doubles for processing (e.g. biquad filters). Currently I iterate through the arrays and cast each integer to an element of a float array. Then I process these and cast them back to ints for the write buffer, which I pass back to the lib function (I'm on Linux using snd_pcm_readi and snd_pcm_writei). There's lots of copying and it seems wasteful.
The quicker I can do it, the lower my latency, so the better the overall performance, as it's for live sound use.
I have read about SSE and other extensions which can be compiled in with gcc options, and some references allude to being able to pass arrays for streamlined conversion etc., and I wonder if these might help the above. Or maybe I should not bother casting to floats - change my processing functions to use ints, keep track of overflows, maybe use 64-bit ints instead, and/or create a separate array for exponents - seems pretty esoteric to me, but I guess it's not that hard to implement and only needs to be coded once. I have asked a separate question, 'Is FPM required for audio DSP maths, or can it be done in 32/64 bit integer maths before rounding back to 24 bits for output?', which is part of the same topic but I thought I should split it into a different question.
If your code is license-compatible, you can use the Vector-Optimized Library of Kernels (VOLK) from the GNU Radio project, which has a kernel that does that conversion:
volk_32i_s32f_convert_32f
It converts the input 32-bit integer data into floating-point data, and divides each floating-point output data point by the scalar value.
It comes with heavily optimized implementations for SSE2, for aligned and unaligned data.
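A sketch of how the call might look (the signature is from memory of the VOLK API - check volk.h; the 2^31 scalar maps full-scale 32-bit samples to roughly [-1.0, 1.0)):

    #include <stdint.h>
    #include <volk/volk.h>

    void int_block_to_float(const int32_t *in, float *out, unsigned int n)
    {
        /* each output = (float)in[i] / scalar */
        volk_32i_s32f_convert_32f(out, in, 2147483648.0f, n);
    }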
EDIT: By the way, why not just do your signal processing in GNU Radio itself?

Storing Large Integers/Values in an Embedded System

I'm developing an embedded system that can test a large number of wires (up to 360) - essentially a continuity-checking system. The system works by clocking in a test vector and reading the output from the other end. The output is then compared with a stored result (which would be on an SD card) that tells what the output should have been. The test vectors are just walking ones, so there's no need to store them anywhere. The process goes as follows:
Clock out test-vector (walking ones)
Read in output test-vector.
Read corresponding output test-vector from SD Card which tells what the output vector should be.
Compare the test-vectors from step 2 and 3.
Note down the errors/faults in a separate array.
Continue back to step 1 unless all wires are checked.
Output the errors/faults to the LCD.
My hardware consists of a large shift register that's clocked into the AVR microcontroller. For every test vector (which would also be 360 bits), I will need to read in 360 bits. So, for 360 wires the total amount of data would be 360*360 bits = 16 kB or so. I already know I cannot do this in one pass (i.e. read the entire data and then compare), so it will have to be test-vector by test-vector.
As there are no inherent types that can hold such large numbers, I intend to use a bit-array of length 360 bit. Now, my question is, how should I store this bit array in a txt file?
One way is to store raw values, i.e. on each line store the binary data that I read in from the shift register. So, for 8 wires, it would be 0b10011010. But this can get ugly for up to 360 wires - each line would contain 360 characters.
Another way is to store hex values - this would just be two characters for 8 bits (9A for the above) and about 90 characters for 360 bits. This would, however, require me to read in the text - line by line - and convert the hex value to be represented in the bit-array, somehow.
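That hex-to-bit-array conversion needs no allocation at all; a sketch (45 = 360/8; the names are illustrative):

    #include <stddef.h>
    #include <stdint.h>

    static int hex_nibble(char c)
    {
        if (c >= '0' && c <= '9') return c - '0';
        if (c >= 'A' && c <= 'F') return c - 'A' + 10;
        if (c >= 'a' && c <= 'f') return c - 'a' + 10;
        return -1;
    }

    /* Parse one line of hex (two chars per byte, e.g. "9A3F...") into a
       fixed 45-byte buffer. Returns the number of bytes parsed. */
    size_t hex_line_to_bytes(const char *line, uint8_t out[45])
    {
        size_t n = 0;
        while (n < 45) {
            int hi = hex_nibble(line[2 * n]);
            int lo;
            if (hi < 0)
                break;                  /* end of line or bad char */
            lo = hex_nibble(line[2 * n + 1]);
            if (lo < 0)
                break;
            out[n++] = (uint8_t)((hi << 4) | lo);
        }
        return n;
    }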
So whats the best solution for this sort of problem? I need the solution to be completely "deterministic" - I can't have calls to malloc or such. They are a bit of a no-no in embedded systems from what I've read.
SUMMARY
I need to store large values that can't be represented by any traditional variable types. Currently I intend to store these values in a bitarray. What's the best way to store these values in a text file on an SD Card?
These are not integer values but rather bit maps; they have no arithmetic meaning. What you are suggesting is simply a byte array of length 360/8, and not related to "large integers" at all. However some more appropriate data structure or representation may be possible.
If the test vector is a single bit in 360, then it is both inefficient and unnecessary to store 360 bits for each vector; a value 0 to 359 is sufficient to unambiguously define each vector. If the correct output is also a single bit, then that could also be stored as a bit index; if not, you could store it as a list of indices for each bit that should be set, with some sentinel value >= 360 or < 0 to indicate the end of the list. Where most vectors contain fewer than 22 set bits (at two bytes per index), this structure will be more efficient than storing a 45-byte array.
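A sketch of that layout (the names and the sentinel value are illustrative):

    #include <stdint.h>

    #define END_OF_LIST 0xFFFFu   /* any sentinel >= 360 works */

    /* Expected output for one test vector: here, wires 4 and 17 set. */
    static const uint16_t expected_vec0[] = { 4u, 17u, END_OF_LIST };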
From any bit index value, you can determine the address and mask of the individual wire by:
byte_address = base_address + bit_index / 8 ;
bit_mask = 0x01 << (bit_index % 8) ;
You could either test each of the 360 bits iteratively or generate a 360 bit vector on the fly from the list of bits.
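For instance, testing one wire in a 45-byte response buffer with that arithmetic (a sketch; the names are illustrative):

    #include <stdint.h>

    static int wire_is_set(const uint8_t response[45], unsigned bit_index)
    {
        return (response[bit_index / 8u] & (uint8_t)(0x01u << (bit_index % 8u))) != 0;
    }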
I can see no need for dynamic memory allocation in this, but whether or not it is advisable in an embedded system is largely dependent on the application and target resources. A typical AVR system has very little memory, and dynamic memory allocation carries an overhead for heap management and block alignment that you may not be able to afford. Dynamic memory allocation is not suited to situations where hard real-time deterministic timing is required. And in all cases you should have a well-defined strategy or architecture for avoiding memory-leak issues (repeatedly allocating memory that never gets released).

efficient disk storage of decimal numbers in C (C89)

I am writing functions that serialize/deserialize a large data structure for efficient reloading later on. There is a particular set of decimal numbers for which precision is not a huge deal, and I would like to store them in 4 bytes of binary data.
For most, reading the bytes into a buffer and using memcpy to place them into a float is sufficient, and is the most common solution I've found. However, this is not portable, as floats on the systems this software is meant for are not guaranteed to be 4 bytes in size.
What I would like is something very portable (which is one of the reasons I'm limited to C89). I'm not wedded to 4 byte storage, but it is an attractive option to me. I am pretty wholly against storing the numbers as strings. I'm familiar with endianness issues, and such things are already taken into account.
What I am looking for, therefore, is a system-independent way to store and retrieve floating point numbers in a small amount of binary data (preferably around 4 bytes). I, in my foolishness, imagined this would be the easiest part of this task, since it seems like such a common problem, but popular search engines and various reference books have provided no material assistance.
You could store them in 32-bit IEEE float format (or a very close approximation to it; for instance, you might want to restrict denorms and NaNs). Then have each platform adjust as necessary to coerce its own float type to that format and back.
Of course there will be some loss of accuracy, but that's inevitable anyway if you're transferring float values of different precisions from one system to another.
It should be possible to write portable code to find the closest IEEE value to a native float value, and vice-versa, if that's required. You wouldn't really want to use it, though, because it would probably be far less efficient than code that takes advantage of knowing the float format. In the common case where the platform uses an IEEE representation it's a no-op or a simple narrowing/widening conversion. Even in the worst case you're likely to encounter, as long as it's a binary fraction you basically just have to extract the sign, exponent and significand bits and do the right thing with them (discard bits from the significand if it's too big, adjust the bias and possibly the width of the exponent, do the right thing with underflow and overflow).
If you want to avoid losing accuracy in the case where the file is saved and then reloaded on the same system (but that system doesn't use 32bit IEEE), you could look at storing some data indicating the format in the file (size of each value, number of bits of significand and exponent), then store each value at native precision, so that it only gets rounded if it's ever loaded onto a less-precise system. I don't know whether ASN.1 has a standard to encode floating-point values along these lines, but it's the kind of complicated trickery I'd expect from it.
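A sketch of the "closest IEEE value" direction in portable C89, packing by sign, exponent and significand so the on-disk format never depends on the native float layout (normal numbers and zero only; rounding, overflow, denormals, infinities and NaN are deliberately ignored here):

    #include <math.h>

    unsigned long pack_ieee754(double value)
    {
        unsigned long sign = 0;
        unsigned long mantissa;
        int exponent;
        double significand;

        if (value == 0.0)
            return 0;
        if (value < 0.0) {
            sign = 1;
            value = -value;
        }
        /* value = significand * 2^exponent, with 0.5 <= significand < 1 */
        significand = frexp(value, &exponent);
        /* IEEE wants 1.f * 2^(e-1): fraction = 2*s - 1, biased exponent = e - 1 + 127 */
        mantissa = (unsigned long)((significand * 2.0 - 1.0) * 8388608.0); /* 2^23 */
        return (sign << 31) | ((unsigned long)(exponent - 1 + 127) << 23)
             | (mantissa & 0x7FFFFFUL);
    }

The inverse goes the same way with ldexp().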
Check this out: http://steve.hollasch.net/cgindex/coding/portfloat.html
They give a routine which is portable and doesn't add too much overhead.

How do I work with bit data in C

In class I've been tasked with writing a C program that decompresses a text file and prints out the characters it contains. Each character in the file is represented by 2 bits (4 possible characters).
I've recently been informed that a byte is not necessarily 8 bits on all systems, and a char is not necessarily 1 byte. This then makes me wonder how on earth I'm supposed to know how many bits got loaded from a file when I loaded 1 byte. Also how am I supposed to keep the loaded data in memory when there are no data types that can guarantee a set amount of bits.
How do I work with bit data in C?
A byte is not necessarily 8 bits. That much is certainly true. A char, on the other hand, is defined to be a byte - C does not differentiate between the two things.
However, the systems you will write for will almost certainly have 8-bit bytes. Bytes of different sizes are essentially non-existent outside of really, really old systems or certain embedded systems.
If you have to write your code to work for multiple platforms, and one or more of those have differently sized chars, then you write code specifically to handle that platform - using e.g. CHAR_BIT to determine how many bits each byte contains.
Given that this is for a class, assume 8-bit bytes, unless told otherwise. The point is not going to be extreme platform independence, the point is to teach you something about bit fiddling (or possibly bit fields, but that depends on what you've covered in class).
This then makes me wonder how on earth I'm supposed to know how many bits got loaded from a file when I loaded 1 byte.
You'll be hard-pressed to find a platform where a byte is not 8 bits (though, as noted above, CHAR_BIT can be used to verify that). Also clarify the portability requirements with your instructor, or state your assumptions.
Usually bits are extracted using shifts and bitwise operations, e.g. (x & 3) is the rightmost 2 bits of x, and ((x >> 2) & 3) is the next two bits. Pick the right data type for the platforms you are targeting, or, as others say, use something like uint8_t if it's available for your compiler.
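For the assignment in the question, the inner loop might look like this (the symbol table and the low-bits-first packing order are assumptions about the file format):

    #include <stdio.h>

    static const char symbols[4] = { 'a', 'b', 'c', 'd' };  /* illustrative */

    /* Each byte packs four 2-bit codes; emit one character per code. */
    void print_packed(const unsigned char *buf, size_t nbytes)
    {
        size_t i;
        int shift;
        for (i = 0; i < nbytes; i++)
            for (shift = 0; shift < 8; shift += 2)
                putchar(symbols[(buf[i] >> shift) & 3]);
    }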
Also see:
Type to use to represent a byte in ANSI (C89/90) C?
I would recommend not using bit fields. Also see here:
When is it worthwhile to use bit fields?
You can use bit fields in C. These let you explicitly specify the number of bits in each part of the field, if you are truly concerned about width. This page gives a discussion: http://msdn.microsoft.com/en-us/library/yszfawxh(v=vs.80).aspx
As an example, check out ieee754.h for bit fields used in the context of implementing IEEE-754 floats.
