printf support for MSP430 micro-controller - c

I am upgrading a fully tested C program for the Texas Instruments (TI) MSP430 micro-controller to a different C compiler, changing from the Quadravox AQ430 development tool to the VisualGDB C compiler.
The program compiles with zero errors and zero warnings with VisualGDB. The interrupt service routines, timers, UART control, and more all seem to be working. Certain uses of sprintf, however, are not working. For example:
unsigned int duration;
float msec;
msec = (float)duration;
msec = msec / 125.0;
sprintf(glb_response,"break: %3.1f msec",msec);
This returns: break: %3.1f msec
This is what is expected: break: 12.5 msec
I have learned about the following (from Wikipedia):
The compiler option --printf_support=[full | minimal | nofloat] allows
you to use a smaller, feature limited, variant of printf/sprintf, and
make that choice at build time.
The valid values are:
full: Supports all format specifiers. This is the default.
nofloat: Excludes support for printing floating point values. Supports
all format specifiers except %f, %g, %G, %e, and %E.
minimal: Supports the printing of integer, char, or string values
without width or precision flags. Specifically, only the %%, %d, %o,
%c, %s, and %x format specifiers are supported.
I need full support for printf. I know the MSP430 on my product will support this, as this C program has been in service for years.
My problem is that I can't figure out 1) whether VisualGDB has the means to set printf support to full, and 2) if so, where and how to set it.
Any and all comments and answers will be appreciated.

I would suggest that full support for floating point is both unnecessary and ill-advised. It is a large amount of code to solve a trivial problem; and without floating-point hardware, floating point operations are usually best avoided in any case for performance, code space and memory usage reasons.
It appears that duration is in units of 1/125000 seconds and that you wish to output a value with a precision of 0.1 milliseconds. So:
unsigned msec_x10 = duration * 10u / 125u ;
sprintf( glb_response,
         "break: %u.%u msec",
         msec_x10 / 10,
         msec_x10 % 10 ) ;
If you want rounding to the nearest tenth (as opposed to rounding down), then:
unsigned msec_x10 = ((duration * 20u) + 125 ) / 250u ;
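For completeness, here is a minimal self-contained sketch of the rounded variant (assuming glb_response is a global buffer, as the question implies; note that int is 16-bit on the MSP430, so the intermediate arithmetic is widened to unsigned long to avoid overflow for larger duration values):

#include <stdio.h>

char glb_response[32];   /* assumed size; the question does not show it */

void format_break(unsigned int duration)
{
    /* duration is in 1/125000 s ticks; msec_x10 is milliseconds times 10,
       rounded to the nearest tenth */
    unsigned long msec_x10 = ((unsigned long)duration * 20u + 125u) / 250u;

    sprintf(glb_response, "break: %lu.%lu msec",
            msec_x10 / 10u, msec_x10 % 10u);
}

With duration = 1562 (12.496 ms) this produces "break: 12.5 msec" without pulling in any floating-point formatting code.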

I agree with Clifford that if you don't need floats (or only need them for printing), don't use them.
However, if your program is already using floats extensively, and you need a way to print them, consider adapting a public domain printf such as the one from SQLite.


Problem with rms and dB values of discrete samples

I'm trying to sample PCM data via the ALSA project on my Raspberry Pi 4 in C. Recording works like a charm, but tampering with the samples themselves leaves me confused, especially since I already did the same on a different project (ESP32).
Consider "buffer" as an array of varying size per session (ALSA allocates differently every time) containing 32-bit 44100 Hz discrete audio samples stored as 8-bit values (an int32_t cast is needed). In order to get the dBFS value of a time stretch as big as one buffer, I thought to square every sample, add the squares together, divide by the number of samples, take the sqrt, divide by INT32_MAX and take the log10 of that, which is finally multiplied by 20. A standard RMS and then dBFS calculation:
uint32_t sum = 0;
int32_t* samples = (int32_t*)buffer;
for(int i = 0; i < (size / (BIT_DEPTH/8)); i++){
    sum += (uint32_t)pow(samples[i], 2);
}
double rms = sqrt(sum / (size / (BIT_DEPTH/8)));
int32_t decibel = (int32_t)(20 * log10(rms / INT32_MAX));
fprintf(stderr, "sum = %d\n", sum);
fprintf(stderr, "rms = %d\n", rms);
fprintf(stderr, "%d dBFS\n", decibel);
But instead of reasonable values for a somewhat quiet room (open window) or a speaker right next to the mics, I get non-changing, really quiet values of around -134 dBFS. Yes, the gain is low, so -134 could be possible, but what I understand even less is what happens when I print out the variables sum and rms:
buffersize: 262144
sum = -61773
rms = -262146
-138 dBFS
How could they ever be negative? This is probably a classic C issue which I can't see at the moment.
Again: writing the samples to a file results in a high-quality but low-gain WAV file (header needed). Any help? Thanks.
sum is a uint32_t, but you are printing it with %d, which is for int. The resulting behavior is not defined by the C standard. A common result is for values with the high bit set to be printed as negative numbers, but other behaviors are possible too. A correct conversion specification for unsigned int is %u, but, for uint32_t, you should include <inttypes.h> and use fprintf(stderr, "%" PRIu32 "\n", sum);.
Additionally, the squares and the summation may exceed what can be represented in a uint32_t, resulting in wrapping modulo 2^32.
rms is a double, but you are also printing it with %d, which is very wrong. Use %g, %f, or %e, or some other conversion for double, possibly with various modifiers to select formatting options.
With the int32_t decibel, %d might work in some C implementations, but a proper method is fprintf(stderr, "%" PRId32 " dBFS\n", decibel);.
Your compiler should have warned you of at least the double format issue. Pay attention to compiler warnings and fix the problems they report. Preferably, escalate compiler warnings to errors with the -Werror switch to GCC and Clang or the /WX switch to MSVC.
The line int32_t* samples = (int32_t*)buffer; may result in prohibited aliasing. Be very sure that the memory for buffer is properly defined to allow it to be aliased as int32_t. If it is not, the behavior is not defined by the C standard, and alternative techniques of accessing the buffer should be used, such as copying the data into an int32_t object one element at a time or into an array of int32_t.
Do not use pow to compute squares; it is wasteful (and introduces inaccuracies when other types are involved). For your types, use static inline uint32_t square(int32_t x) { return (uint32_t)x * (uint32_t)x; } and call it as square(samples[i]). If overflow is occurring, consider using int64_t when computing the square and uint64_t for the sum, as in the sketch below.
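Putting those pieces together, here is a hedged sketch of the corrected computation (buffer, size and BIT_DEPTH are taken from the question; the 64-bit accumulator follows the suggestion above and only wraps for extremely long, full-scale buffers):

#include <inttypes.h>
#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BIT_DEPTH 32   /* as in the question */

void print_dbfs(const unsigned char *buffer, size_t size)
{
    size_t count = size / (BIT_DEPTH / 8);
    uint64_t sum = 0;

    for (size_t i = 0; i < count; i++) {
        int32_t sample;
        /* memcpy avoids aliasing buffer as int32_t */
        memcpy(&sample, buffer + i * sizeof sample, sizeof sample);
        /* widen before squaring so the square cannot overflow */
        sum += (uint64_t)((int64_t)sample * sample);
    }

    double rms = sqrt((double)sum / count);
    int32_t decibel = (int32_t)(20 * log10(rms / INT32_MAX));

    fprintf(stderr, "sum = %" PRIu64 "\n", sum);
    fprintf(stderr, "rms = %g\n", rms);
    fprintf(stderr, "%" PRId32 " dBFS\n", decibel);
}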

Is there a less-space taking way than using sprintf to format floats to strings?

I am developing a C code for a microcontroller, this code takes input from sensors and outputs the data from the sensors along with other strings on an alphanumeric character LCD.
I generally use sprintf for this, but I came to notice that when using sprintf to format floats into strings, it takes up too much program memory, which is quite scarce on a microcontroller.
(By too much I mean jumping straight from 34% of program memory to 99.2%)
So my question is, is there a less-space taking method to format floats into strings?
I only care about how simple the method is.
I use MPLABX IDE with XC8 compiler on a PIC16F877a 8-bit MCU.
Thanks a lot in advance.
Is there a less-space taking way than using sprintf to format floats to strings?
... code takes input from sensors and outputs the data from the sensors along with other strings on an alphanumeric character LCD
"Do not use floating point at all." - @user3386109
The reading from the sensor is certainly an integer. Convert that reading to deci-degrees C using integer math and then print.
TMP235 example:

Temperature    Output (mV)
-40 C          100
0 C            500
150 C          2000
#define SCALE_NUM ((int32_t)(150 - -40) * 10)
#define SCALE_DEN (2000 - 100)
#define OFFSET (500)

int temperature_raw = temperature_sensor();
int temperature_decidegreesC = (temperature_raw - OFFSET) * SCALE_NUM / SCALE_DEN;
send_integer(temperature_decidegreesC / 10);
send_char('.');
send_char(abs(temperature_decidegreesC % 10) + '0');
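Worked example: with the TMP235 numbers above, SCALE_NUM and SCALE_DEN both evaluate to 1900, so the conversion reduces to temperature_raw - OFFSET. A 733 mV reading gives 233 deci-degrees C, which is sent as "23.3".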
Other improvements can be had, but avoiding FP variables and math and using integer math is the key.
There are lots of printf replacements available, but none of them is fully compliant with the standard; they leave out certain functionality to get the code size down.
Some that I have used are Mpaland printf and Menie printf.
There is also Chan printf, but it doesn't support float at all.
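As a hedged sketch of how such a replacement typically hooks into a bare-metal project (the Mpaland printf, for instance, asks you to supply a single output callback named _putchar; uart_send_byte is a hypothetical stand-in for your platform's transmit routine, and depending on the library version the formatting function may be named printf or printf_):

#include <stdlib.h>   /* abs */
#include "printf.h"   /* from the Mpaland printf project */

/* the one low-level hook the library asks you to implement */
void _putchar(char character)
{
    uart_send_byte(character);   /* hypothetical UART transmit routine */
}

void show_temperature(int decidegreesC)
{
    /* integer-only formatting keeps the float machinery out of the build */
    printf("temp: %d.%d C\n", decidegreesC / 10, abs(decidegreesC % 10));
}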
is there a less-space taking method to format floats into strings?
Just write the raw bytes and do the conversion on the reader's side. Knowing the endianness and format of the floating-point numbers on the microcontroller, the reader can reconstruct the floating-point value from the bytes in software.
From XC8 documentation you know the format of floating-point number:
Floating point is implemented using either a IEEE 754 32-bit format, or a truncated, 24-bit form of this.
On the microcontroller side you would just do:
void send_byte(unsigned char b) {
    /* send the byte as-is, as binary - the simplest there is */
    hardware_send(b);

    /* ... or instead send it as a readable hex number, depending on
       whether you want the output human-readable or not:

       char buf[10];
       int len = snprintf(buf, sizeof(buf), "%#02x", b);
       for (int i = 0; i < len; ++i) {
           hardware_send(buf[i]);
       }
    */
}
void send_float(float data) {
    const unsigned char *b = (const unsigned char *)&data;
    for (unsigned char i = 0; i < sizeof(data); ++i) {
        send_byte(b[i]);
    }
}

int main(void) {
    float data = get_data();
    send_float(data);
}
This will cost almost nothing to convert the data. Write your own byte-to-hex conversion and do not use sprintf at all to save even more memory.
On the remote side, you would write a software conversion to a floating-point number: accumulate the bytes into a buffer, fix the endianness of the input, then extract the sign, mantissa and exponent using bitwise operations. In C you would then use ldexp (or the legacy scalb) to combine the mantissa and exponent into a floating-point number and multiply by the sign, as sketched below. But the better choice is to use a more flexible programming language on the PC side when possible - I would go with Python.
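A hedged sketch of such a reader in C, assuming the microcontroller sends a little-endian IEEE 754 binary32 value (adjust the byte order to match your link):

#include <math.h>
#include <stdint.h>

/* Decode 4 received bytes into a double without relying on the
   host's own float layout. */
double decode_float_le(const unsigned char b[4])
{
    uint32_t bits = (uint32_t)b[0] | ((uint32_t)b[1] << 8)
                  | ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
    int sign = (bits >> 31) ? -1 : 1;
    int exponent = (int)((bits >> 23) & 0xFF);
    uint32_t mantissa = bits & 0x7FFFFF;

    if (exponent == 0xFF)   /* infinity or NaN */
        return sign * (mantissa ? NAN : INFINITY);
    if (exponent == 0)      /* zero or subnormal */
        return sign * ldexp((double)mantissa, -126 - 23);
    /* normal number: restore the implicit leading 1 bit */
    return sign * ldexp((double)(mantissa | 0x800000), exponent - 127 - 23);
}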
But from the pragmatic side...
on a PIC16F877a 8-bit MCU.
You would never ever use floating-point numbers on such a small MCU. As you seem to want to transfer a temperature, a 32-bit long number expressed in milli-celsius will give you far more range than you will ever need, and even a 16-bit short number expressed in centi-celsius would be more than enough. Do not use floating-point numbers at all. Convert all your code to use integers only.
#subjective-side-note: My journey with XC8 has been rather unpleasant. The free version of XC8 generates very bad, unoptimized code, and I enjoyed SDCC more. If this is a hobby project, I would suggest moving to an STM32 (a blue-pill board, for example) or an Arduino-style board (e.g. an ESP8266, which even has Wi-Fi) - they are cheaper, easier to work with, modern, and GCC works on them.

Can't figure out whether sizeof(long double) in C is 16 bytes or 10 bytes

Although there are some answers on this website, I still can't figure out the meaning of sizeof(long double). Why is the output of printing var3 3.141592653589793115998?
When I run the same code as another person, the output differs from theirs. Could somebody help me to solve this problem?
My testing code:
float var1 = 3.1415926535897932;
double var2 = 3.1415926535897932;
long double var3 = 3.141592653589793213456;
printf("%d\n", sizeof(float));
printf("%d\n", sizeof(double));
printf("%d\n", sizeof(long double));
printf("%.16f\n", var1);
printf("%.16f\n", var2);
printf("%.21Lf\n", var3);
Output of my testing code:
4
8
16
3.1415927410125732
3.1415926535897931
3.141592653589793115998
The code is identical to the other person's, but their output is:
4
8
12
3.1415927410125732
3.1415926535897931
3.141592741012573213359
Could somebody tell me why our outputs are different?
Floating-point numbers (inside our computers) are not mathematical real numbers.
They have lots of counter-intuitive properties (e.g. (1.0-x) + x in your C code can be different from 1...). For more, read the floating-point-gui.de.
Be also aware that a number is not its representation in digits. For example, most of your examples are approximations of the number π, which, intuitively speaking, has an infinite number of digits or bits, since it is a transcendental number (as proven by Ferdinand von Lindemann).
I still can't figure out the meaning of sizeof(long double).
It is the implementation-specific size of a long double variable, measured in multiples of the size of a char (so sizeof(char) is 1 by definition).
The C11 standard (read n1570 and see this reference) does allow an implementation to have sizeof(long double) be, like sizeof(char), equal to 1. I cannot name such an implementation, but it might (theoretically) be the case on some weird computer architectures (e.g. some DSPs).
Could somebody tell me why our outputs are different?
What makes you think they should be equal?
Practically speaking, floating-point numbers are often IEEE 754. But on IBM mainframes (e.g. the z/Series) or on VAXes they are not.
float var1 =3.1415926535897932;
double var2 =3.1415926535897932;
Be aware that it could be possible to have a C implementation where (double)var1 != var2 or where var1 != (float)var2 after executing the above instructions.
If you need more precision than what long double achieves on your particular C implementation (e.g. your recent GCC compiler, which could be a cross-compiler), consider using an arbitrary-precision arithmetic library such as GMPlib.
I recommend carefully reading the documentation of printf(3) and of every other function that you are using from your C standard library. I also suggest reading the documentation of your C compiler.
You might be interested in static program analysis tools such as Frama-C or the Clang static analyzer. Read also this draft report.
If your C compiler is a recent GCC, compile with all warnings and debug info (gcc -Wall -Wextra -g) and learn how to use the GDB debugger.
Could somebody tell me why our outputs are different?
C allows different compilers/implementations to use different floating-point encodings and to handle evaluations in slightly different ways.
Precision
The difference in sizeof hints that the two implementations may employ different precision. Yet the difference could also be due to padding: extra bytes added to preserve alignment for performance reasons.
A better precision assessment is to print the epsilon: the difference between 1.0 and the next larger representable value of the type.
#include <float.h>
printf("%e %e %Le\n", FLT_EPSILON, DBL_EPSILON, LDBL_EPSILON);
Sample result
1.192093e-07 2.220446e-16 1.084202e-19
FLT_EVAL_METHOD
When this is 0, operations on floating-point types evaluate in that type. With other values, like 2, floating-point operations evaluate using wider types, and only at the end is the result saved to the target type.
printf("FLT_EVAL_METHOD %d\n", FLT_EVAL_METHOD);
Two of the several possible values are:

FLT_EVAL_METHOD
0: evaluate all operations and constants just to the range and precision of the type;
2: evaluate all operations and constants to the range and precision of the long double type.
Notice that the constants 3.1415926535897932 and 3.141592653589793213456 are both normally double constants. Neither has an L suffix that would make them long double. Both have the same double value of 3.1415926535897931..., so var2 and var3 should get the same value. Yet with FLT_EVAL_METHOD == 2, constants can be evaluated as long double, and that is certainly what happened in "the output from another person" code.
Print FLT_EVAL_METHOD to see that difference.
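A small sketch to observe both effects on your own implementation (the digits printed will vary with FLT_EVAL_METHOD and the size of long double):

#include <float.h>
#include <stdio.h>

int main(void)
{
    double var2 = 3.1415926535897932;              /* a double constant */
    long double var3 = 3.141592653589793213456;    /* still a double constant: no L suffix */
    long double var3L = 3.141592653589793213456L;  /* a true long double constant */

    printf("FLT_EVAL_METHOD = %d\n", FLT_EVAL_METHOD);
    printf("sizeof(long double) = %zu\n", sizeof(long double));
    printf("var2  = %.21f\n", var2);
    printf("var3  = %.21Lf\n", var3);
    printf("var3L = %.21Lf\n", var3L);
    return 0;
}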

looking for snprintf()-replacement

I want to convert a float (e.g. f=1.234) to a char array (e.g. s="1.234"). This is quite easy with snprintf(), but for some size and performance reasons I can't use it (it is an embedded platform where snprintf() is too slow because it uses doubles internally).
So: how can I easily convert a float to a char-array (positive and negative floats, no exponential representation, maximum three digits after the dot)?
Thanks!
PS: to clarify this: the platform comes with a NEON FPU which can do 32-bit float operations in hardware but is slow with 64-bit doubles. The C library for this platform unfortunately does not have a specific NEON/float variant of snprintf, so I need a replacement. Besides that, the complete snprintf/printf machinery increases code size too much.
For many microcontrollers a simplified printf function without float/double support is available. For instance, many platforms have newlib-nano, and Texas Instruments provides ustdlib.c.
With one of those non-float printf functions you could split up the printing into something using only integers, like
float a = -1.2339f;
float b = a + ((a > 0) ? 0.0005f : -0.0005f);  /* add +/-0.0005 so truncation at 3 decimals rounds to nearest */
int c = b;                       /* integer part, truncated toward zero */
int d = (int)(b * 1000) % 1000;  /* three fractional digits */
if (d < 0) d = -d;               /* drop the sign; it is carried by c */
printf("%d.%03d\n", c, d);       /* caveat: values in (-1, 0) lose the minus sign, since c == 0 */
which outputs
-1.234
Do watch out for overflows of the integer on 8 and 16 bit platforms.
-edit-
Furthermore, as noted in the comments, rounding corner cases will produce different answers than printf's implementation.
You might check to see if your stdlib provides strfromf, the low-level routine that converts a float into a string and is normally used by printf and friends. If available, this might be lighter-weight than including the entire stdio library (and indeed, that is one reason it was included in the IEC 60559 C extension, ISO/IEC TS 18661-1).
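A hedged sketch, assuming your C library actually ships strfromf (where available it is declared in <stdlib.h>):

#include <stdio.h>
#include <stdlib.h>   /* strfromf, where available (TS 18661-1 / C23) */

int main(void)
{
    char buf[16];
    /* three digits after the dot; strfromf takes the float directly,
       avoiding the default argument promotion to double */
    int n = strfromf(buf, sizeof buf, "%.3f", 1.234f);
    if (n > 0 && (size_t)n < sizeof buf) {
        puts(buf);   /* prints 1.234 */
    }
    return 0;
}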

C long double in golang

I am porting an algorithm from C to Go. And I got a little bit confused. This is the C function:
void gauss_gen_cdf(uint64_t cdf[], long double sigma, int n)
{
    int i;
    long double s, d, e;
    // Calculations ...
    for (i = 1; i < n - 1; i++) {
        cdf[i] = s;
    }
}
And in the for loop the value s is assigned to elements of the array cdf. How is this possible? As far as I know, a long double corresponds to a float64 (in the Go context). So I shouldn't be able to compile the C code, because I am assigning a long double to an array which contains only uint64 elements. But the C code works fine.
So can someone please explain why this is working?
Thank you very much.
UPDATE:
The original C code of the function can be found here: https://github.com/mjosaarinen/hilabliss/blob/master/distribution.c#L22
The assignment cdf[i] = s performs an implicit conversion to uint64_t. It's hard to tell if this is intended without the calculations you omitted.
In practice, long double as a type has considerable variance across architectures. Whether Go's float64 is an appropriate replacement depends on the architecture you are porting from. For example, on x86, long double is an 80-bit extended-precision type, but Windows systems are usually configured in such a way as to compute results only with a 53-bit mantissa, which means that float64 could still be equivalent for your purposes.
EDIT: In this particular case, the values computed by the sources appear to be static and independent of the input. I would just use float64 on the Go side and see if the computed values are identical to those of the C version when run on an x86 machine under real GNU/Linux (virtualization should be okay), to work around the Windows FPU issues. The choice of x86 is just a guess, because it is likely what the original author used. I do not understand the underlying cryptography, so I can't say whether a difference in the computed values impacts the security. (Also note that the C code does not seem to properly seed its PRNG.)
C long double in golang
The title suggests an interest in whether or not Go has an extended-precision floating-point type similar to long double in C.
The answer is:
Not as a primitive type; see Basic types.
But arbitrary precision is supported by the math/big package.
Why is this working?
long double s = some_calculation();
uint64_t a = s;
It compiles because, unlike Go, C allows certain implicit type conversions. The integer portion of the floating-point value of s is kept. Presumably the s value has been scaled such that it can be interpreted as a fixed-point value where, based on the linked library source, 0xFFFFFFFFFFFFFFFF (2^64 - 1) represents the value 1.0. To make the most of such assignments, it may be worthwhile to use an extended floating-point type with 64 precision bits.
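To make both points concrete, here is a small C sketch (the values are hypothetical; on x86, long double carries 64 precision bits, so the 2^64 - 1 scale factor is represented exactly):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* implicit conversion: the fractional part is simply discarded */
    long double s = 3.9L;
    uint64_t a = s;   /* a == 3 */

    /* fixed-point view: scale a value in [0, 1] so that
       0xFFFFFFFFFFFFFFFF represents 1.0 */
    long double p = 0.25L;
    uint64_t q = (uint64_t)(p * (long double)UINT64_MAX);

    printf("a = %llu\nq = %llu\n",
           (unsigned long long)a, (unsigned long long)q);
    return 0;
}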
If I had to guess, I would say that the (crypto-related) library is using fixed point here because the authors want to ensure deterministic results; see How can floating point calculations be made deterministic?. And since the extended-precision floating point is only used for initializing a lookup table, using the (presumably slow) math/big package would likely perform perfectly fine in this context.
