looking for snprintf()-replacement - c

I want to convert a float (e.g. f=1.234) to a char array (e.g. s="1.234"). This is easy with snprintf(), but for size and performance reasons I can't use it (it is an embedded platform where snprintf() is too slow because it uses doubles internally).
So: how can I easily convert a float to a char array (positive and negative floats, no exponential representation, maximum three digits after the dot)?
Thanks!
PS: To clarify: the platform comes with a NEON FPU which can do 32-bit float operations in hardware but is slow with 64-bit doubles. The C library for this platform unfortunately does not have a NEON/float-specific variant of snprintf, so I need a replacement. Besides that, the complete snprintf/printf machinery increases code size too much.

For many microcontrollers a simplified printf function without float/double support is available. For instance, many platforms ship newlib-nano, and Texas Instruments provides ustdlib.c.
With one of those non-float printf functions you could split the printing into something using only integers, like
float a = -1.2339f;
float b = a + ((a > 0) ? 0.0005f : -0.0005f);  /* round away from zero */
int c = (int)b;                                /* integer part */
int d = (int)(b * 1000) % 1000;                /* three decimal digits */
if (d < 0) d = -d;
printf("%d.%03d\n", c, d);
which outputs
-1.234
Do watch out for overflows of the integer on 8 and 16 bit platforms.
-edit-
Furthermore, as noted in the comments, rounding corner cases will give different answers than printf's implementation, and values between -1 and 0 lose their minus sign, since the integer part is 0.
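If you want to drop printf entirely, the same integer trick can fill a buffer by hand. A minimal sketch (not from the answer above; ftoa3 is an invented name, and it assumes the value times 1000 fits in a 32-bit integer, with the same rounding caveats):

#include <stdint.h>

/* format f with exactly three digits after the dot into buf */
void ftoa3(float f, char *buf)
{
    /* scale to thousandths, rounding away from zero */
    int32_t scaled = (int32_t)(f * 1000.0f + ((f >= 0) ? 0.5f : -0.5f));
    if (scaled < 0) { *buf++ = '-'; scaled = -scaled; }
    int32_t whole = scaled / 1000;
    int32_t frac = scaled % 1000;
    char tmp[12];
    int n = 0;
    do { tmp[n++] = (char)('0' + whole % 10); whole /= 10; } while (whole > 0);
    while (n > 0) *buf++ = tmp[--n];   /* digits were collected in reverse */
    *buf++ = '.';
    *buf++ = (char)('0' + frac / 100);
    *buf++ = (char)('0' + (frac / 10) % 10);
    *buf++ = (char)('0' + frac % 10);
    *buf = '\0';
}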

You might check whether your stdlib provides strfromf, the low-level routine that converts a float into a string and that is normally used by printf and friends. If available, it might be lighter-weight than pulling in the entire stdio library (and indeed, that is the reason it was included in the IEC 60559 C extensions, ISO/IEC TS 18661).
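If your library has it, usage looks like this (the format string is restricted to a single conversion specification):

#include <stdlib.h>

char buf[16];
/* strfromf(dst, size, format, value) - declared in <stdlib.h> */
strfromf(buf, sizeof buf, "%.3f", 1.234f);  /* buf now holds "1.234" */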

Related

Is there a less-space taking way than using sprintf to format floats to strings?

I am developing C code for a microcontroller; the code takes input from sensors and outputs the sensor data, along with other strings, on an alphanumeric character LCD.
I generally use sprintf for this, but I noticed that when using sprintf to format floats into strings, it takes up too much program memory, which is scarce on a microcontroller.
(By too much I mean jumping straight from 34% of program memory to 99.2%.)
So my question is: is there a less space-taking method to format floats into strings?
I only care about how simple the method is.
I use MPLABX IDE with XC8 compiler on a PIC16F877a 8-bit MCU.
Thanks a lot in advance.
Is there a less-space taking way than using sprintf to format floats to strings?
... code takes input from sensors and outputs the data from the sensors along with other strings on an alphanumeric character
Do not use floating point at all. @user3386109
The reading from the sensor is certainly an integer. Convert that reading to deci-degrees C using integer math and then print.
TMP235 example:

Temperature   Output (mV)
-40 C          100
  0 C          500
150 C         2000
#define SCALE_NUM ((int32_t)(150 - -40) * 10)  /* temperature span in deci-degrees C */
#define SCALE_DEN (2000 - 100)                 /* output span in mV */
#define OFFSET    (500)                        /* output at 0 C, in mV */
int temperature_raw = temperature_sensor();
int temperature_decidegreesC = (temperature_raw - OFFSET) * SCALE_NUM / SCALE_DEN;
send_integer(temperature_decidegreesC / 10);
send_char('.');
send_char(abs(temperature_decidegreesC % 10) + '0');
Other improvements can be had, but avoiding FP variables and math in favor of integer math is the key.
There are lots of printf replacements available, but none of them is fully compliant with the standard; they leave out certain functionality to get the code size down.
Some that I have used are Mpaland printf and Menie printf.
There is also Chan printf, but it doesn't support float at all.
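These libraries typically only ask you to supply the low-level character output. A sketch for Mpaland's printf (check the library's README for the exact requirements; uart_send_byte is a placeholder for your platform's routine):

/* output hook required by Mpaland's printf */
void _putchar(char character)
{
    uart_send_byte(character);
}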
is there a less-space taking method to format floats into strings?
Just write the raw bytes and do the conversion on the reader's side. Knowing the endianness and format of the floating-point numbers on the microcontroller, the reader can reconstruct the floating-point value from the bytes in software.
From XC8 documentation you know the format of floating-point number:
Floating point is implemented using either an IEEE 754 32-bit format, or a truncated, 24-bit form of this.
On the microcontroller side you would do just:

void send_byte(unsigned char b) {
    /* send the byte as-is, as binary - the simplest there is */
    hardware_send(b);

    /* ...or, if you want it human-readable, send it as a hex number
       instead:

    char buf[10];
    int len = snprintf(buf, sizeof(buf), "%02x", b);
    for (int i = 0; i < len; ++i) {
        hardware_send(buf[i]);
    }
    */
}

void send_float(float data) {
    const unsigned char *b = (const unsigned char *)&data;
    for (unsigned char i = 0; i < sizeof(data); ++i) {
        send_byte(b[i]);
    }
}

int main(void) {
    float data = get_data();
    send_float(data);
}
This costs almost nothing to convert the data. Write your own byte-to-hex conversion and avoid sprintf entirely to save even more memory.
On the remote side, you would convert back to a floating-point number in software: accumulate the bytes into a buffer, fix the endianness of the input, and extract the sign, mantissa and exponent using bitwise operations. In C you would then use scalb (or ldexpf) to combine the mantissa and exponent into a floating-point number and multiply by the sign. But a more flexible programming language on the PC side is the better choice when possible - I would go with Python.
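In C, the reader-side decoding might look like this (a minimal sketch, assuming the full 32-bit format and little-endian byte order; NaN/infinity handling omitted):

#include <stdint.h>
#include <math.h>

float decode_float_le(const unsigned char b[4])
{
    uint32_t bits = (uint32_t)b[0] | ((uint32_t)b[1] << 8) |
                    ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
    int sign = (bits >> 31) ? -1 : 1;
    int exponent = (int)((bits >> 23) & 0xFF);
    uint32_t mantissa = bits & 0x7FFFFF;

    if (exponent == 0)  /* subnormal: no implicit leading 1 */
        return sign * ldexpf((float)mantissa, -126 - 23);
    /* normal: restore the implicit leading 1 and remove the bias */
    return sign * ldexpf((float)(mantissa | 0x800000), exponent - 127 - 23);
}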
But from the pragmatic side...
on a PIC16F877a 8-bit MCU.
You would never ever use floating-point numbers on such a small MCU. As you seem to want to transfer a temperature, a 32-bit long expressed in milli-Celsius gives you a practically endless temperature range, and even a 16-bit short expressed in centi-Celsius will be more than enough. Do not use floating-point numbers at all; convert all your code to use integers only.
#subjective-side-note: My journey with XC8 has been more than unpleasant. The free version of XC8 generates very bad, unoptimized code, and I enjoyed sdcc more. If this is an amateur project, I would suggest moving to an STM32 (a blue-pill, for example) or an Arduino-style board (e.g. the ESP8266, which even has WiFi...); they are just cheaper, easier to work with, modern, and gcc works on them.

C long double in golang

I am porting an algorithm from C to Go, and I got a little bit confused. This is the C function:
void gauss_gen_cdf(uint64_t cdf[], long double sigma, int n)
{
    int i;
    long double s, d, e;
    // Calculations ...
    for (i = 1; i < n - 1; i++) {
        cdf[i] = s;
    }
}
In the for loop, the value s is assigned to elements of the array cdf. How is this possible? As far as I know, a long double corresponds to a float64 (in the Go context), so I would expect the C code not to compile, because I am assigning a long double to an array that contains only uint64 elements. But the C code works fine.
So can someone please explain why this works?
Thank you very much.
UPDATE:
The original C code of the function can be found here: https://github.com/mjosaarinen/hilabliss/blob/master/distribution.c#L22
The assignment cdf[i] = s performs an implicit conversion to uint64_t. It's hard to tell if this is intended without the calculations you omitted.
In practice, the long double type varies considerably across architectures. Whether Go's float64 is an appropriate replacement depends on the architecture you are porting from. For example, on x86, long double is an 80-bit extended-precision type, but Windows systems are usually configured to compute results with only the 53-bit mantissa, which means that float64 could still be equivalent for your purposes.
EDIT In this particular case, the values computed by the sources appear to be static and independent of the input. I would just use float64 on the Go side and check whether the computed values are identical to those of the C version when run on an x86 machine under real GNU/Linux (virtualization should be okay), to work around the Windows FPU issues. The choice of x86 is just a guess, because it is likely what the original author used. I do not understand the underlying cryptography, so I can't say whether a difference in the computed values impacts the security. (Also note that the C code does not seem to properly seed its PRNG.)
C long double in golang
The title suggests an interest in whether or not Go has an extended-precision floating-point type similar to long double in C.
The answer is:
Not as a primitive, see Basic types.
But arbitrary precision is supported by the math/big library.
Why is this working?
long double s = some_calculation();
uint64_t a = s;
It compiles because, unlike Go, C allows certain implicit type conversions. The integer portion of the floating-point value of s will be copied. Presumably the s value has been scaled such that it can be interpreted as a fixed-point value where, based on the linked library source, 0xFFFFFFFFFFFFFFFF (2^64-1) represents the value 1.0. To make the most of such assignments, it is worthwhile to use an extended floating-point type with at least 64 bits of precision.
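A tiny self-contained illustration of that conversion (the values here are made up; only the truncating assignment matters):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    long double p = 0.75L;                        /* hypothetical CDF value in [0,1] */
    long double scale = 18446744073709551615.0L;  /* 2^64 - 1 represents 1.0 */
    uint64_t cdf_entry = p * scale;               /* implicit conversion truncates */
    printf("%llu\n", (unsigned long long)cdf_entry);
    return 0;
}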
If I had to guess, I would say that the (crypto-related) library is using fixed-point here because they want to ensure deterministic results, see: How can floating point calculations be made deterministic?. And since the extended-precision floating point is only being used for initializing a lookup table, using the (presumably slow) math/big library would likely perform perfectly fine in this context.

How can I minimize the code size of this program?

I have some problems with memory. Is it possible to reduce the size of the compiled code for this function?
It does some calculations with time variables {hh,mm,ss.0} and returns a time (in millis) that depends on the current progress (_SHOOT_COUNT):
unsigned long hour_koef = 3600000L;
unsigned long min_koef = 60000;

unsigned long timeToMillis(int* time)
{
    return (hour_koef*time[0] + min_koef*time[1] + 1000*time[2] + 100*time[3]);
}

float Func1(float x)
{
    return (x*x)/(x*x + (1-x)*(1-x));
}

float EaseFunction(byte percent, byte type)
{
    if (type == 0)
        return Func1(float(percent)/100);
    return 0;  /* fallback so every path returns a value */
}

unsigned long DelayEasyControl()
{
    long dd = timeToMillis(D1);
    long dINfrom = timeToMillis(Din);
    long dOUTto = timeToMillis(Dout);
    if (easyINmode == 0 && easyOUTmode == 0) return dd;
    if (easyINmode == 1 && easyOUTmode == 0)
    {
        if (_SHOOT_COUNT < duration) return (dINfrom + (dd - dINfrom)*EaseFunction(_SHOOT_COUNT*100/duration, 0));
        else return dd;
    }
    if (easyOUTmode == 1)
    {
        if (_SHOOT_COUNT >= _SHOOT_activation && _SHOOT_activation != -1)
        {
            if ((_SHOOT_COUNT - _SHOOT_activation) < current_settings.delay_easyOUT_duration)
                return (dOUTto - (dOUTto - dd)*(1 - EaseFunction((_SHOOT_COUNT - _SHOOT_activation)*100/duration, 0)));
            else return dOUTto;
        }
        else
        {
            if (easyINmode == 0) return dd;
            else if (_SHOOT_COUNT < duration) return (dINfrom + (dd - dINfrom)*EaseFunction(_SHOOT_COUNT*90/duration, 0));
            else return dd;
        }
    }
    return dd;  /* default, so every path returns a value */
}
You mention that it's code size you want to optimize, and that you're doing this on an Arduino clone (based on the ATmega32U4).
Those controllers don't have hardware support for floating-point, so it's all going to be emulated in software which takes up a lot of code.
Try re-writing it to use fixed-point arithmetic; you will save a lot of code space that way.
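As a sketch of the idea (invented names; assumes percent stays within 0..100), the easing curve can be computed entirely in integer math by working in per-mille units instead of floats:

#include <stdint.h>

/* Fixed-point version of Func1: input is percent in [0,100], output is
   the ease value scaled by 1000 (0..1000 instead of 0.0..1.0). */
uint16_t ease_permille(uint8_t percent)
{
    uint32_t x  = percent;      /* x, in hundredths */
    uint32_t nx = 100 - x;      /* (1 - x), same scale */
    /* (x*x) / (x*x + (1-x)*(1-x)), scaled to 0..1000; fits in 32 bits */
    return (uint16_t)((x * x * 1000u) / (x * x + nx * nx));
}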
You might see minor gains by optimizing the other data types, i.e. uint16_t instead of long might suffice for some of the values, and marking functions as inline can save the instructions needed to do the jump. The compiler might already be inlining, of course.
Most compilers have an option for optimizing for size; try it first. Then you may try a non-standard 24-bit float type available in some compilers for 8-bit MCUs, such as NXP's MRK III or MPLAB XC8:
By default, the XC8 compiler uses a 24-bit floating-point format that is a truncated form of the 32-bit format and that has eight bits of exponent but only 16 bits of signed mantissa.
Understanding Floating-Point Values
That will reduce the size of the floating-point math library a lot without any code changes, but it may still be too big for your MCU. In that case you'll need to rewrite the program. The most effective solution is to switch to fixed-point (a.k.a. scaled integers) like @unwind said, if you don't need very wide ranges. In fact that's a lot faster and takes much less ROM than a software floating-point solution. Microchip's document above also suggests that solution:
The larger IEEE formats allow precise numbers, covering a large range of values to be handled. However, these formats require more data memory to store values of this type and the library routines that process these values are very large and slow. Floating-point calculations are always much slower than integer calculations and should be avoided if at all possible, especially if you are using an 8-bit device. This page indicates one alternative you might consider.
Also, you can store duplicated expressions like x*x and 1-x in a variable instead of calculating them twice, as in (x*x)/(x*x+(1-x)*(1-x)); that helps a little if the compiler is too dumb to do it itself (see the sketch below). The same goes for easyINmode==0, easyOUTmode==1, and so on.
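For example, Func1 with the common subexpressions hoisted:

float Func1(float x)
{
    float x2 = x * x;    /* x*x computed once */
    float ix = 1 - x;    /* 1-x computed once */
    return x2 / (x2 + ix * ix);
}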
Some other things:
ALL_CAPS should be used for macros and constants only
Identifiers beginning with an underscore followed by a capital letter are reserved for the implementation; C may also use them for future features like _Bool or _Atomic. See What are the rules about using an underscore in a C++ identifier? (Arduino is probably C++)
Use functions instead of macros for things that are reused many times, because the inline expansion of a macro eats some space on each use.

printf support for MSP430 micro-controller

I am upgrading a fully tested C program for the Texas Instruments (TI) MSP430 microcontroller with a different C compiler, changing from the Quadravox AQ430 development tool to the VisualGDB C compiler.
The program compiles with zero errors and zero warnings under VisualGDB. The interrupt service routines, timers, UART control, and more all seem to be working. Certain uses of sprintf, however, are not. For example:
unsigned int duration;
float msec;
msec = (float)duration;
msec = msec / 125.0;
sprintf(glb_response,"break: %3.1f msec",msec);
This returns: break: %3.1f msec
This is what is expected: break: 12.5 msec
I have learned about the following (from Wikipedia):
The compiler option --printf_support=[full | minimal | nofloat] allows
you to use a smaller, feature-limited variant of printf/sprintf, and to
make that choice at build time.
The valid values are:
full: Supports all format specifiers. This is the default.
nofloat: Excludes support for printing floating point values. Supports
all format specifiers except %f, %g, %G, %e, and %E.
minimal: Supports the printing of integer, char, or string values
without width or precision flags. Specifically, only the %%, %d, %o,
%c, %s, and %x format specifiers are supported
I need full support for printf. I know the MSP430 on my product will support this, as this C program has been in service for years.
My problem is that I can't figure out 1) whether VisualGDB has the means to set printf support to full and 2) if so, where and how to set it.
Any and all comments and answers will be appreciated.
I would suggest that full support for floating point is both unnecessary and ill-advised. It is a large amount of code to solve a trivial problem, and without floating-point hardware, floating-point operations are usually best avoided in any case for performance, code-space and memory-usage reasons.
So it appears that duration is in units of 1/125000 seconds and that you wish to output a value to a precision of 0.1 milliseconds. So:
unsigned msec_x10 = duration * 10u / 125u;

sprintf( glb_response,
         "break: %u.%u msec",
         msec_x10 / 10,
         msec_x10 % 10 );
If you want rounding to the nearest tenth (as opposed to rounding down), then:
unsigned msec_x10 = ((duration * 20u) + 125u) / 250u;
I agree with Clifford that if you don't need floats (or only need them for printing), don't use them.
However, if your program is already using floats extensively, and you need a way to print them, consider adapting a public domain printf such as the one from SQLite.

Scramble a floating point number?

I need a repeatable pseudo-random function from floats in [0,1] to floats in [0,1]. I.e. given a 32-bit IEEE float, return a "different" one (as random as possible, given the 24 bits of mantissa). It has to be repeatable, so keeping tons of internal state is out. And unfortunately it has to work with only 32-bit int and single-precision float math (no doubles, and not even a 32x32=64-bit multiply, though I could emulate that if needed - basically it needs to work on older CUDA hardware). The better the randomness the better, of course, within these rather severe limitations. Anyone have any ideas?
(I've been through Park-Miller, which requires 64-bit int math, and the CUDA version of Park-Miller which requires doubles, Mersenne Twisters which have lots of internal state, and a few other things which didn't work.)
As best I understand the requirements, a hash accomplishes the desired functionality. Reinterpret the float input as an integer, apply the hash function to produce an integer approximately uniformly distributed in [0, 2^32), then multiply this integer by 2^-32 to convert the result back to a float roughly uniformly distributed in [0, 1]. One suitable hash function that does not require multiplication is Bob Jenkins' mix(), which can be found here: http://www.burtleburtle.net/bob/hash/doobs.html
To reinterpret the bits of a float as an integer and vice versa, there are two choices in CUDA: use intrinsics, or use C++-style reinterpretation casts:

float f;
int i;
/* via device intrinsics */
i = __float_as_int(f);
f = __int_as_float(i);
/* via C++-style reinterpretation casts */
i = reinterpret_cast<int&>(f);
f = reinterpret_cast<float&>(i);
So as a self-contained function, the entire process might look something like this:
/* transform float in [0,1] into a different float in [0,1] */
float scramble_float (float f)
{
    unsigned int magic1 = 0x96f563ae; /* number of your choice */
    unsigned int magic2 = 0xb93c7563; /* number of your choice */
    unsigned int j;
    j = reinterpret_cast<unsigned int &>(f);
    mix (magic1, magic2, j);            /* Bob Jenkins' mix(), see link above */
    return 2.3283064365386963e-10f * j; /* multiply by 2^-32 */
}
The NVIDIA CUDA Toolkit includes a library called CURAND that I believe fits your requirements: it produces repeatable results (assuming you start with the same seed), works on the GPU, supports 32-bit floats and ints, and should work on older GPUs. It also supports multiple pseudo- and quasi-random generation algorithms and distributions.
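The device-side API is compact; a minimal sketch (assuming the default XORWOW generator, compiled with nvcc):

#include <curand_kernel.h>

__global__ void scramble(float *out, unsigned long long seed, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    curandState state;
    /* same seed + sequence number gives a repeatable per-thread stream */
    curand_init(seed, i, 0, &state);
    out[i] = curand_uniform(&state);  /* float in (0, 1] */
}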
[Note: a problem with using the C library rand() function (other than that it does not run in CUDA on the device) is that on Windows, rand() returns at most a 15-bit value (RAND_MAX is 32767), and thus any float created by division by RAND_MAX has only 15 random bits of precision. What's more, on Linux/macOS RAND_MAX is typically 2^31-1, so code that uses it is not numerically portable.]
Why not use the standard C library rand() function and divide the result by RAND_MAX?
#include <stdlib.h>

float randf (void)
{
    return rand() / (float) RAND_MAX;
}
