C long double in golang

I am porting an algorithm from C to Go. And I got a little bit confused. This is the C function:
void gauss_gen_cdf(uint64_t cdf[], long double sigma, int n)
{
int i;
long double s, d, e;
//Calculations ...
for (i = 1; i < n - 1; i++) {
cdf[i] = s;
}
}
And in the for loop the value s is assigned to element i of the array cdf. How is this possible? As far as I know, a long double corresponds to a float64 (in the Go context). So I shouldn't be able to compile the C code, because I am assigning a long double to an array which just contains uint64 elements. But the C code compiles and works fine.
So can someone please explain why this is working?
Thank you very much.
UPDATE:
The original C code of the function can be found here: https://github.com/mjosaarinen/hilabliss/blob/master/distribution.c#L22

The assignment cdf[i] = s performs an implicit conversion to uint64_t. It's hard to tell if this is intended without the calculations you omitted.
In practice, long double as a type has considerable variance across architectures. Whether Go's float64 is an appropriate replacement depends on the architecture you are porting from. For example, on x86, long double is an 80-bit extended precision type, but Windows systems are usually configured in such a way that results are computed with only a 53-bit mantissa, which means that float64 could still be equivalent for your purposes.
EDIT In this particular case, the values computed by the sources appear to be static and independent of the input. I would just use float64 on the Go side and check whether the computed values are identical to those of the C version, when run on an x86 machine under real GNU/Linux (virtualization should be okay), to work around the Windows FPU issues. The choice of x86 is just a guess because it is likely what the original author used. I do not understand the underlying cryptography, so I can't say whether a difference in the computed values impacts the security. (Also note that the C code does not seem to properly seed its PRNG.)

C long double in golang
The title suggests an interest in whether or not Go has an extended precision floating-point type similar to long double in C.
The answer is:
Not as a primitive, see Basic types.
But arbitrary precision is supported by the math/big library.
Why this is working?
long double s = some_calculation();
uint64_t a = s;
It compiles because, unlike Go, C allows certain implicit type conversions. The integer portion of the floating-point value of s will be copied. Presumably the s value has been scaled such that it can be interpreted as a fixed-point value where, based on the linked library source, 0xFFFFFFFFFFFFFFFF (2^64-1) represents the value 1.0. In order to make the most of such assignments, it is worthwhile to use an extended floating-point type with 64 precision bits.
If I had to guess, I would say that the (crypto-related) library is using fixed-point here because they want to ensure deterministic results, see: How can floating point calculations be made deterministic?. And since the extended-precision floating point is only being used for initializing a lookup table, using the (presumably slow) math/big library would likely perform perfectly fine in this context.

Related

can't figure out whether sizeof(long double) in C is 16 bytes or 10 bytes

Although there are some answers on this website, I still can't figure out the meaning of sizeof(long double). Why is the output of printing var3 3.141592653589793115998?
When I try to execute the same code as another person, it runs differently from theirs. Could somebody help me solve this problem?
My testing codes:
float var1 = 3.1415926535897932;
double var2 = 3.1415926535897932;
long double var3 = 3.141592653589793213456;
printf("%zu\n", sizeof(float));
printf("%zu\n", sizeof(double));
printf("%zu\n", sizeof(long double));
printf("%.16f\n", var1);
printf("%.16f\n", var2);
printf("%.21Lf\n", var3);
output of my testing codes:
4
8
16
3.1415927410125732
3.1415926535897931
3.141592653589793115998
Codes are the same with another person, but the output from another person is:
4
8
12
3.1415927410125732
3.1415926535897931
3.141592741012573213359
Could somebody tell me why the output of us are different?
Floating point numbers -inside our computers- are not mathematical real numbers.
They have lots of counter-intuitive properties (e.g. (1.0-x) + x in your C code can differ from 1.0...). For more, read the floating-point-gui.de
Be also aware that a number is not its representation in digits. For example, most of your examples are approximations of the number π (which, intuitively speaking, has an infinite number of digits or bits, since it is a transcendental number, as proven by Ferdinand von Lindemann).
I still can't figure out the meaning of sizeof(long double).
It is the implementation-specific ratio of the number of bytes (or octets) in a long double automatic variable vs the number of bytes in a char automatic variable.
The C11 standard (read n1570 and see this reference) does allow an implementation to have sizeof(long double) being, like sizeof(char), equal to 1. I cannot name such an implementation, but it might be (theoretically) the case on some weird computer architectures (e.g. some DSP).
Could somebody tell me why the output of us are different?
What makes you think they should be equal?
Practically speaking, floating point numbers are often IEEE754. But on IBM mainframes (e.g. z/Series) or on VAXes they are not.
float var1 = 3.1415926535897932;
double var2 = 3.1415926535897932;
Be aware that it could be possible to have a C implementation where (double)var1 != var2 or where var1 != (float)var2 after executing the above instructions.
If you need more precision than what long double achieves on your particular C implementation (e.g. your recent GCC compiler, which could be a cross-compiler), consider using some arbitrary precision arithmetic library such as GMPlib.
I recommend carefully reading the documentation of printf(3), and of every other function that you are using from your C standard library. I also suggest reading the documentation of your C compiler.
You might be interested by static program analysis tools such as Frama-C or the Clang static analyzer. Read also this draft report.
If your C compiler is a recent GCC, compile with all warnings and debug info, so gcc -Wall -Wextra -g and learn how to use the GDB debugger.
Could somebody tell me why the output of us are different?
C allows different compilers/implementations to use different floating point encoding and handle evaluations in slightly different ways.
Precision
The difference in sizeof hints that the two implementations may employ different precision. Yet the difference could also be due to padding; in that case, extra bytes are added to preserve alignment for performance reasons.
A better precision assessment is to print epsilon: the difference between 1.0 and the next larger value of the type.
#include <float.h>
printf("%e %e %Le\n", FLT_EPSILON, DBL_EPSILON, LDBL_EPSILON);
Sample result
1.192093e-07 2.220446e-16 1.084202e-19
FLT_EVAL_METHOD
When this is 0, floating point types evaluate to that type. With other values like 2, floating point expressions evaluate using wider types, and only at the end is the result saved to the target type.
printf("FLT_EVAL_METHOD %d\n", FLT_EVAL_METHOD);
Two of several possible values indicated below:
FLT_EVAL_METHOD
0 evaluate all operations and constants just to the range and precision of the type;
2 evaluate all operations and constants to the range and precision of the long double type.
Notice the constants 3.1415926535897932 and 3.141592653589793213456 are both normally double constants. Neither has an L suffix that would make them long double. Both have the same double value of 3.1415926535897931..., and var2, var3 should get the same value. Yet with FLT_EVAL_METHOD == 2, constants can be evaluated as long double, and that is certainly what happened in "the output from another person" code.
Print FLT_EVAL_METHOD to see that difference.

looking for snprintf()-replacement

I want to convert a float (e.g. f=1.234) to a char-array (e.g. s="1.234"). This is quite easy with snprintf() but for some size and performance-reasons I can't use it (it is an embedded platform where snprintf() is too slow because it uses doubles internally).
So: how can I easily convert a float to a char-array (positive and negative floats, no exponential representation, maximum three digits after the dot)?
Thanks!
PS: to clarify this: the platform comes with a NEON FPU which can do 32 bit float operations in hardware but is slow with 64 bit doubles. The C library for this platform unfortunately does not have a specific NEON/float variant of snprintf, so I need a replacement. Besides that, the complete snprintf/printf machinery increases code size too much.
For many microcontrollers a simplified printf function without float/double support is available. For instance, many platforms ship newlib-nano, and Texas Instruments provides ustdlib.c.
With one of those non-float printf functions you can split the printing into integer-only operations like
float a = -1.2339f;
float b = a + ((a > 0) ? 0.0005f : -0.0005f);
int c = b;
int d = (int)(b * 1000) % 1000;
if (d < 0) d = -d;
printf("%d.%03d\n", c, d);
which outputs
-1.234
Do watch out for overflows of the integer on 8 and 16 bit platforms.
-edit-
Furthermore, as noted in the comments, rounding corner cases will give different answers than printf's implementation.
You might check whether your stdlib provides strfromf, the low-level routine that converts a float into a string, which is normally used by printf and friends. If available, this might be lighter-weight than pulling in the entire stdio machinery (and indeed, that is the reason it was added in ISO/IEC TS 18661, the C extension for IEC 60559 arithmetic).

c: change variable type without casting

I'm changing a uint32_t to a float but without changing the actual bits.
Just to be sure: I don't want to cast it. So float f = (float)i is the exact opposite of what I want to do, because it changes bits.
I'm going to use this to convert my (pseudo) random numbers to float without doing unneeded math.
What I'm currently doing and what is already working is this:
float random_float( uint64_t seed ) {
    // Generate a random number and change the bit format to IEEE
    uint32_t asInt = (random_int( seed ) & 0x7FFFFF) | (0x7E000000>>1);
    // Make it a float
    return *(float*)(void*)&asInt; // <-- pretty ugly and needs a variable
}
The Question: Now I'd like to get rid of the asInt variable, and I'd like to know if there is a better / not so ugly way than getting the address of this variable, casting it twice and dereferencing it again?
You could try a union - as long as you make sure the types have identical sizes in memory:
union convertor {
    int asInt;
    float asFloat;
};
Then you can store your bits through asInt and read them back through asFloat (or the other way around if you want to). I use it a lot when I need to do bitwise operations on one hand and still get a float representation of the number on the other hand
[EDIT]
Like many of the commenters rightfully state, you must take into consideration bit patterns that do not represent ordinary numbers, like NAN, +INF, -INF, +0, -0.
So you seem to want to generate floating point numbers between 0.5 and 1.0 judging from your code.
Assuming that your microcontroller has a standard C library with floating point support, you can do this all standards compliant without actually involving any floating point operations, all you need is the ldexp function that itself doesn't actually do any floating point math.
This would look something like this:
return ldexpf((1 << 23) + random_thing_smaller_than_23_bits(), -24);
The trick here is that we happen to know that IEEE754 binary32 floating point numbers have integer precision between 2^23 and 2^24 (I could be off-by-one here, double check please, I'm translating this from some work I've done on doubles). So the compiler should know how to convert that number to a float trivially. Then ldexp multiplies that number by 2^-24 by just changing the bits in the exponent. No actual floating point operations involved and no undefined behavior, the code is fully portable to any standard C implementation with IEEE754 numbers. Double check the generated code, but a good compiler and c library should not use any floating point instructions here.
If you want to peek at some experiments I've done around generating random floating point numbers you can peek at this github repo. It's all about doubles, but should be trivially translatable to floats.
Reinterpreting the binary representation of an int as a float would cause major problems:
Many bit patterns of a float do not encode ordinary numbers.
Other codes represent special conditions, like NAN, +INF, -INF, +0, -0 (sic!), etc.
Also, if that is a random value, even after filtering out all non-value representations, the result would have a very bad random distribution.
If you are working on an MCU without an FPU, you should rather think about avoiding float entirely. An alternative might be fractional or scaled integers. There are many implementations of algorithms which use float but can easily be converted to fixed-point types with acceptable loss of precision (or even none at all). Some might even yield more precision than float (note that single precision float has only 23 bits of mantissa, while an int32 has 31 bits, + 1 sign bit for either; the same holds for a fractional or fixed-scaled int).
Note that ISO/IEC TR 18037 (the Embedded C technical report) added optional fixed-point support (_Fract, _Accum). You might want to research that.
Edit:
According to your comments, you seem to convert the int to a float in the range 0..<1. For that, you can assemble the float using bit operations on a uint32_t (e.g. the original value). You just need to follow the IEEE format (presuming your toolchain complies with it; see Wikipedia).
The result (still a uint32_t) can then be reinterpreted through a union or pointer as described by others already. Pack that into a system-dependent, well-commented library. Do not forget to check endianness and alignment (likely the same for float and uint32_t, but important for the bit operations).

How can I minimize the code size of this program?

I have some problems with memory. Is it possible to reduce the memory footprint of the compiled program in this function?
It makes some calculations with time variables {hh,mm,ss.0} and returns a time (in millis) that depends on the current progress (_SHOOT_COUNT)
unsigned long hour_koef = 3600000L;
unsigned long min_koef = 60000;

unsigned long timeToMillis(int* time)
{
    return (hour_koef*time[0] + min_koef*time[1] + 1000*time[2] + 100*time[3]);
}

float Func1(float x)
{
    return (x*x)/(x*x + (1-x)*(1-x));
}

float EaseFunction(byte percent, byte type)
{
    if (type == 0)
        return Func1(float(percent)/100);
}

unsigned long DelayEasyControl()
{
    long dd = timeToMillis(D1);
    long dINfrom = timeToMillis(Din);
    long dOUTto = timeToMillis(Dout);
    if (easyINmode == 0 && easyOUTmode == 0) return dd;
    if (easyINmode == 1 && easyOUTmode == 0)
    {
        if (_SHOOT_COUNT < duration) return (dINfrom + (dd-dINfrom)*EaseFunction(_SHOOT_COUNT*100/duration, 0));
        else return dd;
    }
    if (easyOUTmode == 1)
    {
        if (_SHOOT_COUNT >= _SHOOT_activation && _SHOOT_activation != -1)
        {
            if ((_SHOOT_COUNT - _SHOOT_activation) < current_settings.delay_easyOUT_duration) return (dOUTto - (dOUTto-dd)*(1 - EaseFunction((_SHOOT_COUNT-_SHOOT_activation)*100/duration, 0)));
            else return dOUTto;
        }
        else
        {
            if (easyINmode == 0) return dd;
            else if (_SHOOT_COUNT < duration) return (dINfrom + (dd-dINfrom)*EaseFunction(_SHOOT_COUNT*90/duration, 0));
            else return dd;
        }
    }
}
You mention that it's code size you want to optimize, and that you're doing this on an Arduino clone (based on the ATmega32U4).
Those controllers don't have hardware support for floating-point, so it's all going to be emulated in software which takes up a lot of code.
Try re-writing it to do fixed-point arithmetic, you will save a lot of code space that way.
You might see minor gains by optimizing the other data types, i.e. uint16_t instead of long might suffice for some of the values, and marking functions as inline can save the instructions needed to do the jump. The compiler might already be inlining, of course.
Most compilers have an option for optimizing for size; try it first. Then you may try a non-standard 24-bit float type available in some compilers for 8-bit MCUs, like NXP's MRK III or MPLAB XC8:
By default, the XC8 compiler uses a 24-bit floating-point format that is a truncated form of the 32-bit format and that has eight bits of exponent but only 16 bits of signed mantissa.
Understanding Floating-Point Values
That'll reduce the floating-point math library size a lot without any code changes, but it may still be too big for your MCU. In that case you'll need to rewrite the program. The most effective solution is to switch to fixed-point (a.k.a. scaled integers) like #unwind said, if you don't need very wide ranges. In fact that's a lot faster and takes much less ROM than a software floating-point solution. Microchip's document above also suggests that solution:
The larger IEEE formats allow precise numbers, covering a large range of values to be handled. However, these formats require more data memory to store values of this type and the library routines that process these values are very large and slow. Floating-point calculations are always much slower than integer calculations and should be avoided if at all possible, especially if you are using an 8-bit device. This page indicates one alternative you might consider.
Also, you can store duplicated subexpressions like x*x and 1-x in a variable instead of computing them twice, as in (x*x)/(x*x + (1-x)*(1-x)); this helps a little if the compiler doesn't eliminate the common subexpressions itself. The same goes for easyINmode==0, easyOUTmode==1...
Some other things:
ALL_CAPS should be used for macros and constants only
Identifiers beginning with an underscore followed by a capital letter are reserved for the implementation. C may also use them for future features like _Bool or _Atomic. See What are the rules about using an underscore in a C++ identifier? (Arduino is probably C++)
Use functions instead of macros for things that are reused many times, because the inline expansion will eat some space each time it's used

Integer type with floating point semantics for C or D

I'm looking for an existing implementation for C or D, or advice in implementing, signed and/or unsigned integer types with floating point semantics.
That is to say, an integer type that behaves as floating point types do when doing arithmetic: Overflow produces infinity (-infinity for signed underflow) rather than wrapping around or having undefined behavior, undefined operations produce NaN, etc.
In essence, a version of floating point where the distribution of representable numbers falls evenly on the number line, instead of clustering around 0.
In addition, all operations should be deterministic; any given two's complement 32-bit architecture should produce the exact same result for the same computation, regardless of its implementation (whereas floating point may, and often will, produce slightly differing results).
Finally, performance is a concern, which has me worried about potential "bignum" (arbitrary-precision) solutions.
See also: Fixed-point and saturation arithmetic.
I do not know of any existing implementations of this.
But I would imagine implementing it would be a matter of (in D):
enum CheckedIntState : ubyte
{
    ok,
    overflow,
    underflow,
    nan,
}

struct CheckedInt(T)
    if (isIntegral!T)
{
    private T _value;
    private CheckedIntState _state;

    // Constructors, getters, conversion helper methods, etc.

    // And a bunch of operator overloads that check the
    // result on every operation and yield a CheckedInt!T
    // with an appropriate state.

    // You'll also want to overload opEquals and opCmp and
    // make them check the state of the operands so that
    // NaNs compare equal and so on.
}
Saturating arithmetic does what you want except for the part where undefined operations produce NaN; this is going to turn out to be problematic, because most saturating implementations use the full number range, and so there are no values left over to reserve for NaN. Thus, you probably can't easily build this on the back of saturating hardware instructions unless you have an additional "is this value NaN" field, which is rather wasteful.
Assuming that you're wedded to the idea of NaN values, all of the edge case detection will probably need to happen in software. For most integer operations, this is pretty straightforward, especially if you have a wider type available (let's assume long long is strictly larger than whatever integer type underlies myType):
myType add(myType x, myType y) {
    if (x == positiveInfinity && y == negativeInfinity ||
        x == negativeInfinity && y == positiveInfinity)
        return notANumber;
    long long wideResult = (long long)x + y; // widen first, or the add itself can overflow
    if (wideResult >= positiveInfinity) return positiveInfinity;
    if (wideResult <= negativeInfinity) return negativeInfinity;
    return (myType)wideResult;
}
One solution might be to implement multiple-precision arithmetic with abstract data types. The book C Interfaces and Implementations by David Hanson has a chapter (interface and implementation) of MP arithmetic.
Doing calculations using scaled integers is also a possibility. You might be able to use his arbitrary-precision arithmetic, although I believe this implementation can't overflow. You could run out of memory, but that's a different problem.
In either case, you might need to tweak the code to return exactly what you want on overflow and such.
Source code (MIT license)
That page also has a link to buy the book from amazon.com.
Half of the requirements are satisfied by saturating arithmetic, which is implemented in e.g. ARM instructions, MMX and SSE.
As also pointed out by Stephen Canon, one needs additional elements to check for overflow / NaN. Some instruction sets (Atmel at least) have a sticky flag to test for overflows (it could be used to differentiate inf from max_int). And perhaps "Q" + 0 could mark NaN.
