Displaying the FPU registers in LLDB in floating point - lldb

I’m trying to display the contents of a desired FPU register in floating point. It’s very simple in GDB:
p $st0
I know that register is called stmm0 in LLDB, but I can only get it to display the ten bytes in hexadecimal. I haven't been able to figure out how to display (cast) it as floating point the way GDB does. Surely there must be a way. What am I missing?
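A hedged sketch of what should work in a reasonably recent LLDB: the register read command accepts a --format (short form -f) flag, so the x87 register can be printed as a floating-point value directly rather than as raw hex bytes:

```
(lldb) register read --format float stmm0
```

If the long format name isn't accepted in your build, the short form register read -f f stmm0 is equivalent.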

Related

Passage of Floating point arguments in ARM processors

I would like an example of the convention used when writing an assembly subroutine called from C. There are no resources online that explain where integer and floating-point arguments are stored. I have floating-point hardware on the ARM and am able to pass floating-point numbers in floating-point registers. But how do I pass integers in integer registers and floating-point values in floating-point registers at the same time?

Floating type value in ATMega8

My microcontroller doesn't process floating-point values, so how can I do operations on floating-point values using int?
For example, I have a value stored in a register, a = 5.
Now I want to multiply it by 0.65 and store the result in another register c.
How do I do it?
Using int discards the fractional part, while using float just displays a "?".
You are mixing multiple problems:
First: even if your target controller does not contain a floating-point unit (FPU), the calculations can be done with software libraries.
Use of these libraries usually happens automatically, and you can do calculations in float.
These libraries are relatively large in code size and slow in execution; even if you only add simple float arithmetic, you will notice a big increase in code size.
The second problem is output via the printf routines. Since floating-point support is normally not needed, it is stripped out to save code size. You can explicitly activate it by adding the libraries libprintf_flt.a and libm.a and using the linker options -Wl,-u,vfprintf.
Alternatively, you can use an ftoa function.
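If the float libraries are too heavy, the multiplication from the question can also be done purely in integers by scaling; a minimal sketch, assuming hundredths are enough precision (the function name is made up):

```c
#include <stdint.h>

/* Keep values in hundredths, so 0.65 becomes the integer 65.
   a = 5  ->  5 * 65 = 325, i.e. the result 3.25 stored as 325. */
uint16_t mul_by_0_65(uint8_t a)
{
    return (uint16_t)a * 65;
}
```

The integer part is then c / 100 (3) and the fraction c % 100 (25), so nothing is lost, and no float library or printf float support is needed.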

How to avoid FPU when given float numbers?

Well, this is not at all an optimization question.
I am writing a (for now) simple Linux kernel module in which I need to find the average of some positions. These positions are stored as floating point (i.e. float) variables. (I am the author of the whole thing, so I can change that, but I'd rather keep the precision of float and not get involved in that if I can avoid it.)
Now, these position values are stored (or at least used to be) in the kernel simply for storage. One user application writes these data (through shared memory (I am using RTAI, so yes, I have shared memory between kernel and user space)) and others read from it. I assume reads and writes of float variables would not use the FPU, so this is safe.
By safe, I mean avoiding FPU in the kernel, not to mention some systems may not even have an FPU. I am not going to use kernel_fpu_begin/end, as that likely breaks the real-time-ness of my tasks.
Now in my kernel module, I really don't need much precision (since the positions are averaged anyway), but I would need it up to say 0.001. My question is, how can I portably turn a floating point number to an integer (1000 times the original number) without using the FPU?
I thought about manually extracting the number from the float's bit-pattern, but I'm not sure if it's a good idea as I am not sure how endian-ness affects it, or even if floating points in all architectures are standard.
If you want to tell gcc to use a software floating point library there's apparently a switch for that, albeit perhaps not turnkey in the standard environment:
Using software floating point on x86 linux
In fact, this article suggests that the Linux kernel and its modules are already compiled with -msoft-float:
http://www.linuxsmiths.com/blog/?p=253
That said, @PaulR's suggestion seems most sensible. And if you offer an API which does whatever conversions you like, then I don't see why it's any uglier than anything else.
The SoftFloat software package has the function float32_to_int32 that does exactly what you want (it implements IEEE 754 in software).
In the end it will be useful to have some sort of floating point support in a kernel anyway (be it hardware or software), so including this in your project would most likely be a wise decision. It's not too big either.
Really, I think you should just change your module's API to use data that's already in integer format, if possible. Having floating point types in a kernel-user interface is just a bad idea when you're not allowed to use floating point in kernelspace.
With that said, if you're using single-precision float, it's essentially ALWAYS going to be IEEE 754 single precision, and the endianness should match the integer endianness. As far as I know this is true for all archs Linux supports. With that in mind, just treat them as unsigned 32-bit integers and extract the bits to scale them. I would scale by 1024 rather than 1000 if possible; doing that is really easy. Just start with the mantissa bits (bits 0-22), "or" on bit 23, then right-shift if the exponent (after subtracting the bias of 127) is less than 23 and left-shift if it's greater than 23. You'll need to handle the cases where the right-shift amount is 32 or greater (which C doesn't allow; you have to just special-case the zero result) or where the left shift is large enough to overflow (in which case you'll probably want to clamp the output).
If you happen to know your values won't exceed a particular range, of course, you might be able to eliminate some of these checks. In fact, if your values never exceed 1 and you can pick the scaling, you could pick it to be 2^23 and then you could just use ((float_bits & 0x7fffff)|0x800000) directly as the value when the exponent is zero, and otherwise right-shift.
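A sketch of the bit-extraction approach described above, assuming IEEE 754 single precision with endianness matching the integers, and scaling by 1024 as suggested (the function name and the clamping policy are my own):

```c
#include <stdint.h>
#include <limits.h>

/* Convert the raw bit pattern of a non-negative, finite float to
   1024 * its value, using only integer operations (no FPU). */
int32_t float_bits_to_scaled_1024(uint32_t bits)
{
    int32_t exp = (int32_t)((bits >> 23) & 0xffu) - 127; /* remove bias */
    uint32_t mant = (bits & 0x7fffffu) | 0x800000u;      /* implicit 1 */

    /* value = mant * 2^(exp - 23), so scaled value = mant * 2^(exp - 13) */
    int shift = exp - 13;
    if (shift >= 8)
        return INT32_MAX;   /* mant < 2^24, so this would overflow: clamp */
    if (shift <= -24)
        return 0;           /* rounds to zero (also catches 0.0f's pattern) */
    return shift >= 0 ? (int32_t)(mant << shift)
                      : (int32_t)(mant >> -shift);
}
```

In the kernel the raw pattern would come straight from the shared-memory word; for a quick userspace check you can memcpy a float into a uint32_t first. For example, 1.0f (bit pattern 0x3f800000) comes out as 1024, and 2.5f (0x40200000) as 2560.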
You can use rational numbers instead of floats. The operations (multiplication, addition) can be implemented without loss in accuracy too.
If you really only need 1/1000 precision, you can just store x*1000 as a long integer.

How floats are computed on a machine without an FPU

The C language has a float data type. Some machines have a floating-point processor that carries out all the floating-point computations. My question is: could there be machines without a floating-point processor? How do such machines use floating point?
Many small controllers do not have floating point units. In that case, there is a floating point software library.
In the mid-1980s, we considered ourselves blessed if our system had an 8087, the FPU for the 8086 and 8088. Unfortunately our software had to work correctly if an 8087 was present or not. That meant trapping and emulating 8087 instructions if it was missing.
The C standard allows floating point.
It is the compiler's responsibility to translate it to the specific hardware architecture.
If the hardware instruction set supports floating point [and most modern machines do], then the compiler will most likely use it.
Otherwise, it has to generate native code that simulates the behavior of floating point on its own. How is this done? You can read more about floating point on the Wikipedia page and in this more detailed article about floating-point arithmetic.
Up to and including the 486SX, x86 CPUs had no built-in FPU (the 486DX was the first to integrate one; the 486SX was a variant with the FPU disabled).
As for microcontrollers, most of them do not have an FPU.
You'll find that nearly all modern desktop computers and servers include a FPU.
High-end mobile devices have begun to include FPUs, but not all of them have one, and below the high end you won't find many devices with an FPU.
In many applications, it's possible to do arithmetic on fractional numbers using "fixed point arithmetic"--that doesn't require an FPU.
In other cases, you can do the same math that an FPU does, but it takes longer when you have to build it yourself out of other arithmetic primitives rather than having a complex chip take care of it for you.
My favorite example of floating point simulation on fixed point processors is provided in Donald Knuth's MMIXware, a complete processor simulation in very portable C.
Emulating floating point is a bit slow, but theoretically fairly simple. It's just about like most people learned in high school or so: you have a number with an exponent. To add or subtract, you have to adjust the numbers so they have the same exponents, then add/subtract the mantissas. To multiply or divide, you multiply/divide the mantissas and add/subtract the exponents.
When you've finished that, you normalize the result. In high school we used decimal, and normally required exactly one digit before the decimal point, so (for example) 10001 would be written as 1.0001 × 10^4. On the computer, the details are a bit different (e.g., we're dealing in binary instead of decimal) but the basic idea is pretty much the same.
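The align/add/normalize procedure described above can be sketched with a toy (mantissa, exponent) pair; this is not real IEEE 754, just the high-school algorithm in integer code (all names are invented):

```c
#include <stdint.h>

/* Toy soft-float: value = mant * 2^exp, with mant kept below 2^24. */
typedef struct { uint32_t mant; int exp; } soft_float;

soft_float soft_add(soft_float a, soft_float b)
{
    /* Step 1: align -- shift the smaller-exponent operand right until
       both exponents match (losing low bits, as real FP rounding does). */
    while (a.exp < b.exp) { a.mant >>= 1; a.exp++; }
    while (b.exp < a.exp) { b.mant >>= 1; b.exp++; }

    /* Step 2: add the mantissas. */
    soft_float r = { a.mant + b.mant, a.exp };

    /* Step 3: normalize the result back below 2^24. */
    while (r.mant >= (1u << 24)) { r.mant >>= 1; r.exp++; }
    return r;
}
```

Adding 1.0 to itself, encoded as {1 << 23, -23}, yields {1 << 23, -22}, i.e. 2.0: the carry out of the mantissa is absorbed by the exponent.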

A floating point array in C

I am developing a program where I have to load floating-point values stored one per line in a text file. I loaded each of these values into an array of floats using fscanf(). However, I found that the floating-point values were stored differently: for example, 407.18 was stored as 407.179993, and 414.35 as 414.350006. Now I am stuck, because it is absolutely important that the numbers be stored in the form they were in the file, but here they seem different, even though essentially it's the same value. How do I get the numbers to store in the original form?
If it's absolutely important that the numbers be stored in the form they were in the file, don't use floating-point values. Not all fractional base-10 values can be represented exactly in binary.
For details, you should consult the article "What Every Computer Scientist Should Know About Floating-Point Arithmetic" [PDF link].
You should store the numbers in string form or as scaled integers.
What you see is correct since floating point cannot exactly represent many real numbers. This is a must-read: What Every Computer Scientist Should Know About Floating-Point Arithmetic
The C++ float and double types are represented as IEEE floating-point numbers. These are binary floating-point numbers, not decimals. They don't have infinite precision, and even relatively 'simple' decimal values will change when converted to binary floating point and back.
Languages that focus on money (COBOL and PL/I, e.g.) have decimal data types that use slower representations that don't have this problem. There are libraries for this purpose in C or C++, or you can use scaled integers if you don't need too much range.
sfactor,
A bit of clarification: you state "...absolutely important that the numbers be stored in the form they were in the file". What 'form' is this? I'm assuming a display form. Here's the deal: if you store 407.18 in the file but it is displayed as 407.179993, this is totally normal and expected. When you construct your sprintf or whatever formatted print you are using, you must instruct it to limit the precision after the decimal point. This should do the trick:
#include <stdio.h>

int main(void)
{
    printf("%.2f\n", 407.179993);
    return 0;
}
You might want to take a look at an existing library that implements floating point with controlled rounding. I have used MPFR. It is fast and open source. If you are working with money then a different library would be better.
There are a lot of different ways to store numbers, each with their own strengths and weaknesses.
In your case floating-point numbers are not the best choice, because they don't represent most real numbers exactly.
Scaled integers are a case of the common design idiom: separate the representation from the presentation.
Decide on how many digits you want after the decimal point, e.g. 2 digits as in 123.45. You then internally represent the number as the integer 12345. Whenever you display the value to the user, you insert a decimal point at the relevant spot.
Short InformIT article on Scaled Integers by Danny Kalev
Before going overboard on scaled integers, inspect the domain of your problem closely to confirm that the "slack" in floating-point numbers is significant enough to warrant using scaled integers.
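A minimal sketch of that representation/presentation split, assuming non-negative values stored in hundredths (the function name is made up):

```c
#include <stdio.h>

/* Store 123.45 internally as the integer 12345; insert the decimal
   point only when formatting for display. Assumes v >= 0. */
void format_hundredths(long v, char *out, size_t n)
{
    snprintf(out, n, "%ld.%02ld", v / 100, v % 100);
}
```

format_hundredths(12345, buf, sizeof buf) produces the string "123.45", while all arithmetic on the value stays in exact integers.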
