Passing floating point arguments on ARM processors - C

I would like an example of the convention used when writing an assembly subroutine called from C. There are no resources online that explain where integer and floating point arguments are stored. I have floating point hardware on the ARM and am able to pass floating point numbers in floating point registers. But how do I pass integers in integer registers and floating point values in floating point registers at the same time?
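For illustration, assuming the hard-float variant of the AAPCS (e.g. gcc with -mfloat-abi=hard -mfpu=vfp), integer and floating point arguments are drawn from two separate register files. The function below is made up purely to show the mapping:

/* Hypothetical function compiled under the hard-float AAPCS.
   n -> r0, m -> r1 : integer arguments fill core registers r0-r3
   x -> s0, y -> s1 : float arguments fill VFP registers s0-s15
   The float result is returned in s0. */
float scale(int n, float x, int m, float y)
{
    return (float)(n + m) * x * y;
}

With the soft-float ABI (-mfloat-abi=soft or softfp), everything would instead be passed in the core registers r0-r3 and on the stack.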

Related

Converting 32-bit number to 16 bits or less

On my mbed LPC1768 I have an ADC on a pin which, when polled, returns a 16-bit short normalised to a floating point value between 0 and 1. Documentation here.
Because it converts the value to a floating point number, does that mean it is 32 bits? The number I have is given to six decimal places. Data types here.
I'm running Autocorrelation and I want to reduce the time it takes to complete the analysis.
Is it correct that the floating point numbers are 32 bits long, and if so, is it correct that multiplying two 32-bit floating point numbers will take a lot longer than multiplying two 16-bit short (non-decimal) values together?
I am working with C to program the mbed.
Cheers.
I should be able to comment on this quite accurately. I used to do DSP processing work where we would "integerize" code, which effectively meant taking a signal/audio/video algorithm and replacing all the floating point logic with fixed point arithmetic (i.e. Q_mn notation, etc.).
On most modern systems, you'll usually get better performance using integer arithmetic than floating point arithmetic, at the expense of more complicated code that you have to write.
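For a flavor of what that "integerized" code looks like, here is a minimal Q15 multiply sketch (assuming signed 16-bit samples where a value x stands for x / 32768):

#include <stdint.h>

typedef int16_t q15_t;   /* Q15: 1 sign bit, 15 fractional bits */

q15_t q15_mul(q15_t a, q15_t b)
{
    int32_t p = (int32_t)a * b;   /* the product has 30 fractional bits */
    return (q15_t)(p >> 15);      /* rescale back to 15 (the -1 * -1
                                     overflow case is ignored here) */
}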
The chip you are using (a Cortex-M3) doesn't have a dedicated hardware FPU: floating point operations can only be emulated in software, so they are going to be expensive (take a lot of time).
In your case, you could just read the 16-bit value via read_u16(), shift the value right by 4 bits, and you're done. If you're working with audio data, you might consider looking into companding algorithms (a-law, u-law), which will give better subjective performance than simply chopping off the 4 LSBs to get a 12-bit number from a 16-bit number.
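As a sketch, with the raw value below standing in for what read_u16() would return:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t raw = 0xABCDu;        /* stand-in for adc.read_u16() */
    uint16_t sample12 = raw >> 4;  /* drop the 4 LSBs: a 12-bit value */
    printf("%u\n", (unsigned)sample12);   /* prints 2748 (0x0ABC) */
    return 0;
}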
Yes, a float on that system is 32 bits, and is likely represented in IEEE 754 format. Multiplying a pair of 32-bit values versus a pair of 16-bit values may well take the same amount of time, depending on the chip in use and the presence of an FPU and ALU. On your chip, multiplying two floats will be horrendously expensive in terms of time. Also, if you multiply two 32-bit integers, they could potentially overflow, so that is one reason to go with floating point logic if you don't want to implement a fixed-point algorithm.
It is correct to assume that multiplying two 32-bit floating point numbers will take longer than multiplying two 16-bit short values if no special hardware (a floating point unit) is present in the processor.

How was floating point conversion handled before the invention of the FPU and SSE?

I am trying to understand how floating point conversion is handled at a low level. Based on my understanding, it is implemented in hardware. For example, SSE provides the instruction cvttss2si, which converts a float to an int.
But my question is: was floating point conversion always handled this way? Before the invention of the FPU and SSE, was the conversion done manually using assembly code?
It depends on the processor, and there have been a huge number of different processors over the years.
FPU stands for "floating-point unit". It's a more or less generic term that can refer to a floating-point hardware unit for any computer system. Some systems might have floating-point operations built into the CPU. Others might have a separate chip. Yet others might not have hardware floating-point support at all. If you specify a floating-point conversion in your code, the compiler will generate whatever CPU instructions are needed to perform the necessary computation. On some systems, that might be a call to a subroutine that does whatever bit manipulations are needed.
SSE stands for "Streaming SIMD Extensions", and is specific to the x86 family of CPUs. For non-x86 CPUs, there's no "before" or "after" SSE; SSE simply doesn't apply.
The conversion from floating-point to integer is considered a basic enough operation that the 387 instruction set already had such an instruction, FIST, although it was not directly useful for compiling the (int)f construct of C programs, as that instruction used the current rounding mode.
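To see why the rounding mode gets in the way, recall that C requires the cast to truncate toward zero:

#include <stdio.h>

int main(void)
{
    /* C requires (int)f to truncate toward zero; FIST instead honored
       the FPU's current rounding mode (round-to-nearest by default). */
    printf("%d %d\n", (int)2.7f, (int)-2.7f);   /* prints: 2 -2 */
    return 0;
}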
Some RISC instruction sets have always considered a dedicated conversion instruction from floating-point to integer an unnecessary luxury, reasoning that it can be done with several instructions accessing the IEEE 754 floating-point representation. One basic scheme might look like this blog post, although that post is about rounding a float to the float representing the nearest integer.
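A minimal sketch of such a scheme, assuming IEEE 754 binary32 and a value whose truncation fits in a 32-bit int (no rounding, NaN, or overflow handling):

#include <stdint.h>
#include <string.h>

int32_t float_to_int(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);                    /* reinterpret the bits */

    int32_t e = (int32_t)((bits >> 23) & 0xFF) - 127;  /* unbiased exponent */
    if (e < 0)
        return 0;                                      /* |f| < 1 truncates to 0 */

    /* Restore the implicit leading 1, then shift the 24-bit significand
       so that no fractional bits remain. */
    int32_t m = (int32_t)(bits & 0x7FFFFF) | 0x800000;
    int32_t r = (e <= 23) ? (m >> (23 - e)) : (m << (e - 23));

    return (bits & 0x80000000u) ? -r : r;              /* reapply the sign */
}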
Prior to the standardization of IEEE 754 arithmetic, there were many competing vendor-specific ways of doing floating-point arithmetic. These had different ranges and precisions, and different behavior with respect to overflow, underflow, signed zeroes, and undefined results such as 0/0 or sqrt(-1).
However, you can divide floating point implementations into two basic groups: hardware and software. In hardware, you would typically see an opcode which performs the conversion, although coprocessor FPUs can complicate things. In software, the conversion would be done by a function.
Today, there are still soft FPUs around, mostly on embedded systems. Not too long ago, this was common for mobile devices, but soft FPUs are still the norm on smaller systems.
Indeed, floating point operations are a challenge for hardware engineers, as they require a lot of hardware (leading to a higher cost for the final product) and consume a lot of power. There are architectures that do not contain a floating point unit. There are also architectures that do not provide instructions even for basic operations like integer division; classic ARM is an example, where you have to implement division in software. The floating point unit also comes as an optional coprocessor in that architecture. This is worth keeping in mind, considering that ARM is the main architecture used in embedded systems.
IEEE 754 (the floating point standard used by most applications today) is not the only way of representing real numbers. You can also represent them in a fixed point format. For example, on a 32-bit machine you can assume there is a binary point between bits 15 and 16 and perform operations with that in mind. This is a simple way of representing fractional numbers and it can be handled in software easily.
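A minimal sketch of that 16.16 layout:

#include <stdint.h>

/* Q16.16: a 32-bit integer with an implied binary point between
   bits 15 and 16, so the value x stands for x / 65536. */
typedef int32_t q16_16;

q16_16 q_from_int(int x)         { return (q16_16)x * 65536; }  /* |x| < 32768 */
q16_16 q_add(q16_16 a, q16_16 b) { return a + b; }   /* a plain integer add */
q16_16 q_mul(q16_16 a, q16_16 b) { return (q16_16)(((int64_t)a * b) >> 16); }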
It depends on the implementation of the compiler. You can implement floating point math in just about any language (an example in C: http://www.jhauser.us/arithmetic/SoftFloat.html), so the compiler's runtime library will usually include a software implementation of things like floating point math (or the target hardware may have always supported native instructions for this; again, it depends on the hardware), and instructions that target the FPU or use SSE are offered as an optimization.
"Before floating point units" doesn't really apply, since some of the earliest computers, built back in the 1940s, supported floating point numbers: wiki - first electro-mechanical computers.
On processors without floating point hardware, floating point operations are implemented in software, or on some computers in microcode as opposed to being fully implemented in hardware: wiki - microcode. The operations can also be handled by separate hardware components such as the Intel x87 series: wiki - x87.
But my question is: was the floating point conversion always handled this way?
No. There's no x87 or SSE on architectures other than x86, so there's no cvttss2si either.
Everything you can do with software, you can also do in hardware and vice versa.
The same goes for float conversion. If you don't have hardware support, just do some bit hacking. There's nothing low-level here, so you can do it easily in C or any other language. There are already a lot of solutions on SO:
Converting Int to Float/Float to Int using Bitwise
Casting float to int (bitwise) in C
Converting float to an int (float2int) using only bitwise manipulation
...
Yes. The exponent was changed to 0 by shifting the mantissa, denormalizing the number. If the result was too large for an int, an exception was generated. Otherwise the denormalized number (minus the fractional part and optionally rounded) is the integer equivalent.

Does the ALU read and write floating point numbers?

I know that an FPU (floating point unit) is required for floating point operations and that the ALU can only perform integer arithmetic. So I am using fixed point arithmetic.
These are the steps I am following:
Read the floating point number
Convert it into fixed point
Do all operations using fixed point arithmetic
Convert the result into floating point
Write the output
My question is: if there is no FPU present in the system, how does it read and write floating point numbers as input and output?
Does the ALU read and write floating point numbers? If yes, how?
No, the ALU can neither read nor write floating point numbers as floating point numbers; only the FPU can. From the ALU's point of view, an FP number is just an arbitrary series of bits.
The FPU is present today for performance reasons; you have a dedicated piece of silicon on your CPU to perform FP operations.
Since floating point numbers are base-two numbers with a mantissa and an exponent, you can always perform floating point operations using the ALU. This is, again, slower than using a hardware FPU, but it gets the job done.
For example, there is FLIP, a floating point library implemented in C that performs floating point operations using just integer arithmetic; that is, the ALU.
FLIP is a C library that provides a software support for binary32 floating-point arithmetic on integer processors. This library is particularly targeted to VLIW or DSP processors (that is, embedded systems), and has been validated on VLIW integer processors like those of the ST200 family from STMicroelectronics.

This library provides software implementation for the five basic arithmetic operations (addition, subtraction, multiplication, division, and square root) with subnormal numbers support, and for the four rounding-direction attributes (RoundTiesToEven, RoundTowardPositive, RoundTowardNegative, RoundTowardZero) required by the IEEE 754-2008 standard.
The GCC compiler also contains a software emulation layer for floating point operations:
The software floating point library is used on machines which do not have hardware support for floating point. It is also used whenever -msoft-float is used to disable generation of floating point instructions. (Not all targets support this switch.)
With an ALU you can only use integer or fixed point arithmetic. Otherwise, you have to emulate floating point. The emulation can be done either at the compiler level (see soft-float) or at the application level.

How floats are computed on a machine without an FPU

The C language has a data type float. Some machines have a floating point processor that carries out all floating point computations. My question is: could there be machines without a floating point processor? How do such machines handle floating point?
Many small controllers do not have floating point units. In that case, there is a floating point software library.
In the mid-1980s, we considered ourselves blessed if our system had an 8087, the FPU for the 8086 and 8088. Unfortunately, our software had to work correctly whether an 8087 was present or not. That meant trapping and emulating 8087 instructions when one was missing.
The C standard allows floating point types.
It is the compiler's responsibility to translate them to the specific hardware architecture.
If the hardware instruction set supports floating point (and most modern machines do), then the compiler will most likely use it.
Otherwise, it will have to generate native code that simulates the behavior of floating point on its own. How is that done? You can read more about floating point in the Wikipedia page and in this more detailed article about floating point arithmetic.
Up to and including the 486SX, x86 CPUs shipped without a built-in FPU (the 486DX was the first to integrate one).
As for microcontrollers, most of them do not have an FPU.
You'll find that nearly all modern desktop computers and servers include an FPU.
High-end mobile devices have begun to include FPUs, but not all of them have one, and outside the high end you won't find many devices with FPUs.
In many applications, it's possible to do arithmetic on fractional numbers using "fixed point arithmetic", which doesn't require an FPU.
In other cases, you can do the same math that an FPU does, but it takes longer when you have to build it yourself out of other arithmetic primitives rather than having a complex chip take care of it for you.
My favorite example of floating point simulation on fixed point processors is provided in Donald Knuth's MMIXware, a complete processor simulation in very portable C.
Emulating floating point is a bit slow, but theoretically fairly simple. It works much like what most people learned in high school: you have a number with an exponent. To add or subtract, you adjust the numbers so they have the same exponent, then add/subtract the mantissas. To multiply or divide, you multiply/divide the mantissas and add/subtract the exponents.
When you've finished that, you normalize the result. In high school we used decimal and normally required exactly one digit before the decimal point, so (for example) 10001 would be written as 1.0001 x 10^4. On the computer the details are a bit different (e.g., we're dealing in binary instead of decimal), but the basic idea is pretty much the same.
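As an illustration of the multiply case, here is a toy binary32 multiply working on raw bit patterns (a sketch only: no rounding, overflow, NaN, or subnormal handling):

#include <stdint.h>

uint32_t soft_fmul(uint32_t a, uint32_t b)   /* raw IEEE 754 bit patterns */
{
    uint32_t sign = (a ^ b) & 0x80000000u;              /* combine the signs */
    int32_t  e = (int32_t)((a >> 23) & 0xFF)
               + (int32_t)((b >> 23) & 0xFF) - 127;     /* add the exponents */

    /* Restore the implicit leading 1 on both 24-bit significands. */
    uint64_t m = (uint64_t)((a & 0x7FFFFFu) | 0x800000u)
               * (uint64_t)((b & 0x7FFFFFu) | 0x800000u);

    /* The product of two values in [1,2) lies in [1,4): renormalize. */
    if (m & (1ull << 47)) { m >>= 1; e++; }

    return sign | ((uint32_t)e << 23) | ((uint32_t)(m >> 23) & 0x7FFFFFu);
}

Feeding it the bit patterns of 2.0f and 3.0f (0x40000000 and 0x40400000) returns 0x40C00000, the pattern for 6.0f.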

A floating point array in C

I am developing a program where I have to load floating point values stored one per line in a text file. I loaded each of them into an array of floats using fscanf(). However, I found that the floating point values were stored differently: for example, 407.18 was stored as 407.179993, and 414.35 as 414.350006. Now I am stuck, because it is absolutely important that the numbers be stored in the form they had in the file, but here they seem to differ even though they are essentially the same. How do I get the numbers to be stored in their original form?
If it's absolutely important that the numbers be stored in the form they were in the file, don't use floating-point values. Not all fractional base-10 values can be represented exactly in binary; 0.1, for instance, has an infinitely repeating binary expansion.
For details, you should consult the article "What Every Computer Scientist Should Know About Floating-Point Arithmetic" [PDF link].
You should store the numbers in string form or as scaled integers.
What you see is correct since floating point cannot exactly represent many real numbers. This is a must-read: What Every Computer Scientist Should Know About Floating-Point Arithmetic
The C and C++ float and double types are represented as IEEE floating point numbers. These are binary floating point numbers, not decimals. They don't have infinite precision, and even relatively 'simple' decimal values change when converted to binary floating point and back.
Languages that focus on money (e.g. COBOL and PL/I) have decimal data types that use slower representations which don't have this problem. There are libraries for this purpose in C and C++, or you can use scaled integers if you don't need too much range.
sfactor,
A bit of clarification: you state "...absolutely important that the numbers be stored in the form they were in the file" - what 'form' is this? I'm assuming a display form. Here's the deal: if you store 407.18 in the file but it is displayed as 407.179993, that is totally normal and expected. When you construct your sprintf or whatever formatted print you are using, you must instruct it to limit the precision after the decimal point. This should do the trick:
#include <stdio.h>

int main(void)
{
    printf("%.2f\n", 407.179993);   /* prints 407.18 */
    return 0;
}
You might want to take a look at an existing library that implements floating point with controlled rounding. I have used MPFR. It is fast and open source. If you are working with money then a different library would be better.
There are a lot of different ways to store numbers, each with their own strengths and weaknesses.
In your case, floating point numbers are not the best choice because they don't represent most decimal fractions exactly.
Scaled integers are a case of the common design idiom: separate the representation from the presentation.
Decide how many digits you want after the decimal point, e.g. 2 digits as in 123.45. You should then internally represent the number as the integer 12345. Whenever you display the value to the user, insert a decimal point at the relevant spot.
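A minimal sketch of that idiom:

#include <stdio.h>

int main(void)
{
    /* Store 123.45 as the integer 12345 (two implied decimal places). */
    long cents = 12345;
    cents += 55;                                       /* add 0.55 exactly */
    printf("%ld.%02ld\n", cents / 100, cents % 100);   /* prints 124.00 */
    return 0;
}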
Short InformIT article on Scaled Integers by Danny Kalev
Before going overboard on scaled integers, do inspect the domain of your problem closely to verify that the "slack" in the floating point numbers is significant enough to warrant using scaled integers.
