I'm trying to create a customized filesystem, and eventually a kernel, for my Banana Pi using Buildroot.
The fact is, I'm new to it, and the Banana Pi isn't among the pre-made configurations.
My main problem is that I can't find the specific hardware specs I'm searching for.
My CPU is an Allwinner A20 SoC, which has an ARM architecture. But is it big- or little-endian?
1. What is the "Target ABI"?
2. What is its "Floating point strategy"?
Thanks for your answers!
For both, you can choose whatever you want, since Buildroot will only show you what's compatible with the ARM core you've selected (in the case of the Allwinner A20, the core is a Cortex-A7, which runs little-endian).
For the target ABI, you have the choice between EABI and EABIhf. Unless you need to use some pre-built binaries that were built with the EABI ABI, I would suggest using EABIhf, since it allows passing floating-point arguments directly in floating-point registers (rather than through integer registers), which makes calling floating-point-related functions slightly more efficient.
For the floating point strategy, use the highest VFPvX available for your ARM core.
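For a Cortex-A7 this typically ends up as a configuration fragment along these lines (symbol names as found in recent Buildroot versions; double-check them in make menuconfig, since option names occasionally change):

    BR2_arm=y
    BR2_cortex_a7=y
    # EABIhf, as recommended above
    BR2_ARM_EABIHF=y
    # the Cortex-A7 implements VFPv4
    BR2_ARM_FPU_VFPV4=y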
Due to the current chip shortage, I have had to purchase PIC microcontrollers that have a different specification to what was initially designed.
Initial: PIC24FJ256GA606
Revision 1: PIC24FJ512GA606
Revision 2: PIC24FJ1024GA606
In this instance, the microcontrollers are within the same family but have different sizes of memory.
Initially, the binary was created to support multiple product variants and they all use this microcontroller (using hardware pins to define the type of product and thus the software features it supports). I would like to continue with a single binary but to be able to support the different microcontrollers specified above.
We flash the microcontrollers using a PICKIT 4 during manufacturing.
A custom bootloader is also flashed onto the microcontroller during manufacturing, so that the firmware update procedure can be driven by another PIC microcontroller out in the field (it's a distributed system connected by RS-485).
I use MPLAB X IDE for development and for building production binaries.
I guess the key question is whether this is even possible.
If so, how would I create a single binary that supports the different microcontrollers specified above?
Normally, a single binary corresponds to one specific controller, especially since Microchip has a really wide variety of microcontrollers. But as you mentioned in your question:
In this instance, the microcontrollers are within the same family but have different sizes of memory.
You can often use the same binary, as long as you select the hardware very carefully. If those three models have the same pin mapping (even if some have fewer pins and some have more), select the common pins for your I/O functions wherever possible. Since those devices are from the same family, they should have common I/O pins with the same port and pin numbering.
If those similarities, including the internal registers, cover all the functionality your system needs, you can use the same binary for those three (or more) devices, provided you select the hardware very carefully and no function is left without the hardware it touches.
But it is very hard to say the same for parts that do not belong to a series in the same family. In that case, check the hardware similarities for each function of your system. If the micro provides the same hardware, first give it a try to see whether it can be programmed, and then whether it functions the same way. Once you are sure enough, you can add that model to your list of usable models, too.
Hope this gives you a helpful idea.
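As a rough illustration of the "common pins only" idea in XC16-style C: route every I/O access through one mapping layer, so only that layer would ever need to change. The register short names follow Microchip's usual device headers, and RB4 is just a placeholder pin:

    #include <xc.h>   /* Microchip XC16 device header */

    /* Keep the hardware mapping in one place; pick pins that exist,
       with the same port/bit numbering, on every supported part. */
    #define LED_TRIS  _TRISB4   /* data-direction bit for the LED pin */
    #define LED_LAT   _LATB4    /* output-latch bit for the LED pin */

    void led_init(void) { LED_TRIS = 0; }          /* configure as output */
    void led_set(int on) { LED_LAT = on ? 1 : 0; } /* drive high or low */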
For two microcontrollers to have compatible binaries, they need to fulfil the following:
The CPU cores must have identical Instruction Set Architecture. Be aware that the term "code compatible" by the manufacturer might only mean that two parts have the same ISA and are compatible on the assembly language level, as long as no peripherals or memories are used...
In case they have different memory sizes, the part with larger memory must be a superset of the part with smaller memory and they must map memory to the same addresses.
All hardware peripherals used must be identical and any peripheral routing registers used must also be identical. Please note that the very same core of the same family but with a different package and pin routing might mean that peripheral routing registers must be set differently.
There can be no check of MCU part-number registers, etc., inside the firmware, nor can there be any in the flash-programming equipment.
In general this means that the MCUs must be of the very same family and the manufacturer must guarantee that they are replaceable.
You can very likely switch between different temperature spec parts of the same model and between automotive/non-automotive qual parts of the same model without changing the code.
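One safeguard that respects the "no part-number check in the firmware" rule is to enforce the device list at build time instead. A sketch, assuming XC16's usual per-device predefined macros (verify the exact macro names for your compiler version):

    /* Refuse to build for any device outside the validated list. */
    #if !defined(__PIC24FJ256GA606__) && \
        !defined(__PIC24FJ512GA606__) && \
        !defined(__PIC24FJ1024GA606__)
    #error "Build target is not on the validated device list."
    #endif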
I want to know which code and files in the glibc library are responsible for generating traps for floating point exceptions when traps are enabled.
Currently, GCC for RISC-V does not trap floating point exceptions. I am interested in adding this feature. So, I was looking at how this functionality is implemented in GCC for x86.
I am aware that we can trap signals as described in this question (Trapping floating-point overflow in C), but I want to know more details about how it works.
I went through the files in glibc/math which, as far as I can tell, are in some way responsible for generating traps, such as:
fenv.h
feenablxcpt.c
fegetexcept.c
feupdateenv.c
and many other files starting with fe.
All these files are also present in glibc for RISC-V. I am not able to figure out how glibc for x86 is able to generate traps.
These traps are usually generated by the hardware itself, at the instruction set architecture (ISA) level; this is the case on x86-64 in particular.
I want to know which code and files in the glibc library are responsible for generating traps for floating point exceptions when traps are enabled.
So there is no such file. However, the operating system kernel (notably with signal(7)-s on Linux...) translates traps into something else.
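You can watch that division of labour from user space. A minimal sketch for x86-64 Linux with glibc: feenableexcept (the function implemented by the feenablxcpt.c you found) merely unmasks exception bits in the FPU control registers (MXCSR and the x87 control word on x86); the trap itself is raised by the CPU, and the kernel delivers it to the process as SIGFPE:

    #define _GNU_SOURCE            /* feenableexcept() is a glibc extension */
    #include <fenv.h>
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static void on_sigfpe(int sig)
    {
        /* The CPU raised the exception; the kernel turned it into SIGFPE. */
        (void)sig;
        write(2, "caught SIGFPE\n", 14);
        _exit(1);
    }

    int main(void)
    {
        signal(SIGFPE, on_sigfpe);
        feenableexcept(FE_DIVBYZERO);  /* unmask the divide-by-zero trap */
        volatile double zero = 0.0;
        double x = 1.0 / zero;         /* traps here instead of yielding inf */
        printf("not reached: %f\n", x);
        return 0;
    }

Compile with gcc prog.c -lm and run it: without the feenableexcept call the division quietly produces infinity; with it, the process receives SIGFPE.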
Please read Operating Systems: Three Easy Pieces for more. And study the x86-64 instruction set in detail.
A more familiar example is integer division by zero. On most hardware, that produces a machine trap (or machine exception), handled by the kernel. On some hardware (IIRC, PowerPC), it gives -1 as a result and sets some bit in a status register. Further machine code could test that bit. I believe that the GCC compiler would, in some cases and with some optimizations disabled, generate such a test after every division. But it is not required to do that.
The C language (read n1570, which is practically the C11 standard) has defined the notion of undefined behavior so that such situations can be handled as quickly and simply as possible. Read Lattner's What every C programmer should know about undefined behavior blog.
Since you mention RISC-V, read about the RISC philosophy of the previous century, and be aware that designing out-of-order and superscalar processors requires a lot of engineering effort. My guess is that if you invested as much R&D into a RISC-V chip as Intel, or to a lesser extent AMD, did into x86-64 (that means tens of billions of US$ or €), you could get performance comparable to current x86-64 processors. Notice that SPARC and PowerPC (and perhaps ARM) chips are RISC-like, and their best processors are nearly comparable in performance to Intel chips, despite probably receiving ten times less R&D investment than Intel put into its microprocessors.
I was wondering what would be the fastest way to generate a random number on the Zedboard (Xilinx Zynq-7020), which has an ARM processor as well as an FPGA; to my understanding, both can do this.
Thanks,
If you are developing a FPGA only or a bare metal application + FPGA project, I would generate the (pseudo)random number in the FPGA. If you are using Linux and building an embedded application + FPGA, I would generate the number in software.
If you generate the number in the FPGA, then you can use a code template built into Xilinx ISE! Odds are you're using ISE, because I doubt anything else supports Zynq yet.
Generate a LFSR in ISE like this:
From the menu at the top: Edit -> Language Templates.
In the language templates tree-view:
(VHDL or Verilog) -> Synthesis Constructs -> Coding Examples -> Counters -> LFSR.
One of the 4 templates is 32 bits.
You will need to provide a seed number and a clock line.
If you generate the random number from a Linux app, there must be tons of different ways to do this. From a C application, there is the rand function in <stdlib.h>.
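For completeness, the software route is a one-liner in standard C, nothing board-specific:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        srand((unsigned)time(NULL));   /* seed the PRNG once */
        printf("%d\n", rand());        /* pseudo-random value in [0, RAND_MAX] */
        return 0;
    }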
You're probably looking for a "Linear Feedback Shift Register" or LFSR. A quick internet search will get you specifics.
A Linear Feedback Shift Register is no doubt the easiest way to generate random numbers.
Check this document: http://www.xilinx.com/support/documentation/application_notes/xapp052.pdf
Everything you need is there.
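To show what those application-note tables translate to, here is a sketch of a 32-bit Galois LFSR in C; the taps (32, 22, 2, 1) are the maximal-length set xapp052 lists for 32 bits, and the state must be seeded with a non-zero value:

    #include <stdint.h>

    /* Advance a 32-bit maximal-length Galois LFSR by one step.
       Taps at bits 32, 22, 2, 1 (1-indexed) give feedback mask 0x80200003. */
    static uint32_t lfsr32_next(uint32_t *state)
    {
        uint32_t lsb = *state & 1u;   /* output bit before shifting */
        *state >>= 1;
        if (lsb)
            *state ^= 0x80200003u;    /* apply feedback taps */
        return *state;
    }

The same shift-and-XOR structure is what the ISE template generates in VHDL or Verilog, so the hardware and software versions can produce identical sequences from the same seed.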
I'm working on a microcontroller that has access to floating-point operations.
I need to make use of a power function. The problem is, there isn't enough memory to support the pow and sqrt functions. This is because the microcontroller doesn't support FP operations natively, and using them produces a large number of instructions. I can still multiply and divide floating-point numbers.
Architecture: Freescale HCS12 (16-bit)
If you mentioned the architecture, you might get a more specific answer.
The linux kernel still has the old x87 IEEE-754 math emulation library for i386 and i486 processors without a hardware floating point unit, under: arch/x86/math-emu/
There are a lot of resources online for floating point routines implemented for PIC micros, and AVR libc has a floating point library - though it's in AVR assembly.
glibc has implementations for pow functions in sysdeps/ieee754. Obviously, the compiler must handle the elementary floating point ops using hardware instructions or emulation / function calls.
Make your own function that multiplies repeatedly in a loop.
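A sketch of that idea, restricted to integer exponents and using exponentiation by squaring so an exponent n costs O(log n) multiplications instead of n (the ipow name and the negative-exponent handling are my additions, not part of the answer above):

    /* pow() for integer exponents only, built from the FP multiply and
       divide that the toolchain already provides in software. */
    double ipow(double base, int exp)
    {
        double result = 1.0;
        /* Negate via unsigned arithmetic so exp == INT_MIN is safe. */
        unsigned int n = (exp < 0) ? 0u - (unsigned int)exp : (unsigned int)exp;
        while (n) {
            if (n & 1u)
                result *= base;   /* fold in the current exponent bit */
            base *= base;         /* square for the next bit */
            n >>= 1;
        }
        return (exp < 0) ? 1.0 / result : result;
    }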
At http://talk.maemo.org/showthread.php?t=9081 I found that interpreting armel as little-endian ARM is wrong. But what, in this case, is armel?
In the context of Maemo and Debian architecture names, it refers to a binary-incompatible change in the ABI (the function-calling and return-value conventions) which necessitated a complete new port of Debian.
https://wiki.debian.org/ArmEabiPort will tell you far more about the differences than you ever wanted to know. The bottom line is that *_arm.deb and *_armel.deb are two incompatible ports, and *_armel.deb is 11 times faster when doing floating point, as well as allowing you to compile your own applications using hardfloat (precisely, -mfloat-abi=softfp) and link them with the softfloat libraries in your generic distro to gain a further 3 to 7 times speed increase.
It's ARM running in little-endian mode.