Could a chip revision change break binary compatibility via CMSIS?

Suppose the following situation:
You develop something for a certain manufacturer's ARM-based microcontroller.
You use CMSIS to interface with the hardware (as recommended on this architecture).
The software is finalized, compiled into a binary, and the product ships.
Five years later you want to issue a new batch of the product, and you find that the binary no longer works correctly on the currently available revision of the microcontroller you were using.
After recompiling with a fresh CMSIS provided for the micro, it works.
Could this situation happen? Did it ever happen?
This matters because there are some areas where recompiling a binary might not be an acceptable solution.
As far as I can tell, this scenario seems possible: CMSIS contains interface code (it is not just a bunch of header files, at least as far as I can see; I could be wrong), and ARM's recommendation appears to be only that the manufacturer should implement it and provide this interface to developers.
So far I couldn't find anything from either ARM or the manufacturers about whether and how binary compatibility is maintained across chip revisions (where applicable).
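To illustrate what I mean by "interface code": a typical CMSIS device header maps peripheral registers onto fixed addresses via structs and static inline helpers, and all of this gets compiled straight into the application binary (a made-up sketch; the peripheral, its base address, and the bit are hypothetical):

#include <stdint.h>

/* Hypothetical peripheral layout in the style of a CMSIS device header. */
typedef struct {
    volatile uint32_t CR;   /* control register */
    volatile uint32_t SR;   /* status register  */
    volatile uint32_t DR;   /* data register    */
} EXAMPLE_UART_TypeDef;

#define EXAMPLE_UART_BASE  0x40011000UL   /* fixed address taken from the datasheet */
#define EXAMPLE_UART       ((EXAMPLE_UART_TypeDef *)EXAMPLE_UART_BASE)

/* Compiles down to a load/store at an absolute address, so a silicon revision
   that moves or redefines the register needs a header update and a rebuild. */
static inline void example_uart_enable(void)
{
    EXAMPLE_UART->CR |= 1UL;   /* hypothetical enable bit */
}

The core_cm*.h parts (NVIC and SysTick access) are likewise static inline functions that end up inside the binary rather than behind a vendor-provided runtime.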

Related

Is it possible to build C source code written for ARM to run on x86 platform?

I got some source code in plain C. It is built to run on ARM with a cross-compiler on Windows.
Now I want to do some white-box unit testing of the code. And I don't want to run the test on an ARM board because it may not be very efficient.
Since the C source code is instruction set independent, and I just want to verify the software logic at the C-level, I am wondering if it is possible to build the C source code to run on x86. It makes debugging and inspection much easier.
Or is there some proper way to do white-box testing of C code written for ARM?
Thanks!
BTW, I have read the thread: How does native android code written for ARM run on x86?
It seems not to be what I need.
ADD 1 - 10:42 PM 7/18/2021
The physical ARM hardware that the code targets may not be ready yet. So I want to verify the software logic at a very early phase. Based on John Bollinger's answer, I am thinking about another option: Just build the binary as usual for ARM. Then use QEMU to find a compatible ARM cpu to run the code. The code is assured not to touch any special hardware IO. So a compatible cpu should be enough to run all the code I think. If this is possible, I think I need to find a way to let QEMU load my binary on a piece of emulated bare-metal. And to get some output, I need to at least write a serial port driver to bridge my binary to the serial port.
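For reference, on QEMU's generic "virt" machine a PL011 UART sits at 0x09000000, so the serial bridge for test output could be as small as this (just a sketch, assuming the virt machine; real hardware would need a proper driver):

#include <stdint.h>

/* QEMU "virt" machine: PL011 UART0 data register at 0x09000000.
   Only meant for test output under QEMU, not for real silicon. */
#define VIRT_UART0_DR  (*(volatile uint32_t *)0x09000000UL)

static void uart_putc(char c)
{
    VIRT_UART0_DR = (uint32_t)c;   /* QEMU's model accepts writes without checking flags */
}

void uart_puts(const char *s)
{
    while (*s) {
        uart_putc(*s++);
    }
}

Run with something like qemu-system-aarch64 -M virt -nographic -kernel test.elf and the writes should show up on stdout.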
ADD 2 - 8:55 AM 7/19/2021
Some more background: the C code targets the ARMv8 ISA, and it manipulates some hardware IPs which are not ready yet. I am planning to create a software HAL for those IPs and verify the C code over the HAL. If the HAL is good enough, everything can be purely software, and I guess the only missing part is an ARMv8-compatible CPU, which I believe QEMU can provide.
ADD 3 - 11:30 PM 7/19/2021
Just found this link. It seems QEMU user-mode emulation can be leveraged to run ARM binaries directly on an x86 Linux system. Will try it and get back later.
ADD 4 - 11:42 AM 7/29/2021
And some useful links:
Override a function call in C
__attribute__((weak)) and static libraries
What are weak functions and what are their uses? I am using a stm32f429 micro controller
Why the weak symbol defined in the same .a file but different .o file is not used as fall back?
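From those links, the weak-symbol approach looks roughly like this: the production code defines its hardware accessor as weak, and the test build links in a strong override (a sketch; the function names are mine):

/* hal_sensor.c -- cross-compiled production code */
#include <stdint.h>

/* Weak default: will talk to the real hardware IP once it exists. */
__attribute__((weak)) uint32_t hal_read_sensor(void)
{
    return 0;   /* placeholder until the IP is available */
}

/* test_overrides.c -- linked into the test build only;
   the strong definition replaces the weak one at link time. */
uint32_t hal_read_sensor(void)
{
    return 42;  /* canned value for the unit test */
}

Note the caveat from the last link: if the weak and strong definitions end up in the same static library, the linker may never pull in the strong one, so it seems safer to pass the override object file to the linker directly.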
Now I want to do some white-box unit testing of the code. And I don't want to run the test on an ARM board because it may not be very efficient.
What does efficiency have to do with it if you cannot be sure that your test results are representative of the real target platform?
Since the C source code is instruction set independent,
C programs vary widely in how portable they are. This tends to be less related to the CPU instruction set than to target machine and implementation details such as data type sizes, word endianness, memory size, floating-point implementation, and reliance on implementation-defined or undefined behavior.
It is not at all safe to assume that, just because the program is written in C, it can be successfully built for a different target machine than the one it was developed for, or that, if it is built for a different target, its behavior there will be the same.
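For example, something like the following compiles everywhere but does not behave the same everywhere (a sketch):

#include <inttypes.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Endianness: reinterpreting a byte stream as a 32-bit word gives
       different values on big-endian and little-endian targets. */
    const uint8_t frame[4] = { 0x12, 0x34, 0x56, 0x78 };
    uint32_t word;
    memcpy(&word, frame, sizeof word);
    printf("word = 0x%08" PRIX32 "\n", word);  /* 0x78563412 little-endian, 0x12345678 big-endian */

    /* Type sizes: long is 32 bits on typical 32-bit ARM ABIs but 64 bits on x86_64 Linux. */
    printf("sizeof(long) = %zu\n", sizeof(long));
    return 0;
}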
I am wondering if it is possible to build the C source code to run on x86. It makes debugging and inspection much easier.
It is probably possible to build the program. There are several good C compilers for various x86 and x86_64 platforms, and if your C code conforms to one of the language specifications then those compilers should accept it. Whether the behavior of the result is representative of the behavior on ARM is a different question, however (see above).
It may nevertheless be a worthwhile exercise to port the program to another platform, such as x86 or x86_64 Windows. Such an exercise would be likely to unmask some bugs. But this would be a project in its own right, and I doubt that it would be worth the effort if there is no intention to run the program on the new platform other than for testing purposes.
Or is there some proper way to do white-box testing of C code written for ARM?
I don't know what proper means to you, but there is no substitute for testing on the target hardware that you ultimately want to support. You might find it useful to perform initial testing on emulated hardware, however, instead of on a physical ARM device.
If you were writing ARM code for a Windows desktop application there would be no difference for the most part and the code would just compile and run. My guess is you are developing for some device that does some specific task.
I do this for a lot of my embedded ARM code. Typically the core algorithms work just fine when built on x86, but the whole application does not. The problems come in with the hardware other than the CPU. For example, I might be using an LCD display, some sensors, and FreeRTOS on the ARM project, but the code that runs on Windows does not have any of these. What I do is extract the important pieces of C/C++ code and write a test framework around them. In the real ARM code the device reads values from a sensor and does something with them. In the test code that runs on a desktop, the code reads from a data file with fake sensor values and writes its output to a data file that can be analyzed. This way I can have white-box tests for the most complicated code.
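A stripped-down sketch of the host-side stand-in (the names and the one-integer-per-line file format are made up):

/* sensor_fake.c -- host build only; replays recorded values from a text file.
   The real ARM build links a sensor_hw.c with the same sensor_read() prototype. */
#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>

static FILE *fake_input;

bool sensor_fake_open(const char *path)
{
    fake_input = fopen(path, "r");
    return fake_input != NULL;
}

bool sensor_read(int32_t *value)
{
    /* One integer per line; returns false at end of data so the test loop can stop. */
    return fake_input && fscanf(fake_input, "%" SCNd32, value) == 1;
}

The algorithm under test only ever calls sensor_read(), so it does not know or care which implementation it got.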
May I ask, roughly what does this code do? An ARM processor with no peripherals would be kind of useless. Typically we use the processor to interact with some other hardware like a screen, some buttons, or Bluetooth. It's those interactions that are going to be the most problematic.

Flashing a Cortex-M0+ device using an ISO file [closed]

I wrote a little program for a motion-detection device (a DA14531 SmartBond TINY module, based on a Cortex-M0+) and I am experimenting with it. After debugging and testing, I generated an ISO file and now I want to flash the device. Is the process similar to burning an ISO file to a USB flash drive, or is it different? I only have one device and I don't want to do something irreversible, so I came here for some guidance first.
I looked online for a while but nothing matches my specific situation, so providing me with the correct links is also helpful.
The ISO 9660 format was designed for optical discs, so it is most likely irrelevant to your use case: there is, IMHO, a near-zero chance you will find a tool that lets you flash your program into a Cortex-M0+ device directly from a file in ISO 9660 format.
And even if you could flash the ISO file as-is into your Cortex-M0+ flash memory, your device would likely be unable to boot, since it relies on very specific information (initial stack pointer, first instruction to be executed) being flashed at a very specific location, not to mention the flash memory space this would waste.
That is, if the Dialog documentation does not specifically mention the possibility of flashing a file in ISO 9660 format, it is likely (and not surprising) that this is not possible using Dialog's software and hardware support tools.
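To illustrate the "very specific information at a very specific location" part: on a Cortex-M0+ the first words of flash form the vector table, which the tools lay out roughly like this (a sketch; the section name .isr_vector is just a common linker-script convention, yours may differ). An ISO image has none of this structure.

#include <stdint.h>

extern uint32_t _estack;        /* top of RAM, provided by the linker script */
void Reset_Handler(void);       /* first code the core executes */

/* Minimal Cortex-M0+ vector table placed at the very start of flash. */
__attribute__((section(".isr_vector"), used))
const uint32_t vector_table[] = {
    (uint32_t)&_estack,         /* word 0: initial main stack pointer */
    (uint32_t)Reset_Handler,    /* word 1: reset vector (entry point) */
    /* ... NMI, HardFault and the other exception vectors follow ... */
};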
So when you read the documentation for this product you noticed there is an SWD interface, which is certainly one way into the part. When you further examine the pro kit and other solutions from them you see they mention Segger J-Link interfaces for debugging etc., further reinforcing SWD as at least one interface into the part. Through that interface (SWD is ARM's) you access the flash controller (which has nothing whatsoever to do with ARM, it is chip specific) and through that you write the application binary that the part will run (the machine code and data that the processor uses, your application).
ISO is closely tied to PCs with a BIOS/EFI, which also means x86, and has nothing whatsoever to do with a microcontroller. It is extremely unlikely that you could fit enough software on a Cortex-M0(+) based platform to parse an ISO from whatever media interface could hold it, extract anything useful, and still have resources left to load and execute programs in RAM. In no part I have heard of could you do this in RAM in a way that lets you extract something and load it to the part's flash. Plus you would have to get that program into the part before you could later support ISO, if you could, which you can't.
The only remote way an ISO makes any sense or has any context is if, on your PC, you boot off an ISO image and that ISO image for the PC (not the MCU) contains a development system. For example, a pre-prepared Linux distro with the vendor's tools for this part, so that instead of installing the development system on your computer you run it off a ramdisk using a live image on an ISO. That development system would not use ISO files but the proper file formats to develop binaries and load them to the board via SWD or some other chip/board-specific interface.
Beyond that there is no further reason to talk about ISO's and microcontrollers.
Some chip vendors (not ARM; the chip vendor) may also provide a factory bootloader or logic that supports, for example, a UART, SPI, I2C, or USB interface, through which you use chip-specific (not ARM) software to talk to software running on that chip (the bootloader), which can then write to the flash. You can also write your own bootloader if there are enough resources in the system. The (ARM-based) MCU world is moving away from these bootloaders; two of the three main companies that used to always have them have started to remove them or disable them by default.
Other companies provide no interface other than SWD to program the part: SWD or nothing. Certainly in the Cortex-M0+ market, where every penny counts, the extra flash for a bootloader, the extra chip real estate, etc. add to the overall cost of a legacy feature that is becoming less important, because developers can now easily obtain SWD interface modules for a few dollars. It is not like the old days where a JTAG board cost $2000. At this time all Cortex-M parts support SWD, making it the most useful interface and making tools that can access it worth the investment ($5, plus the time to learn to use them).
The tools used to write the flash dictate what file formats are supported; these days a raw binary image or an ELF file are the main two. The old days included file formats like Intel hex and Motorola S-record, but it is only old timers like me who favor those formats, even though an ELF is trivial to parse, and a raw binary image even simpler, about four lines of code.
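To illustrate how simple the raw binary case is (a sketch, not any particular tool's code, with error handling kept minimal):

#include <stdio.h>
#include <stdlib.h>

/* Read a raw binary image into memory; a flash loader then just streams
   these bytes to the flash controller at the chosen base address. */
unsigned char *load_raw_image(const char *path, long *size_out)
{
    FILE *f = fopen(path, "rb");
    if (!f) return NULL;
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    fseek(f, 0, SEEK_SET);
    unsigned char *buf = malloc((size_t)size);
    if (buf) fread(buf, 1, (size_t)size, f);
    fclose(f);
    *size_out = size;
    return buf;
}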
Some chip vendors do not provide enough information to roll your own tools, but most often they do. ARM long ago released the SWD interface information, so it is technically possible to roll your own and then support whatever file format you want. But you would have to distribute this tool along with the ISO file, so what would you use, a second ISO file to distribute the tool that reads the first one? Based on your question and comments you are a long, long way from writing tools like these, especially when working tools like OpenOCD exist that support the main file formats (ELF and raw binary) and can speak SWD to the current line of Cortex-M cores.
Again if you are suggesting using an ISO to distribute tools along with your binary to be loaded and run on a PC that might make sense, but it is easier for the end user to simply download the tools from the chip vendor or tools vendor and then download the binary file from you, rather than put in the extra work to deal with an ISO.

Which ARM Cortex M product line to use?

I am trying to select a particular company's line of ARM Cortex-M microprocessors to work with for a project I want to do. Since all the companies license the architecture from ARM Holdings, I am wondering how much difference there can be in the hardware between brands? I am thinking the documentation, software productivity, and overall experience may be the deciding factor?
I have tried one company and their documentation was lacking! Thousands of pages of fluff about what wonderful stuff they make but very little info on how to use the things.
Mostly I am looking for good documentation. I don't need endless obsolete sample programs that don't compile and use mystery macros and functions! A line that includes a microprocessor with 2 ADCs, not just multiplexing channels to 1 ADC, would be good, but I may end up using an external ADC.
Would anybody care to say what they recommend and why?
BTW: My history is programming C++, C# in MS Visual Studio for machine tool HMI as well as motion controllers.
Thanks In Advance
Chris
Most of your application has nothing to do with ARM and the Cortex-M. Each chip vendor adds its own peripherals (or sometimes purchased ones) around the ARM core. Most of your code is talking to peripherals. The processor core doesn't make the chip; the things other than the processor core make the chip.
You need to go do your research; Stack Overflow is not about doing it for you. You should be able to find a list of parts with the number of peripherals you want, independent of the processor core used. While a particular chip vendor may have different UARTs across their product line, or different GPIOs, ADCs, etc., you can still get a feel for a vendor without having to look at every part on that list as you narrow in.
No vendor has great docs, some do have bad docs; that is part of the exercise. All of them provide libraries, and it's the same deal: nobody has great libraries, some have bad ones, but the point of the libraries is to hide the details. You need to do your homework and look at the docs, look at that code: can you live with it, can you replace it or repair it, or is it better to move on to another vendor, or the same vendor and an alternate library?
1% of the job is writing the application the other 99% is reading docs and doing experiments to make the peripherals do what you want them to do.
The same brand, or different brands with the same processor core name, doesn't mean anything with respect to portability. If you read the ARM docs, as you should read the processor core docs for whatever parts you are evaluating or choosing, you will see that even if 7 vendors have products with the Cortex-M0, that core has compile-time and run-time options that each vendor can choose from, making either the code or the performance incompatible with other chips using a Cortex-M0. But the amount of code that would port anyway is a very, very small percentage of your project. Most of your project is the not-processor-core stuff.
Note that ARM makes a number of Cortex-M cores that are not 100% compatible with each other. If you feel the need to go with an ARM core, then narrow in on the one you want; that will narrow your choices as far as available chips go.
Built in ADCs are there to save on chips, depending on the specs you want, accuracy or performance, you may very well end up with an external ADC which makes the specific microcontroller less important if the ADC and its specs are your primary requirement.
Software productivity also has little to nothing to do with the processor core. The vendors are going to cobble together an IDE with a compiler and libraries because folks expect that; that doesn't make any of them good or productive. The text editor alone goes the way of religion and politics with developers; there is no single editor or environment that is perfect for every developer. Developers have their ways of doing things, and some are compatible and some are not. Some developers can bend, some can't. Very rarely do you have to use the tools they provide.
It is not possible for us to choose your part for you nor is it possible for us to choose your development environment. That is not the purpose of stackoverflow.
In the time it took to write your question and wait for an answer thus far, you could have looked at all the major vendors' docs several times over. I hope you didn't stop after the first one.

CMSIS for Cortex-M1

Sadly I'm forced to use an obscure microcontroller based on the ARM Cortex-M1 core. I just found out that the latest CMSIS (5.2) does not support it, and the official CMSIS docs say this:
CMSIS supports the complete range of Cortex-M processors (with exception of Cortex-M1) and the ARMv8-M architecture including security extensions.
I guess Cortex-M1 is not very popular. But what should I do without CMSIS? My vendor ships a support package which, strangely enough, includes CMSIS files for this core, namely core_cm1.h; it's full of ARM copyrights and does not appear to be written directly by said vendor. The file comment lists CMSIS version V3.20 from 25 February 2013. But I can't find it anywhere else, neither in higher versions of CMSIS nor in lower ones.
In "Definitive Guide to the ARM Cortex-M0" by Joseph Yiu I found this quote:
There is also a small chance that the software needs minor adjustment because of execution timing differences. At the time of writing, no CMSIS software package is available for the Cortex-M1. However, you can use the same CMSIS files for the Cortex-M0 on Cortex-M1 programming, because they are based on the same version of the ARMv6-M architecture.
I diffed core_cm0.h from CMSIS 4.0 and core_cm1.h from my vendor and found only very minor differences (like, 1 << smthn became 1u << smthn in a couple of places).
Then I diffed core_cm0.h from CMSIS 5.0.2 and core_cm1.h from my vendor and found a lot of differences: structs are different, inline functions for the NVIC are different, and so on.
So my question is: is it really safe to use core_cm0 for the Cortex-M1 even with the latest CMSIS? Or should I play it safe and stick to my vendor's files (even though I have no idea where it got them)?
You can use the Cortex-M0 CMSIS-CORE header on Cortex-M1. There are a couple of things you need to be aware of:
- WFI, WFE and SEV instructions are not available in Cortex-M1.
- Cortex-M1 has an auxiliary control register for I-TCM enable control. You need to declare that manually if you need to switch the I-TCM enable (see the sketch after this list).
- CPU ID register has different value
- Instruction execution timings are different
- Interrupt latency is not constant.
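A minimal sketch of declaring that register yourself; the address is the architecturally defined ACTLR location, but the bit mask below is only a placeholder, so check the Cortex-M1 TRM for the actual bit assignments:

#include <stdint.h>

/* Cortex-M1 Auxiliary Control Register (ACTLR) at the standard SCS address.
   core_cm0.h knows nothing about the Cortex-M1 specific ITCM control bits. */
#define CM1_ACTLR               (*(volatile uint32_t *)0xE000E008UL)
#define CM1_ACTLR_ITCM_EN_Msk   (1UL << 2)   /* placeholder: verify the bit position in the TRM */

static inline void cm1_itcm_enable(void)
{
    CM1_ACTLR |= CM1_ACTLR_ITCM_EN_Msk;
}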
There are a lot of code changes from CMSIS-CORE 4 to CMSIS-CORE 5, but those changes focus on supporting additional tools, general coding styles, and future extensions of CMSIS.
Hope this helps.
Cortex-M1 is very similar to Cortex-M0 from a software point of view. At the CMSIS level, using core_cm0.h (latest CMSIS) will work fine.
You might also find compiler switches don't support Cortex-M1 - in this case treat it as if it were M0.

What are the conditions to make the embedded C code written for one processor to work on another processor (when architecture is same)?

I am reading a primer text on embedded C programming (Barr & Massa, 2007). As the companion hardware board for running the examples, they recommend the Arcom VIPER-Lite. But I already have a BeagleBone Black (BBB) board and I don't want to buy a new board.
The two boards have the same architecture, namely ARM, but the BBB uses a TI AM3358BZCZ100 processor clocked at 1 GHz, whereas the VIPER-Lite uses Intel's PXA255 processor clocked at 200 MHz. The BBB board has more memory and basically more of everything.
My question is: can I follow and execute the embedded C code examples given in this book on my BBB board? Does embedded C code depend on the processor, the architecture, or something else? I understand that very specific examples addressing particular peripherals/drivers may not be portable from one platform to the next, but is all embedded code like this? I hope I am making sense.
Intel XScale is not the same as the Cortex-A8 - the ARM architecture has been through a number of versions since then, and Intel implemented some proprietary features too. Moreover, ARM licensees are free to implement proprietary peripheral sets and subsets of the core architecture.
In particular, for board bring-up the PLL and SDRAM controller will be entirely different between different vendors' devices, and even between different generations of device from the same vendor.
If you are running code on an already implemented OS (the BeagleBone is delivered with Linux already installed), then you will not need to worry about board bring-up and peripheral support; but you will also miss out on learning a great deal about embedded systems (other than perhaps embedded systems that run pre-installed or vendor-supplied Linux distros, which are a small subset of all embedded systems).
Beyond board bring-up, the boards will have entirely different peripheral sets, different on-board devices, and differing I/O at different addresses and with different register sets - no code that directly accesses the I/O will work. Code accessing devices through a standard Linux device driver interface may well work, because an abstraction to a common interface is provided by the OS and the board vendor's or third-party device drivers.
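For example, code that goes through the kernel's I2C character-device interface looks the same on any Linux board, provided the right driver and device tree entries are in place (a sketch; the bus number, device address, and register are board-specific examples):

#include <fcntl.h>
#include <linux/i2c-dev.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/i2c-1", O_RDWR);     /* bus number differs per board */
    if (fd < 0) { perror("open"); return 1; }

    if (ioctl(fd, I2C_SLAVE, 0x48) < 0) {    /* hypothetical sensor address */
        perror("ioctl");
        close(fd);
        return 1;
    }

    unsigned char reg = 0x00;                /* hypothetical register to read */
    unsigned char value;
    if (write(fd, &reg, 1) == 1 && read(fd, &value, 1) == 1) {
        printf("register 0x%02x = 0x%02x\n", reg, value);
    }

    close(fd);
    return 0;
}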
If you are not running the code on Linux - or are implementing low-level device drivers, then the programming environment in terms of memory map, MMU, PLL, I/O control, peripherals, and even instruction set will be different and any code will require adaptation, and you will need to get familiar with the corresponding data sheets or reference manuals and also the ARM technical reference.
So the answer is that it depends largely on where you are starting from; bare-metal or Linux.
There are resources related to "bare-metal" development on BeagleBone Black in particular TI's own bare-metal StarterWare library.
The concepts you learn from most good textbooks can be applied to any microprocessor or microcontroller at a very high level. But if you want to learn embedded systems programming using the BeagleBone Black, I suggest the following YouTube links from Prof. Derek Molloy. Prof. Molloy does a fantastic job of teaching embedded systems programming using the BBB. Here are a few links to get you started.
The Beaglebone - Unboxing, Introduction Tutorial and First Example
Beaglebone: C/C++ Programming Introduction for ARM Embedded Linux Development using Eclipse CDT
Beaglebone: GPIO Programming on ARM Embedded Linux
The one problem you might want to be aware of is that the videos were based on the Angstrom distribution. The current BBB ships with a Debian distribution.
Also, if you want to learn bare-metal embedded systems programming, you might want to check out
Embedded Systems - Shape The World
You might also want to take a look at the following link for more material.
Beginning with programming microcontrollers
The XScale, although derived from the ARM instruction set, is not ARM in the sense that you want to use it. For some reason its native mode is big-endian, while normal ARM native mode is little-endian. But more important, the core processor, while not insignificant, is not the bulk of the porting effort: most if not all of the peripherals are expected to differ between those two chips, so most of the code would need a re-write, unless it is a purely portable C program that runs on any Linux, in which case ARM, XScale, and x86 are completely irrelevant to the discussion. I suspect you are not in that situation. Even compiled as a generic command-line Linux app it would still have problems in this situation with the endianness.
Basically you are saying "I have two Fords and I want to take the wheels off of one and put them on the other", without understanding that one is a Ford Festiva and the other is, let's say, an F-350 pickup. Just because they have the same-looking tiny Ford badge on them doesn't mean that the entirety of their components are identical.
If you are desperate to re-use these binaries, you are better off finding or making a simulator for the prior platform and then you can run that on anything.
