In our company we develop bare-metal embedded software for microcontrollers. Until now we have been running manual unit tests on targets or simulators, especially for Renesas microcontrollers (RL78 and RX families). We are now planning to move to automated unit tests, and the idea is to integrate them into our existing CI system.
At this point we've got a dilemma. Until now we've been running unit tests using the same compiler and target (or simulator) that is later used to deploy the software into production. We'd like to maintain this approach, as the developers (and everybody else) especially appreciate testing and deploying under the same conditions. So the idea would be to take a testing tool/library written in C that allows us to compile and run the tests in an embedded environment using a simulator (e.g. Unity, http://www.throwtheswitch.org/unity).
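For reference, a minimal Unity test looks roughly like the sketch below (based on Unity's documented API; checksum_add() is an invented function standing in for real code under test). Since Unity is plain C, the same test file can be compiled with arm-none-eabi-gcc and run on a target or simulator:

```c
#include "unity.h"   /* ships with Unity (unity.c/unity.h) */
#include <stdint.h>

static uint8_t checksum_add(uint8_t a, uint8_t b)
{
    return (uint8_t)(a + b);   /* wraps modulo 256 by design */
}

void setUp(void) {}      /* Unity runs this before each test */
void tearDown(void) {}   /* ...and this after each test      */

static void test_checksum_wraps_at_8_bits(void)
{
    TEST_ASSERT_EQUAL_UINT8(4, checksum_add(250, 10));
}

int main(void)
{
    UNITY_BEGIN();
    RUN_TEST(test_checksum_wraps_at_8_bits);
    return UNITY_END();
}
```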
But, on the other hand, we face two upcoming situations that give rise to the dilemma:
We're moving more and more to Cortex MCUs, where it's more difficult to get specific simulators that allow automation (e.g. the Renesas RA family).
Many of the advanced testing tools are developed in C++ and designed for PC environments, using the gcc/g++ compiler on an x86 architecture that doesn't match the Cortex targets we foresee compiling with arm-none-eabi-gcc.
So, at this point, we're wondering, and this is my question: how reliable can unit tests be when run using x86 gcc, if our final target is a Cortex MCU and the binaries will ultimately be generated using arm-none-eabi-gcc? Indirectly, I'd be asking about the differences between gcc and arm-none-eabi-gcc when compiling for different targets.
I'd appreciate feedback from someone who knows gcc internals and may have coped with the same kind of problem.
Thanks in advance,
Ignasi Villagrasa
Generally, simulators are useless, but especially so for production testing. Since it is an embedded system, you want to test software and hardware both - testing software without the intended MCU and hardware in place is just nonsense.
If you insist on using fluffware like simulators or PC "test suites" then realize:
It is an incomplete test which does not test core functionality of your product.
It cannot be used to test drivers/hardware-related code; it can only test abstract algorithms.
It can only be used for development testing, never for production testing.
As for how to correctly test your specific embedded system, it depends on the application and what the product is supposed to do. If you do your projects by the book then you have: Specification, leading to implementation, leading to tests. The sole purpose of a test is to verify that the implementation follows the specification.
So if the specification says that the product should activate 10 relays, you will need to flash the software onto the live MCU on the real PCB and a correctly performed test then verifies that all 10 relays get activated as they should.
This complete and correct product test cannot be done in any other way. So ask yourself if you actually need the incorrect and incomplete simulated test at all. Perhaps your development-related testing should focus on more meaningful things like design reviews, coding standards, static analysis, code reviews etc.
Related
I got some source code in plain C. It is built to run on ARM with a cross-compiler on Windows.
Now I want to do some white-box unit testing of the code. And I don't want to run the test on an ARM board because it may not be very efficient.
Since the C source code is instruction set independent, and I just want to verify the software logic at the C-level, I am wondering if it is possible to build the C source code to run on x86. It makes debugging and inspection much easier.
Or is there some proper way to do white-box testing of C code written for ARM?
Thanks!
BTW, I have read the thread: How does native android code written for ARM run on x86?
It seems not to be what I need.
ADD 1 - 10:42 PM 7/18/2021
The physical ARM hardware that the code targets may not be ready yet, so I want to verify the software logic at a very early phase. Based on John Bollinger's answer, I am thinking about another option: just build the binary as usual for ARM, then use QEMU to find a compatible ARM CPU to run the code. The code is guaranteed not to touch any special hardware I/O, so a compatible CPU should be enough to run all the code, I think. If this is possible, I need to find a way to let QEMU load my binary on a piece of emulated bare metal. And to get some output, I need to at least write a serial port driver to bridge my binary to the serial port.
ADD 2 - 8:55 AM 7/19/2021
Some more background: the C code targets the ARMv8 ISA, and it manipulates some hardware IPs which are not ready yet. I am planning to create a software HAL for those IPs and verify the C code over the HAL, as sketched below. If the HAL is good enough, everything can be purely software, and I guess the only missing part is an ARMv8-compatible CPU, which I believe QEMU can provide.
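To make that concrete, a software HAL could look roughly like this sketch. Every name here (the register model, the enable bit, the functions) is invented for illustration; the point is that the driver logic under test only ever goes through the function-pointer table, so a pure-software model and the future silicon are interchangeable behind it:

```c
#include <stdint.h>
#include <stdio.h>

/* The HAL: the code under test sees only this table. */
typedef struct {
    void     (*write_reg)(uint32_t index, uint32_t value);
    uint32_t (*read_reg)(uint32_t index);
} ip_hal_t;

/* Pure-software model of the not-yet-available hardware IP. */
#define MODEL_REG_COUNT 64u
static uint32_t model_regs[MODEL_REG_COUNT];

static void model_write(uint32_t index, uint32_t value)
{
    model_regs[index % MODEL_REG_COUNT] = value;
}

static uint32_t model_read(uint32_t index)
{
    return model_regs[index % MODEL_REG_COUNT];
}

static const ip_hal_t sw_model_hal = { model_write, model_read };

/* Driver logic under test: uses only the HAL, never raw addresses. */
static void ip_start(const ip_hal_t *hal)
{
    hal->write_reg(1u, hal->read_reg(1u) | 0x1u);  /* set enable bit */
}

int main(void)   /* tiny PC-side check against the software model */
{
    ip_start(&sw_model_hal);
    printf("enable reg = 0x%x\n", (unsigned)model_regs[1]);
    return model_regs[1] == 0x1u ? 0 : 1;
}
```

When the real IP arrives, a second ip_hal_t backed by volatile register accesses can replace the model without touching the verified logic.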
ADD 3 - 11:30 PM 7/19/2021
Just found this link. It seems QEMU user-mode emulation can be leveraged to run ARM binaries directly on x86 Linux. Will try it and get back later.
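For example, something like the following (a sketch; it assumes a Linux-targeting cross-compiler such as arm-linux-gnueabihf-gcc is installed, and note that user-mode QEMU runs Linux binaries via syscall translation, not bare-metal images):

```c
/* hello.c -- built for ARM, executed on an x86 Linux host:
 *
 *   arm-linux-gnueabihf-gcc -static -o hello hello.c
 *   qemu-arm ./hello
 *
 * Linking statically avoids pointing qemu-arm at an ARM sysroot;
 * for dynamic binaries use something like:
 *   qemu-arm -L /usr/arm-linux-gnueabihf ./hello
 * (sysroot path as on Debian/Ubuntu).
 */
#include <stdio.h>

int main(void)
{
    printf("Hello from ARM code on an x86 host\n");
    return 0;
}
```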
ADD 4 - 11:42 AM 7/29/2021
Some useful links (a weak-symbol sketch follows the list):
Override a function call in C
__attribute__((weak)) and static libraries
What are weak functions and what are their uses? I am using a stm32f429 micro controller
Why the weak symbol defined in the same .a file but different .o file is not used as fall back?
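Tying those links together, here is a minimal sketch of the weak-symbol override (all names invented for illustration). The firmware provides a weak default that would touch hardware; the test build links a strong definition over it. The two definitions must live in separate translation units:

```c
/* --- adc.c (firmware) -------------------------------------------- */
#include <stdint.h>

__attribute__((weak)) uint32_t read_adc(void)
{
    return 0;   /* on target: would read the real ADC register */
}

uint32_t read_millivolts(void)
{
    return (read_adc() * 3300u) / 4095u;   /* 12-bit ADC, 3.3 V ref */
}

/* --- test_adc.c (test build only) -------------------------------- */
#include <stdio.h>
#include <stdint.h>

uint32_t read_millivolts(void);

uint32_t read_adc(void)   /* strong definition wins at link time */
{
    return 2048u;         /* injected fake sample */
}

int main(void)
{
    printf("millivolts = %u\n", (unsigned)read_millivolts());
    return 0;             /* expect roughly half of 3300 */
}
```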
Now I want to do some white-box unit testing of the code. And I don't want to run the test on an ARM board because it may not be very efficient.
What does efficiency have to do with it if you cannot be sure that your test results are representative of the real target platform?
Since the C source code is instruction set independent,
C programs vary widely in how portable they are. This tends to be less related to the CPU instruction set than to target machine and implementation details such as data type sizes, byte endianness, memory size, and floating-point implementation, as well as implementation-defined and undefined program behaviors.
It is not at all safe to assume that, just because the program is written in C, it can be successfully built for a different target machine than the one it was developed for, or that, if it is built for a different target, its behavior there will be the same.
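A small illustration of the kind of divergence meant here (a sketch; which differences actually bite depends on the toolchains involved):

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Implementation-defined type sizes: many 32-bit ARM toolchains
     * make long 32 bits, while x86_64 Linux makes it 64 bits, so the
     * same arithmetic can overflow on one platform and not the other. */
    printf("sizeof(long) = %zu\n", sizeof(long));

    /* Endianness: code that reinterprets a byte buffer as a wider
     * integer (or vice versa) depends on byte order. */
    uint32_t word = 0x01020304u;
    uint8_t first_byte = *(const uint8_t *)&word;
    printf("first byte in memory: 0x%02x\n", first_byte);
    /* prints 0x04 on little-endian, 0x01 on big-endian */
    return 0;
}
```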
I am wondering if it is possible to build the C source code to run on x86. It makes debugging and inspection much easier.
It is probably possible to build the program. There are several good C compilers for various x86 and x86_64 platforms, and if your C code conforms to one of the language specifications then those compilers should accept it. Whether the behavior of the result is representative of the behavior on ARM is a different question, however (see above).
It may nevertheless be a worthwhile exercise to port the program to another platform, such as x86 or x86_64 Windows. Such an exercise would be likely to unmask some bugs. But this would be a project in its own right, and I doubt that it would be worth the effort if there is no intention to run the program on the new platform other than for testing purposes.
Or is there some proper way to do white-box testing of C code written for ARM?
I don't know what proper means to you, but there is no substitute for testing on the target hardware that you ultimately want to support. You might find it useful to perform initial testing on emulated hardware, however, instead of on a physical ARM device.
If you were writing ARM code for a Windows desktop application, there would be no difference for the most part and the code would just compile and run. My guess is you are developing for some device that does some specific task.
I do this for a lot of my embedded ARM code. Typically the core algorithms work just fine when built on x86, but the whole application does not. The problems come in with the hardware other than the CPU. For example, I might be using an LCD display, some sensors, and FreeRTOS on the ARM project, but the code that runs on Windows has none of these. What I do is extract important pieces of C/C++ code and write a test framework around it. In the real ARM code the device reads values from a sensor and does something with them. In the test code that runs on a desktop, the code reads from a data file with fake sensor values and writes its output to a data file that can be analyzed, as in the sketch below. This way I can have white-box tests for the most complicated code.
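A stripped-down sketch of that pattern (process_sample() and the file names are invented placeholders for the real extracted algorithm and data sets):

```c
#include <stdio.h>
#include <stdint.h>

/* The algorithm under test, compiled unchanged from the firmware. */
static int16_t process_sample(int16_t raw)
{
    return (int16_t)((raw * 9) / 10);   /* e.g. simple scaling */
}

int main(void)
{
    FILE *in  = fopen("fake_sensor_values.txt", "r");
    FILE *out = fopen("test_output.txt", "w");
    if (!in || !out)
        return 1;

    int value;
    while (fscanf(in, "%d", &value) == 1)          /* fake sensor feed */
        fprintf(out, "%d\n", process_sample((int16_t)value));

    fclose(in);
    fclose(out);
    return 0;   /* diff/analyze test_output.txt against expectations */
}
```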
May I ask, roughly what does this code do? An ARM processor with no peripherals would be kind of useless. Typically we use the processor to interact with some other hardware like a screen, some buttons, or Bluetooth. It's those interactions that are going to be the most problematic.
I can't figure out how a desktop environment developer tests his code. Usually, a C or C++ programmer compiles his code and then runs it (I'm not one of those programmers; I'm a web developer).
So, you usually build your GUI application on top of some kind of desktop environment (Windows, Mac OS X, GNOME, KDE, Xfce...), so how do the developers of those environments build and test their GUI desktops?
And if this is a silly question: how does a kernel programmer test his code? For example, the Linux kernel? How do you know that what you just wrote works?
Testing is a very broad term; there are many types (partial list):
unit tests - test small pieces of code. test that the code behaves as expected.
system tests - test whole application in real world scenarios.
performance tests - test what is the performance of the application or part of it.
GUI testing - test operation of GUI elements (not so common as automated tests)
static analysis - compiler warnings on steroids
dynamic analysis - at a minimum memory checks - check mem allocations and usage
coverage tests - check that all code is executed.
formal verification tests (very advanced) - e.g. check when assertions/assumptions are broken.
Kernel code can be debugged by connecting from a 2nd computer (host). Virtual machines use the same principle and simplify the setup, but they can't always work, as the HW might not exist in the guest VM.
The kernel (all OSes) has trace mechanism(s) for printing progress/problems. In Linux the simple trace is shown via the dmesg command (prints a cyclic buffer).
User mode code can easily be stopped and debugged via a debugger.
Desktop Environments
Testing desktop environments in real-world scenarios can be kind of annoying, so the developer has to watch out for every small error he makes; if he doesn't, he will have a hard time developing the DE.
As stated by @egur, there are multiple ways of testing his code. The easiest and most important one (though it cannot be used in some cases, of course) is to test that code in a simplified program.
A desktop environment consists of many parts; however, in your case, I suppose you're talking about the session manager (or window manager), which is responsible for almost everything. So, if he were to test that, he would simply exit his current DE and use the new executable. In case of some error, he can always keep a backup of the old executable or fix the faulty code using some command-line text editor (like vim or nano).
Kernel
It's quite hard to test. Some kernel developers just write some code, make sure it's fine and compiles, then simply let their users test it (by ACK'ing the code, etc.); then it can be submitted into the kernel code. The reasoning behind that is that the developer may not have the hardware needed to test the code.
Right now, you can compile and run the kernel in user mode (UML, User-Mode Linux), if you have heard of it, so some developers may go for that. However, some developers may also want to test it themselves (they of course back up the current kernel in case of a screw-up).
The way to test a desktop application is related to the way you control the application unattended or remotely.
The Cross Platform GUI Test Automation tool project (I don't know if this project has a website) helps you choose the interfaces/libraries required to solve the problem.
On Linux, LDTP[1] uses the accessibility libraries to control the application; you have Cobra[2] for Windows and PyATOM[3] for macOS, but I don't know what kind of technology is used on those platforms.
[1] http://ldtp.freedesktop.org/wiki/
[2] https://github.com/ldtp/cobra
[3] https://github.com/pyatom/pyatom
I want to run a simple hello-world app, written in C, on my AT91SAM9RL-EK.
Is it possible without an OS?
And (if it is) how do I have to compile it?
Right now I am trying to use CodeSourcery G++ Lite to create ARM code.
(In general, which programs can the board start without an OS: assembler, ARM code?)
Sure, no problem running without an operating system, I do that kind of thing daily...
http://sam7stuff.blogspot.com/
Your programs are, at least at first, not going to resemble desktop applications. I would avoid any C libraries, no printfs or strcmps or things like that, until you get the feel for it and find the right tools. No floating point as well. Add some numbers, do some shifting, blink some LEDs, as in the sketch below.
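For a flavor of what such a first program looks like, here is a hedged sketch of a register-level LED blinker. The base address, register offsets, and pin number are placeholders, not verified AT91SAM9RL values; check Atmel's datasheet and the board schematic, and note you still need startup code and a linker script (the sam7stuff link above shows how) before this will run:

```c
#include <stdint.h>

/* Placeholder addresses -- look up the real PIO controller base and
 * the LED pin for the AT91SAM9RL-EK before using this. */
#define PIO_BASE  0xFFFFF400u
#define PIO_PER   (*(volatile uint32_t *)(PIO_BASE + 0x00)) /* pin enable */
#define PIO_OER   (*(volatile uint32_t *)(PIO_BASE + 0x10)) /* output en. */
#define PIO_SODR  (*(volatile uint32_t *)(PIO_BASE + 0x30)) /* set high   */
#define PIO_CODR  (*(volatile uint32_t *)(PIO_BASE + 0x34)) /* set low    */
#define LED_PIN   (1u << 15)

void main(void)   /* entered from your startup code, no C runtime */
{
    volatile uint32_t i;
    PIO_PER = LED_PIN;                       /* hand the pin to the PIO */
    PIO_OER = LED_PIN;                       /* make it an output       */
    for (;;) {
        PIO_SODR = LED_PIN;                  /* LED on (or off)         */
        for (i = 0; i < 500000u; i++) ;      /* crude busy-wait delay   */
        PIO_CODR = LED_PIN;                  /* toggle                  */
        for (i = 0; i < 500000u; i++) ;
    }
}
```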
CodeSourcery Lite is probably the fastest way to get started; the GNU EABI one, I believe, is the one you want.
This WinARM site has a compiler and tons of non-OS projects for what seems like every ARM-based microcontroller.
http://www.siwawi.arubi.uni-kl.de/avr_projects/arm_projects/
Atmel is very, very good about information; no doubt they have example programs you can try on the eval board as well.
Emdebian is another cross-compiler that is somewhat up to date and has binaries. Building gcc from scratch for cross-compiling is not bad at all. The C library is another story though, and even the gcc library for that matter. I find it easier to do without either library.
It is possible to get a C library working and to run a great many kinds of programs; it depends on what you are looking to do. Ah, just looked at the specs: that is a pretty serious eval board, with plenty of power for an operating system should you choose to run one. You can certainly run programs that use the display as a user interface, read/write SD cards, USB, basically everything on the board, without an OS, if you choose.
Background:
I am developing a largish project using an Atmel AVR ATmega2560. This project contains a lot of hardware-based functions (7 SPI devices, 2 I2C, 2 RS485 MODBUS ports, lots of analogue and digital I/O). I have developed "drivers" for all of these devices which provide the main application loop with an interface to access the required data.
Question:
The project I am developing will eventually have to meet SIL standards.
I would like to be able to test the code and provide a good level of code coverage. However I am unable to find any information to get me started on how such a testing framework should be set up.
The idea is that I can have a suite of automated tests which will allow future bug fixes and feature additions to be tested to see if they break the code. The thing is I don't understand how the code can be tested on chip.
Do I require hardware to monitor the I/O on the device and emulate externally connected devices? Any pointers that could be provided would be highly appreciated.
--Steve
This is a very good question - a common concern for embedded developers. Unfortunately, most embedded developers aren't as concerned as you are and only test the code on real hardware. But as another answer pointed out, this can basically test just the nominal functionality of the code and not the corner/error cases.
There is no single and simple solution to this problem. Some guidelines and techniques exist, however, to do a relatively good job.
First, separate your code into layers. One layer should be "hardware agnostic", i.e. it reaches the hardware only through function calls; do not ask the user to write into HW registers directly. The other (lower) layer deals with the HW. This layer can be "mocked" in order to test the higher level. The lower level cannot really be tested without the HW, but it's not going to change often and needs deep HW integration, so that's not an issue.
A "test harness" will be all your high-level HW agnostic code with a "fake" lower level specifically for testing. This can simulate the HW devices for correct and incorrect functionality and thus allow you to run automated tests on the PC.
Never run unit tests on or against the real hardware. Always mock your I/O interfaces. Otherwise, you can't simulate error conditions and, more importantly, you can't rely on the test to succeed.
So what you need is to split your app into various pieces that you can test independently. Simulate (or mock) all hardware that you need for those tests and run them on your development PC.
That should cover most of your code and leaves you with the drivers. Try to make as much of the driver code as possible work without the hardware. For the rest, you'll have to figure out a way to make the code run on the hardware. This usually means you must create a test bed with external devices which respond to signals, etc. Since this is brittle (as in "your tests can't make this work automatically"), you must run these tests manually after preparing the hardware.
VectorCAST is a commercial tool to run unit tests on the hardware with code coverage.
Do you have a JTAG connector? You may be able to use JTAG to simulate error conditions on the chip.
I like to separate the tasks. For instance, when I made a circular buffer for my Atmel AVR, I wrote it all in Code::Blocks and compiled it with the regular GCC compiler instead of the AVR GCC compiler, then created a unit test for it. I used a special header file to provide the proper data types that I wanted to work with (uint8_t, for example). I found errors with the unit tests, fixed them, then took the fixed code over to AVR Studio and integrated it. After that I wrote support functions and ISRs to fit the buffer into useful code (i.e., pop one byte off the buffer, push it into the UART data output register, append a string constant to the buffer for a printf function, etc.). Then I used the AVR simulator to make sure that my ISRs and functions were being called and that the right data showed up in registers. After that I programmed it onto the chip and it worked perfectly.
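A condensed sketch of that workflow (the buffer below is a toy version, not the original): because the code sticks to <stdint.h> types, it compiles unchanged with desktop gcc for the test build and with avr-gcc for the target, and the test driver simply never ships:

```c
#include <assert.h>
#include <stdint.h>

#define BUF_SIZE 8u   /* power of two keeps the modulo cheap */

typedef struct {
    uint8_t data[BUF_SIZE];
    uint8_t head, tail;
} ringbuf_t;

static void rb_init(ringbuf_t *rb)        { rb->head = rb->tail = 0; }
static int  rb_empty(const ringbuf_t *rb) { return rb->head == rb->tail; }

static int rb_push(ringbuf_t *rb, uint8_t byte)
{
    uint8_t next = (uint8_t)((rb->head + 1u) % BUF_SIZE);
    if (next == rb->tail)
        return -1;                        /* full */
    rb->data[rb->head] = byte;
    rb->head = next;
    return 0;
}

static int rb_pop(ringbuf_t *rb, uint8_t *byte)
{
    if (rb_empty(rb))
        return -1;                        /* empty */
    *byte = rb->data[rb->tail];
    rb->tail = (uint8_t)((rb->tail + 1u) % BUF_SIZE);
    return 0;
}

int main(void)   /* desktop-only test driver */
{
    ringbuf_t rb;
    uint8_t out;
    rb_init(&rb);
    assert(rb_pop(&rb, &out) == -1);      /* empty buffer rejects pop */
    assert(rb_push(&rb, 0x42) == 0);
    assert(rb_pop(&rb, &out) == 0 && out == 0x42);
    return 0;
}
```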
I greatly prefer the debugging capabilities of Code::Blocks compared to AVR Studio, so I use the above approach whenever I can. When I can't, I'm usually dealing with hardware only. For instance, I have a timer that automatically produces a square wave. The best I could do was see that the pin bit was being twiddled in the simulator. After that I just had to hook a scope up and make sure.
I like to use a multi-level approach when debugging problems. For instance, with the clock, the first layer is 'put a probe on the clock pin and see if there's a signal there'. If not, probe the pin on the uC and look for the signal. Then, I coded a debug interface into one of my UARTs where I can look at specific register values and make sure they are what they're supposed to be. So if it doesn't work, the next step is 'call up the register value and ensure it's correct.'
Try to think ahead four steps or so whenever you're planning your debugging. There should be +5V here, but what if there isn't? Write into the debug interface a way to toggle the pin and see if that changes it. What if that doesn't work? Do something else, and so on. You still get to the point of 'I HAVE NO IDEA WHY THIS DANG THING DOESN'T WORK!!!!', but hopefully you'll have figured out the likely reasons beforehand.
I'm starting a new project and decided that I should give this unit testing thing that everybody keeps talking about a try.
The project is a set of C libraries (so no UI or user interaction testing is necessary) and aimed at being cross platform, with Linux, FreeBSD and Windows being my first priority and OS X planned once the first release is out the door (assuming I can get a hold of a machine running OS X to test on).
Does anybody have any experience or recommendations for a good C unit testing framework that easily works across multiple platforms?
I used to use CUnit, which I liked. Also, Google has open-sourced their C++ unit testing framework.
Back when I was writing C code all of my unit tests were ad-hoc, no framework involved, so I can't recommend any of these, but you may want to look at the list of C/C++ unit testing tools at http://www.opensourcetesting.org/unit_c.php.
I started with minunit, then evolved it to suit my requirements. Rather than taking test-suite support from a testing framework, I just use make to build and run different executables, each with a dozen or so related tests in them. So now I've got about 250 lines of macros, mainly for printing out different types or doing string or memory comparisons, and that's enough for now.
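For anyone unfamiliar with it, the classic minunit (JTN002) really is just a few lines of macros; the sketch below wraps them in a runnable example, and extensions like those described above are mostly extra assertion/printing macros layered on top:

```c
#include <stdio.h>

/* The core of minunit, essentially verbatim from JTN002. */
#define mu_assert(message, test) \
    do { if (!(test)) return message; } while (0)
#define mu_run_test(test) \
    do { const char *msg = test(); tests_run++; \
         if (msg) return msg; } while (0)

static int tests_run = 0;

static const char *test_addition(void)
{
    mu_assert("error: 1 + 1 != 2", 1 + 1 == 2);
    return 0;
}

static const char *all_tests(void)
{
    mu_run_test(test_addition);
    return 0;
}

int main(void)
{
    const char *result = all_tests();
    printf("%s\n", result ? result : "ALL TESTS PASSED");
    printf("Tests run: %d\n", tests_run);
    return result != 0;
}
```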
I've grown quite partial to TAP (scroll down near the bottom) because it's easy to drop in place. It stands for "Test Anything Protocol". I don't think you'd hit many portability issues with it if you're using compilers released in the last 5 years, but I've only tried it with gcc/glibc.
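For illustration, TAP output is simple enough to emit from plain C by hand. The sketch below shows the protocol itself (a plan line plus ok / not ok lines), not any particular libtap API; any TAP harness, such as prove, can consume it:

```c
#include <stdio.h>

static int test_num = 0;

/* Emit one TAP result line: "ok N - name" or "not ok N - name". */
static void ok(int passed, const char *name)
{
    printf("%s %d - %s\n", passed ? "ok" : "not ok", ++test_num, name);
}

int main(void)
{
    printf("1..2\n");                      /* the plan: 2 tests follow */
    ok(2 + 2 == 4, "arithmetic works");
    ok(sizeof(char) == 1, "sizeof(char) is 1");
    return 0;
}
```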
I use my own unit testing framework, CUnitWin32, for embedded C stuff. My host dev environment is Win32, so the framework that is available online is Windows-only. However, I did port it to Linux as well, so if the framework works for you, drop me a comment here and we'll work out a way to get you the Linux version.
D