Testing Code for Embedded Application - c

Background:
I am developing a largish project using an Atmel AVR ATmega2560. This project contains a lot of hardware-based functionality (7 SPI devices, 2 I2C devices, 2 RS485 MODBUS ports, and lots of analogue and digital I/O). I have developed "drivers" for all of these devices, which provide the main application loop with an interface to access the required data.
Question:
The project I am developing will eventually have to meet SIL standards.
I would like to be able to test the code and achieve a good level of code coverage. However, I am unable to find any information on how such a testing framework should be set up.
The idea is to have a suite of automated tests so that future bug fixes and feature additions can be checked for regressions. The thing is, I don't understand how the code can be tested on-chip.
Do I need hardware to monitor the I/O on the device and emulate externally connected devices? Any pointers would be highly appreciated.
--Steve

This is a very good question and a common concern for embedded developers. Unfortunately, most embedded developers aren't as concerned as you are and only test their code on real hardware. But as another answer pointed out, that basically tests only the nominal functionality of the code, not the corner/error cases.
There is no single, simple solution to this problem. Some guidelines and techniques exist, however, to do a relatively good job.
First, separate your code into layers. One layer should be hardware-agnostic: it exposes function calls and never asks the caller to write to HW registers directly. The other (lower) layer deals with the HW. This lower layer can be "mocked" in order to test the higher level. The lower level cannot really be tested without the HW, but it won't change often and requires deep HW integration, so that is less of an issue.
A "test harness" is then all of your high-level, HW-agnostic code plus a "fake" lower level built specifically for testing. The fake can simulate the HW devices behaving both correctly and incorrectly, and thus lets you run automated tests on a PC.

Never run unit tests on or against the real hardware. Always mock your I/O interfaces. Otherwise, you can't simulate error conditions and, more importantly, you can't rely on the tests to succeed.
So what you need is to split your app into various pieces that you can test independently. Mock (or simulate) all the hardware that those tests need and run them on your development PC.
That should cover most of your code and leave you with just the drivers. Try to make as much of the driver code as possible work without the hardware. For the rest, you'll have to figure out a way to run the code on the hardware. This usually means you must create a test bed with external devices which respond to signals, etc. Since this is brittle (as in "your tests can't set this up automatically"), you must run these tests manually after preparing the hardware.

VectorCAST is a commercial tool for running unit tests on the hardware with code coverage.

Do you have a JTAG connector? You may be able to use JTAG to simulate error conditions on the chip.

I like to separate the tasks. For instance, when I wrote a circular buffer for my Atmel AVR, I developed it all in Code::Blocks and compiled it with the regular GCC compiler instead of the AVR GCC compiler, then created unit tests for it. I used a special header file to provide the exact data types I wanted to work with (uint8_t, for example). I found errors with the unit tests, fixed them, then took the fixed code over to AVR Studio and integrated it. After that I wrote support functions and ISRs to fit the buffer into useful code (i.e., pop one byte off the buffer and push it into the UART data output register, append a string constant to the buffer for a printf function, etc.). Then I used the AVR simulator to make sure that my ISRs and functions were being called and that the right data showed up in registers. After that I programmed it onto the chip, and it worked perfectly.
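To give an idea of what such hardware-independent code can look like, here is a minimal circular buffer in the same spirit (the names and size are illustrative, not the actual code described above):

    /* ringbuf.h -- depends only on <stdint.h>/<stdbool.h>, so it compiles
     * unchanged under both avr-gcc and the host GCC. */
    #include <stdint.h>
    #include <stdbool.h>

    #define RINGBUF_SIZE 64u  /* a power of two keeps the index math cheap */

    typedef struct {
        uint8_t data[RINGBUF_SIZE];
        uint8_t head;
        uint8_t tail;
    } ringbuf_t;

    static inline bool ringbuf_push(ringbuf_t *rb, uint8_t byte)
    {
        uint8_t next = (uint8_t)((rb->head + 1u) % RINGBUF_SIZE);
        if (next == rb->tail)
            return false;                /* full -- caller decides what to do */
        rb->data[rb->head] = byte;
        rb->head = next;
        return true;
    }

    static inline bool ringbuf_pop(ringbuf_t *rb, uint8_t *byte)
    {
        if (rb->head == rb->tail)
            return false;                /* empty */
        *byte = rb->data[rb->tail];
        rb->tail = (uint8_t)((rb->tail + 1u) % RINGBUF_SIZE);
        return true;
    }

On the PC, a unit test can hammer push/pop with millions of iterations in a second, which would be impractical on the chip itself.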
I greatly prefer the debugging capabilities of Code::Blocks to those of AVR Studio, so I use the above approach whenever I can. When I can't, I'm usually dealing with hardware only. For instance, I have a timer that automatically produces a square wave. The best I could do was see that the pin bit was being toggled in the simulator. After that I just had to hook up a scope and make sure.
I like to use a multi-level approach when debugging problems. For instance, with the clock, the first level is "put a probe on the clock pin and see if there's a signal there". If not, probe the pin on the uC and look for the signal. Then I coded a debug interface into one of my UARTs, through which I can look at specific register values and make sure they are what they're supposed to be. So if it doesn't work, the next step is "call up the register value and ensure it's correct".
Try to think ahead four steps or so whenever you're planning your debugging. There should be +5V here, but what if there isn't? Build into the debug interface a way to toggle the pin and see if that changes anything. What if that doesn't work? Do something else, and so on. You eventually reach the point of "I HAVE NO IDEA WHY THIS DANG THING DOESN'T WORK!", but hopefully you'll have figured out the reason before then.
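To make the debug-interface idea concrete, here is a hypothetical sketch of the register-peek command; uart_printf() stands in for whatever serial output routine the project already provides.

    #include <stdint.h>
    #include <stdlib.h>

    extern void uart_printf(const char *fmt, ...);  /* assumed to exist */

    /* Handle a command such as "peek 0x25" received over the UART. */
    void debug_handle_command(const char *line)
    {
        if (line[0] == 'p') {
            /* Parse the address after "peek " and dump the byte there. */
            volatile uint8_t *reg =
                (volatile uint8_t *)(uintptr_t)strtoul(line + 5, NULL, 0);
            uart_printf("0x%02x\r\n", (unsigned)*reg);
        }
    }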


Automatic unit test

At our company we develop bare-metal embedded software for microcontrollers. Until now we have been running manual unit tests on targets or simulators, especially for Renesas microcontrollers (the RL78 and RX families). We are now planning to move to automatic unit tests, and the idea is to integrate them into our existing CI system.
At this point we've got a dilemma. Until now we've been running unit tests using the same compiler and target (or simulator) that is later used to deploy the software into production. We'd like to maintain this approach, as the developers (and everybody else) especially appreciate testing and deploying under the same conditions. So the idea would be to take a testing tool/library written in C that allows us to compile and run the tests in an embedded environment using a simulator (e.g. http://www.throwtheswitch.org/unity).
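For reference, Unity tests are plain C and look roughly like this; the module under test (crc16) is hypothetical:

    #include "unity.h"
    #include "crc16.h"    /* hypothetical module under test */

    void setUp(void) {}     /* runs before each test */
    void tearDown(void) {}  /* runs after each test */

    static void test_crc_of_empty_buffer_is_seed(void)
    {
        TEST_ASSERT_EQUAL_HEX16(0xFFFF, crc16(NULL, 0));
    }

    int main(void)
    {
        UNITY_BEGIN();
        RUN_TEST(test_crc_of_empty_buffer_is_seed);
        return UNITY_END();  /* non-zero on failure, so CI can use it */
    }

The same file builds with gcc for the PC or with arm-none-eabi-gcc for a target or simulator, provided character output is retargeted.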
But, on the other hand, we face two upcoming situations that give rise to the dilemma:
We're moving more and more to Cortex MCUs, where it's harder to get specific simulators that allow automation (e.g. the Renesas RA family).
Many of the advanced testing tools are developed in C++ and designed for a PC environment, using the gcc/g++ compiler on an x86 architecture that doesn't match the Cortex targets, compiled with arm-none-eabi-gcc, that we foresee using.
So, at this point, we're wondering, and this would be my question: how reliable can unit tests be when they are run using native gcc while the final target is a Cortex MCU and the binaries are ultimately generated with arm-none-eabi-gcc? Indirectly, I'm asking about the differences between gcc and arm-none-eabi-gcc when compiling for different targets.
I'd appreciate feedback from someone familiar with gcc internals who may have dealt with the same kind of problem.
Thanks in advance,
Ignasi Villagrasa
Generally, simulators are useless, and especially so for production testing. Since it is an embedded system, you want to test software and hardware both; testing software without the intended MCU and hardware in place is just nonsense.
If you insist on using fluffware like simulators or PC "test suites" then realize:
It is an incomplete test which does not test core functionality of your product.
It cannot be used to test drivers/hardware-related code, it can only test abstract algorithms.
It can only be used for development testing, never for production testing.
As for how to correctly test your specific embedded system, it depends on the application and what the product is supposed to do. If you do your projects by the book then you have: Specification, leading to implementation, leading to tests. The sole purpose of a test is to verify that the implementation follows the specification.
So if the specification says that the product should activate 10 relays, you will need to flash the software onto the live MCU on the real PCB and a correctly performed test then verifies that all 10 relays get activated as they should.
This complete and correct product test cannot be done in any other way. So ask yourself if you actually need the incorrect and incomplete simulated test at all. Perhaps your development-related testing should focus on more meaningful things like design reviews, coding standards, static analysis, code reviews etc.

Is it possible to build C source code written for ARM to run on x86 platform?

I got some source code in plain C. It is built to run on ARM with a cross-compiler on Windows.
Now I want to do some white-box unit testing of the code. And I don't want to run the tests on an ARM board, because that may not be very efficient.
Since the C source code is instruction-set independent, and I just want to verify the software logic at the C level, I am wondering if it is possible to build the C source code to run on x86. That would make debugging and inspection much easier.
Or is there some proper way to do white-box testing of C code written for ARM?
Thanks!
BTW, I have read the thread: How does native android code written for ARM run on x86?
It seems not to be what I need.
ADD 1 - 10:42 PM 7/18/2021
The physical ARM hardware that the code targets may not be ready yet, so I want to verify the software logic at a very early phase. Based on John Bollinger's answer, I am thinking about another option: just build the binary as usual for ARM, then use QEMU with a compatible ARM CPU to run the code. The code is guaranteed not to touch any special hardware I/O, so a compatible CPU should be enough to run all of it, I think. If this is possible, I need to find a way to let QEMU load my binary on a piece of emulated bare metal. And to get some output, I need to write at least a serial-port driver to bridge my binary to the serial port.
ADD 2 - 8:55 AM 7/19/2021
Some more background: the C code targets the ARMv8 ISA, and it manipulates some hardware IPs which are not ready yet. I am planning to create a software HAL for those IPs and verify the C code against that HAL. If the HAL is good enough, everything can be purely software, and I guess the only missing part is an ARMv8-compatible CPU, which I believe QEMU can provide.
ADD 3 - 11:30 PM 7/19/2021
Just found this link. It seems QEMU user-mode emulation can be leveraged to run ARM binaries directly on x86 Linux. Will try it and get back later.
ADD 4 - 11:42 AM 7/29/2021
And some useful links:
Override a function call in C
__attribute__((weak)) and static libraries
What are weak functions and what are their uses? I am using a stm32f429 micro controller
Why the weak symbol defined in the same .a file but different .o file is not used as fall back?
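To summarize what those links cover, here is a minimal sketch of the weak-symbol technique; read_adc() is a made-up example:

    /* board_io.c -- production code provides a weak default */
    #include <stdint.h>

    __attribute__((weak)) uint32_t read_adc(void)
    {
        /* the real implementation would touch the hardware here */
        return 0;
    }

    /* test_stub.c -- the test build adds a strong definition; the linker
     * picks it over the weak one with no source changes needed */
    #include <stdint.h>

    uint32_t read_adc(void)
    {
        return 1234;  /* canned value for the test */
    }

One caveat from the last link: the linker only pulls members out of a static library (.a) when they resolve an undefined symbol, which changes how weak/strong resolution plays out across archives.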
Now I want to do some white-box unit testing of the code. And I don't want to run the tests on an ARM board, because that may not be very efficient.
What does efficiency have to do with it if you cannot be sure that your test results are representative of the real target platform?
Since the C source code is instruction-set independent,
C programs vary widely in how portable they are. This tends to be less related to the CPU instruction set than to the target machine and implementation details such as data type sizes, endianness, memory size, and floating-point implementation, as well as to implementation-defined and undefined program behaviors.
It is not at all safe to assume that just because the program is written in C, it can be successfully built for a different target machine than the one it was developed for, or that, if it is built for a different target, its behavior there will be the same.
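A small illustration of the kind of divergence meant here; the behavior noted in the comments is typical, not universal:

    #include <stdio.h>

    int main(void)
    {
        /* Implementation-defined: plain char is signed on typical x86
         * ABIs but unsigned under the ARM AAPCS. Same source, different
         * branch taken. */
        char c = (char)0x80;
        if (c < 0)
            printf("plain char is signed here (typical x86)\n");
        else
            printf("plain char is unsigned here (typical ARM)\n");

        /* Type sizes differ too: long is commonly 8 bytes on x86_64
         * Linux but 4 bytes on a 32-bit ARM target. */
        printf("sizeof(long) = %zu\n", sizeof(long));
        return 0;
    }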
I am wondering if it is possible to build the C source code to run on x86. That would make debugging and inspection much easier.
It is probably possible to build the program. There are several good C compilers for various x86 and x86_64 platforms, and if your C code conforms to one of the language specifications then those compilers should accept it. Whether the behavior of the result is representative of the behavior on ARM is a different question, however (see above).
It may nevertheless be a worthwhile exercise to port the program to another platform, such as x86 or x86_64 Windows. Such an exercise would be likely to unmask some bugs. But this would be a project in its own right, and I doubt that it would be worth the effort if there is no intention to run the program on the new platform other than for testing purposes.
Or is there some proper way to do white-box testing of C code written for ARM?
I don't know what proper means to you, but there is no substitute for testing on the target hardware that you ultimately want to support. You might find it useful to perform initial testing on emulated hardware, however, instead of on a physical ARM device.
If you were writing ARM code for a Windows desktop application, there would be no difference for the most part and the code would just compile and run. My guess is that you are developing for some device that does a specific task.
I do this for a lot of my embedded ARM code. Typically the core algorithms work just fine when built on x86, but the whole application does not. The problems come in with the hardware other than the CPU. For example, I might be using an LCD display, some sensors, and FreeRTOS on the ARM project, but the code that runs on Windows has none of these. What I do is extract the important pieces of C/C++ code and write a test framework around them. In the real ARM code the device reads values from a sensor and does something with them; in the test code that runs on a desktop, it reads fake sensor values from a data file and writes its output to another data file that can be analyzed. This way I can have white-box tests for the most complicated code.
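A bare-bones sketch of that kind of harness, where filter_step() is a stand-in for the real algorithm under test:

    #include <stdio.h>

    extern int filter_step(int raw);  /* the unchanged production code */

    int main(void)
    {
        FILE *in  = fopen("fake_sensor.txt", "r");  /* fake sensor values */
        FILE *out = fopen("filter_out.txt", "w");   /* results to analyze */
        int raw;

        if (!in || !out)
            return 1;

        while (fscanf(in, "%d", &raw) == 1)
            fprintf(out, "%d\n", filter_step(raw));

        fclose(in);
        fclose(out);
        return 0;
    }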
May I ask roughly what this code does? An ARM processor with no peripherals would be kind of useless. Typically we use the processor to interact with some other hardware like a screen, some buttons, or Bluetooth, and it's those interactions that are going to be the most problematic.

Question to any embedded systems engineers out there using STM32 NUCLEO

I have recently bought an STM32 NUCLEO dev kit and wondered whether this is what an actual embedded systems engineer would use in industry when developing a product.
I'm using Keil µVision 5, STM32CubeMX and the STM32 ST-LINK Utility to develop certain projects. As I am used to PIC and working with registers like PORTA, OSCCON, TIMER0, etc., I see that Keil µVision 5 uses ready-made functions like HAL_GPIO_TogglePin(...). Is this the usual way it's done in industry, or do engineers work more directly with the registers?
This is heavily opinion-based, and I wouldn't be surprised if this question got closed for that reason. This answer only touches on a few aspects of what you're asking about; it's a very broad topic, and it would be hard, if not impossible, to cover everything in one post that doesn't end up several pages long. However, to give you my perspective on the topic while trying to remain unbiased, the short answer is... it depends.
If you're asking about what is used in most common cases, it's likely going to be the HAL (previously StdPeriph) functions you've mentioned. The reason: they get the job done in most common cases. After all, it always comes down to what the cost of creating a product is going to be. If the HAL functions are "good enough" for the purpose, they're going to be used, simply because they're faster to develop with. The higher the development cost, the more you'll want to cut it (or move it elsewhere), and using abstractions is one way of doing so.
However, even though I think it's safe to assume that HAL / StdPeriph / any other (including proprietary) abstraction layer is generally used, that's not always the case, for at least two reasons I can think of:
Existing functions may not be suitable for your purpose. Taking HAL as an example, it works pretty well in most common cases, but sometimes your needs are so specific that you'll have to go and mess "under the hood", often ending up either writing your own variation of the functions or building something new on top of HAL. Personally, I can think of at least a few cases where the HAL functions weren't exactly what I needed. That doesn't necessarily mean the library is bad; it's just that sometimes the requirements are very specific.
Messing with registers directly may sometimes be required for performance reasons. HAL and the like are abstraction layers, and like any abstraction they take more time to execute than manipulating the registers directly. If you're trying to squeeze the absolute maximum out of a given peripheral, you'll sometimes have to go down to the register level.
Now to the more biased portion of my answer... I can see why you ask this question. Coming from the PIC world, where flash and CPU clocks are more precious, it does make sense to use registers directly there. In the case of STM32, it's not as critical anymore. Having said that, you'll sometimes stumble upon opinions that "using registers is the only true way", but personally I find such discussions end up being purely academic. I see registers, and any abstraction built on top of them, as tools, and you should use the right tool for the job. Two examples of NOT using the right tools:
You use only registers as "the only right way", either because you believe it yourself or because you've been told so. Your products take twice as long (if not more) to develop, and your code takes less space in flash (so now you use 46% of the 1 MB flash instead of 48%). Code that is performance-critical meets its goals. Code with relaxed execution-time constraints is also super efficient, but that doesn't affect the end customer much, if at all. Your code is also less reusable: you find yourself rewriting the same portions of code over and over, every time you release a new product on a new MCU family.
You only use HAL / any other similar abstraction, because "you didn't pick such a powerful MCU just to end up going down to the register level", or because you've been told you should never, ever touch registers. You develop much faster and are able to release two products instead of just one. However, when there are execution-time constraints or transmission speeds you have to hit, you find yourself picking MCUs more powerful than should theoretically be needed. Sometimes you find yourself writing wrappers around HAL functions because they don't give you exactly the functionality you need, and it feels like making things more complicated than they should be.
So, after all, if there is anything to take away from what I'm trying to say, it's that you should use what is suitable for the job on a case-by-case basis. In the case of STM32, you nowadays have three options: HAL (the top abstraction level), HAL LL (a low-level abstraction, often simple wrapper functions around register accesses), or using the registers directly. Which one you choose should follow from your requirements.
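To make the three options concrete, here is roughly the same pin toggle at each level, using PA5 (the user LED on many Nucleo boards). The header and macro names follow ST's STM32CubeF4 package and will differ for other families:

    #include "stm32f4xx_hal.h"
    #include "stm32f4xx_ll_gpio.h"

    void toggle_three_ways(void)
    {
        /* 1. HAL: most portable, most overhead */
        HAL_GPIO_TogglePin(GPIOA, GPIO_PIN_5);

        /* 2. LL: thin inline wrappers around the registers */
        LL_GPIO_TogglePin(GPIOA, LL_GPIO_PIN_5);

        /* 3. Registers directly, per the reference manual: XOR bit 5 of
         * the output data register (note this read-modify-write is not
         * atomic; BSRR is the interrupt-safe way to set/reset pins) */
        GPIOA->ODR ^= (1u << 5);
    }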
I use Nucleo boards all the time. They allow me to start writing software before I have the actual hardware ready.
Choosing between the HAL and registers is the programmer's choice rather than an "industry standard". I personally use the HAL drivers when programming more complicated peripherals like USB and Ethernet, to avoid writing the full stacks from scratch.
I have just finished my degree and we heavily used the STM32 platform with CMSIS and HAL.
It is a matter of preference. The HAL libraries offer higher abstraction but come with some quirks that are not very intuitive, and the HAL was (and sometimes still is) buggy. We encountered SPI transfer bugs which made the higher SPI transfer rates unusable because of a delay in the per-byte transfer.
CMSIS offers lower-level access but still abstracts away the raw bit manipulation. I don't think direct register access is a great way to program anymore, and at the very least CMSIS should be used. But it is still a matter of opinion, preference and what is right for the job at hand. If you need something quick: HAL. If you need really fine control: CMSIS.
(Side note: I believe CMSIS is being phased out in favor of HAL, but it is still usable at this time.)
All of the other answers are valid.
Just wanted to chime in and say I have used STM32 regularly at my job (at two different companies). We used the HAL drivers at both. There are obviously some issues with them, but they are used by enough people that you can easily find support online. Additionally, CubeMX does a decent job with them, at least enough to get you started on the peripherals, so the zero-to-something step feels smaller. But to get really optimized code and design, you may want to dive deeper to understand what each of the HAL drivers is actually doing, and then decide which method works for your project, goals, and requirements.

how to code for an arm processor

Hey, I'm fairly new to coding. I joined a coding club because I have an interest in the growing world of technology. The topics lately have been about computer architecture, and in the group we have been talking about the ARM processor.
Our club captain notified us of the following:
"Develop a software product for an ARM processor to do a task of your choice. However, before you start the task, you need to send me a paragraph to let me know what you are trying to solve."
Does anyone know of any references or code examples I could look up? Or any other helpful links?
The question does not make a lot of sense as asked. The Arm architecture spans a wide range: fairly small microcontrollers aimed at low-power or fast-response tasks, mid-range devices which are good for set-top-box, camera or router embedded applications, smartphones, right up to server-class products.
There are also variants which are designed for safety critical systems, or high reliability.
Each specific application will require different features, but at the level at which you are writing code, you are unlikely to appreciate the factors which would help you choose a device. Is it something portable, or needing a long battery life, or providing secure communication? If you had a specific device which you needed to use, the application would be more obvious (but still very open-ended).
I am going to answer a more generic question: how to code for an ARM processor, or for any processor.
So, the first step is to understand what an ISA is! For a beginner, the ISA is the manual the company releases, which acts as the information source for anyone who wishes to code for their specific processor. (In the computer architecture world an ISA has a lot more implications; what I have given is somewhat of an oversimplification.) Now, I would recommend that you start looking at the ARM ISA. Go through the instructions that ARM supports; understand all the features in ARM, the state of the machine, the registers, the addressing modes, etc. Get a good understanding of what your version of ARM specifically supports (NEON, Thumb, vector instructions?).
Once you have the hang of the ISA, the next step is to decide what you want to build, since you are now aware of everything your processor is capable of doing. You know your limitations as well, so now is a good time to come up with an idea.
The next step is setting up your environment. Read the specs of the board you will be using, set up the toolchain, and get to know the compiler/assembler and the flags they support.
Finally, write a simple, basic program. I usually follow this convention: for assembly, addition of two numbers is my first program; for a high-level language (C, C++, Java), it's Hello World. Execute your code and observe the output. For that first program, I recommend stepping through it in your debugger and observing the state of the machine after every instruction (keep the ISA manual handy).
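For example, a first program for this flow might be nothing more than the following, built with something like arm-none-eabi-gcc -g -O0:

    /* add.c -- step through this in the debugger and watch r0/r1 carry
     * the arguments and r0 the result, as the AAPCS calling convention
     * describes. */
    int add(int a, int b)
    {
        return a + b;
    }

    int main(void)
    {
        volatile int result = add(2, 3);  /* volatile keeps it observable */
        (void)result;
        for (;;) ;  /* bare-metal programs have nowhere to return to */
    }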
Slowly up your game: write programs with functions, strings, etc. Finally, get yourself some fun peripherals and learn to interface with them (LEDs, LCDs, sensors, a camera).
Cheers!

Run executable on MINI2440 with NO OS

I have Fedora installed on my PC, and I have a FriendlyARM Mini2440 board. I have successfully installed a Linux kernel and everything is working. Now I have an image-processing program which I want to run on the board without an OS; the only process running on the board should be my program. Within that program, how can I access the on-board camera to capture images, and the serial port to send output to the PC?
You're talking about what is often called a bare-metal environment. Google can help you, for example here. In a bare-metal environment you have to have a good understanding of your hardware because you have to take care of a lot of things that the OS normally handles.
I've been working (off and on) on bare-metal support for my ELLCC cross development tool-chain. I have the ARM implementation pretty far along but there is still quite a bit of work to do. I have written about some of my experiences on my blog.
First off, you have to get your program started. You'll need to write some start-up code, usually in assembly, to handle the initialization of the processor as it comes out of reset (or is powered on). The start-up code then typically passes control to code written in C that ultimately directly or indirectly calls your main() function. Getting to main() is a huge step in your bare-metal adventure!
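To give a flavor of it, here is a minimal sketch of the C-side startup. The linker-script symbol names are assumptions that vary between projects, and stack-pointer and clock setup are omitted:

    #include <stdint.h>

    extern uint32_t _sidata;         /* .data initial values, in flash */
    extern uint32_t _sdata, _edata;  /* .data bounds in RAM */
    extern uint32_t _sbss, _ebss;    /* .bss bounds in RAM */

    extern int main(void);

    void reset_handler(void)
    {
        uint32_t *src = &_sidata;
        uint32_t *dst;

        /* Copy initialized data from flash to RAM */
        for (dst = &_sdata; dst < &_edata; )
            *dst++ = *src++;

        /* Zero the .bss section */
        for (dst = &_sbss; dst < &_ebss; )
            *dst++ = 0;

        main();

        for (;;) ;  /* main() should never return on bare metal */
    }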
Next, you need to decide how to support your hardware's I/O devices, which in your case include the camera and the serial port. How much of the standard C (or C++) library does your image processing require? You might need to add support for functions like printf() or malloc() that normally need some kind of OS support. A simple "hello world" would be a good thing to try next.
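As one example of the glue that entails: with a newlib-based toolchain (such as arm-none-eabi-gcc), printf() can be routed to a serial port just by providing the _write() stub, where uart_putc() is assumed to be your own polled UART driver:

    extern void uart_putc(char c);  /* your own driver, assumed to exist */

    /* newlib calls this for every stream write; send it all to the UART */
    int _write(int fd, char *buf, int len)
    {
        (void)fd;
        for (int i = 0; i < len; i++)
            uart_putc(buf[i]);
        return len;  /* number of bytes "written" */
    }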
ELLCC has examples of various levels of ARM bare-metal in the examples directory. They range from a simple main() up to and including MMU and TCP/IP support. The source for all of it can be browsed here.
I started writing this before I left for work this morning and didn't have time to finish. Both dwelch and Clifford had good suggestions. A bootloader might make your job a lot simpler and documentation on your hardware is crucial.
First you must realise that without an OS, you are responsible for bringing the board up from reset, including configuring the PLL and SDRAM, and also for the driver code for every device on the board you wish to use. Doing that requires adequate documentation of the board and its devices.
It is possible that you can use the existing bootloader to configure the core and SDRAM, but that may not meet your requirement that the only process running on the board be your image-processing program.
Additionally, you will need some means of loading and bootstrapping your program; again, the existing Linux bootloader may suit.
It is by no means straightforward and cannot really be described in detail here.
