Writing a device library in C/C++ for STM32 or ARM

I need to develop device libraries (uBlox GNSS, IMUs, BLE modules, etc.) more or less from scratch. Is there any documentation or tutorial that can help me?
The question is: how do you write a device library in C/C++ (Arduino style, if you like), given a datasheet and a platform like STM32 or another ARM?
Thanks so much.
I've tried reading device libraries from the Arduino ecosystem and various GitHub repositories, but I would like a guide/template (general rules) to follow for writing proper device libraries from a given datasheet.
I'm not asking for a full, definitive guide, just where to start: docs, methods, approaches.
I've found the one below, but it is very basic and quite light for my purposes.
http://blog.atollic.com/device-driver-development-the-ultimate-guide-for-embedded-system-developers

I don't think you can actually write libraries for STM32 in Arduino style. Most Arduino libraries you find in the wild promote ease of use rather than performance. For example, a simple library designed for a specific sensor works well if reading the sensor and reporting the results via the serial port is the only thing the firmware must do. When you work on more complex projects, where the microcontroller has a lot to do and must satisfy real-time constraints, the general Arduino approach doesn't solve your problems.
The problem with STM32 library development is the complex interconnection between peripherals, DMA and interrupts. I code at the register level, without using the Cube framework, and I often find myself digging through the reference manual for the tables that show the connections between DMA channels, or things like timer master-slave relations. Some peripherals (timers, mostly) work similarly, but each one has small differences. That makes developing a hardware library that fits all scenarios practically impossible.
The tasks you need to accomplish are also more complex in STM32 projects. For example, in one of my projects I fool the SPI peripheral with a dummy DMA transfer triggered by a timer, so that it generates periodic 8-pulse trains from its clock pin (the data pins are unused). No library can provide that kind of flexibility.
Still, I believe not all is lost. I think it may be possible to build a hardware abstraction layer (a HAL, but not The HAL by ST). So it is possible to create useful libraries if you can abstract them from the hardware. A USB library is a good example of this approach: the STM32 devices have roughly three different USB peripheral hardware variants, and it makes sense to write a separate HAL for each one of them. The upper application layer, however, can be the same.
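To make that concrete, here is a minimal sketch of what such a hand-rolled abstraction could look like in C. All names are illustrative, not taken from any real library:

    /* Hypothetical hardware abstraction for a USB peripheral: the upper
       (application) layer talks only to this interface, and each STM32
       USB hardware variant gets its own implementation. */
    #include <stddef.h>
    #include <stdint.h>

    typedef struct usb_hal {
        int (*init)(void);
        int (*write_ep)(uint8_t ep, const uint8_t *buf, size_t len);
        int (*read_ep)(uint8_t ep, uint8_t *buf, size_t maxlen);
    } usb_hal_t;

    /* One implementation per hardware variant, selected at link time. */
    extern const usb_hal_t usb_hal_fs;   /* device-only FS peripheral */
    extern const usb_hal_t usb_hal_otg;  /* OTG-based peripheral      */

    /* The device-class layer only ever sees the interface. */
    int cdc_send(const usb_hal_t *hal, const uint8_t *data, size_t len)
    {
        return hal->write_ep(1 /* bulk IN endpoint */, data, len);
    }

The application layer stays identical across parts; only the usb_hal_* implementations change.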
Maybe that was the reason ST created the Cube framework. But as you know, Cube relies on external code-generation tools which are aware of the hardware of each device, so some of the work can be avoided at runtime. You can't achieve the same result when you write your own libraries unless you also design a similar external code-generation tool. And the code Cube generates is bloated in most cases: you trade development time for runtime performance and code space.

I assume you will be using a cross toolchain on some platform like Linux, and that the cross toolchain is compatible with some method to load object code on the target CPU. I also assume that you already have a working STM32 board that is documented well enough to figure out how the sensors will connect to the board or to the CPU.
First, you should define what your library is supposed to provide. This part is usually surprisingly difficult. It’s a bit hard to know what it can provide, without knowing a bit about what the hardware sensors are capable of providing. Some iteration on the requirements is expected.
You will need access to the documentation for the sensors, usually in the form of the manufacturer's datasheets. Using the datasheet, and knowing how the device is connected to the target CPU/board, you will need to access the STM32 peripherals that make up the interface to the sensors. Then it's back to the datasheets, this time for the STM32, to see how to access its peripheral interfaces. That might be simple GPIO bits and bytes, or it might mean using built-in peripherals such as SPI or I2C.
The datasheets for the sensors will detail a bunch of registers, describing the meaning of each, including the meanings of each bit, or group of bits, in certain registers. You will write code in C that accesses the STM32 peripherals, and those peripherals will access the sensors across the electrical interface that is part of the STM32 board.
The workflow usually starts with writing to a register or three to see if there is some identifiable effect. For example, if you are exercising a digital IO port, you might wire up an LED to see if you can turn it on or off, or a switch to see if you can correctly read its state. This establishes that your code can poke or peek at IO using register-level access. There may be existing helper functions for this as part of the cross toolchain, or you might have to develop your own, using pointer indirection to access memory-mapped IO. Or there might be special instructions needed that can only be accessed from inline assembler code. This answer is generic, as I don't know the specifics of the STM32 processor or its typical ecosystem.
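On the STM32 specifically, the pointer-indirection approach might look like the sketch below (addresses are for an STM32F4 with the user LED on PA5, as on many Nucleo boards; verify them against the reference manual for your exact part):

    #include <stdint.h>

    /* Memory-mapped IO via pointer indirection (STM32F4 example). */
    #define RCC_AHB1ENR  (*(volatile uint32_t *)0x40023830u)
    #define GPIOA_MODER  (*(volatile uint32_t *)0x40020000u)
    #define GPIOA_BSRR   (*(volatile uint32_t *)0x40020018u)

    void led_init(void)
    {
        RCC_AHB1ENR |= (1u << 0);          /* enable the GPIOA clock    */
        GPIOA_MODER &= ~(3u << (5 * 2));   /* clear mode bits for pin 5 */
        GPIOA_MODER |=  (1u << (5 * 2));   /* general-purpose output    */
    }

    void led_set(int on)
    {
        /* BSRR: writing the low half sets a pin, the high half resets
           it, so no read-modify-write is needed. */
        GPIOA_BSRR = on ? (1u << 5) : (1u << (5 + 16));
    }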
Then you move on to more complex operations that might involve sequences of operations, like cycling a bit or two to effect some communication with the device. Or it might be as simple as finding the proper sequence of register accesses to operate a SPI interface. Often you will find that small chunks of code are complete enough to be reused by your driver, like how to read or write an individual byte. You can then make that a reusable function to simplify the rest of the work, like accessing certain registers in sequence and printing the contents of the registers you read to see if they make sense. Ultimately, you will have two important pieces of information: an understanding of the low-level register accesses needed to create a formal driver, and an understanding of what components and capabilities make up the hardware (i.e., you know how the device(s) work).
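As a sketch of what such a reusable byte function might look like on an STM32 (polled SPI, register layout as in the F4 reference manual; verify the addresses and bits for your part):

    #include <stdint.h>

    /* Polled full-duplex byte exchange on SPI1; assumes the peripheral
       is already configured. STM32F4-style register layout. */
    #define SPI1_SR      (*(volatile uint32_t *)0x40013008u)
    #define SPI1_DR      (*(volatile uint32_t *)0x4001300Cu)
    #define SPI_SR_RXNE  (1u << 0)
    #define SPI_SR_TXE   (1u << 1)

    uint8_t spi1_xfer(uint8_t out)
    {
        while (!(SPI1_SR & SPI_SR_TXE)) { }   /* wait: TX buffer empty */
        SPI1_DR = out;
        while (!(SPI1_SR & SPI_SR_RXNE)) { }  /* wait: byte received   */
        return (uint8_t)SPI1_DR;
    }

Reading a sensor register then becomes: assert chip select, spi1_xfer(reg_address), spi1_xfer(0x00) to clock the reply out, release chip select.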
Now, throw away most of what you've done and develop a formal spec. Use what you now know to include everything that can be useful, and to design an appropriate API that your application code can use. Rewrite the driver, armed with the knowledge of how all the pieces work and taking advantage of the blank canvas afforded by the fresh spec. Only reuse code that you are completely confident is optimal and appropriate to the form dictated by the spec. Write test code for all of the modules, and use the test code to verify that the code works and conforms to the spec. Re-run the test code every time you modify anything it tests.
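Much of a driver is pure logic that can be tested on the host without any hardware. A tiny sketch, using a hypothetical function that packs a register address and read flag into the command byte a datasheet might specify:

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical: many SPI sensors use bit 7 as the read flag. */
    static uint8_t make_reg_cmd(uint8_t addr, int read)
    {
        return (uint8_t)((addr & 0x7Fu) | (read ? 0x80u : 0x00u));
    }

    int main(void)
    {
        assert(make_reg_cmd(0x0F, 1) == 0x8F);  /* read access  */
        assert(make_reg_cmd(0x20, 0) == 0x20);  /* write access */
        puts("driver packing tests passed");
        return 0;
    }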


Can code written for the TI MSP432 ARM Cortex-M4 be ported automatically to other Cortex-M4 microcontrollers?

I will preface by saying that I am fairly new to programming at the hardware level and that I'm interested in building apps based on the MSP432 microcontroller by Texas Instruments.
I understand that to program this controller, one writes C code, links against the MSPWare library/drivers, and compiles with gcc. Is it possible to take code written for this controller and deploy it to other controllers that are also based on the Cortex-M4 32-bit architecture? What sorts of differences are there between the various implementations of the Cortex-M4?
Generally, no. Unlike an x86 PC or a Mac, where the masses are used to one operating system, and that operating system allows for a lot of backward compatibility (you write a program today on a Dell and it works on an Acer, and will probably run for another 10 years or more on your daily-driver computer; at least, many programs 10 or more years old run today), there is no such common platform here. In 10 years, today's programs probably won't run anyway (on your phone or brain implant).
The Cortex-M4 is a processor core. ARM doesn't make chips; they make processor cores that chip companies buy and surround with chip-company stuff. So instead of "I can drive a car; move me from one car to another and there is a pretty good chance I can drive it", this is "I am a specific-sized tire; move me from one car to another and it is quite likely I don't fit".
Almost all of the code in the libraries that you are making calls to is for things within the chip but outside the ARM core: the chip-vendor-specific stuff. So while a particular chip vendor may make libraries that are close to the same from one of their ARM chips to another, within a class of chips or within the same production time frame, that doesn't mean the code, the APIs, or how the peripherals work is in any way portable from one family of chips in that company to another, and certainly not from one chip vendor to another. Your TI code is likely nothing close to what you would see on an Atmel, NXP or ST ARM-based chip.
Now, having said that, there are folks trying. The mbed stuff is an attempt to be Arduino-like, where Arduino is an attempt to make a high-enough-level set of libraries and port them to specific boards (which are mostly within a family of chips from one vendor). There are some ARM-based attempts to make Arduino libraries such that code developed on a real Arduino will compile for these ARM-based things and just work, but those ARM-based things are specific boards designed to be Arduino compatible, and the libraries are thick and hold all the conversion magic from the AVR/Atmel peripherals to whatever ARM-based chip was chosen.
mbed is probably closer to it: originally just NXP chips, but now there are some ST boards with ST chips that are trying to be both Arduino compatible and mbed compatible. Not sure how that will work out.
Then there are phones, of course, but that is a lot closer to the Windows thing: write an iPhone app and it will/should work on all the iPhones for some period of time, even though those phones all use different ARM-based chips from different vendors with vastly different peripherals.
This question will likely be closed for being primarily opinion based, since it is not really a black-and-white factual question. I suggest you just enjoy the board you bought, make the LEDs blink and such, and get used to dealing with a whole new environment compared to operating-system work, and the very limited resources compared to a laptop/desktop.
If you have a specific porting question, or something that has a more specific answer, then ask it that way: "I want to play with this board but ultimately do X with it (port the code to an STM32F4, for example); will it work?"
Now, it is quite likely that if you wanted to create your own abstraction layer, then you could create it such that it works on top of multiple chips/platforms.
ARM has this CMSIS thing, but I think that is for the debuggers to get common access to the board. You may or may not know, or have noticed, that the debug access on the Stellaris LaunchPad (now Tiva C) used a different interface/protocol than the one used now; the current one, used on the Hercules and now the MSP432, is the same XDS100-compatible front end. (I hate that name; the MSP432 is in no way, shape or form related to the MSP430. Perhaps this is a PIC vs PIC32 thing, which are in no way related to each other except for being from the same parent company.)
mbed was formerly a board plus an attempt at an Arduino-like, easy-to-use, web-based environment (Arduino is Java based, not web based, but "run anywhere" is the idea) and a lot of libraries so you don't have to know as many details. Now mbed appears to be becoming an RTOS or something, so, kind of like writing for Arduino or for Android, you might...might...be able to develop on top of that and have it port.
Understand that the more layers, and the thicker the abstraction layer, the more resources you need, the more power you consume, the more the chip costs, etc. So it is a tradeoff of saving a little software development time versus the price or size or power consumption of the product. We don't know, nor necessarily need to know, what you are making; that is your business. But there are tradeoffs to making the software "simpler", portable, readable, etc.

Keep Microcontroller Peripheral Drivers Independent

This is a question that I have been asking myself for a while. I've tried finding good reads about this, but I cannot seem to find a solution that's suitable for how I think things should be.
I believe that, for portability and maintenance reasons, drivers should preferably not depend on each other. However, sometimes one driver may require functionality provided by another driver. An I²C bus, for example, may have a timeout that depends on the Timer driver.
The way I have been doing this until now is by simply #include-ing one driver in the other, but this is not a desirable solution. I feel there should be a better way of doing this.
I'm thinking of adding another layer, a sort of abstraction between the main application and all the drivers. However, this feels like it just moves the problem somewhere else without solving it.
I've used function pointers, but these, too, make maintenance a nuisance.
Are there any good sources or ideas about driver interdependency and how to neatly solve a problem like this?
On big controllers, Cortex-M3/M4 and the like, it is totally fine to have countless layers. For example, the SD-card interface of the LPC1822 consists of an "sdif" driver handling the basic communication and pin toggling of the card interface. On top of that there is the "sdmmc" driver, providing more sophisticated functions. On top of that could sit the FAT file system (using the real-time clock), and so on.
By contrast, on a tiny 8-bit controller it may be better to have no layers at all. The three registers you have to set for an I2C transfer are manageable. Don't write a hundred lines of code to do something trivial; in that case it is totally fine to use the timer directly in the I2C routines. If you want your program to be more understandable for your colleagues, spend your time writing good comments and documentation instead of encapsulating each and every thing in functions and abstraction layers.
When you are resource constrained and your program isn't that big anyway, don't burden yourself with a lot of overhead just to get consistent layers. Layers are for big, complicated software. In embedded computing you are sometimes better off keeping your sideways dependencies instead of writing huge libs that don't fit into the flash space.
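That said, if you do want to break a sideways dependency like the I²C-needs-a-timer example from the question, one lightweight option is to inject the time source at init instead of #include-ing the timer driver. A minimal sketch, with all names illustrative:

    #include <stdbool.h>
    #include <stdint.h>

    /* The I2C driver asks for "milliseconds since boot" through a
       function pointer and never includes the timer driver's header. */
    typedef uint32_t (*millis_fn)(void);

    typedef struct {
        millis_fn millis;       /* injected time source */
        uint32_t  timeout_ms;   /* bus timeout          */
    } i2c_ctx_t;

    void i2c_init(i2c_ctx_t *ctx, millis_fn millis, uint32_t timeout_ms)
    {
        ctx->millis = millis;
        ctx->timeout_ms = timeout_ms;
    }

    bool i2c_wait_flag(const i2c_ctx_t *ctx, volatile uint32_t *reg,
                       uint32_t mask)
    {
        uint32_t start = ctx->millis();
        while (!(*reg & mask)) {
            if ((uint32_t)(ctx->millis() - start) > ctx->timeout_ms)
                return false;   /* bus timed out */
        }
        return true;
    }

On the target you pass the timer driver's tick function; in a unit test you pass a stub. The coupling shrinks to one function signature, which is easier to maintain than scattered #includes.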

Software SPI Implementation

I'm thinking about creating my own pure-C software SPI library, because there are none available (as far as I can tell).
Which also worries me: why aren't there any software SPI libraries? Is there some hardware limitation I'm not considering?
EDIT:
I've decided to write my own library due to how buggy the SPI peripheral is in the STM32, especially in 8-bit mode, but I've also had a lot of problems with 16-bit mode (and many other issues I didn't even bother documenting).
I've now written the software implementation (it is pretty easy) and it works just fine.
why aren't there any software SPI libraries?
Because it's about 10 lines of code each for the WriteByte and ReadByte functions (a sketch follows at the end of this answer), and most of that is bit-banging processor-specific registers. The higher-level protocol depends on the device connected to the SPI. Here's what Wikipedia has to say on the subject:
The SPI bus is a de facto standard. However, the lack of a formal standard is reflected in a wide variety of protocol options. Different word sizes are common. Every device defines its own protocol, including whether or not it supports commands at all. Some devices are transmit-only; others are receive-only. Chip selects are sometimes active-high rather than active-low. Some protocols send the least significant bit first.
So there's really no point in making a library. You just write the code for each specific situation and combination of devices.
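To make the "10 lines" concrete, here is roughly what the core of a bit-banged transfer looks like (SPI mode 0, MSB first; the gpio_* helpers and pin names stand in for whatever pin access your platform provides):

    #include <stdint.h>

    /* Placeholders for your platform's pin access. */
    extern void gpio_write(int pin, int level);
    extern int  gpio_read(int pin);

    enum { PIN_SCK, PIN_MOSI, PIN_MISO };   /* map to real pins */

    uint8_t spi_sw_xfer(uint8_t out)
    {
        uint8_t in = 0;
        for (int i = 7; i >= 0; i--) {
            gpio_write(PIN_MOSI, (out >> i) & 1); /* data out, SCK low   */
            gpio_write(PIN_SCK, 1);               /* rising edge: sample */
            in = (uint8_t)((in << 1) | (gpio_read(PIN_MISO) & 1));
            gpio_write(PIN_SCK, 0);               /* falling edge: shift */
        }
        return in;
    }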
Although the others have answered that it's just bit-banging, I would argue that there is a benefit to writing a small layer:
If you don't use a HAL or the Standard Libs (like myself), you can write a layer to deal with initialization, which can do things in the order the chip's initialisation sequence requires.
You can map all your interrupt vectors for a specific peripheral in this layer, utilising callback mechanisms.
You create separation between the application and system domains, which is a core principle of modular design.
You increase code reusability, utilising techniques such as function pointers and common interfaces.
You can add input validation on settings/parameters that would otherwise cause code duplication if no layer were used; for example, ensuring that HCLK does not exceed 180 MHz on the STM32F429 when initialising it (see the sketch below).
Whilst it is true that to send data all you generally need to do is set a register, more often than not the initialisation sequence is complicated.
With the increase in power and capacity of microcontrollers, along with project size, it's important to make balanced design choices that promote scalability and maintainability, especially in commercial projects.
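As a sketch of the input-validation point, with the 180 MHz limit from the STM32F429 example above (the function and macro names are illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    #define HCLK_MAX_HZ 180000000u   /* STM32F429 limit, per the example */

    /* Validate once, in the init layer, instead of at every call site. */
    bool clock_init(uint32_t requested_hclk_hz)
    {
        if (requested_hclk_hz == 0u || requested_hclk_hz > HCLK_MAX_HZ)
            return false;            /* reject out-of-range settings */

        /* ...program the PLL and prescalers for the requested rate... */
        return true;
    }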
If you are asking about microcontrollers, then you can write your own SPI library; you need to use the bit-banging technique for that.
There are software SPI libraries available, but as every microcontroller has a different port architecture and register set, they are not generic; they are specific to one controller only.
E.g., for the 8051 architecture you can find this.

Programming on an FPGA device

I'm trying to learn a bit about FPGAs and I'm very new to the subject. I'm more of a software developer and have had no real experience programming FPGA devices.
I am currently building a project in C on a Linux OS. I would like to know how it may be possible to implement such code on an FPGA device. For that, I have a few questions.
Firstly, do I have to translate my code to VHDL, or can I use C? Also, how would one go about installing an OS on an FPGA card, and are there devices that already have an OS installed on them?
Sorry for the newbie type questions, and any help would be appreciated!
FPGAs are great at running simple, fixed data flows through parallel processing, while CPUs are optimized for complex and/or dynamic data flows.
The C language is not designed for describing highly parallel systems, as it follows a clearly sequential pattern ("assign a to b, then add c to d"); while compilers introduce some parallelization as an optimization, the focus is on generating code that behaves as if the instructions were executed sequentially.
In an FPGA, on the other hand, you want to break up sequences as far as possible and create parallel circuitry and pipelines, so normally the system is described in the form of interconnected blocks, where each is kept as simple as possible.
For example, where you have (a+b)*(c+d), a CPU based design would probably have a single adder, feed it with a and b first, then with c and d, and finally pass both results to the multiplier.
In an FPGA design, that is rather costly, as you have to create a state machine that keeps track of which of the three computation stages we are at and where the results are kept, so it may be easier to have two dedicated adders hardwired to a and b, and c and d, respectively, and to have their outputs connected to a multiplier block.
At this point, you have basically created a dedicated machine that can compute this single term and nothing else, but its speed is limited only by the speed of the transistors making up the logic gates. Compared to the state machine, you get a speed increase of at least a factor of three (because we only have a single state/instruction now), probably more, because we can also discard the logic for storing intermediate results.
In order to decide when to create a state machine/processor, and when to hardcode computations, the compiler would have to know more about the program flow and timing requirements than can be expressed in C/C++, so these languages are not a good choice.
The OS as such also looks vastly different. There are no resources to arbitrate dynamically, so this part is omitted, and all that is left are device drivers. As everything is parallel, these take the form of external modules that are simply linked into your design, and interfaced directly.
If you are just starting out, I'd suggest you get a development kit with a few LEDs, and start with the basic functionality:
Make the LED blink
Use a PLL block from the system library to derive a secondary clock, and make the LED blink with a different frequency
Add a simple bus interface, e.g. SPI, and communicate with a simple external device, e.g. a WS2811 based LED strip
After you have a basic grasp of how the system works, try to get a working simulation environment (the equivalent of a Debug build), and begin including more complex peripherals.
It sounds like you could use a tutorial for beginners. I would recommend starting here and reading through an introduction to digital design. Some of your basic questions should be answered by reading through these tutorials. This will put you in a better place to ask more specific questions in the future.

Unit testing patterns for microcontroller C code

Although there are plenty of unit-test frameworks that support C, I'm a little stumped on how to write unit tests for microcontroller code (PIC in my case, but I think the question is more general than that).
Much of the code written for microcontrollers revolves around writing configuration and data values to registers, reading incoming data from registers, and responding to interrupt events. I'm wondering if anyone can provide some pointers on the most effective way to do this.
You write:
"Much of the code written for microcontrollers revolves around writing configuration and data values to registers, reading incoming data from registers and responding to interrupt events."
I agree that this is often the case in practice, but I don't actually think this is a good thing, and I think rethinking things a little will help you with your test goals.
Perhaps because microcontroller programmers can reach out and touch the hardware any time they like, many (most?) of them have got into the habit of doing just that throughout their code. Often this habit is followed unquestioningly, maybe because so many people doing this sort of work are EEs, not computer scientists, by training and inclination. I know, I started out that way myself.
The point I am trying to make, is that microcontroller projects can and should be well designed like any other software project. A really important part of good design is to restrict the hardware access to hardware drivers! Partition off all the code that writes registers, responds to interrupts etc. into modules that provide the rest of your software with nice, clean, abstracted access to the hardware. Test those driver modules on the target using logic analyzers, oscilloscopes, custom test rigs or whatever else makes sense.
A really important point is that the rest of your software, hopefully the great majority of it, is now just C code that you can build and test on a host system. On the host system, the hardware modules are stubbed out in a way that provides visibility into what the code under test is doing. You can use mainstream unit-testing approaches on this code. This needs some preparation and work, but if you are well organized you can create a reusable system that applies to all your projects. The potential benefits are enormous. I wrote a little more about these ideas here:
http://discuss.joelonsoftware.com/default.asp?joel.3.530964.12
One approach to this might be to use an emulator. I've been working on an AVR emulator and one of the ideas for using it is indeed to unit test code. The emulator implements the CPU and registers, interrupts and various peripherals, and (in my case) bytes written to the emulated UART go to the regular stdout of the emulator. In this way, unit test code can run in the emulator and write its test results to the console.
Of course, one must also ensure that the emulator is correctly implementing the behaviour of the real CPU, otherwise the unit tests on top of that can't be trusted.
Write mock versions of your register-access functions/macros (see the sketch below). Note that this assumes your code uses a common set of register-access functions, and not ad-hoc stuff like *(volatile int*)0xDEADBEEF = 0xBADF00D everywhere.
Call your interrupt handlers directly from your test code (this may be problematic on some architectures¹), via a "software interrupt" if available, or from a timer interrupt handler if you need them to execute asynchronously. This may require wrapping your interrupt enable/disable code in functions/macros that you can mock up.
¹ The 8051 comes to mind: at least with the Keil 8051 compiler, you can't call interrupt functions directly. This could be worked around with the C preprocessor, though.
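As a sketch of the mocking idea in the first point, a common pattern routes every register access through one macro that hits real hardware on the target and a plain array in a unit-test build (REG8 and the addresses are illustrative):

    #include <stdint.h>

    #ifdef UNIT_TEST
    extern uint8_t mock_regs[0x100];  /* defined by the test harness */
    #define REG8(addr)  (mock_regs[(addr) & 0xFFu])
    #else
    #define REG8(addr)  (*(volatile uint8_t *)(uintptr_t)(addr))
    #endif

    /* Driver code only ever uses the macro... */
    void uart_send(uint8_t byte)
    {
        REG8(0x40u) = byte;           /* hypothetical TX data register */
    }

    /* ...so a host-side test can assert on what was "written":
           uart_send(0x55);
           assert(mock_regs[0x40] == 0x55);                          */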
Is there perhaps some kind of loopback mode, so that you can use the controller itself to generate events that you can test against?
