This is a question that I have been asking myself for a while. I've tried finding good reads about this, but I cannot seem to find a solution that's suitable for how I think things should be.
I believe that for portability and maintenance reasons, drivers should preferably not depend on each other. However, sometimes one driver may require functionality provided by another driver. An I²C bus, for example, may have a timeout that depends on the timer driver.
The way I have been doing this until now is by simply #include'ing the drivers in the other drivers, but this is not a desirable solution. I feel like there should be a better way of doing this.
I'm thinking of adding another layer, a sort of abstraction between the main application and all drivers. However, this feels like it's just moving the problem somewhere else and not solving it.
I've used function pointers, but this, too, makes maintenance a nuisance.
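To illustrate, the function-pointer approach I tried looks roughly like this (all names are made up for the example):

    /* i2c.h -- the I2C driver receives its timeout source as an injected
       callback instead of #include'ing the timer driver directly */
    #include <stdbool.h>
    #include <stdint.h>

    typedef uint32_t (*get_ticks_fn)(void);  /* supplied by the timer's owner */

    typedef struct {
        get_ticks_fn get_ticks;              /* injected timer dependency */
        uint32_t     timeout_ticks;
    } i2c_bus_t;

    /* wait for a status flag with a timeout, knowing nothing about the timer */
    static bool i2c_wait(const i2c_bus_t *bus,
                         volatile const uint32_t *flag_reg, uint32_t mask)
    {
        uint32_t start = bus->get_ticks();
        while ((*flag_reg & mask) == 0u) {
            if (bus->get_ticks() - start > bus->timeout_ticks) {
                return false;                /* timed out */
            }
        }
        return true;
    }

The main application then wires everything together (i2c_bus_t bus = { timer_get_ticks, 100 };), which keeps the drivers ignorant of each other, but that wiring code is exactly the maintenance burden I mean.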
Are there any good sources or ideas about driver interdependency and how to neatly solve a problem like this?
On big controllers, Cortex-M3/M4 and the like, it is totally fine to have countless layers. For example, the SD card interface of the LPC1822 consists of an "sdif" driver handling the basic communication and pin toggling of the card interface. On top of that there is the "sdmmc" driver, providing more sophisticated functions. On top of that could sit the FAT file system (using the real-time clock), and so on.
By contrast, on a tiny 8-bit controller it is maybe better to have no layers at all. The three registers you have to set for an I2C transfer are manageable. Don't write a hundred lines of code to do something trivial. In that case it is totally fine to use the timer directly in the I2C routines. If you want your program to be more understandable for your colleagues, spend your time writing good comments and documentation instead of encapsulating each and every thing in functions and abstraction layers.
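As a sketch of what I mean (the registers are made up for a generic 8-bit part):

    #include <stdint.h>

    /* made-up memory-mapped registers, for illustration only */
    #define I2C_STATUS  (*(volatile uint8_t *)0x38u)
    #define I2C_DONE    0x01u
    #define TMR_COUNT   (*(volatile uint8_t *)0x52u)

    /* poll the I2C done flag, reading the timer counter directly for the
       timeout; no layer, just a comment saying why */
    static uint8_t i2c_wait_done(void)
    {
        uint8_t start = TMR_COUNT;
        while (!(I2C_STATUS & I2C_DONE)) {
            if ((uint8_t)(TMR_COUNT - start) > 200u) {
                return 0;   /* timed out */
            }
        }
        return 1;
    }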
When you are resource constrained and your program isn't that big anyway, don't burden yourself with too much overhead only to get consistent layers. Layers are something for big, complicated software. In embedded computing you are sometimes better off keeping your sideways dependencies instead of writing huge libraries that don't fit into the flash space.
I need to develop device libraries (uBlox, IMUs, BLE, etc.) from scratch (almost). Is there any doc or tutorial that can help me?
The question is: how does one write a device library in C/C++ (Arduino style, if you want), given a datasheet and a platform like STM32 or other ARMs?
Thanks so much
I've tried reading device libraries from the Arduino ecosystem and various GitHub repositories, but I would like a guide/template (general rules) to follow for writing proper device libraries from a given datasheet.
I'm not asking for a full, definitive guide, just where to start: docs, methods, approaches.
I've found the one below, but it is very basic and quite light for my targets.
http://blog.atollic.com/device-driver-development-the-ultimate-guide-for-embedded-system-developers
I don't think that you can actually write libraries for STM32 in Arduino style. Most Arduino libraries you can find in the wild promote ease of use rather than performance. For example, a simple library designed for a specific sensor works well if reading the sensor and reporting the results via the serial port is the only thing the firmware must do. When you work on more complex projects where the uC has a lot to do and must satisfy real-time constraints, the general Arduino approach doesn't solve your problems.
The problem with STM32 library development is the complex interconnection between peripherals, DMA and interrupts. I code at the register level without using the Cube framework, and I often find myself digging through the reference manual for the tables that show the connections between DMA channels, or things like timer master-slave relations. Some peripherals (mostly timers) work similarly, but each one of them has small differences. That makes developing a hardware library that fits all scenarios practically impossible.
The tasks you need to accomplish are also more complex in STM32 projects. For example, in one of my projects I fool the SPI peripheral with a dummy DMA transfer triggered by a timer, so that it generates periodic 8-pulse trains from its clock pin (the data pins are unused). No library can provide you with that kind of flexibility.
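The flavor of that trick, as a rough register-level sketch (STM32F1 CMSIS names; this illustrates the idea rather than my exact code, the GPIO/clock setup for the SCK pin is omitted, and the TIM1_UP-to-DMA1-channel-5 mapping must be checked against the reference manual for your part):

    #include "stm32f10x.h"

    static uint8_t dummy = 0xFF;   /* data pins unused; any value works */

    void pulse_train_init(void)
    {
        /* clocks for SPI1, TIM1 and DMA1 */
        RCC->APB2ENR |= RCC_APB2ENR_SPI1EN | RCC_APB2ENR_TIM1EN;
        RCC->AHBENR  |= RCC_AHBENR_DMA1EN;

        /* SPI1 master, software NSS: only the SCK pin does real work */
        SPI1->CR1 = SPI_CR1_MSTR | SPI_CR1_SSM | SPI_CR1_SSI | SPI_CR1_SPE;

        /* one byte, memory to SPI data register, circular so it repeats;
           TIM1_UP maps to DMA1 channel 5 on the F1 series */
        DMA1_Channel5->CPAR  = (uint32_t)&SPI1->DR;
        DMA1_Channel5->CMAR  = (uint32_t)&dummy;
        DMA1_Channel5->CNDTR = 1;
        DMA1_Channel5->CCR   = DMA_CCR5_DIR | DMA_CCR5_CIRC | DMA_CCR5_EN;

        /* each timer update event pushes one dummy byte = one 8-pulse train */
        TIM1->ARR  = 999;              /* train period, application-specific */
        TIM1->DIER = TIM_DIER_UDE;     /* request DMA on update */
        TIM1->CR1  = TIM_CR1_CEN;
    }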
Still, I believe not all is lost. I think it may be possible to build a hardware abstraction layer (a HAL, but not The HAL by ST). So it's possible to create useful libraries if you can abstract them from the hardware. A USB library is a good example of this approach: STM32 devices have roughly three different USB peripheral hardware variants, so it makes sense to write a separate HAL for each of them, while the upper application layer stays the same.
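A minimal sketch of that layering (hypothetical names; only the structure matters):

    /* usb_hal.h -- one small interface, implemented once per USB peripheral
       variant; the upper USB stack talks only to this */
    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        void   (*init)(void);
        void   (*write_ep)(uint8_t ep, const uint8_t *buf, size_t len);
        size_t (*read_ep)(uint8_t ep, uint8_t *buf, size_t maxlen);
    } usb_hal_t;

    /* each variant ships its own instance, e.g. in usb_hal_otgfs.c;
       the right one is selected at link time per device family */
    extern const usb_hal_t usb_hal;

The application layer calls usb_hal.write_ep() and friends, so it stays identical across the hardware variants.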
Maybe that was the reason ST created the Cube framework. But as you know, Cube relies on external code-generation tools which are aware of the hardware of each device, so some of the work can be avoided at runtime. You can't achieve the same result when you write your own libraries unless you also design a similar external code-generation tool. And the code Cube generates is bloated in most cases: you trade runtime performance and code space for development time.
I assume you will be using a cross toolchain on some platform like Linux, and that the cross toolchain is compatible with some method to load object code on the target CPU. I also assume that you already have a working STM32 board that is documented well enough to figure out how the sensors will connect to the board or to the CPU.
First, you should define what your library is supposed to provide. This part is usually surprisingly difficult: it's hard to know what the library can provide without knowing something about what the hardware sensors are capable of providing. Some iteration on the requirements is expected.
You will need to have access to the documentation for the sensors, usually in the form of the manufacturer’s data sheets. Using the datasheet, and knowing how the device is connected to the target CPU/board, you will need to access the STM32 peripherals that comprise the interface to the sensors. Back to the datasheets, this time for the STM32, to see how to access its peripheral interfaces. That might be simple GPIO bits and bytes, or might be how to use built-in peripherals such as SPI or I2C.
The datasheets for the sensors will detail a bunch of registers, describing the meaning of each, including the meanings of each bit, or group of bits, in certain registers. You will write code in C that accesses the STM32 peripherals, and those peripherals will access the sensors across the electrical interface that is part of the STM32 board.
The workflow usually starts out by writing to a register or three to see if there is some identifiable effect. For example, if you are exercising a digital IO port, you might wire up an LED to see if you can turn it on or off, or a switch to see if you can correctly read its state. This establishes that your code can poke or peek at IO using register-level access. There may be existing helper functions that do this work as part of the cross toolchain, or you might have to develop your own, using pointer indirection to access memory-mapped IO. Or there might be special instructions needed that can only be issued from inline assembler code. This answer is generic, as I don't know the specifics of the STM32 processor or its typical ecosystem.
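In C, that pointer-indirection access usually looks something like this (the address and bit are placeholders; take the real values from the reference manual of your part):

    #include <stdint.h>

    /* placeholder address and pin: look up the real GPIO output data
       register and bit position in the reference manual */
    #define GPIO_ODR  (*(volatile uint32_t *)0x40020014u)
    #define LED_PIN   (1u << 5)

    void led_on(void)  { GPIO_ODR |=  LED_PIN; }   /* poke: set the bit */
    void led_off(void) { GPIO_ODR &= ~LED_PIN; }   /* poke: clear the bit */

    int switch_read(volatile const uint32_t *idr, uint32_t pin)
    {
        return (*idr & pin) != 0;                  /* peek: read a bit */
    }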
Then you move on to more complex operations that might involve sequences of operations, like cycling a bit or two to effect some communication with the device. Or it might be as simple as finding the proper sequence of register accesses for operating a SPI interface. Often you will find that small chunks of code are complete enough to be reused by your driver, like how to read or write an individual byte. You can then make that a reusable function to simplify the rest of the work, like accessing certain registers in sequence and printing the contents of the registers you read to see if they make sense. Ultimately, you will have two important pieces of information: an understanding of the low-level register accesses needed to create a formal driver, and an understanding of what components and capabilities make up the hardware (i.e., you know how the device(s) work).
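For instance, once the register sequence for one byte is understood, it gets wrapped into a reusable function (the register addresses and names here are placeholders, and chip-select handling is omitted):

    #include <stdint.h>

    /* placeholder SPI status/data registers and flags */
    #define SPI_SR    (*(volatile uint32_t *)0x40013008u)
    #define SPI_DR    (*(volatile uint32_t *)0x4001300Cu)
    #define SPI_TXE   (1u << 1)
    #define SPI_RXNE  (1u << 0)

    /* the small reusable chunk: exchange one byte on the bus */
    static uint8_t spi_xfer_byte(uint8_t out)
    {
        while (!(SPI_SR & SPI_TXE))  { }   /* wait for transmit buffer empty */
        SPI_DR = out;
        while (!(SPI_SR & SPI_RXNE)) { }   /* wait for the reply byte */
        return (uint8_t)SPI_DR;
    }

    /* sequences then become trivial, e.g. reading a sensor register
       (the 0x80 read flag is typical but device-specific) */
    static uint8_t sensor_read_reg(uint8_t reg)
    {
        (void)spi_xfer_byte(0x80u | reg);
        return spi_xfer_byte(0x00u);       /* clock out the answer */
    }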
Now, throw away most of what you've done, and develop a formal spec. Use what you now know to include everything that can be useful, and to develop a spec that includes an appropriate interface API that your application code can use. Rewrite the driver, armed with the knowledge of how all the pieces work, taking advantage of the blank canvas afforded by the fresh rewrite of the spec. Only reuse code that you are completely confident is optimal and appropriate to the form dictated by the spec. Write test code for all of the modules, and use the test code to actually verify that the code works and that it conforms to the spec. Re-run the test code every time you modify anything it tests.
I'm thinking about creating my own pure-C software SPI library because there are none available (as far as I can tell).
Which also worries me: why aren't there any software SPI libraries? Is there some hardware limitation I'm not considering?
EDIT:
I've decided to write my own library due to how buggy the SPI peripheral is in STM32, especially in 8-bit mode, though I've also had a lot of problems with 16-bit mode. There were many other issues I didn't even bother documenting.
I've now written the software implementation (it was pretty easy) and it works just fine.
why aren't there any software SPI libraries?
Because it's about 10 lines of code each for the WriteByte and ReadByte functions, and most of that is bit-banging processor-specific registers. The higher-level protocol depends on the device connected to the SPI. Here's what Wikipedia has to say on the subject:
The SPI bus is a de facto standard. However, the lack of a formal standard is reflected in a wide variety of protocol options. Different word sizes are common. Every device defines its own protocol, including whether or not it supports commands at all. Some devices are transmit-only; others are receive-only. Chip selects are sometimes active-high rather than active-low. Some protocols send the least significant bit first.
So there's really no point in making a library. You just write the code for each specific situation and combination of devices.
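To make those "10 lines" concrete, a WriteByte for SPI mode 0, MSB first, looks roughly like this (the port address and pin numbers are placeholders for your controller's GPIO registers):

    #include <stdint.h>

    /* placeholder GPIO port and pins; substitute your controller's own */
    #define PORTB        (*(volatile uint8_t *)0x25u)
    #define SCK_HIGH()   (PORTB |=  (1u << 5))
    #define SCK_LOW()    (PORTB &= ~(1u << 5))
    #define MOSI_HIGH()  (PORTB |=  (1u << 3))
    #define MOSI_LOW()   (PORTB &= ~(1u << 3))

    /* SPI mode 0, MSB first: present each bit on MOSI, then pulse SCK */
    void spi_write_byte(uint8_t b)
    {
        for (uint8_t i = 0; i < 8; i++) {
            if (b & 0x80u) { MOSI_HIGH(); } else { MOSI_LOW(); }
            SCK_HIGH();          /* slave samples MOSI on the rising edge */
            SCK_LOW();
            b <<= 1;
        }
    }

Everything above that, chip selects, word sizes, command formats, is device-specific, which is exactly why no general library exists.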
Although the others have answered that it's just bit-banging, I would argue that there is benefit to writing a small layer:
If you don't use a HAL or the standard libraries (like myself), you can write a layer that deals with initialisation and enforces the chip's required initialisation sequence.
You can map all your interrupt vectors for a specific peripheral in this layer, utilising callback mechanisms.
Create separation between the application and system domains, which is a core principle of modular design.
Increase code reusability, utilising techniques such as function pointers and common interfaces.
Add input validation on settings/parameters that would otherwise cause code duplication if no layer were used; for example, ensuring that the HCLK does not exceed 180 MHz on the STM32F429 when initialising it (see the sketch below).
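A sketch of that last point (names hypothetical):

    #include <stdbool.h>
    #include <stdint.h>

    #define STM32F429_HCLK_MAX_HZ 180000000u   /* datasheet limit */

    /* validate once in the layer instead of at every call site */
    bool clock_init(uint32_t hclk_hz)
    {
        if (hclk_hz > STM32F429_HCLK_MAX_HZ) {
            return false;        /* reject rather than silently overclock */
        }
        /* ... program the PLL/prescaler registers for hclk_hz ... */
        return true;
    }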
Whilst it is true that to send data all you generally need to do is set a register, more often than not the initialisation sequence is complicated.
With the increase in power and capacity of microcontrollers, along with growing project sizes, it's important to make balanced design choices that promote scalability and maintainability, especially in commercial projects.
If you are asking about microcontrollers, then yes, you can write your own SPI library.
You need to use the bit-banging technique for that.
There are software SPI libraries available.
But as every microcontroller has a different port architecture and different registers, those libraries are not generic; they are specific to one controller only.
E.g., for the 8051 architecture you can find this.
I'm trying to learn a bit about FPGA cards and I'm very new to the subject. I'm more of a software developer and have had no real experience programming FPGA devices.
I am currently building a project on a Linux OS in the C language. I would like to know how it may be possible to implement such code on an FPGA device. For that, I have a few questions.
Firstly, do I have to translate my code to VHDL, or can I use C? Also, how would one go about installing an OS on an FPGA card, and are there devices that already have an OS installed on them?
Sorry for the newbie type questions, and any help would be appreciated!
FPGAs are great at running simple, fixed data flows through parallel processing, while CPUs are optimized for complex and/or dynamic data flows.
The C language is not designed for describing highly parallel systems, as it follows a clearly sequential pattern ("assign a to b, then add c to d"); while compilers introduce some parallelization as an optimization, the focus is on generating code that behaves as if the instructions were executed sequentially.
In an FPGA, on the other hand, you want to break up sequences as far as possible and create parallel circuitry and pipelines, so normally the system is described in the form of interconnected blocks, where each is kept as simple as possible.
For example, where you have (a+b)*(c+d), a CPU-based design would probably have a single adder, feed it with a and b first, then with c and d, and finally pass both results to the multiplier.
In an FPGA design, that is rather costly, as you have to create a state machine that keeps track of which of the three computation stages we are at and where the results are kept, so it may be easier to have two dedicated adders hardwired to a and b, and c and d, respectively, and to have their outputs connected to a multiplier block.
At this point, you have basically created a dedicated machine that can compute this single term and nothing else, but its speed is limited only by the speed of the transistors making up the logic gates. Compared to the state machine you get a speed increase of at least a factor of three (because we only have a single state/instruction now), and probably more, because we can also discard the logic for storing intermediate results.
In order to decide when to create a state machine/processor, and when to hardcode computations, the compiler would have to know more about the program flow and timing requirements than can be expressed in C/C++, so these languages are not a good choice.
The OS as such also looks vastly different. There are no resources to arbitrate dynamically, so this part is omitted, and all that is left are device drivers. As everything is parallel, these take the form of external modules that are simply linked into your design, and interfaced directly.
If you are just starting out, I'd suggest you get a development kit with a few LEDs, and start with the basic functionality:
Make the LED blink
Use a PLL block from the system library to derive a secondary clock, and make the LED blink with a different frequency
Add a simple bus interface, e.g. SPI, and communicate with a simple external device, e.g. a WS2811 based LED strip
After you have a basic grasp of how the system works, try to get a working simulation environment (the equivalent of a Debug build), and begin including more complex peripherals.
It sounds like you could use a tutorial for beginners. I would recommend starting here and reading through an introduction to digital design. Some of your basic questions should be answered by reading through these tutorials. This will put you in a better place to ask more specific questions in the future.
I'm attempting to write a very simple OS in ASM and C. (NASM assembler)
I would like to access the sound card directly, with or without drivers.
If I don't need drivers, how could I access and send a sample audio file to the sound card? (An example would be nice.)
If I do need drivers, is there any way to interface with them and call functions from the drivers? And how do I access and send a sample audio file to the sound card? (Another example would be nice.)
I hate to discourage you, but modern sound card drivers are extremely complicated, and as you probably know, OS-specific. This is one of the difficult challenges in OS development: driver support. It's not something that can be achieved with a simple code snippet.
In order to load a file, you need a file system. Have you implemented that yet? The fact that you used the "kernel" flag suggests that your OS is still in its infancy. I'm not sure I would want to put sound support into the kernel of an operating system.
That being said, there is a good emulator called Bochs that has Sound Blaster 16 emulation, and there is some really old documentation for how to program that card. This might be your best bet; accessing sound hardware was much easier back in the day.
Your best bet is probably to look at either the Linux or FreeBSD sound drivers and see what they do. You're not likely to get much better implementation documentation for any but the simplest sound card...
This is a hard problem. Be warned :-p
Of course you need a driver, and of course there's no easy way to interface with existing ones (there was a proposal for a unified, OS-agnostic "Uniform Driver Interface", but I don't think it got anywhere).
So, after you've written the code to read a file from your hard drive, you'll need to roll your own audio driver.
Now, I haven't done this in a while, so this may be outdated, but in the '90s you'd configure your sound card with a few 'out dx, al' instructions (details varied across sound cards), and then set up DMA to send data from a memory buffer to your card. The card (or was it the DMA controller?) would fire off an interrupt when it reached the end of the buffer, which you'd use to fill the buffer with new data.
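In C that looks roughly like the following (x86, GCC inline assembly; the card's actual port numbers, DMA programming and acknowledge sequence are device-specific and omitted, and fill_from_file stands in for your own file-reading code):

    #include <stdint.h>

    /* the C equivalent of the 'out dx, al' pokes */
    static inline void outb(uint16_t port, uint8_t val)
    {
        __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
    }

    /* hypothetical: your own code that reads the next chunk of audio data */
    extern void fill_from_file(uint8_t *dst, uint32_t len);

    /* double-buffer skeleton: the card raises an IRQ at each buffer half,
       and the handler refills the half that just finished playing */
    #define BUF_HALF 4096u
    static uint8_t dma_buf[2u * BUF_HALF];
    static volatile uint8_t playing_half;

    void sound_irq_handler(void)
    {
        fill_from_file(&dma_buf[playing_half * BUF_HALF], BUF_HALF);
        playing_half ^= 1u;
        outb(0x20, 0x20);   /* EOI to the PIC; card-specific ack omitted */
    }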
If your card has a working Linux driver, I'd start by looking at its code. Otherwise, you'll have to reverse engineer the Windows driver; Soft-ICE's bpio (break on I/O port access) logging used to be good for that, IIRC.
Good luck.
Here is a free, open-source operating system written entirely in assembly. It is a great reference for assembly kernel programming if you are new to it.
http://www.menuetos.net/index.htm
Although there are plenty of unit test frameworks that support C, I'm a little stumped on how to write unit tests for microcontroller code (PIC in my case, but I think the question is more general than that).
Much of the code written for microcontrollers revolves around writing configuration and data values to registers, reading incoming data from registers, and responding to interrupt events. I'm wondering if anyone can provide some pointers on the most effective way to do this.
You write:
"Much of the code written for microcontrollers revolves around writing configuration and data values to registers, reading incoming data from registers and responding to interrupt events."
I agree that this is often the case in practice, but I don't actually think this is a good thing, and I think rethinking things a little will help you with your test goals.
Perhaps because microcontroller programmers can reach out and touch the hardware any time they like, many (most?) of them have got into the habit of doing just that, throughout their code. Often this habit is followed unquestioningly, maybe because so many people doing this sort of work are EEs, not computer scientists, by training and inclination. I know, I started out that way myself.
The point I am trying to make, is that microcontroller projects can and should be well designed like any other software project. A really important part of good design is to restrict the hardware access to hardware drivers! Partition off all the code that writes registers, responds to interrupts etc. into modules that provide the rest of your software with nice, clean, abstracted access to the hardware. Test those driver modules on the target using logic analyzers, oscilloscopes, custom test rigs or whatever else makes sense.
A really important point is that the rest of your software, hopefully the great majority of it, is now just C code that you can run and test on a host system. On the host system the hardware modules are stubbed out in a way that provides visibility into what the code under test is doing. You can use mainstream unit-testing approaches on this code. This needs some preparation and work, but if you are well organized you can create a reusable system that applies to all your projects. The potential benefits are enormous. I wrote a little more about these ideas here:
http://discuss.joelonsoftware.com/default.asp?joel.3.530964.12
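As a small example of the shape this takes (names hypothetical): the driver header is the contract, the target build gets the real register access, and the host build gets a stub the tests can inspect:

    /* led_driver.h -- the only thing the rest of the code ever sees */
    void led_set(int on);

    /* led_driver_host.c -- stub for unit tests on the host */
    #include "led_driver.h"
    static int led_state;                     /* visible to the test harness */
    void led_set(int on) { led_state = on; }
    int  led_get_for_test(void) { return led_state; }

led_driver_target.c contains the real register pokes and is tested on hardware; the rest of the code base never knows which one it got.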
One approach to this might be to use an emulator. I've been working on an AVR emulator and one of the ideas for using it is indeed to unit test code. The emulator implements the CPU and registers, interrupts and various peripherals, and (in my case) bytes written to the emulated UART go to the regular stdout of the emulator. In this way, unit test code can run in the emulator and write its test results to the console.
Of course, one must also ensure that the emulator is correctly implementing the behaviour of the real CPU, otherwise the unit tests on top of that can't be trusted.
Write mock versions of your register access functions/macros. Note that this assumes that your code uses a common set of register access functions, and not ad-hoc stuff like *(volatile int*)0xDEADBEEF = 0xBADF00D everywhere.
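As a sketch of such a mock (names hypothetical): the same accessor compiles to real memory-mapped access on the target and to a recording fake on the host:

    #include <stdint.h>

    #ifdef UNIT_TEST
    /* host build: registers become an array the tests can inspect */
    extern uint32_t mock_regs[256];
    #define REG_WRITE(addr, val)  (mock_regs[(addr) & 0xFFu] = (val))
    #define REG_READ(addr)        (mock_regs[(addr) & 0xFFu])
    #else
    /* target build: real memory-mapped register access */
    #define REG_WRITE(addr, val)  (*(volatile uint32_t *)(addr) = (val))
    #define REG_READ(addr)        (*(volatile uint32_t *)(addr))
    #endif

Production code only ever uses REG_WRITE/REG_READ, so the test build can assert on mock_regs without any hardware present.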
Call your interrupt handlers directly from your test code (this may be problematic on some architectures¹), from a "software interrupt" if available, or from a timer interrupt handler if you need them to execute asynchronously. This may require wrapping your interrupt enable/disable code in functions/macros that you can mock up.
¹ The 8051 comes to mind: at least with the Keil 8051 compiler, you can't call interrupt functions directly. This could be worked around with the C preprocessor, though.
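Where direct calls work, a test then looks something like this (all names hypothetical, building on the mocked registers above):

    #include <assert.h>
    #include <stdint.h>

    void uart_rx_isr(void);            /* the handler under test */
    extern int uart_rx_buffered(void); /* accessor into the driver's buffer */
    extern uint32_t mock_regs[256];    /* the mocked register file */

    void test_uart_rx_isr_buffers_byte(void)
    {
        mock_regs[0x04] = 'A';   /* pretend the data register holds 'A' */
        uart_rx_isr();           /* invoke directly, no hardware needed */
        assert(uart_rx_buffered() == 'A');
    }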
Is there perhaps any kind of loopback mode so that you can use the controller itself to generate events that you can test against?