After writing several different custom serial protocols for various projects, I've started to become frustrated with re-inventing the wheel every time. Instead of continuing to develop custom solutions for every project, I've been searching for a more general solution. I was wondering if anyone knows of a serial protocol (or better yet, an implementation) that meets the following requirements:
Support multiple devices. We'd like to be able to support an RS485 bus.
Guaranteed delivery. Some sort of acknowledgement mechanism, and some simple error detection (CRC16 is probably fine).
Not master/slave. Ideally the slave(s) would be able to send data asynchronously. This is mostly for aesthetic reasons; the concept of polling each slave doesn't feel right to me.
OS independence. Ideally it wouldn't rely on a preemptive multitasking environment at all. I'm willing to concede this if I can get the other stuff.
ANSI C. We need to be able to compile it for several different architectures.
Speed isn't too much of an issue, we're willing to give up some speed in order to meet some of those other needs. We would, however, like to minimize the amount of required resources.
I'm about to start implementing a sliding window protocol with piggybacked ACKs and without selective repeat, but thought that perhaps someone could save me the trouble. Does anyone know of an existing project that I could leverage? Or perhaps a better strategy?
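For concreteness, the sort of frame I have in mind (the names and field widths are just a first guess):

    #include <stdint.h>

    /* Hypothetical frame layout: sliding window with piggybacked ACKs.
     * The header is followed by the payload and a CRC16 trailer
     * computed over header + payload. */
    typedef struct {
        uint8_t dst;  /* destination address on the RS485 bus          */
        uint8_t src;  /* source address                                */
        uint8_t seq;  /* sequence number of this frame                 */
        uint8_t ack;  /* piggybacked ACK: last frame received in order */
        uint8_t len;  /* payload length in bytes                       */
    } frame_hdr_t;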
UPDATE
I have seriously considered a TCP/IP implementation, but was really hoping for something more lightweight. Many of the features of TCP/IP are overkill for what I'm trying to do. I'm willing to accept (begrudgingly) that perhaps the features I want just aren't included in lighter protocols.
UPDATE 2
Thanks for the tips on CAN. I have looked at it in the past and will probably use it in the future. I'd really like the library to handle the acknowledgements, buffering, retries, etc., though. I guess I'm looking more for a network/transport layer than a data-link/physical layer.
UPDATE 3
So it sounds like the state of the art in this area is:
A trimmed down TCP/IP stack. Probably starting with something like lwIP or uIP.
A CAN-based implementation; it would probably rely heavily on the CAN bus, so it wouldn't be useful on other physical layers. Something like CAN Festival could help along the way.
An HDLC or SDLC implementation (like this one). This is probably the route we'll take.
Please feel free to post more answers if you come across this question.
Have you considered HDLC or SDLC?
There's also LAP/D (Link Access Protocol, D-Channel).
Uyless Black's "Data Link Protocols" is always nearby on my bookshelf - you might find some useful material in there too (even peruse the TOC & research the different protocols)
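If you end up rolling your own along HDLC lines, the asynchronous byte-stuffing used by HDLC-like framing (RFC 1662) is straightforward. A minimal sketch, with the FCS and control-character escaping left out:

    #include <stdint.h>
    #include <stddef.h>

    #define HDLC_FLAG 0x7E  /* frame delimiter                    */
    #define HDLC_ESC  0x7D  /* escape octet                       */
    #define HDLC_XOR  0x20  /* escaped octets are XORed with this */

    /* Byte-stuff a payload into out[]; returns the encoded length.
     * out must be able to hold the worst case, 2*n + 2 bytes. */
    size_t hdlc_frame(const uint8_t *in, size_t n, uint8_t *out)
    {
        size_t o = 0;
        out[o++] = HDLC_FLAG;
        for (size_t i = 0; i < n; i++) {
            if (in[i] == HDLC_FLAG || in[i] == HDLC_ESC) {
                out[o++] = HDLC_ESC;
                out[o++] = in[i] ^ HDLC_XOR;
            } else {
                out[o++] = in[i];
            }
        }
        out[o++] = HDLC_FLAG;
        return o;
    }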
CAN meets a number of your criteria:
Support multiple devices: It supports a large number of devices on one bus. It's not, however, compatible with RS485.
Guaranteed delivery: The physical layer uses bit-stuffing and a CRC, both of which are implemented in hardware on an increasing number of modern embedded processors. If you need acknowledgement, you need to add that on top yourself.
Not master/slave: There are no masters or slaves; all devices can transmit whenever they want. The processor hardware deals with arbitration and contention.
OS independence: Not applicable; it's a low-level bus. What you put on top of that is up to you.
ANSI C: Again, not applicable.
Speed: Typically up to 1 Mbps at bus lengths up to 40 m; you can choose your own speed for your application.
As mentioned, its definition is fairly low-level, so there's still work to be done to turn it into a full protocol to meet your needs. However, the fact that a lot of the work is done in hardware for you makes it very useful for a variety of applications.
I'd guess a reasonable starting point could be uIP.
(Adding Wikipedia article on µIP since original link is dead.)
Would you consider the MODBUS protocol? It is master/slave oriented, so a slave cannot initiate a transfer, but otherwise it is lightweight to implement, free, and well supported with high-level tools. You just need to get a grasp of its terminology (like holding register, input register, output coil, etc.).
The PHY level could be RS232, RS485, Ethernet...
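To show how small the RTU framing really is, here is a sketch of a "Read Holding Registers" (function 0x03) request; the function names are mine, but the CRC (CRC-16/Modbus, low byte sent first) and layout follow the spec:

    #include <stdint.h>
    #include <stddef.h>

    /* CRC-16/Modbus: reflected polynomial 0xA001, initial value 0xFFFF. */
    static uint16_t modbus_crc16(const uint8_t *p, size_t n)
    {
        uint16_t crc = 0xFFFF;
        while (n--) {
            crc ^= *p++;
            for (int i = 0; i < 8; i++)
                crc = (crc & 1) ? (crc >> 1) ^ 0xA001 : crc >> 1;
        }
        return crc;
    }

    /* Build a Read Holding Registers request; buf must hold 8 bytes. */
    size_t modbus_read_holding(uint8_t slave, uint16_t addr,
                               uint16_t count, uint8_t *buf)
    {
        buf[0] = slave;
        buf[1] = 0x03;                  /* function code             */
        buf[2] = (uint8_t)(addr >> 8);  /* start address, big-endian */
        buf[3] = (uint8_t)addr;
        buf[4] = (uint8_t)(count >> 8); /* register count            */
        buf[5] = (uint8_t)count;
        uint16_t crc = modbus_crc16(buf, 6);
        buf[6] = (uint8_t)crc;          /* CRC low byte first        */
        buf[7] = (uint8_t)(crc >> 8);
        return 8;
    }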
Have a look at Microcontroller Internet Network (MIN):
https://github.com/min-protocol/min
Inspired by CAN but using standard UART hardware, with Fletcher's checksum and frame format checking for error detection and byte-stuffing to mark a frame header.
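For reference, the textbook Fletcher-16 is only a few lines (an illustrative sketch; check the MIN repository for the exact variant it uses):

    #include <stdint.h>
    #include <stddef.h>

    /* Fletcher-16: two running sums modulo 255; catches reorderings
     * that a plain additive checksum misses. */
    uint16_t fletcher16(const uint8_t *data, size_t n)
    {
        uint16_t sum1 = 0, sum2 = 0;
        while (n--) {
            sum1 = (sum1 + *data++) % 255;
            sum2 = (sum2 + sum1) % 255;
        }
        return (uint16_t)((sum2 << 8) | sum1);
    }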
Take a look at Profibus.
If you don't want master/slave, I think you ought to do the arbitration in hardware (CAN bus, FlexRay).
I understand that implementing a state machine is the perfect way to program the computer. Since state machines are typically programmed using non-blocking calls, I wonder why blocking calls similar to the Berkeley sockets APIs were invented? Don't they encourage bad programming practice?
Thanks in advance.
Edit: The idea behind this question is to establish the fact that a multi-context event driven state machine based on non-blocking IO is indeed the perfect way to program the computer. Everything else is amateur. People who think otherwise should allow for a debate.
Your question makes some pretty substantial assertions / assumptions:
the underlying nature of computers is a state machine?
Well, surely you can model computers as state machines, but that does not in itself mean that such a model represents some fundamental "underlying nature".
I understand that implementing a state machine is the perfect way to program the computer.
Then by all means, write all your programs as state machines. Good luck.
In real life, some tasks can be conveniently and effectively written as state machines, but there are many for which a state-machine approach would be cumbersome to write and difficult to understand or maintain.
There is no "perfect" way to program a computer. Indeed, it would be pretty pretentious to claim perfection even for a single program.
Since state machines are typically programmed using non-blocking calls,
You don't say? I think you would need to be a lot more specific about what you mean by this. I have written state-machine based software at times in the past, and I would not characterize any of it as having been implemented using non-blocking calls, nor as exposing a non-blocking external API.
I wonder why blocking calls similar to the Berkeley sockets APIs were invented? Don't they encourage bad programming practice?
Before we could even consider this question, you would have to define what you mean by "bad programming practice". From what I can see, however, you are assuming the conclusion:
you assert that a state-machine approach to programming is ideal, with the implication that anything else is sub-par.
you claim, without support, that only non-blocking calls have state-machine nature
you conclude that anything that uses blocking calls must exhibit bad programming practice.
Your conclusion is not consistent with the prevailing opinion and practice of the programming community, to the extent that I can gauge it. Your argument is hollow and unconvincing.
Multiple processes (or later, threads) with synchronous (blocking) calls are easy to understand and program and easily composable - that is, you can take two tasks that are made up of synchronous calls and run them at the same time (via a scheduler) without having to modify either one in any way.
Programming as a state machine, on the other hand, requires either manually adding states (possibly in combinatorially growing numbers) when you add new code, or some kind of tightly-coupled framework of registering handlers and explicitly storing state for the next handler to use.
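To make that concrete, here is the registering-handlers style in miniature (all names hypothetical). Note how the table must grow with every new state or event:

    #include <stdio.h>

    typedef enum { ST_IDLE, ST_BUSY, ST_COUNT } state_t;
    typedef enum { EV_START, EV_DONE, EV_COUNT } event_t;

    typedef state_t (*handler_t)(state_t cur);

    static state_t on_start(state_t cur) { (void)cur; puts("starting"); return ST_BUSY; }
    static state_t on_done(state_t cur)  { (void)cur; puts("finished"); return ST_IDLE; }
    static state_t ignore(state_t cur)   { return cur; } /* event invalid here */

    /* One handler per (state, event) pair: the table is the program. */
    static handler_t table[ST_COUNT][EV_COUNT] = {
        [ST_IDLE] = { [EV_START] = on_start, [EV_DONE] = ignore  },
        [ST_BUSY] = { [EV_START] = ignore,   [EV_DONE] = on_done },
    };

    static state_t state = ST_IDLE;

    void dispatch(event_t ev) { state = table[state][ev](state); }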
What? 'blocking call' implies preemptive multitasking. The kernel of such an OS is a state-machine with interrupts as input events and a set of running threads as output actions.
The OS kernel is a state machine, and blocking calls conveniently move the FSM functionality into the kernel so that you don't have to write the miserable state-machines in user apps.
I understand that implementing a state machine is the perfect way to program the computer
What? 'perfect'? What? Have you ever developed, debugged and delivered any non-trivial multithreaded app?
This is a question that I have been asking myself for a while. I've tried finding good reads about this, but I cannot seem to find a solution that's suitable for how I think things should be.
I believe that for portability and maintenance reasons, drivers should preferably not be dependent on each other. However, sometimes one driver may require functionality provided by another driver. An I²C bus, for example, may have a timeout that depends on the Timer driver.
The way I have been doing this until now is by simply #include'ing the drivers in the other drivers, but this is not a desirable solution. I feel like there should be a better way of doing this.
I'm thinking of adding another layer, a sort of abstraction between the main application and all drivers. However, this feels like it's just moving the problem somewhere else and not solving it.
I've used function pointers, but this, too, makes maintenance a nuisance.
Are there any good sources or ideas about driver interdependency and how to neatly solve a problem like this?
On big controllers, Cortex M3/4 and the like, it is totally fine to have countless layers. For example, the SD-card interface of the LPC1822 consists of an "sdif" driver handling the basic communication and pin toggling of the card interface. On top of that there is the "sdmmc" driver, providing more sophisticated functions. Above that could sit the FAT filesystem (using the real-time clock), and on and on...
By contrast, on a tiny 8-bit controller it is maybe better to have no layers at all. The three registers you have to set for an I2C communication are manageable. Don't write a hundred lines of code to do something trivial. In that case it is totally fine to include the timer directly in the I2C routines. If you want your program to be more understandable to your colleagues, use your time to write good comments and documentation instead of encapsulating anything and everything in functions and abstraction layers.
When you are resource-constrained and your program isn't that big anyway, don't burden yourself with too much overhead just to get consistent layers. Layers are something for big, complicated software. In embedded computing you are sometimes better off keeping your sideways dependencies instead of writing huge libs that don't fit into the flash space.
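That said, if the #include dependency from the question really hurts, the function-pointer injection can stay tiny. A hypothetical sketch in which the I2C driver never includes timer.h, only this small interface:

    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        uint32_t (*now_ms)(void); /* time source injected by the application */
    } i2c_deps_t;

    static i2c_deps_t i2c_deps;

    void i2c_init(const i2c_deps_t *deps) { i2c_deps = *deps; }

    /* Poll a status register until a flag is set or the timeout expires. */
    bool i2c_wait_flag(volatile uint32_t *reg, uint32_t mask, uint32_t timeout_ms)
    {
        uint32_t start = i2c_deps.now_ms();
        while ((*reg & mask) == 0) {
            if (i2c_deps.now_ms() - start > timeout_ms)
                return false; /* timed out */
        }
        return true;
    }

The application wires it up once, e.g. i2c_init(&(i2c_deps_t){ .now_ms = timer_now_ms }); (timer_now_ms being whatever your Timer driver provides), and the drivers stay ignorant of each other.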
I'm thinking about creating my own pure-C software SPI library because there are none available (as far as I can tell).
Which also worries me - why aren't there any software SPI libraries? Is there some hardware limitation I'm not considering?
EDIT:
I've decided to write my own library due to how buggy the SPI peripheral is in STM32. Especially in 8 bit mode, but I've also had a lot of problems with 16 bit mode. Many other issues I didn't even bother documenting.
I've now written the software implementation (it is pretty easy) and it works just fine.
why aren't there any software SPI libraries?
Because it's about 10 lines of code each for the WriteByte and ReadByte functions, and most of that is bit-banging processor-specific registers. The higher-level protocol depends on the device connected to the SPI. Here's what Wikipedia has to say on the subject:
The SPI bus is a de facto standard. However, the lack of a formal standard is reflected in a wide variety of protocol options. Different word sizes are common. Every device defines its own protocol, including whether or not it supports commands at all. Some devices are transmit-only; others are receive-only. Chip selects are sometimes active-high rather than active-low. Some protocols send the least significant bit first.
So there's really no point in making a library. You just write the code for each specific situation, and combination of devices.
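For illustration, one of those ~10-line functions might look like this for SPI mode 0, with the processor-specific register pokes hidden behind placeholder macros you would fill in per chip:

    #include <stdint.h>

    #define SCK_HIGH()    /* set the clock pin       */
    #define SCK_LOW()     /* clear the clock pin     */
    #define MOSI_WRITE(b) /* drive the data pin to b */

    void spi_write_byte(uint8_t byte)
    {
        for (int i = 0; i < 8; i++) {
            MOSI_WRITE((byte & 0x80) != 0); /* MSB first          */
            SCK_HIGH();                     /* slave samples here */
            SCK_LOW();
            byte <<= 1;
        }
    }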
Although the others have answered that it's just bit-banging, I would argue that there is benefit to writing a small layer:
If, like me, you don't use a HAL or the standard libs, you can write a layer to deal with initialization, following the initialisation sequence required by the chip.
You can map all your interrupt vectors for a specific peripheral in this layer utilising callback mechanisms
Create separation between application and system domains which is a core principle for modular design
Increase code reusability by utilising techniques such as function pointers and common interfaces (sketched below)
Add input validation on settings/parameters that would otherwise cause code duplication if no layer was used. For example, ensuring that the HCLK does not exceed 180MHz on the stm32f429 when initialising it.
Whilst it is true that sending data generally requires little more than setting a register, more often than not the initialisation sequence is complicated.
With the increase in power and capacity of microcontrollers, along with project size, it's important to make balanced design choices that promote scalability and maintainability - especially in commercial projects.
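As a sketch of the "common interfaces" idea above (hypothetical names), the layer might expose a struct of function pointers that hardware SPI, bit-banged SPI, or a test mock can each implement:

    #include <stdint.h>
    #include <stddef.h>

    typedef struct {
        int  (*init)(uint32_t baud_hz);              /* returns 0 on success */
        void (*write)(const uint8_t *buf, size_t n);
        void (*read)(uint8_t *buf, size_t n);
    } spi_bus_t;

    extern const spi_bus_t spi_hw;   /* backed by the SPI peripheral */
    extern const spi_bus_t spi_soft; /* backed by bit-banging        */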
If you are asking about microcontrollers, then you can write your own SPI library.
You need to use the bit-banging technique for that.
There are software SPI libraries available, but as every microcontroller has a different port architecture and register set, they are not generic; they are specific to one controller.
e.g. For the 8051 architecture you can find this.
I'm trying to learn a bit about FPGA cards and I'm very new to the subject. I'm more of a software developer and have had no real experience programming FPGA devices.
I am currently building a project on a Linux OS in the C language. I would like to know how it may be possible to implement such code on an FPGA device. For that, I have a few questions.
Firstly, do I have to translate my code to VHDL, or can I use C? Also, how would one go about installing an OS on an FPGA card, and are there devices that already have an OS installed on them?
Sorry for the newbie type questions, and any help would be appreciated!
FPGAs are great at running simple, fixed data flows through parallel processing, while CPUs are optimized for complex and/or dynamic data flows.
The C language is not designed for describing highly parallel systems, as it follows a clearly sequential pattern ("assign a to b, then add c to d"); while compilers introduce some parallelization as an optimization, the focus is on generating code that behaves as if the instructions were sequentialized.
In an FPGA, on the other hand, you want to break up sequences as far as possible and create parallel circuitry and pipelines, so normally the system is described in the form of interconnected blocks, where each is kept as simple as possible.
For example, where you have (a+b)*(c+d), a CPU based design would probably have a single adder, feed it with a and b first, then with c and d, and finally pass both results to the multiplier.
In an FPGA design, that is rather costly, as you have to create a state machine that keeps track of which of the three computation stages we are at and where the results are kept, so it may be easier to have two dedicated adders hardwired to a and b, and c and d, respectively, and to have their outputs connected to a multiplier block.
At this point, you basically have created a dedicated machine that can compute this single term and nothing else, but its speed is limited only by the speed of the transistors making up the logic gates, and compared to the state machine you get a speed increase of at least a factor of three (because we now have a single state/instruction), probably more because we can also discard the logic for storing intermediate results.
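To see the contrast in code, here is the single-adder schedule from the previous paragraph written out as the explicit state machine that a CPU's control logic effectively runs for you:

    /* Three stages sharing one adder; each case is one "cycle". */
    typedef enum { ADD_AB, ADD_CD, MULTIPLY } stage_t;

    int compute(int a, int b, int c, int d)
    {
        stage_t stage = ADD_AB;
        int t1 = 0, t2 = 0, result = 0, done = 0;
        while (!done) {
            switch (stage) {
            case ADD_AB:   t1 = a + b;       stage = ADD_CD;   break;
            case ADD_CD:   t2 = c + d;       stage = MULTIPLY; break;
            case MULTIPLY: result = t1 * t2; done = 1;         break;
            }
        }
        return result;
    }

In the FPGA version, the two adders and the multiplier all exist at once, so the sequencing (and this whole state machine) disappears.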
In order to decide when to create a state machine/processor, and when to hardcode computations, the compiler would have to know more about the program flow and timing requirements than can be expressed in C/C++, so these languages are not a good choice.
The OS as such also looks vastly different. There are no resources to arbitrate dynamically, so this part is omitted, and all that is left are device drivers. As everything is parallel, these take the form of external modules that are simply linked into your design, and interfaced directly.
If you are just starting out, I'd suggest you get a development kit with a few LEDs, and start with the basic functionality:
Make the LED blink
Use a PLL block from the system library to derive a secondary clock, and make the LED blink with a different frequency
Add a simple bus interface, e.g. SPI, and communicate with a simple external device, e.g. a WS2811 based LED strip
After you have a basic grasp of how the system works, try to get a working simulation environment (the equivalent of a Debug build), and begin including more complex peripherals.
It sounds like you could use a tutorial for beginners. I would recommend starting here and reading through an introduction to digital design. Some of your basic questions should be answered by reading through these tutorials. This will put you in a better place to ask more specific questions in the future.
In previous versions of Arduino, with their limiting 8-bit microcontrollers, it seems that implementing HTTPS (not merely HTTP) was almost impossible. But the newer Arduino Due provides a 32-bit ARM core - see the spec here.
I tried to check several network libraries (libcurl, OpenSSL, yaSSL), but I didn't find any that were already ported to work with the Arduino Due.
OpenSSL is probably too heavy to be able to run on this processor, but I believe that yaSSL as an embedded library should be possible to do.
Do you have any information of a library that I can use to trigger HTTPS requests on Arduino Due?
Unfortunately this is too long for a comment.
► No out of the box solution
From what I have gathered, there is no straightforward solution for a webserver running on the Atmel SAM3X8E ARM Cortex-M3 CPU that outputs HTTPS out of the box.
Texas Instruments provides better options at the moment with its boards equipped with a Stellaris ARM Cortex-M3 microcontroller.
► Alternative
There are several options available that provide cryptographic functions, on top of which one could design and implement a simple secure communication protocol that talks to an intermediary device, which in turn facilitates rapid application development and SSL.
This intermediary device, for instance an off-the-shelf $70 Android smartphone that keeps your project mobile and connected, runs a service on a specified port which in turn communicates with Amazon SQS. Already available. This may sound ugly or tough, but it is much easier than doing the programmatic groundwork for a webserver with full TLS support on the Arduino. Given the proper motivation the latter may be easy, but not if one just wants a fast, pragmatic solution to one's own project.
► Cryptographic libraries
crypto-arduino-library http://code.google.com/p/crypto-arduino-library/ (not maintained since 2010)
matrixssl
mbed TLS (formerly PolarSSL) - see the sketch after this list
wolfSSL (formerly CyaSSL)
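To give a sense of what using one of these looks like, here is a rough skeleton of the mbed TLS client flow (desktop-style; on the Due you would replace the net_sockets BIO with callbacks into your own transport, and real code must load a CA chain and verify the peer instead of the VERIFY_NONE shortcut shown):

    #include "mbedtls/net_sockets.h"
    #include "mbedtls/ssl.h"
    #include "mbedtls/entropy.h"
    #include "mbedtls/ctr_drbg.h"

    int https_connect(const char *host)
    {
        mbedtls_net_context net;
        mbedtls_ssl_context ssl;
        mbedtls_ssl_config conf;
        mbedtls_entropy_context entropy;
        mbedtls_ctr_drbg_context drbg;

        mbedtls_net_init(&net);
        mbedtls_ssl_init(&ssl);
        mbedtls_ssl_config_init(&conf);
        mbedtls_entropy_init(&entropy);
        mbedtls_ctr_drbg_init(&drbg);

        /* Error handling omitted for brevity. */
        mbedtls_ctr_drbg_seed(&drbg, mbedtls_entropy_func, &entropy, NULL, 0);
        mbedtls_net_connect(&net, host, "443", MBEDTLS_NET_PROTO_TCP);
        mbedtls_ssl_config_defaults(&conf, MBEDTLS_SSL_IS_CLIENT,
                                    MBEDTLS_SSL_TRANSPORT_STREAM,
                                    MBEDTLS_SSL_PRESET_DEFAULT);
        mbedtls_ssl_conf_authmode(&conf, MBEDTLS_SSL_VERIFY_NONE); /* demo only! */
        mbedtls_ssl_conf_rng(&conf, mbedtls_ctr_drbg_random, &drbg);
        mbedtls_ssl_setup(&ssl, &conf);
        mbedtls_ssl_set_hostname(&ssl, host);
        mbedtls_ssl_set_bio(&ssl, &net, mbedtls_net_send, mbedtls_net_recv, NULL);

        return mbedtls_ssl_handshake(&ssl); /* then ssl_write()/ssl_read() */
    }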
► Discussions
Following is a list of discussions to get you started:
HTTPS alternative on Arduino
SSL from a Microcontroller
Lightweight Packet Encryption
Many of these libraries would still need to be adapted, but community experts can help you with that fairly quickly.
Good luck! If you are at liberty to upload your final project to github then you just gained a thanks and a follower.
IMHO Arduino (including the Due) is the wrong tool for heavy and/or encrypted web-based communication. I would strongly suggest looking for more appropriate hardware in the same size and price range. As soon as you get into HTTPS, you are close enough to also want a lot of the other stuff that real operating systems provide. In other words, I suggest going for something like the Raspi: similar size and price, but way more powerful; in particular, it can run Linux. --> HTTPS becomes simple.
The big problem with https support on an arduino is the danger of overloading your processor which could make the project unviable.
Even solutions targeted at embedded platforms, like PolarSSL, can eat up too much memory and use too much processing power. Remember that even in the most streamlined implementations, SSL support has to be generalized for wide adoption and will include components that you won't find necessary. There's also the question of which Certificate Authorities you will trust and how you will communicate with them for things like certificate revocation.
I would look instead towards a solution that isn't as broken on the surface for your needs. Something like CurveProtect, which is an implementation of CurveCP.
Of course, your decision will largely be based on what you want to do and how much time you want to spend figuring the problem out. PolarSSL has a footprint that can be as small as 30K (more typically close to 100K).