I am usually a woodworker, not a developer. I'm learning C/C++ for embedded systems while trying to make some of my tools autonomous, to save myself hours of repetitive work.
For now, it's fun and going well; I have spent maybe a hundred hours coding/learning and have already saved more time*.
As I want to keep going: is buying and following the MISRA coding rules a "mandatory good idea"? What does MISRA contain? Only coding rules, or also tips on making things safer?
These tools can be dangerous (after all, they cut wood, and a human body is far less resistant...).
Note: I obviously do my tests in 4 steps:
Just the PIC running with an OSD & SD card logger (one day I'll write an analysis tool and stop reading those logs).
I plug in the tool with nothing attached to it.
I use soft drills/cutters on foam.
I conduct real tests at a safe distance, with my hand on the emergency stop button.
Also, I'm the only employee and no one else has access to my workplace.
*So far I've turned a drill into a kind of 3D wood printer (doing the imprecise part of the work), and a "cutter board" into an automated one.
Note 2: I'm not a native speaker, so the tool names are probably off.
MISRA was originally designed for use in the automotive industry, though it has grown well past that at this stage. The MISRA guidelines' stated aims are:
Ensure safety
Bring robustness and reliability to the software.
Human safety must take precedence when in conflict with security of property.
Consider both random and systematic faults in system design.
Demonstrate robustness, not just rely on the absence of failures.
Application of safety considerations across the design, manufacture, operation, servicing and disposal of products.
The documents mainly consist of rule-based advisory information for code that tries to meet these aims. MISRA document prices have dropped somewhat over the years; some documents can be bought online from MISRA for as little as GBP £10 + VAT.
However, as a beginner and amateur coder, I would advise first bolstering your knowledge of C and C++. While in most areas of industry it is good to follow a pertinent standard where applicable, the documents are written with the assumption that the reader has a very solid grounding in the languages, and in the concerns and processes governing full-scale commercial applications written in them. If your workshop is for personal use only, and depending on the rules governing workplace safety in your jurisdiction, a good understanding of the languages, the language tools and the hardware will let you make better choices about how to code things than reading MISRA could at this stage.
As commented above, and it is worth reiterating, MISRA is not some kind of magic wand or concrete way of going about things that will guarantee your code is good, works, and is safe. Both good and bad code can meet standards. Following MISRA before you have a good and complete grasp of what you are doing would be like ensuring every cable in your workshop is neatly tacked in place, and then stabbing yourself with a chisel.
MISRA-C is a set of rules that forces you to weed out well-known problems and poorly-defined behavior from a C program. It is a "safe subset" of the C language, banning various forms of dangerous practice through rules that target well-known bugs such as reliance on poorly-defined behavior or implicit type conversions. C has the advantage of being a very old language, meaning that its flaws are well-known.
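To make that concrete, here is a hedged illustration of the kind of implicit-conversion trap those rules target; this is my own example, not an excerpt from the MISRA document:

```c
#include <stdint.h>

uint8_t duty = 200u;

void update_pwm(void)
{
    /* duty is implicitly promoted to int, multiplied, and the result
       (400) is then silently truncated back into uint8_t, leaving 144. */
    uint8_t doubled = duty * 2;             /* the kind of code MISRA bans */

    uint16_t widened = (uint16_t)duty * 2u; /* widen explicitly instead    */
    (void)doubled;
    (void)widened;
}
```

On a machine controlling a motor, a silent 400-to-144 truncation is exactly the sort of bug you want a rule (and a tool enforcing it) to catch before the firmware ships.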
MISRA-C has a heavy focus on static code analysis to find bugs at compile time. This is something to keep in mind, as to my knowledge there exists no open-source static analysis tool that can check for MISRA-C compliance. The commercial tools tend to be very expensive and are often full of bugs/false positives. Still, most of them are useful.
MISRA-C is focused only on C programming; it does not address CPU or microcontroller issues etc., although it does enforce some forms of defensive programming, which is a counter against EMI, run-away code and other forms of unexpected program behavior. (For a list of general tips & tricks beyond C, see this. Not all of these will necessarily apply to your specific machine though.)
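As a hedged sketch of that defensive style (all names here are invented, not taken from the MISRA document): every switch handles the value that "cannot happen", so a corrupted state variable or a wild jump is caught instead of falling through silently.

```c
typedef enum { MOTOR_OFF, MOTOR_SPINNING, MOTOR_BRAKING } motor_state_t;

extern void emergency_stop(void);   /* hypothetical fail-safe hook */

void motor_step(motor_state_t state)
{
    switch (state) {
    case MOTOR_OFF:
        /* ... */
        break;
    case MOTOR_SPINNING:
        /* ... */
        break;
    case MOTOR_BRAKING:
        /* ... */
        break;
    default:
        /* Unreachable in theory; reachable after a bit-flip or run-away code. */
        emergency_stop();
        break;
    }
}
```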
To demonstrate MISRA compliance, you create a "compliance matrix" which shows how you catch every directive/rule of the MISRA-C document: through compiler messages, peer review, static code analysis etc.
Most rules in the document make a lot of sense, but some do not. MISRA-C does however allow deviations from most rules, ranking them as one of:
Mandatory. No deviations allowed.
Required. One must invoke a formal deviation procedure if not following the rule.
Advisory. One can deviate from the rule without making a formal deviation.
Typically, MISRA-C compliance is therefore achieved by establishing a company coding standard which addresses all of the rules. The easiest way to implement it is to write down in that document which rules are followed and which are skipped, at the company level, and then set the static code analysis filters accordingly.
Related
I have the following questions about embedded unit testing:
Can googletest be used for embedded C code?
Is googletest compatible with IEC 62304?
I tried to find a document that could answer my questions, but was unable to find one. Even in the official googletest documentation I could not find the answer. I also have the following question:
How would I know whether an open-source unit testing tool is compatible with the IEC 62304 standard?
Please help me with your knowledge of unit testing.
Thanks in advance.
To answer your questions directly:
googletest is intended for use with C++. The link in Vertexwahn’s answer shows that at least one person has been able to use it for testing C.
2 & 3. IEC 62304 is a software life cycle process; it has nothing to say about whether you can or cannot use a particular tool, only about the steps you must go through in your project.
Unit testing is certainly one step that you would go through in your software life cycle. As the engineer responsible, it is your job to decide whether or not a tool is suitable for a particular task. No person outside your project can ever tell you that a tool is suitable for use in developing a particular medical device, because this depends very highly not only on the design of the particular device, but also on the testing strategy that you are going to adopt.
The testing strategy in turn will depend on the particular risks that you need to mitigate. You will need to follow ISO 14971 for your risk management process.
At every stage of the process you will have to document the reasons for the decisions that you have made according to an ISO 13485 quality management process.
When you come to make a regulatory submission to an approved body they will appoint an auditor who will look through your documentation. In the vast majority of cases the auditor will have absolutely no technical expertise in software. They will check that you have followed the appropriate documentation process but ultimately they will take your word on whether or not a tool is suitable.
It is easy to trick an auditor and use an unsuitable tool by creating a large volume of paperwork which falsely explains why it is suitable. If you do this no one will know until or unless the medical device causes harm to someone and your company (or you personally) gets sued or prosecuted and the documents get examined by technical experts appointed by a court.
What you need to think about when you put your signature on the document that states the tool is suitable is whether you could stand up in court and defend your decision after someone has been harmed.
After all this, having said that no tool is ever either inherently suitable or unsuitable, there are some software suppliers that make claims of suitability or even "pre-approval". What this means is that they have pre-written many of the documents that your regulatory submission will require. These are always very expensive (nothing free like googletest fits into this category). Even if you use these pre-written documents, it is your responsibility to review them and put your signature against them and say that they are correct and more importantly that they are applicable to your specific project. Buying a product like this saves you time, but not liability.
GoogleTest seems to work with C -> https://meekrosoft.wordpress.com/2009/11/09/unit-testing-c-code-with-the-googletest-framework/
Google will not take over the responsibility for your compliance with IEC 62304. You have to make sure that the tools you use do what they should do for the use case you use them for. For instance, you can come up with an acceptance test for GoogleTest that proves it works for you as expected.
When doing this, also consider known bugs. Even if a company offered me a unit test framework that is IEC 62304 compliant, I would ask myself whether that framework has more users and is better tested than gtest.
I think something like this does not exist: it would mean that the open source project takes over the liability for damages resulting from its use.
I've been trying to do some research on rule-based AI, but I can't seem to find a clear distinction between production systems and expert systems. They both use rules to dictate their decisions, and they can both use forward or backward chaining. Yet they are talked about as if they were separate entities.
Also, I can't seem to find anything else that fits under the "rule-based system" umbrella; is there anything else?
Any insight on this is greatly appreciated. Thanks!
A production system is a type of programming language. An expert system is a type of program.
Production systems are a form of declarative programming where you specify what you want done, but not how it is done. Declarative programming works best when your program can be naturally expressed using productions/rules (when/then) and you need to be able to frequently add or delete productions. For example, many email programs allow you to add rules for processing emails when they arrive. It is convenient to use productions/rules for processing emails since what you want done can be expressed naturally as productions (when subject contains "nigerian prince" then move message to junk mailbox), but since you will be adding/deleting productions it is also convenient to maintain them in this form and to allow the process of how the productions are applied to be automatically handled for you.
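To make the when/then idea concrete, here is a minimal sketch of a production-system engine in C, using the email example from above; all the names and rules are invented for illustration:

```c
#include <stdio.h>
#include <string.h>

/* A production is a when/then pair. */
typedef struct {
    int  (*when)(const char *subject);   /* condition */
    void (*then)(const char *subject);   /* action    */
} production_t;

static int  mentions_prince(const char *s) { return strstr(s, "nigerian prince") != NULL; }
static void move_to_junk(const char *s)    { printf("junk: %s\n", s); }

int main(void)
{
    production_t rules[] = { { mentions_prince, move_to_junk } };
    const char *inbox[]  = { "nigerian prince needs help", "lumber delivery at 3" };

    /* The engine, not the caller, decides which rules fire: you declare
       what should happen, and the matching loop is handled for you. */
    for (size_t i = 0; i < sizeof inbox / sizeof inbox[0]; i++)
        for (size_t r = 0; r < sizeof rules / sizeof rules[0]; r++)
            if (rules[r].when(inbox[i]))
                rules[r].then(inbox[i]);

    return 0;
}
```

Adding or deleting a rule means editing the `rules` table, not the control flow, which is exactly the convenience the paragraph above describes.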
Generally, expert systems refer to programs that emulate specialized human expertise (for example, diagnosing diseases). Such expertise can frequently be expressed using rules and can be incrementally added or removed, so these types of programs are often implemented using production systems, since it is convenient to do so.
So while production systems have a strong association with expert systems, not all programs written with production systems are expert systems, and not all expert systems are written with production systems.
For grammar parsing, I used to "play" with Bison, which has its pros and cons.
Last week, I noticed on the SQLite site that the engine is built with another parser generator: Lemon.
It sounds great after reading the thin documentation.
Do you have some feedback about this parser?
I cannot really find pertinent information on Google or Wikipedia (just a few examples, the same tutorials). It doesn't seem very popular (there is no lemon tag on Stack Overflow [ed: there is now :P]).
The reasons we are using Lemon in our firmware project are:
Small size of generated code and memory footprint. It produces the smallest parser I have found (I compared parsers of similar complexity generated by flex, bison, ANTLR, and Lemon);
Excellent support for embedded systems: Lemon doesn't depend on the standard library, you can specify external memory management functions, and debug logging is removable;
Public-domain license. There is a separate fork of Lemon licensed under GPLv2 that is not suitable for our needs because of the viral license, so we take the latest SQLite sources and compile Lemon out of them (it consists of only two files);
Pull parsing. It makes the code more straightforward to understand and maintain than Flex/Bison parsing code, with thread safety as an additional bonus I admire (see the sketch after this list);
Simple integration with tokenizers. The nature of our project requires tokenizing a binary stream with variable token sizes. It was quite easy to implement a tokenizer and integrate it with a parser API of only 3 functions and one feedback context variable. We investigated ways of integrating Lemon with re2c and Ragel and found them also quite easy to implement;
Very simple syntax that is fast to learn.
Lemon explicitly separates development of the tokenizer from the parser (syntax analyzer). My development flow starts with designing the parser grammar. At this first stage I can check complex rules against a hand-written token sequence by means of several Parse(...) calls; the tokenizer is implemented afterwards.
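For illustration, here is roughly what driving a Lemon-generated parser looks like. The token codes normally come from the generated header, and the exact Parse() value type depends on the %token_type directive in your grammar; both are stubbed here as assumptions so the sketch stands alone:

```c
#include <stdlib.h>

/* Stand-ins for definitions Lemon generates from your grammar file. */
#define TOK_NUMBER 1   /* in real use, #defines come from the generated header */
#define TOK_PLUS   2

void *ParseAlloc(void *(*mallocProc)(size_t));
void  Parse(void *parser, int tokenCode, const char *tokenValue);
void  ParseFree(void *parser, void (*freeProc)(void *));

int main(void)
{
    /* You hand Lemon its allocator explicitly: handy on embedded targets. */
    void *parser = ParseAlloc(malloc);

    /* Pull model: the tokenizer pushes tokens into the parser one at a time. */
    Parse(parser, TOK_NUMBER, "40");
    Parse(parser, TOK_PLUS,   "+");
    Parse(parser, TOK_NUMBER, "2");
    Parse(parser, 0, NULL);   /* token code 0 signals end of input */

    ParseFree(parser, free);
    return 0;
}
```

Because the parser keeps its own state between Parse() calls, you can feed it tokens from anywhere, which is what makes checking grammar rules before the tokenizer exists so convenient.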
Surely Lemon is not a silver bullet; it has a limited area of application. Among the disadvantages:
Lemon requires writing more rules than Bison because of its simplified syntax: no repetitions or optionals, one action per rule, etc.;
The complete set of LALR(1) parser limitations;
Only the C language is supported.
Weigh the pros and cons before making your choice. I've done mine ;-)
Interesting find! I haven't actually used it, so the commentary is based on reading the documentation.
The redesign so that the lexical analysis is done separately from the parsing immediately seems to have merit. In particular, it has the potential to simplify operations such as handling multiple or nested source files. The Lex-based yywrap() mechanism is less than ideal. That it avoids all global variables and has careful memory allocation and deallocation control should count in its favour (that it allows the choice of allocator and deallocator greatly helps too - at least for the environments where I work, where memory allocation is always an issue).
The rethinking on how the rules are organized and how the terminals are identified is a good idea.
All in all, it looks like a well thought out redesign of Bison.
It is in the public domain according to the referenced web pages.
Object-oriented programmers seem to have all the fun. Not only are they treated to major framework revisions every two years, and new and improved languages every five, they also get design practices tailor-made to their programming style. From test-driven development to design patterns, object-oriented programmers have a lot to keep up with.
By contrast, the C programming world seems far more sedate. The last major revision to the language was in 1999, and the next one is likely to be far less impressive. K&R 2nd edition is still held up as a good introductory text by many, despite being twenty years old now.
If we, as C programmers, have developed and improved our skills and practices (and I think we probably have), we don't seem to be very good at communicating them. We don't sell books about them, post about them on blogs, or organise workshops around them. Not in the way the rest of the software development world seems to.
So, let's share.
What are your preferred 'modern' C programming practices?
Do you use "template" libraries of long, involved preprocessor macros to squeeze the last inch of performance out of hardware in the same way C++ programmers can? Do you use an allocation library like halloc to minimize the time you spend on managing memory, or do you use a full-blown automatic garbage collector?
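For anyone who hasn't met the macro-"template" trick, here is a hedged sketch of the kind of thing meant; all names are invented for illustration:

```c
#include <stdlib.h>

/* Declare a growable array "template" for a given element type. */
#define DECLARE_VEC(T)                                        \
    typedef struct { T *data; size_t len, cap; } vec_##T;     \
    static int vec_##T##_push(vec_##T *v, T x)                \
    {                                                         \
        if (v->len == v->cap) {                               \
            size_t ncap = v->cap ? v->cap * 2 : 8;            \
            T *p = realloc(v->data, ncap * sizeof(T));        \
            if (p == NULL) return -1;                         \
            v->data = p;                                      \
            v->cap  = ncap;                                   \
        }                                                     \
        v->data[v->len++] = x;                                \
        return 0;                                             \
    }

DECLARE_VEC(int)   /* expands to vec_int and vec_int_push() */

int main(void)
{
    vec_int v = { 0 };
    vec_int_push(&v, 42);
    free(v.data);
    return 0;
}
```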
Of course, if you've been using these things since 1987, feel free to chime in as well; the point of this question is to share practices that are out of the ordinary but might benefit others.
What are your preferred 'modern' C software design practices?
Design considerations are at least as important, of course. Do you adapt design practices from the object-oriented world? Do you use UML? Or do you opt to iron out specifications in a language-neutral style (flowcharts, Z, weakest-precondition calculus, anything)?
I try to use ready-made libraries for basic functionality when possible. I find GLib (from the GTK+ project) absolutely brilliant when it comes to general data structures and such. No more writing your own hash table, linked list, dynamic array or whatever.
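As a small taste of what that buys you, a sketch using GLib's hash table (assuming GLib is installed; compile with the flags from `pkg-config --cflags --libs glib-2.0`):

```c
#include <glib.h>
#include <stdio.h>

int main(void)
{
    /* A hash table keyed by C strings; no hand-rolled buckets. */
    GHashTable *wood = g_hash_table_new(g_str_hash, g_str_equal);

    g_hash_table_insert(wood, "oak",  "hardwood");
    g_hash_table_insert(wood, "pine", "softwood");

    const char *kind = g_hash_table_lookup(wood, "oak");
    printf("oak is a %s\n", kind ? kind : "unknown");

    g_hash_table_destroy(wood);
    return 0;
}
```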
I also think the object-oriented ideas in the GTK+ toolkit are great, and I often structure my code the same way. There's nothing stopping you from adopting such paradigms in C; it's flexible enough to express many things that are simply made "first-class" in other languages, even if doing so often involves a certain... verbosity.
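A minimal sketch of that style, with the verbosity on display; the names are mine, not GTK+ API:

```c
#include <stdio.h>

/* A hand-rolled "class": data plus a function pointer as a virtual method. */
typedef struct tool tool_t;
struct tool {
    const char *name;
    void (*run)(tool_t *self);
};

static void drill_run(tool_t *self)
{
    printf("%s: spinning up\n", self->name);
}

int main(void)
{
    tool_t drill = { "drill", drill_run };
    drill.run(&drill);   /* dynamic dispatch, written out by hand */
    return 0;
}
```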
Not really a C programming practice, because I'm one of those newfangled object-oriented programmers working in C++, but this:
Object Oriented Programming is not a silver bullet
I wish my company had more pure C programmers to teach the juniors that there is life beyond Object Orientation.
To be honest, my answer would be that I finally gave in to C++ after fighting it for a long time. I've come to really enjoy its advantages.
I like being able to let the compiler take care of the OO plumbing, being able to use exceptions and RAII instead of littering return codes and resource releases all over, not reimplementing a linked list or an automatically expanding vector or a smarter string library for the umpteenth time, operator overloading instead of vector_add() everywhere, etc. Granted, there are libraries for much of this in C, but it seems like such things are rather fragmented between competing solutions. It's nice having such amenities standardized in C++.
The nice thing is that I'm still free to drop down and do all the stuff I might have done in C if I feel that suits the program best. There's no OO straitjacket like in Java.
1999: Use C, it is fast, low-level, efficient
2009: Use Python, it is fast-enough, productive, multi-platform, popular and fun
I've started to write a file format specification for a domain-specific data type. My goal is to improve interoperability between a large number of data providers and search algorithms. I want the result to be available for use, patent-free and without distribution fees.
I'm looking for advice on which license to use, both for the specification and for the contributor agreement, if I need one.
If this were software then I know enough about the GPL, MIT, etc. licenses to make an informed decision. If this were a straight document then I would pick one of the Creative Commons licenses, likely CC by attribution.
Looking around, I don't find any common license statement or much in the way of advice. I'm leaning towards the one used in RFCs (for example, the HTTP/1.1 copyright statement), but that says "this document itself may not be modified in any way" (with exceptions), which is something I'm not used to from developing code under the MIT and GPL licenses. But that restriction seems pretty common in specifications.
Unlike most documents but like code, specifications can be affected by patents. Is it best practice these days to also state that the specification is patent-free and to require contributors to reveal any patent conflicts they may know of and/or freely license those patents for the purposes of implementing the spec?
Should I require some sort of contributor agreement?
Or should I just wing it, choose the RFC copyright statement (or CC-By-Attribution), and not worry about this?
"this document itself may not be modified in any way" (with exceptions) [...] But that restrictions seems pretty common in specifications.
Actually, it is pretty much a requirement. If anybody could change it at will, it wouldn't be much of a specification: that would defeat the whole purpose to "improve interoperability between a large number of data providers and search algorithms".
Dalke: Is it? I'm so used to implementation-defined and ad hoc format definitions, and to people who break the spec left and right, that I didn't think it would add anything, and protection would hinder future extension if I decide not to continue maintaining the code. I thought conformance was better handled by trademark law, like how DRM-based CDs that violate Philips' Red Book can't use the "CD" logo.
[...] which is something I'm not used to from developing code under the MIT and GPL licenses
Actually, you are used to it, you just don't realize it: the whole reason why you were able to just write the three letters "GPL" above and blindly assume that everybody knows precisely what you mean is that the GPL itself contains exactly that same restriction. ("Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.") The GPL itself is not distributed under a free license, precisely because if anybody were allowed to modify it, it would lose its meaning.
Dalke: You're right, although the GFDL's "invariant section" sprang immediately to mind when I was considering the possibilities. I will point out that people do things in the license grant which modify the terms of the GPL to, among other things, make it non-free, and I've personally modified the three-clause BSD license to scratch out Berkeley and put in my name, but those are quibbles.
Is it best practice these days to also state that the specification is patent-free and to require contributors to reveal any patent conflicts they may know of and/or freely license those patents for the purposes of implementing the spec?
Yes. It is clear from your question that you care a great deal about making the barrier for implementors as low as possible. Then, what good is a free, open, royalty-free specification if I have to pay for a patent license anyway? This has to be addressed, preferably by an IP/patent lawyer with extensive expertise in such questions (including, but not limited to, the specific challenges that open source projects face with regards to patent licensing).
There are some quite subtle pitfalls in there. For example, one common theme is to require that patent licenses be made available under what is usually called FRAND (or RAND) terms, which stands for fair, reasonable and non-discriminatory. Which sounds good, right? Except there's a subtle problem there: charging 1 cent for every copy is certainly reasonable and if you charge everybody the same amount, it's also fair and non-discriminatory. Except that open source projects (and even freely distributable proprietary ones) cannot enforce those terms, therefore they cannot implement the specification.
Dalke: Very true. But for licenses that's a well-described topic. There are reams of text on the matter, and suggestions, and podcasts, and even automated license choosers. For specifications, not so much. I did know about the RAND issue, and I've heard stories about another spec where a contributor at the end said "Oh! Look at that! We've got a patent on it. Well, lucky us!" The question is how much I should worry about it.
So, proper patent promises or covenants or whatever you call them, are very important. (As are trademarks, by the way.)
For example, the W3C originally wanted to adopt a RAND license for its specifications, but after significant protests from projects such as Mozilla and Apache, they decided upon a royalty-free model. So, even an organization which cares deeply about freedom and openness almost made a mistake with the potential of killing every single open source web browser, feedreader and XML parser.
Or should I just wing it, choose the RFC copyright statement (or CC-By-Attribution), and not worry about this?
"Winging" important legal decisions is how people end up bankrupt or even in jail. Or at least extremely unhappy. While the first two are pretty unlikely in this case, I assume that you will be unhappy if you find out in two years that your specification is completely useless because of a glitch in its patent/copyright/IP legalese.
Dalke: I knew that word would be a draw. ;)
There are legal firms that specialize in pro bono work for non-profit developers of open source projects; maybe one of those will help you. The most well-known ones are probably the Software Freedom Law Center (SFLC) in the US and the Institut für Rechtsfragen der Freien und Open Source Software (ifrOSS) in Germany.
And whaddaya know, the fourth news item on the ifrOSS homepage is about the Open Web Foundation Agreement, which is a license template by the Open Web Foundation specifically for open, non-proprietary community-driven specifications for web technologies.
Dalke: Thanks. I'm in Sweden, so I wonder how well those resources will apply to me. Looking at the OWF I see it's US-based but it tries hard to be international, and I see one thing I don't like; the requirement for attribution. It does look like they are the people to talk to. Thanks for the pointer!