MISRA rule 14.5 says the continue statement must not be used. Can anyone explain the reason?
Thank you.
It is because of the ancient debate about goto, unconditional branching and spaghetti code that has been going on for 40 years or so. goto, continue, break and multiple return statements are all considered more or less equally bad.
The consensus of the world's programming community has roughly ended up something like this: we recognize that you can use these features of the language without writing spaghetti code if you know what you are doing. But we still discourage them, because there is a large chance that someone who doesn't know what they are doing is going to use them anyway, and then create spaghetti. And we also discourage them because they are superfluous features: you can obviously write programs without using them.
Since MISRA-C is aimed at critical systems, MISRA-C:2004 takes the approach of banning as many of these unconditional branch features as possible. Therefore, goto, continue and multiple returns were banned. break was only allowed if there was a single break inside the same loop.
However, in the "MISRA-C:2011" draft which is currently under evaluation, the committee has considered allowing all these features again, with the restriction that goto may only jump downwards, never upwards. The committee's rationale is that there are now tools (i.e. static analysers) smart enough to spot bad program flow, so the keywords can be allowed.
The goto debate is still going strong...
Programming in C makes it notoriously hard to keep track of multiple execution branches. If you allocate resources somewhere, you have to release them elsewhere, non-locally. If your code branches, you will in general need separate deallocation logic for each branch, or for each way to exit a scope.
The continue statement adds another way to exit the scope of a for loop, and thus makes it harder to reason about all the possible ways control can flow through the loop, which in turn makes it harder to ascertain that your code behaves correctly in all circumstances.
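To make that concrete, here is a minimal C sketch (the names and the strdup-based processing are invented for illustration): each continue is an extra exit path that must remember its own cleanup.

#include <stdlib.h>
#include <string.h>

void process_lines(char **lines, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        char *copy = strdup(lines[i]);   /* resource acquired */
        if (copy == NULL)
            continue;                    /* fine: nothing to release yet */
        if (copy[0] == '#') {
            free(copy);                  /* easy to forget on this early exit */
            continue;
        }
        /* ... real work on copy ... */
        free(copy);                      /* normal release */
    }
}

Forget the free before the second continue and the loop leaks once per skipped item.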
This is just speculation on my part, but I imagine that trying to limit complexity coming from this extra branching behaviour is the driving reason for the rule that you mention.
I've just run into it. We have items which
should be checked for several things;
the checks require some preparation;
we should apply cheap checks first, then the expensive ones;
some checks depend on others;
whichever item fails any check should be logged;
if an item passes all the checks, it should be passed on for further processing.
Watch this, without continue:
foreach (items) {
    prepare check1
    if (check1) {
        prepare check2
        if (check2) {
            prepare check3
            if (check3) {
                log("all checks passed")
                process_good_item(item)
            } else {
                log("check 3 failed")
            }
        } else {
            log("check 2 failed")
        }
    } else {
        log("check 1 failed")
    }
}
...and compare with this, with continue:
foreach (items) {
    prepare check1
    if (!check1) {
        log("check 1 failed")
        continue
    }
    prepare check2
    if (!check2) {
        log("check 2 failed")
        continue
    }
    prepare check3
    if (!check3) {
        log("check 3 failed")
        continue
    }
    log("all checks passed")
    process_good_item(item)
}
Assume that "prepare"-s are multiple line long each, so you can't see the whole code at once.
Decide for yourself which one is:
less complex, with a simpler execution graph,
lower in cyclomatic complexity,
more readable and more linear, with no "eye jumps",
easier to extend (e.g. try to add check4, check5, check12).
IMHO MISRA is wrong on this topic.
As with all MISRA rules, if you can justify it, you can deviate from the rule (section 4.3.2 of MISRA-C:2004).
The point behind MISRA (and other similar guidelines) is to trap the things that generally cause problems... yes, continue can be used properly, but the evidence suggested that it was a common cause of problems.
As such, MISRA created a rule to prevent its (ab)use, and the reviewing community approved the rule. And the views of the user community are generally supportive of the rule.
But I repeat, if you really want to use it, and you can justify it to your company hierarchy, deviate.
Related
I am writing a .y4m video generator in C that takes all the .bmp images in a directory and writes them to a .y4m video file (after appropriately converting the colours to YCbCr format).
One of the command-line options I am allowing to be specified is whether the .bmp files should be deleted during the video generation. Right now, this happens right at the end of the program, but it would be best for it to occur as images are written (to not increase disk space usage by more than 1 frame at a time, since .y4m files are uncompressed video and can get pretty big).
Thus, if the delete option is specified, the deleting should take place within one of the 5 main loops I have (there is one for each colour-subsampling scheme). The loops have lots of code within, however, so I really would like to avoid duplicating them.
In summary, even though this:
if (delete) {
    while (bmps_remain) {
        // do lots of funky stuff
        remove(path_to_single_bmp);
    }
}
else {
    while (bmps_remain) {
        // do the exact same funky stuff as above
        // ... but do not delete bmp file
    }
}
... is better than this:
while (bmps_remain) {
    // lots and lots of code
    if (delete)
        remove(path_to_single_bmp);
}
... how much of a difference does it really make, and how frowned upon is it to opt for the second option (taking performance into account as much as possible), given that the second option re-evaluates the (unchanging) condition during each iteration of the loop?
Even though it would probably get compiled into some kind of cmp instruction (probably followed by a kind of jnz) which would only take a fraction of a second to perform, this situation occurs commonly in programming, so I would be interested to hear people's opinions.
Thanks.
P.S. The first option would produce a lot of code duplication in my case (and I would prefer to not stick everything into functions, given the previous layout of the program).
The remove( path_to_single_bmp ); call is several orders of magnitude slower than your if( invariant ); therefore, it makes absolutely no sense to worry in the slightest about the overhead of re-evaluating the condition in the loop.
More generally, an if( invariant ) is so trivial as to never be worth considering for optimization, even if it only controls trivial code.
Even more generally, things like if( invariant ) will be optimized by the C compiler in any way it sees fit, regardless of what your intentions are, so they are generally not worth considering.
And even more generally still, one of the most important qualities of code is readability (second only to correctness), and more code is less readable than less code, so any approach that results in less code is preferable. In other words, any approach that requires more code is generally far worse than any approach that requires less code. Exceptions to this rule tend to be algorithmic optimizations, where you introduce an entire algorithm to achieve performance (e.g. a hash map instead of a naive array scan). Tweaking and hacking code all over the place to squeeze clock cycles here and there never results in anything good.
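As a hedged illustration of this in plain C (the helper names below are stand-ins for the question's pseudocode), the "less code" version keeps the invariant test inside the loop, where it is trivial next to the file I/O; modern compilers can often hoist such a test out of the loop entirely (loop unswitching):

#include <stdbool.h>
#include <stdio.h>

bool bmps_remain(void);               /* hypothetical helpers standing in */
void write_next_frame(void);          /* for the question's pseudocode    */
const char *path_to_single_bmp(void);

void write_frames(bool delete_bmps)
{
    while (bmps_remain()) {
        write_next_frame();           /* "lots and lots of code" */
        if (delete_bmps)              /* negligible next to remove() */
            remove(path_to_single_bmp());
    }
}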
When I read open-source code (Linux C code), I see that a lot of functions are used instead of performing all operations in main(), for example:
// prototypes, so the calls below are declared before use
void function1(void);
void function2(void);
void function3(void);
void function4(void);

int main(void) {
    function1();
    return 0;
}

void function1(void) {
    // do something
    function2();
}

void function2(void) {
    function3();
    // do something
    function4();
}

void function3(void) {
    // do something
}

void function4(void) {
    // do something
}
Could you tell me the pros and cons of using functions as much as possible?
easy to add/remove functions (or new operations)
readability of the code
source efficiency(?) as the variables in the functions will be destroyed (unless dynamic allocation is done)
would nested function calls slow the code flow?
Easy to add/remove functions (or new operations)
Definitely - it also makes it easy to see where the context for an operation starts and finishes. That's much easier to see than some arbitrary range of lines in the source.
Readability of the code
You can overdo it. There are cases where having a function or not does not make a difference in line count, but does in readability - and whether that's a positive depends on the person.
For example, if you did lots of set-bit operations, would you make:
some_variable = some_variable | (1 << bit_position)
a function? Would it help?
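For what it's worth, a sketch of the function version (set_bit is a name invented here for illustration):

#include <stdint.h>

static inline uint32_t set_bit(uint32_t value, unsigned bit_position)
{
    return value | (UINT32_C(1) << bit_position);
}

/* usage: flags = set_bit(flags, 3); */

Whether some_variable = set_bit(some_variable, 3) reads better than the raw expression is exactly the kind of judgment call in question.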
Source efficiency(?) due to the variables in the functions being destroyed (unless dynamic allocation is done)
If the source is reasonable (as in, you're not reusing variable names past their real context), then it shouldn't matter. The compiler should know exactly where the use of a value stops and where it can be ignored / destroyed.
Would nested function calls slow the code flow?
In some cases where address aliasing cannot be properly determined it could. But it shouldn't matter in practice in most programs. By the time it starts to matter, you're probably going to be going through your application with a profiler and spotting problematic hotspots anyway.
Compilers are quite good these days at inlining functions, though. You can trust them to do at least a decent job of getting rid of all cases where the calling overhead is comparable to the function body itself (and many other cases).
This practice of using functions becomes really important as the amount of code you write increases. Separating code out into functions improves code hygiene and makes it easier to read. I read somewhere that there is really no point to code that is readable only by its author (though in some situations that is okay, I'm assuming). If you want your code to live on, it must be maintainable, and maintainability is created by dividing code into functions in the simplest sense. Also, imagine a code base that exceeds 100k lines - which is quite common - with everything in the main function. That would be an absolute nightmare to maintain. Dividing the code into functions creates a degree of separation, so many developers can work on different parts of the code base. So basically the short answer is yes, it is good to use functions when necessary.
Functions should help you structure your code. The basic idea is that when you identify some place in the code which does something that can be described in a coherent, self-contained way, you should think about putting it into a function.
Pros:
Code reuse. If you perform some sequence of operations many times, why not write it once and use it many times?
Readability: it's much easier to understand strlen(st) than while (st[i++] != 0);
Correctness: the code in the previous line is actually buggy. If it is scattered around, you may not even see the bug, and if you fix it in one place, it will remain everywhere else. But given this code inside a function named strlen, you will know what it should do, and you can fix it once; see the sketch after this list.
Efficiency: sometimes, in certain situations, compilers may do a better job when compiling a code inside a function. You probably won't know it in advance, though.
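To illustrate the correctness point, here is a sketch of that loop given a name and the off-by-one fixed (my_strlen is a hypothetical stand-in for the scattered inline loops):

#include <stddef.h>

size_t my_strlen(const char *st)
{
    size_t i = 0;
    while (st[i] != '\0')   /* the inline form, while (st[i++] != 0);, leaves i one past the length */
        i++;
    return i;
}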
Cons:
Splitting code into functions just because it is A Good Thing is not a good idea. If you find it hard to give the function a good name (in your mother language, not only in C), it is suspicious. doThisAndThat() is probably two functions, not one. part1() is simply wrong.
A function call may cost you execution time and stack memory. This is not as severe as it sounds; most of the time you should not care about it, but it's there.
When abused, it may lead to many functions doing partial work and delegating other parts from here to there. Too many arguments may impede readability too.
There are basically two types of functions: functions that perform a sequence of operations (these are called "procedures" in some contexts), and functions that do some form of calculation. These two types are often mixed in a single function, but it helps to remember the distinction.
There is another distinction between kinds of functions: those that keep state (like strtok), those that may have side effects (like printf), and those that are "pure" (like sin). Functions like strtok are essentially a special kind of different construct, called an object in object-oriented programming.
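A quick sketch of that last distinction: strtok remembers the string between calls (hidden state), which is what makes it object-like, whereas a pure function such as sin has no such memory.

#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[] = "a,b,c";
    /* The NULL argument tells strtok to continue from its hidden internal state. */
    for (char *tok = strtok(line, ","); tok != NULL; tok = strtok(NULL, ","))
        printf("%s\n", tok);
    return 0;
}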
You should use functions that perform one logical task each, at a level of abstraction that makes the purpose of each function easy to verify logically. For instance:
void create_ui() {
    create_window();
    show_window();
}

void create_window() {
    create_border();
    create_menu_bar();
    create_body();
}

void create_menu_bar() {
    for (int i = 0; i < N_MENUS; i++) {
        create_menu(menus[i]);
    }
    assemble_menus();
}

void create_menu(arg) {
    ...
}
Now, as far as creating a UI is concerned, this isn't quite the way one would do it (you would probably want to pass in and return various components), but the logical structure is what I'm trying to emphasize. Break your task down into a few subtasks, and make each subtask its own function.
Don't try to avoid functions for optimization. If it's reasonable to do so, the compiler will inline them for you; if not, the overhead is still quite minimal. The gain in readability you get from this is a great deal more important than any speed you might get from putting everything in a monolithic function.
As for your title question, "as much as possible," no. Within reason, enough to see what each function does at a comfortable level of abstraction, no less and no more.
One rule of thumb: if part of the code will be reused or rewritten, put it in a function.
I guess I think of functions like Legos. You have hundreds of small pieces that you can put together into a whole. As a result of all of those well designed, generic, small pieces you can make anything. If you had a single Lego that looked like an entire house, you couldn't then use it to build a plane or a train. Similarly, one huge piece of code is not so useful.
Functions are the bricks that you use when you design your project. Well chosen separation of functionality into small, easily testable, self-contained functions makes building and looking after your whole project easy. Their benefits far outweigh any possible efficiency issues you may think are there.
To be honest, the art of coding any sizeable project is in how you break it down into smaller pieces, so functions are key to that.
In a current project I dared to do away with the old 0 rule, i.e. returning 0 on success of a function. How is this seen in the community? The logic that I am imposing on the code (and therefore on the co-workers and all subsequent maintenance programmers) is:
.>0: for any kind of success/fulfillment, that is, a positive outcome
==0: for signalling no progress or busy or unfinished, which is zero information about the outcome
<0: for any kind of error/infeasibility, that is, a negative outcome
Sitting in between a lot of hardware units with unpredictable response times in a realtime system, many of the functions need to convey exactly this ternary logic, so I decided it was legitimate to throw the minimalistic standard return logic away, at the cost of a few WTFs on the programmers' side.
Opinions?
PS: on a side note, the Roman Empire collapsed because the Romans, with their number system lacking a 0, never knew when their C functions succeeded!
"Your program should follow an existing convention if an existing convention makes sense for it."
Source: The GNU C Library
By deviating from such a widely known convention, you are creating a high level of technical debt. Every single programmer that works on the code will have to ask the same questions, every consumer of a function will need to be aware of the deviation from the standard.
http://en.wikipedia.org/wiki/Exit_status
I think you're overstating the status of this mythical "rule". Much more often, it's that a function returns a nonnegative value on success indicating a result of some sort (number of bytes written/read/converted, current position, size, next character value, etc.), and that negative values, which otherwise would make no sense for the interface, are reserved for signalling error conditions. On the other hand, some functions need to return unsigned results, but zero never makes sense as a valid result, and then zero is used to signal errors.
In short, do whatever makes sense in the application or library you are developing, but aim for consistency. And I mean consistency with external code too, not just your own code. If you're using third-party or library code that follows a particular convention and your code is designed to be closely coupled to that third-party code, it might make sense to follow that code's conventions so that other programmers working on the project don't get unwanted surprises.
And finally, as others have said, whatever your convention, document it!
It is fine as long as you document it well.
I think it ultimately depends on the customers of your code.
In my last system we used more or less the same coding system as yours, with "0" meaning "I did nothing at all" (e.g. calling Init() twice on an object). This worked perfectly well and everybody who worked on that system knew this was the convention.
However, if you are writing an API that can be sold to external customers, or writing a module that will be plugged into an existing, "standard-RC" system, I would advise you to stick to the 0-on-success rule, in order to avoid future confusion and possible pitfalls for other developers.
And as per your PS: when in Rome, do as the Romans do :-)
I think you should follow the Principle Of Least Astonishment
The POLA states that, when two elements of an interface conflict, or are ambiguous, the behaviour should be that which will least surprise the user; in particular, a programmer should try to think of the behavior that will least surprise someone who uses the program, rather than the behavior that is natural from knowing the inner workings of the program.
If your code is for internal consumption only, you may get away with it, though. So it really depends on the people your code will impact :)
There is nothing wrong with doing it that way, assuming you document it in a way that ensures others know what you're doing.
However, as an alternative, it might be worth exploring the option to return an enumerated type defining the codes. Something like:
enum returnCode {
    SUCCESS, FAILURE, NO_CHANGE
};
That way, it's much more obvious what your code is doing, self-documenting even. But might not be an option, depending on your code base.
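For example, a hypothetical caller (do_work is an invented name here) could then switch on the result, and many compilers will warn if an enumerator goes unhandled:

enum returnCode do_work(void);

void caller(void)
{
    switch (do_work()) {
    case SUCCESS:   /* proceed normally */  break;
    case NO_CHANGE: /* retry or wait */     break;
    case FAILURE:   /* handle the error */  break;
    }
}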
It is a convention only. I have worked with many APIs that abandon the principle when they want to convey more information to the caller. As long as you're consistent with this approach, any experienced programmer will quickly pick up the standard. What is hard is when each function uses a different approach, e.g. the Win32 API.
In my opinion (and that's the opinion of someone who tends to do out-of-band error messaging thanks to working in Java), I'd say it is acceptable if your functions are of a kind that require strict return-value processing anyway.
So if the return value of your method has to be inspected at all points where it's called, then such a non-standard solution might be acceptable.
If, however, the return value might be ignored or just checked for success at some points, then the non-standard solution produces quite some problems (for example, you can no longer use the if(!myFunction()) ohNoesError(); idiom).
What is your problem? It is just a convention, not a law. If your logic makes more sense for your application, then it is fine, as long as it is well documented and consistent.
On Unix, exit status is unsigned, so this approach won't work if you ever have to run your program there, and this will confuse all your Unix programmers to no end. (I looked it up just now to make sure, and discovered to my surprise that Windows uses a signed exit status.) So I guess it will probably only mostly confuse your Windows programmers. :-)
I'd find another method to pass status between processes. There are many to choose from, some quite simple. You say "at the cost of a few WTF's on the programmers side" as if that's a small cost, but it sounds like a huge cost to me. Re-using an int in C is a minuscule benefit to be gained from confusing other programmers.
You need to go on a case by case basis. Think about the API and what you need to return. If your function only needs to return success or failure, I'd say give it an explicit type of bool (C99 has a bool type now) and return true for success and false for failure. That way things like:
if (!doSomething())
{
    // failure processing
}
read naturally.
In many cases, however, you want to return some data value, in which case some specific unused or unlikely to be used value must be used as the failure case. For example the Unix system call open() has to return a file descriptor. 0 is a valid file descriptor as is theoretically any positive number (up to the maximum a process is allowed), so -1 is chosen as the failure case.
In other cases, you need to return a pointer. NULL is an obvious choice for failure of pointer returning functions. This is because it is highly unlikely to be valid and on most systems can't even be dereferenced.
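For illustration, here are both conventions side by side (error handling kept minimal; the file names are placeholders):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

void demo(void)
{
    int fd = open("data.bin", O_RDONLY);  /* -1 on failure; 0 is a valid descriptor */
    if (fd == -1)
        perror("open");
    else
        close(fd);

    FILE *f = fopen("data.txt", "r");     /* NULL on failure */
    if (f == NULL)
        perror("fopen");
    else
        fclose(f);
}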
One of the most important considerations is whether the caller and the called function or program will be updated by the same person at any given time. If you are maintaining an API where a function will return the value to a caller written by someone who may not even have access to your source code, or when it is the return code from a program that will be called from a script, only violate conventions for very strong reasons.
You are talking about passing information across a boundary between different layers of abstraction. Violating the convention ties both the caller and the callee to a different protocol increasing the coupling between them. If the different convention is fundamental to what you are communicating, you can do it. If, on the other hand, it is exposing the internals of the callee to the caller, consider whether you can hide the information.
In C there is a do-while loop, and Pascal's (almost) equivalent is the repeat-until loop, but there is a small difference between the two. While both structures iterate at least once and check whether they need to run the loop again only at the end, in Pascal you write the condition that needs to be met to terminate the loop (REPEAT ... UNTIL something), whereas in C you write the condition that needs to be met to continue the loop (do ... while something). Is there a reason for this difference, or is it just an arbitrary decision?
There's no fundamental difference at all, and no advantage to one over the other. It's just "syntactic sugar" — a change to the language's syntax that doesn't change its behavior in any real way. Some people find "repeat until" easier to conceptualize, while others find "repeat while" easier.
If, in C, you encounter a situation where "until" is what's desired, you can always just negate the condition:
do {
    excitingThings();
} while ( !endOfTheWorld() );
In C the statement
while(some_condition);
might either be a "do nothing" loop or might have become detached from a "do ... while" loop.
do {
    statement;
    statement;
    statement;
    lots more statements;
}
while(some_condition);
Using a different keyword - until - avoids this possible misinterpretation.
Not such a problem these days when everybody turns on all compiler warnings and heeds them, don't they?
Still, I suspect that most veteran C programmers have wished - at some time or other - that C used "until" in this case.
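Nothing stops a determined C programmer from spelling the negation once in a macro, though whether this helps or hurts readability is debatable (reusing the earlier answer's hypothetical functions):

#define until(condition) while (!(condition))

do {
    excitingThings();
} until (endOfTheWorld());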
I'm not sure about historical influences, but in my opinion C is more consistent, in the sense that ifs require a condition to be true for the code to run, as do while loops and do-while loops.
The design of Pascal was motivated in part by the structured-programming work of the 1960s, including Edsger Dijkstra's groundbreaking work A Discipline of Programming. Dijkstra (the same man who considered goto harmful) invented methods for creating programs that were correct by construction, including methods for writing loops that focus on the postcondition established when the loop terminates. In creating the repeat ... until form, Wirth was inspired by Dijkstra to make the termination condition, rather than its complement, explicit in the code.
I have always admired languages like Smalltalk and Icon, which offer two syntactic forms, thus allowing the programmer to express his or her intent clearly, without necessarily having to rely on an easily missed complement operator. (In Icon the forms are while e1 do e2 and until e1 do e2; in Smalltalk they are block1 whileTrue: block2 and block1 whileFalse: block2.) From my perspective neither C nor Pascal is a fully built out, orthogonal design.
It's just an arbitrary decision. Some languages have both. The QBASIC/VB DO...LOOP statement supports all four combinations of pretest/posttest and WHILE/UNTIL.
There was no "decision" that would in any way connect the behavior of Pascal repeat/until loop with the behavior of C do/while loop, neither deliberate nor arbitrary. These are simply two completely unrelated issues.
Just some information.
Road Runner : while(not edge) { run(); }
Wile E. Coyote : do { run(); } while(not edge);
I've always found UNTIL loops backwards, but that might just be because I'm from a C background. There are modern languages like Perl that provide both, but there isn't any particular advantage of one over the other.
The C syntax requires no extra keywords.
In C, the two keywords do and while work for two kinds of loops. Pascal requires four keywords: while, do, repeat, and until.
I often hear people praise languages, frameworks, constructs, etc. for being "explicit". I'm trying to understand this logic. The purpose of a language, framework, etc. is to hide complexity. If it makes you specify all kinds of details explicitly, then it's not hiding much complexity, only moving it around. What's so great about explicitness and how do you make a language/framework/API "explicit" while still making it serve its purpose of hiding complexity?
Whether you should be explicit or implicit depends on the situation. You are correct that often you are trying to hide complexity, and certain things being done behind the scenes for you automatically is good: encapsulation, etc.
Sometimes, though, frameworks or constructs hide things from us that they should not, and this makes things less clear. Sometimes certain information or settings are hidden from us, and hence we don't know what's happening. Assumptions are made that we don't understand and can't determine. Behaviors happen that we can't predict.
Encapsulation: good. Hiding: bad. Making the right call takes experience. Where logic belongs, it should be explicit.
Example: I once removed about 90 lines of code from a series of a dozen code-behind pages; data access code, business logic, etc., that did not belong there. I moved them to base pages and the key business object. This was good (encapsulation, separation of concerns, code organization, decoupling, etc.).
I then excitedly realized that I could remove the last line of code from many of these pages, moving it to the base page. It was a line that took a parameter from the URL and passed it to the business object. Good, right? Well, no, this was bad (I was hiding). This logic belonged there, even though it was almost the same line on every page. It linked the UI intention to the business object. It needed to be explicit. Otherwise I was hiding, not encapsulating. With that line, someone looking at the page would know what it did and why; without it, it would be a pain to determine what was going on.
I believe that explicit refers to knowing exactly what it is doing when you use it. That is different from knowing exactly how it's done, which is the complex part.
It's not so much that explicit is good (certainly the closely-related verbose is bad) as that when implicit goes wrong, it's so hard to tell WTF is going on.
Hack C++ for a decade or two and you'll understand exactly what I mean.
It is about expressing intentions. The reader can't tell if the default was left by mistake or by design. Being explicit removes that doubt.
Code is harder to read than to write. In nontrivial applications, a given piece of code will also be read more often than it is written. Therefore, we should write our code to make it as easy on the reader as possible. Code that does a lot of stuff that isn't obvious is not easy to read (or rather, it's hard to understand when you read it). Ergo, explicitness is considered a good thing.
Relying on default behaviour hides important details from people who aren't intimately familiar with the language/framework/whatever.
Consider how Perl code which relies extensively on shorthands is difficult to understand for people who don't know Perl.
Being explicit vs. implicit is all about what you hide, and what you show.
Ideally, you expose concepts that either the user cares about, or has to care about (whether they want to or not).
The advantage of being explicit is that it's easier to track down and find out what's going on, especially in case of failure. For instance, if I want to do logging, I can have an API that requires explicit initialization with a directory for the log. Or, I can use a default.
If I give an explicit directory, and it fails, I'll know why. If I use an implicit path, and it fails, I will have no idea of what has gone wrong, why, or where to look to fix it.
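A minimal sketch of that contrast (log_init is an invented name for illustration; the path is a placeholder):

#include <errno.h>
#include <stdio.h>
#include <string.h>

static FILE *log_file;

/* Explicit: the caller names the destination, so a failure is diagnosable. */
int log_init(const char *path)
{
    log_file = fopen(path, "a");
    return (log_file == NULL) ? -1 : 0;
}

int main(void)
{
    if (log_init("/var/log/myapp/app.log") != 0)
        fprintf(stderr, "cannot open /var/log/myapp/app.log: %s\n", strerror(errno));
    return 0;
}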
Implicit behavior is almost always a result of hiding information from the consumer. Sometimes that's the right thing to do, such as when you know in your environment there's only one "answer". However, it's best to know when you're hiding information and why, and ensure that you're letting your consumers work closer to their level of intent, and without trying to hide items of essential complexity.
Frequently implicit behavior is a result of "self-configuring" objects that look at their environment and try to guess the correct behavior. I'd avoid this pattern in general.
One rule I'd probably follow overall is that, for a given API, any operation should either be explicit, or implicit, but never a combination. Either make the operation something the user has to do, or make it something they don't have to think about. It's when you mix those two that you will run into the biggest problems.
Frameworks, etc., can be both explicit and hide complexity by offering the right abstractions for the job to be done.
Being explicit allows others to inspect and understand what is meant by the original developer.
Hiding complexity is not equivalent to being implicit. Implicitness results in code that is only understandable by the person who wrote it, as trying to understand what goes on under the hood is akin to reverse engineering in this case.
Explicit code has a theoretical chance of being proved correct. Implicit code never stands a chance in this respect.
Explicit code is maintainable, implicit code is not - this links to providing correct comments and choosing your identifiers with care.
An "explicit" language allows the computer to find bugs in software that a less-explicit language does not.
For example, C++ has the const keyword for variables whose values should never change. If a program tries to change these variables, the compiler can state that the code is likely wrong.
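C itself has the same keyword; a minimal sketch:

const int limit = 10;

void bump(void)
{
    /* limit = 11; */   /* rejected: assignment of read-only variable 'limit' */
}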
Good abstraction doesn't hide complexity; it takes decisions that are best left to the compiler off your plate.
Consider garbage collection: The complexities of releasing resources are delegated to a garbage collector which is (presumably) better qualified to make a decision than you, the programmer. Not only does it take the decision off your hands, but it makes a better decision than you would have yourself.
Explicitness is (sometimes) good because it ensures that certain decisions that are better left to the programmer are not made automatically by a less qualified agent. A good example is when you're declaring a floating-point data type in a C-like language and initializing it with an integer:
double i = 5.0;
if instead you were to declare it as
var i = 5;
the compiler would rightfully assume you want an int and operations later on would be truncated.
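A one-line demonstration of the truncation (C used here for concreteness):

#include <stdio.h>

int main(void)
{
    double explicit_ratio = 5.0 / 2.0;   /* 2.5, as intended */
    double implicit_ratio = 5 / 2;       /* integer division runs first: stores 2.0 */
    printf("%f %f\n", explicit_ratio, implicit_ratio);
    return 0;
}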
Explicitness is desirable in the context of making it clear to the reader of your code what you intended to do.
There are many examples, but it's all about leaving no doubt about your intent.
e.g. These are not very explicit:
while (condition);
int MyFunction()
bool isActive; // In C# we know this is initialised to 0 (false)
a = b??c;
double a = 5;
double angle = 1.57;
but these are:
while (condition)
/* this loop does nothing but wait */ ;
private int MyFunction()
bool isActive = false; // Now you know I really meant this to default to false
if (b != null) a = b; else a = c;
double a = 5.0;
double angleRadians = 1.57;
The latter cases leave no room for misinterpretation. The former might lead to bugs when someone fails to read them carefully, or doesn't clearly understand a less readable syntax for doing something, or mixes up integer and float types.
In some cases the opposite is "magic" - as in "then a miracle occurs".
When a developer's reading code trying to understand or debug what's going on, explicitness can be a virtue.
The purpose of frameworks moving things around is to remove duplication in code and allow easier editing of chunks without breaking the whole thing.
When you have only one way of doing something, like say SUM(x, y), we know exactly what it is going to do; there is no reason to ever rewrite it, and if you must, you can, but it's highly unlikely.
The opposite of that is frameworks like .NET that provide very complex functions which you will often need to rewrite if you're doing anything but the obvious simple example.