In C there is a do-while loop, and Pascal's (almost) equivalent is the repeat-until loop, but there is a small difference between the two. While both structures iterate at least once and check whether they need to run the loop again only at the end, in Pascal you write the condition that needs to be met to terminate the loop (REPEAT ... UNTIL something), whereas in C you write the condition that needs to be met to continue the loop (DO ... WHILE something). Is there a reason for this difference, or is it just an arbitrary decision?
There's no fundamental difference at all, and no advantage to one over the other. It's just "syntactic sugar" — a change to the language's syntax that doesn't change its behavior in any real way. Some people find "repeat until" easier to conceptualize, while others find "repeat while" easier.
If, in C, you encounter a situation where "until" is what's desired, you can always just negate the condition:
do {
    excitingThings();
} while ( !endOfTheWorld() );
In C the statement
while(some_condition);
might either be a "do nothing" loop or might have become detached from a "do ... while" loop.
do {
    statement;
    statement;
    statement;
    lots more statements;
}
while(some_condition);
Using a different keyword - until - avoids this possible misinterpretation.
Not such a problem these days when everybody turns on all compiler warnings and heeds them, don't they?
Still, I suspect that most veteran C programmers have wished - at some time or other - that C used "until" in this case.
I'm not sure about historical influences, but in my opinion C is more consistent, in the sense that ifs require a condition to be true for the code to run, as do whiles and do-whiles.
The design of Pascal was motivated in part by the structured-programming work of the 1960s, including Edsger Dijkstra's groundbreaking work A Discipline of Programming. Dijkstra (the same man who considered goto harmful) invented methods for creating programs that were correct by construction. These methods included techniques for writing loops that focus on the postcondition established when the loop terminates. In creating the repeat... until form, Wirth was inspired by Dijkstra to make the termination condition, rather than its complement, explicit in the code.
I have always admired languages like Smalltalk and Icon, which offer two syntactic forms, thus allowing the programmer to express his or her intent clearly, without necessarily having to rely on an easily missed complement operator. (In Icon the forms are while e1 do e2 and until e1 do e2; in Smalltalk they are block1 whileTrue: block2 and block1 whileFalse: block2.) From my perspective neither C nor Pascal is a fully built out, orthogonal design.
It's just an arbitrary decision. Some languages have both. The QBASIC/VB DO...LOOP statement supports all four combinations of pretest/posttest and WHILE/UNTIL.
There was no "decision" that would in any way connect the behavior of Pascal repeat/until loop with the behavior of C do/while loop, neither deliberate nor arbitrary. These are simply two completely unrelated issues.
Just some information.
Road Runner : while(not edge) { run(); }
Wily E Coyote : do { run(); } while(not edge);
I've always found UNTIL loops backwards, but that might just be because I'm from a C background. There are modern languages like Perl that provide both, but there isn't any particular advantage of one over the other.
The C syntax requires no extra keywords.
In C, the two keywords do and while work for two kinds of loops. Pascal requires four keywords: while, do, repeat, and until.
I see the term 'construct' come up very often in programming readings. The current book I am reading, "Programming in C" by Stephen Kochan, has used it a few times throughout the book. One example is in the chapter on looping, which says:
"When developing programs, it sometimes becomes desirable to have the
test made at the end of the loop rather than at the beginning.
Naturally, the C language provides a special language construct to
handle such a situation. This looping statement is known as the do
statement."
In this case what does the term 'construct' mean, and does the word 'construct' have any relation to an object 'constructor' in other languages?
In this case you can replace the word construct with syntax.
does the word 'construct' have any relation to an object 'constructor' in other languages?
No. These two terms are different. There is nothing like a constructor in C.
It's a generic term that normally refers to some particular syntax included in the language to perform some task (like a loop with the condition at the end). It has no relation at all with constructors.
Well, besides the fact that constructors are a particular language construct in many OO languages.
does the word 'construct' have any relation to an object 'constructor' in other languages?
The sentence uses the noun, not a verb, meaning of the word "construct":
construct (n) - something (such as an idea or a theory) that is formed in people's minds.
In this case, "construct" refers to an abstract way of describing something (namely, a loop) in terms of the syntax of that particular language. "Language construct" means "a way to do [something] with that language".
A construct is simply a concept implementation mechanism used by a given programming language - the language's syntax.
In your case, the concept here is a loop and its construct is the manner in which it is implemented by the C programming language.
Programming languages provide constructs for various programming concepts that define how these programming concepts are implemented in that language.
Does the word 'construct' have any relation to an object 'constructor' in other languages?
The two terms are different: a constructor is used in object-oriented languages such as Java; it is not available in the C programming language.
First of all, remember that C is not an object-oriented programming language, and "constructor" is OOP terminology. So here, "construct" refers to syntax and predefined keywords, like do and while in this case.
The word “construct” as used in computer programming has very broad or general meaning. I hope these thoughts will help to explain what is meant by the word “construct” in programming.
A computer program is a list of instructions that the computer is able to (a) understand and (b) execute (perform or carry out). The simplest program would be a list of - let’s call them statements - that the computer would execute in sequence, one after the other, from the first to the last, and then end. But that would be incredibly limiting - so limiting in fact that I don’t think computers would ever have become much more than simple calculators. One of the fundamental differences between a simple calculator and a computer is that the statements do not have to be executed in sequence. The sequence can be interrupted by “special” instructions (statements) which can divert the flow of execution from one stream to a totally different stream which has a completely different agenda.
The first obvious way this is done is with methods (functions or procedures). When a method is called, the flow of execution is diverted from one stream of statements to a totally different stream of statements, often unrelated to the stream from which it came. If that concept is accepted, then I think that an instruction that calls a method could also be regarded as a “construct”.
Let’s divert this discussion for a moment to talk about “blocks” of code.
Programmers who work in languages like C, C++ or Java know that pairs of opening and closing braces (curly brackets) are used to identify blocks of code. And it's blocks of code that divide a program up into different processes or procedures. A block of code that is headed by, say, a while() loop is just as valid as a method, in that it interrupts the otherwise unimpeded flow of execution through a program. The same applies to the many categories of operators. "new" will divert the flow of statement execution to a constructor method. So we have all these various syntactical expressions that have one thing in common: they divert the flow of execution which, left to its own devices, would happily proceed executing statements of code in sequence one after the other, without any interruption.
So I am suggesting that the word “construct” is a useful collective noun that embraces all of these different and diverse syntactical expressions e.g. if() for() switch() etc. that use the different categories of operators to perform functions that are defined in their respective blocks of code. Would love to hear other opinions.
In short, all built-in features (like arrays, loops, and if-else statements) of a programming language are language constructs.
MISRA 14.5 says continue statement must not be used. Can anyone explain the reason?
Thank you.
It is because of the ancient debate about goto, unconditional branching and spaghetti code, that has been going on for 40 years or so. goto, continue, break and multiple return statements are all considered more or less equally bad.
The consensus of the world's programming community has roughly ended up something like this: we recognize that you can use these features of the language without writing spaghetti code if you know what you are doing. But we still discourage them because there is a large chance that someone who doesn't know what they are doing is going to use the features if they are available, and then create spaghetti. And we also discourage them because they are superfluous features: you can obviously write programs without using them.
Since MISRA-C is aimed towards critical systems, MISRA-C:2004 has the approach to ban as many of these unconditional branch features as possible. Therefore, goto, continue and multiple returns were banned. break was only allowed if there was a single break inside the same loop.
However, in the "MISRA-C:2011" draft which is currently under evaluation, the committee has considered allowing all these features again, with a restriction that goto should only be allowed to jump downwards and never upwards. The rationale from the committee said that there are now tools (i.e. static analysers) smart enough to spot bad program flow, so the keywords can be allowed.
The goto debate is still going strong...
Programming in C makes it notoriously hard to keep track of multiple execution branches. If you allocate resources somewhere, you have to release them elsewhere, non-locally. If your code branches, you will in general need separate deallocation logic for each branch or each way to exit a scope.
The continue statement adds another way to exit the scope of a for loop. This makes such a loop harder to reason about, and makes it harder to understand all the possible ways in which control can flow through it, which in turn makes it harder to ascertain that your code behaves correctly in all circumstances.
This is just speculation on my part, but I imagine that trying to limit complexity coming from this extra branching behaviour is the driving reason for the rule that you mention.
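As a sketch of that hazard (the record format and all names here are invented for illustration): when each iteration allocates a resource, continue adds a second exit path from the loop body, and the cleanup must be duplicated on that path. Forgetting it leaks memory silently.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch: each iteration allocates a scratch buffer, and
   continue adds a second exit path from the loop body.  The free()
   before continue is easy to forget; without it, every skipped record
   would leak its buffer.  Returns the number of records processed. */
int process_records(const char **records, int n) {
    int processed = 0;
    for (int i = 0; i < n; i++) {
        char *scratch = malloc(strlen(records[i]) + 1);
        if (scratch == NULL)
            return processed;
        strcpy(scratch, records[i]);

        if (scratch[0] == '#') {  /* comment line: skip it */
            free(scratch);        /* duplicated cleanup forced by continue */
            continue;
        }

        processed++;              /* ... real work on scratch ... */
        free(scratch);            /* normal-path cleanup */
    }
    return processed;
}
```

Every extra exit path is one more place where such cleanup logic has to be repeated correctly.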
I've just run into it. We have items, which
should be checked for several things,
checks require some preparation,
we should apply cheap checks first, then go with expensive ones,
some checks depend on others,
whichever item fails on any check, it should be logged,
if the item passes all the checks, it should be passed to further processing.
Watch this, without continue:
foreach (items) {
    prepare check1
    if (check1) {
        prepare check2
        if (check2) {
            prepare check3
            if (check3) {
                log("all checks passed")
                process_good_item(item)
            } else {
                log("check3 failed")
            }
        } else {
            log("check2 failed")
        }
    } else {
        log("check 1 failed")
    }
}
...and compare with this, with continue:
foreach (items) {
    prepare check1
    if (!check1) {
        log("check 1 failed")
        continue
    }
    prepare check2
    if (!check2) {
        log("check 2 failed")
        continue
    }
    prepare check3
    if (!check3) {
        log("check 3 failed")
        continue
    }
    log("all checks passed")
    process_good_item(item)
}
Assume that the "prepare" steps are multiple lines long each, so you can't see the whole code at once.
Decide for yourself which version is:
less complex, with a simpler execution graph
lower in cyclomatic complexity
more readable and more linear, with no "eye jumps"
easier to extend (e.g. try to add check4, check5, check12)
IMHO MISRA is wrong on this topic.
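For comparison, here is a compilable C rendering of the continue version (the checks are stand-ins invented for illustration; here an item "passes" check k if it is divisible by k):

```c
#include <stdio.h>

/* Invented stand-in for the checks: an item "passes" check k
   if it is divisible by k. */
static int check(int item, int k) { return item % k == 0; }

/* Returns the number of items that passed all checks. */
int process_items(const int *items, int n) {
    int good = 0;
    for (int i = 0; i < n; i++) {
        /* prepare check1 ... */
        if (!check(items[i], 1)) { printf("check 1 failed\n"); continue; }
        /* prepare check2 ... */
        if (!check(items[i], 2)) { printf("check 2 failed\n"); continue; }
        /* prepare check3 ... */
        if (!check(items[i], 3)) { printf("check 3 failed\n"); continue; }
        printf("all checks passed\n");
        good++;  /* process_good_item(item) */
    }
    return good;
}
```

The body stays flat: each failed check logs, bails out of this iteration, and the happy path reads top to bottom.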
As with all MISRA rules, if you can justify it, you can deviate from the rule (section 4.3.2 of MISRA-C:2004)
The point behind MISRA (and other similar guidelines) is to trap the things that generally cause problems... yes, continue can be used properly, but the evidence suggested that it was a common cause of problems.
As such, MISRA created a rule to prevent its (ab)use, and the reviewing community approved the rule. And the views of the user community are generally supportive of the rule.
But I repeat, if you really want to use it, and you can justify it to your company hierarchy, deviate.
I recently found this theorem here, (at the bottom):
Any program can be transformed into a semantically equivalent program of one procedure containing one switch statement inside a while loop.
The Article went on to say :
A corollary to this theorem is that any program can be rewritten into a program consisting of a single recursive function containing only conditional statements
My questions are: are both these theorems applicable today? Does transforming a program in this way reap any benefits? That is, is such code optimized? (Although recursive calls are slower, I know.)
I read, from here, that switch-cases are almost always faster when optimized by the compiler. Does that make a difference?
PS: I'm trying to get some idea about compiler optimizations from here
And I've added the c tag as that's the only language I've seen optimized.
It's true. A Turing machine is essentially a switch statement on symbols that repeats forever, so this result is based pretty directly on the fact that Turing machines can compute anything computable. A switch statement is just a bunch of conditionals, so you can clearly write such a program as a loop containing just conditionals. Once you have that, making the loop into recursion is pretty easy, although you may have to pass a lot of state variables as parameters if your language doesn't have true lexical scoping.
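As a toy illustration of the theorem (all names invented here), an iterative factorial can be rewritten as a single switch inside a while loop, with the original control flow encoded in a state variable:

```c
/* Toy illustration of the theorem: iterative factorial rewritten as
   one switch statement inside one while loop.  The original control
   structure (loop test, loop body, exit) is encoded in `state`. */
unsigned long factorial_machine(unsigned n) {
    unsigned long acc = 1;
    unsigned i = 1;
    int state = 0;              /* 0 = loop test, 1 = loop body, 2 = done */
    while (state != 2) {
        switch (state) {
        case 0:                 /* the original for-loop's test: i <= n ? */
            state = (i <= n) ? 1 : 2;
            break;
        case 1:                 /* the original for-loop's body */
            acc *= i;
            i++;
            state = 0;          /* jump back to the test */
            break;
        }
    }
    return acc;
}
```

The structured control flow has become data (the state variable), which is exactly what the theorem describes.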
There's little reason to do any of this in practice. Such programs generally operate more slowly than the originals, and may take more space. So why would you possibly slow your program down, and/or make its load image bigger?
The only place this makes sense is if you intend to obfuscate the code. This kind of technique is often used as "control flow obfuscation".
This is basically what happens when a compiler translates a program into machine code. The machine code runs on a processor, which executes instructions one-by-one in a loop. The complex structure of the program has become part of the data in memory.
Recursive loops through a switch statement can be used to create a rudimentary virtual machine. If your virtual machine is Turing complete then, in theory, any program could be rewritten to work on this machine.
int opcode[] = {
    PUSH,
    ADD
    ....
};
int *pc = opcode;

while (true) {
    switch (*pc++) {
    case PUSH:
        *stack++ = <var>;
        break;
    case ADD:
        stack[-2] += stack[-1];
        --stack;
        break;
    ....
    }
}
Of course writing a compiler for this virtual machine would be another matter.
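Filling in the sketch above, a minimal runnable version might look like this (the opcode set, the HALT instruction, and the operand encoding are all invented for illustration):

```c
enum { PUSH, ADD, HALT };

/* Minimal stack-machine interpreter: opcodes (and, for PUSH, an
   immediate operand) are read from the program array.  HALT is
   invented here to give the dispatch loop a way to stop.
   Returns the value on top of the stack. */
int run(const int *program) {
    int stack[64];
    int *sp = stack;            /* points one past the top element */
    const int *pc = program;
    for (;;) {
        switch (*pc++) {
        case PUSH:
            *sp++ = *pc++;      /* next word is the immediate value */
            break;
        case ADD:
            sp[-2] += sp[-1];   /* fold the top two elements */
            --sp;
            break;
        case HALT:
            return sp[-1];
        }
    }
}
```

For example, the program `{PUSH, 2, PUSH, 3, ADD, HALT}` evaluates 2 + 3.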
:-)
I created a special-purpose "programming language" that deliberately (by design) cannot evaluate the same piece of code twice (i.e. it cannot loop). It essentially is made to describe a flowchart-like process where each element in the flowchart is a conditional that performs a different test on the same set of data (without being able to modify it). Branches can split and merge, but never in a circular fashion, i.e. the flowchart cannot loop back onto itself. When arriving at the end of a branch, the current state is returned and the program exits.
When written down, a typical program superficially resembles a program in a purely functional language, except that no form of recursion is allowed and functions can never return anything; the only way to exit a function is to call another function, or to invoke a general exit statement that returns the current state. A similar effect could also be achieved by taking a structured programming language and removing all loop statements, or by taking an "unstructured" programming language and forbidding any goto or jmp statement that goes backwards in the code.
Now my question is: is there a concise and accurate way to describe such a language? I don't have any formal CS background and it is difficult for me to understand articles about automata theory and formal language theory, so I'm a bit at a loss. I know my language is not Turing complete, and through great pain, I managed to assure myself that my language can probably be classified as a "regular language" (i.e. a language that can be evaluated by a read-only Turing machine), but is there a more specific term?
Bonus points if the term is intuitively understandable to an audience that is well-versed in general programming concepts but doesn't have a formal CS background. Also bonus points if there is a specific kind of machine or automaton that evaluates such a language. Oh yeah, keep in mind that we're not evaluating a stream of data - every element has (read-only) access to the full set of input data. :)
I believe that your language is sufficiently powerful to encode precisely the star-free languages. This is the subset of the regular languages in which no expression contains a Kleene star. In other words, it's the class built from the language of the empty string, the null set, and the individual characters, closed under concatenation and disjunction. This is equivalent to the set of languages accepted by DFAs that don't have any directed cycles in them.
I can attempt a proof of this here given your description of your language, though I'm not sure it will work precisely correctly because I don't have full access to your language. The assumptions I'm making are as follows:
No functions ever return. Once a function is called, it will never return control flow to the caller.
All calls are resolved statically (that is, you can look at the source code and construct a graph of each function and the set of functions it calls). In other words, there aren't any function pointers.
The call graph is acyclic; for any functions A and B, then exactly one of the following holds: A transitively calls B, B transitively calls A, or neither A nor B transitively call one another.
More generally, the control flow graph is acyclic. Once an expression evaluates, it never evaluates again. This allows us to generalize the above so that instead of thinking of functions calling other functions, we can think of the program as a series of statements that all call one another as a DAG.
Your input is a string where each letter is scanned once and only once, and in the order in which it's given (which seems reasonable given the fact that you're trying to model flowcharts).
Given these assumptions, here's a proof that your programs accept a language iff that language is star-free.
To prove that if there's a star-free language, there's a program in your language that accepts it, begin by constructing the minimum-state DFA for that language. Star-free languages are loop-free and scan the input exactly once, and so it should be easy to build a program in your language from the DFA. In particular, given a state s with a set of transitions to other states based on the next symbol of input, you can write a function that
looks at the next character of input and then calls the function encoding the state being transitioned to. Since the DFA has no directed cycles, the function calls have no directed cycles, and so each statement will be executed exactly once. We now have that (∀ R: R is a star-free language → ∃ a program in your language that accepts R).
To prove the reverse direction of implication, we essentially reverse this construction and create an ε-NFA with no cycles that corresponds to your program. Doing a subset construction on this NFA to reduce it to a DFA will not introduce any cycles, and so you'll have a star-free language. The construction is as follows: for each statement si in your program, create a state qi with a transition to each of the states corresponding to the other statements in your program that are one hop away from that statement. The transitions to those states will be labeled with the symbols of input consumed in making each of the decisions, or ε if the transition occurs without consuming any input. This shows that (∀ programs P in your language, ∃ a star-free language R that accepts just the strings accepted by P).
Taken together, this shows that your programs have identically the power of the star-free languages.
Of course, the assumptions I made on what your programs can do might be too limited. You might have random-access to the input sequence, which I think can be handled with a modification of the above construction. If you can potentially have cycles in execution, then this whole construction breaks. But, even if I'm wrong, I still had a lot of fun thinking about this, and thank you for an enjoyable evening. :-)
Hope this helps!
I know this question is somewhat old, but for posterity, the phrase you are looking for is "decision tree". See http://en.wikipedia.org/wiki/Decision_tree_model for details. I believe this captures exactly what you have done and has a pretty descriptive name to boot!
The halting problem cannot be solved for Turing complete languages and it can be solved trivially for some non-TC languages like regexes where it always halts.
I was wondering if there are any languages that have both the ability to halt and not halt, but admit an algorithm that can determine whether a given program halts.
The halting problem does not act on languages. Rather, it acts on machines
(i.e., programs): it asks whether a given program halts on a given input.
Perhaps you meant to ask whether it can be solved for other models of
computation (like regular expressions, which you mention, but also like
push-down automata).
Halting can, in general, be detected in models with finite resources (like
regular expressions or, equivalently, finite automata, which have a fixed
number of states and no external storage). This is easily accomplished by
enumerating all possible configurations and checking whether the machine enters
the same configuration twice (indicating an infinite loop); with finite
resources, we can put an upper bound on the amount of time before we must see
a repeated configuration if the machine does not halt.
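That argument can be sketched directly in code (the machine encoding here is invented for illustration): run the transition function of a machine with a known number of configurations for that many steps; by the pigeonhole principle, if it has not halted by then, some configuration has repeated, so it never will.

```c
#include <stdbool.h>

/* Sketch of halt-checking a finite-state machine with n_states
   configurations and no external storage: step() is the transition
   function, halt_state marks termination.  By the pigeonhole
   principle, if the machine runs for n_states steps without reaching
   halt_state, it has revisited a configuration and will loop forever. */
bool halts(int start, int halt_state, int n_states, int (*step)(int)) {
    int state = start;
    for (int i = 0; i <= n_states; i++) {
        if (state == halt_state)
            return true;
        state = step(state);
    }
    return false;  /* a configuration repeated: infinite loop */
}
```

A machine that cycles 0 → 1 → 2 → 3 reaches a halt state of 3; a machine whose transition function maps every state to itself never does.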
Usually, models with infinite resources (unbounded TMs and PDAs, for instance) cannot be halt-checked, but it would be best to investigate the models and their open problems individually.
(Sorry for all the Wikipedia links, but it actually is a very good resource for
this kind of question.)
Yes. One important class of this kind are the primitive recursive functions. This class includes all of the basic things you expect to be able to do with numbers (addition, multiplication, etc.), as well as some complex classes like the ones @adrian has mentioned (regular expressions/finite automata, context-free grammars/pushdown automata). There do, however, exist functions that are not primitive recursive, such as the Ackermann function.
It's actually pretty easy to understand primitive recursive functions. They're the functions that you could get in a programming language that had no true recursion (so a function f cannot call itself, whether directly or by calling another function g that then calls f, etc.) and has no while-loops, instead having bounded for-loops. A bounded for-loop is one like "for i from 1 to r" where r is a variable that has already been computed earlier in the program; also, i cannot be modified within the for-loop. The point of such a programming language is that every program halts.
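That discipline can be sketched in C (the restriction is purely by convention here, since C itself does not enforce it): every loop bound is fixed before the loop starts and the loop index is never modified in the body, so every program written in this style must halt.

```c
/* Primitive-recursive style sketch: no while loops, no recursion,
   only for-loops whose bound is computed before the loop starts and
   whose index is never modified in the body.  Every such program
   terminates, so the halting problem is trivially decidable for
   this fragment. */
unsigned add(unsigned a, unsigned b) {
    unsigned r = a;
    for (unsigned i = 1; i <= b; i++)   /* bound b fixed in advance */
        r = r + 1;                      /* successor, applied b times */
    return r;
}

unsigned mul(unsigned a, unsigned b) {
    unsigned r = 0;
    for (unsigned i = 1; i <= b; i++)   /* bounded: repeat add b times */
        r = add(r, a);
    return r;
}
```

Multiplication is built from bounded repetition of addition, which is itself bounded repetition of the successor; this layering is exactly how primitive recursive functions are defined.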
Most programs we write are actually primitive recursive (I mean, can be translated into such a language).
The short answer is yes, and such languages can even be extremely useful.
There was a discussion about it a few months ago on LtU:
http://lambda-the-ultimate.org/node/2846