Preference of "for(; ;)" over "while(1)" - c

I am asking this question only for the sake of knowledge (because I think it has nothing to do with a beginner like me).
I read that C programmers prefer
for(;;) {....}
over
while(1) {....}
for an infinite loop, for reasons of efficiency. Is it true that one form of loop is more efficient than the other, or is it simply a matter of style?

Both constructs are equivalent in behavior.
Regarding preferences:
The C Programming Language, Kernighan & Ritchie,
uses the form
for (;;)
for an infinite loop.
The Practice of Programming, Kernighan & Pike,
also prefers
for (;;)
"For a infinite loop, we prefer for(;;) but while(1) is also popular. Don't use anything other than these forms."
PC-Lint
also prefers
for (;;):
716 while(1) ... -- A construct of the form while(1) ... was found. Whereas this represents a constant in a context expecting a Boolean, it may reflect a programming
policy whereby infinite loops are prefixed with this construct. Hence it is given a separate number and has been placed in the informational category. The more conventional form of infinite loop prefix is for(;;)
One of the rationales (maybe not the most important, I don't know) for historically preferring for (;;) over while (1) is that some old compilers would generate a test for the while (1) construct.
Other reasons are: for(;;) is the shortest form (in number of characters), and for(;;) does not contain any magic number (as pointed out by @KerrekSB in the comments on the question).
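For concreteness, here is a small sketch (the loop bodies and break conditions are purely illustrative) showing both idioms side by side; with any modern optimizing compiler the two typically generate identical code:

#include <stdio.h>

int main(void)
{
    int n = 0;

    for (;;) {              /* idiomatic infinite loop: no condition, no magic number */
        if (++n >= 3)
            break;          /* left via break (or return, goto, etc.) */
    }

    while (1) {             /* same behavior, spelled with a constant condition */
        if (--n <= 0)
            break;
    }

    printf("n = %d\n", n);
    return 0;
}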


How to definitely solve the scanf input stream problem

Suppose I want to run the following C snippet:
scanf("%d" , &some_variable);
printf("something something\n\n");
printf("Press [enter] to continue...")
getchar(); //placed to give the user some time to read the something something
This snippet will not pause! The problem is that scanf will leave the "enter" (\n) character in the input stream[1], messing up all that comes after it; in this context the getchar() will eat the \n and not wait for an actual new character.
Since I was told not to use fflush(stdin) (I don't really get why, though), the best solution I have been able to come up with is simply to define my own scan function at the start of my code:
void nsis(int *pointer){ // nsis, acronym of: no shenanigans integer scanf
    scanf("%d" , pointer);
    getchar(); // this will clean the input stream every time the scan function is called
}
And then we simply use nsis in place of scanf. This should fly. However, it seems like a really homebrew, put-together-with-duct-tape solution. How do professional C developers handle this mess? Do they not use scanf at all? Do they simply accept working with a dirty input stream? What is the standard here?
I wasn't able to find a definite answer on this anywhere! Every source I could find mentioned a different (and sketchy) solution...
EDIT: In response to all the comments saying some version of "just don't use scanf": ok, I can do that, but what is the purpose of scanf then? Is it simply a useless, broken function that should never be used? Why is it in the libraries to begin with then?
This seems really absurd, especially considering all beginners are taught to use scanf...
[1]: The \n left behind is the one that the user typed when inputting the value of the variable some_variable, not the one present in the printf.
but what is the purpose of scanf then?
An excellent question.
Is it simply a useless broken function that should never be used?
It is almost useless. It is, arguably, quite broken. It should almost never be used.
Why is it in the libraries to begin with then?
My personal belief is that it was an experiment. It tries to be the opposite of printf. But that turned out not to be such a good idea in practice, and the function never got used very much, and pretty much fell out of favor, except for one particular use case...
This seems really absurd, especially considering all beginners are taught to use scanf...
You're absolutely right. It is really quite absurd.
There's a decent reason why all beginners are taught to use scanf, though. During week 1 of your first C programming class, you might write the little program
#include <stdio.h>
int main()
{
    int size = 5;
    for(int i = 0; i < size; i++) {
        for(int j = 0; j < size; j++)
            putchar('*');
        putchar('\n');
    }
}
to print a square. And during that first week, to make a square of a different size, you just edit the line int size = 5; and recompile.
But pretty soon — say, during week 2 — you want a way for the user to enter the size of the square, without having to recompile. You're probably not ready to muck around with argv. You're probably not ready to read a line of text using fgets and convert it back to an integer using atoi. (You're probably not even ready to seriously contemplate the vast differences between the integer 5 and the string "5" at all.) So — during week 2 of your first C programming class — scanf seems like just the ticket.
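As an illustration, the week-2 version of that square program might look something like this (a sketch, with size initialized defensively and the return value of scanf deliberately left unchecked, which is exactly the shortcut discussed below):

#include <stdio.h>

int main(void)
{
    int size = 0;                       /* initialized so a failed scanf can't leave it indeterminate */
    printf("Enter the size: ");
    scanf("%d", &size);                 /* week-2 shortcut: return value not checked */
    for (int i = 0; i < size; i++) {
        for (int j = 0; j < size; j++)
            putchar('*');
        putchar('\n');
    }
    return 0;
}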
That's the "one particular use case" I was talking about. And if you only used scanf to read small integers into simple C programs during the second week of your first C programming class, things wouldn't be so bad. (You'd still have problems forgetting the &, but that would be more or less manageable.)
The problem (though this is again my personal belief) is that it doesn't stop there. Virtually every instructor of beginning C classes teaches students to use scanf. Unfortunately, few or none of those instructors ever explicitly tell students that scanf is a stopgap, to be used temporarily during that second week, and to be emphatically graduated beyond in later weeks. And, even worse, many instructors go on to assign more advanced problems, involving scanf, for which it is absolutely not a good solution, such as trying to do robust or "user friendly" input validation.
scanf's only virtue is that it seems like a nice, simple way to get small integers and other simple input from the user into your early programs. But the problem — actually a big, shuddering pile of 17 separate problems — is that scanf turns out to be vastly complicated and full of exceptions and hard to use, precisely the opposite of what you'd want in order to make things easy for beginners. scanf is only useful for beginners, and it's almost perfectly useless for beginners. It has been described as being like square training wheels on a child's bicycle.
How do professional C developers handle this mess?
Quite simply: by not using scanf at all. For one thing, very few production C programs print prompts to a line-based screen and ask users to type something followed by Return. And for those programs that do work that way, professional C developers unhesitatingly use fgets or the like to read a full line of input as text, then use other techniques to break down the line to extract the necessary information.
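A minimal sketch of that approach (the read_int helper is purely illustrative, and it skips range checking for brevity):

#include <stdio.h>
#include <stdlib.h>

/* Read one line and parse an int from it.
   Returns 1 on success, 0 on EOF/read error or if no number was found. */
static int read_int(const char *prompt, int *out)
{
    char line[128];
    char *end;

    printf("%s", prompt);
    fflush(stdout);
    if (fgets(line, sizeof line, stdin) == NULL)
        return 0;                        /* EOF or read error */

    long val = strtol(line, &end, 10);
    if (end == line)
        return 0;                        /* no digits at all */
    *out = (int)val;
    return 1;
}

int main(void)
{
    int size;
    if (read_int("Enter the size: ", &size))
        printf("Got %d\n", size);
    else
        printf("No number read.\n");
    return 0;
}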
In answer to your initial question, there's no good answer. One of the fundamental rules of scanf usage (a set of rules, by the way, that no instructor ever teaches) is that you should never try to mix scanf and getchar (or fgets) in the same program. If there were a good way to make your "Press [enter] to continue..." code work after having called scanf, we wouldn't need that rule.
If you do want to try to flush the extra newline, so that a later call to getchar might work, there are several questions here with a bunch of good answers:
scanf() leaves the newline character in the buffer
Using fflush(stdin)
How to properly flush stdin in fgets loop
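The usual idiom those answers converge on looks roughly like this (a sketch: a small helper you would call right after scanf, before the later getchar()):

#include <stdio.h>

/* Discard the rest of the current input line, if any. */
static void discard_line(void)
{
    int c;
    while ((c = getchar()) != '\n' && c != EOF)
        ;                                /* drop characters up to and including the newline, or until EOF */
}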
There's one more unrelated point that ends up being pretty significant to your question. When C was invented, there was no such thing as a GUI with multiple windows. Therefore no C programmer ever had the problem of having their output disappear before they could read it. Therefore no C programmer ever felt the need to write printf("Press [enter] to continue..."); followed by getchar(). I believe (another personal belief) that it is egregiously bad behavior for any vendor of a GUI-based C compiler to rig things up so that the output disappears upon program exit. Persistent output windows ought to be the default, for the benefit of beginning C programmers, with some kind of non-default option to turn that behavior off for those who don't want it.
Is scanf broken? No, it is not. It is an excellent input function when you want to parse free-form input data where few errors are to be expected. Free form means here that newlines are not relevant, exactly as when you read/write very long paragraphs on a normal screen. And few errors being expected is common when you read from files.
The scanf family of functions has another nice point: you have the same syntax when reading from the standard input stream, a file stream or a character string. It can easily parse simple common types and provides a minimal return value to allow cautious programmers to know whether all or only part of the expected data could be decoded.
That being said, it has major drawbacks: first, being a C function, it cannot directly check whether the programmer has passed types matching the format specifications, and second, as beginners are not consistently hit on the head when they forget to check its return value, it is really too easy to write fully broken programs using it.
But the rule is:
if input is expected to be line oriented, first use fgets to get lines and then sscanf, testing the return values of both (see the sketch after this list)
only if input is expected to be free form (irrelevant newlines) should scanf be used directly. But never without testing its return value, except for trivial tests.
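A minimal sketch of the first rule (reading two integers from one line; the exact format is just an example):

#include <stdio.h>

int main(void)
{
    char line[256];
    int x, y;

    if (fgets(line, sizeof line, stdin) != NULL       /* did we get a line? */
        && sscanf(line, "%d %d", &x, &y) == 2)         /* did both items parse? */
        printf("x=%d y=%d\n", x, y);
    else
        printf("EOF or malformed input\n");
    return 0;
}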
Another drawback is that beginners expect it to be clever. It can indeed parse simple input formats, but it is only a poor man's parser: do not use it as a generic parser, because that is not what it is intended for.
Provided those rules are observed, it is a nice tool consistent with most of C language and its standard library: a simple tool to do simple things. It is up to programmers or library implementers to build richer tools.
I have only been using the C language for more than 30 years, and was never bitten by scanf (well, I was when I was a beginner, but I now know that I was the one to blame). I have simply tried, for decades, to use it only for what it can do...

What does an "extra" semicolon means? [duplicate]

I was searching for an explanation for the "double semicolon" in the code:
for(;;){}
Original question.
I do not have enough reputation to even leave a comment so I need to create a new question.
My question is,
What does an "extra" semicolon means. Is this "double semicolon" or "extra semicolon" used for "something else"?
The question comes from a person with no knowledge of programing, just powered on my first arduino kit and feel like a child when LED blinked.
But it is the reality that the questions, like general occurence, are radiating the actual knowledge of the person asking the question.
And beyond "personal preference" in using while(1) or for(;;) is it helpful in using for(;;) where you do not have enough room for the code itself?
I have updated the question.
Thank you for the quick explanation. You opened my eyes with the idea of not using anything in the for loop =). From basic high school knowledge I am aware of the for loop, so thank you for that.
So for(;;) returns TRUE "by default"?
And my last line about the size of the code?
Or it is compiled and using for or while actually does not affect the compiled code size?
A for loop is often written
int i;
for (i = 0; i < 6; i++) {
    ...
}
Before the first semicolon is code that is run once, before the loop starts. By omitting it, you simply have nothing happen before the loop starts.
Between the semicolons is the condition that is checked before every iteration; the for loop ends if it evaluates to false. By omitting it, you have no check, and the loop always continues.
After the second semicolon is code that is run after every iteration, before the condition is evaluated again. Omitting it just means no extra code runs at that point.
The semicolons are always there; there is nothing special about the fact that they are adjacent. You could replace that with
for ( ; ; )
and have the same effect.
While for (;;) itself does not return true (it doesn't return anything), the second field in the parentheses of the for always causes the loop to continue when it is empty. There's no reason for this other than that someone thought it would be convenient. See https://stackoverflow.com/a/13147054/5567382. It states that an empty condition is always replaced by a non-zero constant, which is the case in which the loop continues.
As for size and efficiency, it may vary depending on the compiler, but I don't see any reason why one would be more efficient than the other; it's really up to the compiler. It could be that a compiler would make the program evaluate a constant as non-zero in the condition of a while loop, but recognise an empty for loop condition and skip the comparison, though I would expect the compiler to optimize the while loop better. (A long way of saying that I don't know, but it depends on a lot.)

for without increment vs while

I'm currently studying computer science in Germany but did work on several C/C++ opensource projects before.
Today we kind of started with C at school and my teacher said it would be a no-go to modify a for loop variable inside the loop, which I absolutely agree with. However, I often use a for loop without the last incrementing part and then modify the variable only inside the loop, which he also did not like.
So basically it comes down to
for(int i=0; i<100;) {
    [conditionally modify i]
}
vs
int i=0;
while(i<100) {
    [conditionally modify i]
}
I know that they are essentially the same after compile, but I don't like using a while loop because:
It's common practice to limit variables to the smallest possible scope
It can introduce bugs if you reuse i (which you have to because of larger scope)
You can not use a different type for i in a later loop without using a different name
Are there any style guides/common practices on which one to choose?
If you answer with "I like while/for more" at least provide a reason why you do so.
I did already search for similar questions, however, I could not find any real answer to this case.
Style guides differ between different people / teams.
The C standard describes the syntax of for like this:
for ( clause-1 ; expression-2 ; expression-3 ) statement
and it's common practice to use for as soon as you have a valid use for "clause-1", and the reason to do so is indeed because of the limited scope:
[...] If clause-1 is a declaration, the scope of any identifiers it declares is the remainder of the declaration and the entire loop, including the other two expressions; [...]
So, your argumentation is fine, and you could try to convince your teacher. But don't try too hard. After all, questions of style rarely have one definitive answer, and if your teacher insists on his rules, just follow them when coding for that class.
There are some common practices which programmers follow while taking the decision on which loop to use - for or while!
The for loop seems most appropriate when the number of iterations is known in advance. For example -
/*N - size of array*/
for (i = 0; i < N; i++) {
  ...
}
But there could be many complex problems where the number of iterations depends upon a certain condition and can't be predicted beforehand; a while loop is preferable in those situations.
For example -
while( fgets( buf, MAXBUF, stdin ) != NULL)
However, both for and while loops are entry-controlled loops and are interchangeable. It's completely up to the programmer to decide which one to use.
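For illustration, here is the same count-to-five loop written both ways (a sketch; the two forms differ slightly if the body uses continue, which skips the increment only in the while version):

#include <stdio.h>

int main(void)
{
    for (int i = 0; i < 5; i++)         /* init; condition; increment */
        printf("for:   %d\n", i);

    int j = 0;                          /* init hoisted out, so j outlives the loop */
    while (j < 5) {                     /* condition checked before each iteration */
        printf("while: %d\n", j);
        j++;                            /* increment done at the end of the body */
    }
    return 0;
}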
If you are modifying the loop counter inside the body, I would recommend not using a for loop, because it is more likely to confuse the reader than a while loop would.
You can work around the lack of scope limitation with an additional block:
{
    int i = 0;
    while (i < 100) {
        [conditionally modify i]
    }
}
for(int i=0; i<100;) {
    [conditionally modify i]
}
Because this looks confusing: it is not the standard way to write a for loop. Also, conditionally modifying i is dangerous; you don't want to do that. Someone reading your code would have problems understanding how you increment, why, and when, etc. You want to keep your code clean and easy to understand.
I would personally never write a for loop that conditionally modifies the iterator. If you need that, you can use an additional counter or something like that. But you shouldn't control flow by conditionally modifying the iterator; some people even avoid break and continue.

Why is for(;;) used?

I have seen code (in C) which contains something similar to:
for(;;){
}
How would this work, and why is it used in any instance?
It is the idiomatic way to have an infinite loop.
In The C Programming Language by Kernighan & Ritchie, it was introduced in section 3.5:
for (;;) {
    ...
}
is an infinite loop, presumably to be broken by other means, such as a break or return.
It is an infinite loop, something like
while(true)
{}
This is equivalent to an infinite loop, as many other have explained. However, few of them explained why this executes as an infinite loop.
A for loop can be broken down into three parts:
for(initialization(s); condition(s); looping command(s))
None of these fields is actually required. With no condition provided, there is nothing to stop the loop from running: the for loop stops only when the condition evaluates to false, and if no condition is provided, nothing is ever false, so the loop runs indefinitely.
Therefore to cause a for loop to be infinite, all you need is to not provide a condition. This means that this is also a valid infinite loop:
for(int i = 0;; i++)
    printf("Iteration: %i\n", i);
For readability, and to make sure that the second semi-colon isn't a typo, some programmers might put a space between them, or put true as the condition.
Honestly, I prefer while(true), as this is a clear infinite loop. Using while(1) works as well, but '1' is an integer, not a boolean. While it is equivalent to true, it does not always mean true.
Between these three types of infinite loops, for(;;) has the fewest characters (7). while(1) comes second at 8 characters, and while(true) at 11.
I suppose that certain programmers prefer a low byte count over a readable program, but I wouldn't recommend using for(;;). While equivalent, I believe that using while(true) is better practice.
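One detail worth noting for C specifically: before C23, true only exists after including <stdbool.h>, so a while(true) version looks something like this sketch:

#include <stdbool.h>   /* 'true' needs this header in C prior to C23 */
#include <stdio.h>

int main(void)
{
    int n = 0;
    while (true) {     /* reads as an explicitly intentional infinite loop */
        if (++n == 3)
            break;
    }
    printf("n = %d\n", n);
    return 0;
}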
A for loop takes three expressions, which are separated by semicolons and are completely optional:
An initialization (e.g. int i=0)
A condition (e.g. i < 10)
An afterthought (e.g. i++)
In this case, the three expressions are empty, and thus there's no condition that will make the loop stop, thus creating an infinite loop, unless a flow control instruction like break (which will exit the loop) or return is used.
It's an infinite loop. It's equivalent to:
while (true) {
}
The C# compiler directly translates for (; ;) into the exact same construct as while (true).
Infinite loop
same as
while(true){}
The code for(;;){}, while(true){} and while(1){} all represent infinite loops.
An infinite loop is something to be expected in a software system that is expected to run an "unlimited" amount of time. Every OS has at least one - it's how a background task or idle task is implemented.
Real-time systems use infinite loops as well, because the system has to handle events, which are asynchronous.
Basically, anything that runs software uses infinite loops in one way or another.
I don't know why no one answered why people do this instead of while(true): It's because while(true) will often generate a compilation warning that the condition of the loop is constant.
This kind of infinite loop can be used on microcontrollers. In your main function, you initialize the basic functions of your microcontroller, then put a while (1).
void MAIN(void)
{
    Init_Device();
    while(1);
}
The microcontroller will then do nothing but wait for interrupts from internal or external devices (like a timer overflow, or UART data ready to be read), and react to these interrupts.
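As a rough sketch of that pattern (assuming an AVR target built with avr-gcc/avr-libc; the vector name is device-dependent and Init_Device is a stand-in for the setup above):

#include <avr/interrupt.h>

static volatile unsigned char tick;   /* flag set by the interrupt handler */

ISR(TIMER0_OVF_vect)                  /* hypothetical vector: timer-0 overflow */
{
    tick = 1;                         /* react to the event; main() could poll this */
}

static void Init_Device(void)
{
    /* ... configure the timer and enable its overflow interrupt here ... */
    sei();                            /* globally enable interrupts */
}

int main(void)
{
    Init_Device();
    for (;;)
        ;                             /* idle forever; all the work happens in the ISR */
}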

PC Lint while(TRUE) vs for(;;)

I'm using PC Lint for the first time. I was "linting" my code when PC Lint warned me about my while(TRUE).
This is what it says:
716: while(1) ... -- A construct of the form while(1) ... was found.
Whereas this represents a constant in a context expecting a Boolean,
it may reflect a programming policy whereby infinite loops are
prefixed with this construct. Hence it is given a separate number and
has been placed in the informational category. The more conventional
form of infinite loop prefix is for(;;).
I didn't understand this statement. Can anyone help me to understand it?
The text says that although while(TRUE) (which gets preprocessed into while(1)) is a perfectly valid infinite loop, the more conventional form of writing an infinite loop is
for(;;)
{
...
}
because it doesn't use any values at all and therefore is less error-prone.
It says that the more conventional infinite loop is for(;;), which I'd say is an arguable assertion, and that it has categorized this construct as an "informational category" finding - I suspect that if you used for(;;) instead, the message would go away. I've always written these as while(1) and never for(;;) myself. If the code does what you expect, I'd ignore PC Lint's finding on this one, or switch if you're worried about someone redefining TRUE - because if someone redefined TRUE, your loop wouldn't run at all.
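As a contrived sketch of that last point (the redefinition is purely hypothetical):

#include <stdio.h>

#define TRUE 0                 /* contrived redefinition, e.g. pulled in from a bad header */

int main(void)
{
    while (TRUE) {             /* expands to while (0): the body never runs */
        puts("never printed");
    }

    for (;;) {                 /* unaffected by the macro: loops until the break */
        puts("printed once, then we break out");
        break;
    }
    return 0;
}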
