In a C book there was a true/false question in which the following two statements were described as true.
1) The compiler implements a jump table for the cases used in a switch.
2) A for loop can be used if we want statements in a loop to be executed at least once.
I have the following questions regarding these two points:
What is the meaning of statement number 1?
According to me, the second statement should be false, because for this task we use a do-while loop. Am I right?
The first point is somewhat misleading, if it's worded just like that. That might just be the point, of course. :)
It refers to one common way of generating fast code for a switch statement, but there's absolutely no requirement that a compiler does that. Even those that do probably don't do it always, since there are bound to be trade-offs that perhaps only make it worthwhile for a switch with more than n cases. Also, the cases themselves typically have to be "compact" in order to provide a good index to use in the table.
And yes, a do-while loop is what to use if you want at least one iteration, since it does the test at the end, whereas both for and while do it at the start.
1) It means that a common optimization is for a compiler to build a "jump table" which is like an array where the values are addresses of instructions where the program will execute next. The array is built such that the indexes correspond to values being switched on. Using a jump table like this is O(1), whereas a cascade of "if/else" statements is O(n) in the number of cases.
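To make this concrete, here is a minimal sketch in C of the mechanism, written out by hand as a table of function pointers (handle_0 and friends are invented names for this sketch; a real compiler builds the equivalent table of code addresses internally):

#include <stdio.h>

/* One handler per case value; the names are invented for this sketch. */
static void handle_0(void) { puts("case 0"); }
static void handle_1(void) { puts("case 1"); }
static void handle_2(void) { puts("case 2"); }

/* The "jump table": an array indexed by the switched-on value,
   holding the address control is transferred to. Dispatch is O(1). */
static void (*const jump_table[])(void) = { handle_0, handle_1, handle_2 };

static void dispatch(int value)
{
    if (value >= 0 && value < 3)   /* range check, like the one a compiler emits */
        jump_table[value]();       /* one indexed, indirect call */
    /* out-of-range values fall through, like a missing default */
}

int main(void)
{
    dispatch(1);   /* prints "case 1" */
    return 0;
}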
2) Sure, you can use a "do-while" loop to execute something "at least once." But you'll find that do-while loops are fairly uncommon in most applications, and "for" loops are the most common--partly because if you omit the first and third parts between their parentheses, they are really just fancy "while" loops! For example:
for (; i < x; ) // same as while (i < x)
for (i = 0; i == 0 || i < x; ) // like i = 0; do ... while (i < x)
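For comparison, a minimal C sketch of the do-while form, which tests at the end and therefore always runs its body at least once:

#include <stdio.h>

int main(void)
{
    int i = 0;
    int x = 0;                       /* loop bound; chosen 0 to show the point */
    do {
        printf("iteration %d\n", i); /* executes once even though i < x is false */
        i++;
    } while (i < x);                 /* test happens after the body */
    return 0;
}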
To my knowledge, loops are used in programming to do repetitive tasks.
There are certain types of loops, like for, while, and do-while, and their syntax differs from each other. For example, in a while loop we initialize the counter outside the loop, check the condition in while(), and increment or decrement inside the code block, whereas in a for loop the initialization, condition check, and increment/decrement all go in the for header.
So my question is: which loop is most efficient and occupies the least memory?
The loops you listed aren't really going to differ in memory usage or "efficiency". Rather, each one should be used in different situations. A for loop is often used when one needs to iterate over something indexable, such as a string or an array. For example (Java):
for (int i = 0; i < fooString.length(); i++) {
    fooCharArray[i] = fooString.charAt(i);
}
You could also achieve the same with a while loop:
int i = 0;
while (i < fooString.length()) {
    fooCharArray[i] = fooString.charAt(i);
    i++;
}
Often, recursion can achieve the same results as loops, too (though in my example it'd seem slightly wasteful, since a loop could do it so easily). So really, it's more about what you're doing, what's easiest for you, and what makes it the most readable/understandable for you and other programmers.
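To illustrate the recursion point, here is a hedged sketch in C (the Java snippets above translate directly); copy_chars is an invented name:

#include <stddef.h>

/* Recursively copy src[i..len-1] into dst, equivalent to the loops above. */
static void copy_chars(const char *src, char *dst, size_t i, size_t len)
{
    if (i >= len)
        return;                        /* base case: nothing left to copy */
    dst[i] = src[i];
    copy_chars(src, dst, i + 1, len);  /* recurse on the next index */
}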
As the title suggests, which languages support while or do/while loops with automatic iteration indexing?
In other words, a [while] or [do/while] loop that provides the iteration index automatically, without having to resort to an intrinsically indexed construct such as the [for] loop. This was an odd question to Google, which fetched out-of-context results.
Take the following as an example for C#:
int count = 0;
while (count < 10)
{
Console.WriteLine("This is iteration #{0}", count++);
}
As opposed to the following fictitious loop:
while<int> (value < 10)
{
Console.WriteLine("This is iteration #{0}", value); // Or value-- for that matter.
}
I do not know much about language design, hence the question. The [for] loop has a lot of flexibility, but each kind of loop is best suited for certain scenarios. Still, the absence of such a construct makes certain scenarios very odd, such as combining iterators with indexers.
This is not meant to be an open-ended question. Simply: do any languages support such constructs? (And if not, well, I cannot ask why here, as that would make it open-ended.)
UPDATE: I do realize the complexity that would arise out of nesting such loops, but I'm sure that can be circumvented by some clever naming convention.
Something I had in mind but did not mention was the use of a clever lambda expression in the case of C# for example. That would not be an addition to the language but merely an extension (which I believe is only valid for reflection-friendly platforms such as .NET and Java).
In Ruby: array.each_index { |i| print i } will go through each index.
From my memory, the DO ... LOOP words in Forth supported something like that.
The word to get the index of the current loop was I (obviously!) and that for the next outer loop was J.
Hence
11 1 DO I . LOOP CR
would print:
1 2 3 4 5 6 7 8 9 10
Given the code:
for (int i = 0; i < n; ++i)
{
    A(i);
    B(i);
    C(i);
}
And the optimized version:
for (int i = 0; i < (n - 2); i += 3)
{
    A(i);
    A(i + 1);
    A(i + 2);
    B(i);
    B(i + 1);
    B(i + 2);
    C(i);
    C(i + 1);
    C(i + 2);
}
Something is not clear to me: which is better? I can't see how the second version works any faster. Am I missing something here?
All I see is that each instruction depends on the previous instruction, meaning that I have to wait for the previous instruction to finish before starting the one after...
Thanks
In the high-level view of a language, you're not going to see the optimization. The speed enhancement comes from what the compiler does with what you have.
In the first case, it's something like:
LOCATION_FLAG;
DO_SOMETHING;
TEST FOR LOOP COMPLETION;//Jumps to LOCATION_FLAG if false
In the second it's something like:
LOCATION_FLAG;
DO_SOMETHING;
DO_SOMETHING;
DO_SOMETHING;
TEST FOR LOOP COMPLETION;//Jumps to LOCATION_FLAG if false
You can see in the latter case, the overhead of testing and jumping is only 1 instruction per 3. In the first it's 1 instruction per 1; so it happens a lot more often.
Therefore, if you have invariants you can rely on (an array whose length is a multiple of 3, to use your example), then it is more efficient to unroll loops, because the underlying assembly maps more directly onto the work.
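As a hedged sketch in C, here is the unrolled loop from the question with a cleanup loop added, so it stays correct when n is not a multiple of 3 (A, B and C are the functions from the question):

int i;
/* Unrolled main loop: one test-and-branch per three iterations. */
for (i = 0; i + 2 < n; i += 3) {
    A(i); A(i + 1); A(i + 2);
    B(i); B(i + 1); B(i + 2);
    C(i); C(i + 1); C(i + 2);
}
/* Cleanup loop: handles the 0, 1 or 2 leftover iterations. */
for (; i < n; ++i) {
    A(i);
    B(i);
    C(i);
}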
Loop unrolling is used to reduce the number of jump & branch instructions which could potentially make the loop faster but will increase the size of the binary. Depending on the implementation and platform, either could be faster.
Well, whether this code is "better" or "worse" totally depends on implementations of A, B and C, which values of n you expect, which compiler you are using and which hardware you are running on.
Typically the benefit of loop unrolling is that the overhead of doing the loop (that is, incrementing i and comparing it with n) is reduced. In this case, it could be reduced by a factor of 3.
As long as the functions A(), B() and C() don't modify the same datasets, the second version provides more parallelization options.
In the first version, the three functions could run simultaneously, assuming no interdependencies. In the second version, all three functions could be run with all three datasets at the same time, assuming you had enough execution units to do so and again, no interdependencies.
Generally it's not a good idea to try to "invent" optimizations unless you have hard evidence that you will gain an improvement, because many times you may end up introducing a degradation. Typically the best way to obtain such evidence is with a good profiler. I would test both versions of this code with a profiler to see the difference.
Also, loop unrolling often isn't very portable; as mentioned previously, it depends greatly on the platform, compiler, etc.
You can additionally play with the compiler options. An interesting gcc option is "-floop-optimize", which you get automatically with "-O, -O2, -O3, and -Os".
EDIT Additionally, look at the "-funroll-loops" compiler option.
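For example, a typical invocation might look like this (the file name is a placeholder):

gcc -O2 -funroll-loops -o myprog myprog.c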
I would like to know which is the best way to write loops.
Is a count-down-to-zero loop better than a count-up loop? And particularly in the context of embedded systems, which one is the better choice?
In the embedded world it can be better to use one scheme in preference to another depending upon the processor that you are using. For example, the PIC processor has a "decrement and skip if zero" instruction (DECFSZ). This is really good for doing a count-down for loop in a single instruction.
Other processors have different instruction sets so different rules apply.
You may also have to factor in the effects of compiler optimisation which may convert a count up into the possibly more efficient count down version.
As always, if you have to worry about these things then you are using the wrong processor and tools. The idea of writing in a higher level language than assembler is to explain to the maintenance engineer how the software works. If it is counter intuitive to use a count down loop then don't, regardless of the (minor) loss in processor efficiency.
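As a hedged sketch in C, the count-down form that such instructions reward looks like this (process is an invented placeholder for the loop body):

void process(int index);   /* invented placeholder for the loop body */

void count_down(int n)
{
    /* The exit test is a compare against zero, which many CPUs can fold
       into a single decrement-and-branch instruction. */
    for (int i = n; i != 0; i--)
        process(i - 1);     /* visits indices n-1 down to 0 */
}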
It's personal preference. In the case of arrays, counting up from 0 is usually better because you typically want to process each value in order. Neither style is inherently better, but they may have different results (e.g. if you were printing each value in an array, the order of the output would be different).
In many cases (with the notable exception of arrays), the most logical choice is to use a while loop rather than a for loop, e.g. reading from a file:
int c;
while ((c = fgetc(somefile)) != EOF) {
    /* Do something with c */
}
The main thing to worry about is that you, or someone else, will read the code some time in the future, and that person must be able to understand what the code is intended to do.
In other words, write your code so that it plainly says what it means. If you intend to do something ten times, loop from 0 to less-than-10. If you intend to walk backwards through an array, loop from the higher value to the lower.
Avoid placing anything that is not related to the control of the loop in the header of the for statement.
When it comes to efficiency, you can safely leave that to the compiler.
The best way to write a for loop is:
for (i = 0; i < N; i++)
    for (j = 0; j < N; j++)
    {
        array[i][j] = ...;
    }
Everything else is "premature optimization", i.e. things that the compiler really should be able to handle for you.
If you have a dumb compiler, however, it may be more effective to count from N down to zero, as a compare against zero is faster than a compare against an arbitrary value on most CPUs.
Note that N should be a constant expression if possible. Keep function calls like strlen() out of the loop condition.
++i may also be faster if the code ends up in front of a C++ compiler: for class types, i++ has to create a temporary copy holding the old value, so ++i can never be slower. (The C++ standard does not actually guarantee any speed difference, and for built-in types like int, compilers generate identical code for both forms.)
The order of the loops should be just as above for most systems, as this is often the most effective way to address cache memory, which is quite an advanced topic.
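As a hedged illustration of the cache point, compare these two traversal orders in C; the first touches memory sequentially, the second strides a whole row per access:

#define N 1024

static double array[N][N];

double sum_row_major(void)
{
    double sum = 0.0;
    /* Same order as above: consecutive accesses are adjacent in memory,
       so each cache line fetched is used fully. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += array[i][j];
    return sum;
}

double sum_column_major(void)
{
    double sum = 0.0;
    /* Swapped order: each access jumps a whole row ahead, touching a
       new cache line almost every time; typically much slower. */
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += array[i][j];
    return sum;
}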
Are simple loops as powerful as nested loops in terms of Turing completeness?
In terms of Turing completeness, yes they are.
Proof: It's possible to write a Brainf*** interpreter using a simple loop, for example here:
http://www.hevanet.com/cristofd/brainfuck/sbi.c
For loops with a fixed number of steps (LOOP, FOR and similar): imagine the whole purpose of a loop is to count to n. Why should it make a difference whether I loop i times in an outer loop and j times in an inner loop, as opposed to counting to n = i * j in just a single loop?
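As a hedged sketch in C of that equivalence (visit is an invented placeholder for the loop body):

void visit(int i, int j);   /* invented placeholder for the loop body */

void nested(int i_max, int j_max)
{
    /* Nested form: visits i_max * j_max index pairs. */
    for (int i = 0; i < i_max; i++)
        for (int j = 0; j < j_max; j++)
            visit(i, j);
}

void flattened(int i_max, int j_max)
{
    /* Single loop over n = i_max * j_max steps; the pair (i, j)
       is recovered from k by division and remainder. */
    for (int k = 0; k < i_max * j_max; k++)
        visit(k / j_max, k % j_max);
}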
Assume that no WHILE, GOTO or similar constructs are allowed in a program (just assignment, IF, and fixed loops). Then all such programs terminate after a finite number of steps.
The next step towards more expressive power is to allow loops where the number of iterations is determined by, say, a condition, and it is not certain whether this condition is ever satisfied (e.g. WHILE). Then it may happen that a program won't halt. (This kind of expressiveness is also known as Turing-completeness.)
Corresponding to these two forms of programs are two kinds of functions, which were historically developed around the same time and which are called primitive recursive functions and μ-recursive functions.
The depth of nesting doesn't play a role in this.