In the following program, if the last condition is true, we unnecessarily have to check all the conditions before it.
Is there any way to implement a switch/case in the program below?
I have to convert code very similar to this into ARM assembly.
main()
{
    int x;
    if (x < 32768)
        x << 15;
    elseif (x < 49152)
        (x << 12) - 7;
    elseif (x < 53248)
        (x << 11) - 13;
    elseif (x < 59392)
        (x << 10) - 27;
    elseif (x < 60928)
        (x << 9) - 61;
    elseif (x < 62208)
        (x << 8) - 139;
    elseif (x < 64128)
        (x << 7) - 225;
    elseif (x < 65088)
        (x << 6) - 414;
    elseif (x < 65344)
        (x << 5) - 801;
    elseif (x < 65488)
        (x << 4) - 1595;
    elseif (x < 65512)
        (x << 3) - 2592;
    elseif (x < 65524)
        (x << 2) - 4589;
    elseif (x < 65534)
        (x << 1) - 8586;
}
Hope someone will help me.
So first things first: are you concerned about performance? If so, do you have actual profiling data showing that this code is a hot-spot and nothing else shows up on the profile?
I doubt that. In fact, I am willing to bet that you haven't even benchmarked it. You are instead looking at the code and trying to micro-optimize it.
If that's the case, then the answer is simple: stop doing that. Write your code the way it makes sense to write it and focus on improving the algorithmic efficiency of your code. Let the compiler worry about optimizing things. If the performance proves inadequate, then profile and focus on the results of the profiling. Almost always, the answer to your performance problems will be: choose a better-performing algorithm. The answer will almost never be "tinker with an if statement".
Now, to answer your question: a switch isn't helpful in this scenario because there's no sane way to represent the concept x < 32768 in a case statement, short of writing one case label for every such value of x. Obviously this is neither practical nor sane.
More importantly you seem to operate under the misconception that a switch would translate to fewer comparisons. It's possible in some rare cases for a compiler to be able to avoid comparisons, but most of the time a switch will mean as many comparisons as you have case statements. So if you need to check a variable against 10000 different possible values using a switch, you'll get 10000 comparisons.
In your case, you're checking for way more than 10,000 possible values, so the simple if construct combined with the "less than" operator makes a lot more sense and will be much more efficient than a switch.
You write that "In the following program, if the last condition is true, we unnecessarily have to check all the conditions before it." True, you do. You could rewrite it so that if the last condition were true you would only need two comparisons. But then you'd simply flip the problem on its head: if x < 32768 you'd end up having to check all the other possible values, so you'd be back where you started.
One possible solution would be to perform binary search. This would certainly qualify as an algorithmic improvement, but again without hard data that this is, indeed, a hotspot, this would be a rather silly exercise.
The bottom line is this: write correct code that is easy to understand, maintain, and improve, and don't worry about reordering if statements. The excellent answer by perreal shows a good example of simple, easy-to-understand, and maintainable code.
And on the topic of writing correct code, there's no such thing as elseif in C and C++. Which brings us to my last point: before micro-optimizing code at least try to run the compiler.
You can't do that with switches since you need to have constant values to compare to x, not boolean conditions. But you can use a struct like this:
struct {
    int u_limit;      /* exclusive upper bound of the range */
    int shift_left;
    int add;
} ranges[13] = { {32768, 15, 0}, {49152, 12, -7} /*, ...*/ };

for (int i = 0; i < 13; i++) {
    if (x < ranges[i].u_limit) {
        /* parentheses needed: + binds tighter than << */
        x = (x << ranges[i].shift_left) + ranges[i].add;
        break;
    }
}
Of course, you can replace the linear search with binary search for some speedup.
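A minimal sketch of that binary search, assuming the same ranges table as above (it must remain sorted in ascending order of u_limit):

/* Find the first range whose u_limit is greater than x. */
int lo = 0, hi = 13;                /* search window: [lo, hi) */
while (lo < hi) {
    int mid = lo + (hi - lo) / 2;
    if (x < ranges[mid].u_limit)
        hi = mid;                   /* x belongs to this range or an earlier one */
    else
        lo = mid + 1;               /* x lies past this range */
}
if (lo < 13)                        /* lo now indexes the matching range */
    x = (x << ranges[lo].shift_left) + ranges[lo].add;

This does at most 4 comparisons for 13 ranges instead of up to 13.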
If a human wants to rearrange the numbers 1, 2, 3, 4 from biggest to smallest, the human will check whether the second number is bigger than the first, but will not swap them until done examining the rest of the numbers.
The new order becomes: 4, 2, 3, 1
However, the code below swaps "1" and "2" as soon as it determines that "2" is bigger than "1".
The new order becomes: 2, 1, 3, 4
The program will do more swapping than a human would, so perhaps it is less efficient than the human method?
Is there a way to apply the efficiency of the human method to this program? Or perhaps the human method is not more efficient but merely appears that way?
int a[] = {1, 2, 3, 4};
int total = 4;
int i;
int i2;
int holder = 0;

for (i = 0; i < total; i = i + 1) {
    for (i2 = i + 1; i2 < total; i2 = i2 + 1) {
        if (a[i] < a[i2]) {    /* swap immediately on finding a bigger value */
            holder = a[i];
            a[i] = a[i2];
            a[i2] = holder;
        }
    }
}
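For what it's worth, the "human method" described above (examine the whole remainder of the list first, then swap at most once per pass) is essentially selection sort. A minimal sketch for the same descending order; it reduces the number of swaps, not the number of comparisons:

int a[] = {1, 2, 3, 4};
int total = 4;
int i, i2, biggest, holder;

for (i = 0; i < total; i = i + 1) {
    biggest = i;                              /* remember where the biggest value is... */
    for (i2 = i + 1; i2 < total; i2 = i2 + 1) {
        if (a[biggest] < a[i2])
            biggest = i2;                     /* ...but do not swap yet */
    }
    if (biggest != i) {                       /* at most one swap per pass */
        holder = a[i];
        a[i] = a[biggest];
        a[biggest] = holder;
    }
}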
Please note that the opinions below are my own, regardless of any literature that might exist on the topic. They are not supposed to be considered science, regardless of how "science" might be defined.
The short answers:
We have no idea how the human brain actually works - referring here to mathematical computations, comparisons etc.
It is guaranteed that the human brain works radically differently from any computer.
Maybe my "metaphor" is not accurate, but I have reached this conclusion: the human brain does many (most?) calculations "visually": you just look and you know the correct answer. A computer would need very complex algorithms, and it might still not be able to solve the problem.
Also, the human brain is able to generate a totally different problem with the same result / answer as the original, but a lot easier to calculate. And it does that without us even being aware of it, most of the time.
It was already mentioned in a comment: for the example in your problem, a human would not sort that list of digits. They would just count down from 4 to 1.
If the problem would provide different numbers, e.g. {5, 21, 48, 16}, the brain cores would "visually" detect the maximums and minimums in the list, and rearrange them in the correct order, without real comparisons (at least, we are not aware of them).
The human brain is definitely multi-core. But the cores are not independent like in a computer, where they only exchange some data. They are permanently reconfigurable, and I suspect that these "cores" of the brain actually overlap, not only regarding data, but also regarding execution.
To understand the kind of computing done in "biological computers":
References: Rod_cell, Cone_cell, Optic_nerve
Math:
100 million rod cells;
7 million cone cells;
Each human optic nerve contains between 770,000 and 1.7 million nerve fibers
Now you see, at most 1.7 million nerve fibers connect 107 million sensors to the brain. That is actually the "definition" of image / video compression. The eye (retina?) is a standalone computer in itself. If it is able to do video compression, then it MUST be able (my opinion) to sort a short list without needing to relay the data to the brain. That could explain WHY we know the answer to a problem just by looking at it: we receive the answer together with the problem, because all the work was done elsewhere.
It seems "obvious" that biological computers perform mathematical comparisons at some level; it is just that we have no idea where and how they are done. Maybe in a low-level "driver"? Maybe they are off-loaded to some other processing unit? A "hardware accelerator"? Maybe, hopefully, the future will tell us.
Reading a couple of questions about post-increment and pre-increment, I find myself needing to explain to a new programmer in which cases one would actually need one or the other: in what type of scenario one would apply a post-increment operator, and in what type it is better to apply a pre-increment one.
This is to teach case studies where, in a particular piece of code, one would need to apply one or the other in order to obtain specific values for certain tasks.
The short answer is: You never need them!
The long answer is that the instruction sets of early micro-computers had features like that. Upon reading a memory cell, you could post-increment or pre-decrement that cell as part of the same instruction. Such machine-level features inspired the predecessors of C, whence they found their way even into more recent languages.
To understand this, one must remember that RAM was extremely scarce in those days. When you have 64k addressable RAM for your program, you'll find it worth it to write very compact code. The machine architectures of those days reflected this need by providing extremely powerful instructions. Hence you could express code like:
s = s + a[j]
j = j + 1
with just one instruction, given that s and j were in a register.
Thus we have language features in C that allowed the compiler, without much effort, to generate efficient code like:
register int s = 0;   // clr r5
s += a[j++];          // mov j+, r6    ; move j to r6 and increment j after
                      // add r5, a[r6]
The same goes for the short-cut operations like +=, -=, *= etc.
They are
A way to save typing
A help for a compiler that had to fit in small RAM, and couldn't afford much optimizations
For example,
a[i] *= 5
which is short for
a[i] = a[i] * 5
in effect saves the compiler some form of common subexpression analysis.
And yet, all these language features can always be replaced by equivalent, maybe slightly longer, code that doesn't use them. Modern compilers should translate them to efficient code, just like the shorter forms.
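For instance, the earlier line unfolds into a plain assignment pair:

s += a[j++];    /* with the operators */

s = s + a[j];   /* the equivalent without them */
j = j + 1;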
So the bottom line, and the answer to your question: you don't need to look for cases where one needs to apply those operators. Such cases simply do not exist.
Well, some people like Douglas Crockford are against using those operators because they can lead to unexpected behaviors by hiding the final result from the untrained eye.
But, since I'm using JSHint, let's share an example here:
http://jsfiddle.net/coma/EB72c/
var List = function(values) {
    this.index = 0;
    this.values = values;
};

List.prototype.next = function() {
    return this.values[++this.index];
};

List.prototype.prev = function() {
    return this.values[--this.index];
};

List.prototype.current = function() {
    return this.values[this.index];
};

List.prototype.prefix = function(prefixes) {
    var i;
    for (i = 0; i < this.values.length; i++) {
        this.values[i] = prefixes[i] + this.values[i];
    }
};

var animals = new List(['dog', 'cat', 'parrot']);

console.log('current', animals.current());
console.log('next', animals.next());
console.log('next', animals.next());
console.log('current', animals.current());
console.log('prev', animals.prev());
console.log('prev', animals.prev());

animals.prefix(['Snoopy the ', 'Garfield the ', 'A ']);

console.log('current', animals.current());
As others have said, you never "need" either flavor of the ++ and -- operators. Then again, there are a lot of language features that you never "need" but that are useful in clearly expressing the intent of the code. You don't need an assignment that returns a value either, or unary negation, since you can always write (0-x)... heck, if you push that to the limit you don't need C at all, since you can always write in assembler, and you don't need assembler, since you can always set the bits to construct the instructions by hand...
(Cue Frank Hayes' song, When I Was A Boy. And get off my lawn, you kids!)
So the real answer here is to use the increment and decrement operators where they make sense stylistically -- where the value being manipulated is in some sense a counter and where it makes sense to advance the count "in passing". Loop control is one obvious place where people expect to see increment/decrement and where it reads more clearly than the alternatives. And there are many C idioms which have almost become meta-statements in the language as it is actually used -- the classic one-liner version of strcpy(), for example -- and which an experienced C programmer will recognize at a glance and be able to recreate at need; many of those do take advantage of increment/decrement as side effect.
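For reference, the strcpy() idiom alluded to above looks like this (my_strcpy is just a stand-in name):

/* The classic K&R one-liner: copies characters up to and including the
   terminating NUL, relying entirely on post-increment side effects. */
char *my_strcpy(char *dst, const char *src)
{
    char *ret = dst;
    while ((*dst++ = *src++) != '\0')
        ;                      /* all the work happens in the condition */
    return ret;
}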
Unfortunately, "where it makes sense stylistically" is not a simple rule to teach. As with any other aspect of coding style, it really needs to come from exposure to other folks' code and from an understanding of how "native speakers" of the programming language think about the code.
That isn't a very satisfying answer, I know. But I don't think a better one exists.
Usually, it just depends on what you need in your specific case. If you need the result of the operation, it's just a question of whether you need the value of the variable before or after incrementing / decrementing. So use the one which makes the code more clear to understand.
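A minimal illustration of that choice:

int x = 5, y;
y = x++;    /* post: y == 5 (the old value); x == 6 afterwards */
x = 5;
y = ++x;    /* pre:  y == 6 (the new value); x == 6 afterwards */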
In some languages like C++ it is considered good practice to use the pre-increment / pre-decrement operators if you don't need the value, since the post operators need to store the previous value in a temporary during the operation, therefore they require more instructions (additional copy) and can cause performance issues for complex types.
I'm no C expert, but I don't think it really matters which one you use in C, since there is no operator overloading: the increment / decrement operators apply only to scalar types, never to large structs.
I have a large switch statement, with about 250 cases, in Visual C:
#define BOP -42
#define COP -823
#define MOP -5759
int getScarFieldValue(int id, int ivIndex, int rayIndex, int scarIndex, int reamIndex)
{
    int returnValue = INT_MAX;
    switch (id)
    {
    case BOP: returnValue = Scar[ivIndex][rayIndex].bop[scarIndex][reamIndex]; break;
    case COP: returnValue = Scar[ivIndex][rayIndex].cop[scarIndex][reamIndex]; break;
    case MOP: returnValue = Scar[ivIndex][rayIndex].mop[scarIndex][reamIndex]; break;
    .....
    default: return(INT_MAX);
    }
    return returnValue;
}
The #defines, you will notice, have a huge range, from -1 to -10,000. The thing is dog slow, and I'm wondering if spending several hours redefining these 250 defines to a narrower (or even consecutive) range could speed things up. I always thought the compiler would treat the case values in a way that made their numeric value irrelevant, but I have not been able to find any discussion to validate/invalidate that assumption.
Disassemble the compiled code and see what the compiler does. I've looked at the output from several different compilers, and large switch statements were always compiled into binary decision trees or jump tables. Jump tables are the most optimal thing you can get, and they are more likely to be generated by the compiler if the values you're switching on are in a narrow range. It also helps to have a default statement on some compilers (but it is not necessary on others).
This is one situation where disassembling is your only good option, the details of code generation on this level are rarely well documented.
Simple solution:
Break switch case into multiple parts.
if (id <= 50)
{
    switch (id)
    {
        // all cases between 0 and 50
    }
}
else if (id > 50 && id <= 100)
{
    switch (id)
    {
        // all cases between 51 and 100
    }
}
// and so on
The choice of ranges is yours, but don't create too many. This should give you faster code than what you have now.
Alternatively, you can use function pointers and write functions containing the statements that are to be executed in each case. I would prefer this method myself.
typedef struct
{
    void (*Cur_FunctnPtr)();
} CMDS_Functn;

void (*Cur_Func)();

CMDS_Functn FunctionArray[66] = {
    /*00-07*/ {Func1}, {Func2}, {Func3}, ...
    /*08-0F*/
    /*40-47*/ };

void my_func()
{
    ...                                          // whatever code
    Cur_Func = FunctionArray[id].Cur_FunctnPtr;  // load the current function pointer
    (*Cur_Func)();                               // dispatch through it
    ...                                          // whatever code
}
Read the code to figure out what the switch compiles to.
If you have a hash table implementation handy, you could try using that, but it will of course require you to extract all the "action" code into something you can jump to from a hashtable lookup result.
If using GCC, I would do a quick test combining GCC's computed goto with a simple sorted array so you can use good old binary search. The latter will cut the number of worst-case comparisons done by your code from 250/2 to log2(250), i.e. around 8.
This will require a look-up table declared at compile-time (and perhaps sorted, once, at run-time), which probably is better in terms of memory overhead than most hashtables will manage.
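A minimal sketch of the sorted-table idea, using the standard bsearch() rather than computed goto (the Entry type and the handler signature are assumptions for illustration):

#include <limits.h>
#include <stdlib.h>

typedef struct {
    int id;                    /* case label value, e.g. BOP */
    int (*handler)(void);      /* action for that id */
} Entry;

/* The table must be sorted by id (at compile time, or once at startup). */
static int cmp_entry(const void *k, const void *e)
{
    int key = *(const int *)k;
    int id  = ((const Entry *)e)->id;
    return (key > id) - (key < id);
}

int dispatch(int id, const Entry *table, size_t n)
{
    const Entry *hit = bsearch(&id, table, n, sizeof *table, cmp_entry);
    return hit ? hit->handler() : INT_MAX;   /* mirrors the default: case */
}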
If you look at the assembly output for the code, you will probably notice that your switch statement is being compiled into code that resembles cascading if statements:
if (id == BOP) ...
else if (id == COP) ...
else if (id == MOP) ...
...
else ...
Because of this, a simple tip to speed up the switch statement is to move the most commonly hit cases near the top.
If the case values are sorted, then the compiler may be able to generate a binary decision tree, reducing the complexity from linear to logarithmic.
At a high enough optimization level on a compiler that supports it, the compiler may be able to generate a computed goto style code. For non-consecutive values, the offset to jump to would be stored in a hash table, and a perfect hash function is generated for the case values. For consecutive values, there is no need for a hash function, as a simple indexed array can be used to store the jump offsets. You would have to check the assembler output for the optimized code to see if your compiler supports this functionality.
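For the curious, GCC's labels-as-values extension (the basis of such computed-goto code) looks roughly like this; a hand-written sketch, not standard C:

/* GNU C: '&&label' yields a label's address and 'goto *p' jumps to it.
   A dense table of label addresses replaces the comparison cascade. */
void run(unsigned op)             /* assumes op is in 0..2 */
{
    static void *jump[] = { &&op0, &&op1, &&op2 };
    if (op > 2)
        return;
    goto *jump[op];
op0: /* handle case 0 */ return;
op1: /* handle case 1 */ return;
op2: /* handle case 2 */ return;
}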
Otherwise, it may be better to create your own hash on the case value, and instead of using switch, you would do your own hash table lookup to find the right matrix to use, and then acquire your value.
Maybe you should use a hash table, so you can do a hash lookup instead of the "switch case".
If you know the characteristics of the distribution of likely id values, test for them in most-likely to least-likely order in your case statement.
If this gets called frequently, you might want to consider storing the choices in a Dictionary: they get resolved without serial comparisons, and thus might save a lot of time if there are really 10,002 choices.
Your problem is that the range of the IDs is not consecutive. No compiler can do better with that than a cascade of conditions of logarithmic depth, here about 8.
A way to fix this would be to use an enum that makes the IDs consecutive; the compiler can then use a jump table to speed things up. To know whether that would really work, you'd have to check the rest of your application to see whether it tolerates changing the values.
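A hedged sketch of that renumbering (the names are invented for illustration):

/* Consecutive ids: a switch over these can compile to one indexed jump. */
enum FieldId {
    ID_BOP,        /* was BOP  (-42)   */
    ID_COP,        /* was COP  (-823)  */
    ID_MOP,        /* was MOP  (-5759) */
    /* ... the remaining ~247 ids ... */
    ID_COUNT
};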
The compiler will only optimise using techniques it's aware of, and if none of those techniques work then you'll get something terrible.
You could either implement something yourself, or you could try to give the compiler some clues. In the latter case you have a fair chance of the compiler "getting it" and then optimising the solution further than your own implementation can -- and the compiler can avoid C syntax limitations which would constrain your own solution.
As for solutions: obviously the best one is to renumber the ids to be consecutive.
Another approach is to take your 250 values and search for a perfect hash function to reduce them to an 8-bit quantity.
#define PERFECT_HASH(x) ((x) & 0xff) /* some clever function of x */
switch (PERFECT_HASH(id))
{
case PERFECT_HASH(BOP): returnValue = Scar[ivIndex][rayIndex].bop[scarIndex][reamIndex]; break;
case PERFECT_HASH(COP): returnValue = Scar[ivIndex][rayIndex].cop[scarIndex][reamIndex]; break;
case PERFECT_HASH(MOP): returnValue = Scar[ivIndex][rayIndex].mop[scarIndex][reamIndex]; break;
.....
default: return(INT_MAX);
}
But, having cut and pasted that code, it looks like you're using a switch statement to convert id into what is effectively a pointer to different pieces of the data structure. If all the cases contain a single read through the same kind of pointer, then you definitely don't want to use a switch for that. You need to look at the shape of your data and find a way to compute that pointer more directly. Or switch simply on the type and compute the address separately.
The question is really simple: in a laboratory class I attended this year, the professor presented the switch/case statement alongside the classic if/then/else statement without saying anything about which one is better in which programming situations.
Which one is better when checking a variable which can have at least 10/15 possible values?
Briefly (your question is vague): a switch can compile to a jump table in assembler and can therefore be faster than if / then / else. Note that a switch statement in C has a "fall-through" feature (google this), which can be stopped with break statements.
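A small illustration of fall-through and break (handle_ab and handle_c are placeholder names):

void handle_ab(void);    /* placeholder */
void handle_c(void);     /* placeholder */

switch (c) {
case 'a':            /* no break: execution falls through to the next label */
case 'A':
    handle_ab();     /* runs for both 'a' and 'A' */
    break;           /* break stops execution from falling into 'c' */
case 'c':
    handle_c();
    break;
}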
You can only switch on things that evaluate to integral types. In particular this means that you cannot switch on strings: strings are not part of the natural C language in any case.
An if / then / else checks several conditions in succession. Comparison is not restricted to integral types as all you're testing is true (not zero) or false (zero).
That's probably enough to get you started.
I think if/then/else is better only when you have just two conditions. Otherwise, when there are more than two, it's better to use a switch/case.
When the value to be compared has a type that is amenable to being switch'd, and it makes your code more readable, then go ahead and use a switch. For example,
if (val == 0) {
// do something
} else if (val == 1) {
// do something else
} else if (val == 2) {
// yet another option
} ...
is cluttered and hard to maintain compared to a switch. Imagine that some day, you don't want to switch on val but on validate(val); then you'd need to change all the conditions.
Also, switch may be faster than if/else sometimes, because a compiler may turn it into either a jump table or a binary search. Then again, a compiler might do the same to a series of if/else statements, although that's a more difficult optimization to make because the clauses order might matter and the compiler must be able to detect that it doesn't.
switch is better performance-wise too because it can be optimized in various ways by the compiler, depending on whether the values are consecutive. If yes, it can outright use the value as an index to an array of pointers. If not, it can sometimes use a binary search instead of a linear search, when it's faster.
switch looks better than lots of ifs. However, it only works on numeric expressions (since a char is essentially a number, it can still be applied to one; you cannot, however, use it with strings).
If I may point you here, as it has a nice description of the switch statement. Note the opening sentence:
Switch case statements are a substitute for long if statements that compare a variable to several "integral" values ("integral" values are simply values that can be expressed as an integer, such as the value of a char).
Just a quick question to see whether there are different ways to code something similar when it comes to evaluating conditional statements / control flow.
For example:
If Statements
Switch Statements
Is there any tidier way to do these? I basically have the choice between if (value == X) { // do X } and switch (value) { case X: ... }.
When doing this with over 100 values, is there any data-driven approach that could be taken, or any different evaluation method that would tidy up the code?
If your values are integers and are not sparse, it can sometimes be convenient to use a lookup table, both for data and for code. In the latter case you'd use function pointers; this is often called a jump table, which is incidentally what the compiler often does with switch blocks. If the alternative is checking the possible values one by one, the performance improves from O(N) to O(1). A sketch follows.
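A minimal sketch of such a data-driven dispatch, assuming dense values starting at 0 (the function names are placeholders):

/* Dense integer values index straight into a table of handlers
   instead of a 100-case switch. */
typedef void (*Handler)(void);

static void do_a(void) { /* ... */ }
static void do_b(void) { /* ... */ }

static const Handler handlers[] = { do_a, do_b /* , ... one per value */ };

void dispatch(unsigned value)
{
    if (value < sizeof handlers / sizeof handlers[0])
        handlers[value]();    /* O(1): no comparison against each value */
}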
Also, for non-integer data, hash tables can be used. How convenient they are depends on the case.