I read somewhere that the switch statement uses binary search or some similar technique to choose the correct case, and that this improves its performance compared to an else-if ladder.
Also, if we give the cases in order, does the switch work faster? Is that so? Can you add your valuable suggestions on this?
We discussed this and planned to post it as a question.
It's actually up to the compiler how a switch statement is realized in code.
However, my understanding is that when it's suitable (that is, relatively dense cases), a jump table is used.
That would mean that something like:
switch(i) {
    case 0: doZero(); break;
    case 1: doOne();   /* no break: falls through into case 2 */
    case 2: doTwo(); break;
    default: doDefault();
}
would end up getting compiled to something like this (horrible pseudo-assembler, but it should be clear, I hope):
load i into REG
compare REG to 2
if greater, jmp to DEFAULT
compare REG to 0
if less, jmp to DEFAULT
jmp to table[REG]

data table
    ZERO
    ONE
    TWO
end data

ZERO:    call doZero
         jmp END
ONE:     call doOne
TWO:     call doTwo
         jmp END
DEFAULT: call doDefault
END:
If that's not the case, there are other possible implementations that allow for some extent of "better than a sequence of conditionals".
How switch is implemented depends on what values you have. For values that are close in range, the compiler will generally generate a jump table. If the values are far apart, it will generate chained branches, using something like a binary search to find the right value.
The order of the cases as such doesn't matter; the switch will do the same thing whether you have them in ascending, descending or random order - do what makes most sense with regard to what you want to do.
If nothing else, switch is usually a lot easier to read than an if-else sequence.
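For illustration, here is a hand-written sketch of what the chained-branch form can effectively look like; it is not actual compiler output, and the sparse case values 10, 200 and 3000 and the handler functions are made up:

#include <stdio.h>

static void doTen(void)           { puts("ten"); }
static void doTwoHundred(void)    { puts("two hundred"); }
static void doThreeThousand(void) { puts("three thousand"); }
static void doDefault(void)       { puts("default"); }

/* Binary-search dispatch: split the sorted case values in half,
   then narrow down with comparisons. */
static void dispatch(int i)
{
    if (i < 200) {
        if (i == 10) doTen(); else doDefault();
    } else if (i == 200) {
        doTwoHundred();
    } else if (i == 3000) {
        doThreeThousand();
    } else {
        doDefault();
    }
}

For n sparse values, such a tree is about log2(n) comparisons deep instead of n.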
While googling I found an interesting link and planned to post it as an answer to my question.
http://www.codeproject.com/Articles/100473/Something-You-May-Not-Know-About-the-Switch-Statem
Comments are welcome.
Although it can be implemented in several ways, it depends on how the language implementer wants to realize it.
One possible efficient way is to use hash maps:
map every condition (usually an integer) to the corresponding expression to be evaluated, followed by a jump statement.
Other solutions might also work, since a switch usually has a finite set of conditions, but an efficient solution is to use a hash map.
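Here is a rough C sketch of that idea; it is a minimal illustration, not how any particular compiler implements switch, and the handlers, table size and probing scheme are all made up:

#include <stdio.h>

/* Hypothetical handlers, one per case value. */
static void do_zero(void) { puts("zero"); }
static void do_one(void)  { puts("one"); }
static void do_two(void)  { puts("two"); }

/* A tiny open-addressed hash table mapping case values to handlers.
   Assumes fewer entries than TABLE_SIZE and no deletions. */
#define TABLE_SIZE 8  /* power of two, so masking works as a cheap hash */

struct entry { int key; void (*handler)(void); };
static struct entry table[TABLE_SIZE];

static void insert(int key, void (*handler)(void))
{
    unsigned i = (unsigned)key & (TABLE_SIZE - 1);
    while (table[i].handler != NULL)          /* linear probing on collision */
        i = (i + 1) & (TABLE_SIZE - 1);
    table[i].key = key;
    table[i].handler = handler;
}

static void dispatch(int key)
{
    unsigned i = (unsigned)key & (TABLE_SIZE - 1);
    while (table[i].handler != NULL) {
        if (table[i].key == key) { table[i].handler(); return; }
        i = (i + 1) & (TABLE_SIZE - 1);
    }
    /* no entry found: this is the "default" case */
}

int main(void)
{
    insert(0, do_zero);
    insert(1, do_one);
    insert(2, do_two);
    dispatch(1);   /* prints "one" */
    return 0;
}

The appeal of this shape is that the lookup cost stays roughly constant no matter how many cases there are.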
In the program below, if the last condition is true we unnecessarily have to check all the conditions before it.
Is there any possibility to implement this with a switch-case?
I have to convert code very similar to this into ARM assembly.
main()
{
    int x;
    if (x < 32768)
        x << 15;
    elseif (x < 49152)
        (x << 12) - 7;
    elseif (x < 53248)
        (x << 11) - 13;
    elseif (x < 59392)
        (x << 10) - 27;
    elseif (x < 60928)
        (x << 9) - 61;
    elseif (x < 62208)
        (x << 8) - 139;
    elseif (x < 64128)
        (x << 7) - 225;
    elseif (x < 65088)
        (x << 6) - 414;
    elseif (x < 65344)
        (x << 5) - 801;
    elseif (x < 65488)
        (x << 4) - 1595;
    elseif (x < 65512)
        (x << 3) - 2592;
    elseif (x < 65524)
        (x << 2) - 4589;
    elseif (x < 65534)
        (x << 1) - 8586;
}
Hope someone will help me.
So first things first: are you concerned about performance? If so, do you have actual profiling data showing that this code is a hot-spot and nothing else shows up on the profile?
I doubt that. In fact, I am willing to bet that you haven't even benchmarked it. You are instead looking at code and trying to micro-optimize it.
If that's the case then the answer is simple: stop doing that. Write your code the way it makes sense to write it and focus on improving the algorithmic efficiency of your code. Let the compiler worry about optimizing things. If the performance proves inadequate then profile and focus on the results of the profiling. Almost always the answer to your performance problems will be: choose a better-performing algorithm. The answer will almost never be "tinker with an if statement".
Now, to answer your question: a switch isn't helpful in this scenario because there's no sane way to represent the concept x < 32768 in a case statement, short of writing one case for every such value of x. Obviously this is neither practical nor sane.
More importantly you seem to operate under the misconception that a switch would translate to fewer comparisons. It's possible in some rare cases for a compiler to be able to avoid comparisons, but most of the time a switch will mean as many comparisons as you have case statements. So if you need to check a variable against 10000 different possible values using a switch, you'll get 10000 comparisons.
In your case, you're checking for way more than 10,000 possible values, so the simple if construct combined with the "less than" operator makes a lot more sense and will be much more efficient than a switch.
You write that if the last condition is true, you unnecessarily have to check all the conditions before it. True, you do. You could rewrite it so that if the last condition were true you would only need two comparisons. But then you'd simply flip the problem on its head: if x < 32768 you'd end up having to check all the other possible values, so you'd be back where you started.
One possible solution would be to perform binary search. This would certainly qualify as an algorithmic improvement, but again without hard data that this is, indeed, a hotspot, this would be a rather silly exercise.
The bottom line is this: write correct code that is both easy to understand and easy to maintain and improve and don't worry about reordering if statements. The excellent answer by perreal shows a good example of simple and easy to understand and maintain code.
And on the topic of writing correct code, there's no such thing as elseif in C and C++. Which brings us to my last point: before micro-optimizing code at least try to run the compiler.
You can't do that with switches since you need to have constant values to compare to x, not boolean conditions. But you can use a struct like this:
struct {
    int u_limit;     /* exclusive upper limit of the range */
    int shift_left;  /* how far to shift x left */
    int add;         /* constant to add afterwards */
} ranges[13] = { {32768, 15, 0}, {49152, 12, -7} /*, ...*/};

for (int i = 0; i < 13; i++) {
    if (x < ranges[i].u_limit) {
        x = (x << ranges[i].shift_left) + ranges[i].add;
        break;
    }
}
Of course, you can replace the linear search with binary search for some speedup.
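A minimal sketch of the binary-search variant, relying on ranges[] being sorted by u_limit (as it is above): find the first range whose upper limit exceeds x.

int lo = 0, hi = 13;
while (lo < hi) {
    int mid = lo + (hi - lo) / 2;
    if (x < ranges[mid].u_limit)
        hi = mid;        /* mid still satisfies x < u_limit; keep it */
    else
        lo = mid + 1;    /* mid's range is too low; discard it */
}
if (lo < 13)             /* lo == 13 means x is beyond the last range */
    x = (x << ranges[lo].shift_left) + ranges[lo].add;

This does at most log2(13), i.e. about 4, comparisons instead of up to 13.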
I have a large switch statement, with about 250 cases, in Visual C:
#define BOP -42
#define COP -823
#define MOP -5759
int getScarFieldValue(int id, int ivIndex, int rayIndex, int scarIndex, int reamIndex)
{
    int returnValue = INT_MAX;
    switch (id)
    {
    case BOP : returnValue = Scar[ivIndex][rayIndex].bop[scarIndex][reamIndex]; break;
    case COP : returnValue = Scar[ivIndex][rayIndex].cop[scarIndex][reamIndex]; break;
    case MOP : returnValue = Scar[ivIndex][rayIndex].mop[scarIndex][reamIndex]; break;
    .....
    default: return(INT_MAX);
    }
    return returnValue;
}
The #defines, you will notice, have a huge range, from -1 to -10,000. The thing is dog slow, and I'm wondering if spending several hours redefining these 250 defines to a narrower (or even consecutive) range could speed things up. I always thought the compiler would treat the case values in a way that made their numeric value irrelevant, but I have not been able to find any discussion to validate/invalidate that assumption.
Disassemble the compiled code and see what the compiler does. I've looked at the output from several different compilers, and large switch statements were always compiled into binary decision trees or jump tables. Jump tables are the most optimal thing you can get, and they are more likely to be generated by the compiler if the values you're switching on are in a narrow range. It also helps to have a default statement on some compilers (but it's not necessary on others).
This is one situation where disassembling is your only good option; the details of code generation at this level are rarely well documented.
Simple solution:
Break the switch into multiple parts:
if (id <= 50)
{
    switch (id)
    {
        // all cases between 0 and 50
    }
}
else if (id > 50 && id <= 100)
{
    switch (id)
    {
        // all cases between 51 and 100
    }
}
// and so on
The choice of range is yours, but don't create too many ranges. This should make the code faster than your current version.
Alternatively you can use function pointers and write functions containing the statements that are to be executed in each case.
I would have preferred this method.
typedef struct
{
    void (*Cur_FunctnPtr)(void);
} CMDS_Functn;

void (*Cur_Func)(void);

CMDS_Functn FunctionArray[66] = {
    /*00-07*/ {Func1}, {Func2}, {Func3}, ...
    /*08-0F*/
    /*40-47*/ };

void my_func()
{
    ... // whatever code
    Cur_Func = FunctionArray[id].Cur_FunctnPtr; // load the function pointer for this id
    (*Cur_Func)();
    ... // whatever code
}
Read the generated code to figure out what the switch compiles to.
If you have a hash table implementation handy, you could try using that, but it will of course require you to extract all the "action" code into something you can jump to from a hashtable lookup result.
If using GCC, I would do a quick test combining GCC's computed goto with a simple sorted array so you can use good old binary search. The latter will cut the number of comparisons done by your code from an average of 250/2 down to log2(250), i.e. around 8.
This will require a look-up table declared at compile-time (and perhaps sorted, once, at run-time), which probably is better in terms of memory overhead than most hashtables will manage.
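A minimal sketch of that combination, assuming GCC's labels-as-values extension and using just the three ids visible in the question; the function name and the handler bodies are placeholders:

#include <limits.h>

#define BOP -42
#define COP -823
#define MOP -5759

int getFieldValue(int id)
{
    /* Case values sorted ascending, with the matching handler labels.
       GCC allows &&label in a static initializer inside the function. */
    static const int ids[] = { MOP, COP, BOP };  /* -5759, -823, -42 */
    static const void *targets[] = { &&mop, &&cop, &&bop };

    int lo = 0, hi = 3;     /* with 250 ids this is ~log2(250) = 8 probes */
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if (ids[mid] < id)
            lo = mid + 1;
        else
            hi = mid;
    }
    if (lo == 3 || ids[lo] != id)
        return INT_MAX;     /* the default case */
    goto *targets[lo];      /* one indirect jump to the right handler */

mop: return 0;              /* placeholder for the real mop lookup */
cop: return 1;              /* placeholder for the real cop lookup */
bop: return 2;              /* placeholder for the real bop lookup */
}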
If you look at the assembly output for the code, you will probably notice that your switch statement is being compiled into code that resembles cascading if statements:
if (id == BOP) ...
else if (id == COP) ...
else if (id == MOP) ...
...
else ...
Because of this, a simple tip to speed up the switch statement is to move the most commonly hit cases near the top.
If the case values are sorted, then the compiler may be able to generate a binary decision tree, reducing the complexity from linear to logarithmic.
At a high enough optimization level on a compiler that supports it, the compiler may be able to generate a computed goto style code. For non-consecutive values, the offset to jump to would be stored in a hash table, and a perfect hash function is generated for the case values. For consecutive values, there is no need for a hash function, as a simple indexed array can be used to store the jump offsets. You would have to check the assembler output for the optimized code to see if your compiler supports this functionality.
Otherwise, it may be better to create your own hash on the case value, and instead of using switch, you would do your own hash table lookup to find the right matrix to use, and then acquire your value.
Maybe you should use a hash table; then you can do a hash lookup instead of the "switch case".
If you know the characteristics of the distribution of likely id values, test for them in most-likely to least-likely order in your case statement.
If this gets called frequently, you might want to consider storing the choices in a Dictionary: they get resolved without serial comparisons, and thus might save a lot of time if there are really 10,002 choices.
Your problem is that the range of the IDs is not consecutive. No compiler can do better on that than with a cascade of conditions of logarithmic depth, here about 8.
One way to remedy this would be to use an enum that makes the IDs consecutive; the compiler can then use a jump table to speed things up. To really know if that would work, you'd have to check the rest of your application to see if it tolerates changing the values.
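A sketch of that idea, reusing the names from the question; the concrete values and the FIELD_ID_COUNT sentinel are my additions:

/* Consecutive ids starting at 0; a dense switch over these can become
   a single jump table. */
enum FieldId {
    BOP,              /* 0 */
    COP,              /* 1 */
    MOP,              /* 2 */
    /* ... the remaining ~247 ids, consecutively ... */
    FIELD_ID_COUNT    /* handy for table sizes and range checks */
};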
The compiler will only optimise using techniques it's aware of, and if none of those techniques work then you'll get something terrible.
You could either implement something yourself, or you could try to give the compiler some clues. In the latter case you have a fair chance of the compiler "getting it" and then optimising the solution further than your own implementation can -- and the compiler can avoid C syntax limitations which would constrain your own solution.
As for solutions; obviously the best is to renumber them to be consecutive.
Another approach is to take your 250 values and search for a perfect hash function to reduce them to an 8-bit quantity.
#define PERFECT_HASH(x) ((x) & 0xff) /* some clever function of x */
switch (PERFECT_HASH(id))
{
case PERFECT_HASH(BOP): returnValue = Scar[ivIndex][rayIndex].bop[scarIndex][reamIndex]; break;
case PERFECT_HASH(COP): returnValue = Scar[ivIndex][rayIndex].cop[scarIndex][reamIndex]; break;
case PERFECT_HASH(MOP): returnValue = Scar[ivIndex][rayIndex].mop[scarIndex][reamIndex]; break;
.....
default: return(INT_MAX);
}
But, having cut and pasted that code, it looks like you're using a switch statement to convert id into what is effectively a pointer to different pieces of the data structure. If all the cases contain a single read of the same kind of pointer, then you definitely don't want to use a switch for that. You need to look at the shape of your data and find a way to compute that pointer more directly. Or switch simply on the type and compute the address separately.
The question is really simple: in a laboratory class I attended this year, the professor presented the switch/case statement alongside the classic if/then/else statement without saying anything about which one is better in different programming situations.
Which one is better when checking a variable which can have at least 10/15 possible values?
Briefly (your question is vague): a switch often compiles to a jump table in assembler and is therefore faster than if/then/else. Note that a switch statement in C has a "fall-through" feature (google this) which can be circumvented with break statements.
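A minimal sketch of that fall-through behaviour and how break stops it; classify, handle_a and handle_b are hypothetical names:

#include <stdio.h>

static void handle_a(void) { puts("a or A"); }
static void handle_b(void) { puts("b"); }

static void classify(int c)
{
    switch (c) {
    case 'a':            /* no break here: 'a' falls through to 'A' */
    case 'A':
        handle_a();
        break;           /* without this break, we would fall into case 'b' */
    case 'b':
        handle_b();
        break;
    default:
        break;
    }
}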
You can only switch on things that evaluate to integral types. In particular this means that you cannot switch on strings: strings are not part of the natural C language in any case.
An if / then / else checks several conditions in succession. Comparison is not restricted to integral types as all you're testing is true (not zero) or false (zero).
That's probably enough to get you started.
I think if/then/else is better only when you have just two conditions. Otherwise, when there are more than two conditions, it's better to use a switch.
When the value to be compared has a type that is amenable to being switch'd, and it makes your code more readable, then go ahead and use a switch. For example,
if (val == 0) {
// do something
} else if (val == 1) {
// do something else
} else if (val == 2) {
// yet another option
} ...
is cluttered and hard to maintain compared to a switch. Imagine that some day, you don't want to switch on val but on validate(val); then you'd need to change all the conditions.
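For comparison, a sketch of the same chain as a switch:

switch (val) {
case 0:
    // do something
    break;
case 1:
    // do something else
    break;
case 2:
    // yet another option
    break;
// ...
}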
Also, switch may be faster than if/else sometimes, because a compiler may turn it into either a jump table or a binary search. Then again, a compiler might do the same to a series of if/else statements, although that's a more difficult optimization to make because the order of the clauses might matter and the compiler must be able to detect that it doesn't.
switch is better performance-wise too because it can be optimized in various ways by the compiler, depending on whether the values are consecutive. If yes, it can outright use the value as an index to an array of pointers. If not, it can sometimes use a binary search instead of a linear search, when it's faster.
switch looks better than lots of ifs. However it only works on numeric expressions (as a char is essentially a number, it can still be applied to it, however you cannot use it with strings).
If I may point you here, as it has a nice description of the switch statement. Note the opening sentence:
Switch case statements are a substitute for long if statements that
compare a variable to several "integral" values ("integral" values are
simply values that can be expressed as an integer, such as the value
of a char).
I am writing a timetable generator in Java, using AI approaches to satisfy the hard constraints and help find an optimal solution. So far I have implemented an iterative construction (a most-constrained-first heuristic) and simulated annealing, and I'm in the process of implementing a genetic algorithm.
Some info on the problem, and how I represent it:
I have a set of events, rooms, features (that events require and rooms satisfy), students and slots.
The problem consists in assigning to each event a slot and a room, such that no student is required to attend two events in one slot and all the rooms assigned fulfill the necessary requirements.
I have a grading function that, for each set of assignments, grades the soft constraint violations; the point is thus to minimize this.
The way I am implementing the GA is I start with a population generated by the iterative construction (which can leave events unassigned) and then do the normal steps: evaluate, select, cross, mutate and keep the best. Rinse and repeat.
My problem is that my solution appears to improve too little. No matter what I do, the population tends to a random fitness and gets stuck there. Note that this fitness differs between runs, but a lower limit nevertheless appears.
I suspect that the problem is in my crossover function, and here is the logic behind it:
Two assignments are randomly chosen to be crossed. Let's call them assignments A and B. For all of B's events do the following procedure (the order in which B's events are selected is random):
Get the corresponding event in A and compare the assignments. Three different situations might happen.
If only one of them is unassigned and if it is possible to replicate
the other assignment on the child, this assignment is chosen.
If both of them are assigned, but only one of them creates no
conflicts when assigning to the child, that one is chosen.
If both of them are assigned and neither creates a conflict, one of them is randomly chosen.
In any other case, the event is left unassigned.
This creates a child with some of one parent's assignments and some of the other's, so it seems to me it is a valid function. Moreover, it does not break any hard constraints.
As for mutation, I am using the neighboring function of my SA to give me another assignment based on one of the children, and then replacing that child.
So again: with this setup and an initial population of 100, the GA runs and always tends to stabilize at some random (high) fitness value. Can someone give me a pointer as to what I could possibly be doing wrong?
Thanks
Edit: formatting, and cleared some things up.
I think a GA only makes sense if part of the solution (part of the vector) has significance as a stand-alone part of the solution, so that the crossover function integrates valid parts of a solution between two solution vectors. Much like a certain part of a DNA sequence controls or affects a specific aspect of the individual - eye color is one gene, for example. In this problem, however, the different parts of the solution vector affect each other, making the crossover almost meaningless. This results (my guess) in the algorithm converging on a single solution rather quickly, with the different crossovers and mutations having only a negative effect on the fitness.
I don't believe GA is the right tool for this problem.
If you could please provide the original problem statement, I will be able to give you a better solution. Here is my answer for the moment.
A genetic algorithm is not the best tool to satisfy hard constraints. This is an assignment problem that can be solved using an integer program, a special case of a linear program.
Linear programs allow users to minimize or maximize some goal modeled by an objective function (your grading function). The objective function is the sum of individual decisions (decision variables), each weighted by its value or contribution to the objective. Linear programs allow your decision variables to take decimal values, but integer programs force the decision variables to be integer values.
So, what are your decisions? Your decisions are to assign students to slots. And these slots have features which events require and rooms satisfy.
In your case, you want to maximize the number of students that are assigned to a slot.
You also have constraints. In your case, a student may only attend at most one event.
The website below provides a good tutorial on how to model integer programs.
http://people.brunel.ac.uk/~mastjjb/jeb/or/moreip.html
For a java specific implementation, use the link below.
http://javailp.sourceforge.net/
SolverFactory factory = new SolverFactoryLpSolve(); // use lp_solve
factory.setParameter(Solver.VERBOSE, 0);
factory.setParameter(Solver.TIMEOUT, 100); // set timeout to 100 seconds
/**
* Constructing a Problem:
* Maximize: 143x+60y
* Subject to:
* 120x+210y <= 15000
* 110x+30y <= 4000
* x+y <= 75
*
* With x,y being integers
*
*/
Problem problem = new Problem();
Linear linear = new Linear();
linear.add(143, "x");
linear.add(60, "y");
problem.setObjective(linear, OptType.MAX);
linear = new Linear();
linear.add(120, "x");
linear.add(210, "y");
problem.add(linear, "<=", 15000);
linear = new Linear();
linear.add(110, "x");
linear.add(30, "y");
problem.add(linear, "<=", 4000);
linear = new Linear();
linear.add(1, "x");
linear.add(1, "y");
problem.add(linear, "<=", 75);
problem.setVarType("x", Integer.class);
problem.setVarType("y", Integer.class);
Solver solver = factory.get(); // you should use this solver only once for one problem
Result result = solver.solve(problem);
System.out.println(result);
/**
* Extend the problem with x <= 16 and solve it again
*/
problem.setVarUpperBound("x", 16);
solver = factory.get();
result = solver.solve(problem);
System.out.println(result);
// Results in the following output:
// Objective: 6266.0 {y=52, x=22}
// Objective: 5828.0 {y=59, x=16}
I would start by measuring what's going on directly. For example, what fraction of the assignments are falling under your "any other case" catch-all and therefore doing nothing?
Also, while we can't really tell from the information given, it doesn't seem any of your moves can do a "swap", which may be a problem. If a schedule is tightly constrained, then once you find something feasible, it's likely that you won't be able to just move a class from room A to room B, as room B will be in use. You'd need to consider ways of moving a class from A to B along with moving a class from B to A.
You can also sometimes improve things by allowing constraints to be violated. Instead of forbidding crossover from ever violating a constraint, you can allow it, but penalize the fitness in proportion to the "badness" of the violation.
Finally, it's possible that your other operators are the problem as well. If your selection and replacement operators are too aggressive, you can converge very quickly to something that's only slightly better than where you started. Once you converge, it's very difficult for mutations alone to kick you back out into a productive search.
I think there is nothing wrong with GA for this problem; some people just hate genetic algorithms no matter what.
Here is what I would check:
First you mention that your GA stabilizes at a random "high" fitness value, but isn't this a good thing? Does "high" fitness correspond to good or bad in your case? It is possible you are favoring "high" fitness in one part of your code and "low" fitness in another, thus causing the seemingly random result.
I think you want to be a bit more careful about the logic behind your crossover operation. Basically, there are many situations in all 3 cases where making any of those choices would not increase the fitness of the crossed-over individual at all, yet you are still using a "resource" (an assignment that could potentially be used for another class/student/etc.). I realize that a GA traditionally will make assignments via crossover that cause worse behavior, but you are already performing a bit of computation in the crossover phase anyway, so why not choose one that actually will improve fitness, or maybe not cross at all?
Optional comment to consider: although your iterative construction approach is quite interesting, it may give you an overly complex gene representation that could be causing problems with your crossover. Is it possible to model a single individual solution as an array (or 2D array) of bits or integers? Even if the array turns out to be very long, it may be worth it to use a simpler crossover procedure. I recommend Googling "ga gene representation time tabling"; you may find an approach that you like more and that can more easily scale to many individuals (100 is a rather small population size for a GA, but I understand you are still testing; also, how many generations?).
One final note, I am not sure what language you are working in but if it is Java and you don't NEED to code the GA by hand I would recommend taking a look at ECJ. Maybe even if you have to code by hand, it could help you develop your representation or breeding pipeline.
Newcomers to GA can make any of a number of standard mistakes:
In general, when doing crossover, make sure that the child has some chance of inheriting that which made the parent or parents winner(s) in the first place. In other words, choose a genome representation where the "gene" fragments of the genome have meaningful mappings to the problem statement. A common mistake is to encode everything as a bitvector and then, in crossover, to split the bitvector at random places, splitting up the good thing the bitvector represented and thereby destroying the thing that made the individual float to the top as a good candidate. A vector of (limited) integers is likely to be a better choice, where integers can be replaced by mutation but not by crossover. Not preserving something (doesn't have to be 100%, but it has to be some aspect) of what made parents winners means you are essentially doing random search, which will perform no better than linear search.
In general, use much less mutation than you might think. Mutation is there mainly to keep some diversity in the population. If your initial population doesn't contain anything with a fractional advantage, then your population is too small for the problem at hand and a high mutation rate will, in general, not help.
In this specific case, your crossover function is too complicated. Do not ever put constraints aimed at keeping all solutions valid into the crossover. Instead the crossover function should be free to generate invalid solutions and it is the job of the goal function to somewhat (not totally) penalize the invalid solutions. If your GA works, then the final answers will not contain any invalid assignments, provided 100% valid assignments are at all possible. Insisting on validity in the crossover prevents valid solutions from taking shortcuts through invalid solutions to other and better valid solutions.
I would recommend anyone who thinks they have written a poorly performing GA to conduct the following test: Run the GA a few times, and note the number of generations it took to reach an acceptable result. Then replace the winner selection step and goal function (whatever you use - tournament, ranking, etc) with a random choice, and run it again. If you still converge roughly at the same speed as with the real evaluator/goal function then you didn't actually have a functioning GA. Many people who say GAs don't work have made some mistake in their code which means the GA converges as slowly as random search which is enough to turn anyone off from the technique.