Interview: function pointers vs switch-case in C

During my interview, I was asked to implement a state machine for a system with 100 states, where each state in turn has 100 events. I answered with the following three approaches:
if-else
switch-case
function pointers
If-else is obviously not suited to such a state machine, so the main comparison was between switch-case and function pointers. Here is the comparison as I understand it:
Speed-wise, both are almost the same.
Switch-case is less modular than function pointers.
Function pointers have more memory overhead.
Could someone confirm whether the above understanding is correct?

There might be a variant of the function pointer approach: a struct which includes a function pointer as well as other information, so you can let one function handle several cases.
Besides this, I think you are right. I would also consider the memory and speed overhead worth examining, but it will hopefully be small enough to ignore in the end.
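A minimal sketch of that struct variant, with purely illustrative names: each table entry carries the next state plus a function pointer, and because the handler receives the event, one function can serve several events:

    #include <stdio.h>

    typedef struct {
        int next_state;              /* state to enter after handling       */
        void (*handler)(int event);  /* shared by events that behave alike  */
    } transition_t;

    static void ignore(int event)     { (void)event; }
    static void log_and_go(int event) { printf("event %d accepted\n", event); }

    /* Two different events reuse log_and_go; a third is ignored. */
    static const transition_t state0[3] = {
        { 1, log_and_go },  /* event 0 */
        { 1, log_and_go },  /* event 1 */
        { 0, ignore     },  /* event 2 */
    };

    int main(void) {
        int event = 1;
        state0[event].handler(event);     /* one function, several events */
        return state0[event].next_state;  /* caller adopts the next state */
    }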

I don't know what your interviewers wanted to hear, and I hope this is not too off topic, but if I were interviewing someone I would give points for knowing the pros and cons of existing frameworks before justifying rolling your own, especially at that scale.
C++ alternatives (if you can use them; thanks to glglgl for pointing out that you seem to want C) would be:
Boost.MSM, although blazingly fast, is out of the question at that scale, because of compile times, mpl::vector/list constraints, and because you would end up with one gigantic source file.
Boost.Statecharts can work with 100 states, but 100 events per state would max out the mpl::vector/list constraints. Personally, if I had 100 events in a state I would try to group them anyway and use custom reactions, but that obviously depends on the application.
I don't see any reason why Qt's state machine wouldn't scale that big (please correct me if I'm wrong), but it's orders of magnitude slower, so I never use it.
The only good C alternative I know of is:
QP, which is available in C and C++, can scale that big, has good organization, and is "more than a state machine" in that it handles event queues, concurrency, memory management, etc. Rolling your own may yield better performance (depending on your skill and how much time you put into it), but it should be noted that the memory management of the events will probably end up needing more optimization than the state machine implementation itself. QP does this for you, and quite well.

You could give more detail about your states and events.
Assuming your states are contiguous integer numbers, you can either:
Write a table that contains, for each state, its handler function.
When an event is received, index into this table and call the corresponding handler function.
Or, for each state, write a table that maps every event to its handler function, and look it up when processing an event in that state.
The time complexity of both table lookups is O(1), and the space complexity is O(m*n).
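For example, a minimal sketch of the two-dimensional table in C (the sizes and handlers are placeholders standing in for the real 100x100 sets):

    #include <stdio.h>

    #define N_STATES 3  /* stand-in for the 100 states */
    #define N_EVENTS 3  /* stand-in for the 100 events */

    typedef int (*handler_t)(int state, int event);  /* returns next state */

    static int stay(int s, int e)    { (void)e; return s; }
    static int advance(int s, int e) { (void)e; return (s + 1) % N_STATES; }

    /* One O(1) lookup: table[state][event] -> handler.
       Space is O(m*n), as noted above. */
    static const handler_t table[N_STATES][N_EVENTS] = {
        { advance, stay,    stay    },
        { stay,    advance, stay    },
        { stay,    stay,    advance },
    };

    static int dispatch(int state, int event) {
        return table[state][event](state, event);
    }

    int main(void) {
        int s = 0;
        s = dispatch(s, 0);  /* -> 1 */
        s = dispatch(s, 1);  /* -> 2 */
        printf("final state: %d\n", s);
        return 0;
    }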
However, how did you end up with an FSM of 100 states and 100 event types?
I suggest you simplify your FSM design; the 1~100 number may just be a parameter of one particular event.

Related

Reinforcement Learning in a Dynamic Environment with a large state-action space

I have a 500*500 grid with 7 different penalty values. I need to make an RL agent whose action space contains 11 actions (left, right, up, down, the 4 diagonal directions, speed up, speed down, and normal speed). How can I solve this problem?
The probability that the chosen action is actually performed is 0.8; otherwise a random action is selected. Also, the penalty values can change dynamically.
Take a look at this chapter by Sutton (incompleteideas.net/sutton/book/ebook/node15.html), especially his experiments in the later sections. Your problem seems similar to the n-armed bandit in that each of the arms returns a normal distribution of reward. While this chapter mostly focuses on exploration, the problem applies.
Another way to look at it: if your states really return a normal distribution of penalties, you will need to explore the domain enough to estimate the mean of each (state, action) pair. These means form Q*, which gives you the optimal policy.
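To make the incremental-mean idea concrete, here is a bandit-style sketch in C; the action count matches the question, but the reward function and every name here are placeholders, not the actual environment:

    #include <stdio.h>
    #include <stdlib.h>

    #define N_ACTIONS 11  /* the 11 actions from the question */

    static double q[N_ACTIONS];  /* running mean reward per action */
    static int    n[N_ACTIONS];  /* sample count per action        */

    /* Epsilon-greedy: mostly exploit the best estimate, sometimes explore. */
    static int pick_action(double epsilon) {
        if ((double)rand() / RAND_MAX < epsilon)
            return rand() % N_ACTIONS;  /* explore */
        int best = 0;
        for (int a = 1; a < N_ACTIONS; a++)
            if (q[a] > q[best]) best = a;
        return best;                    /* exploit */
    }

    /* Incremental sample average: q converges to the mean reward. */
    static void update(int a, double reward) {
        n[a]++;
        q[a] += (reward - q[a]) / n[a];
    }

    int main(void) {
        for (int step = 0; step < 1000; step++) {
            int a = pick_action(0.1);
            double reward = -(double)(rand() % 7);  /* placeholder penalty */
            update(a, reward);
        }
        printf("estimated Q for action 0: %f\n", q[0]);
        return 0;
    }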
As a follow-up, if the state space is too large or continuous, it may be worth looking into generalization with a function approximator. While the same convergence rules apply, there are cases where function approximation runs into issues. I would say that is beyond the scope of this discussion, though.

Should I represent database data with immutable or mutable data structures?

I'm currently programming in Scala, but I guess this applies to any functional programming language, or rather, any programming language that recommends immutability and can interact with a database.
When I fetch data from my database, I map it to a model data structure. In functional programming, data structures tend to be immutable. But the data in a database is mutable, so I wonder whether or not my model should be mutable as well. In general, what would be a good and well-accepted practice in such a case?
Following Scala courses by Martin Odersky on Coursera, I remember he said something like:
It's better to use immutable data structures, but when you want to interact with the real world, it can be useful to use mutable data structures.
So, again, I wonder what I should do. As of now, my data structures are immutable, and this leads to a lot of boilerplate code when I want to update a record in my database. Would using a mutable model help reduce this boilerplate?
(I already asked a similar question which was quite specific to the technologies I use, but I wasn't satisfied with the actual answers, so I've generalized it here.)
Why is a database mutable? Is it a fundamental property of databases to be mutable? The relational model, and using it as a persistence store for your application data, might steer you towards this conclusion, but it may not be a fundamental property.
Given that you may have other options such as storing a new version of your data when you update it, perhaps the premise of the question is undermined somewhat. Perhaps, even if you do have a 'mutable' database, you still need to provide a new value for the update function that is separate from the old value – consider for instance an optimistic lock where the update should only occur if the old value has not in the meantime changed.
In other words, the mutability or otherwise of the database should not matter at all, you are dealing with a separate domain layer in your application. If you need to ask then the answer will always be immutable. Mutability is a complexity vector that experts should only introduce as a performance optimisation when it has been demonstrated to be necessary.
In the trading app I'm currently working on, almost everything is immutable - certainly the model is.
Our experience is that this has greatly simplified how we work with the model, including persistence.
I don't yet understand why things have become simpler; they just have. I need to ponder this more. Reasoning about the code and working with it is simpler.
Yes, you need to use things like lenses but I tend to write them - a mechanical process - and move on. It's a tiny part which I am sure can be finessed.
"Interacting with the real world" has nothing to do with whether you use mutable or immutable data structures. This is a furfy that is repeated all too often and it is great that you have questioned it.
While it is typically more healthy to dismiss garbage like this, you might be interested in a cursory debunking:
http://blog.higher-order.com/blog/2012/09/13/what-purity-is-and-isnt/
However, I strongly recommend dismissing it and moving on.
Onto your question, you say you have boilerplate when you want to perform operations on your immutable data structures. In fact, there is very well established theory that solves this problem to a large extent. Here is a paper written about it using Scala:
http://dropbox.tmorris.net/media/doc/lenses.pdf
Hope that helps.
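Language aside, the boilerplate being discussed has a simple shape: every "update" builds a modified copy instead of mutating in place, and a lens packages the getter/setter pair that automates this. A minimal sketch of the copy-with-update pattern, written in C for concreteness (the person type is purely illustrative):

    #include <stdio.h>

    typedef struct { const char *name; int age; } person_t;

    /* The immutable-update pattern: never mutate, return a modified copy.
       (p arrives by value, so it is already a private copy.) */
    static person_t with_age(person_t p, int age) {
        p.age = age;
        return p;
    }

    int main(void) {
        person_t alice = { "Alice", 30 };
        person_t older = with_age(alice, 31);
        printf("%s: %d -> %d\n", alice.name, alice.age, older.age);
        return 0;
    }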

Fastest conversion of MFC CArray<int> to List<int> in Managed C++

We are currently in the process of changing a lot of code as we move from a 'single system' application to one that can farm out tasks to distributed processing nodes. The existing code is in a mix of unmanaged and now managed C++ code, but is also using C# code to provide a WCF interface between node and controller.
As a result of this move, a common code pattern I'm seeing which is likely to remain for the foreseeable future is a basic conversion of integer ID values from an MFC CArray to a managed List to enable serialisation over WCF. The current pattern is:
// aFixtureIds is of MFC type CArray<int, int>
const int count = aFixtureIds.GetCount();
List<int>^ fixtures = gcnew List<int>(count);  // preallocate capacity up front
for (int i = 0; i < count; i++)  // avoid re-calling GetCount() each iteration
{
    fixtures->Add(aFixtureIds[i]);
}
We also use something similar in reverse, where if a List is returned we may convert it to a CIntArray for the calling function by iterating through it in a loop and calling Add.
I appreciate that the above doesn't look very intensive but it does get called a lot - is there a better pattern for performing this basic List<->CArray conversion that would use up less processing time? Is this the kind of code that can be optimised effectively by the compiler (I suspect not but am willing to be corrected)? List sizes vary but will typically be anything from 1 to tens of thousands of items, potentially more.
A few suggestions although a lot of it will depend on your application details:
Measure: I suppose it gets old that every SO question on performance has people just saying "measure first", but this is usually (always?) good advice, particularly in this case. I fear that you're drastically overestimating how much time such a conversion takes on a modern desktop computer. For example, on my pokey 4-year-old desktop, a quick test shows I can add 100 million integers to a CArray in just 250 ms. I would expect roughly the same performance in C# for the List. This means that your 10,000-element list would take around 25 microseconds to run. Now, obviously, if you are doing this thousands of times a second or for billions of elements it becomes an issue, although the solution in those cases would not be a faster copy loop but a better design.
Update When Needed: Only update the arrays when you actually need the information. If you are updating frequently just because you 'might' need it you will likely be wasting conversions that are never used.
Synchronize Updates: Instead of updating the entire array at a time, update the elements in both copies of the list at the same time.
Just Use One List: While it sounds like you can't do this at least consider just using one list instead of two copies of the same information. Hide the element access in a function/method/class so you don't need to know whether the data is stored in a CArray<> or a List<>.
Better Design/Algorithm: If you measure and you do find that a lot time is being spent in this conversion then you're probably going to get a far better return by improving the code design to reduce or eliminate the conversions needed. Off-hand, there's probably not a whole lot of improvement possible in just copying a CArray<> to a List<>.

Best way to automate testing of AI algorithms?

I'm wondering how people test artificial intelligence algorithms in an automated fashion.
One example would be the Turing test: say there were a number of submissions for a contest. Is there any conceivable way to score candidates in an automated fashion, other than just having humans test them out?
I've also seen some data sets (obscured images of numbers/letters, groups of photos, etc.) that can be fed in and learned over time. What good resources are out there for this?
One challenge I see: you don't want an algorithm that tailors itself to the test data over time, since you are trying to see how well it does in the general case. Are there any techniques to ensure it doesn't do this, such as giving it a random test each time, or averaging its results over a bunch of random tests?
Basically, given a bunch of algorithms, I want some automated process to feed it data and see how well it "learned" it or can predict new stuff it hasn't seen yet.
This is a complex topic - good AI algorithms are generally the ones which generalize well to "unseen" data. The simplest method is to have two datasets: a training set and an evaluation set used for measuring performance. But generally, you want to "tune" your algorithm, so you may want 3 datasets: one for learning, one for tuning, and one for evaluation. What defines tuning depends on your algorithm, but a typical example is a model where you have a few hyper-parameters (for example, parameters in your Bayesian prior under the Bayesian view of learning) that you would like to tune on a separate dataset. The learning procedure would already have set a value for them (or maybe you hardcoded their values), but having enough data may help so that you can tune them separately.
As for making those separate datasets, there are many ways to do so, for example by dividing the data you have available into subsets used for different purposes. There is a tradeoff to be made because you want as much data as possible for training, but you want enough data for evaluation too (assuming you are in the design phase of your new algorithm/product).
A standard method to do so in a systematic way from a known dataset is cross validation.
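For illustration, a minimal k-fold split over example indices, sketched in C; the dataset size and fold count are placeholders:

    #include <stdio.h>
    #include <stdlib.h>

    #define N_EXAMPLES 10
    #define K_FOLDS    5

    int main(void) {
        int idx[N_EXAMPLES];
        for (int i = 0; i < N_EXAMPLES; i++) idx[i] = i;

        /* Fisher-Yates shuffle so fold membership is random. */
        for (int i = N_EXAMPLES - 1; i > 0; i--) {
            int j = rand() % (i + 1);
            int tmp = idx[i]; idx[i] = idx[j]; idx[j] = tmp;
        }

        /* Each fold takes a turn as the held-out evaluation set;
           the remaining indices form the training set. */
        for (int fold = 0; fold < K_FOLDS; fold++) {
            printf("fold %d holds out:", fold);
            for (int i = fold; i < N_EXAMPLES; i += K_FOLDS)
                printf(" %d", idx[i]);
            printf("\n");
        }
        return 0;
    }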
Generally when it comes to this sort of thing you have two datasets - one large "training set" which you use to build and tune the algorithm, and a separate smaller "probe set" that you use to evaluate its performance.
#Anon has the right of things - training and what I'll call validation sets. That noted, the bits and pieces I see about developments in this field point at two things:
Bayesian Classifiers: there's something like this probably filtering your email. In short you train the algorithm to make a probabilistic decision if a particular item is part of a group or not (e.g. spam and ham).
Multiple Classifiers: this is the approach that the winning group in the Netflix challenge took, whereby it's not about optimizing one particular algorithm (e.g. Bayesian, genetic programming, neural networks, etc.) but about combining several to get a better result.
As for data sets, Weka has several available. I haven't explored other libraries for data sets, but mloss.org appears to be a good resource. Finally, data.gov offers a lot of sets that provide some interesting opportunities.
Training data sets and test sets are very common for k-means and other clustering algorithms, but to have something that's artificially intelligent without supervised learning (that is, without a training set), you are building a "brain", so to speak, based on, for example:
In chess: all possible future states reachable from the current game state.
In most AI learning (reinforcement learning) you have a problem where the "agent" is trained by playing the game over and over. Basically, you ascribe a value to every state. Then you assign an expected value to each possible action in a state.
So say you have S states and A actions per state (although you might have more possible moves in one state and fewer in another); you want to figure out the most valuable states to be in, and the most valuable actions to take.
In order to figure out the value of states and their corresponding actions, you have to play the game through repeatedly. Probabilistically, a certain sequence of states will lead to victory or defeat, and you learn which states lead to failure; these are "bad states". You also learn which ones are more likely to lead to victory; these are subsequently "good" states. Each gets a mathematical value associated with it, usually as an expected reward.
Reward from second-last state to a winning state: +10
Reward if entering a losing state: -10
So states that produce negative rewards propagate those negative rewards backwards: to the state that led to the second-to-last state, then to the state that led to the third-to-last state, and so on.
Eventually, you have a mapping of expected reward based on which state you're in, and based on which action you take. You eventually find the "optimal" sequence of steps to take. This is often referred to as an optimal policy.
Conversely, the normal courses of action that you step through while deriving the optimal policy are called simply policies; in Q-learning you are always implementing some "policy".
Usually, the way the reward is determined is the interesting part. Suppose I reward you for each state transition that does not lead to failure. Then the value of walking through all the states until termination is however many increments I made, that is, however many state transitions I had.
If certain states have extremely low value, loss is easy to avoid, because almost all bad states are avoided.
However, you don't want to discourage discovery of new, potentially more efficient paths that don't follow just the one route that works. So you want to reward and punish the agent in such a way as to ensure "victory" or "keeping the pole balanced" or whatever for as long as possible. But you don't want to be stuck at local maxima and minima where, because failure is too painful, no new, unexplored routes will be tried. (Although there are many approaches in addition to this one.)
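To tie the description above to code, here is a compressed Q-learning sketch in C; the state count, rewards, and transitions are made-up placeholders rather than any particular game:

    #include <stdio.h>

    #define N_STATES  4
    #define N_ACTIONS 2

    static double Q[N_STATES][N_ACTIONS];  /* expected reward per (state, action) */

    static double max_q(int s) {
        double m = Q[s][0];
        for (int a = 1; a < N_ACTIONS; a++)
            if (Q[s][a] > m) m = Q[s][a];
        return m;
    }

    /* One learning step: value propagates backwards, each state inheriting
       discounted value from its successor, as described above. */
    static void q_update(int s, int a, double reward, int s_next,
                         double alpha, double gamma) {
        Q[s][a] += alpha * (reward + gamma * max_q(s_next) - Q[s][a]);
    }

    int main(void) {
        /* Reaching the winning state (3) from the second-last state (2)
           pays +10; entering a losing state would pay -10, as above. */
        for (int episode = 0; episode < 100; episode++) {
            q_update(2, 0, 10.0, 3, 0.1, 0.9);  /* second-last -> winning */
            q_update(1, 0,  0.0, 2, 0.1, 0.9);  /* value flows backwards  */
            q_update(0, 0,  0.0, 1, 0.1, 0.9);
        }
        printf("Q[0][0]=%.2f Q[1][0]=%.2f Q[2][0]=%.2f\n",
               Q[0][0], Q[1][0], Q[2][0]);
        return 0;
    }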
So when you ask "how do you test AI algorithms", the best part is that the testing itself is how many "algorithms" are constructed. The algorithm is designed to test a certain course of action (policy). It's much more complicated than
"turn left every half mile"
it's more like
"turn left every half mile if I have turned right 3 times and then turned left 2 times and had a quarter in my left pocket to pay fare... etc etc"
It's very precise.
So the testing is usually actually how the A.I. is being programmed. Most models are just probabilistic representations of what is probably good and probably bad. Calculating every possible state is easier for computers (we thought!) because they can focus on one task for very long periods of time and how much they remember is exactly how much RAM you have. However, we learn by affecting neurons in a probabilistic manner, which is why the memristor is such a great discovery -- it's just like a neuron!
You should look at neural networks; they're mindblowing. The first time I read about making a "brain" out of a matrix of fake-neuron synaptic connections, a brain that can "remember", it basically rocked my universe.
A.I. research is mostly probabilistic because we don't know how to make "thinking"; we just know how to imitate our own inner learning process of try, try again.

Explaining benefits of an array to a lay person?

I develop code in our proprietary system using a scripting language that is unique to that system.
Our director has allowed us to request enhancements to this language, which currently lacks user definable arrays.
I need to write a concept brief on why we need arrays and how they can benefit us, however I need to explain it in a fashion that someone who has no understanding of code will understand.
I'm a programmer, therefore I suck at documentation and explaining things in a non-technical manner. I tried banging my head on the desk to see if anything useful would come out but it hasn't. Can anyone help?
I love analogies.
It's much easier to have a 100-DVD holder that sits neatly on your floor and holds 100 DVDs in order than 100 individual DVDs scattered around your house wherever you last used them.
Especially relevant when you need to move the collection from one place to another or share it with a friend.
What's your application area? To speak the users' language you need to know that. Suppose it's stock trading: then what to you is an array, to the users may be a portfolio -- get the quotes for several stocks at once rather than having to do it repeatedly for one at a time. If your application area is CRM, then the array will let the users check on a group of customers at once, rather than one at a time. And so on, and so forth.
In every application area there will be cases in which users may want to deal with a bunch of things at once, it being easier than dealing with one thing at a time. Phrase it in the appropriate vocabulary, and you have the case for arrays!
You might want to see if you can move the business away from your custom scripting environment and into a standard scripting environment like Lua or Python. You might be surprised at how much easier it is to get Lua up and running than it is to:
Support an in-house system
Create tools for it (do you have an IDE?)
Train new programmers in it
Live without modern features that you lack the time/skills to implement.
Key to getting that to happen would be to make Lua interoperable with your standard scripting system or writing a translation from your old scripts to Lua scripts.
The benefit is that it makes the code shorter, and thus less money is spent coding and debugging. You can then present some example code that you could have made shorter had the language supported arrays, along the lines of the sketch below.
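For instance, a sketch of such a comparison (in C here; the proprietary language will differ, but the shape of the argument is the same):

    #include <stdio.h>

    int main(void) {
        /* Without arrays: one variable and one statement per item. */
        int sale1 = 120, sale2 = 95, sale3 = 210;
        int total_a = 0;
        total_a += sale1;
        total_a += sale2;
        total_a += sale3;

        /* With arrays: the same work in one short loop,
           no matter how many items there are. */
        int sales[3] = { 120, 95, 210 };
        int total_b = 0;
        for (int i = 0; i < 3; i++)
            total_b += sales[i];

        printf("%d %d\n", total_a, total_b);
        return 0;
    }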
Sounds like you've been asked to create code in the past (or anticipate having to create code in the future), where your job would have been faster/easier/cheaper if the system that you used had arrays.
That's the issue: you want to do more for your director and you need arrays to help you.
Your director will understand the business benefits of you having a better toolkit--you'll be able to do more for him or her. And that's how you increase business efficiency.
Tell your director: I want to improve my productivity for you and our team. To do so, arrays would be very helpful.
I like Alex's answer - it has to be put in terms of the user's problems. What problem (that they care about) can they do with it that they cannot do without it?
I used to teach introductory programming in college, and arrays are simply not something that comes easily to non-programmers. They need to understand some other basics first, like the sequential nature of programs, the lego-block way programs are constructed, the idea of run-time (as opposed to write-time) and really importantly the concept of a variable as a container of a value, and how that is different from its name, and how its contents changes with time while its name does not.
I found a useful way to get into this area is to let them program a very simple, decimal, simulated computer, in "machine language". They get the notion of memory address vs. memory contents, and that address is just a number. That makes it a lot easier to introduce arrays in a more "real" language.
Another approach is to have them work on a kind of problem where they really start wishing they could invent variables on the fly. They don't want to just have a variable A; they feel a need for A1, A2, etc., and then they would really like to say Ai where i is another variable. Once they feel the need for that, they will grasp arrays. (For example, they could take a simple program that asks for their name and has a simple conversation with them, and then extend it to talk to two people at once, then three, and so on.)
Then, a useful next step is "parallel arrays", which can serve as rudimentary arrays of structures; i.e. N$(i) can be the name of student i, while A(i) can be the age of student i. This makes the idea useful.
Only then would I dare to start to introduce algorithms like sorting, merging, table lookup, and so on.
I think to fully realize the potential of arrays, you must somehow mention two things:
1) Array Algorithms
Sort, Find, etc. All the basics. Present this in your business brief as structured data that can organize itself. No extra query language. No variable naming conventions. All you need is good standards.
2) Multi-Dimensional Arrays
The power of arrays seems fully realized to me with matrices. With these you can practically hold limitless data.
Plus, depending on the power of the proprietary language you are using, arrays can store objects.
The power of an array is that it allows you to put a group of things together so that you can perform the same operation on all of them with less code.
Sorting is one example of an array operation, and is like having a box of index cards that you are putting in order.
Or if you had a collection of letters that need to go out, being able to write a loop that stamps each letter and then sends it is better than writing out
Take the first letter, stamp it.
Mail it.
Take the second letter, stamp it.
Mail it.
Basically, anything you'd use to refer to the first, second, third, fifth, etc., is like an array.
And then indexed/hashed arrays are like having an index in a book - you know the author describes the Defenestration of Prague somewhere in the volume, but looking in the index shows that it's on page 255.
Here's the easiest-to-understand benefit: It lets you refer to things by a number. Try to emphasize the importance of this.
