I'm taking an Artificial Intelligence course online and I don't understand how the state space is calculated. In the following PDF, on page 3, slide 2, it says that the possible Pacman positions are:
12x10 = 120
Why is it so? And how did we get to this number?
Actually, this is an excerpt from an online course by UC Berkeley on edX, and though it is not shown in this slide, when calculating the state space they do the following:
12 x 10 x 4
where 4 is for the four directions in which Pacman can face. It was also never said that the area is bigger or that they are only showing a portion of it.
This is because there are:
10 columns: five with "pellets" in them, and five that are empty.
12 rows: six with "pellets" in them, and six that are empty.
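Putting the two numbers together (assuming, as the slide does, that every one of those cells counts as a legal Pacman position):
12 rows x 10 columns = 120 positions
120 positions x 4 facing directions = 480 states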
My programming class is having us do dog race betting, and one of the aspects of it is putting the Odds of a dog (9 total) winning into a struct. For example, dog 1 is supposed to win 40% of the time, dog 2 is supposed to win 10% of the time, etc. Our teacher has also made one of our goals to "demonstrate the ability to generate and use random numbers." Here's what my struct/array is:
// DOG
typedef struct {
    char Name[100];
    int Payout;
    double OddsWin;
} Dog;

Dog racers[9];
Any tips/ideas for how to tackle this? Besides how to generate random numbers with the given probabilities, I am also wondering how I would use that to determine whether a dog won or lost. Would the probability of a certain number equal a win? Like, if I made 1 equal to a win, would I have to code it so that it generates 1 about 40% of the time for dog 1, and so on?
Basically, if I understand you correctly and trying to simplify your question, you want to find a way to choose randomly between an array of Dogs (racers), based on their OddsWin field, which holds the chance each has to win (as a percentage, e.g. 40%, 20%, 5.6%, etc.).
General tip: always try to solve a programming problem on paper first, and only then start coding.
So basically you are trying to make a choice between X dogs, say 4. Say they have the following odds:
dog 1 - 40% to win
dog 2 - 20% to win
dog 3 - 7% to win
dog 4 - 33% to win
So I would take an array of 100 slots and fill it with 40 ones, 20 twos, 7 threes and 33 fours. Then I would choose a random number between 0 and 99 (the array indices) and check the array at that index. If the array's value at that index is 1, then dog 1 won the race, and the same goes for the other dogs.
Maybe there's a more efficient way, but I think that solves the problem :)
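A minimal sketch of that idea in C (pick_winner is a hypothetical helper; it assumes the OddsWin values are whole percentages that sum to 100, using the 4-dog odds above):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

typedef struct {
    char Name[100];
    int Payout;
    double OddsWin;
} Dog;

int pick_winner(Dog racers[], int count)
{
    int table[100], filled = 0;

    for (int d = 0; d < count; d++)
        for (int k = 0; k < (int)racers[d].OddsWin && filled < 100; k++)
            table[filled++] = d;        /* dog d occupies OddsWin slots of the table */

    return table[rand() % filled];      /* a random slot 0..99 names the winner */
}

int main(void)
{
    Dog racers[4] = {
        {"Dog 1", 0, 40}, {"Dog 2", 0, 20}, {"Dog 3", 0, 7}, {"Dog 4", 0, 33}
    };

    srand((unsigned)time(NULL));
    printf("%s wins!\n", racers[pick_winner(racers, 4)].Name);
    return 0;
}

Over many runs, dog 1 comes out of pick_winner about 40% of the time, dog 3 about 7% of the time, and so on.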
One approach would be:
- choose a random number within the range (initially 100)
- assign it to the dog
- subtract that number from the range
(loop over the first 8 dogs and give the rest to dog 9)
The implementation has many alternatives. The easiest would be to hand out values and, in a way, rig the system so that it generates the stated probability for each dog out of 100: run down counters and choose a dog while its value is > 0. You can also add another random 0-9 generator to automate the process.
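If I read the recipe above as handing each of the first 8 dogs a random share of 100 as its odds and giving the remainder to dog 9, a rough sketch could look like this (the variable names are mine):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    double odds[9];
    int range = 100;

    srand((unsigned)time(NULL));
    for (int d = 0; d < 8; d++) {
        int share = (range > 0) ? rand() % (range + 1) : 0;  /* 0..range */
        odds[d] = share;
        range -= share;                 /* shrink the range for the next dog */
    }
    odds[8] = range;                    /* dog 9 gets whatever is left */

    for (int d = 0; d < 9; d++)
        printf("dog %d: %.0f%%\n", d + 1, odds[d]);
    return 0;
}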
Problem:
The city of Siruseri is impeccably planned. The city is divided into a rectangular array of cells with M rows and N columns. Each cell has a metro station. There is one train running left to right and back along each row, and one running top to bottom and back along each column. Each train starts at some time T and goes back and forth along its route (a row or a column) forever.
Ordinary trains take two units of time to go from one station to the next. There are some fast trains that take only one unit of time to go from one station to the next. Finally, there are some slow trains that take three units of time to go from one station to the next. You may assume that the halting time at any station is negligible.
Here is a description of a metro system with 3 rows and 4 columns:
      S(1)  F(2)  O(2)  F(4)
F(3)    .     .     .     .
S(2)    .     .     .     .
O(2)    .     .     .     .
The label at the beginning of each row/column indicates the type of train (F for fast, O for ordinary, S for slow) and its starting time. Thus, the train that travels along row 1 is a fast train and it starts at time 3. It starts at station (1,1) and moves right, visiting the stations along this row at times 3, 4, 5 and 6 respectively. It then returns, visiting the stations from right to left at times 6, 7, 8 and 9. It again moves right, now visiting the stations at times 9, 10, 11 and 12, and so on. Similarly, the train along column 3 is an ordinary train starting at time 2. So, starting at station (1,3), it visits the three stations on column 3 at times 2, 4 and 6, returns back to the top of the column visiting them at times 6, 8 and 10, and so on.
Given a starting station, the starting time and a destination station, your task is to determine the earliest time at which one can reach the destination using these trains.
For example suppose we start at station (2,3) at time 8 and our aim is to reach the station (1,1). We may take the slow train of the second row at time 8 and reach (2,4) at time 11. It so happens that at time 11, the fast train on column 4 is at (2,4) travelling upwards, so we can take this fast train and reach (1,4) at time 12. Once again we are lucky and at time 12 the fast train on row 1 is at (1,4), so we can take this fast train and reach (1,1) at time 15. An alternative route would be to take the ordinary train on column 3 from (2,3) at time 8 and reach (1,3) at time 10. We then wait there till time 13 and take the fast train on row 1 going left, reaching (1,1) at time 15. You can verify that there is no way of reaching (1,1) earlier than that.
Test Data: You may assume that M, N ≤ 50.
Time Limit: 3 seconds
As the sizes of M and N are very small, we can try to solve it with recursion.
At every station, we take the two trains which can take us nearer to our destination. E.g., if we want to go to (1,1) from (2,3), we take the trains which bring us closer to (1,1) and get off at the station nearest to our destination, while keeping track of the time taken. If we reach the destination and the time taken is less than the minimum so far, we update the minimum.
We can determine which station a train is at a particular time using this method:
/* S is the starting time of the train and N is the number of stations it
   visits; T is the time for which we want to find the station the train is at.
   T is always greater than S. */
T = T - S + 1
Station(T) = T % N; if T % N == 0, then Station(T) = N
Here is my question:
How do we determine the earliest time when a particular train reaches the station we want in the direction we want?
As my above algorithm uses a greedy strategy, will it give an accurate answer? If not, how do I approach this problem?
P.S.: This is not homework; it is an online judge problem.
I believe a greedy solution will fail here, but it would be a bit hard to construct a counter-example.
This problem is meant to be solved using Dijkstra's algorithm. The edges are the connections between adjacent stations and depend on the type of train and its starting time. You also don't need to compute the whole graph; only compute the edges for the node you are currently considering. I have solved numerous similar problems and this is the way to solve them. I also tried to use greedy several times before I learnt that it never passes.
Hope this helps.
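To make the edge computation concrete, here is a rough C sketch of that idea (my own illustration, not a reference solution; it hardcodes the 3x4 example from the problem instead of reading input, and posAfter / rideToNeighbour are my helper names). The cost of moving to an adjacent station is the earliest time the corresponding train can drop you there, given the time at which you are ready to board.

/* Dijkstra over grid cells: the tentative distance of a cell is the earliest
   time we can be standing at that station. */
#include <stdio.h>
#include <string.h>

#define INF 1000000000

int M, N;
int rowd[55], rowS[55];   /* time per hop and start time of each row train    */
int cold[55], colS[55];   /* time per hop and start time of each column train */

/* Station (1-based) a train is at after k hops on a line of n stations:
   it visits 1,2,...,n,n-1,...,1,2,... with period 2*(n-1). */
static int posAfter(int k, int n)
{
    int r = k % (2 * (n - 1));
    return (r < n) ? r + 1 : 2 * n - 1 - r;
}

/* Earliest time we can arrive at station `to`, adjacent to `from` on a line of
   n stations, if we are ready to board at time t; the train makes one hop
   every d time units and started at time s. */
static int rideToNeighbour(int t, int d, int s, int n, int from, int to)
{
    int k = (t > s) ? (t - s + d - 1) / d : 0;        /* first hop index with s + k*d >= t */
    for (int limit = k + 4 * n + 4; k <= limit; k++)  /* scanning one full period is enough */
        if (posAfter(k, n) == from && posAfter(k + 1, n) == to)
            return s + (k + 1) * d;
    return INF;
}

int main(void)
{
    /* the example: rows F(3) S(2) O(2), columns S(1) F(2) O(2) F(4),
       start at (2,3) at time 8, destination (1,1) */
    M = 3; N = 4;
    int rd[] = {0, 1, 3, 2},    rs[] = {0, 3, 2, 2};
    int cd[] = {0, 3, 1, 2, 1}, cs[] = {0, 1, 2, 2, 4};
    memcpy(rowd, rd, sizeof rd); memcpy(rowS, rs, sizeof rs);
    memcpy(cold, cd, sizeof cd); memcpy(colS, cs, sizeof cs);
    int sr = 2, sc = 3, t0 = 8, destr = 1, destc = 1;

    int dist[55][55], done[55][55];
    for (int r = 1; r <= M; r++)
        for (int c = 1; c <= N; c++) { dist[r][c] = INF; done[r][c] = 0; }
    dist[sr][sc] = t0;

    for (;;) {                                        /* plain O(V^2) Dijkstra */
        int br = -1, bc = -1, best = INF;
        for (int r = 1; r <= M; r++)
            for (int c = 1; c <= N; c++)
                if (!done[r][c] && dist[r][c] < best) { best = dist[r][c]; br = r; bc = c; }
        if (br < 0) break;
        done[br][bc] = 1;

        int dr4[] = {0, 0, -1, 1}, dc4[] = {-1, 1, 0, 0};
        for (int i = 0; i < 4; i++) {
            int nr = br + dr4[i], nc = bc + dc4[i];
            if (nr < 1 || nr > M || nc < 1 || nc > N) continue;
            int arr = (i < 2)
                ? rideToNeighbour(best, rowd[br], rowS[br], N, bc, nc)  /* row train    */
                : rideToNeighbour(best, cold[bc], colS[bc], M, br, nr); /* column train */
            if (arr < dist[nr][nc]) dist[nr][nc] = arr;
        }
    }
    printf("%d\n", dist[destr][destc]);               /* prints 15 for the example */
    return 0;
}

On the example this prints 15, matching the route described in the problem statement; for the full problem you would read M, N, the train descriptions and the query instead of hardcoding them.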
I'm playing around a bit with image processing and decided to read up on how color quantization worked and after a bit of reading I found the Modified Median Cut Quantization algorithm.
I've been reading the code of the C implementation in Leptonica library and came across something I thought was a bit odd.
Now I want to stress that I am far from an expert in this area, nor am I a math-head, so I am predicting that this all comes down to me not understanding all of it, and not that the implementation of the algorithm is wrong.
The algorithm states that the vbox should be split along the largest axis, and that it should be split using the following logic:
The largest axis is divided by locating the bin with the median pixel
(by population), selecting the longer side, and dividing in the center
of that side. We could have simply put the bin with the median pixel
in the shorter side, but in the early stages of subdivision, this
tends to put low density clusters (that are not considered in the
subdivision) in the same vbox as part of a high density cluster that
will outvote it in median vbox color, even with future median-based
subdivisions. The algorithm used here is particularly important in
early subdivisions, and is useful for giving visible but low
population color clusters their own vbox. This has little effect on
the subdivision of high density clusters, which ultimately will have
roughly equal population in their vboxes.
For the sake of argument, let's assume that we have a vbox that we are in the process of splitting and that the red axis is the largest. In the Leptonica algorithm, on line 01297, the code appears to do the following:
Iterate over all the possible green and blue variations of the red color
For each iteration, it adds the number of pixels it has found (the population) along the red axis to the total
For each red value, it sums up the population of the current red and the previous ones, thus storing an accumulated value for each red
Note: when I say 'red' I mean each point along the axis that is covered by the iteration; the actual color may not be red, but it contains a certain amount of red
So for the sake of illustration, assume we have 9 "bins" along the red axis and that they have the following populations
4 8 20 16 1 9 12 8 8
After the iteration of all red bins, the partialsum array will contain the following count for the bins mentioned above
4 12 32 48 49 58 70 78 86
And total would have a value of 86
Once that's done, it's time to perform the actual median cut, and for the red axis this is performed on line 01346.
It iterates over the bins and checks their accumulated sum. And here's the part that throws me off from the description of the algorithm: it looks for the first bin that has a value greater than total/2.
Wouldn't total/2 mean that it is looking for a bin whose value is greater than the average value, and not the median? The median for the above bins would be 49.
The use of 43 or 49 could potentially have a huge impact on how the boxes are split, even though the algorithm then proceeds by moving to the center of the larger side of where the matched value was.
Another thing that puzzles me a bit is that the paper specifies that the bin with the median value should be located, but it does not mention how to proceed if there is an even number of bins: the median would be the result of (a+b)/2, and it's not guaranteed that any of the bins contains that population count. So this is what makes me think that there are some approximations going on that are negligible because of how the split actually takes place at the center of the larger side of the selected bin.
Sorry if this got a bit long-winded, but I wanted to be as thorough as I could, because it's been driving me nuts for a couple of days now ;)
In the 9-bin example, 49 is the number of pixels in the first 5 bins. 49 is the median number in the set of 9 partial sums, but we want the median pixel in the set of 86 pixels, which is 43 (or 44), and it resides in the 4th bin.
Inspection of the modified median cut algorithm in colorquant2.c of leptonica shows that the actual cut location for the 3d box does not necessarily occur adjacent to the bin containing the median pixel. The reasons for this are explained in the function medianCutApply(). This is one of the "modifications" to Paul Heckbert's original method. The other significant modification is to make the decision of which 3d box to cut next based on a combination of both population and the product (population * volume), thus permitting splitting of large but sparsely populated regions of color space.
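To make that concrete, here is a small sketch (my own illustration using the 9-bin example from the question, not the Leptonica code itself) of how the partial sums and the total/2 test locate the bin that holds the median pixel:

#include <stdio.h>

int main(void)
{
    /* populations of the 9 red bins from the example above */
    int pop[9] = {4, 8, 20, 16, 1, 9, 12, 8, 8};
    int partialsum[9], total = 0;

    for (int i = 0; i < 9; i++) {
        total += pop[i];
        partialsum[i] = total;              /* 4 12 32 48 49 58 70 78 86 */
    }

    /* the first bin whose partial sum exceeds total/2 contains the median pixel */
    for (int i = 0; i < 9; i++) {
        if (partialsum[i] > total / 2) {    /* 48 > 43, so it is the 4th bin */
            printf("bin %d holds pixel #%d of %d\n", i + 1, total / 2 + 1, total);
            break;
        }
    }
    return 0;
}

The test selects bin 4, which indeed contains the 43rd/44th of the 86 pixels, rather than bin 5, whose partial sum merely happens to equal 49.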
I do not know the algorithm, but I would assume your array contains the population of each red value; let's explain this with an example:
Assume you have four gradations of red: A,B,C and D
And you have the following sequence of red values:
AABDCADBBBAAA
To find the median, you would have to sort them according to red value and take the middle:
   median
      v
AAAAAABBBBCDD
Now let's use their approach:
A:6 => 6
B:4 => 10
C:1 => 11
D:2 => 13
13/2 = 6.5 => B
I think the mismatch happened because you are counting the population; the average color would be:
(6*A+4*B+1*C+2*D)/13
I am trying to implement Drools Planner for allocating timetables. At the moment, my proficiency in Java and the JavaBean design pattern is low, and I need something simple to practice on.
Is there an AI optimization problem that
- is known to be solved very well with algorithm 'X'
- has a data model that lends itself to being expressed with the JavaBean design pattern in a simple manner
- uses the fewest extra features (like planning entity difficulty)?
Such a problem would be a good one to cut my teeth on with Drools Planner.
I am trying the N-Queens problem right now, which seems the simplest of these, so I am looking for something in the same league.
Update: See CloudBalancingHelloWorld.java in optaplanner-examples (Drools Planner has been renamed to OptaPlanner).
You could also try implementing the ITC2007 curriculum course scheduling yourself and then compare it with the source code of the example in Drools Planner.
If you want to keep it simple but get decent results too, follow this recipe and go for First Fit followed by Tabu Search.
Another good idea is to join the ITC2011 scheduling competition: it's still open till 1-MAY-2012 and very similar to the curriculum course scheduling example.
I am trying 2x2 Sudoku (generating and solving) as something simple. You can model it on the N-Queens code. While 2x2 Sudokus are solved easily, 3x3 Sudokus may get stuck, so you can implement swap moves.
Another interesting problem would be bucket sums: given 10 buckets, each able to hold 5 numbers, and 50 numbers, write a program to allocate the numbers so that the sums of the numbers in the buckets are more or less even.
Bucket Bucket0 3 6 19 16 11 =55
Bucket Bucket1 8 2 5 25 15 =55
...
Bucket Bucket7 3 25 4 16 8 =56
Bucket Bucket8 12 20 12 9 2 =55
Bucket Bucket9 4 9 11 12 20 =56
This has practical implications, such as evenly distributing tasks of varying toughness throughout the week.
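Just to illustrate the problem itself, here is a plain C sketch (not a Drools Planner model; the numbers are made up at random, and the greedy heuristic is only a baseline that a local search could then improve on):

/* Greedy sketch: sort the numbers in descending order and always drop the
   next number into the non-full bucket with the smallest running sum. */
#include <stdio.h>
#include <stdlib.h>

#define BUCKETS 10
#define PER_BUCKET 5

static int desc(const void *a, const void *b) { return *(const int *)b - *(const int *)a; }

int main(void)
{
    int nums[BUCKETS * PER_BUCKET], sum[BUCKETS] = {0}, count[BUCKETS] = {0};
    int content[BUCKETS][PER_BUCKET];

    srand(42);
    for (int i = 0; i < BUCKETS * PER_BUCKET; i++) nums[i] = rand() % 25 + 1;
    qsort(nums, BUCKETS * PER_BUCKET, sizeof nums[0], desc);

    for (int i = 0; i < BUCKETS * PER_BUCKET; i++) {
        int best = -1;
        for (int b = 0; b < BUCKETS; b++)            /* pick the emptiest non-full bucket */
            if (count[b] < PER_BUCKET && (best < 0 || sum[b] < sum[best]))
                best = b;
        content[best][count[best]++] = nums[i];
        sum[best] += nums[i];
    }

    for (int b = 0; b < BUCKETS; b++) {
        printf("Bucket Bucket%d", b);
        for (int j = 0; j < PER_BUCKET; j++) printf(" %d", content[b][j]);
        printf(" =%d\n", sum[b]);
    }
    return 0;
}

The output has the same shape as the sample above; a Planner model would instead treat each number-to-bucket assignment as a planning variable and score how evenly the sums are spread.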
A collection of some problems: http://eclipseclp.org/examples/index.html
I came across an algorithmic problem: find the number of inversion pairs in an array in O(n log n) time. I got the solution to this, but my question is: what is the real-life application of this problem? I want to know some applications where we need to count inversion pairs.
One example is the fifteen puzzle. If you want to randomly shuffle a grid of numbers, can you tell at a glance if
1 14 5 _
7 3 2 12
6 9 13 15
4 10 8 11
can be solved by sliding moves or not? The parity of the permutation will tell you that it is not.
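If you want to check that particular grid programmatically, here is a small sketch (my own; note that for a 4x4 board the standard test combines the inversion count with the row of the blank, counted from the bottom):

#include <stdio.h>

int main(void)
{
    /* tiles of the grid above in reading order, 0 = the blank */
    int grid[16] = { 1, 14,  5,  0,
                     7,  3,  2, 12,
                     6,  9, 13, 15,
                     4, 10,  8, 11 };
    int inv = 0, blank_row_from_bottom = 0;

    for (int i = 0; i < 16; i++) {
        if (grid[i] == 0) { blank_row_from_bottom = 4 - i / 4; continue; }
        for (int j = i + 1; j < 16; j++)             /* simple O(n^2) inversion count */
            if (grid[j] != 0 && grid[j] < grid[i]) inv++;
    }

    /* even board width: solvable iff inversions + blank row (from bottom) is odd */
    printf("%d inversions, blank on row %d from the bottom -> %s\n",
           inv, blank_row_from_bottom,
           (inv + blank_row_from_bottom) % 2 ? "solvable" : "not solvable");
    return 0;
}

For the grid shown it reports 38 inversions with the blank on the 4th row from the bottom, so the position is not solvable.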
Here is a use of inversion count in real life: suppose you want to know how similar two lists are, based on ranking.
On a movie site, two users' movie wishlists can be compared, and users whose lists are similar (few inversions between them) can be shown each other's choices, since they have similar taste.
The same logic applies to shopping lists on a shopping website, for recommending items to a user based on their activity.
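For reference, here is a minimal merge-sort sketch of the O(n log n) inversion count mentioned in the question (my own illustration; the 5-element array is just an example). Writing one ranking in the order given by the other and counting inversions is exactly the kind of similarity measure described above:

#include <stdio.h>

/* On every merge step, an element taken from the right half jumps over all
   elements still waiting in the left half, i.e. mid - i inversions. */
static long long merge_count(int a[], int tmp[], int lo, int mid, int hi)
{
    long long inv = 0;
    int i = lo, j = mid, k = lo;
    while (i < mid && j < hi) {
        if (a[i] <= a[j]) tmp[k++] = a[i++];
        else { tmp[k++] = a[j++]; inv += mid - i; }
    }
    while (i < mid) tmp[k++] = a[i++];
    while (j < hi)  tmp[k++] = a[j++];
    for (k = lo; k < hi; k++) a[k] = tmp[k];
    return inv;
}

static long long count_inversions(int a[], int tmp[], int lo, int hi)
{
    if (hi - lo < 2) return 0;
    int mid = lo + (hi - lo) / 2;
    return count_inversions(a, tmp, lo, mid)
         + count_inversions(a, tmp, mid, hi)
         + merge_count(a, tmp, lo, mid, hi);
}

int main(void)
{
    /* e.g. one user's ranking written in the order the other user ranked the items */
    int a[] = {2, 4, 1, 3, 5}, tmp[5];
    printf("%lld inversions\n", count_inversions(a, tmp, 0, 5));  /* prints 3 */
    return 0;
}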