Cross product of incomplete DFAs

I am trying to compute the cross product of two DFAs, but they are both incomplete DFAs.
The following image is the answer I came up with for the intersection, via the cross product, of two incomplete DFAs. The alphabet is {a,b,c,d,e}.
Is it correct, or does the fact that they are incomplete change everything?

The fact that they're incomplete does make your job different if you are constructing the cross product to find the union; for the intersection, however, you can still use this basic approach. That said, you made some errors:
Let's start in the top left, and look at all the transitions out of the state A1:
First, you have a state transition labeled c from state A1 to state A2. However, that's incorrect because there is no transition from A to A in the top DFA that's labeled with c. Then, you have a transition labeled a that goes from state A1 to itself. This is also incorrect because there's no transition from state 1 to anything labeled a. Likewise, there's no transition out from state 1 labeled b so that also invalidates your transition from A1 to B1.
When working with incomplete DFAs and building a cross product like this, a state (p, q) only gets an outgoing edge for a letter when both p and q have an outgoing edge for that letter.
Therefore, there are no transitions out of the start state. At this point we can stop: with no transitions out of the start state, the resulting DFA matches nothing.
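To make that rule concrete, here is a small Python sketch of the intersection construction. The dict-of-dicts representation and the toy machines at the end are my own illustration, not the DFAs from the image:

    def intersect(d1, d2, start1, start2, accept1, accept2):
        """Product DFA for the intersection of two (possibly incomplete) DFAs.
        A product state (p, q) only gets an outgoing edge for letters that
        BOTH p and q have outgoing edges for."""
        start = (start1, start2)
        trans, accept = {}, set()
        todo, seen = [start], {start}
        while todo:
            p, q = todo.pop()
            if p in accept1 and q in accept2:
                accept.add((p, q))
            for letter in set(d1.get(p, {})) & set(d2.get(q, {})):
                target = (d1[p][letter], d2[q][letter])
                trans.setdefault((p, q), {})[letter] = target
                if target not in seen:
                    seen.add(target)
                    todo.append(target)
        return start, trans, accept

    # Toy machines (hypothetical, just to show the effect):
    d1 = {"A": {"a": "A", "b": "B"}, "B": {"b": "B"}}
    d2 = {"1": {"c": "2"}, "2": {"c": "2"}}
    print(intersect(d1, d2, "A", "1", {"B"}, {"2"}))
    # -> the start state ('A', '1') has no outgoing edges at all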
Another way to approach this is to first make both DFAs complete by adding a non-accepting sink state (I'll call this state ∅). This state should have an edge going to it from every state for every letter for which there isn't already an outgoing edge. For example, in the first DFA there would be edges from A to ∅ for c, d, and e. There should also be an edge from ∅ to ∅ for every letter. Now both DFAs are complete.
When you do this, you end up with edges out of A1 going to: A∅ for a, B∅ for b, ∅1 for c, and ∅∅ for d and e. The rest is left as an exercise, but if you draw it out completely you'll discover again that there's no path from A1 to any accepting state.
Making both DFAs complete first is in fact what you need to do if you're constructing the cross product to find the union. With the intersection it's permissible to simply throw away any state involving ∅ in either DFA, since reaching ∅ means you'll never reach an accepting state; with the union you need to keep them, because some states involving ∅ may be accepting states of the cross product. (You can still throw away the state ∅∅ and any edge leading to it.)
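To make the completion step concrete, here is a matching Python sketch in the same dict-of-dicts representation (again my own illustration, not code from the question):

    def complete(dfa, alphabet, sink="∅"):
        """Return a copy of `dfa` in which every state has an edge for every
        letter; missing edges are redirected to a non-accepting sink state."""
        states = set(dfa) | {t for edges in dfa.values() for t in edges.values()}
        full = {s: dict(dfa.get(s, {})) for s in states}
        full[sink] = {}
        for s in full:
            for letter in alphabet:
                full[s].setdefault(letter, sink)  # includes the ∅ -> ∅ self-loops
        return full

    d1 = {"A": {"a": "A", "b": "B"}, "B": {"b": "B"}}
    print(complete(d1, alphabet="abcde"))
    # A now goes to ∅ on c, d and e, and ∅ loops back to itself on every letter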


In what state is this 3-qubit state?

So, I have a state of 3 qubits that is in one of the states in the picture. How can I find out which state it is in?
I tried to measure the qubits but the amplitudes are 1/3 for each, so...
This is task 1.15 from the Measurements kata, so I'll outline the solution broadly and point you to the workbook for that task for the formulas and details that are painful to spell out on StackOverflow without LaTeX support.
When you need to distinguish two orthogonal quantum states, you can first apply a unitary to them to rotate them to two different orthogonal quantum states that are easy to distinguish - for example, states that have different basis states in their superposition makeup.
In this case,
We can first apply some rotation gates (R1 gate in Q# or similar gates) to the second and the third qubits to get rid of the 𝜔 amplitudes in the first state, converting it into the W state.
Then, apply the adjoint of the transformation you'd use to prepare the W state from the |000⟩ state, so that this state ends up becoming the |000⟩ state.
Since the transformations you've applied are unitary and preserve inner products of state vectors, you know that the vectors remain orthogonal after their application. This means that if you do the measurements now, you'll always get 000 for the first state, and some other basis state for the second one.
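For reference (since spelling formulas out in plain text is painful), the W state mentioned above is the standard three-qubit state below; the exact amplitudes of the two states in the kata are in the workbook for task 1.15. The second line just restates why applying a unitary keeps the two states perfectly distinguishable:

    \[
    |W\rangle = \frac{1}{\sqrt{3}}\bigl(|100\rangle + |010\rangle + |001\rangle\bigr),
    \qquad
    \langle U\psi_1 \,|\, U\psi_2 \rangle
      = \langle \psi_1 | U^\dagger U | \psi_2 \rangle
      = \langle \psi_1 | \psi_2 \rangle = 0 .
    \]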

How to improve performance using Transposition Table in Game Playing?

I have implemented iterative deepening with alpha-beta pruning in my game and I also added a Transposition Table to store already evaluated boards.
Right now, I am doing the following:
When running iterative deepening, at depth = 0 it evaluates and stores all positions with their scores in the TT.
Now, when it re-runs with depth = 1, I simply return the value of the board if it exists in the TT. This stops the algorithm at depth = 0, since all values for depth = 0 boards are already in the TT.
If I instead only return values from the TT when the depth limit is reached, e.g. depth = MAX_DEPTH, then big sub-trees will never be cut.
So I do not understand how I should re-use the values stored in the TT to make my game faster.
I will use chess to explain things in this answer; of course, this reasoning can be applied to other board games as well, with slight modifications.
Transposition tables in board game programs are caches which store already evaluated positions. It is great to have an easy-to-handle cache key which uniquely identifies a position, like:
WKe5Qd6Pg2h3h4 BKa8Qa7
So when you get to a position, you check whether its cache key is present and, if so, reuse its evaluation. Whenever you visit a position at depth = 0, after it's properly evaluated, it can be cached. So, if some moves are made, in the sub-variations you can more-or-less jump over the evaluation. For example, suppose that from the starting position white plays 1. Nf3 and black replies 1... Nf6, and the resulting positions after both plies get cached. White's 2. Ng1 still needs evaluation, since that position was not evaluated or cached yet, but black's possible 2... Ng8 doesn't need to be evaluated, because it leads back to the starting position.
Of course, you can do more aggressive caching and store positions up to depth = 1 or even more.
You will need to make sure that you do not miss some strategic details of the game. In the case of chess you will need to keep in mind:
the 50-move rule's effect
the threefold repetition draw
which side is to move
whether special moves like castling or en passant are still possible in one of the two positions but not in the other (their availability depends on the game history)
So you might want to add some further nuances to your algorithm, but to answer the original question: positions that have already occurred in the game, or that are very high up in the variation tree, can be cached and more-or-less ignored ("more" meaning in most cases, "less" meaning the nuances outlined above).
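To answer the "how do I re-use the values" part concretely, here is a minimal Python sketch of the caching idea. The key format and the evaluation function are placeholders of my own, not your code; real engines typically use Zobrist hashing for the key and also store the depth at which the score was computed, so deeper results can replace shallower ones:

    transposition_table = {}          # position key -> cached evaluation

    def cached_evaluate(position_key, evaluate):
        """Return a score for the position, doing the expensive work only once."""
        if position_key in transposition_table:
            return transposition_table[position_key]   # jump over the evaluation
        score = evaluate(position_key)                 # expensive static evaluation
        transposition_table[position_key] = score      # cache it for later visits
        return score

    # Usage with a dummy evaluation and a compact key like the one above
    # (side to move appended, per the nuances listed earlier):
    print(cached_evaluate("WKe5Qd6Pg2h3h4 BKa8Qa7 w", lambda pos: 1.25))  # computed
    print(cached_evaluate("WKe5Qd6Pg2h3h4 BKa8Qa7 w", lambda pos: 1.25))  # cached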

The purpose of using Q-Learning algorithm

What is the point of using Q-learning? I have used example code that represents a 2D board with a pawn moving on it. At the right end of the board there is a goal which we want to reach. After the algorithm completes, I have a Q-table with values assigned to every state-action pair. Is it all about getting this Q-table to see which state-action pairs (i.e. which actions are best in specific states) are the most useful? That's how I understand it right now. Am I right?
Is it all about getting this Q-table to see which state-action pairs (i.e. which actions are best in specific states) are the most useful?
Yep! That's pretty much it. Given a finite state space (and enough exploration), Q-learning is guaranteed to eventually learn the optimal policy. Once the optimal policy is reached (also known as convergence), every time the agent is in a given state s, it looks in its Q-table for the action a with the highest Q-value for that (s, a) pair.
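As a tiny illustration, acting greedily from the learned Q-table looks like this (the table values below are made up; in practice they come out of the Q-learning updates):

    q_table = {
        ("cell_3", "left"):  0.1,
        ("cell_3", "right"): 0.9,   # moving right leads toward the goal
        ("cell_3", "up"):    0.2,
    }

    def best_action(state, actions):
        """Pick the action with the highest Q-value for this state."""
        return max(actions, key=lambda a: q_table.get((state, a), 0.0))

    print(best_action("cell_3", ["left", "right", "up"]))  # -> "right"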

How does the winged-edge structure for meshes work?

I'm implementing an algorithm in which I need to manipulate a mesh, adding and deleting edges quickly and iterating quickly over the edges adjacent to a vertex in CCW or CW order.
The winged-edge structure is used in the description of the algorithm I'm working from, but I can't find any concise descriptions of how to perform those operations on this data structure.
I learned about it at university, but that was a while ago.
In response to this question I've searched the web for good documentation too and found none that is good, but we can go through a quick example of CCW and CW order and insertion/deletion here.
Have a look at the table and graphic from this page:
http://www.cs.mtu.edu/~shene/COURSES/cs3621/NOTES/model/winged-e.html
The table gives only the entry for one edge, a; in a real table you have such a row for every edge. You can see you get the:
left predecessor,
left successor,
right predecessor,
right successor
but here comes the critical point: it gives them relative to the direction of the edge, which is X->Y in this case, and to the traversal direction; the right traversal here is e->a->c.
So for the CW order of going through the graph this is very easy to read: edge a has right successor c, and then you look at the row for edge c.
OK, this table is easy to read for CW-order traversal; for CCW you have to think "which edge did I come from when I walked this edge backwards?". Effectively you get the next edge in CCW order by taking the left-traverse predecessor, in this case b, and continuing with the row entry for edge b in the same manner.
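To make the traversal concrete, here is a small Python sketch of the per-edge record from that table and the face walk just described. The field names follow my reading of the MTU table and the orientation convention is an assumption, so treat it as an illustration rather than a reference implementation:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class WingedEdge:
        start: str                                   # start vertex (X in the graphic)
        end: str                                     # end vertex (Y)
        left_face: str
        right_face: str
        left_pred: Optional["WingedEdge"] = None     # left-traverse predecessor
        left_succ: Optional["WingedEdge"] = None     # left-traverse successor
        right_pred: Optional["WingedEdge"] = None    # right-traverse predecessor
        right_succ: Optional["WingedEdge"] = None    # right-traverse successor

    def face_edges(start_edge, face):
        """Walk the boundary of `face`: follow the right successor when the face
        lies to the edge's right, otherwise the left successor."""
        e = start_edge
        while True:
            yield e
            e = e.right_succ if e.right_face == face else e.left_succ
            if e is start_edge:
                break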
Now insertion and deletion: it is clear that you can't just remove the edge and expect the graph to still consist only of triangles; during deletion you have to join two vertices, for example X and Y in the graphic. To do this you first have to fix every reference to the edge a.
So where can a be referred to? Only in the edges b, c, d and e (all other edges are too far away to know about a), plus in the vertex->edge table if you have one (but let's only consider the edge table in this example).
As an example of how we have to fix edges, let's take a look at c. Like a, c has a left and right predecessor and successor (so 4 edges); which one of those is a? We cannot know without checking, because the table entry for c can have the node Y in either its start or its end node. So we have to check which one it is. Let's assume we find that c has Y in its start node; we then have to check whether a is c's right predecessor (which it is, and which we find out by looking at c's entry and comparing it to a) or whether it is c's right successor. "Successor?" you might ask. Yes, because remember that the two left-traverse columns are relative to walking the edge backwards. So now we have found that a is c's right predecessor, and we can fix that reference by substituting a's own right predecessor for it. Continue with the other 3 edges and you are done with the edge table. Fixing an additional vertex->edge table is trivial, of course: just look into the entries for X and Y and delete a there.
Adding edges is basically the reverse of this fix-up of 4 other edges, but with a little twist. Let's call the node we want to split Z (it will be split into X and Y). You have to take care to split it in the right direction, because you could end up with either d and e combined in a node, or e and c (for example if the new edge is horizontal instead of the vertical a in the graphic). You first have to find out between which 2 edges of the soon-to-be X, and between which 2 edges of Y, the new edge is added. You simply choose which edges shall be on one node and which on the other: in the example graphic, choose that you want b, c and the 2 edges to the north between them on one node, and it follows that the remaining edges are on the other node, which will become X. You then find by vector subtraction that the new edge a has to go between b and c, and not, say, between c and one of the 2 edges to the north. The vector subtraction is the desired position of the new X minus the desired position of Y.

Implementing a basic predator-prey simulation

I am trying to implement a predator-prey simulation, but I am running into a problem.
A predator searches for nearby prey and eats it. If there is no nearby prey, it moves to a random vacant cell.
Basically the part I am having trouble with is when I advance a "generation".
Say I have a grid that is 3x3, with each cell numbered from 0 to 8.
If I have 2 predators in cells 0 and 1, predator 0 is checked first and moves to either cell 3 or 4.
For example, if it goes to cell 3, the simulation then goes on to check predator 1. This may seem correct, but it kind of "gives priority" to the organisms with lower index values. I've tried using 2 arrays, but that doesn't seem to work either, as it would check places where organisms appear to be but aren't.
Anyone have an idea of how to do this "fairly" and "correctly?"
I recently did a similar task in Java. Processing the predators from the top row to the bottom not only gives an "unfair advantage" to lower indices but also creates patterns in the movement of both the prey and the predators.
I overcame this problem by choosing both rows and columns in a random order. This way, every predator/prey has the same chance of being processed early in a generation.
One way to randomize would be to create a linked list of (row, column) pairs and shuffle it. At each generation, choose a random index to start from and keep processing from there.
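A small sketch of that idea in Python, using a plain list of (row, column) pairs instead of a linked list (the 3x3 grid size is taken from the question):

    import random

    ROWS, COLS = 3, 3

    def generation_order():
        cells = [(r, c) for r in range(ROWS) for c in range(COLS)]
        random.shuffle(cells)       # a fresh random order every generation
        return cells

    for row, col in generation_order():
        pass  # process whatever organism (if any) occupies (row, col)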
More as a comment than anything else: if your prey are so dense that this is a common problem, I suspect you don't have a "population" that will live long. Also as a comment: update your predators in random order. That is, instead of stepping through your array of locations, take your list of predators, randomize it, and then update them one by one. I think this is necessary, but I don't know if it is sufficient.
This problem is solved with a technique called double buffering, which is also used in computer graphics (in order to prevent the image currently being drawn from disturbing the image currently being displayed on the screen). Use two arrays. The first one holds the current state, and you make all decisions about movement based on the first array, but you perform the movement in the other array. Then, you swap their roles.
Edit: Looks like I didn't read your question thoroughly enough. Double buffering and randomization might both be needed, depending on how complex your rules are (but if there are no rules other than the ones you've described, randomization should suffice). They solve two distinct problems, though:
Double buffering solves the problem of correctness when you have rules where decisions about what will happen to a creature in a cell depends on the contents of neighbouring cells, and the decisions about neighbouring cells also depend on this cell. If you e.g. have a rule that says that if two predators are adjacent, they will both move away from each other, you need double buffering. Otherwise, after you've moved the first predator, the second one won't see any adjacent predator and will remain in place.
Randomization solves the problem of fairness when there are limited resources, such as when a prey can only be eaten by one predator (which seems to be the problem that concerned you).
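Here is a small Python sketch combining both ideas. The grid contents and the "move right if the cell is empty" rule are made up just to keep the example short; the point is that decisions read from one grid while movements are written to the other, and the processing order is shuffled:

    import random

    current = [["P", ".", "."],
               [".", "y", "."],
               [".", ".", "."]]          # "P" predator, "y" prey, "." empty
    nxt = [row[:] for row in current]    # second buffer starts as a copy

    cells = [(r, c) for r in range(3) for c in range(3)]
    random.shuffle(cells)                # fairness: no fixed priority

    for r, c in cells:
        # decide based on `current`, perform the move in `nxt`
        if current[r][c] == "P" and c + 1 < 3 and current[r][c + 1] == ".":
            nxt[r][c], nxt[r][c + 1] = ".", "P"
        # (resolving two predators that pick the same target cell is omitted here)

    current, nxt = nxt, current          # swap roles for the next generation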
How about some sort of round-robin method? Put your predators in a circular linked list and keep a pointer to the node that's currently "first". Then, advance that first pointer to the next place in the list each generation. You could easily insert new predators either at the front or the back of your circular list.
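A small sketch of that idea, using collections.deque in place of a hand-rolled circular linked list; rotating by one each generation moves the "first" predator along the list:

    from collections import deque

    predators = deque(["pred_0", "pred_1", "pred_2"])

    for generation in range(3):
        for p in predators:
            pass                     # update predator p
        predators.rotate(-1)         # next generation starts one position later
        # new predators can be added cheaply at either end:
        # predators.appendleft(new_pred)  or  predators.append(new_pred)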
