NFA to DFA conversion

When converting from an NFA to a DFA, the result may look like the image below. My question is: is it necessary to show that state {4} goes to the zero (dead) state? In other words, if I leave out the transition on input symbol 1 from {4}, is the automaton the same as the one in the picture below, or not?

It’s a matter of convention. Personally, I prefer not to clutter my DFA with unnecessary states, especially since DFAs obtained via transformation from NFAs tend to become quite complex anyway, and since it’s deterministic we know that any non-displayed transition must be invalid.
However, in my experience many people in academia teach and use the other convention, and require all transitions to be explicitly shown. When working as a TA (tutor) I actually had a discussion with a professor about this: he wanted us tutors to deduct points on the final tests for missing transitions in DFAs, but I convinced him that deducting points for this was unfair.

It is not necessary to write the {4} -> 0 transition, because the automaton is already accepting the word. This transition only means that nothing meaningful happens for our solution. Still, for completeness it is useful to include it, to show the whole automaton.

It matters only if you are trying to draw the minimal FA (MFA).
Actually, you can generate an infinite number of DFAs from a single NFA, each differing in its number of states.
If you remove the dead states in the figure, you will get the MFA.
The figure is OK if you just want a DFA.
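For what it's worth, the two conventions only differ in whether the transition function is left partial or made total. Here is a minimal sketch in Java, assuming a table-based representation (the names PartialDfa, TRAP and delta are mine, not from the question), of how a partial DFA can be completed with an explicit dead state:

// Sketch: completing a partial DFA transition table with an explicit dead (trap) state.
import java.util.*;

class PartialDfa {
    static final String TRAP = "DEAD";   // illustrative name for the trap state

    // delta.get(state).get(symbol) == target state; entries may be missing
    static Map<String, Map<Character, String>> complete(
            Map<String, Map<Character, String>> delta, Set<Character> alphabet) {
        Map<String, Map<Character, String>> full = new HashMap<>();
        Set<String> states = new HashSet<>(delta.keySet());
        states.add(TRAP);
        for (String s : states) {
            Map<Character, String> row = new HashMap<>(delta.getOrDefault(s, Map.of()));
            for (char a : alphabet) {
                row.putIfAbsent(a, TRAP);   // every missing transition goes to the dead state
            }
            full.put(s, row);
        }
        return full;   // now total: every (state, symbol) pair is defined
    }
}

Skipping this completion step corresponds to the "don't draw the dead state" convention; running it gives the fully specified DFA that the stricter convention asks for.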


Klocwork Analysis Metrics Issue not Clear

I don't understand this issue:
Issue: HIS Metriken - Cyclomatic (CR-MET4): [function_name] 13>10
It appears in the Klocwork analysis when checking the issues for the code: METRICS.E.HIS_Metriken___Cyclomatic__CR_MET4_
Can anyone help?
Thanks
Do you see all those ifs, elses, loops in that function?
Those are the problem, you need to either design this function's logic more elegantly or split it into more functions with well-defined purposes.
By the way, I can only see that problematic function of yours because I am especially clairvoyant. For this kind of question you should normally show your code, just to be fair towards all those other users who cannot read your mind like I did.
Naaa, not really. Cyclomatic complexity is a measure of the number of potential paths through your function. The fact that you have crossed the threshold of 10 by 3 means your function must be full of control structures, which create many paths.
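To make that concrete, here is a small invented example (the question doesn't show the real function, so the names and logic below are purely illustrative): the branches are moved into helpers with well-defined purposes, so each individual method stays far below a threshold like 10.

// Illustrative only: the function flagged by Klocwork is not shown in the question.
class RiskExample {

    // Before: one method owns every branch, so all its decision points
    // add up to a single cyclomatic-complexity figure.
    int classify(int speed, boolean rain, boolean night) {
        int risk = 1;
        if (speed > 120) { risk += 2; } else if (speed > 80) { risk += 1; }
        if (rain)  { risk += 1; }
        if (night) { risk += 1; }
        return risk;
    }

    // After: each helper has a well-defined purpose and only one or two
    // decision points, so no single method approaches the threshold.
    int classifyRefactored(int speed, boolean rain, boolean night) {
        return 1 + speedRisk(speed) + weatherRisk(rain) + visibilityRisk(night);
    }

    int speedRisk(int speed)          { return speed > 120 ? 2 : (speed > 80 ? 1 : 0); }
    int weatherRisk(boolean rain)     { return rain ? 1 : 0; }
    int visibilityRisk(boolean night) { return night ? 1 : 0; }
}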

changing HM reference software to display some information about the bitstream

I am very new to the HM HEVC (and the JEM) reference software, and I am currently trying to understand the source code. I want to add some lines to display, for each component, the name of the algorithm (i.e. inter/intra algorithms), the length of the bitstream, and the position in the output bin file.
The goal is to know which component costs more bits to code and how the codec works. I want to do the same thing for the JEM after that.
My first problem is that I am unable to understand a lot of the functions there, and the comments are not sufficient. So are there any references for understanding the code? (I already read the manual, it doesn't help.)
Second, I don't know where and how exactly to add these lines: is it in TEncGOP, TEncSlice or TEncCU? PS: I don't think it is in TEncGOP.compressGOP, so maybe in the two other classes.
(I am putting the answer to the comment that #Mourad posted four hours ago here, because it will be long.)
I assume that you could manage to find where the actual encoding after the RDO loop is implemented. As you correctly mentioned, xEncodeCU is the function you need to refer to make sure you are not still in the RDO.
Now you need to find the exact function in xEncodeCU that is responsible for your target codec tool.
For instance, if you want to count the number of bits for coefficient coding, you should look into m_pcEntropyCoder->encodeCoeff() (it's a JEM function and may have a different name in the HM). Once you find this line in xEncodeCU, you may do this and get the number of bits written inside the encodeCoeff() function:
UInt b_before = m_pcEntropyCoder->getNumberOfWrittenBits();  // bits written so far, before the call
m_pcEntropyCoder->encodeCoeff( ... );                        // the call whose rate you want to measure
UInt b_after = m_pcEntropyCoder->getNumberOfWrittenBits();   // bits written after the call
UInt writtenBitsCoeff = b_after - b_before;                  // bits spent on coefficient coding
One important point: as you can see, the function getNumberOfWrittenBits() gives you integer rates, obtained by rounding the sum of the fractional rates of all syntax elements coded inside encodeCoeff. This error might or might not be acceptable, depending on your problem. For example, if instead of the coefficient coding rate you wanted to know the rate of the CBF, then this error would not be acceptable at all, because the CBF rate is mostly less than one bit. If this is your case, then you would need to calculate the fractional bits one by one. That would be totally different and relatively more complicated than this.
Point 1: There is a rule of thumb that logging coding decisions (e.g. pred mode, MV, IPM, block size) is much easier at the decoder side than at the encoder. This is because you have a super complicated RDO process at the encoder side that can easily make you get lost in the loops, while at the decoder side everything appears only once. However, if you insist on doing it at the encoder side, you may find some tips here: Get some information from HEVC reference software
Point 2: Unlike coding decisions, logging rate (i.e. the number of written bits for different syntax elements) is more complicated at the decoder side than at the encoder. This is particularly true for the fractional bits associated with anything that is encoded in non-EP mode (i.e. with CABAC contexts). So you may do this part at the encoder side. But I am afraid it is not easy.
Point 3: I think the best way to understand the code is to read it line by line. It's very time-consuming, but if you know the standard(s) theoretically, you will probably be able to distinguish the important parts and ignore the rest.
PS: I think there are too many questions, mostly too general, in your post. That makes it a bit difficult for me to answer them all together. So I'll wait for you to take your next step and ask more precise questions.

Implementing Intelligent design sort

This might be a frivolous question, so please have understanding for my poor soul.
After reading this article about Intelligent Design sort (http://www.dangermouse.net/esoteric/intelligentdesignsort.html), which is in no way meant to be serious, I started wondering whether this could be possible.
An excerpt from article says:
The probability of the original input list being in the exact order it's in is 1/(n!). There is such a small likelihood of this that it's clearly absurd to say that this happened by chance, so it must have been consciously put in that order by an intelligent Sorter.
Let's for a second forget about the intelligent Sorter, and think about the possibility that the random arrangement of the members of the array is in some way sorted. Our algorithm should determine the pattern without changing the array's structure.
Is there any way to do this? Speed is not a requirement.
The implementation is very easy actually. The entire point of the article is that you don't actually sort anything. In other words, a correct implementation is a simple NOP. As my preferred language is Java, I'll show a simple in-place implementation in Java as a lambda function:
list->{}
Funny article, I had a good laugh.
If the only thing you're interested in is whether your List is sorted, then you could simply keep an internal sorted flag (defaulting to true for an empty list) and override your add() method to check whether the element you're adding fits the ordering of the List, that is, compare it to the adjacent elements and set the sorted flag appropriately.
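A minimal sketch of that idea, assuming the names SortedAwareList and isSorted() (they are mine, not from any library):

import java.util.ArrayList;

// The list never rearranges anything; it only tracks whether the
// elements happen to arrive in order.
class SortedAwareList<E extends Comparable<E>> extends ArrayList<E> {
    private boolean sorted = true;   // an empty list counts as sorted

    @Override
    public boolean add(E element) {
        if (!isEmpty() && get(size() - 1).compareTo(element) > 0) {
            sorted = false;          // the new element breaks the ordering
        }
        return super.add(element);
    }

    public boolean isSorted() {
        return sorted;
    }
}

For a complete solution the other mutators (add(int, E), set, remove) would need the same treatment, since they can also break or restore the ordering.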

STRIPS representation of monkey in the lab

I have been reviewing some material on the representation of an AI plan given the STRIPS format, and found that different people seem to formulate the same problem in different ways.
For instance, Wikipedia has an example regarding the Monkey in the lab problem. The problem states that:
A box is available that will enable the monkey to reach the bananas hanging from the ceiling if he climbs up on it. Initially, the monkey is at A, the bananas at B, and the box at C. The monkey and the box have height Low, but if the monkey climbs onto the box, he will have height High, the same as the bananas. The actions available to the monkey include Go from one place to another, Push an object from one place to another, ClimbUp onto or ClimbDown from an object, and Grasp or UnGrasp an object. Grasping the object results in holding the object if the monkey and the object are in the same place at the same height.
Here is the Wikipedia plan (please note that it is not matched exactly to this problem description, but it is the same problem. It doesn't seem to implement Ungrasp, which is not important for this discussion):
Now nowhere in this plan can I see that the bananas are located at Level(high), so the only way this could actually be inferred from the plan would be to read through the entire set of Actions and deduce that the Monkey must be at Level(high) to interact with the bananas, hence they must be at Level(high).
Would it be a good idea to put this information in the Initial State, and have something like:
Monkey(m) & Bananas(ba) & Box(bx) & Level(low) & Level(high) & Position(A) & Position(B) & Position(C) & At(m, A, low) & At(ba, B, high) & At(bx, C, low)
It looks quite verbose like that, but at the same time, it allows the reader to understand the scenario just through reading the Initial State. I've also been told that we should not be using constants anywhere in STRIPS, so I thought declaring the A, B, and C as Positions was a good idea.
Is it that some people do it differently (which I feel would kind of ruin the idea of having a standardized language to represent things), or is it that one of the ways I have presented isn't in the correct format? I am new to STRIPS, so it is entirely possible (and likely) that I am missing some key points.
This is not the greatest Wikipedia article ever. The description of STRIPS is accurate, but a little outdated.
Generally you don't need to worry about defining all the variables in the initial state, because the variables are defined by the domain (the P in the quadruple in the linked article). For an intuition as to why: you have an operator for Monkey in your initial state, but you're still introducing a free variable m that is not defined anywhere else. You end up with a chicken-and-egg problem if you try to do it that way, so instead the facts in the system are just propositional variables, which are effectively sentinel values that mean something to the users of the system, not to the system itself.
You are correct that you need to define the level for each item as part of the initial state, but the initial state of the example is actually correct considering the constraints that the bananas are always high, the box is always low, and the monkey is the only thing that changes level. I would probably change the example to have the At proposition take the object in question into account instead of using different proposition names for each object, but that's just a style choice; the semantics are the same.
Operators in STRIPS are generally represented by 3 distinct components:
preconditions - each variable in the preconditions list must exactly match the corresponding variable in the current state (trues must be true, falses must be false), but you ignore all other variables not explicit in the preconditions
add effects - when the action is performed, these are the variables that are added to the state
delete effects - when the action is performed, these are the variables that are deleted from the state
and sometimes a 4th cost component when considering cost optimality
The post conditions listed in your example are the union of the add effects and delete effects. The advantage of separating them will come later when you get into delete relaxation abstractions.
In your proposed initial state you have propositions that contain multiple properties for the same object (e.g. At(bx, C, low)). This is typically avoided in favor of having a proposition for each property of each object in the state. Doing it this way makes you end up with a larger state, but makes for a much simpler implementation since you don't have to decompose a state variable in order to identify the value of a specific property of an object in the preconditions list.
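To make the three components concrete, here is a small sketch (the class name StripsOperator and the string-based propositions are my own choice, not standard STRIPS syntax): an operator is just three sets of ground propositions, and applying it means checking the preconditions against the current state, then removing the delete effects and adding the add effects.

import java.util.HashSet;
import java.util.Set;

// Sketch only: propositions are plain strings such as "At(m,A)" or "Level(m,low)".
class StripsOperator {
    Set<String> preconditions, addEffects, deleteEffects;

    StripsOperator(Set<String> pre, Set<String> add, Set<String> del) {
        preconditions = pre; addEffects = add; deleteEffects = del;
    }

    boolean applicable(Set<String> state) {
        return state.containsAll(preconditions);   // every precondition must hold in the state
    }

    Set<String> apply(Set<String> state) {
        Set<String> next = new HashSet<>(state);
        next.removeAll(deleteEffects);             // delete effects leave the state
        next.addAll(addEffects);                   // add effects enter the state
        return next;
    }
}

Keeping the add and delete effects separate also means that a delete relaxation later amounts to simply ignoring deleteEffects.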

DFA to PDA conversion

I am looking for an algorithm to convert a Deterministic Finite Automata to Push Down Automata.
Any help appreciated.
Thanks!
The PDA version of the DFA would look the same, except that each state transition also pushes nothing onto the stack and pops nothing off the stack,
since a PDA is an extension of a DFA with just one additional feature: the stack.
The transition of a PDA is determined by a triple (current state, input, element at the top of the stack), while the transition of a DFA is determined by a tuple (current state, input); the only difference is the element at the top of the stack. You can convert all the transitions of the DFA by transforming each tuple into a triple, with e (the empty string) inserted as the element at the top of the stack.
And after changing the state, push e (the empty string) onto the stack.
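As a rough sketch of that construction (the names DfaToPda, DfaMove and PdaMove are mine): each DFA transition (q, a) -> q' becomes a PDA transition (q, a, e) -> (q', e), which neither inspects nor changes the stack.

import java.util.ArrayList;
import java.util.List;

// Sketch: lifting DFA transitions to PDA transitions that ignore the stack.
// The empty string "" plays the role of e for both the popped symbol and the pushed string.
class DfaToPda {
    record DfaMove(String from, char input, String to) {}
    record PdaMove(String from, char input, String pop, String to, String push) {}

    static List<PdaMove> convert(List<DfaMove> dfaMoves) {
        List<PdaMove> pdaMoves = new ArrayList<>();
        for (DfaMove m : dfaMoves) {
            // same state change, but pop nothing and push nothing
            pdaMoves.add(new PdaMove(m.from(), m.input(), "", m.to(), ""));
        }
        return pdaMoves;
    }
}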
I'm answering this old question just in case someone else looks at it.
The conversion of a DFA to a PDA can be automated easily by just adding a stack. But there could be changes to the semantics of the DFA, and if you make such a change manually you could end up with a PDA with fewer states. I faced this problem recently. It was somewhat like this:
In a system (not a compiler or anything like that) the code written earlier used a DFA for various reasons. The transitions occur as the user progresses through the code using various functions. After some time a new set of transition functions arrived, which could be used in any order, and the state after any of these new functions could be changed back to the previous state by one of these functions. The only way to solve this with an FST was to add a large number of new states to support this behavior, which is a huge amount of work. Instead I just changed from a DFA to a PDA. The stack keeps track of the transitions very nicely, and the problem is solved with far fewer states. Actually I only had to add N states, where N is the number of new functions that arrived.
I do not know if someone can automate this kind of process easily. But there you go, just in case someone is curious about it.
The Wikipedia article says:
Pushdown automata differ from finite state machines in two ways:
They can use the top of the stack to decide which transition to take.
They can manipulate the stack as part of performing a transition.
Pushdown automata choose a transition by indexing a table by input signal, current state, and the symbol at the top of the stack. This means that those three parameters completely determine the transition path that is chosen. Finite state machines just look at the input signal and the current state: they have no stack to work with. Pushdown automata add the stack as a parameter for choice.
...
Pushdown automata are equivalent to context-free grammars: for every context-free grammar, there exists a pushdown automaton such that the language generated by the grammar is identical with the language generated by the automaton, which is easy to prove. The reverse is true, though harder to prove: for every pushdown automaton there exists a context-free grammar such that the language generated by the automaton is identical with the language generated by the grammar.
