Show that emptiness and finiteness are unsolvable for linear bounded automata

Show that emptiness and finiteness are unsolvable for linear bounded automata. I didn't understand this; can anyone help me out?

Solving emptiness means you can determine whether or not a linear bounded automaton accepts anything at all, and solving finiteness means you can determine whether or not a linear bounded automaton accepts a finite set.
The proof that emptiness for linear bounded automata is unsolvable depends on the fact that it is also unsolvable for Turing machines.
For every Turing machine, there is a linear bounded automaton that accepts the set of strings which are valid halting computations for the Turing machine.
If a Turing machine accepts nothing, then the set of strings which are valid halting computations is empty, and the linear bounded automaton which accepts this Turing machine's halting computations will also accept nothing. If it were possible to determine whether or not a linear bounded automaton accepts nothing, then it would be possible to determine whether or not a Turing machine accepts nothing, but this is a contradiction, because it is not possible to tell whether or not a Turing machine accepts nothing.
The proof for finiteness being unsolvable for linear bounded automata is the same. Each accepted string contributes one valid halting computation, so the set of computations the linear bounded automaton accepts is finite exactly when the set the Turing machine accepts is finite. If it were possible to determine whether a linear bounded automaton accepts a finite set, it would also be possible to determine whether a Turing machine accepts a finite set, but this is a contradiction, because it is impossible to tell whether or not a Turing machine accepts a finite set.
These problems are unsolvable for Turing machines because, by Rice's theorem, only trivial properties of the recursively enumerable sets are solvable. The set K = { i | M_i(i) halts } is undecidable, and deciding emptiness or finiteness would let you decide K, which is why these properties are unsolvable for Turing machines.
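To make the key construction concrete, here is a minimal Python sketch (an illustrative encoding of TMs and configurations, not part of the original proof) of the check involved: a computation history is a sequence of configurations, and verifying it only requires comparing adjacent configurations, which an LBA can do within the space of the history string itself.

```python
def step(tm, config):
    """Apply one move of the TM; return the successor configuration, or None if it halts here."""
    state, tape, head = config
    symbol = tape[head] if head < len(tape) else '_'      # '_' is the blank symbol
    if (state, symbol) not in tm['delta']:
        return None                                       # no move defined: the TM halts
    new_state, write, move = tm['delta'][(state, symbol)]
    cells = list(tape) + ['_'] * (head + 1 - len(tape))
    cells[head] = write
    head = head + 1 if move == 'R' else max(head - 1, 0)  # convention: stay at cell 0 on a left move at the edge
    return (new_state, ''.join(cells), head)

def is_valid_halting_computation(tm, w, history):
    """Check that history = [C0, C1, ..., Ck] is a genuine halting computation of tm on input w."""
    if history[0] != (tm['start'], w or '_', 0):          # C0 must be the start configuration
        return False
    for cur, nxt in zip(history, history[1:]):
        if step(tm, cur) != nxt:                          # each move must follow the transition table
            return False
    return step(tm, history[-1]) is None                  # Ck must be a halting configuration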


Sub-Turing Complete Class of computational models

Many programming languages and systems are Turing-complete; they can simulate any Turing machine, and therefore can simulate any finite state machine as well.
Consider the following informal model:
Language A defines a finite set of NAND gates and their connections to each other, specifying which gates receive input and which produce output.
In this model, any finite state machine can be built: the NANDs can form latches, registers, buses, and control structures, and ultimately any finite state machine, including full computers and other systems.
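For instance, here is a minimal Python sketch (hypothetical names, not part of the original question) of the basic memory element such a language could describe: an SR latch made of two cross-coupled NAND gates.

```python
def nand(a, b):
    """A single NAND gate, the model's only primitive."""
    return 0 if (a and b) else 1

def sr_latch(s, r, q, q_bar):
    """One settling pass of a cross-coupled NAND latch (active-low S and R inputs)."""
    for _ in range(4):                          # iterate until the feedback loop stabilizes
        q, q_bar = nand(s, q_bar), nand(r, q)
    return q, q_bar

q, qb = sr_latch(s=0, r=1, q=0, q_bar=1)        # pulling S low sets the latch: Q becomes 1
q, qb = sr_latch(s=1, r=1, q=q, q_bar=qb)       # with both inputs high, the stored bit is held
print(q, qb)                                    # 1 0
```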
The model, however, cannot simulate an infinite tape, only a tape of finite size. It cannot simulate an arbitrary Turing machine, because it may not have the memory to do so.
Are Language A and all other systems that can simulate any finite state machine considered Turing complete? Is there a separate class for them, or is there an opportunity to define such?
As you have realized, there is a hierarchy - with potentially infinitely many levels - of classes of languages, including the regular languages (recognizable by finite automata) and the decidable languages (those decided by a Turing machine that always halts).
All real computers - including theoretical models that can be used to construct them, like yours involving NAND gates - are not Turing equivalent, because even in theory they cannot access an infinite tape. In practice, time, space, and matter are insufficient in physical reality to allow for real Turing-equivalent computation. All physical computation can be carried out by a finite automaton. There are even regular languages too complex, in practice, to ever accept by constructing a real finite state machine or general-purpose computer.
Modeling languages as being of a type higher than regular is a convenience - a lie in the same way that modeling matter as continuous (e.g., when computing moment of inertia) is a lie. Matter is really made of discrete molecules, which in turn are composed of smaller discrete particles.

Can a Turing machine decide if a formal model of computation is Turing complete?

That is, can a Turing machine take a formal system, S, as its input and decide if S is Turing complete?
I think this is an undecidable problem, am I right?
If it is undecidable, why can we (as humans) decide Turing completeness?
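For intuition on the asker's hunch, here is a hedged Python sketch of the standard reduction from the halting problem; `simulate` stands in for a hypothetical universal interpreter, and none of these names come from the question:

```python
def build_system(M, x):
    """Given machine M and input x, build a system S that is Turing complete iff M halts on x."""
    def S(program, inp):
        simulate(M, x)                   # runs forever whenever M does not halt on x
        return simulate(program, inp)    # reached only if M halts: S can then run anything
    return S
# A decider for "is S Turing complete?" would therefore decide halting, a contradiction.
```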
Hmm :-) Deciding Turing completeness is not central to determining whether or not the human brain is Turing complete; one can go through mechanical steps to determine Turing completeness, so that is not the issue.
The key issue is whether or not the human brain is hyper-computational, or hyper-Turing.
One test would be an answer of "yes" to one of the following questions: can a human being predict when a Turing machine will halt (i.e., solve the halting problem)?
Or: is the human brain not subject to Rice's theorem?
Trivially, the answer to both questions seems to be no in the general case, because one can imagine a TM with an infinitely long tape just jumping around; we can never tell when it will hit a cell that tells it to stop.
The seeming hypercomputing capability comes from the fact that we mistake ordinary computers, software, mechanical processes, and so on for TMs.
Rice's theorem can be side-stepped in the "special case" of systems that exhibit the Markov property and have a finite Place/Transition-net representation. Our general environment has these special cases in abundance, so it may seem as if the human brain is capable of hypercomputing, because it tends to jump to general conclusions from special cases. It probably is not, since we as human beings have yet to experience an interaction with a true Turing machine.

Is a Turing machine a real device or an imaginary concept?

While studying Turing machines and PDAs, I thought that the first computing device was the Turing machine.
Hence, I thought that there existed a practical machine called the Turing machine, whose states could be represented by some special devices (say, flip-flops), and which could accept magnetic tapes as input.
I therefore asked how input strings are represented on magnetic tape, but from the answer, and from the details given in my book, I came to understand that the Turing machine is somewhat hypothetical.
My question is: how would a Turing machine be implemented practically? For example, how is it used to check spelling errors in our current processors?
Are Turing machines outdated? Or are they still being used?
When Turing first devised what are now called Turing machines, he was doing so for purely theoretical reasons (they were used to prove the existence of undecidable problems) and without having actually constructed one in the real world. Fast forward to the present, and with the exception of hobbyists building Turing machines for the fun of doing so, TMs are essentially confined to Theoryland.
Turing machines aren't used in practice for several reasons. For starters, it's impossible to build a true TM, since you'd need infinite resources to construct the infinite tape. (You could imagine building TMs with a limited amount of tape, then adding more tape as necessary, though.) Moreover, Turing machines are inherently slower than other models of computation because of the sequential nature of their data access: a Turing machine cannot, for example, jump into the middle of an array without first walking across all the elements it wants to skip. On top of that, Turing machines are extremely difficult to design. Try writing a Turing machine to sort a list of 32-bit integers, for example. (Actually, please don't. It's really hard!)
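To see both points at once - how simple the model is, and how tedious programming it gets - here is a minimal sketch of a TM simulator running a binary-increment machine (the encoding is illustrative, not standard):

```python
def run_tm(delta, state, tape, head, halt_states):
    cells = dict(enumerate(tape))                  # sparse tape; missing cells are blank '_'
    while state not in halt_states:
        symbol = cells.get(head, '_')
        state, write, move = delta[(state, symbol)]
        cells[head] = write
        head += 1 if move == 'R' else -1
    return ''.join(cells[i] for i in sorted(cells)).strip('_')

# Increment a binary number; the head starts on the rightmost bit.
increment = {
    ('carry', '1'): ('carry', '0', 'L'),           # 1 plus carry is 0; carry moves left
    ('carry', '0'): ('done',  '1', 'L'),           # 0 plus carry is 1; finished
    ('carry', '_'): ('done',  '1', 'L'),           # fell off the left edge: prepend a 1
}

print(run_tm(increment, 'carry', '1011', head=3, halt_states={'done'}))  # prints 1100
```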
This raises the question... why study Turing machines at all? Fortunately, there are a huge number of reasons:
To reason about the limits of what could possibly be computed. Because Turing machines are capable of simulating any computer on planet Earth (or, according to the Church-Turing thesis, any physically realizable computing device), if we can show the limits of what Turing machines can compute, we demonstrate the limits of what we could ever hope to accomplish on an actual computer.
To formalize the definition of an algorithm. Why is binary search an algorithm while the statement "guess the answer" is not? In order to answer this question, we have to have a formal model of what a computer is and what an algorithm means. Having the Turing machine as a model of computation allows us to rigorously define what an algorithm is. No one ever actually wants to translate algorithms into this format, but the ability to do so gives the field of algorithms and computability theory a firm mathematical grounding.
To formalize definitions of deterministic and nondeterministic algorithms. Probably the biggest open question in computer science right now is whether P = NP. This question only makes sense if you have formal definitions for P and NP, and these in turn require definitions of deterministic and nondeterministic computation (though technically they could be defined using second-order logic). Having the Turing machine then allows us to talk about important problems in NP, along with giving us a way to find NP-complete problems. For example, the proof that SAT is NP-complete uses the fact that SAT can be used to encode a Turing machine and its execution on an input.
Hope this helps!
It is a conceptual device that is not realizable (due to the requirement of an infinite tape). Some people have built physical realizations of a Turing machine, but these are not true Turing machines due to physical limitations.
Here's a video of one: http://www.youtube.com/watch?v=E3keLeMwfHY
Turing machines are not exactly physical machines; they are conceptual machines. The concept is hypothetical and very difficult to implement in the real world, since in general even small, simple problems would require an infinite tape.
It's a theoretical machine, as the following paragraph from Wikipedia explains:
A Turing machine is a theoretical device that manipulates symbols on a strip of tape according to a table of rules. Despite its simplicity, a Turing machine can be adapted to simulate the logic of any computer algorithm, and is particularly useful in explaining the functions of a CPU inside a computer.
This machine, along with other machines like the non-deterministic machine (which doesn't exist in reality), is very useful for calculating complexity and for proving that one algorithm is harder than another, that a problem is not solvable, etc.
The Turing machine (TM) is a mathematical model for computing devices, and one of the simplest models that can really compute. In fact, the computer that you are using is, in essence, a very big (but finite) TM. The TM is not outdated. We have other models for computation, but this one influenced the building of today's computers, and for that we owe a lot to Alan Turing, who proposed the model in 1936.

What are the consequences of saying a non-deterministic Turing Machine can solve NP in polynomial time?

These days I have been studying NP problems, computational complexity, and theory. I believe I have finally grasped the concept of a Turing machine, but I have a couple of doubts.
I can accept that a non-deterministic Turing machine has several options of what to do for a given state and symbol being read, and that it will always pick the best option, as stated by Wikipedia:
How does the NTM "know" which of these actions it should take? There are two ways of looking at it. One is to say that the machine is the "luckiest possible guesser"; it always picks the transition which eventually leads to an accepting state, if there is such a transition. The other is to imagine that the machine "branches" into many copies, each of which follows one of the possible transitions. Whereas a DTM has a single "computation path" that it follows, an NTM has a "computation tree". If any branch of the tree halts with an "accept" condition, we say that the NTM accepts the input.
What I cannot understand is: since this is an imaginary machine, what do we gain from saying that it can solve NP problems in polynomial time? I mean, I could also theorize of a magical machine that solves NP problems in O(1); what do I gain from that if it may never exist?
Thanks in advance.
A non-deterministic Turing machine is a tricky concept to grasp. Try some other viewpoints:
Instead of running a magical Turing machine that is the luckiest possible guesser, run an even more magical meta-machine that sets up an infinite number of randomly guessing independent Turing machines in parallel universes. Every possible sequence of guesses is made in some universe. If in at least one of the universes the machine halts and accepts the input, that's enough: the problem instance is accepted by the meta-machine that set up these parallel universes. If in all universes the machine rejects or fails to halt, the meta-machine rejects the instance.
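A toy rendering of that many-universes picture (all names illustrative): simulate the branching by exploring every branch of the computation tree breadth-first, accepting as soon as any branch accepts.

```python
from collections import deque

def ntm_accepts(start, successors, is_accepting, max_steps=10**6):
    """Explore all branches of the computation tree; accept if any branch accepts."""
    frontier = deque([start])                    # one entry per "universe"
    for _ in range(max_steps):                   # cap the search so the sketch always returns
        if not frontier:
            return False                         # every universe halted without accepting
        config = frontier.popleft()
        if is_accepting(config):
            return True                          # some universe accepted: the NTM accepts
        frontier.extend(successors(config))      # branch into one universe per possible move
    return False                                 # budget exhausted (undecidable in general)
```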
Instead of any kind of guessing or branching, think of one person trying to convince another person that the instance should be accepted. The first person provides the set of choices to be made by the non-deterministic Turing machine, and the second person checks whether the machine accepts the input with those choices. If it does, the second person is convinced; if it does not, the first person has failed (which may be either because the instance cannot be accepted with any sequence of choices, or because the first person chose a poor sequence of choices).
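In code, that second person is just a polynomial-time checker. Here is a small sketch for graph 3-coloring, where the proposed coloring plays the role of the sequence of choices (names are illustrative):

```python
def verify_3_coloring(edges, coloring):
    """The 'second person': check the proposed coloring in polynomial time."""
    return all(coloring[u] != coloring[v] for u, v in edges)

triangle = [(0, 1), (1, 2), (0, 2)]
print(verify_3_coloring(triangle, {0: 'R', 1: 'G', 2: 'B'}))  # True: the verifier is convinced
print(verify_3_coloring(triangle, {0: 'R', 1: 'R', 2: 'B'}))  # False: the prover has failed
```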
Forget Turing machines. A problem is in NP if it can be described by a formula in existential second-order logic. That is, you take plain-vanilla first-order logic and allow tacking existential quantifiers over sets, relations, and functions onto the beginning. For example, graph three-colorability can be described by a formula that starts with existential quantification over the colors (sets of nodes):
∃ R ∃ G ∃ B
Every node must be colored:
∃ R ∃ G ∃ B (∀ x (R(x) ∨ G(x) ∨ B(x)))
and no two adjacent nodes may have the same color – call the edge relation E:
∃ R ∃ G ∃ B (∀ x (R(x) ∨ G(x) ∨ B(x))) ∧ (∀ x,y ¬ (E(x,y) ∧ ((R(x) ∧ R(y)) ∨ (G(x) ∧ G(y)) ∨ (B(x) ∧ B(y)))))
The existential quantification over second-order variables is like a non-deterministic Turing machine making perfect guesses. If you want to convince someone that a formula ∃ X (...) is true, you can start by giving the value of X. That polynomial-time NTMs and these formulas are not just alike but actually equivalent is Fagin's theorem, which started the field of descriptive complexity: complexity classes characterized not by Turing machines but by classes of logical formulas.
You also said
I could also theorize of a magical machine that solves NP problems in O(1)
Yes, you can. These are called oracle machines (no relation to the DBMS) and they have yielded interesting results in complexity theory. For example, the Baker–Gill–Solovay theorem states that there are oracles A and B such that for Turing machines that have access to A, P=NP, but for Turing machines that have access to B, P≠NP. (A is a very powerful oracle that makes non-determinism irrelevant; the definition of B is a bit complicated and involves a diagonalization trick.) This is a kind of a meta-result: any proof solving the P vs NP question must be sensitive enough to the definition of a Turing machine that it fails when you add some kinds of oracles.
The value of non-deterministic Turing machines is that they offer a comparatively simple, computational characterization of the complexity class NP (and others): instead of computation trees or second-order logical formulas, you can think of an almost-ordinary computer that has been (comparatively) slightly modified so that it can make perfect guesses.
What you gain from that is that you can prove that a problem is in NP by proving that it can be solved by an NTM in polynomial time.
In other words you can use NTMs to find out whether a given problem is in NP or not.
By definition, NP stands for nondeterministic polynomial time, as can be looked up on Wikipedia.
An incarnation of a nondeterministic Turing machine that randomly chooses and examines (or assembles) the next potential solution will solve an NP problem in polynomial time with some probability (it would solve the problem in poly time with absolute certainty if it were the "luckiest possible guesser").
Therefore, saying that an NTM can solve a problem in polynomial time effectively means that that problem is in NP. This again is equivalent to the definition of the NP class of problems.
I think your answer is in your question. In other words, given a problem, you can prove that it is an NP problem if you can find an NTM that solves it in polynomial time.
NP problems are a special class of problems, and the NTM is just a tool to check if the given problem belongs to the class or not.
Note that the NTM is not a specific machine - it is a whole class of machines with well defined rules of what they can and cannot do. In order to use "magical" machines, you need to define them, and show which complexity class of problems they correspond to.
See http://en.wikipedia.org/wiki/Computational_complexity_theory#Complexity_classes for more info.
From the Hebrew Wikipedia: "The NTM is mainly a tool for thinking; it is impossible to actually implement such a machine." You can replace the term "NTM" with "an algorithm that at every step tries all possible steps" or "an algorithm that at every step chooses the best possible next step"... and I think you understand the rest. The NTM is here only to help us visualize such algorithms. You can see how it's supposed to help you visualize this in Pascal Cuoq's answer.
What we gain is this: if we had the magical power to guess the correct step, and that guess always turned out to be correct, we could solve NP-complete problems in polynomial time. Of course, we can't actually "guess" the correct step, so the machine is imaginary. But just as imaginary numbers are applicable to real-world problems, the consequences of this imaginary machine can be theoretically useful.
One positive aspect of recasting the original problems this way is that we can tackle them from different angles. In a theoretical domain, that is a good thing, because we have (1) more approaches we can take (and thus more papers) and (2) more tools we can use, whenever the problems can be phrased in terms of other fields.

What are the useful limits of Linear Bounded Automata compared to Turing Machines?

There are languages that a Turing machine can handle that an LBA can't, but are there any useful, practical problems that LBAs can't solve but TMs can?
An LBA is just a Turing machine with a finite tape, and actual computers have finite storage, so it would seem to me that there's nothing of practical importance that an LBA can't do. Except for the fact that a linear bounded automaton has not just a finite tape, but a tape whose size is a linear function of the size of the input. Does this linear bound restrict the LBA in some way?
Are there problems that an LBA can't cope with but an exponentially bounded automaton could (if such things exist)?
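For concreteness, here is a sketch of the in-place style of computation an LBA is allowed: deciding the classic context-sensitive language of strings a^n b^n c^n using only the input's own cells (Python stands in for the tape; names are illustrative):

```python
import re

def accepts_anbncn(s):
    """Decide {a^n b^n c^n} reusing only the input's own cells, LBA-style."""
    if not re.fullmatch(r'a*b*c*', s):   # the letters must appear in order
        return False
    tape = list(s)                       # the "tape" is exactly the input
    while 'a' in tape:
        for ch in 'abc':                 # cross off one a, one b, one c per sweep
            if ch not in tape:
                return False             # the counts don't match
            tape[tape.index(ch)] = 'X'   # mark in place; no extra space used
    return set(tape) <= {'X'}            # nothing but crossed-off cells may remain

print(accepts_anbncn('aabbcc'))          # True
print(accepts_anbncn('aabbc'))           # False
```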
I'm going to go out on a limb and say "no". Pretty much every programming language that we use today is context-sensitive (actually not even fully context-sensitive, just slightly stronger than context-free, IIRC). And obviously, if we can't program it, we don't really care about it...
OTOH, this all depends on your definition of "interesting"... Ackermann's function clearly fits into this category... is that interesting?
The Wikipedia article on context-sensitive languages states that any recursive language (that is, one decidable by a Turing machine) whose decision problem is EXPSPACE-hard is not context-sensitive, and therefore cannot be recognized by an LBA. It gives an example of such a language: the set of pairs of equivalent regular expressions with exponentiation.
