When I was studying Turing machines and PDAs, I thought that the first computing device was the Turing machine.
Hence, I thought that there existed a practical machine called the Turing machine, whose states could be represented by special devices (say, flip-flops), and which could accept magnetic tapes as input.
So I asked how input strings are represented on magnetic tape, but from the answer, and from the details given in my book, I came to understand that a Turing machine is somewhat hypothetical.
My question is: how would a Turing machine be implemented practically? For example, how would it be used to check spelling errors in our current word processors?
Are Turing machines outdated? Or are they still being used?
When Turing first devised what are now called Turing machines, he was doing so for purely theoretical reasons (they were used to prove the existence of undecidable problems) and without having actually constructed one in the real world. Fast forward to the present, and with the exception of hobbyists building Turing machines for the fun of doing so, TMs are essentially confined to Theoryland.
Turing machines aren't used in practice for several reasons. For starters, it's impossible to build a true TM, since you'd need infinite resources to construct the infinite tape. (You could imagine building TMs with a limited amount of tape, then adding more tape as necessary, though.) Moreover, Turing machines are inherently slower than other models of computation because of the sequential nature of their data access. A Turing machine cannot, for example, jump into the middle of an array without first walking across all the elements it wants to skip. On top of that, Turing machines are extremely difficult to design. Try writing a Turing machine to sort a list of 32-bit integers, for example. (Actually, please don't. It's really hard!)
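To get a feel for how painful that is, here is a minimal Turing machine simulator sketched in Python (the transition-table format and the run_tm helper are invented for illustration, not any standard library). Even flipping the bits of a binary string needs an explicitly spelled-out state table:

    # Minimal Turing machine simulator (illustrative sketch).
    # transitions maps (state, symbol) -> (new_state, symbol_to_write, head_move).
    def run_tm(transitions, tape, state="start", blank="_", max_steps=10_000):
        cells = dict(enumerate(tape))   # sparse tape: position -> symbol
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = cells.get(head, blank)
            state, write, move = transitions[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells))

    # Example: flip every bit of a binary string, then halt on the first blank.
    flip = {
        ("start", "0"): ("start", "1", "R"),
        ("start", "1"): ("start", "0", "R"),
        ("start", "_"): ("halt", "_", "R"),
    }
    print(run_tm(flip, "10110"))  # prints 01001_ (the trailing blank is the halt cell)

A two-state bit-flipper is about as easy as Turing machines get; something like sorting 32-bit integers would need states to mark positions, shuttle values around, and compare numbers bit by bit.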
This raises the question... why study Turing machines at all? Fortunately, there are a huge number of reasons to do this:
To reason about the limits of what could possibly be computed. Because Turing machines are capable of simulating any computer on planet Earth (or, according to the Church-Turing thesis, any physically realizable computing device), if we can show limits on what Turing machines can compute, we demonstrate limits on what we could ever hope to accomplish on an actual computer.
To formalize the definition of an algorithm. Why is binary search an algorithm while the statement "guess the answer" is not? In order to answer this question, we have to have a formal model of what a computer is and what an algorithm means. Having the Turing machine as a model of computation allows us to rigorously define what an algorithm is. No one ever actually wants to translate algorithms into this format, but the ability to do so gives the field of algorithms and computability theory a firm mathematical grounding.
To formalize definitions of deterministic and nondeterministic algorithms. Probably the biggest open question in computer science right now is whether P = NP. This question only makes sense if you have formal definitions for P and NP, and these in turn require definitions of deterministic and nondeterministic computation (though technically they could be defined using second-order logic). Having the Turing machine then allows us to talk about important problems in NP, along with giving us a way to find NP-complete problems. For example, the proof that SAT is NP-complete uses the fact that SAT can be used to encode a Turing machine and its execution on an input.
Hope this helps!
It is a conceptual device that is not realizable (due to the requirement of infinite tape). Some people have built physical realizations of a Turing machine, but these are not true Turing machines due to physical limitations.
Here's a video of one: http://www.youtube.com/watch?v=E3keLeMwfHY
Turing machines are not exactly physical machines; they are conceptual machines. The Turing machine is a hypothetical construct, and it is impossible to implement one fully in the real world, since even small, easy problems would require an infinite tape.
It's a theoretical machine. The following paragraph is from Wikipedia:
A Turing machine is a theoretical device that manipulates symbols on a strip of tape according to a table of rules. Despite its simplicity, a Turing machine can be adapted to simulate the logic of any computer algorithm, and is particularly useful in explaining the functions of a CPU inside a computer.
This machine, along with other machines like the non-deterministic Turing machine (which doesn't exist in the real world), is very useful for analyzing complexity and for proving that one problem is harder than another, that a problem is not solvable, etc.
A Turing machine (TM) is a mathematical model of a computing device. It is the smallest model that can really compute. In fact, the computer that you are using is a very big TM. The TM is not outdated. We have other models of computation, but this one was used to build today's computers, and because of that we owe a lot to Alan Turing, who proposed this model in 1936.
Related
Many programming languages and systems are Turing-complete; they can simulate any Turing machine, and therefore can simulate any finite state machine as well.
Consider the following informal model:
Language A defines a finite set of NAND gates, their connections to each other, and which gates receive inputs and which produce outputs.
In this model, any finite state machine can be built. The NANDs can form latches, registers, buses and control structures, and in the end any finite state machine, including full computers and other systems.
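(To make the model concrete, here is a small Python sketch of that idea - the gate and register names are invented for illustration. NAND alone generates all combinational logic, and a clocked feedback path turns it into a finite state machine:

    # NAND is functionally complete: every other gate can be built from it.
    def nand(a, b):  return 0 if (a and b) else 1
    def not_(a):     return nand(a, a)
    def and_(a, b):  return not_(nand(a, b))
    def or_(a, b):   return nand(not_(a), not_(b))
    def xor_(a, b):  return or_(and_(a, not_(b)), and_(not_(a), b))

    # A clocked feedback path turns combinational logic into a finite state
    # machine: here, a 1-bit toggle register that flips whenever enable is 1.
    def step(state, enable):
        return xor_(state, enable)

    state = 0
    for enable in [1, 1, 0, 1]:
        state = step(state, enable)
    print(state)  # 0 -> 1 -> 0 -> 0 -> 1; final state is 1

With n such state bits the machine has at most 2^n states - always finite, however large n is.)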
The model, however, does not have the ability to simulate an infinite tape, only a tape of finite size. It cannot simulate an arbitrary Turing machine, because it may not have the memory to do so.
Are Language A and all other systems that can simulate any finite state machine considered Turing complete? Is there a separate class for them, or is there an opportunity to define such?
As you have realized, there is a hierarchy - with potentially infinitely many levels - of classes of languages, including the regular languages (recognizable by finite automata) and the decidable languages (decided by a Turing machine that always halts).
All real computers - including theoretical models which can be used to construct them, like yours involving NAND gates - are not Turing-equivalent, because they cannot even in principle access an infinite tape. In practice, time, space and matter are insufficient in physical reality to allow for real Turing-equivalent computation. All physical computation can be carried out by a finite automaton. There are also regular languages that are, in practice, too complex to ever accept by constructing a real finite state machine or general-purpose computer.
Modeling languages as being of a type higher than regular is for convenience - it is a lie in the same way that modeling matter as continuous (e.g., when computing moment of inertia) is a lie. Matter is really made of discrete molecules, which in turn are composed of smaller discrete particles.
In No Silver Bullet, Fred Brooks claims that "software systems have orders of magnitude more states than computers do", which makes them harder to design and test (and chips are already pretty hard to test!).
This is counter-intuitive to me: any running software system can be mapped to a computer in a certain state, and it seems like a computer could be in a state that doesn't represent a running software system. Thus, a computer should have many more potential states than a software system.
Does Brooks intend some particular meaning that I'm missing? Or does a computer really have fewer potential states than the software systems it can run?
Well, let's first think about Turing machines.
A Turing machine consists of an unbounded tape which contains symbols, a head, and a small control unit, which is a finite state automaton that controls how the machine reads, moves and modifies the symbols on the tape.
Fact: there exist universal Turing machines, i.e. machines that read from the tape the description of another Turing machine and execute it on some given input. In other words: even with just a finite number of states in the control unit, such machines can simulate every other possible Turing machine.
Reading the description of a Turing machine is the same as reading a software program stored in memory.
In this sense, if you count the number of states of the hardware as the number of states in the control unit, and take the software to be the description of a Turing machine written on the tape, then yes: finite hardware can simulate infinitely many pieces of software, and some of that software will surely describe Turing machines with more states than the machine simulating them.
If, however, you consider the whole state of the computation as the state - i.e. including the state of the tape - then you are right: every simulation corresponds to specific possible states in this sense, and there are many states that are invalid or unreachable.
In the same way, modern computers consist of hardware that implements this control unit, plus memory, which is our tape. If you do not consider the state of the memory as part of the state of the hardware, the same applies: a finite computer, given enough memory, could execute every possible program on every possible input, yet its controlling parts are only finite.
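To make the universal-machine point concrete, here is a toy interpreter in Python (the three-instruction register machine below is invented for illustration; it loosely resembles a Minsky machine). The interpreter itself - the "control unit" - is finite and fixed, yet the programs it runs are just data and can be arbitrarily large:

    # A toy 'universal' interpreter: finitely many instruction kinds,
    # but the program it runs (data in memory) can be arbitrarily large,
    # just as a universal TM reads a machine description from its tape.
    def run(program, registers):
        pc = 0
        while pc < len(program):
            op, *args = program[pc]
            if op == "inc":              # inc r: registers[r] += 1
                registers[args[0]] += 1
            elif op == "dec":            # dec r: registers[r] -= 1
                registers[args[0]] -= 1
            elif op == "jz":             # jz r, target: jump if registers[r] == 0
                if registers[args[0]] == 0:
                    pc = args[1]
                    continue
            pc += 1
        return registers

    # A program is plain data: add r1 into r0 by repeated increment/decrement.
    add = [("jz", 1, 4), ("inc", 0), ("dec", 1), ("jz", 2, 0)]
    print(run(add, {0: 3, 1: 4, 2: 0}))  # {0: 7, 1: 0, 2: 0}

The interpreter's own state space never grows; only the data it operates on does - which is exactly the sense in which finite hardware can run unboundedly many programs.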
That said, I wouldn't take such assertions too literally or too seriously...
The point is simply: the number of states of a software system grows extremely rapidly - a program with just 100 independent boolean flags already has 2^100 possible states.
That is, can a Turing machine take a formal system, S, as its input and decide if S is Turing complete?
I think this is an undecidable problem - am I right?
If it is undecidable, why can we (as humans) decide Turing completeness?
Hmm :-) deciding Turing completeness is not central to determining whether or not the human brain is TM-complete; one can go through mechanical steps to determine Turing completeness, so that is not an issue.
The key issue is whether or not the human brain is hyper-computational [1] or hyper-Turing.
One test would be to get an answer of "Yes" to one of the following questions: can a human being predict whether a Turing machine will halt (i.e. solve the halting problem)?
Or: is the human brain not subject to Rice's theorem?
Trivially, the answer to both questions seems to be No in the general case: one can imagine a TM with an infinitely long tape just jumping around, and we can never tell when it will hit a cell that tells it to stop.
The seeming hypercomputing capability comes from the fact that we mistake normal computers/software/mechanical processes etc. for TMs.
Rice's theorem can be side-stepped in the "special case" of systems that exhibit the Markov property and have a finite Place/Transition-net representation. Our general environment has these "special cases" in abundance, so it may seem as if the human brain is capable of hyper-computing, because it tends to jump from special cases to general conclusions. However, it probably is not, since we as human beings have yet to experience an interaction with a Turing machine.
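For anyone wondering why no mechanical procedure can decide halting in the general case, the classic diagonalization argument fits in a few lines of Python. Note that halts here is a hypothetical oracle, not a real function - the whole point of the sketch is that no correct implementation of it can exist:

    def halts(f, x):
        # Hypothetical oracle: return True iff f(x) eventually halts.
        # The construction below shows no correct implementation can exist.
        raise NotImplementedError("provably impossible")

    def paradox(f):
        if halts(f, f):     # if f would halt when run on its own source...
            while True:     # ...then loop forever
                pass
        return "halted"     # ...otherwise, halt immediately

    # Does paradox(paradox) halt?
    #  - If halts(paradox, paradox) returned True, paradox(paradox) loops forever.
    #  - If it returned False, paradox(paradox) halts immediately.
    # Either way the oracle answered wrongly, so it cannot exist.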
There are languages that a Turing machine can handle that an LBA can't, but are there any useful, practical problems that LBAs can't solve but TMs can?
An LBA is just a Turing machine with a finite tape, and actual computers have finite storage, so it would seem to me that there's nothing of practical importance that an LBA can't do - except that a linear bounded automaton has not just a finite tape, but a tape whose size is a linear function of the size of the input. Does the linearity of the finiteness restrict the LBA in some way?
Are there problems that an LBA can't cope with, but an Exponentially Bounded Automaton could (if such things exist)?
I'm going to go out on a limb and say "no". Pretty much every programming language that we use today is context-sensitive. (Actually not even context-sensitive, only slightly stronger than context-free, IIRC.) And obviously, if we can't program it, we don't really care about it...
OTOH, this all depends on your definition of "interesting"... Ackermann's function clearly fits into this category... is that interesting?
The Wikipedia article for context-sensitive languages states that any recursive language (that is, one recognizable by a Turing machine) whose decision problem is EXPSPACE-hard is not context-sensitive, and therefore cannot be recognized by an LBA. It gives an example of such a language: the set of pairs of equivalent regular expressions with exponentiation.
I was recently reading about artificial life and came across the statement, "Conway’s Game of Life demonstrates enough complexity to be classified as a universal machine." I only had a rough understanding of what a universal machine is, and Wikipedia only brought me as close to understanding as Wikipedia ever does. I wonder if anyone could shed some light on this very sexy statement?
Conway's Game of Life seems, to me, to be a lovely distraction with some tremendous implications: I can't quite make the leap between that and a calculator. Is that even the leap that I should be making?
Paul Rendell implemented a Turing machine in Life. Gliders represent signals, and interactions between them are gates and logic that together can create larger components which implement the Turing machine.
Basically, any automatic machinery that can implement AND, OR, and NOT can be combined together in complex enough ways to be Turing-complete. It's not a useful way to compute, but it meets the criteria.
You can build a Turing machine out of Conway's Life - although it would be pretty horrendous.
The key is in gliders (and related patterns) - these move (slowly) along the playing field, so they can represent streams of bits (the presence of a glider for a 1 and the absence for a 0). Other patterns can be built to take in two streams of gliders (at right angles) and emit another stream of bits corresponding to the AND/OR/etc. of the original two streams.
EDIT: There's more on this on the LogiCell web site.
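For anyone who wants to experiment, the rules that make gliders (and hence glider-stream logic) possible fit in a few lines of Python; this is a minimal, unoptimized Life step using a set-of-live-cells representation:

    from collections import Counter

    def life_step(live):
        # One generation of Conway's Life; live is a set of (x, y) cells.
        counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # Birth on exactly 3 live neighbours; survival on 2 or 3.
        return {c for c, n in counts.items()
                if n == 3 or (n == 2 and c in live)}

    # A glider: after 4 generations the same shape reappears, shifted diagonally.
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = life_step(glider)
    print(sorted(glider))  # the original five cells, each moved by (1, 1)

Streams of such gliders are the "bits" described above; carefully arranged collisions between streams implement the gates.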
Conway's "Life" can be taken even further: It's not only possible to build a Life pattern that implements a Universal Turing Machine, but also a Von Neumann "Universal Constructor:" http://conwaylife.com/wiki/Universal_constructor
Since a "Universal Constructor" can be programmed to construct any pattern of cells, including a copy of itself, Coway's "Life" is therefore capable of "self-replication," not just Universal Computation.
I highly recommend the book The Recursive Universe by Poundstone. Out of print, but you can probably find a copy, perhaps in a good library. It's almost all about the power of Conway's Life, and the things that can exist in a universe with that set of natural laws, including self-reproducing entities and IIRC, Darwinian evolution.
And Paul Chapman actually built a universal Turing machine with the Game of Life: http://www.igblan.free-online.co.uk/igblan/ca/ by building a "Universal Minsky Register Machine".
The pattern is constructed on a lattice of 30x30 squares. Lightweight Spaceships (LWSSs) are used to communicate between components, which have P60 logic (except for Registers - see below). A LWSS takes 60 generations to cross a lattice square. Every 60 generations, therefore, any inter-component LWSS (pulse) is in the same position relative to the square it's in, allowing for rotation.