Can a Turing machine decide if a formal model of computation is Turing complete? - turing-complete

That is, can a Turing machine take a formal system, S, as its input and decide if S is Turing complete?
I think this is an undecidable problem, am I right?
If it is undecidable, why can we (as humans) decide Turing completeness?

Hmm :-) Deciding Turing completeness is not central to determining whether or not the human brain is Turing complete; one can go through mechanical steps to determine Turing completeness, so that is not the issue.
The key issue is whether or not the human brain is hypercomputational, or hyper-Turing.
One test would be an answer of "Yes" to one of the following questions: can a human being predict when a Turing machine will halt (i.e. solve the halting problem)?
Or is the human brain not subject to Rice's Theorem?
Trivially, the answer to both questions seems to be "No" in the general case, because one can imagine a TM with an infinitely long tape just jumping around; we can never tell when it will hit a cell that tells it to stop.
The seeming hypercomputing capability comes from the fact that we mistake ordinary computers, software, mechanical processes, etc. for TMs.
Rice's Theorem can be side-stepped in the "special case" of systems that exhibit the Markov property and have a finite Place/Transition net representation. Our general environment has these "special cases" in abundance, so it may seem as if the human brain is capable of hypercomputing, because it tends to jump to general conclusions from special cases. However, it probably is not, since we as human beings have yet to experience an interaction with a true Turing machine.
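To make the halting-problem point above concrete, here is a minimal sketch in Python (the names 'halts' and 'contrary' are hypothetical, invented for the illustration) of the classic diagonalization argument: if any decider for halting existed, you could build a program that defeats it.

# Hypothetical sketch of the diagonalization behind the halting problem.
# 'halts' is the decider we are assuming into existence; the argument shows
# that no such decider can actually exist.

def halts(program_source: str, input_data: str) -> bool:
    """Assumed oracle: True iff program_source halts on input_data."""
    raise NotImplementedError("No such decider can exist.")

def contrary(program_source: str) -> None:
    """Do the opposite of whatever the oracle predicts about self-application."""
    if halts(program_source, program_source):
        while True:      # oracle says "halts", so loop forever
            pass
    # oracle says "loops forever", so halt immediately

# Feeding contrary its own source code leads to a contradiction either way,
# so the assumed 'halts' oracle cannot exist.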

Related

Is a Turing machine a real device or an imaginary concept?

When I was studying Turing machines and PDAs, I thought that the first computing device was the Turing machine.
Hence, I thought that there existed a practical machine called the Turing machine, whose states could be represented by some special devices (say, like flip-flops) and which could accept magnetic tapes as input.
So I asked how input strings are represented on magnetic tape, but from the answer and from the details given in my book, I came to realize that a Turing machine is somewhat hypothetical.
My question is: how would a Turing machine be implemented practically? For example, how is it used to check spelling errors in our current processors?
Are Turing machines outdated? Or are they still being used?
When Turing first devised what are now called Turing machines, he was doing so for purely theoretical reasons (they were used to prove the existence of undecidable problems) and without having actually constructed one in the real world. Fast forward to the present, and with the exception of hobbyists building Turing machines for the fun of doing so, TMs are essentially confined to Theoryland.
Turing machines aren't used in practice for several reasons. For starters, it's impossible to build a true TM, since you'd need infinite resources to construct the infinite tape. (You could imagine building TMs with a limited amount of tape, then adding more tape in as necessary, though.) Moreover, Turing machines are inherently slower than other models of computation because of the sequential nature of their data access. Turing machines cannot, for example, jump into the middle of an array without first walking across all the elements of the array that it wants to skip. On top of that, Turing machines are extremely difficult to design. Try writing a Turing machine to sort a list of 32-bit integers, for example. (Actually, please don't. It's really hard!)
This then begs the question... why study Turing machines at all? Fortunately, there are a huge number of reasons to do this:
To reason about the limits of what could possibly be computed. Because Turing machines are capable of simulating any computer on planet earth (or, according to the Church-Turing thesis, any physically realizable computing device), if we can show the limits of what Turing machines can compute, we can demonstrate the limits of what could ever hope to be accomplished on an actual computer.
To formalize the definition of an algorithm. Why is binary search an algorithm while the statement "guess the answer" is not? In order to answer this question, we have to have a formal model of what a computer is and what an algorithm means. Having the Turing machine as a model of computation allows us to rigorously define what an algorithm is. No one actually ever wants to translate algorithms into that format, but the ability to do so gives the field of algorithms and computability theory a firm mathematical grounding.
To formalize definitions of deterministic and nondeterministic algorithms. Probably the biggest open question in computer science right now is whether P = NP. This question only makes sense if you have a formal definition for P and NP, and these in turn require definitions of deterministic and nondeterministic computation (though technically they could be defined using second-order logic). Having the Turing machine then allows us to talk about important problems in NP, along with giving us a way to find NP-complete problems. For example, the proof that SAT is NP-complete uses the fact that SAT can be used to encode a Turing machine and its execution on an input. (A toy sketch of the verify-versus-search distinction follows below.)
Hope this helps!
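As a toy illustration of the verify-versus-search distinction mentioned in the P = NP point above (the clause encoding here is invented for the example): checking a proposed satisfying assignment for SAT is fast, while finding one by blind search takes exponential time in the worst case.

from itertools import product

# A formula is a list of clauses; each clause is a list of signed variable
# indices, e.g. -2 means "NOT x2". This encoding is just for illustration.

def satisfies(clauses, assignment):
    """Polynomial-time check that a given assignment (the 'certificate')
    satisfies every clause."""
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses)

def brute_force_sat(clauses, num_vars):
    """Exponential-time search, standing in for nondeterministic guessing."""
    for bits in product([False, True], repeat=num_vars):
        assignment = dict(enumerate(bits, start=1))
        if satisfies(clauses, assignment):
            return assignment
    return None

# (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3)
formula = [[1, -2], [2, 3], [-1, -3]]
print(brute_force_sat(formula, 3))   # {1: False, 2: False, 3: True}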
It is a conceptual device that is not realizable (due to the requirement of infinite tape). Some people have built physical realizations of a Turing machine, but it is not a true Turing machine due to physical limitations.
Here's a video of one: http://www.youtube.com/watch?v=E3keLeMwfHY
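A software simulation runs into the same physical limitation (only finite memory is available), but it is easy to write. Here is a minimal, illustrative sketch of a single-tape simulator running a toy machine that flips the bits of its input; the transition table is invented for the example.

# Minimal single-tape Turing machine simulator (illustrative sketch only).
# The tape is a dict so it can grow in both directions, limited in practice
# by available memory rather than being truly infinite.

def run_tm(transitions, tape_input, start_state="q0", blank="_"):
    tape = {i: s for i, s in enumerate(tape_input)}
    head, state = 0, start_state
    while state not in ("accept", "reject"):
        symbol = tape.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return state, "".join(tape[i] for i in sorted(tape))

# Toy machine: walk right, flipping 0 <-> 1, accept at the first blank.
flip_bits = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "0", "R"),
    ("q0", "_"): ("accept", "_", "R"),
}

print(run_tm(flip_bits, "0110"))   # ('accept', '1001_')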
Turing machines are not exactly physical machines; they are basically conceptual machines. The Turing machine is a hypothetical concept, and it is very difficult to implement in the real world, since even small and easy problems would require an infinite tape.
It's a theoretical machine. The following paragraph is from Wikipedia:
A Turing machine is a theoretical device that manipulates symbols on a strip of tape according to a table of rules. Despite its simplicity, a Turing machine can be adapted to simulate the logic of any computer algorithm, and is particularly useful in explaining the functions of a CPU inside a computer.
This machine, along with other machines like the non-deterministic Turing machine (which doesn't exist in reality), is very useful for analysing complexity and for proving that one algorithm is harder than another, or that a problem is not solvable, etc.
The Turing machine (TM) is a mathematical model of computing devices. It is one of the simplest models that can compute everything we consider computable. In fact, the computer you are using is, in essence, a very big (but finite) TM. TMs are not outdated. We have other models of computation, but this one guided the design of current computers, and because of that we owe a lot to Alan Turing, who proposed the model in 1936.

Computer simulation of the brain

I've been hunting on the net periodically for several months for an answer to this with no joy. Grateful if anyone can shed any light..
I'm interested in work that's been done on simulating the human brain. I could of course mean many things by that. Here's what I do mean, followed by what I don't mean:
I AM interested in simulations of how we think and feel. I'm not talking about going down to the level of neurons, but rather simulating the larger modules that are involved. For example, one might simulate the 'anger' module as a service that measures the degree to which one has been disrespected (in some system of representation) and outputs an appropriate measure of anger (again in some system of representation); a toy sketch of the sort of thing I mean appears below, after these points.
I am NOT interested in projects like the Blue Brain etc, where accurate models of neuron clusters are being built. I'm interested in models operating at much higher levels of abstraction, on the level of emotional modules, cognitive reasoning systems etc.
I'm also NOT interested in AI projects that take human mechanisms as their inspiration or paradigm, like Belief-Desire-Intention systems, but which are not actually trying to replicate human behavior. Interesting though these systems are, I'm not interested in making effective systems, but in effectively modelling human thought and emotion.
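To pin down the level of abstraction I mean, here is a toy sketch; every name, representation and number in it is invented purely for illustration and is not taken from any real model.

from dataclasses import dataclass

# Toy sketch of a high-level "anger" module, in the spirit described above.
# The 0..1 "disrespect" appraisal and the parameters are made up.

@dataclass
class Appraisal:
    disrespect: float      # perceived slight, 0 (none) .. 1 (severe)
    expectation: float     # how much respect was expected, 0 .. 1

def anger_module(appraisal: Appraisal, temperament: float = 0.5) -> float:
    """Return an anger intensity in [0, 1] from a coarse appraisal."""
    violation = max(0.0, appraisal.disrespect - (1.0 - appraisal.expectation))
    return min(1.0, violation * (0.5 + temperament))

print(anger_module(Appraisal(disrespect=0.8, expectation=0.9)))   # about 0.7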
I've been searching far and wide, but all I've found are papers from the 60s like this one:
Computer Simulation of Human Interaction in Small Groups
It almost appears to me as if psychologists were excited by simulating brains when computers were first available, but now don't do it at all?
Can anyone point me in the direction of more recent research/efforts, if there have been any?
There are a lot of people who've given it some thought, but one of the problems is that as AI research has continued, it increasingly seems that AI leads us to conclude that certain things which seemed hard are actually relatively easy, while the apparently easy stuff is what is hard.
Consider, for example, what an expert does in some field of discourse. We used to think, in the 60's or so, that things like medical diagnosis and chess playing were hard. We now know that as far as anyone can tell, they are simple search problems; it just happens that the meat computer does search relatively fast and with a lot of parallelism.
There are a number of people, like Jeff Hawkins, who are taking a different approach, and think simulation of the brain is the only way to get something more like what we mean by "thinking"; if they're right, then you're making a category error by saying those don't interest you.
The worst problem with the whole issue is that it appears increasingly difficult to say what we mean when we say we "think and feel" at all. John Searle, with his "Chinese Room" analogy, would argue that it's actually not possible for a mechanism to "think" or "be conscious". On the other hand, Alan Turing, with the famous Turing Test, proposed a weaker definition: for Turing, if you can't tell the difference between a "really" thinking and feeling being and a computer simulation of one, then you must assume the simulation is a "thinking and feeling" being.
I tend to come down on Turing's side: after all, I don't know that anyone but me is "really" a thinking and feeling being. (To think about that question, look into the idea of a "philosophical zombie", which isn't -- as you might suspect -- a member of the Undead who wonders if there is Meaning in the eating of brains, but instead is a hypothetical entity that isn't conscious, but that perfectly simulates a conscious entity.)
So here's a suggestion: first, think of a way to test, with an effective computation (that is, a halting program or a sequence of tests that is sure to come to a conclusion) if you have really implemented something that can "think and feel"; once you do that, you'll be a long way toward thinking about how to build it.
You might be interested in work on Affective Computing:
http://en.wikipedia.org/wiki/Affective_computing
http://affect.media.mit.edu/
http://psychometrixassociates.com/bio.htm
You should take a look at neural networks if you haven't already.
http://en.wikipedia.org/wiki/Neural_network
In the book "On Intelligence", Jeff Hawkins talks a lot about how we need high-level models of the human. He provides a good literature survey of existing (at the time) research on that topic.
ACT-R is a framework used in the cognitive sciences to simulate the cognitive functions of the human mind: memory, recognition, language understanding and so on. I'm not that familiar with it, so I have to point you to the wiki page.
https://secure.wikimedia.org/wikipedia/en/wiki/ACT-R

What are the useful limits of Linear Bounded Automata compared to Turing Machines?

There are languages that a Turing machine can handle that an LBA can't, but are there any useful, practical problems that LBAs can't solve but TMs can?
An LBA is just a Turing machine with a finite tape, and actual computers have finite storage, so it would seem to me that there's nothing of practical importance that an LBA can't do. Except for the fact that a Linear Bounded Automaton has not just a finite tape, but a tape with a size that's a linear function of the size of the input. Does the linearity of the finiteness restrict the LBA in some way?
Are there problems that an LBA can't cope with, but an Exponentially Bounded Automaton could (if such things exist)?
I'm going to go out on a limb and say "no". Pretty much every programming language that we use today is context sensitive. (Actually not even context sensitive, only slightly stronger than context free, IIRC). And obviously, if we can't program it, we don't really care about it...
OTOH, this all depends on your definition of "interesting"... The Ackermann function clearly fits into this category... is that interesting?
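For reference, here is a short sketch of the Ackermann function; one intuition for why it outruns a linearly bounded tape is that its values grow so explosively that even writing the output down quickly exceeds any space bound fixed as a function of the input length.

import sys
from functools import lru_cache

sys.setrecursionlimit(10_000)   # guard for deeper experiments

@lru_cache(maxsize=None)
def ackermann(m: int, n: int) -> int:
    """Classic two-argument Ackermann function: computable, but not
    primitive recursive, and astronomically fast-growing."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))   # 9
print(ackermann(3, 3))   # 61
# ackermann(4, 2) already has 19,729 decimal digits; don't try much beyond this.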
The Wikipedia article on context-sensitive languages states that any recursive language (that is, one decidable by a Turing machine) whose decision problem is EXPSPACE-hard is not context-sensitive, and therefore cannot be recognized by an LBA. It gives an example of such a language: the set of pairs of equivalent regular expressions with exponentiation.

Why do safety requirements like to discourage use of AI?

Safety requirements seem to discourage systems that use AI for safety-related functions (particularly where large potential risks of destruction or death are involved). Can anyone suggest why? I always thought that, provided you program your logic properly, the more intelligence you put into an algorithm, the more likely that algorithm is to be capable of preventing a dangerous situation. Are things different in practice?
Most AI algorithms are fuzzy -- typically learning as they go along. For items that are of critical safety importance, what you want is deterministic behavior. Deterministic algorithms are easier to prove correct, which is essential for many safety-critical applications.
I would think that the reason is twofold.
First it is possible that the AI will make unpredictable decisions. Granted, they can be beneficial, but when talking about safety-concerns, you can't take risks like that, especially if people's lives are on the line.
The second is that the "reasoning" behind the decisions can't always be traced (sometimes there is a random element used for generating results with an AI) and when something goes wrong, not having the ability to determine "why" (in a very precise manner) becomes a liability.
In the end, it comes down to accountability and reliability.
The more complex a system is, the harder it is to test.
And the more crucial a system is, the more important it becomes to have 100% comprehensive tests.
Therefore, for crucial systems, people prefer sub-optimal features that can be tested, and rely on human interaction for complex decision-making.
From a safety standpoint, one often is concerned with guaranteed predictability/determinism of behavior and rapid response time. While it's possible to do either or both with AI-style programming techniques, as a system's control logic becomes more complex it's harder to provide convincing arguments about how the system will behave (convincing enough to satisfy an auditor).
I would guess that AI systems are generally considered more complex. Complexity is usually a bad thing, especially when it relates to "magic" which is how some people perceive AI systems.
That's not to say that the alternative is necessarily simpler (or better).
When we've done control systems coding, we've had to show trace tables for every single code path, and permutation of inputs. This was required to insure that we didn't put equipment into a dangerous state (for employees or infrastructure), and to "prove" that the programs did what they were supposed to do.
That'd be awfully tricky to do if the program were fuzzy and non-deterministic, as #tvanfosson indicated. I think you should accept that answer.
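As a toy illustration of those trace tables and input permutations (the interlock and its conditions are invented for the example): when the logic is deterministic and the input space is small, you can enumerate every combination, which is exactly what makes such code auditable.

from itertools import product

# Invented example: a press may only run when the guard is closed, the
# e-stop is released, and both hands are on the controls.
def press_may_run(guard_closed: bool, estop_released: bool, two_hands: bool) -> bool:
    return guard_closed and estop_released and two_hands

# Because the logic is deterministic and the input space is tiny, we can
# exhaustively check every combination -- the software analogue of the
# trace tables mentioned above.
for inputs in product([False, True], repeat=3):
    expected = all(inputs)
    assert press_may_run(*inputs) == expected, inputs

print("All 8 input combinations verified.")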
The key statement is "provided you program your logic properly". Well, how do you "provide" that? Experience shows that most programs are chock full of bugs.
The only way to guarantee that there are no bugs would be formal verification, but that is practically infeasible for all but the most primitively simple systems, and (worse) it is usually done on specifications rather than code, so you still don't know whether the code correctly implements your spec after you've proven the spec to be flawless.
I think that is because AI is very hard to understand, and that makes it nearly impossible to maintain.
Even if an AI program is considered fuzzy, or "learns" from the moment it is released, it is thoroughly tested against all known cases (and has already learned from them) before it is even finished. In most cases this "learning" changes some thresholds or weights in the program, and after that it is very hard to really understand and maintain that code, even for its creators.
This has been changing over the last 30 years with the creation of languages that are easier for mathematicians to understand, making it easier for them to test and to deliver new pseudo-code around the problem (like the MATLAB AI toolbox).
As there is no accepted definition of AI, the question should be more specific.
My answer concerns adaptive algorithms that merely employ parameter estimation - a kind of learning - to improve the safety of the output information. Even this is not welcome in functional safety, although it may seem that the behaviour of a proposed algorithm is not only deterministic (all computer programs are) but also easy to determine.
Be prepared for the assessor asking you to demonstrate test reports covering all combinations of input data and failure modes. Because your algorithm is adaptive, it depends not only on the current input values but on many or all of the earlier values. You know that full test coverage is impossible within the age of the universe.
One way to score is to show that the previously accepted, simpler algorithms (the state of the art) are not safe. This should be easy if you know your problem space (if not, keep away from AI).
Another possibility may exist for your problem: a compelling monitoring function indicating whether the parameter is estimated accurately.
There are enough ways that ordinary algorithms, when shoddily designed and tested, can wind up killing people. If you haven't read about it, you should look up the case of Therac 25. This was a system where the behaviour was supposed to be completely deterministic, and things still went horribly, horribly wrong. Imagine if it were trying to reason "intelligently", too.
"Ordinary algorithms" for a complex problem space tend to be arkward. On the other hand, some "intelligent" algorithms have a simple structure. This is especially true for applications of Bayesian inference. You just have to know the likelihood function(s) for your data (plural applies if the data separates into statistically independent subsets).
Likelihood functions can be tested. If the test cannot cover the tails far enough to reach the required confidence level, just add more data, for example from another sensor. The structure of your algorithm will not change.
A drawback is/was the CPU performance required for Bayesian inference.
Besides, mentioning Therac 25 is not helpful, since no algorithm at all was involved, just multitasking spaghetti code. Citing the authors, "[the] accidents were fairly unique in having software coding errors involved -- most computer-related accidents have not involved coding errors but rather errors in the software requirements such as omissions and mishandled environmental conditions and system states."
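As a small, hedged illustration of the Bayesian point above (the sensor model, prior and numbers are all invented for the example): with a known likelihood, the parameter estimate is produced by a short, fixed piece of code whose structure does not change as more data arrives.

import numpy as np

# Invented example: estimate a constant temperature from noisy sensor
# readings, assuming a Gaussian likelihood with known noise sigma and a
# Gaussian prior. Only the data varies; the estimator's structure is fixed.

def posterior_mean_var(readings, sigma=0.5, prior_mean=20.0, prior_var=25.0):
    """Conjugate Gaussian update: returns the posterior mean and variance."""
    readings = np.asarray(readings, dtype=float)
    n = readings.size
    post_var = 1.0 / (1.0 / prior_var + n / sigma**2)
    post_mean = post_var * (prior_mean / prior_var + readings.sum() / sigma**2)
    return post_mean, post_var

mean, var = posterior_mean_var([21.2, 20.8, 21.1, 20.9])
print(f"estimate = {mean:.2f} +/- {var**0.5:.2f}")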

Why can Conway’s Game of Life be classified as a universal machine?

I was recently reading about artificial life and came across the statement, "Conway’s Game of Life demonstrates enough complexity to be classified as a universal machine." I only had a rough understanding of what a universal machine is, and Wikipedia only brought me as close to understanding as Wikipedia ever does. I wonder if anyone could shed some light on this very sexy statement?
Conway's Game of Life seems, to me, to be a lovely distraction with some tremendous implications. I can't make the leap between that and a calculator. Is that even the leap that I should be making?
Paul Rendell implemented a Turing machine in Life. Gliders represent signals, and interactions between them are gates and logic that together can create larger components which implement the Turing machine.
Basically, any automatic machinery that can implement AND, OR, and NOT can be combined together in complex enough ways to be Turing-complete. It's not a useful way to compute, but it meets the criteria.
You can build a Turing machine out of Conway's life - although it would be pretty horrendous.
The key is in gliders (and related patterns) - these move (slowly) along the playing field, so can represent streams of bits (the presence of a glider for a 1 and the absence for a 0). Other patterns can be built to take in two streams of gliders (at right angles) and emit another stream of bits corresponding to the AND/OR/etc of the original two streams.
EDIT: There's more on this on the LogiCell web site.
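For reference, the entire rule set that all of this machinery is built from fits in a few lines. Here is a minimal sketch of one generation of Life, using a set of live-cell coordinates, with a glider as the test pattern (it reappears shifted by one cell diagonally after four generations).

from itertools import product

def step(live):
    """One generation of Conway's Life; 'live' is a set of (x, y) cells."""
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                counts[(x + dx, y + dy)] = counts.get((x + dx, y + dy), 0) + 1
    # A cell is alive next generation if it has exactly 3 live neighbours,
    # or 2 live neighbours and is currently alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: after 4 generations it reappears shifted by (1, 1).
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
print(g == {(x + 1, y + 1) for (x, y) in glider})   # True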
Conway's "Life" can be taken even further: It's not only possible to build a Life pattern that implements a Universal Turing Machine, but also a Von Neumann "Universal Constructor:" http://conwaylife.com/wiki/Universal_constructor
Since a "Universal Constructor" can be programmed to construct any pattern of cells, including a copy of itself, Coway's "Life" is therefore capable of "self-replication," not just Universal Computation.
I highly recommend the book The Recursive Universe by Poundstone. Out of print, but you can probably find a copy, perhaps in a good library. It's almost all about the power of Conway's Life, and the things that can exist in a universe with that set of natural laws, including self-reproducing entities and IIRC, Darwinian evolution.
And Paul Chapman actually built a universal Turing machine with the Game of Life (http://www.igblan.free-online.co.uk/igblan/ca/), by building a "Universal Minsky Register Machine".
The pattern is constructed on a lattice of 30x30 squares. Lightweight Spaceships (LWSSs) are used to communicate between components, which have P60 logic (except for Registers - see below). A LWSS takes 60 generations to cross a lattice square. Every 60 generations, therefore, any inter-component LWSS (pulse) is in the same position relative to the square it's in, allowing for rotation.

Resources