Non-computability – Gödel, Turing machines and brain

This article is a subsection of the main article

**How physics changes the way we look at mind**

The current (and almost universally accepted) view of consciousness is that it is an emergent phenomenon arising from the complex interconnections and communication among neurons. Artificial Intelligence researchers anchor their idea of a robotic equivalent of the brain to this concept of consciousness. Basically, they believe that the creation of an ‘intelligent’ processor is hindered only by the tedious job of writing out algorithms powerful and complex enough to mimic the brain’s functions.

But is that optimism far-fetched?

Kurt Gödel’s *Incompleteness Theorem* doesn’t actually set some arbitrary limit on the process of acquiring knowledge through human or machine effort, and Penrose doesn’t say it does. Penrose cites the Incompleteness Theorem as an example of how the human brain can go beyond a computer or a robotic processor. Computers and Artificial Intelligence devices function on the basis of ‘formal logic’ programs (algorithms, for short) that tell them to deduce or derive results in a series of logical steps. Penrose argues that such an operation can easily be used to trick the computer, so that it will soon start contradicting its own logic and, presumably, break down.
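The phrase “deduce or derive results in a series of logical steps” can be made concrete with a toy sketch. The code below is purely illustrative (the rule format and names are invented for this example, not any real AI system): it forward-chains from axioms through if–then rules, and can never conclude anything that the axioms and rules don’t already entail.

```python
# A toy forward-chaining deducer: it derives results in a series of
# logical steps from axioms and if-then rules. (Hypothetical sketch,
# not a real AI system.)

def deduce(axioms, rules):
    """Repeatedly apply rules of the form (premises, conclusion)
    until no new fact can be derived."""
    known = set(axioms)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                changed = True
    return known

axioms = {"socrates_is_a_man"}
rules = [({"socrates_is_a_man"}, "socrates_is_mortal")]
print(sorted(deduce(axioms, rules)))
# Every derived fact is reached step by step; nothing outside the
# axioms and rules can ever be concluded.
```

This closed-in, mechanical character of formal deduction is exactly what Penrose’s argument targets.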

Take for example the simple old puzzle:

*If the barber shaves all those, and only those, who do not shave themselves, then who shaves the barber?*

The tit-for-tat reply that pops into your mind may be: *Another barber!* But even that escape fails: if someone else shaves the barber, then the barber is a man who does not shave himself, and the rule demands that he shave himself after all. There is no consistent answer.

A computer that follows logical algorithms gets caught in exactly that loop, while even a twelve-year-old can really “see through” the puzzle and notice that the question itself is rigged. And this, in essence, is what Gödel’s theorem says. **Rudy Rucker**, in his book *Infinity and the Mind: The Science and Philosophy of the Infinite*, has simplified Gödel’s Incompleteness Theorem through an excellent stepwise example.
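One way to see why a purely rule-following program gets stuck here is a tiny sketch (purely illustrative code, written for this article): mechanically test each possible answer to “does the barber shave himself?” against the barber’s rule.

```python
# The barber's rule: he shaves exactly those who do not shave themselves.
# A mechanical checker can only try each possible answer against the rule.

def consistent(barber_shaves_himself: bool) -> bool:
    # Under the rule, the barber shaves himself
    # if and only if he does NOT shave himself.
    required = not barber_shaves_himself
    return barber_shaves_himself == required

answers = {ans: consistent(ans) for ans in (True, False)}
print(answers)  # -> {True: False, False: False}
```

Neither answer satisfies the rule, so an algorithm that insists on producing a rule-consistent answer has nowhere to go; a human simply steps outside the rule and sees that the question is unanswerable.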

Think over this:

- Someone introduces Gödel to a UTM, a machine that is supposed to be a Universal Truth Machine, capable of correctly answering any question at all.
- Gödel asks for the program and the circuit design of the Truth Machine. The program may be complicated, but it can only be finitely long. Call the program TMP, for Truth Machine Program.
- Now, Gödel writes out the following sentence: “The machine constructed on the basis of the TMP will never say that this sentence is true.” Call this sentence G, for Gödel. Note that G is equivalent to: “UTM will never say G is true.”
- Now Gödel laughs and asks the Truth Machine whether G is true or not.
- If the Truth Machine says G is true, then “UTM will never say G is true” is false. If “UTM will never say G is true” is false, then G is false (since G = “UTM will never say G is true”). So if UTM says G is true, then G is in fact false, and UTM has made a false statement. So UTM will never say that G is true, since UTM makes only true statements.
- We have established that UTM will never say G is true. So “UTM will never say G is true” is in fact a true statement. So G is true (since G = “UTM will never say G is true”).
And having tricked the Truth Machine, Gödel triumphantly declares: “I know a truth that the Truth Machine can never utter.”
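Rucker’s case analysis can even be replayed mechanically. In this hypothetical sketch (the function and strings are invented for illustration), we model each possible behaviour of the Truth Machine toward G and check what follows from G’s own claim:

```python
# G asserts: "the Truth Machine will never say that G is true."
# Model each possible behaviour of the machine and check what follows.
# (A purely illustrative sketch of Rucker's case analysis.)

def analyse(machine_asserts_g: bool) -> str:
    # By construction, G is true exactly when the machine never asserts it.
    g_is_true = not machine_asserts_g
    if machine_asserts_g and not g_is_true:
        return "machine asserted a falsehood"
    if not machine_asserts_g and g_is_true:
        return "G is a truth the machine never utters"
    return "consistent"  # never reached: no behaviour is consistent

for verdict in (True, False):
    print("machine says G is true:", verdict, "->", analyse(verdict))
```

Either way the machine loses: speaking up makes it a liar, and staying silent leaves a truth it can never state.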

You may have a hunch: “Isn’t this just a problem of the language we use?”

The answer is NO.

The original Incompleteness Theorem is worded mathematically; the simplified version above, using a linguistic conundrum, is just an informal rendering of the real one. Gödel’s Incompleteness Theorem is a real problem faced by artificial intelligence researchers, at least at the theoretical level; the fact that we have not yet attained any sort of complex thinking in machines doesn’t stop the Incompleteness Theorem from bearing on these matters.
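Turing reached an equivalent limit for machines directly, via the halting problem, using the same diagonal trick. The sketch below is hypothetical (the names `d_behaviour` and the “oracle” are invented for illustration): imagine an oracle that predicts whether a program halts, and a contrary program D that does the opposite of whatever is predicted about it.

```python
# Turing's diagonal argument, replayed as a case analysis.
# Suppose a hypothetical oracle predicts whether program D halts;
# D is built to do the OPPOSITE of the prediction about itself.
# (Illustrative sketch only; no real halting oracle exists.)

def d_behaviour(oracle_says_d_halts: bool) -> bool:
    """What D actually does, given the oracle's prediction about D:
    it loops if told it halts, and halts if told it loops."""
    d_halts = not oracle_says_d_halts
    return d_halts

for prediction in (True, False):
    correct = d_behaviour(prediction) == prediction
    print(f"oracle predicts halts={prediction}: prediction correct? {correct}")
# Both predictions come out wrong, so no such oracle can exist.
```

Both possible predictions are refuted, which is why no algorithm can decide halting in general: the same self-referential wall that G erects for the Truth Machine.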

Roger Penrose suggests that the ability of the human conscious brain to “go out of the labyrinth of axioms and find the truth outside” is due to the quantum nature of consciousness. He suggests that human consciousness may depend on some new, as yet unknown, quantum physics that plays a significant role in the neuronal processes of the brain.

More of that in the coming sections of **How physics changes the way we look at mind.**

on February 3, 2008 at 9:55 am | Armchair Guy: I think there’s something wrong with the argument Roger Penrose uses. He says that in the TMP’s place, a human would be able to prove the truth of a question like G because he would see it’s a Gödel sentence. But that’s what Penrose doesn’t explain: how do you recognize a Gödel sentence for your own formal system?

The barber’s paradox should read: “If a barber shaves people who don’t shave themselves, does he shave himself?” (Not “who shaves the barber?”.)