Brilliant physicist and Nobel Laureate Sir Roger Penrose argues, using Gödel’s Theorem, that consciousness cannot be computational. In other words, there is more to human consciousness than can currently be explained scientifically.

Gödel’s theorems are among the most important—and most difficult to understand—breakthroughs in modern mathematics, and science in general. They are related to the vexing problem of self-referring language, as discussed in the classic book *Gödel, Escher, Bach: An Eternal Golden Braid* by Douglas Hofstadter.

As an example of the problem, take the following statement:

“This sentence is false.”

If the statement is true, then it’s false; and if it’s false, then it’s true. At first sight this might appear to be just a silly trick of self-referential language, but the problems associated with self-reference are not at all trivial. They formed the basis of Gödel’s deep investigation into the theory of arithmetic systems.
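You can even make the paradox concrete in a few lines of code. This toy sketch (my own illustration, not anything from Penrose or Hofstadter) checks every possible truth value for the liar sentence and finds that neither one is consistent with what the sentence asserts:

```python
def liar_consistent(value):
    """Check whether assigning `value` to 'This sentence is false'
    is consistent: the sentence asserts its own falsehood, so it
    should be true exactly when it is false."""
    asserted = (value == False)  # what the sentence claims about itself
    return value == asserted

# Neither True nor False works -- the list of consistent values is empty.
print([v for v in (True, False) if liar_consistent(v)])
```

Of course, this only dramatizes the paradox; Gödel’s achievement was to build a rigorous arithmetical version of this kind of self-reference.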

Gödel’s two “incompleteness theorems” say, roughly, that any consistent formal system powerful enough to do arithmetic contains true statements that cannot be proven within the system—and that such a system cannot prove its own consistency. A formal system of this kind is built from a set of axioms together with rules of inference for deriving new statements from them. Gödel found that in any such system there must be at least one statement which is unprovable using just that system’s axioms.
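For readers who want the precise claims, here is a standard modern phrasing of the two theorems (not Gödel’s original 1931 wording):

```latex
\textbf{First Incompleteness Theorem.} If $F$ is a consistent, effectively
axiomatized formal system capable of expressing elementary arithmetic, then
there is a sentence $G_F$ in the language of $F$ such that neither $G_F$
nor $\neg G_F$ is provable in $F$.

\textbf{Second Incompleteness Theorem.} Under the same hypotheses, $F$
cannot prove $\mathrm{Con}(F)$, the sentence expressing $F$'s own
consistency.
```

The sentence $G_F$ is, in effect, an arithmetical cousin of the liar sentence: it asserts its own unprovability rather than its own falsehood, which is what lets it be true yet unprovable instead of outright contradictory.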

Gödel was able to prove these theorems, in a way that is far beyond my understanding. What Penrose says (for example, in this recent interview with Jordan Peterson: https://www.youtube.com/watch?v=Qi9ys2j1ncg) is that Gödel’s theorem shows that consciousness cannot be fully explained by any kind of numerical computational process.

I do not grasp all the details of how this works, but here is something he said in the interview: “Understanding—whatever that word means—is not computational. It’s not the following of rules. It’s something else.”

The implications of this idea are quite staggering. For one thing, it suggests that while AI might become very intelligent, it could never become fully conscious, in the human sense. It also raises the question of how consciousness could arise in human (and perhaps other) brains if computation alone cannot explain all of it. Indeed, it makes the task of even defining consciousness in a formal scientific sense difficult.

I might be overstating the implications and might not even have a correct understanding of what Sir Roger was saying. But these are my impressions. Corrections in comments are entirely welcome.