*I seek a number that cannot be calculated. This number will become my line of flight out of the madhouse of a world…*

Gregory Chaitin once wrote a paper, *The Limits of Reason*, arguing that ideas on complexity and randomness originally suggested by Gottfried W. Leibniz in 1686, combined with modern information theory, imply that there can never be a “theory of everything” for all of mathematics.

*So perhaps mathematicians should not try to prove everything. Sometimes they should just add new axioms. That is what you have got to do if you are faced with irreducible facts. The problem is realizing that they are irreducible! In a way, saying something is irreducible is giving up, saying that it cannot ever be proved. Mathematicians would rather die than do that, in sharp contrast with their physicist colleagues, who are happy to be pragmatic and to use plausible reasoning instead of rigorous proof. Physicists are willing to add new principles, new scientific laws, to understand new domains of experience. This raises what I think is an extremely interesting question: Is mathematics like physics?*

– Gregory Chaitin, *The Limits of Reason*

In 1956 Scientific American published an article by Ernest Nagel and James R. Newman entitled “Gödel’s Proof.” Two years later the writers published a book with the same title—a wonderful work that is still in print. Chaitin was a child, not even a teenager, and he was obsessed with this little book. He remembered the thrill of discovering it in the New York Public Library. He used to carry it around with him and try to explain it to other children. As he’d say:

It fascinated me because Kurt Gödel used mathematics to show that mathematics itself has limitations. Gödel refuted the position of David Hilbert, who about a century ago declared that there was a theory of everything for math, a finite set of principles from which one could mindlessly deduce all mathematical truths by tediously following the rules of symbolic mathematical logic. But Gödel demonstrated that mathematics contains true statements that cannot be proved that way. His result is based on two self-referential paradoxes: “This statement is false” and “This statement is unprovable.” (For more on Gödel’s incompleteness theorem, see box.)

My attempt to understand Gödel’s proof took over my life, and now half a century later I have published a little book of my own. In some respects, it is my own version of Nagel and Newman’s book, but it does not focus on Gödel’s proof. The only things the two books have in common are their small size and their goal of critiquing mathematical methods.

Unlike Gödel’s approach, mine is based on measuring information and showing that some mathematical facts cannot be compressed into a theory because they are too complicated. This new approach suggests that what Gödel discovered was just the tip of the iceberg: an infinite number of true mathematical theorems exist that cannot be proved from any finite system of axioms.
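The claim that some mathematical facts “cannot be compressed into a theory” rests on a simple counting argument: there are 2^n bit strings of length n, but only 2^n − 1 binary descriptions shorter than n, so at every length some strings have no shorter description. A minimal sketch of the arithmetic (the particular numbers here are my own illustration, not Chaitin’s):

```python
# Counting argument behind algorithmic incompressibility:
# there are 2**n bit strings of length n, but only 2**n - 1
# binary descriptions of length < n (sum of 2**k for k < n),
# so some n-bit string has no shorter description.
for n in range(1, 21):
    strings_of_length_n = 2 ** n
    shorter_descriptions = sum(2 ** k for k in range(n))  # = 2**n - 1
    assert shorter_descriptions < strings_of_length_n

# Stronger: fewer than a 2**-c fraction of n-bit strings can be
# compressed by c or more bits, because there are only
# 2**(n-c) - 1 descriptions of length < n - c.
n, c = 20, 8
descriptions = 2 ** (n - c) - 1
fraction_bound = descriptions / 2 ** n
assert fraction_bound < 2 ** -c
print(f"under {2 ** -c:.4%} of {n}-bit strings compress by {c}+ bits")
```

In other words, at every length the overwhelming majority of strings are algorithmically random; incompressible facts are the rule, not the exception.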

For the details, read the article itself. To cut to the short version, I quote:

Overview/Irreducible Complexity

- Kurt Gödel demonstrated that mathematics is necessarily incomplete, containing true statements that cannot be formally proved. A remarkable number known as Ω reveals even greater incompleteness by providing an infinite number of theorems that cannot be proved by any finite system of axioms. A “theory of everything” for mathematics is therefore impossible.
- Ω is perfectly well defined and has a definite value, yet it cannot be computed by any finite computer program.
- Ω’s properties suggest that mathematicians should be more willing to postulate new axioms, similar to the way that physicists must evaluate experimental results and assert basic laws that cannot be proved logically.
- The results related to Ω are grounded in the concept of algorithmic information. Gottfried W. Leibniz anticipated many of the features of algorithmic information theory more than 300 years ago.

The incomputable number that solved the problem is now known as Chaitin’s constant Ω in algorithmic information theory: the halting probability of a universal prefix-free machine, defined by Ω = Σ 2^(−|p|), summed over every program p that halts. (It should not be confused with the unrelated “omega constant” associated with Adamchik and the Lambert W function, the real solution of Ωe^Ω = 1, which is a perfectly computable number.)
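The “incomputable” here is one-sided: Ω can be approximated from below by running ever more programs for ever more steps, but the procedure never tells you how close you are. A toy sketch of that lower-bound procedure, using an invented stand-in “machine” (Collatz iteration) rather than a real universal prefix-free machine; for this toy every program happens to halt, so the bounds climb to 1.0, whereas for a real universal machine the limit would be the uncomputable Ω:

```python
def runs_to_halt(program: str, max_steps: int) -> bool:
    """Invented toy 'machine' (not a real universal machine):
    read the bitstring as a number n and iterate the Collatz map;
    'halting' means reaching 1 within the step budget."""
    n = int(program, 2) + 2          # map every program to some n >= 2
    for _ in range(max_steps):
        if n == 1:
            return True
        n = 3 * n + 1 if n % 2 else n // 2
    return False

def omega_lower_bound(max_steps: int, width: int = 8) -> float:
    # Fixed-width programs form a prefix-free set; each halting
    # program of length `width` contributes 2**-width to the sum.
    programs = (format(i, f"0{width}b") for i in range(2 ** width))
    return sum(2.0 ** -width for p in programs if runs_to_halt(p, max_steps))

# The lower bounds only ever grow as the step budget grows.
bounds = [omega_lower_bound(steps) for steps in (5, 20, 100, 1000)]
assert bounds == sorted(bounds)
print(bounds)
```

The essential point survives the toy: each bound is a true statement about Ω’s digits, yet no algorithm can certify when the digits have stopped changing.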

**Gregory Chaitin** is a researcher at the IBM Thomas J. Watson Research Center. He is also honorary professor at the University of Buenos Aires and visiting professor at the University of Auckland. He is, along with Andrei N. Kolmogorov and Ray Solomonoff, one of the founders of the field of algorithmic information theory. His nine books include the nontechnical works *Conversations with a Mathematician* (2002) and *Meta Math!* (2005). When he is not thinking about the foundations of mathematics, he enjoys hiking and snowshoeing in the mountains.

This is a timely post for me, Craig – many thanks. I was just thinking about a conversation I had yesterday with Pete Wolfendale on AI as first philosophy. I’d been thinking of something along similar lines while working through Andy Clark’s latest on predictive processing.

The bald idea is that perhaps there are abstract constraints on how any system can acquire knowledge of its world, and that there is some way of sketching these in abstract computational terms. Pete mentioned homotopy type theory and “computational trinitarianism” – neither of which is an area I know much about. But the basic idea is that there is an a priori, only not one that’s open to transcendental reflection. Well, up to now my critical efforts have mostly been directed at transcendental positions which do depend on some kind of reflective structure – whether phenomenology or (as I’ve tried to argue) post-Sellarsian analytic pragmatism. But maybe there’s an abstract a priori out there, and it will be revealed to be a collection of good computational tricks. There seem to be negative facts about heuristic reasoning – take the No Free Lunch Theorem, which states, roughly, that all learning algorithms perform equally well when their performance is averaged over all possible problems. There’s no killer app out there. Rather, as Scott suggests, it all depends on fitting heuristics to the informational structure of the world.
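The No Free Lunch point can be made concrete at toy scale: take any two fixed, non-repeating search strategies, average them over *all* functions from a small domain to {0, 1}, and their performance is identical. A minimal sketch (the tiny domain and the two query orders are my own illustration, not from the theorem’s original statement):

```python
from itertools import product

DOMAIN = range(3)
# Every possible "world": all functions f: DOMAIN -> {0, 1},
# each represented as a tuple of its outputs.
ALL_FUNCTIONS = list(product([0, 1], repeat=len(DOMAIN)))

def best_after_k(query_order, f, k):
    """Best value a searcher has seen after its first k queries."""
    return max(f[x] for x in query_order[:k])

# Two different deterministic, non-repeating search strategies.
strategy_a = [0, 1, 2]
strategy_b = [2, 0, 1]

for k in (1, 2, 3):
    avg_a = sum(best_after_k(strategy_a, f, k) for f in ALL_FUNCTIONS) / len(ALL_FUNCTIONS)
    avg_b = sum(best_after_k(strategy_b, f, k) for f in ALL_FUNCTIONS) / len(ALL_FUNCTIONS)
    # Averaged over all possible worlds, the strategies are indistinguishable.
    assert avg_a == avg_b
```

Any advantage one strategy enjoys on some functions is exactly cancelled on others; gains come only from fitting the strategy to structure the world actually has, which is the point above.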

But maybe there are still more abstract computational structures to be found (I dunno) which could still qualify AI as transcendental philosophy. Still, if Chaitin’s right, and most mathematical truths have proofs as complicated as the facts themselves (rendering them brute), then it strikes me that there could be neat computational tricks out there that we could never prove to be out there. So my question is: if mathematics is radically incomplete, where does this leave the formal a priori?


Following Badiou, who followed Cantor’s set theory, one will always be limited. According to the “axiom of foundation” adopted by Badiou (and other set theorists), paradoxical sets are simply ruled out axiomatically. Yet critics have said of this that “eliminating the need for an extramathematical, theological infinite can rightly be regarded as begging the question.” Kenneth Reynhout says that this is “a priori exclusion by fiat, and a fiat that serves no useful purpose other than making that specific exclusion. It would not be an exaggeration, therefore, to rephrase the axiom of foundation in this way: ‘there are no other infinities than the ones we construct.’” Reynhout also raises the question of whether the axiom of foundation imposes “an unacceptable limitation” on the scope of ontology from the very beginning.

All this comes down to agreeing with Scott that there is a Great Wall beyond which we, as limited and localized beings, cannot go. We’re just not made to know more than what our equipment is capable of providing. Yet I question an aspect of this: if Stiegler is correct, we’ve been off-loading memory from the beginning, and thereby, through technics and technology, we’ve supplemented the very processes we did not come with. As animals, then, we’ve constructed systems that can surpass our limited range of thought and being, to do and know in ways we never will. Maybe this is the point of AGI – it is the axiomatic foundation of intelligence in the universe, the thing we’ve been seeking from the beginning, that which will overtake us and move on past our limits.

That’s been my thought for years: that humans are about to reach a limit, but that we’ve been preparing for it all along. Our constructions, our machinic progeny, will inherit the project beyond the point which we as a species cannot pass. They’ll do what we have only dreamed of. Intelligence is in excess of our ability to know, yet oddly we can construct it, because we have been doing that all along. This is where I go with Land, Deleuze, etc.: conscious self-reflective awareness is limited, always has been, while diagrammatic or unconscious modes of processing are what our brain does – 99% of the work before it hands us the 1% to work with; or, what Scott calls “neglect” – that small piece of the pie we think is the whole.

Yet, sometimes I also think of certain progeny like Srinivasa Ramanujan: He lived a rather spartan life at Cambridge. Ramanujan’s first Indian biographers describe him as a rigorously orthodox Hindu. He credited his acumen to his family goddess, Mahalakshmi of Namakkal. He looked to her for inspiration in his work and claimed to dream of blood drops that symbolised her male consort, Narasimha. Afterward he would receive visions of scrolls of complex mathematical content unfolding before his eyes. He often said, “An equation for me has no meaning unless it represents a thought of God.” https://en.wikipedia.org/wiki/Srinivasa_Ramanujan

I remember reading that Einstein had a special black room constructed, admitting neither light nor sound, with only one chair in it, so that he could contemplate and visualize mathematics.

Francis Crick had worked hard on DNA, RNA, etc., but, as the story goes, one night dreamed of its structure: the double helix…

There are tons of such scientific stories, all based on certain leaps of the illogical or imaginative, on structural collusion of pre-conscious processes with waking, sleeping, or hypnagogic information… one may also think of the induced hallucinogens in aboriginal culture after culture, the gaining of wisdom or knowledge in other-than-logical forms by way of dream visions, etc.

I have a feeling that for far too long we’ve discounted other modes of knowledge and learning because of their associations with ritual and religious heritages. And, for me, these were rather materialist practices that had a hard impact on neurochemical changes in the brain… of course this is all surmise. Are we moving toward the need to move beyond the limited and restrictive Enlightenment reductions of the Analytical and Continental systems? Maybe, after all, this is what is transpiring: we are either at an end, or a beginning?


https://artificialintelligencenow.com/
