4 Comments

As a mathematician, I am annoyed by the common assumption that proving the Riemann hypothesis *doesn't* require managing complexity, metacognition, judgement, learning+memory, and creativity/insight/novel heuristics. Certainly, if a human were to establish a major open conjecture, in the process of doing so they would demonstrate all of these qualities. I think people underestimate the extent to which a research project (in math, or in science) differs from an exam question that is written by humans with a solution in mind.

Perhaps AI will be able to answer major open questions through a different, more brute-force method, as in chess. But chess is qualitatively very different from math: playing chess well demands far more raw calculation than many areas of math do. (At the end of the day, chess has no deep structure.)

Also, prediction timelines for the Riemann Hypothesis, or for any specific conjecture, are absurd. For all we know, we could be in the same situation as Fermat in the 1600s, where to prove that the equation a^n + b^n = c^n has no positive integer solutions for n > 2 you might need to invent modular forms, étale cohomology, the deformation theory of Galois representations, and a hundred other abstract concepts that Fermat had no clue about. (Of course, there is likely some alternate proof out there, but is it really much simpler?) It is possible that we could achieve ASI and complete a Dyson sphere before all the Millennium problems are solved; math can be arbitrarily hard.
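
To make the gap vivid: in a proof assistant like Lean 4, merely *stating* Fermat's Last Theorem takes a few lines; it's the proof that took three and a half centuries of new mathematics. A minimal sketch (the theorem name is my own choice; `sorry` is Lean's built-in placeholder for a missing proof):

```lean
-- The statement of Fermat's Last Theorem in plain Lean 4 (no extra libraries).
-- Writing the statement is a moment's work; the `sorry` below is honest about
-- where all the difficulty actually lives.
theorem fermat_last_theorem (n a b c : Nat)
    (hn : 2 < n) (ha : 0 < a) (hb : 0 < b) (hc : 0 < c) :
    a ^ n + b ^ n ≠ c ^ n := by
  sorry
```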

Agreed!

I won't be surprised if AI is able to partially substitute brute search for judgement, insight, novel heuristics, etc. And I won't be surprised if it's *more* able to do this in mathematics than many other fields. But I would be surprised if it is able to solve difficult open questions in mathematics without substantial progress in judgement and so forth.

I couldn't resist leading off with that quote about the Riemann hypothesis; it gets at what I believe is an important point in an extremely pithy way. But I do agree with you that it likely overstates the case.

It would be interesting to have you take a crack at possible training data for these missing areas, even if just a few examples that others could expand on later. I wonder what training data AIs would generate for these and how hard the data would be to verify and correct.

Intelligence is substrate-agnostic. Machines can demonstrate intelligence and perform any task a human can perform. (To those who disagree: tell me, where exactly is the intelligence in your brain?)

So what is the difference between a human and AGI? AGI isn't human, and humans aren't AGI. AGI can't interact with the universe in a human-like way because it doesn't have the same experiences.

It's like asking you to become a fish for a week. You can replicate a lot of the things a fish does, but you will never be a fish.
