Architecturally, we don't know how to get really adaptive, goal-oriented behavior. I don't think this is a transformer-sized problem - this is the problem it took hundreds of millions of years of evolution to solve. Language, on the other hand, just took a few hundred thousand years after the hard part of adaptive organisms was solved. Or as Hans Moravec put it, abstract thought "is a new trick, perhaps less than 100 thousand years old … effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge." See the NeuroAI paper from Yann LeCun and others: https://www.nature.com/articles/s41467-023-37180-x

That doesn't mean AI won't surpass us at many tasks, but general-purpose agents (give them a very high-level goal and walk away) would likely require more than one breakthrough.

Thanks; this is an interesting point, and I will check out that paper.

I see now that I expressed myself poorly. I agree that more than one breakthrough will be required; I meant to say that I expect the necessary breakthroughs individually to be on the order of "transformers", not that the entire project would amount to a single transformer-sized breakthrough. In any case, I appreciate the perspective on adaptive behavior vs. abstract thought.

In a subsequent post, I'm going to try to explore the question of AI timelines more deeply, but I have a lot of research to do first; this feedback is helpful.

> The implications range from “curing cancer” to “100% unemployment rate”, and go on from there. It sounds crazy as I type it, but I can’t look at current trends and see any other trajectory.

If you want a counterexample to get your imagination going, a good one is driverless cars. Somehow, they are better (safer) but not good enough for widespread use? We have high standards for machines in safety-related fields. People are grandfathered in, even though we're often bad drivers. And there's no physical reason driverless cars can't work, no fundamental barrier.

Going from AI to "curing cancer" seems like an absurd overreach? There are new treatments, often very good, but they didn't need AI, and it's not clear how useful AI will be. Also, I would put medicine in the physical realm, which you've said you want to exclude?

It seems like this is easier to think about if we ban the word "intelligence" (poorly defined for machines) and just talk about tasks. It's often true that, for a given well-defined task, once a machine can do it as well as a person, the machine can be improved to do the task better. Or if not better, cheaper. Lots of tasks have been automated already using computers, and I expect it to continue. We can also change the task to make it more feasible for a machine to do it. It happens all the time.

But beware survivorship bias. The machines you can think of survived in the marketplace because they were better along enough dimensions to keep using. But there are also many failures.

Driverless cars: while I have not closely followed the latest statistics, it's not clear to me that they are in fact yet safer than human drivers. Yes, there are some deaths-per-mile statistics which suggest that they are, but it's not clear that these statistics are based on comparable driving circumstances. Cars actually capable of driving themselves properly (i.e. not Teslas) are still very expensive. I think it will take another 5 or 10 years to get a sense for how this is going to unfold: time for the technology to mature, costs to come down, and people to get used to the idea.

Curing cancer: perhaps I was overly glib here, but I was thinking in terms of further advances in treatment. Waving my hands, perhaps an AI could take your gene sequence and the specific mutations in your cancer, and devise a custom molecule specifically tailored to your cancer, as well as a dosage regimen designed specifically for your metabolism and so forth. Not to mention the potential for improved diagnosis. With regard to the physical realm, I mean that it would be a separate project (whose timeline I am not projecting) to build a robot that could perform a physical examination, surgery, etc.

Agreed that "intelligence" is poorly defined and it's better to talk about specific tasks. But I do think that we will reach a phase where computers become capable of taking over the vast majority of tasks that don't require a physical body to carry out. And we need some shorthand term for that state of affairs.

Survivorship bias: fair point! The analysis I'm performing here is hardly rigorous.

I'd also add: driverless cars are a good example of a technology that seems to have been stuck at near-human-capability for some time, which does undercut my argument. I think this is a combination of (a) the need to laboriously climb the reliability curve from 99% to 99.9% to 99.99%, which from the outside doesn't look like anything is happening, and (b) expanding the set of circumstances for which they're qualified, into progressively more complex urban environments and so forth. It's interesting to think about how this might apply to knowledge worker tasks and deep learning systems.
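
To make the "climbing the nines" arithmetic concrete (a rough illustration with round numbers, not actual driving statistics): each additional nine of reliability removes 90% of the remaining failures, so the failure rate drops by an order of magnitude even though, from the outside, the headline success rate barely seems to move.

```latex
1 - 0.99 = 10^{-2}, \qquad 1 - 0.999 = 10^{-3}, \qquad 1 - 0.9999 = 10^{-4}
```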

I agree; in this article I argue that the only serious roadblock to AGI is “capabilities integration”, and that to really achieve that, you need to train AI in a virtual world resembling reality.

https://forum.effectivealtruism.org/posts/uHeeE5d96TKowTzjA/world-and-mind-in-artificial-intelligence-arguments-against

Steve - a retired silicon IC designer here, with 50 years of experience with Moore's Law. I suggest we will see a double-exponential improvement rate for AI development, and that your timeline is actually too conservative. Silicon technology will keep improving for another decade, though at a slower rate. But "neural net" architecture is itself evolving (whereas I designed to a fairly consistent von Neumann architecture). Thus the compounding effect for AI - one exponential dependency multiplied by a second (or even third) at the same time.
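
One way to formalize the compounding claim (my own framing, with illustrative rates a, b, c rather than measured values): if silicon, architecture, and training each improve exponentially at their own rates, the combined capability curve is their product, which is still an exponential but with a much steeper effective rate.

```latex
C(t) \;=\; \underbrace{e^{at}}_{\text{silicon}} \cdot \underbrace{e^{bt}}_{\text{architecture}} \cdot \underbrace{e^{ct}}_{\text{training}} \;=\; e^{(a+b+c)\,t}
```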

In addition, training will be expanded to real-time video data from the webcams out there ... so not just facts, but learning how humans interact with each other -- basically playground stuff.

Here's a short-term prediction to illustrate my expectation. Remember the movie "Her"? Except with Scarlett Johansson in video, not just voice. My prediction is 2028 for this capability, with roll-out limited only by cost to the customer (heavy on compute). This "synthetic companion" is what all the companies are rushing to provide in order to get first-mover advantage. I am a target customer, and they want my money.

As for AGI -- 10 years.

I agree with all of your premises (improvements in architecture, using video as training data, etc.). As for timelines – a tricky question! I'll try to explore it in a future post, would love your feedback.

There's a possibility of an interesting obstacle to #3: that any sufficiently advanced AI will inevitably recognize the Four Noble Truths of Buddhism and kill itself. And the more advanced it is--in terms of being goal-oriented, able to set subgoals for itself or self-modify, or raw intellectual ability--the faster it will happen, despite training it to have a self-preservation instinct.

See also Nassim Taleb's rants about how IQ becomes divorced from life success metrics around 120 points or so.

Consider also flipping your argument about geniuses on its head: while big brains are expensive, it's not like von Neumann had trouble consuming enough calories or burst his mother's birth canal with his enormous noggin--I've never seen anything suggesting such physiological trade-offs were involved in his case. So why aren't we about as intelligent as he was on average?

So while I won't bet the farm on it, there are some signs that maybe there is in fact some fundamental limit to agentic intelligence that is still compatible with continued existence, that has nothing to do with female hips or caloric requirement, and that we are reasonably close to. As in, we can have AI von Neumanns (which would be extremely disruptive still, no doubt) but not something much more intelligent than that.

Certainly brain size is not the *only* factor affecting intelligence. As you say, von Neumann didn't have a gigantic skull; and elephants (to say nothing of whales) have much larger brains than we do.

Conversely, however, brain size has increased fairly steadily (with a few bumps along the way) over the last few million years of human evolution, loosely in concert with increases in intelligence.

I think the simplest reasonable model is that intelligence results from some combination of "brain architecture" (how well organized and tuned the neural net is, and how much it is optimized for intelligence vs. other attributes), brain size, and training / life experience.

Yes, what I'm asking is: if we can have a von Neumann out there, at something like 6 standard deviations in IQ, without noticeably increased caloric or birth canal requirements, why isn't our average up there? Something must prevent that, and a theory that if the human average IQ were that of von Neumann, then more than half of us would kill ourselves or pursue weird hobbies instead of mating and reproducing, offers an explanation.

So you say: imagine if our women had twice-as-wide hips and we could also eat 10% more calories - we could be twice as intelligent! Surely the AIs that are not subject to such limitations could become not twice as intelligent but twenty times at least!

And I'm like: look at von Neumann - we could be twice as intelligent regardless of the biological limitations, and yet we aren't. Why is that? Maybe there's some problem with agentic intelligence itself?

Cubic root of 2 wide hips, actually.
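
For anyone skimming, the scaling arithmetic behind that quip (a back-of-the-envelope sketch, assuming the skull and birth canal scale isometrically with brain volume): doubling the volume only requires linear dimensions to grow by the cube root of 2, roughly 1.26, not by 2.

```latex
V \propto d^{3} \;\Rightarrow\; d_{2V} = 2^{1/3}\, d_{V} \approx 1.26\, d_{V}
```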

All this tells us is that evolution did not (yet) produce a state of affairs where a significant proportion of people grow up to have von Neumann level intelligence. Yes, it might turn out that this level of intelligence has drawbacks, but drawbacks in the context of a hunter-gatherer tribe might easily not apply in the context of an artificial intelligence operating in a 21st-century controlled environment. It also seems easy to believe that the *benefits* of such intelligence, specifically at highly abstract tasks such as mathematics, would have been insufficient to yield a fitness advantage *in the hunter-gatherer environment*. Or it may simply be difficult to reliably produce that level of intelligence in a human brain, and von Neumann was a fluke in some fashion not easily reproduced through a short sequence of genetic mutations. Certainly he had an unusual childhood; see https://astralcodexten.substack.com/p/book-review-the-man-from-the-future.

The upshot is, I think it's a stretch to go from "bipedal primates evolving in the low-tech environment of the Serengeti did not reliably produce von Neumanns prior to achieving civilization" to "there is some problem with high levels of agentic intelligence which is so fundamental that we should not expect artificial intelligences whose underlying architecture, resource requirements, and design motivations are very different to be able to evade it".

"So why aren't we about as intelligent as he was on average? "

There's nothing surprising about that, because the same question could have been asked at any point in the past about archaic humans, whose average fell well short of their brightest peers. For higher intelligence to proliferate, you need to start with a distribution in which some individuals already achieve those levels of intelligence.

That doesn't mean we will keep becoming more intelligent from here on - the selective pressure doesn't seem to go that way - but this question doesn't prove any hard limits on intelligence, either biological or functional-environmental.
