13 Comments

I sometimes wonder how much human intelligence will remain useful because it comes coupled to a human body. (Arguably human bodies are more impressive than our brains? They're *very* adaptable, and you can power them with burritos.)

If you assume that AI is better at all "pure intelligence" tasks than humans, but that AI hasn't invented robots that are as good as human bodies, then what follows? Does human intelligence remain vital because it has a high-bandwidth connection to human muscles?

That's a great question which I'm not sure I've seen seriously addressed. Certainly, until we develop robots (including control software) that are as good as human bodies, lots of jobs will be protected from automation. Are you asking whether an omnipresent AI coach would open the door for less-skilled people to do those jobs? Again, I don't think I've seen this explored...

I think something like this might be the claim made by proponents of embodied or enactive cognition, which emphasizes the role of the body and the environment in what we call "thinking". It's basically a different intellectual paradigm from the more traditional cognitivist view of the mind as an information processor. I think both contribute to our understanding of the mind but in some sense it might be an empirical question whether we need a body to do lots of tasks. It seems like the more tasks that are done on computers only, maybe the role of a body is just diminished.

I think a body is necessary for AI, in the sense of being able to do work not in batches of provided data, but actively exploring and adjusting in real time, with as many attempts as necessary, while learning from the experience.

At the current stage this can be emulated by giving AI access to tools and simulators, where it can do experiments, observe results, and refine its actions.

I've wondered about this too. But my guess has been that it'll stay around 0 for longer than people expect, then jump to 100 pretty quickly. As in, Tesla's Optimus robot is all hype and doesn't work, Optimus 2 pretty much the same, Optimus 3 still disappointing... then one day Optimus N works really well and we're like, so much for that comparative advantage.

I love the perspective of this article. Human brains are not optimal for all of the things that we have invented for human society.

It seems that there are many domains in which the human brain is not the natural limit of ability; it can be surpassed.

Several questions come to mind:

1. What are the domains where the human cannot be beaten?

2. Are those domains even important in our society / economy?

3. If the answer to #1 is "few" and to #2 is "not really," then why are we trying to advance this technology?

Great article Steve. Very thought provoking. I found myself considering that AI's limits may always be tied to our human use-cases. After all, if an AI provides a solution for something unnecessary or unusable to us, it simply won't have value. Perhaps a key limitation, then, is that we will always determine the value of AI's capabilities, and we will build with that constraint in mind and within that value proposition.

It seems to me that AI is good at the following:

1) Problems requiring many calculations and evaluations of possible rule-defined scenarios. When the rules get fuzzy-to-nonexistent, AI has more difficulty.

2) Problems requiring the aggregation of mass quantities of information to produce a result based on specific conditions or rules. Again, the same issue arises with fuzzy-to-nonexistent rules.

3) Creative works based on a library of pre-existing creative works. To give a very specific example of where AI breaks down: if I asked an AI to make a Beatlesesque song, the idea of putting a long "A Day In The Life"-type chord at the end would not have been considered had Lennon and McCartney not done it first. Nor could it have developed "post-Beatles"-inspired works, like what Jeff Lynne did with ELO, without that precedent to draw on.

To sum up, AI is weak at spontaneous, random-ish creativity, as well as at developing truly novel concepts and ideas. I could see an AI getting better at approximating either, but never truly getting there.

In Japanese martial arts, there are said to be three levels of mastery, called Shuhari:

https://en.wikipedia.org/wiki/Shuhari

1) Obey the rules

2) Break the rules

3) Do your own thing (sometimes described as "Make the rules").

Could AI ever get to that third step of mastery? I'm not sure if it could even master the second step completely.

Great points, I totally agree. Even "obey the rules" is far from being accomplished, which is why generative AI is of modest impact.

A simple thing to remember is that, above about IQ 140 (setting aside all the arguing people do about whether IQ is meaningful at all), assigning people scores is essentially arbitrary, subjective, and open to interpretation. You can give people shapes to rotate and analogies to complete until you're blue in the face, but what does it really mean in the end? We don't have any idea what makes people people or geniuses geniuses; we just form these ideas about them culturally and socially in the moment, as events unfold.

On the other hand, that also means that LLMs are often "good enough", i.e., as good as we are, already, for anything we call "work" that is performed purely with text and data. I don't think there is a second wave of AI that will come after this where we go "oh, now it's really doing it." It's just here and being applied, as quickly as we can figure out how to fit it into the existing economy.

Great article! I have an unfortunate prediction for anyone getting older (meaning: everyone). Yes, AI will evolve to beat us at most practical thinking tasks, and at tasks that combine thinking with physics (think driving, running, etc.). But there's no high-fidelity means to train AI on the sense of touch. Our bodies are highly evolved to feel touch sensations everywhere, and to respond to pain, pleasure, heat, cold, pressure, etc. There's no analog in the AI world for touch and feel: no massive database of nerve-ending data, no AI hands with big bundles of nerves. Touch is our more durable advantage; AI lacks it, and will continue to lack it. Therefore (back to aging): it's gonna be a tough adjustment when AI-enabled robo-nurses care for us in old age.

As a basically irrelevant sometimes-blogger, it is satisfying when a much bigger writer comes up with similar ideas.

I wrote something up last month about how transformers are turning Moravec's "easy" problems that computers are bad at (chess, image recognition, driving a car) into a hard problem that they're really good at: multiplying lots of numbers together.

https://jpod.substack.com/p/progress-towards-artificial-general

Every uniquely human intellectual ability is going to be solved by big dumb neural networks.

Hah, I spend most of my time reading people with much bigger audiences than I have so I think of myself as a "basically irrelevant sometimes-blogger" reading "much bigger writers". :-)
