The networker: AI can crunch data but to evolve, it needs the human factor – learning by experience

Sam Altman, chief executive of OpenAI. Photograph: Getty Images

Generative artificial intelligence has taken another step forward with chatbot maker OpenAI’s latest model, but it will only become truly smart by interacting with its environment.

OpenAI, that curious profit-making nonprofit oxymoron run by Sam Altman, recently released its newest large language model (LLM), coyly named o3. Cue the usual chorus of superlatives from Altman’s admirers. Tyler Cowen, a prominent economist who should know better, kicked off early on the theme of artificial general intelligence (AGI). “I think it is AGI, seriously,” quoth he. “Try asking it lots of questions, and then ask yourself: just how much smarter was I expecting AGI to be?”

So, I dutifully asked it lots of questions, and pushed back a bit on some of its answers, and found it quicker and a bit slicker than other LLMs I regularly use. It’s multimodal – that is, it handles text, images, and audio input and output. It produces near-human speech, engages in lively interactions, and seems quite good at the kind of knowledge tasks that researchers use to test LLMs.

It can “see”, and seems to understand, images (charts, graphs, diagrams, photographs). But is it as close to AGI as Cowen thinks? Answer: no – unless one accepts the ultra-narrow definition of AGI that OpenAI uses: that it can “outperform humans at most economically valuable work”. The “G” in AGI is still missing.

LLMs are the outcome of machine-learning technology, a term that does not exactly trip off the tongue, which perhaps explains why the tech industry rebranded it as AI, to give it a veneer of respectability. But it’s misleading to describe LLMs as AIs when they’re really something far more interesting: what the eminent psychologist Alison Gopnik calls a “cultural technology”, which allows humans to access information from many other humans and use that information to make decisions.

We have been doing this, Gopnik points out, for as long as we’ve been human. Language itself is a cultural technology. So are writing, print, libraries and the internet – all ways that we get information from other people. And LLMs provide a very effective way of accessing information from other humans by summarising the information humans have put on to the web.

The thing about cultural technologies is that they shape societies over the long term. Think of print, which shaped our world for four centuries, even changing our conceptions of childhood, as the US author and educator Neil Postman pointed out many years ago. LLMs may well have a similar impact, which is why they’re worth taking seriously. They’re still at an early stage in their development, with lots of flaws, and they’re tainted by the intellectual property theft implicit in their training. But they’re getting better, more reliable and more useful. And they’re not going away.

That said, they also have huge blind spots. They might have “read” or ingested everything that’s ever been published in machine-readable form, but a great deal of human knowledge has never been written down. Think, for example, of tacit knowledge: individual wisdom, experience, insight, motor skill, intuition – knowledge that is difficult to extract or articulate, as opposed to conceptualised, formalised, codified or explicit knowledge. Or of craft knowledge – the kind that people possess in their hands or bodies that is uncommunicable but nevertheless tangible, audible or visible when they practise it.

The tech industry knows little of such things and cares less. Instead, it seems fixated on the belief that “scaling up” LLMs and other generative AIs (heating up the planet and breaking electricity grids in the process) will eventually get us to the nirvana of AGI – machines with human-level capabilities. Well, maybe it will, but I wouldn’t bet on it. Machine learning may, in the end, turn out to be a dead end: useful for many things, but ultimately limited and definitely not the route to superintelligent machines.

If we’re to get to those, we need something different. Which is why it was interesting to see David Silver, lead research scientist at Google DeepMind, coming up with a new idea that he and the computer scientist Richard Sutton call “the era of experience”.

The argument is that there is only so much that machines can learn from human experience as documented on the web. If they are to become smarter, AI agents have to be freed to learn continually from their own experience – chiefly, data that is generated by the agent interacting with its environment.

Silver thinks that “AI is at the cusp of a new period in which experience will become the dominant medium of improvement and ultimately dwarf the scale of human data used in today’s systems.”

To which the only sensible response is the one film producer Samuel Goldwyn used to give to people pitching movie scripts to him: “a definite maybe”.




What I’m reading




Big bang theorists

The Atomic Bomb Considered as Hungarian High School Science Fair Project is a marvellous blogpost on the LessWrong website.

The enemy within

A sobering Substack essay by Timothy Snyder on how things might play out in the US is The Next Terrorist Attack and What Comes After.

Yesterday’s man

The New York Times has The Failed Ideas That Drive Elon Musk, a column by Jill Lepore that situates the tech entrepreneur where he belongs – in the past.

