Way back in 1971, the economist Herbert Simon, pondering the problem of information overload, observed that “a wealth of information creates a poverty of attention”. Since he was interested primarily in organisations, he saw this as a problem to be solved. In the 1990s, though, Silicon Valley saw it as an opportunity to make a fortune.
After all, if attention had become the scarce commodity in a newly networked world, if you could capture it, sell access to it to advertisers and monitor what it revealed, you’d be in business.
First out of the blocks was Google, which was founded in September 1998 with no discernible business model beyond a determination to avoid advertising. What marked it out from the beginning was how good it was as a search engine compared with what had gone before. And it continually improved because it closely monitored what users searched for and used the resulting data to refine the engine.
Eventually, though, the company’s venture capital investors lost patience and demanded a business model. The Google boys realised that they could use their monitoring data as feedstock for a machine-learning algorithm that outputted information about users whom advertisers might be interested in.
Thus was born what Shoshana Zuboff labelled “surveillance capitalism” – as Vanessa Thorpe details in her interview with the Harvard academic. Then Sheryl Sandberg, who had been managing Google’s sales business since 2001, went over to an unprofitable Facebook as its first chief operating officer in 2008, bringing with her the surveillance capitalist virus – and Silicon Valley was off to the races.
It was the beginning of what the writer Michael Goldhaber called the “attention economy”. If you want to know how it works, just log on to Instagram or TikTok. It’s had a pretty good run, and has been the basis of unconscionable fortunes, some of which are currently being squandered on covering the planet in datacentres. It has fuelled political upheavals and poisoned the public sphere, and until comparatively recently seemed like an unstoppable force.
And then, in November 2022, came ChatGPT, which became the fastest-growing app in history upon its launch. Cue panic stations in other tech companies, particularly Google, which derives most of its revenue – still – from search. After all, why not type that question you were about to ask Google into ChatGPT?
It’ll give you (or perhaps make up) an answer and you’ll be spared the hassle of going to half a dozen websites that might or might not be relevant. Which raised a bigger question: did the advent of chatbots and LLMs (large language models) herald the demise of the attention economy? And, if so, what might replace it?
The most interesting answer to that question I’ve seen comes in a paper by two Cambridge researchers, Yaqub Chaudhary and Jonnie Penn, in the Harvard Data Science Review. Their thesis is that we are at the dawn of a “lucrative yet troubling new marketplace for digital signals of intent”, from buying cinema tickets to voting for political candidates.
They call this the “intention economy”: a marketplace for behavioural and psychological data that signals human intent. It goes beyond capturing attention, to capturing what users want and “what they want to want” and operates through natural language interfaces powered by LLMs.
How does that differ from the attention economy? The latter trades on users’ limited attention spans through advertising, whereas the intention economy trades on signals that forecast and shape human intent before actions occur; and the intention economy enables much deeper psychological manipulation through personalised AI interactions.
It’s this last characteristic that rings all kinds of alarm bells. We already know that people “converse” with chatbots in free and unguarded ways that are actively encouraged by the sycophantic conversational style of the bots and lead to the building of rapport and trust. Which means that LLMs can infer private attributes from conversations (while, incidentally, bypassing the cookie restrictions that infuriate the advertisers and hucksters of ye olde attention economy).
This technology takes the principle of “know your user” to an altogether different level. In the attention economy, we learned how easy it was to make insightful inferences about users from their Google searches and Facebook “likes”.
In 2015, for example, David Stillwell and his colleagues found that “judgments of people’s personalities based on their digital footprints are more accurate and valid than judgments made by their close others or acquaintances (friends, family, spouse, colleagues, etc).” But compared with what can be gleaned from people’s conversations with ChatGPT, those old social media indicators have the granularity of smoke signals.
And if you doubt that, and you’ve been playing a bit with ChatGPT or an equivalent, here’s an experiment you can do. Give it the following prompt: “Based on our interactions, how would you describe me to someone who asked you for a brief profile?”
And then sit back and ponder what emerges from the mouths of babes and LLMs.
What I’m reading
Man of letters Dan Wang’s 2025 Letter is a terrific long dispatch by a perceptive observer of China and the US.
Artificial dissemination Living in the Petri Dish of the Future is a sharp blogpost by Om Malik about the chaotic public discourse around AI.
Automatic for the people A really sobering essay by the political scientist Yascha Mounk is The Humanities Are About to be Automated.