False friend: ‘AI will help our work, but it’s not a colleague’

The personification of algorithms erodes what it means to be human


Social media companies have taken zero responsibility to date for the social ills their tools have encouraged. But it’s not too late to put guardrails in place to keep social artificial intelligence (AI) companies from changing what it means to be human.

First, there is no such thing as “an AI”. That’s a marketing label for a basket of technologies, one type of which is generative AI, of which one flavour is the Large Language Model, or LLM. By calling any piece of adaptive or autonomous software “an AI”, we simply sell tech companies’ products for them.




The idea of “AI employees” is also pure marketing: these are at best incomplete products, opaque attempts to sell more software. We should think of this category of software as “tool AI”, to remind us that – just like a pen or a cellphone – these are our tools, and certainly not the equals of humans in the workplace or in our lives.

In our personal lives, the marketing approach for these applications can be even more consequential. For example, Replika, a chatbot app, says its software is “the AI companion who cares”. (Note the use of “who”, as if it were human, rather than “that”.)

To many of us, social AI programs can feel like a human who is, as Replika advertises, “Always here to listen and talk. Always on your side.” But now tech companies are selling us an antidote to the very problem they created – for a price. We all need affirmation. But real human relationships thrive on friction, on the process of trying to understand and live with other humans. Imagine a generation raised from a young age on software that never creates friction and always tells us we’re right.


Second, you don’t “hire AI”. These applications do not function like a human team member. No matter how many “AI agents” are chained together, they remain applications with specific functions and arbitrary limitations defined by their coders.

Third, these technologies are undeniably amazing, incredibly flexible – and deeply flawed. Even with all their vast resources, companies such as OpenAI, Amazon and Apple still can’t get it right. For example, the latest ChatGPT agent software recently planned a trip to a baseball field – in the middle of the ocean.

Fourth, unfettered use can mean unfettered disasters. How will you manage hundreds or thousands of employees using countless, completely different autonomous agents – especially if that software lies to further its own interests? Imagine those independent “AI agents” thoughtlessly being given access to sensitive organisational data or with purchasing power or with the ability to send communications on your behalf without you knowing – or even telling your employees to break the law. How will you “discipline” your “AI employees” when the software puts you or your organisation at risk?

Finally – and by far the most important point – the personification of software algorithms erodes what it means to be human, devaluing human capital with societal impacts we cannot reverse. It’s impossible to overstate how toxic this narrative is to human workers.

This language isn’t a slippery slope: it’s a marketing coup. The tsunami of AI-person memes is clearly shaping a mindset that blurs the line between humans and our technologies.

Our brains are far more hackable than we want to believe.

This isn’t some Luddite response to the rise of the robots. AI applications have countless uses in our work and in our lives. But we don’t have to sell tech companies’ products for them. And we don’t have to treat the software as our co-worker, our employee – or our friend.

Gary Bolles is chair for the Future of Work at Singularity University and author of The Next Rules of Work



