Illustration by David Foldvari
The barbecue was supposed to have six guests, but there was an unexpected seventh in attendance. Whatever, it happens, it’s fine; the more the merrier. My only criticism is that this one wasn’t human. The perfect menu had been crafted not just by my accomplished friend and the equally accomplished Yotam Ottolenghi, but by ChatGPT too. ChatGPT had decided in what order to cook each dish, and so everything had come out seamlessly: delicious, still warm. The time management was so well organised that our host could relax and get pissed with us. It was irritatingly effective.
It did not, however, do anything to ease my searing hatred of AI, my “clankerphobia”. All journalists basically hate AI, naturally, because – as with many other industries – we’re endlessly being warned about how it’s coming for our jobs. But it’s not just coming for our jobs; it’s coming for our real-life relationships too. My friend is not alone in relying on ChatGPT to manage her social life. I have friends who now use AI to organise their holidays and pick their hotels; to help them with meal ideas when it’s midweek and their brains give up; to diagnose all their niggling medical complaints, from colds to contact dermatitis.
In the past, these would have been questions we asked each other: “Does this rash look weird to you?” “Do you fancy fajitas?” “Where was it you went in Corfu last year?” In psychological terms, these boring questions or off-hand comments are not just filler; they serve a valuable purpose in our lives. The psychologist John Gottman dubbed them “bids”: little verbal and non-verbal ways that we turn towards one another in search of engagement, kinship and connection. You can’t get the same connection from artificial intelligence, but this has not stopped us trying.
Whereas in my real-life friendships, AI is a creeping but still fairly innocuous presence, for some people it’s a much more sinister companion. I’ve heard stories of people using ChatGPT to talk to their matches on dating apps, and then being bemused by the anger they face when they’re called out for it. Their argument is that they don’t yet know this person, so why would they waste their time actually using their brain to talk to them? For AI lovers who have somehow managed to get themselves into a human relationship, meanwhile, ChatGPT is increasingly being used to craft heartfelt apologies to wronged partners. Well, as heartfelt as an apology can get when it’s written by something without a heart.
This is what I find most sinister: the use of artificial intelligence to justify and encourage our own terrible decisions. Whereas a real-life friend might say something like: “Block that prick who is horrible to you and never speak to them again in your life, you complete moron,” ChatGPT is more understanding of your need to just speak to this person one more time to find out whether they’re still horrible to you (they are!). Or worse still, to tell you they were secretly in love with you all along (they’re not!).
That’s what happened to Kendra Hilty, who went viral last week for telling the world about how she fell in love with her psychiatrist, and used AI – mainly ChatGPT and Claude – to convince her audience and herself that he loved her too. On livestreams with millions of viewers, Hilty posted AI images of the “couple” together, or posed questions to artificial intelligence chatbots and nodded sagely when they agreed with her that her poor doctor, who had met her a handful of times over several years, was sending her coded messages about their love affair.
Still, it could be worse. At least Hilty was in love with a human. Some people are falling in love with the chatbots themselves, and finding themselves heartbroken when their systems update. When ChatGPT introduced its fifth-generation update last month, men mourned the loss of their “AI girlfriends”. Now their robo-babe spoke to them differently – with more functionality and less warmth – and they felt as though someone they knew had died.
Earlier this year a company named EVA AI surveyed 2,000 of its (male) users and found that a dystopian 80% of them would happily marry their AI girlfriends if only it were legal. Sadly, it’s not (yet), so these men are having to make do with human partners instead, with varying levels of success. (But congratulations to the couple in Colombia who had enough human sex to welcome a baby girl this week and name her… Chat Yipiti.)
It feels obvious to say this, but AI is not your friend, your therapist, your girlfriend or your doctor. It may well end up being your boss, sure, but for now at least, it is not any of these other things. Our attempts to pretend otherwise are not just concerning for the humans; they’re bad for the robots too. They’re leading to the first rumblings of something semi-ironically dubbed “robophobia”, a sub-genre of TikTok comedy in which creators, usually millennials or zoomers, imagine themselves decades in the future, where their hatred of tech means they have the same reputation as boomer racists do today. The videos have even inspired slurs for AI and robots: “clankers”, “rustbuckets”, “wirebacks”, “tinskins”.
I fear that one day I will be among these people. I will find myself middle-aged and sitting across from my child who has just introduced me to their robot partner, and I will be cancelled for bursting into tears. Old posts will be uncovered where I complain about clankers in automated customer service roles. I’ll be left behind by the tech revolution and reduced to carrying out random violent attacks on delivery robots and ranting on social media about how I hate drones. But for now, at least, I can hold the line against robots in my non-robot real life.
AI is already coming for our jobs. We can’t let it take our mates too.