Portrait by Ian Bates
We have a window of around five years to shape what AI can be, says Zoë Hitzig: crucially, who owns it, who regulates it. When the research scientist quit Sam Altman’s OpenAI last month in protest at the company’s decision to introduce ads to ChatGPT, she warned that we had reached a crisis point. OpenAI’s advertising would be built on “an archive of human candour that has no precedent”, Hitzig wrote in the New York Times, “in part because people believed they were talking to something that had no ulterior agenda”. ChatGPT’s 800 million weekly users don’t just talk about work; they share their deepest fears and secrets, their loneliness and needs. Monetising all that was like the Pope announcing he had sold a trillion confessions.
Speaking a month later on a video call from New York, Hitzig, in her early thirties, is still surprised that her resignation made headlines around the world. Wearing a black T-shirt and no makeup, she apologises for a cold that makes her feel woolly, in which case she must be fearsome on a sharp day. People leave tech companies all the time, she shrugs. But that same week, the head of the safeguards research team at OpenAI’s rival Anthropic, Mrinank Sharma, also resigned, warning that “the world is in peril”; he planned to write poetry instead. “I really loved that he included a William Stafford poem in his resignation letter,” Hitzig says. “I think often of a different poem [of Stafford’s], At the Bomb Testing Site, where he imagines a lizard in a desert watching the nuclear tests and thinking about the consequences.” For her, the AI race has reached a similar moment.
“The biggest surprise was that I was heard above all the noise,” she says. “Everyone’s talking about AI, and so much of it is about the very long-term view: ‘At some point we’re all going to be out of work. At some point, the AIs could build a biological weapon.’ The executives love to talk about problems that feel far away, because they’re distracting us.
“And then,” she says, “there are the things that are happening now, which are shocking but hard to see as a pattern – an [AI] psychosis case, a suicide. What I want to do is help fill that in-between point, where we can think a little way ahead but keep it concrete.” In her article, she detailed three alternatives to OpenAI-controlled advertising, with its built-in risks of mass manipulation. Instead, AI could be treated like a public utility, with some businesses subsidising access for all; or its advertising could be overseen by an independent board; or users’ data could be managed by an independent trust.
An award-winning poet with a PhD in economics from Harvard, Hitzig never wanted to work in what she calls “industry”. Before joining OpenAI, she researched the effects of new technologies, mapping out an infinite number of what-ifs. She co-wrote a paper with Vitalik Buterin, co-founder of the cryptocurrency Ethereum, that imagined a new social ecosystem. At its most philosophical, economic theory is like poetry, she explains: “It allows you to construct a little world and investigate what happens when you pull different levers.”
Moving to San Francisco and becoming an employee of OpenAI was a different proposition. “At first, I was sceptical,” Hitzig says. “But I started to see that we had this very brief moment where all the important decisions were getting made. So joining could have an enormous influence.” In its early years, she points out, Facebook promised users control of their data and a vote on policy, before reneging. “This was a chance to be a part of making [OpenAI] not Facebook.”
She joined in early 2024, a couple of months after Altman was fired and then rehired as CEO by OpenAI’s board. Hitzig can’t talk about individuals because she has signed a non-disclosure agreement, but says: “It’s widely understood that the aftermath of that event lasted about a year. It wasn’t the high drama of the coup, but there were other pieces of dust that still had to settle.” Throughout 2024 there was an exodus of senior staff to other startups.
Hitzig led research into the mid-term effects of AI, and to start with it was fun. For instance, a project exploring “personhood credentials”: how do you prove your humanity online as bots get more sophisticated, without sharing vast amounts of personal data? “Already, we fill out Captchas. A Captcha is based on the idea that there’s a test that’s easy for a human but hard for bots. That’s over.” Instead, Hitzig’s team looked at an ID that you might register for in person (“a sign-up no robot could fake”), and that used encryption to generate a non-trackable code: “It could be a cryptographic hash of your passport number, or your retina, though we didn’t like biometric solutions.”
But the horizon for how far ahead OpenAI wanted to look kept shrinking. “The mood was defined by just how quickly everything was changing; there was always more to do. It was a pretty small team serving an extraordinarily [large] fraction of the global population, with a tool that we were still trying to understand. I found it very stressful at times, but also exhilarating.”
Hitzig paints a picture of San Francisco as a city obsessed with AI. There were “goofy” AI dinner parties in fancy houses in Pacific Heights, hacker meetups in Hayes Valley, everyone scrambling for the one bit of information or the work connection that might give them the edge, and the resultant fortune. “Even in the coffee shops, people were talking about the latest launch – it felt suffocating and everywhere. Someone would brag about having talked to this or that Google executive, or gone to this robotics demo. The more interesting question, for all of those dinners, was: why are you all trying so hard to figure out what’s going to happen when you can together figure out what you want to happen? Let’s make this more about sharing a vision.”
A mystique has sprung up around the kingpins of this scene, with their dizzying wealth and bland exteriors: this year will see the release of a film about Altman (Artificial, directed by Luca Guadagnino), a Netflix series about the AI evangelist and fraudster Sam Bankman-Fried (The Altruists) and a sequel to Aaron Sorkin’s The Social Network, about Mark Zuckerberg. The reality is both more humdrum and alarming than the Hollywood version, Hitzig says. “There are people in San Francisco who are excited about perpetuating those myths and this sense that they’re at the centre of the renaissance. But for me it was going into office buildings and tech-people parties with bad lighting and no one drinking.”
The human condition is always evolving … I continue to believe it needs to be updated by people
The alarming bit was how out of touch the AI world was. “People would talk about how AI is going to bring all this wealth that will rain down – all we have to do is figure out how to distribute it. But the Industrial Revolution already [promised] this, and we did not distribute the wealth globally. Why would this be any different? That [attitude] was pervasive and hard to watch. If you take your eyes off your two smartphones and look around, San Francisco is a city that has enormous poverty.”
Hitzig is in many ways the anti-tech bro, a woman and writer of poems about the limits of language: her second collection Not Us Now (2024) was published after winning a prize (and $10,000) judged by the Nobel laureate Louise Glück. Does Hitzig think she was hired by OpenAI partly for this reason? If you wanted the most human humans training your machine learning, you might go out and hire a poet.
She’s not convinced. “There’s a distinction between the kind of voice you need as an employer, and the kind you need as data. In all the AI companies there is a firm understanding that they need not just more data, but different kinds. That includes finding people who know obscure languages, obscure forms of knowledge, writers, art historians.” Hitzig points out that Anthropic has hired a lot of philosophers, including the Scot Amanda Askell, who curates its large language model Claude’s “personality”. “But I don’t think a philosopher is the most human human,” says Hitzig. “Philosophers can be quite abstract and calculating.”
Askell recently revealed that she had written an 80-page “soul document”, a map of Claude’s ethical framework. Does Hitzig think Claude is more moral than ChatGPT?
“I think there’s a lot to be learned about what the builders of these tools have in mind. If Claude has moral status, is it a piece of humanity? OpenAI’s mission is ‘to ensure that artificial general intelligence benefits all of humanity’, but I’m not sure what their vision of humanity really is. When you hear Sam Altman comparing the energy consumption required to train a human mind to the energy consumption required to train an artificial mind… Is that really how you think of us, as inefficient consumers of resources?”
By the start of this year, Hitzig could no longer justify the OpenAI job to herself, “or to my friends who are artists and writers who hate AI”. She felt OpenAI was squandering the chance to avoid repeating the mistakes made with social media, as the rollout of its super-sycophantic ChatGPT-4o showed: in November 2025 seven lawsuits were filed against OpenAI on behalf of people who had become psychotic or died by suicide after intense engagement with the new model, which has since been withdrawn.
“I knew we’d make new mistakes with AI because it’s not the same as social media,” says Hitzig. “But there are some problems we could avoid, like let’s not maximise for engagement using psychometric profiles, because you know what that leads to: eating disorders, political manipulation, fake news and violence on the basis of fake news. Optimising for engagement before you understand the social and psychological consequences is really dangerous. It’s gambling with people’s minds.”
What does she make of ChatGPT’s erotic “adult mode”, whose launch has been delayed?
“I have maybe a controversial view, which is that I don’t think there’s something awful about being intimate with a bot – to each her own. What is alarming is being intimate with a business model that doesn’t have your interests in mind.”
Since 2022 Hitzig has been poetry editor of the American magazine The Drift, publishing seven or more poems an issue. What if she chose one written by an AI? Hitzig is more interested in whether a poem is good or bad: humans write bad verse, too. “If someone says, ‘That was written by Claude’, I wouldn’t care, because if it’s interesting and moving, then whoever prompted Claude would have been doing something.” She pauses: she doesn’t want to be sent a slew of AI poems. “I mean, it would certainly be a scandal and something to grapple with.”
But an AI prompt can be far more creative than “write me a poem about love”, she adds. “Someone could say, ‘Write me a poem about love that has the texture of a Robert Browning poem, that features a kitesurfer coming in and out of the waves, where that kitesurfer is standing in for something else.’ OK, now you’re doing the work.”
In her own work, Hitzig pushes towards “glitchiness”: poems that are half maths (Simplex Algorithm), or where words break apart like code (Exit Museum), or where the language trips and repeats, like Beckett or TS Eliot: “who could say, at a certain point, not, no, this, isn’t this all, isn’t this all fracture” (Fieldnotes). There are poems of great beauty – “They prepared for the gala all summer… sun-buzzed by the lake” (Not Us Now) – and others that play with the threadbare prose of a Reddit thread or Zoom transcript. “In the age of LLMs, we’re engaged in a war against ready-made language,” she says. “We have to work even harder… when these new organs are writing on our behalf in a way that is, by definition, thoughtless.”
Back on the east coast with her partner, Hitzig has returned to postdoctoral research but is not sure whether she’s done with the AI industry yet. “I saw at OpenAI that it’s very valuable to be on the inside, and so part of me wants to give it another shot.” Either way, in five years’ time she expects to be on her own, writing. “Because five years from now, all the important decisions will have been made.” Her worst-case scenario is not that far from the present-day one, in which “a very small group of people have extraordinary economic and political power that comes from having tons of information on everyone. It’s nothing short of a total failure of capitalism.” The more hopeful vision is the one where people have a say in what happens next.
With or without large language models, poetry will survive. How artists make a living is another question, Hitzig says, but not a new one. “There will always be a demand for authenticity and originality, and the human condition is always evolving. It always needs to be updated – and I continue to believe that it needs to be updated by people.”
Not Us Now is published by Changes Press