One of the most eloquent, impactful books I’ve read this year is Elaine Castillo’s essay collection How to Read Now. In it, she makes a case for reading as a radical act of interrogation and refusal. It’s not just about books and personal taste. It’s about facing up to the world—and refusing to be satisfied with others’ incuriosity and indifference:
When I talk about how to read now, I’m not just talking about how to read books now; I’m talking about how to read our world now. How to read films, TV shows, our history, each other. How to dismantle the forms of interpretation we’ve inherited; how those ways of interpreting are everywhere and unseen.
Ultimately, reading entails an undeceived attentiveness to other people and things—an openness to their complexities, whatever risks this may entail—that’s akin to love:
Loving this world, loving being alive in it, means living up to that world; living up to that love. I can’t say I love this world or living in it if I don’t bother to know it; indeed, be known by it. It’s that mutual promise of knowing that reading holds us in—an inheritance that belongs to us, whether we accept it or not. Whether we read its pages or not.
This may seem a strange place to begin a discussion of Artificial Intelligence. But Castillo’s words were in my mind earlier this week when I tried to explain on Twitter why I don’t think AIs read texts in anything like the human sense—and why this matters. What follows is an extended exploration of those ideas.
To be clear, this isn’t about whether AIs can be said to “understand” texts in the way we do: that raises a whole other set of questions that need exploring. Rather, it’s about the kind of ethically and imaginatively engaged form of reading that Castillo identifies: one that comes from a particular perspective; that teases out tensions, assumptions and begged questions within rival framings of the world; that is able passionately to dispute received ideas and partial histories. Here, for example, is Castillo writing about popular misreadings and mis-tellings of her own heritage:
Willful misreading is a violence. To warp the history of a place to serve one vision of the past—and therefore, preserve a specific vision of the present and future—is an obscenity, and yet we live in obscenities like this every day. The fact that so few people know that the Philippine-American War produced a genocide in the archipelago is an obscenity.
I have no doubt that, depending upon how skilfully its prompts are engineered, GPT-4 or its sequels could be made to write in a similar way on similar themes. But it would be doing so in response to its users’ desires—and would as gladly tell any one of an infinite number of different stories.
This bottomless, brilliant fluency is a profound problem when it comes to the stories we tell about the world—and how we read them. Why? Because it actively encourages us to treat words as fungible content; as mere interchangeable vessels for information.
Sometimes and in some circumstances, this may seem true. But even then, words can always be read more closely, challenged and reframed. And this needs to come from us. Indeed, as Ted Chiang recently argued, the critically engaged business of reading, rereading and rewriting is precious precisely because it is how we learn to think well about the world: by entering into dialogue with our own and others' assumptions; by insistently asking how far what is being written or claimed does justice to the human histories, experiences and hopes at stake:
Sometimes it’s only in the process of writing that you discover your original ideas… Your first draft isn’t an unoriginal idea expressed clearly; it’s an original idea expressed poorly, and it is accompanied by your amorphous dissatisfaction, your awareness of the distance between what it says and what you want it to say.
Much like the art of conversation, this intellectually and emotionally active drawing out of meanings is not just a nice, optional distraction. It's at the heart of our capacity for relating richly to one another and the world.
Technology, I should emphasise, isn't inherently the enemy of any of this. But using and deploying it wisely demands precisely those things that many of its current forms (and creators) encourage us to ignore: careful, close readings of what is being said and done and enabled.
Similarly, the empty clichés of hype and over-claiming aren't just marketing, grift or PR. They're also active obstacles to better thinking, making and doing. Take this kind of phrase:
“The future of work is being transformed!”
What, exactly, is being claimed by people who say things like this? What are they really saying, and why are they saying it in this way? What is this “future” (a thing that does not and cannot yet exist) that is somehow being transformed before it has even happened? Whose work? How? What isn’t being addressed by this kind of hand-waving—or by the implication that technology is busy writing the book of the future on all our behalves?
When it comes to the future of AI, I do not know whether we are creating new forms of intelligence. But I do know that we are creating something that demands curious, sceptical, rigorous, precise engagement—and that our capacity to achieve this is reliant upon words and ideas, debates and open-minded investigations.
We cannot afford to debase our words, or to take them too lightly.
We cannot afford to let others (whether or not they are human) speak for us.