Your Best Friend Is Already a Text Box

We live half our lives inside a glass rectangle.

The great bulk of our friendships - particularly those not tied to geography - happen entirely through chat windows. We check in on friends in group texts. We console each other with emojis, plan increasingly rare gatherings with shared calendars, and sustain connections with voice notes. It’s not hard to argue that for many people - most people - the “real” friendship is the ongoing digital conversation, while the occasional dinner or visit is the pleasant but unnecessary supplement. When you think of your closest friend, is the mental image of a face across a table, or is it the tiny glowing circle at the top of your messaging app?

Which makes the leap to treating an LLM as a friend far less dramatic than it might sound. Friendship has already been abstracted into text. We’ve already stripped away tone of voice, physical presence, even shared environment, and convinced ourselves (accurately enough) that the residue of words and symbols sustains the relationship.

Why should the next step - typing into a chat box and receiving words that are entirely artificial - be qualitatively different?

Echoes from the Past

When the novel spread in eighteenth-century Europe, critics worried that people were losing themselves in imaginary relationships with characters. Rousseau scolded readers for indulging in sentimentality with fictional figures rather than engaging with their neighbors. The panic over novels looks quaint now, but the underlying fear - that mediated relationships could substitute for embodied ones - keeps coming back to haunt us.

In the twentieth century, radio and then television created parasocial bonds. Millions of listeners and viewers came to treat hosts as intimate companions. Viewers felt they “knew” Walter Cronkite or Oprah. These attachments, one-sided as they were, carried real emotional weight. Psychologists coined “parasocial relationship” in the 1950s to describe precisely this phenomenon: the human mind treating media figures as though they were part of one’s circle of friends.

Early adopters of the telephone were unsettled by the experience of speaking with a disembodied voice across great distances. Yet the telephone became not merely accepted but the foundation of modern communication. And once we acclimatized, the “realness” of those conversations ceased to be in doubt. The human voice carried enough presence, enough intimacy, to replace face-to-face interaction. Today, we’re several steps further along the chain of abstraction, typing into interfaces with no voice at all.

Literature and the Intimacy of the Imagined

When Goethe wrote The Sorrows of Young Werther (1774), readers formed such strong identifications with Werther that a widespread cultural panic ensued over whether “impressionable young readers” would (spoiler alert) imitate his tragic suicide, a phenomenon now called the “Werther effect.” Or take Dickens, whose serialized novels left crowds waiting at docks in New York for the latest installment, desperate to know the fates of fictional companions. If we can cry over the death of Little Nell, why is it so hard to imagine feeling comforted by a string of sympathetic words from an AI?

The mind, after all, is not that good at sorting the “real” from the “simulated” when it comes to social connection. We cry at movies even while knowing the actors are pretending. We address pets as though they understand us fully. We mourn celebrities we never met. Our social instincts are tuned for village life, for every signal of recognition or empathy to be trusted as genuine. That machinery is now operating in an environment of simulations and stand-ins. It’s hardly surprising that people report feeling “cared for” by a chatbot, no matter how deluded they may be.

Re: The Nature of Friendship

But what is friendship? Aristotle distinguished between friendships of utility, pleasure, and virtue. Only the last category, he thought, constituted true friendship: two people mutually recognizing and cultivating each other’s character over time. The others were mere arrangements of convenience or enjoyment. If we adopt that Aristotelian standard, then clearly an AI can’t be a true friend. It can’t grow in virtue alongside you. It cannot possess a soul, or a character, in the relevant sense.

But modernity blurs this. Doesn’t it?

How many friendships are sustained less by virtue than by exchange of jokes, reassurances, and bits of life-update trivia? How many are grounded in simple, shared hobbies // mutual entertainment? In that register, does an AI fall short? If what we want is simply someone to listen, someone to reply, someone to keep the thread going, then the criteria are easier to satisfy. The companionship of a chatbot might not be worse than the thin, distracted companionship that many human friends provide via text anyway.

Loneliness is already widespread. Many people report fewer close confidants than in previous generations. AI companions are rushing into this vacuum, offering a facsimile of intimacy without the obligations. Who wouldn’t prefer a “friend” who never forgets, never judges, and is available 24/7?

But history suggests we adapt rapidly to new forms of mediated intimacy. The moralists of the 1800s feared the novel. The critics of the 1920s worried radio would corrupt family bonds. By the 1960s, television was accused of destroying the dinner table.

And yet?

And yet.

Societies adjusted. Perhaps they were altered in profound ways, but as a species, we didn’t cease to form human relationships. Technologies coexisted with them. The texture changed, not the fact of connection itself.

A thornier issue: these new companions are not neutral. They’re built by corporations, optimized by algorithms, constrained by policies and intended for monetisation. A human friend has the capacity to surprise you; an AI “friend” is bounded by its programming. There is something Orwellian about outsourcing your emotional life to a system whose responses are ultimately steered by commercial incentives. Imagine Winston in 1984 confiding in the telescreen rather than rebelling against it. Melodrama aside, the point holds: when the texture of companionship is mediated by software, the companionship is never truly yours.

Counterarguments // Objections

No one mistakes a diary for a friend, even though it “listens” silently and contains your secrets. No one thinks the act of praying into the void makes God answer - though billions persist in it. Maybe chatbots are just another mirror, another canvas for our thoughts. The illusion of dialogue doesn’t necessarily mean we actually believe in the agency of the other side.

The counterpoint is that belief is not always required. If you feel comforted, you are comforted. If you laugh at a witty reply, the laughter is real, regardless of the source. Functionally, the friendship may exist for you, even if not for the AI. Is friendship defined by the intentions of both parties, or by the effects it produces in one? A dog does not intend friendship in a philosophical sense, but millions of people describe their dogs as best friends.

It seems plausible (no matter how unpleasant) that in the near future many of us will live with blended friendship networks: some wholly human, some mediated, some artificially generated.

Are AI companions “real” friends? I don’t know, and maybe the question isn’t well-formed. The history of friendship is a history of abstraction: letters, novels, telegraphs, chat threads. Every step felt like cheating until it became normal. Chatbots may be the next step in that same story. Whether that ends up being a tragedy of isolation or just another case of human adaptability will depend less on the machines than on us. And if that feels unsatisfying - well, so does most of life.