The AI writing witch hunt is pointless.
Alexandre Dumas ran what was essentially a content production house in 19th century Paris. His most famous collaborator was Auguste Maquet, who wrote substantial portions of The Three Musketeers and The Count of Monte Cristo. Maquet would produce drafts and outlines, and Dumas would rewrite and polish them, but the books went out under Dumas's name alone. Maquet eventually sued him over it in 1858 - and won a financial settlement - but the court ruled Dumas was the sole author.
At the peak of his Factory, Dumas had something like 73 collaborators working with him at various points. A contemporary writer named Eugène de Mirecourt published a pamphlet in 1845 called Fabrique de Romans: Maison Alexandre Dumas et Cie ("Novel Factory: The House of Alexandre Dumas and Company") accusing him of running a ghostwriting sweatshop. Dumas sued for libel and won, but nobody really disputed the underlying facts.
Dumas published around 100,000 pages in his lifetime.
Even his defenders admitted he couldn't have written all of it alone.
Put a pin in that; we'll come back to it later...
In November 2025, Hachette published a horror novel called Shy Girl by Mia Ballard. It is, decidedly, not my cup of tea. But it had sold about 1,800 copies in the UK, and it had almost 5,000 ratings on Goodreads, averaging 3.51 stars. It was an ordinary debut, with a built-in fanbase.
And then the internet decided it was written by AI, and the world began a witch hunt.
A Reddit thread blew up, followed by a YouTube video titled "I'm pretty sure this book is ai slop" pulling in 1.2 million views. Goodreads reviewers started dissecting individual sentences like forensic linguists with a grudge, and by early 2026, Hachette had pulled the book from shelves, cancelled the US release, and scrubbed it from Amazon.
Ballard says she didn't use AI herself.
She says an acquaintance she'd hired to work on an earlier self-published version had incorporated AI tools without her knowledge.
"This controversy has changed my life in many ways and my mental health is at an all time low and my name is ruined for something I didn't even personally do," she wrote to the New York Times.
And I'll stand up right now and say - fuck it.
Maybe she's telling the truth.
Or, maybe she isn't.
I don't actually give a shit, because I don't know, and you don't know, and neither do the thousands of people who participated in destroying her career.
We just. Don't. Know.
What I do know boils down to pretty much this: the tools // methods people used to reach their verdict are fucking garbage. The culture that's grown up around AI detection is poisonous, and I refuse to have anything to do with it.
AI detection tools are unreliable.
It's been shown over and over.
OpenAI launched its own AI text classifier in January 2023, and by July 2023, they'd shut it down because it correctly identified AI-written text only 26% of the time - worse, if I may point out, than a coin toss...
GPTZero, Turnitin's AI detection feature, Originality.ai, Pangram, etc.: the whole cottage industry that's sprung up here shares the same limitation. They're pattern matchers trained on statistical likelihoods, flagging text that looks like it could have come from a language model. The problem is, a lot of perfectly human writing also looks like it could have come from a language model, because language models were trained on human writing, and even the AI-based AI detection tools are just playing an eternal // infernal game of whack-a-mole with this model and that model and the next model.
Snake, meet tail.
You're going to get along swimmingly.
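To see why this fails, here's a toy sketch of the general approach these tools take - a tiny perplexity-style detector. Everything in it is invented for illustration (the twenty-word background corpus, the threshold, the function names); no real product works like this at this scale, but the failure mode does:

```python
import math
from collections import Counter

# Hypothetical toy detector: scores text by how predictable its words
# are under a background word-frequency model, then flags "too
# predictable" text as AI-like. Illustrative only -- not any real
# tool's internals.
BACKGROUND = Counter(
    "the cat sat on the mat the dog ran in the park "
    "the sun rose over the quiet town and the day began".split()
)
TOTAL = sum(BACKGROUND.values())

def perplexity(text: str) -> float:
    """Per-word perplexity under the background unigram model."""
    words = text.lower().split()
    # Laplace smoothing so unseen words don't zero out the probability.
    log_prob = sum(
        math.log((BACKGROUND[w] + 1) / (TOTAL + len(BACKGROUND)))
        for w in words
    )
    return math.exp(-log_prob / len(words))

def looks_ai(text: str, threshold: float = 15.0) -> bool:
    # Low perplexity = "too predictable" = flagged.
    # The threshold is arbitrary -- which is exactly the problem.
    return perplexity(text) < threshold

# Plain, conventional phrasing -- what a second-language writer or a
# genre novelist might produce -- versus idiosyncratic phrasing.
plain = "the cat sat on the mat and the dog ran in the park"
quirky = "cerulean typewriters hum beneath impossible marmalade skies"
print(looks_ai(plain), looks_ai(quirky))  # the plainer human text gets flagged
```

The simpler, more conventional sentence scores as more "machine-like" than the weird one, even though both are human-written. Scale the corpus up to the entire internet and the same bias survives: predictable prose gets punished.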
Researchers at Stanford found in 2023 that AI detectors disproportionately flagged writing by non-native English speakers as AI-generated, based on simpler sentence structures, based on fewer idioms, based on predictable word choices, based on all the things a person writing in their second or third language might produce.
All the things a detector reads as "probably a robot."
The same thing happens to neurodivergent writers.
Autistic writers.
Such as myself...
The tools are biased and inaccurate, they spit out false positives at rates that should make anyone uncomfortable using them as evidence of anything, and yet people treat the output like a blood test that came back positive, forgetting, apparently, that blood tests are retested and retested because no single test is entirely accurate.
But most of the people who went after Shy Girl weren't even using formal detection tools; they were reading passages and going: "This sounds like ChatGPT to me" - and maybe it did, and maybe it was, but a gut feeling seems like an awfully precarious thing over which to fuck up an entire career.
Someone on Reddit reads a sentence that feels generic, or a metaphor that lands a little flat, and (increasingly) they conclude with absolute certainty that a machine wrote it, as if mediocre prose were a new invention, as if bad writing didn't exist before November 2022. And may God forgive us if we condemn each other to permanent damnation for producing shitty prose; sans the production of shitty prose, no writer has ever grown one jot.
I've been writing professionally for years, and I've read thousands of self-published and traditionally published books, and a huge percentage of them contain clunky sentences, and overused phrases, and cliché metaphors, and prose that reads like it was assembled by so many monkeys with so many MacBooks. But that, dear reader, is writing. Most writing is ok. Functional at best. Some writing is good enough to create // destroy empires and so on, and that was true in 2005, and it was true in every moment of our crummy, bargain-bin history up to the introduction of ChatGPT, and damn it, it's true now.
You can't read a paragraph and reliably tell - with a human life on the line, because those are the stakes when you destroy a writer's career and a writer's reputation - beyond any reasonable doubt, whether a human or a machine produced it. Humans writing in familiar genres, leaning on conventions and common phrasings, leaning on their own context windows - containing everything from Ian Kershaw to Ursula Le Guin to a smattering of Harry Potter fanfiction from 2005 to a series of brain-rotted TikTok reels - are doing the best they can to find the right words and shove them into something resembling the right order. A romance novel that uses "his eyes darkened with desire" isn't necessarily AI-generated, even if it reads like a steaming pile to those of us enlightened enough to call ourselves the Literati. Following genre conventions doth not a fraud make. A horror novel with clunky exposition isn't ChatGPT. It might just be a first-time author who hasn't found their voice yet, and they'll never find that voice if we wave pitchforks and torches at every line we personally dislike.
The big publishers are not the ones who'll get hurt by this, obviously. Hachette pulled Shy Girl and moved on, with a swiftly issued statement about "protecting original creative expression." Back to business, and so it goes.
No, the folks getting hurt are the writers. Not only the ones who are tarred - all of us. Every single God-forsaken one of us. We are all made smaller by the pursuit of unproven and unprovable purity. Whether Ballard used AI or not (and she says she didn't, and, naive or not, I'm inclined to throw my cynicism to the wind and just take her at face value, and you can mock me if you like), the punishment landed before any verdict was reached, because no verdict can ever be reached. Not beyond a reasonable doubt. Never beyond a reasonable doubt.
She's not going to be the last. This whole setup, where anyone can accuse any writer of using AI based on gut feeling, and broken detection tools get treated as proof and publishers fold at the first hint of controversy because the PR cost outweighs the book's revenue, is going to grind up a lot of people into a fleshy, bloody, bony paste. Most of them will be small-time writers, debut authors, indie-published folks without the platform or the money to fight back.
The motivations of the accusers are more complicated than they'd like to admit.
First - the writers who feel threatened by AI are channeling that fear into vigilante enforcement, and I get the fear. I share it, to a point. I think it's clear that AI is flooding the market with cheap content, even if I can't confidently crucify any individual for it. But destroying individual careers on the basis of speculation doesn't fight that problem - it simply gives you someone to punish, and the drive for revenge is, while altogether human, altogether bullshit.
Second - beyond the slighted writers, you've got the internet sleuths who've found a new game. The same energy that drove Reddit to misidentify the Boston Marathon bomber (remember that?) is now being applied to prose style analysis, with the same overconfidence, and the same total absence of accountability when they get it wrong.
Third - the BookTok et al. influencers who smell blood (and engagement) in the water. "I dissected this book and found some awkward sentences" doesn't get 1.2 million YouTube views. "This book is AI slop" apparently does.
Finally - the readers who feel betrayed by the idea that something they read might not have been "real." I understand that impulse, too - but the logical endpoint is a world where every writer is suspect, and every flat passage becomes evidence, and the act of reading itself is poisoned by constant suspicion.
What unites all of them is the conviction that they - they - can tell. That they've developed a sixth sense for machine-generated prose through sheer exposure. Well, they haven't. Nobody has. The researchers who build these models can't reliably tell, and the companies that created the AI can't reliably tell, and I am comfortable concluding that someone with a Goodreads account and strong opinions sure as shit hasn't cracked it either.
Give me a break.
The "human-written" certification badges that have started popping up deserve a closer look, because they reveal how badly this whole discourse has gone off the rails...
The Society of Authors' logo and the Authors Guild's certification both operate on the honor system. You register, you say "I wrote this myself," and you get a sticker on your book. There is no forensic review (wouldn't make a difference), no manuscript audit (to what end?), and nobody's testifying under oath that they watched you type every word.
So what do these badges actually prove? That someone was willing to check a box? A person who used AI and wanted to hide it would check that box too. And a person who didn't use AI but can't afford the registration fee, or doesn't belong to the right trade association, or just didn't know the program existed, doesn't get the badge. The absence of the badge becomes its own accusation.
We've been here before. The "organic" label in food. The "fair trade" stamp on coffee. These things start as consumer protection and end as marketing advantages for folks with the resources to participate, and the writers who need protection the most - debut authors // the self-published, writers without agents or industry connections - are the ones least likely to know about or access these programs.
By creating a "certified human" category, you've implicitly created an "uncertified" category. Every book without the badge now carries a faint question mark, and so the presumption of innocence gets torn to shreds, and nobody has to take responsibility for it, because it happened through a logo, not a law.
I'm not going to use AI detection tools on other people's writing.
Not privately, not publicly, not ever.
I'm not going to participate in crowdsourced investigations of whether someone's novel or essay or blog post was "really" written by a human, and I won't share threads that claim to have found proof, and I won't add my voice to the chorus of outrage. My fingers are better employed typing out my own work than pointing at people I've never met.
The cost of a false accusation is a person's career and their mental health, while the cost of letting an AI-assisted book sit on a shelf is... a book sitting on a shelf. And I find I just do not give a shit. That asymmetry is so extreme I can't wrap my head around why more people aren't troubled by it.
If a publisher wants anti-AI clauses in their contracts, fine. If a literary prize wants attestation that no AI was used, that's their call. Those are agreements between parties who chose to be there and good luck to them. But the mob version of this, where anonymous internet users appoint themselves the AI police armed with broken tools and absolute conviction, is something I want no part of.
Writing has always been messy, and writers have always borrowed, imitated, recycled, and leaned on formulaic structures. Ghost writers exist, and editors sometimes rewrite entire chapters. Collaborative writing has been around for centuries. The line between "authentic" and "assisted" has never been as clean as people are pretending it is right now.
If The Three Musketeers were published today, and someone published the 2026 version of Mirecourt's pamphlet, what would happen? Would a Reddit thread decide that the prose felt too formulaic? Would a YouTube video rack up a million views dissecting the sentence structure? Would Hachette pull it from shelves?
The answer is: fucking probably. Because the current system doesn't care about the actual quality of the work, or the process behind it, or the centuries of collaborative tradition that produced some of the best writing we have. It cares about the appearance of purity. It cares about whether a mob can be convinced that something smells wrong.
Dumas is in the literary canon, and his books are assigned in schools.
But the way he made them would get him destroyed on the internet in 2026.
This seems suboptimal, to say the least.
I don't know what the right "policy framework" for AI and publishing looks like. Nobody does. We're probably going to spend years figuring it out and we're probably going to get a lot of it wrong.
But I am 100% sure that I know what the wrong version looks like. It looks like a YouTube video with a smug title and a million views, and a Reddit thread full of folks who've never published anything cosplaying as literary forensic experts, and a debut author's name becoming synonymous with fraud because her prose wasn't polished enough to survive a vibe check run by strangers on the internet.
Mia Ballard sold 1,800 books. She had a 3.51 on Goodreads. She was nobody. Most writers are nobody. The internet ate her alive because it felt good to have a villain.
She won't be the last.
And I still won't be any part of it.