A phenomenon you can observe if you spend enough time on any social platform:
Someone encounters a new topic they've never thought about before. Maybe it's a Supreme Court ruling on some obscure administrative law question, or a scientific paper about microplastics, or a controversy involving a celebrity they couldn't pick out of a lineup. Pick your poison.
Within minutes, they'll have a fully formed opinion, complete with moral outrage or smug dismissal, delivered with the confidence of someone who's spent years studying the subject.
And that opinion will be entirely predictable if you know which tribe they belong to.
Progressive tech workers will say one thing, conservative parents another, libertarian economists a third. Ask them why they believe what they believe, and they'll give you reasons. Lots of reasons. Reasons that sound pretty good, actually. Reasons they clearly believe are their own.
They're not lying.
They really do believe those are their reasons.
But if you could rewind the tape and watch their brain in real time, you'd see something different. The opinion came first, assembled instantly from pattern matching, and the reasons came later, constructed on the fly to justify what they already knew they believed. This is confabulation, and it's running maybe 80% of our discourse.
The mechanism is simple. You encounter a new piece of information. Before your conscious reasoning kicks in, your brain runs a quick heuristic: what do high-status people in my ingroup think about this? Sometimes the answer is obvious because you've already seen three people you follow and respect weigh in. Sometimes it's less obvious, but your brain is good at extrapolating. If you're in the rationalist-adjacent crowd and someone proposes a new government regulation, you can probably guess what the consensus will be before anyone says anything. If you're in the progressive activist sphere and someone gets cancelled, same deal. The pattern-matching happens in milliseconds.
You feel a strong conviction about this topic you learned about thirty seconds ago. That feeling needs to be explained, both to others and to yourself. So your brain, obligingly, generates those explanations. This is what psychologists call confabulation, the process by which we create plausible-sounding stories to explain decisions or beliefs that actually came from unconscious processes. Split-brain patients do this constantly when you ask them why they performed actions triggered by information sent only to their non-verbal hemisphere. They make up reasons that sound good but have nothing to do with the actual cause.
We're all split-brain patients on Twitter.
The confabulated reasons aren't random, though. They're constructed from your tribe's standard toolkit of arguments and frameworks. If you're a libertarian, the reason will involve property rights or government overreach. If you're a progressive, it might center on systemic inequality or harm reduction. If you're in the heterodox thinker space, you'll probably gesture at captured institutions or preference falsification. These aren't bad arguments, necessarily. They might even be correct!
But they're not why you believe what you believe.
They're the post-hoc justification for an inevitable conclusion.
You can test this yourself. Find a topic where the tribal lines are clear but you don't yet know the tribal consensus. Maybe some esoteric academic controversy or a news story that's too fresh for the standard takes to have propagated. Notice what your gut tells you before you've seen anyone else's opinion. Then watch as people you follow start weighing in. Do you find yourself agreeing with your tribe and disagreeing with the outgroup?
We've all gotten good at this. Years of Twitter have trained us to generate sophisticated-sounding arguments on demand. We can invoke studies (that we half-remember from other people's tweets), cite principles (that we apply selectively), and construct seemingly logical chains of reasoning (that we'd never accept from the outgroup). We can do this in 280 characters or in a 20-tweet thread. We can do it fast enough that we genuinely believe we're engaging in real-time reasoning rather than real-time rationalization.
Does this mean all Twitter arguments are worthless? Not quite. Mostly, though. Sometimes people actually do change their minds when presented with new information, though this seems to happen almost exclusively in private DMs rather than public discourse. Sometimes the confabulated reasons, despite their dubious origin, point toward legitimate considerations that deserve attention. And sometimes, rarely, someone has actually thought deeply about a topic before opining on it. But these are exceptions.
The bulk of what we see is tribal signaling, passing itself off as individual thought. Someone proposes a policy, and within hours there are ten thousand takes, neatly divided along tribal lines, each person convinced they've reached their conclusion through careful reasoning. The progressive has their reasons, the conservative has theirs, the libertarian has a third set, and all of them feel like they're thinking for themselves. None of them notice that they could have predicted their own position just by checking which tribe they belong to.
What makes this particularly hard to escape: nobody wants to believe they're just regurgitating tribal talking points. We're all the heroes of our own intellectual journeys, carefully weighing evidence and following arguments where they lead. Suggesting that actually, most of the time, we're just pattern-matching to high-status opinions and then making up reasons feels like an attack on our autonomy and intelligence.
But the evidence is pretty overwhelming once you start looking for it. The correlation between someone's positions on seemingly unrelated issues is far too high to be explained by independent reasoning. Why would your opinion on trans athletes predict your view on monetary policy? Why would your stance on police funding correlate with your beliefs about nuclear energy? These connections make sense if you're inheriting a bundle of positions from your tribe, but they're bizarre if everyone's actually reasoning things out for themselves.
The real red pill isn't about politics or gender or any specific culture war topic. The real red pill is realizing that the vast majority of confident opinion-having on Twitter is this exact process: pattern-match to tribe, generate plausible reasons, hit send. Once you see it, you can't unsee it. And once you've seen it in others, you start noticing it in yourself, which is considerably less fun but probably more important.
What do you do with this knowledge? You could quit social media, though that feels a bit like throwing the baby out with the bathwater. You could try to catch yourself confabulating and pause before opining, though good luck maintaining that discipline for more than a week. Or you could just hold your own opinions a little more loosely, remain curious about why you believe what you believe, and maybe, occasionally, admit when you don't actually know enough to have an informed view.
That last one might be the hardest of all.