The Coherence Premium
I don't necessarily believe in second brains. The notion (pun intended) that you can offload your thinking to a perfectly organized system of notes and links has always struck me as a fantasy. The people I know who've built elaborate Notion databases or Obsidian vaults mostly end up with digital hoarding problems, where the system becomes the work. And I'm broadly skeptical of the Claude Code productivity discourse, the idea that AI tools will let you 10x your output if you prompt them correctly. (Most people using AI are producing more stuff faster without any clear sense of whether the stuff is good or consistent or even pointed in the right direction.)
But I do believe in something adjacent to both of these ideas, something that borrows from the second brain concept without the hoarding, and from AI tooling without the context-free prompting: I believe in coherence as a system.
In 1937, the British economist Ronald Coase asked a question that seems almost embarrassingly simple: why do firms exist at all? If markets are so efficient at allocating resources, why don't we just have billions of individuals contracting with each other for every task? Why do we need these hulking organizational structures called companies?
His answer, which eventually won him a Nobel Prize, was transaction costs. It's expensive to negotiate contracts and coordinate with strangers, to monitor performance and enforce agreements. Firms exist because sometimes it's cheaper to bring activities inside an organization than to contract for them on the open market. The boundary of the firm, Coase argued, sits wherever the cost of internal coordination equals the cost of external transaction.
This was a brilliant insight in '37, but Coase couldn't have anticipated what happens when transaction costs collapse. When software eats coordination. When a single person with the right tools can do what used to require a department. When AI can execute tasks that once demanded teams of specialists.
We're in a Coasean inversion. The economics that made large firms necessary are reversing. But most people are looking at this transformation through the wrong lens. They see AI as a productivity tool, a way to do more faster. They measure success in hours saved or output multiplied, and this misses the point entirely.
The solopreneur's advantage is not solely speed, and it's certainly not "lower costs," despite what a good many seem to think.
The advantage is coherence.
what coherence actually means
When I say coherence, I mean something specific: the degree to which every part of an operation derives from the same understanding, the same model of reality and set of priorities and tradeoffs.
When you work alone, you have a problem and you understand the context because you lived it and touched it and experienced it first-hand. You make a decision based on that understanding, execute the decision, see the results, and update your understanding. The entire loop happens inside one mind.
What happens in a large organization facing the same problem? Someone identifies the problem, but they don't have authority to solve it. They write a report explaining the problem to someone who does have authority. That person reads the report, but they don't have the original context, so they ask clarifying questions. The answers come back, filtered through email or a meeting. A decision gets made, but the people who have to implement it weren't in the room. They receive instructions that encode the decision but not the reasoning. They execute the instructions as best they understand them. The results come back through multiple layers of reporting. By the time the original decision-maker sees what happened, months have passed and the context has shifted again.
This is the basic challenge of coordination across minds. Every handoff loses information, every translation introduces drift, and every layer of abstraction moves further from ground truth.
Organizations have spent decades trying to solve this problem. They've built elaborate systems of documentation, standardized processes, metrics and KPIs, regular meetings, shared values statements, company cultures. All of these are attempts to create coherence across minds. And they all fail, in different ways and to different degrees, because they're fighting against something that won't budge: knowledge is sticky and context is lossy, and understanding doesn't transfer perfectly between humans.
the pathology of process drift
A company starts small, with the founders doing everything themselves. They make decisions quickly because they understand everything about the business, and the business works.
The company grows and the founders can't do everything anymore. They hire people and try to transfer their understanding. But understanding doesn't transfer easily, so they also transfer processes. "This is how we do X. Use this checklist for Y. Follow these steps."
The processes work, mostly. But the new employees don't have the context that generated those processes. They don't know why step three comes before step four, and they don't know which parts are essential and which parts were arbitrary choices. So when situations arise that the process doesn't quite cover, they either follow the process rigidly and get suboptimal results, or they improvise and create inconsistency.
More growth, more employees, more processes. The processes start interacting in ways nobody anticipated. The sales process assumes certain things about the product process. The product process assumes certain things about the engineering process. When those assumptions drift out of alignment, you get friction and delays and finger-pointing.
The company responds by adding coordination mechanisms like project managers, alignment meetings, and cross-functional reviews. These help, but they also add overhead, and they create their own drift: the coordination layer develops its own processes, its own assumptions, its own information loss.
Eventually you reach a point where a significant fraction of the organization's energy goes toward internal coordination rather than actual value creation. A 2022 Microsoft study found that employees in large organizations spend over 50% of their time on internal communication and coordination. Half the payroll, dedicated to getting the organization to agree with itself.
context fragmentation
More information means the coordination problem gets worse, not better. This seems counterintuitive, because shouldn't more information make everyone more aligned?
But information isn't understanding. Understanding is integration, and integration happens in minds. More information means more raw material that each mind has to process differently.
A typical large organization's knowledge base is spilling over with strategy documents from last year and the year before, project postmortems from dozens of initiatives, customer research reports, competitive analyses, technical specifications, meeting notes, email threads, Slack channels, and wiki pages.
Somewhere in there (the elusive somewhere...) is everything you need to know to make a good decision.
But nobody has synthesized it all, and nobody has integrated it into a coherent model. Each person reads a fragment, interprets it through their own context, and forms their own understanding. When they discuss decisions with colleagues, they're not comparing the same mental models but rather different interpretations of different subsets of the available information.
This is context fragmentation. People don't disagree on facts; they're operating from different maps of the same territory. And because the maps are implicit, inside people's heads, nobody realizes they're not looking at the same thing.
The proliferation of AI tools in large organizations means that now each employee has their own AI assistant, trained on whatever context they happen to feed it, producing outputs that reflect their particular understanding of the situation. The AI amplifies individual perspectives rather than creating shared ones.
single-player mode advantage
When you're operating alone, you have one context, one understanding, one model of your business and your market and your customers and your strategy. That model lives in your head, and it's coherent because there's only one mind maintaining it.
If // when you use AI tools, you're feeding them from that single source of truth. The AI doesn't have its own understanding that might drift from yours, and it operates within the context you provide. If you give it good context, it executes within that context. If your understanding is coherent, the AI's outputs will be coherent.
This is the inversion of the traditional organization's problem. In a large organization, you have many minds with their own contexts, trying to coordinate through AI tools that amplify their differences. As a solo operator, you have one mind with one context, using AI tools to execute within that coherent frame.
The AI handles the execution at scale while you maintain the coherence. This division of labor plays to the strengths of each party: humans are good at integration and judgment, while AI is good at execution and volume. The solo operator with AI gets the benefits of scale without the costs of coordination.
But this only works if you actually maintain coherence.
If you're using AI to do random shit faster, you're not capturing the advantage. The advantage comes from having a tight operating model that the AI operates within.
the coherence stack
Think of it as a stack with four layers, each feeding the one below it.
┌─────────────────────────────────────┐
│ MIND LAYER (You) │
│ Understanding, judgment, strategy │
│ The source of coherence │
└─────────────────┬───────────────────┘
│ feeds
▼
┌─────────────────────────────────────┐
│ CONTEXT LAYER │
│ Operating model, constraints │
│ Voice guidelines, decision logs │
└─────────────────┬───────────────────┘
│ constrains
▼
┌─────────────────────────────────────┐
│ EXECUTION LAYER (AI) │
│ Content, code, research, analysis │
│ Customer responses at scale │
└─────────────────┬───────────────────┘
│ produces
▼
┌─────────────────────────────────────┐
│ OUTPUT LAYER │
│ Coherence-checked artifacts │
│ What actually ships │
└─────────────────┬───────────────────┘
│ feedback
└────────────────────► Mind Layer
At the top is the mind layer, which is you: your understanding, your judgment, your integrated model of the business. This layer can't be automated or delegated, and it's the source of coherence.
Below that is the context layer, where you externalize your understanding into documents that AI tools can consume. Your operating model, your constraints and tradeoffs, your voice guidelines, your decision history. This layer translates what's in your head into something machines can work with.
Below that is the execution layer, where AI operates. Content generation, research, analysis, code, customer responses. The AI works within the constraints provided by the context layer, producing outputs at scale.
At the bottom is the output layer, which is what actually ships. But nothing reaches this layer without passing through a coherence check: does this output reflect my model? Would I have produced something like this? Does it fit with everything else?
The stack only works if information flows correctly. The mind layer feeds the context layer through deliberate documentation, the context layer constrains the execution layer through careful prompting, and the output layer feeds back to the mind layer through review, which sometimes triggers updates to your understanding.
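To make the downward path concrete, here's one way it can look in code. This is a minimal sketch, not the implementation: the file names, directory layout, and model ID are placeholder assumptions, and I'm using the Anthropic Python SDK only because it's a tool I know the shape of; any chat-style model works the same way.

```python
# A minimal sketch of the stack in code form. File names, paths, and the
# model ID are placeholders; swap in whatever context documents and AI tool you use.
from pathlib import Path
import anthropic

CONTEXT_DIR = Path("context")          # the context layer, kept as plain files
CONTEXT_FILES = ["operating_model.md", "constraints.md", "voice.md"]

def load_context() -> str:
    """Mind layer -> context layer: your externalized understanding."""
    return "\n\n".join((CONTEXT_DIR / name).read_text() for name in CONTEXT_FILES)

def execute(task: str) -> str:
    """Context layer -> execution layer: the AI works inside your frame."""
    client = anthropic.Anthropic()     # reads ANTHROPIC_API_KEY from the environment
    response = client.messages.create(
        model="claude-sonnet-4-20250514",   # substitute whatever model you use
        max_tokens=2000,
        system=load_context(),         # every task goes out wrapped in the same context
        messages=[{"role": "user", "content": task}],
    )
    return response.content[0].text

# Execution layer -> output layer: the draft still has to pass your coherence
# check before it ships, which is what closes the loop back to the mind layer.
draft = execute("Draft a product update email about the new onboarding flow.")
print(draft)
```

The point isn't the particular tool; it's that every task carries the same context, so the execution layer never works from a blank slate.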
Most people using AI skip the context layer entirely. They go straight from a vague intention to an AI prompt to shipped output. This is how you get drift // how you end up with an operation that feels incoherent, where different pieces don't quite fit together, where customers sense something is off even if they can't articulate what.
building your context layer
The context layer is where the work happens. It's the translation mechanism between your understanding and AI execution. Get this right and coherence becomes automatic; get it wrong and you're constantly fighting drift.
Start with your operating model - a working description of how your business actually functions.
I structure mine around five questions.
┌────────────────────────────────────────────────────────┐
│ OPERATING MODEL │
│ (Five Core Questions) │
├────────────────────────────────────────────────────────┤
│ │
│ 1. PROBLEM & AUDIENCE │
│ What problem do I solve, for whom specifically? │
│ Not demographics. The person in the moment. │
│ │
│ 2. THESIS │
│ Why does my approach work? │
│ The real theory, not marketing language. │
│ │
│ 3. TRADEOFFS │
│ What am I optimizing for, at what expense? │
│ Make the choices explicit. │
│ │
│ 4. BOUNDARIES │
│ What do I explicitly not do? │
│ The boundaries define the shape. │
│ │
│ 5. VOICE │
│ How do I actually sound? │
│ Words I use. Words I avoid. Stance toward reader. │
│ │
└────────────────────────────────────────────────────────┘
- What problem do I solve, and for whom specifically? Steer clear of demographics; what I need is a description of the person in the moment they need what I offer. What are they trying to do, and what's getting in their way?
- What's my actual thesis for why my approach works? Why does my solution address the problem better than alternatives?
- What are the core tradeoffs I've chosen? Every business is a bundle of tradeoffs, and I'm optimizing for X at the expense of Y. Making these explicit prevents drift, because when a new opportunity arises, I can check it against my tradeoffs rather than deciding ad hoc.
- What do I explicitly not do? This is more useful than describing what you do, because the boundaries define the shape.
- How do I sound? What words do I use, what words do I avoid, what's my stance toward the reader? Capturing this helps AI maintain consistency across outputs.
This document should be short enough to include in AI prompts. If it's longer than a page, you haven't distilled it enough. The goal is compression without loss of generative power.
Next, build your constraints file. These are the decision-making rails that keep outputs on track. I think of them as principles // functional rules that generate answers.
For example: "When choosing between comprehensive and focused, choose focused. Our readers are busy and will bounce if they don't get value in the first paragraph." That's a constraint that actually constrains, telling the AI (and me) how to resolve a common tradeoff.
Include examples. Point to a piece you wrote that exemplifies the voice, and one that doesn't. Concrete examples communicate more than abstract descriptions.
Finally, maintain a decision log. When you make a significant choice, write down what you decided and why. This creates institutional memory for a one-person institution. When similar situations arise later, you (or your AI tools) can reference how you've handled them before. This prevents the common failure mode where you decide the same question differently each time because you forgot your previous reasoning.
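If you want the log machine-readable so your AI tools can pull it into context, an append-only file is plenty. Here's a minimal sketch; the field names and path are just one possible shape, not a format you need to adopt, and a dated markdown file works just as well.

```python
# A minimal sketch of an append-only decision log. The fields and location are
# one possible shape, not a prescription; the point is recording the reasoning.
import json
from datetime import date
from pathlib import Path

LOG_PATH = Path("context/decisions.jsonl")   # hypothetical location, alongside the other context files

def log_decision(decision: str, reasoning: str, rejected: list[str]) -> None:
    """Record what you decided, why, and what you decided against."""
    entry = {
        "date": date.today().isoformat(),
        "decision": decision,
        "reasoning": reasoning,
        "rejected": rejected,
    }
    with LOG_PATH.open("a") as f:
        f.write(json.dumps(entry) + "\n")

log_decision(
    decision="No free tier for the new course",
    reasoning="Optimizing for committed learners over top-of-funnel volume.",
    rejected=["14-day free trial", "free first module"],
)
```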
a coherence check
Every output that ships should pass through a coherence check. This can be quick, but it can't be skipped.
┌────────────────────────────────────────────────────────┐
│ COHERENCE CHECK │
├────────────────────────────────────────────────────────┤
│ □ Does this sound like one person wrote it? │
│ □ Would I explain it this way? │
│ □ Does it reflect my specific tradeoffs? │
│ □ Could a competitor produce this? (Should be no) │
│ □ Does it fit with everything else I've shipped? │
│ │
└────────────────────────────────────────────────────────┘
The questions:
- Does this sound like one person wrote it? AI tends toward a certain homogeneity, and if an output could have been produced by anyone, it's not coherent with my operation.
- Would I explain it this way? The same framing, the same examples, the same emphasis? If I'd approach it differently, the output needs revision or I need to update my context documents.
- Does it show // hold my specific tradeoffs? If I've chosen focused over comprehensive, is this output focused? If I've chosen accessible over technical, is this accessible?
- Could a competitor produce this? If the answer is yes, the output isn't coherent enough. It's not distinctive and doesn't come from my particular understanding.
- Does it fit with everything else I've shipped? Coherence is cumulative, and each output should feel like it belongs with the others. If this piece would feel out of place next to my other work, something's wrong.
You can automate part of this. Feed your AI tool your recent outputs and ask it to compare a new draft against them, flagging inconsistencies. But the final judgment has to be yours. You're the source of coherence, and the check is really asking: does this feel like mine?
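The automatable half might look something like this. A sketch under the same assumptions as before (a directory of shipped pieces, a constraints file, the Anthropic SDK): the prompt wording is illustrative, and whatever it flags is input to your judgment, not a verdict.

```python
# A sketch of the automatable half of the coherence check: compare a new draft
# against recently shipped outputs and flag drift. The final judgment stays manual.
from pathlib import Path
import anthropic

def coherence_flags(draft: str, shipped_dir: str = "shipped", n: int = 5) -> str:
    """Ask the model to flag inconsistencies between a draft and recent shipped pieces."""
    recent = sorted(Path(shipped_dir).glob("*.md"), key=lambda p: p.stat().st_mtime)[-n:]
    samples = "\n\n---\n\n".join(p.read_text() for p in recent)
    prompt = (
        "Here are my recent shipped pieces:\n\n" + samples +
        "\n\nHere is a new draft:\n\n" + draft +
        "\n\nList any places where the draft's voice, assumptions, or "
        "tradeoffs are inconsistent with the recent pieces."
    )
    client = anthropic.Anthropic()
    response = client.messages.create(
        model="claude-sonnet-4-20250514",   # substitute whatever model you use
        max_tokens=1000,
        system=Path("context/constraints.md").read_text(),   # the same constraints file as above
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text
```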
anti-patterns that break coherence
I've watched myself and others struggle with this.
A few failure modes reliably produce drift:
┌────────────────────────────────────────────────────────┐
│ COHERENCE ANTI-PATTERNS │
├────────────────────────────────────────────────────────┤
│ │
│ CONTEXT STARVATION │
│ └─► AI works from generic training, not your model │
│ │
│ OUTPUT ACCUMULATION │
│ └─► Shipping without review; deviations compound │
│ │
│ MODEL STALENESS │
│ └─► Understanding evolves, documents don't │
│ │
│ FRAGMENTED TOOLING │
│ └─► Different tools, different contexts, drift │
│ │
│ DECISION AMNESIA │
│ └─► No rationale logged; inconsistent future choices │
│ │
└────────────────────────────────────────────────────────┘
- Context starvation is the most common. You ask AI to do something without feeding it your operating model, your constraints, your voice. The AI does its best, but it's working from generic training rather than your specific understanding. The output is competent but not coherent.
- Output accumulation is context starvation's downstream consequence. You ship AI outputs without proper review, and each one is slightly off from your model. The deviations accumulate. After a few months, your operation no longer reflects your understanding because most of what you've shipped wasn't actually generated from your understanding.
- Model staleness happens when your understanding evolves but your context documents don't. You learn something that changes how you think about the business, but you don't propagate that update to your operating model or constraints file. Now your AI tools are working from an outdated picture.
- Fragmented tooling = using different AI tools with different contexts. You use one tool for writing and another for code and another for research, and each has different context, different prompts, different understandings of what you're doing. The outputs don't cohere because they're not coming from the same source.
- Decision amnesia = making choices without recording the reasoning. You decide something, move on, and three months later face a similar choice with no memory of how you handled it before. You decide differently this time. Now you have inconsistent decisions in your history, and any AI tool referencing your past work will find contradictions.
the fragmentation audit
You should audit your operation for coherence regularly. I do this monthly, AI tools or no AI tools, but the frequency matters less than doing it at all.
Pull your last twenty or thirty outputs, whether blog posts, emails, product updates, or whatever you've shipped. Lay them out and look for the implied beliefs and positions in each. What does this piece assume about the reader? What does it prioritize, and what stance does it take?
You're looking for drift: places where piece A assumes one thing and piece B assumes something different, places where your voice shifted without intention, places where you contradicted yourself simply because you forgot what you'd said before.
When you find inconsistencies, you have two options. Either reconcile them by updating your model (maybe you actually did change your mind, and the recent piece reflects your current thinking), or flag them as errors and correct going forward.
This audit also reveals context layer gaps. If you keep finding drift in a particular area, your context documents probably don't cover that area well enough. Add constraints, add examples, make the implicit explicit.
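The gathering step of the audit is scriptable if your outputs live in files; the reading isn't. A rough sketch, under the same assumptions as above: it pulls the implied beliefs out of each piece so you can lay them side by side, but reconciling them stays with you.

```python
# A sketch of the gathering step of the fragmentation audit: extract each piece's
# implied assumptions so drift is visible side by side. Spotting it is still manual.
from pathlib import Path
import anthropic

client = anthropic.Anthropic()

def implied_positions(text: str) -> str:
    """Ask for the beliefs a piece takes for granted, as three short bullets."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",   # substitute whatever model you use
        max_tokens=500,
        messages=[{
            "role": "user",
            "content": "In three bullet points: what does this piece assume about "
                       "the reader, what does it prioritize, and what stance does "
                       "it take?\n\n" + text,
        }],
    )
    return response.content[0].text

# Last ~30 shipped pieces, laid out one after another for review.
for path in sorted(Path("shipped").glob("*.md"), key=lambda p: p.stat().st_mtime)[-30:]:
    print(f"== {path.name}\n{implied_positions(path.read_text())}\n")
```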
why this beats scale
Large organizations have obvious advantages. They have capital, brand recognition, distribution, expertise, redundancy, Las Vegas conferences, and so on. A solo operator can't compete on those dimensions.
But think about what those advantages actually buy. Capital lets you hire more people, and more people means more coordination overhead and context fragmentation. Brand recognition helps customers find you, but it doesn't help you serve them coherently. Distribution gets your product to more places, but each touchpoint introduces opportunities for inconsistency. Expertise is great, but experts in different domains don't automatically share mental models.
Meanwhile, the solo operator with a coherent system has advantages that don't show up on traditional metrics. Every customer interaction comes from the same understanding, and every piece of content reflects the same perspective. Every product decision follows from the same model. The operation feels like one thing, because it is one thing.
Customers experience this as quality, even if they can't articulate why. They sense that someone understands what they're doing and why, and they don't encounter the cognitive dissonance of dealing with an organization that can't agree with itself.
The dynamic that makes this sustainable is that coherence compounds. Each decision you make within your model reinforces the model, and each output that reflects your understanding strengthens your position. Your operation becomes more legible over time, both to you and to your customers. Meanwhile, large organizations' incoherence also compounds, with each misalignment creating more misalignment and each process drift opening space for more drift.
The gap widens.
The coherence advantage works best in domains where the value comes from understanding rather than from physical or regulatory scale. Knowledge work, creative work, advisory work, software, content, education, consulting. These are domains where a coherent perspective can outcompete a fragmented organization's superior resources.
And the opportunity is real: the technology now exists to operate at scale while maintaining the coherence of a single mind. The window is open because large organizations haven't figured out how to respond. Their answer to AI so far has been to give everyone AI tools and hope for productivity gains, which only accelerates their fragmentation.
the coherence moat
Scale used to be the moat. You built a big organization with lots of resources, and the sheer weight of your operation protected you from smaller competitors. Transaction costs made it hard for anyone to replicate what you'd built.
But transaction costs are collapsing. The activities that used to require organizations can increasingly be performed by individuals with the right tools.
The Coasean logic that justified large firms is weakening.
If there is a new moat (and I'll admit, that's a big "if"), it probably looks like coherence: the ability to operate as one mind, one understanding, one model, even as you execute at scale. Large organizations can't have this because they're composed of many minds. Solo operators can have it by default, if they're deliberate about maintaining it.
The solo operator who builds a coherent system and lets AI execute within it has an advantage that doesn't need venture capital, doesn't need hiring, and doesn't require the whole apparatus of organizational scaling.
Scale breaks coherence. Coherence is the moat.