
Members-Only Q&A

July Q&A

Let me begin with a disclaimer: I don’t know everything. I have biases, blind spots, and the occasional compulsion to overexplain historical metaphors. But I’m going to do my best to make this interesting…

Every month, I keep my inbox informally open to questions from Pro members. I try to answer as many as I can privately, but occasionally, a handful of them deserve more room to breathe. So I’m adding a new Pro benefit.

This post is for you - the Pro members who sent in thoughtful questions over the past few weeks. Thank you. If you want to contribute to next month’s Q&A, feel free to email me at joan@joanwestenberg.com.

Below are a few of the best questions I’ve received this month.


How can we prepare our children for the future? And what tools can we give them to be positive agents of change to lead us out of the situation we are in?

Teach them history. Not as a trivia contest; I mean a sense of causality. Revolutions that ate their children, policies that hardened into dogma, utopias that turned bureaucratic. Give them Gibbon, Arendt, Orwell, and Tocqueville. Show them that ideas are dangerous, but that their absence can be, too.

But more than anything: give them agency. Let them try things. Let them fail in recoverable ways. Praise the attempt more than the outcome. Cultivate what the Greeks called thumos - spiritedness, ambition, righteous indignation when the world is misaligned. Because we do not need more passive passengers on Spaceship Earth. We need engineers, poets, whistleblowers, reformers.

And maybe - tell them that they're not here to optimize their brand. They're here to do work that makes everything around us slightly less fragile than it was yesterday.

Are we over-optimizing our lives? Can productivity culture become pathological?

It already has. The original promise of productivity tools was liberation: spend less time on routine tasks, so you can do the hard, creative, meaningful work. That became "do all the things, faster."

David Allen's Getting Things Done was supposed to help clear your mind. And it's an awesome framework. But it ended up creating a priesthood of knowledge workers tending to the altar of Inbox Zero. The planner became the goal. We went from freeing our time to harvesting it like a resource.

You can hear echoes of Bentham here. The dream of a life optimized for maximum utility, minimum waste. But people are not spreadsheets. What makes life valuable is often what defies scheduling: wandering conversations, unexpected detours, the boredom that births creativity.

Optimizing inputs is easy. Optimizing meaning is not. If your productivity system doesn’t have room for play, for moral confusion, for mess, it's just another kind of trap.

How do we reconcile rapid technological progress with philosophical lag?

Badly. At least so far.

Nuclear fission = a marvel of 20th-century physics. The bomb came before the ethics. Oppenheimer, quoting the Bhagavad Gita, said "Now I am become Death." But quoting scripture is not the same as answering to it.

Social media = tool for connection, turned machine for outrage. The engineers asked "can we connect everyone?" Not "should we?" Nor did they ask what kind of polity emerges from a digital panopticon of performative opinions.

There is no fixed solution here. Philosophy, by its nature, asks for reflection. But technology does not wait. So we need institutional memory. We need ethicists in the room where it happens. And we need technologists who read Paul Graham essays - but who can read other sources, too.

Slow thinking is the only known patch for existential risk.

Is Artificial Intelligence a philosophical breakthrough or a productivity tool?

It's both, and that's the problem.

For the sake of sanity here, I'm going to use "AI" in its current colloquial sense: large language models.

Large language models seem to collapse the boundary between process and product. The work of generating text, code, even images, is suddenly unbundled from human intention. This is astonishing. And also deeply confusing.

Philosophically, it disrupts theories of meaning and authorship. What does it mean to say something, when a non-agent can say it too? Wittgenstein argued that meaning is use. But if a machine "uses" a sentence fluently, does it mean anything by it?

From a productivity angle, the implications are clearer. It is a force multiplier. Good writers get faster. Mediocre ones get automated. Agencies replace interns with prompts.

But the real tension: AI doesn’t understand truth. It models coherence. That works well enough, until it doesn’t. Ask it for legal advice, medical summaries, or scientific hypotheses and you'll see: it's confident, fluent, and often wrong.

So if it's a tool, it needs guardrails. If it's a breakthrough, it needs scrutiny. And if it's both, then the ethical frameworks must evolve as fast as the codebase.

Why does it feel like no one knows what the hell is going on anymore?

Because they don’t. We don't. I sure as shit don't.

The 20th century was organized around institutions that promised legibility: governments, newspapers, universities. You might not have agreed with them, but they gave you a map. Now the map has too many layers and half the streets lead to algorithmic dead ends.

Part of this is informational. We are overfed and undernourished. Part of it is institutional decline. Trust evaporates when incentives reward visibility over rigor. But part of it is metaphysical: we’ve lost the narrative thread.

Nietzsche saw it coming. Once you kill God, you need a new organizing principle. We tried capitalism, human rights, science, climate justice, Mars. But nothing has stuck.

So now we flail. One minute it's crypto libertarianism. The next it's tech ethics or post-rationalism or monk-mode minimalism. Everyone is building boats, but no one agrees where the shore is.


More next month. If you’ve got a question, send it in.

JW.