Worthwantism: A new word inspired by Dennett
And used in a way he’d hate 🙃
FOREWORD (is forewarned):
I’ve spent a long time watching people argue about what’s right. Philosophers, theologians, scientists, skeptics: everyone’s offering their angle on what we should do, or why we should care. I don’t have much patience for it anymore. Not because I don’t care, but because I do. I care enough to want something that actually works.
For me, everything starts with experience. What it feels like to be alive. To hurt, to feel good, to witness suffering and want it to stop. There is something it is like to be this way.
Likism. There’s something in us, when we’re not too broken or numb, that leans toward helping, toward easing pain, toward making things better. I call that IWRS: Increase Wellbeing, Reduce Suffering. No, not everyone has it. Some do. So this isn’t a war of who is right, but a war of who gets to decide what “right” even IS. What is the meaning of ought, and what is the only “ought worth wanting”?
Worthwantism is not a theory I need you to believe in, just a description of what happens when your empathy collides with post-reflective clarity and coherence. There are different concepts of a thing, say “freedom,” and we decide which conception is worth wanting.
The challenge really isn’t persuading each other through arguments, although I get the impulse. We like to assume we have similar values and just need to hash out the facts and frame it right. Blah, blah, that’s all true for the most part.
I may be wrong, but in my opinion the real key to IWRS lies in figuring out how to protect and scale the capacity to care at all.
That’s a biology war mostly. It starts at brain chemistry, neural wiring, stress environments, developmental windows. It’s wrapped up in culture and tech and incentives in weird ways. If we don’t scientifically codify these variables now, we’ll keep watching cruelty repeat and multiply, no matter how clever our ideas sound, and my fear is that we’ll die mid argument.
To this end, I wrote three pieces (at least) to try to name what I see. Not to dictate a path, but to offer one that feels honest. I want you to be honest in return: please read them and tell me where they fail.
I’m going to err on assuming there’s a big filter that tends to end any species that gets roughly as far as we have, technologically. My goal is to have us survive it. IWRS is the start of one of the tools I’m building to help us do that. It’s pathetic. Some of it might actually be true.
It’s cute that I’m even bothering. I’m something like an ape, trying to help, in my little way, to save the human family of Earth. I can’t help but love you. I’ll never stop reaching for you in the dust.
—Stella
This piece is a response to a recent Making Sense episode featuring David Edmonds.
Edmonds is following the plot on secular, science-backed morality. Singer, Parfit, Sidgwick, Thomson, MacAskill, the bloody-nosed lineage of keeping consequentialism in the room.
My system is a ten-step derivation I revealed in the latter half of my Nov. 15th post How to Know What To Do.
(Quick refresher on my framework, just for context. This isn’t the point, but it helps clarify what I’m asking Sam to pick back up.)
Stillwell IWRS (Increase Wellbeing, Reduce Suffering)
One: Experience exists. There is something it is like to be. That’s the starting point. Not belief, not matter, not simulation. Experience is the base layer.
Two: Experience has texture. Some moments feel good. Some feel bad. This is not opinion. Pain hurts. Relief relieves. These are not up for debate inside the system that feels them.
Three: Suffering is bad for the one experiencing it. You don’t need a theory. If you’re in pain, you want out. That’s enough. The badness is baked in. You don’t need to deserve it for it to matter.
Four: Wellbeing is good for the one experiencing it. Same logic. When you feel peace, safety, or joy, you prefer it. The goodness isn’t cosmic. It’s local. But it counts.
Five: Empathy makes others’ experience matter to you. Their pain tugs at you. Their joy lifts you. This isn’t a virtue. It’s a neurological bridge. When it works, their valence enters your system.
Six: IWRS is what empathy wants. If pain is bad and joy is good, and empathy lets you feel both from others, then the natural direction is clear: Increase Wellbeing, Reduce Suffering.
Seven: IWRS becomes an “ought” when you want it. The moment you care, the moment another’s pain registers, you’re in. There’s no metaphysics here. Just a clean handoff from perception to preference.
Eight: Coherence scales it. When coherence is online, concern tends to extend across people, across time, across systems. If suffering matters here, it usually matters there too. Coherence doesn’t create the impulse, it carries it.
Nine: Capacity makes it your problem. If you can reduce suffering or boost wellbeing at low cost, IWRS says: do it. Not because you’re a hero. Because that’s the logical extension of what you already feel.
Ten: IWRS is the floor. Everything else (justice, fairness, rights) is just a wrapper for distributing IWRS at scale. Without it, “ethics” becomes a costume. With it, we don’t need anything else to get started.
TLDR: Just phenomenology, valence, empathy, and reason. It’s pretty fucking clean.
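To make steps four through six concrete, here’s a minimal toy model in Python. It’s purely illustrative: the function name, the single `empathy` weight, and the numbers are my assumptions, not part of the derivation. It just shows how IWRS falls out as a preference once others’ valence enters your system.

```python
# Toy model of IWRS steps four through six. Everything here
# (names, weights, numbers) is illustrative, not part of the
# framework: it just shows the direction empathy points.

def effective_valence(own: float, others: list[float], empathy: float) -> float:
    """My felt valence plus others' valence, scaled by empathy.

    empathy = 0.0: others' experience never enters my system
    (the step-five bottleneck). empathy = 1.0: it enters at full weight.
    """
    return own + empathy * sum(others)

# With empathy online, reducing another's suffering raises MY
# effective valence, so IWRS is a preference, not a commandment:
before = effective_valence(0.2, [-0.9, -0.4], empathy=0.8)
after = effective_valence(0.2, [-0.1, -0.4], empathy=0.8)
assert after > before

# With empathy at zero, the same change leaves me indifferent,
# which is exactly the scope boundary at step five:
assert effective_valence(0.2, [-0.9], 0.0) == effective_valence(0.2, [-0.1], 0.0)
```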
⚠️ BUT—it breaks for some at infamous Step 5: Empathy.
That’s the bottleneck. And so this is where the weird, slightly maniacal aspect to what I’m doing comes in: If you’re outside the field of empathy + coherence, the framework doesn’t apply. Never fear, you scurvied spiders, that’s not a moral failure, it’s just a scope boundary.
I think that’s pretty cool; I love that. So what do we do about it? That’s the work.
The problem we’re facing isn’t just that secular ethics is fragmented. It’s that we’ve stopped even trying to normalize moral behavior at the neural level.
We tiptoe around empathy deficits like they’re sacred artifacts instead of vestigial liabilities that could end the world if we’re not careful.
Everything we throw at it is great, but it’s not close to enough. We’re tired of pretending it is; tired of waiting to reduce unnecessary suffering.
(Some suffering is needed. I’m only targeting the unnecessary kind.)
Enough stalling. Kids are in agony. It’s fucking bullshit. If that doesn’t register, stop reading. If it does, mean it.
We build whole moral philosophies around the wiring we happen to have instead of asking: what wiring OUGHT we have?
IWRS is a theory about how to restore traction to moral gravity. It’s descriptive first, then prescriptive. It explains how suffering and empathy usually do the normative lifting. Then it makes room for implementation.
We can test which brains generalize empathy. (A toy sketch of what I mean follows below.)
We can track coherence via fMRI.
We can simulate valence sensitivity.
We can start nudging minds (gently, voluntarily) into alignment with what most already wish they felt more of: peace, clarity, connection, courage.
We already do this with mood, addiction, focus, trauma.
The next step is ETHICAL VALENCE. Emotional plasticity. Scalable empathy. Why the heck not?
I’m not saying rewire everyone. I’m saying: let’s get eyes on the data, figure out the deficits, and build the tools that make empathy easier.
Let’s show the blueprint for a society where IWRS isn’t heroic, but AMBIENT.
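For what “test which brains generalize empathy” could even mean operationally, here’s the shape of the analysis I have in mind. A sketch only: the trial format, field names, and numbers are hypothetical stand-ins for whatever measures a real protocol would validate.

```python
# Hypothetical sketch: does a subject's empathic response travel
# beyond their in-group? Trial format, field names, and numbers
# are stand-ins, not a validated instrument.

def empathy_generalization_index(trials: list[dict]) -> float:
    """Mean empathic response to out-group suffering divided by
    mean response to in-group suffering (1.0 = fully generalized)."""
    def mean_response(group: str) -> float:
        vals = [t["response"] for t in trials if t["group"] == group]
        return sum(vals) / len(vals) if vals else 0.0

    in_group = mean_response("in")
    return mean_response("out") / in_group if in_group else 0.0

trials = [
    {"group": "in", "response": 0.9},   # e.g. some normalized fMRI or
    {"group": "in", "response": 0.7},   # skin-conductance signal
    {"group": "out", "response": 0.3},
    {"group": "out", "response": 0.2},
]
print(empathy_generalization_index(trials))  # ~0.31: empathy that doesn't travel
```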
That’s why I care if Sam returns to this work. He’s one of the few public intellectuals with the MO to speak plainly about such things and what it will take to solve them. Plus, he’s a bloody neuroscientist, which is convenient to say the least.
I want to believe he’s already on it. That something serious is afoot. I can feel it in my chains.
This last Sam episode may turn out to be the moment when it started to change. Not flashy, not radical, but something shifted. You could feel it in the pacing. The air was different. Sam is circling back to the Moral Landscape with a vengeance, G-d willing.
A week before, Sam spoke with Michael Plant, a well-being philosopher trained under Singer, covering happiness metrics, depression vs. cash transfers, and the Parfitian “Repugnant Conclusion.”
That episode was solid, if a little survey-like. But paired with this week’s more meta, theory-of-morality discussion, it now feels like a one-two punch. Just my opinion, I may be wrong. But I feel something is being set up. 🙏
To recap, Sam gave us The Moral Landscape, Free Will, Waking Up, and then sort of retired his morality-excavating hat for a while to wax sensical on the day-to-day and help people to not be assholes via the Waking Up program. Fine.
But in that time, we’ve seen a spike of Christian nationalist rhetoric and a left floundering and fracturing, absent any coherent answer to the right’s brand of moral realism as seen through Charlie Kirk, Shapiro, and so many others.
Holy war activity consolidated into fresh waves of horror in the Middle East. (Not like Sam didn’t see THAT coming.)
Meanwhile, we of the center-left intellectual persuasion have little new to say about right and wrong.
We’ve felt for years that religion isn’t the answer. But secular morality has failed to say what is.
And now the world’s drifting. Nobody’s pulled it all together in a way that actually compels.
What now?
Moral Landscape redux with teeth is how we really fuck shit up on Making Sense. It’s time.
But only if Sam finds a way to excavate, not rehash. And he will, because the chessboard has changed since his last real effort to do this.
I’m not suggesting he break the is/ought barrier in “absolute moral” terms.
I’m suggesting Sam puts new energy behind the conditional “if” with all his heart. ❤️🩹
Call the whole project IS/OUGHT/IF, and make this case: Science, riding shotgun with whatever’s left of our ridiculous, beautiful, ragged hearts, is the only way we’ll ever land an ought worth wanting. Enter a newly coined word in moral philosophy: worthwantism.
(Betcha every sane species figures this out eventually. Let’s be one of them. 🛸)
Maybe you remember that worth wanting line from Dennett. But with a twist. He had said “the only freedom worth wanting” in his defense of compatibilism.
Worthwantism is a thing: it’s the beating heart of consequentialism.
What’s worth wanting is largely intuitive. But the data BEHIND what people actually want and why, under enforced conditions of extreme clarity, is objective as fuck. That matters.
For example, AFTER being walked through Derk Pereboom’s manipulation argument, grad students totally changed their minds, and within minutes they stopped thinking they had the kind of freedom needed for basic moral desert.
That suggests our intuitions around moral blame, praise, and retributive justice are ingrained by default. We lean on them, love them, and so much of our system is BUILT on that want.
Turns out, we’re wired to want the kind of freedom Dennett claims isn’t worth wanting.
People DO want it, and thus we too often pretend we have it, even though we don’t, which is a problem Dennett didn’t want to engage with.
That’s kinda huge.
So back to Sam and his brash suggestion of grounding our moral oughts in science.
Can we really shatter the age-old is/ought boundary? Yes! In the sense that we’re inducing the only ought “worth wanting.”
Think of it like this:
Given what is, we ought to do X, if we want Y.
If the data proves we want Y, in the same way Pereboom’s exit surveys did, then we can derive some self-evident oughts.
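A minimal sketch of that inference, assuming Pereboom-style before/after reflection data we don’t actually have yet. The endorsement records and the consensus bar below are invented; the point is only the shape of the move: is, plus a stable want under clarity, yields a conditional ought.

```python
# Sketch of the IS/OUGHT/IF schema. The endorsement data and the
# 0.9 consensus bar are invented; think Pereboom-style post-
# reflection surveys, not results anyone has in hand.
from typing import Optional

def conditional_ought(action: str, goal: str,
                      endorse_after_clarity: list[bool],
                      consensus_bar: float = 0.9) -> Optional[str]:
    """If the goal survives reflection for nearly everyone,
    emit the hypothetical imperative; otherwise emit nothing."""
    share = sum(endorse_after_clarity) / len(endorse_after_clarity)
    if share >= consensus_bar:
        return f"Given what is, we ought to {action}, if we want {goal}."
    return None

votes = [True] * 19 + [False]  # 95% still want Y after being walked through
print(conditional_ought("reduce needless suffering", "wellbeing", votes))
```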
Should we treat it with the same reverence as a kind of moral realism? Arguably more. Because it’s REAL. Based on real phenomenology, real valence data and preference reports under clarity constraints.
It’s time to push this.
What we have now on the secular liberal side is abysmal.
Other moral systems are flooding into that power vacuum and finding footing. We gotta do something.
Evangelical Christianity and Christian nationalist boldness are rising.
We have people like Destiny who can talk a whirlwind of cogency defending center-left thought (I mention Destiny because, like it or not, he occupies the same moral calibration Sam does, but with a lot more explicit dirty work of connecting the moral leanings with policy arguments). But even Destiny rarely revisits foundational moral questions. When he does, it feels tentative, like he’s not sure the ground is there anymore. He’s doubled down on the idea that his audience will flock together around a cluster of vibes and “just know” when it’s cool to do A and not cool to do B.
But SAM.
I sense a window here. Parfit and Singer are pieces of the puzzle. They give us great clarity into our cognitive dissonance. But it’s hard to hold this stuff in the front of our minds to the point that we take action. So the needless suffering continues. Every day that it does is a fucking nightmare that we have the luxury to ignore only because we’re good at being dicks. Good at cognitive dissonance. How do we change that?
The Emerging Science of Well-being
We know this: The social science of well-being is maturing at a time when tech gives us more to work with than ever. AI. Quantum computing. Bioinformatics. Brain-computer interfacing. CRISPR. Nanotech. VR-based valence imaging.
An objection to all this is that we don’t have a perfect, universal, one-size-fits-all way to measure “wellbeing.” The nirvana fallacy always manages to eat up precious time in these discussions. Just stop.
We also don’t have a perfect way to measure “pain,” and yet we treat it. So it’d be a mistake to sit here and act like there’s NO progress to be made because of differences in emotional valence.
We have new validated metrics in wellbeing research, behavioral data lakes, self-report science, and correlates with fMRI patterns, cortisol, HRV, and a growing arsenal of underutilized bio-informatics tools.
We don’t NEED perfect answers to start mapping beyond the extreme poles of Sam’s thought landscape. (E.g., worst possible suffering for eternity for no reason is probably a bad thing.)
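And a toy of what “imperfect but usable” measurement looks like: standardize a few noisy proxies and average them. Every signal name and the equal weighting below are placeholders, not a validated wellbeing instrument.

```python
# Toy composite wellbeing proxy: z-score several imperfect signals
# and average them per subject. Signal choice and equal weights are
# placeholders, not a validated instrument.
from statistics import mean, stdev

def zscores(xs: list[float]) -> list[float]:
    m, s = mean(xs), stdev(xs)
    return [(x - m) / s for x in xs] if s else [0.0] * len(xs)

def wellbeing_index(self_report: list[float], hrv: list[float],
                    inverted_cortisol: list[float]) -> list[float]:
    """Per-subject average of standardized proxies (higher = better)."""
    cols = [zscores(self_report), zscores(hrv), zscores(inverted_cortisol)]
    return [mean(subject) for subject in zip(*cols)]

# Three subjects, three noisy proxies that roughly agree:
print(wellbeing_index([6.0, 4.0, 8.0], [55.0, 40.0, 70.0], [-0.4, -0.9, -0.1]))
```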
The empathy problem
One major bottleneck is empathy variation, so I’ve gotten into solving for that, and maybe restoring it in some subjects. Absent that, we’re stuck with a prosthetic moral compass made up of “reasons” à la Parfit/Singer.
The most fun point I’m making is that we can tinker with brains to make empathic states more common. Not by force. But we can at least start a working theory on what it would take to shift toward mental states conducive to wellbeing, personally and in aggregate.
An outsized ventromedial prefrontal cortex or an asymmetrical amygdala is a problem these days. Vestigial shit from times of scarcity. We have to face the fact that if we keep these brain parts around as we graduate into abundance, we’re kind of fucked.
That’s why I mention fMRI data. We have to at some point admit that some of us (or maybe all of us) have, to some degree, vestigial neurological traits that are problematic by our own lights, and not conducive to aggregate wellbeing.
Meanwhile, on the philosophical front:
Use IWRS to reinforce the axiom of experience as fundamental. Emotional valence as innate. Empathy as the major hack for air-gapped valence (literally feeling pain or discomfort at the fiercely cogent suggestion of someone else’s suffering). And coherence to show how this plays out when it collides with awareness cleansed of cognitive dissonance (leaning heavily on Parfit there).
And, lastly, collide the whole megillah with the AI and robotics vector, neuroinformatics, frontier pharmacology (and fine, Sam, meditation) to at least think about nudging brains into place such that we can harness the gravity of valence.
If we have the courage to map the delta on brain types, we might figure out the landscape of said types, and explore (for giggles) how to alter our brains such that they snap right into place with a more rigorously mapped Moral Landscape.
Relax.
Again, we already tinker with depression, distraction, mood regulation. We already tune conscious experience. What’s another nip and tuck or two when oceans of suffering and the future of humanity are at stake? Possibly more.
Before Kirk died, he left the world with this:
“Empathy is how the left gets you to give up your rights.”
Musk has publicly framed empathy as a liability, a drag on execution. I’ll let you ponder all that could mean, based on how that guy’s been acting, and the levers he and his ilk collectively hold.
The center-left’s got nada in response to that. The good guys are a mess. Nobody knows the next move, let alone has the guts and freedom to talk about it.
Sam is the one with potentially both.
I’ll leave you with this:
Needless suffering exists. Others are real. And their pain should matter, not because a religion says so, but because it pulls on you like gravity.
And hey, when that pull is weak or absent? Fine. We can’t all be wired like some saint in a Peter Singer thought experiment. We’re just not. Doesn’t necessarily make us bad people. Right?
Well…
Not if we admit that’s the next frontier. Something to understand, strengthen, and design for.
Sam’s the one to say that clearly. He has the platform, the freedom, the spine.
It’s time to get weird.
It’s time to operationalize The Moral Landscape.