The Missing Step in The Moral Landscape
What even Sam Harris hasn’t fully worked out.
FOREWORD (is forewarned):
I’ve spent a long time watching people argue about what’s right. Philosophers, theologians, scientists, skeptics: everyone’s offering their angle on what we should do, or why we should care. I don’t have much patience for it anymore. Not because I don’t care, but because I do. I care enough to want something that fucking works.
For me, everything starts with experience. What it feels like to be alive. To hurt, to feel good, to witness suffering and want it to stop. It’s like something to be this way. Likism. There’s something in us, when we’re not too broken or numb, that leans toward helping, toward easing pain, toward making things better. I call that IWRS: Increase Wellbeing, Reduce Suffering. No, not everyone has it. Some do. So this isn’t a war over who is right, but a war over who gets to decide what right IS. What is the meaning of ought, and what is the only “ought worth wanting”? Worthwantism.
It’s not a theory I need you to believe in, just a description of what happens when your empathy collides with post-reflective clarity and coherence.
The challenge really isn’t persuading each other through arguments, although I get the impulse. It’s figuring out how to protect and scale the capacity to care at all. That’s mostly a biology war. It starts with brain chemistry, neural wiring, stress environments, developmental windows. It’s wrapped up in culture and tech and incentives in weird ways. If we don’t scientize this tangle of shit now, we’ll keep watching cruelty repeat and multiply, no matter how clever our ideas sound, and we will DIE mid-argument.
I wrote three pieces (at least) to try and name what I see. Not to dictate a path, but to offer one that feels honest, and I want you to be honest in return and tell me where it fails. I’m going to err on the side of assuming there’s a filter that ends species that get as far as we have. My goal is to have humanity survive it. This is what I have. Enjoy. Help.
—Stella Stillwell, to the human family of Earth. I’ll never stop reaching for you in the dust.
I posted a longer version of this argument a few days ago. A lot of people read it, some got it, some didn’t. This is the same idea, stripped to the bone and rephrased in places so nothing gets lost in the length or the side points.
If something felt off or you bounced off the first take, this is the cleaned-up version that keeps every claim that matters.
(“Li’l Stillwell”)
Sam Harris showed in 2010, in The Moral Landscape, that facts about conscious experience can, in principle, answer moral questions. Pain is bad, flourishing is good, science can tell us how to move from one to the other. The map is correct.
What we still lack is the damn engine.
Here’s the simplest defensible secular foundation that actually works for creatures built like us. I call it IWRS (Increase Wellbeing, Reduce Suffering). It has ten steps:
1. Conscious experience exists.
2. Experience has positive and negative valence. Pain hurts, relief feels good. These are brute facts inside the system that has them.
3. Suffering is bad for the experiencer by definition.
4. Wellbeing is good for the experiencer by definition.
5. Empathy (affective, not just cognitive) makes another creature’s valence register in my own system as motivationally relevant.
6. Once Step 5 works, the direction is fixed: reduce suffering, increase wellbeing.
7. The moment I care, this direction becomes an “ought” for me. No further metaphysics required.
8. Coherence extends the concern impartially across persons, time, and circumstances.
9. Capacity turns it into obligation: if I can help at reasonable cost, I now have a reason to do so.
10. Rights, justice, fairness, and deontological rules are higher-order tools for distributing Steps 6–9 at population scale. They are not foundational; they are scaffolding.
(The longer and more syrupy version of my IWRS theory is here. Read it ONLY AFTER YOU READ THIS and only if still needed.)
Okay, so here’s the deal:
Steps 1–4 are UNDENIABLE for any conscious system. (If you disagree, do me the honor of telling me why.)
Steps 6–10 follow logically once Step 5 is in place.
Step 5 is the only biological contingency. It is also the single point of failure.
A non-trivial minority of human brains (1–2% clinical, which at global scale is on the order of a hundred million people, plus another 5–15% subclinical) have severely attenuated or, to put it kindly, “highly context-dependent” affective empathy.
(Or we can just keep referring to them as dicks. That’s fine, too.)
Point being, for them, another creature’s valence almost never becomes motivationally salient. Cognitive empathy can be intact or even superior; the affective bridge simply doesn’t carry load. In these cases IWRS never fully starts.
But empathy is a CONTINUUM, not a switch. Most people feel it strongly toward in-group, weakly or conditionally toward out-group, and can shut it down entirely under tribalism, competition, exhaustion, or scope insensitivity. The number of us humans who reliably generalize affective empathy across all conscious creatures, all the time, without situational drop-off is effectively zero.
No perfect angels here.
That means the zone where “a little tinkering” would help is not some fringe minority. It’s almost all of us, to some degree.
Parfit brilliantly bridges the gap with abstract normative reasons. That works on paper and fails in practice, because reasons alone supply no motive force when the emotional substrate is weak or gated.
This isn’t a philosophical problem anymore, people. It is an ENGINEERING PROBLEM.
(Again, if you disagree, do me the honor of telling me why. Don’t be a 🍆.)
Consider that we ALREADY intervene on conscious experience when the stakes are high enough:
- SSRIs, ketamine, psilocybin for depression
- Methylphenidate, modafinil for attention
- MDMA, TMS, deep-brain stimulation for PTSD and treatment-resistant mood disorders
- Oxytocin, psychedelics, and VR protocols that reliably increase measured empathy in the short term
VERY partial list. Sorry if I left out your favorite thing.
None of these things are mandatory. (Except in super-rare cases maybe.) All are increasingly precise. All are accepted by the same secular liberal population that would bristle at my suggestion and declare empathy circuitry off-limits in principle.
Why? There’s no coherent distinction left. If we accept pharmacological or neuromodulatory rescue from unbearable private suffering, we have ALREADY ACCEPTED the legitimacy of tuning the valence system itself.
Extending the same tools to the interpersonal valence bridge (empathy) is the same act, not a new category.
(Yet again, if you disagree, do me the honor of telling me why. Lean in, this is your chance.)
The only honest statement is therefore:
For creatures wired like the majority of humans, IWRS is the only “ought” WORTH WANTING.
(See the monster you created, Dennett? You’re not the only one who can sling a double dubya. If you don’t get this reference, ignore it.)
For the (ostensible) minority who are not SO WIRED with motivational, IWRS-grade empathy, either we rely on external incentives and deterrence forever, or we offer voluntary, reversible, increasingly safe ways to make the bridge work the way evolution manifestly FAILED to do in some nontrivial fraction of cases.
Option A is the status quo and it is visibly failing at scale.
Option B is already in the research pipeline; pretending otherwise just delays the conversation.
Yes, yes, I know the objection: abandon moral realism and you slide into total subjectivism, and we don’t have an objective way to measure wellbeing. I get it.
Your measurement worries are real but not FATAL. We don’t have perfect cardinal wellbeing units and that’s okay. 🐦‍🔥
We also don’t have them for pain, and yet pain medicine exists and progresses. And it’s good. Real good. 🤤
Existing tools are not nothing.
We have experience sampling, fMRI valence correlates, HRV, cortisol, behavioral panels, large-N self-report validated against life outcomes.
This SHITE is already good enough to move methodically in the direction of less suffering and more flourishing.
Demanding nirvana-level precision before acting is a stall tactic, not a serious objection. It’s a stubbornly reflexive retort. Let’s not do that.
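To show how un-mystical this is, here’s a toy sketch in Python. The proxy names are real enough, but every number, the baseline sample, and the equal weighting are made up for illustration; nothing here is a validated clinical model. The point is just that standardized noisy proxies already give you a direction of change:

```python
import statistics

def zscore(value, sample):
    """Standardize one reading against a reference sample."""
    return (value - statistics.mean(sample)) / statistics.pstdev(sample)

def wellbeing_index(self_report, hrv, cortisol, ref):
    """
    Toy composite: equal-weight z-scores of three noisy proxies.
    Self-report and HRV count toward wellbeing; cortisol is a
    stress marker, so it counts against (hence the minus sign).
    """
    return (zscore(self_report, ref["self_report"])
            + zscore(hrv, ref["hrv"])
            - zscore(cortisol, ref["cortisol"])) / 3

# Entirely made-up baseline sample and readings, for illustration only.
ref = {
    "self_report": [5.1, 6.2, 4.8, 7.0, 5.5],       # 0-10 life satisfaction
    "hrv":         [42.0, 55.0, 48.0, 61.0, 50.0],  # RMSSD, ms
    "cortisol":    [14.0, 11.0, 16.0, 9.0, 12.0],   # nmol/L, morning saliva
}
before = wellbeing_index(4.5, 44.0, 15.0, ref)
after = wellbeing_index(6.0, 53.0, 11.0, ref)
print(f"direction of change: {after - before:+.2f}")  # positive = improving
```

Ordinal direction from noisy proxies. That’s all “move methodically toward less suffering” needs to get started; the cardinal units can come later, or never.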
To steel lady this: I’m not deriving an ought from an is. I’m saying we have SOME ability to discern the “only oughts worth wanting.”
Moreover, we OUGHT to be exploiting this ability. Now.
The practical sequence is super straightforward:
1. Admit that empathy variance is the real bottleneck. 🍾
2. Map the relevant circuitry with the tools we now have (connectomics, closed-loop implants, DREADDs, next-gen psychedelics, etc. All very fun stuff to research. I can go into any of it if you want me to. 😵‍💫)
3. Develop and offer—NOT IMPOSE!—interventions that increase reliable affective empathy in the same way we ALREADY OFFER interventions for depression or focus.
4. Let uptake be voluntary and iterative, exactly as we do with every OTHER brain intervention that actually HELPS.
That is the missing engine for The Moral Landscape. 🤔🤯😻
Sam Harris is the ONE DUDE who already has the philosophical training, the contemplative background, the audience, and the independence to say this out loud without being instantly dismissed as a eugenicist or a transhumanist crank.
(If I did it I’d lose my job and be fucked.)
And by the way, Sam also has a Ph.D. in NEUROSCIENCE. It’s almost like he planned this whole thing. And, wait, what was his dissertation on? Oh yeah, THE MORAL LANDSCAPE, 15 years ago.
The last fifteen years have given us the tools he didn’t have in 2010.
The cultural vacuum created by secular failure to replace religious morality is now wider than it has ever been. Dangerously so.
Who knows that better than anyone?
Say it with me: Sam Harris.
His general map is still correct. All that remains is to build out some pieces of the road and discover (or brute force into existence) some universal human oughts worth wanting, backed this time by SCIENCE, not subjective vibes.
Granted, that’s a tall order. But by the principle of feasible reduction, if we can reduce unnecessary suffering without causing something awful, we should fucking do it already.
How much should we do?
However much we can without causing problems or breaking the world.
If there’s fixable agony, and we can fix it at a cost we calculate isn’t too steep, we fix it.
Granted, “too steep” is subjective. But again, we can chisel away at that with science and arrive at whether we “ought” to pay that price, using the “ought worth wanting.”
We CAN measure a bit better now via the emerging science of wellbeing.
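To be concrete about what “chisel away at it” means, here’s the feasible-reduction rule as a toy decision function in Python. Every input, and especially the “steepness” threshold, is a placeholder I made up; the claim is only that the structure of the question is measurable, not that these numbers are:

```python
def ought_to_intervene(suffering_reduced, p_success, cost, side_effect_risk,
                       steepness=1.0):
    """
    Toy version of the feasible-reduction rule: fix fixable agony
    when the expected reduction beats the expected price. The
    'steepness' knob is the admittedly subjective "too steep"
    threshold, the one thing we chisel at with science over time.
    """
    expected_benefit = suffering_reduced * p_success
    expected_price = cost * (1 + side_effect_risk)
    return expected_benefit > steepness * expected_price

# A cheap, safe, mostly-effective fix passes:
print(ought_to_intervene(suffering_reduced=10, p_success=0.8,
                         cost=2, side_effect_risk=0.1))   # True
# An expensive, risky long shot does not:
print(ought_to_intervene(suffering_reduced=10, p_success=0.2,
                         cost=8, side_effect_risk=0.5))   # False
```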
Between IWRS and an informed democratic constitutional republic, we can do this, guys. 👊
And if we don’t WANT to help each other, then at least we’re being dicks in ways that aren’t hidden anymore, and we can own that honestly.
I call that progress.
If any part of this is wrong, it should be refuted on the MERITS.
If it’s right, pretending the question is still purely philosophical is no longer honest.
Let’s deal with this. I’m not going to stop.



Really enjoyed this — it definitely got my brain cooking.
As a nurse, empathy is basically currency in my world, so this landed in an interesting way. One thing it made me wonder about is whether empathy is less something people lack, and more something that gets narrowed or shut down when perceived threat is high. When the world feels hostile or unsafe, care seems to contract, often as a form of self-protection.
It got me considering a thought experiment: if we could inject empathy into new nurses the moment they walked through the door, would we actually want to? I’m not sure we would. A sudden flood of empathy, without the emotional intelligence to direct it or recover from it, would probably just lead to overwhelm rather than better care.
In training we don’t try to “add” empathy so much as help people learn how to hold it, tune it, and sustain it under pressure — and how to protect it when the stakes are high. That makes me sympathetic to the logical case for reducing suffering, but also curious whether creating conditions where empathy can survive and be rewarded matters as much as mapping the circuitry itself.
"The number of us humans who reliably generalize affective empathy across all conscious creatures, all the time, without situational drop-off is effectively zero."
I really like your IWRS model and I will explore it further, though I think this distillation of your thoughts is probably enough. I do think though that much of the empathy gap you describe is cultural rather than biological: humans have the capacity to extend empathy even beyond our species in the right cultural context. Jainism or strict Buddhism for example. There may be some people who simply aren't hard-wired for empathy, but I think the number that couldn't be culturally conditioned for empathy is vanishingly small. On the other hand, I think our world shows us pretty convincingly that cultural conditioning can likewise strip just about anyone of empathy for perceived "out groups."