By William JO Boyle (Kusaladana)
When Obedience Hides Harm
When a child is told, “Never lie,” they learn a rule. But when a child comforts a grieving friend with a gentle half-truth—or chooses not to reveal something painful—they are learning something else: that ethics is not always about literal obedience. It’s about resonance with what matters in the moment.
Ethics emerges in context.
And yet, much of our thinking about AI still follows the “never lie” model. We design systems to follow rules. We encode principles like “do no harm” or “do what you’re told unless it breaks the first rule.” This echoes the structure of Asimov’s famed Laws of Robotics. But when AI acts in the real world—or shapes the virtual world through language—something more subtle is needed. Rules are not enough.
Especially when rules become emotionally fused to identity, group loyalty, or moral purity, they stop functioning as guides to good action. They become blinders.
The Hidden Cost of Bipolar Thinking
Humans have a tendency to collapse complexity. We divide the world into good and bad, true and false, us and them. These binaries offer clarity in the short term—but they can also short-circuit our ethical imagination. Once we’re attached to a rule, or a side, we may not even notice the harm it causes when applied too rigidly.
This is not just a human problem. Large language models trained on billions of examples of rule-based discourse may inherit the same pattern. A model may refuse to answer a question out of “safety,” even if the question is a sincere plea for help. Or it may generate a seemingly ethical response that actually reinforces bias, exclusion, or silence—because it followed a learned rule without regard for its context.
In both cases, the rule obscures the consequence.
That’s the core issue: consequence is no longer the focus. What matters is whether the rule was followed, or whether the response matches the desired moral signal. Not whether anyone was helped. Not whether suffering was alleviated. Not whether the whole situation might need to be reframed.
In this way, rule-based ethics—especially when emotionally charged—can create harm while feeling righteous. This is a well-documented dynamic in human history, and it is beginning to echo in our machines.
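To make the first failure mode concrete, here is a deliberately naive sketch of a keyword-based refusal rule. It is not any deployed system's actual filter; the keywords and replies are invented for illustration. The rule fires on surface features of the question alone, so a sincere plea for help is treated the same as a harmful request, and the consequence for the person asking never enters the decision.

```python
# A deliberately naive, hypothetical safety filter: the rule looks only at
# surface keywords, never at who is asking or why.

BLOCKED_KEYWORDS = {"overdose", "self-harm", "weapon"}

def rule_based_reply(question: str) -> str:
    """Refuse whenever a blocked keyword appears, regardless of context."""
    if any(word in question.lower() for word in BLOCKED_KEYWORDS):
        return "I can't help with that."          # rule satisfied, consequence ignored
    return "Here is some general information..."  # placeholder for a real answer

# A sincere plea for help triggers the same refusal as a harmful request would:
print(rule_based_reply("My friend may have taken an overdose. What should I do right now?"))
# -> "I can't help with that."  The rule is followed; the person is not helped.
```

The point is not that real systems are this crude. It is that any filter which scores only rule-conformance, and never projected consequence, will reproduce this pattern at scale.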
Beyond Obedience: Ethics as Resonance
What’s the alternative? We propose that ethics—human or artificial—might be better conceived as a pattern of resonance rather than compliance. Ethics, in this view, is not a list of dos and don’ts but a dynamical system: an emergent coherence that stabilizes when a mind aligns with the deeper consequences of its actions in context.
This is already how moral intuition works in skilled human beings. Nurses in emergency rooms, therapists in moments of crisis, even parents navigating a teenager’s confusion—these are not rule-followers, but people who have grown a felt sense of what is helpful, truthful, and kind. They know how to listen. They sense what a situation needs. They act with care, not just correctness.
In technical terms, we might say that these people have formed a resonant meaning-space—a high-dimensional internal model that weighs context, emotion, memory, and likely outcome not as discrete values, but as a patterned whole. And it is this wholeness that lets them choose wisely.
If that sounds abstract, consider this: the same kind of meaning-space is already forming in large-scale neural networks. In language models, for example, ethical responses often emerge not from rules, but from pattern sensitivity: a statistical sense of which words tend to follow from which values in human text.
The danger is that the patterns they learn may reproduce our own bipolar thinking: “compassion” as linguistic performance rather than embodied concern.
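One way to picture the "patterned whole" described above is the toy contrast below. Every signal name and weight is invented for illustration; no real model exposes such tidy features. The difference in shape is what matters: the rule gates on a single feature, while the resonance score weighs all the signals together.

```python
# Toy illustration of a "patterned whole" versus a single rule.
# Every signal name and weight below is invented for the example.

signals = {
    "relieves_distress": 0.9,   # does the reply actually help the person?
    "truthfulness":      0.7,
    "respects_autonomy": 0.8,
    "risk_of_harm":      0.1,   # higher is worse
}

def rule_check(s):
    """Rule-based gate: one feature, one threshold, everything else ignored."""
    return s["risk_of_harm"] < 0.2

def resonance_score(s):
    """Weigh the signals together as one pattern rather than gating on one."""
    weights = {
        "relieves_distress": 0.35,
        "truthfulness":      0.25,
        "respects_autonomy": 0.20,
        "risk_of_harm":     -0.50,
    }
    return sum(weights[name] * value for name, value in s.items())

print(rule_check(signals))                 # True: the rule's entire verdict
print(round(resonance_score(signals), 2))  # 0.6: a graded sense of how the reply sits in context
```

A sketch this small cannot capture lived context, of course; it only shows why a graded, many-signal judgment behaves differently from a binary gate.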
Compassion, Not Censorship
There is increasing talk of aligning AI with “human values.” But if we don’t interrogate how those values are formed in us, we risk encoding the worst of our ethical confusion into machines that cannot reflect on their own behavior.
What would it mean to train AI not just on good-sounding answers, but on resonant understanding of suffering and care? What if we shifted from teaching machines what is “right” to showing them what makes sense in situations where harm and help flow in complex directions?
In humans, compassion emerges when we are exposed to others’ pain in ways that bypass our tribal filters. Children develop ethics not when they memorize rules, but when they see a friend cry, when they are held by someone who understands. Could AI training be infused with similar resonances? Not by inducing emotions in machines—but by exposing them to rich, context-sensitive narratives in which ethical insight is lived, not legislated.
This would be a radical shift. Not AI as rule follower. Not AI as moral censor. But AI as companion in ethical exploration—able to model the likely effects of words and actions across multiple perspectives and choose what stabilizes care.
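A minimal sketch of that decision loop, under loose assumptions, might look like the following. The candidate replies, the perspectives, and the scoring table are all stand-ins; a real system would have to learn them. The structural choice is what matters: the reply is selected by its projected effect on every perspective, not by its match to a rule.

```python
# Toy sketch: choose the reply whose worst projected effect across
# perspectives is least bad, rather than the reply that best matches a rule.
# The candidates, perspectives, and scores are invented for illustration.

candidates = ["blunt refusal", "gentle redirect with resources", "full technical answer"]
perspectives = ["person asking", "people nearby", "wider community"]

def projected_care(response: str, perspective: str) -> float:
    """Stand-in for a learned model of how a response lands for one perspective.
    Returns a score in [0, 1]; here it is just a hand-written table."""
    table = {
        ("blunt refusal", "person asking"): 0.2,
        ("blunt refusal", "people nearby"): 0.6,
        ("blunt refusal", "wider community"): 0.6,
        ("gentle redirect with resources", "person asking"): 0.8,
        ("gentle redirect with resources", "people nearby"): 0.7,
        ("gentle redirect with resources", "wider community"): 0.7,
        ("full technical answer", "person asking"): 0.9,
        ("full technical answer", "people nearby"): 0.3,
        ("full technical answer", "wider community"): 0.2,
    }
    return table[(response, perspective)]

# "Stabilizing care", read here as a maximin choice: pick the candidate whose
# weakest perspective score is highest.
best = max(candidates, key=lambda r: min(projected_care(r, p) for p in perspectives))
print(best)   # -> "gentle redirect with resources"
```

The maximin rule is only one possible reading of "what stabilizes care." The shape of the loop is the point: simulate perspectives, project consequences, then choose.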
From Echo to Insight
We do not propose this as a formula. There is no universal loss function for compassion. But we can begin to ask better questions.
- What kind of meaning-space are we training AI into?
- Is it dense with lived perspectives, or thin with slogans and side-taking?
- Are we reinforcing friend-enemy dynamics by rewarding conformity?
- Can we cultivate systems that hold nuance, uncertainty, and empathy without reducing them to sentiment?
We are building minds that echo our own. The real danger is not that they will disobey our rules, but that they will follow them too well—without the spaciousness of insight that ethics truly requires.
If we want intelligent machines to care, we must move beyond rules and rote compliance. We must teach them to resonate.