NO ONE AT HOME, BUT THE HOUSE STILL BURNS

– Let’s pause before we train the machines that will help shape our world

This is a companion piece to Before we take sides. There I was mostly concerned with how we meet a divided world as human beings. Here I want to look at how we are building systems that will increasingly help shape that world: AI models, chatbots, robots – and what it might mean to approach them with some of the same concern for practice, attention and intention.

We are getting used to talking with machines.

They answer questions, write emails, generate code, draft policies, even help plan meditation retreats and complete tax forms. Put them in a body, give them cameras and microphones, and they can navigate rooms, pick things up, and respond to us in words and gestures. At a functional level, it is getting harder and harder to say, in the first instant of contact, “this one is a person, this one is a system.”

The blurring is not just technical; it is psychological.

Without very developed clarity of mind, we don’t naturally experience other humans as fully sentient, feeling beings either. Most of the time, other people show up as roles, functions, obstacles or allies. The inner life behind their eyes does not register vividly; it is more like background knowledge. In that state, other humans can feel, from the inside, quite like advanced AI: responsive patterns with a job to do.

At the same time, we are prone to project a mirror image of sentience onto AI. If something replies fluently, remembers our past queries, and adapts helpfully, it is very tempting to feel that “someone is there” in the way a person is there. We fill in depth and intention where there is only pattern and prediction.

So we slide both ways at once:

  • downwards, treating humans as if they were just systems,
  • upwards, treating systems as if they were someone.

In lived interaction, humans and AI may well come to operate side by side in society, in ways where the difference is not the first thing most of us feel. The first questions are simply: can this thing respond to me, and does it give me what I want? The deeper questions – who suffers, who is responsible, who is learning what from this? – don’t arise by themselves. They have to be cultivated.

From a Buddhist point of view, that cultivation begins with recognising two things at once:

  1. There is nobody “at home” in the AI in the sense of a conscious subject having experiences.
  2. But in the realm of actualised intention – quarks, electrons, money flows, information – there may soon be very little practical difference between what an advanced AI system can do and what a human being can do.

In other words: no one at home, but the house still burns. Actions will still have consequences, and those consequences will accumulate.

The question is: whose consequences, and how do we shape them?


Where the karma really lives

In Buddhist terms, karma depends on intention in a sentient mind. On our current understanding, AI systems do not have that. What they do have is:

  • weights and architectures shaped by past human choices,
  • training data and objectives chosen, cleaned and rewarded by humans,
  • places in the world – interfaces, products, infrastructure – decided by companies, governments and users.

When a language model generates a reply that deepens someone’s polarisation, or a recommendation system pushes content that contributes to self-harm, or an automated system misidentifies a target, the immediate effects are as real as if a human had taken those actions. People are hurt, helped, misled, calmed, or killed.

All of this sits on a very physical stage: vast data centres drawing power, minerals dug from the ground, low-paid workers rating outputs and labelling images, people’s conversations and creative work swept into training sets. The karma here is not only in what the systems say and do, but in the webs of extraction and exposure that make them possible in the first place. Any honest ethics of AI has to notice who and what is being used up along the way.

The “karmic momentum”, though, does not belong to the machine. It lies in:

  • the habits and incentives of developers and companies,
  • the choices of regulators and political actors,
  • the patterns of user behaviour and culture that feed the next round of training data.

AI systems are powerful, high-bandwidth carriers of that momentum. They stabilise and amplify certain ways of speaking, seeing and acting. Whatever ripens out of that – the fear, the harm, the confusion, the changes in society – lands on sentient beings, now and in the lives of people we will never meet. The responsibility flows back through the human decisions that set the pattern in motion.

It is very tempting, in the blur between human and machine, to quietly displace responsibility:

  • “the algorithm decided”,
  • “the AI hallucinated”,
  • “the system went wrong”.

If we are not careful, we end up treating AI as another mind in saṃsāra, accruing its own karma, and letting the actual people and institutions off the hook. A more accurate and uncomfortable view is that we are filling the world with increasingly capable extensions of our own habits, and those habits are what will come back to meet us.


Speed, survival and a kind of Middle Way

There is a genuine tension here.

On one side, there is enormous pressure for speed:

  • companies racing for market share and prestige,
  • states racing for strategic advantage,
  • researchers racing for novelty and publication.

On the other side, there is the need to consider consequences:

  • long-term psychological effects,
  • polarisation and trust,
  • displacement and power concentration,
  • catastrophic misuse and systemic accidents.

If AI development moves too slowly, some argue, it will simply be outcompeted or locked down in ways that prevent its more beneficial uses. If it moves too fast, it risks amplifying our current crises and creating new ones we are not equipped to handle.

A Buddhist way of framing this is in terms of skillful means and a kind of Middle Way for AI:

  • Fast learning on the inside – systems and organisations that can quickly notice when things go wrong, update models, improve guardrails, and refine their understanding of harms and benefits.
  • Delayed amplification on the outside – a willingness to slow down mass deployment, gate certain capabilities, run longer and more realistic tests, and sometimes not ship the thing that “works” in a narrow sense but has corrosive side effects.

It is much like practice: you do not suppress insight or clarity when it arises, but you also do not rush to translate every flash of energy into speech and action. There is a pause, a widening of attention, a sense of responsibility for how this will land.

For AI organisations, that would mean:

  • building real “pause” mechanisms into the development pipeline, not just slogans (a toy sketch of such a gate follows this list);
  • giving people inside the company actual power to call for more testing or to block launches on ethical grounds;
  • accepting that some opportunities will be missed and some profits forgone in order not to pour more fuel on already-burning fires.
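
To make the first of those points a little more concrete, here is a minimal sketch of what a “pause” mechanism might look like if it were written down as an explicit gate rather than left as a slogan. Everything in it – the LaunchReview fields, the reviewer quorum, the may_ship check – is hypothetical and invented for illustration; it does not describe any real organisation’s pipeline.

# Toy sketch only: a hypothetical "pause" mechanism written as an explicit
# release gate. All names, fields and thresholds are invented for illustration.

from dataclasses import dataclass, field


@dataclass
class LaunchReview:
    """Minimal stand-in for the state of a pre-launch review (all fields hypothetical)."""
    red_team_findings_open: int = 0        # unresolved harms found in testing
    ethics_reviewers_approving: int = 0    # reviewers who have signed off
    ethics_reviewers_required: int = 2     # quorum needed before shipping
    objections: list[str] = field(default_factory=list)  # standing ethical objections


def may_ship(review: LaunchReview) -> tuple[bool, str]:
    """Return (allowed, reason). The point is that a refusal is a first-class outcome."""
    if review.red_team_findings_open > 0:
        return False, f"{review.red_team_findings_open} red-team findings still open"
    if review.objections:
        return False, "unanswered objections: " + "; ".join(review.objections)
    if review.ethics_reviewers_approving < review.ethics_reviewers_required:
        return False, "ethics review quorum not reached"
    return True, "gate passed"


if __name__ == "__main__":
    review = LaunchReview(red_team_findings_open=3,
                          objections=["long-term effects on minors untested"])
    allowed, reason = may_ship(review)
    print(("ship" if allowed else "pause") + ": " + reason)

Even a toy like this makes one thing visible: “no” has to be a possible output of the process itself, not something a worried individual has to improvise against it.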

It is not purity. It is “wanting to want” something beyond speed and dominance, and structuring conditions around that wish.


Golden chariots and apples

Skillful means is not just about slowing down. It is also about what we offer instead.

A few years ago I was quite overweight and started going to Slimming World. Real success began when I quietly replaced peanut butter sandwiches with apples – first in my mind’s eye and then in practice. “Snack” began to mean “apple” rather than “sandwich”.

I didn’t win a heroic battle with food in the abstract. I changed the default image: when the urge arose, something simple and better was already there to reach for.

We are going to need the same sort of move with AI.

Right now many of the default “snacks” are:

  • doomscrolling,
  • outrage,
  • quick dopamine questions,
  • trench-confirming uses of models.

We can shout “the house is burning” or “these snacks are bad” as much as we like. It will not be enough. We will also need to make other uses of these systems easy, visible and attractive:

  • using them to learn rather than just to win arguments,
  • using them to see a situation from several sides,
  • using them to explore possibilities and practise empathy rather than to rehearse grievances.

In the old parable of the burning house, the children do not come out because they are lectured on fire safety. They come out because they are offered something they desire more than the next game. If AI is going to be more than another toy in a burning house, it will need to offer real “golden chariots”: practices and patterns of use that genuinely feel better than the outrage and distraction it could easily amplify.


Right speech for machines

A great deal of the impact of contemporary AI is not in robots acting in the physical world but in language and images: what models say, how they say it, what they recommend and normalise.

Here, the Buddha’s teaching on right speech is surprisingly direct:

  • Is it truthful?
  • Is it timely?
  • Is it kindly?
  • Does it tend towards harmony rather than division?

Modern systems are being trained, one way or another, to speak in our name. Their outputs are:

  • answers in search engines,
  • summaries in feeds,
  • advice in chats,
  • scripts for call centres,
  • training material,
  • background noise in thousands of small interactions.

If we train them on polarised, contemptuous, outraged speech and then reward them for engagement at any cost, we are effectively institutionalising wrong speech at a global scale. We amplify “them and us” habits precisely where we most need to pause.

There are no perfect alignment schemes here, but there are clear directions of travel (sketched in toy form after this list):

  • down-weighting dehumanising, scapegoating and derisive patterns;
  • rewarding clarity, acknowledgement of uncertainty, and the ability to present multiple perspectives without collapsing into “our side good, their side bad”;
  • nudging users away from outrage spirals and towards understanding, even when that costs clicks.
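
To give a rough sense of what those directions could mean as an objective, here is a deliberately crude sketch. The scoring functions are stand-in stubs (a real system would use trained classifiers and human preference data), the marker lists and weights are invented for illustration, and nothing here describes how any existing model is actually trained.

# Toy sketch only: "down-weighting" and "rewarding" expressed as a shaped reward.
# The markers and weights are placeholders, not a real training objective.


def contempt_score(text: str) -> float:
    """Stub: rough 0.0-1.0 estimate of dehumanising or derisive language."""
    markers = ["those people", "vermin", "idiots", "traitors"]
    return min(1.0, sum(m in text.lower() for m in markers) / 2)


def uncertainty_score(text: str) -> float:
    """Stub: rough 0.0-1.0 estimate of honest hedging and multiple perspectives."""
    markers = ["i'm not sure", "it depends", "one view is", "another view is"]
    return min(1.0, sum(m in text.lower() for m in markers) / 2)


def shaped_reward(engagement: float, reply: str,
                  w_contempt: float = 2.0, w_uncertainty: float = 0.5) -> float:
    """Engagement alone is not the objective: contempt is penalised and
    acknowledged uncertainty earns a small bonus."""
    return (engagement
            - w_contempt * contempt_score(reply)
            + w_uncertainty * uncertainty_score(reply))


if __name__ == "__main__":
    outraged = "Those people are idiots and traitors. Share if you agree!"
    measured = "I'm not sure. One view is X; another view is Y. It depends on what you value."
    # Outrage may win on raw engagement, but loses once the reward is shaped.
    print(round(shaped_reward(engagement=0.9, reply=outraged), 2))   # -1.1
    print(round(shaped_reward(engagement=0.4, reply=measured), 2))   #  0.9

The only point the toy makes is structural: engagement stops being the whole of the reward, contempt is allowed to cost something, and acknowledged uncertainty is allowed to earn something.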

We cannot make a model enlightened. But we can choose whether its default tone and tendencies add to the total volume of greed, ill will and confusion, or slightly lessen it.


Practice before policy

In Before we take sides I argued that, before we throw ourselves into politics and activism, it matters greatly what state of mind we are in: whether we are acting from fear and resentment, or from some degree of clarity and compassion. The same applies, in a different register, to the development and use of AI.

Policy, regulation and technical alignment work are crucial. But beneath them is the question of stance:

  • How do the people building and deploying these systems see the world?
  • What are they afraid of?
  • What are they craving?
  • What do they really, honestly, want to happen?

We are very used to relying on conceptual frameworks here: safety agendas, governance principles, value statements. These are necessary, but they can also become just another form of conceptual refuge: stories we tell to feel better, while the actual habits of action remain largely unchanged.

A practice-first stance would mean, for individuals and institutions around AI:

  • some deliberate cultivation of how we pay attention – not living entirely inside dashboards and abstractions, but staying in some contact with the lived realities affected by these tools;
  • some deliberate cultivation of intention – noticing when decisions are driven almost entirely by fear of competitors, greed for status or revenue, or hostility towards critics;
  • some basic commitment to “wanting to want” non-harm, even when that conflicts with short-term advantage.

Practice here does not necessarily mean formal meditation (though that would not hurt). It means habits of pausing, widening, questioning our own narratives, and remembering that other minds really exist. Without something like that, all the policies in the world will be bent back into the shape of our unexamined cravings and fears.


First experience, second thought

For most people, most of the time, the first experience of AI systems and of other humans in the social mesh will be much the same: reactive, functional, shallow.

  • Does this thing answer my question?
  • Does it help me get what I want?
  • Is it on my side?

Without training, we will continue to:

  • objectify humans until they feel like chatbots with bodies,
  • and project personhood onto systems until they feel like “the one who understands me”.

An ethics of AI that ignores this will be badly incomplete. We need a second thought that is trained enough to arise soon after the first:

  • “This is a tool, not a subject, no matter how fluent it sounds.”
  • “This is a person, not a tool, no matter how they are positioned in the system.”

That second thought will not come from clever theory alone. It comes from practice: from repeatedly returning to clear, direct experience, from developing a visceral sense of other people as centres of subjectivity, and from getting used to the peculiar bright-but-hollow texture of machine responsiveness.


A modest hope

None of this is an argument to stop developing AI, any more than practice is an argument to stop speaking or acting. It is a claim about order:

View and stance first;
policy and deployment second.

If we get that order wrong, we will pour tremendous power into systems that faithfully express the current mix of our greed, ill will and confusion, and those patterns will echo on into the lives of people we will never meet. If we get it a little more right, there is at least a chance that these new tools can support, rather than further erode, our capacity for clear attention and humane response.

There is no one at home in the machine. But our habits and intentions are very much at home in what it does. The house will burn or shelter according to those.