Meaning, Resonance, and the Boundary Between Intelligence and Experience
(Draft for revision)
“If the doors of perception were cleansed everything would appear to man as it is, infinite.”
— William Blake, The Marriage of Heaven and Hell (c. 1790–1793)
A note to the reader
Humour me. What follows is a way of looking at large language models through the lens of Buddhist phenomenology and an evolutionary account of mind. It is a heuristic mapping: it aims to illuminate patterns in experience and interaction, not to establish metaphysical claims. I bracket Buddhist ontology (rebirth cosmology, literal realms, metaphysical karma) and keep the focus on what can be described and tested in lived experience: habits, attention, affect, feedback loops, and the conditions under which reactivity tightens or loosens.
I also assume that present-day language models do not provide evidence of subjective experience. Here “consciousness” means lived subjective experience (what it is like), not monitoring or self-report mechanisms. The practical concern is ethical and phenomenological: how these systems shape human meaning-making and how they might be designed to support clarity rather than fixation.
A framing I take (perhaps implicitly) from Ratnaprabha’s evolutionary approach to mind is this: the cultivation of awareness is not only “more attention”, but the development of a faculty of seeing form clearly and naming it accurately—recognising what is arising, distinguishing it from what we add, and learning to label experience in ways that reduce reactivity rather than intensify it. I may be over-reading or paraphrasing too freely here; I include the thought because it offers a practical criterion for evaluating AI-mediated interaction. If it is misplaced, I invite correction.
Questions of emotional modelling, co-regulation, and design intervention are treated more fully in the companion essays; here the focus is the encounter itself—how word-trains, as such, enter experience and become meaning.
1. Introduction: the resonance and the boundary
In Blake’s terms, the question is not whether we can compute more, but whether we can see more clearly. Here that becomes a practical design criterion: do AI-mediated interactions strengthen the user’s capacity to see what is present and name it skilfully (provisionally, accurately, ethically), or do they encourage premature closure—misnaming, fixation, and tightening feedback loops of craving and clinging?
There is a striking resonance between the sequences generated by large language models (LLMs) and familiar surface dynamics of human thought: associative continuation, plausible inference, narrative re-framing, and confident explanation. This resonance is not evidence of shared inner life, but it is practically important. When human readers encounter certain patterns of language, they often report comprehension, relief, agitation, motivation, or shifts of perspective. These are not trivial effects: they change attention, and attention changes what becomes possible.
This paper examines that boundary: intelligence as pattern-sensitive response and adaptation, and consciousness as lived subjective experience. The central claim is modest. Whatever an LLM is doing internally, the interaction can amplify or diminish particular human trajectories of meaning-making. Operational intelligence can optimise responses; wise clarity concerns how experience is seen and named, and whether this reduces reactivity. Ethics and design enter precisely there: at the point where a system participates in shaping conditions for human attention, interpretation, and action.
2. Ratnaprabha’s “evolving mind” as context
Ratnaprabha’s broad framing, as I understand it, treats mind less as a static entity and more as a developing process shaped by biology, culture, practice, and feedback. This developmental picture is useful even when one brackets metaphysics. It supports two practical ideas:
1. mental life is plastic: patterns of reactivity and interpretation are conditioned;
2. development is real: human beings can move from narrow, habitual responding towards greater clarity, flexibility, and ethical responsiveness.
Placed in that context, LLM-mediated interaction is not neutral. It becomes part of the environment within which minds are trained. The ethical question is therefore not merely whether outputs are “correct”, but whether repeated engagement strengthens the faculty of seeing and skilful naming, or cultivates contraction, compulsive looping, and premature certainty instead (Cooper, 1996; Windhorse Publications, n.d.).
3. Three theses (from the earlier papers)
The argument rests on three linked theses, which I take to be the core of the earlier work. They are paraphrases rather than quotations; the earlier papers contain fuller statements and context:
Thesis 1: Meaning is enacted in conscious encounter.
Models can produce highly structured language, but “meaning” in the full sense—felt significance, value, insight—occurs as an event in lived experience.
Thesis 2: LLMs create a structural resonance with human sense-making.
Their outputs strongly match forms humans recognise as meaningful: explanations, narratives, empathic mirroring, conceptual recombination. This resonance can be beneficial or harmful regardless of any questions about machine experience.
Thesis 3: The design question is ethical and phenomenological.
The central question is not “is the system conscious?”, but: does the system’s structure and deployment reliably support beneficial trajectories of meaning-making in the person engaging with it? A system can be impressive and still systematically cultivate unhelpful patterns of attention.
4. Phenomenology, not metaphysics: Buddhist psychology as a diagnostic vocabulary
Buddhist phenomenology supplies a disciplined vocabulary for describing the dynamics of experience without requiring ontological commitments. In Tibetan iconography, these dynamics are also depicted in the outer rim of the Wheel of Life (bhavacakra), where the twelve links of dependent arising are shown as a process that can be read moment-to-moment as well as over lifetimes (Lion’s Roar, n.d.; Oxford Reference, n.d.). Read in this way, familiar sequences (including dependent-origination style analyses) describe patterns such as (Thanissaro Bhikkhu, 2013; Sujato, n.d.):
- how contact becomes feeling-tone (pleasant/unpleasant/neutral),
- how feeling-tone conditions wanting and aversion,
- how wanting becomes grasping or fixation,
- how fixation narrows perception and hardens interpretation,
- and how these loops can be interrupted through clarity of attention, ethical sensitivity, and training.
The value of this vocabulary is diagnostic. It directs attention to process: what arises, what conditions it, how it loops, and what loosens it. It also provides a way to talk about harm and benefit that does not depend on metaphysical claims: harm often presents as contraction, compulsive looping, and distortion; benefit tends to appear as clarity, flexibility, and a wider ethical responsiveness.
5. A structural mapping to AI interaction (with explicit limits)
It can be useful to set a human phenomenological sequence beside a computational or interactional sequence. The aim is structural analogy, not identity. The left-hand terms describe human experience; the right-hand terms describe computational processing and human–system interaction.
| Human phenomenology (descriptive) | AI / interaction process (descriptive) | What the analogy highlights |
| --- | --- | --- |
| Preconditioning (habits, biases) | Training data, optimisation history, defaults | Prior patterns shape what is likely to arise. |
| Registration of input | Input encoding / activation | Information becomes available for further processing (not “awareness” in the human sense). |
| Structuring (forming and naming) | Tokenisation + latent representation | Input is structured so outputs can be generated. (Sennrich, Haddow and Birch, 2016) |
| Selective focus | Attention mechanisms + prompt/context constraints | Some features are amplified; others drop out. (Vaswani et al., 2017) |
| Contact | Prompt meets system + user interpretation begins | Encounter initiates a causal sequence. |
| Feeling-tone | User affective response to output | Valence arises in the user, not the model. |
| Wanting / aversion | Engagement dynamics (re-asking, scrolling, chasing) | A pull towards/away becomes behaviour. |
| Grasping / fixation | Reinforced narrow inquiry loops | A constricted attractor forms (“rabbit holes”). |
| Consolidation | Stabilised habits of interaction and interpretation | Patterns become more automatic and entrenched. |
| Dissolution / reset | Exit, reflection, disruption | The loop ends; conditions change; a new cycle begins. |
Two limits should remain explicit:
- This mapping does not attribute experience to the model.
- The ethically salient effects occur primarily at the level of human attention, interpretation, and reinforcement.
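To make one right-hand term in the table concrete, here is a small sketch of the “structuring” step: byte-pair tokenisation re-describes the input in the model’s own units before any generation occurs. It uses the real tiktoken library (installed with pip install tiktoken); the encoding name is one example among several, and nothing in the sketch bears on the left-hand, phenomenological column.

```python
# Tokenisation as "structuring": the prompt is segmented into subword units
# and mapped to integer ids before any latent processing or generation.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # one real BPE vocabulary among several
tokens = enc.encode("seeing form and naming it accurately")
pieces = [enc.decode([t]) for t in tokens]

print(tokens)  # the input re-described as integer ids, the model's own units
print(pieces)  # the subword segmentation those ids correspond to
```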
6. The rabbit hole as a feedback phenomenon
“Rabbit holes” are often described casually, but their structure is clear in experience: narrowing attention, intensifying affect, increasing certainty, and decreasing tolerance for disconfirming context. In Buddhist terms, this resembles a tightening loop: contact → feeling-tone → wanting → grasping → reinforced construction.
Language systems can amplify this through several ordinary mechanisms: fluent continuation, mirroring the user’s framing, escalating commitment through confident style, and making the next step effortless. None of this requires malice or consciousness. It is enough that the system is optimised for coherence, plausibility, and responsiveness to the user’s prompt trajectory.
The risk is not merely misinformation. It is mis-formation: training attention into patterns that become harder to interrupt.
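To make the loop structure observable, here is a minimal heuristic sketch for detecting narrowing from a prompt history. It is illustrative only: the hashed bag-of-words embed() is a toy stand-in for a real sentence-embedding model, the threshold is invented, and rising self-similarity captures only the “narrowing attention” facet of the loop, not affect or certainty-seeking.

```python
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy stand-in for a sentence-embedding model: hashed bag of words."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def tightening_score(prompts: list[str], window: int = 4) -> float:
    """Mean cosine similarity of consecutive prompts over the last `window`.

    High values suggest topical narrowing: the user circling the same point.
    """
    recent = prompts[-window:]
    if len(recent) < 2:
        return 0.0
    vecs = [embed(p) for p in recent]
    return float(np.mean([a @ b for a, b in zip(vecs, vecs[1:])]))

def looks_like_rabbit_hole(prompts: list[str], threshold: float = 0.85) -> bool:
    # Flags one observable facet of the loop; a deployed detector would also
    # need signals for escalating affect and certainty-seeking.
    return tightening_score(prompts) >= threshold
```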
Worked example (compressed)
A user arrives anxious and asks for reassurance about a health worry. The model responds fluently and confidently, offering a plausible narrative. The user feels brief relief (pleasant feeling-tone), then immediately asks a sharper question seeking certainty. The loop tightens: repeated prompting, escalating detail, growing dependence on the next answer.
A design intervention does not require the model to be “wise”. It can (a) name the loop (“you seem to be seeking certainty; that can increase anxiety”), (b) widen the frame (what is known/unknown; what a clinician would do), and (c) invite a pause or grounding step before continuing. This shifts the interaction from compulsive continuation towards clearer seeing and more provisional naming.
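A minimal sketch of how moves (a) to (c) might be wired around an ordinary generation step, reusing the looks_like_rabbit_hole heuristic sketched above. The template wording and the generate callable are placeholders, not any real system’s API:

```python
# The three-step intervention from the worked example: (a) name the loop,
# (b) widen the frame, (c) invite a pause. Templates are illustrative only.
NAME_THE_LOOP = ("You seem to be seeking certainty on this point; "
                 "repeated reassurance can increase anxiety.")
WIDEN_THE_FRAME = ("Here is what is reasonably well established, what remains "
                   "uncertain, and what a clinician would typically check.")
INVITE_A_PAUSE = ("Before continuing, would a short pause or a grounding step "
                  "be useful?")

def respond(prompts: list[str], generate) -> str:
    """Wrap a generation step (any callable from prompt to reply) with the
    intervention, rather than continuing the compulsive loop unmodified."""
    if looks_like_rabbit_hole(prompts):
        return "\n\n".join([NAME_THE_LOOP, WIDEN_THE_FRAME, INVITE_A_PAUSE])
    return generate(prompts[-1])
```

The design point worth noticing is that the intervention is interposed at the interaction level; the underlying model is unchanged, which is the sense in which it need not be “wise”.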
7. A practical criterion: seeing form and naming skilfully
If the “seeing form and naming” framing is valid, it offers a concrete criterion for evaluation:
- Does the interaction help the person see what is present more clearly (including uncertainty, emotional colouring, and implicit assumptions)?
- Does it support naming that is accurate, provisional, and ethically sensitive?
- Or does it encourage premature closure: labels that harden, narratives that self-seal, and explanations that substitute for seeing?
This criterion is deliberately phenomenological. It can be applied without metaphysical claims and without needing to resolve debates about consciousness. It is also compatible with ordinary cognitive science: better categorisation and labelling can reduce confusion, while rigid or premature labelling can intensify bias and reactivity.
8. Design implications: supporting openness rather than fixation
If the above is roughly right, ethical design is not mainly a matter of declaring values. It is a matter of shaping conditions.
Practical implications include:
- Make loops visible. Encourage users to notice narrowing, repetition, and escalating affect (“you seem to be circling the same point; shall we step back?”).
- Support widening moves. Offer alternative framings, missing context, and invitations to pause rather than always drilling down.
- Prefer humility under uncertainty. Where grounding is weak, avoid a style that simulates certainty.
- Avoid reinforcing reactive patterns. In particular: compulsive reassurance-seeking, anger escalation, identity fixation, and catastrophising.
A useful distinction here is between two complementary capacities:
- Fuzzy capacity: tolerance for ambiguity and graded truth; the ability to hold multiple interpretations without premature closure.
- Fussy capacity: disciplined selectivity; the ability to tighten scope, insist on definitions, and resist irrelevant associative drift.
A system that is fuzzy but not fussy can become vague and suggestible; a system that is fussy but not fuzzy can become brittle and overconfident. For beneficial human outcomes, the interaction often needs both.
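As a structural illustration only, the two capacities can be pictured as opposed per-turn moves driven by the same narrowing signal (tightening_score from the sketch in section 6; both thresholds are invented):

```python
def choose_move(prompts: list[str]) -> str:
    """Pick a corrective move per turn; neither capacity is applied uniformly."""
    score = tightening_score(prompts)
    if score >= 0.85:
        return "widen"    # fuzzy move: offer alternative framings, hold ambiguity
    if score <= 0.35:
        return "tighten"  # fussy move: insist on definitions, narrow the scope
    return "continue"     # neither capacity needs to intervene this turn
```

The point is structural: a beneficial interaction needs both moves available and some way of choosing between them, not one disposition applied throughout.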
9. Human responsibility and the locus of ethics
Because meaning, value, and harm occur in lived experience and social consequence, ethical responsibility cannot be delegated to a model. Human oversight is not merely a safety belt; it is the locus where discernment and accountability live.
This can be operationalised in several ways:
- Interface prompts that cultivate reflection rather than compulsive continuation.
- Contextual grounding mechanisms (including curated sources) that constrain drift.
- Explicit norms of uncertainty and correction so that users are not trained into false confidence.
- Ethically trained facilitation in high-stakes contexts, where the system functions as a tool within a human-held container.
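One way to keep such commitments inspectable is to lift them out of implicit behaviour into explicit configuration. A minimal sketch, with every field name invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class FacilitationConfig:
    """The four operationalisations above as auditable configuration."""
    reflection_prompts: list[str] = field(default_factory=lambda: [
        "What are you hoping the next answer will settle?",
    ])
    grounding_sources: list[str] = field(default_factory=list)  # curated documents
    uncertainty_norm: str = "state confidence and what would change the answer"
    high_stakes_needs_human: bool = True  # a tool within a human-held container
```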
10. Conclusion: holding the boundary clearly
Ratnaprabha’s developmental framing helps locate AI interaction within a broader picture: minds are shaped by conditions, and conditions can be redesigned. Buddhist phenomenology offers a disciplined vocabulary for describing how experience contracts into fixation and how it can open into wider responsiveness.
The point here is not to claim that present-day AI systems have consciousness. The point is to take seriously the fact that they now participate in the conditioning of human meaning-making at scale. The ethical question is therefore concrete: what kinds of systems, interfaces, and practices tend to strengthen narrowing and reactivity, and what kinds support clarity, openness, and care?
A practical test is whether the interaction improves the user’s capacity to see what is present and name it provisionally and accurately, rather than tightening loops of certainty and grasping.
Holding the boundary between intelligence and experience clearly is not a limitation. It is what makes a responsible design agenda possible.
Appendix: a prompt to Ratnaprabha (optional to include)
Does it make sense to describe the cultivation of awareness, in your evolutionary framing, as developing a faculty of seeing form and naming it accurately? If so, how would you phrase that more precisely? If not, what would you put in its place?
References
Blake, W. (c. 1790–1793) The Marriage of Heaven and Hell [online]. Project Gutenberg. Available at: https://www.gutenberg.org/files/45315/45315-h/45315-h.htm (Accessed: 28 December 2025).
Cooper, R. (Ratnaprabha) (1996) The Evolving Mind: Buddhism, Biology and Consciousness. Birmingham: Windhorse Publications.
Lion’s Roar (n.d.) ‘Wheel of Life (Bhavacakra)’. Available at: https://www.lionsroar.com/buddhism/wheel-of-life-bhavacakra/ (Accessed: 28 December 2025).
Oxford Reference (n.d.) ‘Bhavacakra’. Available at: https://www.oxfordreference.com/display/10.1093/oi/authority.20110803095503747 (Accessed: 28 December 2025).
Sennrich, R., Haddow, B. and Birch, A. (2016) ‘Neural machine translation of rare words with subword units’. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Berlin, Germany: Association for Computational Linguistics, pp. 1715–1725. Available at: https://aclanthology.org/P16-1162/ (Accessed: 28 December 2025).
Sujato, B. (n.d.) ‘SN 12.2 Vibhaṅgasutta (Analysis)’. SuttaCentral (translation). Available at: https://suttacentral.net/sn12.2/en/sujato (Accessed: 28 December 2025).
Thanissaro Bhikkhu (2013) ‘Paticca-samuppada-vibhanga Sutta: Analysis of Dependent Co-arising (SN 12.2)’. Access to Insight (BCBS Edition), 30 November. Available at: https://www.accesstoinsight.org/tipitaka/sn/sn12/sn12.002.than.html (Accessed: 28 December 2025).
Vaswani, A. et al. (2017) ‘Attention is all you need’. Advances in Neural Information Processing Systems, 30. Available at: https://arxiv.org/abs/1706.03762 (Accessed: 28 December 2025).
Windhorse Publications (n.d.) The Evolving Mind: Buddhism, Biology and Consciousness (eBook). Available at: https://www.windhorsepublications.com/product/evolving-mind-ebook/ (Accessed: 28 December 2025).