Structured Descriptors for Dynamic Mental States: Foundations & Examples

This article arose out of a ChatGPT "deep research" exploration, itself based on a long ChatGPT conversation about the dynamics and appearances of mental states as described in Buddhist canonical texts—particularly the Bardo Thödol and Vajrayāna models of mind.

Here we treat the texts as essentially descriptive: vivid, symbolic maps of how awareness shifts under changing inner and outer constraints. Our goal is to capture these transitions using structured representations—specifically, JSON descriptors that encode the resonance, appearance, and tension of each phase of mind.

These structured descriptors are not diagnoses or taxonomies. They aim to support enquiry by helping conversational AI systems recognize, reflect on, and respond meaningfully to the lived dynamics of experience. The model we envision would function more like a trained conversational companion and less like an answer machine.

In the longer term, we see potential applications extending beyond spiritual inquiry. A tensor-based model of mental states—trained on lived experience and reflective traditions—could feed into systems for political forecasting, social change modelling, or cultural analysis.

This first step builds from the dissolutions described in Tibetan death practice. By identifying and encoding phase-like transitions of mind, we hope to lay the groundwork for a generative and respectful approach to modelling human consciousness.

Rather than treat these sources as esoteric beliefs, we approached them as descriptive technologies: vivid, mythic, and symbolic maps of how awareness responds to shifting inner and outer constraints. As those constraints evolve—through life, meditation, or death—the mind moves through different modes or “phases,” each with its own felt quality, appearance, and potential for clarity or confusion.

The aim of our modelling is not to produce taxonomies or diagnoses, but to create snapshots of resonance—flexible, dynamic representations of what it feels like to inhabit particular states of awareness as they change.




The goal is not to answer questions but to support enquiry. This model is being integrated into a conversational RAG system that can help individuals reflect on their own mental dynamics—not by solving them, but by offering perspectives, metaphors, and next steps. In this, the system will, we hope, act more as a companion with some expertise than as an answer machine.

The Deep Research article…

Structured Descriptors for Dynamic Mental States:

Foundations & Examples

1. Buddhist Models of Mind and Transformation


Buddhist traditions offer rich conceptual models for how mind and awareness change dynamically. The Abhidharma (Buddhist analytical psychology) is often called a "map of the mind": it identifies dozens of mental factors and how they interact to create each moment of consciousness (lionsroar.com). This model portrays the mind as a flow or stream of moments rather than a static thing (lionsroar.com). For example, Theravāda Abhidhamma texts enumerate 52 mental factors co-arising with consciousness, yielding 89 possible mind-moments (citta) that flicker in sequence (buddhistinquiry.org). Such a system is essentially a multi-dimensional descriptor for mental events: each moment can be characterized by which factors (attention, feeling, perception, etc.) are present. This is analogous to a tensor-like representation of mind states, where many variables define a state in a single structured profile. In Mahāyāna Buddhism, Yogācāra philosophy adds the concept of a "storehouse consciousness" (ālaya-vijñāna) containing latent karmic "seeds" that ripen into experiences (lionsroar.com). This seed metaphor is akin to a field model: the mind as a field in which potentials manifest as mental events under conditions, emphasizing that each experience emerges from interdependent causes.

Another compelling Buddhist model comes from Tibetan Vajrayāna teachings on death and meditation. The Bardo Thödol (Tibetan Book of the Dead) describes how consciousness transitions through intermediate states (bardos) in the dying process. It outlines a sequence of dissolutions: first the physical elements and senses sequentially "dissolve," then stages of consciousness unravel (andrewholecek.com). In this model, awareness moves through recognizably distinct phases – e.g. earth dissolves into water, water into fire, fire into air, air into consciousness – followed by the inner dissolution of coarse consciousness into ever subtler consciousness (andrewholecek.com). Advanced practitioners report internal signs like visions of white, then red, then blackness, culminating in the "clear light" mind (buddhanet.net). This is a phase-transition model of mind: each stage yields a qualitatively new state of awareness, somewhat analogous to matter changing phase (solid to liquid to gas). Sudden shifts are noted – for example, at a critical juncture the dying person's experience goes from darkness to a brilliant, peaceful light of clear awareness (buddhanet.net). Such descriptions invite the metaphor of critical transitions or "flip" points in consciousness, much like a phase change is an "impressively sudden shift" in a system's behavior (gwern.net). In cognitive science terms, we might say the mind passes through nonlinear state transitions under these conditions, a concept scientists have explored in viewing cognition as a self-organizing dynamic system (gwern.net). Buddhist meditation texts similarly speak of thresholds: for instance, the moment of stream-entry or enlightenment is sometimes described as a qualitative leap after gradual development – a parallel to crossing a phase boundary.

Buddhist metaphors also depict mind states as fields and forces. Vajrayāna Buddhism uses the subtle body model: networks of channels and winds (prāṇa) whose flows correspond to states of consciousness. Shifts in attention or emotion are described in terms of energy moving or balancing in this inner "field." Another famous metaphor is Indra's Net, illustrating radical interdependence: an infinite net of jewels where each jewel reflects all others (en.wikipedia.org). Originally used to express the interconnectedness of all phenomena, Indra's Net can also be applied to the mind – any given mental event reflects and is influenced by countless others in a web of relations. This resonates with a "field of meaning" view: each thought or feeling is not isolated, but is a node in a net, defined by its relationships. In Indra's Net, every jewel contains the reflections of all others, "an infinite reflecting process" (en.wikipedia.org). Likewise, a structured descriptor of a mental state might be seen as a "jewel" capturing a configuration of qualities which implicitly reflects a broader context (conditions, causes, related states). This metaphor supports a tensor notion of meaning: meaning arises in the "tension" or relationship between elements, rather than as an absolute property. A mental state could thus be encoded as a vector (or tensor) of features whose meaning comes from the whole pattern and its connections to other patterns – much as each jewel's sparkle comes from reflecting the whole net.

Importantly, traditional sources emphasize that these models are guides, not strict laws. The Tibetan dissolution stages, for example, are said to be "orienting generalizations, not immutable steps"; not everyone experiences all stages so linearly (andrewholecek.com). This caution highlights that mind-state dynamics have variability and depend on the individual's conditions. Phase-like transitions occur, but not on a fixed timetable – context and personal factors modulate the sequence. For our purposes, this underscores that any structured descriptor system for mental states must accommodate flexibility and individual differences. The models from Buddhism provide a conceptual vocabulary of states and changes (a foundation), but they are not deterministic algorithms. Instead, think of them as maps of possibility space – much like a phase space in dynamics – where each person's trajectory may trace a unique path through that space.

2. Dynamic Mental States in Practice: Examples and Symbolism

To design training data for a mental-state model, we can draw on vivid descriptions of mind dynamics from meditation manuals and symbolic narratives. Buddhist literature is full of detailed accounts of how experience unfolds over time, especially in meditation or at death. These can serve as archetypal examples of mental state sequences.

One example is the eight-stage dissolution described in Tibetan death meditation and the Bardo Thödol. As mentioned, it provides concrete signs at each stage (loss of sensory faculties, inner visions of light, etc.) (andrewholecek.com; buddhanet.net). Such step-by-step narratives of inner change could be converted into structured data. For instance, one could label a JSON descriptor for the "earth element dissolving" stage with features like: {"physical_sense":"stability lost","mental_feeling":"disorientation","imagery":"mirage-like visions"} (based on traditional signs). Then the next JSON for "water dissolving" might adjust those features (e.g. {"physical_sense":"fluids dry up","mental_feeling":"emptiness","imagery":"smoky vision"}), and so on. The source text gives a sequence of qualitative states with recognizable transitions, essentially a script for how awareness changes. We also have meditation stage descriptions in Theravāda Buddhism – the "progress of insight" stages. For example, at a certain point a meditator develops the "Knowledge of Dissolution" (bhaṅga-ñāṇa), where they perceive all phenomena as breaking apart moment by moment (accesstoinsight.org). Mahasi Sayadaw's manual describes this vividly: the meditator notes each sensation "just now it arises, just now it dissolves", and eventually only the vanishing is apparent (accesstoinsight.org). They no longer see a solid hand or body, only an "unbroken flux" of sensations disappearing into nothingness (accesstoinsight.org). This leads to subsequent stages like "Fear" and "Disgust" as the mind reacts to the voidness (accesstoinsight.org). Such texts offer clear trajectories of experience: from ordinary perception to a dynamic, disintegrating flux, to emotional upheavals and beyond. Each phase could be a data-point for our model – a complex state with attributes (e.g. perceptual clarity, emotional tone, insight level) that evolve in a known order.
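A sequence like this can be sketched directly as ordered JSON-style descriptors. The following is a minimal, hypothetical encoding: the field names and stage values are illustrative choices based on the traditional signs quoted above, not a fixed schema.

```python
# Hypothetical sketch: the dissolution sequence as ordered JSON-style
# descriptors. Field names and values are illustrative, not a fixed schema.
import json

DISSOLUTION_STAGES = [
    {"stage": "earth_dissolves", "physical_sense": "stability lost",
     "mental_feeling": "disorientation", "imagery": "mirage-like visions"},
    {"stage": "water_dissolves", "physical_sense": "fluids dry up",
     "mental_feeling": "emptiness", "imagery": "smoky vision"},
    {"stage": "fire_dissolves", "physical_sense": "warmth fades",
     "mental_feeling": "clarity wavers", "imagery": "sparks like fireflies"},
]

def transitions(stages):
    """Pair each stage with its successor, yielding (from, to) examples."""
    return [(a["stage"], b["stage"]) for a, b in zip(stages, stages[1:])]

print(json.dumps(DISSOLUTION_STAGES[0], indent=2))
print(transitions(DISSOLUTION_STAGES))
```

Because the stages are ordered, each adjacent pair doubles as a training example of a known transition, which is exactly the "script for how awareness changes" described above.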

Another rich source is symbolic imagery used to convey mental transformation. The Zen tradition's Ten Oxherding Pictures is a classic example: ten drawings with poems that represent stages on the path to enlightenment, from seeking the lost ox, to taming it, to riding it home, to finally returning to the world enlightened (tricycle.org). Each stage in the Oxherding series corresponds to a shift in the practitioner's mindset – e.g. from initial searching (a state of restless seeking), to seeing the ox (first glimpse of true nature, i.e. initial insight), to struggle (taming the unruly mind), to peaceful riding (mind and self in harmony), to emptiness (the ox is forgotten, self drops away), and finally return to the marketplace (integrating realization with everyday life). These are qualitative states of consciousness presented as a progression. We could imagine encoding each Oxherding stage in a structured descriptor: e.g. Stage 1 might have {"motivation":"yearning for truth","clarity":0.2,"ego_strength":0.9,"metaphor":"dark wilderness"}, while Stage 7 (Ox Forgotten, Self Forgotten) might be {"motivation":"spontaneous","clarity":1.0,"ego_strength":0.0,"metaphor":"empty sky"}. The actual data schema aside, the point is that traditional metaphors encapsulate dynamic mental changes in a way that could inform feature design. By training on such sequences, a model may learn to speak about progressive transformation, not just static emotions.
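Once stages carry numeric features, intermediate states between them become computable. This sketch uses the invented Stage 1 and Stage 7 values above and linearly interpolates between them; the numbers are purely illustrative.

```python
# Illustrative sketch: numeric Oxherding-stage descriptors (values invented
# for illustration) and linear interpolation between two stages, treating
# each stage as a point in a small feature space.
STAGE_1 = {"clarity": 0.2, "ego_strength": 0.9}
STAGE_7 = {"clarity": 1.0, "ego_strength": 0.0}

def interpolate(a, b, t):
    """Blend two stage vectors: t=0 gives a, t=1 gives b."""
    return {k: round(a[k] + t * (b[k] - a[k]), 3) for k in a}

midpoint = interpolate(STAGE_1, STAGE_7, 0.5)
print(midpoint)  # {'clarity': 0.6, 'ego_strength': 0.45}
```

A model trained on such vectors could, in principle, describe not only the canonical stages but the in-between territory a practitioner actually inhabits.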

https://commons.wikimedia.org/wiki/File:Bhavacakra.jpg – Illustration: The Wheel of Life (Bhavachakra), a symbolic Buddhist mandala depicting cyclical states of existence. In this traditional image, mental afflictions (ignorance, desire, aversion) drive beings through various realms of experience. Such symbolic maps encode dynamic cycles of mind: the Bhavachakra's six realms (from heavens to hells) can be seen as archetypal mental states that beings continually rotate through, driven by their karma and emotions. It offers a visual ontology of experience: each segment represents a qualitative state of consciousness (blissful, angry, hungry, etc.), and the outer rim shows the 12-step cycle of dependent origination (a causal loop of mental events). This kind of rich symbolism could inspire structured descriptors capturing the causal, cyclic nature of mental states (commons.wikimedia.org; en.wikipedia.org).

Beyond canonical texts, contemplative manuals and modern guides provide examples of guiding someone through a shifting mind state. A meditation manual might say, "If agitation arises, recognize it, let it be, and it will transform into clarity." Here we have a mini-dynamic: state = agitation (restless thoughts, high energy, unpleasant feeling), action = mindful acceptance, resulting state = calm clarity (focused, low thought activity, neutral or pleasant feeling). Such cause–effect pairs can become training data. Even poetic descriptions of spiritual experience can be mined. For instance, the Therīgāthā (verses of early Buddhist nuns) includes first-person accounts of the moment of awakening: one nun describes her mind flipping from torment to freedom in an instant upon truly seeing impermanence (andrewholecek.com; buddhanet.net). Another talks of gradually wearing away defilements as a smith removes impurities from ore. These narratives – sudden versus gradual transformation – provide diverse patterns of change.
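The mini-dynamic above (state, action, resulting state) can be written down as a single transition record. This is a sketch only; the field names are our own illustrative assumptions, not an established schema.

```python
# Sketch: a cause-effect transition record distilled from the manual's
# instruction above. Field names are illustrative assumptions.
TRANSITION = {
    "from_state": {"name": "agitation",
                   "features": ["restless thoughts", "high energy",
                                "unpleasant feeling"]},
    "action": "mindful acceptance",
    "to_state": {"name": "calm clarity",
                 "features": ["focused", "low thought activity",
                              "neutral or pleasant feeling"]},
}

def describe(t):
    """Render the transition as a compact arrow notation."""
    return (f"{t['from_state']['name']} --[{t['action']}]--> "
            f"{t['to_state']['name']}")

print(describe(TRANSITION))
```

A corpus of such records, one per cause–effect pair mined from manuals, would give the model explicit (state, action, outcome) triples rather than isolated emotion labels.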

In summary, there is no shortage of source material for dynamic mental states. From the structured progression in the Visuddhimagga’s insight knowledges to the symbol-laden journey of the Oxherding pictures, we have examples that can be translated into structured descriptors and sequences. Using these, the model’s training data can include timelines of mind – not just isolated snapshots of “happy” or “sad,” but e.g. “from fear to acceptance” or “from confusion to insight” with intermediate steps. This equips the model to narrate and guide processes of transformation, rather than only identifying static emotions.

3. Meaning-Space and Tensor-Like Models in Science

The idea of a "meaning-based" vector model of mental states finds support in cognitive science and AI research. Rather than treating mental states as simple labels (happy, sad, etc.), researchers have proposed multi-dimensional representations – essentially coordinate systems for the mind. A contemporary example is the "3D Mind Model" identified by psychologists through cross-cultural text analysis (pmc.ncbi.nlm.nih.gov). By analyzing millions of tweets and texts about mental states, they found that people conceptualize mental states along three core dimensions: Rationality (logical vs emotional), Social Impact (how much the state affects others), and Valence (pleasant vs unpleasant) (pmc.ncbi.nlm.nih.gov). Remarkably, this model explained mental-state meanings across 57 countries, 17 languages, and even 2,000 years of historical texts, suggesting these three dimensions are a generalizable conceptual backbone for mental-state representation (pmc.ncbi.nlm.nih.gov). In other words, "anger," "grief," and "excitement" can be located in a three-dimensional space, and those coordinates seem to make sense to people globally. This is very much a tensor-like view: any given state = (rationality, social impact, valence), a 3-vector. We could extend it with more axes (e.g. arousal level, focus vs distraction), creating a higher-dimensional feature tensor for a mental state. The 3D Mind Model researchers even note that these dimensions correlate with dynamic transitions: people more easily move between states that are closer in this conceptual space (pmc.ncbi.nlm.nih.gov). For example, a state like "anxious" (moderately negative valence, medium rationality, medium social impact) might transition more readily to "frustrated" (similar valence and rationality) than to "ecstatic" (far off in valence). This aligns with the idea of modeling mental change as trajectories through a state-space: if states have coordinates, then change is a vector movement. Phase transitions could be modeled as a rapid movement in this space crossing a boundary (e.g. crossing from negative to positive valence might feel like a dramatic flip in mood).
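The "closer states are easier transitions" idea reduces to distances between coordinate vectors. In this sketch the three axes follow the 3D Mind Model (rationality, social impact, valence), but the coordinates are our own rough guesses for illustration, not values from the published study.

```python
# Sketch: mental states as points in a 3D (rationality, social impact,
# valence) space. Coordinates are illustrative guesses, not published data.
import math

STATES = {
    "anxious":    (0.4, 0.4, -0.6),
    "frustrated": (0.4, 0.5, -0.5),
    "ecstatic":   (0.2, 0.5,  0.9),
}

def distance(a, b):
    """Euclidean distance between two named states."""
    return math.dist(STATES[a], STATES[b])

# Nearby states should model easier transitions than distant ones.
assert distance("anxious", "frustrated") < distance("anxious", "ecstatic")
print(round(distance("anxious", "frustrated"), 3),
      round(distance("anxious", "ecstatic"), 3))
```

The same machinery extends unchanged to higher-dimensional descriptors: add axes, and distance still ranks which transitions are psychologically "adjacent."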

Cognitive science has long used simpler affect dimensions, like the circumplex model of emotion (valence × arousal), to map feelings. The 3D Mind Model expands on this and outperforms many prior theories in predicting how people think about states (pmc.ncbi.nlm.nih.gov). Interestingly, it mirrors some ancient models: Buddhism's brahmavihāras (sublime states) are described with axes of pleasantness and engagement (e.g. equanimity is neutral valence, low craving; compassion is sorrowful valence but high altruistic orientation). The key point is that embedding mental states in a continuous space is a recognized approach – and it enables capturing gradients and relationships between states, not just discrete categories.

In AI and neuroscience, we see analogous ideas. Vector embeddings are used to represent semantics; one can imagine embedding a mental-state description in a vector space such that distances reflect psychological similarity. There has even been work on a "Tensor Brain" model in AI, in which a subsymbolic representation layer encodes cognitive states in a distributed manner (reminiscent of the Global Workspace Theory of consciousness) (arxiv.org). While that research is more about perception, it treats the brain's state as a latent vector that can be activated or queried by symbols (arxiv.org). In contemplative neuroscience, advanced fMRI studies of meditation map distinct brain-state networks corresponding to meditative absorptions (jhānas) (sciencedirect.com; biorxiv.org). For example, an adept entering first jhāna vs second jhāna shows shifts in network connectivity, and researchers identified about 20 robust brain states recurring during an insight meditation session (meditation.mgh.harvard.edu; biorxiv.org). This suggests the brain moves through a repertoire of states even in "still" meditation, reinforcing the notion that we can talk about a state-space of meditation. Some scientists explicitly use dynamical-systems language: cognition as trajectories, attractors, and phase transitions in neural activity (gwern.net). All this lends credence to modeling mind states with a structured, mathematical approach. We are not the first to think in terms of mind-state tensors or fields – the metaphors crop up in both ancient contexts (e.g. "field of mind" in some Mahāmudrā texts) and modern ones (e.g. field models of global brain activity). The concept of "meaning-space" we are aiming for resonates with these efforts: it is essentially a conceptual space in the sense of cognitive science (à la Gärdenfors), tailored to inner experience and its transformations.

An important aspect is that such multi-dimensional models allow richer queries and interpolations. If our system has, say, a 10-dimensional descriptor for mind-states (including emotional tone, attention focus, sense of self, clarity, bodily sensation, etc.), we can ask the model things like "move slightly in the direction of less ego" or "what's between anxiety and excitement?" and get meaningful answers. This is analogous to how image generators navigate a latent space, or how word embeddings support analogies ("king" − "man" + "woman" ≈ "queen"). A tensor representation makes the space of mind-states continuous and navigable. Notably, our proposal is that these descriptors "exist in tension between elements: this and that, fear and clarity, habit and release" – a direct reflection of the yin-yang style or dialectical structure of many psychological states. A JSON-based format could capture such polarities (e.g. a field like "tension": {"fear": 0.7, "clarity": 0.3}). One could also draw on psychological models of oppositional pairs (Plutchik's wheel of emotions, for example, treats joy vs sadness and trust vs disgust as opposites).
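A polarity field of this kind is easy to keep well-formed if the two pole weights are normalized to sum to one, so the pair always reads as a single axis. This is a minimal sketch; the schema and field names are illustrative assumptions.

```python
# Sketch of a polarity ("tension") field: each pole gets a raw weight, and
# weights are normalized so the pair reads as one axis. Schema is illustrative.
def normalize_tension(tension):
    """Scale pole weights so they sum to 1.0."""
    total = sum(tension.values())
    return {pole: round(w / total, 3) for pole, w in tension.items()}

state = {
    "name": "threshold_moment",
    "tension": normalize_tension({"fear": 7, "clarity": 3}),
}
print(state["tension"])  # {'fear': 0.7, 'clarity': 0.3}
```

Normalization also makes tensions comparable across descriptors: "fear 0.7 / clarity 0.3" means the same proportion regardless of how the raw weights were elicited.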

4. Feasibility in a Retrieval-Augmented Generation (RAG) System

Implementing this in a RAG-based conversational system is conceptually feasible and could be quite powerful. RAG typically means the language model retrieves relevant text chunks from a knowledge database to ground its responses (smashingmagazine.com). In our case, the "knowledge" is not a set of factual encyclopedia entries but a knowledge base of mental-state descriptors and transitions. The model's goal is not to answer factual questions, but to provide resonant, insightful responses that guide the user. This means the retrieved pieces would likely be things like: descriptions of a state that matches the user's situation, metaphors or stories that could shed light on it, or suggestions of practices for moving from that state to a more wholesome one.

Imagine the user says, "I feel stuck in a void, kind of lost after losing my job." A traditional QA system might give factual advice about job searching. Our system, using the structured descriptors, might recognize this as similar to, say, the "emptiness and fear" state that follows a big loss. It could retrieve a JSON entry (or an indexed passage) corresponding to a state of dissolution or void (perhaps analogous to the Knowledge of Dissolution stage or a bardo of uncertainty). That entry might contain fields linking "void feeling" with "potential for new insight" and notes from Buddhist texts, e.g.: "In the Bardo of Dharmata, the mind experiences an empty void that can be either terrifying or liberating (andrewholecek.com)." The LLM, seeing this retrieved context, could then generate an answer along the lines of: "You describe a feeling of voidness – in Buddhist terms this is like a transition, when old forms have dissolved. It can feel disorienting (the teachings say 'what is revealed is a formless emptiness' which many experience as a blackout (andrewholecek.com)). But within that void a clarity can also emerge if you gently familiarize yourself with it. Consider that this 'stuck in a void' phase might be the mind's way of resetting, much like a pause before a new chapter…" The crucial point is that the system isn't pulling generic platitudes out of thin air – it is grounded in a knowledge repository of human experiences and wisdom texts.

From an architecture perspective, we would maintain a database of JSON state representations (and possibly associated explanatory text or imagery). When the user inputs something (a question, a description of their feelings, etc.), we embed that and search for similar states in our database (vector similarity or some semantic search). This is analogous to RAG retrieving relevant documents. The difference is the “documents” are finely structured and can be tailored – even generated on the fly for certain transitions. The retrieved JSON could then be fed into the prompt, and the LLM is instructed to integrate it into a coherent, empathic response rather than a factual answer. Essentially, the model uses the retrieved state-descriptor as a context or lens for its generation. This aligns with the concept of “resonance” – the system finds content that resonates with the user’s condition or inquiry, and uses it to shape a response that the user will find meaningful and insightful.
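The retrieval step described above can be sketched in a few lines. This is a toy under stated assumptions: descriptors carry precomputed embedding vectors (here hand-made 3-vectors; a real system would use a sentence-embedding model for both descriptors and the user utterance), and candidates are ranked by cosine similarity.

```python
# Minimal retrieval sketch: rank stored state descriptors by cosine
# similarity to an embedded user utterance. Embeddings are toy 3-vectors.
import math

def cosine(a, b):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

DESCRIPTORS = [
    {"state": "void_after_loss", "embedding": [0.9, 0.1, -0.7],
     "note": "old forms dissolved; disorienting but potentially clarifying"},
    {"state": "restless_seeking", "embedding": [-0.2, 0.8, 0.1],
     "note": "agitated search for direction"},
]

def retrieve(user_embedding, k=1):
    """Return the k descriptors most similar to the user's embedding."""
    ranked = sorted(DESCRIPTORS,
                    key=lambda d: cosine(user_embedding, d["embedding"]),
                    reverse=True)
    return ranked[:k]

# A toy embedding of "I feel stuck in a void" lands nearest the void state.
hit = retrieve([0.8, 0.0, -0.6])[0]
print(hit["state"])  # void_after_loss
```

The retrieved descriptor (state name plus its note and any quoted passages) is then placed into the prompt as the "lens" for generation, exactly as a conventional RAG pipeline would place retrieved documents.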

Such a system would function more like a conversational guide or coach. Feasibility concerns like factual accuracy are less pressing (we’re not asserting facts about the external world), but coherence and appropriateness become key. The structured descriptors help with that by providing a scaffold for the model’s output. They act as constraints and inspiration: constraints in that the model has a specific schema to adhere to (preventing it from wandering into irrelevant territory), and inspiration in that it gets rich material (metaphors, known dynamics, practices) to draw upon.

One could also incorporate feedback loops: as the user continues the conversation, the system could update the estimated state descriptor (maybe the user corrects it: “no, it’s less fear and more numbness”), and retrieve new entries accordingly. Over time, it might even help the user navigate toward a desired state (retrieving JSONs that represent gradually more positive or insightful states, for example).
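The feedback loop just described amounts to blending the current state estimate toward the user's correction. A minimal sketch, where the blend weight, field names, and values are all illustrative assumptions:

```python
# Sketch of the feedback loop: when the user corrects the estimate
# ("less fear, more numbness"), blend the current descriptor toward the
# correction. Weight and fields are illustrative assumptions.
def update_state(current, correction, weight=0.7):
    """Move each corrected feature `weight` of the way toward the correction."""
    merged = dict(current)
    for feature, value in correction.items():
        merged[feature] = round(
            (1 - weight) * current.get(feature, 0.0) + weight * value, 3)
    return merged

estimate = {"fear": 0.8, "numbness": 0.1}
corrected = update_state(estimate, {"fear": 0.2, "numbness": 0.9})
print(corrected)  # fear eases, numbness rises
```

Keeping the weight below 1.0 lets the system take corrections seriously without discarding its prior estimate outright; repeated corrections converge on the user's self-description.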

Technically, none of this violates RAG principles. It is an expansion of the idea: usually RAG serves factual QA (smashingmagazine.com), but here it serves experiential QA, where the "answers" are quotes, analogies, or guidance relevant to the experience. Early experiments in therapeutic or motivational chatbots already use something like this: some systems retrieve affirmations or relevant advice snippets based on the user's utterance. For instance, a prototype emotional-support agent might detect that the user is in grief and retrieve a comforting quote or a CBT prompt about grief, then weave it into the reply. In fact, a recent emotional-support dialogue system ("Mind2") uses a knowledge-extraction approach: it performs a cognitive analysis of the conversation, infers each speaker's beliefs and mental state, and uses that to guide response generation (arxiv.org). While Mind2 is not exactly RAG, it similarly highlights maintaining a structured representation of the user's state (using Theory of Mind and other cognitive variables) to produce interpretable, adaptive support (arxiv.org). This indicates that our approach – combining an LLM with a dynamic knowledge base of mental-state information – is on the right track and feasible with current technology.

The goal of “resonance, inquiry, and guidance” is well-served by RAG because we can include not just conclusions but questions and prompts from the retrieved material. For example, a JSON entry for “feeling of meaninglessness” might include a field like "inquiry_prompt": "What belief is being challenged when you feel this emptiness?". The model can incorporate that and actually ask the user this question, facilitating self-inquiry. Likewise, many Buddhist sources are themselves in dialog or question form (think of Zen koans or the Socratic style of some suttas). By retrieving such content, the model can naturally take on a guiding, questioning role rather than a didactic one. In sum, using RAG for this is not just feasible but advantageous: it grounds the conversation in a curated set of wisdom and observations about mind, ensuring that the model’s creativity stays tethered to meaningful directions.

5. Related Work and Considerations

There are precedents for structured modeling of internal states in AI and related fields – though none exactly like our vision, they provide encouragement and lessons. One relevant line of work is narrative understanding: AI models that track characters' mental states in stories. For example, Rashkin et al. (2018) created a dataset in which short-story characters are annotated with structured mental-state descriptors from psychology (aclanthology.org). Each character's state is labeled with Maslow's hierarchy of needs, Reiss's fundamental motives, and Plutchik's eight emotions (aclanthology.org). This multi-faceted labeling is essentially a JSON of the character's internal state at each story step. Researchers then built models to predict and use these states for better story comprehension and generation (aclanthology.org). They found that explicitly modeling these needs and motives helped the AI understand why a character acted a certain way, improving tasks like predicting story endings (aclanthology.org). In our context, this is a strong precedent: it shows that combining different ontologies of mental state (needs, motives, emotions) in a structured format can boost a model's ability to reason about psychological dynamics. We can draw on those categories as part of our descriptor schema. It also shows that even for fictional characters, treating a mental state as a node in a knowledge graph (with edges like motivations and outcomes) yields more coherent narratives. Conversational guidance is, in a sense, co-creating a narrative of the user's mind, so similar techniques should apply.
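A record in that style, combining the three ontologies in one structure, might look as follows. The exact field names and values here are our own illustrative choices, not the dataset's actual format.

```python
# Sketch of a Rashkin-et-al.-style annotation combining three ontologies
# (Maslow, Reiss, Plutchik) in one record; field names are illustrative.
ANNOTATION = {
    "character": "the meditator",
    "maslow_need": "esteem",
    "reiss_motive": "curiosity",
    "plutchik": {"fear": 2, "anticipation": 3},  # intensity 0-3
}

def dominant_emotion(record):
    """Return the highest-intensity Plutchik emotion in the record."""
    return max(record["plutchik"], key=record["plutchik"].get)

print(dominant_emotion(ANNOTATION))  # anticipation
```

Keeping needs, motives, and emotions in separate fields lets downstream code query each ontology independently or combine them, which is what made the structured labels useful for story-ending prediction.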

https://commons.wikimedia.org/wiki/File:Plutchik-wheel.svg – Plutchik's Wheel of Emotions, a psychological model that arranges emotions in a structured spectrum. In AI research, such models are used as features to describe an agent's state. For instance, a story character's emotion might be encoded as one of the eight core categories (joy, trust, fear, surprise, sadness, disgust, anger, anticipation) plus an intensity level (aclanthology.org). Embedding this kind of structured emotional representation allows an AI to understand gradients (e.g. rage vs annoyance) and opposites (e.g. joy vs grief). It is a concrete example of a meaning-based descriptor: the position on the wheel conveys the character of a mental state beyond a binary label.

Another relevant area is affective computing and user modeling. Systems like virtual therapists or companion AIs attempt to maintain a model of the user’s emotional state, often a simple one (e.g. a score for stress, a score for mood). Our proposal significantly enriches that, but can learn from it. For example, the IBM Watson Tone Analyzer once provided multiple emotional and language tone scores from text (joy, anger, analytical tone, etc.) – essentially a mini tensor per utterance. Some chatbot frameworks maintain a “context object” with the user’s current sentiment and intent as slots. We’re proposing a far more comprehensive context object (covering deep mental facets), but the concept is similar.

On the side of knowledge representation, there have been ontologies for mental states in semantic networks like ConceptNet or Cyc, though these are relatively coarse (they know that "grief is a kind of negative emotion," etc.). There is also the field of cognitive ontology in neuroscience, which tries to formally define mental functions and states for mapping to brain activity (ncbi.nlm.nih.gov; pmc.ncbi.nlm.nih.gov). While somewhat academic, it shows that scholars are indeed trying to create structured taxonomies of mind that bridge subjective, behavioral, and neural definitions. Any such ontology can inform our schema, and conversely our approach might give a more dynamic, experiential angle to ontology.

We should also consider work in contemplative studies bridging traditional maps and modern frameworks. Francisco Varela’s neurophenomenology is a noteworthy paradigm – pairing first-person reports with neural data, treating subjective experience descriptions as data structures that need to be formally related to brain states. Our project could be seen as building a phenomenology database that an AI can draw on. There may not be prior AI systems that quote the Bardo Thödol to users, but there are certainly ones that quote Marcus Aurelius or other philosophical texts for guidance. Even simple ones: a popular meditation app might respond with a relevant quote or meditation suggestion if you report stress. So one indirect precedent is the use of quote databases or tip databases in wellness chatbots – those are retrieval-based, but not yet structured in the way we envision.

Crucially, Mindful AI design has to heed potential challenges. One challenge is oversimplification: turning a nuanced mind-state into JSON could strip away idiosyncrasy, and the user’s unique story might not fit a pre-made schema. To mitigate this, our schema should be flexible and extensible: it could allow free-text notes, or a probability distribution over state types rather than a single label. Even the Tibetan models allow for flexibility (not everyone hits each stage exactly) (andrewholecek.com). Our system should perhaps retrieve multiple candidate states or allow fuzzy overlaps (“you seem 30% in stage A, 70% in stage B”). Another challenge is cultural bias: the structures we use (even something like valence or Maslow’s needs) might not capture frameworks of meaning from non-Buddhist or non-Western contexts. However, the intention to include sources such as Sufi poetry or Indigenous wisdom in the database could enrich the perspective and guard against one-dimensional bias.
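The fuzzy-overlap idea can be sketched directly: instead of a single label, the descriptor carries a normalized distribution over candidate states, so “30% in stage A, 70% in stage B” is representable without forcing a choice. The stage names and the `free_text` field are placeholders for this sketch.

```python
# A descriptor that holds a probability distribution over candidate
# states rather than one hard label. Stage names are placeholders.

def normalize(weights: dict[str, float]) -> dict[str, float]:
    """Scale raw weights so the candidate-state probabilities sum to 1."""
    total = sum(weights.values())
    if total <= 0:
        raise ValueError("weights must contain at least one positive value")
    return {state: w / total for state, w in weights.items()}

descriptor = {
    "states": normalize({"stage_A": 3.0, "stage_B": 7.0}),  # 30% / 70%
    "free_text": "user reports a mix of dissolution and clarity",
}

# The dominant state can guide retrieval while the full mixture is preserved.
dominant = max(descriptor["states"], key=descriptor["states"].get)
```

Downstream retrieval can then fetch material for every state above some threshold, rather than committing to a single diagnosis.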

One might also question: can language truly convey these subtle mind-state dynamics? There is the ineffability problem – e.g. Zen warns not to confuse the finger pointing at the moon for the moon itself. A JSON descriptor is definitely “a finger,” not the direct experience. But our goal is not to replace experience, it’s to allow an AI to speak more artfully and helpfully about experience. If anything, the structured approach may help the AI avoid some pitfalls of language models, like spitting out empty clichés. Because it will refer to authentic descriptions and multi-faceted conditions, its responses can ring true and avoid contradiction. Essentially, it grounds the conversation in a web of meaning hand-curated from human wisdom and observation, which is a way to align the AI with humanly comprehensible truths (even if those truths are poetic).

In conclusion, the viability of using structured mental-state descriptors in an LLM system looks promising. We have ancient conceptual metaphors that encourage thinking of mind states as configurations (fields, phase transitions, networks of qualities). We have modern scientific models that validate representing mental states in multi-dimensional spaces and even show neural correlates of state transitions. We have practical examples and datasets where structuring inner states improved AI understanding (narrative graphs, emotional support dialogues). And we have a RAG architecture that can leverage all this by injecting relevant context into the generative process. The richness of this approach is that it moves the AI beyond factual Q&A into the realm of “felt sense” and guidance. Rather than answering “what’s the capital of X,” the system can answer questions like “I feel lost – what now?” by drawing on a tapestry of structured wisdom. This represents a novel use of AI: a kind of knowledgeable mirror for the mind. Of course, careful curation and iteration will be needed to get it right, but the foundations – both conceptually and in related work – are strong. The endeavor stands on the shoulders of Buddhist mind scientists, modern affective scientists, and AI researchers who all, in their own way, treated the mind as something that can be modeled without trivializing its mystery. With respect for that balance, implementing this system could indeed yield an AI that speaks meaningfully about mental experience and transformation.
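The RAG loop summarized in the conclusion can be reduced to a toy sketch: score a small corpus of state descriptions against the user’s utterance, then inject the best match into the prompt handed to the generative model. The corpus entries and the bag-of-words similarity here are illustrative stand-ins for a real embedding index over a curated descriptor database.

```python
import re
from collections import Counter
from math import sqrt

# Toy retrieval-augmented-generation loop: the corpus keys and texts are
# invented examples, and bag-of-words cosine stands in for real embeddings.

CORPUS = {
    "dissolution": "a sense of ground falling away, lost, unmoored, open",
    "clarity": "luminous, settled, spacious awareness without grasping",
    "agitation": "restless churning, scattered attention, irritation",
}

def bow(text: str) -> Counter:
    """Lowercased bag-of-words vector, punctuation stripped."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str) -> str:
    """Return the corpus key whose description best matches the query."""
    q = bow(query)
    return max(CORPUS, key=lambda k: cosine(q, bow(CORPUS[k])))

def build_prompt(query: str) -> str:
    """Inject the retrieved descriptor into the generative model's prompt."""
    state = retrieve(query)
    return (f"Relevant state descriptor [{state}]: {CORPUS[state]}\n"
            f"User: {query}\n"
            f"Respond with care, grounded in the descriptor above.")

best = retrieve("I feel lost like the ground is falling away")
```

This is how “I feel lost – what now?” becomes answerable: the retrieval step supplies the felt-sense context that a bare question-answering model lacks.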

Postscript: Where This May Lead

This is a beginning.

We are still in the foothills of building a model of mind that could reflect the subtlety of lived experience. These descriptors, inspired by traditional Buddhist sources, are only a first step in mapping how awareness moves and changes.

Our aim is not to finalize a definitive model, but to open a space for conversation and reflection, and to build technologies that support human enquiry. As we deepen the modelling and build more refined tools, we hope to invite fellow practitioners, researchers, and curious minds to support and participate in this approach.

If this work resonates with you, or sparks ideas or questions, you’re warmly invited to join the ongoing exploration:

Sources: