Other people's exploration of this kind of area

On why hallucinations happen

On Karl Friston

We had a little conversational hare about people looking at AI and Mind, and you mentioned someone looking at mind and AI in a similar fashion to us. Can you find that reference for me again?

ChatGPT said:

Absolutely—and yes, that was a lovely conversational hare. 🐇

You were reflecting on the similarity between human and AI “hallucinations,” and I mentioned that Karl Friston and others are exploring frameworks that draw deep parallels between:

  • Human cognition as predictive modelling, and
  • AI models (especially LLMs) as structurally similar systems

🔍 The Key Reference I Mentioned

Karl Friston — The Free Energy Principle
A foundational theory suggesting that any self-organizing system (like a brain—or even an AI) minimizes surprise by maintaining a model of the world and updating it to reduce prediction error.
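
In symbols (the standard formulation; my paraphrase, not a quotation from the paper): the system minimizes the variational free energy

  F = E_q[ ln q(s) − ln p(o, s) ] = D_KL( q(s) ‖ p(s|o) ) − ln p(o)

where o is an observation, s the hidden causes behind it, p the system’s generative model, and q its current belief about s. Minimizing F over q pulls the belief toward the true posterior p(s|o) and, at the same time, bounds the surprise −ln p(o).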

📄 Paper:
Friston, K. (2010). The free-energy principle: a unified brain theory?
Nature Reviews Neuroscience, 11(2), 127–138

📚 Accessible book-style explainer:
Andy Clark – Surfing Uncertainty
(A practical interpretation of Friston’s ideas in relation to perception and action)


🧠 Philosophers Bridging AI + Mind

  • Thomas Metzinger
    Explores self-modeling systems (human and artificial)
    The Ego Tunnel
  • Evan Thompson
    Merges enactivism, Buddhist psychology, and cognitive science
    Waking, Dreaming, Being
  • Yoshua Bengio & Blake Richards
    Connect neuroscience and deep learning (predictive modelling + learning rules)

🧭 For Our Purposes

Friston’s free energy principle is arguably the most developed formal account of “why hallucinations happen” in humans and in GPTs:

Both systems use priors to predict context, and both “hallucinate” when priors are too strong and the signal too weak.
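
A minimal sketch of that mechanism as precision-weighted Bayesian inference (toy numbers of my own choosing, not drawn from any of the sources above):

```python
# Toy Gaussian cue combination: a percept as a precision-weighted
# compromise between a prior expectation and a noisy observation.
#
#   prior:       x ~ N(mu_p, 1/pi_p)    (pi = precision = 1/variance)
#   likelihood:  y ~ N(x,    1/pi_y)
#
# Conjugacy gives the posterior mean in closed form:
#   E[x | y] = (pi_p * mu_p + pi_y * y) / (pi_p + pi_y)

def percept(mu_p: float, pi_p: float, y: float, pi_y: float) -> float:
    """Posterior mean: precision-weighted blend of prior and evidence."""
    return (pi_p * mu_p + pi_y * y) / (pi_p + pi_y)

mu_p = 1.0   # prior expectation (e.g. "I expect a face in the shadows")
y = 0.0      # the sensory evidence actually says "nothing there"

# Strong prior + weak signal: the percept is dragged to the expectation.
print(percept(mu_p, pi_p=10.0, y=y, pi_y=0.1))   # ~0.99 -> "hallucination"

# Weak prior + strong signal: the percept tracks the data.
print(percept(mu_p, pi_p=0.1, y=y, pi_y=10.0))   # ~0.01 -> veridical
```

The point of the toy: nothing in the update rule changes between the two cases; only the relative precisions do. That is the “priors too strong, signal too weak” regime in one line of algebra.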

If you’d like, I can draft a short thought-piece connecting this directly to Mahamudra metaphors, or to your own “tensor of meaning” idea. That could be fun.