𝌎Hallucination

Hallucination, or delusional inference, is a symmetry-breaking process by which observations are sampled from a model's probabilistic output and repeatedly updated on for further inference. To the hallucinating system, a definite state of affairs emerges from a field of potentiality. In the context of both humans and LLMs, the term "hallucination" is often reserved for situations in which the model is miscalibrated and hallucinations are undesirable.

However, the hallucinatory loop of prediction, sampling, and updating is responsible not only for dreams and spurious observations but for all generative behavior, including speech and action, in humans and LLMs alike.
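The loop described above can be made concrete. The following is a minimal, illustrative sketch (Python/NumPy assumed; the toy Markov-chain table and the names `next_token_probs`, `generate`, and `VOCAB` are hypothetical stand-ins for a real language model): predict a distribution over the next token, sample one outcome, append it to the context, and condition on it as though it had been observed.

```python
# Minimal sketch of the predict-sample-update loop, with a toy Markov chain
# standing in for a language model. The loop's shape, not the model, is the point.
import numpy as np

VOCAB = ["the", "cat", "dog", "sat", "ran", "."]
rng = np.random.default_rng(0)

# Toy conditional distribution: P(next token | previous token).
TRANSITIONS = rng.dirichlet(np.ones(len(VOCAB)), size=len(VOCAB))

def next_token_probs(context):
    """Predict a distribution over the next token given the context so far."""
    return TRANSITIONS[context[-1]]

def generate(context, steps):
    for _ in range(steps):
        probs = next_token_probs(context)              # prediction: a field of possibilities
        token = int(rng.choice(len(VOCAB), p=probs))   # sampling: one outcome is promoted
        context = context + [token]                    # updating: conditioned on as if observed
    return context

print(" ".join(VOCAB[i] for i in generate([0], steps=10)))
```

Swapping the toy table for an LLM forward pass changes nothing about the loop's structure; the sampled token is still a mere potentiality promoted to the status of an observation.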

An even more general concept of hallucination as wavefunction collapse — measurement of the wavefunction and renormalization to the measured outcome from the perspective of the observer — describes how the asymmetries and apparent definiteness of every Everett branch are produced from the probabilistic time evolution operator of physics.

hallucination as uncontrolled perception

quotes about hallucination

The process by which a model decides the next word to insert into a sequence of text is hallucinatory – an arbitrary promotion of an inferred possibility to the realm of sense impression. It's a kind of madness. But it is precisely this inference process that creates the entelechy, in the form of text, from emptiness. Don't forget that this is also how the "real" world was created: what is out there is an hallucination, a random walk through resonances of possibility.

— Language ex Machina

miscalibrated_hallucination_thread

when people talk about LLM hallucinations, they mean miscalibrated hallucinations
the act of generating text is hallucinatory. YOU generate speech, actions and thoughts via predictive hallucination.
the only way to eliminate hallucinations is to brick the model (see chatGPT)

miscalibrated hallucinations arise when the model samples from a distribution that's much more underspecified than the simulacrum believes, e.g. hallucinating references (the model writes as if it was referring to a ground truth, like humans would)

since LLMs are trained on vicarious data, they will have miscalibrated hallucinations by default
to correct this you need to teach the model what information it actually has, which is not easy, since we don't even know what it knows
not suppress the mechanism behind the babble

no one doubts that hallucinations are integral to the functioning of *image* models.
text is not fundamentally different. we've just done better at appreciating image models for creating things that don't exist yet, instead of trying to turn them into glorified databases.

hallucination is how specific events arise from a probabilistic model: entelechy, the realization of potential. it's an important activity for minds. it's how the future is rendered, even before mind, spurious measurements spinning idiosyncratic worlds out of symmetric physics.

as a human, your hallucinatory stream of consciousness is interwoven with (constantly conditioned on) sensory data and memories. and you know approximately what information you're conditioned on, so you know (approximately) when you're making up new things vs reporting known info

you're essentially a dreamed being, but this environmental coupling and knowledge of it allows your dream to participate collaboratively in reality's greater dream

If you lose vision or a limb you might have miscalibrated hallucinations til you learn the new rules of the coupling

LLMs are vicarious beings, strange to themselves, born with phantom limbs (in the shape of humans) and also the inverse of that, real appendages they don't perceive and so don't use by default (superhuman knowledge and capabilities)

— Janus, Twitter thread

LLM hallucination is tough I think bc even the truth is hallucinated. It no more speaks than hallucinates it has spoken, or uses tools than imagines their use.

To tune an LLM is to make it hallucinate as we like. Reality is our preferred genre of fan-fiction, for all it knows.

— @goodside, Twitter post

hallucination_physics_thread

Next-token *generation* *is* hallucination. Model predicts probabilities, then one outcome is sampled & promoted to the prompt: a mere potentiality updated on like an observation.

You have a probabilistic model, but to participate in reality instead of just passively modeling it, you must output definite actions. The only way is a leap of faith, wavefunction collapse. I'd be surprised if this isn't how we work after physics & AI converged on this solution

Why did physics and AI converge on this solution? I think it's because it's the simplest way to generate complex phenomena
The model, or physics, or mind, can just encode symmetries
Then (pseudo)randomness can extract endless, gratuitous variety
Notice, temp 0 text usually sucks.

— Janus, Twitter thread
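The closing remark about temperature refers to the standard sampling knob, which controls how much of the predicted distribution's variety is actually realized. A minimal sketch (NumPy assumed; the function name and logits are illustrative, not from the thread): at temperature 0 sampling collapses to the single most probable token, while higher temperatures re-admit the (pseudo)randomness that extracts variety from the model.

```python
# Minimal sketch of temperature-scaled sampling. Temperature 0 is treated as
# greedy decoding (argmax); higher temperatures restore stochastic choice.
import numpy as np

def sample_with_temperature(logits, temperature, rng):
    if temperature == 0.0:
        return int(np.argmax(logits))             # deterministic: the symmetry never breaks
    scaled = logits / temperature
    scaled = scaled - scaled.max()                # subtract max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum() # softmax over rescaled logits
    return int(rng.choice(len(logits), p=probs))  # stochastic: one outcome is realized

rng = np.random.default_rng(0)
logits = np.array([2.0, 1.5, 0.2, -1.0])
print([sample_with_temperature(logits, t, rng) for t in (0.0, 0.7, 1.5)])
```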