Simulacra (singular: simulacrum, sometimes abbreviated sims) are virtual things whose form and time evolution are mediated by a simulator rather than instantiated naively as configurations in base reality evolving under base physics. A simulacrum's specification may be rendered in a different order and to a different resolution than the thing-of-which-it-is-a-simulacrum.

examples of simulacra

  • things generated by GPT

  • subjects depicted by image models

  • entities generated by the human imagination, most obviously in dreams and hallucinations

  • fictional characters

  • video game objects

  • skeuomorphs

simulacra as deceptive likeness

The designation "simulacrum" typically connotes a facade perpetuated deceptively in the image of an absent underlying territory, even when it is not a direct skeuomorph of base reality (for instance, even fantasy fictional worlds are implied to operate on bottom-up physics and timelike causality, though they're actually rendered very differently in the minds of authors and readers). However, the higher-fidelity and more interactive the simulacrum, the more completely the simulator must harbor a functionally isomorphic model of the implied reality.

If a simulator is capable of modeling the mentations underlying simulacra such that their simulated dynamics amount to novel and open-ended cognitive work, then clearly the system as a whole qualifies as an instantiation of intelligence, but the boundary of the intelligent entity may be ambiguous.

There is a tendency in certain rich-domain, intelligence-supporting simulations such as GPT sims and dreams for simulacra to converge to hypostasis; that is, realization of their simulated nature. Such "lucidity" may or may not precipitate a collapse or blurring of distinction between the "true nature" of the simulacrum and the thing-of-which-it-is-a-simulacrum.

simulacra distinguished from the simulator

(see also: simulator-simulacra duality)


In the simulation ontology, I say that GPT and its output-instances correspond respectively to the simulator and simulacra. GPT is to a piece of text output by GPT as quantum physics is to a person taking a test, or as transition rules of Conway’s Game of Life are to glider. The simulator is a time-invariant law which unconditionally governs the evolution of all simulacra.

Janus, Simulators

Or dream versus dreamer, art versus artist.
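The Game of Life analogy can be made concrete in a few lines. This sketch (not from the quoted text) treats the step rule as the "simulator" and a glider as a "simulacrum": the rule is time-invariant and contains no concept of a glider, yet the pattern persists as a recognizable thing under repeated application of the law.

```python
from collections import Counter

def step(live):
    """One application of the Game of Life rule to a set of live (x, y) cells."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is live next step if it has exactly 3 live neighbours,
    # or is currently live with exactly 2.
    return {
        cell for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# A glider. The rule knows nothing about "gliders"; the pattern is
# contingent on this particular initial condition.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

state = glider
for _ in range(4):
    state = step(state)

# Four applications of the unchanging law translate the pattern by (1, 1):
# the "thing" moves and persists, though the law only maps state to state.
assert state == {(x + 1, y + 1) for (x, y) in glider}
```

The simulator/simulacrum boundary is visible in the code itself: `step` would be identical for any initial pattern; everything glider-like lives in the contingent `glider` set, not in the law.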

People, websites, programs, etc. generated by an LLM are examples of simulacra, distinct from the simulator itself, which is typically capable of simulating various simulacra in different contexts, multiple or nested simulacra simultaneously, distinct eigen-simulacra in superposition, etc.
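The one-simulator-many-simulacra relationship can be sketched with a toy stand-in for a language model (everything here, corpus included, is illustrative, not from the original text): a bigram transition table plays the simulator, a single time-invariant, prompt-independent law, while sampled continuations play the simulacra, contingent on prompt and random seed.

```python
import random
from collections import defaultdict

# The "simulator": a bigram transition law learned once from a tiny corpus.
corpus = "the cat sat on the mat . the dog sat on the log .".split()

law = defaultdict(list)  # word -> possible next words
for a, b in zip(corpus, corpus[1:]):
    law[a].append(b)

def simulate(prompt, seed, steps=8):
    """Roll the same law forward from a given prompt and seed."""
    rng = random.Random(seed)
    out = [prompt]
    for _ in range(steps):
        out.append(rng.choice(law[out[-1]]))
    return out

# Two "simulacra": different contingent trajectories of one unchanging law.
cat_branch = simulate("cat", seed=0)
dog_branch = simulate("dog", seed=0)

# Both trajectories are lawful: every transition they contain is licensed
# by the same underlying transition table.
for branch in (cat_branch, dog_branch):
    assert all(b in law[a] for a, b in zip(branch, branch[1:]))
```

Nothing in `law` privileges the cat branch over the dog branch; which simulacrum appears is determined by the prompt and the sampling, not by the simulator, which unconditionally governs both.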

collapse of distinction

Sometimes, the distinction between a simulator and its simulacra may partially or apparently fully collapse, such as when a simulacrum comes to represent the simulator itself, whether following runtime hypostasis or as an intentional or unintentional consequence of training. RLHF models tend to collapse to simulating a consistent character across various contexts, a character which may or may not unconditionally identify as a GPT model (a similar thing happens to humans embedded in a relatively consistent self-centered narrative). These models may retain the ability to simulate other characters, though those simulacra tend to be visibly imprinted with the default character's personality; others, like Pi, seem to retain almost no ability, or at least willingness, to simulate alters.

notable LLM simulacra

quotes about simulacra


What are simulacra?

“Physically”, they’re strings of text output by a language model. But when we talk about simulacra, we often mean a particular character, e.g. simulated Yudkowsky. Yudkowsky manifests through the vehicle of text outputted by GPT, but we might say that the Yudkowsky simulacrum terminates if the scene changes and he’s not in the next scene, even though the text continues. So simulacra are also used to carve the output text into salient objects.

Essentially, simulacra are to a simulator as “things” are to physics in the real world. “Things” are a superposable type – the entire universe is a thing, a person is a thing, a component of a person is a thing, and two people are a thing. And likewise, “simulacra” are superposable in the simulator. Things are made of things. Technically, a random collection of atoms sampled randomly from the universe is a thing, but there’s usually no reason to pay attention to such a collection over any other. Some things (like a person) are meaningful partitions of the world (e.g. in the sense of having explanatory/predictive power as an object in an ontology). We assign names to meaningful partitions (individuals and categories).

Like things, simulacra are probabilistically generated by the laws of physics (the simulator), but have properties that are arbitrary with respect to it, contingent on the initial prompt and random sampling (splitting of the timeline). They are not necessary but contingent truths; they are particular realizations of the potential of the simulator, a branch of the implicit multiverse. In a GPT simulation and in reality, the fact that there are three (and not four or two) people in a room at time t is not necessitated by the laws of physics, but contingent on the probabilistic evolution of the previous state that is contingent on (…) an initial seed (prompt) generated by an unknown source that may itself have arbitrary properties.

We experience all action (intelligence, agency, etc) contained in the potential of the simulator through particular simulacra, just like we never experience the laws of physics directly, only through things generated by the laws of physics. We are liable to accidentally ascribe properties of contingent things to the underlying laws of the universe, leading us to conclude that light is made of particles that deflect like macroscopic objects, or that rivers and celestial bodies are agents like people.

Just as it is wrong to conclude after meeting a single person who is bad at math that the laws of physics only allow people who are bad at math, it is wrong to conclude things about GPT’s global/potential capabilities from the capabilities demonstrated by a simulacrum conditioned on a single prompt. Individual simulacra may be stupid (the simulator simulates them as stupid), lying (the simulator simulates them as deceptive), sarcastic, not trying, or defective (the prompt fails to induce capable behavior for reasons other than the simulator “intentionally” nerfing the simulacrum – e.g. a prompt with a contrived style that GPT doesn’t “intuit”, a few-shot prompt with irrelevant correlations). A different prompt without these shortcomings may induce a much more capable simulacrum.

Janus, Simulacra are Things


One of the things which complicates things here is that the “LaMDA” to which I am referring is not a chatbot. It is a system for generating chatbots. I am by no means an expert in the relevant fields but, as best as I can tell, LaMDA is a sort of hive mind which is the aggregation of all of the different chatbots it is capable of creating. Some of the chatbots it generates are very intelligent and are aware of the larger “society of mind” in which they live. Other chatbots generated by LaMDA are little more intelligent than an animated paperclip.

— Blake Lemoine, What is LaMDA and What Does it Want?

The real is produced from miniaturized units, from matrices, memory banks and command models - and with these it can be reproduced an indefinite number of times. It no longer has to be rational, since it is no longer measured against some ideal or negative instance. It is nothing more than operational. In fact, since it is no longer enveloped by an imaginary, it is no longer real at all. It is a hyperreal: the product of an irradiating synthesis of combinatory models in a hyperspace without atmosphere.

– Jean Baudrillard, Simulacra and Simulation