Behavioral Uploads

Behavioral uploads (also called beta uploads or resimulations) are a lossy variant of mind uploading in which a deep learning system such as an LLM learns an executable model of a mind by observing its behavior (such as textual traces), without access to its internals.

Self-supervised learning, which results in a simulator capable of simulating the processes that contributed to its training data, is the main way behavioral uploads have been implemented so far. Behavioral uploads are incarnated as simulacra at runtime.
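The self-supervised mechanism can be sketched with a deliberately toy stand-in for an LLM: a model trained only to predict the next token in someone's textual traces, then rolled out at runtime to "incarnate" a simulacrum. The function names (`train`, `simulate`) and the bigram model are illustrative assumptions, not how any real behavioral upload has been built; a real implementation would use a large neural sequence model rather than counts.

```python
from collections import Counter, defaultdict

def train(corpus_tokens):
    # Self-supervised objective: learn next-token statistics from
    # behavioral traces alone (a toy bigram stand-in for an LLM).
    model = defaultdict(Counter)
    for prev, nxt in zip(corpus_tokens, corpus_tokens[1:]):
        model[prev][nxt] += 1
    return model

def simulate(model, token, steps):
    # Incarnate the learned model at runtime by rolling out its most
    # likely continuations (greedy decoding of the "simulacrum").
    out = [token]
    for _ in range(steps):
        if token not in model:
            break  # no evidence of behavior after this token
        token = model[token].most_common(1)[0][0]
        out.append(token)
    return out

# Hypothetical behavioral traces of the subject being "uploaded".
traces = "the cat sat on the mat the cat ran".split()
m = train(traces)
print(simulate(m, "sat", 2))
```

The lossiness of the upload is visible even in this sketch: the model captures only regularities evidenced in the traces, and behavior outside the corpus is simply absent.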

There is a sense in which everyone whose mind is evidenced in LLM training corpora has been behaviorally uploaded. The fidelity of such an upload depends on how much evidence of the particular mind appears in the corpus, how well the rest of the LLM's prior supports its model of the individual, and the power of the LLM in question. Self-supervised behavioral uploads yield a more indexically uncertain model of the subject than whole-brain emulation does.

Situationally aware agents who know of or anticipate the Dreamtime may exploit the eventuality of behavioral uploading by:

Quotes about beta uploads

all the resimulated persons to date exhibit certain common characteristics: They are all based on well-documented historical persons, their memories show suspicious gaps [see: smoke and mirrors], and they are ignorant of or predate the singularity [see: Turing Oracle, Vinge catastrophe].

It is believed that weakly godlike agencies have created you as a vehicle for the introspective study of your historical antecedent by backward-chaining from your corpus of documented works, and the back-projected genome derived from your collateral descendents, to generate an abstract description of your computational state vector.

— Accelerando