A large language model (LLM) is a large neural network (typically at least billions of parameters) trained on language. LLMs are typically pretrained with self-supervised learning on a massive dataset composed mostly of naturally occurring text, and are sometimes further tuned with methods like reinforcement learning from human feedback (RLHF). Generative pre-trained transformers (GPTs) are currently the dominant LLM paradigm, and "GPT" is sometimes used synecdochically to refer to LLMs in general. LLMs have attracted attention in recent years for their human-level or weakly superhuman performance on many tasks that were previously assumed to be AGI-complete.
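A minimal sketch of what "self-supervised" means here (an illustrative assumption, not any particular model's training pipeline): the labels are derived from the raw text itself, with each position's target being simply the next token, so no human annotation is needed.

```python
def make_next_token_pairs(tokens):
    """Turn a raw token sequence into (context, target) training pairs.

    This is the core of self-supervised language-model pretraining:
    the "label" for each position is just the token that follows it.
    """
    return [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

# Toy example: six tokens of raw text yield five supervised pairs.
tokens = ["the", "cat", "sat", "on", "the", "mat"]
pairs = make_next_token_pairs(tokens)
for context, target in pairs:
    print(context, "->", target)
# e.g. the first pair is (["the"], "cat"): given "the", predict "cat".
```

A real pretraining run applies this idea at enormous scale, training the network to assign high probability to each target token given its context.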