Hallucination (artificial intelligence)
Confident unjustified claim by an AI
From Wikipedia, the free encyclopedia
In artificial intelligence (AI), a hallucination or artificial hallucination (also occasionally called confabulation or delusion) is a confident response by an AI that does not seem to be justified by its training data, either because that data is insufficient, biased, or too specialised. For example, a hallucinating chatbot with no training data regarding Tesla's revenue might internally generate a random number (such as "$13.6 billion") that the algorithm ranks with high confidence, and then go on to falsely and repeatedly state that Tesla's revenue is $13.6 billion, with no indication that the figure was a product of the weakness of its generation algorithm.
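The mechanism described above can be illustrated with a toy sketch: a softmax over output scores always produces a probability distribution, so a model can report high "confidence" in whichever candidate happens to score highest, even when none of the candidates is grounded in training data. All logits and candidate figures below are invented for illustration.

```python
import math

# Hypothetical logits a model's output layer might assign to candidate
# completions of "Tesla's revenue is ...". None of these figures is
# grounded in any data; the numbers here are made up for the sketch.
logits = {"$13.6 billion": 4.2, "$9.1 billion": 1.0, "$20.3 billion": 0.5}

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    exps = {k: math.exp(v) for k, v in scores.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}

probs = softmax(logits)
best = max(probs, key=probs.get)

# The softmax reports ~94% "confidence" in "$13.6 billion" purely
# because its logit is largest -- confidence here measures nothing
# about factual grounding.
print(best, round(probs[best], 3))
```

The point of the sketch is that the probability attached to the top candidate reflects only the relative sizes of the logits, not whether any supporting evidence existed in the training data.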
Such phenomena are termed "hallucinations", in analogy with the phenomenon of hallucination in human psychology. While a human hallucination is a percept that cannot sensibly be associated with the portion of the external world the human is currently observing with their sense organs, an AI hallucination is instead a confident response by an AI that cannot be grounded in any of its training data. Some researchers oppose the term because it conflates the human concept with the significantly different AI concept.
AI hallucination gained prominence around 2022 alongside the rollout of certain large language models (LLMs) such as ChatGPT. Users complained that such bots often seemed to "sociopathically" and pointlessly embed plausible-sounding random falsehoods within their generated content. By 2023, analysts considered frequent hallucination to be a major problem in LLM technology.