Hallucination (artificial intelligence)
In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called confabulation[1] or delusion[2]) is a response generated by AI that contains false or misleading information presented as fact.[3][4][5] The term draws a loose analogy with human psychology, where a hallucination typically involves false percepts; AI hallucination, by contrast, involves unjustified responses or beliefs rather than perceptual experiences.[5]
For example, a chatbot powered by large language models (LLMs), like ChatGPT, may embed plausible-sounding random falsehoods within its generated content. By 2023, analysts estimated that chatbots hallucinated as much as 27% of the time, with factual errors present in 46% of their responses. Detecting and mitigating these hallucinations pose significant challenges for the practical deployment and reliability of LLMs in real-world scenarios.[6][7][8] Some researchers believe the specific term "AI hallucination" unreasonably anthropomorphizes computers.[1]