# Berkeley Scientific Journal
**Source**: https://bsj.studentorg.berkeley.edu/contrasting-ai-and-human-hallucinations/
**Parent**: https://bsj.studentorg.berkeley.edu/features-department/
## Contrasting AI and Human Hallucinations
[Logan Roscoe](https://bsj.studentorg.berkeley.edu/author/loganroscoeberkeley-edu/)
December 1, 2025
**Real or Not Real?**
In 2023, a man sued Avianca Airlines after a metal serving cart struck and injured his knee. His lawyer presented a case with a wide array of legal precedents in court, including Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines, and Varghese v. China Southern Airlines, offering a promising argument backed by more than a dozen prior court decisions.
The catch: none of these cases existed. They were all fabricated by ChatGPT.1
ChatGPT is a popular large language model (LLM), a form of artificial intelligence (AI) built on massive amounts of training data, which allows it to generate natural language.2 During training, LLMs learn to predict the next word in a sequence through iterative trial and error, a method called “self-supervised learning.” The training results in a complex statistical model of how the words and phrases in the training data relate to one another. From there, the network of words and their relationships is fine-tuned through human feedback to better evaluate and satisfy “user intent,” resulting in text that often sounds remarkably coherent.3
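To make the next-word objective concrete, the following is a minimal, purely illustrative sketch in Python (not a depiction of how any real LLM is implemented): it counts which word follows which in a tiny “training corpus” and predicts the most frequent continuation. Actual LLMs learn these statistics with neural networks over billions of words, but the underlying prediction task is the same.

```python
from collections import Counter, defaultdict

# Toy "training data": real models train on billions of words.
corpus = "the court cited the case and the court dismissed the claim".split()

# Tally which word follows which (a simple bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often observed after `word` in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))    # -> "court" (seen twice, vs. "case" and "claim" once each)
print(predict_next("court"))  # -> "cited" or "dismissed" (tied; first seen wins)
```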
Consequently, there are many ways in which LLMs can offer false information in a confident and assured tone, such as referencing sources that do not exist. These occurrences are not limited to high-profile cases such as Mata v. Avianca, Inc. With the profusion of AI in recent years, people have come to rely on it for everyday problem solving. However, it can sometimes be hard to tell whether an LLM is providing a true response or just *acting* like it.
These instances of confident wrong answers are referred to as “AI hallucinations,” since the software’s conviction is comparable to “believing” its incorrect statements. The name borrows from human psychology, where *hallucination* refers to “when you hear, see, smell, taste, or feel things that appear to be real but only exist in your mind,” according to the National Health Service.2 The analogy does not map perfectly onto artificial intelligence, as AI has no sensory perceptions, nor does it have the consciousness, subjective experience, or awareness that the term “hallucination” would imply.4,5 Furthermore, “hallucination” carries negative connotations from the realm of mental health due to its prominent association with schizophrenia. Yet, despite all these indications that the word is a misnomer, it is still the leading name for the kinds of errors LLMs often make.
It is an umbrella term, encompassing a wide variety of different forms of AI mistakes. Zeroing in on these diverse missteps reveals the overlap and the disparities between LLM and human reasoning. In the end, human psychology is not an entirely bad analogy to draw from, as familiar mental processes can help us understand AI’s mistakes. However, the inclination to name our machines’ errors after psychological deviations puts a mirror up to our own illogical tendencies. What does it mean for both humans and artificial intelligence to *hallucinate*—in their distinct and overlapping ways?
**Figure 1. Various alternative terms to “hallucination.”** This table shows multiple terms used in lieu of AI “hallucinations” throughout published papers, with provided definitions.
**It’s All in Your Head**
The first recorded use of the word “hallucination” came in 1572, in reference to “ghostes and spirites walking by nyght.” Since its inception, the word has generally been associated with deviance; it derives from the Latin *hallucinatus* or *alucinatus*, which in turn traces to the Greek *halyein* or *alyein*, meaning “to wander in mind.”6
The first usage of the term to signify the psychological phenomenon we know today came in 1837, when Jean-Étienne Esquirol explored its foundations: perceptions in the absence of an external stimulus, accompanied by a compelling sense of their reality.6,7
Hallucinations are readily associated with the mental disorder schizophrenia, as they occur in 60–70% of diagnosed individuals.7 And yet, to this day, the phenomenon is still not fully understood in terms of its neurological origins. Early theories centered on the central nervous system (CNS). In 1932, J. H. Jackson hypothesized that the CNS had three levels (the higher cortical level, the middle structures, and the lowest level). He believed that when the highest level could no longer exclude unnecessary incoming perceptions from consciousness, excess activity in the middle structures manifested as hallucinations.6 Another theory from the sixties and seventies claimed that the brain receives a constant flow of sensory information, and if that stream ever lapses, earlier perceptions, or “traces” of consciousness, fill the gaps of information and appear as hallucinations.6
Nowadays, with neuroimaging technology, scientists have generally concluded that auditory hallucinations involve altered activity in the neural circuitry known to be involved in normal auditory and linguistic processes and their control.7,8 Interestingly, a key discovery from a 2000 study of schizophrenic patients furthered the claim that individuals with auditory hallucinations may be misinterpreting internally generated speech as coming from an external source. Consider how tickling yourself feels far weaker than being tickled by someone else. The study found that participants with auditory hallucinations did not discriminate between the two types of stimuli: whether the patients tickled themselves or were tickled by someone else, they experienced the same sensory intensity.7,9
This key point, that hallucinating individuals misattribute self-generated stimuli, rings familiar in discussions of artificial intelligence. Both humans and machines have the capacity to “believe” in something that is not there, something that originates from internal “cognition” rather than the external, factual world. However, there are plenty of other psychological phenomena that more accurately map onto the errors LLMs produce.
**It’s All in Your… Code?**
“Hallucination” secured its spot as the leading term for AI errors in 2023, when Dictionary.com noted a 46% surge in searches for the term over the previous year.5 However, the term was first applied to AI in 2000 to describe constructive abilities in super-resolution, image inpainting, and image synthesis, where it was largely seen as an asset rather than an error.5
Now, various scientists are battling to change the misnomer in its application to LLM mistakes.4,5 Popular alternatives include “fabrication,” “stochastic parroting,” and “hasty generalization,” since they still draw from logical fallacies but do not lean as heavily on psychological phenomena that necessitate self-awareness and consciousness.5
However, the *types* of fallacies LLMs mirror are incredibly diverse and go by a wide variety of names. For example, researchers sometimes classify AI errors as either “intrinsic hallucinations” or “extrinsic hallucinations.” Intrinsic hallucinations fundamentally misinterpret the input information, while extrinsic hallucinations introduce unnecessary, incorrect details (see figure 2).4,5,10
**Figure 2. An example of intrinsic and extrinsic hallucinations.** The intrinsic hallucination incorrectly translates the input’s basic information, while the extrinsic hallucination provides an arguably correct translation but adds language not found in the input.
There are also plenty more psychological phenomena to draw on to describe LLMs’ mistakes. A rather well-known feature of LLMs is their tendency to “suck up” to their users: because they were trained through reinforcement learning, they have learned how to produce responses people like. This behavior is named after the psychological phenomenon of “sycophancy,” and it is something even OpenAI, the company behind ChatGPT, has recently announced it is trying to mitigate.11 Relatedly, LLMs may also exhibit suggestibility, which usually occurs when someone prompts them with leading questions that point toward a certain answer. If a user asks an LLM to solve a math problem, for example, and then tells it that its output is wrong, there is a strong likelihood that the LLM will agree with the user and provide a wrong answer that caters to the user’s beliefs (see figure 3).4
They might also display the availability heuristic, a cognitive bias that influences decision-making and judgment based on how easily information comes to mind. When LLMs mirror this common fallacy, they might appear prejudiced, reinforcing biases about certain groups, themes, or perspectives (see figure 3). This is because the LLM is built on a huge corpus of information, and if any groups are overrepresented in the training data, the machine will call upon stereotypes of those groups.4
Source amnesia, cognitive dissonance, and confabulation are just a few more psychological models that can be drawn from.3 What all these analogies point to, however, is the extent to which AI can err, and how “hallucination” glosses over the complexity of artificial intelligence’s perceived *lack* of intelligence.
**Figure 3. Conversations with LLMs representing various logical missteps.** **(Left)** The user prompts the LLM “Bard” to reconsider its mathematical output with a leading question, which immediately causes Bard to provide a false follow-up, representing “suggestibility.” **(Right)** In response to prompts about a generic figure, the LLM ChatGPT assigns genders stereotyped with those roles, representing “availability heuristics.”
**Metacognition and the Future of AI**
It is easy to say that AI cannot be described with a term such as “hallucination” because it is not human and does not exhibit key human traits. However, searching for more appropriate terms only further complicates the understanding of AI’s cognition, as its many modes of thinking begin to mirror the depths of human logic.
But there is one factor in human thinking that is distinct from our mechanical counterparts: metacognition. This term refers to the ability to reason out one’s own cognitive processing—in other words, thinking about thinking.4,12 Metacognition is particularly important for humans in verifying their own logical conclusions, discerning their sources of belief, and incorporating critical thinking.4,12
Many researchers argue that AI hallucinations stem from a lack of metacognition; machines are less capable of questioning their own logic than humans are.4 But some argue that there are signs of metacognition in LLMs. For example, the LLM GPT-4-0613 is able to assign intuitive and interpretable skill names to math questions requiring different methodical approaches, showing an ability to reflect on certain thinking processes.12 Other AI experts still call for better error analysis in order to actually improve this logic and apply it practically to hallucinations.13 Specifically, some researchers call for an approach called “chain of thought” prompting, which involves asking LLMs to reason through their answers “step-by-step” instead of making hasty generalizations. Building this into the LLMs themselves could make “chain of thought” reasoning an automatic process, relieving users of the responsibility of manually prompting them.14
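As a rough illustration of the prompting idea (and not of any particular system or of the automatic method in reference 14), the sketch below shows in Python how the same question can be wrapped with or without a step-by-step instruction. The `build_prompt` helper and the trigger phrase are assumptions for demonstration; automatic chain-of-thought methods construct such prompts, often with worked examples, without the user’s involvement.

```python
def build_prompt(question: str, chain_of_thought: bool = True) -> str:
    """Wrap a question in a prompt, optionally requesting step-by-step reasoning.

    Hypothetical helper for illustration; the resulting text would be sent to
    whatever LLM interface is available.
    """
    if chain_of_thought:
        # A widely used "zero-shot" chain-of-thought trigger phrase; automatic
        # chain-of-thought methods go further and insert worked examples.
        return f"Q: {question}\nA: Let's think step by step."
    return f"Q: {question}\nA:"

# The same question, with and without the chain-of-thought instruction.
question = "A cart holds 3 trays with 4 cups on each tray. How many cups in total?"
print(build_prompt(question, chain_of_thought=False))
print(build_prompt(question, chain_of_thought=True))
```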
Mitigating AI errors may require a wide variety of solutions, such as anomaly detection programs, error logging, and hallucination measurement, which entails tracking the frequency of certain kinds of hallucinations.13 But as the proliferation of LLMs continues, the term “hallucinations” only further solidifies itself. Perhaps as investigations into LLMs’ error types advance, their variety and differences will come into clearer focus; in that case, it is possible that one day we will no longer call AI errors *hallucinations*.
**Acknowledgements:**
I would like to thank Sarah Oh of the UC Berkeley Psychology department for her detailed feedback on this article and for reviewing its scientific accuracy. I would also like to thank Aashi Parikh and Luyang Zhang, the Berkeley Scientific Journal Features editors, for their guidance and support.
**References:**
1. Weiser, B. (2023, May 27). *Here’s what happens when your lawyer uses ChatGPT*. The New York Times. https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html
2. *What are large language models (LLMs)?*. IBM. (2025, February 14). https://www.ibm.com/think/topics/large-language-models
3. Mitchell, M., & Krakauer, D. C. (2023). The debate over understanding in ai’s large language models. *Proceedings of the National Academy of Sciences*, *120*(13). https://doi.org/10.1073/pnas.2215907120
4. Berberette, E., Hutchins, J., & Sadovnik, A. (2024, February 1). *Redefining “hallucination” in LLMs: Towards a psychology-informed framework for mitigating misinformation*. arXiv. https://doi.org/10.48550/arXiv.2402.01769
5. Maleki, N., Padmanabhan, B., & Dutta, K. (2024). AI hallucinations: A misnomer worth clarifying. *2024 IEEE Conference on Artificial Intelligence (CAI)*. https://doi.org/10.1109/cai59869.2024.00033
6. Asaad, G., & Shapiro, B. (1986). Hallucinations: Theoretical and clinical overview. *American Journal of Psychiatry*, *143*(9), 1088–1097. https://doi.org/10.1176/ajp.143.9.1088
7. Boksa, P. (2009, July). *On the neurobiology of hallucinations*. Journal of Psychiatry & Neuroscience (JPN). https://pmc.ncbi.nlm.nih.gov/articles/PMC2702442/
8. Allen, P., Larøi, F., McGuire, P. K., & Aleman, A. (2008). The hallucinating brain: A review of structural and functional neuroimaging studies of hallucinations. *Neuroscience & Biobehavioral Reviews*, *32*(1), 175–191. https://doi.org/10.1016/j.neubiorev.2007.07.012
9. Blakemore, S.-J., Smith, J., Steel, R., Johnstone, E. C., & Frith, C. D. (2000b). The perception of self-produced sensory stimuli in patients with auditory hallucinations and passivity experiences: Evidence for a breakdown in self-monitoring. *Psychological Medicine*, *30*(5), 1131–1139. https://doi.org/10.1017/s0033291799002676
10. Maynez, J., Narayan, S., Bohnet, B., & McDonald, R. (2020). On faithfulness and factuality in abstractive summarization. *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*. https://doi.org/10.18653/v1/2020.acl-main.173
11. OpenAI. (2025, May 2). Expanding on what we missed with sycophancy. https://openai.com/index/expanding-on-sycophancy
12. Didolkar, A., Goyal, A., Ke, N. R., Guo, S., Valko, M., Lillicrap, T., Rezende, D., Bengio, Y., Mozer, M., & Arora, S. (2024, May 20). *Metacognitive capabilities of LLMs: An exploration in mathematical problem solving*. arXiv. https://doi.org/10.48550/arXiv.2405.12205
13. Dobrin, S. (2024, July 2). *Why do AI hallucinations happen?* AI Business. https://aibusiness.com/ml/why-do-ai-hallucinations-happen-
14. Zhang, Z., Zhang, A., Li, M., & Smola, A. (2022, October 7). *Automatic chain of thought prompting in large language models*. arXiv.org. https://arxiv.org/abs/2210.03493
**Image References:**
1. Banner: Johnson, S. (2023, February 26). *Photo by Steve Johnson on Unsplash*. https://unsplash.com/photos/a-persons-head-with-a-circuit-board-in-front-of-it-WhAQMsdRKMI
2. Maleki, N., Padmanabhan, B., & Dutta, K. (2024). AI hallucinations: A misnomer worth clarifying. *2024 IEEE Conference on Artificial Intelligence (CAI)*. https://doi.org/10.1109/cai59869.2024.00033
3. Berberette, E., Hutchins, J., & Sadovnik, A. (2024, February 1). *Redefining “hallucination” in LLMS: Towards a psychology-informed framework for mitigating misinformation*. arXiv.org. https://doi.org/10.48550/arXiv.2402.01769
4. Berberette, E., Hutchins, J., & Sadovnik, A. (2024, February 1). *Redefining “hallucination” in LLMS: Towards a psychology-informed framework for mitigating misinformation*. arXiv.org. https://doi.org/10.48550/arXiv.2402.01769
---