Hallucination
Hallucination is an error phenomenon of large language models (LLMs) in which the model states incorrect, false, or entirely fabricated information with high confidence. The problem stems from the probabilistic operation of these models: they do not store facts, only statistical correlations between tokens, so they fill in gaps with plausible-sounding text when precise knowledge is absent. Managing hallucinations (e.g., with retrieval-augmented generation (RAG) or fact-checking) is one of the most critical challenges of enterprise AI adoption, as it directly affects the reliability and credibility of these systems.
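The core idea behind RAG-style mitigation is to ground the model's answer in retrieved source text rather than in its parametric memory. Below is a minimal, illustrative sketch of that grounding pattern: the document store, the word-overlap scoring heuristic, and the prompt wording are assumptions for demonstration only, not a specific product's API or the only way to implement RAG.

```python
# Minimal sketch of RAG-style grounding to reduce hallucination.
# Hypothetical example: the documents, the overlap-based retrieval, and the
# prompt template are illustrative assumptions, not a real library's API.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by simple word overlap with the query (toy retriever)."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_grounded_prompt(query: str, documents: list[str]) -> str:
    """Assemble a prompt that instructs the model to answer only from the
    retrieved context and to admit uncertainty instead of guessing."""
    context = retrieve(query, documents)
    context_block = "\n".join(f"- {c}" for c in context)
    return (
        "Answer the question using only the context below. "
        "If the context does not contain the answer, say you do not know.\n\n"
        f"Context:\n{context_block}\n\nQuestion: {query}\nAnswer:"
    )

if __name__ == "__main__":
    docs = [
        "The company's Q3 revenue was 12.4 million euros.",
        "The support portal is available at support.example.com.",
        "Employees receive 25 days of paid leave per year.",
    ]
    print(build_grounded_prompt("What was the Q3 revenue?", docs))
```

In production systems the toy retriever would typically be replaced by vector search over embeddings, but the principle is the same: the model is asked to stay within supplied evidence, which narrows the space in which it can fabricate an answer.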