Abstract: With the development of deep learning technologies such as pre-trained models, exemplified by the Transformer, large language models (LLMs) have shown excellent comprehension and creativity. They not only have an important impact on downstream tasks such as abstractive summarization, dialogue generation, machine translation, and data-to-text generation, but also exhibit promising applications in multimodal fields such as image description and visual narratives. While LLMs offer significant performance advantages, deep learning-based LLMs are susceptible to hallucinations, which can degrade system performance and even seriously undermine the trustworthiness and broad applicability of LLMs. The accompanying legal and ethical risks have become the main obstacles to their further development and deployment. Therefore, this survey provides an extensive investigation and technical review of hallucinations in LLMs. First, hallucinations in LLMs are systematically summarized, and their origins and causes are analyzed. Second, a systematic overview of hallucination evaluation and mitigation is provided, in which evaluation and mitigation methods are categorized and thoroughly compared across different tasks. Finally, future challenges and research directions concerning hallucinations in LLMs are discussed from the perspectives of evaluation and mitigation.