Tag: Hallucinations (1 article)

Extrinsic Hallucinations in LLMs

This article explores the phenomenon of extrinsic hallucinations in large language models, analyzing their causes and detection methods, proposing effective strategies to reduce hallucinations, and emphasizing the risks of knowledge updates.

Lilian Weng · Sun, 07 Jul 2024 00:00:00 +0000