
When AI ‘Fictions’ Redirect History: Generative Models, Historiography and Misinformation
This article investigates the epistemic risks that generative AI poses for historical research and education. It shows how large language models routinely produce hallucinatory outputs (fabricated references, misattributed quotations, and conflated events) delivered with authoritative confidence, potentially misleading scholars and the public alike. Weighing these dangers against AI's affordances, it argues that historians must exercise critical verification, methodological caution, and transparency when using AI in historiography and pedagogy.

