Simulating the Past: Ethics of AI Historical Chatbots
Artificial intelligence has introduced a new form of historical mediation: interactive chatbots that simulate real people from the past. In educational platforms, museums, and digital archives, users can now “converse” with scientists, writers, and political figures whose voices are reconstructed from letters, memoirs, and archival materials. This development raises a fundamental question: where is the boundary between historical interpretation and digital reconstruction?
A chatbot is, technically, a system designed to simulate human conversation by processing written or spoken input and generating responses. In the context of historical figures, such systems serve as tools for digital education, enabling engagement with the past through dialogue rather than static narration (Matić et al. 2019; Bouras et al. 2023). Yet this apparent immediacy conceals a deeper epistemological problem.
Historical simulations cannot reproduce authentic subjectivity. Available data are fragmentary, historical sources are incomplete, and personal experiences often remain undocumented. Artificial intelligence, therefore, constructs a filtered projection rather than an authentic historical voice (Adamopoulou & Moussiades 2020). When such reconstructions are presented through conversational interfaces, they may appear more authoritative than they truly are.
Authenticity and the illusion of presence
The ambition to recreate historical personalities through AI emerged partly as a reaction to the lack of authenticity in existing digital simulations. Some widely used platforms allow users to “talk” with figures such as Mozart or Aristotle. Yet these simulations frequently produce anachronistic responses, modern vocabulary, or opinions about events that occurred long after the individual’s death. Such inconsistencies reveal a structural limitation: most systems rely on generic public data rather than verified historical sources (Nafis et al. 2021).
This problem extends beyond technical inaccuracy. AI models often fabricate statements, attribute ideas never expressed, and simplify complex personalities into idealized figures. Political and social contradictions disappear, controversial positions are omitted, and historical figures are transformed into neutral, easily consumable identities. In some cases, simulations even become commercial tools detached from their cultural context.
These tendencies highlight a deeper issue: the transformation of historical identity into digital performance.
Digital resurrection and its ethical implications
The emergence of generative AI has intensified the idea of “digital resurrection,” the attempt to recreate historical individuals as interactive entities capable of conversation. While legal frameworks often permit the use of a person’s likeness after copyright expiration, ethical responsibility does not disappear with legal permission (Rodríguez Reséndiz & Rodríguez Reséndiz 2024).
The simulation of a historical figure is not merely a technical act. It shapes public perception of truth, identity, and cultural memory. When AI reconstructions are presented through realistic voices and conversational interaction, audiences may struggle to distinguish between documentation and interpretation. This risk becomes particularly pronounced in educational contexts, where digital tools can influence historical understanding.
Legal systems attempt to regulate certain aspects of these practices, particularly through publicity rights and copyright law. In some jurisdictions, the use of a deceased person’s likeness may remain restricted for decades, especially in commercial contexts (Hopkins 2023). Yet legal permission does not guarantee ethical legitimacy.
The central challenge is not whether AI can simulate a historical figure, but whether such a simulation respects historical truth and cultural dignity.
From legal permission to moral responsibility
The expiration of copyright allows unrestricted use of historical works, but it does not resolve the ethical dilemmas associated with digital reconstructions. Transforming a historical personality into a conversational entity may alter the symbolic meaning that the individual holds within collective memory.
In educational settings, this transformation carries particular weight. Students may encounter historical figures through AI interfaces rather than primary sources, forming impressions shaped by algorithmic interpretation rather than documented evidence. Without transparency, such tools risk contributing to myth-making and unintentional historical revisionism.
An ethical approach requires responsibility toward truth, cultural heritage, and the integrity of the simulated individual. Transparency about sources, interpretive frameworks, and technological limitations becomes essential. AI simulations must not present themselves as authentic voices of the past but as mediated reconstructions grounded in historical research.
Principles for ethical simulations
As AI technologies evolve, the need for clear standards becomes urgent. Ethical simulations must be based on verified sources, respect historical context, and avoid anachronistic interpretations. The methodology behind the system—including datasets, engineering techniques, and human oversight—should remain transparent and open to review (Haneman 2024; Hutson et al. 2023).
Interdisciplinary collaboration is equally important. Historians, linguists, legal experts, and technologists must jointly shape the development process to ensure credibility and responsibility. Simulations should be clearly labeled as digital reconstructions rather than “revived” identities, allowing users to understand the limits of representation.
Technically, responsible models rely on verified corpora, metadata documentation, and retrieval-based architectures that minimize fabrication. Human oversight remains necessary to evaluate outputs and correct distortions (NIST 2023; ISO 2023).
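The retrieval-based approach described above can be sketched in miniature: the system answers only from a corpus of verified passages and declines, rather than fabricates, when no source matches. The corpus entries, the word-overlap scoring, and the refusal threshold below are illustrative placeholders, not the architecture of any specific project.

```python
# Minimal retrieval-grounded response sketch: answers are drawn only from
# verified source passages; when nothing matches, the system refuses
# rather than inventing a statement. All corpus data is illustrative.

def tokenize(text: str) -> set[str]:
    """Lowercase word set, with surrounding punctuation stripped."""
    return {w.strip(".,;:!?\"'()") for w in text.lower().split()}

def retrieve(question: str, corpus: list[dict], min_overlap: int = 2):
    """Return the best-matching passage, or None if overlap is too weak."""
    q = tokenize(question)
    best, best_score = None, 0
    for passage in corpus:
        score = len(q & tokenize(passage["text"]))
        if score > best_score:
            best, best_score = passage, score
    return best if best_score >= min_overlap else None

def answer(question: str, corpus: list[dict]) -> str:
    passage = retrieve(question, corpus)
    if passage is None:
        # Refusal instead of fabrication: the key anti-hallucination step.
        return "No verified source addresses this question."
    return f'According to {passage["source"]}: "{passage["text"]}"'

corpus = [
    {"source": "letter, 1899 (illustrative entry)",
     "text": "My experiments with high frequency currents continue in Colorado Springs."},
]

print(answer("Tell me about your experiments with high frequency currents", corpus))
print(answer("What do you think of smartphones?", corpus))
```

Production systems replace the toy overlap score with embedding-based retrieval, but the design principle is the same: the generator is constrained to cited material, and absence of evidence yields a refusal instead of a guess.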
Case studies: Tesla and Nušić
The development of chatbots based on Nikola Tesla and Branislav Nušić illustrates how AI simulations can function as educational and cultural tools when guided by ethical and methodological principles. These projects relied on verified sources, including letters, autobiographical writings, archival interviews, and scholarly research, while preserving linguistic style and historical context (Škobo & Šović 2025).
In Tesla’s case, particular attention was devoted to the authenticity of language and intellectual perspective. The system was adapted to the multiple languages Tesla spoke and constrained to the temporal boundaries of his lifetime. Expert supervision ensured accuracy and ethical consistency.
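A temporal guardrail of the kind described, keeping the persona within the figure’s lifetime, can be sketched as a simple pre-filter on user questions. The death year, the regex-based year detection, and the placeholder downstream call are illustrative simplifications, not the actual method of the Tesla project.

```python
import re

TESLA_DEATH_YEAR = 1943  # temporal boundary for the simulated persona

def within_lifetime(question: str, death_year: int = TESLA_DEATH_YEAR) -> bool:
    """Reject questions that explicitly mention years after the figure's death.

    A naive pre-filter: a real system would also need to catch named events
    ("the moon landing") via an event list or a classifier.
    """
    years = [int(y) for y in re.findall(r"\b(1[0-9]{3}|20[0-9]{2})\b", question)]
    return all(y <= death_year for y in years)

def generate_answer(question: str) -> str:
    # Placeholder for the downstream, source-grounded generation step.
    return "(answer generated from verified sources)"

def guarded_reply(question: str) -> str:
    if not within_lifetime(question):
        return ("I cannot speak to events after my lifetime; "
                "my knowledge ends in 1943.")
    return generate_answer(question)
```

The point of the sketch is architectural: anachronism is handled as an explicit constraint checked before generation, rather than left to the model’s discretion.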
The Nušić chatbot presented additional challenges. Fragmentary sources, anecdotal material, and the cultural specificity of his humor complicated faithful reconstruction.
Empirical testing revealed how convincing such simulations can be. In a controlled evaluation, AI-generated texts were frequently mistaken for authentic Nušić writings, highlighting both the effectiveness and the risks of digital reconstruction (Škobo & Šović 2025).
These examples illustrate the dual nature of AI simulations: they can enhance engagement and accessibility, but they also create the illusion of authenticity.
Technology, memory, and responsibility
AI simulations of historical figures occupy a space between preservation and interpretation. They can make history more accessible, encourage dialogue, and connect younger audiences with cultural heritage. At the same time, they reshape the way societies encounter the past.
The key issue is not technological sophistication, but responsibility. Developers must acknowledge that digital reconstructions influence collective memory. Ethical safeguards—verified sources, transparency, and interdisciplinary oversight—become essential conditions for legitimacy.
Artificial intelligence can illuminate historical knowledge. It cannot replace historical truth.