AI is Rewriting Holocaust History and Oversimplifying Its Complexity: UNESCO Report

What if the history we know is no longer the history future generations will know? History has always been a sensitive concern in an ever-evolving society, amid deepfakes and Generative AI.
You're reading Entrepreneur India, an international franchise of Entrepreneur Media.
History is a mosaic of stories, each piece contributing to the grand narrative of human existence. From the Industrial Revolution and the Great Depression to the World Wars, independence movements, historic assassinations, and civil wars, these events have combined to shape the world we live in today.
But what if the history we know is no longer the history future generations will know? History has always been a sensitive concern in an ever-evolving society, amid deepfakes and Generative AI.
Mission: Impossible – Dead Reckoning Part One, released in 2023, explored how a sentient AI known as 'The Entity' held the power to alter information available to the public and distort reality in real time. Of course, all this is just fiction and near impossible, right? Maybe not anymore.
A report by the United Nations Educational, Scientific and Cultural Organization (UNESCO) titled 'AI and the Holocaust: rewriting history' warns that AI could distort the historical record of the Holocaust and fuel antisemitism. But how serious is the threat? "If we allow the horrific facts of the Holocaust to be diluted, distorted or falsified through the irresponsible use of AI, we risk the explosive spread of antisemitism and the gradual diminution of our understanding about the causes and consequences of these atrocities," said Audrey Azoulay, Director-General of UNESCO.
What is at risk?
Four in five (80 per cent) young people between the ages of 10 and 24 use AI several times a day for education, entertainment, and other purposes. This calls for ethical and privacy guidelines to ensure younger generations grow up with facts, not fabrications.
Deepfakes and Gen AI videos and audio have surfaced on the internet, which may present a distorted picture to the younger generation. Gab Social, a far-right social media network, allows users to create AI chatbots that impersonate historical and modern political figures, including the Nazi German dictator Adolf Hitler. In one AI-altered video clip, Hitler proclaims that the upcoming war would bring about the "annihilation of the Jewish race in Europe," with the speech translated from German into English.
Harry Potter star Emma Watson was shown reading Hitler's Mein Kampf. Google's Gemini chatbot generated images that put people of colour in Nazi-era uniforms; the tech giant attributed them to "inaccuracies in some historical" depictions.
Such instances risk reviving antisemitism, prejudice against Jewish people. Today, digital technology has a profound impact on how societies and individuals learn about the past.
According to the UN Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, "AI systems can order and manage access to the large volume of information about the past available online, including information about the Holocaust. AI systems used by search engines and social media platforms direct access to content about the Holocaust in response to user queries. When optimized for historical accuracy, this can support the dissemination of knowledge and information about the Holocaust; however, the algorithms used by social media and search engines have been found to prioritize and promote content (including disinformation) that is attention- and engagement-focused, and prone to bias, potentially working against accuracy."
How AI threatens Holocaust history
The report identifies six major challenges and their implications for Holocaust disinformation and antisemitism. AI can be used as a tool to further Holocaust denial and hate, as data flaws in AI design and deployment may end up surfacing or recommending hateful and sometimes violent content. Through jailbreaking techniques, Generative AI systems have also been manipulated into generating Holocaust denial and distortion. Jailbreaking usually takes the form of users purposefully prompting AI systems to generate content in an unethical way. Little can be done to stop this, as many generative AI tools lack filters and guardrails against harmful content.
Such models can generate content that sometimes creates misleading or false narratives. "If not properly supervised, guided, or moderated, AI systems can be especially prone to errors that result in factually incorrect statements about the Holocaust. These errors can sometimes be due to incorrect descriptions of Holocaust materials in data on which an AI model is trained or from which it draws information," states the report.
Hallucination is another major challenge: AI tends to invent things that never existed in the first place. Per the report, "one example of such hallucination on ChatGPT is the concept of the 'Holocaust by drowning,' which assumes that mass murder campaigns took place in which Nazis and their collaborators drowned Jews in rivers and lakes. Although historically no such campaigns took place, AI invented them based on the concept of the Holocaust by bullets – i.e. large-scale murder by shooting."
Data voids can result in the retrieval of content that is irrelevant to the Holocaust.
AI-generated content cannot always be distinguished from human-generated content, even by experts. Such systems allow users to create content about the Holocaust without considering its potential misuse. Fake survivor testimonies or the personal reflections of perpetrators can be generated to make it easy to produce inauthentic materials that look convincing to non-experts. AI might be misused to modify or generate images, audio, or videos, to make it appear as if historical figures or survivors are saying or doing things they did not.
Holocaust denial may not even require actual misuse of technology to cast doubt on the truth about the Holocaust. Rather, the mere societal knowledge that content can be generated or manipulated with AI may increase disbelief in the real evidence of the Holocaust and survivor experiences.
History is complex. While historians and educators are skilled in explaining this complex past to different audiences with ease, research shows that AI-driven systems tend to focus on just a few aspects of the Holocaust. For instance, AI may prioritize images of the liberated camps or encyclopedia-style summaries. Such selective representation of the Holocaust is not new, the report states.
AI can also introduce localized bias around such events. These systems are often programmed and designed to customize their performance for individual users depending on their location or the language they use. "While this customization can often be useful, it can reinforce gaps in understanding complex historical episodes among different social, cultural, and geographic groups," the report added.
Countering AI-amplified disinformation
UNESCO calls for AI designers, policymakers, educators, and researchers to collaborate closely. "Only AI systems equipped with robust safeguards and human rights assessments, coupled with an increased focus on developing digital literacy skills, can uphold the integrity of historical truth and ensure the responsible use of artificial intelligence," the report read.
For policymakers, the aim would be to emphasize principles such as fairness, transparency, accountability and respect for human rights, to ensure that AI systems are designed and used in ways that uphold historical accuracy, dignity and integrity. For education policymakers, there is an urgent need to invest in educational programs that develop digital and AI literacy skills, with a special emphasis on learners' ability to navigate disinformation, prejudice and hate speech. In education, Augmented Reality (AR) and Virtual Reality technologies are increasingly being applied to enable new ways of learning about history, the Holocaust included.
For archives, museums, and memorials, UNESCO suggests digitizing Holocaust-related historical collections to expand the amount of data that can be used for training AI systems and improve their performance. This also includes developing guidelines on what information AI systems should use when retrieving and generating outputs. AI developers and companies should establish procedures that ensure continual evaluation of the quality of training data for AI systems. Lastly, researchers should conduct more empirical research into how AI systems handle information about the Holocaust.