Frozen Truth: South Korea’s 123 Days and the Lessons of AI
In the 123 days from early December 2024, when the president declared martial law, through the National Assembly’s motion for impeachment, to the Constitutional Court’s final decision, South Korea was engulfed in chaos and legal uncertainty. Amid public anxiety, rampant speculation, social tension, and economic recession, the political atmosphere grew bitterly cold. During this ambiguous period in modern Korean history, one question echoed quietly but powerfully throughout society: “We have stopped, but what is happening around us?”
While Korea was absorbed in internal debates over the role of its democratic institutions and questions of legal interpretation, global artificial intelligence (AI) made historic and groundbreaking advances. OpenAI announced a language model with significantly improved multimodal capabilities, Europe led global regulation by enacting the ‘AI Act’, and many countries, including China, rapidly established national-level AI governance systems. While others pressed ahead, actively deploying AI in industrial automation and policymaking, Korea remained caught in internal disputes and social polarization, unable to move forward. Yet this very standstill gave our society a chance to fundamentally re-examine its institutional assumptions and to explore how civic systems might be designed to restore truth and reclaim the future.
As every AI researcher knows, large language models exhibit ‘hallucination’: they generate information that departs from the truth, fluently and without hesitation. This is not because a model intends to deceive anyone; it is a consequence of predicting the most likely next words given its training data. The human mind shows a similar phenomenon under stress, known as ‘confabulation’. When people face trauma, uncertainty, or conflicting information, they psychologically reconstruct memories into a form they can make sense of, even when those memories are not grounded in fact.
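To see why this is a property of the mechanism rather than of intent, consider the toy sketch below: a tiny next-word predictor built from a handful of made-up sentences (the corpus, names, and bigram approach are purely illustrative, nothing like a real language model). Because it only ever chooses the statistically most likely continuation, it fluently splices fragments together into a confident statement that its own ‘training data’ contradicts.

```python
# Minimal sketch (not a real LLM): a toy bigram next-word predictor.
# All sentences below are invented; the point is that pure likelihood
# maximization produces fluent statements with no regard for factuality.
from collections import Counter, defaultdict

corpus = [
    "the verdict was announced in april",
    "the decision was announced in april",
    "the motion was filed in december",
]

# Estimate P(next word | current word) from raw co-occurrence counts.
bigrams = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        bigrams[current][nxt] += 1

def complete(prompt, max_words=6):
    """Greedily append the most probable next word, regardless of truth."""
    words = prompt.split()
    for _ in range(max_words):
        candidates = bigrams.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])  # highest count wins
    return " ".join(words)

# Prints "the motion was announced in april" — a fluent splice of frequent
# transitions that contradicts the corpus, where the motion was filed in december.
print(complete("the motion was"))
```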
During South Korea’s 123 days, polarized interpretations spread rapidly. Some questioned the legitimacy of the impeachment process, while others questioned the legal basis of the presidency itself. Social media amplified these tensions, and although the Constitutional Court eventually delivered its ruling, public opinion was already firmly divided by then. Objective facts had to compete with narratives that were emotionally more persuasive. AI hallucination and human confabulation have different origins but share a common danger: both can create realities that feel more real than the truth itself.
The AI research community is making significant progress in reducing hallucinations, and these efforts offer useful insights for managing social truth as well. Guiding a model to reason through a problem step by step improves accuracy and consistency (Chain-of-Thought Prompting); grounding a model’s output in verified external databases anchors it in facts (Retrieval-Augmented Generation); training a model to avoid overconfidence and to express its uncertainty clearly increases reliability (Calibration); and deliberately stress-testing a model with extreme inputs exposes vulnerabilities and strengthens the robustness of the whole system (Adversarial Testing). These are not merely technical techniques; they embody a philosophy: the goal of intelligence is not simply to create plausible stories, but to rest on ‘verifiable reasoning’.
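To make one of these ideas concrete, the sketch below shows how calibration is commonly measured: predictions are grouped by the model’s stated confidence, and the gap between that confidence and actual accuracy is averaged, a quantity known as Expected Calibration Error. The predictions here are invented for illustration; this is a minimal sketch under those assumptions, not the evaluation code of any particular system.

```python
# Minimal sketch of confidence calibration measurement (Expected Calibration
# Error, ECE). A well-calibrated model's stated confidence should match its
# observed accuracy; a large ECE signals overconfidence.

def expected_calibration_error(confidences, correct, num_bins=5):
    """Average |accuracy - confidence| per confidence bin, weighted by bin size."""
    bins = [[] for _ in range(num_bins)]
    for conf, ok in zip(confidences, correct):
        index = min(int(conf * num_bins), num_bins - 1)  # assign to a confidence bin
        bins[index].append((conf, ok))
    ece = 0.0
    total = len(confidences)
    for bucket in bins:
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(1 for _, ok in bucket if ok) / len(bucket)
        ece += (len(bucket) / total) * abs(accuracy - avg_conf)
    return ece

# Made-up answers: the model claims ~90% confidence but is right only half the time.
confidences = [0.9, 0.95, 0.9, 0.85, 0.9, 0.95]
correct = [True, False, True, False, False, True]
print(f"ECE = {expected_calibration_error(confidences, correct):.2f}")  # ~0.41: overconfident
```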
If machine errors can be reduced through design, could human cognitive biases not be managed in a similar way? Yes, they can. We can strengthen ‘civic memory’ through institutional design that supports collective reasoning. Drawing on insights from AI research, we can consider four principles. First, public institutions must transparently explain their decision-making processes: for judgments, policy changes, and institutional reforms, not only the outcomes but also the reasoning behind them should be disclosed as far as possible (inducing a chain of civic thought). Second, digital archives, public testimonies, chronologies, and multimedia records accessible to everyone should be systematically maintained (establishing a memory retrieval system). Third, education should cultivate epistemic humility, teaching not only ‘what we know’ but also ‘how confident we should be’ (confidence calibration education). Fourth, public discourse needs mechanisms for the structured review and critique of public narratives; democracy grows through organized dissent (operating a collective red team). These principles are not abstract concepts; they can serve as a practical blueprint for restoring civic awareness.
Korea has the capability to lead in AI technology. But to become a true leader, it needs a social and institutional foundation that enables collective thinking even amid uncertainty. To that end, I propose the following vision: (1) Establish a National Memory Observatory: a public platform that uses AI to track how false information spreads and how collective memory is distorted. (2) Introduce Cognitive Health Indicators: regular measurement and management of public trust, the accuracy of widely held beliefs, and the degree of social polarization, alongside economic and social indicators. (3) Operate a Conversational Civic AI System: strengthen civic education and public discourse with large language models grounded in national judicial, historical, and administrative data. (4) Create National Rituals for Reflecting on Memory: interactive events and multi-perspective digital platforms, built with AI tools, that enable critical engagement with historical events. These efforts are not optional. In the digital age, “memory” is the epistemic infrastructure of a nation.