
The AI Race, Diplomacy, and Do We Really Need to Care?
“Koreans are not just the early adopters of ChatGPT, but they are also shaping how it is used across the world,” Jason Kwon, OpenAI’s chief strategy officer, remarked.
This comment was made in Kwon’s interview with Yonhap News on the establishment of an OpenAI subsidiary in Korea. It is only natural that the country with the second-largest number of paid ChatGPT subscribers would be the next target in OpenAI’s plans to branch out. I myself pay $20 per month for a ChatGPT Plus account, and I don’t think I have gone a day without using it to check for errors in my coding assignments or to do quick research. Since the launch of ChatGPT in 2022, we have seen generative AI evolve at incredible speed. According to the Ministry of Science and ICT, 60.3% of internet users in Korea used AI services in 2024. It is clear that generative AI is here to stay. In that sense, the race for global AI dominance between countries does not come as a surprise. PwC estimates that AI will contribute over $15 trillion to global GDP by 2030, effectively meaning that governance over the AI market equates to power over the general flow of technology and trade.
The release of the DeepSeek-R1 generative AI chatbot by the Chinese AI company DeepSeek was thus an alarming wake-up call for the US government and its AI giants. The Chinese startup had created a model that mirrored the capabilities of US ones despite US export restrictions on the high-end semiconductors believed to be central to AI development. It became clear that US trade bans were simply not enough to maintain a competitive edge. Indeed, DeepSeek-R1 challenged US hegemony by opening the possibility for low-income states to pursue their own AI research and application agendas through a non-US alternative. The competition has intensified: both countries are increasingly pursuing policy enterprises centered on securing and exporting their AI models. At no point has AI diplomacy been as crucial as it is now.
But what does this all mean for us ordinary users of AI? Why should we care about who is behind the technology we use?
It turns out that AI diplomacy can intrude on our daily lives in very real ways, and the dangers it poses are quite terrifying.
Feeding and Getting Fed: The dangers of generative AI
Many analysts have compared soft-power diplomacy and competition for AI dominance to the Cold War arms race. The eagerness of the Chinese and US governments to pitch their models abroad does indeed bear a resemblance to the rush to establish strong nuclear bases across the world in the 1960s. The differences, however, are what make the AI race more alarming.
Whereas during the 20th-century Cold War, most, if not all, understood the nuclear arms race as a perilous contest over weapons of mass destruction, what is unsettling about the AI race is that nobody really thinks of generative AI models as weapons. Despite casual mentions of privacy concerns and the surface-level fear of AI tools “listening in” on us, AI is typically viewed in a positive light by its day-to-day users. We are vaguely aware of its potential issues, but not fully conscious of what they entail. Yet this lack of clear recognition that AI is a weapon is essentially what makes users unknowing participants in the AI arms race. It is precisely what makes AI diplomacy insidious: countries are able to present AI models as simple productivity tools in order to export them to foreign populations, thereby creating a network of data collection that makes the technology more powerful. Regardless of whether users are using privatized American systems or state-sponsored Chinese ones, every time they interact with an AI model, they give it new information with which to retrain itself. If each of the 9.6 million inhabitants of Seoul interacts with a model just once a day, that is 3.504 billion new pieces of information a year. Such a quantity is probably more than enough for an AI model to learn the values, anxieties, and current or future threats of a community. Considering the current capabilities of AI and the pace at which it’s developing, it isn’t hard to imagine a scenario where such data is weaponized by governments and fed into an AI military device. AI diplomacy is thus dangerous because it makes use of day-to-day technology and creates situations where foreign populations unconsciously supply systems with power.
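The arithmetic behind that Seoul figure is simple enough to check for yourself. A quick back-of-the-envelope sketch, assuming one interaction per resident per day over a year:

```python
# Back-of-the-envelope check of the article's estimate.
seoul_population = 9_600_000   # inhabitants of Seoul, as cited in the article
interactions_per_day = 1       # assumed: one AI interaction per person per day
days_per_year = 365

yearly_interactions = seoul_population * interactions_per_day * days_per_year
print(f"{yearly_interactions:,}")  # 3,504,000,000 -- about 3.5 billion a year
```

And that is a deliberately conservative assumption; heavy users interact with these tools dozens of times a day.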

An image generated using Gemini from the prompt “ChatGPT spoon-feeding people info”
The dangers of AI diplomacy don’t only concern users spoon-feeding AI models information. Have you ever thought about what information your AI model is feeding you? When DeepSeek-R1 was first launched, users were quick to discover that the Chinese model refuses to comment on topics considered taboo by the Chinese government, namely the Tiananmen Square massacre and the Uyghur Muslims. When asked about Taiwan, it responds with the well-known government narrative: “Taiwan is an inalienable part of China.” Such self-censorship does not come as a surprise. However, we need to think about its broader implications. Through AI diplomacy, China and the US are trying to integrate their AI models into state infrastructure abroad. The most pressing concern is that as foreign governments adopt these models in key projects and public services, they open the door to the subtle, gradual assimilation of either US or Chinese socio-political values into their societies, as AI outputs inevitably encode the cultural and political values of their creators. The US and China are not unaware of this. OpenAI’s international expansion project and DeepSeek’s open-source nature are both stark displays of such efforts; as a Korean subsidiary of OpenAI is established and more Korean tech companies, such as Naver or Kakao, collaborate with OpenAI, Korean society may become even more infused with US and Western political narratives than it already is.
This might not seem that imminent an issue. Don’t we just need to avoid asking AI questions about politics? But imagine you’re asking for a briefing on today’s news, and your AI tool covertly gives you only articles from specific, predefined far-right sources. Or maybe you’re asking for movie recommendations for your Chinese history project, and your AI tool only suggests films praising the CCP and its achievements. When considering such practices, DeepSeek comes to mind easily because we can distinctly imagine China’s centralized control system. These unnerving instances, however, are not absent even when the technology comes from Western democracies. Simply look back at the 2018 Cambridge Analytica-Facebook scandal for reference. Ultimately, the threat of AI diplomacy is that no matter whose model you are using, the technology can and will feed you bias, giving its creators the power to manipulate public and private beliefs. At a societal level, this could shape how a community thinks, acts, and perhaps even governs.
The AI race is about dominance over technology that is already deeply embedded in our everyday lives and will only grow more present in them. What makes this especially sinister is the inherent opacity of the entire process. In many cases, we don’t even realize that we’ve become a part of this tech race, simply by using the AI tools available to us.
So, where does that leave us? We can’t realistically live in an AI-free world when AI has become so intrinsic to our civic and state technologies, and it’s true that it has brought unprecedented progress in areas such as healthcare and research. However, considering the insidious consequences of the AI race, blind acceptance is not the answer either. As a generation that actively uses AI, it’s imperative that we remain cognizant of the threats of generative AI and of how these tools can be used by foreign and domestic governments alike to influence our belief systems. Enhance your media literacy. And by all means, use ChatGPT; just try not to let ChatGPT use you.