China is leveraging generative AI for a disinformation campaign targeting the 2024 Taiwanese presidential election, complicating the already difficult task of discerning truth from propaganda.
China aims to use generative AI to influence the 2024 Taiwanese presidential election, employing sophisticated techniques to manipulate narratives and shape public perception at scale. However, its propaganda fundamentals are hardly unique.
This month, Defense One reported that, according to RAND researchers, China is exploring the use of generative AI tools similar to ChatGPT to manipulate audiences around the world and shape perceptions about Taiwan.
Defense One notes that the prior intentions of the People’s Liberation Army (PLA) and the Chinese Communist Party (CCP) suggest that China would likely target Taiwan’s 2024 presidential election. It also mentions that RAND researchers have been studying the use of technology to alter or manipulate foreign public opinion in crucial target locations since 2005.
The source says China has been operating at a disadvantage in weaponized disinformation because of the Chinese government’s obsession with censorship and its blocking of foreign media channels. It mentions that generative AI tools promise to change this by bridging the cultural gap for the party-state at scale. However, it notes that generative AI’s reliance on massive training data will be a crucial focus for the PLA, whose information warfare researchers have complained about the lack of internal data-sharing.
Defense One says that generative AI tools could help the PLA create large numbers of false personas that appear to hold a particular view or opinion, manufacturing the impression that certain positions enjoy popular support when they do not. It also says generative AI could rapidly produce false news articles, research papers, and other content, lending falsehoods a veneer of truth.
In line with RAND’s assessment, the Taipei Times reported in April 2023 that Taiwan’s National Security Bureau Director-General Tsai Ming-yen told the legislature’s Foreign and National Defense Committee that China could use its self-developed generative AI applications to intensify cognitive warfare against Taiwan.
“It has come to our attention that China has developed its chatbots, such as Ernie Bot. We are closely watching whether it will use new generative AI applications in disseminating disinformation,” Tsai said, as quoted by the source.
Taipei Times noted that Tsai’s bureau monitors China’s potential interference in Taiwan’s upcoming election via military or economic threats, disinformation campaigns, and hidden channels or virtual currency funding for proxy candidates.
Generative AI may revolutionize how disinformation and propaganda are produced. In a June 2023 article in the peer-reviewed journal Science Advances, Giovanni Spitale and his co-authors find that generative AI can produce disinformation more effectively than humans: advanced AI text generators such as GPT-3 could significantly affect how information is disseminated, since currently available large language models can already produce text that is indistinguishable from human-written text.
In a June 2023 article for Axios, Ina Fried outlines three ways generative AI could be used for disinformation. In line with Spitale and his co-authors’ findings, she says that generative AI can produce persuasive yet potentially inaccurate information even more effectively than humans can. She also mentions that generative AI can quickly and inexpensively fuel disinformation campaigns with tailored content. In addition, she notes that generative AI applications can themselves become targets for disinformation, as they could be fed biased data to influence discussions of specific topics.
China may have taken a page from Russia’s disinformation playbook and improved on it with generative AI. In a March 2021 article for the Center for European Policy Analysis, Edward Lucas and his co-authors note that in 2020, China’s information operations (IO) tactics adopted the “firehose of falsehood” model, which includes spreading multiple conflicting conspiracy theories to undermine public trust in facts.
Christopher Paul and Miriam Matthews note in a 2016 RAND report that the firehose of falsehood model has four distinctive features: it is high-volume and multichannel; it is rapid, continuous, and repetitive; it lacks commitment to objective reality; and it lacks commitment to consistency. Paul and Matthews note that increased message volume and diversified sources enhance a message’s persuasiveness and perceived credibility, potentially overshadowing competing narratives.
Further, they say fast, persistent, multichannel messaging helps a propagandist make first impressions and build perceptions of credibility, expertise, and trustworthiness among audiences. Moreover, they mention that the model capitalizes on consistent narratives, audience preconceptions, and seemingly credible sources to gradually build misinformation’s acceptance and credibility. They also say that the model appears resilient to inconsistencies between channels or within a single channel, though it remains to be seen how such inconsistencies affect credibility.
However, China is not alone in using disinformation for its own ends. In a March 2022 article for the Cato Institute, Ted Galen Carpenter notes that US journalists have a history of serving as willing conduits for pro-war propaganda, often in service of a military crusade that the US has launched or wants to initiate.
Carpenter points out egregious instances of disinformation related to the ongoing Ukraine War that circulated in US media: a widely shared image of a Ukrainian girl verbally confronting Russian troops that turned out to show a Palestinian girl confronting Israeli troops; reports that Miss Ukraine 2015 had taken up arms against the Russian invaders, when a well-covered photo op merely showed her posing with an airsoft gun; aerial combat footage from Ukraine that turned out to be from a video game; reporting on the deaths of Snake Island’s defenders, who turned out to be alive and well; and the supposed sinking of the Russian patrol ship Vasiliy Bykov, which later turned out to be undamaged.
He mentions that the US press has a history of serving as a conduit for foreign information operations that align with US interests, citing how US newspapers retold fabricated British reports of German atrocities shortly before the US entry into World War I, and how the Kuwaiti government ran a sophisticated information campaign, with US media acting as an echo chamber, to stir US public opinion into going to war with Iraq in 1991.
However, the firehose of falsehood model may not work in the US context. In a June 2022 article for Responsible Statecraft, Robert Wright notes that the US is a liberal democracy with a complicated media ecosystem. Wright says that it is harder to create a single dominant narrative in a pluralistic system than in an autocratic one; with less centralized control, propaganda becomes less straightforward and harder to pin down.
Wright also points out the role of US think tanks in advancing propaganda, saying that they exert influence both by explicitly opining on policies and by producing reporting and analysis that appears objective at face value but implicitly favors specific policies. He notes that think tanks hire people who already believe the things their funders want everyone to believe.
However, there may also be parallels between China’s firehose of falsehood model and the US approach to propaganda. Just as the firehose of falsehood model uses increased message volume and diversified sources to enhance a message’s persuasiveness and perceived credibility, Wright says that US institutional diversity, spread across different newspapers, cable channels, and think tanks, can make US propaganda more inconspicuous and convincing.
The views and opinions expressed in this article are those of the author.
The author is a Moscow-based Russian government scholar. He holds a master’s degree in International Relations from the Peoples’ Friendship University of Russia.