SAN FRANCISCO — OpenAI, the maker of ChatGPT, said Thursday that groups from Russia, China, Iran, and Israel had used its technology to try to influence political discourse around the world, underscoring concerns that generative artificial intelligence is making it easier for state actors to run covert propaganda campaigns as the 2024 presidential election approaches.
OpenAI deactivated accounts linked to well-known propaganda operations in Russia, China, and Iran; an Israeli political campaign firm; and a previously unknown group originating in Russia that the company’s researchers have dubbed “Bad Grammar.” These groups used OpenAI’s technology to write posts, translate them into multiple languages, and build software that helped them post automatically to social media.
These entities garnered minimal engagement; the associated social media accounts reached a limited audience, with only a few followers, according to Ben Nimmo, the principal investigator on OpenAI’s intelligence and investigations team. Nonetheless, OpenAI’s findings indicate that long-active propagandists on social media are now employing AI technology to amplify their efforts.
“We’ve observed them producing text in greater volumes and with fewer mistakes than these operations have traditionally managed,” Nimmo, a former investigator at Meta specializing in influence operations, stated during a briefing with journalists. He acknowledged the potential for other groups to be utilizing OpenAI’s tools unbeknownst to the company. “This is not a moment for complacency. Historical patterns show that influence campaigns, after years of ineffectiveness, can suddenly succeed if left unmonitored,” he warned.
Governments, political entities, and activist groups have long used social media to attempt to sway political outcomes. In response to concerns over Russian meddling in the 2016 presidential election, social media platforms began scrutinizing their sites for efforts to manipulate voters. These platforms generally ban governments and political groups from disguising coordinated efforts to influence users, and political advertisements must disclose their sponsors.
As AI tools capable of generating realistic text, images, and even video become widely accessible, disinformation researchers have raised alarms about the increasing difficulty in identifying and countering false information or covert influence operations online. With hundreds of millions voting globally this year, the proliferation of generative AI deepfakes is a growing concern.
OpenAI, along with other AI companies like Google, is working on technology to detect deepfakes created with their tools, but this technology remains unproven. Some AI experts are skeptical that deepfake detectors will ever be fully reliable.
Earlier this year, a group affiliated with the Chinese Communist Party posted AI-generated audio that appeared to show a candidate in the Taiwanese elections, Foxconn founder Terry Gou, endorsing another politician. Gou made no such endorsement.
In January, New Hampshire primary voters received a robocall purportedly from President Biden, which was quickly identified as AI-generated. Last week, a Democratic operative who admitted to commissioning the robocall was indicted on charges of voter suppression and impersonating a candidate.
OpenAI’s report elaborated on how the five groups utilized the company’s technology in their attempted influence operations. Spamouflage, a previously known Chinese group, used OpenAI’s technology to analyze social media activities and compose posts in Chinese, Korean, Japanese, and English. An Iranian entity, the International Union of Virtual Media, also employed OpenAI’s technology to generate articles published on its site.
The newly identified “Bad Grammar” group utilized OpenAI technology to develop a program capable of automatic posting on Telegram. They then used OpenAI’s tools to create posts and comments in Russian and English, promoting the idea that the United States should withdraw support for Ukraine.
The report also disclosed that an Israeli political campaign firm, Stoic, used OpenAI to generate pro-Israel posts regarding the Gaza conflict, targeting audiences in Canada, the United States, and Israel. On Wednesday, Meta, Facebook’s parent company, also exposed Stoic’s activities, reporting the removal of 510 Facebook and 32 Instagram accounts linked to the group. Some accounts were hacked, while others belonged to fictitious personas, according to Meta.
These accounts frequently commented on the pages of prominent individuals or media outlets, posing as pro-Israel American college students, African Americans, and others. The comments supported the Israeli military and warned Canadians about the threat of “radical Islam” to liberal values, according to Meta.
The use of AI appeared to contribute to the peculiar wording of some comments, which struck genuine Facebook users as strange and out of context. The operation was largely ineffective, attracting only about 2,600 legitimate followers.
Meta took action after the Atlantic Council’s Digital Forensic Research Lab discovered the network on X.
Over the past year, disinformation researchers have posited that AI chatbots could engage in lengthy, detailed conversations with specific individuals online, attempting to sway their opinions. AI tools could also potentially analyze vast amounts of data on individuals and tailor messages specifically to them.
OpenAI did not find such sophisticated uses of AI in its investigation, Nimmo stated. “It is very much an evolution rather than a revolution,” he remarked. “That is not to say we might not see such developments in the future.”
This article was originally published by The Washington Post.
FAQs
What is generative AI?
Generative AI refers to algorithms that can produce new content, including text, images, and videos, that can closely resemble human-made material.
How does AI-generated propaganda differ from traditional methods?
AI-generated propaganda can be produced and disseminated at a much larger scale and with greater personalization, making it more efficient and harder to detect.
What can social media users do to identify and report propaganda?
Users should critically evaluate the sources of information, look for signs of automation in posts, and report suspicious content to platform administrators.
How effective are current detection tools?
While detection tools are improving, they are not yet foolproof. Continuous development and adaptation are necessary to keep pace with evolving AI technologies.
What is the role of governments in combating AI-driven propaganda?
Governments can implement regulations, support the development of detection technologies, and collaborate with tech companies to monitor and address propaganda efforts.