OpenAI has released a new report detailing attempts to use AI in political misinformation campaigns ahead of the 2024 U.S. elections. While these efforts, particularly by foreign adversaries, remain a concern, OpenAI noted that their impact appears limited. The most widely spread incident was itself a hoax: a false claim that Russian trolls were using OpenAI's services, when in fact the company's models had not generated the content in question.
The report highlights that generative AI is being experimented with in various ways, including to create eye-catching images for influence operations, as in one attempt by a Russian group using OpenAI's DALL-E. However, OpenAI's tools are generally used as one component of larger schemes rather than for end-to-end misinformation efforts. OpenAI also detected a China-based group attempting to gain access to its employees' credentials.
Despite these challenges, OpenAI is leveraging its own AI tools to quickly detect and neutralize suspicious activity. These tools have significantly increased the company's ability to disrupt campaigns, cutting analysis time from days to minutes.