At the start of the year, there were widespread concerns that generative AI could be used to interfere in global elections by spreading propaganda and disinformation. Fast-forward to the end of the year, and Meta claims those fears did not play out, at least on its platforms, reporting that the technology had limited impact across Facebook, Instagram, and Threads.

The company says its findings are based on content related to major elections in the U.S., Bangladesh, Indonesia, India, Pakistan, the EU Parliament, France, the U.K., South Africa, Mexico, and Brazil.

“While there were instances of confirmed or suspected use of AI in this way, the volumes remained low and our existing policies and processes proved sufficient to reduce the risk around generative AI content,” the company wrote in a blog post. “During the election period in the major elections listed above, ratings on AI content related to elections, politics, and social topics represented less than 1% of all fact-checked misinformation.”

Meta notes that its Imagine AI image generator rejected 590,000 requests to create images of President-elect Trump, Vice President-elect Vance, Vice President Harris, Governor Walz, and President Biden in the month leading up to Election Day, to prevent people from creating election-related deepfakes.

The company also found that coordinated networks of accounts that were looking to spread propaganda or disinformation “made only incremental productivity and content-generation gains using generative AI.”

Meta says the use of AI didn’t impede its ability to take down these covert influence campaigns because it focuses on these accounts’ behavior, not on the content they post, regardless of whether that content was created with AI.

The tech giant also revealed that it took down about 20 new covert influence operations around the world to prevent foreign interference. Meta says the majority of the networks it disrupted didn’t have authentic audiences, and that some of them used fake likes and followers to appear more popular than they actually were.

Meta went on to point the finger at other platforms, noting that false videos about the U.S. election linked to Russia-based influence operations were often posted on X and Telegram.

“As we take stock of what we’ve learned during this remarkable year, we will keep our policies under review and announce any changes in the months ahead,” Meta wrote.

