Thu, Sep 12, 2024

Deepfakes and Misinformation: What Have We Learned About GenAI and Elections?

GenAI may not have broken the information environment, but it certainly has complicated matters. Now more than two-thirds of the way through the “Year of the Election,” we explore the current state of GenAI and elections. For businesses, GenAI presents a new challenge to prepare for, both in responding to political developments and in managing direct risks and opportunities for their brand.
Figure 1: Year-over-Year Mentions of GenAI Content. This figure illustrates the dramatic increase in mentions related to GenAI content over the past few years. From 2022 to 2023, the number of mentions more than tripled. From 2023 to 2024, the mentions more than quintupled. Source: Brandwatch.

Amid ongoing discussion about the potential negative impacts of Generative AI (GenAI), one persistent concern is that it will make the dissemination of misinformation more prevalent. This concern is especially acute in countries where there is significant worry about the integrity of information surrounding the 2024 elections. Although the focus on this issue may be driven by electoral events, it is likely to affect businesses as well.

2024 was proclaimed the “Year of the Election,” with voters in countries representing over half of the world’s population heading to the polls. It was also anticipated as the year GenAI could disrupt elections. Social media has already lowered the cost of sharing content online, and GenAI further reduces the cost of producing that content.

Indeed, conversations on many prominent social media platforms about GenAI in the first eight months of 2024 grew by 452% compared to the previous eight months. Could 2024 have turned out to be the year that deepfakes, long considered the next big risk in misinformation, actually proliferated and had real-world consequences for political developments?

Reality is always more nuanced than doomsday scenarios. While GenAI has been used for political purposes, text and audio have posed more problems than the oft-mentioned deepfake images and videos. In many countries, media and regulators proved ready and able to call out political images created using GenAI, and citizens have proven less easily deceived than pundits presumed. Evidence of GenAI actually having a material impact on election campaigns is rare; the most consequential video affecting the U.S. election to date was genuine footage of President Biden’s struggles in the first presidential debate, not a manipulated clip.

However, the rise of GenAI has not been without cost to the reputations of politicians and businesses. One concern is what is known as the "liar's dividend." As discussions about fake content increase, convincing people of the truth becomes harder. In politically divided countries like the U.S., this can create a political world where it feels like politicians and their supporters are talking past each other, unable to agree upon even basic facts about the state of the world. As we will describe later, the liar's dividend can also prove challenging for companies.

But as we reflect upon where we stand now with GenAI, two-thirds of the way through the Year of the Election, one lesson is clear. GenAI poses risks in both politics and business, but it has not yet fundamentally reshaped the information environment. Instead, like other risks to reputation and brands, it can be anticipated, planned for, and addressed when necessary. Despite initial fears, the sky is not falling; GenAI nonetheless creates real risks that, much like other sources of risk, require careful planning.

What Have We Learned So Far?

Figure 2: Year-over-Year Increase in Unique Individual Accounts Discussing GenAI. This figure illustrates the dramatic annual growth in the number of unique authors discussing GenAI in relation to elections, with a significant increase in 2024. The increase from 2023 to 2024 is more than double the growth observed from 2022 to 2023. Source: Brandwatch.

Like conventional misleading content, AI-generated content is an effective vehicle for divisive messaging on polarizing topics, amplifying hateful narratives and the harassment of candidates. GenAI simply enables greater scale at lower cost. Data collected by Resolver, a Kroll business, shows that the number of unique accounts discussing GenAI globally more than doubled in 2024 compared to 2023. However, despite large volumes of AI conversations, the worst fears about deepfake images and videos have not, so far, been realized in 2024.

There are several reasons why this is the case. Most importantly, the public’s ability to decipher GenAI content has surpassed commentators’ initial expectations. Regulators, mainstream media and especially fact-checking outfits have been quick to recognize and call out fake videos and images. These counterweights have likely been more effective than expected partly because the production quality of GenAI images and videos has generally not been as high as anticipated, enabling rapid identification and flagging of fake content.

Of course, there have been examples in elections around the world of GenAI content intended to impact public perception and voter intentions.

In France, deepfake videos purporting to show National Rally (RN) leader Marine Le Pen and her niece Marion Maréchal of the Reconquête party spread on social media. The videos were supposedly posted by a young niece of Le Pen whose account accumulated tens of thousands of followers. The account turned out to be fake and the content synthetic, but not before significant public engagement and debate.

During India’s general election, GenAI content targeting the integrity of electoral processes and attempting to stoke sectarian tensions among the country’s religious minorities featured prominently in online discourse.

Meanwhile, in the U.S., there have been two documented uses of GenAI to impersonate speech by Democratic candidates for president. This winter, fake robocalls mimicking Joe Biden’s voice urged voters to skip the primary election in New Hampshire (although it is worth noting that the political consultant who implemented the scheme now faces criminal charges and a potential $6 million fine). Last year, a clip of Kamala Harris, then running for vice president, speaking at a political rally was altered to make her words sound nonsensical.

While most attention has been devoted to the risk of GenAI video and images, Kroll’s analysis shows that the most problematic content in the EU and UK elections was deepfake audio and simple text. These formats are generally more believable and harder to detect, as well as easier and cheaper to produce at credible quality. GenAI text content is also the most interoperable with other assets designed to influence elections, such as bot farms and fake media outlets. It is easily disseminated through coordinated inauthentic networks that spread misleading information far and fast, and it can be subtly reworded once in flight to evade detection in ways that online platforms, fact-checkers and the public find hard to identify, as the sketch below illustrates.
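
To make the detection-evasion point concrete, the toy Python sketch below shows why exact-match fingerprinting breaks down once a message is lightly reworded in flight, while fuzzy matching based on character shingles can still link the copies. The function names, shingle size, and example messages are our own illustrative assumptions, not a description of any platform’s actual detection pipeline.

```python
# Toy sketch (illustrative only): why lightly reworded copies of a message
# evade exact-match fingerprinting but can still be linked by fuzzy matching.
import hashlib

def shingles(text: str, k: int = 5) -> set:
    """Overlapping k-character shingles of a normalized string."""
    text = " ".join(text.lower().split())  # collapse case and whitespace
    return {text[i:i + k] for i in range(max(len(text) - k + 1, 1))}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: size of intersection over size of union."""
    return len(a & b) / len(a | b) if a | b else 0.0

original = "Polling stations in the capital will close two hours early today."
reworded = "Polling stations in the capital are closing two hours early today."

# An exact fingerprint changes completely after a small edit ...
print(hashlib.sha256(original.encode()).hexdigest()[:16])
print(hashlib.sha256(reworded.encode()).hexdigest()[:16])

# ... while shingle overlap stays high, so the variants remain linkable.
print(f"shingle similarity: {jaccard(shingles(original), shingles(reworded)):.2f}")
```

Real platforms use far more sophisticated techniques, but the asymmetry is the same: each cheap rewrite forces defenders to move from exact matching to costlier similarity search.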

As we progress through this year’s historic waves of elections, GenAI content is manifesting more like prior types of electoral and reputational risk than something fundamentally new. One of the busiest election years in history, combined with the novel power of GenAI, left some feeling that an informational sword of Damocles was hanging over societies around the world. The reality is more nuanced.

Exploiting Misinformation

Although the more doomsday scenarios posited about the impact of GenAI on elections have not yet materialized, firms should take seriously the liar's dividend problem that politicians also face. As people become more aware of how content can be manipulated, or as they become “sensitized” to GenAI content, it becomes easier for dishonest individuals to make others doubt real content by claiming it is fake.1

However, the problem cuts the other way as well. While it has always been challenging to “prove a negative” (i.e., that something did not happen), in a GenAI world this becomes even harder. If a politician accused of doing "X" produces evidence showing that they could not have done it, people who find the original allegation credible can simply write off the exculpatory evidence as fake itself.

In the long term, the biggest lesson for firms from the use of GenAI in this year of elections may in fact be the growing ubiquity of the beliefs that underpin the liar's dividend, as fears of GenAI around elections continue to be trumpeted in ways that attract increased attention. Firms would be wise to revisit their approach to communication strategy and crisis management with an eye towards the liar's dividend.

Positive Uses

Attention has been mostly focused on the negative aspects of AI in politics. However, we should not overlook the positive potential of this new technology and the creative ways political campaigns have been using it. There are lessons here for business too.

In South Korea, AI avatars have been used for campaigning, creating virtual representations of candidates to engage with voters through a different medium. This proved particularly popular with younger voters, who were the most likely to engage with the avatars.

In India, parties on multiple occasions authorized deepfakes of popular deceased politicians. This use of GenAI was well received by voters and seen as a way of connecting different generations of voters.

A particularly effective use of AI in political campaigns was demonstrated by the Pakistan Tehreek-e-Insaf (PTI) party, led by jailed former Pakistani Prime Minister Imran Khan. In the aftermath of the PTI’s shock success in the 2024 election, an AI-generated victory speech by Khan was viewed as an extremely innovative use of the technology, and the social media post in which it was shared accumulated over 6 million views and 58,000 reposts.

Meanwhile, Taiwan drafted an ambitious and groundbreaking law that would govern the use and reliability of GenAI models and the risks associated with them. The legislation would establish labeling, disclosure and accountability mechanisms. Alongside obligations for AI companies to uphold data protection and privacy rules in model training, and enhanced requirements around content verification, individuals would be able to grant carefully defined consent for virtual representations of themselves to be used by businesses in marketing and advertising campaigns. Building on the AI Risk Management Framework established by the U.S. National Institute of Standards and Technology, the proposal charts a potential path for firms elsewhere to replicate, within a defined framework, the effectiveness of legitimate political campaigning in their efforts to engage and interact with their customers. Other countries are watching the evolution of this law with close interest, as the issues it addresses will need to be dealt with elsewhere too.

Takeaways

Overall, as we look back on the first eight months of the Year of the Election, we see that elections in 2024 so far look a lot like those in years past. Candidates running for office have had to deal with mudslinging from their opponents, some of which comes in the form of unfounded rumors. While GenAI may have accentuated some of these rumors, such attacks during political campaigns are nothing new, and legions of campaign professionals are paid to figure out how to manage them. This is not to say that there will not be consequential GenAI moments in the future, but rather that, so far, we have not really seen them.2

The risks posed by GenAI for now remain best viewed through the lens of conventional risk management. GenAI may not have broken the information environment, but it certainly has complicated matters. Firms should be prepared, both in their strategy and in their available toolset, to react to political developments and meet direct risks head-on. The use of GenAI will continue to grow, and its effects will continue to erode authority and trust. This will shrink the time available to respond to threats and will pressure-test organizations’ ability to parse the noise, separate truth from falsehood and establish authority on relevant issues.

Explainer: How AI Can Be Used to Disrupt Elections

A principal threat of AI to free and fair elections, as well as financial markets, stems from its potential to turbocharge the dissemination of disinformation. Purposeful disinformation can impact elections and may fuel increased polarization. It can affect the national economy as well as our personal decisions around consumption, education and lifestyle. Disinformation is a tool of choice for foreign election interference, undisclosed interests and ideological warfare. It is used to attempt to sow division, create confusion and steer group behavior.

Deepfakes are probably the most well-known disinformation tool, but AI brings three other key capabilities to the dissemination of disinformation: algorithmic influence, content impact optimization and force multiplication.

  • Algorithmic Influence: This refers to the continually improving capabilities of content platforms to capture user attention by delivering compelling content matched to a user’s interests and preferences. AI can significantly enhance the power of algorithmic audience targeting by harnessing user data and combining it with quantitative psychological research, potentially targeting key blocs of voters. There is global concern that social media videos and messaging are disseminating manipulative political messages to influence public opinion and voting behavior.
  • Content Impact Optimization: AI could affect the substance of internet content in a manner similar to its use for algorithmic influence. It is well known among content producers and social media influencers that certain approaches to creating content can improve engagement levels, including the adoption of speech and delivery patterns that harness human emotions. The power of AI to analyze data makes it possible to generate content that is increasingly fine-tuned to provoke reactions, potentially aiding the spread of election interference through disinformation.
  • Force Multiplication: One area of particular concern regarding AI is its use as a force multiplier for spreading disinformation through communications and social media comments. Force multiplication amplifies a disinformation campaign by flooding the internet with trolling comments and divisive conspiracies. The anticipated availability of AI agents could lead to swarms of troll bots bombarding the marketplace of ideas with fake utterances that push a specific message. Transmitting these messages in large numbers can create the false impression that a certain viewpoint is widely held, a pattern illustrated by the sketch after this list. This technique could also lend credibility to so-called "fake news" events.
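
As a concrete illustration of the force-multiplication pattern, the toy Python sketch below groups near-identical messages posted by distinct accounts; many accounts pushing one canonical message is the signature of manufactured consensus. The accounts, messages, normalization and threshold here are hypothetical, and real coordinated-behavior detection also weighs signals such as timing, account age and network structure.

```python
# Toy sketch (hypothetical data): flag possible force multiplication by
# grouping near-identical messages that many distinct accounts have posted.
from collections import defaultdict

def canonical(msg: str) -> str:
    """Crude canonical form: lowercase, drop punctuation, collapse spaces."""
    kept = "".join(c if c.isalnum() or c.isspace() else " " for c in msg.lower())
    return " ".join(kept.split())

posts = [  # (account, message) pairs -- illustrative examples only
    ("@acct1", "The election was RIGGED!! Wake up."),
    ("@acct2", "the election was rigged, wake up"),
    ("@acct3", "The election was rigged... wake up!"),
    ("@acct4", "Long line at my polling station this morning."),
]

groups = defaultdict(set)
for account, message in posts:
    groups[canonical(message)].add(account)

# Many distinct accounts repeating one canonical message is a red flag.
for text, accounts in groups.items():
    if len(accounts) >= 3:
        print(f"possible coordinated amplification ({len(accounts)} accounts): {text!r}")
```

Exact canonicalization like this is easily defeated by the in-flight rewording shown earlier, which is why text grouping and fuzzy similarity matching are typically combined in practice.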

The use of these capabilities to spread disinformation poses an ongoing risk that will require continuous monitoring and vigilance. Voters should carefully consider the source, reliability and veracity of any content they receive or share.

Sources
1Consider the now well-known “Access Hollywood” tape that surfaced towards the end of the 2016 U.S. presidential election campaign, containing recordings of then-candidate Donald Trump making disparaging comments about women. At the time, the campaign did not deny the authenticity of the recordings. Fast forward to the Generative AI world of 2024, and it is hard to imagine a political campaign in a similar situation today not at least gesturing at the idea that the tape could have been created with Generative AI in an effort to undermine its impact.
2Perhaps the most concerning incident, in the Slovak parliamentary elections, involved audio, not images or videos.
