Wed, Oct 16, 2024

Elections, AI and Regulation: Balancing Security and Innovation

AI’s role in global elections is reshaping how governments balance innovation with the need for regulation. Discover how recent elections are driving new approaches to AI oversight and what it means for the future of technology.

There are two aspects to consider when it comes to AI and elections. One is the role of AI in shaping how elections are conducted and the risks it poses in promoting deception and disinformation, which is addressed in another part of Kroll’s election series, “What Have We Learned About GenAI and Elections?”

The other consideration, and the focus of this article, is how the outcome of the U.S. and other recent elections in the UK, France and elsewhere may affect the regulation of AI development, as governments around the world struggle to balance the need for security and AI risk management against the desire to promote innovation that would unlock the technology’s potential.

Differing Approaches to AI Regulation and the Potential Impact of Forthcoming Elections

Thus far, regulatory approaches to AI have varied as governments around the world figure out how to deal with a dynamic and rapidly evolving AI landscape. 

The EU Artificial Intelligence Act (AI Act), which took effect in August 2024, seeks to harmonize rules across the EU and is the first comprehensive regulatory framework to specifically address AI with a risk-based approach. In short, the higher the perceived risk of AI in a particular use or circumstance, the more stringent the AI Act’s rules. The highest risk classification bans AI outright where it is deemed a clear threat to fundamental rights. The AI Act seeks to promote trustworthy AI. Of greatest relevance to businesses are the ethical guidelines, the regulations with which they must comply, and the penalties for noncompliance, which can reach EUR 35 million or 7% of global annual turnover, whichever is higher.
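To make that penalty ceiling concrete, below is a minimal Python sketch of how the cap works, assuming (as the Act provides for its most serious violations) that the higher of the fixed amount and the turnover-based figure applies. The function name and the example turnover are illustrative, not drawn from any regulatory text.

```python
# Illustrative only: EU AI Act headline penalty cap for the most serious
# violations, assuming the higher of the two figures applies.

FIXED_CAP_EUR = 35_000_000   # EUR 35 million
TURNOVER_RATE = 0.07         # 7% of global annual turnover

def max_penalty_eur(global_turnover_eur: float) -> float:
    """Return the maximum possible fine (hypothetical helper)."""
    return max(FIXED_CAP_EUR, TURNOVER_RATE * global_turnover_eur)

# A firm with EUR 2 billion in global turnover faces a cap of EUR 140 million,
# not EUR 35 million, because the turnover-based figure is higher.
print(f"EUR {max_penalty_eur(2_000_000_000):,.0f}")  # EUR 140,000,000
```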

It is not yet clear whether this year’s various EU elections will lead to efforts to fundamentally alter the AI Act. Several areas of the act are still not set in stone, and numerous critics argue that it creates barriers to innovation. Some EU member states may seek to loosen the rules around high-risk and general-purpose AI, which they view as too restrictive.

In the U.S., a lighter regulatory touch has prevailed thus far. While no comprehensive federal AI legislation has been enacted under the Biden administration, various states, including California, have passed their own forms of AI regulation, which businesses will need to consider.

The impending U.S. election may alter things, as the Harris and Trump campaigns have expressed differing perspectives. Harris’ preferred approach can be seen in the Biden-Harris October 2023 Executive Order on safe, secure and trustworthy AI, which enshrined a number of principles for AI development. Trump has indicated that he favors deregulation and promoting innovation and has reportedly said he would repeal the executive order because it is “dangerous” and “hinders innovation”.1

In the UK, the previous Conservative government spelled out a “pro-innovation” approach in its 2023 AI Regulation White Paper, aiming to promote innovation by using existing laws and regulators to implement a framework of ethical principles rather than imposing new regulations. It remains to be seen whether the new Labour government will alter this approach. Prime Minister Keir Starmer has indicated a preference for regulation, though not as extensive as the EU’s, but details are limited.

China implemented Interim Measures for the Management of Generative Artificial Intelligence Services in August 2023 “to promote healthy development of generative AI, protect national security and public interests, and protect the rights of citizens, legal entities and other organizations.” The measures reflect a regulatory approach that has evolved from industry self-regulation to national standards to specific rules.

The UAE, through its UAE National Strategy for Artificial Intelligence 2031, seeks to establish itself as a global leader and hub for AI development. Among its objectives are “optimizing AI governance and regulations” and promoting ethical use of AI through its AI Ethics Principles and Guidelines.

Challenges in Regulating AI

As with other new technologies, AI’s rapid development poses challenges for regulators. Regulations tend to be written for known quantities, while emerging capabilities, particularly in generative AI, are often impossible to anticipate and thus difficult to regulate. For example, early iterations of the EU AI Act did not anticipate the emergence of large language models and had to be updated in real time to reflect the technology’s rapid development.

Invariably, there will be gaps to fill as the technology and the issues evolve. Finding the right balance between fostering innovation and protecting investors, consumers and the public is at least partly a political choice. The outcome of the upcoming U.S. election is likely to determine whether AI regulation at the U.S. federal level accelerates or decelerates.

And, as with other kinds of regulation, there is a need for global alignment, such as on ethical standards, to discourage more profit-oriented AI players from shopping around for the least restrictive jurisdiction. 

There does seem to be near-universal agreement around the world on the need for AI to be safe, secure and transparent and not to cause harm or threaten fundamental rights. This principles-based approach is reflected in the International Treaty on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (the “Convention”), drafted by 46 Council of Europe member states and 11 non-member states, including the U.S., Japan and Israel.

Navigating the AI Regulatory Environment

For businesses navigating this evolving and still uncertain regulatory environment, some key considerations include:

  • Finding the expertise and having properly trained people in-house to govern and manage AI is essential. There are still relatively few experts familiar enough with the tools and techniques for managing very large datasets and the related risks. Such expertise is necessary to design and implement mitigation strategies.
  • Obtaining the vast amounts of data needed to train some types of AI models raises significant issues around confidentiality, terms of use, patents, trade secrets and undisclosed or unknown conflicts over the datasets used.
  • Vendor management can be an issue for firms that use third-party data, especially if that data informs trading decisions or recommendations for clients. Organizations need to understand the source of the data and ensure, for example, that it does not contain material non-public information entering the firm through its vendor system. This potentially entails additional compliance responsibilities.
  • Bias and related ethics issues must be addressed, as data can be inherently biased depending on where it is collected and whether all stakeholders and viewpoints are represented in the underlying datasets and algorithms; a minimal sketch of one such representation check follows this list. Allegations of bias and harm to individuals can generate global media coverage and quickly destroy a firm’s reputation.
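To illustrate the representation point in the final bullet above, here is a minimal, hypothetical Python sketch that compares each group’s share of a dataset with a reference share and flags material gaps. The helper representation_gaps, the 0.05 tolerance and the toy data are assumptions for illustration only; real bias audits go much further, covering outcome disparities, proxy variables and intersectional effects.

```python
from collections import Counter

def representation_gaps(records, group_key, reference_shares, tolerance=0.05):
    """Return groups whose dataset share deviates from a reference share by
    more than `tolerance` (absolute difference in proportion). Hypothetical
    helper for illustration only."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, ref_share in reference_shares.items():
        share = counts.get(group, 0) / total
        if abs(share - ref_share) > tolerance:
            gaps[group] = {"dataset_share": round(share, 3),
                           "reference_share": ref_share}
    return gaps

# Toy data with made-up reference shares: all three regions are flagged
# because collection skewed heavily toward the EU.
data = [{"region": "EU"}] * 70 + [{"region": "US"}] * 25 + [{"region": "APAC"}] * 5
print(representation_gaps(data, "region", {"EU": 0.4, "US": 0.4, "APAC": 0.2}))
```

Even a crude check of this kind can surface collection skew early, before it propagates into model outputs and decisions.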

Those considerations and risks can and will be addressed, because the opportunity AI presents is so compelling: making firms more efficient, enabling high-speed processing of large datasets to yield actionable intelligence, and improving the detection of fraud and other risks across both structured and unstructured data.

With AI and generative AI, as with past disruptive technologies like the internet, regulators are concerned about conflicts, undisclosed risks and ineffective or nonexistent oversight due to the potential for significant impacts on financial markets and vital sectors like healthcare.

But, in much the same way companies and regulators became comfortable with the internet, which is regulated to some extent, comfort with AI and generative AI is likely to grow over time as use cases increase. That said, industry experts believe we are in the early stages of what is already incredibly powerful AI. Many industry leaders have publicly expressed concerns about the existential risks this technology poses to humanity. Effective guardrails are absolutely essential, and the responsible use of AI technologies will need to be driven by organizations, creators and developers.

1. “Experts Worry Republicans Will Repeal Biden’s AI Executive Order,” TIME.
