Thu, Nov 28, 2024

AI Risk and Governance: Foundations of a Documented, Defensible Program

U.S. Department of Justice Includes AI in Updates to Guidance on Corporate Compliance Programs

Given the many laws, frameworks and industry best practices surrounding artificial intelligence (AI), it’s not surprising that lawyers, compliance professionals and others charged with AI governance and compliance are seeking a starting point. A solid foundation is vital to building a program that satisfies an expanding matrix of requirements while remaining simple enough to execute amid continuing complexity and change.

The U.S. Department of Justice’s (DOJ) Evaluation of Corporate Compliance Programs details the foundation that legal and governance professionals should use when deciding on a programmatic approach to AI risk and compliance.

In September 2024, the DOJ released a revised version of the Evaluation of Corporate Compliance Programs (ECCP). Among other things, the update addresses the use of AI and due diligence in mergers and acquisitions (M&A), both of which are relevant to AI governance professionals. According to Principal Deputy Assistant Attorney General Nicole M. Argentieri: “Our updated ECCP includes an evaluation of how companies are assessing and managing risk related to the use of new technology such as artificial intelligence both in their business and in their compliance programs.”
This update underscores the DOJ’s recognition of AI’s transformative impact on business operations and the potential risks associated with its misuse.

Key Changes for AI Professionals in the Updated Guidance 

The updated ECCP introduces several key changes, including:

  • Increased Focus on AI
    The DOJ now explicitly addresses the role of AI in corporate compliance programs. This includes guidance on the development, deployment and oversight of AI systems to ensure they are used ethically and legally.
  • Data Privacy and Security
    Given the sensitive nature of data handled by AI systems, the guidance emphasizes the importance of robust data privacy and security measures. This includes implementing appropriate safeguards to protect against data breaches and unauthorized access.
  • Bias and Fairness
    The DOJ acknowledges the potential for AI algorithms to perpetuate biases and discrimination. The guidance encourages organizations to take steps to mitigate these risks, such as conducting regular audits and assessments of AI systems.
  • Transparency and Accountability
    The DOJ emphasizes the need for transparency and accountability in the use of AI. This includes documenting the development, testing and deployment of AI systems, as well as establishing clear lines of responsibility for their use.

Building the Foundation

Effective compliance programs are essential to mitigating business and legal risks; maintaining trust with customers, employees and shareholders; and fostering a culture of ethical conduct. Nowhere is this more true than when designing a program to govern AI development and deployment. The ECCP details a defensible, executable way to design an AI governance program.

Implications for Businesses

The DOJ’s updated guidance has significant implications for businesses, particularly those that are already deploying AI or consider it a strategic priority. Organizations must ensure that their compliance programs align with the updated guidance and that they have the necessary resources and expertise to address the challenges AI poses.

The inclusion of AI in the DOJ’s guidance for evaluating corporate compliance programs is a major development that underscores the growing importance of ethical AI practices. By following the DOJ’s framework, organizations can mitigate legal risks, protect their reputation and foster a culture of compliance in the age of AI. 

As AI continues to evolve, it is essential for businesses to stay informed about the latest guidance and best practices so that their compliance efforts remain effective. Kroll’s AI risk professionals stand ready to help.

 

