“Regardless of the skill, good intentions and precautions embedded in the design of these systems, artificial intelligence systems will take actions that result in litigation,” says Senior Managing Director Alan Brill from Kroll’s Cyber Risk practice. This premise, and the litigation-related evidence organizations will be called upon to produce in such cases, was explored in a panel discussion led by Alan at the recent RIMS 2019 Conference.
Alan was joined by Elaine Wood, Managing Director in Duff & Phelps’ Compliance and Regulatory Consulting practice, and Lee Kurman from Context Data Solutions for the session, “Artificial Intelligence Incidents: Plan now to have evidence later.”
Given the likelihood of litigation, Alan declared that planning for effective evidence collection in AI systems must start in the development phase. Because AI systems inherently change over the course of their operation, it becomes almost impossible to go back and capture what was happening at any given point in time unless steps have been taken to record or retain that data. To go one step further, Alan said this requires insight beyond that of technologists: “If the problem is likely to arise in litigation it’s a good idea to have a litigator help in development.”
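The panel did not prescribe a particular mechanism, but the point about recording what a system was doing as it happens can be illustrated with a minimal sketch. Assuming a Python-based system, a simple append-only decision log (the `record_decision` function and its fields are hypothetical, not anything specified in the session) captures the model version, inputs and output of each decision so that the record survives later changes to the model:

```python
# Hypothetical sketch: append-only audit record for each AI-driven decision.
# Field names (model_version, input_hash, etc.) are illustrative assumptions,
# not a standard prescribed by the panel.
import hashlib
import json
import time


def record_decision(log_path, model_version, inputs, output, rationale=None):
    """Append one decision record so the system's state and output at this
    moment can be reconstructed later, even after the model has changed."""
    entry = {
        "timestamp": time.time(),
        "model_version": model_version,  # e.g. a build tag or model hash
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),                   # fingerprint of what the model saw
        "inputs": inputs,                # or a redacted/retained subset
        "output": output,
        "rationale": rationale,          # any explanation metadata available
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


# Example: log a single automated decision (all values are illustrative).
record_decision(
    "decision_audit.jsonl",
    model_version="risk-model-2019.04.2",
    inputs={"applicant_id": "A-1001", "score_features": [0.42, 0.77]},
    output={"decision": "refer_to_review", "confidence": 0.63},
)
```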
Alan described the foundation of “defensible design-defensible action,” critical in litigation, as carefully considering the consequences (intended or otherwise) of AI-driven decision-making and proactively addressing and mitigating these risks during the development and testing stages. When it comes to AI versus other technology, he said “Nothing has changed in terms of the underlying requirements to understand what the system does, to understand why the system does it, to preserve information that allows you to go back and answer the question of what happened and why, and that it has sufficient monitoring and controls so that you can say this system is subject to reasonable controls, reasonable compliance, reasonable management and if something goes wrong we’re going to know it.”
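As an illustrative sketch of that kind of preservation, assuming a file-based deployment (the paths, manifest format and `archive_deployment` function are assumptions, not something described by the panel), one might checksum and archive each model artifact and its configuration at deployment so there is a verifiable record of exactly which version was in operation and when:

```python
# Hypothetical sketch: archive a deployed model artifact with verifiable hashes.
# File paths and the manifest format are assumptions for illustration only.
import hashlib
import json
import shutil
import time
from pathlib import Path


def archive_deployment(model_path, config_path, archive_dir="model_archive"):
    """Copy the model and its config into a timestamped archive and record
    their SHA-256 hashes, so the exact version in operation can be shown later."""
    archive = Path(archive_dir) / time.strftime("%Y%m%dT%H%M%S")
    archive.mkdir(parents=True, exist_ok=True)

    manifest = {"archived_at": time.time(), "files": {}}
    for src in (Path(model_path), Path(config_path)):
        dest = archive / src.name
        shutil.copy2(src, dest)
        manifest["files"][src.name] = hashlib.sha256(dest.read_bytes()).hexdigest()

    (archive / "manifest.json").write_text(json.dumps(manifest, indent=2))
    return archive


# Example usage (paths are illustrative):
# archive_deployment("models/risk_model.pkl", "configs/risk_model.yaml")
```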
The panelists all agreed on the need to include professionals enterprise-wide, specifically general counsel and compliance teams, not only in AI system development but also in the controls and monitoring surrounding it. Alan emphasized the need to think about “non-technology, non-mathematical functions that are based on law, on ethics, on compliance while the system is being developed or at the very latest when it’s being tested…”.
Alan and Elaine have written a three-part series on this topic, published by LegalTech News:
- Part 1: Does Artificial Intelligence Need a General Counsel?
- Part 2: Does Artificial Intelligence Need a General Counsel? The Unintended Consequences of AI
- Part 3: Does Artificial Intelligence Need a General Counsel? Management in the Age of AI