Tue, Jul 23, 2019

Alan Brill and Elaine Wood - The Case for Defensible Design in AI at RIMS 2019

“Regardless of the skill, good intentions and precautions embedded in the design of these systems, artificial intelligence systems will take actions that result in litigation,” says Senior Managing Director Alan Brill from Kroll’s Cyber Risk practice. This premise and the litigation-related evidence that organizations will be called to produce in these cases were explored in a panel discussion led by Alan at the recent RIMS 2019 Conference. 

Alan was joined by Elaine Wood, Managing Director in Duff & Phelps’ Compliance and Regulatory Consulting practice, and Lee Kurman from Context Data Solutions, in the session, “Artificial Intelligence Incidents: Plan now to have evidence later.”


Given the likelihood of litigation, Alan declared that planning for effective evidence collection in AI systems must start in the development phase. Because AI systems inherently change over the course of their operation, it is almost impossible to go back and reconstruct what was happening at any given point in time unless steps have been taken to record or retain that data. Going one step further, Alan said this requires insight beyond that of technologists: “If the problem is likely to arise in litigation it’s a good idea to have a litigator help in development.”
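Although the panel did not discuss implementation details, one way to make that kind of point-in-time reconstruction possible is to write an append-only audit record for every automated decision, capturing the model version, the inputs and the output at the moment the decision was made. The sketch below is a minimal, hypothetical illustration in Python; the `log_decision` helper, the field names and the example decision are assumptions for illustration, not anything described in the session.

```python
import hashlib
import json
import time
import uuid


def log_decision(model_name, model_version, features, prediction,
                 log_path="decision_audit.jsonl"):
    """Append one audit record per automated decision.

    The record captures what the system saw, what it decided, and which
    model version decided it, so the decision can be reconstructed later.
    """
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model_name": model_name,
        "model_version": model_version,  # e.g. a release tag or registry ID
        "features": features,            # the exact inputs used
        "prediction": prediction,        # the output that drove the action
    }
    # A content hash over the record makes later tampering detectable.
    record["record_sha256"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode("utf-8")
    ).hexdigest()
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, sort_keys=True) + "\n")
    return record["decision_id"]


# Hypothetical example: record a single automated credit-limit decision.
log_decision(
    model_name="credit_limit_model",
    model_version="2019.07.1",
    features={"income": 52000, "utilization": 0.41},
    prediction={"approved": True, "limit": 7500},
)
```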

Alan described the foundation of “defensible design-defensible action,” which is critical in litigation, as carefully considering the consequences (intended or otherwise) of AI-driven decision-making and proactively addressing and mitigating those risks during the development and testing stages. When it comes to AI versus other technology, he said, “Nothing has changed in terms of the underlying requirements to understand what the system does, to understand why the system does it, to preserve information that allows you to go back and answer the question of what happened and why, and that it has sufficient monitoring and controls so that you can say this system is subject to reasonable controls, reasonable compliance, reasonable management and if something goes wrong we’re going to know it.”

The panelists all agreed on the need to include professionals enterprise-wide, specifically general counsel and compliance teams, not only in AI system development, but also in the controls and monitoring surrounding it. Alan talked about the need to think about “non-technology, non-mathematical functions that are based on law, on ethics, on compliance while the system is being developed or at the very latest when it’s being tested…”.

Alan and Elaine have also published a three-part series on this topic in LegalTech News.

 

