Why High-Quality Data is Crucial to Fighting Financial Crime

The fight against financial crime is becoming increasingly complex. As the debate around AI responsibility rapidly evolves, financial institutions must now weigh AI's implications and effectiveness within their business strategies, especially in compliance functions, where the technology is already becoming broadly embedded. The growing prominence of AI also means that organizations need to carefully consider the quality of the data that fuels these systems.

Findings from Kroll’s 2023 Fraud and Financial Crime Report indicate that nearly a quarter of financial institutions consistently use AI for compliance-related activities, with an additional 32% in the early stages of adoption. There are no signs that this investment will decelerate in the near term. The International Monetary Fund (IMF) reports that financial institutions will double their AI spending by 2027.

AI has proven transformative, particularly in combating financial crime. AI-powered systems can accurately analyze vast swathes of financial data in real time, swiftly detecting suspicious or potentially criminal activity. As a result, AI has emerged as a crucial tool in preventing fraud, identifying money laundering risks and performing know-your-customer (KYC) checks.

AI in Financial Crime Prevention

While the role of AI in modern compliance practices is widely recognized, the significance of the data that fuels these AI systems is often overlooked. Financial institutions and organizations are eager to invest in the most advanced AI technology. However, the effectiveness of these systems hinges significantly on the quality of the underlying data. Without high-quality, well-structured data and financial crime expertise, the potential of AI is substantially diminished. 

Forging a Proactive AI Strategy

A common pitfall during the initial stages of implementing any new technology in financial institutions is that firms tend to deploy it without first having a comprehensive strategy in place. Instead of patching up inefficiencies in legacy systems with new layers of technology, organizations need to adopt a more proactive approach.

Six Steps for Successful AI Implementation

  1. Establish a strategic roadmap
  2. Define clear business and process objectives
  3. Select relevant technology
  4. Plan resources
  5. Train staff
  6. Rigorously assess risks and mitigation strategies

Without a well-defined AI strategy, organizations risk operating in silos, hindering communication between data, technology and business teams. Consequently, data entry, processing and reporting practices will not function at an optimal level, which is detrimental to model outputs. Such problems are exacerbated as firms grapple with cybersecurity issues, data privacy concerns and budget constraints that affect the full-scale implementation of AI technology.

Issues with data integrity, such as poor KYC and onboarding practices, can compromise the robustness of AI systems. If the data is incomplete, unavailable, inconsistent, outdated or biased, AI systems may not be reliable. Our research indicates that nearly half (49%) of the respondents we surveyed for our 2023 Fraud and Financial Crime Report see data integrity as the leading challenge in technology implementation.
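The kinds of integrity checks described above can be automated. Below is a minimal sketch in Python; the record fields, refresh policy and rule names are illustrative assumptions, not a description of any specific institution's KYC schema.

```python
from datetime import date, timedelta

# Hypothetical KYC records; field names are illustrative only.
records = [
    {"customer_id": "C001", "name": "Ada Ltd", "country": "GB",
     "last_reviewed": date(2023, 11, 2)},
    {"customer_id": "C002", "name": None, "country": "GB",
     "last_reviewed": date(2019, 5, 14)},  # incomplete and outdated
]

REQUIRED_FIELDS = ("customer_id", "name", "country", "last_reviewed")
MAX_AGE = timedelta(days=365)  # example refresh policy: review annually

def integrity_issues(record, today):
    """Flag incomplete or stale fields in a single KYC record."""
    issues = []
    for field in REQUIRED_FIELDS:
        if record.get(field) in (None, ""):
            issues.append(f"missing:{field}")
    reviewed = record.get("last_reviewed")
    if reviewed and today - reviewed > MAX_AGE:
        issues.append("outdated:last_reviewed")
    return issues

today = date(2024, 1, 1)
report = {r["customer_id"]: integrity_issues(r, today) for r in records}
```

Routine checks like these, run before data reaches an AI model, surface the incomplete and outdated records that would otherwise quietly degrade model reliability.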

The risk increases if AI models are not properly validated before implementation. Regulators in the UK and U.S., among others, are intensifying demands for robust model validation processes to ensure AI performs as expected. They emphasize the need for a strong strategic vision before deploying AI, rather than merely reacting to problems as they arise.

The Risks Are Big

On a broad level, a reactive approach to data and AI means the technology may not be used to its full potential. Moreover, poor quality data or data shortages can cause more direct problems.

If the datasets used to train and test AI systems are incomplete or of poor quality, the resulting algorithms will be biased. There have already been allegations that AI, if not properly trained or managed, can make discriminatory decisions based on characteristics such as gender or ethnicity.

For example, if an AI system trained on predominantly male data assesses credit applications, it may favor a man’s application over a woman’s due to underrepresentation of women in the dataset. This highlights the critical need for organizations to train AI models with diverse datasets and implement comprehensive reporting processes to identify and rectify any data biases.
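The underrepresentation problem in the credit example above can be measured directly. The sketch below, with invented sample data, shows two simple diagnostics: each group's share of the training set and its historical approval rate. Large gaps in either are a warning sign worth investigating before training.

```python
from collections import Counter

# Illustrative training labels; in practice these would come from an
# institution's historical credit decisions.
training_rows = [
    {"gender": "M", "approved": True},
    {"gender": "M", "approved": True},
    {"gender": "M", "approved": False},
    {"gender": "M", "approved": True},
    {"gender": "F", "approved": False},
]

def representation_share(rows, attribute):
    """Share of each group in the training data."""
    counts = Counter(r[attribute] for r in rows)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def approval_rate(rows, attribute):
    """Approval rate per group; large gaps suggest biased labels."""
    by_group = {}
    for r in rows:
        by_group.setdefault(r[attribute], []).append(r["approved"])
    return {g: sum(v) / len(v) for g, v in by_group.items()}

shares = representation_share(training_rows, "gender")
rates = approval_rate(training_rows, "gender")
# Here women make up only 20% of the sample and have a 0% approval
# rate, so a model trained on this data would inherit that skew.
```

These are deliberately crude measures; production fairness testing uses richer metrics, but even checks this simple catch the kind of skew the credit example describes.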

Authorities at the highest level are also recognizing the implications of poor data quality. In the U.S., initiatives are underway to mandate that financial institutions and other organizations ensure “systems should be used and designed in an equitable way”. The EU’s upcoming AI Act and the UK government’s AI White Paper, A Pro-Innovation Approach to AI Regulation, include similar provisions, as authorities worldwide seek to balance AI-driven innovation with regulatory oversight. Organizations that fail to meet these standards risk reputational damage and hefty fines, which in the EU could amount to up to 7% of annual worldwide turnover.1

Given the vast amounts of data used by financial institutions and the complexity of AI systems, those without in-house expertise or robust external AI support may become overly reliant on their technology. This dependency might lead them to accept whatever outcomes their systems produce, potentially overlooking inaccurate or discriminatory decisions because they lack the ability or know-how to challenge them.

Understanding Technology and Data

To avoid these risks, organizations must fully understand the technology they deploy and the data utilized by these machines. They must ensure their models are validated before any AI system goes live, while generative AI must be carefully trained on diverse data and properly configured. Once live, it is equally important for organizations to regularly test systems to identify any flaws or deviations from expected performance. Where there are clear signs that datasets are inaccurate, organizations should promptly refresh and retrain their models.
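The ongoing testing described above often takes the form of comparing live performance against a baseline recorded at validation time. Below is a minimal sketch of that pattern; the metric names, baseline values and tolerance are invented for illustration, not a recommended policy.

```python
# Hypothetical baseline recorded when the model was validated,
# and a tolerance for how far live metrics may drop below it.
VALIDATION_BASELINE = {"precision": 0.91, "recall": 0.88}
TOLERANCE = 0.05  # example threshold before retraining is flagged

def check_drift(live_metrics, baseline=VALIDATION_BASELINE, tol=TOLERANCE):
    """Return (baseline, live) pairs for metrics that have degraded
    beyond tolerance since validation."""
    return {
        name: (baseline[name], value)
        for name, value in live_metrics.items()
        if baseline[name] - value > tol
    }

degraded = check_drift({"precision": 0.90, "recall": 0.79})
# Recall fell from 0.88 to 0.79, breaching the 0.05 tolerance,
# so this run would flag the model for investigation or retraining.
```

Wiring a check like this into regular monitoring gives organizations the "clear signs that datasets are inaccurate" mentioned above, rather than relying on problems being noticed by chance.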

Four Key Steps for Understanding Data Quality

Good Governance

Establishing governance frameworks can set organization-wide guidelines for data management, ensuring that employees across all business functions understand their data-related responsibilities. As reliance on AI grows, it is crucial that all employees understand their obligations around data privacy and security. Good governance also aids in comprehensively understanding the technology being integrated, ensuring that decisions in the design, development and implementation process are effectively overseen and are explainable. Comprehensive understanding of the technology is essential so organizations can continuously monitor their AI systems and test accurately for biases, errors and performance issues.

Data Mapping

Data mapping plays a pivotal role in modern business operations, especially with the increasing emphasis on data-driven decision-making. Data mapping tools provide organizations with a deep understanding of how their data is being used, what it is being used for and a visual representation of how data flows throughout an organization. These tools help organizations implement robust data protection controls, such as encryption and access controls. To ensure the effectiveness of this process, organizations must verify that they are collecting the correct data by using strong KYC and onboarding processes and then implement controls to maintain data accuracy.

Data Sharing

By choosing to share their data, organizations can enhance transparency. External input provides a fresh perspective and can highlight data gaps. By proactively sharing data with trusted partners and third parties, organizations can independently verify that their high standards are being met. Data sharing can also involve enriching existing datasets with data available from external sources, driving up data quality and thereby optimizing AI-powered processes.
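The enrichment step mentioned above, filling gaps in internal records from an external source, can be sketched simply. The datasets and field names below are hypothetical; the key design choice shown is that external data fills gaps but never overwrites values the institution already holds.

```python
# Internal records with gaps, and a hypothetical trusted external source.
internal = {
    "C001": {"name": "Ada Ltd", "industry": None},
    "C002": {"name": "Byte GmbH", "industry": "Software"},
}
external = {
    "C001": {"industry": "Fintech", "registered": "GB"},
}

def enrich(internal, external):
    """Fill missing internal fields from an external dataset without
    overwriting values the organization already holds."""
    merged = {}
    for cid, record in internal.items():
        merged[cid] = dict(record)
        for field, value in external.get(cid, {}).items():
            if merged[cid].get(field) in (None, ""):
                merged[cid][field] = value
    return merged

enriched = enrich(internal, external)
```

Keeping internal values authoritative is a deliberate choice here; the reverse policy (letting external data override) would need the same governance and accuracy controls discussed earlier in the article.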

Education

A commitment to widespread education throughout the organization is integral to its understanding of data and AI. Humans will continue to play a crucial role in managing the data that feeds into AI and ensuring its meaningful use; if people do not understand the technology and its associated risks, an AI strategy is unlikely to succeed. Professionals across business departments, including anti-money laundering (AML) and compliance functions, should be continuously trained on AI and data issues as the technology develops. Fostering a broad, organization-wide culture of learning and innovation increases the likelihood that AI will be embraced strategically and enhances individuals’ ability to detect and mitigate the potential risks associated with this technology.

Harnessing AI

AI is transforming how our clients approach compliance and financial crime detection. Organizations that do not invest in this technology risk falling behind their competitors.

Integrating AI involves more than just picking an AI solution. Organizations need to take a strategic approach to adopting the technology, understand their business and financial crime risk exposure, and ensure data integrity. Strong controls, robust data quality and protection, AI governance, ongoing monitoring and testing, and investment in education and resources are all critical to understanding AI and data issues fully. Organizations that do so successfully will be able to harness the full potential of AI.

 

Source
1Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI | News | European Parliament (europa.eu).
*Findings from Kroll’s 2023 Fraud and Financial Crime Report.

2023 Fraud and Financial Crime Report

Kroll analyzed global data from 400 senior leaders across three continents to gain insights into the current financial crime landscape and learn how technology might stop the threat of economic, crypto and ESG crimes.
