Improving Traditional Anti-Money Laundering Platforms with Machine Learning
By Tim Mueller, MD, Navigant Consulting Inc., and Amin Ahmadi, Associate Director, Navigant Consulting Inc.
Anti-money laundering (AML) transaction monitoring (TM) is based on the premise that if financial institutions (FIs) apply appropriately designed rules to financial transaction activities, they can identify patterns and/or activities that represent potentially suspicious behavior. Recently, however, better coverage of AML risk indicators through expanded rules and increased regulatory oversight have significantly increased the volume of non-productive and productive alerts generated by TM systems. It has thus become more challenging to effectively separate the non-productive alerts from those representing suspicious activity, and to disposition the alerts efficiently.
Improving the efficiency of rules and introducing automation to the alert disposition process are two ways to meet these challenges. Such improvements, however, must be transparent and defensible under compliance department and regulatory scrutiny, as well as flexible and scalable for institutions with different customer and transaction footprints.
Fortunately, these requirements can be satisfied by artificial intelligence and machine learning (AI/ML), which train a set of algorithms on representative data in order to make intelligent inferences about current or future data. Below, we briefly discuss the application of ML/AI in two areas of transaction monitoring: rule tuning and alert prioritization.
Rule Tuning Leveraging ML/AI
The goal of parameter tuning is to determine a threshold which separates the “normal” transactional behavior from “suspicious” behavior. The quantities represented by a rule’s parameters, such as transaction amount, often behave in ways that violate the assumptions (e.g., normally distributed data) inherent in traditional approaches.
Cluster analysis is an ML/AI technique that can bypass this issue by segmenting observations into “clusters” based on a dissimilarity measure, such as Euclidean distance. It then determines the optimum number of clusters based on a specified performance metric. Clustering is an iterative process that converges when the dissimilarity measure is minimized.
An example of a clustering technique is the Partitioning Around Medoids (PAM) algorithm, which works by:
1. Randomly selecting k observations (for k clusters) as “medoids” (i.e., the most central point of each cluster);
2. Assigning each remaining observation to its closest medoid;
3. Calculating the distance between the observations and their assigned medoids; and
4. Repeating the process, swapping medoids with non-medoid observations, until the average distance over all clusters is minimized.
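The swap-based loop above can be sketched in a few lines of pure Python. The one-dimensional transaction amounts, the seed, and k are illustrative assumptions; in practice an FI would run PAM on its own transaction features, typically with a library implementation:

```python
import random

def pam(points, k, seed=0):
    """Partitioning Around Medoids via greedy swaps (illustrative sketch)."""
    rng = random.Random(seed)
    medoids = rng.sample(points, k)  # step 1: random initial medoids

    def total_cost(meds):
        # Each observation contributes its distance to its nearest medoid.
        return sum(min(abs(p - m) for m in meds) for p in points)

    improved = True
    while improved:               # step 4: repeat until no swap helps
        improved = False
        for i in range(k):
            for p in points:
                if p in medoids:
                    continue
                # Try swapping medoid i for observation p.
                candidate = medoids[:i] + [p] + medoids[i + 1:]
                if total_cost(candidate) < total_cost(medoids):
                    medoids = candidate
                    improved = True
    # Steps 2-3: assign each observation to its closest medoid.
    clusters = {m: [p for p in points
                    if m == min(medoids, key=lambda q: abs(p - q))]
                for m in medoids}
    return sorted(medoids), clusters

# Two well-separated groups of transaction amounts (synthetic).
amounts = [100, 110, 120, 130, 9000, 9100, 9200]
medoids, clusters = pam(amounts, k=2)
```

On this toy data the algorithm converges to one medoid inside each group, and the gap between the two clusters is a candidate threshold.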
One difficulty with parameter tuning for AML TM is that both the transaction amount and the number of transactions over a specified period depart significantly from a normal distribution. PAM is a particularly suitable clustering technique for tuning the parameters of TM rules because it remains accurate on non-normally distributed data. A technique like PAM segments the observations into well-separated, homogeneous groups, which in turn allows one to assume with confidence that observations within each group are strongly similar. Each group is represented by its medoid’s value, and the lines between clusters are potential demarcations between above-the-line (ATL) and below-the-line (BTL) observations.
An FI can also use cluster analysis to determine the parameters for a new TM rule by:
1. Selecting an initial boundary line (threshold) based on the institution’s risk appetite;
2. Sampling observations from clusters that fall immediately below and above the selected threshold; and
3. Submitting the sample for qualitative analysis (investigation) by the compliance team.
If the proportion of suspicious observations in BTL is less than the targeted value, and the proportion in ATL exceeds the targeted value, the threshold is acceptable. If these conditions are not met (e.g., both BTL and ATL exceed the targeted value), a new (more conservative) threshold should be selected and the process repeated.
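The acceptance logic above reduces to a simple comparison. In the sketch below, the target rate and the review results are hypothetical; an FI would substitute figures reflecting its own risk appetite and its compliance team's actual investigation outcomes:

```python
def evaluate_threshold(btl_sample, atl_sample, target_rate=0.05):
    """Decide whether a candidate ATL/BTL threshold is acceptable.

    btl_sample / atl_sample: lists of booleans from the compliance team's
    review of sampled observations (True = judged suspicious).
    target_rate is a hypothetical risk-appetite figure.
    """
    btl_rate = sum(btl_sample) / len(btl_sample)
    atl_rate = sum(atl_sample) / len(atl_sample)
    if btl_rate < target_rate and atl_rate >= target_rate:
        return "accept"
    # Otherwise (e.g., suspicious activity leaking below the line),
    # pick a more conservative threshold and repeat.
    return "tighten"

# Hypothetical review results for the clusters just below/above the line.
below = [False] * 49 + [True]        # 2% of sampled BTL items suspicious
above = [True] * 3 + [False] * 7     # 30% of sampled ATL items suspicious
decision = evaluate_threshold(below, above)
```

With 2 percent suspicious below the line and 30 percent above it, the candidate threshold passes; if the BTL rate had exceeded the target, the function would signal that a more conservative threshold is needed.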
Alert Prioritization Leveraging ML/AI
Prioritizing alerts based on risk allows FIs to make educated resourcing and staffing decisions, thus increasing the efficiency of a TM system. Alert disposition protocols can also be streamlined based on risk. For example, if FIs can identify and isolate higher risk alerts, they can assign investigation of those alerts to more senior staff and adjust the disposition protocols accordingly. Conversely, historically benign alerts can be outsourced or investigated by junior staff using adjusted disposition protocols.
FIs can use state-of-the-art ML/AI decision-tree algorithms such as bootstrap aggregating (bagging) and gradient boosting machines (GBM) to predict alert risk levels with acceptable accuracy (>80 percent). Dispositioned alerts from previous months can be used to “train” the model to predict the risk associated with the current month’s alerts. The number of months included in the training set must be large enough for reasonable precision and recent enough for acceptable accuracy.
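A toy sketch of the bagging idea, using one-split decision "stumps" in pure Python, is shown below. The two features, the prior-month labels, and the ensemble size are all invented for illustration; a real deployment would use full decision trees from a library such as scikit-learn and train on actual dispositioned alerts:

```python
import random

def fit_stump(X, y):
    """Pick the (feature, threshold, polarity) rule with fewest errors."""
    best_errs, best = len(y) + 1, None
    for f in range(len(X[0])):
        for t in sorted({x[f] for x in X}):
            for pol in (True, False):
                errs = sum(((x[f] > t) == pol) != label
                           for x, label in zip(X, y))
                if errs < best_errs:
                    best_errs, best = errs, (f, t, pol)
    return best

def stump_predict(stump, x):
    f, t, pol = stump
    return (x[f] > t) == pol

def bagged_stumps(X, y, n_models=25, seed=0):
    """Bootstrap aggregating: each stump trains on a resampled set."""
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_models):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        stumps.append(fit_stump([X[i] for i in idx], [y[i] for i in idx]))
    # Majority vote across the ensemble gives the risk prediction.
    return lambda x: sum(stump_predict(s, x) for s in stumps) * 2 > len(stumps)

# Synthetic "prior-month" alerts: [amount score, velocity score] -> high risk?
train_X = [[0.2, 0.1], [0.2, 0.3], [0.2, 0.2],
           [2.5, 2.1], [3.0, 2.8], [2.2, 3.1]]
train_y = [False, False, False, True, True, True]
model = bagged_stumps(train_X, train_y)
high_risk = model([2.8, 2.5])   # scoring a current-month alert
```

The resampling step is what makes this bagging: each weak learner sees a slightly different view of the training alerts, and the vote smooths out their individual errors.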
In conclusion, ML/AI can meet the challenges of AML transaction monitoring by offering powerful, defensible tools that augment traditional TM rules, improving both the effectiveness and the efficiency of detection and of alert disposition.