A short history of Screening in Financial Crime Compliance
In years gone by, armies of analysts working for a bank would receive a daily list containing 2,000 names of key account parties that they needed to assess manually for risk. To do this, they would key the names into reputable data vendors' systems, which hold Sanctions and PEP lists, generating a number of hits. However, because the matching uses fuzzy logic governed by configurable parameters, each individual name could drum up hundreds of thousands of results. These results would then need to be reviewed and discounted manually.
To combat this, banks decided to automate the discounting logic: if the matching percentage of the fuzzy logic fell below a certain threshold, the hit would not be treated as a match. However, the volume of hits did not reduce. Next, banks sought to develop better screening logic than the data service providers, aiming to resolve alerts much faster using AI and machine learning. The catch was that banks still required the data from the data vendors, and none of the providers could guarantee 100 per cent coverage.
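The automation described above amounts to scoring each watchlist hit and auto-discounting anything below a cut-off. A minimal sketch of that rule, using Python's standard-library `difflib` as a stand-in for a vendor's fuzzy matcher (the watchlist names and the 0.85 threshold are illustrative assumptions, not values from the article):

```python
from difflib import SequenceMatcher

# Hypothetical threshold for illustration; in practice banks tune this
# against historical alert dispositions.
MATCH_THRESHOLD = 0.85

# Toy watchlist standing in for a vendor's Sanctions/PEP list.
WATCHLIST = ["Ivan Petrov", "Maria Gonzalez", "John Smith"]

def match_score(name: str, candidate: str) -> float:
    """Return a 0..1 fuzzy similarity score between two names."""
    return SequenceMatcher(None, name.lower(), candidate.lower()).ratio()

def screen(name: str, watchlist=WATCHLIST, threshold=MATCH_THRESHOLD):
    """Keep only hits whose fuzzy score meets the threshold; everything
    below it is auto-discounted, mirroring the banks' rule above."""
    hits = []
    for entry in watchlist:
        score = match_score(name, entry)
        if score >= threshold:
            hits.append((entry, round(score, 2)))
    return hits
```

As the article notes, this alone did not reduce hit volumes much: a threshold coarse enough to catch misspellings like "Jon Smith" still lets large numbers of near-matches through for manual review.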
This led to banks requiring multiple user licences for all of their analysts, as well as for others across the business who may want to run their own compliance checks. Add in the cost of hosting, and the result is an exorbitant cost of maintaining a traditional screening solution. Many banks are still stuck in this never-ending loop of extortionate expenditure on a makeshift solution that does not meet all of their needs.
So what should banks be striving for?
In an ideal world, a bank would have real-time streaming, integration with multiple data vendors, and the ability to define its own screening logic within one system. In addition, it would be able to manage alerts and easily use all of the algorithms within the system itself, through an intuitive, clean user interface. The system would feed the data centrally, enabling thousands of users to work in it at once against a single data list. Most importantly, the bank wouldn't need an extortionate user-licence-based setup.
While this has long been an aim for banks to achieve in-house, it is a huge challenge to gain access to all of these capabilities within one system in a way that both reduces cost and actually enables firms to make better decisions related to financial crime compliance. Currently, concern over growing regulatory penalties is increasing demand for labour to distinguish between entities with similar identifying information, because firms are struggling to develop or acquire sophisticated AI capabilities that would automate some aspects and dramatically reduce false positives.
These AI capabilities include machine learning and natural language processing, which can be used to find both exact and fuzzy matches against desired watchlists. In addition, a matching algorithm can match attributes belonging to the same entity and group entities together, while an advanced knowledge graph can aid firms in distinguishing and determining the most accurate descriptive values of an entity.
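The attribute-matching and grouping step described above can be sketched very simply. The heuristic below (record fields, the passport identifier, and the 0.9 name threshold are all illustrative assumptions, not the article's method) merges records into a single entity when they share a strong identifier, or when their names are near-identical and dates of birth agree:

```python
from difflib import SequenceMatcher

# Hypothetical name-similarity threshold for illustration.
NAME_THRESHOLD = 0.9

def same_entity(a: dict, b: dict) -> bool:
    """Heuristic: two records describe one entity if they share a strong
    identifier, or their names are near-identical and DOBs agree."""
    if a.get("passport") and a.get("passport") == b.get("passport"):
        return True
    name_score = SequenceMatcher(
        None, a["name"].lower(), b["name"].lower()
    ).ratio()
    return name_score >= NAME_THRESHOLD and a.get("dob") == b.get("dob")

def group_entities(records: list) -> list:
    """Greedily merge records into entity groups -- a toy stand-in for
    the matching algorithm and knowledge graph described above."""
    groups: list[list[dict]] = []
    for rec in records:
        for group in groups:
            if any(same_entity(rec, member) for member in group):
                group.append(rec)
                break
        else:
            groups.append([rec])
    return groups
```

A production system would replace this greedy pairwise comparison with learned matching models and a knowledge graph that also reconciles conflicting attribute values, but the grouping principle is the same.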
The current approach firms are taking puts the onus on them to pay huge sums for system licences, user licences and algorithms – and integrate data providers and systems themselves. In contrast, the future operating model would enable firms to turn into consumers of data instead – benefitting from external technology with better algorithms, an improved user interface and lower costs. This means a bank does not have to rely on separate providers for every part of the setup and can consolidate data across data providers with a data-only licence model.
What would the end result look like?
In terms of pure dollar value, we've seen a bank move to this model and make cost savings of at least 45 per cent, because it pays less for the data and because the technology means it no longer needs armies of analysts to do the work. In fact, the manual effort is reduced by almost 70 per cent.
Crucially, while there are many credible data providers that offer their own live screening solutions, banks will benefit from using independent technology so they can use multiple data providers and gain comprehensive coverage that any one data provider would not be able to offer.
It is for this reason that, under this new operating model, banks have the highest-quality data to screen against, which means greater accuracy and efficiency in resolving hits. It's time for financial services firms to move away from armies of analysts, spiralling costs and insufficient coverage; a future operating model with intelligent, integrated, AI-enabled data sources awaits.
About the Author: Harinder Singh Sudan is Senior Vice President of the Financial Intelligence Unit at BlackSwan Technologies. He has close to 20 years of industry experience in banking and financial services and leads BlackSwan Technologies' FIU practice globally.