10/03/2021
Countering financial crime with the help of AI and machine learning post Covid-19
The majority of financial institutions’ efforts to combat financial crime are centred on traditional audit and compliance, customer risk assessments and due diligence (CDD), suspicious activity reporting (SARs) and transaction transparency. However, financial crime typologies have evolved in recent years – a trend exacerbated by the Covid-19 pandemic – as sophisticated criminal groups adapt to and integrate technological advances into their modi operandi. In response, many financial institutions are turning to artificial intelligence (AI), and particularly machine learning (ML), to strengthen their ability to detect and combat such threats.
Regulators have also begun encouraging financial institutions to explore new technologies and approaches to improve the identification and mitigation of financial crime threats. One such example is the UK Financial Conduct Authority’s (FCA) Digital Sandbox Pilot, whose participating FinTechs and financial institutions demonstrated, on 8-10 February 2021, the progress made in developing innovative solutions and products to detect and combat fraud and scams.
In the UK, February 2021 also saw the publication of the independent review led by Ron Kalifa OBE, which provides a roadmap for maintaining and enhancing the UK’s leading position in FinTech in light of the challenges posed by increasing competition, Brexit and the Covid-19 pandemic.
The report recommends implementing a “scalebox” to support firms focused on scaling innovative technology, by enhancing the regulatory sandbox and making permanent the digital sandbox environment designed by the FCA. According to the review, a continuation of the digital sandbox pilot would facilitate the assembly of coalitions on small and medium-sized enterprise (SME) finance, open finance and digital ID, which would contribute significantly to developing proofs of concept for new FinTech products.
On 10 June 2020, the US Financial Industry Regulatory Authority (FINRA) issued a white paper on AI in the securities industry, including an overview of broker-dealers’ use of AI applications for customer identification and the prevention of financial crime. The paper notes that AI-based tools can be used to detect potential money laundering, terrorist financing, bribery, tax evasion, insider trading, market manipulation and other unlawful behaviour. The UK’s FCA has published several papers on AI and ML, including Machine learning in UK financial services, published in October 2019, which presents the results of a joint survey with the Bank of England (BoE) in which two thirds of responding financial institutions reported that they already use ML. The EU Commission’s White Paper on AI, published on 19 February 2020, recognises the opportunities that AI offers Europe’s digital economy and sets out the Commission’s vision for promoting the uptake of AI in the EU while addressing the inherent risks.
How has Covid-19 accelerated the demand for automation?
The Covid-19 pandemic has significantly affected every process related to risk management and compliance, accelerating the transition to automated and cloud-based systems. With emerging pandemic-related threats such as cyber fraud, exploitation of cloud-based systems, money laundering through money mule networks, imposter scams and the subversion of government aid schemes, financial institutions are increasingly turning to AI-based approaches to make full use of the data available and to increase efficiency and effectiveness.
Key challenges faced by reporting entities in meeting their AML obligations during the Covid-19 crisis include:
- Restrictions have forced many organisations to adapt to remote working, and most client interactions, including AML procedures, now take place through online channels. These changes, combined with minimal staffing, have encouraged the adoption of cognitive robotic process automation (RPA)-based SAR investigation and reporting solutions. RPA not only automates routine processes but can also support AML compliance more broadly, for instance through retention resolution and beneficial ownership analysis.
- Organisations may have experienced difficulties in filing timely SARs as a result of reduced staff numbers, or may have filed incomplete or inaccurate SARs owing to delayed responses to enquiries. Cloud-based or blockchain technology and machine learning algorithms could be deployed to monitor accounts for suspicious activity and to develop predictive AML SARs (a minimal illustration of automated alert triage follows this list).
- Many countries have introduced new systems to facilitate customer due diligence, allowing individuals to be identified and verified remotely, through a smartphone or computer and with the help of digital identities, in order to authorise transactions. This has repercussions for supervisory bodies such as financial intelligence units, which now face large volumes of transactions to analyse, making the assessment process more cumbersome. It is therefore even more crucial that regulators continue to improve their monitoring systems and implement innovative solutions to tackle financial crime.
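For illustration only, the sketch below shows how one routine step of this kind of automation might look in practice: pre-scoring alerts before an analyst decides whether a SAR is warranted. All field names, weights and thresholds are hypothetical, and a production system would be considerably more involved.

```python
# Minimal sketch of rule-based alert triage ahead of SAR investigation.
# Every field name, weight and threshold below is hypothetical.
from dataclasses import dataclass

@dataclass
class Alert:
    customer_risk: float        # 0.0 (low) to 1.0 (high), from the CDD/KYC profile
    amount: float               # transaction amount in GBP
    cross_border: bool          # involves a higher-risk jurisdiction
    days_since_last_alert: int  # recency of previous alerts on the account

def triage_score(alert: Alert) -> float:
    """Combine simple signals into a single priority score between 0 and 1."""
    score = 0.4 * alert.customer_risk
    score += 0.3 if alert.amount > 10_000 else 0.0
    score += 0.2 if alert.cross_border else 0.0
    score += 0.1 if alert.days_since_last_alert < 30 else 0.0
    return round(score, 2)

# Alerts above a threshold are escalated to a human analyst first; the rest are queued.
alerts = [Alert(0.8, 15_000, True, 12), Alert(0.2, 900, False, 400)]
for a in sorted(alerts, key=triage_score, reverse=True):
    print(triage_score(a), "escalate" if triage_score(a) >= 0.5 else "queue")
```

In practice the scoring step would typically be a trained model rather than fixed weights, but the workflow is the same: the automation prioritises, and the analyst retains the decision on whether to file.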
Synthetic identity fraud: one of the fastest growing forms of financial crime during Covid-19
A white paper published in July 2019 by the US Federal Reserve defines synthetic identity fraud as a type of financial fraud in which perpetrators combine real and fictitious information – for instance, legitimate social security numbers with false names, addresses or dates of birth – to create new identities with which to defraud financial institutions, government agencies, the private sector or individuals. While this type of fraud is most frequently used to commit payments fraud, its consequences can extend to the denial of disability benefits, the rejection of tax returns and inaccuracies in health records.
Synthetic identity fraud is considered the “fastest-growing but little-understood” form of financial crime, and up to 95 percent of applicants identified as potential synthetic identities are not flagged by traditional fraud models.
Additionally, synthetic identity fraud is facilitated by the growing volume of exposed personally identifiable information (PII) resulting from data breaches, social engineering and personal information shared on social media. Between 2017 and 2018, the number of exposed PII records rose by 126 percent, with over 446 million records exposed through data breaches.
To prevent or minimise the impact of synthetic identity fraud, organisations are incorporating non-traditional data into their verification processes. AI and ML-based tools can prove effective in combatting synthetic identity fraud by synthesising large and disparate datasets to confirm identities and detect potentially false ones. In addition, leveraging third-party data has proven particularly powerful: a multi-layered approach can be created by implementing dedicated tools to verify phone records, social media activity and property records.
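As a rough illustration of such a multi-layered approach, the sketch below combines several third-party verification signals into a single synthetic-identity risk indicator. The signal names and weights are hypothetical; real tools draw on far richer data and calibrated models.

```python
# Hedged sketch of combining third-party checks into one risk indicator.
# Signal names and weights are illustrative assumptions, not a real scoring model.
def synthetic_identity_risk(signals: dict) -> float:
    """Each signal is True when the corresponding check raised a concern."""
    weights = {
        "ssn_issued_after_dob": 0.35,        # identifier issuance inconsistent with date of birth
        "no_phone_history": 0.20,            # no phone records linked to the identity
        "no_social_footprint": 0.15,         # no social media or web presence
        "no_property_or_address_history": 0.15,
        "shared_ssn_across_names": 0.15,     # same identifier seen under different names
    }
    return sum(w for key, w in weights.items() if signals.get(key, False))

applicant = {
    "ssn_issued_after_dob": True,
    "no_phone_history": True,
    "no_social_footprint": False,
    "no_property_or_address_history": True,
    "shared_ssn_across_names": False,
}
risk = synthetic_identity_risk(applicant)
print(f"risk={risk:.2f}", "refer for manual review" if risk >= 0.5 else "proceed")
```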
Using synthetic data to tackle fraud – Digital Sandbox Pilot
Synthetic data is a machine learning technique that uses real data to calibrate the parameters needed to generate realistic artificial datasets. Because synthetic data contains no personal information and discloses no actual customer transactions, it can be used to build highly accurate, privacy-compliant datasets for digital sandboxes, consistent with privacy regulations such as the GDPR.
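The core idea can be sketched very simply: estimate a small number of parameters from real records, then sample entirely artificial records from those parameters. The example below fits a log-normal model to illustrative transaction amounts; real synthetic data generators use far richer models, but the calibrate-then-sample principle is the same.

```python
# Minimal calibrate-then-sample sketch of synthetic data generation.
# The "real" amounts below are made up for illustration.
import math
import random
import statistics

real_amounts = [120.0, 85.5, 240.0, 19.99, 560.0, 75.0, 1300.0, 42.5]

# Calibrate parameters of a simple log-normal model from the real data.
logs = [math.log(a) for a in real_amounts]
mu, sigma = statistics.mean(logs), statistics.stdev(logs)

# Sample synthetic amounts: realistic in shape, but tied to no real customer.
random.seed(42)
synthetic_amounts = [round(random.lognormvariate(mu, sigma), 2) for _ in range(5)]
print(synthetic_amounts)
```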
In response to the challenges of Covid-19, the UK’s FCA, in collaboration with the City of London Corporation, launched a Digital Sandbox Pilot in October 2020 to support and augment innovation within financial services. The pilot provided a digital testing environment for 28 teams, offering support to develop their products and services at an early stage. The pilot focused on solutions exploring three key priorities exacerbated by Covid-19: preventing fraud and scams, assisting the financial resilience of vulnerable consumers, and improving access to finance for small and medium-sized enterprises.
The Digital Sandbox Pilot project provided participants with access to a set of tools to collaborate and develop financial services proofs of concept for anti-financial crime algorithms. Access to synthetic data assets to enable testing, training and validation of prototype technology solutions was the foundation of the sandbox. The teams involved in the pilot presented their progress publicly on 8-10 February 2021.
In terms of fraud and scam detection and prevention, twelve teams presented their innovative tools and solutions, which included:
(1) an AI explainability tool for financial services, centred on explainable fraud detection, which aims to enable rapid verification of repayments by using an anomaly detection algorithm to identify potentially fraudulent behaviour;
(2) a tool providing on-demand generation of enriched synthetic data to be utilised within AI systems without strong restrictions or risk of data leaks;
(3) a synthetic identity model using synthetic data aimed at predicting fake applications using fake identities;
(4) a tailored tool which passively analyses device location behaviour, combined with other analytical risk feeds such as transaction risk, telecom intelligence, customer profile, and behavioural biometrics to identify potential authorised push payment fraud;
(5) an approach that uses synthetic data to evaluate the risk score of a focal business entity by taking into account shared UBOs/directorships at third-party organisations and those third parties’ own risk scores, shared business relationships across subsidiary or parent companies, and direct or indirect transactional relationships;
(6) an open source transaction monitoring platform to detect fraudulent and money laundering activity;
(7) an approach based on multi-party computation, a cryptographic technique that enables multiple banks to perform analysis across the entire transaction network without having to share their individual transaction data (a simplified illustration follows this list);
(8) a solution that allows financial institutions to securely share knowledge regarding clients or transactions, without disclosing the underlying data or information, utilising zero-knowledge proofs and secure multi-party computation to jointly compute a risk score without sharing or exposing the parties’ inputs;
(9) a platform enabling customers, banks and regulators to securely share verified KYC data, in order to utilise the ecosystem to submit and share SARs with the NCA and across institutions that have a relationship with the respective entity or customer;
(10) a technology exploring how risks and identifiable behaviours could and should be flagged in real-time;
(11) an approach to building up adaptive learning algorithms for fraud detection; and
(12) a solution aimed at securing PII and biometric templates by converting the biometric templates into an irreversible transformed identity token.
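To illustrate the principle behind the multi-party computation approaches in (7) and (8), the sketch below uses additive secret sharing so that three hypothetical banks can learn a joint total without revealing their individual inputs. Real MPC protocols are far more sophisticated; this is only the underlying arithmetic idea, with all names and figures invented.

```python
# Hedged sketch of additive secret sharing, the simplest building block of
# secure multi-party computation. Bank names and counts are hypothetical.
import random

PRIME = 2_147_483_647  # arithmetic is done modulo a large prime

def share(secret: int, n_parties: int) -> list[int]:
    """Split a secret into n additive shares that sum to the secret mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

# Each bank's private input: how many transactions it has flagged for a shared counterparty.
flagged_counts = {"bank_a": 7, "bank_b": 2, "bank_c": 11}
n = len(flagged_counts)

# Each bank splits its input into shares; share i is sent to party i.
all_shares = [share(v, n) for v in flagged_counts.values()]

# Each party sums the shares it received; no party learns any single bank's input.
partial_sums = [sum(col) % PRIME for col in zip(*all_shares)]

# Combining the partial sums reveals only the joint total.
print(sum(partial_sums) % PRIME)  # -> 20
```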
What is AI and how can it help anti-financial crime efforts?
A widely accepted definition describes AI as a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation. Machine learning (ML) and cognitive computing are subsets of AI that involve algorithms which learn from data and give computers the capability to solve problems, generate predictions and optimise human processes. Specifically, cognitive computing gives a computer the ability to simulate and complement human cognitive abilities, while ML involves developing self-learning algorithms that analyse data, learn to recognise patterns and make decisions accordingly.
When applied to combatting financial crime, the areas where AI and cognitive solutions have the most significant impact are the triage and processing of suspicious activity and transaction monitoring alerts, due diligence assessments, payment fraud modelling, and surveillance investigations. With regard to transaction monitoring, for instance, ML can be used to automate parts of the review process in order to identify anomalies that indicate non-compliance, while recognising indicators and patterns of behaviour in order to build statistical models and estimate the likelihood of an occurrence.
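A minimal sketch of that statistical idea is shown below: model a customer’s normal behaviour, then flag transactions that deviate strongly from it. The data and threshold are illustrative only; deployed systems use richer features and trained models rather than a single z-score.

```python
# Minimal anomaly-flagging sketch for transaction monitoring.
# History, amounts and the z-score threshold are illustrative assumptions.
import statistics

history = [42.0, 55.0, 38.5, 60.0, 47.25, 51.0, 44.0, 58.0]  # customer's usual spend
mean, stdev = statistics.mean(history), statistics.stdev(history)

def is_anomalous(amount: float, z_threshold: float = 3.0) -> bool:
    """Flag a transaction whose z-score exceeds the chosen threshold."""
    z = abs(amount - mean) / stdev
    return z > z_threshold

for amount in [49.0, 61.0, 950.0]:
    print(amount, "flag for review" if is_anomalous(amount) else "ok")
```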
Risks and concerns associated with AI deployment
Despite the beneficial and transformative potential of AI, regulators recognise that certain uses of AI present inherent risks and challenges, particularly because the technology is not infallible, and human oversight is required.
The European Commission’s 2020 White Paper on AI states that “high-risk” AI applications generally meet two cumulative criteria: (1) the AI is deployed in a high-risk sector, such as healthcare, transport, energy and parts of the public sector, for which the Commission recommends a “specific and exhaustive” regulatory list; and (2) the AI application in the sector in question is used in such a manner that significant risks are likely to arise, for instance by producing legal or similarly significant effects concerning the rights of an individual or a company. The Commission further highlights that high-risk AI systems could be assessed against certain requirements, including human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability.
Of the seven requirements, the Commission stresses that human oversight is not specifically covered under current legislation in many sectors, and notes that the type and degree of human oversight are likely to depend on the context in which the AI system is deployed. According to the paper, human oversight could be achieved, for example, through review and validation of the AI system’s output; monitoring of the AI system while in operation, with the ability to intervene in real time and deactivate it; and the imposition of operational constraints at the design phase.
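For illustration, the sketch below shows one possible way those oversight measures could be wired into an automated decision pipeline: low-confidence outputs are routed to a human analyst, and a deactivation switch suspends automated decisions entirely. All thresholds and names are hypothetical assumptions, not prescribed by the White Paper.

```python
# Hedged human-in-the-loop sketch: thresholds and names are hypothetical.
AUTOMATION_ENABLED = True   # real-time deactivation switch
CONFIDENCE_FLOOR = 0.90     # operational constraint fixed at the design phase

def decide(model_score: float, model_confidence: float) -> str:
    """Return an action, escalating to a human whenever oversight requires it."""
    if not AUTOMATION_ENABLED:
        return "route to human analyst (automation disabled)"
    if model_confidence < CONFIDENCE_FLOOR:
        return "route to human analyst (low confidence)"
    return "block transaction" if model_score >= 0.5 else "allow transaction"

print(decide(model_score=0.72, model_confidence=0.95))  # automated decision
print(decide(model_score=0.72, model_confidence=0.60))  # escalated to a human
```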
Another limitation of AI deployment arises from the use of personal data, which is regulated in the UK, the EU and across the globe. For instance, there is a risk that AI may be used, in violation of data protection rules, by national authorities and other entities for mass surveillance. The paper specifically underlines that AI may be used to retrace and de-anonymise data, creating new personal data protection risks even with regard to datasets that do not themselves include personal data. Used in this way, AI could affect the rights to freedom of expression, to non-discrimination based on sex, racial or ethnic origin, religion or belief, disability, age or sexual orientation, and to the protection of personal data and private life.
In light of increasing concerns about AI being used by online intermediaries to prioritise information for users and perform content moderation, in a 22 February 2021 response to the House of Lords Select Committee on Artificial Intelligence, the UK government stated that it will continue to press ahead with legislation to establish a new online harms regulatory framework, which will apply to all services hosting user-generated content or enabling user interaction.
Other recent regulatory developments include: a report by the German Data Ethics Commission calling for a risk-based system of regulation covering everything from the most innocuous AI systems to a complete ban on the most dangerous ones; Denmark’s launch of a prototype Data Ethics Seal, a labelling programme for IT security and the responsible use of data; and Malta’s introduction of the world’s first voluntary certification system for AI, which aims to promote AI solutions developed in an ethically aligned, transparent and socially responsible manner.
Conclusion
AI offers financial institutions the opportunity to better identify hidden threats, particularly through advanced pattern and behaviour recognition, to meet regulatory requirements, and to resolve cost-efficiently the technical and process loopholes that criminals exploit. AI-based tools can analyse massive amounts of data and quickly identify anomalies, helping to uncover the sophisticated networks that criminals use to communicate. An AI ecosystem can also foster information sharing and transparency, while enabling ML models to surface valuable patterns and insights for complex analyses, especially in countering money laundering and terrorist financing.
However, a common European framework that aligns member states’ strategies is essential in order to establish trust in AI among consumers and businesses. With several EU member states criticising the current absence of such a framework, and national authorities implementing their own tailored AI supervision and regulation, there is a risk of fragmentation in the internal market, which could pose a serious threat to legal certainty and market uptake.
By Oana Gurbanoaia, Editorial Assistant at Aperio Intelligence oana.gurbanoaia@aperio-intelligence.com