News

Legal Brief: Impact of AI, ChatGPT on Banks, Financial Crime Risk Mitigation

By Leily Faridzadeh

Editor’s Note: The ACAMS moneylaundering.com legal team reviews speculation surrounding the use of artificial intelligence in the financial services industry and exploitation of the new technology for illicit purposes.

Speaking at an industry conference in San Antonio, Texas, on June 16, Acting U.S. Comptroller Michael Hsu discussed the rapid pace of development of artificial intelligence and corporate adoption of the technology since OpenAI’s release of ChatGPT in November.

Hsu said at the conference that the financial services industry has generally approached AI with caution, an opinion shared by Luxembourg’s primary financial regulator, CSSF, which found that 30 percent of 138 banks in the Grand Duchy that responded to a survey by the agency had begun using machine learning tools or other forms of AI.

Advancements in machine learning and other forms of AI have meanwhile driven expectations among U.S. regulators and the Financial Action Task Force that banks should consider using these new tools to strengthen their anti-money laundering programs.

Testifying before the U.S. Senate Judiciary Committee on May 16, OpenAI chief executive Sam Altman touted ChatGPT’s potential for streamlining transactional assistance, account management and other service-related tasks at banks, and subsequently freeing staff to focus on providing tailored advice to clients based on the tool’s data analytics and research capabilities.

Altman echoed researchers from Stanford University in opining that ChatGPT and other large language models, or LLMs, can also bolster detection of fraud, money laundering, and other types of risk and transactional anomalies.

In January 2023, researchers at Georgetown University weighed ChatGPT’s usefulness against its potential risks, seven months after the Federal Trade Commission, or FTC, warned Congress of the dangers posed by overreliance on AI to combat online fraud and other profit-motivated cybercrimes.

Similarly, the Consumer Financial Protection Bureau highlighted problems stemming from the expansive use of ChatGPT and similar AI-driven tools by banks, including the provision of inaccurate information and potential noncompliance with consumer protection standards.

More recently, the FTC shed light on the growing use of chatbots to engage in fraud and spear-phishing. According to the agency, ChatGPT’s ability to mimic natural language, for example, allows criminals to conduct sophisticated social-engineering schemes in which they more easily impersonate bank staff and harvest login details and other personal data from victims.

Sens. Ron Wyden (D-OR) and Chuck Grassley (R-IA) have also raised concerns to the IRS after reports showed how cybercriminals have used ChatGPT to generate deceptive messages in furtherance of tax fraud.

On the other side of the Atlantic, Europol has also raised an alarm over the use of LLMs in furtherance of financial crime, warning that even generally unsophisticated criminals can learn how to move and hide illicit funds by using ChatGPT to access “how-to” guides that simplify the understanding—and therefore the circumvention—of anti-money laundering controls.

Institutions such as the Wolfsberg Group and the Brookings Institution have promulgated guidelines for building guardrails around AI generally and ChatGPT specifically.

Other interested parties in both the public and private sectors have gone further in choosing to block access to LLMs altogether, while lawmakers in the U.S. and abroad have advocated for additional regulatory measures.

Industry participants and policy centers alike meanwhile continue to push for collaboration between banks and regulatory authorities in mitigating the risks posed by AI and LLMs.

Contact Leily Faridzadeh at lfaridzadeh@acams.org

Topics: Anti-money laundering, Fraud
Source: U.S.: OCC, Luxembourg, U.S.: Congress, FATF
Document Date: June 26, 2023