FCA not targeting major regulatory changes with AI review

The Financial Conduct Authority (FCA) is not planning to make major regulatory changes from its review into the long-term impact of artificial intelligence (AI) on retail financial services.

The regulator highlighted that it expects to remain an outcomes-based regulator by 2030 with current arrangements being able to adapt and evolve to advances in the use of AI.

“Existing frameworks such as the Consumer Duty, the Advice Guidance Boundary reforms, the Senior Managers & Certification Regime (SM&CR), Operational Resilience requirements and the nascent Critical Third Parties (CTP) regime all provide a flexible foundation for AI to be implemented in retail financial services,” the FCA said.

In one example, it noted that it will assess “how relevant senior managers under SM&CR can continue to discharge their responsibilities for the deployment and maintenance of AI systems, and how these responsibilities might need to evolve under different future scenarios”.

The FCA also said it will learn from the progress of others, including the emerging approaches of other regulators and wider legal approaches to AI globally, by financial and other bodies.


Address vulnerable people

However, the regulator acknowledged that there must be consideration for how existing consumer protection rules and policy may be shaped by AI in the longer term, for example around vulnerability.

“AI has great promise for people who might struggle with their finances or come to take decisions at a time when they are vulnerable or need support,” it said.

“However, there might be new ways in which firms or others might be able to target vulnerable customers using AI.

“Similarly, the industry is trending towards hyper-personalisation to boost their competitive advantages by leveraging AI. What could this mean for existing regulatory expectations?”

Moreover, the FCA highlighted that to remain effective in an AI-enabled future, regulatory approaches will need to support innovation while responding to new anti-money laundering (AML) and consumer protection challenges, including autonomous fraud, AI-powered social engineering and identity compromise.

“Clear expectations on accountability, auditability and the safe deployment of high-risk AI could be increasingly important,” it added.
