The FCA’s Artificial Intelligence (“AI”) update [1] sets out the FCA’s approach to ensuring the safe and responsible use of AI in UK financial markets. Firms that operate in the financial services sector and use AI in their business should evaluate and understand the FCA’s approach and expectations to avoid regulatory consequences. Here are some of the key points from the FCA’s update and their implications for your business.
AI Regulation
There is currently no general statutory regulation of AI in the UK. The UK General Data Protection Regulation (“UK GDPR”) and the Data Protection Act 2018 govern the collection and use of personal data and place some restrictions on automated decision-making.
The European Union, however, has implemented its EU AI Act, which came into force in August 2024. Meanwhile, the Institute of Electrical and Electronics Engineers (IEEE) is developing its IEEE P7000 series of standards, which relate to the ethical design of AI systems.
The FCA’s AI update adopts a principles-based, outcomes-focused approach that aligns with the UK government’s pro-innovation AI strategy: the aim is to nurture innovation while mitigating risks to consumers and market integrity. The update contains commentary on areas such as data security, operational resilience, and fairness in AI applications.
Legal Implications for Firms Using AI
The update identifies several key areas for regulated firms to focus on, namely:
- Accountability & Governance – Firms must have robust governance structures for AI. Senior managers under the Senior Managers and Certification Regime (“SM&CR”) will be fully accountable for AI deployments. Firms are also expected to establish clear accountability across the AI lifecycle, which can be a complex task as AI systems evolve and grow in scope.
- Transparency & Fairness – The FCA stresses the importance of transparency in how AI systems operate, particularly under the Consumer Duty. For example, firms must ensure that AI algorithms do not create unfair outcomes or amplify bias; this is a regulatory expectation, not merely “good practice”. Firms should also ensure that AI decisions are explainable and that mechanisms exist for contesting AI-driven outcomes, helping to meet the FCA’s call for transparency and contestability.
- Operational Resilience & Outsourcing – The use of AI by Critical Third Parties is under FCA scrutiny. Firms that rely on third-party AI services must ensure compliance with SYSC 8 and SYSC 15A to manage outsourcing and operational resilience risks effectively.
Next Steps
- Review governance structures – ensure senior management is equipped to oversee AI systems, with clear accountability set out in SM&CR Statements of Responsibilities.
- Strengthen risk management – implement processes and timelines for regular audits of AI systems to identify and mitigate potential bias and risks to operational resilience.
- Engage with innovation sandboxes – take advantage of the FCA’s Regulatory Sandbox and Digital Sandbox to test AI applications in a controlled environment.
- Prepare for future regulatory changes – the FCA will continue to refine its regulatory approach to AI, and further guidance and policy statements from the UK government are likely. Firms should therefore stay proactive, regularly reviewing updates and engaging with legal advisers so they can adapt quickly to any new requirements.
Contact us for expert legal advice on compliance with your FCA duties when using AI in financial services.
[1] https://www.fca.org.uk/publications/corporate-documents/artificial-intelligence-ai-update-further-governments-response-ai-white-paper