Through the publication of CSA Staff Notice and Consultation 11-348 Applicability of Canadian Securities Laws and the Use of Artificial Intelligence Systems in Capital Markets (the AI Staff Notice), the Canadian Securities Administrators (CSA) have put registered firms on notice that using AI in virtually any fashion may trigger additional regulatory expectations to demonstrate compliance with securities laws.
Best practices for the responsible use of AI
All firms should consider whether any of the following apply to them based on the wide-sweeping AI Staff Notice, which expands on the previously articulated best practices for the responsible use of AI by asset managers:
- Implement policies and procedures tailored to the type of AI and to the purpose for which the AI system is being used.
- Implement necessary controls and supervision, which may include the need for capital markets-related AI expertise.
- Deliver clear and meaningful disclosure of any use of AI systems that may directly affect the registerable services provided to clients and any associated risks.
- Investment Fund Managers (IFMs) should consider the very specific discussion in the AI Staff Notice pertaining to fund managers. This includes guidance around required disclosure in a fund’s offering documents if the use of AI systems assists in meeting the fund’s investment objectives and strategies; exercising caution when describing the use of AI in sales communications; and the need to address the unique risks of the use of AI systems in risk factors.
- All registrants should note that the continued applicability of securities regulation (including oversight, documentation, conflicts of interest, etc.) does not change based on the use of AI systems. It is the activity being conducted, not the technology itself, that is regulated.
No matter what you use AI systems for, there’s a to-do for you
There is a wide range of potential AI uses by registered firms, but every kind of AI system used requires consideration in light of the AI Staff Notice, which sets out a sliding scale of obligations.
AI systems used to assist with or carry out registerable activity (which is broadly defined) either (i) are unlikely to be considered compliant with securities laws, where the AI purports to substitute for a client's investment decision-making, or (ii) will trigger more robust policies, controls, supervision and disclosure than AI systems used to gain operational efficiency, such as supporting back-office tasks.
Nonetheless, the use of any AI system triggers the CSA's expectation that, at a minimum, firms will implement policies and procedures, be mindful of the risks of employing AI, and have governance systems in place to address those risks.
Key concepts to keep in mind for your 2025 AI regulatory refresh
Oversight
- The use of AI systems must be overseen by registrants who have ultimate responsibility and accountability for their obligations towards clients. These include:
- the duty to act fairly, honestly and in good faith in dealing with clients;
- compliance with securities regulatory requirements – regardless of the type of technology employed to meet those requirements; and
- responsibility for all functions outsourced to a service provider.
- Oversight may require establishing or enhancing AI-specific policies, controls, supervision, and human verification of sample processes/outputs. The CSA state that outsourcing services based on or enhanced by AI systems is likely to require employees or professional advisers with specialized skills in this area, as well as an understanding of registrant conduct requirements. This could further complicate registration-related matters with the regulators, given a tight labour market for individuals with the requisite investment management proficiency and relevant investment management experience.
- Governance and risk management are critical aspects of AI oversight, and related policies should be drafted to account for the unique features and risks of AI systems. Governance is an area of particular importance in AI regulation and may require additional consideration by registrants.
Explainability
- Because registrants remain ultimately responsible for the decisions they make and for understanding the systems they use, the quality of data being used by AI is critically important. Incorrect, incomplete, or biased data can lead to a failure to meet the standard of conduct and can create material conflicts of interest that must be identified, managed, controlled for, and disclosed.
- While certain AI systems are more capable than others, the CSA advise that registrants must balance the need for advanced capabilities against the need for explainability (i.e., the ability to understand the functioning of and explain the output of a given AI system).
- Record-keeping obligations are not negated by the use of AI systems, so firms should consider how and whether the use of AI will allow them to meet the record-keeping requirements under National Instrument 31-103. In certain instances, we foresee record retention matters posing issues or undermining the efficiency gained through the use of AI; this should form part of a firm's careful initial consideration of the risks and benefits of using AI for a particular purpose.
Disclosure
- Not unlike the CSA's evolving guidance on ESG-related investment fund disclosure, the AI Staff Notice is squarely focused on the completeness, accuracy, and importance of disclosure to investors about the use of AI systems, and on avoiding "AI washing" (making embellished, inaccurate, or misleading claims about a firm's use of AI). Note that the CSA may treat concerns about "AI washing" in the same manner they did "greenwashing" in the context of ESG disclosure and review disclosure to ensure this is not occurring. Firms making claims about their use of AI will want to carefully review that disclosure for accuracy and completeness.
- There is a sliding scale of disclosure required based on the materiality of the AI system being used, the type of registrant using it, and the purpose for which it is used. IFMs should give careful consideration to the ways in which offering documents and other disclosure are treated in the AI Staff Notice.
- The CSA believe that investor interest in AI requires robust disclosure, though query what level of disclosure will be useful to a broad swath of investors.
- We predict that conflicts of interest arising out of the use of AI will be of increasing importance to regulators. These types of conflicts will be especially targeted in any AI Staff Notice-related sweeps.
Additional tidbits
- The use of AI systems may trigger the requirement to file a change to a firm's primary business, target market, or products and services offered, if the use of AI may directly affect registerable services provided to clients.
- CSA Staff will consider the risks and benefits of a firm adopting AI for a particular purpose when a firm is applying for, or updating, registration information. Firms should be prepared to provide detailed information about their use of AI, noting that CSA Staff have signalled that they may impose tailored terms and conditions on a firm's registration, depending on the nature of the use of AI systems.
- Trade execution, KYC, onboarding, and client support may be accomplished through the assistance of AI, all with specific considerations and boundaries.
- In certain narrow situations, AI may be used to make automated decisions that are automatically executed, with human oversight, but such use will be considered highly problematic in the context of suitability and trading decisions unless there is a high degree of explainability, oversight, and controls, among other safeguards. The CSA urge consultation with Staff prior to any intended use of AI in these circumstances, signalling a pre-approval approach to the adoption of such systems.
Assess your next steps and consider providing stakeholder feedback
Firms using AI should consider whether their policies and procedures, compliance manuals, registration information forms, conflicts of interest inventories and disclosures, relationship disclosure information, offering documents and sales communications require updates in light of the AI Staff Notice.
Please contact your usual BLG lawyer for assistance in assessing the impact of the AI Staff Notice on your firm or for ways we can assist your firm in submitting responses to the consultation portion of the Staff Notice.
There are 10 questions in the consultation, presenting an opportunity for stakeholders to voice important feedback on the AI Staff Notice and to suggest avenues for further tailoring or modifying the CSA's approach to the oversight and regulation of AI systems. Comments are due by March 31, 2025.
By Borden Ladner Gervais LLP ("BLG")
“As Canada’s law firm, BLG provides high-value advice and advocacy to address our clients’ business challenges and problems. We go beyond legal to anticipate, consult and advise in a rapidly changing digital world.
We have extensive experience acting in specialized and complex deals and disputes. Vigilant, curious and collaborative, we harness technology and innovation to offer our clients exceptional service and value.
With 800+ lawyers across Canada, we serve clients throughout North America, Europe, and Asia. Offering expertise in intellectual property, disputes and corporate transactional matters, our connectivity gives our clients the next-level service required to achieve success in complex and international matters.”