Ten principles for the responsible use of artificial intelligence (AI) by Québec public bodies

The year 2024 is pivotal for artificial intelligence regulation around the world, with the enactment of the European Union’s Artificial Intelligence Act, already viewed as the gold standard, much as the General Data Protection Regulation came to be for data protection.

Canada showed leadership of its own in 2022 with the introduction of Bill C-27 and its proposed Artificial Intelligence and Data Act (AIDA). However, since its introduction, the bill’s legislative progress through the House of Commons has been particularly slow; the prospect of federal elections in just over a year now calls into question the bill’s very future.

Where does Québec stand on this issue?

The state of AI regulation in Québec

Despite the absence of a specific bill regulating AI at the provincial level, Québec has not been standing idle. In fact, Québec is a pioneer in the field of AI governance and ethics, having adopted the Montréal Declaration for a Responsible Development of Artificial Intelligence in 2018. The product of a large-scale citizen co-construction process, the Montréal Declaration provides an ethical framework for the development and deployment of AI based on 10 principles: well-being, autonomy, privacy and intimacy, solidarity, democracy, equity, inclusion, prudence, responsibility and a sustainable environment.

More recently, in February 2024, the Conseil de l’innovation du Québec tabled a report listing 12 recommendations, including one urging the government to adopt framework legislation that would regulate the development and deployment of AI throughout society.

Statement of Principles for the Responsible Use of Artificial Intelligence by Public Bodies

The latest innovation in the regulation of AI in Québec comes from the Ministère de la Cybersécurité et du Numérique (MCN), which has adopted, under section 21 of the Act respecting the governance and management of information resources of public bodies and government enterprises, a Statement of Principles for the Responsible Use of Artificial Intelligence by Public Bodies (available in French only).

The 10 guiding principles established by the MCN to guide the use of AI by public bodies are:

  1. Respect for individuals and the rule of law: Responsible use of AI systems must respect the rule of law, individual rights and freedoms, the law and the values of Québec’s public administration.[1] More specifically, public bodies must ensure that AI systems’ learning data and other data are lawfully collected, used, and disclosed, taking into account applicable privacy rights. For example, the Act respecting Access to documents held by public bodies and the Protection of personal information provides for the requirement to produce a Privacy Impact Assessment (PIA) for the acquisition or development of an AI solution that involves the collection, use and disclosure of personal information.
  2. Inclusion and equity: Responsible use of AI systems must aim to meet the needs of Québecers with regard to public services, while promoting diversity and inclusion. Any AI system must therefore minimize the risks and inconveniences for the population, and avoid causing a digital divide. Staff members of public bodies must be able to benefit from the necessary support through the introduction of mechanisms and tools, particularly when jobs stand to be transformed by technological advances.
  3. Reliability and robustness: Measures must be taken to verify the reliability and robustness of AI systems. Remedial and control measures must also be put in place to ensure that these systems operate in a stable and consistent manner, even in the presence of new disturbances or scenarios. Data quality is a key element of the reliability and robustness of an AI system; in particular, the data must be accurate and free of bias that could pose risks, cause harm, or reinforce various forms of discrimination.
  4. Security: Responsible use of AI systems must comply with information security obligations. Security measures must be put in place to limit the risks involved and adequately protect the information concerned.
  5. Efficiency, effectiveness and relevance: Responsible use of AI systems should enable citizens and businesses to benefit from simplified, integrated and high-quality public services. The use of such systems should also aim at optimal management of information resources and public services. For example, an organization can demonstrate its adherence to this principle with an opportunity case that shows how AI is critical to solving a problem or improving a process.
  6. Sustainability: Responsible use of AI systems must be part of a sustainable development approach. For example, an organization can demonstrate its adherence to the principle by conducting an assessment of the environmental impacts of its AI project.
  7. Transparency: Public bodies should clearly inform citizens and businesses about the nature and scope of AI systems, and disclose when they are used, so as to promote public trust in these tools. For example, an organization can demonstrate its adherence to the principle by providing signage to indicate to users that the service they receive is generated by an AI system.
  8. Explainability: Responsible use of AI systems means providing citizens and businesses with a clear and unambiguous explanation of decisions, predictions, or actions concerning them. The explanation should provide an understanding of the interactions and their implications for a decision or outcome.
  9. Responsibility: The use of AI systems entails responsibility, including responsibility for their proper functioning. Putting in place control measures and adequate governance, including human oversight or validation, will contribute to this.
  10. Competence: Public body employees must be made aware of the standards of use, best practices, and issues that may arise throughout the life cycle of AI systems in the performance of their duties, in addition to fostering the development of their digital skills. Teams dedicated to the design and development of AI solutions must develop cutting-edge expertise if the public administration is to deliver the intended simplified, integrated, and high-quality services. For example, an organization can demonstrate its adherence to the principle by providing training on AI-use best practices to its staff prior to deployment.

In addition, the MCN specifies that these principles apply even when a public body uses service providers or partners to develop or deploy an AI system; each organization is therefore responsible for ensuring that its suppliers and partners adhere to these principles at all stages of a project involving the integration of AI.

Towards a harmonized framework for AI? 

It is interesting to note that the principles outlined by the MCN are very similar to those identified by the federal government in its companion document to the Artificial Intelligence and Data Act (AIDA).

AIDA’s risk-based approach is precisely designed to align with evolving international standards in the field of AI, including the European Union’s Artificial Intelligence Act, the Organization for Economic Co-operation and Development’s (OECD) AI Principles, and the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework in the United States. The MCN also uses the OECD definition of “artificial intelligence system.”

Given the obstacles faced by federal Bill C-27, it is encouraging to see the Québec government drawing on this same approach to regulation, aligned with international standards on AI. As more and more public bodies explore AI opportunities to improve their operations and the delivery of public services, the MCN Statement of Principles provides clear guidance that can be applied to all sectors of public administration, regardless of the nature of the activities or data involved.

Finally, to operationalize these principles, public bodies can consider putting in place an AI governance framework to strengthen their resilience in integrating AI.

Contact us

BLG’s Cybersecurity, Privacy & Data Protection group closely monitors legal developments that can inform organizations about data protection requirements in Canada. If your organization has questions about implementing an AI governance framework, please reach out to the contacts below or any other group members.


[1] The MCN refers to the Déclaration de valeurs de l’administration publique québécoise (available in French only).

By Borden Ladner Gervais LLP (BLG)

“As Canada’s law firm, BLG provides high-value advice and advocacy to address our clients’ business challenges and problems. We go beyond legal to anticipate, consult and advise in a rapidly changing digital world.

We have extensive experience acting in specialized and complex deals and disputes. Vigilant, curious and collaborative, we harness technology and innovation to offer our clients exceptional service and value.

With 800+ lawyers across Canada, we serve clients throughout North America, Europe, and Asia. Offering expertise in intellectual property, disputes and corporate transactional matters, our connectivity gives our clients the next-level service required to achieve success in complex and international matters.”

 
