
On 1 June, the UK’s Competition and Markets Authority (“CMA”) issued its response (“Response”) to the UK Government’s (“Government”) consultation on the AI White Paper (“Paper”), setting out its views on the proposed framework through which the UK intends to regulate artificial intelligence (“AI”). The Response is the latest contribution from UK regulators, following the Information Commissioner’s Office’s (“ICO”) own response to the Paper in April 2023.

The Response is the latest in a line of CMA activity considering AI within its remit as the UK’s market regulator. In May 2023, the CMA published an initial review of the competition and consumer protection principles intended to guide the ongoing development and use of foundation models in the UK (“Initial Review”). As in the Initial Review, the CMA recognises that, as market regulator, it has an important role to play in ensuring that a practical and effective framework of consumer protections is in place for parties interacting with AI. The CMA acknowledges that protecting consumers, whilst ensuring competitive practices are promoted, is at the core of its responsibility. It is therefore unsurprising that the Response focuses predominantly on these themes.

CMA Response to AI White Paper Consultation

The CMA’s Response to the Paper aligns with many of the positions taken by the ICO in its own response. In particular, it emphasises the necessity of cross-regulator collaboration to manage effectively the challenges brought by new technologies, whilst allowing innovation to flourish. The Response is largely supportive of a regulator-led approach, but notes that central coordination is vital for this to function effectively. The Response is also positive about the principles established by the Government, recognising that many are directly applicable to the work of the CMA.

Four key messages:

In its Response, the CMA seeks to draw out four key messages:

  • the approach taken by the Government to implement non-statutory principles is supported;
  • consideration as to how these principles apply to current and future remits of the CMA has now commenced;
  • there is a recognised need for central coordination functions to support UK-wide implementation, monitoring, and development of the framework; and
  • cross-regulatory coordination and coherence are supported by the CMA, which encourages initiatives such as the Digital Regulation Cooperation Forum (“DRCF”).

The effectiveness of the non-statutory regime:

A non-statutory regime for the governance and regulation of AI is at the core of the Government’s approach, and the CMA supports this in the first instance. It agrees that the effectiveness of the non-statutory approach should be monitored before any move to a statutory one, although the Response suggests that the regulator envisages elements of direct statutory intervention at a later stage. For the framework to function, the CMA notes that regulators must rely on the existing duties and responsibilities within their applicable remits to support the Government’s approach to AI regulation, whilst highlighting that a new duty on regulators to have due regard to the principles could increase the effectiveness of those principles in regulating AI.

Initial thinking on applying the framework to current and future CMA work:

The Response embraces the deployment of the Government’s cross-sectoral principles as a foundation for the regulation of AI in the UK. In particular, the CMA highlights that these build on and support the Government’s intention to ensure that its regulatory approach is effective domestically whilst complementing the work of other international jurisdictions (thereby allowing interoperability for organisations that operate in multiple countries). The CMA does, however, highlight that further guidance on how these principles should be implemented would be welcome, particularly to ensure that implementation is compatible with the approaches of other regulators.

A Principled Approach

The CMA also comments on each specific principle and how it currently understands its application:

Principle 1: Safety, security, robustness

A key role of the CMA is preventing harm to consumers and competition. The CMA identifies that where AI use affects a consumer who may not be in a position to assess the technical functioning or security of the product, it may need to intervene to ensure that consumers’ interests are protected.

Principle 2: Appropriate transparency and explainability

Appropriate transparency and explainability are identified as well aligned with the CMA’s objectives. The CMA notes that consumers are already entitled to be told how their decision-making might be influenced by AI systems (as used by suppliers of products and services online), and not to be misled. The CMA also identifies the potential interrelationship between this principle and the application of the conduct requirement objective ‘Trust and Transparency’ set out in the Digital Markets, Competition and Consumers Bill.

Principle 3: Fairness

Fairness is also identified as highly relevant to the CMA’s remit. The CMA acknowledges both the benefits and challenges that AI poses for consumers: AI systems and algorithmic decision-making hold promise for enhancing consistency and fairness and reducing bias relative to unaided human decision-making, but there are clear risks of discriminatory outcomes arising from the use of AI. In further recognition of the need for cross-regulator working, the CMA acknowledges that it is not best placed to tackle all of these biases and calls for a context-specific approach to defining fairness.

Principle 4: Accountability and Governance

The CMA acknowledges that it already has a variety of tools available to hold legal persons responsible for their deployment of AI falling within its remit. It also points to the Digital Markets, Competition and Consumers Bill as likely to provide additional measures to assist. It does, however, note that it may need guidance on novel scenarios.

Principle 5: Contestability and redress

The CMA’s position on this proposed principle is that regulators must be adequately equipped with the resources and expertise to monitor potential harms within their remits, and with the powers to act where necessary.

Technically capable

The CMA is keen to showcase its existing and growing technical capabilities for understanding and managing AI, and takes the Response as an opportunity to highlight its Data, Technology and Analytics (“DaTA”) unit. Established in 2019 and made up of data scientists, data engineers, and technologists, the DaTA unit has led transformation activities at the CMA, collaborated with the Digital Markets Unit, and has responsibility for horizon scanning. Reinforcing its pro-cross-regulator view, the CMA uses the Response to offer support to other regulators in building their respective AI capabilities.

A coordinated effort

Given the cross-sector impacts of AI, the importance of coordination is a central theme of the CMA’s Response. The CMA confirms that it will support the Government in delivering a cross-regulatory AI sandbox, acknowledging the need to ensure that AI innovators are sufficiently supported.

The CMA makes several firm recommendations in this regard, including:

  • Duplication of effort should be avoided. The central coordination function should be established within the Department for Science, Innovation and Technology (“DSIT”), and if there are plans to form a new, independent organisation in the future, regulators should be consulted; and
  • Existing regulatory initiatives such as the DRCF should be utilised to evaluate how existing functions could adapt in response to the challenges posed by AI and enable further innovation and growth.
DLA Piper
