The CMA puts AI under the microscope

28.06.2023

The Competition and Markets Authority (CMA) recently launched an initial review of the AI market, inviting views from interested parties. The CMA is examining competition and consumer protection considerations in the development of AI foundation models. This initiative provides useful insight into the potential future direction of AI regulation in the UK.

Legal and technological context

This review follows the UK government’s March 2023 AI White Paper, which adopted a light-touch approach to regulating AI. Instead of opting for new legislation focussed on AI, the White Paper put forward five overarching principles that regulatory bodies should consider to facilitate the safe and innovative use of AI in the industries they monitor. These are: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress. The White Paper anticipates that regulatory guidelines and other soft law measures will be the preferred approach for sectoral regulators to take, with a degree of central co-ordination to mitigate potential cross-sectoral overlap. In the meantime, a legislative option is to be held in reserve in case the government deems it necessary in the light of market developments.

The CMA’s initial review is therefore one piece of the UK’s developing regulatory approach to governing AI. The focus on foundation models is significant, as the CMA’s review is one of the first initiatives by a UK regulator to focus on the technology that underpins ChatGPT and other generative AI products that have seized the limelight during the first half of 2023.

The CMA’s initial review

As explained in the review launch document, the CMA will focus on foundation models, defined as a type of AI technology ‘trained on vast amounts of data that can be adapted to a wide range of tasks and operations’. The term itself was first proposed by machine learning experts from Stanford University in a 2021 paper. Foundation models can, for example, be used to create chatbots, write code, and recognise and even generate images. The CMA considers that foundation models could ‘transform much of what people and businesses do across the spectrum of human activity’, and the aim of its review is therefore to create an understanding of:

  1. How competitive markets for foundation models and their use could evolve;
  2. What opportunities and risks these scenarios could bring for competition and consumer protection; and
  3. Which principles can best guide the ongoing development of these markets so that the innovation seen so far in the market is sustained.

The CMA has chosen to focus its review on three themes:

  1. Competition and barriers to entry in the development of foundation models – the CMA will explore potential barriers to entry (including access to data, computational resources, talent and funding), and ways that foundation models could disrupt or reinforce the position of the largest firms and the distribution of value in these systems.
  2. The impact foundation models may have on competition in other markets – since foundation models are likely to become an input to other markets, the CMA will examine how the market could develop in ways giving rise to competition concerns (for instance, if access to foundation models becomes necessary to compete in a market, but such access is restricted unduly or controlled by a small number of private companies who face insufficient competitive constraint).
  3. Consumer protection – the CMA is interested in the risks foundation models pose to consumers (including false and/or misleading information), and will explore the extent to which current market practices are leading to safe, accurate foundation models consistent with consumer protection obligations.

The CMA has also listed several key areas which will not be considered in its review. These include compliance with intellectual property laws, compliance with data protection laws, potential labour market impacts of foundation models and a detailed assessment of the supply of semiconductors and advanced chips.

In carrying out its review, the CMA will issue information requests to stakeholders including academic and industry labs developing foundation models, developers, researchers, suppliers of inputs like compute and data, customers, investors, and other industry participants and commentators.

In terms of next steps, the CMA will publish a short report setting out its findings in early September 2023.

An early indicator of increased AI regulation in the UK?

In a rare move, the CMA is carrying out its review under section 5 of the Enterprise Act 2002 (its general review function), rather than conducting a formal market study. This means that the CMA can take a flexible approach, as it will not be bound by the usual statutory time limits or committed to any particular outcome.

Within the UK’s regulatory landscape, the CMA’s review may illustrate how well the government’s approach to AI regulation is working. In its recently published response to the government’s AI White Paper, the CMA expressed broad support for the decision to leverage existing regulatory regimes to tackle AI while also creating a coordination function for monitoring and support. The CMA’s response also signals the importance of its initial review of AI foundation models to its implementation of the government’s approach. For example, the response states that ‘any guiding principles arising from this work will help inform how we plan to implement and apply the government’s framework in our remit’. The CMA also hints that its AI review may ‘result in recommendations to other regulators, or to government, with respect to its approach to AI regulation as outlined in the White Paper’.

The CMA’s examination of AI foundation models also suggests that the watchdog is continuing to focus on big tech. The initiation of the review came just a week after two important developments in this area: the CMA’s decision to block Microsoft’s £55 billion deal to purchase games developer Activision Blizzard, and the unveiling of the Digital Markets, Competition and Consumers Bill.

Looking further afield, the CMA’s review may also be an early indicator of heightened regulatory scrutiny of AI across multiple jurisdictions. In the US, senators recently introduced two bills on AI, one requiring US government agencies to tell people when the agency is using AI to interact with them, and the other aiming to establish an Office of Global Competition Analysis to ensure the US remains at the forefront of AI development. Andreas Mundt, President of Germany’s Federal Cartel Office (FCO), also recently weighed in on AI, expressing a fear that AI could harm competition and signalling that the FCO will soon open a case tackling this issue.

Given the speed of uptake of AI technologies in recent months, and the growing clamour for regulation to ensure that AI is used for the benefit of humanity, we can expect more regulators to put AI under the microscope.