Competition law aspects of AI regulation

Responsibility for AI

A few years ago it felt like everyone in the competition law world was debating the impact of pricing algorithms, which allow companies to track each other’s prices digitally and respond automatically. For example, what if pricing algorithms started engaging in coordinated conduct that reduced price competition? Would that amount to anti-competitive behaviour in the same way it would if the coordination were carried out by humans?

This is an important question, as the risks of infringing competition law are serious. In addition to the reputational consequences, companies can be fined up to 10% of their turnover, directors can be disqualified, those harmed can bring actions for damages and, in the worst cases, those responsible can face criminal convictions.

The simple answer from the competition regulators was ‘yes’. Those responsible for programming the algorithm are responsible for its conduct. It is no defence to say “it wasn’t me, it was the algorithm that did it”.
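
To see why regulators take this view, it helps to make the mechanism concrete. The short sketch below (in Python) is purely illustrative and written for this article: the function name, figures and pricing rule are all hypothetical, but they capture the sort of automated repricing logic at issue, namely software that watches rivals’ prices and responds without a human in the loop.

    # Hypothetical example only: a naive automated repricing rule.
    # No line of this code "agrees" anything with a rival, yet if every
    # seller in a market runs similar logic, prices can converge and then
    # hold steady in lock-step without any human deciding to coordinate.

    def reprice(competitor_prices, cost_floor, undercut=0.01):
        """Slightly undercut the cheapest rival, but never sell below cost."""
        cheapest_rival = min(competitor_prices)
        return max(cheapest_rival - undercut, cost_floor)

    # Given rivals at 9.50 and 9.80 and a cost floor of 8.00, the rule
    # prices at 9.49; rivals running the same rule respond in turn, and
    # prices ratchet down to the common floor and sit there.
    print(reprice(competitor_prices=[9.50, 9.80], cost_floor=8.00))  # 9.49

On the regulators’ analysis, the predictable behaviour of such systems is attributed to the businesses that programmed and deployed them, just as if their employees had set the prices by hand.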

Context for competition regulation of AI

Now the hot topic is AI. This comes at a time when the EU has recently passed its ‘AI Act’, the UK government is progressing its Digital Markets, Competition and Consumers Bill (“DMCC”) and, reflecting wider public concerns, the UK government has hosted an international conference on AI safety at Bletchley Park. EU, UK and other competition regulators worldwide are also focusing on all things ‘big tech’, with ongoing investigations into the activities of Google, Apple, Amazon and Microsoft in various contexts.

Against this background, the primary UK competition regulator, the Competition and Markets Authority (“CMA”), is devoting considerable resources to engaging with a wide range of stakeholders to consider its approach to the development and use of AI. It recognises that AI will lead to rapid changes and will have a significant impact on competition and consumers. While many of these changes may be very positive, leading to increased productivity and economic growth, there are concerns that they may harm competitive processes and negatively affect consumers (especially as regards false information).

CMA’s latest thinking

Providing an insight into its current thinking, the CMA has recently published some general principles that it intends to apply when looking at Foundation Models (i.e. large machine learning models trained on vast amounts of data) in the context of competition and consumer protection issues:

Access – there should be ongoing and ready access to key inputs, particularly to allow competitors to develop alternative products and services;

Diversity – there should be a sustained diversity of business models, including both open and closed source models;

Choice – there should be sufficient choice for businesses so they can decide how to use AI;

Flexibility – there should be flexibility for businesses to switch or use multiple AI models according to need;

Fair Dealing – there should be no anti-competitive conduct, including anti-competitive self-preferencing, tying or bundling;

Transparency – consumers and businesses should be given information about the risks and limitations of content generated by AI models so they can make informed choices;

And, as an overarching principle:

Accountability – developers and deployers should be accountable for outputs provided to consumers.

This last principle takes us right back to the start of this article and the position reached on pricing algorithms. Developers and deployers will need to understand the technology they are deploying and form a clear view of its likely effects on the markets in which they operate, as they will be held responsible for its impact.

The key areas of risk identified by the CMA in its report include:

  • Mergers or acquisitions that could lead to a substantial lessening of competition in markets for the development or deployment of AI models (including ‘killer acquisitions’ – where a company acquires control of an innovative smaller competitor, often a start-up, to eliminate them as a possible source of future competition);
  • Firms using their leading positions in key markets to block innovative challengers that develop and use AI;
  • Undue restrictions on customers’ ability to switch between or use multiple AI model providers;
  • The development of ecosystems that unduly restrict choice and interoperability;
  • Firms with market power in AI development or deployment engaging in anti-competitive conduct, such as the tying or bundling of products and services; and
  • Customers receiving false or misleading content from AI services that affects, or is likely to affect, their decision-making.

This report is just the start of the CMA’s engagement with various stakeholders in this crucial area of the economy.

Additional regulation for large digital businesses

The CMA is also likely to gain additional powers relevant to this area once the DMCC is passed. Amongst other things, the DMCC, if enacted as currently drafted, will give the CMA new powers to regulate competition in digital markets, focused on businesses that have ‘strategic market status’ (“SMS”). The CMA will be able to designate a business as having SMS where it considers that the business has substantial and entrenched market power and a position of strategic significance in relation to its digital activity. Businesses designated as having SMS, which may well include those mentioned above that are the focus of various ongoing investigations, will potentially be subject to additional ex ante regulation of conduct that would not otherwise be caught by the standard ex post application of UK competition law or merger control. However, given the pace of change and the speed of technical developments, there are questions about whether the CMA will be able to put ex ante regulatory protections in place in time to prevent consumer harm.

Some argue that all of this additional regulation is unnecessary and indeed risks stifling innovation; others argue that it does not go far enough given the risks. What is certain is that we can expect to see rapid developments that change the way markets work and, in doing so, generate competition law risks for those involved.

International aspects

Another factor to consider, as recognised by the UK’s international AI safety conference at Bletchley Park, is that AI is no respecter of national boundaries. The DMCC changes will address this to a degree, extending the scope of the CMA’s powers against anti-competitive agreements and concerted practices from those “implemented in the UK” to those “likely to have an immediate, substantial and foreseeable effect on trade within the UK”. However, how all of this plays out in practice very much remains to be seen.

Conclusions

The expansion in the use of AI across multiple markets, and the disruption this causes across many economic environments, is bound to throw up a whole range of complex issues. From a competition law perspective, the current view appears to be that, aside from the proposed regime for SMS businesses, the existing tools are adequate to protect competition and consumers. Given the risks, those developing and deploying AI will therefore need to consider carefully the effects of their products on markets and address competition law compliance at all stages.

Our Commercial and Antitrust & Regulatory teams at Michelmores have the legal knowledge and market experience to advise both developers and deployers of AI technology to help them avoid the pitfalls and maximise the benefits. If it would be helpful to discuss any thoughts prompted by this article, do get in touch with Noel Beale or your usual Michelmores contact.