Who’s going to drive you home? Liability for autonomous vehicles: Part 2



In part one of this two-part analysis, we considered recent developments in the UK regarding autonomous vehicles (AVs) and how the insurance position has recently been clarified by the Automated and Electric Vehicles Act 2018 (the AEVA). We also considered the problems with applying criminal law and road traffic offences to AVs. By way of recap, the AEVA essentially creates a single-defendant scenario for injured parties, whereby the insurer of an AV will be liable for the damage occurring, with any apportionment of liability between manufacturers or other parties taking place thereafter and behind the scenes. Such an approach is welcome news for road users, as it allows for quick recovery of damages without the injured party having to concern themselves with lengthy investigations into the attribution of liability among other parties.

Notwithstanding this easy-access road map for the recovery of damages, insurers who seek to recover from other parties will still have to prove their case, in the absence of new and specific legislation, within the constraints of existing law. In this part, we discuss the challenges arising in practice from the application of the current product liability framework to AVs and consider whether that framework requires revision.

Product liability – the traditional approach

Prior to the introduction of AVs, it was well established where product liability might arise in the context of cars. Where damage or injury arises as a result of faulty parts, a design failure, a defect or a failure to warn, this will generally give rise to potential liability on the part of the manufacturer of the vehicle (and the manufacturer of any defective component part, if applicable). For injury or damage arising from the operation of the car, liability will normally rest with the driver, with the driver’s insurance company usually picking up the tab. The AEVA has clarified that insurers of cars “driving themselves”[1] on roads or other public places that are involved in an accident causing damage will be liable for that damage. The AEVA permits the insurer to then seek recovery from other parties; in most instances this will be the car manufacturer or the software manufacturer, while in other cases it may be the manufacturer of a particular part of the AV.

There are three grounds for a potential product liability claim:

  1. Strict liability under the Consumer Protection Act 1987 (the CPA), which implemented the EU Product Liability Directive (85/374/EEC);
  2. Negligence; and
  3. Contract/the Consumer Rights Act 2015.

We will consider the application of each to AVs in turn below. However, before doing so, the first question is whether the entirety of the AV is actually a product.
While there is unlikely to be any debate over the physical car or its physical parts being a product for the purposes of product liability claims, the answer is less straightforward for the car’s software. There has been an ongoing debate over whether software is a product, a service, or both. The definition of a product in the CPA (discussed in detail below) is: “any goods or electricity and (subject to subsection (3) below) includes a product which is comprised in another product, whether by virtue of being a component part or raw material or otherwise.”[2] For the purposes of this article, it is assumed that guidance[3] awaited next year will confirm that software can be considered a product for the purposes of product liability legislation. Of course, depending on how the software operates in or interacts with the AV, it may be considered both a service and a product.

Consumer Protection Act 1987

The CPA provides a strict liability regime, meaning that a producer will be strictly liable in respect of defective products: the injured party does not have to establish fault, only that a defect caused the loss.

The CPA defines a product as defective when the safety of the product is “not such as persons generally are entitled to expect” taking into account “all the circumstances” including:

  • the manner and purposes for which the product had been marketed, and any instructions given;
  • what might reasonably be expected to be done with the product; and
  • the time the producer first put the product into circulation.

If the damage is caused wholly or partly by the defect, the producer of the product shall be liable for the damage.

The definition of what constitutes a defect has been the subject of a number of important cases over the last two years. In a landmark judgment on the meaning of defect in Wilkes v DePuy[4], the judge (Mr Justice Hickinbottom) departed from the approach established by Mr Justice Burton in A v National Blood Authority[5] over 15 years ago. The judge in Wilkes considered that the court must maintain a flexible and holistic approach to the assessment of the appropriate level of safety, including which circumstances are relevant and the weight to be given to each, in each case.

The flexible and holistic approach in Wilkes was recently endorsed in Gee v DePuy[6]. In Gee, the court (Mrs Justice Andrews) held that a defect is determined by the level of safety that the public was entitled to expect at the time the product was first introduced to the market, and that the court could have regard to everything now known about the product, irrespective of whether that information was available at the time it was put on the market. The court adopted the flexible approach in Wilkes when deciding which circumstances are relevant, and the weight to be given to each, in establishing whether a product is defective under the CPA. Following Wilkes, the court held that it was entitled to have regard to all circumstances which may have a bearing on the assessment of the safety of the product, and that those circumstances might differ depending on the product and the nature of the complaint. Furthermore, proof of a causal connection between defect and damage should not be attempted without first ascertaining whether there is a defect. The court could consider the product’s cost, its risk-benefit profile, the avoidability of the safety risk, the existence of a learned intermediary, and the information and warnings passed on to such an intermediary. Compliance with regulatory requirements could also carry considerable weight in determining safety. This flexible approach to establishing a defect will be of particular importance in relation to AVs.

So how would this apply to AVs? AVs are products and, if found to be defective, their manufacturers (producers) will be strictly liable under the CPA. There is nothing in the CPA to suggest that the liability regime it provides for could not be applied to AVs. However, one of the limited statutory defences available to manufacturers under the CPA may be particularly relevant to AVs: the so-called development risks defence, namely that at the time the particular AV was first put into circulation, the state of scientific and technical knowledge was not such as to enable the existence of the defect to be discovered.

The difficulty in applying the CPA to AVs lies in assessing what level of safety consumers are entitled to expect. While this will be straightforward for most domestic products, consumers are unlikely to have any significant understanding of the technology used in AVs. This lack of understanding will bring with it, in practice, an increased obligation on manufacturers to warn consumers of the risks of use, the limits of use and the instructions for use. Manufacturers of software used in the vehicle will also need to provide consumers with all relevant information regarding use and risks. In the event of any injury arising from an alleged defect, the information provided to consumers will be heavily scrutinised in legal proceedings, and the assessment of what information is provided to consumers is therefore likely to be key.

A further challenge with the application of the CPA to AVs is that, given the fast pace of change of these new technologies, it may also be difficult to retrospectively determine what level of safety consumers were entitled to expect at a particular point in time.

Another of the difficulties currently in focus relates to moral dilemmas. Consider the scenario of a fully autonomous vehicle (with no driver control or intervention function at all) driving at high speed when an elderly lady steps out on one side and a woman with a pram steps out on the other. If hitting at least one of these pedestrians is inevitable, what action should the car take? Would the AV have to swerve to avoid hitting one of the pedestrians in order for it to be “safe” or not to have a “defect”? Would failing to swerve to try to avoid one of them mean it is defective? Could it instead be argued that by at least saving one individual from injury the AV was safe? How can the vehicle’s decision making be judged when it is unclear what a human driver would do in that scenario? These are all difficult issues that may one day have to be thrashed out in a courtroom. In the interim, however, large studies are being carried out by MIT, using a game to gather data on how humans react to similar situations[7]. Results published so far demonstrate cultural differences in how and why people choose to spare the lives of others, whether based on gender, social status or age[8]. How this will translate into software remains to be seen.

Furthermore, an additional safety concern that has arisen recently is whether software written, for example, in the USA will be safe for AVs in the UK, given that the software has not been exposed to vehicles specific to UK roads (for example, red buses and hackney cabs). In addition, how will the software react to or deal with driving practices that differ between countries? For example, in some US states you can turn right on a red light, whereas this is prohibited in the UK. Furthermore, what if someone takes the car on the ferry from the UK to France: how will the software deal with driving on the opposite side of the road, and will this even be possible for AVs? What happens when a car crosses the border between the Republic of Ireland and Northern Ireland: will the software know that there are different rules of the road and speed limits?

It is therefore important to note that just because software or an AV is considered appropriately safe in the USA, or indeed in certain EU Member States, does not mean that it can easily be used or replicated in the UK. This could be an important issue in determining “defect” and the level of safety consumers are entitled to expect, and manufacturers may find themselves liable if they have not taken into account the different vehicle types or driving practices in different countries that the software should be programmed to recognise or deal with.

Finally, the CPA applies more readily to completely driverless cars: for AVs that are not fully driverless, there will always be the added complication of the actions of the driver, and in particular their possible contributory negligence, which may complicate the attachment of strict liability.


Negligence

Claims for product liability can also be made in negligence. Manufacturers owe a duty of reasonable care to those who might foreseeably use their products. Negligence claims generally allege fault with regard to the functions of a product, namely design defect, manufacturing defect and failure to warn. Unlike the CPA, negligence turns on the reasonableness of the conduct of the producer rather than on a consumer’s expectations. However, the test of reasonableness as applied to AVs or driverless cars is not without pitfalls. In such uncharted territory, and with rapidly developing technologies, what is considered reasonable could be changing on a daily or weekly basis. Conducting a look-back exercise to determine what was reasonable at a particular point in time may be extremely difficult. Because we are dealing with such new, evolving and previously unused technologies, will reasonableness need to take on a more lax or diluted meaning?

Issues of alleged design defect and duty to warn require consideration of the inherent risks of a product that is manufactured as intended. In these cases, the manufacturer’s negligence will likely be determined in accordance with industry standards. A design may be negligent in not complying with a standard because it inadequately provides for known risks, or causes unknown unacceptable risks.

Where there is a known risk, and an allegation of failure to warn, the producer may rely on the defence of the “learned intermediary”. This essentially involves the producer discharging its duty to warn by giving the warning to an intermediary (usually the seller or a professional user) rather than directly to the consumer. The defence generally arises where a warning has been given to retailers or other professional users rather than to the ultimate consumer, particularly where the end user may not be sufficiently qualified to understand the warning given or where it would be impracticable to warn each individual user. The principle is of great significance in the pharmaceutical industry; however, one can see its potential relevance to sellers of AVs.

The extent of warnings which the manufacturer and the software developer will need to convey requires careful consideration. For example, is the software manufacturer under an obligation to warn the consumer of the risks of failing to implement a software update? Is the manufacturer of the car obliged to do so too? Will industry standards be in a position to keep up with the pace of change to ensure they are fit for purpose when it comes to determining what warnings a consumer should receive? Should passengers in (as opposed to owners of) the AV also receive the same or some warnings too? If we took the example of a passenger in an AV taxi, what warnings should they be provided with and when? Will the AV taxi owner be liable in those circumstances?

Establishing fault in the event of a defect in an AV may also be a complicated task as responsibility may rest with a number of parties. For example, if a hardware component such as a sensor was at fault, a negligent repair or negligent implementation of the system, or insufficiently robust software, could have caused the sensor failures. In addition, if the driver assumes or fails to assume control of the car when the automatic function fails, this may result in contributory negligence.

The issue of causation is also potentially difficult. The claimant has the burden of proving, on the balance of probabilities, that the producer’s negligence caused or materially contributed to the loss complained of, i.e. that but for the negligence, the loss would not have occurred. Generally the courts will assess whether there is factual causation and, if there is, whether the damage is too remote to be recoverable. Most AVs can still be manually driven if the automated driving components fail, which also brings the driver’s conduct into the frame: could their actions constitute a novus actus interveniens which, despite the automated system failure, sufficiently breaks the chain of causation and leaves the driver liable for actions taken while he or she was manually operating the car?

Res ipsa loquitur

It will be interesting to see whether the doctrine of res ipsa loquitur will be relied upon by claimants seeking damages in negligence-based claims. The doctrine applies where the court is able to shift the burden of proof onto the defendant because the claimant has shown that the nature of the accident has no plausible explanation other than negligence on the defendant’s part. The doctrine, which is a rule of evidence rather than of law, has three elements, as set out by Erle CJ in Scott v London and St Katharine Docks[9]:

  1. the accident can only have happened due to negligence;
  2. the product that causes the damage was under the sole control of the defendant; and
  3. there must be no evidence as to how the accident took place.

In assessing whether each element is satisfied we shall consider the example of an AV in full autonomous mode swerving off the road without warning and for no obvious reason, injuring its passengers in the process. In such a scenario the first element is satisfied; AVs should not swerve without reason, and so negligence must be the causative factor. If the car was in full autonomous mode it was therefore under the full control of the defendant’s software and so the second element is also satisfied.

The third element, however, is likely to be a hurdle for claimants, given that there should, in theory, be data recording why the AV decided to swerve. This third element was considered in the context of a road traffic accident in Widdowson v Newgate Meat[10]. Although Lord Justice Brooke acknowledged that “it is not common for liability to be established in a road traffic accident on the application of the maxim res ipsa loquitur”, he heard no evidence from the claimant or defendant, and therefore applied the maxim, as the accident in question could only have occurred through negligent action. The maxim is therefore only likely to apply if the car has failed to record the relevant data during the crash and the passengers or any witnesses are unable to explain how the accident took place. It seems likely that claimants will include this plea in claims as a “catch-all” in any event.

Evidence & the role of Blockchain?

The accurate recording and storing of data in a black box will be key if manufacturers want to avoid being subject to the extremely claimant-friendly doctrine of res ipsa loquitur in future proceedings. Further, where evidence is available, expert evidence will likely be deployed to opine on any limitations of the software and how faults could have occurred. Software updates, whether implemented or not, will also likely be in focus.

In the event of an accident, retrieving the sensor data will assist in reconstructing the scene. However, the question arises as to whether this data can be tampered with. Reports have been made in the media that Blockchain technology[11] can assist with ensuring that there is untampered evidence of the conditions of an accident to inform decisions about liability.

The solution being proposed uses permissioned Blockchain technology so that only the relevant parties can record and access information from sensors. Blockchain aims to ensure data transparency and accuracy while protecting the data from manipulation, malicious cyber-attacks being a significant challenge for autonomous car technology[12]. The potential role of Blockchain would therefore not be limited to achieving safety by preventing attacks; it would also, in theory, preserve essential evidence contained within the AV’s black box and prevent its subsequent manipulation or destruction.
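The tamper-evidence property at the heart of this proposal is that each record in the ledger commits to the previous one via a cryptographic hash, so an earlier record cannot be altered without invalidating every subsequent entry. As a simplified, hypothetical illustration (all names are invented for the sketch; this is not any specific automotive or permissioned-Blockchain product), that chaining can be expressed in a few lines of Python:

```python
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous block's hash,
    chaining the blocks so later entries commit to earlier ones."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

class SensorLedger:
    """Append-only ledger of sensor readings (illustrative only)."""
    def __init__(self):
        self.chain = []  # list of (record, hash) pairs

    def append(self, record: dict) -> None:
        prev_hash = self.chain[-1][1] if self.chain else "0" * 64
        self.chain.append((record, block_hash(record, prev_hash)))

    def verify(self) -> bool:
        """Recompute every hash; any retroactive edit breaks the chain."""
        prev_hash = "0" * 64
        for record, stored in self.chain:
            if block_hash(record, prev_hash) != stored:
                return False
            prev_hash = stored
        return True

ledger = SensorLedger()
ledger.append({"t": 0, "speed_mph": 38, "steering_deg": 0.0})
ledger.append({"t": 1, "speed_mph": 37, "steering_deg": -14.5})
assert ledger.verify()

# Tampering with an earlier record invalidates the stored hash chain.
ledger.chain[0] = ({"t": 0, "speed_mph": 25, "steering_deg": 0.0},
                   ledger.chain[0][1])
assert not ledger.verify()
```

A permissioned system would add access control on top of this, distributing copies of the chain among the relevant parties (manufacturer, insurer, regulator), so that no single party could rewrite the evidential record unilaterally.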


Contract and the Consumer Rights Act 2015

Claims for product defects can also arise under contract. A claim in contract can be attractive as there is no need to prove fault on the part of the producer, only a breach of the contract. In addition to damages, there may also be a claim for economic loss, such as the cost of repairing the unsafe product (subject to the rules on recoverability and remoteness). There are still, however, significant restrictions on a claim in contract: generally a claim is available only to the purchaser and can be brought only against the seller.

Consumers also have a clear right to the repair or replacement of products (including faulty digital content) under the Consumer Rights Act 2015. A consumer may make a claim for a refund, repair or replacement when digital content they have bought does not meet the following standards:

  • satisfactory quality
  • fit for purpose
  • as described.

This goes some way towards addressing consumers’ rights, but it does not address the wider issue of civil liability.

So what next?

In 2017 the European Commission conducted a formal evaluation of the existing EU product liability legislation to determine whether it remains fit for purpose. This evaluation included a public consultation, a summary of which was published in May 2017. It found that a quarter of producers, 54% of consumers and 40% of other respondents (including public authorities and civil society) consider that the Directive needs to be adapted for innovative products, such as smartphones and other connected devices. Assessing the responses as a whole, around half of the views expressed in the position papers considered the current regulatory framework adequate to address liability issues related to new technological developments, while others would welcome a revision of the Directive.

The Commission has now established an expert group on liability and new technologies which will assist the Commission on producing guidance on the existing legislation and in assessing the implications of new digital technologies on the product liability regime. The Commission expects to issue the results of this exercise in mid-2019. However, whether the UK will even be bound by this new guidance or any revisions to the legislation in light of Brexit remains to be seen.


Conclusion

Types of self-driving AVs may be on the road sooner than we think. Two weeks ago, global transportation business Addison Lee Group announced its partnership with Oxbotica in a strategic alliance that aims to bring autonomous vehicles to London streets as quickly as possible. The companies hope to provide customers with self-driving services in London as soon as 2021.

As AVs become more and more common, manufacturers will have to become alive to the new liability issues. A careful balance needs to be struck between protecting the consumer and not stifling innovation. There are many challenges ahead, not only for manufacturers of AVs but also for those insuring AVs and those in a courtroom determining issues of liability. Whether the current framework is really fit for purpose, and capable of application in practice to AVs, remains to be seen.

[1] Section 2(1) AEVA.
[2] Section 1(2)(c) CPA.
[3] The European Commission will publish this guidance in mid-2019, aided by its expert group on liability and new technologies.
[4] Wilkes v DePuy International Ltd [2016] EWHC 3096 (QB).
[5] A v National Blood Authority (No. 1) [2001] 3 All ER 289.
[6] Gee v DePuy International Ltd [2018] EWHC 1208 (QB).
[7] http://moralmachine.mit.edu/
[8] https://www.nature.com/articles/s41586-018-0637-6 (£)
[9] Scott v London and St Katharine Docks (1865) 3 H & C 596.
[10] Widdowson v Newgate Meat Corp [1998] PIQR P138.
[11] Blockchain is a decentralized and distributed public digital ledger that is used to store static records and dynamic transaction data across many computers so that the record cannot be altered retroactively without the alteration of all subsequent blocks and the collusion of the network.
[12] MIT Technology Review – https://www.technologyreview.com/s/608618/hackers-are-the-real-obstacle-for-self-driving-vehicles/