The ethics of Artificial Intelligence – the next step?

04.01.2018

The AI sector is moving fast, with most experts revising down their estimates of when it will infiltrate most, if not all, areas of life.  Given the powerful nature of this (and other) technology, discussions about the role of ethics in AI have been rife in 2017. However, this Summit aimed to push the discussion on to the next step. We need to foresee future problems. We need to establish how to implement an ethics framework. The catchphrase of the morning was “2018 is the Year of Doing”.

Martha Lane Fox (LastMinute.com, DotEveryone) noted that the lack of trust in the tech industry is a huge obstacle for the implementation of AI.  It is particularly worrying that this distrust is based on current technologies (data breaches from social media companies, for example); future technologies, which could have far more serious consequences, pose an even bigger difficulty.  In this setting, Fox argued that Britain needs to carve out its future role in the tech landscape.  We cannot compete with the funding available in the US, or the sheer quantity of data collected in China. We need to lead in the practical implementation of such technologies, i.e. standard-setting.  Diversity and transparency are key to achieving this: gender equality, education of children, and an understanding of the supply chain (similar to what has happened in the clothing industry, though harder to achieve in tech) are all key factors in improving trust in the sector.

Panel Discussion

A panel discussion followed, ‘What is the current AI landscape and how to ensure ethical foresight?’, with speakers Prof Luciano Floridi, Dr Stephen Cave, Dr Clare Craig, George Zarkadakis and Rob McCargow. Floridi opened the discussion, noting four key issues that need to be kept in mind as we implement AI:

  •  Delegation: to whom or what are we delegating this technology?
  •  Responsibility: how does that person/thing keep control?
  •  Manipulation: are we aware of the extent to which this technology will affect our lives?
  •  Prudence: there is a risk that humans become reliant on technology; if something goes wrong with that tech, will a human be able to fix it?

The panellists went on to consider the irony that, despite the institutional distrust in technology, businesses will have to adopt it soon.  To ease the acceptance of AI, and therefore ensure its success, they used the analogy of the 5p plastic bag charge here in the UK.  Acceptance of the change (and the consequent decline in plastic bag use) was achieved once regulation and corporate participation harnessed the good intentions of the public.  The regulators and corporate entities provided the nudge to the ‘willing-in-theory’ public.

The panellists discussed what shape the regulatory framework could take. They opined that the ‘agile’ methodology generally used to introduce new software technologies (where incremental tweaks and improvements are made to a product following release, upon finding faults and glitches) simply might not be appropriate for AI.  Given the power of the technology and the potential unintended consequences, there is simply no scope for ‘trials’ where those consequences might be huge.  For example, the ‘beta’ products we often see released to the public today would be wholly unacceptable if they could have adverse consequences for human life. Such a scenario would exacerbate the public trust problem, and could lead to a swift rejection of a potentially beneficial technology.

Offering solutions to this dilemma, the panel considered pharmaceutical-style testing – something at the opposite end of the spectrum from technology’s so-called ‘move fast and break things’ approach.  Whilst such a rigorous framework allows confidence that products are safe, it was highlighted that adopting the pharma approach in tech would stifle the rate of innovation, and it is largely incompatible with the current funding and development structure of the tech industry.

Finally, the panellists discussed the need for a consistent, intelligible discourse to effectively explain new technologies to the general public. The conversation needs to be diverse: it should include philosophers, historians and psychologists as well as engineers. Only with this diversity of voice and expertise can a well-rounded framework be built.

In our analysis of the future issues facing AI, we often hover in the middle ground between the general and the specific. We tend to think only about what we can visualise (e.g. the trolley problem) rather than less tangible effects (e.g. obesity as a result of AI). Instead, analysis needs to take place in general, universal terms as well as in specific terms. Only then can we begin to frame the future world in which AI is seamlessly integrated.

Matt Hancock MP (Department for Digital, Culture, Media and Sport)

A brief talk from the Minister in charge of Digital Policy explained the function of the new Government Centre for Data Ethics & Innovation. The Centre will engage in public communication; it will propose measures to regulators, without being a regulator itself; and it will work across various government departments, such as DCMS, Transport, Health, and Business, Energy & Industrial Strategy, in order to ensure the consistent discourse mentioned above.

Hancock commented that the UK is well positioned to be a world leader in AI standards and ethics. It has a level of freedom to develop a practical framework that will, in turn, carry influence around the world. By contrast, the US Government, with its separation of powers, simply cannot be this nimble.

Keynotes

The rest of the day included three keynote speeches by Microsoft’s Dr Carolyn Nguyen, the ICO’s Elizabeth Denham (see here), and Nuffield’s Chief Executive Tim Gardam (see here).  Unfortunately there is no copy of Nguyen’s talk, but Denham’s and Gardam’s are well worth a read.

This article was first published on the IPKat blog, January 2018

Alex Calver
