
AI, ethics and trust

24.04.2017

This article was first published in techUK, April 2017
As part of techUK’s AI Week, Robert Bond, Partner at Bristows LLP and a member of techUK, has provided a blog on ‘AI, ethics and trust’.
The increasing reliance on algorithms and artificial intelligence (AI) to produce outcomes and profiles in big data projects, such as humanitarian actions, healthcare plans, financing decisions, connected autonomous vehicle infrastructures and, more generally, marketing and advertising, raises ethical and trust questions around the risks associated with a lack of emotional human intervention.
A recent report by the Alan Turing Institute in London and the University of Oxford suggests that there is a need for an AI watchdog to act as an independent third party that can intervene where automated decisions create discrimination. The report indicates that where there is no human intervention in an outcome based on algorithmic automated decisions, the results may be flawed or discriminatory because the data samples are too small or rest on incorrect or incomplete assumptions or statistics.
Leaving aside the question of whether or not AI needs the watchdog that the report calls for, there is the further question of whether individuals have the right to know how the algorithms that may affect their data protection, human and consumer rights actually work. There are such rights under the current Data Protection Act 1998: section 12 gives individuals the right to understand the methodology applied to automated decision making concerning matters such as performance at work, creditworthiness, reliability or conduct. However, this right has seldom been used, and historically there has always been human intervention in profiling activities. Now, however, advances in profiling technology mean that AI functions more and more without human intervention. The report and guidance from the Information Commissioner’s Office reinforce the need for individuals to have enforceable rights and for data controllers to comply with those rights.
The EU General Data Protection Regulation (GDPR) deals specifically with automated decision making in Article 22, although it is a limited right in that an individual can only object where the algorithm or AI produces a legal or similarly significant outcome that adversely affects them. There is no right to object where the profiling is necessary for entering into a contract or where the individual has expressly consented to the automated decision making. GDPR does, however, place strict obligations on businesses that use AI to put in place security by design and privacy by default to protect the human rights and privacy of individuals, and where AI and profiling use sensitive data such as biometrics, religious and philosophical beliefs, health data and criminal records, then in addition to security and privacy measures, the business must have obtained explicit consent to the processing.
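To make the branching logic of that paragraph concrete, the minimal sketch below encodes it as a simple decision rule. It is an illustration of the simplified reading of Article 22 given above, not legal advice or a real compliance library; every field and function name is hypothetical.

```python
# Minimal sketch of the Article 22 logic as summarised above.
# All names are hypothetical; this is illustrative, not a compliance tool.

from dataclasses import dataclass

@dataclass
class ProfilingDecision:
    solely_automated: bool        # no meaningful human intervention
    adverse_legal_effect: bool    # legal or similarly significant adverse outcome
    necessary_for_contract: bool  # profiling needed to enter into the contract
    explicit_consent_given: bool  # the individual expressly consented
    uses_sensitive_data: bool     # e.g. biometrics, health data, beliefs

def right_to_object_applies(d: ProfilingDecision) -> bool:
    """True where, on this simplified reading, the individual can object."""
    if not (d.solely_automated and d.adverse_legal_effect):
        return False  # Article 22 only bites on significant automated outcomes
    if d.necessary_for_contract or d.explicit_consent_given:
        return False  # the two exceptions the paragraph above mentions
    return True

def sensitive_data_processing_lawful(d: ProfilingDecision) -> bool:
    """Sensitive data additionally requires explicit consent."""
    return (not d.uses_sensitive_data) or d.explicit_consent_given
```

On this reading, for example, a credit decision taken entirely by a model, with no contractual necessity and no consent, would trigger the right to object.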
Whilst GDPR focuses on aspects of automated decision making, it leaves controllers to take responsibility for compliance and ethics in the use of AI and profiling. Article 22 does not expand on how privacy by design or privacy impact assessments must be applied to automated decision making practices. GDPR does, however, generally require adherence to privacy by design, security and privacy impact assessments. So controllers, and in some cases processors, must put in place policies and procedures in anticipation of individuals exercising their rights under not only the current law but also GDPR from May 2018; one practical step is sketched below.
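By way of illustration only, one such procedure might be keeping a plain-language record of each automated decision so that later information and objection requests can be answered. The sketch below is an assumption about how that could look, not a mechanism prescribed by GDPR, and every name in it is hypothetical.

```python
# Illustrative sketch: logging automated decisions in a form that can
# support later data subject requests. All names are hypothetical.

import json
from datetime import datetime, timezone

def log_automated_decision(subject_id: str, model_version: str,
                           inputs_summary: str, outcome: str,
                           logic_summary: str,
                           path: str = "decision_log.jsonl") -> None:
    """Append one human-readable record per automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "subject_id": subject_id,          # pseudonymised identifier
        "model_version": model_version,    # which algorithm produced the outcome
        "inputs_summary": inputs_summary,  # categories of data relied upon
        "outcome": outcome,                # the decision reached
        "logic_summary": logic_summary,    # plain-language rationale
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```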
As individuals begin to understand their enhanced data subject rights under GDPR, such as the right to object to automated decision making as well as the rights of erasure, rectification and information, they will also realise that they have rights to compensation for not only actual but also emotional damage where their personal data is abused. So we may see a growth in compensation claims by aggrieved individuals who feel that AI and profiling have unfairly discriminated against them, and businesses that are not prepared to respond to such claims may find themselves not only embarrassed in court but also subject to further investigation by the relevant data protection authority.

Robert Bond
