In recent months, we have seen much discussion amongst the European Data Protection Authorities (DPAs), as well as a growing body of case law and enforcement actions, with regard to the use of AI technologies, and particularly facial recognition technology (FRT), to process personal data.
One of the key issues arising from these discussions and actions is the need for a clear, comprehensive data protection impact assessment (DPIA) in respect of AI data use cases, which considers the risks of using such technologies in detail and provides adequate mitigations to address such risks. This article takes a look at some of the key cases and enforcement actions across Europe. It also reflects on relevant guidance from the UK DPA, the Information Commissioner’s Office (ICO), and considers ‘lessons learned’ from these various sources, which inform how organisations should conduct DPIAs for AI use cases.
When is a DPIA required?
In accordance with the General Data Protection Regulation (GDPR) and applicable ICO guidance, DPIAs are required in a number of data processing situations, including:
- the deployment of innovative technology;
- the processing of biometric data;
- the processing of any special category of personal data on a large scale; or
- any automated decision-making or profiling which significantly affects the individual, for example to provide or deny that individual access to a service.
It is likely that at least one of the above will apply to most AI data use cases and therefore the general assumption is that a DPIA will be required when implementing any form of AI technology for processing personal data.
Case law and DPA actions: Approved use of FRT
Recent European case law and DPA enforcement actions provide some useful indications of how the courts and regulators have reacted to FRT use cases.
In May 2019, the Danish DPA (‘Datatilsynet’) granted the private company Brøndby IF permission to use FRT to identify individuals. The specific use case was to identify fans who had been banned from a football stadium and ensure they did not enter it. The primary reasoning behind this decision was that the technology conferred an ‘essential public benefit’.
The permission granted by the Danish DPA was subject to a number of conditions, primarily focused on data minimisation, transparency and security. In particular: (i) personal data not matched to a ‘banned individual’ must not be stored; (ii) clear signs must be displayed explaining how FRT is being used and what data is being collected; (iii) personal data must be encrypted; (iv) FRT cameras must not be accessible via the internet; and (v) two-factor authentication must be in place for all authorised personnel with access to the data. These conditions provide useful indicators of the types of protections regulators will expect to see evidenced in a DPIA with respect to AI technologies, and are of particular relevance where biometric data is being processed.
The UK case of R (Bridges) v Chief Constable of South Wales Police and others [2019] EWHC 2341 (Admin) (Bridges) came to a similar conclusion. In this case, the use of FRT by the police force at public events to identify individuals on police watch-lists was held to be lawful. The basis of this decision was that there was a clear legal framework in place for the use of such technologies by the police, and the processing of personal data was held to be proportionate. Whilst this decision was very much based on the specific role of the police as a public authority, some of the points discussed by the court appear relevant to organisations in the private sector looking to make use of such technologies.
As with the Danish DPA case, there was a focus on transparency and data minimisation. In Bridges, clear notices had been given via social media, large printed signs and cards handed to members of the public in the vicinity. Furthermore, all personal data was automatically deleted if there was ‘no match’.
Of more direct relevance were the court’s specific comments on the DPIA conducted by the police. There was a clear acknowledgement that the DPIA focused more on the treatment of the personal data of those on watch-lists and was lighter on the treatment of the personal data of general members of the public. However, the court held that the DPIA was sufficiently clear and detailed to discharge the police force’s obligations under section 64 of the Data Protection Act 2018 (DPA 2018). In particular, the court was satisfied that the DPIA contained a ‘clear narrative that explains the proposed processing’ and ‘identifies the safeguards that are in place’.
This should not, however, be taken to suggest that organisations can get away with half-hearted DPIAs. Notably, the court emphasised that, ‘what is required is compliance itself, i.e. not simply an attempt to comply that falls within a reasonable range of conduct.’ Controllers who treat DPIAs as a purely administrative, tick-box exercise should clearly heed this warning.
Another interesting point to note from Bridges is that whilst the GDPR does not require consideration of the potential for breach of the privacy right under article 8 of the European Convention on Human Rights, it was nonetheless viewed positively by the court that the police force’s DPIA addressed this.
Finally, the court was very clear that whilst it takes account of guidance issued by the ICO, this is non-statutory and should not cause controllers to lose sight of their statutory obligations under section 64 of the DPA 2018 as these have been ‘expressly formulated’.
DPA actions: Termination of the use of FRT
In October 2019, an action by the French DPA (‘Commission nationale de l’informatique et des libertés’ (CNIL)) saw the termination of FRT programs at two high schools on the basis that the GDPR principles of proportionality and data minimisation had not been met. The schools had been using FRT to reduce the time taken in controlling access to the schools, on the basis of consent provided by the pupils.
The CNIL found that the same objective could be achieved by less intrusive means and that using FRT was particularly invasive and gave the impression of the pupils being under surveillance. The fact that this use case related to children was also relevant, as personal data relating to children has special protection under the GDPR. Concerns were also raised over the security of the special category data (i.e. biometric data) that was being collected.
A similar decision was made by the Swedish Data Protection Authority (‘Datainspektionen’) back in August 2019, where a secondary education board was using FRT in a trial project in a school to register pupils’ attendance. Again, the key driver behind the use of this technology was to streamline this process in order to save time.
The school had incorrectly concluded that there were no high risks to the data subjects (even though the use of FRT involved the processing of biometric data relating to children), and therefore had not carried out a DPIA. The Swedish DPA held this to be a breach of Article 35 GDPR.
Again, the DPA focused on proportionality and data minimisation, finding that the use of FRT was very intrusive, was not proportionate and that the purpose (i.e. registration) could be achieved in a less invasive manner. Like the CNIL, the Datainspektionen recognised the importance of affording special protection to the personal data of children under the GDPR. Further, the DPA held that the consent of the children was not a valid legal basis for processing, given the imbalance in power between the pupils and the school.
The Swedish DPA issued the education board with a warning under Article 58(2)(a) of the GDPR and a fine of SEK 200,000 (approximately €18,000).
Key points to note for completing DPIAs
Taking these various cases and enforcement actions together, it would seem at this stage that there are four key areas of focus for the courts and DPAs across Europe when looking at FRT cases, namely:
- Transparency: clear notices of the use of FRT to collect and process personal data must be made available to the data subjects. In Bridges, a three-tiered approach of posting to social media, displaying large notices on police vehicles and handing out cards to members of the public was deemed sufficient.
- Proportionality: if the objective can be achieved by less invasive means, as the French and Swedish DPAs deemed to be the case when such technologies were used to control access and registration of school pupils, then FRT should not be used.
- Data minimisation: personal data that is not needed should not be stored and should be deleted at the earliest opportunity. In Bridges and the Danish DPA’s decision, the immediate deletion of images relating to data subjects who were not on the relevant ‘wanted’ list was key to the approval of using FRT.
- Security: where biometric data is processed, security measures will need to be more stringent, including features such as encryption, two-factor authentication and no access to the data via the internet.
When carrying out a DPIA for a new AI activity, organisations should therefore look to tackle each of these four key principles comprehensively, ensuring that each has been adequately addressed and that sufficient protections and mitigations are put in place to meet the standards identified by the courts and DPAs in the cases outlined above.
Noting the court’s findings in Bridges, whilst guidance issued by DPAs is important, organisations should remain focused on their core obligations under section 64 of the DPA 2018 (Article 35 GDPR) when it comes to completing a DPIA and must look to ensure compliance with these.
Following Bridges, in October 2019 the ICO issued an opinion on the use of FRT by law enforcement in public places. Whilst this opinion was specific to the use of this technology by public sector law enforcement, its recommendations regarding the content of DPIAs may provide some useful guidance for private companies looking to use AI technology. In particular, the ICO confirmed that DPIAs should:
- be completed or updated before each and every deployment of the technology;
- clearly and comprehensively demonstrate why the technology is strictly necessary and why a less intrusive option is not available;
- include a clear assessment of the likelihood that the objectives of using the technology will be met and how effectiveness can be measured;
- explain how effective mitigating measures have been implemented; and
- be kept subject to continual review, including where there are any changes of circumstances with regard to the data processing or the nature of the risks that apply.
With regard to AI technologies more generally, the ICO published a blog post in October 2019, which specifically tackled the undertaking of DPIAs in relation to such technologies.
The blog post set out clear guidance on the level of detail needed in respect of how and why AI is being used to process personal data. For example, the DPIA should document the volume, variety and sensitivity of the input data and the intended outcomes for individuals or wider society and for the controller of the data. In accordance with the GDPR, the guidance states that the DPIA must include a ‘systematic description of the processing activity’, which includes data flows and describes the stages at which the AI may produce effects on data subjects. The post recommends documenting the views of individuals on the use of the technology and maintaining two versions of the DPIA: a more technical version for AI specialists and a high-level version for the average lay-reader. In line with the GDPR, other key focus points should be:
- necessity and proportionality, in particular to ensure that there is not a less intrusive alternative available;
- identification of key risks and documenting the safeguards to mitigate these risks, including ensuring that the DPO or other privacy professionals are involved in the project from the earliest stages and ensuring that all those involved in the project are adequately trained on the relevant data protection implications; and
- ensuring that the DPIA is maintained and regularly reviewed and updated as a ‘living’ document, to take account of any changes to the processing of personal data as the AI and its use develops.
With regard to identifying risks, the blog post highlights that organisations will need to consider legislation that goes beyond privacy and data protection. An issue that often arises in relation to AI technologies such as FRT is the potential for discrimination, which may be in breach of equality legislation. For example, there is a risk that historic patterns in data could cause bias based on gender or ethnicity. The ICO recommendations make it clear that organisations must consider any detriment that may be suffered by individuals as a result of such bias or inaccuracies in the algorithms or data being used.
Though not specifically referencing AI technologies, the ICO’s AdTech blog published on 17 January 2020 makes some interesting observations about what the ICO has been seeing in terms of DPIAs. A general comment is made that these tend to be ‘immature, lack appropriate detail, and do not follow the ICO’s recommended steps to assess the risk to the rights and freedoms of the individual’.
It is clear from these comments that many organisations are not currently taking DPIAs seriously, nor affording them the time or effort that is required in order to meet the standards expected by the ICO.
In the last six months or so we have seen a number of cases and DPA actions, as well as commentary from the ICO, which provide helpful guidance about conducting DPIAs for AI use cases, particularly in the area of FRT. There is a strong focus in particular on the GDPR principles of transparency, proportionality, data minimisation and security, as well as clear guidance on specific points to cover in the DPIA, such as the level of detail needed to describe data processing activities and the importance of recognising the potential for risk areas such as bias and discrimination and documenting how such risks have been mitigated.
The ICO has highlighted a clear deficiency in the way many DPIAs are currently being completed. If organisations do not start to place more importance on completing comprehensive DPIAs before introducing an AI use case, ensuring that their obligations under the GDPR are fully discharged and taking account of the relevant case law, enforcement actions and guidance in this area, then we may well start to see a rise in DPA enforcement actions under Article 35 of the GDPR.
A final point worth considering is the view expressed by the European Data Protection Supervisor (EDPS), Wojciech Wiewiórowski, in his blog on FRT published in October 2019, where he stated that he believes it is ‘highly doubtful’ that FRT can be compliant with principles such as data minimisation and data protection by design. Of course, this leaves us wondering what this means for the future of organisations that are developing and using these types of AI technologies. How can these organisations seek to comply with the law if the EDPS is simply stating that this isn’t possible? One possible option could be a statutory code for the deployment of AI technologies. Indeed, the ICO has called on the UK Government to introduce a clear, comprehensive and binding statutory code of practice for FRT in public spaces as a matter of priority. It remains to be seen how the Government will respond.
 This first legal challenge in the UK to the police use of FRT is under appeal.
 Information Commissioner’s Opinion: The use of live facial recognition technology by law enforcement in public places (31 October 2019) 2019/01
 European Data Protection Supervisor Blog: Facial recognition: A solution in search of a problem