There is little doubt that artificial intelligence (AI) can transform healthcare and drive change across the sector, whether by tracking the fastest ambulance routes, communicating with patients about their symptoms, or drawing conclusions from scans or test results.
From a data protection perspective, this use of AI sits at a challenging nexus: it involves processing personal data in ways that may be far from obvious to individuals, it can significantly affect how those individuals are treated, and it almost always involves sensitive “special category” health data.
So what are some of the key data protection issues surrounding the use of AI to gain insight from patient data?
Challenge 1: Check your role
When providing or using AI solutions in healthcare, it is important to determine each organisation’s role under the GDPR as early as possible, both to understand the obligations each will have under the legislation and to set the correct compliance path. Are you a controller, a processor, or a joint controller?
The September 2020 EDPB guidance on the concepts of controller and processor expands the concept of joint controllership further still, introducing the idea of ‘converging’ as well as common decision-making. Given the complexity of AI processes (which can make it difficult to delineate decision-makers clearly), many AI use cases are likely to involve parties acting as joint controllers under this new concept of ‘converging decisions’.
Challenge 2: Confirm the obligations that attach to your role
Once you have identified where you stand under the GDPR, whether as a controller, processor or joint controller, you and your counterparties should reflect this in a contract. This allows all parties to streamline compliance around their specific responsibilities under the legislation. The parties should also work together to assist one another with compliance.
In situations of joint control, the parties will need a plan for meeting these responsibilities jointly, dividing up the practicalities between them while noting that each remains responsible for complying with all controller obligations in the legislation.
Challenge 3: Ensure transparency
Providing comprehensive, clear and meaningful information about processing personal data using AI is more than a matter of satisfying legal requirements. AI systems will only be taken up if they can be easily explained to, and therefore trusted by, front-line providers and end users, such as doctors and patients. Transparency is a controller obligation under the legislation, but where processors supply or support AI technology, controllers may well need their input to explain the technology to individuals in a meaningful way.
The ICO has teamed up with The Alan Turing Institute (the UK’s national institute for data science and artificial intelligence) to provide guidance on how to explain decisions made with AI. The guidance suggests an “explanation-by-design” approach: breaking down explanations of AI systems into different explanation types (such as “rationale” and “responsibility” explanations) and prioritising them, since certain types may need more emphasis in certain contexts. For instance, patients may want to know more about the accuracy of AI used to diagnose a particular condition than about AI used simply to check them in at a doctor’s surgery.
Challenge 4: Consider the lawful basis
There are two key factors to take into account when considering the lawful basis for processing patient data using AI. First, health, biometric and genetic data are all “special category” personal data, and therefore subject to more restrictive conditions for processing. Second, if the use of AI leads to automated decision-making about patients, the grounds for processing may be narrowed further by the application of Article 22 of the GDPR. A lawful basis should not be switched once processing has begun, so it needs to be considered and settled at the outset.
Using AI to improve medical diagnosis, streamline consultation processes, or assist with surgical procedures typically involves the processing of special category data. Therefore, as well as a lawful basis for processing “regular” personal data, an additional condition is required, such as that the processing is necessary for reasons of substantial public interest or for the provision of health treatment. A processor will rely on the basis established by the controller.
In relation to automated decision-making (i.e. decisions taken about individuals without human involvement), Article 22 gives individuals the right not to be subject to a decision based solely on automated processing which produces legal or similarly significant effects concerning them. Decisions based solely on automated processing which, for instance, lead to the diagnosis of a particular condition or to a treatment plan would likely fall into this category. Such processing can therefore take place only if it is necessary to fulfil a contract with the individual, is based on the individual’s explicit consent, or is authorised by law, and in each case sufficient safeguards must be in place.
Conclusion
Using AI to process health data requires tailored and stringent planning from a data protection perspective, because of the technology used, the types of personal data processed, and the potential impact on individuals. However, as well as posing challenges for all parties involved, data protection requirements present opportunities for technology providers to distinguish themselves from competitors, better support their customers, and drive innovation in healthcare at the same time.