
AI ‘deployment risks’ and the importance of managing them effectively

24.04.2018

This article was first published in Tech UK on 23 April 2018.
As AI becomes increasingly ubiquitous in modern businesses and society, whether we are aware of its presence or not, more alarm bells are being rung about the inherent risks of using it. Although some fears are commonly agreed to be overblown, such as visions of a dystopian future run by sentient robots, there are real risks in the systems we are already using to make profound decisions. Without understanding the nature of these risks, we cannot develop effective methods to manage them. Here is our view of the main ‘deployment risks’ associated with AI applications.
Bad data
The current AI revolution depends heavily on the availability of large quantities of data to analyse, detect patterns and inform decision-making processes. An inherent limitation of AI systems implementing machine learning is that the quality of any output will depend on the quality of the input data. Data quality in this context requires looking not only at how large and comprehensive the data set is, but also at whether it reflects the ‘real world’ and whether it is corrupt, biased or discriminatory. The implications of ‘bad data’ differ depending on the application: a social media bot taking on the worst, most offensive qualities of the users that feed it its inputs is of less concern than incorrect health care decisions being made by AI systems analysing incomplete medical data.
Transparency
Sometimes known as the ‘black box’ problem of AI, the opaque design of many AI systems means that it is nearly impossible to scrutinise how individual decisions are being made. These technical limitations are compounded by owners of proprietary technologies not wanting to reveal their inner workings for scrutiny. It is going to be difficult to build users’ trust in AI systems if the mechanics of a technology are not easily interpretable by a human. Adoption of AI should not mean that companies and organisations are subject to lower standards of accountability or able to hide behind invisible processes. If the autonomous vehicle industry is to rely on AI for its driverless cars, and those cars require safety certification from regulators, how can any classification or standardisation take place when regulators cannot see what is going on and manufacturers are unwilling to divulge their systems’ workings?
Misappropriation
A direct consequence of the black box problem is the difficulty in determining with confidence whether AI is appropriate to apply to a particular issue or to solve a particular problem. Any business process carries with it inherent risks, and for a non-transparent, unpredictable system it is much harder to assess what might go wrong, when it might go wrong and the adverse effects that failure might have. AI implementers need to address these limitations by understanding the environment in which AI deployment will take place, conducting a full risk assessment and then getting comfortable with those risks. Any testing should also take place in a non-live environment, if possible.
Misuse
In addition to poor data causing poor outcomes, malicious data can cause harmful outcomes that may outweigh an AI application’s overall benefits. The risk is amplified by the fact that AI systems will typically be hosted in the ‘cloud’ or be otherwise internet-enabled, opening them up to cybersecurity issues and hacking – if this occurs, would the owner or user even know such misuse was happening, let alone the individuals affected by the AI’s decisions? Several research institutions have co-authored a recent report on the malicious uses of AI, highlighting the very disruptive impact that AI-based attacks may have in the coming years. The report warns that the scalability of AI means cheaper attacks which would nonetheless be more effective, more precisely targeted and more difficult to trace to the offender than conventional cyberattacks. Users’ systems could be compromised by their online information being used to generate custom malicious websites and emails, sent from addresses resembling the victim’s real contacts and mimicking their writing style.
These are by no means the only risks posed by applied AI, but they will certainly need to be considered by the AI industry in order to create and implement appropriate industry standards and codes of practice. If the industry fails to adequately manage the risks itself, then lawmakers may feel forced to intervene by way of (potentially onerous) top-down regulations that may stifle innovation and prevent the clear benefits of AI from ever being fully realised.