In this post, we introduce a series of articles on artificial intelligence and the law. We will seek to set out the key legal issues involved in AI, how businesses and society can manage those issues and use AI responsibly, and the legal trends we are seeing in this emerging space.
First, though, we would like to (try to) put AI in context – that is the subject of this introductory post.
AI can be difficult to define. For the purposes of this series, we treat AI as a sub-set of the broader area of Robotics and Autonomous Systems (RAS) that we have written about extensively on the Cookie Jar over the last few years. This is because, in our view, AI systems have some unique characteristics that warrant analysis and discussion separately from RAS. When we refer to AI in this series, we tend to mean software algorithms that solve particular problems – human-written code performing actions that would (as Alan Turing put it) “require intelligence when done by humans”. (Note that there are further distinctions within AI – such as that between ‘General’ and ‘Narrow’ AI, and techniques such as machine learning – which may become important as we touch on certain issues through the series.)
AI systems are already being used to improve our lives and change the way business works. Examples of the real-world use of AI are all around us: ‘AlphaGo’, Google DeepMind’s system that defeated a human world champion at the board game ‘Go’; IBM’s ‘Watson’, which is using AI and machine learning to, among other things, make health diagnoses smarter; and voice-activated personal assistants, such as ‘Siri’, ‘Alexa’ and Google’s ‘Home’, that help people find information, schedule appointments and perform many other tasks. Those are some popular examples that have received widespread media coverage, but the use of AI is already prevalent in areas such as finance, defence, education, healthcare, manufacturing and even law. So why are we seeing such a rise in AI systems now?
The development of AI has not happened overnight, nor has it occurred in a vacuum. In fact, the reasons we are seeing such rapid improvements in AI technology are broadly the same factors driving other so-called ‘Industry 4.0’ or Fourth Industrial Revolution technologies, such as big data, the Internet of Things and robotics. One factor is that the price of computer processing power has fallen sharply over recent decades, allowing computing systems to process ever larger volumes of data and execute ever more complex tasks. Another is the rising sophistication and speed of fixed and mobile communications networks, which make it easier and cheaper to connect previously ‘dumb’ devices to the internet and make them ‘smart’. Further, the hosting of data in the ‘cloud’ is cutting the cost, and the time to market, of data-heavy technologies. The result of these trends is the sharp, ‘knee-of-the-curve’ rise in smart systems and devices that is really powering the growth of AI, among other technologies.
As AI becomes increasingly available to develop and use at ever-decreasing cost, businesses in particular are starting to explore how AI systems can increase their competitiveness, drive customer engagement and make better use of the large datasets they have accumulated. Business leaders are recognising AI’s vast potential to deliver efficiencies and cost savings, provide insights into their businesses that were previously out of reach, and exploit whole new markets for their products and services. Equally, opportunities are being identified in the public realm to use AI for the benefit of the wider public, including in public health, where health organisations all over the world are starting to explore how AI can be applied to improve diagnoses and, ultimately, health outcomes.
We at Bristows are AI advocates, and we believe the technology has the potential to transform society and business for the better. However, we are clear that the development and use of AI is certainly not without risk. Poor performance, misuse, reliance on (bad) data, privacy impacts, a perceived lack of transparency and accountability, discrimination, and adverse impacts on labour are just some of the risks and issues – relating to AI itself and to how and where it is used – that will need to be recognised, understood and carefully navigated if AI’s potential is to be realised to its fullest extent. This area is already attracting close scrutiny from regulators worldwide: the European Parliament recently adopted a resolution on civil law rules on robotics, the UK’s Information Commissioner’s Office has published a report on big data, AI and data protection, and the House of Lords Select Committee on Artificial Intelligence is due to report on the subject soon. As with any new technology, that regulatory focus makes it all the more important for developers and users to lead the way in developing best practice if the market is to avoid the imposition of laws that might stifle innovation in this area.
Having set the scene in this introductory post, we look forward to using the rest of this series to explore those key risks and issues, and how business and society can manage them in order to exploit AI while mitigating its downsides, with a view to developing some key principles for the responsible development and use of AI.