This article was first published by Finextra, April 2020
In response to the European Commission’s strategy for data, released in February, Vikram Khurana argues that “The EC is saying trust needs to be the bedrock of AI. If the industry can tackle this issue and build trustworthy AI systems, AI is more likely to be accepted and taken up by businesses and individuals.”
“Better uptake means the benefits promised by AI can be realised more fully, for business as well as society at large.”
Companies recognise the erosion of trust between themselves and their customers following high-profile scandals and data breaches, and are working to turn the narrative on its head. Focusing on rebuilding lost trust allows firms to distinguish themselves with this unique selling point and drive loyalty among both their existing base and potential customers.
While the question remains as to whether the UK will secure an adequacy decision from the EU on GDPR before Brexit, commentary indicates that firms see the advantage of working proactively to protect and inform consumers about how their data is used in relation to AI.
Miriam Everett, partner and global head of data and privacy at Herbert Smith Freehills, doesn’t believe that a divergence from the regulatory demands of GDPR would lead to instability or run the risk of undermining trust in AI products and services.
In fact, she believes that companies can and are using this arguably unstable landscape to their advantage.
“I’m seeing a lot of organisations looking into data ethics. Rather than throwing money and technology at the ethical problems AI presents, firms have shifted their approach to think about what they should be doing with this data.”
By taking this approach, firms are addressing the seemingly endless wait for legislation to be drafted, passed, and implemented: they pre-empt future regulatory frameworks by applying data strategies they expect will be compliant with the eventual requirements.
This not only protects their reputation by taking the initiative with a consumer-first strategy, but also encourages regulators to be more accommodating when the frameworks are inevitably put into place.
Everett elaborates: “While it may not be an entirely altruistic approach, if firms are behaving well and paving a responsible path with their data strategy the regulators may be willing to implement less stringent requirements because they see the market working toward an effective solution itself.”
Khurana points to AI tech powers including the US and China to draw a comparison with the UK and Europe. When it comes to unparalleled data and resources, combined with less privacy regulation and minimal (read: nil) consumer rights in the US and China respectively, the UK and Europe simply can’t match up. However, where they can compete, Khurana argues, “is in skills, thought leadership, and – shown by the white paper – a concerted effort in developing a regulatory and governance framework for AI.”
Should the UK seek to diverge from the burdens of the GDPR, Everett suggests: “For some that will be a popular choice, but for others an onerous burden. Either way, for someone, it will be a difficult decision to make.”