
What a time to be (AI)live! What’s next for Generative AI?

Naina Mangrola reflects on our recent event and discusses some of the key takeaways from the three sessions

09.05.2023

Recordings from the event are available below.

On 25 April 2023, Bristows hosted a conference focused on Large Language Models (LLMs) and Generative AI. Our panel comprised industry experts who provided insight into the future of AI and considered the implications for the legal industry, wider enterprises and individuals.

Bristows partner Chris Holder chaired the event and introduced the speakers and panellists: Charlie Hawes, Dr Claudio Calvino, Toby Headdon, Professor Lilian Edwards, Sean Williams and Anita Shaw.

Legal & Regulatory Snapshot

Charlie Hawes began by emphasising the impact that Generative AI, and LLMs in particular, have had on the world recently. Most Generative AI products are less than a year old, and the surge in the use of new LLMs over recent weeks and months, demonstrated by the popularity of OpenAI’s ChatGPT, has left regulators and legislators struggling to keep up. Charlie explained that the key legal challenges of using Generative AI include:

  • IP infringement;
  • Privacy;
  • Hallucination;
  • Misinformation; and
  • Alignment and Safety.

Some of these problems, particularly hallucination and misinformation, are proving difficult for legislators to solve. The EU AI Act, which is specifically designed to regulate AI, has been on the horizon for the past two years. However, Charlie noted that LLMs have upended these proposals, which are now being reviewed in light of the new developments.

China has also recently published a ‘Draft Policy for Generative AI’ which, although it addresses the concerns mentioned above, takes a more restrictive and ‘China-centric’ approach, while the UK has positioned itself as pro-AI innovation. It will be interesting to see whether the UK government continues on this trajectory or begins to adjust its approach as the technology develops.

Exploring the landscape of LLMs

Dr Claudio Calvino opened his presentation with a detailed explanation of how LLMs work. He noted that, because of their probabilistic approach, LLMs do not produce output that is entirely new: a model combines word patterns from its training data and predicts what the next word in a sentence will be, based on what it has seen before.
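To make that next-word idea concrete, here is a minimal illustrative sketch in Python. It is not code presented at the event, and the toy training text and function names are our own invention: the sketch simply counts which words follow which in a small corpus and samples the next word from those counts, which is the same probabilistic principle, in miniature, that Claudio described.

    # Toy next-word predictor: counts word transitions in a small
    # training text, then samples continuations from those counts.
    import random
    from collections import Counter, defaultdict

    training_text = (
        "generative ai models predict the next word . "
        "language models predict the next word from patterns . "
        "models learn patterns from training data ."
    )

    # For each word, count which words follow it in the training data.
    follows = defaultdict(Counter)
    words = training_text.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1

    def next_word(word):
        """Sample the next word in proportion to how often it followed `word`."""
        choices, counts = zip(*follows[word].items())
        return random.choices(choices, weights=counts, k=1)[0]

    # Generate a short continuation: nothing here is "new" -- every
    # transition was observed somewhere in the training data.
    word = "models"
    output = [word]
    for _ in range(6):
        word = next_word(word)
        output.append(word)
    print(" ".join(output))

Real LLMs replace this count table with a neural network over subword tokens and billions of parameters, but the output is still assembled from patterns learned during training.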

Claudio also pointed out some of the main limitations of LLMs, with particular concern about potential bias. LLMs can now be trained on data that includes almost all publicly available online information, including biased content created by humans; one example is Wikipedia, which can be edited by anyone in the world with little, if any, moderation. Claudio spoke about the future of LLMs, how to mitigate this risk and what success looks like for LLMs. He concluded with a reminder that continued development of, and research into, LLMs must be accompanied by a responsible and transparent approach to both the data included in training sets and the output generated.

Panel discussion

Moderated by Toby Headdon, the panel discussion produced lively debate between the industry experts, with the audience contributing some very thought-provoking questions. Balancing artists’ rights with the rights of technologists was high on the agenda, following complaints that images and, more recently, songs created using Generative AI infringe copyright in existing artists’ works. Professor Lilian Edwards predicted an upcoming battle between the music industry and the technology sector, which would be one to watch.

An interesting question from the audience led to a conversation about whether human arrogance created the assumption that only humans, and therefore not machines, could be creative. Sean Williams supported the notion that AI could be considered creative and gave a further explanation and live demonstration of an LLM from his company, Autogen AI. He also noted how large LLMs have become, with parameter counts now in the hundreds of billions, and how difficult it would be to remove individual pieces of training data from such a model, an important consideration for both IP infringement and data privacy.

Data privacy remains a challenge for many businesses, and both Claudio and Lilian agreed that regulation would need to be introduced to keep AI safe and hold it accountable. Lilian thought that, in the meantime, amending the contracts and terms of service of a company using Generative AI would be a good starting point, to ensure that remedies such as indemnity clauses are up to date. In addition, Anita Shaw argued that educating employees and the wider public about what it means to input data into Generative AI systems, and the consequences of doing so, should be prioritised. Press rules may also need to be put in place to ensure journalists report on AI using the correct terminology, rather than sensationalising the threat of the technology just to make headlines.

Claudio and Sean both agreed that the technology is still some way from the ‘artificial general intelligence’ stage and that we are unlikely to see this achieved in the near future. However, there was a clear consensus that this new technology will change the way the world operates.

You can find more of our insights on AI & Robotics here.

Recordings

Part one: Chris Holder, Charlie Hawes and Dr Claudio Calvino

Part two: Panel discussion