Online harms and safety in the metaverse

25.10.2022

If someone steals your brand-new Bentley parked on the street, you would call the police to report a theft. But what happens when someone’s avatar drives off with your Bentley in the metaverse?

The answer to this depends on a whole host of factors: which metaverse platform you’re on, why they’ve driven off with your virtual car, where you are, etc. Crucially, it depends on which laws will apply and who is ultimately responsible.

In this eleventh article in our series, we explore how the metaverse is challenging lawmakers around the world to modernise how they protect their citizens online.

The need to rethink online harms in the metaverse is obvious. There have already been reported incidents of sexual assault in virtual reality games and platforms. As the metaverse develops, platforms need to plan and account for user safety, and some are already implementing user-activated features to address this. For example, in Meta’s Horizon, avatars can activate a “Safe Zone” to create a protective bubble around themselves where they cannot be touched, spoken to, or interacted with by other users. But is it already too late if the user needs to activate these tools? Is protecting yourself online really the user’s responsibility? If not, whose responsibility is it?

These are the types of questions that have driven legislative change in this area. The UK, for example, has now gone several steps beyond the e-Commerce Directive’s “notice and take down” rule (which required online intermediaries to remove illegal user-generated content from their platforms once they became aware of it, or face liability). The proposed Online Safety Bill (“OSB”) now makes it the platforms’ responsibility to proactively protect users. In the metaverse, user-generated content is dynamic, and there are many different circumstances in which one user may “encounter” illegal or harmful content shared by another. The real challenge for platforms will be the cost of complying with these new online safety duties.

For example, under the OSB, platforms allowing users in the UK to share and encounter content from other users must conduct risk assessments of illegal content and remove the most heinous of it, such as terrorism or child abuse content. There is, in addition, a duty to regulate, and separately assess the risk of, legal but harmful content. Platforms with the riskiest services, as categorised by the UK’s newly designated online safety regulator, OFCOM, must set out clearly and accessibly in their terms of service how different kinds of legal but harmful content on their platforms will be treated, i.e. whether it will be taken down, have access to it restricted, or be given less prominence. By contrast, the EU’s approach in the Digital Services Act is lighter: platforms have no duty to regulate lawful but harmful content, only an obligation to provide transparency reports on the actions taken to remove harmful content.

This raises the question: can commercial platforms be trusted to assess what is, and isn’t, harmful content and to decide how to treat it? In the UK, OFCOM will issue codes of practice to guide platforms on compliance; however, while the nature of the metaverse remains unclear, how effective will that guidance be? We do not know, for example, whether the nature and scale of harms in the metaverse will differ from what we currently understand to be harmful on social media.

If the defining feature of the metaverse is a feeling of presence in the virtual world that goes beyond social media, then regulators will need to rethink what harm might look and feel like, and how it should be handled. For example, what does assault look like in the metaverse, where a victim experiences abuse through an avatar rather than through social media messaging? What qualifies as assault in virtual games that already feature violence? And how will local legislation affect how such harms are prosecuted and proven, if at all? These are questions about the nature of harms in the metaverse, but there is also a question of scale. It is currently unclear whether embodiment in avatar form, by making interactions feel closer to real life, will make users more or less likely to engage in misconduct.

Part of the complexity in understanding what harm in the metaverse will look like is the anonymity that users enjoy online. On the one hand, anonymity can be highly beneficial, enabling the most vulnerable in society to find help by expressing themselves through a different persona in the virtual world. On the other hand, anonymity can empower perpetrators of online harm, who can hide behind the façade of an avatar that does not resemble their true identity or intentions. There may not be a unified approach across metaverse platforms, with some following Twitter’s social media approach of allowing anonymous accounts, and others following in Meta’s footsteps with verified profiles. However, if the metaverse were to become an ordinary part of day-to-day life, used by public and professional services, platforms might ultimately need to provide a means of verifying user identities, as the OSB already requires of some platforms.

The metaverse will open up new ways of connecting with other users around the world; but, unfortunately, the new risks of harm that this brings will require very careful regulation. Over-regulation of content could curtail users’ fundamental rights to freedom of expression and privacy, while under-regulation may pave the way for greater scale of, and access to, harmful content, in a world that is difficult to moderate because the experience is designed to be completely user-specific. The future of online safety will ultimately depend on how harmful content is defined by leading regulators, how far platforms comply with the regulations, and how effectively those regulations are enforced.
