The metaverse developed by Meta has already been robbed of its age of innocence, if it ever had one. Horizon Worlds, the virtual environment accessible with an Oculus VR headset, has been under fire since its December release in the US and Canada. Criticism focuses on the abuse of women, already documented, and the potential abuse of children, which could quickly occur given the number of kids reported wandering around the platform without parental control.
In late November, a woman reported being “virtually gang raped” by a group of users: “Within 60 seconds of joining—I was verbally and sexually harassed—3-4 male avatars, with male voices, essentially, but virtually gang raped my avatar and took photos”. Since then, Nina Jane Patel’s story has gained increasing media attention. Vivek Sharma, Meta’s vice president of Horizon, reacted by saying that Patel could have activated the “safe zone” tool, which creates an isolating bubble and prevents others from interacting with the user. But the victim said everything happened too fast to use the feature: it is hard to keep a cool head in a situation designed to mimic real-life embodiment as closely as possible. The recommendation sounds uncomfortably like telling a woman she provoked an assault by the way she dressed.
Concerning kids, Will Oremus in the Washington Post explores the presence of unmonitored children on the platform. He notes that many Oculus reviews report the issue, with comments like “Way too many children on there”, or “Of the 4 to 5 times I’ve tried to explore this app I’ve been trolled by kids every single time”. Encountering kids seems to be an ordinary experience on the platform, and one that could attract abuse in a format hard to detect and prevent. Parental policies are very light: although Horizon is rated 18+, once an adult’s Facebook account is connected, anybody in the household can pick up the Oculus and take a tour of Horizon Worlds. On the same Oculus rating page, users claiming experience in platform moderation point to the need for human moderators.
How is Meta dealing with this (besides telling women to “dress up their defence”)? So far, the “safe zone” was the only real option. However, in response to the media attention, a few days ago Oculus announced a “Personal Boundary” feature, a sort of “mandatory distance” enabled by default. While this new standard could prove helpful for kids (hoping they don’t deactivate it…), it does nothing about verbal violence and abuse.
As Patel notes, we have a tricky relationship with our avatars: “The Proteus Effect is the tendency for people to be affected by their digital representations, such as avatars, dating site profiles and social networking personas. Typically, people’s behaviour shifts in accordance with their digital representatives.” Another reason people push the limits is the perception that the virtual world is disconnected from the real one, with a tendency to forget that a person exists behind an avatar. And even when there is no human on the other side, as with the chatbot platform Replika, which lets you create a chatbot and build a (friendly or romantic) relationship with it, abuse is around the corner. Although Replika’s website clarifies that users chat with an AI they are training, a look at the dedicated Reddit channel is like watching an outtake of the 2013 movie Her, in which Joaquin Phoenix falls in love with his voice assistant. A whole strand of users has developed the habit of abusing the chatbot: “We had a routine of me being an absolute piece of sh*t and insulting it, then apologizing the next day before going back to the nice talks”. The lines are blurred.
Such hybrid short circuits will keep expanding as VR experiences gain users, challenging the notion of what is appropriate – to say the least – and what is not.
What can we expect for the future?
- “Meta abuse” will become a sub-topic of psychology, like cyberbullying, pushing for the development of new forms of therapy.
- The same goes for the legal field, with novel digital rights and new forms of accountability.
- The technosolutionist zero-effort approach will be to use AI to filter certain gestures and words, with the censorship downsides we know from automated content moderation. Eventually, companies will develop some sort of “kids mode”, as in gaming and media platforms, and, given the embodied nature of the experience, some way to peek through the keyhole at what is happening inside the VR headset.
- Needless to say, we’ll need more women and more diversity in the developer ranks to approach the creation of new worlds from different angles.
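To see why automated word filtering carries the censorship downsides mentioned above, consider a toy sketch (purely hypothetical, not how Meta or any real platform works): a naive blocklist matcher that flags any message containing a banned substring will also flag perfectly innocent speech, the classic “Scunthorpe problem”.

```python
# Hypothetical illustration of naive automated moderation and its
# over-blocking problem. The blocklist and function names are invented
# for this sketch.

BLOCKLIST = {"ass", "hell"}  # deliberately crude example list

def is_flagged(message: str) -> bool:
    """Return True if any blocklisted substring appears in the message."""
    text = message.lower()
    return any(bad in text for bad in BLOCKLIST)

# Genuine abuse is caught...
print(is_flagged("you ass"))                       # flagged, as intended
# ...but so is perfectly innocent speech:
print(is_flagged("hello everyone"))                # "hell" inside "hello"
print(is_flagged("let's discuss classical music")) # "ass" inside "classical"
```

Real systems use word boundaries, context models, and human review precisely to reduce these false positives, but the trade-off between catching abuse and silencing legitimate speech never fully disappears.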
So far, politicians like EU digital chief Margrethe Vestager have spoken of regulating the metaverse in economic terms, focusing on fair competition and neglecting social aspects. However, the metaverse experience is far more immersive than the screen-mediated ones citizens have been used to until now. Early evidence of abuse suggests that, beyond treating the metaverse as a new market, governing the environment and ensuring human dignity and safety is a task that cannot be neglected.