Part 3: Proceed with Caution - An Interview with Mel Slater
This is the third piece in a multi-part series exploring the ethics of virtual reality technology.
Professor Mel Slater has been at the forefront of virtual reality research for decades. As a Distinguished Investigator at the University of Barcelona and founder of the Virtual Environments and Computer Graphics group at University College London, he has conducted many of the key studies that other academics use as a basis for their own work. Slater’s investigations have contributed significantly to our understanding of the embodiment illusion and how it might be applied in a therapeutic setting. Papers he helped author are cited in nearly every article in this series. Most recently, Slater was first author on a paper titled “The Ethics of Realism in Virtual and Augmented Reality”. Organized by Digital Catapult and supported by well-known companies in the virtual reality industry (HTC Vive and Magic Leap stand out as notable examples), the paper surveys the current state of ethics in virtual reality, speculating on how users interact with virtual characters, whether hyperrealism benefits user experience, and whether greater effectiveness always means a better experience. For this installment of “Strange Illusions”, I spoke to Professor Slater about his recent work on this publication and his thoughts on the most pressing dangers facing users of virtual reality.
Conducting an interview over Zoom across two different time zones is not a particularly easy task. However, despite the spotty audio and occasionally poor WiFi connection, one thing became abundantly clear: we do not know very much about the implications of virtual reality for its users. The key word of the conversation was “speculation”. As a community of academics, developers, and creators, we lack the knowledge to fully understand the ramifications of our actions. The ethics paper listed several potential issues that can arise as a result of hyperrealism in virtual reality experiences: users spending so much time in virtual reality that they prefer their lives there to their lives in the physical world, more cases of body dysmorphia if users feel more comfortable in their digital avatars than in their actual bodies, and other deep psychological distortions. I asked Professor Slater which threat he felt was most imminent and what the virtual reality community should focus on to cause the least harm to users.
Mel Slater: “It’s difficult to say. One of the points of the paper is that, basically, we don’t know! Although virtual reality has been around for a long time, it’s been under controlled conditions in a lab with ethics approval by institutions. There’s very little research that shows what’s going to happen when people use it on a massive scale, which is beginning to happen now.”
In other words, any one of the issues his paper examined could be a major problem, but we have no way of knowing which one at present. He expressed particular concern over people spending too much time in virtual reality, simply because we do not know what the consequences of continued and extended use would be. “We don’t have any information as to what happens when people spend as much time in virtual reality as they do, say, playing video games.” His main worry, though, was deepfakes and identity theft in virtual experiences [1]. The example he gave was unnerving: “You’re at home conversing in augmented or virtual reality with your grandmother, and then it turns out her identity has been hacked and the whole time you were talking to whoever hacked her.” Essentially, bad actors could use machine learning to build a replica of you and masquerade as you in a social virtual experience. They could make you say and do things you would never do, extract sensitive information from your loved ones, and ruin your existing relationships. While this sounds more like science fiction than a plausible threat (Professor Slater explained that it might take a few years to manifest), it is far more realistic than it might seem.
“Deepfakes are pretty easy to do. We’ve done them in the lab. There are videos of me singing a John Lennon song with my face, but it’s his body and his voice. We can do these kinds of things and it’s not difficult at all to do them.”
In a lab setting, or for personal gaming where the user consents to their face being used as part of research or individual entertainment, Professor Slater notes that this should be fine. It is only outside the confines of consent that it becomes an issue. In such cases, identity faking should be treated like a crime. Slater draws a connection between the virtual world and the physical world: if the virtual world is an extension of our physical one, should we not be governed by the same laws that govern us in real life? If someone steals an ID or impersonates someone in person, it is illegal. Slater posits that publicity may be part of the solution. Perhaps the main issue is that people think their actions in virtual reality do not count and need to be made aware that they do; their actions in VR may be judged just the same as any action they commit in the physical world.
While the premise that virtual reality crime should be treated the same as physical crime is sound, defining the rules and enforcing regulation becomes a bit more difficult. In an article titled “If Virtual Reality Is Reality, Virtual Abuse Is Just Abuse”, tech ethicist Fiona J. McEvoy argues that, despite no physical injury being inflicted in the virtual world, any form of virtual mistreatment, harassment, or assault should be dealt with in the same manner as its physical equivalent. Indeed, people have spoken out about their experiences of being sexually assaulted in virtual reality and the strong, damaging psychological effects it had on them. However, a devil’s advocate could easily point out that those traumatic incidents never really happened. Someone else’s avatar inflicting violence on your avatar in a Social VR setting surely is not the same as it occurring in real life. To this, Slater has a strong response:
“To me, whether it happened in real life or virtual reality is irrelevant. If you’re assaulted in VR, you’re assaulted. You don’t have physical injuries, but you have mental injuries. It all comes down to what we said in the paper, ‘Do to others as you would have done to yourself’. This is a bad experience, it shouldn’t be done…. If you wake up in the morning and your Twitter account is full of abuse, you’re going to be hurt by that. It’s not that you’re physically hurt, but you’re mentally hurt. That can be as bad as physical hurt. It’s a difficult area. Behavior is behavior, whether it is behavior in the physical world or the virtual world. Over time, the space between the physical world and the virtual world will become smaller and smaller, especially as augmented reality becomes as widespread as mobile phone use is today. If we don’t do this regulation right now, who knows what will happen in the future. So if it’s not okay for someone to be walking down the street and receive racial abuse from a person, it also shouldn’t be okay if they’re racially abused by a virtual person that they see in virtual reality. To me, it’s more or less the same thing.”
Slater then goes on to say that, unless someone willingly consents to experiencing some kind of abuse in virtual reality, it should be completely disallowed and penalized. He mentioned an experiment his lab had run years prior in which experienced public speakers were placed in front of a virtual, wildly rude audience. The speakers were so shaken by this that, despite their years of experience, they could not give a speech.
The next concern in enacting regulation is how to find the people who commit virtual crimes. Unlike in the physical world, a victim cannot describe who hurt them; they can only describe the avatar. Would that not make the law much harder to enforce? Slater agrees that it may be hard to get law enforcement to follow up on these cases because there is no physical injury, but notes that we have the technology to track someone down, using their IP information to find out who they are. He also suggests that a base level of protection be built into Social VR systems.
“How these Social VR systems are set up, that sort of thing should be built into them. On the one hand, we don’t want Big Brother looking into everything that we do in social virtual reality, but if someone does something wrong, then it should be possible to trace who they are. However, I’m not an expert in this kind of security issue. I would think it should be possible.”
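Slater does not spell out a design, and his paper does not contain one, but it is worth sketching what such a base level of protection might look like. The Python sketch below (with entirely hypothetical names) illustrates one possible approach under the assumptions he describes: session records tying an avatar to an account and IP address are stored encrypted, so the platform cannot casually browse them, yet a single record can be unsealed when an abuse report is vetted.

```python
# A minimal sketch, not from Slater's paper, of traceability without
# blanket surveillance in a hypothetical Social VR platform.

import json
import time
from cryptography.fernet import Fernet

# In practice this key would be held in escrow (e.g. split between the
# platform and an independent auditor), not generated and kept in one place.
ESCROW_KEY = Fernet.generate_key()
escrow = Fernet(ESCROW_KEY)

audit_log = {}  # avatar session id -> encrypted identity record


def register_session(session_id: str, account_id: str, ip_address: str) -> None:
    """Record who is behind an avatar, encrypted so routine access is impossible."""
    record = json.dumps({
        "account_id": account_id,
        "ip_address": ip_address,
        "timestamp": time.time(),
    }).encode()
    audit_log[session_id] = escrow.encrypt(record)


def resolve_abuse_report(session_id: str) -> dict:
    """Decrypt a single session record; only invoked after a vetted report."""
    return json.loads(escrow.decrypt(audit_log[session_id]))


# Example: one avatar is reported for harassment, and only that session's
# identity is unsealed for follow-up by moderators or law enforcement.
register_session("avatar-7f3a", "user-123", "203.0.113.8")
print(resolve_abuse_report("avatar-7f3a"))
```

The design choice mirrors Slater’s tension exactly: routine snooping is blocked because the records are sealed, while traceability survives for the cases where, as he puts it, “someone does something wrong.”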
Virtual characters and their rights are a main focus of Slater’s paper, so the natural next line of inquiry was to examine what rights virtual characters have. Assuming the premise that real-world laws apply to the virtual representations of real people in virtual worlds (i.e. you in your avatar), do those same laws apply to non-humans? Virtual characters can look, sound, and act like people, and as technology improves, they will become more and more convincing. If someone beats up a virtual character, have they committed assault? Surely, if someone is venting their violent urges in virtual reality and not physically or mentally harming anyone real, that must be better, right? The question falls into one of the many virtual reality gray zones that lab testing has yet to resolve.
“Virtual characters themselves are not people. They’re just bits of code; they have no intelligence. Nothing. But they’re a representation. In our minds, they are people. It’s the difference between the representation and what’s actually there, which is just a piece of software that in itself has no knowledge, nothing whatsoever. But what those characters represent in our minds is something like real people, so it’s a very difficult thing to say. So what I want to say is that those representations have rights. Not the characters themselves, but what they represent in our minds somehow has rights. So the argument is that if you have horrible urges and you act on those urges in virtual reality, no one is physically being harmed and you get the urges out of your system or something like that. I don’t understand it completely. On the other side, like we said in the paper, we refer to something Kant said: if you treat animals badly, you may be more likely to treat real people badly. So none of these are empirical statements. We don’t have the evidence about them. We don’t know if it’s true that somebody who has the impulse to, I don’t know, kill people goes and kills people in virtual reality and is then less likely to kill people in the physical world. I don’t know. We have no evidence about that. Until we have evidence, I would be on the safe side and say we should not abuse virtual characters because somehow, in our minds, they represent real people.”
The basic premise of this idea is that virtual characters seem real to us, and therefore we should not be encouraged to be violent towards them. Slater points out that some people may compare this type of virtual reality violence to that in video games. Some people strongly believe that violence in video games causes no harm; while there are many papers that disagree, there are also many that support this view. This might indicate, to some, that virtual reality violence towards virtual characters is completely okay. Slater thinks the lack of research in this area means we should err on the side of caution.
"I would say that whatever’s found about computer games may not apply to virtual reality because it’s a qualitatively different experience. So to be on the safe side I would say no, don’t cause violence to virtual characters because in your mind it’s represented like they’re something like real people. This is the point of virtual reality actually. If that wasn’t the case, there’d be no point to it.”
Violence, assault, and identity theft are all heavy, sensitive topics, and users should not be exposed to them lightly given the potential damage they may cause. So where does the responsibility lie? While the answer likely lies with creators, not everything is their fault. There is no concrete way to warn users about what they are about to experience. Slater supports the idea that a more in-depth rating system needs to be created to account for the immersion and psychological effects users undergo in virtual reality that they do not experience with other forms of media. Another problem is that proper behavioral norms for virtual reality have not yet been socialized in the general public. Unlike older forms of media such as movies or plays, virtual reality does not come with years of implicit craft knowledge. The best creators can do is try to guess whether something might be acceptable in virtual reality, and they may miss the mark. Like most other things in virtual reality at this point in time, responsibility is ambiguous because there is not enough research for people to rely on.
Professor Slater believes that now is the time to act and to have these critical discussions, and that virtual reality companies need to support academic exploration. While Slater draws the conclusions in his paper from a strong body of research, much of it remains guesswork until it is confirmed in user testing. That makes it hard to reach concrete decisions about what laws might be needed to make virtual reality safe, let alone what the biggest risks even are (though we can make reasonable guesses).
“As we said in our paper, almost everything we put in there is speculation. We have no real idea about what’s going to be happening. First is to get knowledge, to have some kind of scientific basis on which to make decisions and, as I said before, many types of wrongdoings that could happen in virtual reality are probably covered by the law anyway like fraud. The most important thing is to have the research funding to find out what’s going on. I wouldn’t rush into regulation because things that are really bad are probably already covered by law, and everything else we don’t know enough about. Regulation should be on the basis of scientific knowledge.”
Slater is personally keen to find out how well the human brain can separate the digital world from the physical one:
“I’d like to know more about whether people are able to distinguish between reality and virtual reality. So if they have an experience in virtual reality, how much does it carry over to the real world? If there’s always a very clear separation between an experience in virtual reality and an experience in reality, then we don’t have a lot to worry about. But if there’s a carry-over, then we have more to worry about and we have to be more careful. Obviously, there’s going to be some carry-over. Like everybody, I have personally seen a movie that stayed with me for weeks and weeks because of something in the story or something that happened in the movie. So if that’s the case for movies, then it’s very likely that that’s the case for virtual reality. But again, we don’t know for certain and this is the kind of thing that has to be studied.”
At this point in time, we do not know much. Professor Slater was careful to state that speculation does not mean truth. We can plan for the worst without actually knowing what to prepare for. There are so many unexplored avenues with this particular medium, which should raise some alarm and cause developers to proceed with caution. It could be years until we have solid scientific backing to fully understand what goes on in people’s minds when they enter a virtual world, and it could be even longer until multi-user experiences are considered safe. While the burden (but not the blame) falls heavily on creators, it is imperative to develop experiences with kindness, thought, and as much foresight as possible until we know more.
Notes
[1] A deepfake is a convincing, realistic replica of a person created through machine learning. A good example is Halsey Burgund’s “In Event of Moon Disaster”, in which ‘Richard Nixon’ delivers a televised speech that was written, but never spoken, about the deaths of the astronauts sent to explore the moon.