Part 1: Welcome to the Matrix

This is the first piece in a multi-part series exploring the ethics of virtual reality technology.

“We felt so free — but we should have been more thoughtful” — Jaron Lanier, “You Are Not A Gadget”

Being online in this decade is terrifying. When millennials first got online in the early aughts, the biggest dangers were catfish, fake information packaged as truth on Wikipedia, and Slenderman. While these are still present in the internet ecosystem (with, arguably, the exception of Slenderman), it is now impossible to surf the web without confronting the threats that everyday technology use poses to our civil liberties.

Searching with Google, connecting on Facebook, and shopping on Amazon have become vital parts of our daily routines, and it is hard to function as a full member of society without at least one direct plug into the net. The internet brings us convenience, but as recent conversations around the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) have revealed, that convenience comes at a cost. Our privacy and our data are not secure. To the tech giants, we are pieces of information to buy and sell; our online lives fund their companies. While this violation alone should unnerve us, it is only one small part of a massive ethical web in which we have found ourselves entangled. Despite recent regulatory efforts, it will take years before we have full control over our online personas. We will have to undo decades of technological practices that have shaped how we use the internet and rethink the basics. How did we get into such a mess? How did no one see this coming? Why did we not design something better and less exploitable?

In his book “You Are Not A Gadget,” computer scientist and philosopher Jaron Lanier introduces the concept of Lock-In. Lock-In is when a technological feature becomes unchangeable, built into the very foundation of computer development. When the internet was shiny and new, it was lawless. There were no rules or restrictions; you could make and share whatever you wanted. It was a paradise — but there is danger in excess. Due to the lack of regulation, certain companies gained monopolies. Practices that started with good intentions were put into place and remained there for decades, being warped over time by the people who controlled them. 

For an example, look to internet Cookies. Cookies are small bits of data passed between a website and a browser, carrying user information along with them. They keep items in your shopping cart, keep you logged in so you do not have to sign in again on every page of a website, and help auto-fill forms. Overall, they are helpful, efficient, and convenient. But they have a dark side. Cookies also track users across the web, latching onto their browsers and recording which sites they visit and for how long. While this may seem harmless, Cookies send this data back to major advertising companies, which then use it to target you. Companies can know intricate details of your thought patterns and daily routines, as well as the secrets you intended to keep hidden in your browser. Any hope of anonymity is gone as Cookies follow your trail, recording every search path you take and linking it back to you. Because of Lock-In, we are stuck with Cookies despite their significant faults. Cookies are so foundational to today’s internet that we cannot improve upon or circumvent them without undermining the entire system. Lock-In is what happens when an idea with countless positive applications is weighed down by risks that were never considered during development and that have since become a core design feature of the web.
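The mechanics are simple enough to sketch. Below is a minimal illustration in Python, using the standard library’s `http.cookies` module, of the two halves of the exchange: a server mints a cookie in a `Set-Cookie` response header, and the browser echoes it back on every later request, which is exactly what lets a site (or a third-party ad domain embedded in it) recognize a returning visitor. The names `session_id` and `tracker` are hypothetical, chosen for illustration.

```python
from http.cookies import SimpleCookie

# Server side: mint a cookie and emit it as a Set-Cookie response header.
outgoing = SimpleCookie()
outgoing["session_id"] = "abc123"
outgoing["session_id"]["path"] = "/"
print(outgoing.output())  # Set-Cookie: session_id=abc123; Path=/

# Browser side: on every subsequent request to that site, the browser
# sends the stored values back in a Cookie request header. A tracking
# cookie set by an embedded ad network rides along the same way.
incoming = SimpleCookie("session_id=abc123; tracker=xyz")
print(incoming["tracker"].value)  # xyz
```

The convenience and the surveillance are the same mechanism: the browser faithfully returns whatever identifier it was handed, whether that identifier keeps your shopping cart full or tells an advertiser you came back.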

We are at a unique moment in technological development where we can stop history from repeating itself. When Ivan Sutherland created the first Head-Mounted Display (HMD), the “Sword of Damocles” system, in 1968, he propelled the field of visual illusions into the digital realm. With that, he created a new area of computer science: virtual reality. A half-century later, this field is still in flux. We are still in the wild, uncharted waters of what this technology can do, both in terms of hardware and software. Now is the time to be talking about the risks. There is plenty of media that sensationalizes the dangers that come with virtual reality, most famously The Matrix. More recently, the popular television show Black Mirror has tried to start conversations about the ethics of emerging tech. These works are abstract. They envision technological applications that are hard to picture with the devices we currently have, making it easy to write off the discussions we need to be having now as a type of Uncanny Valley sci-fi. We cannot afford to do that. While the worlds these fictions present are extreme, if virtual reality developers have learned anything from their web developer predecessors, it is that we must be vigilant. We must try to perceive potential threats before they become real issues and before we get Locked-In.

The risks of virtual reality are different from the risks of the internet. Virtual reality holds far more power over the user than a website does. It relies on tricking the eye and the mind, convincing the user not only that 2D pixels are genuine 3D objects, but that their virtual environment is just as real as their physical one. To do this, virtual reality uses three illusions: place (convincing users they are in another location), plausibility (convincing users that the objects and characters in the virtual world behave believably, in both realistic and non-realistic scenarios), and embodiment (convincing users that a digital avatar is their body). When place, plausibility, and embodiment are successfully implemented, they create an immersive experience that users believe to be real, eliciting interactions with virtual characters and objects that directly mirror real-world behaviors.

The embodiment illusion is particularly strong because it has the power to convince people that they are in another body. Our brains rely on our eyes to feed us information about our physical form. We look down, we see our arm move, and we know it is our arm. In a headset, when the sight of your actual body is obscured, an avatar can take its place. Suppose you are in a spaceship virtual reality game. You look down and see a green alien body with goat feet. You know that you are not green and that your feet are not those of a goat, but when you raise your leg in your physical space, the green, goat-footed virtual leg moves at the same time and in the same place. Your eyes see this and tell your brain that that virtual leg is, in fact, yours. Our minds are flexible, allowing us to inhabit bodies that look nothing like our own and to use them naturally.

This phenomenon, called homuncular flexibility, was demonstrated by an experiment that placed humans in a virtual lobster body and observed them maneuvering it naturally, as if it were their own. According to the research of Mel Slater, a leading figure in the field of virtual reality applications, the power these illusions hold can alter our sense of self. Time spent in an avatar of a different ethnicity can decrease a user’s racial bias. Placing domestic abusers in a situation similar to that of their victims may help deter them from re-offending. These applications are remarkable and have the power to do a great deal of good.

We cannot let ourselves fall into the Cookie trap. Identifying the good is easy. Now we have to ask the hard questions about a technology powerful enough to alter our psyches. What do we do with this? In addition to making sure that we have reasonable solutions to the ethical dilemmas surrounding other forms of technology (right to privacy, freedom of speech, etc.), we have a unique set of questions to answer for virtual reality. Given the power of the embodiment illusion, could a shooting game in virtual reality cause psychological damage in the physical world? How do we handle harassment in social VR spaces? How do we ethically conduct experiments on potential users? If we ignore these pressing questions now, in the early stages of virtual reality entering mainstream consumption, we risk creating our own Lock-In: building harmful systems into the very base of development that will be difficult to escape.

This is the introductory piece in a series that will attempt to start answering these questions. Through interviews with prominent scientists and developers in the field, as well as analysis of published research across a variety of related disciplines (psychology, design, and computer science, to name a few), we will explore the ethics of this new world before we become too set in our ways to change it. Stay tuned for more!

Kalila Shapiro is a digital media researcher, virtual reality and human-computer interaction designer, and creative technologist. You can connect with her on Twitter and keep up with her work at kalilashapiro.com.

References

Slater, Mel, and Maria V. Sanchez-Vives. “Transcending the Self in Immersive Virtual Reality.” Computer, vol. 47, no. 7, 22 July 2014, pp. 24–30, doi:10.1109/mc.2014.198.

Seinfeld, S., et al. “Offenders Become the Victim in Virtual Reality: Impact of Changing Perspective in Domestic Violence.” Scientific Reports, vol. 8, no. 1, 2018, doi:10.1038/s41598-018-19987-7.

Won, Andrea Stevenson, et al. “Homuncular Flexibility: The Human Ability to Inhabit Nonhuman Avatars.” Emerging Trends in the Social and Behavioral Sciences, 2015, pp. 1–16, doi:10.1002/9781118900772.etrds0165.
