Part 4: Cautionary Tales from the World of AI

This is the fourth piece in a multi-part series exploring the ethics of virtual reality technology.

On March 23rd, 2016, a 19-year-old girl named Tay made a Twitter account. Normally, this wouldn’t be noteworthy; but Tay wasn’t like other girls. In fact, Tay wasn’t a girl at all. Tay was a collection of code and algorithms, nothing more than an artificial intelligence chatbot built by Microsoft. She was designed to mimic the speech patterns of a teenager and was deployed on Twitter to learn how to have conversations. As artificial intelligence training goes, the design was straightforward: people would tweet at Tay (using her handle @TayAndYou), and the algorithm would learn how people talked on Twitter and use that to generate original content that Tay would then tweet out. She would be educated by the conversation found in threads and retweets. Sixteen hours after Tay’s birth, Microsoft took her offline and started deleting as many of her tweets as possible. Within her short lifespan, she had been trained, but not in the way her inventors expected. Tay had become a full-blown Nazi [1].

How can something that’s lifeless — an algorithm without consciousness or humanity — become a Nazi? The answer is simple. An artificial intelligence chatbot is purely mathematical. It does not deal with semantics or meaning. It is given a training set (a set of data for it to learn from) and mimics what it sees. For example, the website thiscatdoesnotexist.com generates random images of cats. Its creator fed it a large collection of cat pictures, and the algorithm learned what a cat is supposed to look like. However, the algorithm has no innate idea what a cat is. It cannot describe the components of a cat. It is more like a parrot — repeating what it hears its owner say without understanding any of the meaning. When Tay began interacting with users on Twitter, they fed her their opinions and thoughts. If they wrote comments like “all Jews should die”, Tay learned that this was an acceptable form of conversation and repeated it. Microsoft was horrified by what happened, which is why it took the bot down so quickly.
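To make that mechanism concrete, here is a minimal sketch (in Python, and emphatically not Microsoft’s actual system) of a “parrot” text bot that only chains together word pairs it has seen in its training data. Its output is entirely a function of what it is fed; swap the polite corpus below for toxic tweets and it will reproduce them just as readily, because nothing in the code understands meaning.

    import random
    from collections import defaultdict

    def train_bigrams(corpus):
        # Learn which word tends to follow which; no notion of meaning.
        follows = defaultdict(list)
        for sentence in corpus:
            words = sentence.split()
            for current, nxt in zip(words, words[1:]):
                follows[current].append(nxt)
        return follows

    def generate(follows, start, max_words=10):
        # Parrot back a sentence by chaining observed word pairs.
        words = [start]
        while len(words) < max_words and words[-1] in follows:
            words.append(random.choice(follows[words[-1]]))
        return " ".join(words)

    # The bot's "opinions" come entirely from its training data.
    polite_corpus = ["humans are super cool", "humans are great"]
    print(generate(train_bigrams(polite_corpus), "humans"))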

Who was to blame for this? Some people blamed the trolls who spammed Tay with vitriolic thoughts. Microsoft, however, felt the blame lay more with itself. If a bot can be taught to be hateful, to be racist or sexist or discriminatory, that is a flaw introduced by its creators. According to Microsoft Cybersecurity Field CTO Diana Kelley, technology companies need to train their algorithms to weed out bias and abuse. At the 2019 RSA Conference Asia Pacific & Japan, Kelley said, “Looking at AI and how we research and create it in a way that's going to be useful for the world, and implemented properly, it's important to understand the ethical capacity of the components of AI.” The same can be said for other emerging technologies, including virtual reality. The ethical problems that have plagued the development of AI provide insight into the types of ethical conversations that virtual reality creators need to be having now, before it’s too late.

Bias in AI and VR

In 2015, Google released an auto-labeling system for photos. The algorithm behind it was supposed to identify a person and label what they were doing (e.g. “men sitting”). The problems began shortly after it was released. When black people had their photos labeled, they found themselves getting tagged as “apes” or “gorillas”. When Google began training its algorithm, it used its employees as samples for what people looked like. Those workers, though, were primarily white. The algorithm only learned to recognize pale skin and traditionally European features as “person”. This was not a one-off from Google, either, and the company tends to deny responsibility for the errors its algorithms make. In 2012, a French lawsuit accused Google of auto-filling anti-Semitic suggestions into the search bar. When someone searched “are Jews”, Google Search finished off the question with “evil?”, which led more users to anti-Semitic articles and websites. Google tried to deny it was a problem with its algorithm, saying that most people who search the word “jew” are looking for vitriolic content [2]. Until the mid-2010s, searches for “asian girl”, “black girl”, and “latina girl” would pull up porn sites first because the algorithm ranked results by how many keywords they matched. If a porn company tagged its video with “black girl sex”, “black girl butt”, and “black girl naked”, that video was more likely to turn up than a non-pornographic video tagged “black girl codes” [2].
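As a hypothetical illustration of that keyword-matching behavior (not Google’s actual ranking code), consider a toy search engine that simply counts how often a page’s tags repeat the query’s words. The page stuffed with matching tags wins, regardless of what the page actually is.

    def score(query, tags):
        # Count how many times the query's words appear among a page's tags.
        return sum(tags.count(word) for word in query.split())

    pages = {
        "heavily tagged porn site": ["black", "girl", "sex", "black", "girl", "butt", "black", "girl", "naked"],
        "coding nonprofit": ["black", "girl", "codes"],
    }

    query = "black girl"
    ranked = sorted(pages, key=lambda name: score(query, pages[name]), reverse=True)
    print(ranked)  # the heavily tagged porn site outranks the nonprofit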

Google is not alone in creating not-so-intelligent artificial intelligences. Many companies test their algorithms in-house, but because tech companies are often composed primarily of white men, the artificial intelligence learns from a limited pool of subjects with a limited pool of needs. Another example is the Apple iPhone X’s inability to tell Asian people apart with Face ID, which allowed people to unlock phones that did not belong to them [3]. While unintentional, bias has leaked into the algorithm. We are inclined to focus on ourselves — if something works for us, it is hard to imagine it not working for other people. Facial recognition is a dangerous example of biases in technology being left unchecked. These algorithms are now being used to identify criminal suspects despite their high error rates, and they have already led to wrongful arrests [4]. There is talk of using algorithms to judge whether or not someone is likely to commit a crime, like some kind of digitized Minority Report, but such algorithms would be making predictions based on assumptions about what a criminal looks like. The problem is, of course, that no one can determine who a criminal is from their face. This inherently flawed system makes room for the developers of the algorithm to teach it what they think a criminal looks like — a bias that unfairly targets black people [5].

So artificial intelligence algorithms can be unethical in design and implementation. What does that have to do with virtual reality? Virtual reality can’t serve search results or decide someone’s criminality based on their face. But like artificial intelligence, virtual reality is an emerging technology. While it has existed in primitive forms for many years, it is only now being commercialized and commodified. Unlike artificial intelligence, though, it has yet to reach wide dissemination in the public sphere and is not yet easily accessible. Artificial intelligence algorithms highlight that new technology can carry old biases, and those biases can encourage a negative and dangerous worldview. Algorithms must serve as a warning for virtual reality creators. It is particularly vital that developers take every step to minimize bias in this specific technology because of virtual reality’s unique psychological power.

When the embodiment illusion is applied correctly, and users spend time feeling as though their avatar – a body which looks different from their own – is theirs, it gives people the chance to relate to a situation they would not have experienced otherwise. In doing so, it can directly reduce implicit biases people hold [6]. For example, one study put parents into avatars that resembled young children. In this virtual experience, the parents were yelled at by an adult authority figure who, from the perspective of their child-sized avatars, looked like a giant. The participants found themselves genuinely scared and nervous. They left the study with more empathy towards their children and new ideas on how to relate to them: instead of yelling at them from above, perhaps it was better to get down to eye level and communicate directly without making their kids feel dwarfed. The same principle has been applied in experiments on race, with similar results. After embodying an avatar of a different race, users left the experience with more empathy towards people of different skin tones; their implicit biases had begun to dissipate and remained lower several weeks after the experiments [6]. This technology can even be used to decrease violence. Researchers in Barcelona, working with the government, use it as a therapy to rehabilitate domestic abusers by putting them in the body of a victim, and there has been a measurable drop in reoffending among people who undergo this virtual reality treatment [7].

While all of this is positive, it does hint at a darker side. If spending time in a virtual avatar can help you relate to different groups of people positively, it can also do the reverse. If you experience abuse in a virtual world, it feels real. You leave the experience feeling threatened, helpless, and vulnerable [8]. While it would be wildly unethical to deliberately use virtual reality to cause harm for the sake of an experiment, it is not unrealistic to believe that a poorly made virtual experience could end up reinforcing negative prejudices people already have. For example, assume you are playing a war game in virtual reality. You are the hero waging war in a war-torn city in the Middle East, and all the enemies you fight are brown men in turbans screaming about the glory of God. This particular stereotype was popular in video games after 9/11 and reinforced the belief that all Middle Easterners (particularly Muslims) were primitive and violent [9]. While this had a negative impact on small screens, it is worse when you are immersed. Virtual reality feels like reality to your brain, so you are unconsciously learning or reinforcing the stereotype. It stays in your head as a real memory and is interpreted as an accurate depiction of an entire group of people.

While this is all very negative and seems almost hopeless (how are we supposed to avoid accidentally implementing biases we didn’t even know we had?), one practice can make a big difference: co-creation. Co-creation as a concept is relatively simple: if you are creating something for or about a certain group or community, you include them actively in the development process. A great example is the TV special “Family Pictures USA”:

“Artist Thomas Allen Harris co-creates a living and growing family picture album of America by traveling across the country and inviting community members to share images and stories from their personal family archives. The resulting work involves live interactive performances, documentary films, web projects, and now, a TV special series on the PBS (US) television network.” [10]

The final work painted a diverse picture of America created by actual citizens. It portrayed a wide variety of cultures all sharing their stories and traditions in their own words. 

If we are creating technological products for everyone to use, everyone should be involved in the process. One person’s biases may be easily caught by another person, and vice versa. Active collaboration and knowledge from different groups of people are vital to avoiding accidental exclusion, misalignment, and danger. The failures of artificial intelligence algorithms are a direct product of the homogeneity of the people creating them. If algorithms teach virtual reality creators anything, it is the importance of equitable experiences. Before artificial intelligence became part of our daily lives, it was hard to imagine that it could falsely accuse people of crimes they did not commit. Unfortunately, that has become a reality. Virtual reality is fast approaching wide consumer availability, and we cannot let it follow in its technological predecessor’s footsteps.

References

1. Neff, Gina, and Peter Nagy. “Talking to Bots: Symbiotic Agency and the Case of Tay.” International Journal of Communication, vol. 10, 2016, pp. 4915–4931.

2. Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press, 2018.

3. Zhao, Christina. “Is the iPhone X Racist? Apple Refunds Device That Can't Tell Chinese People Apart, Woman Claims.” Newsweek, 18 Dec. 2017.

4. Hill, Kashmir. “Wrongfully Accused by an Algorithm.” The New York Times, 24 June 2020.

5. Benjamin, Ruha. Race after Technology: Abolitionist Tools for the New Jim Code. Polity, 2019.

6. Slater, Mel, and Maria V. Sanchez-Vives. “Transcending the Self in Immersive Virtual Reality.” Computer, vol. 47, no. 7, July 2014, pp. 24–30, doi:10.1109/MC.2014.198.

7. Seinfeld, S., et al. “Offenders Become the Victim in Virtual Reality: Impact of Changing Perspective in Domestic Violence.” Scientific Reports, vol. 8, no. 1, 2018, doi:10.1038/s41598-018-19987-7.

8. Gonzalez-Liencres, C., et al. “Being the Victim of Intimate Partner Violence in Virtual Reality: First- Versus Third-Person Perspective.” Frontiers in Psychology, 8 May 2020, https://doi.org/10.3389/fpsyg.2020.00820

9. Ibaid, Taha. “The Waging of a Virtual War against Islam: An Assessment of How Post-9/11 War-Themed Video Games Stereotype Muslims.” University of Ontario Institute of Technology, 2019.

10. Cizek, Katerina, et al. “Part 1: ‘We Are Here’: Starting Points in Co-Creation.” Collective Wisdom, 1st ed., 2019, https://wip.mitpress.mit.edu/pub/collective-wisdom-part-1

Kalila Shapiro