Changemakers: Fiona J. McEvoy on Tech Ethics in Academia and Today's Toughest AI Ethics Questions

Fiona J. McEvoy is an AI ethics writer, researcher, speaker, and thought leader, and the founder of the popular website YouTheData.com. She has been named one of the 30 Women Influencing AI in SF (RE•WORK) and one of the 100 Brilliant Women in AI Ethics (Lighthouse3). Fiona contributes often to outlets like Slate, VentureBeat, and The Next Web, and regularly presents her ideas at conferences around the globe. Andrew from ATIH sat down with her to discuss how her humanities background influences her work, what tough questions she’s tackling these days, and more.


Andrew: Tell us about your journey into the tech ethics world.

Fiona: I took a career break to go to grad school and study philosophy. While I was there I became interested in a host of new ideas, but ended up focusing on some of the main ethical and moral considerations for new technology. I wrote my thesis on how big data can be deployed to restrict human autonomy, and I’ve been writing and thinking about these topics ever since. My blog, YouTheData.com, has been a great outlet for me to think about the way technology is changing the human experience. I’m really gratified that others have found it thought-provoking enough to ask me to write and speak publicly on these subjects.

You hold degrees in classical studies, English literature, and philosophy. How does your humanities education influence your approach to tech ethics work today?

The road that has brought me to tech ethics has been winding, but the common thread has been an interest in human beings and the civilizations we create. I believe that having an arts and humanities background is actually a huge asset when it comes to examining this new technological epoch within its broader historical and cultural context. 

On the one hand, I have quite literally written about technology with reference to the creative process, Shakespeare, the Stoics, and Aristotelian ethics (among others). But outside of those direct references, I believe my early immersion in the humanities helped me acquire a broad and useful understanding of human motivations, incentives, organization, hierarchy, behaviors, relationships, and, of course, how new things can change our societies and our lives as individuals.

Very few societal evolutions are entirely unique, and in anticipating harm it can be as helpful to look backwards as it is to look forwards.

In addition to your popular work as a keynote speaker and as the author of YouTheData.com, you recently published a number of articles in academic journals. How have you seen the conversation around tech ethics change in academia in recent years?

I cannot claim to be right at the heart of the current tech ethics conversation in academia, but I’m certainly buoyed to see a growing number of opportunities for scholars wishing to write and speak on the topic; the corollary being that there are now more resources for people like me to draw on. There have always been excellent academic voices at the heart of the dialectic, like Shannon Vallor and Luciano Floridi, but now the conversation is becoming both broader and deeper, with many more participants.

On a personal level, my forthcoming paper is about the plight of human autonomy as we embrace deeper and deeper levels of technological immersion, as with AR and VR. Not long ago, this would have seemed like an extremely niche topic, but I’m encouraged by the interest and discussions I’ve had over the last year or so. I put it down to the intensifying interest in tech ethics of late and the growing realization that these topics are among the most important facing humanity.

What are some tough questions related to AI or data ethics that you haven’t found satisfying answers for yet? What kind of work do you think needs to be done to answer them?

The work I do doesn’t provide answers, and I am rarely prescriptive about courses of action. In scholarly research, I simply try to identify where problems may arise, and I do something similar in my other commentary. As unhelpful as that might sound, I’ve been pretty quick out of the blocks in publicly discussing the inherent problems with technologies like deepfakes and emotion AI. Part of the process of finding a solution is recognizing the problem to begin with. Some of the biggest tech catastrophes we’ve had so far are the result of attitudinal negligence when it comes to forecasting bad outcomes.

From my vantage point, there’s still a long way to go before we can say we’ve found a solution that fully mitigates even the most discussed tech-driven harms, like algorithmic bias. In fact, the list of technologies that have not been satisfactorily evaluated for their potential effects is far too long. We already live with AI-driven artefacts and platforms that discriminate, infringe on our privacy, subtly manipulate decision-making, steal personal data, and deceive us, all with unknown downstream effects on our behaviors and mental health.

It strikes me that there is work to be done on all fronts here. Sometimes it will be about finding solutions, but there is also a whole body of work in determining which types of technology should be tightly controlled, or have their deployment suppressed entirely.

In what areas of business, society, or life in general do you see the greatest potential for AI to do good? Can you give some examples of promising work that you’ve seen?

I think the potential for AI to do good is huge and, for all my harm-spotting, I do try to use the blog to point out these opportunities too. The possibilities for health, education, the distribution of resources, and general convenience are clearly tremendous. I attend lots of conferences, and I’m always encouraged to see how many people are dedicating their lives to truly exciting new technologies.

I don’t like to name companies, as I’m not always sure I’m familiar with the best version of a certain product or solution, but there are systems helping those with physical and mental impairments navigate the physical and sensory environment that I find incredible, and actually quite moving. There are also AI-driven businesses working in sustainable agriculture that are already preventing waste and maximizing the earth’s precious resources. It’s good to know that while some technologists are busy creating fake news bots or building the software for Sophia the robot, others are using their powers for good, in many cases providing smart solutions to some of the longstanding issues facing us globally.



You can connect with Fiona on Twitter and LinkedIn and can keep up with her latest work at YouTheData.com.
