CHANGEMAKERS: Milena Pribic on AI Ethics and Making Change in Big Tech

Milena Pribić is an advisory designer on the IBM Design for AI team and the co-author of Everyday Ethics for AI. She is passionate about creating and scaling resources on ethical AI for designers and developers and has spoken about AI ethics and design to diverse, international audiences. Andrew from All Tech Is Human spoke with Milena about her experience as an ethical change agent in a large tech company, AI ethics in the midst of the COVID-19 pandemic, and more.


Andrew: You started at IBM 5 years ago as a product designer. Tell us about your journey from there to your current work in AI design and ethics. In particular: what first inspired you to think critically about the ethics of AI design?

I started at IBM as a developer and pivoted to design on my own. I started thinking about AI ethics a few years ago, while I was designing the personality of an AI tutor. It struck me how subtle changes in voice and tone drastically altered the students' perception of the tutor and their engagement; the dynamic between student and tutor shifted so quickly. The students started referring to the tutor as if it were a real person, and I just had a moment of... is this normal? Are they aware that they're in the driver's seat? Transparency is foundational to trust. And a healthy dynamic means continued interactions, which means improved technology over time. If you're working with AI, you're a relationship designer. It's your job to make sure it's a healthy one.

You’re a co-author of Everyday Ethics for Artificial Intelligence, a guide to ethical AI design and development. What was the impetus for this project, and what changes have you seen take place within IBM as a result of it?

We needed a resource that any member of a team could pick up and point to when they came across situations like I did with the AI tutor. We’ve always called Everyday Ethics the first step in the process because it’s meant for group discussion but also offers specific resources and to-dos without feeling like a checklist. The guide was originally created for designers and developers but as we wrote it, it became clear that it would be useful for any role. 

The guide was published in 2018, so a lot has evolved since then in terms of how plugged in the designers are. There were passionate people across the company who were already doing cool things in the space, so they were some of the first to rally around the guide. There's a system of governance and standards that's crucial to any (large) company, but it's also important to foster a community organically. I sit with a bird's-eye view of all the teams: I'll facilitate collaboration and critiques, or I'll pull in IBM Research to share their ethical expertise with the designers. But ultimately, it's the individual designers on those teams who know their users the best, and it's been rewarding to watch them realize the gravity of that responsibility and encourage the rest of their team to start thinking that way as well.

In Everyday Ethics for Artificial Intelligence, you identify five areas of ethical focus for AI: Accountability, Value Alignment, Explainability, Fairness, and User Data Rights. How did you and your colleagues arrive at these 5 areas? Which area(s) do you feel are most often missed by AI designers, and why?

We were super inspired by IEEE’s Ethically Aligned Design and wanted to create a digestible version of that in our own voice. We worked closely with Francesca Rossi, IBM’s Global AI Ethics Ambassador, to narrow the focal points down to what we felt were the most pressing and relevant ones.  

It’s hard to say which is most often missed by AI designers since all five of the focus areas are intertwined and dependent on one another. The problem is usually not a matter of missing one or the other, it’s more about how deeply each area is addressed within an actual design process. You can’t just read the guide and check it off the list, or have a conversation about it at the proverbial water cooler and go on about your day. This type of work and thinking is a daily practice. If you’re not actively incorporating those ethical insights into your design decisions, you’re not internalizing them. All of this work is about behavior change for practitioners. It has to become like muscle memory! 

What advice would you give to someone who’s looking to spearhead ethical change within their own company? 

Think about it like this: every AI job is now an AI ethics job. If you’re really thinking in a human-centered way, you don’t need permission to start designing better experiences that are rooted in building trust. See how you can leverage or build on what’s already available (like Kat Zhou’s design thinking exercises). Take our Everyday Ethics guide or use it as a jumping off point to write your own.

Find other people like you. The most passionate collaborators are those who carve out the time and resources to address ethical questions for their particular domain. From there, what sticks and scales to different scenarios? An internal AI ethics community could start with a Slack channel where you share your experiences with each other.

What unanswered questions have been on your mind lately regarding AI or design ethics? What type of work needs to be done to answer them?

Since my team is focused on human/AI relationships, I've always been interested in power dynamics and how we define them for different use cases. Humans are so weird. We're full of contradictions when it comes to those questions about healthy, optimal relationships. I'll loop in anthropologists, trust researchers, and even moral philosophers to discuss and share notes. We have to keep pulling in experts from different disciplines as primary collaborators.

The COVID-19 pandemic has taken up the majority of my brainspace lately, like it has for all of us. How do we protect citizens from the virus while also making sure we protect them from potentially overreaching surveillance? What makes a healthy human/AI relationship in a time of crisis? People who rely solely on technology to save the day don't realize that AI tech is initially an empty vessel, meaningless without the context and the values we assign to it. As designers, we have a lot of power in defining those values in the best interest of our users. Remember that responsibility as you design these experiences: the human has to be prioritized over the technology every step of the way.

You can connect with Milena on LinkedIn and Twitter.
