CHANGEMAKERS: Lauren Maffeo on AI Bias and Explainability
Lauren Maffeo leads business intelligence research at GetApp and is a globally recognized expert on AI ethics. She has presented her research on bias in AI at Princeton and Columbia Universities, Twitter’s San Francisco headquarters, and Google DevFest DC, among other venues. She was also recently elected a fellow of the Royal Society of Arts. Andrew from All Tech is Human spoke with Lauren about AI bias and explainability and the future of AI ethics research.
Andrew: You began your career as a journalist writing for outlets like The Guardian and The Next Web. How does your background as a journalist influence your approach to ethical issues in data and AI?
I think starting my tech career as a journalist was hugely helpful for covering AI. As a reporter, my job wasn't to be a tech expert - it was to find the right tech experts per story, then ask them the right questions. I had wanted to work as a journalist for years and didn't care which beat I took; it was more important to me that I get hands-on experience.
Over time, I found that I was genuinely interested in tech, especially its impact on society. As a reporter, I learned to start with tech's impact on people, then work backwards to assess how we arrived there. It might seem like a counterintuitive approach, but it allowed me to see that tech isn't built and deployed in a vacuum - it has consequences, both good and bad.
Much of your research focuses on indirect bias in machine learning algorithms. What first inspired your interest in this subject?
Around Fall 2016, I started reading more articles about how AI was forecast to "steal" millions of jobs from humans. I wondered how accurate this was - how could AI be advanced enough to cause such drastic job displacement? I started trying to answer this question by reading my colleagues' research on AI, then writing my own content on this subject.
Eventually, a guest speaker came to one of our AI Research Community meetings to discuss bias in AI. I thought that was a fascinating subject, and started researching it in more detail. This was in the summer of 2018, as the issue of bias in AI started to gain traction in the computer science field. Since there was increased need for knowledge on this subject, I was able to start speaking about it at tech conferences around the world.
How does the topic of AI explainability relate to all of this?
Explainable AI is a design decision that development teams make. It applies open source packages and toolkits to the algorithmic modeling process. Its goal is to minimize the use of black box algorithms, which arise when an algorithm's designers can't see how it teaches itself new skills or reaches its decisions.
Black box algorithms risk reinforcing bias in the data that they're trained on. This happens when sensitive attributes (race, religion, sexual orientation, etc.) correlate with non-sensitive attributes (home address, years of experience, etc.). The result can be algorithms that marginalize certain groups, such as failing to recommend qualified women for jobs because of their gender.
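To make that proxy problem concrete, here is a minimal sketch with synthetic data and hypothetical column names (not drawn from Lauren's research). Even though the sensitive attribute is dropped from training, a correlated "neighborhood" feature lets the model reproduce the historical bias.

```python
# Minimal sketch: a "non-sensitive" proxy attribute smuggles a sensitive one
# into a model. All data is synthetic and the column names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Sensitive attribute we deliberately exclude from training.
group = rng.integers(0, 2, size=n)            # two demographic groups, 0 and 1

# "Non-sensitive" features: years of experience, plus a neighborhood code
# that happens to correlate strongly with group membership (the proxy).
experience = rng.normal(10, 3, size=n)
neighborhood = group + rng.normal(0, 0.3, size=n)

# Historical labels carry past bias: group 1 was hired less often at equal skill.
hired = (experience + 2 * (group == 0) + rng.normal(0, 1, size=n) > 10).astype(int)

# Train WITHOUT the sensitive column - only experience and the proxy.
X = np.column_stack([experience, neighborhood])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

# The bias survives: selection rates still differ sharply by group.
print(f"selection rate, group 0: {pred[group == 0].mean():.2f}")
print(f"selection rate, group 1: {pred[group == 1].mean():.2f}")
```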
What are some areas within the field of AI ethics where you believe further research is needed?
As an analyst, I've seen great advances in augmented writing, which combines natural language processing with sentiment analysis. The most advanced augmented writing tools analyze millions of data points to predict how likely people from certain groups are to respond to calls to action - customer outreach, job applications, and more.
These tools can make such predictions with high accuracy, which helps users learn which words are most likely to motivate or demotivate people. What they can't do is explain why those groups of people are turned off by such language. For example, women are less likely to apply for roles that use words like "Ninja" or "Rock star" in job descriptions. But it's not currently clear why those words are such a turnoff to women.
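As a rough illustration of the "which words" half of what these tools do, the toy sketch below scans a job posting against a small, made-up word list. Real augmented writing tools learn such correlations from millions of outcomes rather than a lookup table, and neither version can explain why the words deter applicants.

```python
# Toy sketch of flagging language in a job posting. The word list is
# illustrative only, not a validated lexicon.
import re

FLAGGED_TERMS = {"ninja", "rock star", "guru", "dominant", "aggressive"}

def flag_terms(job_description: str) -> list[str]:
    """Return flagged terms that appear in the posting."""
    text = job_description.lower()
    return sorted(
        term for term in FLAGGED_TERMS
        if re.search(rf"\b{re.escape(term)}\b", text)
    )

posting = "We need a JavaScript Ninja and all-round rock star to join our team."
print(flag_terms(posting))   # ['ninja', 'rock star']
# A tool like this can report WHICH words correlate with lower application
# rates from women; it cannot explain WHY those words are a turnoff.
```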
Languages have so much implicit meaning, which varies across cultures and geographies. AI isn't great at understanding nuance, which is important for bias reduction.
What are some of the common pitfalls that practitioners fall into when designing and developing AI systems? How can these pitfalls be avoided?
I see two common mistakes when developing AI. The first is failing to prioritize measures of fairness in the product specification phase. Doing this well means writing out which measures of fairness you'll use to train your model and how you plan to prioritize them. Since the concept of "fairness" is subjective, your documentation should spell out what you mean by it.
The second mistake is building new models to retroactively "explain" how black box algorithms made decisions. This approach creates more work for dev teams and risks reinforcing the same bias that the first models produced. A more effective approach is for teams to build interpretability directly into their first models.
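A minimal sketch of what "interpretable by design" can look like, using hypothetical features and one illustrative fairness measure (demographic parity, which is only one of several subjective choices a product spec might document):

```python
# Minimal sketch of an interpretable-by-design model: a small linear model
# whose coefficients can be read directly, plus the fairness measure the
# spec committed to. Feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2_000
X = np.column_stack([
    rng.normal(10, 3, n),    # years_experience
    rng.normal(0, 1, n),     # interview_score
])
feature_names = ["years_experience", "interview_score"]
group = rng.integers(0, 2, n)                    # sensitive attribute, held out
y = ((X[:, 0] + 3 * X[:, 1]) > 10).astype(int)

model = LogisticRegression().fit(X, y)

# Interpretability: every prediction is a weighted sum we can inspect directly,
# with no second "explainer" model needed.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:>18}: {coef:+.3f}")

# The fairness measure named in the spec: demographic parity difference.
pred = model.predict(X)
parity_gap = abs(pred[group == 0].mean() - pred[group == 1].mean())
print(f"demographic parity gap: {parity_gap:.3f}")
```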
Are there any areas in which you think that AI has particularly great potential to do good in the world?
Given how painful the COVID-19 pandemic is, I hope AI researchers will apply these techniques to predict and prepare for future pandemics. Likewise, I hope they'll keep building AI like chatbots to help assess patients' symptoms from home. This could play a key role in helping hospitals avoid overcapacity.