CHANGEMAKERS: Rana el Kaliouby on Emotionally Intelligent AI and Her New Book, Girl Decoded
Rana el Kaliouby, Ph.D., is the co-founder and CEO of Affectiva, an Emotion AI startup spun off from the MIT Media Lab. A pioneer in the Emotion AI field, Rana has addressed audiences at TED and Aspen Ideas and has been honored by Forbes (America’s Top 50 Women in Tech), Fortune (40 under 40), and others. She sat down with Andrew from All Tech is Human to discuss her new book, Girl Decoded: A Scientist's Quest to Reclaim Our Humanity by Bringing Emotional Intelligence to Technology, which recounts her personal journey and mission to humanize our interactions with technology.
Andrew: What new insights did you learn about yourself through the process of writing Girl Decoded?
Rana: When I first set out to write Girl Decoded, my goal was to evangelize human-centric technology and advance the AI industry. But as I reflected on my path to humanize technology, I realized that my professional mission was so closely tied to my own personal journey, and that I had a more universal story to share about perseverance and embracing your emotions.
Even though I’ve spent my career teaching computers to read emotions, I wasn’t always in tune with my own. For the longest time, I didn’t embrace them – I thought it was easier to push them out. But writing Girl Decoded forced me to reflect on my emotions at different points throughout my professional and personal journey. Over time, I’ve realized the power of being open about your emotions – first to yourself, and then to others.
For the longest time, I had this Debbie Downer voice in my head telling me that I’d fail. As a Muslim-American single mom who had always pictured herself as an academic, it seemed impossible that I could also be a successful entrepreneur and CEO in the male-dominated tech industry. But the turning point came when I realized that the voice in my head – which was my biggest obstacle – could also become my greatest advocate, if I reframed the message.
By embracing my emotions, I was able to empower myself. And I’ve found that when I’m able to share my emotions with others, people will reciprocate. That’s when we’re able to truly connect with one another.
Your company, Affectiva, had its genesis in the MIT Media Lab. How did the unique environment of the lab shape you and the company during your time there?
The ethos of the MIT Media Lab is that it’s okay to take risks – in fact, it’s encouraged. Everyone there is building something that’s never been built before, so there’s a lot of acceptance around the fact that what you’re doing may or may not work the first time. If something wasn’t working, we got creative, moved on and iterated without fearing failure or dwelling on it. Rather than spending time debating, the focus was on action – we were encouraged to just go build.
It was at the MIT Media Lab that Affectiva’s co-founder, Rosalind Picard, and I started seeing commercial interest in our technology. That shaped our research in artificial emotional intelligence (Emotion AI), and led us to spin the company out of the Media Lab so that we could bring the technology to market at scale. And the attitude and environment at the Media Lab has continued to shape our approach at Affectiva in the years since.
You’ve shared a story about the early days of Affectiva, when you turned down a lucrative investment offer from a government agency that wanted to use your technology for surveillance. What thought process led to that decision? What gave you the courage to take that ethical stand?
From day one, Affectiva has had strong core values around the ethical development and deployment of AI. Early on, we outlined the areas where our technology could improve people’s lives – in healthcare, automotive safety, autism support and more – and pursued those use cases, while avoiding applications that could aggravate inequality or violate people’s privacy.
Then, years ago, we received a preliminary investment offer from a government agency that wanted to use our technology for surveillance. We desperately needed the money – in fact, I was concerned about being able to make payroll.
At the end of the day, I simply couldn’t imagine taking the investment. I played it out in my head, thinking about what it would look like if we took the offer, pivoted our focus, and went in that direction. But I couldn’t ignore the fact that this application of our technology would be in direct violation of our core values, so we walked away from the investment. I feel strongly that these are the choices tech leaders need to make, and stand by, to ensure AI benefits people.
Your willingness to forgo a profit opportunity for the sake of ethics seems like the exception rather than the rule in the tech industry, particularly amongst startups facing pressure to scale quickly. What needs to happen to change this?
One of the reasons I wrote the book was to reach a general audience. I think consumers have a strong voice and role to play in shaping how technologies like AI are deployed. And, they’ll only embrace technology that’s ethical and mindful of their privacy.
So, tech companies – from startups to larger tech giants – need to consider and prioritize ethics. It won’t make sense for tech companies to scale up in a way that compromises consumer rights or data privacy. Because at the end of the day, tech that’s unethical won’t be widely adopted or embraced.
You make a similar point in the book, that consumers can use their buying power to influence how companies deploy AI. What do you think it will take for consumers to realize and exercise this power in a widespread way?
We’ve seen industries transformed by empowered consumers. For example, there’s been a movement toward sustainable, fair trade and organic food, which is increasingly important to a lot of consumers. I think we’ll see the same reformation in the tech and AI industry as consumers become more familiar with emerging technologies and realize their buying power. Perhaps there will come a time when, in the same way that food products are certified as organic, or non-GMO, AI products will have an ethical seal of approval.
But we can’t rely on consumers’ voices alone. We need multi-stakeholder consortiums to help shape guidelines for the ethical development and deployment of AI, and to ensure different voices are represented. There are already some organizations doing this work, like the Partnership on AI, which seeks to address the ethical, moral and social implications of AI. The Partnership on AI comprises tech giants as well as startups like Affectiva, and it also includes academics and non-profits. In fact, the ACLU and Amnesty International are key stakeholders in the Partnership on AI. But we need more organizations like this to build and sustain a widespread movement toward ethical AI that benefits companies and consumers alike.
You can connect with Rana on Twitter and can order a copy of Girl Decoded here!